The gist of utility-based AI is that you decide which action to select based on the “fitness” of each action, which is calculated as a function of an arbitrary number of inputs (considerations). There is an important distinction between the value of an input and its utility for a particular action. The value is objective (e.g. getting an ice-cream is not that important in life); the utility is subjective and context-specific (if I’m really, really hungry, the ice-cream’s perceived value skyrockets). A utility score is calculated from a value using response curves (linear, quadratic, logistic, etc.) of the form y = f(x), where x, y ∈ [0, 1].
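To make the value/utility distinction concrete, here’s a minimal sketch (the names and the multiplicative combination are my own illustration, not a particular framework’s API): each consideration maps a raw value through a response curve into [0, 1], and an action’s score combines its considerations’ utilities.

```python
def clamp01(x):
    # Keep inputs and outputs in the [0, 1] range the curves expect.
    return max(0.0, min(1.0, x))

def quadratic_curve(x, exponent=2.0):
    # Example response curve: y = x^k, with x, y in [0, 1].
    return clamp01(x) ** exponent

def score_action(considerations):
    # considerations: list of (raw_value, curve_fn) pairs.
    # Multiplying utilities is one common way to combine them:
    # any single near-zero consideration vetoes the action.
    score = 1.0
    for value, curve in considerations:
        score *= curve(value)
    return score

# Hunger makes the "eat ice-cream" action's utility skyrocket,
# even though the objective value of ice-cream is modest.
hunger = 0.9
proximity_to_ice_cream = 0.8
eat_score = score_action([
    (hunger, lambda x: quadratic_curve(x, 2.0)),  # 0.9^2 = 0.81
    (proximity_to_ice_cream, clamp01),            # 0.8
])  # 0.81 * 0.8 = 0.648
```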
I’ve already implemented my own version (heavily borrowed, slightly adjusted, developed with the help of the wonderful Desmos) of the curves so that they are as simple to use as possible. So far, there are 3 curves:
(below images link to the corresponding Desmos graphs when clicked)
I’m also planning to add a sine and a normal distribution curve, and that should be it for starters. In the above graphs, m and k modify the shape of the curve, while b and c translate it along the x/y axes.
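I won’t reproduce the exact Desmos formulas here, but a common parameterization matching the description (m/k for shape, b/c for translation) looks roughly like this — treat the exact forms as an assumption, not the post’s actual equations:

```python
import math

# Hypothetical curve forms: m and k shape the curve, c shifts it along x,
# b shifts it along y, and the result is clamped back into [0, 1].

def linear(x, m=1.0, b=0.0, c=0.0):
    return max(0.0, min(1.0, m * (x - c) + b))

def quadratic(x, m=1.0, k=2.0, b=0.0, c=0.0):
    return max(0.0, min(1.0, m * (x - c) ** k + b))

def logistic(x, m=10.0, k=1.0, b=0.0, c=0.5):
    # S-shaped curve: m controls steepness, c the midpoint.
    return max(0.0, min(1.0, k / (1.0 + math.exp(-m * (x - c))) + b))
```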
What doesn’t make sense to me in the utility AI implementations I’ve seen so far is the combinatorial explosion that they seem to encourage: e.g. for an Attack decision, scores would be calculated for Attack(Target0), Attack(Target1), etc., for N targets. But what happens when there are lots of targets? In one presentation, the number of targets is limited to 12, which is a practical but poor approach. And what happens if actions are parameterizable in many more ways? I don’t want to have to resort to multithreaded AI and once-per-X-seconds evaluations just because scoring takes forever.
Problem scenario: a spellcaster NPC armed with a dagger, with a sword and an axe in the inventory, able to cast Frost Ray, Fire Bolt, Acid Touch and Lightning, is surrounded by 10 enemies. A melee attack can be a Power, Normal or Quick Attack. This creates (10 × 3 × 3) + (10 × 4) = 130 combinations of attack actions alone! And that’s without even considering retreat or other actions. This is far too many decisions to evaluate for a single unit.
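Spelling out the arithmetic of that enumeration:

```python
num_enemies = 10
melee_weapons = 3   # dagger (wielded), sword and axe (inventory)
melee_modes = 3     # Power, Normal, Quick Attack
spells = 4          # Frost Ray, Fire Bolt, Acid Touch, Lightning

melee_actions = num_enemies * melee_weapons * melee_modes  # 90
spell_actions = num_enemies * spells                       # 40
total_actions = melee_actions + spell_actions              # 130
```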
What some people advocate is using utility-based AI as part of a behavior tree, in the form of a utility Selector node. That node runs the utility calculations to select the best-scoring child node. Of course, that requires a behavior tree implementation, which is something I’m planning for later. It also complicates things, as we now have two AI systems to worry about.
A simpler alternative that I’ve been considering is to start with a tiered form of utility calculations, akin to a behavior tree composed only of Selector nodes.
Tier 1: Select between high-level behaviors (e.g. Attack, Retreat, SelfBuff, Idle)
Tier 2: Select among commands that satisfy that behavior (e.g. MeleeAttack, RangedAttack, HeadButt, SuperKick, TornadoStrike)
Tier 3: Execute a mini-FSM that sets up the command’s parameters and executes the command. A melee attack FSM would, for example, select a target; if the target is at melee range, it schedules the attack (and the FSM completes), while if the target is not at melee range, it schedules a move towards the target (and the FSM, again, completes). Such small FSMs are not a big fuss to implement, and they allow scriptability and fast execution. The FSM should be able to reuse whatever intermediate data has been computed by the previous tiers.
A couple of notes: sometimes tier 1 could lead directly to tier 3 (e.g. there are no particular “retreat” skills, besides, I guess, an Expeditious Retreat spell 🙂 ). Additionally, while I mentioned an FSM, there’s no reason why it could not be a small behavior tree, or even a class that runs a few if-then-else statements. The goal of tier 3 is the same: set the parameters and schedule a command for execution.
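A rough sketch of the tiered flow, with made-up scores and a trivial function standing in for the tier-3 mini-FSM (a real implementation would derive the scores from response curves over game state, not hard-coded numbers):

```python
# Hypothetical sketch of the three tiers. All names and scores are
# placeholders for illustration.

def select_best(options, score_fn):
    # Generic utility Selector: score every option, pick the highest scorer.
    return max(options, key=score_fn)

# Tier 1: high-level behaviors with toy scores.
behavior_scores = {"Attack": 0.8, "Retreat": 0.3, "Idle": 0.1}

# Tier 2: commands satisfying each behavior, with toy scores.
behavior_commands = {
    "Attack": ["MeleeAttack", "RangedAttack"],
    "Retreat": ["Flee"],
    "Idle": ["Wait"],
}
command_scores = {"MeleeAttack": 0.9, "RangedAttack": 0.6, "Flee": 0.5, "Wait": 0.1}

def tier3_execute(command, context):
    # Tier 3 stand-in for the mini-FSM: pick the nearest enemy, then either
    # schedule the attack (if in range) or schedule a move towards it.
    target = min(context["enemies"], key=lambda e: e["distance"])
    if target["distance"] <= context["melee_range"]:
        return ("schedule", command, target["name"])
    return ("move_to", target["name"])

def decide(context):
    behavior = select_best(behavior_scores, behavior_scores.get)
    command = select_best(behavior_commands[behavior], command_scores.get)
    return tier3_execute(command, context)

context = {
    "enemies": [{"name": "orc", "distance": 5.0}, {"name": "goblin", "distance": 1.0}],
    "melee_range": 1.5,
}
action = decide(context)  # ("schedule", "MeleeAttack", "goblin")
```

Note that only one behavior and one command are ever scored per tier, so the 130-combination explosion from the earlier scenario collapses into a handful of evaluations plus one cheap parameter-filling step.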
So that’s it for now, all talk and no action. Next time hopefully with an implementation of the above tiered system.