Roguelike Intelligence - Stateless AIs

Since I'm an AI engineer, and since I'm unemployed right now, and since I'm interested in roguelike games, I thought I'd write a series of articles on game AI for roguelikes as a contribution to the community.

Some of these techniques (Stateless AIs and State Machines in particular) are properly "pseudointelligence" rather than "artificial intelligence", because they don't involve any actual learning. But "AI" is how game designers refer to the decision-making code for their artificial antagonists, regardless of its actual status, so I'm going to use it.

The ground I plan to cover includes roughly the following articles; I may add an article or two, or change the order, depending on what people are interested in reading.

  1. Stateless AI
  2. State machine AI
  3. "Evolving" stateless AIs or state machine AIs
  4. Modeling the player
  5. Minimaxing in application to roguelike domains
  6. Neural networks and training them with backprop
  7. "Evolving" neural networks
  8. Recurrent neural networks (NNs with memory)

Roguelike Game AI, part 1: Stateless AIs

The classic "zombie" AI is easy to code: It goes like this:

       ZOMBIE AI
          if can-move-toward-player
            move-toward-player 
          else if can-attack-player
            attack-player
          else stand-still

And off your zombie goes, closing on the player as fast as it can and beating the player as hard as possible once it gets there. It's the simplest possible monster AI, the first monster we all implement when we're testing monster movement code for the first time.
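
For concreteness, here is a minimal Python sketch of the zombie AI. The primitive functions are hypothetical stubs standing in for whatever movement and combat code your game already has; nothing here assumes any particular library.

    # Hypothetical primitives -- stand-ins for your game's own movement and
    # combat code.  They are assumptions for this sketch, not a real API.
    def can_move_toward_player(monster, player): ...
    def move_toward_player(monster, player): ...
    def can_attack_player(monster, player): ...
    def attack_player(monster, player): ...

    def zombie_act(monster, player):
        """One turn of the zombie AI: close with the player, and attack
        only when movement is impossible (i.e. the player blocks the way)."""
        if can_move_toward_player(monster, player):
            move_toward_player(monster, player)
        elif can_attack_player(monster, player):
            attack_player(monster, player)
        # else: stand still -- do nothing this turn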

Suppose you want a different kind of brain-dead monster, called a "Golem", which attacks from range whenever it can. Consider this:

        GOLEM AI
          If can-attack-player
            attack-player
          else if can-move-toward-player
            move-toward-player
          else stand-still

This is the same as the zombie if it's carrying a sword, but if you give it a ranged weapon its behavior is dramatically different. If it has a ranged attack, it will stop dead as soon as it catches sight of the player character and stand there shooting until the player kills it or gets out of range or it runs out of ammo. If it's not dead and still has ammo, it will run after the player until it's in range again and start shooting some more. If it has both a ranged and hand-to-hand attack, it shoots till it runs out of ammo, then switches to hand-to-hand and behaves like a zombie.

This seems like a smarter, more versatile monster, and it's no harder AI work than the simple, stupid zombie. It can really mess up the way the dungeon works though, if other monsters can't get past it in corridors.

These are (very) simple examples of a stateless AI. It's called stateless because every time it's the monster's turn to act, it starts in the same 'state' -- it just works down the if statements until it finds an action it can do. Angband is a popular game whose monsters use (nearly) stateless AI.

Not all stateless AIs are as stupid as those given above. A simple modification of the zombie AI gives you an AI for slightly less stupid monsters.

        GHOUL AI
          If can-move-toward-player
             AND (random < move-probability
                  OR can't-attack-player)
             move-toward-player
          else if can-attack-player
             attack-player
          else stand-still

Now, a ghoul behaves a lot like a zombie or a golem, except that its move/attack decision has a random element the player can't predict. It may pause on the way to the player to fire a ranged weapon, but it won't just stand there firing and being a roadblock to other monsters, and it won't just run toward the player, allowing him to ignore it till it gets there. Its move probability can be set in the monster configuration, so you can make different types of monsters using the ghoul AI more or less likely to move, which distinguishes your monsters from each other a little bit more.
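
In Python, the ghoul's random element might look like the sketch below, reusing the assumed primitives from the zombie sketch; move_probability is assumed to be a per-monster configuration field between 0 and 1.

    import random

    def ghoul_act(monster, player):
        """Ghoul AI: zombie behaviour, plus a configurable chance of moving
        even when an attack is available.  monster.move_probability is an
        assumed per-monster configuration value."""
        movable = can_move_toward_player(monster, player)
        attackable = can_attack_player(monster, player)
        if movable and (random.random() < monster.move_probability or not attackable):
            move_toward_player(monster, player)
        elif attackable:
            attack_player(monster, player)
        # else: stand still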

Now, one more complication and your monsters will have just about the same intelligence as Angband monsters.

        ANIMAL AI
          If damage-taken > morale
             if can-run-away-from-player
                run-away-from-player
             else if can-attack-player
                attack-player
          else if can-move-toward-player
             AND (random < move-probability
                  OR can't-attack-player)
             move-toward-player
          else if can-attack-player
             attack-player
          else stand-still

The complication is the damage check in the first line. Now you can assign your monsters different morale values as well as different move probabilities: the timid ones, with morale less than one percent of their hitpoints, will run away when they take even a tiny wound, while the maniacal ones, with morale equal to or higher than their hitpoints, will fight to the death. Monsters that run away will come back once they heal a little, fight when cornered, and generally behave about like you'd expect a bunch of really stupid animals to behave. The damage check is sort of a state, since damage carries over from one round to the next, but this is still a stateless AI, because we make the decision afresh every round. The randomness is also sort of a state, but the decision made is effectively purely random, so it doesn't count.
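
A Python rendering of the animal AI might look like this; morale and damage_taken are assumed per-monster fields, and can_run_away_from_player / run_away_from_player are further assumed primitives in the same vein as the others.

    import random

    def animal_act(monster, player):
        """Animal AI: the ghoul AI wrapped in a morale check.  A monster
        whose accumulated damage exceeds its morale tries to flee, and
        fights only when cornered."""
        if monster.damage_taken > monster.morale:
            if can_run_away_from_player(monster, player):
                run_away_from_player(monster, player)
            elif can_attack_player(monster, player):
                attack_player(monster, player)   # cornered: fight anyway
        elif can_move_toward_player(monster, player) and (
                random.random() < monster.move_probability
                or not can_attack_player(monster, player)):
            move_toward_player(monster, player)
        elif can_attack_player(monster, player):
            attack_player(monster, player)
        # else: stand still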

Now let's look at code smarter than Angband's, introducing the idea of a monster with a ranged attack and a preferred range. That will go like this:

        ARCHER AI
          if can-move-away-from-player
             AND (damage-taken > morale
                  OR too-close-to-player)
             move-away-from-player
          else if can-move-toward-player
             AND damage-taken < morale
             AND too-far-from-player
             move-toward-player
          else if can-attack-player
             attack-player
          else stand-still


This creature will use a "gun-n-run" strategy, trying to use a ranged attack at range while staying out of the player's reach. If it's faster than the player character, or if a bunch of them are encountered together, it can be very dangerous to a character with no ranged attacks, even if it's ridiculously weak.

If it has no ranged attacks, it becomes a silly "spectator" in the dungeon, walking around and keeping an eye on the player character but not doing much of anything else. Give it a morale of 1, lots of movement and hitpoints, and name it "The Dungeon Survivor Camera Crew". Your players will report intense satisfaction from finally maneuvering it into a dead end and killing it.
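
Here is a rough Python rendering of the archer AI, again on top of the assumed primitives; too_close_to_player and too_far_from_player are assumed checks against the monster's preferred range, and can_move_away_from_player / move_away_from_player are assumed in the same way as the other movement primitives.

    def archer_act(monster, player):
        """Archer AI: back off when hurt or too close, close the distance
        when too far, and otherwise shoot from the preferred range."""
        if can_move_away_from_player(monster, player) and (
                monster.damage_taken > monster.morale
                or too_close_to_player(monster, player)):
            move_away_from_player(monster, player)
        elif (can_move_toward_player(monster, player)
              and monster.damage_taken < monster.morale
              and too_far_from_player(monster, player)):
            move_toward_player(monster, player)
        elif can_attack_player(monster, player):
            attack_player(monster, player)
        # else: stand still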

Now, you can take this a lot further in terms of dreaming up stateless AIs that pursue different strategies in the dungeon, including AIs that stay in packs, trying not to get too widely separated from others of their kind. But before you do all of that, there's another opportunity to make your monsters smarter using these simple, stateless AIs.

You can increase the basic pseudointelligence of stateless monsters, and increase the differentiation between monsters, by making the various primitives involved in the stateless AI's more sophisticated and different on a per-monster basis.

For example, run-away-from-player can be stupid (moving to the adjacent square that maximizes distance to the player) or smarter (heading for a room exit - by preference, one that the monster is closer to than the player) or smart (heading for an exit, but avoiding dead ends).

Move-toward-player can be stupid (stepping to the adjacent square closest to the player) or smarter (following the player's scent track if the player's not in sight) or smart (doing pathfinding to find a way around obstacles when the player's not in sight) or exhibit both brains and teamwork (pathfinding if player's not in sight, trading places with a more badly-damaged monster of the same type if one is closer to the player).
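
As one illustration, a "smarter" move-toward-player might follow the player's scent when the player is out of sight. The sketch below assumes a hypothetical level object that tracks line of sight, passability, and a per-tile scent value that decays as the player's trail gets older; none of these names come from any particular engine.

    def move_toward_player_smarter(monster, player, level):
        """Step straight toward the player if visible; otherwise follow the
        freshest scent on an adjacent passable tile.  The level methods used
        here (can_see, adjacent_tiles, is_passable, scent_at) and the
        step_toward helper are assumptions for this sketch."""
        if level.can_see(monster.x, monster.y, player.x, player.y):
            step_toward(monster, player.x, player.y)   # assumed "stupid" step
            return
        best, best_scent = None, 0
        for nx, ny in level.adjacent_tiles(monster.x, monster.y):
            if level.is_passable(nx, ny) and level.scent_at(nx, ny) > best_scent:
                best, best_scent = (nx, ny), level.scent_at(nx, ny)
        if best is not None:
            monster.move_to(*best)
        # else: no trail to follow -- stand still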

Attack-player can be stupid (using whatever the monster's only attack is) or smarter (picking among several attack forms based on what's available, such as switching to melee weapons when out of ammo) or smart (picking among different attacks based on what's available and known about the player character).

"stand-still" routines can do some monster-specific thing like throwing a healing spell at the most badly wounded monster in the room, which is another kind of monster teamwork, or casting a teleport spell, which will frequently get the monster out of whatever "stymied" situation it's in, or telling the player a rumor or insulting him.

If you have a few dozen different stateless AIs, and then a few different versions of most of the primitives they use, combinatorics is on your side. You will be able to make a menagerie of "simple" monsters that exhibit hundreds of different kinds of behavior.
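
One simple way to cash in on that combinatorics is to let each monster type name which flavour of each primitive it uses. The sketch below is one possible arrangement; the species names and variant functions are made up for illustration, and the variant bodies are left as stubs standing in for your real movement and combat code.

    # Variant implementations of two primitives (bodies left as stubs).
    def move_toward_player_stupid(monster, player): ...
    def move_toward_player_pathfinding(monster, player): ...
    def attack_player_only_attack(monster, player): ...
    def attack_player_pick_best(monster, player): ...

    # Per-species choice of primitive implementations.
    PRIMITIVES_BY_SPECIES = {
        "zombie": {
            "move_toward_player": move_toward_player_stupid,
            "attack_player": attack_player_only_attack,
        },
        "orc_archer": {
            "move_toward_player": move_toward_player_pathfinding,
            "attack_player": attack_player_pick_best,
        },
    }

    def primitive(monster, name):
        """Fetch the flavour of a primitive appropriate to this monster."""
        return PRIMITIVES_BY_SPECIES[monster.species][name]

    # Any stateless AI routine can then call, for example:
    #     primitive(monster, "attack_player")(monster, player)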

That said, this type of pre-coded, stateless AI is still very limited; it is in fact the very simplest form of pseudointelligence for implementing game adversaries.

Some of you will by now have noticed something. I first introduced the "zombie AI" which defaults to movement if possible, then I introduced the "golem AI" which defaults to attack if possible. Then I introduced the "ghoul AI" which makes a random choice between movement and attack when both choices are valid.

What I didn't point out in the first article was that the "ghoul AI" can be used to imitate both the "zombie AI" and the "golem AI" -- all you need to do is set the move-probability parameter to one or zero respectively.

I went on to introduce the "animal AI" -- but that can be used to imitate the "ghoul AI." All you need to do is set the morale to some value greater than the creature's hitpoints and it will never retreat.

Then I introduced a run-n-gun monster with the "archer AI". But here I wasn't strictly generalizing; the "animal AI" can't be imitated strictly within the code of the "archer AI".

This may not seem important, but bear with me for a little bit; here's an AI pseudocode that can be used to implement both the "archer AI" and the "animal AI".

         TYPICAL AI
            If damage-taken > morale
               if can-run-away-from-player
                  run-away-from-player
               else if can-attack-player
                  attack-player
            else if too-far-from-player
               AND can-attack-player
               AND can-move-toward-player
                  if random < charge-probability
                     move-toward-player
                  else attack-player
            else if too-close-to-player
               AND can-attack-player
               AND can-move-away-from-player
                  if random < retreat-probability
                     move-away-from-player
                  else attack-player
            else if can-attack-player
               attack-player
            else if too-far-from-player
               AND can-move-toward-player
                  move-toward-player
            else if too-close-to-player
               AND can-move-away-from-player
                  move-away-from-player
            else stand-still


Now, if we want an "archer AI" we set retreat-probability to 1 and charge-probability to 1.

If we want an "animal AI" we give it a too-close-to-player function that is never true, a too-far-from-player function that is always true, and copy the "animal AI's" move-probability parameter into the "typical AI's" charge-probability parameter.

The separation of charge-probability and retreat-probability means that the creature may pause while moving toward the player to fire a ranged weapon, or pause while moving away from the player to fire a ranged weapon. But since these are different tactical situations it seemed reasonable that the probabilities should be different.

One other thing I did in the above AI was to separate the functions run-away-from-player and move-away-from-player. The first is for panicked situations when the monster doesn't want to be anywhere close to the player; it could be implemented in a smart, spell-using monster as casting a long-range teleport spell. The second is tactical; it means the monster wishes to be further away from the player for tactical reasons. The smart, spell-casting monster could implement this as a classic "blink" - a limited teleport that moves it, but only ten to fifteen squares away. Less gifted creatures will implement this as stepping to an adjacent square further from the player.

What I wanted to demonstrate by folding both "animal" and "archer" into the same AI was this: if you are using stateless AIs, you can have a universal AI routine that's shared between all monsters. The distinction between monster AIs is then reduced to a simple array of parameters and methods.
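
To make that concrete, here's a rough Python sketch of the "typical AI" as one shared routine driven by a per-monster parameter record. The field names and example values are illustrative assumptions, and the primitives are the same assumed stubs used in the earlier sketches.

    import random
    from dataclasses import dataclass

    @dataclass
    class AIParams:
        morale: int                 # flee once damage_taken exceeds this
        charge_probability: float   # chance of closing rather than firing when too far
        retreat_probability: float  # chance of backing off rather than firing when too close

    # Illustrative archer: always closes when too far, always backs off when too close.
    ARCHER_PARAMS = AIParams(morale=10, charge_probability=1.0, retreat_probability=1.0)
    # For an animal, too_far_from_player would always be true, too_close_to_player
    # always false, and charge_probability would hold the old move_probability.

    def typical_act(monster, player, p):
        """One turn of the shared 'typical AI', following the pseudocode above."""
        if monster.damage_taken > p.morale:
            if can_run_away_from_player(monster, player):
                run_away_from_player(monster, player)
            elif can_attack_player(monster, player):
                attack_player(monster, player)
        elif (too_far_from_player(monster, player)
              and can_attack_player(monster, player)
              and can_move_toward_player(monster, player)):
            if random.random() < p.charge_probability:
                move_toward_player(monster, player)
            else:
                attack_player(monster, player)
        elif (too_close_to_player(monster, player)
              and can_attack_player(monster, player)
              and can_move_away_from_player(monster, player)):
            if random.random() < p.retreat_probability:
                move_away_from_player(monster, player)
            else:
                attack_player(monster, player)
        elif can_attack_player(monster, player):
            attack_player(monster, player)
        elif (too_far_from_player(monster, player)
              and can_move_toward_player(monster, player)):
            move_toward_player(monster, player)
        elif (too_close_to_player(monster, player)
              and can_move_away_from_player(monster, player)):
            move_away_from_player(monster, player)
        # else: stand still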

Even if no worthwhile combination of the two is possible, you can still combine any two stateless AIs into the same stateless AI code just by adding a parameter that says which branch of the decision tree to follow, and putting each of the original stateless AIs on its own branch.

Instead of the more sensible code above, for example, I could have just written:

      TYPICAL AI:
         if am-I-an-archer?
            {...archer AI ....}
         else
            {...animal AI ....}


But I do want to make a point: this kind of combination should not be done automatically or thoughtlessly. If I had done it without thinking, for example, then I would have redundant branches for "zombie" and "golem" and "ghoul" in my code (or redundant objects all inheriting from "monster", which is exactly the same kind of mental spaghetti) when all of those behaviors can easily be modeled by the "animal" code. Also, I wouldn't have the possibility of firing-while-advancing for archers, and without that I wouldn't have thought of the case of firing-while-retreating.

If you use OO a lot, or use it without seriously considering what is redundant or what would be better combined, it's probably worth taking a good long look at your code to see how much redundant stuff you have built in. This happens a lot with behavioral modeling code; frequently you'll create something more complex, adding capabilities incrementally, without noticing that it's made a bunch of other things redundant.

Another risk is that you might create something new and separate, and miss out on benefits or behaviors you could have modeled by instead making the routine you already had more general. This is what happened when I implemented the "golem" AI, and later the "archer" AI; in both cases there was a more general AI that included both kinds of behavior where I was drawing a line between them, and in both cases the more general AI also allowed new behaviors that neither of the previous two allowed.

The take-home lesson is that, as a design principle, it's always better to have one stateless AI that's more general than to have many different stateless AIs. OO programming can be handy, but it allows you to very easily miss benefits, redundancies, and synergies if you're not being particularly alert for them. It's equally valid to say that it helps you by keeping such issues out of your hair, but the fact is that your design decisions in response to those issues are things your game can benefit from.

Now I think I've said enough about stateless AIs, and it's time to move on to state-machine AIs.


Reference section

Here is a list of the primitives I've used in building stateless AIs in these first two articles.

  • can-move-toward-player
  • move-toward-player
  • can-move-away-from-player
  • move-away-from-player
  • can-run-away-from-player
  • run-away-from-player
  • can-attack-player
  • attack-player
  • too-far-from-player
  • too-close-to-player
  • stand-still

Here are some other primitives you may find handy in building a good stateless AI:

  • player-is-friend
  • player-has-food
  • player-too-powerful
  • player-is-same-alignment
  • player-is-opposed-alignment
  • enough-buddies-in-pack
  • too-far-from-pack-center
  • can-move-toward-pack-center
  • move-toward-pack-center
  • too-close-to-pack-center
  • can-move-away-from-pack-center
  • move-away-from-pack-center

Stateless AIs aren't really general enough to model dynamic monster/monster relationships, but if you want to try it you'll need a lot more primitives than just these.

Ray Dillinger