Artificial Intelligence in Games:
Food for Thought Series

These notes are rough. They are hard to understand, and may require some effort and background. They provide little detail or explanation. At some points, I copied sections of some of my replies on the comp.ai.games USENET forum. Comments are very welcome.

Several of these ideas have been implemented in the artificial intelligence for tactical games project, which includes the source code, snapshots, and movies. I have translated a part of the PowerPoint slides.

Copyright (c)2001-2003 Aleks Jakulin (jakulin-@-gmail.com)

Last changed on the 10th of December, 2003.

Contents

1. Intelligence Amplification
2. Psychological Transparency
3. Local Control Optimization
3.1. Multi-Objective Optimization
3.2. Types of Criteria
3.3. Constraints
4. Emergent Tactics
5. Machine Learning and Behavior Cloning
5.1. Glossary
5.2. Instance-Based Methods
5.3. Naive Bayesian Classifier
5.4. Trees and Rules
5.5. Qualitative Reasoning and Equation Discovery
6. Notes on Methodology
6.1. Spatial Tagging
6.2. Tuning
7. Game AI on the Player's Side
8. Force-Based Unit Movement
9. Simulated Worlds
10. Game AI and Academic AI

Intelligence Amplification

Artificial intelligence in games is usually used to create the player's opponents. In contrast, the objective of intelligence amplification is to rescue the player from the boredom of repetition and let him focus on the interesting aspects of the game. The player gives high-level strategic orders, and the computer-controlled units take care of the detail. At the same time, the full detail and dynamics of the game are maintained through computer control of that detail, rather than lost through abstraction. There are still bullets flying, rather than tiles shifting. In addition, intelligence amplification is fully applicable to multi-player games.

Let's illustrate this concept with an example of a tactical game. The player is in command of a number of squads, each composed of several individual soldiers. The player's orders refer to the whole squad, whereas the members of the squad choose and adjust the formation. Each individual member of the squad is intelligent, trying to maximize his efficiency and minimize his exposure.

The player can give his squad two kinds of orders: explicit and implicit. Most games support only explicit orders: move, attack, guard, build, etc. Unlike explicit orders, implicit orders transmit information from the player to the units and assist them in making better autonomous decisions. For example, a player might want to inform his squad that he expects opponents to approach from the east, rather than the west, by drawing an arrow of expected attack. Alternatively, the player might want to draw a circle where he expects an ambush.

Psychological Transparency

We perceive other people as intelligent because we understand their decisions. We can introspectively reason about their motivations and intentions. If we do not understand an aspect of their behavior, we can ask. It is different with animals, but because their emotional responses are similar to human ones, we still perceive them as autonomous, alive, and intelligent. Artificial computer creatures all too often end up as emotionally dull, soulless bitmaps sliding around the screen. For the player to perceive the creatures as intelligent, he has to be given more insight into their actions, intentions, thoughts, and emotions.

Emotions are simple to model. Joy is positive feedback, bursting after successful completion of a hard task and then leveling off. Fear emerges in the face of uncertainty and danger, but in a temporarily safe situation. We can see that most emotions applicable to computer games can be derived as functions of concepts such as success/failure, extent of danger/safety, and expectations. Other emotions, apart from joy and fear, are trust, surprise, disgust, and anticipation. Diversity is important: no two individuals have identical emotional responses, so responses need to be randomized. Emotions are contagious: in a happy atmosphere, everyone becomes happier.
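
As a rough illustration, the feedback-and-decay dynamics above can be sketched in a few lines of Python. All names and constants here are illustrative assumptions, not part of any particular design:

```python
import random

class Emotions:
    """Sketch of continuous emotion variables. Gains and decay rates
    are illustrative assumptions."""

    def __init__(self, rng=None):
        rng = rng or random.Random()
        self.joy = 0.0
        self.fear = 0.0
        # No two individuals respond identically: randomize the gains.
        self.joy_gain = rng.uniform(0.8, 1.2)
        self.fear_gain = rng.uniform(0.8, 1.2)

    def on_success(self, task_difficulty):
        # Joy bursts after successful completion of a hard task...
        self.joy = min(1.0, self.joy + self.joy_gain * task_difficulty)

    def update(self, danger, safe_now, dt=1.0):
        # ...and then levels off (exponential decay toward zero).
        self.joy *= 0.9 ** dt
        if safe_now:
            # Fear accumulates from danger while temporarily safe.
            self.fear = min(1.0, self.fear + self.fear_gain * danger * dt)
        else:
            self.fear *= 0.8 ** dt

def spread_joy(group):
    # Emotions are contagious: everyone drifts toward the group mean.
    mean_joy = sum(e.joy for e in group) / len(group)
    for e in group:
        e.joy += 0.1 * (mean_joy - e.joy)
```

The same scheme extends to the other emotions by adding further variables driven by success/failure, danger/safety, and expectations.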

It is appropriate to visualize emotions with stereotypical animation. People express fear by rapid head motion, low posture, and bulging eyes. Happy individuals are smiling, have a straightened body, move in a slow and graceful way, while groups bunch up. Sad creatures look downwards and move slowly.

Although the intentions, perceptions and motivations of the artificial creatures could be described with language, this approach is far too cumbersome for most games. We are not creating artificial friends; we are just trying to enrich the player's environment. Intentions can be easily visualized with a graphical language. In a tactical game, the map can be tagged with place flags such as "good defensive position", "dangerous passage", etc., in addition to vectors indicating directions. Alternatively, the route taken might be visualized schematically. This way the player understands how his units intend to act, what information their decisions are based on, what could happen, and when it is really worth detailing or correcting an order.

Emotion in current computer games is either non-existent or absolutely superficial (state: running away, state: fighting aggressively, state: waiting for the enemy). But games themselves are intended to provide a very different kind of fun than drama, and they do not require the full spectrum of emotion. This is the framework where my ideas are intended to fit.

I'm trying to encourage a small evolutionary step towards slightly better modeling of emotions like happiness and fear, and to show why this would enhance gameplay in ordinary action and strategy games. Modeling happiness and fear is not hard. Also, I strongly dislike scripted emotions, and prefer emotions on a continuous scale, depicted through parameters in 3D animation.

Finally, psychological transparency is a wider term than emotion. It includes giving the player some insight into the cognition of artificially intelligent entities. For example, if the player doesn't understand why his units are moving into the bushes, he might disagree and declare them stupid. If they somehow explained why they are doing this, he would understand.

Local Control Optimization

Local control optimization (LCO) is a generalization of the well-known concepts of boids, swarms, and flocks. It is not necessarily a linear system based on the addition of forces, but a nonlinear one based on the maximization of utility. It considers only one step ahead, although it can be embedded in planning. LCO represents a declarative approach to programming: we need only specify the requirements for the solution, rather than all the steps to the solution, as with conventional procedural programming.

Utility is the fundamental unit of quality. You cannot add oranges and apples, but once you estimate their content of calories, which is the fundamental reason for eating them, you can add that. In tactical games, the utility is the contribution of a soldier to the odds of winning. This means you have to normalize exposure, life, survival, and effect to a single measure of the odds of winning. LCO attempts to maximize this by choosing the action which contributes the most.

The evaluation of utility is performed for a particular state. This is what needs to be implemented. In most games, actions are discrete. With LCO, actions become continuous. This makes the search for the best action slightly more complicated. I'm suggesting a simple approach based on numeric optimization.

LCO can be implemented either with optimization or by force transformation. The force transformation approach involves adding up all the attraction and repulsion vectors of a given state, resulting in the motion vector. Optimization involves creating a set of random actions, after which each of them is evaluated, and the best one is chosen. The best one can be used as the seed for another set of random actions in order to improve the previous best solution. To prevent jittering, several of the random actions should be extrapolations of the previous action. Of course, better optimization algorithms are applicable, and it is possible to create hard-wired approximations to the solution.
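
The optimization variant described above might be sketched as follows. The two-component action representation, sample counts, and spreads are all assumptions made for illustration:

```python
import random

def lco_step(state, utility, prev_action, n_samples=32, rounds=3):
    """One local-control-optimization step: sample candidate actions,
    keep the best one, and reseed further samples around it."""
    def rand_action():
        return (random.uniform(-1, 1), random.uniform(-1, 1))

    def near(a, spread):
        return (a[0] + random.uniform(-spread, spread),
                a[1] + random.uniform(-spread, spread))

    # To prevent jittering, some candidates extrapolate the previous action.
    candidates = [prev_action, near(prev_action, 0.1)]
    candidates += [rand_action() for _ in range(n_samples)]
    best = max(candidates, key=lambda a: utility(state, a))
    # The best candidate seeds the next set of random actions.
    for _ in range(rounds):
        candidates = [best] + [near(best, 0.2) for _ in range(n_samples)]
        best = max(candidates, key=lambda a: utility(state, a))
    return best
```

Any better optimization algorithm (hill climbing, cross-entropy, etc.) can be substituted for this naive random search without changing the interface.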

Local control optimization is not appropriate for global pathfinding and planning. However, the two methods can be integrated, so that the global pathfinding always provides a general direction, while the LCO guides the units in that direction. The path is essentially a list of waypoints, while a plan is a list of state transitions.

Local control optimization is dependent on the state evaluation function. For example, depending on our relative strength, we might want to either minimize exposure, or maximize effect. To solve this problem, LCO can be combined with finite state machines. Each state has a different evaluation function. In the state of attack, effect carries most weight. In the state of defense, exposure has to be minimized. Further evaluation functions are required for states like seeking an ambush position, retreating, infiltrating, and others.

It is extremely undesirable for an agent to oscillate between two states. This problem is solved by introducing the well-known concept of hysteresis to the state transitions. Hysteresis induces the agent to try to complete a goal before switching to a different goal, unless the new goal is significantly more important. Namely, the results of actions are not immediate, but require a certain period of accumulating the effort, before the results become evident through the improved status. Hysteresis attempts to include the accumulated effort in the current state evaluation, without requiring an explicit plan or goal.
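
A minimal sketch of hysteresis in state selection, assuming the per-state utilities have already been computed; the size of the switching bonus is an illustrative assumption:

```python
def choose_state(current, utilities, bonus=0.2):
    """Stay in the current state unless another state is better by a
    clear margin, so the agent does not oscillate between two states."""
    best = max(utilities, key=utilities.get)
    if utilities[best] > utilities.get(current, float("-inf")) + bonus:
        return best
    return current
```

The bonus plays the role of the accumulated effort: switching only pays off when the new goal is significantly more important.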

Multi-Objective Optimization

I find the primary benefit of Pareto optimization in filtering out those possibilities which are obviously inferior to others. We end up with a smaller set of possibilities on which we can perform more detailed analysis (perhaps including an even greater number of criteria).

With Pareto optimization, I cannot get beyond the notion that it is simply another abstraction of "I don't know what's best". You can reduce the number of solutions which require detailed investigation, but that's not sufficient. You still have to pick one best solution, and Pareto optimality won't help you here. Sometimes it's "good" to sacrifice yourself, for example.
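
For concreteness, the Pareto filtering step might look like this, assuming each option is a tuple of criterion values where higher is better:

```python
def pareto_front(options):
    """Keep only the options not dominated by any other option:
    an option is dominated if another is at least as good on every
    criterion and strictly better on at least one."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b)) and
                any(x > y for x, y in zip(a, b)))
    return [o for o in options if not any(dominates(p, o) for p in options)]
```

The surviving set is what then gets the more detailed (and more expensive) analysis.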

I favor the approach of tracking back to the ultimate goal and aim, thus reducing all criteria to a single unit of measure. In finance this unit is money, in some religions karma, in utilitarianism 'sighs of happiness'. In more concrete situations, it's useful to have intermediate goals: for example, winning a battle; a further intermediate goal for an individual is affecting the odds of winning, but these 'odds' are just a mental tool for evaluating or estimating primitive measures of success. I'm sorry if this sounds too philosophical, but it does work in practice when doing probabilistic grounding of heuristics.

Types of Criteria

The human mind is good at finding influences between concepts, and at operating with huge numbers of these influences. However, the "rational" mind is not good at quantifying the influences. Human language is even more notoriously terrible at quantification. Our intuition performs the quantification, but in mysterious ways, which are often not open to introspection.

The most frequently used tricks we can perform are (from the simplest to the most complex):

- comparing       a > b, a > c
- ranking         a > b > c
- weighting       a = 60%, b = 25%, c = 15%
a,b,c are different criteria.
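
The weighting trick, for instance, reduces several criteria to a single number. A sketch, assuming the criterion values have already been scaled to comparable ranges:

```python
def utility(values, weights):
    """Weighted sum of criteria: maps several criterion values to a
    single number that can be compared and maximized."""
    return sum(weights[k] * values[k] for k in weights)
```

Comparing and ranking then fall out for free: rank options by this single number.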

Furthermore, we assumed that the scale of each criterion value is somehow linearly related to the benefit. This is sometimes not the case, and a whole field, measurement theory, is dedicated to the study of this problem. For example, a particular criterion might have different values, and these values can be considered sub-criteria:

criterion: shirt_color
I like blue more than green. I like black more than white. I like green more
than brown.
In machine learning, we use the term "attribute" instead of "criterion", but the concepts are very similar. Also, the concept of "heuristic" is very similar to "criterion."

Constraints

The "declarative" approach merely describes how the input/output mapping should look, without explicitly implementing the computation that would be required by the "procedural" approach.

However, constraints are usually binary: this is acceptable, that is unacceptable. In that sense, constraints are bi-valued criteria, the values being {0, -infinity}.
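
Seen this way, constraints plug directly into a utility function. A sketch; the constraint callables are illustrative:

```python
def constrained(utility, constraints):
    """Wrap a utility function with binary constraints: a violated
    constraint contributes -infinity, making the action unacceptable."""
    def wrapped(state, action):
        if any(not ok(state, action) for ok in constraints):
            return float("-inf")
        return utility(state, action)
    return wrapped
```

Any maximizer over the wrapped function will then automatically avoid unacceptable actions.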

Emergent Tactics

Emergent phenomena arise as the outcome of low-level rules or concepts, without requiring design. Many hard-to-program rules of thumb and appealing geometries for formations can be easily replaced with a boids-like model. Most formations and tactical dispositions can be derived from a few simple concepts, which generate sophisticated behavior.

Although the cohesion of a group might be considered an aesthetic aim in itself, the actual reason for it is indirect: cohesion enables rapid communication, aggregation of perception, synchronization of response, and scaling of power. Excessive proximity increases the likelihood of friendly fire, reduces the visibility coverage, and encumbers the movement. Formation is merely a rule of thumb for these requirements. Interestingly, the requirements themselves can be modeled, and formations will emerge within the LCO framework described in the section on local control optimization.

The two most important factors for tactics are exposure and effect. Exposure implies how likely it is for the opponent to score a hit, whereas effect implies how likely it is for us to score a hit. Formations emerge when the exposure and effect of the whole squad are considered, and each individual strives to maximize the utility of the whole squad rather than his own. Consequently, patterns of movement such as spreading out after entering a door will emerge automatically.
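
A sketch of how this squad-level evaluation might work, assuming `effect` and `exposure` are per-position estimates in [0, 1]; both functions and the weights are illustrative assumptions:

```python
def squad_utility(positions, effect, exposure, w_effect=1.0, w_exposure=1.0):
    """Utility of the whole squad: total effect minus total exposure."""
    total_effect = sum(effect(p) for p in positions)
    total_exposure = sum(exposure(p) for p in positions)
    return w_effect * total_effect - w_exposure * total_exposure

def best_move(i, positions, candidates, effect, exposure):
    """Soldier i picks the move that maximizes the utility of the
    WHOLE squad, not his own, with the others held fixed."""
    def value(p):
        trial = list(positions)
        trial[i] = p
        return squad_utility(trial, effect, exposure)
    return max(candidates, key=value)
```

With effect and exposure terms that depend on mutual positions (overlapping fields of fire, blocked lines of sight), spreading out and formation-keeping emerge from this same loop.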

It is impossible to maintain full coverage of the whole environment at all moments. To resolve this, we can take advantage of temporal coherence: the situation at a certain point cannot change much in 5 seconds. However, it is more desirable to view an area that has not been seen than an area that has been observed for a while.

Some regions carry more importance than others do. For example, the area around the door deserves more careful observation than the corner area. These elements are tied to the frequency of unit movement and the strategic importance of a location, and can be precomputed. In the framework of intelligence amplification, the player can affect the importance with implicit orders. This importance can be accounted for by using it to amplify both the exposure and the effect. Consequently, the orientation of the units is properly adjusted.
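
Combining staleness with precomputed importance might look like this; the region names and weights are illustrative:

```python
def region_to_watch(last_seen, now, importance=None):
    """Pick the region to observe next: the one unseen for the longest
    time, weighted by its (possibly precomputed) importance."""
    importance = importance or {}
    return max(last_seen,
               key=lambda r: (now - last_seen[r]) * importance.get(r, 1.0))
```

Implicit orders from the player would simply raise the importance weight of the flagged regions.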

Exposure is dependent on the locality. In open space, a unit is exposed from all sides, but he can only cover a part of it. In an ambush, the exposure is less than the coverage, thus amplifying the effect. Exposure and effect can be precomputed for each location of the map, and later used to weigh the desirability of a particular location when determining the unit position and the speed of movement.

When determining the actions, the ratio between effect and exposure is used. However, the relative weights of effect and exposure vary depending on the tactical situation and intention. In defense, exposure has to be minimized even at the expense of effect, but during an attack, the exposure has to be temporarily set aside.

Machine Learning and Behavior Cloning

It is wrong to think that the purpose of machine learning in the context of game AI is to provide learning opponents. Maybe this will be possible in a few years. On the other hand, at this very moment it makes sense to apply machine learning and data mining to recordings of what people do in simulated environments, and clone them behaviorally. Why hand-tune state transitions in game FSMs? Simply train the state machines from recordings of human players of the game. Why hand-tune behavior within states? Simply train the priorities and activities within each state. The technology is all cooked and ready for this step. Today's tools for data mining present the learned knowledge to a programmer very transparently: there is no black-box phenomenon of incomprehensible opponents.

I stressed learning 'state transitions' and 'state behavior,' not learning complete FSMs. The technology is ready for learning state transitions, but not yet complete FSMs. Not that it cannot be done, but the combinatorial explosion of such a step is problematic. Introspect, and you will notice that human states such as fear, panic, defense, and attack are all hardwired in our emotions, 'learned' in hardware through eons of evolution! We merely learn to act within and switch between these states; we don't learn the states themselves.

Autonomous behavior is hard to program manually. It is desirable to teach it by providing examples rather than by explicit programming. The fundamental goal is to assist the programmer and improve his productivity. This way, the artificial intelligence in the game can achieve greater sophistication.

Sophistication and diversity of behavior are often the real objectives of contemporary AI development, not the superhuman skill of computer opponents. The fundamental problem is not so much the learning method as the way of presenting the learning problem. We will introduce the basic terminology and concepts from machine learning, ranging from nominal, ordinal and continuous attributes and classes, to association, classification and regression problems.

Machine learning should not be considered a black box. Learning is a complex problem, and fully automated methods are not yet able to solve it autonomously. The programmer must analyze a situation and divide it into a number of sufficiently simple subproblems, which can be solved with machine learning algorithms. It is meaningful to record the player's actions and use the recordings for learning.

Machine learning methods can be roughly divided into symbolic and subsymbolic methods. The first kind generates transparent knowledge, which can be used by people for analysis and understanding. Subsymbolic methods, such as neural networks or support vector machines, store their knowledge in a form that is not intelligible without additional processing. Without the transparency of knowledge, the programmer cannot be sure about the reliability of the results. On the other hand, in continuous domains, there are few non-visual ways of explaining the knowledge.

Before attempting to use machine learning, you should familiarize yourself with important concepts such as generalization (complexity, Occam's razor, pruning), and validation methods (cross-validation, test/learn set, meta-learning).

The usual understanding of machine learning as applied to game AI has been that of adapting strategies to the player. However, there are other important applications: improving the programmer's productivity by facilitating behavior cloning, and achieving more behavioral diversity (which is the ultimate aim of AI in games). Behavior cloning is an approach in machine learning where the computer attempts to imitate a recording of the player's actions in a game.

Glossary:

attributes:     properties of our example
class: 	        the decision we should learn to make on the basis of attributes
example:        attributes + class, we use this for learning
classification: decisions are discrete
regression:     decisions are continuous
For example, if we want to determine whether a certain patient has flu, on the basis of his temperature, tongue color, age, and redness of eyes, we describe the problem like this:
*attributes:
  temperature 	(continuous-real),
  tongue color 	(nominal-unsorted: yellow, green, red, black),
  age 		(ordinal-sorted: child, young, medium, old),
  eye color:	(nominal: white, pink, red),
*class: 	(ordinal: diseased, sick, healthy) 
(this is a classification problem, because the class is discrete)

example: temperature=40, tongue=yellow, age=old,    eyes=white -> class = diseased
example: temperature=37, tongue=green,  age=old,    eyes=pink  -> class = sick
example: temperature=38, tongue=red,    age=young,  eyes=white -> class = sick
example: temperature=37, tongue=red,    age=medium, eyes=white -> class = healthy
classify:temperature=37, tongue=yellow, age=?,      eyes=red   -> class = ?
Many machine learning algorithms assign probabilities to each class, while some just pick the most likely class. For computer games, such domains would be realistic:
*attributes: 
  friendly-strength, 
  enemy-strength, 
  available-ammo, 
  distance-to-cover
*class: (advance, stand-ground, seek-cover, retreat, panic)

attributes: 
  type-cover 	(bush, tree, forest, rock, hill, house), 
  enemy-weapon 	(gun, rocket, mortar), 
  enemy-strength, 
  unit-type	(infantry, motorized, armored)
class: 		(good-cover, medium-cover, bad-cover)
And now to the overview of learning algorithms. I do not describe them, but there is a lot of information available both on the internet and in books.

Instance-Based Methods

Instance-based methods of classification and regression are perhaps the most straightforward paradigm in machine learning. These methods are extremely robust, and especially appropriate for run-time learning. They are based upon the idea of storing experience in the form of examples, and deducing by comparing the current situation to the most similar stored ones. Because there is no learning as such, and because most processing needs to be done at classification time, these methods are called lazy. Typical examples of this case-based reasoning are the nearest neighbor classifier and locally weighted regression.

There are several tricks for managing noise and uncertainty, and separating important from redundant cases. Instance-based classification is not appropriate for all learning problems, and proper care must be taken, especially for preprocessing the data and ensuring high performance. Nearest neighbor methods have other uses: they are very effective for interpolation.

Instance-based methods are extremely simple, fabulously intuitive, yet tragically unknown. It's about time to change this. Nearest neighbor classifiers are the first machine learning method to try. If they don't work, neural networks and support vector machines usually won't work either.
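
A minimal k-nearest-neighbor classifier over continuous attributes; the tactical attributes in the usage below (friendly strength, enemy strength) are illustrative:

```python
def knn_classify(query, examples, k=3):
    """Classify by majority vote among the k stored examples closest
    to the query. `examples` is a list of (attribute_vector, class)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(examples, key=lambda e: dist(query, e[0]))[:k]
    votes = {}
    for _, cls in nearest:
        votes[cls] = votes.get(cls, 0) + 1
    return max(votes, key=votes.get)
```

For regression, replace the vote with a (distance-weighted) average of the neighbors' values.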

Naive Bayesian Classifier

In machine learning, we use statistics when we give up attempting to understand a problem in detail. In essence, we average the outcomes, and thus arrive at the probability distribution of the outcomes. This same probability distribution is what we predict in the future. However, even computing a simple probability is not a trivial task, and there are relatively simple corrections that improve the quality of the probability assessment.

Of course, a learning problem is described with attributes, and we can examine the conditional probabilities of outcomes, depending on the values of the attributes. The naive Bayesian classifier considers only one attribute at a time. It cannot learn very complex problems, but it is exceptionally simple, effective, and robust. It is especially appropriate for learning domains with a large number of nominal attributes with few values.

The naive Bayesian classifier is another underrepresented simple method. It is appropriate for nominal attributes, where instance-based methods do not work well. This applies predominantly to higher-level decisions.
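
A sketch of a naive Bayesian classifier with a Laplace correction, one of the simple corrections that improve the probability assessment; the cover-quality data in the usage is illustrative:

```python
from collections import defaultdict

def nb_train(examples):
    """Count class frequencies and per-attribute value frequencies.
    `examples` is a list of (attribute_tuple, class) pairs."""
    class_counts = defaultdict(int)
    value_counts = defaultdict(int)  # key: (attribute index, value, class)
    for attrs, cls in examples:
        class_counts[cls] += 1
        for i, v in enumerate(attrs):
            value_counts[(i, v, cls)] += 1
    return class_counts, value_counts

def nb_classify(attrs, model):
    """Pick the class maximizing P(class) * product of P(value | class),
    treating each attribute independently (the 'naive' assumption)."""
    class_counts, value_counts = model
    total = sum(class_counts.values())
    scores = {}
    for cls, n in class_counts.items():
        p = n / total
        for i, v in enumerate(attrs):
            # Laplace correction: no conditional probability is ever zero.
            p *= (value_counts[(i, v, cls)] + 1) / (n + 2)
        scores[cls] = p
    return max(scores, key=scores.get)
```

The counts themselves are transparent: the programmer can inspect exactly which attribute values push a decision one way or the other.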

Trees and Rules

Classification and regression trees come in a tremendous number of flavors. Yet the core paradigm is simple: we slice up the attribute space into pieces, until those pieces are either simple enough or too small to be worth splitting further. The process of cutting a slice corresponds to a node, while the final pieces are the leaves of the tree.

In practice, a node represents a single decision ("temperature > 37", "aspirine_ingestion = T"). The leaves in classification trees carry a class probability distribution (50% sick, 50% healthy), while in regression trees leaves carry the regression coefficients (z = 10*x + 3*y). It is possible to convert a tree into a set of rules, simply by assigning each leaf its own rule. Other methods of rule induction allow multiple rules to cover a single example.
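
The tree-to-rules conversion can be sketched directly; the tuple representation of nodes and leaves is an assumption made for brevity:

```python
def tree_to_rules(node, conditions=()):
    """One rule per leaf: walk the tree, collecting the condition (or its
    negation) at every node. A node is ("test", condition, yes, no);
    a leaf is ("leaf", class_distribution)."""
    if node[0] == "leaf":
        return [(list(conditions), node[1])]
    _, test, yes, no = node
    return (tree_to_rules(yes, conditions + (test,)) +
            tree_to_rules(no, conditions + ("not " + test,)))
```

Each resulting rule reads as "if all conditions hold, predict this distribution", which is exactly the form other rule-induction methods produce directly.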

Classification and regression trees are very popular in the machine learning community, especially because the learned models are comprehensible to people. In fact, they are most frequently used as an analytical tool for understanding complex problems. However, they are not particularly effective with continuous attributes, as the models are not continuous themselves: the "decisions" are always binary, scalpel-sharp cuts through the domain. Thus, hysteresis or tree smoothing (across decision boundaries) or interpolation might be needed to prevent brittleness and jittering.

Although classification trees are well known (often somewhat inappropriately called decision trees), they are not very appropriate for games. Regression trees are often more applicable, but little known.

Qualitative Reasoning and Equation Discovery

Quantitative reasoning refers to numeric descriptions of the environment and behavior. For example, a quantitative model of a ball flying through the air can be a parametric formula of a parabola. But such models are not appropriate for understanding the fundamentals, not to mention reasoning about them. How could we, for example, arrive at the conclusion that the ball will fall on the ground?

One possible solution to this problem is qualitative reasoning. Instead of deriving precise formulae, we can work with qualitative rules that refer to the gradients. The resulting rules and observations are far simpler and more appropriate for reasoning. Knowing that "if the distance from the enemies is decreasing, our shooting precision is decreasing", we can place an appropriate set of behavior rules into the system.

For example, it is possible to create qualitative decision trees: in nodes, in addition to referring to absolute values (x > 50), we can refer to concepts such as "x is monotonically increasing, y is monotonically decreasing", while leaves carry possible instructions: "decrease z".
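
The qualitative abstraction of a time series into gradient signs might be sketched like this; the threshold and the labels are illustrative assumptions:

```python
def trend(series, eps=1e-9):
    """Abstract a time series into a qualitative label based on the
    signs of successive differences."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    if all(d > eps for d in diffs):
        return "increasing"
    if all(d < -eps for d in diffs):
        return "decreasing"
    if all(abs(d) <= eps for d in diffs):
        return "steady"
    return "mixed"
```

Such labels are exactly what the nodes of a qualitative decision tree would test.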

Detailed weights and functions are relatively easy to fill in once the general principles have been discovered and modeled. The tools for equation discovery can be used for that aim. The input is a time series, while the output of the equation discovery algorithms is formulae, ranging from simple equations to partial differential equations, describing the quantitative relationships between attributes. Most algorithms generate many candidate equations, fitting their parameters to the data and evaluating their quality. The most promising equations are then further refined. Many methods keep track of the physical units: they don't attempt to add apples and oranges, or meters and seconds.

Notes on Methodology

Spatial Tagging

Concepts like the tactical importance, exposure, or overview of a location are hard to program in detail, or even to describe. Even when they can be programmed, this can take a lot of effort. A better policy is to place semantic tags on the game map for those positions which differ from the expected. Such tags range from "good ambush position", "excellent defensive position against attacks from the north", "a great viewing position", to "important bottleneck mountain pass".

It is not very difficult to program perception of road-side areas with dense vegetation as potential ambush positions, but such a heuristic is not absolutely reliable. Only appropriately unusual situations require tagging.

Tuning

Human enjoyment of games, especially of tactical games, is derived from enjoying progress, mastery, proficiency, experimentation, and learning. Players are not to be considered pigeons, rats, or Pavlov's dogs that frantically work to get an arbitrary reward in the form of graphics or music. Games are learning devices, and a good learning device is appropriately demanding yet manageable. The aim of game AI is not to provide a lasting challenge, but to provide a smoothly ramping level of difficulty while the player is learning to think strategically, tactically, or reactively. Artificial intelligence in games takes the role of a never-bored and never-boring opponent. Intelligence amplification takes over those tasks that the player has already mastered and that no longer interest him.

With our current knowledge, it is almost impossible to quantify difficulty, or the human learning curve, and adjust the gameplay to suit it. As with all skills which are scientifically unquantifiable, one has to resort to trial and error, sometimes fancily called "Art" or "Zen", or simply "tweaking". Good game designers recognize that play balancing is a crucial stage of creating a good game, and competent game AI programmers engineer their code to allow easy tuning of difficulty and the learning curve in the final stages of development.

Programming behavior is quite different from other kinds of programming. The behavior has to look good. Although "looking good" can be quantified as "efficient", efficiency is itself sometimes hard to quantify. So we again arrive at the dichotomy between art and engineering. To get the behavior to look good, many modifications are needed after the initial code has been written. With the traditional way of modifying the source code, for example in C++, a tremendous amount of time is wasted merely recompiling and attempting to reproduce the mistake. Hand-coding of hard-wired AI is an atrocious mistake.

From the very beginning, the programmer should expect to tune the AI to make it look good. This requires proper tools and appropriate structuring of the game code (e.g. parameterization, transparency, serialization). Tuning requires tools to examine, monitor, and tweak the behavior while running the game, without interruptions for editing and recompiling.
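
One possible sketch of such recompile-free tuning: parameters live in a JSON file and are reloaded whenever the file changes, so behavior can be tweaked while the game runs. The file format and class name are assumptions:

```python
import json
import os

class TunableParams:
    """Parameters read from a JSON file, reloaded on modification,
    so the AI can be tuned without editing and recompiling."""
    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.values = {}
        self.reload()

    def reload(self):
        # Re-read the file only when its modification time changes.
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:
            with open(self.path) as f:
                self.values = json.load(f)
            self.mtime = mtime

    def __getitem__(self, key):
        self.reload()
        return self.values[key]
```

In a real game one would poll once per frame (or watch the file) rather than on every access, but the principle is the same.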

Game AI on the Player's Side

The objective of AI in most computer games has usually been to provide opponents. The reach of AI is limited in this application. Most players prefer opponents who fight bitterly, but rarely win. The author once developed a rational AI module for enemies: they would enter a fight only if they estimated that they would win; otherwise they would run away or call their friends. Nobody disputed their intelligence, but the author quickly fixed them to underestimate the player and to be predictable but diverse. The objective of opponent AI is merely to assure that the opponents' defeat looks gallant and animated. The player must feel that solely his wits and skills are responsible for the opponents' demise.

In recent years, there has been an upsurge of nurturing games (Black and White, Creatures), real-time strategy games, and team-based action games (SWAT 3). The crucial aspect of the AI in all these games is that it primarily supports the player, and only secondarily opposes him. No longer is the reach of AI programming limited to assuring a predictable defeat. However, the challenges increase too: the player is monitoring almost every step made by his AI-controlled entities. State-of-the-art AI is required for such games, and the role it plays is the amplification of the player's intelligence and skill.

In the subsequent sections, we will provoke the reader by exposing a set of fallacies in game AI programming. We will anthropomorphize an arbitrary but unfortunate AI-driven creature and call it AIex, for short.

Nurturing Creatures:
Extend friendly AI!

There has been lots of discussion on whether the player would be interested in upgrading his game's AI. Well, why should the player do something like that? To make his enemies kill him faster? Why shouldn't they be given just bigger guns? The challenge is the same!

The player would be quite interested in downloading, perhaps even buying a new pack of AIexes that would give him an edge in fighting against his human opponents. The player would crave training, breeding, even programming his AIexes to fight better on his side and give him an edge against that quick-fingered kid next door with the default V1.0 menagerie.

One should wonder what the player expects from his friendly AIex. The player does not require AIex to do everything by himself. Instead, the player primarily wants a good servant: one who has a basic understanding of the player's goals, who takes care of boring details, who performs duties of limited scope meticulously and with patience, and who does not burden the all-important player with irrelevant remarks, but contacts him only when things go awry.

Fulfill the Orders!
Don't order: advise!

All a common AIex understands are simple orders: `move', `kill'. When the remainder of his brain spins through the air above a minefield, the last thought buzzing through it is how nice it would have been if the player had marked an area on the map as a probable minefield. While feeling the last drops of his blood drain from a wound shot in his back, he wonders why the player didn't also mark that other mountain road as a probable enemy attack direction. Mourning the night after two thirds of his team died in an ambush, he quietly accuses the player of not flagging those bushes as likely ambush positions.

The language with which the player communicates with AIex has to be extended. It need not be natural language; it should only facilitate the player's expression of important requirements and hints.
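Such a hint vocabulary can be tiny and still save AIex's life. Here is one hypothetical sketch (the tag names and cost values are invented): the player tags map regions, and AIex folds the tags into his movement cost instead of treating every tile equally, so the pathfinder routes around suspected minefields without ever being ordered to.

```python
# Player-placed hint tags and the penalty each adds to a tile's
# movement cost. The values are illustrative, and would be tuned.
MINEFIELD = "minefield"
AMBUSH = "ambush"
ATTACK_DIRECTION = "attack-direction"

HINT_COST = {MINEFIELD: 50.0, AMBUSH: 20.0, ATTACK_DIRECTION: 5.0}

def tile_cost(base_cost, hints):
    """Movement cost of a tile, inflated by any player-placed hints.
    A pathfinder using this cost will prefer untagged routes."""
    return base_cost + sum(HINT_COST[h] for h in hints)
```

A tile tagged as a minefield becomes fifty times more expensive than open ground, so it is crossed only when there is truly no other way.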

Dialogue and Transparence
Wink!

One of the player's tanks is driven by AIex, who occasionally blurts out "Kill!" Is this sufficient to understand what's on AIex's little mind? A good AIex will tell the human who controls him only what the player really wants to know. Useful information would be the planned course of movement, requests for permission to retreat, pleas for reinforcement, reports of enemy movements, detected mines, etc. If the player sees that AIex intends to move across a minefield, he will obviously intervene.

And a bloodthirsty grimace is far more aesthetically expressive than repeatedly babbling, "Kill!"

Analyzing the Environment
Friendly environment talks back!

AIex is just a little bit of software in a strange little game world. Every 10 milliseconds, he rushes through those thousands of objects, analyzing their importance to his actions. Wouldn't it be nice if a fridge told him, "Hey AIex, I see you're hungry, here is food, come, eat!" The fridge only looks around once a second or so, just in case someone hungry rushes by.

Level designers, too, may choose to place their own little micro-threads in bushes that occasionally whisper to a running, battered GI: `Hide here, soldier, hide here!' A door may tell a goblin, `Buddy, step back, I'll open.' A wispy rope bridge will warn a juggernaut troll rolling towards it, `Yo, troll, you're too heavy, go away.' A river will cue an incoming jaguar, `Jaguar - jump - now!'

Along similar lines, state-of-the-art games have for several years preprocessed their levels and tagged the level map with hints, such as informative waypoints for pathfinding (Half Life), useful shooting positions (HALO), and traffic statistics and sniping spots (van der Sterren).
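The fridge example above inverts the usual polling direction, and a sketch makes the saving obvious: instead of every agent scanning thousands of objects every frame, each smart object scans for nearby agents about once a second and offers its affordance only to those who need it. All class and field names here are hypothetical.

```python
import math

class Agent:
    def __init__(self, x, y, hunger=0.0):
        self.x, self.y, self.hunger = x, y, hunger
        self.offers = []   # messages whispered to us by the environment

class Fridge:
    """A smart object: it advertises food to hungry agents nearby."""

    def __init__(self, x, y, radius=5.0):
        self.x, self.y, self.radius = x, y, radius

    def tick(self, agents):
        """Run about once a second, not once per frame."""
        for a in agents:
            near = math.hypot(a.x - self.x, a.y - self.y) <= self.radius
            if near and a.hunger > 0.5:
                a.offers.append(("food", (self.x, self.y)))
```

One fridge polling once a second replaces thousands of per-frame object queries inside every agent; the bushes, doors, and bridges above would work the same way.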

Force-Based Unit Movement

No subject in AI has been explored more deeply than pathfinding. Yet no aspect of game AI draws more complaints than pathfinding problems. AIex is constantly getting stuck behind doors and stuck in lines, not to mention all the other disasters that befall AIexes driven by pathfinder navigation code. Flies are the other extreme: they don't collide, they don't get stuck, and they generally don't look stupid (unless you confront them with a window, unanticipated by their genes), but their perspective is excessively local; that is why they cannot find an efficient way around the window.

One solution is to use the pathfinder merely as a guide, while a flocking-like collision avoidance model takes care of local movement and behavior. The pathfinder provides waypoints placed so that AIex can reach them in a mostly straight line. When the second waypoint becomes visible, AIex checks off the first one.

In addition to the attraction force of the waypoint, other forces too compete for AIex's attention: the safety of a ditch, the desire to look backwards, the desire to observe dangerous areas, the need to maintain proximity to his mates, and, yes, an aggressive urge to approach his enemies and kill them. Higher layers of AI control the direction and strength of these forces. We will explain the concept on the example of formations.
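The force mixture can be sketched as a simple weighted vector sum. This is an illustrative toy, not a finished steering model: the weights and the inverse-square threat repulsion are assumptions to be tuned, and a real implementation would add the other forces mentioned above.

```python
import math

def norm(vx, vy):
    """Unit vector in the direction of (vx, vy), or (0, 0) at the origin."""
    d = math.hypot(vx, vy)
    return (vx / d, vy / d) if d > 1e-9 else (0.0, 0.0)

def steering_force(pos, waypoint, threats, mates,
                   w_goal=1.0, w_threat=2.0, w_cohesion=0.3):
    fx = fy = 0.0
    # Attraction toward the current waypoint from the pathfinder.
    gx, gy = norm(waypoint[0] - pos[0], waypoint[1] - pos[1])
    fx += w_goal * gx; fy += w_goal * gy
    # Repulsion from threats, growing stronger as they get closer.
    for t in threats:
        dx, dy = pos[0] - t[0], pos[1] - t[1]
        d = math.hypot(dx, dy)
        if d > 1e-9:
            fx += w_threat * dx / (d * d)
            fy += w_threat * dy / (d * d)
    # Mild cohesion toward the squad's center of mass.
    if mates:
        cx = sum(m[0] for m in mates) / len(mates)
        cy = sum(m[1] for m in mates) / len(mates)
        mx, my = norm(cx - pos[0], cy - pos[1])
        fx += w_cohesion * mx; fy += w_cohesion * my
    return fx, fy
```

The higher AI layers steer by adjusting the weights, not by micromanaging the path: raising `w_threat` produces a cautious AIex, raising `w_goal` a reckless one.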

Understanding Formations

A commonly-bred AIex has eleven states in his high-tech fuzzy finite state machine: `death-animation-1', ..., `death-animation-10', and `kill'. Moreover, programmers worldwide think that formations are pretty little triangles. Right, triangles are pretty, and AIexes should always try to look pretty, but wedge formations aren't used because wedges are pretty.

Soldiers in a wedge formation can all fire at an enemy in front of them without firing through their fellows, and half of them can fire in case of an attack from the side. When a flanking attack is expected, a column or step formation is used, which enables all the soldiers to fire at a flanking enemy. A diamond formation provides good coverage, with half of the soldiers immediately effective regardless of the direction of the enemy attack. Finally, a line formation is used during frontal assaults.

The fundamentals of troop movement lie in assuring 360-degree visibility, ideally with some redundancy, maximizing effectiveness upon an attack, and maintaining cohesion and communication throughout the team. This is best achieved with a soft flocking-like model that locally maximizes effectiveness and visibility while minimizing exposure. States are not obsolete in this approach: the weights on the individual requirements differ in defense, attack, reconnaissance, retreat and maneuver.
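Concretely, the states can be reduced to weight tables over the same small set of local criteria; the numbers below are invented placeholders, and the criteria are assumed to be normalized scores produced elsewhere. The same local optimizer runs in every state, and only the weights change.

```python
# Hypothetical weight tables for the soft formation model:
#              visibility  effectiveness  exposure  cohesion
WEIGHTS = {
    "defense": (0.3,        0.4,           0.2,      0.1),
    "attack":  (0.1,        0.5,           0.1,      0.3),
    "recon":   (0.5,        0.1,           0.3,      0.1),
    "retreat": (0.2,        0.1,           0.5,      0.2),
}

def score(state, visibility, effectiveness, exposure, cohesion):
    """Value of a candidate position for one soldier; higher is better.
    Exposure counts against the position, everything else for it."""
    wv, we, wx, wc = WEIGHTS[state]
    return wv * visibility + we * effectiveness - wx * exposure + wc * cohesion
```

Each soldier evaluates nearby candidate positions with `score` and drifts toward the best one; the wedge, column, and diamond then emerge from the weights rather than being drawn as fixed triangles.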

Simulated Worlds

It is now clear that one of the essential aspects of intelligence is embodiment: cognition and knowledge united in a single system which interfaces with its environment. It is also becoming clear that we cannot possibly pass all the knowledge into the system; we should rather let the system acquire it. To achieve that, the system should be placed in an environment, so that the acquisition of knowledge can be active.

At the current state of affairs we cannot possibly hope (or want!) to put the system into our real environment. Instead, we should simulate an environment and let the AIs live in it. Computer games are a wonderful way of creating a world where our initial attempts at AI could live, gain experience, and interact with people. Computer games have proven to be models good enough for real people to live in.

It is very important that AIs and humans communicate. Only then can they share a model of the world and possibly cooperate. Human language is too complex, but we could invent a proto-language ('point and grunt?') that both AIs and people could learn. Step by step, people would bring in features of human languages, and so the AIs would slowly pick up more human characteristics.

A game world would have to be interesting in the sense that there is something to learn, something to create, something to build. Intelligence emerges only if there are boundaries to push, knowledge to acquire and then apply. On the other hand, it shouldn't be too hard at first. An emergent ascent to the peak of intelligence requires a gentle slope all the way up. A world should start simple and abstract, yet complex enough for certain cognitive abilities to emerge. This complexity is also what we seek in our game worlds. A game you have figured out is no longer interesting; a game you cannot figure out is boring. Exactly the same applies to AIs, which learn only when the problems strike the right balance of simplicity and complexity.

ALife got too hung up on genetic algorithms as the sole learning mechanism: "Why program intelligence? Just wait till it evolves by itself!" Right, in a million years or so. We should work harder than that. Only this way will we be able to push beyond trivial life forms. Self-reflection and learning should be among the abilities of the very first generation of simworld AIs.

And this way, someday, AIs will be ready to graduate from the virtual world and be welcome in our real world.

We need not even simulate, because a program already lives in its own universe of computer hardware and computer networks. It is an environment with many sensations: memory availability, network latency, interrupts. These are the perceptions of a 'living' computer. It should 'feel' that it has to save its memory to the hard drive. It should 'feel' that the power is running out. It should 'feel' that the user is waiting. It should 'feel' impatience when waiting for the network. It should 'feel' friendship toward neighboring computers, make acquaintances, and help out if something breaks down. Why try to make it an inferior human? Just let it be a better computer.

Game AI and Academic AI

There has been a lot of talk about what AI can do for games. The consensus is: 'not much.' Game developers talk in concrete terms, whereas AI researchers often talk haze and philosophy; ask them to be concrete, and they spew mathematics and algorithms. You can observe this disconnect by comparing AI Wisdom and AI textbooks: AI Wisdom is hands-on and practical, AI textbooks are hypothetical and theoretical. I will refer to the two fields as EI (engineered entertainment intelligence) and AI (academic analysis of intelligence).

If you ask an AI researcher what he thinks of EI, he will respond that it's in a rut, plugging in algorithms without truly understanding what they mean, and so on. If you ask a game development practitioner what she thinks of AI textbooks, she will respond that they're impractical, incomprehensible, mathematical and hazy. EI is development, the engineering of intelligence-like artifacts; AI is analysis, research into intelligence-like artifacts. EI is sensing, AI is intuition. EI is results, AI is insights. EI is sales, AI is grants. EI is standardization, AI is anarchy. Few EI people manage to get published in AI journals (too boring), and few AI people manage to get published in EI publications (too impractical).

It's important to realize that research and development are two sides of the same brain, just like feeling and thinking, perceiving and judging, introversion and extraversion, disorder and order, plasticity and stability, fun and work, yin and yang. These are dichotomies, and Chris Lofting's web site has much on this topic. The crucial insight is that the optimum lies in the balance and collaboration between the two, not at either extreme. Of course, researchers often think there is too much development, and developers often think there is too much research. Every salesman peddles his own goods.