
USE OF ARTIFICIAL INTELLIGENCE IN PC GAMES
by Alex Whittaker

This talk will discuss the innovative ways in which characters in PC games can be programmed with individual personalities, giving them the ability to interact in more complex ways.

Imagine that we are creating an agent for a computer game; the agent is to be a companion to the player avatar, moving through the environment and reacting intelligently to what it encounters. We will assume that the agent is a small monkey that sits on the avatar's shoulder for much of the time, though it can explore the environment independently. As well as acting intelligently, we would like the agent to behave believably - we want it to support the player's suspension of disbelief, so godlike intelligence is not required, but an internal emotional state is. Ultimately we want the agent to have an exaggerated and anthropomorphic personality - whilst I'm sure normal monkeys have plenty of personality when you get to know them, we need ours to be more accessible; in our case, after all, a monkey probably is just for Christmas.

The AI field has dreamed of creating independently intelligent agents for years (since well before Alan Turing) and has consistently failed to realise them. The trouble, stated plainly, is that designing even a rudimentary intelligence from the ground up using a symbolic representation has proved to be a massively complex problem. How to design an intelligent symbolic system is well explored and well understood. However, the scale of the problem is so mountainous that we are only able to explore its foothills, and we do not even know whether, when we reach the summit, we will discover an intelligence like ours.

There have, however, been some successes taking a very different route. What is called behavioural intelligence makes no use of symbol manipulation but rather has representations at a lower level. It is this approach, so well suited to computer games, that is most widely used throughout the marketplace. It is important to understand that in this kind of solution it is the intelligence itself that is the artifact, not the agency - that is, someone needs to carefully design the intelligent behaviour in advance. Historically it has been the responsibility of the programmer to design this intelligence, but at Elixir Studios and other leading-edge developers this control passes to the designers. We use a behavioural editing and debugging suite that allows us to build up behaviours graphically, and to launch, examine and debug them within the game environment.
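
To give a flavour of what such a designer-authored behaviour might reduce to under the hood, here is a minimal sketch in Python - illustrative only, and not Elixir's actual tooling or data format - where a behaviour is just a named condition-action pair with a priority:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Behaviour:
        name: str             # label a graphical editor might show
        condition: Callable   # fires when true of the world state
        action: Callable      # what the agent then does
        priority: int         # higher beats lower when several fire

    # A designer-authored rule might boil down to something like this:
    flee_loud_noise = Behaviour(
        name="flee loud noise",
        condition=lambda world: world.get("noise_level", 0) > 0.8,
        action=lambda world: print("monkey scampers to the avatar's shoulder"),
        priority=10,
    )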

So, getting back to our monkey agent, what are the processes involved in building up its behaviour for a computer game? The first stage in the design process is documentation - as there are often several designers on a game, each with strong ideas, it becomes critical to crystallise all that hand-waving into firm, unified ideas. This can be quite difficult to do, and a strong framework might be useful - list the agent's fears, desires, goals and so on.
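
One hypothetical way to make such a framework concrete - every name and entry below is invented for illustration - is a simple design-document skeleton:

    from dataclasses import dataclass, field

    @dataclass
    class AgentSpec:
        """A design-document skeleton: the hand-waving made explicit."""
        name: str
        fears: list = field(default_factory=list)
        desires: list = field(default_factory=list)
        goals: list = field(default_factory=list)

    monkey = AgentSpec(
        name="monkey",
        fears=["lions", "loud noises", "deep water"],
        desires=["bananas", "the avatar's attention"],
        goals=["stay near the avatar", "explore when it feels safe"],
    )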

Once there is a strong idea of what kind of behaviour we want the agent to express, the designers can begin to use the behavioural editor to describe it. If we attempt to work at a high level - describing what the monkey does when it sees a bowl of fruit, what it does when it sees a lion, and so on - we will be working forever: there are countless possible encounters and we cannot predict them all. Instead we need to describe the behaviour at a much lower level. What does the monkey do when hungry? What does the monkey do when tired? Does tired beat hungry, or vice versa? If we begin at this lower level and then release the monkey into a game environment, the interaction between these low-level behaviours and a complex environment quickly makes the agent behave in a non-deterministic manner.
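
A minimal sketch of how this kind of drive arbitration might work, assuming (my assumption, not a description of any particular engine) that each drive is a number between 0 and 1; "does tired beat hungry" then reduces to a comparison, and the environment rather than a script decides what happens on each tick:

    def strongest_drive(drives):
        """Return the name of the most urgent drive."""
        return max(drives, key=drives.get)

    def tick(drives, world):
        # drives rise and fall with the environment rather than a script,
        # so what the monkey does emerges encounter by encounter
        drives["hunger"] = min(1.0, drives["hunger"] + 0.01)
        if world.get("fruit_nearby") and strongest_drive(drives) == "hunger":
            drives["hunger"] = 0.0
            return "eat fruit"
        if strongest_drive(drives) == "tiredness":
            return "curl up and sleep"
        return "wander"

    drives = {"hunger": 0.7, "tiredness": 0.4}
    print(tick(drives, {"fruit_nearby": True}))   # -> "eat fruit"

Run an agent like this in a rich world and the sequence of actions quickly becomes unpredictable, even though each individual rule is trivially simple.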

Once we have our low-level behaviour, we have probably described something that has the appearance of an insect level of intelligence. From here, if we want to extend this illusion to a higher intelligence, the designer will want to add some emotional responses. It is generally accepted that to represent an emotional layer of intelligence we need a symbolic intelligent system: if we want our monkey to be jealous of its banana then it has to believe that a second monkey wants to have that banana, and that means that our monkey needs some symbol manipulation - it needs to know that monkeys like bananas, that monkeys steal things they like, that if the other monkey steals the banana then our monkey won't have it, and so on. It is possible, however, to spoof a primitive emotional layer using a behavioural system. For this we need to form a simple model of emotion that is orthogonal to the agent's mental state. By maintaining two states for the agent, one logical and one emotional, we can create an emergent behaviour that appears both intelligent and emotional.
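
As a hedged illustration of the two-state idea - the state variables and thresholds below are invented, not taken from any real system - the logical and emotional states are updated independently and only combined at the output stage:

    class Monkey:
        def __init__(self):
            # two orthogonal states, updated independently
            self.logical = {"task": "idle", "has_banana": True}
            self.emotional = {"happiness": 0.5, "fear": 0.0}

        def on_banana_stolen(self):
            self.logical["has_banana"] = False     # the fact...
            self.emotional["happiness"] -= 0.4     # ...and the feeling

        def choose_animation(self):
            # the two states meet only here, at the output stage
            if self.logical["task"] == "idle" and self.emotional["happiness"] < 0.3:
                return "sulk"    # reads as jealousy, with no symbols at all
            return "idle"

    m = Monkey()
    m.on_banana_stolen()
    print(m.choose_animation())   # -> "sulk"

The "sulk" here is pure behaviour: the monkey holds no beliefs about the thief at all, yet to an observer it reads as jealousy.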

By working up from the lowest level we can create a monkey agent that responds to its environment in a consistently intelligent way. Observing the agent for a short while, we will see behaviours that indicate that there is some emotional layer as well, reinforcing our belief in the agent. When our monkey agent decides to impersonate the player avatar, tiptoeing along behind on two feet, this indicates some deeper personality that we hope will convince the player. The layered nature of the behavioural system lends itself very well to these special-case exceptions. Once we have the core behaviour in place, the designer can add this kind of finesse to extend the sense of personality in an agent.
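
A sketch of how such layering might be arranged - the impersonation gag is from the example above, but the arbitration structure around it is my own illustration - with behaviours as condition-action pairs checked from the most specific layer down to the most general:

    def choose_action(world, layers):
        for layer in layers:                  # most specific layer first
            for condition, action in layer:
                if condition(world):
                    return action
        return "idle"

    layers = [
        # top layer: rare, hand-authored finesse
        [(lambda w: w["avatar_walking"] and w["playful"],
          "tiptoe along behind the avatar on two feet")],
        # core layer: the bread-and-butter drives
        [(lambda w: w["hunger"] > 0.6, "look for food"),
         (lambda w: w["tiredness"] > 0.8, "sleep")],
    ]

    world = {"avatar_walking": True, "playful": True,
             "hunger": 0.2, "tiredness": 0.1}
    print(choose_action(world, layers))   # the special case wins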

Behavioural AI presents a route by which a designer can create an intelligence that is believable at both the logical and the emotional level. The system also lends itself very well to the extension of special-case behaviours that can powerfully reinforce the illusion of intelligence. The resources required, in both memory and processing, are well suited to the computer games domain, and behavioural AI solutions are becoming increasingly common.

What does this mean for the creative writer? First, because the agents are now becoming truly autonomous they are also non-deterministic - we cannot definitively predict what they are going to do under any circumstances. Second, because their behaviour is a distillation of the designer's forethought, agents may not, without a large amount of testing, always be reliably intelligent under more obscure circumstances. So in some circumstances writing may have to move from explicit stage directions towards a script for an improvisation group. As for dialog, it can be put into a behavioural system - but if it is put in without any sense of the agent's underlying state, then we may end up with our wisecracking monkey suggesting that it might be time to get a banana curry whilst it is busy dodging the snapping jaws of a crocodile. With sensitivity to the underlying behavioural system, however, witty, perceptive dialog can lift the agent by providing a window onto its current emotional and logical states in a way that an animator may struggle to get across.
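
As a final illustrative sketch (again with invented state variables and thresholds), each line of dialog can carry a guard over the agent's underlying state, so the banana quip is structurally unable to fire while the crocodile has the floor:

    def pick_line(state, lines):
        for guard, line in lines:        # first line whose guard holds
            if guard(state):
                return line
        return None                      # silence beats an incongruous quip

    lines = [
        (lambda s: s["fear"] > 0.7, "Less chat, more running!"),
        (lambda s: s["hunger"] > 0.6 and s["fear"] < 0.2,
         "Is it banana-curry o'clock yet?"),
    ]

    state = {"fear": 0.9, "hunger": 0.8}
    print(pick_line(state, lines))   # -> "Less chat, more running!"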

Copyright © Alex Whittaker
