
Friday, January 22, 2010

AI and Game Observations

It occurred to me today that I've made a number of useful observations on public forums about video games, AI and programming which I should really do something about in the future. I need a dumping ground for those observations, so why not here?



From: amicusNYCL in response to Monkeedude1212

| Because programming -IS- Logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.

You tell it to try to learn the rules, and make the best decision that it can.

Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers every possible move, and for each one every possible reply, and so on for 25 turns, it can quantify which move right now gives it the best chance of winning. When you download a chess game that lets you set the difficulty, the main thing being changed is how far ahead the AI is allowed to look. An "easy" AI might only look 3 moves ahead. It's been a while since I took any AI courses, but I seem to remember that human masters like Kasparov are capable of looking ahead around 10-12 turns.

So it's not that you tell the AI to make bad decisions, you simply limit the information it has to work with. This is more equivalent to what most humans do when they make bad decisions ("I didn't think of that").
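
To make that concrete, here is a minimal sketch of depth-limited minimax in Python. The game interface (legal_moves, apply, evaluate) is hypothetical, not a real library; the point is that the depth parameter alone works as the difficulty dial.

    # Depth-limited minimax sketch; the game interface
    # (legal_moves, apply, evaluate) is assumed, not a real library.

    def minimax(game, depth, maximizing):
        # Best achievable score looking `depth` plies ahead.
        moves = game.legal_moves()
        if depth == 0 or not moves:
            return game.evaluate()  # static evaluation of this position
        scores = (minimax(game.apply(m), depth - 1, not maximizing)
                  for m in moves)
        return max(scores) if maximizing else min(scores)

    def best_move(game, difficulty):
        # An "easy" AI might search 3 plies; a strong one searches many more.
        return max(game.legal_moves(),
                   key=lambda m: minimax(game.apply(m), difficulty - 1, False))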



From: me

Indeed.

In fact, feeding bogus data to the AI is one of the more realistic ways to limit, say, a racing game's agents: if a driver doesn't see the post in front of them because they aren't spending enough time per frame watching the road, and are instead eyeballing an opponent, they're going to crash, just like any human. You simulate that by using player proximity and the erraticness of the other opponents to model distraction, and you modulate the AI's awareness of dynamic obstacles and hazards accordingly.
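
A rough sketch of what that modulation might look like, with all the names and tuning constants invented purely for illustration:

    import random

    def awareness(player_dist, opponent_erraticness, base_awareness=0.9):
        # Chance this frame that the AI driver "sees" a dynamic obstacle.
        # Close players and erratic neighbors eat into the attention
        # budget, like a human eyeballing a rival instead of the road.
        distraction = 0.0
        if player_dist < 20.0:  # player close enough to be worth watching
            distraction += 0.3 * (1.0 - player_dist / 20.0)
        distraction += 0.2 * opponent_erraticness  # erraticness on a 0..1 scale
        return max(0.1, base_awareness - distraction)

    def notices_obstacle(player_dist, erraticness):
        return random.random() < awareness(player_dist, erraticness)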



From: Monkeedude1212

A computer can mimic the logic of a human being, yes.
But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.
You can program a computer to make a mistake - but it's not the same.


From: me

Our illogical behavior is largely deterministic as well.

We tend to behave illogically only in response to specific stimuli (fear, anger, hunger, lust) or when our system is under strain (fatigue, extreme hunger or thirst, neurological stress), and nearly all of those states can be modeled convincingly enough for a game.

So now we examine the character of our illogical behavior: we prioritize actions inappropriately, mistake one input for another of a similar kind, suffer slowed reflexes or recognition, respond to a familiar stimulus with an inappropriate reaction, fail to suppress responses we would ordinarily not allow in ourselves due to social strictures or personal beliefs, or simply fail to notice things in a reasonable timeframe.

What about that couldn't be simulated with an extremely simple system of defaults plus a laundry list of pre-programmed failure behaviors? Illogic may be more complicated to simulate than logic in a limited domain of actions - the elevator can go up, down, or stop, but it's hard to make it change its mind because it's tired of going up - but illogic is easy to simulate when the expected domain of AI activity includes nearly any action. That's the condition found in most sandbox games: you expect the pedestrians and enemies to behave in an almost random fashion, because you expect humans "in the wild" to be unpredictable. This means that anything short of obviously programmatic behavior, or obviously illogical *group* behavior, will seem fairly realistic to the player - especially if the AI isn't just "instanced," appearing and disappearing with a short library of functions, but is instead programmed with agendas, no matter how simple (walk from residential A to commercial B, attach the grocery bag model to the arms, use the carrying walk animation, return to residential A).
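
As a rough illustration of that "defaults plus a laundry list of failure behaviors" idea - every name and probability here is invented for the sketch:

    import random

    DEFAULT_AGENDA = ["walk_to_market", "pick_up_groceries", "walk_home"]

    FAILURE_BEHAVIORS = [
        "drop_groceries",     # misprioritized action
        "flee_from_nothing",  # input mistaken for a threat
        "freeze_in_place",    # slowed recognition
        "shout_at_stranger",  # unsuppressed social response
    ]

    def next_action(agenda_step, stress=0.0):
        # Follow the agenda, but under stress occasionally fail "humanly".
        # stress is a 0..1 value driven by fear, fatigue, hunger, etc.
        if random.random() < stress:
            return random.choice(FAILURE_BEHAVIORS)
        return DEFAULT_AGENDA[agenda_step]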

The AI in Ultima 7 was praised as exceedingly lifelike because its characters had agendas and day/night schedules, and would respond to stimuli like violence or the appearance of a monster in a variety of ways depending on their roles. This sort of realism (even if it wouldn't actually pass whatever Turing-test-like metric you employ when observing it) serves to satisfy the player's suspension of disbelief.
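
I can't vouch for Ultima 7's actual implementation in detail, but the schedule idea itself is tiny - a hypothetical table like this captures the spirit:

    # A hypothetical day/night schedule table in the Ultima 7 spirit.
    # Each role maps hours of the day to an agenda item.
    SCHEDULES = {
        "baker": {6: "knead_dough", 9: "tend_shop", 18: "eat_dinner", 22: "sleep"},
        "guard": {6: "patrol_gate", 14: "patrol_market", 22: "patrol_gate"},
    }

    def current_task(role, hour):
        # Pick the most recent scheduled task at or before this hour,
        # wrapping around to the previous day's last entry overnight.
        schedule = SCHEDULES[role]
        past = [h for h in schedule if h <= hour]
        return schedule[max(past)] if past else schedule[max(schedule)]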

One of the best things about video games is the potential to surprise the player with unexpected behaviors. The first Quake bots, though they followed fairly simple node graphs, would continually surprise players by behaving in a fashion that seemed "unpredictable" simply because the players themselves hadn't been taking the most efficient routes between pickups.

The first "learning" Unreal bots would actually remember routes they saw players take, appending nodes and traversal instructions so they could follow or use those routes in the future. As a result, you think you can evade a bot by leaping out a window it never goes near, then are alarmed to find that not only does it follow you out, it uses the same route to escape you in the next round.
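
A toy version of that route-learning trick - the graph structure and names here are my own invention, not how Unreal actually stores its navigation data:

    class LearnedGraph:
        # Append player-observed traversals to the bot's navigation graph.

        def __init__(self):
            self.edges = {}  # node -> {reachable node: traversal method}

        def observe_player(self, from_node, to_node, how="walk"):
            # Record the link plus how it was traversed.
            self.edges.setdefault(from_node, {})[to_node] = how

        def can_follow(self, from_node, to_node):
            return to_node in self.edges.get(from_node, {})

    graph = LearnedGraph()
    graph.observe_player("hallway", "courtyard", how="jump_window")
    assert graph.can_follow("hallway", "courtyard")  # the bot follows you out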

The emergent behaviors of The Sims have been pleasing and surprising gamers for nearly ten years now, all based on a fairly simple wants/needs system along with some basic stimulus response. A Sim is less intelligent than the average cockroach, yet Sims are still capable of behavior which seems satisfyingly realistic, at least in the short term. If a Sim is too tired to make a full meal, it might just grab a bag of snacks from the refrigerator. It might fall asleep on the couch instead of going to bed. These are all failure states in an ideal AI's daily routine, and yet they provide the human touch - at very little computational cost.
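
That wants/needs loop is simple enough to sketch in a few lines (the thresholds and action names are invented here, not The Sims' real tuning):

    def choose_meal(hunger, energy):
        # A tired Sim "fails" gracefully: snacks instead of cooking,
        # couch instead of bed. Both inputs are on a 0..1 scale.
        if hunger > 0.6:
            return "cook_full_meal" if energy > 0.4 else "grab_snack_bag"
        if energy < 0.2:
            return "collapse_on_couch"  # too tired to reach the bed
        return "idle"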

The point here is that AI doesn't need to be perfectly human to be humanlike, and it's far from impossible to simulate illogical behavior - you just have to program some chaos into the system by which the AI selects actions.



From: Lumpy

Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. We are a long, LONG way from having AI that can think about the situation and make a decision on its own...

"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."

AI can't make a conscious decision that is not preprogrammed.


From: me

| AI can't make a conscious decision that is not preprogrammed.

Definitely. The job of the AI designer is to come up with a set of default behaviors and reactions which make the AI appear to be doing so.

You may not be able to make an AI figure out intent, but you can train it to recognize erratic motion - players in a pure deathmatch game don't often stop or double back quickly without an obvious reason, so something like that could trigger the bot to go into "cautious mode": fire, say, a grenade at the entrance of that corridor, then try to circle around. About 90% of the time it'd look like the bot was paranoid, but the few times it worked, the victim would be completely convinced.
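
One way to sketch that trigger: watch for sharp doublebacks in the player's last few observed positions. The window size and the 2.5-radian threshold are arbitrary tuning guesses.

    import math

    def doubled_back(positions, angle_threshold=2.5):
        # True if the last two movement vectors point nearly opposite
        # ways. positions holds the player's recent (x, y) samples.
        (x0, y0), (x1, y1), (x2, y2) = positions[-3:]
        v1 = (x1 - x0, y1 - y0)
        v2 = (x2 - x1, y2 - y1)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        mag = math.hypot(*v1) * math.hypot(*v2)
        if mag == 0:
            return False  # player stood still; no direction to compare
        return math.acos(max(-1.0, min(1.0, dot / mag))) > angle_threshold

A bot seeing this fire might lob that grenade at the corridor mouth and circle around instead of charging in.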

I agree that we're a long way from solving the problem of actual rational AI. I think we first have to figure out how logical frameworks and learning work before we can begin making computers think and reason like people.

Fortunately, it's often a lot easier to make them LOOK like they're thinking and reasoning like people.


I regret that I can't find the old post I made about multithreaded MMO servers and load distribution, or the one about dividing work within AI systems. They must be too old to show up on Google anymore, and Slashdot doesn't seem to have a good way of browsing your own comment history. I'll have to keep that in mind when designing the EG comment system.
