Gaming intelligence in the XBox 360

Nonsense, read in the Chicago Tribune:

“The artificial intelligence, it’s the way in which the world works. When you walk up to an individual in the world, he reacts one way if you punch him. If you say hello to him, he does something else. It’s how objects work when acted upon.”

This looks like a dumb expert system that does exactly what it was programmed for… It gets worse here:

In Microsoft’s new fantasy game Kameo, players can ride a horse into an army of 3,500 ogres. Amazingly, each ogre has its own intelligence and reacts to the player independent of the thousands of ogres around it.

If you want to program an AI, you're better off doing the opposite: one of the keys to human intelligence is that you and I go to every length not to think. In a group, you do not behave independently of others; on the contrary, the odds are strong that you'll do exactly what others do unless you have a very good reason not to.

But hey… Let’s assume it’s just the journalist…

Comments on Gaming intelligence in the XBox 360

  1. maybe the ogres are a lot smarter than the humans, and not prone to mob mentalities.

    but somehow i doubt the system, no matter how fancy, will model reactions of thousands of independent bad guys in a mob.

  2. There are several interesting pieces of research in social psychology that you might find useful when it comes to modeling a mob.

    Namely, Gabriel Tarde and Gustave Le Bon had good insights about a century ago. Some of Carl G. Jung’s analyses of the group unconscious make perfect sense too. But these three are starting to gather dust.

    The more interesting pieces of research, imho, are those of Muzafer Sherif (the Robbers Cave experiment) and of Serge Moscovici (on active minority influence). Arguably, Georg Simmel had the insights long before both of them, but the two remain major contributors to social psychology nonetheless. Both pinpoint a gelling effect that occurs in groups, which was studied at the individual level by Edgar Schein (who studied the brainwashing of prisoners of war in the 1950s).

    The bottom line, when it comes to programming a mob (or a multi-agent system), is more or less this: it makes sense to consider only those agents that have strong interactions with their environment, i.e. those towards the edges of the mob (via Sherif), those who are stubbornly consistent in their nonconformist behavior (via Moscovici), and those around the latter (via Sherif, again). You can then assume, as a first approximation, that all the other individuals within the mob act as one herd. To a certain extent, you can compare this to the skin effect in electronics, whereby current stays near the surface of the conductor.

    This approach would reduce the computational complexity of the ogre system by orders of magnitude. And as far as I can tell, the resulting behavior would be much closer to that of a real mob. :)
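
    The selection rule described above can be sketched in a few lines of Python. This is a toy illustration, not anyone's shipped implementation: the `Agent` class, the `select_active` function, and the `edge_fraction`/`influence_radius` parameters are all invented for the example, which simply picks out the edge agents, the stubborn nonconformists, and the nonconformists' neighbors; everyone else would share one herd update.

    ```python
    import math

    class Agent:
        def __init__(self, x, y, stubborn=False):
            self.x, self.y = x, y
            self.stubborn = stubborn  # consistently nonconformist (Moscovici)

    def select_active(agents, edge_fraction=0.2, influence_radius=2.0):
        """Return the subset of agents that get individual AI updates.

        Active agents are (a) those near the mob's outer edge (Sherif),
        (b) stubborn nonconformists (Moscovici), and (c) anyone close to
        a nonconformist. All other agents can be driven as one herd.
        """
        # Mob center and radius, from agent positions.
        cx = sum(a.x for a in agents) / len(agents)
        cy = sum(a.y for a in agents) / len(agents)
        radius = max(math.hypot(a.x - cx, a.y - cy) for a in agents)
        edge = radius * (1.0 - edge_fraction)  # outer ring counts as "edge"

        rebels = [a for a in agents if a.stubborn]
        active = set()
        for a in agents:
            on_edge = math.hypot(a.x - cx, a.y - cy) >= edge
            near_rebel = any(
                math.hypot(a.x - r.x, a.y - r.y) <= influence_radius
                for r in rebels)
            if on_edge or a.stubborn or near_rebel:
                active.add(a)
        return active
    ```

    For a dense mob, the active set grows roughly with the mob's perimeter rather than its area, which is where the savings come from: a full per-agent update loop runs only over `select_active(agents)`, while the interior gets one shared herd behavior.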