In my AI For Games class, we were asked to read Turing's original paper, "Computing Machinery and Intelligence", which describes what is now widely called the "Turing Test" for judging the quality of an artificial intelligence, and to describe how we might apply such a test to game AI. I thought this was a pretty interesting idea. Our responses weren't supposed to be particularly long or formal, so I thought I'd post mine here; my readers may enjoy it.
Summary of the Paper
For those who don't know, Alan Turing was a brilliant mathematician and scientist active in the 1940s and 50s, widely considered the father of computer science and artificial intelligence. In "Computing Machinery and Intelligence", Turing argues not only that machines will one day be able to learn and think - to have intelligence - but also that the best measure of this intelligence is to compare the machine behaviourally with a human. I'm personally not sure I agree that this is a good measure of a machine's intelligence, but for now we will take it as a given rather than delve into the definition of intelligence.
In the paper, Turing describes a method for comparing a human and a computer which he calls "The Imitation Game". A computer and a human each interact with a third party, a human judge. The judge can neither see nor hear the other two; all communication happens through typed text. The judge asks questions of both the human and the computer, and must decide which is which. The computer's goal, therefore, is to act as human as possible so as to fool the judge. The commonly used time limit for this test, drawn from Turing's own predictions, is 5 minutes, though of course fooling a judge for longer would be ideal.
Though Turing predicted machines would manage it by the turn of the century, as of this writing no one has been able to programme a computer such that it can reliably fool a judge for the full 5 minutes.
My main issue with the Turing test is that rather than judging a computer's ability to learn from experience and grow and change based on it, or to have unique individual "thoughts", it instead judges the computer's mastery of a human language and its ability to manipulate that language the way a human would, in addition to its actual responses. I don't feel that the ability to use a language well is necessarily evidence of human-like intelligence, nor that it will always be present in an intelligent machine. However, I do understand why Turing used such an approach - the definition of intelligence is a tricky thing, and a behavioural test is a much simpler and cleaner way to judge AI.
Turing Test as Applied to Games
Games are an intriguing instance of machines interacting with humans, because AI within a game has much more limited ways to interact. For now, let us consider only the case where an AI controls a "player" entity (either opposing or helpful) which might alternatively be controlled by a human player. This is the situation that makes the most sense to look at from a Turing test point of view.
In this situation, let us further stipulate that in-game chat systems, whether voice or text based, are not being used. Now the ways players can interact are limited strictly to the game mechanics. A Turing test here would involve a third-party judge playing the game with two other players (either at once or in separate sessions, as the game allows), one a computer AI and one a human. The judge must then determine which was the computer and which was the human.
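As a thought experiment, the blinded setup above can be sketched in code: the AI is secretly assigned one of two anonymous labels each session, and the judge must guess which label hides it. Everything here - the function names, the trial count, the stand-in judges - is hypothetical; the point is just that a judge with no usable signal can't beat chance, which is exactly the condition under which the AI "passes".

```python
import random

def run_blinded_trials(judge_guess, num_trials=2000, seed=42):
    """Hypothetical harness for a game-based Turing test.

    Each trial, the AI is secretly assigned one of two anonymous
    labels. `judge_guess` is the judge's strategy: given a random
    source, it returns the label ("A" or "B") it believes hides the
    AI. Returns the fraction of trials where the judge was right.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_trials):
        ai_label = rng.choice(["A", "B"])  # hidden assignment
        if judge_guess(rng) == ai_label:
            correct += 1
    return correct / num_trials

# A judge who can't tell the players apart is reduced to guessing,
# so accuracy lands near 0.5 - chance level.
accuracy = run_blinded_trials(lambda rng: rng.choice(["A", "B"]))
```

Note that even a judge who stubbornly always answers "A" scores around 0.5, because the hidden assignment is random - only genuine behavioural tells in the gameplay could push accuracy above chance.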
This seems fairly straightforward, and actually somewhat uninteresting. By limiting interaction so severely, is the test actually meaningful? Are we only judging how well the computer can play the game, and not how well it can respond and react like a human? The answer would greatly depend on how interactive the game is, and how much on-the-fly strategy changing a human might be able to do in that game compared to a computer.
In my description of the game-based Turing test, note that I specified "playing the game with" rather than "playing the game against". Certainly in many cases the latter will apply, but I don't believe such a test need be exclusive to opponent AIs. Helper team members are a notable example.
When thinking about the ways players interact without using language, I immediately thought of the game Journey, released earlier this year for the PlayStation 3. In Journey, the player moves across a desert toward a distant point. Occasionally players cross paths - but rather than allowing speech between players, the game limits interaction to the visual movements of the players' avatars, the ability to charge up each other's items, and a series of different musical sounds. What these sounds mean is not defined - it is a language created anew each time two players interact, by the players themselves.
While there are no AI players in the game, it seems like an intriguing idea to create an AI that might pose as a player in Journey, and see if it could interact in such a constrained environment well enough to fool a judge in the Turing test. This puts the player and the AI on the same footing when it comes to "language", yet still allows a very organic interaction compared to typical game mechanics.
In addition to his discussion of the test itself, the final section of Turing's paper is devoted to ideas about machine learning. One analogy he makes struck me: picture a mind (be it human, animal, or machine) as fuel for a nuclear reaction. A stimulus to the mind is like a neutron fired into that fuel. If the fuel is sub-critical, the reaction will carry on for a brief time and die away - or, as another simile of Turing's puts it, "like a piano string struck by a hammer". This is similar to how a typical machine "mind" acts - a stimulus is supplied, its programme runs in response, but eventually it runs out of new instructions to execute (even in a loop, it's the same instructions over and over). Turing posits that most of the time human brains also act this way - and animal brains always do. But if the fuel is at a super-critical concentration, the reaction won't stop - it will grow and grow. The example Turing gives is, "An idea presented to such a mind may give rise to a whole 'theory' consisting of secondary, tertiary and more remote ideas." A stimulus causes the mind to come up with a poem, a mathematical theorem, a new invention. In other words, this super-critical mind can be inspired rather than merely affected by its environment. A single idea becomes a springboard for more ideas. The question Turing then poses is: can we create a machine mind that acts in this "super-critical" fashion?
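Turing's reactor analogy has a tidy mathematical cousin in branching processes, and a toy simulation makes the sub-/super-critical divide concrete. This is my own illustrative sketch, not anything from the paper: each "active idea" spawns 0, 1, or 2 follow-up ideas, averaging `branching_mean` per idea, and we watch whether the chain fizzles or runs away.

```python
import random

def simulate_chain(branching_mean, max_steps=50, seed=0):
    """Toy branching process for Turing's reactor analogy.

    One initial 'neutron' (a stimulus enters the mind); each active
    idea independently spawns 0, 1, or 2 follow-ups, averaging
    `branching_mean` per idea. Below 1 the chain dies away (the
    struck piano string); above 1 it tends to grow without bound
    (the inspired, super-critical mind).
    """
    rng = random.Random(seed)
    p = branching_mean / 2      # two Bernoulli trials per active idea
    active, history = 1, [1]
    for _ in range(max_steps):
        active = sum(1 for _ in range(2 * active) if rng.random() < p)
        history.append(active)
        if active == 0 or active > 10_000:  # extinct, or clearly runaway
            break
    return history

# Sub-critical chains typically fizzle out; super-critical ones
# usually explode past the runaway threshold within a few steps.
fizzle = simulate_chain(0.5)
runaway = simulate_chain(1.8)
```

Even super-critical chains occasionally die out by bad luck early on - which, if you like, is the analogy's version of an idea that could have sparked a theory but happened to land on a distracted mind.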
This really stuck with me as a much better description of my view of true machine intelligence: not aping human behaviour, but the simple yet extraordinary ability to take in a stimulus of some sort and produce something not derived directly from that stimulus, but from the product of all the previous stimuli the machine has experienced as well as the one just taken in. Whether this has direct application to games is questionable, but it seems fair to assume an AI with this kind of ability would be a more engaging opponent or ally in any game.
Turing goes on to discuss how we might develop an intelligent machine - and talks about creating a "child" machine instead of a full-fledged intelligent "adult" machine. This child machine would have the ability to learn from positive or negative responses, and to take in direct instructions in some form as well. While again this may not directly affect games, it describes a learning machine which would make a far more challenging opponent or ally than one using a pre-set list of strategies.
I feel that games lend themselves well to the Turing test precisely because of their constraints on how players interact. It would be more difficult, or even impossible, to use the Turing test to measure the AI of non-player entities in games, however, as the entire approach is based on comparing human and non-human control - if no human control is possible, the computer can't possibly convince a judge of its supposed humanity. For those types of entities, other tests must be developed, or temporary human control must be allowed.