writes "Computers can't play like people — yet. An unusual kind of computer game bot-programming contest has just been held in Perth, Australia, as part of the IEEE Symposium on Computational Intelligence and Games. The contest was not about programming the bot that plays best. The aim was to see whether a bot could convince another player that it was actually a human. Game development studio 2K Australia (creator of BioShock) offered $7,000 cash plus a trip to its studio in Canberra to anyone who could create a bot that passed this 'Turing Test for Bots.' People like to play against opponents who are like themselves — opponents with personality, who can surprise, who sometimes make mistakes, yet don't robotically make the same mistakes over and over. Computers are superbly fast and accurate at playing games, but can they be programmed to be more fun to play against — to play like you and me?"

Philip continues, "Teams from Australia, the Czech Republic, the United States, Japan and Singapore competed in the final. Competitors created bots to play a specially modified Unreal Tournament 2004 death match. Expert judges then tried to tell whether they were playing a bot or a human, based solely on how their opponent played the game. The judges included AI experts, a game development executive, game developers, and an expert human player. The result? The winning team, AMIS from Charles University in Prague, managed to fool 2 of the 5 expert judges and achieved an average 'human-ness rating' of 2.4 out of 4. Overall, every human player was judged more human than the bots, but the judges were fooled often enough to suggest that in next year's contest some bots may pass the test by fooling 4 out of 5 judges. AMIS won $2,000 cash plus an all-expenses-paid trip to 2K's Canberra studio. You can check out the full results and competition videos, and try an online video quiz that lets you judge for yourself."
Read on for the rest of Philip's thoughts.