Two Teams Win the BotPrize

An anonymous reader writes "For the past five years, the 2K BotPrize has challenged artificial intelligence researchers and programmers to create a computer-game-playing bot that plays like a person. It's one thing to make bots that play computer games very well — computers are faster and more accurate than a person can ever be — but it's a different thing to make bots that are fun to play against. In a breakthrough result, after years of striving and improvement from 14 international teams from nine countries, two teams have crossed the humanness barrier! The teams share $7000 in prize money and a trip to games company 2K's Canberra studio. The winners are the UT^2 team from the University of Texas at Austin, and Mihai Polceanu, a doctoral student from Romania, currently studying Artificial Intelligence at ENIB CERV — Centre de Réalité Virtuelle, Brest, France. The UT^2 team is Professor Risto Miikkulainen and doctoral students Jacob Schrum and Igor Karpov. The bots created by the two teams both achieved a humanness rating of 52%, easily exceeding the average humanness rating of the human players, which was 40%. It is especially fitting that the prize has been won in 2012, the Alan Turing Centenary Year. The famous Turing test — in which a computer must hold a conversation with a human while pretending to be human itself — was the inspiration for the BotPrize competition. Where to now for human-like bots? Next year we hope to propose a new and exciting challenge for game-playing bot creators to push their technologies to the next level of human-like performance."
  • by Anonymous Coward on Saturday September 15, 2012 @11:46PM (#41349937)

    There are two fascinating things about the parent post.

    One of them is that I have just realized how to pass a Turing test -- just have your program pretend to be a frothing nutcase. Technically, that counts as human, but apparently it relieves you of the normal human requirement that your utterances be appropriate to the context, which is really the hardest part of passing the Turing test.

    The other is that someone apparently modded it up.

  • by mooingyak ( 720677 ) on Sunday September 16, 2012 @12:54AM (#41350147)

    One of them is that I have just realized how to pass a Turing test -- just have your program pretend to be a frothing nutcase.

    I've heard that the program most often mistaken for a human was frequently rude and nasty to its correspondents.

  • by Baloroth ( 2370816 ) on Sunday September 16, 2012 @01:28AM (#41350229)

    ...or the judges are very bad at distinguishing humans from bots. One interesting thing to note is that the low-skill default bots were rated quite highly on the "humanness" scale (nearly as high as the average for humans), which might suggest the judges thought human players played worse than the bots did. The default bots' "humanness" average was only slightly below that of the actual human players (~37% vs. ~41%), which suggests the methodology is a little questionable. If you can't distinguish the default, "non-humanized" bots from actual humans, how would you expect to distinguish bots that were deliberately "humanized"?
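    For what it's worth, the "humanness" rating the comments above are arguing over is essentially just the fraction of judging decisions that labeled a player human. A minimal sketch of that calculation (the vote counts below are hypothetical, chosen only to reproduce the reported 52% figure):

    ```python
    def humanness(judgments):
        """Humanness rating: fraction of judging decisions that labeled
        a player human. `judgments` is a list of booleans, True = judged human."""
        if not judgments:
            return 0.0
        return sum(judgments) / len(judgments)

    # Hypothetical example: a bot judged human in 13 of 25 encounters
    # scores 52%, matching the winning bots' reported rating.
    votes = [True] * 13 + [False] * 12
    print(f"{humanness(votes):.0%}")  # 52%
    ```

    On this definition, the ~37% vs. ~41% gap in the parent post amounts to only about one differing judgment per 25 encounters, which is why the methodological complaint has some bite.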

  • by Dean Edmonds ( 189342 ) on Sunday September 16, 2012 @03:02AM (#41350499)

    I find it interesting that the ordering of judges on the "Most human humans" list is the exact opposite of those on the "Best human judges" list. So the more robotic a judge appeared to others, the better they were able to recognize the true bots in the games. A great example of "it takes one to know one".

"If it's not loud, it doesn't work!" -- Blank Reg, from "Max Headroom"