
AI's Victories In Go Inspire Better Human Game Playing (scientificamerican.com)

Emily Willingham writes via Scientific American: In 2016 a computer named AlphaGo made headlines for defeating then world champion Lee Sedol at the ancient, popular strategy game Go. The "superhuman" artificial intelligence, developed by Google DeepMind, lost only one of the five rounds to Sedol, generating comparisons to Garry Kasparov's 1997 chess loss to IBM's Deep Blue. Go, which involves players facing off by moving black and white pieces called stones with the goal of occupying territory on the game board, had been viewed as a more intractable challenge to a machine opponent than chess. Much agonizing about the threat of AI to human ingenuity and livelihood followed AlphaGo's victory, not unlike what's happening right now with ChatGPT and its kin. In a 2016 news conference after the loss, though, a subdued Sedol offered a comment with a kernel of positivity. "Its style was different, and it was such an unusual experience that it took time for me to adjust," he said. "AlphaGo made me realize that I must study Go more."

At the time European Go champion Fan Hui, who'd also lost a private round of five games to AlphaGo months earlier, told Wired that the matches made him see the game "completely differently." This improved his play so much that his world ranking "skyrocketed," according to Wired. Formally tracking the messy process of human decision-making can be tough. But a decades-long record of professional Go player moves gave researchers a way to assess the human strategic response to an AI provocation. A new study now confirms that Fan Hui's improvements after facing the AlphaGo challenge weren't just a singular fluke. In 2017, after that humbling AI win in 2016, human Go players gained access to data detailing the moves made by the AI system and, in a very humanlike way, developed new strategies that led to better-quality decisions in their game play. A confirmation of the changes in human game play appears in findings published on March 13 in the Proceedings of the National Academy of Sciences USA.

The team found that before AI beat human Go champions, the level of human decision quality stayed pretty uniform for 66 years. After that fateful 2016-2017 period, decision quality scores began to climb. Humans were making better game play choices -- maybe not enough to consistently beat superhuman AIs but still better. Novelty scores also shot up after 2016-2017 as humans introduced new moves earlier in the game play sequence. And in their assessment of the link between novel moves and better-quality decisions, [the researchers] found that before AlphaGo succeeded against human players, humans' novel moves contributed less to good-quality decisions, on average, than nonnovel moves. After these landmark AI wins, the novel moves humans introduced into games contributed more on average than already known moves to better decision quality scores.
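For a sense of what "decision quality" and "novelty" scores might look like in code, here is a toy Python sketch. It is not the study's actual methodology; `engine.best_move` stands in for a hypothetical wrapper around a strong evaluator such as KataGo, and the opening sequences are made up.

```python
# Toy illustration only: simplified stand-ins for the study's metrics.
# `engine` is a hypothetical object exposing best_move(position).

def decision_quality(positions, moves, engine):
    """Fraction of human moves that match the reference engine's top choice."""
    matches = sum(1 for pos, mv in zip(positions, moves) if engine.best_move(pos) == mv)
    return matches / len(moves)

def first_novel_move_index(moves, known_games):
    """Index of the first move that departs from every previously recorded game.
    Earlier departures count as more 'novel' in this simplified version."""
    for i in range(len(moves)):
        prefix = tuple(moves[: i + 1])
        if not any(tuple(g[: i + 1]) == prefix for g in known_games):
            return i
    return len(moves)

past_games = [("D4", "Q16", "Q3"), ("D4", "Q16", "D16")]
print(first_novel_move_index(["D4", "Q16", "R16"], past_games))  # -> 2
```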


Comments Filter:
  • Who Trained Who?

(OK, paraphrasing AC/DC from a Stephen King movie, but you get it)

    • by shanen ( 462549 )

      Not a bad FP, but surprised to see so little interest in the topic on Slashdot.

I actually see it as a kind of bootstrap problem, though I prefer to look at the broader perspective. Yes, we taught AI the game of Go and it then taught us how to play the game better. But we also created more complicated societies, and those societies then make us into more complicated people, who create more complicated societies, until... From that perspective the problem is that there's a point where people can't keep up, not

      • by nojayuk ( 567177 ) on Tuesday March 14, 2023 @01:13PM (#63370271)

Analysis of many of the games AlphaGo and the later version AlphaZero played against human opponents revealed that it didn't play like its opponents: the AI program would get one or two points ahead by the mid-game and then work to maintain that lead to the end, whereas human players would try to maximise their lead all the way through the match. This might be because the AI has learned how to play Go so as to consistently maintain that slight lead, while a human player is always worried that they could make a mistake or overlook something that could cost them points in the endgame, so they unconsciously try to keep a cushion just in case.

        • by shanen ( 462549 )

Not sure of your point. However, they could change the reward function to favor larger wins over small ones, and the program would search harder for riskier strategies...
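For illustration, a minimal Python sketch of that idea; the scale factor and sample margins are made up, and this is not AlphaGo's actual training setup, which optimises a plain win/loss signal.

```python
# Hypothetical reward shaping: a binary win/loss reward treats a 0.5-point win
# the same as a 20-point win, while a margin-scaled reward nudges search toward
# larger (and riskier) wins. Illustrative values only.

def binary_reward(margin: float) -> float:
    """+1 for any win, -1 for any loss, regardless of the final margin."""
    return 1.0 if margin > 0 else -1.0

def margin_scaled_reward(margin: float, scale: float = 0.02) -> float:
    """Keeps the win/loss signal but adds a small bonus proportional to margin."""
    base = 1.0 if margin > 0 else -1.0
    return base + scale * margin

for m in (0.5, 5.5, 20.5, -3.5):
    print(f"margin {m:+5.1f}: binary {binary_reward(m):+.1f}, scaled {margin_scaled_reward(m):+.2f}")
```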

  • by Mononymous ( 6156676 ) on Tuesday March 14, 2023 @07:38AM (#63369319)

    players facing off by moving black and white pieces called stones

    That's not how it's played.

    • players facing off by moving black and white pieces called stones

      That's not how it's played.

      Precisely why the AI will never see it coming!

    • This is exactly how it's played. You move pieces from your pile onto the game board.

  • Meaning people attempt to copy techniques used by the AI that people had never thought about?
    • by tlhIngan ( 30335 )

      Go, like Chess and other such games, is solvable. The problem is the solution space is huge and possibly larger than the universe, but there is a solution to the game. (A solution is like solving Tic-Tac-Toe - there's a way to play the game ideally).

Our current computers are not fast enough, nor do they have enough memory, to hold the complete solution to either game, so in lieu of that we often use partial solutions instead: either by evaluating a subset of all possible moves, or by learning algorithms that

      • Go, like Chess and other such games, is solvable. The problem is the solution space is huge and possibly larger than the universe, but there is a solution to the game. (A solution is like solving Tic-Tac-Toe - there's a way to play the game ideally).

        However, like tic-tac-toe, the eventual solution may be a draw. So, solvable means amenable to analysis, whatever the result might be, even if that result is that the game with perfect play on both sides is unwinnable, ... as WOPR already figured out for at least one "game."
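A quick sketch of what "solving" means here, using the tic-tac-toe example from the comments above: exhaustive minimax over the full game tree returns 0 for the empty board, i.e. perfect play on both sides is a draw.

```python
# Exhaustive minimax over tic-tac-toe (small enough to solve completely).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """Game value from X's perspective: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # prints 0: tic-tac-toe is a draw with perfect play
```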

        • Re:Inspire? (Score:4, Informative)

          by nojayuk ( 567177 ) on Tuesday March 14, 2023 @12:00PM (#63370061)

Go is always win-lose, assuming the usual rules such as komi, a number of points given to white, which plays second. Tournament komi is typically 7.5, so there's always that half-point difference between the two players, which prevents a draw.
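A quick arithmetic check of that half-point as a Python one-off; the 7.5 komi and the sample scores are just for illustration, since komi varies by ruleset.

```python
def result(black_score: float, white_score: float, komi: float = 7.5) -> str:
    # White's total includes komi; with whole-number board scores and a .5 komi,
    # the difference can never be zero, so there is no draw.
    diff = black_score - (white_score + komi)
    return f"Black wins by {diff}" if diff > 0 else f"White wins by {-diff}"

print(result(85, 78))  # White wins by 0.5
print(result(90, 78))  # Black wins by 4.5
```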

      • by Ichijo ( 607641 )

        So yes, humans are going to copy the AI

        Maybe that's what Hans Niemann did, and the reason Magnus Carlsen can't figure out the strategy [slashdot.org] is because he doesn't have the same "teacher".

