DeepMind Produces a General-Purpose Game-Playing System, Capable of Mastering Games Like Chess and Go Without Human Help (ieee.org)

DeepMind has created a system that can quickly master any game in the class that includes chess, Go, and Shogi, and do so without human guidance. "The system, called AlphaZero, began its life last year by beating a DeepMind system that had been specialized just for Go," reports IEEE Spectrum. "That earlier system had itself made history by beating one of the world's best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself -- in just 3 days." From the report: The research, published today in the journal Science, was performed by a team led by DeepMind's David Silver. The paper was accompanied by a commentary by Murray Campbell, an AI researcher at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. AlphaZero can crack any game that provides all the information that's relevant to decision-making; the new generation of games to which Campbell alludes do not. Poker furnishes a good example of such games of "imperfect" information: Players can hold their cards close to their chests. Other examples include many multiplayer games, such as StarCraft II, Dota, and Minecraft. But they may not pose a worthy challenge for long.

DeepMind developed the self-training method, called deep reinforcement learning, specifically to attack Go. Today's announcement that they've generalized it to other games means they were able to find tricks to preserve its playing strength after giving up certain advantages peculiar to playing Go. The biggest such advantage was the symmetry of the Go board, which allowed the specialized machine to calculate more possibilities by treating many of them as mirror images. The researchers have so far unleashed their creation only on Go, chess and Shogi, a Japanese form of chess. Go and Shogi are astronomically complex, and that's why both games long resisted the "brute-force" algorithms that the IBM team used against Kasparov two decades ago.
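A minimal sketch, in Python, of the self-play training idea the summary describes: the program generates its own games and (in a real system) updates its policy from the results. The toy game, the random policy, and the commented-out update step are illustrative assumptions; this is not DeepMind's code or algorithm.

    import random

    class ToyGame:
        """Tic-tac-toe-sized stand-in for chess/Go/shogi: the full state is visible."""
        def __init__(self):
            self.board = [0] * 9          # 0 = empty, +1 / -1 = the two players
            self.player = 1

        def legal_moves(self):
            return [i for i, v in enumerate(self.board) if v == 0]

        def play(self, move):
            self.board[move] = self.player
            self.player = -self.player

        def winner(self):
            lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
            for a, b, c in lines:
                s = self.board[a] + self.board[b] + self.board[c]
                if abs(s) == 3:
                    return s // 3         # +1 or -1
            return 0                      # draw or game still in progress

        def over(self):
            return self.winner() != 0 or not self.legal_moves()

    def random_policy(game):
        # Placeholder policy; a real system would use a network-guided search here.
        return random.choice(game.legal_moves())

    def self_play_game(policy):
        """Play one game against itself, recording (state, move, player) triples."""
        game, history = ToyGame(), []
        while not game.over():
            move = policy(game)
            history.append((tuple(game.board), move, game.player))
            game.play(move)
        return history, game.winner()

    for _ in range(5):                    # skeleton of the training loop
        history, result = self_play_game(random_policy)
        # update(network, history, result)  # gradient step would go here
        print("game finished, winner:", result)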

  • Combat Assault, Logistics, Operations, and Planning could be within its capabilities, with some fine-tuning.
    Most military systems are more complex and costly because of the human element and the protection of life. Remove the humans, and maybe one Abrams tank can be outfought by 100 trucks with automated guns and launchers? Just wondering.

    AI-wise: if it can be done, it will be done, by someone!

    Just my 2 cents ;)
  • Can't wait to check it out; I'm sure it'll be expensive, though.
  • Let me ask (Score:2, Informative)

    So let's ask a question: if DeepMind is useful, WHY ARE THEY USING IT TO PLAY GO AND CHESS? Every "AI" system has this amazing power: the ability to play games. Not every game, of course: Chess and Go. So friggin' stupid. Yeah, we get it, computers are good at playing Chess and Go. Amazing stuff.
    • Read John Nash. Many real-world systems can be modeled as games.
    • Re:Let me ask (Score:4, Interesting)

      by Solandri ( 704621 ) on Friday December 07, 2018 @04:41AM (#57764614)
      Chess and Go are deterministic [wikipedia.org]. You can perfectly know the entire state of the game universe. And for a given system state, any one action always results in the exact same outcome, every time.

      Almost no systems in the real world are deterministic. That's why stochastic approaches to AI (which develop a statistical model from repeated observations - e.g. fuzzy logic, machine learning) have been much more successful at real-world tasks.
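
      A minimal sketch, in Python, of the distinction drawn above: in a deterministic, perfect-information game the same (state, action) pair always yields the same next state, while a stochastic environment does not. The toy step functions and the "slip" probability are illustrative assumptions, not anyone's real model.

          import random

          def deterministic_step(position, move):
              # Chess/Go-style: the next state is a pure function of (state, action).
              return position + (move,)

          def stochastic_step(position, move, slip=0.2):
              # Real-world-style: with probability `slip` the intended action fails.
              actual = move if random.random() > slip else "noop"
              return position + (actual,)

          start = ()
          print(deterministic_step(start, "e4") == deterministic_step(start, "e4"))  # always True
          print(stochastic_step(start, "e4") == stochastic_step(start, "e4"))        # sometimes False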
  • ... stock market. Especially the futures and options market.
    • No. It was built to play Chess and Go. Have you noticed that all these "AI systems" play Chess and Go? Very odd.
      • by epine ( 68316 ) on Friday December 07, 2018 @02:35AM (#57764446)

        Have you noticed that all these "AI systems" play Chess and Go? Very odd.

        Thank you for spamming the entire thread with your imperceptive and unenlightened comments.

        There's nothing odd about the choice of chess and Go whatsoever. Humanity has thousands of years of experience with these games. We know they aren't trivial, and we know they're not so complex that we can't understand progress when we see it.

        Additionally, the large literature of expert games was a useful handrail on the way from hand-crafted to fully autonomous systems.

        Quite apart from the neural network portion, Monte Carlo tree search (MCTS) is a fundamental algorithm in computer science, and this work demonstrates that MCTS is ready for prime time, having defeated from scratch exceptionally strong chess programs that have been painstakingly hand-tuned over five decades and hundreds of man-years. MCTS sits within the large and growing theory of multi-armed bandit problems, which are fundamental to many important industries (drug discovery, to name just one).

        Multi-armed bandit [wikipedia.org]
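
        A minimal sketch, in Python, of UCB1, the bandit rule used in the selection step of the MCTS mentioned above. The arm payout probabilities are made up for illustration; the point is only the explore/exploit bonus term.

            import math
            import random

            true_payouts = [0.2, 0.5, 0.7]        # assumed Bernoulli arms, unknown to the algorithm
            pulls = [0] * len(true_payouts)
            rewards = [0.0] * len(true_payouts)

            def ucb1(t):
                # Pull every arm once first, then maximize mean reward + exploration bonus.
                for arm in range(len(pulls)):
                    if pulls[arm] == 0:
                        return arm
                return max(range(len(pulls)),
                           key=lambda a: rewards[a] / pulls[a]
                                         + math.sqrt(2 * math.log(t) / pulls[a]))

            for t in range(1, 1001):
                arm = ucb1(t)
                reward = 1.0 if random.random() < true_payouts[arm] else 0.0
                pulls[arm] += 1
                rewards[arm] += reward

            print(pulls)   # the best arm (index 2) should get most of the pulls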

        Recurrent self-learning is another important technique in computer science and machine learning.

        And finally, the neural network portion is far closer to the human brain than the vast majority of algorithms used in computing. Without any human instruction, these neural networks are learning to detect patterns of almost arbitrary complexity (so long as they seem to help in winning games).

        I was reading Galileo in the original last night (in English translation, but his own prose). He knew about Kepler but wasn't sold on elliptical motion. Then he carefully observed four previously unknown moons of Jupiter and correctly determined that they couldn't all be in circular orbits. The word he used (in English translation) was "oval". But he still didn't choose to accept Kepler's work (apparently, he felt that Kepler's ellipse and his oval were not the same thing).

        Galileo was a giant in the history of science, but still a little wooden-headed on a few points nonetheless.

        I think Odd Buster Spamalot is nuts to criticise Galileo for not being Newton. Only because Galileo sorted out enough of the fundamentals in the first place (about the proper concerns and methods of science) was it even possible for Newton to become Newton (and Newton knew it himself; he's famous for having said so).

        The computers we now apply to neural networks are roughly a billion times more powerful than the computers of the 1960s (thirty doublings over 45 years gets you there at the traditional pace of Moore's law).
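
        A quick check of that arithmetic, assuming the traditional doubling every 18 months:

            doublings = 45 / 1.5      # 45 years at 1.5 years per doubling
            print(doublings)          # 30.0
            print(2 ** 30)            # 1073741824 -- roughly one billion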

        You could complain that neural networks are only good at this one thing, but actually no: they are now state of the art in image classification (IC), speaker-independent large-vocabulary continuous speech recognition (CSR), and machine translation (MT) as well. All of these endeavours also date back to the 1960s and have thousands of man-years of deep research behind them. Then DNNs came along, finally on sufficiently powerful computers and with a few small tweaks to the algorithms, and simply cleaned up the state of the art with nothing more than small teams of graduate students pushing the work along as quick projects within their degree programs (the subsequent move to industrial scale was immediate and brisk). Traditional MT research programs had hundreds of professional researchers slaving away for decades, at least, and never accomplished as much.

        We're all of ten years away from the day when no competent doctor reads an x-ray (or other radiological image) without computer assistance (definitely including a powerful NN component).

        Watson was a bit idiotic right from the beginning. The problem was Jeopardy itself, which was always rather facile in the nature of the questions asked, and fundamentally more a test of ridiculously wide and shallow recall than of reasoning.

        • Have you noticed that all these "AI systems" play Chess and Go? Very odd.

          Thank you for spamming the entire thread with your imperceptive and unenlightened comments.

          There's nothing odd about the choice of chess and Go whatsoever. Humanity has thousands of years of experience with these games. We know they aren't trivial, and we know they're not so complex that we can't understand progress when we see it.

          Impressively long comment, but it doesn't change the fact that what computers are good at is games - things with well-defined, complete sets of rules.

          Yes, you can gamify a lot of things (e.g. factories, to some extent), and very profitably, but you can't gamify all of existence. It still isn't general AI, which is the point.

  • Yeah, they built a huge database of moves and then they read it back while playing. That's exactly how humans play these games, isn't it?

    For bonus points, they embody that database in a format that they can't interrogate in any useful way outside of actually playing the games.

  • But can it play Tic Tac Toe?

  • Anyone with biology knowledge care to speculate on the role of quantum-entangled processes in human self-awareness and intelligence? Would this imply that human self-awareness and intelligence require quantum computing elements?
