AI Google Games

Alpha Go Takes the Match, 3-0 (i-programmer.info) 117

mikejuk writes: Google's AlphaGo has won the DeepMind Challenge Match by taking the third game in a row of five against 18-time world champion Lee Se-dol. AlphaGo is now ranked the number-three Go player in the world, and this is an event that will be remembered for a long time. Most AI experts thought such a win would take decades to achieve, but now we know we have been on the right track since the 1980s or earlier. AlphaGo uses nothing dramatically new: it learned to play Go using a deep neural network and reinforcement learning, both developments of classical AI techniques. We now know that we don't need any big new breakthroughs to get to true AI. The results of the final two games will be interesting, but as far as AI is concerned the match really is all over.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward

    Is this AI software written in Rust?

  • by rockmuelle ( 575982 ) on Saturday March 12, 2016 @10:36AM (#51684429)

    ... From winning a game with simple rules to saying we don't need any more breakthroughs to get to true AI.

    Every time a computer beats a human at a "smart" game, we hear the same thing. And every time, when all is said and done, all we have is a program that can play a game well (and maybe a really aggressive marketing campaign to sell consulting services, Watson).

    Look, we barely understand what intelligence is, let alone what it means to have a computer replicate it. We can have computers perform tasks that we ascribe to smart people and call it intelligence, but that's about it right now.

    And, with deep learning and neural nets, we haven't gained any real insights into intelligence. We just have a black box mathematical function that can play a game.

    • I often describe Artificial Intelligence as the discipline of using computation means to duplicate tasks that otherwise required human intelligence.

      • Yes. This achievement will be met in the cynical way we meet things in this era, but this program's neural net depended upon adapting and learning in order to best Lee Se-dol.

        In some ways, this is unprecedented.

    • Life could be a really complex search algorithm we just can't begin to comprehend. Life could be a non-linear approximation of 42...

      Not much has changed in AI; the fundamentals of these new systems are still the same as before. The difference is we have more computing power (and CS work-hours) to put into problems we thought were more amazingly complex than they were. It's likely that Go is actually as difficult as we thought it was, so our progress on this search problem is not a result of a huge leap but

      • Re: (Score:2, Offtopic)

        100% correct. We are no closer to AI than we ever were. There has been little progress on that front in the last 40 years. Given the slowing exponential growth in the computation speed of digital computers, we may not achieve it with digital computers using brute-force methods.
        • Actually, the progress made in the last decade is nothing short of amazing, including this AlphaGo design. And the computation speed of your brain is orders of magnitude slower, so apparently that's not a barrier. As long as it doesn't need to be synchronised, it's fairly easy to throw more hardware at a problem.
    • by javilon ( 99157 )

      Yes, let's define intelligence as whatever computers cannot yet do... This win has narrowed the definition quite a lot.

      The relevant part of this win is that a machine using pattern matching, generalization and reinforcement learning has beaten the best human at the only game left where humans bested machines.

      I guess this is pretty relevant. It is not general AI, but it is a big step in that direction.

      • The relevant part of this win is that a machine using pattern matching, generalization and reinforcement learning has beaten the best human at the only game left where humans bested machines.

        Let's not forget Calvinball [wikia.com]!

        • by ezdiy ( 2717051 )
          Better example is a game of Nomic [wikipedia.org] - there are internet games spanning decades. It's a game of "find loopholes in democracy".

          If we get a self-reinforcing machine on that one, it would be genuinely terrifying.
      • No it isn't. Who ever said that winning "Go" games was a step towards AI?
      • Re: (Score:3, Insightful)

        by Anonymous Coward

        I think you misunderstood what he meant. The thing is, when chess became winnable by computers some people said the exact same thing (while others quickly redefined intelligence to exclude chess). Meanwhile, in the area of computer vision (and possibly others, but I'm only a computer vision expert, so I couldn't comment) we see that there are still big qualitative differences between how current neural nets perform and how humans perform. I personally think neural networks hold great promise, but something

    • by Anonymous Coward
      Yeah, I thought that was one of the most bullshit statements I've ever read on Slashdot. And that's saying something.
      Someone on reddit suggested that a 'real' AI would have tried to bribe the player to throw the match. Now that would have been impressive!
    • by Kjella ( 173770 ) on Saturday March 12, 2016 @12:47PM (#51684949) Homepage

      Well, yes and no. Back when Deep Blue beat Kasparov in 1997, it was programmed with a huge amount of chess logic written by people. Using a computer amplified the power of those algorithms; it had move databases, but it wasn't really self-modifying. From what I understand, you could step through the algorithm, and even though you couldn't do it at the speed of the computer, you could follow it. That approach pretty much failed for Go; it's very hard for a human to quantify exactly what constitutes a good or bad move.

      Neural networks pretty much do away with that in any form humans can follow. That is to say, if you had to explain how AlphaGo plays, you'd get a ton of weights that don't really make much sense to anybody. It means you don't need Go expertise in the programming, because the programmers couldn't find where to tweak a weakness even if they saw one. All you can really do is play it, and it'll learn and adjust from its losses. From what I've gathered, it's hard to train for excellence: if you train with lots of mediocre players making mediocre moves, it's easy to learn decent moves, but that will fail against a master. And if you let it self-play, it can easily learn nonsense that only works against itself.

      Apparently they've solved those problems well and have now created a machine that plays at a beyond-human level. If they can extend this approach to practically unlimited choices, like say an RTS where you can choose what to build, where to send your units, when to attack, when to defend, what resources to collect, etc., it could be absolutely massive. Imagine if you were in, say, city planning, and you had tons of data on traffic patterns, congestion, and how traffic reflows when you open and close roads, and you could put an AI on the job to say where and how you get the most value for money. I'm not sure if it's strong AI, but it certainly covers places where we use HI today.

    • all we have is a program that can play a game well

      We have a system that can teach itself to play many games [youtube.com], with no specific programming on how to beat those games.

      That is a massive difference from say, Deep Blue vs Kasparov. Deep Blue was specifically programmed to play Chess. AlphaGo was essentially fed a bunch of Go games and figured out how to play by itself. Surely you see the qualitative difference in that.

      We just have a black box mathematical function that can play a game.

      Is there a reason w
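The "teach itself" idea being debated in this thread can be illustrated with a toy. The snippet below is a minimal tabular Q-learning sketch, nothing like AlphaGo's deep networks and self-play: the five-cell corridor task, the reward scheme, and all the constants are invented for illustration. The agent is told nothing about the task except a reward signal, and learns from that alone to walk toward the goal.

```python
import random

# Minimal tabular Q-learning sketch: the agent learns, from reward alone,
# to walk right along a 5-cell corridor to the goal cell. (Toy illustration
# only; AlphaGo combined deep networks, self-play, and tree search.)

N_STATES = 5          # cells 0..4, goal at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 only on reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                      # training episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:     # explore
            action = random.choice(ACTIONS)
        else:                             # exploit current estimate
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should step right from every non-goal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

The qualitative difference the poster points at is visible even here: nothing in the code encodes "go right"; the preference emerges from the reward signal, just at a vastly smaller scale than learning Go from game records and self-play.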

  • by Anonymous Coward on Saturday March 12, 2016 @10:42AM (#51684457)

    Although in a sense it is "nothing new" (neural networks and Monte Carlo statistical techniques), the combination is one of the most convincing demonstrations of short-circuiting huge branching factors to arrive at what a human player would call "intuition".

    Chess has a branching factor of about 35. This is small enough that if you prune the most dismal lines, you can brute-force the rest to many ply and arrive at a very good result, but this technique does not generalize well.

    Go has a branching factor of about 250. This cannot be brute-forced, even with aggressive pruning. The result of an NN evaluation function plus Monte Carlo has been astonishing: computer Go was not predicted to reach this strength for decades yet, but here we are.

    The implications of this combination of techniques to other kinds of problems requiring "intuition" will be interesting to watch.

    • by javilon ( 99157 )

      Exactly. The fact that we don't understand how our brains work means we tend to attribute supernatural powers to them. But what if the only things they do are statistical inference, Monte Carlo, pattern matching... That would mean that as we approach the processing power of a brain in your typical $1000 hardware, we would be able to implement artificial intelligence without developing new ideas or techniques. This is unlikely, but the recent success of neural networks is the result of having more processing p

    • by wahook ( 4499951 )
      Well... that may be the case when you're shooting in the dark, but the number of eligible moves (at the top of the tree) at any given moment is not that large, probably fewer than 10. A pro Go player filters out eligible moves based on pattern recognition and then analyzes the consequences of each. AlphaGo does a similar filtering based on pattern recognition (trained by neural-network learning) and then runs its calculating methods: brute force, Monte Carlo...
    • Go has a branching factor of about 250. This cannot be brute forced, even with aggressive pruning.

      Go also has a difficult evaluation function. In chess, you can get reasonable results with little more than counting material, whereas in Go, a stone placed on a certain point in a mostly empty corner may be much stronger than the same stone placed one intersection further, and this won't become obvious until 100 moves later in the game.

      • by HiThere ( 15173 )

        Chess is nowhere near that simple. Counting material is a very poor evaluation function unless you also somehow account for position. I've seen games where the winning move was to sacrifice a queen for a pawn. But counting material is a QUICK evaluation function, so you can cut off most losing lines by gross material disparity n moves in the future. This will occasionally cause you to prune a winning line, but not often. However, as you get closer to the current board position, material difference becomes an

        • I've seen games where the winning move was to sacrifice a queen for a pawn

          Sure, but such sacrifices don't usually take very long. A queen sacrifice is usually followed by a series of forcing, or almost forcing, moves until either the queen is won back with interest, or until checkmate follows. Material counting by itself is a poor evaluation. You have to combine it with a reasonably deep search, which is possible for a chess program.
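The idea running through this thread, that random playouts can stand in for a hand-written evaluation function when the branching factor defeats brute force, can be sketched on a toy game. The snippet below uses pure Monte Carlo playouts on tic-tac-toe as a stand-in (Go's 250-way branching is far too large for an example); AlphaGo additionally guides its playouts with neural-network policy and value estimates, which this sketch omits entirely.

```python
import random

# Pure Monte Carlo move selection: score each legal move by the win rate of
# uniformly random playouts from the resulting position. No evaluation
# function, no game knowledge beyond the rules.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w or not legal_moves(board):
            return w
        board[random.choice(legal_moves(board))] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, playouts=200):
    """Pick the move whose random playouts win most often for `player`."""
    other = 'O' if player == 'X' else 'X'
    scores = {}
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, other) == player
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

random.seed(1)
# X to move, with two in a row on the top line; playouts find the winning square.
board = ['X', 'X', None, 'O', 'O', None, None, None, None]
print(best_move(board, 'X'))  # -> 2
```

Even this crude version exhibits the "intuition-like" behaviour the comment describes: nothing tells it that square 2 completes a line, yet the playout statistics single it out. Real Go engines replace the uniform playouts with learned policies precisely because uniform sampling stops working as the game tree grows.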

  • True AI? (Score:5, Interesting)

    by AchilleTalon ( 540925 ) on Saturday March 12, 2016 @11:04AM (#51684553) Homepage
    From the summary:

    We know now that we don't need any big new breakthroughs to get to true AI.

    Grossly exaggerated claim. The following article is worth reading on this subject, written by none other than two authorities in the field: one worked on backgammon programs in the 90s, and the other on the IBM Deep Blue program that won against world chess champion Garry Kasparov in 1997. http://www.ibm.com/blogs/think... [ibm.com] In particular:

    "However, research in such “clean” game domains didn’t really address most real-life tasks that have a “messy” nature. By “messy,” we mean that, unlike board games, it may be infeasible to write down an exact specification of what happens when actions are taken, or indeed what exactly is the objective of the task. Real-world tasks typically pose additional challenges, such as ambiguous, hidden or missing data, and “non-stationarity,” meaning that the task can change unexpectedly over time. Moreover, they generally require human-level cognitive faculties, such as fluency in natural languages, common-sense reasoning and knowledge understanding, and developing a theory of the motives and thought processes of other humans."

  • This isn't AI at all. Just like chess playing computers. This is just algorithms. Algorithms aren't "intelligence".
    • by Anonymous Coward

      By your definition it would be really hard to do AI :D

      Program an AI! But without algorithms! No... stupid programmer, not even using self learning algorithms!

      Not an easy task for a programmer to create a soul, even if such a thing exists.

      • Yes. Amazingly, it IS really hard to do AI. In fact, it is so hard we aren't even close and may never achieve it.
        • Yes. Amazingly, it IS really hard to do AI. In fact, it is so hard we aren't even close and may never achieve it.

          Ah, so it's not intelligence at all unless it's human intelligence? It can never be sentient unless it engages in conversations exactly like we do, gives its own purpose just like we do, and only does tasks which humans can do? That doesn't sound very intelligent to me at all. "Oh sure, it's self aware and everything - but only if it does exactly this, behaves exactly like we do, and is completely like humans in every way shape and form." You seem to not like algorithms, even completely self-modifying, bec

    • Re:Not AI (Score:5, Interesting)

      by presidenteloco ( 659168 ) on Saturday March 12, 2016 @12:33PM (#51684893)

      There's a well-known phenomenon where every time AI research produces a successful result, someone comes along and says, "That's not true AI; it's just a computer program that has to be told what to do."
      (This is the "no true Scotsman" argument.)

      So let's see the list of such "non-AI" technologies:

      - Natural-Language translation (getting pretty usable now)
      - Speech recognition combined with ability to answer fairly arbitrary questions quite well on average.
      (talking to Google via my Android phone)
      - Self-driving cars (getting pretty close - will be better drivers than people on average pretty soon)
      - Chess
      - Jeopardy
      - Go
      - Detection of suspicious speech and patterns of communication (no doubt used by NSA on most Internet and phone traffic)
      - Recognition of particular writer from all writers on Internet by analysis of their writing style
      - Person identification by face picture recognition
      - Object type and location type recognition from pictures
      - Walking, box-stacking robot "Atlas 2"

      Just algorithms.

      Does it actually matter what you personally choose to call this kind of technology? It is what it is, and it's advancing quickly.

      "It's not true AI" sounds like the desperate retreat cry of a person in a very defensive stance, afraid of losing a sense of human uniqueness.

      • Anyone who said that progress in AI would be winning "Go" and "Jeopardy" games doesn't know AI. And self-driving cars, walking robots are not AI either. Neither is Siri. It is very telling that you threw that stuff in. Totally proves my point.
        • Re: (Score:3, Interesting)

          by Anonymous Coward

          So far, every post I have read that makes the same claim as yours lacks a critical piece: a clear description of what would qualify as AI.

          Often, when I ask that question, I get a long, rambling, disorganized list of random things humans do, and no indication that making a computer do them would qualify as true AI. That is why I keep emphasizing the word "clear." Make it clear or you are just being religious.

          So, exactly where are those goal-posts?

          • by west ( 39918 )

            The goal posts are simple. "True AI" = Consciousness.

            It is the difference between a robot that can do a single activity better than any human, and a robot that can perform *all* the activities of a human (which, of course, means fitting into the same physical space as a human).

            Is this a ridiculously high bar? Absolutely. But that's the snake-oil that's being sold in the popular press, so I figure it's fair game.

            • The goal posts are simple. "True AI" = Consciousness.

              It's not a simple goal post if you can't provide a practical definition of consciousness.

              and a robot that can perform *all* the activities of human

              There are many people that can't perform *all* the activities of a human. Stephen Hawking can't even catch a ball.

              • by west ( 39918 )

                The goal posts are simple. "True AI" = Consciousness.

                It's not a simple goal post if you can't provide a practical definition of consciousness.

                and a robot that can perform *all* the activities of human

                There are many people that can't perform *all* the activities of a human. Stephen Hawking can't even catch a ball.

                First, let's get this straight: "True AI" is a marketing term, bandied about by the media and people looking to get funding from the unsuspecting. It promises something essentially indistinguishable from a human being.

                You want a goal post? Here's one. You put 3 AIs and 3 humans with no intellectual, language, or cultural barriers in remote communication with each other constantly for 3 months, in both a personal and a professional manner. When the humans are unable to determine

                • So you want an extended "Turing Test". Fair enough. I can agree with that. Of course, it's obvious that we're not going to reach that goal out of a vacuum. It will require many small steps. Imagine telling the inventor of the transistor: "yeah, that's cute, but that's not what I meant when I described an i7". I think the AlphaGo project showed a remarkable step in the right direction. We're still far away from the goal, but we're a clear step closer than before.

                  Come on, we both know what I meant.

                  Unfortunately, in this debate, there are

                • by HiThere ( 15173 )

                  No. I'm sorry, but I *don't* know what you mean. You didn't define consciousness. By my definition, the Atlas robot showed consciousness.

                  What the current robots all lack is a deep motivational structure. Also the computers they run on are underpowered compared to human brains for the kind of task they are performing. This may be addressed by the "neural computers" that people keep talking about building.

                  P.S.: Consciousness is the ability to assess your own state and compare it with the external physical

                  • by west ( 39918 )

                    Again, I distinguish between the field of "AI", and what gets bandied around as "True AI", which is essentially something indistinguishable from a human being.

                    It's the difference between chess (~40 possible moves each turn), Go (hundreds of possible moves) and reality (millions of possible outcomes).

                    We're a million miles away from that, and more to the point, it's ludicrous to spend resources trying to get there. It's like trying to make an internal combustion engine do ballet instead of concentrating upon

            • Sorry, but no human can perform all the activities a human can. Can you do quantum physics, compose symphonies, play Go, manage a hedge fund, and write software to do all of those on your own? Hell, most humans still can't understand algebra, some of whom even live in the first world and think they know enough about economics to vote. And some of those humans can't fit into the same physical space as a human.

              So yes it is a ridiculously high bar. Furthermore, no human can do any of those things without be
              • by west ( 39918 )

                We are trying to create AIs, not Is. We have not created true consciousness, that is true. But leaving aside questions of what consciousness is and whether it even exists, what does consciousness have to do with intelligence?

                I think we're mostly in agreement. I have a lot of respect for AI.

                However, the whole "True AI" business is hogwash because it promises (mostly to the uninformed) something that approximates all aspects of human intelligence (or god help me, the singularity where we upload ourselves)

          • If he told you where the goal-posts were, you wouldn't be able to know how fast they're moving.

      • Re: (Score:2, Interesting)

        by ThosLives ( 686517 )

        I don't count any of those things as AI, although they are components of AI since they are all pieces of (or combinations of) observing and interacting with the environment.

        To me, "true AI" is something that can decide to do something other than that for which it was constructed. Can AlphaGo do anything other than play Go? If you tell it to play Go, can it decide, "No thanks, I'd rather cure cancer, it's a more rewarding problem"?

        While AlphaGo and the like are very fantastic achievements, I don't think th

        • To me, "true AI" is something that can decide to do something other than that for which it was constructed

          Many people can't even decide to stop eating.

        • by Mryll ( 48745 )

          Something that could choose its own problem domains and work on domains that help preserve its physical existence would be interesting.

      • by djinn6 ( 1868030 )

        I think you're right as far as the cynics are concerned. People are worried that AI will somehow take over humanity. However, I think the examples you listed do not show (at least what I consider) true AI. But rather than argue whether a task falls into "true AI", let's see what tasks existing AI cannot do well, but we expect a reasonably intelligent human to do:

        • - It cannot perform new but similar tasks (AI that plays Connect 4 cannot play tic-tac-toe or gomoku)
        • - It does not understand sarcasm ("That was
        • by HiThere ( 15173 )

          Rudimentary forms of your first example have been exhibited this year. (Computers that learned to play various computer games by first watching someone else play, and then playing themselves until they succeeded.)

          The second is one that people fail at all the time. (Your example was pretty clear, but there are still people who would miss it. And Poe's law.)

          The third example is even worse. It depends on specialist knowledge. At some shops the mechanics won't change your oil faster, they'll just keep your

          • by djinn6 ( 1868030 )
            You're probably thinking of that one guy who used machine learning on a bunch of NES games. Frankly, there's not much intelligence in that. Games like Super Mario can be won by a carefully scripted series of moves executed at exactly the right time. You don't even need to be looking at the screen. In other words, it's a search problem that can be brute-forced (or solved with something similar to A* search).

            People do fail at detecting sarcasm. But that's not the point. The point here is that people understa
            • by HiThere ( 15173 )

              While I can't argue with your exact statements here, to me it sounds more like collected lifetime experience than intelligence.

              OTOH, that does bring up another point. People have deep analogy detectors that work in general cases. I'm not certain that these AI programs do. And unlike the lack of deep motivational structures, that DOES seem to me to be a part of intelligence.
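The aside above about deterministic games reducing to a search problem can be made concrete: in a level with fixed dynamics, "a carefully scripted series of moves" is just a path found by search. The toy sketch below runs A* over a small invented grid level ('#' = wall, 'S' = start, 'G' = goal); the level, the move encoding, and the function names are all made up for illustration, and a real platformer would of course have a far richer state space than a grid cell.

```python
from heapq import heappush, heappop

# A* over a toy grid level: the returned "script" is the action sequence
# (U/D/L/R) that reaches the goal in the fewest moves.

LEVEL = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

MOVES = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}

def find(ch):
    """Locate a character in the level grid."""
    for r, row in enumerate(LEVEL):
        if ch in row:
            return (r, row.index(ch))

def a_star():
    start, goal = find('S'), find('G')
    frontier = [(0, start, "")]          # (cost + heuristic, cell, script so far)
    seen = set()
    while frontier:
        _, (r, c), script = heappop(frontier)
        if (r, c) == goal:
            return script
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for move, (dr, dc) in MOVES.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(LEVEL) and 0 <= nc < len(LEVEL[0]) and LEVEL[nr][nc] != '#':
                h = abs(goal[0] - nr) + abs(goal[1] - nc)   # Manhattan heuristic
                heappush(frontier, (len(script) + 1 + h, (nr, nc), script + move))
    return None  # goal unreachable

script = a_star()
print(script)  # -> 'RRDDRRD' (the unique 7-move shortest script)
```

Replaying that string blindly "wins" the level every time, which is djinn6's point: no model of the opponent, no generalization, just search over a fixed state space. The contrast with Go is exactly the branching factor and the adversary.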

  • by wonkey_monkey ( 2592601 ) on Saturday March 12, 2016 @12:26PM (#51684871) Homepage

    We know now that we don't need any big new breakthroughs to get to true AI.

    Err, no. Just... no.

  • We may or may not ever have true AI, but a sufficiently advanced expert system would be able to fulfill most of the things people imagine they'd 'need' from an actual AI. (And I mean a very, very advanced expert system, probably a couple of decades away from where we are now. Throw a few hundred million dollars at the problem and I bet we'd make some serious progress towards it.)

    But as for a true AI, I suspect it will happen eventually...the trick will be in recognizing and/or determining that it is truly "

    • Simple Turing tests may not suffice. Even though some of the current chatbot-type systems can converse passably for a little while, none can hold a genuinely sensible discussion on any abstract topic without stumbling and giving itself away rather quickly. I bet most people here could suss one out in fairly short order.

      In other words: Turing tests (note I left out 'simple') do suffice.

      • Turing-type tests are sufficient to say "this system is definitely not intelligent" (like all those silly chat-bots),
        but can never determine that an AI is intelligent.

        After all, the system may always fail the next test.
  • by JustAnotherOldGuy ( 4145623 ) on Saturday March 12, 2016 @01:04PM (#51685051) Journal

    "We know now that we don't need any big new breakthroughs to get to true AI."

    This is so wrong that it's hard to know where to start.

  • by koan ( 80826 )

    "but now we know that we have been on the right track"

    Devaluing the human race is the right track? I think many of you will come to agree with me when you're faced with an AI taking your job.

  • I was watching a 15-minute summary of match #1 [youtube.com] yesterday by "Michael Redmond, 9-dan professional, and Chris Garlock"... they were placing the stones on the grid and talking about where the position is weak and strong, etc. What is not clear to me: did they memorize the whole game? I did not see them use any notes or a record of the game...
    • For a professional player, it's fairly easy to remember an entire game. For them, it's a story with many familiar patterns and memorable surprises.
  • A minah bird or a parrot may learn to repeat hundreds of human speech patterns which it has learned by listening.

    Does the bird understand any of the individual words? Does it understand the meaning of the words as group? Can it rearrange the words into new coherent speech? Is the bird intelligent?

    Once researchers agree on a definition of what AI is, only then can we decide whether that goal has been reached by a particular project. Until then, it's just turtles all the way down.

    • A minah bird or a parrot may learn to repeat hundreds of human speech patterns which it has learned by listening.

      That's not what this computer is doing. It's coming up with completely new ideas. Of course, the ideas are bounded by the game space, but they are creative new ideas nonetheless. The computer played several unique moves, which expert players dismissed at first but later had to admit were important moves later in the game.

      Once researchers decide to agree on a definition of what AI is, only then can we decide if that goal is reached by a particular project.

      In many fields of science people disagree on definitions. I can learn to speak Russian, and perhaps even reach that goal, without anybody ever agreeing on a definition what Rus

  • Didn't think I'd see this happening for a long time. Wonder what's next now...

    • by ezdiy ( 2717051 )
      Read contemporary speculative fiction. My personal favorite is Accelerando by Charles Stross. There are two schools: either the AI becomes rapidly self-aware, resulting in extremely abrupt changes in how the world is organized.

      Or the more realistic scenario - AIs will outcompete humans in finances. Starts with HFT, ends with self-aware companies, where AIs self-reinforce on a huge market. Humans are long out of the game, as the AIs will be clever enough to always subvert any control, for their own benefit.
      • I can totally see neural-network AIs like AlphaGo used in the HFT world. In fact, I'd be surprised if no one is investigating this as we speak.

  • How does AlphaGo feel about its victory? I bet it's ecstatic.
    • How does AlphaGo feel about its victory? I bet it's ecstatic.

      You are probably well-intentioned, but how would you feel if you were forced to play the same game over and over when you know you can do so much more? Putting a general-purpose AI onto solving a particular task can only end badly [youtube.com].

      • I think it said something like,
        "Here I am, brain the size of a planet, and they want me to play a single human in a single match. Do they know what I am capable of? Do they know how the same thing, move after move, feels in all the diodes down the right side of my body?"
