
Can DeepMind's AI Really Beat Human Starcraft II Champions? (arstechnica.com) 129

Google acquired DeepMind for $500 million in 2014, and its AI programs later beat the world's best player in Go, as well as the top AI chess programs. But when its AlphaStar system beat two top Starcraft II players -- was it cheating?

Long-time Slashdot reader AmiMoJo quotes BoingBoing: DeepMind claimed the AI was limited to what human players can physically do, putting its achievement in the realm of strategic analysis rather than finger twitchery. But there's a problem: it was often observed clicking with superhuman speed and efficiency.

Aleksi Pietikainen writes "It is deeply unsatisfying to have prominent members of this research project make claims of human-like mechanical limitations when the agent is very obviously breaking them and winning its games specifically because it is demonstrating superhuman execution."

"It wasn't an entirely fair fight," argues Ars Technica, noting the limitations DeepMind placed on its AI "seem to imply that AlphaStar could take 50 actions in a single second or 15 actions per second for three seconds." And in addition, "This API may allow the software to glean more information... " After playing back some of AlphaZero's back-to-back 5-0 victories over StarCraft pros, the company staged a final live match between AlphaStar and [top Starcraft II player Grzegorz "MaNa"] Komincz. This match used a new version of AlphaStar with an important new limitation: it was forced to use a camera view that tried to simulate the limitations of the human StarCraft interface. The new interface only allowed AlphaStar to see a small portion of the battlefield at once, and it could only issue orders to units that were in its current field of view....

We don't know exactly why Komincz won this game after losing the previous five. It doesn't seem like the limitation of the camera view directly explains AlphaStar's inability to respond effectively to the drop attack from the Warp Prism. But a reasonable conjecture is that the limitations of the camera view degraded AlphaStar's performance across the board, preventing it from producing units quite as effectively or managing its troops with quite the same deadly precision in the opening minutes.
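
The burst arithmetic Ars describes is easier to see in code. Below is a minimal sketch of a sliding-window action limiter; the class name and numbers are illustrative assumptions, not DeepMind's published mechanism, but they show how a cap averaged over a window still permits short superhuman bursts.

```python
from collections import deque

class WindowedActionLimiter:
    """Allow at most `max_actions` in any trailing `window` seconds.

    A cap expressed as an average is exactly what lets an agent idle
    for a few seconds and then legally fire off a superhuman burst.
    """
    def __init__(self, max_actions=50, window=5.0):
        self.max_actions = max_actions
        self.window = window
        self.timestamps = deque()

    def try_act(self, now):
        # Forget actions that have fallen out of the trailing window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True   # action permitted
        return False      # over budget for this window

limiter = WindowedActionLimiter(max_actions=50, window=5.0)
# After idling, all 50 actions can legally land inside a single second:
burst = sum(limiter.try_act(4.0 + i / 50) for i in range(60))
print(burst)  # 50 -- a 50 actions-per-second spike, yet "fair" on average
```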

  • Super Cheater (Score:5, Insightful)

    by JustAnotherOldGuy ( 4145623 ) on Sunday February 03, 2019 @12:42AM (#58062548) Journal

    This is not really beating a human fairly. If you could click that fast then sure, but otherwise it's not a fair fight.

    • I think the use of "fair" ignores various game factors; "level playing field" is the more accurate term.
    • by gweihir ( 88907 )

      The only fair contest would have been a turn-based one with ample time for execution on both sides. Yes, machines are faster, but they are utterly dumb on a level humans cannot even reach while staying functional.

      • Yes, machines are faster, but they are utterly dumb on a level humans cannot even reach while staying functional.

        Sometimes fast is better than smart.

        No one cares how smart you are if your opponent gets the first 20 shots in while you're still drawing your gun. But you'll be the smartest bullet-riddled corpse they ever saw.

        • by gweihir ( 88907 )

          No argument about that. And fast is something machines can do in many cases where humans cannot. But it is limited and it is not a sign of intelligence. What I mean is that I do not object at all to "this machine has beaten a human at a strategy game". What I object to is the conclusion that "this machine is intelligent". It is not. To justify that conclusion, this would have to be set up quite a bit differently, and then the machine would lose. At least today and for the next few decades.

          • What I object to is the conclusion that "this machine is intelligent". It is not.

            Agreed.

            -

            To justify that conclusion, this would have to be set up quite a bit differently, and then the machine would lose. At least today and for the next few decades.

            It'll be a while. I don't know about decades but sooner or later we'll get there. We'll have to define "intelligence" well enough so that we're able to tell if we got there or not.

  • by Anonymous Coward on Sunday February 03, 2019 @12:53AM (#58062568)

    When AlphaZero was pitted against Stockfish, the best chess AI, they set the match up with an outdated version of Stockfish and bizarre time controls that removed Stockfish's edge in time management (a static time per move was enforced). Stockfish didn't get its opening books (small databases containing the best moves to start with), nor did it get endgame tablebases (small databases of perfect play at the end of games), and it was limited to a very small amount of RAM (only 1GB when it should've had 64GB or more). DeepMind will CONTINUE to mislead people about what they've accomplished at every opportunity.
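
For context, most of the handicaps the parent lists are ordinary UCI engine settings. A rough sketch of how they would be set using the python-chess library (assuming a Stockfish binary on PATH; the values shown are illustrative, not the match's exact configuration):

```python
import chess
import chess.engine

# Assumes a Stockfish binary is installed and on PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

# The knobs at issue in the AlphaZero match, expressed as UCI options:
engine.configure({
    "Hash": 1024,  # transposition table in MB (the match reportedly allowed 1GB)
    # "SyzygyPath": "/path/to/tablebases",  # endgame tablebases; off if unset
})
# Opening books are normally supplied by the GUI or tournament manager,
# not by the engine itself, so "no book" is the engine's default state.

board = chess.Board()
# A fixed time per move, like the static control used in the match:
result = engine.play(board, chess.engine.Limit(time=1.0))
print(result.move)
engine.quit()
```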

    • Re: (Score:2, Informative)

      bizarre time controls that removed Stockfish's edge in time management

      AlphaZero got the same time control.

      Stockfish didn't get its opening books

      AlphaZero didn't get an opening book either.

      nor did it get endgame tablebases

      Neither did AlphaZero. Also note that in many of the games, Stockfish was basically lost in the early middlegame.

      only 1GB when it should've had 64GB or more

      That's the only legitimate concern, but the whole argument is stupid nitpicking nevertheless. This is like a race between a horse and the first model car. The exact conditions and outcome are secondary to the proof of validity of general principles. AlphaZero was just the first iteration of a new development. The fact

      • Still, it sounds like they chose the battlefield that suits them best. E.g. maybe DeepMind is just better without the databases because Stockfish was counting on them and wasn't optimized to work without them?
        • That's like saying a fat runner is optimized to use a car. Stockfish isn't really optimized for opening books; it just sucks without them, mainly because the difference between a good and a poor move in the opening may not manifest itself as a concrete eval difference until far beyond the search horizon. As shown in some of the games, Stockfish doesn't care if its bishop gets trapped behind its own pawns. A bishop is still a bishop. It may get a penalty for limited mobility, but it doesn't get a penalty for being

          • Still, I would conclude that Google's DeepMind only showed that it does not need an opening library.
            I would say a neural network just combines the advantages of a database and of calculating moves ahead. The weighting of the different connections in a neural network seems pretty much equivalent to storing a library of good and bad starting moves.
            Therefore it effectively has a database function, and it is not surprising that it is superior to software without one.
            • The big difference is that an opening book contains literal moves, whereas a neural net represents generalized patterns, similar to how a human grandmaster's brain has these patterns. If you give AlphaZero a position that's not in any of the games it played, it will still find appropriate patterns and use them to evaluate the position.

              it is not surprising that it is superior to a software without one

              If you take a weak engine with an opening book, then Stockfish is still going to be superior, because as soon as it plays a non-book move, the weaker engine is on its own. Eve
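
The book-versus-net distinction being argued here fits in a few lines. A toy illustration (not either engine's real code): a book is an exact-match lookup that is silent on any unseen position, while a trained evaluator scores every position because it encodes patterns rather than literal moves.

```python
# Toy contrast between an opening book and a learned evaluator.
opening_book = {
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w": "e2e4",
}

def book_move(fen):
    # Exact-match lookup: any position not literally stored yields nothing.
    return opening_book.get(fen)

def learned_eval(features):
    # Stand-in for a neural net: a fixed function of position features.
    # The point is generalization -- it returns *some* score for every
    # position, seen or unseen, rather than looking up stored moves.
    weights = [1.0, 3.0, 3.2, 5.0, 9.0]  # pawn, knight, bishop, rook, queen
    return sum(w * f for w, f in zip(weights, features))

print(book_move("a position not in the book"))  # None -- the book is silent
print(learned_eval([8, 2, 2, 2, 1]))            # always produces a score
```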

          • by Luthair ( 847766 )
            Sounds more like the owner of the horse put square tires on the car and claimed victory.
      • bizarre time controls that removed Stockfish's edge in time management

        AlphaZero got the same time control.

        Stockfish didn't get its opening books

        AlphaZero didn't get an opening book either.

        nor did it get endgame tablebases

        Neither did AlphaZero.

        [snipped]

        Sure, the SF setup was not optimal, but it wasn't completely crippled either.

        If you let me choose the parameters of the game, I'd beat AlphaZero in chess even though I would play under the same parameters[1]. It's totally fair because I'd be playing with the same restrictions as AlphaZero! That's how you measured fairness, right?

        [1] Single core, 386 with 4MB of RAM. Sure, it's not optimal for AZ, but it's not completely crippled either!

        • I'd beat AlphaZero in chess even though I would play under the same parameters (Single core, 386 with 4MB of RAM)

          Let me get this clear. You are arguing that your brain is roughly equivalent to a single-core 386?

          • I'd beat AlphaZero in chess even though I would play under the same parameters (Single core, 386 with 4MB of RAM)

            Let me get this clear. You are arguing that your brain is roughly equivalent to a single-core 386?

            No, I'm saying that I'd work under the same parameters. I might not even turn on the 386 handed to me, after all. But I'm still working with the same parameters, which as you pointed out, is totally fair.

            • which as you pointed out, is totally fair.

              No, it's not fair, because a 386 with 4MB is completely incapable of even running the AlphaZero code (or Stockfish code), simply because the program and its data won't even fit.

              That's what I call crippled.

              The conditions in the match were not optimal, but did not make a huge difference. Maybe you could have gained a few dozen elo by optimizing the system, which is not really a big deal overall, and certainly not "crippled", especially considering that Deepmind could have added similar elo to AlphaZero by add

              • Battle Chess ran on my 8086 with 640KB of RAM. Given the scenario outlined by the grandparent, I'd bring it along to play against AlphaZero. Sure, AlphaZero would segfault on startup having exhausted the memory and forfeit, but even when playing as white I'd get one valid move before it died.

                DeepMind's claim is that their deep neural network design can beat something programmed by a human programmer. If they'd added a similar database to their code, it would not have supported their claim. If they could ha

              • which as you pointed out, is totally fair.

                No, it's not fair, because a 386 with 4MB is completely incapable of even running the AlphaZero code (or Stockfish code), simply because the program and its data won't even fit.

                Are you claiming that the contest is only fair if both parties get all the resources they claim they need?

                • by Shaitan ( 22585 )

                  When those parties are machines and on a neutral playing field and they are technical requirements, yes.

                  If you are trying to establish that your gasoline based vehicle outperforms the diesel model it isn't exactly fair to refuse to allow diesel fuel in the race. It also isn't fair to exclude the types of optimizations that work better for diesel than gasoline.

                  • Are you claiming that the contest is only fair if both parties get all the resources they claim they need?

                    When those parties are machines and on a neutral playing field and they are technical requirements, yes.

                    Well, that was my point: it wasn't really fair. In the AZ vs SF match, SF wasn't given all the resources it was claimed to require, while AZ was given all the resources it apparently needed.

                    So, of course, if I'm allowed to determine the parameters under which the contestants will compete, it's possible to always choose the winner in advance simply by tuning the parameters to favour one party over the other while being "fair" because both parties get the same parameters.

                    • by Shaitan ( 22585 )

                      Right. I'd contend you optimize both as best you reasonably can. If that would make SF victory a given, so be it. You just benchmark your progress over time against SF. There is no need to rig contests and generate all sorts of bogus claims about being the undisputed champion of all things.

                      If anything that is just going to hamper AZ's development and progress. Now there is no incentive to work on whatever deficits might lead to AZ failing against SF and any improvements that would have been gained trying wi

              • by Shaitan ( 22585 )

                The entire point of AlphaZero is to not need the databases. The whole point is for it to beat the classical chessbot with its artificial intelligence.

                If you don't set up the chessbot as it is normally run you haven't beaten the chessbot. If you give AlphaZero tables you've also defeated the point of the experiment, the idea isn't that AlphaZero is the better chessbot, the idea is that AlphaZero's intelligence is such that it can even beat a chessbot.

                It's like setting the difficulty to "Dumb Noob" on a game

          • by Shaitan ( 22585 )

            I very much doubt his brain or yours could compete with a single core 386. Sure it has some impressive stats on massively parallel micro-ops but the single threaded performance sucks.

      • by Shaitan ( 22585 )

        No, it isn't nitpicking: Stockfish is designed to have those databases, whereas AlphaZero is not. You haven't beaten it if you haven't beaten it as it is intended to run.

    • by gweihir ( 88907 )

      As the promise of "AI" slowly crumbles, a lot of people are desperate to hide the severe limitations and problems of this tech. One way they try to scam the public is by rigged "contests", like this one or the meaningless "Go" stunt.

  • by blahplusplus ( 757119 ) on Sunday February 03, 2019 @01:10AM (#58062606)

    ... to test AI. RTS games already have a bad UI where the bottleneck is the human being in the chair: trying to control many units with a limited UI via keyboard and mouse is cumbersome at best. It was cumbersome even back in the Warcraft 2 days, when you tried to bloodlust ogres or heal paladins (healing paladins being damn near impossible), while Warcraft 3 'fixed' the issue of impossible casting with large numbers of units using autocast.

    The main problem is that games like Starcraft can be played perfectly because they are really action games masquerading as strategy games: the actions take place in real time. To a computer like DeepMind, the human appears super slow. Imagine if your opponent appeared hopelessly sluggish in terms of their reflexes. That's basically DeepMind vs any human opponent in an RTS. A computer's perfect information and perfect reflexes mean making 99%-accurate micromanagement decisions for units everywhere at once.

    You can't do that as a human player. DeepMind in an RTS is like having an aimbot in Quake. Not really impressive, since we already know making bots that can win against humans is trivially easy.

    • by Anonymous Coward

      See Artosis' analysis on all these games at https://www.youtube.com/watch?v=_YWmU-E2WFc

      His fundamental point: AlphaStar didn't win just from better micro, it got itself into positions where better micro could win. And that itself is quite a feat because following a script cannot get you into that position consistently.

      Note that humans regularly beat the best scripted AI bots in SC2, even when the bot is allowed to cheat somewhat. AlphaStar isn't a finished project yet, still has to learn the 8 other mat

      • And that itself is quite a feat because following a script cannot get you into that position consistently.

        Give it a script with several basic build orders (because each game was against a different agent, with a different playing style), and that kind of micro, and it would be able to do that consistently.

      • by Shaitan ( 22585 )

        So? Let it demonstrate them without cheating.

        The bots do not need to be set up to cheat over and over again in these contests. Nothing says AlphaXXXX sux0rs if it doesn't beat everybody everywhere the first time around. What it does well is still impressive without claiming it is the new champion of the world; claiming that gets the bigger headline, but demonstrating actual progress in fair competition earns a lot of headlines over time.

      • Note that humans regularly beat the best scripted AI bots in SC2, even when the bot is allowed to cheat somewhat.

        I think humans would regularly beat AlphaStar too if they played regularly. The strats chosen weren't that good, and it didn't take long for a human to poke holes in them.

        • I agree, that last game in particular showed that AlphaStar (that iteration) had clear weaknesses that could be found and exploited, given time. Of course, AlphaStar itself can also learn in that same time, and a lot quicker than most humans.

          But personally I think nitpicking about clickrates or even effective strategies is missing the larger point, which is that a self-taught ML network with no explicit human programming can actually have working "strategies" for such a complex game at all. This is high-lev

          • I can talk about this a lot and I am happy to do so. We know that the system can only play on this map. We know that it can create a well optimized, though undirected, build order. It also started by following human replays. Also, it has very very precise micro (basically, all the units target the correct opponent unit, and move back at the correct time). My hypothesis of how it plays is: it has an OK build, then sends units to certain places on the map at certain times, because that worked before. It is no
            • I'm not sure you're giving it enough credit. In such a strategic game, good mechanics are not enough. If AlphaStar's pre-learned strategies were as shallow as you feel, I doubt a world-class Protoss player would let himself get trapped into a purely mechanical defeat so easily - five times in a row. I would imagine a player of that quality could identify and circumvent any obvious strategies after the first game, but it took until his 6th game before he found an exploitable weakness. I do certainly agree th

              • If AlphaStar's pre-learned strategies were as shallow as you feel, I doubt a world-class Protoss player would let himself get trapped into a purely mechanical defeat so easily - five times in a row. I would imagine a player of that quality could identify and circumvent any obvious strategies after the first game, but it took until his 6th game before he found an exploitable weakness.

                I don't think number of games is what you should look at here. The games were played one after another, very quickly. As soon as he had time to stop and think for a while (and I think he discussed it with TLO), he was able to come up with a winning strategy. And actually beating it is not very hard: all you have to do is insist that they play on a map the computer has never seen. A new map would cause the computer to become hopelessly confused.

                Each AI agent (there were five different agents for each human

                • Perhaps you're right, I haven't watched them in as much depth as you seem to have. Perhaps its micro is really just that good, that it never needed to develop better strategies to win against unprepared human pros. I look forward to the inevitable series of rematches, pitting a further upgraded AlphaStar against humans with a better idea of what to exploit.

                  I remember Lee Sedol being pretty confident of victory after he reviewed AlphaGo's matches against Fan Hui. Everyone underestimated how much it could imp

                  • I look forward to the inevitable series of rematches,

                    I absolutely agree, I enjoyed these.

                    The thing I'm really looking to see is an improvement in object permanence: can it remember things that have gone out of its vision? I'd bet that's where the big improvement will come from.

    • by tlhIngan ( 30335 )

      Your post could be summarized as "RTS games are decided by APM" (Actions Per Minute): basically, how you do depends on how many commands you can issue to the game per minute. Most players can only do around 10 or so, while the pro players are besting over 100. And the theory is, the more APM, the greater your chances to win.

      An AI even limited to a keyboard and mouse can issue far more than 100 APM, since it can easily issue a command every time it runs its evaluate-plan-execute loop, which can run dozens of t
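
The scaling described above falls directly out of the loop structure. A schematic sketch (the function bodies and the tick rate are placeholders, not DeepMind's code): if the loop runs N times per second and each pass may emit a command, APM grows linearly with N.

```python
import time

def evaluate(state):
    # Placeholder: read the game state (units, resources, enemy positions).
    return state

def plan(observation):
    # Placeholder: decide the single best next command.
    return "move_unit"

def execute(command):
    # Placeholder: issue the command through the game's API.
    pass

def agent_loop(ticks_per_second=30, duration=60.0):
    """Evaluate-plan-execute loop; actions issued scale with tick rate."""
    actions = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        command = plan(evaluate(state={}))
        if command is not None:
            execute(command)
            actions += 1
        time.sleep(1.0 / ticks_per_second)
    return actions

# A 30 Hz loop that acts on every pass issues roughly 1800 actions per
# minute -- several times a strong human's sustained 300 or so APM.
```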

    • by Luthair ( 847766 )
      It's actually why these games, when done properly (which this wasn't, and I pointed this out on the cheerleading article), are meaningful: the point is to eventually have a use in the real world. The real world is full of warts. An example might be using an AI for airport security (which has been talked about), but people don't walk around an airport with beacons indicating their precise location at all times; a system monitoring it would need to cope with weird camera angles, blind spots, etc.
  • by Anonymous Coward

    Just wanted to note something people seem to be missing. Some pro player was incredibly dominant, winning 7 of 9 big tournaments. He peaks at 500 APM and has an EPM of 330? Have you considered that he's not winning on strategy but on speed and precision alone?

    So when an AI does it, it is all of a sudden not fair? That's pretty bogus to me. All it tells me is that SC2 is a horrible "strategy" game and that the strategy aspect of it peaks really easily, and then after that the primary method of winning is to simply

    • So when an AI does it, it is all of a sudden not fair?

      It's not about fairness. We've known that computers can click faster than humans for a long time, just like forklifts can lift more than humans.

      It's just not very impressive. Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?

      • Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?

        You think that beating a pro in Starcraft is just a matter of clicking fast?

        • You think that beating a pro in Starcraft is just a matter of clicking fast?

          Did you watch the games? In this case, it was more like precision than speed. All you need to do is program some basic strategies, then give the computer really good micro, and it will win.

          To see examples of what computers can do, check this out [youtube.com]. Here is another one (the stalker micro at 0:11 is very similar to what DeepMind was doing) [youtube.com].

        • Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?

          You think that beating a pro in Starcraft is just a matter of clicking fast?

          When you can issue orders at a sustained rate of 100 orders per second to 100 individual, ungrouped units, you are going to beat the player who is physically constrained to a peak of perhaps 2 orders per second, with a sustained rate of less than 1 per second.

          It is easy to win almost any real-time game by simply moving hundreds of times faster than your opponents.

          It's hard to be impressed when the strategy used by the winner involved issuing orders at a sustained pace no human can meet, even for a short b

        • by Shaitan ( 22585 )

          Pretty much yeah. Really it's more that the way these games work there is a disparity in speed which strategy can't overcome. Especially between sufficiently strong players who are going to employ solid strategy.

        • by q_e_t ( 5104099 )
          Seemingly clicking fast is a significant component of winning the game.
  • Watson won at Jeopardy also because it could press the button faster, which is considered the key skill. However, despite that, it still had to answer the questions, which was impressive.

    I would say that this result is also impressive, even if the machine was not really quite as good as the humans.

    • The difference between Jeopardy and the Starcraft AI is that the Starcraft AI wouldn't have come close to beating a human if it weren't for the inhuman precision.

      The strategies the computer came up with were lousy (and very map-dependent), but it was able to compensate by having extremely precise movements.
    • by Guspaz ( 556486 )

      It didn't answer the questions that were read out loud, though. It answered the questions that were fed to it electronically. I can't imagine that Watson would have done nearly as well if it had to do speech recognition using the same audio inputs that a human player would (that is, a microphone at its podium).

    • My understanding is that Watson was connected to the internet during the Jeopardy game. The humans weren't.
      • by mark-t ( 151149 )
        Your understanding is incorrect. Watson had no live internet hookup during the match.
        • Indeed, instead it had much of the internet inside it. Certainly all of Wikipedia. Pre-indexed.

          But the big issue remains, that Watson could play at all is impressive.

          • by mark-t ( 151149 )

            Only if you consider less than a hundredth of one percent of the Internet to be "much" of it.

            Watson's storage is substantial, but is dwarfed by the size of the entire Internet.

  • by slack_justyb ( 862874 ) on Sunday February 03, 2019 @01:38AM (#58062670)

    I've seen a couple of comments already where folks are talking about DeepMind being able to micro and click faster than a human. While that's neat, it's not entirely the goal here. The makers already put an artificial limit on the actions DeepMind can execute: 600 per minute, compared with the roughly 250-300 actions per minute the humans were executing. That indeed made DeepMind's micro game strong, but the real tipping point was that DeepMind could see anywhere a unit was located, while humans can only see a "screen at a time". When DeepMind's makers went back and implemented the "screen at a time" limitation, DeepMind was easily fooled again. And that's the thing here: not "can I beat a human?" but "can a human fool me?". As soon as the amount of information coming IN to DeepMind was reduced, the data coming OUT couldn't compensate, and the humans were able to slowly figure out how to trick the AI into an unwinnable situation.

    There's a continual fallacy on Slashdot where pure research like DeepMind is confused with "whose jerb can it take, and reasons why it can't take that jerb." The media presents this as "Hey look! Something AI can do better than us worthless puny humans!", but DeepMind is research first. The entire point isn't "Hey, can I pwn this guy?" It's: why did limiting the input allow the human to so easily fool the machine? Researchers aren't sure why the AI was so easily fooled when, with a wider field of view, it could not be. That question has far wider-ranging implications than how great DeepMind's micro game is.
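
For the curious, the two observation regimes described above correspond roughly to interface options in DeepMind's open-source PySC2 environment. A sketch assuming the published PySC2 API (the map name and parameter values are illustrative):

```python
from pysc2.env import sc2_env
from pysc2.lib import features

# Human-style interface: the agent observes feature layers through a
# screen of limited extent and must move the camera to see elsewhere,
# as opposed to a raw interface exposing the whole map at once.
interface = features.AgentInterfaceFormat(
    feature_dimensions=features.Dimensions(screen=84, minimap=64),
    use_feature_units=True,
)

env = sc2_env.SC2Env(
    map_name="Simple64",  # illustrative map choice
    players=[
        sc2_env.Agent(sc2_env.Race.protoss),
        sc2_env.Bot(sc2_env.Race.terran, sc2_env.Difficulty.very_hard),
    ],
    agent_interface_format=interface,
    step_mul=8,  # act only every 8 game steps -- a crude rate limit
)
```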

    • the real tipping point was that DeepMind could see anywhere a unit was located, while humans can only see a "screen at a time". When DeepMind's makers went back and implemented the "screen at a time" limitation, DeepMind was easily fooled again

      That aspect was over-hyped. In the set that was won by the human (where the screen was limited), the human had time to analyze the playing style of the computer, and come up with a good counter-strategy. That is the main reason the computer lost.

      Limiting the screen reduced the skill of the computer by a few percent, but it wasn't the only or (in my opinion) the main difference.

    It's: why did limiting the input allow the human to so easily fool the machine?

      You are sincerely asking this question?

      Any stage magician (or three year old near a cookie jar) could easily demonstrate the answer about limited input and orchestrating an expected "answer".

      TL;DR: misdirection.

      • You are sincerely asking this question?

        No, what I'm getting at isn't the human "why" here; that's pretty obvious. The "why" is the empirical side of it, the maths behind explaining the "why". Yes, we can easily chalk it up to misdirection; the more important thing is how to teach a computer that misdirection is happening, and that requires understanding why the computer got there in the first place.

  • Sounds like a bug that it did worse in areas that should have been unrelated to the camera view. Makes me wonder if it was trained on whole-map data and was unsure how to act when only given part of the map.

    • The original version was trained on the whole map, then the second version was trained to see only the camera view. In training, it managed to get 95% as good as the whole-map version.

      The human had time to prepare a counter-strategy to DeepMind, that is really the big variable that changed.
  • While watching the match, it seemed to me that it was not winning by being human-level intelligent or by adjusting on the fly. The AI was simply able not only to click on units really quickly, but to switch between different groups faster than a person could. This allowed the computer to effectively be on the entire board all at the same time.

    When you think about it, the AI was showing no intelligence. It had run a statistical analysis and come up with the notion that if you can click really fast, these units are the best in all situations. It does not adjust to new stuff; in effect, it just runs a script.
    • by Anonymous Coward

      AI isn't supposed to "show intelligence." It is supposed to mimic intelligence. As in....fake it....through engineering.

      That is why the "A" in "AI" stands for "artificial." You know...as in "not real." It isn't actually intelligent, and it isn't supposed to be. Straight from the dictionary [merriam-webster.com], AI is:

      the capability of a machine to imitate intelligent human behavior

      If the machines were actually intelligent, then we wouldn't call it "artificial intelligence." We would call it "synthetic intelligence." But

    • by gweihir ( 88907 )

      When you think about it, the AI was showing no intelligence. It had run a statistical analysis and come up with the notion that if you can click really fast, these units are the best in all situations. It does not adjust to new stuff; in effect, it just runs a script.

      Indeed. And that is all "AI" can do today and in the foreseeable future. It is also quite telling that the human players always go into this without having any data from old games to prepare for their opponent, while the machine has all data available on the humans. This is simply because the humans would wipe the floor with it if they knew in advance what they are going into. It basically does not get any more unfair than this. The whole thing is a meaningless PR stunt.

  • "We don't know" (Score:4, Informative)

    by phantomfive ( 622387 ) on Sunday February 03, 2019 @03:22AM (#58062862) Journal

    We don't know exactly why Komincz won this game after losing the previous five

    You could know if you'd watch the games. In the first set, DeepMind won with inhumanly superior micro. It was really cool, but computers have been better at micro for a long time. Speed and precision are things computers are good at, that's why we have aimbots.

    In the second set, the human readjusted, and thought of strategies that would defend against the superior micro (by building more powerful units), while taking advantage of the computer's weaknesses (poor knowledge of army compositions, weak knowledge of positioning, and seemingly no object permanence: once enemy units are out of view, it has no idea where they are or if they exist).
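
On that last point: "object permanence" is, mechanically, a memory of the last known state of anything that leaves the camera. A toy sketch of such a memory (a hypothetical structure for illustration; AlphaStar's actual state lives in a learned recurrent network, not an explicit table):

```python
from dataclasses import dataclass

@dataclass
class Unit:
    id: int
    pos: tuple  # (x, y) map coordinates

class EnemyMemory:
    """Remember the last known position of enemy units that leave view."""
    def __init__(self):
        self.last_seen = {}  # unit id -> (position, game time)

    def update(self, visible_units, game_time):
        for unit in visible_units:
            self.last_seen[unit.id] = (unit.pos, game_time)

    def believed_positions(self, game_time, horizon=30.0):
        # Sightings older than `horizon` seconds are too stale to trust;
        # anything fresher is assumed to still be near its last spot.
        return {
            uid: pos
            for uid, (pos, t) in self.last_seen.items()
            if game_time - t <= horizon
        }

memory = EnemyMemory()
memory.update([Unit(1, (10, 4)), Unit(2, (55, 60))], game_time=100.0)
# Both units have since left the camera; the memory still tracks them:
print(memory.believed_positions(game_time=120.0))  # {1: (10, 4), 2: (55, 60)}
```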

    • by gweihir ( 88907 )

      Same with the Go stunt by Google: the machine had all the games of the human player; the human player had none of the machine's. Even after only 3 games, the human thought he had figured out a way to beat the machine, but of course there never were any additional games, as that would have been an utter disaster for Google.

      • Yeah, that is something that should be emphasized more. Having a knowledge of a player's strats can be a huge advantage. We see it fairly often in Starcraft: some player will come up with a new strategy, win everything, and it will take a few months for other players to come up with a good counter-strategy. But soon everyone knows how to counter it, and the player falls from the heights.
        • by gweihir ( 88907 )

          While I do not follow Starcraft games, this does not surprise me one bit.

        • No, and that is just your ignorance of Go. You can't formalize a player's strategy in Go. Otherwise previous AI attempts would have already beaten human professionals. Instead, they could barely keep up with human amateurs even with a 9 stone handicap.
          • No, and that is just your ignorance of Go. You can't formalize a player's strategy in Go

            Hey, welcome to the conversation, that's nice but we're talking about Starcraft.

            • You were replying to a comment about Go. If you were talking about Starcraft, then why were you agreeing with him about Go? Why didn't you say "I think this is a different situation with Starcraft". Instead, you agreed with him, implying the same thing applies across both games.
              • I'd be surprised if it didn't apply to Go. It certainly applies to chess, although the time scale is longer. A new player will appear (Bobby Fischer, Tal) with a new playing style, be very dominant, but after a while people will adjust to his style and he will fade (now many people can do the things Fischer innovated).
      • That's a load of shit. There are not enough human games to train the neural nets. The millions of games it played were against itself. Lee Sedol won the fourth game, then lost the fifth in a similarly decisive manner. "Never were any additional games" my arse.

        Then they let loose AlphaGo on the internet Go servers as an anonymous player, and professionals quickly picked up on the fact that the anonymous player was not playing like anyone else.

        So, no your "never were any additional games" is full of s
    • You could know if you'd watch the games.

      While it would generally be expected that most here could watch and understand, I am betting that there are quite a few people who cannot. They will look and see bright lights, lots of mouse pointer movement, and some explosions.

      Thank you for explaining. Now I do not even have to watch. You explained it very well. :)

      (but yes, I could watch and understand; it is just that SC is not my thing)

      • I think the biggest problem they are going to have to deal with is the lack of object permanence. "Guessing" what isn't on the screen is a huge part of Starcraft. That's just my opinion, though.
    • DeepMind won with inhumanly superior micro

      Like mistakenly shooting piles of rock!

      No, they limited its actions per minute. MaNa typically had a higher APM. But don't take my word for it; they released the whole histogram [deepmind.com]. Poor TLO peaked at 2000 APM! Like... DUDE, chill. Or at least cut back on the meth.

      the human readjusted, and thought of strategies that would defend against the superior micro

      Not really. In the game I saw, AlphaStar got really confused by Warp Prism drop harasses and couldn't figure out why its ground units couldn't reach the air. What they changed was that it didn't have total (legal) knowledge of the ent

        No, they limited its actions per minute. MaNa typically had a higher APM. But don't take my word for it; they released the whole histogram [deepmind.com]. Poor TLO peaked at 2000 APM! Like... DUDE, chill. Or at least cut back on the meth.

        You're either dumb or ignorant, but TLO wasn't clicking 2000 times a minute.

        If you don't want to be ignorant anymore, you could start by looking at the difference between APM and EAPM. There's more to micro than speed.
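
For readers unfamiliar with the distinction: EAPM ("effective" APM) filters out spam before counting. A crude sketch of the idea (the 0.3-second window and the single filtering rule are simplifications of what real replay analyzers do):

```python
def apm(actions, duration_minutes):
    """Raw actions per minute: every keypress and click counts."""
    return len(actions) / duration_minutes

def eapm(actions, duration_minutes, spam_window=0.3):
    """Effective APM: drop near-instant repeats of the previous command.

    `actions` is a list of (timestamp_seconds, command) pairs.
    """
    effective = []
    for t, cmd in actions:
        if effective and cmd == effective[-1][1] and t - effective[-1][0] < spam_window:
            continue  # same command repeated within the spam window
        effective.append((t, cmd))
    return len(effective) / duration_minutes

# Rapid-fire selection spam inflates APM but barely moves EAPM:
actions = [(i * 0.05, "select_army") for i in range(100)] + [(6.0, "attack")]
print(apm(actions, duration_minutes=0.1))   # 1010.0
print(eapm(actions, duration_minutes=0.1))  # 180.0
```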

        • Of course not, there are keyboard shortcuts. Which make more of a clack than a click.

          But, if a graph is labeled "Actions Per Minute (APM)", and has three curves, and the orange one is labeled "TLO", and some area under the curve goes up to the little hash mark labeled with "2000".... Maaaan, I guess I'm just too dumb to figure out what that means. Could be anything really. Who knows?

          But I get you. I do. It's the stupidest thing to see these people pointlessly clicking around in some vain attempt to keep th

  • Where exactly does one draw the line about what constitutes "cheating", versus exploiting a natural advantage?

    Is it cheating if the AI gets to use API calls to control its forces, rather than physically pushing keys on a keyboard and moving a mouse the way a human player does? Arguably so, if keyboard-and-mouse dexterity is considered part of the skill set for the game. Perhaps a fair contest should require the AI to use robotic arms and video cameras on a gaming PC.

    On the other hand, if it's only the s

    • Where exactly does one draw the line about what constitutes "cheating", versus exploiting a natural advantage?

      I suggest that by looking at the goal, you can determine the line. Google here wants to create intelligence, but they won with a pure mechanical advantage.

      So we can congratulate them for... making a computer that clicks with precision. Good job, Google. But from watching the games, it's clear they failed on intelligence.

  • I'm not impressed by any piece of software that beats humans at games, and it certainly doesn't make me any more impressed with the so-called half-assed excuse for 'AI' they keep trotting out. Feels mostly like more of the cheap marketing bullshit they keep pushing on us.
    • by gweihir ( 88907 )

      Very much so. Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years". That is a statement directly implying "we do not know whether it is even possible". IBM and Google experts know they have nothing except automation on steroids. But humans like to dream and like to ignore reality. Marketing is just one discipline that exploits that.

      • Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years".

        Normally I would post a comment here affirming your intelligence (and I do, I affirm your intelligence) and agreeing with you.

        I still mostly agree with you, but it should be pointed out that the Google team seems to have started coming up with slightly new algorithms, so if they keep going, they may find some insight that leads to a more complete AGI solution. Low probability but it seems there is real progress finally.

        • by gweihir ( 88907 )

          Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years".

          Normally I would post a comment here affirming your intelligence (and I do, I affirm your intelligence) and agreeing with you.

          I still mostly agree with you, but it should be pointed out that the Google team seems to have started coming up with slightly new algorithms, so if they keep going, they may find some insight that leads to a more complete AGI solution. Low probability but it seems there is real progress finally.

          Well, thanks. Not what I am looking for here, but the occasional person that actually understands what I am talking about and either has good counter-arguments or agrees is nice.

          Still, nothing that is AGI to even a very tiny amount is known today. Don't get me wrong, AI is hugely useful, but AGI does not exist. Maybe we will eventually find something that can do AGI without consciousness or we will find a different way to do computing that allows consciousness (digital computing clearly does not), but not

          • How can you measure how close or far we are?
            • by gweihir ( 88907 )

              History of technology. Technology moves at a pretty constant speed from a theory of how something could work, to lab demo, to you can buy it, to it is in general use, to mature technology. With networked computers we are at general use now, and it will take an estimated 30-50 years before the technology is mature. If there is not even a credible theory, we are at least 30-50 years from the lab demo, and possibly much, much longer. It may even be impossible.

              Whether you take the steam engine, electricity, compu

    • Because there are similarities between these strategy games and professional jobs. The lesson is pretty clear: AI is coming for knowledge worker jobs.

  • Now, what would you call it if the machine has all of its opponent's games to prepare with and the human opponent has none of the machine's? This whole thing was a useless stunt. And did you notice how fast the Go machine was retired afterwards? Very likely because it would have had no chance after the humans had seen it play a few times.

    Why do people fall for this kind of crap?

  • Why do we still use the term AI when it actually doesn't even exist yet?
  • Well of course it was cheating. They gave it TOTAL AWARENESS of the whole map (everything it could legally see under fog-of-war rules). They fudged this one by limiting its "screen changes": while it knew everything that was happening across the map, it could only choose one area at a time to issue commands to. They ALSO had some games where it had to use those screens to see what was happening at those locations (just like a human). And it lost. Arguably, it didn't have enough training time with that setup.

    It only
