DeepMind AI AlphaStar Wins 10-1 Against 'StarCarft II' Pros (newscientist.com) 103

In a series of matches streamed on YouTube and Twitch, DeepMind AI AlphaStar defeated two top-ranked professionals 10-1 at real-time strategy game StarCraft II. "This is of course an exciting moment for us," said David Silver at DeepMind in a live stream watched by more than 55,000 people. "For the first time we saw an AI that was able to defeat a professional player." New Scientist reports: DeepMind created five versions of their AI, called AlphaStar, and trained them on footage of human games. The different AIs then played against each other in a league, with the leading AI accumulating the equivalent of 200 years of game experience. With this, AlphaStar beat professional players Dario Wunsch and Grzegorz Komincz -- ranked 44th and 13th in the world respectively. AlphaStar's success came with some caveats: the AI played only on a single map, and using a single kind of player (there are three in the game). The professionals also had to contend with playing different versions of AlphaStar from match to match. While AlphaStar was playing on a single graphics processing unit, a computer chip found in many gaming computers, it was trained on 16 tensor processing units hosted in the Google cloud -- processing power beyond the reach of many.
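
Loosely, the league setup described above (a population of agents seeded from human replays, then pitted against one another, with the strongest accumulating the most experience) can be sketched as a toy loop. Everything below, including the Agent class, the Elo-style ratings and the simulated matches, is an invented placeholder rather than DeepMind's training code:

import random

# Toy sketch of league-style self-play: five agents play each other and the
# stronger ones accumulate experience. All of this is a made-up placeholder
# for AlphaStar's actual deep-RL training on TPUs.

class Agent:
    def __init__(self, name):
        self.name = name
        self.rating = 1000.0      # crude Elo-like strength estimate
        self.games = 0

    def update(self, won, opponent):
        # Stand-in for a reinforcement-learning update after one game.
        expected = 1 / (1 + 10 ** ((opponent.rating - self.rating) / 400))
        self.rating += 32 * ((1.0 if won else 0.0) - expected)
        self.games += 1

def play_match(a, b):
    # Placeholder match: the higher-rated agent wins more often.
    p_a = 1 / (1 + 10 ** ((b.rating - a.rating) / 400))
    return a if random.random() < p_a else b

league = [Agent(f"AlphaStar-{i}") for i in range(5)]   # "five versions"
for _ in range(10_000):                                # many league games
    a, b = random.sample(league, 2)
    winner = play_match(a, b)
    a.update(a is winner, b)
    b.update(b is winner, a)

best = max(league, key=lambda ag: ag.rating)
print(f"{best.name}: rating {best.rating:.0f} after {best.games} games")

In the real system each update is a deep reinforcement-learning step rather than a rating bump; the structure of the league is the point here.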
Comments Filter:
  • Still Cheating (Score:2, Informative)

    by Luthair ( 847766 )
    Much like the dota bot from last year AlphaStar is effectively cheating as it is aware of the entire map at once, not restricted to the viewport as humans are. These are only really a novelty until they start operating on imperfect knowledge and imperfect inputs as humans do (even if it was arbitrarily limited in reaction speed).
    • Re:Still Cheating (Score:5, Insightful)

      by psycho12345 ( 1134609 ) on Thursday January 24, 2019 @08:33PM (#58018150)
      Actually, the AI is restricted to the viewport; it is using an interface into the game, so no, it doesn't have a maphack like a normal scripted AI would.
      • Actually, the AI is restricted to the viewport

        At the end of the match they tested an AI that was restricted to the viewport, but it lost badly.

        it is using an interface into the game,

        It is not using the same interface that humans use. It can instantly see all of its units' health, and is able to fight simultaneously in three places on the map.

    • Re:Still Cheating (Score:5, Insightful)

      by djinn6 ( 1868030 ) on Thursday January 24, 2019 @08:43PM (#58018198)

      It's already operating with imperfect knowledge. From Vox [vox.com]:

      During the 10 matches, the AI had one big advantage that a human player doesn’t have: It was able to see all of the parts of the map where it had visibility, while a human player has to manipulate the camera.

      Emphasis mine. Yes, it's an advantage, but it's not cheating. Humans can use the minimap to see what's going on as well.

      The problem with these matches, and Starcraft in general, is that it's able to win just with good micro. So getting really good at Starcraft doesn't get us closer to actual AI.

      • Re:Still Cheating (Score:5, Insightful)

        by The Evil Atheist ( 2484676 ) on Thursday January 24, 2019 @09:37PM (#58018438)
        How can it not get closer? AI didn't use to be able to beat professional StarCraft players. Now it can. That is, by definition, moving closer, because it's not moving backwards, and clearly an improvement from not moving at all.

        This is just more goalpost shifting, finding nonsense reasons to argue why it "doesn't count". Consider the alternative that maybe the things humans do aren't as clever as we tell ourselves. If it's just "good micro", why can't humans use "good micro" to beat the AI, if we're so great?
        • Re:Still Cheating (Score:5, Insightful)

          by djinn6 ( 1868030 ) on Thursday January 24, 2019 @10:07PM (#58018552)

          How can it not get closer? AI didn't use to be able to beat professional StarCraft players. Now it can. That is, by definition, moving closer, because it's not moving backwards, and clearly an improvement from not moving at all.

          It is not moving at all. There's no viable route from this to general purpose AI. By playing countless games against itself on the same map, it is still performing a search on a decision tree, weighing the (now more fuzzy) nodes. It did not acquire any actual understanding of Starcraft mechanics. It does not logically reason about anything that it hasn't seen before. If you give it Warcraft instead, it'll take another several months of work from a team of very intelligent humans to make it good at it. In fact, I'll bet a big enough balance patch will cause it to have to throw out everything it's learned.

          This is just more goalpost shifting, finding nonsense reasons to argue why it "doesn't count". Consider the alternative that maybe the things humans do aren't as clever as we tell ourselves.

          The goal post has always been to replace human intelligence. I don't see any AI building Starcraft-playing AIs, or discussing how long it will be before they are replaced by even better AIs.

          If it's just "good micro", why can't humans use "good micro" to beat the AI, if we're so great?

          Because humans have muscles that take time to move?

          • Re: (Score:3, Interesting)

            There's no viable route from this to general purpose AI

            How do you know? Is there a roadmap that says all AI must progress in this way? If there were such a roadmap, why would we still need research? Do you even understand why we research things we don't have adequate knowledge of or experience with?

            It did not acquire any actual understanding of Starcraft mechanics

            If it beat professional human players, then yes it did acquire an actual understanding of Starcraft mechanics. In fact, a better understanding than humans.

            If you give it Warcraft instead, it'll take another several months of work from a team of very intelligent humans to make it good at it. In fact, I'll bet a big enough balance patch will cause it to have to throw out everything it's learned.

            So what? The fact is it wasn't even able to do something like Starcraft before. Do you not understand that technological pro

            • Re:Still Cheating (Score:4, Informative)

              by djinn6 ( 1868030 ) on Friday January 25, 2019 @04:55AM (#58019576)

              It did not acquire any actual understanding of Starcraft mechanics

              If it beat professional human players, then yes it did acquire an actual understanding of Starcraft mechanics. In fact, a better understanding than humans.

              That you can train a dog to bark twice when shown 1+1, and three times when shown 2+1 does not mean the dog can do arithmetic.

              • If you train a dog to beat humans in Starcraft, it means the dog understands Starcraft.

              • No, because you trained it to do two sums. You did not train it to do arithmetic.

                Your original complaint was that it did not develop an understanding of Starcraft mechanics. If it can play better than humans, it has demonstrably developed an understanding of Starcraft mechanics. Because your criterion was "Starcraft mechanics", not "all games ever".

                See? More implicit goalpost shifting when you don't even understand your own argument.
          • Re: (Score:3, Interesting)

            by Njovich ( 553857 )

            The goal post has always been to replace human intelligence

            The goalpost in AI research has always been chess. After that was solved, people came up with various other goals, but there is certainly no consensus there.

            Not sure what you mean by replacing human intelligence; it sounds like you are not happy until all humans are dead. Hopefully you are not an AI saying that. In the context of Starcraft 2, this system has replaced human AI. If you want an AI system that can do everything in all fields that humans can do, wel

            • by djinn6 ( 1868030 )

              The goal post has always been to replace human intelligence

              The goalpost in AI research has always been chess.

              From Wikipedia:

              Modern AI research began in the mid 1950s.[20] The first generation of AI researchers was convinced that artificial general intelligence was possible and that it would exist in just a few decades. As AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[21] Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who accurately embodied what AI researchers believed they could create by the year 2001.

              • There's a great movie on AI from 1970 too: Colossus: The Forbin Project - https://www.imdb.com/title/tt0... [imdb.com]

                It touches on all the aspects of a super-connected AI reaching singularity, and explores humanity's response to it (or humanity's efforts to come to terms with something like that).

            • The goalpost in AI research has always been chess

              Go read a book by Marvin Minsky; you don't know history, and your ignorance shows.

          • In fact, I'll bet a big enough balance patch will cause it to have to throw out everything it's learned.

            A different map would throw out everything it learned.

          • By playing countless games against itself on the same map, it is still performing a search on a decision tree, weighing the (now more fuzzy) nodes.

            No it isn't. It's a multi-level (deep) neural net with various mechanisms to allow it to interface with the game. It doesn't have an inherent knowledge of the game. It's just trained on the game much in the way a child would be.

            It's already surprisingly "general".
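
            For what it's worth, the contrast being drawn in this sub-thread is between a scripted bot (hand-written rules) and a learned policy that maps game observations to actions through trained weights. A minimal sketch of the latter, with invented sizes and an untrained, random network; AlphaStar's real architecture is far larger and more elaborate:

            import numpy as np

            # Minimal sketch of a learned policy: an observation vector goes in,
            # a distribution over actions comes out. Sizes and the random
            # (untrained) weights are purely illustrative.
            rng = np.random.default_rng(0)
            OBS_DIM, HIDDEN, N_ACTIONS = 64, 128, 10     # invented sizes

            W1 = rng.normal(0.0, 0.1, (OBS_DIM, HIDDEN))
            W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

            def policy(observation):
                """Map one game observation (unit positions, health, resources,
                ...) to a probability distribution over abstract actions."""
                h = np.tanh(observation @ W1)
                logits = h @ W2
                exp = np.exp(logits - logits.max())      # softmax
                return exp / exp.sum()

            obs = rng.normal(size=OBS_DIM)               # stand-in for one game frame
            probs = policy(obs)
            print("most likely action id:", int(np.argmax(probs)))

            Nothing game-specific is hard-coded here; whatever the agent "knows" lives entirely in the trained weights, which is the sense in which the parent says it has no inherent knowledge of the game.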

          • It seems like the AI was unable to remember where the units were when they went out of sight.
        • If it's just "good micro", why can't humans use "good micro" to beat the AI, if we're so great?

          The AI in Bullfrog's "Dungeon Keeper" used to drop a bunch of enemies in your territory and then immediately pick them up again, giving you no time to attack them. You had to keep on them in case it did let them go, but it was incredibly annoying, and only narrowly within the rules. The human player couldn't grab, say, thirty monsters at once, drop them and grab them again.

          • And if anyone here bothered reading the article, you'd know this is not simply just dropping a bunch of enemies into your territory.
      • by rtb61 ( 674572 )

        That is actually two advantages and not just one: see the map, and select units in those zones. That is where it wins, hands (or instant super-clicky fingers) down. Starcraft was always less about strategy and more about unit selection and unit movement, a clickfest.

      • Emphasis mine. Yes, it's an advantage, but it's not cheating. Humans can use the minimap to see what's going on as well.

        The minimap doesn't show you what units are in an area, and it doesn't let you see their hitpoints. It doesn't let you click on the units: if you want to click on a unit, you have to move your screen there and then click on it; that's two actions.

        It's cheating.

      • On the other hand, why should AI follow rules made for humans in the first place?

        Perhaps there's lingering hope that humans should _feel_ equal to an AI.

        AI operates without feelings and human limitations, as effectively as possible. So an AI is subjected to a world that's very limited. How is that fair, if it's done only to comfort us? It's just a show and makes headlines.

        The focus seems to be to mimic a human, but AI can be something new. A completely new form of

      • This wasn't covered in the video, but in the DeepMind Blog [deepmind.com] about the match, they link to a paper describing a custom network architecture [ox.ac.uk] specifically designed to do "micro" during a battle, where each individual unit is acting as its own miniature agent. From the paper:

        In this paper, we focus on the problem of micromanagement in StarCraft, which refers to the low-level control of individual units’ positioning and attack commands as they fight enemies. This task is naturally re
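
        To make the per-unit idea concrete: below is a hypothetical sketch in which each unit independently picks its own target. The linked paper learns this behaviour with multi-agent reinforcement learning; the hard-coded focus-fire rule and all the names and numbers here are invented and only illustrate the decentralised control structure:

        from dataclasses import dataclass

        # Hypothetical per-unit micro controller: every friendly unit issues
        # its own order each tick, acting as its own tiny agent.

        @dataclass
        class Unit:
            uid: int
            x: float
            y: float
            hp: float

        def choose_target(unit, enemies, attack_range=5.0):
            """Each unit acts for itself: attack the weakest enemy in range."""
            in_range = [e for e in enemies
                        if (e.x - unit.x) ** 2 + (e.y - unit.y) ** 2 <= attack_range ** 2]
            return min(in_range, key=lambda e: e.hp) if in_range else None

        my_units = [Unit(1, 0.0, 0.0, 40.0), Unit(2, 1.0, 1.0, 35.0)]
        enemies  = [Unit(10, 3.0, 3.0, 20.0), Unit(11, 4.0, 0.0, 5.0)]

        orders = {u.uid: choose_target(u, enemies) for u in my_units}
        for uid, target in orders.items():
            print(f"unit {uid} -> attack {target.uid if target else 'nothing'}")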

        • There's no way any human can get their "micro" to the level where they're calculating optimal behavior for individual units on the battlefield.

          Sure, but there's no reason why the AI should mimic all the deficiencies of the human brain.

      • by Luthair ( 847766 )

        It's perfect knowledge for the player - note I did not say it saw through the fog of war. If a unit peeks through the fog for a moment when a human does not have the viewport at that part of the map, AlphaStar knows every detail but the human does not. AlphaStar is able to be aware of what all units are doing at all times, hence perfect knowledge.

    • Re: (Score:3, Informative)

      by wlorenz65 ( 5474378 )
      Wrong for the live game against MaNa. Quote: "Following the broadcast of the recorded matches, DeepMind introduced a new version of AlphaStar that MaNa took on in a live match. The agent that played the live game didn't have the benefit of the overhead camera and instead had to make decisions on where to place its focus in the same way a human would."

      But you are right for the recorded games from December 2018.
      • The agent that played the live game didn't have the benefit of the overhead camera

        The agent lost that match. It lost the match badly, despite having inhuman levels of micro ability.

    • Much like the dota bot from last year AlphaStar is effectively cheating as it is aware of the entire map at once, not restricted to the viewport as humans are.

      That's not true. AlphaStar is using the same interface as other players. It just multitasks faster than the human players.

    • Poor decisions in strategy (bad builds, attacking up ramps, attacking into arcs) combined with super-human micro. The blink micro from the computer was beautiful, and the ability to fight in four different positions on the map was something only a computer could do. The computer could micro perfectly on one side of the map while warping in units on the other side of the map.

      We've known for a long time that computers are better than humans at micro [youtube.com], that's not very interesting.
    • The Dota bot was certainly very restricted, as a swath of the game was simply barred: namely, certain abilities that it couldn't figure out how to deal with. I think the random factor did it in. That was pretty bullshitty, but a good example of the extent of AI capabilities. Rather than gloating about how well it did, the journalists should have sold it more like "This is where AI is currently at." But tech journalists generally suck.

      This example, AlphaStar, is effectively cheating by limiting the map sele

  • Nothing to see here..
    • Yup APM spam FTW
      • It wasn't APM spam; the APM was within a normal range, but the multi-tasking was super-human. It was able to micro simultaneously in three different places on the map, and to know exactly how much health each of your units has (you know the exact moment you should blink back your stalker, for example).
      • Nope. APM was lower than the human players:

        In its games against TLO and MaNa, AlphaStar had an average APM of around 280, significantly lower than the professional players, although its actions may be more precise. This lower APM is, in part, because AlphaStar starts its training using replays and thus mimics the way humans play the game. Additionally, AlphaStar reacts with a delay between observation and action of 350ms on average.

        Check out that chart [deepmind.com]. AlphaStar is at a mean APM of 277, TLO is at 390, and MaNa is at 678 because apparently he just never stops clicking shit.

        I don't understand why some people just hate AI and try to discredit and dismiss all advancements. Is it just natural skepticism? I'm all for that, just... try to put in a little more effort and stop spouting bullshit.

        • APM reaches 1500 in this match, which is 3x what the best gamers can do. Gamers tend to establish a rhythm by maintaining APM even when they don't need to, which skews results.
          https://www.youtube.com/watch?... [youtube.com]
          • ... They show you the whole histogram. Seriously, look at the bloody chart. TLO peaks at... 2000 APM!? You can see the curve for the durations where TLO was simply clicking shit faster than AlphaStar. AND you can see in what slice AlphaStar issued commands faster than MaNa, who really didn't get much above 680.

            But yeah, poor TLO's fingers. Like... lay off the meth dude.
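
            For reference, converting the APM figures quoted in this sub-thread into per-action timings is simple arithmetic (the numbers are the ones reported above; nothing else is implied):

            # Turn the APM numbers quoted above into average spacing between actions.
            apm_figures = {
                "AlphaStar (mean)": 277,
                "TLO (mean)": 390,
                "MaNa (mean)": 678,
                "burst peak claimed above": 1500,
            }

            for who, apm in apm_figures.items():
                per_second = apm / 60
                ms_between = 60_000 / apm
                print(f"{who}: {per_second:.1f} actions/s, ~{ms_between:.0f} ms between actions")

            # DeepMind separately reports a ~350 ms average delay between
            # observation and action, which is a different quantity from the
            # spacing between consecutive actions above.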

  • Great news (Score:2, Interesting)

    Computers are good at playing computer games with a strict ruleset. Imagine that.
    • Are you saying Starcraft has a strict ruleset?
      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Are you saying it doesn't?

        In competitive StarCraft, you take into consideration build orders, where you need to have done a specific set of steps in a specific amount of time. If you fail to do it, it puts you behind the opponent and will cause you to lose the game.

        It's so meticulous that seasoned casters can actually tell a build based on how many workers a player has in a minute.

        Just look at the Liquipedia entry for Protoss build orders [liquipedia.net] and you'll see what I mean. And that's just one race out of three.
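
        That "specific set of steps in a specific amount of time" is essentially a timed checklist, which is easy to represent directly. A minimal sketch; the step names and timings below are illustrative, not an actual Liquipedia build:

        # A build order as a timed checklist: (game time in seconds, step).
        # The steps and timings below are made up for illustration only.
        build_order = [
            (17, "Pylon"),
            (40, "Gateway"),
            (48, "Assimilator"),
            (95, "Cybernetics Core"),
        ]

        def behind_schedule(completed_steps, game_time):
            """Return the steps that should already be done by game_time but aren't."""
            return [step for t, step in build_order
                    if t <= game_time and step not in completed_steps]

        print(behind_schedule({"Pylon", "Gateway"}, game_time=100))
        # -> ['Assimilator', 'Cybernetics Core']: this player is falling behind.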

    • Reality has a strict ruleset.
  • by mentil ( 1748130 ) on Thursday January 24, 2019 @11:51PM (#58018938)

    Playing a multiplayer game with bots used to be seen as an inferior experience to playing with real humans. Now imagine that instead of something like AlphaStar's utility function being set to trying to win, it's set to trying to make the human opponents have the most fun. Of course it'd need some understanding of the mindset of the player; they might not want to always win, or always have close matches, or possibly they're a sore loser. However, this could be inferred somewhat by player behavior (even outside of the match proper, e.g. in menus).

    Put that in a game and ship it, and that could be a killer feature. People might prefer to play with a bot that'll guarantee a fun time, over a human that might rage quit or be an unfair match that leads to a one-sided game.
    #MakeGamesSinglePlayerAgain
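
    One hypothetical way to phrase that objective: instead of a utility that only rewards winning, blend in an estimate of how much fun the human opponent is having. The signal names and weights below are invented purely to illustrate the shape of such a function:

    # Hypothetical utility for a "fun-maximising" bot: trade off its own win
    # probability against signals that the human opponent is enjoying the game.
    def bot_utility(win_prob, closeness, human_engagement, fun_weight=0.7):
        """
        win_prob         -- bot's estimated chance of winning (0..1)
        closeness        -- how evenly matched the game currently is (0..1)
        human_engagement -- inferred from behaviour, e.g. menus, rematch rate (0..1)
        fun_weight       -- how much the bot cares about fun vs. winning
        """
        fun_estimate = 0.5 * closeness + 0.5 * human_engagement
        return (1 - fun_weight) * win_prob + fun_weight * fun_estimate

    # A crushing win scores worse than a tense, engaging game:
    print(bot_utility(win_prob=0.95, closeness=0.1, human_engagement=0.3))  # ~0.43
    print(bot_utility(win_prob=0.55, closeness=0.9, human_engagement=0.8))  # ~0.76

    Estimating those signals reliably is, as the reply below notes, the hard part.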

    • by Kjella ( 173770 ) on Friday January 25, 2019 @03:00AM (#58019356) Homepage

      This is probably much harder than you think. I sometimes play chess against the computer, and even though it can match rating with human players, the play is quite different. Instead of making "reasonable" mistakes and miscalculations, it makes random, unmotivated moves that score poorly. Likewise, it rarely has any idea of what would be a good trap for humans; instead it's just surprisingly shallow at times, as if it's maxed out the ply depth for that rating. And that's in chess, a game with incredibly little nuance by comparison. I can't even imagine the complexity of trying to act like a plausible rookie in Starcraft II.

      • by mentil ( 1748130 ) on Friday January 25, 2019 @03:52AM (#58019450)

        For a turn-based game, particularly when you can see what the artificial stupidity is doing, you're right. In a realtime game like Starcraft 2 there's so much that could be done that doing nothing with a particular thing at a given moment is a reasonable move. If you want to make the suspension of disbelief better, you can teach it what mistakes humans make, and allow it to repeat them when desired. AFAIK teaching a deep-learning algorithm how to fail like a human has received little research.
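
        A tiny sketch of the "repeat human mistakes when desired" idea, assuming you already have a strong policy and a catalogue of typical human errors; both are invented placeholders here:

        import random

        # Sketch: blend a strong policy with a catalogue of human-style mistakes.
        # strong_policy() and the mistake list are placeholders, not real APIs.

        human_mistakes = ["forget to build supply", "over-commit the attack",
                          "leave workers idle"]

        def strong_policy(state):
            return "optimal move for " + state       # stand-in for the trained agent

        def human_like_policy(state, blunder_rate=0.15):
            """With small probability, deliberately play a typical human mistake."""
            if random.random() < blunder_rate:
                return random.choice(human_mistakes)
            return strong_policy(state)

        random.seed(1)
        print([human_like_policy("mid-game") for _ in range(5)])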

    • Playing a multiplayer game with bots used to be seen as an inferior experience to playing with real humans.

      Bots can cheat and destroy players by spying on what they do and building to counter it. Most AI in games is deliberately wimpy.

      Look at how MMORPGs have been stuck in the taunt/heal/DPS model for 20 years. To break that you have to get rid of taunts, and therefore tanks, and rely more on casting controllers and other things.

      Which apparently people don't wanna do because it's too hard.

      So boss fights remain the same old thing with a few extra dance moves to keep things mildly interesting.

    • All that's left is to give them the copious chat logs of reports from players in multiplayer and let them learn how to insult us properly. Getting told to drink bleach has never felt so real!

  • by backslashdot ( 95548 ) on Friday January 25, 2019 @12:13AM (#58018990)

    Did nobody notice the typo in the title? Wtf is Starcarft?

  • by The123king ( 2395060 ) on Friday January 25, 2019 @03:57AM (#58019460)
    Is that the latest racing game from Blizzard?
  • Calculators also calculate far faster and more accurately than do humans. Still, it's a machine -- not a mind.

    The difference between a machine and a mind is that a machine is driven by a set of mechanical/logical rules strictly, without exception. In contrast, a mind is driven by free will judgements. Free will is the ability to derive options, weigh them against each other, and select the option with the highest sum of desirability and likelihood.

    Both can be programmed (after all, our minds are the resu
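
    For what it's worth, the "highest sum of desirability and likelihood" rule described above is itself easy to write down mechanically; a trivial sketch with invented options and scores:

    # The decision rule described above: score each option by desirability plus
    # likelihood and pick the best. Options and numbers are invented.
    options = {
        "expand to a new base": {"desirability": 0.8, "likelihood": 0.5},
        "attack now":           {"desirability": 0.6, "likelihood": 0.9},
        "turtle and tech up":   {"desirability": 0.9, "likelihood": 0.4},
    }

    def choose(opts):
        return max(opts, key=lambda name: opts[name]["desirability"] + opts[name]["likelihood"])

    print(choose(options))   # -> "attack now"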

    • a machine is driven by a set of mechanical/logical rules strictly,

      No, there's actually a large random factor.
      Your brain, likewise, is driven by a set of interconnected neurons with interactions dictated by electrochemical rules. It is not strict. There's a lot of... squishiness about when a neuron fires and how it fires. Similarly, there's a lot of squishiness with how weights in an artificial neural network get updated. That's thanks to the glory of rand().
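
      That "squishiness" in weight updates typically comes from random initialisation, random minibatch selection, and techniques like dropout. Below is a toy stochastic update showing two of those sources of randomness; the data and the linear "model" are invented:

      import numpy as np

      # Two random ingredients of a weight update: minibatch sampling and dropout.
      # The fake data and linear model below are purely illustrative.
      rng = np.random.default_rng(42)
      data = rng.normal(size=(1000, 8))            # fake inputs
      targets = data @ rng.normal(size=8)          # fake linear targets
      w = np.zeros(8)

      for step in range(200):
          idx = rng.choice(len(data), size=32, replace=False)   # random minibatch
          x, y = data[idx], targets[idx]
          keep = rng.random(8) > 0.2                            # dropout mask
          pred = x @ (w * keep)
          grad = x.T @ (pred - y) / len(idx)                    # gradient of 0.5*MSE
          w -= 0.05 * grad * keep                               # noisy update

      print(np.round(w, 2))   # drop the fixed seed and this changes on every run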

      However, anything that follows any set of rules will always, in the natural world, come to usurp the intent of those rules.

      Haha, ok you little rebel.

      But a mind creates its own intents

      You should ask yourself why you like sex and if you don't think you've been pre-programmed

