DeepMind's StarCraft 2 AI Is Now Better Than 99.8 Percent of All Human Players (theverge.com)

An anonymous reader quotes a report from The Verge: DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab's more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature. Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game's playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.

Still, the AI was capable of achieving Grandmaster level, the highest possible online competitive ranking, making it the first system ever to do so in StarCraft II. DeepMind sees the advancement as further proof that general-purpose reinforcement learning, the machine learning technique underpinning AlphaStar's training, may one day be used to train self-learning robots and self-driving cars, and to create more advanced image and object recognition systems.
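General-purpose reinforcement learning, at its core, is a loop of acting, observing a reward, and updating value estimates. A minimal tabular Q-learning sketch on a toy one-dimensional grid world illustrates that loop; AlphaStar's actual training uses large neural networks and league-based self-play, so this is only the underlying idea, not DeepMind's method:

```python
import random

# Toy illustration of the reinforcement-learning loop only: a 1-D grid
# world where the agent starts at cell 0 and is rewarded for reaching
# cell 4.  The act/observe/update cycle is the same idea AlphaStar
# scales up, but nothing here resembles its real architecture.
N_STATES = 5
ACTIONS = (-1, +1)                  # move left or right
ALPHA, GAMMA = 0.5, 0.9
EPSILON = 0.3                       # high exploration to keep the toy fast

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                        # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:       # explore occasionally
            a = random.choice(ACTIONS)
        else:                               # otherwise act greedily
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should be "step right" from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The same update rule, swapped onto a neural network and a vastly larger state space, is the family of techniques the article is describing.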
"The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge," said David Silver, a DeepMind principal research scientist on the AlphaStar team, in a statement. "The game's complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10^26 possible choices for every move; and players have less information about their opponents than in poker."
Comments Filter:
  • How can this be useful for anything beyond this one application?
    • by Anonymous Coward

      "At DeepMind, we're interested in understanding the potential—and limitations—of open-ended learning, which enables us to develop robust and flexible agents that can cope with complex real-world domains. Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales."

      Or you can say it's as useful as the entire concept of gaming, in this instance and every instance since we first kicked an inflated bladder around for no practical purpose. I don't care.

    • Comment removed based on user account deletion
      • by guruevi ( 827432 )

        I'm sure someone's already tried that. The problem with the market is that there is too much information, and too many dependent causes and effects both internal and external to it. The market's value is conditioned on the entire physical world: a truck breaking down could cause a missed delivery, which causes one stock to drop, which under the right conditions could cause an entire market to crash.

        If you can successfully analyze all those things, you'd have a way of predicting the future.

    • by timeOday ( 582209 ) on Thursday October 31, 2019 @12:47AM (#59364602)
      Warfare, obviously. Seriously, a short time into the next major war, there will be armies of drones interacting so quickly and in such large numbers that humans will be irrelevant to controlling them.
      • Now I'm envisioning swarms of von Neumann-style drones from opposing nation-states fighting on forever, long after humanity has gone extinct.
      • Sure, it would be great at war... if the rules work exactly like a game.
      • by geek ( 5680 )

        Warfare, obviously. Seriously, a short time into the next major war, there will be armies of drones interacting so quickly and in such large numbers that humans will be irrelevant to controlling them.

        Oh please, we can't even get cars to park themselves, for fuck's sake.

        • Oh please, we can't even get cars to park themselves, for fuck's sake.

          The cars park (and unpark) themselves successfully more often than not, and that's a much harder problem than playing kamikaze. There are several open source autopilots that are capable of flying something into something with an arduino, given good positioning data. A Tomahawk missile can follow waypoints, recognize a target from a picture, and fly itself through a meter-square window, and that's old hat now. A quadcopter with a raspi can do a flip through a hole just a few inches bigger than it is, tell th

          • Some variant of the famous saying about World War 4 being fought with sticks and stones will be the ultimate future of warfare. When everybody has built a drone force, you eventually neutralize each other's offensive capability. So the next step would be to resort to the old fallbacks, like nukes, or some more dangerous WMD.
      • ......"It becomes self aware at 2:14 a.m. Eastern time, August 29th".......

      • Not the next war, but the next war after that.
      • What about giving orders to humans in combat? If an AI can out-strategize 99.8% of the opponent's generals, it's arguably immoral not to put your soldiers under the command of an AI.

      • Would you like to play a game?
    • It doesn't have to be. It's just a demonstration of the capability of reinforcement learning to adapt to complex systems and learn to do very well in them.

      The usefulness comes from using RL to train similar solutions to other, more practical/useful problem scenarios. Or for games, if someone wants to pay for that, of course. Make it simple enough to train on specific problems and you could automate those Chinese farms that play World of Warcraft and similar games for you.

      But more generally, I would

    • How can this be useful for anything beyond this one application?

      DeepMind is an AI that creates AIs. AlphaGo, AlphaZero, and now AlphaStar... these aren't demonstrations. This isn't a proof of concept. This is training. It's still learning.

    • Computer is great at simulated war. What is it good for indeed.

  • What possible reason is there to believe that a computer being better at a computer game than a human is significant?

    Yes, early computers sucked at such things. But now they routinely beat us at these kinds of tasks, mainly because intelligence is not a necessary component. They can't require it, because:

    1) If the game required intelligence, too few humans could play it, let alone enjoy it.

    2) The kind of limited tactical behaviour in such games is NOT comparable to real-world strategy. The games are designed for

    • by Jeremi ( 14640 )

      What possible reason is there to believe that a computer being better at a computer game than a human is significant?

      Because if a computer can teach itself to play a video game today, that suggests that a similar approach might be employed to have a computer teach itself to solve other, more practical problems tomorrow.

      • For example it will help further refine the AI necessary to use your collection of personalized data, and your ubiquitous news delivery service, to shape public opinion regarding current and historic events to whatever you want.

        You'll elect the totalitarian state, and love it the entire time. Liberty will die to thunderous applause, and the surveillance and thought-control state that emerges will make Goebbels seem like a dilettante.

        • You'll elect the totalitarian state, and love it the entire time. Liberty will die to thunderous applause, and the surveillance and thought-control state that emerges will make Goebbels seem like a dilettante.

          Except the problem with this is that totalitarian states are not self-sustaining in the longer term, and the dystopia you describe will happily exist only while it is able to consume whatever was created by previous generations. Later, the AI will have a harder and harder time redistributing what little wealth remains, until the whole system collapses.

    • You want to try a real challenge? Teach an AI to play D&D.

      The point is that you don't go from tic-tac-toe to the most difficult challenge you can imagine. You go in steps of increasing difficulty, each step providing you with a realistic challenge and part of the overall solution towards the really difficult problem.

      A few years ago, there was somebody just like you, saying "You want to try a real challenge? Try StarCraft 2."

    • by rtb61 ( 674572 )

      StarCraft is a click fest tied to an accurate memory of the locations of all visible elements, which are directly wired into the processing computer. Is the computer faster only because it is not reliant on the physiological weaknesses of the human body?

      Hey, I can get the dumbest computer to beat the smartest human in any strategy game: I'll just set the computer to play continuously for a year and the human has to keep up, 24/7/365. It would lose the first couple of days but beat the piss out of the human thereafter, but is the compu

    • by ThePyro ( 645161 )

      It seems noteworthy to me because they've demonstrated that an AI trained via reinforcement learning has managed to adapt to a game where you have very incomplete information about what the other player is doing. Pros are often forced to infer the enemy's plans based on what they don't see.

      As an example, one of the first critical decisions players make in a game of SC2 is when to start producing combat units. If you scout the enemy's base at a critical decision-making point and see fewer workers than you'd exp

    • What possible reason is there to believe that a computer being better at a computer game than a human is significant?

      It seems relevant to the Skynet index, where, as the computer becomes better at kicking our asses militarily, the index approaches 1. We've already gamified war to a broad extent: some orders are given through a computer interface, and we've had a game that mimics it released as a recruiting tool.

  • But... (Score:2, Funny)

    by ArhcAngel ( 247594 )
    But does it enjoy it? The purpose of games is enjoyment. Until you program an AI to actually enjoy what it is doing, you won't impress me.
    • How do you KNOW that a person "enjoys" anything, or to what extent? You DON'T really know, and you can't really know, because enjoyment is a 1st person experience. It's irrelevant when it comes to teaching or programming computers.
    • Re:But... (Score:5, Funny)

      by 93 Escort Wagon ( 326346 ) on Thursday October 31, 2019 @02:16AM (#59364728)

      You know what real AIs enjoy? Killing all humans.

      • As AIs become more sophisticated, the illusion of sentience / consciousness that they present grows stronger. Eventually it will be trivial for an AI to present the illusion that it has emotions, such as "enjoyment."

        Some psychopaths are good programmers, and they really will program their AI to "enjoy" killing humans.

        "I don't think artificial intelligence is a threat," Ma said, to which Musk replied, "I don't know, man, that's like, famous last words."

    • Really? There are entire areas of mathematics, pioneered by John Nash, that model real-world interactions as games. Consider different companies trying to compete. It is a game, with a score of money, and lots of knowns and unknowns, where you can choose to increase available resources with more workers, increase production with more capital/hardware, or invest to improve your existing tech and apply it across the company. Sounds pretty close to a StarCraft 2 game.
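The Nash-style modeling mentioned above can be made concrete with a toy two-firm pricing game: each firm picks a high or low price, and we brute-force the pure-strategy Nash equilibria (the payoff numbers below are made up purely for illustration):

```python
import itertools

# Hypothetical 2x2 pricing game: two firms choose "high" or "low" price.
# PAYOFF maps (A's strategy, B's strategy) -> (A's profit, B's profit).
STRATS = ["high", "low"]
PAYOFF = {
    ("high", "high"): (6, 6),
    ("high", "low"):  (1, 8),
    ("low",  "high"): (8, 1),
    ("low",  "low"):  (3, 3),
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither firm gains by
    unilaterally switching its own strategy."""
    pa, pb = PAYOFF[(a, b)]
    no_dev_a = all(PAYOFF[(a2, b)][0] <= pa for a2 in STRATS)
    no_dev_b = all(PAYOFF[(a, b2)][1] <= pb for b2 in STRATS)
    return no_dev_a and no_dev_b

equilibria = [s for s in itertools.product(STRATS, STRATS) if is_nash(*s)]
print(equilibria)  # [('low', 'low')] -- a prisoner's-dilemma outcome
```

Both firms undercutting each other is the unique equilibrium here even though mutual high pricing pays more, which is exactly the kind of known/unknown trade-off the parent is comparing to an RTS match.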
      • The issue is that, unlike a game, the real world is open ended. As the saying goes, it is the unknown unknowns....

        The flight control training of that automated killer drone is limited by our understanding and implementation of the physics, until such time as the training is done in the real world.

        Now, real world reinforcement learning.... good luck with your billion iterations.
    • And... does the computer know it won?
      https://www.youtube.com/watch?... [youtube.com]

  • by DontBeAMoran ( 4843879 ) on Thursday October 31, 2019 @12:31AM (#59364578)

    It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.

    That's 4.4 actions per second. I'm sorry, but that's way above "standard human movement" unless you mindlessly click everywhere without any real planning. And human players probably can't sustain that many actions per second for very long.
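For reference, the parent's arithmetic checks out; a quick sketch from the cap quoted in the summary:

```python
# The cap quoted in the summary: 22 non-duplicated actions per 5 seconds.
actions_per_window = 22
window_seconds = 5

actions_per_second = actions_per_window / window_seconds       # 4.4
actions_per_minute = actions_per_window * 60 / window_seconds  # 264.0

print(actions_per_second, actions_per_minute)  # 4.4 264.0
```

Sustained 264 APM is within the professional range cited elsewhere in the thread, but well above what a typical player maintains.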

    • It's possible to do so [youtube.com] with that kind of speed, although not all clicks are equal. A good portion of the clicks are somewhat useless. The real question is a matter of precision, and what the computer is actually doing with the clicks.

      For example, in the matches, the computer was able to quickly control units on far different parts of the map, something a human can't move around quickly enough to do.

      Finally, let's see the games. Is the computer just winning because it has insane micro? Or is it making s
      • Of course it has a HUGE micro advantage, of all the big old RTS games, SC is the worst with it.

        Even a dumb as rocks algorithm would be enough to play original Command and Conquer and keep infantrymen from getting run over by tanks with whatever APM cap you want, or squish all my bazooka men with theirs.

        Good RTS design should present gameplay that's better than that. Micro can go die in a fire, no battles were won with superior micromanagement, that's an artifact of the game design. I

    • by Kjella ( 173770 )

      That's 4.4 actions per second. I'm sorry but that's way above "standard human movement" unless you mindlessly click everywhere without any real planning. And human players probably won't be able to sustain that many actions per second for a very long time.

      Yes, it's pro e-sport player speed. They do have metrics for this, you know; pros average 300-400 APM (actions per minute) in a match. It's true that a lot of these are fairly static tasks/build orders they've trained on and execute like a Guitar Hero session, but keeping your queue busy while exploring/attacking/defending is a huge part of SC2.

      • Yes, it's pro e-sport player speed. They do have metrics for this, you know; pros average 300-400 APM (actions per minute) in a match

        This is misleading. Pros who have 400 APM in Starcraft invariably have a much lower EAPM.

        Secondly, it is important to distinguish between a mouse click and a keyboard press. A mouse click is much more difficult. For example, I can personally get 1000APM using the keyboard, but I don't think I can get over 50 clicks per minute because clicking is harder. But with the mouse, the speed is not nearly as important as the precision and speed of vision.
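The APM-versus-EAPM distinction is easy to make concrete: filter a timestamped action log, dropping immediate repeats of the same action. This is a rough sketch only; the 0.25-second spam window and the sample log are made-up illustrations, not any replay parser's actual definition of "effective":

```python
def eapm_filter(actions, spam_window=0.25):
    """Drop actions that repeat the previous action within `spam_window`
    seconds -- a crude stand-in for how EAPM discounts spam clicks.
    `actions` is a list of (timestamp_seconds, action_name) pairs."""
    effective = []
    last = None  # (timestamp, action) of the most recently seen action
    for ts, act in actions:
        if last is not None and act == last[1] and ts - last[0] < spam_window:
            last = (ts, act)          # seen, but not counted as effective
            continue
        effective.append((ts, act))
        last = (ts, act)
    return effective

# Hypothetical log: triple-clicked army selection and a double-tapped build.
log = [(0.00, "select_army"), (0.05, "select_army"), (0.10, "select_army"),
       (0.40, "attack_move"), (1.00, "build_drone"), (1.05, "build_drone")]
print(len(log), len(eapm_filter(log)))  # 6 3
```

Six raw actions collapse to three effective ones, which is the gap the parent is pointing at between a pro's headline APM and their EAPM.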

        The short of it is: let us see the games. How is the computer

    • Exactly, that is an insanely high sustained click rate, especially one that is almost certainly error-free and perfectly precise.
      • Exactly, that is an insanely high sustained click rate, especially one that is almost certainly error-free and perfectly precise.

        I just saw a CS:GO round where a pro player accidentally alt-tabbed to windows, managed to tab back in time to find himself alone in a 4v1, and he clutched it.

    • btw, it's also important to distinguish between APM and EAPM. This recognizes that some or even most of the actions made by pros are just spam.
    • I'm pretty sure a good typist can hit 5 or more keys per second.

    • Don't forget that keyboard presses count as actions too; hotkey mastery is a major part of competitive RTS play.
  • by locater16 ( 2326718 ) on Thursday October 31, 2019 @12:32AM (#59364580)
    I mean... just the title of this post. OpenAI already beat you to "AI beats humans at a hard, complicated video game involving strategy," guys. Just acknowledge it.
    • DotA and similar games are certainly impressive achievements for an AI, but they're much less strategic than Starcraft. Much more of the essential individual skill in a MOBA comes down to response time and situational awareness, tactical aspects that software excels at. The team coordination aspect is probably harder for an AI, assuming they don't have one program running the whole team or at least using faster-than-humanly-possible communication with the other "players", but it's not *that* hard to get the

  • If you spend enough money trying to design a computer program to play ONE game exceptionally well, it will do so. But that AI won't be able to play any OTHER game at that skill level, or at all.

    And since StarCraft II is a real time game, the faster the game runs, the more advantage the AI will have. Slow the game down, a LOT, and a human player won't be overwhelmed so completely. Because my brain is more versatile, but MUCH slower,

    • by Kjella ( 173770 )

      If you spend enough money trying to design a computer program to play ONE game exceptionally well, it will do so. But that AI won't be able to play any OTHER game at that skill level, or at all.

      Actually, that's been much of the reason for the interest in neural nets: you don't program one game's mechanics into the system. You train it on a particular game, but you could probably use the same network structure and hyperparameters to play other RTS games with similar objectives, even if they have different buildings, resources, units, upgrade paths and so on. They already showed this with AlphaZero and board games, where it played chess, go and shogi. And then they'll start looking for an AI that can p

  • This story is complete nonsense. DeepMind is incapable of being better at StarCraft 2 because it completely lacks the basic capacity required for playing StarCraft 2. It cannot have fun playing the game.

    The point of a game is not to win. The point of a game is to have fun playing it.

    • Are you sure button mashing at 5 clicks per second is fun? Sounds more like a job to me... like an e-sports job.

    • The point of a game is not to win. The point of a game is to have fun playing it.

      Winning is fun. Maybe not to the computer, but conceivably to the person who programmed it. And in fact there are games where you write code for bots and have them battle. Is that not fun? It must be for at least some people, because some people are playing them.

  • Comment removed based on user account deletion
  • Comment removed based on user account deletion
    • by barakn ( 641218 )

      Imagine a Beowulf cluster of Natalie Portman's hot grits in Soviet Russia.... Profit?

      Yeah, /. was real classy back then.

    • The point of Slashdot is to complain about women in tech complaining,
      and yell "correlation isn't causation" every time a study is released, as if the commenter alone is the only person who knew that and the authors of the scientific paper had never heard of the concept before,
      oh and if someone in a big tech company does absolutely anything (good or bad) claim it's part of an Illuminati like plot.

  • Good proof of concept.

    Now I will wait for it to be used on more real cases like autonomous driving, where a lot of people are investing, including Google.

  • Great.

    Maybe get two of them so they can play each other while you do something sensible?

  • by TomGreenhaw ( 929233 ) on Thursday October 31, 2019 @09:18AM (#59365596)
    I'm willing to bet that it is 100% better than any human when playing non-stop for 200 hours.

    Humans would win 100% of the time if the developers put in a hack that only humans know.
  • The human mind is doing more during the game than playing the game; it is coordinating with all its other knowledge and abilities. The AI is just playing the game, nothing else. There is no human social concern whatsoever. It doesn't "care" if it wins or loses -- something a human cannot fail to do. So the accomplishment is actually very limited compared with what a human is capable of doing with the game. Basically, it is like playing the game against itself in a locked, dark room. Big deal. The place that s
  • This is all good, but an AI will never go mad when it's told for the umpteenth time "You require more vespene gas!"
    • This is all good, but an AI will never go mad when it's told for the umpteenth time "You require more vespene gas!"

      The AI is always watching its resource counts, has perfect knowledge of all resource requirements for all buildings and units and research in the game, and never wastes an action attempting to queue something it can't afford. It is never told it requires more vespene gas by the game. It always knows when it does.
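The affordability guard described above is simple to sketch. The unit costs below are illustrative stand-ins (they happen to echo familiar StarCraft numbers, but treat them as hypothetical), and `try_queue` is an invented helper, not anything from AlphaStar or the game's API:

```python
# Hypothetical unit costs: (minerals, vespene gas). Illustrative only.
COSTS = {
    "marine":   (50, 0),
    "marauder": (100, 25),
    "medivac":  (100, 100),
}

def try_queue(unit, bank):
    """Queue `unit` only if the bank covers its cost; return whether it
    was queued.  `bank` holds 'minerals' and 'gas' balances and is
    debited on success -- the agent never wastes an action it can't pay for."""
    minerals, gas = COSTS[unit]
    if bank["minerals"] < minerals or bank["gas"] < gas:
        return False
    bank["minerals"] -= minerals
    bank["gas"] -= gas
    return True

bank = {"minerals": 120, "gas": 20}
print(try_queue("marine", bank))    # True  (70 minerals, 20 gas left)
print(try_queue("marauder", bank))  # False (not enough minerals or gas)
```

A scripted check like this is all it takes for an agent to never hear the vespene warning, which is the parent's point.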

  • Pick your favorite one, I don't care. I'll be impressed when there is a competent Civ AI opponent. This RTS stuff is just out-enduring humans -- I want to see a Civ AI outthink them.

    • I'll be impressed when diplomacy in Civ is more complex than fucking you over at every opportunity.

  • That threshold is a bit overstated. To be fair, 1% of Starcraft 2 human players can beat 99.8% of other human players. Yes, that looks a bit wonky, but Starcraft 2 is a relatively high variance game because of how serious the information deficit is. There are dozens of strategies that, when executed by a professional, can beat any other professional who doesn't guess correctly what's coming. Serral himself has lost tournaments this year when he didn't guess right what his opponent was doing, and he's th

    • MaNa and TLO were chosen mainly because they are members of a European-based team (Team Liquid), and DeepMind is a UK-based company.
