Google's DeepMind AI Plans To Take On StarCraft II (venturebeat.com)

An anonymous reader quotes a report from VentureBeat: Google and Blizzard are opening up StarCraft II to anyone who wants to teach artificial intelligence systems how to conduct warfare. Researchers can now use Google's DeepMind A.I. to test various theories for ways that machines can learn to make sense of complicated systems, in this case Blizzard's beloved real-time strategy game. In StarCraft II, players fight one another by gathering resources to pay for defensive and offensive units. It has a healthy competitive community that is known for a ludicrously high skill level. But considering that DeepMind A.I. has previously conquered a complicated turn-based game like Go, a real-time strategy game makes sense as the next frontier. The companies announced the collaboration today at the BlizzCon fan event in Anaheim, California, and Google's DeepMind A.I. division posted a blog about the partnership and why StarCraft II is such a good fit for machine-learning research. If you're wondering how much humans will have to teach the A.I. about how to play and win at StarCraft, the answer is very little. DeepMind learned to beat the best Go players in the world by teaching itself through trial and error. All the researchers had to do was explain how to determine success, and the A.I. could then begin playing games against itself on a loop, reinforcing whatever strategies led to more success. For StarCraft, that will likely mean asking the A.I. to prioritize how long it survives and/or how much damage it does to the enemy's primary base. Or maybe researchers will find that defining success in a more abstract way leads to better results; discovering the answers to all of this is the entire point of Google and Blizzard teaming up.
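
A minimal sketch of the self-play loop described above, in Python. The environment interface, agent methods, and reward weights are hypothetical illustrations, not DeepMind's actual API:

    # Hypothetical reward shaping: reward survival time and damage to the
    # enemy's primary base, as the summary speculates a researcher might.
    def shaped_reward(steps_survived, damage_to_enemy_base, won):
        return 0.001 * steps_survived + 0.01 * damage_to_enemy_base + (1.0 if won else 0.0)

    def self_play_loop(env, agent, episodes=1000):
        """Play the agent against itself and reinforce whatever leads to success.
        `env` and `agent` stand in for whatever interface the real tooling exposes."""
        for _ in range(episodes):
            trajectory = []
            obs, done = env.reset(), False
            while not done:
                action = agent.act(obs)             # e.g. sampled from a policy network
                obs, info, done = env.step(action)
                trajectory.append((obs, action))
            score = shaped_reward(info["steps"], info["base_damage"], info["won"])
            agent.update(trajectory, score)         # reinforce strategies that scored well

Whether a hand-written reward like this beats a more abstract definition of success is exactly the open question mentioned above.
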
  • by Solandri ( 704621 ) on Saturday November 05, 2016 @06:13AM (#53217871)
    Games like chess, Go, and tic-tac-toe always let both players see the complete world state. Armed with that knowledge, it's easy to be systematic and deterministic.

    Games like poker and StarCraft hide part of the world state from each player, forcing them to guess at the parts they can't see. That opens up the possibility of bluffing - leading the opponent down the wrong decision tree because he's been fooled into thinking the part of the world state he can't see is different from what it really is. I don't think this is something an AI can "solve". Certainly one could optimize it, so that it becomes damn good at guessing whether a certain player is bluffing. But put it up against a different player and all that "learned" experience becomes useless, or even counter-productive. Or pit it against the same player, now aware he's playing against the AI that beat him last time, and he'll simply do something he would never normally do to throw off the computer. It's a difficult enough problem that in pretty much all commercial computer games with a fog-of-war feature, the computer is just programmed to cheat by ignoring the fog and seeing everything.
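
    As a rough illustration of the "cheating" described above, here is a toy Python sketch contrasting a fogged view with an omniscient one; the map and visibility model are made up purely for illustration:

        # A 1-D toy "map": the AI's base on the left, the enemy's on the right.
        WORLD = {0: "my base", 1: "empty", 2: "empty", 3: "enemy army", 4: "enemy base"}

        def fogged_view(world, visible_tiles):
            """What a non-cheating AI gets: everything outside sight range is unknown."""
            return {t: (v if t in visible_tiles else "unknown") for t, v in world.items()}

        def omniscient_view(world):
            """The classic cheating game AI: ignore the fog entirely."""
            return dict(world)

        print(fogged_view(WORLD, visible_tiles={0, 1}))  # enemy army and base hidden
        print(omniscient_view(WORLD))                    # sees everything; bluffing is pointless
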
    • by Calydor ( 739835 )

      This isn't so much about 100% solving it as it is about learning HOW to solve it. If the AI can go toe-to-toe with human players, that's great; if it can't even come close, then it's an area that needs more work.

    • by Kjella ( 173770 )

      Playing optimally does not mean you win every time. Take Texas Hold 'Em, for example: no matter how poor a hand you have pre-flop (the worst is 7-2 offsuit), against the best (a pair of aces) you still have about an 11-12% chance, depending on suits and flush draws, if you just shove every time and never see a flop. If it's your one-in-a-million lucky day, you could do that six times in a row and win every time. Every poker pro - and most amateurs too - will have some bad-beat story where they did everything right and
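
      As a quick back-of-the-envelope check on the "six times in a row" scenario, taking the parent's ~11-12% equity figure at face value:

          # Winning six consecutive all-ins at roughly 11.5% equity each
          # (7-2 offsuit against a pair of aces) is about a 1-in-430,000 shot.
          p_single = 0.115
          p_streak = p_single ** 6
          print(f"{p_streak:.2e}  (~1 in {round(1 / p_streak):,})")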

      • Nicely put!
        I think there are similar problems facing human language processing.
        Slang, cant and argot change very quickly, and so it's very similar to a Fog of War for computers.
        Teenagers, in particular, will change the meaning of words, or create words, or even maul grammar to include
        or exclude others from their cliques. Irony and sarcasm are other bollards to progress for computers, innit.
        AI Winter is Coming!

    • An AI can solve imperfect-information games; it just gets a lot harder. See, for instance, this solution of heads-up limit Texas hold 'em [sciencemag.org]. Since the game has imperfect information and aspects of randomness, it's impossible to win every single time, but in the long run the AI plays as well as or better than any other player.

      Just how much harder it gets shouldn't be overlooked. Even imperfect-information chess (Kriegspiel [wikipedia.org]) would be pretty much impossible. Now imagine how much greater a game state StarCraft 2
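
      The core technique behind that hold 'em result is the counterfactual regret minimization (CFR) family of algorithms. As a minimal sketch of the underlying idea (not the paper's actual solver), here is plain regret matching in Python, played on rock-paper-scissors against a fixed, exploitable opponent:

          import random

          ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
          PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

          def strategy_from_regrets(regrets):
              positive = [max(r, 0.0) for r in regrets]
              total = sum(positive)
              return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

          def train(opponent_strategy, iterations=100_000):
              regrets = [0.0] * ACTIONS
              strategy_sum = [0.0] * ACTIONS
              for _ in range(iterations):
                  strategy = strategy_from_regrets(regrets)
                  mine = random.choices(range(ACTIONS), weights=strategy)[0]
                  theirs = random.choices(range(ACTIONS), weights=opponent_strategy)[0]
                  for a in range(ACTIONS):  # regret: how much better action a would have done
                      regrets[a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]
                  strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
              total = sum(strategy_sum)
              return [s / total for s in strategy_sum]  # average strategy

          # Against an opponent who throws rock too often, the average strategy
          # converges toward (mostly) paper.
          print(train([0.4, 0.3, 0.3]))

      CFR does this regret bookkeeping at every decision point of the game tree, which is why the cost explodes for games the size of hold 'em, let alone something like StarCraft.
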
  • Let them play Global Nuclear War.

  • How about Global Thermonuclear War? https://www.youtube.com/watch?... [youtube.com]
  • by carvalhao ( 774969 ) on Saturday November 05, 2016 @07:32AM (#53218045) Journal
    I would be really interested to see what the results would be if you got DeepMind playing a game like Civilization, in which cooperation and soft power can be used to win the game. That could really give all of us some hints on how to manage diplomacy/belligerence in a way that could lead to some interesting thought experiments in the real world.
    • I would be really interested to see what the results would be if you got DeepMind playing a game like Civilization, in which cooperation and soft power can be used to win the game. That could really give all of us some hints on how to manage diplomacy/belligerence in a way that could lead to some interesting thought experiments in the real world.

      Not unless you made a game like Civilization which was more than a glorified board game. Civilization bears only the slightest passing resemblance to reality. As such, you can only learn the most superficial lessons from it.

      • by aliquis ( 678370 )

        Well, the one thing we know is that being restricted from "being racist" or expressing "hate speech" completely ruins your civilization, society, culture, technology and people.
        So that's a sure path to a loss in the game of Civilization.

        Vote Trump!
        Help us make EUrope.. well.. not worse and worse all the time again :D

      • by west ( 39918 )

        I doubt a Civilization AI's real-world applicability to real-life diplomacy, but a truly successful Civilization AI (i.e. one that played at master level exactly as a human would) would terrify the hell out of me. Being a master of Civ involves managing limited information and about a *thousand* degrees of freedom each move, if not more.

        It makes Go look like a cakewalk (in terms of the size of the decision tree).

        If AI's can do that, then probably 2/3rds of the intellectual-related jobs on the planet are

    • by aliquis ( 678370 )

      I wonder how long until an AI runs the government.

  • EULA (Score:2, Funny)

    by Anonymous Coward

    So it's fine if Google does it, but if I do it, I get a ban for running a bot? Okay then.

  • Playing games is NOT A.I. This, like "playing" Jeopardy, is just a flashy demo of algorithms and shows us how clueless AI researchers really are.
    • > Playing games is NOT A.I.

      AlphaGo was playing a game. So ... it was not A.I.?

      But even so, it was clearly something, what shall we call it, then?
      • > Playing games is NOT A.I.
        AlphaGo was playing a game. So ... it was not A.I.?
        But even so, it was clearly something, what shall we call it, then?

        AI, but not Artificial General Intelligence (AGI).

    • What we have is applied intelligence; as for the "artificial" kind of AI, that becomes a big philosophical debate. Even so, what is intelligence? If you start to define it in a way you can build upon logically, you end up with obvious conclusions where, for example, your house thermostat is intelligent.

      The domain or context is essential to the consideration too. So the house thermostat is intelligent, and within its tiny world it performs quite well, adapting and making decisions with its simple
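
      The parent's thermostat example, spelled out as a few lines of Python (the setpoint and hysteresis values are arbitrary):

          class Thermostat:
              """One sensor, one decision: 'intelligent' only within its tiny world."""

              def __init__(self, setpoint=20.0, hysteresis=0.5):
                  self.setpoint = setpoint
                  self.hysteresis = hysteresis
                  self.heating = False

              def decide(self, measured_temp):
                  # Adapt to the environment: heat when too cold, stop when warm
                  # enough, and otherwise stick with the last decision.
                  if measured_temp < self.setpoint - self.hysteresis:
                      self.heating = True
                  elif measured_temp > self.setpoint + self.hysteresis:
                      self.heating = False
                  return self.heating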

  • by johnnys ( 592333 ) on Saturday November 05, 2016 @08:55AM (#53218235)

    "teach artificial intelligence systems how to conduct warfare."

    Do you want Skynet?

    Because that's how you get Skynet!

  • My initial reaction when I saw this news was that it was a boring choice. It's a step up from Galaga, so maybe it makes sense as a stepping stone, but as a game, something like DOTA2 would be much more interesting.

    DOTA2, unlike SC2, heavily depends on both cooperation AND competition between players as an integral part of the standard game. It's got all the same fog-of-war issues (i.e. imperfect knowledge). There's, for all intents and purposes, one map, so you can focus on the strategy of the game without

    • by majorme ( 515104 )
      Yeah, nice trolling, bro. Your interest in team play is fine, but it doesn't exactly support your theory about how complex DotA 2 and SC2 are. Controlling a single character vs. a full-blown RTS? You can't be serious. I cannot sanction it.
      • by eddy ( 18759 )

        Yes, you have a lot of units in SC2, but often they're controlled as a group. You can micro, but it's mostly about the strength of the combined force, not individual units.

        An AI would have to be able to play a whole team, i.e. control and coordinate five units from a pool of over one hundred unique heroes, some of which can summon and control separate units in turn (e.g. Lone Druid) and have abilities that interact in various non-obvious ways (and where it would be interesting to see new combinations and count

    • The problem with DotA 2 is the snowball nature of the PvE aspect. An AI there will quite easily crush any team, because it will have near-perfect farm, which eliminates any skill/strategy a human is going to come up with. A fully slotted Spectre/Void/Sniper at 20 minutes is not all that interesting to win against.

      They also noted why SC was preferred: the team lead and presenter was a former UC Berkeley student who worked on Overmind, the SCBW AI.

      Simply put, DotA 2 is less taxing APM-wise, since as you p

      • by eddy ( 18759 )

        The only way you get to "play safe" and "have perfect lane equilibrium" is if your opponents aren't doing anything. This is no different from any other game.

        Farming is a much larger problem space in DOTA2 than in SC2, where there are already optimal or near-optimal strategies for the number of workers, etc. The first few minutes of an SC2 game are just the same old boring mechanical opening shit. IMO, YMMV.

  • I've always been a multiplayer man myself, with just about two decades of history fighting people in Quake and StarCraft. While bots work reasonably well in single-player games, multiplayer is a different matter. We have yet to be presented with a real AI in a game; they all cheat. My point is, I will pay money to watch e-sports where AI and humans fight on equal terms and we have no idea who is going to win, or how. My money is on the Korean pro.
  • They are still going to lose to some Korean guy
  • A lot of StarCraft is how fast you can do things that don't require you to be particularly clever (being clever is a large part of it, too, but time to execution is key, as is the ability to physically deal with multiple areas of the map). It seems very unfair if DeepMind can do things like see more of the map than its opponent's monitor shows, and move/select faster than someone could with a keyboard and mouse.

    • The MIT Tech Review article [technologyreview.com] stated that they will limit the commands per second to something in line with what a human (professional) player can do.

      I am not 100% sure what DeepMind's game awareness will be, but they do have a simplified graphics output for the AI (mostly just friend-or-foe, not the fancy artist-made pixels).
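
      A sketch of what such a command-rate cap might look like inside an agent loop; the 300 APM figure and the interfaces are placeholders, not anything DeepMind or Blizzard has published:

          import time

          class ApmLimiter:
              """Refuse to issue actions faster than a human-professional rate."""

              def __init__(self, max_apm=300):
                  self.min_interval = 60.0 / max_apm   # seconds between actions
                  self.last_action = float("-inf")

              def try_act(self, issue_action, action):
                  now = time.monotonic()
                  if now - self.last_action >= self.min_interval:
                      issue_action(action)             # stand-in for the real environment call
                      self.last_action = now
                      return True
                  return False                         # too soon: the agent must wait or no-op
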
  • Zergling rush kekeke.

    Not sure how you can defend against an AI that can simultaneously and individually control 200 units. Kind of like the mutalisk-and-pathing issue in SC1, SC2 is broken in many ways where it's easy for an AI to win simply by brute force and superhuman unit control. Even Koreans and other high-level players make many mistakes in optimization and strategy, and simply make up for it by being physically faster than their opponents.

  • "teaching itself" (Score:4, Insightful)

    by Bobtree ( 105901 ) on Saturday November 05, 2016 @11:02AM (#53218623)

    > DeepMind learned to beat the best Go players in the world by teaching itself through trial and error.

    AlphaGo was trained on databases of historical games. It looks for moves that are similar to what a human pro would play, and then reads out sequences to score the strength of the resulting position. It did not learn by itself from scratch. Once proficient, it was played against itself to improve.
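
    Schematically, the two phases described above look something like this; every name below is a hypothetical stand-in, not DeepMind's code:

        def train_alphago_style(policy_net, human_games, selfplay_rounds, play_one_game):
            # Phase 1: supervised learning. Imitate the moves human professionals
            # actually played in a database of historical games.
            for position, pro_move in human_games:
                policy_net.update_supervised(position, pro_move)

            # Phase 2: once proficient, play against frozen copies of itself and
            # reinforce the moves made by whichever side won.
            for _ in range(selfplay_rounds):
                opponent = policy_net.frozen_copy()
                moves, winner = play_one_game(policy_net, opponent)
                policy_net.update_reinforce(moves, winner)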

  • So much for their commitment to integrity if they'll let a deep-pocketed botter buy them up.
