
AIs vs Humans - Next Battle: Starcraft (businessinsider.com) 173

braindrainbahrain writes: Having conquered checkers, chess, and more recently Go, artificial intelligence research now looks at the next frontier: the popular real-time strategy game of StarCraft.
Blizzard Entertainment's president reached out to Google's DeepMind researchers last month, who are now describing StarCraft as "our likely next target". But many top StarCraft experts believe AIs will fail because "Unlike machines, humans are good at lying," reports the Wall Street Journal. An executive at the Korea e-Sports Association tells them "It's going to be hard for AI to bluff or to trick a human player."

University of Alberta computer scientist David Churchill counters that "When the AI finds that the only way to win is to show strength, it will do that. If you want to call that bluffing, then the AI is capable of bluffing, but there's no machismo behind it." Unfortunately, for five years Churchill has been running AI-vs-human StarCraft tournaments, and "So far, it hasn't even been close... Using a mouse and keyboard, the world's top players can issue 500 or more commands a minute," the Journal reports. But they add that now both Facebook and Microsoft are also working on small StarCraft AI projects.
This discussion has been archived. No new comments can be posted.

  • by JcMorin ( 930466 ) on Sunday April 24, 2016 @04:25PM (#51979339)
    For instance, micro marines so they never stay closer than their maximum range... or any unit, in fact. I'm also looking at tank and medivac drops... I would see a deadly combination here. But on overall strategy, I don't think AI is ready to beat a human... yet.
    • They do already. There's an API and actual AI competitions:

      https://github.com/bwapi/bwapi [github.com]

      • by Ksevio ( 865461 )
        But the AI loses pretty badly against human players
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      For instance, micro [manage units] so they never stay closer than their maximum range...

      The player is effectively commanding AI already. If we want to test strategy, not fast-twitch processing, then such features should be added to the units' AIs that both the strategic AI and the players command. In other words: if the troops you send into battle can't "never stay closer than their maximum range" on their own, then you have stupid troops who need to be replaced with smarter troops. I.e., if the AI wins because the game is an unrealistic simulation that favors the AI winning, then it doesn't say shit

      • by Nemyst ( 1383049 )
        But that's besides the point because the game was never designed to be played by a machine. What an AI could exploit is very different from what players can exploit. A good player can definitely make use of speed to pull off some amazing feats, but only very rarely will this actually save them if they didn't have a good strategy behind it (and a good part of their win would be that their opponent made mistakes, was too greedy). An AI could outright break the game by spamming specific actions that a player i
    • I'm not sure how insane micro would be cheating, it is simply the computer being better at micro than humans. Knowing the DeepMind team they will be using the same inputs/outputs as the human players so the AI will not be able to actually cheat in any way. And as far as strategy is concerned, do you really think Starcraft strategy is much more advanced than Go strategy? Personally I doubt it (in fact given the frantic nature of Starcraft matches I would rather think that the strategic component would be rel

      • I don't think you can simply say that Go strategy is more developed. We're comparing apples and oranges. Go and Chess operate with a fairly strict set of rules that govern movement/placement and bound what is allowable. StarCraft does to some degree, but the fact that such a big deal is made here of "bluffing" is significant. A rook in Chess can only move to certain spaces in any given situation. A Go piece can only be legally placed in certain locations depending on the situation. A single marine in StarCra
      • I'm not sure how insane micro would be cheating, it is simply the computer being better at micro than humans.

        Because it's not showing intelligence, it's being able to click fast. We already know computers can click faster than humans, that's not a question.
        The "Starcraft AI" is a thing because they are trying to improve the intelligence of computers. If all they do is click fast, they have cheated on the goal of intelligence.

        • Because it's not showing intelligence, it's being able to click fast. We already know computers can click faster than humans, that's not a question.
          The "Starcraft AI" is a thing because they are trying to improve the intelligence of computers. If all they do is click fast, they have cheated on the goal of intelligence.

          Just like an autonomous car is "cheating" because it has a faster reaction time than a human? If an autonomous car is a safer driver than a human, who cares how it "cheats"? For Starcraft, if it really became a problem, then limiting both humans and AI to X actions per second would be a reasonable compromise, where X is what a fast human can do. But it's still quite an accomplishment if AI can beat a human in an open-ended game like Starcraft.

          Speaking as an intermediate starcraft player, I think starcraft would be a better game if either the number of actions per second were limited or if there was more scripting available for the human player. It sucks when the winner is the person who clicks the fastest instead of the person with the best strategy.

          • Just like an autonomous car is "cheating" because it has a faster reaction time than a human?

            No, you're not thinking. The goal of an autonomous car is not to show intelligence, it's to drive autonomously. If it does that, cool, problem solved.

            The point of building an AI to play Starcraft is to show intelligence. If all it does is dumbly click and micro a single marine to victory, then that's cool, but you failed to show intelligence.

            Speaking as an intermediate starcraft player, I think starcraft would be a better game if either the number of actions per second were limited or if there was more scripting available for the human player

            If you're below masters, that is not why you lose. I constantly beat people with twice my APM. The key is to build more units faster; again, if you're not in master

            • Rather, you should never have a large stockpile of minerals or vespene gas. If you have 50 minerals and you're not saving it for a siege tank or building or something, why the hell didn't you build a marine?

              • That's a good principle in the opening game, but floating 500 minerals won't usually ruin your game unless someone attacks you at that exact moment (forgetting to build workers can keep you in bronze league, though).
            • This is why I never enjoyed StarCraft 2 as much as the original. APM is where the focus went in multiplayer. It's obvious because in the single player, the units are far more autonomous. Built SCVs automatically start harvesting, units are smarter on their own etc. All of that goes away in multiplayer matches.

              And this is the key issue. Computers winning at micro is trivial. All you have to do is stop making the units in the game autonomous at all (mini AI to help the human). Just make the game so that human

              • Built SCVs automatically start harvesting, units are smarter on their own etc. All of that goes away in multiplayer matches.

                Uh, bro, SCVs automatically start harvesting in multiplayer, too. Click on your command center and then right-click on a mineral patch, and they'll start going where you want.

                The micro and mechanics in sc1 are much more difficult.

          • Speaking as an intermediate starcraft player, I think starcraft would be a better game if either the number of actions per second were limited or if there was more scripting available for the human player. It sucks when the winner is the person who clicks the fastest instead of the person with the best strategy. I like RTS better than turn based but maybe some middle ground where it's realtime but there is a "click meter" that gets depleted might level the playing field a bit.

            So in other words, you don't like the fact that some players have an advantage over you because they are more skilled in one aspect of the game, so you want that skill to no longer be a determining factor in who wins? I mean, why not just demand they remove everything you're not good at, thereby making you a top tier player not through personal improvement, but by bringing the skill ceiling down to you.

            • So in other words, you don't like the fact that some players have an advantage over you because they are more skilled in one aspect of the game.

              It has nothing to do with me being unskilled at it. I said I think it would be a better game. Starcraft was fun when I first started playing it, when it was about who could outsmart the other team. It became a lot less fun when it became about who can churn out units the fastest and/or micromanage the best. That's just not fun to me. Now as a programmer, writing AI to compete with other AI on starcraft, that I would enjoy, but I no longer enjoy playing starcraft as much as I used to because I don't like the micromanagement

            • Clicking fast is a "skill" but the bigger question should be which is more important? The "Real Time" or the "Strategy?" If it's more important to make things real time, then why have units with any autonomy at all? If clicking fast is the skill we are competing on, make resource gathers need to be manually told to return cargo and then go get another load. If the strategy part is the point of the game, then clicking fast shouldn't be an overwhelming advantage. And especially in StarCraft it is, especially

        • But the goal isn't to "make intelligence", it's the same as chess and go - to win the game. Chess computers use a stack of pre-computed chess-specific tweaks. Winning at Starcraft is the goal. If bots end up doing that through insane micro, it might lead to the development of more balanced strategy games that don't reward the twitch/micro skills of bots. Plus, the main problem here isn't the number of states. Complex states can be broken down hierarchically and heuristically pretty easily. It can actually
          • But the goal isn't to "make intelligence", it's the same as chess and go - to win the game.

            We already know computers can win with insane micro, so cool, but if someone achieves that, no one will be impressed.
            It's not even a worthy goal.

      • by ranton ( 36917 )

        And as far as strategy is concerned, do you really think Starcraft strategy is much more advanced than Go strategy? Personally I doubt it

        I would agree Go strategy is more advanced, but I also believe the tactical maneuvers in Starcraft require far more general intelligence than the strategy of either game. Deciding where to put a stone in Go requires far more strategic thinking than where/when to attack or what build order to use, but the process of acting on those strategic decisions is far different. Exactly how to attack, what units to target, when to evade units, how to evade units, what formation to use, etc. places far more decision making

    • by Punto ( 100573 )

      "cheating"? They're not cheating, anymore than you're cheating by using your hands and eyes to play the game. It's an AI vs Humans contest, you don't get to set the rules, especially you don't get to transfer the rules of the Humans vs Humans contest into this. The AI has special abilities, just like humans, they should be allowed to use them. I've seen the AI controlled marines escape swarms of banelings or ultralisks, they already beat humans, and it has nothing to do with "intelligence"

    • by Kuruk ( 631552 )
      Lying is just a strategy. The computer will learn it just fine by watching games and seeing the winning moves. Humans think they are so unique, but the fact is we all think alike, and a computer will see our patterns of play and counter them.

      I don't see why each AI challenge is met with "humans will win."
  • by backslashdot ( 95548 ) on Sunday April 24, 2016 @04:37PM (#51979399)

    First off it's very easy to write an algorithm that lies and misinforms when optimal. Second, and this is a joke, have you ever seen a progress bar be accurate when downloading or installing something?

    • by Amouth ( 879122 )

      this is a joke, have you ever seen a progress bar be accurate when downloading or installing something?

      There is a difference between being able to lie and being good at lying. Everyone knows that progress bar is wrong, so that is bad lying. Good lying would be being able to convince you it is right, making you do something different than you otherwise would because of the lie.

    • by Solandri ( 704621 ) on Sunday April 24, 2016 @06:44PM (#51979977)
      Through years of trial and error, Starcraft build orders (the order in which you build units and buildings) have been optimized to get you to a certain build state in the minimum amount of time. Build orders are queued, which means there's no human-induced delay. An AI will have little to no advantage there - it could gain a slight advantage with building placement to minimize unit travel times.

      If you've watched any advanced Starcraft tournament games, the end result usually comes down to players' ability to micro while maintaining these build queues (an AI would probably win at those), or to bluffing. That's when you fake out an opponent by showing him a unit or building to make him think you're going for a certain build, but then you go for a different build. Your opponent scouts you, guesses what build you're going for, and modifies his build to counter yours. But you know you've been scouted so you change your build. Then when he's built up his army and encounters you again, he finds you've switched to a different build that his is ineffective against. And since different builds require different buildings and technology trees, it's too late to switch builds. Your opponent has to try to hold on with his inferior build as best as he can until he can get a new tech build up and running, all the while hoping your next tech shift won't counter that.

      This is why in Starcraft it's not just important to scout, it's important to know how much of your base your opponent has scouted. You'll see advanced players do all sorts of crazy things like start constructing a building, then when their opponent's scout has left or been killed, they'll destroy the building and construct a completely different one. All the unit strength the AI can muster won't do it any good if the human has bluffed it into building ground combat units, while the human has built up a massive army of air units. And like the early computer chess games, once word gets out that an AI is vulnerable to a certain bluff, people will abuse it over and over.
      • You seem to think a computer can't understand probabilities and can't learn from prior mistakes. That's a false notion. If a computer keeps falling for the same trick, it can adjust itself so it acts more probabilistically.

        • by Solandri ( 704621 ) on Monday April 25, 2016 @02:07AM (#51981033)
          Oh, I know a neural net can learn and tweak its responses based on past experiences. The beauty of bluffing is it can totally screw up that learning process

          For years, Ty Cobb famously overran 3rd base instead of stopping every time a certain player fielded the ball. That forced the player to throw the ball to third base to force Cobb back. Eventually the player got used to Cobb overrunning third base and his throws to force him back got lazy and slow. Then one day in an important game with the score tied, Ty Cobb overran third base, the player made a lazy throw to third to force him back, and Cobb broke for home and scored the winning run.

          Really good players develop an innate sense for when an opponent is bluffing. I can't explain how it works, but I know it does. When I was kid, I had this innate sense for The Price is Right. I could predict with about 95% accuracy when the announcer was going to say the prize was a new car. I have no idea how I did it, but my subconscious was getting some sort of signal from the inflection of his voice or the delay in his speech or something that told my conscious mind that he was going to say a new car. Ty Cobb was also exceptional at this sort of thing. When a teammate once asked him how he was able to hit so well against a certain pitcher who gave everyone else problems, Cobb replied that the pitcher's ears wiggled every time he was about to throw a curve.

          This works when your mind is flexible enough to consider all possible inputs, even the seemingly irrelevant ones. It doesn't work with an AI programmed to look at only a limited number of "important" inputs to keep the CPU load down.
      • the end result usually comes down to players' ability to micro while maintaining these build queues

        Also positioning, and building the correct unit composition to counter your opponent, and also knowing where to attack. Some strategies are really complex, here's an example where positioning is more obvious than normal [day9.tv].

    • by dmomo ( 256005 )

      I agree. Computers don't have to lie. Meaning, they will be able to arrive at the same actions without knowing that we'd see it as lying. But they'd still do it. They don't have to know why, they just have to know that certain actions correlate to success given certain situations.

    • by houghi ( 78078 )

      The hard part of lying is not the not-telling-the-truth part, but making people believe it IS the truth.
      The example you give is a great example of that. We all know it isn't true, so it is a bad lie.

      Other things are less obvious. Are the voting machines telling the truth, or are they lying? If there is doubt, we could check it, unless the people doing the checking are also the ones who gain from the lie (for whatever reasons).

      The thing is that we always assume that computers tell the truth. e.g. You have e

    • The Spades AIs on many online versions of the game are good at lying. Especially if they say they will cover you when you, their partner, go nil.

    • The way the AI will play will likely involve neural networks trained on past games by champion level players. If lying forms an important part of those games, then the AI will learn to lie.

  • Data vs the Zackdorn (Score:4, Interesting)

    by spire3661 ( 1038968 ) on Sunday April 24, 2016 @04:44PM (#51979443) Journal
    Data couldn't beat Kolrami, so he forced him into what would have been an indefinite stalemate. Kolrami found this incredibly insulting and forfeited. Data won by having no ego. He busted him up.....
  • by SuperKendall ( 25149 ) on Sunday April 24, 2016 @04:44PM (#51979445)

    It seems like an AI would be really susceptible to being "trained" to react in a certain way by a player, who could then take advantage of that by sending up fake signals early and doing something that takes advantage of the anticipated AI response.

    That may seem the same as "AI's cannot lie", but it's actually more about an AI being more susceptible to bluffs than a human player would be.

    • by AK Marc ( 707885 )
      Yes. The iocane powder scene from The Princess Bride will always result in the computer picking the wrong one, as the human will learn the pattern by which the computer picks. The idea of tricking someone else also requires understanding trickery, to recognize the pattern when applying it, as well as when it happens.
    • by AmiMoJo ( 196126 )

      The same could be said of human players. The key is that in tournaments you only play each opponent a few times at most, so the opportunity for training is somewhat limited.

      Having said that, IBM's computer did it to Kasparov by making the same mistake a few times, and then when he tried to take advantage again countering it. IIRC the engineers had to manually program it to avoid the trap though, it wasn't something the computer planned to do.

  • by Woldscum ( 1267136 ) on Sunday April 24, 2016 @04:48PM (#51979471)

    Blizzard is a dying company. HOTS is a huge flop. Overwatch will also be a flop. Starcraft is basically dead. Diablo has a following and Hearthstone is the only real hit they have. WoW is kept on life support by the fanboys. After the next expansion flops again it will finally die.

    • by Anonymous Coward

      Imagine how much money you're going to make when you short their stock. You'll be rich I say! Rich!!!

    • Actually I think all of their games have been a financial success compared to what would normally be called a success in the game industry. It is just that WoW was a ridiculous juggernaut cash cow. You simply can't plan for every game to give that kind of financial success, and if revenue drops off from WoW, they will have no choice but to downsize into a more normal, if still successful, sized company. If anything, it looks like they are managing the transition better than many companies in the past.
  • Unfortuantely
  • They should do this with Total Annihilation and see what kind of behavior the AI comes up with.
    There was some fun, and devious, stuff in TA -- like using your robot troop transport to fly into an enemy base, kidnap the Commander, and then self destruct.
    Tactics that weren't documented, and only emerged in the community over time as they were discovered.

  • Every game will be over in 4 minutes as the computer just cheeses everyone.
  • As An AI Researcher (Score:5, Interesting)

    by Anonymous Coward on Sunday April 24, 2016 @06:14PM (#51979843)

    I'm a CS Masters student doing a thesis on an RTS AI. Computers can beat humans, we just haven't tossed enough CPU at it yet. RTS games are exactly the same as checkers, chess, go, etc... except you have more pieces, more board positions, and more than one piece can be moved per turn. To reduce that into something computable, you need good abstractions. Once you have those, the game becomes a tree search, same as all the board games. Google/IBM can bring enough computing resources to the table to win. There are some bumps in that: imperfect information, teams, etc... but they don't change the core algorithms.

    Computing the entire game tree is too expensive. They'll probably do it at a unit/battle level, at a squad level, at a city level, and at a long term strategy level. Doing things at different levels greatly reduces the search space. From your training data you'll know how well you can expect the battle manager to handle an upcoming attack with an expected loss of XYZ at some probability, so the strategy component doesn't need to bother with all the minor details of how to fight it.

    500 commands a minute? That's nothing. With the computing resources of a super computer, expect the AI to be able to issue an order to every individual unit every game turn. And yes, at the game engine level all real-time strategy games are actually turn based.

    When you have the resources, a tree search over a game's state space with a little bit of memory (so the enemy can't get your units stuck in a circle) is effectively unbeatable.
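The layered search described above can be sketched in miniature. Below is a generic depth-limited minimax over an abstracted state; the function names, and the idea of plugging a learned battle-outcome estimator in as `evaluate`, are illustrative assumptions, not any real StarCraft bot:

```python
import math

def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Depth-limited minimax over an abstracted game state.

    `evaluate` stands in for the lower-level managers the poster
    mentions: instead of simulating every unit, it returns an
    estimate of how things resolve from `state`.
    """
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = -math.inf
        for m in moves:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               get_moves, apply_move, evaluate)
            if score > best:
                best, best_move = score, m
    else:
        best = math.inf
        for m in moves:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               get_moves, apply_move, evaluate)
            if score < best:
                best, best_move = score, m
    return best, best_move
```

In a real bot, `get_moves` would enumerate high-level options (expand, attack, tech) rather than per-unit commands; that abstraction step is exactly what keeps the tree tractable.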

    • Hidden movement massively increases the search space, however, because there is no definitive "current state" the computer can bank on to make future predictions - it must assume a quantum-mechanics-style cloud of "possible states" the board could be in, not just in the future, but right now. Also, doubling the number of possible moves doesn't mean doubling the computing needs; it means 2^N times the computing needed to look N moves ahead. If you want to always look 10 moves ahead, doubling the number of valid mo
      • With hidden movement, say that both sides could make an average of 50 choices per "unit time". The computer would need to evaluate its 50 choices on Turn 1, and evaluate 50 enemy moves as well. That's 2500 combos to process in the opening turn. But because there's hidden movement, on turn 2 the player only knows which of the 50 moves it took, but doesn't know which of the 50 the enemy took. Hence, on turn 2, the computer must process 50 different "possible" current board positions, multiplied by the further 50x5
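Putting toy numbers on this (50 options per side, as in the comment above; the two functions are illustrative arithmetic, not a real StarCraft model):

```python
# Toy branching arithmetic for 50 choices per side per turn.
B = 50

def perfect_info_nodes(turns):
    # With perfect information, each turn expands every combination
    # of my 50 moves and the enemy's 50 moves: 2500 combos per turn.
    return (B * B) ** turns

def possible_boards(hidden_turns):
    # With hidden movement, every unobserved enemy turn multiplies
    # the set of board states that might be the *current* one by 50.
    return B ** hidden_turns

print(perfect_info_nodes(1))   # 2500 combos in the opening turn
print(possible_boards(1))      # 50 possible boards entering turn 2
```

The comment's point survives the toy model: under fog of war it is the belief-state bookkeeping, not the raw move count, that blows up.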
    • When you have the resources, a tree search over a game's state space with a little bit of memory (so the enemy can't get your units stuck in a circle) is effectively unbeatable.

      ? A tree search branches so quickly that it doesn't matter how many resources you have, you can't possibly calculate them all. That is where the intelligence comes in: figuring out what branches to prune.

    • How applicable is this to real-time RPG's? And if it's equally applicable how is that different from real-life war fighting?
    • As far as I understand micro in games like Starcraft, a computer AI should have a huge advantage. In a battle, a human player will usually direct the fire of several units at once. A computer would direct every single unit, pointing it at the optimal target and also moving away units that have taken fire. Starcraft II marines do a stutter-step type of movement, for example, where they use the pause between shots to move, losing almost no fire time. Thus a computer player would have perfectly balanced health, every ti

      • by Maritz ( 1829006 )
        The AI loses at the moment because it doesn't make enough of the right types of unit, or have them in the right location. I agree with your premise - an equally sized engagement should result in an AI win every time, because of micro.
    • RTS are exactly the same as checkers, chess, go, etc... except you have more pieces, more board positions, and more than one piece can be moved per turn.

      ...but more importantly, limited information. In checkers, chess, go, &c all players have perfect information. In Starcraft, you have the "fog of war", which adds a different dimension to the gameplay (i.e., the importance of scouting and the possibility of deception).

    • by AmiMoJo ( 196126 )

      Wasn't the whole point about a computer beating a Go master that the game is so complex it can't be reduced to a tree search? They had to go beyond simply mapping out all the possible moves and counter-moves, like they did with Chess. In fact, even with Chess they put in a lot of biases for proven strategies.

      • Wasn't the whole point about a computer beating a Go master that the game is so complex it can't be reduced into a tree search?

        FWIW it was still a tree search, they just were more efficient at pruning than they were with the chess engine (much more efficient).
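For what it's worth, the "more efficient pruning" in AlphaGo's case was Monte Carlo tree search guided by policy/value networks rather than exhaustive alpha-beta. The classic selection rule MCTS builds on, UCB1, is simple to state; this is a textbook sketch (using plain dicts for tree nodes as an illustrative choice), not DeepMind's actual code:

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.414):
    """UCB1: balance exploitation (average value so far) against
    exploration (rarely visited children get a bonus)."""
    if child_visits == 0:
        return math.inf  # always try unvisited moves once
    return (child_value / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

def select_child(children, parent_visits):
    """Pick the child maximizing UCB1, so the tree grows along
    promising lines instead of expanding every branch."""
    return max(children,
               key=lambda ch: ucb1(ch["value"], ch["visits"],
                                   parent_visits))
```

Repeatedly descending along the highest-UCB1 child concentrates compute on promising branches, which is the "pruning" effect the parent post describes.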

  • by FunkSoulBrother ( 140893 ) on Sunday April 24, 2016 @06:43PM (#51979971)

    How about they work on writing an AI that can play a competent game of Civ V without cheating.

    RTS is much less interesting since a big component of RTS is actions per minute/reflex based. Of course a computer is going to be better at that.

    • That actually sounds really fun.
    • by adolf ( 21054 )

      How about they work on writing an AI that can play a competent game of Civ V without cheating.

      RTS is much less interesting since a big component of RTS is actions per minute/reflex based. Of course a computer is going to be better at that.

      I'd like to point out a few things:

      1. Google and their competitors are working on this same problem.
      2. Operating Skynet is going to be closer to a Starcraft-esque RTS than Civ V.
      3. There can only be one.

    • Oh yes please. Apart from the great, scientific achievement that would be, us Civ fanbois really need a somewhat competent Civ V AI to make the game suck less.

  • a significant limiting factor in playing starcraft is players' ability to manage a large number of units at the same time. with a machine, speed is not an issue, so it will always win by that measure. chess and go didn't have a time component to them, so it was purely about strategy. the only way to make a battle of starcraft a fair fight over intelligence is to slow the game down by requiring both players to agree to move on to the next event cycle (aka "tick") in the game. it would be an absurdly slow

  • How about beating humans at something like Cards Against Humanity? Or Werewolf?
    • Actually, I bet a sufficiently-well-trained AI could win at CAH (for values of "win" equal to "get the most black cards"; in practice everybody wins in a good game of CAH). Even without a camera watching player expressions and so on, an AI can learn combos that work well (and who they work well for), and see all kinds of relations between cards in terms of how different players react to them. It would take a lot of training - quite possibly an infeasible amount - to be good enough to beat *arbitrary groups*

    • How about beating humans at something like Cards Against Humanity? Or Werewolf?

      I'm still looking for the computer that can beat me at boxing.

  • Starcraft API (Score:4, Informative)

    by braindrainbahrain ( 874202 ) on Sunday April 24, 2016 @09:25PM (#51980467)
    BTW, if anyone wants to jump in and design their own Starcraft AI, this API [github.com] is available for you to do it (I have no connection to the API project, btw).

    The API is for StarCraft: Brood War. If anyone knows of an API for the more recent StarCraft II, please post.

  • Starcraft is too much of a toy world to be convincing of AI capabilities.
    A grander challenge, but still toy-world, would be some smaller games, like the Space Station 13 series. Unlike Go, StarCraft and other games, SS13 success relies on some degree of co-operation between players.
    Teenagers, especially, use natural language and anti-language, with words and expressions that only small groups within the "in-crowd" understand. Once adults (or AIs?) start using the same groovy words, kids will often then sta

  • I mean seriously. ha ha hahah HAHA HAA H HAH AHAHAHH HAHHAHA ha he he heh heh.

    Chess players, Jeopardy players, Go players all laugh and weep at the same time in sympathy for you.

  • "A machine isn't good at lying."

    Yeah right. Until some college students find the ultimate "SuperCrushRush(TM) StarCraft Bluff Algorithm" and their box mops the floor with any number of human opponents in a game of SC.

    It's a modern microcomputer, people!
    Take one, give it insane specs, a small army of engineers and a few years' time, and they will find a way to make the machine outperform every human at a very specific task. ... This isn't news, this is blatantly obvious.
