Humans Are Still Better Than AI at StarCraft (technologyreview.com) 142

29-year-old professional StarCraft player Song Byung-gu won 4-0 in the world's first contest between AI systems and professional human players, writes MIT Technology Review. An anonymous reader quotes their report: One of the bots, dubbed "CherryPi," was developed by Facebook's AI research lab. The other bots came from Australia, Norway, and Korea. The contest took place at Sejong University in Seoul, Korea, which has hosted annual StarCraft AI competitions since 2010. Those previous events matched AI systems against each other (rather than against humans) and were organized, in part, by the Institute of Electrical and Electronics Engineers (IEEE), a U.S.-based engineering association.

Though it has not attracted as much global scrutiny as the March 2016 tournament between Alphabet's AlphaGo bot and a human Go champion, the recent Sejong competition is significant because the AI research community considers StarCraft a particularly difficult game for bots to master. Following AlphaGo's lopsided victory over Lee Sedol last year, and other AI achievements in chess and Atari video games, attention shifted to whether bots could also defeat humans in real-time games such as StarCraft... Executives at Alphabet's AI-focused division, DeepMind, have hinted that they are interested in organizing such a competition in the future.

The event wouldn't be much of a contest if it were held now. During the Sejong competition, Song, who ranks among the best StarCraft players globally, trounced all four bots involved in less than 27 minutes total. (The longest match lasted about 10 and a half minutes; the shortest, just four and a half.) That was true even though the bots were able to move much faster and control multiple tasks at the same time. At one point, the StarCraft bot developed in Norway was completing 19,000 actions per minute. Most professional StarCraft players can't make more than a few hundred moves a minute.


Comments Filter:
  • by ShanghaiBill ( 739463 ) on Sunday November 05, 2017 @05:51PM (#55495521)

    It used to be news when software beat humans at yet another game. Now it is news when we find a game that humans can still win.

    • Now it is news when we find a game that humans can still win.

      And easily (this may explain why it is news...) - from the summary:

      all four bots involved in less than 27 minutes total. (The longest match lasted about 10 and a half minutes; the shortest, just four and a half.)

      * Former StarCraft player here: every game I can remember that lasted less than 10 minutes was an extremely easy game...

      • Seems strange that the bots were not able to deal with an early-game rush; that seems like an area where bots could do quite well, since the strategies are well known and it mostly comes down to playing out a script precisely.
    • Really insightful. Wish I could mod you up.

      Now, if only jobs involved the total skill set required by StarCraft, we could rest easy.

      • by Boronx ( 228853 )

        Maybe not. These bots would probably beat 90+% of humans even if we were all trained in Starcraft.

    • It used to be news when software beat humans at yet another game. Now it is news when we find a game that humans can still win.

      How about a game I just made up, it's called "not being a filthy toaster." To win you have to not be a toaster. Someone write an article about this post now, I demand the front page.

    Human players are as good as they are going to be. AI players are constantly improving. It took decades of progress for computers to win at chess. Many programmers and chess players assumed it couldn't be done. Exponential improvements in hardware, along with specialized algorithms and the relentless pace of machine learning, overcame all obstacles. It will happen with StarCraft. The human players have peaked. The AI is just getting started.
  • I'm genuinely wondering: what makes StarCraft stand out? Is it something particular about this game, or is it just the most well-known RTS game, and AIs generally have a problem playing RTS games well?

    • Probably the most popular RTS game right now. It's also pretty well balanced between playable races as I understand it.

    • Mostly because it was a regular game at the WCG for more than a decade, with professional players, leagues, and channels devoted to it for even longer. And about 10 years ago it got hacked so that software could read the game's variables and move the units, so bot development began before the machine defeated humans at Go; this is just the next natural step.
    • StarCraft had by far the best "playability" (or "fluid gameplay") of the multiplayer RTSs of its time, despite the technical limitations of the era: the game was very playable even with a low-end graphics card/CPU and a dialup connection.
      • I first played this H2H against a buddy on a 166 MHz Pentium with 64 MB RAM via a 33.6 kbps modem and STILL enjoy a round to this day. The original "craft" games were standard-bearers for things to come and have yet to be overtaken.
    • by Lanthanide ( 4982283 ) on Sunday November 05, 2017 @06:25PM (#55495637)

      Here are some general thoughts on what makes SC difficult for an AI.

      Starcraft has areas of focus called 'macro' and 'micro'. Macro is base building, selecting which tech tree to advance down, upgrading, building your economy. Micro is controlling small groups or individual units and their position on the map and how they engage with enemy units. For a very rough idea, alternative terms would be macro = strategy, micro = tactics.

      Someone who is excellent at macro can be destroyed by someone who is lousy at macro and excellent at micro, and vice versa. The top players are excellent at both macro and micro.

      Scouting what your opponent is doing is a big part of both your macro and micro strategies, but in different ways. If you see the enemy has built unit factory X, then you'd better counter it by building Y (macro). If it's in map location Z, then you'd better make sure you get your units to position A to head them off (micro).

      There are 3 races in SC, so there are 6 possible match-ups, including the 'mirror' match-ups. If both players pick random, then each player must scout the enemy to learn their race. Then there are known build orders, so if you scout your enemy at the 2-minute mark and see X and Y, you can conclude they are (probably) using strategy A. But if you get that exact same set of information at the 4-minute mark, then concluding they are using strategy A may be a mistake.

      Humans know these build orders (like expert players memorising important chess gambits), but the AI likely doesn't, so it has to brute-force everything. Brute-forcing is possible in chess, and was long thought impossible in Go. So StarCraft is an extension in this same area.
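A quick sanity check on the match-up count mentioned above: with 3 races, an order-independent pairing (including mirrors) gives exactly 6 match-ups. A minimal sketch, using the standard race names:

```python
from itertools import combinations_with_replacement

races = ["Terran", "Zerg", "Protoss"]

# Every unordered pair of races, with repetition allowed (mirror match-ups).
matchups = list(combinations_with_replacement(races, 2))
mirrors = [m for m in matchups if m[0] == m[1]]

print(len(matchups), len(mirrors))  # 6 match-ups, 3 of them mirrors
```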


    • It's probably not just Starcraft. But since Starcraft is such a popular RTS, and is very mature as an esport, it makes a good metric for AI systems.

      I'd bet the best computer bot would also get its ass kicked at Civilization by any competent player. And that's probably true of any other game with a complex enough ruleset which doesn't rely on sheer mechanical fitness (such as an FPS aimbot). Additionally, these are games with limited information about the state of the world. That is, your view is limited

      • by Kjella ( 173770 )

        My guess is that they'd rather try something like StarCraft because the mechanics are more generic, like you collect resources, build units and attack enemies in quite free patterns whereas in Civilization the tactic is heavily driven by the game mechanics. Like, you do things in a particular order every time. I think Civilization would be easier for an AI to win with a balanced strategy and then win on micro-management.

        • I think Civilization would be easier for an AI to win with a balanced strategy and then win on micro-management.

          Were that the case, I don't think the game's built-in AI would have to cheat quite as much as it does. Granted, the in-game AI can only rely on the local machine's CPU power, but I still think you're underestimating how much strategic depth the game has. Micromanagement will only get you so far.

    • by Njorthbiatr ( 3776975 ) on Sunday November 05, 2017 @06:50PM (#55495737)

      I've played StarCraft competitively (note: not professionally) and follow the professional scene (yes there's still a pro scene). There's a special depth to Starcraft that other games lack.

      Most of the ranks below top level are all about macro/micro mechanics. Macro (economy, building units, ensuring the supply cap doesn't ever get hit) is easy for AI to do. Micro is more challenging, but still a task that's better suited to computers than humans. Top-level players are already so good at both of these that there are really only diminishing returns left for computers to gain an edge over them. They even know exactly whether they'll win a fight between each unit, almost as if they're subconsciously calculating the end state of any engagement automatically. You can even see them know exactly how many attacks it will take. It's freakishly superhuman.

      They have also developed some sort of insane intuition. You can watch them play against each other and move units to locations or build defenses in places only seconds before they need them, despite having zero knowledge of the impending threat (such as building turrets right when there's about to be a drop on your resource line). There are instances where the tactics become so deep that they manipulate "thinking ahead" into double or triple bluffs to create openings. To beat these players, you need to have a deep understanding of human motivations. Classic tricks such as hold-position lurkers spell doom for computer opponents, who need to understand where their enemy might be laying a trap for them, /if they have gone lurkers/. For all you know they're using stop lurkers to deceive you into thinking lurker/ling/hydra is their strategy, while they're in position to wipe out your workers with muta micro.

      It's simply not a game where you can calculate the odds and win every game, because you lack sufficient information to calculate any meaningful conclusions.

    • StarCraft is fairly well balanced, despite being asymmetric. As such, there is no one strategy guaranteed to work every game, not even against the same race. It is very old at this point, and mostly well understood. As a result, you can easily find players for the AI to fight, with a wide range of skill, ranging from noobs such as myself up to Korean pros like Jaedong, who would dismantle most AIs with his muta micro alone.
    • by west ( 39918 )

      > I'm genuinely wondering, what makes Starcraft stand out?

      Nothing. It's the other way around.

      Traditional games like Chess and Go stand out by having massively fewer degrees of freedom than almost any video game, or even most other board games.

      Not to diminish the advancements in AI, but games with relatively few choices each round are perfect for computers. In StarCraft (and in most video games), there are probably thousands of possible choices each frame.

      Even if they got an AlphaGo-like research budget, don't expect the Civilization 7 AI to start playing a game that's challenging to higher-level human play.

      • by Altrag ( 195300 )

        Go may only have a few dozen possible moves per round, but the bot needs to be able to forecast multiple possibilities dozens or hundreds of rounds into the future to determine which move to make -- that's exponential complexity, and it blows up really, really fast. The challenge for Go (and Chess) bots is balancing the ability to forecast (power of the algorithm) against the amount of search-tree pruning (speed of the algorithm). Too much toward the former and the "player" becomes unresponsive. Too much t
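The blow-up described above is easy to put in rough numbers. The branching factors below are the commonly quoted averages (~35 legal moves per chess position, ~250 per Go position), and the depth is an arbitrary illustration:

```python
# With branching factor b and lookahead depth d, a naive search tree
# has roughly b**d leaves -- exponential in the depth.
chess = 35 ** 6   # ~35 legal moves per position, 6 plies ahead
go = 250 ** 6     # ~250 legal moves per position, 6 plies ahead

print(go // chess)  # Go's tree is roughly 130,000x larger at equal depth
```

This is why pruning matters so much more for Go than for chess: the same depth of lookahead costs five orders of magnitude more work.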

      • by Pulzar ( 81031 )

        Even if they got an AlphaGo-like research budget, don't expect the Civilization 7 AI to start playing a game that's challenging to higher-level human play.

        That's a sad thought.

        As the game gets more complex, the AI is only getting worse at handling it, making the latest installment the least captivating of them all :(.

        And, yet, I still hope Civ 7 could be different. So, shut up. :)

    • Actually, you have to look at whose AIs are in the game; they are not all created equal. Facebook isn't playing at the level of Alphabet, and there are rumors of some companies out at the edge of the defense world with some scary stuff the public isn't fully up to speed on yet. Have no doubt that AI can beat these players.

    • what makes Starcraft stand out?

      In a game of go a player has 19x19 (361) possible moves (spaces) per interval (turn).

      StarCraft, by contrast, has a far larger board: the average 1v1 map offers ~130x130 (16,900) possible moves (spaces) per interval (turn).

      So just based on the board size the computer already has to calculate nearly 50x as many positions.

      Now for those positions instead of 1 single move possible per space (place one token), you have multiple units.

      Assuming we have 20 units and if we view Unit.Selected = [bool] as a variable for

      • by nasch ( 598556 )

        A good starcraft player can make a move every let's say 2 seconds.

        Depending on what you mean by "good", the pros make 5 actions per second or better.

  • Most professional StarCraft players can't make more than a few hundred moves a minute.

    Unless we're talking about just clicking non-stop to make your groups move a few pixels at a time, I'm pretty sure I can't manage more than a dozen moves or so per minute.

    • The game actually measures 'Effective Actions per Minute' and disregards spam clicks.

      A 'casual player' is considered to be about 50 APM; 'proficient' players are about 150.

      Selecting a building, selecting a unit, and training a unit would be three actions. Selecting a unit and commanding it to move would be two actions.
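The spam-filtering idea can be sketched in a few lines. Note the rule below (ignore an identical repeated command inside a one-second window) is a made-up approximation for illustration, not the game's actual "Effective Actions per Minute" formula:

```python
def effective_actions(actions, cooldown=1.0):
    """Count actions, treating rapid identical repeats as spam.

    `actions` is a list of (timestamp_seconds, command) pairs.
    The cooldown rule here is illustrative, not StarCraft's real metric.
    """
    count = 0
    last_time, last_cmd = None, None
    for t, cmd in actions:
        # A new command always counts; a repeat counts only after the cooldown.
        if last_cmd is None or cmd != last_cmd or t - last_time >= cooldown:
            count += 1
            last_time, last_cmd = t, cmd
    return count

# Ten frantic identical "move" clicks within one second count once:
spam = [(i * 0.1, "move") for i in range(10)]
print(effective_actions(spam))  # 1

# Select a building, select a unit, train a unit: three distinct actions.
print(effective_actions([(0.0, "select"), (0.2, "select_unit"), (0.4, "train")]))  # 3
```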

    • by kwoff ( 516741 )
      The hands of God - Flash [youtube.com] or 400 APM [youtube.com] for example.
  • Enjoy your humanity as long as it lasts.
  • At that point A.I. will completely own any non A.I. foe in star craft.

    Because at that point properly configured A.I. will have been effectively playing for 200 years.

    As with Go, it won't use existing strategies- humans will learn new strategies from playing against it.

    And it won't cheat by playing 19,000 actions per minute.

    • humans will learn new strategies from playing against it

      Unless those strategies aren't 'human-friendly', which may well be the case.

      The human mind is good at pattern-detection and heuristics, but is extremely bad at brute-force.

      • Alpha Go didn't use brute force.

        Human Go masters learned new plays that hadn't been discovered by any human player in 3,000 years of playing Go.

        • Interesting - do you have a source? A quick Google didn't turn up anything promising.

          • alphago human masters new moves

            Long list of results... here's a typical one.

            https://www.wired.com/2016/03/... [wired.com]

            SEOUL, SOUTH KOREA - In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.

            But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as

    • by Altrag ( 195300 )

      I just mentioned this in a previous post, but I'm guessing the 19,000 APM happens when two AIs are playing each other (and running as fast as the hardware allows), and it's likely limited to a more realistic APM when playing against human opponents.

  • by 93 Escort Wagon ( 326346 ) on Sunday November 05, 2017 @07:35PM (#55495871)

    For some reason I am always amazed to find out there are people who can make enough money to live on by playing video games.

    And yes, I'm old.

    • by Altrag ( 195300 )

      People make enough money to live on by throwing or kicking balls around, slapping rubber disks toward a net, punching each other in the face, etc.

      People making money by "playing" games isn't exactly new. Of course, just like physical sports, when you're playing at the professional level it's no longer just a game -- it's a job, and it requires huge amounts of time and effort to maintain your skills at the peak level. Possibly even more effort than physical sports, since in addition to maintaining their "fitness,

  • by Sarusa ( 104047 ) on Sunday November 05, 2017 @07:56PM (#55495935)

    Yes, human players can still beat second tier AIs from Facebook and universities.

    But if you turned the AlphaGo Zero team loose on it, it would dominate within a couple of months at most. AGZ learned from scratch, in 3 days, how to beat every human on the planet at Go every single time.

    • by guruevi ( 827432 ) <evi@@@evcircuits...com> on Sunday November 05, 2017 @10:53PM (#55496655) Homepage

      Not really. Go and chess are full-knowledge games: you know at all points where your opponent currently is, what his moves have been, and where he can go. Not to mention there are only a very small number of paths you can take at any point in time.

      StarCraft is a partial-knowledge game: you have to intuit where your opponent may pop out and what his intentions are based on very limited amounts of data, and counter their strategy accordingly. There are also various ways of winning the game: you can starve your opponent, go out and destroy him with superior force, annoy him continuously, or simply execute a fast counterattack while his forces are away from the base on an offensive maneuver; most likely a combination of those things will win you the game. You can't just "guess" a solution, because 90% of the time you will guess wrong; the game develops very differently based on the tech trees your opponent chooses, and choosing your own tech tree is a constant back and forth of trying to one-up your opponent.

      This is really the worst situation for AI. There are no "common situations", as you have in chess or Go, that you can just hard-code ideal responses to. AIs are still very poor at pattern recognition if the patterns aren't fully visible.

      • Re: (Score:2, Troll)

        by Sarusa ( 104047 )

        This is such a God of the Gaps argument.

        Oh you can't beat Checkers. Oh, you completely solved checkers? Well, you can't beat chess.

        Uh, you beat chess, well you CERTAINLY can't beat Go. ... You beat Go? Well, you couldn't have beat Go without human training. ... ... ... You beat Go without human training, and it can beat every human on the planet every time?

        Well, you can't beat Starcraft ha ha ha!

        AIs have done pretty well in poker, which is another partial-knowledge game, and there's no reason to think a

        • by Anonymous Coward

          This is simply pure ignorance on your part. I can't say I have EVER heard anyone with knowledge of the subject say that making bots for any of those games that would beat any human was impossible. But checkers, chess, and Go are all complete-knowledge games, nothing hidden; it is not about whether you can beat SC or not. A bot can certainly beat any human, should someone appropriately program it. However, you can't compare making bots for full-knowledge, limited-move games against an incomplete knowledge, unlimited mo

          • by Sarusa ( 104047 )

            Sure, sure, this is exactly what they were saying about Go before AlphaGo destroyed it. Now, like Creationists, you have to invoke incomplete knowledge to preserve mankind's special snowflakeness, like that's something only humans can deal with.

            There was no 'writing it for Go' with AlphaGo Zero. The whole point of AlphaGo Zero (compared to AlphaGo Lee) was that there was no Go-specific info other than the scoring. They just fed Go boards into a fresh deep net and used a new Monte Carlo evaluation algorith

            • You don't seem to be aware that you're making a faith-based argument.

              "They said we'd never achieve $FOO, and then we did. This proves we'd achieve $BAR" is a fundamentally flawed argument, regardless of what values you assign to FOO and BAR.

              "They" said we'd never beat chess, "they" said we'd never beat Go, but "they" also said we'd never achieve time-travel into the past.

              Oh, one more thing - no one said "we'd never beat Go": throughout the 90's I only ever heard "we'd never beat Go with current computers"

              • "They said we'd never achieve $FOO, and then we did. This proves we'd achieve $BAR" is a fundamentally flawed argument, regardless of what values you assign to FOO and BAR.

                Except in cases where FOO and BAR are essentially the same thing, but BAR is a bit further on the scale of size and complexity than FOO, and that we can reasonably expect our development of hardware and software to be able to tackle problems with greater scale and complexity in the future.

                • "They said we'd never achieve $FOO, and then we did. This proves we'd achieve $BAR" is a fundamentally flawed argument, regardless of what values you assign to FOO and BAR.

                  Except in cases where FOO and BAR are essentially the same thing, but BAR is a bit further on the scale of size and complexity than FOO,

                  It's debatable whether "Win at Go" and "Win at Starcraft" are the same thing separated only by complexity, but let's be generous and assume that it is. We went from needing 30 x 120MHz CPUs to win at Chess (Deep Blue), to 1202 CPUs and 176 GPUs to win at Go (Alphago).

                  IOW, we used almost 1000x more resources to win at Go than at Chess.

                  For humans, at least, Go is roughly 2.5 times more complex than Chess [xmp.net]. To address the 2.5 extra complexity going from Chess to Go, we used 1000x extra resources.

                  Starcraft, for
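The "almost 1000x" claim above can be reproduced as a back-of-envelope calculation. Treating CPU count × clock rate as a crude proxy for compute, and noting that the AlphaGo CPUs' clock rate isn't given in the thread (the ~2 GHz figure below is an assumption), while AlphaGo's 176 GPUs are excluded entirely, so this understates the real gap:

```python
deep_blue = 30 * 120e6   # 30 CPUs at 120 MHz (Deep Blue)
alphago = 1202 * 2.0e9   # 1202 CPUs, assuming ~2 GHz cores; GPUs excluded

print(round(alphago / deep_blue))  # ~668x from general-purpose CPUs alone
```

Counting the GPUs would push the ratio well past the quoted 1000x, which is the point: the resource growth dwarfed the perceived complexity growth.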

                  • We went from needing 30 x 120MHz CPUs to win at Chess (Deep Blue), to 1202 CPUs and 176 GPUs to win at Go (Alphago).

                    AlphaGo Zero only uses 4 TPUs, and is much stronger than the 176 GPU version. It is also much stronger than the 48 TPU version that beat Lee Sedol, while only using a fraction of the space and power. If the goal is only to narrowly beat the human world champ, maybe 1 or 2 TPUs would suffice.

                    IOW, we used almost 1000x more resources to win at Go than at Chess.

                    AlphaGo Zero uses less power than Deep Blue did, and plays at a much higher level (comparing with best human players).

                    But the biggest problem with your analysis is that Chess is solved in a completely different way. Deep

                    • We went from needing 30 x 120MHz CPUs to win at Chess (Deep Blue), to 1202 CPUs and 176 GPUs to win at Go (Alphago).

                      AlphaGo Zero only uses 4 TPUs,

                      Which is much more powerful for the task than the 176 GPUs I stated above.

                      and is much stronger than the 176 GPU version. It is also much stronger than the 48 TPU version that beat Lee Sedol, while only using a fraction of the space and power.

                      It is much more computational power. There is a reason I kept saying "resources" and "computational power" instead of "electricity". You've sort of agreed that we've used multiple orders of magnitude more computing power to make the AI win.

                      If the goal is only to narrowly beat the human world champ, maybe 1 or 2 TPUs would suffice.

                      IOW, we used almost 1000x more resources to win at Go than at Chess.

                      AlphaGo Zero uses less power

                      I suggest you reread what I wrote in my original post. I never claimed that Alphago uses less electricity, but you imply that I made the electricity argument. I didn't.

                      I did claim "1000x more computational pow

                    • If we need 1000x more computational power for something a human finds 2.5x more complex, do you really think that we will only need 12x more computational power for something that humans find 12x more complex?

                      I was talking about the number of neurons in the human brain taking part in the game decision process. I think it's reasonable to assume that the scaling in the brain corresponds with the scaling in nodes in artificial neural nets. Perceptive complexity isn't a very good measure, I think. People find common tasks, like walking, simpler than playing StarCraft, but that could be because their brains are optimized for the first task, and not for the second. I think someone playing StarCraft uses a bigger part of their brain than someone playing Go, because there's more overlap between StarCraft and daily life, so more neurons can be recruited to join in the effort. But I highly doubt the difference is more than a factor 100.

                      Besides, as you say Deep Blue did not use NN, and the method Deep Blue used will likely not work well for AG anyway, resulting in the 2.5x complexity increase needing 1000x more computational power.

                      Because Deep Blue didn't use NN, I don't think it's useful to discuss the relative complexity increase.

                      What makes you think a 12x (or whatever) complexity is a tractable problem?

                      Because DeepMind already had a 12x bigger version running before.

                      All the evidence I've seen points to AI scaling being O(N^m) with N being the complexity.

                      I doubt it. Our neocortex is only twice as big as the chimpanzee's, and our total brain is only 3x the size, but we are capable of tasks that are orders of magnitude more complex.

                      Yeah, but human and chimp brains aren't AI, and don't work the same way that NNs do. NNs scale linearly, for example, but biological brains do not. NNs are not digital representations of biological brains. If they were, by now we'd have machines with the sentience of at least a cockroach, but we do not.

      • Go and chess are full-knowledge games, you know at all points, where your opponent currently is, what his moves have been and where he can go. Not to say, there are only a very small numbers of paths you can take at any point in time.

        For decades, computers have failed at Go because there are too many paths you can take. Since AlphaGo defeated Lee Sedol, people suddenly seem to think it's a relatively simple game.

        Within a few years, we'll have an AI beat a human at StarCraft too. You'd better mount your goalposts on wheels.

        • by guruevi ( 827432 )

          Go, even though the number of moves for an entire game are very high, only has a limited set of moves that wouldn't outright lose you the game. The players also have an overview of the state of the board at all times.

          Eventually, there will be "AI" that can beat people at StarCraft, but that doesn't mean it will be any time soon. So far, it's been tried with brute force: simply trying to be better than humans at controlling units and trying to abuse certain properties of the game.

      • StarCraft is a partial-knowledge game, you have to intuit where your opponent may be popping out what his intentions are based on very limited amounts of data and counter their strategy accordingly.

        Churn a bunch of replays through the AI and I bet it could learn these strategies and tactics pretty quickly.

        • by guruevi ( 827432 )

          We've had AI SC games for more than a decade; they have improved, yet the results are not exactly stellar.

    • Go is a full-knowledge, limited-move type of game. You cannot compare learning the two, as SC is a micro-management RTS with incomplete knowledge. Maybe those guys would be able to make a great bot, but I would happily bet it would take a lot more than a couple of months, and even longer before it dominated.
      • Go is a full knowledge, limited move type game.

        Not really. You don't know what your opponent is planning.

          No, but you know EXACTLY what moves they have made; you do not have that luxury in SC.
  • Woot! (Score:4, Funny)

    by ma1wrbu5tr ( 1066262 ) on Sunday November 05, 2017 @08:15PM (#55495993) Journal
    Go team "Smelly bags of mostly water!" Way to humiliate those dry silicates!
