Can DeepMind's AI Really Beat Human Starcraft II Champions? (arstechnica.com)
Google acquired DeepMind for $500 million in 2014, and its AI programs later beat the world's best player in Go, as well as the top AI chess programs. But when its AlphaStar system beat two top Starcraft II players -- was it cheating?
Long-time Slashdot reader AmiMoJo quotes BoingBoing: DeepMind claimed the AI was limited to what human players can physically do, putting its achievement in the realm of strategic analysis rather than finger twitchery. But there's a problem: it was often observed clicking with superhuman speed and efficiency.
Aleksi Pietikainen writes "It is deeply unsatisfying to have prominent members of this research project make claims of human-like mechanical limitations when the agent is very obviously breaking them and winning its games specifically because it is demonstrating superhuman execution."
"It wasn't an entirely fair fight," argues Ars Technica, noting the limitations DeepMind placed on its AI "seem to imply that AlphaStar could take 50 actions in a single second or 15 actions per second for three seconds." And in addition, "This API may allow the software to glean more information... " After playing back some of AlphaStar's back-to-back 5-0 victories over StarCraft pros, the company staged a final live match between AlphaStar and [top Starcraft II player Grzegorz "MaNa"] Komincz. This match used a new version of AlphaStar with an important new limitation: it was forced to use a camera view that tried to simulate the limitations of the human StarCraft interface. The new interface only allowed AlphaStar to see a small portion of the battlefield at once, and it could only issue orders to units that were in its current field of view....
We don't know exactly why Komincz won this game after losing the previous five. It doesn't seem like the limitation of the camera view directly explains AlphaStar's inability to respond effectively to the drop attack from the Warp Prism. But a reasonable conjecture is that the limitations of the camera view degraded AlphaStar's performance across the board, preventing it from producing units quite as effectively or managing its troops with quite the same deadly precision in the opening minutes.
Super Cheater (Score:5, Insightful)
This is not really beating a human fairly. If you could click that fast then sure, but otherwise it's not a fair fight.
Re: (Score:2)
The only fair contest would have been a round-based one with ample time for execution on both sides. Yes, machines are faster, but they are utterly dumb on a level humans cannot even reach while staying functional.
Re: (Score:2)
Yes, machines are faster, but they are utterly dumb on a level humans cannot even reach while staying functional.
Sometimes fast is better than smart.
No one cares how smart you are if your opponent gets the first 20 shots in while you're still drawing your gun. But you'll be the smartest bullet-riddled corpse they ever saw.
Re: (Score:2)
No argument about that. And fast is something machines can do in many cases where humans cannot. But it is limited and it is not a sign of intelligence. What I mean is that I do not object at all to "this machine has beaten a human at a strategy game". What I object to is the conclusion that "this machine is intelligent". It is not. To justify that conclusion, this would have to be set up quite a bit differently, and then the machine would lose. At least today and for the next few decades.
Re: (Score:2)
What I object to is the conclusion that "this machine is intelligent". It is not.
Agreed.
-
To justify that conclusion, this would have to be set up quite a bit differently, and then the machine would lose. At least today and for the next few decades.
It'll be a while. I don't know about decades but sooner or later we'll get there. We'll have to define "intelligence" well enough so that we're able to tell if we got there or not.
Re: (Score:1)
All you need to do is have the input be a mouse. Playing field levelled.
Re: (Score:2)
All you need to do is have the input be a mouse. Playing field levelled.
Computers can click mice faster than humans, too.
They did this when they played the chess match too (Score:4, Interesting)
When AlphaZero was pitted against Stockfish, the best chess AI, they set the match up with an outdated version of Stockfish, bizarre time controls that removed Stockfish's edge in time management (a static time per move was enforced), no opening book for Stockfish (a small database containing information about the best moves to start with), no endgame tablebases (another database containing information about moves at the end of games), and a very small amount of RAM (only 1GB when it should've had 64GB or more). Deepmind will CONTINUE to mislead people about what they've accomplished at every opportunity.
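For context, the memory and tablebase settings being disputed are ordinary UCI options that the match operator sets on the engine; a sketch of what a well-resourced Stockfish configuration might look like (the tablebase path and thread count here are hypothetical, and note that opening books are supplied by the GUI rather than by the engine itself):

```text
setoption name Hash value 65536              # transposition table size in MB (64 GB)
setoption name SyzygyPath value /tb/syzygy   # endgame tablebases (hypothetical path)
setoption name Threads value 44              # search threads (hypothetical count)
```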
Re: (Score:2, Informative)
bizarre time controls that removed stockfish's edge in time management
AlphaZero got the same time control.
stockfish didn't get its opening books
AlphaZero didn't get an opening book either.
nor did it get endgame tablebases
Neither did AlphaZero. Also note that in many of the games, Stockfish was basically lost in the early middlegame.
only 1GB when it should've had 64GB or more
That's the only legitimate concern, but the whole argument is stupid nitpicking nevertheless. This is like a race between a horse and the first model car. The exact conditions and outcome are secondary to the proof of validity of general principles. AlphaZero was just the first iteration of a new development. The fact…
Re: (Score:3)
That's like saying a fat runner is optimized to use a car. Stockfish isn't really optimized for opening books, it just sucks without them, mainly because the difference between a good and poor move in the opening may not manifest itself in a concrete eval difference far beyond the search horizon. As shown in some of the games, Stockfish doesn't care if its bishop gets trapped behind its own pawns. A bishop is still a bishop. It may get a penalty for limited mobility, but it doesn't get a penalty for being…
Re: (Score:3)
I would say a neural network just combines the advantages of a database and of calculating moves ahead. The weighting of different connections in a neural network seems pretty equivalent to me to storing a library of good and bad starting moves.
Therefore it has pretty much a database function, and it is not surprising that it is superior to a software without one.
Re: (Score:3)
The big difference is that an opening book contains literal moves, whereas a neural net represents generalized patterns, similar to how a human grandmaster's brain has these patterns. If you give AlphaZero a position that's not in any of the games it played, it will still find appropriate patterns and use them to evaluate the position.
it is not surprising that it is superior to a software without one
If you take a weak engine with an opening book, then Stockfish is still going to be superior, because as soon as it plays a non-book move, the weaker engine is on its own. Eve…
Re: (Score:2)
bizarre time controls that removed stockfish's edge in time management
AlphaZero got the same time control.
stockfish didn't get its opening books
AlphaZero didn't get an opening book either.
nor did it get endgame tablebases
Neither did AlphaZero.
[snipped]
Sure, the SF setup was not optimal, but it wasn't completely crippled either.
If you let me choose the parameters of the game, I'd beat AlphaZero in chess even though I would play under the same parameters[1]. It's totally fair because I'd be playing with the same restrictions as AlphaZero! That's how you measured fairness, right?
[1] Single core, 386 with 4MB of RAM. Sure, it's not optimal for AZ, but it's not completely crippled either!
Re: (Score:3)
I'd beat AlphaZero in chess even though I would play under the same parameters (Single core, 386 with 4MB of RAM)
Let me get this clear. You are arguing that your brain is roughly equivalent to a single core 386 ?
Re: (Score:2)
I'd beat AlphaZero in chess even though I would play under the same parameters (Single core, 386 with 4MB of RAM)
Let me get this clear. You are arguing that your brain is roughly equivalent to a single core 386 ?
No, I'm saying that I'd work under the same parameters. I might not even turn on the 386 handed to me, after all. But I'm still working with the same parameters, which as you pointed out, is totally fair.
Re: (Score:2)
which as you pointed out, is totally fair.
No, it's not fair, because a 386 with 4MB is completely incapable of even running the AlphaZero code (or Stockfish code), simply because the program and its data won't even fit.
That's what I call crippled.
The conditions in the match were not optimal, but did not make a huge difference. Maybe you could have gained a few dozen elo by optimizing the system, which is not really a big deal overall, and certainly not "crippled", especially considering that Deepmind could have added similar elo to AlphaZero by add…
Re: (Score:2)
Battle Chess ran on my 8086 with 640KB of RAM. Given the scenario outlined by the grandparent, I'd bring it along to play against AlphaZero. Sure, AlphaZero would segfault on startup having exhausted the memory and forfeit, but even when playing as white I'd get one valid move before it died.
DeepMind's claim is that their deep neural network design can beat something programmed by a human programmer. If they'd added a similar database to their code, it would not have supported their claim. If they could ha…
Re: (Score:2)
which as you pointed out, is totally fair.
No, it's not fair, because a 386 with 4MB is completely incapable of even running the AlphaZero code (or Stockfish code), simply because the program and its data won't even fit.
Are you claiming that the contest is only fair if both parties get all the resources they claim they need?
Re: (Score:2)
When those parties are machines and on a neutral playing field and they are technical requirements, yes.
If you are trying to establish that your gasoline based vehicle outperforms the diesel model it isn't exactly fair to refuse to allow diesel fuel in the race. It also isn't fair to exclude the types of optimizations that work better for diesel than gasoline.
Re: (Score:2)
Are you claiming that the contest is only fair if both parties get all the resources they claim they need?
When those parties are machines and on a neutral playing field and they are technical requirements, yes.
Well, that was my point: it wasn't really fair - in the AZ vs SF match, SF wasn't given all the resources that SF was claimed to require, while AZ was given all the resources it apparently needed.
So, of course, if I'm allowed to determine the parameters under which the contestants will compete, it's possible to always choose the winner in advance simply by tuning the parameters to favour one party over the other while being "fair" because both parties get the same parameters.
Re: (Score:2)
Right. I'd contend you optimize both as best you reasonably can. If that would make SF victory a given, so be it. You just benchmark your progress over time against SF. There is no need to rig contests and generate all sorts of bogus claims about being the undisputed champion of all things.
If anything that is just going to hamper AZ's development and progress. Now there is no incentive to work on whatever deficits might lead to AZ failing against SF and any improvements that would have been gained trying wi…
Re: (Score:2)
The entire point of AlphaZero is to not need the databases. The whole point is for it to beat the classical chessbot with its artificial intelligence.
If you don't set up the chessbot as it is normally run you haven't beaten the chessbot. If you give AlphaZero tables you've also defeated the point of the experiment, the idea isn't that AlphaZero is the better chessbot, the idea is that AlphaZero's intelligence is such that it can even beat a chessbot.
It's like setting the difficulty to "Dumb Noob" on a game
Re: (Score:2)
I very much doubt his brain or yours could compete with a single core 386. Sure it has some impressive stats on massively parallel micro-ops but the single threaded performance sucks.
Re: (Score:2)
No, it isn't nitpicking. Stockfish is designed to have those databases whereas AlphaZero is not. You haven't beaten it if you haven't beaten it as it is intended to run.
Re: (Score:2)
This is like a race between a horse and a car, which the horse wins because the car was forced to try to run with an empty gas tank, and then you go ahead and say "So what? The horse didn't get any petrol either so it was a tooootally fair setup".
In my analogy, Stockfish is the horse. And AlphaZero is an early model steam car. People complain because the horse didn't get the best food, and it wasn't the world's fastest horse, and it was too hot outside, and the horse didn't get proper rest.
What they are missing is that the early car is still at the beginning of the development curve, while the horse is already at its peak.
In a few years, neural network engines will be a few hundred elo higher, and there simply won't be any contest anymore.
Re: (Score:2)
"In my analogy, Stockfish is the horse. And AlphaZero is an early model steam car. People complain because the horse didn't get the best food, and it wasn't the world's fastest horse, and it was too hot outside, and the horse didn't get proper rest.
What they are missing is that the early car is still at the beginning of the development curve, while the horse is already at its peak."
And that is a useful experiment how? You still use a rested racing horse that has been properly fed and benchmark how much you'…
Re: (Score:2)
As the promise of "AI" slowly crumbles, a lot of people are desperate to hide the severe limitations and problems of this tech. One way they try to scam the public is by rigged "contests", like this one or the meaningless "Go" stunt.
Re: (Score:2)
It should be clear to anyone that in the final game Mana won by the drop play.
He also built an army that was a direct counter to what the computer had.
Re: (Score:2)
That is why these "competitions" never last long enough for the humans to adjust. Would be an utter failure for the machines otherwise.
RTS is the worst genre... (Score:5, Insightful)
... to test AI. RTS games already have a bad UI where the bottleneck is the human being in the chair; trying to control many units with a limited UI via keyboard and mouse is cumbersome at best. It was even back in the Warcraft 2 days when you tried to bloodlust ogres or heal paladins -- healing paladins being damn near impossible. Warcraft 3 'fixed' the issue of impossible casting with large numbers of units by using autocast.
The main problem is that games like Starcraft can be played perfectly because it's really an action game masquerading as a strategy game, aka the actions take place in real time. So to a computer like DeepMind, the human appears super slow. Imagine if your opponent's reflexes appeared glacially slow. That's basically DeepMind vs any human opponent in an RTS. So a computer's perfect information and perfect reflexes mean making 99% accurate micromanaging decisions for units everywhere at once.
You can't do that as a human player. Deepmind for an RTS is like having an aimbot in quake. Not really impressive since we already know making bots that can win against humans is trivially easy.
Re: (Score:1)
See Artosis' analysis on all these games at https://www.youtube.com/watch?v=_YWmU-E2WFc
His fundamental point: AlphaStar didn't win just from better micro, it got itself into positions where better micro could win. And that itself is quite a feat because following a script cannot get you into that position consistently.
Note that humans regularly beat the best AI scripted bots in SC2, that too when the bot is allowed to cheat somewhat. AlphaStar isn't a finished project yet, still has to learn the 8 other mat…
Re: (Score:2)
And that itself is quite a feat because following a script cannot get you into that position consistently.
Give it a script with several basic build orders (because each game was against a different agent, with a different playing style), and that kind of micro, and it would be able to do that consistently.
Re: (Score:2)
So? Let it demonstrate them without cheating.
The bots do not need to be set up to cheat over and over again in these contests. There is nothing that says AlphaXXXX sux0rs if it doesn't beat everybody everywhere the first time around. What it does well is still impressive without claiming it is the new champion of the world and while it gains the bigger headline to claim it, it gains a lot of headlines over time to show the actual progress in fair competition.
Re: (Score:2)
Note that humans regularly beat the best AI scripted bots in SC2, that too when the bot is allowed to cheat somewhat.
I think humans would regularly beat AlphaStar too if they played regularly. The strats chosen weren't that good, and it didn't take long for a human to poke holes in them.
Re: (Score:2)
I agree, that last game in particular showed that AlphaStar (that iteration) had clear weaknesses that could be found and exploited, given time. Of course, AlphaStar itself can also learn in that same time, and a lot quicker than most humans.
But personally I think nitpicking about clickrates or even effective strategies is missing the larger point, which is that a self-taught ML network with no explicit human programming can actually have working "strategies" for such a complex game at all. This is high-lev…
Re: (Score:2)
I'm not sure you're giving it enough credit. In such a strategic game, good mechanics are not enough. If AlphaStar's pre-learned strategies were as shallow as you feel, I doubt a world-class Protoss player would let himself get trapped into a purely mechanical defeat so easily - five times in a row. I would imagine a player of that quality could identify and circumvent any obvious strategies after the first game, but it took until his 6th game before he found an exploitable weakness. I do certainly agree th…
Re: (Score:2)
If AlphaStar's pre-learned strategies were as shallow as you feel, I doubt a world-class Protoss player would let himself get trapped into a purely mechanical defeat so easily - five times in a row. I would imagine a player of that quality could identify and circumvent any obvious strategies after the first game, but it took until his 6th game before he found an exploitable weakness.
I don't think number of games is what you should look at here. The games were played one after another, very quickly. As soon as he had time to stop and think for a while (and I think he discussed it with TLO), he was able to come up with a winning strategy. And actually beating it is not very hard: all you have to do is insist that they play on a map the computer has never seen. A new map would cause the computer to become hopelessly confused.
Each AI agent (there were five different agents for each human…
Re: (Score:2)
Perhaps you're right, I haven't watched them in as much depth as you seem to have. Perhaps its micro is really just that good, that it never needed to develop better strategies to win against unprepared human pros. I look forward to the inevitable series of rematches, pitting a further upgraded AlphaStar against humans with a better idea of what to exploit.
I remember Lee Sedol being pretty confident of victory after he reviewed AlphaGo's matches against Fan Hui. Everyone underestimated how much it could imp…
Re: (Score:2)
I look forward to the inevitable series of rematches,
I absolutely agree, I enjoyed these.
The thing I'm really looking to see is an improvement in object permanence: can it remember things that have gone out of its vision? I'd bet that's where the big improvement will come from.
Re: (Score:3)
Your post could be summarized as "RTS games are decided by APM" (Actions Per Minute). Basically how you do depends on how many commands you can issue to the game per minute. Most players can only do around 10 or so, while the pro players are besting over 100. And the theory is, the more APM, the greater your chances to win.
An AI even limited to a keyboard and mouse can issue far more than 100 APM since it can easily issue a command every time it runs the evaluate-plan-execute loop, which can run dozens of t…
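As a rough illustration (not DeepMind's actual mechanism, just a sketch of the idea), a per-minute action cap can be implemented as a sliding-window filter on the agent's attempted actions. Notably, as the Ars article points out, such a cap still permits superhuman bursts inside the window:

```python
def throttle_actions(action_times, apm_cap):
    """Allow at most `apm_cap` actions in any trailing 60-second window.

    `action_times` is a sorted list of timestamps (in seconds) at which
    the agent *wants* to act; the subset that passes the cap is returned.
    """
    allowed = []
    for t in action_times:
        # count actions already permitted in the trailing 60 seconds
        recent = sum(1 for a in allowed if t - a < 60.0)
        if recent < apm_cap:
            allowed.append(t)
    return allowed

# An agent attempting 10 actions per second for a full minute, capped at 300 APM:
attempts = [i * 0.1 for i in range(600)]
print(len(throttle_actions(attempts, 300)))  # 300
# ...but all 300 land in the first 30 seconds -- a superhuman burst
# that still respects the per-minute cap.
```

This is exactly the loophole behind the "50 actions in a single second" observation: a windowed average limits throughput, not peak speed.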
SC2 is bad. (Score:1)
Just wanted to note something people seem to be missing. Some pro player was incredibly dominant, winning 7 of 9 big tournaments. He peaks at 500 APM and has an EPM of 330? Have you considered that he's not winning on strategy but on speed and precision alone?
So when an AI does it, it is all of a sudden not fair? That's pretty bogus to me. All it tells me is that SC2 is a horrible "strategy" game and that the strategy aspect of it peaks really easily, and then after that the primary method of winning is to simply…
Re: (Score:2)
So when an AI does it it is all of a sudden not fair?
It's not about fairness. We've known that computers can click faster than humans for a long time, just like forklifts can lift more than humans.
It's just not very impressive. Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?
Re: (Score:2)
Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?
You think that beating a pro in Starcraft is just a matter of clicking fast ?
Re: (Score:2)
You think that beating a pro in Starcraft is just a matter of clicking fast ?
Did you watch the games? In this case, it was more like precision than speed. All you need to do is program some basic strategies, then give the computer really good micro, and it will win.
To see examples of what computers can do, check this out [youtube.com]. Here is another one (the stalker micro at 0:11 is very similar to what Deep Mind was doing) [youtube.com].
Re: (Score:2)
Woohoo Google, good job building a computer that can click fast? Should they be congratulated for that?
You think that beating a pro in Starcraft is just a matter of clicking fast ?
When you can issue orders at a sustained rate of 100 orders per second to 100 individual, ungrouped units, you are going to beat the player who is physically constrained to a peak of perhaps 2 orders per second, with a sustained rate of less than 1 per second.
It is easy to win almost any real-time game by simply moving hundreds of times faster than your opponents.
It's hard to be impressed when the strategy used by the winner involved issuing orders at a sustained pace no human can meet, even for a short b…
Re: (Score:2)
Pretty much yeah. Really it's more that the way these games work there is a disparity in speed which strategy can't overcome. Especially between sufficiently strong players who are going to employ solid strategy.
Re: (Score:2)
"If it was the former, and it figured out all of that itself?"
That would make for another level of interesting in their result but it wouldn't change that the machine simply out-clicked rather than out-strategized the human players. You have to slow the machine down to a human rate of interaction so that the human can formulate and apply strategy for that. Doing otherwise is not only cheating those players, it is also cheating the bot of the opportunity to learn from that strategy.
Like Jeopardy, but still impressive (Score:2)
Watson won at Jeopardy also because it could press the button faster, which is considered the key skill. However, despite that, it still had to answer the questions, which was impressive.
I would say that this result is also impressive, even if the machine was not really quite as good as the humans.
Re: (Score:3)
The strategies the computer came up with were lousy (and very map-dependent), but it was able to compensate by having extremely precise movements.
Re: (Score:2)
It didn't answer the questions that were read out loud, though. It answered the questions that were fed to it electronically. I can't imagine that Watson would have done nearly as well if it had to do speech recognition using the same audio inputs that a human player would (that is, a microphone at its podium).
Re: (Score:2)
Fine, OCR from a camera positioned at the podium, then. Humans don't get the questions fed directly into their brain.
Re: (Score:2)
Indeed, instead it had much of the internet inside it. Certainly all of wikipedia. Pre-indexed.
But the big issue remains, that Watson could play at all is impressive.
Re: (Score:2)
Only if you consider less than a hundredth of one percent of the Internet to be "much" of it.
Watson's storage is substantial, but is dwarfed by the size of the entire Internet.
The goal of "beat" isn't the target (Score:5, Interesting)
I've seen a couple of comments already where folks are talking about DeepMind being able to micro and click faster than a human. While that's neat, that's not entirely the goal here with DeepMind. The makers already put an artificial limit on the actions DeepMind can execute: 600 per minute. In comparison, the humans were executing around 250-300 actions per minute. While that indeed made DeepMind's micro game strong, the real tipping point was that DeepMind could see anywhere a unit was located. Humans, however, can only see a "screen at a time". When DeepMind's makers went back and implemented the "screen at a time" limitation, DeepMind was easily fooled again. And that's the thing here. Not "can I beat a human?" but "can a human fool me?". As soon as the amount of information coming IN to DeepMind was reduced, the data coming OUT couldn't compensate, and the humans were able to slowly figure out how to trick the AI into an unwinnable situation.
There's a continual fallacy on Slashdot where pure research like DeepMind is confused with "whose jerb can it take and reasons why it can't take that jerb." The media here is presenting it in terms of "Hey look! Something AI can do better than us worthless puny humans!" but DeepMind is mostly research first. The entire point here isn't "Hey, can I pwn this guy?" It's: why did limiting the input allow the human to so easily fool the machine? Researchers aren't sure why the AI was so easily fooled when, with a wider field of view, it could not be. That question has much wider-ranging implications than how great DeepMind's micro game is.
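A toy sketch of what the "screen at a time" restriction means for the agent's input (the coordinates and view size here are made up; the real interface works through feature layers in the SC2 API):

```python
def visible_units(units, camera_origin, view_w=24, view_h=14):
    """Filter the global unit list down to the camera rectangle.

    Everything outside the rectangle simply vanishes from the agent's
    observation -- it must spend actions moving the camera to see more.
    """
    cx, cy = camera_origin
    return [u for u in units
            if cx <= u["x"] < cx + view_w and cy <= u["y"] < cy + view_h]

army = [{"id": 1, "x": 5, "y": 5},    # inside the view
        {"id": 2, "x": 40, "y": 5},   # off to the right: invisible
        {"id": 3, "x": 10, "y": 30}]  # below the view: invisible
print([u["id"] for u in visible_units(army, (0, 0))])  # [1]
```

With the unrestricted interface, the agent effectively received the whole `units` list every step; under the camera restriction it only ever sees one such rectangle, which is why information going IN drops so sharply.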
Re: (Score:2)
what was the real tipping point was that DeepMind could see anywhere a unit was located. Humans however can only see a "screen at a time". When DeepMind's makers went back and implemented "screen at a time" limitation, DeepMind was easily fooled again
That aspect was over-hyped. In the set that was won by the human (where the screen was limited), the human had time to analyze the playing style of the computer, and come up with a good counter-strategy. That is the main reason the computer lost.
Limiting the screen reduced the skill of the computer by a few percent, but it wasn't the only or (in my opinion) the main difference.
Re: (Score:2)
It's why did limiting the input allow the human to so easily fool the machine?
You are sincerely asking this question?
Any stage magician (or three year old near a cookie jar) could easily demonstrate the answer about limited input and orchestrating an expected "answer".
TL;DR, re/indirection
Re: (Score:2)
You are sincerely asking this question?
No, what I'm getting at isn't the human "why" here. That's pretty obvious. The "why" is the empirical side of that: the maths that would be behind explaining the "why". Yes, we can easily chalk it up to misdirection; the more important thing is how to teach a computer that misdirection is happening, and that requires understanding why the computer got there in the first place.
Training Data (Score:2)
Sounds like a bug that it did worse in areas that should've been unrelated to the camera view. Makes me wonder if it was trained on whole-map data, and was unsure how to act when only given part of the map.
Re: (Score:2)
The human had time to prepare a counter-strategy to DeepMind, that is really the big variable that changed.
Too fast, too accurate, not smart (Score:1)
When you think about it, the AI was showing no intelligence. It had run a statistical analysis, and came up with the notion that if you can click really fast, these units are the best in all situations. It does not adjust to new stuff. Just in effect, runs a script.
Re: (Score:1)
AI isn't supposed to "show intelligence." It is supposed to mimic intelligence. As in....fake it....through engineering.
That is why the "A" in "AI" stands for "artificial." You know...as in "not real." It isn't actually intelligent, and it isn't supposed to be. Straight from the dictionary [merriam-webster.com], AI is:
If the machines were actually intelligent, then we wouldn't call it "artificial intelligence." We would call it "synthetic intelligence." But…
Re: (Score:2)
When you think about it, the AI was showing no intelligence. It had run a statistical analysis, and came up with the notion that if you can click really fast, these units are the best in all situations. It does not adjust to new stuff. Just in effect, runs a script.
Indeed. And that is all "AI" can do today and in the foreseeable future. It is also quite telling that the human players always go into this without having any data from old games to prepare for their opponent, while the machine has all data available on the humans. This is simply because the humans would wipe the floor with it if they knew in advance what they are going into. It basically does not get any more unfair than this. The whole thing is a meaningless PR stunt.
"We don't know" (Score:4, Informative)
We don't know exactly why Komincz won this game after losing the previous five
You could know if you'd watch the games. In the first set, DeepMind won with inhumanly superior micro. It was really cool, but computers have been better at micro for a long time. Speed and precision are things computers are good at, that's why we have aimbots.
In the second set, the human readjusted, and thought of strategies that would defend against the superior micro (by building more powerful units), while taking advantage of the computer's weaknesses (poor knowledge of army compositions, weak knowledge of positioning, and seemingly no object permanence: once enemy units are out of view, it has no idea where they are or if they exist).
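The "no object permanence" point above maps to a concrete interface restriction: a camera-limited agent can only observe (and issue orders to) units inside its current view rectangle, so anything off-screen drops out of its observation entirely. A minimal sketch of that restriction, where every class and function name is illustrative and not the real StarCraft II API:

```python
# Hypothetical sketch of a camera-view restriction: the raw game state
# knows every unit, but an agent limited to a human-style interface only
# "sees" (and can order) units inside the current camera rectangle.
from dataclasses import dataclass

@dataclass
class Unit:
    tag: int
    x: float
    y: float

@dataclass
class Camera:
    x: float        # center of the view
    y: float
    half_w: float   # half the view's width
    half_h: float   # half the view's height

    def contains(self, u: Unit) -> bool:
        return (abs(u.x - self.x) <= self.half_w and
                abs(u.y - self.y) <= self.half_h)

def visible_units(all_units, camera):
    """Units the camera-limited agent may observe and command."""
    return [u for u in all_units if camera.contains(u)]

units = [Unit(1, 10, 10), Unit(2, 90, 90), Unit(3, 12, 8)]
cam = Camera(x=10, y=10, half_w=8, half_h=8)
print([u.tag for u in visible_units(units, cam)])  # [1, 3]
```

Under a rule like this, unit 2 simply does not exist from the agent's point of view until the camera moves, which is consistent with the poster's observation that AlphaStar lost track of off-screen units.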
Re: (Score:2)
Same on the Go stunt by Google: The machine had all games of the human player, the human player had none of the machine. Even after only 3 games, the human thought he had figured out a way to beat the machine, but, of course, there never were any additional games, as that would have been an utter disaster for Google.
Re: (Score:2)
Re: (Score:2)
While I do not follow Starcraft games, this does not surprise me one bit.
Re: (Score:2)
Re: (Score:2)
No, and that is just your ignorance of Go. You can't formalize a player's strategy in Go
Hey, welcome to the conversation, that's nice but we're talking about Starcraft.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Then they let AlphaZero loose on the internet Go servers as an anonymous player, and professionals quickly picked up on the fact that the anonymous player was not playing like anyone else.
So, no, your "never were any additional games" is full of s
Re: (Score:2)
You could know if you'd watch the games.
While it would generally be expected that most here could watch and understand, I am betting that there are quite a few people who cannot. They will look and see bright lights, lots of mouse pointer movement, and some explosions.
Thank you for explaining. Now I do not even have to watch. You explained it very well. :)
(but yes, I could watch and understand; it is just that SC is not my thing)
Re: (Score:2)
Re: (Score:2)
DeepMind won with inhumanly superior micro
Like mistakenly shooting piles of rock!
No, they limited its actions per minute. MaNa typically had a higher APM. But don't take my word for it, they released the whole histogram [deepmind.com]. Poor TLO peaked at 2000 APM! Like... DUDE, chill. Or at least cut back on the meth.
, the human readjusted, and thought of strategies that would defend against the superior micro
Not really. In the game I saw, AlphaStar got really confused by Warp Prism drop harasses and... couldn't figure out why its ground units couldn't reach the air. What they changed was that it didn't have total (legal) knowledge of the entire map.
Re: (Score:2)
No, they limited its actions per minute. MaNa typically had a higher APM. But don't take my word for it, they released the whole histogram [deepmind.com]. Poor TLO peaked at 2000 APM! Like... DUDE, chill. Or at least cut back on the meth.
You're either dumb or ignorant, but TLO wasn't clicking 2000 times a minute.
If you don't want to be ignorant anymore, you could start by looking at the difference between APM and EAPM. There's more to micro than speed.
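For readers unfamiliar with the distinction raised above: APM counts every input, while EAPM (effective APM) tries to discard spam, such as the same command repeated almost instantly. A rough sketch, where the spam rule and the 0.25-second threshold are assumptions for illustration, not any replay parser's actual definition:

```python
# APM counts every input; EAPM drops "spam" -- approximated here as an
# action identical to the previous kept one, issued within a short window.

def apm(actions, duration_s):
    """actions: list of (timestamp_s, command) tuples."""
    return len(actions) * 60 / duration_s

def eapm(actions, duration_s, spam_window_s=0.25):
    effective = []
    for t, cmd in actions:
        if (effective and cmd == effective[-1][1]
                and t - effective[-1][0] < spam_window_s):
            continue  # same command repeated almost instantly: spam
        effective.append((t, cmd))
    return len(effective) * 60 / duration_s

# 6 inputs in 2 seconds, but two are rapid-fire repeats of "move_a"
log = [(0.00, "move_a"), (0.05, "move_a"), (0.10, "move_a"),
       (0.50, "attack_b"), (1.00, "select_c"), (1.60, "move_a")]
print(apm(log, 2))    # 180.0
print(eapm(log, 2))   # 120.0
```

The gap between the two numbers is the point being made: a pro's headline APM figure can be far higher than the rate of actions that actually do anything.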
Re: (Score:2)
Of course not, there are keyboard shortcuts. Which make more of a clack than a click.
But, if a graph is labeled "Actions Per Minute (APM)", and has three curves, and the orange one is labeled "TLO", and some area under the curve goes up to the little hash mark labeled with "2000".... Maaaan, I guess I'm just too dumb to figure out what that means. Could be anything really. Who knows?
But I get you. I do. It's the stupidest thing to see these people pointlessly clicking around in some vain attempt to keep their APM up.
Re: "We don't know" (Score:2)
It's not entirely clear what constitutes cheating (Score:2)
Where exactly does one draw the line about what constitutes "cheating", versus exploiting a natural advantage?
Is it cheating if the AI gets to use API calls to control its forces, rather than physically pushing keys on a keyboard and moving a mouse the way a human player does? Arguably so, if keyboard-and-mouse dexterity is considered part of the skill set for the game. Perhaps a fair contest should require the AI to use robotic arms and video cameras on a gaming PC.
On the other hand, if it's only the s
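On the action-rate side of this question, the Ars Technica point quoted in the summary is that a cap enforced over a rolling window still permits superhuman bursts. A hypothetical sketch of such a windowed cap; the 50-actions-per-5-seconds numbers are chosen to match the article's illustration, not DeepMind's exact published limits:

```python
# A rolling-window action cap: at most max_actions within any window_s
# span. Note that the whole budget can legally be spent in one burst.
from collections import deque

class WindowedActionCap:
    def __init__(self, max_actions=50, window_s=5.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.times = deque()  # timestamps of actions still in the window

    def try_act(self, now_s):
        """Return True if an action at time now_s fits under the cap."""
        while self.times and now_s - self.times[0] >= self.window_s:
            self.times.popleft()  # drop actions that fell out of the window
        if len(self.times) < self.max_actions:
            self.times.append(now_s)
            return True
        return False

cap = WindowedActionCap()
# Burst: 50 actions crammed into the first second all succeed...
burst = sum(cap.try_act(i * 0.02) for i in range(50))
# ...and the 51st, still inside the window, is refused.
print(burst, cap.try_act(1.0))  # 50 False
```

So an average-APM limit of this shape is compatible with short stretches of inhuman speed, which is exactly the complaint about AlphaStar's decisive micro moments.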
Re: (Score:2)
Where exactly does one draw the line about what constitutes "cheating", versus exploiting a natural advantage?
I suggest that by looking at the goal, you can determine the line. Google here wants to create intelligence, but they won with a purely mechanical advantage.
So we can congratulate them for......making a computer that clicks with precision. Good job Google. But from watching the games, it's clear they failed on intelligence.
Why do we even care about things like this? (Score:2)
Re: (Score:2)
Very much so. Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years". That is a statement directly implying "we do not know whether it is even possible". IBM and Google experts know they have nothing except automation on steroids. But humans like to dream and like to ignore reality. Marketing is just one discipline that exploits that.
Re: (Score:2)
Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years".
Normally I would post a comment here affirming your intelligence (and I do, I affirm your intelligence) and agreeing with you.
I still mostly agree with you, but it should be pointed out that the Google team seems to have started coming up with slightly new algorithms, so if they keep going, they may find some insight that leads to a more complete AGI solution. Low probability but it seems there is real progress finally.
Re: (Score:2)
Incidentally, when I recently had a chance to ask the head of the Watson team Europe in private about AGI, he immediately said "not in the next 50 years".
Normally I would post a comment here affirming your intelligence (and I do, I affirm your intelligence) and agreeing with you.
I still mostly agree with you, but it should be pointed out that the Google team seems to have started coming up with slightly new algorithms, so if they keep going, they may find some insight that leads to a more complete AGI solution. Low probability but it seems there is real progress finally.
Well, thanks. Not what I am looking for here, but the occasional person that actually understands what I am talking about and either has good counter-arguments or agrees is nice.
Still, nothing that is AGI to even a very tiny amount is known today. Don't get me wrong, AI is hugely useful, but AGI does not exist. Maybe we will eventually find something that can do AGI without consciousness or we will find a different way to do computing that allows consciousness (digital computing clearly does not), but not
Re: Why do we even care about things like this? (Score:2)
Re: (Score:2)
History of technology. Technology moves at a pretty constant speed from a theory of how something could work, to lab demo, to you can buy it, to general use, to mature technology. With networked computers we are at general use now, and it will take an estimated 30-50 years before the technology is mature. If there is not even a credible theory, we are at least 30-50 years from the lab demo, and possibly much, much longer. It may even be impossible.
Whether you take the steam engine, electricity, compu
Re: (Score:2)
Because there are similarities between these strategy games and professional jobs. The lesson is pretty clear: AI is coming for knowledge worker jobs.
It was also cheating on Go (Score:2)
Or what would you call it if the machine has all games of its opponent to prepare with and the human opponent has none of the machine's? This whole thing was a useless stunt. And did you notice how fast the Go machine was retired afterwards? Very likely because it would have had no chance after the humans had seen it play a few times.
Why do people fall for this kind of crap?
There is no AI (Score:1)
Was it Cheating? (Score:2)
Well of course it was cheating. They gave it TOTAL AWARENESS of the whole map (everything it could legally see under fog-of-war rules). They fudged this one by limiting its "screen changes": while it knew everything that was happening across the map, it could only choose one area to issue commands to. They ALSO had some games where it had to use those screens to see what was happening at those locations (just like a human). And it lost. Arguably, it didn't have enough training time with that setup.
It only
Re: (Score:3)
Watch the pros play: how many of the 15 clicks per second are actually accurate.
Pros don't click 15 times per second; that's probably physically impossible. When they get an APM that high, it's by spamming the keyboard in combination with clicks (sometimes holding down a key to get key-repeat, or spinning the mouse wheel).