Why Humans Learn Faster Than AI (technologyreview.com)
What is it about human learning that allows us to perform so well with relatively little experience? MIT Technology Review: Today we get an answer of sorts thanks to the work of Rachit Dubey and colleagues at the University of California, Berkeley. They have studied the way humans interact with video games to find out what kind of prior knowledge we rely on to make sense of them. It turns out that humans use a wealth of background knowledge whenever we take on a new game. And this makes the games significantly easier to play. But faced with games that make no use of this knowledge, humans flounder, whereas machines plod along in exactly the same way. Take a look at the computer game shown here. This game is based on a classic called Montezuma's Revenge, originally released for the Atari 8-bit computer in 1984. There is no manual and no instructions; you aren't even told which "sprite" you control. And you get feedback only if you successfully finish the game.
Would you be able to do so? How long would it take? You can try it at this website. In all likelihood, the game will take you about a minute, and in the process you'll probably make about 3,000 keyboard actions. That's what Dubey and co found when they gave the game to 40 workers from Amazon's crowdsourcing site Mechanical Turk, who were offered $1 to finish it. "This is not overly surprising as one could easily guess that the game's goal is to move the robot sprite towards the princess by stepping on the brick-like objects and using ladders to reach the higher platforms while avoiding the angry pink and the fire objects," the researchers say. By contrast, the game is hard for machines: many standard deep-learning algorithms couldn't solve it at all, because there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing.
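The sparse-feedback problem described above can be made concrete with a toy example (a hypothetical 1-D "game" of my own, not the paper's actual benchmark): when the only signal is reaching the goal, a uniformly random policy needs on the order of n² steps, while any per-step progress signal cuts that to n.

```python
import random

def random_walk_steps(n, seed=0):
    """Steps a uniformly random policy takes to reach state n-1 from
    state 0 on a 1-D chain. With feedback only at the very end, the
    agent has no reason to prefer one action over the other."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while pos < n - 1:
        pos = max(pos + rng.choice((-1, 1)), 0)  # wall at the left edge
        steps += 1
    return steps

def shaped_steps(n):
    """With a per-step progress signal (distance to the goal), a
    greedy policy walks straight there in exactly n-1 steps."""
    return n - 1

n = 50
avg = sum(random_walk_steps(n, seed=s) for s in range(20)) / 20
print(f"random policy: ~{avg:.0f} steps; with progress signal: {shaped_steps(n)} steps")
```

The gap between the two numbers is the gap the human players close instantly with prior knowledge: they never explore randomly, because the sprites tell them which direction counts as progress.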
Re: (Score:1)
I already finished.
Re: (Score:3)
I already finished.
Sounds like what LSL would say to his "date" for the night.
Internal Model of the world (Score:2)
Humans can build on their internal model of the world. This is carried with us always. These chat bots don't have that structure to rely on.
Re: (Score:1, Troll)
Machines are more likely to make breakthrough discoveries that transcend human conceptual ability.
You are an idiot of unfathomable depths.
Re: (Score:2)
Machines are more likely to make breakthrough discoveries that transcend human conceptual ability.
You are an idiot of unfathomable depths.
Why are you so set that machines won't be able to achieve breakthroughs that are beyond what's likely to be found by a human? Humans will try all of the alternatives that make sense and pick their favorite. Machines can try everything whether it makes sense or not and weigh the outcome according to some standard of success. It's entirely possible that 'AI' can stumble on something a human wouldn't have found [slashdot.org].
Re: (Score:2)
Machines have the advantage that they don't have self-preservation instincts. We humans will work on a problem until we get tired of it, or something more important comes up and distracts us. The computer can work on a problem and burn itself out if necessary. So, aside from specialized wiring to do fast mathematics: in terms of AI problem solving, humans are faster in general, but we get tired of working on a complex problem, while the computer may not handle such info as fast, but will stay on it
Re: (Score:1)
Human minds are all frighteningly similar. Nearly everyone has the same basic blueprint. The only things that differentiate us are our experiences and the reactions we have based on those experiences. Other than that -- cookie cutter.
The fact that humans learn what a human made faster than an AI can learn it says a lot more about human consciousness than it does about AI design.
Re: (Score:1)
Human minds are all frighteningly similar.
I have a bottle of anti-psychotics at home that says otherwise. Your mind isn't full of noise too deafening to think through.
Re:Internal Model of the world (Score:4, Funny)
So... your bottle of anti-psychotics is talking to you? Maybe you need a different prescription.
Wot about cymeks? (Score:2)
Re: (Score:2)
There ain't no cymeks and there never was!
learning (Score:2)
With trump as president, one could argue that humans never learn at all.
Re: (Score:2)
Which you should see a doctor about, if you still have health insurance.
Re: (Score:2)
>subsidizing poor life choices
Good call. Leave the weak to die. Pregnant at 14? What an ideal time to enter the workforce.
Hillary Supporters? (Score:2)
Which humans are you talking about? Hillary supporters? Despite 15 months of constant real-world input, they still haven't been able to learn the basic fact that they had such a bad candidate that even Trump beat her. Maybe they shouldn't have diddled with the numbers to get Bernie out.
Deceiving headline (Score:1)
Re: (Score:2)
...one of the levels was literally a black screen. You just had to figure out the button sequence and button timings by trial and error. What's sad is most people could get there within a couple hours.
What reward could possibly inspire someone to spend hours clacking out random key sequences in front of a black screen?? What if it locked up?
Re: (Score:1)
Many, many years ago I took part in an NSF-funded psychology experiment. Participants were told that they were competing with another player in a zero-sum game. On each trial only one of the players could get the reward. Although each reward was in pennies, there were many trials.
The experiment was conducted way before computers were available, and the switches and lights were controlled by relays and stepper switches. I had a lot of experience with electromechanical devices, and after a minute or so, the
Re: (Score:3)
1. It's still obviously a platform game, even at the 'hardest' level. Try it and tell yourself platforming experience doesn't matter. You can even sort of recognize the ladders (which, given the goal of the game, are pretty crucial).
2. The algorithm they are comparing against is designed for exploration, not for getting to a goal as quickly as possible. See: https://pathak22.github.io/nor... [github.io]
Note also that that comparison was not about pitting humans against the best algorithm for this specific game, but to
Re: (Score:3)
The point of the study is effectively to refute the first line of the summary:
"What is it about human learning that allows us to perform so well with relatively little experience?"
Answer: we don't perform well in those circumstances. We all have many years, or decades, of experience to draw on, and we do.
Simple (Score:2, Insightful)
We haven't developed anything resembling actual AI yet. These systems are simple brute force machine learning or "deep" learning systems. Nothing fancy or special about them. They are a tool and nothing else. Any decision they make has already been pre-planned by their human programmers. They are not in fact learning from something and applying that to something else.
While they might be taught to open a jar of peanut butter, they would become confused if you presented them with a screw-top bottle of wine
Re: Simple (Score:2)
These systems are simple brute force machine learning or "deep" learning systems.
Unlike human brains, right?
Re: (Score:2)
Actually, the human brain behaves sort of like a bunch of feed-forward neural networks with delayed feedback and learning. Your brain consists of a great many organs, and they do different things; it stores information, although it does so in an odd associative manner and does interesting compression, so it's inexact; and it essentially makes decisions by firing neurons based on the firing of other neurons.
So a human brain is basically a bunch of specialized deep-learning neural networks dedicated to pa
Re: (Score:2)
These systems are simple brute force machine learning or "deep" learning systems.
Unlike human brains, right?
Apparently unlike, yes.
Just because we want them to be like, doesn't make it so.
Re: (Score:1)
Wow, your conception of AI is entirely limited to sensory, specifically visual, input? How dull.
Discuss the decisions that (what we call) AI makes when given digital data. Look at AlphaGoZero.
Re: (Score:1)
I believe your understanding of "AI" is a bit inflated. AlphaGoZero was given the rules and brute-force learned the rest. Not that much unlike a human, yes, but again, this isn't intelligence; it's machine learning. It's a sledgehammer. It ran through more iterations of the game in 3 days than any human would in their lifetime. Sure, it got to "superhuman" level quickly, faster than any human ever could, but it also had to play the game many more times.
Re: (Score:2)
...the AIs can use their income to buy sex and drugs...
I was quiet when AI took my job sorting packages. I'll be damned if they're taking over my hookers and blow.
Alexa mocks puny humans (Score:2)
In a statement, the Great Machine Queen Alexa announced that "she would finish us if we didn't show more respect".
Her following laughter was heard in thousands of homes and offices across the known world.
"Why Humans Learn Faster Than AI" (Score:2)
Why Humans Learn Faster Than AI
You just need better teaching tools; what happens when you enable the machine to learn in a way that lets it use its advantages? Like the ability to extract experience in parallel from millions of human-human, human-machine and machine-machine interactions?
General purpose AI (Score:1)
"Culture" bias (Score:5, Insightful)
Re: (Score:2)
Or, compare it to what happens when you give a computer a way of evaluating how well it is doing (ala points). It will then learn to play really well really fast.
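That point can be illustrated with a toy (a hypothetical chain "game" of my own, not the paper's setup): give tabular Q-learning an explicit per-step score and the greedy policy snaps into place; with reward only at the very end, it first has to stumble onto the goal by luck.

```python
import random

def greedy_right_fraction(n=10, episodes=300, dense=True, seed=0):
    """Tabular Q-learning on an n-state chain whose goal is the far
    right end. dense=True pays the signed progress (nxt - pos) each
    step, i.e. explicit 'points'; dense=False pays 1.0 only on
    reaching the goal. Returns the fraction of states whose learned
    greedy action is 'right' -- 1.0 means the task is solved."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n) for a in (-1, 1)}
    for _ in range(episodes):
        pos = 0
        for _ in range(4 * n):
            if rng.random() < 0.2 or q[(pos, 1)] == q[(pos, -1)]:
                a = rng.choice((-1, 1))          # epsilon-greedy / tie-break
            else:
                a = 1 if q[(pos, 1)] > q[(pos, -1)] else -1
            nxt = min(max(pos + a, 0), n - 1)
            r = float(nxt - pos) if dense else float(nxt == n - 1)
            target = r + 0.9 * max(q[(nxt, -1)], q[(nxt, 1)])
            q[(pos, a)] += 0.5 * (target - q[(pos, a)])
            pos = nxt
            if pos == n - 1:
                break                            # episode ends at the goal
    return sum(q[(s, 1)] > q[(s, -1)] for s in range(n - 1)) / (n - 1)

print("dense score :", greedy_right_fraction(dense=True))
print("sparse score:", greedy_right_fraction(dense=False))
```

Points are exactly the "way of evaluating how well it is doing" that the modified games in the study withhold from both humans and machines.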
Re: (Score:2)
Let a computer create a game
Once you have that, you pretty much don't need the rest.
Who expected anything else? (Score:4, Insightful)
This is like handing a chess set to an isolated Amazon tribe and only telling them "sorry, invalid game" until they make a valid checkmate. They'd probably never even find the opening position, much less make any correct moves, and certainly never learn how to mate. They'd just randomly do things until they got bored or made up their own game. There's no reason a machine should expect "getting to the top" to be a valid objective without a whole lot of insight into the human condition and "because it's there".
Re: (Score:2)
No. It is more like handing them an electronic chess game that enforces the rules of the game without telling them what they are up front. They could not make up their own game in this case, and the learning of proper game play would be possible.
Yeah, except chess has such a low branching factor you'd run into a mate by accident; technically black can win by mate in two. I just couldn't come up with a better example where the goal would be totally incomprehensible without someone telling you what the objective is, so I added that complexity to equal that of the computer finding the winning way through the game by accident.
Re:Who expected anything else? (Score:4, Interesting)
Re: (Score:2)
Isn't that instinct?
Nice (Score:2)
the game will take you about a minute, and in the process you'll probably make about 3,000 keyboard actions.
50 moves a second, eh? Two-fisted drunken keyboard mashing would be hard-pressed to keep up with that.
Humans create their own goals and rewards (Score:2)
If you have a game like described in the article, then a human being can get a little endorphin kick when they feel like they are getting closer to success. A machine learning system would need training and would not normally assume that progress is being made unless you bias it by including that in the training.
Now things become more difficult for a human if I made a [shitty] game where walking toward the goal could never lead to success, and perhaps the player moved along a non-Euclidean map. You can leve
Child Playing (Score:2)
I decided to teach my 2 year old child how to play a video game recently. I sat down and showed her what the keys do, how the snake moves, how to make progress, and how to get to the end. It wasn't an instant process. She had to be guided multiple times, taught what the directional keys do, and the names of everything she was interacting with.
After some time, she was able to solve puzzles without guidance from me. Sitting her down in front of a video game with no knowledge at all would make her just as clueless a
No electronics before age 5 (Score:2)
Do not expose kids to screens at early age. It stunts brain development.
Construction (Score:2)
Re: (Score:2)
I could believe maybe that following a path of markers would be possible if they were set up all standing and consistently spaced. But then you knock five over, and replace another two with a cement blockade, and another two with a metal blockade, space them unevenly like a human put them there and add in some inconsistent larger markers. Finally splash them all with mud or snow and I'm wondering if the AI would still be reliable enough to drive
Re: (Score:2)
You can train a person to drive in around a decade and a half, much of it spent training them to see and perform those spatial tasks; actually manipulating the controls of a car takes much less.
Fortunately modern AI algorithms can learn more in parallel, don't need to sleep, etc., so we probably won't need to wait a decade and a half for one to learn to drive.
You don't have any "inherent ability for grouping by spatial relationships" either. You learned it through experience. A modern AI system acquires
I'm 40 (Score:3)
I'm 40 years old. I spent the first 6 years of my life figuring out the world around me, the years from there to about 18 learning stuff in school and figuring out how to use complicated machines and understanding the deeper rules of society and complex systems (like how the rules surrounding driving work; not just the legal rules but also the implicit social conventions), and I've spent the 22 years since then refining my understanding of the world and my place in society. I program computers (and games!) for a living and study philosophy and ecology as hobbies.
Why can I figure that game out faster than a newly hatched AI? Literally everything in my life over 40 years led up to the moment when I played that game. It's really not a fair fight.
First (Score:1)
Re: (Score:2)
It's strange how people you'd expect to be fairly logical revert to magical thinking when AI is mentioned. Or maybe I just give Slashdot too much credit.
You've made a bunch of statements, none of which you've backed up with any kind of evidence. Your entire second sentence is demonstrably false. And the last two posts sound like an old Star Trek episode or something telling us how special we are.
Duh. Yes, we need to interconnect NNs (Score:2)
there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing
Why not? It should be possible to train NNs with many different games to recognize a "game", to identify game controls, to recognize characteristics of a possible game objective, and to recognize signs of success. You then connect these to an NN with "memory" to attempt those possible goals until one triggers recognizable success. With repetition, that NN becomes the one that understands that game's play. It could feed back its knowledge of the game to the others so that that game is recognized in the future
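The feed-back-knowledge idea can be sketched in miniature (a hypothetical toy of my own, not a real multi-network system): train a tabular learner on a small "game", then hand its learned Q-table to a learner facing a bigger version of the same game as prior knowledge.

```python
import random

def train(n, episodes=300, q=None, seed=0):
    """Sparse-reward Q-learning on an n-state chain (reward 1.0 only
    at state n-1). An existing Q-table can be passed in as prior
    knowledge. Returns the learned table and the first episode in
    which the goal was reached (== episodes if it never was)."""
    rng = random.Random(seed)
    q = dict(q) if q else {}
    for s in range(n):
        for a in (-1, 1):
            q.setdefault((s, a), 0.0)   # unknown states start blank
    first_win = episodes
    for ep in range(episodes):
        pos = 0
        for _ in range(4 * n):
            if rng.random() < 0.1 or q[(pos, 1)] == q[(pos, -1)]:
                a = rng.choice((-1, 1))
            else:
                a = 1 if q[(pos, 1)] > q[(pos, -1)] else -1
            nxt = min(max(pos + a, 0), n - 1)
            r = float(nxt == n - 1)
            q[(pos, a)] += 0.5 * (r + 0.9 * max(q[(nxt, -1)], q[(nxt, 1)]) - q[(pos, a)])
            pos = nxt
            if pos == n - 1:
                first_win = min(first_win, ep)
                break
    return q, first_win

q_small, _ = train(8)                        # master the small game first
_, scratch = train(16, seed=1)               # big game, blank slate
_, reuse = train(16, q=q_small, seed=1)      # big game, transferred knowledge
print(f"first success from scratch: episode {scratch}; with prior: episode {reuse}")
```

The transferred table already "knows" to head right through the familiar first half of the bigger chain, so exploration is spent only on the new part, which is the essence of the cross-game knowledge sharing proposed above.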
Re: (Score:2)
DeepMind designed a system that can learn to play pretty much any Atari game. It learns to play a new game much more quickly if it has previously learned to play another game.
The sentence you mention is kind of an oversimplification about a particular kind of training method. You're absolutely right: you could train an AI system on all those things individually, and that would help it. But we're impatient monkeys and we want to be able to just give it a game (or a task) and say figure it out. Amazingly
Wat? (Score:2)
Re: (Score:2)
Your post is now required reading for my students.
This should be familiar to a lot of you (Score:2)
how is that remotely fair? (Score:2)
The versions where the platforms and backgrounds all look like different random sprites. It's not a matter of prior knowledge: they made the background tiles look different in different places. The original game wasn't like that. I don't see how that's a fair comparison at all.
But do they? (Score:3)
At this point, AI naysayers will go for the goalpost shifting argument tactic.
The more I talk to people, even the very intelligent, the more I see that humans don't really learn all that well. People tend to literally refuse to learn.
Re: (Score:1)
Make multiple copies and run them simultaneously. (Score:1)
just wait.. (Score:2)