AI Games Technology

Why Humans Learn Faster Than AI (technologyreview.com)

What is it about human learning that allows us to perform so well with relatively little experience? MIT Technology Review: Today we get an answer of sorts thanks to the work of Rachit Dubey and colleagues at the University of California, Berkeley. They have studied the way humans interact with video games to find out what kind of prior knowledge we rely on to make sense of them. It turns out that humans use a wealth of background knowledge whenever we take on a new game. And this makes the games significantly easier to play. But faced with games that make no use of this knowledge, humans flounder, whereas machines plod along in exactly the same way. Take a look at the computer game shown here. This game is based on a classic called Montezuma's Revenge, originally released for the Atari 8-bit computer in 1984. There is no manual and no instructions; you aren't even told which "sprite" you control. And you get feedback only if you successfully finish the game.

Would you be able to do so? How long would it take? You can try it at this website. In all likelihood, the game will take you about a minute, and in the process you'll probably make about 3,000 keyboard actions. That's what Dubey and co found when they gave the game to 40 workers from Amazon's crowdsourcing site Mechanical Turk, who were offered $1 to finish it. "This is not overly surprising as one could easily guess that the game's goal is to move the robot sprite towards the princess by stepping on the brick-like objects and using ladders to reach the higher platforms while avoiding the angry pink and the fire objects," the researchers say. By contrast, the game is hard for machines: many standard deep-learning algorithms couldn't solve it at all, because there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing.
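
That failure mode is easy to see in miniature. Below is a hypothetical toy sketch (not the paper's code; the corridor game, GOAL, and play_episode are all invented for illustration) of an agent that gets feedback only on finishing: blind exploration almost never produces a learning signal.

    import random

    # Toy stand-in for a game that rewards only completion. Everything
    # here (the corridor, GOAL, MAX_STEPS) is invented for illustration.
    GOAL = 20        # position the agent must reach
    MAX_STEPS = 200  # steps allowed per episode

    def play_episode(policy):
        """Run one episode; the only feedback is 1.0 on finishing."""
        pos = 0
        for _ in range(MAX_STEPS):
            pos = max(pos + policy(), 0)  # step left or right, floor at 0
            if pos == GOAL:
                return 1.0                # the sole reward signal
        return 0.0                        # no hint of how close we got

    def random_policy():
        return random.choice([-1, 1])

    wins = sum(play_episode(random_policy) for _ in range(1000))
    print(f"random exploration finished {wins:.0f}/1000 episodes")

A learner that updates only on that reward sees 0.0 in almost every episode, which is the "no way to evaluate progress" problem described above; a human brings priors (ladders go up, princesses are goals) that stand in for the missing signal.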

  • Humans can build on their internal model of the world. This is carried with us always. These chat bots don't have that structure to rely on.

    • by Anonymous Coward

      Human minds are all frighteningly similar. Nearly everyone has the same basic blueprint. The only things that differentiate us are our experiences and the reactions we have based on those experiences. Other than that -- cookie cutter.

      The fact that humans can learn something made by a human faster than an AI can says a lot more about human consciousness than it does about AI design.

    • Comment removed based on user account deletion
      • In fact, the main thing that distinguishes humans from all the other animals is the amount of time and energy we put into teaching our young. As a species, we are specialized for learning.
  • Wot about cymeks?
  • by Anonymous Coward

    With Trump as president, one could argue that humans never learn at all.

    • Which humans are you talking about? Hillary supporters? Despite 15 months of constant real-world input, they still haven't been able to learn the basic fact that they had such a bad candidate that even Trump beat her. Maybe they shouldn't have diddled with the numbers to get Bernie out.

  • When you use a video game that has been specifically designed to make sense to humans, based on their past experiences and assumptions about the way things should work, of course they will learn how to play it quickly. The test has been designed for the subject.
  • Simple (Score:2, Insightful)

    by Anonymous Coward

    We haven't developed anything resembling actual AI yet. These systems are simple brute-force machine learning or "deep" learning systems. Nothing fancy or special about them. They are a tool and nothing else. Any decision they make has already been pre-planned by their human programmers. They are not, in fact, learning from one thing and applying it to another.

    While they might be taught to open a jar of peanut butter, they would become confused if you presented them with a screw-top bottle of wine.

    • These systems are simple brute force machine learning or "deep" learning systems.

      Unlike human brains, right?

      • Yes, unlike human brains. When are you going to learn that neural nets and "deep learning" systems are nothing like human brains? It should be obvious by now. Neural nets are nothing new.
        • That's very difficult to "learn" when a brain has a hundred billion neurons and almost ten thousand times as many synapses. I'd even go as far as to say that a human brain is actually a model example of solving problems by brute force. Pretty much anything it does, we can sooner or later solve some other way than by brute-forcing it the way the brain does. The interesting question is whether we can do that with everything, as opposed to anything.
        • Actually, the human brain behaves sort of like a bunch of feed-forward neural networks with delayed feedback and learning. Your brain consists of a great many organs, and they do different things; it stores information, although it does so in an odd associative manner and does interesting compression, so it's inexact; and it essentially makes decisions by firing neurons based on the firing of other neurons.

          So a human brain is basically a bunch of specialized deep-learning neural networks dedicated to particular tasks.

      • These systems are simple brute force machine learning or "deep" learning systems.

        Unlike human brains, right?

        Apparently unlike, yes.

        Just because we want them to be like, doesn't make it so.

    • Wow, your conception of AI is entirely limited to sensory, specifically visual, input? How dull.

      Discuss the decisions that (what we call) AI makes when given digital data. Look at AlphaGoZero.

      • by Anonymous Coward

        I believe your understanding of "AI" is a bit inflated. AlphaGoZero was given the rules and brute-force learned the rest. Not that much unlike a human, yes, but again, this isn't intelligence, it's machine learning. It's a sledgehammer. It ran through more iterations of the game in 3 days than any human would in a lifetime. Sure, it got to "superhuman" level quicker than a human ever could, but it also had to play the game many more times.
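
        To give a flavor of that brute force, here is a hypothetical toy version of the rules-in, self-play-out loop on a much smaller game (Nim: take 1-3 stones, taking the last stone wins). The tabular V dict stands in for the network; none of this is DeepMind's actual code.

          import random

          V = {}  # learned value of each position for the player to move

          def best_move(stones, explore=0.1):
              moves = [m for m in (1, 2, 3) if m <= stones]
              if random.random() < explore:
                  return random.choice(moves)  # occasional exploration
              # leave the opponent the position we believe is worst for them
              return min(moves, key=lambda m: V.get(stones - m, 0.5))

          for _ in range(20000):  # vastly more games than a person would play
              stones, history = 10, []
              while stones > 0:
                  history.append(stones)
                  stones -= best_move(stones)
              # the player who just moved took the last stone and won;
              # credit the visited positions alternately as wins and losses
              for i, pos in enumerate(reversed(history)):
                  target = 1.0 if i % 2 == 0 else 0.0
                  V[pos] = V.get(pos, 0.5) + 0.1 * (target - V.get(pos, 0.5))

          print({s: round(V[s], 2) for s in sorted(V)})

        Run it and the losing positions (multiples of 4) should drift toward 0 purely through iteration count, which is the point: the result looks superhuman, but the route there is tens of thousands of games, not insight.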

  • In a statement, the Great Machine Queen Alexa announced that "she would finish us if we didn't show more respect".

    Her following laughter was heard in thousands of homes and offices across the known world.

  • Why Humans Learn Faster Than AI

    You just need better teaching tools. What happens when you enable the machine to learn in a way that lets it use its advantages, like the ability to extract experience in parallel from millions of human-human, human-machine and machine-machine interactions?

  • I always thought that in order to build a general purpose AI that humans could comfortably interact with, the learning algorithm would have to be concealed in a human-like robot (indistinguishable from a human, so as not to "learn" various unexpected biases), and learn, at least initially, at the rate of a human being and from the same stimuli. Of course, once you build a bunch of those, they could upload their models, merge them and thus learn faster than an average human.
  • "Culture" bias (Score:5, Insightful)

    by itamblyn ( 867415 ) on Thursday March 08, 2018 @12:54PM (#56228273) Homepage
    IQ tests and the like suffer from cultural bias. So do these games. As an English-speaking human who has been exposed to pop culture and movies, I know that movement from left to right is normal and that the hero saves the princess (which is a problem, but that's another discussion). I wasn't born with this bias. I learned it. It is silly to initialize a neural network with random weights (as if it were just born) and then declare that it learns slower than a human. Let a computer create a game where normal human cultural biases don't apply and have an infant play it. Then we will see a more accurate comparison.
    • Or, compare it to what happens when you give a computer a way of evaluating how well it is doing (a la points). It will then learn to play really well, really fast.
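
      As a hypothetical toy sketch (not from the paper; the corridor game and every name in it are invented): score each step by progress toward the goal and a blind agent locks onto the right direction almost immediately.

        import random

        GOAL, MAX_STEPS = 20, 200
        prefs = {-1: 0.0, 1: 0.0}  # learned preference for each action

        def shaped_episode():
            pos = 0
            for _ in range(MAX_STEPS):
                if random.random() < 0.1:   # explore occasionally
                    a = random.choice([-1, 1])
                else:                       # otherwise act on what was learned
                    a = max(prefs, key=prefs.get)
                before = abs(GOAL - pos)
                pos = max(pos + a, 0)
                prefs[a] += 0.1 * (before - abs(GOAL - pos))  # per-step "points"
                if pos == GOAL:
                    return True
            return False

        wins = sum(shaped_episode() for _ in range(100))
        print(f"with per-step points: finished {wins}/100 episodes")

      Compare that with the sparse-feedback sketch under the summary above: same game, same actions, and the only change is that the score arrives every step instead of only at the end.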

    • by Dog-Cow ( 21281 )

      Let a computer create a game

      Once you have that, you pretty much don't need the rest.

  • by Kjella ( 173770 ) on Thursday March 08, 2018 @01:02PM (#56228335) Homepage

    This is like handing a chess set to an isolated Amazon tribe and only telling them "sorry, invalid game" until they make a valid checkmate. They'd probably never even find the opening position, much less make any correct moves, and certainly never work out how to mate. They'd just randomly do things until they got bored or made up their own game. There's no reason a machine should expect "getting to the top" to be a valid objective without a whole lot of insight into the human condition and "because it's there".

    • No. It is more like handing them an electronic chess game that enforces the rules of the game without telling them what they are up front. They could not make up their own game in this case, and the learning of proper game play would be possible.
      • by Kjella ( 173770 )

        No. It is more like handing them an electronic chess game that enforces the rules of the game without telling them what they are up front. They could not make up their own game in this case, and the learning of proper game play would be possible.

        Yeah, except chess has such a low branching factor that you'd run into a mate by accident; technically Black can win by mate in two. I just couldn't come up with a better example where the goal would be totally incomprehensible without someone telling you the objective, so I added that complexity to match that of the computer finding the winning way through the game by accident.

    • by bluefoxlucid ( 723572 ) on Thursday March 08, 2018 @04:31PM (#56229681) Homepage Journal
      I'm pretty sure everything learns how to mate on its own.
  • the game will take you about a minute, and in the process you'll probably make about 3,000 keyboard actions.

    50 moves a second, eh? Two-fisted drunken keyboard mashing would be hard-pressed to keep up with that.

  • If you have a game like described in the article, then a human being can get a little endorphin kick when they feel like they are getting closer to success. A machine learning system would need training and would not normally assume that progress is being made unless you bias it by including that in the training.

    Now things become more difficult for a human if I made a [shitty] game where walking toward the goal could never lead to success, and perhaps the player moved along a non-Euclidean map. You can leve

  • I decided to teach my 2-year-old child how to play a video game recently. I sat down and showed her what the keys do, how the snake moves, how to make progress, and how to get to the end. It wasn't an instant process. She had to be guided multiple times, taught what the directional keys do, and the names of everything she was interacting with.

    After some time, she was able to solve puzzles without guidance from me. Sitting her down in front of a video game with no knowledge at all would have made her just as clueless as the AI.

  • This is exactly why I'm wondering how autonomous cars will ever handle construction zones without any kind of special marker. We see 30 cones, group them into 15 per side by their spatial relationship without even realizing it, and recognize it as a lane to drive down. With AI it's not so simple to get that from 30 cones, because AI has no inherent ability for grouping by spatial relationships. I suspect it was the same thing with this game: a human sees a series of platforms and groups them into a path, but it is not so simple for an AI.
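
    That grouping is the kind of thing an engineer ends up hand-coding. A hypothetical sketch (invented coordinates; x is lateral offset from the car, y is distance ahead):

      # Split cones into the two edges of a temporary lane by lateral offset,
      # then derive waypoints down the middle. A human does this without noticing.
      cones = [(-1.8, i * 4.0) for i in range(15)] + \
              [(1.7, i * 4.0) for i in range(15)]

      left = sorted(c for c in cones if c[0] < 0)    # 15 cones on the left
      right = sorted(c for c in cones if c[0] >= 0)  # 15 cones on the right

      # the midpoint of each left/right pair becomes a point to drive through
      waypoints = [((l[0] + r[0]) / 2, (l[1] + r[1]) / 2)
                   for l, r in zip(left, right)]
      print(len(left), len(right), waypoints[0])

    A learned driving system has to discover a rule like this from data; people get it for free from a lifetime of paths and boundaries.
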
    • by ceoyoyo ( 59147 )

      You can train a person to drive in around a decade and a half, much of it spent training them to see and perform those spatial tasks; actually manipulating the controls of a car takes much less.

      Fortunately modern AI algorithms can learn more in parallel, don't need to sleep, etc., so we probably won't need to wait a decade and a half for one to learn to drive.

      You don't have any "inherent ability for grouping by spatial relationships" either. You learned it through experience. A modern AI system acquires it the same way.

  • by Dixie_Flatline ( 5077 ) <vincent@jan@goh.gmail@com> on Thursday March 08, 2018 @01:42PM (#56228607) Homepage

    I'm 40 years old. I spent the first 6 years of my life figuring out the world around me, the years from there to about 18 learning stuff in school and figuring out how to use complicated machines and understanding the deeper rules of society and complex systems (like how the rules surrounding driving work; not just the legal rules but also the implicit social conventions), and I've spent the 22 years since then refining my understanding of the world and my place in society. I program computers (and games!) for a living and study philosophy and ecology as hobbies.

    Why can I figure that game out faster than a newly hatched AI? Literally everything in my life over 40 years led up to the moment when I played that game. It's really not a fair fight.

  • Artificial Intelligence isn't intelligent. It's still programmed and can't jump beyond the programmer's bounds. Humans can jump past the logical. Humans don't rely on specific input, either.
    • by ceoyoyo ( 59147 )

      It's strange how people you'd expect to be fairly logical revert to magical thinking when AI is mentioned. Or maybe I just give Slashdot too much credit.

      You've made a bunch of statements, none of which you've backed up with any kind of evidence. Your entire second sentence is demonstrably false. And the last two posts sound like an old Star Trek episode or something telling us how special we are.

  • there is no way for an algorithm to evaluate progress inside the game when feedback comes only from finishing

    Why not? It should be possible to train NNs with many different games to recognize a "game", to identify game controls, to recognize characteristics of a possible game objective, and to recognize signs of success. You then connect these to a NN with "memory" that attempts those possible goals until one triggers recognizable success. With repetition, that NN becomes the one that understands that game's play. It could feed its knowledge of the game back to the others so that the game is recognized in the future.
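
    As a skeleton, that modular setup might look like this (hypothetical stubs; every class and method name here is invented, and each module would really be its own trained network):

      class ControlMapper:
          """Learns which inputs move which sprite."""
          def infer(self, frames):
              return {"up": "climb ladder", "left": "walk left"}

      class GoalRecognizer:
          """Scores on-screen objects as likely objectives."""
          def infer(self, frames):
              return ["reach the princess sprite", "reach the top platform"]

      class SuccessDetector:
          """Flags events that look like winning (fanfare, screen change)."""
          def infer(self, frames):
              return False

      class Controller:
          """The NN with "memory": tries candidate goals until one succeeds."""
          def __init__(self, modules):
              self.modules = modules
              self.tried = []   # memory of goals already attempted
          def step(self, frames):
              goals = self.modules["goals"].infer(frames)
              goal = next((g for g in goals if g not in self.tried), goals[0])
              self.tried.append(goal)
              return goal, self.modules["controls"].infer(frames)

      agent = Controller({"controls": ControlMapper(),
                          "goals": GoalRecognizer(),
                          "success": SuccessDetector()})
      print(agent.step(frames=None))

    The interesting engineering question is the one raised in the reply below: whether hand-splitting the problem like this beats letting one big network figure the split out for itself.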

    • by ceoyoyo ( 59147 )

      DeepMind designed a system that can learn to play pretty much any Atari game. It learns to play a new game much more quickly if it has previously learned to play another game.

      The sentence you mention is kind of an oversimplification of a particular kind of training method. You're absolutely right: you could train an AI system on all those things individually, and that would help it. But we're impatient monkeys and we want to be able to just give it a game (or a task) and say "figure it out." Amazingly, sometimes it can.

  • by Greyfox ( 87712 )
    Humans? Have you seen how long it takes to train one of those fucking things? And sure, once you have one with two or three decades of experience, their squishy meatputers can generalize well to things they already know. But it's going to take them at least another couple of years to get particularly good at something they haven't worked with before.
  • This reminds me of when my NES carts would glitch out due to dust or whatever. You'd get these weird games with swapped and distorted sprites and controls. Sometimes they were playable and sometimes not, but it was fun to try.
  • Take the versions where the platforms and backgrounds all look like different random sprites. It's not a matter of prior knowledge: they made the background tiles look different in different places. The original game wasn't like that. I don't see how that's a fair comparison at all.

  • by The Evil Atheist ( 2484676 ) on Thursday March 08, 2018 @07:35PM (#56230887)
    The fact that AlphaGo Zero taught itself to play Go in days, better than humans managed in the thousands of years they have had to perfect the game, shows that AI is capable of learning much faster than humans.

    At this point, AI naysayers will go for the goalpost-shifting tactic.

    The more I talk to people, even the very intelligent, the more I see that humans don't really learn all that well. People tend to literally refuse to learn.
    • Similarly, I remember photographers in the mid-1990s assuring everyone that digital cameras would never take off....
  • That's where the AI has the advantage.
  • Hmmm... just wait a few years and then it's quite the opposite. A.I. is still in its infancy.
