Video Games Are So Realistic That They Can Teach AI What the World Looks Like (vice.com)

Jordan Pearson, reporting for Motherboard: Thanks to the modern gaming industry, we can now spend our evenings wandering around photorealistic game worlds, like the post-apocalyptic Boston of Fallout 4 or Grand Theft Auto V's Los Santos, instead of doing things like "seeing people" and "engaging in human interaction of any kind." Games these days are so realistic, in fact, that artificial intelligence researchers are using them to teach computers how to recognize objects in real life. Not only that, but commercial video games could kick artificial intelligence research into high gear by dramatically lessening the time and money required to train AI. "If you go back to the original Doom, the walls all look exactly the same and it's very easy to predict what a wall looks like, given that data," said Mark Schmidt, a computer science professor at the University of British Columbia (UBC). "But if you go into the real world, where every wall looks different, it might not work anymore." Schmidt works with machine learning, a technique that allows computers to "train" on a large set of labelled data -- photographs of streets, for example -- so that when let loose in the real world, they can recognize, or "predict," what they're looking at. Schmidt and Alireza Shafaei, a PhD student at UBC, recently studied Grand Theft Auto V and found that self-learning software trained on images from the game performed just as well as, and in some cases even better than, software trained on real photos from publicly available datasets.
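For readers who want a concrete picture of what "train on game frames, test on real photos" means in practice, here is a minimal, illustrative sketch in PyTorch. The folder names, model choice, and hyperparameters are assumptions made for illustration only; this is not the UBC researchers' actual pipeline.

```python
# Illustrative sketch only: train an image classifier on synthetic game frames,
# then evaluate it on real photos. Directory names and label layout are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: one subfolder per object class (e.g. car/, pedestrian/, wall/)
# under each root, with the same class subfolders in both roots.
train_set = datasets.ImageFolder("gta_frames/", transform=preprocess)   # synthetic imagery
test_set = datasets.ImageFolder("real_photos/", transform=preprocess)   # real photographs

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Train on game imagery only.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate on real-world photos to see how well the synthetic training transfers.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"Real-photo accuracy after synthetic-only training: {correct / total:.2%}")
```

The structure is the point: during training the model only ever sees synthetic imagery, and at test time only real photos; everything else here is a placeholder.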
This discussion has been archived. No new comments can be posted.


  • by Joe_Dragon ( 2206452 ) on Tuesday September 13, 2016 @03:55PM (#52881537)

    What side do you want?

    China
    France
    Russia
    UK
    USA
    India
    Israel
    Pakistan
    North Korea

    • France, just because "Force de Frappe" is fun to say
  • I wonder what the AI would think of Doom, where the monsters hide behind secret wall panels and jump out from behind to attack?
    • AI doesn't think at all. What we call "AI" is really just a bunch of algorithms.
      • You don't think at all, you're just a bunch of neural circuits.

        • You'll never convince such religious nuts. They'll always resort to special pleading. Just because.

            Religious nuts? That is what you AI and Space Nutters are. You think that somehow AI is going to magically appear because your PC got a lot faster between 1990 and 2016.
            • Confusing people again?
            • You really bought in hard to that whole "things will never change" mantra, didn't you? Is there some bookie that will let you bet your 401k on progress not occurring, or something? Why are you so insistent on trying to explain to people how their fantasies of progress occurring are based on mythology and magic?

              Congratulations on your Nobel, by the way.

              • Things will change. Just not the things the Space and AI nutters are counting on. Welcome to Earth.
                • Oh, OK. So an "AI nutter" apparently believes that AI will "magically appear", because PCs got faster over 26 years. I didn't realize that you could distill their beliefs down to something that sounds so stupid, but I guess you're the expert.

            Ah, that explains it... I hadn't realized that was the origin of his statements, but I guess you're right. The irony in his "Oh, so you know how the brain works?" is that I've actually been studying it for a long time, so yeah, I have a good idea of the general picture. :p

            But after the "Wow, when did you receive your Nobel?", it's easy to see how his brain works - it doesn't! :p

            • If you think the brain is just a bunch of "neural circuits" then you are stuck in 1960. We tried neural nets back then. Didn't work.
              • by dmbasso ( 1052166 ) on Tuesday September 13, 2016 @05:58PM (#52882367)

                We tried neural nets back then. Didn't work.

                It seems it is you who is stuck in 1960, because connectionist techniques nowadays are nothing like that. Deep learning has been breaking record after record, even achieving superhuman performance on some tasks. And deep learning is just a smart math trick over the regular backprop algorithm, allowing more layers without degrading the error gradients (see the toy sketch at the end of the comments). When the models that actually incorporate neuroscientific knowledge (current research) mature, expect even better performance.

                And wtf, this "I tried once, I failed, I'll never try again" attitude is surely loser talk.

                • "Deep learning" is the 2016 version of "Neural Nets" to get some more VC money. Meanwhile it is some voice interface hooked up to Wikipedia.
              • Wait, you're not trying to suggest that we didn't know all that there was to know in 1960, are you? We were using chemical propellant for rockets then, are you really trying to suggest that we did NOT understand all that there was to understand back in 1960? That maybe we were missing something? Because that doesn't sound like you at all.

                No, you're wrong about this. Chemical propellant is the pinnacle of space launch technology, the brain is a bunch of neural circuits, we already knew all of this in the

                • The brain is not a bunch of neural circuits. I already said that. You are a moron. And it is very possible that chemical rockets ARE the only launch technology possible. If there was something better we would be using it rather than using chemical rockets FOR THE LAST 60 YEARS. You guys are incredible. You seem to think there is going to be some magical breakthrough that is going to launch you to Mars.
                  • If there was something better we would be using it rather than using chemical rockets FOR THE LAST 60 YEARS.

                    Listen, when we were doing that thing where we parodied each others' positions in the other story, and I said exactly what you just said, that was supposed to be comedy, man. That's satire. The notion that we would have known in the 60s all of the physics necessary to accomplish any kind of travel through space/time is laughable, ridiculous, and obviously satire. You shouldn't repeat it like you actually think that, it makes you sound incredibly short-sighted. You are turning into that person where it's

                • Wait, you're not trying to suggest that we didn't know all that there was to know in 1960, are you? We were using chemical propellant for rockets then,

                  I'm curious - what are we using now as rocket propellant?

                  As far as the AI argument goes: processing power in the world has increased over the last 3 decades by a factor of a few hundred thousand. The "AI" has improved by a factor of 2. Maybe less. The progress we've seen in AI software has been incremental; a rounding error compared to progress in other software or in hardware, and we've yet to approach the cognitive abilities of a smart cockroach, even though we're consuming orders of magnitude more p

      • What's the difference?

  • So, on one side, you have the game program converting conceptual objects from a database to 3D images, using a powerful GPU in the process.

    And on the other side you have the AI program taking that 3D image and converting it to a conceptual object, and putting it in a database, using a powerful GPU in the process.

    And then you wonder about global warming.

    • You're comparing apples and oranges.

      One is a forward transformation.
      The other is the reverse transformation.

      Mapping between the two is non-trivial.

    • Why is this so strange to you? Virtual worlds have been used for simulating inputs into control systems and the like since at least the times of Apollo (feeding simulated data into the AGC back then).
      • You missed the point. As usual. I warned him no one would understand what he was talking about.
        • No, both of you missed the point. Simulating stuff like this is cheaper, faster, simpler, therefore more efficient than working with the physical world. It is the physical alternative that would cause more global warming than this.
          • I think you missed the point. Read it again. He isn't saying simulation is bad - he is saying THIS method is bad. Seriously you need to learn to comprehend what other people are saying.
            • I think you missed the point. Read it again. He isn't saying simulation is bad - he is saying THIS method is bad.

              He's still wrong. Here's the original comment:

              So, on one side, you have the game program converting conceptual objects from a database to 3D images, using a powerful GPU in the process.

              And on the other side you have the AI program taking that 3D image and converting it to a conceptual object, and putting it in a database, using a powerful GPU in the process.

              And then you wonder about global warming.

              The rebuttal is not that it doesn't consume energy, but that this is a brain damaged way to look at it because

        • by AK Marc ( 707885 )
          No, we all understood, and recognized it as a red herring. You aren't testing what happens inside the conceptual environment. Doing the double transformation just to run a simulation that "trains" the computer in basic operations would be silly. But the double transformation is done to train the intermediate step: "seeing" a wall and recognizing it as such. The AI isn't learning how to interact with the world; that's pre-programmed, and doesn't need the intermediate step.

          The intermediate step
    • Every activity has costs. Things cost time and money. Time means people need to commute, get fed, etc. Food and fuel production and consumption contribute to global warming. Likewise, other expenses like materials, manufacturing, etc. cost money. All that money likewise boils down to energy and labor costs, and the latter again is ultimately an energy expenditure. So it turns out that any costs are roughly proportional to energy expenditures. So limiting your costs by getting things done faster and
    • Current CMOS gate technology shows roughly the same energy per transition as a neuron (given similar levels of complexity). Since AI uses so much power for such marginal results, this implies that computer hardware is not properly designed/optimized for intelligence work, or that AI software is woefully wrong (or both).

      Brain cells and brains as a whole aren't magical; they work by some mechanism. Equivalents of all mechanisms can be made by digital logic systems, but we don't know how to make the equival

  • by SeaFox ( 739806 ) on Tuesday September 13, 2016 @04:52PM (#52881965)

    Good. We need AI that can correctly identify a person as a prostitute and various fictional weaponry and automobiles.

    • by Nidi62 ( 1525137 )
      Thank god they aren't using Saints Row or we'd have AI robots running around using dildo bats.
  • by WolfgangVL ( 3494585 ) on Tuesday September 13, 2016 @07:18PM (#52882721)

    Now the fleets of self driving cars will be sipping "hot coffee" and beating the hookers to collect their money back for services rendered.
    They will also know how to rack up a respectable 5-star wanted level and just hide under a bridge till it all goes away.
    Oh, and my favorite new autonomous feature? Spawning attack choppers out of thin air to aid in robbing every convenience store in the city while performing the above two tasks.

  • The Talos Principle is coming to life!
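As a footnote to the deep-learning exchange above: the claim that modern tricks keep error gradients usable through many layers can be seen in a toy experiment. The sketch below is purely illustrative (the depth, width, random data, and loss are arbitrary assumptions); it compares the gradient reaching the first layer of a deep stack of saturating sigmoid units with that of an equally deep ReLU stack.

```python
# Toy illustration of the gradient point: with saturating activations the gradient
# that reaches the first layer of a deep stack collapses, while ReLU-style
# activations (one of the tricks behind modern deep learning) keep it usable.
import torch
import torch.nn as nn

def first_layer_grad(activation, depth=30, width=64):
    # Build `depth` blocks of Linear + activation, plus a final scalar head.
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), activation()]
    net = nn.Sequential(*layers, nn.Linear(width, 1))

    x = torch.randn(16, width)      # arbitrary fake batch
    loss = net(x).pow(2).mean()     # arbitrary loss, just to produce gradients
    loss.backward()
    # Mean absolute gradient on the very first layer's weights.
    return net[0].weight.grad.abs().mean().item()

torch.manual_seed(0)
print("sigmoid stack, mean |grad| at first layer:", first_layer_grad(nn.Sigmoid))
print("relu stack,    mean |grad| at first layer:", first_layer_grad(nn.ReLU))
```

With the sigmoid stack, the first-layer gradient typically comes out orders of magnitude smaller, which is the vanishing-gradient problem that modern activation choices, normalization, and related tricks were introduced to work around.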
