
Optical Camouflage Puts Kinect Into Stealth Mode

UgLyPuNk writes "Takayuki Fukatsu, a Japanese coder who works under the name Art & Mobile, has done a bit of trickery with Kinect and openFrameworks. The peripheral will still track your movement and position, but turns your image nearly transparent. Take a look (it's particularly obvious at about 1:30):"
This discussion has been archived. No new comments can be posted.

  • by igreaterthanu ( 1942456 ) * on Friday December 03, 2010 @06:07AM (#34428970)
    The article states that he hasn't said how he is filling in the gaps. It's really easy to fill in the changes in the background with a static image, and that may be how he's doing it; I could achieve the same effect with a regular webcam.
    • by Sockatume ( 732728 ) on Friday December 03, 2010 @07:02AM (#34429158)

      It's a shade more sophisticated than that. I think he's using the existing make-a-texture-mapped-model-of-the-space code, but telling it to texture-map anything that's non-background with a pre-existing image of the background. It's a cute project, obviously intended to recreate the look of the sci-fi cloaking effect rather than do anything clever. After all, you could achieve a much more effective result by just replacing the feed with one in which the person was not present.

      • Well, "clever" was the wrong word there; "functional", perhaps. It certainly is clever.

        • by Anonymous Coward
          Well, maybe a peripheral for the next Harry Potter Kinect game - bundle an "invisibility cloak" which is just a cloak in a certain colour or something and use that to make your image vanish somehow. Fun, but yes, struggling to think of any practical uses.
      • Re: (Score:3, Interesting)

        by evilbessie ( 873633 )
        I think it's much simpler than all of that: he's obviously just taken a couple of Polaroids, stuck them in front of the cameras, and then walked around in front of the depth sensor, so the software is just using the depth information to alter the still images. Not really clever at all, and as far as I can see it's damn obvious from the video (the pink border which shows on the right-hand side).
        • Re: (Score:2, Informative)

          by Anonymous Coward

          The pink border is the wallpaper of his desktop computer. He is pointing a video camera at his monitor.

        • I think he takes a reference frame and depth sample when the software starts up, then blends it with the live video based on the difference between the current depth buffer and the reference one. The piano keys are moving when he's not in front of them, so it seems to be the live video feed in areas that are not the user. I imagine it would look pretty odd if someone were to walk behind him.
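That blend-by-depth-difference idea is easy to sketch. A hypothetical reconstruction, not the author's actual code: it assumes the RGB frames and depth maps arrive as numpy arrays (depth in millimetres), and the `falloff` constant is invented.

```python
import numpy as np

def cloak_blend(live_rgb, ref_rgb, live_depth, ref_depth, falloff=500.0):
    """Blend a stored reference (empty-room) frame over the live video
    wherever the current depth buffer differs from the reference depth.

    live_rgb, ref_rgb     : HxWx3 float arrays in [0, 1]
    live_depth, ref_depth : HxW depth maps in millimetres
    falloff               : depth difference at which the cloak is fully opaque
    """
    diff = np.abs(live_depth.astype(np.float32) - ref_depth.astype(np.float32))
    # Per-pixel opacity of the "cloak": 0 = show live feed, 1 = show reference
    alpha = np.clip(diff / falloff, 0.0, 1.0)[..., None]
    return (1.0 - alpha) * live_rgb + alpha * ref_rgb
```

Anything that still matches the reference depth (the untouched background, including the moving piano keys) passes through live, which is consistent with what the video shows.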
      • by Lumpy ( 12016 ) on Friday December 03, 2010 @08:52AM (#34429572) Homepage

        "Pre-existing image of the background"? That makes it an epic fail, but a cute project.

        I'll be impressed if it can generate the background without ANY reference images or reference data.

        • by Q-Hack! ( 37846 ) *

          "Pre-existing image of the background"? That makes it an epic fail, but a cute project.

          I'll be impressed if it can generate the background without ANY reference images or reference data.

          Except for the fact that he moves the camera, thus changing the POV. Whatever he is doing to achieve this effect is live.

          • by leuk_he ( 194174 )

            The camera he moves is the camera taking the image of the screen. He is not using screen-capturing technology to post the image to YouTube; he is using an external camera pointed at his monitor.

            Somewhere in the video he moves to the back, and then the effect disappears completely. If he were using a live stream for the background, then once the depth information was lost he should have reappeared. From this you can assume he's using a static image, with the depth camera used to make a preada

      • Doesn't the Kinect have multiple cameras? Together they might be able to fill in the gaps.
    • by Pseudonym Authority ( 1591027 ) on Friday December 03, 2010 @07:18AM (#34429210)
      Sure, maybe you could. But you haven't, so don't be a dick.
      • Can't speak for GP but I've done better using just a JPEG of a background. You can't see when I move at all.

        • by emm-tee ( 23371 )

          Can't speak for GP but I've done better using just a JPEG of a background. You can't see when I move at all.

          Are you joking? The whole point is to be able to see when he moves. It's a special effect to show a sci-fi kind of "cloaking". Sure you could implement something similar with a standard webcam, but the novelty here is that he seems to use the Kinect's depth information to work out how much distortion/lensing effect to apply. Hence when he stands against the bookshelf in the background, he disappears completely.

        • He's obviously being funny, guys.
      • by Ecuador ( 740021 )

        But he does have a point. How is this using the Kinect's abilities? How is this different from EffecTV's PredatorTV filter? Is the guy just riding the current Kinect dev craze to get credit for doing something simple that's been available for years, or is he really doing something new?
        I can't really see how the Kinect depth info could help. Now, the other guy with two Kinects providing a 3D view really did have something, and he could easily add a Predator filter to make something invisible, as he would be able to see b

        • The depth info tells you what's non-background in the scene unambiguously, and allows the texture map to distort to follow the object, which is a bit closer to the way the effect was depicted in GitS and the latter MGS games.
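A depth-driven distortion of that kind can be approximated by displacing background pixels along the gradient of the depth difference, so the silhouette appears to bend the scenery behind it. A rough illustration of the idea, not the project's code; numpy arrays and an invented `strength` factor are assumed:

```python
import numpy as np

def refract_background(background, live_depth, ref_depth, strength=0.05):
    """Crude 'heat-shimmer' cloak: sample the stored background image with
    an offset proportional to the gradient of the depth difference, so the
    cloaked figure appears to refract the background like a lens."""
    h, w = live_depth.shape
    diff = live_depth.astype(np.float32) - ref_depth.astype(np.float32)
    gy, gx = np.gradient(diff)          # slope of the silhouette in depth
    ys, xs = np.mgrid[0:h, 0:w]
    # Displace each sample position along the depth gradient, clamped to frame
    sx = np.clip(xs + strength * gx, 0, w - 1).astype(int)
    sy = np.clip(ys + strength * gy, 0, h - 1).astype(int)
    return background[sy, sx]
```

Where the depth matches the reference (flat `diff`, zero gradient) the background passes through undistorted; only the edges of the cloaked figure shimmer.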

          • by Ecuador ( 740021 ) on Friday December 03, 2010 @08:23AM (#34429444) Homepage

            Notice how when he walks far back, the distortion goes away and he disappears completely. What does that tell you?
            So he is using depth info, but what he's doing with it is rather lame. He still has a static image of the empty room; otherwise, when the person went far enough back, he would appear uncloaked, not disappear completely. Of course, you would need two Kinects and much more work to avoid the need for a static background image and just apply the "cloak" to objects nearer than the background. But that would certainly be cool.
            What we have here you can do better without a Kinect, by simple diffing against the background image. If he at least used the depth info to alter the distortion it would be interesting, but it seems to me that when he walks towards or away from the camera the distortion does not change at all.
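Plain background diffing with an ordinary camera, as suggested, really is trivial. A minimal sketch under assumed conventions (float RGB frames as numpy arrays, invented function name and threshold):

```python
import numpy as np

def webcam_cloak(live, background, threshold=0.15):
    """Depth-free 'cloak': any pixel that differs from the stored background
    frame by more than `threshold` (mean absolute difference over RGB) is
    painted over with the background pixel. No Kinect required; fails on
    shadows, lighting changes, and background-coloured clothing."""
    diff = np.abs(live - background).mean(axis=2)
    mask = diff > threshold
    out = live.copy()
    out[mask] = background[mask]
    return out
```

The obvious drawback is the lack of a depth cue, so this version cannot fade the effect with distance the way the Kinect hack appears to.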

        • by Ecuador ( 740021 )

          Hmm, upon watching more of the video, he seems to stop the effect or "disappear" completely if he goes far enough back, so I guess that is one way he is using the Kinect depth data. However, when he walks forward or backward, nothing changes in how the background is seen through him; I would have expected something like the effect of a lens moving closer or farther, i.e. a different distortion depending on how close he is to the cam.
          Anyway, without knowing what he is trying to do, I can't be impressed by

        • by Lumpy ( 12016 )

          There has been a plugin for After Effects that does this EXACT effect, but in more detail and clarity, for years. All you need is a green-screen shot of the target and the footage you want the effect on.

    • Looks like it takes a static shot, and then projects it on top of the 3D data from the Kinect.

    • The edge artifacts in the video really make it look like they aren't doing anything special... Pretty much the same as []

      It's kind of a neat effect, but it's not apparent at all what the Kinect's tracking stuff is used for.
    • Yeah - my Mac has been doing this for years...with ONE camera angle.
  • by Anonymous Coward

    I would say it's similar to the recent hack that uses TWO Kinect devices working together, but this time the programmer has used it to simulate a "Predator-like" effect. I might be wrong about it, but if you watch carefully there's an obvious alignment/sync problem between the "Predator" shadow and the actual background (possibly due to the image coming from a different angle).


    • No, it's low-tech: take a couple of Polaroids, stick them in front of the cameras, leave the depth sensor alone, walk around in front, and ta-da, you're invisible.
  • a bit slashdotted (Score:4, Informative)

    by Anonymous Coward on Friday December 03, 2010 @06:12AM (#34428996)

  • by pinkushun ( 1467193 ) on Friday December 03, 2010 @06:40AM (#34429088) Journal

    It's camouflaged by work's firewall :P

  • Griffin? Is that you?
  • by Anonymous Coward
  • In case you don't get it Predator []. This video is not about doing the greatest CG. It's kind of obvious that the distortion was on purpose.
    • by EdZ ( 755139 )
      Or Ghost in the Shell (the Kinect does use an IR camera, so you could construe a Thermoptic Camouflage joke). Or Metal Gear Solid. Or Neuromancer (the Panther Moderns). Or any of the many other science fiction stories that mention optical camouflage.
    • Also, if you look at the glitch at 2:21, you can see that he is not even moving in the same part of the room that is shown to us.

      I think he has a camera pointing in one direction and the Kinect pointing in another, so he is actually projecting the Kinect volume info over an arbitrary image with a cool effect; he is not really cloaking against his own background.
      Not much of a difference from current blue/green-screen effects. The final result is exactly the same.
  • EffecTV has had this effect for years.
    No specific Kinect feature needed, btw.

  • by matt007 ( 80854 )

    looks like their site has also been well camouflaged.

  • Pretty simple trick. He took a static image of the empty scene, then he just overlays it over any part that the Kinect detects is more than a certain distance closer than it used to be (radar-like). You can tell it's a static image because when he walks in front of the self-playing piano, the keys you see "through" him are stopped (in the up position).

    I really don't see *any* use for this whatsoever. The only difference between it and a live feed is that anything *not* covered up is live (as long as it doe
  • ..are you playing a game? Or did you just render your Xbox 360 obsolete?
  • I salute Steve Ballmer for giving us the Kinect - perhaps the greatest pr0n device ever invented. Well done, Steve!
    • I've heard this claim before. Can you actually explain to me how this works?
      You lie alone on your couch, doing humping moves, while looking at the TV where an animated chick tells you that it feels good?
      You jerk off, while an animated character undresses and tells you how big you are?

      Seems sad to me, even by slashdot standards.
      • That sounds plausible and I'm sure this software is already on the way. Someone said if drugs were banned people would run in circles on the front lawn until they fell over; "people want to get high." Humans are animals and spend a lot of time catering to animal instincts.
  • Why make useful applications when we can make cool applications.
  • This is easily done with a regular webcam (apart from the extra depth data). This story would be lame even on an M$ fanboy game blog...
