
Video: CES 2014: Ohio Company is Bringing Military-Grade Motion Sensors to Gaming

Video no longer available.
In Portsmouth, Ohio, a company called Yost Engineering (YEI) Technology has quietly been making motion-sensing devices for military, aerospace, industrial, robotics, and other commercial motion capture uses, including rotoscoping for the film/video industry. Now they want to bring this same technology to gaming. They tried a Kickstarter campaign in 2013, but raised a little less than half of their target amount. They're going to try Kickstarter again, starting Feb. 14, 2014 -- and this time, they've been working on PR before asking for money. You can see what they're up to in gaming sensor development at www.priovr.com. Or go to the main YEI Technology corporate site, which has a whole bunch of free downloads in addition to the usual product blurbs.

Tim: So Paul, we are here at the YEI booth. And there are some guys in suits playing video games. Talk about that a little bit. What is the hardware we are looking at?

Paul: Essentially, what we are looking at here is our full-body motion tracking suit. It is called PrioVR. We have two variants of the suit on display here. We have what we call the core suit, which is essentially a full-body suit, primarily for gaming applications. And then we have Derek in the upper-body-only suit. Actually, Derek is not in the suit anymore, but somebody else is getting into it there. That is the same technology, so you can sit on your couch and be lazy and still have the same upper-body aspects without necessarily needing the legs attached.

Tim: Tell us what kind of sensors it takes to track body motion. And the processing that figures out where each body part is, where does that take place?

Paul: Essentially, in each of the sensors we have a high-performance RISC microcontroller along with a set of three sensors: a 3-axis gyroscope, a 3-axis magnetometer, and a 3-axis accelerometer. The sensor fusion all happens on board, on the microcontroller. All of that gets fed to a centralized wireless hub, and then that gets transmitted to the PC for use in the game. Our goal is this: by putting all of the processing on board, we free the host system to do the processing for the game. So our goal is to make it so that the host system feels very little of the effects of that, except for the communication with it.
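Paul doesn't spell out the fusion math, and YEI's on-board algorithm isn't public, so the snippet below is only a minimal illustration of the general idea: a complementary filter that integrates the gyroscope for responsiveness and leans on the accelerometer's gravity reference to cancel drift (a full 9-axis filter would use the magnetometer to correct yaw the same way). All names and constants here are assumptions for the sketch.

```python
import numpy as np

def complementary_update(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One step of a basic complementary filter (tilt only).

    gyro:  angular rates in rad/s, (x, y, z)
    accel: specific force in g, (x, y, z)
    """
    # Integrate angular rate: accurate short-term, but drifts over time.
    pitch += gyro[0] * dt
    roll  += gyro[1] * dt

    # Gravity direction from the accelerometer: noisy, but drift-free.
    accel_pitch = np.arctan2(accel[1], np.hypot(accel[0], accel[2]))
    accel_roll  = np.arctan2(-accel[0], accel[2])

    # Blend the two: mostly gyro, with a slow pull toward gravity.
    pitch = alpha * pitch + (1 - alpha) * accel_pitch
    roll  = alpha * roll  + (1 - alpha) * accel_roll
    return pitch, roll

# Example step: device nearly level, slight rotation about x.
print(complementary_update(0.0, 0.0, (0.05, 0.0, 0.0), (0.0, 0.02, 1.0), dt=0.01))
```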

Tim: Now, you mentioned the PC specifically. What sort of data is flowing into what sort of software backend? What does it take, on the computer side, to interact with the sensors that you put into the suit?

Paul: Okay, essentially what we are doing is taking all of that inertial data we are getting from the sensors, the raw data, calibrating it, processing it, and turning it into very accurate orientations. Those very accurate orientations get stuck into a packet that is all time-synchronized, and that gets sent off to the computer for processing by the host system. When the host system gets it, we have a set of APIs and DLLs that take that data and present it to any application that needs it. So if you are in a game engine, it is very easy to take that orientation data out and drive an animation rig from it; or, if you are an indie developer and want to talk to that data stream directly, we have an open source SDK that allows you to do that as well.
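The actual PrioVR wire format isn't documented in this interview, so the layout below (a 32-bit timestamp followed by one quaternion per sensor node, with an assumed node count) is purely hypothetical; it just illustrates what unpacking a time-synchronized orientation packet on the host might look like.

```python
import struct

NUM_SENSORS = 17                         # assumed node count for a full-body suit
PACKET_FMT = "<I" + "4f" * NUM_SENSORS   # uint32 timestamp + (w, x, y, z) per node
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def parse_packet(buf: bytes):
    """Unpack one hypothetical orientation packet into
    (timestamp_ms, [quaternion, ...])."""
    fields = struct.unpack(PACKET_FMT, buf[:PACKET_SIZE])
    timestamp_ms, rest = fields[0], fields[1:]
    quats = [tuple(rest[i:i + 4]) for i in range(0, len(rest), 4)]
    return timestamp_ms, quats

# Round-trip demo with identity quaternions.
demo = struct.pack(PACKET_FMT, 1234, *([1.0, 0.0, 0.0, 0.0] * NUM_SENSORS))
ts, quats = parse_packet(demo)
print(ts, quats[0])
```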

Tim: Now what sort of game engines has this been integrated with right now?

Paul: We have integrated this with all of the major game engines. We have it in Unity, which is what the demos here are running. We have it in CryEngine, and we also have it working in the UDK. We also have people who have their own game engines; we have done some of that ourselves. So talking to that data stream directly, for a user that is not using a game engine, is completely possible and very easy to do as well.

Tim: Did you start out with the intent of making a game controller in a suit or did this grow out of something else?

Paul: This actually grew out of something else. We were doing dynamic robotics research, and we needed inertial sensors for that that were high accuracy but low cost. So we developed our own inertial sensors. Those became a product line: we have a 3-Space sensor product line, primarily used in military, aerospace, industrial, and robotics applications. And then in our research lab, we put together a suit out of these, just for fun, and realized that it was so much fun. The technology had converged to the point where head-mounted displays were becoming readily available. We decided that we could sit around and wait for that kind of technology to take off, or we could do it ourselves. The way we view it is that a company like Oculus or Sony or any of the other head-mounted display manufacturers is providing the eyes into a virtual room; we want to provide the body. But as you can see in these demos, like Chris behind me, you don't even need a head-mounted display for this to be a whole lot of fun. Even with just you in front of the TV in the suit, you can do incredible things that you can't do with any other kind of gaming system.

Tim: Now there has been a lot of body tracking technology from the big console makers. Distinguish what this can do while wearing sensors right on your body, versus one of the optical based systems out there.

Paul: Okay, essentially the problem with the optical-based systems is two-fold. One is that, because it is an optical system, there is inherent latency involved in getting the image from the camera, or the depth camera, or the combination of the two, and then doing the processing on that. The processing is non-trivial, so you wind up with these delays. For example, the original Kinect is somewhere around 90 milliseconds, and the new Kinect, the Xbox One version, is somewhere around 60 milliseconds. In a gaming environment, where you want one-to-one response, that is a pretty long time. Our system is under 10 milliseconds. So by measuring things directly, we avoid a lot of the processing load and get this kind of instant tracking. The second problem with optical systems is obviously the line-of-sight problem. If you are out of the camera's range, or if you move into a pose where it can't see part of your body, it can't accurately track you. And if you drop down on the ground, roll around, and curl up in the fetal position, they can't track that kind of behavior. But we can completely do that. So we have a number of advantages; the disadvantage is that you have to get suited up. But for the kind of gamer that wants this experience, it is hard to beat: the ability to do anything you want and have the system be able to track it.

Tim: Now we have got people wearing suits and interacting with headsets here, but it is not even a shipping product. Can you talk about where you are in that process and what it has been like?

Paul: Yeah. As for where we are in the process: we have years of experience doing inertial sensors, so that technology is pretty stable for us and ready to go. Essentially, on February 14 we have a re-launch of the Kickstarter that is going to be specifically for these products. It is a 45-day campaign, but we are looking at a very quick turn from the completion of that campaign to having products shipped. We are looking at the end of June or early July for the first units to ship. Essentially, what we need the money for is to move from demo systems like this to mass-produced systems. So a lot of the money is going to making the suit ergonomic, easy to put on, easy to take off, and then getting the suits mass-produced. That is what we are looking at. The technology itself is fairly stable, as you can see in the demos here. So it is mostly just a matter of moving it into mass production.

Tim: Now talk about the capabilities of your very highest end suit, you mentioned to me earlier it can actually even track what your foot is doing.

Paul: Yeah. We have a three-level suit line: the upper-body-only suit, which I mentioned earlier; the core suit, which we are primarily gearing towards gamers; and then a pro suit, which can also track foot position, shoulder articulation, and torso twisting. That is essentially the same technology that would be used in a professional-quality motion capture solution for a feature film or something like that.

Tim: Now we have got guys here killing zombies. What are some other, non-zombie-killing applications for this?

Paul: Okay. There are all kinds of serious applications for this. Obviously, military training simulation: put a bunch of soldiers in suits and have them run around in a virtual environment for training purposes. But also medical applications: we have a lot of users in the medical space who are interested in using this for rehabilitation and range-of-motion studies. We also have users who are interested in using this for sports and fitness, for example sports analysis to track the performance of an athlete. We actually have the US Olympic Committee using a couple of our sensors for testing sprinters from run to run to see how they do. And we have a lot of people looking at these for a number of other serious applications, education being one of them, architecture being another. So there are a lot of uses for a suit like this at an affordable price, other than just having fun killing zombies.

Tim: One more question: Can you go into a little bit more detail about what the open source SDK allows you to do?

Paul: Essentially, we are doing all the heavy lifting either in the sensors, on the hub, or in the SDK. Primarily, the processing of all the inertial data happens in the suit itself; the SDK then takes that and puts it into the skeletal framework, or skeletal model. That makes it very easy for a developer to access the data at the raw orientation level and do whatever they want with it, or to extract the fully posed skeletal model in a very easy way, again with very little overhead on the host system itself. And the SDK is open source. We are also making our demos open source and freely available, which allows somebody to see, 'Okay, how did they do that?' Our goal is to make it as easy as possible for anybody to use this in whatever end application they have.
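As a rough sketch of what "extracting the posed skeletal model" from per-bone orientations could look like: the forward-kinematics walk below uses made-up bone names and rest offsets, not YEI's actual SDK, and assumes each sensor reports an absolute orientation as a unit quaternion.

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def pose_skeleton(bones, orientations):
    """Place each joint by rotating its rest-pose offset by the bone's
    sensor orientation and adding the parent joint's position.
    `bones` maps name -> (parent name or None, rest offset); parents
    must be listed before their children."""
    positions = {}
    for name, (parent, offset) in bones.items():
        R = quat_to_matrix(orientations[name])
        base = positions[parent] if parent else np.zeros(3)
        positions[name] = base + R @ np.asarray(offset, dtype=float)
    return positions

# Tiny torso -> upper arm -> forearm chain in the rest pose.
bones = {
    "torso":     (None,        [0.0, 0.0, 0.0]),
    "upper_arm": ("torso",     [0.0, -0.30, 0.0]),
    "forearm":   ("upper_arm", [0.0, -0.25, 0.0]),
}
identity = (1.0, 0.0, 0.0, 0.0)
print(pose_skeleton(bones, {name: identity for name in bones}))
```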

Tim: People are going to obviously see you on Kickstarter. Is there any place they should look first if they want to find out more about your product here?

Paul: Yeah. Before the Kickstarter launches, they can visit priovr.com, which is the website we have set up right now with information on this; we will post updates there. Or yeitechnology.com: on that website we also have links to this, and to our existing 3-Space sensor line, if somebody can't wait to try this technology. The suit is going to be really cool though, so look for the Kickstarter on February 14.

This discussion has been archived. No new comments can be posted.
  • I thought military grade meant reliable, rugged, and manufactured by the lowest bidder. High performance doesn't really seem like it's part of the package.

    I would rather have something commercial or enterprise grade if I'm after performance, or consumer grade if I'm after price. Maybe military grade if it's for a toddler and I don't want it to be destroyed instantly...

    • High performance doesn't really seem like its part of the package.

      It is if you're shopping for components to build an INS-guided cruise or ballistic missile. ;-) That reminds me, where can I buy these sensors again? :D

      • by mi ( 197448 )

        That reminds me, where can I buy these sensors again?

        Sensors-shmensors! I want to buy some of their shares... Too bad they don't seem to be publicly traded (yet?)...

      • It is if you're shopping for components to build an INS-guided cruise or ballistic missile. ;-)

        Ballistic missiles don't require high performance. Once they are launched there is little to compute other than time to detonation. Some early missiles used pneumatic logic [wikipedia.org], which is very reliable even in a radioactive environment, but runs with a clock measured in seconds rather than nanoseconds.

        • Ballistic missiles don't require high performance. Once they are launched there is little to compute other than time to detonation.

          What are you blabbering about computing? I was talking about sensors. Resolution, sensitivity, linearity, temperature and time stability, signal-to-noise ratio and frequency response, among others, definitely are the performance characteristics of any sensor, and missiles have some of the most stringent requirements for inertial sensors of all applications.

    • by Roblimo ( 357 )

      Simulators are an area where military requirements are at least as strict as commercial specs. Flight simulators are a good example. Link was the first serious simulator manufacturer, and their first large customer was the Army Air Force.

    • by Dahamma ( 304068 )

      Exactly. "Military grade" sounds impressive and all, but "movie/game production grade" mocap would be a lot better technology for, oh, GAMING.

    • Yeah, military grade generally means it weighs a ton and comes in a giant metal rack mount case, and possibly uses vacuum tubes.

    • Wii controller being what it is, I thought there was more call for military grade televisions.

      I'll get my coat.

  • and this time, they've been working on PR before asking for money

    Obviously, if this story's on Slashdot.

  • by CaptSlaq ( 1491233 ) on Monday January 13, 2014 @05:30PM (#45945005)

    is why they just didn't use YETI as their company acronym and be done with it?
    • is why they just didn't use YETI as their company acronym and be done with it?

      Because Blue already makes a YETI?

  • Military-grade encryption?

  • by Anonymous Coward

    The number one draw for me is that, unlike other similar offerings, this one does not have a spy camera aimed at you all the time!

    In fact, this system is far faster and more accurate than even the newest version of the Kinect. From the video: the old Kinect had 90ms latency, the new Kinect has 60ms latency, and this YEI strap-on system has less than 10ms latency.

    Also, this YEI system is essentially ready to go now. They say they just want to tweak the design of the wearables then raise enough money from

  • Complete living room destruction!

    And probably a trip to the ER.

  • Why does TFS link to a wikipedia article about rotoscoping, which (correctly) identifies it as a manual, 2D process?

    But I guess we should be thankful that, if editors aren't actually going to catch such mistakes, they are at least doing us the favor of linking to documentation highlighting their errors.

    • Yeah, I don't think the editors know what Rotoscoping is. :P

      Maybe "Rotomation" which is a process for matching 3D geometry to footage but that's not usually (if at all) used for rotoscoping.

  • by loopdloop ( 207642 ) on Monday January 13, 2014 @06:22PM (#45945457)

    I was at CES and got to put on their sensor suit with an Oculus Rift. It's the best immersion I've experienced so far. The ability to independently rotate your hands, biceps, and forearms is hugely beneficial.

    Because the system isn't based on optical tracking, there are no occlusion issues. The biggest drawback is that it takes a couple minutes to "suit up". They need to devise a way to attach the sensors to you without all the straps. Also, I've heard people report that there can be sensor drift problems. I didn't experience that, though.

    Overall, I was super impressed with the experience.

    -Matt Sonic / virtualreality.io

  • by Gravis Zero ( 934156 ) on Monday January 13, 2014 @08:28PM (#45946445)

    simply put, this is a very expensive way to do things. the Kinect has done a good job at motion capture, so why not just improve on that idea? using multiple (cheap-o) cameras at different angles, you could capture not just one person but multiple people, without anyone putting on an annoying suit, and even extend the capture area. what's better is that it scales: you can add more and more cameras to build a more accurate model, which would solve occlusion issues. just to sweeten the deal, you could use optical flow to predict future motion and thus remove any lag you might encounter. this would be a great use case for the Epiphany III [adapteva.com] manycore processor, as it could process every camera at the same time.
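    The optical-flow prediction idea is easy to prototype; the sketch below uses OpenCV's Farneback dense-flow routine on two synthetic frames and extrapolates motion one frame ahead. It is an illustration of the commenter's suggestion, not a full multi-camera capture pipeline.

```python
import cv2
import numpy as np

def dense_flow(prev_gray, curr_gray):
    """Farneback dense optical flow: an (H, W, 2) array of per-pixel
    (dx, dy) displacements between two grayscale frames."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Two synthetic frames: a bright square moves 5 px to the right.
prev = np.zeros((120, 160), np.uint8); prev[40:80, 40:80] = 255
curr = np.zeros((120, 160), np.uint8); curr[40:80, 45:85] = 255

flow = dense_flow(prev, curr)
# Linear extrapolation: a point now at (x, y) is predicted at
# (x + flow[y, x, 0], y + flow[y, x, 1]) one frame ahead, which can
# mask one frame of capture latency.
print("estimated (dx, dy) inside the square:", flow[60, 60])
```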

    the bottom line is that while this military-grade motion sensing stuff may be great, it's going to be expensive ($350 per unit from what i see on KS) and there are going to be a LOT of hardware support issues.

    Further reading:
    3D Reconstruction from Multiple Images [ed.ac.uk]
    Optical Flow [wikipedia.org]

    • Price tends to come down as more people adopt the technology. Do you remember how expensive cell phones were when that technology first rolled out to the general population? Military or government sponsored technology projects usually end up contributing to the advancement of non-military applications. One example is GPS: the government spent billions developing GPS for the military, and now that technology is used for non-military applications.

      • Price tends to come down as more people adopt the technology.

        you are assuming many people will adopt their version of motion tracking technology. GPS and cell phones didn't have any direct competition; motion tracking does. a better comparison would be VHS and Betamax. however, in this case, full-body optical tracking (Kinect) already has a three-year head start, costs a lot less, isn't clunky, and is in a shitload of homes. that's some stiff competition.

  • I saw this demo'ed first hand, and it is awesome. I don't know that I'd call it "military grade" (not sure what that means), but they originally developed the technology for controlling industrial robots better, according to the guy in the booth I talked to. So I'd say it's at least "industrial grade" tech. I really want to see the Kickstarter succeed. This VR suit pairs brilliantly with the Oculus Rift, and makes the Wiimote seem rather primitive.
  • by Anonymous Coward

    I had him as a professor for some computer engineering courses in the 90's. A little bit of a nut in the fun way; good professor, and coincidentally I ran into his MIDI servo controllers about a decade ago. Those were very solid, well designed, and responded to the appropriate range of MIDI commands instead of being just barely functional. It looks like his product line has matured quite a bit, and this seems completely plausible from him and his group.

  • by Anonymous Coward

    Will the general public buy into something that sells for over $300 and then requires one to "suit up" and strap multiple thingies to your arms, legs, head, and torso? Unfortunately, I think not. If the sensors were built into a garment-like thing (i.e., something like a sleeved shirt) so the user just pulls it over his/her head without multiple strapping points, it might be more acceptable. The public is extremely lazy.

  • The VR community had heavily invested in the STEM Kickstarter a month or so before PrioVR's Kickstarter was up. I think this was a big reason that the first one failed. The marketing of the technology also failed because they didn't show any integration into existing games, like the STEM system was doing at the time. The fact that the Razer Hydra was already in the hands of folks meant they could see other people using what is in a sense a prototype for the STEM system. Also PrioVR is only relational to y
