CES 2014: Ohio Company is Bringing Military-Grade Motion Sensors to Gaming

Video no longer available.
In a town called Portsmouth, Ohio, a company called Yost Engineering (YEI) Technology has quietly been making motion sensing devices for military, aerospace, industrial, robotics, and other commercial motion capture uses, including rotoscoping for the film/video industry. Now they want to bring this same technology to gaming. They tried a Kickstarter campaign in 2013, but raised a little less than half of their target amount. They're going to try Kickstarter again, starting Feb. 14, 2014 -- and this time, they've been working on PR before asking for money. You can see what they're up to in gaming sensor development at www.priovr.com/. Or go to the main YEI Technology corporate site, which has a whole bunch of free downloads in addition to the usual product blurbs.

Tim: So Paul, we are here at the YEI booth. And there are some guys in suits playing video games. Talk about that a little bit. What is the hardware we are looking at?

Paul: Essentially, what we are looking at here is our full body motion tracking suit—it is called PrioVR. We have two variants of the suit on display here. We have what we call the core suit which is essentially a full body suit primarily for gaming applications. And then we have Derek in the upper body only suit. Actually Derek is not in the suit anymore but somebody else is getting into the suit there. That is kind of the same technology where you can sit on your couch and be lazy and still have the same upper body aspects without necessarily needing the legs attached.

Tim: Tell us what kind of sensors it takes to track body motion, and about the processing that works out where each body part is. Where does that take place?

Paul: Essentially, in each of the sensors we have a high-performance RISC microcontroller along with a set of three sensors: a 3-axis gyroscope, a 3-axis magnetometer, and a 3-axis accelerometer. The sensor fusion all happens on board, on the microcontroller. All of that gets fed to a centralized wireless hub, and then that gets transmitted to the PC for use in the game. By putting all of the processing on board, we free the host system to do the processing for the game. So our goal is to make it so that the host system feels very little of the effects of that, except for the communication with it.
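Paul doesn't say which fusion algorithm runs on the microcontroller, but the general idea of on-board inertial sensor fusion can be sketched with a single-axis complementary filter: blend gyro integration (fast but drifting) with an accelerometer angle estimate (noisy but drift-free). This is purely illustrative; the function name, axis choice, and constants are assumptions, not YEI's implementation.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One fusion step for a single axis (illustrative sketch).

    pitch      -- previous fused angle estimate (rad)
    gyro_rate  -- angular rate from the gyroscope (rad/s), may be biased
    accel_x/z  -- accelerometer components giving a gravity-referenced angle
    """
    accel_pitch = math.atan2(accel_x, accel_z)  # drift-free but noisy
    gyro_pitch = pitch + gyro_rate * dt         # smooth but drifts over time
    # Weighted blend: trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Simulate a sensor held steady at 0.1 rad while the gyro reports a small
# constant bias; the filter converges near the true angle instead of drifting.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch,
                                 gyro_rate=0.01,  # biased gyro reading (rad/s)
                                 accel_x=math.sin(0.1),
                                 accel_z=math.cos(0.1),
                                 dt=0.01)
```

A pure gyro integration here would have drifted by 0.1 rad over the same ten simulated seconds; the blended estimate stays pinned near the accelerometer's reference.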

Tim: Now you mentioned PC specifically. What sort of data is flowing into what sort of software backend? What does it take to interact on the computer side with the sensors that you put into the suit?

Paul: Okay, essentially what we are doing is taking all of the raw inertial data we are getting from the sensors, calibrating it, processing it, and turning it into very accurate orientations. Those orientations get packed into a packet that is all time-synchronized, and that gets sent off to the computer for processing by the host system. On the host system, we have a set of APIs and DLLs that take that data and present it to any application that needs it. So if you are in a game engine, it is very easy to take that orientation data out and drive an animation rig from it; or if you are an indie developer and want to talk to that data stream directly, we have an open source SDK that allows you to do that as well.
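The actual PrioVR wire protocol isn't described in the interview, but a "time-synchronized packet of orientations" is easy to picture. Here is a hypothetical packet layout (invented for illustration, not YEI's format): one 32-bit millisecond timestamp followed by a unit quaternion (w, x, y, z) per sensor, all little-endian.

```python
import struct

# Hypothetical wire format: <timestamp_ms:uint32> then NUM_SENSORS quaternions,
# each four little-endian 32-bit floats (w, x, y, z).
NUM_SENSORS = 3
PACKET_FMT = "<I" + "4f" * NUM_SENSORS

def parse_packet(data):
    """Unpack one packet into (timestamp_ms, list of quaternions)."""
    fields = struct.unpack(PACKET_FMT, data)
    timestamp_ms = fields[0]
    quats = [tuple(fields[1 + 4 * i : 5 + 4 * i]) for i in range(NUM_SENSORS)]
    return timestamp_ms, quats

# Build a sample packet: t = 42 ms, every sensor at the identity orientation.
sample = struct.pack(PACKET_FMT, 42, *([1.0, 0.0, 0.0, 0.0] * NUM_SENSORS))
t, orientations = parse_packet(sample)
```

Because every sensor's orientation shares one timestamp, the host can pose the whole skeleton for a single instant rather than stitching together per-sensor readings taken at slightly different times.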

Tim: Now what sort of game engines has this been integrated with right now?

Paul: We have integrated this with all of the major game engines. We have it in Unity, which is what the demos are showing here. We have it in Crytek's CryEngine, and we also have it working in the UDK. We also have people who have their own game engines talking to the data stream directly; we have done some of that ourselves. So talking to that data stream for a user that is not using a game engine is completely possible and very easy to do as well.

Tim: Did you start out with the intent of making a game controller in a suit or did this grow out of something else?

Paul: This actually grew out of something else. We were doing dynamic robotics research, and we needed inertial sensors for that that were high accuracy but low cost. So we developed our own inertial sensors. Those became a product line, our 3-Space Sensor line, primarily used in military, aerospace, industrial, and robotics applications. Then in our research lab, we put together a suit out of these, just for fun, and realized that it was so much fun. The technology had converged to the point where head mounted displays were becoming readily available. We decided that we could sit around and wait for that kind of technology to take off, or we could do it ourselves. The way we view it is that a company like Oculus or Sony or any of the other head mounted display manufacturers is providing the eyes into a virtual room; we want to provide the body. But as you can see in these demos, like Chris behind me, you don't even need a head mounted display for this to be a whole lot of fun. Even with just you in front of the TV in the suit, you can do incredible things that you can't do with any other kind of gaming system.

Tim: Now there has been a lot of body tracking technology from the big console makers. Distinguish what this can do while wearing sensors right on your body, versus one of the optical based systems out there.

Paul: Okay, essentially the problem with optical-based systems is two-fold. One is that because it is an optical system, there is an inherent latency involved with getting the image from the camera, or the depth camera, or the combination of the two, and then doing the processing on that. The processing is non-trivial, so you wind up with these delays. For example, the Kinect is somewhere around 90 milliseconds, and the new Kinect, the Kinect One, is somewhere around 60 milliseconds. In a gaming environment where you want one-to-one response, that is a pretty long time. Our system is under 10 milliseconds. By measuring things directly, we avoid a lot of the processing load and can have this kind of instant tracking. The second problem with optical systems is obviously the line-of-sight problem. If you are out of the camera range, or if you move into a pose where it can't see part of your body, it can't accurately track that. And if you drop down on the ground and roll around and curl up in the fetal position, they can't track that kind of behavior, but we completely can. So we have a number of advantages; the disadvantage is you have to get suited up. But for the kind of gamer that wants to have this experience, it is hard to beat the ability to do anything that you want and have the system be able to track it.

Tim: Now we have got people wearing suits and interacting with headsets here, but it is not even a shipping product. Can you talk about where you are in that process and what it has been like?

Paul: Yeah, as far as where we are in the process: we have years of experience doing inertial sensors, so that technology is pretty stable for us and ready to go. On February 14, we have a re-launch of the Kickstarter that is going to be specifically for these products. It is a 45-day campaign, but we are looking at a very quick turn from the completion of that campaign to having products shipped. We are looking at the end of June or early July for the first units to ship. Essentially, what we need the money for is to move from demo systems like this to mass-produced systems. So a lot of the money is going to making the suit ergonomic, easy to put on, easy to take off, and then getting them mass produced. The technology itself is fairly stable, as you can see in the demos here, so it is mostly just a matter of moving it into mass production.

Tim: Now talk about the capabilities of your very highest end suit. You mentioned to me earlier that it can actually even track what your foot is doing.

Paul: Yeah. We have a three-level suit line: the upper-body-only suit, which I mentioned earlier; the core suit, which we are primarily gearing towards gamers; and then a pro suit, which can additionally track foot position, shoulder articulation, and torso twisting. So it is essentially the same technology that would be used in a professional-quality motion capture solution for a feature film or something like that.

Tim: Now we have got guys here killing zombies. What are some other, non-zombie-killing applications that might come out of this?

Paul: Okay. There are all kinds of serious applications for this. Obviously, military training simulation is one: put a bunch of soldiers in suits and have them run around in a virtual environment for training purposes. But also medical applications: we have a lot of users in the medical space that are interested in using this for rehabilitation and range-of-motion studies. We also have users that are interested in using this for sports and fitness, for example sports analysis to track the performance of an athlete. We actually have the US Olympic Committee using a couple of our sensors for testing sprinters from run to run to see how they do. And we have a lot of people looking at these for a number of different serious applications, education and architecture among them. So there are a lot of uses for a suit like this at an affordable price, other than just having fun killing zombies.

Tim: One more question: Can you go into a little bit more detail about what the open source SDK allows you to do?

Paul: Essentially, we are doing all the heavy lifting either in the sensors, on the hub, or in the SDK. Primarily, the processing of all the inertial data is happening in the suit itself. The SDK then takes that and puts it into the skeletal framework, or skeletal model, so it makes it very easy for a developer to access that data at the raw orientation level and do whatever they want with it; or they can extract the fully posed skeletal model in a very easy way. Again, that happens with very little overhead on the host system itself, and all of it is open source. We are also making our demos open source and freely available, which allows somebody to see, "Okay, how did they do that?" Our goal is to make it as easy as possible for anybody to use this in whatever end application they have.
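To make the "posed skeletal model" idea concrete: given each sensor's absolute orientation quaternion, a host-side library can chain bones from a root joint, rotating each bone's rest-pose vector by its sensor's orientation. The two-bone arm below is a deliberately simplified sketch (the function names, rest pose, and bone lengths are invented, not the PrioVR SDK's API).

```python
import math

def cross(a, b):
    """3-vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    t = tuple(2.0 * c for c in cross((x, y, z), v))
    u = cross((x, y, z), t)
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))

def pose_arm(q_upper, q_fore, bone=(0.0, -1.0, 0.0)):
    """Chain upper arm and forearm from the shoulder using the absolute
    orientations reported by two sensors (illustrative forward kinematics)."""
    shoulder = (0.0, 0.0, 0.0)
    upper = quat_rotate(q_upper, bone)
    elbow = tuple(shoulder[i] + upper[i] for i in range(3))
    fore = quat_rotate(q_fore, bone)
    wrist = tuple(elbow[i] + fore[i] for i in range(3))
    return shoulder, elbow, wrist

identity = (1.0, 0.0, 0.0, 0.0)
s = math.sqrt(0.5)
quarter_turn_z = (s, 0.0, 0.0, s)  # 90 degrees about the z axis

# Upper arm hanging straight down, forearm bent 90 degrees forward.
_, elbow, wrist = pose_arm(identity, quarter_turn_z)
```

Because the suit reports absolute orientations rather than joint angles, each bone can be posed independently; the host only does this cheap per-bone rotation, which matches Paul's point about minimal host-side overhead.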

Tim: People are going to obviously see you on Kickstarter. Is there any place they should look first if they want to find out more about your product here?

Paul: Yeah. Before the Kickstarter launches, they can visit priovr.com, which is the website we have set up right now that has information on this. We will have updates on there. Or yeitechnology.com; on that website we also have links to this, plus our existing 3-Space Sensor line, if somebody can't wait to try this technology. The suit is going to be really cool though, so look for the Kickstarter on February 14.
