Virtual Sword Fighting 177

Faeton writes "SIGGRAPH is on, and Extremetech has the scoop on it. From Nvidia's N30 to ATI's monster 4x Radeon 9700 render board, the coolest thing was the virtual sword fighting simulator. With a VR headset and a gyroscopic force-feedback "sword", you could really be the badass knight you've always dreamed of. I want this at a local arcade soon!"
This discussion has been archived. No new comments can be posted.

  • by MagPulse ( 316 ) on Saturday July 27, 2002 @08:03PM (#3965829)
    Check out this project [uiuc.edu], where you can have a light saber fight with a cheap plastic toy and a webcam. It was on Slashdot [slashdot.org] two years ago.
  • by green pizza ( 159161 ) on Saturday July 27, 2002 @08:11PM (#3965855) Homepage
    Why isn't the equipment wireless, using Bluetooth or something similar for everything to communicate? It's not going to feel very realistic to me if I have a strand of wires attached to me.

    SGI was showing off some examples of what you are describing. Basically, the big iron (clusters, or large machines such as Onyxes) sits in the machine room, while the users have wireless webpads and such elsewhere. It's the only way to tap the power of thousands of processors and dozens of 3D accelerators from a handheld with current technology.

    http://www.sgi.com/visualization/van/ [sgi.com]
  • by Vess V. ( 310830 ) on Saturday July 27, 2002 @09:03PM (#3965992) Homepage
    For a much less crude, albeit less geeky, sword fighting "simulation," check out the SCA [sca.org]. It's a nationwide organization that reenacts all aspects of Medieval life... from armed combat, to chivalrous ceremonies, to arts, crafts, and cooking... but mostly combat. People craft their own custom armor and costumes and make all sorts of weapons from rattan (the stuff those chairs are made of) wrapped with duct tape. Combat ranges from one-on-one bracketed tournaments to full-scale open-field battles with hundreds of warriors in rank and formation under multiple subdivisions of field command, complete with mock castles, authentic battle formations (shield/swordsmen in the front, pikemen and spearmen behind them, archers in the back, etc.), and siege engines! Nothing is scripted -- all combat is as if it were really happening -- except the death. "Deaths" in combat are much like paintball: if you're hit, you're out... more or less honor system. If you're hit in an arm or leg, you lose control of that limb. Looks like a whole ton of fun.

    You can check out some of my favorite pictures of stuff going on here [lglan.net].

  • by zenyu ( 248067 ) on Saturday July 27, 2002 @09:16PM (#3966030)
    It's a parallax barrier system; that's what makes the 3-D work. If it didn't spin, it would have big black stripes and you wouldn't be able to fuse the images. This doesn't help with the low resolution, as someone else suggested; they just used an LED display because it was much cheaper to buy billboard display blocks than lots of custom LCD panels. It is probably easier to drive the low-res display, too. It takes a special display server with four digital video cables to drive IBM's high-res display; it would probably be similar with the large 360-degree stereo view.
  • by JJC ( 96049 ) on Saturday July 27, 2002 @09:47PM (#3966105)
    Thought some people might be interested that there's an (admittedly less sophisticated) sword fighting game from Konami out in arcades which uses a motion-sensor sword controller. It's called Tsurugi (apparently Blade of Honor in the US). Here's some pics and information [the-magicbox.com] from the Magic Box [the-magicbox.com].
  • by SpaceGhost ( 23971 ) on Saturday July 27, 2002 @10:16PM (#3966216)
    This wasn't even close to the coolest thing at SIGGRAPH! Takeo Igarashi's work on predictive interfaces, making easier 2D and 3D drawing tools, was cooler. Digiplasty [asu.edu], a kind of 3D exquisite corpse shown by Stewart and Makai, was cooler. (For that matter, the Studio -- manned by Makai, Stewart, Scott, and many others -- where you could create 2D and 3D art and print in 2D and 3D, was AWESOME; you could work in there for hours, vs. the few seconds of playing with a silly virtual sword.) Scott's Dodecahedron [asu.edu] was a wonderful example of taking something abstract and virtual and making it real and usable. Isa's overview of wearable tech and cyberfashion [psymbiote.org] (she took out the notes, dammit!) was refreshing, if not so new to a frequent slashdotter. (She's a burner too!)

    Some of the mixed reality work [nus.edu.sg] being done at the University of Singapore was really neat. (This is an example of some of the most exciting stuff there. Several researchers showed great work being done in augmented reality, and combining that with some of the reasonably priced wearable and wireless-capable computing, we can see some real headway being made. One researcher even composites a virtual face back onto a fellow participant in the augmented reality environment, masking the HMD, even going so far as to track the eyes and simulate the gaze.) The results of last year's meditation chamber [gatech.edu] research installation were an interesting and possibly VERY useful application of VR technology. W. Bradford Paley's work on applying alternative interfaces to explore other media was fascinating: you can use this LARGE Java tool named TextArc [textarc.org] to graphically examine over 400 literary works. The Web3D Consortium's release of the final working draft of X3D [web3d.org] (with tools) could end up being much more important than the newest video card from ATI.

    Dietmar Offenhuber's work on non-isotropic spaces at wegzeit [futurelab.aec.at] was an interesting approach to mapping and representing real places. Zachary Simpson et al.'s delightfully simple shadow interactivity [mine-control.com] was many times more fun than the virtual swordfight. Fabric.ch's [fabric.ch] knowscape [electroscape.org] was also exciting, both for the viewers and the presenter, as he would find additions from his European counterparts each morning when he logged on to the shared 3D space. Kenneth Huff's beautiful art [itgoesboing.com], made with Maya, was just one example of some wonderful digital work being done. Lastly, Michael J. Lyons' soon-to-be-published research on the aesthetics of Tokyo's Kyoto Gardens was both informative and inspiring. And this is just a TINY PART of what happened there!

    Really, SIGGRAPH was NOT just an exhibition floor with cheesy swag (although the little green LED lights were very nice) and some cool new toys. It was presentation after presentation by researchers, some barely able to speak English, but all excited about their work and open to collaboration. It was hours and hours of animation, some of it (like Allain Escalle's "Le Conte du monde flottant") so stunning as to make you forget where animation ended and life began. Disney's work on replacing one actor's face with another, retaining ALL facial expression, was downright scary. And the Spiderman gag footage, his spidey-suit oddly replaced with a fully reflective silver surface, was, like most of the rest of SIGGRAPH's less entertaining presentations, surely an indication of things to come.
    Take the time to go to SIGGRAPH2002 [siggraph.org] and look around. If you find something interesting, write the author. This is where the new VR and AR comes from - not ATI!
  • by foobar104 ( 206452 ) on Sunday July 28, 2002 @10:39AM (#3967411) Journal
    Well, as I said before, I'm not a graphics programmer-- I'm a different kind of programmer-- so I may get some of these details wrong.

    When you say "32 bits per pixel," you're talking about output pixel depth and format. A pixel in RGBA8 format stores one byte for each of red, green, blue, and alpha, and no other data. Those 32 bits are used by the DACs on the hardware to generate a component RGB video signal to drive your monitor. (Or, as I said before, a digital signal, but I'm not familiar with digital signal formats, so I get a little fuzzy at that point.)

    IR doesn't support RGB8 or RGBA8; it uses either RGB10 (the default), in which 10 bits are used for each of red, green, and blue (I'm not sure of the packing used), RGBA10 (which adds alpha), or RGB12 (12 bits per component).
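    To make those formats concrete, here's a minimal C sketch of packing components into a 32-bit word. The bit order (red in the high bits) is an assumption on my part, not necessarily what any particular hardware does:

```c
#include <assert.h>
#include <stdint.h>

/* RGBA8: one byte per component, 32 bits total.
   Packing order (R in the high byte) is assumed. */
static uint32_t pack_rgba8(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  | (uint32_t)a;
}

/* RGB10: 10 bits per component in the low 30 bits of a 32-bit word,
   with the top two bits unused. Again, the layout is assumed. */
static uint32_t pack_rgb10(uint16_t r, uint16_t g, uint16_t b)
{
    assert(r < 1024 && g < 1024 && b < 1024);
    return ((uint32_t)r << 20) | ((uint32_t)g << 10) | (uint32_t)b;
}
```

    The point of the deeper formats is visible in the ranges: each RGB10 channel runs 0-1023 instead of 0-255, at the cost of dropping alpha from the same 32 bits.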

    On top of the color data, you can have a second color buffer (double buffering, used to eliminate image flicker in real-time animation), stereoscopic buffers (rendering two different images into the same buffer and displaying them through special stereo viewing hardware), auxiliary buffers (used for off-screen rendering in hardware; glCopyPixels() can copy aux buffer pixels into the visible frame buffer), multisample antialiasing, Z-buffering, and so on.

    As I understand it from my vis sim buddies, it's really not that hard to fill up a 256 bit pixel in a real time image generator. They use 1 Kbit and 2 Kbit pixels pretty often.

    Here's an example of a visual available on my Onyx2 at the office:

    Visual ID: 6b depth=24 class=TrueColor
    bufferSize=48 level=0 renderType=rgba doubleBuffer=1 stereo=1
    rgba: redSize=12 greenSize=12 blueSize=12 alphaSize=12
    auxBuffers=1 depthSize=23 stencilSize=8
    accum: redSize=32 greenSize=32 blueSize=32 alphaSize=32
    multiSample=4 multiSampleBuffers=1
    Opaque.

    I wish I could tell you what everything in there means, but most of it is beyond me.
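    Still, just summing the sizes in that dump shows how quickly per-pixel storage grows. A back-of-the-envelope C sketch -- the accounting below is my assumption about how the fields add up, not necessarily how the hardware actually stores them:

```c
#include <assert.h>

/* Rough per-pixel bit budget implied by the visual above. */
static int visual_bits_per_pixel(void)
{
    int color   = 48 * 2 * 2; /* bufferSize=48, doubled for doubleBuffer,
                                 doubled again for stereo */
    int aux     = 48;         /* auxBuffers=1, assumed same depth as color */
    int depth   = 23;         /* depthSize=23 */
    int stencil = 8;          /* stencilSize=8 */
    int accum   = 32 * 4;     /* accum buffer: 32 bits each for RGBA */
    return color + aux + depth + stencil + accum;
}
```

    That already comes to roughly 400 bits per pixel before the 4-sample multisample buffer multiplies some of those planes again -- which is how you get to the kilobit pixels mentioned above.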
