Creating 3D Environments Without Polygons

Igor Hardy writes "I've conducted an interesting interview concerning a new episodic indie adventure game series called Casebook. What's quite uncommon, especially for these kinds of independently developed and published productions, is that they include professionally created FMV — all of the footage is filmed in real locations. Yet what's even more interesting is that the games use an innovative photographic technology which recreates a fully explorable 3D environment through the use of millions of photos instead of building from polygons. The specifics of how it works are explained by Sam Clarkson, the creative director of the series."
  • It's an interesting approach, but it reminds me too much of the old adventure games. I suppose it could work out all right if they make it fit in smoothly.

    It'll be interesting to see what effect it has on performance, though.
    • by blahplusplus ( 757119 ) on Sunday March 01, 2009 @01:24PM (#27031331)

      Real-life graphics are overrated; almost all games bend the rules of reality significantly. The fact is, even in the movies, the 'photorealistic' images we are seeing have usually been doctored to high hell. Almost everything one sees in a movie is made to look ideal, or, if not ideal, a certain unrealistic way that looks visually nice.

      I think his point about 'not being able to connect with' polygon characters is an overstatement; a good case study is Prince of Persia: The Sands of Time.

      The characterization in that game and the back-and-forth banter were excellent. There's more to developing interest in a character than mere appearance and fancy animation; people get the gist of things. I know I was disappointed by what they did to the series and its characters after the first game, with the whole injection of the "badass prince" persona in its sequels, Warrior Within and The Two Thrones. Those games veered well away from the original Prince's personality in significant ways.

      • Isn't Prince of Persia: The Sands of Time itself a sequel?
        • Re: (Score:3, Informative)

          Nope. It's not part of the same series. The way the Prince of Persia franchise has functioned is more like a series of isolated worlds based on the same core ideas.

          The Sands of Time is the first installment in what we might call the "Sands of Time" trilogy, where the 2nd and 3rd games (Warrior Within and The Two Thrones) are set in the same world and continue the same storyline.

          Here's a wiki entry (in case you're interested)

          http://en.wikipedia.org/wiki/Prince_of_Persia [wikipedia.org]


      • Real-life graphics are not photorealistic. Photography is not now, nor will it ever be, capable of delivering a scene as a person sees it. By projecting the 3D image onto a flat page you've distorted the hell out of it.

        The control of aperture, focal length, focus, and exposure is where the photo gets its meaning from. Coincidentally, all of those are necessary in order to get any image at all onto film.

        If you can suggest a way of doing this without distorting it greatly, you're probably eligible for a Nobel Prize.
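
        For what it's worth, here's a minimal sketch (my own illustration, nothing from TFA) of the distortion being talked about: a flat, rectilinear image plane maps equal angular steps to ever larger distances toward the edge of the frame, which is why wide-angle shots look stretched at the corners.

        import math

        def project_angle_to_plane(theta_deg, focal_length=1.0):
            # Distance from the image centre at which a ray theta degrees
            # off-axis lands on a flat (rectilinear) image plane.
            return focal_length * math.tan(math.radians(theta_deg))

        prev = 0.0
        for theta in range(0, 90, 10):          # 0..80 degrees off-axis
            x = project_angle_to_plane(theta)
            print(f"{theta:2d} deg -> {x:6.3f} (step {x - prev:6.3f})")
            prev = x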

        • by GrpA ( 691294 )

          I've seen this done differently.

          An image created as millions of radials. The image itself was on the computer but it was optically captured.

          Also, the distance from the origin to each point was captured, so each pixel had a distance from the origin, a vector and a color.

          The result was a 3-D image that could appear exactly as someone would view it. You could even adjust for optical properties of photographic equipment (including our eyes).

          I suppose you could even project it back onto a curved surface if you wanted to.
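
          As a rough illustration of that representation (my own sketch, with made-up names), each sample is just a unit direction from the capture origin, a distance along it, and a colour; turning one into a world-space point is a single multiply-add, after which it can be treated like any other point cloud.

          from dataclasses import dataclass

          @dataclass
          class RadialSample:
              direction: tuple   # unit vector (dx, dy, dz) from the capture origin
              distance: float    # distance along that direction
              color: tuple       # (r, g, b)

          def to_world_point(sample, origin=(0.0, 0.0, 0.0)):
              # World-space position of one radial sample.
              ox, oy, oz = origin
              dx, dy, dz = sample.direction
              d = sample.distance
              return (ox + dx * d, oy + dy * d, oz + dz * d)

          # Example: a red sample three units straight ahead of the capture point
          s = RadialSample(direction=(0.0, 0.0, 1.0), distance=3.0, color=(255, 0, 0))
          print(to_world_point(s))   # -> (0.0, 0.0, 3.0)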

      • Re: (Score:3, Interesting)

        by Jekler ( 626699 )
        As a counterpoint, everything we see in movies is altered to make things appear more real, not necessarily ideal. Video of real moments tends to look unrealistic because the camera lens doesn't capture contextual clues that you would get if you were really in the moment. The way we set up a movie set is an attempt to compensate for the disconnect of watching a series of events happen in a scenario you can't touch, smell, or taste, where your field of vision is restricted to about 90 degrees. You can't turn your head to take in the rest of the scene.
  • It's just Photosynth, but with an effort made to hide the individual photos, all while turning it into a game?
    • by Animaether ( 411575 ) on Sunday March 01, 2009 @01:57PM (#27031621) Journal

      Except that you get a smooth transition from one VR to the other.

      A QuickTime VR - for those who have been living under a rock or just don't care - is a small file with a graphical representation of, typically, the whole environment. So 360 degrees around and 180 degrees up/down. Within a QuickTime VR viewer you can then look in any direction of that environment, zoom in/out, etc.

      In some QuickTime VRs (and much better in older PanoTools-based panoramas, or even SmoothMove, etc.), you can click on a hotlink and it takes you to another QuickTime VR captured from that position/area (e.g. click on a door and you get a VR of the next room).

      This is much the same technology as far as that goes, except that instead of clicking (presumably) you move around using whatever you'd typically use to move around with, such as the keyboard.

      The nice part is where they blend smoothly between the panoramas. Sure, they have to take a LOT of them to begin with (hence the camera rig off a grid in the ceiling, probably something like 1 pano every 10 inches or whatever; from the looks of it only in a 2D plane, but 3D should be doable), but even with that you need some nice motion estimation to blend between the two panos as depicted on the screen.
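
      To make that concrete, here's a rough sketch (my own guess at the mechanics, not the game's actual code) of an equirectangular-panorama lookup plus a naive cross-fade between two neighbouring capture points; the real thing presumably uses motion-compensated blending rather than a plain fade.

      import math

      def sample_equirect(pano, yaw, pitch):
          # Pixel of 'pano' (list of rows of (r, g, b)) seen when looking along
          # (yaw, pitch) in radians; yaw in [-pi, pi], pitch in [-pi/2, pi/2].
          h, w = len(pano), len(pano[0])
          u = (yaw + math.pi) / (2.0 * math.pi)      # 0..1 across the image
          v = (math.pi / 2.0 - pitch) / math.pi      # 0..1 top to bottom
          return pano[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

      def blend_panos(pano_a, pano_b, t, yaw, pitch):
          # Naive cross-fade between two neighbouring capture points,
          # with t = 0 at A and t = 1 at B.
          ra, ga, ba = sample_equirect(pano_a, yaw, pitch)
          rb, gb, bb = sample_equirect(pano_b, yaw, pitch)
          return (round(ra + (rb - ra) * t),
                  round(ga + (gb - ga) * t),
                  round(ba + (bb - ba) * t))

      # Tiny demo: two 2x4 single-colour panoramas, player halfway between them
      A = [[(255, 0, 0)] * 4] * 2
      B = [[(0, 0, 255)] * 4] * 2
      print(blend_panos(A, B, 0.5, yaw=0.0, pitch=0.0))   # -> (128, 0, 128)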

      However, there are limitations that they point out...
      1. they can't blend in live actors -while- you move. That's an organisational limitation - you'd have to make the actor re-do their steps for every single pano vantage point. Ouch. You could mount a whole grid of cameras, but that's gonna be insanely expensive (not just in material costs but rigging that up for each room as well). Probably their best bet is to 3D digitize the actor and blend that into their panos using standard 3D compositing software.
      2. they're limited to a 2D plane at the moment. As I mentioned, this could be made 3D; it just means it will take a LOT more time to create.
      3. they're limited by storage media; granted, they're talking about their hope for a DVD release, so I guess they're stuck on CD for now, but even a DVD or Blu-ray would fill up quickly if it were a more involved game than it currently looks like.

      • Re: (Score:3, Interesting)

        by BikeHelmet ( 1437881 )

        http://www.gametrailers.com/player/usermovies/178177.html [gametrailers.com]

        Looks like they've done an okay job on the smooth transitions part.

        If only they had scheduled release for a date other than April 1st!

        http://www.youtube.com/watch?v=xF4zYu1nOMw [youtube.com]

        It also appears they're doing some very fancy processing to allow limited alternate viewing angles on scenes with actors. I imagine if they allow the angles to differ from the source by too much, it'd look distorted.

        The youtube vid seems to go over a bunch of the "mini-games" you'll be playing.

        • Re: (Score:3, Interesting)

          by blincoln ( 592401 )

          It also appears they're doing some very fancy processing to allow limited alternate viewing angles on scenes with actors. I imagine if they allow the angles to differ from the source by too much, it'd look distorted.

          They probably filmed the live-action sequences with the same extreme fisheye lens(es) that they used for the static crime-scene filming. So you would be able to "look around" a bit, but not change the position of the camera or rotate the POV too far in any one direction.

          That sort of thing
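
          A tiny sketch of that limitation (the field-of-view numbers are my own illustrative guesses, not figures from the game): if the footage comes from a single fixed wide/fisheye shot, the viewer can re-aim inside the captured coverage but never beyond it.

          def clamp_view(yaw_deg, pitch_deg, half_hfov=100.0, half_vfov=60.0):
              # Keep the requested view direction inside the footage's coverage.
              yaw = max(-half_hfov, min(half_hfov, yaw_deg))
              pitch = max(-half_vfov, min(half_vfov, pitch_deg))
              return yaw, pitch

          print(clamp_view(130.0, 10.0))   # -> (100.0, 10.0): can't rotate past the footage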

    • It's just Photosynth, but with an effort made to hide the individual photos, all while turning it into a game?

      Yeah, it's not even 3D. It's all 2D, and it uses smoke and mirrors to give the player the illusion of a real-life 3D world. I watched some gameplay video, and the player really only has about 120 degrees of camera movement. So, just to make things clear, this isn't a game with a 3D environment. Let's drop the whole "Creating 3D Environments Without Polygons" thing. Well, I guess you could use NURBS... if you wanted.

      • Erm, isn't "the illusion of a real-life 3D world" what all 3D graphics is about? Anyway, I don't know what videos you have watched, but this IS a game with a full 3D environment, which is explorable the same way as locations done in traditional 3D. Sure, the technology has lots of limitations; the biggest, in my opinion, is not being able to capture things in motion (those have to be done the traditional way), but I find its possibilities very interesting nevertheless.
  • You mean... (Score:2, Informative)

    by TheSHAD0W ( 258774 )

    Something like this [wikipedia.org]?

    That's so eighties...

  • Buzzwords (Score:3, Insightful)

    by MostAwesomeDude ( 980382 ) on Sunday March 01, 2009 @01:57PM (#27031623) Homepage

    This actually sounds like they are generating polygon-composed scenes from photographs. Cool, yes, but not actually without the traditional rendering method.

    Of course, yes, it's possible to do this entirely with photos and without any kind of 3D rendering at all, but in that case, can it be accelerated? Will it move at a decent speed?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This is called photogrammetry, and was used to create CG environments in the Matrix trilogy, for one.

      • Ah, that technology. Well then... good luck looking up and down. What? Oh, never mind...
    • Who says you need 3D rendering to create a two-dimensional image out of mathematical data and a databass filled with coordinates and images with RGB data?
      • Re: (Score:3, Funny)

        Who says you need 3D rendering to create a two-dimensional image out of mathematical data and a databass filled with coordinates and images with RGB data?

        Good point. We'd never be able to have fishing games without databass.

        Seriously, I meant that if it's not rendered using 3D->2D polygon rasterization, how much hardware acceleration would it be able to use? Can it still be translated into OGL/DX expressions, or must it all be done in software?

        • Re: (Score:3, Informative)

          by Shados ( 741919 )

          To oversimplify things, these scenes are just prerendered videos with more or less every possible position in a database. So no matter where you are, you're seeing a prerendered "still" picture. They just select and display the pictures fast enough that it looks like it's 3D. So it doesn't need hardware acceleration for anything beyond buffering the images, which are probably rendered as textures on a flat plane.
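
          An oversimplified sketch of that idea (grid spacing and file names are made up for illustration): treat the scene as a grid of prerendered stills and, every frame, pick whichever still was captured nearest to the player's current position.

          GRID_SPACING = 0.25   # metres between capture points (assumed)

          def nearest_still(player_x, player_y):
              # (column, row) index of the capture point nearest the player.
              return round(player_x / GRID_SPACING), round(player_y / GRID_SPACING)

          def still_filename(col, row):
              return f"pano_{col:03d}_{row:03d}.jpg"   # hypothetical naming scheme

          # A player standing at (1.1, 0.4) sees the still captured at grid (4, 2)
          print(still_filename(*nearest_still(1.1, 0.4)))   # -> pano_004_002.jpg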

        • Can it still be translated into OGL/DX expressions, or must it all be done in software?

          Well, since we are not talking about polygons and triangles, OpenGL and Direct3D can't render it, duh. You of all people should know that.

          And seriously, what makes you think you need polygons to create a two-dimensional image of a three-dimensional world?

          • Actually, as the above reply demonstrated, this could be accelerated using a video overlay or any equivalent 3D hardware, so yes, it could be done with OGL.

            And I'm just used to seeing 3D creations constructed with rasterizers, because the only alternative that actually seems feasible is ray-tracing, and everything else falls into one of those two categories. Voxels are rasterized, 2.5D is rasterized, and this is rasterized as well.

             I was talking about ray tracing: coordinates and mathematical functions -> quadric models; textures with RGB values -> shading and HDR data for post-processing. I wasn't referring to TFA, just pointing out that you can construct a 2D image without any need for triangles and polygons.
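
             Here's a bare-bones illustration of that point (my own toy example): a sphere is a quadric, and casting camera rays against it produces a 2D image with no triangles or polygons anywhere in the pipeline.

             import math

             def hit_sphere(ox, oy, oz, dx, dy, dz, radius):
                 # Smallest positive t where origin + t*dir meets the sphere, else None.
                 a = dx * dx + dy * dy + dz * dz
                 b = 2.0 * (ox * dx + oy * dy + oz * dz)
                 c = ox * ox + oy * oy + oz * oz - radius * radius
                 disc = b * b - 4.0 * a * c
                 if disc < 0.0:
                     return None
                 t = (-b - math.sqrt(disc)) / (2.0 * a)
                 return t if t > 0.0 else None

             WIDTH, HEIGHT = 40, 20
             for row in range(HEIGHT):
                 line = ""
                 for col in range(WIDTH):
                     # Camera at z = -3 looking toward +z; unit sphere at the origin.
                     dx = (col / WIDTH - 0.5) * 2.0
                     dy = (0.5 - row / HEIGHT) * 2.0
                     line += "#" if hit_sphere(0.0, 0.0, -3.0, dx, dy, 1.0, 1.0) else "."
                 print(line)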
  • Polygons (Score:4, Insightful)

    by sunderland56 ( 621843 ) on Sunday March 01, 2009 @02:00PM (#27031649)
    A "photograph" is just a textured rectangle - i.e. a textured polygon. So the environment is created by the blending of many textured polygons. Sounds awfully familiar to me.

    Sure, they are rectangles instead of triangles; and sure, they aren't arranged in a mesh. But this looks to me like the triumph of a marketing press release over engineering reality.
    • I'm not particularly technically minded, but it did cross my mind that the title I've chosen might be considered a bit faulty. Still, everything that is shown on a screen is a rectangle in some way. I could have titled this article "3D Environments with One Polygon per Frame", but everyone would immediately assume it was about some simple 2D trick.
  • Okay, so these voxels - with current-generation technology - are represented as cubes, which of course are 12 tri-polies, so it's not entirely -without- polygons... but at least it's not based on polygons, and it lets you do some pretty cool stuff, such as truly, fully destructible environments. No, none of that "we ran a script on all objects (except for those we don't want you to be able to destroy) that pre-fragments them, and we call the Havok engine on the object if the damage model reaches a certain level"
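
    A small sketch of that "truly destructible" idea (my own toy example; grid size and blast radius are arbitrary): with the world stored as a dense voxel grid, blowing a hole is just clearing every voxel within a radius, with no pre-fragmented meshes or physics-engine special cases.

    GRID = 32
    solid = [[[True] * GRID for _ in range(GRID)] for _ in range(GRID)]

    def carve_sphere(cx, cy, cz, radius):
        # Remove every voxel whose centre lies within 'radius' of (cx, cy, cz).
        r2 = radius * radius
        for x in range(GRID):
            for y in range(GRID):
                for z in range(GRID):
                    if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r2:
                        solid[x][y][z] = False

    carve_sphere(16, 16, 16, 5)
    remaining = sum(v for plane in solid for row in plane for v in row)
    print(f"{GRID ** 3 - remaining} voxels destroyed")   # roughly (4/3)*pi*5^3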

    • You mean that wise-asses won't have to subject themselves to the artificial devices of game designers who once worked under limitations they no longer work under, yet still design those limitations into the new games, with the new tech, as if they didn't have the new tech at all?

      Those the wise-asses you're talking about? ;p

    • Voxels don't necessarily need to be rendered as cubes. Algorithmically it is possible to draw a cube on screen without really using polygons in the conventional coding sense. There is no need to mathematically establish a surface between 3 or more vertices in order to UV-map your texture or Gouraud-shade it, etc.

      A voxel can be rendered as a point sprite, a square, a circle, a single pixel (with some kind of interpolation), just about whatever floats your boat. Voxels really are rendering without polygons.
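
      A bare-bones splat along those lines (purely illustrative; camera parameters are made up): project each voxel centre with a pinhole camera and write a single "pixel" per voxel, using a depth buffer instead of any polygon rasterisation.

      W, H, FOCAL = 60, 30, 25.0
      depth = [[float("inf")] * W for _ in range(H)]
      frame = [[" "] * W for _ in range(H)]

      def splat(x, y, z, shade):
          # Project a voxel centre at camera-space (x, y, z), z > 0, to the screen.
          if z <= 0.0:
              return
          sx = int(W / 2 + FOCAL * x / z)
          sy = int(H / 2 - FOCAL * y / z)
          if 0 <= sx < W and 0 <= sy < H and z < depth[sy][sx]:
              depth[sy][sx] = z
              frame[sy][sx] = shade

      # A small wall of voxels ten units in front of the camera
      for vx in range(-5, 6):
          for vy in range(-3, 4):
              splat(vx, vy, 10.0, "#")

      print("\n".join("".join(row) for row in frame))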
  • Quite impressive (Score:4, Informative)

    by janwedekind ( 778872 ) on Sunday March 01, 2009 @03:08PM (#27032193) Homepage
    Quite impressive [youtube.com]. Not much information on how it works, though.
  • www.casebookthegame.com
  • So the lighting is captured by the camera, not computed by an algorithm. How, then, do you *remove* lighting to cast shadows, or change the lighting when light-emitting objects move?

    This seems like a step backwards from truly immersive worlds, where one can interact with the world and it interacts back. My prediction is that this line of research will lead to some cool proof-of-concept games (Under a Killing Moon is still one of my favorite games of all time), but will ultimately be a dead end. We have the technology to do

  • Wouldn't it make more sense to base something on a volume particle system? You could start with only a few elemental particles... say, three (you could go smaller, but we're trying to keep it simple)... and make up some rules about how they combine. Make them up into, oh, say, 117 or so "elements", which you can then compound according to other rules. Each step in the chain can increase complexity.

    Naw, it would never work.

  • People in the above comments are talking about photogrammetry and voxels. That is not the technology referred to in this article. They specifically mention having to compress the photos to a great extent to get the game under 1 GB. I am 99% sure that what they are doing is simply storing a grid of 360-degree 'fisheye' photos and then interpolating between them based on the camera position using some clever interpolation method. The technique is pretty obvious, so I am guessing the technology they are so proud of
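
    A sketch of the kind of interpolation being guessed at here (the grid spacing is my own assumption): find the four captured panoramas surrounding the player and compute bilinear blend weights from the player's position.

    import math

    SPACING = 0.25   # metres between capture points (assumed)

    def blend_weights(px, py):
        # [(col, row, weight), ...] for the four captures surrounding the player.
        gx, gy = px / SPACING, py / SPACING
        c0, r0 = math.floor(gx), math.floor(gy)
        fx, fy = gx - c0, gy - r0
        return [(c0,     r0,     (1 - fx) * (1 - fy)),
                (c0 + 1, r0,     fx * (1 - fy)),
                (c0,     r0 + 1, (1 - fx) * fy),
                (c0 + 1, r0 + 1, fx * fy)]

    # A player roughly a third of the way between capture columns 4 and 5
    for col, row, w in blend_weights(1.08, 0.35):
        print(f"pano({col},{row}) weight {w:.2f}")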
  • I couldn't plow through all the spam about the actors and characters and storyline, so cut to the chase... is this just VRML-style backdrops like Myst and Riven again?

  • This is nothing new. That technique was standard before the polygon approach became feasible.
