
 




Making Graphics In Games '100,000 Times' Better?

trawg writes "A small Australian software company — backed by almost AUD$2 million in government assistance — is claiming they've developed a new technology which is '100,000 times better' for computer game graphics. It's not clear what exactly is getting multiplied, but they apparently 'make everything out of tiny little atoms instead of flat panels.' They've posted a video to YouTube showing their new tech, apparently running at 20 FPS in software. It's (very) light on the technical details, and extraordinary claims require extraordinary evidence, but they say an SDK is due in a few months — so stay tuned for more." John Carmack had this to say about the company's claims: "No chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Most games recently just kind of suck and rest upon the shoulders of innovative graphics. This does not make me hopeful for the future of gaming.

    However, if this technology isn't a scam like I suspect, I still welcome it. It could change the face of computer graphics as we currently know it forever.

    • by Monchanger ( 637670 ) on Tuesday August 02, 2011 @01:54PM (#36963138) Journal

      Most games recently just kind of suck and rest upon the shoulders of innovative graphics. This does not make me hopeful for the future of gaming.

      Generally speaking, I'm in agreement on the suck part, but hold on a second there with the conclusion. If this technology is real and games do see a massive jump forward in graphics, wouldn't that allow for an end to each successive title needing to simply out-polygon the competition? Isn't it equally likely this would force a paradigm shift, where, if nothing else, art (real art) would supplant technical graphics specs?

  • by bky1701 ( 979071 ) on Tuesday August 02, 2011 @02:39AM (#36956838) Homepage
    The "goal" of crazy people who don't actually understand computers has always been to make graphics (and sometimes logic) based on "atoms"/particles/etc. The problem is not that it can't be done - anyone who has ever used a 3D modeling program with fluid dynamics has that power right in front of them - the problem is that it can't realistically be done in real time with our technology. Hell, it can't realistically be done pre-rendered without a supercomputer.

    So sure, it could make it '100,000 times better.' No one is really debating that, and it isn't news to anyone who knows the first thing about graphics. What would be news would be hardware that better supported it. Somehow, I don't think that's what we have here. Notice the lack of specifics as to what KIND of graphics they seek to improve.

    Looks like the Australians just got scammed for 2 million.
    • by bky1701 ( 979071 )
      Addendum. I watched the video (OK, skimmed it). As far as particles go, this doesn't look like it is actually intended to be a full particle system. Rather, some kind of hybrid, like particle effects are done now. So sure, that could be something new - but still, their claims are formed in a very misleading way, given this.

      I did however notice an extremely questionable statement which makes me seriously suspect this is a scam.

      5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high.
      • by snowgirl ( 978879 ) on Tuesday August 02, 2011 @04:31AM (#36957350) Journal

        5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high. This is completely false. Game developers have scanned objects for a long time, and even more often, made extremely high resolution models on purpose. The models are then lowered in resolution down to a usable form, and the differences between the low-res and high-res models are compiled into a normal (bump) map. This is how almost all first person game textures are made these days. (The benefits of this process mainly concern the greater efficiency of textures in holding depth data compared to polys, especially at varying distances where complex geometry results in extreme aliasing, and the fact that high-poly models cause serious issues with more advanced lighting schemes.) To make the claim this guy just did is highly suspect.
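        As a rough sketch of the bake described above (a toy height function stands in for the real per-texel depth difference between the high-res and low-res meshes; real tools like xNormal or Blender's baker do this per texel):

```python
import math

def height_to_normal(height, x, y, scale=1.0):
    """Approximate the surface normal at (x, y) of a height field via
    central differences -- the kind of extra detail a baked normal map
    encodes so the low-poly mesh doesn't need the geometry."""
    dx = (height(x + 1, y) - height(x - 1, y)) * 0.5 * scale
    dy = (height(x, y + 1) - height(x, y - 1)) * 0.5 * scale
    # The normal is perpendicular to both tangents (1, 0, dx) and (0, 1, dy).
    nx, ny, nz = -dx, -dy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A flat field yields the straight-up normal; a slope tilts it.
flat = height_to_normal(lambda x, y: 0.0, 5, 5)
sloped = height_to_normal(lambda x, y: float(x), 5, 5)
```

        The per-texel normals are then packed into an RGB texture, which is what the games the commenter mentions actually ship.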

        So, what you're saying is that people scan real world objects, but don't actually use those models in games... so... once one accounts for market speak "you can't use a scan of a real-world object in a game [without dropping enough detail so that you're not using the original scan]."

        • by EdZ ( 755139 )
          No, they don't use the raw point cloud models, they perform some processing first. And I doubt these guys use unprocessed point clouds for their engine either, so it's a ludicrous claim.
          • No, they don't use the raw point cloud models, they perform some processing first. And I doubt these guys use unprocessed point clouds for their engine either, so it's a ludicrous claim.

            Their specific claim is that they are using point clouds. My thoughts are that if you strategically collapse points of the model (like a dynamic LOD sort of thing) that you could feasibly accomplish something similar to what they're doing.

            I mean, seriously, they're talking about large point fields for grains of sand... if that were true, then why wouldn't they be able to use raw point clouds from scanned objects?

    • by iinlane ( 948356 )

      Seen this video a few years ago - definitely a scam.

      • by dintech ( 998802 )

        Seen this video a few years ago - definitely a scam.

        How is that possible? Crysis 2, featured in the video, was only released this year. I'm not saying this ISN'T a scam or that you might have seen a similar video, but you did not see THIS video a few years ago.

        • by data2 ( 1382587 )

          Parts of the video were awfully familiar, especially in the beginning. If I remember correctly, last time around they produced this with functions for curves etc. and created models with these curves, allowing "unlimited" detail.

    • by gutnor ( 872759 )
      They claim they can do in realtime what you say is impossible. Now, if you don't actually have any technical argument, I'll take the view of an expert: John Carmack does not think it is a scam. That said, there are always big challenges in going from a tech demo to a finished product, and they are unlikely to make it, especially in the current game market, which is already struggling to create content.
      • Re: (Score:3, Informative)

        They claim they can do in realtime what you say is impossible. Now, if you don't actually have any technical argument, I'll take the view of an expert: John Carmack does not think it is a scam. That said, there are always big challenges in going from a tech demo to a finished product, and they are unlikely to make it, especially in the current game market, which is already struggling to create content.

        Here, kid, as an actual graphics programmer, I'll translate Carmack's producer- and marketing-approved Twitter into plain, run-of-the-mill English for the simple-minded:

        Statement: "No chance of a game on current gen systems, but maybe several years from now."

        Translation: "No chance of a game on current-gen systems, nor what will be the next generation, as Wii U devkits have already been seeded to developers and it'd be foolish to think that Sony or Microsoft are very far behind. Insofar as nobody, not

        • by gutnor ( 872759 )
          So no real difference from the raytracing demos presented by Intel. That is still a level above "scam" and "impossible, duh".

          Almost every day there is an article about a revolutionary new material, source of energy, cure for cancer, ... the vast majority of them never make it to an actual product for lots of different reasons (doesn't scale, too weird, politics, bad timing, fashion, $$$, 90%-there syndrome, overoptimism, fatal flaws). That is still interesting, certainly more interesting than the 5 articles there will

          • It would be interesting (to me, as a graphics programmer in the games industry), if they stopped bullshitting. The claims in that video, when written down, are absolutely absurd. 20,000Gb of RAM. That's right. 20,000Gb of RAM (at least!) to store the number of 'atoms' they claim they are displaying. Now, that simply can't be true - so they must either have left out a hell of a lot of information (such as, we are drawing the same object 20,000,000 times, or we are throwing everything at some procedural geom
            • by smallfries ( 601545 ) on Tuesday August 02, 2011 @06:02AM (#36957750) Homepage

              The idea that they've come up with a new LoD algorithm for point cloud data is reasonable. It would then allow their ridiculous claims to be (technically) true about the size of datasets. But, if everything is held procedurally then it must have a low complexity description in order to compress that vast dataset (say 20,000Gb) into something that can be processed. Low-complexity descriptions tend to exist for highly regular geometry, and if you look at their demo they appear to have very high detail objects in a very coarse, regular and repetitive mesh to the extent that when they zoom out it looks like Minecraft.

              No need for it to be a hoax, I'm guessing that they can make horrific looking (regular, craply lit, static) graphics as they claim in the video with the projected datasizes they refer to. What they gloss over is that it can't just be translated onto a real level design and scaled up to the level of complexity that you see in real level design.

              It would be kind of like me saying "hey, I can draw circles at an infinite level of detail, equivalent to trillions of line segments. Can't draw more complex shapes like faces yet though....."

            • There's an awful lot of instancing in the scene...if you calculate the total when the instances are baked then you can get those numbers.

              This is a hoax, or an exceptionally ill-advised way to generate developer interest in their middleware

              Yep. My prediction is that they're just after investor money, and this will never be more than a tech demo.
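              The instancing arithmetic is easy to check with made-up but plausible figures (neither number below comes from Euclideon's video):

```python
# Hypothetical figures -- assumptions for illustration only.
unique_points_per_object = 1_000_000   # one detailed scanned object
instances_in_scene = 21_000            # copies placed around the island
# The number you can claim once instances are "baked" into a total:
headline_total = unique_points_per_object * instances_in_scene
# What actually needs to sit in RAM: one copy plus a transform per instance.
stored_unique = unique_points_per_object
```

              So a billions-of-atoms headline can be technically true while the working set stays tiny, which is exactly the objection being raised.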

            • It would be interesting (to me, as a graphics programmer in the games industry), if they stopped bullshitting. The claims in that video, when written down, are absolutely absurd. 20,000Gb of RAM. That's right. 20,000Gb of RAM (at least!) to store the number of 'atoms' they claim they are displaying.

              What figures are you using for that calculation?

              Now, that simply can't be true - so they must either have left out a hell of a lot of information (such as, we are drawing the same object 20,000,000 times, or we are throwing everything at some procedural geometry shader)

              Well of course they are drawing each object many times. So do all the polygon based games. It would be stupid not to. No one would store the full unique geometry for each blade of grass.

              Simple tool to magically convert polygons (that we've been lambasting for the last 5mins) into an infinite detail point cloud (thereby adding detail to the mesh that was not there to begin with? WTF?)

              You're ranting. The polygons they are lambasting are, for example, tree trunks with 6-12 sides. If instead they take a model with a very high polygon count, higher than would be used in a polygon based game, and convert it to their "point cloud" system, that would be quite

          • by bky1701 ( 979071 )
            Raytracing is quite a different situation. The issue with realtime raytracing is that it can make the lighting worse; unless you use a large number of rays (which can become prohibitive, especially with other lighting methods at work, which can complicate ray paths), it looks splotchy and staticky. It is far from impossible given current technology - you can forcibly enable it in some games, even - just not really that helpful when it comes to making a scene look better.

            These people are playing up a compl
    • by hitmark ( 640295 )

      I find myself wondering if what they are doing is using voxels to step down the detail level on distant objects while stepping it up on near objects, and not even bothering with the objects out of view.

    • Actually, it was being done realistically in near real-time over 10 years ago, using splatting based techniques (see surfels and QSplat http://graphics.stanford.edu/software/qsplat/ [stanford.edu]). These systems weren't really suitable or fast enough for games at the time, but 10 years is a long time for hardware and software to progress.

    • "Looks like the Australians just got scammed for 2 million."

      Worse than that, he did this over a year ago [popsci.com]. Here's his video from February 2010 [youtube.com] which is basically the same as the July 2011 version [youtube.com].

      His linkedin goes into a bit more detail: [linkedin.com] "The Unlimited Detail system consists of a compiler that takes point cloud data and converts it in to a compressed format, the engine is then capable of accessing this data in such a way that it only accesses the pixels needed on screen and ignores the others generati
      • He's a con artist who can program graphics (or get other people to while he takes all the credit).

    • Perhaps they've invented a compiler that's so smart that it can deal with code handling single atoms by cleverly dividing them into groups or something.

      As an analogy: fluid mechanics also does not describe one atom at a time, yet the equations are valid on a larger scale.

    • This looks a lot like sparse voxel octrees [wikimedia.org]. As a concept, SVO is nothing new at this point, and id has been considering using it as part of their id Tech 6 engine.

      A sparse voxel octree is basically a hierarchical structure for points in 3D space. The advantage of using a hierarchical structure is that you can stop looking at any time, and so zooming works very well: you just traverse the tree until you get so far down that further detail won't be visible, then you render. If the player moves closer, that
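      A minimal sketch of that stop-early traversal (the single fixed child stands in for the ray/frustum child-selection a real renderer would do):

```python
class OctreeNode:
    """Minimal sparse-voxel-octree node: an averaged color plus up to
    eight children (empty list for a leaf)."""
    def __init__(self, color, children=None):
        self.color = color
        self.children = children or []

def sample(node, node_size, pixel_size):
    """Descend only while the node still covers more than one pixel;
    return the node's averaged color once further detail would be
    invisible -- the 'you can stop looking at any time' property."""
    depth = 0
    while node.children and node_size > pixel_size:
        node = node.children[0]  # real traversal picks the child the ray hits
        node_size /= 2.0         # each level halves the node's extent
        depth += 1
    return node.color, depth

root = OctreeNode("coarse", [OctreeNode("mid", [OctreeNode("fine")])])
far_sample = sample(root, 8.0, 16.0)   # far away: coarse color, no descent
near_sample = sample(root, 8.0, 1.0)   # up close: recurse to the leaf
```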
    • I do get your skepticism...

      Thing is though, some members of the demo scene have been doing really impressive things with particle-based rendering systems over the past few years.

      To see what can be done, you should check out the demos "Ceasefire (All falls down)" and "Numb Res" by CNCD & Fairlight. They both make very heavy use of particle-based rendering engines - the latter features a rather long section of real-time particle-based computational fluid dynamics simulation - the kind of stuff that one te

    • I remember when Wolfenstein 3D came out. It seemed unbelievable that a world of textured polygons was being manipulated in real time on a 4.77 MHz PC. We'd seen nothing like it!

      Later the details of how Carmack had done it came out. This wasn't the traditional matrix manipulation of 3D points, hidden surface removal, plotting of textures and the painter's algorithm we were used to. It was 2D raycasting from a simplified data structure. Each ray cast allowed the plotting of 128 pixels. Only 280 rays had to b

      • by bky1701 ( 979071 )
        "I certainly don't see anything here that is so impossible he must be a scammer."

        That's actually my biggest issue with it. I don't see anything in the video *at all* that can't be done with current technology, some good hardware, and very clean programming. The claims they are making do not seem to align with what is being shown, and indeed, the claims seem to be somewhat self-contradictory.

        I don't think that a massive revolution in rendering is impossible (although it would almost certainly require n
  • Now is not the time for voxels. At current hardware specs, the computing time:image quality tradeoff comes out so far in favour of rasterizing (or whatever) polygons it's insane.

    These guys had a tech demo floating around a few years ago, and to my eyes, not a lot has changed.

    • Re:Voxels (Score:5, Interesting)

      by XDirtypunkX ( 1290358 ) on Tuesday August 02, 2011 @03:08AM (#36957012)

      This is probably not actually what is generally called "voxels", but a hierarchical point cloud system consisting of points on the surface of objects, rendered via some kind of weighted splatting mechanism. There was a lot of research into such systems for visualising some of the very high resolution point clouds coming out of digital laser scanning systems (for example QSplat, which came out of the Digital Michelangelo project http://graphics.stanford.edu/software/qsplat/ [stanford.edu]).

  • They've definitely proved they're capable of:
    - Hiring the most annoying voice-over guy.
    - Overuse of the word 'unlimited.'

    Thankfully they have UNLIMITED POWER at their disposal to prove any further developments.
  • by trawg ( 308495 ) on Tuesday August 02, 2011 @02:47AM (#36956886) Homepage

    (I submitted this article) I fired off a request for more information from the developers about this and they got back to me indicating they're willing to answer some more questions, so I've summarised some of the main ones that I've seen around the place.

    We're based in the same city as this company (Brisbane, Australia) so I'm hoping that I might be able to actually go out there and eyeball this stuff myself to get a feel for it (and possibly drag along a graphics programmer to do some grilling).

    • by Wizarth ( 785742 )

      I too am interested in hearing more about this.

      Oddly enough, I'm working for a company that's working on a modelling tool for "infinite detail", with an aim of 3D fabrication. But it's not voxels, like the engine here shows.

    • by Suiggy ( 1544213 ) on Tuesday August 02, 2011 @05:52AM (#36957702)

      Don't bother, they're taking credit for other people's work. You want to know how their technology works? Here's a couple of research papers:

      http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com]
      http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr]

      Want some source code? http://code.google.com/p/efficient-sparse-voxel-octrees/ [google.com]
      Want a video? http://www.youtube.com/watch?v=HScYuRhgEJw [youtube.com]

      Euclideon is just spinning up the marketing bullshit and trying to make a profit off of it all. They don't even have good lighting, they're just doing forward shading for each voxel ray-cast intersection using diffuse lighting with a single global point light source. And they haven't demonstrated robust animation yet.

      Guess what, it is possible to animate voxel octrees, but Euclideon never came up with the method either. Some researcher in Germany came up with a working solution for his bachelor's thesis: http://www.youtube.com/watch?v=Tl6PE_n6zTk [youtube.com]

      • by RingDev ( 879105 )

        Except that this isn't Voxel. Euclideon is marketing for Unlimited Tech now. Go do a search for Unlimited graphics engine, they've been showing off their work for the last 2 years. The only thing new here is their marketing partner.

        -Rick

      • by gmueckl ( 950314 ) on Tuesday August 02, 2011 @09:21AM (#36959412)

        The technology is rather related to point cloud rendering which is about 10 years old now. This is the most clever implementation of point cloud rendering that I am aware of and it is pretty cool: http://graphics.stanford.edu/software/qsplat/ [stanford.edu] It renders amazingly fast.

        It has its share of problems, including requiring a lot of precomputation, and as far as I know no one has been able to do proper antialiasing on point clouds. Texture interpolation in the traditional sense has also not been solved to my knowledge, because with these point clouds all you can do is give individual points colors, so you will always have hard edges between points. Those two combined result in a lot of visual noise that destroys the illusion in the demo videos that I have seen so far.

  • The video is new, but the demo of the tech certainly isn't. I saw this years ago.
    • by ctid ( 449118 )

      The commentary in the video said that they'd demonstrated an early version a year ago and then disappeared. They're back now that they've got something new to show.

  • by ledow ( 319597 )

    So that means 100,000 times more work to make everything that detailed?

    Or else everyone who makes games uses a standard library of objects to cut/paste and so the games end up looking the same anyway?

    This is voxels all over again, in a modern iteration. Yeah, it looks cool, but it increases your development time and isn't anywhere near as fast as other techniques and all those graphical "shortcuts" that standard 3D cards do are done for a reason - nobody *really* notices or cares so long as the game runs s

    • I see this as the equivalent of FLAC vs MP3 - yeah, sure, it definitely contains more information, but at the cost of storage size, and, in the end, 99% of people won't actually care.

      But FLAC sounds so much better than 512Kib/s mp3 with my $15 headphones and on-board soundcard!

  • Aren't they simply calling pixels atoms, and rasterizing images, as opposed to vectorizing them? I fail to see any novel technology. I'm happy to listen though if there is something involved I missed.
  • I call bullshit (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Tuesday August 02, 2011 @03:01AM (#36956972)

    If they really could do realtime graphics that were "100,000 times" more detailed than current stuff, they'd do one of two things:

    1) Release a demo so people could actually try it and see it working on their systems, to prove it was real. Or more likely...

    2) License that shit to a company in the industry. Intel would be extremely interested if it ran on CPUs as they'd love for people to spend more money on CPUs and none on GPUs. Any game engine maker would be extremely interested either way. Wouldn't matter if things still had to be hammered out, at the point they claim to be, that would be more than plenty to sign a licensing deal and get to work.

    So I'm calling bullshit and saying it is a con. This is classic con man strategy: You show a demo, but one that is hands off, where the people watching only get to see what you want them to see and don't actually get to play with your product. You make all sorts of claims as to how damn amazing it is, but nobody actually gets to try it out.

    This has been a con tactic for centuries, I've no reason to believe it is any different here.

    So to them I say: Put up or shut up. Either release a demo people can download that will let them see this run on their own systems, or get a reputable company to license it. If Intel comes out and says "This is for real, we've licensed the technology and will be releasing an SDK for people as soon as it is ready," I'll believe them, as they have a history of delivering on promises. So long as it is some random guys posting Youtube videos, I call bullshit.

    • Re: (Score:3, Insightful)

      by ThirdPrize ( 938147 )

      On the vague off chance it is real, the last thing they would do is release a demo. The first thing everyone else would do is reverse engineer it and rip them off.

    • Intel had its own similar project for a while, but they cancelled it.

    • by Suiggy ( 1544213 )

      The technology itself isn't bullshit, but what is bullshit is that Euclideon is taking credit for other people's work.

      They say they've invented the methods and algorithms behind it all, well that's just pure fantasy. Here's what Euclideon is basing their technology off of:

      http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com]
      http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr]

      Here's video from 2009 which looks better than Euclideon'

      • by RingDev ( 879105 )

        You should look into the underlying engine. The reason that they call it 'unlimited' is because the performance is based on a search engine that only has to be executed once per pixel, instead of once per polygon as in more traditional engines. With traditional engines, the more polygons, the more performance suffers; with the Unlimited engine, adding more points has a negligible effect on performance, while adding higher output resolution has a significant impact.

        -Rick

        • by Suiggy ( 1544213 )

          You still run into the same problem as you increase the resolution of your voxel octrees: the depth of your octrees or spatial lookup structures increases, which means you need to recurse more levels as you perform a spatial query, although it scales logarithmically instead of linearly. Still no reason to call it "unlimited."
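          The logarithmic scaling is easy to see: each extra octree level doubles the linear resolution, so the per-query depth grows as log2 of the resolution (sketch, assuming a cube of resolution^3 voxels):

```python
import math

def octree_depth(resolution):
    """Levels an octree needs to resolve a cube of resolution^3 voxels:
    each level halves the cell size, so depth grows as log2(resolution)."""
    return math.ceil(math.log2(resolution))

# Doubling the voxel resolution adds just one level to every per-pixel query:
depth_1k = octree_depth(1024)   # 10 levels
depth_2k = octree_depth(2048)   # 11 levels
```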

  • Other voxel engine (Score:5, Interesting)

    by binkzz ( 779594 ) on Tuesday August 02, 2011 @03:03AM (#36956982) Journal

    This russian guy made his own voxel engine as well, which I believe is hardware accelerated and also pretty impressive: http://www.atomontage.com/ [atomontage.com]

  • So did they just essentially develop a super intelligent LOD loading system that uses procedural instancing? I'm pretty sure you could put together similarly impressive demos using the latest tricks from Nvidia and ATI using standard polygon rendering. The fact they are using points vs. polygons isn't that interesting to me.

    What is fundamentally missing here? Animation, lighting and shadows. Those are going to be really hard problems to solve and I'm curious how they will go about it.

    Also, it's not "infinit

    • by Tom ( 822 )

      Actually, 64 voxels per square millimeter is "unlimited" for all practical purposes. If you can simulate a world to the limit of what the eye can see, you're done.

      If they do indeed do anything procedural (but there's no hint that they do), then a simple fractal will take care of actual unlimited detail.

      (and yes, /., I can type the above in a minute or less. can I please get a "knows how to use a keyboard" mode???)

    • I love the detail of the models in the demos. I'd love to see games without the polygonal trees and such shown in the video. But I agree the lighting of their demos could use some serious work. It's as if there's a uniform light source shining from all directions at once in their palm tree world. There are a few shadows in their demo, but the contrast is way too low.

      I'd love to see the combination of good lighting, and this non polygonal world.

    • by RingDev ( 879105 )

      Think of it this way: a model is made up of a whole lot of points, billions of them even. This engine takes one pixel of the output and searches for which points will fill it. Imagine the whole world as points (not polygons) in a giant cube. The pixel is actually 1 point with a contracting square extrusion coming out of it. The engine starts close and works further and further away until the entire pixel is filled with points. The resulting image is then compressed back down to 1 pixel and sent for output as

  • It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens. Euclideon had the right idea making a converter to turn polygonal models into voxel models, since no one was going to dedicate the money to create high-quality voxel assets that couldn't be used if they decided to scrap the tech and use a normal polygon engine. This tech is risky, so the first game to use it is likely going to be a cheaply-made game, p

    • by Tom ( 822 )

      It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens.

      That's also my biggest doubt. How does the system handle animations at all? The demos they have shown so far have no movement aside from the camera.

      I am very, very much looking forward to this. I can barely imagine the amount of creative potential being freed up if for most real-life objects you don't need hours of artist time anymore, but simply throw them into a laser scanner and be done with it. Your artists could focus on other things.

      But I'd very much like to know what the shortcomings and limitations

  • Here is what I think they probably do, similar to raytracing: They fire one "photon" from each pixel of the screen into the scene. As opposed to raytracing, this photon is never divided into multiple copies; it travels until it reaches something. The photon travels through the scene by adding X,Y,Z from a pre-calculated table until it reaches a box with something in it, then it halves the step for x,y,z, looking for even smaller boxes etc., until the box is so small it represents one pixel, OR the photon is outside the box (o
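    That coarse-to-fine stepping might look roughly like this (the occupancy test, step sizes, and scene are invented for illustration, not Euclideon's actual scheme):

```python
def march(ray_origin, ray_dir, occupied, step=1.0, min_step=0.0625, max_t=32.0):
    """Sketch of the guessed coarse-to-fine march: advance a point along
    the ray in big steps, and when it lands inside an occupied box, halve
    the step and retry until the step is down to pixel scale."""
    t = 0.0
    while t < max_t:
        p = tuple(o + d * (t + step) for o, d in zip(ray_origin, ray_dir))
        if occupied(p):
            if step <= min_step:
                return p        # hit resolved at pixel scale
            step /= 2.0         # refine: look for smaller boxes
        else:
            t += step           # empty space: commit the big step
    return None                 # photon left the scene

# A wall at x >= 10, ray marching along +x from the origin:
hit = march((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), lambda p: p[0] >= 10.0)
miss = march((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), lambda p: False)
```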
    • I think their smarts are in modeling the environment data such that they don't have to move gigabytes around for every image. Also, as they claim in TFA, they have a very limited "shade" model. They probably cut a lot of corners when it comes to reflectiveness and secondary light sources and all that.
  • We've had this discussion some time ago.. and from what I remember it came out that the procedural creation of "atoms" is kinda powerful and scalable, but will inherently not allow any kind of collision detection and/or animation.

    so basically, yes, this can be used to create a very detailed static world.

  • First of all, I have nothing against the government spending money on computer game graphics engines, in fact I think such money is wisely spent (more wisely than most defense projects, at least). However, out of sheer curiosity I'd like to know how a small software company can get 2 million AUD$ government funding?

  • Euclideon is unjustly taking credit for other people's hard work. They say they've invented the methods and algorithms behind it all, well that's just pure fantasy. Here's what Euclideon is basing their technology off of:

    http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com]
    http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr]

    Here's video from 2009 which looks better than Euclideon's videos: http://www.youtube.com/watch?v=HScYuRhgEJw [youtube.com]

    They might also be using othe

  • I see a lot of very skeptical responses, and I must admit I am a bit skeptical too. But then I thought back to the first time I saw "Doom" on a 486 and almost had my eyes fall out. It was just such a big step... I couldn't have imagined it. All it took was someone with a very bright idea. Perhaps we might be in for a similar surprise...

  • Towards the end of their video, they talk about a demo of an island using 21 billion points. Which is pretty much impossible to keep in RAM on anything less than a minicomputer.

    Let's assume that each point is storing the bare minimum of data needed - xyz position (each as 32-bit ints) and two pieces of color information (diffuse and specular, also 32 bits a piece). So that's 20 bytes of data per point, which comes out to be 391GB of data (for a static, unanimated mesh, I remind you). You can't store that in
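    Spelling out the arithmetic (point count and per-point layout as assumed above):

```python
points = 21_000_000_000       # island point count quoted in the video
bytes_per_point = 20          # 3 x 4-byte int position + 2 x 4-byte color values
total_bytes = points * bytes_per_point
total_gib = total_bytes / 2**30   # ~391 GiB for one static, unanimated mesh
```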
