Making Graphics In Games '100,000 Times' Better?

trawg writes "A small Australian software company — backed by almost AUD$2 million in government assistance — is claiming they've developed a new technology which is '100,000 times better' for computer game graphics. It's not clear what exactly is getting multiplied, but they apparently 'make everything out of tiny little atoms instead of flat panels.' They've posted a video to YouTube showing their new tech, apparently running at 20 FPS in software. It's (very) light on the technical details, and extraordinary claims require extraordinary evidence, but they say an SDK is due in a few months — so stay tuned for more." John Carmack had this to say about the company's claims: "No chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."
  • by bky1701 ( 979071 ) on Tuesday August 02, 2011 @03:39AM (#36956838) Homepage
    The "goal" of crazy people who don't actually understand computers has always been to make graphics (and sometimes logic) based on "atoms"/particles/etc. The problem is not that it can't be done - anyone who has ever used a 3D modeling program with fluid dynamics has that power right in front of them - the problem is that it can't realistically be done in real time with our technology. Hell, it can't realistically be done pre-rendered without a supercomputer. (Some back-of-the-envelope numbers at the end of this comment give a sense of the scale.)

    So sure, it could make it '100,000 times better.' No one is really debating that, and it isn't news to anyone who knows the first thing about graphics. What would be news would be hardware that better supported it. Somehow, I don't think that's what we have here. Notice the lack of specifics as to what KIND of graphics they seek to improve.

    Looks like the Australians just got scammed for 2 million.
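
    To put rough figures on the real-time objection, here is a back-of-the-envelope sketch of what a naive point-"atom" scene costs to store. Every number below is an illustrative assumption, not something from the article:

    ```python
    # Back-of-the-envelope storage for a naive, uncompressed point-"atom" scene.
    # All figures are illustrative assumptions chosen for round numbers.

    scene_area_m2 = 1_000 * 1_000   # a 1 km x 1 km outdoor level
    points_per_m2 = 1_000_000       # 1 mm point spacing -> 1000 x 1000 per m^2
    bytes_per_point = 15            # three 4-byte coordinates + 3-byte RGB colour

    total_points = scene_area_m2 * points_per_m2
    total_bytes = total_points * bytes_per_point

    print(f"points:  {total_points:.2e}")           # 1.00e+12
    print(f"storage: {total_bytes / 1e12:.1f} TB")  # 15.0 TB, uncompressed
    ```

    A terabyte-scale dataset per level is why any credible "atoms" renderer stands or falls on its compression and level-of-detail scheme, not on raw drawing speed.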
  • I call bullshit (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Tuesday August 02, 2011 @04:01AM (#36956972)

    If they really could do realtime graphics that were "100,000 times" more detailed than current stuff, they'd do one of two things:

    1) Release a demo so people could actually try it and see it working on their systems, to prove it was real. Or more likely...

    2) License that shit to a company in the industry. Intel would be extremely interested if it ran on CPUs, as they'd love for people to spend more money on CPUs and none on GPUs. Any game engine maker would be extremely interested either way. It wouldn't matter if things still had to be hammered out; at the stage they claim to be at, that would be more than enough to sign a licensing deal and get to work.

    So I'm calling bullshit and saying it is a con. This is classic con man strategy: You show a demo, but one that is hands off, where the people watching only get to see what you want them to see and don't actually get to play with your product. You make all sorts of claims as to how damn amazing it is, but nobody actually gets to try it out.

    This has been a con tactic for centuries, I've no reason to believe it is any different here.

    So to them I say: Put up or shut up. Either release a demo people can download that will let them see this run on their own systems, or get a reputable company to license it. If Intel comes out and says "This is for real, we've licensed the technology and will be releasing an SDK for people as soon as it is ready," I'll believe them, as they have a history of delivering on promises. So long as it is some random guys posting YouTube videos, I call bullshit.

  • Re:I call bullshit (Score:3, Insightful)

    by ThirdPrize ( 938147 ) on Tuesday August 02, 2011 @04:47AM (#36957166) Homepage

    On the vague off chance it is real, the last thing they would do is release a demo. The first thing everyone else would do is reverse engineer it and rip them off.

  • by Anonymous Coward on Tuesday August 02, 2011 @05:13AM (#36957272)

    1. They are not done yet.

    2. That is exactly what they are trying to do: create some buzz, get everyone interested, THEN show it to everyone and make a profit.

    3. Yes, it could be a con. On the other hand, we have been working on polygon technology for decades now (with the brief exception of a handful of voxel games)... I'd say the time is ripe for some other technology to come along. Why not this?
    We have seen what voxel engines looked like in the 90s (NovaLogic with the Comanche titles, or Outcast), and since then no one has done any serious development of voxel engines. When the first 3D-acceleration cards came out, they killed that development entirely. (A toy version of that 90s technique is sketched at the end of this comment.)

    What would modern voxel engines look like on modern CPUs, or when moved onto modern GPUs? There are some projects here and there, but I have not seen what massive research in that sector could do, simply because no one big (AMD, Intel, Nvidia, Crytek, Epic) has done it (or at least told us about it). So far, we are comparing the highly advanced (in years as well as man-hours and money) technology of polygon rasterizer graphics against hobby projects or small-team projects.

    I'd say it's time for a more advanced technology, as the whole polygon approach was a compromise in the first place. Don't invest money here unless you really have it and have really seen something - but flatly saying "this is shit!" is like claiming in the year 1900 that flying in a plane was impossible.
    Whether or not this particular thing is real, I say voxel technology as a whole is worth looking at, as it could get around a lot of the disadvantages of polygon-based technology, which came to life as an approximation (as in Elite, then later with textures). Apart from some fluff such as normal maps, the technology is still the very same as it was for TIE Fighter.

    Why not see what else is out there?
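
    For anyone who never saw the 90s engines mentioned above: the Comanche-style trick was height-map ray casting, not true volumetric rendering. Here is a toy version with synthetic terrain; the constants and parameter names are illustrative choices, not anything from NovaLogic:

    ```python
    # Toy 90s-style "voxel terrain" renderer (height-map ray casting).
    # Synthetic data; all constants are illustrative assumptions.
    import numpy as np

    MAP = 256                                # height map is MAP x MAP, wrapping
    W, H = 320, 200                          # framebuffer size

    ii, jj = np.meshgrid(np.arange(MAP), np.arange(MAP), indexing="ij")
    heights = (np.sin(ii / 18.0) * np.cos(jj / 23.0) * 40 + 60).astype(np.float32)
    colors = (heights * 2).astype(np.uint8)  # simple ramp: brighter = higher

    def render(cam_x, cam_y, cam_z=120.0, horizon=60.0, scale=240.0, max_dist=300):
        frame = np.zeros((H, W), dtype=np.uint8)
        ybuffer = np.full(W, H, dtype=int)   # lowest unfilled pixel per column
        spread = np.linspace(-1.0, 1.0, W)   # view-plane offset per screen column
        for z in range(1, max_dist):         # march front to back
            px = (cam_x + spread * z).astype(int) % MAP  # world x per column
            py = int(cam_y + z) % MAP                    # world y at this depth
            h = heights[px, py]
            # Perspective-project terrain height into a screen row (y grows down).
            row = ((cam_z - h) / z * scale + horizon).astype(int).clip(0, H)
            for x in range(W):               # draw vertical slivers down to y-buffer
                if row[x] < ybuffer[x]:
                    frame[row[x]:ybuffer[x], x] = colors[px[x], py]
                    ybuffer[x] = row[x]
        return frame

    frame = render(cam_x=128.0, cam_y=0.0)
    print(frame.shape, int(frame.max()))     # (200, 320) and a nonzero max
    ```

    Each screen column marches front to back through the height map, projecting terrain height into a screen row; the per-column y-buffer makes near terrain occlude far terrain. Cheap in 1992, but it only ever rendered 2.5D landscapes, which is part of why polygons won.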

  • by snowgirl ( 978879 ) on Tuesday August 02, 2011 @05:31AM (#36957350) Journal

    5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high. This is completely false. Game developers have scanned objects for a long time, and even more often have made extremely high-resolution models on purpose. The models are then reduced to a usable resolution, and the difference between the low-res and high-res models is baked into a normal (bump) map. This is how almost all first-person game textures are made these days. (The benefits of this process come mainly from textures holding depth data more efficiently than polygons, especially at varying distances, where complex geometry causes extreme aliasing, and from the fact that high-poly models cause serious issues with more advanced lighting schemes.) To make the claim this guy just did is highly suspect. (A toy version of the bake is sketched at the end of this comment.)

    So, what you're saying is that people scan real-world objects, but don't actually use those models in games... so... once one accounts for marketing speak, "you can't use a scan of a real-world object in a game [without dropping enough detail that you're no longer using the original scan]."
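
    To make the bake described above concrete, here is a minimal sketch using the simplest possible case: a heightfield instead of a real mesh pair. Real bakers ray-cast between the high- and low-poly meshes; everything here is a simplification of mine, but the principle is the same:

    ```python
    # Minimal sketch of baking lost geometric detail into a normal map, using a
    # heightfield in place of a real high-/low-poly mesh pair.
    import numpy as np

    N = 256
    y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

    # "Scanned" high-res surface: broad shape plus fine surface bumps.
    high = np.sin(x / 40.0) * 8 + np.sin(x / 3.0) * np.cos(y / 3.0) * 0.5

    # "In-game" low-res surface: block-average 16x, then upsample (detail lost).
    low = high.reshape(N // 16, 16, N // 16, 16).mean(axis=(1, 3))
    low = np.kron(low, np.ones((16, 16)))    # nearest-neighbour upsample

    # The residual detail the low-res geometry can no longer represent...
    residual = high - low

    # ...becomes a tangent-space normal map: per-texel normals of the residual.
    gy, gx = np.gradient(residual)
    n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)

    normal_map = ((n * 0.5 + 0.5) * 255).astype(np.uint8)  # usual RGB encoding
    print(normal_map.shape)                                 # (256, 256, 3)
    ```

    The game then renders the cheap low-res geometry and lets the normal map fake the high-res shading, which is exactly the "dropping enough detail" the reply above is poking at.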

  • by smallfries ( 601545 ) on Tuesday August 02, 2011 @07:02AM (#36957750) Homepage

    The idea that they've come up with a new LoD algorithm for point cloud data is reasonable, and it would allow their ridiculous claims about dataset size to be (technically) true. But if everything is held procedurally, then it must have a low-complexity description in order to compress that vast dataset (say 20,000 GB) into something that can be processed. Low-complexity descriptions tend to exist for highly regular geometry, and if you look at their demo they appear to have very high-detail objects in a very coarse, regular and repetitive mesh, to the extent that when they zoom out it looks like Minecraft. (The sketch at the end of this comment shows how much that repetition buys you.)

    There's no need for it to be a hoax: I'm guessing they really can produce the horrific-looking (regular, crappily lit, static) graphics in the video at the projected data sizes they refer to. What they gloss over is that the approach can't simply be translated onto a real level design and scaled up to the complexity you see there.

    It would be kind of like me saying "hey, I can draw circles at an infinite level of detail, equivalent to trillions of line segments. Can't draw more complex shapes like faces yet though....."
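
    To make the repetition point concrete: instancing lets a tiny description advertise an astronomical "atom" count. A rough sketch, with every number an illustrative assumption:

    ```python
    # Why a highly repetitive scene can claim enormous "atom" counts from a tiny
    # description: store one detailed model once, plus a list of transforms.
    # All figures are illustrative assumptions.

    points_per_model = 1_000_000          # one laser-scanned object, stored once
    bytes_per_point = 15                  # xyz + colour, uncompressed
    instances = 4_000_000                 # the same object tiled across the level
    bytes_per_instance = 16               # position + rotation, quantised

    effective_points = points_per_model * instances
    stored_bytes = (points_per_model * bytes_per_point
                    + instances * bytes_per_instance)

    print(f"advertised detail: {effective_points:.1e} points")  # 4.0e+12
    print(f"actual storage:    {stored_bytes / 1e9:.1f} GB")    # ~0.1 GB
    ```

    Four trillion "points" from well under a tenth of a gigabyte, so long as the world is the same object tiled over and over; unique geometry gets no such discount, which is exactly what the Minecraft-like repetition in the demo suggests.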
