Making Graphics In Games '100,000 Times' Better?
trawg writes "A small Australian software company — backed by almost AUD$2 million in government assistance — is claiming they've developed a new technology that is '100,000 times better' for computer game graphics. It's not clear what exactly is getting multiplied, but they apparently 'make everything out of tiny little atoms instead of flat panels.' They've posted a video to YouTube showing their new tech, apparently running at 20 FPS in software. It's (very) light on technical details, and extraordinary claims require extraordinary evidence, but they say an SDK is due in a few months — so stay tuned for more."
John Carmack had this to say about the company's claims: "No chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."
Unfortunately for us gamers... (Score:2)
Most recent games just kind of suck and rest on the shoulders of innovative graphics. This does not make me hopeful for the future of gaming.
However, if this technology isn't the scam I suspect it to be, I still welcome it. It could change the face of computer graphics as we know it, forever.
Re:Unfortunately for us gamers... (Score:4, Interesting)
Most recent games just kind of suck and rest on the shoulders of innovative graphics. This does not make me hopeful for the future of gaming.
Generally speaking, I'm in agreement on the suck part, but hold on a second there with the conclusion. If this technology is real and games do see a massive jump forward in graphics, wouldn't that allow an end to each successive title needing to simply out-polygon the competition? Isn't it equally likely this would force a paradigm shift where, if nothing else, art (real art) would supplant technical graphics specs?
Yeah, and I am a Pony (Score:3, Insightful)
So sure, it could make it '100,000 times better.' No one is really debating that, and it isn't news to anyone who knows the first thing about graphics. What would be news would be hardware that better supported it. Somehow, I don't think that's what we have here. Notice the lack of specifics as to what KIND of graphics they seek to improve.
Looks like the Australians just got scammed for 2 million.
Re: (Score:2)
I did however notice an extremely questionable statement which makes me seriously suspect this is a scam.
5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high.
Re:Yeah, and I am a Pony (Score:4, Insightful)
5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high. This is completely false. Game developers have scanned objects for a long time and, even more often, made extremely high resolution models on purpose. The models are then reduced to a usable resolution, and the difference between the low-res and high-res models is compiled into a normal (bump) map. This is how almost all first-person game textures are made these days. (The benefits of this process mainly come from textures holding depth data more efficiently than polys, especially at varying distances, where complex geometry results in extreme aliasing, and from the fact that high-poly models cause serious issues with more advanced lighting schemes.) To make the claim this guy just did is highly suspect.
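For anyone who hasn't seen the baking side of this, the core of it is just "store the high-res surface orientation in a texture." Here's a minimal illustrative sketch (hypothetical code, not any studio's pipeline), using a high-res heightfield as a stand-in for high-poly detail and packing tangent-space normals into 8-bit RGB:

```python
# Minimal sketch: turn a high-res heightfield (a stand-in for high-poly
# surface detail) into a tangent-space normal map packed as 8-bit RGB.
# Hypothetical example code, not from any real game pipeline.
import math

def height_to_normal_map(height, scale=1.0):
    """height: 2D list of floats; returns 2D list of (r, g, b) texels."""
    h, w = len(height), len(height[0])
    texels = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences give the surface slope at this texel.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
            # Normal is perpendicular to the slope; z points out of the surface.
            nx, ny, nz = -dx * scale, -dy * scale, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Pack [-1, 1] into [0, 255]: this is why normal maps look blue.
            row.append((int((nx * 0.5 + 0.5) * 255),
                        int((ny * 0.5 + 0.5) * 255),
                        int((nz * 0.5 + 0.5) * 255)))
        texels.append(row)
    return texels

# Tiny 3x3 bump in the middle; the symmetric peak gets a straight-up normal.
bump = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(height_to_normal_map(bump)[1][1])  # -> (127, 127, 255)
```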
So, what you're saying is that people scan real world objects, but don't actually use those models in games... so... once one accounts for market speak "you can't use a scan of a real-world object in a game [without dropping enough detail so that you're not using the original scan]."
Re: (Score:2)
Re: (Score:3)
No, they don't use the raw point cloud models, they perform some processing first. And I doubt these guys use unprocessed point clouds for their engine either, so it's a ludicrous claim.
Their specific claim is that they are using point clouds. My thought is that if you strategically collapse points of the model (a dynamic-LOD sort of thing), you could feasibly accomplish something similar to what they're doing.
I mean, seriously, they're talking about large point fields for grains of sand... if that were true, then why wouldn't they be able to use raw point clouds from scanned objects?
Re: (Score:2)
Seen this video a few years ago - definitely a scam.
Re: (Score:2)
How is that possible? Crysis 2, featured in the video, was only released this year. I'm not saying this ISN'T a scam, and you might well have seen a similar video, but you did not see THIS video a few years ago.
Re: (Score:2)
Parts of the video were awfully familiar, especially in the beginning. If I remember correctly, last time around they produced this with functions for curves etc. and created models with these curves, allowing "unlimited" detail.
Re: (Score:3)
Re: (Score:3, Informative)
They claim they can do in realtime what you say is impossible. Now, if you don't actually have any technical argument, I'll take the view of an expert: John Carmack does not think it is a scam. That said, there are always big challenges in going from tech demo to finished product, and they are unlikely to make it, especially in the current game market, which is already struggling to create content.
Here, kid, as an actual graphics programmer, I'll translate Carmack's producer- and marketing-approved Twitter into plain, run-of-the-mill English for the simple-minded:
Statement: "No chance of a game on current gen systems, but maybe several years from now."
Translation: "No chance of a game on current-gen systems, nor what will be the next generation, as Wii U devkits have already been seeded to developers and it'd be foolish to think that Sony or Microsoft are very far behind. Insofar as nobody, not
Re: (Score:2)
Almost every day there is an article about a revolutionary new material, source of energy, cure for cancer... The vast majority of them never make it to an actual product, for lots of different reasons (doesn't scale, too weird, politics, bad timing, fashion, $$$, 90%-there syndrome, overoptimism, fatal flaws). That is still interesting, certainly more interesting than the 5 articles there will
Re: (Score:3)
Re:Yeah, and I am a Pony (Score:5, Insightful)
The idea that they've come up with a new LoD algorithm for point cloud data is reasonable. It would then allow their ridiculous claims about the size of datasets to be (technically) true. But if everything is held procedurally, then it must have a low-complexity description in order to compress that vast dataset (say 20,000 GB) into something that can be processed. Low-complexity descriptions tend to exist for highly regular geometry, and if you look at their demo, they appear to have very high detail objects in a very coarse, regular and repetitive mesh, to the extent that when they zoom out it looks like Minecraft.
No need for it to be a hoax; I'm guessing that they can make horrific-looking (regular, craply lit, static) graphics, as claimed in the video, with the projected data sizes they refer to. What they gloss over is that it can't just be translated onto a real level design and scaled up to the level of complexity you see in real levels.
It would be kind of like me saying "hey, I can draw circles at an infinite level of detail, equivalent to trillions of line segments. Can't draw more complex shapes like faces yet, though..."
Re: (Score:2)
There's an awful lot of instancing in the scene... if you calculate the total when the instances are baked, then you can get those numbers.
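A back-of-the-envelope check on that, with made-up numbers chosen only to land in the same ballpark as the video's figures:

```python
# Hypothetical numbers, roughly the scale the video quotes: one scanned
# object, instanced across the island, "counts" as unique points when baked.
points_per_object = 1_000_000        # one detailed scanned model
instances = 21_000_000               # same object repeated across the scene
baked_atoms = points_per_object * instances
stored_atoms = points_per_object     # only one copy actually lives in memory
print(f"claimed atoms: {baked_atoms:.2e}, actually stored: {stored_atoms:.2e}")
# claimed atoms: 2.10e+13, actually stored: 1.00e+06
```

Twenty-one trillion "atoms" on screen from one million points in memory, which is exactly the kind of gap the quoted figures leave unexplained.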
This is a hoax, or an exceptionally ill-advised way to generate developer interest in their middleware.
Yep. My prediction is that they're just after investor money and this will never be more than a tech demo.
Re: (Score:2)
It would be interesting (to me, as a graphics programmer in the games industry) if they stopped bullshitting. The claims in that video, when written down, are absolutely absurd. 20,000 GB of RAM. That's right: 20,000 GB of RAM (at least!) to store the number of 'atoms' they claim they are displaying.
What figures are you using for that calculation?
Now, that simply can't be true - so they must either have left out a hell of a lot of information (such as, we are drawing the same object 20,000,000 times, or we are throwing everything at some procedural geometry shader)
Well of course they are drawing each object many times. So do all the polygon based games. It would be stupid not to. No one would store the full unique geometry for each blade of grass.
Simple tool to magically convert polygons (that we've been lambasting for the last 5 minutes) into an infinite-detail point cloud (thereby adding detail to the mesh that was not there to begin with? WTF?)
You're ranting. The polygons that they are lambasting are, for example, tree trunks with 6-12 sides. If instead they take a model with a very high polygon count, higher than would be used in a polygon-based game, and convert it to their "point cloud" system, that would be quite
Re: (Score:2)
These people are playing up a compl
Re: (Score:2)
Problem is... by the time the average machine is really able to do all this, we won't be too worried about having a billion polygons on screen either.
i.e. It's a sort of self-obsoleting technology - it looks good today but by the time it's finally available nobody will want it.
Re: (Score:2)
I find myself wondering if what they are doing is using voxels to step down the detail level on distant objects while stepping it up on near objects, and not even bothering with the objects out of view.
Re: (Score:2)
Actually, it was being done realistically in near real-time over 10 years ago, using splatting based techniques (see surfels and QSplat http://graphics.stanford.edu/software/qsplat/ [stanford.edu]). These systems weren't really suitable or fast enough for games at the time, but 10 years is a long time for hardware and software to progress.
Re: (Score:2)
Worse than that, he did this over a year ago [popsci.com]. Here's his video from February 2010 [youtube.com] which is basically the same as the July 2011 version [youtube.com].
His linkedin goes into a bit more detail: [linkedin.com] "The Unlimited Detail system consists of a compiler that takes point cloud data and converts it in to a compressed format, the engine is then capable of accessing this data in such a way that it only accesses the pixels needed on screen and ignores the others generati
Re: (Score:2)
He's a con artist who can program graphics (or get other people to while he takes all the credit).
Re: (Score:2)
Perhaps they've invented a compiler that's so smart that it can deal with (code handling single) atoms by cleverly dividing them into groups or something.
As an analogy: fluid mechanics also does not describe one atom at a time, yet the equations are valid on a larger scale.
Re: (Score:2)
It's called a spatial index.
It's pretty basic computational geometry stuff.
Re: (Score:3)
A sparse voxel octree is basically a hierarchical structure for points in 3D space. The advantage of using a hierarchical structure is that you can stop looking at any time, and so zooming works very well: you just traverse the tree until you get so far down that further detail won't be visible, then you render. If the player moves closer, that
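To put the parent's description in concrete terms, here is a toy sketch of that traversal (purely illustrative, not Euclideon's or anyone's real engine): descend the octree until a node's projected screen size drops below a pixel, then splat the node's average color.

```python
# Illustrative sketch of hierarchical LOD traversal over an octree of points:
# the "stop descending when further detail won't be visible" idea above.
class Node:
    def __init__(self, center, half_size, color, children=None):
        self.center = center            # (x, y, z)
        self.half_size = half_size      # half the edge length of this cube
        self.color = color              # average color of everything inside
        self.children = children or []  # up to 8 child nodes

def projected_size(node, eye, focal=500.0):
    """Rough screen-space size (pixels) of the node's bounding cube."""
    dx, dy, dz = (node.center[i] - eye[i] for i in range(3))
    dist = max((dx * dx + dy * dy + dz * dz) ** 0.5, 1e-6)
    return focal * (2.0 * node.half_size) / dist

def collect_splats(node, eye, out, pixel_threshold=1.0):
    # Stop descending once further detail would be sub-pixel.
    if not node.children or projected_size(node, eye) <= pixel_threshold:
        out.append((node.center, node.color))
        return
    for child in node.children:
        collect_splats(child, eye, out, pixel_threshold)

# Two-level toy tree: zooming out renders 1 splat, zooming in renders 8.
leaves = [Node(((i & 1) - 0.5, ((i >> 1) & 1) - 0.5, ((i >> 2) & 1) - 0.5),
               0.5, color=i) for i in range(8)]
root = Node((0, 0, 0), 1.0, color=0, children=leaves)
for eye in [(0, 0, -2000), (0, 0, -5)]:
    splats = []
    collect_splats(root, eye, splats)
    print(len(splats), "splats from eye at", eye)
```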
Re: (Score:3)
I do get your skepticism...
Thing is though, some members of the demo scene have been doing really impressive things with particle-based rendering systems over the past few years.
To see what can be done, you should check out the demos "Ceasefire (All falls down)" and "Numb Res" by CNCD & Fairlight. They both make very heavy use of particle-based rendering engines - the latter features a rather long section of real-time particle-based computational fluid dynamics simulation - the kind of stuff that one te
Re: (Score:2)
I remember when Wolfenstein 3D came out. It seemed unbelievable that a world of textured polygons was being manipulated in real time on a 4.77 MHz PC. We'd seen nothing like it!
Later the details of how Carmack had done it came out. This wasn't the traditional matrix manipulation of 3D points, hidden surface removal, plotting of textures and the painter's algorithm we were used to. It was 2D raycasting from a simplified data structure. Each ray cast allowed the plotting of 128 pixels. Only 280 rays had to b
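For anyone who never saw how that trick worked, here's a toy reconstruction of the idea (illustrative only; the real engine used fixed-point math and far cleverer stepping): one 2D ray per screen column marched through a grid map, with wall height inversely proportional to hit distance.

```python
# Toy 2D grid raycaster in the Wolfenstein spirit: one ray per screen
# column, step through map cells until a wall, wall height ~ 1/distance.
import math

MAP = ["#####",
       "#...#",
       "#.#.#",
       "#...#",
       "#####"]

def cast_column(px, py, angle, max_dist=20.0, step=0.02):
    """March a ray from (px, py); return distance to the first wall cell."""
    dist = 0.0
    while dist < max_dist:
        dist += step
        x = int(px + math.cos(angle) * dist)
        y = int(py + math.sin(angle) * dist)
        if MAP[y][x] == "#":
            return dist
    return max_dist

def render(px, py, facing, fov=math.pi / 3, columns=40, screen_h=12):
    rows = [[" "] * columns for _ in range(screen_h)]
    for col in range(columns):
        angle = facing - fov / 2 + fov * col / columns
        dist = cast_column(px, py, angle)
        wall = min(screen_h, int(screen_h / (dist + 0.1)))  # taller when closer
        for r in range((screen_h - wall) // 2, (screen_h + wall) // 2):
            rows[r][col] = "|"
    return "\n".join("".join(r) for r in rows)

print(render(1.5, 1.5, 0.0))  # a crude ASCII view down the corridor
```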
Re: (Score:2)
That's actually my biggest issue with it. I don't see anything in the video *at all* that can't be done with current technology, some good hardware, and very clean programming. The claims they are making do not seem to align with what is being shown, and indeed, the claims seem to be somewhat self-contradictory.
I don't think that a massive revolution in rendering is impossible (although it would almost certainly require n
Re: (Score:2)
(I've done some things on computers that were 'impossible', I just didn't accept the limitations and did something nobody had thought of before. Many cool pieces of programming were considered impossible before someone went and pulled it off anyhow. So the way I see it, if I and other people can do the 'impossible' with software, I see no reason a bunch of other smart people can't do it. In a decade or two after release, nobody will understand why it took so long for someone to do it this way, just wait.)
No, things that are impossible to do on computers are simply impossible to do. Time travel, for example. That's impossible. Storing 21 trillion (as they claim in the video) anythings on a computer is impossible on current-gen hardware. Unless they are expecting the PS4 to ship with 20,000 GB+ of RAM, it will still be impossible on the next generation of hardware. If you can show me how to store 21 trillion unique and random values on a PS3, well sir, I shall forever be your servant, because I'd have a lot to lear
Re: (Score:2)
Is it even possible? It's only impossible until someone ignores conventional thought and does it anyhow.
(I've done some things on computers that were 'impossible', I just didn't accept the limitations and did something nobody had thought of before.
Um, no. Some things really are impossible. The 'impossible' things you did all had quote marks around them.
Re: (Score:2)
Re: (Score:2)
Voxels (Score:2)
Now is not the time for voxels. At current hardware specs, the computing-time-to-image-quality tradeoff comes out so far in favour of rasterizing polygons (or whatever) that it's insane.
These guys had a tech demo floating around a few years ago, and to my eyes, not a lot has changed.
Re:Voxels (Score:5, Interesting)
This is probably not actually what is generally called "voxels", but a hierarchical point cloud system consisting of points on the surface of objects, rendered via some kind of weighted splatting mechanism. There was a lot of research into such systems for visualising some of the very high resolution point clouds coming out of digital laser scanning systems (for example QSplat, which came out of the Digital Michelangelo project http://graphics.stanford.edu/software/qsplat/ [stanford.edu]).
All is not lost (Score:2)
- Hiring the most annoying voice-over guy.
- Overuse of the word 'unlimited.'
Thankfully they have UNLIMITED POWER at their disposal to prove any further developments.
Re: (Score:2)
Though it's more financial disguising than voice acting.
The company got back to me (Score:5, Interesting)
(I submitted this article) I fired off a request for more information from the developers about this and they got back to me indicating they're willing to answer some more questions, so I've summarised some of the main ones that I've seen around the place.
We're based in the same city as this company (Brisbane, Australia) so I'm hoping that I might be able to actually go out there and eyeball this stuff myself to get a feel for it (and possibly drag along a graphics programmer to do some grilling).
Re: (Score:2)
I too am interested in hearing more about this.
Oddly enough, I'm working for a company that's working on a modelling tool for "infinite detail", with an aim for 3D fabrication. But it's not voxels, like the engine here shows.
Re:The company got back to me (Score:5, Interesting)
Don't bother, they're taking credit for other people's work. You want to know how their technology works? Here's a couple of research papers:
http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com]
http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr]
Want some source code? http://code.google.com/p/efficient-sparse-voxel-octrees/ [google.com]
Want a video? http://www.youtube.com/watch?v=HScYuRhgEJw [youtube.com]
Euclideon is just spinning up the marketing bullshit and trying to make a profit off of it all. They don't even have good lighting, they're just doing forward shading for each voxel ray-cast intersection using diffuse lighting with a single global point light source. And they haven't demonstrated robust animation yet.
Guess what, it is possible to animate voxel octrees, but Euclideon never came up with the method either. Some researcher in Germany came up with a working solution for his bachelor's thesis: http://www.youtube.com/watch?v=Tl6PE_n6zTk [youtube.com]
Re: (Score:2)
Except that this isn't voxels. Euclideon is the marketing front for Unlimited Detail now. Go do a search for the Unlimited graphics engine; they've been showing off their work for the last 2 years. The only thing new here is their marketing partner.
-Rick
Re:The company got back to me (Score:4, Interesting)
The technology is rather related to point cloud rendering which is about 10 years old now. This is the most clever implementation of point cloud rendering that I am aware of and it is pretty cool: http://graphics.stanford.edu/software/qsplat/ [stanford.edu] It renders amazingly fast.
It has its share of problems, including requiring a lot of precomputation, and as far as I know no one has been able to do proper antialiasing on point clouds. Texture interpolation in the traditional sense has also not been solved, to my knowledge, because with these point clouds all you can do is give individual points colors, so you will always have hard edges between points. Those two combined result in a lot of visual noise that destroys the illusion in the demo videos that I have seen so far.
Re: (Score:2)
I would really like to hear the results. I've been writing a 3d engine for the past year and a half - wouldn't want to learn I'd wasted my time! ;)
Heh, given the investment of the industry and users in 3d hardware I think you're probably on the safe side :)
Re: (Score:2)
I saw the exact same images/videos they are showing now a year ago, so you're safe. The main problem is that it's only static data, and the gaming world moved beyond static scenery when we noticed doors could open in Wolfenstein 3D.
Old news is old. (Score:2)
Re: (Score:2)
The commentary in the video said that they'd demonstrated an early version a year ago and then disappeared. They're back now that they've got something new to show.
Great (Score:2)
So that means 100,000 times more work to make everything that detailed?
Or else everyone who makes games uses a standard library of objects to cut/paste and so the games end up looking the same anyway?
This is voxels all over again, in a modern iteration. Yeah, it looks cool, but it increases your development time and isn't anywhere near as fast as other techniques, and all those graphical "shortcuts" that standard 3D cards take are done for a reason - nobody *really* notices or cares so long as the game runs s
Will this start the old discussion? (Score:3)
I see this as the equivalent of FLAC vs MP3 - yeah, sure, it definitely contains more information, but at the cost of storage size, and in the end 99% of people won't actually care.
But FLAC sounds so much better than 512kbit/s MP3 with my $15 headphones and on-board soundcard!
Re: (Score:2)
Point cloud != voxels.
Voxels = regularly sampled volume elements.
Point cloud = irregularly sampled points, usually on the surface of an object.
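In code terms, the distinction is roughly this (illustrative structures only, not any engine's actual layout):

```python
# Illustrative only: the structural difference the parent describes.
# A voxel grid samples space on a regular lattice; a point cloud is an
# arbitrary bag of surface samples.
voxel_grid = [[[0 for z in range(4)] for y in range(4)] for x in range(4)]
voxel_grid[1][2][3] = 255            # occupancy/density at a lattice cell

point_cloud = [
    # (x, y, z, r, g, b): positions can fall anywhere, usually on surfaces
    (0.13, 2.71, 1.41, 200, 180, 150),
    (0.14, 2.70, 1.43, 198, 182, 149),
]
# Indexing: the grid gives O(1) lookup by position; the cloud needs a
# spatial index (octree, k-d tree) before you can query it efficiently.
```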
Nomenclature (Score:2)
I call bullshit (Score:5, Insightful)
If they really could do realtime graphics that were "100,000 times" more detailed than current stuff, they'd do one of two things:
1) Release a demo so people could actually try it and see it working on their systems, to prove it was real. Or more likely...
2) License that shit to a company in the industry. Intel would be extremely interested if it ran on CPUs as they'd love for people to spend more money on CPUs and none on GPUs. Any game engine maker would be extremely interested either way. Wouldn't matter if things still had to be hammered out, at the point they claim to be, that would be more than plenty to sign a licensing deal and get to work.
So I'm calling bullshit and saying it is a con. This is classic con man strategy: You show a demo, but one that is hands off, where the people watching only get to see what you want them to see and don't actually get to play with your product. You make all sorts of claims as to how damn amazing it is, but nobody actually gets to try it out.
This has been a con tactic for centuries, I've no reason to believe it is any different here.
So to them I say: Put up or shut up. Either release a demo people can download that will let them see this run on their own systems, or get a reputable company to license it. If Intel comes out and says "This is for real, we've licensed the technology and will be releasing a SDK for people as soon as it is ready," I'll believe them, as they have a history of delivering on promises. So long as it is some random guys posting Youtube videos, I call bullshit.
Re: (Score:3, Insightful)
On the vague off chance it is real, the last thing they would do is release a demo. The first thing everyone else would do is reverse engineer it and rip them off.
Re: (Score:2)
Intel had its own similar project for a while, but they cancelled it.
Re: (Score:2)
The technology itself isn't bullshit, but what is bullshit is that Euclideon is taking credit for other people's work.
They say they've invented the methods and algorithms behind it all; well, that's just pure fantasy. Here's what Euclideon is basing their technology off of:
http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com] [nvidia.com]
http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr] [artis.imag.fr]
Here's video from 2009 which looks better than Euclideon'
Re: (Score:2)
You should look into the underlying engine. The reason they call it 'unlimited' is that the performance is based on a search that only has to be executed once per pixel instead of the more traditional once per polygon. With traditional engines, the more polygons, the more performance suffers; with the Unlimited engine, adding more points has a negligible effect on performance. Adding higher output resolution, on the other hand, has a significant impact.
-Rick
Re: (Score:2)
You still run into the same problem as you increase the resolution of your voxel octrees: increasing the depth of your octrees or spatial lookup structures means you have to recurse more levels as you perform a spatial query, although it scales logarithmically instead of linearly. Still no reason to call it "unlimited."
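That logarithmic scaling is easy to eyeball. A quick illustrative calculation (the numbers are assumptions, including the "64 per cubic millimetre" figure people quote from the video):

```python
# Octree depth needed to resolve a world of a given size down to a given
# voxel edge: each level halves the edge, so depth grows logarithmically.
import math

world_edge_mm = 1_000_000                # a 1 km cube, in millimetres
for voxel_edge_mm in (1000, 1, 0.25):    # 1 m, 1 mm, "64 per cubic mm"
    depth = math.ceil(math.log2(world_edge_mm / voxel_edge_mm))
    print(f"voxel edge {voxel_edge_mm} mm -> octree depth {depth}")
# Going from metre to sub-millimetre detail adds only ~12 levels per query.
```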
Other voxel engine (Score:5, Interesting)
This Russian guy made his own voxel engine as well, which I believe is hardware accelerated and also pretty impressive: http://www.atomontage.com/ [atomontage.com]
Re: (Score:2)
mod parent up. That one is highly impressive.
Just advanced level of detail rendering? (Score:2)
So did they just essentially develop a super intelligent LOD loading system that uses procedural instancing? I'm pretty sure you could put together similarly impressive demos using the latest tricks from Nvidia and ATI using standard polygon rendering. The fact they are using points vs. polygons isn't that interesting to me.
What is fundamentally missing here? Animation, lighting and shadows. Those are going to be really hard problems to solve and I'm curious how they will go about it.
Also, it's not "infinit
Re: (Score:2)
Actually, 64 voxels per cubic millimeter is "unlimited" for all practical purposes. If you can simulate a world to the limit of what the eye can see, you're done.
If they do indeed do anything procedural (though there's no hint that they do), then a simple fractal will take care of actually unlimited detail.
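For what it's worth, "a simple fractal" really can hand you arbitrary detail from a fixed-size description. An illustrative midpoint-displacement sketch, refining a terrain profile to any depth you ask for:

```python
# Midpoint displacement: a toy fractal that yields as much (or as little)
# detail as you ask for from a constant-size description. Illustrative only.
import random

def refine(profile, roughness=0.5, seed=42):
    """One refinement pass: insert a displaced midpoint between samples."""
    rng = random.Random(seed + len(profile))
    out = []
    for a, b in zip(profile, profile[1:]):
        out.append(a)
        out.append((a + b) / 2 + rng.uniform(-roughness, roughness))
    out.append(profile[-1])
    return out

terrain = [0.0, 0.0]
for level in range(20):          # 20 levels -> over a million samples
    terrain = refine(terrain, roughness=0.5 ** level)
print(len(terrain), "samples from a 2-sample description")  # 1048577
```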
(and yes, /., I can type the above in a minute or less. can I please get a "knows how to use a keyboard" mode???)
Re: (Score:2)
I love the detail of the models in the demos. I'd love to see games without the polygonal trees and such shown in the video. But I agree the lighting of their demos could use some serious work. It's as if there's a uniform light source shining from all directions at once in their palm tree world. There are a few shadows in their demo, but the contrast is way too low.
I'd love to see the combination of good lighting, and this non polygonal world.
Re: (Score:2)
Think of it this way: a model is made up of a whole lot of points, billions of them even. This engine takes one pixel of the output and searches for which points will fill it. Imagine the whole world as points (not polygons) in a giant cube. The pixel is actually one point with a contracting square frustum extruded out of it. The engine starts close and works further and further away until the entire pixel is filled with points. The resulting image is then compressed back down to 1 pixel and sent for output as
I Agree With Carmack (Score:2)
It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens. Euclideon had the right idea making a converter to turn polygonal models into voxel models, since no one was going to dedicate the money to create high-quality voxel assets that couldn't be used if they decided to scrap the tech and use a normal polygon engine. This tech is risky, so the first game to use it is likely going to be a cheaply-made game, p
Re: (Score:2)
It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens.
That's also my highest doubts. How does the system handle animations at all? The demos they have shown so far have no movement aside from the camera.
I am very, very much looking forward to this. I can barely imagine the amount of creative potential being freed up if for most real-life objects you don't need hours of artist time anymore, but simply throw them into a laser scanner and be done with it. Your artists could focus on other things.
But I'd very much like to know what the shortcomings and limitations
What they probably do (Score:2)
Re: (Score:2)
Re: (Score:2)
The idea you've had is the standard traversal of an octree. Congratulations on reinventing a standard tool in CGI. :-)
Yet another person falls to Saunt Lora's Proposition.
relax guys (Score:2)
We've had this discussion some time ago... and from what I remember it came out that the procedural creation of "atoms" is kinda powerful and scalable, but will inherently not allow any kind of collision detection and/or animation.
so basically, yes, this can be used to create a very detailed static world.
2 Million AUD$ (Score:2)
First of all, I have nothing against the government spending money on computer game graphics engines; in fact, I think such money is wisely spent (more wisely than most defense projects, at least). However, out of sheer curiosity, I'd like to know how a small software company can get AUD$2 million in government funding.
There's no magic behind "UNLIMITED DETAIL" (Score:2)
Euclideon is unjustly taking credit for other people's hard work. They say they've invented the methods and algorithms behind it all; well, that's just pure fantasy. Here's what Euclideon is basing their technology off of:
http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementatio [nvidia.com]
http://artis.imag.fr/Publications/2009/CNLE09/ [artis.imag.fr]
Here's video from 2009 which looks better than Euclideon's videos: http://www.youtube.com/watch?v=HScYuRhgEJw [youtube.com]
They might also be using othe
Re: (Score:2)
Sorry, that first URL is missing a character at the end, it should be: http://research.nvidia.com/publication/efficient-sparse-voxel-octrees-analysis-extensions-and-implementation [nvidia.com]
Remember the first time you saw "DOOM"? (Score:2)
I see a lot of very skeptical responses, and I must admit I am a bit skeptical too. But then I thought back to the first time I saw "Doom" on a 486 and almost had my eyes fall out. It was just such a big step... I couldn't have imagined it. All it took was someone with a very bright idea. Perhaps we might be in for a similar surprise...
Storage capacity (Score:2)
Towards the end of their video, they talk about a demo of an island using 21 billion points, which is pretty much impossible to keep in RAM on anything less than a minicomputer.
Let's assume that each point is storing the bare minimum of data needed - xyz position (each as 32-bit ints) and two pieces of color information (diffuse and specular, also 32 bits apiece). So that's 20 bytes of data per point, which comes out to 391 GiB of data (for a static, unanimated mesh, I remind you). You can't store that in
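Spelling that arithmetic out (same assumptions as above):

```python
# The arithmetic above, spelled out. 20 bytes per point is already a
# generous minimum (no normals, no index overhead, no animation data).
points = 21_000_000_000                  # 21 billion, per the video
bytes_per_point = 3 * 4 + 2 * 4          # xyz as 32-bit ints + 2 colors
total = points * bytes_per_point
print(f"{total / 2**30:.0f} GiB")        # -> 391 GiB
```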
Re: (Score:3)
one Sri Chinmoy Library [srichinmoylibrary.com]
(1 LoC [= 147M items] / 100000 ~ 1 SCL [~ 1.5K items])
hope this helps,
sincerely yours
Re: (Score:3)
High quality voxel graphics with dynamic deformation would allow a whole new level of user-generated content.
Imagine something like world of warcraft meets second life, but without all the furries. (Something where if you take a shovel, and dig, you can dig up rocks, and other bits-- or even bury loot, or build a house out of ambient materials, and have it be persistent.)
Some people might complain that it opens the doors to world vandalism ([sarcasm]Oh dear, somebody wrote the word "Penis" in 30 foot letter
Re: (Score:3)
>>High quality voxel graphics with dynamic deformation would allow a whole new level of user-generated content.
Yeah, that would actually be pretty damn neat. None of what they showed was dynamic, though.
About 10 years ago, when I was doing a lot of work with voxels, I'd arrange all the voxels in an octree and could adjust the framerate/detail simply by how far down each object's octree I'd traverse. I could have large, coarse voxels, or small, precise ones, adjust for distance from the viewer, and so
Re: (Score:3)
They say quite clearly that their little bitty atom tech is based on point clouds (not voxels).
Re: (Score:3)
Re: (Score:2)
Re:In other news... (Score:5, Informative)
But what do they do then when they are not seen? Sod off for a holiday in the cloud? Seriously, I think you are missing the point. Where the hell is this data being stored, and what is the size of the data set? It's got to be in memory *at some point*, and on hard disk if it's not. So how much RAM/disk space will this thing use, exactly? OK, so "most of it is calculated, somewhat like fractals" - well, OK. But which bits? Are the trees fractals (or L-systems, maybe)? Just the leaves? The models of the rocks they have scanned in? The 3ds Max models they have converted to point clouds? The whole island? Answers to these questions need to be provided before any games developer would even bother looking at this tech. Either it's all procedural (in which case it's utterly useless for game designers), it's primarily procedural (in which case the art director will struggle to achieve a consistent look), it's partially procedural (which will annoy the modelling & texturing departments), or it's a load of made-up lies. I'm erring towards the latter...
Re:In other news... (Score:5, Interesting)
Look at these two examples:
Re: (Score:3)
they occupy virtually no space
That's not exactly true. While they require almost no disk space, they do require quite a bit of RAM. Just because all the textures and models are procedurally generated doesn't make the need to store them go away. If things were dynamically generated each frame in a geometry or pixel shader, things might look different, but that is a whole lot more complicated than just procedural generation.
Re: (Score:3)
There's an awful lot of object instancing in their videos (same object repeated multiple times).
The numbers they're quoting are the number of 'atoms' you see on screen, not the number of atoms in the computer's memory.
Re: (Score:3)
Compared to the raw number of triangles your average GeForce card can theoretically process, that's very true
And no mention in the video of what kind of hardware is powering that humble 20fps "real time" preview. Even if we accept that statement, if it takes a supercomputer to get to 20fps, that's not going to have much of a market. Given that this tech is totally different from where the industry is going, they should probably be talking with Nvidia/AMD about what hardware could even make it feasible. Carmack is right: the hardware just simply isn't there, and for that matter is not even trending that
Re: (Score:3)
I think this is all just a ploy to provide Intel with a market for their Knights Ferry chips -- this won't run on GPU hardware on current systems apparently, so you need CPU might. Where do you get that? Knights Ferry, obviously.
Still, it sounds very cool, if only for statically-rendered stuff like wallpapers and movies.
Re: (Score:2)
Re: (Score:2)
Something where if you take a shovel, and dig, you can dig up rocks, and other bits-- or even bury loot, or build a house out of ambient materials, and have it be persistent.
Yeah, that'd be awesome. A game where you could mine and craft all kinds of stuff... what to call it...
Re:In other news... (Score:4, Interesting)
Something where if you take a shovel, and dig, you can dig up rocks, and other bits-- or even bury loot, or build a house out of ambient materials, and have it be persistent.
Yeah, that'd be awesome. A game where you could mine and craft all kinds of stuff... what to call it...
I don't know about you but I would call it "Nethack"
Re: (Score:2)
Re:animaaaation (Score:5, Informative)
Which is the whole trick. This was shown off at least a year ago; it pops up now and then.
The tech precalculates a LOT, and for that it needs static model information.
The site of the creators is http://unlimiteddetailtechnology.com/ [unlimitedd...nology.com]
http://forums.tigsource.com/index.php?topic=11624.30 [tigsource.com] they talked about it last year.
Re: (Score:2)
Is that why the lighting looks terrible too?
As with everything, it looks to be something that suffers compromises. Sure, they've made things look better, but if it can't be lit well, and if it can't be animated easily, it's not much use for games.
Re: (Score:2)
Also, dynamic lighting/shaders. It doesn't look great yet. There are a few games around that do nicer modelling than [insert generic FPS here].
Re: (Score:2)
Re: (Score:3)
1) except the games industry is bigger than Hollywood by far
2) The department that provided the funding looks to be Commercialisation Australia [commercial...lia.gov.au], which seems to be basically a government-backed VC-like operation - I can only imagine that exists because of the paltry VC scene in Australia.
Re: (Score:2)
Don't be so hard on yourself over this one mistake.
I've seen much worse grammar from native English speakers here in the USA. [No citation needed if you read /. comments]
After all, there is a reason that political speeches in the USA are written to target an 8th-grade education. (previously covered on /. within the last year or two)
Re: (Score:2)
It could be point data, rendered through a subpixel renderer.
Instead of 3D voxels in the traditional sense, it would be dimensionless points in 3D space, with luminance, specularity, and fuzziness variables assigned. After that it is just lighting and pixel shading, which would be embarrassingly parallel. You would render the scene as a 2D canvas that fills the whole viewport.
LOD would be based on the available viewport resolution.
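A stripped-down sketch of what that might look like (purely illustrative; real splatting systems blend point footprints rather than writing single pixels): perspective-project every point, keep the nearest one per pixel, and output a 2D canvas.

```python
# Bare-bones point splatting: perspective-project each point, keep the
# nearest per pixel (a z-buffer), output a 2D canvas. Illustrative only.
W, H, FOCAL = 32, 16, 20.0

def splat(points):
    depth = [[float("inf")] * W for _ in range(H)]
    canvas = [[" "] * W for _ in range(H)]
    for x, y, z, shade in points:
        if z <= 0:
            continue                      # behind the camera
        sx = int(W / 2 + FOCAL * x / z)   # perspective divide
        sy = int(H / 2 - FOCAL * y / z)
        if 0 <= sx < W and 0 <= sy < H and z < depth[sy][sx]:
            depth[sy][sx] = z             # nearest point wins the pixel
            canvas[sy][sx] = shade
    return "\n".join("".join(row) for row in canvas)

# A near '#' streak of points occluding a far '.' streak:
pts = [(0.5 * i - 4, 0.2 * i - 1.5, 10.0, ".") for i in range(16)]
pts += [(0.5 * i - 4, 1.5 - 0.2 * i, 5.0, "#") for i in range(16)]
print(splat(pts))
```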
Re: (Score:2)
Nvidia and AMD are currently looking at real-time ray tracing, because that's where Intel is going and they have to compete. There is also CUDA and OpenCL, and the next process step for GPUs is almost half the current one (meaning performance/cost roughly doubles). Anandtech says AMD still promises a 22nm card this year. GPUs are no longer toys; they are a form factor for supercomputers.
I don't think for example caustics would work very well with voxels, but a hybrid solution would perhaps be ideal, where you could hav
Re: (Score:2)
Re: (Score:2)
Remember, they only need to search the point cloud once for each pixel on the screen. The volume of points in the cloud has much less effect on their performance than the number of pixels on the screen. So they can probably run a 640x480 output on a fairly low-end machine. Yeah, running a three-monitor 1080p setup would probably require some amazing hardware, but for us mere mortals, I don't think that's quite as much of a concern.
AMD would be nice, but honestly, Google would be the best to have a hack at it
Re: (Score:2)
Correct, but they actually use ray-casting, not ray-tracing. Ray-casting involves only a single ray collision test and sample per pixel, and then you need an alternative means to compute lighting, such as a deferred shading and lighting compositor. Full ray-tracing doesn't scale well for real-time graphics on shared-memory systems due to the memory access patterns involved. Intel has some demos with simple car models doing full recursive ray-tracing, but it only runs at a few FPS even on 64 cores.
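The difference in code terms: after the single cast returns a hit, you shade it directly, with no secondary rays. An illustrative diffuse-only sketch with one point light (which also matches the simple lighting people have noted in the demo):

```python
# Illustrative: shade a single ray-cast hit with one point light (Lambert
# diffuse), no recursion. This is the "cast once, then shade" model, as
# opposed to recursive ray tracing that spawns reflection/shadow rays.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shade_hit(hit_pos, hit_normal, albedo, light_pos, light_intensity=1.0):
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, hit_pos)))
    n_dot_l = max(0.0, sum(n * d for n, d in zip(hit_normal, to_light)))
    return tuple(a * light_intensity * n_dot_l for a in albedo)

# Hit on a floor (normal straight up), light directly overhead:
color = shade_hit(hit_pos=(0, 0, 5), hit_normal=(0, 1, 0),
                  albedo=(0.8, 0.6, 0.4), light_pos=(0, 10, 5))
print(tuple(round(c, 2) for c in color))   # fully lit: (0.8, 0.6, 0.4)
```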