Carmack Expounds on Doom III
Rainier Wolfecastle writes: "Non-high-end-comp-owning geeks rejoice! GameSpot is reporting that John Carmack has confirmed that Doom III is Xbox-bound. Carmack said that id is totally committed to bringing the game to Microsoft's console with its visual splendor intact. Best of all, the game could be available on the Xbox as soon as May next year." And Warrior-GS writes: "John Carmack gave a two-hour presentation about Doom 3 and engine technology. GameSpy reports on the presentations and analyzes Carmack's comments and how they apply to the future of gaming. There is also a look at the demo of Doom III."
API? (Score:5, Interesting)
Rendering - two generations from done? (Score:4, Interesting)
There will still be scaling issues, where the world is big and a lot of it is contributing to the image onscreen. Level of detail processing can help, but there are situations where you have to examine an excessive amount of geometry. One of the worst cases is a detailed city street, where you can see many blocks ahead and there are lots of trees, signs and whatnot that can obscure surfaces further away. Doing that well requires grinding through a lot of geometry. An insane amount of CPU time went into those long views down streets in Toy Story. All those houses have full detail. Game designers currently avoid such situations. Most driving games are laid out so that you never look down a really long street. And fog is your friend. It's still going to be a while before we have architectural-flythrough quality for long views in urban areas in real time.
Then again, a background process rendering billboards of distant street sections...
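That billboard idea can be sketched in a few lines: render a distant street block to a flat image once, and only re-render it when the camera has moved far enough that the flat image would visibly disagree with the real geometry. A minimal sketch in Python; the class, the angle threshold, and the update rule are all illustrative, not from any shipped engine.

```python
import math

# Hypothetical impostor: a pre-rendered billboard of a distant street block.
# It is re-captured only when the view direction toward the block has drifted
# past an angular tolerance, so most frames pay for one textured quad.
ANGLE_TOLERANCE = math.radians(5.0)  # illustrative threshold

def _direction(src, dst):
    """Unit vector pointing from src to dst."""
    v = [d - s for s, d in zip(src, dst)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

class Impostor:
    def __init__(self, block_center):
        self.block_center = block_center  # world position of the street block
        self.captured_from = None         # camera position at last capture

    def needs_rerender(self, camera_pos):
        if self.captured_from is None:
            return True
        old = _direction(self.captured_from, self.block_center)
        new = _direction(camera_pos, self.block_center)
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(old, new))))
        return math.acos(dot) > ANGLE_TOLERANCE

    def capture(self, camera_pos):
        self.captured_from = camera_pos   # billboard re-rendered from here

imp = Impostor((0.0, 0.0, 100.0))
print(imp.needs_rerender((0.0, 0.0, 0.0)))   # True: never captured
imp.capture((0.0, 0.0, 0.0))
print(imp.needs_rerender((1.0, 0.0, 0.0)))   # False: ~0.6 degree drift
```

The payoff is exactly the city-street case Carmack describes: blocks far down the street change apparent angle slowly, so their billboards stay valid for many frames while the CPU grinds only the near geometry.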
Re:Rendering - two generations from done? (Score:1, Interesting)
It's disappointing to see an industry leader make such short-sighted statements. Graphics technology will not be 'done' until we can't visually tell the difference between real-life and a game.
In real life, I can cut through an orange with a knife at any angle and see the internal structure of the orange in the cross section I created. In real life, I can use a magnifying glass to view an object/texture at 10x its normal size, with no loss of clarity, no pixel artifacts.
Carmack is talking about two more generations of vertex/triangle/bit-map based graphics cards. What about raytracing? What about voxels?
The 'ideal' graphics card would be able to render a world comprised of billions of voxels/atoms, each with their own properties and physics, and with raytraced photons creating the light. Suggesting that we'll be 'done' in 2 generations is just silly. It would require a totally new kind of PC architecture just to deal with the memory bandwidth requirements of an 'ideal' graphics processor.
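For a sense of why raytracing is a bandwidth and compute problem rather than a conceptual one: the core primitive is tiny. Here is a standard ray-sphere intersection test (textbook math, not any particular engine's code); the catch is that a raytracer runs something like this for every pixel, against every object, for every bounce.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest hit, or None on a miss.
    `direction` must be unit length, so the quadratic's a-term is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None          # hit behind the origin doesn't count

# One ray fired down the z-axis at a unit sphere centered at z = 5:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

Multiply that by a megapixel of rays, a few bounces each, and a scene of millions of primitives (or billions of voxels) and you get the memory-bandwidth wall the poster is pointing at.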
different backends useless then? (Score:2, Interesting)
Xbox bound... (Score:3, Interesting)
And if the current Xbox is the absolute minimum, that gives the engine a lot of room to move, given the difference between Quake III at low detail, 640x480, and at high detail, 1600x1200.
In other words, having a low bottom end does not necessarily hold back having an insanely good top end.
Re:They're dumb at the same time. (Score:3, Interesting)
Yeah... nothing like stereotypes or popular thought to cloud hard facts, eh?
In the Sound (WAV) Compression Test on compression.ca [compression.ca], the GZip 1.2.4 + TAR combo comes in at 7.29b/B (91%), and bzip2 0.9.5d + TAR at 7.01b/B (87%). RAR, on the other hand, comes in at 5.65b/B (70%), and Monkey's Audio 3.96 rocks in at 5.01b/B (62%).
So my 10MB of WAV takes up 9.1MB after being GZipped, and only 7.0MB after compressing it with that odd RAR archiver that is supposedly "still not as good, or just as good."
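The arithmetic behind those figures is just the quoted bits-per-byte rate divided by 8 (bits per byte), times the original size. A quick check using the rates cited from the compression.ca test above:

```python
# Compressed size of a 10 MB WAV at the bits-per-byte rates quoted above
# (rates are from the compression.ca Sound (WAV) Compression Test).
ORIGINAL_MB = 10.0

rates = {                         # output bits per input byte
    "GZip 1.2.4 + TAR": 7.29,
    "bzip2 0.9.5d + TAR": 7.01,
    "RAR": 5.65,
    "Monkey's Audio 3.96": 5.01,
}

for name, bits_per_byte in rates.items():
    fraction = bits_per_byte / 8.0           # fraction of original size kept
    print(f"{name}: {ORIGINAL_MB * fraction:.2f} MB")
```

That reproduces the poster's numbers: roughly 9.1 MB for gzip and 7.1 MB for RAR, with the lossless-audio specialist Monkey's Audio closer to 6.3 MB.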
GZip and bzip are *excellent* compression tools. But they are not - and have not been for a long time - the kings of the hill.