Carmack's QuakeCon Keynote Detailed 309
TheRaindog writes "In addition to announcing the Quake III source code's impending release, John Carmack's QuakeCon 2005 keynote also covered the programmer's thoughts on Microsoft and Sony's next-gen consoles, physics acceleration in games, and what he'd like to see from new graphics hardware."
Procedural textures (Score:5, Interesting)
Of course procedural textures can never replace hand-painted detail, but layering some infinite-resolution noise detail onto a finite-sized bitmap texture really brings materials to life.
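The layering idea above can be sketched with a tiny value-noise function. This is a minimal illustration, not any particular engine's implementation; the function names (`value_noise`, `smooth_noise`, `detail_texel`) and constants are all hypothetical. Because the noise is a pure function of (u, v), it can be sampled at any resolution, which is what lets it add detail beyond the base bitmap's texel grid.

```python
import math

def value_noise(x, y, seed=0):
    # Deterministic pseudo-random value in [0, 1) for an integer lattice point.
    n = x * 374761393 + y * 668265263 + seed * 1442695040888963407
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def smooth_noise(u, v, seed=0):
    # Bilinearly interpolated value noise -- samplable at any (u, v), so it
    # has effectively infinite resolution.
    x0, y0 = math.floor(u), math.floor(v)
    fx, fy = u - x0, v - y0
    a = value_noise(x0, y0, seed)
    b = value_noise(x0 + 1, y0, seed)
    c = value_noise(x0, y0 + 1, seed)
    d = value_noise(x0 + 1, y0 + 1, seed)
    top = a + (b - a) * fx
    bot = c + (d - c) * fx
    return top + (bot - top) * fy

def detail_texel(base_texel, u, v, detail_scale=16.0, detail_amount=0.15):
    # Layer noise detail on top of a finite-sized bitmap texel, clamped to [0, 1].
    n = smooth_noise(u * detail_scale, v * detail_scale)
    return min(1.0, max(0.0, base_texel + (n - 0.5) * detail_amount))
```

The base texture still carries the hand-painted look; the noise only perturbs it, so up close you see plausible grain instead of bilinear blur.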
physics is here to stay (Score:4, Interesting)
Re:Procedural textures (Score:5, Interesting)
Maybe during the next-gen consoles' lifespan we'll start seeing more procedural stuff. It'll become more important as we start pushing more polys and going down the High Definition route, I think.
(I'm more interested in offline procedural content generation, personally - automatically generated cities are the way of the future!)
Re:Procedural textures (Score:5, Interesting)
Renderman water shader [edit.ne.jp]
There's plenty. Try watching any film with decent CG effects; it'll be full of procedural shaders that are fairly realistic.
See, the thing about shaders is they can be as realistic as you're willing to let them get. The problem is how long they take to calculate - realtime games use more shortcuts, hacks and estimates to get something that looks "good enough". Not just in shaders, either: that's why we don't do realtime raytraced games, and instead use lightmaps or whatever to approximate that lighting.
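The lightmap trade-off mentioned above is easy to sketch: pay the lighting cost once offline, then do a table lookup per frame. This is a toy illustration under assumed simplifications (inverse-square-ish falloff, no occlusion); `bake_lightmap` and `shade` are hypothetical names, not any real engine's API.

```python
def bake_lightmap(width, height, lights):
    # Precompute per-texel light intensity once, offline. Each light is a
    # (x, y, power) tuple; falloff is a cheap 1/(1+d^2) with no occlusion.
    def intensity(x, y):
        total = 0.0
        for lx, ly, power in lights:
            d2 = (x - lx) ** 2 + (y - ly) ** 2
            total += power / (1.0 + d2)
        return min(1.0, total)
    return [[intensity(x, y) for x in range(width)] for y in range(height)]

def shade(lightmap, x, y, albedo):
    # Runtime cost per texel: one table lookup and one multiply.
    return albedo * lightmap[y][x]
```

However crude the baked approximation is, the runtime cost is constant and tiny, which is the whole point versus tracing rays every frame.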
Re:XBOX 360 PowerPC != PowerPC G4, G5 (Score:1, Interesting)
He said that Apple *is* overpriced, underpowered hardware, which is true. He didn't say it will stay that way forever.
Re:Procedural textures (Score:3, Interesting)
If you think that, play Daggerfall. Play it anyway actually, it's a great game - but it still shows that generated cities are a really bad idea.
Re:Procedural textures (Score:5, Interesting)
I should probably explain further. My approach would be to generate the basic street layouts, buildings, and maybe even internal floor & room layouts procedurally, say in a Maya/Max plugin. This would act as the basis for artists/designers to then tweak and adjust to produce something good, hopefully in a fraction of the time.
Using control maps (for population density, affluence, terrain, etc.) it should be possible to have fairly fine control over how the city is generated. Add to that a decent set of rules to govern the generation and a big stock library of textures/shaders to give a nice-looking generic output, and that should give a decent starting point.
I know some of the guys who worked on GTA3/VC/SA, and one of their big problems was generating the sheer amount of content to make these large play areas. Starting with a pre-populated one and using it as a base might let them concentrate on making it good...
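The control-map approach described above might look something like this in miniature: a density map drives how likely each block is to hold a building and how tall it gets, and a fixed seed keeps the output reproducible so artists can tweak it. All names and constants here are hypothetical; a real tool would of course emit geometry into Maya/Max rather than a grid of floor counts.

```python
def generate_city(width, height, density_map, seed=1):
    # Generate a grid city: building heights are driven by a population-
    # density control map (values in [0, 1]); 0 floors means an empty lot.
    def rand(x, y):
        # Deterministic per-cell hash so the same seed regenerates the
        # same city -- essential if artists are going to tweak the result.
        n = (x * 73856093 ^ y * 19349663 ^ seed * 83492791) & 0xFFFFFFFF
        return n / 2**32

    city = []
    for y in range(height):
        row = []
        for x in range(width):
            d = density_map[y][x]
            if rand(x, y) < d:                        # denser -> more buildings
                floors = 1 + int(d * 20 * rand(x + 1, y))  # denser -> taller
                row.append(floors)
            else:
                row.append(0)                         # empty lot / park
        city.append(row)
    return city
```

Affluence, terrain, and street layout would just be more control maps feeding more rules of the same shape; the artist edits the maps, not the thousands of buildings.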
let me make myself clearer, once and for all. (Score:2, Interesting)
Re:Dual-core CPU not that easy to take advantage o (Score:5, Interesting)
Quite where we add this second thread is difficult. Everything must happen in the same order for things like collision detection to function correctly.
Not necessarily. One big problem with games is that the typical order (beginscene/render/endscene/present) is implemented with a busy-wait loop in the present part. This is the part where all data has been sent to the graphics card and the driver waits in a loop until it gets a 'scene completed' message from the card. This is why games always run at 100% CPU.
Games that don't use threading well (only threading for network/input/sound) put everything in the loop you describe. Draw a scene, the driver waits for an 'OK', then you update the player, update the AI characters, do collision, calculate all new positions, and start drawing. Because the drawing takes e.g. 10 ms per frame at 100 FPS, developers limit the AI/collision part to something like 1 ms, or else the frame rate starts dropping. So the real AI would be limited to, say, 10% of the CPU time.
For example the 'move AI' part could be a bunch of threads, calculating new positions based on direction, collision etc.
Right now, games like DOOM 3 typically only display a few NPCs at the same time because of this timing problem. If the move-AI thread can just keep running on the second CPU while the first CPU waits inside the driver, a game could support a few hundred enemies on-screen.
Strategy games with complicated pathfinding and hundreds of units on-screen, like Warcraft III or Age of Mythology, would benefit enormously if programmed for multithreading.
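The overlap being described - AI advancing while the render thread is stalled in present - can be sketched as below. This is a structural illustration only (in CPython the GIL prevents true parallel execution; in a C++ engine these would be real OS threads on real cores), and `ai_step`, `frame`, and `present` are hypothetical stand-ins.

```python
import threading

def ai_step(positions, velocities, dt):
    # Stand-in for the per-tick AI/collision/movement work.
    for i in range(len(positions)):
        positions[i] += velocities[i] * dt

def frame(positions, velocities, dt, present):
    # One frame: kick the AI work onto a second thread, then 'present'
    # (the part where the driver would busy-wait on the GPU), then join
    # so the next frame reads a consistent set of positions.
    ai = threading.Thread(target=ai_step, args=(positions, velocities, dt))
    ai.start()       # AI runs while this thread is stalled in present()
    present()        # stand-in for the driver's busy-wait loop
    ai.join()        # synchronize before the renderer reads positions again
```

The join at the end preserves the "everything in the same order" requirement: the renderer never observes a half-updated world, it just stops wasting its wait time.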
Half Life 2 (Score:2, Interesting)
Interesting (Score:2, Interesting)
John Carmack's past pleas for graphics hardware changes have led to a number of developments, including the incorporation of floating-point color formats into DirectX 9-class graphics chips. We'll have to watch and see how the graphics companies address these two problems over the next couple of years.
These are topics that the whole graphics industry acknowledges. While he is wise to focus on these as core issues, I'm not sure if I would say that the industry responded to his plea when these things get addressed.
One other thing I'm a little confused by is why game developers always seem to think multithreading games will be nearly impossible to take advantage of in the near term. I admit it won't be a piece of cake and there will be evil bugs, but I don't see it as a big mystery. It will be more work, but with some thought it seems it can be handled fairly nicely in first-generation games on the next-gen consoles.
If I were to tackle the problem, I would not break the rendering itself into separate threads, since that is just going to be trouble; instead I would reduce the rendering thread to truly doing only rendering, something some developers seem to get confused about. I would make one or more physics threads, one or more AI threads, a sound thread, a rendering thread, a resource-management thread, and perhaps a culling thread that assists the VPU with geometry occlusion when the CPU is ahead of the VPU. I'd also put in a semaphore-queue mechanism so some of these could get a frame or two ahead without syncing.
That said I'm not a game developer so perhaps I'm just missing something. If that's the case please enlighten me.
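The semaphore-queue mechanism proposed above is essentially a bounded producer/consumer pipeline, which can be sketched like this. The subsystem split and all names (`physics_thread`, `render_thread`, `FRAMES`) are hypothetical; the point is only that a FIFO queue with `maxsize=2` lets the producer run up to two frames ahead without any explicit per-frame locking.

```python
import threading, queue

FRAMES = 5

def physics_thread(out_q):
    # Produce simulation states; put() blocks once the queue is full, so
    # physics can run at most two frames ahead of rendering.
    state = 0
    for _ in range(FRAMES):
        state += 1                       # stand-in for a physics step
        out_q.put(state)
    out_q.put(None)                      # end-of-stream sentinel

def render_thread(in_q, rendered):
    # Consume states strictly in order -- order is guaranteed by the FIFO.
    while True:
        state = in_q.get()
        if state is None:
            break
        rendered.append(state)           # stand-in for drawing the frame

def run_pipeline():
    q = queue.Queue(maxsize=2)           # the 'semaphore queue'
    rendered = []
    t1 = threading.Thread(target=physics_thread, args=(q,))
    t2 = threading.Thread(target=render_thread, args=(q, rendered))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return rendered
```

The same pattern would connect any pair of the proposed threads (AI to physics, physics to render); the queue depth is the knob for how far ahead each stage may run.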
Multi-Core game programming (Score:1, Interesting)
Start with online multi-user games. Instead of a remote server hosting a game that 5 different users around the world connect to, all 5 instances of the client software run on the same machine as the server. Now take the final step and convert 4 of those clients from user-controlled to computer-controlled AI.
The beauty of this idea is that you're programming the single-user and multi-user game at the same time.
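The idea above amounts to putting human input and bot AI behind the same client interface, so the server can't tell them apart. A minimal sketch, with entirely hypothetical class and method names:

```python
class Client:
    # Common interface for anything that controls a player slot.
    def next_command(self, world):
        raise NotImplementedError

class HumanClient(Client):
    def __init__(self, input_queue):
        self.input_queue = input_queue   # fed by the local input system
    def next_command(self, world):
        return self.input_queue.pop(0) if self.input_queue else "idle"

class BotClient(Client):
    def next_command(self, world):
        return "chase"                   # stand-in for real AI decisions

def server_tick(clients, world):
    # The server treats human and AI clients identically.
    return [c.next_command(world) for c in clients]
```

Swapping a `HumanClient` for a `BotClient` (or moving one to a remote socket) changes nothing on the server side, which is exactly the single-player/multiplayer unification being claimed.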
Re:physics is here to stay (Score:2, Interesting)
Re:physics is here to stay (Score:5, Interesting)
Re:Here we go again.... (Score:3, Interesting)
You spent over a thousand pounds for a system that would only run the game at 800x600? What did you do, crank AA and AF to max and set the detail level to "Ultra"? I played on Ultra with AA and AF set to a middle setting and I got a solid 25-30FPS on a machine that was a year old. Either you got ripped off by a retailer, or you don't know jack shit about what parts to buy.
Your problem is you have your OWN head so far up your ass that you aren't able to read/hear what Carmack himself is saying. If you did, you'd know id is about the engines, and they let OTHER companies take those engines and make good games. Quake 4 is being done by Raven. Raven did amazing things with the Quake 3 engine in the Jedi Knight games; they're going to be the ones to take the new engine and turn it into an actual game.
Thanks for trolling.
What about the memory issues of the new consoles? (Score:2, Interesting)
I doubt that the next generation of games will look like movies, except for some graphics demos like the Unreal Engine 3 one.
Here's an old quote from Tim Sweeney:
"Off-the-shelf 32-bit Windows can only tractably access 2GB of user RAM per process. UT2003, which shipped in 2002, installed more than 2GB of data for the game, though at that time it was never all loaded into memory at once. It doesn't exactly take a leap of faith to see scenarios in 2005-2006 where a single game level or visible scene will require >2GB RAM at full detail."
http://www.beyond3d.com/interviews/sweeneyue3/ [beyond3d.com]
So let's wait and see how XBOX 360 and PS3 will fare...
Re:Spent! (Score:3, Interesting)
Re:Procedural textures (Score:3, Interesting)
Re:Dual-core CPU not that easy to take advantage o (Score:2, Interesting)
1. Read viewable universe state
I think that would introduce all of the issues multiplayer games have with network lag right into the game engine. If the AI characters aren't all working from the same data set (because it's changing while they're "thinking"), you're bound to have some pretty weird and difficult-to-debug timing issues. Even simple single-threaded code has a lot of wacky and unpredictable timing behavior on a PC, compared to actual real-time systems or purpose-built gaming consoles. This would make those timing issues much, much worse.
Carmack is a very smart guy. He addressed this sort of issue publicly in his finger journals (back before anyone said "blog") when he was trying to develop a version of Quake III that took advantage of multiple processors. Certain things in a game loop just have to be synchronized, and that causes bottlenecks that multiple processors can't help with.
If it were possible to write explicitly parallel code for dual-core CPUs, that could be a very different story. But it would also negate the advantage dual-core is supposed to provide, so the heck with it.
Re:Dual-core CPU not that easy to take advantage o (Score:1, Interesting)
This approach might work for some games, but it's definitely problematic for others. Consider that the accuracy of each AI's decisions about the world around it is going to fall as the number of AIs grows, both because the world will be changing more when there are more AIs, and because each AI will get fewer chances to refine_strategy() when there are many of them. That could lead to some truly freaky positive feedback loops in a game like Doom or GTA, where NPCs fight with each other. Everything might be hunky-dory up until a certain AI count threshold, at which point the NPCs can't target each other accurately anymore. If more NPCs are being generated automatically, you're headed for a meltdown.
And since PCs don't all have equal amounts of CPU horsepower, memory bandwidth, etc., the meltdown threshold would vary from one system to another. The apparent intelligence of NPCs might also vary with their quantity and with the system's speed. It seems unlikely that any particular game's design would call for such behavior.
It would probably be more efficient, more deterministic, and possibly simpler to debug, to use just N threads, where N is the number of CPU cores available - in modern systems, 1 or 2. Each thread grabs the next unprocessed AI from a list, processes it (calls AICharacter.Think() or whatever), marks the AI as processed, and stores the result in a separate copy of the world data. Once all AIs are processed, you make the new world data copy the current copy and go on to collision detection or what-have-you. Nothing would be wasted on context switches. The code would work fine on any number of CPU cores, but be faster on more. Only a few pretty trivial deadlocks/race-conditions to worry about.
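That scheme can be sketched as below: workers pull the next unprocessed agent under a lock, read only the frozen old-world snapshot, and write into a fresh copy, so the result is identical regardless of thread count or scheduling. This is a structural sketch only (`think` stands in for the AICharacter.Think() mentioned above, and in CPython the GIL means the threads illustrate the design rather than real parallelism).

```python
import threading

def think(agent, world):
    # Stand-in for AICharacter.Think(): read the frozen world snapshot and
    # return this agent's new state (here: step toward zero).
    x = world[agent]
    return x - 1 if x > 0 else x

def update_all_agents(world, num_threads=2):
    # Process every agent against an immutable snapshot of the world;
    # workers grab the next unprocessed agent and write into a fresh copy.
    new_world = [None] * len(world)
    next_agent = [0]
    lock = threading.Lock()

    def worker():
        while True:
            with lock:                       # grab the next unprocessed AI
                i = next_agent[0]
                if i >= len(world):
                    return
                next_agent[0] += 1
            new_world[i] = think(i, world)   # reads only the old snapshot

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return new_world                         # swap in as the current world
```

Because every agent reads the same snapshot and each output slot is written exactly once, the result is deterministic for any N, which is precisely the debuggability advantage being argued for.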