Carmack's QuakeCon Keynote Detailed

TheRaindog writes "In addition to announcing the Quake III source code's impending release, John Carmack's QuakeCon 2005 keynote also covered the programmer's thoughts on Microsoft and Sony's next-gen consoles, physics acceleration in games, and what he'd like to see from new graphics hardware."
  • Procedural textures (Score:5, Interesting)

    by mnemonic_ ( 164550 ) <jamec@umich.edu> on Tuesday August 16, 2005 @05:54AM (#13328824) Homepage Journal
    I was a bit taken aback by Carmack's opposition to procedural textures. No, they can't do everything, but they can be real timesavers when you need to add some overall realistic-looking detail. Things like dirt, "roughness" and stains can be done effectively using Brownian noise and the like, and you get the infinite-resolution, low-memory benefits of procedurally generated data. It's efficient and looks good, as I found when I used it to create realistic terrain [umich.edu].

    Of course procedural textures can never replace hand-painted detail, but layering some infinite-resolution noise detail onto a finite-sized bitmap texture really brings materials to life.
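    For a rough idea of what that layering looks like in code, here's a minimal sketch: a hash-based value-noise basis summed into fractional Brownian motion. Illustrative C++ only, not anyone's production shader; all names are made up for the example.

        // Minimal sketch of fractional Brownian motion ("Brownian noise")
        // layered detail over a hash-based value-noise basis. Illustrative
        // only -- shows the infinite-resolution, low-memory idea.
        #include <cmath>
        #include <cstdint>
        #include <cstdio>

        // Hash a 2D integer lattice point to a pseudo-random value in [0, 1).
        static float hash2(int x, int y) {
            uint32_t h = static_cast<uint32_t>(x) * 374761393u
                       + static_cast<uint32_t>(y) * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return static_cast<float>(h & 0xFFFFFF) / 16777216.0f;
        }

        // Smoothly interpolated value noise at an arbitrary point.
        static float valueNoise(float x, float y) {
            int xi = static_cast<int>(std::floor(x));
            int yi = static_cast<int>(std::floor(y));
            float tx = x - xi, ty = y - yi;
            tx = tx * tx * (3.0f - 2.0f * tx);   // smoothstep fade
            ty = ty * ty * (3.0f - 2.0f * ty);
            float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
            float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
            float top = a + (b - a) * tx, bot = c + (d - c) * tx;
            return top + (bot - top) * ty;
        }

        // Sum several octaves: each doubles the frequency and halves the
        // amplitude, which gives dirt/roughness its self-similar look.
        static float fbm(float x, float y, int octaves) {
            float sum = 0.0f, amp = 0.5f, freq = 1.0f;
            for (int i = 0; i < octaves; ++i) {
                sum += amp * valueNoise(x * freq, y * freq);
                freq *= 2.0f;
                amp *= 0.5f;
            }
            return sum;   // roughly in [0, 1)
        }

        int main() {
            // Sample the same material at two very different scales -- no
            // extra texture memory needed, which is the appeal over bitmaps.
            std::printf("coarse: %f  fine: %f\n",
                        fbm(1.5f, 2.5f, 6), fbm(150.0f, 250.0f, 6));
        }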
  • by canozmen ( 898239 ) on Tuesday August 16, 2005 @06:09AM (#13328860) Journal
    Although Mr. Carmack says physics in game engines isn't easily scalable for level of detail, there is ongoing research in this area producing good results. I remember a video from last year's SIGGRAPH that had hundreds of plastic chairs falling from the sky and bouncing realistically. The important part was that it employed a level-of-detail hierarchy for interacting parts (i.e. an object doesn't get much physical detail if you don't touch it), but it will be some time before we see such techniques in real-time games.
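    A toy sketch of that LOD idea (illustrative only; this is just the gist of such a hierarchy, not the actual SIGGRAPH technique, and the tiers and thresholds are invented):

        // Objects nobody interacts with get cheap physics; only objects
        // under interaction get the detailed treatment.
        #include <cstdio>
        #include <vector>

        enum class PhysicsLOD { Sleeping, CoarseBox, FullRigidBody };

        struct Object {
            bool beingTouched = false;
            float distanceToPlayer = 100.0f;
        };

        // Pick a simulation tier per object, per frame.
        PhysicsLOD choosePhysicsLOD(const Object& o) {
            if (o.beingTouched) return PhysicsLOD::FullRigidBody; // full contact solve
            if (o.distanceToPlayer < 20.0f) return PhysicsLOD::CoarseBox; // cheap box proxy
            return PhysicsLOD::Sleeping;                          // no simulation at all
        }

        int main() {
            std::vector<Object> chairs(3);
            chairs[1].distanceToPlayer = 5.0f;
            chairs[2].beingTouched = true;
            for (size_t i = 0; i < chairs.size(); ++i)
                std::printf("chair %zu -> LOD %d\n", i,
                            static_cast<int>(choosePhysicsLOD(chairs[i])));
        }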
  • by MaestroSartori ( 146297 ) on Tuesday August 16, 2005 @06:10AM (#13328863) Homepage
    The argument, as far as I know, is generally that it's overkill for the current generation of hardware. Rather than procedural noise generated in real time, a few pregenerated detail noise textures can do the job with a fraction of the GPU time. It's pretty hard to tell the difference with a decent artist doing the noise maps, really.

    Maybe during the next-gen consoles' lifespan we'll start seeing more procedural stuff. It'll become more important as we start pushing more polys and going down the High Definition route, I think.

    (I'm more interested in offline procedural content generation, personally - automatically generated cities, it's the way of the future! :D)
  • by MaestroSartori ( 146297 ) on Tuesday August 16, 2005 @06:32AM (#13328926) Homepage
    Nice shader example that I quite like:

    Renderman water shader [edit.ne.jp]

    There's plenty. Try watching any film with decent CG effects, it'll be full of procedural shaders which are fairly realistic.

    See, the thing about shaders is they can be as realistic as you're willing to let them get. The problem is how long it takes to calculate them - realtime games use more shortcuts, hacks and estimates to get something that looks "good enough". Not just in shaders, either. That's why we don't do realtime raytraced games, instead we use lightmaps or whatever to approximate them.
  • by should_be_linear ( 779431 ) on Tuesday August 16, 2005 @06:34AM (#13328929)
    Finally, proof that Apple is overpriced, underpowered hardware. Why does Carmack hate Apple so much? ... BTW, if Apple loved PPC so much, why did they announce the switch to Pentium M? :)

    He said that Apple *is* overpriced, underpowered hardware, which is true. He didn't say it will stay that way forever.
  • by m50d ( 797211 ) on Tuesday August 16, 2005 @06:49AM (#13328964) Homepage Journal
    automatically generated cities, it's the way of the future!

    If you think that, play Daggerfall. Play it anyway actually, it's a great game - but it still shows that generated cities are a really bad idea.

  • by MaestroSartori ( 146297 ) on Tuesday August 16, 2005 @06:54AM (#13328979) Homepage
    Oh, that game :D

    I should probably explain further. My approach would be to generate the basic street layouts, buildings, and maybe even internal floor & room layouts procedurally, say in a Maya/Max plugin. This would act as the basis for artists/designers to then tweak and adjust to produce something good, hopefully in a fraction of the time.

    Using control maps (for population density, affluence, terrain, etc.) it should be possible to have fairly fine control over how the city is generated. Add to that a decent set of rules to govern the generation, and a big stock library of textures/shaders to give a nice-looking generic output, and that should make a decent starting point (see the sketch below).

    I know some of the guys who worked on GTA3/VC/SA, and one of their big problems was generating the sheer amount of content to make these large play areas. Starting with a pre-populated one and using it as a base might let them concentrate on making it good...
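    As a rough idea of what that control-map approach could look like in code (hypothetical names throughout; this is not from any real Maya/Max plugin or the GTA toolchain, and the maps here are stand-in functions rather than painted textures):

        #include <cstdio>

        struct ControlMaps {
            // Each returns a value in [0, 1] for a grid cell; in a real tool
            // these would be painted maps sampled at the cell's position.
            float density(int x, int y) const   { return (x % 7) / 7.0f; }  // stand-in data
            float affluence(int x, int y) const { return (y % 5) / 5.0f; }  // stand-in data
        };

        struct Building { int floors; const char* style; };

        // Rule set: density drives height, affluence drives the look.
        Building generateCell(const ControlMaps& maps, int x, int y) {
            float d = maps.density(x, y), a = maps.affluence(x, y);
            Building b;
            b.floors = 1 + static_cast<int>(d * 30);
            b.style  = (a > 0.5f) ? "glass_office" : "brick_tenement";
            return b;
        }

        int main() {
            ControlMaps maps;
            for (int y = 0; y < 3; ++y)
                for (int x = 0; x < 3; ++x) {
                    Building b = generateCell(maps, x, y);
                    std::printf("cell %d,%d: %d floors, %s\n", x, y, b.floors, b.style);
                }
        }

    The output of something like this would then be the artists' raw material rather than the finished level.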
  • by domipheus ( 751857 ) * on Tuesday August 16, 2005 @07:03AM (#13329006)
    Carmack was commenting on hardware vendors believing that more cores, more CPUs, more hardware is good. But for the DEVELOPMENT OF A GAME in a GOOD AND REASONABLE TIME using a SIMPLE, ELEGANT, AND FAST METHODOLOGY, adding more cores and more CPUs and more accelerators introduces more places where bugs and glitches can occur, and that's only after you figure out a nice design, which will take longer to do and therefore cost more. It complicates things that shouldn't need to be complicated. Not all companies want to introduce 3rd-party code into their games, for various reasons, and you should not assume that everyone wants to.
  • by robnauta ( 716284 ) on Tuesday August 16, 2005 @07:08AM (#13329021)
    This is considerably more difficult than one would think. Games typically have to perform tasks in a particular order, for example (extremely simplified): get inputs, move player, move AI players, move other objects, check for collisions, update parameters, display the next frame, loop.

    Quite where we add this 2nd thread is difficult. Everything must happen in the same order for things like collision detection to function correctly.

    Not necessarily. One big problem with games is that the typical order (beginscene/render/endscene/present) is implemented with a busy-wait loop in the present part. This is the part where all data has been sent to the graphics card and the driver waits in a loop until it gets a 'scene completed' message from the card. This is why games always run at 100% CPU.

    Games that don't use threading well (only threading for network/input/sound) put stuff in the loop you describe. Draw a scene, the driver waits for an 'OK', then you update the player, update the AI characters, do collision, calculate all new positions and start drawing. Because the drawing takes e.g. 10 ms per frame at 100 FPS, developers limit the AI/collision part to run in something like 1 ms, or else the frame rate starts dropping. So the real AI would be limited to, say, 10% of the CPU time.

    For example the 'move AI' part could be a bunch of threads, calculating new positions based on direction, collision etc.

    Right now games like DOOM3 typically only display a few NPCs at the same time because of this timing problem. If the move-AI thread can just keep running on the second CPU while the first CPU waits within the driver, a game could support a few hundred enemies on-screen.

    Strategy games with complicated pathfinding and hundreds of units on-screen, like Warcraft 3 or Age of Mythology, would profit enormously if programmed for multithreading.
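    A minimal sketch of that two-thread arrangement, in modern C++ for brevity (illustrative only, not from any shipping engine; the "present" busy-wait is simulated with a sleep): the AI thread keeps updating NPC state on another core and publishes completed snapshots for the renderer.

        #include <atomic>
        #include <chrono>
        #include <cstdio>
        #include <mutex>
        #include <thread>
        #include <vector>

        struct WorldState { std::vector<float> npcPositions; };

        std::mutex g_stateMutex;
        WorldState g_sharedState;            // last completed AI update
        std::atomic<bool> g_running{true};

        void aiThread() {
            WorldState local;
            local.npcPositions.assign(100, 0.0f);
            while (g_running) {
                for (float& p : local.npcPositions) p += 0.1f; // stand-in for real AI work
                std::lock_guard<std::mutex> lock(g_stateMutex);
                g_sharedState = local;                         // publish completed state
            }
        }

        void renderThread() {
            for (int frame = 0; frame < 5; ++frame) {
                WorldState snapshot;
                {
                    std::lock_guard<std::mutex> lock(g_stateMutex);
                    snapshot = g_sharedState;                  // cheap copy for the sketch
                }
                if (!snapshot.npcPositions.empty())
                    std::printf("frame %d: npc0 at %f\n", frame, snapshot.npcPositions[0]);
                // Simulated driver busy-wait in present(); the AI thread keeps
                // making progress on the other core during this time.
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
            g_running = false;
        }

        int main() {
            std::thread ai(aiThread), render(renderThread);
            ai.join();
            render.join();
        }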

  • Half Life 2 (Score:2, Interesting)

    by erwincoumans ( 865613 ) on Tuesday August 16, 2005 @07:11AM (#13329025)
    I understand perfectly what you say, and I discussed exactly this topic with Carmack over email a few months ago. That doesn't mean I agree. First of all, graphics is already heavily parallel, but in that case the parallelism is handled purely inside the hardware. For physics this can be true too, but he argues about the fallback path. It's not only consoles that are choosing multi-core: even Intel and Apple are going in the multi-core direction, for good reason I think. I think it just frightens game developers to jump on the next-gen multi-core machines. Instead of moaning, it's better to just prove him wrong. Half Life 2 had some good physics; I think that is the way to go.
  • Interesting (Score:2, Interesting)

    by ribblem ( 886342 ) on Tuesday August 16, 2005 @07:25AM (#13329060) Homepage
    Carmack's other wish-list item was that some attention be paid to the problems with handling small batches of data on today's GPUs. He said the graphics companies' typical answers to these problems, large batches and instancing, don't make for great games.

    John Carmack's past pleas for graphics hardware changes have led to a number of developments, including the incorporation of floating-point color formats into DirectX 9-class graphics chips. We'll have to watch and see how the graphics companies address these two problems over the next couple of years.


    These are topics that the whole graphics industry acknowledges. While he is wise to focus on them as core issues, I'm not sure I would say the industry is responding to his pleas when these things get addressed.

    One other thing I'm a little confused by is why game developers always seem to think multithreading games will be nearly impossible to take advantage of in the near term. I admit it won't be a piece of cake and there will be evil bugs, but I don't see it as this big mystery. It will be more work, but it seems that with some thought it can be handled fairly nicely in first-generation games on the next-gen consoles. If I were to tackle the problem I would not break the rendering up into separate threads, since that is just going to be trouble, but I would reduce rendering to truly only doing the rendering, something some developers seem to get confused about. I would make one or more physics threads, one or more AI threads, a sound thread, a rendering thread, a resource management thread, and perhaps a culling thread that assisted the VPU with geometry occlusion if the CPU is ahead of the VPU. I'd also put in a semaphore queue mechanism so some of these could get a frame or two ahead without syncing.

    That said I'm not a game developer so perhaps I'm just missing something. If that's the case please enlighten me.
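    For what it's worth, a minimal sketch of that semaphore-queue mechanism, using C++20 counting semaphores for brevity (illustrative only; the frame data is a trivial stand-in): the simulation thread may run up to two frames ahead of the renderer before blocking.

        #include <cstdio>
        #include <semaphore>
        #include <thread>

        constexpr int kMaxAhead = 2;             // frames the sim may run ahead
        int g_frameData[kMaxAhead];              // trivial stand-in for frame state
        std::counting_semaphore<kMaxAhead> g_freeSlots{kMaxAhead};
        std::counting_semaphore<kMaxAhead> g_readyFrames{0};

        void simulationThread() {
            for (int frame = 0; frame < 8; ++frame) {
                g_freeSlots.acquire();           // block only when 2 frames ahead
                g_frameData[frame % kMaxAhead] = frame * frame; // "physics/AI" result
                g_readyFrames.release();         // hand the frame to the renderer
            }
        }

        void renderThread() {
            for (int frame = 0; frame < 8; ++frame) {
                g_readyFrames.acquire();         // wait for a completed sim frame
                std::printf("render frame %d: data=%d\n",
                            frame, g_frameData[frame % kMaxAhead]);
                g_freeSlots.release();           // slot can be reused by the sim
            }
        }

        int main() {
            std::thread sim(simulationThread), render(renderThread);
            sim.join();
            render.join();
        }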
  • by Anonymous Coward on Tuesday August 16, 2005 @07:27AM (#13329067)
    If you look at it from a different angle, a lot of games already have very good multi-threaded support.

    Start with online multi-user games. Instead of a remote server hosting a game with 5 different users around the world connected to it, run all 5 instances of the client software on the same machine as the server. Now take the final step and convert 4 of those clients from user-controlled to computer-controlled AI.

    The beauty of this idea is that you're programming the single-user and multi-user game at the same time.
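    A minimal sketch of this structure (illustrative C++; MessageQueue and the message format are invented for the example, with no real netcode): the server thread consumes messages without knowing whether a human or an AI client produced them.

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        struct MessageQueue {
            std::queue<std::string> q;
            std::mutex m;
            std::condition_variable cv;
            void push(std::string msg) {
                { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
                cv.notify_one();
            }
            std::string pop() {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return !q.empty(); });
                std::string msg = std::move(q.front());
                q.pop();
                return msg;
            }
        };

        int main() {
            MessageQueue toServer;
            const int kClients = 5;

            // Each "client" thread sends moves exactly as a remote player
            // would; four of them would be AI-driven, one driven by input.
            std::vector<std::thread> clients;
            for (int id = 0; id < kClients; ++id)
                clients.emplace_back([&, id] {
                    toServer.push("move from client " + std::to_string(id));
                });

            // The "server" is source-agnostic: it cannot tell an AI client
            // from the human one, which is the point.
            std::thread server([&] {
                for (int i = 0; i < kClients; ++i)
                    std::printf("server got: %s\n", toServer.pop().c_str());
            });

            for (auto& c : clients) c.join();
            server.join();
        }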
  • by vigilology ( 664683 ) on Tuesday August 16, 2005 @07:33AM (#13329081)
    I think 'they' should concentrate most of their resources on just making natural animal movements realistic. We have walls that look like walls. We have shadows that look like shadows. We have toppling barrels that look like toppling barrels. We don't have animals that move like natural animals.
  • by aarku ( 151823 ) on Tuesday August 16, 2005 @07:39AM (#13329102) Journal
    As a game developer, I'll say it'll come sooner than you think. Engines such as Unity [otee.dk] will support Ageia's PPU when it comes out, as Unity already uses the Novodex engine. From there it would take about 15 minutes to set up, tops. Expect some awesome things to come from little indie developers.
  • by zoomba ( 227393 ) <mfc131NO@SPAMgmail.com> on Tuesday August 16, 2005 @08:19AM (#13329238) Homepage
    Dude, have you played ANY id game since Quake 1? They're all tech demos for new engines; that's all they ever have been. Carmack has admitted on several occasions that he thinks story and such don't belong in games, that they're a waste of time and effort. id does NOT make games, they make the next level of graphics engines.

    You spent over a thousand pounds for a system that would only run the game at 800x600? What did you do, crank AA and AF to max and set the detail level to "Ultra"? I played on Ultra with AA and AF set to a middle setting and I got a solid 25-30FPS on a machine that was a year old. Either you got ripped off by a retailer, or you don't know jack shit about what parts to buy.

    Your problem is you have your OWN head so far up your ass you aren't able to read/hear what Carmack himself is saying. If you did, you'd know they're about the engines and they let OTHER companies take the engines to make good games. Quake 4 is being done by Raven. Raven did amazing things with the Quake 3 engine in the Jedi Knight games, they're going to be the ones to take the new engine and turn it into an actual game.

    Thanks for trolling.
  • by Purifier ( 782794 ) on Tuesday August 16, 2005 @08:46AM (#13329354) Homepage
    256 MB of RAM is definitely not enough for games that demand such extreme graphics and realism (did he say physics?)!

    I doubt that next-generation games will look like movies, except for some graphics demos like Unreal Engine 3.

    Here's an old quote from Tim Sweeney:

    "Off-the-shelf 32-bit Windows can only tractably access 2GB of user RAM per process. UT2003, which shipped in 2002, installed more than 2GB of data for the game, though at that time it was never all loaded into memory at once. It doesn't exactly take a leap of faith to see scenarios in 2005-2006 where a single game level or visible scene will require >2GB RAM at full detail."

    http://www.beyond3d.com/interviews/sweeneyue3/ [beyond3d.com]

    So let's wait and see how XBOX 360 and PS3 will fare...
  • Re:Spent! (Score:3, Interesting)

    by gl4ss ( 559668 ) on Tuesday August 16, 2005 @09:25AM (#13329551) Homepage Journal
    dunno. doom was one excellent game and quake was fun online.

  • by captaineo ( 87164 ) on Tuesday August 16, 2005 @10:36AM (#13330127)
    I think Carmack's opposition to procedural textures is for practical, not technical reasons. Developing good-looking shaders requires math and programming skills that most artists do not have. You'd have to tie up a software developer to write the shaders (and possibly an artist too, if the developer doesn't have a good artistic "eye"). So from a manpower perspective, it makes more sense just to have a bunch of artists cranking out texture maps in Photoshop.
  • by Anonymous Coward on Tuesday August 16, 2005 @10:37AM (#13330131)
    Thread 3-100: AI Threads
          1. Read viewable universe state

    I think that would introduce all of the issues multiplayer games have with network lag right into the game engine. If the AI characters aren't all working from the same data set (because it's changing while they're "thinking"), you're bound to have some pretty weird and difficult-to-debug timing issues. Even simple single-threaded code has a lot of wacky and unpredictable timing behavior on a PC, compared to actual real-time systems or purpose-built gaming consoles. This would make those timing issues much, much worse.

    Carmack is a very smart guy. He addressed this sort of issue publicly in his finger journals (back before anyone said "blog") when he was trying to develop a version of Quake III that took advantage of multiple processors. Certain things in a game loop just have to be synchronized, and that causes bottlenecks that multiple processors can't help with.

    If it were possible to write explicitly parallel code for dual-core CPUs, that could be a very different story. But it would also negate the advantage dual-core is supposed to provide, so the heck with it.
  • by Anonymous Coward on Tuesday August 16, 2005 @01:34PM (#13331684)
    If the changes to the universe between each pass of the token are small (this should be a fair assumption as you would expect one cycle of token passing per frame) then a little inaccuracy between the universe read and 'reality' should be expected - we all blink don't we?

    This approach might work for some games, but it's definitely problematic for others. Consider that the accuracy of each AI's decisions about the world around it is going to be inversely proportional to the number of AIs in it, both because the world will be changing more when there are more AIs, and because each AI will get fewer chances to refine_strategy() when there are many of them. That could lead to some truly freaky positive feedback loops in a game like Doom or GTA, where NPCs fight with each other. Everything might be hunky-dory up until a certain AI-count threshold, at which point the NPCs can't target each other accurately anymore. If more NPCs are being generated automatically, you're headed for a meltdown.

    And since PCs don't all have equal amounts of CPU horsepower / memory bandwidth / etc., the meltdown threshold would vary from one system to another. The apparent intelligence of NPCs might also vary with their quantity and with the system's speed. It seems unlikely that any particular game's design would call for such behavior.

    It would probably be more efficient, more deterministic, and possibly simpler to debug to use just N threads, where N is the number of CPU cores available - in modern systems, 1 or 2. Each thread grabs the next unprocessed AI from a list, processes it (calls AICharacter.Think() or whatever), marks the AI as processed, and stores the result in a separate copy of the world data. Once all AIs are processed, you make the new copy the current world data and go on to collision detection or what-have-you. Nothing would be wasted on context switches. The code would work fine on any number of CPU cores, just faster on more. Only a few pretty trivial deadlocks/race-conditions to worry about.
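    A minimal sketch of that scheme in modern C++ (illustrative only; the AICharacter::Think() here is a trivial stand-in for real decision logic): N worker threads pull AIs from a shared atomic index, each writes its result into a separate world copy, and the copies are swapped once per frame.

        #include <algorithm>
        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        struct AICharacter {
            float x = 0.0f;
            // Reads only the frozen current world; writes only its own slot
            // in the next world, so no locking is needed.
            void Think(const std::vector<AICharacter>& world, AICharacter& out) const {
                out.x = x + 1.0f;      // stand-in for real decision logic
                (void)world;
            }
        };

        int main() {
            std::vector<AICharacter> current(1000), next(1000);
            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

            for (int frame = 0; frame < 3; ++frame) {
                std::atomic<size_t> nextIndex{0};   // "grab the next unprocessed AI"
                std::vector<std::thread> workers;
                for (unsigned t = 0; t < cores; ++t)
                    workers.emplace_back([&] {
                        for (size_t i; (i = nextIndex.fetch_add(1)) < current.size(); )
                            current[i].Think(current, next[i]);
                    });
                for (auto& w : workers) w.join();
                std::swap(current, next);           // new copy becomes the current world
                std::printf("frame %d: ai0 at %f\n", frame, current[0].x);
            }
        }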

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...