
Carmack: Next-Gen Console Games Will Still Aim For 30fps

An anonymous reader sends this excerpt from Develop: "Games developed for the next generation of consoles will still target a performance of 30 frames per second, claims id Software co-founder John Carmack. Taking to Twitter, the industry veteran said he could 'pretty much guarantee' developers would target the standard, rather than aiming for anything as high as 60 fps. id Software games such as Rage and the Call of Duty series both hit up to 60 fps, but many current-generation titles fall short, such as Battlefield 3, which runs at 30 fps on consoles. 'Unfortunately, I can pretty much guarantee that a lot of next gen games will still target 30 fps,' said Carmack."

  • Re:Detail (Score:2, Interesting)

    by Gerzel ( 240421 ) <brollyferret@nospAM.gmail.com> on Wednesday December 19, 2012 @03:01AM (#42334349) Journal

    Also depends on what they are doing with that extra processing power. Are you making a game that is more intuitive? That reacts and learns better? That has AI that is more intelligent and adds to gameplay?

    Really, 30fps is in the range of reasonable quality. You get diminishing returns as you increase fps, especially if the rest of the game doesn't perform to the same standard.
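
    A rough sense of why those returns diminish (illustrative arithmetic, not part of the original comment): each doubling of the frame rate halves the frame time, so the absolute milliseconds saved per frame keep shrinking. A quick Python sketch:

        # Illustrative numbers only: frame time at each rate, and how little each
        # further doubling of fps actually shaves off per frame.
        for fps in (30, 60, 120, 240):
            print(f"{fps:>3} fps -> {1000 / fps:5.1f} ms per frame")
        # 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms, 240 fps -> 4.2 ms
        # Going from 30 to 60 fps saves 16.7 ms per frame; 60 to 120 saves only 8.3 ms.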

  • by Brulath ( 2765381 ) on Wednesday December 19, 2012 @04:00AM (#42334575)

    TechReport analysed the nVidia 680 a bit after its release and had a piece on adaptive vsync [techreport.com] which should answer your question.

    Quoted from an nVidia software engineer:

    There are two definitions for triple buffering. One applies to OGL and the other to DX. Adaptive v-sync provides benefits in terms of power savings and smoothness relative to both.

    - Triple buffering solutions require more frame-buffer memory than double buffering, which can be a problem at high resolutions.

    - Triple buffering is an application choice (no driver override in DX) and is not frequently supported.

    - OGL triple buffering: The GPU renders frames as fast as it can (equivalent to v-sync off) and the most recently completed frame is displayed at the next v-sync. This means you get tear-free rendering, but entire frames are effectively dropped (never displayed), so smoothness is severely compromised and the effective time interval between successive displayed frames can vary by a factor of two. Measuring fps in this case will return the v-sync off frame rate, which is meaningless when some frames are not displayed (can you be sure they were actually rendered?). To summarize: this implementation combines high power consumption and uneven motion sampling for a poor user experience.

    - DX triple buffering is the same as double buffering but with three back buffers which allows the GPU to render two frames before stalling for display to complete scanout of the oldest frame. The resulting behavior is the same as adaptive vsync (or regular double-buffered v-sync=on) for frame rates above 60Hz, so power and smoothness are ok. It's a different story when the frame rate drops below 60 though. Below 60Hz this solution will run faster than 30Hz (i.e. better than regular double buffered v-sync=on) because successive frames will display after either 1 or 2 v-blank intervals. This results in better average frame rates, but the samples are uneven and smoothness is compromised.

    - Adaptive vsync is smooth below 60Hz (even samples) and uses less power above 60Hz.

    - Triple buffering adds 50% more latency to the rendering pipeline. This is particularly problematic below 60fps. Adaptive vsync adds no latency.

    So triple buffering is bad because it can cause an intermediate frame to be dropped, resulting in a small visual stutter despite running at 60fps. There's a video of adaptive vsync on YouTube [youtu.be].
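
    To make the pacing differences concrete, here is a rough frame-pacing sketch (an illustration under assumed numbers, not taken from the nVidia engineer's post). It assumes a 60Hz display (a vblank every ~16.7ms) and a GPU that needs 20ms per frame, i.e. below 60fps, and prints the intervals between displayed frames under the three policies described above. The triple-buffering model is simplified: it ignores back-buffer exhaustion, which cannot occur here because rendering is slower than the refresh.

        import math

        REFRESH = 1000.0 / 60.0   # ms between vblanks on a 60 Hz display (assumed)
        RENDER = 20.0             # assumed GPU render time per frame (below 60 fps)
        N = 12                    # frames to simulate

        def vblank_at_or_after(t):
            # Time of the first vblank at or after t.
            return math.ceil(t / REFRESH) * REFRESH

        def double_buffered_vsync():
            # One back buffer: render, then stall until the flip at the next vblank.
            shown, t = [], 0.0
            for _ in range(N):
                t += RENDER                # render into the back buffer
                t = vblank_at_or_after(t)  # wait for the vblank to flip
                shown.append(t)
            return shown

        def dx_triple_buffered():
            # Two back buffers: the GPU keeps rendering; each finished frame flips
            # at the next free vblank (at most one flip per vblank).
            shown, done, last_flip = [], 0.0, -REFRESH
            for _ in range(N):
                done += RENDER             # frame finished rendering
                flip = max(vblank_at_or_after(done), last_flip + REFRESH)
                shown.append(flip)
                last_flip = flip
            return shown

        def adaptive_vsync():
            # Below 60 fps vsync is dropped, so frames appear as soon as they finish.
            return [RENDER * (i + 1) for i in range(N)]

        def intervals(times):
            return [round(b - a, 1) for a, b in zip(times, times[1:])]

        print("double buffered vsync:", intervals(double_buffered_vsync()))  # steady 33.3 ms (30 fps)
        print("dx triple buffering  :", intervals(dx_triple_buffered()))     # mostly 16.7 ms, with 33.3 ms hitches
        print("adaptive vsync       :", intervals(adaptive_vsync()))         # steady 20.0 ms (50 fps)

    With these assumed numbers, double-buffered v-sync locks to an even 30fps, DX-style triple buffering averages higher but steps unevenly between 16.7ms and 33.3ms, and adaptive vsync keeps an even 20ms cadence, which matches the trade-offs described in the quote.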

  • by SecurityTheatre ( 2427858 ) on Wednesday December 19, 2012 @04:15AM (#42334623)

    That's exactly the problem I had.

    The "Jerkycam" works BECAUSE of the 24fps.

    The only time I found the 48fps showing to be uncomfortable and weird was during very fast action, jerky-motion sequences. It suddenly feels like high-fidelity jerkiness, which makes it lose its tendency to portray "oh noez, stuff is blurry and out of control, even the camera", and instead just feels like "why is the dude shaking the camera so much?"

  • by epyT-R ( 613989 ) on Wednesday December 19, 2012 @04:25AM (#42334653)

    I guess my interpretation of jerkycam was always "why the hell is he shaking the camera so much?" It's annoying and distracting, especially when it's every other scene. If the sharpness of movement isn't sufficient, it's because the movements aren't sharp enough. The lower framerate just hid that.

  • by AmiMoJo ( 196126 ) * on Wednesday December 19, 2012 @09:02AM (#42335673) Homepage Journal

    It's just a way of doing action on the cheap. The special effects and stunts don't have to be as good because no one can see them clearly. A bit of low-budget CGI looks much better when blurred and out of focus and only on the screen for 1/24th of a second.

    Transformers invented a variation where the CGI has so much detail and is framed so poorly on screen that you can't make out where the character's limbs are or what is actually going on anyway, so again it seems better than it actually is. If you step through the action sequences frame by frame, there is a very clear disconnect between the CGI and the real objects that get thrown around by poorly hidden explosives and hydraulics. Terrible camera work hides a multitude of lameness.
