
CPUs Do Affect Gaming Performance, After All

Posted by timothy
from the that's-just-what-jesus-said dept.
crookedvulture writes "For years, PC hardware sites have maintained that CPUs have little impact on gaming performance; all you need is a decent graphics card. That position is largely supported by FPS averages, but the FPS metric doesn't tell the whole story. Examining individual frame latencies better exposes the brief moments of stuttering that can disrupt otherwise smooth gameplay. Those methods have now been used to quantify the gaming performance of 18 CPUs spanning three generations. The results illustrate a clear advantage for Intel, whose CPUs enjoy lower frame latencies than comparable offerings from AMD. While the newer Intel processors perform better than their predecessors, the opposite tends to be true for the latest AMD chips. Turns out AMD's Phenom II X4 980, which is over a year old, offers lower frame latencies than the most recent FX processors."
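The difference between FPS averages and frame-latency analysis is easy to demonstrate. A minimal sketch (with made-up frame times, not numbers from the article's benchmarks): two runs with identical average FPS can have very different worst-case frame latencies, which is exactly the stutter that averages hide.

```python
# Why FPS averages hide stutter: two runs with the same average FPS
# but very different high-percentile frame times. Numbers are
# hypothetical, purely for illustration.

def avg_fps(frame_times_ms):
    """Average FPS over a run, from per-frame render times in ms."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile(frame_times_ms, pct):
    """Frame time at the given percentile (simple nearest-rank method)."""
    ordered = sorted(frame_times_ms)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

smooth = [20.0] * 100                   # every frame takes 20 ms
stutter = [15.0] * 75 + [35.0] * 25     # same total time, but spiky

print(avg_fps(smooth), avg_fps(stutter))                # identical: 50 FPS each
print(percentile(smooth, 99), percentile(stutter, 99))  # 20 ms vs 35 ms
```

Both runs report 50 FPS, but the second spends a quarter of its frames at nearly double the latency — the kind of result the per-frame methods in the article are designed to expose.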
This discussion has been archived. No new comments can be posted.

  • Re:Err (Score:5, Informative)

    by snemarch (1086057) on Thursday August 23, 2012 @06:31PM (#41102649)

    *shrug*

    I've been running a Q6600 for several years, and only replaced it last month. That's an early-2007 CPU. It didn't really seem strained until the very latest crop of games... and yes, sure, it's a quad-core, but game CPU logic hasn't been heavily parallelized yet, so a fast dual-core will still be better for most gamers than a quad-core - and the Q6600 is pretty slow by today's standards (2.4GHz, and with a less efficient microarchitecture than the current crop of Core CPUs).

    Sure, CPUs matter, but it's nowhere near a case of "you need the latest generation of CPU to run the latest generation of games!" anymore. Upgrading to an i7-3770 did smooth out a few games somewhat, but I'm seeing far larger improvements when transcoding FLAC albums to MP3 for my portable player, or compiling large codebases :)
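Transcoding an album is the classic embarrassingly parallel job, which is why it scales with core count in a way most game logic doesn't: each track is independent work. A minimal sketch of that pattern — the transcode step here is a hypothetical stand-in, not a real encoder call:

```python
# Batch work scales with cores because every item is independent.
# transcode() is a placeholder (hypothetical), not a real encoder.
from concurrent.futures import ProcessPoolExecutor

def transcode(name):
    # stand-in for invoking an encoder on one track
    return name.replace(".flac", ".mp3")

tracks = [f"track{i:02d}.flac" for i in range(1, 9)]

if __name__ == "__main__":
    # one worker process per core by default
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(transcode, tracks))
    print(results)
```

Add cores and the batch finishes proportionally faster; a game's main simulation loop, by contrast, is mostly one serial thread.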

  • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Thursday August 23, 2012 @07:48PM (#41103527) Journal
    This also shows what many of us have been saying: Bulldozer is AMD's Netburst. I've stuck with the Phenoms because you get great bang for the buck and because it was obvious the "half core" design of BD was crap. Now we have it in B&W: the much older Phenom spanking the latest AMD chips, which cost on average 35-45% more. Let's just hope that recent hire of the former Apple chip designer to AMD can right the ship, because otherwise, when I can't score X4s and X6s anymore, I'll have no choice but to go Intel.
  • by DrYak (748999) on Friday August 24, 2012 @02:17AM (#41106059) Homepage

    This also shows what many of us have been saying which is Bulldozer is AMD's Netburst.

    Yes, but not for the reason you think. Netburst introduced two things:
    - An extremely deep pipeline, which was a stupid idea and ultimately Netburst's demise and Core's reboot from the ashes of the Pentium III. That's the thing most people are referring to when comparing both chips.
    - HyperThreading: the ability to run 2 threads on the same pipeline (in order to keep the extremely long pipeline full). That's what's similar to Bulldozer's problems.

    When HT was introduced, its impact on running Windows software was catastrophic. That is simply because Windows was optimized for SMP (Symmetric Multi-Processing), where all CPUs are more or less equal. HyperThreading is far from symmetric: it introduces 2 virtual processors which must share the resources of one real core. You have to schedule threads properly so that no real CPU sits idle while a virtual core is struggling, and you have to schedule intelligently to minimize cache misses. Windows simply wasn't designed for such an architecture and was definitely bad at juggling the threads and the virtual processors. Proper Windows support came much later (and nowadays enabling HyperThreading under Windows doesn't come with much of a performance hit).
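The scheduling problem described above boils down to topology awareness: logical CPUs come in sibling pairs sharing one physical core, and a topology-blind scheduler can stack two busy threads on the same core while another core idles. A toy sketch of the topology-aware policy (the 2-cores-x-2-threads layout and thread names are hypothetical):

```python
# Toy model of HT-aware placement: fill one logical CPU per physical
# core before touching the siblings. Topology is hypothetical
# (2 physical cores, 2 hardware threads each).

siblings = [(0, 2), (1, 3)]  # logical CPU ids sharing each physical core

def assign(threads):
    """Assign threads to logical CPUs, one per physical core first."""
    order = [pair[0] for pair in siblings] + [pair[1] for pair in siblings]
    return {t: order[i % len(order)] for i, t in enumerate(threads)}

# two busy threads land on cpu0 and cpu1: separate physical cores
print(assign(["game", "audio"]))
```

A naive scheduler that just walks CPU ids 0, 1, 2, 3 would do the right thing here by accident, but one that picked 0 and 2 would leave a whole core idle — which is roughly what early Windows did on HT chips.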

    The "half cores" of Bulldozer are in the same situation. It's also a weird architecture (although less is shared between half-cores), and it requires correctly assigning threads to processors, etc. Again, the current Windows (7) is bad at this; you'll have to wait for Windows 8 to see an OS properly optimized for this situation. Until then, the half-core design will come with a huge performance cost.

    But that's only in Microsoft world.

    On Linux the situation is different. Besides the Linux kernel being much more efficient at thread and process scheduling, Linux has another advantage: open-source code coupled with shorter release cycles. Thus the latest kernels already support Bulldozer's special core model.

    The end result is that Bulldozers run much more efficiently under Linux than under Windows (as can be seen from the Linux benchmarks on Phoronix).
    And they have decent performance per dollar.
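The sibling topology that kernel support builds on is actually visible from userspace on Linux: sysfs exposes which logical CPUs share a core. A small sketch (the helper name is mine; it falls back gracefully where /sys isn't available):

```python
# On Linux, the kernel publishes CPU sibling topology under sysfs -
# the same information its scheduler uses for HT and Bulldozer
# half-core placement. Returns None on systems without sysfs.
from pathlib import Path

def thread_siblings(cpu=0):
    """Logical CPUs sharing a core with `cpu`, e.g. '0,4', or None."""
    p = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list")
    if not p.exists():
        return None
    return p.read_text().strip()

print(thread_siblings(0))
```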

    Lets just hope that recent hire of the former Apple chip designer to AMD can right the ship, because otherwise when I can't score X4s and X6s anymore i'll have no choice but to go Intel.

    What you'll benefit from most is waiting for a version of Windows that does support the Bulldozer model. Bulldozer does have some shortcomings, but those are in the process of being ironed out.

  • by DrYak (748999) on Friday August 24, 2012 @08:34AM (#41107651) Homepage

    *there* is the parallel.

    There is a parallel in the way people perceive them.
    Under the hood, there is a big difference in practice.

    I mean, people see both and say "Bulldozer is the new Netburst", just as "Windows 8 is the new Vista" and "Vista is the new Windows ME".
    But the reasons behind are fundamentally different.
    Netburst sucked and was hopeless. Bulldozer is suboptimal but there's room for improvement.

    Intel went out chasing high numbers, what they got, was a chip that clocked moderately highly, but performed like ass anyway, and sucked power.

    They got it because they chose a design path with many drawbacks; they sacrificed a lot just for the sake of higher GHz, and Netburst doesn't bring much else of interest to the table. It could maybe, somewhat, work a little bit today, using the latest process shrinks and advanced cooling to finally hit the 10GHz where the architecture would be competitive - while still sucking a lot of power.
    But back in the Pentium IV days, there was no hope that anything could actually use it efficiently.
    It "performed like ass" almost by design, because all the things they neglected ended up biting them in the long run and became hard limits.

    The only way to do something better was to scrap the whole thing, move to something simpler, and stop favouring GHz at all costs, above everything else including power consumption.

    Which they did. The Core family was built by improving on the older Pentium IIIs.
    And they did it again, in a way, with the Atom family, which is not completely unlike the even simpler and older Pentium, giving an even lower-power end result (though it's difficult to compete with ARM in this range...)

    The only way out of that situation for Intel was the garbage bin.
    The only useful thing which came out of the Netburst architecture was HyperThreading - useless back in the Pentium IV era for lack of proper OS support, but it worked better when reintroduced later in the Core era, simply because Windows had had some time to mature.

    AMD went out chasing core count, what they got, was a chip that can't hold its own against chips with half as many "cores", and sucks power.

    On the other hand, Bulldozers are limited by things which are improvable in the near future.
    Some might be design flaws in the silicon, but that's the sort of thing that can be fixed - and fixed in the near future, without counting on some advanced technology 10 years from now to dramatically shrink the process. Part of the "sucks power" problem is fixable in hardware.
    (And part of it is fixable by literal building architecture: AMD is a little bit behind, using older processes, simply because it lacks manufacturing plants with the latest technology, unlike Intel.)

    But most problems aren't even hardware, but software.
    - The OS and kernel scheduler need to support its peculiar concept of half-cores. There's a dramatic difference *already today* in using Bulldozer between Windows and Linux, because the kernel inside Windows 7 predates Bulldozer's release, whereas Linux is not only much more efficient, but gained half-core support long ago.
    - The software needs to be written to take advantage of Bulldozer, especially by using more cores. But *that is* the current general tendency anyway: toward multiprocessing and multithreading, so it will happen naturally over time. Just look at Google's Chrome: each tab is (for security and sandboxing reasons) a separate, isolated process. It's the most visible and well-known example, but other software follows the same trend. Being of Unix heritage, Linux uses multiprocessing much more heavily and thus has many more use cases where Bulldozer is useful (server tasks, for example).
    (Also, in the open-source world, Bulldozer's other advantages are usually only a compiler switch or a library upgrade away; software can take advantage of them rather quickly.)
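The process-per-tab model mentioned above is easy to sketch: each "tab" is a separate OS process with its own address space, so a crash or corruption in one cannot touch another's memory, and each can run on its own core (or half-core). The tab names and render step here are hypothetical, purely for illustration:

```python
# Sketch of Chrome-style process-per-tab isolation: every tab is a
# separate OS process. render_tab() is a hypothetical stand-in for
# real rendering work.
from multiprocessing import Process, Queue

def render_tab(url, out):
    # runs in its own address space, isolated from other tabs
    out.put((url, f"rendered {url}"))

if __name__ == "__main__":
    out = Queue()
    tabs = [Process(target=render_tab, args=(u, out))
            for u in ("a.example", "b.example")]
    for t in tabs:
        t.start()
    for t in tabs:
        t.join()
    print(sorted(out.get() for _ in tabs))
```

Independent processes like these are exactly the workload that spreads naturally across many cores, which is why the multiprocessing trend plays to a many-core design.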

    So yeah, a
