
CPUs Do Affect Gaming Performance, After All

crookedvulture writes "For years, PC hardware sites have maintained that CPUs have little impact on gaming performance; all you need is a decent graphics card. That position is largely supported by FPS averages, but the FPS metric doesn't tell the whole story. Examining individual frame latencies better exposes the brief moments of stuttering that can disrupt otherwise smooth gameplay. Those methods have now been used to quantify the gaming performance of 18 CPUs spanning three generations. The results illustrate a clear advantage for Intel, whose CPUs enjoy lower frame latencies than comparable offerings from AMD. While the newer Intel processors perform better than their predecessors, the opposite tends to be true for the latest AMD chips. Turns out AMD's Phenom II X4 980, which is over a year old, offers lower frame latencies than the most recent FX processors."
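
A minimal illustration of the frame-latency point (Python, with invented numbers): two traces can share nearly the same FPS average while one of them stutters badly, which is exactly what per-frame metrics are meant to expose.

    # Two hypothetical frame-time traces, in milliseconds per frame. The numbers
    # are made up for illustration; only the shape of the comparison matters.
    def avg_fps(frame_times_ms):
        return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

    def percentile(frame_times_ms, pct):
        ordered = sorted(frame_times_ms)
        return ordered[int(round(pct / 100.0 * (len(ordered) - 1)))]

    smooth   = [17.0] * 60                  # steady ~59 FPS
    stuttery = [12.0] * 57 + [110.0] * 3    # similar average, three big spikes

    for name, trace in (("smooth", smooth), ("stuttery", stuttery)):
        print("%-8s %5.1f FPS average, %5.1f ms 99th-percentile frame time"
              % (name, avg_fps(trace), percentile(trace, 99)))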
  • by hammeraxe ( 1635169 ) on Thursday August 23, 2012 @04:57PM (#41102227)

    Try cranking up the difficulty of an RTS on a not-so-good computer and you'll immediately notice how things slow down

    • by locopuyo ( 1433631 ) on Thursday August 23, 2012 @05:01PM (#41102273) Homepage
      In StarCraft 2 my CPU is the bottleneck.
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
      • This is why I have the 4 "second" cores shut off and run the rest at their turbo boost frequency. The thermals of my system are about on par with my old 965 (non-OC'd), and the 8150FX provides a massive difference in gaming response (vs. the 4.2GHz OC on the 965). I thought everyone knew that was the sensible thing to do unless you are running something that taxes more than 4 cores at once.
      • by DrYak ( 748999 ) on Friday August 24, 2012 @01:17AM (#41106059) Homepage

        This also shows what many of us have been saying, which is that Bulldozer is AMD's Netburst.

        Yes, but not for the reason you think. Netburst introduced two things:
        - An extremely deep pipeline, which was a stupid idea and ultimately the cause of Netburst's demise and Core's reboot from the ashes of the Pentium III. That's what most people are referring to when comparing the two chips.
        - HyperThreading: the ability to run 2 threads on the same pipeline (in order to keep that extremely long pipeline full). That's the part that resembles Bulldozer's problems.

        When HT was introduced, its impact on Windows software was catastrophic. That is simply because Windows was optimized for SMP (Symmetric Multiprocessing), where all CPUs are more or less equal. HyperThreading is far from symmetric: it introduces two logical processors that must share the resources of one physical core. You have to schedule threads properly so that no physical core sits idle while a logical one is struggling, and you have to schedule them intelligently to minimize cache misses. Windows simply wasn't designed for such an architecture and sucked at juggling threads across the virtual processors. Proper Windows support came much later (and nowadays enabling HyperThreading under Windows no longer comes with much of a performance hit).

        The "half core" of bulldozer are in the same situation. It's also a weird architecture (although less is shared between half-cores). It requires correctly assigning thread to processors, etc. Again current Windows ( 7 ) sucks at this, you'll have to wait for Windows 8 to see an OS properly optimized with this situation. Until then, the half-core design will come with a huge performance cost.

        But that's only in the Microsoft world.

        On Linux the situation is different. Besides the Linux kernel being much more efficient at thread and process scheduling, Linux has another advantage: open-source code coupled with shorter release cycles. Thus the latest kernels already support Bulldozer's special core model.
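
        As a rough illustration of what "supporting the core model" means in practice, here is a minimal sketch (Linux-only, Python 3.3+, purely illustrative): it reads the sibling groups the kernel exposes in sysfs and pins the current process to one logical CPU per group, which is the kind of placement a topology-aware scheduler does automatically. Whether Bulldozer's paired cores actually show up as sibling groups depends on the kernel version.

          import glob
          import os

          def one_cpu_per_sibling_group():
              # sysfs lists, for each logical CPU, the CPUs it shares a physical
              # core / module with, e.g. "0-1" or "2,3".
              chosen, seen = set(), set()
              paths = glob.glob(
                  "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")
              for path in sorted(paths):
                  cpu = int(path.split("/")[-3][3:])   # ".../cpu4/..." -> 4
                  with open(path) as f:
                      group = f.read().strip()
                  if group not in seen:                # keep one CPU per group
                      seen.add(group)
                      chosen.add(cpu)
              return chosen

          if __name__ == "__main__":
              cpus = one_cpu_per_sibling_group()
              os.sched_setaffinity(0, cpus)            # 0 = the current process
              print("pinned to logical CPUs:", sorted(cpus))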

        The end result is that Bulldozers run much more efficiently under Linux than under Windows (as can be seen from the Linux benchmarks on Phoronix).
        And they have decent performance per dollar.

        Let's just hope that AMD's recent hire of the former Apple chip designer can right the ship, because otherwise, when I can't score X4s and X6s anymore, I'll have no choice but to go Intel.

        What you'll benefit from the most is waiting for a version of Windows that does support the Bulldozer model.
        Bulldozer does have some shortcomings, but they are in the process of being ironed out.

        • I don't think the GP was trying to draw literal comparisons with Netburst (though you're right, there are some), but simply pointing out that Bulldozer is a terrible misstep in design direction.

          Intel went out chasing high clock numbers; what they got was a chip that clocked moderately high but performed like ass anyway, and sucked power.
          AMD went out chasing core count; what they got was a chip that can't hold its own against chips with half as many "cores", and sucks power.

          *there* is the parallel.

          • by DrYak ( 748999 ) on Friday August 24, 2012 @07:34AM (#41107651) Homepage

            *there* is the parallel.

            There is a parallel in the way people perceive them.
            There is a big difference under the hood in practice.

            I mean, people see both and say "Bulldozer is the new Netburst", just as they say "Windows 8 is the new Vista" or "Vista is the new Windows ME".
            But the reasons behind it are fundamentally different.
            Netburst sucked and was hopeless. Bulldozer is suboptimal, but there's room for improvement.

            Intel went out chasing high clock numbers; what they got was a chip that clocked moderately high but performed like ass anyway, and sucked power.

            They got that because they chose a design path with many drawbacks: they sacrificed a lot just for the sake of higher GHz, and Netburst didn't bring much else of interest to the table. It could maybe work somewhat today, using the latest process shrinks and advanced cooling, finally hitting the 10GHz where the architecture would be competitive - while still sucking a lot of power.
            But back in the Pentium IV days, there was no hope that anything could actually use it efficiently.
            It "performed like ass" almost by design, because all the things they neglected ended up biting them in the long run and becoming hard limits.

            The only way to do something better was to scrap the whole thing, move to something simpler, and stop favouring GHz at all costs above everything else, including power consumption.

            Which they did. The Core family was built by improving on the older Pentium IIIs.
            And they did it again, in a way, with the Atom family, which is not completely unlike the even simpler and older Pentium, giving an even lower-power end result (though it's difficult to compete with ARM in this range...)

            The only way to get Intel out of that situation was the garbage bin.
            The only useful thing that came out of the Netburst architecture was HyperThreading, which was useless back in the Pentium IV era for lack of proper OS support but worked better when it was reintroduced later in the Core era, simply because Windows had had some time to mature.

            AMD went out chasing core count; what they got was a chip that can't hold its own against chips with half as many "cores", and sucks power.

            On the other hand, Bulldozer is limited by things that are improvable in the near future.
            Some might be design flaws in the silicon, but those are things that can be fixed - and fixed in the near future, without counting on some advanced technology 10 years from now to dramatically shrink the process. Part of the "sucks power" problem is fixable in hardware.
            (And part of it is fixable by literal building architecture: AMD is a little behind, using older processes, simply for lack of manufacturing plants with the latest technology like Intel's.)

            But most of the problems aren't even hardware, but software.
            - The OS and kernel scheduler need to support its peculiar concept of half cores. There's a dramatic difference *already today* in using Bulldozer between Windows and Linux, because the generation of kernel inside Windows 7 predates Bulldozer's release, whereas Linux is not only fucking much more efficient but gained support for half cores long ago.
            - The software needs to be written to take advantage of Bulldozer, especially by using more cores. But *that is* the current general tendency anyway: toward multiprocessing and multithreading, so it will happen naturally over time (see the sketch below). Just look at Google's Chrome: each tab is (for security and sandboxing reasons) a separate, isolated process. It's the most visible and well-known example, but other software follows the same trend. Being of Unix heritage, Linux uses multiprocessing much more heavily and thus has many more use cases where Bulldozer is useful (server tasks are one example).
            (Also, in the open-source world, Bulldozer's other advantages are usually only a compiler switch or a tool/library upgrade away. Software can take advantage of them rather quickly.)
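
            The sketch referenced above: a generic standard-library example of the fan-out-across-cores pattern that trend is about. Nothing here is specific to Bulldozer; the pool simply defaults to one worker per logical CPU.

              import multiprocessing as mp

              def busy(n):
                  # deliberately CPU-bound work, so extra cores actually help
                  return sum(i * i for i in range(n))

              if __name__ == "__main__":
                  chunks = [2000000] * mp.cpu_count()
                  with mp.Pool() as pool:   # one worker process per logical CPU by default
                      results = pool.map(busy, chunks)
                  print(len(results), "chunks computed across", mp.cpu_count(), "workers")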

            So yeah, a

      • I've stuck with the Phenoms because you get great bang for the buck and because it was obvious the "half core" design of BD was crap; now we have it in B&W

        You've got it in black and white from an unreliable, fluffy article that looks more like a troll to me.

      • by dreamchaser ( 49529 ) on Friday August 24, 2012 @05:03AM (#41106941) Homepage Journal

        This also shows what many of us have been saying, which is that Bulldozer is AMD's Netburst. I've stuck with the Phenoms because you get great bang for the buck and because it was obvious the "half core" design of BD was crap; now we have it in B&W, with the much older Phenom spanking the latest AMD chips, which cost on average 35-45% more. Let's just hope that AMD's recent hire of the former Apple chip designer can right the ship, because otherwise, when I can't score X4s and X6s anymore, I'll have no choice but to go Intel.

        You say that like you'll be forced to change your religion or political party. It's a CPU. It's a tool. Use what works best for your use case scenario. Why the fanboi mentality?

        • You say that like you'll be forced to change your religion or political party.

          He's choosing to invest in the competition. If AMD goes tits-up, you will be paying whatever Intel wants because you will have no alternative. Imagine having only a single mobile service provider. You would not be getting the same plan you have now.

  • Err (Score:5, Interesting)

    by bhcompy ( 1877290 ) on Thursday August 23, 2012 @05:03PM (#41102307)
    Which idiot made that claim? Pretty much every hardware review site includes CPU- and GPU-dependent games in their reviews when they review GPUs, CPUs, and OTB rigs.
    • Re: (Score:3, Funny)

      by Anonymous Coward

      For years, absolutely nobody has maintained that CPUs have little impact on gaming performance; all you need is a god-tier video card setup, and a game engine that magically handles everything via GPU.

      There, I fixed it.

      Seriously, this has to be the most nonsensical Slashdot summary I've read all day. The CPU hasn't been a minor factor in gaming for several gaming aeons now, and there is no shortage of games that are critically dependent on it (Hi, Skyrim!).

      • Re:Err (Score:5, Informative)

        by snemarch ( 1086057 ) on Thursday August 23, 2012 @05:31PM (#41102649)

        *shrug*

        I've been running a Q6600 for several years, and only replaced it last month. That's a July 2006 CPU. It didn't really seem strained until the very most recent crop of games... and yes, sure, it's a quadcore, but game CPU logic hasn't been heavily parallelized yet, so a fast dualcore will still be better for most gamers than a quadcore - and the Q6600 is pretty slow by today's standard (2.4GHz, and with a less efficient microarchitecture than the current breed of core2 CPUs).

        Sure, CPUs matter, but it's not even near a case of "you need the latest generation of CPU to run the latest generation of games!" anymore. Upgrading to an i7-3770 did smooth out a few games somewhat, but I'm seeing far larger improvements when transcoding FLAC albums to MP3 for my portable MP3 player, or compiling large codebases :)

        • I had a dual-core (E6600) for 5 years, and pretty much every new game in the past 3 years can use two or more cores. Even if it's just two, you have to consider the other programs running in the background. For example, in Bad Company 2, PunkBuster had a bug where after a few minutes it would use 20-30% of the CPU; the game itself uses ~90% of the CPU, and because of PunkBuster there was a lot of stuttering. Now I have a six-core (3930K), and yeah, maybe six cores are too much for games, but some of them like BF3 can already us

          • Re:Err (Score:4, Interesting)

            by snemarch ( 1086057 ) on Thursday August 23, 2012 @06:10PM (#41103181)

            PunkBuster spiking to 20-30% CPU is, as you mentioned, a bug - it is not the norm. And while people won't be shutting down every background process to play a game, they don't tend to run anything heavy while gaming. And all the regular stuff (web browser with a zillion tabs loaded, email client, IM client, torrent client, ...) is pretty negligible CPU-wise.

            I personally haven't run into games that can utilize more than two cores (please let me know if they're out there!), and even then there have usually been synchronization issues that kept the game from reaching 100% core utilization, even on the slower cores. Parallelizing stuff is hard, and outside of the core graphics pipeline (which runs mostly on the GPU), there's so much stuff that needs to run in strict order in a game engine. I sure do hope clever programmers will think of improvements in the future, though, since we'll hit the GHz wall sooner or later - and then we need to scale on the number of cores.
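
            A structural sketch of that ordering problem (Python, illustrative only - in CPython a thread pool won't give true CPU parallelism because of the GIL, but the shape of the problem is the same): per-entity work fans out fine, while the stages that touch shared state still have to run one after another, so the frame as a whole never parallelizes cleanly.

              from concurrent.futures import ThreadPoolExecutor

              entities = [{"pos": float(i), "vel": 1.0} for i in range(1000)]

              def integrate(e):                      # independent per entity: parallel-friendly
                  e["pos"] += e["vel"]
                  return e

              def resolve_collisions(all_entities):  # touches shared state: must run alone
                  return all_entities

              def render(all_entities):              # consumes the final state: runs last
                  pass

              with ThreadPoolExecutor() as pool:
                  for _frame in range(3):                              # three simulated frames
                      entities = list(pool.map(integrate, entities))   # stage 1: fan out
                      entities = resolve_collisions(entities)          # stage 2: serial
                      render(entities)                                 # stage 3: serial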

            As things are right now, I'd still say a faster dualcore is more bang for the buck than a slower quadcore, gamewise - but that might change before long. And considering that the current crop of CPUs can turbo-boost a couple of cores when the other cores are inactive, it's obviously better to shop for a quadcore than a dualcore - but with the current crop of games, you'd effectively be using the CPU as a faster dualcore when not running intensive background stuff :-)

            You can't really compare the consoles directly to x86 CPUs, btw; the architecture is radically different - more so on the PlayStation side than the Xbox (and let's ignore the original Xbox here, for obvious reasons :)). I wonder if Sony is going to keep up their "OK, this is pretty whacky compared to the commodity multicore stuff you're used to, but it's really cool!" approach, or if they'll settle for something "saner".

            • For the consoles, I was talking about the fact that 90% of PC games are console ports right now, so with a new generation of consoles the PC versions should also be better optimized; otherwise we will have a lot of games that run at 20 fps

            • And while people won't be shutting down every background process to play a game, they don't tend to run anything heavy while gaming.

              I was quite interested in TFA's data on performance while transcoding video, as I do that quite often myself. Their data mirrors my own anecdotal experiences...a low-priority video encode won't hurt much if you have a decent number of cores.

              And all the regular stuff (web browser with a zillion tabs loaded, email client, IM client, torrent client, ...) is pretty negligible CPU-wise.

              One of the things that does kill performance for me is moderately heavy background disk activity. Download-speed activity isn't a big deal, but a few GB of robocopy across the LAN will bring a lot of games to a halt for a second or two.

        • by heypete ( 60671 )

          I still have a Q6600 in my gaming system. It's a solid CPU. Replacing it would require replacing the motherboard, and I can't really justify that at this point -- things run really well (I have a GeForce 550 Ti graphics card, which handles essentially all the games I want to play, including modern ones, with aplomb).

          Once I start running into performance issues, I may upgrade, but that'll probably be in another year or two.

        • *shrug*

          I've been running a Q6600 for several years, and only replaced it last month. That's a July 2006 CPU. It didn't really seem strained until the very most recent crop of games... and yes, sure, it's a quadcore, but game CPU logic hasn't been heavily parallelized yet, so a fast dualcore will still be better for most gamers than a quadcore - and the Q6600 is pretty slow by today's standard (2.4GHz, and with a less efficient microarchitecture than the current breed of core2 CPUs).

          Sure, CPUs matter, but it's not even near a case of "you need the latest generation of CPU to run the latest generation of games!" anymore. Upgrading to an i7-3770 did smooth out a few games somewhat, but I'm seeing far larger improvements when transcoding FLAC albums to MP3 for my portable MP3 player, or compiling large codebases :)

          I had to can my dual-core E8600 when Black Ops came out, so I beg to differ on that. I only upgraded to a Q9650 I found second-hand, and it made a world of difference. The old dual-core chip stuttered horrendously at 1920x1200 with a GTX 480 for the first minute or so of a multiplayer match. I am guessing the game was trying to load the textures after I had started playing or something, but I tried everything I could think of to fix it before spending any money. I even tried overclocking the E8600 up to 3.5 o

        • by Creepy ( 93888 )

          I'm still running a Q6600 on my main box with a nVidia 560 Ti GPU. For most GPU bound games (read, shooters) I have no issues with it at all. For RPGs like Skyrim it tended to get CPU bound, but not so bad that I felt I had to update it today, and it played the Guild Wars 2 beta much better than my laptop with an i7 2630 and nVidia 560M (better CPU, GPU is 50%+ slower than the Ti and quite a bit slower than the 560, despite the similar name).

      • Re:Err (Score:4, Interesting)

        by Cute Fuzzy Bunny ( 2234232 ) on Thursday August 23, 2012 @06:04PM (#41103099)

        For years, absolutely nobody has maintained that CPUs have little impact on gaming performance; all you need is a god-tier video card setup, and a game engine that magically handles everything via GPU.

        There, I fixed it.

        Seriously, this has to be the most nonsensical Slashdot summary I've read all day. The CPU hasn't been a minor factor in gaming for several gaming aeons now, and there is no shortage of games that are critically dependent on it (Hi, Skyrim!).

        Check out your favorite hot-deals web site. The mantra is: a Celeron or any old AMD chip made in the last 5 years plus a solid GPU = goodness. I could point you to dozens of threads where this is the de facto standard.

        But that's what you get when you combine cheap with minimal knowledge. Eventually everyone becomes convinced that it's true.

        • Re: (Score:2, Redundant)

          by Ironhandx ( 1762146 )

          Um, it is true. Frame latency doesn't even matter. It's less than 1ms in ALL cases, i.e. it's imperceptible.

          I just bought an FX-4100 purely because it was cheap, had ENOUGH power, and with an excellent video card setup a better Intel chip wouldn't provide any sort of noticeable performance increase. Current-gen CPUs so far overpower current-gen game engines' CPU requirements that this argument is just plain silly.

          I even see someone making the argument that AI is causing massive CPU load.... get fucking real, AI ha

    • Re:Err (Score:5, Interesting)

      by Sir_Sri ( 199544 ) on Thursday August 23, 2012 @05:36PM (#41102725)

      If you read the charts, the assertion that 'CPU doesn't matter' is kind of true in a lot of cases.

      It's not that it doesn't matter at all, but the difference between an 1100-dollar Sandy Bridge i7-3960 and a 200-dollar 2500K - even though they are almost a factor of 2 apart in performance side by side (http://www.cpubenchmark.net/high_end_cpus.html) - is less than 10% in games. Those processors are still *way* better than the AMD offerings, unfortunately, and the AMD processors are in many cases so bad that the CPU becomes the dominant problem.

      The new "bulldozer" architecture from AMD is a disaster, in just about every way. They're terrible. The charts clearly show that.

      The video card makers (more than the review sites) have correctly pointed out that performance is much more likely to be GPU-gated than CPU-gated - or, if it's a problem like the one I'm working on now, gated by a single CPU core because the algorithm doesn't neatly parallelize, so more cores don't do anything. If you're given a choice between a 1000-dollar CPU and a 600-dollar one from the same company, odds are you won't be able to tell the difference, so in that sense they're reasonably correct: there's virtually no benefit to buying an extreme CPU or the like if your primary goal is gaming performance. If you're talking about the best use of say 1000 dollars to build a gaming PC, well then the cheapest i5 you can find with the best video card you can afford is probably the best bang for your buck.

      As someone above said, an RTS like StarCraft is more likely to be CPU-limited than GPU-limited.

      What this tells us is that AMD processors are terrible for gaming, but it makes virtually no difference which FX processor you buy (don't buy those, though; if you're buying AMD, buy a Phenom), and within the Intel family there is, again, virtually no difference for a factor of 4 or 5 price difference.

      What they didn't look at (because you don't really benchmark it) is load times. I think the FX processors have a much faster memory subsystem than their Phenom counterparts if you have a good SSD, but otherwise someone should take a bulldozer to Bulldozer.

      If we were to revisit the oft-used car analogy for computing, it's a fair assertion that which brand of car you buy won't help you get to work any faster day to day; slightly better cars with faster pickup etc. will have a small (but measurable) benefit, but that's about it. Well, unless you buy a Land Rover or a BMW 7 series (http://www.lovemoney.com/news/cars-computers-and-sport/cars/12461/the-country-that-makes-the-most-reliable-cars, http://www.reliabilityindex.com/ ), at which point you should budget time into your schedule for the vehicle to be in the shop.

      • I wonder if this same logic applies to browser performance? As they become more graphical and video-oriented will the GPU power matter more than the CPU?

        Maybe I didn't need a new computer... maybe I just needed to keep the Pentium 4 and upgrade the graphics card to something fast. Then I could play HD YouTube.

        • Only if the software you're using supports GPU acceleration, which I believe Flash does now.
          • by Sir_Sri ( 199544 )

            on an OS that supports it. No GPU acceleration on Windows XP generally, and older flavours of linux are the same deal.

            • by afidel ( 530433 )

              Flash 11.1 supports GPU acceleration on XP; the current version of the Chrome-embedded Flash object, however, does not. I found this out during the Olympics: the 720p feeds were jumpy as heck in Chrome but fairly smooth in Firefox.

          • by Surt ( 22457 )

            flash only supports acceleration for movie decoding (so of course that does apply to youtube, but basically nothing else other than porn sites).

        • by Sir_Sri ( 199544 )

          I wonder if this same logic applies to browser performance

          In Windows 8 it definitely will; in Windows 7 and Linux, not so much. GPU acceleration is becoming more and more popular because GPUs are able to solve one type of problem significantly better than CPUs. If you can split your problem up into the rendering problem and the logic problem, the CPU becomes a lot less important, assuming it's fast enough to keep up with the GPU for whatever problem you have.

          General-purpose GPU acceleration isn't in standard, widespread use on any OS, although MS is doing so with t

      • Wise words.

        Just one thing: whether disk speed matters or not depends a lot on the game, and whether it's the "we have a fixed memory profile, and load all assets to memory while loading a level" or "we stream stuff as necessary" type. For instance, for Far Cry 2, it made pretty much no difference whether I had the game files on a 2x74gig Raptor RAID-0 or on a ramdisk. For a lot of engines, there's all sorts of things going on... Disk I/O, some CPU crunching, some sysmem->gpumem transfers, some gpu crunch

        • by Sir_Sri ( 199544 )

          Ya, disk speed is a hard one to benchmark, which is why I pointed at loading times; that's where it makes the most difference, not 'in game' activities. Well, that and just general system behaviour.

      • Comment removed based on user account deletion
        • by Sir_Sri ( 199544 )

          No, the Phenoms aren't actually terrible; they're behind the equivalent-generation i5s, but bizarrely, they're ahead of the successor FX parts (FX is supposed to be a newer, better microarchitecture than Phenom).

          It depends how you define 'higher end' here, too. An i5-2500K is a 200-dollar processor for the OEM version at retail - now that's not full system cost, you'd need a mobo and RAM to go with it - but the Phenom X6 is a 150-160 dollar part at retail and is maybe 2/3rds the overall performance

      • What this tells us is that AMD processors are terrible for gaming

        No, it tells us that AMD processors are a little worse for gaming, not "terrible". On the other hand, if more cores matter to you, and they do to me, AMD still looks like good bang for the buck.

      • Oh, and if you want to talk about terrible, Intel's Atom is terrible. I regret wasting any money at all on those brain-challenged, hot-running turds. The Atom fiasco single-handedly killed off pretty much the entire netbook market.

      • If you're talking about the best use of say 1000 dollars to build a gaming PC, well then the cheapest i5 you can find with the best video card you can afford is probably the best bang for your buck.

        I just built myself a nice shiny new gaming PC, as my old Core 2 Quad 9650 decided to go pop a few weeks ago and I gave up trying to resurrect it.

        I looked at the prices, and it seems that a low-end socket 2011 Sandy Bridge CPU is actually pretty reasonable, so you should be able to put together a gaming PC featuring one for under $1000. The 3820 is only $300. Throw in some memory, a motherboard, and a mid-range graphics card and you get up to $785.96 on Newegg :)

        Most people seem to discount socket 2011 Sandy Bridge stuff based

  • What. What?! (Score:5, Interesting)

    by RyanFenton ( 230700 ) on Thursday August 23, 2012 @05:10PM (#41102403)

    Who thought that CPUs didn't bottleneck gaming performance? Who ever thought that? Only the smallest of tech demos use only GPU resources - every modern computer/console game I'm aware of uses, well, some regular programming language that needs a CPU to execute instructions and is inherently limited by the clock cycles and interrupts tied to those CPUs.

    GPUs only tend to let you offload the straight-shot parallelizable stuff - graphics blits, audio, textures & lighting - but the core of the game logic is still tied to the CPU. Even if you aren't straining the limits of the CPU in the final implementation, programmers are still limited by its capacity.
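
    A rough sketch of that split (Python; the function names and timing are illustrative stand-ins, not any particular engine's API): the per-frame logic stays on the CPU, the draw submission is a stub for the GPU-side work, and if the CPU part overruns the frame budget, the frame is late no matter how fast the GPU is.

      import time

      def update_game_logic(state, dt):     # CPU-bound: AI, physics, scripting
          state["t"] += dt
          return state

      def submit_draw_calls(state):         # stand-in for handing work to the GPU
          pass

      state = {"t": 0.0}
      target_dt = 1.0 / 60.0                # 60 Hz frame budget, ~16.7 ms
      for _frame in range(10):
          start = time.perf_counter()
          state = update_game_logic(state, target_dt)
          submit_draw_calls(state)
          elapsed = time.perf_counter() - start
          time.sleep(max(0.0, target_dt - elapsed))   # idle out the rest of the budget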

    Otherwise, all our games would just be done with simple ray-traced logic using pure geometry and physics; there would be no limits on the number or kind of interactions allowed in a game world; game logic would be built on unlimited tables of generated content; and we'd quickly build games of infinite recursion simulating all known aspects of the universe, far beyond the shallow cut-out worlds we develop today.

    But we can't properly design for that - we design for the CPUs we work with, and the other helper processors have never changed that.

    Ryan Fenton

    • I will take a mediocre CPU with a kick-ass GPU over the other way around. Sure, I have an underclocked Phenom II at just 2.6GHz, but with the ATI 7870 I plan to get, it will blow away a Core i7 Extreme with HD 4000 graphics by several hundred percent!

      GPU is where it's at with games. Just like with Windows, an SSD makes a bigger difference to boot times than a faster CPU.

    • GPUs only tend to let you offload the straight-shot parallelizable stuff - graphics blits, audio, textures & lighting - but the core of the game logic is still tied to the CPU. Even if you aren't straining the limits of the CPU in the final implementation, programmers are still limited by its capacity.

      Your theory is basically valid, but the practical reality and the empirical evidence of the last, I dunno, 20 years or so, is that the graphics processing takes a significant amount of computing power. There's a reason that virtually every computer and every game console has a dedicated GPU. For that matter, a dedicated sound processing chip. It's all offloaded and the APIs have improved to the point that it doesn't seem like much work, but those specialized chips are burning an awful lot of power.

      For a

  • For years? (Score:5, Interesting)

    by _Shorty-dammit ( 555739 ) on Thursday August 23, 2012 @05:11PM (#41102413)

    I don't recall ever reading on any PC hardware site anyone claiming that the CPU doesn't matter and all you need is a good graphics card. How on earth did anyone ever successfully submit that story?

  • by WilliamGeorge ( 816305 ) on Thursday August 23, 2012 @05:13PM (#41102441)

    The research into frame-rate latencies is really interesting, but the whole idea that *anyone* knowledgeable about PC gaming would have *ever* denied that the CPU was an important factor in performance is ridiculous. I am a consultant at a boutique PC builder (http://www.pugetsystems.com/) and I have always told gamers they want to get a good balance of CPU and GPU performance, and enough RAM to avoid excessive paging during gameplay. Anything outside of that is less important... but to ignore the CPU? Preposterous!

    Then again, it is a Slashdot headline... I probably should expect nothing less (or more)!

  • FTFY (Score:5, Insightful)

    by gman003 ( 1693318 ) on Thursday August 23, 2012 @05:13PM (#41102443)

    For years, stupid PC hardware sites have maintained that CPUs have little impact on gaming performance; all you need is a decent graphics card. That position is largely supported by FPS averages, as most GPU tests are run using the most powerful CPU to prevent the CPU from being the limiting factor, but the FPS metric doesn't tell the whole story. Examining individual frame latencies better exposes the brief moments of stuttering that can disrupt otherwise smooth gameplay. Those methods have now been used to quantify the gaming performance of 18 CPUs spanning three generations by some site that really has nothing better to do than to restate the obvious for morons. [ed: removed fanboy-baiting statements from summary]

    • by Twinbee ( 767046 )
      So its mention of AMD CPU latencies increasing (and Intel's decreasing) is wrong, is it?
      • No, I'm not saying it's factually incorrect. I'm saying that the way they put it into the summary was misleading flamebait.

        A simple logical analysis shows that the primary factors in latency are instructions per clock and clock speed (core count matters as well for applications with multithreaded rendering, but those are surprisingly few). The Phenom II series was good at both. The Sandy Bridge/Ivy Bridge Intel processors are also good at both, even a bit better. Bulldozer, unfortunately, went the Pentium IV

    • You're an AMD fanboi, eh? :-)
      • *looks at current laptop* Core i7 3610QM
        *looks at wreckage of last laptop* Core 2 Duo P8400
        *looks at primary desktop* dual Xeon 5150s
        *looks at secondary desktop* Athlon 900

        Yeah, if I were going to accuse myself of fanboyism, I think I'd accuse myself of fanboying for *Intel*, not AMD. Now granted, I've got a few more AMD-based builds under my belt, but I've either given them away (the Phenom X3 build) or accidentally fried them (the old Athlon XP build).

        In all honesty, though, both companies have their good

        • Ah, nice to hear - your redacted summary just gave another impression.

          Been through both sides myself, depending on what made the most sense at the time - the first box I owned was a 486DX4-100, obviously AMD. My current rig is the third Intel generation in a row, though - AMD haven't really been able to keep up (except for the budget segment) since Intel launched Core 2, imho. Which is kinda sad - while I kinda would have liked to see x86 die and "something better" emerge rather than getting x86-64, at least AMD obliterate

  • This should be obvious to anyone who has done any realtime/interactive graphics programming. As the frame rate gets higher, the amount of time the CPU has to process the next frame gets smaller. It also becomes more difficult to utilise the CPU fully unless you are willing to add a couple of frames of latency to generate frames ahead of time, which I'd speculate is not ideal for a game-type application.

  • My current rig, which I built in 2007 and have upgraded once in a while, has decent gaming performance, even though I haven't put any money into it in 2 years or so... still on a GeForce 450.
    Calm down please :)

  • Overall the GPU impacts FPS the most (a cheap one vs. a slightly more expensive one), but to say the CPU has no impact is wrong. Overall its impact is very small, but there is some.
    • by Krneki ( 1192201 )
      Small? Go play something online that uses only 1 or at most 2 cores and then ask yourself why you can't be in the top tier. If you are undemanding (aka "30 FPS is all I need"), then of course any old hardware will do.
  • by zlives ( 2009072 ) on Thursday August 23, 2012 @05:19PM (#41102509)

    so... I should finally give in and buy the coprocessor for my 386!!

    • I think it's just a fad.
      Wait and see.

    • by Hatta ( 162192 )

      Did any games support math coprocessors back in the math coprocessor days? My impression was that they were for office apps, Lotus etc.

      • I dunno, but I could see it being exploited for the additional registers, and for doing a floating-point op at the same time as an executing loop.

        It might also have been useful when doing software blitting on non-accelerated cards.

      • Probably an outlier, but I remember that Scorched Earth [wikipedia.org] ran much better after I added a 387 to my 386 machine back in the early 90s. Specifically the projectile trajectories were calculated much quicker.
        • What were you smoking? Like you needed Scorched Earth to run faster? Jeez, I take my turn and then *boop* *boop* *bing* the computer players shoot and I have no idea what just happened. Oh, I'm dead now. That was fun. I don't mind losing, I just want to see WHAT THE HELL HAPPENED.
      • by zlives ( 2009072 )

        Wing Commander 3? I think that helped!!!
        Bluehair was the best captain

        • You must have had a 486 DX50. (No, not a DX2/50, I mean a DX50.)

          It ran internally at 50MHz, at a 1x multiplier. Old DOS games expecting a 33MHz bus clock would go at warp speed! :D

          (The DX2/50 used a 2x multiplier and had a bus speed of 25MHz. It was a lot cheaper than a real DX50.)

      • The original Quake 1 actually REQUIRED an FPU coprocessor, as I recall. Never was happier to have a shiny new 486DX2!
    • by danomac ( 1032160 ) on Thursday August 23, 2012 @06:22PM (#41103305)

      Yep, nothing like being able to calculate 1+1=3 quickly. Err...

    • by Nimey ( 114278 )

      Nah, I've got a TSR[1] that emulates an 8087. It totally speeds up Doom! ...actually, it really did on my 486SX-25, maybe 2-4 FPS. No, I don't know why; maybe without a co-pro present Doom would use emulated 387 instructions that were less efficient than emulating a simpler 8087.

      [1] it was called EM87.

  • This doesn't make sense at all. It's clear that the CPU is far more important than the GPU.
    CPU speed solves stuttering and lag
    Hard drive speed solves long load times
    Memory amount decreases the frequency of load times (memory speed, despite what many think, has relatively little to do with performance, as even the slowest memory is far faster than any other component of the system)
    GPU speed/memory amount affects the quality of graphics settings and the frame rate when those settings are turned on (i.e. you can check mo
  • "Strawmansummary"
  • Minecraft: I know it's not the best-optimized game, but I'm pretty sure it still uses hardware. I have had an Nvidia GTX 275 forever, through many CPUs. When playing Minecraft with an older quad-core Intel CPU (can't remember the model number) I would get around 30 FPS at medium settings; after upgrading to an i7 with the same video card, my Minecraft FPS is around 90 with the same settings.

    So I can attest empirically that "CPU matters" is in fact the case. Also games like ARMA2, Supreme Commander

    • The Tekkit mod certainly is CPU-limited. Heavily so. To the point that you can bring down a server by building too many BuildCraft pipes.
  • ...told you this! I've been saying this for years. I didn't buy an AMD FX 8-core and 16GB of RAM for kicks!
    I will say, though, that for a while RAM was a major player.
  • Hm. First there is:

    "...The FX-4170 supplants a lineup of chips known for their strong value, the Athlon II X4 series. Our legacy representative from that series actually bears the Phenom name, but under the covers, the Phenom II X4 850 employs the same silicon with slightly higher clocks."

    and then:

    "Only the FX-4170 outperforms the CPU it replaces, the Phenom II X4 850, whose lack of L3 cache and modest 3.3GHz clock frequency aren't doing it any favors."

    How can I trust them if they are unaware of basic stuff

    • The Phenom II X4 850 (and the 840 as well) is based on the C3 stepping Propus core, which means that it is essentially an upclocked Athlon II. It does not have any L3 cache. The article is correct.
    • Comment removed based on user account deletion
  • I have been saying this for years, but have never had any data to back it up. For me it has always been a "seat of the pants" sort of metric. Over the last decade I have tried AMD CPUs on a number of occasions, and always found them to be lacking in comparison to Intel CPUs of the same generation. My latest gaming machine is running an i7-960 (got it cheap from NewEgg) and it works great with all of the games I play.

  • Yea, if I'm trying to render 120 fps then yes, it's a bottleneck. Chances are you only have a 60Hz monitor, so VSync will lock you at 60 fps. Most of the tests ran above 60 fps, with some exceptions on the older CPUs. So you can spend your money on an expensive Intel i7 to render frames you cannot see, or you can buy a cheaper processor and spend the money on a beefy GPU - or fix the real bottleneck, the HDD: switching to an SSD is a better improvement.
    • Frames per second in video games are not all about what you can see. The FPS that a game runs at is directly related to input delay. A game that runs at 30 fps is going to have twice as much input delay as a game that runs at 60 fps, and 4 times the delay of a game that runs at 120 fps. In highly competitive multiplayer games, having an additional 20ms delay on all of your inputs compared to an opponent can make a difference.
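
      For reference, the arithmetic behind that claim (a trivial Python check):

        for fps in (30, 60, 120):
            print("%3d FPS -> %4.1f ms per frame" % (fps, 1000.0 / fps))
        # 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms: dropping from
        # 120 to 30 FPS adds roughly 25 ms before an input can show up on screen.
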
      • That's background-frame delay, and if done properly, background frames don't have to be hard-linked to drawn frames. Typically background frames can be a lot slower than foreground ones, as long as you don't have any dropped frames - which you don't, normally. They also need to be a lot slower when running over a network so that they can remain synced.
  • Tell this to someone who plays Civilization... or SoF.
  • by cathector ( 972646 ) on Thursday August 23, 2012 @06:03PM (#41103095)

    as this article points out it's not the number of frames per second that really matters:
    it's the longest gap between subsequent frames which the eye picks up on.

    you could cram 200 frames into the last 10th of a second, but if the other 0.9 seconds only has 1 frame, it'll feel like 1Hz.

    i typically chart another metric next to traditional FPS which is 1 / (max inter-frame period in one second).
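
    A minimal sketch of that metric (Python, invented timestamps), using the 200-frames-in-the-last-tenth-of-a-second example from the comment:

      def fps_and_worst_case(timestamps):
          # timestamps: frame completion times in seconds within a ~1 s window
          gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
          average = len(gaps) / (timestamps[-1] - timestamps[0])
          return average, 1.0 / max(gaps)

      # one lonely frame, then 200 frames crammed into the last tenth of a second
      burst = [0.0] + [0.9 + i * 0.0005 for i in range(200)]
      avg, worst = fps_and_worst_case(burst)
      print("%.0f FPS average, but worst-case only %.1f 'FPS'" % (avg, worst))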

    • as this article points out it's not the number of frames per second that really matters:
      it's the longest gap between subsequent frames which the eye picks up on.

      you could cram 200 frames into the last 10th of a second, but if the other 0.9 seconds only has 1 frame, it'll feel like 1Hz.

      i typically chart another metric next to traditional FPS which is 1 / (max inter-frame period in one second).

      I don't get the point of this; frames rendered out of sync with the vertical refresh are already garbage. Variability of inter-frame latency, and the correspondingly variable rate, is just another good reason to lock your frame rate to something consistently achievable like 30/60 fps.

      Anything inconsistent and not in sync is just plain dumb.

  • how to tell (Score:4, Interesting)

    by Andrio ( 2580551 ) on Thursday August 23, 2012 @07:01PM (#41103663)
    In a game, look at the sky. If your framerate shoots up, the video card was your bottleneck. If it doesn't, your CPU is.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      In a game, look at the sky. If your framerate shoots up, the video card was your bottleneck. If it doesn't, your CPU is.

      I'm playing Ultima Underworld, you insensitive clod!

    • by Anonymous Coward

      No, not really.

      I assume you are referring to the fact that when you look at the sky the game engine culls (skips rendering) most of the objects in the scene, so the GPU has less to do and, if you are not CPU-bound, the frame rate shoots up. However, when you are not looking at the sky, BOTH the CPU and GPU loads increase, and your test does not reveal which has now become the bottleneck.

      Your test only confirms the obvious: that it takes fewer resources (CPU and GPU) to render the sky than a full scene.

  • Ok, so when I get beautiful StarCraft 2 rendering from my GTX 570M and then there's a big lag (the frame rate goes from 40-50 to 10 fps) because the screen is full of units firing at each other, I need to blame the CPU? I assumed it was Windows 7's fault -- they couldn't even code a multi-core OS properly. (I have a Qosmio X770-11C)
  • when Crysis wouldn't load on my 80286 machine.
