CPUs Do Affect Gaming Performance, After All
crookedvulture writes "For years, PC hardware sites have maintained that CPUs have little impact on gaming performance; all you need is a decent graphics card. That position is largely supported by FPS averages, but the FPS metric doesn't tell the whole story. Examining individual frame latencies better exposes the brief moments of stuttering that can disrupt otherwise smooth gameplay. Those methods have now been used to quantify the gaming performance of 18 CPUs spanning three generations. The results illustrate a clear advantage for Intel, whose CPUs enjoy lower frame latencies than comparable offerings from AMD. While the newer Intel processors perform better than their predecessors, the opposite tends to be true for the latest AMD chips. Turns out AMD's Phenom II X4 980, which is over a year old, offers lower frame latencies than the most recent FX processors."
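To make the frame-latency point concrete, here is a minimal illustrative sketch (not the article's data or tooling; the frame times below are made up): two traces with identical average frame times, and therefore identical average FPS, can have wildly different worst-case frames once you look at a high percentile.

// Illustrative only: hypothetical frame times showing why average FPS hides stutter.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

static double percentile(std::vector<double> ms, double p) {
    std::sort(ms.begin(), ms.end());
    return ms[static_cast<std::size_t>(p * (ms.size() - 1))];
}

int main() {
    // Both traces average 20 ms per frame (50 FPS)...
    std::vector<double> smooth(100, 20.0);
    std::vector<double> stutter(100, 15.0);
    for (std::size_t i = 0; i < stutter.size(); i += 10)
        stutter[i] = 65.0;  // ...but one of them has a 65 ms hitch every 10th frame.

    double sum_smooth = 0.0, sum_stutter = 0.0;
    for (double t : smooth) sum_smooth += t;
    for (double t : stutter) sum_stutter += t;

    std::printf("average frame time: %.1f ms vs %.1f ms\n",
                sum_smooth / smooth.size(), sum_stutter / stutter.size());
    std::printf("99th-percentile frame time: %.1f ms vs %.1f ms\n",
                percentile(smooth, 0.99), percentile(stutter, 0.99));
    return 0;
}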
It's not all graphics (Score:5, Insightful)
Try cranking up the difficulty of an RTS on a not-so-good computer and you'll immediately notice how things slow down.
Re:It's not all graphics (Score:4, Interesting)
Re: (Score:3, Informative)
Re: (Score:2)
Requires software support, like HyperThreading (Score:5, Informative)
This also shows what many of us have been saying which is Bulldozer is AMD's Netburst.
Yes but not for the reason you think. Netburst introduced two things:
- An extremely deep pipeline, which was a stupid idea and ultimately led to Netburst's demise and Core's reboot from the ashes of the Pentium III. That's the thing most people are referring to when comparing the two chips.
- HyperThreading: the ability to run 2 threads on the same pipeline (in order to keep the extremely long pipeline full). That's the part that's similar to Bulldozer's problems.
When HT was introduced, its impact on running Windows software was catastrophic. That is simply due to the fact that Windows was optimized for SMP (Symmetric Multi-Processing), where all CPUs are more or less equal. HyperThreading is far from symmetric: it introduces 2 virtual processors which must share resources with the real one. You have to properly schedule threads so that no real CPU is idle while a virtual core is struggling, and you have to intelligently schedule threads to minimize cache misses. Windows simply wasn't designed for such an architecture and definitely sucked at correctly juggling threads across the virtual processors. Proper Windows support came much later (and nowadays enabling HyperThreading under Windows doesn't come with much of a performance hit).
The "half cores" of Bulldozer are in the same situation. It's also a weird architecture (although less is shared between half-cores), and it requires correctly assigning threads to processors, etc. Again, current Windows (7) sucks at this; you'll have to wait for Windows 8 to see an OS properly optimized for this situation. Until then, the half-core design will come with a huge performance cost.
But that's only in the Microsoft world.
On Linux the situation is different. Besides the Linux kernel being much more efficient at thread and process scheduling, Linux has another advantage: open-source code coupled with shorter release cycles. Thus the latest kernels available already support Bulldozer's special core model.
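To make the scheduling point concrete, here is a minimal sketch (Linux with glibc assumed; the CPU numbers are just an example, the real sibling layout is in /sys/devices/system/cpu/cpuN/topology/) of the kind of placement a topology-aware kernel does for you automatically: keeping two busy threads on separate physical cores or modules instead of letting them pile onto sibling logical CPUs.

// Minimal sketch, Linux/glibc only.  Build with: g++ -O2 -pthread pin.cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>

static void spin() {
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000000000UL; ++i) x += i;  // just burn CPU
}

static void pin_to_cpu(std::thread& t, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (pthread_setaffinity_np(t.native_handle(), sizeof(set), &set) != 0)
        std::fprintf(stderr, "could not pin thread to CPU %d\n", cpu);
}

int main() {
    std::thread a(spin), b(spin);
    // Assumption for this example: logical CPUs 0/1 share one physical core
    // (or Bulldozer module) and 2/3 share another.
    pin_to_cpu(a, 0);
    pin_to_cpu(b, 2);  // a separate physical core, not CPU 0's sibling
    a.join();
    b.join();
    return 0;
}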
The end result is that Bulldozers run much more efficiently under Linux than under Windows (as can be seen from the Linux benchmarks on Phoronix).
And they have decent performance per dollar.
Let's just hope that the recent hire of the former Apple chip designer by AMD can right the ship, because otherwise when I can't score X4s and X6s anymore I'll have no choice but to go Intel.
What will benefit you the most is waiting for a version of Windows that does support the Bulldozer model.
Bulldozer does have some shortcomings, but they are in the process of being ironed out.
Re: (Score:3)
I don't think the GP was trying to draw literal comparisons with Netburst (though you're right, there are some), but was instead simply pointing out that Bulldozer is a terrible misstep in design direction.
Intel went out chasing high numbers; what they got was a chip that clocked moderately high but performed like ass anyway, and sucked power.
AMD went out chasing core count; what they got was a chip that can't hold its own against chips with half as many "cores", and sucks power.
*there* is the parallel.
I was using it as a metaphor (Score:5, Informative)
*there* is the parallel.
There is a parallel in the way people perceive them.
There is a big difference under the hood in practice.
I mean people see both and say "Bulldozer is the new Netburst", just as they say "Windows 8 is the new Vista" or "Vista is the new Windows ME".
But the reasons behind them are fundamentally different.
Netburst sucked and was hopeless. Bulldozer is suboptimal but there's room for improvement.
Intel went out chasing high numbers; what they got was a chip that clocked moderately high but performed like ass anyway, and sucked power.
They got that because they chose a design path with many drawbacks; they sacrificed a lot just for the sake of higher GHz, and Netburst didn't bring much of interest to the table. It could maybe work a little bit today, using the latest process shrinks and advanced cooling to finally hit the 10GHz range where the architecture would be competitive, while still sucking a lot of power.
But back in the Pentium IV days, there was no hope that anything could actually use it efficiently.
It "performed like ass" almost by design, because all the things they neglected ended up biting them in the long run and became hard limits.
The only way to do something better was to scrap the whole thing, move to something simpler, and stop favouring GHz at all costs above everything else, including power consumption.
Which they did: the Core family was built by improving on the older Pentium III.
And they did it again in a way with the Atom family, which is not completely unlike the even simpler and older Pentium, giving an even lower-power end result (though it's difficult to compete with ARM in this range...)
The only way for Intel to get out of that situation was the garbage bin.
The only useful thing that came out of the Netburst architecture was HyperThreading, which was useless back in the Pentium IV era for lack of proper OS support, but worked better when it was reintroduced later in the Core era, simply because Windows had had some time to mature.
AMD went out chasing core count; what they got was a chip that can't hold its own against chips with half as many "cores", and sucks power.
On the other hand, Bulldozers are limited by things which are improvable in the near future.
Some might be design flaws in the silicon, but those are things which can be fixed, and that means in the near future, without counting on some advanced technology 10 years from now to dramatically shrink the process. Part of the "sucks power" problem is fixable in hardware.
(And part of it is fixable by literal "building architecture": AMD is a bit behind, using older processes, simply because it lacks manufacturing plants with the latest technology like Intel's.)
But most of the problems aren't even hardware, they're software.
- The OS and kernel scheduler need to support its peculiar concept of half-cores. There's a dramatic difference *already today* in using Bulldozer between Windows and Linux, because the kernel generation inside Windows 7 predates Bulldozer's release, whereas Linux is not only fucking much more efficient, but support for half-cores was added long ago.
- The software needs to be written to take advantage of Bulldozer, especially by using more cores. But *that is* the current general tendency anyway: toward multiprocessing and multithreading, so that will happen naturally over time. Just look at Google's Chrome: each tab is (for security and sandboxing reasons) a separate, isolated process. It's the most visible and well-known example, but other software follows the same trend. Being of Unix heritage, Linux uses multiprocessing much more heavily and thus has many more use cases where Bulldozer is useful (server tasks are one example).
(Also, in the open-source world, Bulldozer's other advantages are usually only a compiler switch or a library upgrade away, so software can take advantage of them rather quickly.)
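As a rough illustration of the "one compiler switch away" point (GCC 4.6 or newer assumed; the file and program are hypothetical): building with -march=bdver1 targets Bulldozer directly and turns on its FMA4/XOP instruction sets, which the compiler then advertises through predefined macros.

// Build normally:        g++ -O2 check.cpp -o check
// Build for Bulldozer:   g++ -O2 -march=bdver1 check.cpp -o check
#include <cstdio>

int main() {
#if defined(__FMA4__) && defined(__XOP__)
    std::puts("built with Bulldozer's FMA4/XOP instruction sets enabled");
#else
    std::puts("built for a generic x86 target");
#endif
    return 0;
}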
So yeah, a
Re: (Score:2)
I've stuck with the Phenoms because you get great bang for the buck and because it was obvious the "half core" design of BD was crap; now we have it in B&W
You've got it in black and white from an unreliable, fluffy article that looks more like a troll to me.
Re:It's not all graphics (Score:4, Insightful)
This also shows what many of us have been saying which is Bulldozer is AMD's Netburst. I've stuck with the Phenoms because you get great bang for the buck and because it was obvious the "half core" design of BD was crap; now we have it in B&W, with the much older Phenom spanking the latest AMD chips, which cost on average 35-45% more. Let's just hope that the recent hire of the former Apple chip designer by AMD can right the ship, because otherwise when I can't score X4s and X6s anymore I'll have no choice but to go Intel.
You say that like you'll be forced to change your religion or political party. It's a CPU. It's a tool. Use what works best for your use case scenario. Why the fanboi mentality?
Re: (Score:3)
You say that like you'll be forced to change your religion or political party.
He's choosing to invest in the competition. If AMD goes tits-up, you will be paying whatever Intel wants because you will have no alternative. Imagine having only a single mobile service provider. You would not be getting the same plan you have now.
Err (Score:5, Interesting)
Re: (Score:3, Funny)
For years, absolutely nobody has maintained that CPUs have little impact on gaming performance; all you need is a god-tier video card setup, and a game engine that magically handles everything via GPU.
There, I fixed it.
Seriously, this has to be the most nonsensical Slashdot summary I've read all day. CPU hasn't been a minor factor in gaming for several gaming aeons now, and there is no shortage of games that are critically dependent on it (Hi, Skyrim!).
Re:Err (Score:5, Informative)
*shrug*
I've been running a Q6600 for several years, and only replaced it last month. That's a July 2006 CPU. It didn't really seem strained until the very most recent crop of games... and yes, sure, it's a quadcore, but game CPU logic hasn't been heavily parallelized yet, so a fast dualcore will still be better for most gamers than a quadcore - and the Q6600 is pretty slow by today's standards (2.4GHz, and with a less efficient microarchitecture than the current breed of Core CPUs).
Sure, CPUs matter, but it's nowhere near a case of "you need the latest generation of CPU to run the latest generation of games!" anymore. Upgrading to an i7-3770 did smooth out a few games somewhat, but I'm seeing far larger improvements when transcoding FLAC albums to MP3 for my portable MP3 player, or compiling large codebases :)
Re: (Score:3)
I had a dualcore (E6600) for 5 years, and pretty much every new game in the past 3 years can use two or more cores. Even if it's just two, you have to consider the other programs running in the background; for example, in Bad Company 2, PunkBuster had a bug where after a few minutes it would use 20-30% of the CPU, and the game itself uses ~90% of the CPU, so because of PunkBuster there was a lot of stuttering. Now I have a sixcore (3930K), and yeah, maybe six cores are too much for games, but some of them like BF3 can already us
Re:Err (Score:4, Interesting)
PunkBuster spiking to 20-30% CPU is, as you mentioned, a bug - it is not the norm. And while people won't be shutting down every background process to play a game, they don't tend to run anything heavy while gaming. And all the regular stuff (web browser with a zillion tabs loaded, email client, IM client, torrent client, ...) is pretty negligible CPU-wise.
I personally haven't run into games that can utilize more than two cores (please let me know if they're out there!), and even then there have usually been synchronization issues that keep the game from reaching 100% core utilization, even on the slower cores. Parallelizing stuff is hard, and outside of the core graphics pipeline (which runs mostly on the GPU), there's so much stuff that needs to run in strict order in a game engine. I sure do hope clever programmers will think of improvements in the future, though, since we'll hit the GHz wall sooner or later - and then we'll need to scale on the number of cores.
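For what it's worth, here is a minimal sketch of that kind of per-frame task splitting (the subsystem functions are hypothetical placeholders, not any real engine's API): independent work fans out to other cores, but the frame still has a serial join point before rendering, which is exactly why core utilization rarely hits 100%.

// Build with: g++ -std=c++11 -O2 -pthread frame_jobs.cpp
#include <cstdio>
#include <future>

static void update_ai()      { /* pathfinding, decision making */ }
static void update_physics() { /* collision, integration */ }
static void update_audio()   { /* mixing, streaming */ }
static void render_frame()   { /* build draw calls for the GPU */ }

static void run_one_frame() {
    // These three subsystems are assumed independent, so they can overlap.
    auto ai      = std::async(std::launch::async, update_ai);
    auto physics = std::async(std::launch::async, update_physics);
    auto audio   = std::async(std::launch::async, update_audio);

    ai.get();       // serial join point: rendering needs the updated world,
    physics.get();  // so the frame can't finish before the slowest task does
    audio.get();

    render_frame();
}

int main() {
    for (int frame = 0; frame < 3; ++frame) run_one_frame();
    std::puts("done");
    return 0;
}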
As things are right now, I'd still say a faster dualcore is more bang for the buck than a slower quadcore, gamewise - but that might change before long. And considering that the current crop of CPUs can turbo-boost a couple of cores when the other cores are inactive, it's obviously better to shop for a quadcore than a dualcore - but with the current crop of games, you'd effectively be using the CPU as a faster dualcore when not running intensive background stuff :-)
You can't really compare the consoles directly to x86 CPUs, btw, the architecture is radically different - moreso on the playstation side than the xbox (and let's ignore the original xbox here, for obvious reasons :)). I wonder if Sony is going to keep up their "OK, this is pretty whacky compared to the commodity multicore stuff you're used to, but it's really cool!" approach, or if they'll settle for something "saner".
Re: (Score:2)
For the consoles, I was talking about the fact that 90% of PC games are console ports right now, so with a new generation of consoles the PC versions should also be better optimized; otherwise we will have a lot of games that run at 20 fps.
Re: (Score:2)
And while people won't be shutting down every background process to play a game, they don't tend to run anything heavy while gaming.
I was quite interested in TFA's data on performance while transcoding video, as I do that quite often myself. Their data mirrors my own anecdotal experiences...a low-priority video encode won't hurt much if you have a decent number of cores.
And all the regular stuff (web browser with a zillion tabs loaded, email client, IM client, torrent client, ...) is pretty negligible CPU-wise.
One of the things that does kill performance for me is moderately heavy background disk activity. Download-speed activity isn't a big deal, but a few GB of robocopy across the LAN will bring a lot of games to a halt for a second or two.
Re: (Score:2)
I still have a Q6600 in my gaming system. It's a solid CPU. Replacing it would require replacing the motherboard, and I can't really justify that at this point -- things run really well (I have a GeForce 550 Ti graphics card, which handles essentially all the games I want to play, including modern ones, with aplomb).
Once I start running into performance issues, I may upgrade, but that'll probably be in another year or two.
Re: (Score:2)
*shrug*
I've been running a Q6600 for several years, and only replaced it last month. That's a July 2006 CPU. It didn't really seem strained until the very most recent crop of games... and yes, sure, it's a quadcore, but game CPU logic hasn't been heavily parallelized yet, so a fast dualcore will still be better for most gamers than a quadcore - and the Q6600 is pretty slow by today's standards (2.4GHz, and with a less efficient microarchitecture than the current breed of Core CPUs).
Sure, CPUs matter, but it's nowhere near a case of "you need the latest generation of CPU to run the latest generation of games!" anymore. Upgrading to an i7-3770 did smooth out a few games somewhat, but I'm seeing far larger improvements when transcoding FLAC albums to MP3 for my portable MP3 player, or compiling large codebases :)
I had to can my dual-core E8600 when Black Ops came out, so I beg to differ on that. I only upgraded to a Q9650 I found second hand, and it made a world of difference. The old dual-core chip stuttered horrendously at 1920x1200 with a GTX 480 for the first minute or so of a multiplayer match. I am guessing the game was trying to load the textures after I had started playing or something, but I tried everything I could think of to fix it before spending any money. I even tried overclocking the E8600 up to 3.5 o
Re: (Score:2)
I'm still running a Q6600 on my main box with a nVidia 560 Ti GPU. For most GPU bound games (read, shooters) I have no issues with it at all. For RPGs like Skyrim it tended to get CPU bound, but not so bad that I felt I had to update it today, and it played the Guild Wars 2 beta much better than my laptop with an i7 2630 and nVidia 560M (better CPU, GPU is 50%+ slower than the Ti and quite a bit slower than the 560, despite the similar name).
Re:Err (Score:4, Interesting)
For years, absolutely nobody has maintained that CPUs have little impact on gaming performance; all you need is a god-tier video card setup, and a game engine that magically handles everything via GPU.
There, I fixed it.
Seriously, this has to be the most nonsensical Slashdot summary I've read all day. CPU hasn't been a minor factor in gaming for several gaming aeons now, and there is no shortage of games that are critically dependent on it (Hi, Skyrim!).
Check out your favorite hot-deals web site. The mantra is: a Celeron or any old AMD chip made in the last 5 years plus a solid GPU = goodness. I could point you to dozens of threads where this is the de facto standard.
But that's what you get when you combine cheap with minimal knowledge. Eventually everyone becomes convinced that it's true.
Re: (Score:2, Redundant)
Um, it is true. Frame latency doesn't even matter. It's less than 1ms in ALL cases, i.e., it's imperceptible.
I just bought an FX-4100 purely because it was cheap, had ENOUGH power, and with an excellent video card setup a better Intel chip wouldn't provide any sort of noticeable performance increase. Current-gen CPUs so far overpower current-gen game engines' CPU requirements that this argument is just plain silly.
I even see someone making the argument that AI is causing massive CPU load.... get fucking real, AI ha
Re:Err (Score:5, Interesting)
If you read the charts, the assertion that 'CPU doesn't matter' is kind of true in a lot of cases.
It's not that it doesn't matter at all, but the difference between an 1100-dollar Sandy Bridge i7-3960 and a 200-dollar 2500K, even though there is almost a factor of 2 difference in performance between them side by side (http://www.cpubenchmark.net/high_end_cpus.html), is less than 10% in games. Now those processors are still *way* better than the AMD offerings, unfortunately, and the AMD processors are in many cases so bad that it becomes the dominant problem.
The new "bulldozer" architecture from AMD is a disaster, in just about every way. They're terrible. The charts clearly show that.
The video card makers (more than the review sites) have correctly pointed out that performance is much more likely to be GPU gated than CPU gated, or, if it's a problem like the one I'm working on now, gated on a single CPU core by an algorithm that doesn't neatly parallelize - so more cores don't do anything. If you're given a choice between a 1000 dollar CPU or a 600 dollar one from the same company, odds are you won't be able to tell the difference, so in that sense they're reasonably correct; there's virtually no benefit to buying an extreme CPU or the like if your primary goal is gaming performance. If you're talking about the best use of say 1000 dollars to build a gaming PC, well then the cheapest i5 you can find with the best video card you can afford is probably the best bang for your buck.
As someone above said, an RTS like StarCraft is more likely to be CPU limited than GPU limited.
What this tells us is that AMD processors are terrible for gaming, but there's virtually no difference which FX processor you buy (don't buy those, though; if you're buying AMD, buy a Phenom), and within the Intel family there is, again, virtually no difference across a factor of 4 or 5 in price.
What they didn't look at (because you don't really benchmark it) is load times; I think the FX processors have a much faster memory subsystem than their Phenom counterparts if you have a good SSD, but otherwise someone should take a bulldozer to Bulldozer.
If we were to revisit the oft-used car analogy for computing, it's a fair assertion that which brand of car you buy won't help you get to work any faster day to day; slightly better cars with faster pickup etc. will have a small (but measurable) benefit, but that's about it. Well, unless you buy a Land Rover or a BMW 7 Series (http://www.lovemoney.com/news/cars-computers-and-sport/cars/12461/the-country-that-makes-the-most-reliable-cars, http://www.reliabilityindex.com/ ), at which point you should budget time into your schedule for the vehicle to be in the shop.
Re: (Score:2)
I wonder if this same logic applies to browser performance? As they become more graphical and video-oriented, will the GPU power matter more than the CPU?
Maybe I didn't need a new computer..... maybe I just needed to keep the Pentium 4 and upgrade the graphics card to something fast. Then I could play HD YouTube.
Re: (Score:2)
Re: (Score:2)
on an OS that supports it. No GPU acceleration on Windows XP generally, and older flavours of linux are the same deal.
Re: (Score:3)
Flash 11.1 supports GPU acceleration on XP; the current version of the Chrome-embedded Flash object, however, does not. I found this out during the Olympics: the 720p feeds were jumpy as heck in Chrome but fairly smooth in Firefox.
Re: (Score:2)
flash only supports acceleration for movie decoding (so of course that does apply to youtube, but basically nothing else other than porn sites).
Re: (Score:3)
I wonder if this same logic applies to browser performance
In Windows 8 it definitely will; Windows 7 and Linux, not so much. GPU acceleration is becoming more and more popular because GPUs are able to solve one type of problem significantly better than CPUs. If you can split your problem up into the rendering problem and the logic problem, the CPU becomes a lot less important, assuming it's fast enough to keep up with the GPU for whatever problem you have.
General-purpose GPU acceleration isn't in widespread use on any OS, although MS is doing so with t
Re: (Score:2)
Wise words.
Just one thing: whether disk speed matters or not depends a lot on the game, and whether it's the "we have a fixed memory profile, and load all assets to memory while loading a level" or "we stream stuff as necessary" type. For instance, for Far Cry 2, it made pretty much no difference whether I had the game files on a 2x74gig Raptor RAID-0 or on a ramdisk. For a lot of engines, there's all sorts of things going on... Disk I/O, some CPU crunching, some sysmem->gpumem transfers, some gpu crunch
Re: (Score:2)
Ya, disk speed is a hard one to benchmark, which is why I pointed at loading times; that's where it makes the most difference, not 'in game' activities. Well, that and just general system behaviour.
Re: (Score:3)
Re: (Score:2)
No, the Phenoms aren't actually terrible; they're behind where the equivalent-generation i5s are, but, bizarrely, they're ahead of the successor FX parts (FX is supposed to be a newer, better microarchitecture than Phenom).
It depends how you define 'higher end' here too. An i5-2500K is a 200-dollar processor for the OEM version at retail; now, that's not full system cost, you'd need a mobo and RAM to go with that, but the Phenom X6 is a 150-160 dollar part at retail and is maybe 2/3rds the overall performance
Re: (Score:3)
150 dollars is the difference between a 500-dollar i5 kit and a 340-dollar Phenom kit. The tigerdirect.ca site has Phenom kits for 250 bucks and i5-2500 kits for 400-450.
But looking at what the specs of the PS4 is supposed to be frankly
whatever you're looking at is probably wrong. We teach game development where I am, and we have very good relations with the Console developers nearby and none of them have any idea what the PS4 is going to actually be. There are a lot of good theories on what it could be, or should be, but no one knows what it will be.
I figure we've got another year to a year and a half before I'll swap out the GPUs, and with 2 hexacores and a quad I doubt we'll be needing new systems for a good 5 years or more.
Depends very much
Re: (Score:2)
What this tells us is that AMD processors are terrible for gaming
No, it tells us that AMD processors are a little worse for gaming, not "terrible". On the other hand, if more cores matter to you, and they do to me, AMD still looks like good bang for the buck.
Re: (Score:2)
Oh, and if you want to talk about terrible, Intel's Atom is terrible. I regret wasting any money at all on those brain-challenged, hot-running turds. The Atom fiasco single-handedly killed off pretty much the entire netbook market.
Re: (Score:2)
If you're talking about the best use of say 1000 dollars to build a gaming PC, well then the cheapest i5 you can find with the best video card you can afford is probably the best bang for your buck.
I just built myself a nice shiny new gaming PC, as my old Core 2 Quad Q9650 decided to go pop a few weeks ago and I gave up trying to resurrect it.
I looked at the prices, and it seems that a low-end socket 2011 Sandy Bridge CPU is actually pretty reasonable, so you should be able to put together a gaming PC featuring one for under $1000. The 3820 is only $300. Throw in some memory, a motherboard and a mid-range graphics card and you get up to $785.96 on Newegg :)
Most people seem to discount socket 2011 Sandy Bridge stuff based
Re: (Score:2)
I do lab projects for a living; lab projects that aren't working should stay in the lab.
It seems like Piledriver is 15% faster clock-for-clock than first-gen Bulldozer, but that still only brings it up to Gulftown levels of performance. Being one step behind Intel has always been AMD's thing, and there's nothing wrong with that place in the market, but Bulldozer looks like it has put them at least two, and possibly 3, steps behind Intel, given that we could see Haswell by March of next year.
Also, and as the wikipedia a
What. What?! (Score:5, Interesting)
Who thought that CPUs didn't bottleneck gaming performance? Who ever thought that? Only the smallest of tech demos use only GPU resources - every modern computer/console game I'm aware of runs, well, some regular program that needs a CPU to execute its instructions and is inherently limited by the clock cycles and interrupts tied to those CPUs.
GPUs only tend to allow you to offload the straight-shot parallelized stuff - graphics blits, audio, textures & lighting - but the core of the game logic is still tied to the CPU. Even if you aren't straining the limits of the CPU in the final implementation, programmers are still limited by its capacity.
Otherwise, all our games would just be done with simple ray-traced logic using pure geometry and physics; there would be no limits on the number or kind of interactions allowed in a game world, game logic would be built on unlimited tables of generated content, and we'd quickly build games of infinite recursion simulating all known aspects of the universe, far beyond the shallow cut-out worlds we develop today.
But we can't properly design for that - we design for the CPUs we work with, and the other helper processors have never changed that.
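As a rough sketch of that division of labour (the work functions are hypothetical placeholders): the simulation step is pure CPU work, and even the "GPU side" has a CPU cost for building and submitting draw calls, so timing the two halves per frame shows which side is actually binding you.

// Hypothetical frame loop; the two work functions stand in for real engine code.
#include <chrono>
#include <cstdio>

static void simulate_world()    { /* AI, physics, scripting: pure CPU work */ }
static void submit_draw_calls() { /* CPU-side cost of feeding the GPU */ }

int main() {
    using clock = std::chrono::steady_clock;
    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = clock::now();
        simulate_world();
        auto t1 = clock::now();
        submit_draw_calls();
        auto t2 = clock::now();
        std::printf("frame %d: logic %.3f ms, submission %.3f ms\n", frame,
                    std::chrono::duration<double, std::milli>(t1 - t0).count(),
                    std::chrono::duration<double, std::milli>(t2 - t1).count());
    }
    return 0;
}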
Ryan Fenton
Re: (Score:2)
I will take a mediocre CPU with a kick-ass GPU over the other way around. Sure, I have an underclocked Phenom II at just 2.6GHz, but with the ATI 7870 I plan to get, it will blow away a Core i7 Extreme with HD 4000 graphics by several hundred percent!
GPU is where it's at with games. Just like with Windows, an SSD makes a bigger difference than a faster CPU when booting up.
Re: (Score:2)
GPUs only tend to allow you to offload the straight-shot parallelized stuff - graphics blits, audio, textures & lighting - but the core of the game logic is still tied to the CPU. Even if you aren't straining the limits of the CPU in the final implementation, programmers are still limited by its capacity.
Your theory is basically valid, but the practical reality and the empirical evidence of the last, I dunno, 20 years or so, is that the graphics processing takes a significant amount of computing power. There's a reason that virtually every computer and every game console has a dedicated GPU. For that matter, a dedicated sound processing chip. It's all offloaded and the APIs have improved to the point that it doesn't seem like much work, but those specialized chips are burning an awful lot of power.
For a
For years? (Score:5, Interesting)
I don't recall ever reading on any PC hardware site anyone claiming that the CPU doesn't matter and all you need is a good graphics card. How on earth did anyone ever successfully submit that story?
Re: (Score:2)
Interesting research - poor Slashdot title (Score:5, Interesting)
The research into frame-rate latencies is really interesting, but the whole idea that *anyone* knowledgeable about PC gaming would have *ever* denied that the CPU was an important factor in performance is ridiculous. I am a consultant at a boutique PC builder (http://www.pugetsystems.com/) and I have always told gamers they want to get a good balance of CPU and GPU performance, and enough RAM to avoid excessive paging during gameplay. Anything outside of that is less important... but to ignore the CPU? Preposterous!
Then again, it is a Slashdot headline... I probably should expect nothing less (or more)!
Re: (Score:3)
Re: (Score:3)
> The research into frame-rate latencies is really interesting,
Indeed. There was a VERY interesting article last year on Micro-Stuttering And GPU Scaling In CrossFire And SLI
http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,2995.html [tomshardware.com]
> but the whole idea that *anyone* knowledgeable about PC gaming would have *ever* denied that the CPU was an important factor in performance is ridiculous.
Not exactly. Battlefield 3 doesn't use more than 2 cores.
http://www.bit-tech.net/hardware/2011/1 [bit-tech.net]
Re: (Score:2)
Sorry, forgot a link:
http://www.tomshardware.com/reviews/battlefield-3-graphics-performance,3063-13.html [tomshardware.com]
i7-2600K
4 cores: 80.57 fps
3 cores: 81.07 fps
2 cores: 80.76 fps
1 core: doesn't start
Re: (Score:2)
It's not that it doesn't matter AT ALL; it's that any mid-range CPU (say, a 2500K or FX-4100) with a high-end graphics card (7870+) will do any current video game on max settings without breaking a sweat, and will continue to do so for a couple of years while only swapping the video card.
I can't even begin to understand someone at a performance computer store that would recommend a "balanced approach". At that point you'd be recommending that if someone wants a 7970 they need a fucking $800 i7 to go with it, when th
FTFY (Score:5, Insightful)
For years, stupid PC hardware sites have maintained that CPUs have little impact on gaming performance; all you need is a decent graphics card. That position is largely supported by FPS averages, as most GPU tests are run using the most powerful CPU to prevent the CPU from being the limiting factor, but the FPS metric doesn't tell the whole story. Examining individual frame latencies better exposes the brief moments of stuttering that can disrupt otherwise smooth gameplay. Those methods have now been used to quantify the gaming performance of 18 CPUs spanning three generations by some site that really has nothing better to do than to restate the obvious for morons. [ed: removed fanboy-baiting statements from summary]
Re: (Score:2)
Re: (Score:2)
No, I'm not saying it's factually incorrect. I'm saying that the way they put it into the summary was misleading flamebait.
A simple logical analysis shows that the primary factor in latency is instructions-per-clock and clock speed (core count matters as well for applications with multithreaded rendering, but those are surprisingly few). The Phenom II series was good at both. The Sandy Bridge/Ivy Bridge Intel processors are also good at both, even a bit better. Bulldozer, unfortunately, went the Pentium IV
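As a back-of-the-envelope illustration of that point (the instruction count and IPC figures below are made-up guesses, not benchmark data): if a frame's game logic costs a roughly fixed number of instructions, the CPU time per frame is about instructions / (IPC x clock), which is why those two factors dominate frame latency on mostly single-threaded paths.

// Illustrative arithmetic only; none of these numbers are measurements.
#include <cstdio>

int main() {
    const double instructions_per_frame = 50e6;  // hypothetical per-frame workload
    const double ipc_a = 1.2, clock_a = 3.3e9;   // "older chip" guess
    const double ipc_b = 1.8, clock_b = 3.4e9;   // "newer chip" guess
    std::printf("chip A: %.2f ms of logic per frame\n",
                1e3 * instructions_per_frame / (ipc_a * clock_a));
    std::printf("chip B: %.2f ms of logic per frame\n",
                1e3 * instructions_per_frame / (ipc_b * clock_b));
    return 0;
}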
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
*looks at current laptop* Core i7 3610QM
*looks at wreckage of last laptop* Core 2 Duo P8400
*looks at primary desktop* dual Xeon 5150s
*looks at secondary desktop* Athlon 900
Yeah, if I were going to accuse myself of fanboyism, I think I'd accuse myself of fanboying for *Intel*, not AMD. Now granted, I've got a few more AMD-based builds under my belt, but I've either given them away (the Phenom X3 build) or accidentally fried them (the old Athlon XP build).
In all honesty, though, both companies have their good
Re: (Score:2)
Ah, nice to hear - your redacted summary just gave another impression.
Been through both sides myself, depending on what made most sense at the time - first box I owned was a 486dx4-100, obviously AMD. Current rig is third intel generation in a row, though - AMD haven't really been able to keep up (except for the budget segment) since Intel launched Core2, imho. Which is kinda sad - while I kinda would have liked to see x86 die and "something better" emerge rather than getting x86-64, at least AMD obliterate
Not a game player, but (Score:2)
This should be obvious to anyone who has done any realtime/interactive graphics programming. As the frame rate gets higher, the amount of time the CPU has to process the next frame gets smaller. It also becomes more difficult to utilise the CPU fully unless you are willing to add a couple of frames of latency and generate frames ahead of time, which I'd speculate is not ideal for a game-type application.
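A tiny worked example of that shrinking budget (plain arithmetic, nothing engine-specific): the time available per frame is just 1000 ms divided by the target frame rate.

#include <cstdio>

int main() {
    const int targets[] = {30, 60, 120, 144};
    for (int fps : targets)
        std::printf("%3d FPS target -> %.2f ms per frame for everything\n",
                    fps, 1000.0 / fps);
    return 0;
}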
Why should I care? (Score:2)
My current rig, which I built in 2007 and upgraded once in a while, has decent gaming performance, even though I haven't put any money into it in 2 years or so... still on a GeForce 450. :)
Calm down please
agree to disagree (Score:2)
Re: (Score:2)
time for upgrade? (Score:5, Funny)
so... i should finally give in and buy the coprocessor for my 386!!
Re: (Score:2)
I think it's just a fad.
Wait and see.
Re: (Score:2)
Did any games support math coprocessors back in the math coprocessor days? My impression was that they were for office apps, Lotus etc.
Re: (Score:2)
I dunno, but I could see it being exploited for the additional registers, and doing a floating point op at the same time as an executing loop.
It might also have been useful when doing software blitting on non-accelerated cards.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Wing Commander 3? I think that helped!!!
Bluehair was the best captain
Re: (Score:2)
You must have had a 486 DX50. (No, not a DX2/50, I mean a DX50.)
It ran internally at 50MHz, at a 1x multiplier. Old DOS games expecting a 33MHz bus clock would go at warp speed! :D
(The DX2/50 used a 2x multiplier and had a bus speed of 25MHz. It was a lot cheaper than a real DX50.)
Re: (Score:2)
Re:time for upgrade? (Score:4, Funny)
Yep, nothing like being able to calculate 1+1=3 quickly. Err...
Re: (Score:2)
if only you had the math-co...
Re: (Score:2)
Nah, I've got a TSR[1] that emulates an 8087. It totally speeds up Doom! ...actually, it really did on my 486SX-25, maybe 2-4 FPS. No, I don't know why; maybe without a co-pro present Doom would use emulated 387 instructions that were less efficient than emulating a simpler 8087.
[1] it was called EM87.
Re: (Score:2)
YES!! i had forgotten the turbo button, money saved... EXCELLENT :)
what? (Score:2)
CPU speed solves stuttering and lag
Hard drive speed solves long load times
Memory amount decreases the frequency of load times (memory speed, despite what many think, has relatively little to do with performance, as even the slowest memory is far faster than any other component of the system)
GPU speed/memory amount affects quality of graphics settings and frame rate when those settings are turned on (i.e. you can check mo
Suggested tag for story: (Score:2)
Minecraft (Score:2)
Minecraft: I know it's not the best-optimized game, but I'm pretty sure it still uses hardware. I have had an Nvidia GTX 275 forever, through many CPUs. When playing Minecraft with an older quad-core Intel CPU (can't remember the model number) I would get around 30 FPS at medium settings; after upgrading to an i7 with the same video card, my Minecraft frame rate is around 90 FPS with the same settings.
So I can attest empirically that "CPU matters" is in fact the case. Also games like ARMA2, Supreme Commander
Re: (Score:2)
I could have... (Score:2)
I will say though, that for a while, RAM was a major player.
They don't know basic chip arch? (Score:2, Interesting)
Hm. First there is:
"...The FX-4170 supplants a lineup of chips known for their strong value, the Athlon II X4 series. Our legacy representative from that series actually bears the Phenom name, but under the covers, the Phenom II X4 850 employs the same silicon with slightly higher clocks."
and then:
"Only the FX-4170 outperforms the CPU it replaces, the Phenom II X4 850, whose lack of L3 cache and modest 3.3GHz clock frequency aren't doing it any favors."
How can I trust them if they are unaware of basic stuff
Re: (Score:2)
Re: (Score:2)
VINDICATION!! (Score:2)
I have been saying this for years, but have never had any data to back it up. For me it has always been a "seat of the pants" sort of metric. Over the last decade I have tried AMD CPUs on a number of occasions, and always found them to be lacking in comparison to Intel CPUs of the same generation. My latest gaming machine is running an i7-960 (got it cheap from NewEgg) and it works great with all of the games I play.
CPU still isn't a bottle neck. (Score:2, Offtopic)
Re: (Score:2)
Re: (Score:2)
civ4? (Score:2)
FPS is not the right metric (Score:3)
As this article points out, it's not the number of frames per second that really matters:
it's the longest gap between consecutive frames that the eye picks up on.
You could cram 200 frames into the last tenth of a second, but if the other 0.9 seconds only has 1 frame, it'll feel like 1 Hz.
I typically chart another metric next to traditional FPS, which is 1 / (max inter-frame period within one second).
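Here is a minimal sketch of that metric (the timestamps encode the hypothetical 200-frames-in-the-last-tenth scenario above): report 1 / (longest gap between consecutive frames within the second) alongside the traditional FPS count.

// Illustrative only: hypothetical frame timestamps for one second of gameplay.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> t;
    t.push_back(0.0);                        // one lonely frame early on
    for (int i = 1; i <= 200; ++i)
        t.push_back(0.9 + 0.1 * i / 200.0);  // 200 frames crammed into the last 0.1 s

    double max_gap = 0.0;
    for (std::size_t i = 1; i < t.size(); ++i)
        max_gap = std::max(max_gap, t[i] - t[i - 1]);

    std::printf("traditional FPS this second: %zu\n", t.size());
    std::printf("1 / (max inter-frame gap): %.1f  (feels like ~1 Hz)\n", 1.0 / max_gap);
    return 0;
}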
Re: (Score:2)
As this article points out, it's not the number of frames per second that really matters:
it's the longest gap between consecutive frames that the eye picks up on.
You could cram 200 frames into the last tenth of a second, but if the other 0.9 seconds only has 1 frame, it'll feel like 1 Hz.
I typically chart another metric next to traditional FPS, which is 1 / (max inter-frame period within one second).
I don't get the point of this; frames rendered out of sync with the vertical refresh are already garbage. Variability of inter-frame latency, and the correspondingly variable rate, are just another good reason to lock your frame rate to something consistently achievable like 30/60 fps.
Anything inconsistent, and not in sync is just plain dumb.
how to tell (Score:4, Interesting)
Re: (Score:2, Funny)
In a game, look at the sky. If your framerate shoots up, the video card was your bottleneck. If it doesn't, your CPU is.
I'm playing Ultima Underworld, you insensitive clod!
not entirely accurate (Score:2, Interesting)
No, not really.
I assume you are referring to the fact that when you look at the sky, the game engine culls (skips rendering) most of the objects in the scene; therefore the GPU has less to do, and if you are not CPU-bound the frame rate shoots up. However, when you are not looking at the sky, BOTH the CPU and GPU loads increase, and your test does not reveal which has now become the bottleneck.
Your test only confirms the obvious: that it takes less resources (CPU and GPU) to render the sky than a full scene.
Starcraft 2 on Core i7 laptop (Score:2)
I knew this was a problem (Score:2)
Re: (Score:2)
Mine is a 2600 also! But mine is made by Atari because I wanted a system made for gaming. I'm sure they're pretty much the same thing though.
Re: (Score:3)
They said frame *latencies* increased with the FX..... not that the frame rates went down.
Re: (Score:2)
I think part of the point of the article was that frame rates and frame latencies are not equivalent.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
We also have eyes that track and focus, and thus reduce the blurriness of moving objects that we are looking at. For us, when we look at a fast-moving object, the object is sharp and in focus while the background might be blurred. But if we look at the background, the background is sharp and the object is blurred.
Whereas if you blur stuff before displaying it, there's no wa
Re: (Score:2)
I much prefer motion blurring to happen in my eyes instead of in shitty software.
Re: (Score:2)
I may be mistaken, but I think they mean "Frames per Second."
Or did the AC just "whoosh" my ass?