Is StarCraft II Killing Graphics Cards?

An anonymous reader writes "One of the more curious trends to emerge from last week's StarCraft II launch is the allegation that the game kills graphics cards. The between-mission scenes aboard Jim Raynor's ship aren't framerate-capped. These scenes are fairly static and take little work for the graphics card to display, so the card renders them as quickly as possible, working at its full potential. As the pipelines within the graphics card work overtime, the card heats up, and if it can't cope with that heat it will crash."
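
For context, the kind of frame limiter the submitter says those scenes lack is only a few lines of code. A minimal sketch in Python (the loop structure and the render_frame callback are illustrative assumptions, not Blizzard's actual code):

    import time

    TARGET_FPS = 60
    FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 fps

    def run_capped(render_frame):
        # Render, then sleep away whatever is left of the frame budget,
        # so the GPU idles between frames instead of redrawing a mostly
        # static scene flat-out.
        while True:
            start = time.perf_counter()
            render_frame()
            elapsed = time.perf_counter() - start
            if elapsed < FRAME_BUDGET:
                time.sleep(FRAME_BUDGET - elapsed)
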
This discussion has been archived. No new comments can be posted.

  • Ridiculous. (Score:5, Insightful)

    by Mr EdgEy ( 983285 ) on Monday August 02, 2010 @09:41AM (#33109336)
    How about timedemos for FPS games? Benchmarking your card? Tools used for overclocking that actually stress the card? These GPUs are designed to operate at max temp. Many games run with no FPS cap unless vsync is enabled. This is a complete non-issue.
  • Design issue? (Score:1, Insightful)

    by DaveV1.0 ( 203135 ) on Monday August 02, 2010 @09:42AM (#33109346) Journal

    This sounds more like a design issue with the cards than an issue with StarCraft 2. If the card cannot handle performing at its full potential, then the card was under-engineered in the first place.

  • by Dragoniz3r ( 992309 ) on Monday August 02, 2010 @09:43AM (#33109366)
    Yes, they do. It is quite standard practice for games to render uncapped. This story is just FUD and trolling. I would've expected it to come from kdawson, but apparently I gave Taco too much credit.

    To clarify my stance: This story is retarded, and all the time you look at it/think about it is time you won't get back.
  • *Cough* (Score:2, Insightful)

    by Lije Baley ( 88936 ) on Monday August 02, 2010 @09:44AM (#33109370)

    Bullshit.

  • by Anonymous Coward on Monday August 02, 2010 @09:45AM (#33109382)

    Only lazy firmware development lets hardware do that. The fault is never the game's; it's the driver's (unless the program somehow turns off the fan).

  • Already dead (Score:3, Insightful)

    by KirstuNael ( 467915 ) on Monday August 02, 2010 @09:45AM (#33109384)

    A graphics card that can't handle working at its full potential is already dead (as designed).

  • Seriously..? (Score:5, Insightful)

    by Fusione ( 980444 ) on Monday August 02, 2010 @09:47AM (#33109408)
    Story title should read: "Faulty video cards with inadequate cooling freeze when run at their full potential." This has nothing to do with StarCraft 2, other than it being a video game that runs on a video card.
  • by RealGene ( 1025017 ) on Monday August 02, 2010 @09:47AM (#33109416)
    Sounds like a design defect in the card, not the game.
  • by Richard_at_work ( 517087 ) on Monday August 02, 2010 @09:48AM (#33109428)
    It's hardly "StarCraft II Killing Graphics Cards"; it's "Shitty Graphics Cards Dying Because Of Lack Of Self-Moderation When Running At Full Speed." But I guess the second version doesn't include a much-hyped game in the title...
  • Re:Uhh... (Score:3, Insightful)

    by Delwin ( 599872 ) * on Monday August 02, 2010 @09:48AM (#33109442)
    That's the whole point of the article.
  • by Nimey ( 114278 ) on Monday August 02, 2010 @09:50AM (#33109476) Homepage Journal

    OMG NEW HIGHLY ANTICIPATED TITLE KILLZ0RZ YOUR COMPUTAR!!!

    No, if your machine is crappy, this exposes that you've got cooling or power problems, or both. You should see to fixing them.

    In '94 I had a 486SX-25 that would choke and die from time to time when playing Doom in multiplayer. It wasn't that the game KILLZ0RED MY COMPUTAR, it was that the CPU couldn't keep up with everything. Sticking a DX2-50 Overdrive into the socket solved that problem.

  • by Sycraft-fu ( 314770 ) on Monday August 02, 2010 @09:51AM (#33109482)

    I fail to see how rendering a simple scene at a high framerate would be any more challenging than rendering a complex scene at a lower framerate. Remember that the hardware, the ROPs, the shaders, and so on, is either in use or it is not. There is no magic property of a simple scene that makes a card work extra hard.

    So my bet is you have users that have one or more things happening:

    1) They are overclocking their cards. This is always a potential source of problems. When you push something past its spec, you may find it has problems in some cases.

    2) Their airflow sucks. They have inadequate ventilation in their case for their card.

    3) Their PSU is inadequate for their card. High-end graphics cards draw a lot of current on the 12V rail. If you have a PSU that can't supply it, well, then maybe you have some problems in intense games.

    Really, this sounds no different than the people who OC their processor and claim it is "perfectly stable" but then claim that Prime95 or LinX "break it." No, that means it is NOT perfectly stable, that means you have a problem. Doesn't mean the problem manifests with everything, but it means that you do have a problem that'll show up sometimes.

    I'm betting it is the same thing here. It isn't that SC2 is "killing" their card, it is that their card has a problem and SC2 is one of the things that can reveal it. There are probably others too.

    So if your system is crashing in SC2, disable any overclocking, make sure you've got good ventilation (which may mean a new case), and make sure you have a PSU that supports your graphics card, including providing dedicated PCIe power connectors sufficient for it. Don't blame the software for revealing a flaw in your system.

  • If a process, like a webserver, could be made to erase itself from the hard drive by benign input, that would be a bug. This is no different.

    My graphics card, a GTX 275, was factory-locked to a 40% duty cycle on the fan, no matter how hot it got. I had to resort to RivaTuner to make the fan auto-adjust its speed based on temperature. There is no speed limit at which rendering too many frames per second puts anyone's life at risk, nor any other reasonable ground for limiting the amount of work a card can do when the hardware is perfectly capable of doing that work without destroying itself. The only conclusion is that such a card is defective.

    That said, anything that doesn't use vsync is stupid, period, always (unless you're benchmarking or trying to warm a cold room). Spending that extra processing power on proper motion blur would have a far greater effect on perceived smoothness.
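
    As a concrete illustration of the vsync point above, here is a minimal sketch using the Python glfw bindings (assuming the glfw package is installed; the window and the render loop are placeholders):

        import glfw

        glfw.init()
        window = glfw.create_window(640, 480, "demo", None, None)
        glfw.make_context_current(window)
        glfw.swap_interval(1)  # vsync: wait one vertical blank per buffer swap

        while not glfw.window_should_close(window):
            # ... draw the scene here ...
            glfw.swap_buffers(window)  # now blocks until the next refresh
            glfw.poll_events()

        glfw.terminate()
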

  • Re:Ridiculous. (Score:5, Insightful)

    by Andy Dodd ( 701 ) <atd7@cornell . e du> on Monday August 02, 2010 @09:52AM (#33109502) Homepage

    There is a parameter specified for most high-dissipation ICs (such as CPUs and GPUs): it's called "thermal design power".

    This is the absolute maximum amount of heat the card can dissipate under any circumstances (not counting overclocking). The nature and definition of TDP mean it should be physically impossible for ANY software to cause the card to exceed TDP.

    If you have a system that can't handle the card running at TDP, that's faulty design of your system, not whatever caused it to hit TDP.
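
    If you want to check whether a card is actually approaching its thermal limits while a game runs, one quick way (assuming an NVIDIA card with the nvidia-smi utility on the PATH; other vendors need other tools) is to poll the core temperature:

        import subprocess

        # Query the GPU core temperature (degrees Celsius) via nvidia-smi.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"]
        )
        print("GPU temperature:", out.decode().strip(), "C")
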

  • by rotide ( 1015173 ) on Monday August 02, 2010 @09:55AM (#33109552)

    I'm going to have to disagree here. It's not up to software developers to go around making sure hardware x and y won't just roll over and die during certain sections of their game.

    It's up to hardware manufacturers to make sure their hardware works under all normal conditions. I mean really, if you make hardware that can fry itself, maybe you're pushing it too far.

    Gee whiz guys! We can render this game at 4839483 FPS! But don't do it for more than 2 seconds or it'll melt! Woot, time to release them en masse! The benchmarks will look awesome!

    Pushing a card to its max should _never_ cause it to "crash", let alone get damaged.

  • by crossmr ( 957846 ) on Monday August 02, 2010 @09:57AM (#33109584) Journal

    At this point I suspect "Kdawson" is a lot like "Alan Smithee". He just forgot to tick the box this time.

  • by Anonymous Coward on Monday August 02, 2010 @09:59AM (#33109596)

    Clearly StarCraft is not at fault here. No software should be capable of damaging your graphics card. But if the thermal design of your system is broken, then it's your fault, or the manufacturer's.

    If your card breaks and there is nothing wrong with your cooling, then your card was already broken before you even fired up StarCraft.

    Why are you even assuming the story is correct?

    "Reports of graphics problems" is a bit nebulous, to say the least.

    My money is on SlashFUD at this point in time.

  • by Maarx ( 1794262 ) on Monday August 02, 2010 @10:01AM (#33109614)

    StarCraft II is exposing shoddy thermal engineering in video cards because, unlike most games on the market, StarCraft II correctly utilizes your video card to its fullest potential.

    Say what you will about SC2 game balance, say what you will about Battle.NET 2.0's crappy interface, say what you will about how cheesy Jim Raynor is. I wouldn't disagree with you.

    But when it comes to writing engines, Blizzard is the best of the best. Hands down. Everything they write runs smooth as silk, and they have a genuine talent for squeezing jaw-dropping performance out of even mediocre computers. StarCraft II contains correctly written code, and it will utilize your hardware to its fullest potential. If you bought a bargain computer, put it together yourself, and skimped on the cooling, you're going to get burned.

    Pun intended.

    I have as much hate as the next guy for how StarCraft II was cannibalized in the name of profit, but this article? This is a non-issue. This is not Blizzard's/StarCraft's fault.

  • Re:Ridiculous. (Score:5, Insightful)

    by Anonymous Coward on Monday August 02, 2010 @10:03AM (#33109636)

    If a graphics card can't survive a tight loop like that, then it was designed by an idiot. My Intel processor can survive that state. And no, it's not like putting a car in neutral and mashing the throttle: that comparison involves a rotating mass being sped up past its designed RPM-plus-time limit, and even then no damage results, because the rev limiter kicks in.

    Electronics, my uneducated friend, are different: there are no moving parts. This may surprise you.

    I am tired of electronics nowadays being designed for highest profit and not for quality. The engineers at these chip makers are complete fucking morons if this is really happening.

  • by RogueyWon ( 735973 ) * on Monday August 02, 2010 @10:07AM (#33109672) Journal

    Or a more developed version of the same argument:

    Starcraft 2 has a pretty wide audience by the standards of a PC/Mac game, and while it's certainly not a Crysis-style hardware hog, it does have higher requirements than a lot of the usual mass-market PC games (e.g. The Sims and its sequels). In addition, its prequel, which is 12 years old and was technically underwhelming even by the standards of its own time (the graphically far superior Total Annihilation actually came out first), has a large hardcore fanbase, many of whom probably don't play much other than Starcraft.

    So Starcraft 2 is released and is promptly installed on a lot of PCs that are not routinely used for gaming, or at least for playing games less than a decade old. A large chunk of these PCs have never run a high-end modern game before. When asked to do so, the less-than-stellar graphics cards in a good portion of them give up and fall over. No conspiracy, no fault in Starcraft 2, just a lot of crusty PCs being taken outside of their comfort zone and not faring so well.

  • Re:Ridiculous. (Score:5, Insightful)

    by alen ( 225700 ) on Monday August 02, 2010 @10:51AM (#33110280)

    you forget that most chips people buy these days are the equivalent of "off the rack" clothing. They are manufacturing rejects that are sold as lower-end cards. Just as with Xeon CPUs, only the top-of-the-line $600 graphics cards are the result of "perfect" manufacturing. Everything else, from Core i CPUs down to your $150 graphics card, is a manufacturing reject with circuitry disabled. It's not as if there is a separate production process for every single SKU of the 20 or so that ATI/Nvidia sell at any one time.

    It all comes off one line and is tested and binned, and then circuitry is disabled depending on the results of the testing.

  • by vitaflo ( 20507 ) on Monday August 02, 2010 @10:56AM (#33110350) Homepage

    The entire game is uncapped, and it has been that way since the beta started. The framerate-cap variables were also published shortly after the beta came out.

    Why Blizzard doesn't cap their games at 60fps (or, hell, 120fps if they think 60 is too low for some reason), I don't know. There's really no reason to render frames faster than that, even if you can.

  • Re:Ridiculous. (Score:3, Insightful)

    by HaZardman27 ( 1521119 ) on Monday August 02, 2010 @11:41AM (#33111020)
    From what I've gathered on forums, a lot of the users experiencing this issue are playing on laptops. In some of these cases a fan ends up dying from the extended periods of heavy load, which then causes the laptop to run even hotter. This is one of the many reasons I don't game on a laptop.
  • Re:Ridiculous. (Score:5, Insightful)

    by surgen ( 1145449 ) on Monday August 02, 2010 @12:33PM (#33111758)

    While that fact is interesting, if I bought a chip that says it can do X, I still expect it to live up to X. It doesn't matter if it's a reject from the manufacturing of Y; if the Y-rejects still can't handle X, don't sell them as X-capable.

  • Re:Ridiculous. (Score:2, Insightful)

    by BitZtream ( 692029 ) on Monday August 02, 2010 @12:38PM (#33111824)

    And what the person you replied to is saying is that the situation you described would be considered a broken graphics card.

    The hardware shouldn't allow that to happen; if it does, it's broken. It doesn't matter if they claim it's 'intentional', it's still broken.

  • by fishbowl ( 7759 ) on Monday August 02, 2010 @12:45PM (#33111906)

    >it's not because your cooling is subpar

    If your hardware can undergo a heat-related failure, then you have substandard cooling. That's pretty much the definition of substandard cooling.

  • Re:Ridiculous. (Score:5, Insightful)

    by sjames ( 1099 ) on Monday August 02, 2010 @01:06PM (#33112180) Homepage Journal

    Not so much design. A few other new games have had this issue (notably Star Trek Online).

    TDP assumes, wrongly, that your card is perfectly clean and that the fan controls are always correct. That might be the case on a reference-design card, but not necessarily on factory-overclocked boards, or where the driver has been given aftermarket tweaks to adjust the fan speed (usually to reduce noise).

    You're also assuming the fan is still perfectly mounted (which it might not have been in the first place), and that sort of thing. The PSU needs to be able to feed enough juice to the card, the case needs to be properly ventilated (and ideally cleaned), and God knows what other bits you've got on the board. Lots of boards nowadays have a northbridge fan that sits directly above the (first) GPU.

    That is still a design issue. Proper engineering includes appropriate margins for error to deal with the real world, including things getting dirty and not being put together perfectly.

    Not everything is designed to operate at a 100% duty cycle, but in those cases the duty cycle is well documented and there are usually mechanisms in place to prevent actual damage if the limits are exceeded. Note how, in the inevitable car analogy, there are warning lights and significant physical warning signs (such as steam pouring out of the engine compartment) to let you know you have exceeded the engine's design capabilities. Unless you're stupid enough to ignore that and keep pushing, the engine suffers no actual damage.

    Imagine the outrage if the engine's design limits could be exceeded just cruising down the highway, and the first sign of it was the engine stopping and never starting again. There would most certainly be a class-action suit alleging that the engine was defective by design.

  • Re:The Fix (Score:3, Insightful)

    by ToasterMonkey ( 467067 ) on Monday August 02, 2010 @01:27PM (#33112534) Homepage

    Since I haven't seen anyone else post the fix, I will. Add the following lines to your "Documents\StarCraft II\variables.txt" file:

        frameratecapglue=30
        frameratecap=60

    You can add them at the beginning, the end, or wherever; the game doesn't care.

    Wouldn't ticking the vsync option in the in-game settings be an easier way to fix the problem?

    I'm having a hard time picturing any overlap between systems where an unlimited framerate is a problem and systems where vsync could drop the frame rate too low. Usually it's the high-end cards that have heat problems, not the ones where vsync might drop you to 1/2 or 1/4 of the refresh rate... If you have a high-end card in a system that can't cool itself well enough to use the card to its full potential, whose problem is that?
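
    For anyone scripting the workaround quoted above, a minimal sketch that appends the two cap variables to variables.txt (the path and values come from the quoted fix; the script itself is an illustrative assumption):

        from pathlib import Path

        # Path from the quoted fix; adjust if your Documents folder lives elsewhere.
        cfg = Path.home() / "Documents" / "StarCraft II" / "variables.txt"

        lines = cfg.read_text().splitlines() if cfg.exists() else []
        for setting in ("frameratecapglue=30", "frameratecap=60"):
            if setting not in lines:
                lines.append(setting)
        cfg.write_text("\n".join(lines) + "\n")
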

  • Bananas (Score:3, Insightful)

    by 0123456 ( 636235 ) on Monday August 02, 2010 @01:58PM (#33113048)

    I'm not sure why the parent post is tagged as insightful, because it's nonsense. Yes, some CPUs and GPUs are actually more powerful chips with defective components which were disabled, but the majority are not.

    Nor does a fault in one part of the chip somehow make it less reliable than any other; faults are typically random due to imperfections in the die which affect only one small part of the silicon, and the rest of the chip will work without any problems.

    The suggestion that every CPU or GPU 'comes off one line' and is binned based on defects is pure monkey-talk.

  • by am 2k ( 217885 ) on Monday August 02, 2010 @02:31PM (#33113548) Homepage

    The problem with enabling vsync is the following:

    A standard LCD runs at 60Hz, which is about 16.7ms per frame. When your game requires 17ms to render a single frame, without vsync that's about 58.8fps, which isn't that bad (you wouldn't notice it).

    When vsync is enabled, what happens is that the frame isn't ready when the screen is refreshed, so the whole pipeline stalls until the next vertical sync. On that one you can finally display your image, and then you render again for 17ms. The whole cycle repeats. In the end, this means you get a whopping 30fps, even though the graphics pipeline is idle nearly 50% of the time.

    Of course, this doesn't make sense for the menu system, where rendering a frame doesn't take anywhere near 16.7ms, but it does make sense for the game itself, since it tends to lag a bit when the action is intense (esp. on creep).
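
    The arithmetic above generalizes: with vsync, a frame that misses the refresh deadline waits for the next vertical blank, so the effective frame rate is quantized. A small worked sketch (the refresh rate and render times are example values):

        import math

        REFRESH_HZ = 60.0
        budget_ms = 1000.0 / REFRESH_HZ  # ~16.7 ms per refresh interval

        def vsynced_fps(render_ms):
            # A frame occupies ceil(render / budget) refresh intervals.
            return REFRESH_HZ / math.ceil(render_ms / budget_ms)

        print(vsynced_fps(17.0))  # 30.0: barely missing the budget halves the rate
        print(vsynced_fps(16.0))  # 60.0: within budget, full refresh rate
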

  • by oji-sama ( 1151023 ) on Monday August 02, 2010 @03:18PM (#33114262)

    Note how this is kind of the same thing, but Blizzard's solution has some actual tact behind it...

    And if he worked for Blizzard he would probably express his view differently. Blizzard's solution doesn't fix the underlying problem (which will probably show up in some other game later).

  • by Fjandr ( 66656 ) on Monday August 02, 2010 @03:19PM (#33114282) Homepage Journal

    There's a difference between "substandard" and "insufficient."

    A standard cooling solution can be insufficient under the right circumstances, such as a card whose TDP is rated incorrectly, so that software can push it past its published TDP. The cooling isn't the problem; the manufacturer is the problem.

  • Re:Bananas (Score:1, Insightful)

    by Anonymous Coward on Monday August 02, 2010 @03:28PM (#33114378)

    I suggest you read up on the manufacturing processes of silicon chips. Many CPUs are indeed sorted and rebadged according to the speed they can operate at and various other tests, and *many* video cards and chips (and other products) are actually 'remarked' according to their faults and tolerances. Sorting the chips according to tolerances and faults is a process known as "binning", and has been an integral part of silicon manufacturing for at least 20 years to my knowledge.
    Think about it: why would you create multiple manufacturing processes for almost identical products? You don't. You keep the process the same, remark slower or imperfect chips, route around faults where they are expected, leave optional/unavailable components out, and so produce a whole 'range' of products, maximising your return on investment and market share, and minimising wasted time and materials. This manufacturing technique is used for CPUs, GPUs, televisions, modems, car components, you name it.

    Personal anecdote: I've bought three nVidia GPUs in the last five years or so, and every one was remarked with a lower speed, fewer pipelines, etc. (e.g. peeling off the label reveals the label of a 'higher-spec' card). In one case it was even possible to re-enable some of the pipelines in firmware using a tweaked nVidia utility. (It didn't work very well, though... it caused random crashes and random white dots on some textures!)

    Back on topic, I've also had a GPU catch fire on me: it was left running a KDE OpenGL screensaver and the heatsink was covered in dust... lucky I was around when it caught light. IIRC, it was actually the RAM chips that went up in flames.
