
Is StarCraft II Killing Graphics Cards? 422

Posted by CmdrTaco
from the wish-i-had-a-copy dept.
An anonymous reader writes "One of the more curious trends emerging from last week's StarCraft II launch is people alleging that the game kills graphics cards. The between-mission scenes aboard Jim Raynor's ship aren't framerate-capped. These are fairly static scenes that don't take much work for the graphics card to display. Because of this, the card renders the scene as quickly as possible, which taxes the card as it works to its full potential. As the pipelines within your graphics card work overtime, the card heats up, and if it can't cope with that heat it will crash."
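The workaround that later surfaced for this amounts to a simple frame-rate cap. As a rough illustration (this is not Blizzard's code; the 60 fps target and the `render_frame` stub are invented for the sketch), a limiter just sleeps off whatever is left of each frame's time budget, so the card idles instead of redrawing a static scene flat-out:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allotted per frame

def render_frame():
    # Stand-in for the real draw call; assumed for illustration.
    pass

def run_capped(num_frames):
    """Render loop that sleeps away leftover frame time instead of
    letting the card redraw as fast as it can."""
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()
        leftover = FRAME_BUDGET - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)  # GPU (and CPU) sit idle here
```

With a cap like this in place, the card spends most of each frame idle on a menu screen rather than running its pipelines at full tilt.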
This discussion has been archived. No new comments can be posted.
  • Ridiculous. (Score:5, Insightful)

    by Mr EdgEy (983285) on Monday August 02, 2010 @09:41AM (#33109336)
How about timedemos for FPS games? Benchmarking your card? Tools used when overclocking to actually stress the card? These GPUs are designed to operate at max temp. Many games run with no FPS cap unless vsync is enabled. This is a complete non-issue.
    • Re:Ridiculous. (Score:5, Informative)

      by Vectormatic (1759674) on Monday August 02, 2010 @09:47AM (#33109414)

I was thinking the same thing. Many games aren't FPS-capped anyway, and even in capped games, gamers will push the settings so high that the game won't run at the capped framerate all the time.

Graphics cards should be able to cope with it, although I do believe it is possible to load a GPU in such a way that more transistors are active at the same time than the manufacturer expected.

So unless there are reports of thousands of melted video cards, I call shens.

    • Re:Ridiculous. (Score:5, Insightful)

      by Andy Dodd (701) <atd7@co[ ]ll.edu ['rne' in gap]> on Monday August 02, 2010 @09:52AM (#33109502) Homepage

      There is a parameter used for most high-dissipation ICs (such as CPUs and GPUs) - It's called "thermal design power".

      This is the absolute maximum amount of heat the card can dissipate under any circumstances (not counting overclocking). The nature and definition of TDP means it should be physically impossible for ANY software to ever cause the card to exceed TDP.

      If you have a system that can't handle the card running at TDP, that's faulty design of your system, not whatever caused it to hit TDP.

      • Re:Ridiculous. (Score:5, Informative)

        by bertok (226922) on Monday August 02, 2010 @10:13AM (#33109754)

        There is a parameter used for most high-dissipation ICs (such as CPUs and GPUs) - It's called "thermal design power".

        This is the absolute maximum amount of heat the card can dissipate under any circumstances (not counting overclocking). The nature and definition of TDP means it should be physically impossible for ANY software to ever cause the card to exceed TDP.

        If you have a system that can't handle the card running at TDP, that's faulty design of your system, not whatever caused it to hit TDP.

Many video cards can exceed their TDP through certain sequences of instructions, and the drivers include code to prevent this from occurring. There have been issues in the past where this filter wasn't perfect and cards were destroyed, typically when executing GPU stress tests.

        • by eddy (18759) on Monday August 02, 2010 @11:14AM (#33110610) Homepage Journal

Exactly. Agreed. That's the story here for anyone confused; hardware can be killed by software through no real fault of the user. See for instance Furmark [google.com], which ATI tries to throttle by checking for its name! No, you don't have to overclock; no, it's not because your cooling is subpar, or because of dust, or anything else. It's because hardware vendors don't want to spend the ten cents or whatever to take away the 'can run over peak for a few seconds' capability.

They're knowingly releasing hardware that can't survive 'full throttle', and it's bullshit.

          PS. Here's a 8800GT fried during SC2 [hardforum.com].

      • On top of that, it has been possible to put heat sensors on the chip and throttle the clock in case of overheating for several years now. IIRC Intel introduced this with the Pentium 4, and in some PCs with poorly cooled 3GHz+ P4 models this throttling actually kicked in. Annoying for the users, but at least their systems did not die.

Maybe AMD/ATI and NVIDIA should copy that feature? (Apologies for dissing them if they have actually done so.)
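The throttling described above can be sketched as a simple feedback rule. The trip temperature, clock range, and step size below are invented for illustration; real chips implement this in hardware/firmware with many more states:

```python
def throttle_step(temp_c, clock_mhz,
                  trip_temp=95.0, min_clock=300, max_clock=900, step=50):
    """One control-loop iteration: cut the clock while over the trip
    temperature, restore it gradually once comfortably below it."""
    if temp_c >= trip_temp:
        return max(min_clock, clock_mhz - step)   # too hot: back off
    if temp_c < trip_temp - 10.0:
        return min(max_clock, clock_mhz + step)   # cool again: ramp back up
    return clock_mhz                              # in the dead band: hold
```

The dead band between the two thresholds keeps the clock from oscillating every iteration, at the cost of throttled performance (as the Pentium 4 users noticed) rather than a dead chip.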

      • Re:Ridiculous. (Score:5, Informative)

        by Sir_Sri (199544) on Monday August 02, 2010 @10:51AM (#33110288)

Not so much a design issue. A few other new games have had this problem (notably Star Trek Online).

TDP assumes, wrongly, that your card is perfectly clean and that the fan controls are always correct, which might be the case on a reference-design card, but might not quite be the case on factory-overclocked boards, or if there have been aftermarket tweaks to the driver to adjust the fan speed (usually to address noise).

You're also assuming the fan is still perfectly mounted (which it might not have been in the first place), and that sort of thing. The PSU needs to be able to feed enough juice to the card, the case needs to be properly ventilated (and ideally cleaned), and god knows what other bits you've got on this board. Lots of boards have a northbridge fan that sits directly above the (first) GPU nowadays.

As a developer there's a bit you can, and should, be doing to prevent this sort of thing. The problem happens a couple of ways. One is the 'draw a simple scene as fast as possible' scenario in SC2, so cap the framerate at something like 120. The other is constantly feeding the card as much data as possible (some beta builds of STO, and early release, had this problem); that was basically a matter of not being able to fit a whole area/level in memory, or not wanting to show a load screen, so you max out your bandwidth pushing data to the card while letting the player fly around and shoot stuff (and said stuff shoot back). One of the things here is to do a better job of controlling what's being sent to the card in the first place (BSP trees, for example). The card will render a scene correctly even if you treat it badly, so you can sort of plod through development like that, but you shouldn't assume that the uncleaned three-year-old system one of your customers has will be as pristine as your development machines.

When driving a car you can 'floor it' for a few seconds, but if you left it that way your engine would eventually overheat; if you've ever gotten stuck in the snow or on ice you'll know what I'm talking about. GPUs are similar. When your computer starts, or when you do specific things with an application, they can run with all of their parts at full power, but only for a little while. If you do that for too long, eventually something will burn out.

        • Re:Ridiculous. (Score:5, Insightful)

          by sjames (1099) on Monday August 02, 2010 @01:06PM (#33112180) Homepage

Not so much a design issue. A few other new games have had this problem (notably Star Trek Online).

TDP assumes, wrongly, that your card is perfectly clean and that the fan controls are always correct, which might be the case on a reference-design card, but might not quite be the case on factory-overclocked boards, or if there have been aftermarket tweaks to the driver to adjust the fan speed (usually to address noise).

You're also assuming the fan is still perfectly mounted (which it might not have been in the first place), and that sort of thing. The PSU needs to be able to feed enough juice to the card, the case needs to be properly ventilated (and ideally cleaned), and god knows what other bits you've got on this board. Lots of boards have a northbridge fan that sits directly above the (first) GPU nowadays.

          That is still a design issue. Proper engineering includes appropriate margins for error to deal with the real world including things getting dirty and not being put together perfectly.

Not everything is designed to operate at 100% duty cycle, but in those cases the duty cycle is well documented and there are usually mechanisms in place to prevent actual damage if the limits are exceeded. Note how in the inevitable car analogy there are warning lights and significant physical warning signs (such as steam pouring out of the engine compartment) to let you know you have exceeded the engine's design capabilities. Unless you're stupid enough to ignore that and keep pushing, the engine suffers no actual damage.

          Imagine the outrage people might feel if the engine's design limits could be exceeded just cruising down the highway and the first sign of it was that the engine just stops running and never starts again. There would most certainly be a class action suit alleging that the engine was defective by design.

    • by V!NCENT (1105021) on Monday August 02, 2010 @09:52AM (#33109506)

      $ glxgears
      5791 frames in 5.0 seconds = 1158.177 FPS
      7120 frames in 5.0 seconds = 1423.968 FPS
      6801 frames in 5.0 seconds = 1360.132 FPS
      7110 frames in 5.0 seconds = 1421.871 FPS

      Nope. No meltdown. Totally BS...

      • I too, ran GLXGEARS to check my framerate, and was pulling 3500 FPS on a 6 month old good card, and was wondering - "HOLY fuct! -what card do you have that runs that fast?"

        And then I remembered you could shrink the screen, and get higher FPS
(makes glxgears screen tiny)

        20,900 FPS
        21,500 FPS

        meh...

    • Re:Ridiculous. (Score:5, Interesting)

      by Zeussy (868062) on Monday August 02, 2010 @09:53AM (#33109514) Homepage
The issue is quite simple; Stardock had the same issue with GalCiv 2. There are people playing SC2 who do not normally play games that fully tax the graphics card the way these scenes do, and who do not have well-ventilated cases, causing the cards to overheat and crash. The issue is solved with a simple framerate cap, or by the consumer adequately ventilating their case.
    • Re:Ridiculous. (Score:5, Interesting)

      by Sycraft-fu (314770) on Monday August 02, 2010 @09:56AM (#33109566)

No kidding. SC2 may end up being more intense if it happens to strike just the right balance so that the ROPs, TMUs, and shaders all work near capacity, but same shit: if your card crashes, the problem is your setup, not the game. For a demo that'll kick the crap out of your card heat-wise, try Furmark. It is designed to run the chip at its absolute limits and thus draw maximum power. If your system bombs, it isn't the demo that is wrong, it is your computer. Maybe you don't have enough power, maybe your ventilation is bad, maybe your GPU has a defect. Whatever the case, an intense load that causes a crash is revealing a problem, not causing it. Your system should handle any load given to it.

    • Re: (Score:2, Funny)

      by elrous0 (869638) *
      Yeah, but in the real-world, how many apps work your hardware to its max capacity for long periods of time? Considering how long Koreans are known to play Starcraft, I imagine there will be quite a rash of computer fires south of the 38th parallel, and a subsequent rash of suicides and shooting sprees.
    • Re: (Score:2, Interesting)

      by striker64 (256203)

Normally I would agree that the graphics card should be able to handle anything that is thrown at it, but there is something to this story. I have a Radeon 4850 with one of these Zalman coolers on it http://www.quietpcusa.com/images/vf1000-led.jpg and my case is big with lots of cooling. I have used this exact same configuration to play countless games over the last year, including MW2, BC2, etc., and never had a single crash. But now my system is crashing at the SC2 menus. My brother's machine is doing the

    • Re: (Score:3, Insightful)

      by HaZardman27 (1521119)
      From what I've gathered on forums, a lot of the users experiencing this issue are playing on laptops. In some of these cases, a fan ends up dying due to the extended periods of heavy load, which then causes the laptop to become even hotter. This is one of the many reasons I don't game on a laptop.
  • by Sockatume (732728) on Monday August 02, 2010 @09:41AM (#33109340)

    Are most games framerate-capped? Wouldn't all games, at all times, be rendering as quickly as possible, operating to the graphics card's full potential?

    • by Dragoniz3r (992309) on Monday August 02, 2010 @09:43AM (#33109366)
      Yes, they do. It is quite standard practice for games to render uncapped. This story is just FUD and troll. I would've expected it to come from kdawson, but apparently I gave Taco too much credit.

      To clarify my stance: This story is retarded, and all the time you look at it/think about it is time you won't get back.
      • by crossmr (957846) on Monday August 02, 2010 @09:57AM (#33109584) Journal

        At this point I suspect "Kdawson" is a lot like "Alan Smithee". He just forgot to tick the box this time.

      • by Burnhard (1031106)
        Crysis Warhead killed my ATI 4890. The fan bearings went (the card still worked and the fan kind-of cooled it, but the noise was unbearable). So yes, playing games can kill your graphics card, in much the same way that driving your car a long way can cause it to break down.
I'd be interested in seeing some of the more reputable hardware sites take up this story. Sure, running Furmark should stress your GPU far more than any game, and if you can't run it without crashing, your rig isn't stable.

        I still wonder if there is something more to this story than a bunch of cards with insufficient cooling crashing. Few if any professionally assembled PCs should have bad enough cooling that a game could cause the GPU to overheat.
      • Re: (Score:3, Interesting)

        by ShakaUVM (157947)

        >>This story is just FUD and troll. I would've expected it to come from kdawson, but apparently I gave Taco too much credit.

        It's news, because... it's about Starcraft 2? Kinda?

        Why not run a story about how Quake 1 is killing modern computers? The last time I ran Quake it was somewhere above 300fps with vsync disabled.

    • by LWATCDR (28044)

I find it odd that people don't want the game framerate-capped.
Why go past 60 FPS? Okay, maybe 120 if you're nuts.
What do you gain? I would rather not put out the heat and eat up the power.
Of course the only game I really play is FSX, and I would love to see 60 FPS with all the eye candy turned on.

  • by Anonymous Coward on Monday August 02, 2010 @09:43AM (#33109360)

    Clearly StarCraft is not at fault here. No software should be capable of damaging your graphics card. But if the thermal design of your system is broken, then it's your fault, or the manufacturer's.

    If your card breaks and there is nothing wrong with your cooling, then your card was already broken before you even fired up StarCraft.

    • by RogueyWon (735973) * on Monday August 02, 2010 @10:07AM (#33109672) Journal

      Or a more developed version of the same argument:

      Starcraft 2 has a pretty wide audience, by the standards of a PC/Mac game, and while it's certainly not a Crysis-style hardware-hog, it does have higher requirements than a lot of the usual mass-market PC games (eg. The Sims and its sequels). In addition, its prequel, which is 12 years old and was technically underwhelming by the standards of its own time (the graphically-far-superior Total Annihilation actually came out first) has a large hardcore fanbase, a lot of whom probably don't play much other than Starcraft.

      So Starcraft 2 is released and is promptly installed on a lot of PCs that are not routinely used for gaming, or at least for playing games less than a decade old. A large chunk of these PCs have never run a high-end modern game before. When asked to do so, the less-than-stellar graphics cards in a good portion of them give up and fall over. No conspiracy, no fault in Starcraft 2, just a lot of crusty PCs being taken outside of their comfort zone and not faring so well.

  • Uhh... (Score:5, Interesting)

    by The MAZZTer (911996) <megazzt.gmail@com> on Monday August 02, 2010 @09:43AM (#33109362) Homepage

You can uncap the framerate in lots of games, but we've never heard about this problem before. I don't think this is a problem. Especially since you can easily make a GFX card run at full capacity and a low framerate simply by playing a game that's a little too new for it, something a lot of people trying to put off upgrades do. If your GFX card can't run at its maximum capacity without overheating, something is wrong with its cooling.

s/I don't think this is a problem./This sounds like a red herring to me./

      And the article talks about dust being the problem; which is exactly what I was thinking of when I said "something is wrong with its cooling". I've had that problem before with my old GPU; in Left 4 Dead 2 (but not in older games like TF2) I'd get great slowdowns every so often and my GPU was running pretty hot. Turns out it was throttling itself to keep itself from getting even hotter. A heatsink cleanout fixed that right up.

    • Re: (Score:3, Insightful)

      by Delwin (599872) *
      That's the whole point of the article.
      • Re: (Score:3, Informative)

        by The MAZZTer (911996)
        Yeah I ripped apart the summary before I moved on to the article. It does look like the article still tries to place part of the blame on Blizzard though, as the author expects it to be patched.
        • by drinkypoo (153816)

          Yeah I ripped apart the summary before I moved on to the article. It does look like the article still tries to place part of the blame on Blizzard though, as the author expects it to be patched.

          it's pathetic for computer hardware to kill itself by overheating, but if you know that it can happen, you should still do your best not to overheat it.

          • by TheLink (130905)
            > but if you know that it can happen, you should still do your best not to overheat it.

            Yeah, don't play games like Starcraft till you buy a proper graphics card that can handle it :).

If the cards were dying even when the fans were working fine, there wasn't that much dust, and the ambient temps were within range, then I'd say the cards are faulty.
  • by The Barking Dog (599515) on Monday August 02, 2010 @09:43AM (#33109364) Homepage

    I'm playing Starcraft II on the last-gen iMac (purchased about four months ago) on OS X 10.6.3. The game is stable during gameplay, but it's crashed on me several times in cutscenes, onboard the Hyperion, or even in the main menu (ironically, while I was bringing up the menu to quit the game).

    • What graphics card does it have? I'd be surprised if an iMac doesn't have adequate cooling.
      • Re: (Score:3, Interesting)

        by toddestan (632714)

        Ever feel how hot those things get now, even when running normally? It's not surprising that they would completely fall over if you push them even moderately hard.

  • *Cough* (Score:2, Insightful)

    by Lije Baley (88936)

    Bullshit.

  • by Anonymous Coward

Only lazy firmware developers can let that happen; the fault is not any game's, it's the driver's (or, possibly, a program that somehow turns off the fan).

  • Already dead (Score:3, Insightful)

    by KirstuNael (467915) on Monday August 02, 2010 @09:45AM (#33109384)

A graphics card that can't handle working at its full potential is already dead (as designed).

  • by j0nb0y (107699) <jonboy300@@@yahoo...com> on Monday August 02, 2010 @09:45AM (#33109386) Homepage

    This may have been the problem I experienced. I had played in the (multiplayer only) beta with no problems. Once the game came out though, I kept crashing in single player in between levels. I cleaned the dust out of my computer and that solved the problem.

    I wonder how many people experiencing this just have too much dust built up in their computers?

    • Re: (Score:2, Informative)

      by Anonymous Coward
Yes, this is a real problem that has been discussed on many sites, including Blizzard's forums. I expect it will get patched by Blizzard eventually.

FIX: Some systems may reach high temperatures and overheating conditions while running StarCraft II. This is mainly due to the video card rendering the screens at full speed. As a workaround, there are two lines that you can add to your Documents\StarCraft II Beta\variables.txt file to limit this behavior:

frameratecapglue=30
frameratecap=60

The framerate
  • Seriously..? (Score:5, Insightful)

    by Fusione (980444) on Monday August 02, 2010 @09:47AM (#33109408)
Story title should read: "Faulty video cards with inadequate cooling freeze when run at their full potential". This has nothing to do with StarCraft 2, other than that it's a video game that runs on a video card.
  • Sounds like a design defect in the card, not the game.
  • What year is this? (Score:5, Interesting)

    by Sir Lollerskates (1446145) on Monday August 02, 2010 @09:48AM (#33109424)

    When graphics cards overheat, the worst thing that happens is a blue screen. On ATI cards, they just restart the card (it does a recovery-mode type of thing).

    You can overclock any card to insane temperatures (90C+) without them even turning off, much less breaking them. There is simply no way that Starcraft 2 is killing any graphics cards.

There *was* one issue a while back in which an nvidia driver update actually did kill some graphics cards, but it was nvidia's fault, and they promptly fixed it.

    This article is pure misinformation.

    • My old nVidia card would underclock itself when it started to overheat. Good thing too, when my heatsink got clogged with dust it probably saved itself when I was still clueless about it.

  • by Richard_at_work (517087) <richardprice.gmail@com> on Monday August 02, 2010 @09:48AM (#33109428)
It's hardly "StarCraft II Killing Graphics Cards", it's "Shitty Graphics Cards Dying Because Of Lack Of Self-Moderation When Running At Full Speed". But I guess the second version doesn't include a much-hyped game in the title...
  • by Speare (84249) on Monday August 02, 2010 @09:48AM (#33109430) Homepage Journal

    The summary says an overheated video card will crash. It will do more than crash. It can permanently damage the video hardware. This seems like a major hassle to swap out the video components on a big gaming rig, but it can be a lot worse for high-end laptops. I've had similar problems with 3D software running on a MacBook Pro -- plenty of performance, but the video card gets second priority in the heat-management.

In my MBP, there are separate temperature probes on the CPU, hard drive, battery and chipset, but none on the dual video chip units, so the thermostat-controlled fan won't even kick in when either the "integrated" or the "high performance" video unit is the only stressed component.

    Besides the hardware cooling problems, there's no reason for trying to draw more than 120 fps on most LCDs; software needs to get more responsible about heat and speed resource usage when given access to over-spec hardware. Limit the rendering loop to 90~120 fps, unless you're doing something purposely exotic such as driving stereoscopic displays or CAVEs (at 90~120 fps per camera).

    • by rotide (1015173) on Monday August 02, 2010 @09:55AM (#33109552)

      I'm going to have to disagree here. It's not up to software developers to go around making sure hardware x and y won't just roll over and die during certain sections of their game.

It's up to hardware manufacturers to make sure their hardware works under all normal conditions. I mean really, if you make hardware that can fry itself, maybe you're pushing it too far.

      Gee whiz guys! We can render this game at 4839483 FPS! But don't do it for more than 2 seconds or it'll melt! Woot, time to release them en masse! The benchmarks will look awesome!

      Pushing a card to its max should _never_ cause it to "crash", let alone get damaged.

    • Re: (Score:3, Interesting)

      by Greyfox (87712)
      Apple is particularly bad for this. I had an older MacPro desktop that would display video artifacts and then crash in any 3D application. From what I was able to determine from research on the internet, the model I had actually had a firmware issue that would prevent the fans from spinning up as much as they needed to as the card got hotter. This problem seems to have been fixed in later models but if your fan vents get clogged with dust you'll still have problems. If you google around on "Mac Video Card O
    • by PitaBred (632671)

In my MBP, there are separate temperature probes on the CPU, hard drive, battery and chipset, but none on the dual video chip units, so the thermostat-controlled fan won't even kick in when either the "integrated" or the "high performance" video unit is the only stressed component.

      Sounds like a hell of a design problem. Given what you had to have paid for it, I'd take it back. There's no excuse for that kind of incompetence.

      Either that, or you just don't know what the hell you're talking about. I give it 50/50 odds.

  • by ledow (319597)

No. Crappy cards that overheat when left running displaying ANYTHING (static images, top-end 3D, what does it matter?) are killing those graphics cards. CPUs (and therefore GPUs) should detect overheating, then throttle back or switch off as necessary. If that still causes a problem in 2010, you have bigger problems on your hands than how often a game decides to blit surfaces about - such as a potential fire. If your case is that dirty, your card should still cope anyway, even if that means it just overh

  • by mike2R (721965) on Monday August 02, 2010 @09:48AM (#33109434)
    Long answer: NOOOoooooooooooooooo!!!!!
  • We're talking about a piece of hardware here which is capable of melting itself down with no internal cap on processing, and we're blaming the software?

    IANAE (I am not an engineer) but it seems to me that the software designers should be able to throw whatever they like at the cards, and it's up to the hardware manufacturers to see to it that the hardware doesn't self destruct.

  • Is this Orrin Hatch's "Destroy the PCs" [bbc.co.uk] plan made manifest? It has taken 7 years, but what subtle, indeed Machiavellian implementation.

  • by Anonymous Coward

    God /. you are WAY behind here. This was an issue 5 months ago in the Beta. There IS a hard cap in menus now.

  • by zlogic (892404)

What about OpenCL/CUDA? These frameworks use the card's full potential, and so far nobody has reported any issues. If the card has cooling problems, it's clearly faulty hardware. The only downside is slightly more heat and noise from the video card than there should be during these scenes. This is not a car, where revving the engine in neutral really does stress the engine.

  • by Nimey (114278) on Monday August 02, 2010 @09:50AM (#33109476) Homepage Journal

    OMG NEW HIGHLY ANTICIPATED TITLE KILLZ0RZ YOUR COMPUTAR!!!

No, if your machine is crappy, this exposes that you've got cooling or power problems, or both. You should see to fixing them.

    In '94 I had a 486SX-25 that would choke and die when playing Doom in multi-player from time to time. It wasn't that the game KILLZ0RED MY COMPUTAR, it was that the CPU couldn't keep up with everything. Sticking a DX2-50 Overdrive into the socket solved that problem.

  • by Sycraft-fu (314770) on Monday August 02, 2010 @09:51AM (#33109482)

I fail to see how rendering a simple scene at a high framerate would be any more challenging than rendering a complex scene at a lower framerate. Remember that the hardware either is or is not in use - the ROPs, the shaders, etc. It isn't like there is some magic thing about a simple scene that makes a card work extra hard.

    So my bet is you have users that have one or more things happening:

1) They are overclocking their cards. This is always a potential source of problems. When you push something past its spec, you may find it has problems in some cases.

    2) Their airflow sucks. They have inadequate ventilation in their case for their card.

    3) Their PSU is inadequate for their card. High end graphics cards need a lot of voltage on the 12v rail. If you have one that can't handle it, well then maybe you have some problems in intense games.

    Really, this sounds no different than the people who OC their processor and claim it is "perfectly stable" but then claim that Prime95 or LinX "break it." No, that means it is NOT perfectly stable, that means you have a problem. Doesn't mean the problem manifests with everything, but it means that you do have a problem that'll show up sometimes.

I'm betting it is the same thing here. It isn't that SC2 is "killing" their card, it is that their card has a problem and SC2 is one of the things that can reveal it. There are probably others too.

So if your system is crashing in SC2, disable any overclocking, make sure you've got good ventilation (which may mean a new case), and make sure you have a PSU that supports your graphics card, including providing dedicated PCIe power connectors sufficient for it. Don't blame the software for revealing a flaw in your system.

    • Re: (Score:3, Interesting)

      by Zeussy (868062)
Someone who is right on the money. The cards are crashing due to inadequate case ventilation. Stardock had the same issues with GalCiv 2.

I fail to see how rendering a simple scene at a high framerate would be any more challenging than rendering a complex scene at a lower framerate. Remember that the hardware either is or is not in use - the ROPs, the shaders, etc. It isn't like there is some magic thing about a simple scene that makes a card work extra hard.

Games nowadays are highly threaded, with game logic and rendering happening in parallel and in lock step (waiting for each other to finish). The difference between a complex scene and a simple scene is that the render thread will have less to update and will issue more draw calls. If there is little or no animation to update (either updated in the game logic and pushed acr

    • Open case, put desktop fan roughly inside, full blast. If it stops crashing, get better ventilation in your case.

    • by the_other_chewey (1119125) on Monday August 02, 2010 @10:29AM (#33109960)

      Their PSU is inadequate for their card. High end graphics cards need a lot of voltage on the 12v rail.

      I'd say they need 12V of voltage on the 12V rail...

    • Re: (Score:3, Informative)

      by BobMcD (601576)

I'm betting it is the same thing here. It isn't that SC2 is "killing" their card, it is that their card has a problem and SC2 is one of the things that can reveal it. There are probably others too.

So if your system is crashing in SC2, disable any overclocking, make sure you've got good ventilation (which may mean a new case), and make sure you have a PSU that supports your graphics card, including providing dedicated PCIe power connectors sufficient for it. Don't blame the software for revealing a flaw in your system.

I guess we can all be glad you don't work for Blizzard. Here's what the pros said:

      Screens that are light on detail may make your system overheat if cooling is overall insufficient. This is because the game has nothing to do so it is primarily just working on drawing the screen very quickly. A temporary workaround is to go to your Documents\StarCraft II Beta\variables.txt file and add these lines:

      frameratecapglue=30
      frameratecap=60

      You may replace these numbers if you want to.

      Note how this is kind of the same thing, but Blizzard's solution has some actual tact behind it...

    • Re: (Score:3, Informative)

      by shadow_slicer (607649)

      I fail to see how rendering a scene at a high framerate would be any more challenging than rendering a complex scene at a lower frame rate. Remember that the hardware either is or is not in use. The ROPs, the shaders, etc. It isn't like there is some magic thing about a simple scene that makes a card work extra hard or something.

      The difference is that with complex scenes, the framerate is limited either by the CPU (to calculate AI and physics) or IO (to send commands, textures and meshes to the card). Wit

  • If a process, like a webserver, could erase itself from a hard drive by benign input, it would be a bug. This is no different.

    My graphics card, a GTX 275, was factory-locked to a 40% duty cycle on the fan, no matter how hot it got. I had to resort to RivaTuner to make the fan auto-adjust its speed based on temperature. There is no speed limit at which rendering too many frames per second puts people's lives at risk, nor any other reasonable reason to limit the amount of work a card can do. If a card destroys itself doing work the hardware should be perfectly capable of, the only conclusion is that it is defective.

    That said, anything that doesn't use vsync is stupid, period, always, (unless you're benchmarking or trying to warm a cold room). Spending that extra processing power on a proper motion blur would have a far greater effect on perceived smoothness.

    • by tibit (1762298)

      I agree that the card you mention had an issue. But the main problem is that die temperature sensing is such a simple thing to do. Power chips (switchers, regulators, bridges) that sometimes sell for $0.10 apiece can have die temperature sensors and can turn themselves off to cool down. Why graphics chip makers don't put in temperature-controlled power management (clock scaling, unit cycling) is beyond me. It's not like it's rocket science. If you know your engineering, you should even be able to have
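
The temperature-controlled power management the parent describes can be sketched as a simple governor loop. This is a toy illustration only: the thresholds are made up, and `read_die_temp()` / `set_clock_mhz()` stand in for vendor-specific hooks that would live in firmware on a real GPU.

```python
# Hypothetical sketch of a temperature-driven clock governor,
# the kind of power management the parent comment wishes GPUs shipped with.

TEMP_TARGET_C = 85      # start throttling above this die temperature
TEMP_CRITICAL_C = 105   # hard shutdown threshold
CLOCK_MAX_MHZ = 700
CLOCK_MIN_MHZ = 200
CLOCK_STEP_MHZ = 50

def govern(current_clock_mhz, die_temp_c):
    """Return the next clock speed given the current die temperature."""
    if die_temp_c >= TEMP_CRITICAL_C:
        return 0  # shut the chip down rather than cook it
    if die_temp_c > TEMP_TARGET_C:
        # too hot: step the clock down
        return max(CLOCK_MIN_MHZ, current_clock_mhz - CLOCK_STEP_MHZ)
    if die_temp_c < TEMP_TARGET_C - 10:
        # comfortably cool: step back up toward full speed
        return min(CLOCK_MAX_MHZ, current_clock_mhz + CLOCK_STEP_MHZ)
    return current_clock_mhz  # inside the hysteresis band: hold steady
```

The hysteresis band (85 °C down to 75 °C here) keeps the clock from oscillating every control tick, which is the same reason real thermal governors don't react to a single degree of change.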

  • I mean, when I code I never think "Well, my code needs to check whether it might damage hardware," since I try to keep my code agnostic to the system it's on. (Admittedly, it was financial software at the last place I worked.) StarCraft 2 is using either DirectX or OpenGL, so I'd expect it to be hardware agnostic as well. (Sorry, I'm not a graphics guy, so I might be talking out of my butt.) Seriously, if I remember correctly there were systems in the early 80's that you could damage if certain code was executed in a cer
  • The first Command & Conquer game was released in 1995, with full-motion video cutscenes. Those scenes did not destroy any graphics cards that met the system requirements for the game. Why would video scenes start doing this to modern video cards?
    • FMV scenes generally only playback at 23 to 25 FPS because they are pre-rendered. In SC2 they are being rendered by the GPU as they are not pre-rendered FMV and so generally need in excess of 30 FPS constantly to not look stuttery, which is about the same thing you'd get from playing something like Gears of War or Crysis or really any recent good looking 3D game.
  • During the gameplay of the single-player campaign, the nvidia drivers I got did the dance of restarting themselves once, without a hitch (the game didn't crash either, just a blink of the screen and a note about it waiting on the desktop after quitting).

    But saying that using your graphics card at full juice kills it is essentially saying that PCMark and similar programs would do it as well. There's an issue with the card already if the game is killing it. The real problem with the game is that it hasn't evolved at all from
  • The more time passes, the less people understand anything about their computers, and unfortunately this includes most kids these days.
  • StarCraft II is exposing shoddy thermal engineering in video cards because, unlike most games on the market, StarCraft II is correctly utilizing your video card to its fullest potential.

    Say what you will about SC2 game balance, say what you will about Battle.NET 2.0's crappy interface, say what you will about how cheesy Jim Raynor is. I wouldn't disagree with you.

    But when it comes to writing engines, Blizzard is the best of the best. Hands down. Everything they write runs smooth as silk, and they have a ge

  • by davidwr (791652) on Monday August 02, 2010 @10:08AM (#33109682) Homepage Journal

    The summary should say that it's the Evil Giant Killer Dust Bunnies From Hell, not Starcraft, that are shutting down the cards.

  • Some buildings do this: on weekends, they power-cycle the whole building. This serves to force bulbs that are about to fail to fail sooner. Maybe SC2, as one of the most popular games released in years, is working as an unintended "break test". But it would be good if Blizzard added some caps *anyway*.

  • I left the game running frequently (as I'm lazy) at these cut-scenes for almost 4 days straight, and I had no problems.
  • Here's an idea! Grab a $3 can of canned air/air duster from Office Depot/Staples/Office Max/Fry's Electronics/Best Buy, open up the side of your computer, and then spray out all the dust that's accumulated in your graphics card fan and heatsink since the first StarCraft came out.
  • by PowerEdge (648673) on Monday August 02, 2010 @10:24AM (#33109896)
    If it is anything like the original it is killing college aspirations, careers and marriages and the nation of South Korea. Graphics cards should be the least of our concerns!!! I say this as a survivor of SC. Oh and Total Annihilation was the better game!!
  • I sincerely doubt that the summary's claim is true; any decent graphics card should either underclock itself to lower the heat generation, or simply BSOD on you. Furthermore, if scenes are very easily rendered, wouldn't this also automatically mean that less of the chip is used? I'm no expert in graphics hardware design, but one would think (I know, bad idea) that they would have special parts of the silicon dedicated to some more complicated tasks, which in a case like this wouldn't be used at all. If tha

  • by vitaflo (20507) on Monday August 02, 2010 @10:56AM (#33110350) Homepage

    The entire game is not capped. It's been that way since beta started. The framerate cap variables have also been published from shortly after the beta came out.

    Why Blizzard doesn't cap their games at 60fps (or hell 120fps if they think 60 is too low for some reason) I don't know. There's really no reason to render frames faster than that, even if you can.
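
A frame cap of the kind the `frameratecap` variable applies is conceptually just a sleep in the render loop. A minimal sketch of the idea, where `render` is a stand-in for the game's actual draw call:

```python
import time

def run_capped(render, cap_fps=60, frames=3):
    """Call `render` for `frames` iterations, sleeping so the loop
    never runs faster than `cap_fps` frames per second."""
    frame_budget = 1.0 / cap_fps
    for _ in range(frames):
        start = time.monotonic()
        render()
        elapsed = time.monotonic() - start
        if elapsed < frame_budget:
            # the GPU finished early: idle instead of spinning flat out,
            # which is what keeps a simple scene from running at 1000+ FPS
            time.sleep(frame_budget - elapsed)
```

On a scene as simple as the Hyperion cut-scenes, `render()` returns almost instantly, so the loop spends most of each frame budget asleep and the card idles instead of running flat out.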
