NVIDIA Driver Update Causing Video Cards To Overheat In Games

After a group of StarCraft II beta testers reported technical difficulties following the installation of NVIDIA driver update 196.75, Blizzard tech support found that the update introduced fan control problems that were causing video cards to overheat in 3D applications. "This means every single 3D application (i.e. games) running these drivers is going to be exposed to overheating and in some extreme cases it will cause video card, motherboard and/or processor damage. If said motherboard, processor or graphic card is not under warranty, some gamers are in serious trouble playing intensive games such as Prototype, World of Warcraft, Farcry 3, Crysis and many other games with realistic graphics." NVIDIA said they were investigating the problem, took down links to the new drivers, and advised users to revert to 196.21 until the problem can be fixed.
  • by Beelzebud ( 1361137 ) on Friday March 05, 2010 @05:28AM (#31369078)
    Oddly enough, I played World of Warcraft and Fallout 3 quite a bit since upgrading to these drivers, and my performance has been much better than the previous win7 64x driver. I hear the fan ramping up like it should, and the card hasn't gotten close to overheating. Maybe it's only affecting certain models. I have an 8800ultra.
    • Yes, it probably only affects newer cards.

      The newer cards have so many execution units, that the cards aren't actually able to run all of them full-out at the same time - it would take too much power and produce too much heat. The logic behind this is that for most applications, total performance is bottlenecked somewhere, so every part of the chip is never going to be active at the same time. Apparently something in their driver update has either changed this or (more likely) broken the logic to throttle b

      • or some intern optimized a complicated piece of logic by noticing it's essentially an idle loop---a very important idle loop.

        • Re: (Score:3, Funny)

          by Minwee ( 522556 )

          or some intern optimized a complicated piece of logic by noticing it's essentially an idle loop---a very important idle loop.

          You mean it wasn't a Speed-up Loop [thedailywtf.com]?

      • So nVidia FINALLY acknowledged that there is a problem with their newer graphics cards.

        I've been having this problem for over 5 months, since I got a new GTX 275. Games would crash or freeze because the fan duty cycle would stay fixed at 40% even with temperatures higher than 75C. I reported my problems to the nVidia forums, but people there said it had nothing to do with the driver, but was probably a manufacturer BIOS or chipset issue. Still, since the problem can be solved by software using RivaTuner, I do

    • >Oddly enough, I played World of Warcraft and Fallout 3 quite a bit since upgrading to these drivers, and my performance has been much better than the previous win7 64x driver.

      If you read the release notes you'll see big performance gains on a lot of games from this driver. This is something I've never seen from Nvidia. Anyone have the details on what happened? Maybe they found some new way to be efficient or found some long standing bug.

      • by tlhIngan ( 30335 )

        Oddly enough, I played World of Warcraft and Fallout 3 quite a bit since upgrading to these drivers, and my performance has been much better than the previous win7 64x driver.

        If you read the release notes you'll see big performance gains on a lot of games from this driver. This is something I've never seen from Nvidia. Anyone have the details on what happened? Maybe they found some new way to be efficient or found some long standing bug.

        Or someone mentioned elsewhere, you can't run the card at full capabili

    • I had an 8800GS that overheated with all recent drivers. I had to underclock its memory with Rivatuner by about 10%.

      I recently picked up a GTS 250, which also overheats. I had to underclock its memory by about 30%, bringing it in line with the 8800GS's memory speed.

      The culprit seems to be inadequate GDDR3 cooling. Frankly, I'm glad their new drivers are more efficient, and stress the card more. I just wish they didn't half-ass the heatsinks.

  • Apart from the fan problem, is this version more stable? The last version causes my laptop to crash every few minutes, making it unusable, so I have to run the VESA driver.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Laptop? You should probably use the drivers from your laptop manufacturer, they often customize things to get clock frequencies etc right for their specific model.

      • by jhoegl ( 638955 )
        Also they like their spyware.
      • Re: (Score:3, Insightful)

        by yacc143 ( 975862 )

        What a stupid recommendation, I mean, they usually stop providing updates the moment the next model comes out.
        Consumer laptop models seldom have a support life much beyond 6-12 months (and that assumes you buy it on the first day it's out), yet plenty of consumer laptops stay quite usable way longer than 12 months.

        Hence you are forced to use the upstream drivers.

      • by ZosX ( 517789 )

        I always use the untouched nvidia drivers for my laptop. If they can't figure out how to make drivers for their own chipsets then shame on them. I mean most integrated nvidia chipsets have fairly fixed clock frequencies. I've even overclocked my laptop's GPU slightly, without a great deal more heat generated. (thanks evga!) I always get the best performance from stock nvidia drivers. I tried the dox drivers but found them to be problematic and no faster than the nvidia stock. I guess YMMV of course......

      • Maybe they do that when their machine is first released. I doubt they do that with each new driver version.
      • Laptop? You should probably use the drivers from your laptop manufacturer, they often customize things to get clock frequencies etc right for their specific model.

        In my limited experience, the laptop manufacturer releases drivers at a very slow pace and they stop releasing new versions after a while. After some point the most they might do is release a new version if a new OS comes out.

        For the most part that's alright (especially in non-gaming environments), but some games need a more recent version of a driver due to a fix or a new feature.

        I tried playing Champions Online with the manufacturer's 1+ year-old video drivers on my laptop after a reformat. The game war

    • by scalarscience ( 961494 ) on Friday March 05, 2010 @07:06AM (#31369550)
      This issue is related to automatic fan control not working due to improper registry keys, and so GPUs that run warm (9800 series for instance) can quickly overheat and potentially suffer damage. I'm having no issues with mine, but I set fan profiles manually as I'm using a machine that has a very hot MCH & FB-DIMMs (2008 Xeon) and don't want the GPU contributing more. However for anyone interested (and using a GT200 or at least G80/G92 on up) here's the fix: http://forums.nvidia.com/index.php?showtopic=161767 [nvidia.com]
      • by mmalove ( 919245 )

        I wonder if this is what was happening to me then?! I run a 9800M GS (laptop version of the 9800). Been overheating for months now, finally resorted to using ntune to underclock the processors by 25%. Fixed the crashing with minimal impact to WOW (hurray for modern GPUs on a 5 year old game).

        • It being a laptop, usually there isn't actually a fan specific to the video card. My last laptop had an 8600M GT 256MB (I only retired it on Wednesday for the new one; that system was almost 3 years old). Inside the case, there was a heatpipe from the video card to a set of fins that was right next to the fins for the CPU heatsink (with a similar heatpipe setup), and a separate fan that blew air through both sets of fins. The cooling fan wasn't actually connected to either the graphics card or the CPU, and

          • by mmalove ( 919245 )

            While I appreciate the advice, I've cracked open the innards of the laptop and do clean it regularly. It is indeed set up as you describe with the heat sink pipe leading to a single fan/exhaust system. Maybe the fan on that's just choking independent of driver issues.

            Ah well, it runs significantly cooler underclocked - should carry me till I'm ready to replace the system I think.

  • by cbope ( 130292 ) on Friday March 05, 2010 @05:41AM (#31369156)

    Wait a minute... just how is an overheating graphics card causing damage to a CPU? As an EE, I'd love to hear the basis for that. Even motherboard damage is extremely unlikely, unless the card bursts into flames and torches the PCIe slot. Or the graphics card gets hot enough to re-flow solder, which then drips onto the PCIe slot or motherboard components. Not to mention most cases are vertically oriented these days. Not a chance in hell, I'd say.

    I'm not saying there isn't an issue, but it sounds like the issue is just a bit over-hyped... or someone has an agenda and just wants to bash NVIDIA.

    • Re: (Score:2, Interesting)

      by theeddie55 ( 982783 )
      if it causes a short circuit, the feedback could easily blow the CPU, though in practice, any half decent power supply should cut out before that can happen.
      • by cbope ( 130292 )

        Umm... not likely. A short circuit in the power circuit of the GPU would only affect the graphics card itself and probably the power supply. At the most it would probably trip the over-current protection in the power supply which would simply shut down. The only electrical connections between the CPU and GPU are data lanes and these are not sufficient to bring down a CPU. They are only signal circuits, not power. Even shorting a signal to ground is unlikely to do damage. Remember, a binary zero is 0V (or ve

        • One thing to remember about PCs is that there is a wide range of voltages present. At the high end you have 12V for analog stuff and bulk power distribution* while at the low end you have signal voltages of less than 2V (afaict) and core voltages of under a volt.

          I could easily see a badly fried graphics card putting 12V onto the data lines. That would almost certainly damage whatever was on the other end (northbridge for LGA775 stuff, IOH for LGA1366 stuff, CPU for LGA1156 stuff) and maybe stuff beyond tha

    • The melting GPU burns the CPU. It could even damage the carpet!

    • by mkairys ( 1546771 ) on Friday March 05, 2010 @05:56AM (#31369238) Homepage
      Laptops for example generally have the same heat pipe connected to the CPU and GPU. If one overheats, so can the other.
      • Re: (Score:1, Informative)

        by Anonymous Coward

        And this is an additional problem since all decent GPUs can survive much higher temperatures than CPUs.

        Water cooling from the same reservoir & same cycle and such is fine, but a shared heatpipe would be questionable in most (but not all) cases. The difference in max operating temperature is just too high.

        • by mkairys ( 1546771 ) on Friday March 05, 2010 @08:35AM (#31370108) Homepage
          Spot on. My 8600GT started overheating in my laptop and while it survived, my CPU was hitting 105C and would shut down randomly and required the processor, motherboard and many other components to be replaced (the heat ruined the life of the battery). The GPU was holding out at the temperatures fine but because of the heat pipe it was connected to, it was cooking the CPU in the process.
    • Re: (Score:3, Informative)

      by Manip ( 656104 )

      The slot can be damaged by overheating cards, and if it is your only 16x slot then you could wind up throwing away the entire motherboard. Although typically this is more often seen when a card overheats multiple times causing the material to expand and contract until it eventually fails (as opposed to this case when cards just die).

      My only guess about CPU damage is unregulated power spikes but that is just conjecture. Plus if anything was going to get damaged by power spikes it wouldn't be the CPU it would

      • If the fan fails to spin up and the gfx overheats, the ambient temp in the case rises. Without good airflow it would be easy for an overheating gfx card to seriously affect the CPU heatsink's heat dispersion properties. An ambient temp of 100f means your cpu will be that temperature at least. I don't know the physics, but my "seat of my pants" maths tells me that you'll add 50% onto that temperature from the cpu, bringing up the core temp to almost 150f / 60c. My old prescott ran at that temperature, and it
        • AT cards for desktop cases had the processor and memory on the right side while the expansion cards were grouped on the left side. When the tower AT cases appeared, they put the hottest item (the processor, at that time running without a heat sink, I've seen even a 486SX in a Dell running very hot) at the top of the case, and near the power source (the only source for ventilation in those cases were the fans in the PSU). The hard drives were situated near openings in front of the case, so whatever airflow t

          • BTX never really took off in the "third party components and DIY" sector, even back when Intel was cranking out chips that really could have used the extra cooling help; but it was a pretty big hit in corporate basic-box land. To this day, a substantial proportion of PCs from the various vendor's business lines are either actually BTX or heavily BTX inspired in terms of cooling layout.
          • by Molochi ( 555357 )

            To add to what Fuzzy said,

            There have been a number of nonstandard "ATX" cases that mount the motherboard upside-down on the opposite side of the enclosure mostly intended for multi videocard gaming machines. some of those even isolate component areas like BTX does. Cable lengths can be an issue in these though.

            • Yes, cases with the power supply on the bottom (and even in a separate compartment along with the hard drives) appeared some time ago. I don't remember separate compartments for video cards and for processors, but dedicated ventilation for those zones are becoming common.

    • by yacc143 ( 975862 )

      Well, I'm not an EE, but I seem to remember from university, that changing temperatures can lead to changes in voltage/current. Then you've got the extreme case of a short circuit.

      So I think it's quite possible to have motherboard damage, e.g. GPU takes more power than is good for the MB, MB dies. Slowly or quickly, depending upon how extreme the effect is.

      As an official example, see GPUs that have a separate power connection, where the documentation explicitly states that the GPU and/or the motherboard can

    • Wait a minute... just how is an overheating graphics card causing damage to a CPU?

      Depends on the airflow in your case, but many are not well laid out in terms of airflow. If the card is pumping out more heat than usual and this isn't being drawn out correctly, it may build up in the case generally, reducing the ability of the CPU's HS+F to cool it properly. Similarly, if the heat built up is sufficient for an appreciable amount of time (say, over the course of a long gaming session) you may also find drives and other components start failing due to overheating though the CPU is the item

    • The excessive heat can overwhelm the standard cooling system on a PC.

      As an EE, I'm sure you're well aware that heat has negative effects on CPU's and other electrical components.

    • I have had a video card overheat and break my motherboard.

      I am not sure about the technical side but I imagine that the motherboard was not designed to run at extreme temperatures.

    • by mcgrew ( 92797 ) *

      Or the graphics card gets hot enough to re-flow solder, which then drips onto the PCIe slot or motherboard components. Not to mention most cases are vertically oriented these days. Not a chance in hell, I'd say.

      In hell you wouldn't even need to turn the PC on for all the solder to melt!

    • You're forgetting laptops. Everything integrated in close proximity on one motherboard, with shared fixed-capacity cooling. If the driver update pushes a heat-constrained laptop GPU harder, you could easily exceed whole-system thermal limits leading to CPU or MB damage.

      The less obvious case is if a desktop system is ventilated just well enough to handle normal heat from its components, and the GPU goes into thermal overdrive because of this driver. In that case, intra-case temps will go up and, if not notic

  • by Mascot ( 120795 ) on Friday March 05, 2010 @05:48AM (#31369198)

    WoW seems an odd companion to those other games, I've always felt the CPU was the primary bottleneck in that beast, but be that as it may..

    For me, I can't recall ever solving an issue or getting noticeable performance improvements from upgrading graphics drivers. I have, however, had several issues introduced by it.

    Nowadays I stick to the old "if it works don't try to fix it" mantra, with a few exceptions. For example, I kept up-to-date for a bit after Win7 release, assuming there would be teething issues for a few revisions. If buying a bleeding edge recently released card I would also stay on top of drivers for a month or two. But other than that, just leave them be I say.

    • Re: (Score:3, Informative)

      by L4t3r4lu5 ( 1216702 )
      The shadows implemented with v3 crippled WoW graphics performance. I have a C2Q Q6600@2.8GHz, 4GB DDR2 RAM, and an 8800GTX, running everything at max settings except shadows (blob only) at 1920x1200 with a minimum of 60fps. If I turn shadows up one level I get 40fps; full shadows bring the thing to a crawl even in open areas like The Shimmering Flats.

      I can easily see the gfx being a bottleneck with the shadows up, but other than that I agree. Loading the other players in Dala is horrid.
    • More to the point, World of Warcraft has "realistic graphics?" Even if you ignore the art style, which is as far from realistic as you can get, the engine is something like 5-10 years older than all the other games listed there and quite frankly looks like ass.

      I wish people would proofread before they publish an article that thousands will read.

  • by L4t3r4lu5 ( 1216702 ) on Friday March 05, 2010 @06:15AM (#31369304)
    The EVGA tool has been used to manually set fan speed to 77% to compensate. I see no reason for other low-level customisation tools (RivaTuner etc) to not behave in the same way.

    If you get a performance boost from this new driver, download RivaTuner or a similar tool and manually set the fan speed for gaming.
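For what it's worth, pinning the fan at a fixed 77% (as with the EVGA tool above) is a crude stand-in for the temperature-to-duty fan curve that utilities like RivaTuner let you define. A minimal sketch of such a curve follows; the breakpoints are purely illustrative assumptions, not NVIDIA's defaults, and `fan_duty_for_temp` is just the mapping, with the vendor-specific reading and writing of fan registers left out:

```python
# Sketch of a manual fan curve of the kind third-party tools apply.
# Breakpoints are illustrative (temperature in deg C, duty cycle in %).
FAN_CURVE = [(40, 35), (60, 50), (75, 77), (85, 100)]

def fan_duty_for_temp(temp_c: float) -> int:
    """Linearly interpolate a fan duty cycle from FAN_CURVE."""
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]          # below the curve: idle duty
    for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if temp_c <= t1:                # temp falls in this segment
            frac = (temp_c - t0) / (t1 - t0)
            return round(d0 + frac * (d1 - d0))
    return FAN_CURVE[-1][1]             # above the curve: full speed
```

A tool would poll the GPU temperature every second or so and apply `fan_duty_for_temp(temp)`; the point is that duty ramps smoothly with heat instead of sitting fixed at 40%.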
    • by Deluge ( 94014 )

      It's pretty pathetic for NVIDIA to write drivers that require the use of 3rd party utils to achieve sane fan behaviour. The GTX260 I bought was the first video card that I'd bought that required such a massive cooling solution, and I thought that since the cooling hardware seemed fairly capable, the software wouldn't be a problem.

      Imagine my surprise that, by default, the fan is set to run at 40% without *ANY* ramping, and only jumps to 100% when the card reaches ~85C - when it's basically overheating. Tha

  • Terrible design (Score:4, Insightful)

    by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Friday March 05, 2010 @06:48AM (#31369458)

    Software should not be able to destroy hardware, period. The GPU's cooling system should be designed to safely operate for sustained periods at peak load; anything less is artificially crippling the hardware and leads to both security and reliability problems.

    Great job, NVIDIA: now, malware can not only destroy your files, but destroy your expensive graphics card as well.

    • Re: (Score:2, Funny)

      by Kleppy ( 1671116 )

      "Software should not be able to destroy hardware, period."

      Tell that to Toyota.....

    • Re: (Score:2, Interesting)

      by rotide ( 1015173 )

      Software (read: applications) isn't destroying hardware in this case. The hardware itself is now "faulty" as the drivers have a pretty bad bug.

      In my mind, this is no different than taking the heatsink/fan off a CPU. That's a hardware issue. Doesn't matter what games, etc., you run, you risk killing that CPU because the CPU is under an abnormal operating condition.

      While drivers are in control in the case we have here with nVidia, I see the drivers as part of the hardware since they were released by the

      • Re: (Score:2, Flamebait)

        by QuoteMstr ( 55051 )

        I see the drivers as part of the hardware since they were released by the manufacturer.

        Congratulations! You've won the "Stupidest Thing Dan Has Read In The Last 24 Hours" award.

        • Re: (Score:2, Funny)

          by rotide ( 1015173 )
          Do you care to discuss something? Or does it simply make you feel better to make fun of people that disagree with you and/or have a different opinion?
        • Ok captain literal, calm down. Obviously he meant that the hardware was useless without drivers...kind of like how cars are useless without roads or gas. In that case, the drivers could be considered to be part of the hardware....perhaps saying "hardware package" would suit you better.
      • I see the drivers as part of the hardware since they were released by the manufacturer.

        So Mac OS X is hardware, too, because it's released by the hardware manufacturer, i.e. Apple?

        • by rotide ( 1015173 )

          If you want to take my post out of context, I guess.

          But my point, in this case, is what happens if the firmware was causing the problem? Ok, that's software too, should that "never" damage hardware as well? I mean, it's code written and compiled, right?

          When it comes to video cards, there are at least two sets of software released by the manufacturer that run the card. One is the firmware and two is the driver. If either one bugs, it's software causing the hardware to fail.

          I took the OP to mean "applicat

      • Software (read: applications) isn't destroying hardware in this case. The hardware itself is now "faulty" as the drivers have a pretty bad bug.

        In my mind, this is no different than taking the the heatsink/fan off a CPU. That's a hardware issue. Doesn't matter what games, etc, you run, you risk killing that CPU because the CPU is under an abnormal operating condition.

        Er, no, because the hardware clearly still works fine with older drivers.

        While drivers are in control in the case we have here with nVidia, I s

    • Old monitors could be killed by software as well (by just selecting too high a sync frequency). Later monitors added protection against that.
      Also, don't some motherboards allow you to set the CPU voltage in the BIOS? I guess that means you could fry your CPU from software as well.

    • by syousef ( 465911 )

      Software should not be able to destroy hardware, period

      Good luck with that since software controls the hardware. Whether it's in bios or drivers, software that operates hardware is going to be able to fry it if written poorly.

      The GPU's cooling system should be designed to safety operate for sustained periods at peak load --- anything less is artificially crippling the hardware and leads to both security and reliability problems.

      Yes, that's why they built a fan or heat sink into the graphics card.

      Great job, NVIDIA: now, malware can not only destroy your files, but destroy your expensive graphics card as well.

      You must be new to computing because the ability for a virus to destroy hardware is not new. The only reason it's not done more often is that there's no money or glory to be made in such asshole behaviour. So instead viruses focus on stealing bank account details.

    • Software should not be able to destroy hardware, period. The GPU's cooling system should be designed to safety operate for sustained periods at peak load

      And that's certainly the strategy in the corporate* world (for servers, for example).

      On the other hand, some other people, the kind who only occasionally play games and use their computer most of the time for office-type work (i.e. non-graphically-intensive tasks), would appreciate not having to endure the sound of an Airbus A380's takeoff coming out of their computer case every single moment during which the computer is on.

      Thus the fan aren't working at constant speed, but are varying their speed to constantly find the perfect balance between silence and avoiding the card catching fire under the load.

      • by Deluge ( 94014 )

        "Thus the fan aren't working at constant speed, but are varying their speed to constantly find the perfect balance between silence and avoiding the card catching fire under the load."

        And that's the problem with NVIDIA cards. The fan stays at 40% until the card is near overheating, and only then will the fan jump into 100% "oh shit" mode. And to change this behaviour you need a 3rd party util because what NVIDIA provides (a driver addon) is broken, and has been for as long as I've been using the latest NVI

      • The only thing that could have been done is adding a safeguard which fires a software alarm and either shuts down or massively underclocks the 3D core in case a temperature threshold is crossed. (That's how it's done to protect the CPU in case of a faulty fan.)
        Back in the P4/Athlon XP era Tom's Hardware demonstrated ( http://www.youtube.com/watch?v=BSGcnRanYMM [youtube.com] ) removing the heatsink from various chips while gaming; the P4 kept going, albeit at slideshow framerates, and recovered fine when the heatsink was reattached.
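The safeguard described above (alarm, then massive underclock or shutdown once a temperature threshold is crossed) can be sketched as a small state function. The thresholds here are assumptions for illustration only, not NVIDIA's real trip points, and the hysteresis band keeps the card from flapping in and out of the throttled state:

```python
# Sketch of a software thermal failsafe with hysteresis.
# Thresholds are illustrative assumptions, not real NVIDIA limits.
SHUTDOWN_C = 105   # hard cutoff: power down rather than risk damage
THROTTLE_C = 90    # trip point: underclock the 3D core
RESUME_C   = 80    # stay throttled until temperature recovers below this

def next_state(temp_c: float, throttled: bool) -> str:
    """Return 'shutdown', 'throttle', or 'normal' for one polling step."""
    if temp_c >= SHUTDOWN_C:
        return "shutdown"
    if temp_c >= THROTTLE_C:
        return "throttle"
    if throttled and temp_c > RESUME_C:
        return "throttle"   # hysteresis: don't un-throttle at 89C
    return "normal"
```

A driver-level watchdog would call this on every temperature poll and act on the result; the CPU world has worked this way for years, which is the commenter's point.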

        • Nvidia need to realise (like AMD did after that video) that their overheat protection systems need failsafes implemented at as low a level as possible, so that even if the fan system fails the chip can't cook itself.

          Modern GPUs can't easily cook themselves. They are spec'ed to work at extremely harsh temperatures; 90C is a normal operating temperature for some chips.

          The problem is not the CHIP, the problem is the PCB. If the cooling system fails, the whole PCB will be heated. If that goes on for a prolonged time, the card is going to suffer: thermal stress, board warping, connections breaking, solder melting, etc.
          The exact same thing has been observed in some modern consoles (XBox 360 mainly) and some classic computers (Apple III).

    • by null8 ( 1395293 )
      That is not realistic. If you want to provide people with the possibility of a BIOS update to fix hardware bugs, you can also overwrite your BIOS with garbage that applies incorrect voltages, which will physically destroy your mainboard; it once happened to me. If you know how, you can even load new microcode, which can kill a CPU. One can theoretically open multiple tristate gates and cause some kind of short circuit. I mean, you can say "no one should kill another person, period", everyone wil
    • I think it has a lot to do with enthusiast cards. That market segment is incredibly picky and extremely informed. They tend to push the hardware to the limits, and beyond if at all possible. A lot of these guys run fans at 100% for a year with the card pushed to the max, so it has a lifespan less than 30% of a stock card's. As such, they buy more cards per year than their mainstream counterparts. They are also the highest profit margin and recoup R&D costs for NVidia and ATI.

      A good way to kill your en

    • by Nemyst ( 1383049 )
      This is why you don't buy directly from Nvidia (or ATI for that matter). If you get a BFG or XFX or EVGA card (which often run for about the same price as vanilla cards), you also get a lifetime warranty to protect against exactly this kind of trouble. A friend of mine got his EVGA GPU fried (8800GTX) and they replaced it within 5 days with a GTX 260, for free. No, it's not normal that the drivers can allow that, but shit happens. It'll get fixed quickly I hope, but you should always give yourself some prote
    • And people called me crazy when I said there was a possibility of software ruining hardware again. Those old enough should remember the ANSI/ASCII malware that ran around for a while popping people's monitors before there was sync locking. And they should also remember the number of viruses floating around that would crash drive heads into the spindle.

    • Software should not be able to destroy hardware, period. The GPU's cooling system should be designed to safety operate for sustained periods at peak load --- anything less is artificially crippling the hardware and leads to both security and reliability problems.

      Great job, NVIDIA: now, malware can not only destroy your files, but destroy your expensive graphics card as well.

      This shouldn't be surprising to anyone. Software (or firmware, if you want to make the distinction) has been used to control fans on GPUs, CPUs, northbridges and plenty of other components for many, many years. I think people don't think about the alternative: putting hardware exclusively in charge of fan control. If you choose the hardware method, there is just as much chance of it becoming fucked up due to lack of testing, poor design choices, etc. However, if you ship a million units with faulty hard

      • Frankly, I think all parties benefit with a software solution.
        IMO the best solution is a hybrid. Let software be in charge of finding the normal balance between cool, quiet and fast, but have safety limits in the hardware that kick in when things go too far outside normal.

        BTW, does anyone know if anyone has retested the effect of heatsink removal under load with more modern CPUs since Tom's Hardware did it back in 2001? I'd like to know if AMD have got their act together...
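The hybrid scheme suggested above (software picks the cool/quiet/fast balance, hardware enforces a floor it cannot override) could be sketched like this; the trip points and minimum duties are made-up illustrative values:

```python
# Sketch of a hybrid fan policy: software proposes a duty cycle,
# a hardware-level failsafe clamps it. Values are illustrative assumptions.
HW_FAILSAFE = [(95, 100), (85, 60)]  # (trip temp C, minimum duty %)

def effective_duty(software_duty: int, temp_c: float) -> int:
    """Return the duty the fan actually runs at: hardware wins on the floor."""
    floor = 0
    for trip_c, min_duty in HW_FAILSAFE:
        if temp_c >= trip_c:
            floor = max(floor, min_duty)
    return max(software_duty, floor)
```

With this split, a buggy driver asking for 40% at 95C still gets 100% fan, while at normal temperatures software keeps full control of the noise/cooling trade-off.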

    • by Brianwa ( 692565 )
      Reminds me of my laptop. The fan speed is software controlled. There's some sort of fail safe built in -- if the speed control software doesn't initialize or stops responding completely for around 10 seconds, the fan will spin up to maximum speed and stay there. Once I was having some software problems (a nasty rootkit broke a lot of stuff) and the fan speed control process was crashing every few hours. One time, the fail safe didn't kick in. Unfortunately the CPU load was light, so it sat there runnin
  • Don't expect it fixed... ever! In 2005 I bought a "top end" Nvidia card that worked fine most of the time, but occasionally it would go through fits where it threw up a BSOD announcing an infinite loop was detected in the display driver nv4_disp.dll.

    Many reported it to nVidia - me included - but they ignored everyone through every avenue. The bug stayed there through releases of new generation nVidia cards, and Google shows people still finding the bug and trying to "fix" it to this day.

    I can only presume nV

    • I haven't heard of that particular problem, but I should point out that nVidia does in fact accept bug reports (on Linux, just run nvidia-bug-report.sh and it'll tell you where to mail your report), and I have actual experience with reporting a bug (nvidia_drv crashed X when switching to another virtual console while an OpenGL window was minimized) and having it fixed.

    • My 8800GT was getting that, but mainly in older games (Descent 3, Gothic, Recoil, Neverwinter Nights) and just a few of the newer ones (Titan Quest: IT). The integrated Nvidia 7100 ran those perfect with the exact same drivers.

  • How the hell did those guys get into the Starcraft II beta, I've been waiting for months!
  • Far Cry 3 (Score:5, Informative)

    by Karem Lore ( 649920 ) on Friday March 05, 2010 @07:53AM (#31369820)

    Hi,

    Please do tell where I can get Far Cry 3....Unless bittorrent has seriously moved into time travel of course...

  • I had to revert back to the 195.62 driver because 196.21 was causing my system to randomly lock up, even more so when I was playing games such as Star Trek Online. Boy am I glad I didn't see the newer one. I will tell you this, however: these last two driver revs from Nvidia are sure starting to make me look more closely at ATI again.

    • You only think it's unrealistic because you think the unrealistic graphics the Matrix gives you is the reality. The real reality of course looks exactly like WoW graphics.

  • That's very odd. Also odd is that from the article it seems that the overheating has to do with how realistic the game looks; as if the card just KNOWS the content looks realistic, and suffers a spell of worry, feeling stressed about performing, and thus not managing to cope. Oh, the poor GPUs, they deserve better. Spread the love.
  • nvidia is evil since they don't publish their hardware programming manual like AMD(ATI)/Intel. Buy AMD(ATI) or Intel. Avoid nvidia like hell till they release their manuals.
  • Last week my win7 bluescreened 3 times with weird hardware errors while playing WoW. I knew something was off but never figured it would be crappy nvidia cards. I've always been a fan and always bought their cards but yea wtf is up with that. Maybe time to try some ATI

  • Unlike the (what appears to be purely speculative) complaining here, modern graphics boards have thermal and voltage protection circuitry that operates independently of the software to protect the GPU from exactly this sort of situation. That's why the Blizzard report talks about a lot of "my game slowed down" complaints rather than "my GPU blew up" complaints.

    • Unlike the (what appears to be purely speculative) complaining here, modern graphics boards have thermal and voltage protection circuitry that operates independently of the software to protect the GPU from exactly this sort of situation.

      Are you saying it's impossible to damage modern GPUs by reckless overclocking?

  • I finally sidelined my (very expensive) NVIDIA card because it kept bsodding. Damn nvlddmkm driver. This is a long term problem for NVIDIA. Check the web.

    Don't know whether it's a software or hardware problem. Card used to work.

    Won't buy NVIDIA for a long time.

  • by RobDude ( 1123541 ) on Friday March 05, 2010 @11:31AM (#31372198) Homepage

    Just sayin...
