
NVIDIA Gives Details On New GeForce 6

An anonymous reader writes "According to Firingsquad, NVIDIA will be announcing a new GeForce 6 card for the mainstream market at Quakecon this week. Like the GeForce 6800, this new card will support shader model 3.0 and SLI (on PCI Express cards), so you can connect two $199 cards together for double the performance. NVIDIA will also produce AGP versions of this card."
  • Imagine (Score:3, Funny)

    by Anonymous Coward on Monday August 09, 2004 @08:30AM (#9919430)
    A Beowulf cluster of video cards...??? Oh wait...

    *Ducks.
    • Re:Imagine (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Monday August 09, 2004 @09:03AM (#9919688) Journal
      Actually, some 3DLabs workstation cards let you do this. They have an external connector so that you can join a load of them (in different machines) together to make a rendering cluster. Of course, if you want to use commodity hardware (and don't mind a two-frame latency) you could always use Chromium [sourceforge.net].
  • by buro9 ( 633210 ) <david@buro9 . c om> on Monday August 09, 2004 @08:30AM (#9919433) Homepage
    If there are as many people out there with fresh copies of Doom 3 in their hands or winging their way to them as I suspect, then this will be slashdotted veerrryy soon.

    So here's the content:

    In last week's conference call ( http://www.corporate-ir.net/ireye/ir_site.zhtml?ticker=NVDA&script=2100 ), NVIDIA CEO Jen Hsun Huang confirmed reports that NVIDIA would be launching a new shader model 3.0 mainstream card shortly: "In a few days we're going to turn up the heat another notch. At Quakecon in Texas, a mecca for gamers and truly a phenomenon to witness, we will officially unveil our newest mainstream member of the GeForce 6 family".

    Jen Hsun went on to say:

    This mainstream GeForce 6 will be the only shader model 3.0 GPU in its class and deliver performance well beyond that of the competition. PCI Express support is native and AGP support will be provided through HSI, once again showing the versatility of the HSI strategy...sampling started in June, production is in full steam on TSMC's 110 nanometer process, with shipments to OEMs soon.

    Price points and product names weren't discussed, but Jen Hsun also confirmed SLI support for this upcoming card, and mentioned that by the end of the year NVIDIA will have a top-to-bottom family of shader model 3.0 cards. In fact, he said: "we're ramping 110 on two GeForce 6 families right now at TSMC, and very shortly we'll start a third...and this quarter we'll have five GeForce 6 GPUs in production, and that ought to cover us from top to bottom."
  • by Anonymous Coward on Monday August 09, 2004 @08:30AM (#9919435)
    I can't wait for the GeForce 27, it's going to be sooo much better. :-)

    Seriously, can't they figure out a new name already?
  • Thank you! (Score:5, Interesting)

    by merlin_jim ( 302773 ) <James@McCracken.stratapult@com> on Monday August 09, 2004 @08:31AM (#9919440)
    SLI was such an obvious way to make graphics rendering parallelized! I'm glad they're bringing it back... I've been missing it.

    Does anyone have any idea how many PCI Express lanes this uses? It's my understanding that you have a total of 20, and most motherboards allocate 16x to the video card... will this card require 8x? Or do you need a special motherboard for this?

    Anyone know?
    • Re:Thank you! (Score:4, Insightful)

      by dj42 ( 765300 ) on Monday August 09, 2004 @08:36AM (#9919477) Journal
      Ignorance warning: I don't know much about the technology, but scanline interleaving seems difficult to pull off with present-day technology because of anti-aliasing algorithms, temporal AA, etc. These things have to be calculated and available on both cards if they are generating the image line by line (alternating turns). It's "obvious" in that it makes sense intuitively, but technologically it seems like a more impressive feat.
      • I think AA is doable over PCIe.

        Both cards render, share the output with each other, then both cards apply anti-aliasing based on the result, then output their respective lines. Maybe only one card needs to output, rather than having goofy cables.
        • Re:Thank you! (Score:3, Informative)

          by dossen ( 306388 )
          I don't care to google for the reference (I actually think I read it on dead tree), but I believe that what Nvidia is doing is to divide the screen into an upper and a lower part, with a boundary that is moved according to load. Then the two cards each render one part, communicating over the SLI link (and possibly the PCIe bus), and one of the cards outputs the finished frame (so one card would have nothing connected to its outputs).
      • Re:Thank you! (Score:4, Informative)

        by hattig ( 47930 ) on Monday August 09, 2004 @08:55AM (#9919618) Journal
        This new SLI is not the same as old skool SLI.

        The new one divides the screen up into two sections; I assume that if both cards are equally powerful then it will be 50:50 or thereabouts, with a little bit of overlap so that anti-aliasing and whatnot works correctly on the seam.

        Then one card sends its generated half of the scene to the other, and they are merged and output to the display.
        • Re:Thank you! (Score:4, Informative)

          by So_Belecta ( 738191 ) on Monday August 09, 2004 @09:35AM (#9919943)
          From what I read a while back, the screen isn't divided into two equal sections, but rather in a proportion that lets each card do approximately the same amount of work. For example, if they were working on a scene where the sky took up the top two-thirds of the screen while the bottom third was complex geometry, then one card would work on, say, the top 80% or so, while the second card would work on the bottom 20%, so that neither card is ever doing significantly less work than the other.
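For illustration, a rough sketch of how such a dynamic split could be driven by per-frame render times. This is hypothetical load-balancing logic, not NVIDIA's actual algorithm; the function name, step size, and frame height are made up:

```python
# Hypothetical split-frame load balancer: nudge the boundary row so that
# both GPUs take roughly equal render time on the next frame.

def rebalance(split_row, frame_height, time_top, time_bottom, step=8):
    """Move the split toward the slower half by a few scanlines per frame."""
    if time_top > time_bottom:
        split_row -= step   # top GPU is slower: give it fewer rows
    elif time_bottom > time_top:
        split_row += step   # bottom GPU is slower: give it fewer rows
    # clamp so neither GPU ends up with nothing to render
    return max(step, min(frame_height - step, split_row))

# Example: the top half (sky) renders in 4 ms, the bottom (geometry) in
# 12 ms, so the boundary drifts downward over successive frames.
split = 600
for _ in range(5):
    split = rebalance(split, 1200, time_top=4.0, time_bottom=12.0)
print(split)  # 640 -- the top GPU now covers more of the cheap sky
```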
        • What you're talking about is Alienware's proprietary "SLI" technology, that works even if you're using two different brands of video cards.

          What nVidia is talking about is SLI in the Voodoo2 sense.

        • Re:Thank you! (Score:3, Insightful)

          As I understand it, you're real close here.

          What I understood (when I read an article about it around, what, a month back?) is that yes, each card renders a separate portion of the screen, but the spiffy thing about this new implementation is that the ratio is dynamic; if there's a lot going on in one half of the screen and not much in the other portion, the under-utilized card starts rendering more of the screen to allow more focus on the "action-intensive" area by the other card.

          Then again, I c
      • Re:Thank you! (Score:3, Insightful)

        by nmk ( 781777 )
        I think this is the reason that SLI is only available on the PCIe cards. PCIe provides an independent bus for each component. This means that components can communicate very quickly not only with the processor, but also with each other. My understanding is that the processor is also connected to the rest of the components in your system using a PCIe bus. So, due to PCIe, both cards can communicate with each other as quickly as they communicate with the processor. This should make it possible to have t
    • Re:Thank you! (Score:5, Informative)

      by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Monday August 09, 2004 @08:37AM (#9919486) Homepage Journal
      PCIe is a switched network on your motherboard. If you're technically inclined, read this article [arstechnica.com] for further details.
    • Umm...

      Considering that the PCIe spec allows for ports at 32x, I don't think you've got a limitation of 20 channels on it. From what I've seen, the only reason you'd have any sort of limit is going to be chipset dependent.

      As for needing a special mobo for it, you're going to end up with a high-end workstation board if you want a pair of 16x ports unless SLI sparks enough consumer demand to bring the feature down to consumer boards. Even then, it's probably only going to be found on relatively high-end k
    • Re:Thank you! (Score:5, Insightful)

      by NanoGator ( 522640 ) on Monday August 09, 2004 @09:55AM (#9920116) Homepage Journal
      "SLI was such an obvious way to make graphics rendering parallelized! I'm glad they're bringing it back... I've been missing it."

      From an economics point of view, it sounds pretty cool. Spend a few extra $$$ to get a top-of-the-line card. Then, in a year or two, pick up a second card when the prices are considerably lower, and you get 2x the performance without tossing hardware. Bitchin.

      Unfortunately, I wonder if that puts NVidia in an ugly place. It does set the bar for what the GeForce 7s have to do, at a minimum. But... that ain't bad for us, now is it? :)
  • Does it ever stop? (Score:5, Interesting)

    by xIcemanx ( 741672 ) on Monday August 09, 2004 @08:31AM (#9919445)
    Is there a point where graphics cards get so advanced that humans can't even tell the difference anymore? Or is that virtual reality?
    • by DaHat ( 247651 ) on Monday August 09, 2004 @08:41AM (#9919521)
      Why would it?

      So long as game companies turn out new games that make existing systems cry for mercy (and we choose to buy them), we will always need to buy newer video cards in order to stave off choppy video for another generation of games.

      Same goes for CPUs... although much of the difference is that most of the people buying a GeForce 6 are gamers and will use most/all of the power at their disposal... I'd wager only a fraction of those using the latest and most powerful CPUs from AMD or Intel use them to their full potential.
    • No (Score:3, Insightful)

      by caitsith01 ( 606117 )
      Based on recent cinema experiences, you would have to say we're still a hell of a long way from this. I just saw Spiderman 2, and a lot of the CG work still looks totally artificial. Likewise, the trailer for I, Robot made me cringe with its computer-generated aura. Even LOTR looked fake in places.

      Considering these movies are using the absolute cutting edge of pre-rendered graphics technology, I would suggest we're still a decade or so from anything like 'real' looking PC graphics.
    • by Creepy ( 93888 ) on Monday August 09, 2004 @09:01AM (#9919672) Journal
      There still are a number of things that are way out in the future for graphics processors, especially polygon based - for instance, ray tracing has the ability to reflect off multiple surfaces (you could create a house of mirrors, for instance, with true curvature reflections), while polygon models have just started to make decent reflections on a single flat(ish) surface. Radiosity and similar effects are usually mapped beforehand because they are so processor intensive to calculate in real time, but could be used to cast "foggy" shadows and create other creepy effects. Another possibility is to offload the entire graphics model to hardware and do everything (e.g. frustum polygon culling and quadtree/oct-tree culling) inside the hardware instead of in software.

      It seems to me graphics hardware still has a long way to go. There are also probably newer, more photorealistic models that have appeared since I studied computer graphics. Virtual reality in a true sense depends on audio and AI as well, but a virtual visual (and perhaps audio) reality is probably on the horizon. AI is probably 15-20 years down the line (at least for something that stands a chance at passing a Turing test, IMO).
    • In my Phys III class ages ago, we did the calculation for the resolvable limit of the human eye at a given distance from two points. I can't recall the formula, but it seems that at some point in the near future an 8000 x 6000 screen will look exactly like an 80,000 x 60,000 screen unless you're 2 cm away from it.
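A hedged back-of-the-envelope version of that calculation, assuming the commonly quoted figure of about one arcminute for the eye's angular resolution (the exact threshold varies with the viewer and the contrast):

```python
import math

# Back-of-the-envelope: the smallest pixel the eye can resolve at a given
# viewing distance, assuming ~1 arcminute of angular resolution.
ARCMIN = math.radians(1 / 60)          # ~2.9e-4 rad

def resolvable_pixel_mm(distance_mm):
    return distance_mm * ARCMIN

for d in (200, 600, 2000):             # 20 cm, 60 cm, 2 m
    px = resolvable_pixel_mm(d)
    print(f"{d / 10:.0f} cm: ~{px:.3f} mm per pixel (~{25.4 / px:.0f} DPI)")
# At ~60 cm that's roughly 0.17 mm, i.e. around 145 DPI -- past that point,
# extra resolution is wasted unless you move closer to the screen.
```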
    • Why not? The moment we can get graphics to the point where they look like the real thing, we will consistently go for increased performance until we have some crazy fast rendering that is so intense the human mind cannot experience it on a conscious level (A.D.D. anyone?)
      It won't stop until we get to "holo-deck" technology - and while we may not have the force field effect at the very least we could have a cool visual. So when you play Doom 50 you will be in a special suit that simulates blows,
    • by mikael ( 484 ) on Monday August 09, 2004 @09:49AM (#9920068)
      Reality was predicted to be 80 million triangles/second (with 25 pixels per triangle?). Just about every console system and graphics card now exceeds this.

      The human retina consists of 120 million rods (wavelength insensitive) for peripheral vision and 6 million cones (wavelength sensitive for red, green and blue) for central vision. To match the full capability of human vision, you'd need a 12000x12000 monochrome framebuffer covering a field of view of 170+ degrees, with a central 2000x2000 region in floating-point RGB colour, and it would have to update around 70 times/second.

      Graphics cards and virtual reality headsets are slowly edging up to the resolution for central vision, since there isn't much demand to support peripheral vision resolutions.
    • by f97tosc ( 578893 )
      Is there a point where graphics cards get so advanced that humans can't even tell the difference anymore? Or is that virtual reality?

      As someone else pointed out, the monitors may very well reach the limit that the human eye can resolve.

      However, the computational problem of generating those pixels can at least in theory be arbitrarily difficult. If the problem of calculating certain pixels in certain situations is NP-complete then we may never be able to calculate them all in time. It remains to be se
  • Only $200? (Score:5, Interesting)

    by Gamefreak99 ( 722148 ) on Monday August 09, 2004 @08:31AM (#9919447)
    Only $200?

    This should be interesting to see and good for competition to say the least.
  • Only two? (Score:5, Funny)

    by Anonymous Coward on Monday August 09, 2004 @08:32AM (#9919452)
    This is nice and all, but it's kind of ridiculous to only be able to link two video cards together. What if one of them dies? Then you're back to single-card performance until you can get a replacement. I would much rather get a RAIVC-5 array of, say, five to ten video cards. Then if any one dies, no big deal; the others can handle the load. And does anyone know if these new NVidia cards will be hot-swappable?
    • This does raise an interesting point, I think. NVIDIA seems to have let the cat out of the bag. A display card that can coordinate with another display card, perhaps doubling performance. Why buy next year's card that doubles your performance when you can buy last year's card and add it to your existing duplicate card for way less than paying the premium for the bleeding edge?

      BTM
      • Will last year's card be able to work with this newfound technology? Nvidia might not offer that kind of support (be it by choice or by design).
    • by Spokehedz ( 599285 ) on Monday August 09, 2004 @10:15AM (#9920268)
      And does anyone know if these new NVidia cards will be hot-swappable?

      I believe that PCI-Express is, in fact, hot-swappable.

      *Checks google*

      Yes. It is in fact hotplug/hotswap capable. I dunno how well your OS (*cough*Windows*cough*) will react to you unplugging the video card though... I'm sure that Linux will have wonky support for it initially, eventually getting stable and usable support about the time PCI-Express becomes obsolete... ;)
      • Yes. It is in fact hotplug/hotswap capable. I dunno how well your OS (*cough*Windows*cough*) will react to you unplugging the video card though... I'm sure that Linux will have wonky support for it initially

        So that's why I couldn't see anything, I forgot to mount my videocard!
  • by hal2814 ( 725639 ) on Monday August 09, 2004 @08:35AM (#9919465)
    I might need to dust off my textbook from "Parallel and Distributed Computing", but I'm pretty sure that getting double the performance from two cards is about as likely as getting double the performance from two processors. It's just not likely unless the graphics routines can split up jobs perfectly and not suffer any overhead for communication. I imagine there will be a noticeable performance increase from two cards working in parallel, since graphics algorithms do tend to be very parallelizable, but claiming double performance is naive at best and dishonest at worst.
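To put rough numbers on that intuition, here is a generic Amdahl's-law estimate of two-card scaling for a given non-parallelizable fraction of per-frame work. The serial fractions are illustrative guesses, not measured SLI results:

```python
# Generic parallel-speedup estimate (Amdahl's law), not measured SLI numbers:
# if some fraction of each frame's work can't be split across the cards
# (driver overhead, geometry setup, synchronization), two cards give < 2x.

def speedup(n_cards, serial_fraction):
    """Serial part runs once; the rest divides evenly across the cards."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cards)

for s in (0.0, 0.05, 0.10, 0.20):
    print(f"serial fraction {s:>4.0%}: {speedup(2, s):.2f}x with two cards")
# 0% -> 2.00x, 5% -> 1.90x, 10% -> 1.82x, 20% -> 1.67x
```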
    • by rokzy ( 687636 ) on Monday August 09, 2004 @08:40AM (#9919512)
      Well, yes, it can be split, e.g. odd-numbered lines and even-numbered lines.

      Depending on the scene it won't always be a perfect split of the workload, but it should be pretty damn close.
    • Each card draws alternate scan lines, all the way down.
    • Graphics is easily parallelizable, and SLI is actually almost perfect: have one card draw half of the scanlines, and the other card the rest. True, T&L and other stuff must be replicated, but that's a negligible part of the work nowadays.
      • Are you sure you know what "T&L and other stuff" means? Transform & Lighting... nowadays, you need to think of it this way: Transform ~= Vertex Shader (vertex-level lighting is done here), Lighting ~= Pixel Shader. Given the advances in "T&L" with GPUs, do you really think that "that's a negligible part of the work nowadays"? So basically, because of the non-pixel-specific nature of Vertex Shading, each card needs to run the appropriate vertex program on each vertex that it might need to have
    • by caitsith01 ( 606117 ) on Monday August 09, 2004 @08:45AM (#9919547) Journal
      Don't they interlace the lines, with each card doing an alternate line?

      I think that this is actually a rare case where you can actually get close to 200% performance. For one thing, the job that is being done is very well understood and the cards need zero flexibility - hence they can write very specialised software that does one thing and does it very efficiently.

      For another thing, many of the common problems of parallel computing are caused by communications, and in the case of SLI the two 'nodes' do not need to communicate - the mothership (i.e. the CPU via the PCIe bus) does all the organisation and communicating, and even that is basically one-way, so there is very little in the way of latency-related issues. From a software point of view, the only real task is to shovel half the data one way and half the other way - significantly easier than, say, a system where you have to constantly send and receive data to a range of nodes operating at different speeds.

      I seem to recall that the Voodoo II (bless its zombie bones) was able to get near 2x performance in parallel.
      • Don't they interlace the lines, with each card doing an alternate line?

        That's the way the old Voodoo cards did it, but that's not how it works with the new nVidia cards; they just split the screen into two halves (I believe the actual size of each portion is dynamic, to allow for a more even workload between the cards when one portion of the screen is receiving more action than the other) and each card renders its own half.
  • by Anonymous Coward on Monday August 09, 2004 @08:35AM (#9919467)
    My first NVidia card was a GeForce 256, but then I upgraded to a GeForce 2. Later I bought a GeForce 4MX card which was actually slower than the GeForce 2 in my older system. Lately I've upgraded to a GeForce FX 5600... now a GeForce 6800 is the best, but they want me to buy a GeForce 6? I can't keep up with this shit. So my $250 GeForce FX 5600 card that I bought last year is no longer any good? It runs Battlefield 1942 alright, but now they're saying it's not good enough for Doom 3 which I just bought but haven't installed yet. Ugh. I suppose my Athlon XP 2400+ I built last year is now too slow as well?
    • Unless you're a total performance nutter, your CPU and graphics card will do just fine for the next 12 or so months. You should be OK with Doom III at medium detail and 1024x768 resolution.

      They are talking about a mid-range GeForce 6-series, most likely a '6600', i.e. the next generation version of your current card. I would relax and let the prices drop.

      Also, your CPU is more than adequate for the time being. Don't listen to these idiots - they probably have aerodynamic fins and fluorescent light tubes on
    • That's why it's called the "enthusiast market". It's for people who only care about top-notch performance, not price. Your GeForce FX 5600 is fine for Doom 3; I'm running it on a GeForce 4 Ti 4200. You will NOT, however, be able to turn it all the way up. If you paid any attention at all 4 years ago when they first announced Doom 3, you would've known that you would need a top-notch card at the time to play the game turned all the way up. Quit complaining.
  • by bigmouth_strikes ( 224629 ) on Monday August 09, 2004 @08:36AM (#9919476) Journal
    ...when you see the phrase "connect two $199 cards together" and say to yourself "Hey, that's a good value!".
    • ...when you see the phrase "connect two $199 cards together" and say to yourself "Hey, that's a good value!".

      Connecting two $199 cards together will probably give you 40% more performance than a $398 card, assuming that this new SLI has only about 10-20% overhead, so yes, it is a good value!
  • I'm out of it (Score:2, Insightful)

    by danamania ( 540950 )
    Since I don't keep up with things... Is PCI Express way better than AGP for bandwidth on graphics cards? If so, is there anything new planned from the AGP camp?

    I fear something like AGP EXTREME.
    • PCIe (Score:3, Informative)

      See this [arstechnica.com] article.
    • Yeah, I believe it's about twice as fast as AGP 8x. Not sure if there's anything planned to expand AGP at all, but the PCI Express stuff is pretty impressive.
    • Re:I'm out of it (Score:5, Informative)

      by Mornelithe ( 83633 ) on Monday August 09, 2004 @08:46AM (#9919560)
      I don't think there is an "AGP camp."

      PCI Express is a replacement for PCI and AGP on desktop class motherboards (I guess PCI-X might be better for servers, but I don't know).

      Its advantages are that it has switched uplinks, so, if I understand correctly, each device can have its maximum bandwidth to any other component. PCI shares its bandwidth between all devices.

      PCI Express 16x replaces AGP, and roughly doubles the bandwidth, I think. Then there's 8x, 4x, 2x and 1x for devices with lower bandwidth requirements. And you could probably expand to 32x if you really need more bandwidth than 16x. It's all about the number of "lanes" you devote to a card.

      Someone here has a link to an article on this stuff, in case you want a description from someone who actually knows what they're talking about.
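A quick sanity check on those bandwidth figures, using first-generation PCIe numbers (2.5 GT/s per lane with 8b/10b encoding, i.e. about 250 MB/s per lane per direction, versus roughly 2.1 GB/s peak for AGP 8x):

```python
# First-generation PCIe: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s
# per lane per direction. AGP 8x peaks around 2.1 GB/s for comparison.

PCIE_LANE_MBPS = 2500 * 8 / 10 / 8      # 250 MB/s per lane, per direction
AGP_8X_MBPS = 2133

for lanes in (1, 4, 8, 16, 32):
    mbps = lanes * PCIE_LANE_MBPS
    print(f"PCIe x{lanes:<2}: {mbps / 1000:.2f} GB/s "
          f"({mbps / AGP_8X_MBPS:.1f}x AGP 8x)")
# PCIe x16 works out to ~4 GB/s each way, roughly double AGP 8x.
```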
      • AGP 8x (Score:5, Informative)

        by caitsith01 ( 606117 ) on Monday August 09, 2004 @08:56AM (#9919624) Journal
        Am I right in thinking that most of the current crop of video cards don't really push AGP 8x at this stage? I seem to remember seeing some benchmarks where high end Radeons were not really that much faster on 8x vs 4x.

        At least it will give 'gamers' a chance to brag about how fat their bandwidth is, I suppose.
      • You're correct. Most reviews I saw around the 4X->8X transition would benchmark the Radeon 9700 Pro (then top of the line) and see about a 2% performance difference. The only major difference in AGP 8X was that the 3.0 (8X) spec actually supported two AGP slots on a board. Although I have seen old Compaq Deskpro boards that used a hardcore proprietary method of making this work (you WILL buy ONLY THESE PARTICULAR CARDS, which are pinned out unlike any other AGP card), I never saw any mobo tak
    • AGP was a stop-gap solution when computer makers realized that the PCI 2.0 bus didn't have the bandwidth required for graphics-heavy applications.

      AGP has always had a limited shelf life, and now that end is finally coming to pass. AGP will still be the primary stepping stone for a lot of people, though, and will eventually be "budget class" only.

  • What bothers me (Score:5, Insightful)

    by Anonymous Coward on Monday August 09, 2004 @08:36AM (#9919482)
    I'm all for advancing technology, but when it comes to video cards, it's all a matter of who can keep up with Microsoft's DirectX demands the best.

    Meanwhile, OpenGL, the industry-standard graphics library, is getting left behind because every video chip maker wants to show off how well it supports the GlibFlobber() DirectX 27 API.

    Won't someone please think of the industry standard instead of the proprietary (and very small market) "standards" of Windows?
    • I'd hate to burst your bubble, but your "very small market" is called the "Gaming Industry." Minus Sony and Nintendo, Microsoft is pretty much the only release platform for many mainstream (read: revenue-generating) games.

      There's a reason nVidia and ATI show off GlibFlobber27() at high FPS: it makes them money - lots of it. And, the money isn't just from their latest and craziest graphics card (That only the hardcore gamer would buy) - that money will also pour in from mobile devices, on-board graphics, a

    • Re:What bothers me (Score:3, Interesting)

      by NMerriam ( 15122 )
      Unfortunately, this is actually something GOOD for the video card companies (from a sales standpoint). Because the consumers need blazing DX speed, but the workstation market needs OGL, they can still charge a hefty premium for the better OGL support in a workstation version of the card, even though the hardware is 99% the same.
    • Re:What bothers me (Score:5, Informative)

      by be-fan ( 61476 ) on Monday August 09, 2004 @09:34AM (#9919932)
      OpenGL is doing just fine. I had lots of worries about it back in the DirectX 7 era, when Microsoft was rushing ahead and the ARB was dragging its ass with the standard, but those fears have since faded. OpenGL 1.3, 1.4, and 1.5 came out in quick succession, with each release maintaining feature parity with DirectX. Vendor support, from NVIDIA anyway, has been excellent, with new driver releases supporting new features within months of each updated standard.

      OpenGL is about to get a big overhaul for 2.0 (due out this year at SIGGRAPH, I think), and should compete well with the DirectX updates in Longhorn.
  • Maybe (Score:5, Funny)

    by Anonymous Coward on Monday August 09, 2004 @08:37AM (#9919489)
    so you can connect two $199 cards together for double the performance.

    Much like you can duct-tape two cars together for double the performance (but certainly not double the speed).
    • by caitsith01 ( 606117 ) on Monday August 09, 2004 @08:48AM (#9919570) Journal
      Moe: "And that's how, with a few minor adjustments, you can turn a regular gun into five guns." [applause]
    • by UserChrisCanter4 ( 464072 ) * on Monday August 09, 2004 @09:27AM (#9919876)
      I was hosting a LAN party at my place about a year or so ago. One of my coworkers showed up with his computer and another 512MB DIMM that he planned on installing before we got started.

      We balked. There's an unspoken rule that no hardware changes during the LAN unless necessary. Murphy's law simply looms too large. He ignored it.

      The case was a smaller mid-tower that he uses for LANs, and with a couple of hard drives and the associated cabling it gets pretty tight. As he's sliding the RAM into place, we hear a "plink." Shit. The RAM's in place, so he steps back to survey the situation. There's a capacitor sitting on the floor of the case. "Um, maybe it's one of those capacitors that's, you know, for show..." The computer throws a video error at post.

      We pull the card. Murphy's law has struck; it's a GeForce 5800 Ultra (the old dustbuster model), and a cap has sheared right off the card. I don't have a soldering iron in my apartment, so the coworker is preparing for an evening of staring over shoulders. That's when we break out the electrical tape. We give the card a good hard wrap with the tape to hold the cap in place, and...

      It works spectacularly. No crashes, no video glitches, no problem. In fact, it works for another month while he waits for the 5900 Ultra to release before exchanging the card. It led us to praise NVidia for the Unified ELectrical TApe architecture (ELTA), which we theorized could provide bootleg performance maintenance across the entire NVidia line, from the TNT2 up.
  • $199 (Score:5, Interesting)

    by Distinguished Hero ( 618385 ) on Monday August 09, 2004 @08:40AM (#9919511) Homepage
    From the article:
    Price points and product names weren't discussed

    So where did $199 come from?
  • Real DirectX 9 (Score:4, Insightful)

    by fostware ( 551290 ) on Monday August 09, 2004 @08:44AM (#9919546) Homepage
    All I want is DirectX 9 support in hardware, not the kludges the current NVs have. The GPU makers churn these things out so quickly, yet they can't keep up with an industry standard that's a year old...
    • Re:Real DirectX 9 (Score:4, Interesting)

      by molarmass192 ( 608071 ) on Monday August 09, 2004 @09:03AM (#9919681) Homepage Journal
      Then go petition MS to create and distribute cards that support their gd standard in hardware. I don't use Windows and have no interest in paying a fee to MS for having DX9 embedded into a card when I'll never be able to use it. If MS wants to pay for it, and it's a zero-cost addition for nVIDIA, and it doesn't adversely affect OpenGL performance, then it would be inconsequential to me whether it were included or not. Btw, what companies are in the consortium that controls the DirectX industry standard [opengl.org]?
    • Re:Real DirectX 9 (Score:5, Insightful)

      by canb ( 792889 ) on Monday August 09, 2004 @09:13AM (#9919771)
      Since when is DirectX the industry standard? I was under the impression that OpenGL is, whereas DirectX is the Microsoft standard.
    • I wish DX would die. Quickly. Then maybe I could get some games native to Linux...

      I know, I know. There are a few, but if everyone used OpenGL, it would be so much easier for them to port... right? That "Sorry, we used DirectX" excuse most game makers throw about drives me crazy.

      Why, yes, I *am* waiting for the release of the Linux Doom 3 binaries. :)
  • by TheSHAD0W ( 258774 ) on Monday August 09, 2004 @08:50AM (#9919579) Homepage
    ...But does this mean you have to load the same texture data into both cards in order to obtain this parallel processing? Isn't that rather inefficient?
  • by Anonymous Coward
    I just wish they would give some real, across-the-board benchmarks. I want to know whether it is going to give me enough additional FPS in nethack to make it worth the purchase. Would I have to get some exotic motherboard combo to make that happen?
  • How does this card compare to the 6800? I mean, at 200 bucks, that sounds awfully cheap compared to the 6800, which is $500. If this card is better, I will shell out to buy two of them and a new motherboard - and my friend who just got his 6800 this past week will be bitter (he paid 400 for his one card) :)
    -A
  • 6600 or 6800LE? (Score:5, Interesting)

    by Erwos ( 553607 ) on Monday August 09, 2004 @09:08AM (#9919723)
    What's weird is that nVidia already _does_ have a $200 variant of Geforce6 - the Geforce 6800LE. It's essentially a lower-clocked (GPU and RAM) 6800 with only 8 pipes (so, half of what the 6800GT/U has). One of the hardware sites did a review of it (t-break?), and it performed pretty nicely - almost always beat the 5950. It's supposedly only for OEMs, but that's never stopped the online vendors from selling a card.

    If they are indeed talking about a 6600, it's going to need to go under $170 to have any sales value whatsoever. SLI is nice and everything, but most people simply don't have PCIe mobos to take advantage of it, so it's going to be a non-issue for the next year and a half.

    Still, it'll be nice to see nVidia actually try to deliver a better price/performance ratio than ATI for once.

    -Erwos
  • Production (Score:5, Insightful)

    by Led FLoyd ( 803988 ) on Monday August 09, 2004 @09:15AM (#9919786)
    They might concentrate on getting their CURRENT high-end card (6800 Ultra) onto retail shelves instead of "pre-announcing" crap in the pipeline.
  • ... you know, the cheap card, the one with a simple cooling solution that doesn't need a molex connector for additional power and plays current-generation games more than acceptably. I like gaming a lot, but I can't afford $200-300 for each new gfx hardware generation. Say what you want about the FX5200, but for its price it can't be beaten.
  • by scotay ( 195240 ) on Monday August 09, 2004 @09:34AM (#9919926)
    NVDA has just reported a HORRIBLE quarter. Many are wondering what the F is going on with that company. This is a PR release. They need to say these things. They need to say they have native PCIe despite not a SINGLE OEM design win. They need to say 6800 volume will ramp up and product will be driven down to the low end. Will this actually happen? I have no idea, but this is the least I would expect NVDA to say in this horrible week for NVDA longs. ATI has really put the hurt on. The next 12 months should be pivotal for NVDA's future.
  • by AirP ( 99063 ) on Monday August 09, 2004 @09:37AM (#9919963)
    Replies from a website where people want more options in operating systems, but bitch about more options in hardware, just make me wonder if people simply want to bitch.
  • Prices? (Score:3, Interesting)

    by novakane007 ( 154885 ) on Monday August 09, 2004 @09:50AM (#9920074) Homepage Journal
    Does this mean we'll see a price drop for the GeForce line? I've been putting off buying a new card; I don't want to end up buying it a couple of weeks before a price drop.
  • by Xhargh ( 697819 ) on Monday August 09, 2004 @09:56AM (#9920130) Homepage
    Does it have low power consumption or does it include a nuclear powerplant?
  • by Benanov ( 583592 ) <brian,kemp&member,fsf,org> on Monday August 09, 2004 @10:02AM (#9920167) Journal
    "In a few days we're going to turn up the heat another notch."

    Translation: my computer's electricity bill and my winter heating bill just became synonymous. ;)
  • by Greyfox ( 87712 ) on Monday August 09, 2004 @10:23AM (#9920348) Homepage Journal
    flying-rhenquest died a couple of weeks back (the fan base may have noticed that the web page is down), so I upgraded to a system with an ATI X600 PCIE card. You can force the system to recognize it as a radeon for 2D, but apparently PCIE is not yet supported by the ATI proprietary driver nor the XFree86 radeon driver. Rumor has it the Nvidia proprietary drivers have PCIE support, but I haven't had any solid confirmation of that yet. So does anyone know for sure whether, if you drop this card into a Linux system, you'd be able to get 3D acceleration?
  • by sco_is_for_babies ( 726059 ) on Monday August 09, 2004 @03:25PM (#9923286)
    throw in a couple of 3 ft black candles. And you know, a baby goat.

"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson

Working...