NVIDIA Gives Details On New GeForce 6
An anonymous reader writes "According to Firingsquad, NVIDIA will be announcing a new GeForce 6 card for the mainstream market at Quakecon this week. Like the GeForce 6800, this new card will support shader model 3.0 and SLI (on PCI Express cards), so you can connect two $199 cards together for double the performance. NVIDIA will also be producing AGP versions of this card."
Imagine (Score:3, Funny)
*Ducks.
Re:Imagine (Score:5, Informative)
Contents in case of /. (Score:5, Informative)
So here's the content:
In last week's conference call ( http://www.corporate-ir.net/ireye/ir_site.zhtml?t ), Jen Hsun went on to say:
This mainstream GeForce 6 will be the only shader model 3.0 GPU in its class and deliver performance well beyond that of the competition. PCI Express support is native and AGP support will be provided through HSI, once again showing the versatility of the HSI strategy...sampling started in June, production is in full steam on TSMC's 110 nanometer process, with shipments to OEMs soon.
Price points and product names weren't discussed, but Jen Hsun also confirmed SLI support for this upcoming card, and also mentioned by the end of the year NVIDIA will have a top-to-bottom family of shader model 3.0 cards. In fact, he mentions "we're ramping 110 on two GeForce 6 families right now at TSMC, and very shortly we'll start a third...and this quarter we'll have five GeForce 6 GPUs in production, and that ought to cover us from top to bottom."
Re:Contents in case of /. (Score:2)
I can't wait for... (Score:4, Funny)
Seriously, can't they figure out a new name already?
Re:I can't wait for... (Score:2, Funny)
So yeah, the GeForce 27 will be kickass.
Re:I can't wait for... (Score:2, Funny)
In other news, "I can't wait for the Radeon 35750, it's going to be sooo much better. :-)
Seriously, can't they figure out a new name already?"
Re:I can't wait for... (Score:5, Funny)
Re:I can't wait for... (Score:2)
Re:I can't wait for... (Score:3, Informative)
Re:I can't wait for... (Score:3, Interesting)
Re:I can't wait for... (Score:3, Informative)
Re:I can't wait for... (Score:2, Funny)
Re:I can't wait for... (Score:4, Funny)
But this one goes to 11
Re:I can't wait for... (Score:3, Funny)
Seriously though, why should they? GeForce is an established brand name for NVidia; it's recognized world-wide. Why would they want to throw that away?
It's like saying: Coca-Cola has been original, 'new', classic, etc., but couldn't they call it something else? They've been making the same line of product for over 100 years now!
Thank you! (Score:5, Interesting)
Does anyone have any idea how many PCI Express ports this uses? It's my understanding that you have a total of 20 and most motherboards are allocating 16x to the video... will this card require 8x? Or do you need a special motherboard for this?
Anyone know?
Re:Thank you! (Score:4, Insightful)
Re:Thank you! (Score:2)
Both cards render, share the output with each other, then both cards apply antialiasing based on the result, then output their respective lines. Maybe only one card needs to output rather than having goofy cables.
Re:Thank you! (Score:3, Informative)
Re:Thank you! (Score:4, Informative)
The new one divides the screen up into two sections, I assume that if both cards are equally powerful then it will be 50:50 or thereabouts. I assume a little bit of overlap so that anti-aliasing and whatnot works correctly on the seam.
Then one card sends its generated half of the scene to the other, and they are merged and output to the display.
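As a rough illustration of the dynamic split described above (a hypothetical sketch, not NVIDIA's actual driver logic), the idea is just to move the dividing line toward whichever GPU finished its half faster on the previous frame:

# Hypothetical illustration of split-frame rendering (SFR) load balancing.
# Not NVIDIA's driver code: it just shows the idea of shifting the split line
# away from the GPU that took longer on the previous frame.

SCREEN_HEIGHT = 1080
OVERLAP = 8  # extra rows so anti-aliasing filters have context across the seam


def rebalance(split_row, time_top, time_bottom, step=16):
    """Move the split toward the slower GPU's region so the work evens out."""
    if time_top > time_bottom:
        split_row -= step   # top GPU is overloaded: give it fewer rows
    elif time_bottom > time_top:
        split_row += step   # bottom GPU is overloaded: give it fewer rows
    return max(OVERLAP, min(SCREEN_HEIGHT - OVERLAP, split_row))


def frame_regions(split_row):
    """Each GPU renders its half plus a small overlap band for the seam."""
    top = (0, split_row + OVERLAP)
    bottom = (split_row - OVERLAP, SCREEN_HEIGHT)
    return top, bottom


# Example: the bottom half has most of the action, so the split drifts down,
# giving the top GPU more rows and the bottom GPU fewer.
split = SCREEN_HEIGHT // 2
for time_top, time_bottom in [(9.0, 16.0), (10.0, 14.0), (11.5, 12.0)]:
    split = rebalance(split, time_top, time_bottom)
    print(frame_regions(split))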
Re:Thank you! (Score:4, Informative)
Re:Thank you! (Score:2)
What nVidia is talking about is SLI in the Voodoo2 sense.
Re:Thank you! (Score:3, Insightful)
From what I understood (when I read an article about it around, what, a month back?), yes, each card renders a separate portion of the screen, but the spiffy thing about this new implementation is that the ratio is dynamic; if there's a lot going on in one half of the screen and not much in the other portion, the under-utilized card starts rendering more of the screen to allow more focus on the "action-intensive" area by the other card.
Then again, I c
Re:Thank you! (Score:3, Insightful)
Re:Thank you! (Score:5, Informative)
Re:Thank you! (Score:3, Funny)
Re:Thank you! (Score:2)
Considering that the PCIe spec allows for ports at 32x, I don't think you've got a limitation of 20 channels on it. From what I've seen, the only reason you'd have any sort of limit is going to be chipset dependent.
As for needing a special mobo for it, you're going to end up with a high-end workstation board if you want a pair of 16x ports unless SLI sparks enough consumer demand to bring the feature down to consumer boards. Even then, it's probably only going to be found on relatively high-end k
Re:Thank you! (Score:5, Insightful)
From an economics point of view, it sounds pretty cool. Spend a few extra $$$ to get a top-of-the-line card. Then, in a year or two, pick up a second card when the prices are considerably lower, and you get 2x the performance without tossing hardware. Bitchin'.
Unfortunately, I wonder if that puts NVidia in an ugly place. It does set the bar for what the GeForce 7s have to do, at a minimum. But... that ain't bad for us, now is it?
Does it ever stop? (Score:5, Interesting)
Re:Does it ever stop? (Score:4, Insightful)
So long as game companies turn out new games that make existing systems cry for mercy, (and we choose to buy them) we will always need to buy newer video cards in order to stave off choppy video for another generation of games.
Same goes for CPUs... although much of the difference is that most of those people buying a GeForce 6 are gamers and will use most or all of the power at their disposal... I'd wager only a fraction of those using the latest and most powerful CPUs from AMD or Intel use them to their full potential.
No (Score:3, Insightful)
Considering these movies are using the absolute cutting edge of pre-rendered graphics technology, I would suggest we're still a decade or so from anything like 'real' looking PC graphics.
Re:Does it ever stop? (Score:5, Insightful)
It seems to me graphics hardware still has a long way to go. There are probably newer, more photorealistic models that have appeared since I studied computer graphics, as well. Virtual reality in its true form depends on audio and AI too, but a virtual visual (and perhaps audio) reality is probably on the horizon. AI is probably 15-20 years down the line (at least for something that stands a chance of passing a Turing test, IMO).
Re:Does it ever stop? (Score:3, Interesting)
Cockroaches gather information from their environment, process it, and produce output. You might not call that intelligence, as intelligence implies some sort of abstract reasoning, but you could also say that our "intelligence" is m
Same question for monitors (Score:3, Interesting)
Re:Does it ever stop? (Score:2)
It won't stop until we get to "holo-deck" technology - and while we may not have the force field effect at the very least we could have a cool visual. So when you play Doom 50 you will be in a special suit that simulates blows,
Re:Does it ever stop? (Score:4, Informative)
The human retina consists of 120 million rods (wavelength insensitive) for peripheral vision and 6 million cones (wavelength sensitive for red, green and blue) for central vision. To match the full capability of human vision, you'd need a 12000x12000 monochrome framebuffer covering a field of view of 170+ degrees, with a central 2000x2000 region with floating-point RGB colour, and it would have to update around 70 times/second.
Graphics cards and virtual reality headsets are slowly edging up to the resolution for central vision, since there isn't much demand to support peripheral vision resolutions.
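Taking the parent's figures at face value, a quick back-of-the-envelope calculation (illustrative only; the bytes-per-pixel assumptions are mine):

# Back-of-the-envelope figures based on the numbers in the parent post.
# Purely illustrative arithmetic; the physiology numbers are approximations.

peripheral_pixels = 12000 * 12000          # monochrome "rod" field
central_pixels = 2000 * 2000               # full-colour "cone" region
refresh_hz = 70

# Assume 1 byte per monochrome pixel and 3 x 4-byte floats for RGB colour.
bytes_per_frame = peripheral_pixels * 1 + central_pixels * 3 * 4
bandwidth_gb_s = bytes_per_frame * refresh_hz / 1e9

print(f"{bytes_per_frame / 1e6:.0f} MB per frame")
print(f"~{bandwidth_gb_s:.0f} GB/s just to scan the framebuffer out")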
Re:Does it ever stop? (Score:3, Interesting)
As someone else pointed out, the monitors may very well reach the limit that the human eye can resolve.
However, the computational problem of generating those pixels can at least in theory be arbitrarily difficult. If the problem of calculating certain pixels in certain situations is NP-complete then we may never be able to calculate them all in time. It remains to be se
Only $200? (Score:5, Interesting)
This should be interesting to see and good for competition to say the least.
Re:Only $200? (Score:2, Funny)
Re:Only $200? (Score:2)
Only two? (Score:5, Funny)
Re:Only two? (Score:3, Interesting)
BTM
Re:Only two? (Score:2)
Re:Only two? (Score:5, Funny)
I believe that PCI Express is, in fact, hot-swappable.
*Checks Google*
Yes, it is in fact hotplug/hotswap capable. I dunno how well your OS (*cough*Windows*cough*) will react to you unplugging the video card, though... I'm sure that Linux will have wonky support for it initially, eventually getting stable and usable support about the time PCI Express becomes obsolete...
Re:Only two? (Score:3, Funny)
So that's why I couldn't see anything, I forgot to mount my videocard!
Two cards == 2x performance (Score:5, Insightful)
Re:Two cards == 2x performance (Score:5, Informative)
depending on the scene it won't always be a perfect split of the workload, but it should be pretty damn close.
Doesn't work that way (Score:2)
Re:Two cards == 2x performance (Score:3, Informative)
Re:Two cards == 2x performance (Score:3, Interesting)
Re:Two cards == 2x performance (Score:5, Informative)
I think that this is actually a rare case where you can actually get close to 200% performance. For one thing, the job that is being done is very well understood and the cards need zero flexibility - hence they can write very specialised software that does one thing and does it very efficiently.
For another thing, many of the common problems of parallel computing are caused by communications, and in the case of SLI the two 'nodes' do not need to communicate - the mothership (i.e. the CPU via the PCIe bus) does all the organisation and communicating, and even that is basically one-way, so there is very little in the way of latency related issues. From a software point of view, the only real task is to shovel half the data one way and half the other way - significantly easier than, say, a system where you have to constantly send and receive data to a range of nodes operating at different speeds.
I seem to recall that the Voodoo II (bless its zombie bones) was able to get near 2x performance in parallel.
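To see why near-2x is plausible, a quick Amdahl's law sketch; the serial fractions below are made-up illustrative numbers, not measurements:

# If only a small fraction of the per-frame work is serial (command
# submission, merging the two halves), two GPUs land close to the ideal 2x.

def speedup(serial_fraction, n_gpus):
    """Classic Amdahl's law: 1 / (s + (1 - s) / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

for serial in (0.0, 0.02, 0.05, 0.10):
    print(f"serial fraction {serial:.0%}: {speedup(serial, 2):.2f}x with two GPUs")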
Re:Two cards == 2x performance (Score:3, Informative)
That's the way the old Voodoo cards did it, but that's not how it works with the new nVidia cards; they just split the screen into 2 halves (I believe the actual size of each portion is dynamic, to allow for a more even workload between the cards when one portion of the screen is receiving more action than the other) and each card renders its own half.
Re:Two cards == 2x performance (Score:3, Insightful)
Re:Two cards == 2x performance (Score:4, Insightful)
100% increase = 2x.
Re:Two cards == 2x performance - maybe! (Score:5, Informative)
FIRST OF ALL: THIS IS NOT "SLI".
Nvidia is simply leveraging the term to sell their version of the concept.
SECOND OF ALL: THIS IS NOT NEW.
In fact, every single consumer card that has attempted this in the past has been a failure.
** 3DFX Voodoo 2
The performance of a single Voodoo 2 was so good that people waited for prices to fall before buying a second Voodoo 2. Sales of the Voodoo3 also suffered heavily because, under many conditions, the Voodoo 2 SLI performed similarly. Thus, the long-term failure.
** Metabyte "SLI"
Shortly after 3DFX made "SLI" a household name, Metabyte developed a PCI-bridge technology that would split the framebuffer between ANY two cards and have them render in parallel.
Sound familiar? It should. There was one major drawback: both cards would have to operate in PCI mode, negating some of the advantages the newer AGP cards enjoyed. Metabyte tried to license the technology to TNT2 manufacturers, but none were interested...mainly because the upcoming GeForce 256 would make it obsolete overnight.
** ATI Rage MAXX
This card featured two chips each rendering a piece of the framebuffer, much like Metabyte's technology. This was simply an attempt by ATI to get some experience designing a parallel-processor architecture, and to take some wind out of Nvidia's GeForce 256 sails. Because the parallelization was on-card, it could function as a normal AGP card. Bad drivers and lack of Win2k/XP support killed this card.
** 3dfx VSA 100 (Voodoo 5 5500)
The VSA 100 was designed to be used in parallel in a fashion similar to the Rage MAXX. Although this card boasted many fancy features, it could not keep up in the performance race. 3dfx also found out how hard it is to make money when the chipsets on your cards cost roughly twice that of your competitors.
** Alienware "SLI"
Yes, this is basically Metabyte's concept, but the appearance of PCIe has made it a reality for high-performance cards. PCIe also makes it possible for this to be developed entirely in software (Metabyte's vision required an on-card bridge), so why the hell wouldn't they market it?
** Nvidia Geforce 6 with SLI
Two things are readily apparent about this latest attempt:
1. The card is not a flagship, high-margin card. It is simply designed to lock users in with a cheap Nvidia card now and an upgrade in the future.
2. Even in SLI mode, this combo won't exceed the performance of their top-end card, meaning Nvidia won't cannibalize upgrades for their next card like the Voodoo 2 SLI did.
So sure, Alienware and Nvidia look like they've got a winner on their hands...except that there aren't many PCIe motherboards with dual 16x slots. Oh well, yet another niche-market-product-turned-failure waiting to happen.
Confused with naming scheme (Score:3, Funny)
Re:Confused with naming scheme (Score:3)
They are talking about a mid-range GeForce 6-series, most likely a '6600', i.e. the next generation version of your current card. I would relax and let the prices drop.
Also, your CPU is more than adequate for the time being. Don't listen to these idiots - they probably have aerodynamic fins and fluorescent light tubes on
Re:Confused with naming scheme (Score:2, Informative)
You know you're a hardware junkie... (Score:5, Funny)
Re:You know you're a hardware junkie... (Score:2)
Connecting two $199 cards together will probably give you 40% more performance than a $398 card, assuming that this new SLI will only have about 10-20% overhead, so yes it is a good value!
I'm out of it (Score:2, Insightful)
I fear something like AGP EXTREME
PCIe (Score:3, Informative)
Re:I'm out of it (Score:2)
Re:I'm out of it (Score:5, Informative)
PCI Express is a replacement for PCI and AGP on desktop class motherboards (I guess PCI-X might be better for servers, but I don't know).
Its advantages are that it has switched uplinks, so, if I understand correctly, each device can have its maximum bandwidth to any other component. PCI shares its bandwidth between all devices.
PCI Express 16x replaces AGP, and roughly doubles the bandwidth, I think. Then there's 8x, 4x, 2x and 1x for devices with lower bandwidth requirements. And you could probably expand to 32x if you really need more bandwidth than 16x. It's all about the number of "lanes" you devote to a card.
Someone here has a link to an article on this stuff, in case you want a description from someone who actually knows what they're talking about.
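For the curious, the first-generation PCI Express numbers work out roughly like this (2.5 GT/s per lane with 8b/10b encoding gives 250 MB/s per lane per direction; the AGP 8x figure is its ~2.1 GB/s peak):

# Rough first-generation PCI Express bandwidth arithmetic (one direction).
# PCIe 1.x signals at 2.5 GT/s per lane with 8b/10b encoding, so 80% of the
# raw bit rate carries data: 2.5 Gbit/s * 0.8 / 8 = 250 MB/s per lane.

PCIE1_LANE_MB_S = 2.5e9 * 8 / 10 / 8 / 1e6   # = 250 MB/s
AGP_8X_MB_S = 2133                           # AGP 8x peak, roughly 2.1 GB/s

for lanes in (1, 4, 8, 16, 32):
    mb_s = lanes * PCIE1_LANE_MB_S
    print(f"x{lanes:<2}: {mb_s / 1000:.1f} GB/s per direction "
          f"({mb_s / AGP_8X_MB_S:.1f}x AGP 8x)")

A x16 slot comes out to about 4 GB/s each way, which is where the "roughly doubles AGP 8x" figure comes from.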
AGP 8x (Score:5, Informative)
At least it will give 'gamers' a chance to brag about how fat their bandwidth is, I suppose.
Re:I'm out of it (Score:2)
Re:I'm out of it (Score:2)
AGP has always had a limited shelf life, and now it's finally coming to pass. AGP will still be the primary stepping stone for a lot of people though, and will eventually be "budget class" only.
What bothers me (Score:5, Insightful)
Meanwhile, OpenGL, the industry standard graphics library, is getting left behind because every video chip maker wants to show off how well it supports GlibFlobber() DirectX 27 API.
Won't someone please think of the industry standard instead of the proprietary (and very small market) "standards" of Windows?
Re:What bothers me (Score:2)
There's a reason nVidia and ATI show off GlibFlobber27() at high FPS: it makes them money - lots of it. And, the money isn't just from their latest and craziest graphics card (That only the hardcore gamer would buy) - that money will also pour in from mobile devices, on-board graphics, a
Re:What bothers me (Score:3, Interesting)
Re:What bothers me (Score:5, Informative)
OpenGL is about to get a big overhaul for 2.0 (due out this year at SIGGRAPH, I think), and should compete well with the DirectX updates in Longhorn.
Re:What bothers me (Score:2)
Re: (Score:2)
Re:What bothers me (Score:2)
Maybe (Score:5, Funny)
Much like you can duct-tape two cars together for double the performance (but certainly not double the speed).
Oblig. Simpsons quote (Score:4, Funny)
Unified ELTA (Score:5, Funny)
We balked. There's an unspoken rule that no hardware changes during the LAN unless necessary. Murphy's law simply looms too large. He ignored it.
The case was a smaller mid-tower that he uses for LANs, and with a couple of hard drives and the associated cabling it gets pretty tight. As he's sliding the RAM into place, we hear a "plink." Shit. The RAM's in place, so he steps back to survey the situation. There's a capacitor sitting on the floor of the case. "Um, maybe it's one of those capacitors that's, you know, for show..." The computer throws a video error at post.
We pull the card. Murphy's law has struck; it's a GeForce 5800 Ultra (the old dustbuster model), and a cap has sheared right off the card. I don't have a soldering iron in my apartment, so the coworker is preparing for an evening of staring over shoulders. That's when we break out the electrical tape. We give the card a good hard wrap with the tape to hold the cap in place, and...
It works spectacularly. No crashes, no video glitches, no problem. In fact, it works for another month while he waits for the 5900 Ultra to release before exchanging the card. It led us to praise NVidia for the Unified ELectrical TApe architecture (ELTA), which we theorized could provide bootleg performance maintenance across the entire NVidia line, from the TNT2 up.
$199 (Score:5, Interesting)
Price points and product names weren't discussed
So where did $199 come from?
Real DirectX 9 (Score:4, Insightful)
Re:Real DirectX 9 (Score:4, Interesting)
Re:Real DirectX 9 (Score:5, Insightful)
Re:Real DirectX 9 (Score:2, Interesting)
I know, I know. There are a few, but if everyone used OpenGL, it would be so much easier for them to port.. right? That "Sorry, we used DirectX" excuse most game makers throw about drives me crazy.
Why, yes, I *am* waiting for the release of the Linux Doom 3 binaries.
Correct me if I'm wrong... (Score:5, Interesting)
Re:Correct me if I'm wrong... (Score:5, Informative)
Real world preformance (Score:2, Funny)
How does it compare (Score:2)
-A
6600 or 6800LE? (Score:5, Interesting)
If they are indeed talking about a 6600, it's going to need to go under $170 to have any sales value whatsoever. SLI is nice and everything, but most people simply don't have PCIe mobos to take advantage of it, so it's going to be a non-issue for the next year and a half.
Still, it'll be nice to see nVidia actually try to deliver a better price/performance ratio than ATI for once.
-Erwos
Production (Score:5, Insightful)
I'm waiting for the sub-$100 range one... (Score:2, Insightful)
I'll believe it when I see it (Score:5, Insightful)
Funny replies... (Score:4, Funny)
Prices? (Score:3, Interesting)
Power consumption... (Score:4, Insightful)
Literally, I bet. (Score:5, Funny)
Translation: my computer's electricity bill and my winter heating bill just became synonymous.
Actual PCIE drivers for Linux? (Score:3, Interesting)
should call it the GeForce666 and bundle Doom ]I[ (Score:3, Funny)
Re:Help (Score:2)
DirectX is the industry standard, where industry = everything within Microsoft's control.
Re:Help (Score:5, Informative)
The thing with DX is that it's aimed mostly at games, and, while full-featured, it's incompatible with everything else. OpenGL, much like D3D, is dedicated exclusively to graphics but can be ported much more easily, and it's (IMHO) overall a cleaner implementation. Both can coexist on a single machine (if you have a modern video card, that's most likely the case), but they are independent, requiring separate drivers and so on.
Re:Help (Score:4, Informative)
That's not true. This *was* true 2-3 years ago, but in that time the OpenGL ARB has been very quick to keep OpenGL competitive with D3D. 1.3, 1.4, and 1.5 all came out about a year apart, and 2.0 should be coming out this year. 1.5, which came out last year, supports pretty much everything out right now, including a full high-level shading language (GLSL).
Re:'Sigh, yet another shader-centric advance (Score:3, Informative)
DirectX 8.x and 9 offer both vertex and pixel shaders. A vertex shader takes 3D coordinates (and constants) as input and gives screen coordinates and other vars as output. Although usual
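For a concrete (if simplified) picture, here is a plain-Python sketch of what a vertex shader conceptually does; it is not GPU shader code, and the matrix and helper names are made up for illustration:

# Conceptual sketch (plain Python, not real shader code) of the vertex
# shader idea: transform an object-space vertex by a constant matrix and
# hand back screen coordinates plus any extra per-vertex outputs.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_shader(position, mvp, colour):
    """Return a clip-space position and pass the colour through, like a
    trivial GPU vertex program would."""
    x, y, z = position
    clip = mat_vec(mvp, [x, y, z, 1.0])
    return clip, colour

def to_screen(clip, width, height):
    """Perspective divide and viewport transform (done by fixed hardware
    after the shader runs)."""
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w
    return ((ndc_x * 0.5 + 0.5) * width, (1.0 - (ndc_y * 0.5 + 0.5)) * height)

# An identity "MVP" keeps the example simple; a real app would supply a
# model-view-projection matrix as a shader constant.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
clip, colour = vertex_shader((0.25, -0.5, 0.0), identity, (1.0, 0.0, 0.0))
print(to_screen(clip, 640, 480), colour)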