ATI & Nvidia Duke It Out In New Gaming War
geek_on_a_stick writes "I found this PC World article about ATI and Nvidia battling it out over paper specs on their graphics cards. Apparently ATI's next board will support pixel shader 1.4, while Nvidia's GeForce3 will only go up to PS 1.3. The bigger issue is that developers will have to choose which board they want to develop games for, or write the code twice, one set for each board. Does this mean that future games will be hardware specific?"
Cutting through the hysteria... (Score:5, Informative)
There seems to be a large amount of confusion as to what this means, and some people seem to be jumping off the deep end (as usual), so here's an attempt to clear up some of the issues.
(PS = Pixel Shader in the following points)
Pixel/vertex shaders are an attempt to provide developers with low-level access while still maintaining the abstraction needed to support multiple sets of hardware.
To be honest, compared to the issues of shader program proliferation due to the number/type of lights you have in a scene etc., this isn't that big a deal. You might as well complain that writing a shader that uses PS 1.3 means you're 'choosing' the GeForce 3 over all the existing cards that don't support PS 1.3. Or that when bump mapping was added to DX and you used it, you were choosing the cards that did bump mapping over those that didn't.
DirectX is supposed to let you know the capability set of the gfx card, and allow you to use those capabilities in a standard way. The pixel shader mechanism is just another example of this at work.
As ever with games development, you aim as high as you can, and scale back (within reason) when the user's hardware can't cope with whatever you're doing.
Trust me, this is not news for games developers :-)
Hope this makes things clearer.
Tim
Re:direct x is not open, OpenGL is, we should use (Score:3, Informative)
OpenGL is written for a UNIX environment, DX is for a Windows environment
No. OpenGL is an API, with bindings on UNIX platforms, on the Mac, Win32, Linux, PSX2, XBox and so on. Pretty much all 3D hardware of note has an OpenGL driver.
OpenGL does NOT change very much, which has both good and bad sides; for example, this thread discusses pixel shading, which is a feature OpenGL does not natively support.
OpenGL does change a lot. Hardware vendors are free to add functionality via extensions [sgi.com], something they cannot do with D3D without going through Microsoft.
Also, it does support what DX8 calls pixel shading. It exposes it through quite a different interface than DX8's (see here [sgi.com] and here [sgi.com]); this much more closely represents what the hardware is actually doing.
Re:Why DirectX is better (Score:2, Informative)
First of all, there are no pixel shaders in OpenGL. nVidia's extensions divide pixel shading into texture shaders and register combiners, which basically means "closer to the metal."
What does that mean? Well, Pixel Shader language is just a language. How the metal reacts is the same, if the semantics are the same.
However, *more importantly* ATI is going to *copy* nVidia's existing OpenGL extensions. That's the only way to compete - you must support existing features.
Don't believe me? They've already been doing this for years. Do a glGetString( GL_EXTENSIONS ); on any video card. Matrox, ATI, whatever. You're going to see a lot of NV_ (nVidia) specific extensions.
Re:why arent.... (Score:2, Informative)
Not being a 3D programmer, I don't know whether the claim of vast differences in code is true. Can anyone shed light on this?
Neurotic
Games are already hardware-specific (Score:1, Informative)
Max Payne [maxpayne.com], for instance, was developed mostly with GeForce cards. By choosing their standard development hardware, the developers are actually becoming hardware-dependent and are, in effect, saying that these are the cards you should use to play their game.
This is really no news.
"Optimized for Pentium III" was printed on every possible piece of marketing material for the late Battlezone II.
I would conclude that the hardware dependency of games goes far beyond just graphics cards. Use this processor to get better results, use this sound card to hear the sounds more precisely, etc. It seems that the game industry has big bucks, and every hardware vendor wants to make sure that when the next big hit comes, everyone needs to buy their product in order to get that +15% FPS out of it.
Re:direct x is not open, OpenGL is, we should use (Score:1, Informative)
OpenGL is an open standard, but the source code isn't open--there isn't even any source code! It's just a specification; each individual vendor must implement it according to that specification. For example, Nvidia makes an OpenGL implementation that is accelerated by their graphics cards, MS makes an implementation that is software only, and 3dfx made a mini-implementation at one point.
I think maybe Mesa is open-source? Not sure. But the actual implementation inside the vendor's API is whatever they want, and is probably closed (see Nvidia). The only requirement is to follow the specification and the rendering pipeline properly (so transforms/shading/etc will be applied the same through any OGL implementation).
Re:Oh good. A pissing contest... (Score:4, Informative)
The framebuffer is only 8 bits per channel at most, while pixel shader hardware has higher internal precision per channel. Keeping the math in the chip, as well as saving read-back from the framebuffer, saves bandwidth AND improves quality.
True per-pixel phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses
Pixel shaders enable arbitrary math on pixels, it isn't a fixed function phong equation with a few more variables added. Maybe an artist wants a sharp terminator, cel shading, a fresnel term, or wants a anisotropic reflection.
All these are performed using 4D SIMD math operations, just like they were in 1.1: Add, Subtract, Multiply, Multiply-Add, Dot Product, Lerp, and Read Texture. But texture reads can now happen AFTER more complex math; before, only a few fixed math ops were possible during a texture read. It's all in the DX8 SDK, which anyone can download.
Well that's great, but texture upload bandwidth can already be a significant bottleneck
"Texture upload?" This isn't a problem: with DX8.1 cards having 64MB of memory for textures, why would developers be uploading textures per-frame? If you are talking about texture reads by the pixel shader, that isn't a bottleneck either. Reading geometry from the AGP bus is the bottleneck.
Artists won't draw bump-maps.
Sure they will (heck, I do); look at any Xbox game, they are all over the place. They won't draw vectors-encoded-as-colors; they'll draw height maps, which are converted off-line into normal maps.
I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong
1.4 hardware can support any previous version, including DX7 fixed function blend ops.
P.S.
I design hardware for this stuff; I do know what I'm talking about.
Oh good. A pissing contest... (Score:5, Informative)
I have a few disparate thoughts on this subject, but rather than scatter them throughout the messages I'll put 'em all in one place.
ATI are attacking what is possibly the weakest part, IMHO, of DirectX 8: the pixel shaders. Pixel shaders operate at the per-fragment level, rather than at the per-vertex level of the vertex shaders, which were actually Quite Good. The problem with Pixel Shaders 1.1 is that, to paraphrase John Carmack, "You can't just do a bunch of math and then an arbitrary texture read" - the instruction set seems tailored towards enabling a few (cool) effects, rather than supplying a generic framework. Again, to quote Carmack, "It's like EMBM writ large".
If you read the ATI paper, they don't really tell you what they've done - just a lot of promises and a couple of "more flexible!", "more better!" kinds of lip-service. I don't care about reducing the pass count; hardware is getting faster. True per-pixel Phong shading looks nice, but then all the extra they seem to do is allow you to vary some constants across the object via texture addresses. Well that's great, but texture upload bandwidth can already be a significant bottleneck, so I don't know for sure that artists are gonna be able to create and leverage a separate ka, ks etc. map for each material. (I did enjoy their attempts to make Phong's equation look as difficult as possible.)
True bump-mapping? NVidia [nvidia.com] do a very good looking bump-map. Adding multiple bump-maps is very definitely an obvious evolutionary step, but again, producing the tools for these things is going to be key. Artists won't draw bump-maps.
Their hair model looks like crap. Sorry, but even as a simple anisotropic reflection example (which again NVidia have had papers on for ages) it looks like ass. Procedural textures, though, are cool - these will save on texture uploads if they're done right.
What does worry me is that the whole idea of getting NVidia and Microsoft together to do Pixel Shaders and Vertex Shaders is so that the instruction set would be universally adopted. Unfortunately, ATI seem to have said "Sod that, we'll wait for Pixel Shader 1.4 (or whatever) and support that." I hope that doesn't come back to bite them. DirectX 8.0 games are few and far between at the moment, so when they do come out there'll be a period when only Nvidia's cards will really cut it (I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong) - will skipping a generation hurt ATI, given that they're losing the OEM market share as well?
I dunno, this just seems like a lot of hype, little content.
Henry
Re:How the hell? (Score:3, Informative)
I've owned a few video boards over the years, and have been constantly looking for a board that does both good 2D and 3D, and up until now I haven't really found it. My Matrox Millennium (from about four years ago) did excellent 2D, no 3D. My Voodoo Rush had decent 3D for its time, but the 2D sucked (blurry image, and this was without a passthrough cable). That got replaced (after switching back to the Matrox) with an nVidia TNT Ultra. The 3D was pretty good, but the 2D was a bit blurry (I dumped the TNT when I spoke with nVidia and confirmed that they were not producing Open Source Linux drivers - I don't like liars too much). So, the TNT got replaced by a Matrox G400Max Dualhead - excellent 2D, but the 3D was lacking somewhat.
Just this weekend I purchased an ATI Radeon All-In-Wonder for $250. An excellent deal, since the 2D is nice and crisp, and the 3D rocks (for my purposes anyway). And, in 32-bit mode, it almost equals the GeForce 2 in performance.
Plus, this board has excellent multimedia. I love the TV tuner, it's so much better than the Hauppauge I used it to replace, plus I can hook up all sorts of video input devices and record from them. Excellent on the fly MPEG compression. And of course, we can't forget the hardware DVD playback, which is outstanding.
Also, like other people have said, let's not forget that the GeForce cards are still quite expensive.
A friend of mine was telling me three years ago that ATI made great cards, and I scoffed at him. Looks like I owe him an apology.
So, in conclusion, who would buy an ATI? How about somebody who wants a full-featured card that gives outstanding image quality? If you want pure frames per second, then buy your GeForce with its blurry, dim images, since they screw around with the palettes and overclock the chips to get those numbers that hardcore gamers seem to like so much.
-- Joe
Performance benefits (Score:5, Informative)
It is still unclear how the total performance picture will look.
Lots of pixels are still rendered with no textures at all (stencil shadows), or only a single texture (blended effects), so the pass advantage will only show up on some subset of all the drawing.
If ATI doesn't do as good of a job with the memory interface, or doesn't get the clock rate up as high as NVidia, they will still lose.
The pixel operations are a step more flexible than Nvidia's current options, but it is still clearly not where things are going to be going soon in terms of generality.
Developers are just going to need to sweat out the diversity or go for a least common denominator for the next couple years.
I fully expect the next generation engine after the current DOOM engine will be targeted at the properly general purpose graphics processors that I have been pushing towards over the last several years.
Hardware vendors are sort of reticent to give up being able to "out feature" the opposition, but the arguments for the final flexibility steps are too strong to ignore.
John Carmack
Re:How the hell? (Score:1, Informative)
Isn't it nice to see the non-nVidia brands redeemed?