
ATI & Nvidia Duke It Out In New Gaming War 208

geek_on_a_stick writes "I found this PC World article about ATI and Nvidia battling it out over paper specs on their graphics cards. Apparently ATI's next board will support pixel shader 1.4, while Nvidia's GeForce3 will only go up to ps 1.3. The bigger issue is that developers will have to choose which board they want to develop games for, or write the code twice--one set for each board. Does this mean that future games will be hardware-specific?"

  • by Tim Browse ( 9263 ) on Wednesday August 01, 2001 @01:57PM (#2061)

    There seems to be a large amount of confusion as to what this means, and some people seem to be jumping off the deep end (as usual), so here's an attempt to clear up some of the issues.

    (PS = Pixel Shader in the following points)

    • DX8 Pixel Shaders use the PS API. Part of this API is a definition of a limited assembly language.
    • A PS written for version X will run on drivers that support version Y if X <= Y - i.e. pixel shaders are backwards compatible.
    • When new versions of the PS API appear, they mostly add instructions, or extend the register set. Hence the backwards compatibility.
    • Hence any PS written for ps1.3 (e.g. a GeForce 3 card) will also run on a card supporting ps1.4 (e.g. ATI's new card).
    • The ps1.3 shader may not run as fast as it could on the ATI card, depending on what features of ps1.4 it could take advantage of.
    • If you try to create a PS on a gfx card that does not support PS, or does not support the minimum PS version required, then DX8 will not fall back to software to render the triangles. That would be madness - rendering would probably be an order of magnitude (or two) slower. The request to create the PS will simply fail. (NB. When using a vertex shader, DX8 can fall back to software for that, because it makes sense, and they have some reasonably fast software emulation for vertex shaders).
    • You don't have to choose whether you write for nVidia or ATI - you choose what level of PS (if any) you are going to support. You can choose to support 1.3 and 1.4 with separate code paths if you want, to get maximum throughput from ps1.4 cards.
    • Hope this makes things clearer.

      Pixel/Vertex shaders are an attempt to provide developers with low-level access while still maintaining the abstraction needed to support multiple sets of hardware.

      To be honest, compared to the issues of shader program proliferation due to the number/type of lights you have in a scene etc., this isn't that big a deal. You might as well complain that writing a PS that uses PS1.3 means that you're 'choosing' GeForce 3 over all the existing cards that don't support PS1.3. Or that when bump mapping was added to DX and you used it, you were choosing the cards that did bump mapping over those that didn't.

      DirectX is supposed to let you know the capability set of the gfx card, and allow you to use those capabilities in a standard way. The pixel shader mechanism is just another example of this at work.
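
      As a rough sketch of what that looks like in code (this assumes an IDirect3DDevice8 has already been created; the function name is just illustrative), you query the caps once and pick a code path:

        // Rough sketch: query the device caps once and pick a pixel shader
        // code path. Assumes a DX8 device has already been created.
        #include <d3d8.h>

        void ChoosePixelShaderPath(IDirect3DDevice8* device)
        {
            D3DCAPS8 caps;
            device->GetDeviceCaps(&caps);    // what does the driver support?

            if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
            {
                // ps.1.4 path (e.g. ATI's new part): collapse into fewer passes
            }
            else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
            {
                // ps.1.1-1.3 path (e.g. GeForce 3); these shaders also run
                // unmodified on 1.4 hardware, as per the points above
            }
            else
            {
                // no pixel shaders: fixed-function multitexture fallback
            }
        }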

      As ever with games development, you aim as high as you can, and scale back (within reason) when the user's hardware can't cope with whatever you're doing.

      Trust me, this is not news for games developers :-)

      Tim

  • by mike260 ( 224212 ) on Wednesday August 01, 2001 @01:03PM (#8443)
    That was insightful? Crikey.

    OpenGL is written for a UNIX environment, DX is for a Windows environment

    No. OpenGL is an API, with bindings on UNIX platforms, on the Mac, Win32, Linux, PSX2, XBox and so on. Pretty much all 3D hardware of note has an OpenGL driver.

    OpenGL does NOT change very much, which has both good and bad sides; for example, this thread discusses pixel shading, which is a feature OpenGL does not natively support.

    OpenGL does change a lot. Hardware vendors are free to add functionality via extensions [sgi.com], something they cannot do with D3D without going through Microsoft.

    Also, it does support what DX8 calls pixel shading. It exposes it through a quite different interface from DX8's (see here [sgi.com] and here [sgi.com]), one that much more closely represents what the hardware is actually doing.
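
    The mechanism is less exotic than it sounds; here's a rough sketch of the usual Win32 pattern for picking up a vendor extension (the entry-point typedef is written out by hand for illustration - glext.h supplies the real ones - and error handling is omitted):

      // Check the extension string, then fetch the entry point from the driver.
      #include <windows.h>
      #include <GL/gl.h>
      #include <cstring>

      // Hand-written typedef for illustration only.
      typedef void (APIENTRY *CombinerParameteriNVFunc)(GLenum pname, GLint param);

      bool LoadRegisterCombiners()
      {
          const char* ext = (const char*)glGetString(GL_EXTENSIONS);
          if (!ext || !strstr(ext, "GL_NV_register_combiners"))
              return false;                 // driver doesn't advertise it

          CombinerParameteriNVFunc combinerParameteriNV =
              (CombinerParameteriNVFunc)wglGetProcAddress("glCombinerParameteriNV");
          return combinerParameteriNV != 0;
      }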
  • by Francis ( 5885 ) on Wednesday August 01, 2001 @01:02PM (#9188) Homepage
    Hehehe, don't be silly.

    First of all, there are no pixel shaders in OpenGL. nVidia's extensions divide pixel shading into texture shaders and register combiners, which basically means "closer to the metal."

    What does that mean? Well, the pixel shader language is just a language. If the semantics are the same, the metal ends up doing the same thing.

    However, *more importantly* ATI is going to *copy* nVidia's existing OpenGL extensions. That's the only way to compete - you must support existing features.

    Don't believe me? They've already been doing this for years. Do a glGetString( GL_EXTENSIONS ); on any video card. Matrox, ATI, whatever. You're going to see a lot of NV_ (nVidia) specific extensions.
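
    A minimal sketch of exactly that check, for anyone who wants to try it (assumes a current GL context; the function name is made up):

      // Dump the space-separated extension list, flagging NV_ extensions.
      #include <cstdio>
      #include <sstream>
      #include <string>
      #include <GL/gl.h>

      void DumpExtensions()   // call with a current GL context
      {
          const char* ext = (const char*)glGetString(GL_EXTENSIONS);
          if (!ext)
              return;

          std::istringstream names(ext);
          std::string name;
          while (names >> name)
              std::printf("%s%s\n", name.c_str(),
                          name.compare(0, 6, "GL_NV_") == 0 ? "   <- nVidia-specific" : "");
      }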
  • Re:why arent.... (Score:2, Informative)

    by _Neurotic ( 39687 ) on Wednesday August 01, 2001 @09:06AM (#10647) Journal
    The Pixel Shader technology discussed here is in fact a part of DirectX8 (Direct3D). The issue isn't which API (in the general sense) but rather which version of a subset of an API.

    Not being a 3D programmer, I don't know whether the claim of vast differences in code is true. Can anyone shed light on this?

    Neurotic

  • by Anonymous Coward on Wednesday August 01, 2001 @09:07AM (#10648)

    Max Payne [maxpayne.com], for instance, was developed mostly with GeForce cards. This means that by choosing their standard developer hardware setup the developers are actually becoming hardware-dependent and are, in effect, saying that these cards are the ones that you should use to play their game.

    This is really no news.

    "Optimized for Pentium III" is what read on every possible piece of marketing material with the late Battlezone II.

    I would conclude that the hardware dependency of games goes far beyond just the graphics cards. Use this processor to get better results, use this sound card to hear the sounds more precisely, etc. It seems that the game industry has big bucks, and every hardware vendor wants to make sure that when the next big hit comes, everyone needs to buy their product in order to get that +15% FPS out of it.

  • by bribecka ( 176328 ) on Wednesday August 01, 2001 @09:10AM (#21700) Homepage
    opengl is an open standard with sourcecode and specification open to all

    OpenGL is an open standard, but the source code isn't open--there isn't even any source code! It's just a specification; each individual vendor must then implement according to that specification. For example, Nvidia makes an OpenGL implementation that is accelerated by their graphics cards, MS makes an implementation that is software only, and 3dfx made a mini-implementation at one point.

    I think maybe Mesa is open-source? Not sure. But the actual implementation inside the vendor's API is whatever they want, and is probably closed (see Nvidia). The only requirement is to follow the specification and the rendering pipeline properly (so transforms/shading/etc will be applied the same through any OGL implementation).

  • by Mercenary9 ( 461023 ) on Wednesday August 01, 2001 @03:38PM (#22144)
    I don't care about reducing the pass number.
    The framebuffer is only 8 bits per channel at most, while pixel shader hardware has higher internal precision per channel; keeping the math in the chip, as well as saving the read-back from the framebuffer, saves bandwidth AND improves quality.

    True per-pixel Phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses
    Pixel shaders enable arbitrary math on pixels; it isn't a fixed-function Phong equation with a few more variables added. Maybe an artist wants a sharp terminator, cel shading, a Fresnel term, or an anisotropic reflection.
    All of these are performed using 4D SIMD math operations, just like they were in 1.1: Add, Subtract, Multiply, Multiply-Add, Dot Product, Lerp, and Read Texture. But texture reads can now happen AFTER more complex math; before, only a few set math ops were possible during a texture read. It's all in the DX8 SDK, which anyone can download.

    Well that's great, but texture upload bandwidth can already be a significant bottleneck
    "Texture upload"? This isn't a problem: with DX8.1 cards having 64MB of memory for textures, why would developers be uploading textures per frame? If you are talking about texture reads by the pixel shader, this also isn't a bottleneck. Reading geometry from the AGP bus is the bottleneck.

    Artists won't draw bump-maps.
    Sure they will (heck, I do); look at any Xbox game, they are all over the place. They won't draw in vectors-encoded-as-colors; they'll draw height maps, which would be converted off-line into normal maps.
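
    A minimal sketch of that off-line conversion (the names and the scale parameter are illustrative; it assumes an 8-bit, tileable height map):

      // Height map in, vectors-encoded-as-colors normal map out.
      #include <cmath>

      struct RGB { unsigned char r, g, b; };

      void HeightToNormalMap(const unsigned char* height, RGB* out,
                             int w, int h, float scale)
      {
          for (int y = 0; y < h; ++y)
          for (int x = 0; x < w; ++x)
          {
              // central differences, wrapping at the edges (tileable map)
              float dx = (height[y*w + (x+1)%w] - height[y*w + (x+w-1)%w]) / 255.0f;
              float dy = (height[((y+1)%h)*w + x] - height[((y+h-1)%h)*w + x]) / 255.0f;

              // normal of the surface z = scale * height(x, y)
              float nx = -dx * scale, ny = -dy * scale, nz = 1.0f;
              float len = std::sqrt(nx*nx + ny*ny + nz*nz);
              nx /= len; ny /= len; nz /= len;

              // pack [-1, 1] into [0, 255]
              out[y*w + x].r = (unsigned char)((nx * 0.5f + 0.5f) * 255.0f);
              out[y*w + x].g = (unsigned char)((ny * 0.5f + 0.5f) * 255.0f);
              out[y*w + x].b = (unsigned char)((nz * 0.5f + 0.5f) * 255.0f);
          }
      }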

    I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong
    1.4 hardware can support any previous version, including DX7 fixed function blend ops.
    P.S.
    I design hardware for this stuff; I do know what I'm talking about.
  • by Gingko ( 195226 ) on Wednesday August 01, 2001 @09:12AM (#23995)
    First of all, a direct link [ati.com] to ATI's SmartShader tech introduction.

    I have a few disparate thoughts on this subject, but rather than scatter them throughout the messages I'll put 'em all in one place.

    ATI are attacking what is possibly the weakest part IMHO of DirectX 8 - the pixel shaders. Pixel shaders operate at the per-fragment level, rather than at the per-vertex level of vertex shaders, which were actually Quite Good. The problem with Pixel Shaders 1.1 is that, to paraphrase John Carmack, "You can't just do a bunch of math and then an arbitrary texture read" - the instruction set seemed to be tailored towards enabling a few (cool) effects, rather than supplying a generic framework. Again, to quote Carmack, "It's like EMBM writ large". Read a recent .plan of his if you want to read more.

    If you read the ATI paper, they don't really tell you what they've done - just a lot of promises, and a couple of "more flexibles!", "more better!" kind of lip-service. I don't care about reducing the pass number. Hardware is getting faster. True per-pixel Phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses. Well that's great, but texture upload bandwidth can already be a significant bottleneck, so I don't know for sure that artists are gonna be able to create and leverage a separate ka, ks, etc. map for each material. (I did enjoy their attempts to make Phong's equation look as difficult as possible)
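
    For reference, the equation they're dressing up is just this - a plain-C++ sketch of the per-pixel math, with ka/kd/ks coming from texture reads instead of per-material constants (Vec3 and the names are illustrative, not any particular card's shader model):

      // Classic Phong, evaluated per pixel. N, L, V are unit vectors
      // (normal, to-light, to-viewer); ka/kd/ks are per-texel values.
      #include <algorithm>
      #include <cmath>

      struct Vec3 { float x, y, z; };
      static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

      float PhongPixel(const Vec3& N, const Vec3& L, const Vec3& V,
                       float ka, float kd, float ks, float shininess)
      {
          float ndotl = Dot(N, L);
          if (ndotl <= 0.0f)
              return ka;                       // light is behind the surface

          // reflect the light direction about the normal: R = 2(N.L)N - L
          Vec3 R = { 2*ndotl*N.x - L.x, 2*ndotl*N.y - L.y, 2*ndotl*N.z - L.z };
          float specular = std::pow(std::max(0.0f, Dot(R, V)), shininess);

          return ka + kd*ndotl + ks*specular;
      }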

    True bump-mapping? NVidia [nvidia.com] do a very good looking bump-map. Adding multiple bump-maps is very definitely an obvious evolutionary step, but again, producing the tools for these things is going to be key. Artists won't draw bump-maps.

    Their hair model looks like crap. Sorry, but even as a simple anisotropic reflection example (which again NVidia have had papers on for ages) it looks like ass. Procedural textures, though, are cool - these will save on texture uploads if they're done right.

    What does worry me is that the whole idea of getting NVidia and Microsoft together to do Pixel Shaders and Vertex Shaders is so that the instruction set would be universally adopted. Unfortunately, ATI seem to have said "Sod that, we'll wait for Pixel Shader 1.4 (or whatever) and support that." I hope that doesn't come back to bite them. DirectX 8.0 games are few and far between at the moment, so when they do come out there'll be a period when only Nvidia's cards will really cut it (I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong) - will skipping a generation hurt ATI, given that they're losing the OEM market share as well?

    I dunno, this just seems like a lot of hype, little content.

    Henry

  • Re:How the hell? (Score:3, Informative)

    by The Vulture ( 248871 ) on Wednesday August 01, 2001 @11:52AM (#24741) Homepage
    Who would buy an ATI board? Well, I would. Not to fan the flames, but...

    I've owned a few video boards over the years, and have been constantly looking for a board that does both good 2D and 3D, and up until now, I haven't really found it. My Matrox Millennium (from about four years ago) did excellent 2D, no 3D. My Voodoo Rush had decent 3D for its time, but the 2D sucked (blurry image, and this was without a passthrough cable). That got replaced (after switching back to the Matrox) with an nVidia TNT Ultra. The 3D was pretty good, but the 2D was a bit blurry (I dumped the TNT when I spoke with nVidia and confirmed that they were not producing Open Source Linux drivers - I don't like liars too much). So, the TNT got replaced by a Matrox G400Max Dualhead - excellent 2D, the 3D was lacking somewhat.

    Just this weekend I purchased an ATI Radeon All-In-Wonder for $250. An excellent deal, since the 2D is nice and crisp, and the 3D rocks (for my purposes anyway). And, in 32-bit mode, it almost equals the GeForce 2 in performance.

    Plus, this board has excellent multimedia. I love the TV tuner, it's so much better than the Hauppauge I used it to replace, plus I can hook up all sorts of video input devices and record from them. Excellent on the fly MPEG compression. And of course, we can't forget the hardware DVD playback, which is outstanding.

    Also, like other people have said, let's not forget that the GeForce cards are still quite expensive.

    A friend of mine was telling me three years ago that ATI made great cards, and I scoffed at him. Looks like I owe him an apology.

    So, in conclusion, who would buy an ATI? How about somebody who wants a full-featured card that gives outstanding image quality? If you want pure frames per second, then buy your GeForce with its blurry, dim images, since they screw around with the palettes and overclock the chips to get those numbers that hardcore gamers seem to like so much.

    -- Joe
  • Re:ATI stinks (Score:2, Informative)

    by 4n0nym0u$ C0w4rd ( 471100 ) on Wednesday August 01, 2001 @08:48AM (#24885)
    Hmmm, first let me say I'm getting a GeForce 3 rather than an ATI Radeon for my new computer, so I'm not really biased... ATI Radeon DDR 64MB, $200... GeForce 3 DDR 64MB, $400... light-speed memory architecture, priceless. ATI isn't over-priced; they are very reasonably priced. If I wasn't a total performance junkie I'd be getting a Radeon instead of a GeForce 3, because the GeForce 3 is definitely overpriced.
  • Performance benefits (Score:5, Informative)

    by John Carmack ( 101025 ) on Wednesday August 01, 2001 @05:36PM (#30502)
    The standard lighting model in DOOM, with all features enabled, but no custom shaders, takes five passes on a GF1/2 or Radeon, either two or three passes on a GF3, and should be possible in a clear + single pass on ATI's new part.

    It is still unclear how the total performance picture will look.

    Lots of pixels are still rendered with no textures at all (stencil shadows), or only a single texture (blended effects), so the pass advantage will only show up on some subset of all the drawing.

    If ATI doesn't do as good of a job with the memory interface, or doesn't get the clock rate up as high as NVidia, they will still lose.

    The pixel operations are a step more flexible than Nvidia's current options, but it is still clearly not where things are going to be going soon in terms of generality.

    Developers are just going to need to sweat out the diversity or go for a least common denominator for the next couple years.

    I fully expect the next generation engine after the current DOOM engine will be targeted at the properly general purpose graphics processors that I have been pushing towards over the last several years.

    Hardware vendors are sort of reticent to give up being able to "out feature" the opposition, but the arguments for the final flexibility steps are too strong to ignore.

    John Carmack

  • Re:How the hell? (Score:1, Informative)

    by Zuchinis ( 301682 ) on Wednesday August 01, 2001 @09:27AM (#33441)
    Here [anandtech.com] is a part of a hardware review from Anandtech [anandtech.com] that compares GeForce3 cards for, among other things, 2D image quality. A Radeon DDR, a Matrox G450 eTV, and a GeForce2 MX card were used for reference in the test. Apparently, the Radeon DDR and G450 set a high standard for video card basics like high quality VGA filtering. If you read the (subjective) scores, you'll find that it is a rare GeForce3 card that can live up to the Radeon in 2D image quality, and no GeForce3 can match the G450.

    Isn't it nice to see the non-nVidia brands redeemed?
