
ATI & Nvidia Duke It Out In New Gaming War

Posted by Hemos
from the who's-bringing-the-heavy-artillery dept.
geek_on_a_stick writes "I found this PC World article about ATI and Nvidia battling it out over paper specs on their graphics cards. Apparently ATI's next board will support pixel shader 1.4, while Nvidia's GeForce3 will only go up to ps 1.3. The bigger issue is that developers will have to choose which board they want to develop games for, or, write the code twice--one set for each board. Does this mean that future games will be hardware specific?"
  • by Dr_Cheeks (110261) on Wednesday August 01, 2001 @09:35AM (#464) Homepage Journal
    Just do what I do; utterly fail to save up for that latest bit of kit. Every game I've bought in the past year has supported my TNT2 M64 chipset. Sure, I can't render something like Shrek on my machine, but since I blew the graphics card money on beer I can't tell the difference.

    Hell, Half Life and Doom are barely distinguishable from each other if your beer-goggles are thick enough. And it doesn't matter if the frame-rate slows down thru lack of processing power - your reactions are already terrible from the booze.

    Yet again, beer is the cause of, and solution to, one of life's problems (thanks to Homer for the [slightly paraphrased] quote).

  • All hardware is never going to work with all software. It's simply too much to ask. But graphics card incompatibility is not new to the gaming world. I remember a few years ago when I had a Riva TNT board, and just about EVERYONE else had some 3dfx board. Some games only first came out with support for the 3dfx stuff, and then later released patches for Riva stuff, or Riva released a new driver that would work with the game. I'm fairly sure that was all basically software differences, not hardware like this is. But there can't be anything wrong with a little friendly competition. As long as the good new games are released and are able to support ALL of the major boards out there, I'm happy. They don't all have to be supported as soon as they hit stores, but delayed support is better than no support.
  • This sucks (Score:3, Insightful)

    by levik (52444) on Wednesday August 01, 2001 @09:14AM (#1431) Homepage
    Whatever happened to standards? Remember when things were "100% compatible"? IBM-PC compatible. SoundBlaster compatible. VESA compatible. Compatibility in hardware was nice, because you could take it and your software would work on any OS with a piece of compatible hardware without needing special drivers.

    Now the hardware industry has moved away from that, instead giving us free drivers for Windows, which are not only crappy in their first release, but also useless on other platforms which the vendor decides not to support.

    Bring hardware standards back, and MS will lose much of the power it's able to leverage through the high degree of hardware support their system provides. I for one would sacrifice a little technological progress for the ability to have things work together as expected out of the box.

    • Re:This sucks (Score:2, Insightful)

      by Francis (5885)

      Standards are always developed later. Maybe you fail to grasp how new this technology is. GeForce 3 was the first video card to support hardware vertex/pixel shaders. That was released 2 months ago.

      Remember when things were "100% compatible"? ... SoundBlaster compatible

      Do you really remember what those days were like, when sound cards *just* came out? You had to pick which sound card you wanted to lock your life into. Adlib? SoundBlaster? ELS? I can hardly remember anymore.

      ATI and nVidia *are* arguing about standards right now. They're working from a common frame. It's not that bad. You're just exaggerating the problem.

    • Compatibility in hardware was nice, because you could take it and your software would work on any OS with a piece of compatible hardware without needing special drivers.

      The problem with compatibility in hardware is that it wasn't. How many 100% VESA compatible devices do you remember that were actually 100% VESA compatible? "Oh? Your video card's VESA implementation is non-standard? Well, just BUY Display Doctor, and you'll be okay!" No, screw that. Not only did the standard hold back hardware, it didn't even do what it was supposed to.

      At least with a proprietary API such as DirectX, the inferior, crash-prone, nasty, closed Microsoft OS has one thing Unix still doesn't: support for all video cards right off the shelf, from day one. Most software works without any fight.

      Well, there are some video card vendors who seem to have trouble writing drivers even for a well established API; we'll leave those guys out of this since most of them are quickly dying or are now dead. Good riddance.

      I really don't think the way NVidia and ATI are going to add their own unique features is going to make THAT much of a difference. At best, some coders will take advantage of one or the other, and at worst the rest will ignore anything not built into DirectX and the extra features won't matter.

      Better to have features and not need them, than to need them and not have them.
  • by Tim Browse (9263) on Wednesday August 01, 2001 @01:57PM (#2061)

    There seems to be a large amount of confusion as to what this means, and some people seem to be jumping off the deep end (as usual), so here's an attempt to clear up some of the issues.

    (PS = Pixel Shader in the following points)

    • DX8 Pixel Shaders use the PS API. Part of this API is a definition of a limited assembly language.
    • A PS written for version X will run on drivers that support version Y if X <= Y - i.e. pixel shaders are backwards compatible.
    • When new versions of the PS API appear, they mostly add instructions, or extend the register set. Hence the backwards compatibility.
    • Hence any PS written for ps1.3 (e.g. a GeForce 3 card) will also run on a card supporting ps1.4 (e.g. ATI's new card).
    • The ps1.3 shader may not run as fast as it could on the ATI card, depending on what features of ps1.4 it could take advantage of.
    • If you try to create a PS on a gfx card that does not support PS, or does not support the minimum PS version required, then DX8 will not fall back to software to render the triangles. That would be madness - rendering would probably be an order of magnitude (or two) slower. The request to create the PS will simply fail. (NB. When using a vertex shader, DX8 can fall back to software for that, because it makes sense, and they have some reasonably fast software emulation for vertex shaders).
    • You don't have to choose whether you write for nVidia or ATI - you choose what level of PS (if any) you are going to support. You can choose to support 1.3 and 1.4 with separate code paths if you want, to get maximum throughput from ps1.4 cards.
      Hope this makes things clearer.

      Pixel/Vertex shaders are an attempt to provide developers with low-level access while still maintaining the abstraction needed to support multiple sets of hardware.

      To be honest, compared to the issues of shader program proliferation due to the number/type of lights you have in a scene etc., this isn't that big a deal. You might as well complain that writing a PS that uses PS1.3 means that you're 'choosing' GeForce 3 over all the existing cards that don't support PS1.3. Or that when bump mapping was added to DX and you used it, you were choosing the cards that did bump mapping over those that didn't.

      DirectX is supposed to let you know the capability set of the gfx card, and allow you to use those capabilities in a standard way. The pixel shader mechanism is just another example of this at work.

      As ever with games development, you aim as high as you can, and scale back (within reason) when the user's hardware can't cope with whatever you're doing.

      Trust me, this is not news for games developers :-)

      Tim
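The version-gating rule Tim describes (run the highest shader level the caps allow, and simply fail rather than fall back to software) can be sketched in a few lines. The function and path names below are invented for illustration; this is not the actual DirectX API.

```python
# Sketch of DX8-style pixel shader version gating (illustrative names,
# not the real DirectX API). A shader written for version X runs on a
# driver supporting version Y whenever X <= Y.

def pick_shader_path(supported, paths):
    """Return the best code path the card can run, or None.

    supported -- (major, minor) PS version reported by the driver caps
    paths     -- dict mapping (major, minor) -> name of a code path
    """
    usable = [version for version in paths if version <= supported]
    if not usable:
        # No adequate PS support: shader creation simply fails;
        # there is no software fallback for pixel shaders.
        return None
    return paths[max(usable)]

paths = {(1, 3): "ps13_path", (1, 4): "ps14_path"}
print(pick_shader_path((1, 4), paths))  # ATI-class card -> ps14_path
print(pick_shader_path((1, 3), paths))  # GeForce 3      -> ps13_path
print(pick_shader_path((1, 0), paths))  # no usable PS   -> None
```

Because ps1.3 shaders also run on ps1.4 hardware, a single (1, 3) entry already covers both cards; the second code path is purely an optimization.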

  • Legal crap: I work at ATI, but these are my own views, and don't reveal anything that we haven't stated publicly in a press release.

    New graphics hardware from both ATI and NVidia is being developed to support MS's DirectX spec. Game developers just need to pick the level of DirectX that they want to focus on with their engine (both companies' cards support pixel shaders up to v1.3; only ATI also has 1.4 at the moment).
    So you code it once, and it works on both hardware. If you really like the ATI cards and have some time to spare, you might hack in shader 1.4 support.

    As far as open source support is concerned, I believe both companies are making OpenGL extensions for these technologies available. Here you have a split in the standard, and it becomes harder to support multiple hardware with one piece of code. But the support is there, and it should be possible to have it run under any free (GNU-style and otherwise) system. As much as I don't like MS, DirectX is under more constant revision than OpenGL (from what I've seen), and does a better job of providing the features that a cutting-edge game developer would want to take advantage of.
  • Aren't they now? (Score:3, Interesting)

    by ajs (35943) <ajs&ajs,com> on Wednesday August 01, 2001 @08:50AM (#3840) Homepage Journal
    Every time I get a game, there's a short list of graphics devices supported on the box. I always hear about the development of this or that game, in terms of specific card features.

    Heck, I even remember Carmack talking on Slashdot [slashdot.org] about things like "Nvidia's OpenGL extensions" and other features of specific cards that he was having to take advantage of.

    Yeah, the new whiz-bang game will probably be able to limp along on whatever you've got, but likely will only be optimized for a few special cards.

    The video-card industry has gotten really awful. I hope that someone pulls it back in line and we get back on a standards track where card manufacturers contribute to the standards efforts and then work hard to make the standard interface efficient.
    • Yeah, the new whiz-bang game will probably be able to limp along on whatever you've got, but likely will only be optimized for a few special cards.

      That's the PC trade-off - it can grow as a platform, but that gives developers a moving and fractured target. If you don't like it then get a console, which has exactly the opposite characteristics.

      I hope that someone pulls it back in line and we get back on a standards track where card manufacturers contribute to the standards efforts

      Standards are for stable technologies. As soon as video card makers agree on a feature (e.g. multitexturing or texture compression), it gets standardised.
  • Can we say 3dFX?? How long was the Voodoo series considered the "premium gaming card" simply because most games had a couple of cheesy little effects (which were normally 2D) designed for the 3dFX engine? And you think that kind of practice will change? Of course not.
  • by LordZardoz (155141) on Wednesday August 01, 2001 @09:01AM (#6273)
    It's much like the choice to support AMD's 3DNow or Intel's SIMD instructions. If you use DirectX 8 or OpenGL, the issue is usually dealt with by the graphics library and the card drivers. Some bleeding edge features are initially only supportable by writing specific code, but that is the exception.

    END COMMUNICATION
    • It's much like the choice to support AMD's 3DNow or Intel's SIMD instructions.

      ..which are converging. The Palomino Athlon core now supports Intel's SSE opcodes as well as 3DNow, and it is promised that the Hammer will also support SSE2. One can only hope that Nvidia and ATI's pixel shaders can also be comfortably converged into a common interface (sounds like they pretty much will be in DirectX 8.1, hopefully it won't be long until there's a common ARB extension for them in OpenGL too).

      Some bleeding edge features are initially only supportable by writing specific code, but that is the exception.

      And in the case of 3D hardware, the bleeding edge features are sure to be used for extra "flash", not vital functionality. A game might have phong-shaded bump-mapped objects on a Radeon2, but it will still run with slightly less exciting graphics on your elderly TNT2.

  • by Anonymous Coward

    Max Payne [maxpayne.com], for instance, was developed mostly with GeForce cards. This means that by choosing their standard developer hardware setup the developers are actually becoming hardware-dependent and are all but saying that these cards are the ones you should use to play their game.

    This is really no news.

    "Optimized for Pentium III" is what you read on every possible piece of marketing material for the late Battlezone II.

    I would conclude that the hardware dependency of games goes far beyond just the graphics cards. Use this processor to get better results, use this sound card to hear the sounds more precisely, etc. It seems that the game industry has big bucks, and every hardware vendor wants to make sure that when the next big hit comes, everyone needs to buy their product in order to get that +15% FPS out of it.

  • If all these new whiz-bang features are implemented as extensions to the original API, then all the developer has to do (if he/she/they/it chooses to support the new features) is detect if the feature is available and add the code for it. If not, then just use the standard API.

    Eventually, all the other manufacturers will catch up with the new features, and the extension will become integrated into the standard.
    An analogy could be (think ye olde days) detection of a sound card, and only enabling sound if one is available.

    If even that is too much work for the developer, then just don't support the new extension - the graphics will look just as pretty to the untrained (read: consumer) eye.
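The detect-and-fall-back pattern this comment describes (like ye olde sound card detection) reduces to a couple of branches. This is a toy sketch; the capability name is made up for illustration.

```python
# Toy sketch of "check for the extension, else use the base API".
# The capability name "new_whizbang_ext" is invented for illustration.

def choose_render_path(card_caps, want_fancy):
    """Pick a render path given the card's reported capability set."""
    if want_fancy and "new_whizbang_ext" in card_caps:
        return "extended path"
    return "standard path"

print(choose_render_path({"multitexture", "new_whizbang_ext"}, True))  # extended path
print(choose_render_path({"multitexture"}, True))                      # standard path
print(choose_render_path({"new_whizbang_ext"}, False))                 # standard path
```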
  • Well, ok, there is a story here -- ATI vs NVidia (and perhaps by extension, Nintendo vs Microsoft).

    But what's with all of these doom & gloom posts about fragmenting games for specific hardware? There's already a ton of features that may or may not be available in Direct3D or OpenGL depending upon your underlying hardware and driver. In Direct3D these are known as 'capabilities', in OpenGL they are 'extensions'; in either API you can easily check for their existence.

    Game developers are already doing this for features such as dot3 bump mapping. Some boards support this feature in hardware, some don't, so your code is free to check to see if it's available and use it if it is, or ignore it (or fall back to some other method) if it's not.

    These shaders aren't really any different from that.. you write code to look at the shader version supported and either use 'new improved' shaders or 'older style shaders' depending upon the platform.

    Yes, it's more work for the programmers/artists to support a fallback mode, but that's the price of targeting cutting edge gaming hardware while still supporting users of older systems. It always has been and always will be.

    As to the dramatic question of ATI vs NVidia, I'd say that NVidia has the early advantage due to the XBox. Considering how similar the XBOX graphics system is to the PC GeForce 3, it's pretty much guaranteed that all of the major gaming engines being used to create most 'big' games these days will target GeForce3/XBOX features specifically, and features of 'other boards' (such as ATI) only as a bonus if there's enough time, or ATI lays down enough cash on a crossmarketing deal.

    Of course, if Microsoft manages to flub the XBOX release to a staggering degree, all bets as to the future are off.

  • Stanford have been doing some research on compiling RenderMan (pretty much the holy grail) shaders down to OpenGL. They can do it, in a lot of passes and with a couple of extensions.

    I have heard that there is work ahead to do this with Pixel Shaders. Once Pixel Shaders become sufficiently general, all you need to do is re-target the back end of the compiler and you're set.

    Henry
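The "re-target the back end of the compiler" idea above can be illustrated with a toy lowering pass: one high-level program, two instruction tables. The mnemonics and target names below are invented, not real shader assembly.

```python
# Toy sketch of re-targeting a shader compiler back end: the same
# high-level program is lowered to two hypothetical instruction sets.

HIGH_LEVEL = [
    ("mul", "r0", "lightColor", "texColor"),
    ("add", "r1", "r0", "ambient"),
]

# Per-target mnemonic tables (invented for illustration).
BACKENDS = {
    "targetA": {"mul": "mul", "add": "add"},
    "targetB": {"mul": "fmul", "add": "fadd"},
}

def lower(program, target):
    """Translate each high-level op via the target's mnemonic table."""
    table = BACKENDS[target]
    return [f"{table[op]} {dst}, {a}, {b}" for op, dst, a, b in program]

print(lower(HIGH_LEVEL, "targetA"))  # ['mul r0, lightColor, texColor', 'add r1, r0, ambient']
print(lower(HIGH_LEVEL, "targetB"))  # ['fmul r0, lightColor, texColor', 'fadd r1, r0, ambient']
```

The shader source is written once; only the mnemonic table (the back end) changes per target, which is the whole appeal of compiling a high-level language down to whatever the card speaks.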
  • by mfb425 (462506)
    The differences in hardware are not that big of a problem for next generation graphics engines. The amount of features and flexibility available now necessitate using a higher level shader language as opposed to hardware specific api features. A well designed shader language can be compiled to take advantage of whatever driver features you have available, and emulate or ignore the rest.

    We are currently able to target both pixel shader versions in DirectX, and hopefully soon in OpenGL. We are currently ignoring features not supported by the hardware that shader code tries to use. So rendering the shader surface on a GeForce1 will look much worse than on a full featured card, but we don't waste time emulating it.

    For reference on similar techniques check out Proudfoot et al., 'A Real-Time Procedural Shading System for Programmable Graphics Hardware'. (Though that's based on NVIDIA hardware, it's extendable to new features as well.)

  • Does this mean that future games will be hardware specific?

    Well, not long ago, we had 3dfx (and Glide) as the only option for 3D games.
    Even though there was other 3D hardware and there were other technologies (OpenGL and the rising Direct3D), Glide (and 3dfx) was the default choice.

  • Hardware Specific? (Score:1, Interesting)

    by Anonymous Coward
    "Does this mean that future games will be hardware specific?"

    Well, yes actually. Haven't they always been? We've had 3Dfx versus PowerVR, Glide versus OpenGL, Direct3D versus OpenGL...

    It goes all the way back to floppy versus CD, Win3.1 versus Win32s, 16 colours versus 256...

    Every game has system requirements (even if you're only talking about a scale like processing power), and always has done. I still remember the shock when I realised I'd need to get a tape drive to complement the disk drive in my CPC664, just to play some of the games!
  • is the level of compatibility there is. PS 1.4 is really just an extension of 1.3 -- it adds more instructions to the same basic architecture. If you write for GF3, it'll run just fine on the R200 or whatever. If you write for R200, it'll run just fine on nVidia's next part, even though it supports PS1.5. It's all back-compatible, I believe. It's sorta like the deal with two texture units vs three (GF2 vs Radeon) or single-pass quad texture with two units (GF3). Write for the lowest denominator you care to, it'll work fine on all the newer stuff.
    • >Write for the lowest denominator

      hmm, but it seems like game developers don't do that. There is a segment of gamers that are attracted to the newest hardware _because_ it has the latest features, and they then want to buy a game that uses that feature they just paid a $$$ premium for. Totally wrong priorities, but it seems to happen.

      Sure, write your game for the best compatibility across different hardware, but then you run the risk that PC Gamer magazine won't drool all over themselves in their review because the reviewer ran your demo on his rig with a GeForce XXI, but your game didn't have the latest 'cyclops, semi-transparent, half-inverse bump/pixel grinding' feature.

      A 14-year old reading pcgamer has no idea what this feature really does for him, but he knows that dad is getting him a GeForce XXI for xmas, so this game isn't going to be on his santa list.
    • Hmm, not really, PS 1.4 is fairly different to PS 1.3. The architecture has changed. Many of the instructions are the same but they're applied in a different way.
  • The bigger issue is that developers will have to choose which board they want to develop games for, or, write the code twice--one set for each board. Does this mean that future games will be hardware specific?"

    two points:

    1. games have always been hardware-specific. dx8 means only x86 compatible hardware & windows. but i assume you mean graphics-hardware, so to address that directly:
    2. yes, developers have to write code for one or the other pixel 'shader' api directly, thus excluding them from the other. this is extremely shrewd business - get developers locked-into your platform, then watch the other competitor's hardware perform more poorly, and eat their lunch too. it's how microsoft has done well, and it's what ati & nvidia are competing for right now. it's not the technology, it's the business of technology.
    3. ok, i lied, 3 points. bonus point. there are alternatives which are platform neutral, and higher-level. think of pixel-shading apis as writing in assembly. think of high-level shading apis as writing in c or c++. a better idea than writing in 'shader assembly' would be to write in a high-level language. SGI & stanford both have projects which are, at their core, hardware-agnostic:
  • Looks like we're back to the days of yore, when you (the developer) got to choose to support a specific card (3dfx or the others that didn't survive) because there was no DirectX support... because there was no DirectX. Then you (the consumer) got the shaft if you didn't have the right card, unless the developer later came out with a binary that would support your card's features. But if it wasn't an uber-popular game, this usually didn't happen.

    So why are Nvidia and ATI forcing developers to go back to the stone age of accelerated polygons? Oh that's right... Me likes pretty picture.

    • why use something like direct x when opengl is an open standard with sourcecode and specification open to all?

      It's scary that so many people are relying on M$'s proprietary graphics technology. At any time they could discontinue it, or change the API in such a way as to break all existing games. I wouldn't put it past them.

      subatomic
      http://www.mp3.com/subatomicglue [mp3.com]
      • opengl is an open standard with sourcecode and specification open to all

        OpenGL is an open standard, but the source code isn't open--there isn't even any source code! It's just a specification, and each individual vendor must implement according to that specification. For example, Nvidia makes an OpenGL implementation that is accelerated by their graphics cards, MS makes an implementation that is software only, and 3dfx made a mini-implementation at one point.

        I think maybe Mesa is open-source? Not sure. But the actual implementation inside the vendor's API is whatever they want, and is probably closed (see Nvidia). The only requirement is to follow the specification and the rendering pipeline properly (so transforms/shading/etc will be applied the same through any OGL implementation).

      • Have you ever used either one? IMO DirectX is a lot less of a pain in the ass to code with. It also supports more than OpenGL. Plus, most game designers aren't too concerned with other platforms seeing as the majority of their market runs Windows for gaming.
      • DirectX has full documentation freely available; also, DX doesn't only support accelerated graphics but the whole range of output and input devices such as joysticks, sound etc.

        OpenGL is written for a UNIX environment, DX is for a Windows environment. And yes, OpenGL is open source and very easy to learn, but still it has a lot of drawbacks, one of them being those dinosaurs that run it.

        OpenGL does NOT change very much, which has both good and bad sides; for example, this thread discusses pixel shading, which is a feature OpenGL does not natively support. I do not know how hard this is to implement in DX, but I figure that since they are even talking about it and not just dismissing it as some "toy" like the OpenGL board seems to do...

        • That was insightful? Crikey.

          OpenGL is written for a UNIX environment, DX is for a Windows environment

          No. OpenGL is an API, with bindings on UNIX platforms, on the Mac, Win32, Linux, PSX2, XBox and so on. Pretty much all 3D hardware of note has an OpenGL driver.

          OpenGL does NOT change very much, which has both good and bad sides, for example, this threads discusses pixel shading, which is a feature OpenGL does not natively supports.

          OpenGL does change a lot. Hardware vendors are free to add functionality via extensions [sgi.com], something they cannot do with D3D without going through microsoft.

          Also, it does support what DX8 calls pixel shading. It exposes it through a quite different interface to DX8 (see here [sgi.com] and here [sgi.com]); this much more closely represents what the hardware is actually doing.
        • Sorry, but we are going to need to use the bullshit filter here.

          OpenGL and DirectX don't compete with each other, the only comparison that you can really draw is between Direct3D (a small component of DirectX) and OpenGL. You can use OpenGL and DirectX in the same project, and many games do. There isn't anything better than DX when it comes to an interface for i/o devices, etc. on the windows platform. But as far as OpenGL and D3D, they are directly comparable, and there are minor trade-offs when you choose between the two.

          >OpenGL is written for a UNIX environment, DX is for a Windows environment.

          Okay, what is the point here? OpenGL was ported to the Windows environment long ago, and it runs fine, even better than on UNIX (for most workstations) because it has direct access to the hardware layer; UNIX (until very recently) had to go through the X protocol (nice and portable, but slow).

          >OpenGL does NOT change very much[....]

          True, the core of OpenGL doesn't change very much, but this is very good. With every release of D3D, the API changes drastically, so you must relearn it every time. OpenGL, on the other hand, doesn't change (much), but it has extensions that make it dynamic. The pixel/vertex shading on OpenGL has the same features that the D3D version has (supposedly it is better, if you believe opengl.org). So, by design, it doesn't matter if OpenGL "natively" supports shaders; the API was designed to be flexible and extendable.

          I am not saying that since OpenGL is "open" and extensible, the choice of what to use is a no-brainer; it is far from that. Some choose one or the other for many reasons: sometimes political/ideological, sometimes monetary (MS paid big bucks to early adopters of D3D), etc. It is far from an easy decision. The only good part about it: it doesn't really matter; you can do anything in one that you can do in the other, if you know the API well enough. And that includes pixel/vertex shaders, etc.
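The OpenGL extension mechanism these posts keep referring to amounts to checking whether a name appears in the space-separated string the driver reports via glGetString(GL_EXTENSIONS). A minimal sketch; the sample string is made up, though the extension names in it are real ones from this era.

```python
# Sketch of OpenGL extension detection: the driver advertises its
# extensions as one space-separated string, and applications check it.
# The sample string below is invented for illustration.

sample_extensions = ("GL_ARB_multitexture GL_NV_register_combiners "
                     "GL_EXT_texture_compression_s3tc")

def has_extension(ext_string, name):
    """True if the driver advertises the named extension."""
    # Split on whitespace so "GL_ARB_multitexture" doesn't false-match
    # a longer name containing it as a prefix.
    return name in ext_string.split()

print(has_extension(sample_extensions, "GL_NV_register_combiners"))  # True
print(has_extension(sample_extensions, "GL_ATI_fragment_shader"))    # False
```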
  • is 1.4 backward compatible? 1.3 and 1.4 are a direct3d thing, what about opengl?

    so what if we all just use opengl instead? open standard etc... it would definitely be worth it to pressure the ARB to extend their spec to shaders.... NVIDIA shader extensions would have to be used because the opengl ARB is very slow in adopting new standards (like pixel shading)

    subatomic
    http://www.mp3.com/subatomicglue [mp3.com]
  • by Gingko (195226) on Wednesday August 01, 2001 @09:12AM (#23995)
    First of all, a direct link [ati.com] to ATI's SmartShader tech introduction.

    I have a few disparate thoughts on this subject, but rather than scatter them throughout the messages I'll put 'em all in one place.

    ATI are attacking what is possibly the weakest part IMHO of DirectX 8 - the pixel shaders. Pixel shaders operate at the per-fragment level, rather than at the per-vertex level of vertex shaders, which were actually Quite Good. The problem with Pixel Shaders 1.1 is that, to paraphrase John Carmack, "You can't just do a bunch of math and then an arbitrary texture read" - the instruction set seemed to be tailored towards enabling a few (cool) effects, rather than supplying a generic framework. Again, to quote Carmack, "It's like EMBM writ large". Read a recent .plan of his if you want to read more.

    If you read the ATI paper, they don't really tell you what they've done - just a lot of promises, and a couple of "more flexible!", "more better!" kinds of lip-service. I don't care about reducing the pass number. Hardware is getting faster. True per-pixel Phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses. Well that's great, but texture upload bandwidth can already be a significant bottleneck, so I don't know for sure that artists are gonna be able to create and leverage a separate ka, ks etc. map for each material. (I did enjoy their attempts to make Phong's equation look as difficult as possible)

    True bump-mapping? NVidia [nvidia.com] do a very good looking bump-map. Adding multiple bump-maps is very definitely an obvious evolutionary step, but again, producing the tools for these things is going to be key. Artists won't draw bump-maps.

    Their hair model looks like crap. Sorry, but even as a simple anisotropic reflection example (which again NVidia have had papers on for ages) it looks like ass. Procedural textures, though, are cool - these will save on texture uploads if they're done right.

    What does worry me is that the whole idea of getting NVidia and Microsoft together to do Pixel Shaders and Vertex Shaders is so that the instruction set would be universally adopted. Unfortunately, ATI seem to have said "Sod that, we'll wait for Pixel Shader 1.4 (or whatever) and support that." I hope that doesn't come back to bite them. DirectX 8.0 games are few and far between at the moment, so when they do come out there'll be a period when only Nvidia's cards will really cut it (I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong) - will skipping a generation hurt ATI, given that they're losing the OEM market share as well?

    I dunno, this just seems like a lot of hype, little content.

    Henry
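For reference, the Phong equation this comment and its replies argue about is just a few multiply-adds per sample: an ambient term ka, a diffuse term kd scaled by N.L, and a specular term ks scaled by (R.V)^n. A minimal sketch, assuming scalar light intensities and pre-normalised vectors for brevity:

```python
# Minimal sketch of the Phong lighting equation:
#   I = ka*Ia + kd*Id*max(N.L, 0) + ks*Is*max(R.V, 0)^n
# Vectors are assumed pre-normalised; intensities are scalars.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(ka, kd, ks, n, N, L, R, V, Ia=1.0, Id=1.0, Is=1.0):
    return (ka * Ia
            + kd * Id * max(dot(N, L), 0.0)
            + ks * Is * max(dot(R, V), 0.0) ** n)

# Light and viewer head-on: every term contributes fully, so I is
# roughly ka + kd + ks.
print(phong(0.1, 0.7, 0.2, 8, (0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1)))
```

Varying ka/kd/ks per fragment, as the ATI paper proposes, just means fetching those coefficients from a texture map instead of passing them in as per-material constants, which is where the extra texture bandwidth worry comes from.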

    • I don't know for sure that artists are gonna be able to create and leverage a separate ka, ks etc map for each material

      This would be done more through code than by an artist. You only need to either write a shader to do it properly, or just assign a ka/kd/ks to each material, and that isn't exactly difficult. After all, in the real world, most surfaces have a pretty much constant reflectance function for the whole dang thing. Just look around... Yes, things have different *colors* across their surfaces, but the actual reflectance is usually the same.

      Artists won't draw bump-maps.

      Why not? They draw textures now, I'm sure they would have no problem drawing a bump map if need be. Besides, if there is support for procedural textures, you can just use those to generate a bump map.

      • This would be done more through code than by an artist

        True, but what you go on to say is that the Phong constants are indeed constant across a surface - then ATI saying 'oh look - you can programmatically change ka and ks' becomes useless because you won't need to change it. This assumes that you are working on a one material : one texture map correspondence. If, like their examples, you have say metal and stone on one map then varying some constants becomes necessary. But then this requires another map (or even two) at close to the resolution of the source diffuse map. You can do per-material ka/kd/ks now with no troubles at all. Per-fragment is a bit more involved.

        Why not? They draw textures now,
        I'm not an artist, so I don't know. But I don't know that the tools are there for them to draw bump-maps, and you have to admit that using an RGB channel as a three component normal vector can't be the most intuitive way to draw things. Much better to procedurally generate, like you say.

        Henry
        • you have say metal and stone on one map then varying some constants becomes necessary

          True enough. I'm thinking more in a high-end graphics environment rather than a gaming one, where that situation wouldn't come up very often really--it's just a way of being lazy and putting multiple objects into one--not sure if I'm being clear.

          But you're right, you would have to change those constants across a surface, especially in games, where I suppose surfaces might be merged together for optimization's sake.

          As far as the RGB/normal channel goes, I think most bump mapping is sufficiently done with just a grayscale-type image... much like a heightmap. Since bump maps inherently give the appearance of micro-geometry, some accuracy that might be achieved through an RGB bump map can be set aside for the sake of ease and speed (even if the speedup is in development!).

    • by Mercenary9 (461023) on Wednesday August 01, 2001 @03:38PM (#22144)
      I don't care about reducing the pass number.
      The framebuffer is only 8 bits per channel at most, while pixel shader hardware has higher internal precision per channel, keeping the math in the chip as well as saving read-back from the framebuffer saves bandwidth AND improves quality.

      True per-pixel phong shading looks nice, but then all they seem to do extra is allow you to vary some constants across the object via texture addresses
      Pixel shaders enable arbitrary math on pixels; it isn't a fixed-function Phong equation with a few more variables added. Maybe an artist wants a sharp terminator, cel shading, a Fresnel term, or an anisotropic reflection.
      All these are performed using 4D SIMD math operations, just like they were in 1.1: Add, Subtract, Multiply, Multiply-Add, Dot Product, Lerp, and Read Texture. But texture reads can happen AFTER more complex math; before, only a few set math ops were possible during a texture read. It's all in the DX8 SDK, which anyone can download.
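      As a rough software model of those SIMD ops (purely illustrative Python; this is obviously not how any driver or chip exposes them):

```python
# Toy software model of the 4D SIMD ops listed above (multiply-add,
# dot product, lerp). Real pixel shader hardware runs these per
# fragment at higher internal precision; this only shows the math.

def mad(a, b, c):
    """Multiply-add: a*b + c, componentwise on RGBA 4-vectors."""
    return tuple(x * y + z for x, y, z in zip(a, b, c))

def dp3(a, b):
    """3-component dot product (the usual op for per-pixel lighting)."""
    return sum(x * y for x, y in zip(a[:3], b[:3]))

def lerp(a, b, t):
    """Linear interpolation: a + t*(b - a), componentwise."""
    return tuple(x + t * (y - x) for x, y in zip(a, b))
```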

      Well that's great, but texture upload bandwidth can already be a significant bottleneck
      "Texture upload?" This isn't a problem: with DX8.1 cards having 64MB of memory for textures, why would developers be uploading textures per frame? If you are talking about texture reads by the pixel shader, this also isn't a bottleneck. Reading geometry from the AGP bus is the bottleneck.

      Artists won't draw bump-maps.
      Sure they will (heck, I do); look at any X-Box game, they are all over the place. They won't draw in vectors-encoded-as-colors, they'll draw height maps, which would be converted off-line into normal maps.
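      The off-line height-map-to-normal-map step can be sketched in a few lines. This is a minimal, hypothetical version using central differences; real tools add filtering and bump-scale controls:

```python
# Convert an artist-drawn grayscale height map into per-texel unit
# normals, then pack them into the 0-255 "vectors as colors" form.
# Central differences on a 2D grid; edges are clamped.

import math

def height_to_normals(h):
    """h: 2D list of heights in [0, 1]. Returns a grid of unit normals."""
    rows, cols = len(h), len(h[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            dx = h[y][min(x + 1, cols - 1)] - h[y][max(x - 1, 0)]
            dy = h[min(y + 1, rows - 1)][x] - h[max(y - 1, 0)][x]
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        out.append(row)
    return out

def encode_rgb(n):
    """Pack a unit normal into an RGB texel: [-1, 1] mapped to [0, 255]."""
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)
```

      A flat height map comes out as the familiar uniform blue-purple normal map, since every texel encodes the straight-up normal (0, 0, 1).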

      I don't think ATI have a PS 1.0 implementation, someone please correct me if I'm wrong
      1.4 hardware can support any previous version, including DX7 fixed function blend ops.
      P.S.
      I design hardware for this stuff, I do know what I'm talking about.
  • I believe (Score:3, Insightful)

    by RyuuzakiTetsuya (195424) <taiki@cUMLAUTox.net minus punct> on Wednesday August 01, 2001 @08:48AM (#24883)
    The Pixel Shader technology will be backwards compatible as far as the DirectX 8.0 API is concerned. Imagine that. Microsoft using an API to bring software developers together across various hardware choices. Now if only they could get Win32 cleaned up and a decent kernel, then I'd THINK about purchasing that OS. I'm not saying that there won't be card-specific code, but as far as Pixel Shader tech goes, as long as the drivers are DX 8 compatible, there's no problem with code for one card not working on the other. Besides, most systems sold in the last year have 810/810e/815E chipsets and are stuck with those old i740 Starfighter chips.
    • The Pixel Shader technology will be backwards compatible as far as the DirectX 8.0 API is concerned.

      But that doesn't necessarily mean that it will run at maximum performance in all conditions. Perhaps it is something like this:

      • if the game uses PixelShader 1.3, the nVidia runs at its maximum speed since it natively supports it. The ATI performs suboptimally.
      • if the game uses PixelShader 1.4, ATI performs optimally. But now the game uses features that the nVidia doesn't support, so DirectX uses software emulation for those features... bye bye high performance.
      • ..since DX is backwards compatible with such features. If DX 8 allows PS 1.4, it most certainly will allow PS 1.3, and the same functions will all be translated within DX 8.

        That's pretty much why DX 8 exists. Sure, you won't have all the features of 1.4, but your 1.3 will be zipping along at full speed regardless. It won't simply be tossed aside to software-emulation mode.
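        The fallback logic being described amounts to picking the highest shader path the card's caps can run natively. A hedged sketch (the version numbers are real DX8-era ones, but the path names and the caps-as-a-float shape are invented for illustration):

```python
# Pick the best pixel shader path a card supports natively, falling
# back to the fixed-function (pre-shader) path rather than software
# emulation. 'available_paths' maps shader version -> render path name.

def pick_shader_path(card_max_version, available_paths):
    usable = [v for v in available_paths if v <= card_max_version]
    if not usable:
        return "fixed_function"   # DX7-style blend ops, no shaders
    return available_paths[max(usable)]

# Hypothetical paths a game might ship:
paths = {1.1: "ps11_generic", 1.3: "ps13_geforce3", 1.4: "ps14_radeon2"}
```

        A GeForce3 reporting 1.3 gets the 1.3 path at full speed; only a card with no shader support at all drops to the fixed-function path.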

    • Re:I believe (Score:2, Interesting)

      The Pixel Shader technology will be backwards compatible as far as the DirectX 8.0 API is concerned. Imagine that. Microsoft using an API to bring software developers together across various hardware choices.

      Sadly, the situation is not unified in OpenGL yet, with both Nvidia and ATI providing their own separate extensions for accessing pixel shaders. One can only hope that it's not too long before we can get an ARB-approved extension that covers the capabilities of both cards.

      Of course, since it will be quite a while before games publishers can rely on people having a GeForce3 or Radeon2, I expect pixel shaders will only be used for optional flash for quite some time. If people are doing bump mapping and phong shading and so on using them, they'll certainly have the option to run in a slightly less attractive mode for those with lamer hardware.

  • Remember the past?
    3DFX only games came and went, then again so did 3DFX.
  • by be-fan (61476) on Wednesday August 01, 2001 @11:01AM (#25363)
    This story actually gives me a chance to bitch about OpenGL! None of these new features are a part of the standard OpenGL. "Extensions! Extensions!" you shout. However, due to the differences between hardware, you'll end up with ATI and NVIDIA versions of the same extensions, since the ARB won't touch such new/untested features. This makes sense in the pro segment, where hardware is slow to evolve, but in the consumer segment, it will make the API seem antiquated. Plus, the extension mechanism isn't even suited to such uses anyway, since it was meant to expose features, not radically different methods of rendering. And yes, these are radically different. Part of the reason that the GeForce3 has 57 million transistors is that it has to have a standard geometry engine for DirectX 7 and a new vertex shader-based geometry engine.

    In the long run, this will make OpenGL unpopular with game developers. Sure, guys like Carmack can afford to suck it up and develop to all the extensions, but a small development house that wants to make an impressive game will go with DirectX to save itself the development costs. And when they do, there goes the possibility of a native Linux port.

    Now there are two solutions to this. First, the ARB could get off their asses and start integrating extensions. This could be problematic for the pro segment, which wants a stable API. On the other hand, the ARB could fork OpenGL into a pro and a consumer version, but that results in two incompatible APIs. I think Microsoft is doing the right thing by supporting the latest features (in essence, baiting all the hardware manufacturers to integrate these features) but it *does* make DirectX unsuitable for pro use.
    • by Francis (5885)
      Hehehe, don't be silly.

      First of all, there are no pixel shaders in OpenGL. nVidia's extensions divide pixel shaders into Texture Shaders and Register Combiners. Which basically means: "closer to the metal."

      What does that mean? Well, Pixel Shader language is just a language. How the metal reacts is the same, if the semantics are the same.

      However, *more importantly* ATI is going to *copy* nVidia's existing OpenGL extensions. That's the only way to compete - you must support existing features.

      Don't believe me? They've already been doing this for years. Do a glGetString( GL_EXTENSIONS ); on any video card. Matrox, ATI, whatever. You're going to see a lot of NV_ (nVidia) specific extensions.
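      Checking the result of that call is just a token lookup on one space-separated string. A Python sketch (the extension names are real DX8-era ones, but the sample string here is made up; in a real program the string comes from glGetString):

```python
# Checking a GL_EXTENSIONS string for a vendor extension. Exact token
# match matters: a substring check would wrongly accept prefixes of
# longer extension names.

def has_extension(extensions_string, name):
    return name in extensions_string.split()

# Made-up sample of what a card might report:
sample = "GL_ARB_multitexture GL_NV_register_combiners GL_EXT_texture_env_combine"
```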
    • This story actually gives me a chance to bitch about OpenGL! None of these new features are a part of the standard OpenGL. "Extensions! Extensions!" you shout. However, due to the differences between hardware, you'll end up with ATI and NVIDIA versions of the same extensions, since the ARB won't touch such new/untested features.

      You are boring, you know? (And moderators couldn't spot a troll even if they were standing under a bridge.) We already went over this at least twice, and you are bringing out the same dried-up arguments again. You still haven't said what prevents NVIDIA and ATI and the ARB from sitting at the table and coming up with a uniform API for such an extension. Last time you cried, just like now, that the "hardware" is too different. Bullshit. Direct3D is no better. You just have to decide on a freaking API and let the vendors implement that. Read elsewhere in this discussion about programmers not supporting DirectX N yet. How's that better than OpenGL's extension mechanism? It's not better. You still have to rewrite stuff. The only difference is that some companies that will remain unnamed have placed stupid patents around interfaces (not features, interfaces!) and other companies have to come up with and implement their own. If you want to bitch at someone, bitch at the companies that patent interfaces and stop trolling about OpenGL not supporting current technology.

      • You still haven't said what prevents NVIDIA and ATI and the ARB from sitting at the table and coming up with an uniform API for such an extension.
        >>>>>>>>
        When they do, I'll be impressed. But they show no signs of it thus far. And I'll tell you what prevents them from working together: human arrogance, plain and simple.

        Last time you cried, just like now, that the "hardware" is too different.
        >>>
        Care to cite that?

        Bullshit. Direct3D is no better. You just have to decide on a freaking API and let the vendors implement that. Read elsewhere in this discussion about programmers not supporting DirectX N yet. How's that better than OpenGL's extension mechanism?
        >>>>>>>
        Because there are only two versions of DirectX ever in use. The current version, which is used by games already on the shelf (that would be 7.0) and the new version that is being used in games that are in development (that would be 8.0). Plus, DirectX is fully backwards compatible, which can't be said for all OpenGL extensions.

        It's not better. You still have to rewrite stuff.
        >>>>>>>>
        No you don't. If you go from DirectX 7.0 to DirectX 8.0, you have to rewrite 0 lines of code, since they are 100% backwards compatible. To go from NVIDIA's extension to ATI's extension, you might have to rewrite a good bit of code, depending on the exact semantics of the extension.

        The only difference is that some companies that will remain unnamed have placed stupid patents around interfaces (not features, interfaces!) and other companies have to come up with and implement their own.
        >>>>>>>>>>>
        You hit the nail on the head! "Interfaces, not features!" That's what OpenGL's extension mechanism misses. Two features with different interfaces might as well be two different features. With DirectX, every feature has exactly one interface: DirectX's. The same is not true for OpenGL.

        If you want to bitch at someone, bitch at the companies that patent interfaces and stop trolling about OpenGL not supporting current technology.
        >>>>>>>>
        OpenGL does support current technology, just not in a sane way. Extensions are bad design. Game developers don't like them, and users don't like them. The only people who like extensions are hardware manufacturers, because it allows them to lock people into using their products.

  • Does this mean that future games will be hardware specific?

    Well, no. Game developers do prefer the state of the art, but common sense dictates that you target something that exists and is popular.

    Comparisons to browser market shares are appropriate here: When Internet Explorer became the norm, web sites tended to take advantage of IE's superior DHTML and DOM support, but developers have mostly strived to make pages backwards-compatible with Netscape and other less capable browsers. After Mozilla caught up, most web sites still aren't targeting it specifically.

    Keep in mind that, according to the article, the board does not currently exist. One's desire to write custom code for a nonexistent board is contingent on several factors, such as the manufacturer's present and potential future market share.

    Case in point: Developers used to target Glide, 3Dfx' low-level rendering API. Games these days don't bother: 3Dfx has DirectX support, the effort to squeeze a few extra FPS from writing "straight to the metal" usually isn't worth the time and money, and most importantly, 3Dfx is dead. Its user base is dwindling, and there is no incentive to use the (generally) hardware-specific Glide over the generic DirectX.

    As for the development effort: As a former game developer and Direct3D user, I agree with the claim that when targeting both shaders, "they'll have to write more code". A few hundred lines, perhaps, for detecting and using the two extra texture shaders per pass. It's not like it's a new, different API.

    • While sanity reigns in this thread, may I also add:
      Who gives a fuck?

      Seriously, this is all quite a bit of petty whining about things that don't really matter. The difference will barely be noticeable except in hardware review screenshots, where it will become apparent that one square centimeter of the screen looks significantly better with one card than the other, if only for a split second, away from your focus.

  • I'm surprised at the lack of comments about platform support for these new features.

    If you own a GeForce3 *today*, you can access all of the hardware's features on Linux, Windows and Mac through OpenGL.

    I don't know about ATI's Mac support, but under Linux the Radeon drivers still don't support T&L, cube maps, 3D textures or all three texture units. The card has been available for well over a year, but the driver only enables Rage128-level features. How long do you think it's going to take for the pixel and vertex shader capabilities to make it into the Linux drivers? And what about the Mac?

    I've been extremely impressed by the balanced approach NVIDIA has been taking: they do a great deal of work on D3D 8 with Microsoft, but they simultaneously create OpenGL extensions for interesting hardware features, allowing Windows developers to target OpenGL, and also allowing alternate platforms to access the new features. Their OpenGL support surpasses any other consumer-grade hardware manufacturer's, and they offer better cross-platform support than any graphics company.

    The safest choice any game developer can make is NVIDIA.

    -Mark
  • Double code (Score:3, Interesting)

    by LightningTH (151451) on Wednesday August 01, 2001 @10:16AM (#29551)
    I'm one of the developers of the next version of the Genesis3D [genesis3d.com] game engine. We ran into this problem of what to support in an engine that is meant to push the latest cards to their limits.

    The simple answer was to write the common code in the main part of the engine then write multiple drivers for the engine that would use different features on different cards. This way we could push both cards and optimize the code for each card to get the best performance. Of course this is no easy task either.

    This is a pain, but if you wish to push what each card can do, you have to write code for each individual card or card maker (i.e. an nVidia driver and an ATI driver, then a third driver for everything else that the other two don't optimize for and run on).
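    The engine/driver split described above reduces to a small interface with one implementation per card family. A sketch in Python for brevity (class and method names invented here; Genesis3D's actual interface surely differs):

```python
# Common engine code talks to a small renderer interface; each card
# family gets its own implementation with card-specific fast paths.

class Renderer:
    def draw_lit_surface(self, surface):
        raise NotImplementedError

class NvidiaRenderer(Renderer):
    def draw_lit_surface(self, surface):
        # would use NV-specific extensions / PS 1.3 paths here
        return f"nv:{surface}"

class GenericRenderer(Renderer):
    def draw_lit_surface(self, surface):
        # least-common-denominator multipass path for everything else
        return f"generic:{surface}"

def make_renderer(vendor):
    return {"nvidia": NvidiaRenderer}.get(vendor, GenericRenderer)()
```

    The point of the pattern is that the engine core never branches on the card; only the factory does.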
  • Performance benefits (Score:5, Informative)

    by John Carmack (101025) on Wednesday August 01, 2001 @05:36PM (#30502)
    The standard lighting model in DOOM, with all features enabled, but no custom shaders, takes five passes on a GF1/2 or Radeon, either two or three passes on a GF3, and should be possible in a clear + single pass on ATI's new part.

    It is still unclear how the total performance picture will look.

    Lots of pixels are still rendered with no textures at all (stencil shadows), or only a single texture (blended effects), so the pass advantage will only show up on some subset of all the drawing.

    If ATI doesn't do as good of a job with the memory interface, or doesn't get the clock rate up as high as NVidia, they will still lose.

    The pixel operations are a step more flexible than Nvidia's current options, but it is still clearly not where things are going to be going soon in terms of generality.

    Developers are just going to need to sweat out the diversity or go for a least common denominator for the next couple years.

    I fully expect the next generation engine after the current DOOM engine will be targeted at the properly general purpose graphics processors that I have been pushing towards over the last several years.

    Hardware vendors are sort of reticent to give up being able to "out feature" the opposition, but the arguments for the final flexibility steps are too strong to ignore.

    John Carmack

  • I think the answer to that question is to rate just how flexible the current APIs can be. The two contenders (and please, let's try not to make MORE!) are OpenGL and DirectX. Nvidia has resurrected the venerable Amiga's idea of a fully programmable graphics processor, and I presume that ATI's post-Radeon chip will be similar.

    So, which API allows one to most easily get at the GPU's coding power? How many hooks does the high-level API have into the GPU's engine, and can the GPU get data from the API on the fly?

    If anyone out there has worked with them, I'd be curious to hear what's present or lacking from the standards, and if it's feasible to try and write GPU-level code abstractly.
  • Game development companies will write code to the lowest common denominator that allows them to turn their projected profit.

    If this means using ASCII on a VT100, that's what they'll do.
  • Deja vu. (Score:4, Insightful)

    by AFCArchvile (221494) on Wednesday August 01, 2001 @09:38AM (#40600)
    "Does this mean that future games will be hardware specific?"

    If so, it won't be the first time; remember the days of 3dfx? Original Unreal would only run on Glide hardware acceleration; if you didn't have a 3dfx card, you were forced to run it in software. Of course, this didn't sit well with the growing NVidia user base who consistently pointed out that Quake 2 and Half-Life both rendered on anything running OpenGL (including 3dfx cards; remember those mini-driver days?), and OpenGL and Direct3D renderers were finally introduced in a patch. That's about when 3dfx started to go down the toilet; delaying product releases and missing features (32-bit color and large texture support being two of the most blatant omissions) eventually tainted the 3dfx brand to the point of extinction.

    Since then, 3D gaming has been a less lopsided world. Linux gaming was taken seriously. Standardised APIs that could run on almost anything were the rule; if it wasn't OpenGL, it would at least be Direct3D. Then the GL extensions war heated up, with NVidia developing proprietary extensions that would work only on their cards. But this wasn't a problem; you could still run OpenGL games on anything that could run OpenGL; you'd just be missing out on a few features that would only slightly enhance the scenery.

    Leave it to Microsoft to screw it all up with DirectX 8. They suddenly started talking about pixel shaders and other new ideas. John Carmack has already described the shortfalls and antics of DX8 [planetquake.com]. And now 3D programmers will have to program for multiple rendering platforms, but at least you can still run it with anything.

    Sure, this entire disagreement between ATI and NVidia is bad for the 3D industry, but things could be worse. A LOT worse.

    • missing features (32-bit color and large texture support being two of the most blatant omissions)

      3dfx cards supported 256x256 textures. Are you talking about a texture larger than that for a single polygon? If you're not, you can simply use multiple textures for one object, as in the 8- and 16-bit world where a sprite was made of several smaller tiles.
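      The multiple-smaller-textures workaround boils down to tiling the big image. A hypothetical sketch of computing the tile rectangles (tile size defaults to the 3dfx limit of 256):

```python
# Split a large texture into 256x256 tiles. Returns (x, y, w, h)
# rectangles covering the source image; dimensions that aren't
# multiples of 256 produce smaller edge tiles.

def tile_rects(width, height, tile=256):
    rects = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            rects.append((x, y, min(tile, width - x), min(tile, height - y)))
    return rects
```

      A 512x512 source becomes four tiles; the mesh then needs UVs per tile, which is the real cost of the workaround.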

  • by Skynet (37427) on Wednesday August 01, 2001 @08:46AM (#43550) Homepage
    This is good for hardware because ATI and NVidia will continue to push the envelope, developing more and more advanced graphics boards. Features will creep from one end to the other, just staggered a generation.

    This is good for software because developers will have more choice in the hardware that they develop for. ATI doesn't support super-duper-gooified blob rendering? Ah, NVidia does in their new Geforce5. No worries, ATI will have to support it in their next generation boards.

    A bipolar competition is ALWAYS good for the consumer.
    • This is good for hardware because ATI and NVidia will continue to push the envelope, developing more and more advanced graphics boards. Features will creep from one end to the other, just staggered a generation.

      Eh, I don't know. I would compare this to the late 80's when computers were being developed by Amiga, IBM (and clones), Mac, Apple, etc...you had certain games/software that were available on a given platform and not the other, people just couldn't support multiple hardware configurations. As long as there are multiple companies producing competing products, is there really a reason they can't be compatible at the software level? Personally, I'd rather be able to look at a video card's features (memory, fps) than what games I'm going to be able to play with it.

      --trb

    • ...is that developers shouldn't HAVE to develop for specific hardware. I don't work in the game industry specifically, but I don't see how this is necessarily good for software in general, or graphics software in particular. This doesn't give developers "more choice in the hardware they develop for". It gives them less choice, because they have to decide how to allocate limited resources on a per-platform basis. When you have a common API, you're not forced to choose in the first place. That's why hardware-specific features and capabilities ought to be abstracted out into a common API. What these guys should do is come up with a dozen or so different kinds of high-level magic (e.g. water waves, flame, smoke, bullet-holes, whatever) that they can work with their pixel and vertex shaders, lobby Microsoft to get that magic incorporated into the DirectX spec, and then supply drivers that meet those specs by sending a few pre-packaged routines to the pixel/vertex shaders, rather than have game developers worry about this stuff directly. Or am I missing something?
      • Hmm, no, you're not missing anything, but I just don't think the situation is *that* bad.
        The article is very light on details, so who knows, but I really don't imagine it's about hardware compatibilities/incompatibilities. DirectX is effectively a standard; the fact is, one board will go UP TO this or that version.
        The higher the version your board supports, the better the game (should) run, with better and faster effects.
        Now when they say game developers will have to work separately for both cards, hmm... well, first, they always had to tweak their code so that it works with as many boards as possible. But anyway, I don't think it goes in the way of "we are going to do a game for this board, or this one, or both".
        It's more in the way of "we are going to stick to this version of DirectX instead of that one". The impact of this decision is that the game would run faster on one board or the other, or that they could or couldn't use some specific effects. They want this kind of super shading? OK: if they do it in software, it will be slow on both boards but everybody will see this nice looking shadow. If they choose the hardware way, maybe only one board will show it and it will be disabled for the other, so you'll have a better looking game for one board, and an overall faster game, at the cost of pissing off the ones that don't get the nice effect.
        But I don't see it going as deep as working completely focused on one board or another.

        Also, a lot of people say they should stick to the standards and all; this is sort of correct, but not at the cost of evolution.
        Will having a too-static standard allow better competition between boards, thus better prices, better performance, etc.? Yes and no.
        It will, but it will be very limited; the only way to improve the boards would then be, basically, to improve the brute force. It's much better to add new concepts, or new ways to represent the information so that it can be more efficient/realistic at (possibly) a lower cost. So, if you stick to this version of DirectX, you'll be able to draw x textures, and if you use that one, it will be y textures in 1 pass only.
        Also, instead of giving raw polygons to the board, you can give it quadric parameters (again, if the board supports it) or things like that.
        The game developer has to choose if he wants to do it this way or that other way, and of course it will have to do with the support there is in the board market, but even then, it is not "developing for this or that board".
      • No, you're not missing anything. At LAST! Somebody who understands the importance of proper abstraction. The mistake, it seems, is not one by nvidia or ati, but by Microsoft. If DirectX was designed soundly, it should be possible for the card manufacturers to both support the same API versions.

        I was getting worried - the majority of posters so far seem to be nutcases.
    • > This is good for software because developers
      > will have more choice in the hardware that they
      > develop for.

      Bullshit. That path leads straight to the darn old days where every game was board-specific.

      > ATI doesn't support super-duper-
      > gooified blob rendering? Ah, NVidia does in
      > their new Geforce5. No worries, ATI will have to
      > support it in their next generation boards.

      Wrong, this should be "ATI doesn't support super-duper-gooified blob rendering? Idiot, why did you buy that board in the first place? But no worries, NVidia has the new Geforce5, just spend 300+ bucks and get one. Unfortunately, this will break application Y, as only ATI has the new M.O.R.O.N. rendering which is required by Y. But hey, such is life!"

      > A bipolar competition is ALWAYS good for the
      > consumer.

      This is not "bipolar competition", this is "fragmentation".

      C. M. Burns
    • A bipolar competition is ALWAYS good for the consumer.

      You mean like when Netscape and IE were competing? In case you haven't noticed, HTML rendering between the two browsers hasn't exactly meshed.

      • You mean like when Netscape and IE were competing?

        Were? [mozilla.org]

        HTML rendering between the two browsers haven't exactly meshed.

        Most sites are designed around IE 5, but I see very few problems with Mozilla 0.9.x aka Netscape 6.1, except for some Really Stupid Sites(tm) that use VBS instead of ECMAScript [jsworld.com]. HTML is not designed to be a pixel-perfect layout language; it's a structural markup language. For layout use CSS, which supports pixel-perfect positioning and is supported in current versions of IE (5+) and Netscape (6+). Except for a few glitches in IE such as inserting an extra 3px of left and right margins into the CSS box property float: and treating a newline before </div> as whitespace (contrary to the SGML spec), Mozilla and IE look pretty much the same.

        I agree that Netscape 4.x sucks. Users can't turn off CSS (cascading stylesheets) without turning off CSS (client-side scripting), and the buggy implementations of the parts of CSS it does support will only make sites look ugly [misunderestimated.net].

    • How is this meant to be good for developers, or consumers? Developers now have three options:

      • Develop for NVidia based cards, which is slower if you have an ATI card
      • Develop for ATI based cards, completely ignoring the NVidia market
      • Develop for both, significantly adding to development effort

      This is also terrible for the consumer. Sorry, but that new card you just spent a small fortune on doesn't support the pixel shader version the game you want uses. Oh well, you'll just have to upgrade to the next card when it comes out, hope that's okay. But don't worry, it will have lots of new features too (which no one else's card will support).

      • Oh, but you forgot the fourth option:

        Say screw'em both and develop for neither, just using lowest common denominator stuff, and spend the saved time on improving the other parts of the game.

        If your game can't stand on its own using that... well, maybe, just maybe, it sucks?
    • You mean like Coke/Pepsi, where they essentially agreed to divide the market, and either acquired everyone else or drove them out of business? Whatever pretense of competition is left is according to their own "gentlemen's rules". They probably insist that they are competing fiercely, but I might argue that it's more like a fencing match with masks, padding, and the little balls on the tips of the foils.

      Imagine a sort of RIAA for soft drinks.
