3D Mark 2003 Sparks Controversy

cribb writes "3DMark 2003 is out, sparking an intense debate on how trustworthy its assessment of current graphics cards is, after some harsh words by nVidia and the reply from Futuremark. THG has an analysis of the current situation definitely worth reading. The article exposes some problems with the new GeForce FX previously mentioned in a Slashdot article on Doom3 and John Carmack. Alas, there seems to be no end to the troubles with the new nVidia flagship." If you've run the benchmark, post your scores here, and we'll all compare.
  • by multipartmixed ( 163409 ) on Wednesday February 26, 2003 @03:41PM (#5389422) Homepage
    Now, it's the video card makers slagging the benchmark makers.

    Anybody remember the early 90s (93?) when Hercules got itself into hot water by hard-coding a super-fast result for the PC Magazine video benchmark? Whoo hoo, that made for some good press. Got their awards pulled and everything.
    • ATI did the same thing with their drivers and Quake. Article on it [hardocp.com]
      • by Anonymous Coward on Wednesday February 26, 2003 @04:16PM (#5389761)
        It was not the same thing. Hercules coded into the drivers the exact string that PC Magazine was using in one of the text rendering tests. This was a cheat because it allowed the card to bypass the font rendering and just blit the text to the screen.

        OTOH, ATI coded an optimized path into its drivers for Quake 3, something they do for several games and something which nVidia is also known to do. The reason it looked like a cheat is that there was a bug in the drivers that degraded texture quality significantly when a certain combination of options was enabled. The texture bug was fixed in the very next driver release without impacting performance. They also extended the optimizations to work with all Q3 engine based games.
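        For anyone wondering what an application-specific driver path even looks like, here is a purely hypothetical C++ sketch (not taken from any real driver; the executable names and profile fields are invented) of the pattern that makes people nervous: look up the running program and silently apply a per-game profile, which may or may not trade image quality for speed.

            #include <string>
            #include <unordered_map>

            // Illustrative only: a driver-style per-application profile table.
            struct Profile {
                bool reduce_texture_detail;   // the controversial part: quality quietly lowered
                bool reorder_shader_ops;      // the benign part: same output, faster schedule
            };

            Profile profile_for(const std::string& exe_name) {
                static const std::unordered_map<std::string, Profile> table = {
                    {"quake3.exe",   {true,  true}},   // hypothetical "optimized" game path
                    {"3dmark03.exe", {false, true}},   // hypothetical benchmark-specific tuning
                };
                auto it = table.find(exe_name);
                return it != table.end() ? it->second : Profile{false, false};
            }

        The Hercules case was the degenerate version of this: match the benchmark's exact test string and skip the real work entirely.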
        • I was against driver optimizations like that at first, but now I don't think it's so bad. Why isn't it a good thing that video card drivers are optimized so that popular games like that run faster? I mean, I'm a Quake/Unreal Tournament fan, and if ATI or Nvidia want to optimize their drivers so that those games run super-fast on their cards, that's cool with me as long as it adds to my game-playing experience.
          • It didn't just make it run faster. It turned the Visual Quality down below what you were asking for in order to make it run faster.

            If I ask for Highest Visual Detail in a game, I expect Highest Visual Detail. I don't expect the Video Card Drivers to internally decide that I really meant Pretty High Visual Detail so that it can run it faster.
          • But see, it didn't really optimise; all it did was turn the detail down. That's not optimising, that's changing settings. Sure, I'd have no problem if ATi put some code in that made their card run Quake 3 faster, all other things equal, but all it did was deliberately lower visual quality.

            Well that's crap. If I want to lower visual quality for better performance, let me choose to do so. Have options in the control panel that let me pick the tradeoffs.
          • if ATI or Nvidia want to optimize their drivers so that those games run super-fast on their cards, that's cool with me as long as it adds to my game-playing experience.

            When AMD's K6-2 processors were getting stomped by the Pentium II, AMD turned to 3DNow, leaning heavily on 3DNow-optimized Voodoo2 drivers and a 3DNow-optimized version of Quake 2. Anand's Monster 3D-2 review [anandtech.com] shows 3DNow improving a last-place 44 FPS to a competitive 76 FPS. Quake 2 played better because of the efforts of AMD and 3dfx. However, the results weren't representative, as the Turok and Forsaken benchmarks show.

            I played System Shock 2 on a Voodoo3. At the time, 3dfx had Quake 3 on the brain, struggling to tweak its drivers to keep up with the GeForce. Those efforts were small consolation to me, as each new driver release would break something in System Shock, like making the weapon model sporadically disappear.

            The problem with a marquee game like Quake is that it encourages shortcuts. The testing is done when Quake runs (a little faster). I, for one, am glad that Quake 3 put an end to the miniGL nonsense. Give me a card with decent, reliable performance in standard APIs like OpenGL and Direct3D. Put it this way: would you buy a TV that was optimized for Friends?

    • by pgrote ( 68235 ) on Wednesday February 26, 2003 @03:50PM (#5389515) Homepage
      Here is a great paper on the subject. The site is down, but Google has a cache of it.

      A quote:
      "Michael M, Editor-in-Chief of PC Magazine was looking at the executive report on the latest graphics benchmarks which were to appear in the June 29th issue. As he got deeper into the summary, his face took on a baffled look. He picked up the phone to call Bill M, Vice President for Technology, and asked him to come by his office with the detailed test results. Five minutes later, they were pouring over the data on Bill's laptop."

      Source:
      Hercules Cheating [216.239.53.100]
    • Next we'll have manufacturers making us accept a EULA before installing the drivers, one that forbids benchmarking their hardware. Sound familiar?
  • Well... (Score:4, Insightful)

    by Geekenstein ( 199041 ) on Wednesday February 26, 2003 @03:42PM (#5389430)
    It stands to reason that a benchmark should fairly and accurately depict the widest range of common capabilities possible to determine a clear winner. Of course, this can be very hard to do. It does seem in this case though that 3DMark got caught up in the whiz-bang marketing side of things by supporting the latest and greatest(?) features and ignoring the very compatibility that would give it any real meaning.

    Sorry guys, you goofed.
    • Re:Well... (Score:5, Informative)

      by robbyjo ( 315601 ) on Wednesday February 26, 2003 @03:48PM (#5389491) Homepage

      You should read Carmack's comment, which pretty much sums up the gist of the debate:

      The R200 path has a slight speed advantage over the ARB2 path on the R300, but only by a small margin, so it defaults to using the ARB2 path for the quality improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path. Half the speed at the moment. This is unfortunate, because when you do an exact, apples-to-apples comparison using exactly the same API, the R300 looks twice as fast, but when you use the vendor-specific paths, the NV30 wins.

      The reason for this is that ATI does everything at high precision all the time, while NVIDIA internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by NVIDIA, so when NVIDIA runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. NVIDIA assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.
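      For the curious, the "paths" Carmack is talking about get chosen at startup with logic roughly like the following. This is my own minimal OpenGL sketch, assuming a valid GL context; the extension names (GL_ARB_fragment_program, GL_NV_fragment_program) are the real ones behind the ARB2 and NV30 paths, but the selection code itself is illustrative, not Doom III's.

          #include <GL/gl.h>
          #include <cstring>

          enum RenderPath { PATH_ARB2, PATH_NV30 };

          // Prefer the vendor-specific NV30 path when its extension is exposed,
          // otherwise fall back to the generic ARB_fragment_program (ARB2) path.
          RenderPath choose_fragment_path() {
              const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
              if (ext && std::strstr(ext, "GL_NV_fragment_program"))   // naive substring check
                  return PATH_NV30;   // lower default precision, faster on NV30
              return PATH_ARB2;       // standard path, full precision everywhere
          }

      The quality/speed trade-off Carmack describes lives inside those paths: the ARB2 path runs everything at the hardware's full fragment precision, while the NV30 path lets the driver fall back to its cheaper formats.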

      • by El Camino SS ( 264212 ) on Wednesday February 26, 2003 @04:09PM (#5389710)

        He makes my head explode every time he talks video cards.

        Tell him Corky here can't handle this.
        • by Fembot ( 442827 ) on Wednesday February 26, 2003 @04:29PM (#5389856)
          Roughly, what he's saying is:

          If you just write a generic application, it will run twice as fast on the ATI card as on the GeForce FX.

          But if you write two applications to do the same thing and optimize one for the ATI card and the other for the nVidia card, then the nVidia card does better.

          So, performance-wise, nVidia appears to be relying on developers to optimise their applications specifically for the GeForce FX. And they'll probably get that, too, given their current market share.
      • Re:Well... (Score:5, Insightful)

        by WiPEOUT ( 20036 ) on Wednesday February 26, 2003 @04:47PM (#5390015)
        This reminds me of a certain other graphics vendor (now departed), who relied upon developers optimising specifically for their chipset. Then came a new entrant, who provided a chipset that outperformed it when using standard APIs like Direct3D and OpenGL.

        It's ironic that I'm referring to the then-incumbent 3dfx and to nVIDIA, where now it's nVIDIA expecting developers to optimise for its cards, while ATI makes sure their card is fast without specific optimisations.

        I hope nVIDIA sees the parallels and wakes up. I'd hate to see the heated competition in the graphics market come to an abrupt end because nVIDIA's arrogant assumptions about how developers should do their thing send it under.
        • Yes, I was just remembering how annoying it was to get the Voodoo2 card to run glquake back in the day (what? Glide drivers, glide minidrivers? wtf? wrong minidriver for this windows driver? baaaaah!).

          I surmise as you do that to The Average Consumer nVidia feels a bit big for its britches. It is unfortunate as I feel that nVidia has done and is doing The Right Thing in so many other areas.

          Cheers,
          -- RLJ

        • Re:Well... (Score:3, Informative)

          by p7 ( 245321 )
          But this is different. It's an apples-to-oranges situation. Essentially what Carmack has said is that when the NV30 runs in ARB2 mode it is doing 32-bit calculations while the ATI is running 24-bit calculations. Bandwidth alone will seriously affect the benchmarks, and theoretically the NV30 produces a more accurate picture. Now switch the NV30 to the NV30 path, which runs at 16 bit, and it beats the ATI, but now the ATI probably has better image quality. The problem is that this 'neutral' comparison isn't meaningful, since the two cards run at different settings. So when quoting 3DMark scores for ATI and NVidia, we don't know whether image quality is lower on one card, or whether, if you were OK with lower quality, the card with the lower 3DMark score is actually faster. In other words, no kidding the ATI runs faster when it has less data to shuffle, and then the NV30 runs faster when it has less data to shuffle.
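          A rough back-of-the-envelope illustration of the bandwidth point (my own arithmetic, not data from the thread): the cost of a 4-component shader value at each precision is

              $$4 \times 16 = 64 \text{ bits (FP16)}, \qquad 4 \times 24 = 96 \text{ bits (FP24)}, \qquad 4 \times 32 = 128 \text{ bits (FP32)},$$

          so forcing everything through full FP32 moves a third more data per value than ATI's FP24, and twice as much as FP16.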
        • Re:Well... (Score:3, Interesting)

          by Firehawke ( 50498 )
          All the more interesting, since by everything I'VE heard, the FX was designed by engineers formerly from 3DFX. It sure shows: it has all the hallmarks of bad 3DFX design. High heat, high power consumption, a brute-force approach, and a huge card.

          At this rate, unless Nvidia gets up off their collective asses and designs a card that can actually show superiority, they're going to lose. Okay, fine, the FX is a bit faster than the 9700 Pro... but I see that the 9700 Pro hasn't been pushed to ITS limits yet. I expect a newer revision of the Pro with a higher clock rate, and I really expect to see it blow away the FX.

          How can I be so sure of this? Because there are already pre-overclocked 9700 Pro cards on the market (at about the same price as the FX) that blow away the FX.

          I'm no ATi fanboy, but this doesn't look good for Nvidia.
    • Re:Well... (Score:3, Informative)

      by ad0gg ( 594412 )
      The 3DMark 2003 benchmark is for new features (DirectX 9, PS 2.0); if you want a more realistic benchmark you can always use an older version like 2001 SE, which covers DirectX 8 and PS 1.1.

      There is no game yet that uses DirectX 9.0 or Pixel Shader 2.0, but it sure is nice to see how these so-called DirectX 9.0 graphics cards actually perform. And it has really nice eye candy to boot.

  • by JoeD ( 12073 ) on Wednesday February 26, 2003 @03:44PM (#5389451) Homepage
    There are lies, damn lies, and benchmarks.
  • First post! (Score:3, Informative)

    by l33t-gu3lph1t3 ( 567059 ) <arch_angel16@ho[ ]il.com ['tma' in gap]> on Wednesday February 26, 2003 @03:44PM (#5389452) Homepage
    792 3DMarks.

    System:

    Geforce3Ti200 GFX
    AthlonXP1700 CPU
    256MB SDRAM
    ECS K7S5A Mainboard

    I don't like it. I'm gonna rely on actual game benchmarks when I compare my system's performance. Some good games to use:

    Quake3 (still scales nicely)
    UT2003 (the game sucks, but it's a decent CPU benchmark)
    C&C: Generals (don't know how it scales, but it cripples most computers)
    Doom3 (Will hopefully scale as well as Q3 when it comes out in 2 months)

    Synthetic benchies just aren't that reliable anymore...
    • 792 3DMarks

      That's about 2.5 inches then?

      -1 inch for trying to get a 'first post', sad bastard.
    • Re:First post! (Score:3, Insightful)

      by Sj0 ( 472011 )
      Holy crap! Well, it looks like it's back to ye olde drawing board for me. To think that only a few weeks ago my MX-460 was the tenth fastest video card on the slate...

      Oh well. It runs all my 5 year old crappy abandonware like a demon, that's all I care about! :)

    • Re:First post! (Score:3, Informative)

      by looseBits ( 556537 )
      Score 4821
      P4 2.26@2.9
      512 MB @227 MHz (DDR455) CAS 2
      Radeon 9700Pro
      Abit IT7-Max

      I love UT2003, run it at 1600x1200, max details.

      Castle Wolfenstein, which I run on an Apple ][ emulator, runs real well.
    • Synthetic benchies just aren't that reliable anymore...

      I agree. All I want to know is, is it going to improve the graphics enough to warrant the cost? I'd much rather read a collection of reviews where each person describes their system and how a specific game ran on it. For instance:

      My System:
      PowerMac g4 tower, 450 mhz
      384 mb RAM
      ATI Rage 128

      When playing Warcraft III, the single-player scenarios are playable with default options. A performance boost is noticeable if you set all the video options to low and turn off ambient sound. You have to do this in many online games. All in all, the game is enjoyable with this video card, but you can tell that it would really shine on, and probably was intended for, a faster one.

      Return to Castle Wolfenstein, however, is unplayable (to me). Even with minimum options (video and sound), the framerate is noticeably choppy. If you don't mind a little choppiness, you can deal with it, but I demand silky-smooth response from an FPS. I won't be playing the game until I get a better video card.

      This kind of review may be totally non-quantifiable, but if I found a reviewer with similar system specs, I would find it invaluable.
    • Re:First post! (Score:2, Informative)

      Ahem...
      8 3Dmarks.

      Specs:
      PIII 450mhz
      256mb SD-RAM
      Radeon SDR 32mb


      I actually have another computer (1.2GHz T-bird w/ GF4), but whenever I install DX 9 on it, it becomes unstable... so I haven't benched it on there yet.

      My jaw dropped when I saw that score... never thought I'd see one that low...
    • by 0biJon ( 593687 )
      107 3DMarks! woot!

      It must've been so fast it buffer overflowed!

      PIII 1GHz 512 PC133 RAM GeForceII MX w/64Mb RAM

      :-)

  • Cutthroat business (Score:5, Interesting)

    by Disoriented ( 202908 ) on Wednesday February 26, 2003 @03:45PM (#5389455)
    NVidia missed a manufacturing cycle and now it's coming back to haunt them. They really need to drop the FX and concentrate on whatever new architecture is currently being tossed around in R&D.

    Originally I was planning to buy the successor to the NV30 for a great experience in Quake III and better framerates in older games. But now it looks like I'll be laying out the dough on whatever ATI brings out early next year.
    • Next year? Try in a few months. ATI is almost finished with their production of the R350.
  • by corebreech ( 469871 ) on Wednesday February 26, 2003 @03:45PM (#5389456) Journal
    So long as my opponents are rendered as a red spot on the floor, I'm happy.
  • results and opinion (Score:5, Informative)

    by Anonymous Coward on Wednesday February 26, 2003 @03:45PM (#5389460)
    AMD Athlon1400C@1550
    512MB Samsung DDR, CL2@147FSB
    Geforce4ti4200, clocked@260core, 520memory

    a whopping: 1080 points.

    Did I mention that this benchmark makes *heavy* use of Pixel Shader 1.4, which no game otherwise uses? The exact version that Nvidia didn't implement in its GF4 Ti cards, which only have 1.1 and 1.3?
    Guess who has 1.4 - ATI does...

    You could also call this benchmark "ATIbench2003", but it was the same in 2000, when 3DMark2000 favored Nvidia cards over 3dfx simply because 3dfx lacked 32-bit color depth.

    Sheeeshh...
    • by Anonymous Coward on Wednesday February 26, 2003 @04:33PM (#5389897)
      Did I mention that if you had read Futuremark's rebuttal then you would see that there are valid reasons for using PS 1.4?

      PS 1.2 and 1.3 do not offer any performance enhancements over PS 1.1, but PS 1.4 does. Also, any card that supports 2.0 pixel shaders will also support 1.4. The test does a pretty good job of showing the performance difference between cards that support more features.

      As for there being no games that support PS 1.4, straight from Beyond3D:

      Battlecruiser Millenium
      City of Heroes (OpenGL)
      Deus Ex 2
      Doom III (OpenGL)
      Far Cry
      Gun Metal
      Independence War 2 via patch
      Kreed
      Legendary Adventures
      Neverwinter Nights (OpenGL) via water patch
      New World Order
      Sea Dogs II
      Stalker
      Star Wars Galaxies Online
      Thief 3
      Tiger Woods 2003
      Tomb Raider: Angel of Darkness
      UT2003

      You must come from a different universe where zero = several. The fact is that nVidia could have implemented PS 1.4 if they had wanted to, instead of just releasing a rehashed GF3 as the GF4 series. They didn't. Tough sh*t.
    • by grung0r ( 538079 )
      The 3dfx 3DMark2000 issue was completely different. At the time, most games did have support for 32-bit color (and it could be implemented with little to no performance hit on nVidia cards, no less), and the fact that 3dfx lacked it was a major disadvantage of 3dfx cards; they deserved to be docked for it. The current situation isn't about the GeForce 4 at all. nVidia doesn't care that it performs badly, since this is supposedly a DirectX 9 benchmark and the GeForce 4 is not a DirectX 9 part. What nVidia is upset about is that their brand new part, the FX (a DX9 part), only performs at parity with the 9700 Pro, which is a six-month-old card. So the real question is, does nVidia have a point? IMO, not really. While I agree that 3DMark 2003 has some strange rendering techniques, and that they are probably biased towards the 9700, this wouldn't have happened if nVidia hadn't dropped its 3DMark subscription, which they did because, no matter what, they couldn't make the FX beat the 9700 by any particularly great margin. The only way to save face was to drop the subscription and cry foul. Does that make 3DMark2003 any more legit? Nope. But it does explain why nVidia is so pissed off. Their new part just isn't up to snuff.
  • by grub ( 11606 ) <slashdot@grub.net> on Wednesday February 26, 2003 @03:45PM (#5389465) Homepage Journal

    My Apple ][+ doesn't have enough disk space to download this program. Can someone help me out?
    • Turn over your floppy disk and you'll have double the capacity.
    • by madmancarman ( 100642 ) on Wednesday February 26, 2003 @04:52PM (#5390063)
      My Apple ][+ doesn't have enough disk space to download this program. Can someone help me out?

      I know you're just kidding, but something eerily similar happened a couple of years ago when I volunteered for Apple Days at the CompUSA in Cincinnati for the Mac OS 8.5 release, back when the iMac still came in only one color. That morning, an odd-looking couple came in to look for some software. The volunteers were really there to talk about (sell) Mac OS 8.5, but we ended up spending most of our time helping people look for Mac peripherals and software (at least we got a free legit copy of 8.5!).

      The couple had just purchased an "Apple" at a garage sale (a red flag) and were asking me questions about what sort of software they could buy for it. The guy picked up a copy of CorelDraw 8 and asked if it would work, so I played 20 questions to figure out what kind of system he had. It took a while, but it turned out he had purchased an Apple ][+ and wanted to use CorelDraw 8 on it. After I explained that CorelDraw wouldn't work, he started asking me where he could find software for his new computer. I tried to explain that the Apple ][ series was way outdated and he'd probably have to go to more garage sales to find software, but he wasn't getting it. Finally, I became frustrated and said, "There is absolutely nothing in this store that can help you." He gave me a strange look and the couple left.

      About five minutes later, a CompUSA employee came back to the Mac section and said "Sure, this is our Apple stuff, everything here runs on Apple!" The guy then picked out CorelDraw 8 and walked to the register with it.

      I still can't decide if the CompUSA guys were bastards or if the weirdo deserved it. I'll bet they charged him a 15% restocking fee when/if he returned it. I could just imagine him trying to force the CD-ROM into those big black 5.25" drives...

      First they ignore you, then they laugh at you, then they fight you, then you win. -- Gandhi

    • I got a gopher server running on my trash 80. You could use some space there...
  • by Gannoc ( 210256 ) on Wednesday February 26, 2003 @03:46PM (#5389473)
    voltron:/home/gannoc/incoming/temp# chmod +x 3dmark2003.exe
    voltron:/home/gannoc/incoming/temp # ./3dmark2003.exe
    bash: ./3dmark2003.exe: cannot execute binary file

    • $ wine ./3dmark2003.exe
      Total 3DMarks: 2
  • by Shayde ( 189538 ) on Wednesday February 26, 2003 @03:48PM (#5389482) Homepage
    With the level of complexity in current hardware, I can't imagine anyone will come up with a benchmark that -can't- be labelled as skewed, inaccurate, or 'not doing it justice'.

    If I spend a million dollars developing a cool board that does zillions of sprigmorphs a second (a made up metric), and someone does a benchmark that doesn't test sprigmorph rendering, does that mean my board sucks? No, it just means the benchmark doesn't check it.

    However, if Competitor B makes a board that doesn't have sprigmorph rendering, but scores higher on this benchmark, which is the 'better card'?

    The days of simple benchmarks, alas, are past. It used to be "how many clock cycles a second". Nowadays, whether one piece of hardware is better than another simply comes down to "Can it do what I'm doing right now any faster or cheaper than another unit?"
  • by Gizzmonic ( 412910 ) on Wednesday February 26, 2003 @03:48PM (#5389483) Homepage Journal
    Benchmarks are generally too isolated to be of much use. They might be okay for getting a rough picture, but a high scoring 3d benchmark might not directly translate into good 3d performance.

    Even so-called 'real world' benchmarks that test stuff like file opening and scrolling documents don't really get into the meat of the everyday user experience.

    Using benchmarks to decide what computer to buy is like macking on the girl with the big boobs. She might look nice, but she could be horrible in bed. Also she might have crabs.
  • by questionlp ( 58365 ) on Wednesday February 26, 2003 @03:49PM (#5389496) Homepage
    The guys at Tech Report [tech-report.com] also have an article in which they dissect [tech-report.com] parts of the benchmark and provide both Futuremark's and nVidia's comments on the matter.
  • You want scores? (Score:5, Informative)

    by caouchouc ( 652238 ) on Wednesday February 26, 2003 @03:50PM (#5389510)
    If you've run the benchmark, post your scores here, and we'll all compare.

    Or you could just go directly to the futuremark forums [futuremark.com] instead.
  • by dgrgich ( 179442 ) <drew AT grgich DOT org> on Wednesday February 26, 2003 @03:51PM (#5389526)
    Along with most of my geek friends, I really depend on the "excellent" 3DMark scores of the latest and greatest hardware to drive down the price of the previous generation of video cards. After all, software that truly supports all of the whiz-bang features of the top-tier cards doesn't arrive until about 6-9 months after the cards appear on Best Buy's shelves.

    If 3DMark isn't producing high enough scores for the new nVidia cards, where will my price breaks be?
  • Old news? (Score:3, Insightful)

    by mwarps ( 2650 ) on Wednesday February 26, 2003 @03:51PM (#5389530) Journal
    Sorry, but this has been on major hardware sites for two? three? weeks. www.hardocp.com had an entire article on (and mostly started the hoopla over) this entire thing. Posting 3DMark scores to Slashdot is a total waste of time anyway. There is no trusted system of comparison here. Most Slashdot readers aren't hardcore performance nuts anyway. (Go ahead, be a troll or a classic weenie and take that statement out of context or whatever.)

    I just don't think this is the right forum for this type of story. Oh well.
  • No Subject (Score:5, Informative)

    by Jay ( 1991 ) on Wednesday February 26, 2003 @03:51PM (#5389533) Homepage
    It seems like the 3DMark folks decided to deliberately test DX9 features, even though there are not many cards which support them in hardware yet. Nvidia is pissed because they have not implemented any DX9 features in hardware on the FX, whereas ATI has them on the 9x00 series.

    This is a valid benchmark to use to test out how your current hardware will perform in a DX9 environment. I, for one, am glad to see such a tool available so that I can take DX9 performance into account when making my next video card purchase. So my next card may be an ATI - Who knew? The last ATI product I owned was a Number 9, not exactly a 3D monster....

    • Re:No Subject (Score:3, Insightful)

      by TheRaven64 ( 641858 )
      Each version of 3DMark tests a newer feature set than the last. nVidia are saying that most games only use DirectX 8 (and they have a point), so the simple solution would be for people who care about this kind of thing to simply look at the scores from the previous version of the benchmark. End of story. Can we move on now?
    • Re:No Subject (Score:2, Interesting)

      by L0neW0lf ( 594121 )
      Actually, this is NOT a very valid DirectX 9 test at all. Only one benchmark even tests DirectX 9 features, the Mother Nature game, and even then, it is only a partial DirectX 9 test, not a full one. The first three game tests are DirectX 7 and DirectX 8 benchmarks.

      I think there are several uses for a benchmark. One is to measure compatibility with the features offered by today's game engines and gaming API's (OpenGL, DirectX). The second is to measure real-world performance for current gaming titles and technologies. I think 3DMark `03 looks nice, is perhaps a partial measure of current featuresets at best, but is not a good measure of real-world performance at all.
    • Yeah, except we probably won't see DX9 games for another 3 years. The first real DX8 games are just now entering the market. It doesn't really matter though; benchmarks are just a dicksizing tool anyway. They often have very little to do with the real world except letting 16-year-olds whose parents have too much money pretend they have a bigger penis, which, sadly, will never get used. Computers don't impress the ladies. Buy a Corvette instead.
  • by bascheew ( 220781 ) on Wednesday February 26, 2003 @03:53PM (#5389555)
    Does NVidia's poor performance have anything to do with the recently revealed fact that it does NOT have 8 rendering pipelines as advertised, but only 4?

    Read about it here. http://www.theinquirer.net/?article=7920 [theinquirer.net]

    "An Nvidia technical marketing manager confirmed to us that Geforce FX has 4 Pipelines and 2 Texture Memory Units that can results with 8 textures per clock but only in multitexturing.
    However, Nvidia did say that there were some cases where its chip can turn out 8 pixels per clock. Here is a quote:
    "GeForce FX 5800 and 5800 Ultra run at 8 pixels per clock for all of the following: a) z-rendering b) stencil operations c) texture operations d) shader operations"
    and
    "Only color+Z rendering is done at 4 pixels per clock"

    We talked with many developers and they told us that all games these days use Color + Z rendering. So all this Nvidia talk about the possibility of rendering 8 pixels in special cases becomes irrelevant.
    The bottom line is that when it comes to Color + Z rendering, the GeForce FX is only half as powerful as the older Radeon 9700."
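    The arithmetic behind that last claim, spelled out (as an illustration of the reasoning, not independent data): peak color+Z throughput is roughly

        $$\text{fill rate} \approx \text{pixel pipelines} \times \text{core clock},$$

    so at comparable core clocks a 4-pipeline part pushes half the color+Z pixels of an 8-pipeline part. nVidia's counterpoint is that Z-only, stencil, texture and shader operations still run at 8 per clock, which is exactly the kind of work a stencil-shadow engine like Doom III leans on.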

    • Wrong! You're a good boy for doing your homework, but you didn't quite complete it. Firing Squad [gamers.com] links to a site that explains the reason. The GeForce FX can actually switch its configuration: it can run with 8 rendering pipelines or do the 4x2 thing that it does now. nVidia is enforcing this through software because they say it is faster in today's games. They say that when newer games come out, they'll switch it to 8 rendering pipelines to perform better in the games arriving in the coming years.

      Looks like we'll have to wait and see.

  • Silly arguments... (Score:5, Insightful)

    by klocwerk ( 48514 ) on Wednesday February 26, 2003 @03:55PM (#5389573) Homepage
    Ok slashies.
    3DMark 2001 measures performance for DirectX 7 and 8 hardware platforms.
    3DMark 2003 was built from the ground up to measure performance on DirectX 9 platforms; it is not DESIGNED to be a broad-range benchmark, and it isn't meant to give good scores to a computer that merely does what you need it to.

    It's a high-end performance measurement tool which, UNLESS USED IN THE PROPER CONTEXT, gives you useless measurements.

    Sorry for the pissiness, but jeez. For geeks who claim to love specialized tools and hate bloat, this is the perfect tool. It does one thing specifically and doesn't throw in the kitchen sink, or support for ancestral hardware.

    They aren't Microsoft; they're still fully supporting 3DMark 2001 for the platforms it was designed for.

    I'll hush now.
    • by l33t-gu3lph1t3 ( 567059 ) <arch_angel16@ho[ ]il.com ['tma' in gap]> on Wednesday February 26, 2003 @04:01PM (#5389635) Homepage
      One of the primary reasons for the criticism of 3DMark2003 is the fact that it *DOESN'T* use DX9 extensively. Pixel Shaders 1.1 and 1.4 are primarily used, which is absolutely laughable, and only in ONE benchmark are SOME PS2.0 and VS2.0 paths used. The first test is DX7, for chrissakes...
      • Very silly argument (Score:2, Informative)

        by 0123456 ( 636235 )
        Uh, PS1.1 and 1.4 are part of DX9; they just happen to be available on DX8 cards too. There's no reason why you _have_ to use PS2.0 on DX9 cards if earlier versions will work just as well... and it's likely that game developers will use the earlier versions where possible, for best backward compatibility with older cards.

        Just imagine if every test had required DX9: people would be whining that their DX7 and DX8 cards couldn't run anything.
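        To make the fallback idea concrete, here is a minimal Direct3D 9 sketch (my own illustration, not Futuremark's code) of the usual tiering: report the best pixel shader profile the default adapter supports and let the renderer pick 2.0, 1.4, 1.1 or fixed-function accordingly.

            #include <windows.h>
            #include <d3d9.h>

            // Returns 20, 14, 11 or 0 depending on what the default HAL adapter supports.
            int best_pixel_shader_version(IDirect3D9* d3d) {
                D3DCAPS9 caps;
                if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
                    return 0;                                            // no usable HAL device
                if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) return 20;
                if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4)) return 14;
                if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1)) return 11;
                return 0;                                                // fixed-function only
            }

        Because each shader model is a superset of the previous ones, a DX9 part always lands in the top tier, a GF4-class part falls to the 1.1/1.3 tier, and a Radeon 8500-class part gets the 1.4 tier.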
      • by mr3038 ( 121693 )
        One of the primary reasons for the criticism of 3DMark2003 is the fact that it *DOESN'T* use DX9 extensively. Pixel shader 1.1 and 1.4 are primarily used, which is absolutely laughable

        Uhh.. You didn't read the reply [futuremark.com], did you? OK, I thought so. Here's an excerpt from it:

        The argument here is that game test 4 is not "DirectX 9 enough". Once again, a good application should draw a scene as efficiently as possible. In the case of game test 4 this means that some objects use Pixel Shaders 2.0, and some use 1.4 or 1.1 if a more complex shader is not required. Because each shader model is a superset of the prior shader models, this will be very efficient on all DirectX 9 hardware. In addition, the entire benchmark has been developed to be a full DirectX 9 benchmark[...]
        (emphasis mine)

        Do you think your web browser should use DirectX 9 pixel shaders to render text, too?

  • My system (Score:5, Informative)

    by dfenstrate ( 202098 ) <[moc.liamg] [ta] [etartsnefd]> on Wednesday February 26, 2003 @03:56PM (#5389588)
    3d Mark 2003 Score: 1252

    Geforce 4 Ti 4600 @ AGP 4x
    800 MHz PIII
    256 MB RDRAM
    Intel VC 820 Motherboard
    Windows XP
    Games & 3d Mark ran off of 80GB WD 8MB cache Special edition hard drive, alone on a seperate IDE card on the PCI bus.

    For Games:
    SimCity 4 - large maps and pleasing resolutions bring my comp to its knees. Running SC4 at 1024 and higher resolutions is absolutely beautiful; running it at 800x600, it looks like ass.
    RtCW runs fine at 1024, haven't tried it higher yet.
    Delta Force: Black Hawk Down runs fine at 1024, with full effects. Haven't tried it higher yet. The water effects are stunning.
    UT2003 ran fine when I had a GF2 in here, haven't tried it since.

    my 2 cents
    • Just a suggestion: you might look at getting a faster motherboard and processor combination. I see you have a decent graphics card (the GF4 Ti4600 used to be top of the line before the FX came out), but you are really bottlenecking on the processor. A newer Athlon XP 2400+ or P4 2.4GHz should do the trick for you.
  • Does it really make a difference if you get an extra 2 frames per second on your game? I understand if you're doing super high end visualization where it's necessary, but at that point you can afford to purchase 5 different $500 cards and compare for yourself, right?
    • Re:Why bother? (Score:5, Informative)

      by Osty ( 16825 ) on Wednesday February 26, 2003 @04:11PM (#5389730)

      Does it really make a difference if you get an extra 2 frames per second on your game? I understand if you're doing super high end visualization where it's necessary, but at that point you can afford to purchase 5 different $500 cards and compare for yourself, right?

      Yes, it does matter (within reason, anyway). While your current card may do well enough at Quake 3 and the new cards may not have a huge margin over it (really, what's the difference between 150fps and 200fps except in the very rare situation where absolutely everything on the screen is blowing up or something), that's old technology. As hardware capabilities increase, software complexity also increases. That card getting you 150fps at 1024x768 in Q3 with 4x FSAA will likely barely break 30fps for Doom 3. (at that point, you tweak -- drop your resolution, turn off FSAA and anisotropic filtering, lower your detail levels, turn off unnecessary effects, etc and get up to a playable 50fps or so) The cards doing 200fps in Q3 will probably run D3 around 50-60fps. While there's little difference between 150-200fps, there's a world of difference between 30 and 60fps.


      And just to head off any "But your eye can only see 24/30/60fps anyway, who needs more?" arguments:

      • Wrong
      • Film and television are watchable at such a low frame rate because film captures motion blur. Video games do not. Without motion blur, your brain needs more frames to make a smooth image. And even with motion blur, film is hardly smooth (watch a long horizontal pan some time, they can be painfully jerky depending on the speed of the pan).
      • These numbers are averages (except when you cheat and report the peak number instead, which will be even worse). Just because you normally get a smooth 60fps doesn't mean there won't be places where you drop to a slideshow 10fps. Higher is better when talking about averages, so that the worst case won't be so bad.
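      To make that last bullet concrete, here is a tiny C++ illustration (the frame times are invented, not measurements) of how a healthy-looking average can hide slideshow moments:

          #include <vector>
          #include <algorithm>
          #include <numeric>
          #include <cstdio>

          int main() {
              // Hypothetical frame times in milliseconds for a short run.
              std::vector<double> ms = {10, 11, 10, 12, 9, 95, 100, 11, 10, 12};
              double total = std::accumulate(ms.begin(), ms.end(), 0.0);
              double avg_fps   = 1000.0 * ms.size() / total;                         // overall average
              double worst_fps = 1000.0 / *std::max_element(ms.begin(), ms.end());   // slowest frame
              std::printf("average: %.0f fps, worst frame: %.0f fps\n", avg_fps, worst_fps);
              // Prints roughly: average: 36 fps, worst frame: 10 fps
          }

      An average of 36 fps sounds playable; the two ~100 ms frames in the middle are what you actually feel.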

    • You know, this brings up a good point though. Forget benchmarks. Buy both cards at Best Buy. Get the ATI 9700Pro and the GeForce FX or whatever. Take them home. Put them in YOUR machine, run YOUR games, check the framerates, the quality. Decide which you want to keep, take the other back within 15 days and get a full refund.

      I mean seriously. I know it's sort of inconsiderate to advocate buying hardware with the intention of returning it, but A) that's Best Buy's policy; if they don't like it, they can change it. And B) the few people who care enough to read the benchmarks and do this aren't going to hurt Best Buy financially by taking back a return item.

      If I was buying tomorrow this is exactly what I would do (well, once both cards were at Best Buy.)

  • THG? (Score:3, Interesting)

    by molo ( 94384 ) on Wednesday February 26, 2003 @04:00PM (#5389625) Journal
    Am I the only one who saw "THG" in the post and thought, "The Humble Guys? They're still around? And they care about graphics??"

    I had to mouseover to realize that they meant Tom's Hardware Guide and not "The Humble Guys" of 1980s BBS piracy. Hrm, I guess I'm showing my age.

    Heh, for a trip down memory lane, check this out:

    http://www.textfiles.com/piracy/HUMBLE/
  • I dislike benchmarks like these. They encourage video card manufacturers to design video cards that do well in benchmarks, rather than do well in actual applications.

    There are tons of people who do comparisons with applications rather than a benchmarking utility. Whether you're a fan of Tom's Hardware [tomshardware.com] or not (I know he's had somewhat of a sordid past), there are a lot of sites where people like him do testing with end-user applications. Do research, find one of those sites you trust, and go with numbers based on software you use, rather than some number from a benchmarking application you'll never actually run.
  • The REAL Issue (Score:5, Informative)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday February 26, 2003 @04:02PM (#5389639) Homepage
    I've been reading about this, and the big rift seems to come down to this: the pixel/vertex shader programs are not optimized. This is why nVidia doesn't like the benchmark but ATI does. From what I've read, ATI's hardware performs very well with unoptimized code while nVidia's does not. nVidia's hardware is faster than ATI's, but it doesn't do well with non-optimized code. All of the complaints about the benchmark seem to be about "unnecessary complexity" and other "no one would do it like that" type things. These are all basically "you could optimize that, so why don't you" type complaints.

    The underlying issue here is that nVidia is no longer a "partner" of MadOnion (I know they changed their name; they're Futuremark now) but ATI is (IIRC). This is helping fuel suspicion that the benchmark is designed to perform better on ATI hardware than on nVidia's. You must pay a fee to be a "partner," so there is the unspoken idea that what Futuremark is doing might be some kind of extortion.

    Where the answer lies is up to you. Personally, I do think that the benchmark is unfair and not a good benchmark. For example, changing the graphics card in your computer should have next to no effect on the CPU score, if any; yet it has a measurable effect. But all of this is moot, IMHO, since Doom III will be the new uber-benchmark, trusted above all else, when it comes out. Until then, argue amongst yourselves.

  • by Steveftoth ( 78419 ) on Wednesday February 26, 2003 @04:03PM (#5389660) Homepage
    Personally, I think that a good benchmark is just doing whatever you are going to be doing and timing that.

    Are you going to be playing much of the 3DMark benchmark? If the answer is yes, then you should use it; otherwise it's pure masturbation. Their site claims that the purpose of the benchmark is to give you an idea of what a typical DX7-DX9 game will give you in performance. However, the 'games' they use to test it are not games you can actually play. It's basically a graphics demo. Wow.

    The only benchmarks even worth considering are the Quake, Unreal, etc. benchmarks that test real games being played. And even those results should be taken with a grain of salt. They are 'real world' results, but you have to take many factors into account to actually derive useful information from them, such as RAM, CPU, the resolution the marks were run at, etc.

    If you are smart, you will buy your graphics card from a place like Fry's that will let you return it if the performance is unsatisfactory. In this day and age, where the graphics card costs more than a computer, you had better get your money's worth.
  • Simple Benchmark (Score:2, Interesting)

    by Eric Savage ( 28245 )
    Just test SimCity 4. It kicks the snot out of my P4-2.26/1GB DDR/4200.
  • by EXTomar ( 78739 ) on Wednesday February 26, 2003 @04:08PM (#5389707)
    Sometimes people scratch their heads about benchmarks and wonder "how did they come up with that number?" If the benchmark itself were Open Source you'd have at least a partial answer. Not to mention you'd have the eyes of many people looking over the code to make sure it was issuing draw calls in a correct and consistent manner.

    So why aren't benchmarks open? What do the makers of benchmarks have to hide? Are they under NDAs from the card vendors?
    • I dunno--I think I trust their abilities as programmers. If I recall correctly, it's the same core team that coded Second Reality [hornet.org] and made us all crap our collective pants, back in The Day.
    • The flip side is also true. If you know the exact tests that a benchmark is making you can tailor your driver, or even your hardware, to give a higher benchmark score.

      Well, big deal, but bear in mind that all design is some sort of compromise. If you gain performance in one area you necessarily give up a little in another. To use the car analogy, you can have mileage or power, but not both.

      When you fudge a product to give good benchmark scores you often have to do this by degrading the real-world performance that will be experienced by your customers. They believe they are buying a better card but are actually getting a worse one.

      All scientific testing should really be done double-blind, but that isn't usually possible when running engineering performance tests. (Imagine trying to time a drag run without knowing what you were doing; in a proper test the timer wouldn't know what a good time was or why you wanted it.) An OSS benchmark wouldn't even be blind. It's being given the test AND the answer sheet.

      All benchmarks should have their code opened after a period of time, but then be replaced by new ones. The problem is that benchmarks are used for *selling*, not scientific purposes, and by the time a benchmark could be opened it would be wholly irrelevant because the product cycle has moved on.

      And never mind the fact that performance of video cards is largely a subjective measure, not an objective one, and so benchmarks themselves are of extremely limited use.

      Except by the marketing department of course.

      If *you* want to know which card is better, try them and see which one you like.

      KFG
  • ...but I couldn't get 3DMark 2003 to run on my system.

    Not much of a benchmark program if the thing won't even run properly.
    Then again perhaps I just need to update my version of winex [transgaming.com]
  • benchmarketing (Score:2, Insightful)

    by in_ur_face ( 177250 )
    Face it... the computing industry is run by benchmarks and benchmarketing.

    I personally don't put too much trust in any benchmark. If I see an increase in performance with the actual software/hardware that I run, then that's all I care about...

    Synthetic or not, you can only read so much into a benchmark. Half of the graphs for benchmarks have scales which are EXTREMELY misleading, making a .4 fps difference look like a 400 fps difference.

  • the point (Score:5, Insightful)

    by cribb ( 632424 ) on Wednesday February 26, 2003 @04:16PM (#5389760)
    What bothers me is that the GeForce FX, being very slow with unoptimized code, needs code specially rewritten for it so it runs fast. DirectX was created with the idea that it would be the standard 3D engine, eliminating the need for each game to develop its own.

    Now nVidia is introducing a new factor into the equation: you have to write different code for each video card, just as there used to be 3dfx-only games.

    Isn't this against the idea of DirectX? It seems very counterproductive to me, and like an attempt by nVidia to monopolize the gaming industry.

    • Re:the point (Score:4, Insightful)

      by olethrosdc ( 584207 ) on Wednesday February 26, 2003 @04:55PM (#5390111) Homepage Journal
      In a nutshell: you don't need to write different code for different cards. Your program will work everywhere. You might improve performance if you write special code, but that should be handled by the DirectX driver, so you shouldn't have to.

      Carmack is doing somewhat deeper programming than just using the top-level OpenGL API; he's actually coding shaders and such. I guess in that case you might need to do vendor-specific work. But the top-level API is the top-level API: you just use it, it's the same for all cards, the driver in between does its job, and you don't need to write extra code.

      Correct me if I'm wrong.
    • Direct X is not written to be the standard 3D engine, except in the sense that it is intended to be the *only* engine.

      Let me ask you a question. How many OS's does Direct X run under?

      No peeking.

      That's right. One. Direct X is written to monopolize the gaming industry onto one OS, and, for the most part, it's working.

      And *whose* OS is it written to work under?

      Again, no peeking.

      If MS wanted Direct X to be a standard gaming engine all they'd have to do is open the API, but that would destroy its very purpose.

      You'd think they were *trying* to be a monopoly or something.

      KFG
  • What I want even more than that last bit of graphics speed is a driver that doesn't crash every few minutes.
    As everyone who plays 3D games knows, the driver that comes with the card is unusable. The only thing that will typically run under it is the benchmark. So the first thing you do when you get a new card OR a new game is go to the board manufacturer's site and get the latest driver (and pray).

    My experience tells me that nVidia is ahead in this area. When a new game comes out, if there is a bug that stops it from running or causes random crashes, the fix will usually be released by the game's release date. ATI, on the other hand, tends to have buggier drivers and to lag weeks behind on bug fixes.

    So the bottom line is, if you are planning on playing that hot new MMORPG on release day, you are probably better off going with nVidia, since you are more likely to get a driver that works.
  • NVIDIA has a problem (Score:4, Interesting)

    by lazyl ( 619939 ) on Wednesday February 26, 2003 @04:25PM (#5389822)
    Carmack says:

    It seems that the NV30 architecture requires a good deal more optimization to run shader code optimally (read: fast), while R300 deals with standard code much better. This would explain why NVIDIA is so harsh and aggressive in its criticism of the new 3DMark 2003, since the GeForce FX (NV30) seems to have a problem with non-optimized shader code, a trait that its mainstream siblings NV31 and NV34 will obviously share. If word got around - and in this community, it does - this could seriously hurt NVIDIA's sales.

    To be fair, in real games this "handicap" will most likely not be nearly as pronounced as in the 3DMark test. After all, NVIDIA is very good at convincing game developers to optimize and adapt their code for their hardware.


    So NVidia only runs well with optimized code huh? That's going to be a problem for them I think. It means we won't know how well it works until we get some games to benchmark it with. Sure, we could benchmark it with UT2003 or something; but that doesn't mean much. I don't care about UT2003. My current card runs that fine. I (and other people who buy these cards) care about how they will run the next gen games. We could wait until those games come out, but a lot of people don't have that patience. For those people it might be safer to get the ATI. If you go with NVidia you have to really trust that the games you want are going to be well optimized for it, though as Carmack said, they probably will be. Personally I'm still on the fence about which card I will eventually get.
  • by scotay ( 195240 ) on Wednesday February 26, 2003 @04:28PM (#5389849)
    I've got a 9700 Pro, a P4 2.53, an SiS 648 board and 512MB of DDR400. Hardly a low-end rig. My 9700 chokes on 3DMark 2k3. At several points in the demos the FPS drops below 10.

    If this benchmark is supposedly so horribly biased in favor of ATI, you'd think they might at least get it to run smooth on my 9700.

    I think 3DMark may be accurately pointing out that this new whiz-bang high-precision stuff may only start to be game-worthy in the NV35/R350 or even NV40/R400 generations.
    • Nice system... :)

      I've found that 3DMark is a good benchmark for comparing your system with others that have the same graphics card or processor speed, to see if you're getting the performance you should out of your setup.

      I've used it several times and after the benchmark has completed, you can go online and compare your score with others and see how well you stack up against them.

      It's not just a dick-measuring contest. I've actually found my system performing 25% slower than other systems using the same graphics card; it turned out I just had to upgrade my ATI drivers to the latest and greatest, and my performance jumped 25%. It's a valuable tool for catching a system that isn't configured properly.

      Here's another scenario for you: How many non-geeks know that if you enable memory interleaving in your BIOS and have 2 or more DIMMS, you can essentially double your memory throughput and all of your games/apps will run much faster? (interleaving is like RAID-0 for memory)

      I've also found that 3dmark is a good benchmark to run after you've overclocked your graphics card. 3dmark seems to test parts of the card that most games don't, and I've seen several times where an overclocked card will run UT2003 stable, but you throw 3dmark on there and start to get artifacts. So it is valuable to see if maybe you're overclocking a little too much and pushing your card beyond a stable level.

      You might just find that by making a simple BIOS setting change or updating your drivers all of a sudden your system is twice as fast as it used to be.

      3dmark is good for that, but you do have to take it with a grain of salt and run several other benchmarks to see how your card really stacks up.
  • As much as I am loath to waste space with another score, I'll do it all the same, as a form of eulogy for the video card that helped earn it.

    Score: 1493
    Date: 2003-02-15
    CPU: AMD Athlon(tm) XP/MP/4 1741 MHz (XP 2100+)
    GPU: NVIDIA GeForce4 Ti 4400
    275 MHz / 554 MHz
    Memory: 512MB 333Mhz DDR

    Sadly, the ASUS GeForce4 Ti4400 that earned this score passed away on Monday evening due to a burned-out cooling fan. It spent its last moments in this world doing what it loved best: running my druid through the Plane of Tranquility in EverQuest. Services are postponed until the ASUS Technical Support Dept. gets their computer system back up and can issue me an RMA.

    In Memoriam: Asus 'speedy' GeForce4 Ti4400 (December 25, 2002 - February 24, 2003)

    Rest in peace, dear friend, we hardly knew ye.
  • by willith ( 218835 ) on Wednesday February 26, 2003 @05:10PM (#5390235) Homepage
    There's a substantial thread on Ars Technica's forums [infopop.net] that contains a ton of benchmark results. What it boils down to is that if you have a decent processor (Athlon XP 1600+ or better) and an NVidia GF4 Ti4600, you'll end up with something like 1500-1700 3DMarks. If you pull the GF4 out and slap in a Radeon 9700 Pro (and get the appropriate drivers installed, of course), your score would shoot up to over 4000 3DMarks.

    I've got a Ti4600, and 3DMark 2003 runs like ass. Fortunately, Splinter Cell plays just fine, so I'll ignore the benchmark and get on with actually using the computer.
  • The review site with an introduction spread out over several pages, laden with adverts!

    Where else can one go to derive such pleasure from clicking "introduction, continued..." over and over again?
  • But at 171MB, I have a feeling somebody's in for a hefty bandwidth bill at the end of this month.

    With the low scores everyone is posting, I'm concerned for my safety. If I run this benchmark on a system that's too slow for it, will it get a negative 3DMark score, or will it cause a total protonic reversal of the space-time continuum and destroy the entire universe? Or does my Radeon 8500 only possess enough processing power to cause destruction limited to my neighborhood? Oh well, I hope the answer wasn't in that clickwrap licence I just said OK to.
  • by OBODY ( 556829 ) on Wednesday February 26, 2003 @05:37PM (#5390520)
    AMD XP TBred-B 2100+ OCed to 2700+ (166fsb x 13)
    2x256MB DDR400 clocked at 333MHz, with 2-2-4-2 timings (dual-channel A7N8X Deluxe)
    ATI Radeon 8500 Default Clocking

    My score was a whopping 1173 3DMarks with:

    Program Version 3DMark03 Revision 1 Build 3
    Resolution: 1024x768@32 bit
    Texture Filtering: Optimal
    Pixel Processing: None
    Vertex Shaders: Optimal
  • My result (Score:3, Informative)

    by fatwreckfan ( 322865 ) on Wednesday February 26, 2003 @06:16PM (#5390899)
    1211 3DMarks

    Athlon Thunderbird 1.4GHz
    Geforce 4 TI4600
    512 MB PC 133

    Of course, I can't compare to other users without paying, so I don't know if that is good or bad.
  • We all know that benchmarks can be fixed and benchmarks don't even tell half the story about real world ownership of a particular card.

    But frankly, I'm sure that most people buy cards primarily on the benchmark scores. Even if a review slags the quality of a driver, many people will buy the card anyway, telling themselves that the drivers are gonna get fixed, a firmware upgrade will make it faster, and for the 20% of the time that the card works right, we'll have 5 extra frames per second.

    If benchmark scores didn't mean so much (both in sales and consumer opinion) then we might get back to meaningful metrics for measuring performance, but I suspect that we'll be looking at benchmark skullduggery for some time to come.
