Opteron Gaming Benchmarks

bishop writes "Ace's Hardware has published some Unreal Tournament 2003 benchmark results on a 1.8 GHz Opteron 244. The Opteron servers out right now don't have AGP, but this issue has been nullified, literally, through the use of the 'null renderer' option in UT2003 to bypass the display output. At 1.8 GHz, the Opteron manages to outpace all previous Athlon models, though it does still fall behind the 3 GHz Pentium 4 by about 8%." Only 8% slower in performance with a 40% slower clock speed. Not too shabby.
  • Nice Stats! (Score:3, Insightful)

    by mungeh ( 663492 ) on Tuesday April 29, 2003 @04:34AM (#5832367) Homepage
    I can't wait till the Athlon 64s come out! Maybe they'll be able to surpass a P4 3 GHz?
    • However, the question at that point (September) is what Intel will be shipping: a P4 3.5 GHz? A P4 4.0 GHz? Sure, it should beat a P4 3.0, but that's available today. It'll be interesting to see how it compares to the other processors available when it ships.
  • by vandel405 ( 609163 ) on Tuesday April 29, 2003 @04:39AM (#5832376) Homepage Journal
    One Function explains it all ...
    bltMegsOfTextureMaps(...) {
    return;
    }

    "hum, nah, that shouldn't mess up the results..."
    • Re:Um Null Driver? (Score:3, Informative)

      by ChadN ( 21033 )
      But seriously. The graph indicates that they used the "null" driver for all setups (and even tested with graphics boards of VASTLY different performance to make sure they all performed roughly the same with the "null" driver). So one can infer it does test CPU and memory speed (primarily).
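
      As a minimal illustration of that point (a made-up sketch, not UT2003's actual code): with a null renderer the per-frame game work still runs, but the draw call does nothing, so the measured frame rate depends only on the CPU and memory, never on the graphics card.

      /* Toy benchmark loop showing why a null renderer makes the test CPU-bound.
         All names here are invented for the example. */
      #include <stdio.h>
      #include <time.h>

      static volatile double world_state;

      static void update_game_world(void)
      {
          int i;
          for (i = 0; i < 1000; i++)      /* stand-in for physics/AI/visibility work */
              world_state += i * 0.5;
      }

      static void null_render(void)
      {
          /* deliberately empty: the null renderer discards every draw call */
      }

      int main(void)
      {
          const int frames = 100000;
          clock_t start = clock();
          int f;
          for (f = 0; f < frames; f++) {
              update_game_world();        /* CPU- and memory-bound part */
              null_render();              /* the part a real GPU would bottleneck on */
          }
          printf("%.1f frames/sec, limited purely by CPU and memory\n",
                 frames / ((double)(clock() - start) / CLOCKS_PER_SEC));
          return 0;
      }
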
  • by Vector7 ( 2410 ) on Tuesday April 29, 2003 @04:57AM (#5832415) Journal
    You know, I find it pretty interesting how vastly AMD's chips outpace the P4 clock-for-clock. It's widely acknowledged that the P4 gets less done per clock than the P3 did. Some people have said that the P4's memory architecture is a disaster, or that it is pipelined TOO deep, but I've got a sort of conspiracy theory about this. Granted, I'm not a computer engineer, and know just enough to hang myself with, but here goes:

    I think they've manipulated the design so they can deliberately increase the clock rate for marketing reasons, without getting proportionally more performance out. Basically, I suppose they've taken the longest paths through the chip and stuck latches all over the place so that the overall cycle time can be reduced, but operations that used to take one clock cycle now may take two. When 1.8 GHz AMDs can nearly match the speed of a 3 GHz P4, I don't think this is an entirely implausible theory.
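
    (Back-of-the-envelope, using the article's own figures: if the 1.8 GHz Opteron comes in about 8% behind the 3 GHz P4, its per-clock throughput in this test is roughly 0.92 x (3.0 / 1.8) ≈ 1.5 times the P4's; put the other way, the P4 needs about half again as many cycles for the same work.)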

    On the other hand, maybe the P4 really just is a pig. There was some discussion on Usenet a while back about how it takes upwards of 2000 clock cycles to enter and exit an interrupt handler on the P4, something which an old 486 could do in ~45 cycles IIRC (and I don't recall exactly, but I think the Athlon today can do an INT/IRET pair in a few hundred cycles). Curious how, in hyper-optimizing these chips for the most common cases of execution, performance of these sorts of peripheral operations goes all to hell.
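
    For anyone who wants a rough feel for those numbers, here's an illustrative sketch (my own, not the Usenet measurement): time a trivial system call with the TSC. On an x86 Linux box the call traps into the kernel and back (int 0x80 or sysenter), so cycles-per-call is a loose proxy for the interrupt entry/exit cost being discussed.

    /* Illustrative only: average TSC cycles for a trivial syscall round trip. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <x86intrin.h>                     /* __rdtsc() */

    int main(void)
    {
        const int iters = 100000;
        unsigned long long start, end;
        int i;

        start = __rdtsc();
        for (i = 0; i < iters; i++)
            syscall(SYS_getpid);               /* trap into the kernel and back */
        end = __rdtsc();

        printf("~%llu cycles per kernel round trip\n", (end - start) / iters);
        return 0;
    }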

    That said, I'm really looking forward to having 64-bits on the desktop. =)
    • I know my bud with a P4 3.04 gets steaming pissed when my Athlon 1700+ running a 184 MHz bus times 12 crushes his machine by 10 minutes on 'make world' on FreeBSD. Even using a -j flag to take advantage of the HT goodness doesn't seem to help him much. I thought it might be I/O, but his drive is faster than mine too with hdparm -t and -T. My UT2003 benchmarks are faster than his even though he has a Ti4200 with 128 MB and I'm running a Ti500. All the supposed memory bandwidth just doesn't seem to be there for him. nForce2
      • You have just posted the most useful benchmark in the world to me. I don't care about Unreal, or winstones, or whatever. I run a compiler on my machine and that's it. The make world test is the one I care about.

        Can you post what the total make world time on your machine is? You only mentioned that it was 10 minutes faster.
        • OK, I'm running 4.x stable. I'm also dual-booting Gentoo at the moment. I was running Gentoo for a while waiting for issues with the nForce2 support to sort themselves out. I ran hdparm under Gentoo. Steve's IBM drive is faster than my WD800JB. Pre-3.2.2 GCC was pulling 23 minutes for everything, BUT I whack my /usr/obj tree before running make buildworld && make installworld && make buildkernel && make installkernel. If something dies in any of those, I want it to stop dead. At this point it
    • Everyone says they upped the depth of the pipeline (20 stages) so they could push the MHz up too, but the engineers think this is so laughable as to be pathetic.
      As much as it'd be a great story if it were true, Intel are not going to deliberately put out a chip that they know can be outclassed so easily.
    • I have a theory.

      They realized that if they made a design that allowed them to ramp up the clock speed to profane levels, they would have the best overall performance.

      The P4 3 GHz is faster than the fastest AMD has to offer. Considering that not too long ago AMD had the fastest offering for quite a while, I would say the strategy worked.

      There is nothing wrong with designing a chip for ultra-high clock speeds and then attaining them. The only problem I had with the P4 was that right when it came out it was s
    • You know, I find it pretty interesting how vastly AMD's chips outpace the P4 clock-per-clock.
      Not that interesting. The Opteron's pipeline is 40% shorter than the P4's. That's why the P4 can and must run at a higher clock speed to get equivalent performance. The same amount of work is being done, it's just split into different sized chunks of time.
  • cool (Score:3, Funny)

    by Sevn ( 12012 ) on Tuesday April 29, 2003 @04:58AM (#5832418) Homepage Journal
    I'd pay 50 dollars!

    I'll run out and buy one for my blind neighbor so I can cream him on Tokara Forest. Maybe I can hook up a more advanced version of that cool braille output doohickey from the movie Sneakers so he can feel the pain with his fingers.
  • by Anonymous Coward
    I think the Opteron has the potential to do a lot more than be a gaming spectacle. Between the preliminary SPEC marks, the various performance metrics on database machines, a real bus (HyperTransport, Alpha-like), and being reasonably competitive at a modest clock rate, it's impressive to say the least. And its ability to chew through legacy un-optimized code is leagues better than Itanium 2, which is considering an FX!32-like on-the-fly "re-compiler" to help with horrific performan
  • by DrSkwid ( 118965 ) on Tuesday April 29, 2003 @07:06AM (#5832665) Journal
    You know, it's always saddened me that there aren't more multi-cpu motherboards for general consumption.

    There's plenty of us that would stick another £200 CPU in the box every now and then as the cash came in.

    I mean, I've already got more memory than I ever use and too much HD storage; I want CPUs I never use, either.

    I like my DUAL 1.2 GHz P3 motherboard. I put it together for £300 or so and it's done sterling service, surpassing my normal 18-month upgrade timetable and not looking like it's about to be retired any time soon (until DoomIII & HL2, I guess 8).

    I'll be looking to buy a nice 4-8 way Opteron system and populate it as I go along.

    I think Intel missed an opportunity by crippling their processors to be non-SMP. I hope there's an engineering reason why; I can't really think of a good reason otherwise. Surely they don't make *that much* by gambling that I'll buy an expensive Pentium if I can't have a 4-way Celeron set-up.

    • by NerveGas ( 168686 ) on Tuesday April 29, 2003 @02:17PM (#5836197)
      surpassing my normal 18 month upgrade timetable and not looking like it's about to be retired any time soon

      That is precisely the reason why dual-CPU machines are much more useful as desktops than the pundits would have you believe.

      You'll hear over and over that "you won't use the other CPU", or "your apps won't take advantage of it." In reality, dual-CPU desktops are so much more responsive under load that they still feel "quick" much, much longer than their single-CPU counterparts will.

      I have a dual Pentium 133. With NT4 on it, it's just as quick and usable as a P3/650 with Win98. CPU-intensive tasks do take longer, but the machine is still so responsive that you really don't notice it.

      Here are the reasons: First, you *are* running more than one program at once, even if you don't notice it. On the Windows side, even with one app, you still have at least 30-40 different threads and/or programs running that you're not aware of - logging daemons, mouse daemons, graphics drivers, timers, and the copious other programs that X and/or Windows will launch. Then, if you're doing net or disk access, that's even more there.

      The second reason is because of the interrupts. In a dual-CPU machine, one CPU can be getting hammered by interrupts, and you still have another to run other code, such as your GUI.

      After having used dual-CPU workstations, I'll never build myself another single-CPU setup again. If that 2xP133 is still a nicely usable machine, I really can't imagine how long my 2xAthlonMP 1800 is going to last!
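
      One quick way to see the effect (an illustrative sketch assuming a 2-CPU box, not a rigorous benchmark): run the same CPU-bound work in one thread and then in two. On a dual-CPU machine the two-thread run finishes in roughly the same wall-clock time as the one-thread run, which is exactly why the foreground stays snappy while something else is grinding away.

      /* Illustrative sketch: wall-clock time for 1 vs. 2 CPU-bound threads.
         Build with something like: cc demo.c -o demo -lpthread  (demo.c is a made-up name). */
      #include <pthread.h>
      #include <stdio.h>
      #include <time.h>

      static void *spin(void *arg)
      {
          volatile double x = 0;
          long i;
          for (i = 0; i < 200000000L; i++)   /* pure CPU-bound busy work */
              x += i * 0.5;
          return arg;
      }

      static double timed_run(int nthreads)
      {
          pthread_t t[2];
          struct timespec a, b;
          int i;

          clock_gettime(CLOCK_MONOTONIC, &a);
          for (i = 0; i < nthreads; i++)
              pthread_create(&t[i], NULL, spin, NULL);
          for (i = 0; i < nthreads; i++)
              pthread_join(t[i], NULL);
          clock_gettime(CLOCK_MONOTONIC, &b);

          return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
      }

      int main(void)
      {
          printf("1 thread:  %.2f s\n", timed_run(1));
          printf("2 threads: %.2f s\n", timed_run(2));  /* ~same on a dual-CPU box */
          return 0;
      }
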

      steve
      • by Anonymous Coward
        The second reason is because of the interrupts. In a dual-CPU machine, one CPU can be getting hammered by interrupts, and you still have another to run other code, such as your GUI.

        True to a point. There must be a working/efficient APIC and the OS has to properly manipulate this for both CPUs to service interrupts. While I strongly agree with a 2-way system being leagues better than "faster" single boxes, there are big-time problems with various OSes including Linux where say, putting a GigE card in, then

        • The situation with the gigE cards isn't entirely the fault of the OS (although the OS certainly isn't blameless). GigE cards simply generate *swamps* of interrupts. However, the nicer gigE cards give you the ability to turn on interrupt coalescing, where the card will queue up several packets before it sends an interrupt.

          That does, of course, increase the latency, but unless you're in a clustering environment, you can generally handle a small hit in latency much better than you can handle a swarm
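
          To make the coalescing idea concrete, here is a toy simulation (invented driver-style logic, not code from any real NIC driver): the "card" queues packets until a frame-count or time threshold is hit, then raises one interrupt for the whole batch, trading a bounded bit of latency for far fewer interrupts.

          /* Toy simulation of interrupt coalescing; every name here is made up. */
          #include <stdio.h>

          #define COALESCE_FRAMES 32      /* raise an IRQ after this many packets...    */
          #define COALESCE_USECS  50      /* ...or once the oldest has waited this long */

          static int  interrupts_raised;  /* stand-in for the host's IRQ line */
          static int  pending_frames;
          static long first_pending_us;

          static void packet_arrives(long now_us)
          {
              if (pending_frames++ == 0)
                  first_pending_us = now_us;
              if (pending_frames >= COALESCE_FRAMES ||
                  now_us - first_pending_us >= COALESCE_USECS) {
                  interrupts_raised++;    /* one IRQ covers the whole batch */
                  pending_frames = 0;
              }
          }

          int main(void)
          {
              long us;
              for (us = 0; us < 10000; us++)   /* 10,000 packets, one per microsecond */
                  packet_arrives(us);
              printf("10000 packets -> %d interrupts\n", interrupts_raised);
              return 0;
          }

          Without coalescing that run would be 10,000 interrupts; with these (made-up) thresholds it's a few hundred.
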
  • by Guspaz ( 556486 ) on Tuesday April 29, 2003 @08:51AM (#5833008)
    Oddly enough, Anandtech's numbers show the complete opposite: The Opteron beat ALL other CPUs at ALL games. Including the P4 3.0C in UT2K3. Take a look:

    http://anandtech.com/cpu/showdoc.html?i=1818&p=6
  • It's interesting to see what's almost a disclaimer after the benchmark results, saying that this is not a final benchmark result, basically because it wasn't using the full capabilities of the chip.

    There have been no benchmarks so far, except perhaps a few Linux ones, where the Opteron has been nearly as well optimized for by the software as the Pentium, Athlon or Xeon, simply because the software producers and developers haven't caught up to the chip, and it's a big transition as well.

    In my opinion, we won'
  • Only 8% slower in performance with a 40% slower clock speed. Not too shabby.

    but what is the price difference?
    • At Newegg, you can get a 1.4 GHz Opteron for $315 shipped. Assuming the equivalent to that is about a 2.7 GHz or so P4, the 2.66 is $219 at Newegg, and the 1.8 is $319. This is just from one vendor, though, so these may not be the best prices. Also, the Opteron is designed for server applications; a better comparison might be a Xeon, which at 2.66 is $308. So judging by performance, about the same cost. The Opteron can only come down in price :)
