ATI Introduces a Parallel Processing Video Card 89
bilestoad2 writes "ATI has announced the introduction of a new video card, the Rage Fury MAXX. It uses two RAGE 128 Pro processors and 64 MB of RAM.
Follow this link for the complete story.
I don't know about you, but I've got to have one of these..."
Thought i read somewhere about a quad 3dfx board (Score:1)
Great... (Score:2)
Sorry, ATI, but I don't subscribe to your "proprietary technology" anymore. I made the mistake of buying one of your cards before, when I didn't know better; I'll not make the same mistake again. There are a lot of better cards from vendors that actively support free development - like Matrox and nVidia - and I'll not be shackled with your wholly proprietary support - or lack of any support - again.
Re:Thought i read somewhere about a quad 3dfx boar (Score:1)
stuff (Score:1)
Anyway... on the parallel processing thing... aren't the biological processors they're playing with relatively slow, but massively parallel? I can just see it now... one processor for each PIXEL on your screen
smash
ATI release schedule (Score:1)
Today's drivers tomorrow.
"The number of suckers born each minute doubles every 18 months."
Bah... not new... (Score:1)
SLI/PGC renewed (Score:1)
Multi processors == Multi monitors? (Score:1)
Re:'patent-pending' - marketing speak .... (Score:2)
Of course, they did give 3dfx a patent on multitexturing, which, while more novel than ATI's full-frame nonsense, is still essentially pretty obvious -- and not just after the fact.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Where do I submit anti-patent commentary? (Score:4)
Are you kidding?
It's at least arguably unique to split even and odd lines among two cards (like Voodoo 2 SLI), or to split the image into horizontal strips (Metabyte's PGC), or to evenly split the texel processing load among multiple texel processors (Voodoo 2 core design), but to attempt to patent the process of merely having one complete frame go to one processor while having the next complete frame go to the next processor?
The general reason one doesn't want to use a full-frame architecture is simple: per-frame times don't budge. Either you build higher latency into your rendering chain, since the chip cluster has to know the next x frames you intend to render, or you get *no* speed boost.
Don't even get me started on out of order frame rendering on a realtime rendering solution.
Each of the previously mentioned solutions (SLI/PGC/Texel x 2), incidentally, lowers per-frame latency.
Granted, there's probably some degree of multi-frame latency built into most drivers, particularly for games. But the concept of patenting the most basic parallelization solution strikes me as absolutely hilarious. It's very likely most 3D rendered movies use the technique ATI is trying to patent. "I'm done finishing this frame, send me a new one."
It's very likely most WORKPLACES work the same way too. "I'm done with this job, assign me a new one."
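The "I'm done, send me a new one" pattern described above is just a work queue; here's a minimal sketch (Python, all names mine -- this is the general technique, not anything from ATI's actual drivers):

```python
# Sketch of full-frame parallelism: each chip pulls the next complete
# frame off a shared queue as soon as it finishes its current one.
from queue import Queue
from threading import Thread

def chip_worker(chip_id, frames, done):
    while True:
        frame = frames.get()
        if frame is None:                  # sentinel: no more frames
            break
        done.append((chip_id, frame))      # "render" the whole frame

frames, done = Queue(), []
for f in range(8):
    frames.put(f)
for _ in range(2):                         # one sentinel per chip
    frames.put(None)

workers = [Thread(target=chip_worker, args=(i, frames, done)) for i in range(2)]
for w in workers: w.start()
for w in workers: w.join()

print(sorted(f for _, f in done))          # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

Note that nothing here reduces the time any single frame takes to render, which is exactly the latency objection.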
That being said, I'm looking forward to trying out ATI's new cards. Ever since I noticed their 128's were supported by Metabyte's [metabyte.com] excellent Eyescream system, I've been much more interested in them.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Two procs? 64 megs?!? (Score:1)
----------------
"Great spirits have always encountered violent opposition from mediocre minds." - Albert Einstein
Re:ATI Boards are crap (Score:1)
(Just a joke, i knew what you meant)
ATI's no longer proprietary! (Score:1)
Funny, I could have sworn I had docs on my desk... (Score:1)
We're working on it as fast as we can... (Score:1)
ATI HAS released spec docs to the GLX team... (Score:2)
ATI has done nothing (Score:1)
Whoa, easy there (Score:2)
You should know that 3dfx, the recognized performance leader for most of the last 2 years (this of course may be changing), has been using the exact same old technology for quite some time. Voodoo2 is just a somewhat updated Voodoo1 chip (they added hardware support for the lighting calculation along the edges of the triangle, IIRC). Voodoo3 is just an overclocked SLI Voodoo2 (the Voodoo3 2000 has exactly the same performance numbers as a V2 SLI rig, as we measured it). Voodoo4 will have some new stuff (though no one knows exactly what yet), but it is definitely still a direct descendant of SST1, the core Voodoo technology.
Similarly, TNT2 is just a massively overclocked TNT (GeForce does appear to be quite different).
I will give ATI the benefit of the doubt until I see the performance numbers.
Hilariously, the Maxx has 2x as much memory as the high-end Rage 128, for a whopping 64 megs, which is totally absurd. That's as much memory as my entire system! (And of course I can't afford more RAM now.)
You know the weird thing? I just read about Maxx on ATI's webpage yesterday. When I woke up (4 minutes ago) I realized I had a dream about it. Then I came here, and there was a story about it! Man, I must be sad....
Time to go email ATI...maybe I'll ask them for nice open Linux drivers.
Re:Beowulf :-) (Score:2)
---
"'Is not a quine' is not a quine" is a quine.
Not entirely untrue, but quite misleading (Score:2)
I don't know what exactly their patent is trying to cover. It looks like it's trying to cover distributed, rather than parallel, rendering; that is, in triple-buffering, have one chip handle the first backbuffer and the other chip handle the second backbuffer. The law of diminishing returns would definitely apply right away. Right now one of the big bottlenecks in 3D cards is the speed at which the bus can send rendering commands to it. Also, the time it takes to send a rendering command is often longer than the time it takes to execute it on the card; unless each chip is storing a complete displaylist and then post-rendering it (and there's not really much point to that, either), the overlap between the chips' rendering times will be minimal, at best. At the absolute best you could get a doubling in framerate, but the latency would still be just as high, and latency is the real killer in 3D games, not framerate (it's just that framerate is easier to measure and easier to explain).
Perhaps some of their patented work involves trying to 'interpolate' between frames. If that's the case, then that really is a quite difficult problem, and I'd be tempted to say they deserve any patent they get in that area. However, I seriously doubt that's the case.
Basically, this seems like another case of Exxtreme Marketing[tm]. ATI seems to have taken a page out of 3Dfx's book. (I'm sorry, but the Tbuffer is nothing revolutionary - it's a crippled accumulation buffer being marketed as revolutionary, when the TNT and Rage and Savage and the like have had a full accumulation buffer for a couple years now.)
---
"'Is not a quine' is not a quine" is a quine.
Re:Not entirely untrue, but quite misleading (Score:2)
---
"'Is not a quine' is not a quine" is a quine.
Re:SLI/PGC renewed - nothing new here (Score:1)
Re:sounds pretty good, (Score:1)
$299
You know, reading the article really helps.
Well this is hardly that original (Score:2)
Okay, so ATI's MAXX is rendering alternate frames on alternating chips. Sounds precisely like a 3dfx SLI setup. And if you look outside of the 3D world, there have already been parallel processing display adapters. I once owned a Radius Firestorm 192 which featured three S3 864 chips. Each one was responsible for a color: one chip for red, one for blue, and one for green. The card was really, really fast in its day, and produced the sharpest picture I have ever seen, including that of my current Millennium II.
Anyway, just thought I'd remind ATI of the past.
-jwb
Re:Yeah, why don't you get a Glaze? 200FPS Q-III (Score:1)
The fact remains, though, that the speculated release date is still Q2'00. So you've got 9 months to wait -- buy a Maxx now, and wait and see about the Glaze3D.
Yeah, why don't you get a Glaze? 200FPS Q-III (Score:1)
Re:SLI/PGC renewed - nothing new here (Score:1)
3D hardware needs to be seen as an enabling technology. The faster the hardware is, the more Carmack can do with it :)
But the 3D vendors can't really advertise with, "Hey, if you buy our board, it will be fast enough for some way cool games that haven't been invented yet." So instead, they have to post framerates using current titles, something gamers can relate to more easily.
sounds pretty good, (Score:1)
I recently installed the Unreal Tournament demo and found out that my good old Hercules Dynamite TNT is too slow for that game... same with Q3ATest; I have to use less detail and fewer colors to actually be able to play the game...
I just hope that the support for 3d-cards will improve for linux...
Re:sounds pretty good, (Score:1)
I'll try that next time...
Ecclesiastes 1, ATI 0 (Score:3)
- nothing new under the sun.
- ATI reinvents pipelining, ignores drawbacks.
First, a rant about the press release and its quoted 5% drop between 16-bit frame rates and 32-bit frame rates for this new ATI card: any manufacturer could do this by artificially limiting their 16-bit fill rate. This number says nothing unless combined with absolute fill rates at either bit depth.
Now to the deja vu: ATI has effectively shoehorned two cards' worth of acceleration into one graphics subsystem. This has been done twice before in the consumer space: first by 3dfx, with Scan Line Interleave, which allowed two cards to work in parallel on any polygon that spans more than one line on screen; more recently by Metabyte, with their Parallel Graphics Configuration, which partitioned the screen vertically into two independent regions and dedicated a card to handling each of the regions.
Both 3dfx and Metabyte use spatial partitioning to get parallelism. 3dfx could do it finer grained because they had control over the chipset design and could include a mechanism for tight synchronisation of two cards. Metabyte went coarse-grained because they had to do the picture recombination from the two cards in external hardware, and it was hard enough to make this work at all without making it work for alternating scanlines. So why didn't Metabyte save themselves a bunch of hassle and use the "temporal" partitioning (or, in other words, pipelining) approach that ATI is now using? Hmmmm...
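For comparison, the three partitioning schemes can be written as trivial mapping functions (a sketch of my own, two chips assumed -- not anyone's actual hardware logic):

```python
# Which chip handles a given piece of work under each scheme.

def sli_chip(scanline):
    # 3dfx SLI: even scanlines go to chip 0, odd scanlines to chip 1.
    return scanline % 2

def pgc_chip(scanline, screen_height):
    # Metabyte PGC: top region to chip 0, bottom region to chip 1.
    return 0 if scanline < screen_height // 2 else 1

def afr_chip(frame_number):
    # ATI's "full frame" approach: whole frames alternate between chips.
    return frame_number % 2

print(sli_chip(100), pgc_chip(100, 768), afr_chip(3))  # -> 0 0 1
```

The first two split *within* a frame (spatial); only the last splits *across* frames (temporal), which is why only it changes the latency story below.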
One issue here is latency. (For this discussion, let's assume that the video refresh rate is arbitrarily high, so that as soon as a frame becomes ready, we get to see it.) When a 3d card completes the rendering of a frame and swaps the front and back draw buffers, you are seeing the state of the world as it was at the time the game engine _began_ to draw the frame. If the current frame render time is x milliseconds, that's x milliseconds latency between the game state and your eyeballs.
With a spatial partitioning like SLI, both chipsets work in parallel to render a particular frame, and each frame is completed before rendering of the next frame begins; the game state to eyeball latency is simply 1/(frame rate).
With the ATI approach however, each of the two Rage chips plugs away at its frame independent of the other (which is working on a frame either one ahead or one behind.) Frame _render_ time is therefore twice the frame _display_ time, and the latency is twice as high as SLI for a given overall frame rate: 2/(frame rate). For a 60Hz frame rate, SLI gives 16.6 ms game state to eyeball latency, while the ATI approach gives 33.3 ms.
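To put numbers on that (using the idealized model above, where render time is the only source of latency):

```python
# Game-state-to-eyeball latency under spatial vs. temporal partitioning.
def latency_ms(frame_rate_hz, frames_in_flight):
    # Spatial split (SLI/PGC): 1 frame in flight.
    # Temporal split (ATI's approach): 2 frames in flight.
    return 1000.0 * frames_in_flight / frame_rate_hz

print(round(latency_ms(60, 1), 1))  # SLI at 60 fps: 16.7 ms
print(round(latency_ms(60, 2), 1))  # ATI at 60 fps: 33.3 ms
```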
I am not a cognitive psychologist, so I don't know if an extra 16.6ms or so is going to make a noticeable difference to most people, but I wouldn't be surprised if experienced first-person-shooter players noticed a difference. Certainly for modem play the extra latency is probably smaller than the variation in ping time to the server, so I wouldn't expect it to make much difference, but on a LAN it might be noticeable. I have turned off sync to vertical refresh and forgone triple buffering in LAN Q3Test games because the variation in latency between frames was driving me batty, so I think this could actually be an issue. Of course, the higher the frame rate, the smaller the extra latency, and the less this will matter.
There is also the other matter that for this to work, there has to be at all times a large amount of rendering waiting to go so that each chipset stays busy. The drivers will presumably have to do a *lot* of buffering and then spoon feed each chip as its command FIFO is exhausted. I really wonder whether this will fit in well with what currently written applications are expecting from 3d acceleration hardware; if an application wants to have any synchronous interaction at all with the hardware, such as reading back values from a stencil buffer each frame after drawing is complete, it will totally screw this kind of pipelining. Somehow I'm just not convinced.
Re:Yeah, why don't you get a Glaze? 200FPS Q-III (Score:1)
-----
Disappointed (Score:1)
. . . ATi has apparently thrown in the towel on technology and is just gonna put two of their second-rate chips on the same card?
Sheesh.
Stuff like this isn't new. Way back when, Radius put out a card for PCs that used three S3 864s. True, they weren't handling things exactly like this - they were assigning one to each primary color - but hell, even Intel had the good taste to put two pipelines on the same die when they came out with the Pentium Pro, instead of just telling people they ought to build SMP systems instead.
- Eric
Re:I don't think so... (Score:1)
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Re:I don't think so... (Score:1)
Win95 OSR/2 with DirectX6 and latest ATI drivers
SuSE Linux 5.3 with XFree 3.3
AMD K6-233 and AMDK6/2-300
Soyo 5EDM M/B (VIA VP3 chipset)
DFI P5XV3 M/B (VIA VP3 chipset)
AOpen AX59Pro M/B (VIA MVP3 chipset, revision "CD")
I tried *very* hard to find a home for this card but all combinations of the above software and hardware resulted in frequent and unacceptable glitches every time. I have had no problems with either the STB Velocity 128 (Riva 128) or STB Velocity 4400 (TNT) that I also have.
The fine print in the card documentation on the CDROM is most telling: only Intel CPUs and chipsets are supported. Unfortunately you don't get to see the fine print on the CD before you buy.
What else can I conclude under the circumstances, but that this card - and ATI's support policy - leave a lot to be desired?
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Cool but not yet... (Score:2)
Re:3D support in Linux? (Score:1)
Re:Cool but not yet... (Score:1)
That is exactly what the original Nvidia NV1 did. It was a radical new 3D accelerator that included wavetable sound and a digital gameport. Like the PowerVR, it didn't render using polygons; it used something with curved surfaces, IIRC. Just like the PowerVR with its flying planes, non-polygon rendering chips -- no matter how fast -- lose to the established method.
Do you think that AGP 4x can handle the bandwidth of the NV10 (GeForce) and 32 voice A3D audio?
Re:Two procs? 64 megs?!? (Score:1)
well-designed air circulation. ATI's run hot, and two ATI's may well cook your computer. The two fans SharkyExtreme is showing look grossly insufficient.
Hmmmmm (Score:1)
--
grappler
ummmm... (Score:1)
500 Megatexels... big deal (Score:1)
Pretty sad, really. GeForce 256 has been criticised for its "low fillrate", but it does 480 Megatexels. Meanwhile, the Savage 2000 will do "700+" Megatexels.
Anyone who thinks the MAXX is cutting-edge is fooling themselves. . .
Re:stuff (Score:1)
I.e. 1024x768 = 786,432 x 3 (bytes RGB) = 2,359,296 x 4 framebuffers = 9,437,184 processes. It is, in pure theory, a great way to do OCR and pattern recognition (bottom-up, vs. the top-down fuzzy algos which are most common right now).
Just thought I'd throw numbers around. :)
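The arithmetic above does check out (assuming 24-bit RGB and four framebuffers, as stated):

```python
# One "processor" per byte across four 1024x768 24-bit framebuffers.
width, height = 1024, 768
pixels = width * height          # 786,432 pixels
bytes_rgb = pixels * 3           # 2,359,296 bytes per framebuffer
total = bytes_rgb * 4            # 9,437,184 across four framebuffers
print(pixels, bytes_rgb, total)  # -> 786432 2359296 9437184
```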
ATI is Linux Friendly (Score:1)
All the companies either gave me the run around (email soandso - he's the linux guy) or didn't respond at all. ATI sent me a detailed response to my email and actually answered the questions I asked. The sales rep, whose name I don't have handy, was even nice enough to put me on his Linux users list and now I get an occasional update on Linux progress with ATI cards.
I have been impressed with the company's stand and openness regarding their products under non-MS OSes.
Re:3D support in Linux? (Score:1)
Beowulf :-) (Score:1)
Heh, all we need now is a quick hack to put all that processing power and memory to use when you're not a-quaking. Lessee now, a quick RC5 algorithm made out of graphics transforms, how hard can that be? Slip a couple of these cards in, and you're up there with the best of 'em!
Wonderful, but will it be anything like past cards (Score:1)
Re:Well this is hardly that original (Score:1)
Re:Thought i read somewhere about a quad 3dfx boar (Score:1)
Re:sounds pretty good, (Score:1)
I paid $99 for the card and overclock it at 175 MHz without ever a problem.
Screw paying $300 for a card.
.....Support For Linux..... (Score:1)
More info at Tom's Hardware (Score:3)
Re:3D support in Linux? (Score:2)
If you mean DRI and GL support, that's coming in XFree86 4.0, which doesn't have an announced release date. The next snapshot, 3.9.17, should be available mid-month according to the XFree86 page [xfree86.org].
Re:ATI Boards are crap (Score:1)
Re:Bah... not new... (Score:1)
two is too many. and three is right out.
Re:SLI/PGC renewed - nothing new here (Score:1)
Really, what's the difference between 60FPS and 90FPS? Who cares? Where are the *NEW* features? Where's the EBM? Where's the hardware T&L? Where's the "Dualhead" feature?
I think gamers are tired of cards that promise 50% more FPS ("The game is so fast you can't even see what's going on!!"), and are looking for new features. That's why I got a G400. Dualhead is amazing for work, it's fast with games, and the EBM (although I've yet to actually play a game that supports it) is comforting just to know is there. SHOW US SOMETHING NEW, KIDDOS!!
--
poop?
'patent-pending' - marketing speak .... (Score:1)
Anyway, you get my point: the 'patent pending' stuff is purely marketing speak to try and get you to buy the thing.
PS: I WANT ONE!
Re:How Efficient (Score:1)
Re:SLI/PGC renewed (Score:2)
Better Article (Score:3)
I still have a 2MB Cirrus Logic 5446 :P (Score:1)
Re:ummmm... (Score:1)
Re:ATI Boards are crap (Score:1)
A related ATI question... (Score:3)
The thing that was neat was that this page stayed that way for as long as I could remember. The owner took great delight in posting letters from lawyers demanding he turn over the domain name. Companies like ATI Technologies (the graphics card maker most people are trying to find when they type in www.ati.com), American Tractor Incorporated, Arand Typeset and Ink...and about a dozen others.
ATI ended up getting www.atitech.com which they still own. But now I just found that they also have acquired www.ati.com!
How did this happen? I don't remember reading about it on
- JoeCurious
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
... but will the drivers not suck? (Score:1)
ATI is also legendary for their abysmal driver support. First-generation drivers barely do anything more than basic GDI functions (i.e. enough to run the desktop and IE) but often crash and burn when you throw complex things at them like, say, an actual program.
Further revisions of the drivers provide minimal levels of functionality, but there are still obscure problems, some of them showstoppers. And we all know how useful Windows is from a DOS prompt.
On top of providing shoddy driver support for Windows (the cards' native environment), they also keep their chipsets proprietary, not even allowing NDA access to the design. This means that ATI chipsets are entirely dependent on ATI to supply drivers, unless you don't mind using a reverse-engineered driver that may or may not provide 100% functionality.
ATI is outclassed on all fronts by the likes of S3, Nvidia, and even *spit* 3dfx. The only reason they have survived is that they are firmly entrenched in the OEM market. I will not use ATI products by choice, and I will not recommend them to my friends. Spend your money on a TNT2.
Nathan
Ati's commitment to Linux / X ? (Score:1)
Re:'patent-pending' - marketing speak .... (Score:1)
As has already been mentioned in other posts, I don't think anyone will try to copy this solution anyway because it doesn't tackle the problem of latency, and as such it's inherently unscalable - a 4-way system at 40fps would have 100ms of latency, which would feel quite odd.
Luckily this one can be worked around unlike 3dfx's patent on a*x + b*y (in parallel).
Re:Multi processors == Multi monitors? (Score:1)