
ATI Introduces a Parallel Processing Video Card

bilestoad2 writes "ATI has announced the introduction of a new video card, the Rage Fury Maxx. It uses two RAGE 128 Pro processors and 64 MB of RAM. Follow this link for the complete story. I don't know about you, but I've got to have one of these..."
  • Don't remember where, or I might just be dreaming, but I thought I read somewhere about a quad-processor 3dfx video card. Even if that's vapor, ATI doesn't make that great of 3D video chipsets, do they?
  • ...another 3D chipset ATI will tell us non-MS/Mac OS users to screw off regarding.

    Sorry, ATI, but I don't subscribe to your "proprietary technology" anymore. I made the mistake of buying one of your cards before, when I didn't know better; I'll not make the same mistake again. There are a lot of cards better than yours from companies that actively support free development - like Matrox and nVidia - and I'll not be shackled with your wholly proprietary support - or lack of any support - again.

  • by smash ( 1351 )
    To the person who was talking about slow framerates in UT... it's not optimized for TNT support yet...

    Anyway, on the parallel processing thing... aren't the biological processors they're playing with relatively slow, but massively parallel? I can just see it now... one processor for each PIXEL on your screen... perfect application for SMP if you ask me :)


    smash
  • Yesterday's silicon today.
    Today's drivers tomorrow.

    "The number of suckers born each minute doubles every 18 months."
  • It reminds me of an old Radius video card (for PCs!) that had three (read: one more than two) 64-bit video processors. That was a few years ago, and seemingly no one cared much. It has to do with something utterly evil most people call "hype".
  • In short, this is Voodoo SLI or Wicked3D's PGC with a twist. Interesting, but with ATI's traditional lackluster performance, it remains to be seen whether this will be the thing speed freaks really want. Personally, I expect more from T&L acceleration...
  • Is it possible to have ATI add multi-monitor support by creating a special VGA connector card? Or is this architecturally not possible? Also, is this card backwards compatible with all the ATI drivers for XFree86, or do new drivers have to be written? Do new applications have to be partially rewritten to take advantage of the multiple processors (like SMP on Linux, where the OS had to be written with SMP support)?
  • Well, since at least this doesn't seem to have much to do with the Internet, the Patent Office's "It's Net So It's New" psychosis shouldn't apply.

    Of course, they did give 3DFX a patent on Multitexturing, which, while more novel than ATI's Full Frame nonsense, is still essentially pretty obvious--and not just after the fact.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • by Effugas ( 2378 ) on Sunday October 10, 1999 @05:18AM (#1625842) Homepage
    Splitting full frame rendering among multiple processors...this is patent pending?

    Are you kidding?

    It's at least arguably unique to split even and odd lines among two cards (like Voodoo 2 SLI), or to split the image into horizontal strips (Metabyte's PGC), or to evenly split the texel reprocessing load among multiple texel processors (Voodoo 2 core design), but to attempt to patent the process of merely having one complete frame go to one processor while having the next complete frame go to the next processor?

    The general reason one doesn't want to use a full frame architecture is simple: per-frame times don't budge. Either you have to build higher latency into your rendering chain, since the chip cluster has to know the next x frames you intend to render, or you get *no* speed boost.

    Don't even get me started on out of order frame rendering on a realtime rendering solution.

    Each of the previously mentioned solutions (SLI/PGC/Texel x 2), incidentally, lowers per-frame latency.

    Granted, there's probably some degree of multi-frame latency built into most drivers, particularly for games. But the concept of patenting the most basic parallelization solution strikes me as absolutely hilarious. It's very likely most 3D rendered movies use the technique ATI is trying to patent. "I'm done finishing this frame, send me a new one."

    It's very likely most WORKPLACES work the same way too. "I'm done with this job, assign me a new one."

    That being said, I'm looking forward to trying out ATI's new cards. Ever since I noticed their 128's were supported by Metabyte's [metabyte.com] excellent Eyescream system, I've been much more interested in them.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com

  • I don't have this kind of juice in my PC!

    ----------------

    "Great spirits have always encountered violent opposition from mediocre minds." - Albert Einstein
  • Well, maybe if you didn't have an ATI, your G3 wouldn't be black and white...

    (Just a joke, I knew what you meant)
  • The Open GLX project is currently beginning work on ATI RAGE drivers for the Pro, and soon the 128. This is courtesy of ATI providing technical data to us about a month ago, in the same manner Matrox gave it to us. We're working on a driver for the Pro (since everybody and his dog's got one! :-) and we'll follow up with the 128 once the Pro driver's relatively stable.
  • And I've had them for about a month and a half now. ATI just didn't make a big deal out of the release of info. We're beginning the work on the GLX support as I write this.
  • Writing drivers isn't easy- it takes time.
  • We've had them for about a month and a half now. The members working on it were delayed (myself included - had my vacation in that timeframe; ah, Scotland!) and we're getting seriously into it at this point. ATI's rep's been fairly helpful.
  • So far, to support Linux - either by providing open or closed drivers, or by releasing hardware specs.
  • I rag on ATI as much as anyone for being habitually behind the times (and I have used every major video card on the market now except GeForce, which I'm planning on getting fairly soon).

    You should know that 3dfx, the recognized performance leader for most of the last 2 years (this of course may be changing), has been using the exact same old technology for quite some time. Voodoo2 is just a somewhat updated Voodoo1 chip (they added hardware support for the lighting calculation along the edges of the triangle, iirc). Voodoo3 is just an overclocked SLI Voodoo2 (Voodoo3 2000 has exactly the same performance numbers as a V2 SLI rig, as we measured it). Voodoo4 will have some new stuff (though no one knows exactly what yet), but it is definitely still a direct descendant of SST1, the core Voodoo technology.

    Similarly, TNT2 is just a massively overclocked TNT (GeForce does appear to be quite different).

    I will give ATI the benefit of the doubt till I see the performance numbers.

    Hilariously, the Maxx has 2x as much memory as the high-end Rage 128, for a whopping 64 megs, which is totally absurd. That's as much memory as my entire system! (And of course I can't afford more RAM now :( ... )

    You know the weird thing? I just read about Maxx on ATI's webpage yesterday. When I woke up (4 minutes ago) I realized I had a dream about it. Then I came here, and there was a story about it! Man, I must be sad....

    Time to go email ATI...maybe I'll ask them for nice open Linux drivers.
  • Well, I don't know about RC5, but it's possible to implement Conway's Game of Life using the stencil buffer on newer 3D cards (i.e. anything not put out by 3Dfx). There's a wonderful description of it in the back of the OpenGL ARB Redbook. RC5 is probably a bit too complicated to do properly with graphics operations, though...
    ---
    "'Is not a quine' is not a quine" is a quine.
  • They claim that this card is the first parallel-processing 64MB card for gamers. Well, that's somewhat true. There have been plenty of parallel-processing cards in the past (the Voodoo2 is basically two Voodoo1s on a card, and as many others have pointed out, you can SLI multiple Voodoo2s together, and others have pointed out Quantum3D's [quantum3d.com] high-end massively-parallel voodoo-based arcade rendering boards), and plenty of 64MB cards (SGIs have been known to have well over 256MB of texture RAM alone), but never has there been, exactly word-for-word, a 64MB parallel-processing gamer-oriented video card.

    I don't know what exactly their patent is trying to cover. It looks like it's trying to cover distributed, rather than parallel, rendering; that is, in triple-buffering, have one chip handle the first backbuffer and the other chip handle the second backbuffer. The law of diminishing returns would definitely apply right away. Right now one of the big bottlenecks in 3D cards is the speed at which the bus can send rendering commands to it. Also, the time it takes to send a rendering command is often longer than the time it takes to execute it on the card; unless each chip is storing a complete displaylist and then post-rendering it (and there's not really much point to that, either), the overlap between the chips' rendering times will be minimal, at best. At the absolute best you could get a doubling in framerate, but the latency would still be just as high, and latency is the real killer in 3D games, not framerate (it's just that framerate is easier to measure and easier to explain).

    Perhaps some of their patented work involves trying to 'interpolate' between frames. If that's the case, then that really is a quite difficult problem, and I'd be tempted to say they deserve any patent they get in that area. However, I seriously doubt that's the case.

    Basically, this seems like another case of Exxtreme Marketing[tm]. ATI seems to have taken a page out of 3Dfx's book. (I'm sorry, but the Tbuffer is nothing revolutionary - it's a crippled accumulation buffer being marketed as revolutionary, when the TNT and Rage and Savage and the like have had a full accumulation buffer for a couple years now.)
    ---
    "'Is not a quine' is not a quine" is a quine.

  • Er, wow, shoulda tried waking up first before rambling. The bus isn't a bottleneck *yet*, but hardware T&L will make it one. The CPU is still a major bottleneck, though, and still can't do vertex processing fast enough to send twice the vertex data to the card. The whole point of hardware T&L is that the CPU, not the graphics card, is the limit on framerate. Fillrate is no longer an important consideration.
    ---
    "'Is not a quine' is not a quine" is a quine.
  • But that's what NVIDIA is doing. You can have enough 'room' for 40,000 fps in Q3, but a game with lots of polygons will still be as slow as a snail. Not so with the GeForce.
  • > but how much will it cost?

    $299

    You know, reading the article really helps.
  • I guess ATI's hype machine is counting on the short memory of the buying public, but they can make no such assumptions about the collective memory of Slashdot!

    Okay, so ATI's MAXX is rendering alternate frames on different chips. Sounds precisely like a 3dfx SLI setup. And if you look outside of the 3D world, there have already been parallel processing display adapters. I once owned a Radius Firestorm 192 which featured three S3 864 chips. Each one was responsible for a color: one chip for red, one for blue, and one for green. The card was really, really fast in its day, and produced the sharpest picture I have ever seen, including that of my current Millennium II.

    Anyway, just thought I'd remind ATI of the past.

    -jwb

  • Actually, I have some *cough* friends *cough* at Bitboys Oy who've told me alpha silicon engineering samples have been available for about a month and a half...

    The fact remains, though, that the speculated release date is still Q2'00. So you've got 9 months to wait -- buy a Maxx now, and wait and see the Glaze 3D.
  • http://www.bitboys.fi/ 2000x1600x200FPS with all options on in Quake3. Due February, eatcha heart out! (-:
  • 3D hardware needs to be seen as an enabling technology. The faster the hardware is, the more Carmack can do with it :)

    But the 3D vendors can't really advertise with, "Hey, if you buy our board, it will be fast enough for some way cool games that haven't been invented yet". So instead, they have to post framerates using current titles, something that gamers can relate to more easily.

  • but how much will it cost?

    I recently installed the Unreal Tournament demo and had to find out that my good old Hercules Dynamite TNT is too slow for that game... same with Q3ATest; I have to use less detail and fewer colors to actually be able to play the game...

    I just hope that the support for 3d-cards will improve for linux... :-)

  • >You know, reading the article really helps.

    I'll try that next time... :-))

  • by Snorbert Xangox ( 10583 ) on Sunday October 10, 1999 @07:46AM (#1625862)
    summary:
    - nothing new under the sun.
    - ATI reinvents pipelining, ignores drawbacks.

    Firstly, a rant about the press release and its quoted 5% drop between 16-bit frame rates and 32-bit frame rates for this new ATI card: any manufacturer could do this by artificially limiting their 16-bit fill rate. This number says nothing unless combined with absolute fill rates at each bit depth.

    Now to the deja vu: ATI has effectively shoehorned two cards' worth of acceleration into one graphics subsystem. This has been done twice before in the consumer space: first by 3dfx, with Scan Line Interleave, which allowed two cards to work in parallel on any polygon that spans more than one line on screen; more recently by Metabyte, with their Parallel Graphics Configuration, which partitioned the screen vertically into two independent regions and dedicated a card to handling each of the regions.

    Both 3dfx and Metabyte use spatial partitioning to get parallelism. 3dfx could do it finer grained because they had control over the chipset design and could include a mechanism for tight synchronisation of two cards. Metabyte went coarse-grained because they had to do the picture recombination from the two cards in external hardware, and it was hard enough to make this work at all without making it work for alternating scanlines. So why didn't Metabyte save themselves a bunch of hassle and use the "temporal" partitioning (or, in other words, pipelining) approach that ATI is now using? Hmmmm...

    One issue here is latency. (For this discussion, let's assume that the video refresh rate is arbitrarily high, so that as soon as a frame becomes ready, we get to see it.) When a 3d card completes the rendering of a frame and swaps the front and back draw buffers, you are seeing the state of the world as it was at the time the game engine _began_ to draw the frame. If the current frame render time is x milliseconds, that's x milliseconds latency between the game state and your eyeballs.

    With a spatial partitioning like SLI, both chipsets work in parallel to render a particular frame, and each frame is completed before rendering of the next frame begins; the game state to eyeball latency is simply 1/(frame rate).

    With the ATI approach, however, each of the two Rage chips plugs away at its frame independent of the other (which is working on a frame either one ahead or one behind). Frame _render_ time is therefore twice the frame _display_ time, and the latency is twice as high as SLI for a given overall frame rate: 2/(frame rate). For a 60Hz frame rate, SLI gives 16.6 ms game state to eyeball latency, while the ATI approach gives 33.3 ms.
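
    [Ed: to make that arithmetic concrete, here is a minimal Python sketch of the latency model above - a toy model assuming constant per-frame render time and an arbitrarily high refresh rate, not a measurement of any real hardware.]

        # Game-state-to-eyeball latency, in milliseconds, for a pipeline of
        # N chips each rendering a whole frame. N=1 also models an SLI-style
        # spatial split, where all chips cooperate on a single frame.
        def latency_ms(frame_rate_hz, chips_in_pipeline):
            return 1000.0 * chips_in_pipeline / frame_rate_hz

        print(latency_ms(60, 1))  # SLI-style split at 60 fps    -> ~16.7 ms
        print(latency_ms(60, 2))  # 2-chip alternate frames      -> ~33.3 ms
        print(latency_ms(40, 4))  # hypothetical 4-way at 40 fps -> 100.0 ms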

    I am not a cognitive psychologist, so I don't know if an extra 16.6 ms or so is going to make a noticeable difference to most people, but I wouldn't be surprised if experienced first-person-shooter players noticed a difference. Certainly for modem play the extra latency is probably smaller than the variation in ping time to the server, so I wouldn't expect it to make much difference, but on a LAN it might be noticeable. I have turned off sync to vertical refresh and forgone triple buffering in LAN Q3Test games because the variation in latency between frames was driving me batty, so I think this could actually be an issue. Of course, the higher the frame rate, the smaller the extra latency, and the less this will matter.

    There is also the other matter that for this to work, there has to be at all times a large amount of rendering waiting to go so that each chipset stays busy. The drivers will presumably have to do a *lot* of buffering and then spoon feed each chip as its command FIFO is exhausted. I really wonder whether this will fit in well with what currently written applications are expecting from 3d acceleration hardware; if an application wants to have any synchronous interaction at all with the hardware, such as reading back values from a stencil buffer each frame after drawing is complete, it will totally screw this kind of pipelining. Somehow I'm just not convinced.
  • Glaze 3D has been a vaporware chipset for way too long for me to ever believe it will come out. My friend tried to stop me from buying the Voodoo2 the day it came out because this super powerful Glaze 3D card was coming out soon that was slightly more powerful than Voodoo2 SLI. And now, a few years later, nothing has been seen from the company making Glaze 3D. I'll buy what meets my needs when I need to upgrade, and not hold out for a rumored chip.

    -----
  • While everybody else in the industry is working overtime to discover new ways of doing things in order to make their video processors as fast as possible . . . .

    . . . ATi has apparently thrown in the towel on technology and is just gonna put two of their second-rate chips on the same card?

    Sheesh.

    Stuff like this isn't new. Way back when, Radius put out a card for PCs that used three S3 864s. True, they weren't handling things exactly like this - they were assigning one to each primary color - but hell, even Intel had the good taste to put two pipelines on the same die when they came out with the Pentium Pro, instead of just telling people they ought to build SMP systems instead.

    - Eric
  • Like where? It's the video that's the problem, everything else was OK. And the fine print in the documentation for the card clearly states that the card is only intended for systems with an Intel CPU and an Intel chipset. So your success is clearly fortuitous.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • 4MB Xpert@Play AGP
    Win95 OSR/2 with DirectX6 and latest ATI drivers
    SuSE Linux 5.3 with XFree 3.3
    AMD K6-233 and AMDK6/2-300
    Soyo 5EDM M/B (VIA VP3 chipset)
    DFI P5XV3 M/B (VIA VP3 chipset)
    AOpen AX59Pro M/B (VIA MVP3 chipset, revision "CD")

    I tried *very* hard to find a home for this card but all combinations of the above software and hardware resulted in frequent and unacceptable glitches every time. I have had no problems with either the STB Velocity 128 (Riva 128) or STB Velocity 4400 (TNT) that I also have.

    The fine print in the card documentation on the CDROM is most telling: only Intel CPUs and chipsets are supported. Unfortunately you don't get to see the fine print on the CD before you buy.

    What else can I conclude under the circumstances, but that this card - and ATI's support policy - leave a lot to be desired?

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • The Maxx technology on the card is a good idea, but it's not something I'm going to think about buying anytime soon, just like the GeForce. The chips on this card basically suck; they run hot and don't have anything new. What they do have is a new chipset with a future: it was designed so that when ATI comes out with the next generation of their big and bad chips, they can stick them in a Maxx configuration and double the framerate, and maybe even the performance.

    The GeForce is the same, in my opinion; the idea is sound and will be cooler in a few months, but right now it's a waste of money. I think chipmakers are all going to move to a GeForce styling where the video processor is a GPU (or whatever) that takes some of the strain off the system's CPU. My P3 500 with a Viper V770 plays any game I put on the machine damn well, but in two or three years it most likely won't. If I want the newest games (assuming video card technology stayed as it is now, where the graphics card just handles the actual rendering and output), I would have to update my entire system, which would mean I'd build a new one. Then along comes a technology like the GeForce, where you can just get a new graphics board that deflates the load on the CPU by handling more parts of the game. I no longer have to completely update my system; I can keep it competitive by replacing the graphics board with a faster and more powerful one. Just look at the games that ran great on a 90MHz Pentium processor a few years ago. If there had been a way for more and more of the game to run on the graphics board, then your 90MHz Pentium might be able to play Quake 2 pretty decently, since it would only have to run the game code and not the graphics setup.

    Maybe in another year or two we'll see an entire gaming subsystem you could plug into your AGP slot, like having a Dreamcast in your computer. The subsystem could run every aspect of the games and use the rest of the parts of the PC as input. The CPU wouldn't even have to be involved, because DMA would let the devices communicate directly with the subsystem without bothering the tired CPU.
  • ATI have actually released specs for their chips recently, allowing the GLX team to begin work on some of them. Hopefully this latest card will also be documented so the glx-developers can hack up a driver. So far we have hardware acceleration on the 3dfx cards, Matrox G200/G400, the NVIDIA cards, and some preliminary support on the ATIs. Hopefully this trend continues and 3D support in Linux continues to grow. Who knows, maybe by the time the NVIDIA and the ATI cards hit the shelves, manufacturers will be shipping drivers on their CDs.
  • > Maybe in another year or two we'll see an entire gaming subsystem you could plug into your AGP slot, like having a Dreamcast in your computer. The subsystem could run every aspect of the games and use the rest of the parts of the PC as input. The CPU wouldn't even have to be involved, because DMA would let the devices communicate directly with the subsystem without bothering the tired CPU.

    That is exactly what the original NVIDIA NV1 did. It was a radical new 3D accelerator that included wavetable sound and a digital gameport. Like the PowerVR, it didn't render using polygons but used something with curved surfaces, iirc. Just like the PowerVR with its flying planes, non-polygon rendering chips--no matter how fast--lose to the established method.

    Do you think that AGP 4x can handle the bandwidth of the NV10 (GeForce) and 32-voice A3D audio?
  • My guess is it'll require a case with moderately well-designed air circulation. ATI's run hot, and two ATI's may well cook your computer. The two fans SharkyExtreme is showing look grossly insufficient.
  • So now Robin Miller "has got to have one of these"?? Wow... hunger for geeky gadgets is infectious in these parts...

    --
    grappler
  • Why does this article sound like an advertisement rather than a review?
  • The RAGE Fury MAXX achieves a maximum "fill rate", or the number of textured pixels it can render per second, of 500 megapixels/second.

    Pretty sad, really. GeForce 256 has been criticised for its "low fillrate", but it does 480 Megatexels. Meanwhile, the Savage 2000 will do "700+" Megatexels.

    Anyone who thinks the MAXX is cutting-edge is fooling themselves. . .
  • You jest, but in reality, moving away from linear processes into biological models would likely lead to a process for every RGB byte of each pixel of every framebuffer.

    I.e., 1024x768 = 786,432 pixels x 3 (bytes RGB) = 2,359,296 x 4 framebuffers = 9,437,184 processes. It is, in pure theory, a great way to do OCR and pattern recognition (bottom-up, vs. the top-down fuzzy algos which are most common right now).

    Just thought I'd throw numbers around. :)

  • Recently, I sent an e-mail out to several major video card vendors: ATI, DiamondMM, Creative Labs, 3dfx, Matrox, and one or two more (not Number Nine, because their web page shows support).

    All the other companies either gave me the runaround ("email so-and-so - he's the Linux guy") or didn't respond at all. ATI sent me a detailed response to my email and actually answered the questions I asked. The sales rep, whose name I don't have handy, was even nice enough to put me on his Linux users list, and now I get an occasional update on Linux progress with ATI cards.

    I have been impressed with the company's stand and openness regarding their products under non-MS OSes.
  • try linux3d.org
  • I just had to say it.

    Heh, all we need now is a quick hack to put all that processing power and memory to use when you're not a-quaking. Lessee now, a quick RC5 algorithm made out of graphics transforms, how hard can that be? Slip a couple of these cards in, and you're up there with the best of 'em!

  • This sounds great, but ATI's tech support is almost non-existent; it took them 2 weeks to even reply to one of my e-mails, and they didn't even have an automailer to say that they received the mail. It took months to get out the current crappy OpenGL driver for my ATI Rage Pro card, and the card isn't that great for Direct3D either (thankfully I didn't pay for the card). So what if it has two processors? Just think, I get crappy dithered images (ATI Rage Pros don't support 32-bit OpenGL) at double the speed. WOW, that's double the crappiness, what a deal!!! I'll never buy another computer that comes packaged with an ATI card in it again.
  • 3Dfx SLI has every other line rendered by a different card, hence the term "Scan Line Interleave". ATI's MAXX renders every other frame using a different chip.
  • Quantum3D made the Obsidian II, which had 2 Voodoo2s, and something they called "the brick", which contained 4 Obsidian IIs, for a total of 8 (!) Voodoo2s. I think 3Dfx used this to demonstrate the T-buffer.
  • My trusty Voodoo3 2000 runs UT perfectly with all details on max at 1024x768 on a P2-450.

    I paid $99 for the card and overclock it at 175MHz without ever a problem.

    Screw paying $300 for a card.
  • I'm wondering if they are going to support this thing for Linux, or at least release the stuff needed to make the correct driver for it. I have struggled for a long time to get my video working in Linux. Finally a distro, Mandrake, came with the right XFree86 3.3.5, which had support and bug fixes for onboard video. I know it sucks, but you deal with what you've got. I just hope that when I save up to move my machine into a different motherboard, I can buy a card like this one and have it supported on my fav flavor of Linux.
  • by Kalper ( 57281 ) on Sunday October 10, 1999 @04:50AM (#1625884)
    Tom's Hardware has a full preview [tomshardware.com], although he's not allowed to print the performance results. This isn't SLI or PGC -- ATI is actually having the chips draw full alternate frames, so the image quality will be high yet the speed will be doubled. It's even buffered, so if chip 0 is taking a long time drawing frame 0, chip 1 can keep drawing frames 1, 2, 3, ... until chip 0 is done. What I like best about this is that the MAXX architecture will allow them to drop their latest chips in as they are developed, so even if their chip architectures remain a little behind, they'll be able to keep competitive. I like ATI cards because of all of the MPEG and TV toys they build into them; the only other company that even comes close to offering those kinds of toys is Matrox, but they're just too damn expensive for the full-featured cards.
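
    [Ed: as a toy illustration of that buffering claim, here is a short Python simulation - purely hypothetical scheduling logic with invented per-frame render times, not ATI's actual driver behavior - showing how the idle chip keeps pulling queued frames while the other grinds on a slow one.]

        import heapq

        # Each heap entry is (time_chip_becomes_free_ms, chip_id); the
        # next queued frame always goes to whichever chip goes idle first.
        render_ms = [30, 10, 10, 10, 10, 10]   # frame 0 is unusually slow
        chips = [(0.0, 0), (0.0, 1)]
        heapq.heapify(chips)

        for frame, cost in enumerate(render_ms):
            free_at, chip = heapq.heappop(chips)   # earliest-idle chip
            done = free_at + cost
            print(f"frame {frame} on chip {chip}: done at {done:.0f} ms")
            heapq.heappush(chips, (done, chip))

    [Ed: running it shows chip 1 finishing frames 1, 2, and 3 while chip 0 is still busy with frame 0, which is the behavior described above.]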
  • An XFCom X server for Linux is available for the single Rage 128 from SuSE [www.suse.de]. I believe it will probably be mainlined into the next version of the XFree86 server; it seems quite stable. This should mean that most of the information for the dual processor version is already known, so hopefully a driver will follow quickly.

    If you mean DRI and GL support, that's coming in XFree86 v4.0, which doesn't have an announced release date. The next snapshot, 3.9.17, should be available mid-month according to the XFree86 page [xfree86.org].
  • I am sorry yours does not work, but my All-in-Wonder Pro works wonders on my Via chipset AMD motherboard! Just because you can't get it to work doesn't mean no one else can. (You might try to find some motherboard drivers depending on your mobo.)
  • that had three (read: one more than two)

    two is too many. and three is right out.

  • "I know, let's make it twice as fast, then everyone will have to buy it!"

    Really, what's the difference between 60FPS and 90FPS? Who cares? Where are the *NEW* features? Where's the EBM? Where's the hardware T&L? Where's the "Dualhead" feature?

    I think gamers are tired of cards that promise 50% more FPS ("The game is so fast you can't even see what's going on!!"), and are looking for new features. That's why I got a G400. Dualhead is amazing for work, it's fast with games, and the EBM (although I've yet to actually play a game that supports it) is comforting just knowing it's there. SHOW US SOMETHING NEW, KIDDOS!!

    --
    poop?
  • People have been claiming "patent pending" for generations - it sounds great to the unwashed masses... but until you read the patents themselves, you can't make assumptions about their contents. I have no idea what this one is about, but it could be any of: a novel way of hooking two chips together; something about how to handle cooling two hot chips on a little PCI card; a cool new way to handle interframe aliasing issues between separately rendered frames; managing a texture cache across multiple frames; etc., etc.

    Anyway, you get my point: the "patent pending" stuff is purely marketing speak to try and get you to buy the thing.

    PS: I WANT ONE!

  • Each processor renders every other frame. Both must hold the whole "world" in memory (32MB each), but you could reach double the framerate compared to one processor (in theory).
  • Personally, I don't find a problem with the performance of my ATI Fury 32. I bought it because it had MPEG motion compensation, TV out, and 32 megs of video RAM; I've kept it because it's fast enough for me, though I sort of hedged my bets by adding a Voodoo2 daughter card (passthrough) for those games where I really want it. The combo of the Fury for 2D and some 3D stuff (especially some of the better GL software) and the Voodoo when I need it for certain games makes a pretty formidable video subsystem. Now, if I could only get X to work with the Fury 32, I wouldn't be wasting a PIII/500 with 256 megs of RAM on wind*ws ;-)
  • by G-Man ( 79561 ) on Sunday October 10, 1999 @06:15AM (#1625892)
    SharkyExtreme [sharkyextreme.com] has a more lengthy writeup, including some initial performance comparisons from a prerelease version (chips clocked to 125MHz instead of 143MHz, beta drivers).
  • I wouldn't mind a 64MB card - but I don't think I'd use it.....or Quake3 and the likes would prolly hook me and I'd never get anything done ever again :P
  • because ATI wrote it?
  • Dude, what the hell were you thinking? Have you forgotten about a little company called Apple that uses a lot of ATI cards in their non-Intel PCs? My girlfriend has an AMD K6-III 450 with an ATI Xpert 128 card, and it *does work* on her FIC 2013 mobo with its VIA Apollo MVP3 chipset. Know your stuff before you post Flamebait.
  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Sunday October 10, 1999 @06:03AM (#1625897)
    Does anyone here remember when www.ati.com was run by some company calling itself "Artificial Turd Industries"? The home page featured a very large, very detailed image of rubber doggie doo.

    The thing that was neat was that this page stayed that way for as long as I could remember. The owner took great delight in posting letters from lawyers demanding he turn over the domain name. Companies like ATI Technologies (the graphics card maker most people are trying to find when they type in www.ati.com), American Tractor Incorporated, Arand Typeset and Ink...and about a dozen others.

    ATI ended up getting www.atitech.com which they still own. But now I just found that they also have acquired www.ati.com!

    How did this happen? I don't remember reading about it on /. and I'm sure the fall of such a rebel would have been noticed...? Does anyone know if the guy finally sold out, or if somehow the courts decided that ATI should get the site (even though there are many other companies that have the same initials trademarked!)?

    - JoeCurious

    -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  • Ah, ATI. The acronym conjures up fuzzy images of OEM video products, poor performance, and semi-nifty niche market features.

    ATI is also legendary for their abysmal driver support. First-generation drivers barely do anything more than basic GDI functions (i.e. enough to run the desktop and IE) but often crash and burn when you throw complex things at them like, say, an actual program.

    Further revisions of the drivers provide minimal levels of functionality, but there are still obscure problems, some of them showstoppers. And we all know how useful Windows is from a DOS prompt.

    On top of providing shoddy driver support for Windows (the cards' native environment), they also keep their chipsets proprietary, not even allowing NDA access to the design. This means that ATI chipsets are entirely dependent on ATI to supply drivers, unless you don't mind using a reverse-engineered driver that may or may not provide 100% functionality.

    ATI is outclassed on all fronts by the likes of S3, NVIDIA, and even *spit* 3dfx. The only reason they have survived is that they are firmly entrenched in the OEM market. I will not use ATI products by choice, and I will not recommend them to my friends. Spend your money on a TNT2.

    Nathan
  • I'm personally still betting on GeForce... that being said, can anyone comment on ATI's position as far as either releasing hardware spec or working on drivers for linux? I'd like to be informed in case some of my linux-using friends get all worked up about this card...
  • Problem is, companies this big tend to forget that it's just a patent portfolio thing and actually try to enforce their patents. Take the example of Creative Labs a while back, when they attempted to patent something to the effect of "PCI sound" and started legal action with... pretty much everyone, I think. Surprised they didn't try to sue Adaptec for allowing people to read audio off hard disks (or maybe they just didn't think of that one!).

    As has already been mentioned in other posts, I don't think anyone will try to copy this solution anyway because it doesn't tackle the problem of latency, and as such it's inherently unscalable - a 4-way system at 40fps would have 100ms of latency, which would feel quite odd.

    Luckily this one can be worked around unlike 3dfx's patent on a*x + b*y (in parallel).
  • Actually, ATI has done this previously with cards available for the Blue and White G3s. They had a single 66MHz PCI card with 2 Rage 128 processors, each with its own 16 MB of RAM, allowing you to have a multiple monitor setup with only one card.

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...