Alienware Discuss New Video Array Technology For Gamers

Gaming Nexus writes "Over at Gaming Nexus, we've posted an interview with Alienware about their new video array technology, which 'will provide gamers with an expected 50% increase in gaming performance by utilizing two video cards.' The interview covers the creation of the technology, the problems they had in developing it, as well as some more details on how it works." The short version is that it utilizes multiple cards to render one screen, similar to SLI, but with many more features added in as well. What Alienware has developed is a software layer that sits between the video drivers and the application, routing things to where they need to be.
  • Where do I get the other AGP slot from?
    • They aren't using AGP. If you'd read the article, you'd know that ;)
    • Re:AGP Slots (Score:3, Informative)

      by Rubbersoul ( 199583 )
      RTFA! [gamingnexus.com]

      GamingNexus: Was this something that you couldn't do with AGP or had you considered doing something with AGP?

      Brian Joyce: We actually had a working prototype with AGP. But as soon as it became clear that PCI Express was going to become the industry standard, we had to start re-working it for PCI Express.
    • Re:AGP Slots (Score:4, Informative)

      by u-238 ( 515248 ) on Monday June 07, 2004 @01:16PM (#9357684) Homepage
      Custom made motherboards with two PCI Express slots. Saw the actual board on TechTV a few weeks ago.

      Still not sure whether they've patented it or not - hopefully not, so we'll be able to buy these mobos from other vendors and build these rigs ourselves without paying Alienware an extra $1500 for unnecessary services.
      • Re:AGP Slots (Score:3, Informative)

        by Paladin128 ( 203968 )
        RTFA: Multiple patents pending on the technology. Likely not on the concept of having multiple PCI Express x16 slots, just on the software and compositing/sync hardware.
    • I don't know if anyone has actually developed a board like this but AGP8x supposedly supports dual slots. However, if no one has yet done it, probably no one will, because AGP and PCI are both going down the tubes in favor of PCI-Express, which scales in both cost and performance to levels both below PCI and above AGP. (The cost of a wide PCI-Express might well be below the price of AGP8x, actually.)
  • by Ieshan ( 409693 ) <ieshan@g[ ]l.com ['mai' in gap]> on Monday June 07, 2004 @12:48PM (#9357393) Homepage Journal
    Doom 3: $60
    Dr. Pepper and Potato Chips: $5
    Alienware Super Extreme Gaming System: $10,000

    Having the "Sorry, I'm broke" Excuse to Avoid Going out on Weekends and Playing Computer Games Instead: ...Priceless?

    There are some things students can't afford. For everyone else, there's Alienware.
    • Re:I... guess.... (Score:2, Insightful)

      by vmircea ( 730382 )
      I'm going to have to agree with this... If you want the latest technology and awesome things you should go with Alienware, but you'd better be prepared to pay for it, as it doesn't come cheap. But with some of today's really good systems, you can get really good FPS on almost any game for significantly less money than an Alienware. Which makes one wonder... how much is too much?
    • If you are a US Computer Consultant, you can argue that your Alienware machine is what you need to develop and demo software for your clients. That can save you a fair amount come income tax time.
  • by Kevin Burtch ( 13372 ) on Monday June 07, 2004 @01:01PM (#9357531)

    Didn't this only last about 1/2 of a "generation" the last two times it was attempted?

    "Two?" you say?
    Yes, the obvious one is the old Voodoo 1 & 2 cards, but I distinctly remember at least one (I think 2 or 3) company(ies) making cards that used 3 S3 chips (one processing each color) for a performance boost.
    They were all "really hot" (popularity, not thermally... well, ok both) for a very short period of time, since the next full generation of chips completely blew them away.

    It was silly then, it's silly now.

    Now what _I_ want is a triple-headed system that you can play FPS games on, with a front and two side views (peripheral vision, or at least just a wider landscape across 2 or 3 monitors). The hardware is there (well, for dual at least), but do any games support this?
    It _can't_ be that off-the-wall; after all, the SPARC version of Doom supported triple-heads way back in version 1.2! (They dropped it after that.)
    OK, that wasn't *exactly* the same thing... that required a different box for each of the left and right displays, but they acted as slaves so you only operated the center system... it was _extremely_ cool!

    Hmmm... I wonder how long it'll be before 16:9 displays are common; the only one I know of is the sweet monster made by Apple that costs as much as a used car!

    • There is a three-head video card right now: it's called the Matrox Parhelia, and it's overpriced and underpowered like everything else Matrox has ever made. Still, the visual quality is supposed to be quite good, and if you don't pump the resolution too high (I wouldn't use more than about 800x600 per display, tops) you should be able to get good performance out of most games. Games which support arbitrary resolutions are supposed to support it automatically (as they do with nvidia twin displays) and some games s
      • Matrox cards have never been about power (at least not 3D acceleration power). Their main focus is on image quality. Not everyone needs super-fast 3D acceleration.
        • This is the games section. We're talking about a card for gaming. Further, I owned the original Matrox Mystique, and I still have a Matrox Millennium 220 for when I need a reliable and high-quality 2D card for a project, so not only is your comment irrelevant, but it's also not saying anything I don't know.

          Further, Matrox claimed that the Parhelia would have performance competitive with or superior to ATI and nVidia offerings which were out at the same time. Well, as it turns out, the card shipped late, but even

    • The Matrox Parhelia was advertised to provide 150 degrees of vision in games, utilizing 3 monitors.

      Go here [matrox.com] and check out the TripleHead Desktop table.
    • AGP killed it.

      Plus, it really makes more sense to have the power in one card anyway if you're getting it, so the market for these is a niche one.

      Cool software anyway; kudos to them and yadda yadda.

    • Doesn't have anything to do with video cards, but... Star Wars Galaxies has a way to increase your field of vision. You can also change the size of the game window, I believe. It requires some messing around with config-type files, I think, but I know I saw someone say that it was possible to get a full 360-degree view. But it would either be really, really squished or you would need multiple monitors to span it across.
    • I didn't read the article, but from what I understand the technology is somewhat more generic than the Voodoo SLI. If that's the case, then, eventually, as the technology matures, you'd be able to upgrade your two video cards to get better performance. Sure, the next generation of cards may be faster than two of today's cards. But two of the next generation's cards will be twice as fast as one. And eventually, maybe you'll be able to add as many video cards as you want, in order to make your system fast

      • "Video rendering is an inherently parallelizable problem..."

        Um, no... not at all.

        Think about how much bandwidth is needed just for the CPU of the system to feed the graphics hardware that is doing all the work (AGP 1x, 2x, 4x, now 8x and the new PCI Express, etc.).

        Rendering on two boards means _4x_ the traffic over the bus!
        Don't believe me? Think about it... your CPU has to feed card A _and_ card B (2x so far).
        Then, since you're only displaying on ONE of them, card B has to xfer the rendered display back over
        • They are using gen-locked video compositing, so there's no pixel traffic at all. The video outputs from both cards go into a combiner card, which switches between the two at the horizontal retrace interval where the screen division is.

          There was a picture of the setup at some other site where this system was mentioned a couple weeks ago.
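          (A minimal sketch of the switching logic this describes, purely for illustration - the names and the fixed split line are hypothetical, not Alienware's actual hardware design. The point is that choosing an output source per scanline involves no pixel traffic over the bus.)

          /* Hypothetical scanline-based compositor: pick which card's
           * (gen-locked, pixel-synchronous) output feeds the display. */
          typedef enum { CARD_TOP, CARD_BOTTOM } video_source;

          /* Card TOP owns scanlines [0, split_line); card BOTTOM owns the rest. */
          static video_source select_source(int scanline, int split_line)
          {
              return (scanline < split_line) ? CARD_TOP : CARD_BOTTOM;
          }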
        • Just because transferring data to two cards is not easily parallelizable doesn't mean that rendering a scene is also not parallelizable.

          First of all, most devices can access other devices and memory without the CPU being involved. This is what DMA and its ilk are for. Secondly - and I don't know if this is possible or not - it may be that two devices on a bus could both be written to at the same time, since they are both listening to the bus at all times.

          Anyway, the realities of implementing a sol

          • "First of all, most devices can access other devices and memory without the CPU being involved. This is what DMA and its ilk are for."

            Sorry. While it is true that DMA allows one device to talk to another device without the assistance of the CPU, it still requires the transfer take place over the bus that the devices are plugged into.
            If the bus is a bottleneck with one card, two makes things much worse.

            "Secondly, and I don't know if this is possible or not, but it's possible that two devices on a bus co
        • I am not an expert on rendering theory, but:

          To the best of my knowledge 3d rendering is all about matrix operations. Matrix operations are inherently "parallelizable". Period.

          What the heck does the bus have to do with any of it? Yes, there are going to be bottlenecks, but that doesn't mean the original problem can't be easily broken down into discrete chunks. Inter-operation communication is an issue to be sure, but it isn't the driving force behind whether or not an algorithm can be processed in a par
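          (A minimal sketch of the point being made here, in C: a vertex transform applies the same 4x4 matrix to every vertex independently, so the array can be cut in half with no communication between the halves while the work is being done. The column-major layout and function names are illustrative only, not any particular driver's code.)

          #include <stddef.h>

          typedef struct { float x, y, z, w; } vec4;

          /* Transform vertices [first, last) by a column-major 4x4 matrix. */
          static void transform_range(const float m[16], const vec4 *in, vec4 *out,
                                       size_t first, size_t last)
          {
              for (size_t i = first; i < last; i++) {
                  out[i].x = m[0]*in[i].x + m[4]*in[i].y + m[8]*in[i].z  + m[12]*in[i].w;
                  out[i].y = m[1]*in[i].x + m[5]*in[i].y + m[9]*in[i].z  + m[13]*in[i].w;
                  out[i].z = m[2]*in[i].x + m[6]*in[i].y + m[10]*in[i].z + m[14]*in[i].w;
                  out[i].w = m[3]*in[i].x + m[7]*in[i].y + m[11]*in[i].z + m[15]*in[i].w;
              }
          }

          /* Each half could be handed to a different card (or thread) with no
           * inter-card communication during the transform itself. */
          static void transform_split(const float m[16], const vec4 *in, vec4 *out, size_t n)
          {
              transform_range(m, in, out, 0, n / 2);
              transform_range(m, in, out, n / 2, n);
          }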

          • "What the heck does the bus have to do with any of it?"

            Try unplugging your AGP card and see how well it works.

            If the bus were so unimportant, why the heavy focus on the BUS since the original VGA card?
            ISA->EISA->VLB->PCI->AGP->AGP 2x->AGP 4x->AGP 8x->PCI Express...

            If you would actually READ what I posted, you'd understand that first you have to TRANSFER the data to be rendered, then you have to TRANSFER the rendered image BACK so you can TRANSFER it to the other card again.
            OK, so anothe
            • After a card has rendered a frame/scene, there is no data that needs to be returned; it has performed its job. Furthermore, during rendering, there is no need for card A to communicate with card B and vice versa. Each works on the dataset assigned and then is done. Granted, there will be some lighting/primitive overlap, which is wasted cycles basically (since both cards will be computing the same information), but that really is neither here nor there and has to do with how discretely you can break up the entire dataset N

              • "After a card has rendered a frame/scene, there is no data that needs to be return; it has performed its job."

                As I stated in at least 2 postings, if the video is not recombined with extra hardware downstream of the rendering (which, as one poster indicated, it is, making this a moot point), then the rendered data (which is drastically larger than the pre-rendered data) must be merged with the other card's rendered data in order for it to be displayed. This would require a data transfer over the only medi
      • Two cards are not twice as fast as one. The two cards are indeed faster than one, but not twice as fast. That's because screen-space subdivision cannot be 100% efficient; there's always duplication of work for geometry that falls across the border. And indeed, if your APIs don't support efficient culling, then perhaps all of the geometry may have to be processed twice, degrading your efficiency even more. As well, the more borders you have (by having more graphics cards), the worse your efficiency gets.
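        (A rough illustration of that efficiency loss, using made-up numbers: if each card rasterizes half the pixels but a fraction d of the geometry work is duplicated because it straddles the border or cannot be culled to one half, the two-card frame time is roughly (0.5 + d) of a single card's, so speedup ≈ 1 / (0.5 + d). With perfect subdivision (d = 0) that is 2x; with d ≈ 0.17 it is about 1.5x, which is in the same ballpark as the "expected 50% increase" quoted in the summary.)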
  • Excuse me while I bang my head against my desk. I know it's important to keep innovating, but this is why PC games are lagging behind consoles... because soon you'll need 3 video cards to run the latest ID game.
    • Umm, you do know that the consoles use the same video hardware as the PC, right? And you also know that ID makes games for both PC and console now, right?

      PC games are not lagging because of hardware. There are other factors, like price for instance (consoles are much cheaper for the same hardware), playing on a TV, controllers, less console warez... etc.

      But it certainly is not due to the technology, since it's the same.
      • Um... wrong. ID develops PC games exclusively, and develops them for PC hardware. Doom III for Xbox is being developed by Vicarious Visions, who did Tony Hawk ports and work very closely with ID. Carmack codes to push the limits of the systems available. Some games are reaching even further than that, such as Unreal 3, a game which couldn't be demonstrated until a working Nvidia 6800 was available, and it still runs at a terrible frame rate. The simple answer here is that PC is not lagging behind consol
        • Vicarious Visions may be *porting* Doom 3 to the console, but it's still the same game engine as the PC version and it still uses the same types of graphics hardware extensions; hence ID is developing games that target both systems. Not sure what you are trying to achieve here by playing the technical-fact game.

          Unreal 3 is not out yet, and next-generation consoles will be using next-generation graphics hardware as well, so there is a good chance that the new Unreal engine will run on consoles too.

          I'm no
    • PC games are lagging behind consoles?? What causes you to think this?!

      Console games are simple fun five-minuters for playing on the sofa with your mates. They have neither the depth nor the eye candy of modern "PC" games. Sure, they have lots of antialiasing and are fairly smooth - but hell: they'd better be at TV resolution! If you saw that on your PC's monitor you'd ask for your money back :)

      Using multiple cards is a way of getting a "sneak preview" if you like at what the mainstream tech will do for y
  • Correction (Score:4, Informative)

    by Fiz Ocelot ( 642698 ) <baelzharon.gmail@com> on Monday June 07, 2004 @01:18PM (#9357700)
    The article says:"Brian Joyce: SLI stood for Scan Line interface..."

    In reality SLI stands for Scan Line Interleave.

  • OK, using the Voodoo2-era term SLI when comparing dual ATI or Nvidia cards is just not fair. Things were simple then. Today's graphics drivers are far more complicated.

    Murphy's law here... too many things can go wrong SLI-ing ATI and Nvidia cards - more than any forum can handle, I am sure. Christ, the PC gaming industry has already shot itself in the foot with years of driver problems.

    • RTFA: It's not using SLI. The screen is divided in half; one card handles the top half, the other handles the bottom. Both cards have to use the same driver, and there's an additional PCI card that syncs things up. It's an expensive and relatively inefficient solution.

      How I assume it's going to work: both cards keep the same geometry/texture/whatever information in RAM. They both try to render the same scene, only the software tells each to render only one half of the screen by "blue-screening" -- defining
      • Isn't it also possible that they simply have the cards render at odd resolutions (e.g. 1024x384) and render different POVs?

        I must admit, though, your proposed solution is elegant in its simplicity.
        • You can always set a clipping rectangle for the graphics card, so the obvious solution is to mess with this rectangle.

          This allows you to render at odd resolutions *without* having to mess with transformation matrices.
      • I would wager it's done by changing the viewport setting on each card. The viewport controls the mapping of the viewing frustum onto the screen.

        The compositor card is just a video switch. At the horizontal retrace interval where the subdivision is, it switches from one card's output to the other's. It's probably set up to count retrace intervals and do the switching itself, rather than being interrupt-driven. (I don't think horizontal retrace interrupts are supported by most video cards.)
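        (A minimal sketch of the viewport/scissor idea described in this sub-thread, in plain OpenGL. Whether Alienware's software layer actually works this way is speculation; the function name and the fixed split are illustrative only.)

        #include <GL/gl.h>

        /* Have one card draw only the top portion of the frame and the other
         * only the bottom. Both keep the full-screen viewport so the frustum
         * mapping is identical; the scissor test just discards the other half. */
        void render_half(int screen_w, int screen_h, int split_y, int is_top_card)
        {
            glViewport(0, 0, screen_w, screen_h);
            glEnable(GL_SCISSOR_TEST);
            if (is_top_card)
                glScissor(0, split_y, screen_w, screen_h - split_y); /* GL origin is bottom-left */
            else
                glScissor(0, 0, screen_w, split_y);

            /* ...issue the normal, unmodified draw calls for the whole scene... */
        }

        A load balancer could then move split_y from frame to frame based on how long each card took, which is presumably what the "half and half" load-balancing talk in the interview refers to.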
  • by *BBC*PipTigger ( 160189 ) on Monday June 07, 2004 @02:28PM (#9358359) Journal
    What the hell is up with this Brian Joyce guy?!?

    Of course any "hardcore gamer" knows about the history of their "patents pending technology" as their Director of Marketing calls it. Too bad he doesn't.

    In the article, this guy says: "SLI stood for Scan Line interface where each card drew every other line of the frame and my understanding was that the major challenge was to keep the image in sync. If one line's longer than another, then tearing, artifacts, and keeping the two cards in sync was a real issue. The benefits of doing it half and half is we can take advantage of the load balancing and the synchronization challenge can be overcome."

    Alright... I'm sure the technology they've developed over there is some hot fscking shit. I'm sure they have a top R&D team that knows what they're doing && this custom motherboard + pre-driver thing is a good idea. Once developed fully, it could let you keep adding as many video cards as your case can hold, even potentially from different manufacturers, to improve total rendering capacity. That is great. Alienware has some very talented people to solve all the associated problems with accomplishing this. I respect their achievement.

    That said, what the hell do they have a Director of Marketing for who doesn't know what he's talking about? He gets the SLI acronym wrong. How the fsck could one scan line be longer than the other resulting in tearing or cards getting out-of-sync? Come on! I know he's not a technical guy but then he should just stick to his hype buzzwords && patents && shit like that because he totally ruins Alienware's credibility when he shows no understanding of the most prominent attempt at this type of endeavor in the past. At least he said "my understanding" in there but he should've said "I don't know or understand the history so I'll just talk about what I do know."

    Although I hold Alienware in high regard for making really fast gaming computers (that are arguably worth the premium price if you can't be bothered to build your own), I lose substantial respect for them when they allow their cool new technology to be represented by a marketing turd who couldn't be bothered to understand the history of what his company has done or what he's talking about. Buy a clue if you care to succeed. I want to like Alienware... I really do. TTFN.

    -Pip
    • by Anonymous Coward
      I think that's supposed to be "if one line takes longer than the other" as in rendering time and not length of the line.
      • I didn't read it that way but you have a much more sensible interpretation (albeit by adding a word that wasn't there). Regardless, interleaving scan lines is a lot less likely to have a load balancing problem than separating top && bottom halves of your resultant frame buffer. SLI does sacrifice the ability to load balance but interleaving is not likely to result in one card waiting on the other with any regularity so the complaint still doesn't make much sense.

        -Pip
        • The problem with SLI is that it only subdivides the rasterization work. Both cards must still process all the geometry (and lighting calculations). (Actually, the 3dfx cards which could do SLI did not have a geometry transform engine; regardless, the triangle-setup work is still all duplicated.)

          Screen-space subdivision (ala Alienware) can subdivide the geometry work as well as the rasterization work. However, there will still be a lot of geometry work being duplicated, since you don't know where a polyg
    • Although I hold Alienware in high regard for making really fast gaming computers (that are arguably worth the premium price if you can't be bothered to build your own), I lose substantial respect for them when they allow their cool new technology to be represented by a marketing turd who couldn't be bothered to understand the history of what his company has done or what he's talking about. Buy a clue if you care to succeed. I want to like Alienware... I really do. TTFN.

      Try and actually order a system fro

    • Actually, two identical video cards may not have (completely) identical performance. Subtle variances can creep into the various timing crystals and other electronic components, which can leave them unsynchronized. Two video cards outputting at 60 Hz may start out in sync, but it's virtually guaranteed that over time they will drift out of sync with each other.

      As an example, a primary rule of video capture is that you tie yourself to a single timing source. In other words, if you're capturing both video and a

      • That's why gen-locking or some other form of synchronization is needed. Essentially, there's just one master clock, and its signal is sent to both (or however many) cards.
        • However, that's not possible to do with generic off-the-shelf hardware without modifying it. I understand that the Alienware technology uses off-the-shelf video cards, so there's no way to lock them to the same clock (unless they're hoping they'll all be synched with the bus clock, which I wouldn't count on). With the old 3dfx Voodoo cards, this was easy because you had a connector for the two cards. Perhaps the old VESA feature connector found on old PCI cards might've supported such features, but
          • First off, ATI and Nvidia have understood the concept of gen-locking for ages, and some of their ultra-high-end boards have provided this capability. They've been used in research and high-end visualization setups where you've got multiple projector output and gen-locking is required.

            Second, Alienware is not exactly using generic off-the-shelf hardware. Can you find a PCI Express video card on any shelf? I'm sure they've worked with their vendor in order to find out how to gen-lock the cards, in additi
            • I'll admit I did not look at the pictures. However, I was going by Alienware's FAQ on this technology [alienware.com] for the information about 'off-the-shelf hardware' and such. This FAQ also claims they do not require any kind of custom driver support.

              Until any of this actually makes it to market, it's all speculation. Perhaps NVIDIA and ATI are going to insist that PCI Express cards have connectors for genlocking on even the lower-end gaming video cards. As it stands today, the only current NVIDIA chip that support

  • by TJ_Phazerhacki ( 520002 ) on Monday June 07, 2004 @02:57PM (#9358687) Journal
    Why does it seem that Alienware is so far out on the bleeding edge of technology?

    Oh, yeah, right. They are.

    I mean, come on, with the kinda influence they have - they asked ATI and nVidia for custom cards for the Area51m - is it any real surprise they are attempting to make themselves even better?

    I suppose that the fact there are a number of other producers in this niche - see the earlier Slashdot story - might encourage the development. But the simple fact remains - they are on top, and if they can lock down this intellectual property till 2nd gen, then they can release it publicly and become innovators in more than just overclocking and cool case mods.

    MMMmmmm...Cool case mods.

  • by wickedj ( 652189 ) on Monday June 07, 2004 @03:09PM (#9358802) Homepage
    Imagine a beowulf cluster of these... *ducks*
  • This kind of system has been around for quite some time, both the software and hardware to do it. SGI's Onyx4 uses their OpenGL Multipipe software kit (which works on unmodified OpenGL apps), a bunch of ATI FireGL cards, and some digital video compositing hardware to do both load balancing for complex data sets (what Alienware is doing) and real-time rendering at resolutions much higher than any one card could support (tiling).

    The thing that's new about this implementation is that you won't have to run out and drop $40,000 on the base Onyx4 if you have an application that needs this (to some extent - SGI's solution will go to 16 cards, with the bandwidth to drive them all, while Alienware's is currently limited to 2). Only $4000 for the Alienware box.

    Somehow I doubt that Alienware will get the patents that are 'pending' - I'd imagine that SGI probably already has a whole portfolio covering this area, since this kind of thing is their bread and butter. It's nice, though, to see a consumer-affordable implementation of this technology coming to market.
  • So does this mean if I buy 3 really cheap ATI Radeon 9200s I can have the same performance as a Radeon 27600??

    Or can I finally put to use those old ISA video cards I used to have in my 386?

  • by Guspaz ( 556486 ) on Monday June 07, 2004 @06:07PM (#9360518)
    Anybody see the demo videos of this? If you did, you'd notice that when they're busy unplugging alternating video cables to show that only the top or bottom half of the screen is rendered, the size of the image never changes.

    In other words, in their examples, which used Quake 3, there was NO load balancing going on. If there had been, when we saw, for example, the top half of the screen, the size of the top half should have been constantly changing.

    I understand fully that we were seeing alpha or beta level stuff here, but perhaps they should have waited until they had a fully functional model before showing it off.
    • Um.. That's because the second video card is still in there rendering away. Most systems don't even pay any attention to whether or not the monitor is plugged in - they still keep rendering the desktop/console.

      I've personally only seen one computer that cared if a monitor was plugged in, and that was all custom hardware.
      • Yes. So, and your statement agrees with this, what we should see is the card that IS plugged in changing its rendering size. We should have seen the visible part growing and shrinking, which would indicate that that video card was being given larger and smaller parts to render based on the load balancing.

        Instead we saw a fixed size, which indicates the card was always rendering the same size, meaning NO load balancing was being done.
        • Instead we saw a fixed size, which indicates the card was always rendering the same size, meaning NO load balancing was being done.

          Or that the amount of data being fed to the two cards to crunch was staying roughly the same for the two seconds of available grainy video from Tech TV. Geez, for a multi-thousand-dollar system which requires an 800-watt power supply and two top-of-the-line graphics cards for a 50% increase in performance, the best complaint you can come up with is that the preview
          • I'm not complaining about it, I think it's an interesting new technology with a lot of promise. I'm simply making an observation about the presentation.

            I didn't watch it on TechTV either, as I don't get TechTV. I watched it on the net, and the quality wasn't that poor, and there was significantly more than 2 seconds available.

            Who said anything about upgrades every 6 months? Up until now (with the release of the Radeon X800 and GeForce 6800), there hasn't been a single videocard that has dramatically impro
  • by S3D ( 745318 ) on Monday June 07, 2004 @06:21PM (#9360645)
    I did some development for a similar system - 4 video cards working in parallel, tiling the screen or time-dividing frames. To put it short: it's very difficult to extract a performance gain; it requires a lot of geometric culling, synchronization, and other tricks. An off-the-shelf game will not give a 50% performance gain with such a system - 15% in the best case (and I doubt even that, and it would quite probably create artifacts). To extract something close to 50% would require a lot of effort from developers, and no developer would want to do that to support a tiny market share.
    • It really depends upon the amount of rasterization work compared to the amount of geometry work being done. If you take a game that uses lots of long pixel shaders and bump up the resolution, I'm sure you won't have that much trouble achieving a good speedup. On the other hand, if you have highly-detailed geometry, simple pixel shaders, and not-so-high resolution, this probably won't help you much (assuming that you don't have efficient culling in your APIs).

      Also, a 4-way screen-subdivision system will t
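      (To put a toy number on that trade-off: assume, pessimistically, that geometry/setup work is fully duplicated on both cards while rasterization splits cleanly. These are illustrative assumptions, not measurements of any real system.)

      /* Toy model: one-card frame time = geom + raster;
       * two cards with screen-space subdivision = geom + raster/2. */
      static double two_card_speedup(double geom_ms, double raster_ms)
      {
          double one_card  = geom_ms + raster_ms;
          double two_cards = geom_ms + raster_ms / 2.0;
          return one_card / two_cards;
      }

      /* Fill-bound case: geom 4 ms, raster 12 ms -> 16/10 = 1.6x.
       * Geometry-bound case: geom 10 ms, raster 6 ms -> 16/13 ≈ 1.23x. */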
  • Now instead of one 400-dollar video card, you can get two! I'm not too eager for this to become commonplace.
  • Am I the only person who couldn't give a damn about SLI and would rather have two dual-head cards in the system to power 4 flat panels, all with scrolling ccze'd logs, so I can sit back in my huge leather chair and cackle with power while stroking my white cat?
  • Having just bought the top-end 3.2 GHz Alienware in November, with 64-bit chips and this new design, I feel like I just bought a 486 right before the Pentium came out.
