3dfx to develop DRI for Linux

skroz writes "At LinuxWorld today, 3dfx announced plans to bring the Direct Rendering Infrastructure (DRI) to its Linux drivers. Their hope is to broaden high-end graphics support in Linux. Check out the story here."
  • In general with X, you do pay a fairly large price due to the network transparency issue; however, XSHM and especially XDGA do a lot to resolve this. Theoretically, DGA in full-screen mode is as fast as Windows, though I can't back this up with numbers.

    Well, I was really talking about applications in general, not just games. Most apps don't need or want to be beating on the framebuffer directly. XSHM can be applicable at times (imlib), but I don't really see how XDGA helps there...

    For applications, anyway, drawing on a bitmap or a framebuffer is not always a good substitute for using the (potentially) accelerated drawing primitives either, is it? I don't think X is that fundamentally broken.

    Also, just out of curiosity, how many non-XFree86 servers support XDGA?


    ---
  • Bash the Voodoo all you like, and bash their closed-source libs if you want. However, if it weren't for 3dfx and Glide, there would be no Q3Test for Linux, and Q3Arena would seriously lag the Windows/Mac release (if it were made at all).

    Maybe things have changed, but I was under the impression that support for Linux was a good thing?
  • I, like the other folks responding here, use the network transparency of X very heavily, just about every day. I would guess that your 5% number is very wrong, perhaps not on single user home systems, but certainly out in the corporate world where highly networked operating environments are the norm.

    Anyways, this doesn't address the 2D side of things, but the DRI deals with exactly the issue you raise on the 3D side. My (admittedly incomplete) understanding is that it will allow any code written to the very open and hardware-independent APIs of OpenGL and GLX to bypass most of the overhead that X usually incurs, instead pipelining the requests directly to 3D hardware that can support it.

    If anyone out there knows better than I, please correct any mistakes I may have made.
  • When you say something like that, right away some guy is gonna respond like he and the rest that use network transparency are the norm. I'm with you; I'd be surprised if network transparency is even used 5% of the time.
  • I may be wrong on this, but I have a feeling I read something about XFree86 4.0 that said they were coming up with a binary format that would only need you to produce one binary for any architecture. Now this may be incorrect and I'm getting confused with something else, but have a look at the docs with XFree86 3.9.15.

    Iggy
  • I'm no 3Dfx zealot, but is there any 3D support whatsoever for the G400 in Linux? If there isn't (I don't think there is), then buying a G400 means buying a high-priced 2D board and wasting most of the hardware it contains, and a lot of your own money. Unless you are talking about using the G400 under Windows, in which case, what does it have to do with this discussion?
  • I'm really tired of all the bad-mouthing of the Voodoo3. The Voodoo3 with an AMD K6 266 gave me great performance at a great price. I do have to sympathize with some people in that the Linux drivers for 2D support kinda suck, but if 3dfx is really serious about selling the Voodoo card line to the Linux consumer, then they should focus on producing X servers with higher bit depths and higher resolutions to match the card's benchmarks.
  • Yeah, the Nvidia TNT/TNT2/TNT2 Ultra is the great one. They're working on the NV10, which will be able to make textures 20x as small as the TNT.
  • Look in /usr/src/linux/Documentation/mtrr.txt
    Set your MTRR registers; it really helps me out a lot on my TNT. The Voodoo3 with the MTRRs set is supposed to be the fastest thing under Linux.
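
    Per mtrr.txt, setting one up is just a write to /proc/mtrr. A rough sketch (the base/size here are made up; use your card's real framebuffer address from /proc/pci or the XFree86 startup log):

    /* add_wc_mtrr.c - mark the framebuffer write-combining (sketch).
     * Needs root and a kernel built with CONFIG_MTRR. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/mtrr", "w");
        if (f == NULL)
            return 1;
        /* placeholder values: framebuffer at 0xf8000000, 4 MB */
        fprintf(f, "base=0xf8000000 size=0x400000 type=write-combining\n");
        fclose(f);
        return 0;
    }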

    Mike
  • The ONLY reason I still use Windows is the far superior quality of the Windows 9x nVidia drivers.
    The Linux drivers offer terrible framerates, poor texturing performance, and no (?) AGP support.
    There is a long way to go before I switch to Linux for my gaming (besides the lack of games).
  • Maybe you people should look into the kernel magic SysRq key... I have never used it, but from what I hear it could maybe somehow restore the keyboard to a sane state.
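    For reference, a rough sketch of enabling it, assuming a 2.2-ish kernel built with CONFIG_MAGIC_SYSRQ; once it's on, Alt+SysRq+r is the combination that should put the keyboard back into translate mode:

    /* enable_sysrq.c - flip the sysrq sysctl (sketch; needs root) */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/sysrq", "w");
        if (f == NULL)
            return 1;
        fputs("1\n", f);
        fclose(f);
        return 0;
    }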
  • My life would be unlivable without the network transparency of X. Whatever finally does displace X (possibly Berlin) absolutely *must* have network transparency.

    --Lenny
  • Thank you. I couldn't agree more.
  • Well, I personally would LOVE to use a Voodoo3. But, sucks for me, I run on Alpha. I wrote 3dfx about wanting to compile it on my Alpha. If they make that possible, then I'll go and do it.

    Hey, LET'S NOT bash them. Sure, code is the BEST way to go about it. But they don't want to do that. No Big Deal. Once the Linux market share goes to 5 percent, or 10 percent, and they see that nVidia, 3Dlabs, and Intel are all gaining market share on Linux quickly, then they might think about it.

    Read the Linux advocacy pages. Drivers have a special place. The existence of drivers is A Good Thing(TM). People always respond better to praise and "oh, by the way"s than to being called driver hoes.

    In fact, I wrote nVidia and 3dfx about my experience getting my card(s) working. I settled on nVidia. (I still don't have GLX working properly, however.) I'm going to buy another video card, and it had better have Alpha support. (Hrumph!)

    Roger
  • The whole point of "Embrace and Extend" is that you are adding proprietary extensions to an open specification. An open-source program cannot by definition "Embrace and Extend", since all the changes are in the open. I can get XF86 and compile it on any UNIX or OS/2, so this is not an issue.

    Hrm; yes, I should have used quotes around "embrace and extend". It really isn't the most applicable term. I'm just concerned about a loss of portability of X applications that rely on the new extensions, in much the same manner as a lot of Linux programmers don't bother to make their code [easily] portable to other Unices. (although that is arguably a problem because other Unices are often rather broken in some areas :P)

    The anti-aliased text extension has the potential to be the worst culprit -- it introduces an entirely new API for drawing text, and supporting older X servers would mean writing all of your text-drawing code twice (although hopefully you'd have abstracted it somewhat so that it only meant one source module). Maybe we'll be lucky and that'll get handled by GTK and Qt. It may not really be that much of an issue after all.

    For the sake of argument, though, can you really compile XFree86 for any Unix? Can I, say, replace the vendor's X server with XFree86 on my HP-UX workstation here? What about other non-Intel non-Linux/*BSD systems?


    ---
  • Voodoo3 on a PPro 150? Jeez! You'd be better off buying a new motherboard and processor! Graphics accelerators are limited performance-wise by your CPU, so don't expect miracles.

    Go do yourself a favor and save up for an Athlon and Slot-A motherboard. =)
  • They did this to speed up the GUI in 4.0 (3.5 sucked), and they succeeded...

    If you have problems with the graphics subsystem on the server - make sure to run VGA and everything should be OK. The VGA driver is rock stable.
  • For those who didn't know, nVidia has already released an OS-neutral driver suite [nvidia.com] for the RIVA 128, TNT, and TNT2 series of chips. Fully compilable source for the entire suite is provided under a BSD-ish license. (The source isn't very educational, however, as it's been run through 'cpp'.) Sample driver implementations and example code for Linux and Windoze-NT also accompany the distribution.

    The suite is designed to function as low-level support for all rendering and display functions. These functions are exposed in an object-oriented-like fashion at a hardware level. Whatever the hardware doesn't support directly is thunked over to software. Thus, the public API for the NVidia chips is the hardware channels.

    This has been out for several weeks now, but every time I've submitted it to Slashdot, it hasn't been approved. Oh, well...

    Schwab

  • If slowing down X network access would mean a speedup of local execution, I am all for it.

    (In fact, one can easily use VNC and skip X networking altogether.)
  • Perhaps this is OT, but exactly why do so many companies see a need to support Linux? If their goal is to promote up-and-coming OSes, perhaps they should release source and let the hackers out there DIY, or say they are also releasing for FreeBSD. Linux isn't the only free-software up-and-coming OS on the market, you know.
    But if I understand properly, they are talking about improving X. That'll affect _ALL_ UNIXes.
    I've used both FreeBSD and Linux, and I have my own personal opinion on which one is better. I don't want to start any religious wars, as to each his own, but here's a hint (my favorite isn't the second one).
    I think that EVERYONE out there should try both. Only fair.
  • Sure, I know you can do this on Solaris 6 (Intel).
  • Nope. Alpha or PPC will still need its own binary.
  • When you mention your personal opinion regarding the OS, do you mean server or workstation usage?
  • As if I did not try. It would be easy then...
    No, the damn thing freezes dead solid sometimes.

    BTW, you are wrong about not getting back to X;
    right now: Ctrl-Alt-F2 - and I logged in as root.
    Ctrl-Alt-F7 - and I am back in X. It worked.

    Boy, isn't Netscape under Linux ugly...
  • http://glide.xxedgexx.com/

  • I think so. Try the glx from CVS.
  • First, retool the chain from application to server. Instead of ALWAYS using sockets, write a new path in the libX11.so library so that commands are FIFO'd using shared memory to the X server. Get rid of the X protocol over local connections (which requires quite a bit of time decoding and encoding... think PPP), and use something more like System.map. Build the protocol over a shared-memory FIFO buffer. This _MUST_ be backwards compatible. Same library. Just a new life for old apps.

    This is a pretty good idea. In fact, it's already been done by someone else - IBM. The AIX X server uses a shared-memory transport for all X requests if client & server are on the same machine. This confused me for a while - where's the XShm extension on AIX, I wondered? Ah, it doesn't need one, since everything's shared anyway.

    I don't understand what you mean by getting rid of the X protocol though - you mean giving clients direct access to the hardware? Sun did something like this with their SUN_DGA extension (which is not the same as the XFree86 extension).

    Build enlightenment/wmaker/fvwm/etc. as a library. Allow the WM to be linked into the X server (I know, but it's a good thing). This would lower the context switches a good bit, because you'd get rid of the X/WM/CLIENT clusterfsck that can often happen when you have an app running and you MOVE THE MOUSE (Ohh My God). Besides, if you want to switch WMs - we use libdl.so to kill the hooks, blow out the old WM lib, and relink the new lib.

    I can sort of see what you're getting at here, but (a) it would be a truly monumental task, and (b) I don't think it would buy you much.

    Another interesting notion would be to allow X to load toolkits (server-side) like KDE/GNOME/Motif/Xt into the X server...

    What you're describing is pretty much what the Berlin [berlin-consortium.org] people want to do. Take a look at their website - there's some very interesting and thoughtful design there.

    You have some good ideas, but IMHO, you should look at contributing to Berlin, rather than reworking X. Especially since a lot of your ideas are already there :-)

  • by Larry L ( 34315 )
    Actually, ATI cards under Linux have been known to run faster than under Windoze because ATI wrote crappy drivers.
  • "I thought the entire point of a damned 3D card was to offload the need for a high end processor and let the 3D card's hardware handle the 3D intensive stuff?"

    This will be true when nVidia [nvidia.com] releases their next chipset, code-named NV10 [riva3d.com], which has full hardware transform and lighting acceleration... This is really going to piss Intel [intel.com] off...

    "Ah well, someday I may buy a new system but I haven't found ample justification to spend $700 to get something like a Celeron 433 with 128 megs of ram, Abit BX6v2 motherboard, etc."

    You can upgrade your system for far less: approximately $450, and that's assuming you need a new case (AT -> ATX) plus 128 megs of RAM. Now, you don't have to take my word for it; you can go to www.computernerd.com [computernerd.com] for a motherboard/CPU/case bundle and to www.mwave.com [192.216.185.10] for memory and see for yourself. You can also check out www.killerapp.com [killerapp.com] for computer hardware prices.

    Me personally, I plan to purchase Abit's new dual Socket 370 motherboard [abit-usa.com] with two 366MHz Celerons guaranteed to overclock to 550MHz [computernerd.com] for $412....

  • In other words, it doesn't. You mention Solaris -- does it support Solaris/SPARC? What about AIX? What about Digital Unix/Alpha? What about IRIX/MIPS? Specifically what non-Intel architectures _are_ supported?

    The Unix world is not all Intel Unix clones. Far from it. Get some perspective.

    ---

  • Without the game market, Linux will not make it big on the desktop, IMHO. Sad fact.
  • I mean, good for them writing a DRI driver. I guess we could use something like this. But my problem is that they only released it as DRI because you don't need to release source under such a system, since it's a pluggable binary/library.

    It's funny, since they should worry about keeping the dies for their chips a secret, and not the software that interfaces with them.

    I'll keep my nVidia (even though it crashes Blender) and run with that. If I can keep the nVidia drivers from crashing, I'd use it to its full potential. I'll have to wait for X11R4.
  • The TNT X server just rules; it seems to run much quicker than any other X server I have used... just as a 2D video card in general, the TNT is a fantastic card... cheap too, now.

    :) (Not to mention it has good game perf too...)
  • I don't like the way this press release talks about developers writing "for 3dfx hardware". The DRI is for use with OpenGL and other hardware-agnostic APIs, so most developers won't really care what hardware is in the box. But of course, stating it that way lessens the number of times you can include "3dfx" in your message.
  • Duh, preview before posting... I mean XFree86 4
  • It is apparently some bonus feature of their news interface that highlights the words you searched for. It looks like somebody was searching for "Red Hat" and then mailed this link to Slashdot.

    You can verify this by clicking the "Jump to first matched term" link at the -very- beginning of the article.

    FYI
  • by warmi ( 13527 )
    I have a Voodoo3 and sadly, under Linux it is visibly slower than Windows running on the same box. In fact, on the other box I have a G200, and it is slower too.
    Has anyone ever encountered a graphics card that runs faster under Linux than under Windows 95?
    Is this a driver issue, or is X simply slower than the Windows GUI subsystem?
  • Got e-mail from a guy at 3dfx like a week or so ago, asking my permission to use PROPAGANDA seamless tiles in the product demos they're giving. Unfortunately, I won't be at the expo to see them in action, but if I were you, I'd look forward to such a demo. :)

    Bowie
    PROPAGANDA [system12.com]
    Bowie J. Poag

  • 3dfx needs to be fixing my Voodoo2 before they go out and do all this 2D/3D card stuff. Come on, Daryll, I know you read Slashdot; quit tooling with the demo for that graphics convention!

    Anyone have clues as to why "Red Hat" was in red?

    ~Kevin
    :)
  • Well, this is unquestionably a Good Thing(tm), but it really would be nice if it were possible to do 3d graphics decently _without_ X, too. (none of this beating directly on the hardware in each individual app stuff)

    While you unquestionably need some central facility to arbitrate access to and abstract the graphics hardware (which is why svgalib doesn't really cut it -- each app banging on the hardware directly is NOT a modern or a safe way of operating), it'd be nice if you didn't have to pay the memory overhead of X if you didn't really need it (and it is considerable). Or the startup time -- not all of us keep an X server running continuously.

    It'd be nice to see the kernel doing its job with arbitrating access to hardware, although that usually means an in-kernel abstraction layer too, which we really don't want. Partly because it can be a severe cause of bloat, and partly because there's no real obvious way to abstract most graphics stuff without breaking the standard Unix 'everything-is-a-file' metaphor. Except maybe for abstracting really basic stuff, like the framebuffer, which is what fbcon does.
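
    (For example, the fbdev interface keeps the file metaphor for the dumb-framebuffer case. A minimal sketch, assuming a kernel with fbdev compiled in and no acceleration whatsoever:)

    /* fb_fill.c - map /dev/fb0 and clear it; the kernel abstracts only
     * the raw framebuffer here, nothing accelerated. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        struct fb_var_screeninfo vinfo;
        unsigned char *fb;
        size_t len;
        int fd = open("/dev/fb0", O_RDWR);

        if (fd < 0)
            return 1;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
            return 1;
        len = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;
        fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;
        memset(fb, 0x00, len);   /* blank the visible screen */
        munmap(fb, len);
        close(fd);
        return 0;
    }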

    We do still need to arbitrate access to accelerated graphics, though, especially in light of the existence of poorly designed hardware where you can inadvertently (or deliberately) lock the bus and other fun things by poking accel registers. Maybe KGI will mature someday; the design seems to manage that pretty well without foisting an ungainly abstraction layer on the kernel.

    *sigh*

    Or maybe I'm just blowing smoke. I dunno. I'm not faulting the 3dfx folks. There are a lot of times you want or need 3d in X, and DRI is the cleanest way to do that. It's also the only emerging "standard" way we have of doing this 3d stuff cleanly right now, in any environment.


    ---
  • by davie ( 191 )

    The problem is that your X server isn't using your graphics chipset's hardware acceleration capabilities. XFree86 4.0 will address some of this, and I'm sure the dot releases following will bring some pretty good performance gains. I'd settle for my TNT delivering 90% of the performance I get with the same card in one of my other machines that runs Win 95.

    I was getting halfway decent at Q2 DM on that Winders box, but I got sick of the constant crashes (not to mention the fact that I wasn't getting any work done). With hw accel on Linux, I could go back to being a total slacker and rule my favorite DM server!

  • Wasn't it 3dfx that was so jealous about their code and APIs (remember the Glide wrapper lawsuits)? I wouldn't be surprised if their "contribution" were closed-source or under a really restrictive license.

  • Been looking at the XFree86 4 code... I want to retool it. I want to say YEAH!! to XFree86 being on a good path. BUT...

    I've looked at the new driver-module API for XFree86, and I am going to retool X into something different. X will still suffer from a few well-known problems. I have an answer to some of these.

    First, retool the chain from application to server. Instead of ALWAYS using sockets, write a new path in the libX11.so library so that commands are FIFO'd using shared memory to the X server. Get rid of the X protocol over local connections (which requires quite a bit of time decoding and encoding... think PPP), and use something more like System.map. Build the protocol over a shared-memory FIFO buffer. This _MUST_ be backwards compatible. Same library. Just a new life for old apps.

    On the X server side - take all the networking code and build a protocol switch on it. (Thus leaving complete remote compatibility.) On the other switch, allow a shared-memory FIFO.

    This change alone would reduce the context switches by about 1/3rd (best case) to 1/8th (worst case).
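
    Roughly what I have in mind for the client half (a sketch only - the names are invented, and real synchronization is handwaved):

    /* Local X transport sketch: a byte FIFO in a SysV shared memory
     * segment instead of a Unix-domain socket. */
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define FIFO_SIZE (256 * 1024)

    typedef struct {
        volatile unsigned int head;   /* next byte the client writes */
        volatile unsigned int tail;   /* next byte the server reads  */
        unsigned char buf[FIFO_SIZE];
    } XLocalFifo;

    /* Client side: attach the segment the server advertised at connect. */
    XLocalFifo *fifo_attach(int shmid)
    {
        return (XLocalFifo *) shmat(shmid, NULL, 0);
    }

    /* Queue one wire-format X request; returns 0 when full (a real
     * implementation would block on a semaphore, not make callers spin). */
    int fifo_put(XLocalFifo *f, const unsigned char *req, unsigned int len)
    {
        unsigned int head = f->head;
        unsigned int space = (f->tail + FIFO_SIZE - head - 1) % FIFO_SIZE;
        unsigned int i;

        if (len > space)
            return 0;
        for (i = 0; i < len; i++)
            f->buf[(head + i) % FIFO_SIZE] = req[i];
        f->head = (head + len) % FIFO_SIZE;   /* publish only after the copy */
        return 1;
    }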

    Build enlightenment/wmaker/fvwm/etc. as a library. Allow the WM to be linked into the X server (I know, but it's a good thing). This would lower the context switches a good bit, because you'd get rid of the X/WM/CLIENT clusterfsck that can often happen when you have an app running and you MOVE THE MOUSE (Ohh My God). Besides, if you want to switch WMs - we use libdl.so to kill the hooks, blow out the old WM lib, and relink the new lib.

    On the X server side, some other good things could be done. One big thing: the rectangle-management code under X __!!!SUCKS!!__ performance-wise. (Read the X source code; it's in there.) Basically, X takes all visible windows (and portions thereof) and builds a rectangle list. This would seem to be A Good Thing, but it actually sucks. Instead of some good rectangle-management code, every time you move a window X MUST rebuild this damn rectangle map. Oftentimes (with shapes turned on) this can add up to a hundred thousand rectangles. (Add some debug code in there - it's horrendous.)
    Don't know what to do about this (it's pretty deeply embedded in X). Ideas? I think some dirty-rectangle backend would work wonders. On the Amiga, the system was a lot better. Rather than try to big-brother applications, it simply gave the drawing library a list of rectangles: visibles, and buffers for non-visibles (if it was buffered at all). The drawing lib drew into visibles and backbuffers. If there was no backbuffer - too bad. X is similar to this, but the rectangles are maimed. Also, there is a gigantic spin-lock on the DDX. This is wrong!!!
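
    To illustrate the dirty-rectangle idea (a toy sketch, nowhere near as precise as X's region code - which is exactly the point):

    /* Accumulate damage and merge overlapping rects, instead of
     * rebuilding an exact clip list on every window move. */
    typedef struct { int x1, y1, x2, y2; } Rect;

    #define MAX_DIRTY 64
    static Rect dirty[MAX_DIRTY];
    static int  ndirty;

    static int overlaps(const Rect *a, const Rect *b)
    {
        return a->x1 < b->x2 && b->x1 < a->x2 &&
               a->y1 < b->y2 && b->y1 < a->y2;
    }

    void add_dirty(Rect r)
    {
        int i;
        for (i = 0; i < ndirty; i++) {
            if (overlaps(&dirty[i], &r)) {
                /* grow the existing rect; coarser but far cheaper */
                if (r.x1 < dirty[i].x1) dirty[i].x1 = r.x1;
                if (r.y1 < dirty[i].y1) dirty[i].y1 = r.y1;
                if (r.x2 > dirty[i].x2) dirty[i].x2 = r.x2;
                if (r.y2 > dirty[i].y2) dirty[i].y2 = r.y2;
                return;
            }
        }
        if (ndirty < MAX_DIRTY)
            dirty[ndirty++] = r;
        /* else: list full - degrade to "whole screen dirty" */
    }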

    Another interesting notion would be to allow X to load toolkits (server-side) like KDE/GNOME/Motif/Xt into the X server. This keeps client fatness down substantially. The only problem is that these toolkits weren't designed for a server architecture. They were designed to be an upside-down Christmas tree into the X server. A way around this may be to run a toolkit as a thread for each client. On the client side, you would still build the clients with libgnome, but the lib would have an optional direct interface into the X server toolkit. (Again, a protocol "switch".)

    With a multithreaded X architecture, this would ROCK! Separate the DDX from the DIX layer threads. Allow the DDX to accept drawing lists, and the DIX to yack at clients. This would allow you to cut your timeslices mo' better for the DDX layer. Also, the DDX could operate independently, simply blasting pixels from a FIFO command buffer and operating the DGA semaphores.

    Anyhow, I'm soliciting some help in retooling. I think we're dumb to throw away X, but at the same time, this RELUCTANCE to mess with X is dumb. X needs a REAL direct rendering interface - not just for special clients, but for the whole architecture. The design of X is monolithic and completely unoptimized for today's architecture. (Think about SMP? Look at how bad Mesa performs on SMP. Think about how the X server doesn't help at ALL.) With some COMMON sense, we could pry the network-only fingers of code from the DDX and DIX and build optimized pathways.

    All this, while still using the XFree86 driver modules. That's right: we won't have to touch any drivers. (That was always the hard part.) All we do is build a new client->server path.

    davenrs@cyberonic.com
    davenrs@mail.excite.com

    Rog
  • In my post, I was referring mostly to games (3dfx and DRI both got me thinking about games). For applications, you are correct, beating on the framebuffer directly is usually a bad thing.

    However, for an application, the speed at which X draws stuff isn't nearly as critical as it is for games/video players/etc., for whom beating on the fb directly can be an advantage. There's still room for improvement within the X protocol. For example, being able to say "draw this widget" would be much more efficient than saying "draw a line here, and a line here, and a pixel here..." However, I think the slight loss of speed due to the client/server architecture is a small price to pay for network transparency.

    As to how many servers aside from XF86 support DGA, I'm not sure. My experience with Unix stops with Linux, and my experience with X stops with XF86. I hope it becomes standard fairly soon, because it's very nice for writing programs in which framerate is important.

  • DGA is really handy for running Windows games that use DirectDraw. Unfortunately this requires running them as root. If you aren't root, WINE's DirectDraw implementation can be really slow (hundreds of times slower in some cases).
  • I think that saying the point of a 3D accelerator is to "offload the need for a high end processor" is a bit of a mis-statement. I don't think there is a PC on the market yet that can even play Quake 1 in OpenGL mode at 640x480 at more than 1 fps if software rendering is used. The point of a 3D accelerator is to get speed and quality that would be entirely impossible without one, not to ease the burden on your CPU, or to save you the trouble of getting a new one.
  • The GLX project [digitalpassage.com] supports the G200, G400, and the nVidia chips. The driver isn't exactly ready for the naive end user, but it runs the Quake games and the OpenGL hacks in the XScreensaver package, so it's quite functional. It's a bit slow, but it's getting faster all the time, and Matrox have recently made some WARP code available, which will allow use of the geometry engine.

  • "You don't eat, sleep or mow the lawn, you just use X all day long"...

    Sorry, couldn't resist, please moderate it down. :)



  • I've used a TNT and a V3 in the same box, and they both seem to be pretty quick in 2D. Q3Test is really only playable with the V3, though. I'm hoping that XFree86 4.0, and possibly this 3dfx DRI, will change the Free Un*x 3D landscape...
  • Ever been in a university computer lab?
    ---
    "'Is not a quine' is not a quine" is a quine.
  • It can be done. Essentially, the WM runs as a thread. It STILL must go through the DIX layer, so the WM isn't getting anything more special than it currently has; it just incurs a much lower context-switch cost by being a thread. An added bonus is that the X server could monitor the WM thread and kill/restart it when and if it fails - automatically.
  • The first Nvidia release for XFree86 is a pipe/stream-based solution. This will change when the DRI solution is integrated into XFree86 4.0. What it means to be stream-based is that I can run an OpenGL app on a remote system and display it on my local Linux box with hardware acceleration. But by sending everything through a named pipe to the X server, you don't get complete acceleration.

    What I have seen of the DRI implementations seems very speed-competitive with Windows systems. I can't remember specific numbers, but it seems like it was within about 10% on some of the unstable stuff at Linux Expo '99 (they didn't have a mouse on the system 'cause if you moved the window, X would crash). It got much better after that.
  • What chipset is it based on?
  • I see a recent trend of companies "coming out of the closet" and announcing Linux support, but failing to fully embrace the whole philosophy. From this article, what I can gather is that now people will be able to program fast graphics for 3dfx cards. Unless this DRI is also implemented by other companies with competing video cards, this announcement is useless. Is the DRI even an open standard? Will 3dfx sue anyone else who tries to implement the same interface in Linux, like the whole smelly affair with Glide? Is 3dfx going to release the source, or keep it binary-only? All these questions lead to muddles.

    If you really want to get excited about graphics on Linux, take a look at GGI and Mesa. Both seem to be progressing extremely well, and try to provide full cross-platform support for all types of cards. For some reason, this doesn't excite me that much... (maybe it's because I just got my G400... and it rocks my world).

    -Laxative
  • That's not it. At least not in my case. I am not talking about games - just simple GUI operations (moving windows, resizing) - stuff like that is slower.
  • > Is this driver issue or simply X is slower than
    > Win GUI subsystem ?

    Well, in general with X you do pay a fairly large price for marshalling/demarshalling commands for relatively low-level drawing primitives, even for local stuff. Yes, there's the SHM extension, but that's only if you want/need to implement your own drawing primitives and scribble on the bitmap. And you lose network transparency if you rely on that.

    The Win32 GDI (as far as I know) is basically a matter of local method invocation, with possible marshalling/demarshalling over a network connection going on behind the scenes in cases like Windows Terminal Server, but otherwise it's all local.
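
    For reference, the SHM path looks roughly like this - a minimal sketch of the MIT-SHM setup (error handling omitted; link with -lXext -lX11):

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    XImage *make_shm_image(Display *dpy, int width, int height,
                           XShmSegmentInfo *shminfo)
    {
        int scr = DefaultScreen(dpy);
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, shminfo, width, height);

        shminfo->shmid = shmget(IPC_PRIVATE,
                                img->bytes_per_line * img->height,
                                IPC_CREAT | 0600);
        shminfo->shmaddr = img->data = shmat(shminfo->shmid, NULL, 0);
        shminfo->readOnly = False;
        XShmAttach(dpy, shminfo);   /* the server maps the same segment */
        return img;
    }

    /* ...scribble on img->data, then push it with:
     *   XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False);
     * No pixel data crosses the socket - only the request header does. */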
    ---
  • 3dfx doesn't own DRI. DRI is the interface between XFree86, the OpenGL modules and the hardware driver that allows near-direct access to the hardware without breaking the X server in the process. 3dfx is just making a hardware driver that will follow the DRI spec so that XFree86 can use it. The XFree86 folks "own" the DRI spec.

    The whole announcement doesn't excite me much. nVidia already has half-decent open-source drivers and will be ready with DRI ones when XFree86 v4 comes out; the TNT2 chips match the latest 3dfx hardware and will keep improving, while 3dfx seems to have maxed out. 3dfx has annoyed me greatly with their Glide stunts. What reason do I have to buy their hardware?

  • Riva TNT is the chipset.
  • I always thought that they made junky proprietary game cards...
  • Build enlightenment/wmaker/fvwm/etc. as a library. Allow the WM to be linked into the X server (I know, but it's a good thing). This would lower the context switches a good bit, because you'd get rid of the X/WM/CLIENT clusterfsck that can often happen when you have an app running and you MOVE THE MOUSE (Ohh My God). Besides, if you want to switch WMs - we use libdl.so to kill the hooks, blow out the old WM lib, and relink the new lib.


    Your other suggestions sound good, but this is a stupid idea. Right now, if your window manager crashes, or hangs, or even worse hangs with the X server grabbed, you can kill and restart it and the X server keeps chugging. If it were dyn-loaded, a WM failure would mean your whole X server is toast. Plus, WM memory corruption would mean scribbling all over server memory: bad!

    But the real reason it sucks is that it is totally optimizing the wrong thing. Show me _any_ app where the performance bottleneck is manipulation of top-level windows (this being the only time the window manager gets involved) and I will show you a really poorly designed app. Most of the time the WM is doing anything, it is handling user interaction with window decorations, or a new app coming up, and the context switches are not the bottleneck there.
  • My 1 meg Trident video card runs as crappily on Linux/X as on Windows.
  • 3dfx is being stupid with the Glide emulation (wrapper) thing. However, this announcement is still good news for those of us who _own_ a Banshee or Voodoo3. I bought a Banshee when I built my Linux box, because I saw it had drivers, and the nVidia drivers are "development only" and not really fast enough for games. (At least with a TNT. With a TNT2, it is probably playable.)
    Now that I have a Banshee, I'm finding out that the Banshee drivers are pretty iffy and crash-prone. I could pull the TNT out of my other box and use it in Linux, but then I'd get horrid 3D performance.
    I'm hoping this announcement means my Banshee will have good 2D and 3D support sometime in the near future.
  • Yeah, I know, I know. At least THAT's fixed. It would be nice if the preview were forced once sometimes.
  • Now, are 3dfx going to do this properly? Are they going to give XFree86/PI/etc their full cooperation, release the specs, maybe write some of the code themselves, and generally do the right thing? Or are we going to see essentially another Glide, where they make their own binary-only, x86-only releases, don't give the code back to XFree86, maybe try to tie it into Glide somehow, and generally try to keep the monopoly they think they have on 3D in Linux? We shall see... (BTW, it's my understanding that 3dfx's Windoze OpenGL drivers are not totally compliant, since they can only do full-screen rendering.)

    /Begin soapbox
    I cannot understand why people still think Voodoo is the best thing since sliced bread or whatever, and go out and buy Voodoo cards by the bucketload. They can't render into a window, can't do 32-bit rendering, don't have a full 32-bit z-buffer, can't do AGP texturing, and still have that ridiculous 256x256 texture limitation. The only good thing about them is that they're still pretty quick. If you want a 3D card for Linux (or anything else actually), buy a TNT, or better yet, a G400.
    /End soapbox

    P.S. Keeping my fingers crossed for the Glaze3D...
  • X is not necessary, but there's some functionality already there (or being developed for XFree86) that I don't know exists for console-mode apps.

    Yes, that's exactly the thrust of my first comment -- some of this stuff SHOULD be possible independently of X.

    For example, there's GLX, if you want a remote app to be accelerated in your hardware.

    GLX, in particular, probably is one of those things that belongs in X. I'm not desperate for network-transparent console video. However, I would like to have otherwise well-rounded video support outside of X. For a lot of specialized applications, X is just too big, but there aren't any good alternatives for accelerated graphics right now. (No, writing your own drivers in userspace a la svgalib is not a good alternative.)

    Also, multiple apps using the same rendering hardware... There's already 3D support in the console for 3dfx cards (I think Quake x can run accelerated under SVGALib + Glide; I'm not sure on this).

    This is exactly the problem. You can't reliably share the graphics hardware right now without going through X. That needs to change. As far as SVGALib + Glide, that was precisely what I was referring to when I said "each app banging on the hardware directly is NOT a modern or a safe way of operating." (granted, it's not as bad if you have /dev/3dfx set up, but that's _hardly_ a generalist approach)

    There's also the GGI project, but it's been ages since I last read about it.

    KGI is a GGI subproject.

    But X provides some other things than "full-screen accelerated games".

    Sure, but sometimes people don't NEED those other things. Also keep in mind that there are some application types besides games that require fullscreen accelerated graphics.


    ---
  • which eliminates the stability problem

    You hope so. This Netscape crap freezes X dead - including the keyboard - on a regular basis. And that is not the only offending application. Yes, the brave Linux heart is still beating under it, and I can drive to work to ssh to it, but that is not a stable system, don't you think?

    BTW, is there a way to connect the keyboard through some separate driver, which listens for a particular key combination to switch to a separate console, so X can never freeze it out?
  • 3Dfx makes junky proprietary graphics cards that until recently were faster than everybody else's junky proprietary graphics cards. Now that nVidia's Riva TNT2 is faster and has free (BSD license, but hard-to-read source) drivers, there's little place left for 3Dfx in my world.

    ----
  • ..use 512x512 tiles? Pardon my ignorance, but I thought they could only use 256x256 textures, unlike better cards from nVidia and Matrox...
  • I use X over a network all the time. Even though I typically use the command line for most things, it is nice to be able to run a GUI app and have it display locally. This is one feature I would NOT want to see go away.
  • Just had a thought... are DRI, DGA, the proposed anti-aliased font extension, and so forth in XFree86 part of, or planned to be part of, a current or future X11 spec? I've noticed that a lot of these things do not seem to be supported by non-XFree86 servers... we're not inadvertently embracing and extending X11 here, are we?
    ---
  • Hell, I don't really give a damn about network transparency.
    If network transparency is slowing X down as much as you imply, then, heck, it was a terrible design decision to include a feature that is used 5% of the time and slows the whole thing down even if you don't use it.
    Come on, it is nice to brag about X's networking abilities, but truly, how many people use it compared to running X locally?
  • In general with X, you do pay a fairly large price due to the network transparency issue; however, XSHM and especially XDGA do a lot to resolve this. Theoretically, DGA in full-screen mode is as fast as Windows, though I can't back this up with numbers.

    As far as the difficulty of using these extensions is concerned, you make a valid point. However, just as most people don't use Xlib to make X applications, most people don't use XDGA to write games. Instead, they use libraries such as SDL (Simple DirectMedia Layer), which abstracts XDGA (and many other platforms).
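
    For example, a minimal SDL program (sketching the early-SDL API; which backend actually runs - XDGA, plain X11, or a console framebuffer - is the library's problem, not yours):

    #include <string.h>
    #include <SDL/SDL.h>

    int main(void)
    {
        SDL_Surface *screen;

        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;
        /* A full-screen hardware surface is where the DGA backend kicks
         * in (and, like raw DGA, that path typically wants root). */
        screen = SDL_SetVideoMode(640, 480, 16,
                                  SDL_FULLSCREEN | SDL_HWSURFACE);
        if (screen == NULL) {
            SDL_Quit();
            return 1;
        }

        if (SDL_MUSTLOCK(screen))
            SDL_LockSurface(screen);
        memset(screen->pixels, 0x7f, screen->pitch * screen->h);  /* grey */
        if (SDL_MUSTLOCK(screen))
            SDL_UnlockSurface(screen);

        SDL_Flip(screen);   /* page-flip if the hardware supports it */
        SDL_Delay(2000);
        SDL_Quit();
        return 0;
    }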

  • I use it all the time at work. My main project involves coding something on an SGI Octane using a library which we only have licensed for the Octane. My workstation is an O2. So, I remotely display the program from the Octane onto the O2, and it works great. Of course, it runs much faster when/if I run it locally on the Octane, but since the Octane sits on my manager's desk and not mine, that's a bit difficult.

    Ennyhoo. The reason the TNT driver currently only does GLX is because GLX is just so much simpler to implement and keep synchronized. That's what the DRI is all about. And anyway, q3test runs adequately on my TNT even through the GLX-only driver, as long as I turn off lightmaps (admittedly, GLX really sucks when it comes to texture thrashing, and it doesn't help that the GLX driver has no AGP support yet).
    ---
    "'Is not a quine' is not a quine" is a quine.
