Carmack Donates $10k to Mesa
Emil wrote in to tell us that John Carmack has donated $10k to Mesa to assist in the development of optimized 3D drivers for release with Mesa 3.1. Very cool.
You can find out more about id Software or check out the Mesa website.
Update: 05/13 04:24 by H: In somewhat related news, RealTime wrote to say "Precision Insight (the people funded partly by RedHat?) have made available their design documents for the 3D Direct Rendering Infrastructure for XFree86. The final package will be released under an XFree86-style license."
Slashdotters: Let's match it... (Score:1)
Or we could partition the money out to Daryll Strauss and others, as a sort of "flip-side" to Carmack's donation.
Anyone interested in coordinating the effort? (Don't look at me; I'm homeless at the moment until graduate school starts.)
Chad Netzer
NT and graphics in the kernel (Score:1)
concede"
NT 3.1, 3.5 and 3.51 did not have the GDI in their
kernels. It is WIDELY accepted by users and
developers that putting GDI in ring 0 was a mistake
that really hurt NT. Now, NT is totally at the
mercy of the video drivers.
Even Microsoft has been wary about porting DirectX to NT... they'd like it to at least have a shred of a chance at being an enterprise solution. Even Microsoft isn't dumb enough to use DirectX for their desktop.
Ideally, for stability, you'd want the kernel to be as small and simple as possible, with only the necessary drivers in the kernel. Now, Linux isn't a microkernel, but the "line in the sand" is that graphics cards and video drivers are more complex and larger than the hardware and drivers for other cards.
I'm surprised (Score:1)
Interesting.
Correction: Matrox has *NOT* released full specs (Score:2)
As it stands, even once the drivers are mature, the G200 support will probably only reach Voodoo1 speeds.
Matrox has withheld specs for their "Warp engine"
which does hardware triangle setup.
Despite 3dfx's recent job offers, it doesn't look
like the video card manufacturers are buckling at
all on the issue of proprietary specs. It's all
lip service so far.
Re:GGI/KGI is doomed? (Score:2)
Because they're a *lot* more complicated.
Video cards these days ARE computers in their own
right. Modern video drivers are among the largest
and most complicated (code-wise) of any drivers
that a modern OS will use. More code == more opportunity
for bugs, and bugs in the kernel are *really* bad
news. You want to keep that stuff well isolated
from anything that could take the whole system
down.
Add to that the fact that the kernel is constantly
changing. It would take a great deal of vigilance
to keep the KGI drivers in working order. Open
source is *NOT* magic, and plenty of Linux kernel
drivers in the past have broken, and stayed
broken for a LONG time.
GGI people will tell you that it's impossible to
write stable video drivers outside the kernel.
It's not true; the drivers for my particular card
are very stable, and X has _NEVER_ crashed my
system.
GGI people, in the many arguments I've had with
them, show their true colors: they're game-playing
X-haters. They love to rail against X..."it's
big, it's bloated, it's slow, it's insecure, yada
yada yada". On my current system, X, at it's
peak requires less than 5% CPU time and less than
5% of memory. It's NOT bloated, it's NOT slow.
And there's really little of any substance to
fear from suid root X, and even that problem will
be solved without the "ripping up the floorboards"
KGI approach.
Text of announcement (Score:5)
May 1999 - John Carmack of id Software, Inc. has made a donation of
US$10,000 to the Mesa project to support its continuing development.
Mesa is a free implementation of the OpenGL 3D graphics library and id's
newest game, Quake 3 Arena, will use Mesa as the 3D renderer on Linux.
The donation will go to Keith Whitwell, who has been optimizing Mesa to
improve performance on 3d hardware. Thanks to Keith's work, many
applications using Mesa 3.1 will see a dramatic performance increase
over Mesa 3.0. The donation will allow Keith to continue working on
Mesa full time for some time to come.
For more information about Mesa see www.mesa3d.org. For more
information about id Software, Inc. see www.idsoftware.com.
Brian Paul
brian_paul@mesa3d.org
May 12, 1999
Re:Carmack and the G200 (Score:1)
/Andreas
Carmack and the G200 (Score:2)
it's really cool to have his input and point of view on what is arguably his domain...
you don't see guys like tim sweeny (sp?) (of unreal) looking at the g200 code the day q3test is supposed to ship for windows!
i believe that he (they - id) has made it (obviously) and has realized that he has made it, and now wants to do 'the right thing' (in their eyes; obviously not open-sourcing q3, but q1 soon): go cross-platform, help support work on linux, push mac to get off their asses. he looked at M$ and decided that if anyone was going to get openGL sorted out (driver-wise), id was going to have to do it themselves, or push the graphics card companies to put out full openGL drivers. q3 uses a limited subset of openGL (like 90% of the calls are to one method, so that driver guys can easily optimize), and he could have stuck w/ the miniGL hacks that people had, but instead he has single-handedly forced all the major card manufacturers to supply the world w/ working openGL drivers. this benefits the end user more than anyone else (ok, the M$ end user).
i wouldn't be surprised if, in the future, he has a few words w/ someone like matrox on behalf of the g200 group (i can dream at least)
henri
Re:*Cheer* (Score:1)
Re:*Cheer* (Score:1)
Re:Q3test with Mesa vs. Windows (Score:1)
Re:I'm surprised (Score:1)
I am definitely buying Q3A.
I hope that someday Carmack strong-arms NVidia into helping Linux out.
Re:We love you Taco but.... (Score:1)
Re:looks like the beginning of a tradition... (Score:1)
TedC
Re:No! X has *NEVER* crashed for me. Not in 5 year (Score:1)
AFAIK most video cards can't be reset from an inconsistent state without a hard reset of the host system, so this isn't entirely a software problem. Same goes for keyboards; I've had the keyb controller crash and leave me stranded without an input device other than the mouse. Technically neither the kernel nor X has crashed, but I can't even exit X using Ctrl-Alt-Backspace. It would be nice to have a hard-wired key for this purpose.
As far as XFree86 never crashing, running XF86Setup and selecting a 104-key PS/2 keyb does it for me. I'm using a standard 104-key Dell keyb, nothing fancy.
TedC
Re: correction (Score:1)
I forgot that angled brackets get interpreted as HTML tags...
TedC
Re:Hardware vendors: your turn. (Score:1)
At least a binary driver would let me run Linux on my new box. I have a Spectra 3200 TNT card but no driver for Linux. I wrote Nvidia twice asking for drivers... hopefully they will get a driver out soon.
Re:looks like the beginning of a tradition... (Score:1)
Re:*Cheer* (Score:1)
comme ca? (Score:1)
Re:GGI/KGI is doomed. Get over it. (Score:1)
Ahh but then you say, "well GGI isn't much fun without KGI, which needs to be in the kernel". Oh ho, but now we have this new toy called FBcon, which is in the kernel. And what's this taken verbatim from a GGI FAQ:
There is a glue layer called KGIcon that will allow KGI drivers to be loaded as fbcon drivers.
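For what it's worth, here is a minimal sketch of what userland drawing through the kernel framebuffer looks like. This is generic Linux fbdev code, not KGIcon itself, and it assumes /dev/fb0 exists and is running a 32-bits-per-pixel mode:

/* Minimal sketch: draw via the kernel framebuffer device.  Generic
 * Linux fbdev, not KGIcon; assumes /dev/fb0 and a 32bpp mode. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl"); return 1;
    }

    /* Map the framebuffer into this process's address space. */
    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fill the top-left 100x100 pixels, assuming 32bpp packed pixels. */
    for (unsigned y = 0; y < 100 && y < var.yres; y++)
        for (unsigned x = 0; x < 100 && x < var.xres; x++)
            *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00ff00ff;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}

The point is just that the kernel owns mode setting and exports a mapped buffer; nothing here needs to run suid root beyond having permission on the device node.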
Besides which, graphics in the kernel is far more attractive than the (as of yet) only alternative. I don't care what Linus says, running every graphical application as SUID root is not just wrong; it's bordering on lunacy. You think graphics in the kernel would be unstable? Have you ever had X Windows crash? Was the system still usable? Were you able to see anything other than oddly coloured strips on the screen?
Graphics in the kernel did not ruin NT's stability. Show me a version of NT without graphics in the kernel which is more stable and I'll concede. The fact of the matter is: under no circumstances do you put raw, direct hardware access in userland. The graphics card is no different from any other piece of hardware. Is IDE controller code put in userland? Is soundcard code put in userland? Is Ethernet card code put in userland? What is so special about graphics?
Re:No! X has *NEVER* crashed for me. Not in 5 year (Score:1)
Um, actually, that is a crash. It's an application crash (as opposed to a system crash).
Look, here's the thing I see about putting graphics in the kernel: for security reasons, it is a Good Thing. There should (ideally) be no program that ever has to run suid-root; it's simply a security risk. But put in as little of the graphics code as possible (I haven't taken much of a look at the framebuffer, but even that might be enough).
Hell, GGI as a library is quite nice. And being able to run the same app from the command-line or X and have it come up with a GUI is a Good Thing too.
So yeah, I think minimal graphics support should be in the kernel; just enough to keep things like X-servers from having to be run suid-root (that goes for Xwrapper as well). But it should be kept to a minimum, at least until they're rock-solid (and don't start with the "graphics ruin stability" bit; bad or lazy programming ruins stability, not graphics). And that support might be there already; I'm not well-versed enough in the framebuffer to be certain of that.
Re:No! X has *NEVER* crashed for me. Not in 5 year (Score:1)
Yes. However, this wasn't the video driver crashing. It was one app. A properly-written driver can handle one app crashing, just as the kernel itself can.
However, I think we're beginning to talk about different things here. What I am advocating is that the kernel support graphics primitives. The video driver can still reside outside the kernel; the kernel "graphics layer" simply provides a common graphics API which accesses these drivers. Something basic enough to build an X server on is all that's really needed (though I am intrigued by Berlin).
Just because something is hard to do doesn't mean it should not be done. It just means that it has to be monitored and done very carefully. It'd probably take an entire devel tree cycle to get it done properly. But I believe the benefits are worth it.
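To make that concrete, here is a purely hypothetical sketch of what such a thin kernel "graphics layer" interface might look like. None of these names exist in the Linux kernel; they are only meant to show how small the in-kernel surface could be:

/* Hypothetical illustration only: a minimal table of graphics
 * primitives the kernel could export.  A real driver would fill this
 * in; userland would reach it through a device node, so no client
 * would ever need to be suid root.  Not actual Linux kernel API. */
struct kgfx_mode {
    unsigned int width, height, bits_per_pixel;
};

struct kgfx_ops {
    int  (*set_mode)(const struct kgfx_mode *mode);
    void (*fill_rect)(unsigned int x, unsigned int y,
                      unsigned int w, unsigned int h,
                      unsigned int pixel);
    void (*blit)(unsigned int dst_x, unsigned int dst_y,
                 unsigned int src_x, unsigned int src_y,
                 unsigned int w, unsigned int h);
    void *(*map_framebuffer)(unsigned long *length_out);
};

/* A driver registers its ops with the layer. */
int kgfx_register_driver(const struct kgfx_ops *ops);

An X server (or GGI, or Berlin) could then be an ordinary unprivileged client of that interface.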
Re:*Cheer* (Score:1)
Re:It's both (Score:1)
Hum, I think that I'd settle for making *any* money doing what I love...
Drinks are on the house!
Re:X is still 2D. (Score:2)
Maybe we will see it someday. I think they should plan for it. This requires making a GLXcontext and an X "GC" be the same object, making OpenGL start up with an "identity" transformation that matches the X coordinates (currently it comes up undefined), and as a temporary stopgap, making all X drawing not work if the current GL transform is not the identity or if Z buffer is on (so that people don't use it and then complain later on when it does not work).
I would also like to see X *always* provide a 32-bit true color visual and fake it on the display hardware, so we could stop thinking about those stupid colormaps!
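For illustration, here's roughly what that startup "identity" transform would amount to if you set it up by hand today with plain OpenGL 1.x calls; win_w and win_h are assumed to be the window's size in pixels:

/* A minimal sketch, in standard OpenGL 1.x, of the "identity" setup
 * described above: make GL coordinates line up with X pixel
 * coordinates, origin at the top-left.  win_w/win_h are assumptions. */
#include <GL/gl.h>

void match_x_coordinates(int win_w, int win_h)
{
    glViewport(0, 0, win_w, win_h);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* Flip Y so (0,0) is the top-left corner, as in X drawing calls. */
    glOrtho(0.0, (double)win_w, (double)win_h, 0.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Per the stopgap above, X drawing would be disallowed whenever
     * the transform is non-identity or the Z buffer is enabled. */
    glDisable(GL_DEPTH_TEST);
}

With something like that in place, glVertex2i(x, y) and XDrawLine would agree on where a pixel is, which is the whole point of merging the GLXContext and the GC.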
Re:Hardware vendors: your turn. (Score:2)
Ignorant peasant (Score:1)
Re:GGI/KGI is doomed? (Score:1)
> I don't know enough about the itty-bitty details of graphics-device
> interfaces to take a particular stand on whether they should go into
> the kernel or user space or a little of both (the last seems most
> likely).
That is in fact what GGI actually does. KGI drivers are typically pretty thin (generally just enough to successfully arbitrate hardware access among a number of userland processes), and most of the rest goes on in LibGGI. LibGGI, incidentally, can work on top of a lot of things besides KGI (e.g. X), so it's compatible with other Unixes (even those without a KGI layer) too.
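As a sketch of what that looks like from the application side (assuming the LibGGI 2.x C API), the same program runs unchanged whether the target underneath is KGI, fbdev, or an X window:

/* Small sketch of a LibGGI client; assumes the LibGGI 2.x C API.
 * The application talks only to LibGGI, which picks a target
 * (KGI, X, fbdev, ...) underneath. */
#include <ggi/ggi.h>

int main(void)
{
    if (ggiInit() != 0)
        return 1;

    ggi_visual_t vis = ggiOpen(NULL);   /* default target */
    if (vis == NULL) { ggiExit(); return 1; }

    /* Ask for any mode the target can give us. */
    if (ggiSetSimpleMode(vis, GGI_AUTO, GGI_AUTO, GGI_AUTO, GT_AUTO) != 0) {
        ggiClose(vis); ggiExit(); return 1;
    }

    ggi_color white = { 0xffff, 0xffff, 0xffff, 0 };
    ggiSetGCForeground(vis, ggiMapColor(vis, &white));
    ggiDrawBox(vis, 10, 10, 100, 100);
    ggiFlush(vis);

    ggiClose(vis);
    ggiExit();
    return 0;
}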
The FSF is 501(c)(3) (Score:3)
Donations (Score:2)
Eventually programmers could be paid for their donations through others' donations.
-----------
Resume [iren.net]
XFree86 4.0 will be 3D (Score:1)
Mesa___ (Score:2)
Re:Correction: Matrox has *NOT* released full spec (Score:1)
Hardware vendors: your turn. (Score:5)
Although financial support is definitely something many spare-time Linux hackers only dream of, what the Linux 3D community really needs is the cooperation of hardware vendors. Only then will accelerated 3D on Linux be able to compete with the Windows platform.
Matrox has made the first, and biggest, step. They have released nearly their entire specification for the G200 chip. This has generated a big development effort, seemingly overnight, to finally get an accelerated 3D solution for Linux. Although the released specification was incomplete, it was enough to get rudimentary 3D support started.
As of late, Quake2 runs accelerated on G200 hardware. And best of all, the source is with us.
Recently, other 3D hardware companies seem to be dipping their toes in the water. 3DFX and nVidia have indicated their interest in Linux, with 3DFX looking to hire Linux specialists, and nVidia pledging a binary-only solution, but I argue that these are not as desirable. The whole "Linux way" revolves around community-based open source efforts, and this requires that a chip's specification be released.
Don't get me wrong. A binary-only driver is better than nothing, but not much better.
One concern among 3D hardware vendors is that releasing the specification will allow competitors an edge. True, the 3D hardware market is competitive at best and downright cutthroat at worst. But let's get real for a minute. A 3D card's lifespan is about six months. It takes this long for an even better card to come out that blows away the previous one. I find it hard to believe that in six months, a competitor can take a register-level specification, reverse engineer it, design, test, and manufacture a better chip (remember, we need a _better_ one in six months) and beat the sales of the original chip. It's just not feasible, especially since all the hardware companies already have so much invested in their own R&D.
Point is, hardware companies, please listen to reason. It is only beneficial to release your chip specifications. Upon doing so, you will (1) gain the trust and respect of the Linux community, (2) get free Linux support from the talented developers who are just foaming at the mouth to write drivers for your chip, and (3) be able to compete in the Linux 3D market, which, despite what Microsoft tells you, is not going away any time soon.
If you don't have a Linux strategy by now, you should be asking yourself why not.
Re: "Donations" (Score:2)
---
Re:We love you Taco but.... (Score:2)
Re:*Cheer* (Score:1)
Re: "Donations" (Score:2)
TNT + Linux (Score:1)
--John Riney
jwriney@awod.com
Re:comme ca? (Score:1)
Just so there's no confusion (I saw my original comment get moderated down to -1, then go up to 2): the donation to Mesa is great, and direct-to-3D-hardware support is a good solution for existing X11 platforms. My beef is with being stuck in the 2D mindset of yesteryear.
GGI, X, whatever gets the job done. (Score:1)
The Precision Insight solution mentions direct 3D rendering into a window; direct rendering of a window into a 3D environment isn't mentioned...
Maybe all that's required is extending the capabilities of a traditional (2D) window manager (that was my original idea [plumb.org], but the hardware-direct path in X wasn't there) to support 3D "rooms", and rewriting the basic apps to texture-map onto room-object surfaces...
I'm open to suggestions, and I'd rather not re-invent the wheel; I definitely prefer OpenGL-based solutions. Maybe I should take another stab at my original train of thought.
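As a rough sketch of the texture-mapping half of that idea, in plain OpenGL 1.x: assume the window's contents are already available as a 256x256 RGBA pixel buffer (capturing them from X is the hard part, and isn't shown here):

/* Rough sketch: map a window's pixels onto one wall of a 3D "room".
 * Plain OpenGL 1.x; the 256x256 RGBA buffer is an assumption. */
#include <GL/gl.h>

GLuint upload_window_texture(const void *win_pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, win_pixels);
    return tex;
}

void draw_window_on_wall(GLuint tex)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);                 /* one wall panel in the room */
    glTexCoord2f(0, 1); glVertex3f(-1.0f, -1.0f, -3.0f);
    glTexCoord2f(1, 1); glVertex3f( 1.0f, -1.0f, -3.0f);
    glTexCoord2f(1, 0); glVertex3f( 1.0f,  1.0f, -3.0f);
    glTexCoord2f(0, 0); glVertex3f(-1.0f,  1.0f, -3.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}

Doing the reverse mapping for input (mouse position on the wall back to window coordinates) is where it gets interesting.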
XFree86 4.0 looks promising. (Score:1)
XFree86 4.0 is starting to look a lot more feasible as a platform for me to develop my ideas as I had originally wanted - as an extension of the window manager's functionality, not a (self-)modified X server. My most likely plan of attack will be to add a second, active icon mode/state, where the window contents become texture-mapped onto objects in the root. The I/O will be a bit tricky, but I have a couple of texture-based solutions in mind...
Thanks for all the feedback! (I didn't expect that one comment to trigger such a large thread.)
X is still 2D. (Score:2)
IMHO, direct 3D rendering into multiple X11 windows is too limiting. I want to be able to do it the other way as well; render X11 windows into/onto 3D objects.
I'm tired of looking through windows; I want to be in that room on the other side!
Re:NT and graphics in the kernel (Score:1)
Hah! Don't underestimate the power of stupidity. Win2000 Professional (NT5) was presented at my university a week ago, and the presenter confirmed that NT5 will come with true DirectX (not simulation). When I asked him about stability, he had the balls to claim that NT is more stable than Linux... He ignored the question about the role of DirectX in the stability issue.
--
Re:We love you Taco but.... (Score:2)
Re:GGI/KGI is doomed? (Score:1)
Why not? How are graphics cards fundamentally different from, say, network cards?
Henderix (Score:1)
His donation to Mesa is a sign of his focus.
Re:It's both (Score:1)
looks like the beginning of a tradition... (Score:4)
It's good to see him putting some of the money he earns to good use (as opposed to buying one more Ferrari).
It's both (Score:1)
respect to John Carmack (Score:1)
- nr
Re:I'm surprised (Score:3)
Open Source/Free Software can create win-win situations.
Re:Hardware vendors: your turn. (Score:1)
Re:looks like the beginning of a tradition... (Score:1)
Re:Henderix (Score:1)
what is john carmack? (Score:1)
Congrats Brian! (Score:1)
Very cool!
Thad
*Cheer* (Score:1)
How about hooks into the next Quake engine to allow for greatly expanded items and attributes? The Quake engine would make a great starting point for a 3D graphical MUD environment. Look at combining a current MUD database (the latest ROM would be best, IMHO) with the graphics engine and voila, the game of the future.
Re:GGI/KGI is doomed? (Score:3)
> Because they're a *lot* more complicated.
More complex than a NIC driver? Yeah. But more complex than, say, a distributed filesystem? No, not really. As you say:
> Video cards these days ARE computers in their own right.
Yep, they're complicated, but that's because an awful lot of complexity is _in the card_. Is the _interface_ to a graphics card's functionality more complex than other kernel entities? Again, no, not really.
A lot of people have serious misunderstandings about what should or should not go into the kernel. Generally, I think things should be kept out of the kernel unless there's a good reason for putting them in, but such good reasons are not uncommon. At the same time, I think that allowing user-level access to hardware resources is a bad idea, but if it's done in a very tightly controlled way it can be great. For example, at Dolphin I worked on a shared-memory card. If it had worked properly, processes on separate nodes could share memory as easily and transparently (and almost as quickly) as processes on the same node. That would have been way cool. Of course, an important part of the hardware and software design was how to allow applications access to the mapped data areas without allowing them to access control stuff, and as of the time I left the card didn't really work very well anyway. So we have examples of how all these "rules" can and should be broken in specific cases.
Two of the best reasons for putting stuff in the kernel have to do with address spaces and synchronization. The address-space problems are readily resolvable in more advanced research-type operating systems, at least mostly, but in some ways the fundamental and unchangeable UNIX model of processes and address spaces etc. makes this extremely difficult and a new driver is still safer/easier than a severely-hacked virtual memory system even if it's harder/riskier than a user-space program. The synchronization issues are probably more important wrt putting graphics in the kernel or not. If all you're mapping into user space is frame buffers, fine; the worst that can happen is that somebody draws over somebody else's part of the screen. But as soon as you provide user-level access to any other graphics facilities at all, you start opening up a big synchronization Can O' Worms. In some ways, you end up more vulnerable than if you put the gritty bits in the kernel where proper synchronization (which may be complex and non-obvious or even impossible to do without a high level of data sharing which brings you into the address-space side of things) can be rigidly enforced.
I don't know enough about the itty-bitty details of graphics-device interfaces to take a particular stand on whether they should go into the kernel or user space or a little of both (the last seems most likely). I just think that most of the arguments I've seen on the issue are totally "off" wrt why we should or should not implement things in-kernel. There seems to be a lot more ideology and stubbornness involved than actual risk assessment or performance modeling.
Re:We love you Taco but.... (Score:1)
I LOVE the links from www. to everything.
PLEASE KEEP DOING IT
Re:who gives a rat's ass what he buys (Score:3)