Carmack Donates $10k to Mesa

Emil wrote in to tell us that John Carmack [?] has donated $10k to Mesa [?] to assist in the development of optimized 3D drivers for release with Mesa 3.1. Very cool. You can find out more about id or check out the Mesa website. Update: 05/13 04:24 by H: In somewhat related news, RealTime wrote to say "Precision Insight (the people funded partly by RedHat?) have made available their design documents for the 3D Direct Rendering Infrastructure for XFree86. The final package will be released under an XFree86-style license."
  • by Anonymous Coward
    Hey, I'm willing to donate $50 (money that I will save by NOT buying an unsupported card). If we can get 200 others to donate the same amount, we can match Carmack's donation (and make a strong statement about how much we want good, supported, HW 3D acceleration under Linux).

    Or we could parcel the money out to Daryll Strauss and others, as a sort of "flip-side" to Carmack's donations.

    Anyone interested in coordinating the effort? (Don't look at me; I'm homeless at the moment until graduate school starts :)

    Chad Netzer
  • by Anonymous Coward
    "Graphics in the kernel did not ruin NT's stability. Show me a version of NT without graphics in the kernel which is more stable and I'll
    concede"

    NT 3.1, 3.5 and 3.51 did not have the GDI in their
    kernels. It is WIDELY accepted by users and
    developers that putting GDI in ring 0 was a mistake
    that really hurt NT. Now, NT is totally at the
    mercy of the video drivers.

    Even Microsoft has been wary about porting DirectX
    to NT....they'd like it to at least have a shred of
    a chance at being an enterprise solution.

    Even Microsoft isnt dumb enough to use DirectX for their desktop.

    Ideally, for stability, you'd want the kernel to
    be as small and simple as possible...only having
    necessary drivers in the kernel. Now, Linux isnt
    a microkernel, but the "line in the sand" is that
    graphics cards and video drivers are more complex
    and larger than hardware/drivers for other cards.


  • by Anonymous Coward
    I'm surprised that no one brought up the 'business point of view' on this... that is, Carmack believes that giving away $10k will probably improve his bottom line by $10k (i.e. $10k worth of sales to Linux users).

    Interesting.
  • Matrox G200 is "slightly" accelerated, and as it stands, even once the drivers are mature, it will probably only reach Voodoo1 speeds.

    Matrox has withheld specs for their "Warp engine", which does hardware triangle setup.

    Despite 3dfx's recent job offers, it doesn't look like the video card manufacturers are buckling at all on the issue of proprietary specs. It's all lip service so far.
  • by Anonymous Coward
    "Why not? How are graphics fundamentally different than, say, network cards?"

    Because they're a *lot* more complicated.

    Video cards these days ARE computers in their own right. Modern video drivers are among the largest and most complicated (code-wise) of any drivers that a modern OS will use. More code means more opportunity for bugs, and bugs in the kernel are *really* bad news. You want to keep that stuff well isolated from anything that could take the whole system down.

    Add to that the fact that the kernel is constantly changing. It would take a great deal of vigilance to keep the KGI drivers in working order. Open source is *NOT* magic, and plenty of Linux kernel drivers in the past have broken, and stayed broken, for a LONG time.

    GGI people will tell you that it's impossible to write stable video drivers outside the kernel. It's not true; the drivers for my particular card are very stable, and X has _NEVER_ crashed my system.

    GGI people, in the many arguments I've had with them, show their colors: they're game-playing X haters. They love to rail against X... "it's big, it's bloated, it's slow, it's insecure, yada yada yada". On my current system, X at its peak requires less than 5% CPU time and less than 5% of memory. It's NOT bloated, and it's NOT slow. And there's really little of any substance to fear from suid root X, and even that problem will be solved without the "ripping up the floorboards" KGI approach.




  • by Anonymous Coward on Thursday May 13, 1999 @05:51AM (#1893692)

    May 1999 - John Carmack of id Software, Inc. has made a donation of
    US$10,000 to the Mesa project to support its continuing development.
    Mesa is a free implementation of the OpenGL 3D graphics library and id's
    newest game, Quake 3 Arena, will use Mesa as the 3D renderer on Linux.

    The donation will go to Keith Whitwell, who has been optimizing Mesa to
    improve performance on 3d hardware. Thanks to Keith's work, many
    applications using Mesa 3.1 will see a dramatic performance increase
    over Mesa 3.0. The donation will allow Keith to continue working on
    Mesa full time for some time to come.

    For more information about Mesa see www.mesa3d.org. For more
    information about id Software, Inc. see www.idsoftware.com.

    Brian Paul
    brian_paul@mesa3d.org
    May 12, 1999

  • I read that the server port was running... but that a client port was unlikely


    /Andreas
  • Carmack has been quite active on the g200 mailing list... (talked general driver optimization, looked at the code, etc.)

    it's really cool to have his input and point of view on what is, arguably, his domain...

    you don't see guys like Tim Sweeney (of Unreal) looking at the g200 code the day q3test is supposed to ship for Windows!

    i believe that he (they - id) has made it (obviously), has realized that he has made it, and now wants to do 'the right thing' (in their eyes; obviously not open-sourcing q3, but q1 soon): go cross-platform, help support work on linux, and push mac to get off their asses. he looked at M$ and decided that if anyone was going to get openGL sorted out (driver-wise), id was going to have to do it themselves, or even push the graphics card companies to put out full openGL drivers... q3 uses a limited subset of openGL (like 90% of the calls are to one method, so that driver guys can easily optimize), and he could have stuck w/ the miniGL hacks that people had, but instead he has single-handedly forced all the major card manufacturers to supply the world w/ working openGL drivers. this benefits the end user more than anyone else (ok, the M$ end user).

    i wouldn't be surprised if, in the future, he has a few words w/ someone like matrox on behalf of the g200 group (i can dream at least)

    henri
  • As soon as Glide 3 becomes available for Linux I plan to spend a lot of time making EverQuest run under Wine.
  • This potential is already there in quake*, particularly in quake2 with the dll/so game code. Anything is possible.
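A quick illustration of that dll/so mechanism: Quake2 loads its game logic from a shared object at runtime and resolves a single exported entry point, GetGameAPI (that export name is Quake2's real one; the import/export structs below are simplified stand-ins, not id's actual definitions). A minimal sketch of such a loader:

```c
/* Minimal sketch of runtime game-module loading, in the style of
 * Quake2's dll/so mechanism.  "GetGameAPI" is the real Quake2 export;
 * the game_import_t/game_export_t structs here are simplified
 * placeholders, not the real id Software definitions. */
#include <dlfcn.h>
#include <stdio.h>

typedef struct { void (*dprintf)(const char *fmt, ...); } game_import_t;
typedef struct { void (*Init)(void); void (*Shutdown)(void); } game_export_t;

int main(void)
{
    void *handle = dlopen("./gamei386.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the single entry point; everything else is exchanged
     * through the import/export structs. */
    game_export_t *(*GetGameAPI)(game_import_t *) =
        (game_export_t *(*)(game_import_t *))dlsym(handle, "GetGameAPI");
    if (GetGameAPI) {
        game_import_t imports = { 0 };   /* engine services would go here */
        game_export_t *game = GetGameAPI(&imports);
        if (game && game->Init)
            game->Init();                /* hand control to the mod */
    }

    dlclose(handle);
    return 0;
}
```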
  • Yeah, but Mesa (an OpenGL implementation) is layered on top of Glide in the case of 3Dfx.
  • A valid point, but I'm not complaining.

    I am definitely buying Q3A.

    I hope that someday Carmack strong-arms NVidia into helping Linux out.
  • I love the links. "Everything" should be cross referenced. Why else use hypertext?
  • Dude, it's Romero who blows $$$ on overhyped autos.. Carmack actually codes...

    ...and drives his Ferrari [caranddriver.com] to work.

    TedC

  • Kernel video driver crashes -> whole system falls over.

    AFAIK most video cards can't be reset from an inconsistent state without a hard reset of the host system, so this isn't entirely a software problem. Same goes for keyboards; I've had the keyboard controller crash and leave me stranded without an input device other than the mouse. Technically neither the kernel nor X has crashed, but I can't even exit X using [ctrl][alt][backspc]. It would be nice to have a hard-wired key for this purpose.

    As for XFree86 never crashing: running XF86Setup and selecting a 104-key PS/2 keyboard does it for me. I'm using a standard 104-key Dell keyboard, nothing fancy.

    TedC

  • Technically neither the kernel nor X has crashed, but I can't even exit X using [ctrl][alt][backspc].

    I forgot that angled brackets get interpreted as HTML tags...

    TedC

  • At least a binary driver would let me run Linux on my new box. I have a Spectra 3200 TNT card but no driver for Linux. I wrote Nvidia twice asking for drivers... hopefully they will get a driver out soon.

  • IIRC, it was more like $30k...
  • That game is already here. It's called EverQuest.

  • As much as I enjoy reading random rants about kernel stability, it has nothing to do with GGI. GGI does not affect the kernel. GGI is not a patch for the kernel. GGI is not in the kernel.

    Ahh, but then you say, "well, GGI isn't much fun without KGI, which needs to be in the kernel". Oh ho, but now we have this new toy called FBcon, which is in the kernel. And what's this, taken verbatim from a GGI FAQ:
    There is a glue layer called KGIcon that will allow KGI drivers to be loaded as fbcon drivers.

    Besides which, graphics in the kernel is far more attractive than the (as of yet) only alternative. I don't care what Linus says; running every graphical application as SUID root is not just wrong, it's bordering on lunacy. You think graphics in the kernel would be unstable? Have you ever had X Windows crash? Was the system still usable? Were you able to see anything other than oddly coloured strips on the screen?

    Graphics in the kernel did not ruin NT's stability. Show me a version of NT without graphics in the kernel which is more stable and I'll concede. The fact of the matter is: under no circumstances do you put raw, direct hardware access in userland. The graphics card is no different than any other piece of hardware. Is IDE controller code put in userland? Is soundcard code put in userland? Is Ethernet card code put in userland? What is so special about graphics?
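The FBcon/KGIcon point above is worth making concrete. With a framebuffer device in the kernel, an unprivileged process just opens /dev/fb0 and mmap()s it; no suid-root is needed for basic graphics. A minimal sketch against the real Linux fbdev interface, assuming a 32-bpp mode and trimming error handling for brevity:

```c
/* Minimal sketch of drawing through the in-kernel framebuffer device
 * (fbdev), the interface the fbcon discussion above refers to.
 * Assumes a 32-bpp /dev/fb0; most error handling trimmed. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution, depth */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* bytes per scanline */

    size_t len = (size_t)finfo.line_length * vinfo.yres;
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb != MAP_FAILED) {
        /* Plot one white pixel at (100, 100), assuming 32 bpp. */
        size_t off = 100 * finfo.line_length
                   + 100 * (vinfo.bits_per_pixel / 8);
        *(uint32_t *)(fb + off) = 0x00FFFFFF;
        munmap(fb, len);
    }
    close(fd);
    return 0;
}
```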
  • Occasionally, very rarely, and not even once since XFree86 3.3, there have been a few occasions where Netscape grabbed control of the keyboard and mouse and wouldn't let go... and a reboot was required. BUT THAT IS *NOT* A CRASH! The Linux kernel was still functioning properly.

    Um, actually, that is a crash. It's an application crash (as opposed to a system crash).

    Look, here's the thing I see about putting graphics in the kernel: for security reasons, it is a Good Thing. There should (ideally) be no program that ever has to run suid-root; it's simply a security risk. But put in as little of the graphics code as possible (I haven't taken much of a look at the framebuffer, but even that might be enough).

    Hell, GGI as a library is quite nice. And being able to run the same app from the command-line or X and have it come up with a GUI is a Good Thing too.

    So yeah, I think minimal graphics support should be in the kernel; just enough to keep things like X-servers from having to be run suid-root (that goes for Xwrapper as well). But it should be kept to a minimum, at least until they're rock-solid (and don't start with the "graphics ruin stability" bit; bad or lazy programming ruins stability, not graphics). And that support might be there already; I'm not well-versed enough in the framebuffer to be certain of that.
  • Kernel video driver crashes -> whole system falls over.

    Yes. However, this wasn't the video driver crashing. It was one app. A properly-written driver can handle one app crashing, just as the kernel itself can.

    However, I think we're beginning to talk about different things here. What I am advocating is that the kernel support graphics primitives. The video driver can still reside outside the kernel; the kernel "graphics layer" simply provides a common graphics API which accesses these drivers. Something basic enough to build an X server on is all that's really needed (though I am intrigued by Berlin).

    Just because something is hard to do doesn't mean it should not be done. It just means that it has to be monitored and done very carefully. It'd probably take an entire devel tree cycle to get it done properly. But I believe the benefits are worth it.
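To make the "kernel graphics layer" proposal concrete, here is a purely hypothetical sketch of such an interface: a tiny, driver-neutral set of ioctls that an X server could target without running suid-root. None of these names exist in any real kernel; they only illustrate the shape of the idea.

```c
/* Purely hypothetical sketch of the "kernel graphics layer" the poster
 * describes: a small, driver-neutral ioctl interface that an X server
 * could sit on without being suid-root.  None of these names exist in
 * any real kernel; they only illustrate the shape of the idea. */
#include <linux/ioctl.h>

struct kgfx_mode {            /* hypothetical */
    int width, height, bpp;
};

struct kgfx_blit {            /* hypothetical */
    int x, y, w, h;
    const void *pixels;       /* client-supplied pixel data */
};

#define KGFX_SET_MODE  _IOW('G', 1, struct kgfx_mode)
#define KGFX_BLIT      _IOW('G', 2, struct kgfx_blit)
#define KGFX_WAIT_SYNC _IO('G', 3)   /* kernel arbitrates the hardware */
```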
  • I don't think you can set up your own EQ servers with new worlds and rules, or build on the world(s). And you have to pay for time playing. (Probably why they only allow games on their servers.)
  • I'd someday like to be able to make enough money to afford an F50 doing what I love...

    Hum, I think that I'd settle for making *any* money doing what I love...

    Drinks are on the house!
  • I agree, there needs to be better integration between normal X drawing and OpenGL drawing. It would help a lot if the current OpenGL transformation applied to everything, so that you could set up a perspective transform, then do X drawing, and have it come out as though drawn on a flat surface angled that way in space.

    Maybe we will see it someday. I think they should plan for it. This requires making a GLXcontext and an X "GC" be the same object, making OpenGL start up with an "identity" transformation that matches the X coordinates (currently it comes up undefined), and as a temporary stopgap, making all X drawing not work if the current GL transform is not the identity or if Z buffer is on (so that people don't use it and then complain later on when it does not work).

    I would also like to see X *always* provide a 32-bit true color visual and fake it on the display hardware, so we could stop thinking about those stupid colormaps!
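The "identity transformation that matches the X coordinates" is easy to pin down in code. A minimal sketch, assuming an already-current GLX context of size w by h:

```c
/* A minimal sketch of the "identity" setup the poster asks for: make
 * OpenGL's coordinate system line up with X's pixel grid (origin at
 * the top-left, y growing downward), so GL drawing and X drawing
 * would agree.  Assumes an already-current GLX context of size w x h. */
#include <GL/gl.h>

void match_x_coordinates(int w, int h)
{
    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* Left, right, bottom, top: swapping bottom/top puts the origin
     * at the top-left corner, exactly like an X drawable. */
    glOrtho(0.0, (double)w, (double)h, 0.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* With this in place, glVertex2i(x, y) lands on the same pixel
     * that XDrawPoint(dpy, win, gc, x, y) would touch. */
}
```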

  • You're probably correct here. I might just be blind to something, but I can't see anything but benefits coming from releasing hardware specs. They won't have to pay in-house developers to release binary drivers. I'd also be quicker to buy some piece of hardware if I knew that it had good drivers.
  • He should have bought an RX-7
  • > I don't know enough about the itty-bitty details of graphics-device
    > interfaces to take a particular stand on whether they should go into
    > the kernel or user space or a little of both (the last seems most
    > likely).

    That is in fact what GGI actually does. KGI drivers are typically pretty thin (generally just enough to successfully arbitrate hardware access among a number of userland processes), and most of the rest goes on in LibGGI. LibGGI, incidentally, can work on top of a lot of stuff besides KGI (e.g. X), so it's compatible with other Unixes (even those without a KGI layer) too.
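For the curious, a minimal LibGGI program looks roughly like this; the point is that the same code runs on a KGI console, a bare framebuffer, or inside X, because the target is chosen at ggiOpen() time. The call names follow the libggi API as best I can recall, so treat the exact signatures as an assumption to verify against the GGI documentation:

```c
/* Minimal LibGGI sketch: the same program runs on a KGI console, a
 * plain framebuffer, or inside X, because LibGGI picks the display
 * target at ggiOpen() time.  Call names follow the libggi API as I
 * recall it; verify against ggi(7) before relying on them. */
#include <ggi/ggi.h>

int main(void)
{
    if (ggiInit() != 0)
        return 1;

    ggi_visual_t vis = ggiOpen(NULL);   /* default target: KGI, fbdev, or X */
    if (!vis) {
        ggiExit();
        return 1;
    }

    ggiSetSimpleMode(vis, 320, 200, GGI_AUTO, GT_AUTO);

    ggi_color white = { 0xffff, 0xffff, 0xffff, 0 };
    ggiSetGCForeground(vis, ggiMapColor(vis, &white));
    ggiDrawLine(vis, 0, 0, 319, 199);   /* one diagonal line */
    ggiFlush(vis);

    ggiClose(vis);
    ggiExit();
    return 0;
}
```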
  • by Booker ( 6173 ) on Thursday May 13, 1999 @06:36AM (#1893719) Homepage
    The Free Software Foundation is a 501(c)(3) organization, so donations [fsf.org] to them are tax deductible. I don't know if they want hardware or not, but money is always appreciated. :-)
  • People donate stuff all the time to these causes. I have friends who have given up video and sound cards so that someone could develop Linux drivers for them. I wonder if at some point these will be seen as tax write-offs and such donations will increase a thousand-fold.

    Eventually, programmers could be paid for their work through others' donations.
    -----------
    Resume [iren.net]
  • It will have OpenGL integration at a better level than Win9x/DirectX, and eventually full 3D hardware support for everything. Other cool features will be TrueType font support and antialiasing (YES!!). We'll all be able to play Q3 Arena faster and with full hardware acceleration someday ;-). Carmack should really have given the money to XFree86, seeing as how their project is much more significant to most users, to Linux, and to him.
  • I noticed that on their web page they have asked folks not to use MesaGL for legal reasons. How about MesaGPL :)

  • That's why I said the specs were incomplete :)
  • by Stiletto ( 12066 ) on Thursday May 13, 1999 @05:53AM (#1893727)

    Although financial support is definitely something many spare-time Linux hackers only dream of, what the Linux 3D community really needs is the cooperation of hardware vendors. Only then will accelerated 3D on Linux be able to compete with the Windows platform.

    Matrox has made the first, and biggest, step. They have released nearly the entire specification for the G200 chip. This has generated a big development effort, seemingly overnight, to finally get an accelerated 3D solution for Linux. Although the released specification was incomplete, it was enough to get rudimentary 3D support started.

    As of late, Quake2 runs accelerated on G200 hardware. And best of all, the source is with us.

    Recently, other 3D hardware companies seem to be dipping their toes in the water. 3DFX and nVidia have indicated their interest in Linux, with 3DFX looking to hire Linux specialists, and nVidia pledging a binary-only solution, but I argue that these are not as desirable. The whole "Linux way" revolves around community-based open source efforts, and this requires that a chip's specification be released.

    Don't get me wrong. A binary-only driver is better than nothing, but not much better.

    One concern among 3D hardware vendors is that releasing the specification will give competitors an edge. True, the 3D hardware market is competitive at best and downright cutthroat at worst. But let's get real for a minute. A 3D card's lifespan is about six months; it takes that long for an even better card to come out that blows away the previous one. I find it hard to believe that in six months a competitor can take a register-level specification, reverse engineer it, design, test, and manufacture a better chip (remember, we need a _better_ one in six months), and beat the sales of the original chip. It's just not feasible, especially since all the hardware companies already have so much invested in their own R&D.

    Point is, hardware companies, please listen to reason. It is only beneficial to release your chip specifications. Upon doing so, you will 1. gain the trust and respect of the Linux community, 2. get free Linux support from the talented developers who are just foaming at the mouth to write drivers for your chip, and 3. be able to compete in the Linux 3D market, which, despite what Microsoft tells you, is not going away any time soon.

    If you don't have a Linux strategy by now, you should be asking yourself: why not?
  • Ummm, but don't hackers get hungry and need shelter? I see no reason not to use the word "donation" in this case. Besides, I don't agree with your definition of how the word should be used anyway. If you give money to a cause that you think is worthy of your money, then it is a donation.
    ---
  • I like the Everything links, myself... they're actually *useful* when you're not familiar with something, and Everything is an interesting project, IMO. It's not like the links are hard to ignore or anything... just a pleasant little touch.
  • It is widely expected that the Quake source will be released when Q3 is finished.
  • And a donation to a library only helps people who can read. It's not that big of a difference, after all. It is a donation for something that is freely available to all, just like a donation to a library or museum. Or maybe it's more like sponsoring a poet or author on the condition that he/she release the work to the public for free. The point is, there is much more to charity and donations than feeding and clothing the homeless. Sure that's a great way to make the world a better place, but it's far from the only way.
  • Whilst playing Q3test last night, John mentioned in passing that NVidia has (mostly) working TNT drivers.

    --John Riney
    jwriney@awod.com
  • Sweet! That's close to what I had in mind. What I have in mind is more a hybrid X-server/window-manager where the individual windows are mapped/rendered into the 3D landscape. From what's shown at that site, it might just be possible. :-)

    Just so there's no confusion (I saw my original comment get moderated down to -1, then go up to 2): the donation to Mesa is great, and direct-to-3D-hardware support is a good solution for existing X11 platforms. My beef is with being stuck in the 2D mindset of yesteryear.
  • I've never used GGI, so my question to you has to be: can I at least do the same thing with an XFree86 4.0 server? i.e., is it possible to map an active xterm session window onto, say, the surface of a tilted square?

    The Precision Insight solution mentions direct 3D rendering into a window; direct rendering of a window into a 3D environment isn't mentioned...

    Maybe all that's required is extending the capabilities of a traditional (2D) window manager (that was my original idea [plumb.org], but the hardware-direct path in X wasn't there) to support 3D "rooms", and rewriting the basic apps to texture-map onto room-object surfaces...

    I'm open to suggestions, and I'd rather not re-invent the wheel; I definitely prefer OpenGL-based solutions. Maybe I should take another stab at my original train of thought.
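One way to prototype that "window onto a tilted square" idea without any new server support is to pull the window's pixels over the X protocol and upload them as an OpenGL texture each frame. Slow, but enough to experiment with. A rough sketch; the GL_BGRA upload path and the pixel layout are assumptions that would have to be checked per visual, and GL 1.x would additionally require power-of-two texture sizes:

```c
/* Sketch of the poster's idea: grab an X window's pixels and use them
 * as an OpenGL texture on a tilted quad.  XGetImage and glTexImage2D
 * are real calls; the GL_BGRA upload assumes a 24/32-bit ZPixmap whose
 * byte order matches the EXT_bgra extension -- an assumption to verify
 * per visual.  GL 1.x would also need power-of-two texture sizes. */
#include <GL/gl.h>
#include <X11/Xlib.h>

GLuint window_to_texture(Display *dpy, Window win, int w, int h)
{
    XImage *img = XGetImage(dpy, win, 0, 0, w, h, AllPlanes, ZPixmap);
    if (!img)
        return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, img->data);

    XDestroyImage(img);
    return tex;   /* map onto any quad in the 3D "room" */
}
```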
  • Yup, it mentions the 3D support and the like.

    XFree86 4.0 is starting to look a lot more feasible as a platform for me to develop my ideas as I had originally wanted - as an extension of the window manager's functionality, not a (self-)modified X server. My most likely plan of attack will be to add a second, active icon mode/state, where the window contents become texture-mapped onto objects in the root. The I/O will be a bit tricky, but I have a couple of texture-based solutions in mind...

    Thanks for all the feedback! (didn't expect that one comment to trigger such a large thread :-)
  • My own little pet-peeve is that X is still stuck in/with the 2D-window metaphor.

    IMHO, direct 3D rendering into multiple X11 windows is too limiting. I want to be able to do it the other way as well; render X11 windows into/onto 3D objects.

    I'm tired of looking through windows; I want to be in that room on the other side!
  • Even Microsoft has been wary about porting DirectX to NT....they'd like it to at least have a shred of a chance at being an enterprise solution. Even Microsoft isnt dumb enough to use DirectX for their desktop.

    Hah! Don't underestimate the power of stupidity. Win2000 Professional (NT5) was presented at my university a week ago, and the presenter confirmed that NT5 will come with true DirectX (not simulation). When I asked him about stability, he had the balls to claim that NT is more stable than Linux... He ignored the question about the role of DirectX in the stability issue.

    --

  • I like the links to Everything. It's not as though they get in the way.
  • Graphics drivers don't belong in the kernel, not now, not ever

    Why not? How are graphics fundamentally different than, say, network cards?
  • He's like Jimi Hendrix - very narrowly focused and very good at what he does. Of course, he may not be a well-rounded person, but why does it matter?

    His donation to Mesa is a sign of his focus.

  • and also a twin-turbo setup on his Testarossa; that thing is damn fast. The F50 showed almost no increase with the turbos though, 'cuz it hasn't been tweaked at all. geez, i'd love to drop $50k on a turbo kit and not care :)
  • by mat.h ( 25728 ) on Thursday May 13, 1999 @05:39AM (#1893745)
    Very cool, indeed. A while ago Carmack donated $10k to the FSF, too, because Quake (the original, true, DOS version) was built with djgpp. If I remember his .plan update correctly, he did that after winning the cash in Las Vegas...

    It's good to see him putting some of the money he earns to good use (as opposed to buying one more Ferrari :-) Seems he just wants the world to be a better place. Technically.
  • Carmack has 3 Ferraris. Actually, he had 4, then decided to give the 348 (I think) away in the tourney last year. And he has quite the penchant for hopping his cars up - he put a twin turbo on his new F50.
  • This is great; kudos to John Carmack. For pushing OpenGL as an alternative to DirectX on the gaming scene. For being such a brilliant and sharp programmer. For supporting Linux and UNIX. He has done much good pushing consumer-level OpenGL support. How many low-end graphics cards would support OpenGL without the great games Quake and Quake2, do you think?

    - nr

  • by Alan Cox ( 27532 ) on Thursday May 13, 1999 @01:18PM (#1893748) Homepage
    I hope it does boost his sales by over $10k. Open Source/Free Software can create win-win situations.
  • I'd suspect that the chip vendors don't want to release design details because it would open the door to patent infringement lawsuits. I'm sure they all use each other's designs to some extent; keeping the details under wraps is the best way to keep the other guys guessing.
  • Thanks for the link. A good story for any geek who wants to feel the way normal people do when they read geek texts.
  • That's how you get to be the best right?

  • god of 3d coding, oh wait isn't reality 3d? ooops, nope.

  • I am thrilled to see Mesa get this kind of support. I went to school with Brian Paul (the creator of Mesa), and I am not at all surprised that a project of his has gained such recognition. I can recall hanging out in Brian's dorm room (more than a decade ago) as he demonstrated a ray-tracing program he had written for his Atari computer. It took him several days to crank out an animation on that old beast, but the final result blew my socks off!

    Very cool!

    Thad

  • Thanks John, you're a wise and generous man: Wise to help foster growth and acceptance of Linux, generous with your dollars. I'd nominate you for sainthood but I suspect da Pope wouldn't understand (yet!)

    How about hooks into the next Quake engine to allow for greatly expanded items and attributes? The Quake engine would make a great starting point for a 3D graphical MUD environment. Look at combining a current MUD database (the latest ROM would be best, IMHO) with the graphics engine and voila, the game of the future.

  • by Salamander ( 33735 ) <jeff.pl@atyp@us> on Thursday May 13, 1999 @11:47AM (#1893755) Homepage Journal
    >"Why not? How are graphics fundamentally different than, say, network cards?"
    >
    > Because they're a *lot* more complicated.

    More complex than a NIC driver? Yeah. But more complex than, say, a distributed filesystem? No, not really. As you say:

    >Video cards these ARE computers in their own right.

    Yep, they're complicated, but that's because an awful lot of complexity is _in the card_. Is the _interface_ to a graphics card's functionality more complex than other kernel entities? Again, no, not really.

    A lot of people have serious misunderstandings about what should or should not go into the kernel. Generally, I think things should be kept out of the kernel unless there's a good reason for putting them in, but such good reasons are not uncommon. At the same time, I think that allowing user-level access to hardware resources is a bad idea, but if it's done in a very tightly controlled way it can be great. For example, at Dolphin I worked on a shared-memory card. If it had worked properly, processes on separate nodes could share memory as easily and transparently (and almost as quickly) as processes on the same node. That would have been way cool. Of course, an important part of the hardware and software design was how to allow applications access to the mapped data areas without allowing them to access control stuff, and as of the time I left the card didn't really work very well anyway. So we have examples of how all these "rules" can and should be broken in specific cases.

    Two of the best reasons for putting stuff in the kernel have to do with address spaces and synchronization. The address-space problems are readily resolvable in more advanced research-type operating systems, at least mostly. But the fundamental and unchangeable UNIX model of processes and address spaces makes this extremely difficult, and a new driver is still safer/easier than a severely-hacked virtual memory system, even if it's harder/riskier than a user-space program. The synchronization issues are probably more important wrt putting graphics in the kernel or not. If all you're mapping into user space is frame buffers, fine; the worst that can happen is that somebody draws over somebody else's part of the screen. But as soon as you provide user-level access to any other graphics facilities at all, you start opening up a big synchronization Can O' Worms. In some ways you end up more vulnerable than if you put the gritty bits in the kernel, where proper synchronization (which may be complex and non-obvious, or even impossible without a level of data sharing that brings you back into the address-space issues) can be rigidly enforced.

    I don't know enough about the itty-bitty details of graphics-device interfaces to take a particular stand on whether they should go into the kernel or user space or a little of both (the last seems most likely). I just think that most of the arguments I've seen on the issue are totally "off" wrt why we should or should not implement things in-kernel. There seems to be a lot more ideology and stubbornness involved than actual risk assessment or performance modeling.
  • I checked out Everything a few days ago and thought it was a _very_cool_ concept. The idea that you can just keep browsing and clicking on the keywords that spark some interest in you is a great idea, and is rooted at the core of the whole 'surf the net' philosophy. Having other people provide their own definitions for words or phrases you may take for granted is highly educational and lots of fun!

    I LOVE the links from www. to everything.

    PLEASE KEEP DOING IT
  • by Afrosheen ( 42464 ) on Thursday May 13, 1999 @07:55AM (#1893757)
    He brings joy to millions of computer owners worldwide. He probably grew up a total dork, parked behind his 286 for most of his childhood, didn't kiss a girl until he was 20, and you guys bag on him for buying an expensive car. Weak. He's got the right idea: spend your teen years learning how to code extremely well, then get rich, buy a fast car, and get some action when you're older. Smart guy, this one.

"There are things that are so serious that you can only joke about them" - Heisenberg

Working...