
ESR on Quake 1 Open Source Troubles

ESR has chimed in to say his bit on the recent Quake problems that popped up following the source release. It's definitely a problem that will happen again and something that needs to be handled. Read what he has to say about it.

The following was written by ESR. You know who he is already ;)

The Case of the Quake Cheats

The open-source community got a lump of coal in its Yule 1999 stocking from renowned hacker John Carmack, the genius behind id Software and such games as Wolfenstein 3D, Doom, and the Quake series. Carmack's .plan file noted a problem that has swiftly emerged since the Quake 1 source was released under the GPL; it seems that some people have used their ability to modify the Quake client as a mechanism for cheating.

This may at first sight seem a trivial issue -- who cares if a few testosterone-pumped teenagers cheat at a shoot-em-up game? But in today's internetworked world, countermeasures against Quake cheating arguably provide an important laboratory model for cases that are decidedly not trivial, such as electronic commerce, securities trading, and banking.

The Quake model is made particularly relevant by its open-source connection. Open source advocates (including me) have been making a strong argument over the last two years that open-source software such as Linux and Apache is fundamentally more secure than its closed-source competitors. Cryptographers have long understood that no encryption system can really be considered well-tested until it has been energetically and repeatedly attacked by experts who have full knowledge of the algorithms it uses. Open-source advocates argue that there is nothing special about cryptography but its high stakes -- that, in general, open peer review is the only road to systems which are not merely accidentally secure by obscurity, but robustly secure by design.

Carmack, therefore, caused a bit of a flutter on Slashdot when he went on to suggest that only a pair of closed-source encryption programs could solve the Quake-cheating problem. The problem, as he correctly pointed out, is that asking the open-source client to verify its own correctness won't work; a sufficiently clever cracker could always write a client that would simulate the right kinds of responses and then cheat.

A debate ensued, with several people pointing out that trusting the client half of a client-server pair is bad security policy whether the client code is open or closed. Fundamentally, there's no way for the server to be sure it isn't talking to a clever simulation of `correct' behavior. Thus, opening the source to Quake 1 didn't create security problems, it merely exposed one that was already present (and exploitable, and for all anyone knew already secretly exploited) in the design of the game.

Carmack weighed in to make clear that the Quake-cheating problem is subtler than many of the debaters were assuming. It's not possible for a cheating client to give a player infinite ammunition or life points; the server does not in fact trust the client about these things, and manages them itself. This is correct design; whether or not it's open-source, a bank should not depend on a customer's client software to tell the bank what the customer's balance is!
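The server-authoritative design described above can be sketched in a few lines. This is a hedged illustration (all names invented, not Quake's actual code): the server keeps its own ammunition count and ignores anything the client claims about it.

```python
# Minimal sketch of server-authoritative state (hypothetical names, not
# Quake's real code): the server tracks ammo itself, so a hacked client
# claiming infinite ammo changes nothing.

class PlayerState:
    def __init__(self, ammo=25):
        self.ammo = ammo

class Server:
    def __init__(self):
        self.players = {}

    def join(self, player_id):
        self.players[player_id] = PlayerState()

    def handle_fire(self, player_id, claimed_ammo=None):
        """Process a fire request. `claimed_ammo` from the client is ignored."""
        state = self.players[player_id]
        if state.ammo <= 0:
            return False          # no ammo server-side: the shot never happens
        state.ammo -= 1           # the server, not the client, decrements
        return True
```

A client patched to report `claimed_ammo=9999` still runs dry after 25 shots, because the only count that matters lives on the server.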

Carmack observes that "the [cheating] clients/proxies focus on two main areas -- giving the player more information than they should have, and performing actions more skillfully."

The serious "more information" cheats depend on a performance hack. In order to hold down the number of updates of the Quake world it has to pass to the client, the server gives the client information about the location of objects and opponents that the player can't yet see, but might be able to see before the next update. The server then counts on the client not to make those things visible until they "should" be (e.g., until the user gets to a particular location in the maze the client is simulating). A cheating client can reveal an opponent seconds before the player would turn the corner and expose himself to fire.
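The trade-off can be sketched as follows. This is a simplified illustration with invented names, not Quake's actual visibility code: the server discloses entities within an "anticipation radius" beyond what the player can currently see. A radius of zero would be minimum disclosure, but real servers widen it so the client can render smoothly between updates -- and everything inside the widened radius is available to a cheating client.

```python
# Hedged sketch of the anticipatory-disclosure trade-off (invented names).
# visible_range: what the player can legitimately see right now.
# anticipation:  extra radius sent ahead of time for smooth rendering --
#                this is exactly the information a cheating client exposes.

import math

def entities_to_send(player_pos, entities, visible_range, anticipation):
    """Return the entities disclosed to this client for the next update."""
    disclosed = []
    for name, pos in entities.items():
        if math.dist(player_pos, pos) <= visible_range + anticipation:
            disclosed.append(name)
    return disclosed
```

With `anticipation=0` the opponent around the corner is never sent, so no client-side hack can reveal it; with a nonzero radius the cheat becomes possible, which is the essay's point about trading security for performance.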

The "more skillfully" cheats substitute the computer's speed and accuracy for tasks that the server and other players expect the player's hands and brain to be performing. Carmack talks about "aim bots" which automatically lock the player's gun onto visible opponents and fire it with inhuman accuracy.

And indeed it's hard to see how either of these sorts of cheats can be prevented given an open-source client and no way independent of the client itself to check that the client is honest. Thus Carmack's suggestion of a closed-source Quake-launcher program that would take a checksum of the client binary, communicate with the server to make sure the binary is on an approved list, and then handle communication with the server over a cryptographically-secured channel.
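Carmack's hypothetical launcher might look something like the sketch below (a rough illustration, not any real protocol; the approval-list mechanics are invented). As the rest of the essay argues, this only raises the bar: whoever controls the machine can feed the launcher a pristine binary while actually running a patched one.

```python
# Sketch of a checksum-based launcher check (hypothetical scheme).
# The launcher hashes the client binary and compares the digest against
# a list of server-approved builds.

import hashlib

APPROVED = set()  # digests of blessed client builds, fetched from the server

def register_approved(binary_bytes):
    APPROVED.add(hashlib.sha256(binary_bytes).hexdigest())

def launcher_check(binary_bytes):
    """Return True if this client binary is on the approved list."""
    return hashlib.sha256(binary_bytes).hexdigest() in APPROVED
```

Note the weakness: the check verifies whatever bytes it is *handed*, not what is actually executing -- a cracked launcher can hash the official binary while launching a modified one.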

Carmack's argument seems watertight. What's wrong with this picture? Are we really looking at a demonstration that closed source is necessary for security? And if not, what can we learn about securing our systems from the Quake case?

I think one major lesson is simple. It's this: if you want a really secure system, you can't trade away security to get performance. Quake makes this trade by sending anticipatory information for the client to cache in order to lower its update rate. Carmack read this essay in draft and commented "With a sub-100 msec ping and extremely steady latency, it would be possible to force a synchronous update with no extra information at all, but in the world of 200-400 msec latency [and] low bandwidth modems, it just plain wouldn't work." So it may have been a necessary choice under the constraints for which Quake was designed, but it violates the first rule of good security design: minimum disclosure.

When you do that, you should expect to get cracked, whether your client is open or closed -- and, indeed, Carmack himself points out that the see-around-corners cheat can be implemented by a scanner proxy sitting between a closed client and the server and filtering communications from server to client.

Closing the source of the client may obscure the protocol between client and server, but that won't stop a clever cracker with a packet sniffer and too much time on his hands. Carmack confirms that even without the packet sniffer or access to source there are a variety of ways to flush out anticipatory information, ranging from tweaking the gamma and brightness on your screen to banish shadows to hacking your graphics card's device drivers to do transforms of the world model (such as making walls transparent).

We're back in familiar territory here; the history of computer security is littered with the metaphorical (and in some cases maybe literal) corpses of people who thought security through obscurity was sufficient. Crackers love that kind of naivete and prey on it ruthlessly.

The aim-bot cheat is trickier to prevent. The difference between human and aim-bot actions is measured only in milliseconds of timing. Changing the protocol to stop it from leaking information won't banish aim-bots; it would take the server doing statistical analysis of player action timings to even detect them, and (as Carmack points out) "that is an arms race that will end with skilled human players eventually getting identified as subtle bots."
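The statistical analysis ESR mentions might look like the sketch below. The thresholds are invented for illustration: flag a player whose time from "opponent visible" to "shot fired" is consistently below a plausible human reaction floor. The essay's caveat applies directly -- tighten the threshold and skilled humans get flagged too.

```python
# Hedged sketch of server-side aim-bot detection (thresholds invented).
# Reaction times are measured server-side: milliseconds from an opponent
# becoming visible to an accurate shot being fired.

from statistics import mean

HUMAN_FLOOR_MS = 120   # assumed lower bound on sustained human reaction time

def looks_like_aimbot(reaction_times_ms, min_samples=10):
    if len(reaction_times_ms) < min_samples:
        return False                       # not enough data to judge fairly
    return mean(reaction_times_ms) < HUMAN_FLOOR_MS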

Fortunately, the aim-bot cheat is also much less interesting from a general security point of view. It's hard to imagine anything but a twitch game in which the client user can cheat effectively by altering the millisecond-level timing of command packets. So the real lesson of both cheats may be that a closed-source program like Carmack's hypothetical secured program launcher is indeed a good idea for security -- but only if you're a hyperadrenalized space marine on a shooting spree.

(Any computer game at which computers are better than most humans has analogous cheats, some of which aren't even detectable in principle. Carmack observes "correspondence chess has been subverted from its original intent by players using computers." This isn't something security design can fix.)

If Quake had been designed to be open-source from the beginning, the performance hack that makes see-around-corners possible could never have been considered -- and either the design wouldn't have depended on millisecond packet timing at all, or aim-bot recognition would have been built in to the server from the beginning. This teaches our most important lesson -- that open source is the key to security because it changes the behavior of developers.

Open source keeps designers honest. By depriving them of the crutch of obscurity, it forces them towards using methods that are provably secure not only against known attacks but against all possible attacks by an intruder with full knowledge of the system and its source code. This is real security, the kind cryptographers and other professional paranoids respect.

It's the kind of security the Linux kernel and the Apache webserver have, and the kind people victimized by the Melissa and Chernobyl viruses and Back Orifice and the latest Microsoft-crack-of-the-week don't have. If you're betting your personal privacy or your business's critical functions on the integrity of software, it's the kind of security you want, too.

To recap, the real lessons of the Quake cheats are (a) never trust a client program to be honest, (b) you can't have real security if you trade it away to get performance, (c) real security comes not from obscurity but from minimum disclosure, and most importantly (d) only open source can force designers to use provably secure methods.

So, far from being a telling strike against open source, the case of the Quake cheats actually highlights the kinds of biases and subtle design errors that creep into software when it's designed for closed-source distribution and performance at the expense of security. These may be something we can live with in a shoot-em-up, but they're not tolerable in the running gears of the information economy. Avoiding them is, in fact, a good reason for software consumers to demand open source for anything more mission-critical than a Quake game.

Eric S. Raymond


Comments Filter:
  • CmdrTaco, you got it wrong. Quake3's source wasn't GPLed; it was Quake1 you're thinking of.

    Get fragged @ Lone Star Quake
    South Texas' premier Quake server
  • This is dumb. Seriously, I kind of expect more from ESR by now, I don't know about the rest of you. Quake => E-Commerce? No way, E-Commerce (Open Sourced or not) has no business providing any sort of control over the process on the client-side. In a decently designed model, everything would be server-side, except what is relevant to the customer. Unfortunately, to do everything server-side in a highly CPU/graphics intensive game like Quake would cause a huge slowdown. That's the issue here. But until your e-commerce needs to do 1024x768 @ 60 fps, I don't think this is really all that relevant.


    "You can't shake the Devil's hand and say you're only kidding."

  • by billybob jr ( 106396 ) on Monday December 27, 1999 @07:44AM (#1442485)
    Quake was written for single player and for multiplayer over a LAN, like Doom. When Internet play became an issue and Quake's inability to do it well, especially over modem connections, became apparent, QuakeWorld was written. Keep in mind that Quake wasn't designed with Internet play in mind. It was hacked in afterwards.
  • Just out of curiosity, how do you know if someone has cracked the key out of a "blessed" client? When they've broken your e-commerce system and stolen thousands of dollars worth of goods from you?


    "You can't shake the Devil's hand and say you're only kidding."

  • . . . but wasn't this what everyone else was saying the other day? I mean, it's good that it's coming from someone respectable in the Open Source world (oh, hush Stallman ;-), and predictably it seems written to use a trivial(?) example to serve as evangelical text to the non-believers. But at any rate, is this the place for it? I know I weigh Slashdot discussion (-inanities of course) almost as much as words from a "spokesman" for the community. I agree with what he says, but I'd have thought us discussing it would be all the swaying we need. I don't need top-level affirmation that what I believe is right.

    I personally think (and hope ESR does too) that this is the sort of thing you post to a non-community news source. Let's face it, a good percentage of Slashdot readers are Open Source people. No need to preach to the converted. It's a great parable for the benefits of Open Source, but we know all that already.

    Oh, and is this affecting Q3 too? I thought it was just relevant in Q1 (the game with the open source). I don't know; I'm a Drakan player ;-)
  • by CrusadeR ( 555 ) on Monday December 27, 1999 @07:47AM (#1442493) Homepage
    Some issues with this article:

    A) This is the Quake 1 source Rob, not Quake 3 :)
    B) ESR states:
    If Quake had been designed to be open-source from the beginning, the performance hack that makes see-around-corners possible could never have been considered -- and either the design wouldn't have depended on millisecond packet timing at all, or aim-bot recognition would have been built in to the server from the beginning. This teaches our most important lesson -- that open source is the key to security because it changes the behavior of developers.
    I fail to see how, if Quake had been open source software from the beginning, the situation would be any different. Quake is a fast action game that is often played over connections with high latency; the dependency on "millisecond packet timing" is inherent to the game itself due to how quickly entities maneuver in the game world. Similarly, trying to build aim bot detection into the server from the beginning, as you suggested, would run into the same wall it's hitting now: some players have enough skill that some form of statistical analysis would conclude they are using an aiming proxy.
  • I think ESR's calling the release of Q1 src a lump of coal in hacker's stockings is a bit extreme.

    Even the ensuing debate over multi-user game/world security models is very significant for future development.

    Now that Q1 src is opensourced, what's to stop those of us who are so unhappy with the network play model from re-implementing it??

    If we can come up with a better, faster, tighter, cross-platform, secure model for network gameplay, I'm sure that many game developers would be interested in adopting it.

    so, let's start designing and implementing the new protocols for networked online worlds!

    Thanks to John Carmack for the Q1 src which has fueled so much thought about gaming, AI, virtual communities, new user interfaces.... and much more.

    ... and thanks for the linux port when all the other game companies were writing to Windows only.
  • by Tim Behrendsen ( 89573 ) on Monday December 27, 1999 @07:49AM (#1442495)

    Fortunately, the aim-bot cheat is also much less interesting from a general security point of view. It's hard to imagine anything but a twitch game in which the client user can cheat effectively by altering the millisecond-level timing of command packets. So the real lesson of both cheats may be that a closed-source program like Carmack's hypothetical secured program launcher is indeed a good idea for security -- but only if you're a hyperadrenalized space marine on a shooting spree.

    Uh, with all due respect, THAT IS THE MAJOR PROBLEM!! You in essence admit that this problem is unsolvable, then go on to say that it's really not a problem that anyone that's important (to you) cares about. Well, trivializing the problem by mocking Quake fans doesn't make the problem go away, and it doesn't change the fact that open source has made the problem infinitely worse.

    Then, of course, you go on to conclude that open source is the ultimate answer (while giving no evidence).

    ESR, normally I'm with you, but this essay was major smoke and mirrors.


  • I first thought, who the hell cares about a bunch of punks cheating in a shoot-em-up? When I first heard about it I thought, hey, let them cheat, it's not relevant to the Open Source community anyway.

    ESR proves me wrong. Again. He sees things that would normally escape the naked eye. I guess that's why he's our No.1 spokesman, right?

    - JoJo
  • It seems to me then that you agree with Carmack's point: that the design decisions made to increase performance in a low bandwidth environment necessarily preclude a secure client.

    What ESR is addressing is that both Quake and ecommerce clients exist in this same environment, and that when open source principles are applied from the get-go, a secure solution can be provided.

    Thanks for listening.
  • People want to be able to distribute encrypted IP (chip designs) to customers for simulation. There are a couple of reasons for this - IP providers want to protect their product - and they want to protect themselves from liability should the customer change the source and then blow a lot of money on building a bogus chip.

    Traditionally this problem has been handled in a closed source world with a public key/private key sort of setup with the private key (for decryption) and encryption algorithm embedded in the compiler binary somewhere.

    This leaves the IP provider's product at the mercy of the vendor of the CAD tool

    A few years back this all fell apart for Verilog, a popular simulation compiler. For various reasons the language runtime is extensible, and the compiler was interpreted; this left a shipping version of the compiler that contained symbols. An anonymous poster to comp.lang.verilog pointed out how to write a gdb script that set a breakpoint in 'yylex' and extracted the decrypted tokens from the IP.

    All hell broke loose .... everyone who'd ever sent encrypted IP to customers was now open to the world...

    So - back to the topic - does anyone have any idea how one can do this sort of thing in an Open Source world - send people secret stuff to be used by an OS program without giving away the secret? Given that the only schemes I've seen to do this rely on security-by-obscurity (as above) I suspect it just can't be done.

  • I've been thinking a lot about how to make an aimbot impractical. It really boils down to limiting the control responses (movement of the players) and increasing the external stimulus to the system. If the movement was limited to what a star player could be expected to do, then maybe we could design the response to external disruptions (being hit) such that they would throw a classic linear control system off enough to make it non-competitive. This is an inverse control theory problem. Find a settling time and characteristic for classical control systems that makes the aimbot suck. Then change the game enough to make sure this happens.

    If the cheater then wants to use a non-linear control system, or some really good linear multivariable control system, let them develop it. With the right plant to design for, this could be made into something so impressive that if someone implemented it, I'd be willing to let them use it just out of respect for the work.
  • It was the original Netquake v1.06. You know, with no client-side prediction and lag on your movements. The trade offs that John Carmack and crew made were made to make the gameplay better, not to make it so you can't cheat. I for one would rather risk the remote chance that I would be playing against someone who rewrote part of the code to cheat, than play with a 250ms delay to all of my input on the game again.

    One thing that isn't really taken into account in all the stories is that most Quake servers (mine included) have a steady base of regular players, and that player base pretty much stays the same. We are all good friends who play, and wouldn't use cheats like that, and if somebody did we would know shortly after they started because they would get instantly better than they were the day before. Improvements like that don't happen in games like this.

    I would say that ESR doesn't play Quake much, and just wants to get his jabs in to give the closed source development scheme a blacker eye than it already has.
  • This is a nice little essay on why the people who wrote Quake were stupid, but it provides no solutions or even suggestions at where a solution may lie. You can say "So, in future, let's do this instead of this" but that doesn't help the problem at hand. The problem at hand is, idiots are screwing around on Quake.

  • You should consider reading the article before you criticize it. (Who moderated this guy up?)
  • "If Quake had been designed to be open-source from the beginning..." would have changed nothing.

    To make Quake secure, you would not be able to trust the client. To make Quake playable over modems, you have to trust the client (lack of bandwidth and latency issues).

    Open the source, close it, print it up and wallpaper your cubicle with it. It's still not feasible to have the server do all the work. So surprise, it won't be secure.
  • How is this proof? That people corrupt the intent of the software? Fooey. That happens regardless of the source being open or closed. One example: AOHell appeared, corrupting the security model of AOL, despite the closed-source distribution environment of AOL.

    Closed source doesn't prevent bad people from doing bad things.


  • There's nothing inherently wrong with bots. After all, that's what you're playing against when you play the single-player version. They're only annoying when they pretend that they're human.

    It seems to me that the thing that was most widely praised about half-life was the creative AI players. It will be fun to see what people can come up with for Quake. Could make Quake 1 worth playing again.

  • but ESR didn't add anything new to the discussion; it just looked like the comments that were already posted under the Q1 release, rehashed. I think everyone agrees that security through obscurity is a bad idea, but as Carmack explains, in the world of modems you don't really have a choice: you need the performance to sell games. Now if some e-commerce site decided to do this then I would be worried, but this is just a game we are talking about. I could see it if e-bay had released their source code and people had decided to hack it so that they could cheat, but this is just silly.
  • Only if you consider network security a pointy haired boss issue.

  • Before we all get up in arms about what JC said about closed-source client/agents and ESR's response, let's keep something in mind: what's important to ESR and the Open Source world is, in this very case, different than what's important to Carmack.

    Don't forget, it was Carmack who made the decision to move away from FPS's and into Arena, which is clearly an Internet-gaming gamble. So now the issue with cheats in Q1 probably doesn't affect his plans *today*, but what about later? Could the techniques developed in Q1 hacks become tomorrow's monsters that keep "everyone" from playing Arena online? That's gotta worry JC, I would think. That problem would adversely affect the (so far) enthusiastic yet also reserved acceptance of the gaming community to the changes in Arena.

    So this is a problem that Carmack must now solve, along with the closed-source baggage that caused the problem to begin with. Is this our problem? Should we care?

    I think ESR's saying, "I'm an open source advocate, and I will remain one. John Carmack's problems are his own." That's my read on it.
  • by Nimmy ( 5552 ) on Monday December 27, 1999 @08:01AM (#1442524) Homepage
    I disagree completely. It is true that ESR is making a big stretch, one that many people would think is invalid. And it is true that Quake has little to do with e-commerce per se. But ESR is not actually comparing Quake to e-commerce. He is comparing development methodologies and their outcome. His points (such as "Don't trust a client") apply just as well to E-commerce as they do to Quake. This is why we see very few client-side e-commerce sites (I can't think of any).

    ESR is not saying very much about the Quake problem. He gives no solutions or further insight into Quake. What he is trying to warn people about is that this is nothing new, that Quake still has a closed-source security method that only now has been opened. One can't expect the magical "Open Source" pixie dust to fix fundamental security flaws.

    The most important point which ESR makes (or at least hints at) is the crux of the open-source security paradigm (sorry, I know people hate that word, but it does fit here): Open Source Makes Security Problems Easier to Find for BOTH GOOD AND EVIL. That last clause is what makes the "debate" between open and closed source security types interesting. Big companies see that it is easier for crackers to break an open source solution and shy away. Open source types see it makes it easier to make a quality product and embrace it. BOTH VIEWS ARE CORRECT. That is the crux of the issue, so well demonstrated by Quake. That is not quite ESR's point, but he at least implies it. With this in mind, ESR makes much more sense.
  • by SheldonYoung ( 25077 ) on Monday December 27, 1999 @08:02AM (#1442526)
    There is no way to make software that runs on the client side totally secure. It is always possible to hack the client to your advantage because it runs on your computer, and you can fiddle the bits in your computer any way you want.

    No matter what kind of proxy or checksummed client they use it'll be hackable because it's runnable. It's the same reason that copy-protection, secure audio formats and DVD can all be cracked... at some point code has to be executed that the user can change.

    Even if Carmack does as ESR suggests and only sends full world updates, that will not prevent a proxy that jumps every time a rocket is fired at your feet, or any number of other subtle helpers.

    The only solution might be to run all the code on the server. Yes, really. Imagine the client side just being a 2D graphical dumb terminal just drawing frames from the server. It can't be 3D because it would be hackable at the driver-level again.

    I didn't say it would be easy, just the only way to make it truly secure.
    It really boils down to limiting the control responses (movement of the players) and increasing the external stimulus to the system. If the movement was limited to what a star player could be expected to do, then maybe we could design the response to external disruptions (being hit) such that they would throw a classic linear control system off enough to make it non-competitive.

    And I'd edit my client to disregard all those external stimuli.

    If the entire environment depends on the reaction of a client, and I get to control what that client does, then saying "we'll make the client do this" won't work because I'll make the client do something else. The fix has to be external to the client.


  • We want open source, and we want security. Switching back to closed-source isn't going to solve anything.

    Quake is a game. We can try bunches of things, since we can afford to blow it. What do we do when serious programs need similar protection and we can't afford to fail?

    With a game, we don't have to get it right the first time! We can try things and incrementally evolve the correct solution. Different people will have different pieces of the puzzle, and enter in at different points in time.

    I propose that what you really want for Quake is to see the source your opponent is using. So you want authentication and revocation. Some cheats will be cool, and evolve the client. Some cheats will be stupid and/or pointless, and you will want to revoke them.

    One possible approach is to have people register their sources to trusted authentication servers which will hand them back authenticated binaries whose signatures quake servers can accept or deny. If I think someone is cheating, then I can go look at their sources. And perhaps have some sort of voting scheme where some clients are declared "stupid cheats" and their access to quake servers denied.

    My idea needs work, but it's a good starting point. I'm trying to stay brief, so am not going into a lot of detail.
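The registration-and-revocation scheme sketched above might work roughly as follows. This is an illustration with invented names and an HMAC shared-secret standing in for real public-key signatures: an authentication server signs builds it has vetted, and game servers verify the signature and check a revocation list before accepting a connection.

```python
# Hedged sketch of "blessed binary" authentication with revocation
# (all names invented; HMAC with a key shared between the auth server
# and game servers stands in for proper public-key signing).

import hashlib
import hmac

AUTH_KEY = b"shared-secret-between-auth-and-game-servers"
REVOKED = set()   # signatures of clients voted out as "stupid cheats"

def bless(binary_bytes):
    """Auth server: hand back a signature for a reviewed build."""
    return hmac.new(AUTH_KEY, binary_bytes, hashlib.sha256).hexdigest()

def accept_connection(binary_bytes, signature):
    """Game server: accept only correctly signed, non-revoked builds."""
    expected = hmac.new(AUTH_KEY, binary_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and signature not in REVOKED
```

Revoking a client is then just adding its signature to the list, which matches the comment's idea of voting cheats off the servers without closing any source.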

  • You should consider reading the article before you criticize it. (Who moderated this guy up?)

    Amen. ESR was lamenting the parallels that will be drawn, like it or not -- and parallels that indeed should be drawn so that folks notice this important dilemma. Let's face it, there was nothing stopping folks from cheating before; open sourcing the client just made it easier for those with less time on their hands.

    You need to operate in paranoia mode whenever you are designing protocols or server applications. Never trust the client. Trusting the client to supply ``good data'' is what has filled the Web with weak CGIs (and I must note that these appear on both open source and closed source platforms.)

    Nobody moderated him up, BTW. He has High Karma(TM).

  • I suppose that creating a day-trading bot that gives the user massive advantages in terms of spotting good stock would be bad?

    Think of it - all your adrenaline slurping marine buddies sitting at a bank of computers send rapid-fire orders for stock. Seizing companies left and right, leveraging your ammunition supplies...

    Now, is this cheating or good business (assuming it works)? I'm quite sure that if anyone ever gets any good at it there are some massive repercussions to be had, either SEC sanctions or stock market crashes. Of course, if you had that sort of day-trading client, would you use it over the normal one?

    Of course there's more security on the server side for day-trading, but the same issues are relevant there. I suppose we ought to be thankful that Quake doesn't actually have massive effects on our economy, and that hackers are leery of dealing with money things because the penalties for getting caught are larger.
  • I'll reply to this probably seconds before you get moderated as a Troll, but that's ok. Open Source works just fine. It worked before people wanted an alternative to MS and it will work when MS isn't an issue any more. Open Source is a "Good Idea" in most cases, if not all. The entire Patent office is an Open Source system -- without the freedom to use the ideas. If you just want to be a Troll, no evidence will convince you; the point is, some of us disagree with good reason.
  • Here's the long and the short of it: Quake could be made secure through more reliance on the server and less on the client, but to do so would destroy the game by slowing the whole process down to an unplayable level.

    Carmack's right; the only realistic solution at this time is to keep at least part of the verification module closed-source. Why is this such a problem? Who the hell cares if there's some obscure part of the code you can't get at? Why would this be worth sacrificing decent game play?

    Look, if you don't like the closed source stuff, go ahead and play without it and contend with all those Z-bot asswipes who get a kick out of ruining the fun for everyone else. Me, I'll take the closed-source module that Carmack's offering and actually enjoy the game.


  • And the conclusions were much the same. That time, it was with cheating on Netrek. (Script Kiddies seem more interested in big numbers than learning skills.)

    The solution there was to distribute the source as Open Source, but only allow "blessed" binaries to connect to "blessed" servers.

    This hasn't been fool-proof. Plenty of people have cracked Netrek's security, and whilst it's good at keeping the drearier riff-raff out, all that really does is give you a better class of riff-raff. It doesn't solve anything.

    How, then, to prevent cheating? That depends on how much the client does, and what counts as cheating. The server can always prevent cheating with anything under its direct management, by simply controlling throughput. No more than N operations per unit of time effectively prevents souped-up clients with super-human reflexes, for example.

    Controlled throughput also gives you much better Quality Control, as you can effectively ensure that no connection suffers undue lag at your end.

    As for anything else, this calls for the ability to access points in the code by address. If you can do that, then the server can call routines to check for validity in a way that can't easily be bypassed by simply rigging the checks.
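
    The throughput-control idea above can be sketched in a few lines. This is a minimal illustration, not anything from Quake or Netrek's actual netcode; the class name and limits are invented.

```python
import time
from collections import deque

class InputThrottle:
    """Fixed-window throttle: drop any client input beyond max_ops
    operations per `window` seconds, so a souped-up client with
    super-human reflexes gains nothing by sending faster."""
    def __init__(self, max_ops, window, clock=time.monotonic):
        self.max_ops = max_ops
        self.window = window
        self.clock = clock
        self.stamps = deque()

    def allow(self):
        now = self.clock()
        # forget inputs that have aged out of the window
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.max_ops:
            self.stamps.append(now)
            return True
        return False
```

    A server would keep one such object per connection and silently discard (or queue) anything allow() rejects; the client's software can be as fast as it likes, but the game only sees a human-plausible rate.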

  • I've actually been working on this concept in my spare time, and I came across the answer:

    It can't be done in software. Period. It needs hardware.

    What is needed is part of the kernel implemented in hardware. You have to have something along the lines of ram that is physically locked and cannot be accessed by even the (software) kernel - only by itself; or, a plug-in card which simulates such an effect. In both cases there will also need to be a hardware implementation of public key encryption, wherein any software can read the public key and no software can read the private key.
    The process would basically consist of the client and the server sending the other an encrypted checksum binary, which can only be run by the process it is sending to, in its privileged memory space, encrypted with the public key of the other. The hardware device decrypts the binary and runs it (with read-only access to the memory space, almost all other access cut off, to avoid a major system security risk), re-encrypts the packet with the sender's public key and its own private key, and then returns it to be sent. This way, both the client and server can be guaranteed that the checksum program they sent was actually run honestly and the data returned is truly from the program. And, as the loaded program is unable to be accessed by anything but itself, no program can modify it after it has been started. There are a few other details I have omitted (such as being able to query to make sure the program is in such a memory space), but this is the basics of it.

    I went over every angle of doing it in software, and there is no other possible way that doesn't involve the fatally flawed "security through obscurity". If anyone thinks I am in error, please let me know, I'll be glad to discuss it with you.

    - Rei

    P.S. - there is a moral consideration of such hardware. It makes it possible to have software that is physically un-pirateable, if done correctly. Just so you know.
  • The client can't control anything like that. Basically the laws of physics have to be implemented on the server, or the game play will look like something from "The Matrix".

  • c) real security comes not from obscurity but from minimum disclosure.
  • trivializing the problem by mocking quake fans doesn't make the problem go away, and it doesn't change the fact that open source has made the problem infinitely worse.

    The unsolvable problem of aimbots is not a direct consequence of Open Source. An aimbot is possible with closed source too; it's just harder (but still possible) to implement.

    If anyone thinks that the closed source "solution" will really protect them from cheaters, then perhaps they deserve a little mockery. How do you know that open source has made the problem worse? People could have been using aimbot cheats all along, without your knowledge. At least now, the problem is exposed.

    Then, of course, you go on to conclude that open source is the ultimate answer (while giving no evidence).

    Did you read the same article that I did? ESR does give a good argument that Open Source is the answer. His argument is that if Quake had been originally developed with the understanding that security-through-obscurity would fail (instead of this simply being realized after it's too late), then the protocol would disclose less information to the clients. If Quake had been Open Source from the beginning, then the "more information" cheats never would have been possible. I think this is a substantial and compelling point, hardly "smoke and mirrors."

  • Aimbots have already been written even for the closed source version via a shell program that sits between Quake and TCP/IP. Obviously that program was a lot harder to create than a simple hack to the source code to create auto-aiming.


  • What do you mean just a game? Games control the computer industry. They are the single thing that is driving computer hardware to new levels of power and efficiency.

    And since when is security through obscurity a bad idea? Without obscurity every single possible type of security could be broken. All you can hope for is to keep the protocols hidden for as long as possible.
  • This is not a new problem! Netrek solved this problem years ago. It was explained in the previous discussion on this,

    But as I explained in the previous discussion, Netrek's authentication protocol isn't very Open-Source. It is similar to Carmack's closed-source pair of programs. We just rolled it into the binary. Sure, it was developed by several people in the development team, and protocols and methods were discussed in the open, but who actually gets that source is controlled. Without it, you have an un-blessed client or a server that doesn't require authentication.

    And as this message's sibling states, if someone was patient, very patient, enough to break through our 20 levels or so of obfuscation for ONE particular binary, how do we, the Key Maintainers, know that key has been compromised? We don't. Somebody has to notice and cry foul. That particular cracker would be smart to be as quiet and non-obvious as he can since each binary's obfuscation is different.

  • Uhm, coming from an old binary hacker, Closed source *is* as hackable as Open Source... Once you understand the architecture of the system you're running on, and get to know its assembly language, you can modify *ANY* binary to do what you like. Hence the numerous dongle cracks / reg key bypassers that are out there for many Closed source programs. (I remember ads for dongles calling them "uncrackable".)
    With open source, you have to look into the source code and try to determine what a particular algorithm does... then you have to know how to re-work the algorithm (or determine the ramifications of removing that part completely) to get it to do what you want.

    With a closed source binary all you have to do is run a traceable debugger on the binary, and then you get down to the nitty gritty of what's happening. It can be as simple as inserting (in x86 terms) a JMP statement or a NOP...
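
    To make the JMP/NOP point concrete, here is a toy sketch (the byte fragment is invented, not from any real binary): x86's conditional JE (opcode 0x74) and unconditional short JMP (0xEB) share the same two-byte layout, so a "registration check failed" branch can be defeated by flipping a single opcode byte.

```python
def patch_je_to_jmp(image: bytes, offset: int) -> bytes:
    """Return a copy of `image` with the JE at `offset` turned into an
    unconditional JMP, so the check always takes the 'good' path."""
    if image[offset] != 0x74:          # 0x74 = JE rel8
        raise ValueError("no JE opcode at that offset")
    patched = bytearray(image)
    patched[offset] = 0xEB             # 0xEB = JMP rel8, same operand byte
    return bytes(patched)

# A made-up 6-byte fragment: CMP AL, 0; JE +2; NOP; NOP
fragment = bytes([0x3C, 0x00, 0x74, 0x02, 0x90, 0x90])
cracked = patch_je_to_jmp(fragment, 2)
```

    That is the entire "nitty gritty" in the simplest case: one byte changed, and the closed-source protection is gone.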
  • The even easier solution is not to build it into the CAD program, but just have people mail files securely with GnuPG/PGP. Why make CAD programmers deal with encryption? It's not their bailiwick!

    Ah - but remember the premise - the end-customer must not be able to see the secret information - it's purely for use with a tool that sees the encrypted file and performs some action on it that in itself does not reveal the secret to the end customer.

  • by SuperKendall ( 25149 ) on Monday December 27, 1999 @08:28AM (#1442573)
    I think ESR has got it wrong, the AimBot is by far a much worse problem than a player being able to see around corners - a good player essentially can tell where people are anyway based on sound cues or knowing how far behind him someone was. It might give a slight advantage, but not the godlike ability to clean out a room that an AimBot gives.

    However, I thought of a possible solution - one way to stop AimBots might be for the server to send a few "fake" players to the client with either transparent skins, or embedded in the walls, or perhaps randomly "clone" the user just behind himself every now and then. A real player would never see these "shadows" but an AimBot would fire at these phantom targets and that could trigger the server to shut him down.

    As an added bonus, an AimBot set to use the rocket launcher firing at a fake target a foot away would bring a little justice to the bot user.
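
    The honeypot idea above might look something like this on the server side. Everything here (names, the three-strikes threshold) is invented for illustration; a real implementation would live inside the game server's hit-detection code.

```python
class DecoyTracker:
    """Server-side honeypot: the server streams a few phantom players
    (invisible skins, or embedded in walls) to each client and flags
    anyone whose shots keep landing on them."""
    def __init__(self, decoy_ids, strikes_to_flag=3):
        self.decoy_ids = set(decoy_ids)
        self.strikes_to_flag = strikes_to_flag
        self.strikes = {}

    def record_shot(self, player, target_id):
        """Return True once `player` has hit enough decoys to be flagged."""
        if target_id not in self.decoy_ids:
            return False
        self.strikes[player] = self.strikes.get(player, 0) + 1
        # humans may clip a phantom by accident, so require repeats
        return self.strikes[player] >= self.strikes_to_flag
```

    Requiring several strikes is what the reply further down calls "fuzzy logic": a human spraying rockets will occasionally hit a phantom, so a single hit can't be damning.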
  • How can so many slashdotters miss the point so completely? Comparing Quake to E-Commerce is just silly to begin with. Quake wasn't written to be a secure platform for anything; it's a freaking game, for christ's sake.

    Cheating in Quake will always be possible. I'm pretty sick of reading posts on these two threads where someone who doesn't even recognize the problem is trying to offer solutions. BY FAR the most troublesome cheat you can do with any Quake, open source or not, is to use an Aimbot. You CAN'T STOP aimbots with any fancy authorization system. YOU CAN'T. NOT EVER. You can only obscure protocols. Even if you coded algorithms on the server that detected "bot-like movement", a coder could just make his bot algorithm differ from what the server considers BOT-LIKE. This would be even easier if the server source was open source as well! Hell, you could even write a proxy bot that doesn't move the crosshair aim for you; it just shoots when it can draw a ray between your gun origin and an enemy. In this way, all the bot does is pull the trigger when you sweep your railgun across an enemy. Try detecting that on the server. It's impossible. All you can do is use some half-baked fuzzy logic that says "Well, Player-X is too good, he must be a BOT!". Which is not security at all.

    Even without source to the client, open source of other system components can cause problems... NVidia/3dfx have open source OpenGL drivers under Linux? Cool... why not write a program which sits between the mouse driver and the OpenGL driver, takes information from both, and uses that to cheat? Harder than hacking an open source client, but still entirely possible.

    This whole issue doesn't speak badly of Open Source, nor does it (as many people here think) speak badly of how Quake was implemented. It's just the way things are. If someone wants to try to create a "secure client/server" chess system that someone sitting next to Big Blue running his chess program can't cheat at, please get back to us when you figure out how. The serious Quake cheats are in the same category as this, and just can't be solved without using some security through obscurity.
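
    The proxy trigger-bot described above really is that simple, which is why it is so hard to detect. Here is a sketch (2D for brevity, with invented data structures): the bot never aims for you; it only fires when your own crosshair ray passes close enough to an enemy.

```python
import math

def crosshair_on_target(origin, direction, enemy_pos, enemy_radius):
    """Does the aim ray from `origin` along unit vector `direction`
    pass within `enemy_radius` of `enemy_pos`?"""
    ox, oy = origin
    dx, dy = direction
    ex, ey = enemy_pos
    # project the enemy position onto the aim ray
    t = (ex - ox) * dx + (ey - oy) * dy
    if t < 0:
        return False                       # enemy is behind the player
    px, py = ox + t * dx, oy + t * dy      # closest point on the ray
    return math.hypot(ex - px, ey - py) <= enemy_radius

def should_fire(player, enemies):
    """The whole 'bot': pull the trigger when the human's sweep
    happens to cross an enemy. Aim, movement, timing are all human."""
    return any(crosshair_on_target(player["pos"], player["aim"],
                                   e["pos"], e["r"])
               for e in enemies)
```

    From the server's point of view, every packet this produces is one a legitimately skilled player could have sent, which is the poster's point: there is no signature to detect.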

  • The fatal problem of a launcher program is that everything required to perform the client checks is included in the launcher and hence available for reverse engineering.

    This is directly analogous to the DVD software decoder problem. If the binary is available, the algorithm and all the keys are available.

    Having, shall we say, "studied", my share of copy protection schemes I can attest that very little of a binary must actually be understood to coerce it into changing its answer.

    And just so it's all out on the table, here's my roadmap for launcher breakers. Please note that none of this has been tried; I just dashed this list off with 2 minutes' thought.
    • Use strace, if they use exec* to start the program, then patch exec to substitute your unclean program after the clean one is verified. (I'm thinking a preloaded library here, kernel patching is ok too.)
    • Pick a library or system call used in the client after it has been verified. Subvert it to replace the current executable but keep the file descriptors to the master loader open for use.
    • Disassemble the loader. Go to a nice quiet place and start reading, commenting, reassembling, and testing. Once you get the knack it's surprisingly easy. While this is one of the most enjoyable experiences to be had with a computer, it is slower and should be the last resort. You will only need to do this if they have implemented their own loader from scratch, and they will rot in OS version compatibility h*ll until users become disgusted with their software.

    Note: any apparent confessions in the above post were merely a "youthful indiscretion" and have certainly passed the appropriate statute of limitations.
  • If the security model of closed source fails, it will not fail immediately; each new closed security scheme takes time to crack in its turn. In the end, the latency before a given closed-source scheme is cracked averages out to some figure, say 3-5 months.

    If software relies on closed source for security and it is then made open, as with the opening of Quake, all of its security weaknesses can in essence be exploited simultaneously. Software that is open from the beginning is inherently more secure than closed-source software whose source is later disclosed.

    So, the question becomes what effect would opening Windows source code have on Windows security?

    There have been so many back doors found in the past six months that I can only imagine that a metric ton of them would be found. Since the Justice department sees open source Windows as an option to cure the Microsoft monopoly, couldn't this be counter productive for average Windows users--ill equipped to install software, much less a patch--in the short term?

    Are (potential) long term benefits of this worth the short term cost?

  • I think ESR has got it wrong, the AimBot is by far a much worse problem than a player being able to see around corners - a good player essentially can tell where people are anyway based on sound cues or knowing how far behind him someone was. It might give a slight advantage, but not the godlike ability to clean out a room that an AimBot gives.

    Agreed! In fact, the ability to support 3D-sound footsteps REQUIRES you to have (at least approximate) positional information on people who may be around corners or otherwise out of sight. This isn't really an issue with Quake, since it doesn't use footsteps, but it is with Quake2, Quake3, UT, etc...

    Your aimbot solution is interesting, but it'll still require some "fuzzy" logic, as users will often shoot randomly to throw off enemies, or just because (due to their timing or hearing of footsteps) they expect an enemy to be coming. They might accidentally hit one of these "phantoms", causing undue splash damage (if they are close) and/or marking these non-cheaters as cheaters.

  • From a technical or 'community' standpoint (think ebay feedback ratings), e-commerce and e-games face similar security and trust issues.

    But, the fundamental basis of e-commerce is still several hundred (thousand?) years of laws governing contracts and commercial transactions. If the technical security model breaks (as it has many times on Ebay), the legal security model will jump in, along with various police agencies, courts, lawyers, banks, collection agencies, insurance firms, and so on.

    When the "e-game" model breaks, there's no real higher power you can beseech. Thus, people are more likely to break the rules, because there are few if any negative consequences. So, while you can put preventative mechanisms in (reputation systems, certificate verification, signed binaries, and so on), it really comes down to the other users, and how much you are willing to trust them.

    (Think of blackjack -- you might play in Las Vegas where the Casino is supposedly trusted, or you might play your trusted friends, but you would be less likely to go into some random basement and play an illegal game because the deck could be loaded and you have little recourse if it is.)
  • So then public key cryptography is used where the keys are stored in hardware. This means either that each piece of hardware has to be different because each has a separate key ingrained in hardware somewhere, which will be totally impractical to manufacture, /or/ that the key is stored in ROM, which, although it is harder, is still accessible.

    Also the hardware would have to entirely take over the CPU...this means the addition of a super-monitor level that can only be activated by hardware, over and above the software-level monitor state (kernel). If this is done, how is authentication achieved to allow the hardware to do this? Can any piece of hardware do this? If so, then the crypto hardware is compromised. If not, yet another scheme must be invented to authenticate the crypto hardware, which may lead back to manufacturing individual CPUs to recognize a certain key, which is again impractical.

    If there is no hardware-level monitor, what is stopping a rogue kernel from just dumping memory?

    I think this may end up a philosophical question. If I give somebody something, I must /trust/ them. If I don't trust them, I can't give them something sensitive. Now, all sorts of "trust proxies" can be put in place, but it will eventually resolve to me trusting somebody. And the only secure endpoint is somebody's brain. So there will always need to be a brain-brain trust. Hardware and software just proxies it for us. - the Java Mozilla []
  • by David A. Madore ( 30444 ) on Monday December 27, 1999 @08:57AM (#1442601) Homepage just something that has never worked, will never work, and cannot work. Every now and then someone tries to use it (the DVD Consortium for example), arguing that ``yes, in general security through obfuscation is bad, but in this particular case it will work''; wrongo: it always fails.

    This is made abundantly clear in Bruce Schneier's famous book on cryptography.

    This is not an open source vs. proprietary software problem, it is a disclosed source vs. obfuscation problem. This is not an ethical issue, it is a completely practical one, and it seems that the lesson needs to be learned one more time.

    Carmack's suggested closed-source binary loader can be spoofed much more easily than by using a proxy. Indeed, it contains an obvious race condition: how is the loader to check that the program hasn't been altered between the time it is checksummed and the time it is run? A simple ptrace() should do the trick for that, and, in fact, anything invented along similar lines.

    As another poster pointed out, what we need is to have Quake clients act as simple 2D rendering clients. But that means the server would have to perform all the 3D calculations and that is hopeless. We want the client to perform the calculations without being able to spoof them; so the problem amounts to the following:

    (``Computing in `hostile environment' ''). You are given a powerful computer and you want to use it to perform a computation. However, you cannot trust the computer. You want to perform the computation in such a way that the computer should be (a) unable to fool you (i.e. give a wrong answer and make you think it is the right answer), and (b) unable to learn anything from the computation (in the Quake case, learn more than the final, 2D, result). The computer can refuse to carry out the computation, of course, and you cannot prevent this, but it cannot make you think it carried out the computation whereas in fact it is fooling you.

    This is a purely theoretical problem, and it has been studied. A cryptologist told me (but was unable to give a reference, unfortunately) that the problem is solved in the theoretical sense: computation in a hostile environment is possible (using strong crypto). This makes it theoretically possible to have a solution to the problem using just one secret key, stored on the server, and everything else being disclosed (basically: the server sends the encrypted computation to the client, the client performs the computation without knowing what it is calculating, sends it back to the server, the server verifies that the computation was correctly carried out, and sends the decrypted result to the client). Unfortunately, this solution is completely impractical (the bandwidth required is simply too large).

    So, the Right Thing fails. IMHO, this is not, however, an excuse to do the Wrong Thing. If the problem has no solution, then the problem is not a problem!
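
    The verify-then-run race condition mentioned above can be demonstrated in a few lines. This sketch (file names invented) plays both roles: a naive loader that checksums a client binary before "running" it, and an attacker who swaps the file in the window between the two steps.

```python
import hashlib
import os
import tempfile

def checksum(path):
    """The loader's verification step: hash the file on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmp = tempfile.mkdtemp()
client = os.path.join(tmp, "quake-client")

with open(client, "wb") as f:
    f.write(b"clean client code")
trusted = checksum(client)            # step 1: loader verifies the binary

with open(client, "wb") as f:         # attacker wins the race and swaps it
    f.write(b"aimbot client code")

# step 2: the loader launches the path it *believes* it verified,
# but the bytes on disk are no longer the bytes it checksummed
assert checksum(client) != trusted
```

    Any scheme that separates "verify" from "execute" in time has this hole; closing it requires verifying the exact in-memory image that runs, which is precisely what a user-controlled machine cannot promise.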

  • "The only solution might be to run all the code on the server. Yes, really. Imagine the client side just being a 2D graphical dumb terminal just drawing frames from the server. It can't be 3D because it would be hackable at the driver-level again."

    I think you are right, that is what we will see eventually. But it won't be until the average person has cable, DSL, or some other high-bandwidth, low-latency connection to the net that it will be technically feasible.

    But with the average player trapped behind a modem, the kind of tricks Carmack used are necessary for the game to be playable. ESR mentioned Carmack's reply in his article, but did not comment on it, and appeared to pass it off as unimportant because the design should have been "done right" in the first place. Obviously, ESR has never played Quake on a modem against players on low latency connections. :)

    My other comment on the article was that there was no direct statement of ESR's point. From reading the comments here, it seems many people got entirely different meanings out of it. Pretty poorly written, IMHO.

    Quake! []
    Chat! []
  • I am heavily involved in e-commerce. ESR's points are valid and well made. However, I think the issue can be taken into another venue which is not far from Quake: "The Art of War", Sun Tzu. Cheating is exactly what you want to achieve in a war, using any means possible: limited disclosure, deception, technology, etc. The issue is the same one ESR makes regarding open source; if you know all the cheating that goes on prior to, and in the midst of, a battle, it ...changes the behavior of developers. Which may then change the game, because the rules (requirements) are no longer the same.

    E-commerce communities must war/defend against any entity, hostile or otherwise, that would encroach on the territory established for commerce.

    So many comments, so little time.....

    "There is nothing new. Everything is just smaller, faster, and cheaper, with more layers of the protocol stack at no additional cost." csfenton
  • by Tony Hammitt ( 73675 ) on Monday December 27, 1999 @09:10AM (#1442608)
    Guess what would happen if the windows source code suddenly became open? All kinds of cracks would appear. We've just proven it.

    They open the source for Quake1, and within days, there's people out there playing cheat clients. It wouldn't take even hours for the world's windoze boxes to all be cracked if they opened the source code. Anyone depending on windoze would simply be out of business.

    You can't go from 'security' through obscurity to open source. That is the main problem. All of this other stuff about the details of how people cheat is not the core issue. The core issue is that the development model cannot change from closed to open source without exposing all of the security flaws. Suddenly there are thousands of eyes looking at the code for weaknesses where before there were only dozens (who were mainly interested in functionality, not security -- why should they care? It's closed source.)

    So, how do we fix it? We could develop games entirely open source. Who's going to pay for that? People don't buy support contracts for games; they rarely buy the manuals.

    We could try to convince Carmack to release the code in stages. Release the server code first to get all of the bugs and cracks worked out. Then release the clients after most of the hacks have been anticipated. This is still suboptimal, with optimality being the unlikely open-source case. Anyone have any better ideas?

    Yes, I know, there are plenty of open source projects. They usually only have a dozen developers at most. I doubt they would pass a security audit. I know most open source net games are crackable, I've done it myself.

    Maybe in the future they will give the code a security audit before they release it. They're doing this as a publicity stunt anyway, they might as well get the most out of it.
  • This whole Quake cheating thing is a non-issue. People have been cheating in multi-player games since day 1, whether they've been closed source or open source. I remember people cheating in Doom, for god's sake. But who cares? Everyone quickly learns who the cheaters are and refuses to play with them. Issue resolved.

    Whenever you play multi player games that let the clients decide the outcomes there will be cheating. Diablo was a good example of this. There were cheats that allowed you to kill a player while in town - something that wasn't supposed to happen. There were ways to create magical items that didn't exist. None of this mattered. Blizzard handled this easily by saying you shouldn't trust anyone. And it was so true. If you play with your friends you'll never have to deal with cheating anyways. If you want to play with random people then you need to learn to make friends and associations that will prevent cheaters from playing with you.

    On the other hand, I hear Ultima online and Everquest are good code bases for cheatless systems. From what I understand the client only displays the world and allows users to send known commands back to the server which can be checked server side against the user's character which is only stored on the server and not allowed to be modified by the client.
    Joseph Elwell.
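
    The server-authoritative model described above can be sketched as follows. Command names, the movement limit, and the gold field are all invented for illustration; the point is that the client may only submit commands from a fixed vocabulary, and the server applies them against state the client can never touch directly.

```python
MAX_STEP = 1  # server-enforced movement per tick, whatever the client claims

class ServerPlayer:
    def __init__(self):
        self.x, self.y = 0, 0
        self.gold = 100            # lives only on the server

    def handle(self, command, *args):
        """Apply one client command under the server's rules."""
        if command == "move":
            dx, dy = args
            # clamp the client's request to the laws of the game world
            self.x += max(-MAX_STEP, min(MAX_STEP, dx))
            self.y += max(-MAX_STEP, min(MAX_STEP, dy))
        elif command == "say":
            pass                   # chat: harmless, no state change
        else:
            raise ValueError(f"unknown command: {command!r}")
```

    A hacked client can ask for a 50-unit teleport or a million gold, but since it can only speak in known commands and the state lives server-side, the worst it can do is be refused.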
  • Actually, netrek didn't solve the problem -- it just made cheating beyond the reach of your standard CS undergrad. You can still cheat at netrek by writing a customized X server, since netrek has to use X to display itself and get input from the user. I used this technique to write a two-mouse netrek client back when my ulnar nerve was all screwed up.

    It is possible to make protocols with untrusted clients heavily well-defended, as long as you can demand that you execute native code on the target machine, which Quake is able to do. All you have to do is send down to each client a specialized, randomized protocol handler with its own uniquely generated key, mix it up so there are millions of unique protocol handlers, each of which defends itself differently and makes sure it's not being tampered with, and hey presto, it's just way too inconvenient for nearly everyone to screw with it. This isn't possible with netrek because it runs on an infinity of different architectures and tries to be platform independent, but since Q3 only runs on x86 and mac platforms...
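
    A toy version of the "millions of unique protocol handlers" idea: here only the key material is randomized (a per-client byte-substitution table), whereas the post also imagines randomizing the handler code itself. The function name and scheme are invented for illustration.

```python
import random

def make_handler(seed):
    """Generate a per-client packet encoder/decoder: a random
    byte-substitution cipher, different for every client the server
    hands a handler to."""
    rng = random.Random(seed)
    table = list(range(256))
    rng.shuffle(table)                 # this client's unique dialect
    inverse = [0] * 256
    for plain, cipher in enumerate(table):
        inverse[cipher] = plain

    def encode(data: bytes) -> bytes:
        return bytes(table[b] for b in data)

    def decode(data: bytes) -> bytes:
        return bytes(inverse[b] for b in data)

    return encode, decode
```

    Cracking one client's handler then tells you nothing about any other client's, which is the inconvenience the poster is counting on; it raises the cost of cheating rather than making it impossible.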
  • I still think the only way you can guarantee that an OpenSource network game doesn't have cheating is to use a thin client approach.

    Well that's essentially how Netrek works. In Netrek, the server handles ALL game mechanics and lays down the law as to what each player and weapon can and can't do as well as doing all the data hiding and deciding exactly what each player can and cannot see. The client is essentially a display engine and a user input packager.

    However you can still cheat with that. You can make a client that will play the game for you (which is fun in its own right). You can do things like auto-aim, auto-shields, auto-dodge. You can also be an info-borg. An info-borg is a client that displays or makes obvious various game events and states that a good player knows how to look for and/or estimate. Examples include showing everybody's speed and true direction on your display, showing cloaked players on your tactical display, noticing a player with kills orbits a planet, and that planet's army count drops and flagging that player as a carrier, etc.

    You can probably make it harder by doing some sort of authentication method, or by making the players download the client from the server every time they want to play. It's a cat-and-mouse game.

    At any rate, I like the existence of borgs on servers that allow them. In college, I used to be involved in running State and National Programming Contests at UTexas' IEEE-CS student branch. Our games were not the standard ACM type (here are 6 programming problems, have at it). What we did was write a game meant for computers to play. What the contestants did was take 8 to 20 hours to write a program that would play the game for them. During the actual elimination tournament, the contestants weren't allowed to touch the computers. It was quite fun for all involved. Dave Taylor (ddt of id, Crack, and Transmeta) founded and ran the first couple of National contests. If you ever get a chance to compete in a contest like that, I highly recommend it.

  • One would think that, should the IP designer actually get hauled into court they'd be able to at least say "we built the chip and it worked", if not actually be able to prove that the client changed the design.

    But there are problems that encryption, or any technology, cannot solve - if I can get it in plaintext at any point, I can break the encryption and change things to my heart's content (or copy things, as the record and movie companies are learning the hard way). Even hardware doesn't help - poke around on for some possibilities for breaking hardware-based encryption. Leaving the debug symbols in the code (or the unprotected keys, as happened with DVD recently) just makes the reverse engineer's job easier.

    A suggestion for quake players: have reputation servers. This is not a perfect solution (can the reputation servers be hacked?), but it does give you a _social_ mechanism to discourage cheating. And it's the best you can do.
  • Couldn't someone get a blessed Netrek client, figure out what it sends to the server to authenticate itself, and then do a replay attack

    It is security through obscurity, but the obscurity is the private half of a client's RSA key.

    It is a challenge-response type of authentication. On top of that, the challenge is random and both the challenge and response are encrypted. Doing a replay attack would give the response to the wrong challenge.
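
    The shape of such a challenge-response exchange, and why a replay fails, can be sketched briefly. Netrek's real scheme uses per-client RSA keys; an HMAC with a shared key stands in here purely for illustration. The key material and function names are invented.

```python
import hashlib
import hmac
import os

SHARED_KEY = b"blessed-client-key"   # stands in for the client's private key

def new_challenge():
    """Server side: a random, single-use challenge."""
    return os.urandom(16)

def respond(challenge, key=SHARED_KEY):
    """Client side: prove knowledge of the key, bound to this challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge, response, key=SHARED_KEY):
    """Server side: check the response against the challenge it issued."""
    return hmac.compare_digest(respond(challenge, key), response)
```

    Because the response is a function of a fresh random challenge, an eavesdropper who captures last session's response holds the answer to a question the server will never ask again.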

  • Even the ensuing debate over multi-user game/world security models is very significant for future development. If we can come up with a better, faster, tighter, cross-platform, secure model for network gameplay, I'm sure that many game developers would be interested in adopting it. so, let's start designing and implementing the new protocols for networked online worlds!

    A number of game development projects have been working at exactly this; it might be worthwhile for those interested in working on such a protocol to see what's already in the pipeline. Freetrek and Netspace both have just recently started development on 3D protocols for space-flight. WarForge, Phoenix, Lycadican, Cyphesis, and Belchfire are all using WorldForge's "Atlas" 3D RPG protocol (libatlas2 was just completed last week).

  • point out somne real advantages of open source game protocols. Specifically, human evolution via improved computer interfaces! We should not be doiing things to discurage these ``cheats'' we should be doing things to incurage structuring and sharing them.

    The aim-bots are a peerfect example of a good cheat.. as they consist of using a computer to improve a humans performance. Shure they take a little bit of the skill out of the game, but they give the human more time to think about stratagy and the importent things that only a human can do.

    I am a great fan of StarCraft/WarCraft/CC/etc. but the truth is these games have crap for a user interface. Who wants to spend all their time commanding individual units to attack specific targets during the conflict. Yet, this is what the game rewards. The truth is the only way to fix these games is an easy to use scripting langauge.

    Video games are a great oppertunity for human computer interactions research.. and the mechinism for this research is to write the tools to allow people to easily make the computer do more of the work. Scripts and bots are the pefect opertunity to help illiminate mindless repetitive action in video games and let us focus on the imprortent part of the game.. stratagy.


    BTW> I know some moron who only read the first paragraph of this post is going to reply about how unfair it is to be able to see around corners in Quake. Clearly, protocol-based cheats like that are NOT what I am talking about here. I pretty much agree with ESR on protocol-based cheats, and I accept the possible need for blessed clients until ping times are faster. It might even be reasonable to have blessed clients which shared your scripts with your opponent.
  • Not only is the aimbot possible without open source, but I have a friend who did it a couple of years ago. It wasn't much fun to play against a guy who could kill you with an axe without taking a single point of damage himself. :)

    (It _was_ neat to watch, though; the bot would let you run in a circle around the victim faster than they could aim, all while hitting them with the axe. It also did auto-aim, did motion prediction so it could fire ahead of players, and even knew how to aim at the wall in front of a player with a rocket so they got explosion damage while the bot switched to another weapon and hit 'em with that too.)

  • [This project] has a similar problem with keeping a hostile client from supplying bad results. Their solution is basically the same as Carmack's: keep some bits of the code private. Here is what they say:

    Why is [it] still not completely open-source with all parts of its source code?

    Although we are providing all of the code linked on this page for public perusal, it is still necessary to keep select portions of the codebase unavailable for general distribution. Indeed, this is an aspect of our operations that we would very much like to be able to eliminate.

    Quite truthfully, releasing binary-only clients still does not completely eliminate the possibility of sabotage, since it is relatively easy for any knowledgeable person to disassemble or patch binaries. This is actually quite a trivial task, so we urge you not to try. Indeed, security through obscurity is actually not secure at all, and we do not claim it to be such.

    The source code available from this page is really all of the algorithmic code that would be of interest. The only code that is not present is the file-access and network socket code, which is not terribly interesting (nor pleasant to try to comprehend). The computational cores and platform-specific optimizations included in this package are what you would want to look at if you are interested in how the client works, or how you can increase the speed of the client for your processor.

  • I've written this sort of code for a living for 10 years, and ESR doesn't even come close to understanding the technical issues involved. Instead, he's taken snippets from recent posts and built upon them yet another pulpit for "open source will save you". His assumptions and solutions are flawed.

    Even Carmack's comments are predicated on yet another assumption: an environment of primarily statically loaded entities. Even with extensive preprocessing and highly efficient resource management, it can take hundreds (or thousands) of milliseconds to load and initialize geometry before it comes into view. Quake can avoid this with small level sizes, low entity counts, a large memory footprint, a local large disk or CD cache, and long level setup times.

    If VR applications are to move past the common set of constraints Quake imposes (which are completely acceptable for its genre of gameplay), then more issues are introduced.

    Very large worlds running across many distributed servers, with clients utilizing continuous level of detail and realtime streaming preloading, make it possible to wander around infinite-sized worlds and never have to wait for a "loading" screen to interrupt the "suspension of disbelief" so important in games. ESR's simple "fix" breaks this.

    If you want to prevent information cheating, you have code at the servers which examines user actions and determines whether they appear to be acting on more information than they should have according to the game policies and the distributed master clock. This works for people shooting at targets on their way around corners, as well as aim bots. Simple heuristics can filter out false positives, such as lucky shots.
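    A server-side heuristic of the kind described above might be sketched as follows. This is only an illustration: the function names, the 40% threshold, and the 20-shot minimum are invented, not taken from any real server.

```python
# Sketch of a server-side cheat heuristic: flag a player whose shots
# consistently target entities the server knows were not visible to them
# at fire time. A few lucky blind shots are normal; a high ratio is not.

def suspicion_score(shots, visible_at):
    """shots: list of (tick, target_id).
    visible_at: dict mapping (tick, target_id) -> bool (server's own record)."""
    if not shots:
        return 0.0
    blind_hits = sum(1 for tick, target in shots
                     if not visible_at.get((tick, target), False))
    return blind_hits / len(shots)

def is_probable_cheater(shots, visible_at, threshold=0.4, min_shots=20):
    # Require a large sample before accusing anyone, to filter lucky shots.
    return len(shots) >= min_shots and suspicion_score(shots, visible_at) > threshold
```

    The key point is that the evidence (who was visible to whom, at what tick) lives entirely on the server, so an open-source client cannot tamper with it.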

    Use digital certificates for robust user authentication with a difficult-to-automate "create new user" system and you've raised the price of cheating. Cheaters can be locked out, or even better, placed into games with each other, separate from the non-cheating players.

    I agree that Open Source is the way to go - security by obscurity is why most systems calling themselves "secure" are laughable to those who work in the field. But it is not a magic bullet.

    ESR's lessons: (a) Never trust a client program to be honest. This is absolutely correct, and I'd add that servers shouldn't be trusted to be honest either: clients should watch servers, and servers should watch each other.

    (b) You can't have real security if you trade it away to get performance. I agree with this truism.

    (c) Real security comes not from obscurity but from minimum disclosure. No, real security also comes from protecting information and actively monitoring for compromises. Minimum disclosure only works in a small subset of security models. If I hide a spreadsheet column that is too sensitive, and that column is needed, then no work will get done with that spreadsheet even if the client is open source.

    (d) Only open source can force designers to use provably secure methods. I hate to berate open source, but I must, as this comment is the pulpit speaking. I would argue that "peer review" has worked just fine in many situations: the NSA has done quite well providing strong ciphers for its clients (by this I mean the ones it cares about actually protecting) without open source. Open source is a superset of classical "peer review" in this regard.

    There are some interesting lessons in this source base release. I hate to see them abused as a pulpit for incorrect information, especially when it hits near home (I run an open source massively distributed VR project).

    If anyone wants to learn more about the few hundred other cool technical problems in online VR, a recent book ("Networked Virtual Environments", ISBN 0-201-325577-8) provides a great overview; it's the first I've seen that was written by people who knew what they were talking about.

  • This sounds very interesting. Like it or not, advances in armament change how war is waged. Bots are like Gatling guns against bows and arrows. People might decry how each new weapon or defence sullies the "honour" of combat, but victory favours the bigger guns.

    In Real Life, Stealth bombers, AWACS, satellite imaging, and cruise missiles make you a World Power -- not a cheater. As I understand it, the modern Army is working to equip its soldiers with whatever is needed to coordinate and execute a mission and return alive. You want a fat defense contract? Make an aimbot for the Army.

    I think that if, rather than being suppressed, bots were embraced, you might well see a shift from the immediate tactics of taking out the enemy in the next room towards strategic warfare. (Imagine the Slashdot Slashers in a virtual campaign against the West Point Bedwetters!)
  • Well, I'm not going to go into it too deeply, but at least in reference to this situation:

    Obscurity in this case is giving the client the positions of all of the players in the game, and trusting the client to display to the player only the other players that can be seen from his current viewpoint, leaving the rest of the players obscured.
    Minimum disclosure would be telling the client only where the other players that the player can see from his current viewpoint are. This way, the client/player can't cheat, because the client doesn't know any more about the positions of the other players than the current player is supposed to.

    In general, obscurity means leaving extra data that could be used to cheat hidden somewhere in the protocol. The programmers are trusting that they are hiding the information well enough that the players won't be able to find it. Of course, once the source is released, this "hiding" is no longer good enough, because cheaters can just look at the source and see where it is hidden.
    Minimum disclosure means only giving out as much information as anyone needs to know. There is no extra information hidden in the protocol for cheaters to find or use to their advantage.
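    The minimum-disclosure idea above can be shown with a toy per-client update function. Everything here is invented for illustration: the room-based visibility test stands in for a real line-of-sight check against the level geometry.

```python
# Minimum disclosure, sketched: the server builds each client's update from
# only the opponents that client's player can currently see, so the client
# never holds position data it could leak to a cheat.

def can_see(viewer, other):
    # Toy stand-in for a real line-of-sight test: same room means visible.
    return viewer["room"] == other["room"]

def update_for(player, all_players):
    """Build the per-client update: only positions the player may know."""
    return [p for p in all_players
            if p is not player and can_see(player, p)]
```

    With obscurity, by contrast, `update_for` would return every player and the client would be trusted to hide the rest; the cheat is then one source edit away.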
  • by WNight ( 23683 ) on Monday December 27, 1999 @10:55AM (#1442664) Homepage
    Designing Quake in open source wouldn't solve the problem of writing aim bots. Aim bots take knowledge the player has, or could easily have (by looking behind them), and act upon it.

    But designing Quake as open source, and seeing how easy a hack like making players show through walls would be, could be expected to lead to different design choices.

    Quake uses the PVS and PHS (potentially visible/hearable sets; the PVS is the list of all 'areas' visible from the player's location, and the PHS is the PVS and all PVSes...) to know when to send information about one player to the next. In some fairly connected levels, the PHS is *huge*, and the client knows where the other players are far in advance. This could be changed to use an algorithm to determine the quickest route for both entities to take before becoming visible to each other; if that's under the update delay (i.e., likely to happen before the next update), the information would be sent. Otherwise, it wouldn't. Thus, cheaters would have a much smaller window of opportunity.

    One other 'bug' is that if the player models are replaced with a big pointy model, where each axis has a big arrow, you can see these arrows far ahead of the actual player, even through walls and floors (if the other player is in your PHS). Writing the code, in many spots, to generate errors if the models were above a certain maximum reasonable size, or if they passed through world brushes, would make this a non-trivial hack, unlike now, where you load the models into an editor, add a couple of points, and boom, you can see people through walls.

    The skin, the image that wraps around the wireframe, can be altered. A white skin naturally shows up better in shadows than a dark one. In Quake some colors are 'full bright', meaning that they 'emit light'... A skin made up of these is literally visible in complete darkness. If they changed the renderer not to use these drawing modes for character skins, it would eliminate this easy form of cheating. A skilled programmer could add it back in, but it would take a lot more work than simply opening up MS Paint and flood-filling a PCX...

    Also, the design of the basic game could change a little. Quake 1 had few instant-hit weapons; the shotgun and the lightning gun are the only ones. Bots of course favor instant-hit weapons because they don't have to predict the enemy's movements; they simply line up the crosshairs and fire. The shotgun isn't a big deal because it does fairly low damage and the spread makes it less effective at medium or long range. The lightning gun is a Quake 1 bot's favorite weapon. Knowing this, they could have designed the game to reduce the effectiveness of the lightning gun.

    Perhaps it's heavy and induces some movement lag on the character, preventing instant-turn shots... There are many ways the game could have been changed to prevent there from being a perfect weapon for the bots. And really, the lightning gun, and later the railgun, are no-brainer guns. All it takes is reflexes and a fast computer/video card. Other weapons take thought, to predict the enemy's movements, something a bot isn't good at.

    There are *many* design changes they could have made, and pretty well would have had to make, if Quake 1 had been designed with open source in mind.

  • 1024x768, 32-bit color, 60 fps... (By the time this is feasible, you can expect MUCH higher resolutions and frame rates than this.)
    Since I wouldn't tolerate any lossy compression on my Quake 3 video signal, that requires 1440 Mbit/sec. That's 960 T1s per user. For a decent game of CTF, the server would have to be a large cluster of powerful workstations, and have more bandwidth than the entire east coast of the U.S.
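    The arithmetic behind these figures checks out, assuming "Mbit" is read as 2^20 bits and a T1 is rounded to 1.5 Mbit/s:

```python
# Back-of-envelope check of the uncompressed video bandwidth quoted above.
bits_per_frame = 1024 * 768 * 32        # resolution x color depth
bits_per_sec = bits_per_frame * 60      # 60 frames per second, uncompressed
mbits = bits_per_sec / 2**20            # "Mbit" as 2**20 bits
t1s = mbits / 1.5                       # T1 rounded to 1.5 Mbit/s

print(mbits, t1s)  # 1440.0 960.0
```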
  • by WNight ( 23683 ) on Monday December 27, 1999 @11:03AM (#1442666) Homepage
    And there's the problem: an untrustable system being expected to run a trusted program. It's theoretically impossible, even given a perfect implementation.

    If they want this to work, they'd need to sell black-box PCs, which exist for this purpose only, and can be passed an encrypted file, decrypt it, check for bugs, and return a 'yes' or 'error in area x,y'...

    The system doesn't need to be, and in fact, shouldn't be, closed source.

    It just has to be designed in such a way that any tampering will render it inoperable before it's opened to the point where data can be read off of it. Nothing is 100%, so they'd need to set a target budget: the model that is simply encased in black plastic is rated to take $100k to break; the model shipped in plastic, in a safe, guarded by the Marines, is estimated to take $5M to break; etc.

    But the machine can't be modifiable by anyone 'untrusted' at any point. All modifications need to be done by the trusted vendor of the product, who one expects to be bonded and insured, etc.

    Only then can trusted programs be run on the system and be expected to be safe.
  • Since no one seems to really know how netrek's anti-borg system works, I'll put it down in detail.

    When a new client binary is created, the person compiling it makes a public/secret RSA key pair. The public key is sent to the keyserver (Carlos V), while the secret key is compiled into the client. The code to create these keys, as well as the client/server (they are the same) routines to do RSA aren't in the client or server source. This is because of export restrictions. Anyone can download the RSA source from a certain FTP server in Europe, which is what I did when I released my netrek client. The RSA source isn't widely distributed, but there isn't anything closed about it. The only thing secret is the secret key, which only the person compiling the client knows.

    After the client connects to the server, the server sends a cookie of random data to the client. The client sticks the IP address of the _server_ (via getpeername()) in front of the cookie, and encrypts it with the secret RSA key inside the client. The server gets the response, and if it decrypts properly with the client's public key, the client is allowed.

    There are several weak points to this method. The most obvious, but not the easiest, is to somehow extract the secret key from the client. This was how DVD was cracked, by extracting a secret key from the Xing player. The secret key isn't actually in the client as raw data, rather C code is generated that performs the operations needed to do the RSA encryption with the key. With the C code in front of me, it's not trivial to tell what the key is, figuring it out from the compiled code would be very difficult indeed.

    Another weak point is binary hacks. The client does NOT perform a binary hash (checksum, CRC, MD5, etc.) of any kind, so you can take a hex editor to it at will. There really is no way to stop this; if you put in a checksum, someone can just hack the checksum code. With the complexity of programs like netrek and Quake, adding new features with a hex editor is a difficult task, not as easy as fiddling a few bytes on console systems with a Game Genie or the like. Netrek does all game mechanics on the server, so "no damage", "infinite ammo", and "shoot through walls" cheats just aren't possible.

    The easiest way to get around this system is a man-in-the-middle attack. You have a real client connect to the server via your cheat proxy. The proxy understands the game protocol, and can cheat in a number of ways. For instance, it could allow the normal client to connect to the server and authenticate itself, then pass control over to a fake client. Netrek tries to make this hard by putting the IP address of the server inside the cookie. If the real client thinks the server is at one address but the server is really at another, the RSA response will be bad. You can get around this with a dynamic library hack to getpeername(), or a kernel-level hack for static binaries. Or have a transparent proxy that can intercept packets.
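    The handshake described above can be sketched as follows. This is a toy reconstruction, not netrek's code: the RSA numbers are the classic textbook pair (far too small to be secure), MD5 is assumed as the digest, and the real client hides the secret exponent in generated C code rather than a variable.

```python
# Toy reconstruction of the netrek challenge-response handshake.
import hashlib

N, E = 3233, 17          # public key (p=61, q=53), registered with the keyserver
D = 2753                 # secret key, compiled into the client

def _digest(cookie: bytes, server_ip: str) -> int:
    # Client prepends the IP of the server it actually connected to.
    raw = hashlib.md5(server_ip.encode() + cookie).digest()
    return int.from_bytes(raw, "big") % N

def respond(cookie: bytes, server_ip: str, secret: int = D) -> int:
    # Client side: sign the digest with the compiled-in secret key.
    return pow(_digest(cookie, server_ip), secret, N)

def verify(cookie: bytes, server_ip: str, response: int) -> bool:
    # Server side: the response must decrypt (via the public key) to the digest.
    return pow(response, E, N) == _digest(cookie, server_ip)
```

    Binding the server's IP into the signed cookie is what frustrates the simple proxy: a client that signs the proxy's address produces a response the real server rejects.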
  • After reading through the replies, it seems to me that most people are thinking like those really annoying bosses who assume anything can be done with the resources at hand.

    Assuming you work (or have worked) in IT, you've probably gotten a bogus request from somebody who means well but doesn't understand current capabilities. They want to print metal foil with their inkjet printer, but they don't want to buy a new printer; or they want you to make it so that somebody can read a database but not write to it, when it's a flat file format accessed with a closed-source client... and do it now, before they get back from lunch.

    Both of these can be addressed given enough money (in the first case) or time (in the second case), but a condition of doing each is that you don't have the time or money.

    I'm not familiar with Quake code beyond what appeared in this forum, but it seems that the problem could easily be solved just by requiring each player to be on a LAN or have a T1 minimum.

    Simple solution, eh?...

    But it won't work, because I don't think *anyone* is willing to pay for a T1 just so they can play Quake. So you have to adjust the parameters... but Quake can't be coded to be a fun, responsive game without enough low-latency bandwidth.

    PHB says: "I don't care. Make it work over a 33.6 modem connection with security".

    Smart coder says: "You can have server-side security, or make it work over a 33.6 modem, but not both".

    Guru coder states the solution: "I can make it fun for 33.6 users, and make it step up to a secure connection for people with less than 120ms ping times. Ask Carl in the art department for two little icons that tell the user whether they are running a cheat-proof game or not, and tell Emily in docs that she's going to have to figure out how to explain this 'feature' to the users".

    The problem is there whether it is open source or not (although the solution is easier to find if you have 13,000 people thinking about it)... the problem is in the specs and assumptions.


  • by John Carmack ( 101025 ) on Monday December 27, 1999 @11:33AM (#1442678)
    Thank you.

    Lots of people were just waving their hands saying "Netrek solved this years ago" without being specific.

    As you say, it isn't really a solution. If there were a couple million people playing netrek, there would be cheat proxies for it just like there are for Quake.

    I think everyone can agree that it just isn't possible to solve the problem completely; you can only make it more difficult, which is exactly what the netrek solution is: release binaries without the exact source and make them difficult to decipher.

    John Carmack
  • Yes, that is a good idea...

    Can inanimate objects be labeled "enemy"? Could the whole level be labeled "enemy"? Intelligent aimbots could of course attempt to distinguish between "real" enemies and "phantom" enemies, but that would require their complexity to go up and their performance to go down. It's just an arms race... but if it can be made sufficiently annoying, then perhaps people won't try it. Then again, this will kill off client-side bots too. - the Java Mozilla
  • c) real security comes not from obscurity but from minimum disclosure.

    Obscurity:
    I'm not going to tell you anything about me, because I might have a flaw.

    Minimum disclosure:
    I'm not going to tell you anything about me, because you don't need the info.

    The difference is very, very slight. Here's a better example:
    Rather than do something silly, like broadcast everyone's info to the client so it can "desynch" from the server, I'll just give you a few bits and do more management on the server side, disclosing less of your opponents' states.
  • Plain and simple, if your computer can read data well enough to do something with it, you can read & copy it. This is an old maxim of computing, going back at least to the Apple ][ days, and it's still just as true. As in your example, closing the source does not make the system any more secure in the long run; someone motivated enough can figure out how to read it, copy it, or do whatever else they want with it. There is no way to safely send a customer something that their computer will be able to read but not the customer.

    The answer for companies who want to give customers this sort of access is simple: don't send it. Improvements in network bandwidth allow the vendor to set up a black box on the internet. Customers would be able to log on to the black box and access the simulation with no access to the data used to create the simulation. They will just see the interface.

  • Guess what would happen if the windows source code suddenly became open? All kinds of cracks would appear. We've just proven it.

    This might start to sound like flamebait, but I don't particularly think this is a Bad Thing(tm).

    There is no shortage of Windows cracks/hacks/"security" subversions/DoS attack programs, etc as it is now, a few million more wouldn't hurt so bad, MS would just sell everyone a service pack for $59.95.

    More people might wake up to how crappy MS software is and demand better instead of just putting up with it as they seem to be.

    Not to mention it might force MS to write better code in the future, but I doubt it. I mean, think about it for a second: if MS coded the best OS in the world now, free of bugs, rock-solid stable, secure as the US Federal Reserve in NYC, what would they sell next? It's part of the corporate business model to make sure they still have a product later, something Open Source doesn't suffer from.

    It'd be a huge wake up call, thats for sure. Something MS desperately needs if you ask me.

    -- iCEBaLM
  • taniwha wrote:

    really my point, I think, is that there are classes of applications that can't be addressed by Open Source at all

    This application cannot safely be addressed by Closed Source software either, as in the original example. It is impossible to make something secure from the user but accessible to the user's local CPU. The only safe solution is to never give the information to the local machine at all. This solution will work at least as well with Open Source software as it will with Closed.

    Never fall into the fallacy that Closed Source software allows Security through Obscurity to work, it never will work in a security critical environment.

  • Yes, it would. Bandwidth is definitely the limiting factor, and in fact is the reason Carmack went with the "insecure" design.

    Something like streaming MPEG can probably cut the bandwidth by a factor of 100 without any noticeable effects. Only 9.6 T1s per user then :-). And if that's the case, a pair of 400kb/s lines just might do it.

    Not that far off, when you think about it.
  • SheldonYoung wrote:

    The only solution might be to run all the code on the server. Yes, really. Imagine the client side just being a 2D graphical dumb terminal just drawing frames from the server. It can't be 3D because it would be hackable at the driver-level again.

    You can safely run some of the code on the client, and you can do 3D. The client would collect user input (keyboard/mouse/joystick/etc), and send it to the server. The server would do all the calculation as to what the new game state is. The server would then send to the client only the information the player is allowed to have. The client would then do the 3D rendering of that info, the sound construction and mixing, etc.

    The server properly limiting the info will prevent the looking around the corner problem. The server doing the input processing will minimize the aimbot problem. You couldn't get the extreme aimbots that Quake has seen (since before being Freed), but you could still get things like "snap to target" mouse motion, or crosshairs that take target motion into account. Aimbots would work no better than an excellent player, rather than making the player superhuman.

    As ESR said, the most successful Open Source games would have to be designed with client modification in mind. Quake wasn't, which is why these deficiencies seem so blatant.
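    The division of labor described above (client forwards input, server owns the state and discloses only what each player may know) can be sketched as a toy loop. All names and the flat 2D state here are illustrative, not Quake's actual structures.

```python
# Server-authoritative sketch: clients send raw input; the server applies it
# to the one true game state and returns each player only their allowed view.

def server_tick(state, player_inputs):
    """state: dict player_id -> (x, y). player_inputs: dict player_id -> (dx, dy)."""
    for pid, (dx, dy) in player_inputs.items():
        x, y = state[pid]
        state[pid] = (x + dx, y + dy)   # server, not client, moves the player
    return state

def view_for(state, pid, can_see):
    # Only positions this player is allowed to know ever leave the server.
    return {other: pos for other, pos in state.items()
            if other == pid or can_see(state[pid], pos)}
```

    Since aiming decisions never happen on the client in this split, an aimbot is reduced to generating good input, which, as the comment notes, merely approximates an excellent human player.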

  • I agree about the possibility of splash damage to someone who "accidentally" hit one of the phantoms, but that's why you'd only place them "just" behind the person, so they couldn't hit them; no other players would have that phantom sent to their client, so it wouldn't matter. The only problem then would be that it might act as a shield, at least as far as the client was concerned; but I had thought that the two clients told the server if someone got hit, and it would decide the final outcome (or it might not work that way at all!! I make no claims to having looked over the code). I agree that placing a phantom right behind the user might have too many issues to work out, but it is only one of many possible placements.

    As for the transparent skins being detected by the bots and worked around: perhaps that would be possible, but if you threw in enough variable ways it could happen (phantoms just behind a pillar, phantoms embedded in a window, normal-skinned phantoms in a pit of lava, phantoms with "predator" skins of the wall behind them [permanent invisibility], etc.), it might make it very hard to write an aimbot client, or at least a client that wouldn't score enough direct hits on phantoms to alert the server that something was amiss (I agree that accidentally firing a shot at a few of the phantoms shouldn't trigger a cheat detector).
  • Another example:

    NetBIOS hidden shares being transmitted in share lists and hidden on the client end.

    Minimum disclosure:
    NetBIOS hidden shares having never been transmitted to the client at all.

    (NetBIOS uses the former technique)

  • Closed source is no better. The end user can always run the closed source on a fast emulator that logs everything the software does. That can also be done in hardware. Back in the Apple ][ days, a cracker's favorite tool was a 'bus rider' which could dump a log of everything it saw on the bus. From that, it was just a matter of following the thread of execution and dumping out a binary image of the loaded software.

    Later software got a little more clever, but never got to the point of actually making copying impossible. Given the nature of the problem, it never will. Any process environment can be simulated.

  • by Anonymous Coward
    Here's my go at a solution that leaves everything open-source:

    The basic premise is that the server would compile (and download) a unique "client binary" for each user that wishes to play (as they join). That client would be signed and incorporate semi-random checks like "send me the magic number after every 4th frag", and the client would be time-limited (it would expire/not be serviced by the server at the end of every map, etc.).

    Yes, it's not completely secure, but assuming you have enough bandwidth and CPU power to compile and download such clients, it would make hacking such clients pointless.

    It seems to me the nature & quickness of a game of Quake and an online e-transaction represent the same kind of problem. By forcing the hostile client to download a new "client executable" every time they want to play/pay, and embedding checks within it along with tight time limitations, you can pretty much solve the problem and leave the baseline code open source... yes, the server ops will have their "secret code" they never disclose... and this leads to the question of trusting the server ops...

    Which can be solved by taking the model one level up....the server ops create their server source with their own flavor of client security and then they submit them to a trusted organization (id software). Id inspects the code (to be sure there are no server-side cheats built-in) and then their system compiles the server code (with a similar kind of signing/magic number checks like the client code) and ships it to the server ops just like the server ships clients to the users....with embedded checks and time-limitations.

    At that point, you could log onto id's site and view the "server list" and see which servers are currently authenticated. People could still run "unauthenticated" servers but you would be taking your chances if you played on them.

    Doesn't this seem workable? (well, maybe in a few years when bandwidth is a bit wider for everyone, but, hey, it's viable under the right conditions isn't it?)

    The point being that the "client" changes after every map/transaction, and by the time you hack the client, the game/transaction is over/voided, making the hacking effort moot. Yes, server ops would have to change their client "security" often so that people couldn't analyze previously downloaded clients and build cracks for them, but with sufficient randomness built into the client binaries, I think you could nearly eliminate the cheating problem... and leave everything open source.
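    The "magic number after every 4th frag" idea could be sketched with a keyed hash. The per-session secret, the checkpoint rule, and the function names here are all invented for illustration; `hmac` is Python's standard library.

```python
# Sketch of a per-session challenge check: each compiled client carries a
# unique secret, and the server (which built the client) expects a keyed
# proof at fixed checkpoints. A stale or hacked client lacking the secret
# cannot produce valid magic numbers.
import hashlib
import hmac

SESSION_SECRET = b"baked-into-this-client-build"   # unique per download

def magic_number(frag_count: int, secret: bytes = SESSION_SECRET) -> str:
    # Client side: called after every 4th frag.
    return hmac.new(secret, str(frag_count).encode(), hashlib.sha256).hexdigest()

def server_check(frag_count: int, reply: str, secret: bytes = SESSION_SECRET) -> bool:
    # Server side: checkpoint must be due, and the proof must match.
    return frag_count % 4 == 0 and hmac.compare_digest(
        reply, magic_number(frag_count, secret))
```

    Because the client expires at the end of the map, a cracker racing to extract the secret is working against the clock, which is exactly the poster's point.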

  • Why not use binary builds of the executable files digitally signed by the builders, and have the servers check the signatures against a trusted certifying authority?

    Because signing binaries doesn't work. (Not for this problem.) How does the server know what the signature of the client-side binary is? Because the client told it. Carmack's suggestion of using a loader to checksum the binaries, rather than having them checksum themselves, also doesn't work: that just adds another level of indirection and changes the problem from "hack the client" to "hack the loader."

    This stuff is easy. People have been cracking harder copy-protection than this for years. And in fully closed-source environments.

    Like I said yesterday:

    When you receive a signed message/packet/whatever, the recipient can verify that the sender of that packet had access to the private key that corresponds to a particular public key. That doesn't say anything about the integrity of the message, only about the set of secrets known to the sender.

    To oversimplify: you can know who I am, but you can't know that I'm telling you the truth.

    Where do the private keys come from? If they are embedded in the Quake executable, then anyone can extract them and use them to sign anything. If they come from PGP's web of trust, then still all you've done is verify the identity (or pseudonym) of the player -- not of the software that they are using.

    This is all very similar to the general copy-protection problem, as well as the fundamental impossibility of DVD encryption.

  • At least if you don't ask too much of it.

    As we all know, nearly all encryption relies on hard problems: something that takes a prohibitively long time to break. But people somehow think that "prohibitively" must mean a thousand years or something. I think this is because people become too narrowly focused on certain kinds of security. Quake doesn't matter that much. Cheating at Quake won't make anyone rich, and the feeling of power you might get is pretty pathetic, really.

    All the security in Quake has to stand up to is a few years of not very hard attempts to break it. That is, in a few years Quake won't be so exciting and people won't care too much, and until then cracking Quake can only be a hobby anyway. Foreign operatives and the NSA couldn't give a damn.

    In this situation security through obscurity is enough. And it has worked fairly well up to now; not perfectly, but well enough. It might not have been as hard to disassemble Quake or make a proxy as it would be to break a key, but it has been hard enough.

    ESR is being dumb when he says that client caching is bad. He's being a total blockhead. Quake isn't an eCommerce application. It's a game. Speed and quality of play are the most important aspects, not security. And when the two conflicted, quality of play won -- as it should have.

  • Unfortunately open source games are not ideal from a gamer's point of view. The arguments that open source may improve quality and decrease development time do not hold up against how much easier it becomes to cheat -- and when it comes to multiplayer, that's what gamers care about most.

    Seeing an opponent around the corner through cheating indeed could be eliminated, with a performance hit. This would have (probably) been done if the game had its sources available from the beginning, but...

    Other cheats would not be detectable. The idea of detecting auto-aimers is truly ludicrous. With all the variable lag added in, the uncertainty grows to unacceptable levels. There is no way to be 100% sure that a person is cheating via auto-aiming. I believe such detection would be wrong 75% of the time, and against good players it would be wrong all of the time. The idea shouldn't even be argued over -- it's simply not feasible.

    If a method of detecting auto-aimers were developed, the cheaters would simply add a few more milliseconds of delay before targeting the enemy. That wouldn't be detectable at all, certainly not with variable lag added in, and cheaters could go even further by randomizing the delay before targeting. It is simply not possible to detect this.

    Other undetectable cheats will be developed: dodging bullets by moving and/or jumping, automatically walking to pick up needed items, and others I can't think of. Possibly undetectable bots will be created, if interest holds long enough.

    The basic fact is that, source available or not, cheating exists and distrust exists between players. With source available, that cheating and mistrust escalate, regardless of whether the protocol relentlessly pursues a fake vision of security. Open source is not the solution to everything.

    Also, shame on you ESR for describing the Quake 1 source release as a lump of coal. How much experience do you have playing Quake, or more importantly writing networked games? Or anything for that matter -- show me the code. Your recent writings are not helpful. Go hack on Quake 1 yourself and show us what you mean -- through action.
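    The jitter argument above is easy to simulate. This toy Python sketch (all timing numbers are invented for illustration) shows a bot padding its reactions with a human-plausible random delay plus variable network lag; the resulting reaction times sit squarely in the range a fast human produces, which is why a timing-based detector would misfire:

```python
import random

def bot_reaction_ms(rng):
    """One padded bot reaction: fake 'human' delay plus variable lag.

    Human reaction times are very roughly 200-300 ms, so the bot waits
    a random interval in that neighborhood before snapping to target.
    """
    base = rng.gauss(mu=230.0, sigma=40.0)   # invented "human" delay
    lag = rng.uniform(20.0, 120.0)           # invented variable lag
    return max(base, 100.0) + lag            # never inhumanly fast

rng = random.Random(42)
samples = [bot_reaction_ms(rng) for _ in range(1000)]
# Every padded reaction is slow enough to pass for a fast human, and the
# average lands in the middle of the human range.
assert min(samples) >= 120.0
```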

  • I am actually a little bit insulted that the Quake release was used as an excuse for ESR to wave the "Open Source will eventually solve everything, including world hunger" flag. (Yes, I do participate in open source development regularly.)

    The concept that the Quake cheats would not have occurred if the development had been open source is an interesting one:

    If Quake had been designed to be open-source from the beginning, the performance hack
    that makes see-around-corners possible could never have been considered -- and either
    the design wouldn't have depended on millisecond packet timing at all, or aim-bot
    recognition would have been built in to the server from the beginning. This teaches our
    most important lesson -- that open source is the key to security because it changes the
    behavior of developers.


    The problem here is not the developers, nor is it aimbots -- it's because of bots that we have Quake III Arena today. The issue is that people abused the source code for completely selfish reasons, and those people were not developers of the software. Making the project open source from the beginning will not change the behavior of cheaters.

    If Quake had been designed to run securely on T1 lines and handle transaction rollbacks, I could see spending the time to write a 4-5 page article explaining why its security model has issues. But Quake is an old game -- a game designed to run on Pentium 90s and 14.4 modems. It's like convening a doctoral dissertation panel to grade a sophomore's CS project. (Just talkin' about the security model here, not the entire game.) This is more of an "Oops. Sorry. We'll fix it next time" deal.

    I'll stop now before I manage to get myself into a big-time flame war, 'cause man, am I upset right now.

  • The problem is that you can also cheat in the 3D rendering part -- spikes protruding from the model so you see the player through walls, for example.
  • In Quake, yes. In a game designed with proper distrust of the client, the server shouldn't send any info that couldn't otherwise be seen.

  • A real player would never see these "shadows," but an AimBot would fire at these phantom targets, and that could trigger the server to shut him down.

    Better, reduce his shot damage to zero. Then let him slowly suffer.
  • Computer security is just like a contract, a door lock, or a security system... all can be beaten given enough determination and/or money and/or time. Most of us lock our homes with a piddly little pot-metal thing that does what, exactly? The alternative is to put in an expensive security system that can, of course, be beaten as well. The hassle of paying for that fancy security system, swiping my card/whatever, etc., isn't worth it to me. If I want to play a net game with you, I'm going to have to trust you a bit. If I'm going to enter into a contract with you, then I'm also going to have to trust you a bit. If I invite you into my home to see my stuff, then I'm also going to have to trust you a bit. (Sounds like security through obscurity, huh? :)

    The task of creating a reasonably secure environment to play Q1 over 33.6 modems isn't worth the effort, if it's even possible, IMHO. If Q1 had been built as open source from the ground up, more time would have been spent patching it than playing it, as a new hack would've come out weekly. The "final" open source product might have been more secure and robust than the current Q1 version, but it would have been much more painful for the players along the way.

    Obscurity doesn't work, but it does slow things down a bit. Quake 1 players had a couple of good hack-free years because id didn't make it too easy. id opened Q1 up, and all the hacks that had been held back by obscurity flooded in.

    Just my 2 cents
  • by GFD ( 57203 )
    The solution to this problem, as it stands, is to expect cheating by clients and try to detect it. There are two approaches to doing this.

    Firstly, the server would create "signature" situations in which it would normally be very hard to get a hit. A person would be allowed (say) to be lucky once, but would get bounced if he got too lucky.

    Secondly, the server could look for players who are just too good. Yes, a hacked client could get smart and introduce errors to mask aimbotting, etc., but that would be fine. Enough errors and a person is back to relatively normal skill levels.

    Probably a combination of the two would be good enough. A player matching certain statistical signatures would start to be passed bogus situations mixed in with real ones, to see how the client reacts.
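    The second approach can be sketched statistically in Python (the baseline accuracy and flagging threshold below are invented for illustration): treat each shot as a Bernoulli trial against a baseline hit rate and flag players whose hit counts are astronomically unlikely to occur by chance.

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def flag_player(shots, hits, baseline_accuracy, threshold=1e-6):
    """Flag a player whose hit count is essentially impossible by chance.

    The threshold is deliberately tiny so merely skilled players survive;
    only aimbot-grade accuracy trips it.
    """
    return binomial_tail(shots, hits, baseline_accuracy) < threshold

# A strong human: 45 hits in 100 shots against a 30% baseline -- unusual
# (p is roughly 1e-3) but nowhere near the flagging threshold.
assert not flag_player(100, 45, 0.30)
# An aimbot: 95 hits in 100 shots -- flagged.
assert flag_player(100, 95, 0.30)
```

    The same machinery covers the "too lucky in signature situations" idea: just use a much smaller baseline probability for those engineered shots.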

  • Carmack is (of course) correct when he says you have to send some data a little before it can be seen, or else client-side prediction won't work. Imagine player A is *just* around the corner from player B, who is on a slower connection. Player A will then be able to see B before B sees A.
  • [...] limit reign in on the amazing amount of flexibility purposely built into the engine in the first place?

    Sure, and they didn't implement skeletal animation, which reduced the number of mods. And they didn't implement specular highlights, or accurate sound reverb, or a million other things. These all limit the flexibility of the engine, but none of these lacks ruined the game. A lack of fullbright colors could be approximated with light generation if it were *needed* in a mod anyway. Changes to Quake mean the new game wouldn't be exactly like Quake, but that's not a bad thing. Quake wasn't exactly like Doom, yet it still rocked. Quake isn't gospel, and changes to it aren't heresy.

    I might be inferring this incorrectly, but it sounds like you're saying using the PHS at all is a bad thing. If that's the case, all I can say is NOT using the PHS is a BAD thing. It's a TREMENDOUS speed-up [...]

    Yes, using the PHS is a speed-up. It's a kludge -- imperfect results, but close enough for government work, and it does buy speed.

    But, if they had thought about the Stooge bots, and ZBots of the future, they might have decided that a bit of speed was a good trade for less chance to cheat.

    Having a more accurate PHS and PVS to determine when sounds and enemy models are sent would save network traffic and drastically reduce the chance of cheating. With a large PVS, someone can be directly opposite you on a wall, like in the Q1 start map by the initial start spot. The location of their model will be sent to you even if you've got a wall between you, because you're in the same PVS leaf. This happens in a long hallway too, where clients separated by 10 seconds of running could be "visible" to each other through the walls. Not only is this a waste of bandwidth, but it gives the cheater ample time to identify their enemy's movement.

    If models were sent only when they could possibly be seen, you'd have a flicker of warning before the enemy ran around the wall, if you were cheating, but you wouldn't have the seconds of warning you often have.

    Recording the shortest distance between all portals visible to each other, like a mini-vis table, would speed distance calculation. Do a search from the player, to the nearest portal, to the one next closest to the enemy, and so on, until you reach the enemy. If the shortest distance is under N units, send the model info. Certain bizarre rooms, with millions of pillars for instance, would play havoc with this, but they already kill performance for vis reasons...

    This isn't a perfect solution, and it's just what I came up with in a few minutes. I'm sure Carmack, with testing, could come up with something better, if cheat prevention were higher on their list of priorities.

    Yes, the game is incredible. But it was made with goals other than being cheat-resistant. Yes, all the ideas mentioned would limit either speed or flexibility to a degree. But have you ever heard "On time, under budget, error free -- pick two"? Everything involves tradeoffs. They didn't make some because their priorities were different than they are now.
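    The mini-vis idea above amounts to a shortest-path query over a portal graph. Here is a Python sketch (the graph, distances, and cutoff are all invented for illustration): precompute distances between mutually visible portals, run Dijkstra between the players' portals, and only send the enemy's model when the total path is short enough that they could plausibly be seen soon.

```python
import heapq

# Hypothetical mini-vis table: edges connect portals that can see each
# other, weighted by distance in map units.
PORTAL_GRAPH = {
    "A": [("B", 300.0)],
    "B": [("A", 300.0), ("C", 250.0)],
    "C": [("B", 250.0)],
}

def shortest_portal_path(start, goal):
    """Dijkstra over the portal graph; returns inf if unreachable."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in PORTAL_GRAPH.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

def should_send_model(player_portal, enemy_portal,
                      player_to_portal, enemy_to_portal, cutoff=600.0):
    """Only send the enemy's model if they could plausibly be seen soon."""
    through = shortest_portal_path(player_portal, enemy_portal)
    return player_to_portal + through + enemy_to_portal <= cutoff

# Nearby enemies are sent; enemies a long run of corridors away are culled.
assert should_send_model("A", "B", 50.0, 50.0)       # 50+300+50 = 400
assert not should_send_model("A", "C", 50.0, 50.0)   # 50+550+50 = 650
```

    The table would be precomputed at map-compile time, like vis itself, so the per-frame cost is only the (small) graph search.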
  • Sure. In this case, the chip is the black box, with extra stuff tacked on, at low levels.

    I still have doubts though... If the chip doesn't do onboard public key crypto, fast enough to swap banks of ram in and out, then this can't work. It'll only work if the only data outside of the trusted area is encrypted.

    It could happen, but it's simply shrinking the trusted part from the whole computer to the CPU, making the CPU do the authentication at every step, instead of making it do the work once as it loads the info into the computer.
  • They'll think you're incapable of reading the posts you reply to.

    The idea is that trusted computers, for limited applications *can* exist, provided all work done to them is done by a trusted company with trusted employees.

    You can send data to Bob to get it processed without Alice intercepting it; you can also keep Bob from actually viewing the data -- he just supplies the black boxes needed to process it. You still need two trusted parties, but they don't have to be the two main parties, as long as one client is using a trusted and secure client.

    I doubt you'd sell many of these, but if data-havens ever become possible, it might happen. Ditto with a lot of off-site checking on stuff like circuits.
  • As soon as they hire me, to do what is immediately obvious to anyone smarter than a troll.

    If id was going for cheat-proof, they'd have done this long ago. They're not, they're going for other goals, and making some tradeoffs.

    Two people can both be right and come up with different answers if they ask slightly different questions. Id's priorities are not the same as when they first wrote the game, and are not the same as mine would be, were I to tinker with the source now.
    Parsing the BSP is fairly simple. I mean, the specs for the format are published, and the format makes sense; it was picked for ease of use and speed, not obscurity. (Worst comes to worst, run a Q2 bot with a side-by-side mod that simply exists to do traceline calls between the bot and its victim, using the actual Q2 engine itself.)

    Shooting at feet and such is trivial. If the weapon is the RL, aim lower...

    What's hard is using area-denying weapons to herd players. Spam a few grens in an area, not to hit the enemy, but to keep them from running there. Then work them towards a spot you can hit them from a distance by shooting around them, gradually narrowing the net, forcing them to move to the shrinking safe area or take damage.

    I've never seen a bot handle this well on either end. They don't understand delayed gratification. They don't understand trying to miss when you have a low chance of hitting, because of a much higher chance later. Similarly, they are easily trapped, not realizing some splash now is better than being herded into lava later.

    Movement analysis is fairly easy -- figuring out how the enemy moves around and shooting them. What is hard is being able to do clever things instead of just making impossible shots.

  • ..which is only a temporary solution until the bot writers check for these things.

    The result is an arms race between bots and servers trying to detect them. Which can be fun for a while, until you suddenly discover that you use more time adding protection hacks to your server than you use playing the game.

    It is a lot better to do a redesign that limits the knowledge of the client. But, as has already been explained, that would be hard to do for Quake without killing performance.

  • However, I thought of a possible solution -- one way to stop AimBots might be for the server to send a few "fake" players to the client, with either transparent skins or embedded in the walls, or perhaps to randomly "clone" the user just behind himself every now and then. A real player would never see these "shadows," but an AimBot would fire at these phantom targets, and that could trigger the server to shut him down.

    Good idea, but not going far enough.

    Here's a fairly simple way to detect aimbots... From the server, send a packet to a client that says "there's a guy right behind you." Not a client-side fake -- make it real as far as the protocol is concerned. The guy doesn't actually have to exist; the server keeps track of things like that, so as long as the server knows it's fake, no biggie. Plus, the server doesn't broadcast one message to all clients; it's quite easy for the server to send "there's a guy right behind you" only to the player who has the guy behind him. Since it's right behind him, he'll never see it. Since other clients aren't told about it, they'll never see it either.

    But that pesky bot will see it and fire away.

    Two things can happen then. The server can detect that this guy keeps flipping 180 degrees to fire on the guy he's not supposed to be able to see, and kick/ban him, OR, you can watch this idiot flip back and forth and laugh until he goes away.

    There is a possibility of a normal player seeing the fake guy if he's lagged a bit and makes a fast turn. So I suggest only telling the client that the player is there every so often -- just inserting the player into a packet once in a while to see if the guy fires at it very fast. That way, the chance of someone spinning around and seeing the fake guy is minimal, but the bot will be fast enough to fire at it.

    One other possibility: put the fake guy UNDER the real guy, half in the floor. Then the bot will fire at the ground directly under the player, possibly splash-damaging the hell out of him.
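    A rough Python sketch of this honeypot scheme (all names and tuning values are invented; a real Quake server works nothing like this simplified model): the server occasionally includes a phantom entity in one client's snapshot, and flags clients that repeatedly fire at it.

```python
import random

class HoneypotServer:
    """Occasionally tell ONE client about a phantom player placed behind
    them; a human never sees it, but an aimbot snaps to it and fires."""

    def __init__(self, rng=None, inject_chance=0.05, strikes_to_flag=3):
        self.rng = rng or random.Random()
        self.inject_chance = inject_chance   # how rarely to bait (per update)
        self.strikes_to_flag = strikes_to_flag
        self.phantom_for = {}                # client_id -> phantom entity id
        self.strikes = {}                    # client_id -> phantom hits

    def build_update(self, client_id, real_entities):
        """Per-client snapshot; sometimes includes a phantom entity that
        no other client is ever told about."""
        update = list(real_entities)
        if self.rng.random() < self.inject_chance:
            phantom_id = "phantom-" + client_id
            self.phantom_for[client_id] = phantom_id
            update.append(phantom_id)
        return update

    def on_fire_at(self, client_id, target_id):
        """Record a strike if the client shot a phantom only it could
        'see'; return True once the client should be kicked/banned."""
        if self.phantom_for.get(client_id) == target_id:
            self.strikes[client_id] = self.strikes.get(client_id, 0) + 1
        return self.strikes.get(client_id, 0) >= self.strikes_to_flag

# With the bait chance turned up for the demo, an aimbot that fires at
# whatever appears behind it gets flagged after three strikes.
server = HoneypotServer(rng=random.Random(1), inject_chance=1.0)
flagged = False
for _ in range(3):
    server.build_update("bot", ["player-2"])
    flagged = server.on_fire_at("bot", "phantom-bot")
assert flagged
```

    Requiring several strikes before flagging covers the lag case the parent mentions: a lagged human might shoot one phantom by accident, but not three.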

  • Ohh yes, I do remember this quote. However, I believe my original statement is true. Originally, multiplayer Quake was just between clients, before QuakeWorld and dedicated servers came about. I started playing Quake online after QuakeWorld had come out. I guess what I'm really trying to say is that multiplayer was secondary to single player when Quake was being developed.

    Now that's hardly a fair statement. Internet multiplayer was in fact the main point of the game. QuakeWorld was developed "to test new ideas in internet multiplayer capabilities." That's it. Nothing more. Quake could do servers and dedicated servers from the beginning (although the dedicated server was a separate executable).

    Ahh back in the day.. John always hated talking to people to get feedback because he never really thought of security issues on his own, you see. I recall the reply he gave when someone suggested making a skin the same texture as the wall so as to not be seen. He replied, "I really wish you hadn't brought that up."

    Anyway, the idea of bots came up while the game was still under development, and I participated quite a bit in that discussion. Our idea was essentially to create a bot using the networking code. The proxy bot didn't occur to us, nor did we know that the "quakescript" language was powerful enough to do it. We thought, hey, why not just write a program that speaks the network protocol and can compete on the server? Simple, no? Anyway, it then evolved into the idea of pitting different bots against each other, to see who had the best bot.

    Anyway, the Stooge bot came out later, and it was essentially what we had been thinking of, except that it did no navigation, requiring a human for that. Also, we had thought the server was more robust about not allowing players to break the rules of the world (rapid firing, instant precision spins, etc.).

    Essentially, Carmack relied too much on the client for the integrity of the game. Not that that's terrible, I would have probably done the same, because it made the game faster and therefore better. But it also opened up that door to cheating.

    Oh well.
  • Ha, ha, only serious. [] Yes, I was saying that in a tongue-in-cheek way.

    For example: the basic problem is that people are dishonest. The Wrong solution is to use passwords and such; the Right Thing would be to make everyone perfect. But I am not Richard Stallman, who kept using "rms" or the empty string as a password because that was the Right Thing. I use passwords even though they are the Wrong Thing. So, obviously, I can only be joking when I make this claim.

    Still, it should at least make you wonder whether there isn't a better way to deter people from cheating -- say, trying to persuade them that it isn't fun and that it spoils others' pleasure, or banning them if they are caught at it. Or some such thing. I'm not saying it will work, but I think it should at least be considered before jumping to solutions like the ones being proposed.

    In any case, the essential point is that a binary loader just makes it slightly more difficult to cheat. I'm sure someone can easily come up with an (open source?) program that will spoof the loader in a systematic way. (As I pointed out, all it takes is a ptrace().)

    ``Never let your sense of morals prevent you from doing what is right.'' -- Salvor Hardin

  • "Also, shame on you ESR for describing the Quake 1 source release as a lump of coal."

    I don't think you've understood Eric's turn of phrase. In parts of Western culture (Scottish? Not sure.. I'm a Welshman, so we have different traditions) there is a tradition of bringing a lump of coal through the front door on New Year's Day -- it's supposed to bring good luck for the year ahead. Eric is describing the Quake source as a welcome gift.

    I'm sure he'd be the first to welcome something which simulates an arsenal of high-powered weapons... He'll be eagerly awaiting "Defend your homestead against the commies" when it's released :)
  • I didn't get the impression that ESR was criticising the Quake implementation.

    No -- what he was doing was responding to those who would claim "look, freeing Quake made it easier to hack -- therefore free SSL implementations (or whatever) must be easier to hack than the proprietary equivalent". That's a point that needed refuting, and ESR had a fair crack at doing so.

Any sufficiently advanced technology is indistinguishable from a rigged demo.