
Open Source Quake Causes Cheating? 474

Posted by CmdrTaco
from the gotta-hate-that dept.
Stargazer writes "Well, looks like people are having problems with Quake's release under the GPL. It's not a conflict with the license, but rather, mean-spirited people are now creating clients which give them an unfair advantage, to say the least. John Carmack ponders this problem in his .plan file, and offers, unfortunately, a closed-source solution. "

Quake GPL Release Causing Cheating

  • by ZuG (13394)
    This is really sad, guys. ID is one of the few companies who have really embraced open-source gaming, and what he says is going to have an effect on all gamers in the future. If he finds that there is no way around massive cheating other than to keep the source closed, then this will have a major effect on all other gaming companies, and they will follow suit. A few can ruin this for everyone. If you know someone who plays QIII under Linux and uses a cheat, plead for them to stop, and have them plead for others to stop. This is one of the first attempts at OSS gaming, and it may well be the last if cheaters don't cut the crap.
  • I'm not against opensource, in fact, I love it. I was ecstatic when I heard Quake1 was opensourced. However, this does present a challenge for everyone who wants to play the game as it used to be. This is a problem with basically any opensourced game that really, to my knowledge, has yet to be addressed. The question is, how do you know whether that guy who no one can beat is really good at the game, or hacked a version that auto-aims, auto-fires, etc? I think it's good that someone is finally addressing the problem, at least for one game.

    Just my 2 cents.
  • i think this exposes a large hole in OSS development. not that open source development is bad; it's just that now that quake is open source, it has flaws. although the source release of quake may be good for developers trying something new or learning opengl development, and has opened a whole world of possibilities to them, it has completely destroyed quake1 culture. team fortress players, a segment of quake players that never really moved on to q2 or q3, have, after a bit of waning interest, finally got their death blow.

    -no clear leader [carmack is certainly not gonna continue development]
    -no standardization of versions
  • by Accipiter (8228) on Sunday December 26, 1999 @09:15AM (#1444013)
    There's always those select few assholes who have to ruin a Good Thing for everyone. That's not to say this problem with the Quake open sourcing wasn't foreseeable. The problem is that it's affecting more people than originally conceived.

    ID Software does a Good Thing, and releases the source code to Quake. Then, we have this group of people that change the code to cheat in Quake, making the general public think the Open Source community generally does things like this. Now ID has to play "damage control" and fix the problem, and the community also has to repair the damage done by the cheaters.

    Instead of looking for ways to cheat in the game, how about really giving the source a good look and maybe LEARN a couple of things. God knows ID programmers know what they're doing, and that code is bound to have a gem or two in there nobody thought of.

    -- Give him Head? Be a Beacon?

  • by Mongoose (8480) on Sunday December 26, 1999 @09:17AM (#1444015) Homepage
    I am currently making a client side interface that will work for all 4 versions of quake. I'm using my own network code on the client and server side of the games. How do you plan to use security by obscurity to block that?

    You _can't_ - face it, there are legitimate uses for client side bots. I'm very interested in keeping them alive. Proxy cheat bots are only a small percentage of client side bots - also, I've never seen a client side bot that really gave an advantage to the cheater.

    John, I was very disappointed when my favorite programmer brought up legal action threats to stop client side bots. John, we want to learn more about AI! Your games model real world physics so well and have so many variants... please don't do this!

    - Mongoose ( from *old planetquake quakeC bot days )
  • I don't think the solution that he proposes is that unsavory. The important parts of Quake the game can still be open; the closed part would only concern client/server negotiation. It's a sacrifice, true, but I'm not such a purist that I refuse to use a version of Quake that's closed in such a way. I mean, as long as I can make my rocket launcher look like a big twinkie, I'll be happy ; )

  • by CentrX (50629) on Sunday December 26, 1999 @09:21AM (#1444019)
    I was actually thinking about this some time ago and I concluded that the only way to have no one be able to cheat is with a closed-source system of checking. A good system would make it possible for the server to add 'known good clients', i.e., clients that they know don't cheat.
    Contrary to what some others are saying, this would not stifle open-source gaming as the only closed-source part would be the executable that checks to make sure there's no cheating. If only this were closed-source, the rest could be made open-source. I believe that this method would actually increase the proliferation of open-source games.

    Chris Hagar
  • by poopie (35416) on Sunday December 26, 1999 @09:23AM (#1444020) Journal
    I really don't think this is such a big deal... It's a game, for pete's sake! When people start earning salaries for their quake performance, then maybe we need to worry about this.

    It's been my experience that cheating in these games really isn't much fun (unless you're really stuck and about to give up on the whole game), and I don't think that others would want to play with cheaters unless they were cheating too, which would make the game fair again, I suppose.

    With the source, it's pretty trivial to undo just about any protection and spoof any values the program expects to find, so why bother?

    I fear that by putting in encryption and protection schemes, we'll be hindering potential development of new variations and play modes of quake and quake derivative games. If altered clients could be used to introduce a handicap factor into games, it might encourage gameplay between experienced and less experienced players.

    Why not add a HANDICAP FACTOR to the quake game, and have that be shown for each player when joining a new game? It's probably easier to deal with cheating by making it a defined feature than trying to protect against it ever happening.
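A handicap factor like this could be as simple as a server-side damage multiplier. A minimal sketch of the idea in Python; the function name, values, and scaling rule here are purely illustrative assumptions, not anything from Quake:

```python
# Hypothetical sketch: instead of hiding cheats, make handicaps an
# advertised server-side feature applied when damage is resolved.

def effective_damage(base_damage, attacker_handicap, victim_handicap):
    """Scale damage by each player's advertised handicap (1.0 = none).

    A handicap below 1.0 weakens the attacker's shots; a victim's
    handicap above 1.0 softens incoming hits.
    """
    scaled = base_damage * attacker_handicap / victim_handicap
    return max(0, round(scaled))

# A strong player at 0.5 handicap shooting a newcomer boosted to 2.0:
print(effective_damage(100, 0.5, 2.0))  # prints 25
```

Because both handicaps are shown to every player joining the game, the "cheat" becomes a visible, agreed-upon rule rather than something to detect.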
  • Perhaps this is an opportunity to come up with an open source solution to this problem. I'm not sure what that solution is, but if there is one, I feel certain that someone in the Free Software world will find it.
  • There's got to be some way to verify the Quake exe using hashes etc. Maybe not the whole program, but just a small "subprogram" that can be verified directly via the connection; then it verifies that the whole .exe is untampered (for modded games, just compare MD5s with each other) so that you know when the other guy has tampered with things.

    Now, I suppose that this is not really a whole lot better since the verification system can be bypassed, but at least it should provide for some control mechanism which can then be altered, or improved until it works.
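The hash check described above can be sketched with Python's hashlib; the "binaries" and digest set here are in-memory stand-ins, and, as the comment itself concedes, this only raises the bar rather than closing the hole:

```python
import hashlib

def md5_of_binary(data):
    """MD5 hex digest of a binary blob (e.g. the quake executable)."""
    return hashlib.md5(data).hexdigest()

def client_is_untampered(client_exe_bytes, known_good_digests):
    """Server-side check: does the client's binary match a known build?"""
    return md5_of_binary(client_exe_bytes) in known_good_digests

# Demo with made-up in-memory 'binaries' instead of real files:
pristine = b"quake v1.09 client code"
hacked = b"quake v1.09 client code + aimbot"
good = {md5_of_binary(pristine)}
print(client_is_untampered(pristine, good))  # True
print(client_is_untampered(hacked, good))    # False
```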
  • by winterstorm (13189) on Sunday December 26, 1999 @09:35AM (#1444030)
    Does anyone remember Netrek? The same problem happened with that game. The solution is to cryptographically bless binaries that don't have cheats, and allow people to configure their servers to reject all "non-blessed" clients.
  • by Python (1141) on Sunday December 26, 1999 @09:39AM (#1444032)
    Open Sourcing the Quake code is not what caused the cheating. If that were the case, no security flaw in any closed source product would ever come to light. And yet they do. I'm disappointed in slashdot for perpetuating such a myth. Opening the source does not create the security hole, it just makes it easier to find it and fix it.

    So, this is really yet another example, in a long and sordid history, of why building a security model that depends on the client to be honest in a client/server model is a Bad Idea[TM] (can anyone say rexd?). Closing the source would be nothing more than security through obscurity. I guess it's time to open that can of worms again and kick that dead horse around. There were cheats before they opened the source, there were cheats for Unreal, and I'm sure there are cheats for other games as well. Anytime you rely on the client exclusively to report valid values you shift trust into an untrusted space. The user's machine is not trusted, so why does it surprise anyone that someone would cheat? Why is it surprising that it's possible? It's possible whether you open or close the source. This bears repeating: trusting an untrusted system (the client) to report trusted values is not possible! That's the problem. Not the fact that the source is open, but the fact that the client is so implicitly trusted to report valid values.

    Hopefully the ID folks will realize this if they want to stop cheating: preventing cheating in the client alone is never going to work. It will of course take some more work on their part, but it's been done correctly before and I'm sure they can do it too. If they're smart they'll embrace this and work with the open source community to get it fixed.

  • I haven't looked into the details of the cheating. Hey, I haven't even read the code in question. However, there are protocols that allow for fair games of chance over a network. Basically, a set of encrypted values is passed from server to client. The client picks one and returns it to the server. The server uses it. Using public key encryption, each party can verify that the other hasn't cheated. Presumably, the server is trusted in this particular case, but a protocol like this even leaves the option of verifying all of the options against their encrypted counterparts to ensure that the server didn't use a rigged deck.

    This requires a significant amount of the control to be in the server. And to some extent for a workable game it requires that the server be trusted. All I can do in this case is point at a good reference, Applied Cryptography by Bruce Schneier, and ask whether such a solution is a viable alternative to locking up part of the source again.
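The kind of protocol described above can be approximated without full public-key machinery using hash commitments: the server publishes a hash of its value (plus a random nonce) before the client picks, and reveals both afterwards, so neither side can change its mind after the fact. A minimal sketch with illustrative value strings, simpler than the schemes in Applied Cryptography but the same basic idea:

```python
import hashlib
import secrets

def commit(value):
    """Server commits to a value; the digest is published, the nonce
    is kept secret until the reveal step."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + value).encode()).hexdigest()
    return digest, nonce

def verify(digest, nonce, value):
    """Client checks the revealed value against the earlier commitment."""
    return hashlib.sha256((nonce + value).encode()).hexdigest() == digest

digest, nonce = commit("rocket-spawn:deck-slot-7")
# ... client makes its pick, then the server reveals (nonce, value) ...
print(verify(digest, nonce, "rocket-spawn:deck-slot-7"))  # True
print(verify(digest, nonce, "rocket-spawn:deck-slot-1"))  # False: rigged
```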
  • No matter how "strong" Carmack's "anti-cheat" device is, it will be circumvented. Some joker will build a workalike to this complex proxy system that "tells the server what it wants to hear."

    A real solution would be to build an actual community. This word is bandied about quite a bit, so allow me to explain further.

    If people were positively identified by the server, they would be accountable to others on their server for their actions. I think that the Slashdot model would work very well in this situation, in fact probably better than it does on Slashdot.

    You could, of course, only allow "known" players to login. You could also allow an "unknown" player to login, but allow any "known" player to, say, kick him out and ban his IP for 20 min.

    This could be implemented as simply as a username and password, and as complexly as, say, you must send your username (player name) and the date and time signed by PGP.

    "Oh, yeah, I know pete-classic. Naw, he doesn't cheat. Watch out when he has the Railgun though!"

  • This is probably not a good answer now (slow networks) but maybe a solution, when everybody has faster connection, is to basically have servers that don't trust clients ?

    For example, if somebody makes a modification that allows unlimited ammo, a better fix would be to move the keep_ammo_count :) code into the server, not the client.

    Another example would be a modification that allowed invalid movement (ex: going through walls, running too fast, flying). This could be countered by the server monitoring movement and enforcing the proper laws of physics in the virtual world.

    Anyways, I think there might be other alternatives that keep the whole thing Open Sourced. After all, this (hacked clients) is not a new problem nor one exclusive to online gaming.

    Imagine if in your websites you relied on your JavaScript code to do all the data validation and integrity checks and you had none on the server side! It's like letting a user validate his/her credit card and your server just going "no problem"...

    Like I always tell my coworkers here when we do distributed apps, never trust the client (code that is), it can always get hacked or spoofed.
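A rough sketch of such a distrustful server, with an invented speed limit and ammo rule; nothing here is Quake's actual protocol, it just shows the server keeping authoritative state and rejecting impossible requests:

```python
# The server, not the client, is the record of truth for position and
# ammo. Constants and class names are illustrative assumptions.

MAX_SPEED = 400  # maximum legal distance per tick (made-up value)

class ServerPlayer:
    def __init__(self, x, y, ammo):
        self.x, self.y, self.ammo = x, y, ammo

    def request_move(self, new_x, new_y):
        """Allow the move only if it is within the speed limit."""
        dist = ((new_x - self.x) ** 2 + (new_y - self.y) ** 2) ** 0.5
        if dist > MAX_SPEED:
            return False  # teleport/speed hack: request ignored
        self.x, self.y = new_x, new_y
        return True

    def request_fire(self):
        """The server is the ammo counter of record."""
        if self.ammo <= 0:
            return False
        self.ammo -= 1
        return True

p = ServerPlayer(0, 0, ammo=1)
print(p.request_move(0, 300))   # True: legal move
print(p.request_move(0, 5000))  # False: too fast, rejected
print(p.request_fire())         # True: had one rocket
print(p.request_fire())         # False: client can't fake ammo
```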
  • by Tsuran (77127) on Sunday December 26, 1999 @09:46AM (#1444041)
    Sorry to say it, but this is going to be one of the main reasons that open source is most probably never going to take off as a major commercial model.

    Simply put: if I dedicate my time, my effort, my life to making anything - a game, a SETI@home client, a utility - I'm not going to want people to pervert my hard work. That's what it comes down to, really. Why should I, as a potential product designer, want to release my code if the potential exists for misuse? Suppose that they opened up the source to SETI@home. Then you'd most probably be able to figure out the protocol, and how it sends 'alert' messages. And I guarantee you, someone will start sending fake data to SETI, and it'll totally defeat the purpose of the collaboration.

    Now. If your argument is that in all of the community, there will be no bad apples who will misuse the source, that's naive, I'm sorry. No group can be completely without its bad element, and I would wager that most developers don't want to open themselves up to that element, however small. There's security in a closed-source model, and companies want security. I know I wouldn't want to risk my userbase, my name, and my job security for something like this, and I would imagine that most people who do this for a living would tend to agree.

    Do I think that an open-source model is necessarily always bad? No. It has its place. Is that place in the world of commercial business? I don't believe so, no. Companies make products for money. Cheaters, exploiters, and all of that will always be there, and will always be a danger to any company. By keeping their source private, the chance of this element exploiting their work falls dramatically. It's just a fact of economics.

    Well-written, thoughtful replies are, as always, welcomed. Flames are not.


  • by Blue Lang (13117) on Sunday December 26, 1999 @09:48AM (#1444044) Homepage
    Development is continuing - also, quakeforge is working closely with the quakeworld people, as well as some people from Loki games, to bring Q1 up to speed.

    They have a development roadmap, and the cheating issue is addressed. They have already managed to merge QW/Q1 into a single client, port it to SDL, etc, etc, etc.. It's rocking along, and quickly.

    To those of you saying "THIS is the problem with OSS.." - shut up and code. It isn't a problem, just a little bump in the road till things settle out. There are several solutions to it, including ones not mentioned by Carmack.

    I am fairly certain that this does not spell Doom (hehe) for OSS id software. Get lives, and get over it, in the face of what's already been accomplished, it's really not that big of a deal.

  • Netrek already addressed (and solved) this problem. Release the source to the clients under GPL or anything else - this is not a problem. HOWEVER, you use an encrypted key for each client (and each version) and the binary is compiled and encrypted. You can have servers in "open" mode where you can use untrusted clients, or closed, where only trusted (i.e., binary-only) clients are allowed. What's nice here is that the server operator can add keys for his newly-compiled client, ad nauseam. So if a binary ever gets hacked, simply yank the key and no more access. This requires that the server op be clued, but other than that it's a 1st class solution to this kind of hacking.

    Why try to deny bots? Just give 'em their own servers...

  • by jlehrbaum (114650) on Sunday December 26, 1999 @09:51AM (#1444047) Homepage
    Like john said in his .plan, there have always been ways to cheat. Transparent maps that are not detected despite server-side map checking, proxies that allow (albeit very poorly implemented) auto-aiming, glowskins that let people be seen through shadows easily, "spikes" that are built into the player model to let people know you are coming because they go through walls, and even proxies that allow a completely hacked up map... there are numerous other cheats and hacks that are all possible with the original quake, many of which are undetectable. The source code release lets people do much more obvious forms of cheating such as floating in the air, or zooming through the level like a cheetah on crack. But cheating has always been around.

    what is really different now? The real problem is that a) more people have been exposed to the possibility of cheating, and b) it is far more fun to cheat.
    In my 3 years of playing quake up till now, I haven't used a cheat for more than 2 minutes, and then only to test it out. I believe in keeping the game pure and skilled. But with the release of the source code, coders can play with a game they love. They can add special features, optimize code, and really just mess around. It's fun. It makes cheating a game all its own: what cool feature can YOU code in? It's not the same type of cheating that plagued competitive and non-competitive gaming in the past. This cheating isn't being used to win at all costs, but to mess around. Each successive build of quake becomes 'your' build, full of your customizations and features, not just something you download to get an edge.

    The important question is where will things go from here? In all reality, the ability to cheat has not suddenly appeared; it has always been here. The knowledge required to cheat has become mainstream, and has "come out of the closet" as it were. Will this rash of cheating continue, or is it merely a phase? Will it kill competitive match-play, or will the same people that cheated in competitions still do so, while everyone else plays by the rules? Only time will tell.
  • by color of static (16129) <> on Sunday December 26, 1999 @09:56AM (#1444055) Homepage Journal
    Closed source doesn't make any code less hackable. All it does is make a protocol that was never designed to resist a malicious client effective against those unwilling to put any effort into hacking.

    The real solution for this is to design the protocol so the client can only make requests to the server. Any time the client describes itself to the server, the things that can be described cannot be trusted. In this case a safer protocol is to have the client request motion. The server will then provide updated info back to the client. If you want the client to track objects, then you can cryptographically sign them, so they would be unique to the game session and non-repeatable. The crypto could use very small keys to keep the performance manageable, and the game exportable. 32 bits would probably be enough.
  • If opening the source of the client made multiplayer cheating possible, then it was possible before the source was open, just harder to implement. The obvious solution isn't really a fix for Quake, but care in the design of multiplayer games.

    In any multiplayer game, you have a server and at least two clients. Anything critical and cheatable should be on the server, anything computation intensive should be on the client. For example, the client determines the keyboard/mouse/joystick/whatever state, and sends it to the server. The server resolves the action, and sends a schematic of the situation to the client (health, ammo, layout of area, etc.). The client then handles the 3d rendering, sound mixing, and so forth. No amount of cheating on the client end (short of out of game attacks on the other's computers or networks) would affect any other players. Cheating could be done on the server end, but there will always be cheat-free servers available.

  • by Terje Mathisen (128806) on Sunday December 26, 1999 @10:06AM (#1444067)
    I discussed this with John Carmack back in May; the real problem is that it is absolutely impossible to make a completely cheatproof system. This is because it will always be possible for a cheater program to load the original program alongside itself, and then use this to reply to things like requests for an MD5 checksum of a random area of the executable. Closed source helps only in making it harder to reverse engineer the protocols used; it is no real solution. Terje
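The failure mode Terje describes is easy to demonstrate: whatever region the server challenges, a cheat loader that kept a pristine copy of the original binary can answer honestly on its behalf. A toy illustration with made-up in-memory stand-ins for the executable:

```python
import hashlib

# The server challenges a random byte range of the client binary, but
# the cheat answers by hashing the pristine copy it kept, not itself.

pristine = bytes(range(256)) * 64      # stand-in for the shipped exe
hacked = bytearray(pristine)
hacked[1000:1004] = b"HACK"            # the cheat's patch

def answer_challenge(binary, start, length):
    """MD5 of the requested region -- what the server asks for."""
    return hashlib.md5(bytes(binary)[start:start + length]).hexdigest()

start, length = 512, 128               # the server's random challenge
honest = answer_challenge(pristine, start, length)

# The cheat ignores its own patched image and hashes the pristine copy:
cheat_reply = answer_challenge(pristine, start, length)
print(cheat_reply == honest)           # True: the cheat passes anyway
```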
  • I was actually thinking about this some time ago and I concluded that the only way to have no one be able to cheat is with a closed-source system of checking.

    You know I'm sitting here thinking about this and I don't think this is so.

    That's one way to do it; however, another way would be to have all damage allocation from weapons, and all damage received to armor, health, etc., be done at the SERVER side and not the client. Offload all the cheatable stuff to the server; that way hacking the client is useless because the client isn't the one doing all the damage assessment.

    I know this would totally break the Quake protocol and would require enormous rewrites to both the quake client and server, but it's doable, and you can keep everything open source, the only problem now is people having cheat servers. :)

    It would probably increase lag as well, as the client would have to check with the server to see if it was allowed to do anything (i.e., walking [server wouldn't let client walk through walls], firing weapon [server would have to make sure client had enough ammo], etc.) but with broadband hitting almost everywhere now, maybe it won't be such a problem in the future.

    -- iCEBaLM
  • They should build a small scripting language into the code, and upon connecting to a server, a specialized authentication program could be downloaded off of the server. Ideally this would be a different authentication program, or a set of programs that are rotated. This program then would generate some sort of PGP-type signature of vital files, and return it to the server. The server would then look at the signature and see if it matches the signatures in its database, thus determining whether the client was a valid one or not. The scripting language would be limited on commands, to prevent any sort of abuse. Since the script sent by the server could be pseudo-randomly rotated, the client would never know exactly which response to send, if it were a hacked client with cheats.
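A stripped-down version of the rotating-check idea, where the downloadable "scripts" are reduced to (salt, byte-range) recipes chosen fresh per session so a hacked client can't hard-code one canned reply; a real system would ship actual code, and every name and value here is illustrative:

```python
import hashlib
import random

# Made-up stand-in for the client's data files:
client_files = b"pak0.pak contents ... progs.dat contents"

def run_check(files, salt, start, length):
    """What the downloaded check script would compute on the client."""
    return hashlib.sha256(salt + files[start:start + length]).hexdigest()

# The server's pool of check recipes, rotated between sessions:
check_pool = [(b"s1", 0, 8), (b"s2", 8, 16), (b"s3", 4, 12)]

rng = random.Random()                        # server's per-session pick
salt, start, length = rng.choice(check_pool)
server_expects = run_check(client_files, salt, start, length)
client_reply = run_check(client_files, salt, start, length)
print(client_reply == server_expects)        # True for an honest client
```

Because the client never knows which recipe is coming, a fixed replay fails; the approach still shares the weakness Terje's comment describes, since a cheat keeping pristine files can also answer every recipe.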
  • I have spent much of the last year developing systems/protocols for hostile client to trusted server connections. There are ways to do this, but it requires that the base protocol be designed with it in mind. I don't know the quake protocol, but I will try to describe a possible method.

    The client connects to the server with a request for a unique ID. What comes back is a two part ID, one public, one private. The client then makes universe change requests (movement of client in the universe, firing, etc...) with the pub ID, request, and a hash of the above with the priv ID. The server then can verify it's the same person that requested the ID (PKI can be used to send the initial ID back if snooping is a problem). The server sends back universe updates along with the verification. If you want, these can be signed with the priv ID.
    If you want to do object tracking then the objects could have separate signatures so they are unique and verifiable. Ammo could also be tracked with a sub object or something like that.

    Basically any rule that you don't want broken needs to be handled on the server in this model.

    Encryption could be cut down to a low level to prevent computational slowness and export problems. Even a mild algorithm would be acceptable so long as the key is secure until the end of the game against a single PC.

    If you want to do peer to peer, or move more handling back to the client, then one would have to look into one of the many blind poker algorithms, but it too should be doable.
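The two-part-ID scheme above can be sketched with HMACs. In this illustrative reading (not a real Quake protocol), the server derives the private ID from the public one with a master key, so it needn't store per-player state; a real design would also add sequence numbers against replay:

```python
import hashlib
import hmac

def issue_id(server_secret, session_no):
    """Server hands back (public ID, private ID) on connect."""
    pub = f"player-{session_no}"
    priv = hmac.new(server_secret, pub.encode(), hashlib.sha256).digest()
    return pub, priv

def sign_request(pub, priv, request):
    """Client signs each universe-change request with its private ID."""
    return hmac.new(priv, f"{pub}|{request}".encode(),
                    hashlib.sha256).hexdigest()

def server_verify(server_secret, pub, request, sig):
    """Server re-derives the private ID and checks the signature."""
    priv = hmac.new(server_secret, pub.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sign_request(pub, priv, request), sig)

secret = b"server master key"
pub, priv = issue_id(secret, 7)
sig = sign_request(pub, priv, "move:north")
print(server_verify(secret, pub, "move:north", sig))  # True
print(server_verify(secret, pub, "move:south", sig))  # False: forged
```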
  • Yes, there is a risk that, by making certain programs open-source, somebody could make use of the information made so available to, for example, commit fraud.

    However, it's not clear that this possibility exists with all software that is sold commercially - yes, you could perhaps modify a digitized photograph to show Bill Clinton or Newt Gingrich having sex with Scary Spice by using the GIMP, but you could, as far as I know, do the same with the closed-source Photoshop.

    If there were some mechanism to prevent tampering with digitized photographs, or to make such tampering detectable (e.g., some sort of digital signature), then there might be more of a risk with an open-source image editor that implements such a mechanism - but I don't know one way or the other whether no such mechanism can be implemented without keeping it secret. (And, no, that doesn't ipso facto mean that any company doing an image editor would be too afraid to do so - "I don't know one way or the other..." doesn't mean "nobody knows, it'll forever be a fear", it means "I don't know".)

    Furthermore, it won't necessarily be the case that the cost of the sort of fraud, etc. that could be committed with an open-source program (or any other source of openness, e.g. published protocols, published file formats, published algorithms; this isn't just a question of open source vs. closed source) will be deemed greater than the benefits of openness. Yes, there have been cases where that sort of fear has dominated - see, for example, the policy of some governments, including but not limited to the US government, towards freely-available encryption technology - but that doesn't prove conclusively that this sort of fear will, of necessity, be dominant.

    I.e., I've seen no evidence to suggest that the fear of misuse of a program must be so strong as to prevent any software developer from ever making their source available, so, whilst a more limited version of your conclusion might be valid, I see nothing to suggest that your quite sweeping conclusion ("this is going to be one of the main reasons that open source is most probably never going to take off as a major commercial model") is, of necessity, valid.

  • by chromatic (9471) on Sunday December 26, 1999 @10:15AM (#1444078) Homepage

    That's right. We all know that closed source projects like Diablo, Ultima Online, and IIS 4.0 are secure and uncrackable. Thank goodness for software like Windows 98 and Windows 95 which are immune to BackOrifice due to the superior protection of Security through Obscurity.

    Can you *imagine* if someone like Alan Cox or Theo De Raadt had access to the source code? I mean, he might spend upwards of two hours fixing the security holes. That is plainly unacceptable.

    It is a very good thing that reverse engineering and hex editing and asm disassembly are impossible and illegal, not to mention packet sniffing! Otherwise, our panacea of Ivory Tower software development might show some cracks.

    Now if we can just rid the world of Computer Science classes and books, we can all hold hands and dance around. Huzzah!


  • As a non-quake player, I can't say for sure what exactly a client reports to a server.
    Exactly how hard would it be for the server to be a little more intelligent? If a cheating person is shooting someone with a machine gun doing 50 points of damage per shot, I *think* it wouldn't be hard for the server to notice that the gun is doing too much damage. Maybe have the server know what damage each gun does, how much health a person should have, and how quickly a certain gun fires/recharges. In my thinking, I wouldn't assume that would be hard to do, but I'm always ready for corrections.
    If this was actually possible, perhaps a flag could be added to the server. Something like AllowPlayerCheat=On/Off ...If the server doesn't want cheaters, and it detects one, it can boot them off with a message of "Player 1 was cheating, and has been removed from the game"

    To me, that's a pretty simple solution, but I also assume it would seriously bump up the required bandwidth, and also bump lag up. Again, I'm not sure what info is already passed to the server, but I'm assuming it will pass something about hit/miss fires from a gun, or how much health a user has left to drain.

    In a scenario like this, I assume you would just now have to rely on servers set to not allow cheats, or, if they do, let people know. Anyone think of a way around that? I'm up for opinions, as this is pretty interesting.

    On a side note, I don't think this actually damages OSS, but shows how quickly people can find paths that could damage your hard-worked program, among other possible bugs...
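The sanity check suggested above might reduce to a server-side table of legal damage values, with a kick for anyone whose reported hits exceed them. The weapon names and numbers below are made up for illustration, not Quake's real table:

```python
# Server-side knowledge of what each gun is allowed to do per hit:
WEAPON_DAMAGE = {"machinegun": 8, "shotgun": 25, "rocket": 120}

def check_hit(weapon, reported_damage):
    """Return 'ok' if the damage is legal, 'cheat' if it's impossible."""
    legal = WEAPON_DAMAGE.get(weapon)
    if legal is None or reported_damage > legal:
        return "cheat"  # unknown weapon or impossible damage: boot them
    return "ok"

print(check_hit("machinegun", 8))   # ok
print(check_hit("machinegun", 50))  # cheat: 50 points per bullet
```

A server flag like the AllowPlayerCheat=On/Off the comment imagines would then just decide whether a "cheat" verdict leads to a kick or only a broadcast message.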
  • There really is no way at all to prevent hacked clients as long as the server trusts the clients. The only way for the server not to trust the clients would be to offload everything to the server except inputs, which makes the client effectively just a remote viewer. Of course this is obviously impossible because it would render the game absolutely unplayable. Everything is "cheatable" basically, except the inputs. Making a closed source proxy won't work either, since the proxy can just be hacked. It is a stop-gap measure, but I think it won't really work. If the closed source proxy relies simply on a digest, it is trivial for any cracker, not to mention most pedestrian programmers, to hack the proxy to return one of a list of known valid digests, or simply use mechanisms of the os to fool the proxy (point it to a valid copy, but make it run the hacked one). There really isn't any way to stop it at all, except to just rely on the honesty of the Quake gaming community, and give a big fat walloping kick and ban to assholes found cheating. - the Java Mozilla []
  • Unless the cheater is distributing his modified client to others, the GPL does not require him to release the source code to anyone. The distribute-the-source requirement isn't tied to the act of source modification -- it's tied to the act of distribution.
  • To play devil's advocate for a bit: it's the assholes like this that drive software development to new levels. The same thing is paralleled in many different areas:

    In cryptography: The people who propose new protocols depend on the people who break protocols to make their proposals robust. (Without those crackers we could still be using XOR to encrypt files).

    In Science: Science, at the most fundamental level, is about destroying the work of others. One takes a theory and tries to find places where it doesn't hold, thereby disproving it. Without this process, we would probably still insist the world is on the back of a tortoise.

    In nature: Nothing accelerates evolution like predators.

    Cheaters are nothing more than a virus in the open source community. We have no mechanisms of immunity against them. Now is a chance to prove our evolutionary fitness to the rest of the world. Either we adapt and survive these cheaters, or we die out until a better organism for developing software comes along.

    The beauty of open source lies in its evolution. Let's evolve.
  • Exactly. By closing the source, you simply make it harder to expose weaknesses to both the potential cheater and the potential developer/auditor. Who do you think will make the extra effort? Remember the supposed "Russian super computer" on
  • One potential solution is to have all keyboard/joystick/etc. input be sent to all the other clients before any of it is handled. (This is like the client/server solution mentioned earlier, but treats everyone as servers.) As long as all the user input is applied synchronously at each client, and each client has the same set of deterministic rules, the game will proceed consistently. If anyone cheats (in any way that causes a change in the actions/properties of the objects in the game) then the game will lose consistency. To check this, simply have each client checksum their data every once in a while. If someone has a bad checksum, throw them out of the game (by a vote of the clients). If someone fakes a checksum, then they can continue playing, but they won't be seeing the same game everyone else is. One advantage of this method is that it does allow modification of the game source, as long as everyone uses the same set of modifications.

    There is one game that attempts to use this mechanism (here []), but it is incomplete (mostly graphics issues currently). I'm not sure that this approach is viable in practice, but I think it works well in theory.
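The lockstep scheme described above can be sketched concretely. This is a toy model (all names invented, not from any real engine): every client applies the same inputs deterministically, periodically hashes its state, and a majority vote exposes the client whose local state has diverged.

```python
import hashlib
import json
from collections import Counter

class LockstepClient:
    """Toy deterministic simulation: every client applies the same inputs
    in the same order, so honest clients always agree on the state."""

    def __init__(self):
        self.state = {"players": {}}

    def apply_input(self, player, command):
        # Deterministic rule: a "move" command shifts the player by a fixed step.
        pos = self.state["players"].setdefault(player, [0, 0])
        if command == "move_right":
            pos[0] += 1
        elif command == "move_up":
            pos[1] += 1

    def checksum(self):
        # Hash the canonical state; any divergence (i.e., a cheat that
        # changed local state) produces a different digest.
        blob = json.dumps(self.state, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

def majority_vote(checksums):
    """Return the set of clients whose checksum disagrees with the majority."""
    common, _ = Counter(checksums.values()).most_common(1)[0]
    return {c for c, h in checksums.items() if h != common}

# Two honest clients and one that tampered with its local state.
a, b, cheat = LockstepClient(), LockstepClient(), LockstepClient()
for cl in (a, b, cheat):
    cl.apply_input("p1", "move_right")
cheat.state["players"]["p1"][0] += 100   # the cheat: teleporting

outliers = majority_vote({"a": a.checksum(), "b": b.checksum(), "cheat": cheat.checksum()})
print(outliers)  # {'cheat'}
```

A real game would checksum far more state and stagger the votes, but the detection principle is the same: a cheat that changes anything the simulation depends on changes the digest.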
  • by Effugas (2378) on Sunday December 26, 1999 @10:32AM (#1444100) Homepage
    No, no, no.

    Most, *not all*, but most client-side hacks work because the server trusts the client to provide state data regarding a separate client not under the same security/permissions context.

    For example--I shoot a rocket launcher at you, and the server lets me decide whether or not the rocket hits. It doesn't matter whether the system is open or closed source--this is a flaw. Give a dedicated opponent a day with TCPDump and rockets will be teleporting all over the place.

    Any server, whether it is a game server, an IP telephony gateway, or a simple web proxy, must be designed to accept from a client only content that legitimately originates within that client's own context.

    This is not an impossible endeavor. Starcraft, for instance, has binary modification software that changes unit commands. Even in a peer to peer two player game, the modifications work perfectly until they ask a unit to execute a command that unit cannot do. Then, the other client detects the cheat and the game is immediately cancelled.
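A peer-side legality check in the spirit of the Starcraft example might look like this sketch (unit types and commands are invented for illustration):

```python
# Toy legality check: each unit type has a fixed set of commands it can
# execute, and a command outside that set exposes the modified client
# immediately, cancelling the game.

LEGAL_COMMANDS = {
    "marine": {"move", "attack", "stim"},
    "worker": {"move", "gather", "build"},
}

def validate_command(unit_type, command):
    """Return True if the command is one this unit can legally execute."""
    return command in LEGAL_COMMANDS.get(unit_type, set())

def process_turn(commands):
    """Apply a list of (unit_type, command) pairs; cancel on the first illegal one."""
    for unit_type, command in commands:
        if not validate_command(unit_type, command):
            return f"game cancelled: illegal command {command!r} for {unit_type}"
    return "turn ok"

print(process_turn([("marine", "move"), ("worker", "gather")]))  # turn ok
print(process_turn([("worker", "attack")]))
```

Note this only catches commands that are outright impossible; a hack that issues legal commands with superhuman precision passes the check, which is the limitation the rest of this comment is about.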

    The immediate response, of course, is that this peer to peer arrangement prevents information hiding. If your client is always verifying that other clients aren't cheating, then you can always watch the incoming datastream to know what's going on. Therein lies the reason why peer to peer isn't a particularly good topology for competitive gaming--there's no server to restrict the visible dataflow to that which the given client should see.

    Interestingly enough, the most inevitable (and least fixable) hack involves changing not the game but the video card drivers. Metabyte, the dementedly gifted hackers that gave gamers the first multi-API stereovision solution (and the single-pixel-resolution-adjustment power for Voodoo 2's), had a single revision of their drivers out for one day that artificially forced transparency on all surfaces. They called it X-Ray--needless to say, it made shooting around corners quite a bit easier. It also got shouted out of existence rather quickly ;-)

    Reminiscent of Crypto, ain't it? Where's your trustable end point?

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
  • I never thought I'd say this, but maybe that Scheme class I just got done taking actually did me some good. ;) You see, the instructor pounded the idea of abstraction layers into our heads over and over and over again. Never ever let the subprocess (i.e., the client) of a program have access to global variables if it doesn't have to. Part of this is already there; the client just passes values on to the server. The server code is what needs to be changed. If it could verify every move as being legal or illegal, this problem could be fixed. This would mean that the central server would have a lot more work to do, especially for large multiplayer games, but these days I think most mid-range machines should be able to handle it. You basically just have the client do everything it does already, but change the server so that any move the client tries to make is checked. Does the player still have this much ammo? Is it a valid move for this player to try to go through this wall? Can the gun this player fired do this much damage? etc. If the client tries to make an invalid move, it means instant death. If both the client and server are working the way they are supposed to, you shouldn't need to sync, because they will be counting things like bullets the same way, so it shouldn't really be any more network overhead.

    If you wanted to get fancy, you could create a mechanism so that when a client logs in, it receives a set of variables as the "ruleset" for what everything does (i.e., how much damage a specific type of ammo does, how fast bullets move, etc.). In that ruleset, if something isn't defined, it just uses the default, taking up less bandwidth.

    In my Scheme class, we actually just wrote a basic AI game player to play a game that's kind of a cross between 21 and cross-four. It was implemented very similarly to the above, but on a much simpler scale. I think it would work for something like Quake, but I'm still not sure what the overhead on the server would be to check everything that the player is trying to do. It also wouldn't eliminate the problems of night-vision-type stuff. Maybe we could implement a system where, in the shadows, the server reports a 50% chance of "seeing" a client there, and it's the job of the client to render that in whatever way it can do best. So if someone's client always draws a player who is only there with 50% probability, and that player fires, he tells other people where he is, and they know where to shoot.
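The server-side checking described in this comment could be sketched roughly as follows (a toy model with invented names, not Quake's actual server code): the server holds the authoritative ammo count and simply refuses a fire request the rules don't allow.

```python
# Minimal sketch: the server keeps the authoritative copy of each
# player's state and rejects any request the rules don't allow,
# regardless of what the client claims.

class GameServer:
    def __init__(self):
        self.ammo = {}

    def spawn(self, player, ammo):
        self.ammo[player] = ammo

    def handle_fire(self, player):
        # The client may claim infinite ammo; the server doesn't care.
        if self.ammo.get(player, 0) <= 0:
            return "rejected"          # or flag the player as a cheater
        self.ammo[player] -= 1
        return "fired"

server = GameServer()
server.spawn("p1", ammo=1)
print(server.handle_fire("p1"))  # fired
print(server.handle_fire("p1"))  # rejected
```

A hacked client can still *display* unlimited ammo locally, but to every other player the second shot never happened.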

  • Does anyone remember Netrek? The same problem happened with that game. The solution is to cryptographically bless binaries that don't have cheats, and allow people to configure their servers to reject all "non-blessed" clients.

    But doesn't that defeat the whole point of releasing the Quake source? I mean, how many people are going to make really cool Quake modifications if they have to jump through hoops to get their code signed so that it can actually be used?

    I don't know if cheating is really such a big problem... Does anyone really know? It seems to me that there's no point to playing the game if you're going to cheat. It is, after all, just a game. "You're only cheating yourself". I doubt there's much prestige in the title "World's Best Quake Player When Cheating". And if you get used to cheats, you'll just suck all the more when you play at LAN parties where your cheats are unavailable.

  • Proxy cheat bots are only a small percentage of client side bots - also I've never seen a client side bot that really gave an advantage to the cheater.

    Wrong. VERY wrong. Proxy cheats, a low ping, and *some* skill (this *some* meaning being able to strafe and dodge incoming fire) will net you a game (that ends at 150 points) where you win by sometimes 60 to 100 points against decent players. I have used the "stooge" bot before on public CTF servers (not trying to seem good, playing only under an assumed name, just testing it). You just see your weapon firing, and you don't know at whom. You could be underwater and it will just launch at the nearest person with the most points. If you are a decent player you can rack up several hundred points in only a few minutes (if you are running the flag as well).

    Most of the proxy bots depend on ping. You can tell the bot to use some sort of restraint to compensate for high pings, but if you are on DSL or cable, the bot is unstoppable. They NEVER miss. The only way around it is to circle strafe the bot and hope it can't keep up. It normally can.
  • A "blessed" client is one that has been approved by some group of reviewers and then digitally signed. If anyone alters the binary in any way then the signature is invalid and the server can detect this.

    Here is the idea in more detail:

    You have the source code available so that people can play with it, improve it, check it for bugs etc. But the problem is that people program their own version of the client that cheats. To prevent cheating, people who run a server can choose to only allow clients whose binaries have been digitally signed to connect. This means you'd have a group of people setup to review the source code of the client and if it contains no cheat they would compile the code, digitally sign the binary and people using the "blessed" (digitally signed) binary wouldn't be rejected by servers.

    Of course you could still run a server that doesn't care if clients are blessed or not. In fact, in the Netrek days that was kind of fun sometimes.

    This gives you the best of both worlds, open-source software that is free to evolve, and community based servers that have a system to prevent cheating.
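As a rough sketch of the blessing idea (names invented; a real deployment would use public-key signatures so servers never hold a secret, but an HMAC keeps the example short and stdlib-only):

```python
import hashlib
import hmac

# Sketch of "blessing": the review group publishes a MAC of each approved
# binary, computed with a key only they and trusting servers hold. Any
# byte-level modification of the binary invalidates the blessing.

BLESSING_KEY = b"review-group-secret"   # hypothetical shared key

def bless(binary: bytes) -> str:
    return hmac.new(BLESSING_KEY, binary, hashlib.sha256).hexdigest()

def server_accepts(binary: bytes, claimed_blessing: str) -> bool:
    return hmac.compare_digest(bless(binary), claimed_blessing)

clean = b"official client build"
blessing = bless(clean)

print(server_accepts(clean, blessing))                   # True
print(server_accepts(b"hacked client build", blessing))  # False
```

The catch, raised elsewhere in this thread, is that the server only ever sees what the client *reports*: a hacked client can report the digest of a clean binary it keeps on disk. The blessing scheme raises the bar; it doesn't close the hole.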

  • by / (33804) on Sunday December 26, 1999 @10:37AM (#1444107)
    As others have illustrated, it's not the open-source model but rather this particular client-server model that's at fault. Let's see what we can salvage out of the existing model:

    Ideally, the server would check all of the client's requests to see whether they comply with the laws of physics, but that is unfortunately unworkable with today's hardware and bandwidth. It is possible to go half-way on this one, though.

    If the server simply audits the client's behavior, that is, verifies the client's requests at random intervals, fair play can be ensured. Remember: all it takes is one bad request for the client to be banned as a cheater. If the auditing is done at random intervals, then the client can't adapt by timing its invalid requests to dodge the checks.

    All that's left is for someone to code a server to do this, and then for people to play on only trusted servers. The need for trust can't be eliminated, but it can be lodged solely in the server, where it belongs.
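A sketch of the random-audit idea (the probabilities and API are invented for illustration): because each request is audited with some fixed probability, a client that keeps sending invalid requests is caught with probability approaching one as the session goes on.

```python
import random

# The server fully verifies only a random subset of client requests, so
# a cheater can't predict which bad request will get checked. A client
# sending k invalid requests survives with probability (1 - p)^k, which
# drops geometrically.

def audit_session(requests_valid, audit_prob=0.2, rng=None):
    """Return 'banned' if any audited request is invalid, else 'ok'.

    requests_valid: list of booleans, True for a legal request.
    """
    rng = rng or random.Random()
    for valid in requests_valid:
        if rng.random() < audit_prob and not valid:
            return "banned"   # one bad request is enough
    return "ok"

rng = random.Random(42)
honest = audit_session([True] * 1000, rng=rng)
cheater = audit_session([True, False] * 500, rng=rng)
print(honest, cheater)
```

With 500 invalid requests at a 20% audit rate, the chance of escaping is about 0.8^500, i.e., effectively zero, while an honest client is never flagged.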
  • I can think of a number of ways around that one, but if the motion (rotation included) is controlled by the server then all the client could do is implement a control system to track the target.

    The better solution for that may be logging and fingerprinting. While I haven't seen this data, in almost all other data I've seen you can easily tell the difference between human and machine control signals. Develop a fingerprint for control signals. If a stream looks too much like control system output, send it spoiler data that would be out of control bounds. Or you can just log them off :-).

    While I realize this is a record/record-player problem (as in GEB), the breaking record can be made unattractively hard to make compared to the effort required of the player.
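The fingerprinting idea might be sketched like this (thresholds and signals invented purely for illustration): human aim wanders, while a tracking bot's error collapses to nearly zero with almost no jitter.

```python
import statistics

# Flag a stream of per-frame aim errors (in degrees) as machine-like if
# its spread is implausibly low for a human. Real detectors would look
# at many more features (reaction times, snap angles, correlation with
# target motion), but low variance is the classic tell.

def looks_like_bot(aim_errors, jitter_floor=0.05):
    """Return True if the aim-error stream has sub-human jitter."""
    return statistics.pstdev(aim_errors) < jitter_floor

human = [3.1, -2.4, 1.7, -0.9, 2.6, -3.3, 0.8, -1.5]
bot   = [0.01, 0.00, 0.02, 0.01, 0.00, 0.01, 0.02, 0.01]

print(looks_like_bot(human), looks_like_bot(bot))  # False True
```

As the comment above notes, this is an arms race: a bot can inject artificial jitter, and a sufficiently skilled human can trip the detector, which is exactly the record/record-player problem.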
  • IMHO Quake I is still the number one realtime multiplaying game.
    Okay, the pure deathmatch is quite stupid, but it still beats Q2, Q3A, Unreal and all the others. Quake I may not be as pretty as the games listed above, but the Q1 feel and atmosphere are something no other game has been able to achieve.
    And because it's a pretty old game, you don't need the latest 3D accelerator that supports the transparent-bumping-flare-with-5th-reality-and-a-kitchen-sink effect to play it.
    Overall game-play is just something that they don't do anymore, which is a shame. For example, no delays when changing weapons - not very realistic, but fun, easy and efficient.

    And not to mention those great mods like CTF and TeamFortress.
    Playing Q2 and Half-Life ports of these mods just doesn't raise the same feeling as the originals do.
    For example, when I first tried Team Fortress Classic for Half-Life, I thought it was a bad joke by people with a sick sense of humour. Playing TFC just felt horrible compared to the original TF. The great balance between classes was ruined, and smaller versions of great TF maps with ugly textures almost made me puke. Never again, thanks.

    Apparently quite a few other people think this way, too. At least here in Finland playing Q1/QW is still quite popular. To check out the state of the Q1/QW scene, just join some major Finnish QW server like Sonera's [] for deathmatch or for TF, or some other server [].
    From 10AM to 10PM GMT+00 you may even encounter some trouble joining a game on the most popular servers, since at that time they are often full. Around 03AM GMT+00 they are all empty, though.

    Quake was the first well-working action game with multiplaying using IPv4.
    Quake will be the last well-working action game with multiplaying using IPv4.
    Since we have the source, we can add IPv6 support.
    You can be an Internet2 user and still you can play Quake I.
    Quake One will never die.
  • Given enough determination, closed source can be patched for cheating as well. I'm guessing that it's probably been done before, and likely went undetected. Opening the source may make cheating a bit easier, but that also makes it all the more lame.

    I suppose a closed stub can help to limit the problem by reducing the number of people qualified to cheat, but that still leaves a lot of people.

  • I have to disagree. I don't think the thousand-eyes theory applies here. Leave a comment - I want to hear what /. thinks. Remember, fair competition is important to Quakers.
  • There really is no way at all to prevent hacked clients as long as the server trusts the clients.

    Well thats what I'm talking about. Since the clients are open source why should the server trust all the clients? The server should rightly assume all clients are hacked.

    The only way for the server not to trust the clients would be to offload everything to the server except inputs, which makes the client effectively just a remote viewer. Of course this is obviously impossible because it would render the game absolutely unplayable. Everything is "cheatable" basically, except the inputs.

    Well, all the maps, skins, etc would be local, the client would be doing all graphics rendering. I'm not suggesting something like X where the entire graphics output is displayed over the network. More along the lines of IRC, where the server checks everything the client does and "allows" it to do operations, or disallows it to do operations (can't join a channel you're banned from, etc).

    Such as, say you want to fire your shotgun at AC_QuakeWeenee but you don't have any shells left. The server doesn't know you have a hacked client which gives you unlimited ammo, but it doesn't matter: the server would keep track of the client's ammo and would disallow it from firing (maybe not on the client's machine, but to everyone else, the cheater never fired).

    As for damage, let the server deal that out too, I mean come on, this is what servers are FOR.

    There's absolutely no reason why this stuff shouldn't be offloaded to the servers. If you only have to worry about cheat servers, then that decreases the overall number of people who can possibly cheat, and if you find one, just switch servers.

    I know this isn't perfect. Someone brought up the idea of hacking the client to make all enemies glow red regardless of lighting, or other visual hacks. Sure, you can do this, but it's a lot less than being able to increase your damage or give yourself unlimited ammo or such.

    -- iCEBaLM
  • I think that one thing that the open-source development model has shown recently is that it can adapt and meet the needs of many complex situations. So in this case we'll need a way for developers to share their patches and submit them for inclusion in the "blessed" binaries. Isn't this the same way the Linux kernel is developed? Doesn't the Apache project already deal with this tough situation with many developers, many patch submissions, and only a few "official" releases.

    People can still created unblessed binaries, and people can still run servers that allow any client, blessed or not, to connect. This method just lets the people that are organizing games have a way to ensure cheating won't take place if they want to.

  • How about this: instead of having all damage/movement/ammo/armor etc. handled by the server, what if the server just had a way of checking as the game was being played, then labeling someone as a cheater?

    Well as far as I know all movement is already handled by the server (this is why on slow links you get an "ice skating" effect), the damage and everything has to go through the server anyways as the clients are not directly linked, so why can't the server correct the values?

    I like your idea coupled with mine, you'd see everyone in a mad rush to frag the cheater :)

    -- iCEBaLM
  • by datazone (5048) on Sunday December 26, 1999 @10:53AM (#1444125) Journal
    I see you have never played Diablo on Battle.net. At first the cheating wasn't so bad; then it became disgusting. Trust me, people will cheat, just because they can. And the code to Diablo was not available, so closing the source or opening it does not always mean you have a cheat-free game. Someone, somewhere will find a way to cheat if they really want to, but that does not mean you have to stop trying to prevent the cheaters.
  • Why should I, as a potential product designer, want to release my code if the potential exists for misuse?

    You are going to release your code one way or the other, regardless. If you release the source code under a generous, open source license, then everybody will benefit. If you release just the binary, with a restrictive license, then only those willing to ignore your license, break the law, and reverse engineer your program will benefit. (And if you don't release anything, nobody benefits.)

    Security through obscurity never works. If your argument is that not releasing the source code is a serious roadblock to the crackers in the world, that is naive, I'm sorry. :-)
  • "Such as, say you want to fire your shotgun at AC_QuakeWeenee but you don't have any shells left. The server doesn't know you have a hacked client which gives you unlimited ammo, but it doesn't matter: the server would keep track of the client's ammo and would disallow it from firing (maybe not on the client's machine, but to everyone else, the cheater never fired)."

    I still think this wouldn't work in real life because of latency issues. Remember, clients like QuakeWorld do all sorts of prediction. To make the game seem smoother, they automatically respond to your actions on your screen before syncing with the server. Imagine if you had to wait to see your shotgun fire until a round trip was made from your Ctrl keypress to the server and back so it could determine if that was a valid thing to do. Of course, you could let clients go ahead and display actions regardless of the server's decision to honor them... that may work.
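One common middle ground, sketched here with invented names (not QuakeWorld's actual code), is exactly that last suggestion: display the predicted action immediately, and roll it back if the server later rejects it.

```python
# Prediction plus reconciliation: the client shows the action right
# away, the server rules on it later, and the client rolls back any
# action the server denies.

class PredictingClient:
    def __init__(self):
        self.displayed = []   # what this player sees on screen
        self.pending = []     # actions awaiting a server verdict

    def fire(self, seq):
        # Predict: show the shot immediately, before the round trip.
        self.displayed.append(("fire", seq))
        self.pending.append(seq)

    def on_server_verdict(self, seq, accepted):
        self.pending.remove(seq)
        if not accepted:
            # Roll back the mispredicted shot.
            self.displayed.remove(("fire", seq))

client = PredictingClient()
client.fire(seq=1)
client.fire(seq=2)                     # client thinks it can; server will deny
client.on_server_verdict(1, accepted=True)
client.on_server_verdict(2, accepted=False)
print(client.displayed)                # [('fire', 1)]
```

The occasional rollback looks like a small glitch to the shooter, but everyone else only ever sees server-approved actions, so the latency cost of server authority is hidden.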

    Also, as you mention, if the user is in control of the client, and the client "knows" everything about the game, there is absolutely nothing stopping somebody from hacking a client to help themselves without touching the protocol at all. Making enemies glow red, automatically dodging rockets...client bots are designed around this very thing...they are allowed to "know everything". There is going to be no stopping that even if things are offloaded to the server. - the Java Mozilla []
  • I agree that imposing some convoluted (and ultimately impotent) form of security might just stunt development. Keep it open and keep it simple. People will always cheat and it won't be possible to stop them, even with proprietary code. It should be pretty easy to spot cheaters, and if it is not, then is it really a problem? So let some guy see through walls as long as he's not disrupting gameplay. The moment he does something shady to disrupt gameplay, off he goes. - the Java Mozilla []
  • Even worse, if modified clients can't connect, then what's the point of open sourcing something???

  • by Hard_Code (49548) on Sunday December 26, 1999 @11:11AM (#1444138)
    The problem, though, is not invalid behavior. Invalid behavior can be shoved to the server and verified. What is a bigger problem is valid, but highly improbable behavior. For instance, since my client knows everything about the state of the world, I can program valid, but highly improbable inputs that allow me to dodge most anything, simply because I /know/ the trajectories of everything. Now is this legal? It is /possible/ that I could really have the skill to do this... but very improbable.

    As long as the client knows everything about the world, these sorts of exploits will be possible. I think the current protocols maintain a state in the client and then sync that state frequently. The client "knows" the state, though. The only option is for the client /not/ to know the state of the world, but only the portion it can perceive. But since the set of changes from one state to another is smaller than the set of all possible states, I think it has been easier to do the "push-state-and-sync" method rather than "redefine-state-every-time". - the Java Mozilla []
  • I still think this wouldn't work in real life because of latency issues. Remember, clients like QuakeWorld do all sorts of prediction. To make the game seem smoother, they automatically respond to your actions on your screen before syncing with the server. Imagine if you had to wait to see your shotgun fire until a round trip was made from your Ctrl keypress to the server and back so it could determine if that was a valid thing to do.

    No, no, that's why I said "maybe not on the client's machine, but to everyone else, the cheater never fired".

    The client would see himself firing, HOWEVER, once that command got to the server, the server would check it to make sure he had enough ammo, if not, drop the packet and no one else sees him fire, if he has enough ammo, broadcast it to everyone else, deal out damage if necessary, etc.

    Also, as you mention, if the user is in control of the client, and the client "knows" everything about the game, there is absolutely nothing stopping somebody from hacking a client to help themselves without touching the protocol at all. Making enemies glow red, automatically dodging rockets...client bots are designed around this very thing...they are allowed to "know everything". There is going to be no stopping that even if things are offloaded to the server.

    Yes, this is a problem, but all this can already be done in mods without having to hack the client, in Quake anyways. The trick is just letting the client know enough so it can play the game, and no more. I didn't claim to know all the answers, just a partial solution which, IMHO, should have been done ANYWAYS.

    -- iCEBaLM
  • You're absolutely correct that "game security" should be enforced by the server, and the server API should disallow any request from the client that would violate the rules.

    But there's a huge practical problem in implementing that. When rule processing is done on the server, the client must wait for the server to process each rule. Even if you have a lightning fast network connection, eventually relativity limits the speed at which that sort of communication can travel. (Congrats, Al!)

    For example: Let's say player A has some sort of invisibility power turned on. (I know very little about Quake, so I'm speaking generically.) Ideally, the server will not report player A's position to any other client, since the other players aren't supposed to know. But what happens when player A steps right in front of player B, turns off his invisibility, and starts shooting? Player B's client now needs to download all of player A's properties from the server. (Maybe even custom textures, sounds, or other bandwidth-intensive data.) And the client needs to do this fast enough to seem instantaneous to player B.

    That's generally not possible, and that's why network games often need to place some trust in their clients.
  • by John Carmack (101025) on Sunday December 26, 1999 @11:24AM (#1444146)
    First, the Quake architecture of (relatively) dumb clients connected to an authoritative server prevents the egregious cheating possible in some games ("I say you are dead now!", "I say I have infinite ammo!").

    For the most part, a cheating client can't make their character do anything that couldn't happen as a result of normal game interaction.

    The cheating clients/proxies focus on two main areas -- giving the player more information than they should have, and performing actions more skillfully.

    The "more information" part can take a number of forms. A relatively harmless one is adding timers for items and powerups. Good players will track a lot of that in their heads, but a simple program can "remind" players of it.

    Media cheating provides more information. Changing all the other player skins to bright white and removing all the shadows from a level give players an advantage not within the spirit of the game. Some would say cranking your screen brightness and gamma way up is one step on that path.

    More advanced clients can make available information that is not normally visible at all. The server sends over all of the entities in the potentially visible set, because the client can move around a fair amount between updates. This means that the client is often aware of the locations of players that are around corners. A proxy can display this information in a "scanner window". The server could be changed to only send over clients actually visible, but that would result in lots of players blinking in and out as you move around or turn rapidly.
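That trade-off could be sketched as follows (a 1-D toy with invented names, not the actual Quake visibility code): the server filters what it sends, but keeps a slack margin beyond strict visibility so entities don't blink in and out between updates.

```python
# The server sends only entities within the viewer's visible range plus
# a slack margin, so a player rounding a corner between updates is
# already present in the client's world. The slack is exactly what a
# "scanner window" proxy exploits: it reveals entities the player
# can't legitimately see yet.

def entities_to_send(viewer, entities, visible_dist=50, slack=20):
    """Return the names of entities within visible_dist plus slack."""
    sent = []
    for name, pos in entities.items():
        dist = abs(pos - viewer)   # 1-D stand-in for real visibility tests
        if dist <= visible_dist + slack:
            sent.append(name)
    return sent

entities = {"near": 30, "around_corner": 60, "far": 200}
print(entities_to_send(viewer=0, entities=entities))  # ['near', 'around_corner']
```

Shrinking the slack to zero closes the information leak but buys the popping artifacts described above; the leak is a deliberate smoothness/security trade, not an oversight.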

    The worst cheats are the aim bots. In addition to providing more information, they override the player's commands to aim and fire with very high accuracy. The player usually "drives" around the level, and the program aims and shoots for them. This is usually extremely devastating and does ruin the game for most people.

    There are many possible countermeasures.

    There are server-side countermeasures that look for sequences of moves that are likely to be bot-generated and not human-generated, but that is an arms race that will end with skilled human players eventually getting identified as subtle bots.

    Media cheats can be protected by various checksums, as we do in Q3 with the sv_pure option. This is only effective if the network protocol is not compromised, because otherwise a proxy can tell the client that its hacked media are actually ok.
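An sv_pure-style media check might be sketched like this (file names and contents invented); note the caveat above still applies -- a compromised client or proxy can simply report the expected digests:

```python
import hashlib

# The server knows the digests of the official media files and refuses
# clients whose reported digests differ, e.g. a brightened player skin
# or a shadow-free level.

OFFICIAL = {
    "player_skin.pcx": hashlib.sha256(b"official skin bytes").hexdigest(),
}

def client_report(files):
    """What an honest client sends: digests of the media it actually loaded."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def server_allows(report):
    return all(report.get(name) == digest for name, digest in OFFICIAL.items())

print(server_allows(client_report({"player_skin.pcx": b"official skin bytes"})))  # True
print(server_allows(client_report({"player_skin.pcx": b"bright white skin"})))    # False
```

The check is only as trustworthy as the code computing it: a hacked client can hash a clean copy of the media while rendering the modified copy, which is why this catches casual cheats rather than determined ones.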

    If the network protocol is not known, then the extra-information cheats generally can't happen unless you can hack the client source.

    Q3 performs various bits of encryption on the network protocol, but that is only relying on security through obscurity, and a sufficiently patient person with a disassembler can eventually backtrack what is happening. If only they would find something more useful to spend their time on...

    With an open source client, the network communication protocol is right there in the open, so any encryption would be futile.

    Any attempt at having the client verify itself isn't going to work out, because a cheating client can just always tell you what you want to hear. People have mentioned Netrek several times, but I don't see how a completely open source program can keep someone from just reporting what it is supposed to for a while (perhaps using a "real" copy to generate whatever digests are asked for), then switching to new methods. Anyone care to elaborate?

    I think a reasonable plan is to modify QW so that to play in "competition mode", it would have to be launched by a separate closed-source program that does all sorts of encryption and verification of the environment. If it just verifies the client, it would prevent the trivial modified client scanners and aim bots. It could verify the media data to prevent media information cheating. To prevent proxy information cheating and aim bots, it would have to encrypt the entire data stream, not just the connection process. That might have negative consequences on latency unless the encrypter is somehow able to be in the same address space as the verified client or scheduling can be tweaked enough to force task switches right after sends.

    In the end, it is just a matter of making it more difficult for the cheaters. If all it takes is editing and recompiling a file, lots of people will cheat. This is indeed a disadvantage of open source games. If they have to grovel over huge network dumps and disassemblies to hack a protocol, a smaller number of cheats will be available.

    Even if the programs were completely guaranteed secure (I haven't been convinced that is possible even in theory), an aim bot could be implemented at the device driver level.

    It would be a lot more work, but a program could be implemented that intercepts the video driver, the mouse driver, and the keyboard driver, and does bot calculations completely from that.

    Kind of sucks, doesn't it?

    John Carmack

  • Since the script sent by the server could be pseudo-randomly rotated, the client would never know exactly which response to send, if it were a hacked client with cheats.

    The problem is, the script would be running in a client-controlled environment. It wouldn't be hard to make the script see what it expects to see by modeling the process/file image of an unhacked game. If the model is complete, it won't matter what the script does.

  • No matter how "strong" Carmack's "anti-cheat" device is, it will be circumvented. Some joker will build a workalike to this complex proxy system that "tells the server what it wants to hear."

    A real solution would be to build an actual community.

    Yes, this is absolutely right! The problem is that software can never be trusted: only people can be trusted. Take the problem back to the actual source.

    The only way to really fix this problem, rather than simply layering more obscurity onto it, is to design a system where you actually know the people you are playing with (or at least know them pseudonymously []), and trust them not to cheat.

    You can cheat at cards, too. This is no different.

    To the folks who think that simply hashing the binaries can solve this: who's to say that my client reports back the hashes from the binary that is actually running? This is the ``copy protection'' problem all over again, it simply doesn't work.

  • They were adequate in a trusted environment, and were replaced with ssh after the Internet became a bit less "trusted". The same thing will happen to game protocols -- they will be replaced with versions that keep "world" integrity even if clients are hostile. And since this still allows cheating by giving the player more information than he normally would have (for example, by making things transparent), more advanced future servers will have to limit the information every client receives to only the things that player is able to see -- but this will benefit the game as a whole, because it reduces the lag and the amount of calculation in the clients' 3d engines.

    The kinds of "cheating" that will always remain possible will thus become limited to client-side "automation" (scripts that determine parts of character's behavior, information keepers,...), however those things can be legitimized -- they require skills and creativity to be used, so the advantage won't be "unfair".

  • by lorimer (67017) on Sunday December 26, 1999 @11:46AM (#1444167)

    As several others have pointed out, Netrek solved this problem a LONG time ago. I'm responding where I am so that this post gets, hopefully, seen by everyone who hasn't read yet, so we don't get any more vague, unclueful debate on this.

    The solution is very simple. ID compiles a 'vanilla blessed' server. ID compiles a 'vanilla blessed' client. They create an encrypted binary key for the 'blessed' client, based on the client binary itself. They distribute this key with the vanilla server. They allow server gods to add any additional compiled keys they want - and to turn off or on whether key checking is used.

    Now, every single server will be able to be accessed by the vanilla blessed client, no matter what. It all works out of the box. Turn on key checking, and no hacked binaries or recompiled clients will work on your server. Want to make a mod? Compile your modified client binary and distribute a matching encrypted server key for it. Server gods add your key if they like your client. It's that simple. If you want to run a "chaos" server, turn off key checking. Anyone can come in and do what they want - and THAT is often pretty fun.

    It works great. People have been trying, and failing, to make 'borg' clients for Netrek for quite a long time now. There are some very good borgs that used to play on the Chaos servers. But they don't and CAN'T get into the vanilla servers.
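The Netrek-style scheme above can be sketched in a few lines. This is a toy illustration only -- the names and the use of a plain SHA-256 digest are my own, not Netrek's actual key mechanism -- and later posts in this thread explain why any check the client computes about itself can ultimately be spoofed:

```python
import hashlib

# Digests of client builds the server operator has "blessed".
# In practice these would ship with the vanilla server or be added
# by the server god for approved mods.
BLESSED_DIGESTS = set()

def bless(client_binary: bytes) -> str:
    """Register a client build as approved and return its digest."""
    digest = hashlib.sha256(client_binary).hexdigest()
    BLESSED_DIGESTS.add(digest)
    return digest

def check_client(client_binary: bytes, key_checking: bool = True) -> bool:
    """Admit a connecting client if key checking is off, or if its
    binary digest matches one the server operator has blessed."""
    if not key_checking:
        return True  # "chaos" server: anyone can join
    return hashlib.sha256(client_binary).hexdigest() in BLESSED_DIGESTS

vanilla = b"...vanilla blessed client binary..."
hacked = b"...recompiled client with a borg..."
bless(vanilla)
assert check_client(vanilla)                        # vanilla works everywhere
assert not check_client(hacked)                     # hacked build is rejected
assert check_client(hacked, key_checking=False)     # chaos server lets it in
```

The `key_checking` flag corresponds to the server god's "turn off or on whether key checking is used" option described above.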
  • I don't understand your argument that "There is absolutely NO MEANINGFUL WAY for the server to verify the blessedness of the client." The obvious way to avoid booby-trapped DLLs is to statically compile the binaries. Yes, they will be big on the hard disk, but it's not like Quake isn't big already. As an alternative, the digital signature that "blesses" the client could apply to the client itself and all the libraries on the system too; you can digitally sign anything. There may have to be quite a few "blessings" issued, but I think it is manageable; considering the popularity of Quake, I'm sure that many volunteers will step forward to do the work.

    Defeating proxy cheats is simple: encrypt your client/server protocol stream, so a proxy can't rewrite it. In fact, I'm quite disgusted that more on-line games don't encrypt their streams already. Sure, there is a hit to the CPU, but it is well worth it. Ultima Online could have saved itself a lot of hassle by simply encrypting its client/server protocols.

    Encryption is the key to preventing cheating:

    1. Use encryption to create digital signatures to "bless" clients. Server can choose to reject all non-blessed clients.
    2. Encrypt all communication between the client and the server so that "proxies" can't sit between the two and cheat.
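A minimal sketch of point 2, using message authentication rather than full encryption to keep it short. The session-key name is made up, and how the client keeps that key secret from its own user is exactly the problem raised elsewhere in this thread:

```python
import hashlib
import hmac

# Hypothetical per-session key; negotiating it so the client's own
# user can't extract it is the hard, unsolved part.
SESSION_KEY = b"negotiated-at-connect-time"
TAG_LEN = 32  # bytes of SHA-256 output

def seal(payload: bytes) -> bytes:
    """Append an authentication tag so a proxy can't rewrite the
    payload without the receiver noticing. (Authentication only;
    full encryption would additionally hide the contents.)"""
    return payload + hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()

def unseal(packet: bytes) -> bytes:
    """Verify the tag and strip it; raise if the packet was altered."""
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(SESSION_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet modified in transit")
    return payload

pkt = seal(b"+forward +attack")
assert unseal(pkt) == b"+forward +attack"

# A proxy that flips even one bit of the payload is caught:
tampered = bytes([pkt[0] ^ 1]) + pkt[1:]
try:
    unseal(tampered)
    assert False, "tampering went undetected"
except ValueError:
    pass
```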
  • by Augusto (12068) on Sunday December 26, 1999 @12:09PM (#1444191) Homepage
    But first, let me point out that your example is a bit off. Player B doesn't need to download all of the other players' properties from the server, because the scheme is that the server knows and manages everybody's health. Also, the problem with skins, custom sounds, etc. is not related at all to the GPL Quake, since you can do that with the closed source version already. I think the current scheme is to simply ignore custom sounds/skins/etc. and just use the standard ones, so that if you look like Barney you still look like a grunt to me :) [this is how it works in Half-Life]

    Anyway, I think the general point you were trying to make (and a very valid one) is that waiting for the server to "approve" an action might take too long, and you're right. Unless we get really fast network connections, the only way around this would be a hybrid approach where the server sort of trusts the clients but then "audits" some players (randomly, or the top players): even if it lets actions go through (for speed), it reserves the right to analyse them and kick you out later. Once a cheater has been detected, his/her actions could be undone or simply ignored, and the player kicked out of/banned from the server.
  • I disagree. John Carmack openly admits to being a fan of Linux, and the GPL of Quake I shows, at least a bit, of devotion.

    Quake 1 has been dead as a business model for years. Releasing the source was more of a contribution to the Open Source community rather than a business move.

    Also, ID is smarter than to kill one of their products. Releasing the source does quite the opposite. By giving developers the source code, interest in Quake I has been rekindled, and developments are going to spring up left and right. (Also, if a better version of a product comes out, it's natural to buy the better one over the older one. They'll make their money on the newer versions regardless of the state of the predecessors.)

    -- Give him Head? Be a Beacon?

  • The auto-aiming thing does indeed seem to be a problem. I wonder if it would be possible in some way to "confuse" such programs by limiting the information they have available to make decisions. You mentioned changing it so that you couldn't see players around corners. Perhaps in addition to this, something could be implemented so that you have a system of probabilities. It would be hard, and probably rather cumbersome, to implement. But if you could, in some reasonable way, limit the client to "there is a 50% chance that another player exists in the shadows straight ahead of you," the auto-aiming program would be forced to choose targets of highest probability. If you could also implement shadow "cues" that appear to be models of people ahead of you (like when you scare yourself into thinking that a chair in a dark room is really a monster or a burglar), it may be enough to make auto-aiming additions unintuitive, as they shoot targets that don't exist. Of course, in a bright hallway it doesn't really help all that much.

    The other question would be how to render a polygon based model that has a 50% chance of existing. I suppose you could use transparency, though it would be questionable how accurate it would look. It would be nice to be able to remove or add other cues to tell someone if an object is there; the shadow and depth information, contrast to the background, etc.

    Perhaps another way might be to have invisible "targets" scattered throughout the level. Essentially a target that no human player can see/hear, or with information that the human player knows to ignore, but that would draw fire from auto-clients that just use client position information to aim/shoot. Again, I'm not sure how feasible this is. If you have an OpenGL model with completely transparent textures, how much overhead would it add? Could a model with only, say, 6 polygons be used, with maybe a dull green texture so as not to be obtrusive? Would it be easy for people to code checks for the fake players into the auto-aiming clients?

    Well, I don't know how feasible this all really is, as I'm mostly thinking in physical terms rather than polygons and models. I think limiting the useful information the client has available might be the way to go, if it can be done. Just something to think about, perhaps.. Anyway, thanks again for all the hard work, John, it really is appreciated!
  • No, it's not impossible if you define your client-server protocol in such a way that the server ends up making all the decisions.

    So how exactly does the server tell if a client is using an aimbot?
    That's right, it doesn't.

  • i disagree, this seems like one of those instances where it won't work. jc is right, with the source open and me being able to enter as a server op "if name == mine, scale damage by 75%",
    If I understand this argument correctly, you're stating that open source servers allow for cheating servers. The trouble with this argument is that so do closed source servers.

    Cheats within this environment already exist for closed source clients. Thankfully, servers are often modified to detect these modifications to the client. Thus, as cheats become common, they also become a moot point.

    What's to say someone willing to put forth effort to modify a client can't also put effort towards modifying a server? I'm not aware of a client that looks for a cheating server.

    The point is that you will have to trust that server - open source or closed source. For now.

    i think this is one instance [there are several, a lot more than OSS i believe] where closed source development is preferable, because of the reasons i stated in another post:

    -no clear leader for development [carmack is certainly not gonna continue development]

    -no standardization of versions; new features added break network compatibility


    And this is the challenge for open source developers to pick up if they so desire. Whoever begins putting out good fixes to the issues the Q1 environment has will naturally take the lead in development as more people decide to use/support that code base. Versions will be standardized, since servers will likely only support clients coming from trusted developers. And players are likely to only support servers that help guarantee a fair game.

    Cheating is one of the problems that needs a solution -- something I'm sure will be fixed. Open source projects have taken on other difficult tasks before and succeeded.

    Of course, the original point was that closed source environments deter cheating. Not so. Remember, client cheat hacks showed up in the Q1 environment while the source was closed.

  • It sounds like the probability thing is just too complicated.

    However, your idea of creating "fake targets" sounds very interesting. The server could just create some random invisible targets in different spots each time the map is run. To make it more useful the targets could be automatically placed in weird spots (like in the sky or on the ground) that a human player would be unlikely to aim for. Then when the bot hits the fake target a set number of times it would be automatically bounced from the server.

    This sounds like a possible solution to at least the aiming bot problem.
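A sketch of the bounce logic described above, with all names and the hit limit invented for illustration:

```python
import random

FAKE_HIT_LIMIT = 3  # hits on invisible targets before we bounce the player

class FakeTargetMonitor:
    """Track hits on server-placed invisible targets. A human can't
    see them, so repeated hits strongly suggest an aiming bot."""
    def __init__(self):
        self.hits = {}

    def record_hit(self, player: str) -> bool:
        """Record a hit on a fake target; return True once the
        player has crossed the limit and should be kicked."""
        self.hits[player] = self.hits.get(player, 0) + 1
        return self.hits[player] >= FAKE_HIT_LIMIT

def place_fake_targets(n, bounds=(0.0, 1000.0)):
    """Scatter targets in odd spots (sky or floor) that a human
    player would be unlikely to aim for, varied per map run."""
    return [(random.uniform(*bounds), random.uniform(*bounds),
             random.choice((0.0, 999.0)))  # on the ground or in the sky
            for _ in range(n)]

mon = FakeTargetMonitor()
assert not mon.record_hit("suspect")
assert not mon.record_hit("suspect")
assert mon.record_hit("suspect")  # third hit: bounce from the server
```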
  • This is a problem with basically any open-sourced game that really, to my knowledge, has yet to be addressed. The question is, how do you know whether that guy who no one can beat is really good at the game, or hacked a version that auto-aims, auto-fires, etc.?
    This problem has plagued the "closed" version of Q1 for years. There already exists proxies and patches that provide various methods of cheating.

    Opening the source, at worst, just added a few more cheats to an already exploitable environment. At best, it's allowed those with the desire and ability to solve these problems the tools with which to do it.

    Time will tell.

  • "... to your scheme would be to only audit the "n" top players, because who cares if someone is cheating if they have a crappy score right ?"

    How about a cheat that allowed people to kill teammates? I mean, it really is of no use to someone unless they're an asshole spammer, but there are plenty of those. Score shouldn't be the [only] criteria. Disruption of gameplay is the big one. I mean, if someone had a cheat that exited a level, that wouldn't raise their score but it would really disrupt the game and suck, and have to be fixed. - the Java Mozilla []
  • Like I said, it was a generic example. I think we're more concerned about open source games in general than about a particular 5 (?) year old program, anyway. Regardless of what data you're actually transferring (how about the position of your opponent?), in practice you're going to end up spending a lot of time waiting for the server.

    Your suggestion is an excellent one, and I hadn't thought of it in this context before. (Although it comes up a lot whenever we discuss [] open source distributed computing.) It would solve 90% of the problem, but ironically, it doesn't help in the particular case that I described: What happens when the server needs to hide information from a player, but the client needs that information to provide reasonable performance? I.e., the location of an invisible player. Once the server gives up that information, it has absolutely no way of ensuring that the client software will hide it from the player.
  • by / (33804) on Sunday December 26, 1999 @01:40PM (#1444270)
    That is, Carmack doesn't want to have to maintain the Q1 code. He released the code, and he's nominally interested in whatever happens to it, but to ask him to stick around and bless any modifications that others make is asking too much of him. Maybe you can set up an international standards body to perform that function, but it's an uphill battle.
  • Most people aren't cheating. Most people being accused of cheating aren't cheating. Most people complaining about cheating are just sore losers.

    It's possible to cheat in quake. It always was. Now that there's source it's relatively easy to do if you grok some C. We'll fix it though, like we would any other bug. So far movement looks like the simplest cheat, but it's also got the simplest solution: let the server calculate movement rather than the client. We're all a lot better off in that case anyway because then nobody can cheat regarding movement. (server side cheats are of course always possible but that's because you have to trust the server...)

    It's been discussed that autoaiming aids could be doable, QuakeForge is talking about fixes for that too. And there's also the whole idea of someone faking packets to screw up a player (either to their client claiming to be from the server or the server claiming to be their client..) There are simple fixes for these things too and we'll fix them. There are doubtless other cheats to be found, but we'll fix those too. This isn't exactly a big deal.
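The server-side movement check mentioned above ("let the server calculate movement rather than the client") might look something like this sketch; the speed constant is illustrative, not Quake's actual value:

```python
MAX_SPEED = 320.0  # max ground speed in units/sec (illustrative figure)

def validate_move(old_pos, new_pos, dt):
    """Clamp a client-reported move to what is physically possible.
    If the server bounds movement itself, a hacked client can't
    teleport or speed-hack, whatever its code claims."""
    dx = tuple(n - o for n, o in zip(new_pos, old_pos))
    dist = sum(c * c for c in dx) ** 0.5
    limit = MAX_SPEED * dt
    if dist <= limit:
        return new_pos
    # Scale the illegal move back onto the allowed radius.
    scale = limit / dist
    return tuple(o + c * scale for o, c in zip(old_pos, dx))

# A legal move passes through unchanged:
assert validate_move((0, 0, 0), (3, 4, 0), 0.1) == (3, 4, 0)
# A 5000-unit "move" in 0.1 s gets clamped to 32 units:
clamped = validate_move((0, 0, 0), (5000, 0, 0), 0.1)
assert abs(clamped[0] - 32.0) < 1e-9
```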

  • I have to agree with JWZ (yes *that* JWZ) and Pete on this one. The solution will require an alt.soc.hack.

    There are 2 parts to the problem here: the code that allows the cheating, and the cheater who uses that cheat in a game. The hacker who comes up with a cheat will always be around. What needs to be stopped is the deployment of that cheat on a large scale. Interestingly, I think there is a parallel between this and the gun control debate. There will always be guns (legal or not), and what needs to be stopped is people's use of them to make bad things happen. To continue the parallel, there are some who would seek to place restrictions on the advanced 'guns that cause the damage'.

    Let's just assume for a moment that it was possible to screen on the server side for the video driver used and the software running on the client, and to share that information across servers. How would one implement such a 'big brother' layer of abstraction without touching off a 'your rights on-line' debate? (see: Another Software Spy - November 28 (/.) [])

    This comes down to a trust model, and the ability of a server op(or designated trusted players) to kick a bot when they see one. It's worked in IRC for decades.

  • ...people from faking their client keys?

    Maybe I don't understand the scheme you're proposing, so bear with me.

    Proving that someone has a "blessed" client sounds theoretically impossible, for many of the same reasons why creating an uncrackable copy protection scheme in software is impossible. You can't have a perfectly hidden private or symmetric key in your "blessed" client, because to use that key the client has to decrypt it sometime, which implies:

    A. The algorithm for decrypting it is there, in the client code. Making the client closed source may make it more difficult, but no less possible, to recover/reproduce this algorithm.

    B. The decrypted key is in memory sometime. Whether you run the "blessed" client through a debugger, or halt its execution and examine /proc/kmem byte by byte, you can pull out that data somehow.

    There are more ingenious techniques, I suppose - someone mentioned running a "blessed" client and using your cheating client as a proxy between it and the server, passing any "key requests" or "checksum requests" to the blessed client while handling most of the gameplay with the cheating program.

    Two more points, as long as I'm posting:

    There have been near invincible client-side bots on public servers since I started playing Quake over two years ago, with all the aim improving, sight improving, etc. cheats that Carmack outlined. Closed source didn't do anything to stop them. In other words, to the idiot QW player who whined that he was never buying another Id game: shut up; it's not Id's fault.

    The best way to prevent cheating from a client-side bot is to have the client-server protocol such that the client is completely untrusted. Unfortunately, this isn't perfect:

    Just because it prevents the client from cheating doesn't mean it prevents the client from being a borg or a bot. You can make a Quake game such that borg clients can't see around corners, but you can't make one such that borgs can't have perfect night vision, perfect (aside from dodging projectile weapons) aim, etc.

    There are technical complications in realtime games to making the client completely untrusted. Quick example: Not sending the Quake client data on opponents who are around a corner means that the client can't do any local prediction on that player's movement, which means that you're subject to the full 200+ ms modem lag before the opponent becomes visible after he rounds the corner. I've already been too spoiled as a LPB to enjoy modem FPS games; this would just make them near-unplayable.

    Having a number of "blessed" clients as you've suggested is the perfect way to prevent cheating, but short of a magic uncrackable piece of hardware to locally verify the "blessed" client status, I don't see any way of preventing people from creating cheating clients. Closed source makes it harder, but no less possible.
  • by mcc (14761) <> on Sunday December 26, 1999 @02:46PM (#1444298) Homepage
    > it has completely destroyed quake1 culture. team fortress players, a segment of quake players that never really moved on to q2 or q3, after a bit of waning interest, have finally got their death blow.

    heh.. bringing up the question.. what if Carmack did it this way on _purpose_..?
    i mean think about it.. now that it's OSS all these Q1 holdouts have had their game ruined. So what? So, they have to upgrade to Q2 or Q3. Meaning Carmack gets more money.

    I don't honestly think this is the reason Carmack went open-source, and i think that the way ID is willing to let go of intellectual property they no longer use is wonderful.. but still, interesting to think about. Excessive paranoia is fun!

    listen to your heartbeat delete beep beep BEEP.

  • Barring the device-driver-based hack, the situation you have with the networked game is much like a distributed computing situation. In fact, the network game could be considered just a large distributed computer for putting pixels in the right place on each player's screen. Most of the distributed computing projects (, seti@home, etc.) do suffer from the same problem of people cracking the protocol and lying to the coordinating servers about what is actually going on, and it does seem that most of them end up relying on closed protocols and source code to keep it from being a problem. seems to use the same tactic for spotting cheaters as I use on my local Quake server, i.e. "they are too good".

    However, a lot of work does seem to have been done, at least in theory, on making distributed computing truly secure, so it might provide a place to start. A quick search through Counterpane's list of crypto papers [] gave quite a number of hits on the subject. I doubt you could create a truly secure protocol today (for speed reasons), but this is a problem that is only getting worse as time and technology advance...

    We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.
  • by WNight (23683) on Sunday December 26, 1999 @03:02PM (#1444308) Homepage
    Quake already does this. The server is completely in charge of if you live or die, if you run out of health, you fall over, even if you've found the bytes for health and locked them at 100...

    What you're concerned about is actions... Bots can shoot better, and dodge better (theoretically) than humans. How does the server tell it that perfect spin while holding the lightning on a guy who ran by was Thresh, or a bot?

    Similarly, the client can change the way information is displayed. Perhaps it changes the Z sorting for items, so that all players are drawn in front of walls, even if they'd normally be behind them... Throw in a simple GL effect to indicate the difference between an X-ray view and a non-X-ray view and you've got a way to avoid ever being surprised at corners.

    It's impossible to stop cheating in an environment with untrusted clients. Even with black boxes like console systems, it just raises the bar, making it harder to cheat.
  • by Jamie Zawinski (775) <> on Sunday December 26, 1999 @03:08PM (#1444309) Homepage

    Would targeting computers and nightscopes be cheating if everyone used them? Of course not. It's only cheating when people don't agree on the rules.

    You might think that robot/cyborg players were cheating unless your goal was to see how good you were playing against the AI. Or unless you were competing with other humans to see who could build the best robot.

    So making it impossible for the game to have bots and timers and other add-ons isn't necessarily the best approach, since that eliminates the potential for whole new forms of gameplay among consenting participants.

    That's why this is and will always be a social problem, not a technical problem. And it's one with a simple solution: don't play with jerks.

    It's just like Usenet: it used to be a nice place, but then it got overrun by idiots, and so newer, smaller communities like Slashdot appeared. If you are playing Quake and there are a lot of cheaters and idiots around, chances are your community got too big (and thus lost the elements of it that made it actually be a community) and you need to find or create a more intimate one.

  • by Jamie Zawinski (775) <> on Sunday December 26, 1999 @03:58PM (#1444320) Homepage

    For example, using PGP you can sign an email message and others can then verify that the message really came from you. Obviously the same thing could be done for an executable file

    Unfortunately, this isn't true.

    When you receive a signed message/packet/whatever, the recipient can verify that the sender of that packet had access to the private key that corresponds to a particular public key. That doesn't say anything about the integrity of the message, only about the set of secrets known to the sender.

    To oversimplify: you can know who I am, but you can't know that I'm telling you the truth.

    Where do the private keys come from? If they are embedded in the Quake executable, then anyone can extract them and use them to sign anything. If they come from PGP's web of trust, then still all you've done is verify the identity (or pseudonym) of the player -- not of the software that they are using.

    This is all very similar to the general copy-protection problem [] as well as the fundamental impossibility of DVD encryption [].
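A toy demonstration of the point above, using HMAC as a stand-in for PGP signatures (the key name is invented): once a cheater has extracted the signing key from the executable, a signed lie verifies exactly as well as a signed truth.

```python
import hashlib
import hmac

# A signing key standing in for the private key embedded in a
# shipped client binary -- and therefore extractable by its user.
EXTRACTED_KEY = b"key dug out of the client executable"

def sign(msg: bytes) -> bytes:
    """Produce a signature proving possession of the key."""
    return hmac.new(EXTRACTED_KEY, msg, hashlib.sha256).digest()

def verify(msg: bytes, sig: bytes) -> bool:
    """Check that msg was signed with the key -- and nothing more."""
    return hmac.compare_digest(sig, sign(msg))

honest = b"player fired from (10, 20), facing 90 degrees"
lie = b"player scored a headshot through a wall, across the map"

# Both verify perfectly: the signature proves the sender holds the
# key, not that the message describes what really happened.
assert verify(honest, sign(honest))
assert verify(lie, sign(lie))
```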

  • I only meant using encryption for the object tracking and inventory problem. The signature would be used to verify, from the server's perspective, that the client really does have what he says he has.

    When it comes to computerized aiming and target tracking, I'm not sure there is a way around this other than sending a spoiler input to the aiming control system, or recognizing it based on the response.

    Maybe this is something more like what postal chess does. Have everyone register to get on a server by giving personal info, and then boot them off if there is a problem. To make it work might be slow and intrusive. Then again, I always liked just playing against the computer much better.
  • However, each script would try to authenticate the environment using a different method. The people that ran the servers could write custom scripts using new methods of authentication -- methods which a hacked version may not know about (since each script could be custom-written, there could be an infinite number of ways to authenticate the program files).

  • I'm afraid a lot of people are responding with the same old answers.
    The fact is, on-line games like Quake are VERY sensitive to cheating. The client is trusted with "too much information"; this is necessary because the server cannot deliver all the necessary information on time. Instead, the server delivers information the client MIGHT need.
    Time City [] proposes to solve this problem by building the client/server package from the ground up with an auditing system. All Quake servers and clients were built from the ground up on a trust system; to change this would require a complete rewrite.
    The problem is that Quake expects the client to be 100% reliable and trustworthy. Now that the client is open sourced, this is no longer the case. Just as you can close security holes in open source, you can open them.
    These defects have been known for quite some time and could NOT be addressed. Eventually some punk would have made a cheat client based on the server code (already open), not the client code, and we'd be in the same position. But it would have been several years from now, and by then Quake would not be that interesting.

    Many open sourced multiplayer games suffer exactly the same problem. They solve it with a closed source solution: trusted clients are compiled by the game's developers and given encryption keys. If your client has a valid key you can play; if not, you may have compiled it yourself and could be running a cheat client.
    So the source is only there to fix the bugs and improve the game, but to actually use it you have to return the code to the developers and let them compile it.
    Or you can fork the code and make your own keys.. But then only the servers recognizing your keys/code could be used by your fork.

    Time City's [] solution is unique, and time will tell whether the game server will effectively detect cheating or whether people will be able to make cheat clients using the open source code.

    The way it was explained to me, BTW, is that if the server detects someone cheating, he will be dropped from the server. It does this by measuring to make sure the user really, really, really could do what he says he's doing, and if not.. disconnect...

    Some of the cheats in a Quake client rely heavily on the fact that Quake clients MUST have data on ALL players at ALL times. Radar and transparent walls are the result. The client is trusted to do "the right thing" with the data. A cheat need only take advantage of this...
    If Quake did not yield as much information as it does, it wouldn't be so easy to cheat... but that would take a complete rewrite of the client/server interface...
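The interface change the last paragraph calls for amounts to the server filtering what it sends each client. A sketch, using a plain distance test where a real engine would consult its BSP/PVS visibility data:

```python
def visible_entities(player_pos, entities, max_range=1000.0):
    """Send a client only the entities its player could plausibly
    perceive, so radar and transparent-wall cheats have nothing to
    reveal. The range check is a stand-in for a real visibility test."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [e for e in entities if dist(player_pos, e["pos"]) <= max_range]

entities = [
    {"name": "nearby foe", "pos": (100.0, 0.0, 0.0)},
    {"name": "foe across the map", "pos": (5000.0, 0.0, 0.0)},
]
sent = visible_entities((0.0, 0.0, 0.0), entities)
assert [e["name"] for e in sent] == ["nearby foe"]
```

Note the trade-off raised earlier in the thread: filtered entities can't be locally predicted by the client, so players pay full network lag when one comes into view.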
  • but you continue to say that these problems will soon be fixed and it's no big deal, i disagree. you offer no new suggestions. bring something new to the argument ;)
    Actually, the point I was making isn't that it's no big deal. It is. It's a big issue with no quick fixes. Alas, I have no brilliant fix to offer right now.

    However, the point, if I understood it correctly, is that open source won't work in this kind of environment and closed source will. That's the point I disagree with. As I stated, there have already been cheats even with closed source. Closed source didn't buy us any security then.

    My suggestion is that open source will help lead to a fix, if there is one to find. I don't have the fix. But I alone don't need to. Those who are interested in this problem will get together and work to solve it. Open source projects have successfully tackled some weighty problems in the past. There's no proof that such an approach has less of a chance of coming up with a solution than a closed source model does.

    Allow me to quote -- "bring something new to the argument ;)". Now that Q1 is an open source project, you can be involved in its development.

  • ...please check out my upcoming article in Game Developer Magazine on Cheating in Online Games. It will appear in the May (+/- 1 Month) 2000 issue (The exact date of publication will be determined when both Alex Dunne (GD Magazine Senior Editor) and I get back from our respective vacations).

    (Darn! Of all the days to be gone visiting relatives, there are ~320 posts already - I'm hoping someone still has moderator points left)

    The article for Game Developer discusses not only cheating in fast client/server games like Quake and other shooters, but in strategy games such as Age of Empires and Starcraft, Action-RPG's such as Diablo, Massively multiplayer games such as Ultima Online, and others. It also makes an effort to identify and classify cheating efforts from the blatant hacks to the gray-area issues of a game's design. It discusses the various architectures games use, and the inherent strengths and weaknesses in them. It talks about the specifics of how games get hacked, specific counters for them, and the limits therein. It also examines programming weaknesses that can lend themselves to cheating in non-obvious ways. My goal with this article is to provide to others in the game industry a reference to assist them in their efforts to secure their games. (For those interested in my credentials, I've written significant portions of all three "Age of Empires" games, and worrying about cheating and designing counters for them is something I'm paid to do in my day job. :)

    With that said, there are four rules that apply to Cheating in ALL Multiplayer games:

    1) Despite what you think, someone really wants to cheat bad enough to do it.
    2) Despite what you think, cheating in game (insert title here) is possible.
    3) Despite what you think, someone really wants to cheat bad enough to do it
    4) Despite what you think, cheating in game (insert title here) is possible.

    I repeat myself because denying the problem (which many game publishers do) does not make it go away.

    In response to various issues raised in the ~320 posts so far, I would like to assert the following regarding online gaming:

    Closed Source will not prevent cheating, only slow it down (a little)

    Terje Mathisen is correct - it is absolutely impossible to make a completely cheatproof system

    This is not a case of "Security Holes" in the game programs, but rather basic aspects of the design of our computers and network communications being used to achieve particular results. To perceive it as such is to promote a fallacy.

    You can verify that a game is running a specific and trusted executable. This does not achieve security. You can not verify anything else that is running on that computer or any other computer between you and the other players that passes your communication packets along.

    Security through obscurity is not security

    John Carmack's post includes pretty much all I was planning on saying about cheating in Quake-engine games and clarifies the misconceptions in many of the other posts. The issues with the Quake Architecture are summed up in his comment: "The cheating clients/proxies focus on two main areas -- giving the player more information than they should have, and performing actions more skillfully. " - What I classify as "Information Exposure" and "Reflex Augmentation".

    Information exposure will remain one of the biggest problems for most games. In nearly every game, there is a degree of interpretation in the display of a fixed piece of information. A cheater can alter that interpretation (display something that should not be shown, make bright something dark, make a sound louder, whatever) on his and only his machine without altering that information with respect to the game world it is a part of.

    Information exposure does not have to involve modification of game's code, data, or network communications. Passive reading of network packets and key values from another processes' memory space are sufficient to provide a cheater with a significant advantage with some games.

    Reflex Augmentation will remain a big problem for games where player's reflexes are an important part. How fast you can move the pips in Backgammon does not matter - it has no bearing on the outcome of the game, or the other player's turn. In Quake or Half-Life, it's all about being fast and accurate; that's why you can never have a fast enough system, video card, or ping. Aim-bots and other proxies will always be capable of passing themselves off as the real thing. The fundamental problem here is the inability to distinguish human inputs into the game from computer generated inputs. Quake server modifications to attempt to distinguish the two have led to the Aim-Bots adding human-style "errors" into their inputs until their accuracy is reduced to just below the statistical threshold that the server will allow. That really good human players may be incorrectly fingered as cheaters only underscores the limits of this option.
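The statistical-threshold approach described above can be sketched as follows; the constants are invented, and as the paragraph notes, a bot can deliberately stay under any such threshold while a very good human may trip it:

```python
MAX_PLAUSIBLE_ACCURACY = 0.80  # tunable threshold; too low flags good humans
MIN_SHOTS = 50                 # don't judge on a tiny sample

def looks_augmented(shots: int, hits: int) -> bool:
    """Flag a player whose long-run accuracy exceeds what the server
    operator considers humanly plausible. This only raises suspicion;
    it cannot distinguish a careful aim-bot from a great player."""
    if shots < MIN_SHOTS:
        return False
    return hits / shots > MAX_PLAUSIBLE_ACCURACY

assert not looks_augmented(10, 10)   # sample too small to judge
assert not looks_augmented(100, 60)  # strong but plausible human
assert looks_augmented(100, 95)      # suspiciously near-perfect
```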

    Game Design decisions may inadvertently add to the problem and lead to quicker dissatisfaction with a given game. I'll use Half-Life for an example. Let's assume an auto-aim proxy exists for it (I believe it does). Half-Life has a weapon in Deathmatch that has two interesting capabilities (which have a place in normal gameplay): 1) it kills with a single shot no matter how much health or armor the target has. 2) it shoots through walls. I will leave it as an exercise to the reader as to how much less fun the game becomes (became) the day that proxy makes (made) the rounds.

    Encryption in communications has some important limitations:

    1) Any sort of protocol that involves adding any back-and-forth to complete a single action will raise "ping" times significantly. Games are already struggling to do everything possible to reduce lag. The gaming community would reject adding 250 ms to everyone's ping.
    2) Packet loss is accepted for some portion of communications in some action games. Any sort of encryption on those packets has to be able to survive lost packets.
    3) CPU bandwidth is limited. Too many cycles devoted to encryption and decryption (especially on the server) will negatively impact game performance.
    4) If the end user has access to both the client and server, they can and will be debugged. By very smart people.
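    Point (2) above has a standard answer worth sketching: derive each packet's keystream from the shared key plus the packet's own sequence number, so no packet depends on any other having arrived. This is a toy construction using SHA-256 as a keystream generator, for illustration only (not a vetted cipher, and not anything Quake actually does):

```python
# Toy per-packet encryption that survives packet loss: the keystream is a
# function of (key, sequence number) rather than of previous packets, so
# each packet decrypts independently. SHA-256-as-keystream is illustrative
# only -- a real protocol would use a proper stream cipher with nonces.

import hashlib

def keystream(key: bytes, seq: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + seq.to_bytes(8, "big") +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_packet(key: bytes, seq: int, payload: bytes) -> bytes:
    ks = keystream(key, seq, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

decrypt_packet = encrypt_packet  # XOR is its own inverse

key = b"shared-session-key"
p5 = encrypt_packet(key, 5, b"move +forward")
# Packet 4 was lost in transit -- packet 5 still decrypts on its own.
assert decrypt_packet(key, 5, p5) == b"move +forward"
```

    Note this only addresses loss tolerance (point 2); it does nothing about the CPU cost of point 3 or the debugger problem of point 4.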

    Slashdot user 'Pete-classic' touched on one of the anti-cheating efforts that I feel has been under-explored to date: identifying people who cheat at a game and exposing them to the online user community for that game. Being able to record games and play them back from the perspective of other players (as you can in Age of Empires II) brings the ability to audit a game after it is played. While this can't address all possible forms of cheating, it's a good tool for raising suspicion in those cases it can't outright detect. It's also equally useful for proving that you were beaten by a better opponent.
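    The record-and-audit idea rests on the simulation being deterministic: if it is, recording just the seed and every player's inputs is enough to re-run the whole game later from any perspective. A toy one-dimensional "game" shows the principle (the function names are illustrative, not Age of Empires II internals):

```python
# Sketch of deterministic replay: given the same seed and the same input
# log, the simulation reproduces the game bit-for-bit, so a recorded game
# can be audited after the fact from any player's perspective.

import random

def simulate(seed, inputs):
    rng = random.Random(seed)     # all randomness flows from the seed
    pos = 0
    trace = []
    for cmd in inputs:
        pos += {"left": -1, "right": 1}[cmd] + rng.choice([0, 0, 1])
        trace.append(pos)
    return trace

recorded = {"seed": 42, "inputs": ["right", "right", "left", "right"]}
live = simulate(recorded["seed"], recorded["inputs"])
replay = simulate(recorded["seed"], recorded["inputs"])
assert live == replay   # the replay is bit-identical, so it can be audited
```

    The input log is tiny compared to recording video, which is why this style of replay is practical to keep for every match.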

    An opportunity exists for developers to add hooks into future games to assist the user community in policing itself. Imagine, if you will, that when your server browser brings up a list of games, next to the net-speed indicator is an indication of the controversy and reputation of the server. Social solutions are going to be complex and take time to evolve, but they offer possibilities that programming alone can't.
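    One minimal way such a reputation hook could work (the mechanism here is an assumption, since the post only imagines the feature): the browser aggregates weighted fair/cheat reports from past players into a single score displayed next to ping:

```python
# Hypothetical server-reputation score for a server browser: aggregate
# weighted player reports into a number from -1.0 (notorious) to +1.0
# (clean). The report format is an illustrative assumption.

def reputation(reports):
    """reports: list of (verdict, weight); verdict is +1 fair / -1 cheat."""
    if not reports:
        return None  # unrated server -- show no indicator
    score = sum(v * w for v, w in reports) / sum(w for _, w in reports)
    return round(score, 2)

# Nine fair reports and one cheat report yield a mostly-clean score.
assert reputation([(+1, 1.0)] * 9 + [(-1, 1.0)]) == 0.8
```

    Weighting would let reports from long-standing accounts count for more than throwaway ones, one of the complexities such a social system would have to evolve.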

    So much more to say, but I have to sign off now. Thanks for reading and caring.

    -Matt Pritchard
    Ensemble Studios
    Age of Empires, Rise of Rome, Age of Empires 2: Age of Kings

  • The open source model lets creative people come up with superior strategies for winning. It's retarded to run around with the mouse and fire if you can use your brains and hack up some code to kick everyone's asses.

    Plus isn't it, at least to some extent, the fault of the design of the game protocol in that it facilitates cheating? A well designed protocol would not allow client modifications to give rise to cheats---other than the creation of robot players with superhuman reflexes. Even that could be eliminated; the game server could be equipped with detection heuristics in order to kick suspected robot players off, or handicap them in some way.

    I think that people who cry ``cheat'' are just damn whiners lashing out against nerds who are applying ``alternative skills'' to the game.
  • by Unbeliever (35305) on Sunday December 26, 1999 @06:51PM (#1444349)
    How? All it knows is what the client tells it. What would stop a hacked client from giving the signature of a version of the client that's on disk rather than the one that's running?

    Absolutely nothing. We just make it as difficult as we can. Someone with enough determination can (and has) spoof us.

    Let me introduce myself. I am the current netrek client KEYGOD. I am the one who edits and serves the keyring that goes to all the servers that wish to validate keys. How does it work? Not by open source. Well, not very open source.

    The people who own the RSA patent have given us permission to use a version of their algorithm for authentication purposes only. That source snippet is not included with any server OR client source tarball; neither gets it, so the source isn't really "out there" or open source. Who gets it, and how are things blessed? Well, here's where trust comes in.

    The source for the RSA verification is relatively tightly controlled for US export, patent, and copyright reasons. There's a US version and a non-US version. You get the RSA source by becoming an established client developer or Server God. You ask us, who run the metaservers, for the key to unlock the source tarball so you can include it in your source compilation.

    For a server, you're done; it will go fetch the keys from the keyring automatically. For a client, the verification source generates a public/private key pair and stores it in about 20 different variables, in random order, in random .o files. Each .o file is randomly linked into the final binary, and symbols are stripped. No binary CRC checking is done. Multiple binaries can be compiled with the same key, and yes, you read that right, the key IS stored in the client binary. The client maintainer then offers the client public key to me, and I have a fixed set of criteria for accepting or denying a key.
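    The blessing scheme above boils down to a challenge/response: the server sends a random nonce, the client signs it with its embedded key, and the server checks the response against the blessed keyring. This is a toy model of that flow (not the actual netrek RSA code, which is distributed separately; the "signature" here is a SHA-256 stand-in for illustration only):

```python
# Toy model of netrek-style blessed-client verification: the server
# challenges with a random nonce; the client answers using key material
# embedded in its binary; the server checks against its keyring. The
# SHA-256 "signature" stands in for the real RSA operation.

import hashlib, os

def toy_sign(key_material: bytes, challenge: bytes) -> bytes:
    return hashlib.sha256(key_material + challenge).digest()

def toy_verify(keyring, client_name, challenge, response):
    # Real netrek verifies with a public key; in this toy model the
    # keyring simply holds a per-client verification secret.
    key = keyring.get(client_name)
    return key is not None and toy_sign(key, challenge) == response

keyring = {"COW-lite-3.0": b"embedded-key-material"}     # blessed clients
challenge = os.urandom(16)                               # server's nonce
response = toy_sign(b"embedded-key-material", challenge) # client's answer
assert toy_verify(keyring, "COW-lite-3.0", challenge, response)
assert not toy_verify(keyring, "borg-client", challenge, response)
```

    As the post says, the weak point is not the math: anyone who extracts the embedded key from a blessed binary can answer the challenge from a borg.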

    We, the server gods, client developers, and I, have to trust each other for this system to work. We have to trust that someone didn't compile a borg using the same key as a non-borg. Hell, we have to trust that someone didn't out-and-out try to bless a borg outright, since it is practically impossible for us to check all the clients. Server Gods have to trust me not to slip in my own or my buddy's borg. The players have to trust the Server God not to put in server-side cheats for himself. But there are recourses. Someone can cry foul and we can investigate and yank a key. Server Gods can add their own keys for people they trust, and can reject keys from the keyserver.

    Maybe I'm overemphasizing this, but at some point, people ARE going to try to cheat. There is nothing anyone can do about that. You have to hope that that number is small and trust that people are generally going to Do The Right Thing. Barring that, we try to make it as difficult as possible for the casual cheater to succeed. Heck, the non-casual cheater doesn't even need to hack the binary. They can twiddle with the IP stack. They can even write something under X to send X events to the client, I'm sure you can do the same under Windows.

    Another level of trust is with the client developers. We have always been adding new features and new clients. Every once in a while a feature introduced by a client developer may be deemed borgish. A flame-fest/discussion occurs, and if a feature IS declared borgish, we have to trust that the client developer retracts that feature.

    If you want to see how we discuss this, do a Deja search; keywords like "New Client" and "borg" will hit most of those discussions.

    Now to find a moderator to moderate this up...

  • the only way to have no one be able to cheat is with a closed-source system of checking.

    We have GnuPG under the GPL, as well as Quake 1 and OpenSSH. So why not set up a system where first the client and server exchange keys and begin encrypting the session, then verify the identities of the client and server (this could allow a global "stats" centre)? If the server is a good one, and the client has not been blacklisted, play commences. By encrypting the stream (or just compressing it), you make it harder for others to break in and/or forge identities. This could dovetail quite well with Netrek's blessed binaries, and would allow better "global" rankings :-)
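    The proposed handshake could be sketched roughly like this. Everything here is an assumption drawn from the post (the "stats centre" blacklist, the key names, and a hash in place of a real Diffie-Hellman exchange), not an existing protocol:

```python
# Sketch of the proposed handshake: look up the client's key fingerprint
# against a blacklist from the hypothetical global stats centre, then
# derive a session key for the encrypted stream. The hash-based key
# derivation stands in for a real exchange such as Diffie-Hellman.

import hashlib

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

BLACKLIST = {fingerprint(b"known-cheater-key")}  # from the stats centre

def handshake(client_pubkey: bytes, server_secret: bytes):
    if fingerprint(client_pubkey) in BLACKLIST:
        return None                 # blacklisted client: refuse play
    # Derive a session key bound to both parties for the encrypted stream.
    return hashlib.sha256(server_secret + client_pubkey).digest()

assert handshake(b"honest-player-key", b"server-secret") is not None
assert handshake(b"known-cheater-key", b"server-secret") is None
```

    As other posters note, this authenticates an identity, not a behavior: a cheater simply generates a fresh, unblacklisted key pair.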
  • Exactly. Opening the Quake source code didn't create this problem - it was always possible to disassemble the Quake executable and modify it to cheat. It's just a lot easier now.

    It's still possible to have good, cheat-free games, though. Tetrinet (online 6-player Tetris) had a really bad design, with everything tracked client-side, and didn't even prevent you from typing ASCII 255 (the "packet" separator char) into the chat window, so you could cheat just by typing some simple text into it. Yet many people still play the game, and don't cheat. They find other people whom they know and who also enjoy playing the game, so that nobody wants to cheat, since that would be somewhat pointless.

    Anyway, this is really the only solution. Any other attempted solution just makes it more difficult, but not impossible, to cheat.
    I have never looked at the netrek source code, but is there anything stopping you from just changing the protocol to prevent cheating, and perhaps changing the game mechanics and ideas about fair play too? After all, any cheat that could be accomplished by an external program monitoring X events is not really a cheat. The ``proper open source'' solution to borg clients is to include a scripting language to encourage them, giving everyone equal opportunity via sharing these scripts.

    Now, an interesting application of your existing blessed-clients system would be to make clients which give a copy of your scripts to your opponent trivial, and rogue clients which do not give away the script a pain in the ass.

    Finally, if people are sharing lots of borg scripts (even via some automatic script-sharing system), then there would eventually be no benefit in making a client to keep your scripts secret, since your scripts wouldn't really be much better than anyone else's; i.e., the user interface to the game has evolved, which is what we all want anyway, especially in those build-and-send-troops games like StarCraft. A script system is exactly what they need.


  • Matt Pritchard said a lot of good things and then said:

    You can verify that a game is running a specific and trusted executable. This does not achieve security. You can not verify anything else that is running on that computer or any other computer between you and the other players that passes your communication packets along.

    I have to disagree with the first part: I don't believe that you can verify that a game is running a specific and trusted executable.

    Maybe I'm wrong -- I am not a cryptographer, and don't even play one on TV -- but I just don't understand how this is technically possible. If someone thinks they know how to do this, I'd like to hear how.

    Security through obscurity is not security


  • Your comments about proxy bots are... unimaginative.

    I can find a poorly written proxy bot that I can kill by exploiting obvious weaknesses, but that doesn't mean all proxy bots have to be that flawed.

    The fire-frame-unsync you mention is a 'bug' in Quake and Quake2 (perhaps in Quake3) where certain animations override others. For instance, everyone in Q2 slid when they ran and fired, because there wasn't a proper running-and-shooting animation... There's no reason a bot's shots would look different from a player's (the network code is the same, after all) unless there was a complete spin in there, which only happens if the bot is given a 360-degree field of view.

    A 'skilled' bot operator runs a bot with a much smaller field of view, and plays an intelligent game as well. They aren't so easy to spot, or kill.

    I personally doubt you could kill a 360-degree-viewing z-bot in q2dm1 run by a semi-intelligent player (i.e., one grabbing the odd bit of health). Your only strategy would be to stock up on armor to try to take enough rail hits to give you time to brute-force it to death.

    And I know you couldn't take on a reasonably skilled player using the zbot 'intelligently', and subtly enough to not be noticed as a bot.

    If bots are programmed to expect movement in straight lines, then parabolic curves will fool them, and jumping will make them miss. If you teach them about parabolic curves, then they'll hit jumpers. Air control will still avoid their shots, until they are programmed to cope with it. Anything you can do can be programmed into a client-side bot. Eventually the only thing you'll be able to do is dodge randomly and hope that the bot's lag keeps it from tracking you.
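    The escalation described above can be seen in miniature with the simplest possible ballistics. A bot that leads targets assuming straight-line motion misses a jumping (parabolic) target, until it is taught the same arc; the numbers below are illustrative only:

```python
# Toy 1-D (height-only) target prediction, illustrating the bot arms race:
# a linear-lead bot misses a jumping target, while a bot taught the
# parabolic jump arc hits it exactly. Illustrative numbers only.

def linear_predict(y0, vy, t):
    return y0 + vy * t                     # naive lead: ignores gravity

def parabolic_predict(y0, vy, t, g=9.8):
    return y0 + vy * t - 0.5 * g * t * t   # lead that accounts for the arc

# Target jumps with upward velocity 5 m/s; the shot arrives 0.5 s later.
actual = parabolic_predict(0.0, 5.0, 0.5)  # where the target really is
naive = linear_predict(0.0, 5.0, 0.5)      # where the naive bot aims
assert abs(naive - actual) > 1.0           # linear bot misses by over a metre
assert abs(parabolic_predict(0.0, 5.0, 0.5) - actual) < 1e-9  # upgraded bot hits
```

    Every dodge technique is just another function to add to the predictor, which is why the post concludes that random dodging is the only move a bot can't eventually learn.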

    Heh, I was surprised to read a post by someone who actually understood that you need two trusted parties for encryption to work, and who understood the Quake client/server model and its strengths/limitations... Then I read the name...

    The 'unfortunate' truth is that there is nothing which runs on your computer which you can't subvert with some work. As long as the computer is an open platform, which you can debug programs on, and monitor device traffic on, this will be the case. There is NO way around this. Anything the program can run for authentication, the hacker can rip apart to spoof said authentication.

    Both game models, peer-to-peer and client-server are vulnerable to this, in their own ways.

    The problem is that you can't control what the client does. If it returns the same information, you don't know what program is running.

    There is no cryptographic way around this. For crypto to be used to communicate between two parties, you have to trust both. If I send you a private encrypted message, I can make sure it can't be decrypted without the key (or without cracking, which we'll assume is impossible in the scope of the problem), but once you have that message, you can share it with the world and I can't stop you.

    Likewise, with digital signatures, I can be sure you sent me the information that appears to be from you, but that doesn't tell me if you're telling the truth or lying.

    Anything added onto this is security by obscurity (which is possible in open source code; see the OCCC for proof...). If the source code is available, it's a bit easier, but that doesn't mean binaries are secure. Anything that happens on my computer is ultimately subject to my control.

    So, what can be done?

    Nothing really. Servers can check for unlikely shots and moves, but as JC notes, this ends up eventually kicking off Threshes as bots, and allowing any bots set to perform well but still below the cutoff.

    There are some tricks, such as invisible targets that are labelled as players but that humans don't see. This will stop bots, until someone analyzes stored network traffic from before they got detected as a bot, sees the invisible target, and codes the bot to ignore invisible targets. One generation of bots stopped, no net gain.
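    The phantom-target trick and its one-generation shelf life can be shown concretely. This is a toy model (the entity format and flag name are assumptions for illustration): the server streams a decoy that honest clients never render, and any client that snaps to it outs itself, until the next bot generation filters it:

```python
# Toy model of the invisible-decoy honeypot: a first-generation bot aims
# at anything the server labels a player and is caught targeting the
# phantom; the next generation filters the phantom flag out of captured
# traffic, and the trick stops working. Entity format is illustrative.

PHANTOM = {"id": 99, "pos": (10, 0, 5), "phantom": True}   # server decoy
REAL    = {"id": 7,  "pos": (3, 4, 0),  "phantom": False}  # actual player

def naive_bot_target(entities):
    # Gen 1: aim at the first thing the server calls a player.
    return entities[0]["id"]

def updated_bot_target(entities):
    # Gen 2: ignore anything carrying the phantom flag.
    visible = [e for e in entities if not e["phantom"]]
    return visible[0]["id"] if visible else None

entities = [PHANTOM, REAL]
assert naive_bot_target(entities) == 99    # caught aiming at the decoy
assert updated_bot_target(entities) == 7   # one generation later: no net gain
```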

    The only way to stop cheaters is to ship computers as black boxes that run a restricted OS, don't allow OS-level code such as debuggers, have private keys and serial numbers embedded, and are encased in tamper-resistant materials. But if we liked that sort of computer, we'd be using an N64...

    Half measures, such as dongles, have been suggested, but are simply more obscurity. Any half-measure *will* fail.

    So, are we doomed? Are good network games something we'll never have?


    I downloaded a ZBot, as did most people I know. Certainly any Quake-playing programming person I know downloaded one. But I don't play with it. I don't even keep it installed. Why? Because it's not fun. A few wankers find disrupting games to be fun, but if we simply vote to kick them and continue, they will eventually go away, simply because bugging people for fun relies on people being bugged.

    We have to put up with these people if we want the freedom of open computers, in the same way we have to put up with street mimes in a free society. But, if you just ignore them, they will go away.

    I should mention that bots exist in games like Quake because there are some actions that require little mental skill.

    Shooting a railgun is trivial. A monkey could be trained to do it, if they had a fast computer and a nice video card. This is why bots are usually used with the railgun. It's a no-brainer. You don't often see bots use weapons like rockets or grenades, especially across an open area, because those are least effective when fired at the current player location. To work, they need to be fired either where the player is going to be, or where you don't want the player to be, to herd them. Bots can't do this.

    If we want to get rid of bots, we'd be 90% of the way there if we'd remove the no-brainer weapons.
  • Sure, if you want to make a database of "trusted" people ...
  • > And anyway, it doesnt have to compute the total
    > memory space, just the binary code + the random
    > server string. if the hacked client lies about
    > its space it wont be able to compute the correct
    > signature..hence it will be rejected.

    This code has to be static or else the whole system doesn't work. As long as they have a copy of the original correct code (say, a binary dump in a file of some sort), the hacked client can compute the signature from the static dump instead.

    How about "man in the middle" style? The hacked client contains a built-in proxy. When you tell it to connect, it spawns a real Quake client, proxies its connection over to the real server, and listens. The real client then participates in the authentication protocol; when it finishes, it is killed and the hacked client takes over the connection.

    Yes, this can be worked around and possibly stopped. However, as long as someone has the original code, they have the "secret" you want to authenticate with. Thus they can authenticate.
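    The static-dump attack above can be made concrete. Suppose the "secret" is a hash over the client's own code image plus the server's random string (a hypothetical scheme, not an actual Quake protocol): a hacked client that kept a dump of the original binary computes exactly the same answer:

```python
# Why hashing your own binary plus a server nonce proves nothing: a hacked
# client just hashes its saved dump of the ORIGINAL binary instead of its
# own (modified) code, producing an identical response. The binary bytes
# and scheme here are illustrative stand-ins.

import hashlib

ORIGINAL_BINARY = b"\x7fELF...genuine quake client code..."  # stand-in bytes

def respond(code_image: bytes, server_nonce: bytes) -> bytes:
    return hashlib.sha256(code_image + server_nonce).digest()

nonce = b"random-server-string"
honest = respond(ORIGINAL_BINARY, nonce)    # real client hashes itself
# The hacked client hashes its stored dump, not its own running code:
spoofed = respond(ORIGINAL_BINARY, nonce)
assert spoofed == honest  # the server cannot tell the difference
```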

  • "i disagree, this seems like one of those instances it wont work. jc is right, with the source open and me being able to enter as a server op "if name == mine, scale damage by 75%","

    (don't know who actually posted this)

    But who the hell is going to play on your server once they realize you/your server is cheating? Do all you want to your server. Nobody will want to play with you. This is beside the issue of open source or closed source (a non-issue), or client security (a not-completely-solvable issue). - the Java Mozilla
  • by Hard_Code (49548) on Monday December 27, 1999 @08:16AM (#1444429)
    "It's impossible to stop cheating in an environment with untrusted clients."

    I completely agree. This has nothing to do with open source or closed source. It is just simply impossible to stop cheating in an environment with untrusted clients. If you force clients to be trusted somehow (which won't work anyway) by only releasing "blessed" clients, then you have just lost the benefit of open source, and made a humongous pain in the ass for /decent/, non-cheating players.

    All sorts of solutions and fervent discussion are flying around about how to make it secure. It always resolves down to security through obscurity. In the end, any "security" system in place will just make it harder on decent players. Because of the simple fact that any system that trusts the client is unsafe, it will never be an absolutely safe game (unless /everything/ but inputs is pushed to the server, at which point the game becomes secure (except for the behavioral cheats), but entirely unplayable). For the game to be completely cheat-proof the whole architecture has to change, and I don't think there is one that could live up to and support the fast and furious online play. - the Java Mozilla
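    The "everything but inputs on the server" architecture described above looks like this in miniature (a toy sketch, not any real engine's code): the client may only submit input events, so tampering with state is impossible, but every keypress now costs a network round trip, which is what makes it unplayable for a fast game:

```python
# Toy server-authoritative model: clients send raw input events only, and
# the server owns all game state, so a client cannot submit fabricated
# state like "health = 999". Secure against state tampering -- but every
# input now waits on the network, which kills fast-action playability.

class AuthoritativeServer:
    def __init__(self):
        self.state = {"x": 0, "health": 100}

    def handle_input(self, event):
        # Only recognised inputs are accepted; anything else is dropped.
        if event == "move_left":
            self.state["x"] -= 1
        elif event == "move_right":
            self.state["x"] += 1

server = AuthoritativeServer()
for ev in ["move_right", "move_right", "health=999", "move_left"]:
    server.handle_input(ev)
assert server.state == {"x": 1, "health": 100}  # tamper attempt had no effect
```

    Note this still permits the behavioral cheats discussed elsewhere in the thread: a bot that submits superhumanly accurate inputs is indistinguishable from a great player.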
  • "What you're concerned about is actions... Bots can shoot better, and dodge better (theoretically) than humans. How does the server tell it that perfect spin while holding the lightning on a guy who ran by was Thresh, or a bot?"

    For the sake of clear terminology, I call these valid (within the rules) cheats "behavioral cheats". That is, it is a cheat of performing allowed, but unrealistic, behaviors within the rules. - the Java Mozilla
  • "The solution is (and always has been) to assume the server is trusted and the client is not. The majority of server's out there that anyone is willing to play on will be trustworthy anyway."

    Yes, I totally agree. Because of the specific constraints of a multiplayer gaming environment, it is simply impractical to create a security model in which the client can be untrusted. People shouldn't be wasting time trying to make sketchy stop-gap solutions to the underlying (and necessarily flawed) security model. - the Java Mozilla
  • Sure they can. It's just more complicated.

    Got me there. I meant "Can't [easily] ..." but was lazy.

    As you yourself already said, [include link...]

    Hey, no fair actually reading what I write to make me stay consistent... :)

    I mean, bots can't plug in a simple mathematical formula and cope with it. Herding a player involves knowing where you don't want that player to be, and knowing the area well enough to know where you do want them to be, and the choke points involved in cutting them off.

    Theoretically a bot could figure this out on the fly, but more likely, for a few years at any rate, it'll simply launch rockets at the 'protected' item and dare you to get too close, instead of defending it in a clever and dynamic way.

    So, I stand by the "bots can't" part of my post in all but the most theoretical ways...
  • However each script would try to authenticate the environment using a different method.

    It still doesn't matter. I just load an unhacked version up as a captive task, and feed it whatever I get from the server. It replies, and I feed that back to the server. It is NOT simple, but it CAN perfectly model what an unhacked game looks like, and it CAN make the script see only the model. At that point, even if the script checksums the entire process image, and all of the files, it will be fooled. Remember, the script runs in a hacked VM.

    People should be forced to read the QW-Protocol specification before they contribute to this forum! The situation is: the server actually DOES all the damage/movement calculations! All the client does is input and rendering (in QW, with prediction).

    Uhm, We're talking about Quake 3 here...

    And if this is so in Quake 3, how are hacked clients able to increase damage?

    -- iCEBaLM

"Pok pok pok, P'kok!" -- Superchicken