Behind the Scenes At Sony's NOC
VonGuard writes "Earlier this year, I spoke to Mark Rizzo, the man who manages the people who run Sony's online game servers. Rizzo learned the ropes of MMO hosting back on Ultima Online, and we chatted about where the tough problems were then versus now. Rizzo compares the operation to a 24/7 scientific simulation, albeit with some sassier and more involved end-users. His favorite innovation since those early days? Rapidly provisioning and deploying Linux installations tailor-made to their purposes. Here's my article on Rizzo and his band of 50-some-odd sysadmin-cum-dungeon-masters, written for the new newspaper The Systems Management News."
Summary: We have scripts (Score:5, Funny)
Fascinating!
Re: (Score:3, Funny)
Re: (Score:2, Funny)
And Remedy :P (Score:5, Interesting)
That said, if they also claim to be architects, IMHO they do a poor job at that too.
E.g., at one point, after much lurking, and after I already had a big list of veteran awards in SWG, I wanted to post a suggestion. I didn't have a forum handle yet (hadn't needed one before), but fine, I'll just go to account management and create one. Turns out I'm sandboxed for the next two weeks in a newbie forum no one else needs, because apparently the forum can't read from the database whether I'm on a trial account or a regular one. But it can read whether I have an active account, or whether I just deactivated it. (Sony's games were in the habit of asking why you quit. Post-NGE SWG was the only one which basically told me "go away, we don't take input from people with inactive accounts" after I filled out that form.) But it can't read whether I'm on a trial account or not.
Well, it sounds to me like those architects of the server room don't do a particularly great job, then. Whatever interface they use to that customer database (SOAP, XML-RPC, plain SQL, whatever) should be trivial to extend to fetch that one extra piece of information. If month after month no one can figure out how to do that, it doesn't come across as a particularly competent architecture.
That, or they have no qualms with lying to the customers.
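For the record, here's roughly how small the change I'm talking about would be. A minimal sketch assuming a plain SQL backend; every table and column name here is invented for illustration, not Sony's actual schema:

    import sqlite3

    # Invented schema, purely to show the size of the change.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE accounts (
        account_id INTEGER PRIMARY KEY,
        active     INTEGER,  -- the forum can already read this flag...
        is_trial   INTEGER   -- ...so why not this one?
    )""")
    db.execute("INSERT INTO accounts VALUES (1, 1, 0)")

    def forum_account_status(account_id):
        # The existing lookup presumably resembles:
        #   SELECT active FROM accounts WHERE account_id = ?
        # Answering the trial question is one extra column:
        row = db.execute(
            "SELECT active, is_trial FROM accounts WHERE account_id = ?",
            (account_id,),
        ).fetchone()
        return {"active": bool(row[0]), "trial": bool(row[1])}

    print(forum_account_status(1))  # {'active': True, 'trial': False}

And if the real interface is SOAP or XML-RPC rather than SQL, the point stands: it's one extra field in a lookup that already exists.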
Additionally, I find this kinda funny; while the excuse was pioneered by UO, it later became a typically _Sony_ one: "While today most of the problems faced by Rizzo's team are technical or development related, back in the Ultima Online days, these were compounded by the unpredictable player base. In its day, no one had ever seen the psychological and sociological reactions of players in a massive online world before."
Erm, no. The vast majority of problems UO had were already known (and some even solved) by MUDs before it. There was no excuse to repeat the same mistakes verbatim, or to try the same things that were known not to work.
E.g., player justice was known not to work, as there's nothing you can do to a griefer's disposable character that its owner would care about. Plus, mobilizing whole posses to hunt down a griefer is basically just feeding the troll: he got some attention out of tens of people. Tens or hundreds of MUDs had tried that before, since it was the holy grail of running a MUD without the non-fun headache of policing it, and it just didn't work without heavy admin support behind it. UO's recipe was known to fail, every time.
What really happened with UO was Lord British having his head so far up his own arse that he couldn't see there was a world outside. He didn't so much discover those issues as thoroughly ignore everything that had been discovered by anyone else. And then he repeated the same thing with Tabula Rasa.
And as for Sony, since a lot of people there seem very fond of the same excuse: you have even less right to use it, guys. SWG was a _third_ generation MMO, and EQ2 is even later. There wasn't really an excuse even for UO to ignore the lessons of the MUDs before it; ignoring the couple dozen MMOs before you is even less excusable.
And finally: how were those social issues relevant, in any shape or form, to the IT guys running the servers? I mean, seriously, they were (A) poor game design decisions, (B) extra work for the coders who had to keep implementing fixes (which created more problems and the need for the next fix), and (C) a neverending headache for the GMs who had to sort out the thousands of support requests resulting from that fuck-up. Daily. But for the guys monitoring the servers and doing backups? Exactly how does it affect them whether the MMO is a friendly place or a newbie-hostile gank-fest run amok?
Re:And Remedy :P (Score:5, Insightful)
Re: (Score:2)
sysadmin-cum-dungeon-masters (Score:5, Funny)
Anyone else have images of S&M running through their minds?
Re:sysadmin-cum-dungeon-masters (Score:5, Funny)
kill $((1000+$(od -An -N2 -i /dev/urandom)))
Oh, you like that don't you!? Want me to do it again? First, I'm going to show you what a real glob is..
Re: (Score:3, Funny)
So anyone working on Sony's MMO division is a cum-dungeon bitch.. err.. minion now?
Re: (Score:1, Funny)
Arr Eye Zee Zee Oh (Score:1, Funny)
Re: (Score:1, Funny)
What change management? (Score:5, Interesting)
Most change control systems are an odd mix of a business model built around selling proprietary clients, strange choices of backend databases, and a focus on managing sales contact information, hardware inventory, software updates, filling out lots of forms to track the minutes spent doing the work, etc., etc. The choice of change control system affects the workflow quite a lot, so I'm quite curious what they use. Does anyone here on Slashdot know?
Re: (Score:2, Informative)
Promotion Strategy. (Score:5, Funny)
Taking a few management tips from in-game, perhaps?
Would love to hear more from these teams (Score:5, Interesting)
For one, the server hardware has to be pretty powerful. Because it's doing a lot of high-demand database work, everything from the lower layers of the hard disks to the file system to the software itself has to be fast and reliable.
For two, there is an increased demand for data reliability. If you manage an e-mail server and for some reason a flaw in the e-mail server doesn't pass e-mail on properly, you may be able to fix it and tell users to simply resend whatever e-mail they were sending and that's that. If a flaw comes up in the online game world that requires users to possibly "redo" something they did in the game, you will immediately lose a vast majority of your playerbase as they will see the game as unreliable.
On top of that, the servers are in very high demand 24/7. Even when outages are scheduled maintenance, people still complain. In a normal business IT scenario, you can generally reboot a few servers here or there during off hours and nobody will notice. You've got change control windows that can occur two hours before anyone gets to work and has to use the system, and nobody will care one way or the other as long as everything's fine when they get into the office.
The databases are vast and under constant read/write load; the data changes continuously as players move about the world and interact: exchanging items and gold, leveling up, learning new abilities.
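To make the integrity point concrete: every one of those interactions has to commit or roll back atomically, or players end up with duplicated or vanished loot. A toy sketch using SQLite as a stand-in for whatever Sony actually runs (schema and amounts are invented):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE characters (name TEXT PRIMARY KEY, gold INTEGER)")
    db.executemany("INSERT INTO characters VALUES (?, ?)",
                   [("buyer", 100), ("seller", 0)])

    def trade_gold(buyer, seller, price):
        # Both updates must succeed or fail together: a crash between them
        # would duplicate or destroy gold, and players notice that fast.
        with db:  # begins a transaction; commits on success, rolls back on error
            db.execute("UPDATE characters SET gold = gold - ? WHERE name = ?",
                       (price, buyer))
            db.execute("UPDATE characters SET gold = gold + ? WHERE name = ?",
                       (price, seller))

    trade_gold("buyer", "seller", 30)
    print(db.execute("SELECT * FROM characters ORDER BY name").fetchall())
    # [('buyer', 70), ('seller', 30)]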
Clustering and load balancing become very real problems for game servers. This is extremely apparent when you look at Blizzard, which runs over 200 separate, completely independent realms worldwide.
We won't even get into the ways the game world can't be dynamic and involving because of current technical limitations, which results in very limited forms of gameplay.
And again, you cannot forget the customer base. You know, if Joe cannot access e-mail for an hour because something is up with his e-mail account on the server, in most situations that's perfectly fine: he has something else he can do, and you won't necessarily lose money on productivity. If Joe cannot access his online gaming character, you have the potential to lose a sale and a customer.
Very high demand indeed.
Re: (Score:2)
Not really. I read the article (yeah... I must be new), and it looked like the work done by every other NOC in the world.
Sure, there are a lot of servers to manage, but if you've got everything automated anyway, it doesn't really matter how many thousands of servers you have. If one goes down, reimage it and get on with life. Maybe they have to go and change a blown hard disk now and again.
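For the sake of argument, a minimal sketch of what "reimage it and get on with life" can look like, assuming IPMI-manageable boxes and a PXE/kickstart install server already in place (the hostname and credentials are placeholders):

    import subprocess

    def reimage(host, user="admin", password="changeme"):
        """Point the machine's BMC at PXE for the next boot, then power-cycle it.
        The PXE server is assumed to hand out a fully automated install."""
        base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
        subprocess.run(base + ["chassis", "bootdev", "pxe"], check=True)
        subprocess.run(base + ["chassis", "power", "cycle"], check=True)

    reimage("game-server-042.example.com")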
Re: (Score:1, Informative)
As I said, my point was that the environment is very high demand. From a systems architecture point of view, it's not quite like every other datacenter environment, because of the nature of the data.
Just from a systems architecture point of view, and pardon if I'm not as well versed in database architecture as some others -- but in the community version of MySQL there are multiple ways to do backups and to track the database for recovery if needed. One of which is how to track the database da
Re:Would love to hear more from these teams (Score:4, Interesting)
You replay the binary logs of any transactions that were run since the last backup.
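Roughly, for MySQL, something like this - a sketch with placeholder timestamps and log paths; you restore the last full dump first, then pipe the relevant slice of the binary log back through the client:

    import subprocess

    # After restoring the last full dump (mysql < backup.sql), replay
    # everything the binary log recorded between the dump and the failure.
    replay = subprocess.Popen(
        ["mysqlbinlog",
         "--start-datetime=2008-06-01 03:00:00",  # when the backup was taken
         "--stop-datetime=2008-06-01 11:42:00",   # just before the failure
         "/var/lib/mysql/mysql-bin.000042"],      # placeholder binlog file
        stdout=subprocess.PIPE,
    )
    subprocess.run(["mysql", "-u", "root", "-p"], stdin=replay.stdout, check=True)
    replay.stdout.close()
    replay.wait()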
I'm not saying it's not a big problem because it's a game - I play lots myself, and understand the frustration when things break. I'm saying it's not a big problem because whether you're tracking forum posts, medical records, or game players, when it gets to the database and hardware level, it's all the same thing.
These are solved problems. The headline may as well be "sysadmins administer systems for Sony". The only reason this is getting any coverage is because they mentioned MMOs at some point.
Re: (Score:3, Insightful)
The problems mentioned above about transactional integrity, backup/restore, availability, clustering, "five nines" uptime have all been largely addressed at places like Amazon, Bank of America, and so on.
Re: (Score:1)
It was just a post to put things into perspective for readers who perhaps take video game server environments for granted because it's a video game.
Re: (Score:1)
You can't simply add "another box" to the game environment for a single instance of the game server, since you run into issues where users interact with each other and movement data has to be processed and sent between server and client.
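A deliberately naive sketch of why - not anyone's actual netcode - the world state is shared, so every movement update has to fan out to everyone else in the same instance, and a second independent box simply wouldn't see it:

    # One authoritative process owns the world state.
    world = {}  # player -> (x, y); the shared state that ties players together

    def handle_move(player, x, y):
        world[player] = (x, y)
        # Fan the update out to everyone else in the same world instance.
        return [(other, (player, x, y)) for other in world if other != player]

    handle_move("alice", 1, 2)
    print(handle_move("bob", 3, 4))  # alice must be told where bob moved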
Re: (Score:2)
..."five nines" uptime ...
Snake Oil I say! In every implementation I've had the pleasure to be a part of, when the subject of '5-9s uptime' comes up it is quickly shown that ensuring such an outrageous performance level makes the cost of the service exceed the revenue generated.
'Five-Nines' is 99.999% uptime - which equates to approximately 5.26 minutes of downtime per year, or about 6 seconds per week.
In my experience measurements are taken for the overall system -- so assuming you have a customer accessing system of 1000 or more
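The arithmetic, for anyone who wants to check those figures:

    # Allowed downtime at 99.999% availability.
    downtime_fraction = 1 - 0.99999          # 1e-5

    minutes_per_year = 365 * 24 * 60         # 525,600
    seconds_per_week = 7 * 24 * 60 * 60      # 604,800

    print(minutes_per_year * downtime_fraction)  # ~5.26 minutes per year
    print(seconds_per_week * downtime_fraction)  # ~6.05 seconds per week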
Re:Would love to hear more from these teams (Score:5, Funny)
The secret to achieving five nines uptime is not to improve the reliability of the systems, but instead to be very careful about how you define "uptime".
"Hey, about those two hours of downtime last night..."
"There wasn't any downtime."
"No, really, the phones were lit up with people complaining that the applications weren't answering properly..."
"So the applications were answering queries? Then they were up. It's not downtime."
"But they were answering queries with error messages."
"Then that's an application problem. The system was still up."
"But the error messages said 'No response from database'. The database servers were down."
"No they weren't. They were still running. They still had power. The servers were up. It's not as if they fell down out of the racks. You can't call it downtime just because a few programs aren't behaving exactly the way you want."
"So about this SLA..."
"Five nines, baby. We've still got five nines."
Re: (Score:2, Insightful)
Having spent much of my adult life as a NOC monkey, I can assure you heads would roll at the ISPs I've worked at if we had anywhere near the number and length of outages experienced in the gaming world.
I don't see how this is more "involved" as far as the end user is concerned. What's going to happen on an MMORPG? People will post in forums and not ever see a response. That'
Re: (Score:3, Insightful)
And the obvious difference is that with an ISP you don't have dozens or hundreds of people trying new ways to game the system. With fail over, live backup servers and cron jobs aplenty, you just swap out/swap in and you are good to go. With MMORPGs, someone hacks the system and you have to shut it down deliberately, pour yourself a double-shot and let out a loud WTF. Then study the hack, if you can, then engineer a work-around, then test it, then deploy it. Then bring the system back up. Yeah, these are very comparable systems alright.
Re: (Score:1)
And the obvious difference is that with an ISP you don't have dozens or hundreds of people trying new ways to game the system. With fail over, live backup servers and cron jobs aplenty, you just swap out/swap in and you are good to go. With MMORPGs, someone hacks the system and you have to shut it down deliberately, pour yourself a double-shot and let out a loud WTF. Then study the hack, if you can, then engineer a work-around, then test it, then deploy it. Then bring the system back up. Yeah, these are very comparable systems alright.
You apparently don't know what you are talking about when it comes to an ISP having totally redundant hardware that just requires a flip of a switch when something happens. ISPs are rather big, easy targets for someone with some skills, primarily because they are just like any other organization with a large network. Upper-level management decides they want more openness to streamline things because one director fell for an engineer's gripe about how they should have access to port whatever fro
Re: (Score:1)
Funny mis-read! (Score:1, Funny)
Yeah, I have a scary mind. Boo!
Linux? (Score:1, Informative)
Re: (Score:3, Insightful)
...sigh... (Score:1, Offtopic)
Let me tell you a story about UO (Score:2)