
An Inside Look At Warhammer Online's Server Setup

An article at Gamasutra provides some details on the hardware Mythic uses to power Warhammer Online, courtesy of Chief Technical Officer Matt Shaw and Online Technical Director Andrew Mann. Quoting: "At any given time, approximately 2,000 servers are in operation, supporting the gameplay in WAR. Matt Shaw commented, 'What we call a server to the user, that main server is actually a cluster of a number of machines. Our Server Farm in Virginia, for example,' Mann said, 'has about 60 Dell Blade chassis running Warhammer Online — each hosting up to 16 servers. All in all, we have about 700 servers in operation at this location.' ... 'We use blade architecture heavily for Warhammer Online,' Mann noted. 'Almost every server that we deploy is a blade system. We don't use virtualization; our software is somewhat virtualized itself. We've always had the technology to run our game world across several pieces of hardware. It's application-layer clustering at a process level. Virtualization wouldn't gain us much because we already run very close to peak CPU usage on these systems.' ... The normalized server configuration — in use across all of the Mythic-managed facilities — features dual Quad-Core Intel Xeon processors running at 3 GHz with 8 GB of RAM."
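Mythic doesn't publish the internals, but a minimal sketch of what "application-layer clustering at a process level" could look like is a routing table that maps pieces of the game world to processes spread across blade hosts. Everything below (zone names, hostnames, ports) is invented for illustration, not Mythic's actual design:

    # Hypothetical sketch: one player-visible "server" backed by many zone
    # processes spread across blade hosts. All names here are invented.
    ZONE_PROCESSES = {
        # zone            -> (blade hostname, port) of the process hosting it
        "altdorf":           ("blade-01.example", 7001),
        "inevitable-city":   ("blade-01.example", 7002),
        "praag":             ("blade-02.example", 7001),
    }

    def route_player(zone: str) -> tuple[str, int]:
        """Return the process endpoint a client should use for a zone.

        The application, not a hypervisor, decides where each piece of the
        world runs; rebalancing is just updating this table and handing off
        state, which is why a virtualization layer would add little for
        processes already running near peak CPU.
        """
        return ZONE_PROCESSES[zone]

    if __name__ == "__main__":
        host, port = route_player("praag")
        print(f"connect to {host}:{port}")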
  • Re:Virtualization (Score:2, Interesting)

    by PhrstBrn ( 751463 ) on Thursday December 31, 2009 @04:32AM (#30603754)

    Sometimes virtualization makes sense for an app where you don't really know what the usage requirements are going to be. You know that at first your app isn't going to need a full machine to run, so you wrap it in a VM and throw it onto a shared server. But you suspect that in the future you may need to scale up to bigger and better hardware. You're just not sure.

    If the app is already contained in a VM, it's trivial to move it from Server A to a bigger Server B when you need more power. The process takes no longer than a reboot would: sync up the disk image as much as possible while the machine is running, power it off, finish syncing the disk image, then power the machine on on the new hardware. Doing this without virtualization would not be so trivial, and would likely force you to reinstall the app from scratch.
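    That flow is roughly scriptable. A minimal sketch, assuming libvirt/KVM guests with a plain file-backed disk image; the VM name, image path, and destination host here are hypothetical:

        # Hypothetical cold move of a VM between hosts, assuming libvirt/KVM
        # and a file-backed disk image. Names and paths are invented.
        import subprocess
        import time

        VM = "app-vm"
        IMAGE = "/var/lib/libvirt/images/app-vm.qcow2"
        DEST = "root@bigger-server-b"

        def run(*cmd, **kw):
            return subprocess.run(cmd, check=True, **kw)

        # 1. Pre-sync the disk image while the VM is still running (bulk
        #    copy; the image is inconsistent at this point, which is fine).
        run("rsync", "-a", IMAGE, f"{DEST}:{IMAGE}")

        # 2. Shut the VM down cleanly; 'virsh shutdown' is asynchronous,
        #    so poll until the domain has actually stopped.
        run("virsh", "shutdown", VM)
        while "running" in run("virsh", "domstate", VM,
                               capture_output=True, text=True).stdout:
            time.sleep(2)

        # 3. Final sync: rsync's delta transfer moves only what changed.
        run("rsync", "-a", IMAGE, f"{DEST}:{IMAGE}")

        # 4. Recreate the domain definition on the new host and boot it.
        xml = run("virsh", "dumpxml", VM,
                  capture_output=True, text=True).stdout
        subprocess.run(["ssh", DEST, "virsh", "define", "/dev/stdin"],
                       input=xml, text=True, check=True)
        run("ssh", DEST, "virsh", "start", VM)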

    If the time comes that an app outgrows its virtual server environment and needs dedicated hardware, it's again fairly trivial: copy the disk image onto a physical disk, boot off a rescue disk to repair any driver/hardware incompatibilities between the virtual machine and the real hardware, and then simply start up the new machine. Doing this any other way would mean reinstalling from scratch.
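    The image-to-disk step in that direction can be a single conversion run from the rescue environment. A sketch, assuming a qcow2 full-disk image; the target device is a placeholder and the write is destructive:

        # Hypothetical virtual-to-physical copy: write the VM's disk image
        # onto a real disk. /dev/sdX is a placeholder; double-check it.
        import subprocess

        IMAGE = "/mnt/backup/app-vm.qcow2"
        TARGET = "/dev/sdX"  # the physical disk the app will boot from

        # qemu-img can convert a qcow2 image straight onto a raw block device.
        subprocess.run(["qemu-img", "convert", "-O", "raw", IMAGE, TARGET],
                       check=True)
        # Afterwards, fix drivers/bootloader for the real hardware from the
        # rescue disk, as described above, then boot the new machine.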

    Of course, some things are trivial to move, so it might not make sense to virtualize them. But others can take a good number of man-hours to install, configure, and import data into. You probably only want to do that work once.

    Also, keeping everything separate means you don't have to worry about whether updating App A on a server, which pulls in a new version of dependency X, is going to break App B. If everything is isolated, no future dependency of one app can break a dependency of another, unrelated app.

    It's not the be-all and end-all of solutions, but it definitely makes sense in a lot of situations. Throwing more hardware at the problem is fairly cheap, and usually much cheaper than paying people to chase down interoperability issues between unrelated pieces of software.

  • Re:Virtualization (Score:2, Interesting)

    by Opportunist ( 166417 ) on Thursday December 31, 2009 @06:07AM (#30603944)

    I was thinking the same.

    Basically, enabling legacy applications to survive by giving them a slice of a real machine and running them that way is a great crutch, but no more than that. It would be more efficient to revamp the system and bring it up to contemporary code, but often that's not possible. I blame closed source and the companies that wrote it going out of business, but that's just me... I could ramble about shortsighted management decisions that put the life of a company on the line, dependent on the continued existence of another company, but... I won't.

    And as usual with great crutches, management (and often their techs, too) have turned it into the be-all, end-all solution for everything. It's a tool that solved a lot of problems, and suddenly it has become the tool for every problem, often problems that would not exist without the tool in the first place.

  • by Darinbob ( 1142669 ) on Thursday December 31, 2009 @06:22AM (#30603976)
    Actually, it is kind of interesting to see what's in the back room. I know in some MMOs, if a "server" is down, there's inevitably some wiseguy who says "they should buy a better machine" or "I'm an IT dude and I could run the place better than those bozos." Other times people are confused about why some regions of the game are working but others are not, why it takes so long to reboot the "server" to apply the next game patch, or why there's scheduled maintenance at all. I think a lot of players really believe the companies just go out and buy a single computer (probably a tower) for each game server. The scale of these operations is pretty large.

