Games Hardware

A Look At One of Blizzard's Retired World of Warcraft Servers (116 comments)

MojoKid writes "At last count, Activision Blizzard pegged the number of World of Warcraft subscribers at 10.2 million. It takes a massive amount of gear to host all the different game worlds, or realms, as they're referred to. Each realm is hosted on its own server, and in late 2011 Activision Blizzard began auctioning off retired server blades from the days of yore to benefit St. Jude Children's Research Hospital. They sold around 2,000 retired Hewlett-Packard p-Class server blades on eBay and donated 100 percent of the proceeds (minus auction expenses) to the hospital, which seeks to advance the treatment and prevention of catastrophic diseases in children. This article takes a look at one of those retired server blades."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • For the...! (Score:5, Funny)

    by chuckfirment ( 197857 ) on Monday March 19, 2012 @03:38PM (#39406489)

    For the Horde, I mean, FOR THE CHILDREN!

  • by vlm ( 69642 ) on Monday March 19, 2012 @03:42PM (#39406551)

    With shipping, which was almost as much as the server itself, I paid $243.50 for this showpiece.

    Hmmm, $100 or so to ship? Someone's padding that expense line. I would not flinch at $25 to $50, but this smells of those eBay auctions where it's $0.01 for the product and $50 to ship.
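For reference, the split implied by the numbers in this subthread works out roughly as sketched below; the ~$143 winning-bid figure comes from a reply further down, so treat both values as approximate.

```python
# Rough split implied by the numbers in this subthread (both approximate).
total_paid = 243.50    # price quoted in the parent comment, shipping included
winning_bid = 143.00   # approximate hammer price mentioned in a reply below

shipping = total_paid - winning_bid
print(f"implied shipping cost: ${shipping:.2f}")   # roughly $100
```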

    • Re:shipping cost? (Score:5, Informative)

      by Anonymous Coward on Monday March 19, 2012 @03:55PM (#39406707)

      We ship servers constantly (data center); the average cost of actual shipping is around $125 within Canada/US for a 1U system. They are heavy, require special packaging (2" of solid foam surrounding the system to meet insurance requirements), and are usually double boxed.

      • by Dahamma ( 304068 )

        It's not a 1U, it's a much smaller blade that only weighs about 18 lbs. And there isn't much insurance cost, since it's non-functional, encased in lucite, and apparently only worth about $143 based on the auction prices. More like shipping a $143 piece of artwork to hang on the wall...

        • by rhook ( 943951 )

          It is not encased in anything, they simply put a clear cover on it.

        • Others have pointed out that the $100 shipping is also from France to the US. I can't even ship 18 pounds of food from the US to Europe for under $100, so considering this is a breakable and insured server, that sounds like a great price to me.

    • I looked at the auctions when they originally occurred

      they were shipped INDIVIDUALLY from France, so the price ain't that bad

      Now, WHY TF they weren't shipped to one point in the US (say Blizzard headquarters) and then individually shipped to buyers-- escapes me.

  • Blades (Score:1, Offtopic)

    by sexconker ( 1179573 )

    Are we still using blades? They save physical space, but they add complexity, cost, and points of failure, and the heat they generate is the same (or worse) per unit of performance, all concentrated in a tiny box with higher cooling demands. New Xeons and Opterons have buttloads of CPU cores, and then you just virtualize shit. Why mess with blades?

    • by alen ( 225700 )

      VMware still has issues with all the I/O going through the hypervisor; the blades have local storage for the OS.

      There are still applications, Cognos among others, whose vendors say that if you run them in VMware you should use a separate physical server, due to the I/O demands and the fact that you have to code specifically around VMware's oversubscription feature.

      • by GNious ( 953874 )

        VMware still has issues with all the I/O going through the hypervisor; the blades have local storage for the OS.

        HP absolutely promised us and our customer that there are no I/O issues using their VMware server solution compared to bare metal... not that I believed them.

      • by drsmithy ( 35869 )

        VMware could handle hundreds of thousands of IOPS into a single host years ago. Pretty sure it's over a million now.

        Outside of exceptionally unusual corner cases, if your storage system can handle it, VMware is not going to be a bottleneck.

    • Re:Blades (Score:5, Insightful)

      by jandrese ( 485 ) <kensama@vt.edu> on Monday March 19, 2012 @03:46PM (#39406605) Homepage Journal
      Um, where do you think those Xeons and Opterons are installed? In individual towers? 1U servers are basically the same as blades, except you have a lot more small redundant parts (power supplies, fans, etc.).

      Plus, you are griping about hardware that has been retired.
      • Re:Blades (Score:5, Insightful)

        by fuzzyfuzzyfungus ( 1223518 ) on Monday March 19, 2012 @04:05PM (#39406805) Journal
        The main saving grace of the humble 1U is that it doesn't have a vendor who has you by the balls for the next 14-ish systems you buy, along with a variety of option cards and things. Your basic rack doesn't provide much in the way of amenities, leading to lots of messy duplication of 40mm jet-fans and PSUs, plus a cable mess; but it just doesn't have the lock-in of a physically and logically proprietary cardcage...

        So far, the blade guys have had a hard time resisting the urge to pocket as much of that extra efficiency value as they can; the commodity 1U knife-fight is wasteful, but it makes it rather harder for your vendor to achieve market power over you.
        • by drsmithy ( 35869 )

          Yet blades from major vendors are cheaper than their 1U servers. Your theory doesn't carry through to real life.

      • Um, where do you think those Xeons and Opterons are installed? In individual towers? 1U servers are basically the same as blades, except you have a lot more small redundant parts (power supplies, fans, etc.).

        Plus, you are griping about hardware that has been retired.

        1U servers are basically not the same as blades, lol.

    • My sense is that 'blade' as in "Wow, look how many basically-just-a-1U-with-the-economics-of-a-laptop we can cram into a proprietary cage that costs $15k empty!" isn't as trendy as it used to be; but that some of the cuter setups that offer integrated switching, dynamic allocation of a pool of disks to individual blades, and other functions that help save on switches, cabling, SAN architecture, and so on were still in a slightly tense state: On the one hand, they had the potential to be more cost effective
    • Re:Blades (Score:5, Funny)

      by Anonymous Coward on Monday March 19, 2012 @04:02PM (#39406769)

      Are we still using blades?

      Hipster IT admins, ASSEMBLE!

      "Blades are so mainstream. People in the know use a CPU with buttloads of cores. I'd tell you what brand, but it wouldn't matter, you've never heard of it."

      "Sure, 10-speeds save physical space, but they add complexity, cost, points of failure, and the heat they generate is the same. That's why I prefer fixies."

      "Why mess with blades? You can't even put a bird on them. [youtube.com]"

      • "Blades are so mainstream. People in the know use a CPU with buttloads of cores. I'd tell you what brand, but it wouldn't matter, you've never heard of it."

        You sir are my hero.

    • Re:Blades (Score:5, Informative)

      by billcopc ( 196330 ) <vrillco@yahoo.com> on Monday March 19, 2012 @04:23PM (#39406957) Homepage

      Blades are all about density. If I can squeeze 10 dual-host blades in a 7U enclosure, that's 13U saved vs 20 1U servers. Add the fact that many modern blade enclosures integrate modular switches, and you can squeeze 120 hosts per rack, instead of just 38-40. The hardware cost difference is negligible, since you're buying one set of redundant power supplies to power all 10 blades. The enclosure itself is costly, but the blades aren't much pricier than a comparable server board.

      If you're deploying lots of them like Blizzard, choosing blades means you only need 1/3rd of the floor space, 1/3rd the shipping cost, 1/3rd the installation labour, which represents a huge chunk of change when you're colocating at top-tier datacenters all around the world.

      Blades may not make sense for everyone, but don't write them off just because your needs are satisfied by simpler solutions. Virtualization is a great tool that offers tremendous flexibility and reduced costs, but it is not a magic bullet to solve every problem. It excels at handling small jobs, and fails hard with large ones. For example, virtualization struggles with I/O heavy workloads, which are becoming increasingly important with the meteoric rise of data warehousing and distributed computing. Processors are the easiest part of the equation.
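For anyone who wants to check the density figures above, the arithmetic is spelled out below; the 42U rack size and the two rack units reserved for switching are assumptions for the example, not numbers given in the comment.

```python
# Density arithmetic from the comment above. The 42U rack size and the 2U
# reserved for top-of-rack switching are assumptions, not the poster's figures.
RACK_U = 42
ENCLOSURE_U = 7
BLADES_PER_ENCLOSURE = 10
HOSTS_PER_BLADE = 2   # "dual-host" blades

enclosures_per_rack = RACK_U // ENCLOSURE_U                                  # 6
blade_hosts = enclosures_per_rack * BLADES_PER_ENCLOSURE * HOSTS_PER_BLADE   # 120

rackmount_hosts = RACK_U - 2   # discrete 1U servers, minus space for switches

print(blade_hosts, "blade hosts vs.", rackmount_hosts, "1U hosts per rack")
```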

      • by Anonymous Coward

        For example, virtualization struggles with I/O heavy workloads, which are becoming increasingly important with the meteoric rise of data warehousing and distributed computing.

        And funnily enough, MMO games. When you've got tens of thousands of players buying/selling, looting, swapping items, leveling up, killing stuff, and updating statistics on just about everything they do, it's no home fileserver anymore. I/O was the prime issue that made the EVE Online in-game Marketplace rather unresponsive at times.

      • by drsmithy ( 35869 )

        Blades are all about density.

        Mostly only in the advertising blurb.

        In real life, it's nearly impossible to find anywhere that you can achieve a higher density with blades than you could with 1U rackmounts (even when you're only talking about using fractions of a rack).

        There are plenty of other good reasons to buy blades, but density is rarely one of them. With that said, Blizzard are probably one of the few companies who could put in enough custom infrastructure to handle the power and cooling requirements.

      • Blades are all about density. If I can squeeze 10 dual-host blades in a 7U enclosure, that's 13U saved vs 20 1U servers.

        Not sure what you mean by dual host, but you can get quad-socket 1U servers at a very reasonable price. I've purchased a couple. The majority of space inside the box is taken up by either RAM slots or the CPUs themselves. There's also some space for disks and fans, auxiliary motherboard stuff (networking, I/O, etc.) and a power supply. There was really very little space to spare. I'd be ve

        • "Dual host" refers to two separate boards stashed in the same enclosure (or on the same bracket). Some OEMs call these "twin boards". Take your quad-socket server, slice it down the middle, and that's basically what you get.

          It all depends on your application. If you need the largest single host you can get, then sure the quad-socket solution is the way to go. If you're better served by multiple smaller hosts, the dual blades tend to reduce costs and maintenance since they share one power supply stack and a

    • Re:Blades (Score:4, Informative)

      by AK Marc ( 707885 ) on Monday March 19, 2012 @04:25PM (#39406967)
      Because you can treat a single blade chassis as a single computer, despite the fact that it has 10+ computers in it. So, rather than separate boxes tied into a SAN, you have a single "computer" with directly attached drives (SCSI drive farm) for better performance. Then you cluster piles of those with a shared SAN for what must be shared across them. Better performance than separate machines.

      Oh, and blade servers have better reliability, even if you think they have more points of failure. And, depending on your setup, space is a cost consideration, and compactness will save money.
    • No, they are not still using blade servers; hence they are selling them off, basically as historical pieces of art.
    • by drsmithy ( 35869 )

      Why mess with blades?

      Because they're easier to manage, reduce complexity, require less infrastructure, are cheaper (once you've scaled past the break-even point, which depending on vendor can be up to an entire chassis full), have fewer points of failure, require less power and generate less heat per $METRIC and are exceptionally good for virtualisation.

      Or to put it another way, pretty much everything you wrote up there is wrong. Outside of very specialist scenarios where you have the facilities in place t

  • So? (Score:4, Informative)

    by X0563511 ( 793323 ) on Monday March 19, 2012 @03:45PM (#39406593) Homepage Journal

    It looks like any other blade, once you ignore the marketing decals put on it.

    • The point, though, is less about the (obsolete) hardware and more about the opportunity to own a 'piece of gaming history'.

      You can look back at it in your golden years and tell your grandchildren "I played on that server," and they can look back at you blankly and ask, "Wow... did they use *actual* servers in those days? Weren't there any clouds?"

      It's nostalgic and ephemeral, and not at all about the fact that it's basically some BL20p (or similar) which you could pull out of a dumpster behind most data cent

      • Your grandchildren will probably think clouds are quaint archaic tech too.

        • They'll eGiggle at each other over their psychic neural network.

          • by Anonymous Coward

            Until they get brainjacked and used by a Syndicate or two for nefarious purposes.

      • You can look back at it, in your golden years and tell your grandchildren "I played on that server" and they can look back at you blankly and ask each other if it's time to check grandpa's meds again.

        FTFY.

      • by f3rret ( 1776822 )

        ... ask, "Wow... did they use *actual* servers in those days? Weren't there any clouds?"

        Doesn't the cloud run on servers too? Just more of them, and more distributed.

    • Yeah? And a game winning baseball is just a baseball, and a famous player's jersey is just a piece of cheap clothing. It's just memorabilia.

    • by SeaFox ( 739806 )

      Maybe the point is Blizzard realized that unlike most data-center junk, this was something people might be willing to pay more than used hardware costs for and they could do something good with the money they raised from selling it.

      Did you complain about Penny Arcade's charity drives, too?

      • No, but I would have if someone had been all "wow look at this awesome stuff that had Penny Arcade data on it once!"

  • The slide show/article says the drives were removed beforehand to prevent customer info from being leaked.

    I'm wondering why these had hard drives with data on them at all. Wouldn't the data be on a SAN on the backend? Kind of defeats the purpose of a blade in the first place, seeing as you want to be able to replace it quickly if something goes wrong.

    In fact, if they are using the local drives, they had better be sure to remove the RAID controller, as these might have info left in the cache as they are b

    • The RAID card is missing the RAM.

      • Can't speak for all RAID cards, but the ones I've worked with have a certain amount of RAM soldered onto the card and a slot for additional RAM that's semi-optional (the last card I worked with required additional RAM to add another drive to the RAID 5, but was working fine with just the onboard RAM before that). So it may not be missing it so much as never having had it in the first place.

    • The article writer probably made assumptions.

      More likely, the hard drives had basic 'get connected, and this is what you do' kind of code - all of the actual data would have been on DB servers.

      Though, this could have been a DB node...

    • by alen ( 225700 )

      The security guy probably had a case of CYA and said to take out the drives and other parts, just in case.

    • by Sycraft-fu ( 314770 ) on Monday March 19, 2012 @04:17PM (#39406899)

      I get the feeling their backend design wasn't the best. For years they took their servers down every single week for a massive 6-8 hour maintenance period. This wasn't for updates; it was just routine. Patches took forEVER to happen. It clearly wasn't something like "Take things down, roll out new code, run checks, bring it online." Given that some things would only affect particular realms, it was pretty clear they were doing things like running series of scripts and commands to upgrade things, and the process had trouble in certain configurations and so on.

      So it wouldn't surprise me if they did things like store data on the blades themselves and so on. I can't say for sure, since Blizzard has been secretive to the point of paranoia about how things work on the back end, but my experience with the game leads me to believe they did not have a particularly good backend setup.

      • by Anonymous Coward on Monday March 19, 2012 @06:32PM (#39408135)

        Speaking AC here for obvious reasons.

        The reason Blizzard did this was simply to delete objects. That is pretty much it, as I/O was the bottleneck when you had a 20k-40k population on each server and every SQL check became precious. You cached commonly used things, and by the end of a week the RAM was filled up.

        WoW and SWTOR still do this scraping of objects every week.

        Occasionally new code is released too. That is quick to push over the network while the realm is down, once it is done cleaning its objects.
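To make that concrete, here is a minimal sketch of the kind of weekly object purge described above. The world_objects table, the expired flag, and the use of SQLite are hypothetical stand-ins; Blizzard has never published its actual schema or database stack.

```python
# Hypothetical weekly maintenance job: purge expired objects in batches so the
# delete does not monopolize I/O, then reclaim the freed space.
import sqlite3

def weekly_realm_maintenance(db_path: str, batch_size: int = 10_000) -> int:
    """Delete expired world objects in batches; return the number removed."""
    conn = sqlite3.connect(db_path)
    removed = 0
    while True:
        cur = conn.execute(
            "DELETE FROM world_objects WHERE rowid IN "
            "(SELECT rowid FROM world_objects WHERE expired = 1 LIMIT ?)",
            (batch_size,),
        )
        conn.commit()
        removed += cur.rowcount
        if cur.rowcount == 0:
            break
    conn.execute("VACUUM")  # reclaim space once the purge is finished
    conn.close()
    return removed

if __name__ == "__main__":
    # Tiny self-contained demo against a throwaway database file.
    conn = sqlite3.connect("realm_demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS world_objects "
                 "(id INTEGER, expired INTEGER)")
    conn.executemany("INSERT INTO world_objects VALUES (?, ?)",
                     [(i, i % 2) for i in range(100)])
    conn.commit()
    conn.close()
    print(weekly_realm_maintenance("realm_demo.db"), "objects purged")
```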

    • by godrik ( 1287354 )

      Local storage allows you to cache many things locally. You surely do not want to go through the network for every single freaking I/O.
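A toy illustration of that point: a read-through cache in front of a simulated remote store, so repeated reads are served from local memory instead of crossing the network every time. The key name and the 5 ms "network" delay are made up for the example.

```python
# Read-through cache sketch: the first access pays the (simulated) network
# cost; later accesses for the same key are served from local memory.
import time
from functools import lru_cache

def fetch_remote(key: str) -> str:
    """Stand-in for a network/SAN read with millisecond-scale latency."""
    time.sleep(0.005)  # pretend round trip
    return f"value-for-{key}"

@lru_cache(maxsize=65536)
def read(key: str) -> str:
    return fetch_remote(key)

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(1000):
        read("player:1234:inventory")  # only the first call hits the "network"
    print(f"1000 reads in {time.perf_counter() - start:.3f}s")
```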

    • by tlhIngan ( 30335 )

      The slide show/article says the drives were removed beforehand to prevent customer info from being leaked.

      I'm wondering why these had hard drives with data on them at all. Wouldn't the data be on a SAN on the backend? Kind of defeats the purpose of a blade in the first place, seeing as you want to be able to replace it quickly if something goes wrong.

      In fact, if they are using the local drives, they had better be sure to remove the RAID controller, as these might have info left in the cache as they are bat

  • by Anonymous Coward on Monday March 19, 2012 @04:05PM (#39406795)

    The article writer doesn't mention the specs of the blade, isn't interested in knowing if it works, and thinks it's ugly?! He has no interest in server tech or playing WoW. Why waste our time linking to this article?

    • by JackDW ( 904211 ) on Monday March 19, 2012 @04:28PM (#39406995) Homepage

      Yeah, disappointing.

      But then, without the disks, there is very little to say about how these machines were once configured and used within the data centres.

      I hope that one day somebody from Blizzard will write a book about the development and deployment of the game, similar to Masters of Doom, in which this sort of information will be revealed. I, for one, would find it very interesting. Sure, as outsiders, we can take educated guesses about how you might build Warcraft, or a clone of it, but how much more interesting to know how it was (is) actually done! One day, perhaps it will not be so important to keep this secret.

    • That's what I wanted to know. I was hoping he'd pop off a heat sink and tell us what it was running. It's curious that it has DDR memory, and 6 GB of it. On the Intel side of things, I don't believe there was ever a 64-bit processor that worked with a chipset that accepted DDR memory, as the first 64-bit processors were the late P4s (and the associated Xeons) and those worked with DDR2 and later DDR3. So does it sport an AMD processor? Or were they just using PAE to access that much memory?
  • Oh man, where was the news story when these were still for sale?! $200 for a blade server doesn't sound bad, but then you look at the work they did with the paneling and the plaque and this thing looks like a pretty sweet piece. Practically belongs in a museum! $200 seems like a steal.

    I want one :(
    • Yeah, I'm a little upset I missed it. I figured that, of all places, Slashdot would have had an article on it, but it looks like it didn't. Figures we can get 101 Bitcoin stories but miss this.

  • Before reselling them they had to clean the hard drives of all the lost hopes and dreams of previous players.

  • From TFA: "There may never be another game as popular as WoW, and even if there is, at the very least WoW will always be considered the first mega-successful MMORPG." I'm surprised that no one has challenged this yet. I think WoW became more popular at its high point, but I think EverQuest paved the way for it, and was as popular as it is now. EQ certainly eclipsed the stuff like Ultima Online and Baldur's Gate that preceded it.
    • by Anonymous Coward

      From TFA: "There may never be another game as popular as WoW, and even if there is, at the very least WoW will always be considered the first mega-successful MMORPG."

      I'm surprised that no one has challenged this yet. I think WoW became more popular at its high point, but I think EverQuest paved the way for it, and was as popular as it is now. EQ certainly eclipsed the stuff like Ultima Online and Baldur's Gate that preceded it.

      EverQuest peaked at around 500k subscribers and hit the 100k milestone of people being logged in at once. I was logged in the night it happened. :) That was the best they achieved. WoW obliterated that number on the first day of release with 2.9 million subscribers.

    • by gl4ss ( 559668 )

      Sure, but saying EverQuest was as popular as WoW is like saying that IRC was (is) as popular as Facebook. EverQuest had just a fraction of the impact WoW did, and WoW did it globally. Sure, they didn't invent the stuff; the number of people who played it was just immense. (However... and here's a big however... WoW kinda sucks, since it doesn't matter if there are 43242 million people on the planet playing it when you're limited in interactivity to only those playing on your realm, which always was just a tiny fra

  • Comment removed based on user account deletion
  • It just doesn't feel like it says that much and there are tons of pictures of basically the same thing. Talk about padding it out for ad revenue.
  • "Fun fact: As of 2010, the average number of hours spent playing WoW each week in America is 22.7"

    Over 3 hours per day is average? No wonder I get my arse kicked in PvP. ;)
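For what it's worth, the "over 3 hours per day" reading checks out; this is just the quoted statistic divided by seven, nothing new added.

```python
# The quoted statistic, divided out.
hours_per_week = 22.7
print(round(hours_per_week / 7, 2))  # about 3.24 hours per day on average
```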

  • Am I the only one who doesn't think of pictures when someone says they are having a look at the server? I mean, he literally just took pictures of the hardware?!
  • I'm interested in knowing which realm it was for.

    Was it a US or Oceanic server?

    I'm just curious.

  • A Beowulf cluster of these.
