Aspyr On Porting Games to the Mac 34

jvm writes "This in-depth interview with Aspyr's Glenda Adams over at Curmudgeon Gamer discusses in detail the issues of porting games to the Mac. Starting with Civilization on the Mac LC up through today's Tony Hawk Pro Skater 4, Glenda takes on PC vs. Mac system requirements, how games are selected for porting, patching Mac games, and some thoughts on the future." A notable quote from the interview: "The PC often lets you [code/architect] things in a sloppy manner with little penalty, but then when it gets on the Mac it drags the game down."
  • Blitz Max (Score:4, Informative)

    by Anonymous Coward on Friday March 05, 2004 @05:39AM (#8473689)
    Blitz Max is due out this year for Windows, OS X and Linux, all using OpenGL. The current Windows version will give an idea of what will be possible (it's fast, and Max will have OO features in addition):

    Blitz Basic [blitzbasic.com]

    • Doom 3 is also using OpenGL to get around to all three major platforms, as well as that XBox thing. It's unfortunate that when I took a game programming class as an elective the teacher solemnly intoned that "DirectX is the future and OpenGL is dead". IMHO, if The Carmack uses it it's very alive and well.
      • OpenGL will always be alive as long as there are platforms other than Windows out there. Granted, this guarantees an audience on maybe 1/20 of the computer market.

        The big reason game developers on Windows choose DirectX over OpenGL, from what I gather, is that it provides a framework for all sorts of stuff in addition to 3D graphics - sound, networking, etc. Hopefully, as projects like OpenAL mature, this won't be so much of an issue.
      • By now you probably know how easy DirectX makes things; it's pretty incredible, actually. OpenGL is not as widely supported and is implemented differently by Nvidia and ATI.

        Carmack using OpenGL doesn't really say anything; he could probably write a game in COBOL, but that doesn't mean it's a good idea for the rest of us. Carmack thinks that the final engine will be produced in the next few years and people will just license/develop based on it. That's why he's hammering away in OpenGL, because he'd like it to
      • IMHO, if The Carmack uses it it's very alive and well.

        But Carmack's games haven't advanced much beyond Doom and Quake. Good fun, yes, but he isn't delivering the must-have, genre-breaking titles like Half-Life and that makes OGL a tougher sell.

        • But Carmack's games haven't advanced much beyond Doom and Quake. Good fun, yes, but he isn't delivering the must-have, genre-breaking titles like Half-Life and that makes OGL a tougher sell.

          Gameplay/level design and 3D engine technology aren't that closely linked. In fact, I played Half-Life in OpenGL as the DirectX version seemed to trigger a bug in NVidia's drivers... and it didn't seem any different, apart from not crashing around 20 minutes into gameplay.
  • PC Architecture (Score:5, Interesting)

    by Detritus ( 11846 ) on Friday March 05, 2004 @05:45AM (#8473704) Homepage
    I'm curious what the problems mentioned in the article actually are: sloppy code that runs with a slight penalty on the PC but falls over on the Mac. My Mac has a reasonable number of MIPS (2 x 450 MHz G4), but most ports of PC games are barely playable.
    • Re:PC Architecture (Score:5, Informative)

      by netfunk ( 32040 ) <icculus@ic c u l us.org> on Friday March 05, 2004 @07:55AM (#8474080) Homepage
      Converting a number between floating point and integer on Macs is actually quite expensive compared to x86 systems. Converting between double and float has a severe penalty, too.

      Also, Macs tend to suffer a worse penalty for CPU cache misses.

      Then again, there's a handful of general purpose registers on an x86, and 64 of them on the PPC (96 if you count the Altivec registers), so it's assumed that x86 systems will optimize for these cases more than the PPC, whereas the PPC assumes you'll load what you need into registers, work on it, and go back to RAM only when absolutely forced to. This means you should define a bunch of local variables, and load as much as possible into them at the start of a function. On the PPC, this can be several orders of magnitude faster, whereas on the x86, it might be a little slower (since those locals won't translate into registers, you end up shuffling between the heap and stack and working out of RAM anyhow).
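      That load-everything-into-locals pattern might look like the sketch below. It's a hypothetical illustration, not code from the post: the struct and field names are invented, but the technique is the one described, hoisting values out of memory into locals so the compiler can keep them in the PPC's plentiful registers for the whole loop.

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { float scale; size_t count; } Params;  /* invented for the sketch */

/* Naive version: every iteration re-reads p->count and p->scale
   through the pointer, so the compiler may reload them from memory. */
void scale_naive(Vec3 *v, const Params *p) {
    for (size_t i = 0; i < p->count; i++) {
        v[i].x *= p->scale;
        v[i].y *= p->scale;
        v[i].z *= p->scale;
    }
}

/* PPC-friendly version: load once into locals at the top of the
   function, then work out of (register-resident) locals in the loop. */
void scale_hoisted(Vec3 *v, const Params *p) {
    const float s = p->scale;
    const size_t n = p->count;
    for (size_t i = 0; i < n; i++) {
        v[i].x *= s;
        v[i].y *= s;
        v[i].z *= s;
    }
}
```

      Both produce identical results; the difference is purely in how much memory traffic the compiler is forced to emit.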

      Pre-G5, there's no square root instruction on the CPU. You can fake it with the frsqrte opcode (which all Macs have, but which I believe is optional in the PPC spec), but this gives you extremely low precision (five bits...you can actually GUESS with more precision!). It's fast, though, and frequently "good enough" after two rounds of Newton-Raphson; we used this (with Newton-Raphson) in UT2003/UT2004 without any noticeable rendering artifacts. The G5 has a real, full-precision square root instruction, which spanks the cheap reciprocal method to boot, but it will crash your program (SIGILL) on a G4 or lower. If you just call the system C library's sqrt(), it'll do the right thing based on the running CPU, but to give you real precision it won't use the reciprocal opcode...so on a G5, sqrt() gives good performance, while on a G4 it'll eat up tons of CPU. The flyby intro on the original Unreal Tournament was spending about 17-20% of its CPU time in sqrt() until I swapped in the reciprocal version.

      While I'm talking about sqrt(), a lot of other things developers take for granted on their x86 floating point unit, like sin(), aren't implemented in hardware on the PPC.

      Division on the PPC causes a complete pipeline stall (use multiplication where possible). GCC doesn't appear to optimize this case behind the scenes at this point.
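      The use-multiplication-instead advice usually means hoisting a loop-invariant division into a single reciprocal. A minimal sketch (the function and names are invented for illustration):

```c
/* Dividing every element by the same divisor: do the slow division
   once, then multiply by the reciprocal n times.  The compiler cannot
   always make this substitution for you, because multiplying by a
   reciprocal is not bit-identical to dividing, so it has to be an
   explicit choice in the source. */
void divide_all(float *a, int n, float divisor) {
    const float inv = 1.0f / divisor;  /* one division */
    for (int i = 0; i < n; i++)
        a[i] *= inv;                   /* n multiplications */
}
```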

      On the G5, instructions are broken up into "dispatch groups"...usually five instructions, I think, but it varies due to a few factors. If you write to a memory address and read it back in the same dispatch group, it causes a pipeline stall. This is called an "LSU reject". Developer documentation says that in these cases you should either move the store and load to separate dispatch groups, or at least pad out the dispatch group with no-ops so the load will be in a different group. GCC doesn't necessarily optimize this for you at this point, but I'm not sure where Apple's GCC branch lies in relation to the mainline version (which can now handle this).

      LSU rejects are, however, a somewhat common optimization gotcha:

      // An untested example:
      static int myvar1; // force out of register.
      myvar1 = somefunc(); // store.
      int myvar2 = myvar1 + 10; // read.

      The solution is to move the addition down a few lines of code so that the compiler doesn't put it in the same dispatch group...padding out with no-ops isn't really practical, and it's something the compiler should be doing anyhow. More to the point, your x86 developer isn't going to think something as harmless as having those two lines next to each other would be an optimization issue. Why should he?
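      One way to express that fix, reusing the post's hypothetical names (somefunc here is just a stand-in so the sketch compiles): keep the freshly computed value in a local, which can live in a register, and do the addition from the local rather than reading back the memory location that was just stored to.

```c
static int myvar1;   /* file-scope static: forced out of registers, as in the post */

static int somefunc(void) { return 32; }   /* stand-in for the post's somefunc() */

int no_lsu_reject(void) {
    int tmp = somefunc();
    myvar1 = tmp;           /* store to memory */
    int myvar2 = tmp + 10;  /* add from the register copy, not the just-stored address */
    return myvar2;
}
```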

      Optimizations "truths" of the x86 aren't necessarily true on the PowerPC, either:

      1) Loop unrolling is generally believed to be a "good optimization" on the x86, but it thrashes your instruction cache on the PPC. Actually, this is probably true on modern x86 chips nowadays, too, but on the PPC, Cache Is King.

      2) Lookup tables are generally believe
      • Re:PC Architecture (Score:5, Interesting)

        by Tomah4wk ( 553503 ) <tb100@[ ].ic.ac.uk ['doc' in gap]> on Friday March 05, 2004 @08:15AM (#8474170) Homepage
        Loop unrolling, in most cases I have played with it on the P4, also slows programs down (albeit not hugely), and I had also attributed this to the instruction cache. In fact, the only time I have seen it be beneficial is with a really tight (i.e. one line of code) loop where we knew how many times the loop would execute almost all of the time, and used a pragma to advise the compiler (Intel C++) of this.
      • Strictly this is off-topic, but that's about as nice a Slashdot post as this reader has ever seen. A nice expert opinion well-rendered and in a timely fashion. Consider me a fan.
      • GCC is capable of optimizing divides by constant integers, but nothing else. Also, AltiVec has a very fast float-to-int method which we used in the LAME Altivec patch (which I still haven't submitted because of changes in the upstream; I have some binaries if asked). There's a great thread about optimizing here [arstechnica.com] at Ars Technica. Also, Shark is great.
  • A VM always helps. (Score:5, Interesting)

    by Domini ( 103836 ) on Friday March 05, 2004 @06:06AM (#8473775) Journal
    Writing a game to use a virtual machine and some form of quasi-interpreted code always makes portability a near given.
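    The VM idea can be illustrated with a toy dispatch loop. The opcodes below are invented for the sketch (not ScummVM's actual bytecode): the point is that only this small interpreter needs porting, while the game's scripts run unchanged on every platform.

```c
/* A toy stack-machine interpreter: game logic is compiled to a tiny
   portable bytecode, and this C loop executes it.  Porting the game
   to a new platform means porting only the interpreter. */
enum { OP_PUSH, OP_ADD, OP_HALT };

int run(const int *code) {
    int stack[16];
    int sp = 0;                    /* stack pointer */
    for (int pc = 0; ; ) {         /* fetch-decode-execute loop */
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

    For example, the program {OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT} evaluates 2 + 3 on any machine the interpreter compiles for.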

    See this site: http://this.is/vortex/osx-ports/?action=games
    for games 'ported' to OS X using the scumm vm. (Some cool ones!)

    Also note how seamlessly Quake 3 runs on even a slow iBook 900 G3.

    Other games which also ported nicely would be Warcraft III and Neverwinter Nights.

    This is mainly due to the programmer showing some foresight.

    The only problem is that the Mac version of Neverwinter Nights came out only recently, when all my multiplayer PC friends are already tired of playing it, and it costs about 8 times the price of the current Neverwinter Nights PC version. It's also difficult for a normal home user to get the expansions running, and I am unable to purchase the game AT ALL locally: the US and GB Amazon stores don't ship here, and PayPal does not support my currency.

    AND THEN THEY COMPLAIN THAT THE PORTED VERSION DOES NOT SELL!!!!

    I own the game on PC already, so why buy it twice? Warcraft 3 AND Quake 3 run on both nearly out of the box.

    No, it's all up to sloppy marketing and programming. Don't blame the OS or the users.
    • Neverwinter Nights (Score:4, Informative)

      by DaRat ( 678130 ) * on Friday March 05, 2004 @08:06AM (#8474122)

      I agree that the Mac version of Neverwinter Nights came out late, the expansion packs aren't officially available for the Mac yet, and I'm sorry that you're having trouble getting a copy locally.

      In the US, the Mac version is $45 vs. $30 for the PC version. Copies are fairly easy to get from most outlets that carry Mac stuff (CompUSA, Apple Stores, online at MacMall, MacZone, Amazon, etc).

      The OpenKnights [sourceforge.net] project has an auto-updater for the Mac version which also will auto-magically install the PC versions of the expansion packs for you.

      • by Domini ( 103836 )
        But, on amazon.co.uk:

        PC: 9.99
        Mac: 34.99

        It's already relegated to a budget game on the PC before I could even get a Mac copy...

        I know about the OpenKnights project, it can be done by hand as well. How many Mac users know of this though?

        A friend of mine is currently in London, so I've ordered it via amazon.co.uk and had it delivered to him there. This will still cost me R500, whereas the PC version I could buy locally for around R250 (which is a normal game price locally).
    • on even a slow iBook 900 G3.

      Considering I'm using a B&W G3 350 would you like to trade to see what a definition of slow is? Or, better yet, borrow my Powerbook 140 for a good side by side comparison between the two portables.

      Let me know! I'll pay shipping.

      • That's not slow; that's a dinosaur.

        Rob
      • Hehe... No thanks, I picked up one of those already for my computer museum. ;)

        No seriously, what I was trying to point out was that it was the lower-end of suggested hardware for the game, and even though the game employed some form of scripting, it was even fast enough on this low-end.

        (This is only low-end relative to modern computer gaming requirements... it's plenty fast enough to do my quicktime streaming/NWN server/Mac Goban/DVD playing/MUD server/Plone etc.)
  • by Anonymous Coward on Friday March 05, 2004 @07:09AM (#8473951)
    Let's play "I can contradict myself within N words"...

    ruffin: If you can answer, did you ever turn down a game where the publishing house had unrealistic minimum system requirements for a port?

    GA: Not that I can remember. I've turned down games that just seemed to require too high end of a machine, or that were bad or offensive, but that's about it.

    Ah. unrealistic minimum requirements are distinct from too high a requirement. Thanks for clarifying that.

    • Ah. unrealistic minimum requirements are distinct from too high a requirement. Thanks for clarifying that.

      Well, just to clarify, they are.

      Unrealistic minimum system requirements would mean the publisher was requiring Aspyr get the game working on machine X that marketing had determined would cover enough of the population that the game would sell amount Y.
      Requiring too high end of a machine means that the developer doesn't feel the port is viable from a technology perspective.

      In terms of who holds the p
  • Wasted cycles? (Score:3, Interesting)

    by AmiMoJo ( 196126 ) on Friday March 05, 2004 @07:54AM (#8474078) Homepage Journal
    I find it hard to believe that a 600MHz PPC processor isn't powerful enough to run a golf game like Tiger Woods.

    There were highly realistic golf simulations that ran okay on a 7MHz A500 or 8086. Sure, these days people demand more from the graphics, but a simple option to turn off some of the fancy stuff couldn't be too hard to do, could it?
  • by mactari ( 220786 ) <rufwork@gma i l . com> on Friday March 05, 2004 @09:46AM (#8474844) Homepage
    The most interesting answer to me that I got from Glenda was with respect to cross-platform technologies. I'd recalled that she'd used OpenAL a number of times. And of course Apple created a decent OpenGL implementation thanks in part to John Carmack's influence. The DirectX to OpenGL porting route is right common nowadays. Wouldn't it be great if it were easier for programmers to *start* with these xplat techs and make ports trivial processes, like Quake 3 was?

    But Glenda didn't put much stock in xplat techs becoming easier to use than M$ sellout techs, nor did she see Apple throwing more weight behind their use as a solution.

    But in true "forest for the trees" fashion, she pointed out the one potential savior for people gaming on "second tier" platforms, let us say, that I'd completely missed.

    ruffin: Who else could run with the ball to get mature, cross platform game programming APIs written?

    GA: Microsoft, but I don't see why they would want to.


    It's sad that the M$ monopoly in the desktop OS market is so, "That's just the way it is," that I would have completely overlooked the answer. That's great insight -- though awfully obvious -- and the perfect "hit-home" reason for us gamers to be pretty angry with the folks in Redmond the way AOL/Netscape, Apple, Sun, and friends are.

    I'm hoping to get an editorial based on the interview up soon, while keeping out the bile and staying relatively objective and fair. That's proving tough.

    And at the same time, the porting system is awfully broken. As games get larger and larger, who knows, perhaps taking multiple DVDs instead of multiple CDs (or obscenely long download times via the net), porting all that code isn't going to get any easier.
  • Why was over half the interview all questions about minimum specs for games? Do people actually read them? I can't think of the last time I looked at a min spec.

    I know in my real job we basically just make up min specs. They bear some relation to reality, but it isn't like we sit around and test on slower machines to figure out what is acceptable.
    • I read them. I figure there is no point in buying the game if my computer doesn't meet the minimum specifications. It still may be too slow, even if my computer does meet the minimum specifications, but it's guaranteed to be too slow, or may not run at all, if my computer doesn't meet the minimum specifications.
      • It still may be too slow, even if my computer does meet the minimum specifications, but it's guaranteed to be too slow, or may not run at all, if my computer doesn't meet the minimum specifications.

        Not necessarily, not anymore. I remember playing Max Payne on a P3 450 with a Diamond Viper V770 non-Ultra even though the reqs said that I couldn't. I got decent enough framerates.

        Rob
