Origin of Quake3's Fast InvSqrt()

geo writes "Beyond3D.com's Ryszard Sommefeldt dons his seersucker hunting jacket and meerschaum pipe to take on his secret identity as graphics code sleuth extraordinaire. In today's thrilling installment, the origins of one of the more famous snippets of graphics code in recent years are under the microscope — Quake3's Fast InvSqrt(), which has been known to cause strong geeks to go wobbly in the knees while contemplating its simple beauty and power."
  • by ArchieBunker ( 132337 ) on Friday December 01, 2006 @02:23PM (#17070412)
    "English motherfucker, do you speak it?" Anyone care to explain what that function does?
    • Re:A famous quote (Score:5, Informative)

      by GigsVT ( 208848 ) on Friday December 01, 2006 @02:26PM (#17070456) Journal
      1/sqrt(x)
      • by Trigun ( 685027 ) <evilNO@SPAMevilempire.ath.cx> on Friday December 01, 2006 @02:27PM (#17070470)
        But faster!
        • Re:A famous quote (Score:5, Informative)

          by kill-1 ( 36256 ) on Friday December 01, 2006 @03:01PM (#17071198)
          And less accurate!!!
        • Re:A famous quote (Score:4, Interesting)

          by KZigurs ( 638781 ) on Friday December 01, 2006 @08:54PM (#17076656)
          Not so much _faster_ as _guaranteed_ execution time for any given precision required (and if you have 480 pixels along an axis of the screen, you have quite a lot to play with).

          This is why console games (PS, PS2, Xbox, 360, etc.) actually stay competitive with PCs (more powerful, of course): since developers have a good idea of the actual CPU/GPU available at any given moment, they can safely push much closer to the limit than on a PC. And on PC they usually resort to generic 'will give you the best that I can' routines anyway.

          (At least that's what I can say after observing an Xbox 360 devel team for 6 months. Scary stuff they do, scary stuff.)
      • by zzyzx ( 15139 )
        Thanks. I kept thinking that the inverse of taking the square root would be to square something but obviously that isn't what that code did.
        • Re: (Score:3, Informative)

          by arth1 ( 260657 )

          Thanks. I kept thinking that the inverse of taking the square root would be to square something but obviously that isn't what that code did.

          You are correct: the inverse of x^n is x^(1/n), from which it follows that the inverse of a square root is the square.
          What TFA calls "inverse square root" is really "the inverse (reciprocal) of the square root", a small but significant difference.
          Not that it matters much, because those who use the function tend to know what is meant.

          Regards,
          --
          *Art

    • Re:A famous quote (Score:4, Informative)

      by Raul654 ( 453029 ) on Friday December 01, 2006 @02:34PM (#17070604) Homepage
      Given a number, that function calculates the inverse of the square root - which, according to TFA, is very common in graphics applications.
      • Re:A famous quote (Score:4, Informative)

        by abradsn ( 542213 ) on Friday December 01, 2006 @02:59PM (#17071150) Homepage
        used whenever there is light reflecting off an object.
        • Re: (Score:3, Funny)

          by Fred_A ( 10934 )
          I think this is a hoax. I've been playing Quake 3 extensively and I haven't once seen anything reflected off any object. What's more it was all very slow. Granted this might have been because of my Tseng ET4000 (which I'm planning on upgrading eventually). Although I did set it to CGA to make things snappier.

          Anyway I'm not convinced.
          • Are you sure you're not actually running Doom 3? Not only does no light reflect off of anything, but no light is emitted in the first place.
      • Re: (Score:3, Informative)

        The inverse square root is used to calculate a normal vector [wikipedia.org]. Normal vectors are, in turn, used in lighting calculations (to determine, for example, the light intensity for a given vertex), among other things.

        AMD's 3DNow! instruction set includes instructions (PFRSQRT/PFRSQIT1) for approximating the inverse square root.

    • Something to do with lighting I'd imagine (unless they've also modeled gravitational attraction)
    • by Anonymous Coward
      Here's a more interesting question: "How does this function compute 1/x^(1/2)?"
      I'm asking, because I have no idea how it works. ;)
      • by Jerry Coffin ( 824726 ) on Friday December 01, 2006 @04:45PM (#17073126)
        How does this function compute 1/x^(1/2)?


        It starts by taking a guess at the right answer, and then improving the guess until it's accurate enough to use.

        The first step depends heavily on the fact that a floating point number on a computer is represented as a significand (aka mantissa) and an exponent (a power of two). For the moment, consider taking just the square root of X instead of its inverse. You could separate out the exponent part of the floating point number, divide it by two, and then put the result back together with the original significand, and have a reasonable starting point.

        From there, you could improve your guesses to get a better approximation. The simplest version of that would be like a high-low game -- you split the difference between the current guess and the previous guess, and then add or subtract that depending on whether your previous guess was high or low. Eventually, you'll get arbitrarily close to the correct answer.

        This can take quite a few iterations to get to the right answer though. To improve that, Newton-Raphson looks at the curve of the function you're working with, and projects a line tangent to the curve at the point of the current guess. Where that line crosses the x-axis gives you the next guess. That's probably a lot easier to understand from a picture [sosmath.com].

        In this case, we're looking for the inverse square root, which changes the curve, but not the basic idea. As a general rule, the closer your first guess, the fewer iterations you need to get some particular level of accuracy. That's the point of the:

        i = 0x5f3759df - (i>>1);

        While the originator of this constant is unknown, and some of it is rather obscure, the basic idea of most of it is fairly simple: we start by shifting the original number right a bit. This divides both the mantissa and the exponent part by two, with the possibility that IF the exponent was odd, it shifts a bit from the exponent into the mantissa. The subtraction from the magic number then does a couple of things. For one thing, if a bit from the exponent was shifted into the mantissa, it removes it. The rest of the subtraction is trickier. If memory serves, it's based on the harmonic mean of the difference between sqrt(x) and (x/2) for every possible floating point number of the size you're using.

        This is where the fact that it's 1/sqrt(x) instead of sqrt(x) means a lot: 1/sqrt(x) is a curve, but it's a fairly flat curve -- much flatter than sqrt(x). The result is that we can approximate a point on the curve fairly accurately with a line. In this case, it's really two lines, which gets it a bit closer still.

        From there, the number has had a bit of extra tweaking done -- it doesn't actually give the most accurate first guess, but its errors are often enough in the opposite direction from those you get in the Newton-Raphson iteration steps that it gives slightly more accurate final results.
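
        To make the exponent-halving first guess concrete, here is a minimal sketch (my own illustration, not code from the article; it uses memcpy for the bit copy, and the integer division makes odd exponents merely approximate):

        #include <math.h>
        #include <stdio.h>
        #include <string.h>

        /* First guess for sqrt(x), x > 0: halve the unbiased exponent
           and keep the original significand, as described above. */
        static float sqrt_guess(float x) {
            unsigned u;
            memcpy(&u, &x, sizeof u);              /* raw IEEE-754 bits */
            int e = (int)((u >> 23) & 0xff) - 127; /* unbiased exponent */
            unsigned g = (unsigned)(e / 2 + 127) << 23 | (u & 0x7fffff);
            memcpy(&x, &g, sizeof x);
            return x;                              /* within ~2x of sqrt(x) */
        }

        int main(void) {
            for (float x = 2.0f; x < 1.0e6f; x *= 10.0f)
                printf("x=%10.1f guess=%10.4f sqrt=%10.4f\n",
                       x, sqrt_guess(x), sqrtf(x));
            return 0;
        }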
    • by Anonymous Coward
      They're computing the inverse square root, usually written as something like x^(-1/2) (which is the same as 1/sqrt(x) of course) via, umm, err, magic :-)

      Seriously, they're using some evil numerical techniques that approximate the actual value of that function to within a few percent very quickly. You could do more steps of Newton's root finding method or other complex things, but that would be slow, and this is only meant to find things that are "good enough" for you to draw on someone's screen.

      This function
    • Re:A famous quote (Score:5, Informative)

      by 91degrees ( 207121 ) on Friday December 01, 2006 @03:13PM (#17071420) Journal
      Okay - at times you want a normalised vector. A lot of the time you will have vectors of arbitrary length. For example, the light is at the origin, the point is at (12, 4, 3). So the vector from the point to the light is (-12, -4, -3). The length of this vector can be calculated easily using Pythagoras' theorem: sqrt(12^2 + 4^2 + 3^2). It's 13 units in length. We want a unit vector (i.e. a vector 1 unit in length), so we divide by the length to get (-12/13, -4/13, -3/13).

      This is great for a 3D rendering application, but in a game speed is critical. This pair of calculations involves a square root and a divide. Both of these are at least an order of magnitude slower than multiplications and additions.

      So what this function does is provide a value you can multiply each component by to get a unit vector (sketched in code below).

      Well, there's the what and why parts. As for the how, I have no idea. I think it uses magic.
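
      A small sketch of that in C (mine, not the parent's; the vec3 type and normalize helper are illustrative, and InvSqrt is the routine from TFA written with a union rather than the pointer cast):

      #include <stdio.h>

      typedef struct { float x, y, z; } vec3;

      /* The routine from TFA, using a union for the bit reinterpretation. */
      static float InvSqrt(float x) {
          union { float f; int i; } u;
          float xhalf = 0.5f * x;
          u.f = x;
          u.i = 0x5f3759df - (u.i >> 1);
          x = u.f;
          return x * (1.5f - xhalf * x * x);
      }

      /* One InvSqrt and three multiplies instead of one square root
         and three divides. */
      static vec3 normalize(vec3 v) {
          float s = InvSqrt(v.x * v.x + v.y * v.y + v.z * v.z);
          v.x *= s; v.y *= s; v.z *= s;
          return v;
      }

      int main(void) {
          vec3 n = normalize((vec3){ -12.0f, -4.0f, -3.0f });
          printf("(%f, %f, %f)\n", n.x, n.y, n.z); /* ~(-12/13, -4/13, -3/13) */
          return 0;
      }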
    • by ebyrob ( 165903 ) on Friday December 01, 2006 @06:15PM (#17074620)
      If this doesn't make your head swim:

      int i = *(int*)&x;
      i = 0x5f3759df - (i >> 1);


      Then I'm afraid the whole article is going to be lost on you...

      We've got a floating point being operated on as an integer.
      We've got a mysterious constant.
      We've got a two's complement sign-flip combined with a bit-shift.

      The only thing missing from this party is hookers and beer.
    In the words of John Carmack himself, if you discover and implement an algorithm by yourself, even though it may have already been discovered, you deserve credit for finding it on your own.

    [Insert rant about software patents]
  • I have a truly marvelous proof of who wrote this code which this comment box is too narrow to contain.
  • by gukin ( 14148 ) on Friday December 01, 2006 @02:33PM (#17070584)
    Anyone who has ever played a computer game should pay up.
  • by nels_tomlinson ( 106413 ) on Friday December 01, 2006 @02:35PM (#17070626) Homepage
    The linked site seems to be down (gee, you think it might be slashdotted?), but this paper [purdue.edu] seems to be covering the same topic.
  • by MankyD ( 567984 ) on Friday December 01, 2006 @02:35PM (#17070630) Homepage
    Why does the coder use
    int i = *(int*)
    instead of just
    int i = (int)x;


    Can someone enlighten me?
    • by MankyD ( 567984 )
      That first one didn't come out right:
      int i = *(int*) &x;
      • Because he's trying to get at the binary representation of the IEEE754 floating point number, which he then manipulates a bit, and puts back where it was.

        With your suggested code, you'd be operating on the actual number, casting the float to an int, which would just be throwing away a bunch of information.
      • by eis271828 ( 842849 ) on Friday December 01, 2006 @02:46PM (#17070866) Homepage
        (int) x would convert the floating point value to an integer (truncation, basically).
        *(int*) &x treats the bits as an integer, with no behind the scenes conversion to an actual int value.
      • by radarsat1 ( 786772 ) on Friday December 01, 2006 @02:47PM (#17070878) Homepage
        Because in C's own weird way, that's the only way of referring to a float as an int without changing the bits.

        If you do this:
        int i = (int)3.0f;

        You get i=3, like what you'd get from the floor() function.

        If you do this:
        float f = 3.0f;
        int i = *(int*)&f;


        Then i contains a bit-for-bit copy of the IEEE floating-point representation of 3.0.

        It's because C knows how to convert a float to an int by truncating toward zero (which matches floor() for positive values). However, if you do it the second way, you aren't casting a float to an int, you are casting a pointer-to-float to a pointer-to-int and then dereferencing it.

        By the way, I just wanted to say... this is one of the most interesting things I've read on Slashdot in a while. Wow. That function is just amazing. I only wish I understood how it worked. I have no idea what a "Newton-Raphson iteration" is.
        • by dsci ( 658278 ) on Friday December 01, 2006 @03:12PM (#17071400) Homepage
          Newton-Raphson is a general algorithm for finding a root of an equation f(x) = 0.

          You start with some INITIAL GUESS (the real beauty of this algorithm) X(0), then apply:

          X(n+1) = X(n) - f(X(n)) / f'(X(n))

          where
          X(n+1) is the NEXT guess after the value you 'know',
          X(n) is that most recent value you know,
          f(X(n)) is the function evaluated at X(n) and
          f'(X(n)) is the first derivative of f(x) evaluated at X(n).

          It's not foolproof: BOTH whether it converges at ALL AND how FAST it converges depend on the initial guess, X(0).

          The "Secant Method" is an improvement that makes it a little 'smarter,' at the expense of more computation (this is often a positive trade-off in numerical modeling codes, since the 'smarter' algorithm does tend to converge faster). There are other improvements as well, such as the Los Alamos Linear Feedback Solver (a slightly modified secant method that converges about 10-17% faster, at least for some types of problems) that I use in my own codes.

          Obligatory Wikipedia followup: Newton's Method [wikipedia.org]
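
          Applied to the inverse square root, take f(y) = 1/y^2 - x, whose positive root is y = 1/sqrt(x). Working the formula above through (a sketch of my own, not from the parent post):

          /* f(y)  = 1/(y*y) - x      (zero exactly when y = 1/sqrt(x))
             f'(y) = -2/(y*y*y)
             y_next = y - f(y)/f'(y)
                    = y + (1/(y*y) - x) * (y*y*y)/2
                    = y * (1.5 - 0.5*x*y*y)
             ...which is precisely the "x = x*(1.5f - xhalf*x*x)" line
             in InvSqrt. */
          float invsqrt_step(float y, float x) {
              return y * (1.5f - 0.5f * x * y * y);
          }

          Convergence is quadratic, so each call roughly doubles the number of correct bits: invsqrt_step(invsqrt_step(0.2f, 21.0f), 21.0f) already gives ~0.2182, against a true value of 0.21821789...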
        • Re: (Score:3, Informative)

          by The boojum ( 70419 )
          [Forgive the LaTeX notation -- I'm not sure how else to best express the math without sub and sup tags.]

          Newton-Raphson is a root finding method. Given a starting point it finds successively more accurate approximations to the root. The formula for it looks like: x_{n+1} = x_n - f(x_n)/f'(x_n), where f(x) is the function to find the root of, f'(x_n) is the derivative of that function and x_n are the successive approximations. Essentially, given a guess it looks at the slope of the function at that point f
        • by arniebuteft ( 1032530 ) <buteftNO@SPAMgmail.com> on Friday December 01, 2006 @04:58PM (#17073326)
          The article at http://www.math.purdue.edu/~clomont/Math/Papers/2003/InvSqrt.pdf [purdue.edu] really explains it better, but the point is to trick the code into performing integer operations on a float. You start the function with a float, which is 32 bits, arranged in a very specific sequence: first bit is sign (0 for positive, 1 for neg), next 8 bits are exponent (offset by 127 to allow positive and negative exponents), and the last 23 bits are the mantissa (normalized to assume the decimal point has a binary 1 to its left). So, the simple good old number 21 in float is the sequence 0 10000011 01010000000000000000000, for the sign, exponent, and mantissa (omit the spaces of course). That same number 21, stored as a 32-bit integer is 00000000 00000000 00000000 00010101 (again, omit the spaces).

          The trick of this function is to take the 32 bits of data that are really a float, but process it as if it's an integer. So you take that cumbersome number 21 as a float, then BAM! presto, turn it directly to an integer not through type conversion, but by simply treating those same 32 bits as if they were representing an integer all along.

          Let's use the number 21 as an example in the function call.

          The binary representation of 21 as a float is 01000001 10101000 00000000 00000000 (broken into 8-bit words for clarity). The function then goes to create a new integer i, whose value is also 01000001 10101000 00000000 00000000 (which happens to be 1101529088 in decimal). The magical line of the code, i = 0x5f3759df - (i>>1), takes that integer i, shifts its bits one to the right (turning our 01000001 10101000 00000000 00000000 into 00100000 11010100 00000000 00000000, or 550764544 in decimal), then subtracts it (still doing integer math here) from 0x5f3759df (which is 01011111 00110111 01011001 11011111 or 1597463007 in decimal), and winds up with 00111110 01100011 01011001 11011111 (or 1046698463 in decimal).

          Now, for its next trick, it takes that wonky integer 1046698463, and turns it back into a floating point number, by the same trick used above, i.e. simply by looking at those same 32 bits, and pretending they're a float, not an int. The binary representation of 1046698463, 00111110 01100011 01011001 11011111, is the same as 0.22202251851558685 in float.

          From here on out, it's all floating-point math. Apply the Newton-Raphson method (that's the next line), and we get x = 0.22202251851558685 * (1.5 - ( (21*.5) * 0.22202251851558685^2 )) = 0.218117811. We return this value at the closing of the function. As it turns out, the inverse square root of 21 is 0.21821789... (thanks Google calc). So, I have no idea WHY the Float to Int to Float trick works, but it works very well.

          Whew!
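
          If you want to watch those bit patterns go by, here's a small harness of my own that reproduces the walkthrough (memcpy does the reinterpretation, which sidesteps the aliasing problem raised in the reply below):

          #include <math.h>
          #include <stdio.h>
          #include <string.h>

          int main(void) {
              float x = 21.0f, xhalf = 0.5f * x;
              unsigned i;
              memcpy(&i, &x, sizeof i);        /* 0x41A80000 = 1101529088 */
              printf("float bits as int: 0x%08X (%u)\n", i, i);
              i = 0x5f3759df - (i >> 1);       /* 0x3E6359DF = 1046698463 */
              printf("after magic line:  0x%08X (%u)\n", i, i);
              memcpy(&x, &i, sizeof x);        /* ~0.22202252 */
              printf("back as float:     %.17g\n", x);
              x = x * (1.5f - xhalf * x * x);  /* one Newton-Raphson step */
              printf("refined: %.9f (true: %.9f)\n", x, 1.0 / sqrt(21.0));
              return 0;
          }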

          • Nowadays (e.g. since GCC 3.x) that trick doesn't work any more, as written. To get reliable results you would need to say something more like

            float inv_sqrt(float x) { /* death to StudlyCaps! */
              float xhalf = 0.5f*x;
              union { float x; int i; } ix;
              ix.x = x;
              ix.i = 0x5f3759df - (ix.i >> 1);
              x = ix.x;
              x *= 1.5f - xhalf * x * x;
              return x;
            }

            The point is that pointer-based type punning isn't really allowed by the language definition. The optimizer can take advantage of that to emit sim

        • Re: (Score:3, Interesting)

          by HTH NE1 ( 675604 )
          Because in C's own weird way, that's the only way of referring to a float as an int without changing the bits.

          <yoda>No. There is another.</yoda>

          You can also lie to va_arg() about the type of the argument to achieve the same thing, but it's not as efficient.
      • Because it's not actually a float-to-int cast as you'd normally think of it. Rather, it's mucking with the bit representation of the float. It's roughly equivalent to "union { float x; int i; };"
      • by ToxikFetus ( 925966 ) on Friday December 01, 2006 @02:56PM (#17071070)
        Why does the coder use

        (1) int i = *(int*)&x;

        instead of just

        (2) int i = (int)x;
        (some of my points added for emphasis)

        I'll take a swing at this one. It's because the author doesn't want the value of x, but the integer representation of the value at x's memory address.

        If x is 3.14159, (2) will result in i==3, whereas (1) will result in whatever the 4-byte IEEE-754 representation of 3.14159 is (0x40490FD0, if Google is correct). By using (1), the author is able to use integer bitwise operations (>>) to perform "free" floating point operations. When i is sent back into floating point form via:

        x = *(float*)&i;

        x now contains the value of the integer operation:

        i = 0x5f3759df - (i >> 1);

        which was presumably faster than an identical floating point operation. It's a nifty little solution, especially with regard to the selection of the magic number.
      • One is 'take this value and convert it to an integer' (your version).

        One is 'take this set of bytes, assume it's integer data, and tell me what value you get when you dereference it' (their version).

        They won't necessarily render the same result.

        I haven't looked at the code, but depending on the type of X, it could provide some wildly different results.
    • Re: (Score:3, Insightful)

      by MeanMF ( 631837 )
      int i = (int)x would convert the floating point number to its truncated integer equivalent.

      int i = *(int*)&x assigns i the 32-bit value that represents how x is stored in memory as a float.
    • by ewhac ( 5844 ) on Friday December 01, 2006 @02:44PM (#17070820) Homepage Journal
      C "understands" ints and floats. If you do the simple cast:

      int i = (int)x;

      Then C will simply convert the float value into an integer value (throwing away the fractional part). But this isn't what we want. We want to operate on the bits of an IEEE floating point value directly, and integers are the best way to do that.

      So first, we lie to the compiler by telling it we have a pointer to an int:

      (int *) &f

      And then we dereference the pointer to get it into an operable int:

      i = *(int *) &f

      Note what's important here is to keep the compiler from modifying any part of the original 32-bit value.

      Schwab

      • Re: (Score:3, Informative)

        Note what's important here is to keep the compiler from modifying any part of the original 32-bit value.

        Moreover, when compiled, the optimizer does the right thing - since access to the variable is actually a memory dereference (or a register access, but we'll ignore it for now), *(int*)& means "tell the compiler this is an int, but don't do anything in the code", whereas (int) means "create a new temporary that's the int part of the variable".
    • by uhoreg ( 583723 )
      Try them out. You'll see that they give different numbers. "int i = (int) x" converts x into an integer with approximately the same value. "int i = *(int*)&x" gives you an integer with the same binary representation as the float (and which will give different results depending on endianness and size of int).
    • by ars ( 79600 )
      I think, but am not sure, that it's copying the floating point value bit for bit into an integer. It's not converting the float into an int.
    • OK, my subject is flippant, but it is also serious. The alternative you are proposing simply extracts an integer approximation to the float x. The code as written does something different: it extracts the binary representation of the floating-point number. That is, it is extracting the raw bits involved.
    • by doshell ( 757915 )

      Since x is a float, int i = (int)x; would find the (best) integer approximation of x and store it in i. What the author meant to do was take the binary representation of x and have the compiler treat it as the binary representation of an int -- thus the "taking the address", followed by a cast to int *, followed by a dereference to store the actual value in i.

      If this gets you confused, I'd definitely suggest trying out both methods in your favorite compiler :)
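
      For instance, a quick sketch (mine, not the parent's; memcpy gets at the same bits as the pointer cast, without the aliasing worries):

      #include <stdio.h>
      #include <string.h>

      int main(void) {
          float x = 3.14159f;
          int converted = (int)x;              /* value conversion: 3 */
          int punned;
          memcpy(&punned, &x, sizeof punned);  /* raw bits: 0x40490FD0 */
          printf("(int)x    = %d\n", converted);
          printf("bits of x = 0x%08X\n", punned);
          return 0;
      }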

  • by Stavr0 ( 35032 ) on Friday December 01, 2006 @02:42PM (#17070778) Homepage Journal
    New hotness: Fast InvSqrt()

    Naah, just kidding. They both deserve a spot in the Clever Hacks Hall of Fame

  • Hmm... (Score:2, Funny)

    by porkmusket ( 954006 )
    Suddenly I'm very scared that Ballmer will want to InvSqrt me some pictures of his kids.
    • Re: (Score:3, Funny)

      by gstoddart ( 321705 )
      Suddenly I'm very scared that Ballmer will want to InvSqrt me some pictures of his kids.

      Ummmm .... you mean, squirt his kids back to him? :-P
  • Poor function name (Score:4, Insightful)

    by bloobloo ( 957543 ) on Friday December 01, 2006 @02:43PM (#17070800) Homepage
    The inverse of the square root is the square. The reciprocal of the square root is what is being calculated. In the case of multiplication I guess it's a reasonable name but it's pretty poor form in my view.
    • Re: (Score:3, Insightful)

      by rg3 ( 858575 )
      The word "inverse" has several uses in mathematics. One of them is the one you mentioned, which is to say "inverse function". In that sense, working with positive numbers, the inverse function of the square root is the square, yes. However, there is also the concept of "inverse element" for sets and operations. It's a very generic concept that relates to an operation and the identity element for that operation. For the sum, the identify element is 0, given that x + 0 = x. The inverse element of another elem
  • It was fast (Score:4, Informative)

    by KalvinB ( 205500 ) on Friday December 01, 2006 @02:43PM (#17070808) Homepage
    http://www.icarusindie.com/DoItYourSelf/rtsr/index.php?section=ffi&page=sqrt [icarusindie.com]

    That page compares the time it takes to calculate the sqrt various ways, including Carmack's. The short version is that modern processors are significantly faster since it can be done in hardware. It may still be useful in cases where the processor doesn't have a sqrt function available.

    His version took 428 cycles compared to 107 cycles doing it in hardware on the same system.
    • Re:It was fast (Score:5, Insightful)

      by systemeng ( 998953 ) on Friday December 01, 2006 @03:19PM (#17071500)
      First off, this function calculates 1.0/sqrt(x), not sqrt(x). InvSqrt is a particularly nasty function because both the divide and the square root stall the floating point pipeline on IA32 processors. As a result, instead of shooting out one result per cycle as pipelining normally allows, the processor will stall for 32 cycles for the divide after it has stalled for the 43 cycles for the square root (P4). This is a big hit to realtime performance, and it also prevents 76 multiplies from getting done while the pipeline is stalled.

      Secondly, IA32 processors are superscalar and have multiple integer units which can do portions of this calculation in parallel. This algorithm is brilliant because it uses the integer units for a portion of the most difficult part of the calculation, and the remaining floating point multiplies only take about 6 clock cycles on the FPU. The difference in clock cycles you are counting is likely because the routine as written will be implemented as a function call, and the stack push overhead will eat you alive. If this is implemented inline, it's about 6 times as good as simply calling the processor's assembly instructions for root and divide in sequence, with the penalty that it isn't as accurate. It is virtually impossible to beat sqrt on IA-32, but 1.0/sqrt can be computed faster in one fell swoop with Newton-Raphson iteration than by composition of the operations.

      I've worked several years implementing similar optimizations in the reference implementation of ISO/IEC 18026, a standard for digital map conversion. Most of the routines that had optimizations like this added to them saw at least 30% speed improvements. This is a bit of a soft number because many things were reordered to make the pipeline fill better, but in general a complicated function, especially of trig functions, that can be computed in one iteration of well designed Newton-Raphson will be much faster than the composition of the CPU's implementations of the component functions.

      In short, don't write off careful numerics; they can provide great speed improvements. Just don't use them in code that people will want to understand later if you don't document exactly what you did and why.
      • Re:It was fast (Score:5, Informative)

        by adam31 ( 817930 ) <adam31@gmELIOTail.com minus poet> on Friday December 01, 2006 @04:07PM (#17072390)
        Okay, let's try x86...

        rsqrtss xmm1, xmm0
        about 5 cycles. And it can pipeline.

        Not a fan of x86? Maybe altivec...
        vrsqrtefp V2, V1
        depends, but 12 cycles probably and pipelined.

        On PS3's SPU it's rsqrte (6 cycles), on 3dNow it's pfrsqrt (8 cycles) both pipelined. Even PS2 had rsqrt (13 cycles). There's just no reason for software reciprocal square root. It's a cool trick, but it's not even useful anymore.
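
        For the record, using the x86 one from C is a one-liner with SSE intrinsics. A sketch (rsqrtss alone is only good to roughly 12 bits, so a Newton-Raphson step is tacked on):

        #include <stdio.h>
        #include <xmmintrin.h>

        /* Hardware reciprocal square root via the SSE rsqrtss instruction,
           refined with one Newton-Raphson step. */
        static float rsqrt_sse(float x) {
            float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
            return y * (1.5f - 0.5f * x * y * y);
        }

        int main(void) {
            printf("%.9f\n", rsqrt_sse(21.0f)); /* ~0.218217890 */
            return 0;
        }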

        • Re: (Score:3, Informative)

          by systemeng ( 998953 )
          If you have need of a low precision result and you have a vector processor, absolutely. You're right, and thanks for the hard numbers. My work is in scientific computing converting digital maps, where my final result must be accurate to better than 1.0e-13 and intermediates are multiplied by numbers on the order of the square of the earth's radius in meters. I'm stuck with the FPU where things aren't so rosy. We usually combine the Newton-Raphson iteration of 1/sqrt(f(x)) with the Newton-Raphson solution
  • by zoftie ( 195518 ) on Friday December 01, 2006 @02:50PM (#17070964) Homepage
    Introduction
    Note!

    This article is a republishing of something I had up on my personal website a year or so ago before I joined Beyond3D, which is itself the culmination of an investigation started in April 2004. So if timeframes appear a little wonky, it's entirely on purpose! One for the geeks, enjoy.
    Origin of Quake3's Fast InvSqrt()

    To most folks the following bit of C code, found in a few places in the recently released Quake3 source code, won't mean much. To the Beyond3D crowd it might ring a bell or two. It might even make some sense.

    InvSqrt()
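
    (The code listing itself didn't survive this repost; as quoted later in the discussion, it reads:)

    float InvSqrt (float x){
      float xhalf = 0.5f*x;
      int i = *(int*)&x;
      i = 0x5f3759df - (i>>1);
      x = *(float*)&i;
      x = x*(1.5f - xhalf*x*x);
      return x;
    }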

    Finding the inverse square root of a number has many applications in 3D graphics, not least of all the normalisation of 3D vectors. Without something like the nrm instruction in a modern fragment processor where you can get normalisation of an fp16 3-channel vector for free on certain NVIDIA hardware if you're (or the compiler is!) careful, or if you need to do it outside of a shader program for whatever reason, inverse square root is your friend. Most of you will know that you can calculate a square root using Newton-Raphson iteration and essentially that's what the code above does, but with a twist.
    How the code works

    The magic of the code, even if you can't follow it, stands out as the i = 0x5f3759df - (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the bit pattern of the floating point number you want to take the inverse square root of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.

    Reusing the integer cast of the seeded value, the initial guess for the Newton iteration is calculated as the magic seed value minus a free divide by 2, courtesy of the CPU's integer shift.

    But why that constant to start the guessing game? Chris Lomont wrote a paper analysing it while at Purdue in 2003. He'd seen the code on the gamedev.net forums and that's probably also where DemoCoder saw it before commenting in the first NV40 Doom3 thread on B3D. Chris's analysis for his paper explains it for those interested in the base math behind the implementation. Suffice to say the constant used to start the Newton iteration is a very clever one. The paper's summary wonders who wrote it and whether they got there by guessing or derivation.
    So who did write it? John Carmack?

    While discussing NV40's render path in the Doom3 engine as mentioned previously, the code was brought up and attributed to John Carmack; and he's the obvious choice since it appears in the source for one of his engines. Michael Abrash was mooted as a possible author too. Michael stands up here as x86 assembly optimiser extraordinaire, author of the legendary Zen of Assembly Language and Zen of Graphics Programming tomes, and employee of id during Quake's development where he worked alongside Carmack on optimising Quake's software renderer for the CPUs around at the time.

    Asking John whether it was him or Michael returned a "not quite".

    -----Original Message-----
    From: John Carmack
    Sent: 26 April 2004 19:51
    Subject: Re: Origin of fast approximated inverse square root

    At 06:38 PM 4/26/2004 +0100, you wrote:

    >Hi John,
    >
    >There's a discussion on Beyond3D.com's forums about who the author of
    >the following is:
    >
    >float InvSqrt (float x){
    > float xhalf = 0.5f*x;
    > int i = *(int*)&x;
    > i = 0x5f3759df - (i>>1);
    > x = *(float*)&i;
    > x = x*(1.5f - xhalf*x*x);
    > return x;
    >}
    >
    >Is that something we can attribute to you? Analysis shows it to be
    >extremely clever in its method and supposedly from the Q3 source.
    >Most people say it's your work, a few say it's Michael Abrash's. Do
    >you know who's responsible, possibly with a history of sorts?

    Not me,
  • by eno2001 ( 527078 ) on Friday December 01, 2006 @02:54PM (#17071046) Homepage Journal
    ...the work of John Romero. Apparently he was going to call it the MakeYouMyBitch() function. ;P
  • Specifically, the compilation _Notation, Notation, Notation_, page 130.
  • by kan0r ( 805166 ) on Friday December 01, 2006 @03:01PM (#17071186)
    But the first thing I thought when I saw this was: "Damn, that code is a mess!"
    Seriously, try looking past the genius who obviously wrote it:
    • There is not a single comment that would make reading and understanding what happens here much easier!
    • Introduction of a magic number with no explanation whatsoever
    • Magic pointer arithmetic without demystification
    • Portability? Abuse of a single processor architecture, without warning that this would not work on non-x86
    I know it is good code. But it is simply bad code!
    • Re: (Score:3, Funny)

      Yeah, the computing industry is full of people who resent smart programmers who don't program by the book. That's fine. Every programmer eventually finds their own level and there's no shortage of startup businesses who need web page designers.
  • Error! (Score:3, Funny)

    by jrmiller84 ( 927224 ) on Friday December 01, 2006 @03:05PM (#17071264) Homepage
    float InvSqrt(float x) { float xhalf = 0.5f*x; int i = *(int*)&x; i = 0x5f3759df - (i >> 1); x = *(float*)&i; x = x*(1.5f - xhalf*x*x); return x; } Asking John whether it was him or Michael returned a "not quite". But it's supposed to return a float!
  • hakmem (Score:3, Interesting)

    by trb ( 8509 ) on Friday December 01, 2006 @03:27PM (#17071680)
    The article barely mentions HAKMEM [wikipedia.org], but the invsqrt hack is reminiscent of the HAKMEM programming hacks [pipeline.com], which were published in 1972. Several of these hacks use bit fiddling with magic constants to perform tasks in straight-line code that you would ordinarily think of doing with iteration.

    HAKMEM is classic bathroom reading for hackers. If you want to do it up old-school, print a copy from original scans [mit.edu], double-sided.

  • by Ninja Programmer ( 145252 ) on Friday December 01, 2006 @04:23PM (#17072692) Homepage
    Here's an old version of one of my webpages:

            http://web.archive.org/web/19990210111728/www.geocities.com/SiliconValley/9498/sqroot.html [archive.org]

    And here's an updated version of the same page:

            http://www.azillionmonkeys.com/qed/sqroot.html [azillionmonkeys.com]

    It isn't an exact rendering of the code in question, but it explains enough for any skilled hacker to 1) understand what's going on and 2) modify the code to create the resulting code that's in the Quake 3 source. Furthermore, this web page has existed since about 1997 (archive.org doesn't go back that far for some reason).

    Now *IF* in fact the code origin comes from someone who took ideas from my site, I should point out that *I* am not the originator of the idea either (though I did write the relevant code). Bruce Holloway (who I credit on the page) was the first person to point out this technique to me, around the 1997 timeframe (prior to this, I created my own method which is similar, but not really as fast). (Vesa Karvonen informed me of the technique, through a code snippet with no explanation, at roughly the same time as well.) It was a technique well known to hard core 3D accelerator and CPU implementors, and follows an intentional design idea from the IEEE-754 specification.

    Prof. William Kahan, one of the key people who specified the IEEE-754 standard (the standard for floating point that many CPUs use, starting with Intel's 8087 coprocessor), apparently presented this idea, and is the source from which Bruce Holloway got it. The IEEE-754 standard came out around the 1982 time frame, though it's very likely that these ideas originate from even earlier in computing history.
    • (I'm (sorry))

      You don't (happen to (program (lisp))) do you?
      Though it's very likely that these ideas originate from even earlier in computing history.

      Attribution.

      That is why most, if not all, software patents are bogus. Just because you reinvent something published by a PhD working in a committee that disbanded 10 years before you knew 'C' came after 'B' in the alphabet does not make your reinvention patent-worthy. The history of invsqrt() crosses disciplines of hardware and software design, spec development, graphics and math theory. With such a fascinating functi
  • Clever trick! (Score:5, Informative)

    by kent.dickey ( 685796 ) on Friday December 01, 2006 @05:58PM (#17074370)
    To summarize, the article is about a piece of code to approximate 1/sqrtf(x):

    float InvSqrt (float x){
      float xhalf = 0.5f*x;
      int i = *(int*)&x;
      i = 0x5f3759df - (i>>1);
      x = *(float*)&i;
      x = x*(1.5f - xhalf*x*x);
      return x;
    }
    The trick is the "i = 0x5f3759df" line. It's certainly a magic number.

    The algorithm is simple Newton-Raphson -- make a good initial guess, then iterate to make the guess better. I think Newton-Raphson on 1/sqrt picks up 5-6 bits each try in the line "x = x*(1.5f - xhalf*x*x)", and it can be repeated to get a more accurate result each time.

    The problem with Newton-Raphson is making a good first guess--otherwise, you need an extra iteration or two. And that's what the magic number is doing, making a good first guess.

    So let's work out what a good first guess would look like for 1/sqrt(f), to see where this code came from.

    Floating Point numbers are stored with a mantissa and an exponent: f = mantissa * (2 ^ exponent), where exponent is 8-bits wide and the mantissa is 23-bits wide.

    Let's take an example: 1/sqrt(16) would have f = 1.0 * (2 ^ 4). We want the result 0.25 which is f = 1.0 * ( 2 ^ -2).

    So our first guess should take our exponent, negate it, and cut it in half. (Try more examples to see that this works--it's basically the definition of 1/sqrt(f)). We'll ignore the mantissa--if we can just get within a factor of 2 of the answer in one step, we're doing pretty well.

    Unfortunately, the exponent is stored in FP numbers in an offset format. In memory,

    exp_field = (actual_exp + 127) << 23
    The mantissa is in the low 23 bits, and the most-significant bit is the sign (which will be 0 if we're taking roots). For now, let's just assume we have our exponents as 8-bit values, to work out what we need to do with the +127 offset.

    We want new_actual_exp = -(actual_exp)/2. But in memory, exp = (actual_exp + 127). Or, actual_exp = exp - 127.

    Substituting gives (new_exp - 127) = -(exp - 127)/2. Simplify this to: new_exp = 127 - (exp - 127)/2 => new_exp = 3*127/2 - (exp / 2).

    Now the exponent is shifted 23 places in memory, so let's write out our code (and ignore the mantissa completely for now...):

    i = (((3*127)/2) << 23) - (i >> 1);
    rewriting as hex:

    i = 0x5f400000 - (i >> 1);
    Well, first, it's arguable whether it should be 0x5f000000 or 0x5f400000 (The "4" is actually in the mantissa). I'm guessing resolving that dilemma led to the original author discovering that choosing a particular pattern of bits in the mantissa can help make the initial guess even more accurate, leading to the 0x5f3759df constant.

    I haven't worked it out, but Chris Lomont http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf [lomont.org] shows this first guess is accurate to about 4-5 bits of significance for all floating point values. That's a good result, considering that mucking with the exponents was just hoping to get us within 1-2 bits of significance.
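
    You can check that figure yourself. Here's a throwaway sketch of mine that measures the worst relative error of the raw first guess (no Newton-Raphson step) over a sweep of values:

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double worst = 0.0;
        for (float x = 0.01f; x < 1.0e6f; x *= 1.01f) {
            unsigned i;
            float g;
            memcpy(&i, &x, sizeof i);
            i = 0x5f3759df - (i >> 1);   /* first guess only */
            memcpy(&g, &i, sizeof g);
            /* relative error of g against 1/sqrt(x) is |g*sqrt(x) - 1| */
            double err = fabs((double)g * sqrt((double)x) - 1.0);
            if (err > worst) worst = err;
        }
        printf("worst relative error: %.4f (~%.1f bits)\n", worst, -log2(worst));
        return 0;
    }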
