In 1956, engineers at Convair were asked to develop a digital computer to replace the B-58 Hustler bomber’s existing analog navigation system. This was quite a challenge: the first commercial silicon bipolar junction transistors had only been announced in 1954, so fully transistorized computers were still very new. In addition, navigation requires calculating trigonometric functions in real time, something existing algorithms running on the transistors of the day were far too slow to accomplish.

Existing analog navigation systems made use of electromechanical resolvers to calculate rotations of vectors and determine their magnitude and angle. There was no digital equivalent to a resolver, so “computation of navigation problems required either series expansions, approximations or table lookup of individual trigonometric functions”. The COordinate Rotation DIgital Computer (CORDIC) was developed to replace the resolvers.

Jack Volder’s initial inspiration for the CORDIC algorithm came from “the massaging of the basic angle addition equations to obtain the following interesting equations”. If $\tan \phi = 2^{-n}$ then:

$$
\begin{aligned}
x' &= \cos\phi \left( x - y \, 2^{-n} \right) \\
y' &= \cos\phi \left( y + x \, 2^{-n} \right)
\end{aligned}
$$
He realized that these operations could be repeated for increasing values of $n$, which would allow the rotation of a vector $R$ through any angle. It may not be immediately apparent from looking at these equations, but this is an implementation of trigonometric functions using only addition and subtraction, bit shifts, a small lookup table, and a single multiplication. In the sections that follow, we’ll attempt to unravel this.

## Coordinate rotation

Let’s set the stage: we start with the Cartesian coordinates $x$ and $y$ which we want to rotate by some angle $\phi$ to determine a new set of coordinates $x'$ and $y'$. These coordinates can be interpreted as the components of a vector $\mathbf{r}$ with magnitude $r$ and angle $\theta$ which is rotated by $\phi$ to obtain $\mathbf{r'}$.

Mathematically speaking, with $x = r\cos\theta$ and $y = r\sin\theta$, this can be represented as

$$
\begin{aligned}
x' &= r \cos(\theta + \phi) \\
y' &= r \sin(\theta + \phi)
\end{aligned}
$$
which is pretty straightforward; however, we want to eliminate the trigonometric functions and develop an algorithm a digital computer can solve quickly.

We’ll begin by considering $x'$ and $y'$. These equations involve the addition of angles, for which we have identities:

$$
\begin{aligned}
\sin(\theta + \phi) &= \sin\theta \cos\phi + \cos\theta \sin\phi \\
\cos(\theta + \phi) &= \cos\theta \cos\phi - \sin\theta \sin\phi
\end{aligned}
$$
Making use of these identities, we obtain

$$
\begin{aligned}
x' &= r \cos\theta \cos\phi - r \sin\theta \sin\phi \\
y' &= r \sin\theta \cos\phi + r \cos\theta \sin\phi
\end{aligned}
$$
into which we can substitute our original equations for $x$ and $y$,

$$
\begin{aligned}
x' &= x \cos\phi - y \sin\phi \\
y' &= y \cos\phi + x \sin\phi
\end{aligned}
$$
and pull a $\cos\phi$ out of each equation to obtain

$$
\begin{aligned}
x' &= \cos\phi \left( x - y \tan\phi \right) \\
y' &= \cos\phi \left( y + x \tan\phi \right)
\end{aligned}
$$
Now we let $\alpha = \tan\phi$ and $\phi = \tan^{-1} \alpha$ to get

$$
\begin{aligned}
x' &= \cos\left( \tan^{-1}\alpha \right) \left( x - y\alpha \right) \\
y' &= \cos\left( \tan^{-1}\alpha \right) \left( y + x\alpha \right)
\end{aligned}
$$
and make use of another identity,

$$
\cos\phi = \frac{1}{\sqrt{1 + \tan^2\phi}}
$$
Since $\phi = \tan^{-1} \alpha$, this simplifies our equations nicely:

$$
\begin{aligned}
x' &= \frac{x - y\alpha}{\sqrt{1 + \alpha^2}} \\
y' &= \frac{y + x\alpha}{\sqrt{1 + \alpha^2}}
\end{aligned}
$$
And just like that, the trigonometric functions are gone! Well, not quite: $\alpha = \tan\phi$, so unless we already know the tangent of our rotation angle $\phi$, we haven’t really solved anything. What’s worse, the new form requires a square root, a division, and a fair bit of multiplication, which is no good at all.

## A table for two

Suppose, for a moment, that we had a small table containing values of $\alpha$ and $A = \tan^{-1} \alpha$. If these values were chosen carefully, any rotation angle $\phi$ could be determined by iteratively applying rotations from the table until the desired rotation angle was achieved. Instead of calculating $x'$ and $y'$ directly, we’ll calculate $x_i$ and $y_i$, increasing $i$ until $x_n \approx x'$ and $y_n \approx y'$ where $n$ is the number of iterations. We’ll call our starting coordinates $x_0$ and $y_0$.

We’ll use a very simple algorithm: we’ll set $\phi_0$ to our desired rotation angle, and then add or subtract each angle $A_i$ from the table to $\phi_i$ such that $\phi_n \approx 0$. If $\phi_i \gt 0$ we’ll subtract the table angle, and if $\phi_i \lt 0$ we’ll add the table angle. For mathematical convenience, we’ll define the direction of rotation as $\xi_i$ which can only be $\pm1$. Our equations are thus rewritten in an iterative form:

$$
\begin{aligned}
x_{i+1} &= \frac{1}{\sqrt{1 + \alpha_i^2}} \left( x_i - \xi_i y_i \alpha_i \right) \\
y_{i+1} &= \frac{1}{\sqrt{1 + \alpha_i^2}} \left( y_i + \xi_i x_i \alpha_i \right) \\
\phi_{i+1} &= \phi_i - \xi_i A_i
\end{aligned}
$$
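To make the angle-decomposition idea concrete, here’s a short illustrative sketch in Python. The table values used here ($A_i = \tan^{-1}(2^{-i})$ in degrees) are just one possible choice of table angles, picked purely for illustration:

```python
import math

# Illustrative table of angles A_i (in degrees); any well-chosen set works.
A = [math.degrees(math.atan(2.0**-i)) for i in range(8)]

phi = 30.0  # desired rotation angle, phi_0
for A_i in A:
    xi = 1 if phi > 0 else -1  # direction of rotation, +/-1
    phi -= xi * A_i            # phi_{i+1} = phi_i - xi_i * A_i
print(phi)  # phi_8: within half a degree of zero
```

Even though we overshoot zero repeatedly, the shrinking table angles pull the residual back, and after eight steps the remaining error is smaller than the last table entry.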
The first simplification we’ll make is to address the ugly inverse square root term, which we will call $K_i$:

$$
K_i = \frac{1}{\sqrt{1 + \alpha_i^2}}, \qquad
x_{i+1} = K_i \left( x_i - \xi_i y_i \alpha_i \right), \qquad
y_{i+1} = K_i \left( y_i + \xi_i x_i \alpha_i \right)
$$
Since $x_{i+1}$ and $y_{i+1}$ are both linear combinations of $x_i$ and $y_i$, it’s fairly easy to see what will happen as $i$ increases:

$$
\begin{aligned}
x_1 &= K_0 \left( x_0 - \xi_0 y_0 \alpha_0 \right) \\
x_2 &= K_1 K_0 \left( \left( x_0 - \xi_0 y_0 \alpha_0 \right) - \xi_1 \left( y_0 + \xi_0 x_0 \alpha_0 \right) \alpha_1 \right) \\
x_3 &= K_2 K_1 K_0 \left( \cdots \right)
\end{aligned}
$$
The coefficient in front of the $i$th term, $K(i)$, can clearly be expressed as the product of a sequence:

$$
K(i) = \prod_{j=0}^{i-1} K_j = \prod_{j=0}^{i-1} \frac{1}{\sqrt{1 + \alpha_j^2}}
$$
This means we can either add a new column to our table with each $K(i)$ value, or, if $n$ is fixed, we can simply multiply the final results $x_n$ and $y_n$ by $K(n)$. Let’s put it aside for now, and simply ignore $K_i$:

$$
\begin{aligned}
x_{i+1} &= x_i - \xi_i y_i \alpha_i \\
y_{i+1} &= y_i + \xi_i x_i \alpha_i
\end{aligned}
$$
Now we’re down to two multiplications per iteration, plus two final (or initial) multiplications to account for $K(n)$. This is significantly better than before, but we can do better still.

## 10 types of people

In the 1959 paper, Volder states that “these algorithms are suitable only for use with a binary code [which] possibly accounts for their late appearance as a numerical computing technique”. So far, what we’ve accomplished isn’t really optimized for binary (as opposed to decimal) computation. We can change that by selecting a clever value for $\alpha_i$:

$$
\alpha_i = 2^{-i}
$$
And just like that, we’ve arrived at the CORDIC. For a binary computer, multiplying a number by $\alpha_i$ is simply dividing it by a power of $2$, which is equivalent to shifting right by $i$ bits. For example, suppose $i = 3$ and $x_3 = 1$:

$$
x_3 \alpha_3 = 1 \cdot 2^{-3} = 0001.000_2 \gg 3 = 0000.001_2 = 0.125
$$
Our equations are now:

$$
\begin{aligned}
x_{i+1} &= x_i - \xi_i y_i 2^{-i} \\
y_{i+1} &= y_i + \xi_i x_i 2^{-i} \\
\phi_{i+1} &= \phi_i - \xi_i \tan^{-1}\left( 2^{-i} \right)
\end{aligned}
$$
At each iteration, we either add or subtract a bit shifted version of the previous iteration’s values and add or subtract a value from a table. No multiplication, no division, no square roots, no trigonometric functions.
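The whole iteration is compact enough to sketch in a few lines of Python. This is an illustrative floating-point model, not a hardware implementation: `2.0**-i` stands in for the right shift, and the angle table and $K(n)$ are computed on the fly rather than stored as fixed-point constants:

```python
import math

def cordic_rotate(x, y, phi, n=16):
    """Rotate (x, y) by phi degrees via n pseudorotations, then apply the
    K(n) correction. Floating point stands in for fixed-point hardware,
    so multiplying by 2.0**-i stands in for a right shift by i bits."""
    for i in range(n):
        xi = 1 if phi > 0 else -1  # direction of rotation
        x, y = x - xi * y * 2.0**-i, y + xi * x * 2.0**-i
        phi -= xi * math.degrees(math.atan(2.0**-i))
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0**(-2 * i))
    return K * x, K * y

x, y = cordic_rotate(1.0, 0.0, 30.0)
# x is close to cos(30 deg), y close to sin(30 deg)
```

Rotating the unit vector $(1, 0)$ like this computes cosine and sine of the rotation angle directly, which is exactly how CORDIC is typically used to evaluate trigonometric functions.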

Each iteration is not a true rotation of the original vector due to the $K_i$ coefficient we’ve ignored. Instead we’re adding a scaled, orthogonal version of the previous iteration to itself to calculate the current iteration. We’ll refer to this as pseudorotation. It’s apparent in figure 2 that these pseudorotations do indeed converge on our desired rotation angle and that their magnitudes converge to some multiple of the original vector.

Figure 2 shows six iterations that seem to approach our desired rotation pretty nicely. Is six iterations, in general, enough?

## A table for 10

Let’s try to determine how many iterations we’ll need, and thus how large our table is. Assuming our numbers are all represented as $n$-bit two’s complement, there’s going to be a fixed limit for how many iterations we can do. The range of numbers that can be represented is $-2^{n-1}$ to $2^{n-1}-1$. The absolute value of the largest possible number we can represent is thus $2^{n-1}$: a one followed by $n-1$ zeros in binary. As $i$ increases, we shift right by more and more bits. We’re therefore going to hit $0$ at some point. For an $n$-bit number, once we’ve shifted right $n-1$ times, we’re done:

| $i$ | $-2^{7} \gg i$ |
|----:|---------------:|
| 0 | $-128$ |
| 1 | $-64$ |
| 2 | $-32$ |
| 3 | $-16$ |
| 4 | $-8$ |
| 5 | $-4$ |
| 6 | $-2$ |
| 7 | $-1$ |
| 8 | $-1$ |
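Python’s `>>` operator on integers happens to be exactly this kind of arithmetic shift, so the behavior is easy to check with a quick sketch:

```python
# Python's >> on integers is an arithmetic shift: the sign bit is
# preserved and the result rounds towards negative infinity.
x = -128  # the most negative 8-bit two's complement value
for i in range(10):
    print(i, x >> i)
# From i = 7 onwards the result stays at -1 and never changes again.
```

A positive number behaves the same way, except that it settles at $0$ instead of $-1$ (e.g. `127 >> 7` is `0`).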
Note that this is arithmetic shifting, which maintains the sign bit and rounds towards negative infinity. The result is clear, though. When $n=8$ (an 8-bit signed number), nothing changes past iteration $i=n-1=7$. This sets an upper bound on our table size. What does such a table look like?

| $i$ | $2^{-i}$ | $A_i = \tan^{-1}\left( 2^{-i} \right)$ |
|----:|---------:|---------------------------------------:|
| 0 | 1.0000000 | $45.0000^{\circ}$ |
| 1 | 0.5000000 | $26.5651^{\circ}$ |
| 2 | 0.2500000 | $14.0362^{\circ}$ |
| 3 | 0.1250000 | $7.1250^{\circ}$ |
| 4 | 0.0625000 | $3.5763^{\circ}$ |
| 5 | 0.0312500 | $1.7899^{\circ}$ |
| 6 | 0.0156250 | $0.8952^{\circ}$ |
| 7 | 0.0078125 | $0.4476^{\circ}$ |
That’s only 8 numbers to store, since there’s no need to store $2^{-i}$. For an $n$-bit algorithm, we need to store a table of $n$ $n$-bit numbers, or $n^2$ bits total. We can’t forget $K(n)$, though, which is one more number we’ll need to store.
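Such a table is cheap to generate offline. A quick sketch in Python, using floating-point degrees for readability (a real implementation would store $n$-bit fixed-point integers instead):

```python
import math

n = 8
# A_i = atan(2**-i), in degrees; a hardware implementation would store
# these as n-bit fixed-point integers rather than floats.
table = [math.degrees(math.atan(2.0**-i)) for i in range(n)]
for i, a in enumerate(table):
    print(f"{i}  {2.0**-i:.7f}  {a:8.4f}")
```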

Let’s go back to our definition of $K(i)$ and substitute our chosen value for $\alpha_i$:

$$
K(i) = \prod_{j=0}^{i-1} \frac{1}{\sqrt{1 + 2^{-2j}}}
$$
It turns out this product converges pretty quickly:

$$
K(1) = 0.70711, \quad K(2) = 0.63246, \quad K(4) = 0.60883, \quad K(8) = 0.60726, \quad \lim_{n \to \infty} K(n) \approx 0.60725
$$
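A quick numerical check of this convergence (an illustrative sketch, not production code):

```python
import math

# K(n) is the product over i < n of 1/sqrt(1 + 2**(-2*i)).
# Watch it settle as more factors are included.
K = 1.0
for i in range(32):
    K /= math.sqrt(1.0 + 2.0**(-2 * i))
    if i in (0, 1, 3, 7, 31):
        print(i + 1, K)
# K(n) settles near 0.6072529 almost immediately.
```

Each new factor differs from $1$ by roughly $2^{-2i-1}$, so the product gains about two bits of precision per iteration; in practice $K(n)$ is simply precomputed for the fixed $n$ and stored as a constant.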
By dropping the $K_i$ coefficients, our final results $x_n$ and $y_n$ are too large by a factor of $\frac{1}{K(n)} \approx 1.647$. In some circumstances this isn’t an issue. It’s more likely, though, that we’ll need to multiply $x_n$ and $y_n$ by $K(n)$. Fortunately, this is just a single multiplication per component.

Putting it all together, we have the following equations:

$$
\begin{aligned}
\xi_i &= \operatorname{sign}(\phi_i) \\
x_{i+1} &= x_i - \xi_i y_i 2^{-i} \\
y_{i+1} &= y_i + \xi_i x_i 2^{-i} \\
\phi_{i+1} &= \phi_i - \xi_i \tan^{-1}\left( 2^{-i} \right) \\
x' &= K(n) \, x_n \\
y' &= K(n) \, y_n
\end{aligned}
$$
## Turning it around

Now that we’ve developed the rotation algorithm, going in the other direction is straightforward: given some $x$ and $y$, suppose we want the corresponding vector $\mathbf{r}$’s magnitude $r$ and angle $\theta$. This is referred to as translation or vectoring. The way to approach this is to use the rotation algorithm, but instead of iterating until $\phi=0$, we rotate the vector such that $y_n=0$. Once complete, $K(n) \, x_n = r$ and $\phi_n = \theta$.

Our iterative equations are the same,

$$
\begin{aligned}
x_{i+1} &= x_i - \xi_i y_i 2^{-i} \\
y_{i+1} &= y_i + \xi_i x_i 2^{-i} \\
\phi_{i+1} &= \phi_i - \xi_i \tan^{-1}\left( 2^{-i} \right)
\end{aligned}
$$
but the algorithm is modified slightly. We let $\phi_0 = 0$ since we don’t have any angle information to start with, and instead of choosing $\xi_i$ based on $\phi_i$, we’ll choose it based on $y_i$: if $y_i \gt 0$ we’ll rotate clockwise by setting $\xi_i=-1$, and if $y_i \lt 0$ we’ll rotate counterclockwise by setting $\xi_i=1$.
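Vectoring mode can be sketched the same way as rotation mode. Again this is an illustrative floating-point model, with the direction $\xi_i$ now chosen from the sign of $y_i$:

```python
import math

def cordic_vector(x, y, n=16):
    """Vectoring mode: pseudorotate (x, y) until y is driven to ~0.
    The accumulated angle is theta, and K(n) * x_n is the magnitude r.
    Floating point stands in for fixed-point hardware."""
    phi = 0.0
    for i in range(n):
        xi = -1 if y > 0 else 1  # rotate clockwise when y > 0
        x, y = x - xi * y * 2.0**-i, y + xi * x * 2.0**-i
        phi -= xi * math.degrees(math.atan(2.0**-i))
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0**(-2 * i))
    return K * x, phi

r, theta = cordic_vector(3.0, 4.0)
# r is close to 5, theta close to 53.13 degrees
```

Note that this sketch assumes the input vector starts in the right half-plane ($x > 0$); inputs outside that range need the kind of coarse prerotation discussed in the next section.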

## The limits

Starting our table at $A_0 = 45.0^{\circ}$ places a limit on our possible rotation angles. The maximum angle we can rotate through, $\phi_{max}$, is given by

$$
\phi_{max} = \sum_{i=0}^{n-1} \tan^{-1}\left( 2^{-i} \right)
$$
For a 16-bit number, $\phi_{max} \approx 100^{\circ}$. This means that our existing algorithm can’t actually rotate beyond $100^{\circ}$. The simplest way to solve this is to perform an initial coarse rotation of the input vector when $|\phi| \gt 90^{\circ}$:

$$
\begin{aligned}
x_0 &= -\xi \, y \\
y_0 &= \xi \, x \\
\phi_0 &= \phi - \xi \, 90^{\circ}
\end{aligned}
\qquad \xi = \operatorname{sign}(\phi)
$$
This is essentially applying a rotation matrix to the input vector to prerotate it by $\pm90^{\circ}$. The prerotation is also subtracted or added to our initial angle $\phi_0$ to bring it into the required $\pm90^{\circ}$ range. And it’s pretty much that simple.
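A sketch of this prerotation step (the helper name is hypothetical; the key point is that a $\pm90^{\circ}$ rotation is just a swap and a negation, so it costs no multiplications):

```python
# Hedged sketch: bring phi into CORDIC's convergence range with a
# +/-90 degree prerotation when |phi| exceeds 90 degrees. Rotating by
# +/-90 degrees just swaps and negates coordinates.
def prerotate(x, y, phi_deg):
    if phi_deg > 90.0:
        return -y, x, phi_deg - 90.0   # rotate by +90 degrees
    if phi_deg < -90.0:
        return y, -x, phi_deg + 90.0   # rotate by -90 degrees
    return x, y, phi_deg               # already in range

x0, y0, phi0 = prerotate(1.0, 0.0, 150.0)
# phi0 is now 60 degrees, within the convergence range
```

The adjusted $(x_0, y_0, \phi_0)$ then feeds straight into the iteration developed above.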

## References

J.E. Volder, “The Birth of CORDIC,” *J. VLSI Signal Processing*, vol. 25, no. 2, pp. 101–105, June 2000. [Online].

Texas Instruments, “Semiconductor Timeline - 1954: First commercial silicon transistor.” [Online].

J.E. Volder, “The CORDIC Computing Technique,” *Proc. Western Joint Computer Conf.*, pp. 257–261, 1959. [Online].