Annotations to Eldon Hall's Journey to the Moon.

Part Two: Chapters 5 - 9.

Chapter 5, page 69: "Since double-precision computations (28 data bits) were satisfactory for navigation accuracy, longer word length didn't seem justifiable."

A double-precision number had two of whatever a single-precision number had one of: 28 data bits, 2 sign bits, and 2 parity bits. A somewhat different design could have had, instead, 29 data bits, 1 sign bit, and 2 parity bits, somewhat more similar to the way long and short integers are handled in modern PCs. The reasons for going the way we did involve more computer science than I want to get into here. In any case, 28 data bits is equivalent to a little over 8 decimal digits, which is enough to express the radius of the Earth in inches, or the distance to the moon in fathoms (6-foot units); that is, 28 bits of precision is more than enough because we don't (or didn't) know the radius of the Earth or the distance to the moon as accurately as that. In fact, navigation data was displayed to the crew, and downlinked to Mission Control, as 5 decimal digits, corresponding to about 17 data bits. However, it was useful to keep count of time in triple precision (42 data bits, 3 sign bits, and 3 parity bits), because timekeeping can be done with accuracies considerably better than 8 decimal digits. See also under AGC Architecture, page 114.
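For readers who want to check that arithmetic, here is a rough sketch in Python (obviously not period notation); the physical constants are approximate modern values, used only to show that both quantities fit comfortably in 28 bits.

```python
# Rough check of the precision claims above; constants are approximate
# modern values, for illustration only.

TWO_TO_28 = 2 ** 28                            # 268,435,456: "a little over 8 decimal digits"
earth_radius_inches = 6_371_000 * 39.37        # mean radius ~6371 km, converted to inches
moon_distance_fathoms = 384_400_000 / 1.8288   # mean distance ~384,400 km, in 6-foot fathoms

print(f"{earth_radius_inches:,.0f}")     # ~250,800,000 -- fits under 2**28
print(f"{moon_distance_fathoms:,.0f}")   # ~210,200,000 -- fits under 2**28
# The 5 decimal digits shown to the crew need only about 17 bits:
# 2**17 = 131,072 > 99,999.
```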

This whole subject came up again in picking an "off-the-shelf" computer for the Space Shuttle: the IBM standard for the two sizes of "floating point" (scientific notation) numbers provided a short size with 21 data bits and a long size with 53 data bits. I argued in vain that 21 is nowhere near enough and that 53 is almost double the requirement, but it was considered much more important that the Shuttle computers should calculate with exactly the same precision as the IBM System/360 computers used by Mission Control.

Chapter 5, pages 70 and 73:

The most important measures of computing speed in machines of that era were: first, memory cycle time, and second, time to multiply two single-precision numbers (multiply instruction time). At least one of the multiply instruction times in Fig. 42 (page 70) and Table 7 (page 73) seems wrong to me. In the first place, it is inconceivable that AGC3, late in 1962, should multiply so much slower (634 µsec) than the Mod 3C description of 1961 (400 µsec). In the second place, I'm quite sure that my microprogram, which was common to these two versions of the machine, processed one multiplier bit in each cycle. If "cycle" means memory cycle, that makes 14 memory cycles, at 19.5 µsec, plus 1 prologue memory cycle to fetch the multiplicand from memory, plus 1 epilogue memory cycle to fetch the next instruction from memory: total 16 × 19.5 = 312 µsec. However, I think the only cycle time I had to work with was the 39-µsec instruction cycle, and I had to use a whole one of those to process each multiplier bit. Then 16 × 39 = 624 µsec, for which 634 might be a typo. But I really don't know where 400 came from.
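A minimal sketch of that timing argument in Python; the cycle counts and times are from the paragraph above, and only the packaging is mine:

```python
# AGC3 multiply timing, per the reasoning above.
MEMORY_CYCLE_US = 19.5
INSTRUCTION_CYCLE_US = 39.0   # two memory cycles
MULTIPLIER_BITS = 14          # single-precision data bits
STEPS = MULTIPLIER_BITS + 2   # + 1 prologue fetch + 1 epilogue fetch = 16

print(STEPS * MEMORY_CYCLE_US)        # 312.0 us, if one memory cycle per bit
print(STEPS * INSTRUCTION_CYCLE_US)   # 624.0 us, if one instruction cycle per bit
```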

Chapter 5, verb-noun format, page 73:

This was invented by my good friend and later boss, Alan Green, along with Joe Rocchio (also my friend but never my boss). Their user interface program had a name of its own: Pinball Game - Buttons and Lights, and featured among its comments the only Shakespearean quotation in the Apollo software. Everybody who tells lawyer jokes knows part of Dick the Butcher's lines in Henry VI, Part 2: "The first thing we do, let's kill all the lawyers." Almost nobody remembers that Dick's next idea is to behead Lord Say, whose achievement (or crime) was to found a grammar school. The quotation in the software is from rebel leader Jack Cade's indictment of Lord Say: "It will be proved to thy face that thou hast men about thee that usually talk of a noun and a verb, and such abominable words as no Christian ear can endure to hear." The great success of the noun-verb format may well be interpreted, in the light of modern linguistics, as making good use of the central role of verb-phrases and noun-phrases in the Universal Grammar discussed by MIT's Steven Pinker in The Language Instinct (HarperPerennial, 1995).

Chapter 5, References, page 78:

References 4 and 6, to Report R-393 that I co-authored with Albert Hopkins and Ramon Alonso, are properly to AGC4, the next model in the series, rather than to the AGC3 model being discussed in this chapter. But I point them out anyway, as being my footnotes! This report, by the way, passed through the IEEE's Transactions on Electronic Computers (December 1963) to become my only author credit in hard covers, a chapter in Computer Structures: Readings and Examples, by C. Gordon Bell and Allen Newell (McGraw Hill, 1971).

Chapter 6, Fig. 50, page 82:

There's yet another variation on AGC3's multiply time: 640 µsec, contrasted with 90 µsec with Micrologic. Accepting the 640 as meaning 624 (my note against pages 70 and 73), I recall the time with Micrologic (AGC4) as being 8 memory cycles of 11.72 µsec, or 93.76 µsec. What the Micrologic really allowed us to do, to make this great improvement, was to slice each memory cycle into 12 "pulse times" as compared with perhaps 4 in the earlier (longer) cycle. If I've recalled those numbers correctly, each pulse time shrank from 19.5/4 = 4.875 µsec to 11.72/12 = 0.977 µsec, a 5-to-1 speedup in processing at the lowest level. Now the multiply instruction started with a memory cycle time in which the multiplicand was not only fetched from memory but placed (or not placed) in the product according to the first (rightmost) multiplier bit; then there followed 6 memory cycle times in each of which two multiplier bits were processed; finally in the last memory cycle time the 14th and last multiplier bit was processed while the next instruction was being fetched from memory. The faster logic, combined with a less wasteful design, sped up the critical multiplications by a factor of almost 7. As Fig. 50 shows, the overall speedup was 2.5 to 1 rather than 5 to 1, because many of the instructions had to leave some pulse times unused while waiting for the relatively slow memory.
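Again as a sketch, with the numbers taken from the paragraph above and the arrangement mine:

```python
# AGC4 (Micrologic) timing arithmetic from the paragraph above.
agc3_pulse_us = 19.5 / 4     # 4 pulse times per 19.5-us memory cycle -> 4.875
agc4_pulse_us = 11.72 / 12   # 12 pulse times per 11.72-us memory cycle -> ~0.977
print(agc3_pulse_us / agc4_pulse_us)   # ~5.0: the 5-to-1 low-level speedup

# Multiply: 1 cycle (fetch multiplicand + first bit), then 6 cycles at
# 2 bits each, then 1 final cycle overlapped with the next instruction
# fetch = 8 memory cycles in all.
agc4_multiply_us = 8 * 11.72
print(agc4_multiply_us)          # 93.76 us
print(624 / agc4_multiply_us)    # ~6.7: "a factor of almost 7"
```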

Chapter 6, page 85: "...NASA management was quite receptive to such innovative ideas also, provided cost, schedule, and reliability could be maintained or improved. As a result, the decision [to adopt integrated circuits] came much more easily than might be expected..."

When I said luck was among Eldon's gifts, this is a big part of what I had in mind. We're talking about government here, people, can you believe it??

Chapter 6, page 87: "...fortunately for the Apollo Program, the designers had the freedom that allowed [computer design] evolution."

Some of that luck rubbed off on Al Hopkins, Ray Alonso, Herb Thaler, and me. This is where the four of us sat around a table for a week or so and sketched out the AGC4 functional design.

Chapter 6, Fig. 58, page 89:

Yet more variations in how long it took to multiply! But what's really significant about this tabulation is the repertoire of 11 instructions rather than 8. Given that we still had only 3 bits for the operation code, allowing for 8 combinations, you might well wonder how it was done.

To explain this, I need to look back at what I think was the single most valuable contribution I made to the AGC's capabilities: the INDEX instruction. In all computers except the most primitive of pioneering prototypes (such as Mod 1B), part of each instruction word is devoted to specifying indexing, that is, the trick by which a single instruction can address data in different memory locations at different times, combining the unchanging value of its own address field with the current value in an "index register." For example, a typical 24-bit machine of that time had an instruction word format composed of 3 fields: an operation code of 6 bits (64 instructions!), an indexing code of 3 bits (accessing any of 7 index registers, plus a no-indexing state), and an address field of 15 bits.

But of course we didn't have 24 bits, we had 15: an operation code of 3 bits and an address field of 12 bits, none of which could very well be spared to devote to indexing. So I borrowed an obscure feature from an obscure machine by a now unremembered computer company, Bendix: an instruction that could make any data location in memory act, momentarily, like an index register. It picked up the data, added it to the next instruction, and performed the result as the next instruction, in place of the one stored in memory. That at least was my variation: the INDEX instruction made the addition affect the entire instruction word instead of just the address, though we made no particular use of this fact in programming AGC3.

What I brought to the AGC4 design table was the insight that adding a data word to an instruction word could not only change the operation code as well as the address, it could create operation codes that, because of overflow, were different from any that could be stored in memory. Suddenly we had potentially 16 instruction codes instead of 8, at the price of using an INDEX instruction for every occurrence of each new "extended" instruction, whether it needed ordinary indexing or not (that's why the AGC4 multiply time is given sometimes as 93.76 µsec and sometimes, allowing for the INDEX instruction's time, as 117.2 µsec).
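The overflow trick is easier to see with a toy model. In this Python sketch the field widths (3-bit operation code, 12-bit address) are from the text; the mechanics are a simplified illustration, not the AGC's actual one's-complement logic:

```python
def index_then_execute(stored_instr: int, index_value: int) -> tuple[int, int]:
    """INDEX adds a data word to the whole next instruction word.

    The sum lives in a register, so it can carry past the 15th bit even
    though a word in memory cannot -- that overflow is what creates the
    "extended" opcodes 8-15.
    """
    word = stored_instr + index_value
    opcode = word >> 12       # 0-7 if fetched from memory; 8-15 only via overflow
    address = word & 0x0FFF   # low 12 bits
    return opcode, address

# Stored opcode 7, indexed by a value with a 1 in the opcode field:
op, addr = index_then_execute((7 << 12) | 0x800, 1 << 12)
print(op, hex(addr))   # 8 0x800 -- an opcode no memory word can hold
```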

On the wave of this triumph, I bullied and blustered the rest of the team into accepting also my design for a divide instruction, which (as I was able to convince them eventually) did not greatly increase the machine's complexity. I believe it was Al who then demanded a subtract instruction so that programmers wouldn't have to code subtraction in a roundabout fashion. Finally we had a machine with straightforward instructions for the four basics: add, subtract, multiply, and divide. (I considered selling them a square-root instruction too, but decided to quit while I was ahead.)

Chapter 7, page 95: "Raytheon tackled rope fabrication and developed a machine for sense-line wiring. A punched Mylar tape (the output from the MIT Instrumentation Laboratory's software development process) controlled the machine ... (Plate 18)."

This was the other major function of my "Yul System," which I called manufacturing a program. Once the assembled program had been tested to the point where the powers that be were willing to invest the long fabrication cycle to make it into rope, Yul encoded the binary form of the code into a highly specialized 8-channel teletype code (specified by Raytheon for their machine) and punched that into a "paper" tape. As Eldon says, the tape we used for this purpose was not the usual pink paper at all but the more robust material, Mylar: almost untearable, impervious to oil and other factory-environment hazards, and unable to develop extra holes through any small accident.

Plate 18 shows the tape reader with its takeup spool on the right, driving the larger machine. Each 8-bit row of the tape set relay switches; when settings were made by reading half a dozen or so of these rows, the fabricating machine shifted the rope frame up or down, and toward or away from the operator. The white plastic funnel (at the left end of the needle in the operator's hand) stayed in a fixed position, and the motion of the rope frame brought a particular core in line with the axis of the pair of funnels: the one you can see and its mirror image on the other side. The operator had a springlike coil of wire inside the needle; what she had to do was poke the needle carefully through the bundle of wires already passing through that core (without scraping any insulation off the 100 or so already threaded through it!), thus adding one more wire to the bundle. Then she pressed a switch to let the machine read more tape and advance the next core to the funnel position, and she passed the needle through the opposite way.

Plate 19 shows the result. I remember meeting 3 or 4 of these operators, all white-haired ladies with gentle voices and an infinitely patient dexterity.

The Yul System's manufacturing function had another, more obscure, task which was entirely independent of the flight software being developed: the inhibit line tape. Before any of the sense lines were woven in, each rope frame had to have a set of inhibit lines woven to establish which core went with which address. I had to cobble up the appropriate numerical pattern in the same format as an assembled program, and run Yul System manufacturing with that input, so that the output tape would run the fabricating machine as required to weave the inhibit lines. Getting this simple task working was a big help in establishing confidence in the primary tape-punching software. Fig. 63 shows why these lines were called "inhibit" and how they work, provided that you understand that, of the 4 lines shown, the top 2 carry the core address and the bottom 2 carry the complement of the address. That is, when the direct binary address is 00, 01, 10, or 11, the complement address is 11, 10, 01, or 00 respectively, where each "1" represents a current flowing in that wire at core selection time, and each "0" represents absence of such a current. What happens at core selection time is that the Set/Reset line carries a current in the direction that would "set" all the cores (switch their magnetization to clockwise), were it not for the opposing currents flowing in half of the inhibit lines, which neutralize or overwhelm the Set current in every core but one. The result is that one core is set (magnetized clockwise) and all the rest, 3 in the illustration but 511 in each actual rope module, remain reset (magnetized counterclockwise). Then at sense time, the Set/Reset line carries a current, opposite to the setting current, which resets the magnetization of that one set core, inducing currents (logical "1"s) in the sense lines threading it.
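As a toy model of that selection logic, here is a Python sketch with 2 address bits and 4 cores instead of the real module's 512; the direct/complement encoding is from the text, and the code itself is only an illustration:

```python
def set_cores(address: int, num_bits: int = 2) -> list[bool]:
    """Return which cores end up Set when `address` is applied."""
    result = []
    for core in range(2 ** num_bits):
        inhibited = False
        for bit in range(num_bits):
            addr_bit = (address >> bit) & 1
            core_bit = (core >> bit) & 1
            # This core is threaded by the direct line for the bit if its
            # own bit is 0, by the complement line if 1.  Current flows in
            # the direct line when the address bit is 1, in the complement
            # line when it is 0 -- so an opposing current reaches the core
            # exactly when the two bits differ.
            if addr_bit != core_bit:
                inhibited = True
        result.append(not inhibited)   # Set current wins only if unopposed
    return result

print(set_cores(0b10))   # [False, False, True, False]: only core 2 is set
```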

Chapter 9, page 112: "Fortunately for the Apollo guidance computer, MIT's system and software designers came to the rescue and investigated the computational capabilities. They ... rose to the challenge and pointed out the deficiencies in the memory capacity and computational speed of the LVDC."

As one of those designers, I was on this rescue team and had my first experience of representing the Lab on a road trip. Analyzing the LVDC and what it did, I saw quickly that it was a machine designed quite narrowly for its task, which was to steer and throttle rocket engines using a technique called "adaptive polynomials." When I saw how clumsy and inefficient it would be at the nimble multi-tasking required of the AGC, I made up a nasty nickname for the LVDC: "APE," for Adaptive Polynomial Engine, in parody of Babbage's 19th-century Difference Engine concept. Several of the startling comparisons on page 113 were the results of my analysis.

The Houston trip itself holds some curious memories. Johnson Space Center was not yet built, and NASA's Manned Spacecraft Center was housed in a former oil company building designed by Frank Lloyd Wright, near Hobby airport (then Houston's only airport), about halfway down to Galveston. But the key meeting wasn't even there; it was in a large bedroom in an airport motel. We all sat around on the two king-size beds and presented a summary report to NASA people and some BellComm people led by one Ike Nahama. It must have stunned them because the rest of the trip was non-memorable, something of an anticlimax. Much of the effect was caused by an immortal sentence in Dick Battin's cover letter for our report, something like "We are astonished that BellComm could have arrived at the conclusions in their report." This was an attitude that I and everyone on the analysis team felt strongly, and we had urged Dick to express it strongly. After the NASAs and BellComms got over saying "You are what??!!", they were motivated to go over our detailed findings quite carefully. Dick and other high officials at the Lab were worried that we might have overstated our case, but history's finding became clear soon enough. So that's how to deal with naysayers, evidently: hit them with a bigger hammer.

Footnote to this history: I saw Ike Nahama about a decade later when Eldon's group (to which I then belonged) was pitching our approach to fault-tolerant computing. The pitch we brought to an audience that included Ike and other BellComms revolved around how much better our new designs were than that of the old AGC, and used some of the same arguments about voting-type redundancy that Ike had used on us in the earlier meeting. I couldn't resist breaking out of the script and reflecting on the delicious irony, while maintaining that both statements expressed the important truth of their respective times. Ike grinned and bore it like a good sport.

Go on to Blair-Smith's annotations for chapters 10 - 15.

