
AGC: Conference 4 - Technological change

Apollo Guidance Computer History Project


September 6, 2002

Technological change

ED COPPS: These were amazing things, because we had automata doing things that we had not planned for them to do. In other words, we, the designers, were finding that with these computers, which were new on the scene then, we didn't know what the new stuff could do. Dan talks about the old analog computers; well, they couldn't have done this kind of stuff. We were the new guys coming to grips with new capabilities, which were now placing conceptual issues within the reach of actual implementation. And we'd have to make judgments about how much one should dare to let it have its own head, you know? We designed it in a very abstract way.

In other words, we didn't tell it any more details than we had to. We left the details to evolve after the designers had been removed from the project, which I guess is probably something that people take for granted now, but to us, we didn't know. We didn't have a precedent to decide what was crazy to do or what was the right thing to do. And we, actually, I think, lucked out many times. We really just lucked out and the astronauts lucked out. I mean, they took a big leap of faith to trust that we could do it. I thought that was a remarkable thing too.

DAN LICKLY: I want to say about these guys--I think it was due to Dick Battin and Hal Laning, how both of them have very general mindsets--they're really mathematicians, and they went about things in a very generalized way. So when these guys came to implement Polaris, it was all done. Our computer had operations like unit, for unitizing a vector, and cross: V-cross-V--you just say that, give it two vectors, and you'd get the cross product. V times matrix, and so on. All of that was automated in this system, so they had a really generalized linear algebra, vector-matrix algebra approach, very high powered for those days. It did it in a very generalized way, and you were conceptually working at a higher level, which made things much easier to think about and do. I don't know who got started in that direction, but I think it really worked out well for the SGA group.
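The interpreter operations Lickly lists--unit, V-cross-V, V times matrix--amount to a small vector-matrix algebra. A rough Python sketch of what those primitives compute; the function names here are illustrative, not actual AGC interpreter opcodes:

```python
import math

def unit(v):
    """UNIT: normalize a 3-vector to unit length."""
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

def cross(a, b):
    """V-cross-V: cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def vxm(v, m):
    """V times matrix: row 3-vector times a 3x3 matrix."""
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]
```

Having these as single operations is what let programmers "say" a cross product in one step instead of hand-coding nine multiplies and six adds.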

DAVID MINDELL: And this was on Polaris?

DAN LICKLY: On Polaris we did not. We had one velocity--we even called it velocity; it wasn't a vector. But when they got to Apollo, these guys got involved and things were done with extremely generalized state transition matrices. And the things that--

DAVID MINDELL: The hardware was evolving, so at first, when you put all this stuff in, you didn't know what the masses were and you didn't necessarily know where the thrusters were, and all that sort of stuff?

ED COPPS: Well, in Polaris, the computer was essentially what was called a digital differential analyzer. It was a computer that directly modeled a differential equation. And the differential equation that it modeled required a component--a transistor or a shift register--for every state variable. It wasn't programmable in the sense that the registers could be used for one thing now and for another thing later.

DAN LICKLY: It wasn't general purpose?

ED COPPS: It was not in any sense a general purpose computer. It was a digital computer, but one very primitively designed to represent a certain differential equation. Period! And that was because that was all that could be packed into the square footage and the poundage that was available in the vehicle.

DAN LICKLY: All it was doing was trying to get your cut-off velocity, and when you got it, boom! You sent the thing going. You were done.
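The machine Copps and Lickly describe can be sketched in modern terms: a dedicated accumulator integrates increments every cycle until the cut-off condition trips. This is a loose Python illustration of that idea, not the Polaris mechanization; the function name, acceleration profile, and numbers are all hypothetical.

```python
def dda_cutoff(accel_profile, v_cutoff, dt):
    """Integrate acceleration increments step by step until the
    cut-off velocity is reached; return (time, velocity), or None
    if cut-off was never reached."""
    v = 0.0  # one dedicated register per state variable, as in a DDA
    for step, a in enumerate(accel_profile):
        v += a * dt            # incremental integration, one pulse per cycle
        if v >= v_cutoff:      # cut-off comparison: engine shutdown signal
            return step * dt, v
    return None
```

Once the comparison fires, the job is over--which is why nothing more general was needed in the missile.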

DAVID MINDELL: Just like the V2?

ED COPPS: Of course, by that time, there were general purpose computers. There's no question about that. We're not talking about what the state of digital computing was. It was the state of instrumentation that could be applied in a missile. But, when Apollo came along, the technology had slowly, slowly, slowly by some standards, but, actually, every six months or whatever it is, doubled in capacity. So, it was just a completely different hardware environment that changed and presented then the opportunity or the challenge as to what to do with that. You know, how do you deal with it? Now that you can do it--what is it that you want to do?

DAN LICKLY: Right. But they had a lot of fights with Rockwell, who always wanted to do things in a very simplified way.

DAN LICKLY: They thought our approach was way too general, too mathematical.

ED COPPS: I will say even while we worked on the program, the technology kept coming along and saved our ass, because we would run out of space and the technology would get us more memory, a little bit more memory, and it would give us a little bit more speed. It's not possible to remember these steps, but we eventually ended up with something like 38,000 words of memory.

DAN LICKLY: It started with 32, and then they were able to extend.

ED COPPS: Well they actually started with much less than 32. I got a call at one point from Raytheon, and they said, "We think we can go from 38,000 to 50,000," or something. "If we do this and if we do that, and if we wait another few months and so forth, we can do that. Shall we do it?"

And we huddled, and we were so close to fitting things and so close to going that we decided, no. This will be enough. We'll figure out some way to do it with what we've got, rather than keep, you know, changing the parameters. That probably, in retrospect, wasn't a very smart decision. But that was the kind of push and pull that was happening in the technology as the thing was being developed.

DAN LICKLY: Well, that's why people had these things on their walls: "Better is the enemy of good." Whenever you were trying to make something better, that carried a lot of risk, as we learned a couple of times after that.

JOHN MILLER: There was a tendency to sort of say, "If this is a project that is so ambitious, we have to do everything as good as we possibly can." The logic tended to lead you to that conclusion, and I believe it was Shea, and maybe others would say, "Better is the enemy of good enough."

We had to finally get ourselves to the mental position where we could say, "That is good enough. Goddamn it. We're not going to spend any more time on that, because we've got all these other problems that we have to deal with." And I think that was a very important intellectual breakthrough, when we, as the society of people that were working on it, could really say to ourselves, "I understand."

JOHN GREEN: Along the same lines--every time you make a change in something like this, you have to retest, and you spend months doing the retest. If they had expanded that memory to 50,000 words, there would have been months of retesting to make sure it was designed properly and that no hardware failures had been built into the new module. That brings up an item, if you're interested in the guidance computer. I'm trying to remember--I believe it was a woman who was a physics major who headed up the reliability design of the computer. It was basically built out of one type of chip, a three-input NOR gate. And the reliability that was demanded of the semiconductors--I seem to recall that they would order semiconductors from the manufacturer, and every semiconductor had to be tested. They were then received and every unit was tested again, and if one unit failed, the whole lot was rejected. We really had the semiconductor industry by the tail at that point.
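A single chip type could suffice because NOR is functionally complete: NOT, OR, and AND can all be composed from it. A small Python sketch of that composition--the gate names are ours, not AGC schematic labels:

```python
def nor3(a, b, c):
    """The one building block: a three-input NOR gate."""
    return not (a or b or c)

def not_(a):
    """NOT from NOR: tie all three inputs together."""
    return nor3(a, a, a)

def or3(a, b, c):
    """OR: a NOR followed by a NOT."""
    return not_(nor3(a, b, c))

def and2(a, b):
    """AND via De Morgan: NOR of the complements (third input grounded)."""
    return nor3(not_(a), not_(b), False)
```

Building every logic function from one gate simplified procurement and testing: only one part type had to meet the reliability screening Green describes.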


site last updated 02-01-2003 by Alexander Brown