You’d probably want to use a better oscillator as well. While you can certainly connect the output of a TTL inverter to the input, it’s better to have an oscillator with a more predictable frequency. Such an oscillator can be constructed fairly easily using a quartz crystal that comes in a little flat can with two wires sticking out. These crystals vibrate at very specific frequencies, usually at least a million cycles per second. A million cycles per second is called a megahertz and abbreviated MHz. If the Chapter 17 computer were constructed out of TTL, it would probably run fine with a clock frequency of 10 MHz. Each instruction would execute in 400 nanoseconds. This, of course, is much faster than anything we conceived when we were working with relays.
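The arithmetic behind that figure is worth a quick check. Here's a sketch, assuming (as the 400-nanosecond figure implies) that each instruction on the Chapter 17 machine takes four clock cycles:

```python
# Back-of-the-envelope timing for the Chapter 17 machine at 10 MHz.
# Assumption: each instruction takes 4 clock cycles, which is what the
# 400-nanosecond figure implies.
clock_hz = 10_000_000               # 10 MHz oscillator
cycles_per_instruction = 4

period_ns = 1e9 / clock_hz          # one clock cycle in nanoseconds
instruction_ns = period_ns * cycles_per_instruction

print(period_ns)                    # 100.0 (nanoseconds per clock cycle)
print(instruction_ns)               # 400.0 (nanoseconds per instruction)
```

Double the clock frequency and each instruction takes half the time, which is why clock speed matters so much in what follows.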
The other popular chip family was (and still is) CMOS, which stands for complementary metal-oxide semiconductor. If you were a hobbyist designing circuits from CMOS ICs in the mid-1970s, you might use as a reference source a book published by National Semiconductor and available at your local Radio Shack entitled CMOS Databook. This book contains information about the 4000 (four thousand) series of CMOS ICs.
The power supply requirement for TTL is 4.75 to 5.25 volts. For CMOS, it’s anything from 3 volts to 18 volts. That’s quite a leeway! Moreover, CMOS requires much less power than TTL, which makes it feasible to run small CMOS circuits from batteries. The drawback of CMOS is lack of speed. For example, the CMOS 4008 4-bit full adder running at 5 volts is only guaranteed to have a propagation time of 750 nanoseconds. It gets faster as the power supply voltage gets higher: 250 nsec at 10 volts and 190 nsec at 15 volts. But the CMOS device doesn’t come close to the TTL 4-bit adder, which has a propagation time of 24 nsec. (Twenty-five years ago, the trade-off between the speed of TTL and the low power requirements of CMOS was fairly clear-cut. Today there are low-power versions of TTL and high-speed versions of CMOS.)
On the practical side, you would probably begin wiring chips together on a plastic breadboard.
Each short row of 5 holes is electrically connected underneath the plastic base. You insert chips into the breadboard so that a chip straddles the long central groove and the pins go into the holes on either side of the groove. Each pin of the IC is then electrically connected to 4 other holes. You connect the chips with pieces of wires pushed into the other holes.
You can wire chips together more permanently using a technique called wire-wrapping. Each chip is inserted into a socket that has long square posts. Each post corresponds to a pin of the chip. The sockets themselves are inserted into thin perforated boards. From the other side of the board, you use a special wire-wrap gun to tightly wrap thin pieces of insulated wire around the post. The square edges of the post break through the insulation and make an electrical connection with the wire.
If you were actually manufacturing a particular circuit using ICs, you’d probably use a printed circuit board. Back in the old days, this was something a hobbyist could do. Such a board has holes and is covered by a thin layer of copper foil. Basically, you cover all the areas of copper you want to preserve with an acid-resistant material and use acid to etch away the rest. You can then solder IC sockets (or the ICs themselves) directly to the copper on the board. But because of the very many interconnections among ICs, a single area of copper foil is usually inadequate. Commercially manufactured printed circuit boards have multiple layers of interconnections.
By the early 1970s, it became possible to use ICs to create an entire computer processor on a single circuit board. It was really only a matter of time before somebody put the whole processor on a single chip. Although Texas Instruments filed a patent for a single-chip computer in 1971, the honor of actually making one belongs to Intel, a company started in 1968 by former
Fairchild employees Robert Noyce and Gordon Moore. Intel’s first major product was, in 1970, a memory chip that stored 1024 bits, which was the greatest number of bits on a chip at that time.
Intel was in the process of designing chips for a programmable calculator to be manufactured by the Japanese company Busicom when they decided to take a different approach. As Intel engineer Ted Hoff put it, “Instead of making their device act like a calculator with some programming abilities, I wanted to make it function as a general-purpose computer programmed to be a calculator.” This led to the Intel 4004 (pronounced forty oh four), the first “computer on a chip,” or microprocessor. The 4004 became available in November 1971 and contained 2300 transistors. (By Moore’s Law, microprocessors made 18 years later should contain about 4000 times as many transistors, or about 10 million. That’s a fairly accurate prediction.)
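That “about 4000 times” figure follows directly from Moore’s Law if you take the transistor count to double every 18 months. A quick sketch of the arithmetic:

```python
# Moore's Law projection from the 4004, assuming a doubling of the
# transistor count every 18 months.
transistors_4004 = 2300
years = 18
doublings = years * 12 // 18        # 12 doublings in 18 years
factor = 2 ** doublings             # 4096, i.e., "about 4000 times"
projected = transistors_4004 * factor

print(factor)                       # 4096
print(projected)                    # 9420800, roughly 10 million
```

Twelve doublings give a factor of 4096, and 2300 times 4096 is just under 10 million transistors.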
Having told you the number of its transistors, I’ll now describe three other important characteristics of the 4004. These three measures are often used as standards for comparison among microprocessors since the 4004.
First, the 4004 was a 4-bit microprocessor. This means that the data paths in the processor were only 4 bits wide. When adding or subtracting numbers, it handled only 4 bits at a shot. In contrast, the computer developed in Chapter 17 has 8-bit data paths and is thus an 8-bit processor. As we’ll soon see, 4-bit microprocessors were surpassed very quickly by 8-bit microprocessors. No one stopped there. In the late 1970s, 16-bit microprocessors became available. When you think back to Chapter 17 and recall the several instruction codes necessary to add two 16-bit numbers on an 8-bit processor, you’ll appreciate the advantage that a 16-bit processor gives you. In the mid-1980s, 32-bit microprocessors were introduced and have remained the standard for home computers since then.
Second, the 4004 had a maximum clock speed of 108,000 cycles per second, or 108 kilohertz (kHz). Clock speed is the maximum speed of an oscillator that you can connect to the microprocessor to make it go. Any faster and it might not work right. By 1999, microprocessors intended for home computers had hit the 500-megahertz point, about 5000 times faster than the 4004.
Third, the addressable memory of the 4004 was 640 bytes. This seems like an absurdly low amount; yet it was in line with the capacity of memory chips available at the time. As you’ll see in the next chapter, within a couple of years microprocessors could address 64 KB of memory, which is the capability of the Chapter 17 machine. Intel microprocessors in 1999 can address 64 terabytes of memory, although that’s overkill considering that most people have fewer than 256 megabytes of RAM in their home computers.
These three numbers don’t affect the capability of a computer. A 4-bit processor can add 32-bit numbers, for example, simply by doing it in 4-bit chunks. In one sense, all digital computers are the same. If the hardware of one processor can do something another can’t, the other processor can do it in software; they all end up doing the same thing. This is one of the implications of Alan Turing’s 1937 paper on computability.
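To make the chunking idea concrete, here is a sketch in Python of how a narrow processor handles wide numbers: the loop mimics a 4-bit processor adding two 32-bit numbers one nibble at a time, passing a single carry bit from each chunk to the next, just as the Chapter 17 machine chains 8-bit additions.

```python
# Sketch: adding two 32-bit numbers the way a 4-bit processor would,
# 4 bits at a time, with a 1-bit carry propagated between chunks.
def add_32bit_in_nibbles(a: int, b: int) -> int:
    result = 0
    carry = 0
    for shift in range(0, 32, 4):          # eight 4-bit chunks
        nibble_a = (a >> shift) & 0xF
        nibble_b = (b >> shift) & 0xF
        total = nibble_a + nibble_b + carry
        result |= (total & 0xF) << shift   # keep the low 4 bits of this chunk
        carry = total >> 4                 # carry into the next chunk
    return result & 0xFFFFFFFF             # discard any carry out of bit 31

print(hex(add_32bit_in_nibbles(0x12345678, 0x0FEDCBA8)))  # 0x22222220
```

The answer is correct either way; the 4-bit processor simply needs eight additions (plus the bookkeeping around them) where a 32-bit processor needs one, which is the speed point made below.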
Where processors ultimately do differ, however, is in speed. And speed is a big reason why we’re using computers to begin with.
The maximum clock speed is an obvious influence on the overall speed of a processor. That clock speed determines how fast each instruction is being executed. The processor data width affects speed as well. Although a 4-bit processor can add 32-bit numbers, it can’t do it nearly as fast as a 32-bit processor. What might be confusing, however, is the effect on speed of the maximum amount of memory that a processor can address. At first, addressable memory seems to have nothing to do with speed and instead reflects a limitation on the processor’s ability to perform certain functions that might require a lot of memory. But a processor can always get around the memory limitation by using some memory addresses to control some other medium for saving and retrieving information. (For example, suppose every byte written to a particular memory address is actually punched on a paper tape, and every byte read from that address is read from the tape.) What happens, however, is that this process slows down the whole computer. The issue again is speed.
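The paper-tape trick is an early form of what's now called memory-mapped I/O. Here's a sketch of the idea, not any real processor's scheme; the class name and the choice of 0xFFFF as the magic address are both hypothetical:

```python
# Sketch: one "magic" memory address acts as a window onto an external
# storage medium (standing in for the paper tape). Everything here is
# illustrative; the address 0xFFFF is an arbitrary assumption.
class TapeBackedMemory:
    TAPE_PORT = 0xFFFF                     # hypothetical magic address

    def __init__(self) -> None:
        self.ram = bytearray(65536)        # the 64 KB the processor can address
        self.tape = []                     # stands in for the paper tape
        self.read_pos = 0                  # current read position on the tape

    def write(self, address: int, value: int) -> None:
        if address == self.TAPE_PORT:
            self.tape.append(value)        # "punch" the byte onto the tape
        else:
            self.ram[address] = value      # ordinary RAM write

    def read(self, address: int) -> int:
        if address == self.TAPE_PORT:
            value = self.tape[self.read_pos]   # read the next byte off the tape
            self.read_pos += 1
            return value
        return self.ram[address]           # ordinary RAM read

mem = TapeBackedMemory()
mem.write(0xFFFF, 42)                      # goes to the tape, not to RAM
print(mem.read(0xFFFF))                    # 42, retrieved from the tape
```

In software the two paths look identical; in hardware the tape path is vastly slower than a RAM access, which is exactly why this workaround slows the whole computer down.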
Of course, these three numbers indicate only roughly how fast the microprocessor operates. These numbers tell you nothing about the internal architecture of the microprocessor or about the efficiency and capability of the machine-code instructions. As processors have become more sophisticated, many common tasks previously done in software have been built into the processor. We’ll see examples of this trend in the chapters ahead.
Even though all digital computers have the same capabilities, even though they can do nothing beyond the primitive computing machine devised by Alan Turing, the speed of a processor of course ultimately affects the overall usefulness of a computer system. Any computer that’s slower than the human brain in performing a set of calculations is useless, for example. And we can hardly expect to watch a movie on our modern computer screens if the processor needs a minute to draw a single frame.
But back to the mid-1970s. Despite the limitations of the 4004, it was a start. By April 1972, Intel had released the 8008, an 8-bit microprocessor running at 200 kHz that could address 16 KB of memory. (See how easy it is to sum up a processor with just three numbers?) And then, in a five-month period in 1974, both Intel and Motorola came out with microprocessors that were intended to improve on the 8008. These two chips changed the world.