IBM 801

The 801 was an experimental central processing unit (CPU) design developed by IBM during the 1970s. It is considered to be the first modern RISC design, relying on processor registers for all computations and eliminating the many variant addressing modes found in CISC designs. Originally developed as the processor for a telephone switch, it was later used as the basis for a minicomputer and a number of products for their mainframe line. The initial design was a 24-bit processor; that was soon replaced by 32-bit implementations of the same concepts and the original 24-bit 801 was used only into the early 1980s.

The 801 was extremely influential in the computer market. Armed with huge amounts of performance data, IBM was able to demonstrate that the simple design was able to easily outperform even the most powerful classic CPU designs, while at the same time producing machine code that was only marginally larger than the heavily optimized CISC instructions. Applying these same techniques even to existing processors like the System/370 generally doubled the performance of those systems as well. This demonstrated the value of the RISC concept, and all of IBM's future systems were based on the principles developed during the 801 project.

For his work on the 801, John Cocke was recognized with several awards and medals, including the Turing Award in 1987, the National Medal of Technology in 1991, and the National Medal of Science in 1994.

History

Original concept

In 1974, IBM began examining the possibility of constructing a telephone switch to handle one million calls an hour, or about 300 calls per second. They calculated that each call would require 20,000 instructions to complete, and when one added timing overhead and other considerations, such a machine required performance of about 12 MIPS.[1] This would require a significant advance in performance; their current top-of-the-line machine, the IBM System/370 Model 168 of late 1972, offered about 3 MIPS.[2]
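
The arithmetic behind that estimate can be sketched as follows. Only the call rate, the 20,000-instructions-per-call figure, and the roughly 12 MIPS target come from the source; the overhead factor used below is an assumption chosen purely for illustration.

  # Rough sizing arithmetic for the proposed switch. The 2x allowance for
  # timing overhead is an assumed illustrative factor, not a figure from the
  # 801 papers; the other numbers are those quoted above.
  calls_per_hour = 1_000_000
  instructions_per_call = 20_000

  calls_per_second = calls_per_hour / 3600                    # ~278, i.e. "about 300"
  raw_mips = calls_per_second * instructions_per_call / 1e6   # ~5.6 MIPS of pure call handling
  with_overhead = raw_mips * 2                                 # assumed allowance for timing overhead

  print(f"{calls_per_second:.0f} calls/s, {raw_mips:.1f} MIPS raw, "
        f"~{with_overhead:.0f} MIPS with overhead")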

The group working on this project at the Thomas J. Watson Research Center, including John Cocke, designed a processor for this purpose. To reach the required performance, they considered the sort of operations such a machine required and removed any that were not appropriate. This led, for instance, to the removal of the floating-point unit, which would not be needed in this application. More critically, they also removed many of the instructions that worked on data in main memory and left only those instructions that worked on the internal processor registers, as these were much faster to use and the simple code in a telephone switch could be written to use only these types of instructions. The result of this work was a conceptual design for a simplified processor with the required performance.[1]

The telephone switch project was canceled in 1975, but the team had made considerable progress on the concept and in October IBM decided to continue it as a general-purpose design. With no obvious project to attach it to, the team decided to call it the "801" after the building they worked in. For the general-purpose role, the team began to consider real-world programs that would be run on a typical minicomputer. IBM had collected enormous amounts of statistical data on the performance of real-world workloads on their machines and this data demonstrated that over half the time in a typical program was spent performing only five instructions: load value from memory, store value to memory, branch, compare fixed-point numbers, and add fixed-point numbers. This suggested that the same simplified processor design would work just as well for a general-purpose minicomputer as a special-purpose switch.[3]

Rationale against use of microcode

This conclusion flew in the face of contemporary processor design, which was based on the concept of using microcode. IBM had been among the first to make widespread use of this technique as part of their System/360 series. The 360s and 370s came in a variety of performance levels that all ran the same machine language code. On the high-end machines, many of these instructions were implemented directly in hardware, for example by a dedicated floating-point unit, while low-end machines could instead simulate those instructions using sequences of other instructions encoded in microcode. This allowed a single application binary interface to run across the entire lineup and gave customers confidence that if more performance was ever needed they could move up to a faster machine without any other changes.[4]

Microcode allowed a simple processor to offer many instructions, which had been used by the designers to implement a wide variety of addressing modes. For instance, an instruction like ADD might have a dozen versions, one that adds two numbers in internal registers, one that adds a register to a value in memory, one that adds two values from memory, etc. This allowed the programmer to select the exact variation that they needed for any particular task. The processor would read that instruction and use microcode to break it into a series of internal instructions. For instance, adding two numbers in memory might be implemented by loading those two numbers into registers, adding them, and then storing the sum back to memory.[3] The idea of offering all possible addressing modes for all instructions became a goal of processor designers, the concept becoming known as an orthogonal instruction set.

The 801 team noticed a side-effect of this concept; when faced with the plethora of possible versions of a given instruction, compiler authors would almost always pick a single version. This was almost always the one that was implemented in hardware on the low-end machines. That ensured that the machine code generated by the compiler would run as fast as possible on the entire lineup. While using other versions of instructions might run even faster on a machine that implemented them in hardware, the complexity of knowing which one to pick on an ever-changing list of machines made this extremely unattractive, and compiler authors largely ignored these possibilities.[3]

As a result, the majority of the instructions available in the instruction set were never used in compiled programs. And it was here that the team made the key realization of the 801 project:

Imposing microcode between a computer and its users imposes an expensive overhead in performing the most frequently executed instructions.[3]

Microcode takes a non-zero time to examine the instruction before it is performed. The same underlying processor with the microcode removed would eliminate this overhead and run those instructions faster. Since microcode essentially ran small subroutines dedicated to a particular hardware implementation, it was ultimately performing the same basic task that the compiler was, implementing higher-level instructions as a sequence of machine-specific instructions. Removing the microcode and having the compiler perform that translation instead could result in a faster machine.[3]

One concern was that programs written for such a machine would take up more memory; some tasks that could be accomplished with a single instruction on the 370 would have to be expressed as multiple instructions on the 801. For instance, adding two numbers from memory would require two load-to-register instructions, a register-to-register add, and then a store-to-memory. This could potentially slow the system overall if it had to spend more time reading instructions from memory than it formerly took to decode them. As they continued work on the design and improved their compilers, they found that overall program length continued to fall, eventually becoming roughly the same length as those written for the 370.[5]
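
As a minimal sketch of that expansion, the following toy register machine shows a memory-to-memory add becoming the four-instruction load/load/add/store sequence described above. The mnemonics and the machine itself are hypothetical, not 801 or System/370 instructions.

  # Hypothetical miniature register machine, used only to illustrate the
  # four-instruction expansion of a memory-to-memory add.
  memory = {"x": 5, "y": 7, "z": 0}    # named memory cells
  regs = {}                            # processor registers

  def run(program):
      for op, *args in program:
          if op == "LOAD":             # memory -> register
              reg, addr = args
              regs[reg] = memory[addr]
          elif op == "ADD":            # register + register -> register
              dst, a, b = args
              regs[dst] = regs[a] + regs[b]
          elif op == "STORE":          # register -> memory
              reg, addr = args
              memory[addr] = regs[reg]

  # What a single CISC-style "add z = x + y, all in memory" becomes on a
  # register-oriented machine: two loads, one add, one store.
  run([
      ("LOAD", "r1", "x"),
      ("LOAD", "r2", "y"),
      ("ADD", "r3", "r1", "r2"),
      ("STORE", "r3", "z"),
  ])
  print(memory["z"])                   # 12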

First implementations

The initially proposed architecture was a machine with sixteen 24-bit registers and without virtual memory.[6][7] It used a two-operand format in the instruction, so that instructions were generally of the form A = A + B, as opposed to the three-operand format, A = B + C. The resulting CPU was operational by the summer of 1980 and was implemented using Motorola MECL-10K discrete component technology[8] on large wire-wrapped custom boards. The CPU had a cycle time of 66 ns (approximately 15.15 MHz) and achieved approximately 15 MIPS.

The 801 architecture was used in a variety of IBM devices, including channel controllers for their S/370 mainframes (such as the IBM 3090),[9]:377 various networking devices, and as a vertical microcode execution unit in the 9373 and 9375 processors of the IBM 9370 mainframe family.[10][11] The original version of the 801 architecture was the basis for the architecture of the IBM ROMP microprocessor[9]:378 used in the IBM RT PC workstation computer and several experimental computers from IBM Research. A derivative of the 801 architecture with 32-bit addressing named Iliad was intended to serve as the primary processor of the unsuccessful Fort Knox midrange system project.[12]

Later modifications

Having been originally designed for a limited-function system, the 801 design lacked a number of features seen on larger machines. Most notable was the absence of hardware support for virtual memory, which was not needed for the controller role and had been implemented in software on early 801 systems that needed it. For more widespread use, hardware support for virtual memory was essential. Additionally, by the 1980s the computer world as a whole was moving towards 32-bit systems, and there was a desire to do the same with the 801.[13]

Moving to a 32-bit format had another significant advantage. In practice, it was found that the two-operand format was difficult to use in typical math code. Ideally, both input operands would remain in registers where they could be re-used in subsequent operations, but as the output of the operation overwrote one of them, it was often the case that one of the values had to be re-loaded from memory. By moving to a 32-bit format, the extra bits in the instruction words allowed an additional register to be specified, so that the output of such operations could be directed to a separate register. The larger instruction word also allowed the number of registers to be increased from sixteen to thirty-two, a change that had clearly been suggested by examination of 801 code. In spite of the expansion of the instruction words from 24 to 32 bits, programs did not grow by the corresponding 33%, because these two changes avoided many loads and saves.[13]
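
The reload problem can be illustrated in the same hypothetical notation used earlier (the mnemonics are again invented for illustration): with a destructive two-operand add, computing both the sum and the difference of two values forces an extra load, while a three-operand form leaves both inputs available in registers.

  # Two-operand form: the add overwrites its first source register, so x must
  # be fetched from memory a second time before the subtraction.
  two_operand = [
      ("LOAD",  "r1", "x"),
      ("LOAD",  "r2", "y"),
      ("ADD2",  "r1", "r2"),          # r1 = r1 + r2, destroying the copy of x
      ("STORE", "r1", "sum"),
      ("LOAD",  "r1", "x"),           # x has to be reloaded
      ("SUB2",  "r1", "r2"),          # r1 = r1 - r2
      ("STORE", "r1", "diff"),
  ]

  # Three-operand form: both inputs survive in their registers, so no reload.
  three_operand = [
      ("LOAD",  "r1", "x"),
      ("LOAD",  "r2", "y"),
      ("ADD3",  "r3", "r1", "r2"),    # r3 = r1 + r2
      ("STORE", "r3", "sum"),
      ("SUB3",  "r4", "r1", "r2"),    # r4 = r1 - r2
      ("STORE", "r4", "diff"),
  ]

  def memory_ops(program):
      return sum(op in ("LOAD", "STORE") for op, *_ in program)

  print(memory_ops(two_operand), memory_ops(three_operand))    # 5 vs. 4 memory accesses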

Other desirable additions included instructions for working with string data that was encoded in "packed" format with several ASCII characters in a single memory word, and additions for working with binary-coded decimal, including an adder that could carry across four-bit decimal digits.[13]
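
What "carrying across four-bit decimal digits" means can be shown with a generic packed-BCD addition routine; this is a sketch of the general technique, not a description of the 801 adder hardware.

  # Generic packed-BCD addition: each decimal digit occupies a 4-bit nibble,
  # and any nibble sum above 9 must generate a carry into the next nibble.
  def bcd_add(a, b, digits=8):
      result, carry = 0, 0
      for i in range(digits):
          shift = 4 * i
          digit = ((a >> shift) & 0xF) + ((b >> shift) & 0xF) + carry
          carry, digit = (1, digit - 10) if digit > 9 else (0, digit)
          result |= digit << shift
      return result

  # 0x258 is packed BCD for 258 and 0x167 for 167; their packed-BCD sum is 0x425.
  print(hex(bcd_add(0x258, 0x167)))    # 0x425, i.e. 425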

When the new version of the 801 was run as a simulator on the 370, the team was surprised to find that code compiled for the 801 and run in the simulator would often run faster than the same source code compiled directly to 370 machine code using the 370's PL/I compiler.[14] When they ported their experimental "PL.8" language back to the 370 and compiled applications using it, those programs also ran faster than existing PL/I code, as much as three times as fast. This was due to the compiler making RISC-like decisions about how to compile the code to internal registers, thereby optimizing out as many memory accesses as possible. These were just as expensive on the 370 as on the 801, but this cost was normally hidden within the apparent simplicity of a single CISC instruction. The PL.8 compiler was much more aggressive about avoiding loads and saves, resulting in higher performance even on a CISC processor.[14]
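
The effect of that register discipline can be sketched by counting memory accesses for a hypothetical summation loop; the counts below are illustrative assumptions, not measurements of PL/I or PL.8 output.

  # Hypothetical loop summing 1,000 array elements. A naive compiler reloads
  # and re-stores the accumulator around every statement; a register-allocating
  # compiler keeps it in a register and touches memory only for the array
  # element each iteration, plus one final store.
  n = 1000

  naive_memory_ops = n * 3             # load accumulator + load a[i] + store accumulator
  allocated_memory_ops = n * 1 + 1     # load a[i] each iteration, store accumulator once

  print(naive_memory_ops, allocated_memory_ops)    # 3000 vs. 1001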

The Cheetah, Panther, and America projects

In the early 1980s, the lessons learned on the 801 were combined with those from the IBM Advanced Computer Systems project, resulting in an experimental processor called "Cheetah". Cheetah was a 2-way superscalar processor, which evolved into a processor called "Panther" in 1985, and finally into a 4-way superscalar design called "America" in 1986.[15] This was a three-chip processor set comprising an instruction processor that fetched and decoded instructions, a fixed-point processor that shared duty with the instruction processor, and a floating-point processor for those systems that required it. The final design, produced by the 801 team, was sent to IBM's Austin office in 1986, where it was developed into the IBM RS/6000 system. The RS/6000, running at 25 MHz, was one of the fastest machines of its era. It outperformed other RISC machines by two to three times on common tests and easily outperformed older CISC systems.[10]

After the RS/6000, the company turned its attention to a version of the 801 concepts that could be efficiently fabricated at various scales. The result was the IBM POWER instruction set architecture and the PowerPC offshoot.

Recognition

For his work on the 801, John Cocke received several awards and medals:

  • 1985: Eckert–Mauchly Award[16]
  • 1987: A.M. Turing Award[17]
  • 1989: Computer Pioneer Award[18]
  • 1991: National Medal of Technology[19]
  • 1994: IEEE John von Neumann Medal[20]
  • 1994: National Medal of Science[19]
  • 2000: Benjamin Franklin Medal (The Franklin Institute)[21]

Michael J. Flynn views the 801 as the first RISC.[22]

References

Citations

  1. Cocke & Markstein 1990, p. 4.
  2. Savard, John. "On the 370/165 and the 360/85". https://alt.folklore.computers.narkive.com/9nl6cj2Q/on-the-370-165-and-the-360-85. 
  3. Cocke & Markstein 1990, p. 5.
  4. Sack, Harald (7 April 2016). "The IBM System/360 and the Use of Microcode". http://scihi.org/ibm-system360-microcode/. 
  5. Cocke & Markstein 1990, pp. 6-7.
  6. "The 801 Minicomputer - An Overview". October 8, 1976. p. 9. http://www.bitsavers.org/pdf/ibm/system801/The_801_Minicomputer_an_Overview_Sep76.pdf. 
  7. "System 801 Principles of Operation". January 16, 1976. http://www.bitsavers.org/pdf/ibm/system801/System_801_Principles_of_Operation_Jan76.pdf. 
  8. Radin 1982.
  9. Dewar, Robert B.K.; Smosna, Matthew (1990). Microprocessors: A Programmer's View. McGraw-Hill. https://archive.org/details/microprocessorsp00robe.
  10. Cocke & Markstein 1990, p. 9.
  11. Mitchell, James (September 1988). "Implementing a mainframe architecture in a 9370 processor". ACM SIGMICRO Newsletter 19 (3): 3–10. doi:10.1145/62185.62186. ISSN 1050-916X. 
  12. Frank G. Soltis (1997). Inside the AS/400, Second Edition. Duke Press. ISBN 978-1882419661. https://books.google.com/books?id=5DoPAAAACAAJ. 
  13. Cocke & Markstein 1990, p. 7.
  14. Cocke & Markstein 1990, p. 8.
  15. Shen, John Paul; Lipasti, Mikko H. (2005). "Survey of Superscalar Processors". Modern Processor Design: Fundamentals of Superscalar Processors. McGraw-Hill. 
  16. "John Cocke" (in en). https://awards.acm.org/award_winners/cocke_2083115. 
  17. "John Cocke - A.M. Turing Award Laureate". https://amturing.acm.org/award_winners/cocke_2083115.cfm. 
  18. "IEEE Computer Society Women of ENIAC Computer Pioneer Award" (in en-US). 9 April 2018. https://www.computer.org/volunteering/awards/pioneer. 
  19. "NSTMF". https://www.nationalmedals.org/laureates/john-cocke.
  20. "IEEE John von Neumann Medal Receipients". https://www.ieee.org/content/dam/ieee-org/ieee/web/org/about/awards/recipients/von-neumann-rl.pdf. 
  21. "John Cocke" (in en). 2014-01-10. https://www.fi.edu/laureates/john-cocke. 
  22. Flynn, Michael J. (1995). Computer architecture: pipelined and parallel processor design. pp. 54–56. ISBN 0867202041. 

Bibliography

  • Cocke, John; Markstein, Victoria (January 1990). "The evolution of RISC technology at IBM". IBM Journal of Research and Development 34 (1): 4–11.
  • Radin, George (March 1982). "The 801 minicomputer". Proceedings of the First International Symposium on Architectural Support for Programming Languages and Operating Systems (ASPLOS).

Further reading

  • "Altering Computer Architecture is Way to Raise Throughput, Suggests IBM Researchers". Electronics V. 49, N. 25 (23 December 1976), pp. 30–31.
  • V. McLellan: "IBM Mini a Radical Departure". Datamation V. 25, N. 11 (October 1979), pp. 53–55.
  • Dewar, Robert B.K.; Smosna, Matthew (1990). Microprocessors: A Programmer's View. McGraw-Hill. pp. 258–264. https://archive.org/details/microprocessorsp00robe. 
  • Tabak, Daniel (1987). RISC Architecture. Research Studies Press. pp. 69–72. 
