A Brief History of Supercomputers

Summary

Entering the 1990s, supercomputer processors were king of the hill. Vector processors were the dominant choice, and they were far faster than PC processors. During the 1990s, two “branches” of supercomputing processors developed: one branch stayed with vector processors, and the other used workstation processors. Vector processor instruction sets operate on one-dimensional arrays, or vectors. They were widely used in supercomputers because they proved very fast on code that could be vectorized, as the simple loop below illustrates. Coming into the 1990s, each supercomputer company made its own vector processors and compilers.
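As a purely illustrative sketch (not code from any of the systems discussed here), the classic SAXPY loop below shows the kind of code that vectorizes well: every iteration is independent and walks contiguous one-dimensional arrays, so a vectorizing compiler can issue one vector instruction per chunk of elements instead of one scalar instruction per element.

#include <stdio.h>

/* SAXPY: y = a*x + y. Each iteration is independent and accesses
 * contiguous arrays, so a vectorizing compiler can map this loop onto
 * vector instructions that process many elements at once. */
void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float y[4] = {10.0f, 20.0f, 30.0f, 40.0f};

    saxpy(4, 2.0f, x, y);   /* y becomes {12, 24, 36, 48} */
    for (int i = 0; i < 4; i++)
        printf("%.1f\n", y[i]);
    return 0;
}

By contrast, a loop with a loop-carried dependence (for example, each y[i] depending on y[i-1]) cannot be vectorized this way, which is why the performance of vector machines depended heavily on how much of an application's code could be expressed in vectorizable form.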

During the 1990s, these vector processors gained performance through larger and more complex vector pipelines, and their clock speeds exceeded those of PC processors by a large margin. They also went from 32-bit to 64-bit during the 1990s, with one of the last major vector processors appearing in the Cray X1 in 2003. Although it was a vector processor, it also had non-uniform memory access (NUMA) capability. In 2005, the processor was upgraded to a dual-core design running at 1,160MHz. Again, each supercomputer company built its vector processors for its own systems, so the extreme costs of designing, testing, and manufacturing these processors were spread over only the systems sold by that company. As a result, the processor cost was remarkably high, especially compared with x86 processors.

The second branch of supercomputer processors came primarily from workstation processors. A perfect example is the Cray T3E, which used DEC Alpha 21164 processors running at about 300MHz. The goal in using these processors was to reduce cost: because workstations sold in much larger volumes than supercomputers, processor development costs were spread across many more systems. Additionally, workstation processors outperformed PC processors at that time.

The use of workstation processors helped reduce the cost of supercomputers, although they remained expensive enough to be centralized resources. Contrast this with PC processors, which sold in the millions, allowing development costs, not too different from those of supercomputer processors, to be distributed across possibly hundreds of millions of units. Supercomputers could, at best, spread the same costs across hundreds or thousands of processors.
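A rough back-of-the-envelope calculation shows the scale of this effect. The dollar figures and unit counts below are hypothetical, chosen only to illustrate the amortization argument, not taken from any vendor's actual numbers.

#include <stdio.h>

/* Hypothetical, illustrative figures only: amortize a fixed processor
 * development cost over very different unit volumes. */
int main(void)
{
    double dev_cost    = 1.0e9;   /* assume $1 billion to develop a processor */
    double pc_units    = 100.0e6; /* assume 100 million PC processors sold    */
    double super_units = 10.0e3;  /* assume 10,000 supercomputer processors   */

    printf("Development cost per PC processor:            $%.2f\n",
           dev_cost / pc_units);
    printf("Development cost per supercomputer processor: $%.2f\n",
           dev_cost / super_units);
    return 0;
}

With assumptions in that ballpark, the development overhead works out to roughly $10 per PC processor versus $100,000 per supercomputer processor, a difference of four orders of magnitude. That is the economy of scale the rest of this summary describes.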

Commodity networking, specifically Ethernet, was not initially driven by PCs. The initial push came from connecting research centers, government, and military sites. Connecting universities came next, followed by financial institutions and telcos, both primarily at the corporate level, which helped push the cost down to something a company could afford.

This growth in networking kept reducing prices to the point that Ethernet became cost effective for high-end PCs and PC-based workstations. At that point, hundreds of millions of PCs started driving down Ethernet costs extremely quickly. More PCs were bought because they were cost effective, especially in the corporate world, and these PCs needed networking, which helped drive networking costs down further. As networking costs came down, people could afford to network, and buy, more PCs, which in turn drove down the cost of PC processors.

To summarize:

  • PC processor development costs could be spread across many more customers than supercomputer processor costs (think hundreds of millions of machines versus a few thousand), so PC processors were very inexpensive compared with supercomputer processors.
  • The commodity-priced PC CPUs started adding new features, faster clock speeds, and more parallelism through the 1990s, primarily because development costs could be spread across such a huge number of systems. Supercomputer processors did not have this economy of scale, so prices remained very high.
  • Eventually, in the early 2000s, PC CPUs had features roughly equivalent to supercomputer processors; in some cases, they were faster.
  • Commodity networking grew very quickly in the 1990s, driving down prices so that individuals could use Ethernet to connect their PCs. Into the early 2000s, this meant relatively fast and low-latency networking was available for PCs.

In the next part of this series, I will continue to explore the factors that led to the development of modern-day HPC systems.