Connection Machine


Computer Science; Computer Engineering; Information Technology


The Connection Machines grew out of Daniel Hillis's Ph.D. thesis research in electrical engineering and computer science at the Massachusetts Institute of Technology. The basic idea was that of a massively parallel computer in which a very large number of simple processors are richly interconnected. Most digital computers have the so-called von Neumann architecture, in which a single central processing unit does the actual computation, fetching data from specified memory locations and storing intermediate results for later use. The human brain and other brains found in nature do not work that way: there is no central processor, and each neuron is connected to a great many others. The Connection Machine series was one of the first commercial attempts to capture something of the brain's style of computation.



The Connection Machines were among the first parallel processing computers to be commercially available. They were an outgrowth of Daniel Hillis's Ph.D. thesis at MIT, built partly on the neural network ideas of John Hopfield and influenced in their early stages by Nobel laureate Richard P. Feynman, a colorful figure who began his career heading a computation group at Los Alamos and went on to become one of the founders of quantum electrodynamics.

In the early days of computation, McCulloch and Pitts studied the behavior of collections of model neurons whose output was a step function of a weighted sum of their inputs. If the weights were allowed to change based on how often a neuron fired, the network was found to learn. This was an example of Hebbian learning, so named for D. O. Hebb, who promoted the concept. In 1982 John Hopfield studied the behavior of a single fully interconnected network of such neurons. The Hopfield arrangement of neurons was the basis for the first versions of the Connection Machine. Hopfield's method proved well suited to the machine, with the connection weights stored as numbers in the processors' memories. This allowed all the processors to be used concurrently, resulting in computing times orders of magnitude faster than those of conventional machines.
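The Hopfield scheme described above can be sketched in a few lines: weights are set once by the Hebbian outer-product rule, and each unit then updates to the sign of the weighted sum of its inputs until the network settles on a stored pattern. This is a minimal illustrative sketch, not the Connection Machine's actual implementation; the function names are invented for the example.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: weights are the sum of outer products of the
    stored patterns, with the diagonal (self-connections) zeroed."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / n

def recall(w, state, steps=10):
    """Each unit's output is a step function (here, the sign) of the
    weighted sum of its inputs; iterate until the state settles."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# Store one 8-unit pattern of +1/-1 activations, then recover it
# from a probe with one corrupted bit.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # flip one bit
print(recall(w, noisy))       # settles back to the stored pattern
```

Because every unit's update depends only on a weighted sum over its connections, all units can be updated concurrently, which is exactly the parallelism the Connection Machine exploited.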

Thinking Machines Corporation was founded by Daniel Hillis and Sheryl Handler in Waltham, Massachusetts, in 1983 and later moved to Cambridge, Massachusetts. The company filed for bankruptcy in 1994; its hardware business passed to Sun Microsystems, and the remaining software business was acquired by Oracle in 1999.

While it would in principle be desirable to connect every processor directly to every other, full interconnection proved much too unwieldy for the first Connection Machines, which instead adopted a hypercube arrangement of processors. The CM-1 had as many as 65,536 processors, each processing one bit at a time. The CM-2, launched in 1987, added floating-point coprocessors. With the CM-5, announced in 1991, Thinking Machines moved to a new architecture, replacing the hypercube with a fat-tree network of off-the-shelf SPARC processors.
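The hypercube arrangement can be made concrete: in a d-dimensional hypercube, each of the 2^d processors gets a d-bit address, and two processors are wired together exactly when their addresses differ in a single bit. A short sketch of this addressing scheme (illustrative only, not the CM-1's actual routing code):

```python
def neighbors(addr, d):
    """Nodes one hop away: flip each of the d address bits in turn."""
    return [addr ^ (1 << i) for i in range(d)]

def hops(a, b):
    """Routing distance between two nodes: the number of bit positions
    in which their addresses differ (the Hamming distance)."""
    return bin(a ^ b).count("1")

# A 4-dimensional hypercube has 16 nodes; the CM-1's 65,536
# processors correspond to d = 16.
print(neighbors(0b0000, 4))   # [1, 2, 4, 8]
print(hops(0b0000, 0b1011))   # 3 hops
```

Each node thus has only d direct links instead of 2^d - 1, yet any message can reach its destination in at most d hops, which is what made the hypercube a practical compromise between full interconnection and wiring cost.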

Much thought had been devoted to the physical layout of the Connection Machines. The initial designs reflected the machines' hypercube architecture. The CM-5 had large panels of blinking red light-emitting diodes. Because of its striking design, a CM-5 was featured in the control room in the film Jurassic Park.

—Donald Franceschetti, PhD

Anderson, James A., Edward Rosenfeld, and Andras Pellionisz. Neurocomputing. Cambridge, Mass: MIT Press, 1990. Print.

Hillis, W. Daniel. “Richard Feynman and the Connection Machine.” Physics Today 42.2 (1989): 78. Web.

Hopfield, J. J. “Neural Networks and Physical Systems with Emergent Collective Computational Abilities.” Proceedings of the National Academy of Sciences 79.8 (1982): 2554-558. Web.