Computer Engineering; System-Level Programming
Microprocessors are part of the hardware of a computer. They consist of electronic circuitry that stores instructions for the basic operation of the computer and processes data from applications and programs. Microprocessor technology debuted in the 1970s and has advanced rapidly into the 2010s. Most modern microprocessors use multiple processing “cores.” These cores divide processing tasks among themselves, allowing the computer to handle several tasks at a time.
Microprocessors are computer chips that contain the instructions and circuitry needed to power all the basic functions of a computer. Most modern microprocessors consist of a single integrated circuit, which is a set of conducting materials (usually silicon) arranged on a plate. Microprocessors are designed to receive electronic signals and to perform processes on incoming data according to instructions programmed into the central processing unit (CPU) and contained in computer memory. They then produce output that can direct other computing functions. In the 2010s, microprocessors are the standard for all computing, from handheld devices to supercomputers. Among the modern advancements has been the development of integrated circuits with more than one “core.” A core is the circuitry responsible for performing calculations and moving data. As of 2016, microprocessors may have as many as eighteen cores. The technology for adding cores and for integrating data shared by cores is a key area of development in microprocessor engineering.
Before the invention of the first microprocessor in the 1970s, computer processing was handled by a set of individual computer chips and transistors. Transistors are electronic components that either amplify or help to direct electronic signals. The first microprocessor for the home computer market was the Intel 8080, an 8-bit microprocessor introduced in 1974. The number of bits refers to the size of the data units the processor can handle in a single operation. From the 1970s to the 2010s, microprocessors have followed the same basic design and concept but have steadily increased in processing speed and capability. The standard for computing in the 1990s and 2000s was the 32-bit microprocessor. The first 64-bit processors were introduced in the 1990s. They were slow to spread, however, because most basic computing functions do not require 64-bit processing.
Computer performance was once commonly measured in millions of instructions per second (MIPS). The MIPS measurement has been largely replaced by measurements of floating-point operations per second (FLOPS), often expressed in millions of FLOPS (MFLOPS). Floating-point operations are arithmetic operations, such as a single addition or multiplication, performed on numbers that contain decimal points. A processor with a performance rating of 1 gigaFLOPS (GFLOPS) can perform one billion floating-point operations each second. Most modern microprocessors can perform around 10 GFLOPS. Specialized computers can perform on the scale of quadrillions of operations per second (petaFLOPS).
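These performance scales are easiest to grasp as plain arithmetic. The sketch below uses only the illustrative ratings mentioned above to estimate how long a fixed workload would take at desktop versus supercomputer speeds:

```python
# Illustrative arithmetic only; "GFLOPS" means billions of floating-point
# operations per second, and "petaFLOPS" means quadrillions per second.
GIGA = 10 ** 9
PETA = 10 ** 15

desktop_rate = 10 * GIGA          # ~10 GFLOPS, as cited for modern chips
supercomputer_rate = 1 * PETA     # a 1-petaFLOPS specialized machine

workload = 1 * PETA               # one quadrillion floating-point operations

print(workload / desktop_rate)        # 100000.0 seconds (about 28 hours)
print(workload / supercomputer_rate)  # 1.0 second
```

The same workload that occupies a desktop-class chip for more than a day finishes on a petaFLOPS-scale machine in one second, which is why FLOPS ratings are the standard yardstick for comparing machines of very different classes.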
The small components within modern microprocessors are often measured in microns, or micrometers, a unit equaling one-millionth of a meter. Microprocessors are usually measured in line width, which is the width of individual circuits. The earliest microprocessor, the 1971 Intel 4004, had a minimum line width of 10 microns. Modern microprocessors can have line widths as low as 0.022 microns (22 nanometers).
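The scale of that shrinkage can be checked directly. A minimal sketch, using only the two line widths quoted above:

```python
# Line widths quoted in the text, in microns (one-millionth of a meter).
MICRON_IN_NM = 1000           # 1 micron = 1,000 nanometers

intel_4004_width = 10.0       # microns, 1971
modern_width = 0.022          # microns

print(f"{modern_width * MICRON_IN_NM:g} nm")   # 22 nm
print(round(intel_4004_width / modern_width))  # 455 (times narrower)
```

In other words, the circuits in a modern chip are roughly 455 times narrower than those of the Intel 4004.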
All microprocessors are created with a basic instruction set. This defines the various instructions that can be processed within the unit. The Intel 4004 chip, which was installed in a basic calculator, provided instructions for basic addition and subtraction. Modern microprocessors can handle a wide variety of calculations.
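The idea of an instruction set can be sketched in a few lines. The toy interpreter below is a hypothetical illustration, not the actual Intel 4004 instruction set: it defines just the kind of addition and subtraction operations described above and executes a short program against a single accumulator:

```python
# Toy illustration of an "instruction set": a few named operations
# (hypothetical, not the real Intel 4004 ISA) and a loop that executes them.

def run(program):
    """Execute a list of (opcode, operand) pairs against one accumulator."""
    acc = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            acc = operand       # overwrite the accumulator
        elif opcode == "ADD":
            acc += operand
        elif opcode == "SUB":
            acc -= operand
        else:
            raise ValueError(f"not in the instruction set: {opcode}")
    return acc

# Compute 7 + 5 - 3 the way a calculator chip might: one instruction at a time.
print(run([("LOAD", 7), ("ADD", 5), ("SUB", 3)]))  # 9
```

A real microprocessor works on the same principle, except that the instruction set is fixed in hardware circuitry rather than interpreted in software, and modern sets include hundreds of operations beyond simple arithmetic.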
Intel cofounder Gordon Moore observed that the number of components that can fit on a computer chip doubles roughly every two years, an observation now known as Moore's law. Microprocessor advancement is complicated by several factors, however, including the rising cost of producing microprocessors and the fact that reductions in power consumption have not kept pace with growth in processing capacity. Unless engineers can reduce power usage, there is therefore a limit to the size and processing speed of microprocessor technology. Data centers in the United States, for instance, used about 91 billion kilowatt-hours of electricity in 2013, equal to the annual output of thirty-four large coal-burning power plants. Computer engineers are exploring ways to address these issues, including alternatives to silicon such as carbon nanotubes, as well as bioinformatics and quantum computing processors.
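Moore's law is, at heart, a statement about exponential growth, which a few lines of arithmetic make concrete. This is an idealized sketch (real-world scaling has been uneven), starting from the roughly 2,300 transistors widely cited for the 1971 Intel 4004:

```python
# Idealized Moore's-law arithmetic: capacity doubling every two years
# means growth by a factor of 2 ** (years / 2).

def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

transistors_1971 = 2300                  # widely cited count for the Intel 4004
factor = moores_law_factor(2016 - 1971)  # 45 years -> 22.5 doublings
projected = transistors_1971 * factor

# On the order of ten billion transistors, comparable to modern high-end chips.
print(f"{projected:,.0f}")
```

That the idealized projection lands in the same range as actual 2016-era chips shows how closely the industry tracked Moore's observation for over four decades, even as power and cost pressures now threaten to end that trend.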
—Micah L. Issitt
Ambinder, Marc. “What's Really Limiting Advances in Computer Tech.” Week. The Week, 2 Sept. 2014. Web. 4 Mar. 2016.
Borkar, Shekhar, and Andrew A. Chien. “The Future of Microprocessors.” Communications of the ACM. ACM, May 2011. Web. 3 Mar. 2016.
Delforge, Pierre. “America's Data Centers Consuming and Wasting Growing Amounts of Energy.” NRDC. Natural Resources Defense Council, 6 Feb. 2015. Web. 17 Mar. 2016.
McMillan, Robert. “IBM Bets $3B That the Silicon Microchip Is Becoming Obsolete.” Wired. Condé Nast, 9 July 2014. Web. 10 Mar. 2016.
“Microprocessors: Explore the Curriculum.” Intel. Intel Corp., 2015. Web. 11 Mar. 2016.
“Microprocessors.” MIT Technology Review. MIT Technology Review, 2016. Web. 11 Mar. 2016.
Wood, Lamont. “The 8080 Chip at 40: What's Next for the Mighty Microprocessor?” Computerworld. Computerworld, 8 Jan. 2015. Web. 12 Mar. 2016.