CPU Design


Computer Engineering; Information Technology


CPU design is an area of engineering that focuses on the design of a computer's central processing unit (CPU). The CPU acts as the “brain” of the machine, controlling the operations carried out by the computer. Its basic task is to execute the instructions that make up a computer's software. Different CPU designs vary in speed and efficiency, and some are better suited to certain types of problems than others.



The design of a CPU is a complex undertaking. The main goal of CPU design is to produce an architecture that can execute instructions in the fastest, most efficient way possible. Both speed and efficiency are relevant factors: a simple instruction that executes quickly may be adequate for some tasks, but it makes little sense to execute that same simple instruction hundreds of times to accomplish work that a single more capable instruction could complete.

Often the work begins with designers considering what the CPU will be expected to do and where it will be used. A microcontroller inside an airplane performs quite different tasks than one inside a kitchen appliance, for instance. The CPU's intended function determines what type of instruction-set architecture to use in the microchip that contains the CPU. Knowing what types of programs will run most frequently also allows CPU designers to develop the most efficient logic implementation. Once this has been done, the control unit design can be defined. Defining the datapath design is usually the next step, as the CPU's handling of instructions is given physical form.


A generic state diagram shows the simple processing loop: fetch instructions from memory, decode instructions to determine the proper execute cycle, execute instructions, and then fetch next instructions from memory and continue the cycle. A state diagram is the initial design upon which data paths and logic controls can be designed.
EBSCO illustration.
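The fetch-decode-execute loop in the diagram can be sketched as a toy simulator. The instruction encoding, opcodes, and accumulator design below are invented for illustration; real instruction sets are far more elaborate.

```python
# A minimal fetch-decode-execute sketch for a hypothetical accumulator
# machine. Each instruction is a (opcode, operand) pair stored in memory.

LOAD, ADD, STORE, HALT = range(4)

def run(memory):
    """Loop: fetch an instruction, decode its opcode, execute, repeat."""
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        opcode, operand = memory[pc]       # fetch
        pc += 1                            # advance to next instruction
        if opcode == LOAD:                 # decode + execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Program: memory[10] = memory[8] + memory[9]
program = [(LOAD, 8), (ADD, 9), (STORE, 10), (HALT, 0),
           None, None, None, None,   # unused slots
           2, 3, 0]                  # data: operands at 8 and 9, result at 10
```

Running `run(program)` leaves the sum of the two data words in memory slot 10, having cycled through the fetch-decode-execute states once per instruction.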

CPU design is heavily influenced by the type of instruction set being used. In general, there are two approaches to implementing instructions. The first is random logic, sometimes referred to as “hardwired instructions,” which uses logic devices, such as decoders and counters, to transport data and to perform calculations. The second is microcode, in which each instruction is carried out as a stored sequence of simpler micro-operations. Random logic can make it possible to design faster chips. The logic itself takes up space that might otherwise be used to store instructions, however. Therefore, it is not practical to use random logic with very large sets of instructions.

The most influential factor to consider when weighing random logic against microcode is memory speed. Random logic usually produces a speedier CPU when CPU speeds outpace memory speeds. When memory speeds are faster, microcode is faster than random logic.
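One way to picture the microcoded approach: each architectural instruction is looked up in a control store, which yields a sequence of micro-operations, whereas a random-logic design bakes that sequencing into dedicated decoder and counter circuitry. The micro-operation names and register layout below are invented for illustration.

```python
# Sketch of a microcoded control unit for a hypothetical accumulator CPU.
# Each micro-operation manipulates one piece of machine state.

def mar_load(s):  s["mar"] = s["operand"]        # set memory address register
def mem_read(s):  s["mdr"] = s["mem"][s["mar"]]  # read into memory data register
def alu_add(s):   s["acc"] += s["mdr"]           # accumulate through the ALU
def mem_write(s): s["mem"][s["mar"]] = s["acc"]  # write accumulator to memory

CONTROL_STORE = {
    # instruction -> its stored micro-operation sequence
    "ADD_MEM":   [mar_load, mem_read, alu_add],  # acc += mem[operand]
    "STORE_MEM": [mar_load, mem_write],          # mem[operand] = acc
}

def execute(state, instruction, operand):
    """Run one instruction by stepping through its micro-operations."""
    state["operand"] = operand
    for micro_op in CONTROL_STORE[instruction]:
        micro_op(state)

state = {"acc": 40, "mar": 0, "mdr": 0, "operand": 0, "mem": [0, 2, 0]}
execute(state, "ADD_MEM", 1)    # accumulator picks up mem[1]
execute(state, "STORE_MEM", 2)  # result lands in mem[2]
```

Because the control store is just a table, adding an instruction means adding an entry rather than redesigning circuitry, which is why microcode scales to large instruction sets more easily than random logic.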


Early in the history of CPU design, designers believed that the best way to improve CPU performance was to continuously expand instruction sets to give programmers more options. Eventually, however, studies began to show that adding more complex instructions did not always improve performance. In response, CPU manufacturers produced reduced instruction set computer (RISC) chips. RISC chips use less complex instructions, even though a program may require a larger number of them to accomplish the same task.
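The trade-off can be illustrated with a hypothetical example: one complex memory-to-memory instruction versus the equivalent sequence of simpler, register-based RISC-style instructions. Both versions compute the same result; the instruction names in the comments are invented.

```python
# Hypothetical CISC-style operation: a single complex instruction that
# reads two memory operands and writes the sum back to memory.
def cisc_add(mem, dst, a, b):
    mem[dst] = mem[a] + mem[b]        # ADD [dst], [a], [b]

# Equivalent RISC-style sequence: only loads and stores touch memory,
# and the add works purely on registers -- four simple instructions
# instead of one complex one.
def risc_add(mem, regs, dst, a, b):
    regs["r1"] = mem[a]                   # LOAD  r1, [a]
    regs["r2"] = mem[b]                   # LOAD  r2, [b]
    regs["r3"] = regs["r1"] + regs["r2"]  # ADD   r3, r1, r2
    mem[dst] = regs["r3"]                 # STORE [dst], r3
```

Each simple instruction is easy to decode and execute quickly, which is the bet RISC designs make: more instructions, but less work per instruction.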


Moore's law is named after Gordon Moore, a co-founder of the computer chip manufacturer Intel. In 1965, Moore observed that the number of transistors on an integrated circuit, or microchip, was doubling at a regular pace; in 1975, he revised his estimate to a doubling roughly every two years. This pace of improvement has been responsible for the rapid development in technological capability and the relatively short lifespan of consumer electronics, which tend to become obsolete soon after they are purchased.
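A doubling every two years compounds quickly; a back-of-the-envelope calculation makes the pace concrete.

```python
# Growth factor implied by a two-year doubling period:
# after y years, counts grow by a factor of 2 ** (y / 2).

def moore_factor(years):
    """Transistor-count growth factor under a two-year doubling."""
    return 2 ** (years / 2)

# After one decade: 2 ** 5, i.e. roughly a 32-fold increase.
decade_growth = moore_factor(10)
```

This exponential compounding is why a chip only a few years old can seem badly outdated.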

—Scott Zimmer, JD
