

Computer Science Chronology



During the early period (1950s–1960s), a great deal of computer science work focused on understanding and refining essential elements of a computer system. Operating-systems software was developed to facilitate the control of the functional units of a computer: the processor and the input and output devices. Programming languages were devised so programmers could more easily express the computations they wished to accomplish. FORTRAN (Formula Translation), developed by John Backus and a team at IBM around 1954, was the first popular, high-level programming language. Its focus was on efficient numerical calculation. LISP (LISt Processor), developed by John McCarthy at MIT around 1956, focused on symbolic programming. Both languages had major impacts and, though less popular, were still in use in the early twenty-first century.



The study of algorithms.

As the definition of computer science is the systematic study of algorithms, it is not surprising that the 1970s were a decade when the study of algorithms was dominant. One aspect of this research was the development of a theory of algorithms. Building on the theories of Church and Turing, computer scientists asked questions such as "Is there an algorithm that, given any Turing machine and any starting input, decides whether the machine eventually stops?" This is termed the Halting Problem, and it has been shown to be unsolvable. Another aspect of the theory of algorithms has to do with problem reducibility. For example, consider Hilbert's "tenth problem": find a procedure that decides, in a finite number of steps, whether a given Diophantine equation has an integer solution. Computer scientists have shown that the Halting Problem is reducible to Hilbert's problem, meaning that an algorithm for Hilbert's problem would yield an algorithm for the Halting Problem; since no algorithm for the Halting Problem can exist, Hilbert's problem is also unsolvable.
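The impossibility argument can be sketched in a few lines of code. The fragment below is a hypothetical illustration, not a real program: it assumes a decider halts(program, data) existed and shows, following Turing's diagonal argument, why that assumption is self-contradictory.

```python
def halts(program, data):
    """Hypothetical decider for the Halting Problem.

    Assumed, for the sake of argument, to return True if program(data)
    eventually stops and False if it runs forever. No such function can
    actually be written; that is the point of the sketch below.
    """
    raise NotImplementedError

def paradox(program):
    # If the supposed decider claims the program halts when run on its own
    # source, loop forever; otherwise, halt immediately.
    if halts(program, program):
        while True:
            pass
    else:
        return

# Asking whether paradox(paradox) halts leads to a contradiction either way,
# so the assumed function halts cannot exist.
```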

A second aspect of algorithm studies concerns the development of new algorithmic solutions for specific problems. Topics such as sorting, searching, and graph theory were closely studied from an algorithmic point of view. Many new and efficient algorithms for these topics have been produced, for example Hoare's Quicksort algorithm for sorting, hashing as a technique for searching, and Tarjan's algorithm for determining graph planarity, to name just a few. As the search for new algorithms proceeded, new techniques for analyzing the performance of these algorithms were developed. Worst-case, best-case, and average-case analyses became standard questions to address when presenting an algorithm, and standard mathematical notations, such as the big-O notation, were adopted for stating the results. Algorithm design techniques were also identified and named, for example divide-and-conquer, depth-first search, the greedy method, and dynamic programming.
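As one small illustration of the divide-and-conquer idea mentioned above, here is a minimal sketch of Quicksort in Python. It uses a simplified partitioning scheme rather than Hoare's original in-place one, but it shows the essential structure: split the input around a pivot and recursively sort the two parts.

```python
def quicksort(items):
    """Sort a list using the divide-and-conquer strategy of Quicksort.

    A simplified sketch: pick a pivot, split the remaining elements into
    smaller and larger ones, and recursively sort the two halves.
    Average-case running time is O(N log N); the worst case is O(N^2).
    """
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```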

[Image: Difference Engine No. 1, a calculating machine invented by Charles Babbage (1792–1871). In the early nineteenth century Babbage constructed machines to perform basic numerical computations, but true computer science did not exist until the mid-twentieth century. © BETTMANN/CORBIS]

No discussion of computer science would be complete without a discussion of its most famous problem: "Does P = NP?" P is the set of problems that can be solved in deterministic polynomial time. That means that for a problem with inputs of size N, there must be some way to solve the problem in F(N) steps for some polynomial F. F can be any polynomial, even N to a very high power. For example, sorting a set of N numbers can be done in polynomial time, so sorting is in the set P. NP is the set of problems one can solve in nondeterministic polynomial time. That means that for a problem with inputs of size N, there must be some way to solve the problem in F(N) steps for some polynomial F, just as before; in NP, however, the program is allowed to make lucky guesses, though it must verify that the solution it finds is correct. Many problems are in NP, for example the traveling salesman problem, finding a Hamiltonian cycle, satisfiability of propositional expressions, finding a maximum clique, the integer knapsack problem, and the optimal scheduling problem.
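One way to picture a problem in NP is that a lucky guess (a proposed solution) can be checked quickly even if finding one seems hard. The sketch below, which assumes a graph is given simply as a list of edges, verifies a proposed Hamiltonian cycle in polynomial time; the function name and representation are choices made for this illustration.

```python
def is_hamiltonian_cycle(num_vertices, edges, tour):
    """Check in polynomial time whether a proposed tour is a Hamiltonian cycle.

    Finding such a tour appears to be hard, but verifying a guessed one is easy:
    the tour must visit every vertex exactly once and follow only existing edges.
    """
    edge_set = {frozenset(e) for e in edges}
    if sorted(tour) != list(range(num_vertices)):
        return False  # not a permutation of all the vertices
    return all(
        frozenset((tour[i], tour[(i + 1) % num_vertices])) in edge_set
        for i in range(num_vertices)
    )

# A 4-cycle on vertices 0-1-2-3-0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(4, edges, [0, 2, 1, 3]))  # False (0-2 is not an edge)
```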

[Image: Mark I computer, built in 1944 by Howard Aiken (1900–1973). Aiken's calculator replaced mechanical components with electromagnetic ones and responded to instructions fed to it on punched cards. Aiken later introduced a computer science program at Harvard University, the first of its kind. THE LIBRARY OF CONGRESS]

All problems in P are also in NP; you do not need to use nondeterministic guesses in an NP program if you do not want to. But does P = NP? This question was first posed in this form by Stephen Cook (1971). No one has ever proven that the two classes are equal, and no one has ever proven that they are not. Despite this, the "P = NP?" question has contributed a great deal to our understanding of algorithms, because computer scientists have been able to identify a large class of problems, all of which are reducible to each other; that is, solving one of these problems would immediately lead to a solution for all the others. This class is called the NP-complete problems. The NP-complete problems are, intuitively, the most difficult problems in NP, since P equals NP if and only if some NP-complete problem has a polynomial-time algorithm. In his original formulation of the "P = NP?" problem, Cook showed that the satisfiability problem (find an assignment of values to Boolean variables in a logical statement that makes the statement true) is NP-complete; this result is called Cook's Theorem. Despite many efforts, however, no one has ever shown that an NP-complete problem is also in P. Because no one has found such an example, many researchers believe that P is not equal to NP. And yet computer science is relatively new, and other difficult problems have remained unsolved for centuries before someone found a solution, so perhaps this one simply needs more time.
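Satisfiability itself shows the gap between checking and searching: any single assignment of truth values can be checked quickly, but the only general approaches known, such as the exhaustive search sketched below, may examine all 2^N assignments of N variables. The clause-list representation here is an assumption made for the illustration.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustive search for a satisfying assignment.

    A formula is given in conjunctive normal form as a list of clauses;
    each clause is a list of literals, where literal k means variable k
    is true and -k means variable k is false (variables numbered from 1).
    Trying all 2**num_vars assignments takes exponential time, while
    checking any single assignment takes only polynomial time.
    """
    for values in product([False, True], repeat=num_vars):
        def literal_true(lit):
            value = values[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal_true(lit) for lit in clause) for clause in clauses):
            return values  # a satisfying assignment
    return None  # the formula is unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
```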

Artificial intelligence.

The 1980s saw a great deal of work in the field of artificial intelligence (AI). AI is the subfield of computer science that is concerned with the computational understanding of intelligent behavior and with the creation of systems that produce such behaviors. One approach has been to develop computer programs that mimic the way humans behave. A second approach is to produce a computational model of some intelligent behavior, regardless of whether the model imitates human processes or not. One of the earliest examples of the latter approach was James Slagle's SAINT program for performing symbolic integration at the level of a college freshman. AI researchers early on emphasized heuristics as a problem-solving approach. Heuristics differ from algorithms in that they may not converge (or produce) the correct or exact answer, but experience shows that they often produce acceptable answers. For example, computers that play chess must evaluate the worth of their position at every step. Since it is computationally infeasible to work out the worth of all of the possible moves, researchers have developed heuristics that return a numerical assessment of the worth of a future move.
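A minimal sketch of such an evaluation heuristic, assuming a chess position is represented simply as a list of (color, piece) pairs, is a material count using the conventional piece values; the representation and function name are choices made for this illustration.

```python
# Conventional material values; the king is given no score because
# it can never actually be captured.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9, "king": 0}

def evaluate(position):
    """Return a heuristic score of a chess position: positive favors White.

    position is a list of (color, piece) pairs, e.g. ("white", "queen").
    The score is not an exact measure of who is winning; heuristics trade
    exactness for speed, and this one is cheap enough to apply to the
    millions of positions a chess program examines.
    """
    score = 0
    for color, piece in position:
        value = PIECE_VALUES[piece]
        score += value if color == "white" else -value
    return score

# White has a queen and a pawn; Black has a rook: White is ahead by 5.
print(evaluate([("white", "queen"), ("white", "pawn"), ("black", "rook")]))  # 5
```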

Returning to Alan Turing: in an article published in 1950 he described a game in which a human interrogator attempts to determine, solely on the basis of a written interrogation, whether the "person" answering his questions is in fact a person or a computer. This has come to be known as the Turing test, and the challenge inspired a great deal of AI research. Of special mention is the ELIZA program developed by Joseph Weizenbaum, which appears to emulate a nondirective therapist. If the person being interrogated says "I am very happy," ELIZA might respond with "Why are you very happy?" and so it goes. Other researchers attempted to develop computer programs that would exhibit behaviors believed to be signs of human intelligence; playing chess and proving mathematical theorems were two such areas of intense study. There are many subareas of artificial intelligence, including natural language understanding, general problem solving, knowledge representation and reasoning, learning, machine vision, robotics, and autonomous agents.
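A few lines of Python suggest the kind of pattern substitution involved; this is only an illustration of the idea, not Weizenbaum's actual ELIZA script.

```python
import re

def respond(statement):
    """Turn statements of the form "I am ..." into a reflecting question,
    in the spirit of a nondirective therapist."""
    match = re.match(r"i am (.*)", statement.strip().rstrip("."), re.IGNORECASE)
    if match:
        return "Why are you " + match.group(1) + "?"
    return "Please tell me more."

print(respond("I am very happy"))     # Why are you very happy?
print(respond("The weather is bad"))  # Please tell me more.
```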

Personal computers and the Internet.

An important phenomenon of the 1980s was the success of industry in producing a low-cost, miniaturized computer processor. This brought about the personal computer. Initially these machines might include 640,000 bytes of memory, no hard disk (only a floppy drive), and a processor whose speed might be a few MHz (megahertz) or less. The cost of such a machine was approximately $5,000. As technology improved, prices steadily dropped while capabilities were enhanced, and computers moved from the exclusive domain of the government and large corporations into the home and small business arena. A personal computer in 2004 might have 1 billion bytes of memory, a disk with a capacity of 100 gigabytes, and a processor whose speed is measured in GHz (gigahertz), and it might cost less than $2,000. Word processing and accounting (spreadsheets) became dominant applications. But this had little effect on computer science per se.

In the 1990s, research that had begun in 1969 under the U.S. Department of Defense, and that led to the digital network of computers called the ARPAnet, became accessible to everyone as the Internet. Research on packet switching and network protocols led to the development of TCP/IP (transmission control protocol/Internet protocol), the standard that now enables any pair of connected computers to communicate. As the number of Internet users grew, computer scientists began to study how best to exchange information between them. This work culminated in the development of the World Wide Web by Tim Berners-Lee (c. 1990).
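A minimal sketch of two programs exchanging a message over TCP/IP, run here as two threads on a single machine, might look as follows; the port number and message are arbitrary choices made for the illustration.

```python
import socket
import threading
import time

def server():
    # Wait for one connection and print whatever the client sends.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 5000))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("server received:", conn.recv(1024).decode())

def client():
    # Connect to the server and send a short message.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 5000))
        cli.sendall(b"hello over TCP/IP")

thread = threading.Thread(target=server)
thread.start()
time.sleep(0.5)  # give the server a moment to start listening
client()
thread.join()
```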

In the early twenty-first century new issues challenged computer researchers. Some studied how digital sensors by the thousands can be coordinated to do things like predict the weather. Others developed methods for connecting thousands of idle personal computers together to work on a single problem (grid computing). And simulations of physical processes down to the atomic level became achievable as computer speeds reached the teraflop level (a trillion floating-point operations per second).
