Artificial Intelligence
What Is Intelligence?
One possible definition of intelligence is the acquisition and application of knowledge. An intelligent entity, on this view, is one that learns—acquires knowledge—and is able to apply this knowledge to changing real-world situations. In this sense, a rat is intelligent, but most computers, despite their impressive number-crunching capabilities, are not. To qualify as intelligent, an AI system must use knowledge (whether acquired from databases, sensory devices, trial and error, or all of the above) to make effective choices when confronted with data that are to some extent unpredictable. Insofar as a computer can do this, it may be said, for the purposes of AI, to display intelligence. Note that this definition is purely functional, and that in AI the question of consciousness, though intriguing, need not be considered.
This limited characterization of intelligence would, perhaps, have been considered overcautious by some AI researchers in the early days, when optimism ran high. For example, U.S. economist, AI pioneer, and Nobel Prize winner Herbert Simon (1916–2001) predicted in 1965 that by 1985 "machines will be capable of doing any work man can do." Yet over 35 years later, despite exponential growth in memory size and processing speed, no computer comes close to matching commonplace human skills like conversing, driving a car, or diapering a baby, much less "doing any work" a human being can do. Why has progress in AI been so slow?
One answer is that while intelligence, as defined above, requires knowledge, computers are only good at handling information, which is not the same thing. Knowledge is meaningful information, and "meaning" is a nonmeasurable, multivalued variable arising in the real world of things and values. Bits—"binary digits," 1s and 0s—have no meaning, as such; they are meaningful only when people assign meanings to them. Consider a single bit, a "1": its information content is one bit regardless of what it means, yet it may mean nothing or anything, including "The circuit is connected," "We surrender," "It is more likely to rain than snow," and "I like apples." The question for AI is, how can information be made meaningful to a computer? Simply adding more bits does not work, for meaning arises not from information as such, but from relationships involving the real world. In one form or another, this basic problem has stymied AI research for decades. Nor is it the only such problem. Another is that computers, even those employing "fuzzy logic" and autonomous learning, function by processing symbols (e.g., 1s and 0s) according to rules (e.g., those of Boolean algebra)—yet human beings do not usually think by processing symbols according to rules. Humans are able to think this way, as when we do arithmetic, but more commonly we interpret situations, leap to conclusions, utter sentences, and plan actions using thought processes that do not involve symbolic computation at all. What our thought processes really are—that is, what our intelligence is, precisely—and how to translate it into (or mimic it by means of) a system of computable symbols and rules is a problem that remains unsolved, in general, by AI researchers.
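A brief sketch in Python may make both points concrete. The three "interpretations" below are arbitrary assignments invented for illustration, and the Boolean functions are only a toy instance of rule-governed symbol processing, not a model of any real AI system:

```python
# A single bit carries one bit of information but no intrinsic meaning;
# the meanings below are arbitrary assignments made by people, not the machine.
bit = 1
interpretations = {
    "circuit": {0: "The circuit is open", 1: "The circuit is connected"},
    "weather": {0: "Snow is likelier than rain", 1: "Rain is likelier than snow"},
    "tastes":  {0: "I dislike apples", 1: "I like apples"},
}
for context, table in interpretations.items():
    print(f"{context}: bit={bit} means {table[bit]!r}")

# Symbol processing by rule: Boolean algebra applied to 1s and 0s. The
# machine manipulates the symbols correctly without "knowing" what,
# if anything, they stand for.
def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

print(OR(AND(1, 0), 1))  # -> 1, whatever "1" is taken to mean
```

The same bit yields a different statement under each table; the meaning lives in the table (supplied by people), never in the bit itself.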
In 1950, British mathematician Alan Turing (1912–1954) proposed a hypothetical game to help decide whether a given machine is truly intelligent. The "imitation game," as Turing originally called it, consisted of a human questioner in a room typing questions on a keyboard. In another room, an unseen respondent—either a human or a computer—would type back answers. The questioner could pose queries to the respondent in an attempt to determine whether he or she was corresponding with a human or a computer. In Turing's opinion, if the computer could fool the questioner into believing that he or she was having a dialog with a human being, then the computer could be said to be truly intelligent.
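The structure of the game is easy to sketch in code. In the Python toy below, human_respondent and machine_respondent are hypothetical stand-ins (not serious models of either party); only the protocol is shown: a hidden respondent is chosen at random, questioned, and guessed at. A machine "passes" to the extent that a questioner's accuracy stays near the 50 percent chance baseline:

```python
import random

# Hypothetical stand-in respondents; a real imitation game would pit a
# human typist against a candidate program in free-form dialogue.
def human_respondent(question: str) -> str:
    return "Honestly, it depends on the day."

def machine_respondent(question: str) -> str:
    return "I would rather not say."

def imitation_game(questioner_guess, rounds: int = 5) -> bool:
    """Play one game; return True if the questioner guesses correctly."""
    respondent_is_machine = random.choice([True, False])
    respond = machine_respondent if respondent_is_machine else human_respondent
    transcript = [(q, respond(q))
                  for q in (f"Question {i + 1}?" for i in range(rounds))]
    return questioner_guess(transcript) == respondent_is_machine

# A questioner who guesses at random is right about half the time;
# holding a skilled questioner to that baseline is what "passing" means.
games = 1000
wins = sum(imitation_game(lambda t: random.choice([True, False]))
           for _ in range(games))
print(f"Questioner accuracy: {wins / games:.2f}")  # ~0.50
```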
The Turing test is obviously biased toward human language prowess, which most AI programs do not even seek to emulate. Nevertheless, it is significant that even the most advanced AI programs devoted to natural language are as far as ever from passing the Turing test. "Intelligence" has proved a far tougher nut to crack than the pioneers of AI believed, half a century ago. Computers remain unintelligent.
Even so, AI has made many gains since the 1950s. AI software is now present in many devices, such as automatic teller machines (ATMs), that are part of daily life, and is finding increasing commercial application in many industries.