
Mental Representation

Representational Format



Some representations play certain roles better than others. For example, a French sentence conveys information to a native French speaker more effectively than does the same sentence in Swahili, despite the two sentences having the same meaning. In this case, the two sentences have the same content yet differ in the way in which they represent it; that is, they utilize different representational formats. The problem of determining the correct representational format (or formats) for mental representation is a topic of ongoing interdisciplinary research in the cognitive sciences.



One hypothesis was mentioned above: human cognition consists in the manipulation of mental symbols according to "rules of thought." According to Jerry Fodor's influential version of this theory, cognition requires a "language of thought" operating according to computational principles. Individual concepts are the "words" of the language, and rules govern how concepts are assembled into complex thoughts—"sentences" in the language. For example, to think that the cat is on the mat is to take the required concepts (i.e., the "cat" concept, the "mat" concept, the relational concept of one thing being on top of another, etc.) and assemble them into a mental sentence expressing that thought.
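To make the compositional idea concrete, consider the following toy sketch. It is a hypothetical illustration, not Fodor's own formalism: the Concept and Thought structures and the compose rule are invented for exposition. Primitive concepts are treated as symbols, and a rule assembles them into a structured "mental sentence."

```python
# Illustrative sketch only: a toy "language of thought" in which primitive
# concepts are symbols and a rule assembles them into a structured thought.
# The names and structures here are hypothetical, not Fodor's own notation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    name: str  # a primitive "word" of the language of thought

@dataclass(frozen=True)
class Thought:
    relation: Concept    # e.g., the relational concept ON
    arguments: tuple     # the concepts the relation is applied to

CAT = Concept("cat")
MAT = Concept("mat")
ON = Concept("on")  # relational concept: one thing atop another

def compose(relation: Concept, subject: Concept, obj: Concept) -> Thought:
    """A composition rule: apply a relational concept to two argument
    concepts, yielding a complex 'mental sentence.'"""
    return Thought(relation, (subject, obj))

the_cat_is_on_the_mat = compose(ON, CAT, MAT)
print(the_cat_is_on_the_mat)
```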

The theory enjoys a number of benefits, not least that it can explain the human capacity to think arbitrary thoughts. Just as the grammar of English allows the construction of an infinite number of sentences from a finite set of words, so, given the right set of rules and a sufficient stock of basic concepts, any number of complex thoughts can be assembled.
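The productivity claim can be illustrated in the same toy spirit. Because a composition rule can apply to its own output, finitely many concepts and rules generate unboundedly many distinct thoughts; the recursive THINKS embedding below is a hypothetical example, not drawn from the source text.

```python
# Continuing the toy sketch: a rule that applies to its own outputs yields
# unboundedly many thoughts from finite resources. (Hypothetical example;
# the THINKS relation is invented for illustration.)

def embed(thinker, thought, depth):
    """Recursively build 'X thinks that (X thinks that (... P))'."""
    if depth == 0:
        return thought
    return ("THINKS", thinker, embed(thinker, thought, depth - 1))

base = ("ON", "cat", "mat")
for d in range(3):
    print(embed("Mary", base, d))
```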

However, artificial neural networks may provide an alternative. Inspired by the structure and functioning of biological neural networks, they consist of interconnected nodes, each of which receives inputs from and sends outputs to other nodes in the network. A network processes information by propagating activation from one set of nodes (the input nodes) through intervening nodes (the hidden nodes) to a set of output nodes.
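A minimal sketch of this propagation might look as follows; the weights are arbitrary illustrative numbers, not a trained network.

```python
# Minimal sketch of activation propagating through a feedforward network:
# input nodes -> hidden nodes -> output nodes. Weights are arbitrary
# illustrative values, not a trained model.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each node sums its weighted inputs and applies a squashing function."""
    return [sigmoid(sum(w * a for w, a in zip(row, inputs))) for row in weights]

inputs = [0.9, 0.1]                                 # activations of the input nodes
w_hidden = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.7]]   # weights into 3 hidden nodes
w_output = [[1.0, -1.0, 0.5]]                       # weights into 1 output node

hidden = layer(inputs, w_hidden)
output = layer(hidden, w_output)
print(hidden, output)
```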

In the mid-1980s, important theoretical advances in neural network research heralded their emergence as an alternative to the language of thought. Where the latter theory holds that thinking a thought involves assembling some mental sentence from constituent concepts, the neural network account conceives of mental representations as patterns of activity across nodes in a network. Since the activations of n nodes can be treated as an ordered n-tuple, an activity pattern can be understood as a point in n-dimensional space. For example, if a network contained two nodes, then at any given moment their activations could be plotted on a two-dimensional plane. Thinking, then, consists in transitions between points in this space.
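The state-space picture can be illustrated with a sketch like the one below, in which a two-node network's joint activation at each moment is a point on the plane, so that "thinking" traces a trajectory through that space. The update rule is an arbitrary invention for illustration.

```python
# Sketch of the state-space picture: a two-node network's activations form
# a point (a1, a2) on the plane, and successive updates trace a trajectory.
# The update rule below is arbitrary and purely illustrative.

import math

def step(state):
    a1, a2 = state
    # each node's next activation depends on both current activations
    return (math.tanh(0.6 * a1 - 0.9 * a2),
            math.tanh(0.9 * a1 + 0.6 * a2))

state = (1.0, 0.0)  # starting point in 2-D activation space
for t in range(5):
    print(f"t={t}: point = ({state[0]:+.3f}, {state[1]:+.3f})")
    state = step(state)
```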

Artificial neural networks exhibit a number of features that agree with aspects of human cognition. For example, they are architecturally similar to biological networks, are capable of learning, can generalize to novel inputs, and are resistant to noise and damage. Neural network accounts of mental representation have been defended by thinkers in a variety of disciplines, including David Rumelhart (a psychologist) and Patricia Churchland (a philosopher). However, proponents of the language of thought continue to wield a powerful set of arguments against the viability of neural network accounts of cognition. One of these has already been encountered above: humans can think arbitrary thoughts. Detractors charge that networks are unable to account for this phenomenon—unless, of course, they realize a representational system that facilitates the construction of complex representations from primitive components, that is, unless they implement a language of thought.

Regardless, research on artificial neural networks continues, and it is possible that these objections will be met. Moreover, there exist other candidates.

One such hypothesis—extensively investigated by the psychologist Stephen Kosslyn—is that mental representations are imagistic, a kind of "mental picture." For example, when asked how many windows are in their homes, people typically report that they answer by imagining walking through their home. Likewise, in one experiment, subjects are shown a map with objects scattered across it. The map is removed, and they are asked to decide, given two objects, which is closer to a third. The time it takes to decide varies with the distance between the objects.

A natural explanation is that people make use of mental imagery. In the first case, they form an image of their home and mentally explore it; in the second, a mental image of the map is inspected, and the subject "travels" at a fixed speed from one object to another.
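On this account, predicted response time grows linearly with distance on the imagined map. The toy model below makes that prediction explicit; the object coordinates and scanning speed are invented for illustration.

```python
# Toy model of the map-scanning result: if subjects "travel" across a mental
# image at a fixed speed, predicted response time grows linearly with
# distance. Coordinates and speed are made-up illustrative values.

import math

objects = {"tree": (0.0, 0.0), "hut": (3.0, 4.0), "well": (9.0, 12.0)}
SCAN_SPEED = 5.0  # image units per second (hypothetical)

def scan_time(a, b):
    (x1, y1), (x2, y2) = objects[a], objects[b]
    return math.hypot(x2 - x1, y2 - y1) / SCAN_SPEED

for pair in [("tree", "hut"), ("tree", "well")]:
    print(pair, f"{scan_time(*pair):.2f} s")  # the farther pair takes longer
```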

The results of the map experiment seem difficult to explain if, for example, the map were represented mentally as a set of sentences in a language of thought, for why would there then be differences in response times? That is, the differences in response times seem to be obtained "for free" from the imagistic format of the mental representations. A recent elaboration on the theory of mental imagery proposes that cognition involves elaborate "scale models" that not only encode spatial relationships between objects but also implement a simulated physics, thereby providing predictions of causal interactions as well.

Despite these potential benefits, opponents argue that a nonimagistic account is available for every phenomenon in which mental imagery is invoked and, furthermore, that the purported neuroscientific evidence for the existence of images is inconclusive.

Perhaps the most radical proposal is that there is no such thing as mental representation, at least not as traditionally conceived. According to dynamic systems accounts, cognition cannot be successfully analyzed by positing discrete mental representations, such as those described above. Instead, mathematical equations should be used to describe the behavior of cognitive systems, analogous to the way they are used to describe the behavior of liquids, for example. Such descriptions do not posit contentful representations; instead, they track features of a system relevant to explaining and predicting its behavior. In favor of such a theory, some philosophers have argued that traditional analyses of cognition are insufficiently robust to account for the subtleties of cognitive behavior, while dynamical equations are. That is, certain sorts of dynamical systems are computationally more powerful than traditional computational systems: they can do things (compute functions) that traditional systems cannot. Consequently, the question arises whether an adequate analysis of mental representation will require this additional power. At present the issue remains unresolved.
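As a schematic illustration of this style of explanation, the sketch below numerically integrates a pair of arbitrary coupled equations: the description tracks the evolution of state variables over time without positing any contentful representations. The equations are invented for illustration and model no actual cognitive task.

```python
# Sketch of a dynamical-systems description: behavior is captured by
# equations of state change, with no discrete representations posited.
# Simple Euler integration of an arbitrary two-variable system.

def derivatives(x, y):
    # illustrative coupled equations (a damped oscillation), purely schematic
    dx = y
    dy = -x - 0.3 * y
    return dx, dy

x, y, dt = 1.0, 0.0, 0.1
for step in range(5):
    print(f"t={step * dt:.1f}: state = ({x:+.3f}, {y:+.3f})")
    dx, dy = derivatives(x, y)
    x, y = x + dx * dt, y + dy * dt
```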

BIBLIOGRAPHY

Davis, Martin, ed. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Hewlett, N.Y.: Raven Press, 1965. This volume collects historically important papers on the logical foundations of computation.

Fodor, Jerry A. The Language of Thought. Cambridge, Mass.: Harvard University Press, 1975.

——. A Theory of Content and Other Essays. Cambridge, Mass.: MIT Press, 1990.

Haugeland, John, ed. Mind Design II. Cambridge, Mass.: MIT Press, 1997. A well-rounded collection of important philosophic essays on mental representation, ranging from classic papers in artificial intelligence to more recent developments such as artificial neural networks and dynamical systems.

Hodges, Andrew. Alan Turing: The Enigma. New York: Simon and Schuster, 1983. This engrossing biography of Alan Turing offers, among many other insights, well-written informal accounts of Turing's contributions to logic and computation, including his theories on the role of mental representation in cognition.

Kosslyn, Stephen M. Image and Brain: The Resolution of the Imagery Debate. Cambridge, Mass.: MIT Press, 1994.

McCartney, Scott. Eniac: The Triumphs and Tragedies of the World's First Computer. New York: Walker, 1999.

McClelland, James L., David E. Rumelhart, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, Mass.: MIT Press, 1986. Published in two volumes, these works played a major role in reintroducing artificial neural networks to cognitive science.

Watson, John B. "Psychology as the Behaviorist Views It." Psychological Review 101, no. 2 (1913/1994): 248–253. A classic statement of the behaviorist program in psychology by a pioneer of the field.

Watson, Richard A. Representational Ideas: From Plato to Patricia Churchland. Dordrecht, Netherlands, and Boston: Kluwer, 1995.

Whit Schonbein
