Falsifiability

Complications Of The Simple Model



Many scientists, administrators, and members of the legal community take falsifiability seriously as a criterion demarcating science from nonscience. But other scientists and science-studies experts consider falsifiability a heuristic rule of thumb at best, not a rigid requirement. Among the difficulties facing Popper's conception are these. In most scientific research, a hypothesis is tested against a competitor (often the "null hypothesis") rather than in isolation, and the test typically discloses the comparative fit of the two hypotheses to the data rather than the outright falsity of one of them. Moreover, the history of science discloses many cases in which a claim was not immediately falsifiable by known methods, yet remained important to scientific investigation and later became testable. In the nineteenth century, Auguste Comte (1798–1857) notoriously announced that we could never know which chemical elements were present in the Sun, yet within a few decades new spectroscopic techniques revealed this information, including the existence of a hitherto unknown element, dubbed helium. In 1930 Wolfgang Pauli (1900–1958) postulated the existence of the neutrino, a chargeless and presumably massless particle that scarcely interacted with ordinary matter and was hence undetectable by any known means. Yet this turned out to be one of the more fruitful ideas of twentieth-century physics, and several kinds of neutrino are now detectable. By the end of the twentieth century, science-studies disciplines were characterizing science in terms of its practices rather than simply in terms of the logical status of its claims.



Furthermore, Popper himself admitted that absolute logical falsification, and hence absolute falsifiability, are impossible in scientific practice, since the allegedly refuting observations can never be known with certainty. Since observations themselves are not statements and can have no logical relations to statements, Popper held that observation statements (roughly, data) are accepted by convention. Moreover, they are theory-laden; there is no such thing as pure observation. Although Popper never employed falsifiability as a criterion of meaningfulness, attempting to formulate the falsifiability requirement with logical precision runs afoul of the same sorts of difficulties faced by the logical empiricists with their verifiability theory of meaning.

A specific difficulty, raised already by Pierre Duhem (1861–1916) and extended by Willard Van Orman Quine (1908–2000), is that, in isolation, universal claims yield no specific predictions. By itself, the hypothesis H, "All As are Bs," implies no testable prediction, not only because of its logical form but also because A and B will typically be abstract, theoretical terms. To generate predictions from hypothesis H, we must conjoin H with one or more auxiliary premises, A1, …, An. So the simple H-D model of the testing situation must be replaced by a more complex logical model: (H & A1 & … & An) → O, where "&" is the logical "and." If prediction O is false, logic now tells us that at least one of the conjoined premises was mistaken, but not which one(s). Logic permits us to blame the failure on an auxiliary assumption rather than on H. In his influential article "Two Dogmas of Empiricism," Quine parlayed the Duhem problem into a controversial argument for holism: "our statements about the external world face the tribunal of sense experience not individually but only as a corporate body." We do not test scientific claims individually against nature but instead adjust our entire "web of belief" to fit our experience. Critics reply that deductive logic does not exhaust the distinctions licensed by scientific practice: Quineans forget that experiments are designed to test specific components of a theory or model, and that an experiment designed to test H will rarely test the auxiliary assumptions as well (Sober). Furthermore, the relation of observation to theory is typically more complex than even the Duhem model suggests, for that model remains deductive rather than probabilistic, and several levels of data processing and theoretical modeling typically intervene between theory and observation.
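The purely logical point can be checked mechanically. The following sketch (an illustration added for this discussion, not part of the original article) enumerates every truth assignment consistent with a failed prediction under the model (H & A1 & A2) → O, and confirms that while the conjunction as a whole is refuted, the hypothesis H itself may survive:

```python
from itertools import product

# Duhem-Quine illustration with two auxiliary premises A1, A2.
# Evidence: the conditional (H & A1 & A2) -> O holds, and O is observed false.
# Keep every truth assignment to (H, A1, A2) consistent with that evidence.
consistent = [
    (H, A1, A2)
    for H, A1, A2, O in product([True, False], repeat=4)
    if ((not (H and A1 and A2)) or O) and not O  # implication true, O false
]

# The conjunction is refuted in every surviving assignment...
print(all(not (H and A1 and A2) for H, A1, A2 in consistent))  # → True
# ...yet H alone can still be true: blame may fall on an auxiliary premise.
print(any(H for H, A1, A2 in consistent))  # → True
```

The enumeration simply restates modus tollens: from ¬O we may infer ¬(H & A1 & A2), which is equivalent to ¬H or ¬A1 or ¬A2, but never to ¬H alone.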

Thomas Kuhn (1922–1996), a leading opponent of H-D models of science, famously argued that Popper's falsificationist methodology fails to fit the history of physical science. In The Structure of Scientific Revolutions (1962), Kuhn advanced an alternative conception of physical science, according to which normal scientific work is highly constrained by "paradigms," the central tenets of which are immune from serious criticism and competition and hence unfalsifiable in practice. Only when a paradigm breaks down do we find the kind of critical and revolutionary ferment that Popper advocated for all scientific work. Moreover, much of normal science consists of tinkering of just the sort that the Popperians considered ad hoc. Subsequently, Popper's former student Imre Lakatos (1922–1974) distinguished several kinds of falsificationist methodology, from simple to sophisticated. Attempting a compromise between Popper and Kuhn, he analyzed science in terms of competing research programs involving entire series of not-always-successful theories rather than individual theories in isolation. On this view, predictive failure does not directly and immediately falsify a research program.

Finally, Larry Laudan deplores the ritual invocation of Popper's "toothless" falsifiability criterion in legal proceedings (such as the 1981–1982 creationism trial, McLean v. Arkansas) to distinguish good science from pseudoscience. Traditionally, the term science demarcated a body of established truths or scientifically warranted assertions, whereas falsifiability requires only empirical testability. For example, so-called Creation Science is false and hence falsifiable. By Popper's standard it is scientific—and so is the statement that the Earth is flat! A useful concept for certain purposes, falsifiability, by itself, fails as the hallmark of good science sought by the legal and political community.

BIBLIOGRAPHY

Cartwright, Nancy. The Dappled World: A Study of the Boundaries of Science. Cambridge, U.K.: Cambridge University Press, 1999.

Lakatos, Imre, and Alan Musgrave, eds. Criticism and the Growth of Knowledge. Cambridge, U.K.: Cambridge University Press, 1970.

Laudan, Larry. Beyond Positivism and Relativism: Theory, Method, and Evidence. Boulder, Colo.: Westview, 1996.

Popper, Karl. Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books, 1962.

——. The Logic of Scientific Discovery. New York: Basic Books, 1959. Popper's expanded translation of his Logik der Forschung, 1934.

Quine, Willard Van Orman. "Two Dogmas of Empiricism." Reprinted in his From a Logical Point of View: Logico-Philosophical Essays. Cambridge, Mass.: Harvard University Press, 1953, pp. 20–46.

Sober, Elliott. "Testability." Proceedings and Addresses of the American Philosophical Association 73 (1999): 47–76.

Worrall, John. "Falsification, Rationality, and the Duhem Problem." In Philosophical Problems of the Internal and External Worlds, edited by John Earman et al., 329–370. Pittsburgh: University of Pittsburgh Press, 1993.

Thomas Nickles
