
Causation

The 1970s And Early 1980s: The Age Of Causal Analyses

Several different analyses of causation were given serious attention in the 1970s. One school gave an account based on counterfactuals, another used Hume's idea of regularity or constant conjunction, still another attempted to reduce causation to probabilistic relations and another to physical processes interacting spatiotemporally, and yet another was founded on the idea of manipulability.

EVENT CAUSATION VERSUS CAUSAL GENERALIZATIONS

Legal cases and accident investigations usually deal with a particular event and ask what caused it. For example, when in February 2003 the space shuttle Columbia burned up during reentry, investigators looked for the cause of the disaster. In the end, they concluded that a chunk of foam insulation that had broken off and hit the wing during launch was the cause of a rupture in the insulating tiles, which was the cause of the shuttle's demise during reentry. Philosophers call this event causation, or actual causation, or token-causation.

Policy makers, statisticians, and social scientists usually deal with kinds of events, like graduating from college, or becoming a smoker, or playing lots of violent video games. For example, epidemiologists in the 1950s and 1960s looked for the kind of event that was causing a large number of people to get lung cancer, and they identified smoking as a primary cause. Philosophers call this type-causation, or causal generalization, or causation among variables.

The properties of causal relationships are different for actual causation and for causal generalizations. Actual causation is typically considered transitive, asymmetrical, and irreflexive. If we are willing to say that one event A, say the Titanic hitting an iceberg on 14 April 1912, caused another event B, its hull ripping open below the water line and taking on water moments later, which in turn caused a third event C, its sinking a few hours later, then surely we should be willing to say that event A (hitting the iceberg) caused event C (sinking). So actual causation is transitive. (Plenty of philosophers disagree; see, for example, the work of Christopher Hitchcock.) It is asymmetrical because of how we view time: if a particular event A caused a later event B, then B did not cause A. Finally, single events do not cause themselves, so causation between particular events is irreflexive.

Causal generalizations, however, are usually but not always transitive, definitely not asymmetrical, and definitely not irreflexive. Some causal generalizations are symmetrical, for example, confidence causes success and success causes confidence; others are not, for example, warm weather causes people to wear less clothing, but wearing less clothing does not cause the weather to warm. So causal generalizations are neither symmetrical nor asymmetrical, unlike actual causation, which is asymmetrical. And when they are symmetrical, transitivity makes causal generalizations reflexive: success breeds more success, and so forth.

The counterfactual theory.

In the late 1960s Robert Stalnaker began the rigorous study of sentences that assert what are called contrary-to-fact conditionals. For example, "If the September 11, 2001, terrorist attacks on the United States had not happened, then the United States would not have invaded Afghanistan shortly thereafter." In his classic 1973 book Counterfactuals, David Lewis produced what has become the most popular account of such statements. Lewis's theory rests on two ideas: the existence of alternative "possible worlds" and a similarity metric over these worlds. For example, it is intuitive that the possible world identical to our own in all details except for the spelling of my wife's middle name ("Anne" instead of "Ann") is closer to the actual world than one in which the asteroid that killed the dinosaurs missed the earth and primates never evolved from mammals.

For Lewis, the meaning and truth of counterfactuals depend on our similarity metric over possible worlds. When we say "if A had not happened, then B would not have happened either," we mean that for each possible world W1 in which A did not happen but B did happen, there is at least one world W2 in which A did not happen and B did not happen that is closer to the actual world than W1. Lewis represents counterfactual dependence with the symbol □→, so P □→ Q means that, among all the worlds in which P happens, there is a world in which Q also happens that is closer to the actual world than any world in which P happens but Q does not.

That there is some connection between counterfactuals and causation seems obvious. We see one event A followed by another B. What do we mean when we say A caused B? We might well mean that if A had not happened, then B would not have happened either. If the Titanic had not hit an iceberg, it would not have sunk. Formalizing this intuition in 1973, Lewis analyzed causation as a relation between two events A and B that both occurred such that two counterfactuals hold:

  1. A □→ B, and
  2. ∼A □→ ∼B

Because both A and B already occurred, the first is trivially true, so we need only assess the second in order to assess whether A caused B.
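Lewis's truth conditions are simple enough to mimic in a few lines of code. The Python sketch below is purely illustrative: the worlds, events, and similarity distances are all invented, and the similarity metric is reduced to a single number per world.

```python
# Toy possible-worlds model of Lewis's counterfactual analysis.
# Each world records which events occur; distance 0 is the actual world,
# larger numbers are less similar worlds. All values are stipulated.
worlds = [
    {"hit_iceberg": True,  "sank": True,  "distance": 0},  # the actual world
    {"hit_iceberg": False, "sank": False, "distance": 1},  # near miss, safe crossing
    {"hit_iceberg": False, "sank": True,  "distance": 5},  # sinks for some other reason
]

def would(antecedent, consequent):
    """P box-arrow Q: some world where both P and Q hold is closer to
    actuality than every world where P holds but Q does not."""
    p_and_q = [w["distance"] for w in worlds if antecedent(w) and consequent(w)]
    p_not_q = [w["distance"] for w in worlds if antecedent(w) and not consequent(w)]
    if not p_and_q:
        return False
    return min(p_and_q) < min(p_not_q, default=float("inf"))

def lewis_causes(a, b):
    """A caused B: both actually occurred (making condition 1 trivial),
    and had A not occurred, B would not have occurred either."""
    actual = worlds[0]
    return (actual[a] and actual[b]
            and would(lambda w: not w[a], lambda w: not w[b]))

print(lewis_causes("hit_iceberg", "sank"))  # True
# Because this tiny model contains no world in which the ship sank without
# hitting the iceberg, the reversed test passes too: a miniature instance
# of the asymmetry problem the article discusses.
print(lewis_causes("sank", "hit_iceberg"))  # True
```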

Is this analysis satisfactory? Even if possible worlds and a similarity metric among them are clearer and less metaphysically mysterious than causal claims, which many dispute, there are two major problems with this account of causation. First, in its original version it just misses cases of overdetermination or preemption, that is, cases in which more than one cause was present and could in fact have produced the effect.

OVERDETERMINATION AND PREEMPTION

A spy, setting out to cross the desert with some key intelligence, fills his canteen with just enough water for the crossing and settles down for a quick nap. While he is asleep, Enemy A sneaks into his tent and pokes a very small hole in the canteen, and a short while later Enemy B sneaks in and adds a tasteless poison. The spy awakes, forges ahead into the desert, and when he goes to drink from his canteen discovers it is empty and dies of thirst before he can get water. What was the cause of the spy's death? According to the counterfactual theory, neither enemy's action caused the death. If Enemy A had not poked a hole in the canteen, then the spy still would have died by poison. If Enemy B had not put poison into the canteen, then he still would have died from thirst. Their actions overdetermined the spy's death, and the pinprick from Enemy A preempted the poison from Enemy B.

In the beginning of the movie Magnolia, a classic causal conundrum is dramatized. A fifteen-year-old boy goes up to the roof of his ten-story apartment building, ponders the abyss, and jumps to his death. Did he commit suicide? It turns out that construction workers had installed netting the day before that would have saved him from the fall, but as he is falling past the fifth story, a gun is shot from inside the building by his mother, and the bullet kills the boy instantly. Did his mother murder her son? As it turns out, his mother fired the family rifle at his drunk step-father but missed and shot her son by mistake. She fired the gun every week at approximately that time after their horrific regular argument, which the boy cited as his reason for attempting suicide, but the gun was usually not loaded. This week the boy secretly loaded the gun without telling his parents, presumably with the intent of causing the death of his stepfather. Did he, then, in fact commit suicide, albeit unintentionally?

Even more importantly, Lewis's counterfactual theory has a very hard time with the asymmetry of causality and only a slightly better time with the problem of spurious causation. Consider a man George who jumps off the Brooklyn Bridge and plunges into the East River. (This example is originally from Horacio Arlo-Costa and is discussed in Hausman, 1998, pp. 116–117.) On Lewis's theory, it is clear that it was jumping that caused George to plunge into the river, because had George not jumped, the world in which he did not plunge is closer to the actual one than any in which he just happened to plunge for some other reason at approximately the same time. Fair enough. But consider the opposite direction: if George had not plunged, then he would not have jumped. Should we assent to this counterfactual? Is a world in which George did not plunge into the river and did not jump closer to the real one than any in which he did not plunge but did jump? Most everyone except Lewis and his followers would say yes. Thus on Lewis's account jumping off the bridge caused George to plunge into the river, but plunging into the river (as distinct from the idea or goal of plunging into the river) also caused George to jump. (Lewis and many others have amended the counterfactual account of causation to handle problems of overdetermination and preemption, but no account has yet satisfactorily handled the asymmetry of causality.)

For the problem of spurious causation, consider Johnny, who gets infected with the measles virus, runs a fever, and shortly thereafter gets a rash. Is it reasonable to assert that if Johnny had not gotten a fever, he would not have gotten a rash? Yes, but it was not the fever that caused the rash, it was the measles virus. Lewis later responded to this problem by prohibiting "backtracking" and to the problem of overdetermination and preemption with an analysis of "influence," but the details are beyond our scope.

Mackie's regularity account.

Where David Lewis tried to base causation on counterfactuals, John Mackie tried to extend Hume's idea that causes and effects are "constantly conjoined" and to use the logical idea of necessary and sufficient conditions to make things clear. In 1974 Mackie published an analysis of causation in some part aimed at solving the problems that plagued Lewis's counterfactual analysis, namely overdetermination and preemption. Mackie realized that many factors combine to produce an effect, and it is only our idiosyncratic sense of what is "normal" that draws our attention to one particular feature of the situation, such as hitting the iceberg. It is a set of factors, for example, A: air with sufficient oxygen, B: a dry pile of combustible newspaper and kindling, and C: a lit match that combine to cause D: a fire. Together the set of factors A, B, and C are sufficient for D, but there might be other sets that would work just as well, for example A, B, and F: a bolt of lightning. If there were a fire caused by a lit match, but a bolt of lightning occurred that also would have started the fire, then Lewis's account has trouble saying that the lit match caused the fire, because the fire would have started without the lit match; or put another way, the match was not necessary for starting the fire. Mackie embraces this idea and says that X is a cause of Y just in case X is an Insufficient but Necessary part of an Unnecessary but Sufficient set of conditions for Y, that is, an INUS condition. The set of conditions that produced Y need not be the only sufficient set, thus the set is not necessary, but X should be an essential part of a set that is sufficient for Y.
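The INUS test lends itself to a mechanical check. The Python sketch below hard-codes the two sufficient sets from the fire example; this is a stipulation for illustration only, since Mackie grounds sufficiency in lawlike regularities that a toy like this cannot capture.

```python
# Minimal sufficient sets for the effect (fire), from the example in the text:
# {oxygen, kindling, match} or {oxygen, kindling, lightning}.
SUFFICIENT_FOR_FIRE = [
    {"oxygen", "kindling", "match"},
    {"oxygen", "kindling", "lightning"},
]

def suffices(conditions, sufficient_sets):
    """A set of conditions suffices for the effect if it contains
    some sufficient set."""
    return any(s <= conditions for s in sufficient_sets)

def is_inus(factor, sufficient_sets):
    """factor is INUS for the effect iff it is a necessary (non-redundant)
    part of some sufficient set that is itself unnecessary, i.e. other
    sufficient sets exist."""
    for s in sufficient_sets:
        if factor in s:
            part_is_necessary = not suffices(s - {factor}, sufficient_sets)
            set_is_unnecessary = len(sufficient_sets) > 1
            if part_is_necessary and set_is_unnecessary:
                return True
    return False

print(is_inus("match", SUFFICIENT_FOR_FIRE))      # True
print(is_inus("lightning", SUFFICIENT_FOR_FIRE))  # True
print(is_inus("rain", SUFFICIENT_FOR_FIRE))       # False
```

The same machinery reproduces the measles complaint discussed in the text: feed it {fever, measles} as sufficient for rash and {rash, measles} as sufficient for fever, and each symptom wrongly qualifies as an INUS cause of the other.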

Again, however, the asymmetry of causality and the problem of spurious causation wreak havoc with Mackie's INUS account of causation. Before penicillin, approximately 10 percent of those people who contracted syphilis eventually developed a debilitating disease called paresis, and nothing doctors could measure seemed to tell them anything about which syphilitics developed paresis and which did not. As far as is known, paresis can result only from syphilis, so having paresis is by itself sufficient for having syphilis. Consider applying Mackie's account to this case. Paresis is an INUS condition of syphilis, because it is sufficient by itself for having syphilis, but it is surely not a cause of it.

Consider the measles. If we suppose that when people are infected they either show both symptoms (the fever and rash) or their immune system controls it and they show neither, then the INUS theory gets things wrong. The fever is a necessary part of a set that is sufficient for the rash: {fever, infected with measles virus}, and for that matter the rash is a necessary part of a set that is sufficient for fever: {rash, infected with measles virus}. So, unfortunately, on this analysis fever is an INUS cause of rash and rash is also a cause of fever.

Probabilistic causality.

Twentieth-century physics has had a profound effect on a wide range of ideas, including theories of causation. In the years between about 1930 and 1970 the astounding and unabated success of quantum mechanics forced most physicists to accept the idea that, at bedrock, the material universe unfolds probabilistically. Past states of subatomic particles, no matter how finely described, do not determine their future states; they merely determine the probability of such future states. Embracing this brave new world in 1970, Patrick Suppes published a theory of causality that attempted to reduce causation to probability. Just as electrons have only a propensity, that is, an objective physical probability, to be measured at a particular location at a particular time, perhaps macroscopic events like developing lung cancer have only a probability as well. We observe that some events seem to quite dramatically change the probability of other events, however, so perhaps causes change the probability of their effects. If Pr(E), the probability of an event E, changes given that another event C has occurred, notated Pr(E | C), then we say E and C are associated. If not, then we say they are independent. Suppes was quite familiar with the problem of asymmetry, and he was well aware that association and independence are perfectly symmetrical, that is, Pr(E) ≠ Pr(E | C) ⇔ Pr(C) ≠ Pr(C | E). He was also familiar with the problem of spurious causation and knew that two effects of a common cause could appear associated. To handle asymmetry and spurious causation, he used time and the idea of conditional independence. His theory of probabilistic causation is simple and elegant:

  1. C is a prima facie cause of E if C occurs before E in time, and C and E are associated, that is, Pr(E) ≠ Pr(E | C).
  2. C is a genuine cause of E if C is a prima facie cause of E, and there is no event Z prior to C such that C and E are independent conditional on Z, that is, there is no Z such that Pr(E | Z) = Pr(E | Z, C).
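Both definitions are directly computable from a joint distribution. In the Python sketch below, the probabilities and the time ordering (measles before fever before rash) are invented for illustration; fever comes out as a prima facie but not a genuine cause of rash, because the earlier event, measles, screens them off.

```python
from itertools import product

def joint(m, f, r):
    """Joint probability of measles m, fever f, rash r (each 0 or 1).
    Fever and rash are independent given measles status; all numbers
    are made up for illustration."""
    pm = 0.1 if m else 0.9
    pf = (0.9 if f else 0.1) if m else (0.05 if f else 0.95)
    pr = (0.9 if r else 0.1) if m else (0.05 if r else 0.95)
    return pm * pf * pr

def prob(event, given=None):
    """P(event | given), summing over all eight (m, f, r) worlds."""
    given = given or (lambda m, f, r: True)
    num = sum(joint(m, f, r) for m, f, r in product([0, 1], repeat=3)
              if given(m, f, r) and event(m, f, r))
    den = sum(joint(m, f, r) for m, f, r in product([0, 1], repeat=3)
              if given(m, f, r))
    return num / den

rash = lambda m, f, r: r == 1
fever = lambda m, f, r: f == 1
measles = lambda m, f, r: m == 1
measles_and_fever = lambda m, f, r: m == 1 and f == 1

# Prima facie: fever precedes rash and raises its probability.
print(prob(rash), prob(rash, given=fever))  # about 0.135 vs 0.62: associated

# But measles, prior to fever, screens fever off from rash,
# so fever fails the test for a genuine cause.
print(prob(rash, given=measles), prob(rash, given=measles_and_fever))  # both 0.9
```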

Without doubt, the idea of handling the problem of spurious causation by looking for other events Z that screen off C and E, although anticipated by Hans Reichenbach, Irving John Good, and others, was a real breakthrough and remains a key feature of any metaphysical or epistemological account that connects causation to probability. Many other writers have elaborated a probabilistic theory of causation with metaphysical aspirations, for example, Ellery Eells, David Papineau, Brian Skyrms, and Wolfgang Spohn.

Figure 1. Cartwright's counterexample
SOURCE: Courtesy of the author

Probabilistic accounts have drawn criticism on several fronts. First, defining causation in terms of probability just replaces one mystery with another. Although we have managed to produce a mathematically rigorous theory of probability, the core of which is now widely accepted, we have not managed to produce a reductive metaphysics of probability. It is still as much a mystery as causation. Second, there is something unsatisfying about using time explicitly to handle the asymmetry of causation and at least part of the problem of spurious causation (we can only screen off spurious causes with a Z that is prior in time to C).

Third, as Nancy Cartwright persuasively argued in 1979, we cannot define causation with probabilities alone, we need causal concepts in the definiens (definition) as well as the definiendum (expression being defined). Consider her famous (even if implausible) hypothetical example, shown in Figure 1: smoking might cause more heart disease, but it might also cause exercise, which in turn might cause less heart disease. If the negative effect of exercise on heart disease is stronger than the positive effect of smoking and the association between smoking and exercise is high enough, then the probability of heart disease given smoking could be lower than the probability of heart disease given not smoking, making it appear as though smoking prevents heart disease instead of causing it.

The two effects could also exactly cancel, making smoking and heart disease look independent. Cartwright's solution is to look at the relationship between smoking and heart disease within groups that are doing the same amount of exercise, that is, to look at the relationship between smoking and heart disease conditional on exercise, even though exercise does not in this example come before smoking, as Suppes insists it should. Why does Suppes not allow Zs that are prior to E but after C in time? Because that would allow situations in which although C really does cause E, its influence was entirely mediated by Z, and by conditioning on Z it appears as if C is not a genuine cause of E, even though it is (Fig. 2).
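Concrete numbers make Cartwright's point vivid. The figures below are invented, but they realize her structure exactly: smoking raises heart-disease risk within each exercise group yet lowers it unconditionally, because smokers in this fiction exercise far more.

```python
# P(exercise | smoking status): smokers almost all exercise here.
p_ex = {True: 0.9, False: 0.1}
# P(heart disease | smoking, exercise): smoking raises risk in each stratum.
p_hd = {(True, True): 0.2, (True, False): 0.8,
        (False, True): 0.1, (False, False): 0.7}

def p_hd_given_smoking(smokes):
    """Marginalize heart-disease risk over exercise for a smoking status."""
    return sum(p_hd[(smokes, ex)] * (p_ex[smokes] if ex else 1 - p_ex[smokes])
               for ex in (True, False))

# Unconditionally, smoking looks protective:
print(p_hd_given_smoking(True), p_hd_given_smoking(False))  # 0.26 vs 0.64
# Conditional on exercise level, smoking raises risk in both strata:
print(p_hd[(True, True)] > p_hd[(False, True)])    # True, among exercisers
print(p_hd[(True, False)] > p_hd[(False, False)])  # True, among non-exercisers
```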

In Cartwright's language: Smoking should increase the probability of heart disease in all causally homogeneous situations for heart disease. The problem is circularity. By referring to the causally homogeneous situations, we invoke causation in our definition. The moral Cartwright drew, and one that is now widely accepted, is that causation is connected to probability but cannot be defined in terms of it.

Salmon's physical process theory.

A wholly different account of causation comes from Wes Salmon, one of the preeminent philosophers of science in the latter half of the twentieth century. In the 1970s Salmon developed a theory of scientific explanation that foundered partly on an asymmetry very similar to the asymmetry of causation. Realizing that causes explain their effects but not vice versa, Salmon made the connection between explanation and causation explicit. He then went on to characterize causation as an interaction between two physical processes, not a probabilistic or logical or counterfactual relationship between events. A causal interaction, according to Salmon, is the intersection of two causal processes and the exchange of some invariant quantity, like momentum. For example, two pool balls that collide each change direction (and perhaps speed), but their total momentum after the collision is (ideally) no different than before. An interaction has taken place, but momentum is conserved. Explaining the features of a causal process is beyond the scope of such a short review article, but Phil Dowe has made them quite accessible and extremely clear in a 2000 review article in the British Journal for the Philosophy of Science.

It turns out to be very difficult to distinguish real causal processes from pseudo-processes, but even accepting Salmon's and Dowe's criteria, the theory uses time to handle the asymmetry of causation and has big trouble with the problem of spurious causation. Again, see Dowe's excellent review article for details.

Manipulability theories.

Perhaps the most tempting strategy for understanding causation is to conceive of it as how the world responds to an intervention or manipulation. Consider a well-insulated, closed room containing two people. The room temperature is 58 degrees Fahrenheit, and each person has a sweater on. Later the room temperature is 78 degrees Fahrenheit and each person has taken his or her sweater off. If we ask whether it was the rise in room temperature that caused the people to peel off their sweaters or the peeling off of sweaters that caused the room temperature to rise, then unless there was some strange signal between the taking off of sweaters and turning up a thermostat somewhere, the answer is obvious. Manipulating the room temperature from 58 to 78 degrees will cause people to take off their sweaters, but manipulating them to take off their sweaters will not make the room heat up.

Figure 2. Z mediates the relation between C and E
SOURCE: Courtesy of the author

In general, causes can be used to control their effects, but effects cannot be used to control their causes. Further, there is an invariance between a cause and its effects that does not hold between an effect and its causes. It does not seem to matter how we change the temperature in the room from 58 to 78 degrees or from 78 to 58; the co-occurrence between room temperature and sweaters remains. When the temperature is 58, people have sweaters on. When the temperature is 78, they do not. The opposite is not true for the relationship between the effect and its causes. It does matter how people come to have their sweaters on. If we let them decide for themselves naturally, then the co-occurrence between sweaters and temperature will remain, but if we intervene to make them take their sweaters off or put them on, then we will annihilate any co-occurrence between wearing sweaters and the room temperature, precisely because the room temperature will not respond to whether or not people are wearing sweaters. Thus manipulability accounts handle the asymmetry problem.

They do the same for the problem of spurious causation. Tar-stained fingers and lung cancer are both effects of a common cause—smoking. Intervening to remove the stains from one's fingers will not in any way change the probability of getting lung cancer, however.
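The contrast between observing and intervening can be simulated directly. In the Python sketch below, the causal structure (smoking producing both stains and cancer) and all the probabilities are invented for illustration: stained fingers predict cancer when we merely observe, but an intervention that sets the stains independently of smoking erases the association without changing anyone's cancer risk.

```python
import random

def person(scrub_intervention=False):
    """Sample one person. With scrub_intervention=True, stained fingers
    are set by a coin flip independent of smoking, modeling an external
    manipulation of the effect variable."""
    smokes = random.random() < 0.3
    stains = (random.random() < 0.5) if scrub_intervention else smokes
    cancer = random.random() < (0.15 if smokes else 0.01)
    return stains, cancer

def cancer_rate(rows, stained):
    """Cancer rate among people with the given finger-stain status."""
    group = [c for s, c in rows if s == stained]
    return sum(group) / len(group)

random.seed(0)
n = 200_000
observed = [person() for _ in range(n)]
intervened = [person(scrub_intervention=True) for _ in range(n)]

# Observed: stains strongly predict cancer, since both track smoking.
print(cancer_rate(observed, True), cancer_rate(observed, False))      # roughly 0.15 vs 0.01
# Intervened: setting the stains leaves the cancer rate untouched.
print(cancer_rate(intervened, True), cancer_rate(intervened, False))  # both roughly 0.05
```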

The philosophical problem with manipulability accounts is circularity, for what is it to "intervene" and "manipulate" other than to "cause"? Intervening to set the thermostat to 78 is just to cause it to be set at 78. Manipulation is causation, so defining causation in terms of manipulation is, at least on the surface of it, circular.

Perhaps we can escape from this circularity by separating human actions from natural ones. Perhaps forming an intention and then acting to execute it is special and could be used as a noncausal primitive in a reductive theory of causation. Writers like Georg Henrik von Wright have pursued this line. Others, like Paul Holland, have gone so far as to say that we have no causation without human manipulation. But is this reasonable or desirable? Virtually all physicists would agree that it is the moon's gravity that causes the tides. Yet we cannot manipulate the moon's position or its gravity. Are we to abandon all instances of causation where human manipulation was not involved? If a painting falls off the wall and hits the thermostat, bumping it up from 58 to 78 degrees, and a half hour later sweaters come off, are we satisfied saying that the sequence (thermostat goes up, room temperature goes up, sweaters come off) was not causal?

Because they failed as reductive theories of causation, manipulability theories drew much less attention than perhaps they should have. As James Woodward elegantly puts it:

Philosophical discussion has been unsympathetic to manipulability theories: it is claimed both that they are unilluminatingly circular and that they lead to an implausibly anthropocentric and subjectivist conception of causation. This negative assessment among philosophers contrasts sharply with the widespread view among statisticians, theorists of experimental design, and many social and natural scientists that an appreciation of the connection between causation and manipulation can play an important role in clarifying the meaning of causal claims and understanding their distinctive features. (p. 25)

Figure 3. Path analytic model of X→Y
SOURCE: Courtesy of the author
