# Physics

## Nineteenth Century

The development of physics during the nineteenth century can be seen both as a culmination of what went before and as a preparation of the stage for the revolutions in relativity and quantum theory that were to follow. The work of the Irish mathematician and astronomer William Rowan Hamilton (1805–1865) built on Laplace's revision of Newtonian dynamics to establish a thoroughly abstract and mathematical approach to physical problems. Originally motivated by his work in optics, Hamilton developed a new principle of least action. Instead of using Lagrange's integral of kinetic energy, Hamilton chose to minimize the integral of the difference between the kinetic and the potential energies. In applying this principle in mechanics, Hamilton reproduced the results of Euler and Lagrange and showed that it applied to a broader range of problems. After his work was published, as two essays in 1833 and 1834, it was critiqued and improved upon by the German mathematician Carl Gustav Jacob Jacobi (1804–1851). The resulting Hamilton-Jacobi formalism was applied in many fields of physics, including hydrodynamics, optics, acoustics, the kinetic theory of gases, and electrodynamics. However, it did not achieve its full significance until the twentieth century, when it was used to buttress the foundations of quantum mechanics.
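In modern notation (not Hamilton's own), Hamilton's principle selects the actual path of a system as the one that makes stationary the integral of the difference between kinetic energy \(T\) and potential energy \(V\):

```latex
\delta S = \delta \int_{t_1}^{t_2} \left( T - V \right) dt = 0
```

The integrand \(T - V\) is today called the Lagrangian, and this stationary-action form is the one that later proved essential to quantum mechanics.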

## CAUSES OF MOTION: MEDIEVAL UNDERSTANDINGS

Medieval scholars put considerable effort into modifying Aristotelian dynamics and answering the problems posed by it. Because most terrestrial bodies were composed of many elements, their natural motion was explained by summing the total power of heavy and light elements. This led medieval scholars to consider the minority type of material as providing a kind of "internal resistance." Using this idea, motion in a hypothetical void or vacuum could be explained by qualities of both motive force and resistance that inhered in the object itself.

Probably the greatest puzzle facing medieval interpreters of Aristotle was the violent motion of projectiles. If the motion of every object required the analyst to specify a mover or cause for that motion, then what caused projectiles to continue in their trajectories after they lost contact with their original projector? Aristotle suggested that a surrounding medium was pushed by the original mover and so continued to push the projectile. For medieval scholars who admitted the possibility of a vacuum, however, this explanation was not tenable. In addition, if the medium was slight compared to the projectile (such as the air compared to a stone), then it was difficult to see how a corporeal mover could continue to be the cause of violent motion. Motivated by such concerns, in the sixth century John Philoponus suggested that the continued motion of a projectile was due to an incorporeal force that was impressed on the projectile itself by the original mover. The projectile would finish its motion when this impressed force wore off.

Some eight hundred years later, in the fourteenth century at the University of Paris, Jean Buridan renamed Philoponus's impressed force "impetus," and used the concept to interpret the motion of projectiles and falling bodies. Once impressed on a projectile, the impetus could bring about virtually constant motion unless it was interrupted or dissipated by a resistive medium, a notion that bears some resemblance to the conceptions of inertia developed later. Buridan also attempted to quantify impetus, by saying it was proportional to the moving object's speed and its quantity of matter. As an object fell freely, the power of gravity imparted larger and larger amounts of impetus to it, thereby increasing its velocity.
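Buridan's quantification — speed times quantity of matter — is often compared, with due caution about anachronism, to the modern momentum:

```latex
p = m v
```

The resemblance is suggestive rather than exact: impetus was conceived as a cause of continued motion, whereas momentum is a measure of motion itself.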

A number of scholars attempted to quantify a relation between the impressed force and the velocity of an object. Paramount among these was Thomas Bradwardine of Merton College. In comparison to a projectile, a falling object presented special problems. Aristotle suggested that the velocity of the object was proportional to the total impressed force (F) and inversely proportional to the resistance of an ambient medium (R). Bradwardine rejected this formulation and a number of other suggestions by Philoponus, Avempace, and Averroes that involved simple ratios or subtractions of F and R. Instead, he proposed a dynamics in which the velocity of a body increased arithmetically as the ratio F/R increased geometrically. This formulation proved to be influential well into the sixteenth century.
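Bradwardine's rule is commonly reconstructed in modern (and therefore anachronistic) notation as a logarithmic relation: velocities add arithmetically while the corresponding ratios of force to resistance compound geometrically,

```latex
\frac{F_2}{R_2} = \left( \frac{F_1}{R_1} \right)^{v_2 / v_1},
\qquad \text{equivalently} \qquad
v \propto \log\!\left( \frac{F}{R} \right).
```

On this reading, doubling the velocity corresponds to squaring the ratio \(F/R\), and the rule has the attractive feature that \(v = 0\) when \(F = R\), avoiding motion when force merely equals resistance.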

Work on magnetism was encouraged by Alessandro Volta's (1745–1827) development, in 1800, of the voltaic pile (an early battery), which, unlike the Leyden jar, was able to produce a steady source of electric current. Inspired by the German philosophical movement of Naturphilosophie, which held that the forces of nature were all interrelated in a higher unity, the Danish physicist Hans Christian Ørsted (1777–1851) sought a magnetic effect from the electric current of Volta's battery. Ørsted's announcement of his success, in 1820, brought a flurry of activity, including the work of Jean-Baptiste Biot and Félix Savart, on the force law between a current and a magnet, and the work of André-Marie Ampère, on the force law between two currents. The magnetic force was found to depend on the inverse square of the distance but was more complex due to the subtle vector relations between the currents and distances. For the analysis of inverse-square force laws, the German mathematician Carl Friedrich Gauss (1777–1855) introduced, in 1839, the concept of "potential," which could be applied with great generality to both electrostatics and magnetism. This work grew from Gauss's efforts in measuring and understanding the earth's magnetic field, which he undertook with his compatriot Wilhelm Eduard Weber (1804–1891).
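The Biot-Savart result is usually quoted today in a vector form that postdates 1820: the magnetic field contributed by a current element \(I\,d\boldsymbol{\ell}\) at displacement \(r\hat{\mathbf{r}}\) is

```latex
d\mathbf{B} = \frac{\mu_0}{4\pi} \, \frac{I \, d\boldsymbol{\ell} \times \hat{\mathbf{r}}}{r^2}
```

The \(1/r^2\) factor carries the inverse-square dependence, while the cross product encodes the "subtle vector relations" between currents and distances mentioned above.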

## THE NEWTONIAN SYNTHESIS

The Newtonian synthesis was, first and foremost, a unification of celestial and terrestrial physics. Newton's famous story of seeing an apple fall in his mother's garden neatly summarizes this achievement. According to the story, the falling apple made Newton consider that the gravitational force that influences the apple (a projectile in terrestrial motion) might also act on the moon (a satellite in celestial motion). He concluded that "the power of gravity … was not limited to a certain distance from the earth but that this power must extend much farther than was usually thought" (Westfall, 1980, p. 154). This idea is displayed in a famous diagram in the Principia, depicting a projectile being thrown from a mountain peak, which rests on a small planet; as the projectile is thrown with greater and greater speed, it eventually goes into orbit around the planet and becomes a satellite. Consideration of the moon's motion led Newton to the force law for universal gravitation. Simply by virtue of having mass, any two objects exert mutually attractive forces on each other (in accordance with the third law of motion). The inverse-square force law made the gravitational force quantified and calculable, but regarding the cause of gravity itself, Newton famously claimed, "I feign no hypotheses."
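In modern symbols, the law of universal gravitation reads

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

where \(m_1\) and \(m_2\) are the two masses, \(r\) the distance between them, and \(G\) the gravitational constant (a later notation; Newton worked with proportionalities rather than an explicit constant).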

As much as his specific scientific achievements, Newton's method of working became a touchstone for scientists of the eighteenth century and defined a general scientific culture of "Newtonianism." In this regard, the Newtonian synthesis can be seen as a combination of three broad traditions: experiment, mathematics, and mechanism. Newton's Opticks (1704) exemplified the empirical, inductive approach recommended by Francis Bacon. There, Newton reports on careful experiments with light, including a series showing that when light passes through a prism, the prism does not modify the color of the light but rather separates it into its component colors (see Fig. 2). He also did experiments in which he shone monochromatic light on thin plates and films, to produce patterns of light and dark regions; these later became known as "Newton's rings."

The effort to describe physical events with mathematics, which was so evident in the work of Kepler, Galileo, and Descartes, reached its full expression in Newton's dynamics. The universal gravitation law, along with the three laws of motion and the calculus, presented a complete Newtonian approach to quantifying and calculating the motion of everything from the smallest atom to the largest planet. Closely related to this mathematical tendency is the mechanical philosophy pursued by Descartes, Gassendi, and Robert Boyle (1627–1691). Although Newton rejected Descartes's plenum, he retained a modified idea of mechanical causality. For Newton, gravity was an action at a distance; two masses acted on one another despite the fact that empty space lay between them. Defined as a change in motion, Newton's conception of force was a mechanical, causal agent that acted either through contact or through action at a distance.
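Newton's definition of force as a change in motion corresponds to the modern statement of the second law of motion,

```latex
\mathbf{F} = \frac{d\mathbf{p}}{dt} = \frac{d(m\mathbf{v})}{dt},
```

where \(\mathbf{p} = m\mathbf{v}\) is the momentum (again a later notation; the familiar \(F = ma\) form is due to Euler and his successors rather than to Newton himself).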

The most significant work in magnetism was done by Michael Faraday (1791–1867) at the Royal Institution of London. By 1831, Faraday had characterized a kind of reverse Ørsted effect, in which a change in magnetism gave rise to a current. For example, he showed that this "electromagnetic induction" occurred between two electric circuits that communicated magnetism through a shared iron ring but, otherwise, were electrically insulated from one another (an early version of the transformer). Faraday made the first measurements of magnetic materials, characterizing diamagnetic, paramagnetic, and ferromagnetic effects (though this terminology is due to the English mathematician William Whewell).

## FORMS OF MATTER

The development of physics both contributed to and depended on ideas about the structure of matter. In this regard, the history of physics is tied to the history of chemistry. Both sciences inherited a debate that began with the ancients regarding atomism versus continuity. Combining the influences of, among others, Pythagoras and Democritus, Plato saw matter as being composed of atoms that had different geometrical shapes for each of the four elements. Against this, Aristotle developed a continuum theory of matter, in part because his theory of motion would be contradicted by the existence of a void. This debate was reawakened during the sixteenth and seventeenth centuries. On the one hand, Descartes embraced a continuum theory involving a plenum of fine matter and vortices, founded on the idea that motion is caused through contact. On the other hand, Robert Boyle proposed atomistic explanations of his finding that reducing the volume of a gas increased its pressure proportionately. Newton refined Boyle's ideas by interpreting pressure as being due to mutually repelling atoms, and recommended an atomistic stance for further research in chemistry and optics.
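Boyle's finding on gases is summarized in the modern form of Boyle's law: at fixed temperature, pressure and volume vary inversely,

```latex
P V = \text{constant} \qquad (T \ \text{fixed}),
```

so that halving the volume doubles the pressure — the proportionate increase Boyle observed and that Newton reinterpreted in terms of mutually repelling atoms.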

During the eighteenth and nineteenth centuries, many theorists and experimentalists posited the existence of a number of "imponderables," substances that could produce physical effects but could not be weighed. The first of these was proposed in 1703 by the German physician and chemist Georg Ernst Stahl in order to explain the processes of oxidation and respiration. Stahl's phlogiston theory, and the renewed interest in Newton's theories of an ether medium for gravity, encouraged further theories involving imponderables, most notably electrical fire (to describe the flow of static electricity) and caloric (to describe heat flow). Although the imponderables were eventually rejected, they served as useful heuristic devices in quantifying physical laws. For example, the Scottish chemist Joseph Black (1728–1799) used the caloric theory to found the study of calorimetry and to measure specific heat (the heat required to raise the temperature of a substance one degree), and latent heat (the heat required for a substance to change its state).
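Black's two quantities survive in the standard notation of modern calorimetry (the symbols here are conventional, not Black's own): the heat needed to change a body's temperature and the heat needed to change its state,

```latex
Q = m c \, \Delta T \quad \text{(specific heat } c\text{)},
\qquad
Q = m L \quad \text{(latent heat } L\text{)},
```

where \(m\) is the mass of the substance. The caloric picture, whatever its fate, made both quantities measurable.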

Even after the work of John Dalton, few chemists and physicists before 1890 accepted the actual existence of atoms. Nevertheless, they found the atomic hypothesis to be useful in suggesting experiments and interpreting the results. In 1863, the Irish physical chemist Thomas Andrews experimentally characterized the "critical point" between the gas and liquid phases: at relatively low temperatures, as the pressure was increased, the change from gas to liquid was abrupt; however, at relatively high temperatures, the transition was continuous. In part to account for the behavior of the critical point, the Dutch physicist Johannes Diderik van der Waals (1837–1923) assumed that the forces between atoms were attractive at large range but repulsive at short range. The work of van der Waals represented the first successful theory of phase transitions and showed how an atomistic model could describe both phases.
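Van der Waals's assumptions are embodied in his equation of state, written here per mole: the constant \(b\) corrects for the finite volume of the atoms (short-range repulsion) and the term \(a/V^2\) for their mutual attraction at larger range,

```latex
\left( P + \frac{a}{V^2} \right) (V - b) = R T.
```

The critical point observed by Andrews corresponds to the isotherm on which \(\partial P / \partial V = \partial^2 P / \partial V^2 = 0\); above the critical temperature no amount of pressure produces an abrupt gas-liquid transition.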

In the mid-nineteenth century, Michael Faraday and Julius Plücker (1801–1868), among others, pioneered research on the discharge of electricity through partially evacuated glass tubes. The British chemist William Crookes made a number of improvements to these discharge tubes and called the glowing material that formed in them the "fourth state of matter" (which was later dubbed "plasma" by the American chemist Irving Langmuir). Work in this area eventually led to Joseph John Thomson's discovery of the electron and Philipp Lenard's characterization, in 1899, of the photoelectric effect.

Finally, Faraday pioneered the concept of the field, coining the term "magnetic field" in 1845. He saw the "lines of force" of magnetic or electric fields as being physically real and as filling space (in opposition to the idea of action at a distance).
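Faraday's electromagnetic induction is expressed today, in notation due to his successors, as the statement that a changing magnetic flux \(\Phi_B\) through a circuit induces an electromotive force opposing the change:

```latex
\mathcal{E} = -\frac{d\Phi_B}{dt}
```

In the iron-ring experiment, switching the primary circuit on or off changes \(\Phi_B\) through the secondary and drives a momentary current, even though the two circuits are electrically insulated from one another.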

## SECOND LAW OF THERMODYNAMICS

The development of the second law of thermodynamics was intimately tied to the kinetic theory of gases, and carried with it the rebirth of atomism and the founding of statistical mechanics. Although Sadi Carnot believed that caloric was not lost when it traveled from the hot body to the cool body of an engine, he recognized that the work delivered depended on the temperature difference between the two bodies and that this difference constantly decreased. This observation was clarified by the German physicist Rudolf Clausius. In 1851, a few years after the acceptance of the first law of thermodynamics, Clausius recognized the need for a second law, to account for the fact that energy was often irrecoverably lost by a system. In a paper published in 1865, Clausius analyzed thermodynamic cycles with a quantity that he dubbed the "entropy" and found that it increased for irreversible processes and, at best (for a reversible process), remained unchanged.
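Clausius's definition and his conclusion can be stated in modern form: for heat \(\delta Q\) exchanged reversibly at absolute temperature \(T\),

```latex
dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\Delta S \geq 0 \ \ \text{(isolated system)},
```

with equality holding only for reversible processes — the quantitative content of the statement that entropy never decreases.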

The Austrian Ludwig Boltzmann read Clausius's paper and set about developing a mechanical interpretation of the second law. In a first attempt, published in 1866, he used Hamilton's principle to analyze the development of a thermodynamic system made up of discrete particles. After Joseph Stefan (1835–1893) alerted Boltzmann to James Clerk Maxwell's probabilistic approach, Boltzmann made refinements to Maxwell's ideas and incorporated them into his mechanical interpretation. In 1872, he published a paper that made use of a transport equation (now called the "Boltzmann equation") to describe the evolution of a probability distribution of particles. As the atoms of a gas collided and eventually reached an equilibrium velocity distribution, the entropy was maximized.
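The equilibrium velocity distribution that Boltzmann's transport equation drives a gas toward is the one now called the Maxwell-Boltzmann distribution; in modern notation (with \(k\) the Boltzmann constant, a later symbol), the fraction of molecules of mass \(m\) with speed near \(v\) at temperature \(T\) is

```latex
f(v) = 4\pi \left( \frac{m}{2\pi k T} \right)^{3/2} v^2 \exp\!\left( -\frac{m v^2}{2 k T} \right).
```

At this distribution the entropy is maximized, and further collisions leave the distribution unchanged.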

Boltzmann's ideas were met with a number of objections. One objection argued that because Newton's laws were reversible, thermodynamic processes described by the motion of atoms could be reversed in time to yield processes that deterministically went to states of lower entropy, thus contradicting the second law. Boltzmann's response highlighted the statistical nature of his interpretation, arguing that, given particular initial conditions, any thermodynamic system has a vastly greater number of final states available to it with relatively high entropy. An increase of entropy means that the system has become randomized as the available energy is spread around to its constituent atoms. In 1877 Boltzmann published a paper that incorporated this idea and defined the entropy as proportional to the logarithm of the number of states available to a system. In doing his calculations, Boltzmann used the device of counting energy in discrete increments, which he allowed to go to zero at the end of the calculation. This method, a harbinger of the quantization of energy, influenced Planck and Einstein over twenty years later.
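Boltzmann's 1877 definition is usually quoted in the compact form later written down by Planck, who also introduced the constant \(k\):

```latex
S = k \log W,
```

where \(W\) counts the number of microscopic states compatible with a given macroscopic state. Because high-entropy macrostates correspond to enormously many more microstates, a system overwhelmingly tends toward them — the statistical answer to the reversibility objection.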

Boltzmann had less success answering a second set of objections regarding atomism. The British physicist William Thomson (1824–1907) and Scottish physicist Peter Tait (1831–1901) rejected atomism as a result of their adherence to the dynamical theory of matter, which rejected the existence of a void. Similarly, Ernst Mach put forward empiricist counterarguments, which rejected Boltzmann's adherence to entities that could not be confirmed by direct observation.

One of the pinnacles of nineteenth-century physics is the theory of electromagnetism developed by the Scottish physicist James Clerk Maxwell (1831–1879). Maxwell brought together the work of Coulomb, Ampère, and Faraday, and made the crucial addition of the "displacement current," which acknowledged that a magnetic field can be produced not only by a current but also by a changing electric field. These efforts resulted in a set of four equations that Maxwell used to derive wave equations for the electric and magnetic fields. This led to the astonishing prediction that light was an electromagnetic wave. In developing and interpreting his results, Maxwell sought to build a mechanical model of electromagnetic radiation. Influenced by Faraday's rejection of action at a distance, Maxwell attempted to see electromagnetic waves as vortices in an ether medium, interspersed with small particles that acted as idle wheels to connect the vortices. Maxwell discarded this mechanical model in later years, in favor of a dynamical perspective. This latter attitude was taken by the German experimentalist Heinrich Rudolph Hertz (1857–1894), who, in 1886, first demonstrated the propagation of electromagnetic waves in the laboratory, using a spark-gap device as a transmitter.
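Maxwell's four equations are usually quoted today in the vector form later given by Heaviside, not in Maxwell's original notation:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
```

The final term, \(\mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t\), is the displacement current. In empty space the equations combine into wave equations whose propagation speed, \(c = 1/\sqrt{\mu_0 \varepsilon_0}\), matched the measured speed of light — the basis of the prediction that light is an electromagnetic wave.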

During the eighteenth century, most researchers saw the flow of heat as the flow of the imponderable fluid caloric. Despite developments, such as Benjamin Thompson's cannon-boring experiments, which suggested that heat involved some sort of microscopic motion, caloric provided a heuristic model that aided in the quantification of experimental results and in the creation of mathematical models. For example, the French engineer Sadi Carnot (1796–1832) did empirical work on steam engines which led to the theory of the thermodynamic cycle, as reported in his Reflections on the Motive Power of Fire (1824). A purely mathematical approach was developed by Jean-Baptiste-Joseph Fourier, who analyzed heat conduction with the method of partial differential equations in his Analytical Theory of Heat (1822).
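Fourier's method centered on the partial differential equation of heat conduction, in modern symbols

```latex
\frac{\partial T}{\partial t} = \alpha \, \nabla^2 T,
```

where \(T\) is temperature and \(\alpha\) the thermal diffusivity of the material. Fourier solved it by expanding the temperature profile in the trigonometric series that now bear his name — a technique whose usefulness long outlived the caloric theory in whose terms it was framed.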

Carnot's opinion that caloric was conserved during the running of a steam engine was proved wrong by the development of the first law of thermodynamics. Similar conceptions of the conservation of energy (or "force," as energy was still referred to) were identified by at least three different people during the 1840s, including the German physician Julius Robert von Mayer (1814–1878), who was interested in the human body's ability to convert the chemical energy of food to other forms of energy, and the German theoretical physicist Hermann Ludwig Ferdinand von Helmholtz (1821–1894), who gave a mathematical treatment of different types of energy and showed that the different conservation laws could be traced back to the conservation of vis viva in mechanics. The British physicist James Prescott Joule (1818–1889) did an experiment that measured the mechanical equivalent of heat with a system of falling weights and a paddlewheel that stirred water within an insulated vessel (see Fig. 4).
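The first law, in its modern form, states that the internal energy \(U\) of a system changes only through heat added and work done:

```latex
\Delta U = Q + W_{\text{on}},
```

where \(Q\) is heat added to the system and \(W_{\text{on}}\) the work done on it. Joule's paddlewheel experiment fixed the conversion rate between the two currencies, heat and mechanical work; in modern units the mechanical equivalent of heat is about 4.186 joules per calorie.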

In his Hydrodynamica (1738), Daniel Bernoulli had proposed the first kinetic theory of gases, by suggesting that pressure was due to the motion and impact of atoms as they struck the sides of their containment vessel. The work of the chemists John Dalton (1766–1844) and Amedeo Avogadro (1776–1856) indirectly lent support to such a kinetic theory by casting doubt upon the Newtonian program of understanding chemistry in terms of force laws between atoms. After John Herapath's work on the kinetic theory, in 1820, was largely ignored, Rudolf Clausius published two papers, in 1857 and 1858, in which he sought to derive the specific heats of a gas and introduced the concept of the mean free path between atomic collisions. James Clerk Maxwell added the idea that the atomic collisions would result in a range of velocities, not an average velocity as Clausius thought, and that this would necessitate the use of a statistical approach. In a number of papers published from 1860 to 1862, Maxwell completed the foundations of the kinetic theory and introduced the equipartition theorem, the idea that each degree of freedom (translational or rotational) contributed the same average energy, which was proportional to the temperature of the gas. Clausius and Maxwell's work in kinetic theory was tied to their crucial contributions to developing the second law of thermodynamics (see sidebar, "Second Law of Thermodynamics").
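The equipartition theorem assigns each translational or rotational degree of freedom the same mean energy; in modern notation (with \(k\) the Boltzmann constant, a later symbol),

```latex
\langle E \rangle = \tfrac{1}{2} k T \ \text{per degree of freedom},
\qquad
U = \tfrac{3}{2} N k T \ \text{(monatomic gas of } N \text{ atoms)}.
```

The three translational degrees of freedom of a monatomic gas thus give its internal energy directly, and comparing predicted and measured specific heats became a key test — and, for polyatomic gases, a persistent puzzle — for the kinetic theory.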