Experimenter's Bias

The results of experiments can be flawed or skewed because of bias. Those designing, conducting, or analyzing an experiment often hold expectations regarding the experiment's outcome, such as hoping for an outcome that supports the initial hypothesis. Such expectations can shape how the experiment is structured, conducted, and/or interpreted, thereby affecting the outcome. This typically unconscious and unintentional phenomenon is known as experimenter's bias.

The main types of experimenter's bias include self-fulfilling prophecy, observer bias, and interpreter bias. Most modern social science and clinical experiments are designed with one or more safeguards in place to minimize the possibility of such biases distorting results.

In the mid- to late 1960s, psychologist Robert Rosenthal began uncovering and reporting on experimenter's bias in social science research. His most famous and controversial work was a 1968 study on teacher expectations. In it, Rosenthal and his colleagues gave students a standardized intelligence test, then randomly assigned some to a group designated "intellectual bloomers" and told teachers that these students were expected to perform very well academically. When tested eight months later, the "intellectual bloomers" had indeed done better than their peers, suggesting that teacher expectancy had affected the educational outcomes. This phenomenon, in which a person's behavior is shaped by and conforms to the expectations of others, came to be known as the Pygmalion effect, named for George Bernard Shaw's play Pygmalion. Rosenthal's work shed light on issues of internal validity and launched a new area of research into experimental methodology.

Other biases arise not from the experimenter's interaction with the subjects but rather from his or her observations and interpretations of their responses. Observer bias occurs when the experimenter's assumptions, preconceptions, or prior knowledge affects what he or she observes and records about the results of the experiment. Interpreter bias is an error in data interpretation, such as a focus on just one possible interpretation of the data to the exclusion of all others.

To attempt to prevent bias, most social science and clinical studies are either single-blind studies, in which subjects are unaware of whether they are participating in a control or a study group, or double-blind studies, in which both experimenters and subjects are unaware of which subjects are in which groups. Other ways of avoiding experimenter's bias include standardizing methods and procedures to minimize differences in experimenter-subject interactions; using blinded observers or confederates as assistants, further distancing the experimenter from the subjects; and separating the roles of investigator and experimenter.
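The double-blind procedure described above can be sketched in code. The following is an illustrative example only (the function name and coded group labels are hypothetical, not drawn from any cited study): subjects are randomly assigned to groups identified only by opaque codes, and the key mapping codes to "control" and "treatment" is held apart, so that neither subjects nor experimenters can tell which group is which until the study is unblinded.

```python
import random

def double_blind_assignment(subject_ids, seed=None):
    """Randomly assign subjects to two coded groups for a double-blind study.

    Returns (roster, key): the roster maps each subject to an opaque code
    ('A' or 'B'); the key maps codes to 'control'/'treatment' and would be
    held by a third party until the study ends, keeping both subjects and
    experimenters blind to group identity.
    """
    rng = random.Random(seed)

    # Randomize which code stands for the control group, so even the
    # person running this script cannot infer group identity from the code.
    codes = ["A", "B"]
    rng.shuffle(codes)
    key = {codes[0]: "control", codes[1]: "treatment"}

    # Shuffle subjects and split them evenly between the two codes.
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    roster = {s: codes[0] for s in shuffled[:half]}
    roster.update({s: codes[1] for s in shuffled[half:]})
    return roster, key

# Example: assign eight subjects; only the sealed key reveals the groups.
roster, key = double_blind_assignment(range(8), seed=42)
```

In practice the experimenter would receive only the roster, while the key stays with an independent investigator, mirroring the separation of investigator and experimenter roles mentioned above.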

Experimenter's bias and the prevention thereof have implications for areas of research as diverse as social psychology, education, medicine, and politics.

—Céleste Codington-Lacerte
