# Error

## Sources of error

Error is the deviation in the value of a physical quantity that arises from the process of measurement or **approximation**. Another term for error is uncertainty.

Physical quantities such as weight, **volume**, **temperature**, speed, or **time** must all be measured by an instrument of one sort or another. No matter how accurate the measuring tool—be it an **atomic clock** that determines time based on atomic oscillation or a **laser** interferometer that measures **distance** to a fraction of a wavelength of **light**—some finite amount of uncertainty is involved in the measurement. Thus, a measured quantity is only as accurate as the error involved in the measuring process. In other words, the error, or uncertainty, of a measurement is as important as the measurement itself.

As an example, imagine trying to measure the volume of **water** in a bathtub. Using a gallon bucket as a measuring tool, it would only be possible to measure the volume accurately to the nearest full bucket, or gallon. Any fractional gallon of water remaining would be added as an estimated volume. Thus, the value given for the volume would have a potential error or uncertainty of something less than a bucket.

Now suppose the bucket were scribed with lines dividing it into quarters. Given the resolving power of the human **eye**, it is possible to make a good guess of the measurement to the nearest quarter gallon, but the guess could be affected by factors such as viewing **angle**, **accuracy** of the scribing, tilts in the surface holding the bucket, etc. Thus, a measurement that appeared to be 6.5 gal (24.6 l) could be in error by as much as one quarter of a gallon, and might actually be closer to 6.25 gal (23.6 l) or 6.75 gal (25.5 l). To express this uncertainty in the measurement process, one would write the volume as 6.5 gallons +/-0.25 gallons.

As the resolution of the measurement increases, the accuracy increases and the error decreases. For example, if the measurement were performed again using a cup as the unit of measure, the resultant volume would be more accurate because the fractional unit of water remaining—less than a cup—would be a smaller volume than the fractional gallon. If a teaspoon were used as a measuring unit, the volume measurement would be even more accurate, and so on.
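The relationship between the size of the measuring unit and the resulting uncertainty can be sketched in a few lines of Python (the language and the table of unit sizes are illustrative choices, not from the source; unit sizes use US customary measures—16 cups or 768 teaspoons per gallon). Measuring to the nearest whole unit means the reading can be off by at most half a unit either way:

```python
# Size of each measuring unit, expressed in gallons (US customary).
UNITS_IN_GALLONS = {
    "gallon": 1.0,
    "quarter gallon": 0.25,
    "cup": 1.0 / 16,
    "teaspoon": 1.0 / 768,
}

def uncertainty(unit: str) -> float:
    """Worst-case error when rounding to the nearest whole unit:
    half the unit's size, in gallons."""
    return UNITS_IN_GALLONS[unit] / 2

for unit in UNITS_IN_GALLONS:
    print(f"measuring by the {unit}: +/- {uncertainty(unit):.6f} gal")
```

Running the loop shows the uncertainty shrinking from half a gallon down to a small fraction of a gallon as the unit gets finer, which is exactly the progression the bucket-to-teaspoon example describes.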

As the example above shows, error is expressed in terms of the difference between the true value of a quantity and its approximation. A positive error is one in which the observed value is larger than the true value; in a **negative** error, the observed value is smaller. Error is most often given in terms of positive and negative error. For example, the volume of water in the bathtub could be given as 6 gallons +/-0.5 gallon, or 96 cups +/-0.5 cup, or 4,608 teaspoons +/-0.5 teaspoon. Again, as the uncertainty of the measurement decreases, the value becomes more accurate.
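The sign convention described above—positive when the observation overshoots the true value, negative when it falls short—amounts to a simple subtraction. A minimal sketch (the function name is an illustrative choice, not from the source):

```python
def signed_error(observed: float, true_value: float) -> float:
    """Positive when the observed value exceeds the true value,
    negative when the observed value falls short of it."""
    return observed - true_value

# An observation of 6.5 gal against a true volume of 6.25 gal
# overshoots; an observation of 6.0 gal falls short.
print(signed_error(6.5, 6.25))   # 0.25  (positive error)
print(signed_error(6.0, 6.25))   # -0.25 (negative error)
```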

An error can also be expressed as a **ratio** of the error of the measurement to the true value of the measurement. If the approximation were 25 and the true value were 20, the relative error would be 5/20, or 0.25. The relative error can also be expressed as a **percent**. In this case, the percent error is 25%.

Measurement error can be generated by many sources. In the bathtub example, error could be introduced by poor procedure such as not completely filling the bucket or measuring it on a tilted surface. Error could also be introduced by environmental factors such as **evaporation** of the water during the measurement process. The most common and most critical source of error lies within the measurement tool itself, however. Errors would be introduced if the bucket were not manufactured to hold a full gallon, if the lines indicating quarter gallons were incorrectly scribed, or if the bucket incurred a dent that decreased the amount of water it could hold to less than a gallon.

In electronic measurement equipment, various electromagnetic interactions can create electronic **interference**, or noise. Any measurement with a value below that of the electronic noise is invalid, because it is not possible to determine how much of the measured quantity is real, and how much is generated by instrument noise. The noise level determines the uncertainty of the measurement. Engineers will thus speak of the noise floor of an instrument, and will talk about measurements as being below the noise floor, or "in the noise."

Measurement and measurement error are so important that considerable effort is devoted to ensure the accuracy of instruments by a process known as **calibration**. Instruments are checked against a known, precision standard, and adjusted to be as accurate as possible. Even gas pumps and supermarket scales are checked periodically to ensure that they measure to within a predetermined error.

Nearly every country has established a government agency responsible for maintaining accurate measurement standards. In the United States, that agency is known as the National Institute of Standards and Technology (NIST). NIST provides measurement standards, calibration standards, and calibration services for a wide array of needs such as time, distance, volume, temperature, luminance, speed, etc. Instruments are referred to as "NIST traceable" if their accuracy, and measurement error, can be confirmed by one of the precision instruments at NIST.

Kristin Lewotsky
