
Infrared Astronomy

Utilizing Infrared Astronomy



Special infrared detectors must be used to see the infrared universe. These detectors can be mounted on traditional optical telescopes, either on the ground or above the atmosphere. The first infrared detector was a thermometer used by William Herschel in 1800. He passed sunlight through a prism and placed the thermometer just beyond the red light, detecting the heat carried by the infrared radiation. To detect the heat from distant stars and galaxies, modern infrared detectors must be considerably more sensitive. The modern era of infrared astronomy began with the advent of such sensitive detectors in the 1960s.



Modern infrared detectors use exotic combinations of semiconductors that are cooled to liquid nitrogen or liquid helium temperatures. Photovoltaic detectors utilize the photoelectric effect, the same principle as the solar cell in a solar-powered calculator. Light striking certain materials kicks electrons free of their atoms, producing an electric current as the electrons move. Because infrared light carries less energy than ordinary optical light, photovoltaic infrared detectors must be made from materials that require little energy to free an electron from its atom.
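The energy argument above can be made concrete with the photon-energy relation E = hc/λ. The wavelengths below (500 nm for visible light, 10 μm for the mid-infrared) are illustrative choices, not values from the article:

```python
# Why infrared detectors need low-energy-gap materials: a single
# infrared photon carries far less energy than an optical photon.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of one photon, E = h*c/lambda, in electronvolts."""
    return H * C / wavelength_m / EV

optical = photon_energy_ev(500e-9)   # green light, 500 nm -> ~2.5 eV
infrared = photon_energy_ev(10e-6)   # mid-infrared, 10 um -> ~0.12 eV

print(f"optical:  {optical:.2f} eV")
print(f"infrared: {infrared:.3f} eV")
```

A material whose electrons need ~2 eV to break free responds to visible light but is blind to 10 μm photons, which is why infrared detectors demand semiconductors with much smaller energy gaps.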

Photoresistive thermal detectors work by measuring minute changes in the electrical resistance of the detector. The electrical resistance of a material generally depends on its temperature. Infrared radiation striking a photoresistive detector raises its temperature and therefore changes its electrical resistance by a minute amount. A mixture of gallium and germanium is often used. These detectors must be cooled with liquid helium to achieve the extreme sensitivity required by infrared astronomers.
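The principle can be sketched numerically. The exponential law R(T) = R0·exp(Tg/T) is a common idealization for semiconductor bolometers; the parameter values and the tiny temperature rise below are illustrative assumptions, not properties of a real gallium-germanium detector:

```python
import math

# Photoresistive (bolometer) principle: a minute temperature rise
# produces a measurable change in electrical resistance.
R0 = 100.0  # ohms, resistance scale factor (assumed)
TG = 50.0   # kelvin, material-dependent constant (assumed)

def resistance(temperature_k):
    """Idealized semiconductor resistance: falls as temperature rises."""
    return R0 * math.exp(TG / temperature_k)

base = resistance(4.2)            # at liquid-helium temperature
warmed = resistance(4.2 + 1e-4)   # after absorbing a trace of infrared

print(f"resistance change: {base - warmed:.1f} ohms")
```

Note that cooling to liquid-helium temperatures serves two purposes: it quiets thermal noise, and it steepens the resistance-versus-temperature curve so that even a tiny absorbed signal is detectable.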

Early infrared detectors featured a single channel. Accordingly, they could measure the brightness of a single region of the sky seen by the detector, but could not produce pictures. Early infrared images or maps were quite tedious to make: an image was created by measuring the brightness of a single region of the sky, moving the telescope a bit, measuring the brightness of a second region, and so on.
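This raster-scanning procedure can be sketched as a simple loop. Here `measure_brightness` is a hypothetical stand-in for the single-channel detector readout at one telescope pointing:

```python
# Early single-channel infrared mapping: step across a grid of sky
# positions, recording one patch of sky per telescope pointing.
def raster_map(measure_brightness, n_rows, n_cols):
    """Build an image one pointing at a time, row by row."""
    image = []
    for row in range(n_rows):
        image.append([measure_brightness(row, col) for col in range(n_cols)])
    return image

# Example: a fake sky with one bright source at grid position (1, 2).
fake_sky = lambda r, c: 10.0 if (r, c) == (1, 2) else 1.0
print(raster_map(fake_sky, 3, 4))
```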

In the 1980s infrared arrays revolutionized infrared imaging. Arrays are essentially two dimensional grids of very small, closely spaced individual detectors, or pixels. Infrared arrays as large as 256 × 256 pixels are now available, allowing astronomers to create infrared images in a reasonable amount of time.
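A back-of-the-envelope comparison shows why arrays were revolutionary. Assuming, for illustration, one second of integration per pointing and ignoring telescope-motion overhead:

```python
# Time to map a 256 x 256-pixel field: single-channel detector
# (one pixel per pointing) versus a 256 x 256 array (all at once).
SIDE = 256
SECONDS_PER_POINTING = 1.0  # assumed integration time

single_channel = SIDE * SIDE * SECONDS_PER_POINTING  # 65,536 pointings
array = 1 * SECONDS_PER_POINTING                     # one pointing

print(f"single channel: {single_channel / 3600:.1f} hours")
print(f"array: {array:.0f} second")
```

Under these assumptions the single-channel map takes over 18 hours of pure integration, while the array captures the same field in a single exposure.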

In addition to images, astronomers can measure the brightness of an infrared source at various infrared wavelengths. Detectors record a range of wavelengths, so a filter must be used to select a specific wavelength. This measurement of brightness is called photometry. Both optical and infrared astronomers also break light up into its component colors, its spectrum. In its simplest form, this can be done by passing light through a prism. This process, spectroscopy, is useful for finding the compositions, motions, physical conditions, and many other properties of stars and other celestial objects. When light is polarized, the electromagnetic oscillations line up. Infrared polarimetry, the measurement of the amount of polarization, is useful in deducing the optical properties of the dust grains in dusty infrared sources.
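For the polarimetry mentioned above, the degree of linear polarization is conventionally computed from the Stokes parameters I, Q, and U as p = sqrt(Q² + U²) / I. The sample values below are made up for illustration:

```python
import math

# Degree of linear polarization from the Stokes parameters:
#   p = sqrt(Q^2 + U^2) / I
def polarization_degree(i, q, u):
    """Fraction of the light that is linearly polarized."""
    return math.sqrt(q * q + u * u) / i

p = polarization_degree(i=100.0, q=3.0, u=4.0)
print(f"polarization: {p:.1%}")  # 5.0% for these sample numbers
```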

