Validity refers to how well a test measures what it is intended to measure, along with the degree to which test results support the inferences they are intended to support. Thus a test of achievement motivation should assess what the researcher defines as achievement motivation. In addition, results from the test should, ideally, support the psychologist's inferences about, for example, an individual's level of achievement in school, if that is what the test's constructors intended. Most psychometric research on tests focuses on their validity. Because psychologists use tests to make different types of inferences, there are a number of different types of validity. These include content validity, criterion-related validity, and construct validity.
Content validity refers to how well a test covers the characteristic(s) it is intended to measure. Thus test items are assessed to see whether they: (a) tap into the characteristic(s) being measured; (b) are comprehensive in covering all relevant aspects; and (c) are balanced in their coverage of the characteristic(s) being measured. Content validity is usually assessed by careful examination of individual test items, and of their relation to the whole test, by experts in the characteristic(s) being assessed.
Content validity is a particularly important issue in tests of skills. Test items should tap into all of the relevant components of a skill in a balanced manner, and the number of items for each component should be proportional to that component's share of the overall ability. Thus, for example, if addition is thought to make up a larger portion of mathematical ability than division, a test of mathematical ability should contain more items assessing addition than division.
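The proportionality idea can be illustrated with a toy test blueprint. The component weights below are invented for illustration, not empirical values:

```python
# Hypothetical blueprint: each component's assumed share of the
# overall mathematical ability being measured (weights sum to 1).
blueprint = {
    "addition": 0.40,
    "subtraction": 0.25,
    "multiplication": 0.20,
    "division": 0.15,
}
total_items = 60

# Allocate test items in proportion to each component's weight.
items = {skill: round(weight * total_items) for skill, weight in blueprint.items()}
# Addition, the largest component, receives the most items (24 of 60);
# division, the smallest, receives the fewest (9 of 60).
```

Real test blueprints use the same logic, though the weights would come from a theory of the ability or from expert judgment rather than being stipulated.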
Criterion-related validity deals with the extent to which test scores can predict a certain behavior or outcome, referred to as the criterion. Concurrent and predictive validity are two types of criterion-related validity. Predictive validity looks at how well scores on a test predict certain behaviors, such as achievement, or scores on other tests. For instance, to the extent that scholastic aptitude tests predict success in future education, they have high predictive validity. Concurrent validity is essentially the same as predictive validity except that the criterion data are collected at about the same time as the predictor-test scores. The correlation between test scores and the researcher's designated criterion variable indicates the degree of criterion-related validity. This correlation is called the validity coefficient.
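The validity coefficient described above is simply the Pearson correlation between predictor-test scores and criterion scores. A minimal sketch in Python, using hypothetical scholastic-aptitude scores and later grade-point averages (all data invented for illustration):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient; this is the validity coefficient
    when xs are predictor-test scores and ys are criterion scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: aptitude-test scores and later first-year GPA
# for the same eight students.
aptitude = [52, 61, 47, 70, 58, 65, 55, 68]
gpa      = [2.8, 3.2, 2.5, 3.7, 3.0, 3.4, 2.9, 3.6]

# For this invented sample the coefficient is close to 1,
# indicating high predictive validity.
validity_coefficient = pearson_r(aptitude, gpa)
```

In practice the predictor scores are collected first and the criterion data later (predictive validity), or both at about the same time (concurrent validity); the computation is the same either way.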
Construct validity deals with how well a test assesses the characteristic(s) it is intended to assess. Thus, for example, with a test intended to assess an individual's sense of humor, one would first ask "What are the qualities or constructs that make up a sense of humor?" and then, "Do the test items seem to tap those qualities or constructs?" Issues of construct validity are central to any test's worth and utility, and they usually play a large part in the early stages of test construction and initial item writing. There is no single method for assessing a test's construct validity; it is assessed using many methods and the gradual accumulation of data from various studies. In fact, estimates of construct validity change constantly as additional information accumulates about how the test and its underlying construct relate to other variables and constructs.
In assessing construct validity, researchers often look at a test's discriminant validity, which refers to the degree to which its scores do not correlate with measures of constructs from which it should theoretically be distinct. For example, scores on a test designed to assess artistic ability would not be expected to correlate highly with scores on a test of athletic ability. A test's convergent validity refers to the degree to which its scores do correlate with measures they theoretically should correlate with, such as other tests of the same construct. Many different types of studies can be done to assess an instrument's construct validity.