A second stream of criticisms has argued that the quality of job analysis information is suspect and that the information is inaccurate due to a variety of biases and cognitive limitations, such as flaws in the judgments made by subject matter experts (see Morgeson and Campion, 1997, for a review). The term accuracy demands a consensually agreed-upon gold standard, which a fuzzy, socially constructed concept like the job does not permit. Sanchez and Levine (2000) maintained that this line of thinking is also unlikely to advance the practice of work analysis because it conceives of the analysis of work as a measurement instrument intended to capture a questionable “true” score. Instead, they viewed work analysis as a set of tools intended to facilitate inferences regarding the important work activities and work specifications that should form the basis for selection, training, compensation, and other human resource management practices. On the one hand, we fully support studies aimed at improving the quality (reliability and validity) of work analysis data, as opposed to so-called accuracy, such as the meta-analysis of the reliability of job analysis data by Dierdorff and Wilson (2003). On the other hand, the evaluation of work analysis data should scrutinize not only the psychometric quality of the data but also, more importantly, the consequences of work analysis for human resource management programs and practices. This argument derives incremental strength from the notion that organizations in today’s global and hypercompetitive economy must justify costs and demonstrate the value added by programs such as work analysis. Only research directed toward consequential outcomes can provide the crucial information needed to meet these objectives.