Although the SAT falls drastically short as a valid measure of college writing, that does not end the matter. Sadly, reliability has won the day, hence the dominance of indirect assessments like multiple-choice exams: in college writing assessment, there has been nothing short of a "continuing, unrelenting march toward reliability at the expense of validity". Indeed, not long after the SAT-W section was developed, the College Board found ways to promote it as a reliable placement tool. In 2005, Dwayne Norris and four colleagues at the American Institutes for Research compared first-year writing grades with a piloted version of the SAT-W administered to 1,572 incoming freshmen at thirteen universities. Published as a College Board Research Report, the paper concludes that "these results are encouraging and suggest that the new SAT-W sections should be a useful addition to the SAT in terms of predicting academic performance during the first year and helpful for making placement decisions into undergraduate English composition courses". Notably, this optimism was repeated in the 2007 College Board summary of the earlier report: "The Norris study also provides evidence for the validity of the SAT-W section". This research report reveals that the College Board's analysis did not attempt to show that these tests measured the right writing skills in a meaningful way. Although it labeled its study as "evidence for the validity," in reality the College Board was researching reliability only: whether the new SAT scores could be justified as reliable tools for making admission and placement decisions, based on the correlations between test scores and academic success.
Clearly such justifications are central to ensuring broad adoption of the new test, and to the College Board's continued prominence and income.
In simple terms, correlations show whether groups of information are similar.10 Even though correlations do not imply causation, they can be valuable for their general predictive quality. So we might learn from a correlation study that students from families with the highest annual incomes are, in general, the most prepared for first-year writing classes, and that students from the poorest families are, in general, the least prepared. Of course, parental income is not a valid measure of student writing ability, and it tells us nothing about any individual student. But if the correlation is close (or high) enough, parental income could be used to reliably place students much of the time.
The positive, linear correlations used in the study by Norris and colleagues are measured on a scale of 0 to 1. Zero represents no correlation at all: perfect randomness. Results between 0 and .5 represent a small to medium correlation; where the correlation is low, one variable is not a very good predictor of another. A correlation between .5 and 1 is usually considered high, and a correlation of 1 represents an exact match between two sets of information.12 Of course, success or failure in a writing class depends on many variables. Even using a perfectly valid measurement of college writing skills, we would still not expect a perfect correlation with final grades. There are too many other variables: sorority parties, Fear Channel marathons, overwhelming workloads, demanding jobs, and family responsibilities. But where the significance of a metric depends solely on its predictive value (that is, its reliability), we should expect and require some pretty substantial (high) correlations.
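For readers who want to see where these numbers come from, the 0-to-1 scale described above is the Pearson correlation coefficient. The sketch below computes it directly; the data points are invented for illustration and do not come from the Norris study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: the covariance of the two lists divided by
    the product of their standard deviations; ranges from -1 to 1."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfectly linear relationship yields r of (approximately) 1.0 ...
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
# ... while a noisy relationship yields the kind of small-to-medium
# value discussed above (again, made-up numbers).
print(pearson_r([1, 2, 3, 4, 5], [3, 1, 4, 1, 5]))
```

On Python 3.10 and later, the standard library's `statistics.correlation` performs the same calculation.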
So what were the actual results of Norris and colleagues' research? Relevant here, the authors measured and reported the correlation between 891 first-year college students' first-year writing grades and their different SAT test scores. The chart in Figure 1 is a simplified version of Table 9 from Norris. The study found only low, weak correlations, between .14 and .24. Even when the researchers "corrected" the results by eliminating some mismatches, the correlations remained weak. In fact, Norris and his colleagues found that high school GPA, a free assessment instrument, was a better predictor of performance in first-year writing than any part of the SAT test, or the whole test taken together.
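To make the weakness of these correlations concrete, one can use the standard shared-variance interpretation: squaring a correlation gives the proportion of variance in one variable accounted for by the other. Applied to the .14-.24 range reported in the study, that works out to only a few percent of the variation in first-year writing grades.

```python
# Squaring a correlation (r -> r^2) gives the share of variance in one
# variable accounted for by the other. The range below is the .14-.24
# reported by Norris and colleagues; the arithmetic shows the SAT-W
# scores account for only about 2% to 6% of grade variance.
for r in (0.14, 0.24):
    print(f"r = {r:.2f}  ->  r^2 = {r * r:.4f}  ({r * r:.1%} of variance)")
```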