Upon its publication, Academically Adrift quickly became one of the must-reads for people who follow debates about American higher education. It seemed to offer a mother lode of material for critics who believe that colleges and universities are running an elaborate and increasingly expensive scam, perpetrated by “faculty distracted by professional interests other than undergraduate instruction” (10) and administrators “likely even more distracted than faculty from a focus on undergraduate instruction” (11). The undergraduates themselves, meanwhile, aren't doing much work and aren't learning much of anything: “With a large sample of more than 2,300 students, we observe no statistically significant gains in critical thinking, complex reasoning, and writing skills for at least 45 percent of the students in our study” (36). These findings came as welcome (and unsurprising) news to the most ardent critics of American higher education, the ones who speak of a “higher ed bubble” like that of the housing industry and look forward to (or call for) a “shakeout” in which the enterprise will be radically reconfigured—hundreds of institutions will close, a few thousand more will be downsized, and the student population will be winnowed to the number of people willing and able to do significant intellectual work. And, of course, Academically Adrift gives powerful ammunition even to less ardent critics, the ones who simply want some measure of “accountability” or “assessment” more reliable than institutional self-reporting or the notoriously unreliable reputational surveys of the U.S. News and World Report.
Some of the response from the allegedly distracted professors and administrators has been predictably dismissive: in the New York Review of Books, for example, Peter Brooks mocked the book's reliance on the Collegiate Learning Assessment as “a kind of Consumer Reports for the head-scratchers.” The CLA is the backbone of Academically Adrift, and that might indeed be troublesome, quite apart from the fact that it was part of the call for “outcomes assessment” produced by the Spellings Commission in 2006: “among the most comprehensive national efforts to measure how much students actually learn at different campuses, the Collegiate Learning Assessment (CLA) promotes a culture of evidence-based assessment in higher education” (quoted at 138). The CLA works like so: it “allows students ninety minutes to respond to a writing prompt that is associated with a set of background documents” (21). One prompt “asks students to generate a memo advising an employer about the desirability of purchasing a type of airplane that has recently crashed” (21), and another asks students to assess a mayoral candidate's policy for crime reduction, a policy based on the belief that it is more effective to fight drug addiction than to put more police on the streets. I confess that I am not familiar with the apparatus of learning assessment, and cannot say whether the CLA measures something worth measuring; Arum and Roksa, for their part, do a fair job of summarizing the positions of critics and proponents of the CLA. I will note only that Arum and Roksa administered the CLA to students after their first two years of college, which obviously leaves open the question whether students might do more to enhance their skills in critical thinking, complex reasoning, and writing once they begin taking more advanced courses in their junior and senior years.
But the debate about the reliability of the CLA is a distraction from what I think is the most remarkable finding in Arum and Roksa's study—namely, that many students are not doing much reading or writing at all, not because they're slackers but because their courses do not require it. It's one thing to take note of student self-reporting that reveals they spend “only 12 hours per week studying” and that “37 percent of students reported spending less than five hours per week preparing for their courses” (69); those numbers can be attributed, if one so desires, to student laziness. But these numbers cannot: “Fifty percent of students in our sample reported that they had not taken a single course during the prior semester that required more than twenty pages of writing, and one-third had not taken one that required even forty pages of reading per week. Combining these two indicators, we found that a quarter of the students in the sample had not taken any courses that required either of these two requirements, and that only 42 percent had experienced both a reading and writing assignment of this character during the prior semester” (71). Arum and Roksa conclude, rather ploddingly, that students who do so little reading and writing will probably not show marked gains on the CLA. But surely these distressing numbers are important “assessment indicators” in and of themselves, regardless of whether anyone ever administers the CLA to anyone.
The numbers get more interesting—and more revealing—when you break them down by disciplines: “Sixty-eight percent of students concentrating in humanities and social sciences reported taking at least one course requiring more than twenty pages of writing during the previous semester, and 88 percent reported taking at least one course requiring more than forty pages of reading per week. Moreover, 64 percent of students concentrating in humanities and social sciences reported both types of requirements, and only 8 percent experienced neither requirement. Students concentrating in science and mathematics, the other fields associated with the traditional liberal arts core, reported relatively low likelihood of taking courses requiring more than twenty pages of writing, or of experiencing both (reading and writing) requirements” (80–81). Whatever else the CLA measures, it seems to identify and reward students who have been asked to read and write and do 'rithmetic: “Science/mathematics majors scored 77 points higher than business majors on the 2007 CLA, while social science/humanities majors scored 69 points higher (after adjusting for the 2005 CLA scores)” (106).
I had the good misfortune to read Academically Adrift the same semester I taught a class in disability studies to twenty-one undergraduates, none of whom were humanities majors. It was a profoundly dispiriting experience. For the first time in my career, I was confronted with students who did not bring their books to class, who could not summarize (let alone criticize) a writer's argument, and who clearly considered reading assignments to be “background” material for their professors' PowerPoint presentations. As a result, I wound up seeing Academically Adrift not only as a confirmation of my sense that such undergraduates take courses that require little or no reading and writing, but also, oddly enough, as an assessment-driven defense of the core disciplines of the liberal arts. For anyone who wants some quantitative measure of “critical thinking” to go along with the rich anecdotal evidence we obtain from our students every year, Academically Adrift is indeed a must-read book.