The Sad Story of Learning Outcomes Assessment

I owe one of my worst moments as a college administrator to the learning outcomes assessment movement. I confess I did not try hard to forgive it, and forgive it I never have. In the early 1990s I was the dean of a College of Arts and Sciences at a public university, with about 20 department chairs who more or less "reported" to me. At that time the learning outcomes assessment movement was on its steady, disciplined march through higher education. Accrediting agencies were increasingly expecting us to demonstrate that we had documented learning outcomes: "Ninety percent of the students must demonstrate adequate knowledge of punctuation through a minimum score of 85 on the XYZ test." We also had to show that we actually assessed student achievement of those expected outcomes and that we acted on the results of those assessments.

To help prepare the college to cope with this new expectation, I invited a national expert in the field to give the chairs and me a presentation on the subject. The presentation was dry, most of the chairs were incurious and unmotivated, and the pale winter sun cast its weak light over the darkening room as the audience dwindled, one lame excuse at a time. At the end I was embarrassed, the speaker demoralized, and most of the chairs as unenlightened as they had been before. My later experiences with learning outcomes assessment were better, but never enough to make me happy.

In the 1980s and '90s learning outcomes assessment was the buzz in American higher education. It differed from grades or professional certification exams for individual students in that it judged the performance of an entire group of students enrolled in a course, a program, or even an entire college. As in the example above, the assessment was based on a predetermined objective with a predetermined level of success. The pressure for learning outcomes assessment came from legislators and government officials seeking more information about just what they were getting for their growing investment in higher education as it kept moving from an institution for the elite to an expectation for the majority of students. Some information, such as graduation rates and post-graduate employment rates, was already available, but rarely this more direct analysis of what students were learning in their programs.

There were reasons for the appeal of this assessment movement. Leaving aside the apparent subjectivity of individual grades and the partisanship of alumni in love with their lost youth in the protective embrace of their alma mater, learning outcomes assessment had the feel, if not the substance, of objectivity (in our first paragraph's example, why not 95% of the students with a score of 90 on that exam?). With its parade of statistics, numerical benchmarks, graphs, and bulleted and sub-bulleted reports, even before PowerPoint presentations spread their cloud of somnolence over stupefied audiences, learning outcomes assessment was indeed the latest thing. It was, however, a poor fit with the culture of higher education. Despite its numerous cheerleaders and heavy-handed enforcers, it remains a poor fit even today, many sad years later.

Erik Gilbert's opinion essay in the August 14, 2015 Chronicle of Higher Education, "Does Assessment Make Colleges Better? Who Knows?" points to the crux of the problem. After nearly 40 years of workshops and seminars, exhortations and threats by accrediting bodies, and the hiring of numerous assessment administrators, prospective students and their parents really do not care whether a college has a good learning outcomes assessment program, or any learning outcomes assessment program at all. Parents and students care about costs, general academic reputation, and post-graduation employability, not learning outcomes assessment. A large percentage of faculty remain indifferent or even hostile to this movement (significantly, no one has ever tried to get precise numbers), and few institutions can point to positive, substantial change arising from the results of these assessments. After nearly 40 years, the movement remains an orphan, institutionalized but unwelcome.

There are many causes of this sorry situation, including a justified faculty suspicion that the movement arises from a distrust of faculty judgment and a distaste for faculty autonomy. More importantly, we have no clear, substantial evidence that this movement amounts to more than a new hill of paper. Those behind it, the federal and state education departments, the regional accrediting bodies, senior system and college administrators, and the recently created phalanx of assessment administrators, have created a paper empire that has never really demonstrated its usefulness. These individuals and their empire definitely need to be assessed, and soon.

I confess that as a college administrator I sometimes prodded and sometimes pushed faculty into participating in this effort. The useful results, and there were some, came from departments where the faculty actually talked to each other about the content and pedagogy of their courses and where those discussions led to changes to improve student learning. I suspect many of these departments would have done this without Big Brother looking over their shoulders. Somehow the emphasis needs to shift from top-down administration to encouraging better communication about curriculum and teaching within departments, and from hiring more assessment administrators to putting whatever few resources are left back into the one area where change is likely to really help students.