The Learning Curve

The examination of student learning outcomes provides business schools with the data to assess the quality of their programs and offers educators the opportunity to truly practice what they teach.

I was once a professor of political science, and I’ve spent hours grading papers. After reading a paper, I would sometimes recognize that the student had a good argument but that he had gotten many of his facts wrong. I’d write my feedback on the paper and sign it off with a B-minus. Then, I would pick up the next paper, read it over, and give it a B-minus for a completely different set of reasons. But what data went into my grade book? Two B-minuses. After collecting and recording a wealth of data on how the students had responded to the assignment, I threw it all away when I handed the papers back. I had no record of where, in general, learning was going right and where it wasn’t—or what I could do about it.

The new emphasis on the assessment of learning outcomes in higher education is meant to address this condition. Assessment practices offer educators and their institutions the opportunity to gather systematic evidence about their students’ learning and evaluate the effectiveness of their educational offerings. In fact, many accrediting organizations are now asking institutions to provide evidence of student learning and achievement.

Even so, I have found that many institutions—business schools included—are still at a fairly early stage of grappling with assessment and assurance of learning. Although a majority of schools have begun to specify learning goals for their students and gather information on student performance, many share the misconception that gathering the data is the most important part of the process. They often neglect to use that data to improve student learning and experience. They fail to ask, “What can we learn from these results to make our courses better?”

Setting up a workable assessment approach is no easy task, as it often involves exchanging long-held attitudes and habits for new approaches to teaching. But once the need for learning assessment is recognized, the next step is actually using that information for improvement. By creating what I call a “community of practice,” a school can make continuous assessment and ongoing improvement an integral and seamless part of the educational process.

Making the Grade

Creating a community of practice based on the consistent evaluation of student work is somewhat new to American higher education institutions. In Europe, on the other hand, educators are much more accustomed to objective systems of assessment, because many European schools have external examiner systems; in addition to the professor, an external reviewer also reads examples of student exams and projects. As a result, European educators have developed consistent, consensual judgments based on standards that are implicit in their communities of practice.

In the American classroom, there has long been an atmosphere of exclusion, where an educator’s classroom is his or her own private domain—others are rarely invited inside. As a result, we’ve created an environment that makes it extremely difficult to align standards. It is in this area that learning assessment initiatives are trying to make headway.

In the early days of learning assessment, it was most often seen as something external to the learning process. Educators tended to approach it in one of two ways: They either added a test or survey that was administered outside the standard curriculum, or they used an existing standardized examination to measure students’ knowledge of the material. Although these evaluation-based methods can be valid, they bring with them several problems.

First, faculty often dismiss such methods as disconnected from what they are doing in the classroom. After all, faculty typically have little to no input into a standardized exam, and its results usually have no bearing on students’ grades in the course. Second, because these “extra” tests are given outside the standard curriculum, students often don’t take them very seriously. Finally, using such exams often adds extra expense to a school’s budget.

The main problem with traditional faculty-generated assignments and grades, on the other hand, is exactly what my initial example illustrated. Faculty members mark students individually, but gather no information about what aspects of course content a class as a whole has mastered and what aspects a class has generally failed to grasp. Not only that, but the faculty also grade subjectively and idiosyncratically. They each use their own standards and create their own assignments.

The alternative to “add-on” assessment methods and inconsistently awarded grades involves the use of “course-embedded assessments”—questions and assignments worked into each faculty member’s existing syllabus. Not only do embedded assessments involve faculty on an integral level in the process, they provide a school with a written record of course performance over time.

Using an embedded approach, the faculty come together and decide what the learning objectives for a particular course should be. Then they study the assignments in those courses to identify or include questions that systematically measure student mastery of those learning objectives. Finally, they establish consistent ways of evaluating student responses to those questions—for example, a scoring guide that details the attributes of a good answer, perhaps on a scale of one to five.
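
To make the mechanics concrete, here is a minimal sketch of how rubric scores from embedded questions might be tallied so that class-level patterns, not just individual grades, are preserved. The objectives, scores, and threshold below are hypothetical, invented purely for illustration.

```python
# Minimal sketch: tallying rubric scores from course-embedded questions.
# All objectives, scores, and the threshold are hypothetical.
from statistics import mean

# Each record pairs a learning objective with a rubric score on a
# one-to-five scale, gathered from questions embedded in regular assignments.
scores = [
    ("constructs a sound argument", 4),
    ("constructs a sound argument", 3),
    ("supports claims with accurate evidence", 2),
    ("supports claims with accurate evidence", 1),
    ("supports claims with accurate evidence", 2),
    ("writes clearly", 4),
]

# Group scores by objective so the class-level pattern survives,
# instead of being thrown away once individual grades are recorded.
by_objective: dict[str, list[int]] = {}
for objective, score in scores:
    by_objective.setdefault(objective, []).append(score)

# Flag objectives whose class average falls below a chosen threshold.
THRESHOLD = 3.0
for objective, marks in sorted(by_objective.items()):
    avg = mean(marks)
    flag = "  <-- needs attention" if avg < THRESHOLD else ""
    print(f"{objective}: mean {avg:.1f} (n={len(marks)}){flag}")
```

Run over a whole class, a tally like this provides the record my grade book never kept: where, in general, learning is going right and where it isn’t.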

Course-embedded assessment practices serve an institution in several ways. They offer a systematic way to measure its success in teaching students that is related to what its faculty are already doing. They do so in a way that is integrated into the coursework and assignments students already must complete, ensuring that students will do their best. And, best of all, they add very little expense to a school’s budget.

Progress, Not Penalties

Business faculty, perhaps more than other educators, may be wary of assessment. After all, they have seen the misuse of performance indicators in the corporate world. They know that employees who are managed by the numbers often suffer the wrath of their superiors. As a result, faculty may be worried that assessment results will be used, not to improve curriculum, but to winnow out the “bad seed” among them. Wrestling with such perceptions frequently generates what I like to call a “paranoia shift” among faculty when it comes to learning assessment practices.

Paranoia No. 1: Faculty worry about their jobs. Institutions that view learning assessment as a means to pinpoint—and even punish—underperforming faculty will find themselves in an Enron-like situation very quickly. When performance is measured to reward the good and punish the bad, employees not only become resentful and anxious, they also learn very quickly to tell administrators what they want to hear. Allowing learning assessment to become punitive defeats its very purpose—which is to help all educators improve their game.

The main challenge in the early stages of implementing an assessment program is thus to reassure faculty that such retaliation for weak performance is not going to happen. Administrators must assure faculty that the data collected through assessment will not be used for promotion and tenure decisions, but rather will serve as tools for the collective improvement of the business school’s educational offerings.

Paranoia No. 2: Even after they are successfully reassured about Paranoia No. 1, faculty members start to worry about their time investment. No faculty member wants to waste time on something that will never be used. Therefore, the next challenge in putting assessment practices in place is to make clear that the extra effort faculty members invest will result in positive change. Learning assessment must eventually be consequential, leading to changes in curriculum and approaches when course objectives are not being met.

Mature assessment systems do have built-in levels of accountability—after all, if a professor shows poor performance for a long period of time, he should not be allowed to continue. But, in most cases, assessment provides that educator the opportunity to document his teaching process, observe his strengths and weaknesses, and make the necessary adjustments to improve. Accountability means that he can learn from the process without fearing short-term punishment.

Once these two aspects of the “paranoia shift” are addressed, faculty will often participate in assessment with enthusiasm. The vast majority want to improve and often realize they need to change some aspect of their teaching. What they require are the tools to discover what that “something” should be. As an institution moves into a culture of assessment, educators begin to feel invested in the process without feeling personally at risk. More important, they are excited about pinpointing specific areas to focus on to improve their teaching skills.

Ask the Right Questions

Building an assessment culture is less about engaging in “scientific” measurement and more about determining the most important questions to ask. If educators ask too many questions, they’ll be overwhelmed; too few, and they won’t have an appropriate basis for assessment. Therefore, it’s up to educators to establish a set of core questions—five or ten, perhaps—that reflects their perceptions of where problems may lie.

Institutions can also make the mistake of being overly precise in their measurement, looking only for “statistically significant differences.” That level of precision is often more sophisticated than the kind of information a school most needs to discover. Assessment results do not need to be reported to the hundredth decimal place. Rather, they need to have, as one of my professors used to say, “interocular significance.” That is, they need to produce data that hits educators right between the eyes.

Most important, embedded assessments can involve students in the process in myriad ways. Assessments can be contained not only in quizzes and exams, but also in highly interactive, project-based assignments. For instance, Miami University of Ohio runs an interdisciplinary program in which students from the business school, school of graphic design, and school of communications work in teams to respond to RFPs from real companies to design a marketing campaign. What is the embedded assessment in such an assignment? It’s whether or not the company buys the campaign. Although companies may not choose to purchase the campaign each time, if the work students produce is consistently rejected, the school will know it needs to tweak its offerings to improve its students’ skills.

Likewise, King’s College in Wilkes-Barre, Pennsylvania, participates in an institution-wide general education assessment. The faculty from each discipline identify a set of common cross-disciplinary core competencies that all undergraduate students should acquire by the end of their four years at the college. Like the faculty of other programs at the college, business school faculty have mapped these abilities onto specific assignments that occur periodically throughout a student’s academic career. They then take samples of students’ work to determine whether those competencies are coming through.
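
As a rough illustration of that mapping-and-sampling step, the sketch below pairs each competency with the assignments where it is assessed and draws a small random sample of student work for review. The competency names, courses, and sample size are all hypothetical, not drawn from King’s College itself.

```python
# Rough sketch of a competency-to-assignment map with work sampling.
# Competency names, courses, and the sample size are hypothetical.
import random

# Map each cross-disciplinary competency to the assignments where it is
# assessed over a student's four years.
competency_map = {
    "written communication": ["ENG 101 essay", "BUS 301 case write-up"],
    "quantitative reasoning": ["MATH 110 exam Q4", "FIN 210 project"],
    "teamwork": ["BUS 250 group project", "CAP 490 capstone"],
}

# Pretend pool of submitted student work, keyed by assignment.
submissions = {
    assignment: [f"student_{i}" for i in range(1, 31)]
    for assignments in competency_map.values()
    for assignment in assignments
}

SAMPLE_SIZE = 5   # pieces of work reviewed per assignment
random.seed(42)   # reproducible sampling for this sketch

for competency, assignments in competency_map.items():
    print(f"\n{competency}:")
    for assignment in assignments:
        sample = random.sample(submissions[assignment], SAMPLE_SIZE)
        print(f"  {assignment}: review {sample}")
```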

Times Are Quickly Changing

Just 20 years ago, everybody knew what a college or university was supposed to look like—students attended class, sat in lecture halls, listened to professors, turned in assignments, and took exams. Today, however, that standard image is fading quickly, as diverse incarnations of the educational process are gaining ground. New course delivery mechanisms that use experiential learning, team projects, and distance learning technology simply don’t fit the mold. That means that our community perception of what it means to be a university is breaking down. Therefore, it has become increasingly important that educators develop a systematic and visible approach, not to teaching itself, but to evaluating whether that teaching is yielding the right learning outcomes.

The prospect of starting an integrated program of learning assessment can seem overwhelming. It’s true that determining objectives for each course in each discipline, from core courses to electives, is challenging. Likewise, the prospect of charting those objectives from year to year is a daunting task. But that shouldn’t be an obstacle to getting started. Institutions that have built comprehensive, highly integrated, well-documented systems of assessment have been developing their practices for years. They started with small steps, perhaps with only one course, and worked their way up to the whole.

Elementary and secondary schools have already been barraged by externally mandated assessment because public policymakers saw their performance as poor. This same outcry is now beginning to be directed toward higher education institutions. But in our case it is less because of overtly poor performance than because colleges and universities are developing a reputation for secrecy about what they do, how they teach, and what students actually learn. Higher education as an enterprise could gain more public credibility and support if we demonstrated that we investigate our own effectiveness and respond promptly when the results aren’t good.

I have been connected with the debate surrounding assessment of learning outcomes since 1985, when it first came into national prominence. Today, there’s much more ongoing debate on the subject, as it becomes clear that obtaining a high-quality college education is a public policy issue. Doubts about the quality of what our higher education institutions produce are increasing, especially among elite groups such as top CEOs.

At my Center, when we look at the need for assessment, we often begin by interviewing business leaders. We ask, “What should higher education be delivering?” Often, business leaders tell us that although college graduates know the details of their disciplines, they lack good communication skills, they’re not good at teamwork, and they lack the leadership skills that businesses require today.

I was recently part of a project that was investigating the feasibility of a National Assessment of Educational Progress for higher education—something already used in primary and secondary education. A number of CEOs were part of a roundtable discussion on the topic. During this discussion, they began to compare notes: They all administer examinations to potential hires to test their knowledge and skills, so why couldn’t they just coordinate among the Fortune 100 companies to publish their collective results? They could publish the names of the colleges whose graduates do well. The annual U.S. News & World Report rankings of colleges would pale by comparison!

That was quite an idea, and one that all colleges and universities should heed. The federal government is already whispering about the possibility of standardized assessment in higher education; business is already debating the need for exit exams. So, if higher education institutions fail to offer a viable alternative by voluntarily integrating learning assessments into their programs, the danger is that someone else will do it for them.

Peter Ewell is vice president of the National Center for Higher Education Management Systems in Boulder, Colorado.