In the 20 years I’ve been involved in learning outcomes assessment, I’ve seen institutions of higher education make great strides in measuring student learning. As far back as ten years ago, more than 95 percent of surveyed administrators said they had implemented assessment practices or were planning to do so soon. Clearly, even more faculty today understand what assessment is and how it can benefit both their students and themselves.
However, we still have a long way to go. Because there is no single definition of assessment, it often can mean something different from one campus to the next. At one university, every department might be assessing student learning; at another, perhaps only one unit out of 15 is doing so. While the number of colleges engaged in assessment is rising, we don't have good figures to show us how many educators are truly engaged in measuring student learning.
Although that’s an important concern, I am even more focused on two other issues related to how assessment is being handled today. One is ongoing—the fact that many faculty members still resist student assessment because they don’t understand its value or simply don’t know how to devise effective tests. And one is relatively new and rather alarming—the push by some policy makers to require that faculty use standardized tests to assess student learning. The former threatens to undermine learning assessment before it has the chance to take root in many business schools, and the latter threatens to unravel the efforts of schools that already have successful assessment models in place.
The Perils of Testing
Recently I was invited to speak at a meeting called by the Board of Trustees of the State University of New York (SUNY), where some stakeholders were pressing for use of a standardized test across all 57 of their campuses that serve undergraduates. The SUNY administration countered by bringing in experts, college presidents, and faculty leaders to discuss the ways in which assessment data are being used to improve individual programs and services rather than to compare programs through standardized tests.
Standardized testing is common at the K–12 educational level, where it might be more appropriate to test all students in the same state with an instrument that gauges minimum competence. But it’s far less realistic to attempt to test students at research universities and compare them with students at community colleges and other four-year institutions. There are too many variations across campuses.
For instance, if a university includes an architecture school, and some of the questions on the standardized test require students to figure square footage for a room, the architecture students in all likelihood will be much better at answering those questions than students in humanities will be. Therefore, the campus scores for that university may well be higher simply because some of the students have an extraordinary advantage on a portion of the test. It would be unfair to compare these scores on a standardized test to scores at universities without an architecture school.
An expression I like is one used by Peter Ewell, a recognized assessment expert and senior vice president at the National Center for Higher Education Management Systems. He says, “You don’t make a pig fatter by weighing it.” And you don’t make students smarter by testing them. It’s tremendously difficult to motivate students to take a standardized test that is not connected with what they believe they’ve learned in class. In cases where statewide testing is required, some students who don’t see the importance of the test simply won’t do their best work. Their indifference will depress test scores and further obscure the potential benefits of learning outcomes assessment.
I’m afraid higher education may be facing more and more pressure to use standardized tests. Often that pressure comes from governors and legislators, who do represent the people and are truly well-meaning. We must convince them that standardized tests are not an appropriate way of assessing learning outcomes for college students. We must help them understand that raising issues of comparability is chilling to educators in a country where the great advantage of the higher education system is the diversity of missions among its institutions.
While stakeholders such as state legislators represent one group that could benefit from gaining a better understanding of assessment, faculty represent another extremely critical group. Some faculty resist the need for outcomes assessment for a variety of reasons: They don’t see the need for it, because they believe their students are already being tested; they fear it will affect their jobs; they don’t know how to do it properly; or they don’t understand what it is.
Simply put, learning outcomes assessment requires a professor to define learning goals and objectives, and then devise a way to tell whether or not students have mastered those objectives. Perhaps students write papers or give oral presentations to show what they’ve learned. If they haven’t achieved the explicit goals, the professor may make changes in the way the course is taught until tests and other measures show that students are learning.
Just by defining their learning objectives and deciding where and when these will be covered, faculty improve their curriculum because they will ensure that essential skills are introduced and practiced in a variety of settings. If faculty share their goals with students, students will understand why professors take certain approaches or cover specific issues. They’ll understand they’ve been given a particular assignment because it reinforces their learning of important concepts.
Goals and objectives can be confined to a single class or broadened to a curriculum or major. Tests, projects, and other assignments are first graded, and the results are shared with individual students to help them improve their own performances. In what I call “taking a second look,” faculty look again at student work—this time collectively, across students in a given course, across sections of the same course, or across courses in a curriculum.
To gather data in this “second look,” often at the end of a class or a program of study, faculty first ask students to demonstrate their learning. This might occur in a senior capstone course where students will be expected to show they have mastered the tools of research appropriate to their discipline. They might be asked to search for information on the Web, write a paper, and make an oral presentation. They should be able to do so successfully if the knowledge and skills they need have been embedded in various courses in the major. Students need multiple opportunities throughout the curriculum to practice the skills faculty consider most important.
If this second look at student work shows that a certain percentage of students is failing to grasp a particular concept, faculty can determine that they need to teach that concept in a different way or give students more practice with it. For instance, it might turn out that students in a senior capstone course aren’t very good at oral presentations. The department reviews the course structure and realizes that students haven’t had any opportunity to practice their speaking skills since their freshman speech course. Professors may decide then to look for ways to provide additional opportunities throughout the curriculum for students to make oral presentations. This kind of assessment of student achievement at the senior level can improve the entire curriculum.
Schools shouldn’t just rely on testing to discover if students have acquired the proper knowledge and skills. They also should survey enrolled students with a variety of questions such as: Are you getting frequent feedback from your professors? Are you studying more than ten hours a week? Are you increasing your technology skills? Surveys of graduates also can yield helpful data. These indirect measures of student learning can help individual professors and whole departments improve their academic programs and student services such as advising.
Disquiet in the Department
Even though learning assessment can have a positive effect on schools, it is still a process that, as I’ve mentioned, many faculty members resist implementing. There are three key reasons.
• Assessment takes time. Some faculty believe it diverts precious time from what they see as their principal activities, which are teaching, research, and service. However, I firmly believe that, in the long run, assessment will save faculty time. If professors discover the best way to teach a concept, students will grasp what they’re teaching more quickly. Faculty won’t have to go over and over the same material, and they won’t keep using ineffective teaching methods to try to get their points across.
• Assessment could be used to punish them. Some instructors fear that if assessment shows they aren’t preparing students to achieve at a certain level, they will lose their opportunity to teach or other privileges. Administrators must persuade these faculty that assessment is aimed at the common goal of improving the overall curriculum, not punishing individual instructors.
• Assessment is not something they know how to do. This is the single most compelling reason faculty resist assessment. Evaluation is considered the most difficult form of learning, at least according to Bloom’s Taxonomy of the Cognitive Domain. By that model, human beings learn through a series of six progressively more complex steps: knowledge, comprehension, application, analysis, synthesis, and evaluation. Learning outcomes assessment is a very specific kind of evaluation, and it’s not easy to do. In fact, many faculty are not even trained as teachers and certainly are not trained as assessors. Because they’re not experts in the technology of measurement, they don’t know all the fundamentals of building a reliable, valid exam or questionnaire or how to interpret its results.
It’s therefore important that universities provide faculty development experiences to make sure faculty are comfortable performing assessments of student learning. Many already do: Today’s campuses often include departments variously called “teaching centers” or “faculty development offices,” and these usually house experts in assessment who are willing to share their knowledge. If there is no faculty development office, a professor in the psychology or education department may be willing to hold a workshop to discuss how to develop valid tests.
In addition, some organizations offer conferences and workshops on the topic of assessment. Administrators might consider sending a team of faculty to such conferences, and then encourage those who attended to share their newly developed knowledge with colleagues. Gradually, everyone on campus will become more comfortable with assessment tools.
The Good News
Assessment may be easier for the next generation of professors. Many of today’s graduate students take courses in pedagogy and measurement, and they have more opportunities to teach with supervision by a faculty mentor. As a generation of faculty leaves and is replaced by colleagues who have had more experience in teaching and assessment, attitudes will change.
As senior professors become more familiar with assessment, much of their fear is dissipating. I believe more faculty are pursuing assessment with energy and purpose because they understand the process and have seen its benefits.
That’s good news, because there is now more talk of assessment than ever before. All of the major accrediting bodies—from AACSB International to the regional agencies such as the North Central Association, the Middle States Association of Colleges and Schools, and the New England Association of Schools and Colleges—are requiring their colleges to do some form of learning assessment. In several states, assessment is required in new higher education laws and policies. Assessment is here to stay.
Faculty and administrators should take the time to learn about assessment so that they can implement it in sensible ways. In many cases, it will be up to individual school administrations to put the mechanics in place to help faculty succeed at assessment. Once schools have begun to measure student learning, they should start using the data they collect. Outcomes assessment is simply not worth doing unless it is used to enhance the student learning experience—by improving instruction in a single class, the structure or sequencing of a curriculum, or the process of offering student services that complement coursework.
Trudy W. Banta is professor of higher education and vice chancellor for planning and institutional improvement at Indiana University-Purdue University Indianapolis.