One of the most potent advertising vehicles is word-of-mouth, the praise of satisfied customers. Word-of-mouth advertising works for retail shops, doctors, hair stylists, and hotels—and it’s becoming increasingly important for institutions like business schools, which rely on a continuing influx of new “customers” to keep themselves viable.
For most businesses, the key to generating positive word-of-mouth is satisfying customers with sales or service. For a business school, the key is providing a meaningful education that boosts a student’s career and provides practical knowledge. According to Kate Ferguson, associate vice provost and director of graduate student recruitment services at the State University of New York at Buffalo, “The No. 1 predictor of future recruitment success is the satisfaction of students who matriculate in a graduate program. Admissions and recruitment professionals are keenly aware of the importance of word-of-mouth—students telling their friends about how valuable their experience has been, or alumni speaking about how their training has helped to advance their careers.”
Because it is so important that alumni promote school programs after graduation, schools have begun to focus more tightly on satisfying current students—and on measuring that satisfaction through surveys. These surveys help reveal not only what students want, but whether they feel they’ve received it during their time in school.
From Analysis to Action
Armed with survey figures that measure student satisfaction, administrators and program managers can pursue continuous improvements in their programs—and assure stakeholders that they know what they’re doing and where they’re going.
For instance, students at the Hankamer Business School at Baylor University, Waco, Texas, give and receive feedback about school programs every semester during a session called “Lunch with the Dean.” Explains associate dean Linda Livingstone, “The dean of the business school gives an update on issues related to the graduate programs. Students then can ask questions, express concerns, and provide their reactions to the program.”
On other occasions, the data can help school administrators reallocate critical resources. Norm Blanchard, director of student services at Boston University in Massachusetts, notes, “Survey data confirmed something that my staff had suspected. Students were growing disgruntled with the amount of time they had to wait for advising appointments, especially during crucial times such as before registration. The first thing we did was go to a system of ‘all walk-ins, all the time’ during crunch periods, which allowed us to service many more students. Subsequent surveys confirmed that students were very happy with this change.”
The data also can fuel curriculum change. According to Blanchard at Boston University, “Students have made very specific requests regarding our curriculum. They wanted more courses focusing on entrepreneurship, e-commerce, and technology in general. In large part, this data drove our new course initiatives over the past few years, and the fact that we were able to go to the dean and provost with hard data made the decision-making process easier. Also, we found our students were clamoring for a minor in BU’s college of communication. Armed with the survey data, we’ve been working with the communication school over the past year to develop an advertising minor tailored specifically to the needs of our management students.”
Alison Barber, senior associate dean at Michigan State University in East Lansing, recalls the school’s reaction to its first year of undergraduate student satisfaction data: “We thought that we should be doing as well as the schools with which we were comparing ourselves, but we weren’t.”
To improve the student experience, Michigan State administrators changed the requirements for admission into the business school, looking at a broader range of criteria and decreasing the number of students admitted from 1,800 to 1,200. They also used a gift from the Lear Corporation to open the Lear Corporate Career Services Center, designed to give guidance to business students and other students seeking a career in business. In addition, they restructured programs so that business majors and nonbusiness majors would take different tracks and even different courses.
“One outcome of this,” says Barber, “was the creation of smaller courses in the senior year. And over the past four years of this change, our student satisfaction scores have reflected the improved quality of the student experience.”
Fixing the Problems
Besides being a catalyst for change, student satisfaction metrics can provide feedback on whether major changes in an academic environment are affecting the core mission—education. Sacred Heart University in Fairfield, Connecticut, for example, has undertaken several dramatic changes over the past five years: restructuring the university, hiring a new dean, and applying for AACSB accreditation.
According to Ed Gore, director of academic affairs for undergraduate programs, the school made three other key changes. It replaced half its faculty as professors retired or took positions elsewhere; introduced systems to help adjuncts achieve greater consistency in teaching multiple-section courses; and placed more emphasis on research, while maintaining the teaching emphasis of the university. Steadily rising student satisfaction scores have been among the results of these changes. Says Gore, “The scores are not what set us on the road to change. But the scores confirmed that we were going in the right direction.”
Similarly, Dennis Hanno, director of undergraduate students at the University of Massachusetts Amherst, observes, “Student satisfaction is not the only measure. But it can certainly tell us where to look.” If survey feedback points to problems in one area of instruction, for example, UMass Amherst administrators can look into the teaching in a course or discipline. “When your students are rating the instruction in one discipline low relative to the instruction in other disciplines at your college, low in comparison to other schools you are benchmarking against, and low in comparison to all schools participating in a survey—well, you know it’s something to investigate.” When such problems have been corrected, student satisfaction and learning have improved.
“We also use student feedback to monitor the delivery of student services,” says Hanno. “Although our student satisfaction with placement was acceptable across the college, in one area the levels were significantly lower. We investigated that further, and brainstormed solutions. Now we offer additional internship opportunities to students in that major. Our scores have improved, along with student field experiences.”
Point of Reference
The trend toward studying satisfaction is clearly growing, and this is true not only in management education programs, but in other professional programs such as engineering, nursing, and education. Furthermore, with each year, schools become more confident in using the information they receive as a way to help manage change and strive for continuous improvement.
Knowledge is power, and power can be used to improve a program and to strengthen its position, both on its own merits and vis-à-vis its competition. Like any business, business schools need to deliver value. One measure of that value is the feedback from a major stakeholder group—students.
Scott Buechler is Management Education Project Manager for Educational Benchmarking Inc. Darlena Jones is head of Research and Development at Educational Benchmarking Inc.