Redefining Quality

A rating system, not a media ranking system, could be the best way to measure program quality, argue Robert Rubin of DePaul University and Frederick Morgeson of Michigan State.

Over the past two decades, politicians and education experts have called for reforms that would help stakeholders make more informed choices about the value of a college degree and promote increased accountability for universities. For instance, in 2013, U.S. President Barack Obama proposed a national database with a dual focus on college affordability and career outcomes. In 2006, U.S. Secretary of Education Margaret Spellings commissioned a report calling for “greater transparency and accountability for measuring institutional performance.” Here, too, the report called for a consumer database that would allow stakeholders to compare institutions on a variety of factors.

Although many university leaders agree that schools must adopt innovative measures to cut spiraling costs and provide consumers with useful comparative data, they generally have shown resistance to these proposals. We believe their resistance is well-justified. Most systems tend to reflect only a small part of what a school might consider its core mission. For instance, a school rated as relatively “expensive” might use the higher tuition dollars to create a very personalized educational environment. For the students who would flourish in such an environment, and only such an environment, the high tuition would be worth the price. In this case, a system that only captures costs without measuring the educational environment would paint an unfair picture.

As Malcolm Gladwell recently observed, all evaluation systems “enshrine particular ideologies” of the individuals who create them, thereby placing a high priority on some stakeholder needs and ignoring others. Unless they consider the myriad factors that make higher education valuable to various stakeholders, most systems will continue to be met with resistance. We have spent the last few years trying to understand the perils and promises of creating a workable evaluation system for business schools. We know firsthand that it’s a daunting undertaking, but we also know that it can be done.


Business schools already have spent decades under the tyranny of a highly deficient evaluation system: the media rankings. Rankings typically reflect very limited information, such as starting salaries, student satisfaction, and student test scores. In some cases, only one or two of these factors are used to determine a school’s ranking. This focus on a narrow slice of business school quality has spawned decades of well-documented dysfunction. Schools divert critical resources away from educational initiatives in favor of managing their impressions in the marketplace and courting recruiters so they can improve their ranking scores. This is not exactly the kind of reform a quality evaluation system ought to inspire.

In our view, only a rating system—not a ranking system—would present complete information to stakeholders with a broad variety of interests. For some stakeholders, starting salaries are irrelevant; for others, starting salaries are all that matter. Because media rankings are calculated on limited criteria, they don’t allow stakeholders to make decisions based on their own priorities, but a comprehensive rating system would. Similarly, because a rating system does not pick “winners,” it would support AACSB International’s mission-driven accreditation format by allowing schools to focus on quality improvement efforts most aligned with their particular missions.

Yet such a system would require clearly defining what is meant by “educational quality.” We recently took on this challenging task for graduate business school programs, and MBA programs in particular. In our study, we found that academic quality can be measured across nine dimensions: curriculum, faculty, placement, reputation, student learning and outcomes, institutional resources, program and institution climate, student composition, and strategic focus. In our Program Quality Model (PQM), these nine categories are described further by 21 additional quality subdimensions, suggesting that quality is both complex and multifaceted. (See the chart below for a more detailed description.)

Of course, some of the PQM indicators include familiar inputs and outcomes, such as test scores and starting salaries, that many rankings capture. Yet our research suggests that 60 percent or more of program quality is explained by factors at work during the educational process or that support the educational environment. These factors are almost entirely absent from the most popular rankings. More important, it is within this 60 percent that an institution makes its real value proposition: the way it teaches students, promotes learning, and develops its curriculum.


Even using the PQM framework, building a comprehensive national or international rating system would require intense courage, cooperation, and commitment from all business school stakeholders. Because such an undertaking could take years, we offer four concrete steps business schools can take right now.

1. Focus on the business school mission. Schools can regularly assess programs and plan for improvements by using the PQM to evaluate their success against their own distinctive missions. For example, in examining curriculum, schools can involve stakeholders by asking, “Is our MBA curriculum aligned with what future managers need to be successful? How can we deliver the curriculum in a way that promotes optimal learning? How can we structure our program to maximize learning opportunities?” When schools communicate what is valuable to them, they clarify their missions, engage stakeholders, and clearly demonstrate their commitment to continuous improvement. As these activities are also essential to the accreditation process, schools that develop internal ratings systems will be able to use these data when they seek to attain or maintain accreditation.

2. Develop and track quality metrics. A school that wants to improve “teaching quality” first must define what that means for its particular program. There are dozens of potential metrics for each of the nine quality categories in the PQM; in fact, we generated more than 400 potential objective and subjective metrics, examples of which are listed in the chart below. Some quality measures are obvious—for instance, schools are likely to look at starting salaries when considering career outcomes or student satisfaction when considering reputation—but we have included some less familiar ways to capture quality as well. In particular, we think that subjective measures are often the most useful because they can be tailored to the mission of an individual program.
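As an illustration only—the PQM does not prescribe any implementation—a school’s internal tracking of a few such metrics might be sketched as follows. The category labels follow the PQM chart below, but the metric names, data structure, and figures are entirely hypothetical.

```python
# Hypothetical sketch of internal PQM-style metric tracking.
# All counts below are invented for illustration.

def ratio(numerator, denominator):
    """Guarded ratio for count-based metrics (avoids division by zero)."""
    return numerator / denominator if denominator else 0.0

program_data = {
    "students": 420,
    "career_advisors": 6,
    "internships_per_year": 180,
    "core_credit_hours": 36,
    "elective_credit_hours": 24,
}

metrics = {
    # PLACEMENT: ratio of career service advisors to student population
    "advisor_ratio": ratio(program_data["career_advisors"],
                           program_data["students"]),
    # PLACEMENT: ratio of yearly internships to number of students
    "internship_ratio": ratio(program_data["internships_per_year"],
                              program_data["students"]),
    # CURRICULUM: ratio of core-to-elective credit hours
    "core_to_elective": ratio(program_data["core_credit_hours"],
                              program_data["elective_credit_hours"]),
}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

Tracked year over year against targets a school sets for itself, even simple ratios like these become mission-specific quality indicators rather than ranking fodder.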

3. Seek to cooperate and compete. The media rankings make it appear as if business schools are in head-to-head competition with one another, ignoring the fact that they’re all engaged in the same primary endeavor: education. For this reason, we argue that business school quality will rise when schools create communities of practice that promote cooperation around central characteristics. For example, administrators of large part-time MBA programs share information and best practices through an AACSB Affinity Group designed for schools offering MBAs for working professionals.

Other collaborations rely on proximity or complementary programming. For instance, the University of Chicago’s Booth School of Business and Northwestern University’s Kellogg School of Management, located in nearby Evanston, recently teamed up to offer executive leadership programs. If all schools recognize that they have unique value propositions, they might be more eager to cooperate and learn from others to improve quality.

4. Participate in the creation of a broad rating system. Gaining consensus on the right metrics might be a challenging and iterative process, but it’s not an impossible one. Moreover, a rating system can be updated and expanded constantly without disrupting previous data, because the criteria are not weighted by the system, but by end users. We believe all business schools would benefit by contributing to the development of metrics to measure the success of their educational missions.
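To make concrete what “weighted by end users, not by the system” means, consider this hypothetical sketch. The system stores only unweighted ratings per quality dimension; each stakeholder supplies personal weights, so different users can reach different top choices from the same data. School names, dimension scores, and weights are all invented for illustration.

```python
# Hypothetical sketch: a rating system stores unweighted dimension scores;
# end users, not the system, supply the weights.

def personalized_score(ratings, weights):
    """Weighted average of a school's dimension ratings using a user's weights."""
    total = sum(weights.values())
    return sum(ratings[dim] * w for dim, w in weights.items()) / total

# Unweighted ratings (1-5 scale) on three PQM dimensions (invented data).
schools = {
    "School A": {"curriculum": 4.5, "placement": 3.0, "climate": 4.0},
    "School B": {"curriculum": 3.5, "placement": 4.8, "climate": 3.2},
}

# Two stakeholders with different priorities.
salary_focused = {"curriculum": 1, "placement": 5, "climate": 1}
environment_focused = {"curriculum": 3, "placement": 1, "climate": 5}

for user, weights in [("salary-focused", salary_focused),
                      ("environment-focused", environment_focused)]:
    # Each user orders the same unweighted data by his or her own priorities.
    best = max(schools, key=lambda s: personalized_score(schools[s], weights))
    print(f"The {user} stakeholder prefers {best}")
```

Because the stored ratings never change, new dimensions and new metrics can be added over time without invalidating earlier data, and no single ordering of schools is ever baked into the system itself.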


Schools that are highly ranked are often fearful of losing their lofty status if they rail against the media rankings that have benefited them so greatly. This is not an irrational fear. But if stakeholders want a system that provides a more meaningful evaluation of school quality, it is our view that deans—particularly deans of schools that achieve elite status in the rankings—will have to be among the first to support it. By doing so, these deans can signal to others a true readiness for change and usher in a new era of transparency and improved quality assessment. Our hope is that schools of business can lead the way in the national movement for improved quality in higher education. In this regard, we must be champions of change. There is simply too much at stake.




1. CURRICULUM—The overall quality of the courses of study provided by the institution, including content, delivery, and program structure
  • Classroom “sit-in” quality reviews by external raters
  • Curricular relevance ratings measuring the extent to which curriculum matches managerial job requirements
  • Executive ratings of content applicability in syllabi of core courses
  • Ratings by students or subject matter experts of time spent in various delivery formats
  • Ratio of core-to-elective credit hours
2. FACULTY—The overall quality of the teaching personnel, including their qualifications, research, and teaching
  • Alumni ratings of overall faculty quality
  • Student course evaluations, controlling for associated factors (e.g., “degree of course difficulty”)
  • Percent of faculty with five or more years of industry experience
  • Percent of faculty on journal editorial boards
  • Number of faculty research citations
3. PLACEMENT—The overall quality of career-related programmatic opportunities for students, including alumni networks, career services, and corporate/community relations
  • Percent of active recruiters that are alumni
  • Percent of alumni serving as mentors
  • Percent of administrators or faculty serving on community organization boards
  • Percent of MBAs utilizing career services
  • Percent of courses that provide community-related service learning
  • Ratio of yearly internships to number of students
  • Ratio of career service advisors to student population
4. REPUTATION—The extent to which the institution is recognized by external stakeholders as being of high quality or merit
  • Application volume
  • Quality evaluations by peer institutions
  • Employer ratings of overall institutional quality
  • Accreditation peer reviews
  • Faculty ratings of programs and institutions
5. STUDENT LEARNING AND OUTCOMES—The extent to which students acquire relevant knowledge and attain associated career outcomes, including personal competency development, student career outcomes, economic outcomes, and learning outcomes
  • Students’ self-ratings of career readiness
  • Alumni ratings of job mobility
  • Recruiter ratings of the quality of job interview responses
  • Percent of graduates who pass CFA or CPA exams or other certifications
  • Average raise obtained during first five years post-graduation
6. INSTITUTIONAL RESOURCES—The overall quality of resources available to the institution and constituents, including facilities, financial resources, investment in faculty, tuition and fees, and student support services
  • Stakeholder ratings of tech-related instructional resources
  • Faculty ratings of facility quality
  • Percent of endowment spent on operating budget
  • Percent of revenue from continuing education
  • Incentive pay for faculty and staff
  • Ratio of academic advisors to students
7. PROGRAM/INSTITUTION CLIMATE—The overall educational context, consisting of prevailing values, attitudes, and norms within the institution, including the robustness of the educational environment and attitudes toward diversity
  • Number of minority-focused MBA recruiting events
  • Student ratings of the value of diversity
  • Percent of faculty involved in extracurricular activities
  • Presence of faculty committees to monitor educational development of students
  • Extent of formal student feedback
8. STUDENT COMPOSITION—The overall makeup and corresponding quality of the student population with respect to academic achievement and professional experiences
  • Extent of student managerial experience
  • Performance levels in assessment centers
  • Percent of honor students with prior educational experiences
  • Level of participation in regional and national educational competitions
9. STRATEGIC FOCUS—The overall quality of the institution’s articulated mission and its strategic plan to achieve that mission
  • Percent of programmatic or institutional growth
  • Participation of students in continuous improvement efforts
  • Extent of curricular alignment with mission

Robert S. Rubin is an associate professor of management and a co-director at DePaul University’s Driehaus College of Business in Chicago. Frederick P. Morgeson is the Eli Broad Professor of Management in the Eli Broad College of Business at Michigan State University in East Lansing. Their research on MBA program quality appears in Disrupt or Be Disrupted: A Blueprint for Change in Management Education from Jossey-Bass/Wiley.