What Price Rankings?

Many business schools are upping the ante to compete in the rankings game. Ultimately, the costs to the learning environment may simply be too high.

Few people can remember what it was like before 1987—what I call the year before the storm. It was a time when business school deans could actually focus on improving the quality of their schools’ educational offerings. Discussions about strategic marketing were confined mostly to the marketing curriculum. PR firms were hired by businesses, not business schools. Many business schools had sufficient facilities, but few buildings had marble floors, soaring atriums, or plush carpeting. Public university tuition was affordable for most students, and even top MBA programs were accessible to students with high potential but low GMAT scores.

All this began to change in 1988 when BusinessWeek published its first article that ranked full-time MBA programs. That article set in motion a rankings frenzy that has escalated every year. Today, few business school deans can ignore the impact that MBA rankings have had on their schools. The percentage of resources business schools now devote to engineering the ranking of their full-time MBA programs is up; the percentage of resources they devote to their undergraduate programs, curricular innovation, and research is down. And with students, alumni, and donors veritably rankings-obsessed, deans ignore the rankings at their own peril.

To manage the rankings and their impact, deans must understand three important factors: how the rankings work, what they measure, and what business schools can do to exert their own influence. To understand the first two factors, business school deans can speak to editors of the rankings, who freely share their methods. Here, however, I focus on the last issue: what each business school can do individually and what business schools can do as a group to change the way the rankings work. Only in this way can we calm the storm that the rankings have created.

Positive Intentions

Many make the case that the rankings have had some positive influence on the business school market. After all, they do provide an external viewpoint that shows business school administrators and staff what students and recruiters think of their schools. Students and recruiters will often say to the media what they won’t say directly to deans or faculty. As a result, the rankings provide the type of insight that business schools are unable to provide for themselves.

In an age of abundant, yet often scattered, information, the various rankings also give prospective students a place to start their research. They provide accessible information to students seeking MBA programs that best suit their talents, intentions, and budgets.

But what value do rankings add to information already available, and just how useful is this information? It may not be very useful at all. An examination of the data from the rankings of full-time MBA programs between 2002 and 2004—published in BusinessWeek, Financial Times, and U.S. News & World Report—reveals three telling similarities:

•  All three rankings agree on 17 of the top 20 schools, the same 17 schools that most in the market would identify as top-tier. Flash back to before the rankings, to 1987, when a little-known publication called MBA placed the same 17 schools in its own top 20. This consistency suggests that these 17 schools have such strong reputations that their status in the marketplace is unlikely to change whether or not they are numbered in a published ranking.

•  Between 2002 and 2004, eight schools vied for the other three spots in the top 20 in the rankings from all three publications.

•  The quantitative differences among the 25 to 30 schools in the second tier are relatively slight. In fact, BusinessWeek lists the schools ranked 31 to 50 alphabetically, rather than assigning them a number, most likely because there is so little difference among these schools. The sketch after this list shows just how little separation it takes to reshuffle an ordinal ranking.
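
To see why those slight differences matter, here is a minimal sketch in Python. The school names and composite scores are hypothetical, invented purely for illustration; the point is only that when scores cluster this tightly, a change well within measurement noise can reshuffle an ordinal ranking.

```python
# Hypothetical composite scores for a tight cluster of second-tier schools.
scores = {
    "School A": 71.2,
    "School B": 71.0,
    "School C": 70.9,
    "School D": 70.7,
    "School E": 70.6,
    "School F": 70.4,
}

def rank(scores):
    """Order schools best-first by composite score."""
    return sorted(scores, key=scores.get, reverse=True)

print(rank(scores))
# ['School A', 'School B', 'School C', 'School D', 'School E', 'School F']

# A 1 percent bump for School E (well within survey noise)
# vaults it past four peers in the ordinal list.
scores["School E"] *= 1.01
print(rank(scores))
# ['School E', 'School A', 'School B', 'School C', 'School D', 'School F']
```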

Despite these facts, business schools still devote a disproportionate amount of time and money to improving their status in the rankings. Some schools even have multiple staff members whose jobs are devoted simply to manning “the rankings desk.” As the war of the rankings escalates, the balance of power is shifting from assessment of learning outcomes and academic scholarship to obsession with rankings status.

Unintended Consequences

It’s likely that when BusinessWeek, Financial Times, and U.S. News published their first rankings, their intentions were to fill a niche and boost circulation, not to fundamentally alter the way business schools operated. But their rankings have dramatically changed business education in many significant and undesirable ways:

Higher GMAT requirements. The U.S. News ranking has probably had more impact on admissions policies than any other factor in the 120-year history of business schools, especially when it comes to average GMAT scores. To improve their rankings, many schools are admitting only students who fit the classic U.S. News profile—individuals with high GMAT scores, high GPAs, and extensive work experience. For the most part, only those students who fit this profile have access to full-time MBA programs at the top 50 schools. High-potential students with 550 GMATs, 2.5 GPAs, or fewer than five years of work experience are generally no longer accepted at second- and third-tier schools, even though many would add great value to the learning environment.

Smaller cohorts. Top-tier schools have brand recognition on their side, so they can easily attract top students. To compete in the rankings, second- and third-tier schools have decreased their class sizes to boost their selectivity factor, since admitting a smaller fraction of applicants makes a school appear more selective. Their strategy is first to attract a smaller class filled with top students, in the hope of increasing their ranking and eventually making it easier to attract a larger class of top students.

This strategy often backfires, leaving schools in an unfortunate Catch-22 situation. To compete in the rankings, all second- and third-tier schools must decrease their class sizes. But since everyone is following the same strategy, no single school can increase its ranking solely by shrinking its class size. Yet any school that doesn't follow suit will soon find itself ejected from the top 50. So schools reduce class size not to improve their standing, but merely to sustain it.
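
The arithmetic behind this strategy is straightforward. The numbers below are hypothetical, but they show how cutting the size of an entering class lowers a school's acceptance rate, and therefore raises its apparent selectivity, even when the applicant pool and the program itself are unchanged.

```python
def acceptance_rate(admits, applicants):
    """Fraction of applicants offered admission; a lower rate
    reads as greater selectivity in the rankings."""
    return admits / applicants

applicants = 1000  # hypothetical applicant pool, held constant

print(acceptance_rate(400, applicants))  # 0.40 with the original class size
print(acceptance_rate(250, applicants))  # 0.25 after shrinking the class

# The school now looks far more selective, yet nothing about its
# applicants or its program has changed. And if every peer school
# cuts class size in the same proportion, relative standings do not
# move at all, which is the Catch-22 described above.
```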

Overemphasis on paychecks. In U.S. News’ 2004 rankings, graduates of the top ten schools earned the highest starting salaries, and those from eight of the next ten earned the next highest. It seems that for any business school to make the top 20, its students must be placed in high-salary jobs. But consider the message this sends to aspiring students: The quality of the program is directly related not to what you learn, the network you create, or what you accomplish, but to the salary you earn after you graduate.

Shifts in spending. As mentioned above, business schools have significantly increased the amount of time and financial resources they allot to their full-time MBA programs, both in absolute terms and as a percentage of their budgets. This allocation of resources could be positive if, in fact, most of the money went to improving the educational environment. Unfortunately, many schools aim much of this funding directly at improving their status in the rankings. And as a result, their diversion of funds to engineer their MBA rankings often weakens the quality of the learning environment in all programs.

More frills, less substance. All schools care about student satisfaction. But student satisfaction has become such a major factor in BusinessWeek’s ranking that schools have become too concerned with making students happy. Top schools are now providing amenities such as fitness centers, plush student lounges, and upscale dining—perks generally unrelated to providing a stimulating learning environment.

I recently visited a business school where I found the classrooms in disrepair. Paint peeled from their cinderblock walls, broken tiles littered the floor, and the rooms were equipped with hard plastic chairs and blackboards from a previous era. I passed through a hallway and suddenly found myself in a completely different environment: The halls were decked with marble and granite, and the offices sported plush carpeting and high-end furniture. Recruiters met in a cyber-lounge, students had five-star meeting accommodations, and Starbucks coffee was dispensed from carts along the walls.

In passing through that hallway, I had walked from the facilities for the undergraduate program to those for the all-important MBA program. But have these increased expenditures on MBA programs resulted in a commensurate increase in the quality of the learning experience? My guess is, probably not. Less marble might have meant more resources for faculty, curriculum, and other aspects directly related to learning.

Marketing bonanzas. A business school’s reputation with other business school deans plays a major role in U.S. News’ rankings. Just before U.S. News sends out its dean survey, deans’ desks everywhere are overflowing with mail from other business schools: glossy brochures, fancy announcement cards, and sometimes even gifts. Schools are spending a great deal of money not to improve their infrastructures and curricula, but to curry favor with other deans.

Interested to see how far schools were going in their marketing efforts, the AACSB affinity group for public university business schools recently conducted an informal survey. Of the seven schools that responded, six were undergoing major branding initiatives, each driven by a staff of three to eight people. Five had placed external PR agencies under contract. Before 1988, such expenditures would have been viewed as extravagant and marginal to a school’s mission.

Spending money to improve an MBA program is certainly reasonable, even if one desired result is improved standing in the rankings. But to what extent do the rankings measure program quality? Not much: when rankings measure “reputation” and “recruiter assessment,” they gauge program quality only indirectly. Moreover, when schools funnel money into improving these areas, they often divert funds from teaching, deflect focus from learning outcomes, and drain resources from the school’s mission.

Next-Step Strategies

The media rankings are a market reality. With the public’s fascination with business school rankings growing unabated, the media have no reason to stop: Their rankings issues are perennially among the year’s top sellers. Rankings, in some form, are here to stay.

Business schools, however, can take steps to exert influence over many aspects of the rankings, including the standards they measure, the data they use, and even the way they compute their results. In fact, with the outcry over rankings growing louder by the year, I predict that in the near future the business school market will see the following changes in the rankings system:

Standardized collection of data. Much of the criticism of rankings derives from their methodologies. That is, their systems often depend on variables that have a questionable relationship to quality. Moreover, their data is often self-reported and unchecked for accuracy. In response, several groups have already begun identifying measures of quality that, when combined with reasonable qualitative judgment, can be used to assess the overall quality of an academic unit. In 1998, the Graduate Management Admission Council (GMAC) formed an Industry Standards Task Force that defined the standards for reporting application and admissions data. At the same time, the MBA Career Services Council (MBACSC) began defining standards for placement and other career services data. These two groups have now agreed on a plan to place this data on GMAC’s Web site.

GMAC and MBACSC are now working to convince the media to use the data and standards developed through the Industry Standards Task Force. If agreement on measures were reached, a school could assess its strategy for improving quality against academically driven standards rather than against measures the media have set. AACSB International, which houses the world’s largest business school database, also plans to work with the media to improve criteria, conduct research on the impact of rankings, and help members provide information to publications.

Within a short time, GMAC and partner organizations may develop a Web site and database that major media outlets can access, for a fee, and link to their rankings templates. Any school would then be able to request that its data be compiled and sent to a particular publication.

Independent audits. GMAC has commissioned KPMG Peat Marwick to conduct external audits of the data submitted by schools. More than 100 schools have volunteered to have their data audited. In fact, a good number of schools already have been through the audit process.

Cooperation between media and academia. GMAC, AACSB, and the media should create a joint task force to oversee annual modifications to the data collection and auditing process. In this way, the three groups can work toward agreement on methodology and measurements.

Once this system is in place, I predict that the top MBA programs will band together and refuse to submit any information other than what is available through the database or their own Web sites. Schools would then no longer need full-time staff to react to the rankings, nor would they have to field calls from multiple outlets requesting diverse data throughout the year.

Ratings, Not Rankings

All deans breathe a sigh of relief when their schools’ rankings improve, and they dread the reaction when their rankings fall. They also know that they often have very little control over which direction their rankings go. When rankings change, it’s often a random or temporary event, based on nothing the school did or didn’t accomplish. Now is the time to create a system in which schools do have some control over their own improvement and reputation in the marketplace.

To return to the time before the storm—a time when the quality of the learning environment and the process of discovery and research took precedence over where a full-time MBA program was ranked—we must move to a system of ratings, rather than rankings. Schools could be rated on a grading scale of “A,” “B,” or “C,” or on a scale of one to five stars. Under a ratings system, no longer would schools worry about random movements in their numerical ranking each year. Fewer resources would be focused on trying to move a ranking up by one or two slots; more resources would be focused on increasing quality, which ultimately would lead to a highly rated school.

In addition, a system of ratings would remain stable over longer periods of time—there would be little news with each announcement of ratings from top media publications. Finally, a program would be able to earn a top rating by virtue of its quality alone, rather than by the few percentage points that separate the 30th-ranked school from the 50th.
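
To make the contrast concrete, here is a minimal sketch, again with hypothetical scores and thresholds, of how the second-tier scores from the earlier example would behave under a ratings system. Schools whose scores differ only trivially fall into the same band, so the small fluctuations that reorder a ranking produce no movement at all in a rating.

```python
def rating(score):
    """Map a composite quality score to a coarse letter rating.
    The thresholds are hypothetical; a real system would set them
    through the kind of industry task force described earlier."""
    if score >= 80:
        return "A"
    if score >= 65:
        return "B"
    return "C"

# The same tightly clustered scores that a ranking reshuffled:
for school, score in [("School A", 71.2), ("School E", 71.3)]:
    print(school, rating(score))  # both print "B"

# Under a ranking, School E's 1 percent bump reordered five schools.
# Under a rating, the identical change moves no one, because both
# scores remain well inside the "B" band.
```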

It’s true that without the rankings race, there wouldn’t be as much drama in the media’s treatment of business school programs. Without the highs and lows involved in schools’ gain or loss of rankings status, stakeholders would attach much less importance to negligible differences among schools in the same tier.

What all stakeholders would gain, however, would be learning environments driven by a commitment to quality, not an obsession with a numerical standing. If we can accomplish that shift in focus, we will be offering business education in a much saner environment. More important, we can focus our attention and resources squarely where they matter most—on learning and discovery.

Andrew J. Policano is dean of the Paul Merage School of Business at the University of California, Irvine, and director of the California Institute for Management Leadership.