The Rankings Game

To gain a competitive edge in the high-stakes rankings game, European academic leaders must understand how diverse European markets and subjective methodologies skew the playing field.

Especially in the U.S., media rankings of business schools have been a contentious issue almost since they first hit the newsstands in the late 1980s. While administrators acknowledge that the rankings may have had some positive impact on business schools, most argue that the rankings are often based on flawed data and arbitrary measures that don’t accurately reflect the quality of business school programs. For better or worse, however, the number of media rankings continues to grow, and the rankings have become a truly international issue.

In Europe, where the popularity and influence of rankings have soared, deans and other business school leaders wrestle with many of the same concerns and frustrations expressed by their counterparts in other parts of the world. The perspectives, influences, and rationales that shape Europe’s business schools add to the complexity of the rankings issue. To start, Europe’s social, cultural, and historical diversity is extraordinary. Fiscal systems, welfare programs, and the purchasing power of currencies vary widely. These differences hardly make for a level playing field in the rankings game.

Moreover, the rules of the rankings game can seem murky and abstruse, even for schools with extensive resources. Schools may achieve remarkable success in fulfilling their missions, invest heavily in physical resources, recruit better professors, and successfully launch graduates into the corporate world—but they cannot be certain that their rankings will improve.

Even so, academic leaders in Europe agree that high rankings can improve visibility, strengthen brand recognition, and attract applicants, faculty, recruiters, and donors. To gain that advantage, European business schools must better understand the perspectives, influences, and rationales that drive the rankings, so that they can develop more effective strategies and philosophies concerning them. Such exploration may be helpful not only to business school leaders in Europe, but to schools across the globe.

The Bologna Declaration

Europe’s ranking race became even more complex after June 1999, with the establishment of the Bologna Declaration. Now signed by 45 countries, the Bologna Declaration marks a transformation of higher education in Europe. The Declaration’s main objectives are twofold: It aims to promote cooperation among signatories in pursuing quality assurance and to unite their university systems into one “interchangeable system.” Many of the signatory countries have already implemented the principles of the Bologna Declaration, signing agreements for the exchange of credits, students, and professors.

It also seems certain that the Bologna Declaration will compel increased competition among universities and schools for funding and status. In fact, many business school leaders in Europe acknowledge that the Declaration will likely spur even more focus on the rankings, as schools increasingly view them both as benchmarking tools and as a means to showcase their strengths. The Financial Times has already initiated a new ranking of the Bologna Masters (MSc), supplementing its rankings of MBA, EMBA, and executive education programs. More publications are likely to follow suit.

Is this good for Europe? Many believe that the rankings will play a positive role, but there are some risks—namely, that academic leaders will lose perspective on their true missions. As external factors such as the Bologna Declaration increase the rankings’ influence, many deans will ask themselves, “How can my business school improve its position in a ranking?” They may invest resources in recruiting better professors, adding programs, and improving their educational offerings, not to better their schools, but to better their ranking performance. Ironically, such efforts will not guarantee that their schools improve their rankings, just as they do not guarantee such a result today.

Some of the factors that move a school’s position are purely statistical; others are linked to the parameters defined by each ranking, or to monetary, fiscal, and welfare issues, all of which can cause marked fluctuation in the rankings. Leaders who focus too heavily on these fluctuations risk losing sight of the factors that produce real improvement. Understanding the real meaning of the rankings’ results is, therefore, of paramount importance for all business school stakeholders.

Understanding the Obstacles

In addition to confronting the pressures of external forces, European schools must deal with problems that many believe are inherent in media rankings. The compilation and comparability of data, the stability of results, and the influence of weights are major issues.

Compilation and Comparability of Data—Business schools continually cope with problems inherent in compiling their data for the rankings. Cost is one issue, of course; but there are also risks of violating privacy laws, of conflicts of interest related to advertising, and of responding samples that are not representative of the institution.

But perhaps of more concern is that media organizations can provide little assurance that all schools apply consistent methodologies. And since there is no common standard for data collection, business schools can be hard-pressed to provide accurate, comparable data.

For example, salary measurements are riddled with problems. For Europeans, the meaning of the term “salary” is uncertain. Is this net? Gross? Does it include contributions and taxes paid by the employer on behalf of the employee? Is it gross plus fringe benefits and deferred payments, such as the staff severance fund? What about fiscal rules, social security systems, salary composition, fringe benefits, and currencies? Given such differences among countries, defining salary for ranking purposes is a minefield. To make matters worse, sometimes the media give no indication at all of how salary is defined.

Severance funds offer a good example of the data comparability problem. These funds, which exist in some countries and companies, represent an important component of compensation for many Europeans. In certain cases, they amount to one month’s salary per year. By not including severance funds in their calculations, the media rankings overlook roughly 8 percent of the real value of paid salary.

Another source of potential salary miscalculation is the annual holiday allotment for employees. In continental Europe, for example, that allotment is in the range of 30 days; in the United States, it is around 10 days. The difference in total working days per year is close to 10 percent, which equates to a difference of roughly 10 percent in salary, whether measured as a better standard of living or as additional working opportunities. In the Financial Times MBA ranking, a 10 percent difference in salary could easily mean a change of some 15 positions for schools in the mid-range.
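
To make the arithmetic concrete, here is a minimal sketch of how these two items shift the comparable value of a nominally identical salary. The salary figure, the one-month severance rule, and the working-day count are illustrative assumptions, not data drawn from any ranking.

```python
# Illustrative arithmetic only: the salary, severance rule, and day counts
# below are assumed figures, not data from any actual ranking.

nominal_salary = 100_000  # gross annual salary as reported to a ranking

# Severance fund: roughly one month's salary set aside per year in some
# European countries, i.e. about 1/12 of gross pay.
severance_share = 1 / 12
print(f"Severance fund adds about {severance_share:.1%}")       # ~8.3%

# Holiday allowance: ~30 days of paid leave in continental Europe versus
# ~10 in the United States, out of roughly 260 potential working days.
extra_holiday_days = 30 - 10
working_days = 52 * 5
holiday_share = extra_holiday_days / working_days
print(f"Extra holidays are worth about {holiday_share:.1%}")    # ~7.7%

# Omitting either item changes the comparable salary by several percent,
# on the order of the differences discussed above.
adjusted = nominal_salary * (1 + severance_share + holiday_share)
print(f"Adjusted comparable salary: {adjusted:,.0f}")
```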

These differences, which some rankings do not take into account, are significant. In addition, the fiscal implications of pension funds and health care systems cannot be neglected without influencing the results of rankings that use salary as their main comparison parameter.

The most accurate rankings account for differences in relative prices by adjusting salaries according to Purchasing Power Parity and the average salary per industry. No ranking, however, takes into account factors such as fiscal differences.
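
As a rough sketch of what such an adjustment involves, the snippet below converts locally reported salaries into a common purchasing-power basis. The school names, salary figures, and conversion factors are hypothetical placeholders, not official PPP statistics.

```python
# Hypothetical example: salaries and PPP conversion factors are placeholders,
# not official statistics.

# Local-currency salaries reported by graduates of three hypothetical schools.
salaries_local = {"school_a (EUR)": 85_000, "school_b (GBP)": 78_000, "school_c (USD)": 110_000}

# PPP conversion factors: local currency units per international dollar.
ppp_factors = {"school_a (EUR)": 0.80, "school_b (GBP)": 0.70, "school_c (USD)": 1.00}

# Express every reported salary in comparable purchasing-power terms.
salaries_ppp = {s: amount / ppp_factors[s] for s, amount in salaries_local.items()}

for school, value in sorted(salaries_ppp.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{school}: {value:,.0f} international dollars")
```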

Stability of Results—Even if all rankings data were accurate, adjusted to account for fiscal differences, holiday allowances, and other variables, there would be little certainty that the results of the rankings precisely reflected the schools’ performance. Year after year, changes in the positions of various schools in the media rankings seem correlated more with statistical noise than with actual changes in the schools’ performance.

For example, a review of the full-time MBA rankings of the Financial Times over the past six years indicates that the schools in the first group of 25 are reasonably stable. In the next 75 schools, it’s a different story. The changes in this second group appear substantial. Some schools increase their positions systematically, some decline. Some go up and down in a sort of sinusoidal pattern while others remain stable.

When positions are compared at a two-year interval, the average absolute change for the schools in the second group of 75 is 12 positions between 2003 and 2005, and it rises to 15 positions between 2004 and 2006.

This analysis suggests that “shock absorbers” should be applied in the rankings, as some publications already do. Even better, emphasis could be placed on the three-year average position, as others do. A year-by-year evaluation appears, in principle, to be misleading, at least outside the first group of schools.
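
Both checks described here, the year-to-year noise measure and the three-year averaging, can be sketched as follows. The positions are invented for illustration and are not taken from the Financial Times tables.

```python
# Illustrative sketch: positions by year are invented, not actual FT data.
positions = {
    "school_x": {2003: 41, 2004: 55, 2005: 32, 2006: 48},
    "school_y": {2003: 70, 2004: 66, 2005: 81, 2006: 60},
    "school_z": {2003: 90, 2004: 88, 2005: 92, 2006: 89},
}

def avg_abs_change(year_a, year_b):
    """Average absolute change in position between two editions of a ranking."""
    changes = [abs(p[year_b] - p[year_a]) for p in positions.values()]
    return sum(changes) / len(changes)

print(avg_abs_change(2003, 2005))   # noise at a two-year interval
print(avg_abs_change(2004, 2006))

def three_year_average(school, end_year):
    """A simple 'shock absorber': the average position over three years."""
    years = range(end_year - 2, end_year + 1)
    return sum(positions[school][y] for y in years) / 3

for school in positions:
    print(school, round(three_year_average(school, 2006), 1))
```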

Not surprisingly, if schools’ positions in the various rankings for 2005 are charted, there is a wide distribution of results as positions change depending on the parameters of the ranking. This is understandable, given that the media rankings refer to specific programs and not to the school as a whole. Even so, many academics and consumers share a tendency to generalize a school’s ranking for a given program to all the programs of that school. Business school leaders must guard against that tendency and honor the limitations of each ranking’s findings.

The Influence of Weights—Another “weighty” problem is that each ranking publication chooses different parameters by which to compare schools and adopts its own system for weighting those parameters. Of course, publications have the right to choose their own weighting systems, provided they are clearly indicated. Still, it’s easy to wonder what the results might be if small changes were made to the weights a particular publication employs. What would a “sensitivity analysis” show?

To conduct a thorough sensitivity analysis, the whole calculation process of a ranking’s raw data would have to be reproduced—an impossible task, because not all the raw data is available. However, simulations, which use a simplified calculation model and available data, can give at least a preliminary indication of a trend. Five simulations were conducted involving two of the most important parameters in the Financial Times’ MBA rankings: salary and salary percentage increase.

The first simulation was run using the publication’s own system, which weights salary and salary percentage increase at 20 percent each. Successive simulations modified these weights to 18 percent and 22 percent; 15 percent each; 21 percent and 19 percent; and 25 percent each. The weights of the other parameters were adjusted accordingly.

The simulation showed that the relative positions of schools can change significantly when the weights are changed even slightly. Minor changes in the weighting system shifted a school’s position relative to another school by as many as ten positions. As this simple experiment indicates, any interpretation of the rankings should be done with an awareness of how weights can significantly influence results. A school may achieve a certain position in a certain ranking for a certain program in a given year, but it should be made clear that the position was determined “according to that specific weighting system.”
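
Because the publication’s complete raw data and calculation process are not available, only a simplified simulation of this kind can be reproduced. The sketch below uses hypothetical normalized scores for three schools and lumps every criterion other than salary and salary increase into a single “other” component.

```python
# Simplified weight-sensitivity sketch: scores, weights, and schools are
# hypothetical, and the real methodology uses many more parameters.
schools = {
    # normalized scores (0-1) for salary, salary increase, and all other criteria
    "school_a": {"salary": 0.90, "increase": 0.60, "other": 0.70},
    "school_b": {"salary": 0.70, "increase": 0.95, "other": 0.65},
    "school_c": {"salary": 0.80, "increase": 0.75, "other": 0.72},
}

def rank(weights):
    """Order the schools by weighted total score under a given weighting."""
    total = lambda s: sum(weights[k] * schools[s][k] for k in weights)
    return sorted(schools, key=total, reverse=True)

# Baseline: salary and salary increase weighted at 20 percent each.
print(rank({"salary": 0.20, "increase": 0.20, "other": 0.60}))

# Small perturbations of the two weights, with the remainder adjusted.
for w_sal, w_inc in [(0.18, 0.22), (0.15, 0.15), (0.21, 0.19), (0.25, 0.25)]:
    print(rank({"salary": w_sal, "increase": w_inc, "other": 1.0 - w_sal - w_inc}))
```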

Rankings and Ratings

Such subjective weighting systems raise the question: Can rankings be trusted at all? Not if one views them as valid comparisons that account for the diversity among business schools, not just in Europe, but in the global market. In physics, for example, representing a complex phenomenon with a single number requires researchers to follow similarity laws. As one might suspect, scientists find it rare that a complex phenomenon can be represented by a single number. It has happened in only a few cases, usually through the work of very famous scientists, as with the Mach number, which characterizes flow speed relative to the speed of sound in different media. In the majority of cases, it’s impossible to reduce a complex phenomenon to a single number—just as it’s impossible to confine the characteristics and offerings of a business school to a single numerical rank.

As business school administrators know, business school rankings do not respect similarity laws; they’re currently driven more by the business of publishing than by science. The fact that rankings results vary so widely from one publication to another is proof in itself that the information should be interpreted with caution. Factor in the diversity of schools in Europe—and around the world—and such interpretations become even more complex. 

To offer useful and valid comparisons, rankings should refer to only one parameter. For instance, the Fortune 500 ranks companies based on their revenues. Forbes magazine ranks CEOs based on compensation. Business schools, too, could be ranked according to the number of programs, the size of their student body, or the percentage of international faculty.

To compare multiple facets of business schools, a system of ratings would be much more appropriate—and more reliable. Much as a review of new digital cameras might award each model a number of stars, rather than a precise ordinal position, publications could rate business schools on several dimensions using a complex comparative methodology. However, ratings are less appealing to readers and publishers than rankings, which are tremendously popular and attract wide readership. That mass audience for rankings makes their continued presence all but inevitable.
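
One possible form such ratings could take is sketched below; the composite scores and the thresholds separating the star bands are hypothetical and would have to be defined by a publication’s own methodology.

```python
# Hypothetical rating sketch: composite scores and star thresholds are
# invented for illustration only.

def stars(score, thresholds=(0.80, 0.60, 0.40, 0.20)):
    """Map a composite score in [0, 1] to a rating of 1 to 5 stars."""
    return 5 - sum(score < t for t in thresholds)

composite_scores = {"school_a": 0.86, "school_b": 0.63, "school_c": 0.59}
for name, score in composite_scores.items():
    print(f"{name}: {'*' * stars(score)}")
```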

The only rational solution seems to be a combination of rankings and ratings. If publications offer the public both options—rankings based on a single parameter and ratings based on many parameters—they maintain their readership and offer a more valid basis of comparison. Everyone receives the benefits of rankings while coping with fewer of their inherent drawbacks.

Right now, the best course of action for European schools is to educate all stakeholders about what rankings measure—and what they don’t. European schools may be new, comparatively speaking, to the rankings game. Still, they must develop strategies that may lead to greater value and accuracy of rankings on a global basis. Business schools everywhere must participate in this effort. 

Andrea Gasparri is the managing director of SDA Bocconi’s School of Management in Milan, Italy. Gasparri also serves on the board of AACSB International and is a member of AACSB’s European Affinity Group.