Judging the Journals

Where professors publish is just as critical as how often they publish. But it’s essential to establish a fair and consistent method to gauge publication quality.

Most business schools rely heavily on a professor’s publication record when they’re evaluating faculty, so they track how often each professor publishes—and in what journals. But it’s no simple task for schools to determine the quality of the journal, and thus the quality of the scholarly contribution.

One popular method is for schools to create internal lists that place journals in “graded” categories. In fact, a recent survey by AACSB International reveals that approximately 40 percent of the association’s members opt for this way of officially documenting journal quality at their institutions. But how can they know that they have correctly judged a publication’s worth—or fairly gauged their faculty’s work?

It’s a challenge for any school to put together such a list in a comprehensive and consistent way. Understanding the assessment methods available and deciding which ones work best at a given institution are crucial tasks.

Rank and File

Published journal ranking studies use several methods, including opinion surveys, citation scores, the author affiliation index, and other measures. Each method has its own advantages—and disadvantages.

Opinion surveys. Some researchers have developed journal rankings by surveying faculty, department heads, deans, journal editors, article authors, doctoral students, and other interested parties about their opinions of journal quality. They generally ask respondents to consider factors such as the journal’s reputation, prestige, and appropriateness as a publishing outlet.

While opinion surveys are common, they have been criticized for being biased toward U.S. journals, underestimating the importance of niche journals, and failing to recognize newer journals and practitioner publications. They also have been disparaged as popularity contests whose rankings are not derived from "hard" data. Although these are legitimate concerns, opinion-based rankings generally produce a reliable assessment, at least for the top 20 journals in a field.
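
To see how such a ranking is typically assembled, here is a minimal Python sketch that averages survey ratings and sorts journals by mean score; the journal names and 1-to-5 ratings are hypothetical.

    # A minimal sketch of an opinion-survey ranking, assuming hypothetical
    # 1-to-5 quality ratings (5 = best) collected from faculty respondents.

    from statistics import mean

    ratings = {
        "Journal A": [5, 4, 5, 4],
        "Journal B": [3, 4, 3, 3],
        "Journal C": [4, 4, 5, 3],
    }

    # Rank journals by mean rating, highest first.
    ranked = sorted(ratings, key=lambda j: mean(ratings[j]), reverse=True)
    for position, journal in enumerate(ranked, start=1):
        print(position, journal, round(mean(ratings[journal]), 2))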

Citation scores. Citation scores, which are also widely used, measure a journal's visibility by noting the extent to which its articles are cited in other publications, such as research studies, doctoral dissertations, and textbooks. A popular type of citation score is the impact factor, calculated as the ratio of the citations a journal's articles receive to the number of articles the journal published, over a specified time period. In the standard two-year version, the citations a journal receives in a given year to articles it published in the two previous years are divided by the number of those articles.
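
To make the arithmetic concrete, here is a minimal Python sketch of the standard two-year impact factor; the citation and article counts are hypothetical.

    # A minimal sketch of the two-year impact factor, using hypothetical counts.

    def impact_factor(citations_this_year: int, articles_prior_two_years: int) -> float:
        """Citations received this year to articles the journal published in
        the previous two years, divided by the number of those articles."""
        return citations_this_year / articles_prior_two_years

    # Example: 120 articles published in the prior two years drew 300
    # citations this year, for an impact factor of 2.5.
    print(impact_factor(300, 120))  # 2.5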

Detractors complain that citation prominence doesn’t necessarily equate to quality. They also claim that citation scores favor established journals with larger readership—most often U.S. journals—and those that publish articles on the latest hot topics. Nonetheless, because these scores are based on empirical data, many faculty and administrators feel comfortable using them to evaluate journal quality.

Author affiliation index. The AAI method takes a select set of schools—typically, the top research institutions in the discipline—and computes the percentage of a journal’s authors who have been drawn from those schools during a designated period of time. The assumption is that authors from top schools will produce articles of higher quality; journals with more articles by these authors therefore are the best in their fields.
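
The computation itself is straightforward. Here is a minimal Python sketch, assuming a hypothetical peer set of schools and a hypothetical list of author affiliations for a journal's articles.

    # A minimal sketch of an author affiliation index (AAI); the school
    # names and author affiliations are hypothetical.

    TOP_SCHOOLS = {"School A", "School B", "School C"}  # the designated peer set

    def author_affiliation_index(author_schools):
        """Percentage of a journal's author credits drawn from the peer set."""
        if not author_schools:
            return 0.0
        hits = sum(1 for school in author_schools if school in TOP_SCHOOLS)
        return 100.0 * hits / len(author_schools)

    # Example: 14 of a journal's 40 author credits come from the peer set.
    authors = ["School A"] * 14 + ["Other School"] * 26
    print(author_affiliation_index(authors))  # 35.0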

One advantage of this approach is that AAIs can be customized; a given school can identify its peer institutions and track which journals publish articles by their faculty. On the other hand, if schools are judged by how many faculty publish in top journals, and journals are judged by the number of published authors who are from top schools, then the AAI method can become circular.

Other sources. Published ranking studies have used a variety of other ways to appraise journal quality:
Library holdings. The more available a journal is in academic libraries, the more likely it is to have influence.
Readership. The more readers a journal has—determined by library circulation records or Internet downloads of journal articles—the greater its sway.
Target lists. Journals are ranked by averaging across the graded lists from several schools (a simple averaging sketch follows this list).
Meta-analyses. These studies summarize the published stream of journal ranking articles in a specific discipline.
External ranking metrics. Many journals appear on the lists of outside sources, such as the Financial Times Top 40 Journals, the U.K. Association of Business Schools (ABS) ranking, and the list from the Australian Business Deans Council. A number of Internet sites also provide information about journal standings (see "Where to Read More").
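
As an illustration of the target-list approach mentioned above, here is a minimal Python sketch that averages graded lists across schools; the schools, journals, and grades are hypothetical, with grade 1 as the top category.

    # A minimal sketch of averaging graded journal lists across schools;
    # all names and grades are hypothetical (grade 1 = top category).

    from statistics import mean

    school_lists = {
        "School X": {"Journal A": 1, "Journal B": 2},
        "School Y": {"Journal A": 1, "Journal B": 1},
        "School Z": {"Journal A": 2, "Journal B": 3},
    }

    def average_grade(journal):
        """Mean grade across the schools whose lists include the journal."""
        grades = [grades_by_journal[journal]
                  for grades_by_journal in school_lists.values()
                  if journal in grades_by_journal]
        return mean(grades)

    for journal in ("Journal A", "Journal B"):
        print(journal, round(average_grade(journal), 2))
    # Journal A 1.33, Journal B 2.0: a lower average signals a stronger consensus.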

Rankings in Action

Because scholarly production is such a significant element in judging faculty performance, it’s critical that journal quality be fairly and transparently evaluated. Everyone must understand the school’s quality assessment system, from the professor being evaluated to the deans, provosts, and faculty promotion and tenure committees reviewing the professor’s performance.

Whether schools generate an internal rankings list or rely on external sources, they need a definitive way to determine journal quality—and justify their decisions to others. The published streams of articles ranking journals by discipline can be extremely helpful in this regard, though it's important to remember that any ranking system can be flawed. For any school attempting to judge journal quality, the best approach is to consult multiple sources, triangulate the data, and draw thoughtful conclusions, while keeping the process as visible as possible. When everyone understands how scholarly contributions are measured, everyone can strive to hit the highest mark.
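
One way to triangulate, sketched below in Python, is to convert each source's rank positions to a common 0-to-100 scale and average them; the sources, journals, and rank positions are hypothetical.

    # A minimal sketch of triangulating several journal rankings; the
    # sources, journals, and rank positions (1 = best) are hypothetical.

    from statistics import mean

    rankings = {
        "Opinion survey":  {"Journal A": 1, "Journal B": 3, "Journal C": 2},
        "Citation scores": {"Journal A": 2, "Journal B": 3, "Journal C": 1},
        "External list":   {"Journal A": 1, "Journal B": 2, "Journal C": 3},
    }

    def percentile(rank, total):
        """Convert a rank position to a 0-100 score (100 = best)."""
        return 100.0 * (total - rank + 1) / total

    def combined_score(journal):
        scores = [percentile(source[journal], len(source))
                  for source in rankings.values() if journal in source]
        return mean(scores)

    for journal in ("Journal A", "Journal B", "Journal C"):
        print(journal, round(combined_score(journal), 1))
    # Journal A 88.9, Journal B 44.4, Journal C 66.7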

Where to Read More

Dozens of articles have been published assessing journal quality through methods such as opinion surveys, citation scores, the author affiliation index, library holdings, and Internet downloads. A comprehensive listing of these studies, as well as links to journal rankings’ Web sites, can be found online at www.aacsb.edu/research.

Bruce R. Lewis is associate professor and Cooper Fellow in Information Systems at the Calloway School of Business and Accountancy at Wake Forest University in Winston-Salem, North Carolina.