Leadership in the Round

A 360-degree assessment tool helps deans determine what they’re doing right, where they could improve, and how to implement changes.

When a business school dean was in his third year at a Midwestern university, his provost asked him to undergo a 360-degree assessment. He was pleased to learn that his raters believed he had good technical skills, deftness at promoting the business school, an excellent knowledge of the market, and the resilience to recover from mistakes.

He wasn’t quite as prepared for the candor of the write-in comments, in which people spoke plainly about areas where they thought his behavior could improve. Some raters found him dictatorial and prone to temper tantrums. Others wanted him to take more decisive action in some situations and project a stronger presence.

“After the shock wore off, I was able to put their comments in the context of the survey data and really make sense of what I was doing that they wanted done differently,” he says. With the help of a coach, the dean decided to focus on key issues such as building relationships, listening to others’ perspectives, and creating a more positive climate for the b-school faculty and staff. “Today it all seems obvious and simple,” he adds.

This Midwestern dean isn’t the only top manager who has turned to a 360-degree assessment to get honest feedback about how he’s performing and where he could improve. The corporate world has long relied on such assessments, in which individuals are rated on a variety of attributes by a cross-section of peers, direct reports, clients, and supervisors. What’s new is that the multisource assessment tool is working its way into the university setting. It’s becoming another method administrators can use to improve the way the b-school functions.

Academic Assessment

In a sense, the academic world has always engaged in forms of 360-degree assessment, says Stephen Stumpf, professor of management and Fred J. Springer Chair in Business Leadership at Villanova University in Pennsylvania. A student evaluation might be considered an assessment “from below” or “from the customer.” A peer review of an article written for publication could be seen as an assessment from a peer on the organizational chart. The 360-degree tool is different because it provides a snapshot of a dean’s performance competencies as viewed by a wide range of constituents, all consulted at the same time.

“Any C-suite leader—and that’s what a dean is—has many different stakeholders who all have different demands,” says Stumpf. “There may be little overlap between what faculty want from a dean, versus what students want, versus what the advisory council members or peers want. A 360 assessment gives all of the constituents a legitimate voice. In the absence of a 360 assessment, what you get is the squeaky wheel. Whoever is the most empowered or most angry is the one who speaks, and that person’s input is often more negative than positive.”

It’s not uncommon for senior-level leaders to get little feedback—until they’re fired, Stumpf posits. “There are really two main reasons why deans leave before they’ve chosen to step down,” says Stumpf. “One is that a powerful senior person doesn’t like them or their performance. That could be the provost, a member of the board of trustees, or the president of the university. The other is the palace revolt, the faculty saying, ‘We don’t want him to represent us.’”

Deans who receive a 360-degree assessment, on the other hand, learn what all of their constituents see as their strengths and weaknesses—and they have a chance to leverage those strengths and shore up those weaknesses before the situation gets critical.

Gathering the Ratings

Here’s how it works: Deans first rate themselves on a 360-degree assessment instrument, and they also list the names and e-mail addresses of the people they think should rate them. This list might include some or all faculty; members of the advisory board; direct reports; peers from within the university or the professional field; top university leadership, such as the provost; the dean of students; the head of the alumni organization; and those deeply involved in student recruitment. The Midwestern dean, for example, asked about two-thirds of his faculty, two other deans, four senior administrators, and ten members of the b-school advisory board to participate.

To choose raters, says Stumpf, “deans should ask themselves, ‘From whom do I want to learn? How can I get as much consolidated information as possible so I can choose how to invest my time and energy?’”

Raters assess the dean on a number of leadership attributes, usually completing their evaluations electronically via Web-based instruments. Most instruments allow raters to write in comments to give more depth to the data, and about two-thirds do so, says Stumpf. Once the data are compiled, the feedback is returned privately to the dean, either as a document or via password-protected Web access.

For the uninitiated, analyzing 360-degree feedback is not a simple task, says James Smither, Lindback Chair of Human Resource Management and professor in the School of Business at La Salle University in Philadelphia, Pennsylvania. For instance, if an assessment survey has 40 items on it, and each one is rated by a pool of peers, supervisors, and direct reports, and the response from each group of raters is gathered into its own mean, that’s 120 data points. “Who can attend to that much data? How do you make sense of that?” Smither asks. “It helps to have someone skillful in facilitating data to help you work your way through it and show you underlying themes.”
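
To see how quickly the numbers add up, consider a minimal sketch in Python of how raw ratings roll up into the per-group means a feedback report contains. The items, rater groups, and scores below are invented for illustration, not drawn from any actual instrument:

    # A minimal sketch with invented data: each item is rated by three
    # groups, and each group's ratings collapse into one mean score.
    # A full 40-item survey would yield 40 x 3 = 120 such means.
    from statistics import mean

    # item -> rater group -> list of 1-to-5 scores (hypothetical)
    ratings = {
        "creates a shared vision": {
            "peers": [4, 5, 4],
            "direct reports": [3, 4, 2],
            "supervisors": [5, 4],
        },
        "listens to others' perspectives": {
            "peers": [3, 3, 4],
            "direct reports": [2, 3, 3],
            "supervisors": [4, 4],
        },
        # ...38 more items on a full 40-item instrument
    }

    # one mean per item per rater group -- the data points a coach helps interpret
    report = {
        (item, group): round(mean(scores), 2)
        for item, groups in ratings.items()
        for group, scores in groups.items()
    }

    for (item, group), score in sorted(report.items()):
        print(f"{item} / {group}: {score}")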

A coach can help the subject understand the data and formulate a plan for how to improve, Smither says. Executive coaches can be hired from consulting firms, he says, but deans might find someone within the faculty who has the skill set to guide them through the data and help them make any necessary changes. 

The Midwestern dean who underwent an assessment engaged an outside coach familiar with the business school industry. “It was very helpful to get another person’s views on the issues,” he says. “He showed me how my behaviors could lead to others’ perceptions of me.”

The dean also shared the results with his spouse, who had additional insights to offer. “She laughed at some of the comments that had gotten to me,” he says. “She helped me come to view the 360 as a discussion starter, not an answer.”

What the Data Show

According to these experts, feedback reports are most valuable when they’re used strictly on a developmental basis—but even so, they can paint a complex picture. Strengths and weaknesses can be perceived quite differently by different constituencies, leaving a dean to wonder how to respond to conflicting messages. 

For instance, the instrument might ask how well the dean creates a shared vision. If a dean has decided to build the marketing department and the executive board is thoroughly enamored of the plan, board members might give him high marks for his vision. Finance and accounting faculty who are not thrilled with pouring so many resources into the marketing curriculum might rate him significantly lower. 

“In response, he might spend some time with those departments, saying, ‘Here’s the funding, here’s the plan,’ and turning those people into ambassadors,” says Stumpf. “He may never have thought to do that on his own.”

Sometimes a dean might find out she is highly regarded within her own college, but that administrators of other colleges on campus don’t like her. That might make her reflect on her typical behavior. “When she goes to those university-level meetings, is she hanging around afterward and chummying up with dean colleagues?” Stumpf asks. “Maybe she needs to have more conversations with her university peers. Or maybe she just needs to realize that she can’t win that battle. She can formulate different strategies for dealing with the data, but until she has objective feedback, she doesn’t even know what the dynamics of the situation are.”

“Deans should ask themselves, ‘From whom do I want to learn? How can I get as much consolidated information as possible so I can choose how to invest my time and energy?’” —Stephen Stumpf, Villanova University

After his assessment, the Midwestern dean realized he had to work to be more collaborative—and more patient. He says, “Now I share more information and wait for responses, even if it takes days instead of hours. I’ve started pausing after I hear something negative or distasteful, rather than reacting. The pause gives me time to reflect and ask myself what might have led this person to have that perspective.” The new attitude is making a big difference, he thinks. A few faculty members who were “on the outside” have become more agreeable to changes the dean wants to introduce to the business school. “This is much to the benefit of us all,” he says.

While the feedback report sometimes leads to minor changes, occasionally it can help top executives restructure their approach to management on a more fundamental level. “I’m working with someone who never micromanages and has an informal style that everyone likes, but he doesn’t have enough discipline to make sure his people hit deadlines,” says Smither. “He’s a very senior person who’s savvy enough to realize that he’s probably never going to change, so he’s brought in an internal project person who will handle those areas. Smart managers realize they need to create a sense of balance by surrounding themselves with people who have the strengths they don’t possess.”

“I’ve started pausing after I hear something negative or distasteful, rather than reacting. The pause gives me time to reflect and ask myself what might have led this person to have that perspective.” —A Midwestern dean

Influencing Factors

When assessments are used strictly for developmental purposes, and the feedback only goes to the person being rated, everyone can focus on what’s working well and what needs to be improved. Problems might arise when assessments are used for decision-making purposes, say these experts. Individuals might be more selective about whom they choose as raters. Raters might be less honest if they know their comments will be read by the person’s supervisor and have an effect on that person’s job. 

“It’s good to keep in mind that, when people are completing ratings, accuracy is not their goal,” Smither says. “Their goal is to send a message to somebody. The question becomes, what message do I want to send and to whom do I want to send it? If the feedback is only going to one person, I only have one audience. But once the feedback is going to that person’s manager, I have two audiences. I might want to communicate two different things.”

For example, says Smither, individuals asked to rate a boss who is superb in every category but one—say, public speaking—might honestly discuss their boss’s weakness if they think he can use the information to improve his performance. But if they’re afraid their boss’s supervisor will interpret low marks in that category as a reason to withhold a raise or a promotion, they might be less forthcoming. Because the boss will not learn where his weakness lies, he won’t be motivated to work on it.

“We know from many studies that when people are rating others and know their ratings are likely to have important consequences, they tend to be more lenient,” Smither says. 

Stumpf concurs. When assessments will be used for decision-making purposes, he says, “more biases tend to creep into the data.” He acknowledges that old-fashioned ways of assessing a dean, such as soliciting faculty for comments, are also subject to biases, but he believes that with anonymously collected data, it is more difficult to weed out the personal prejudices that might be obvious in a face-to-face interview.

Thus, Stumpf and Smither both recommend that assessments be used primarily as developmental tools. Says Smither, “I don’t think the provost should see the feedback on a dean’s assessment, but I don’t think it’s unreasonable to expect the dean to sit down with the provost or with faculty and say, ‘Here are a couple of things I saw in the feedback. I saw a message that people would like me to do XYZ more often and do ABC a little differently.’ I think it’s important for people to communicate the feedback to others, including their managers, but the managers don’t need to see the actual feedback.” 

Smither thinks it’s equally important that subjects focus on what the data show they do well, so that they don’t just work on their weaknesses, but capitalize on their strengths. “If your only goal is to find out what’s not working and fix it, an assessment can become a bit disheartening,” he says. “You have to presume that most people got where they are because they have a lot of strengths in the first place.”

Responsive Raters 

While Web-based assessment instruments have made it easier than ever to acquire multisource feedback, submitting to an evaluation isn’t always the first thing on a leader’s mind, Stumpf admits. “No one is going to wake up in the morning and ask, ‘Should I get a 360-degree assessment?’” he says.

According to Stumpf, university administrators can increase their chances that deans will participate in an assessment by providing the right motivation. For instance, an off-site retreat attended by all university deans creates a certain peer pressure for everyone to fill out the self-rater part of the instrument. Once deans are back on campus, they can increase the number of responses they get from raters by personally inviting each one to complete the form. 

Typically, says Stumpf, raters are quite willing to participate because they’re eager to share their perceptions of the person in power. Normal response rates are between 70 percent and 80 percent, but that can be bumped up to 90 percent or better if the dean calls or e-mails his raters and asks for their input.

The Midwestern dean received responses from more than half of the faculty and two-thirds of the others he asked to be raters. If he were to do it over again, he says, he would ask more people to participate—and he would make a more direct appeal to those he did invite. “While the number of people who did respond was great, I wonder about the views of those who didn’t, or whom I didn’t ask,” he says.

Not surprisingly, the more areas there are to rate, the fewer responses a subject will get, but a brief questionnaire might not collect enough data to be useful. “The goal is to find a balance,” says Smither. “The question is, how much can we cover in terms of the skills and competencies we want to focus on and still get people to respond?”

“If you’re not committed to analyzing the feedback and having a public conversation about what you’re going to do as a result, then you shouldn’t have the assessment.” —James Smither, La Salle University

Another question is how often to conduct 360-degree assessments. Smither doesn’t favor annual assessments; instead, he suggests having the dean look at his feedback and set three goals that he shares with all of his constituents. These goals then form the basis of a more targeted follow-up survey that’s sent out a year later, asking raters to assess how much progress he’s made on a scale from -5 to +5. “Deans can ask, ‘Have you seen any change, and in what direction? Do you have any further suggestions?’ A follow-up survey only takes a minute or two, but it tells the dean exactly what he needs to know to see if he’s making progress,” says Smither.
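
A rough sketch of how such a follow-up might be tallied appears below. The goals and responses are invented, and the simple averaging is one plausible reading of the -5-to-+5 scale, not a method Smither prescribes:

    # A minimal sketch with invented goals and responses: each rater marks
    # perceived change on each goal from -5 (much worse) to +5 (much better).
    from statistics import mean

    follow_up = {
        "share information and wait for responses": [2, 3, 1, 4, 2],
        "pause before reacting to negative news": [1, 0, 2, 1],
        "build relationships with university peers": [-1, 1, 0, 2],
    }

    for goal, scores in follow_up.items():
        avg = mean(scores)
        trend = "improving" if avg > 0 else "little or no change"
        print(f"{goal}: mean {avg:+.1f} ({trend})")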

Obstacles to Improvement

While 360-degree assessments can lead to positive change, they can have negative consequences as well if they’re not managed correctly, says Smither. Once a dean undergoes the evaluation and receives the feedback, expectations are heightened. If he then makes no changes, the raters who participated may become disgruntled or cynical, and they may privately decide it’s pointless to participate in a future survey. “If you’re not committed to analyzing the feedback and having a public conversation about what you’re going to do as a result, then you shouldn’t have the assessment,” Smither says.

Smither also cautions that it’s unrealistic to expect major change to result from a 360-degree assessment. He has recently published a meta-analysis of research on the topic, which found that, over time, the amount of change people generally achieve is positive but small. He notes that several studies have shown that, if people talk publicly about the feedback they’ve received and the goals they’ve set as a result, they are more likely to improve.

Other studies suggest that people who set clear goals and engage in tangible developmental activity also have a better chance of improving. Conversely, people who are skeptical about organizational change in general are less likely to improve. 

“With all that being said, there’s still an enormous amount of variability that’s not explained,” says Smither. “It would be naïve to think, ‘Everybody who gets 360-degree feedback will get better,’ because that’s not what the data say. But if you set goals and engage in developmental activities and talk to people about your plans, you have a heightened likelihood of succeeding. Motivation is really key.”

It’s not clear that 360-degree assessments will be as effective the world over, Smither adds, although such tools are used internationally. “In some cultures, great deference is given to the manager. In those cultures, it might be awkward to ask subordinates to participate in a 360-degree rating process. Their response might be to give managers perfect scores across the board, in which case the feedback is not going to be informative.”

In addition, different cultures might respond much differently to the part of the process that requires participants to first rate themselves. Self-rating scores tend to be lower in cultures that value modesty and higher in cultures that promote assertiveness. “There are a lot of things that could vary across cultures, and we don’t exactly know how to interpret them,” says Smither. “It’s an open question that could be very important.” 

One Tool Among Many

Web-based multisource feedback instruments have, to a large degree, replaced the in-depth interviews companies once commissioned to get a comprehensive view of a senior manager. To some extent, university leaders could use assessments to replace the similar face-to-face interviews they conduct to determine how well a dean is doing.

“We’ve taken something that might involve a cumulative 80 or 100 hours of effort and reduced it to five or ten hours,” says Stumpf. “At the same time, we’ve eliminated many of the perceptual biases that creep in through interviews.”

Even so, he points out that the assessment is just a tool. “You still have the administrator’s judgment about whether the data are worthwhile or biased,” he says. “It’s not like, if someone gets a 4.5 on a five-point scale, he’ll get promoted. An assessment becomes a quantitative measure of how a particular group feels about a person. The administrator still uses other data that help create a global judgment.”

As with any tool, how an assessment report is used will determine how effective it is. But as business schools pursue continuous improvement to stay competitive, multisource feedback tools offer deans and administrators one more way to gain an edge.