When I took an introduction to psychology course as an undergraduate, I got a “C.” The course required straight memorization, and that’s not the way I learn. I dropped out of psychology, switched to math, did worse, and switched back to psychology. I have now been in the field of psychology for 30 years. I have been President of the American Psychological Association; director of Yale University’s Center for the Psychology of Abilities, Competencies, and Expertise; and am now dean of the School of Arts and Sciences at Tufts. I have never, in these various jobs, had to memorize a book or a lecture.
In fact, many otherwise gifted students are not memory-based learners, and yet our educational system is set up to recognize and reward individuals who excel at skills such as rote memorization. It also rewards students who are strong in analyzing and critiquing arguments. I call that ability analytical intelligence.
Analytical intelligence is essential for success in the workplace—it enables individuals to evaluate, explain, critique, and analyze the world around them. Yet people also need creative intelligence so they can create, invent, discover, explore, and imagine. They need practical intelligence so they can implement, contextualize, use, and apply. Put another way, there are three types of intelligence—analytical, practical, and creative—and they all work together.
Intelligence and Diversity
Each individual presents a different profile of the three types of intelligence, typically displaying stronger talents in one area than another. The best workers, however, draw on all three. Their creative skills help them generate new ideas, while their analytical skills let them evaluate whether an idea is a good one. Their practical skills help them persuade others that an idea is valuable and figure out ways to implement it. In business, knowing how to create and persuade are at least as important as, and arguably more important than, knowing how to evaluate.
Traditional education and, in particular, traditional tests, focus a spotlight on only one kind of intelligence and one kind of person. This causes the educational system to overlook diverse people who possess diverse kinds of intelligence. Not only does coursework demand that students possess analytical skills such as memorizing and critiquing, but popular admissions policies favor students with those skills. In fact, conventional exams only test for analytical intelligence.
Schools that rely primarily on tests such as the SAT and GMAT to determine who will be admitted to their programs are likely to end up with a very traditional student body—and leave out many outstanding candidates. This is not to say the tests are bad; rather, it is to say they are limited in what they measure. I believe that schools need to consider other measures that will identify students with practical and creative intelligence as well. These additional measures would be designed to supplement conventional tests, not to replace them.
In collaboration with other researchers and institutions, I have devised tests that will assess all three types of intelligence, providing a more comprehensive picture of what students have to offer. One is designed specifically as a college admissions test, the other as a business school admissions test. Using such tools, schools will be able to admit a diverse and potentially highly successful group of students into their programs.
A New Kind of Test
The first test, the Rainbow Project Assessment, was developed in collaboration with the College Board and gives scores for analytical, creative, and practical thinking. It has been administered to roughly a thousand high school and college students of widely varying skill levels, socioeconomic backgrounds, and scholastic attainments.
In tests that we’ve administered so far, there has been an excellent correlation between students’ scores on the test and their performance in school. Adding our measures of creative and practical skills to the SAT roughly doubled the accuracy of predictions of freshman-year performance. It also very substantially reduced differences in scores between ethnic groups.
I developed the business school admissions test under contract with the University of Michigan Business School (now the Ross Business School) in collaboration with Jennifer Hedlund, Jeanne Wilt, Kristina Nebel, and Susan Ashford.
Known as the University of Michigan Business School Test, it measures practical skills in handling specific situations. It presents students with two types of questions: Case Scenario Problems (CSPs) and Situational Judgment Problems (SJPs).
The former are rather lengthy case scenarios of problems managers might encounter in business. In these problems, the participant takes on various roles—human-resource manager, management consultant, director of research and development, financial consultant, project manager—in different firms. Students have to solve the complex problems using available memos, e-mails, case records, and other material. The SJPs are shorter and draw on less information, but they also require students to use given information to devise appropriate solutions.
The Business School Assessment, added to the GMAT, substantially increased the predictive power of the testing. In addition, our business school measure predicted grades on an independent project at the University of Michigan Business School, whereas the GMAT did not. Moreover, it was a better predictor of positive involvement in extracurricular activities.
We have not worked with business schools other than the University of Michigan, but we have developed an admissions test for Choate-Rosemary Hall, a private high school. The results were rather spectacular, as our test greatly enhanced predictive validity over the Secondary School Admission Test (SSAT) taken alone.
One major advantage of exams that test for multiple aspects of intelligence is that they significantly reduce the ethnic-group differences displayed on tests such as the SAT, ACT, GMAT, and similar assessment tools. People growing up in different kinds of environments face different kinds of challenges. Those from white middle- to upper-middle-class backgrounds generally have the luxury of developing higher levels of abstract analytical skills. These are important skills for school and work.
Children who grow up under more challenging circumstances—for example, in poor homes or in homes where English is not spoken—must develop creative and practical skills in order to survive. An extreme example is street children, who are found in much of Latin America and some of Africa. These children literally die if they do not acquire practical and creative skills at a sufficient level.
An assessment method that measures the full range of intelligence better reflects abilities that have been developed through different forms of socialization. In our research on five continents, and even within the United States, we have found very different patterns of abilities across countries and cultures. On the Rainbow Assessment, for example, American Indians scored lower than whites on analytical measures, but higher on some creative measures, such as oral storytelling.
Obstacles to Testing
Tests that measure broad abilities have many benefits, yet they have not yet been widely embraced. In fact, many institutions see potential drawbacks. One is that the tests require additional testing time, since they are designed to supplement existing tests. The Rainbow Assessment and the University of Michigan Business School Test require about the same amount of time as the SAT and the GMAT, respectively.
Another potential drawback may be that new tests require new systems of evaluation. Some of the items on these supplemental tests can be objectively graded, and some are graded by trained raters. Others are graded by distance measures, meaning the student’s answers are compared to expert or prototypical responses. Obviously, the graders must be trained, but that is also true for those grading the essay portion of the new SAT.
Some admissions experts might protest that they don’t have the resources to administer longer tests that require more personnel to evaluate. So we must ask ourselves: Do we want to get the very best students? Do we want to offer broader opportunities to students who merit them and who are not now being given such opportunities? If we do, then perhaps we need to spend a little more time and money. If three additional hours of testing time can ensure greater accuracy and equity in the admissions process, the time cost seems insignificant when compared to the thousands of hours a student will spend attending college or business school.
Concern about time and money is not the only obstacle preventing universities from implementing new assessments such as ours. Inevitably, it is difficult to bring about any change to an entrenched system that not only seems to work, but also provides a steady income flow to the organizations that embrace it. In addition, schools may resist implementing broader tests for other reasons:
• Unfamiliarity. Some people have a fear of innovation in general. Others distrust the new tests because they have yet to prove themselves fully.
• Pseudo-quantitative precision. When people see numbers coming from a test, they tend to think the numbers are highly valid, whether or not they are.
• Culpability. People in admissions may fear they will be blamed if they admit someone who did poorly on a conventional test and that person then does not succeed.
• Similarity. Currently, many people who are promoted to successively higher positions in the school pyramid are those who excel in analytical skills, as opposed to creative or practical ones. Conventional tests control admission not only to elite schools, but also to the job opportunities that are open to graduates of these schools. People who succeed under any system tend to value the existing system because it got them where they are. Additionally, people who are not creative often do not themselves particularly value creativity.
• Publication. Scores on conventional tests are widely published and instrumental in determining business school rankings. Institutions have become obsessive about keeping their conventional test scores high so as to raise their ratings.
• Expectancy effects. Whenever a specific trait is valued, it tends to create self-fulfilling prophecies. Individuals who excel at that trait are expected to succeed, and often they do. If they don’t excel in that trait, they are expected not to succeed, and often they don’t. I hope times change, but we are not currently living through the most progressive era in U.S. history—quite the contrary.
The Benefits of Testing
I emphasize that the tests we have devised are not intended to be used by themselves. They are supplements to, rather than replacements for, conventional tests. It is important to measure conventional analytical skills in addition to the creative and practical ones.
Although some schools have stopped requiring SAT tests as part of the admissions process, I am not an advocate of eliminating tests altogether. Tests were created to solve a problem. Before they were devised to aid in the process, admissions depended very heavily on social status, parental wealth, whether the student had attended a prestigious independent school, and so forth. Other current measures are equally ambiguous. Letters of recommendation are often inflated and not always truthful. Grades can mean very different things at different schools, as can involvement in extracurricular activities. Without tests, we might end up falling back on more traditional criteria or depend too heavily on unreliable measures.
We live in times in which there is great emphasis on how much students know rather than on whether they can use what they know in a reflective and constructive way. Current educational policies risk developing people who know a lot but do not think critically, wisely, or well with the knowledge they have. Over the years, U.S. business has seen more than its share of such people in power—at Enron, WorldCom, Tyco, Arthur Andersen, and Adelphia, to name a few—not to mention in government. To break this cycle, and balance what students know with what they can do, we should not give fewer tests. Instead, we should devise better, more comprehensive ones that enable everyone to show their patterns of strengths.
Is it possible to look at scores from our new tests or others like them and predict which students will turn into Dennis Kozlowski or Bernard Ebbers? Maybe not, and we have not followed students into the working world to determine how well their success has been predicted by the admissions tests that I have helped develop. However, we have done a number of studies of correlations between our measures of practical intelligence and measures of job performance in careers such as business manager, army officer, professor, and elementary school teacher.
Our tests for military leadership, commissioned by the U.S. Army Research Institute, successfully predicted 360-degree ratings of leadership. For the most part, the majority of our tests predict real-world success at about the same level as conventional intelligence tests, but show modest correlations with such tests. Thus, the best prediction is obtained when institutions combine conventional tests with ours, rather than using just one or the other.
Over the years, I have had scores of requests from companies that wanted to measure employees with our tests. Most often they wanted to use the Tacit Knowledge Inventory for Managers and the Tacit Knowledge Inventory for Sales, both of which were developed with Richard K. Wagner. The original work on the sales test was commissioned by an organization that did telemarketing. We have made the tests available, but we have not tracked how companies used them. Nonetheless, theories of broad abilities clearly have value in the corporate world as well as in the admissions office.
By understanding how creative, practical, and analytical intelligence work, corporations can hire the most qualified candidates and schools can admit students with the greatest potential. Nonetheless, the current college admissions system is well established, and it will only change if people want it to change. If people are willing to settle for incomplete tests, we will continue to make judgments based on incomplete tests. We will lose the chance to maximize academic excellence and diversity simultaneously.
While existing exams do a reasonably good job of testing for analytical intelligence, I believe the system has to change. I am doing what I can constructively to change the system—not to do away with what we have, but rather, to expand it. My hope is that, in the future, college and university admissions tests will more comprehensively assess the full range of skills that are important for success, both in school and in life.
For More Information
About the Rainbow Project: Articles describing this test and the data have been published in The Educational Psychologist, in Change Magazine, and in Choosing Students, a book edited by Wayne J. Camara and Ernest W. Kimmel.
About the University of Michigan Business School Test: An article describing this test and the data can be found in The Educational Psychologist; a much more detailed article is scheduled to appear in the journal Learning and Individual Differences.
About the correlations between conventional intelligence tests and multiple-intelligence tests: Data can be found in the book Practical Intelligence in Everyday Life, by Robert Sternberg and collaborators, as well as in many published refereed articles.
About Tacit Knowledge tests and their correlations with leadership ratings: These are described in an article published in the journal Leadership Quarterly. The study and the results represent a collaboration with Jennifer Hedlund, George B. Forsythe, Joseph Horvath, Wendy Williams, and Scott Snook.
Robert J. Sternberg is dean of the School of Arts and Sciences at Tufts University in Medford, Massachusetts. He is also professor of psychology and Director of the PACE (Psychology of Abilities, Competencies and Expertise) Center, which will come to Tufts in 2006.