THE SCORES FOR STANDARDIZED
college admissions tests don’t accurately predict students’ grade point averages, and errors in those predictions are systematically related to students’ gender and ethnicity. As a result, hundreds of thousands of students may be unfairly admitted to, rejected from, or offered scholarships by colleges. That’s the contention of new research produced by Herman Aguinis, the John F. Mee Chair of Management and a professor of organizational behavior and human resources at Indiana University’s Kelley School of Business in Bloomington; Steven Culpepper, assistant professor in the department of statistics at the University of Illinois at Urbana-Champaign; and Charles Pierce, the Great Oaks Foundation Professor of Human Resource Management at the Fogelman College of Business and Economics at the University of Memphis in Tennessee.
The three wrote a paper on the same topic in 2010, in which they suggested that standardized tests had the potential to be biased based on gender and ethnicity and that the methods used to detect such bias were deficient. The College Board—which administers the SAT—took issue with their conclusions and published a rebuttal in 2013. That rebuttal paper, written by Krista Mattern and Brian F. Patterson, studied the relationship between SAT scores and first-year grade point averages and found that the relationship between the two was generally consistent across groups. Both studies were published in the Journal of Applied Psychology, which required Mattern and Patterson to make the College Board’s data available.
Aguinis, Culpepper, and Pierce based their new paper on that data, which provides information about roughly 475,000 test-takers. These include 257,336 female and 220,433 male students across 339 samples, as well as 29,734 African American and 304,372 white students across 264 samples. All samples were collected from 176 colleges and universities from 2006 to 2008. After analyzing the College Board data, Aguinis, Culpepper, and Pierce found the same average results across colleges as the College Board scientists. However, they also found considerable variation when the data for each college were examined individually. They argue that admissions policies, grading approaches, and academic support resources differ greatly between institutions and even within them. They believe this raises questions about how useful and fair standardized tests are as predictors of student success across gender and ethnic groups.
Aguinis and his colleagues offer a closer look at the numbers based on the approximately 1,368,500 SAT takers who enroll in college annually. Their results suggest that 221,697 students attend institutions where there is gender-based prediction bias on the math component of the SAT. In addition, hundreds of thousands of students are likely to attend institutions where there is prediction bias based on a student’s ethnicity (for example, black students compared with white students). That’s true for approximately 265,489 students taking the critical reading component of the SAT, 183,379 taking the math component, and 221,697 taking the writing component. “Tests do not work the same way across colleges and universities, and we have found that hundreds of thousands of people’s predicted GPAs based on SAT scores were under- or overestimated,” Aguinis says.
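The kind of prediction bias at issue can be sketched with a small simulation. The following is a minimal illustration on synthetic data—not the authors’ analysis and not the College Board dataset—of the Cleary-style moderated regression commonly used in this literature: regress GPA on test score, group membership, and their interaction, and check whether the intercept or slope differs across groups. All numbers below (sample size, score distribution, effect sizes) are invented for illustration.

```python
import numpy as np

# Synthetic, illustrative data only (NOT the College Board dataset).
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # 0 = reference group, 1 = focal group
sat = rng.normal(500, 100, n)            # hypothetical section scores
# Generate GPA with a group-specific intercept, so a single regression
# line fit to everyone systematically over-predicts the focal group.
gpa = 1.0 + 0.003 * sat - 0.3 * group + rng.normal(0.0, 0.3, n)

# Cleary-style moderated regression via ordinary least squares:
# GPA ~ score + group + score-by-group interaction.
X = np.column_stack([np.ones(n), sat, group, sat * group])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
intercept_gap = beta[2]                  # intercept difference (focal - reference)
slope_gap = beta[3]                      # slope difference (focal - reference)

# A common line fit to everyone, ignoring group, misestimates predicted GPA:
common, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), sat]), gpa, rcond=None)
residuals = gpa - (common[0] + common[1] * sat)
focal_mean_residual = residuals[group == 1].mean()  # negative => over-prediction
```

In this toy setup the fitted `intercept_gap` recovers the built-in group difference while `slope_gap` stays near zero, and the common regression line over-predicts GPA for the focal group—the same pattern of under- and overestimation the researchers describe, except that in their data its size and direction vary from college to college.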
While the paper focuses on data for the SAT, Aguinis says its findings are applicable to other exams, such as the GRE, GMAT, civil service, and pre-employment tests that also measure intelligence and quantitative skills.
“Differential Prediction Generalization in College Admissions Testing” appeared January 21, 2016, in the Journal of Educational Psychology. Purchase it at psycnet.apa.org/psycarticles/2016-03221-001. Read the original 2010 paper by Aguinis, Culpepper, and Pierce at www.apa.org/pubs/journals/releases/apl-95-4-648.pdf. The College Board’s rebuttal of the original research also can be purchased at psycnet.apa.org/index.cfm?fa=search.displayrecord&uid=2012-29189-001.