Testing, 1 ... 2 ...

By using two approaches to testing—a standardized test and a homegrown core concept test—business schools can draw a more complete picture of student learning.

When business schools develop their learning assessment plans, formal testing of students usually plays a significant role. But they face a common dilemma: Should they rely on a standardized test such as the Major Field Test in Business (MFT) developed by the Educational Testing Service, or on a core concept test developed by their own faculty? Standardized tests offer ease of use and greater external validity, but they provide limited benchmarking and customization options. Homegrown tests, on the other hand, are more flexible and more in tune with each program’s unique curriculum, but they require a significant investment of money and faculty time.

At the School of Business and Economics at Indiana University Northwest (IUN), our faculty wanted a cost-effective assessment plan that would also provide the most valuable insights into student learning. In 2007, our assessment committee began analyzing the relative merits of the MFT and the school’s own core concept test. Our faculty discovered that they could achieve the best results by using both methods for assessment, and we believe other schools can learn from our experience.

The MFT

Pros: Easy to use, measures overall student performance
Cons: Doesn’t allow comparisons across semesters, doesn’t identify individual areas of weakness

The MFT is designed to assess the knowledge of business students in core business areas such as accounting, economics, finance, international business, and business law. But while it helps faculty identify functional areas where students need more curricular development, the MFT also has three significant limitations.

First, the MFT is “normed”: it compares students’ scores only to those of other test takers in that semester. For that reason, we cannot compare how well students performed this semester with how well they performed last semester. Second, the MFT reports scores as percentiles, not as the percentage of questions answered correctly. Students in the 90th percentile, for example, have not necessarily answered 90 percent of the questions correctly; they have only performed better than 90 percent of that round’s test takers. Finally, the MFT does not report each student’s score in each functional area, so it is impossible to analyze individual students’ strengths and weaknesses in those areas.
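To make these first two limitations concrete, consider the short Python sketch below. The scores are entirely hypothetical and the calculation is a generic percentile rank, not ETS’s actual scoring method; it simply shows why the same raw score can earn very different percentiles against different cohorts, and thus why percentile scores from different semesters cannot be compared.

    # Hypothetical illustration: percentile rank vs. percent correct.
    # Scores are invented; this is not ETS's scoring method.

    def percent_correct(raw, total=120):
        """Share of the test's questions answered correctly."""
        return 100 * raw / total

    def percentile_rank(raw, cohort):
        """Share of a cohort's test takers this raw score beats."""
        return 100 * sum(1 for s in cohort if s < raw) / len(cohort)

    fall_cohort = [60, 64, 69, 72, 77, 81, 85, 88, 90, 95]
    spring_cohort = [70, 74, 79, 82, 87, 89, 91, 93, 96, 99]

    raw = 88
    print(f"Percent correct: {percent_correct(raw):.1f}%")               # 73.3% either semester
    print(f"Fall percentile: {percentile_rank(raw, fall_cohort):.0f}")       # 70
    print(f"Spring percentile: {percentile_rank(raw, spring_cohort):.0f}")   # 50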

The Homegrown Test

Pros: Makes comparisons possible, tests performance in any core concept
Cons: Expensive to implement, time-consuming

A homegrown core concept test can overcome many of the limitations of the MFT. For example, the test that faculty designed for IUN is not normed, and it provides scores for each student in each functional area. As a result, longitudinal comparisons of student scores become meaningful. That is, if our faculty implement an intervention strategy in a functional area in a given semester (say, they add a new simulation game to the core course in finance), they can measure its effectiveness by comparing scores before and after the intervention.
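As a rough sketch of what such a before-and-after comparison might look like, the Python snippet below compares hypothetical finance scores from the semesters before and after an intervention and computes a standard effect size. The data and the choice of Cohen’s d are illustrative assumptions, not IUN’s actual analysis.

    # Hypothetical before/after comparison for one functional area.
    # Scores and method (Cohen's d) are illustrative, not IUN's analysis.
    from statistics import mean, stdev

    before = [58, 62, 65, 70, 72, 74, 75, 78, 80, 83]  # pre-intervention finance scores
    after = [64, 68, 71, 73, 77, 79, 81, 84, 86, 90]   # post-intervention finance scores

    def cohens_d(a, b):
        """Standardized mean difference between two score sets."""
        pooled = (((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
                  / (len(a) + len(b) - 2)) ** 0.5
        return (mean(b) - mean(a)) / pooled

    print(f"Mean before: {mean(before):.1f}, after: {mean(after):.1f}")
    print(f"Effect size (Cohen's d): {cohens_d(before, after):.2f}")

Because the test is not normed, a rise in the mean reflects a change in what students know, not a change in who else happened to take the test.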

Also, IUN’s test is flexible enough to cover any core area. For instance, the MFT does not test students in ethics or operations management, two core areas we identify in our list of learning goals, but our own test does.

Despite the advantages of a homegrown test, schools may encounter faculty resistance to implementing one because it requires such a huge time commitment. Our faculty spend considerable time writing, editing, refining, piloting, and revising test questions. To win faculty over, administrators must clearly communicate how important the test is to the assurance of learning process. It helps to identify individuals who will be responsible for handling and maintaining the data, and even to appoint an “assessment captain” to champion the assessment process.

It also helps if schools reward faculty who contribute to the design and implementation of the test with additional professional development opportunities. Providing a wider range of these opportunities might be difficult for schools with constrained finances, but even schools with tight budgets may find that the benefits a homegrown test brings to the teaching and learning process outweigh its costs.


Adopting the Hybrid Approach

IUN’s faculty have found that the best possible strategy, if time and budget allow, is to administer both tests. By keeping the following suggestions in mind, schools can capitalize on the merits of the MFT and a homegrown model, while mitigating their shortcomings:

Don’t overtest. At IUN, we require all students to take one of the two tests, but no single student is asked to take both tests.

Secure administrative support for a homegrown assessment tool. At IUN, after our assessment committee introduced its proposal for the new core concept test, the entire faculty approved it. At that point, our dean sent a clear signal to faculty members about its importance to the assurance of learning program and urged all faculty members to help develop our test question bank.

Secure faculty contributions from all functional areas. Our faculty helped us develop a test bank of more than 700 questions for the BS program and more than 600 questions for the MBA program. In many instances, faculty members teaching the same course jointly developed a test bank of questions for that course.

Faculty members from different functional areas such as marketing, economics, and finance collaborated to prepare the test banks for multidisciplinary areas such as international business. Today, our core concept test comprises 120 questions—the same number as the MFT.

Motivate students to do their best. We have built these tests into our capstone course, the last course our students take. It’s a time when students suffer from “test fatigue,” and they often don’t see the importance of one more test that isn’t tied to any single course. To address this problem, we used multiple strategies:

  • Starting in summer 2008, our dean began to talk to undergraduates about the importance of the test to the school and the students.
  • At the start of the fall 2008 semester, the instructor of the capstone course announced the spring 2009 test date and informed students regularly about the test as the date approached.
  • In spring 2009, the instructor provided sample questions to students to familiarize them with the test’s focus and format.
  • We increased the weight of the test in each student’s total grade in the capstone course, from 0 percent in fall 2007 and summer 2008, to 10 percent in spring 2009, and finally to 20 percent in summer 2009.
  • The instructor of our capstone course meets with every student who takes our core concept test to provide feedback on the test results. That feedback can serve as a strong motivator for students, who appreciate this attention to their learning process and development.

After we took these steps, our undergraduates’ performance on the core concept test improved over its last three administrations, as shown in the accompanying table. The school plans to continue these interventions for future tests as well. Interestingly, our faculty’s core concept test remained unchanged over those three administrations, which suggests that the improvements in scores are principally due to our interventions.

Continuous Improvement

The School of Business and Economics also made changes to its curriculum in response to students’ performance on the tests. Our faculty discovered that students traditionally do better in core areas such as marketing and management than in areas such as accounting, finance, international business, and information systems. The school has addressed this issue in several ways:

  • The addition of an elective international course to the program. Faculty also are considering adding a required international business course to our core curriculum.
  • The addition of preparatory classes and tutoring in accounting and finance.
  • A requirement for all students to take preparatory classes in information systems from the computer sciences department.
  • A test of all students’ IT competence in the 300-level MIS course, which encourages students to maximize their learning in this area.

Our faculty will also continually evaluate the validity and reliability of our testing procedure. In the coming months, they will examine test scores over multiple testing periods and analyze student responses to check the validity and reliability of each question. They will compare scores on the MFT and the core concept test and re-evaluate the effectiveness of our intervention strategies.
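As an illustration of what such item-level checks can involve, the Python sketch below computes two standard statistics from an invented set of student responses: Cronbach’s alpha for the test’s overall internal consistency, and an item-total discrimination index that flags questions that may need revision. The data and the choice of statistics are assumptions; the article does not specify which measures IUN’s faculty use.

    # Hypothetical item analysis on invented response data (1 = correct).
    # The statistics shown are standard choices, not necessarily IUN's.
    from statistics import pvariance

    responses = [  # rows = students, columns = test questions
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 1],
        [0, 1, 0, 0, 0],
    ]

    def cronbach_alpha(rows):
        """Internal-consistency reliability of the whole test."""
        k = len(rows[0])
        item_vars = [pvariance([r[j] for r in rows]) for j in range(k)]
        total_var = pvariance([sum(r) for r in rows])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    def discrimination(rows, j):
        """Correlation between item j and the rest of the test."""
        item = [r[j] for r in rows]
        rest = [sum(r) - r[j] for r in rows]
        mi, mr = sum(item) / len(item), sum(rest) / len(rest)
        cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest)) / len(rows)
        return cov / ((pvariance(item) * pvariance(rest)) ** 0.5)

    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
    for j in range(len(responses[0])):
        print(f"Item {j + 1} discrimination: {discrimination(responses, j):.2f}")

Items with low or negative discrimination are the usual candidates for rewriting or replacement.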

The preliminary data our faculty have collected so far indicate that either the MFT or a test designed in-house, used alone, provides an incomplete picture of student performance. A hybrid approach that balances both options amplifies the strengths of each, mitigates their limitations, and maximizes the effectiveness of the overall assessment plan.

Subir Bandyopadhyay is professor of marketing and chairman of the assurance of learning initiative, and Anna Rominger is dean, at Indiana University Northwest’s School of Business & Economics in Gary.