Information Brief 2002-3

Using Posttesting to Show the Effectiveness of Developmental/Remedial College Courses

Some entering college students’ academic knowledge and skills are not yet strong enough for them to be successful in standard first-year college courses (e.g., college algebra). Consequently, these students might be advised or required to enroll in lower-level courses (e.g., elementary algebra). These lower-level courses are often called “developmental” or “remedial,” and typically do not carry credit toward satisfying degree requirements. In this brief, we use the term “remedial” to refer to such courses.

A significant percentage of college students are underprepared for college-level instruction. For example, 41% of entering community college students and 29% of all entering college students are underprepared in at least one basic skill area.1 Consequently, remedial instruction is offered by about 99% of public two-year institutions and 85% of public four-year institutions.2

Students and the institutions they attend have a joint interest in the effectiveness of remedial courses. The courses should provide students with the academic knowledge and skills they need to perform successfully in standard courses. If the remedial courses are not effective, then students’ tuition, time, and effort will have been poorly invested. From an institution’s perspective, remedial courses must enhance students’ academic strengths; otherwise, its resources (faculty, classroom space, etc.) will have been used ineffectively.

This brief illustrates one method for evaluating the effectiveness of remedial courses in college. The brief is intended for postsecondary faculty, staff, and administrators who are responsible for remedial education.

Posttesting

One way to determine whether a remedial course has been effective is to test students at the end of the course with an alternate form of the test used to place them in the course (“posttesting”). If the placement test validly measures the knowledge and skills that students need to be successful in a standard course, and if the remedial course increases students’ knowledge and skills, then their posttest scores should exceed their pretest scores.

There are other important methods besides posttesting to assess the effectiveness of remedial courses. For example, an institution could follow up students after they finish the remedial course to determine whether they succeed in the standard course or whether they persist in college.

Example

In this brief, we illustrate posttesting with data from 9 two-year and 10 four-year institutions in a state postsecondary education system. In this system, students whose ACT Assessment® or SAT® scores meet or exceed certain cutoffs are placed directly into standard English and mathematics courses. Students who score below the ACT or SAT cutoffs are pretested with ACT’s COMPASS® test.3 Students who score at or above the COMPASS cutoff are placed into a standard course and are not posttested. Students who score below the COMPASS cutoff are placed into a remedial course. Students who finish the remedial course must take COMPASS as a posttest and must meet or exceed the cutoff before they are permitted to enroll in the standard course.
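
The placement rules in this system amount to a short decision sequence. The sketch below (in Python) is only an illustration of that logic; the cutoff values and function name are hypothetical, not the system’s actual cutoffs or software.

    # Hypothetical sketch of the placement rules described above.
    # Cutoff values are illustrative placeholders, not actual cutoffs.
    ACT_CUTOFF = 19          # assumed ACT cutoff for direct placement
    COMPASS_CUTOFF = 60      # assumed COMPASS cutoff (scale: 1-99)

    def place_student(act_score, compass_pretest=None):
        """Return the placement outcome for one student."""
        if act_score >= ACT_CUTOFF:
            return "standard course"         # no COMPASS pretest required
        if compass_pretest is None:
            return "pretest with COMPASS"    # below ACT cutoff: pretest first
        if compass_pretest >= COMPASS_CUTOFF:
            return "standard course"         # placed directly; not posttested
        return "remedial course"             # must later pass the posttest

For example, place_student(16, compass_pretest=52) returns "remedial course"; that student must then meet or exceed the COMPASS cutoff on the posttest before enrolling in the standard course.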

Figure 1 illustrates three groups of students in the state system according to their pretest/posttest COMPASS scores and whether they finished the remedial course. Of those students who scored below the COMPASS pretest cutoff score (N0), a subset (N1) finished the remedial course. A subset of those students (N2) not only finished the remedial course, but also scored at or above the posttest cutoff score.

Figure 1. Groups of students (N0, N1, N2) defined by COMPASS pretest/posttest scores and remedial course completion.

Indicators of Remedial Course Effectiveness

The numbers of students in the groups shown in Figure 1 can be used to compute indicators of remedial course effectiveness. Table 1 shows several effectiveness indicators, by remedial course and the COMPASS test used to place students in the course.

Table 1

                                                Remedial course (COMPASS test)
                                        Mathematics     Reading             Writing
Indicator                               (Algebra)       (Reading Skills)    (Writing Skills)
1: Percentage of students who
   finished the remedial course         21%             42%                 30%
2: Percentage passing the posttest,
   among remedial course finishers      93% [17%]*      73% [9%]*           91% [5%]*
3: Average score gain of
   remedial course finishers            30 [3]*         17 [1]*             41 [1]*

*Estimate assuming that no real growth in students’ academic skills
occurred from pretest to posttest (see text below).
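
As a concrete illustration of how the group counts in Figure 1 yield these indicators, the sketch below computes all three from hypothetical student records. The record layout and the cutoff value are assumptions made for this example, not the state system’s actual data format.

    # Hypothetical sketch: computing the three effectiveness indicators
    # from the groups in Figure 1. The data and cutoff are invented.
    CUTOFF = 60  # assumed posttest cutoff on the 1-99 COMPASS scale

    # One record per student placed into the remedial course (group N0):
    # (finished_course, pretest_score, posttest_score_or_None)
    records = [
        (True, 45, 72),     # finished and passed the posttest
        (True, 50, 58),     # finished but scored below the cutoff
        (False, 40, None),  # did not finish; never posttested
    ]

    n0 = len(records)                                 # scored below pretest cutoff
    finishers = [r for r in records if r[0]]          # group N1
    n1 = len(finishers)
    n2 = sum(1 for r in finishers if r[2] >= CUTOFF)  # group N2

    indicator_1 = 100 * n1 / n0                       # % who finished the course
    indicator_2 = 100 * n2 / n1                       # % of finishers passing
    indicator_3 = sum(r[2] - r[1] for r in finishers) / n1  # average score gain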

The percentage of students who finished the remedial course (Indicator 1) ranged from 21% in remedial mathematics courses to 42% in remedial reading courses. Although these percentages seem small, similar results have been reported in other studies. For example, the percentage of students who finish remedial courses has been found to range from 14% in remedial mathematics4 to 75% in courses of all types.5 At community colleges, 43% of remedial education students successfully complete remedial courses.6

Indicator 2 shows that among all students who finished the remedial course, the percentage who passed the posttest (i.e., scored at or above the posttest cutoff) ranged from 73% (reading) to 93% (mathematics). Indicator 3 shows that average differences between COMPASS posttest and pretest scores of remedial course finishers ranged from 17 points (Reading Skills) to 41 points (Writing Skills). These average differences represent score gains ranging from 1.2 to 2.0 COMPASS standard deviations.
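
The conversion to standard deviation units is simply the average raw gain divided by the COMPASS standard deviation for that test. The brief does not report those standard deviations, so the short sketch below back-calculates approximate values from the reported figures; these implied values are our rough inferences, not reported statistics.

    # Back-calculating approximate COMPASS standard deviations (our
    # inference; the SDs themselves are not reported in this brief).
    # gain_in_sd_units = average_raw_gain / standard_deviation
    for test, raw_gain, sd_gain in [("Reading Skills", 17, 1.2),
                                    ("Writing Skills", 41, 2.0)]:
        implied_sd = raw_gain / sd_gain
        print(f"{test}: implied SD of about {implied_sd:.1f} points")
    # -> Reading Skills: implied SD of about 14.2 points
    # -> Writing Skills: implied SD of about 20.5 points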

A potential problem with posttesting concerns “measurement error,” the inherent inaccuracy of scores on any test, whether standardized or classroom-developed. For example, a distraction during testing might lower some students’ scores, while lucky guessing might raise other students’ scores. We therefore developed estimates of the values of Indicators 2 and 3 that would be expected from measurement error alone.7 These estimates, shown in square brackets in Table 1, assume that no real growth in students’ academic skills had occurred.
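
One way to see how measurement error alone can produce apparent gains is a small simulation: hold each student’s true skill fixed, add independent error to the pretest and posttest, and keep only students whose observed pretest falls below the cutoff. Because those students’ pretest errors are biased downward, their posttest scores drift back upward even with no learning. The sketch below is a simplified illustration under assumed parameters; it is not the estimation method of the ACT report cited in Note 7.

    # Simplified simulation: apparent passing rate and score gain when
    # there is NO real growth, only measurement error. The cutoff, the
    # error model (normal), and all parameters are assumptions.
    import random

    random.seed(0)
    CUTOFF = 60        # assumed posttest cutoff
    SEM = 5.0          # assumed standard error of measurement
    TRUE_SCORE = 55.0  # fixed true skill level (no pre-to-post growth)

    results = []
    for _ in range(100_000):
        pre = TRUE_SCORE + random.gauss(0, SEM)     # observed pretest
        if pre >= CUTOFF:
            continue                                # placed into standard course
        post = TRUE_SCORE + random.gauss(0, SEM)    # observed posttest
        results.append((pre, post))

    pct_pass = 100 * sum(post >= CUTOFF for _, post in results) / len(results)
    avg_gain = sum(post - pre for pre, post in results) / len(results)
    print(f"passing the posttest by chance alone: {pct_pass:.0f}%")  # about 16%
    print(f"apparent average score gain: {avg_gain:.1f} points")     # about 1.4

Under these assumed parameters, the chance-alone values come out in the same general range as the bracketed estimates in Table 1.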

Indicators 2 and 3 are much larger than the estimates reflecting measurement error. We can conclude, therefore, that the indicators were influenced much more by real improvement in students’ academic skills than by error in measuring those skills.

Discussion

In this study, we did not have data from a control group (i.e., a group of students who took the posttest but had not taken remedial courses). Thus, we cannot conclusively attribute the increases in students’ academic skills to the remedial courses. Of course, few institutions would be willing to withhold instruction from students who need it solely for the purpose of research.

There was a substantial dropout rate in the remedial courses, especially in mathematics. It is clear, however, that students who finished the remedial courses offered at these institutions increased their academic skills, as measured by COMPASS. In addition, students who finished remedial mathematics and writing courses were likely to score at or above the posttest cutoffs and, therefore, were likely to enroll in the corresponding standard courses. For these students, remedial course work did appear to be effective.


1. McCabe, R. H. (2000). No one to waste. Washington, DC: Community College Press.

2. National Center for Education Statistics (1999). Postsecondary institutions in the United States: 1997-98. (NCES 1999-174). Washington, DC: U.S. Government Printing Office.

3. COMPASS is a computer-adaptive system that measures students’ academic knowledge and skills in mathematics, reading, and writing. Scores are reported on a scale that ranges from 1 to 99.

4. H. R. Boylan, personal communication, July 25, 1999.

5. National Center for Education Statistics (1996). Remedial education and higher education institutions in fall 1995. (NCES 97-584). Washington, DC: U.S. Government Printing Office.

6. See Note 1 above.

7. For further information, see “Posttesting Students to Assess the Effectiveness of Remedial Instruction in College” (ACT Research Report No. 2000-7).

The results described in this brief are similar to those of an earlier study, described in ACT Research Report No. 2000-7. The original data were augmented with new data, considerably increasing the number of student records. The results in this brief are based on the augmented data.