Viewpoints, Updates, and More from ACT Leaders  

Insights and news to help shape the public agenda for education and workplace success

Members of the ACT Executive Leadership Team, like all ACT team members, are thinkers, creators, and innovators. They come from a broad range of backgrounds and disciplines, they offer a wide spectrum of knowledge, and they provide thought leadership that influences policy and practice in education and in the workforce. Here in Perspectives from ACT, you will find context, background, responses to current topics and issues, updates, and announcements—all to provide you with timely insights to inform your perspective.

What the Research Says About the Effects of Test Prep

June 21, 2017
By Wayne Camara, Horace Mann Research Chair, ACT

There have always been claims that targeted interventions can increase scores on academic achievement tests. Much of the media attention has focused on the extent to which commercial test preparation courses can raise scores on college admissions tests.

This is a highly controversial issue in education because it addresses fundamental questions about test validity and fairness.  If a modest intervention such as a test prep program can result in a large increase in test scores, then what does that say about the validity of scores earned, both by students who received the intervention and by those who did not?

A thoughtful piece by Jim Jump published last month in Inside Higher Ed (5/22) raises some of the same issues and questions about recent claims of large score increases on the SAT based on moderate amounts of “instruction.”

The interest in this topic provides an opportunity to review the research on test preparation in general and to make some connections to similar claims made about the impact of other types of interventions on achievement.

To cut to the chase: The research clearly suggests that short-term test prep activities, while they may be helpful, do not produce large increases in college admission test scores.

There are some fundamental principles about admissions test scores that have remained constant across numerous redesigns, changes in demographics, and rescaling efforts. They include the following:

  • Scores on college admissions tests (as well as most cognitive tests) generally increase with retesting, so any claim about score increases must statistically explain the proportion of change attributed to additional time, practice, and growth apart from any intervention1.
  • John Hattie’s exhaustive synthesis of over 800 meta-analyses related to achievement shows that almost any type of intervention—more homework, less homework, heterogeneous grouping, homogeneous grouping—has some positive effect on student achievement; it is hard to stop learning2. In general, however, smaller and shorter interventions have less impact on student achievement. Web-based instruction has an effect size of .30, which is certainly good, but the average effect size across all interventions is less than .40.
  • Students who participate in commercial coaching programs differ in important ways from other test takers. They are more likely than others to be from high-income families, have private tutors helping them with their coursework, use other methods to prepare for admissions tests (e.g., books, software), apply to more selective colleges, and be highly motivated to improve their scores. Such differences need to be examined and statistically controlled in all studies of the impact of interventions. Claims about the efficacy of instructional interventions and test preparation programs on test scores have been shown to be greatly exaggerated.
  • There have been about 30 published studies of the impact of test preparation on admissions test scores. Results across these studies are remarkably consistent: a typical student in a test prep program can expect a total score gain of 25 to 32 points on the SAT’s 1600-point scale, with comparable results for the ACT and GRE. The reality falls far short of the claims.
  • In 2009, Briggs3 conducted the most recent comprehensive study of test preparation on admissions tests. He found an average coaching boost of 0.6 point on the ACT Math Test, 0.4 point on the ACT English Test, and -0.7 point on the ACT Reading Test4. Similarly, test preparation effects for the SAT were 8 and 15 points on the reading and math sections, respectively. The effects of computer-based instruction, software, tutors, and other similar interventions appear no larger than those reported for test preparation. (A brief sketch after this list shows how gains of this size translate into effect sizes.)
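To put these gains in context, here is a minimal sketch of the arithmetic. The standard deviations used are assumed ballpark values (roughly 200 points for the SAT total score and roughly 5.8 points for an ACT subject score), not figures taken from the studies cited above:

```python
# Rough effect-size arithmetic for reported test-prep gains.
# The standard deviations below are assumed, ballpark values for illustration
# only; they are not taken from the studies cited in this post.

def effect_size(score_gain, scale_sd):
    """Cohen's d-style effect size: score gain divided by the scale's SD."""
    return score_gain / scale_sd

ASSUMED_SAT_TOTAL_SD = 200.0    # assumption: typical SAT total-score SD
ASSUMED_ACT_SUBJECT_SD = 5.8    # assumption: typical ACT subject-score SD

print(effect_size(25, ASSUMED_SAT_TOTAL_SD))     # ~0.13 SD for a 25-point SAT gain
print(effect_size(32, ASSUMED_SAT_TOTAL_SD))     # ~0.16 SD for a 32-point SAT gain
print(effect_size(0.6, ASSUMED_ACT_SUBJECT_SD))  # ~0.10 SD for the ACT math coaching boost
```

Under those assumed values, the reported gains correspond to roughly 0.10 to 0.16 standard deviations, well below the .40 average intervention effect noted above.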

Claims about score increases, and about what those increases may be attributed to, are among the most difficult to make and verify. Factors such as differences in student motivation to improve, regression to the mean, and the fact that students often engage in multiple activities to raise their scores all confound the results. However, research in this area consistently refutes claims of large increases in average or median scores.

There have been many more studies that have attempted to examine the impact of instructional programs on achievement. Such studies are equally difficult to conduct and equally unlikely to show effect sizes larger than the typical growth students experience simply from another year of instruction, coursework, and development. Research-based interventions that are personalized to the learner can improve learning, and increased learning will affect test scores. To support such claims, however, researchers need to publish studies that address the representativeness of the sample, the use of equivalent control groups, the extent and type of participation in the intervention, and many other contextual factors. This is how we advance scientific knowledge in education, as in any other field.

Jim Jump’s previously referenced column identified many questions and possible problems associated with the claims about the efficacy of participation in Khan Academy’s programs on SAT scores. However, few if any of these questions or concerns can be answered, simply because the College Board has not made the research behind the claims available for review—all we have is its press release—and claims can neither be supported nor refuted when there is no methodology to examine. Further speculation about the efficacy of this intervention is not helpful. But there are some additional facts about testing, interventions, and claims of score increases to consider when we read any claims or research on the subject.

First, while test preparation may not lead to large score increases, it can be helpful. Students who are familiar with the assessment, have taken practice tests, understand the instructions, and have engaged in thoughtful review and preparation tend to be less anxious and more successful than those who haven’t. Such preparation is available for free to all students on the ACT website and from other sources.

Second, the importance of scores on tests such as the ACT and SAT continues to be exaggerated. What is ultimately important is performance in college.

We know that some interventions can increase test scores by two-thirds of a standard deviation. The question should be whether there is evidence of a similar increase in college grades (which is the outcome that admissions tests predict). Claims that test preparation can result in large score increases require serious investigation because they threaten to undermine the validity of admissions test scores. Simply put, if an intervention increases test scores without increasing college grades, then there is some bias present in some scores.

It is possible that the scores of students participating in test prep or another intervention are being over-predicted and will not be accompanied by similar increases in college grades. Or it could be that the scores of students who have not engaged in test prep have been under-predicted.

Hardison and Sackett5 demonstrated that a 12-hour intervention could increase performance on an SAT writing prototype while also improving performance on other writing assignments. Although this was a preliminary experimental study of the coachability of a new writing assessment, it showed that instruction could result in better writing both on a test and in course assignments. Studies of this kind highlight the questions that are raised whenever claims of large score increases are reported. When results are too good to be true (and even when they are not), it is always better to verify.

Claims that test preparation, short-term interventions, or new curricular or other innovations can result in large score gains on standardized assessments are tempting to believe. These activities require far less effort than enrolling in more rigorous high school courses or pursuing other endeavors that demand years of work and learning.

If we find an intervention that increases only a test score without a similar effect on actual achievement, then we need to be concerned about the test score. And when we hear claims about score increases that appear to be too good to be true, we need to conduct research based on the same professional standards to which other scientific research adheres. Because if it sounds too good to be true, it very likely is.


1 See the What Works Clearinghouse for Criteria https://ies.ed.gov/ncee/wwc/

2 Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses related to achievement. New York: Routledge.

3 http://www.soe.vt.edu/highered/files/Perspectives_PolicyNews/05-09/Preparation.pdf

4 ACT scores are on a 1-36 scale so these raw numbers can’t be compared to the SAT. These effects represent a full model which controls for differences in socioeconomics, ability, and motivation between a baseline group and a test preparation group. Yes, students in the coached group saw a decrease in ACT reading scores relative to uncoached students.

5 Hardison and Sackett (2009). Use of writing samples on standardized tests: Susceptibility to rule-based coaching and resulting effects on score improvement. Applied Measurement in Education, 21, 227-252.

May 23, 2017
By Marten Roorda, CEO

President Trump’s budget proposal, released this morning, paints a bleak picture for America’s students, families, and workers.

Investments in critically important education and workforce development initiatives would be drastically cut and, in many cases, eliminated. The proposal would, if enacted, reduce the budgets of the U.S. Departments of Education and Labor by 13.5 and 19.8 percent, respectively.

While Congress has the final say, the Trump budget stands in stark contrast to his campaign theme of “Make America Great Again” and to the needs of millions of people who depend on these programs for a fair shot at the American Dream.

The Trump Administration’s proposal lays out an unprecedented roadmap to educational and economic disinvestment:

  • After-school programs serving 1.6 million children—eliminates all $1.2 billion.
  • Teacher training programs and class-size reduction efforts—eliminates all $2.34 billion.
  • The Every Student Succeeds Act’s (ESSA) student support and academic enrichment grants under Title IV, which could be used to support dual enrollment and early college opportunities—eliminates all $400 million.
  • Career Technical Education (CTE), which increases graduation rates and academic achievement—reduces current $1.2 billion funding by $320 million.
  • Adult Basic Literacy Programs—reduces current $594 million funding by $94 million.
  • State workforce development grants from the Workforce Innovation and Opportunity Act (WIOA)—reduces by $1.3 billion at a time when workforce skills for critical infrastructure projects are sorely needed.

The cuts to education are to fund, at least in part, a proposed school choice program that would divert budget dollars from programs with proven track records to a new initiative that has never been put to the test at this scale.

The budget would also dramatically scale back social service and anti-poverty programs aimed at helping those most in need. Rather than “making America great again,” this budget would further exacerbate the immense levels of educational and economic inequality in this country.

As I wrote here in March, such a move is not sound business practice. If you’re trapped in a hole, why take away the ladders that offer the best opportunities for climbing out?

Funding for afterschool programs, teacher training, class-size reduction, career technical education, and job training is a pro-growth investment with a clear return—not just for the people who are helped, but for the nation and economy collectively.

Underscoring the importance of investing in education and workforce development, the Organization for Economic Cooperation and Development (OECD) reports that, if the average U.S. student score on the PISA matched the OECD average, the GDP of the U.S. could more than double. An aspirational goal such as this would truly put America first, by lifting up students and workers and empowering them to achieve their dreams.

The cuts identified in this budget are precisely the wrong approach to moving this country's education and workforce forward. They will not only harm public education as a whole but will impose untold hardship on those most vulnerable in our system—students who are traditionally underserved, first-generation college-goers, and those pursuing needed technical training in high-growth, high-demand fields.

Congress has time to act this year to ensure these proposed cuts are not made a reality. Lawmakers should build upon the progress they made in their last spending bill to make certain all Americans are able to achieve education and workplace success. There is no higher budget priority than investing in the American Dream.

May 4, 2017
By Marten Roorda, CEO

This week lawmakers on Capitol Hill have been busy putting the finishing touches on a budget agreement for the remainder of the 2017 fiscal year. As I wrote a few months ago here, the need to invest in the nation’s students and workers has never been more essential.

I was pleased to see that the omnibus spending bill on its way to the President’s desk for signature protects many of the nation’s most important domestic priorities in education and workforce development. ACT applauds this bipartisan compromise which strengthens investments in programs that support underserved students and job seekers.

Here’s a recap of what’s in the bill:

  • Modest increases for training and education of young adults as well as for apprenticeship programs administered at the Department of Labor.
  • Renewed funding for grant programs under the Workforce Innovation and Opportunity Act that serve vulnerable communities, many of whose members are disconnected from the labor market.
  • Restoration of “year-round” Pell Grants—a priority issue for ACT—which we believe levels the playing field and will allow low-income students to shorten the time it takes to complete higher education.
  • Historically high investments in the Higher Education Act’s TRIO and GEAR UP programs to ensure that more low-income, minority, and first-generation students have the opportunity to enter and succeed in postsecondary education.
  • New funding provided under Title IV of the Every Student Succeeds Act that can now be used for dual and concurrent enrollment programs. Much like year-round Pell Grants, these programs empower students to accelerate their studies while boosting persistence and achievement rates after they’ve left high school. 

These sorts of investments will pay dividends not only for students but for the country as a whole. As noted by the Georgetown Center on Education and the Workforce, two-thirds of all new jobs will require some form of postsecondary education or training within the next decade. Workers with at least this level of educational attainment already make up 65 percent of total employment. And those who have a four-year degree now earn 57 percent of the nation’s wages.

Clearly, investing in programs that allow students to invest in themselves is the surest path to education, career, and lifelong success. This week’s agreement certainly makes strides towards this laudable goal.

However, looking ahead to the next fiscal year, which begins in October, more can and should be done to invest in these programs to further expand opportunity. To make that vision a reality, Congress must build on the progress made in this spending agreement and ensure that federal investments in education and workforce development continue to help all individuals achieve education and workplace success.

April 25, 2017

By Wayne Camara, ACT Horace Mann Chair

We are in the midst of the annual college admissions cycle when many high school seniors are making decisions about where they will attend college in the fall. During this time of year I often see news stories that dismiss the role of admissions tests. Unfortunately, many of those stories are misinformed about the utility and value such assessments provide to students, schools, and colleges.

The biggest misperception I see is the argument that high school grades are the best indicator of college success and, therefore, we don’t really need standardized admissions tests. This notion is misinformed. That’s a polite way of saying it is nonsense.

Let’s start with the fallacy in the argument. High school grades are not, in fact, the best indicator of college success. Neither are test scores alone. In fact, the best predictor of success in college coursework is the combination of the two—grades and test scores together. Hundreds of independent studies have shown this to be true.

The figure below illustrates the additional value test scores contribute beyond grades. Two students with the same high school GPA of 3.0 may have widely different probabilities of attaining a similar college freshman GPA based on their ACT scores. A student with an ACT Composite score of 20 had a 28% probability of earning a 3.0 or higher freshman GPA; a student with an ACT Composite score of 30 had a 54% probability.
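To make this concrete, the sketch below shows how a simple logistic model that uses both high school GPA and ACT Composite score can assign very different success probabilities to two students with the same GPA. The coefficients are invented and merely tuned to roughly reproduce the 28% and 54% figures quoted above; this is an illustration, not ACT's actual prediction model.

```python
import math

# Hypothetical logistic model of P(first-year college GPA >= 3.0).
# The intercept and weights are invented, tuned only so the output roughly
# matches the 28% and 54% figures quoted above; this is NOT ACT's model.
def p_success(hs_gpa, act_composite, intercept=-6.14, w_gpa=1.0, w_act=0.11):
    z = intercept + w_gpa * hs_gpa + w_act * act_composite
    return 1.0 / (1.0 + math.exp(-z))

# Two students with the same 3.0 high school GPA but different ACT scores
print(round(p_success(3.0, 20), 2))  # ~0.28
print(round(p_success(3.0, 30), 2))  # ~0.54
```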

High school grades have their limitations. They not only reflect the idiosyncrasies of individual teachers’ grading standards and differences in course rigor, but they are also subject to inflation. More than 55 percent of college-bound students report having high school grades above 3.25, and 25 percent of U.S. high schools report an average GPA of 3.75 or higher for their graduating class.

We often accept the hypothesis that grades are fair and unbiased indicators of future success without much scrutiny, but grade inflation has steadily increased in the past few decades, and it has increased more rapidly for white and Asian-American students coming from more highly educated families.

Educators acknowledge that there are differences in the quality of schools and the rigor of curriculum. Test scores are one measure that helps colleges navigate and mitigate those differences, allowing them to compare the preparation of students coming from different backgrounds and different experiences. Without test scores, colleges must rely on their own subjective impressions of different groups of students and the quality of different high schools.  We know that subjective impressions and decisions have biases which are often implicit and never as accurate as empirical data.

Admissions tests provide a common metric that allows colleges to evaluate students who attend different high schools, live in different states, complete different courses, and receive different grades from different teachers. High school GPA simply cannot do that.

Good decision-making in the admissions process requires consideration of multiple sources of data; important decisions that impact students’ lives should never be based solely on one metric.  Research has shown that around 70 percent of college-bound students actually perform similarly across both high school grades and admission test scores (i.e., high, average or low on both measures).  In such situations, tests and grades provide confirmatory evidence that can increase our confidence in our decisions.

In the other 30 percent of situations, a student’s high school grades may be significantly higher or lower than his or her ACT scores. When this occurs, admissions professionals may justifiably apply greater scrutiny to both the test scores and the grades. Perhaps the test scores become less persuasive and relevant, or possibly the grades and other factors receive additional scrutiny. This is not a rationale for dismissing objective test scores but rather a justification for using multiple measures and professional judgment in evaluating college applicants and their potential fit and likely success at each institution. When multiple sources of information are available, basing decisions on less information is never the best solution.

Colleges, by and large, understand this. Most—despite what you may have heard—continue to require that applicants submit test scores. Colleges rank admissions test scores second in importance after high school grades earned in college-prep courses as an admission criterion. And they actually rank test scores above high school grades earned in all courses.

I often read articles that describe what admissions tests don’t do but ignore or lose sight of what these tests can do. ACT score results, for example, help with college and career planning. ACT score reports provide feedback on the types of careers and majors that best match a student’s interests and skills. They also provide an early indicator of the types of colleges at which students may be most competitive and allow parents, teachers, and counselors to assist students in planning for admissions. Not every student can or should go to an Ivy League college, and admissions tests help identify the schools and colleges that may best fit a student’s preparation and aspirations.

There is no single measure that can definitively predict future behavior by itself, and all measures have limitations.  The best decisions are made when multiple sources of data are considered. There is no reason to ignore test scores, just as there is no reason to ignore previous accomplishments, high school grades, or personal factors that have influenced a student’s development and aspirations.

Our ultimate goal should be to help students land where they have the best possibility of success, and there is no question that admissions test scores help accomplish this goal.

March 20, 2017

By Marten Roorda, CEO, ACT

The release of the President’s budget blueprint for fiscal year 2018 has rightly prompted deep concern within the education and workforce development communities, not least for its cuts to programs that benefit low-income and underserved learners and job seekers. The blueprint cuts the budgets of the U.S. Departments of Labor and Education by 21 and 14 percent, respectively.

The cut in the Labor Department reduces funding for job training programs that benefit seniors and economically disadvantaged youth. The cut in the Education Department reduces federal work-study aid to college students and diverts 37 percent of the surplus in the Pell Grant program, which provides aid to college students in financial need, to other uses.

The blueprint also eliminates the Supplemental Educational Opportunity Grant program; cuts Federal TRIO Programs by more than 10 percent; and cuts GEAR UP—a program long supported by ACT—by nearly one-third. These programs, either directly or through grants to states or organizations, help economically disadvantaged middle and high school students prepare for, attend, and complete college. For example, in 2013, 75 percent of low-income high school graduates who participated in GEAR UP immediately enrolled in college, nearly ten percentage points higher than all graduates that year, and an astonishing 30 points higher than low-income graduates overall.

As Anthony P. Carnevale and others document in The College Payoff, the higher the level of educational attainment, the higher the payoff: bachelor’s degree holders can expect to earn 84 percent more than those with only a high school diploma. In addition, those with higher levels of educational attainment tend to pay more in taxes and are less likely to rely on social services, which helps to support beneficial programs and services at the local, state, and federal levels. So why are programs that empirically benefit individuals and society being cut? Put simply, this is just not good business sense.

The mission of ACT is to help individuals achieve education and workplace success. Far from not helping, the President’s budget blueprint will actively harm. We add our voice to those of the many Republican and Democratic legislators who have voiced their strong opposition to the blueprint and its effects on the education and career preparation of many of the least fortunate among us. Students and job seekers of many ages rely on such services to fund their postsecondary pursuits, embark on a career, and ultimately find their way to realizing their goals.

I sincerely hope that the Administration rethinks its priorities and puts underserved students and workers first.

February 21, 2017

By Scott Montgomery, Senior VP - Public Affairs

High school students are our next generation of voters. In the coming years, they are going to help shape public policy, but their views are often overlooked because most of them are currently not eligible to vote. To address this lack of information, in December ACT surveyed a sample of students who took the ACT® test, asking them about their engagement with and concerns after the 2016 presidential election.

A new ACT issue brief, The Next Generation of Voters: A Sample of Student Attitudes after the 2016 Presidential Election, summarizes the findings.

We found that most students were engaged with the election: 67 percent reported following the news coverage very or fairly closely. However, their most popular news source was social media (72 percent), suggesting that work to help students identify “fake news” is critically important.

After the election, the majority of students reported feeling more concerned about a number of topics, including race relations (72 percent) and college affordability (62 percent), while half reported feeling more concerned about getting a job. Fewer than 13 percent of students reported feeling less concerned about any of the topics after the election. Further, only about half (49 percent) of the students reported that discussions in their classroom about the election were always or almost always respectful.

The high levels of concern, particularly related to equity issues, indicate that educators and parents need to find ways to talk with students about current issues and to teach them how to engage in these discussions in a productive manner that does not interfere with learning. The survey also suggests that civics and media education are needed to help students gather the information that is shaping their opinions.


January 30, 2017

By Marten Roorda, Chief Executive Officer

The interaction of disparate cultures, the vehemence of the ideals that led the immigrants here, the opportunity offered by a new life, all gave America a flavor and a character that make it as unmistakable and as remarkable to people today as it was to Alexis de Tocqueville in the early part of the nineteenth century.

—John F. Kennedy, A Nation of Immigrants

America has always been a nation of immigrants. As the CEO of a company that works to support individuals and their pursuit of education and workplace success, I am alarmed and concerned by the President’s executive order this past weekend restricting the entry into the United States of certain immigrants. Some are seeking refuge from the devastation of war, while others are in fact U.S. residents, but in nearly every case they are simply seeking a better life for themselves, their families, and our nation.

Of the more than 1 million international students currently enrolled in U.S. postsecondary institutions, approximately 17,000 come from countries in the current immigration ban. Many of those students are understandably confused about what their futures may hold. And beyond these questions about their future education and career opportunities is also the potential “brain drain” that could impact U.S. companies in the long term and which should be equally alarming to us as a nation. An estimated 35% of all foreign students in the U.S. are pursuing degrees in STEM fields, and upon graduation many of them will likely be offered jobs with some of America’s largest tech firms: Apple, Microsoft, Google, DuPont, Exxon-Mobil, and Dow, to name just a handful. Because of the recent executive order, the contributions of these graduates to American innovation and economic competitiveness are now seriously in question.

As many companies around the world know, diversity breeds innovation and innovation breeds business success. Our own company benefits from the commitment of immigrants and foreign nationals—myself included—to helping individuals around the globe and here in the States achieve college and career success. ACT is building for the future, and as we innovate to improve our own solutions and the measurement industry as a whole, we will rely on talent from across the globe. This weekend’s immigration actions will have serious implications for the way we attract the best and brightest from around the world; without them we can’t bring about the innovation required to fix the American education system. We need help from outside the U.S. to find new ways to remedy our serious achievement and skills gaps. We are proud of our employees and the innovative culture that people from varied cultures and countries bring to ACT, and we are stronger because of our diverse culture. Being inclusive is one of our guiding principles.

Denying entry to people who have much to add to the country and who are often already here legally for work and education purposes runs counter to the ideals of a country founded by immigrants seeking safe harbor and sanctuary from foreign oppression. Creating blanket blockades of individuals from certain countries is not a formula to solve America’s immigration and security issues – we can and must do better than what we’ve seen this weekend. 

November 14, 2016

By Marten Roorda, Chief Executive Officer

Across the United States, nearly 4.5 million K-12 students are English language learners.

From 2004 to 2014, the percentage of public school students participating in programs for English learners increased from 8.8 percent to 9.3 percent. That represents an increase of about 240,000 students in a decade—including a jump of more than 60,000 during the 2013-2014 school year.

You’ll find the highest proportion of English learners in kindergarten, at 17 percent. California alone has 1.4 million English learners, 22.7 percent of the statewide student population.

According to the National Center for Education Statistics, the languages English learners are most likely to speak at home are Spanish, Arabic, and Chinese—and at least 30,000 speak Vietnamese, Hmong, Haitian, Somali, Russian, or Korean.

And in the case of one family I know very well, three of these English language learners speak Dutch.

Whatever their linguistic backgrounds, we want all students to succeed, but too often a lack of language familiarity interferes with students showing what they know.

Consider a quick math quiz:

What is 24 divided by 6? I’m certain you know the answer: 4.

Now, "¿Qué es 24 dividido por 6?" Do you still know the answer?

Finally, what if I had asked this question first: “什麼是24除以6?” Are you still sure?

Your math skills didn’t change as I asked the questions, but the languages did. If I try enough languages, at some point your lack of fluency will override the skill being measured—your facility in math.

When educational measurement is “confounded” by extraneous factors, validity is lost. Even Einstein would struggle with math problems written in languages he had never encountered.

Using the language of psychometricians, the net result is a false negative. In words we can all understand, it’s simply not fair.

To enable more students to demonstrate their abilities, starting in the fall of 2017 ACT will offer supports for qualified English learners in the United States taking the ACT. These supports will include limited additional time to take the test, the use of an approved word-to-word bilingual glossary (containing no word definitions), test instructions provided in the student’s native language (limited languages at first), and testing in a non-distracting environment.

Most importantly for these students, the test results will be college reportable.

To do well on the math test, students will still need to know math. Similarly, they will also need to know English, Reading, and Science, the other subjects covered on the ACT.

In English, the test measures usage, mechanics, and rhetorical skills; it’s not a vocabulary test. A bilingual glossary helps even the playing field, but does not define the words on the test.

For example, if you were a Spanish speaker learning English, you might not understand the word “rhetorical,” but you would likely understand the Spanish word “retórico.” The glossary doesn’t define “rhetorical.” It only clarifies that the word is one you already understand in your native language.

ACT’s move to better support English learners is consistent with the federal Every Student Succeeds Act (ESSA), which calls for educators to help students whose difficulties in speaking, reading, writing, or understanding the English language may deny them the ability to meet challenging academic standards, achieve in classrooms where the language of instruction is English, or participate fully in society. 

We’re proud to be the first major assessment organization to offer the English learner supports I’ve described. We hope others in our profession will follow our lead.

The bottom line for ACT is not about being first, but about being fair to the nearly 5 million students across the United States whose first language is not English, including three young children who sit at my kitchen table—mostly speaking Dutch—each and every evening.

All students deserve the opportunity to show what they know. Starting next fall, with the ACT, they will.

October 19, 2016

By Wayne Camara, Senior Vice President, Research

We’ve received a good deal of feedback regarding our recent report on test-optional policies, More Information, More Informed Decisions: Why Test-Optional Policies Do NOT Benefit Institutions or Students. Some of the feedback has been positive, while some has been negative.

I thought it would be helpful to address some of the negative responses that we’ve received in order to help improve understanding of the report itself and ACT’s position on this matter.

Some of the critical feedback suggested that the conclusions of this report were self-serving and defensive. We anticipated this type of reaction, of course, and that’s why we included graphs illustrating the research findings on which we based those conclusions. Some of that research was conducted by ACT, but a good deal of it was conducted by independent, external sources.

But then there were those who misinterpreted the graphs. One individual, referencing a figure on page 4 of the ACT report, essentially argued that ACT was unfairly picking on low-performing students by illustrating that students with a low ACT composite score are not terribly successful academically in college even when they have a history of high grades. (The figure shows that if you rely on grades alone to predict academic success, you are missing much of the story—and that is true for students with an ACT score of 30, 20 or even 10.)

Test-based metrics are used widely across educational, employment and organizational settings, in training programs, certification, licensure and healthcare. Yet only in college admissions do we regularly hear calls for allowing individuals a choice in determining what information should be conveyed or hidden from decision makers.

The graph in our report illustrates that two students with the exact same high school grades have very different probabilities of academic success when their ACT scores differ significantly.  Of course the same argument applies to high school grades:  Two students with the exact same test score have very different probabilities of academic success when one has a history of low grades in high school and another student has a history of high grades. 

The overall message that the critic fails to address is as follows:  Research across multiple domains has consistently demonstrated that test scores add significant value above and beyond other predictors, whether one is examining student achievement, job performance, or workplace competencies. Decision accuracy is improved when all valid indicators are considered—grades, course rigor, test scores, background experiences, opportunities, etc. 

When a college makes test scores optional, it implies that admissions officials would otherwise have to weight those indicators blindly and mechanistically and that they are unable to make holistic decisions based on the sum and consistency of various sources of evidence and the specific needs of the institution. In addition, forcing students to determine when it is in their best interest to report or suppress their test scores can lead to gaming and other strategies that may undermine the very students such policies are seemingly intended to help.

The basic question here is whether or not test scores add value to admissions.  Quite frankly, if colleges did not see any value in test scores, then they would not be test-optional; they would be test-free. Colleges would not continue to use an instrument that did not offer incremental validity in admissions, placement, retention, diagnostics, and other important functions.  Colleges wouldn’t accept test scores from thousands of applicants if the information did not supplement the high school record and provide a common metric to evaluate students from different schools, who completed different courses and were graded by different faculty using different standards. 

The research findings are clear:  About four out of every 10 college-bound students report an “A” average in high school courses, but their actual college grades tend to run nearly a full point lower (on a 4.0 scale). ACT’s research simply repeats much of what has been found in peer-reviewed scientific research conducted by independent scholars with no affiliation to testing organizations. A good example is this report, in which the authors conclude (p. 13): “Test-optional admissions policies, as a whole, have done little to meet their manifest goals of expanding educational opportunity for low-income and minority students. However, we find evidence that test-optional policies fulfill a latent function of increasing the perceived selectivity and status of these institutions.” 

Each institution has the right to establish its own admission policies to meet its own needs, and ACT respects that right. However, claims that ACT scores add little-to-no validity above high school grades are simply not borne out by the data, and claims that test optional policies result in greater campus diversity have not been substantiated in independent research.

Assessments contribute valuable information that can inform decisions in admissions, placement, hiring, accountability, certification, licensure, diagnosis, and instruction, to name just a few.  Would you accept test-optional policies for certifying a pilot, licensing a pharmacist, or allowing a bank auditor access to your personal financial information?  Do the colleges that adopt test-optional policies institute a similar option for course grades? Do they allow students to decide whether their grades are based only on papers, research projects, and class participation or do they require quizzes, tests, and final exams? 

Most admissions professionals see the value in admissions tests and understand that, in the large majority of instances, when test scores confirm what high school grades indicate, it is confirming and reassuring, not a waste of time. Most also seek multiple sources of information and attempt to make important decisions based on all sources of data. ACT believes that test scores are a valuable source of data, and the research supports this conclusion.

Where FairTest Gets It Wrong

October 10, 2016

By Wayne Camara, Senior Vice President, Research

In a recent report by the National Center for Fair & Open Testing (FairTest), “Assessment Matters: Constructing Model State Systems to Replace Testing Overkill,” the authors deem performance assessments as the preferred model for state assessment systems and detail their Principles for Assessment.

The issue of high-quality assessment is of critical importance today, and using assessments to inform and enhance student learning is certainly one of their primary purposes; however, I disagree with many of the report’s conclusions.

Performance assessments often provide students with an opportunity to engage with extended and complex problems and situations, which can be more authentic than a typical objective test question. As ACT has highlighted in our K–12 Policy Platform, assessment formats should vary according to the type of standards and the construct to be measured; typically, a balance of question types provides the basis for a comprehensive evaluation of student achievement.

In advocating for performance assessments, FairTest incorrectly claims that multiple-choice assessments are limited “to facts and procedures and thereby block avenues for deeper learning.” As ACT research shows in “Reviewing Your Options: The Case for Using Multiple-Choice Test Items,” multiple-choice items can test higher-order thinking skills—by requiring students to, for example, apply information they have learned from a given scenario to a new situation, or recognize a pattern and use it to solve a problem—and they can do so in an efficient and cost-effective manner. Instead of being dogmatic about a particular assessment format, states and schools need to focus on what is being measured and try to balance innovation and sustainability.

The report also ignores some of the limitations of performance tasks:

  • they require significantly more time to complete, which reduces instructional time;
  • they sample relatively few skills, which means scores are based on only a very small subset of standards or content;
  • they are often highly expensive to create and score, which delays score reporting; and
  • they have lower reliability (and score precision) than multiple choice tests. 

Related to FairTest’s Principles for Assessment, I disagree that assessment systems should be decentralized and primarily practitioner developed and controlled. Creating a fair, valid, and reliable assessment is difficult and time-consuming work. Before a question is placed on the ACT and scored, a number of very extensive and detailed processes need to occur, including multiple reviews by internal and external experts to ensure the item is measuring what it says it is measuring and not introducing irrelevant information that may make it more difficult for students to access.

For example, at ACT we try to reduce the language load on math items to ensure that they measure math and not a student’s reading ability.  Other testing programs may include extensive reading passages and context in presenting a math item, but we need to ask ourselves: Does the heavy reading load disadvantage a student with limited English experience who otherwise is highly proficient in mathematics?  The reviews also ensure that all test questions are culturally sensitive and that test forms as a whole include a balance in terms of culture, gender, and life experience.

Further, test forms are created to match particular content and statistical specifications. This helps to ensure that the assessments are comparable across time, which is necessary to maintain the longitudinal trends used to monitor achievement gaps or measure growth within a classroom, across districts, and/or across schools within a state.

Finally, FairTest includes among its principles that students should exercise significant control where appropriate, for example by deciding whether to include SAT or ACT scores in their college applications. As highlighted in recent ACT research, “More Information, More Informed Decisions,” more sources of student information—not fewer—are needed to better understand a student’s preparedness for college.  

In ignoring the realities of cost—both in teacher time and in dollars—that states face in developing their assessment systems, as well as the need for fairness, reliability, and validity in the construction and administration of tests, FairTest inflates some good ideas for innovative item formats into a “system” that many, if not most, states will find difficult to construct or unworkable at scale.

ACT advocates for a holistic view of student learning using multiple sources of valid and reliable information. Performance assessments and teacher-created assessments can be one source of information, but for most states, relying on them exclusively is not feasible due to technical capacity and costs. 

August 29, 2016
By Marten Roorda, Chief Executive Officer

College was once a privilege afforded to the fortunate few. For most high school graduates, a diploma was the end of the educational road. The road may not have been paved with gold, but it did lead to steady employment in stable careers that paid solid middle class wages. That was true in the United States, and perhaps even more the case in Europe, where I was born, raised, and educated.

Today, few of these assumptions endure. Instead of going straight to work, most high school graduates on both sides of the Atlantic enter some form of postsecondary education. Still, even with a college degree or vocational certification, it’s likely the modern millennial will have many jobs and even professions before easing into retirement—whatever that might look like a half century from now.

At ACT, we had the privilege of testing 64 percent of the U.S. high school Class of 2016—nearly 2.1 million students in all. What we found was both encouraging and sobering.

On the sobering side, the average composite score on the ACT declined slightly to 20.8, down from 21.0 last year. Across all graduates, 38 percent met the ACT College Readiness Benchmarks in at least three of the four core subject areas tested (English, math, reading, and science), an achievement that indicates they’re ready for first-year college success.

The flip side is that, based on their scores, 62 percent of graduates are not prepared. Worse, 34 percent met none of the ACT benchmarks, suggesting they will likely struggle with what comes next.

Still, while average scores are down, that doesn’t necessarily imply that the performance of this year’s graduates is worse.

How is that possible?

The reason is that we are testing more students than ever—again, 64 percent of graduates this year, five percentage points more than a year ago. As a result, our findings include an additional 100,000 students who would not have tested before and who are likely to score somewhat lower than previous testers.
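A toy calculation illustrates the arithmetic behind this composition effect. The group sizes and averages below are invented for illustration, not actual ACT data:

```python
# Invented numbers for illustration only, not actual ACT data.
returning_pool = 2_000_000   # hypothetical students comparable to last year's testers
returning_avg = 21.0         # hypothetical average composite for that pool
added_pool = 100_000         # hypothetical newly tested students
added_avg = 17.0             # hypothetical (lower) average for the added group

overall_avg = (returning_pool * returning_avg + added_pool * added_avg) / (
    returning_pool + added_pool
)
print(round(overall_avg, 1))  # 20.8: the overall mean dips even though
                              # neither group performed worse than before
```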

Broad-based participation in the assessment process is a victory—for our society, which gets a more accurate perspective of America’s academic achievement, and for the nearly 2.1 million graduates who took the ACT, who now have a better understanding of the full range of opportunities available to them.

While 64 percent is a big number, an even bigger number is 84 percent—the percentage of this year’s tested seniors who aspire to postsecondary education.

The opportunities available to these students are considerable. By including them in the assessment process, we also include them in the conversation—the ones they are having with their counselors, parents, potential schools and training institutions, and most importantly themselves.

By better understanding where they stand, they can better appreciate where they need to go next. While their world may not resemble that experienced by their parents and grandparents, it is also likely to include opportunities few of us can even imagine.

And that, for all the students who will follow in our footsteps, is victory. 

July 12, 2016
By Suzana Delanghe, Chief Commercial Officer

Piracy is an international crime that accounts for an estimated $300 billion in lost intellectual property (IP) revenues (The Commission on the Theft of American Intellectual Property, 2013). Additionally, the theft of IP creates a significant drag on United States gross domestic product and diminishes future innovation by businesses. ACT, like other companies across the globe, is impacted by piracy, and while we cannot stop it, we intend to address it head-on.

ACT’s tests are taken by millions of students every year, trusted by parents, accepted by every four-year college and university in the country, and used by scholarship agencies to make decisions that impact students in the US and around the world. ACT knows how important trusted, valid results are to those who take our tests and use our scores, and we are committed to ensuring the validity of our assessments. We regularly refresh our test questions and forms, and we are continually improving our testing processes to ensure a fair testing experience for our test takers.

While the vast majority of test takers are honest, a small number of individuals—and a growing number of adults and organized fraud rings—are unfortunately seeking to undermine the system for their own financial gain, jeopardizing the hard work of honest test takers.

We realize the importance individuals and institutions place on the scores generated by our tests. We are committed to doing our part to curtail this type of fraudulent behavior by not only monitoring and addressing specific issues as they occur, but also by improving our test development and delivery processes to assure students, institutions, and the public that the scores we report are valid and reliable. We intend to do this while also maintaining the highest degree of access for test takers.

ACT has always sought to regularly improve our testing processes, and, to that end, we are aggressively planning for the development and launch of a Computer Adaptive Test (CAT) version of the ACT® test that will be implemented for international testing in the fall of 2017. More details on ACT’s International CAT will be forthcoming in the next week. 

A CAT design allows for quicker scoring and turnaround of results for examinees, results in an assessment that is shorter in duration, and—because assessments delivered on a CAT platform are uniquely generated based on each test taker’s responses—makes the test more secure and less prone to security threats. ACT’s desire has always been to innovate and advance the field of measurement. In doing so, we also hope to make it more secure and, therefore, more reliable.
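For readers curious how adaptive delivery works in general, here is a minimal sketch of item selection under a simple Rasch (one-parameter) model, in which each response updates the ability estimate and the next item is chosen to be maximally informative. It is a generic textbook illustration with made-up item difficulties, not ACT's actual algorithm:

```python
import math
import random

# Generic sketch of computer-adaptive testing (CAT) with a Rasch (1PL) model.
# Item difficulties and the update rule are invented; this is NOT ACT's algorithm.

def p_correct(ability, difficulty):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def pick_item(bank, used, estimate):
    """Choose the unused item closest in difficulty to the current ability
    estimate; for the Rasch model that is the most informative item."""
    return min((d for d in bank if d not in used), key=lambda d: abs(d - estimate))

random.seed(0)
true_ability = 1.0                                        # simulated examinee ability
bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]  # item difficulties (logits)
estimate, used = 0.0, []

for _ in range(6):                                        # administer six items adaptively
    item = pick_item(bank, used, estimate)
    used.append(item)
    correct = random.random() < p_correct(true_ability, item)  # simulated response
    estimate += 0.6 if correct else -0.6                  # crude step-size update
    print(f"item {item:+.1f}  correct={correct}  estimate={estimate:+.2f}")
```

Because each examinee's item sequence depends on his or her own responses, no two examinees see the same fixed form, which is part of what makes an adaptive test harder to compromise.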

ACT encourages anyone who has concerns about testing irregularities to report them via our anonymous Test Security Hotline.

ACT is also reaching out to National Association for College Admission Counseling (NACAC) leadership and to others in the admissions testing industry to discuss how—together—we can do more to limit the negative impact of cheating in higher education. We look forward to further conversations and to ensuring ongoing confidence in ACT’s results.

 

June 30, 2016
By Scott Montgomery, ACT Vice President for Policy, Advocacy, and Government Relations

“If you can not measure it, you can not improve it,” said the famous physicist Lord Kelvin.

As an organization committed for more than 55 years to helping people achieve education and workplace success, ACT firmly believes that measuring students’ college and career readiness in English, math, reading, and science will help improve their readiness. 

In science, improving students’ knowledge and performance is more critical than ever: The U.S. Department of Commerce estimates that jobs in the fields of science, technology, engineering, and math (STEM) will grow 17 percent by 2018 and that more than 1.2 million of these jobs will go unfilled because of a lack of qualified workers.

Several states have enacted laws that explicitly require students’ science skills be tested. In addition, the recently reauthorized Elementary and Secondary Education Act—the Every Student Succeeds Act—upholds the importance of science testing in elementary, middle, and high school.

While other nationally recognized high school tests, such as the SAT, reference science content—if they do so at all—only in the context of assessing reading, writing/language, and mathematics skills, the ACT® test has a full, separate science test with 40 questions devoted to measuring skills and knowledge deemed important for success in first-year college science coursework. The constructs measured on the ACT science test are unique and different from those measured by the ACT math and reading tests.

The inclusion of both a math and a science test allows ACT to offer examinees a STEM score, which represents their overall performance on the two tests. Only through the comprehensive measurement of both math and science skills can this unique score be determined.

The ACT test has empirically derived benchmark scores that indicate readiness for success in first-year college courses in each individual subject area measured, including science. And our new STEM benchmark score indicates whether a student is well prepared for the types of first-year college courses required for a college STEM-related major.

The science test on every ACT test form includes at least one passage on each of the science disciplines that are most often offered to students in high school—biology, chemistry, Earth/space science, and physics.

In fact, science educators who participated in the recently released 2016 ACT National Curriculum Survey overwhelmingly prefer a stand-alone science assessment with authentic scientific scenarios. Eighty-six percent of middle school teachers, 89 percent of high school teachers, and 87 percent of college instructors felt that such a test is a better assessment of science knowledge than either science-oriented questions included in a math test or questions on an English or reading test involving science-oriented topics.

Of the 1.9 million graduates who took the ACT in 2015, 49 percent declared an interest in STEM majors and careers. These students need to be prepared for STEM jobs, so why in the world would we cut back on measuring students’ science knowledge and skills?

If we want students who are prepared for the millions of science, technology, engineering, and math jobs of the future, we must invest in teaching them science skills. But we also must assess their performance to measure what they have learned and to identify areas in need of improvement. The ACT is the only nationally recognized high school assessment that does this.

June 24, 2016
By Marten Roorda, Chief Executive Officer

As is clear to most observers, not every student enjoys the same advantages as they advance through the K-12 educational system. Too often those disparate experiences not only impair their personal academic outcomes, they also limit the opportunities those students might have had to contribute their distinct perspectives to the colleges they might have attended—and, in the longer run, to contribute to the vitality of the communities they represent and to the prosperity and well-being of our country as a whole.

For generations, ACT has advocated that colleges and universities must use admissions criteria that are valid, reliable, holistic, and effective—and embrace the full range of students who could benefit from higher education. We believe the U.S. Supreme Court Fisher decision, to uphold the University of Texas’ efforts to promote diversity and inclusion, is consistent with that holistic perspective.

“A university is in large part defined by those intangible qualities which are incapable of objective measurement but which make for greatness,” Justice Anthony Kennedy wrote, adding “Considerable deference is owed to a university in defining those intangible characteristics, like student body diversity, that are central to its identity and educational mission.” 

On June 22, just one day before the Fisher decision was announced, we launched the ACT Center for Equity in Learning, which will advocate for underserved students and young working learners.

In some ways, our timing could not have been more fortuitous.

Building on ACT’s core strengths in the high school to postsecondary years, the Center's initiatives will reflect ACT's interests in both college and career readiness and highlight the use of data, evidence, and thought leadership to close gaps in equity and achievement.

Until the quality of education is uniformly high for every student, the Center—and all of our society—still has work to do. As we strive to reach that ambitious standard of equality of opportunity for every young person, we appreciate and applaud the court’s counsel to use “valuable data about…different approaches to admissions” to “foster diversity” rather than “dilute it.” 

By Marten Roorda, CEO, ACT


Imagine that a popular national blogger advised everyone to opt out of taking cholesterol and blood pressure tests—or even from stepping on a scale.

“The tests provide no useful information. Your cholesterol reading is nothing but numbers,” she says. “That is useless. That is not diagnostic. Same with your blood pressure. Your weight says nothing other than how you rank against other people.”

Would you put your health—or that of your child—into that blogger’s hands? Of course not.

But those are exactly the “opt out” arguments made by Diane Ravitch in her video blog posted on The Network for Public Education website, titled “Why all parents should opt their kids out of high-stakes standardized tests.”

Ravitch argues: “The tests provide no useful information…The score will tell the teacher nothing about your child other than how he or she ranks compared to other children.”

Wrong. Criterion-referenced tests measure performance against a standard—for example, can you answer questions about something you’ve just read, or can you calculate the sum of several numbers?

At ACT we want ALL students to understand what they’re reading, and to be able to look at numbers and know what to do with them. These are basic skills necessary for education and workplace success.

We would like nothing more than for every student to get every answer right, but if they miss questions we also want them to know where to focus their efforts so they can be better prepared for success—the same way a caring and competent doctor tells you not only what your blood pressure and cholesterol numbers are but also what they mean and how to improve them.

That is why ACT score reports include much more than a score—they tell teachers and parents what skills students have mastered, where they need more work, and how they can build their skills—and the ACT College Readiness Benchmarks tell students if they’re on track for success.

The “norms” associated with tests, which according to Ravitch “tell the teacher nothing about your child other than how he or she ranks compared to other children,” actually give parents and policymakers meaningful information about how a student, or group of students, ranks relative to comparable populations—and they are one of the most important tools underserved students and schools have to bring attention to their struggles.

In a news release issued in 2015 by the Leadership Conference on Civil and Human Rights, signed by the League of United Latin American Citizens, the National Association for the Advancement of Colored People, and the Disability Rights Education and Defense Fund, among others, the organizations wrote:

“Until federal law insisted that our children be included in these assessments, schools would try to sweep disparities under the rug by sending our children home or to another room while other students took the test. Hiding the achievement gaps meant that schools would not have to allocate time, effort, and resources to close them. Our communities had to fight for this simple right to be counted and we are standing by it.”

And we are standing by the civil rights groups.

Standardized tests offer something else important: a standard. As we’ve written in our paper on this topic—Opt-Outs: What Is Lost When Students Do Not Test—when the only measures are classroom grades, which are subject to factors such as attendance, timely completion of homework, and grade inflation, students can think they’re on track for success, only to receive a nasty shock when they reach the next level in their schooling or in their emerging careers.

It’s best to be objective—and honest—when there’s still time to help students who need to fully develop their academic skills.

We do agree with Ravitch on several items: that tests should be as short as possible (our longest tests are half as long as the graphic in Ravitch’s video suggests) and that parents should “insist that [their] child have a full curriculum.” We could not disagree more, though, with Ravitch’s contention that opting out “is a powerful way of sending a message to the policymakers in your state capital and in Washington, DC.”

Opting out of standardized testing means opting out of valuable information that can help your children learn and your schools improve.

At ACT we believe that the best way for parents, policymakers, and even pundits to ensure our children are learning is by opening our eyes instead of closing them.

Opt out of Ravitch instead.

May 26, 2016

ACT received the Culture of Innovation Award at the Chief Innovation Officer Summit on May 18, 2016, in San Francisco. The award recognizes the comprehensiveness of ACT’s innovation programs, processes, and platforms.

ACT’s Culture of Innovation Award was one of four Strategy and Innovation Awards announced at the summit. The summit’s Strategy and Innovation Advisory Board, composed of high-level executives working in strategy and innovation at major corporations and organizations, selected the winners for their “exceptional efforts in strengthening business performance and growth.” Learn more about the awards here.

In announcing the award, ACT CEO Marten Roorda congratulated all ACT team members for their engagement as innovators, saying that “developing and driving a culture of innovation involves everyone at ACT.”

May 24, 2016
By Suzana Delanghe, Chief Commercial Officer

Imagine a student with a motor impairment. He has limited use of his dominant arm and he can’t get a firm grip on his pencil.

He’s also got a big test coming up, the kind where you use a Number 2 pencil to indicate the correct answers.

The student is well prepared academically, but, try as he might, he cannot completely fill in the ovals as is required for scoring.

Should he fail the test simply because his disability prevents him from showing what he knows?

Of course not, which is why he should request, and receive, an appropriate accommodation—for example, one that allows him to circle his answers in the test book, and then have a monitored “scribe” transfer those responses to his answer document for scoring. 

In addition to conditions that are readily apparent, some disabilities may not be visible from the outside. For instance, an inability to maintain attention—while serious—does not affect the underlying capacity to calculate the correct solution to a math problem. It may, however, require an accommodation.

At ACT, we want students to show what they know. That is why for decades we have allowed appropriate accommodations for students with demonstrated disabilities.

We also want to minimize the burden on students and families applying for accommodations while ensuring the integrity of the testing experience. To that end we are pleased to announce several enhancements to our accommodations systems that will streamline the application process.

The Test Accessibility and Accommodations (TAA) system will create the following opportunities:

  1. All students will now be able to register online to take the ACT® test at act.org.
  2. Students seeking accommodations will have a uniform experience:
    a. There will be one online form to fill out.
    b. The application process will require minimal (but sufficient) documentation.
    c. For most students, there will be no requirement for additional requests or reviews if their disabilities and accommodations are included in approved IEP/504 plans.
  3. The information and documentation collected can be used to secure accommodations for all future National test dates.

ACT’s new TAA system will become operational for students testing in fall 2016 and beyond.

Nothing eliminates the challenges associated with disabilities, but we are pleased to take these important steps that will make it simpler for all students to show what they know—which is the larger point of the assessment process.

May 13, 2016

ACT CEO Marten Roorda stands behind his May 11 blog post (“Collaboration Essential When Claiming Concordance”) questioning the methodology and validity of the College Board’s new concordance tables. Roorda notes that the College Board’s response to his blog post, submitted by its senior vice president for research, Jack Buckley, did not dispute the scientific arguments he cited against use of the new concordance.

ACT believes it is important for students and colleges to be aware of the limitations of the SAT score converter when it comes to comparing new SAT scores to old SAT scores and new SAT scores to ACT scores, as using the concordance could lead to incorrect admission decisions. Until a complete concordance study can be conducted with the involvement and cooperation of both organizations, such concordance tables should be viewed as suspect.

In addition, ACT takes exception to two statements made in the College Board’s response: First, Buckley claimed that the College Board’s approach to developing concordance exceeds industry standards. That is not the case. It certainly does not meet testing industry standards, such as the Standards for Educational and Psychological Testing. Second, Buckley claimed that the College Board reached out to ACT several months ago to express their interest in conducting a new SAT-ACT concordance study. We are not aware of any such outreach.

ACT stands ready to cooperate in such a concordance study, as we have in the past. Until that study has been conducted and the results released, ACT will not recognize or approve of any concordance tables that compare scores on the new SAT to scores on the ACT® test.

May 13, 2016

Word is out—and interest is rising in the three free online events being offered by ACT and Kaplan Test Prep.

On May 11, more than 7,000 students and parents tuned in to Introduction to STEM Concepts, a live event that helped students brush up on their math and science skills in advance of the ACT® test. An additional 10,000 viewers accessed an on-demand recording of the event within the first 24 hours.

ACT and Kaplan are partnering to provide free access to these live events and on-demand presentations to help students hone their skills before taking the ACT and know how best to use their results. One remaining event—Introduction to ELA Concepts (May 22)—will focus on steps students can take to refresh and refine their knowledge and skills in English and reading. The first event, Understanding Your ACT Scores and What to Do Next, was offered on April 30 and is now available on demand.

Find out more about the upcoming live event, and view the on-demand presentations at the link below. All live events and on-demand presentations are available to anyone at no cost and with no further obligation.

LEARN MORE >

Live-chat comments from students during Introduction to STEM Concepts:

"so grateful for all these act people! thank you for taking the time to help someone who cant afford an actual class! This has helped so much"

"THANK YOU SO MUCH #kingarthur and #queenkristen [instructors for the event] !!! you guys are so awesome and helpful!!!!"

"Thank you so much to the tutor I don't think i have ever liked learning math this much in my whole life"

"Thank you guys so much it was so helpful and actually enjoyable!"

"The class was awesome!! I sat through both of them [math and science sessions] and I sure feel more confident! thank you"

May 11, 2016

By Marten Roorda, Chief Executive Officer, ACT

Here’s an SAT word for you: equipercentile.

Even though the College Board promised to get rid of “SAT words” on the latest version of its test, if you want to understand your new SAT scores, you’d better know what “equipercentile” means.

Let’s back up a bit to see why.

The College Board just completed the overhaul of the SAT. The new test has been administered to students on two national test dates, most recently on May 7, 2016.

The trouble for students, schools, and colleges is that it’s difficult to compare scores from the old SAT to the new SAT. If you’re asking different questions using different rules and different scoring scales, how can you compare an old SAT score from last fall with a new SAT score from this spring?

The answer is: You need sophisticated statistics. This is where “equipercentile” comes in. In short, using the College Board’s own explanation, if 75 percent of students achieve a score of X on Test A and 75 percent achieve a score of Y on Test B, then the scores X and Y are considered “concorded.”
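As a rough illustration of that idea, here is a hypothetical sketch of an equipercentile link. The code and toy data are illustrative only; the College Board’s and ACT’s operational procedures rely on smoothed distributions and carefully designed samples.

```python
import numpy as np

def percentile_rank(scores, x):
    """Percentage of examinees scoring at or below x."""
    return 100.0 * np.mean(np.asarray(scores) <= x)

def equipercentile_link(scores_a, scores_b, score_a):
    """Map a Test A score to the Test B score with the same percentile rank.

    Hypothetical sketch: operational concordance work uses smoothing and
    much larger, carefully matched samples.
    """
    target = percentile_rank(scores_a, score_a)
    b_values = np.unique(scores_b)                      # distinct Test B scores, ascending
    b_ranks = [percentile_rank(scores_b, b) for b in b_values]
    return float(np.interp(target, b_ranks, b_values))  # interpolate to the target rank

# If 75 percent of examinees score at or below X on Test A and at or below Y
# on Test B, this function maps X (approximately) to Y.
```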

In fact, the College Board recently has been promoting its new “SAT Score Converter,” which, it says, allows you to compare scores on the new SAT with the old SAT and with the ACT® test. However, this mathematical makeover comes with several caveats the College Board didn’t tell you about.

For example, after past SAT revisions, such as that from 2006, concordance tables were created after more than a year’s worth of data were in. One reason for this is that students who test in the fall are more likely to be seniors than those who test in the spring. Moreover, students willing to take the first iteration of a test that has undergone a major overhaul are likely quite different from the typical student.

Therefore, to get a full and fair sample, it’s important to have at least a full year’s worth of data to compare. With data from only the March SAT available, it’s clear that the current sample stands a significant chance of being different from the whole.

In 2006, the College Board did wait for actual results to come in—results that changed the concordance calculations. Now, not only is the College Board not waiting to make pronouncements about its own tests, it’s asserting the concordance with the ACT—which is why we have skin in the game.

To arrive at the ACT concordance, the College Board appears to have used a technique called “chained concordance,” which makes links between the new SAT and the old SAT, and then from the old SAT to the ACT. It therefore claims to be able to interpret scores from the revamped SAT relative to the tried-and-true ACT.
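In code terms, chaining simply composes two score maps, so any estimation error in either link is carried into the final ACT equivalent. The sketch below is hypothetical, with illustrative function names rather than the College Board’s actual procedure.

```python
def chained_concordance(new_sat_score, new_to_old, old_to_act):
    """Compose two concordances: new SAT -> old SAT -> ACT.

    new_to_old and old_to_act are hypothetical mapping functions (for
    example, built with an equipercentile link like the sketch above).
    Each link is estimated with error, and those errors compound.
    """
    return old_to_act(new_to_old(new_sat_score))
```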

Speaking for ACT, we’re not having it. And neither should you.

A lot has changed in education since 2006. Linking scores from a single administration of the new SAT to the old SAT, and then to the 2006 ACT, is a bridge too far.

In 2006, the College Board and ACT worked collaboratively under the aegis of the NCAA to produce the official ACT-SAT concordance table. That work represented the gold standard in concordance, and it remains the only concordance ACT recognizes.

Now, without collaborating with ACT, the College Board has taken it upon itself not only to describe what its scores mean, but what ACT’s scores mean. That’s different from 10 years ago, and different from the standard you should expect from a standardized testing agency.

Meaningful concordance is difficult to achieve, particularly when you have tests that are so different—not only the new SAT from the old SAT, but both SATs relative to the ACT, which, for example, continues to have a science test that the SAT lacks.

ACT cannot support or defend the use of any concordance produced by the College Board without our collaboration or the involvement of independent groups, and we strongly recommend against basing significant decisions—in admissions, course placement, accountability, and scholarships—on such an interim table. Those decisions require evidence and precision far beyond what has been offered to date.

ACT remains eager to engage the higher education community in conducting a rigorous concordance between scores on the ACT and the new SAT—when the data are available. That will be in about a year.

Until then, we urge you not to use the SAT Score Converter. And not to listen to messages suggesting the old SAT and the new SAT, or even the ACT, are comparable.

For me that’s unequivocal, to use another SAT word. 

May 9, 2016

Understanding Your ACT Scores and What to Do Next, the first of three free online events presented by ACT and Kaplan Test Prep, took place on Saturday, April 30. More than 2,000 participants, primarily students and parents, have benefited from the live event and the subsequent on-demand option.

There was lively virtual discussion throughout the 45-minute event. See below for a sampling of comments that were made in live chat.

ACT and Kaplan are partnering to provide these free live events and on-demand presentations to help students hone their skills before taking the ACT and to learn how to use all the information contained in their score reports. The two remaining events—Introduction to STEM Concepts (May 11) and Introduction to ELA Concepts (May 22)—focus on steps students can take to refresh and refine their knowledge and skills in math, science, English, and reading.

Find out more about the two upcoming events, and view the on-demand presentation from April 30 at the link below. All live events and on-demand presentations are available to anyone at no cost and with no further obligation.

LEARN MORE >

Comments from students during Understanding Your ACT Scores and What to Do Next:

Hey y'all! Looking forward to new information regarding my scores! :D

This is such a fantastic feature of the score report!  :)

Thank you for the presentation!

I love you speaker human.

Looking forward to learning more.

I find it amazing how you are talking and reading these [live chat messages] at the same time 

More than 365,000 students and parents were at the USA Science & Engineering Festival April 15-17, 2016, in Washington, D.C., and ACT was there to meet them.

ACT’s longtime advocacy for and commitment to rigorous education and assessment for students in the fields of science, technology, engineering, and mathematics (STEM) was on full display in an interactive booth staffed by 16 ACT team members, joined by a representative from the University of Iowa. Co-located in a booth across from ACT was STEM Premier®, which has partnered with ACT since 2014 to enhance opportunities for all students in the area of STEM.

The ACT team engaged with thousands of attendees from virtually every walk of life, all united by a passion for STEM and a desire to succeed in STEM-related classes and careers. Held every two years, the festival is a national grassroots effort to advance STEM education and inspire the next generation of scientists and engineers. Participating exhibitors, performers, speakers, partners, sponsors, and advisors represent a “who’s who” of science and engineering across the U.S.

“What we really wanted people to take away is that ‘ACT’ means more than just an admissions exam,” says Steve Kappler, Vice President Brand Experience. “We had the chance to share with students and parents the broad range of ACT resources and services that can help them understand their interests and academic abilities and how they align with their anticipated postsecondary or career choices.

“So many parents and students we visited with were interested to learn that the ACT is the only major college entrance exam to offer a science section and score, which contribute to an ACT STEM score and provide students with a STEM ranking,” Kappler added.

Also of interest to many visitors, says Kappler, was the ACT research report “The Condition of STEM 2015”—the third such report from ACT on the topic.

Through an ACT Sweepstakes at the event, ACT collected names for a drawing to award eight $500 ACT Scholarships to students in grades 11 and 12, and 100 vouchers for the ACT® test to students in grades 9 through 12.