ACT Learning and Professional Services
CRASE®
ACT’s Automated Essay Scoring Engine
Automated essay scoring uses computers to reliably emulate how humans score writing assessment responses.
Overview – What is CRASE?
CRASE is ACT’s automated essay scoring engine.
CRASE stands for Constructed Response Automated Scoring Engine. CRASE accepts open-ended text from examinees and evaluates their answers according to the ACT Writing Rubric. CRASE was originally developed in 2007, and ACT acquired CRASE in 2017.
CRASE uses natural language processing (NLP) and machine learning to model the behavior of human scorers. Its use in large-scale writing assessments can serve to uphold the highest standards of quality, control costs, and increase scoring efficiency.
When did ACT start using CRASE to score the writing test?
Starting in late 2022, ACT began using CRASE to help score international administrations of the ACT writing test. CRASE replaced one of the two human scorers traditionally used for scoring this test. Human scoring continues to be used to resolve any scoring discrepancies and to conduct quality control.
Shortly after that, CRASE began to be implemented in other administrations of the ACT writing test when taken online.
- Spring 2023 – ACT District Testing
- Winter 2023 – ACT National Testing
- Fall 2024 – ACT State Testing
Starting in April 2026, ACT National testing will use CRASE as the sole rater for the writing test. Other versions of the ACT taken online, such as ACT International, State, and District testing, continue to use one human rater paired with CRASE as the second rater.
How does CRASE work?
CRASE includes three main components: a preprocessor, a feature extractor, and a machine learning component.
During preprocessing, CRASE imports examinee essays and standardizes their format. Sentence spacing, word spacing, and paragraph breaks are made consistent across essays. Alternate forms of the essays, such as spell-corrected versions, are produced during preprocessing.
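The whitespace standardization described above can be illustrated with a short sketch. This is a hypothetical example of that kind of normalization, not ACT's actual preprocessing code; the rules shown (collapsing runs of spaces, standardizing paragraph breaks) are assumptions for illustration.

```python
import re

def normalize_essay(text: str) -> str:
    """Standardize spacing and paragraph breaks in a raw essay.

    Illustrative only: CRASE's real normalization rules are not public.
    """
    # Collapse runs of spaces and tabs within a line to a single space
    text = re.sub(r"[ \t]+", " ", text)
    # Standardize paragraph breaks: two or more newlines become exactly two
    text = re.sub(r"\n{2,}", "\n\n", text)
    return text.strip()

print(normalize_essay("First  sentence.   Second one.\n\n\n\nNew paragraph."))
# → First sentence. Second one.
#
#   New paragraph.
```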
The feature extractor calculates various numeric qualities (called features) of the essays. Essay features vary in complexity. Features can be as simple as the average length (in words) of an essay’s sentences or as complex as certain measures of lexical diversity. All features to be analyzed by CRASE were determined by English language arts (ELA) experts and aligned to established writing rubrics.
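The two example features mentioned above can be sketched in a few lines. This computes average sentence length and a type-token ratio (one simple measure of lexical diversity); these are illustrations of the kind of numeric features described, not CRASE's actual feature set.

```python
import re

def extract_features(essay: str) -> dict:
    """Compute two illustrative essay features (not CRASE's real ones)."""
    # Split into sentences on terminal punctuation, dropping empty pieces
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    # Lowercased word tokens
    words = re.findall(r"[a-z']+", essay.lower())
    return {
        # Average sentence length, in words
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
        # Unique words divided by total words: a simple lexical-diversity measure
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

print(extract_features("The cat sat. The cat ran away quickly!"))
# → {'avg_sentence_length': 4.0, 'type_token_ratio': 0.75}
```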
The machine learning component uses statistical models to relate an essay’s features to its expected human-assigned score. During training, CRASE determines the statistical model that best predicts human scores; during operational scoring, an essay’s features are entered into that model to produce the CRASE score.
For assessments where the examinee’s score is based on multiple writing traits (domains), such as the ACT writing test, an automated scoring model is produced for each trait.
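The train-then-score flow, including one model per trait, can be sketched as follows. This is a minimal illustration assuming a single feature and ordinary least-squares regression; the trait names, data, and choice of model are invented for the example and do not reflect CRASE's actual models.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: one feature value per essay, with a
# human-assigned score for each rubric trait (trait names are invented)
feature_values = [2.0, 4.0, 6.0, 8.0]
human_scores = {
    "ideas": [2, 3, 4, 5],
    "organization": [1, 2, 3, 4],
}

# Training: one model per trait, as described above
models = {trait: fit_linear(feature_values, ys) for trait, ys in human_scores.items()}

def predict(trait, x):
    """Operational scoring: enter a new essay's feature into the trait model."""
    slope, intercept = models[trait]
    return slope * x + intercept

print(predict("ideas", 5.0))  # → 3.5
```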
Selected References about Automated Scoring and CRASE
The CRASE team at ACT follows established best practices in producing automated scoring models. The following references shape those practices.
Automated Scoring
The Standards for Educational and Psychological Testing, by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (2014)
Guidelines for Technology-Based Assessment, by the International Test Commission and the Association of Test Publishers (2022)
Establishing Standards of Best Practice in Automated Scoring, by Scott Wood, Erin Yao, Lisa Haisfield, and Susan Lottridge (2021)
Public Perception and Communication around Automated Essay Scoring, from Handbook of Automated Scoring: Theory into Practice, by Scott Wood (2020)
Best Practices for Constructed-Response Scoring, by ETS (2021)
A Framework for Evaluation and Use of Automated Scoring, from Educational Measurement: Issues and Practice, by David M. Williamson, Xiaoming Xi, and F. Jay Breyer (2012)
Selected CRASE+ References
The following references illustrate uses of the CRASE+ engine on writing assessments.
CRASE+ ACT Writing Technical Report
CRASE5 for ACT Writing Technical Report
CRASE® Essay Scoring Model Performance Based on Proof-of-Concept and Operational Engine Trainings
Anchoring Validity Evidence for Automated Essay Scoring, from the Journal of Educational Measurement, by Mark D. Shermis (2022)
Communicating to the Public About Machine Scoring: What Works, What Doesn’t, by Mark D. Shermis and Susan Lottridge (2019)
Establishing a Crosswalk between the Common European Framework for Languages (CEFR) and Writing Domains Scored by Automated Essay Scoring, from Applied Measurement in Education, by Mark D. Shermis (2018)
The Impact of Anonymization for Automated Essay Scoring, from the Journal of Educational Measurement, by Mark D. Shermis, Sue Lottridge, and Elijah Mayfield (2015)
An Evaluation of Automated Scoring of NAPLAN Persuasive Writing, by the ACARA NASOP Research Team (2015)
NAPLAN Online Automated Scoring Research Program: Research Report, by Goran Lazendic, Julie-Anne Justus, and Stanley Rabinowitz (2018)
Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring, from Handbook of Automated Essay Evaluation: Current Applications and New Directions, by Susan Lottridge, E. Matthew Schulz, and Howard Mitzel (2013)
Contrasting State-of-the-Art Automated Scoring of Essays, from Handbook of Automated Essay Evaluation: Current Applications and New Directions, by Mark D. Shermis and Ben Hamner (2013)
To learn more, contact ACT at crase@act.org.