CRAT: Constructed Response Analysis Tool


Because human scoring of constructed responses is costly, CRAT was designed as an automatic scoring tool that can reduce the need for human raters and increase scoring consistency. It is particularly well suited to exploring writing quality as it relates to summary writing. CRAT is an easy-to-use constructed response analysis engine that includes over 700 indices related to lexical sophistication, cohesion, and source text/summary text overlap. It calculates indices related to the linguistic and semantic similarities between a source text and a constructed response, the linguistic sophistication of the response, and text properties (e.g., length and syntactic categories). It is freely available, cross-platform, and accessed via a graphical user interface (GUI). CRAT takes plain text files as input (it processes all plain text files in a given folder) and produces a comma-separated values (.csv) spreadsheet that any spreadsheet software can read.
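The batch workflow described above (a folder of plain text files in, one .csv spreadsheet of index values out) can be sketched as follows. This is an illustrative assumption of how such a pipeline might look, not CRAT's actual implementation; the folder path, output filename, and the toy indices computed here are placeholders.

```python
import csv
import os

def score_folder(folder, out_csv="indices.csv"):
    """Read every .txt file in `folder`, compute toy indices for each
    response, and write one row per file to a CSV spreadsheet.
    (CRAT itself reports over 700 indices; these two are placeholders.)"""
    rows = []
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(folder, name), encoding="utf-8") as f:
            tokens = f.read().split()
        rows.append({
            "filename": name,
            "num_tokens": len(tokens),                           # text length
            "num_types": len(set(t.lower() for t in tokens)),    # vocabulary size
        })
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["filename", "num_tokens", "num_types"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The resulting spreadsheet has one row per response file, which is the same one-row-per-text layout CRAT's output takes.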

Measures for CRAT

  1. Linguistic and Semantic Similarities — These indices measure the similarity between a source text and a constructed response. They include lexical similarity calculated using keyword overlap and synonym overlap, latent semantic analysis (LSA) similarity, and phrasal similarity.
  2. Linguistic Sophistication — The constructed response sophistication indices include psycholinguistic word information (e.g., concreteness and familiarity), lexical frequency and range (how widely words occur across texts) based on the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA), and syntactic categories (e.g., numbers of adjectives and nouns).
  3. Text Properties — These indices measure surface features of a response, such as its length, along with bigram frequency, bigram proportions, and bigram accuracy. Such features have been shown to be predictive of human judgments of essay quality.
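As a toy illustration of the three index families above, the sketch below hand-rolls one simplified example from each: keyword overlap between a source and a response, mean word frequency looked up in a hypothetical frequency table (standing in for BNC/COCA counts), and bigram proportions. These are simplified stand-ins for the kinds of measures CRAT computes, not its actual formulas.

```python
from collections import Counter

def keyword_overlap(source, response):
    """Proportion of source word types that also appear in the response
    (a crude similarity index; CRAT's overlap measures are more refined)."""
    src = set(source.lower().split())
    resp = set(response.lower().split())
    return len(src & resp) / len(src) if src else 0.0

def mean_word_frequency(text, freq_table):
    """Average corpus frequency of the words in `text`.
    `freq_table` is a hypothetical {word: count} dict, e.g. derived from
    BNC or COCA; unseen words count as frequency 0."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(freq_table.get(w, 0) for w in words) / len(words)

def bigram_proportions(text):
    """Relative frequency of each adjacent word pair (bigram) in `text`."""
    words = text.lower().split()
    bigrams = list(zip(words, words[1:]))
    counts = Counter(bigrams)
    total = len(bigrams)
    return {bg: n / total for bg, n in counts.items()}
```

Lower mean frequency tends to signal rarer, more sophisticated vocabulary, which is the intuition behind frequency-based sophistication indices.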

There are over 700 indices measured in CRAT. For a full listing, download the CRAT Index Spreadsheet (hyperlink to excel sheet).

Validation of CRAT

CRAT is a relatively new tool. In an initial study, Crossley et al. (2016) compared CRAT output with human scoring to test the accuracy of the automated tool, analyzing a dataset of chemistry responses written within ChemVLab+. The results identified specific linguistic features that predicted human ratings of response accuracy. Though preliminary, these findings suggest that CRAT may be a useful NLP tool for assessing students' constructed responses.

References/further reading

Crossley, S. A., Kyle, K., Davenport, J., & McNamara, D. S. (2016). Automatic assessment of constructed response data in a chemistry tutor. Proceedings of the 9th International Educational Data Mining (EDM) Society Conference (pp. 336-340).