Date of Completion

5-6-2014

Embargo Period

5-5-2014

Keywords

Writing

Major Advisor

Natalie G. Olinghouse

Associate Advisor

Michael D. Coyne

Associate Advisor

D. Betsy McCoach

Field of Study

Educational Psychology

Degree

Doctor of Philosophy

Open Access

Campus Access

Abstract

Multi-tiered systems of support (MTSS) are being adopted to address the prevalence and incidence of struggling writers. Central to the success of MTSS is the use of assessment data for the purposes of universal screening and instructional diagnosis. Increasingly, states, districts, and schools are turning to benchmark writing assessments to provide such data. However, the use of benchmark writing assessment data is complicated by a lack of research comparing available methods for scoring writing. Thus, the present study sought to identify optimal scoring methods for conducting screening and instructional diagnosis within the context of a benchmark writing assessment by comparing three commonly used scoring methods. Texts composed by a sample of students who participated in a statewide benchmark writing assessment (n = 300) were assessed using human holistic scoring, automated analytic scoring via Project Essay Grade (PEG™), and several component-skills scoring measures. Receiver operating characteristic (ROC) curve analysis and logistic regression indicated that the multivariate model including students’ prior writing achievement and PEG Sum Score was the most accurate screening model. Confirmatory factor analysis and latent profile analysis indicated that PEG trait scores distinguished among students in terms of the magnitude of their writing difficulties, whereas component-skills measures revealed three distinct profiles of struggling writers. Collectively, the study findings have implications for the design of assessment systems within MTSS aimed at identifying at-risk students and diagnosing their instructional needs.