Title

Employing contemporary psychometric methods to establish construct validity for a large-scale technology performance assessment

Date of Completion

January 2004

Keywords

Education, Tests and Measurements|Psychology, Psychometrics|Education, Technology of

Degree

Ph.D.

Abstract

Basic computer technology skills are important for students and teachers at all levels of the educational system. Despite this importance, much remains unknown about the relationship between technology competency and other educational constructs. A necessary first step in exploring these relationships is to establish a measurable definition of basic technology competency. Recent educational technology literature advocates grounding technology competency in an individual's ability to use technology, rather than merely in knowledge about technology, suggesting that a performance-based assessment may be an appropriate way to measure it.

Performance-based assessment promises to connect teaching and learning to a real-world context. However, implementing and validating these assessments raises significant psychometric issues. Current testing theories are unable to model the complexities embedded within the scope of performance assessments. As a result, validity arguments have typically been less extensive than those required by the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).

In this study, a sample of 292 undergraduate students completed a 19-task basic technology performance assessment designed to measure an individual's level of competency with basic computer technology in the context of using Microsoft Word, Excel, and PowerPoint. Student responses were modeled using two item response theory models: a standard two-parameter logistic model and a modified model incorporating a person-specific testlet effect. Model estimation was conducted using a Markov chain Monte Carlo method, the Metropolis-Hastings algorithm within Gibbs sampling, implemented in the computer algebra system Mathematica.

Study results examine the fit of the two models, focusing on the standard item response theory assumptions of monotonicity (the probability of correctly completing a task increases with an individual's ability), unidimensionality (the items constituting a test measure a single ability), and local item independence (an examinee's responses to individual test items are independent of responses to other items on the test), to establish evidence of construct validity and reliability for the scores from the basic technology competency assessment.
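For reference, a minimal sketch of the two models named above, in one common parameterization (the dissertation's own notation and priors may differ). Under the standard two-parameter logistic (2PL) model, the probability that examinee i correctly completes task j is

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}

where \theta_i is the examinee's ability, a_j the task's discrimination, and b_j its difficulty. A person-specific testlet effect \gamma_{i\,d(j)}, shared by all tasks in the testlet d(j) that contains task j, can be incorporated as

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j - \gamma_{i\,d(j)})\}}

so that responses within a testlet remain correlated even after conditioning on \theta_i, relaxing local item independence within testlets.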
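The dissertation reports implementing estimation in Mathematica; that code is not reproduced here. As an illustrative sketch only, the Python below outlines Metropolis-Hastings-within-Gibbs sampling for the standard 2PL model. The N(0, 1) ability prior, the priors on item parameters, and all proposal step sizes are assumptions for illustration, not the author's specification.

import numpy as np

rng = np.random.default_rng(0)

def loglik_person(x_i, theta_i, a, b):
    """Log-likelihood of one examinee's 0/1 response vector under the 2PL."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_i - b)))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return np.sum(x_i * np.log(p) + (1 - x_i) * np.log(1 - p))

def mh_within_gibbs(X, n_iter=2000, step=0.5):
    """Alternate random-walk Metropolis updates over person and item blocks."""
    n, J = X.shape
    theta, a, b = np.zeros(n), np.ones(J), np.zeros(J)
    for _ in range(n_iter):
        # Person block: update each ability theta_i with an assumed N(0,1) prior.
        for i in range(n):
            prop = theta[i] + step * rng.standard_normal()
            log_r = (loglik_person(X[i], prop, a, b) - 0.5 * prop**2) \
                  - (loglik_person(X[i], theta[i], a, b) - 0.5 * theta[i]**2)
            if np.log(rng.uniform()) < log_r:
                theta[i] = prop
        # Item block: joint update of (log a_j, b_j); sampling log a_j keeps a_j > 0.
        for j in range(J):
            la, bj = np.log(a[j]), b[j]
            la_p = la + 0.1 * rng.standard_normal()
            b_p = bj + 0.1 * rng.standard_normal()

            def item_ll(aj, bj_):
                p = 1.0 / (1.0 + np.exp(-aj * (theta - bj_)))
                p = np.clip(p, 1e-12, 1 - 1e-12)
                return np.sum(X[:, j] * np.log(p) + (1 - X[:, j]) * np.log(1 - p))

            # Assumed priors: log a_j ~ N(0, 1), b_j ~ N(0, 2^2).
            log_r = (item_ll(np.exp(la_p), b_p) - 0.5 * la_p**2 - b_p**2 / 8) \
                  - (item_ll(a[j], bj) - 0.5 * la**2 - bj**2 / 8)
            if np.log(rng.uniform()) < log_r:
                a[j], b[j] = np.exp(la_p), b_p
    return theta, a, b

if __name__ == "__main__":
    # Example at the study's scale: 292 examinees by 19 tasks of simulated data.
    n, J = 292, 19
    true_theta = rng.standard_normal(n)
    true_a = rng.lognormal(0.0, 0.3, J)
    true_b = rng.standard_normal(J)
    P = 1.0 / (1.0 + np.exp(-true_a * (true_theta[:, None] - true_b)))
    X = (rng.uniform(size=(n, J)) < P).astype(int)
    theta_hat, a_hat, b_hat = mh_within_gibbs(X)

In practice the first portion of the chain would be discarded as burn-in and parameter estimates taken as posterior means over the retained draws; the single final state returned here keeps the sketch short.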
