The influence of computer experience on the agreement between user preference and performance rankings in usability testing
Psychology, Industrial|Computer Science
In the user-centered approach to software design and development, end-users act as evaluators in usability tests at various points during the development life-cycle. Some usability professionals argue that these usability tests simply reflect the preferences of the participants and should not be used in place of objective performance measures. In an attempt to strengthen the user-centered approach to software usability testing, the present study examined the relationship between computer experience, user preference, and performance measures. User preference and performance data were collected in a comparative usability test of three user interface designs. A simple data retrieval task was used to test three user interfaces: (1) a command line, (2) a listbox, and (3) a 2-level menu. Users (N = 36) rated and ranked the three user interfaces according to their preferences. To measure user performance, the computer recorded the time needed to complete each of the twenty database searches users performed with each of the three interfaces. User preference and performance were found to be significantly and positively correlated. User preference and performance data were then ranked from most preferred to least preferred and from fastest to slowest average search completion time, respectively. The preference and performance rankings were compared and assigned a "match score" of 1 for agreement between rankings and 0 for non-agreement. Further, volunteers (N = 241) completed a survey of hardware and software use and familiarity. A factor analysis was performed on the data to establish "computer experience" factors. The factor analysis yielded four factors of "computer experience": (1) software familiarity, (2) general hardware familiarity, (3) technical hardware familiarity, and (4) Internet/communications familiarity.
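The "match score" described above is a simple binary indicator of agreement between two orderings. As an illustration only (the function and the example rankings below are hypothetical, not taken from the study's data), a minimal sketch in Python:

```python
def match_score(pref_ranking, perf_ranking):
    """Return 1 if the preference ordering agrees with the
    performance ordering, 0 otherwise (binary agreement)."""
    return 1 if list(pref_ranking) == list(perf_ranking) else 0

# Hypothetical participant: prefers the listbox most and the
# command line least, and was also fastest with the listbox.
pref = ["listbox", "2-level menu", "command line"]  # most to least preferred
perf = ["listbox", "2-level menu", "command line"]  # fastest to slowest

match_score(pref, perf)  # orderings agree, so the score is 1

# A participant who preferred the menu but was fastest with the
# listbox would instead receive a match score of 0.
```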
The four "computer experience" factors were tested as predictors of the dichotomous "match score" to test the hypothesis that "computer experience" influences preference-performance agreement. Logistic regression analyses of data from participants (N = 36) who compared the three user interfaces showed that a user's "computer experience" influences the probability of agreement between user preference and performance rankings. The results of the present study suggest that human factors professionals should take users' "computer experience" into account before making performance-related design decisions based solely on user preference data.
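A logistic regression of the kind described models the probability of a match score of 1 as a logistic function of the four experience factor scores. A minimal sketch, with made-up coefficients purely for illustration (the study's actual estimates are not reproduced here):

```python
import math

def p_agreement(factor_scores, intercept, coefs):
    """P(match = 1) under a logistic regression model:
    p = 1 / (1 + exp(-(b0 + b1*x1 + ... + b4*x4)))."""
    z = intercept + sum(b * x for b, x in zip(coefs, factor_scores))
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients for the four experience factors:
# software, general hardware, technical hardware, Internet/communications.
b0 = -0.5
b = [0.8, 0.3, 0.2, 0.4]

low_experience = p_agreement([-1, -1, -1, -1], b0, b)
high_experience = p_agreement([1, 1, 1, 1], b0, b)
# With positive coefficients, higher experience factor scores imply a
# higher predicted probability of preference-performance agreement.
```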
Kissel, George Vincent, "The influence of computer experience on the agreement between user preference and performance rankings in usability testing" (1997). Doctoral Dissertations. AAI9730889.