Date of Completion

1-8-2016

Embargo Period

1-6-2016

Advisors

Jim Magnuson, Chi-Ming Chen

Field of Study

Psychology

Degree

Master of Arts

Open Access

Open Access

Abstract

A remarkable amount of information is conveyed by the human voice. For example, a speaker's emotional state is conveyed by vocal cues such as pitch and intensity, though, as is true for other speech qualities, affect does not map onto the auditory signal in a one-to-one fashion. Despite the widespread use of cell phone technology, little is known about how emotional states are conveyed during cell phone transmissions. In this study, listeners judged speech samples for their affective qualities. Samples were recorded simultaneously on a microphone and a cell phone, and endpoints of two emotional continua (neutral to happy, neutral to angry) were elicited from each of two female talkers. Continua were created using the STRAIGHT algorithm (H. Kawahara et al., 2008) to produce ten total intensity levels. We anticipated few differences in reaction time (RT) or participant response as a function of recording type (microphone vs. cell phone). Logistic regression revealed no differences in participant response between recording types for three of the four conditions, largely supporting our predictions, though for one talker, the Neutral-Happy microphone stimuli were judged as happier at two of the ten intensity levels. We also predicted slower reaction times for stimuli at the middle of each continuum, reflecting their greater ambiguity. Multilevel model analyses revealed that the data were best fit by a quadratic model, with slower RTs for cell phone stimuli across all blocks. Overall, the results suggest that acoustic cues to happiness and anger are largely retained in a cell phone transmission, and they highlight the utility of speech samples collected over cell phones.
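As a rough illustration of the two analyses named above, the sketch below fits a logistic regression of emotion judgments on morph step and recording type, and a mixed-effects (multilevel) model with a quadratic term for RT, using statsmodels in Python. The data file and all column names (trials.csv, response, rt, step, recording, subject) are hypothetical placeholders rather than the thesis's actual variables; treat this as a minimal sketch of the modeling approach, not the study's analysis code.

import pandas as pd
import statsmodels.formula.api as smf

# One row per trial: binary emotion judgment (0/1), RT in ms,
# morph step (1-10), recording type, and participant ID.
# File and column names are hypothetical.
trials = pd.read_csv("trials.csv")

# Logistic regression: does recording type (cell phone vs. microphone)
# shift emotion judgments beyond the effect of morph step?
logit_fit = smf.logit("response ~ step + C(recording)", data=trials).fit()
print(logit_fit.summary())

# Multilevel model with a quadratic term for morph step and random
# intercepts by participant: slower RTs near the ambiguous middle of
# each continuum would appear as a negative quadratic coefficient.
rt_fit = smf.mixedlm(
    "rt ~ step + I(step ** 2) + C(recording)",
    data=trials,
    groups=trials["subject"],
).fit()
print(rt_fit.summary())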

Major Advisor

Inge-Marie Eigsti
