Designing Conversational Evaluation Tools: A Comparison of Text and Voice Modalities to Improve Response Quality in Course Evaluations
Journal
Proceedings of the ACM on Human-Computer Interaction (PACMHCI)
ISSN
2573-0142
Type
journal article
Date Issued
2022-11-11
Author(s)
Käser, Tanja
Koedinger, Kenneth R.
Research Team
IWI6
Abstract
Conversational agents (CAs) provide opportunities for improving the interaction in evaluation surveys. To investigate if and how a user-centered conversational evaluation tool impacts users' response quality and their experience, we built EVA, a novel conversational course evaluation tool for educational scenarios. In a field experiment with 128 students, we compared EVA against a static web survey. Our results confirm prior findings from the literature about the positive effect of conversational evaluation tools in the domain of education. We then investigated the differences between a voice-based and a text-based conversational human-computer interaction with EVA in the same experimental set-up. Against our prior expectation, students in the voice-based condition answered with higher information quality but a lower quantity of information compared to the text-based modality. Our findings indicate that using a conversational CA (voice- or text-based) results in higher response quality and a better user experience than a static web survey interface.
Language
English
Keywords
conversational agents
course evaluations
educational applications
voice interfaces
HSG Classification
contribution to scientific community
Refereed
Yes
Publisher
Association for Computing Machinery
Publisher place
New York, NY, USA
Volume
6
Number
CSCW2
Start page
1
End page
27
Pages
27
Eprints ID
268189
File(s)
Open access
Name
JML_901.pdf
Size
1.76 MB
Format
Adobe PDF
Checksum (MD5)
4157ca4e390205637ebdb81d463df7ed