Title: Designing Conversational Evaluation Tools: A Comparison of Text and Voice Modalities to Improve Response Quality in Course Evaluations
Authors: Wambsganss, Thiemo; Zierau, Naim; Söllner, Matthias; Käser, Tanja; Koedinger, Kenneth R.; Leimeister, Jan Marco
Date issued: 2022-11-11
Date available: 2023-04-13
Handle: https://www.alexandria.unisg.ch/handle/20.500.14171/108082
DOI: 10.1145/3555619
Type: journal article
Language: en
Keywords: conversational agents; course evaluations; educational applications; voice interfaces

Abstract: Conversational agents (CAs) offer opportunities to improve the interaction in evaluation surveys. To investigate whether and how a user-centered conversational evaluation tool affects users' response quality and experience, we built EVA, a novel conversational course evaluation tool for educational scenarios. In a field experiment with 128 students, we compared EVA against a static web survey. Our results confirm prior findings from the literature on the positive effect of conversational evaluation tools in the domain of education. We then investigated the differences between voice-based and text-based conversational human-computer interaction with EVA in the same experimental setup. Contrary to our expectations, students in the voice-based condition answered with higher information quality but a lower quantity of information than those in the text-based condition. Our findings indicate that using a CA (voice- or text-based) results in higher response quality and a better user experience than a static web survey interface.