Using automated text analysis to evaluate students’ conceptual understanding


Author(s): Goncher, Andrea; Boles, Wageeh W.; Jayalath, Dhammika
Date(s)

01/12/2014

Abstract

Background: A major challenge in assessing students’ conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students’ incomplete understanding of a topic or concept, based on which distractor the student selects. However, these tests fail to provide the critical information that uncovers the how and why of students’ reasoning behind their multiple-choice selections. Open-ended or structured-response questions are one method for capturing higher-level thinking, but they are often costly in terms of the time and attention required to properly assess student responses.

Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g., students’ written explanations and reasoning for their multiple-choice selections.

Design/Method: We incorporated an open-response component into an online signals and systems multiple-choice test to capture written explanations of students’ selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing the results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated the strengths and weaknesses of both the human and automated approaches.

Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, these associations were not anticipated by or included in the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in identifying concepts and their relationships in large textual data sets. We also identified several challenges in using automated analysis, as well as in manual and computer-assisted analysis.

Conclusions: This study highlighted the usefulness of incorporating and analysing students’ reasoning or explanations for understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective practice.
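The study relies on proprietary lexical analysis packages (Leximancer and NVivo) to surface concepts and the relationships between them in students’ written explanations. As a rough, hypothetical illustration of that style of analysis, the sketch below (not the authors’ pipeline; the example responses, the use of scikit-learn, and the term thresholds are all assumptions) builds a simple term co-occurrence view over free-text explanations:

```python
# Minimal sketch of concept/co-occurrence extraction from student explanations.
# Illustrative only; Leximancer and NVivo perform far richer analyses.
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical free-text explanations for a signals-and-systems question.
responses = [
    "The system is linear because scaling the input scales the output.",
    "I chose convolution because the output is the input convolved with the impulse response.",
    "The signal is periodic, so the Fourier series applies rather than the transform.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)   # document-term count matrix
terms = vectorizer.get_feature_names_out()

# Term co-occurrence: entry (i, j) counts how often terms i and j
# appear together in the same explanation.
cooc = (doc_term.T @ doc_term).toarray()

for i, term in enumerate(terms):
    linked = [terms[j] for j in cooc[i].argsort()[::-1]
              if j != i and cooc[i, j] > 0][:3]
    print(f"{term}: {', '.join(linked) or '(no co-occurring terms)'}")
```

Dedicated tools go considerably further (concept ranking, relevance weighting, thematic mapping), but the raw co-occurrence counts above capture the basic mechanism of detecting which conceptual terms students mention together.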

Format

application/pdf

Identifier

http://eprints.qut.edu.au/79509/

Relation

http://eprints.qut.edu.au/79509/1/AAEE_2014_266_final_Text_analysis.pdf

Goncher, Andrea, Boles, Wageeh W., & Jayalath, Dhammika (2014) Using automated text analysis to evaluate students’ conceptual understanding. In Proceedings of the Australasian Association for Engineering Education (AAEE2014), Wellington, NZ.

Rights

Copyright 2014 [Please consult the Authors]

Source

School of Electrical Engineering & Computer Science; Science & Engineering Faculty

Keywords #099900 OTHER ENGINEERING #Learning analytics #automated lexical analysis #conceptual understanding #HERN
Type

Conference Paper