Multimodal learning analytics provides researchers with new tools and techniques to capture different types of data from complex learning activities in dynamic learning environments. This paper investigates high-fidelity, synchronised multimodal recordings of small groups of learners interacting, captured from diverse sensors that include computer vision, user-generated content, and data from the learning objects themselves (physical computing components). We processed and extracted different aspects of the students' interactions to answer the following question: which features of student group work are good predictors of team success in open-ended tasks with physical computing? To answer this question, we explored different supervised machine learning approaches (traditional and deep learning techniques) to analyse the data coming from multiple sources. The results illustrate that state-of-the-art computational techniques can be used to generate insights into the "black box" of learning in students' project-based activities. The features identified from the analysis show that the distance between learners' hands and faces is a strong predictor of the quality of students' artefacts, which may indicate the value of student collaboration. Our research shows that newer approaches such as neural networks, as well as more traditional regression approaches, can both be used to classify MMLA data, and that each has advantages and disadvantages depending on the research questions and contexts being investigated. The work presented here is a significant contribution towards developing techniques to automatically identify the key aspects of student success in project-based learning environments, and ultimately towards helping teachers provide appropriate and timely support to students in these fundamental aspects.
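The abstract describes training supervised classifiers on extracted interaction features, such as the distance between learners' hands and faces, to predict artefact quality. As a minimal illustrative sketch only (not the paper's actual pipeline), the example below fits a simple logistic regression by stochastic gradient descent on synthetic data, where a single hypothetical "hand–face distance" feature is noisily related to a binary quality label; the data, feature direction, and hyperparameters are all assumptions made for illustration.

```python
import math
import random

random.seed(0)

# Synthetic stand-in data: each sample is one feature, a hypothetical mean
# distance between a learner's hands and face (arbitrary units).
# Label 1 = high-quality artefact, 0 = low-quality. The direction and
# strength of the relationship here are illustrative, not from the paper.
def make_data(n=200):
    data = []
    for _ in range(n):
        dist = random.uniform(0.0, 1.0)
        label = 1 if dist + random.gauss(0, 0.1) > 0.5 else 0
        data.append((dist, label))
    return data

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logreg(data, lr=0.5, epochs=300):
    # Per-sample SGD on the log-loss for a one-feature logistic model.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = p - y            # dLoss/dz for log-loss with sigmoid
            w -= lr * grad * x
            b -= lr * grad
    return w, b

data = make_data()
w, b = train_logreg(data)
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
acc = sum(p == y for p, (_, y) in zip(preds, data)) / len(data)
print(f"training accuracy: {acc:.2f}")
```

In practice the paper compares such traditional regression-style models against deep learning approaches over many multimodal features; this sketch only shows the basic supervised-learning step of mapping one extracted feature to a success label.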
|Title:||Supervised Machine Learning in Multimodal Learning Analytics for Estimating Success in Project-based Learning|
|Publication date:||2018|
|Appears in collections:||1.1 Journal Article/Article|