Read between the lines: an annotation tool for multimodal data for learning

Conference Paper (2019)
Author(s)

Daniele Di Mitri (Open University of the Netherlands)

Jan Schneider (DIPF - Leibniz Institute for Research and Information in Education)

Roland Klemke (Open University of the Netherlands)

Marcus Specht (Open University of the Netherlands)

Hendrik Drachsler (DIPF - Leibniz Institute for Research and Information in Education, Open University of the Netherlands)

Affiliation
External organisation
DOI
https://doi.org/10.1145/3303772.3303776
Publication Year
2019
Language
English
Pages (from-to)
51-60
ISBN (electronic)
9781450362566

Abstract

This paper introduces the Visual Inspection Tool (VIT), which supports researchers in annotating multimodal data and in processing and exploiting it for learning purposes. While most existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT flexibly supports data annotation for different types of learning tasks captured with a customisable set of sensors. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time intervals and adding annotations to those intervals; 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was so far missing from the available tools for MMLA research. By filling this gap, we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline: a toolkit that orchestrates the use and application of various MMLA tools.

Metadata only record. There are no files for this record.