Explainable Learning Analytics: assessing the stability of student success prediction models by means of explainable AI
Tiukhova, Elena, Vemuri, Pavani, Flores, Nidia López, Islind, Anna Sigridur, Óskarsdóttir, María, Poelmans, Stephan, Baesens, Bart and Snoeck, Monique
(2024)
Explainable Learning Analytics: assessing the stability of student success prediction models by means of explainable AI.
Decision Support Systems, 182, [114229].
(doi:10.1016/j.dss.2024.114229).
Abstract
Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process to keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and determine how they want to learn. The best performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift, to investigate the stability of student success prediction models over time; (2) uses SHapley Additive exPlanations (SHAP) in a novel way to explore the stability of extracted feature importance rankings generated for these models; (3) generates new insights that emerge from stable features across cohorts, enabling teachers to determine study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.
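To make the methodological core of the abstract concrete (per-cohort prediction models, SHAP feature importance rankings, and stability and performance-drift checks across cohorts), the sketch below shows one possible workflow. It is not the authors' implementation: the data are synthetic, the feature names and the gradient-boosting model are hypothetical choices, and Spearman rank correlation stands in for whatever stability measure the paper actually uses.

```python
# Illustrative sketch only (not the authors' code): synthetic cohorts,
# hypothetical feature names, and an arbitrary model choice.
import numpy as np
import shap  # SHapley Additive exPlanations
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = ["logins", "video_time", "quiz_attempts", "forum_posts", "procrastination"]

def make_cohort(n=500, shift=0.0):
    """Synthetic cohort: engagement-style features and a pass/fail label."""
    X = rng.normal(shift, 1.0, size=(n, len(features)))
    y = (X[:, 0] + 0.5 * X[:, 2] - 0.3 * X[:, 4] + rng.normal(0, 1, n) > 0).astype(int)
    return X, y

cohorts = {"2021": make_cohort(), "2022": make_cohort(shift=0.3)}
models, importances = {}, {}

for year, (X, y) in cohorts.items():
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    models[year] = model
    # Global feature importance = mean absolute SHAP value per feature.
    shap_values = shap.TreeExplainer(model).shap_values(X)
    importances[year] = np.abs(shap_values).mean(axis=0)
    ranking = importances[year].argsort()[::-1]
    print(year, "ranking:", [features[i] for i in ranking])

# Stability of the extracted importance rankings across cohorts
# (Spearman's rho on the per-feature importances).
rho, _ = spearmanr(importances["2021"], importances["2022"])
print(f"Ranking stability (Spearman rho): {rho:.2f}")

# Simple performance-drift check: the earlier cohort's model scored on the later cohort.
X_new, y_new = cohorts["2022"]
auc = roc_auc_score(y_new, models["2021"].predict_proba(X_new)[:, 1])
print(f"AUC of the 2021 model on the 2022 cohort: {auc:.2f}")
```

In the study itself, such rankings would be computed for real course cohorts, and stability would be assessed alongside concept drift (changes in feature distributions) and performance drift (changes in predictive performance over time).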
Text
DSS_special_issue___major_revision (2) - Accepted Manuscript
Restricted to Repository staff only until 26 April 2026.
More information
e-pub ahead of print date: 26 April 2024
Published date: 8 May 2024
Keywords:
Explainable AI, Learning analytics, Model stability, Self-regulated learning
Identifiers
Local EPrints ID: 490778
URI: http://eprints.soton.ac.uk/id/eprint/490778
ISSN: 0167-9236
PURE UUID: e89c6ae0-7d7c-4daa-a6cf-c89e585d5216
Catalogue record
Date deposited: 06 Jun 2024 16:42
Last modified: 07 Jun 2024 01:38
Contributors
Author: Elena Tiukhova
Author: Pavani Vemuri
Author: Nidia López Flores
Author: Anna Sigridur Islind
Author: María Óskarsdóttir
Author: Stephan Poelmans
Author: Bart Baesens
Author: Monique Snoeck