University of Southampton Institutional Repository

CLEFT: Contextualised unified Learning of User Engagement in Video Lectures with Feedback


Roy, Sujit, Gaur, Vishal, Raza, Haider and Jameel, Shoaib (2023) CLEFT: Contextualised unified Learning of User Engagement in Video Lectures with Feedback. IEEE Access, 11, 17707-17720. (doi:10.1109/ACCESS.2023.3245982).

Record type: Article

Abstract

Predicting contextualised engagement in videos is a long-standing problem that has popularly been attempted by exploiting the number of views or likes using different computational methods. The recent decade has seen a boom in online learning resources, and during the pandemic there was an exponential rise in online teaching videos with little quality control. As a result, we face two key challenges: first, how to decide which lecture videos are engaging enough to hold the listener's attention and increase productivity, and second, how to automatically provide constructive feedback that the content creator can use to improve the content. At the same time, there has been a steep rise in computational methods for predicting a user engagement score. In this paper, we propose a new unified model, CLEFT ("Contextualised unified Learning of user Engagement in video lectures with Feedback"), which learns from features extracted from freely available public online teaching videos and provides feedback on the video along with a user engagement score. Given the complexity of the task, our unified framework employs different pre-trained models working together as an ensemble of classifiers. Our model exploits a range of multi-modal features to model the complexity of language, context-agnostic information, the textual emotion of the delivered content, animation, the speaker's pitch, and speech emotions. Our results support the hypothesis that the proposed model can detect engagement reliably, and the feedback component gives the content creator useful insights for further improving the content.
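The abstract describes an ensemble of pre-trained, modality-specific classifiers whose outputs are combined into a single engagement score, plus a feedback signal for the creator. The record itself does not give the architecture, fusion weights, or feedback mechanism, so the following is only a minimal late-fusion sketch in Python; every modality name, score, and weight below is a hypothetical placeholder, not the authors' implementation.

# Hypothetical late-fusion sketch: each modality-specific pre-trained
# classifier is assumed to emit an engagement probability in [0, 1],
# and a weighted ensemble combines them into one score.

# Placeholder per-modality engagement probabilities for one lecture
# video (e.g. from a contextual language model on the transcript, a
# textual-emotion classifier, a speech-emotion recogniser, speaker-pitch
# features, and visual/animation features).
modality_scores = {
    "text_language":  0.81,
    "text_emotion":   0.64,
    "speech_emotion": 0.72,
    "pitch":          0.58,
    "animation":      0.69,
}

# Assumed fusion weights (in practice these would be learned or tuned
# on validation data; these values are illustrative only).
weights = {
    "text_language": 0.30, "text_emotion": 0.15,
    "speech_emotion": 0.25, "pitch": 0.10, "animation": 0.20,
}

def fuse(scores: dict, w: dict) -> float:
    """Weighted average of per-modality engagement probabilities."""
    total_w = sum(w.values())
    return sum(scores[m] * w[m] for m in scores) / total_w

engagement = fuse(modality_scores, weights)
print(f"predicted engagement score: {engagement:.3f}")

# A simple feedback heuristic (again hypothetical): flag the weakest
# modalities so the content creator knows what to improve.
for modality, score in sorted(modality_scores.items(), key=lambda kv: kv[1])[:2]:
    print(f"feedback: consider improving '{modality}' (score {score:.2f})")

Run as-is, this prints an overall score of 0.715 and flags pitch and textual emotion as the weakest channels; the point is only to make the ensemble-plus-feedback idea concrete, not to reproduce CLEFT.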

This record has no associated files available for download.

More information

Accepted/In Press date: 8 February 2023
e-pub ahead of print date: 15 February 2023
Published date: 23 February 2023
Additional Information: Funding Information: This work was supported by Brainalive Research Pvt. Ltd., Kanpur, India (http://braina.live/). The work of Haider Raza was supported by the Economic and Social Research Council (ESRC)-funded Business and Local Government Data Research Centre under Grant ES/S007156/1. Publisher Copyright: © 2013 IEEE.
Keywords: BERT, NLP, contextual language models, emotions, text-based emotions, video engagement

Identifiers

Local EPrints ID: 476772
URI: http://eprints.soton.ac.uk/id/eprint/476772
ISSN: 2169-3536
PURE UUID: bf0c1cf0-d6eb-4ba5-982a-d5a9133fd909

Catalogue record

Date deposited: 15 May 2023 16:59
Last modified: 17 Mar 2024 02:12


Contributors

Author: Sujit Roy
Author: Vishal Gaur
Author: Haider Raza
Author: Shoaib Jameel



