University of Southampton Institutional Repository

DNN Multimodal Fusion Techniques for Predicting Video Sentiment
Williams, Jennifer
3a1568b4-8a0b-41d2-8635-14fe69fbb360
Comanescu, Ramona
74f57d32-f69c-4e0f-85f1-9295bca2317c
Radu, Oana
139a656e-626a-417b-bed6-3abda87e9955
Tian, Leimin
9dcb9cd6-ea4f-4e2f-995f-fb8c782ee827

Williams, Jennifer, Comanescu, Ramona, Radu, Oana and Tian, Leimin (2018) DNN Multimodal Fusion Techniques for Predicting Video Sentiment. ACL 2018: 56th Annual Meeting of the Association for Computational Linguistics, Melbourne Convention and Exhibition Centre, Melbourne, Australia. 15 Jul 2018 - 20 Jul 2018. 64–72.

Record type: Conference or Workshop Item (Paper)

Abstract

We present our work on sentiment prediction using the benchmark MOSI dataset from the CMU-MultimodalDataSDK. Previous work on multimodal sentiment analysis has focused on input-level feature fusion or decision-level fusion. Here, we propose an intermediate-level feature fusion, which merges weights from each modality (audio, video, and text) during training, with subsequent additional training. Moreover, we tested principal component analysis (PCA) for feature selection. We found that applying PCA increases unimodal performance, and that multimodal fusion outperforms unimodal models. Our experiments show that our proposed intermediate-level feature fusion outperforms other fusion techniques, achieving the best performance with an overall binary accuracy of 74.0% on video+text modalities. Our work also improves feature selection for unimodal sentiment analysis, while proposing a novel and effective multimodal fusion architecture for this task.
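
To make the fusion idea concrete, the following is a minimal Python (PyTorch + scikit-learn) sketch of intermediate-level feature fusion for the video+text setting: each modality is encoded by its own subnetwork, the hidden representations are merged, and the merged representation receives further shared training. All layer sizes, feature dimensions, and names below are illustrative assumptions, not the authors' published configuration.

# Hypothetical sketch of intermediate-level fusion (dimensions illustrative).
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class IntermediateFusion(nn.Module):
    def __init__(self, video_dim, text_dim, hidden_dim=64):
        super().__init__()
        # Modality-specific subnetworks learn separate weights per modality.
        self.video_net = nn.Sequential(nn.Linear(video_dim, hidden_dim), nn.ReLU())
        self.text_net = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Shared layers trained on the merged representation: the
        # "subsequent additional training" stage described in the abstract.
        self.fusion_net = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # binary sentiment logit
        )

    def forward(self, video_x, text_x):
        merged = torch.cat([self.video_net(video_x), self.text_net(text_x)], dim=-1)
        return self.fusion_net(merged)

# PCA feature selection per modality before training (placeholder data).
raw_video = np.random.randn(100, 35).astype(np.float32)
video = PCA(n_components=20).fit_transform(raw_video).astype(np.float32)
text = np.random.randn(100, 300).astype(np.float32)

model = IntermediateFusion(video_dim=20, text_dim=300)
logits = model(torch.from_numpy(video), torch.from_numpy(text))  # shape (100, 1)

In this sketch, input-level fusion would instead concatenate the raw features before any modality-specific layers, and decision-level fusion would combine the outputs of fully separate unimodal models; intermediate-level fusion sits between the two.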

This record has no associated files available for download.

More information

Published date: 1 July 2018
Additional Information: Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), pages 64–72, Melbourne, Australia. Association for Computational Linguistics.
Venue - Dates: ACL 2018: 56th Annual Meeting of the Association for Computational Linguistics, Melbourne Convention and Exhibition Centre, Melbourne, Australia, 2018-07-15 - 2018-07-20

Identifiers

Local EPrints ID: 467422
URI: http://eprints.soton.ac.uk/id/eprint/467422
PURE UUID: f2fba47e-7fff-4eda-a1b5-cd788d3ca094
ORCID for Jennifer Williams: orcid.org/0000-0003-1410-0427

Catalogue record

Date deposited: 08 Jul 2022 16:32
Last modified: 23 Feb 2023 03:27

Contributors

Author: Jennifer Williams
Author: Ramona Comanescu
Author: Oana Radu
Author: Leimin Tian
