The University of Southampton
University of Southampton Institutional Repository

GAMED: knowledge adaptive multi-experts decoupling for multimodal fake news detection

arXiv
Shen, Lingzhi
Long, Yunfei
6652ac59-2950-4738-b001-5e187655b0d8
Cai, Xiaohao
de483445-45e9-4b21-a4e8-b0427fc72cee
Razzak, Imran
85c57ead-8a63-4aec-bba3-559a43dd5888
Chen, Guanming
a5c50691-6b41-4669-b2c1-01a95d1be450
Liu, Kang
806457ef-1b75-4f94-beaf-576b7f3934b9
Jameel, Shoaib
ae3c588e-4a59-43d9-af41-ea30d7caaf96

Record type: UNSPECIFIED

Abstract

Multimodal fake news detection often involves modelling heterogeneous data sources, such as vision and language. Existing detection methods typically rely on fusion effectiveness and cross-modal consistency to model the content, which complicates understanding of how each modality affects prediction accuracy. Additionally, these methods are primarily based on static feature modelling, making it difficult to adapt to the dynamic changes in, and relationships between, different data modalities. This paper develops a novel approach, GAMED, for multimodal modelling, which focuses on generating distinctive and discriminative features through modal decoupling to enhance cross-modal synergies, thereby optimizing overall detection performance. GAMED leverages multiple parallel expert networks to refine features and pre-embeds semantic knowledge to improve the experts' ability to select information and share viewpoints. The feature distribution of each modality is then adaptively adjusted based on the respective experts' opinions. GAMED also introduces a novel classification technique that dynamically manages the contributions of different modalities while improving the explainability of decisions. Experimental results on the Fakeddit and Yang datasets demonstrate that GAMED outperforms recently developed state-of-the-art models. The source code can be accessed at https://github.com/slz0925/GAMED.
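The abstract describes a design in which parallel modality experts each produce an opinion, and a gating mechanism adaptively weights their contributions to the final decision. The following is a minimal illustrative sketch of that general mixture-of-experts idea only, not the authors' implementation; the function names, the three-expert setup, and the example numbers are all hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_expert_opinions(expert_scores, gate_logits):
    """Fuse per-expert (fake, real) probabilities with adaptive gate weights.

    expert_scores: one (p_fake, p_real) pair per modality expert.
    gate_logits: one relevance logit per expert; in a real model these
    would come from a learned gating network, here they are plain numbers.
    """
    weights = softmax(gate_logits)
    fused_fake = sum(w * s[0] for w, s in zip(weights, expert_scores))
    fused_real = sum(w * s[1] for w, s in zip(weights, expert_scores))
    return weights, (fused_fake, fused_real)

# Three hypothetical experts, e.g. text, image, and external knowledge.
scores = [(0.9, 0.1), (0.4, 0.6), (0.7, 0.3)]
weights, fused = fuse_expert_opinions(scores, gate_logits=[2.0, 0.5, 1.0])
print(weights, fused)
```

Because the gate weights sum to one, the fused output remains a valid probability pair, and inspecting the per-expert weights gives a rough sense of which modality drove a given prediction.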

Text
2412.12164v1 - Author's Original
Available under License Creative Commons Attribution.
Download (2MB)

More information

Published date: 11 December 2024
Keywords: cs.LG, cs.AI

Identifiers

Local EPrints ID: 497977
URI: http://eprints.soton.ac.uk/id/eprint/497977
PURE UUID: e9aaf1aa-a0aa-4f1b-8965-054f89df3810
ORCID for Xiaohao Cai: ORCID iD orcid.org/0000-0003-0924-2834

Catalogue record

Date deposited: 05 Feb 2025 17:54
Last modified: 06 Feb 2025 03:01

Contributors

Author: Lingzhi Shen
Author: Yunfei Long
Author: Xiaohao Cai ORCID iD
Author: Imran Razzak
Author: Guanming Chen
Author: Kang Liu
Author: Shoaib Jameel



Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2
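Since the repository exposes an OAI-PMH 2.0 endpoint at the base URL above, this record's metadata can be harvested with standard OAI-PMH requests. A small sketch of building such request URLs, using only the `Identify` and `ListRecords` verbs defined by the OAI-PMH 2.0 protocol (no network access is performed here):

```python
from urllib.parse import urlencode

# Base URL as stated by the repository.
OAI_BASE = "http://eprints.soton.ac.uk/cgi/oai2"

def oai_request_url(verb, **params):
    """Build an OAI-PMH 2.0 request URL for the given verb and parameters."""
    return OAI_BASE + "?" + urlencode({"verb": verb, **params})

# Standard protocol verbs; oai_dc is the metadata format every
# OAI-PMH repository is required to support.
print(oai_request_url("Identify"))
print(oai_request_url("ListRecords", metadataPrefix="oai_dc"))
```

Fetching these URLs returns XML responses that can be parsed with any standard XML library.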

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
