DomainForensics: exposing face forgery across domains via bi-directional adaptation
Lv, Qingxuan, Li, Yuezun, Dong, Junyu, Chen, Sheng, Yu, Hui, Zhou, Huiyu and Zhang, Shu (2024) DomainForensics: exposing face forgery across domains via bi-directional adaptation. IEEE Transactions on Information Forensics and Security, 19, 7275-7289.
Abstract
Recent DeepFake detection methods have shown excellent performance on public datasets but degrade significantly on new forgeries. Solving this problem is important, as new forgeries emerge daily from continuously evolving generative techniques. Many efforts have addressed this issue by empirically seeking commonly existing traces at the data level. In this paper, we rethink this problem and propose a new solution from the unsupervised domain adaptation perspective. Our solution, called DomainForensics, aims to transfer forgery knowledge from known forgeries (a fully labeled source domain) to new forgeries (a label-free target domain). Unlike recent efforts, our solution does not focus on the data view but on the learning strategy of DeepFake detectors, capturing the knowledge of new forgeries through the alignment of domain discrepancies. In particular, unlike general domain adaptation methods, which consider knowledge transfer between semantic class categories and thus have limited applicability, our approach captures subtle forgery traces. We describe a new bidirectional adaptation strategy dedicated to capturing forgery knowledge across domains. Specifically, our strategy considers both forward and backward adaptation: forward adaptation transfers forgery knowledge from the source domain to the target domain, and backward adaptation then reverses the transfer from the target domain back to the source domain. In forward adaptation, we perform supervised training of the DeepFake detector on the source domain and jointly employ adversarial feature adaptation to transfer the ability to detect manipulated faces from known forgeries to new forgeries. In backward adaptation, we further improve the knowledge transfer by coupling adversarial adaptation with self-distillation on new forgeries. This enables the detector to expose new forgery features from unlabeled data while avoiding forgetting its knowledge of known forgeries. Extensive experiments demonstrate that our method is surprisingly effective at exposing new forgeries and can be used plug-and-play with other DeepFake detection architectures.
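For readers who want a concrete picture of the two phases the abstract describes, the following is a minimal PyTorch sketch, not the authors' released code: it assumes a gradient-reversal (DANN-style) adversarial alignment and a frozen deep copy of the network as the self-distillation teacher. All names (encoder, classifier, domain_disc, lambda_adv, tau) are illustrative assumptions.

```python
# Minimal sketch of bidirectional adaptation; all names are illustrative
# assumptions, not the paper's implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

def grl(x, alpha=1.0):
    return GradientReversal.apply(x, alpha)

# Toy feature extractor, binary real/fake head, and domain discriminator.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
classifier = nn.Linear(256, 2)    # real vs. fake
domain_disc = nn.Linear(256, 2)   # source vs. target
params = (list(encoder.parameters()) + list(classifier.parameters())
          + list(domain_disc.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def domain_adv_loss(f_a, f_b):
    """Adversarial alignment: the discriminator separates the domains while
    the gradient-reversal layer pushes the encoder to make them match."""
    feats = grl(torch.cat([f_a, f_b]))
    labels = torch.cat([torch.zeros(len(f_a), dtype=torch.long),
                        torch.ones(len(f_b), dtype=torch.long)])
    return F.cross_entropy(domain_disc(feats), labels)

def forward_adaptation(x_src, y_src, x_tgt, lambda_adv=0.1):
    """Supervised training on labeled source forgeries plus adversarial
    feature adaptation toward the unlabeled target forgeries."""
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    cls_loss = F.cross_entropy(classifier(f_src), y_src)
    return cls_loss + lambda_adv * domain_adv_loss(f_src, f_tgt)

def backward_adaptation(x_tgt, x_src, y_src, teacher,
                        lambda_adv=0.1, tau=2.0):
    """Self-distillation on unlabeled target data, adversarial alignment back
    toward the source, and source supervision to avoid forgetting."""
    f_tgt, f_src = encoder(x_tgt), encoder(x_src)
    with torch.no_grad():                     # frozen teacher soft targets
        t_logits = teacher(x_tgt)
    s_logits = classifier(f_tgt)
    distill = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                       F.softmax(t_logits / tau, dim=1),
                       reduction="batchmean") * tau * tau
    keep_src = F.cross_entropy(classifier(f_src), y_src)
    return distill + keep_src + lambda_adv * domain_adv_loss(f_tgt, f_src)

# One illustrative step of each phase on random tensors.
x_s, y_s = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
x_t = torch.randn(8, 3, 64, 64)
loss = forward_adaptation(x_s, y_s, x_t)
opt.zero_grad(); loss.backward(); opt.step()

teacher = copy.deepcopy(nn.Sequential(encoder, classifier)).eval()
loss = backward_adaptation(x_t, x_s, y_s, teacher)
opt.zero_grad(); loss.backward(); opt.step()
```

A full pipeline would alternate these phases over many batches and periodically refresh the teacher; this sketch only fixes the shape of the two loss functions.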
Text: TIFS2023_final - Accepted Manuscript
Text: TIFS2024-Aug - Version of Record (restricted to repository staff only)
More information
Accepted/In Press date: 8 July 2024
Published date: 4 August 2024
Identifiers
Local EPrints ID: 492148
URI: http://eprints.soton.ac.uk/id/eprint/492148
ISSN: 1556-6013
PURE UUID: 4989154e-8723-4cf4-9de1-cd81a19a6899
Catalogue record
Date deposited: 18 Jul 2024 16:32
Last modified: 08 Aug 2024 04:01
Contributors
Author: Qingxuan Lv
Author: Yuezun Li
Author: Junyu Dong
Author: Sheng Chen
Author: Hui Yu
Author: Huiyu Zhou
Author: Shu Zhang