University of Southampton Institutional Repository

S-prompts learning with pre-trained transformers: an Occam's Razor for domain incremental learning

Wang, Yabin, Huang, Zhiwu and Hong, Xiaopeng (2022) S-prompts learning with pre-trained transformers: an Occam's Razor for domain incremental learning. In Conference on Neural Information Processing Systems. 14 pp.

Record type: Conference or Workshop Item (Paper)

Abstract

State-of-the-art deep neural networks still struggle to address the catastrophic forgetting problem in continual learning. In this paper, we propose a simple paradigm (named S-Prompting) and two concrete approaches that greatly reduce forgetting in one of the most typical continual learning scenarios, domain incremental learning (DIL). The key idea of the paradigm is to learn prompts independently across domains with pre-trained transformers, avoiding the exemplars commonly used by conventional methods. This turns prompting into a win-win game in which the prompts can reach their best for each domain. Independent prompting across domains requires only a single cross-entropy loss for training and a simple K-NN operation as a domain identifier for inference. The paradigm yields an image prompt learning approach and a novel language-image prompt learning approach. With excellent scalability (a 0.03% parameter increase per domain), the best of our approaches achieves a remarkable relative improvement (about 30% on average) over the best state-of-the-art exemplar-free methods on three standard DIL tasks, and even surpasses the best of them by about 6% on average when they are allowed exemplars. Source code is available at https://github.com/iamwangyabin/S-Prompts
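
As a rough illustration of the inference scheme the abstract describes (independently learned per-domain prompts plus a K-NN domain identifier over a frozen backbone's features), the following minimal NumPy sketch may help. Random vectors stand in for real transformer features, keeping a few feature centroids per domain is an assumed realization of the K-NN identifier, and all names and shapes are illustrative rather than the authors' implementation:

    # Minimal sketch of S-Prompts-style inference, under the assumptions above.
    import numpy as np

    rng = np.random.default_rng(0)
    FEAT_DIM, N_DOMAINS, CENTROIDS_PER_DOMAIN, K = 64, 3, 5, 3

    # Per-domain feature centroids gathered during training (random stand-ins).
    domain_centroids = rng.normal(size=(N_DOMAINS, CENTROIDS_PER_DOMAIN, FEAT_DIM))
    # One prompt set learned independently per domain (10 tokens each),
    # frozen once that domain's training ends.
    domain_prompts = rng.normal(size=(N_DOMAINS, 10, FEAT_DIM))

    def identify_domain(feature):
        """K-NN domain identifier: vote among the K nearest stored centroids."""
        flat = domain_centroids.reshape(-1, FEAT_DIM)
        dists = np.linalg.norm(flat - feature, axis=1)
        nearest = np.argsort(dists)[:K]
        votes = nearest // CENTROIDS_PER_DOMAIN  # flat centroid index -> domain id
        return int(np.bincount(votes, minlength=N_DOMAINS).argmax())

    # Inference: identify the domain, then use only that domain's prompts.
    test_feature = rng.normal(size=FEAT_DIM)
    d = identify_domain(test_feature)
    prompts = domain_prompts[d]  # would be prepended to the transformer input
    print(f"identified domain {d}; prompt block shape {prompts.shape}")

Because only compact feature statistics are stored rather than raw exemplars, such a scheme stays exemplar-free, and because each domain's prompts are never updated after that domain's training ends, earlier domains cannot be overwritten.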

This record has no associated files available for download.

More information

Published date: 28 November 2022
Venue - Dates: The Thirty-Sixth Annual Conference on Neural Information Processing Systems, New Orleans, United States, 2022-11-22

Identifiers

Local EPrints ID: 501686
URI: http://eprints.soton.ac.uk/id/eprint/501686
PURE UUID: 771efbf1-497e-46c1-8f4a-e47760f8a5fe
ORCID for Zhiwu Huang: orcid.org/0000-0002-7385-079X

Catalogue record

Date deposited: 05 Jun 2025 16:58
Last modified: 06 Jun 2025 02:06

Contributors

Author: Yabin Wang
Author: Zhiwu Huang
Author: Xiaopeng Hong
