Lightweight adaptation of neural language models via subspace embedding
Jaiswal, Amit Kumar and Liu, Haiming (2023) Lightweight adaptation of neural language models via subspace embedding. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), Association for Computing Machinery, pp. 3968-3972. (doi:10.1145/3583780.3615269).
Record type: Conference or Workshop Item (Paper)
Abstract
Traditional neural word embeddings usually depend on a rich and diverse vocabulary. However, language models tend to cover large vocabularies through their word embedding parameters, in particular multilingual language models, where the embeddings generally account for a significant share of the overall learned parameters. In this work, we present a new compact embedding structure that reduces the memory footprint of pre-trained language models at a cost of up to 4% absolute accuracy. The embedding vectors are reconstructed from a set of subspace embeddings and an assignment procedure based on the contextual relationships among tokens in pre-trained language models. The subspace embedding structure calibrates to masked language models, and we evaluate our compact embedding structure on similarity, textual entailment, sentence, and paraphrase tasks. Our experimental evaluation shows that the subspace embeddings achieve compression rates beyond 99.8% compared with the original embeddings for the language models on the XNLI and GLUE benchmark suites.
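As a rough illustration of the idea in the abstract, the sketch below shows one way token embeddings can be reconstructed from a small set of shared subspace codebooks plus per-token assignments. It is a minimal, hypothetical PyTorch example: the class name SubspaceEmbedding, the split of the embedding dimension into equal subspaces, the codebook and assignment sizes, and the use of frozen clustering-derived assignments are assumptions made for illustration, not the authors' released implementation.

```python
# Minimal sketch (assumed details, not the paper's code): reconstruct token
# embeddings from shared subspace codebooks plus per-token assignments.
import torch
import torch.nn as nn

class SubspaceEmbedding(nn.Module):
    """Replaces a V x d embedding table with S small codebooks and V x S indices.

    Hypothetical parameterization: the d-dimensional embedding is split into S
    equal subspaces; each subspace has its own codebook of C candidate vectors.
    """

    def __init__(self, vocab_size: int, dim: int, num_subspaces: int = 8, codebook_size: int = 256):
        super().__init__()
        assert dim % num_subspaces == 0
        self.sub_dim = dim // num_subspaces
        # S codebooks, each C x (d/S): the only dense embedding parameters kept.
        self.codebooks = nn.Parameter(
            torch.randn(num_subspaces, codebook_size, self.sub_dim) * 0.02
        )
        # Per-token assignments (V x S integer indices), e.g. obtained by
        # clustering the pre-trained embeddings; stored as a frozen buffer here.
        self.register_buffer(
            "assignments", torch.randint(0, codebook_size, (vocab_size, num_subspaces))
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        idx = self.assignments[token_ids]  # (..., S)
        # Gather one codeword per subspace and concatenate back to dimension d.
        parts = [self.codebooks[s][idx[..., s]] for s in range(self.codebooks.shape[0])]
        return torch.cat(parts, dim=-1)    # (..., d)

# Usage with BERT-base-like sizes (30k vocabulary, 768-dimensional embeddings).
emb = SubspaceEmbedding(vocab_size=30000, dim=768)
vectors = emb(torch.tensor([[101, 2054, 102]]))
print(vectors.shape)  # torch.Size([1, 3, 768])
```

Under these assumed sizes, a 30,000 x 768 float table (about 23M parameters) would be replaced by 8 x 256 x 96 codebook floats (about 0.2M) plus integer assignments, which gives a sense of how compression rates of the order reported in the abstract can arise; the actual structure and numbers in the paper may differ.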
This record has no associated files available for download.
More information
Published date: 21 October 2023
Identifiers
Local EPrints ID: 503543
URI: http://eprints.soton.ac.uk/id/eprint/503543
PURE UUID: 2051b038-c5d5-41ec-9508-fbe65dbc30b7
Catalogue record
Date deposited: 05 Aug 2025 16:36
Last modified: 06 Aug 2025 02:06
Contributors
Author: Amit Kumar Jaiswal
Author: Haiming Liu