From tokens to lattices: emergent lattice structures in language models
Xiong, Bo and Staab, Steffen (2025) From tokens to lattices: emergent lattice structures in language models. The Thirteenth International Conference on Learning Representations, Singapore, Singapore, 24-28 Apr 2025. 16 pp. (In Press)
Record type: Conference or Workshop Item (Paper)
Abstract
Pretrained masked language models (MLMs) have demonstrated an impressive capability to comprehend and encode conceptual knowledge, revealing a lattice structure among concepts. This raises a critical question: how does this conceptualization emerge from MLM pretraining? In this paper, we explore this problem from the perspective of Formal Concept Analysis (FCA), a mathematical framework that derives concept lattices from observations of object-attribute relationships. We show that the MLM's objective implicitly learns a formal context that describes objects, attributes, and their dependencies, which enables the reconstruction of a concept lattice through FCA. We propose a novel framework for concept lattice construction from pretrained MLMs and investigate the origin of the inductive biases of MLMs in lattice structure learning. Our framework differs from previous work because it does not rely on human-defined concepts and allows for discovering "latent" concepts that extend beyond human definitions. We create three datasets for evaluation, and the empirical results verify our hypothesis.
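To make the abstract's central construction concrete, here is a minimal, self-contained sketch of classical Formal Concept Analysis over a toy formal context. The animal objects and attributes are invented for illustration and are not data from the paper, and the code shows textbook FCA concept enumeration, not the authors' MLM-based framework.

```python
from itertools import combinations

# Toy formal context: objects (rows) and the attributes they possess.
# Hypothetical example data, not taken from the paper.
CONTEXT = {
    "eagle":   {"flies", "lays_eggs", "predator"},
    "penguin": {"swims", "lays_eggs"},
    "salmon":  {"swims", "lays_eggs"},
    "lion":    {"predator"},
}
OBJECTS = set(CONTEXT)
ATTRIBUTES = set().union(*CONTEXT.values())

def intent(objs):
    """Derivation on objects: attributes shared by every object in `objs`."""
    if not objs:
        return set(ATTRIBUTES)
    return set.intersection(*(CONTEXT[o] for o in objs))

def extent(attrs):
    """Derivation on attributes: objects possessing every attribute in `attrs`."""
    return {o for o in OBJECTS if attrs <= CONTEXT[o]}

def concepts():
    """Enumerate all formal concepts (extent, intent).

    Every intent is the closure B'' of some attribute subset B, so closing
    all attribute subsets finds every concept. This is fine for toy
    contexts but exponential in general.
    """
    seen = set()
    for r in range(len(ATTRIBUTES) + 1):
        for combo in combinations(sorted(ATTRIBUTES), r):
            ext = extent(set(combo))
            if frozenset(ext) not in seen:  # a concept is fixed by its extent
                seen.add(frozenset(ext))
                yield sorted(ext), sorted(intent(ext))

for ext, inten in sorted(concepts()):
    print(ext, "<->", inten)
```

Ordering the resulting concepts by extent inclusion yields the concept lattice. In the paper's setting, the binary context would instead be populated from a pretrained MLM, for example by thresholding the model's probability that an object token fills the masked slot in an attribute template such as "A [MASK] can fly."; that probing recipe is an assumption about one plausible implementation, not a detail stated in the abstract.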
Text: TokenLattice CRC - Accepted Manuscript
More information
Accepted/In Press date: 22 January 2025
Venue - Dates: The Thirteenth International Conference on Learning Representations, Singapore, Singapore, 2025-04-24 - 2025-04-28
Identifiers
Local EPrints ID: 498451
URI: http://eprints.soton.ac.uk/id/eprint/498451
PURE UUID: ffeff540-7b00-45e7-9a88-fd7eb54fc25c
Catalogue record
Date deposited: 19 Feb 2025 17:36
Last modified: 22 Aug 2025 02:13
Contributors
Author: Bo Xiong
Author: Steffen Staab