Functional alignment can mislead: examining model stitching
Smith, Damian Howard Laurence, Mannering, Harvey George and Marcu, Antonia (2025) Functional alignment can mislead: examining model stitching. International Conference on Machine Learning 2025, Vancouver, Canada, 11-19 Jul 2025. 27 pp.
Record type: Conference or Workshop Item (Paper)
Abstract
A common belief in the representational comparison literature is that if two representations can be functionally aligned, they must capture similar information. In this paper we focus on model stitching and show that models can be functionally aligned, but represent very different information. Firstly, we show that discriminative models with very different biases can be stitched together. We then show that models trained to solve entirely different tasks on different data modalities, and even clustered random noise, can be successfully stitched into MNIST- or ImageNet-trained models. We end with a discussion of the wider impact of our results on the community's current beliefs. Overall, our paper draws attention to the need to correctly interpret the results of such functional similarity measures and highlights the need for approaches that capture informational similarity.
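Model stitching, as used in this line of work, typically inserts a small trainable layer that maps intermediate activations of one frozen network into a layer of another frozen network, and then measures how well the combined model performs on the second network's task. The record itself contains no code; the sketch below is a minimal PyTorch illustration of that general setup, with all class names, the 1x1-convolution stitching layer, and the channel dimensions chosen here as assumptions rather than taken from the paper.

    # Minimal model-stitching sketch (illustrative only; not the authors' code).
    # Two pretrained networks are frozen; only the stitching layer is trained
    # on the downstream task of the "back" model.
    import torch
    import torch.nn as nn

    class StitchedModel(nn.Module):
        def __init__(self, front, back, front_channels, back_channels):
            super().__init__()
            self.front = front    # frozen: model A up to some intermediate layer
            self.back = back      # frozen: model B from a matching layer onward
            # Trainable stitching layer: a 1x1 convolution mapping A's
            # activation channels into the channels B expects (a common choice).
            self.stitch = nn.Conv2d(front_channels, back_channels, kernel_size=1)
            for p in self.front.parameters():
                p.requires_grad = False
            for p in self.back.parameters():
                p.requires_grad = False

        def forward(self, x):
            with torch.no_grad():
                h = self.front(x)   # activations from the first model
            h = self.stitch(h)      # learned alignment between representations
            return self.back(h)     # prediction from the second model's layers

Under this setup, only the stitching layer is optimised (for example with cross-entropy on the downstream labels). High accuracy of the stitched model is what the abstract refers to as functional alignment, and the paper's point is that such alignment can succeed even when the two frozen parts encode very different information.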
Text: 11927_Functional_Alignment_Can (2) - Accepted Manuscript
More information
Published date: 11 July 2025
Venue - Dates: International Conference on Machine Learning 2025, Vancouver, Canada, 2025-07-11 - 2025-07-19
Identifiers
Local EPrints ID: 503336
URI: http://eprints.soton.ac.uk/id/eprint/503336
PURE UUID: 952143ea-c1e1-4099-a1e9-778532e46682
Catalogue record
Date deposited: 29 Jul 2025 16:47
Last modified: 01 Oct 2025 02:20
Contributors
Author: Damian Howard Laurence Smith
Author: Harvey George Mannering
Author: Antonia Marcu
Download statistics
Downloads from ePrints over the past year. Other digital versions may also be available to download, e.g. from the publisher's website.