On the effects of artificial data modification
Data distortion is commonly applied in vision models during both training (e.g. methods such as MixUp and CutMix) and evaluation (e.g. shape-texture bias and robustness measures). This data modification can introduce artificial information. It is often assumed that the resulting artefacts are detrimental to training, whilst being negligible when analysing models. We investigate these assumptions and conclude that in some cases they are unfounded and lead to incorrect results. Specifically, we show that current shape-bias identification methods and occlusion-robustness measures are biased, and we propose a fairer alternative for the latter. Subsequently, through a series of experiments, we seek to correct and strengthen the community's perception of how augmentation affects the learning of vision models. Based on our empirical results, we argue that the impact of the artefacts must be understood and exploited rather than eliminated.
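For readers unfamiliar with the training-time distortions the abstract refers to, the following is a minimal NumPy sketch of MixUp (convex combination of two examples and their labels) and CutMix (pasting a random rectangle from one image into another, with labels mixed in proportion to area). It illustrates the general idea only; function names, default `alpha` values, and the rectangle-sampling details are assumptions, not the paper's implementation.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp: blend two inputs and their one-hot labels with a Beta-sampled weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=None):
    """CutMix: paste a random rectangle from x2 into x1; mix labels by pasted area."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = x1.shape[:2]
    # Rectangle sized so that roughly (1 - lam) of the image is replaced
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    top, bottom = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    left, right = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[top:bottom, left:right] = x2[top:bottom, left:right]
    lam_kept = 1 - (bottom - top) * (right - left) / (h * w)  # fraction of x1 kept
    return out, lam_kept * y1 + (1 - lam_kept) * y2
```

Note how the CutMix patch boundary introduces exactly the kind of artificial edge artefact the paper argues must be understood rather than dismissed.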
Marcu, Antonia
Prugel-Bennett, Adam
17 July 2022
Marcu, Antonia and Prugel-Bennett, Adam (2022) On the effects of artificial data modification. 39th International Conference on Machine Learning, Baltimore, Maryland, United States, 17-23 Jul 2022. 20 pp.
Record type: Conference or Workshop Item (Paper)
More information
Accepted/In Press date: 15 May 2022
Published date: 17 July 2022
Venue - Dates: 39th International Conference on Machine Learning, Baltimore, Maryland, United States, 2022-07-17 - 2022-07-23
Identifiers
Local EPrints ID: 468206
URI: http://eprints.soton.ac.uk/id/eprint/468206
PURE UUID: c71eeb4c-bbc9-48ce-97f0-30ed8661aa30
Catalogue record
Date deposited: 05 Aug 2022 16:41
Last modified: 16 Mar 2024 18:19
Contributors
Author: Antonia Marcu
Author: Adam Prugel-Bennett