University of Southampton Institutional Repository

GaitASMS: gait recognition by adaptive structured spatial representation and multi-scale temporal aggregation

Sun, Yan, Feng, Xueling, Nixon, Mark and Long, Hu (2024) GaitASMS: gait recognition by adaptive structured spatial representation and multi-scale temporal aggregation. Neural Computing and Applications, 36 (13), 7057–7069. (doi:10.1007/s00521-024-09445-z).

Record type: Article

Abstract

Gait recognition is one of the most promising video-based biometric technologies. The edges of silhouettes and motion are among the most informative features, and previous studies have explored them separately with notable results. However, due to occlusions and variations in viewing angle, recognition performance is often limited by a predefined spatial segmentation strategy. Moreover, traditional temporal pooling usually neglects distinctive temporal information in gait. To address these issues, we propose a novel gait recognition framework, denoted GaitASMS, which effectively extracts adaptive structured spatial representations and naturally aggregates multi-scale temporal information. The Adaptive Structured Representation Extraction Module (ASRE) separates the edges of silhouettes using an adaptive edge mask and maximizes the representation in a semantic latent space. The Multi-Scale Temporal Aggregation Module (MSTA) models long- and short-range temporal information through a temporally aggregated structure. Furthermore, we propose a new data augmentation, denoted random mask, to enrich the sample space of long-term occlusion and to enhance the generalization of the model. Extensive experiments on two datasets demonstrate the competitive advantage of the proposed method, especially in complex scenes, i.e., carrying a bag (BG) and wearing a coat (CL). On the CASIA-B dataset, GaitASMS achieves an average accuracy of 93.5% and outperforms the baseline in rank-1 accuracy by 3.4% and 6.3% in BG and CL, respectively. Ablation experiments demonstrate the effectiveness of ASRE and MSTA. The source code is available at https://github.com/YanSun-github/GaitASMS.
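
The three components named in the abstract can be illustrated with short, hedged sketches. First, ASRE separates the silhouette edge from the body. The sketch below uses a fixed morphological erosion as a stand-in for the paper's learned adaptive edge mask, so it shows only the underlying idea of an edge/body split; the tensor shapes and kernel size are assumptions, not the published design.

```python
import torch
import torch.nn.functional as F

def edge_and_body_masks(silhouette: torch.Tensor, k: int = 3):
    """Split binary silhouettes (N, 1, H, W) into edge and body regions.

    Erosion is approximated by min-pooling (negated max-pooling); the
    edge band is the silhouette minus its eroded interior.
    """
    pad = k // 2
    eroded = -F.max_pool2d(-silhouette, kernel_size=k, stride=1, padding=pad)
    edge = silhouette - eroded   # thin boundary band around the body
    body = eroded                # interior of the silhouette
    return edge, body
```

Second, MSTA aggregates temporal information at several scales. A common way to realize "long-short-range" temporal modeling is parallel temporal convolutions with different kernel sizes followed by fusion; the branch widths and final max-pooling below are assumptions rather than the paper's exact structure.

```python
import torch
import torch.nn as nn

class MultiScaleTemporal(nn.Module):
    """Parallel temporal convolutions over frame-level features (N, C, T)."""

    def __init__(self, channels: int, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernels
        )
        self.fuse = nn.Conv1d(channels * len(kernels), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        y = self.fuse(y)                # mix short- and long-range responses
        return y.max(dim=-1).values    # temporal pooling to a clip descriptor
```

Third, the random mask augmentation occludes part of the input to mimic long-term occlusion. Applying one randomly placed rectangle to every frame of a clip is one plausible reading of "long-term"; the size limits and placement rule here are assumptions.

```python
import torch

def random_mask(seq: torch.Tensor, max_frac: float = 0.3) -> torch.Tensor:
    """Occlude a silhouette sequence (T, 1, H, W) with one static rectangle."""
    _, _, h, w = seq.shape
    mh = int(torch.randint(1, max(2, int(h * max_frac)), (1,)))
    mw = int(torch.randint(1, max(2, int(w * max_frac)), (1,)))
    y = int(torch.randint(0, h - mh + 1, (1,)))
    x = int(torch.randint(0, w - mw + 1, (1,)))
    out = seq.clone()
    out[:, :, y:y + mh, x:x + mw] = 0.0   # same occluder in every frame
    return out
```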

Text: 2307.15981v2 - Accepted Manuscript (962kB). Available under License Other.

More information

Accepted/In Press date: 15 January 2024
e-pub ahead of print date: 17 February 2024
Published date: 17 February 2024
Additional Information: Publisher Copyright: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
Keywords: Adaptive structured feature, Biometric, Deep learning, Gait recognition, Temporal aggregation

Identifiers

Local EPrints ID: 490233
URI: http://eprints.soton.ac.uk/id/eprint/490233
ISSN: 0941-0643
PURE UUID: 0c7bb7a1-dbcf-4fe8-913c-7dcbb3ec3fa3
ORCID for Mark Nixon: orcid.org/0000-0002-9174-5934

Catalogue record

Date deposited: 20 May 2024 17:41
Last modified: 08 Jun 2024 01:32

Contributors

Author: Yan Sun
Author: Xueling Feng
Author: Mark Nixon (ORCID: 0000-0002-9174-5934)
Author: Hu Long
