Identification by a hybrid 3D/2D gait recognition algorithm
Abdulsattar, Fatimah (2016) Identification by a hybrid 3D/2D gait recognition algorithm. University of Southampton, Physical Science and Engineering, Doctoral Thesis, 154pp.
Record type: Thesis (Doctoral)
Abstract
Recently, the research community has shown much interest in gait as a biometric. However, one of the key challenges affecting gait recognition performance is its susceptibility to view variation. Much work has been done to address this problem, but most studies implicitly assume that the view variation within one gait cycle is small and that people walk only along straight trajectories; these assumptions are often violated in practice. Our strategy for view independence is to enrol people using their 3D volumetric data, since a synthetic image can then be generated from the volume and matched against a probe image.
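To make the enrolment idea concrete, the sketch below generates a synthetic silhouette by projecting the occupied voxels of a volume through a camera projection matrix. It is a minimal illustration under simple assumptions (a known 3x4 projection matrix, all voxels in front of the camera); the function and variable names are illustrative and not taken from the thesis.

```python
# Minimal sketch: synthetic silhouette from a voxel volume via a 3x4
# camera projection matrix P. Illustrative only, not the thesis's renderer.
import numpy as np

def project_silhouette(voxel_centres, P, image_shape):
    """Project occupied voxel centres into a binary silhouette image.

    voxel_centres: (N, 3) array of 3D points for occupied voxels.
    P:             (3, 4) camera projection matrix.
    image_shape:   (height, width) of the output silhouette.
    """
    homo = np.hstack([voxel_centres, np.ones((len(voxel_centres), 1))])
    proj = homo @ P.T                    # (N, 3) homogeneous image points
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide (assumes depth > 0)
    sil = np.zeros(image_shape, dtype=np.uint8)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    keep = (0 <= u) & (u < image_shape[1]) & (0 <= v) & (v < image_shape[0])
    sil[v[keep], u[keep]] = 1            # mark each projected voxel as body
    return sil
```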
A set of experiments was conducted to illustrate the potential of matching 3D volumetric data against gait images from single cameras inside the Biometric Tunnel at the University of Southampton, using the Gait Energy Image as the gait feature. The results show an average Correct Classification Rate (CCR) of 97% for matching against affine cameras and 42% for matching against perspective cameras with large changes in appearance. We modified and expanded the Tunnel system to improve the quality of the 3D reconstruction and to provide asynchronous gait images from two independent cameras. Two gait datasets were collected: one with 17 people walking along a straight line and a second with 50 people walking along straight and curved trajectories.
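The Gait Energy Image used in these experiments is, in essence, the per-pixel average of aligned binary silhouettes over one gait cycle. A minimal sketch, assuming the silhouettes are already segmented, size-normalised, and centred:

```python
# Minimal sketch of Gait Energy Image (GEI) computation, assuming
# pre-aligned binary silhouettes; names are illustrative.
import numpy as np

def gait_energy_image(silhouettes):
    """Average a cycle of aligned binary silhouettes into a single GEI.

    silhouettes: iterable of 2D arrays with values in {0, 1}, all the
    same shape, covering one complete gait cycle.
    """
    stack = np.stack([np.asarray(s, dtype=np.float64) for s in silhouettes])
    return stack.mean(axis=0)  # pixel intensity ~ how often that pixel is body
```

Gallery and probe GEIs can then be compared with a simple distance measure (e.g. Euclidean) and a nearest-neighbour rule to compute a CCR.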
The first dataset was analysed with an algorithm in which the 3D volumes were aligned according to the starting position of the 2D gait cycle in 3D space and the sagittal plane of the walking subject. When gait features were extracted from each frame using Generic Fourier Descriptors and compared using Dynamic Time Warping, a CCR of up to 98.8% was achieved. A full performance analysis showed camera calibration accuracy to be the most important factor. The shortcomings of this algorithm are that it is not completely view-independent and that it is affected by changes in walking direction.
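As a rough illustration of the per-frame matching step, the sketch below computes a textbook Dynamic Time Warping distance between two sequences of feature vectors (standing in for the Generic Fourier Descriptors of each frame); it is a generic formulation, not the thesis's exact implementation.

```python
# Textbook DTW between two sequences of per-frame feature vectors.
# Generic sketch; the thesis's matching pipeline may differ in detail.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Minimal accumulated-cost DTW between two feature sequences.

    seq_a, seq_b: arrays of shape (n_frames, n_features).
    Returns the minimal accumulated Euclidean cost of aligning them.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```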
A second algorithm was developed to overcome these limitations. Here, the alignment was based on three key frames at the mid-stance phase, and the motion in the first and second parts of the gait cycle was assumed to be linear. The second dataset was used to evaluate the algorithm and a CCR of 99% was achieved. However, when the probe consisted of people walking on a curved trajectory, the CCR dropped to 82%; when the gallery was also taken from curved walking, the CCR returned to 99%. The algorithm was also evaluated on the Kyushu University 4D Gait Database, where normal walking achieved a CCR of 98% and curved walking 68%. Inspection of the data indicated that the earlier assumption that straight-ahead and curved walking are similar is invalid. Finally, an investigation into more appropriate features was carried out, but this gave only a slight improvement.
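To illustrate the linear-motion assumption between the mid-stance key frames, the following sketch linearly interpolates a position and walking direction for each frame of the cycle from the three key-frame positions. The representation and interface are assumptions made purely for illustration; the thesis's alignment procedure is more involved.

```python
# Sketch of the linear-motion assumption between mid-stance key frames:
# each frame in a half-cycle gets an interpolated position and heading.
# Names and structure are illustrative, not the thesis's implementation.
import numpy as np

def interpolate_pose(key_positions, key_indices, frame_idx):
    """Linearly interpolate position and heading for one frame.

    key_positions: (3, 3) array, 3D positions of the three mid-stance frames.
    key_indices:   frame indices of those key frames, e.g. [0, 15, 30].
    frame_idx:     the frame to place within the cycle.
    """
    k0, k1, k2 = key_indices
    if frame_idx <= k1:            # first part of the cycle
        a, b, ia, ib = key_positions[0], key_positions[1], k0, k1
    else:                          # second part of the cycle
        a, b, ia, ib = key_positions[1], key_positions[2], k1, k2
    t = (frame_idx - ia) / float(ib - ia)
    position = (1 - t) * a + t * b          # linear motion assumption
    heading = b - a
    heading /= np.linalg.norm(heading)      # walking direction for this part
    return position, heading
```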
Text: __userfiles.soton.ac.uk_Users_ojl1y15_mydesktop_PhD_thesis.pdf - Other
More information
Published date: December 2016
Organisations: University of Southampton, Vision, Learning and Control
Identifiers
Local EPrints ID: 404663
URI: http://eprints.soton.ac.uk/id/eprint/404663
PURE UUID: ab2268f0-25f1-4611-882c-734998246957
Catalogue record
Date deposited: 30 Jan 2017 15:26
Last modified: 15 Mar 2024 04:12
Contributors
Author: Fatimah Abdulsattar
Thesis advisor: John Carter