Mowbray, Stuart David
Modelling and Extracting Periodically Deforming Objects by Continuous, Spatio-temporal Shape Description.
University of Southampton, Electronics and Computer Science.
This thesis proposes a new model for describing spatio-temporally deforming objects. Through a novel use of Fourier descriptors, it is shown how arbitrary shape description can be extended to include spatio-temporal shape deformation. It is further demonstrated that these new spatio-temporal Fourier descriptors can serve as the basis for both the recognition and extraction of deforming objects. Applying this recognition technique to human gait sequences yields recognition rates of over 86% for individual human subjects, signifying that these descriptors possess unique descriptive properties. Building on the spatio-temporal Fourier descriptor model, a new technique for detecting and extracting deforming shapes from an image sequence is presented: a new variant of the Hough transform, the Continuous Deformable Hough Transform, which utilises spatio-temporal shape correlation within an evidence-gathering context. This technique demonstrates excellent success rates and tolerance to noise, correctly extracting human subjects from image sequences corrupted with noise levels of up to 80%. The technique is also tested extensively on real-world data, demonstrating its worth in a modern computer vision system. The spatio-temporal Fourier descriptor model, the Continuous Deformable Hough Transform, and aspects of their application are fully discussed throughout the thesis, along with ideas and suggestions for future research and development.
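The Fourier descriptors underlying the thesis can be illustrated with a minimal sketch of the classical (static) case: a closed boundary is treated as a periodic complex signal and its Fourier coefficient magnitudes, normalised for translation and scale, describe the shape. The function and sample contour below are illustrative assumptions, not the thesis's actual spatio-temporal formulation.

```python
import cmath
import math

def fourier_descriptors(contour, num_coeffs=8):
    """Normalised Fourier descriptors of a closed 2-D contour.

    contour: ordered list of (x, y) boundary points.
    Returns magnitudes of coefficients k = 1..num_coeffs, scaled by the
    first magnitude: dropping k = 0 removes translation, taking magnitudes
    removes rotation/starting point, and scaling removes size.
    """
    n = len(contour)
    z = [complex(x, y) for x, y in contour]   # boundary as a complex signal
    mags = []
    for k in range(1, num_coeffs + 1):        # skip k = 0 (pure translation)
        c = sum(z[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n)) / n
        mags.append(abs(c))
    scale = mags[0] if mags[0] > 1e-12 else 1.0
    return [m / scale for m in mags]

def square(ox=0.0, oy=0.0, step=0.25):
    """Unit square sampled at 4 points per side, offset by (ox, oy)."""
    pts = []
    for t in [i * step for i in range(4)]:
        pts.append((ox + t, oy))              # bottom edge
    for t in [i * step for i in range(4)]:
        pts.append((ox + 1.0, oy + t))        # right edge
    for t in [i * step for i in range(4)]:
        pts.append((ox + 1.0 - t, oy + 1.0))  # top edge
    for t in [i * step for i in range(4)]:
        pts.append((ox, oy + 1.0 - t))        # left edge
    return pts

fd_a = fourier_descriptors(square())
fd_b = fourier_descriptors(square(ox=5.0, oy=-3.0))
# descriptors of the translated square match the original
assert all(abs(a - b) < 1e-9 for a, b in zip(fd_a, fd_b))
```

The thesis's contribution extends this idea by letting the descriptors themselves vary periodically over time, so that a deforming boundary (such as a walking figure) is captured by one continuous spatio-temporal description.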
Supervisor: Mark Nixon
Keywords: Fourier Descriptors, Periodic Motion, Hough Transform, Deforming Object, Object Extraction, Object Description, Moving Object Extraction, Moving Object Description
Organisation: University of Southampton, Electronics & Computer Science
Date: 14 July 2008 (Accepted/In Press)