University of Southampton Institutional Repository

Depth Measurement of Face and Palate by Structured Light

Shadle, C.H., Carter, J.N., Monks, T.P. and Field, J. (1994) Depth Measurement of Face and Palate by Structured Light

Record type: Monograph (Project Report)

Abstract

In order to model speech production for purposes such as articulatory synthesis, articulatory data must be acquired, preferably in a way that does not impede the speaker's ability to speak. A variety of techniques has been used, many of them, such as ultrasound, X-rays, or MRI, adapted from medical imaging. For externally accessible articulators such techniques are not appropriate, but the task of measuring the shape of a complex three-dimensional object is still difficult. For a static object such as a dental impression, manual methods such as slicing a mold of the impression and measuring the slices with calipers can be quite accurate, though time-consuming. For a dynamic object such as the face shape during speech, such manual methods cannot be used. A workable alternative has been developed at Grenoble that instead uses simultaneous video pictures of the front and profile of the face. Blue lipstick is used on the subject's lips to provide a definite outline and maximum contrast with the teeth and tongue. The images must then be post-processed before parameters such as mouth area are extracted. In this paper we present a structured light system which uses a slide projector and a single video camera to acquire depth coordinates of a static or moving object. We discuss pilot experiments aimed at optimising its output and establishing its accuracy, and make some preliminary comparisons with the Grenoble double-video system. In the next section the structured light technique is summarised, focusing on the experimental constraints it imposes; the technique is described in more detail elsewhere. In 'Static measurements' its use on the static object of an EPG palate is described. In 'Dynamic measurements' its use in acquiring dynamic face shape data is described, focusing on methodology issues that arose in an extended recording session with a human speaker. In the discussion we describe the calibration procedure and the related tradeoffs, and finally we make some initial comparisons to the double-video system.
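As a rough illustration of the triangulation that underlies a structured light system of the kind the abstract describes, the sketch below computes depth from the known baseline between a projector and a camera and the ray angles to a point lit by a projected stripe. The function name and the simple planar geometry are assumptions for illustration, not taken from the report itself.

```python
import math

def depth_from_structured_light(baseline, theta_p, theta_c):
    """Triangulate the perpendicular depth of a point lit by a projected stripe.

    baseline: distance between projector and camera centres (any length unit)
    theta_p:  angle at the projector between the baseline and its ray (radians)
    theta_c:  angle at the camera between the baseline and its ray (radians)

    The two rays and the baseline form a triangle; by the law of sines,
    the perpendicular distance from the baseline to the lit point is:
        z = b * sin(theta_p) * sin(theta_c) / sin(theta_p + theta_c)
    """
    return (baseline * math.sin(theta_p) * math.sin(theta_c)
            / math.sin(theta_p + theta_c))

# With a 1-unit baseline and both rays at 45 degrees, the point sits
# midway along the baseline at depth 0.5 units.
z = depth_from_structured_light(1.0, math.pi / 4, math.pi / 4)
```

In a real system of this kind, theta_p is known from which stripe of the projected pattern lit the point, and theta_c is recovered from the calibrated camera's pixel coordinates; the calibration tradeoffs the report discusses determine how accurately those angles, and hence the depth, can be measured.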

This record has no associated files available for download.

More information

Published date: 1994
Additional Information: 1994 Research Journal. Address: Department of Electronics and Computer Science
Organisations: Southampton Wireless Group

Identifiers

Local EPrints ID: 250092
URI: http://eprints.soton.ac.uk/id/eprint/250092
PURE UUID: 52a709ea-d5a4-4147-9fe2-35281972c402

Catalogue record

Date deposited: 04 May 1999
Last modified: 22 Feb 2024 18:03

Contributors

Author: C.H. Shadle
Author: J.N. Carter
Author: T.P. Monks
Author: J. Field
