A new study from New York University's Steinhardt School of Culture, Education, and Human Development found that facial motion capture, the same technology used to create realistic computer graphics in video games and movies, can identify differences between children with childhood apraxia of speech and those with other types of speech disorders.
Childhood apraxia of speech is a complex speech impairment in which children have difficulty planning and making the accurate movements needed to create speech sounds. Children with apraxia of speech are often delayed in developing speech, produce atypical speech patterns, and make slow progress in speech therapy.
To study the motor speech disorder, the NYU researchers placed tiny reflective markers on each child's face. Using motion capture technology, the researchers quantified facial movements by measuring how the lips and jaw moved.
Grigos and her colleagues sought to determine whether children with apraxia of speech could be distinguished from children with other types of speech impairment by measuring their facial movements. The researchers examined the lip and jaw movements of 33 children, ages three to seven, during speech tasks.
The children were asked to repeat one-, two-, and three-syllable words while the motion capture technology tracked jaw, lower lip, and upper lip movements. The researchers analyzed metrics including the timing, speed, and variability of the movements, as well as how far the lips and jaw moved during speech.
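To make the metrics concrete, here is a minimal sketch of how kinematic measures like these could be computed from tracked marker positions. This is an illustration only, not the study's actual analysis pipeline: the function names, sampling rate, and synthetic jaw-movement data are all assumptions.

```python
import numpy as np

FS = 100  # assumed sampling rate in Hz (not from the study)

def movement_metrics(positions):
    """Compute simple kinematics for one marker.

    positions: (n_samples, 3) array of x/y/z coordinates over one trial.
    """
    # Displacement of the marker from its starting position at each sample.
    displacement = np.linalg.norm(positions - positions[0], axis=1)
    # Numerical derivative of position gives velocity; its norm is speed.
    velocity = np.gradient(positions, 1.0 / FS, axis=0)
    speed = np.linalg.norm(velocity, axis=1)
    return {
        "max_displacement": displacement.max(),  # how far the marker moved
        "peak_speed": speed.max(),               # fastest movement
        "duration_s": len(positions) / FS,       # timing of the production
    }

def trial_variability(trials, key):
    """Coefficient of variation of one metric across repeated productions."""
    values = np.array([movement_metrics(t)[key] for t in trials])
    return values.std() / values.mean()

# Synthetic example: five repeated jaw open-close gestures whose
# amplitude jitters from trial to trial (mimicking movement variability).
rng = np.random.default_rng(0)
trials = []
for _ in range(5):
    t = np.linspace(0.0, 1.0, FS)
    z = np.sin(np.pi * t) * (10.0 + rng.normal(0.0, 1.0))  # mm
    trials.append(np.column_stack([np.zeros_like(z), np.zeros_like(z), z]))

cv = trial_variability(trials, "max_displacement")
print(f"movement variability (CV of peak displacement): {cv:.3f}")
```

Higher variability across repeated productions of the same word, as captured by a measure like the coefficient of variation above, is the kind of signal the study reports as distinguishing children with apraxia of speech.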
Using the movement tracking technology, the NYU team was able to detect subtle differences that the ear cannot hear. The most notable finding was that children with childhood apraxia of speech produced lip and jaw movements that varied more than those of the other two groups of children.