Visual speech facilitates auditory speech perception, but the visual cues responsible for these benefits and the information they provide remain unclear. Low-level models emphasize basic temporal cues provided by mouth movements, but these impoverished signals may not fully account for the richness of auditory information provided by visual speech. High-level models posit interactions among abstract categorical (i.e., phonemes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of speech signals onto abstracted representations. Because visible articulators shape the spectral content of speech, we hypothesized that the perceptual system might exploit natural correlations between midlevel visual (oral deformations) and auditory speech features (frequency modulations) to extract detailed spectrotemporal information from visual speech without employing high-level abstractions. Consistent with this hypothesis, we found that the time–frequency dynamics of oral resonances (formants) could be predicted with unexpectedly high precision from the changing shape of the mouth during speech. When isolated from other speech cues, speech-based shape deformations improved perceptual sensitivity for corresponding frequency modulations, suggesting that listeners could exploit this cross-modal correspondence to facilitate perception. To test whether this type of correspondence could improve speech comprehension, we selectively degraded the spectral or temporal dimensions of auditory sentence spectrograms to assess how well visual speech facilitated comprehension under each degradation condition.

While research conducted prior to 2000 provided a strong foundation in this area, the past two decades have brought technical advances that have allowed for more precise measurement of audiovisual speech perception. This scoping review provides a descriptive synthesis of the available evidence on children's audiovisual speech perception. We used eight databases to identify experimental studies published 2000–2019 and reported the data following the PRISMA-ScR guidelines designed for scoping reviews. Thirty-eight studies were identified: 18 articles focused on children with typical development, 9 on children with autism spectrum disorder, 8 on children with speech and language disorders, and 3 on children with hearing loss. Most of the identified studies were behavioral, while a minority reported on the neuroanatomical correlates underlying audiovisual speech perception. Through this scoping review, key gaps were identified, including few studies in clinical populations, few studies on languages other than English, and variability in the terminology used to describe similar or overlapping concepts. Further research is needed to inform the development and mechanisms of audiovisual speech integration in children with different language development paths. In addition, the use of common terminology in future research would improve access to evidence and the communication of this knowledge for researchers and clinicians.
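To make the spectral-versus-temporal manipulation in the first abstract concrete, here is a minimal sketch of one way such a degradation could be implemented. This is not the authors' published procedure: the `degrade_spectrogram` helper, the Gaussian smoothing, and the STFT parameters are all assumptions chosen for illustration. The idea is that blurring the magnitude spectrogram across frequency bins smears formant structure while preserving temporal dynamics, and blurring across time frames does the reverse.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import stft, istft

def degrade_spectrogram(audio, fs, axis, sigma=8.0):
    """Blur the magnitude spectrogram along one axis and resynthesize.

    axis=0 blurs across frequency bins (spectral degradation);
    axis=1 blurs across time frames (temporal degradation).
    Phase is left intact, so only the chosen dimension is smeared.
    """
    f, t, Z = stft(audio, fs=fs, nperseg=512)        # Z has shape (freq, time)
    mag, phase = np.abs(Z), np.angle(Z)
    mag = gaussian_filter1d(mag, sigma=sigma, axis=axis)
    _, out = istft(mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return out

# Usage with a placeholder signal (substitute a real sentence recording):
fs = 16000
audio = np.random.randn(fs)                          # 1 s of noise as a stand-in
spectrally_degraded = degrade_spectrogram(audio, fs, axis=0)
temporally_degraded = degrade_spectrogram(audio, fs, axis=1)
```

Under this sketch, the two degraded versions hold the overall signal constant while selectively removing spectral or temporal detail, mirroring the contrast the abstract describes between conditions in which visual speech might compensate for the missing dimension.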