(Re)habilitation and Counseling (C)
Research (R)
Edward T. Auer, Jr., PhD
Associate Research Professor
The George Washington University
Lawrence, Kansas
Disclosure(s): No financial or nonfinancial relationships to disclose.
Lynne E. Bernstein, PhD
Professor
George Washington University
Arlington, Virginia
Disclosure(s): No financial or nonfinancial relationships to disclose.
Silvio P. Eberhardt, PhD
Research Associate Professor
George Washington University
Arlington, Virginia
Disclosure(s): SeeHear LLC: Consultant/Advisory Board (Ongoing), Employment (Ongoing), Intellectual Property/Patents (Ongoing), Ownership Interest (Ongoing), Patent Holder (Ongoing), Research Grant (includes principal investigator, collaborator or consultant and pending grants as well as grants already received) (Ongoing), Stockholder/Ownership Interest (excluding diversified mutual funds) (Ongoing)
Nicole Jordan, AuD
Audiologist and Adjunct Professor
George Washington University
Alexandria, Virginia
Disclosure(s): SeeHear, LLC: Consultant (Terminated, July 31, 2022)
Speech recognition in noisy environments is typically improved by watching the talker. The magnitude of improvement is related to the perceiver’s lipreading ability, which is typically poor in individuals with adult-onset hearing loss. Multisensory speech training is a rehabilitative option for improving outcomes in individuals who have difficulty understanding speech in noise. We will present a contemporary view of the phenomena associated with multisensory speech recognition. We will discuss recent successful approaches to speech perception training that use feedback on perceptual errors in an open-set sentence recognition task. We will explore the benefits of a computerized feedback system along with opportunities for future clinical applications.
Summary:
Speech is inherently multisensory, although it is not uncommon to focus solely on its acoustic properties and auditory perception. This module provides an interactive introduction to multisensory speech perception and its training. The module will be divided into three parts: the first will review current research in multisensory speech perception, the second will engage the audience in critically evaluating past failed attempts at lipreading training, and the third will discuss the essential components of a successful multisensory speech perception training program and how to implement it in clinical practice. Throughout the learning module, several modes of engagement will be used, including survey responses/polling and several group activities with demonstrations to promote learning. Attendees will have the opportunity to engage in training through demonstrations that include visual-only and audiovisual speech perception tasks.
An overview of lipreading will be presented in the context of discussing a training approach designed to improve speech perception in noise. Attendees will learn about the typical ranges of lipreading ability and how they differ as a function of the lipreader’s perceptual experience. Attendees will understand how the typical range of lipreading ability among adults with early-onset deafness provides evidence that useful speech information is available on the talker’s face. The existence of expert lipreaders supports the hypothesis that there is room for training to improve the use of visual speech information in other populations. Evidence suggests that constraints provided by the lexicon are an important contributor to the ease of spoken word recognition. Phonetic information in lipreading is typically thought to be limited to the viseme; however, evidence demonstrates that this conclusion is not accurate. We will discuss the availability of visual phonetic information and how it interacts with the mental lexicon, that is, the words a person knows, during spoken word recognition.
Previous attempts at training lipreading have not been highly successful. The training system discussed in this module was designed with reference to current theories of perceptual learning, defined as learning that effects long-term changes in an individual’s perceptual system, improving their ability to respond to the environment. Using this conceptual framework for word recognition, we will explain the error-driven feedback provided during training, and we will discuss concepts of internal and external feedback in relation to setting conditions for training that both improves perception and generalizes beyond the trained materials. During training, feedback is provided on errors made in open-set responses to sentence stimuli: the system displays the correct words and the consonants in misperceived words.
Attendees will learn how a multisensory speech perception training paradigm with computerized feedback can improve patients’ overall outcomes and help meet (re)habilitation goals. The presenter(s) will discuss the benefits of a computerized feedback system and how technology can be used to support training and track progress. The presenter(s) will describe various ways computerized training programs have been integrated into clinical practice and how the future of audiology will likely be closely intertwined with asynchronous computerized hearing management options.