Research (R)
Silvio P. Eberhardt, PhD
Research Associate Professor
George Washington University
Arlington, Virginia
Disclosure(s): SeeHear LLC: Consultant/Advisory Board (Ongoing), Employment (Ongoing), Intellectual Property/Patents (Ongoing), Ownership Interest (Ongoing), Patent Holder (Ongoing), Research Grant (includes principal investigator, collaborator or consultant and pending grants as well as grants already received) (Ongoing), Stockholder/Ownership Interest (excluding diversified mutual funds) (Ongoing)
Lynne E. Bernstein, PhD
Professor
George Washington University
Arlington, Virginia
Disclosure(s): No financial or nonfinancial relationships to disclose.
Edward T. Auer, Jr., PhD
Associate Research Professor
The George Washington University
Lawrence, Kansas
Disclosure(s): No financial or nonfinancial relationships to disclose.
Rationale/Purpose:
There are claims in the literature that lipreading is an “inborn trait.” Indeed, laboratory training typically produces only small improvements, if any. This is unfortunate, because the ability to recognize speech visually and to integrate visual and auditory speech is highly advantageous for recognizing noisy or degraded speech. We hypothesized that if lipreading could be improved, with generalization to untrained materials, audiovisual speech recognition in noise could also be improved in adults with age-related hearing loss (ARHL).
Methods:
In a preliminary study [Bernstein et al., 2022. During lipreading training with sentence stimuli, feedback controls learning and generalization to audiovisual speech in noise. Am J Audiol, 31: 57-77], we demonstrated with normal-hearing younger adults that lipreading training can significantly improve lipreading, with generalization to untrained visual and audiovisual speech. In the randomized controlled study reported here, adults with ARHL (mild to moderately severe; N = 50, ages 39 to 86) were assigned to 10 days of lipreading training (visual-only, VO) or to an untrained control group. Training was open-set identification of the words in lipread sentences with feedback. One trained group received the whole printed sentence as feedback; the other received feedback for correct words and for the consonants in misperceived words. All participants were tested on untrained auditory, visual, and audiovisual sentences before and after training. The sentences were spoken by a talker used in training and by a talker not used in training, and no testing materials were ever reused. The results were analyzed using mixed-effects logistic regression.
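For illustration only, the following is a minimal sketch, not the authors' analysis code, of how such a mixed-effects logistic regression could be specified in Python with statsmodels (the authors may well have used different software). The data file, the column names ('correct', 'group', 'session', 'talker', 'subject'), and the factor levels are all hypothetical assumptions about a long-format scoring table with one row per scored word.

# Hedged sketch: mixed-effects logistic regression on per-word scores.
# All file and column names below are hypothetical, not from the study.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("scores_long.csv")  # hypothetical long-format scores

# Fixed effects: the group-by-session interaction (the training effect
# of interest, with the untrained controls as the reference level) plus
# talker; random intercepts for participants ('subject').
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(group, Treatment('control')) * C(session) + C(talker)",
    vc_formulas={"subject": "0 + C(subject)"},
    data=df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())

Under these assumptions, a positive group-by-session coefficient for a trained group would indicate greater pre- to posttraining improvement than that shown by the untrained controls.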
Results & Conclusions:
Results:
In statistical analyses with the untrained control group as the reference, (1) both trained groups improved significantly (p < .01) in lipreading untrained sentences, but only the group that received feedback for misperceived words improved significantly on the new talker; (2) only that group also improved significantly (p < .001) on audiovisual speech in noise; and (3) remarkably, the group that received whole-sentence feedback showed significantly decreased recognition (p < .01) of auditory sentences in noise, a reduction that extended to the talker not seen during training. We emphasize that all of the foregoing effects were tested against the test-retest results of the untrained controls.
Average pre- to posttraining effect sizes were reported for three measures: lipreading, audiovisual speech in noise, and listening in noise.
Conclusions: Training with feedback for perceptual errors significantly improved lipreading and audiovisual speech recognition in noise. Training with whole-sentence feedback improved lipreading but significantly worsened auditory speech-in-noise recognition. The participants who received printed sentences as feedback may have adopted inferencing or guessing strategies that were deleterious for auditory speech. The generally disappointing results of previous lipreading training studies were perhaps due to inadequate feedback during training.
Research supported by NIH/NIDCD.