Amplification and Assistive Devices (AAD)
Research (R)
Matthew B. Fitzgerald, PhD
Chief of Audiology
Stanford University
Palo Alto, California
Disclosure(s): No financial or nonfinancial relationships to disclose.
Dave A. Fabry, PhD
Chief Hearing Health Officer
Starkey
Eden Prairie, Minnesota
Disclosure(s): Starkey: Employment (Ongoing)
Difficulty understanding speech in noise is the most common complaint of individuals with hearing loss, even with the use of hearing aids. Recent advances in hearing aid signal processing have the potential to improve speech recognition in noise. Here, we evaluate the effectiveness of a user-guided signal processing algorithm. Our results demonstrate a small but significant improvement in speech recognition in multi-talker babble when audibility is maximized, with no improvement observed in speech-shaped noise or when audibility is compromised. These results suggest that signal processing algorithms may provide some benefit for users of hearing aids, but only in some listening conditions.
Summary:
For individuals with sensorineural hearing loss, the most common complaint is difficulty understanding speech in background noise. The most common treatment for these patients is hearing aids, an approach that has helped millions of patients communicate more effectively. Unfortunately, hearing aid users routinely report continued difficulty understanding speech in noise. These continued difficulties despite the use of well-fit devices suggest that advances in signal processing may be needed to help patients maximize their communication abilities. In this project, we evaluated a user-guided signal processing algorithm designed to maximize spectral contrast via offsets applied to target gain, and to provide directional hearing cues and more aggressive noise management when appropriate.

Twenty listeners completed a pre-test, a four-week field trial, and a post-test. In the pre-test, all participants underwent pure-tone audiometry and completed four measures of speech recognition in noise: the QuickSIN, the Words in Noise (WIN) test, the Nonsense Syllable Test (NST), and CNC monosyllabic word recognition at a +5 dB signal-to-noise ratio (SNR) in multi-talker babble. All participants were then fit with bilateral hearing aids; all fittings were verified to the NAL-NL2 prescription using real-ear measurements. At the end of the field trial, all participants completed aided testing on the four speech-in-noise measures, with the user-guided signal processing algorithm both 'on' and 'off.' Results showed a small (~1 dB) but significant improvement in SNR loss on the QuickSIN with the user-guided algorithm, and a significant improvement on the CNC word-recognition task at +5 dB SNR. In contrast, no significant improvements were observed on the WIN or the NST.
Taken together, these results suggest that the user-guided signal processing algorithm tested here improved speech recognition in multi-talker babble under testing conditions in which the signal level was fixed (e.g., the QuickSIN and CNC at +5 dB SNR). The lack of improvement on the WIN may reflect the design of that test: the noise level is fixed while the signal level is gradually decreased. Thus, audibility may have been compromised for some signals, which would reduce the effectiveness of any signal processing algorithm. Finally, the lack of improvement on the NST may reflect the additional cognitive load associated with identifying nonsense syllables, or it may reflect the use of speech-shaped noise as the background noise. By this logic, the signal processing algorithm evaluated here may be less effective in 'static' noises than in more dynamic noises such as multi-talker babble. Overall, these results suggest that this signal processing algorithm may provide small but statistically significant improvements in speech recognition for users of hearing aids, but only in some listening conditions.