Research (R)
Edward T. Auer, Jr., PhD
Associate Research Professor
The George Washington University
Lawrence, Kansas
Disclosure(s): No financial or nonfinancial relationships to disclose.
Silvio P. Eberhardt, PhD
Research Associate Professor
George Washington University
Arlington, Virginia
Disclosure(s): SeeHear LLC: Consultant/Advisory Board (Ongoing), Employment (Ongoing), Intellectual Property/Patents (Ongoing), Ownership Interest (Ongoing), Patent Holder (Ongoing), Research Grant (includes principal investigator, collaborator or consultant and pending grants as well as grants already received) (Ongoing), Stockholder/Ownership Interest (excluding diversified mutual funds) (Ongoing)
Lynne E. Bernstein, PhD
Professor
George Washington University
Arlington, Virginia
Disclosure(s): No financial or nonfinancial relationships to disclose.
Rationale. Several of our recent speech perception training studies required 3 to 6 sessions of screening and pre- and post-training testing, as well as 8 or 10 training sessions on different days. Participants were 200 older adults with hearing loss. Recruiting that many participants locally was difficult, and high attrition was expected given the multiple required laboratory visits. Consequently, we conducted these studies fully remotely.
Design. A web-based cloud platform presented stimuli and collected responses, with fully automated implementation of almost all study procedures, including consenting and questionnaire administration. Visual and/or auditory-in-noise speech training and testing were implemented with syllable, word, and sentence materials, with per-participant controls for stimulus selection and randomization. Participants logged into the platform and entered a simple code that initiated creation of the sequence of activities needed for each stage of the study. A dashboard displaying participants' progress was used to identify participants who might be experiencing difficulties.
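The platform's internals are not described in the abstract; the following Python sketch shows one common way to make per-participant stimulus orders reproducible across logins, by seeding a pseudorandom generator from the participant ID and study stage. The function name, participant ID, and stimulus set are all hypothetical.

```python
import hashlib
import random

def stimulus_order(participant_id: str, stage: str, stimuli: list) -> list:
    """Return a deterministic shuffled copy of `stimuli` for this participant/stage."""
    # Derive a stable seed from the participant and stage, so every login
    # regenerates the same randomized sequence for that participant.
    key = f"{participant_id}:{stage}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    order = list(stimuli)
    rng.shuffle(order)
    return order

# Example: the same participant and stage always yield the same order.
print(stimulus_order("P017", "pretest", ["ba", "da", "ga", "pa", "ta", "ka"]))
```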
Pre- and post-training test scores were the primary measures used to evaluate training effectiveness, so special care was taken to ensure experimental control during testing. An experimenter teleconferenced with participants throughout the test sessions to explain the procedures and answer questions, turning off their own audio and video during testing itself. Acoustic control was provided by sending each participant an inexpensive, calibrated sound-level meter for measuring background noise and calibrating the computer's volume level. Computerized visual acuity and contrast sensitivity tests were administered to check participants' vision when using their own computer displays.
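As a rough illustration of the acoustic checks, this hypothetical sketch encodes the two decisions a session start might make from participant-entered sound-level-meter readings; the target and ceiling levels are assumptions, not the study's actual values.

```python
TARGET_DB = 65.0       # assumed speech presentation level (dB SPL)
MAX_AMBIENT_DB = 45.0  # assumed ceiling for ambient noise at session start

def ambient_ok(meter_reading_db: float) -> bool:
    """Check the participant's reported background-noise level before testing."""
    return meter_reading_db <= MAX_AMBIENT_DB

def volume_advice(meter_reading_db: float, tolerance_db: float = 2.0) -> str:
    """Tell the participant how to adjust computer volume toward the target."""
    if abs(meter_reading_db - TARGET_DB) <= tolerance_db:
        return "volume calibrated"
    return "increase volume" if meter_reading_db < TARGET_DB else "decrease volume"

print(ambient_ok(38.0))     # True: quiet enough to start
print(volume_advice(60.5))  # "increase volume"
```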
Because pure-tone thresholds could not be obtained remotely, participants were asked to send a recent audiogram or to provide a release for us to obtain one from their provider.
Results. All but one of the participants (39 to 86 years of age) successfully accessed the platform on their own computers. A few participants whose devices' maximum volume was too low were sent USB speakers. Of the pool of 200 participants, 28 withdrew, most of whom had been assigned to training paradigms they found too challenging. Most participants who completed the studies found the experience rewarding and felt they had improved their speech understanding in noise.
Mixed-model analyses yielded highly significant effects, with clinically relevant effect sizes between participant groups trained with different paradigms, strongly suggesting that variance introduced by the home environment was small relative to the experimental effects.
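The abstract does not report the model specification. As one plausible sketch, a linear mixed model with a random intercept per participant and fixed effects of training group, test time, and their interaction could be fit with statsmodels in Python (the file and column names are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per participant x test time, with columns
# participant, group (training paradigm), time (pre/post), and score.
df = pd.read_csv("scores_long.csv")

# Random intercept per participant; the group x time interaction carries the
# differential training effect between paradigms.
model = smf.mixedlm("score ~ group * time", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```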
Reliability of remote testing was evaluated directly by comparing no-training control participants' pre- and post-test scores. For each sentence and word test presented visually and/or auditorily in noise, reliability was consistently high, at approximately r = 0.8.
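A minimal sketch of that reliability computation, assuming a table with one row per no-training control participant and hypothetical column names:

```python
import pandas as pd
from scipy.stats import pearsonr

controls = pd.read_csv("control_scores.csv")  # assumed columns: pre_score, post_score
r, p = pearsonr(controls["pre_score"], controls["post_score"])
print(f"test-retest reliability: r = {r:.2f} (p = {p:.3g})")
```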
Conclusions. Data collected in remote speech perception testing studies can be highly reliable when experimenters can remotely monitor the most important measurements and when sound-level meters are supplied to participants for calibrating computer audio levels and verifying a sufficiently low ambient background noise level at the outset of each session. Remote testing expands the geographical distribution of participants, yields high levels of study completion, and improves ecological validity.