Two types of sound-sharing competitors, onset-overlap and offset-overlap, were presented (visually and auditorily) on different trials. The target words and their phonological competitors were matched on linguistic characteristics such as frequency, familiarity, and number of syllables, and the corresponding pictograms were matched for recognizability and visual saliency.
On all displays, four images corresponding to object names in Hebrew are presented in the four corners of a 3 × 3 grid on a computer monitor (9 × 9 cm, subtending ~8.5° visual angle at a distance of 60 cm).
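For reference, the reported visual angle follows from the standard formula θ = 2·arctan(size / (2 · distance)). The sketch below is an illustrative check, not part of the experiment code; it confirms that a 9 cm stimulus viewed from 60 cm subtends roughly 8.6°.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given size at a given viewing distance."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# A 9 x 9 cm image region viewed from 60 cm subtends ~8.6 degrees.
angle = visual_angle_deg(9, 60)
```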
Our image database includes 288 different images, the majority of which were drawn from the normed color image set of Rossion and Pourtois (2004); the others were taken from commercial clip-art databases and selected to match in visual style.
CANlab uses a 23″ ATCO infrared touch-screen panel (4096 × 4096) to allow a more intuitive response.
The Hebrew adaptation includes only disyllabic words, since in past research (Ben-David et al., 2011) disyllabic words yielded more accurate responses in the visual world paradigm.
The root mean square (RMS) intensity was equated across all recorded sentences.
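RMS equating amounts to rescaling each waveform so that its root-mean-square amplitude, √(mean(x²)), matches a common target. A minimal sketch follows; the function name and target value are illustrative, not the lab's actual tooling.

```python
import numpy as np

def equate_rms(signal: np.ndarray, target_rms: float) -> np.ndarray:
    """Scale a waveform so its root-mean-square intensity matches a target value."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms)

# Example: scale an arbitrary waveform to an RMS of 0.1.
x = np.random.randn(48000)
y = equate_rms(x, 0.1)
```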
For young adults in the Hadar, Nitzan, and Baharav studies, auditory files were mixed with continuous speech-spectrum noise at a fixed −4 dB signal-to-noise ratio (SNR), found in pretesting to yield approximately 80% accuracy (Hadar et al., 2015).
For older adults in Baharav (2020), auditory files were mixed with continuous speech-spectrum noise at a fixed 0 dB SNR, based on the discrimination-timeline values in Ben-David et al. (2012). Materials are presented binaurally at 50 dB above each individual's pure-tone average (PTA) via a MAICO MA-51 audiometer with TDH 39 supra-aural headphones.
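Mixing speech and noise at a fixed SNR can be sketched as rescaling the noise so that 20·log₁₀(speech RMS / noise RMS) equals the target value. The following is an illustrative implementation under that definition, not the software actually used in the studies.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix speech with noise at a fixed SNR by rescaling the noise relative to the speech RMS."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # Choose the noise RMS so that 20*log10(speech_rms / new_noise_rms) == snr_db.
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)
```

With `snr_db=-4.0`, the noise component ends up about 4 dB more intense than the speech, matching the young-adult condition; `snr_db=0.0` equates the two, matching the older-adult condition.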
Participants are seated 60 cm from the computer screen, with their head on the eye tracker's chin rest to minimize head movement. Each participant's dominant eye is calibrated so that gaze position can be recorded online throughout each trial. A table-mounted SR EyeLink 1000 eye tracker in the tower-mount configuration is used (SR Research Ltd., Ontario, Canada), with gaze position recorded by the EyeLink software at a rate of 500 Hz.
Trials begin with a visual cue, a black "play" triangle centered on the screen, immediately followed by the auditory presentation of the digit(s) preload through the headphones. In the Ewindmil, digits were presented both in quiet and in adverse-noise conditions (see the resources page for a detailed description of the auditory balancing).
Then the 3 × 3 grid with the four images appears. Participants are given 2 s to view the object positions, after which a fixation cross appears in the center of the screen. A brief 1000 Hz tone signals participants to fixate the cross, ensuring they are focused on the task. Once the system registers cumulative fixations on the cross totaling at least 200 ms, the cross disappears and the instruction sentence "point at the ___ [target word]" is presented via the headphones. Following the participant's selection of a stimulus, visual feedback (a red highlight for an incorrect answer, a green highlight for a correct one) appears in the square of the selected image.
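The 200 ms cumulative-fixation trigger can be sketched as summing the duration of gaze samples that fall inside a region of interest around the cross; at the 500 Hz sampling rate, each sample contributes 2 ms. The ROI geometry and function names below are hypothetical, for illustration only.

```python
# Sketch of the dwell-time trigger: the cross disappears once cumulative
# gaze samples inside the fixation-cross region total at least 200 ms.

def cumulative_dwell_ms(gaze_samples, roi_contains, rate_hz=500.0):
    """Sum the time (in ms) of gaze samples that fall inside a region of interest."""
    return sum(1000.0 / rate_hz for xy in gaze_samples if roi_contains(xy))

def cross_roi(xy, center=(0.0, 0.0), half_width=20.0):
    """Hypothetical square region of interest around the fixation cross (screen units)."""
    return abs(xy[0] - center[0]) <= half_width and abs(xy[1] - center[1]) <= half_width

# 100 samples inside the ROI at 500 Hz amount to exactly 200 ms of dwell.
samples = [(1.0, -2.0)] * 100
triggered = cumulative_dwell_ms(samples, cross_roi) >= 200.0
```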