Auditory Responsive Naming
Auditory naming tasks were developed as an auditory analog of visual object naming (e.g., Bookheimer et al., 1998). Such tasks form an ideal supplement to visual object naming; examining regions of overlap allows identification of language areas independent of lower-order visual or auditory sensory regions.
In these tasks patients hear simple auditory cues such as "you walk with them" and imagine speaking the answer (e.g., feet, legs). They keep their eyes closed throughout to remove visual activation. Control conditions range from simple rest (eyes closed) to conditions matched for auditory input (e.g., hearing the same stimuli scrambled, or white noise). As with visual object naming, a limitation of such tasks is that it is difficult to monitor task engagement and accuracy.
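The alternation between task blocks (auditory cues with covert naming) and control blocks (rest or matched auditory input) follows a standard block design. The sketch below illustrates how such a schedule might be generated; all timing values and the function name are illustrative assumptions, not the published protocol.

```python
# Illustrative block-design schedule for an auditory naming run.
# Block durations and cycle counts are assumptions for illustration only.

def build_schedule(n_cycles=5, task_s=30, control_s=30):
    """Alternate task (auditory cues, covert naming) and control
    (rest or matched auditory input) blocks; return a list of
    (onset_seconds, condition) pairs."""
    schedule, t = [], 0
    for _ in range(n_cycles):
        schedule.append((t, "task"))      # patient hears cue, imagines answer
        t += task_s
        schedule.append((t, "control"))   # eyes-closed rest or scrambled audio
        t += control_s
    return schedule

for onset, cond in build_schedule(n_cycles=2):
    print(f"{onset:4d}s  {cond}")
```

A schedule like this would then drive stimulus delivery in the presentation software, with the task-versus-control contrast modeled at analysis time.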
As noted, auditory naming tasks were first used with other imaging modalities before being adopted in fMRI. For instance, work with PET demonstrated that such tasks activate primary auditory regions as well as traditional language areas (Bookheimer et al., 1998). When used as part of a panel of fMRI tasks, agreement with Wada results has been good (Gaillard et al., 2004).
Using a specific analysis approach to combine Auditory Responsive Naming with Object Naming and Verbal Responsive Naming tasks, we found 85% overall correspondence with Wada lateralization (Benjamin et al., 2017). The version of the task used in that paper is available free of charge for download here.
Bookheimer et al., 1997. A direct comparison of PET activation and electrocortical stimulation mapping for language localization. Neurology 48:1056–1065.
Bookheimer et al., 1998. Regional cerebral blood flow during auditory responsive naming: evidence for cross-modality neural activation. Neuroreport 9(10):2409-13.
Gaillard et al., 2004. fMRI language task panel improves determination of language dominance. Neurology 63:1403–1408.
Benjamin et al., 2017. Presurgical language fMRI: Mapping of six critical regions. Human Brain Mapping, in press.
This task runs in Presentation software (www.neurobs.com) on a PC or Mac (via Boot Camp or a virtual machine).