ATTENUATION OF AUDITORY NEGLECT BY IMPLICIT CUES
Auditory neglect exists when a person who has sustained a brain lesion does not perceive an auditory stimulus in one ear when stimuli are presented binaurally or dichotically; unilateral presentations to each ear are perceived much more accurately and may even be perceived at normal levels (De Renzi et al., 1989). Auditory neglect and imperceptions are most commonly assessed in the clinical setting with bilateral simultaneous stimulation using simple sound stimuli, such as finger rubbing. The laboratory examination includes high-resolution pure tones and complex stimuli presented in a sound-controlled environment through headphones that surround the ears. Basic auditory acuity is first evaluated by presenting unilateral stimuli, and once intact acuity is established, bilateral simultaneous stimuli are presented. Neglect is present if the patient hears stimuli on only one side when stimuli are presented bilaterally (Heilman & Valenstein, 1972).
Auditory neglect frequently occurs concomitantly with visual neglect. De Renzi and colleagues (1984) found auditory extinction in 20 of 24 visually neglecting patients. Auditory extinction was present in 46% of 144 stroke patients tested within 1 week of hospitalization. Because auditory neglect rarely interferes with daily life, it is likely unnoticed by the patient and will remain undiagnosed unless the patient is administered specific tests. Sound transmission is pervasive in the environment, and sounds on the neglected side consequently are heard on the intact side. As a result, there is a degree of compensation possible with auditory neglect that is not possible with visual neglect. Furthermore, the auditory system has ipsilateral pathways to the cortex in addition to the major contralateral projections. It is possible that sound entering the neglecting ear may still be processed by the intact hemisphere through these ipsilateral pathways (Speaks et al., 1975).
Some investigators have emphasized the defect in sound localization associated with visual–spatial neglect and have essentially defined auditory neglect as a defect in localizing sound to the correct hemifield or as a bias of attention in favor of the intact hemifield (Soroker et al., 1997). Although sound localization is an important aspect of auditory neglect syndromes, especially in the interaction of sound perception and spatial processing, a defect in localizing sound is only one aspect of processing auditory information. Recent studies suggest that auditory neglect is the end result of several defects in auditory processing that extend beyond problems in spatial processing and attention to the hemifields (Bellmann et al., 2001; De Renzi et al., 1984). For the purposes of this study, auditory neglect includes the primary extinction of sound in one ear when each ear receives a different sound stimulus. Because the subject can perceive the sound when it is presented unilaterally, it is unlikely that this imperception results from a primary defect in localizing sound or a bias in attention.
Auditory neglect is most common following lesions of the inferior parietal–superior temporal junction (De Renzi et al., 1989; Heilman & Valenstein, 1972). However, it has also been reported after lesions to the right posterior pulvinar, right frontal lobe (Hugdahl et al., 1991), external capsule, anterior section of the internal capsule (Junque & Marti-Vilalta, 1990), periventricular region (Pujol et al., 1991), caudate and lentiform nuclei (Castro-Caldas et al., 1984; Eustache et al., 1990), and the posterior region of the corpus callosum (Alexander & Warren, 1988).
Explicit and implicit cuing methods have been used to examine and remediate visual neglect (Farah et al., 1993; Kartsounis & Warrington, 1989; Seron et al., 1989; Sieroff & Posner, 1988). Explicit cues do not usually include the fundamental sensory properties of the neglected stimulus, and participants are aware of their presence; an example is a command to the patient to “look left.” In contrast, implicit cues are an intrinsic aspect of the stimulus, and the subject may be unaware of them. For example, seeing the end section of a common printed word will cue an unimpaired reader to look left for the letters that complete the word, whereas random letter strings of the same length may produce no search for completeness. Seron and associates (1989) examined neglect patients who observed progressively more complete drawings of common objects that had been cut into vertical sections. Increasingly larger sections of the same object were shown to the participants, with more of the picture completed on the left side. Performance was better when objects remained ambiguous or obviously incomplete until presentation of the far left sections. The obvious incompleteness of a meaningful stimulus acted as an implicit cue for maintaining awareness and search of the left side. Explicit cuing in the form of commands to look at the entire picture had no effect on performance. Kartsounis and Warrington (1989) examined the effects of implicit cues on visual neglect in a patient who developed severe neglect after removal of a frontal–temporal meningioma in the right hemisphere. Tests included copying overlapping figures or objects, describing a picture with interacting elements on the left and right sides (e.g., a picture of two people arguing), counting attached blocks versus counting separate dots, and reading meaningful sentences versus reading meaningless word strings. In each case, the tasks with interactive or meaningful stimuli were performed with a higher degree of completeness.
AUDITORY NEGLECT AND IMPLICIT CUES DERIVED FROM LANGUAGE
A dominant feature of language processing is the creation of semantic associations between elements in verbal memory (Miller & Fellbaum, 1991). These include common semantic structures, such as noun hierarchies, semantically integrated adjectives, and compound words (Channon et al., 1989; McCarthy & Warrington, 1988; Solso, 1995). If an object or experience is described in reference to particular semantic categories, it will be associated with other similar words and semantic concepts. After years of everyday use, these semantic associations become implicit (Charles & Miller, 1989; Miller & Fellbaum, 1991). The associative process enhances perception of related and meaningful stimuli, which may explain the results of the Kartsounis and Warrington (1989) study in which reading semantically meaningful sentences reduced visual neglect in contrast to meaningless sentences.
The effects of implicit semantic and other language cues on auditory neglect have not been examined. The auditory system is strongly associated with language processing, and there is sufficient theoretical support to hypothesize that semantic cues will attenuate auditory neglect in a manner similar to the attenuation of visual neglect associated with spatial cues.
To begin this investigation, we chose to examine the phonemic and semantic content of sound stimuli among patients with right hemisphere lesions, auditory neglect, and intact language centers. Five word structures were examined: synonyms, antonyms, semantic categories, rhyming, and spondaic compound words. These structures were compared with corresponding unrelated words. Synonyms obviously have close semantic association because they have the same denotative meaning. Words from the same semantic category, such as iron–steel or fork–knife, incorporate another clear semantic relationship. Although antonyms have opposite meanings, they are also related as members of the same semantic category. Compound spondaic words are pairs of words that have been combined into one as part of the development of the language; the pair taken together represents a semantic or lexical combination of the simple words. Examples are “police-man”, “out-side”, or “stop-light” (see Appendix). Spondaic compound words have equal emphasis on each syllable and thus eliminate syllable stress as a possible confound in sound perception. Spondaic words are part of the standardized assessment of hearing function proposed by the American Speech and Hearing Association (ASHA; 1979). Rhyming was the one nonsemantic stimulus quality chosen for study. Rhyming words are associated by a similar phonemic structure.
The general study hypothesis was that participants with right hemisphere lesions, auditory neglect, and intact language centers would perceive more words in their affected ear when the left ear word was related to a simultaneously played right ear word than when the two words were unrelated. Because the language centers are intact among these patients, semantic associations between related words, when presented dichotically, should serve as a compelling implicit cue and attenuate the misperception of words presented to the affected ear.
METHOD
Research Participants
Clinical participants were recruited from inpatient hospital stroke units or outpatient stroke support groups in the New York City and Philadelphia areas. Control participants were recruited from the Philadelphia community. For inclusion in the study, clinical participants were required to have sustained a cerebral vascular accident that resulted in specific clinical evidence of a lateralized right hemisphere lesion, such as left hemiplegia, left visual neglect or visual field cut, or other significant sensory loss on the left side, together with evidence of a lateralized lesion on a computed tomography or magnetic resonance imaging scan. Participants with a history or current evidence of aphasia were excluded, as were clinical participants with a past history of stroke. Inclusion and exclusion of clinical participants were established by review of the hospital medical record. Both clinical and control participants were required to speak English as their primary language and have intact hearing. Participants with obvious hearing loss, a history of auditory disease, or a need for a hearing aid were excluded. As mild bilateral loss of acuity for tones in the upper frequencies is a normal aspect of aging and does not generally interfere with speech perception, this condition was not considered exclusionary.
Sixty-four people with a history of cerebral vascular accident were evaluated for possible inclusion in the study. Clinical participants were required to demonstrate neglect of the left ear on at least 20% of the prestudy stimuli presented dichotically and binaurally. These stimuli are described below as part of the Test of Auditory Perception (TAP; Williams, 1990). Fourteen individuals (6 male, 8 female) who met criteria for moderate to severe auditory neglect were thereby selected as clinical participants. Of the other candidates evaluated, 21 showed no evidence of neglect, 14 had only minor symptoms of neglect, and 15 had invalid evaluations due to confounding factors, such as refusal to complete the evaluation (2 participants) or inability to complete the evaluation because of dementia, confusional state, fluctuating alertness, or significant bilateral peripheral hearing acuity loss (11 participants). The 14 control participants (4 male, 10 female) were healthy volunteers with no history of neurological illness. Clinical participants ranged in age from 32 to 82 years (mean = 61.32; SD = 16.15); controls ranged from 35 to 85 years (mean = 64.29; SD = 14.09). All subjects were right-handed.
The study was reviewed and approved by the Institutional Review Board of Drexel University and the Institutional Review Boards of the respective hospitals where patients were recruited. The purpose of the study was reviewed with participants by the examiner, and all participants gave informed consent. Inpatient participants were tested at bedside, whereas outpatient participants and controls were tested in their homes. Participants evaluated at home were tested in a quiet room. When testing was done at bedside, attempts were made to eliminate distractions.
Materials
Prestudy Assessment of Auditory Perception
The TAP (Williams, 1990) was used to assess auditory acuity and neglect. The inclusion and exclusion procedure for selection of clinical participants used the performance on the TAP subtests. The TAP contains a set of computer-mediated auditory tests described below.
Acuity screening
The primary purpose of the acuity section was to establish a reliable volume level and confirm that hearing acuity was equivalent for both ears. Acuity screening consisted of pure tones at six frequencies (200, 400, 600, 1000, 2000, and 5000 Hz), each played at incrementally higher volume levels. The remainder of the testing was then conducted at the volume level at which the subject could reliably and comfortably hear the 1000 Hz tone.
Pure tone neglect test
This subtest consisted of 26 trials of 1000 Hz tones: 16 trials were unilateral presentations of the tone (8 left, 8 right), and 10 were bilateral simultaneous presentations. The presentation of stimuli was randomized. A bilateral stimulus was scored as neglected when it was reported to occur only in one ear. If the subject reported hearing nothing in either ear, it was scored as a failed item but not as neglected. The degree of neglect was calculated as a percentage of bilateral items perceived as occurring unilaterally. For example, three stimuli perceived as bilateral and seven stimuli perceived as occurring only in the right ear would be 70% neglect of the left ear.
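The neglect score is a simple percentage. The following minimal sketch, written in Python purely for illustration (the response codes and function name are hypothetical and not part of the TAP software), makes the scoring rule concrete.

```python
# Illustrative scoring of bilateral trials; the response codes ("both", "left",
# "right", "none") are hypothetical labels, not part of the TAP software.
def neglect_percentage(responses):
    """Percentage of bilateral trials reported as heard in only one ear."""
    unilateral_reports = sum(1 for r in responses if r in ("left", "right"))
    return 100.0 * unilateral_reports / len(responses)

# Example from the text: 3 trials heard bilaterally, 7 heard only on the right
# -> 70% neglect of the left ear.
print(neglect_percentage(["both"] * 3 + ["right"] * 7))  # 70.0
```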
Single word neglect test
This subtest is exactly the same as the Pure Tone subtest, except that the tonal stimuli are replaced by a single word spoken by a male voice. The word “Cat” was played either unilaterally or bilaterally.
Environmental sounds test
Ten environmental stimuli, such as hammering, a baby crying, and a lawn mower running, were played unilaterally and dichotically. The first 10 presentations were unilateral, with each of the 10 stimuli played once in random order; the remaining 16 trials were a random mix of unilateral and dichotic presentations. For the dichotic presentations, the sounds were randomly paired, with no combination occurring more than once, and each stimulus was played twice over the 10 dichotic trials. Participants were requested to name the sound and report the ear in which they heard the stimulus. Degree of neglect was calculated in the same manner as for the first two subtests.
A second environmental sounds section was also included, consisting of 16 stimuli (10 bilateral, 6 unilateral). The same 10 sounds were used, eliminating the need for the 10 preliminary unilateral trials, but instead of being played dichotically, each sound was played bilaterally one time, making the presentation similar to that of the Pure Tone and Single Word Neglect Tests.
Displaced spondaic words test
Each of the 20 trials in this subtest contained four words. Two words were played to the right ear and two to the left ear. The two words played to the right ear formed a compound word (e.g., base-ball), as did the two words in the left channel (e.g., high-chair). The words were offset from each other so that two of the words were dichotic and two were unilateral. For example, “base” was played unilaterally to the right ear, after which “ball” and “high” were played simultaneously (“ball” in the right ear, “high” in the left ear); then “chair” was played unilaterally to the left ear. The subject only reported the words heard. The monaural words provided an indication of unilateral perception, whereas unilateral perception of the dichotic words was an indicator of auditory neglect. The magnitude of neglect was calculated based on the number of dichotic trials missed out of 20.
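As an illustration of this trial structure and its scoring, the sketch below encodes the example trial from the text; the dictionary layout, field names, and function are inventions for this sketch and do not reproduce the TAP materials.

```python
# One Displaced Spondaic Words trial, using the example from the text.
# The dictionary layout is hypothetical and only illustrates the structure.
trial = [
    {"right": "base", "left": None},     # unilateral segment, right ear
    {"right": "ball", "left": "high"},   # dichotic segment
    {"right": None,   "left": "chair"},  # unilateral segment, left ear
]

def dichotic_missed(trial, reported_words):
    """True if either word of the dichotic segment went unreported."""
    segment = next(s for s in trial if s["right"] and s["left"])
    return not {segment["right"], segment["left"]} <= set(reported_words)

# A participant with left ear neglect might report only "base", "ball", "chair":
print(dichotic_missed(trial, ["base", "ball", "chair"]))  # True ("high" was missed)
```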
Displaced numbers test
This subtest is similar to the Displaced Spondaic Words Test, except that all of the words are replaced by a male voice speaking numbers. Four different random numbers occurred in each trial, with the first and last number presented unilaterally to the left and right ear, respectively, and the middle two numbers presented dichotically. For example, in one trial, the participant would hear the number “1” in the left ear, followed by the simultaneous presentation of the numbers “4” and “6” in the right and left ears, ending with the number “9” presented to the right ear. The participant was asked simply to report the numbers heard. Based on this example, a participant with left ear neglect would report the numbers 1, 4, and 9; a participant with right ear neglect would report 1, 6, and 9.
Test of Auditory Perception Reliability
Data from all participants (n = 75) were combined to assess the reliability of the TAP subtests. Coefficient alpha (Cronbach, 1951) was used to assess inter-trial consistency. The reliability coefficients ranged from .64 to .98 for the unilateral stimuli and from .87 to .98 for the bilateral stimuli. The combined reliability for all 80 bilateral items was .98. These coefficients represent very consistent responding.
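For reference, coefficient alpha is computed from the item variances and the variance of participants' total scores. The sketch below shows the standard formula applied to an items-by-participants matrix; the data in the example are randomly generated placeholders, not the study's item data.

```python
# Coefficient alpha (Cronbach, 1951):
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array with rows = participants and columns = items (0/1 correct)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Random placeholder data (75 participants, 10 items); random responding yields
# an alpha near zero, whereas the TAP bilateral items reached .98.
example = np.random.default_rng(0).integers(0, 2, size=(75, 10))
print(round(cronbach_alpha(example), 2))
```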
Performance of Control Participants on the Test of Auditory Perception
Across the 40 bilateral and 54 unilateral items of the first four neglect subtests of the TAP (Tones, Words, Environmental Sounds I, and Environmental Sounds II), control participants made virtually no errors. Two control participants made one error each on Environmental Sounds I. On the bilateral items of the Displaced Words subtest, the mean number of errors was 2.5 (SD = 2.73); the mean error on the Displaced Numbers subtest was 1.0 (SD = 1.6). On the two dichotic subtests, a left/right discrepancy of more than two errors by controls was rare; four control participants had discrepancies of three or more errors between left and right ears. On the bilateral items of the Displaced Words Test, the mean error for the left and right ears was 1.9 (SD = 2.7) and 1.4 (SD = 2.14), respectively. Left and right mean errors on the bilateral items of the Displaced Numbers Test were .26 (SD = .45) and .89 (SD = 1.7), respectively.
Performance of the Clinical Participants on the Test of Auditory Perception
No clinical participant produced more than one unilateral error on the Tones or Words subtests and no more than two unilateral errors on the Environmental Sounds I and II subtests. Variability was greater for unilateral items of the Displaced Words and Numbers subtests. Unilateral item means for errors on Displaced Words were 1.13 (SD = 36) and 4.87 (SD = 4.73) for the right and left sides, respectively. For unilateral items of Displaced Numbers, the right-side mean error was .81 (SD = 1.56) and the left-side mean error was 3.44 (SD = 5.6).
Construction of the Experimental Stimuli Containing Implicit Cues
Six lists of word-pairs were developed: synonyms, antonyms, categorically related words, rhyming words, spondaic compound words, and unrelated words. Each list contained 40 word-pairs presented in random order. Forty sets of six word-pairs were developed, each set including one word-pair from each category. All words were rated at a sixth-grade reading level or lower (Thorndike & Lorge, 1959). When choosing rhyming word-pairs, three elements of the words were considered: the words were required to have a matching vowel/consonant combination, the matching combination had to have the same relative placement in the word (e.g., at the beginning of both words or at the end of both words), and the words had to be of equivalent duration when spoken aloud.
Experimental stimuli were recorded by digital sampling in a professional sound studio where recording quality could be carefully controlled and bilateral simultaneous stimuli accurately synchronized. Pure tones were produced with a digital synthesizer, digital samples of environmental sounds were read from a compact disc of special effects sounds, and verbal stimuli were spoken by a male voice. All sounds were digitally edited so that the onset of each sound was precisely synchronized within the left and right stereo channels. Sounds were also edited to have equivalent amplitude levels and duration. All stimuli were recorded from the Digital Audio Tape (DAT) onto an audio cassette (TDK-IECII/Type II, SA-X90) and played to participants through Pro-60 Realistic headphones from a 5CR-SI Realistic recorder.
Procedure
The acuity test was used to establish a comfortable volume level for listening to stimuli and to screen out individuals with serious hearing deficits. Two clinical participants and three controls required the volume level to be increased to approximately 60 dB; all others heard stimuli at the initial setting of approximately 40 dB. The neglect tests were administered in the following order: Tones, Words, Environmental Sounds Dichotic Presentation, Environmental Sounds Bilateral Simultaneous Presentation, Displaced Words, and Displaced Numbers. The experimental test, consisting of the related and unrelated word-pair lists, was administered last.
Statistical Analyses
Data analyses were performed to compare performance on each of the lists of related words with performance on the list of unrelated words. To control for the possible influence of directed attention in dichotic listening (Bloch & Hallige, 1989), the present study scored as correct only the trials in each list in which the subject heard the stimuli correctly on both the right and left side. All other trials were scored as incorrect. The mean numbers of trials correct were then compared using analysis of variance (ANOVA) and planned comparisons.
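A minimal sketch of this scoring rule and one planned comparison is given below; the per-trial record layout and the score lists are hypothetical placeholders for illustration only, not the study's data.

```python
# Scoring rule: a trial counts as correct only if BOTH the right- and left-ear
# words were reported. The trial records and score lists below are hypothetical.
from scipy.stats import ttest_rel

def list_score(trials):
    """Number of trials in which both dichotic words were heard correctly."""
    return sum(1 for t in trials if t["right_correct"] and t["left_correct"])

# One summed score per clinical participant for two lists (placeholder numbers):
synonym_scores   = [31, 27, 22, 35, 18, 29, 25, 30, 21, 26, 33, 24, 28, 20]
unrelated_scores = [24, 20, 15, 30, 12, 23, 19, 26, 16, 21, 28, 18, 22, 14]

# Planned comparison between a related list and the unrelated list (paired t test).
t_stat, p_value = ttest_rel(synonym_scores, unrelated_scores)
print(round(t_stat, 2), round(p_value, 4))
```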
RESULTS
Two separate sets of analyses were run on the data derived from the experimental stimuli. First, the performance of controls was compared with that of clinical participants. This analysis determined whether the clinical participants continued to demonstrate neglect on the dichotic items of the experimental test and whether their performance differed from that of controls. Second, perception of the unrelated stimuli was compared with perception of the related stimuli. This analysis focused entirely on the performance of clinical participants across the word lists.
To evaluate whether the clinical participants demonstrated a distinct pattern of neglect, all word list scores were summed to produce a combined performance measure. A two-way ANOVA with repeated measures over Ear revealed a significant main effect for Group (F(1,52) = 54.85; p < .0001), a significant main effect for Ear (F(1,52) = 35.14; p < .0001), and a significant interaction (F(1,52) = 30.86; p < .0001). The interaction was further analyzed by planned comparisons between the individual means (Table 1).
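The analysis reported here is a mixed design (Group between subjects, Ear within subjects). The sketch below shows how such an ANOVA might be run on summed word-list scores arranged in long format; the data frame is synthetic, and the use of the pingouin package is an assumption for illustration, not the authors' software.

```python
# Synthetic long-format data (one row per participant per ear) and a
# Group x Ear mixed-design ANOVA; the score values are illustrative only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for group in ("clinical", "control"):
    for pid in range(14):
        for ear in ("left", "right"):
            # Clinical participants are given lower left-ear scores to mimic neglect.
            base = 150 if group == "control" else (60 if ear == "left" else 120)
            rows.append({"id": f"{group}{pid}", "group": group,
                         "ear": ear, "score": base + rng.normal(0, 10)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(dv="score", within="ear", between="group", subject="id", data=df)
print(aov)
```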
There was no significant difference in auditory perception between the left and right ears of control participants, indicating that they did not demonstrate neglect or an ear preference, t(13) = −.83; p = .42. Among clinical participants, there was a significant difference between left and right auditory perception, indicative of left auditory neglect of the stimuli (t(13) = −5.52; p < .0001). The difference in left ear perception between the two groups was significant (t(13) = −6.76; p < .0001), but the comparison of right ear perceptions between the groups was not (t(13) = −1.80; p < .09).
For the second analysis, the number of word pairs heard correctly was summed for each word list (Table 2). Planned comparisons, using paired t tests, were made between the unrelated word list and each of the related word lists. Although all the comparisons were significantly different, the comparison between rhyming words and unrelated words was significant in the direction opposite to that predicted: fewer rhyming words were heard on the affected side than unrelated words. All of the other comparisons support the hypothesis that related word pairs were more accurately perceived than unrelated word pairs.
Because the difference between the rhyming and unrelated lists was in the direction opposite to that expected, further analyses of the lists were conducted. For each list, the number correct for the right and left ears was calculated; the difference between the two ear scores is a measure of neglect. For all lists, more words were heard in the right than the left ear, indicating that participants had neglect on all lists (Table 3). However, the magnitude of the right-versus-left difference varied substantially between lists, with the greatest discrepancy demonstrated on the rhyming word list. Right-versus-left difference scores for the separate lists were calculated, and comparisons were made using paired t tests (Table 4). The rhyming word list had significantly greater unaffected-versus-affected differences than the other word lists. Only one other difference was significant, that being the difference between synonyms and categorical words. These results indicate that neglect was substantially worse for rhyming word-pairs.
DISCUSSION
Participants who demonstrated auditory neglect on the TAP subtests that included bilateral simultaneous stimulation and dichotic listening also showed neglect on the experimental dichotic listening tests. On the experimental tests, neglecting participants had decreased perception of stimuli in both ears relative to controls, but also demonstrated much greater deficits in perception by the contralesional ear. When dichotic word pairs were associated as synonyms, belonged to the same semantic category, or were presented as spondaic compound words, the words played to the neglecting ear were more likely to be heard than when the dichotic words were unrelated. An unexpected finding was that pairs of rhyming words were more poorly perceived than unrelated dichotic word pairs; the presentation of rhyming stimuli exacerbated neglect.
These results demonstrated that implicit cues increase perception of auditory stimuli on the neglected side of persons with auditory neglect. Studies of patients with left visual neglect have demonstrated that awareness of stimuli on the left side increases when the stimuli are intrinsically related to stimuli on the right. Visual neglect studies that examined verbal and spatial cues found both to effectively increase perception of visual stimuli on the neglected side (Kartsounis & Warrington, 1989). Dichotic words related by semantic or other types of association provide implicit cues to the auditory and verbal processing areas of the brain in a manner that probably parallels implicit cuing in the visual systems.
Inconsistent with predictions, rhyming words exacerbated neglect. An explanation for this finding rests on the fact that rhyming words are simply more difficult to discriminate. For a task in which a person must identify two simultaneously spoken words, words with strikingly different phonemic structure, such as syllabic stress and length of verbalization, are likely discriminated and perceived more easily than words in which only one or two phonemes differ (e.g., “bleach-leach”, “fight-white”). Support for this explanation can be found in the relative difficulty that such stimuli pose for normal participants: healthy participants have higher error rates on dichotic listening tasks that use consonant–vowel pairs of one syllable (e.g., “ga-ka”, “pa-ba”) than on tasks using entire words (Springer, 1986).
Integration of this study with attention control theories (Kinsbourne, 1993; Posner & Petersen, 1990) can be accomplished if one proposes an interaction between attention control and other cognitive systems, such as language processing. Directed attention allows one to focus on salient information in the environment. Information on one side of space or perceived through one side of the auditory channels becomes salient if it is integrated in some manner with information from other systems, such as semantic knowledge. This interaction serves as a cue to the system that directs attention. Under normal circumstances, the attention control system searches all channels and sources of information. When brain injury occurs, this system is rendered defective and search is limited to one side or channel. Only very salient interacting stimuli compel the system to direct attention to the contralateral channel. The end result of this defective search is identified as “neglect.” This integrated concept is supported by a study in which participants with visual neglect were shown progressively more complete pictures. If the pictures were ambiguous, then participants were more likely to attend to the neglected side to gain a complete interpretation of the picture (Seron et al., 1989). Furthermore, priming studies demonstrated that awareness of stimuli by normal participants is increased by presentation of semantically related stimuli (Tulving & Schacter, 1990). Applied to the current study, perception of a word acted as a prompt for the attention system to search for other semantically related words.
Another explanation of the findings is simply that auditory neglect results from the perception of degraded stimuli on the affected side. Following a brain lesion, the auditory information on the affected side may be degraded, as if noise were mixed into the sound on one side or the sound intensity were reduced to the point of substantial imperception. Attempts to perceive such a degraded stimulus would appear as auditory neglect. When the auditory stimulus incorporates language content in both channels, the language centers become active and process the degraded and nondegraded stimuli, and the subject then perceives the complete stimuli presented in both ears. Because rhyming words reduce discrimination across the ears, rhyming content actually serves to degrade the stimulus further, and the participants have greater difficulty accurately perceiving rhyming sounds. This concept is very similar to the process of semantic binding. Aphasic patients who could not use syntax were able to activate semantic associations, and this facilitated language comprehension (Hagoort et al., 2003). In the present study, these semantic associations were designed into the interactive stimuli. Presumably the same semantic networks became active when these stimuli were presented, and the patients with neglect used them to correctly perceive stimuli from both auditory channels.
In general, this study and others that have examined implicit cues suggest that neglect is not a simple unitary phenomenon and is not isolated to defects in spatial reasoning and attention bias. Information that is not perceived or processed following brain lesions varies with the processing systems involved and the type of information presented to the participant. Perceptual systems such as vision and hearing have elaborate secondary systems that process specific types of visual and auditory information. These secondary systems include those that control attention to the stimulus, process the stimulus for spatial location, and process the stimulus for semantic content. For example, the sound of a person speaking conveys the location of the speaker, the unique frequencies that correspond to that speaker's voice, and the volume of the sound, as well as the semantic content of the language conveyed by speech. If the investigator presents only pure tones to the neglecting subject, or the simple finger rubbing used in the clinic, then the phenomenon described as neglect will appear to be nothing more than the imperception of a simple tone or a defect in localizing a simple tone to the left or right side of the head. This study and others strongly suggest that neglect is a common behavioral result of defects in one or more of the complex systems used to process sensory information.