
Cochlear implant patients' speech understanding in background noise: effect of mismatch between electrode assigned frequencies and perceived pitch

Published online by Cambridge University Press:  05 March 2010

W Di Nardo
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
A Scorpecci*
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
S Giannantonio
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
F Cianfrone
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
C Parrilla
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
G Paludetti
Affiliation:
Department of Otorhinolaryngology, Catholic University of the Sacred Heart, Rome, Italy
*
Address for correspondence: Dr Alessandro Scorpecci, Institute of Otorhinolaryngology, Catholic University of the Sacred Heart, ‘A Gemelli’ University Hospital, Largo Gemelli 8, 00168 Rome, Italy. Fax: +39 06 3051194 E-mail: alessandroscorpecci@yahoo.it

Abstract

Objective:

To assess the electrode pitch function in a series of postlingually deafened adults with cochlear implants and contralateral residual hearing, in order to investigate the correlation between the degree of frequency map mismatch and the subjects' speech understanding in quiet and noisy conditions.

Design:

Case series.

Subjects:

Seven postlingually deafened adults with cochlear implants, all with detectable contralateral residual hearing. Subjects' electrode pitch function was assessed by means of a pitch-matching test, in which they were asked to match an acoustic pitch (pure tones delivered to the non-implanted ear by an audiometer) to a perceived ‘pitch’ elicited by stimulation of the cochlear implant electrodes. A mismatch score was calculated for each subject. Speech recognition was tested using lists of sentences presented in quiet conditions and at +10, 0 and −5 dB HL signal-to-noise ratio levels (i.e. noise 10 dB HL lower than signal, noise as loud as signal and noise 5 dB HL higher than signal, respectively). Correlations were assessed using a linear regression model, with significance set at p < 0.05.

Results:

All patients presented some degree of mismatch between the acoustic frequencies assigned to their implant electrodes and the pitch elicited by stimulation of the same electrode, with high between-individual variability. A significant correlation (p < 0.005) was found between mismatch and speech recognition scores at +10 and 0 dB HL signal-to-noise ratio levels (r2 = 0.91 and 0.89, respectively).

Conclusion:

The mismatch between frequencies allocated to electrodes and the pitch perceived on stimulation of the same electrodes could partially account for our subjects' difficulties with speech understanding in noisy conditions. We suggest that these subjects could benefit from mismatch correction, through a procedure allowing individualised reallocation of frequency bands to electrodes.

Type
Main Articles
Copyright
Copyright © JLO (1984) Limited 2010

Introduction

Improving speech understanding in different settings of everyday life is a major goal of current cochlear implant research. Nowadays, patients with cochlear implants hear quite well in quiet settings, but generally report difficulties comprehending speech information in noisy environments.

Over the past few years, researchers have attempted to optimise signal quality in implanted patients. A number of strategies have been intensively investigated, including increasing the number of spectral channels (both real1 and virtual2) and the introduction of new processing3,4 and pre-processing strategies;5,6 however, they have yielded unsatisfactory and discordant results, probably because the causes of implanted patients' difficulties in these conditions are still unclear. Nonetheless, it is quite well established that the patient's ability to extract spectral information from the signal is a central element affecting everyday performance,7–9 and that this ability is strongly dependent on the way frequency bands are allocated to the different intracochlear electrodes of an implant array.10–12

There is a considerable amount of literature demonstrating that experimental manipulation of the standard, software-predefined, frequency-to-electrode mapping severely affects implanted patients' speech recognition.13–15 However, most of these cited studies have not suggested a more appropriate frequency band assignment and distribution across electrodes that might improve processor fitting. More importantly, they appear to accept unquestioningly that the standard frequency mapping system, based on the Greenwood formula,16,17 is the most appropriate arrangement, and therefore show very little concern as to whether a certain degree of frequency-place misalignment may already exist in the frequency maps fitted for everyday use in implanted subjects.

In fact, recent work has shown that this is indeed the case: a mismatch is frequently found in most patients' everyday maps, with a high degree of variability across patients; under the software-predefined pattern of frequency distribution across electrodes, the pitch elicited by stimulation of a certain electrode bears a poor correspondence to the frequency band allocated to that electrode.18–20 Proposed explanations for this include the considerable variability of electrode placement within the cochlea, the irregular pattern of nerve fibre stimulation by electrodes and the uneven patterns of individual nerve survival.18

Studying this mismatch is not an easy task, since it can only be done in the small number of implanted patients with good ipsilateral or contralateral residual hearing; only these patients can reliably match acoustically delivered pitch sensations to pitch sensations elicited by stimulation of electrodes. However, the effort is worth undertaking, all the more so because very little is known about the impact that spontaneously occurring frequency band misalignment may have on implantees' speech understanding. Such knowledge could in turn provide an important basis for future mapping strategies that might improve implanted patients' word and sentence recognition.

In the present study, we used a psychophysical test to measure the degree of mismatch between the frequency bands allocated to electrodes and the pitch elicited by stimulation of the same electrodes, in a group of adult, postlingually implanted subjects with preserved residual hearing and similar electrode insertion depth. We also investigated possible correlations between the implanted subjects' mismatch and their speech understanding in the presence of different levels of background noise.

Methods

The research described below was reviewed and approved by the local review board at the Catholic University of the Sacred Heart, and was conducted according to principles expressed in the Declaration of Helsinki.

Participants

From among the postlingually deafened patients undergoing cochlear implantation in the ENT clinic of the Catholic University of the Sacred Heart, we selected seven subjects, aged between 36 and 66 years and implanted with Nucleus-24 devices. Intra-operative X-ray assessment, using Stenver's modified projection, showed that the electrode array was inserted completely into the cochlea (>400°) in all seven subjects. At the time of the study, all subjects had been using their implant for at least six months. Their processors were fitted with the Advanced Combinational Encoder (ACE) speech strategy and the Autosensitivity Smart Sound™ function (Cochlear Ltd, Sydney, Australia), and their map frequency bands were automatically assigned to electrodes by the Custom Sound 1.4™ software package (Cochlear Ltd, Sydney, Australia), using frequency table number 22. All of the subjects were native Italian speakers, and were ‘good users’ of their implant, achieving very high scores (>80 per cent) in sentence recognition tests administered in a quiet setting. The subjects' ages, causes of hearing loss, implantation sides and implant models are shown in Table I. All seven subjects had residual hearing thresholds detectable in the contralateral ear for all frequencies from 125 to 8000 Hz (Table II). All electrodes were active in all subjects.

Table I Subjects' age, cause of hearing loss, implant side and implant model

S no = subject number; y = years; HL = hearing loss; L = left; R = right

Table II Subjects' contralateral residual hearing

Data represent pure tone audiometry air conduction hearing thresholds (dB). S no = subject number

Before each pitch-matching procedure, the Neural Response Telemetry (NRT) threshold was measured for all electrodes using the AutoNRT function in Custom Sound 1.4™, and the processor was carefully adjusted before each test session.

Pitch assessment procedure

Before starting the procedure, we verified, using a pitch-ranking test, that the selected subjects could reliably use their residual hearing without any warping phenomena due to the hearing loss or to the loudness of the acoustic stimulus: we presented pairs of pure tones to the contralateral ear and asked the subject to rank the stimuli in each pair as ‘higher in pitch’ or ‘lower in pitch’. We also ensured that our subjects could identify all electrical stimuli as different from one another, by sequential stimulation of the implant electrodes from an apical to a basal position.

Acoustic stimulation consisted of 500-ms, pulsed, pure tones generated by an Amplaid 319 type 1–IEC 645 audiometer (Amplifon, Milan, Italy) and delivered through earphones calibrated according to ISO 389 and American National Standards Institute criteria.

Electrical stimulation was delivered by means of the Custom Sound 1.4 software installed on an IBM® personal computer (IBM, Armonk, USA) with an Intel® Centrino (Santa Clara, California, USA) motherboard and a Cochlear® (Sydney, Australia) implant-computer connection system (programming pod). Electrical signals were supplied as 500-ms pulse trains at a stimulation rate of 900 pulses per second, which is considered sufficiently high to avoid temporal effects on pitch perception.18

Before stimulation began, the loudness for all electrodes was adjusted to a comfortable level for each subject. Subjects were administered two practice runs before data collection began.

Patients were asked to find the best match between the acoustic pitch elicited by the pure tones and the pitch elicited by electrode stimulation. While the patient listened to each pure tone, the electrode sweep function was run at a comfortably audible level from the apical to the basal electrodes (i.e. E22 to E1), then all the way back to the apical electrodes. We proceeded with this back-and-forth stimulation and, according to the patient's responses, restricted the testing field following a ‘bracketing’ technique. When the choice had been narrowed to three or four electrodes, a two-by-two electrode comparison was performed, in order to prevent any confusion between the pitches elicited by adjacent electrodes, until the patient found the single best-matching electrode. The subjects sat in front of the computer screen while the electrode sweep proceeded, and were instructed to point at the best-matching electrode, which further reduced the possibility of confusion. We repeated the procedure for the seven tested residual frequencies (i.e. 0.25, 0.5, 0.75, 1, 1.5, 2 and 4 kHz). Each frequency was tested twice consecutively, to ensure that the subject had not matched by chance.
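To make the bracketing logic explicit, the following minimal sketch (in Python) models the search for the best-matching electrode. The `present_and_compare` callback, its `'plausible'`/`'first'` return values and the stopping criteria are illustrative assumptions standing in for the subject's verbal and pointing responses; they are not part of the clinical protocol.

```python
# Minimal sketch of the bracketing search, assuming a hypothetical
# present_and_compare(tone_hz, electrode, versus=None) callback that
# encapsulates the subject's response: it returns 'plausible'/'implausible'
# for single-electrode presentations, or 'first'/'second' when two
# electrodes are compared. Electrodes run from E22 (apical) to E1 (basal).

def bracket_best_match(tone_hz, present_and_compare):
    candidates = list(range(22, 0, -1))      # apical-to-basal sweep order

    # Back-and-forth sweeps: keep only electrodes the subject does not
    # reject, until the field is restricted to three or four candidates.
    while len(candidates) > 4:
        kept = [e for e in candidates
                if present_and_compare(tone_hz, e) == 'plausible']
        if not kept or len(kept) == len(candidates):
            break                             # no further narrowing possible
        candidates = kept

    # Two-by-two comparisons among the remaining electrodes: the winner of
    # each pairwise presentation is retained until one best match survives.
    while len(candidates) > 1:
        a, b = candidates[0], candidates[1]
        winner = a if present_and_compare(tone_hz, a, versus=b) == 'first' else b
        candidates = [winner] + candidates[2:]

    return candidates[0]
```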

For all seven subjects, the test session was repeated one month later to ensure data reliability.

Speech recognition assessment

Cochlear implant performance was evaluated in quiet and in noise on the day of the pitch assessment procedure, using digitally recorded lists of Burdo and Orsi sentences.21 Each list comprised 10 sentences of bisyllabic words, presented in a soundproof cabin at 65 dB via a loudspeaker placed 1 m in front of the patient. Sentences were spoken by a female voice and drawn from commonly used Italian vocabulary. For ‘in noise’ assessment, ‘cocktail party’ type background noise was delivered via a second loudspeaker placed 1 m behind the patient. Patients were tested at +10, 0 and −5 dB HL signal-to-noise ratio levels. Subjects were told to use their usual everyday microphone volume. Earphones were used to occlude the non-implanted ear, to ensure that hearing performance was not influenced by the subject's contralateral residual hearing. Subjects were asked to repeat back any words they understood, and the test score was calculated as the percentage of correctly repeated words.
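As a small illustration of the scoring convention, the sketch below computes a percentage-correct word score for a sentence list; the position-insensitive word matching is an assumption made for illustration, not a description of the authors' exact scoring procedure.

```python
def word_score(presented, repeated):
    """Percentage of correctly repeated words across a sentence list.

    'presented' and 'repeated' are parallel lists of sentences (strings).
    Matching is done word by word, ignoring order and case; this is an
    illustrative convention, not the study's exact scoring method.
    """
    correct = total = 0
    for target, response in zip(presented, repeated):
        target_words = target.lower().split()
        response_words = set(response.lower().split())
        total += len(target_words)
        correct += sum(1 for w in target_words if w in response_words)
    return 100.0 * correct / total if total else 0.0

# Example: 2 of 3 words repeated correctly -> 66.7 per cent
print(round(word_score(["il gatto dorme"], ["il gatto beve"]), 1))
```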

Data analysis and representation

The electrode number (y axis) was plotted against the acoustic pure tones (x axis) to obtain a simple representation of both the standard pattern of frequency band allocation to electrodes (performed by the Custom Sound 1.4 mapping software) and the frequency-to-electrode matching reported by the subjects. For each subject, a total mismatch score (M) was calculated as follows: $M = \sum_{i=1}^{7} \left| E_{\mathrm{S}i} - E_{\mathrm{P}i} \right|$, where $E_{\mathrm{S}i}$ was the electrode corresponding to the i-th tested frequency according to the standard software assignment, and $E_{\mathrm{P}i}$ was the electrode chosen by the patient as the best match for the same frequency. This calculation allowed us to derive a quantitative estimate of the overall mismatch, measured as a difference in electrodes.
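A minimal numerical sketch of this calculation is shown below; the electrode assignments and pitch matches used are hypothetical and are not data from the study.

```python
# Mismatch score M = sum over the seven tested frequencies of |E_Si - E_Pi|,
# expressed as a difference in electrode positions.
# The electrode numbers below are purely illustrative.

def mismatch_score(software_map, perceived_map):
    """Total mismatch between software-assigned and pitch-matched electrodes."""
    return sum(abs(software_map[f] - perceived_map[f]) for f in software_map)

# Keys: tested frequencies (Hz); values: electrode numbers (E22 apical ... E1 basal)
software_map  = {250: 22, 500: 20, 750: 18, 1000: 16, 1500: 14, 2000: 12, 4000: 8}
perceived_map = {250: 22, 500: 19, 750: 18, 1000: 15, 1500: 12, 2000: 9,  4000: 6}

print(mismatch_score(software_map, perceived_map))   # -> 9
```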

Spearman's rank correlation coefficient (r2) between mismatch scores and sentence recognition scores was calculated for all of the tested settings, after which data were analysed according to a univariate linear regression model. Significance was set at p < 0.05.
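For readers who wish to reproduce this type of analysis, the following sketch (using scipy, with placeholder values rather than the study data) shows how Spearman's coefficient and the univariate linear regression could be computed.

```python
# Sketch of the statistical analysis: Spearman's rank correlation between
# mismatch scores and sentence recognition scores, followed by univariate
# linear regression. The arrays below are placeholders, not the study data.
from scipy.stats import spearmanr, linregress

mismatch   = [5, 6, 6, 17, 22, 30, 38]      # hypothetical mismatch scores
speech_pct = [85, 80, 78, 45, 40, 25, 10]   # hypothetical % correct at one SNR level

rho, p_spearman = spearmanr(mismatch, speech_pct)
fit = linregress(mismatch, speech_pct)      # slope, intercept, rvalue, pvalue, stderr

print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f})")
print(f"Linear regression: r^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f}")
```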

Results

Figure 1 shows the standard frequency band assignment performed by the software, and the matching of pure tones to electrodes, for the seven subjects. All of the subjects presented some degree of mismatch, although they performed very differently from one another in the pitch-matching task. At first glance, a distinction can be made between subjects one to three, in whom the overall mismatch was smaller (mismatch scores 5–6), and subjects four to seven, who had greater overall mismatch (mismatch scores 17–38). Remarkably, the latter subjects performed very poorly in the matching task when asked to associate the electrical pitch with the acoustic pitch for pure tones at 2000 and 4000 Hz.

Fig. 1 Pitch-matching results for subjects one to seven, shown in parts (a) to (g), respectively. In each part, the right column gives the standard frequency bands assigned to electrodes, according to the Nucleus Custom Sound 1.4™ mapping software. Continuous line = standard frequency allocation to electrodes from mapping software; black squares = electrode frequency from subject's pitch perception; white triangles = tested pure tones; ES = electrode allocation of a given frequency; EP = electrical pitch perceived by patient

The seven subjects' speech recognition scores in +10, 0 and −5 dB HL signal-to-noise ratio and quiet conditions, together with their total mismatch scores, are summarised in Table III. Figure 2 presents the same data.

Fig. 2 Correlation between mismatch scores and speech recognition scores for the seven subjects. SNR = signal-to-noise ratio

Table III Subjects' speech recognition and mismatch scores

S no = subject number; SNR = signal-to-noise ratio

Spearman's correlation coefficient for mismatch score versus speech recognition score was r2 = 0.91 at the +10 dB HL signal-to-noise ratio level and r2 = 0.89 at the 0 dB HL signal-to-noise ratio level. Linear regression analysis showed that the correlations between mismatch and speech recognition score at the +10 and 0 dB HL signal-to-noise ratio levels were both statistically significant (p < 0.005).

Spearman's correlation coefficient for mismatch score versus speech recognition score in quiet conditions was r2 = 0.71; linear regression analysis showed this, too, to be a statistically significant correlation (p < 0.05).

The correlation between mismatch score and speech recognition score in the −5 dB HL signal-to-noise ratio setting was weaker (Spearman's correlation coefficient r2 = 0.3), and was not statistically significant (p > 0.05).

Discussion

Our results confirmed that subjects with cochlear implants did not appropriately match acoustic pitch to electrical pitch under the standard frequency band assignment made by the mapping software. Most of the electrodes, in fact, seemed to elicit pitch sensations that did not correspond to those expected from the standard frequency assignment established by the Custom Sound 1.4 mapping software. This is consistent with the previous literature on the subject,18–20 indicating that implanted patients with usable residual hearing match acoustic and electrical pitches in a pattern that differs greatly between individuals, and that does not seem to reflect the exponential frequency-place function of the normal cochlea as proposed by Greenwood.16,17

Thus, the current software-predefined frequency band allocation to electrodes, derived from Greenwood's function,22 appears to be inappropriate, as it neither reproduces the way cochlear implant electrodes stimulate nerve fibres nor reflects the intramodiolar course of the acoustic nerve.

Recent work on human cadaveric cochleae has shown that cochlear implant electrodes stimulate near the modiolus, directly targeting the spiral ganglion cells within Rosenthal's canal, thereby supporting our hypothesis.22

Electrode insertion depth could be one of the factors causing the observed mismatch: the ganglion cells in the modiolus generally extend over approximately 1.875 turns, more than the array lengths used in our study.23 In particular, this could explain the pitch shift towards higher frequencies observed when some of the apical electrodes were stimulated (seen in our fifth subject, and also reported by other authors).18–20

However, the array insertion depth cannot account for the mismatch observed for the middle and high frequencies, which patients allocated to electrodes according to a highly irregular pattern, with remarkable inter-individual variation. Such a finding could instead be explained by the fact that cochlear implant electrodes do not activate hair cells, but neural fibres. Thus, an electrode may not activate neurons at the tonotopic location of the nerve which matches the analysis band for that electrode. Furthermore, a single electrode could activate different sites of the modiolar portion of the acoustic nerve at the same time, according to its specific location and to the way current spreads from it. Conversely, distinct electrodes, variably distanced along the array, could simultaneously send stimuli to the same group of nerve fibres, owing to non-uniform patterns of current spread.

Therefore, based on our findings and on these considerations, we hypothesise that several electrodes can elicit highly similar pitch sensations, which would explain our subjects' electro-acoustic pitch-matching results. From these speculations, we can deduce that the standard, software-predefined frequency band allocation to electrodes and the pitch sensations reported by implanted subjects do not coincide, because the former is based upon the cochlea's proposed tonotopicity, whereas the latter are based upon the acoustic nerve tonotopic distribution. More simply, modern analysis band filters do not take into account the fact that a natural mismatch exists between the Greenwood frequency map of the hair cells in the cochlea and the frequency map of the spiral ganglion cells.

The effect of poor pitch discrimination on speech comprehension in noise has been investigated in a number of studies.24–26 As stated in the Introduction,11–15 over the past decade a number of studies have investigated the effects of shifting frequency bands across electrodes on implantees' phoneme, word and sentence recognition skills. However, this body of literature has not defined the impact that the existing, mismatched allocation of frequency bands to electrodes has on implanted subjects' hearing performance in quiet and in noisy background conditions. Although our sample was small and other factors may have contributed to our subjects' speech recognition, our results seem to indicate that the poor correspondence between the software-allocated electrode frequencies and the pitch sensations elicited by electrode stimulation affects implanted subjects' speech recognition in quiet settings, and to an even greater extent in +10 and 0 dB HL signal-to-noise ratio conditions. The correlation was slightly weaker in quiet conditions than in the +10 and 0 dB HL signal-to-noise ratio conditions, probably because we selected a study population with good speech recognition scores in quiet conditions (a ‘ceiling effect’). However, in all of these settings it is evident that patients with low pitch mismatch scores perform quite well, whereas patients with a high degree of mismatch experience a deterioration in speech recognition skills, which is dramatic at the +10 and 0 dB signal-to-noise ratio levels.

If (according to our hypothesis) more electrodes, variably distanced along the array, can elicit similar pitch sensations, then an overlapping of pitch sensations is possible. As a consequence, in a quiet environment an overlapping of spectral information in the signal may take place, with subsequent impairment of phoneme recognition, and in a noisy environment the background noise could overlap with the signal. In support of this hypothesis, our poor-performing subjects generally had a mismatch for the 2000 and 4000 Hz frequencies, which are crucial to speech understanding.

  • Improving speech understanding in the different settings of everyday life is a major goal of current cochlear implant research

  • Implanted patients perform quite well in quiet settings, but generally have difficulties extracting speech information in noisy environments

  • The mismatch between frequencies allocated to electrodes and the pitch perceived on stimulation of the same electrodes can partially account for implanted subjects' difficulties in understanding speech in noisy environments

  • Implanted patients may benefit from mismatch correction through a function allowing individualised reallocation of frequency bands to electrodes

The weaker correlation between mismatch and speech recognition scores at the −5 dB HL signal-to-noise ratio level (r2 = 0.3) makes it more difficult to estimate how much the poor correspondence between electrode-allocated frequencies and perceived pitch might influence speech understanding, even if patients with the best pitch-matching results still tended to perform better than poor pitch-matching patients at this signal-to-noise ratio setting (see Table III and Figure 2).

On the whole, these data seem to indicate that mismatch severely affects speech performance in quiet conditions and in noisy conditions with a +10 or 0 dB signal-to-noise ratio, whereas the role of mismatch remains unclear in −5 dB signal-to-noise ratio conditions. At −5 dB signal-to-noise ratio, it is still possible that pitch mismatch could influence hearing performance; however, to verify this we would need a larger subject population and the facility to correct the mismatch.

These results lead us to believe that the current system of frequency band assignment to cochlear implant electrodes, as embodied in the frequency tables used for everyday listening, is unfit for purpose in most implanted subjects, and consequently has a dramatic impact on speech recognition abilities in the presence of background noise. Therefore, frequency band assignment should be supplemented by a more flexible allocation system, no longer based on rigid, automatic application of the Greenwood formula for normal cochlear tonotopic distribution, but instead allowing a personalised distribution of frequency bands based on the pitch sensations reported by patients upon stimulation of electrodes. Once standardised, this strategy could be useful in the growing number of implanted patients who retain usable residual hearing: they could first be administered a pitch-matching test, and could then undergo frequency range redistribution to correct any mismatch between electrical and acoustic pitch. We believe that such a procedure could improve implanted patients' speech recognition in both quiet and noisy conditions, with a consequent overall improvement in their quality of life.

Conclusion

The present study suggests that in cochlear implant recipients there is a mismatch between the frequency bands assigned to electrodes by the mapping software and the pitch sensations elicited by stimulation of the same electrodes. This mismatch is highly variable between patients, and seems to significantly affect speech recognition scores in quiet and in noisy conditions. In the light of these findings, we hypothesise that a software function allowing manual reallocation of frequency bands to electrodes, to correct such misalignment, could improve cochlear implant patients' speech understanding.

References

1 Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am 2001;110:1150–63
2 Firszt JB, Koch DB, Downing M, Litvak L. Current steering creates additional pitch percepts in adult cochlear implant recipients. Otol Neurotol 2007;28:629–36
3 Kong Y, Cruz R, Jones J, Zeng F. Music perception with temporal cues in acoustic and electric hearing. Ear Hear 2004;25:173–85
4 McDermott H. Music perception with cochlear implants: a review. Trends Amplif 2004;8:49–81
5 Wouters J, Vanden Berghe J. Speech recognition in noise for cochlear implantees with a two-microphone monaural adaptive noise reduction system. Ear Hear 2001;22:420–30
6 Spriet A, Van Deun L, Eftaxiadis K, Laneau J, Moonen M, Van Dijk B et al. Speech understanding in background noise with the 2-microphone adaptive beamformer BEAM in the Nucleus Freedom Cochlear Implant System. Ear Hear 2007;28:62–72
7 Nelson P, Jin SH. Factors affecting speech understanding in gated interference: cochlear implant users and normal-hearing listeners. J Acoust Soc Am 2004;115:2286–94
8 Gantz B, Turner C, Gfeller K, Lowder M. Preservation of hearing in cochlear implant surgery: advantages of combined electrical and acoustical speech processing. Laryngoscope 2005;115:796–802
9 Qin MK, Oxenham AJ. Effects of introducing unprocessed low-frequency information on the reception of envelope-vocoder processed speech. J Acoust Soc Am 2006;119:2417–26
10 Dorman MF, Loizou PC, Rainey D. Simulating the effect of cochlear implant electrode insertion depth on speech understanding. J Acoust Soc Am 1997;102:2993–6
11 Shannon RV, Zeng FG, Wygonski J. Speech recognition with altered spectral distribution of envelope cues. J Acoust Soc Am 1998;104:2467–76
12 Fu QJ, Shannon RV. Effects of electrode location and spacing on phoneme recognition with the Nucleus-22 cochlear implant. Ear Hear 1999;20:321–31
13 Fu QJ, Shannon RV. Effects of electrode configuration and frequency allocation on vowel recognition with the Nucleus-22 cochlear implant. Ear Hear 1999;20:332–44
14 Fu QJ, Shannon RV. Frequency mapping in cochlear implants. Ear Hear 2002;23:339–49
15 Friesen LM, Shannon RV, Slattery WH 3rd. The effect of frequency allocation on phoneme recognition with the Nucleus 22 cochlear implant. Am J Otol 1999;20:729–34
16 Greenwood DD. Critical bandwidth and the frequency coordinates of the basilar membrane. J Acoust Soc Am 1961;33:1344–56
17 Greenwood DD. A cochlear frequency-position function for several species – 29 years later. J Acoust Soc Am 1990;87:2592–605
18 Baumann U, Nobbe A. The cochlear implant electrode-pitch function. Hear Res 2006;213:34–42
19 Di Nardo W, Cantore I, Cianfrone F, Melillo P, Fetoni AR, Paludetti G. Differences between electrode-assigned frequencies and cochlear implant recipient pitch perception. Acta Otolaryngol 2007;127:370–7
20 Dorman MF, Spahr T, Gifford R, Loiselle L, McKarns S, Holden T et al. An electric frequency-to-place map for a cochlear implant patient with hearing in the nonimplanted ear. J Assoc Res Otolaryngol 2007;8:234–40
21 Burdo S, Cucinotta L, Miccoli MT, Oneto L, De Dionigi M. Sentences in Italian with disyllabic words by Burdo and Orsi. In: Fondazione Audiologica Varese Onlus, eds. Speech Audiometry [in Italian]. Varese: Varese Onlus Audiology Foundation, 2007;59
22 Sridhar D, Stakhovskaya O, Leake PA. A frequency-position function for the human cochlear spiral ganglion. Audiol Neurootol 2006;11:16–20
23 Kawano A, Seldon HL, Clark GM. Computer-aided three-dimensional reconstruction in human cochlear maps: measurement of the lengths of Organ of Corti, outer wall, inner wall, and Rosenthal's canal. Ann Otol Rhinol Laryngol 1996;105:701–9
24 Stickney GS, Zeng FG, Litovsky R, Assmann PF. Cochlear implant speech recognition with speech maskers. J Acoust Soc Am 2004;116:1081–91
25 Turner CW, Gantz BJ, Vidal C, Behrens A, Henry BA. Speech recognition in noise for cochlear implant listeners: benefits of residual acoustic hearing. J Acoust Soc Am 2004;115:1729–35
26 Gfeller K, Turner C, Oleson J, Zhang X, Gantz B, Froman R et al. Accuracy of cochlear implant recipients on pitch perception, melody recognition, and speech reception in noise. Ear Hear 2007;28:412–23