
Year : 2015  |  Volume : 31  |  Issue : 4  |  Page : 244-249

Effect of frequency lowering and auditory training on speech perception outcome

1 Unit of Audiology, ENT Department, Faculty of Medicine, Alexandria University, Alexandria, Egypt
2 Unit of Phoniatrics, ENT Department, Faculty of Medicine, Alexandria University, Alexandria, Egypt

Date of Submission: 01-May-2015
Date of Acceptance: 07-Jul-2015
Date of Web Publication: 27-Oct-2015

Correspondence Address:
Mohamed A Talaat
Unit of Audiology, ENT Department, Faculty of Medicine, Alexandria University, Champollion Str. Alazarita, 21131, Alexandria

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/1012-5574.168360


Aim
The aim of this study was to investigate the effect of auditory training on speech sound perception tasks in patients with steep sloping high-frequency sensorineural hearing loss using amplification with frequency-lowering hearing aids.
Patients and methods
This study was conducted on 10 adults with steeply sloping high-frequency sensorineural hearing loss using frequency-lowering hearing aids. Pretraining and post-training evaluation tasks were prepared to evaluate the ability to perceive vowels and consonants of the Arabic language using lists of consonant-vowel-consonant and vowel-consonant-vowel syllables. Perception tasks included speech sound recognition and discrimination. Arabic exercise tasks were constructed and applied to provide directed training on voiceless consonant speech sounds.
Results
The study demonstrated enhancement in consonant perception using frequency lowering, provided the listeners were trained to associate the newly perceived sounds with a functional neurophysiological substrate sensitive to lower-frequency sounds.
Conclusion
Auditory training with frequency-lowering hearing aids improves consonant perception in individuals with steep sloping sensorineural hearing loss.

Keywords: auditory training, frequency compression, frequency transposition, steep sloping sensorineural hearing loss

How to cite this article:
Ahmed RA, Mourad MI, El-Banna MM, Talaat MA. Effect of frequency lowering and auditory training on speech perception outcome. Egypt J Otolaryngol 2015;31:244-9


Introduction

The spectral energy of many consonants - for example, /s/, /∫/, /z/, /θ/, /f/ - is primarily located in the high-frequency region; although these sounds are softer in intensity, their contribution towards understanding speech is critical. Severe hearing loss in that region will cause difficulty in discriminating speech sounds [1],[2]. A cochlear dead region is an area of the cochlea in which inner hair cells (IHCs) or auditory nerve fibers are not functioning, and may cause such difficulties [3],[4].

Audiometric characteristics associated with a high-frequency 'dead' region are as follows: thresholds in the high frequencies greater than 90 dB HL; slope of the audiogram greater than 50 dB/octave; extremely poor speech recognition scores in quiet and in noise; and distortion of sounds, such as a 'noise-like' perception of pure tones [5].
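As an illustration only, the audiometric signs above can be sketched as a simple screening check. This is a hypothetical helper, not a diagnostic tool: per the text, the TEN test, not the audiogram, is used to estimate dead regions, and speech scores and sound-quality reports are not modelled here.

```python
def likely_high_freq_dead_region(thresholds):
    """Screen an audiogram for the signs of a high-frequency dead
    region listed above: high-frequency thresholds greater than
    90 dB HL and an audiogram slope greater than 50 dB/octave.
    `thresholds` maps frequency in Hz to threshold in dB HL."""
    high_freqs = [f for f in (2000, 4000, 8000) if f in thresholds]
    deep_loss = all(thresholds[f] > 90 for f in high_freqs)
    # Slope over the 1000 -> 2000 Hz octave, where steep audiograms drop
    slope = thresholds.get(2000, 0) - thresholds.get(1000, 0)
    return deep_loss and slope > 50

# Example: a steeply sloping audiogram consistent with the signs above
print(likely_high_freq_dead_region(
    {250: 20, 500: 30, 1000: 40, 2000: 95, 4000: 100, 8000: 105}))  # True
```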

Providing effective hearing-aid fittings for high-frequency hearing loss is a challenging and important clinical task. The importance of this task becomes more apparent when considering the development of speech and language skills in the hearing-impaired pediatric population [1] .

Frequency lowering refers to current technologies that take high-frequency input signals and deliver them to a lower-frequency region for improved speech understanding [6]. This can be achieved with either nonlinear frequency compression or linear frequency transposition. The nonlinear frequency compression algorithm moves sounds to an adjacent area with less cochlear damage, where they can be processed and amplified [7]: a preselected range of high frequencies is compressed, based on the listener's hearing loss, into a narrower range so that the global relations between different frequency components are preserved. The linear frequency transposition algorithm moves unaidable high-frequency sounds to the aidable low-frequency regions. The frequency at which transposition begins is called the start frequency (the frequency at which the hearing loss becomes unaidable); instead of amplifying that frequency, the algorithm transposes without amplification [8],[9]. Frequency lowering has an important application in cases of high-frequency cochlear dead regions.
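To make the compression principle concrete, here is a minimal sketch of a frequency-compression mapping. The cutoff and ratio values are hypothetical, and commercial implementations compress on more sophisticated scales; the sketch only illustrates that frequencies above a cutoff are squeezed into a narrower output range while their ordering is preserved.

```python
def compress_frequency(f_in, cutoff=2000.0, ratio=3.0):
    """Illustrative nonlinear-frequency-compression mapping:
    frequencies at or below the cutoff pass through unchanged;
    frequencies above it are compressed toward the cutoff by
    `ratio`, preserving the relative order of components."""
    if f_in <= cutoff:
        return f_in
    return cutoff + (f_in - cutoff) / ratio

# A 6000 Hz fricative component is relocated near 3333 Hz, a region
# that may still be aidable for a listener with a steep high-frequency loss.
print(round(compress_frequency(6000.0)))  # 3333
```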

Auditory training may facilitate the wearer's attention to the transposed sounds and help users adapt more quickly to frequency transposition by creating a new perceptual image for the high frequencies.

Patients and methods

Written consent was obtained from all patients before the study was conducted. The study was approved by the local ethics committee. Ten adults with bilateral steep sloping high-frequency sensorineural hearing loss were selected. Audiometric thresholds were normal to moderate at 250-500 Hz, mild to moderately severe at 1000 Hz, and severe to total hearing loss at 2000, 4000, and 8000 Hz.


(1) Pure tone audiometry, speech audiometry, and acoustic immittance measures were used.

(2) The threshold equalizing noise [TEN(HL)] test was used for estimation of cochlear dead regions [10],[11].

(3) Six participants were fitted with the Inteo-19 hearing aid (Widex, Lynge, Denmark), whereas four were fitted with the Naida III hearing aid (Phonak, Stafa, Switzerland). The master program was saved as the default program. Free-field aided thresholds and live-voice speech thresholds were determined to verify the fitting.

(4) Speech sound perception evaluation was carried out before and after auditory training. This was conducted in a quiet room with the female examiner sitting behind and to the side of the tested ear. The examiner's voice was adjusted using a sound level meter (RadioShack, cat no. 33.2050; Fort Worth, Texas, USA), starting at 45 dB SPL for detection tasks and raised by 10 dB for discrimination tasks. Participants were instructed before training, and visual cues were used to ensure that they understood the instructions. Scores were taken as the percentage of correct responses out of the total number of stimuli for each task.

Lists for speech sound perception were developed for this study. They included lists in which vowels are the target and lists in which consonants are the target. Tasks for vowel recognition consisted of 30 consonant-vowel-consonant (CVC) syllables. The /a/, /i/, /o/ vowels were presented randomly and equally, in combination with the /t/, /b/, /k/, /s/ consonants - for example, /bo:b/, /bae:b/, /bi:b/. In addition, the lists included monosyllabic words - for example, /mo:z/, /?i:d/, /fa:r/. For vowel discrimination, a list of 33 pairs of CVC words differing according to vowel height, advancement, and length was presented.

The lists for consonant recognition consisted of 52 vowel-consonant-vowel (VCV) syllables, in which the initial and the last vowels /a/ or /i/ were the same - for example, /asa/, /isi/, /ama/, /imi/. For consonant discrimination, a list of 32 pairs of VCV syllables was used. The pair differed on the basis of the manner and place of articulation, voicing, and emphatic nature (e.g. /asa/ and /asa/, /afa/ and /asa/). Finally, a list of 24 pairs of words with target consonant sounds in the initial, middle, and final position differing according to voicing, emphatic articulation, place, and manner of articulation were presented.
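The scoring rule used throughout the evaluation (percentage of correct responses out of the total number of stimuli) can be sketched as follows; the helper name is hypothetical, not part of the study's materials.

```python
def percent_correct(responses, targets):
    """Score a perception task as the percentage of correct responses
    out of the total number of stimuli presented, mirroring the
    scoring rule described in the evaluation procedure."""
    if not targets:
        raise ValueError("no stimuli presented")
    hits = sum(r == t for r, t in zip(responses, targets))
    return 100.0 * hits / len(targets)

# Example: 27 of 30 CVC syllables identified correctly -> 90.0%
print(percent_correct(["b"] * 27 + ["x"] * 3, ["b"] * 30))  # 90.0
```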

(5) Auditory training was carried out for 4 days using Arabic speech sounds recorded in an Egyptian female voice and presented through a computer. The goal was to provide the participants with directed training on voiceless consonants. The criterion score for achievement on a particular training task was 90% or greater. The sounds selected for training were those most amenable to the action of frequency transposition, considering their spectral content. The exercises thus targeted the /s/, /f/, /∫/, and /θ/ sounds. Each sound was trained at the syllable, word, and sentence level. Tasks involved matching, recognition, and completion, in addition to counting the target fricative in a list of sentences loaded with the target sound. The training was constructed in a hierarchical manner, from easy to difficult.

Spectrographic analysis of the female examiner's speech sounds was carried out. Consonants were distributed into three regions on the pure tone audiogram: a high-frequency fricative sector (4000-8000 Hz); a low-frequency fricative sector (2000-5000 Hz); and low-frequency consonants (below 2000 Hz) [Figure 1].
Figure 1: Distribution of consonant sounds referenced to pure tone audiogram

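Note that the published sectors overlap (2000-5000 Hz vs. 4000-8000 Hz), so any programmatic assignment of a consonant to a sector needs an assumed boundary. The sketch below arbitrarily splits at 4000 Hz and is purely illustrative, not part of the study's analysis.

```python
def consonant_sector(centre_hz):
    """Assign a consonant to one of the three audiogram sectors
    described above, by the centre of its spectral energy. The
    4000 Hz split is an assumption made where the published
    sectors overlap."""
    if centre_hz < 2000:
        return "low-frequency consonants (<2000 Hz)"
    if centre_hz < 4000:
        return "low-frequency fricative sector (2000-5000 Hz)"
    return "high-frequency fricative sector (4000-8000 Hz)"

print(consonant_sector(6500))  # e.g. a fricative with energy centred high
```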

Results

[Figure 2] presents the pure tone audiometry of the 10 cases included in the study. Performance on the speech sound perception tasks using the hearing aid master program is summarized in [Table 1]. A performance of 90% correct responses was taken to indicate no specific difficulty.
Figure 2: Pure tone audiograms of the 10 cases with high-frequency hearing loss

Table 1 Description of the performance of speech sound perception evaluation on master program


Patterns of consonant recognition errors were summarized as follows:

  1. Pattern 1: All fricatives perceived as stops.
  2. Pattern 2: Consonants within a frequency range are perceived as another consonant at the same frequency range.
  3. Pattern 3: High-frequency fricatives are perceived as another high-frequency fricative within the same frequency range.
  4. Pattern 4: Voiced sounds perceived as unvoiced sounds.
  5. Pattern 5: Inconsistent performance.
Patterns of discrimination difficulties encountered included the following:

  1. Pattern 1: Place of articulation within the same frequency range.
  2. Pattern 2: Place of articulation within high-frequency range only.
  3. Pattern 3: Manner of articulation.
  4. Pattern 4: Voicing.
  5. Pattern 5: Emphatic nature.
Post-training speech sound perception revealed positive changes in vowel and consonant perception tasks, except in vowel discrimination in a set of four words. Training showed improvement in all consonant perception tasks [Figure 3], [Figure 4], [Figure 5], [Figure 6] and [Table 2].
Figure 3: Mean scores on speech perception tasks using master program, frequency lowering and after auditory training

Figure 4: Distribution of the participants on the basis of their performance on consonants recognition tasks with reference to the summarized patterns in case of master program and with frequency lowering and after auditory training

Figure 5: Distribution of the participants on the basis of their performance on consonants discrimination [vowel-consonant-vowel (VCV)] tasks with reference to the summarized patterns in case of master program and with frequency lowering and after auditory training

Figure 6: Distribution of the participants on the basis of their performance on consonant discrimination [vowel-consonant-vowel (Word)] tasks with reference to the summarized patterns in case of master program and with frequency lowering and after auditory training

Table 2 Performance of the 10 patients in speech sound perception before and after auditory training using frequency lowering algorithm


Discussion

Frequency lowering is designed to take sounds from high-frequency regions in which perception of high-frequency speech sounds is lost and move them to a lower-frequency region for improved speech understanding. It is beneficial when the hearing loss above 3000 Hz is 70 dB or more and no more than 70 dB HL at 1000 Hz [12]. Accordingly, the poorest performance on the master program was noticed in case 9, who had the worst pure tone thresholds. Hearing loss profiles that are not corrected or compensated for with regular amplification - that is, in which sound perception is still lost - may benefit from application of a frequency-lowering algorithm.

The overall performance of the participants showed a stepwise increase: the percentage of correct responses at the vowel and consonant level increased with the use of frequency lowering and increased further after auditory training.

It was noticed that most vowel errors were restricted to confusion between the back vowels /a/ and /o/ and the front vowel /i/. Difficulties in discrimination mainly involved discrimination based on height and advancement, independent of length.

With the use of the master program, the participants had difficulties with recognition of consonants in all ranges. They perceived fricatives as stops, and consonants within any frequency range were perceived as another sound within the same frequency range. In addition, devoicing of sounds and difficulty restricted to the high-frequency region were encountered in a few cases.

After the use of frequency lowering, some participants still perceived fricatives as stops. Marked confusion, revealing an inconsistent pattern, was encountered in some participants, and some perceived high-frequency fricatives as another high-frequency fricative within the same frequency range. A devoicing pattern was noticed in only one patient.

The concept of auditory training is to promote cortical reorganization, such that a cortical region that was previously responsive to only one frequency may be reorganized as a consequence of learning and become responsive to another frequency [13]. This may explain the improvement in speech perception tasks noticed after auditory training, as only one participant showed no improvement in perception of high-frequency fricatives and another had voicing difficulty.

With the master program, participants had shown all patterns of discrimination difficulties at the VCV and word levels. Using frequency lowering, the difficulty in discrimination remained, but to a lesser degree, at the VCV level. At the word level, frequency lowering limited the discrimination difficulty to discrimination based on place of articulation within the high-frequency range, manner of articulation in the final position of the word, and voicing. Thus, frequency lowering could not help much in discrimination, which requires attention to low-intensity sounds. Auditory training helped in overcoming this difficulty; only two patients did not show improvement in discrimination of spectrally close high-frequency sounds at both the VCV and word levels. At the word level, participants were helped by guessing the meaning of the words.

In the present study, there was a slight overall benefit with auditory training; although directed at consonant sounds, it also helped improve perception of vowels and untrained sounds. Auriemmo et al. [14] showed that training with the frequency transposition program resulted in improvements in consonant and vowel identification. The improvement in overall consonant perception in the post-training evaluation tasks at all levels is supported by a number of studies [14],[15],[16],[17]. The training effect on the intelligibility of speech and quality of life was outside the scope of this study. The short duration of the training program, the limited set of sounds targeted, the use of two different frequency-lowering techniques, and the inclusion of prelingual participants who lack previous knowledge of the sounds may underlie the variable pattern encountered in the description of patient performance.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

Ching TYC, Dillon H, Katsch R. Do children require more high-frequency audibility than adults with similar hearing losses? In: Seewald RC, Gravel JS (eds). A Sound Foundation through Early Amplification 2001. Proceedings of the Second International Conference. Stafa, Switzerland: Phonak AG, 2002, pp. 141-152.  Back to cited text no. 1
Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE, Moeller MP. The importance of high-frequency audibility in the speech and language development of children with hearing loss. Arch Otolaryngol Head Neck Surg 2004; 130:556-562.  Back to cited text no. 2
Moore BC. Dead regions in the cochlea: diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends Amplif 2001; 5:1-34.  Back to cited text no. 3
Moore BCJ, Glasberg BR. A model of loudness perception applied to cochlear hearing loss. Auditory Neurosci 1997; 3:289-311.  Back to cited text no. 4
Moore BC. Dead regions in the cochlea: conceptual foundations, diagnosis, and clinical applications. Ear Hear 2004; 25:98-116.  Back to cited text no. 5
Bentler R. Frequency-lowering hearing aids: Verification tools and research needs. 2010 Apr [cited 2010 April 6]. Available from: http://leader.pubs.asha.org/article.aspx?articleid=2291733.  Back to cited text no. 6
Audiology Online News (2008, June 6). Sound Recover- A Breakthrough in Enhancing Intelligibility. AudiologyOnline, News 3093. Available from: http://www.audiologyonline.com/releases/soundrecover-breakthrough-in-enhancing-intelligibility-3719.  Back to cited text no. 7
Kuk F, Korhonen P, Peeters H, Keenan D, Jessen A, Andersen H. Linear frequency transposition: extending the audibility of high frequency information. Hear Rev 2006;13:42-48.  Back to cited text no. 8
Andersen H. Audibility Extender-So the ''Dead'' (Region) May Hear, Integrated Signal Processing, A New Standard in Enhancing Hearing Aid Performance. Hear Rev 2006;13(Suppl 3):20-22.  Back to cited text no. 9
Moore BC, Huss M, Vickers DA, Glasberg BR, Alcántara JI. A test for the diagnosis of dead regions in the cochlea. Br J Audiol 2000; 34:205-224.  Back to cited text no. 10
Moore BC, Glasberg BR, Stone MA. New version of the TEN test with calibrations in dB HL. Ear Hear 2004; 25:478-487.  Back to cited text no. 11
Kuk F, Auriemmo J, Korhonen P. Re-evaluating the efficacy of frequency transposition. 2009 Jan [cited 2010 April 6]. Available from: http://leader.pubs.asha.org/article.aspx?articleid=2289675&resultClick=3.  Back to cited text no. 12
Thai-Van H, Micheyl C, Moore BC, Collet L. Enhanced frequency discrimination near the hearing loss cut-off: a consequence of central auditory plasticity induced by cochlear damage? Brain 2003; 126(Pt 10): 2235-2245.  Back to cited text no. 13
Auriemmo J, Kuk F, Lau C, Marshall S, Thiele N, Pikora M, et al. Effect of linear frequency transposition on speech recognition and production of school-age children. J Am Acad Audiol 2009; 20:289-305.  Back to cited text no. 14
Kuk F, Keenan D, Korhonen P, Lau CC. Efficacy of linear frequency transposition on consonant identification in quiet and in noise. J Am Acad Audiol 2009; 20:465-479.  Back to cited text no. 15
Korhonen P, Kuk F. Use of linear frequency transposition in simulated hearing loss. J Am Acad Audiol 2008; 19:639-650.  Back to cited text no. 16
Kuk F, Keenan D, Auriemmo J, Korhonen P, Peeters H, Lau C, et al. Interpreting the efficacy of frequency-lowering algorithms. Hear J 2010; 63:30.  Back to cited text no. 17



This article has been cited by
1. Simpson A, Bond A, Loeliger M, Clarke S. Speech intelligibility benefits of frequency-lowering algorithms in adult hearing aid users: a systematic review and meta-analysis. Int J Audiol 2017.

