
Hearing Aid Sales Rise 5% in 2013; Industry Closes in on 3M Unit Mark


Staff Standpoint | February 2014 Hearing Review

By Karl Strom, Editor-in-Chief

Hearing aid unit sales increased by 4.8% in 2013, and behind-the-ear (BTE) hearing aids with external receivers (ie, RICs/RITEs) now make up more than half (52.2%) of all hearing aids sold, according to year-end US statistics from the Hearing Industries Association (HIA), Washington, DC. This year’s sales increase comes on the heels of 2.8%, 3.0%, and 2.9% increases for 2010, 2011, and 2012, respectively. Historically, annual US net unit volume growth has averaged 2% to 4%, so the 4.8% growth figure in 2013 represents a better-than-average year.


Figure 1. Percentages of hearing aids dispensed by BTE and ITE styles in 2013 (private sector + VA). BTEs constituted three-quarters (74%) of all units dispensed, and RIC/RITE-type devices alone made up 52% of the total market.

Looking at the entire US market, three-quarters (74.0%) of all hearing aids were BTEs, and half (52.2%) were of the RIC or RITE styles (Figure 1). Traditional BTEs constituted 21.8% of the market. ITEs made up just over one-quarter (26.0%) of all hearing aids dispensed, led by full- and half-shell ITEs (11.6% of total market), ITCs (7.8%), and CICs (6.3%). Three-quarters (74.6%) of all aids dispensed in 2013 featured wireless technology.

The Department of Veterans Affairs (VA) accounted for 20.6% of all units—or 617,000 out of 2.99 million hearing aids—dispensed in the United States during 2013, for a unit growth rate of 7.3% over 2012. When ignoring dispensing activity at the VA, private-sector sales increased by 4.2% over the previous year. On average, private-sector professionals dispensed a higher percentage of RIC/RITEs than VA professionals (54.3% private sector vs 44.1% VA), and lower percentages of traditional BTEs (20.6% vs 26.2%) and ITEs (25.1% vs 29.6%). Over two-thirds (69.6%) of private-sector hearing aids contained wireless technology, while nearly all (94.0%) VA hearing aids were wireless.
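As a quick sanity check, the headline figures above can be reproduced from the reported totals. The values below are taken directly from the article; the short script simply shows the arithmetic behind the 20.6% VA share and the near-miss of the 3-million mark.

```python
# Illustrative check of the HIA figures cited in the article.
total_units = 2_990_104   # total US net units dispensed in 2013
va_units = 617_000        # VA units, per the article

va_share = va_units / total_units
shortfall = 3_000_000 - total_units

print(f"VA share of US units: {va_share:.1%}")              # ~20.6%
print(f"Units short of the 3-million mark: {shortfall:,}")  # 9,896
```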


Figure 2. Total US net unit sales (blue bars, private sector + VA) and VA units (red bars) from 1979 to 2013. Hearing aid units in 2013 came very close to topping the 3-million unit mark for the first time in industry history.

Closing in on 3-million unit mark. A total of 2,990,104 net units were dispensed in the US during 2013—tantalizingly close (only 9,896 units short) to the 3-million unit mark (Figure 2). The US hearing industry first hit the 2-million unit mark in 2004. Shortly after this sales landmark, in the February 2005 HR, Starkey Laboratories President Jerry Ruzicka commented: “This second-million mark is an important one, but it took much too long to get here. With the help of President Reagan, the industry topped the million mark in 1983, and together we must all ensure that it is not another 2 decades for the next million.”

So, the good news for the hearing industry is that we cut our time down by a decade to achieve that next million units in sales. How did it happen? The period from 2004 to 2013 has been marked by phenomenal technological growth (notably open-fit and RIC-type aids, feedback and speech-in-noise algorithms, and wireless technology), a lack of FDA/FTC interference, greater professionalism in practices, fewer manufacturers with more resources (and their forward-consolidation into retailing), the emergence of dispensing retail giants (notably Costco), and relatively flat average sales prices (ASPs) and binaural usage rates. The Great Recession aside, the hearing industry has experienced a good—and very competitive—business environment.

Maybe 4 million by 2020? Looking at the big picture and some of the big innovations coming down the R&D pipeline, it’s not impossible.

Original citation for this article: Strom K. Hearing aid sales rise 5% in 2013; Industry closes in on 3M unit mark. Hearing Review. 2014;21(2):6.


A History of e2e Wireless Technology


Tech Topics: Wireless Hearing Aids | February 2014 Hearing Review

By Rebecca Herbig, AuD, Roland Barthel, and Eric Branda, AuD


Ten years ago, in 2004, Siemens introduced e2e wireless—the first wireless system that allowed synchronous steering of both hearing instruments in a bilateral fitting. Since then, e2e wireless has become ubiquitous enough so that, today, all major hearing aid manufacturers offer variations of the technology.

One important user advantage was that e2e wireless enabled the synchronization of onboard controls: the wearer only had to change the volume or program on one side for the adjustment to take effect in both instruments. This offered not only discreet and convenient control of the hearing instruments, but, for wearers with limited mobility or dexterity, an even more significant practical benefit.

Wireless control synchronization also means that, in a pair of hearing instruments, one can have a program button and the other can have volume control. For the first time, patients could control both volume and program down to the smallest completely-in-the-canal (CIC) instrument.
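The split-control idea can be sketched in a few lines. This is a minimal, hypothetical model (the class and control names are invented, not Siemens code): a press on either instrument is applied locally and then relayed to the wireless partner, so one aid can carry the program button and the other the volume control while both stay in step.

```python
# Hypothetical sketch of e2e-style control synchronization between two aids.

class Instrument:
    def __init__(self, name):
        self.name = name
        self.partner = None   # set after pairing
        self.volume = 0
        self.program = 0

    def pair(self, other):
        self.partner, other.partner = other, self

    def press(self, control):
        """Handle a local button press and relay it to the paired instrument."""
        self._apply(control)
        if self.partner:
            self.partner._apply(control)   # the wireless link mirrors the change

    def _apply(self, control):
        if control == "volume_up":
            self.volume += 1
        elif control == "next_program":
            self.program += 1

left, right = Instrument("left"), Instrument("right")
left.pair(right)
left.press("next_program")   # program button exists only on the left aid
right.press("volume_up")     # volume control exists only on the right aid
print(left.volume, left.program, right.volume, right.program)  # 1 1 1 1
```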


Figure 1. The distribution of subject ratings (in percent) to the statement, “Controlling both hearing aids with a single push button was useful to me.”


Figure 2. The distribution of subject ratings (in percent) to the statement, “Controlling both hearing aids with a single volume control was useful to me.”

Early research with e2e technology revealed encouraging findings concerning usability. Powers and Burton1 found that, even in a large group of relatively satisfied users, 84% expressed the need to minimize effort when adjusting their hearing instruments, while 46% stated that bilateral control of hearing instruments using only one button was important. A similar field study conducted at the Hörzentrum Oldenburg, Germany, examined 40 subjects bilaterally fitted with Acuris instruments. After wearing them in daily life for 2 weeks, over 95% of subjects reported that changing the program in both instruments with one push-button was useful, and 70% responded that adjusting the volume in both hearing aids with a single control was useful (Figures 1 and 2).

Aside from the obvious convenience for the wearer, synchronization of volume and program also ensures that gain remains balanced between the ears for optimal speech quality and intelligibility. Additionally, e2e wireless enables bilateral acoustic situation detection so that digital signal processing features, such as directional microphone technology and noise reduction algorithms, operate synchronously based on the detected acoustic situations from both hearing instruments. This avoids situations where, for example, one hearing instrument is in omni-directional mode while the other is in directional.
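One way to picture the bilateral decision step is as a simple arbitration rule. The sketch below is illustrative only (the class names, confidence values, and tie-break rule are assumptions, not the manufacturer's algorithm): each aid classifies its own input, the two exchange results, and both adopt the same microphone mode so an omni/directional mismatch cannot occur.

```python
# Hedged sketch: two independent local classifications are resolved into one
# shared decision so both aids end up in the same microphone mode.

MIC_MODE = {"speech_in_noise": "directional",
            "noise": "directional",
            "quiet": "omni"}

def shared_mode(left_class, left_confidence, right_class, right_confidence):
    """Both aids adopt whichever side classified with higher confidence."""
    winner = left_class if left_confidence >= right_confidence else right_class
    return MIC_MODE[winner]

# One aid weakly detects quiet; the other confidently detects speech in noise:
mode = shared_mode("quiet", 0.4, "speech_in_noise", 0.8)
print(mode)  # both instruments switch to "directional" together
```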


Figure 3. Connectivity options with e2e wireless 2.0 and Tek.

Figure 4. Illustration of the dB SNR bilateral advantage for two different listening conditions: Wireless only and Wireless + Microphone (adapted from Picou and Ricketts, 2011).7

Figure 5. Word recognition in quiet for soft levels (50 dB SPL) before and after 2 weeks of learning. Results shown for eight subjects (S1-S8) and the mean findings are displayed on the far right. Subjects S3 and S5 dropped out of the study prior to completing the evaluation at the end of the second week.

Figure 6. Preference ratings for initial programmed fitting versus the learned setting during the real-world field trial. Results displayed are medians and interquartile ranges.

Good spatial localization contributes to improved safety and general environmental awareness. Keidser and colleagues2 at the National Acoustic Laboratories (NAL) studied the effect of non-synchronized microphones on localization for 12 hearing-impaired listeners. Results showed that left/right localization error is largest when an omni-directional microphone mode is used on one side and a directional on the other. If the microphones are matched, the localization error decreases by approximately 40%.

Some users have difficulty adjusting gain for two different hearing instruments, which can affect speech understanding. Hornsby and Mueller3 evaluated the consistency and reliability of user adjustments to hearing instrument gain and the resulting effects on speech understanding. They found that, after first being fitted to prescriptive targets, some individuals made gain adjustments between ears resulting in relatively large gain “mismatches.” This provided a compelling reason for the linked gain adjustments provided by e2e wireless.

Hornsby and Ricketts4 also investigated the effect of synchronized directional microphones on aided speech intelligibility for 16 subjects. Results showed that matched microphones provide a signal-to-noise ratio (SNR) benefit of 1.5 dB when compared to mismatched microphone settings. This corresponds to an increase of about 15-20% in correct word recognition.

The benefit of e2e technology found in the laboratory also was seen in the real world. A large multicenter project5 at UK National Health Service clinics examined whether shared processing enabled by e2e wireless resulted in improved function for real-world wearers. This study found that all 30 subjects who participated preferred bilateral amplification to unilateral. Almost two-thirds (65%) preferred linked processing, whereas only 15% preferred unlinked. In addition, there was a highly significant positive correlation between the linked condition and “speech quality.”

Therefore, e2e technology not only provides a significant improvement in “ease of use,” but the synchronized decision-making and signal processing improves the wearer’s overall listening comfort and speech understanding in a variety of listening situations.

Connectivity and e2e

In 2008, e2e wireless 2.0 was introduced along with the Siemens Tek™ remote control. This second generation of e2e technology allows audio signals from external devices to be streamed wirelessly into the hearing instruments in stereo with no audible delay. For the first time, wearers could have telephone conversations, watch TV, and listen to music effortlessly and wirelessly while turning their hearing instruments into a personal headset (Figure 3).

This functionality offers significant convenience and discreetness for hearing instrument wearers, such as:

  • Hands-free mobile phone calls without worrying about electromagnetic interference or feedback;
  • Volume control of TV signals in their hearing instruments without disturbing fellow viewers; and
  • Ability to walk or jog while listening to an MP3 player without the need for headphones.

In addition, e2e wireless 2.0 offers significant audiological advantages. Wireless connections bypass ambient noise by picking up the “clean” target signal and transmitting it directly into the hearing instruments, significantly improving the SNR, which is especially relevant for telephone communication. In a bilateral fitting, the phone signal transmits to both hearing instruments, allowing the wearer to take advantage of binaural redundancy and central integration, which can improve the SNR by 2-3 dB.6

A study was conducted to examine this effect with 20 subjects fitted bilaterally with Siemens hearing instruments and Tek. They were asked to complete a speech recognition task in speech-shaped cafeteria babble. During testing, speech stimuli were always presented to the hearing instruments over the telephone or by wireless transmission (from the Tek transmitter). Results showed that bilateral listening on the phone via Tek improved the SNR by more than 5 dB (Figure 4).

The introduction of miniTek™ in 2012 built on the benefits of e2e wireless 2.0 by offering connectivity to several transmitters and audio devices. This is especially important to today’s tech-conscious wearers—from active kids/teens to savvy adults. Now, wearers can take advantage of multiple applications (apps) on their favorite gadgets simultaneously. For example, they can have a Skype™ video call on their laptop then transition seamlessly back to their iPod®. They can hear turn-by-turn instructions from Google Maps™ or listen to their favorite songs from Pandora® on their smartphone. Wearers can continue to watch and listen to TV at their preferred volume without disturbing others—but this time via streaming YouTube™ on their tablet or Netflix® on their HDTV.

Automatic Learning and e2e

One of the more innovative hearing instrument features in recent years is automatic learning or “trainable hearing instruments.” In SoundLearning, Siemens’ automatic learning algorithm, e2e wireless ensures that every time the patient makes an adjustment to loudness or frequency response, both hearing instruments record the specific listening situation. This is done based on the acoustic situation detection system, the sound pressure level of the input, and the patient’s desired gain and frequency response in synchrony. In a matter of weeks, it is possible for the hearing instruments to map out listeners’ preferences and then automatically adjust to that setting when a given listening situation is detected.
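A much-simplified version of the trainable-gain idea can be sketched as follows. This is not the actual SoundLearning algorithm, just an illustration of the principle described above: each user adjustment is logged against the detected listening situation, and the preferred offset converges to a running average that can later be applied automatically whenever that situation is detected.

```python
# Simplified sketch of a trainable-gain scheme (illustrative, not Siemens code).
from collections import defaultdict

class GainLearner:
    def __init__(self):
        self.total = defaultdict(float)   # summed user offsets per situation
        self.count = defaultdict(int)     # number of adjustments per situation

    def log_adjustment(self, situation, gain_offset_db):
        """Record one user volume adjustment in the detected situation."""
        self.total[situation] += gain_offset_db
        self.count[situation] += 1

    def learned_offset(self, situation):
        """Mean trained offset; with no training data, keep the fitted gain."""
        if self.count[situation] == 0:
            return 0.0
        return self.total[situation] / self.count[situation]

learner = GainLearner()
for offset in (-2, -3, -4):   # the wearer repeatedly turns down gain in noise
    learner.log_adjustment("speech_in_noise", offset)

print(learner.learned_offset("speech_in_noise"))  # -3.0 dB relative to the fit
```

In the real feature, e2e wireless ensures both instruments log and apply such adjustments in synchrony, so the ears never train apart.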

A study by Chalupper et al8 examined patient benefit of SoundLearning 2.0 with a 2-week home trial during which experienced hearing instrument wearers trained their hearing instruments. Their findings showed not only a mean improvement in speech understanding after training (Figure 5), but also a clear preference for the new trained setting (Figure 6). Similar findings were reported by Palmer9 who divided 36 new hearing instrument wearers into two groups—each initially fitted to NAL-NL1. Group 1 began training via SoundLearning 2.0 at the time of fitting for 8 weeks. Group 2 started training after 4 weeks of use, and trained for only 4 weeks. Results showed that, while trained gain remained similar to NL1 targets in general, 89% of the participants after training stated that they could tell a difference between the trained and original program, and 65% of the participants preferred the trained gain program.

2014 and Beyond

Current e2e wireless technology is responsible for a number of sophisticated features and new accessories for today’s products. In all instruments with adjustment controls on the housing, the split-control function enabled by e2e is more sophisticated than ever. With rocker switches, the wearer can mix and match adjustment options, such as microphone volume control, changing volume for low frequencies or high frequencies only, tinnitus masker volume control, and program change.

The e2e input is also crucial in hearing instruments with directional microphone features, which automatically switch between directional, omni-directional, and backwards directional (anti-cardioid) pattern, depending on where the strongest speech source originates. This automatic feature, which is especially relevant for use in the car, depends first on the accurate binaural detection of a “Car” situation, and also on the binaural detection of where speech originates. Research has shown significant benefit in efficacy studies of this feature.10,11
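The decision logic for this kind of pattern switching can be sketched as a simple rule. The thresholds, azimuth convention, and function name below are assumptions for illustration, not the manufacturer's implementation: once a "car" situation is detected bilaterally, the polar pattern follows the estimated azimuth of the strongest speech source.

```python
# Illustrative decision rule for automatic directional pattern selection.

def choose_pattern(situation, speech_azimuth_deg):
    """Azimuth convention (assumed): 0 deg = in front, 180 deg = behind."""
    if situation != "car":
        return "automatic"            # leave the normal program in charge
    azimuth = speech_azimuth_deg % 360
    if azimuth <= 45 or azimuth >= 315:
        return "directional"          # talker in front: cardioid
    if 135 <= azimuth <= 225:
        return "anti-cardioid"        # talker behind, e.g. a rear-seat passenger
    return "omni"                     # talker to the side

print(choose_pattern("car", 180))  # anti-cardioid
```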

Along with Tek, other remote control accessories, such as EasyPocket and miniTek, have been introduced to support ease of use and expand connectivity options for the wearer. These accessories utilize e2e wireless technology to facilitate communication with the hearing instruments.

We often discuss the benefits of hearing instrument features such as directional technology and digital noise reduction. What is often overlooked, however, is that in the real world, these features are only as effective as the acoustic situation detection system that is driving them. That is, if the detection system does not appropriately identify speech or noise, or speech-in-noise, or misclassifies music as noise, the benefit of these features in the real world will be severely compromised.


Figure 7. The accuracy of acoustic situation detection for five different premier hearing instruments from leading manufacturers. The situations evaluated were Speech in Quiet, Speech in Noise, Noise, Music, and Car.

The micon acoustic situation detection algorithm, enabled by e2e wireless 2.0, is an integral part of the micon platform, capable of detecting six different situations: Quiet, Speech in Quiet, Speech in Noise, Noise, Music, and Car. In fact, the sophistication and accuracy of this bilateral situation detection system is the industry benchmark. A recent benchmark study was carried out with the premier products of the five leading hearing instrument manufacturers. The hearing instruments were exposed to audio recordings of each of the different acoustic situations (Speech in Quiet, Speech in Noise, Noise, Music, and Car) for 16 consecutive hours/sound sample in a sound-treated room. After each acoustic situation, datalogging results were read out in each instrument to determine the percentage of correct situation identification. Results revealed that Siemens achieves the most accurate acoustic situation detection across situations (Figure 7).
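The scoring step of such a benchmark reduces to counting how often the logged class matches the known situation. The sketch below uses invented log data purely to show the calculation behind percentages like those in Figure 7:

```python
# Scoring sketch: after long exposure to one known acoustic situation, the
# datalog is a list of classes the instrument reported; accuracy is the
# fraction matching ground truth. The log below is made-up example data.

def classification_accuracy(true_situation, logged_classes):
    hits = sum(1 for c in logged_classes if c == true_situation)
    return hits / len(logged_classes)

log = ["noise"] * 84 + ["speech_in_noise"] * 16   # 100 hypothetical entries
print(f"{classification_accuracy('noise', log):.0%}")  # 84%
```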

While Speech in Quiet is a situation easily identified by all manufacturers evaluated, Speech in Noise and Noise pose more of a challenge for many of the other manufacturers. Siemens detected these two situations with 83% and 84% accuracy, respectively. In comparison, observe that Competitors A, C, and D had significantly lower correct identification for these two situations.

Although Competitor B also appears to have a very accurate situation detection system, note that it is limited to the detection of Speech in Quiet, Speech in Noise, and Noise only. Like the other instruments without dedicated Car or Music classes, Competitor B instruments would misclassify these particular acoustic situations, and as a result, the signal processing could behave sub-optimally. It is also important to point out that Siemens achieves a more accurate situation detection, despite the fact that it has more situation classifications than others, and a particular sound sample would have a higher probability of being misclassified by a Siemens instrument.

Conclusion

As discussed, there has been a decade of success with wireless systems. Many advanced features in modern hearing instruments depend on the effective communication between the devices. These features provide improved usability, listening comfort, and audibility. In the very near future, more algorithms and technologies will capitalize on this technology and synergize information from both hearing instruments in order to provide more sophisticated and intelligent processing than ever before.

A prerequisite for these new features is that they need to be energy efficient, so that they can be engaged when necessary without excessive battery drain. In addition, they need to react automatically without wearer manipulation and be fast and effective in constantly changing real-life situations. Only when all these conditions have been met can two hearing instruments, working as a harmonious system, begin to achieve the benefits natural binaural hearing offers.

References

1. Powers TA, Burton P. Wireless technology designed to provide true binaural amplification. Hear Jour. 2005;58(1): 25-34.

2. Keidser G, Rohrseitz K, Dillon H, Hamacher V, Carter L, Rass U, Convery E. The effect of frequency specific directionality on horizontal localisation. Int J Audiol. 2006;45(10):563-579.

3. Hornsby BW, Mueller HG. User preference and reliability of bilateral hearing aid gain adjustments. J Am Acad Audiol. 2008;19(2):158-170.

4. Hornsby BW, Ricketts TA. Effects of noise source configuration on directional benefit using symmetric and asymmetric directional hearing aid fittings. Ear Hear. 2007;28(2):177-186.

5. Smith P, Davis A, Day J, Unwin S, Day G, Chalupper J. Real-world preferences for linked bilateral processing. Hear Jour. 2008;61(7):33-38.

6. Dillon H. Hearing Aids. 1st ed. New York: Thieme; 2001.

7. Picou EM, Ricketts TA. Comparison of wireless and acoustic hearing aid-based telephone listening strategies. Ear Hear. 2011;32(2):209-220.

8. Chalupper J, Junius D, Powers TA. Algorithm lets users train aid to optimize compression, frequency shape, and gain. Hear Jour. 2009;62(8):26,28,30-33.

9. Palmer CV. Implementing a gain learning feature. 2012. Live webinar presented on April 30 on AudiologyOnline.com. Course #19950.

10. Chalupper J, Wu YH, Weber J. New algorithm automatically adjusts directional system for special situations. Hear Jour. 2011;64(1):26-33.

11. Mueller HG, Weber J, Bellanova M. Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. Int J Audiol. 2011;50(4):249-254.

CORRESPONDENCE can be addressed to Hearing Review or Dr Eric Branda at ebranda@siemens.com

Original citation for this article: Herbig R, Barthel R, Branda E. A history of e2e wireless technology. Hearing Review. 2014;21(2): 34-37.

Improving Hearing and Hearing Aid Retention for Infants and Young Children: A practical survey and study of hearing aid retention products


Pediatrics | February 2014 Hearing Review

By Karen L. Anderson, PhD, and Jane R. Madell, PhD

By providing information about hearing aid retention strategies throughout early childhood, collecting accurate information about daily hearing aid wear challenges, and working as a team, it should be possible to find appropriate strategies to keep children “on the air,” facilitating listening, language, and learning.

Adjusting to hearing aid use can be very difficult for infants, children, and their families. Putting hearing aids on a child requires accepting the fact that the child has a hearing loss. Even when families accept the fact of the hearing loss, putting on hearing aids and taking a child out in public and to family gatherings can be difficult. Many parents will find excuses to limit hearing aid use. As children get older, they sometimes also find excuses to limit hearing aid use.

A number of factors limit a child’s hearing aid use. Children may remove the hearing aids, or the aids may not stay on the child’s ears. All children remove hearing aids at one time or another. Suddenly hearing sound after living in a quiet world can be a surprise that causes a child to remove the hearing aid. Young children may remove hearing aids for a number of reasons. Infants are exploring their world and may remove hearing aids as part of exploration. Toddlers may remove hearing aids while exploring their environment and may attempt to take them apart. Preschoolers may remove them as part of a power struggle with parents, especially when having a temper tantrum.

One significant problem is the difficulty of keeping hearing aids on the head. With infants, the pinna is soft and will often bend, causing hearing aids to fall off. Even with a pediatric ear hook and short earmold tubing, the hearing aid may fall off if not fit well. Both infants and older children encounter problems keeping hearing aids on the head when they are active. When hearing aids flop around or fall off, parents may feel that it is easier to remove them while the child is active. Whenever the hearing aid is off—even for a few minutes—the child is missing language and listening input, which should be avoided.


Figure 1. Data from the Jones and Launer study1 showed that 40% of children wear their hearing aids only 4 hours or less each day and only 10% achieve full-time hearing aid wear.

Figure 2. Number of hours hearing aids are worn by severity of hearing loss.

Survey

In an effort to explore the problem of hearing aid retention, the Pediatric Hearing Aid Retention Project Survey was developed by the authors and sent to parents and audiologists to determine what retention devices they find most useful.

A total of 286 parents and 101 audiologists responded to the survey. Products were listed and all respondents were asked to rate each product on the following:

  • Effectiveness;
  • Child safety;
  • Durability;
  • Ease of use;
  • Keeping the device on and working (parent questionnaire); and
  • Level of compliance by families (audiology questionnaire).
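A minimal sketch of how such ratings could be tabulated is shown below. The respondent data are invented for illustration (this is not the authors' actual analysis); the point is simply that mean ratings are computed separately per group so parents and audiologists can be compared side by side, as in Table 1.

```python
# Hypothetical aggregation of survey ratings by respondent group.
from statistics import mean

responses = [
    # (group, product, effectiveness rating on an assumed 1-5 scale)
    ("parent", "Ear Gear", 5), ("parent", "Ear Gear", 4),
    ("audiologist", "Ear Gear", 4), ("audiologist", "Ear Gear", 3),
]

def mean_rating(group, product):
    scores = [r for g, p, r in responses if g == group and p == product]
    return mean(scores)

print(mean_rating("parent", "Ear Gear"))       # 4.5
print(mean_rating("audiologist", "Ear Gear"))  # 3.5
```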


Table 1 illustrates the combined responses provided by parents and audiologists for rating hearing aid retention accessories. The results of the survey clearly illustrated that the families who use retention devices typically have a significantly different view of the effectiveness and safety of the devices as compared to how audiologists rate these features.

Table 1. Parent and Audiologist Ratings of Hearing Retention Devices, based on 286 Parent and 101 Audiologist Responses. Gray columns = Parents, White columns = Audiologists. *means less than 10 respondents rated product; weighted with next highest rating; **1 indicates the most recognized product whereas a score of 9 meant the least recognized product.

Acoustic Transparency Study

A valid concern exists about the acoustic transparency of hearing aid retention accessories that cover the microphone of hearing devices. The acoustic transparency of Ear Gear, the Hanna Andersson pilot cap, the Silkawear cap, and the Hearing Henry Headband was determined via electroacoustic evaluation with an AudioScan analyzer and a Phonak Piconet2 hearing aid. With the volume control taped in a position just below full on, the frequency response, gain at 50 dB and 60 dB SPL inputs, SSPL90, frequency range, and harmonic distortion were obtained for the hearing aid alone, the hearing aid on a false ear (to obtain the pinna effect), and the hearing aid with each accessory.

The Ear Gear was positioned as it would be when typically worn. The Hanna Andersson cap and Silkawear cap were placed so that the material of the cap that would typically be over the ear was wrapped tightly over the hearing aid case and microphone to simulate typical wear. The Hearing Henry Headband, if worn appropriately, will not cover a hearing aid microphone. However, active children can easily dislodge the placement of a headband. Therefore, the Hearing Henry Headband was placed over the hearing aid microphone to simulate acoustic response if the headband placement was inappropriate.


Table 2. Gain and frequency range response for hearing aid (HA) only, hearing aid on false ear (with pinna effect), and four hearing aid retention products.

Harmonic distortion results were within 1% of the hearing aid only condition. Gain and frequency range response are shown in Table 2. Ear Gear minimally changed the frequency range (low and high) when compared to the response when the hearing aid alone was on the false ear. The caps reduced the reception of low frequency sound only. The Hearing Henry Headband increased the low frequency range when the microphone was covered. The gain with Ear Gear and both caps was within 1-2 dB of that found with the hearing aid on the false ear. Ear Gear decreased gain by 1 dB and the caps increased gain by 1-2 dB. Only the Hearing Henry Headband, when positioned across the microphone, reduced gain (8-9 dB gain reduction).
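The logic behind a transparency judgment like the one above can be expressed as a simple tolerance check. The dB values and the 3 dB tolerance below are illustrative assumptions, not the study's data: an accessory is called acoustically transparent if its measured response stays close to the hearing-aid-on-false-ear baseline at every test frequency.

```python
# Sketch of an acoustic transparency check against a baseline response.

def is_transparent(baseline_db, accessory_db, tolerance_db=3.0):
    """True if the accessory's gain stays within tolerance at all frequencies."""
    return all(abs(a - b) <= tolerance_db
               for a, b in zip(accessory_db, baseline_db))

baseline = [20, 35, 40, 38, 30]   # gain (dB) at the test frequencies (invented)
ear_gear = [19, 34, 40, 37, 30]   # within 1 dB of baseline everywhere
headband = [18, 33, 36, 30, 21]   # worn over the microphone: large high-freq loss

print(is_transparent(baseline, ear_gear))   # True
print(is_transparent(baseline, headband))   # False
```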


Table 3. Frequency response of all conditions listed in Table 2. Ear Gear minimally changed the frequency range, while the caps reduced reception of low frequency sounds only.

Table 3 illustrates the frequency response of all conditions. Ear Gear and Hanna Andersson and Silkawear caps had responses within 0-3 dB of the hearing aid worn on the false ear for most of the frequencies tested. Interestingly, both caps increased the frequency response at 5000 Hz by 4 dB. The Hearing Henry Headband worn inappropriately over the hearing aid microphone significantly impacted the high frequency response.

Based on the acoustic transparency evaluation results, Ear Gear is considered to be acoustically transparent; the caps tested significantly affected the response only at 5000 Hz, and only by 4 dB. The Hearing Henry Headband, and possibly all headbands, when worn inappropriately, have the potential to significantly decrease gain and frequency response.

Comments by Audiologists

The survey solicited comments by audiologists to the question: In regard to working with families, what have you tried with success? The following is a summary of the comments received:

  • Parent to parent support.
  • Audiologist working with early intervention providers to help establish hearing aid wear goals for the family and provide support.
  • Using interpreters and connecting them with families of a similar background.
  • Help families understand their child’s hearing loss and what their child is missing by not having their hearing aids on.
  • Counseling in a way that makes the hearing devices meaningful to the family.
  • Discussing the family’s specific challenges with them.
  • Providing real-world examples of strategies to increase wear time.
  • Having the family come in for frequent data logging checks.
  • Not one thing works with each family. Time and patience. You must have a good working relationship to keep them coming back.
  • Thinking outside the box and understanding what is holding them up gets you on the right path.
  • Work with a partner agency to provide a Mother’s Day Out situation so someone else will keep the hearing aids on to show the parents it can be done.
  • Encouragement by multiple audiologists as families come in for regular appointments.
  • A “to do” list with achievable measures and a date of completion usually prior to our next appointment (eg, complete Early Intervention evaluation, go to dispenser to pick up hearing aids, etc).
  • Keeping in close communication and not closing the door simply because the family has barriers or is taking a longer time to make choices.

Discussion


Table 4. Parent ratings of hearing aid retention accessories/strategies (8 products/strategies were rated in a survey completed by 286 parents).

Ear Gear, Hanna Andersson Caps, and SafeNSound received the best ratings by both parents and audiologists (see Table 4 for parent ratings). Clips (Critter Clips, Phonak Junior Kidz Clips, Otoclips) received high scores from audiologists—but not parents.

The difference in parent and audiologist ratings is significant and may indicate that audiologists and parents are not communicating well. Audiologists may not be aware of the problems parents experience with hearing aid retention devices and may therefore be making recommendations without real practical information to draw upon.

Also of significance is the number of parents who reported never having heard of a product. Of 286 parents responding to the survey, 57 never heard of Critter Clips, 29 never heard of Ear Gear, 111 never heard of Hanna Andersson Caps, 94 never heard of hearing aid sweat bands, 86 never heard of Huggie Aids, 93 never heard of Phonak Junior Kidz Clips (this may be explained by the fact that it is only distributed with Phonak aids), 159 never heard of SafeNSound, 139 never heard of Superseals, and 53 never heard of Sticky Wrap/Toupee/Wig Tape. Audiologists had heard of most products. Of 101 audiology respondents, 23 had not heard of Hanna Andersson Caps and 54 had not heard of SafeNSound.
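The awareness gap is easier to see as percentages. The quick illustration below uses the counts reported above (286 parent respondents); only a subset of the products is shown.

```python
# Share of parent respondents who had never heard of each product
# (counts taken from the survey results reported in the article).
parents = 286
never_heard = {"Critter Clips": 57, "Ear Gear": 29,
               "Hanna Andersson Caps": 111, "SafeNSound": 159,
               "Superseals": 139}

for product, n in never_heard.items():
    print(f"{product}: {n / parents:.0%} of parents had never heard of it")
```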

Figures 3a-b. The authors have created three brochures designed respectively for families of children ages 0 to 12 months, 12 to 24 months, and 2 to 5 years old that can be accessed and downloaded at:  http://successforkidswithhearingloss.com/hearing-aids-on or www.JaneMadell.com


If parents are having a problem with hearing aid retention, it would seem to be critical to have information about all products available and their effectiveness. Providing this information should be the responsibility of the audiologist. Parents should be told initially and at subsequent visits to the audiologist that there are devices available to assist in retention and given specific information to assist in selecting the appropriate product for a particular child. The audiologist needs to question the parents about problems with hearing aid retention at each visit as an important part of management of the young child with hearing loss.

One outcome of the survey was the development of free brochures for families of young children ages 0-12 months, 12-24 months, and 2-5 years (an example is shown in Figures 3a-b). These three brochures can be downloaded for use by audiologists and agencies serving families of infants and toddlers from http://successforkidswithhearingloss.com/hearing-aids-on and at www.JaneMadell.com. More information on hearing aid retention survey results, acoustic transparency, and strategies can be found at http://successforkidswithhearingloss.com/hearing-aid-retention.

By providing information about hearing aid retention strategies throughout early childhood, collecting accurate information about daily hearing aid wear challenges, and working as a team, it should be possible to find appropriate strategies to keep children “on the air,” facilitating listening, language, and learning.

References

1. Jones C, Launer S. Pediatric Fittings in 2010: The Sound Foundations Cuper Project. A Sound Foundation Through Early Amplification. Warrenville, Ill: Phonak.  Available at: http://successforkidswithhearingloss.com/wp-content/uploads/2012/01/Phonak-Data-Logging-Study-2010.pdf

2. Stovall D. Teaching Speech to Hearing Impaired Infants and Children. Springfield, Ill: Charles C Thomas; 1982.

Karen L. Anderson, PhD, is an educational and pediatric audiologist, and is director of Supporting School Success for Children with Hearing Loss, Plymouth, Minn. She is also the author of many test instruments and materials, such as the SIFTERs, CHILD, LIFE, and ELF. Jane R. Madell, PhD, has a consulting practice in pediatric audiology, and is an audiologist, speech-language pathologist, and LSLS auditory verbal therapist. With 40+ years of professional experience, she has published four books and written numerous book chapters and journal articles.

Original citation for this article: Anderson K, Madell J. Improving hearing and hearing aid retention for infants and young children. Hearing Review. 2014;21(2):16-20.

Satisfying the Demanding Lifestyles of Today’s Hearing Aid Wearers


Tech Topics: Hearing Aids | February 2014 HR

By Brad Stephenson, AuD, and Donald Hayes, PhD

While the belief persists that as people near their retirement years their lifestyle becomes less active, and they are thus exposed to less noise, this is far from the reality of today’s generation of retirees. For many, the opposite is true: their lifestyles are just as active in their retirement years.

In fact, a recent survey of more than 2,000 individuals under the age of 65 showed that 9 in 10 expect their retirement plans to include three phases spread over a 30-year period (Table 1).1 In general, the first two phases reflect either no change in lifestyle upon retirement or an actual increase in lifestyle activity. Even in the final phase of retirement, when health issues may require a less active lifestyle, many seniors facing a loss of independence shift to high-density dwellings where speech-in-noise demands can be equally significant.

What Impact Do Lifestyle Demands Have on Hearing Aid Wearer Needs?

It is well recognized in the hearing care industry that lifestyle choices play a significant role in an individual’s hearing aid purchase decision. Hearing aid manufacturers continually invest significant resources to understand and remain current on the ever-evolving needs of hearing aid wearers, and to develop advanced technologies that ensure continued satisfaction despite changing lifestyle demands.

With this in mind, we would like to share our research, which highlights the shifts in lifestyle of retirees that will ultimately increase the acoustic demands of a hearing aid wearer. To do this, we can start by looking at each of the three phases of retirement in more detail:

1) Hearing in the “Transition” phase. Mandatory retirement has been a widespread practice across the world since 1960, with a typical default retirement age of 65 years.2 For the hearing aid industry, this meant that the average first-time wearer, at 69 years, was already 4 years into their retirement. Manufacturers were aware of, or quickly learned, the importance of managing the noises that affected a wearer’s experience, which were largely outside of a work environment.

However, the practice of mandatory retirement has now been recognized as a form of age discrimination. Thus, legislation across the world is being updated to protect individual rights, which means that the average hearing aid wearer of today in the transition phase of retirement is more likely to be employed than their predecessors, may choose an entrepreneurial second career, or step up their volunteer activities.

Manufacturers are well aware that work environments can be incredibly complex from an acoustic perspective, with a multitude of noises across a variety of listening situations. Workplace noise has been shown to have a significant impact on job satisfaction for normal-hearing individuals,3 so manufacturers are proactively developing features to manage the impact of workplace noise on a hearing aid wearer’s experience.

2) Hearing in the “My time” phase. In the “My time” phase of retirement, people have more time for travel and other leisure activities. While previous generations may have retired to quiet gated communities in sunny locations, today’s Baby Boomer retirees want to travel and gain educational and enriching experiences.

Many plan to be active both physically and intellectually.4 They are also active users of the Internet, which increases their awareness of the world around them and activities of interest within the community.

It is therefore believed that today’s hearing aid wearers will spend more time outside their homes and more time in listening situations (such as airports) with a diversity of noise sources. The noise levels encountered while attending a cooking class in Italy are certainly more complex than those encountered during a buffet dinner at the clubhouse in Florida.

These anticipated changes in listening environments are of particular importance to hearing aid feature development because the types and variety of noises are dependent on the amount of social engagement during leisure and travel-based activities.

3) Hearing in the “Last waltz” phase. General trends as to where hearing aid wearers choose to live are an important consideration for manufacturers, as the noise sources present in single family homes in the suburbs can be much different than those generated in a condominium located in a busy metropolitan center.

A recent survey5 shows that retirees are more apt to live in more accessible locations in relatively high-density dwellings, such as condominiums and retirement residences. These high-density locations are inherently noisier for several reasons: there is less distance separating people and their noisy lifestyles, and homes are typically physically smaller, placing hearing aid wearers closer to distracting household sounds.

So we now can see that today’s hearing aid wearers of retirement age have different needs than those of previous generations due to a changing lifestyle. They also have different needs in each phase of their retirement as they progress from transitioning into semi-retirement, through the leisure phase and into the last waltz. And market research shows that hearing aid wearers’ satisfaction is dependent on the management of noises through their hearing aids.6

Manufacturers have ensured high levels of continued patient satisfaction through technology advancement, incorporating adaptive technologies such as directional microphones and noise reduction. To sustain these high satisfaction levels, they must continue to evolve adaptive technologies to address changes in hearing aid wearers’ needs and lifestyles—addressing the noise demands that arise during every phase of the retirement years.

Evolving Technology to Meet Shifting Lifestyle Demands

Unitron takes a hierarchical approach to its product development, considering a patient’s comfort at the most fundamental level, followed by natural sound quality, and finally performance improvements in areas such as speech intelligibility in challenging listening environments.

As a direct outcome of this philosophy, Unitron created a metafeature called SmartFocus™ that simultaneously controls multiple adaptive parameters to achieve either better clarity or increased comfort. Implementing this feature within the automatic program allowed SmartFocus to provide benefit in any environment. In fact, this metafeature was shown to provide 16% improved speech-in-noise (SIN) performance over benchmark premium hearing aids with directional microphones only (see the white paper by Donald Hayes, available on the Unitron website7).

However, the wide range of needs exhibited by today’s retirement-age hearing aid wearers has necessitated a re-evaluation of the underlying adaptive features in order to address the more complex noise sources that arise in each phase of retirement. The focus of these changes was to maintain the same superior speech intelligibility performance, while ensuring these wearers experience less distraction from everyday noise and greater ease of listening in noisy environments. This was achieved with the next generation of the feature, SmartFocus 2, which includes two key improvements: noise reduction LD and speech enhancement+.

Noise reduction LD. It was recognized that the noises inherent within today’s patient lifestyle could not be addressed with a “one size fits all” approach. That is, today’s satisfaction levels are dependent on our ability to effectively reduce noises that are considered distracting, and a linear noise reduction approach would not be sufficient. Our noise reduction feature now includes a level-dependency algorithm, which allows our hearing aids to be more aggressive with distracting noises present in an active lifestyle—noises that may otherwise be perceived negatively or require action such as lowering the volume to maintain comfort.

Allowances also were made for either soft ambient environments or loud environments. For example, soft noises such as a refrigerator’s motor would not necessarily impact one’s ability to hear speech, but do require cognitive effort to both recognize the sound and choose to ignore it. By contrast, patients trying to listen to a cashier in a noisy environment, such as a grocery store, need a way to focus on the cashier’s voice while ignoring abundant background noise.

These adjustments broaden the impact of the noise reduction feature, allowing SmartFocus 2 to be more effective with the noises that would ultimately impact the wearer’s satisfaction level.

Speech enhancement+. The diversity of environments hearing aid wearers face today, combined with Unitron’s ongoing focus and core competency in the area of speech intelligibility, made it necessary to take a different approach with speech enhancement in SmartFocus 2. While the previous version offered the same amount of level-dependent speech enhancement to each frequency, speech enhancement+ was purposely defined to apply different amounts of speech enhancement at different frequencies.

Mid-frequency speech sounds receive more enhancement because speakers shift their vocal energy upward to the mid frequencies when raising their voices to be heard above a high-level noise floor. Less enhancement is applied to the low frequencies, because this range is predominantly made up of distracting noise. Finally, less enhancement is applied to the high frequencies in order to deliver natural-sounding speech. The net result is ease of listening in the most complex speech-in-noise situations.
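The frequency-shaping idea described above can be illustrated with a toy sketch. Note that the band edges and weights below are purely hypothetical assumptions chosen for illustration, not Unitron’s published parameters:

```python
# Toy illustration of frequency-dependent speech enhancement:
# most gain in the mid band (where raised voices concentrate energy),
# less in the lows (mostly noise) and highs (to keep speech natural).
# Band edges and weights are hypothetical, for illustration only.
def enhancement_db(freq_hz, base_enhancement_db):
    """Band-weighted enhancement in dB (illustrative weights only)."""
    if freq_hz < 750:
        weight = 0.4   # low band: predominantly distracting noise
    elif freq_hz < 4000:
        weight = 1.0   # mid band: raised-voice speech energy shifts here
    else:
        weight = 0.6   # high band: keep speech natural-sounding
    return round(weight * base_enhancement_db, 1)

for f in (250, 1000, 6000):
    print(f, enhancement_db(f, 6.0))  # 2.4, 6.0, and 3.6 dB
```

The point of the sketch is simply that a single level-dependent enhancement value is reshaped per band, rather than applied uniformly across the spectrum as in the previous version of the feature.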

Conclusion

Satisfaction levels within the hearing aid industry have reached unprecedented levels, an achievement that is the culmination of technological advancements, improved clinical care, and a better understanding of the social and environmental changes impacting hearing aid wearers. However, the lifestyles of hearing aid wearers continue to change and evolve. People of retirement age today are active in every phase of retirement, from transition phase to the last waltz, with frequent exposure to a variety of noise.

Vigilance is required across the hearing aid industry to ensure both clinical care and technology evolve to address these shifting needs. SmartFocus 2 was purposely designed to provide today’s hearing aid wearer with a superior listening experience across all facets of their lifestyle.

References

1. UBS Investor Watch 4Q 2013. 80 is the new 60. Available at: http://www.ubs.com/us/en/wealth/research/Investor-Watch-Research-Archives/_jcr_content/par/textimage_4.1878809578.file

2. Wunsch C, Raman JV. Mandatory retirement in the United Kingdom, Canada and the United States of America. London: The Age Employment Network; 2010. Available at: http://taen.org.uk/uploads/resources/Combined_dissertation_final_formatted.pdf

3. Sundstrom E, Town JP, Rice RW, Osborn DP. Office noise, satisfaction, and performance. Environment and Behavior. 1994;26(2):195-222.

4. eTN Global Travel Industry News website. March 2009. Aging boomers poised to redefine travel industry. Available at: www.eturbonews.com/8320/aging-boomers-poised-redefine-travel-industry

5. Krizek KJ, Waddell P. Analysis of lifestyle choices: neighborhood type, travel patterns, and activity participation. Transportation Research Record: Journal of the Transportation Research Board. 2002;1807(1):119-128.

6. Kochkin S. MarkeTrak VIII: Consumer satisfaction with hearing aids is slowly increasing. Hearing Jour. 2010;63(1):19-20.

7. Hayes D. SmartFocus independent study: user control over adaptive features delivers real client benefit (white paper). Available at: http://unitron.com/unitron/au/en/professional/products/products-p/purpose-driven_technologies-p/smartfocus-userpreference.html

CORRESPONDENCE can be addressed to Dr Stephenson at: Brad.Stephenson@unitron.com.

Original citation for this article: Stephenson B, Hayes D. Satisfying the demanding lifestyles of today’s hearing aid wearers. Hearing Review. 2014;21(2):30-33.

Starkey Hearing Technologies Introduces Halo MFi


Starkey Hearing Technologies, Minneapolis, Minn, has introduced Halo™, a Made for iPhone® (MFi) hearing aid engineered to be compatible with iPhone, iPad®, and iPod® touch. The Halo hearing aid is designed to provide the more than 26 million Americans with untreated hearing loss a new discreet option that seamlessly connects with Apple devices.

Available March 31, Halo combines Starkey’s hearing aid technology with iOS to deliver a new hearing solution that makes every aspect of life better—from conversations to phone calls to listening to music. Halo will connect with the TruLink™ Hearing Control app, which is available as a free download in the App StoreSM.

“Halo brings what people love about Starkey hearing aids to anyone who suffers from hearing loss,” says Dave Fabry, vice president of Audiology and Professional Services for Starkey Hearing Technologies. “Halo delivers new standards of performance and personalization, while providing convenient control and connectivity to iOS devices.”

Building on iOS features that let you do everyday things like making phone and FaceTime® calls, listening to music and using Siri®, Halo takes advantage of TruLink technology and compatible iOS devices to meet the needs of today’s demanding and tech-savvy consumers. In addition to seamless integration with iPhone, iPad, and iPod touch, Halo hearing aids are also standalone hearing aids that are reportedly packed with best-in-class performance features, including feedback cancellation, adaptive noise management, and directionality.

Source: Starkey

A Disappearing Receiver Tube and Wire Cover Treatment


Tech Topic | April 2014 Hearing Review

By Robert Traynor, EdD, MBA

A new treatment for BTE and RIC/RITE receiver tubing and wire covers dyes the hearing aid plumbing to provide an even more invisible, personalized match to the individual’s skin color.

Experienced clinicians know that the stigma of hearing instruments is much less than it was even a few years ago, largely due to high-technology devices that have allowed hearing aids to all but disappear into CIC, IIC, thin-tube, and receiver-in-canal/ear (RIC/RITE) styles. While all of these solutions are popular, more than half (52%) of hearing instruments fit in 2014 are small receiver-in-the-ear/receiver-in-the-canal BTE instruments.1 These new BTE styles present a far more acceptable cosmetic solution, especially when compared to the days before thin-tube RIC/RITEs.

Although hearing aid fitting technologies have become substantially better in the past 10 years, there are still millions of hearing-impaired individuals who should be using amplification but do not. Even as the number of patients using BTE hearing aids grows, the stigma associated with hearing instruments remains a substantial concern for many of these unaided patients.

The National Institutes of Health2 indicates that stigma is still associated with hearing aid use and can be a factor in why only 1 in 5 individuals with hearing loss actually use amplification. Wallhagen,3 investigating stigma and hearing instruments, suggests that stigma is pervasive in its relationship to hearing loss and hearing aids, and closely associated with ageism. Other studies and discussions4-9 also support that stigma remains a factor, even for instruments that incorporate cosmetically small thin tubes and wires into their fittings.

A New “Plumbing” Process

Since there is such a stigma factor involved in hearing aid use, it makes sense to consider any alternative that offers a more cosmetic solution. While these new technologies and fitting methods have somewhat reduced stigma and made fittings more cosmetic for some individuals, current tubes and receiver wire covers remind us of Henry Ford’s classic motto for his all-black Model-T line: “You can have any color tubing you want as long as it is reflective white.”

Hearing aid manufacturers do not offer skin-colored tubes, as developing and producing 9 different cosmetically blending tube colors would require a significant increase in research and development costs. Additionally, because tubes and wire covers are already molded for right and left ears in 3-5 different lengths, stocking each in 9 different skin colors would create inventory issues for both the manufacturer and the dispenser.

Obvious reflections and skin color differences leave thin tubes and receiver wire covers in need of some modification for the ultimate in cosmetic appeal. To some patients, these tubes are appealing right out of the box; others are frustrated by a solution that does not blend the tubes with their skin color or reduce their reflectivity. Many patients would like their hearing aids to be less noticeable or distracting:

  • Cosmetically sensitive light-skinned individuals.
  • Tan and dark-skinned individuals, for whom tubes are reflective and provide an obvious contrast to skin color.
  • Teenagers, who are always self-conscious about their hearing aids, particularly BTE devices.
  • Patients with short hair styles or who like to pull their hair back for recreation.

Figure 1. Tubing before and after application of the new Vanish tubing application.

About 5 years ago, the inventor of the Vanish process, dissatisfied with the stigma and the appearance of his RIC/RITE and BTE hearing instruments, began a quest to make these tubes and receiver wire covers more cosmetically discreet. The search carried him virtually all over the country to various plastics laboratories and dye companies. As part of the research process, 24 plastics engineers evaluated 15 different plastic coatings and 9 specific chemicals to dull the surface of these polyamide thin tubes and receiver covers. After substantial research, the appropriate dyes were found, and a process was developed to treat the tubes and wire covers to blend with various skin colors (Figure 1). Once the dyes were determined, the engineers considered various accelerants to deliver the dye to the tubes. For the process to be conducted safely in a clinical environment, a special container was designed for the blending process; it allows the tube to be dyed while leaving the receiver or the connection portion of the tube out of the process.

A 5-Step Process

Figure 2. Color chart showing different skin colors.


Prior to dyeing the tube, it is necessary to determine the patient’s skin color. This is done using a color selection chart (Figure 2). The color should be selected by holding the chart behind the ear in natural daylight. Each of the possible colors (L1 to L6 and D7 to D9) has a specific curing time required for the dyeing process, so care must be exercised in watching the timer to ensure the exact color desired. Blending a tube or receiver wire cover (RWC) into a cosmetic solution for your patient involves a simple 5-step process:

1) Depress the button on the top of the bottle and let the dye cascade into the mixture below to begin the mixing process. Once the dye is in the mixture, shake the bottle for 20 seconds and set aside (Figure 3).

Figure 3. The button on the top of the dye bottle is depressed, allowing the dye to flow into the mixture below.


TraynorFig4

Figure 4. Using the scuff pad to abrade surface of the tube and RWC.

2) Using the enclosed scuffing pad (Figure 4), gently buff all the surfaces of the tube and/or RWC until they look like frosted glass (ie, a milky appearance). Each scuffing pad will process two tubes. Scuffing the tube or RWC reduces its reflectivity and opens up the surface of the plastic for the thorough penetration of the dye.

TraynorFig5

Figure 5. Inserting the right ear hearing aid tube into the right hole of the reusable soaking container.

3) One side of the reusable soaking container is marked left (L), and the other side is marked right (R). At the end of each side of the soaking tray is a hole designed for inserting the tube or RWC. When inserting the tube or RWC into the tray, it helps to push with one hand and pull with the other. Once in the soaking tray, ensure that the tube or RWC is resting on the floor of the tray. If the tube has a “wing” on the connector, put the included retainer tube over both. If a RWC, ensure that the receiver is covered.

4) Set the timer for the chosen color, shake the dye for another 20 seconds, pour the mixture into the soaking tray, and immediately start the timer. Be careful to accurately time the soaking process so that the final color matches the selected color from the color chart.

TraynorFig6

Figure 6. The right ear hearing aid plumbing in the soaking container.

5) When time expires, rotate the tube or RWC, and pour the remaining dye into a bowl or the sink. Then rinse the tube or RWC and the tray, pull the tube or RWC out of the container, and dry it off. The dyed tube or RWC may be used immediately.

TraynorFig7

Figure 7. Final result of the Vanish treatment.

This 5-step process is all that is necessary to turn a reflective white tube or receiver wire cover into a nonreflective skin-blending cosmetic solution for your patient (Figure 7). The whole Vanish process takes less than 15 minutes for both tubes and offers a significant cosmetic result for a minimal cost.

The Vanish blending process adds even more value to a true custom fitting; it not only offers a custom cosmetic solution for your patient’s hearing aid fitting, but also sets the clinic apart from its competitors.

As clinicians, we have watched tubing go from large to small, and receivers move from inside the instrument to inside the canal. Clinically, these devices look rather small to the seasoned dispensing professional, yet new patients still feel there are cosmetic issues with the use of amplification. To some, the tubes and RWCs look like conspicuous telephone poles on their ears. In these days of extreme competition, the Vanish process can help set your clinic apart from the competition and even attract new patients who simply want a better cosmetic solution.

For more information about this new receiver tube and wire cover treatment, visit http://myvanish.com.

References

1. Hearing Review (2014). Hearing aid sales rise by 4.8% in 2013: Industry closing in on 3-million unit mark. http://www.hearingreview.com/all-news/22220-hearing-aid-units-rise-by-4-8-in-2013-industry-closing-in-on-3-million-unit-mark. Accessed February 16, 2014.

2. National Institutes of Health (2014).  Department of Health and Human Services, Research Portfolio. http://report.nih.gov/nihfactsheets/viewfactsheet.aspx?csid=95. Accessed February 14, 2014.

3. Wallhagen M. The stigma of hearing loss. Gerontologist. 2010;50(1):66-75.

4. Kochkin S. 20Q: More Highlights from MarkeTrak. 20Q with Gus Mueller. Audiology Online, July 2012. http://www.audiologyonline.com/articles/more-highlights-from-marketrak-6830. Accessed February 16, 2014.

5. Kochkin S. 20Q: More Highlights from MarkeTrak. 20Q with Gus Mueller. Audiology Online, June 2012. http://www.audiologyonline.com/articles/20q-25-years-marketrak-highlights-6616. Accessed February 16, 2014.

6. Gagné J-P, Southall K, Jennings MB. Stigma and self-stigma associated with acquired hearing loss in adults. Hearing Review. 2011;18(8):16–22.

7. Ross M. The stigma of hearing loss and hearing aids.  Rehabilitation Engineering Research Center on Hearing Enhancement, Gallaudet University, 2011. http://www.hearingresearch.org/ross/hearing_aids/the_stigma_of_hearingloss_and_hearingaids.php. Accessed February 16, 2014.

8. Hear-it.org (2014).  Stigma about hearing loss. Hear-it. http://www.hear-it.org/Hearing-impaired-people-still-stigmatized-2. Accessed February 16, 2014.

9. Erler S, Garstecki D. Hearing loss- and hearing aid-related stigma perceptions of women with age-normal hearing. Am J Audiol. 2002;11(2):83-91.

Robert Traynor, EdD, MBA, is CEO at Audiology Associates Inc in Greeley, Colo, and serves as an adjunct professor at the Universities of Florida, Colorado, and Northern Colorado. His book, coauthored with Robert Glaser, PhD, Strategic Practice Management, now in its second edition, is widely used in college programs. Correspondence can be addressed to Dr Traynor at: traynordr@gmail.com

Original citation for this article: Traynor R. A disappearing receiver tube and wire cover treatment. Hearing Review. 2014;21(4):38-39.

US Hearing Aid Sales Flat in First Quarter of 2014


US hearing aid unit sales in the first quarter of 2014 increased by less than a percentage point (0.95%) for the total market, and by only one-tenth of a percent (0.12%) for the commercial (non-VA) sector compared to the same period last year, according to statistics compiled by the Hearing Industries Association (HIA), Washington, DC.  The Department of Veterans Affairs (VA), which accounts for one-fifth (20.7%) of all US hearing aids dispensed, experienced a 4.2% increase in unit volume.


Quarterly hearing aid net unit volume for the private/commercial sector (blue bars) and the US Dept of Veterans Affairs (VA, in red bars). Yearly unit growth percentages for the entire market (private + VA) are shown at the top. Source: Hearing Industries Assn (HIA)

Last year, the commercial sector saw sales dip by a half-percent (-0.51%) in the first quarter, followed by fairly strong unit gains of 5%-8% in the last three quarters of 2013. Thus, the tiny percentage gain seen in Q1 2014 might be viewed as a cooling off from the better-than-average sales experienced during 2013. However, as with almost every sales and economic indicator for the first quarter, it should be noted that incredibly harsh weather (and high heating bills) throughout much of the United States could have played a substantial role.

Receiver-in-the-canal/ear (RIC/RITE) hearing aids continued to trend upward in usage, accounting for 58.9% of all hearing aids sold in the commercial sector in the first quarter—up from 51.0% during the same period last year. Almost 4 out of every 5 hearing aids (78.5%) dispensed by the commercial sector were behind-the-ear (BTE) type aids. Similarly, just under three-quarters (71.8%) of the units dispensed by the VA were BTEs, with RICs accounting for 48.3% of the VA’s total units.

For the total market (commercial + VA), RICs made up 56.7% and BTEs made up 77.1% of all units dispensed.
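These total-market figures are consistent with a simple unit-share-weighted average of the two sectors. As a rough sketch (our arithmetic, not an HIA calculation; the VA’s 20.7% unit share from above is used as the weight):

```python
# Rough consistency check: blend sector style shares by unit share.
# The 20.7% VA unit share and the sector percentages come from the
# article text; the blending itself is our own arithmetic.
va_weight = 0.207
commercial_weight = 1.0 - va_weight

def blended_share(commercial_pct, va_pct):
    """Weighted average of a style's share across both sectors."""
    return commercial_weight * commercial_pct + va_weight * va_pct

ric = blended_share(58.9, 48.3)  # RIC/RITE share, commercial vs VA
bte = blended_share(78.5, 71.8)  # BTE share, commercial vs VA

print(round(ric, 1))  # 56.7 — matches the reported total-market figure
print(round(bte, 1))  # 77.1
```

Any small differences from published figures would simply reflect rounding in the reported sector percentages.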

Wireless technology was incorporated in about 4 of 5 hearing aids dispensed (79.6%) in the first quarter, an increase of 14.9% over the same period last year. Again, the popularity of RIC instruments was mostly responsible for the trend: 91.7% of all these aids offered wireless features.

The average return-for-credit rate in the first quarter was 20.1% for BTEs and 22.9% for ITEs, slightly higher than in the same period last year (19.7% and 21.0%, respectively).

The RIC as a Disruptive Technology


Staff Standpoint | Hearing Review June 2014

By Karl Strom, Editor-in-Chief

Receiver-in-the-canal/ear (RIC/RITE) hearing aids continue to trend upward in usage and now constitute well over half (56.7%) of all hearing aids sold in the United States. In the first quarter of 2014, they accounted for 58.9% of all hearing aids in the commercial sector—up from 51.0% during the same period last year. Similarly, in the Department of Veterans Affairs (VA), 48.3% of units dispensed were RIC-type devices.

HAStyles1994-2013

[Click on image to enlarge.] Figure 1. Estimated US hearing aid style utilization (in percent) from 1994 to 2013, extrapolated from HIA statistical data. HIA started tracking “external receiver” BTEs (ie, RIC/RITEs) in 2009; therefore, the data on traditional BTEs and RIC/RITE styles from 2003-2009 are HR estimates only.

RIC/RITEs as a disruptive technology. Figure 1 shows the meteoric rise of RIC/RITE styles in the US hearing aid market since the introduction of the ReSound Air and Sebotek PAC in 2003 (and, later, Vivatone in 2004) [please see correction below]. In less than a decade, “external receiver” BTEs (RIC/RITEs) have become the dominant style for dispensing professionals when specifying hearing instruments. It is of interest to note that “traditional receiver” BTEs also grew very quickly in market share, starting in 2002 (possibly due to a realization in the dispensing community of their enhanced directional capabilities), and doubtless benefited from the open-fit revolution initiated, in part, by the commercial success of the ReSound Air. By 2013, BTEs as a product class—traditional BTEs and RIC/RITEs added together—made up almost three-quarters (74.0%) of the US hearing aid market.

Although RIC/RITE aids are custom-fit in the sense of an individualized acoustical fitting, most do not use a custom earmold. Several industry experts have pointed out that this leaves these hearing aids more vulnerable to commoditization.

Will the pendulum swing the other way? RIC/RITE styles offer an array of advantages to the consumer and the professional. However, I have heard several discussions in convention hallways and lounges that point to the possibility that these devices are being overprescribed. And, no, this isn’t coming from the earmold suppliers! ITEs—and particularly deep-fitting CICs—do offer some distinct advantages that may be difficult to replicate (pinnae-effect and wind-noise algorithms aside).

We also have the interesting new technology of direct 3D ear scanning. Lantos Technologies (see the May 2012 Hearing Review Products, pgs 8-12) received FDA approval for its 3D ear scanning technology in March 2013, and GN’s Otoscan (based on 3DM technology, which Hearing Review covered in July 2012, pgs 28-34) is expected to make its debut later this year. At least two other companies are developing similar ear-scanning technology. When this technology is implemented, we may see for the first time the impact of true deep-fitting instruments on a wider scale than Phonak’s extended-wear Lyric. So a renaissance in ITE-style devices may be waiting in the wings.

Original citation for this article: Strom KE. The RIC as a disruptive technology. Hearing Review. 2014;21(6):6.

CORRECTION: As pointed out by Natan Bauman, PhD, in a subsequent Letter to the Editor, the Vivatone hearing aid was introduced in 2004, not 2008 as originally stated in this article. The editor apologizes for the error.


Hearing Aid Sales Up 2.9% in First Half of 2014


According to statistics compiled by the Hearing Industries Association (HIA), Washington, DC, second-quarter US net unit hearing aid sales increased by 3.3% for the commercial market and 10.2% for the Department of Veterans Affairs (VA) compared to the same period last year, for a total quarterly increase of 4.75% over Q2 2013. This compares favorably to flat (0.95%) first-quarter sales (0.1% increase for the commercial sector; 4.3% for the VA).

US net unit hearing aid sales increased by 4.75% in the second quarter of 2014, following relatively flat (0.95%) sales in the first quarter, perhaps due to record harsh weather. At mid-year, the hearing instrument market is up 2.9% compared to the same period last year.


When comparing mid-year 2013-2014 statistics, hearing aid net unit sales have increased by 2.9% in H1 2014. The VA, which now accounts for over one-fifth (21.3%) of hearing aid units dispensed, experienced unit increases of 7.3%, while hearing aid sales in the commercial (non-VA) sector increased by 1.75% compared to H1 2013.

Overall, the HIA statistics suggest “average-to-good growth” for an industry with historical year-on-year sales growth of 2-4%. As noted in The Hearing Review’s Q1 2014 hearing aid sales analysis, very cold weather (and high heating bills) may have dampened unit sales in the first quarter. The 2.9% first-half increase puts the industry on pace to break the 3-million unit mark by year’s end. However, how much of this gain is due to growth in Big Box retail, as opposed to private dispensing offices, is unknown.

Receiver-in-the-canal (RIC) and receiver-in-the-ear (RITE) style hearing aids continued their upward sales trajectory, increasing from 51.8% to 56.6% of the total market from mid-year 2013 to mid-year 2014 (for more details, read “RICs as a Disruptive Technology” in the June 2014 Hearing Review). A total of 58.6% of all units dispensed by the commercial sector were of the RIC/RITE style, with slightly lower percentages dispensed (49.4%) at the VA.

Behind-the-ear (BTE) devices now constitute over three-quarters (76.9%) of all the hearing aids dispensed in the United States. Similarly, more than 4 out of every 5 (80.8%) hearing aids sold during the first half of 2014 contained wireless technology (76.1% commercial sector; 97.9% VA), according to the HIA statistics.

The commercial return-for-credit (RFC) rates in the first and second quarter were 19.1% and 19.4%, or about a percentage point less than RFC rates during the same quarters last year.

Better Messages on Better Hearing: Shaping Images and Attitudes for the Benefit of Our Patient


Final Word | Hearing Review August 2014

By Dennis Van Vliet, AuD

I recently listened to a lecture by Curtis J. Alcock. Mr Alcock has a background in design and marketing, and transitioned to hearing care about 12 years ago. He is part of a family-run hearing care practice in the UK, and speaks about his beliefs on shaping the future of hearing care (also see Alcock and Douglas Beck’s cover story “Right Product; Wrong Message” in the April 2014 edition of The Hearing Review).

I found his points interesting and thought-provoking. As an example, Alcock offers that the mantra we often discuss—the idea that people wait a certain amount of time before pursuing hearing help—is counterproductive. He feels that discussing it validates the waiting—waiting for a certain amount of time, or waiting for a pain threshold to be crossed before acting. We should, instead, be discussing ways to make using hearing technology relevant to the wide range of individuals who could benefit from the help.

People adopt things when they are relevant to their lives. Apple products are a good example. Apple products are often so innovative that we don’t know that we could use them until we learn that they are relevant to our lives.

Alcock also talks about the need to change vocabulary. “Testing” hearing has a more negative air than a “hearing check-up.” “Hearing aid” falls into a disability category, while “hearing system” isn’t automatically categorized in a negative way. Discussing “stigma” reinforces the negative construct.

All of this led me to think quite a bit about how society may perceive certain aspects of others around them. I began to wonder if the stigma of hearing loss and hearing aids isn’t so deeply ingrained in our attitudes that we may have to own it, or at least accept it.

Alcock counters that attitudes are learned behavior, and we should be able to change them if we take appropriate action. Certainly, in the United States, smoking has changed from a stylish acceptable activity to unacceptable in a manner of decades. As dental care has progressed, false teeth are no longer an expected consequence of aging. Eyewear is now as much of a fashion statement as a method for correcting vision.

Can we get to the point where hearing aids, like glasses, are the symbol of a solution, rather than a problem? More importantly, when I meet someone for the first time and tell them what I do for a living, will I no longer be treated to their dramatic charade of hunching over, scowling, holding a cupped hand to their ear, and shouting “Eh?”

Delivering Clear Messages

We do have much more to work with than when many of us started in the field. Unsightly “Frankenstein knob” full-shell ITEs have been largely replaced by more discreet custom products that offer better performance and comfort. Huge BTEs that looked like fenders from a 1930s antique car are now smaller, lighter, and generally more accepted, especially with thin-tube coupling when appropriate.

Attitudes, however, do need to change, and we should take stock of the marketing messages we offer, and the language we use. For example:

Hearing: In counseling, we typically describe evaluation results in terms of a “hearing loss.” “Hearing ability” can communicate the message just as well. A narrative similar to the following introduces the facts without branding the person with a negative diagnosis:

“We see that you can detect lower pitched sounds at very soft levels. Your hearing ability is such that higher pitched sounds require somewhat louder sounds for detection. Here’s how we can improve things for you…”

The differences are subtle, but may make a difference in how the message is received.

Cognition: While we see a number of indicators that hearing loss and dementia are associated, there is less evidence that there is a direct causative link. The link may be there, but the evidence isn’t great at this point. In my mind, I’m not sure that tying dementia and hearing loss together directly is a desirable path to take because of the negative context, but we should still create a dialog to allow the connection to be made by the public.

Just as we want to talk about hearing ability, hearing health, and staying connected with life and loved ones, we probably should emphasize the role of cognitive strength in supporting those concepts. The dementia-hearing loss connection will still be made by the consumer, but in an overall positive, healthy way. Try this:

“A properly fitted hearing system will not only provide the additional hearing you need, but will assure the good cognitive strength you depend upon in difficult listening situations.”

The listener will make the connection about hearing and cognitive health with this positive message.

The Final Word? We have a track record of not appreciably changing the behavior of potential hearing aid consumers over several decades. The average age of the first-time user is still around 69 years, and many people who recognize that they have a hearing loss still do not take action. Maybe it is time for us to take action that will change attitudes rather than focusing on behavior. If attitudes change, behavior will follow.

Original citation for this article: Van Vliet D. Better messages on better hearing: Shaping images and attitudes for the benefit of our patient. Hearing Review. 2014;21(8):50.

Localization 101: Hearing Aid Factors in Localization


Tech Topic | Hearing Review September 2014

By Francis Kuk, PhD, and Petri Korhonen, MS

An introduction to localization, factors that influence localization when wearing hearing aids, and what steps can be taken to improve localization ability for hearing aid users.

Localization is the ability to tell the direction of a sound source in a 3-D space. The ability to localize sounds provides a more natural and comfortable listening experience. It is also important for safety reasons such as to avoid oncoming traffic, an approaching cyclist on a running path, or a falling object. Being able to localize also allows the listener to turn toward the sound source and use the additional visual cues to enhance communication in adverse listening conditions.

Hearing loss results in a reduction of localization ability1; however, the use of amplification may not always restore localization to a normal level. Fortunately, the application of digital signal processing and wireless transmission techniques in hearing aids has helped preserve the cues for localization.

How Do We Localize?

A sound source has different temporal, spectral, and intensity characteristics depending on its location in the 3-D space. A listener studies these characteristics (which are described as cues) originating from all directions on the horizontal and the vertical planes to identify the origin of the sound.

Horizontal plane. Localization on the horizontal plane involves comparison of the same sound received at the two ears (ie, binaural comparison for left/right) or between two surfaces of the same ear (ie, front/back).

  • Left/right: The head casts a shadow on sounds that originate from one side of the head. This head-shadow effect results in a change of intensity level (interaural level difference, or ILD) and time of arrival (interaural time difference, or ITD) of the same sounds at the opposite ear. For sounds below 1500 Hz, where the wavelengths of the sounds are greater than the dimension of the head, the head causes a significant time delay for sounds to travel from one side of the head to the other. This ITD is recognized as a cue in localization of sounds below 1500 Hz. Above 1500 Hz, where the effect of the time delays is less significant, the intensity difference of the sounds between ears, or ILD, has been listed as a cue for localization.2
  • Front/back: The pinna casts a shadow on sounds that are presented from behind the listener. Indeed, the amplitude of sounds above 2000 Hz that originate from the back of the listener is about 2-3 dB lower than sounds that originate from the front as measured at the ear canal entrance. This spectral difference is accepted as an additional cue to aid in elevation judgments and front/back discrimination.2
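The geometry behind the ITD cue can be sketched numerically. The snippet below is an illustrative aside (not from the article) using the classic Woodworth spherical-head approximation, with an assumed nominal head radius of 8.75 cm and speed of sound of 343 m/s:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (ITD), in seconds, for a
    far-field source at a given azimuth (0 = straight ahead, 90 = to one
    side), using the Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 (straight ahead) to roughly 650 microseconds at 90 degrees,
# the time-of-arrival difference the brain exploits below ~1500 Hz:
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:5.0f} us")
```

The model is a rough geometric idealization; real ITDs vary with individual head size and frequency.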

Vertical plane. Byrne and Noble1 demonstrated that sound sources striking at the pinna from different elevations result in a change in spectra of the sound, especially above 4000 Hz. This change in spectral content may provide a cue on the elevation of a sound source.

Monaural versus binaural cues. Because localization typically involves the use of two ears, people with hearing in only one ear (either aided monaurally or with single-sided deafness) should have extreme difficulty on localization tasks. However, some people in this category still report some ability to localize. Frequently, a person moves the head or body to estimate the direction of a sound. The overall loudness of the sound, and the spectral change of the sound as it diffracts around the pinna from different orientations, act as cues. This spectral cue relies on a stored image of the sound, so cognition is involved with monaural cues.3

Is Localization a Problem?

Physiological structures in the brainstem related to localization. The superior olivary complex (SOC) in the pons is the most caudal structure to receive binaural inputs. This suggests that the low brainstem is the most critical to binaural interaction.

Any lesion that affects input to the SOC could disrupt the cues for localization. Because cognitive factors (including memory, expectations, and experience) also could affect localization, it is possible that localization performance is not easily attributed to a lesion in just one structure, but to many structures within the brain. This also makes it difficult to predict localization performance from the audiogram or any single set of information.4

Prevalence and ramifications. One conclusion from almost every study is that hearing-impaired people have poorer localization ability than normal-hearing people. The use of amplification may improve localization in that it provides the needed audibility. However, it is common to find that aided localization is poorer than unaided localization when tested at the wearer’s most comfortable listening level (ie, audibility is not involved).

In general, left/right discrimination is only mildly impaired in hearing-impaired listeners; front/back discrimination is consistently impaired1 to the point that it is poorer than the unaided condition.5

These observations were replicated in our internal study at the ORCA-USA facility. Figure 1 shows the accuracy of localization for sounds presented from 12 azimuths in the unaided condition at 30 dB SL (sensation level) in 15 experienced hearing aid wearers. For ease of presentation, the results across the 12 loudspeakers were collapsed into front, back, left, and right quadrant scores. The results of two scoring criteria were reported: a “0 degree” criterion requires absolute identification, while a “30 degree” criterion allows the subject’s response to be within ±1 loudspeaker of the target sound source.

There are two clear observations. First, localization accuracy for sounds is impaired in this subject population. Even with the 30 degree criterion, subjects were only 60-70% accurate in localizing sounds from the front, left, and right. With a 0 degree criterion, it is only between 30% and 45%. Second, localization of sounds from the back was extremely poor. A score of 30% was seen with the 30 degree criterion; the percentage dropped to 20% when a 0 degree criterion was used.

Several authors have suggested that being able to focus one’s attention on a sound source in an adverse listening environment (eg, a cocktail party) may help improve speech understanding and communication.6,7 Furthermore, Bellis4 indicated that similar brainstem processing mechanisms are used for speech-in-noise and localization tasks. Bimodal studies8 (eg, one cochlear implant and one hearing aid) have demonstrated a significant correlation between localization ability and speech recognition in bimodal and bilateral cochlear implant subjects. If this can be generalized to other hearing-impaired patients, it would suggest that an improvement in localization could result in improved speech understanding in noise or communication in adverse listening conditions. It is unclear whether improving speech-in-noise performance may lead to improved localization performance. Currently, neither of these hypotheses has been validated.

Factors Affecting Aided Localization Ability


Figure 1. Unaided localization accuracy of experienced hearing aid wearers (N=15) for sounds originating from 4 quadrants at a 30 dB SL input level and scored using two criteria (0 degree and 30 degree).

Despite the use of amplification, aided localization is still poorer than that of normal hearing listeners.9 In some cases, aided localization ability may be even poorer than unaided localization when tested at the same sensation level.

This has two important implications. First, audibility is not the only issue that governs localization ability. Otherwise, ensuring audibility by testing at the same sensation level should restore localization to a normal level for all wearers. Additional means to restore localization are also needed. Second, the manner in which amplification is applied could affect the aided localization performance. This is seen in the poorer aided performance than the unaided performance.

From the device manufacturer’s perspective, it is important to understand the reasons for the poorer performance, and design better algorithms so the aided performance can be as “normal” as possible. The following are some reasons for the poorer aided performance.

1) Hearing aids cannot overcome the effects of a distorted auditory system. Hearing loss is not simply a problem of sound attenuation or loss of sensitivity; it is also a problem of sound distortion. While a properly fit hearing aid may overcome the loss of hearing sensitivity and improve localization because audibility is restored, the effect of the distortion still lingers. Such distortion may affect the integrity of the aided signal reaching the brain, rendering the effective use of the available cues impossible or suboptimal.

As the degree of hearing loss increases, the extent of the distortion will likely increase.1 However, it is not possible to predict the degree of distortion/difficulties based only on the degree of hearing loss.

2) The wearers have developed compensatory strategies. Hearing loss deprives the brain of the necessary acoustic inputs. This would lead the listeners to develop alternative compensatory strategies that do not require the “correct cues.” They may use other sensory modalities (eg, head and body movement, visual searching) to help them localize. Thus, the newly amplified signals may not be “recognizable” to the brain. This would lead to continued localization errors unless the brain is retrained to interpret the new input.

For example, Hausler et al10 reported degraded localization in people with conductive hearing loss, and that such abnormality persisted for some time after hearing thresholds were returned to normal through surgery. The new cues could be useful when the wearer has learned a new strategy to use the available cues.

3) Loss of natural cues, including:

  • Binaural cues. Normal localization depends on cues provided by both ears (and not just one ear). A person who is aided monaurally (despite hearing loss in both ears) would have significantly more localization difficulties than if he or she is aided in both ears. For example, Kobler and Rosenhall11 noted a bilateral hearing aid advantage over unilateral hearing aid use for localization. This suggests that bilateral amplification will likely yield better localization than monaural amplification. However, amplification is not guaranteed to improve over the unaided localization ability even when audibility is ensured.
  • Interaural cues from signal processing. It was mentioned previously that the normal ears utilize the interaural time difference (ITD) and the interaural level difference (ILD) cues to localize sounds presented to the sides. However, these natural interaural differences could be altered with the use of a wide dynamic range compression (WDRC) hearing aid. In the case where a sound originates from one side (say, right), the same sound would appear at a much lower intensity level at the opposite ear (left), brought about by the head-shadow effect. This natural difference in intensity level signals the location of the sound (Figure 2).

Figure 2. Inter-aural level difference in the unaided condition.

When a pair of hearing aids that provide linear processing is worn, the same gain is applied to both ears. This preserves the inter-aural level difference. A WDRC hearing aid provides less gain to the louder sound but more gain to the softer sound (the opposite ear). This will reduce the interaural level difference between ears and could affect the accuracy of sound localization to the sides of the listener. For example, Figure 3 shows that the interaural level difference may decrease from 15 dB to 10 dB from the use of fast-acting WDRC hearing aids.


Figure 3. Inter-aural level difference in a fast-acting WDRC hearing aid.
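The ILD shrinkage described above is simple gain arithmetic. As a hedged illustration (the knee point, compression ratio, and gain values below are hypothetical, not taken from any specific product), a static 2:1 compressor applied independently at each ear halves whatever portion of the ILD lies above the compression knee:

```python
def wdrc_output(input_db, knee_db=50.0, ratio=2.0, linear_gain=20.0):
    """Output level (dB SPL) of a simple static WDRC stage:
    linear gain below the knee, compressed (ratio:1) above it."""
    if input_db <= knee_db:
        return input_db + linear_gain
    return knee_db + linear_gain + (input_db - knee_db) / ratio

near_ear, far_ear = 70.0, 55.0   # head shadow creates a 15 dB unaided ILD
aided_ild = wdrc_output(near_ear) - wdrc_output(far_ear)
print(aided_ild)  # prints 7.5: independent 2:1 compression halves the ILD
```

With both ear levels above the knee, the whole 15 dB difference is compressed to 7.5 dB, which is the mechanism (with different illustrative numbers) behind the 15-to-10 dB reduction shown in Figure 3.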

The components of a hearing aid (including the DSP chip) could introduce delay to the processed signal. The impact of the delay is minimal if the extent of the delay is identical between ears. However, if the delay is different between the two ears, the time of arrival (or phase) will be different, and that may result in a disruption of the localization cues.

Drennan et al12 determined the ability of hearing-impaired listeners to localize and to identify speech in noise using phase-preserving and non-phase-preserving amplification. At the end of 16 weeks of use, speech understanding in noise and localization were slightly better with phase-preserving than with non-phase-preserving processing.

  • Pinna cues. It was mentioned previously that the pinna casts a shadow to sounds, especially the high frequency ones that originate from the back. This changes the spectrum of the sounds at the entrance of the ear canal. This spectral change is used as a cue to distinguish between front and back.

Hearing aids that destroy this pinna cue could result in front/back localization problems. Because the microphones of a BTE/RIC are typically over the top of the ear and are not shadowed by the pinna, wearers of BTE/RICs are more likely to be deprived of this pinna cue. Because the microphones of custom products (CIC/ITE/ITC) are at the entrance of the ear canal and are shadowed by the pinna, it is likely that wearers of these hearing aid styles will still retain the pinna cues. Indeed, Best et al13 compared the localization performance of 11 listeners between BTE and CIC fittings and found fewer front/back reversal errors with the CIC than the BTE. The loss of pinna cues could affect a significant number of hearing aid wearers because over 70% of hearing aids sold in the United States are BTE/RIC styles.14

  • Spectral cues. The use of occluding hearing aids reduces the magnitude of high frequency sounds that could signal a change in elevation of the input signal. Furthermore, even when the microphone picks up the high frequency sounds, the localization cues may still not be audible if the output bandwidth of the hearing aid is limited.

So, What Can Be Done to Improve Localization?


Figure 4. In-situ directivity index (DI) of an omnidirectional microphone on a BTE, of the natural ear with pinna, and of the pinna compensation algorithm microphone system used in the Dream.

One must provide the necessary acoustic cues so the impaired auditory system has as much accurate information to process as possible. Appropriate training may be necessary to retrain the altered brain to interpret the available information.

1) Bilateral hearing aid fitting. The first and most important requirement to receiving binaural cues is to wear hearing aids on both ears. In addition, the hearing aids should be very similar to avoid mismatching between devices, which can create unnecessary differences in delays.

2) Wireless hearing aids with inter-ear connectivity. One way to preserve the ILD is the use of wireless technology that allows communication between hearing aids of a bilateral pair. Recently, near-field magnetic induction (NFMI) technology has been applied in hearing aids such that the compression settings on both hearing aids of a bilateral pair are shared.15 In the Widex Dream hearing aids, a 10.6 MHz carrier frequency is used to transmit information between hearing aids of a bilateral pair 21 times per second (or <50 ms/transmission). When a sound is identified to be on one side of the listener, the gain on the opposite ear will be adjusted to preserve the ILD. This feature is called inter-ear compression (IE).

Korhonen et al16 studied the impact of inter-ear compression on trained hearing-impaired subjects’ ability to localize sounds (n=10). The comparison was made to a BTE with an omnidirectional microphone. The use of IE reduced the localization error for sounds presented from the sides by 7.3°.

3) Restoring the pinna cues. It was pointed out earlier that pinna cues are preserved in custom products, but could be compromised in BTE/RIC hearing aids that use an omnidirectional microphone. Thus, pinna cues only need to be restored in BTE/RIC products.

Figure 4 shows the in-situ directivity index of the pinna (unaided) and that of an omnidirectional microphone. It can be seen that the directivity index of the pinna alone is about 2 dB between 2 and 4 kHz. For an omnidirectional microphone, one can see that the DI is altered to as much as -5 dB at 3000 Hz. This means that a 3000 Hz sound from the back could sound 5 dB louder than the same sound presented to the front! So, if one were to restore the directivity index of the pinna, one could perform spatial filtering of the sounds that come from behind so that the microphones have the same or better DI as the pinna (ie, 2 dB or more from 2000 to 4000 Hz).

In the Dream hearing aid, the Digital Pinna feature has a DI of around 4 dB from 2000 to 4000 Hz to restore the pinna shadow. Kuk et al17 reported that the digital pinna algorithm was able to improve front/back location from 30% to 60% over an omnidirectional microphone condition.

By the same logic, one would expect a directional microphone on a BTE, in addition to providing an SNR improvement, to also improve front/back localization compared to its omnidirectional counterpart. Chung et al18 compared localization between an omnidirectional microphone and a directional microphone in 8 normal-hearing listeners and 8 hearing-impaired listeners using an ITE style hearing aid. For both groups, front/back localization performance with the directional microphone was better than with the omnidirectional microphone.

Keidser et al9 examined the localization ability of 21 hearing impaired listeners in four conditions: unaided; aided with omnidirectional; partial directional [omni in lower frequencies and hypercardioid in higher frequencies to compensate for pinna shadow]; and fully directional [hypercardioid in all channels] modes. For stimuli with sufficient mid- and high-frequency content, the fully directional mode improved front/back localization, but had a detrimental effect on left/right localization. The partially directional mode did not have a negative effect on left/right localization and also improved front/back localization. This study demonstrated that the partially directional microphone in a BTE can provide helpful cues for localization.

Van den Bogaert et al19 also showed that ITC and ITP (in-the-pinna) hearing aids have less front/back confusion than BTEs. However, BTEs with a directional microphone were able to preserve some front/back localization.

4) Use of slow-acting compression. One can preserve the natural interaural level difference by keeping linear processing in both hearing aids, which preserves the intensity differences between ears. However, the disadvantage of a linear hearing aid is that softer sounds will not be audible, and louder sounds may be too loud or may saturate the output of the hearing aid.

An alternative is to use slow-acting WDRC (SA-WDRC). This keeps the short-term envelope of the input signal intact (ie, linear processing) while the hearing aids adjust their gain according to the long-term input level. This preserves the ILD and the interaural cues. SA-WDRC also has the advantage of a more natural sound quality.20 It is noteworthy that Widex has used SA-WDRC since it introduced the Senso digital hearing aid in 1996.
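The effect can be sketched with a toy simulation (illustrative only; the smoothing constant and compressor parameters are made-up values, not any manufacturer's): because a slow-acting compressor follows a slowly smoothed level estimate, a brief level difference, such as an ILD, is amplified nearly linearly and so passes through almost unchanged.

```python
def slow_wdrc_gains(levels_db, alpha=0.001, knee=50.0, ratio=2.0, gain0=20.0):
    """Gain track of a slow-acting WDRC stage: compression is driven by a
    slowly smoothed level estimate, so short-lived level changes do not
    pull the gain around."""
    est = levels_db[0]
    gains = []
    for level in levels_db:
        est += alpha * (level - est)           # slow one-pole level tracker
        over_knee = max(est - knee, 0.0)
        gains.append(gain0 - over_knee * (1.0 - 1.0 / ratio))
    return gains

# A steady 60 dB input followed by a brief burst 10 dB louder:
levels = [60.0] * 1000 + [70.0] * 50
gains = slow_wdrc_gains(levels)
# The gain moves by less than 0.3 dB during the burst, so the momentary
# 10 dB level difference is preserved almost intact.
print(gains[999], gains[-1])
```

A fast-acting compressor (a large `alpha` in this sketch) would instead cut the gain quickly during the burst, shrinking the level difference as in Figure 3.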

5) Broad bandwidth and open fit. The use of open fitting allows the natural diffraction of sounds within the concha area and preserves any cues that are used for vertical localization.21 The use of hearing aids with a broad bandwidth (eg, Dream 4-m-CB or Dream Clearband that yields in-situ output up to 10 kHz) provides audibility of high frequency sounds that may help localization.

Unfortunately, an open-fit hearing aid also limits the maximum amount of high frequency gain to less than 40 dB, even with the use of active feedback cancellation. Thus, the use of an open-fit broad bandwidth hearing aid is only possible for individuals with up to a moderate degree of hearing loss in the high frequencies (around 60-70 dB HL at 4000 Hz).21,22

Toward Restoring Localization

In summary, bilateral wireless hearing aids of a custom style (CIC, ITE/ITC) that exchange compression parameters between ears, fitted with a large vent and a broad bandwidth, should provide the best potential to preserve localization cues. The availability of a directional microphone may further enhance front/back localization. If BTE/RICs are desired, broadband bilateral wireless devices with inter-ear features and a pinna compensation algorithm, fitted in an open mode (or with as much venting as possible), are appropriate. The addition of a directional microphone also enhances their effectiveness.

What else can be done to improve localization? Even with optimal acoustic cues available, the brain may be so accustomed to the old pattern of neural activity that the new cues initially decrease performance. What is needed are rehabilitation programs that teach wearers how to understand and utilize the new cues.

Wright and Zhang23 provided a review of the literature that suggests localization can be learned or relearned. The reviewed data indicated human adults can recalibrate, as well as refine, the use of sound-localization cues. Training regimens can be developed to enhance sound-localization performance in individuals with impaired localization abilities. This will be further discussed in a subsequent paper on the topic.

References

1. Byrne D, Noble W. Optimizing sound localization with hearing aids. Trends Ampl. 1998;3(2):51-73.

2. Blauert J. Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, Mass: The MIT Press; 1997.

3. Durlach N. Binaural signal detection: equalization and cancellation theory. In: Tobias JV, ed. Foundations of Modern Auditory Theory. New York: Academic Press; 1972:369-462.

4. Bellis T. Assessment and Management of Central Auditory Processing Disorders in the Educational Setting: From Science to Practice. Independence, Ky: Thomson Delmar Learning; 2003.

5. Vaillancourt V, Laroche C, Giguere C, Beaulieu M, Legault J. Evaluation of auditory functions for Royal Canadian Mounted Police officers. J Am Acad Audiol. 2011;22(6):313-331.

6. Arbogast T, Mason C, Kidd G. The effect of spatial separation on informational masking of speech in normal-hearing and hearing-impaired listeners. J Acoust Soc Am. 2005;117(4):2169-2180.

7. Best V, Kalluri S, McLachlan S, Valentine S, Edwards B, Carlile S. A comparison of CIC and BTE hearing aids for three-dimensional localization of speech. Int J Audiol. 2010;49(10):723-732.

8. Litovsky R, Parkinson A, Arcaroli J. Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear. 2009;30(4):419-431.

9. Keidser G, O’Brien A, Hain J, McLelland M, Yeend I. The effect of frequency-dependent microphone directionality on horizontal localization performance in hearing-aid users. Int J Audiol. 2009;48(11):789-803.

10. Hausler R, Colburn H, Marr E. Sound localization in subjects with impaired hearing. Acta Oto-Laryngol. 1983;[suppl 400]. Monograph.

11. Kobler S, Rosenhall U. Horizontal localization and speech intelligibility with bilateral and unilateral hearing aid amplification. Int J Audiol. 2002;41(7):395-400.

12. Drennan W, Gatehouse S, Howell P, Van Tasell D, Lund S. Localization and speech-identification ability of hearing impaired listeners using phase-preserving amplification. Ear Hear. 2005;26(5):461-472.

13. Best V, Marrone N, Mason C, Kidd G, Shinn-Cunningham B. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment. J Assoc Res Otolaryngol. 2009;10(1):142-149.

14. Strom KE. Hearing aid sales up 2.9% in ’12. Hearing Review. 2013;20(2):6.

15. Kuk F, Crose B, Kyhn T, Mørkebjerg M, Rank M, Nørgaard M, Föh H. Digital wireless hearing aids III: audiological benefits. Hearing Review. 2011;18(8):48-56.

16. Korhonen P, Lau C, Kuk F, Keenan D, Schumacher J. Effects of coordinated dynamic range compression and pinna compensation algorithm on horizontal localization performance in hearing-aid users. J Am Acad Audiol. In press.

17. Kuk F, Korhonen P, Lau C, Keenan D, Norgaard M. Evaluation of a pinna compensation algorithm for sound localization and speech perception in noise. Am J Audiol. 2013;22(6):84-93.

18. Chung K, Neuman E, Higgins M. Effects of in-the-ear microphone directionality on sound direction identification. J Acoust Soc Am. 2008;123(4):2264-2275.

19. Van den Bogaert T, Carette E, Wouters J. Sound source localization using hearing aids with microphones placed behind-the-ear, in-the-canal, and in-the-pinna. Int J Audiol. 2011;50(3):164-176.

20. Hansen M. Effects of multi-channel compression time constants on subjectively perceived sound quality and speech intelligibility. Ear Hear. 2002;23(4):369-380.

21. Byrne D, Sinclair S, Noble W. Open earmold fittings for improving aided auditory localization for sensorineural hearing losses with good high-frequency hearing. Ear Hear. 1998;19(1):62-71.

22. Kuk F, Baeksgaard L. Considerations in fitting hearing aids with extended bandwidth. Hearing Review. 2009;16(10):32-39.

23. Wright B, Zhang Y. A review of learning with normal and altered sound-localization cues in human adults. Int J Audiol. 2006;45[Suppl 1]:S92-98.

Original citation for this article: Kuk F, Korhonen P. Localization 101: Hearing aid factors in localization. Hearing Review. 2014;21(9):26-33.

Oticon’s BrainHearing Technology Designed to Help Brain Make Sense of Sound

Oticon’s “brain first” technology is designed to help provide better hearing with less effort by giving the brain the clearest, purest, sound signals to decode.

Oticon Inc, Somerset, NJ, has announced the release of BrainHearing™, the company’s “brain first” technology designed to provide better hearing with less effort by giving the brain the clearest, purest, sound signals to decode. BrainHearing combines four features—Speech Guard E, Spatial Sound, YouMatic, and Free Focus—that are designed to work together continuously to provide the brain with the input needed to hear more clearly, with reduced listening effort.

The four features are powered by the Inium platform, which is said to be the first sound processing platform that supports the brain’s entire process of making sense of sound.  According to Oticon, Inium’s processing modifies only the parts of the signal that the individual ear doesn’t hear well so that the unique characteristics of speech are captured and preserved. This helps to keep neural pathways engaged so the brain doesn’t have to relearn how to hear.  The result is better hearing with less effort.

“With BrainHearing, we further advance our ‘brain first’ audiological focus, first introduced in 2003,” says Oticon President Peer Lauritsen. “Rather than emphasize amplification and suppression of sounds, our audiological approach recognizes that speech understanding and comprehension are cognitive processes that happen in the brain.”

New Wireless designRITE and New Custom Styles. The benefits of BrainHearing technology are evident in Oticon’s newest wireless solutions, which combine discreet aesthetics, comfort, and excellent performance with reduced listening effort for a high level of patient satisfaction, says the company. The small, elegant, button-free designRITE solution, which disappears discreetly behind the ear, and two new wireless custom solutions are designed to deliver BrainHearing advanced audiology and wireless connectivity in the smallest form factor possible.

The wireless capabilities of the new Inium-based solutions also allow users to talk hands-free on their cell or home phone, enjoy music,  watch TV and much more with Oticon’s integrated ConnectLine system.

A new small Remote Control, roughly the size of a modern car key, can be used with all wireless Oticon instruments. It enables users to adjust the volume of the hearing instruments and to switch between programs for a more customized listening experience. The easy-to-carry Remote Control is especially beneficial for patients who have difficulty with small buttons or who use instruments without buttons, like the new designRITE.

More Choices and Complete Custom Line. The designRITE and custom wireless CIC and IIC are available in Alta, Nera, and Ria at different performance levels and price points.  Each instrument’s extra-tough, ultra-thin shell has a durable, robust exterior.  Internal workings are guarded with moisture resistance and other effective protection measures.

Oticon’s new FittingLINK wireless programming device enables hearing care professionals to achieve a reliable and personalized wireless fitting. FittingLINK connects easily and wirelessly to computers for fast and convenient programming sessions.

For more information about BrainHearing technology and the new designRITE and custom solutions,  visit www.pro.oticonusa.com

Source: Oticon Inc

Oticon’s 2014 Human Link Conferences Explore the Brain, Cognition, and Hearing Health

Left to right: Jim Kothe, MS, Oticon VP of Sales; Don Schum, PhD, VP of Audiology & Professional Relations; Nancy Palmere, Senior Marketing Manager; Marija Baranauskas, AuD, Director of Education and Training; behavioral scientist James Kane, keynote speaker; Oticon President Peer Lauritsen; Sheena Oliver, VP of Marketing and Mike Irby, Manager of Business Development.

More than 500 US hearing care professionals attended the 2014 Human Link Conference, “Hearing Care is Health Care,” hosted by Oticon Inc, Somerset, NJ. The professional knowledge-sharing event, held in two sessions on the east and west coasts, discussed the growing body of research showing the connection between hearing health and brain health.  The full agenda of presentations and workshops explored the importance of hearing in maintaining cognitive function and looked at the newest innovations in hearing care, including Oticon’s BrainHearing™ technology. 

“Patients live in a world where cognition, attention, memory, and hearing each play critical roles in listening,” said Oticon President Peer Lauritsen. “With the latest information on the connection between hearing loss and healthy aging, we see an enormous opportunity for practitioners to help patients understand that hearing loss is a serious health care issue— and to motivate them to take action.”

Don Schum, PhD, VP of Audiology & Professional Relations for Oticon, opened the conferences with an in-depth review of the state of research on hearing and brain health, examining the relationships between hearing, cognition, dementia/Alzheimer’s and hearing device use.  He shared strategies for opening communication with consumers to help them understand that speech understanding and comprehension are cognitive processes that happen in the brain.

Keynote speaker and behavioral scientist James Kane invited participants to look more closely at how the brain operates to better appreciate the ways in which loyal patient relationships are built and maintained.  Supported by more than 40 years of Harvard University research, Kane made the case that human beings have a fundamental need to be loyal and shared loyalty-building behaviors practitioners can use to develop lasting relationships with patients.

A series of interactive workshops focused on new knowledge and solutions for attracting first time users, as well as new Oticon tools designed to improve marketing efficiency and productivity. Participants were also introduced to BrainHearing, Oticon’s “brain first” technology that the company says is designed to help provide better hearing with less effort by giving the brain the clearest, purest sound signals to decode. The benefits of the technology were showcased in Oticon’s newest ultra-compact wireless solutions that combine discreet aesthetics, comfort, and excellent performance with reduced listening effort for a high level of patient satisfaction, according to the company.

For more information about Oticon’s “brain first” approach, BrainHearing™ technology and new designRITE and custom solutions, visit:  http://www.pro.oticonusa.com

Siemens: New Binax System Outperforms Normal Hearing in Challenging Environments


At this week’s EUHA Congress held in Hannover, Germany, Siemens Hearing Instruments introduced its new binax hearing aid platform that reportedly provides hearing aid wearers with better speech recognition in cocktail-party situations than people with normal hearing.

Two independent studies, the results of which are summarized in the October edition of The Hearing Review by Kamkar-Parsi et al, show that speech reception thresholds (SRTs) improved by up to 2.9 dB (a 25% advantage in speech understanding) for binax wearers compared to people with normal hearing.

The binax platform builds on the e2e wireless™ technology that Siemens introduced a decade ago (see the February 2014 article “A History of e2e Wireless Technology” by Rebecca Herbig et al). Now in its third generation, e2e wireless 3.0 enables real-time exchange of audio signals among four microphones (two on each hearing aid in a bilateral fitting). Each hearing aid processes this audio input, creating a virtual 8-microphone network designed to deliver heightened sensitivity to the acoustic environment, or High Definition Sound Resolution (HDSR).

HDSR is said to provide unprecedented localization and directionality, emulating the benefits of natural binaural hearing: better clarity, richer sound, and exceptional speech recognition.

Importantly, binax provides this to the user automatically, without compromising spatial perception or battery life. At the EUHA convention, the Siemens booth featured a demonstration in which a KEMAR manikin was situated within a soundfield array of loudspeakers; attendees could hear the difference and view the system’s energy consumption, which varied from about 1.3 mA to 1.6 mA depending on whether the new binaural system was disengaged or engaged.

According to a Siemens press release, key binaural features deliver practical benefits for the wearer, including:

Narrow Directionality: Focuses more precisely on the speech source at the front of the wearer by automatically narrowing the beam of the directional microphones. This helps wearers better recognize speech in demanding listening situations while maintaining spatial awareness.

Spatial Speech Focus: Highlights dominant speech from any direction, while suppressing background noise. It’s automatically activated when the wearer is in a car and does not affect spatial perception.

eWindScreen binaural: When in wind, strategically transmits audio signals from the instrument with better sound quality to the other one, without compromising spatial perception or speech intelligibility.

Other features and new devices launched by Siemens at EUHA:

touchControl App. The touchControl App is designed to turn virtually any Android or Apple smartphone into a hearing aid remote control. The app works with all binax hearing aids and does not require any intermediary device. Wearers can easily adjust volume, fine-tune treble, or select a listening program. The touchControl App is a free download from the App Store™ and Google Play™.

easyTek™ wireless remote/streamer. The new easyTek is the successor to Siemens’ award-winning miniTek, and is available for Pure® and Carat binax hearing aids. easyTek is only 2 inches in diameter and weighs less than an ounce, yet Siemens says its performance is superior to other streaming devices in its class. A single button controls streaming, answers phone calls, and connects to virtually any external audio source. easyTek works “out of the box” and seamlessly pairs with the user’s hearing aids to provide added functionality and Bluetooth streaming. It works with both iPhone and Android devices.

easyTek App. The easyTek App transforms Android phones and iPhones into a hearing aid control center. The app makes the most of binax technology via the Spatial Configurator (SC) feature. Using the SC, wearers can extend easyTek’s functionality by manually adjusting the direction and span of the hearing aids’ microphones from their smartphone. This gives them instant control over where, and to whom, they listen, regardless of whether the sound source comes from the left, right, front, or behind. This allows a wearer, for example, to focus on the conversation of the person next to him at dinner, even if distracting, more dominant speech is coming from another direction. The easyTek App is a free download via the App Store and Google Play.

Pure, Carat, and Ace™ RICs will be the first hearing aids introduced on the binax platform.

For further information on binax, please visit http://usa.siemens.com/binax.

Phonak Launches Phonak Audéo V RIC Hearing Aid Line Based on New Venture Platform


The Phonak Audéo V family.

Phonak has launched a new processing platform that reportedly features twice the processing power of its previous instruments while consuming up to 30% less power. The new hearing aid is designed to provide a new level of performance in different listening environments, a seamless listening experience via easy operation, and convenient phone use. The new Venture platform notably features the Phonak Audéo V RIC line, along with three other new styles, all offered in four performance levels.

“Clear speech understanding in daily life situations: This is what people with hearing loss expect from a hearing solution,” says Phonak’s new Group Vice-president Martin Grieder. “As simple as this may be in quiet surroundings, it remains challenging on the move or in noisy locations. This is where we have put the focus in developing our new platform and our latest generation of hearing aids. Whether in the car, on the phone, or in a restaurant, the new Audéo V RIC hearing aids offer precise performance while seamlessly adapting from one soundscape to another.”

Seamless listening experience. According to Phonak, the Audéo V runs on AutoSense OS, the “central brain” of the new Phonak hearing aids. It captures and analyzes incoming sound and can draw on more than 200 settings to match the sound environment exactly and to seamlessly provide the best setting for that environment—all without requiring any manual interaction.


Phonak Audéo V RIC.

For example, when driving in a noisy car, Audéo V reduces broadband noise to create a stable listening environment (Speech in Car program). When listening to music at home or at a concert, Audéo V provides an extended dynamic range, slow compression speed, and increased gain for a richer music experience (Music program). When chatting with friends in a crowded restaurant, the new device zooms in on a single voice, improving speech intelligibility (Speech in Loud Noise program).

The Phonak Audéo V RIC hearing aid family is available in four newly designed styles, all wireless, at four performance levels. All Phonak Audéo V RICs come with Binaural VoiceStream Technology™ and a push-button. The redesigned housings are reinforced with high-tech composite materials for extra durability.

Great performance, maximum convenience. The enhanced Wireless Communication Portfolio keeps Phonak Audéo V hearing aid users connected on the phone, watching TV, or in a noisy environment. This range of accessories offers an additional boost in performance with minimal effort.

  •  Phonak EasyCall connects any Phonak wireless hearing aid to any Bluetooth enabled cell phone, including non-smartphones and older models. Permanently attached to the back of the phone, it streams the conversation directly to both hearing aids in unmatched sound quality.
  • Phonak ComPilot Air II, a handy clip-on streamer and remote control, allows for easy control of all Bluetooth-enabled audio devices such as cell phones, TVs, tablets, and computers. Its neckloop-free design and stereo sound quality make the streamer a perfect companion for any multimedia user who wears a Phonak hearing aid.
  • The Phonak RemoteControl App turns any smartphone into an advanced remote control for Phonak Venture hearing aids. In combination with Phonak ComPilot Air II or ComPilot II, the App enables direct selection of hearing programs, audio sources or the listening situation as well as individual left or right volume control.

The Phonak Audéo V hearing aid family is now available in the United States through hearing care professionals. For more information visit: www.phonakpro.com/audeo-v

Source: Phonak


New Binaural Strategies for Enhanced Hearing


Tech Topic | October 2014 Hearing Review

 

By Homayoun Kamkar-Parsi, PhD, Eghart Fischer, Dipl-Ing, and Marc Aubreville, Dipl-Ing

This paper describes the new binaural directional features of binax, Siemens’ next generation of BestSound Technology. In addition to transmitting control and status data between hearing aids, the binax hearing aid system exchanges audio signals between left and right hearing aids. Introduced in this paper are two key features of binax: 1) Narrow Directionality, an enhanced binaural beamforming algorithm designed for very difficult listening situations, and 2) Spatial SpeechFocus, a self-steering binaural beamforming algorithm designed especially for situations with talkers from side and back directions in noisy environments (eg, in cars or walking down a sidewalk). The advantages of automatic activation and control of the new algorithms in select channels and their exceptionally low power consumption are also reviewed.

Binaural processing can significantly improve speech understanding in background noise. At Siemens, our objective was to develop binaural algorithms that address the most challenging listening situations for those with hearing loss: difficult, noisy listening environments like restaurants and cocktail parties, and hearing in wind.

Two algorithms introduced in this article are Narrow Directionality and Spatial SpeechFocus. These advanced binaural beamforming algorithms use Siemens’ binaural wireless audio exchange technology (e2e wireless 3.0) and may improve speech understanding in noisy and adverse acoustic environments. They may also provide a more efficient solution to the cocktail-party effect than bilaterally fitted conventional monaural differential microphone systems.

Narrow Directionality Processing

Narrow Directionality is designed to enhance the speech signal when noise is coming from all around the listener, and this “noise” is itself made up of other people talking. It creates a narrow beam toward the front so the wearer can listen easily to any distinct speaker by turning his or her head toward that speaker.

Narrow Directionality improves the speech signal from the speaker in two ways simultaneously: 1) by quickly reducing competing speech signals outside the beam’s angular range, and 2) by boosting the level of the speaker’s signal within the beam (ie, “focusing” on the speaker). Additionally, the system has an automatic adaptive control that smoothly adjusts directivity in selected channels from a wider to a narrower beam as the SNR in the environment deteriorates. This ensures an optimal listening experience in all kinds of noisy environments while preserving spatial cues.

As illustrated in Figure 1, Siemens’ binaural processing comprises three essential components: the binaural beamforming, the binaural noise reduction gain, and the head movement compensation module. These components work together to achieve the Narrow Directionality effect.

Binaural Beamforming

[Click on figures to enlarge.] Figure 1. binax Narrow Directionality: Binaural processing system composed of three components: Binaural beamforming, binaural noise reduction gain, and a head movement compensation module.

This is a new kind of binaural beamforming that incorporates the head shadowing effect. It takes into account the contralateral wireless binaural signal as shown in Figure 1. For each side, the binaural beamformer is designed as follows: it takes as input the local signal, which is the monaural directional signal, and the contralateral signal, which is the monaural directional signal transmitted from the opposing hearing instrument (ie, via binaural wireless link e2e wireless 3.0). It should be noted that the monaural directional signal (from the local side or the contralateral side) is already an enhanced signal with reduced noise from the back hemisphere.

The output of the beamformer is generated by linearly adding the weighted local and contralateral signals. The weighting scheme, a crucial part of the overall design, aims to provide an output signal with maximum lateral interference cancellation while leaving the frontal speaker untouched.

How, then, should the weightings be optimized to achieve this goal? Taking the left hearing instrument as the reference, imagine the following example: a target speaker is located at 0° in front of the listener, who is fitted with binaural hearing instruments. The local and contralateral (ie, the transmitted signal from the right hearing instrument) signals will therefore arrive at both ears at the same time without any head shadowing effect. In other words, both signals will have approximately the same power and phase. However, for a lateral interfering noise (eg, at 45°), the local and contralateral signals will differ due to head shadowing and the interaural time difference; the signals will have power and phase (or time) differences. Therefore, given a target at 0° in the presence of an interfering lateral noise or a competing talker at 45°, the local signal power will be higher than the contralateral signal power due to the interfering noise. This also implies that the contralateral signal is the one less affected by the noise, due to head shadowing.

Having this example in mind, we designed an algorithm to derive the optimum weighting with the following criteria:

  1. The sum of the weighted local and the weighted contralateral input signals should always produce an output signal (to each ear) with minimum power with respect to the original local and contralateral signal powers. This criterion ensures that lateral interference is attenuated. Importantly, the weightings are also adaptive and are updated within milliseconds to closely follow fast changes in the noisy environment.
  2. The weights should always sum to 1. This ensures that the target signal at 0° remains untouched.
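As an illustrative sketch only (not Siemens' actual implementation), criteria 1 and 2 can be realized as an adaptive, sum-to-one two-channel combiner that minimizes output power in each frequency band; all function names and the step size here are assumptions:

```python
import numpy as np

def binaural_combine(local, contra, mu=0.05, eps=1e-8):
    """Illustrative sum-to-one adaptive combiner for one frequency band.

    The weights w and (1 - w) always sum to 1, so a 0-degree target
    (identical in both inputs) passes untouched (criterion 2), while the
    output power is driven to a minimum to cancel lateral interference
    (criterion 1).
    """
    w = 0.5                              # start with an equal mix
    out = np.empty_like(local)
    for n in range(len(local)):
        y = w * local[n] + (1.0 - w) * contra[n]
        out[n] = y
        # normalized gradient step that lowers |y|^2 with respect to w
        grad = np.real(y * np.conj(local[n] - contra[n]))
        norm = np.abs(local[n] - contra[n]) ** 2 + eps
        w = float(np.clip(w - mu * grad / norm, 0.0, 1.0))
    return out, w
```

When both inputs carry the same frontal target, the output equals the input regardless of the weight; when one side also carries lateral interference, the weight adapts toward the cleaner channel to suppress it.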

Applying criteria 1 and 2 above, the new binaural beamformer creates a narrow beam toward the front. The result is the Narrow Directionality output characteristic (the combination of binaural beamforming and binaural noise reduction), which is considerably narrower than that of monaural directionality (Figure 2).

Binaural Noise Reduction

Figure 2a-c. Narrow Directionality output signal characteristic. a) Monaural directional output characteristic; b) Binaural beamforming output characteristic; c) binax Narrow Directionality: Binaural beamforming combined with Binaural Noise Reduction.

To further enhance the output signal from our binaural beamformer, a new adaptive binaural noise reduction gain (fully integrated within the noise reduction unit) was developed (Figure 2). This noise reduction gain can be interpreted as a binaural Wiener-based gain computed using the local and the contralateral signals as inputs.

It was designed to have several specific properties. It attenuates lateral interfering speakers coming from outside the front target angular range (±10°). Therefore, a competing speaker, as close as 45°, is further attenuated (ie, by applying gain below 1—typical Wiener-based gain attenuation). However, if there is a front target speaker present in the front angular range, the front speaker is moderately amplified (or boosted) by applying a gain above 1 (which is not a typical Wiener-based gain property).

Narrow Directionality then gives hearing aid wearers the perception that they are focusing on the person they are facing (like a magnifying glass), as illustrated in Figure 2c. It should be noted that the adaptation of the binaural noise reduction gain is fast enough (within milliseconds) to rapidly amplify or attenuate depending on the acoustic situation—without background noise increasing.
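The gain rule described above can be sketched as follows; the boost amount, gain floor, and function name are illustrative assumptions, not Siemens' actual parameters:

```python
def binaural_nr_gain(snr_linear, front_target_present, boost=1.26, gain_floor=0.1):
    """Sketch of the binaural Wiener-style gain rule described above.

    Lateral interference receives the classic Wiener attenuation
    snr / (1 + snr), which is always below 1; a talker detected inside
    the frontal +/-10 degree range instead receives a modest boost
    above 1 (about +2 dB here, an illustrative value).
    """
    if front_target_present:
        return boost                     # moderate amplification of the front talker
    wiener = snr_linear / (1.0 + snr_linear)
    return max(wiener, gain_floor)       # floor limits processing artifacts
```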

Head Movement Compensation

Narrow Directionality also includes a head movement compensation module to avoid any distortion. This is necessary to ensure a normal, comfortable conversation without requiring the wearer to always directly face the speaker.

Head movements are a part of normal behavior during a conversation, and they usually occur very quickly. Plus or minus 10 degrees can be considered the normal range of regular head movements by either the speaker or the listener. So, if the target signal is at +10°, the head compensation module modifies the incoming binaural signal at +10° and brings it back to 0° (more specifically, the phase and level of the contralateral signal are re-adjusted to match the local signal). This allows the binaural beamforming and the binaural noise reduction to behave the same way; that is, the target signal is “seen” at 0°, which lies within the narrow frontal beam of Narrow Directionality. As a result, the target signal is still enhanced rather than attenuated during head movements.
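A minimal per-band sketch of such re-alignment, assuming access to the band-split complex signals (this is not the actual Siemens module, and the function name is hypothetical):

```python
import numpy as np

def align_contra_to_local(local, contra):
    """Sketch: remove the average interaural level and phase offset.

    The smoothed cross-spectrum gives the phase offset of the dominant
    (target) component, and the RMS ratio gives its level offset;
    removing both makes a talker at +/-10 degrees look like 0 degrees
    to the binaural beamformer.
    """
    cross = np.mean(local * np.conj(contra))          # smoothed cross-spectrum
    phase_offset = np.angle(cross)
    level_ratio = np.sqrt(np.mean(np.abs(local) ** 2)
                          / (np.mean(np.abs(contra) ** 2) + 1e-12))
    return contra * level_ratio * np.exp(1j * phase_offset)
```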

Automatic Control of Narrow Directionality

The goal of this algorithm is to estimate the complexity of the listening situation, and seamlessly introduce more or less directionality when needed. For this, multiple criteria are evaluated.

The acoustic complexity of the hearing aid user’s listening environment is usually linked with background noise level. A certain level of background noise, optimized for various noisy environments, is needed to activate binaural processing. This threshold is higher than the respective noise level expected for monaural processing. If this binaural threshold is exceeded, the binaural audio transmission is enabled.

Figure 3. Frequency-dependent activation of binaural Narrow Directionality effect (eg, in a cocktail party situation).

Once the audio signal from the contralateral hearing instrument is available, the hearing instrument has far more possibilities to analyze the scene. Using a combination of both ipsi- and contra-lateral metrics, the effect of the beamformer is restricted to situations and frequency bands where it is useful, and is adjusted to specific frequencies, depending on the background noise level (Figure 3).

For low noise levels, the effect is kept to a minimum, whereas for medium noise levels it is increased, and it reaches full directionality for high noise levels. This has the important advantage of keeping spatial orientation and natural characteristics of sound to a maximum in situations with medium noise, and in situations where most of the noise is contained in the lowest frequencies.
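This level-dependent steering can be sketched as a simple per-band ramp; the threshold levels and function name here are illustrative placeholders, not Siemens' actual activation points:

```python
def directionality_strength(noise_level_db, low_db=55.0, high_db=75.0):
    """Map a band's background noise level to a beamformer strength of 0..1.

    Below low_db the binaural effect stays at its minimum, above high_db
    it reaches full Narrow Directionality, and in between it is blended
    smoothly. Applied per frequency band, this keeps low-frequency-
    dominated noise from triggering full directionality in every band.
    """
    if noise_level_db <= low_db:
        return 0.0
    if noise_level_db >= high_db:
        return 1.0
    return (noise_level_db - low_db) / (high_db - low_db)
```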

It should be noted that the classification of the acoustic situation surrounding the hearing aid wearer is also taken into account. For example, there may be some situations (eg, enjoying loud music) that should not activate a narrow beamformer in an automatic program.

Real-World Efficacy

The hearing instrument wearer should need to focus only on audibility and listening comfort in every situation. This is why seamless adaptation as the acoustic situation changes is so important. Any control that acts too quickly is prone to misdetections, whereas any control that acts too slowly is noticeable, and therefore irritating. If situations change quickly, the automatic steering has to adapt quickly; if the situation changes gradually, the automatic steering should also adapt gradually. The best possible reaction from a wearer is, “I did not notice any automatic adjustments of the hearing instrument—it always seemed just right.” The maximum user benefit is reached when the directionality and noise reduction are optimized so that the overall perception is natural and the processing goes unnoticed.

The efficacy of the Narrow Directionality algorithm for speech recognition in hearing-impaired individuals has been studied in clinical trials at two independent sites.1 These behavioral findings are very encouraging and align with the SNR advantages expected from the algorithm design.

Advanced Beamforming in 360° and Spatial SpeechFocus

The binaural audio transmission introduced with the binax platform enables not only beamforming for situations where the target speaker is in the front, but also allows for beamforming to the side for situations like walking or sitting side-by-side. Until now, the best one could do when a target speaker is located to the side was to select the omnidirectional mode. However, in these situations the interfering sources (eg, street noise) often come from other directions, and omnidirectional processing cannot suppress the undesired noise.

This is where Spatial SpeechFocus, introduced in the binax platform, comes into play. By suppressing noise from one side, and enhancing the target signal from the other side on both ears, this technology can increase listening comfort, as well as speech understanding.

The algorithm works on the same fundamental principles as human hearing. When sound comes from one side of the head, the signals at the two ears differ in two major ways. First, as sound propagates, it arrives earlier at the ear closer to the source; this delay is described as the interaural time difference (ITD). If the sound comes exactly from the side of the person (90°), this time difference is approximately 0.7 milliseconds. It results in a phase difference between the respective signals, which can be exploited by a state-of-the-art differential beamformer.2 This phase difference is most evident for frequencies below approximately 750 Hz.
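As a rough illustration, the ~0.7 ms figure follows from the Woodworth spherical-head approximation of ITD. The head radius and speed of sound below are typical textbook values, not values taken from the article.

```python
import math

# Illustrative ITD calculation using the Woodworth spherical-head
# approximation for a distant source: ITD = (r/c) * (theta + sin(theta)).

HEAD_RADIUS_M = 0.0875   # ~8.75 cm average head radius (textbook value)
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a far-field source at the given azimuth (0 = front, 90 = side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# For a source directly to the side (90 degrees), this gives roughly 0.66 ms,
# consistent with the ~0.7 ms figure cited in the text.
print(round(itd_seconds(90.0) * 1000, 2))  # ITD in milliseconds
```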

A second effect is the interaural level difference (ILD). As sound waves travel past the head, low-frequency waves are diffracted around it, and hence little attenuation occurs. For higher frequencies, however, there is significant attenuation that can be detected and used to determine the origin of a sound. This works even for high-frequency noises, albeit with less precision. In the Spatial SpeechFocus algorithm, a Wiener-filter-based approach is used to suppress signals coming from the undesired side.
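A minimal sketch of the Wiener-filter idea follows; the actual algorithm is far more elaborate. The classic per-bin gain is G = SNR/(SNR + 1), which attenuates time-frequency bins dominated by the undesired side. The gain floor is an assumption added for illustration (a common safeguard against musical-noise artifacts), not a detail from the article.

```python
# Minimal sketch of a Wiener-filter gain rule for one time-frequency bin.
# The gain floor is an illustrative assumption, not from the article.

def wiener_gain(desired_power: float, undesired_power: float,
                floor: float = 0.1) -> float:
    """Attenuation gain in [floor, 1]: G = SNR / (SNR + 1).

    desired_power:   estimated power of the target (preferred-side) signal
    undesired_power: estimated power of the signal from the undesired side
    """
    snr = desired_power / max(undesired_power, 1e-12)  # avoid divide-by-zero
    return max(snr / (snr + 1.0), floor)

# Bins dominated by the target pass nearly unchanged; bins dominated by
# the undesired side are attenuated down to the floor.
```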

Figure 4a-b. Polar plots for left and right ear for a) omnidirectional mode and b) Spatial SpeechFocus to the left side, f = 2 kHz, measured on KEMAR in a low-reverberation room. The interaural level difference, represented by the shaded area, remains unchanged in both modes.

Both ITD and ILD are utilized to create a powerful beamforming algorithm, the effect of which can be observed in the polar plots shown in Figure 4a-b. Depending on the frequency and room acoustics, the attenuation is approximately 10 dB. The major advantage over merely copying the signal from the preferred ear to the other side is that spatial cues are preserved. That is, the user can still localize sounds and retains a natural spatial impression. The interaural level differences—and thus the localization of the sources—remain untouched.

This beamformer can be controlled manually, using a remote control application (Spatial Configurator), but it also can be controlled automatically. For the latter, features that correlate with the SNR of speech are calculated independently in both hearing aids. These features represent the probability of speech from the front, the back, and one of the sides. By exchanging this information and combining the data, it is possible to determine from which direction a speech signal originates.

This extends Siemens’ SpeechFocus algorithm (designed to automatically focus to the back when a talker is located there) so that it can now also focus to the left or right, depending on the direction from which the target signal originates. If speech is coming from both sides, or in quiet, an omnidirectional microphone mode is chosen (Figure 4a). The switching is performed synchronously on both ears, using a smooth transition. In all situations, the speaker(s) remain correctly localized because the binaural cues are maintained to a large extent.
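The direction-selection logic described above might be sketched as follows. The feature names, the averaging rule, and the decision threshold are all hypothetical; the article does not specify the actual rule, only that per-direction speech probabilities are computed in each instrument, exchanged, and combined.

```python
# Hypothetical illustration of combining per-instrument speech-probability
# features to steer the beamformer. Names and threshold are invented.

def steer(probs_left: dict, probs_right: dict, threshold: float = 0.6) -> str:
    """Pick a focus direction from probabilities exchanged between the two aids.

    Each dict maps 'front'/'back'/'left'/'right' to a speech probability
    estimated independently in that hearing instrument.
    """
    # Combine the two instruments' estimates (here: a simple average).
    combined = {d: 0.5 * (probs_left[d] + probs_right[d])
                for d in ("front", "back", "left", "right")}
    direction, p = max(combined.items(), key=lambda kv: kv[1])
    # If no single direction dominates (speech from both sides, or quiet),
    # fall back to omnidirectional, as the text describes.
    return direction if p >= threshold else "omni"
```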

Energy Efficiency

When sophisticated algorithms run in tiny hearing instruments powered by even smaller batteries, it is critical that new features be energy efficient. Compared to the previous micon platform (which did not have bilateral audio exchange), binax battery consumption remains the same for the normal microphone modes. More importantly, when audio streaming is active and the new binaural features engage automatically, they increase battery consumption by only 200 µA or less, from 1.3 mA to still below 1.5 mA.

This is considerably more efficient than other products on the market using similar binaural processing. In practical terms, assuming a wearer uses the hearing aids 12 hours/day and binaural audio streaming for 2 hours/day, overall battery life would decrease by only 2.5%—a difference so small that it is unlikely to be noticed by the hearing aid wearer.
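The 2.5% figure can be verified with simple arithmetic, assuming battery life scales inversely with the average current draw:

```python
# Worked check of the battery-life claim: 12 h/day of use, with 2 h/day of
# binaural streaming drawing 1.5 mA versus the 1.3 mA baseline.

BASELINE_MA = 1.3
STREAMING_MA = 1.5
HOURS_PER_DAY = 12.0
STREAMING_HOURS = 2.0

# Average daily current draw with streaming:
avg_ma = ((HOURS_PER_DAY - STREAMING_HOURS) * BASELINE_MA
          + STREAMING_HOURS * STREAMING_MA) / HOURS_PER_DAY

# Battery life scales inversely with average current draw:
life_decrease = 1.0 - BASELINE_MA / avg_ma
print(f"{life_decrease:.1%}")  # 2.5%
```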

Summary

This paper reviewed the new binaural directional features of Siemens’ binax platform. Narrow Directionality is an enhanced binaural beamforming algorithm, designed especially for very difficult listening situations with multiple talkers in the background. It provides a narrow acoustic focus in the “looking direction” of the wearer, and thus enables the hearing aid user to understand the preferred talker, even with several other people talking in close proximity. What also makes it unique is its smooth, automatic, situation-dependent activation and deactivation, and the high-resolution control of its strength, which depends on the acoustic conditions in each frequency band separately.

Spatial SpeechFocus is a self-steering, binaural, beamforming algorithm especially useful for situations when speakers are talking from the left or right side in noisy environments. It is automatically activated when the presence of speech from one side is detected in the presence of background noise. Like the human ear, it uses interaural phase and level differences for maintaining a natural spatially correct sound impression.

Whether it’s Narrow Directionality for listening to a talker from the front, or Spatial SpeechFocus for understanding speech from the sides or the back, we now have effective binaural algorithms that operate automatically to assist hearing aid wearers in many different types of complex listening environments—with extremely low power consumption.

References

1. Powers TA, Froehlich M. Clinical results with a new wireless binaural directional hearing system. Hearing Review. 2014. In press.

2. Elko GW, Nguyen Pong A-T. A simple adaptive first-order differential microphone. Paper presented at: IEEE AASP Workshop on Applications of Signal Processing to Audio and Acoustics. New Paltz, NY; October 15-18, 1995.

CORRESPONDENCE can be addressed to Thomas Powers, PhD, at: tpowers@siemens.com

Original citation for this article: Kamkar-Parsi H, Fischer E, Aubreville M. New binaural strategies for enhanced hearing. Hearing Review. 2014;21(10):42-45.

 

Microson Expanding in Latin America; Launches Two New Devices at EUHA 2014


Microson, Barcelona, Spain, part of the GAES Group and said to be Spain’s only hearing aid manufacturer (est 1958), launched two new products at the recent EUHA Congress in Hannover, Germany.

The M34 Digital FC Open and the M4 RIC Premium are designed to improve the user experience with what the company calls the most advanced technology on the market. According to Microson, the M34 Digital FC Open includes all the benefits of “One adaptation,” including a reduced occlusion effect and natural sound quality. The M4 RIC Premium is compatible with HP (high-power) receivers and comes in metallic finishes in different colors. These characteristics, says Microson, ensure high quality, long service life, and better sealing.

Microson reports that it has marketed advanced hearing solutions for 60+ years, combining careful production (100% manufactured in Spain) with research and design. With an investment of close to 1 million euros over the past 4 years, the company reports that it exports 35% of its production to 30 countries, mainly in Europe, the Middle East, and Latin America, including Chile, Portugal, Argentina, Brazil, Mexico, Peru, Paraguay, and Bolivia. After its recent entry into the Ecuadorian market, and in line with its internationalization strategy, the company expects to keep seeking new alliances in Latin American countries.

Source: Microson

 

FDA Approves MED-EL’s Sonnet Audio Processor and Software


MED-EL USA has announced the FDA approval of its new SONNET behind-the-ear audio processor for cochlear implants, which is reportedly water-resistant, lightweight, and tamper-proof. It is programmed with the company’s MAESTRO 6.0 software, which was also recently approved by the FDA. MED-EL reports that the SONNET Audio Processor will work with all of the cochlear implants it has released over the last 20 years.

According to the announcement, the SONNET audio processor, which will be commercially available in Spring 2015, was designed to be splash-proof and sweat-proof so active users don’t need to worry about problems with water damage. It will be available in a range of color combinations so recipients can mix and match the colors of the audio processor parts to create a customized style.

“SONNET is compatible with all of MED-EL’s multichannel cochlear implants, which have been consistently designed to access future-ready technology,” said Ray Gamble, president and CEO at MED-EL North America. “SONNET offers the very latest hearing technology available today to patients who have received MED-EL implants over the past 20 years.”

The company says the SONNET provides up to 60 hours of battery life, and uses two disposable 675 zinc-air batteries. The locking battery cover is tamper-proof, and the reinforced coil cable with a locking connection to the processor offers durability, making it a good option for children.

For more information about the SONNET audio processor and MAESTRO System Software 6.0, visit the MED-EL website.

Source: MED-EL

 

Unitron Introduces North Platform; New Approach to the Variability of Conversation


Unitron, Kitchener, Ontario, Canada, has announced the launch of North™, a new sound processing platform that reportedly can distinguish between different types of conversation.

According to Unitron, North’s SoundNav™ technology automatically identifies and classifies seven distinct environments, four focused specifically on conversation. (To learn more about the North platform, visit www.unitron.com/north)

“Conversations don’t just occur in quiet or in noise, and background noise can vary greatly depending on whether you are in small groups, at large events, in the car, or on the street,” said Don Hayes, PhD, director of clinical research at Unitron. “The new platform technologies work in harmony so patients automatically experience the best speech understanding—along with natural sound quality—in the widest variety of situations.”

Concurrent with the North launch, Unitron introduces a new generation of its popular receiver-in-canal (RIC) Moxi™ hearing instruments. The all-new Moxi Fit™ features a fluid new look, giving patients the perfect combination of style and functionality with a 312 battery, push button, and telecoil. Moxi Fit joins Moxi Kiss™ and Moxi Dura™, all available in five technology levels and all built on the North platform.

The new Moxi hearing instruments on the North platform are fully integrated with Unitron’s industry-leading Flex:trial™ program, which lets hearing care professionals send patients home with free trial hearing instruments, and Flex:upgrade™, which facilitates in-clinic technology upgrades as hearing needs change.

Unitron says the combination of North, Moxi, and Flex becomes an even more compelling solution with the introduction of Log It All™, an industry-first data logging feature that captures and displays a patient’s experiences with their hearing instruments across seven environments, regardless of technology level. This gives hearing care professionals the ability to measure the performance of hearing instruments in everyday use—as well as during trial and upgrade situations—and to explain the benefits of different technology levels in a personal way.

“North, Log It All, and Flex work together to empower hearing healthcare professionals to offer better listening with a focus on conversations, better insights into how the technology is performing in the real world, and risk-free hearing instrument trial and upgrade solutions,” said Unitron President Jan Metzdorff. “These are transformative innovations that will help to continue to build trust, increase patient confidence and help foster long-term loyalty between patients and their hearing healthcare providers.”

Source: Unitron

 
Letters: Composer Richard Einhorn Comments on “Less May Be More for Music”


Dear Editor:

Marshall Chasin’s provocative comments in his Back to Basics article, “Less May Be More for Music” (May 2015 Hearing Review), confirm best practices in the professional audio industry. When I was hired by CBS Masterworks, one of the first things that the great classical record producer Andrew Kazdin told me was that the ideal of professional audio recording equipment was “a copper wire with gain.” What this meant was that electronic sound reproduction should be as flat and distortion-free as possible. Heavy signal processing of any sort directly contradicts that ideal. In reality, of course, sometimes elaborate compression or filtering was necessary, and we worked extremely hard to disguise it.

In my personal experience today listening to high-quality audio equipment with a very serious hearing loss—and also in working with others with even worse hearing than mine—the difference made by non-processed (or minimally processed) audio reproduction can be perceived, and such reproduction is very often preferred over the extensive processing that occurs when streaming recorded music to hearing aids. This is true for any genre of music and is quite obvious with a simple test. First, listen via hearing aids and Bluetooth streaming to recorded music on an iPhone; then remove the hearing aids and listen to the exact same music over a good pair of wired earphones. Good wireless sound is also possible with moderately priced consumer earphones that use Kleer technology. Again, it would be useful to compare that experience with music streamed to hearing aids; the difference is often likely to be dramatic.

Obviously, some people with especially severe hearing losses may not be able to perceive much difference between listening to music over good earphones and over hearing aids—but in informal tests, I have seen many who can. It might be possible to refine the experience for some users with gentle equalization and compression. However, I have found that nothing close to the significant alterations of frequency response and dynamic range that hearing aids impose is necessary.

I recognize that there are energy and design issues that place constraints on sound quality in hearing aids. However, in addition to audibility, comprehension, and comfort, one important issue for music is enjoyment, and my informal tests confirm this approach is quite enjoyable—again not for every severe loss, but for more than I might have expected. So I hope for a day when most, if not all, the signal processing in hearing aids can be bypassed and I can simply listen to my music streamed flat, with ample bass. As the Beach Boys once sang, “Wouldn’t it be nice?”

—Richard Einhorn

Editor’s note: Richard Einhorn is an award-winning composer, record producer, recording engineer, and hearing loss advocate. Voices of Light, his oratorio with silent film, has been performed all over the world. His recording with Yo-Yo Ma performing the Bach Suites won a Grammy Award. Einhorn’s advocacy for improved technologies for hearing loss has been featured in the New York Times, The Washington Post, and NPR.

Also see Einhorn’s article in the July 2014 edition of The Hearing Review, “Using Professional Audio Techniques for Music Practice with Hearing Loss: A Thought Experiment,” as well as film composer Jeff Rona’s viewpoint on the same subject.
