Vanderbilt University Study On IEMs and Hearing Protection
Summary by Jeremy Federman, Vanderbilt University
In-ear monitors were found not to inherently provide hearing protection, but to have greater potential to do so than floor monitors.
In February 2008, the Journal of Speech, Language, and Hearing Research (JSLHR) published an article by Federman and Ricketts (2008) entitled “Preferred and Minimum Listening Levels for Musicians While Using Floor and In-ear Monitors.” The study systematically examined how varying on-stage music and crowd noise levels influenced the preferred listening levels (PLLs) and minimum acceptable listening levels (MALLs) of musicians using floor and in-ear monitors during performance. The findings should be of interest to anyone who uses monitors, and of particular interest to musicians.
The purpose of any monitor is to enable a musician to hear his or her own instrument or voice at desired levels. Monitors also make it possible to hear the musical parts others are playing, potentially from a great distance away, which is important both to the musical composition and to the playing of one’s instrument. Beyond solving problems related to signal-to-noise ratio, in-ear monitors were developed to provide high-quality audio signals while reducing exposure to high ambient sound levels during live performances. Traditionally, musicians have relied on floor monitors that deliver sound-field signals. One potential problem with floor monitors is the signal competition that can occur among the high-level on-stage ambient signals from individual instruments, crowd noise, and the desired monitor signal: a musician may have difficulty hearing his or her voice and other instruments over the crowd noise and other on-stage sounds. Consequently, in-ear monitors are marketed as a way to attenuate crowd noise and other on-stage sounds, allowing for lower listening levels, while providing high-quality signals with minimal distortion.
Despite these apparent advantages over floor monitors, Federman and Ricketts (2008) identified a number of unanswered questions about the actual sound levels delivered to the ear by in-ear monitors. It is reasonable to ask, for example, whether the high levels typical of live performance exceed safe exposure limits for avoiding noise-induced hearing loss (NIHL). The authors also wanted to know how the use of in-ear monitors, as compared with floor monitors, affects the total overall level reaching the musician’s ear. For example, if well-fitted in-ear monitors provide an average of 20 dB of attenuation, and on-stage ambient sound levels during a performance average 105 dBA (A-weighted dB SPL), then the ambient level reaching the musician’s ear would be no less than 85 dBA. Under this assumption, the in-ear monitor signal would need to be at least that loud to provide any signal-to-noise ratio (SNR) benefit to the listener. In the same environment, a floor monitor would theoretically need to deliver at least 105 dB SPL to provide any SNR benefit. In-ear monitors therefore have the potential to let musicians listen at lower levels. However, prior to the study it had not been determined empirically whether attenuating crowd noise and other on-stage sound through in-ear monitors, and hence improving SNR, would address the problem of high monitor signal levels. It also remained unclear whether participants would prefer a fixed SNR across levels or whether preferred SNR would vary with signal level.
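The arithmetic behind this attenuation argument can be sketched as a short calculation. This is a minimal illustration using the article's round example figures (20 dB attenuation, 105 dBA stage level); the helper function names are hypothetical, not from the study:

```python
# Sketch of the SNR reasoning described above, using the article's
# example figures. Helper names are illustrative only.

def level_at_ear(ambient_dba: float, attenuation_db: float) -> float:
    """Ambient level reaching the eardrum after passive attenuation."""
    return ambient_dba - attenuation_db

def min_monitor_level(ambient_at_ear_dba: float, snr_db: float = 0.0) -> float:
    """Monitor level needed to achieve a given SNR over residual ambient sound."""
    return ambient_at_ear_dba + snr_db

# In-ear monitor: 105 dBA on stage, ~20 dB passive attenuation.
iem_ambient = level_at_ear(105.0, 20.0)      # 85.0 dBA at the eardrum
iem_min = min_monitor_level(iem_ambient)     # monitor must exceed 85 dBA

# Floor monitor: no attenuation of the 105 dBA stage sound.
floor_min = min_monitor_level(level_at_ear(105.0, 0.0))  # 105.0 dBA
```

The 20 dB gap between `iem_min` and `floor_min` is the in-ear monitor's theoretical headroom for lower listening levels.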
These factors are important because if musicians prefer listening levels higher than what is deemed safe, despite the attenuation capability of an in-ear monitor, then the use of in-ear monitors would not likely provide level reductions that would increase the allowable exposure time or reduce risk of noise-induced hearing loss as compared to the use of floor monitors.
In summary, the authors considered knowledge about PLLs and MALLs for monitor-using musicians with and without hearing loss important for a number of reasons, including the fact that empirical data were not yet available to assess the potential risk of NIHL for musicians (or others) who use such devices. In-ear monitors potentially offer several benefits (e.g., attenuation of loud sounds, high fidelity, reduced vocal effort, better-controlled signal monitoring). However, it was unknown whether the risk of NIHL remained. The purpose of the study was to evaluate the effects of crowd noise and overall music level on the preferred listening levels of musicians.
Specifically, several questions were of interest. First, do the two types of monitors differ in the risk of NIHL posed by high listening levels, as evidenced by PLLs and MALLs? Second, what effect does SNR have on PLL; that is, do louder overall music and crowd noise levels, in addition to non-vocal monitor signal levels, lead to higher PLLs? Third, does the risk of NIHL differ between individuals with and without hearing loss? Fourth, does the lowest level that musicians will tolerate (i.e., the MALL), relative to their PLL, differ by monitor type? If so, knowing that lowest tolerable level could provide insight useful for counseling musicians about the use of these devices.
To answer these questions, the Federman and Ricketts study enrolled adult musicians aged 23 to 48 years, with and without hearing loss, who had 10 years of musical training or comparable professional experience. Participants were asked to learn a novel song and then sing it along with a recording of the other instruments in a simulated live-performance environment that included drums, bass, guitar, keyboards, and the participant’s own voice. They were asked to adjust the level of the monitor (floor or in-ear) up and down to find the level at which they would prefer to listen during a live performance (i.e., the PLL). They were also asked to adjust the monitor level to find the lowest level at which they would still listen, below which the monitor would be considered of no use (i.e., the MALL). Each trial used a different combination of overall on-stage music level and crowd noise level (in dBA: 92/0, 92/95, 97/0, 97/80, 97/95, 102/0, 102/80, and 102/95). Once the PLL or MALL was established, a probe microphone system was used to record sound pressure levels at the eardrum during performance.
Major findings showed that, although changes in signal-to-noise ratio did not affect PLL or MALL, both increased as signal level increased. In other words, as on-stage levels go up, so does the need to increase monitor level (both floor and in-ear); overall level, not SNR, appears to drive PLL and MALL. The relation between floor and in-ear monitors was also stable for both measures. PLL was essentially the same for both monitor types: despite the ~20 dB reduction in ambient sound provided by the in-ear monitors, participants preferred to listen at levels virtually identical to their preferred floor-monitor levels. Although the difference between the two monitor types was statistically significant (0.6 dB), the authors did not consider it functionally meaningful, because so small a difference would not affect recommendations about noise exposure time. A difference of 3 dB or more, by contrast, would have been functionally significant, because it would potentially double the allowable exposure time under NIOSH recommendations.

Unlike the PLL results, the overall MALL was ~6 dB lower for the in-ear monitor than for the floor monitor. Specifically, the MALL was only 1.8 dB below the PLL in the floor-monitor conditions, but ~6 dB below it with the in-ear monitor. This suggests a potentially significant real-world difference between the two monitor types: usable monitor levels can be lowered further with in-ear monitors than with floor monitors. If musicians are willing to turn down their monitors to preserve their hearing, bilateral in-ear monitors allow almost a 6 dB greater reduction in level than a floor monitor, potentially reducing the risk of noise-induced hearing loss or significantly increasing the allowable exposure time.
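The NIOSH exposure-time reasoning above can be made concrete with the standard NIOSH formula: a recommended exposure limit of 85 dBA for 8 hours, with allowable time halving for every 3 dB increase. Applying it to the study's numbers is my illustration, not a calculation from the article itself:

```python
# NIOSH recommended-exposure-limit model: 85 dBA for 8 hours,
# with a 3 dB exchange rate (each +3 dB halves the allowable time).

def niosh_allowable_hours(level_dba: float,
                          rel_dba: float = 85.0,
                          exchange_db: float = 3.0,
                          base_hours: float = 8.0) -> float:
    """Allowable daily exposure time in hours for a given A-weighted level."""
    return base_hours / (2 ** ((level_dba - rel_dba) / exchange_db))

print(niosh_allowable_hours(94.0))  # 1.0 hour at 94 dBA
print(niosh_allowable_hours(91.0))  # 2.0 hours: 3 dB lower doubles the time

# A ~6 dB reduction (as with the in-ear MALL) quadruples allowable time.
print(niosh_allowable_hours(94.0 - 6.0))  # 4.0 hours at 88 dBA
```

This is why the authors treat a 3 dB difference as the threshold of functional significance, and why the ~6 dB lower in-ear MALL could matter in practice.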
One very important conclusion from the Federman and Ricketts (2008) study is that there is no real-world difference between floor and in-ear monitors in preferred listening level. This means the generally accepted assumption that in-ear monitors provide users with hearing protection (or greater protection than floor monitors) is false: participants actually increased their listening level to override the ~20 dB of ambient attenuation the in-ear monitors provided, suggesting that musicians prefer a particular overall loudness that does not depend solely on SNR. MALLs, by contrast, appeared to be driven by audibility, because they represent the minimum SNR a musician requires to monitor successfully. Therefore, although in-ear monitors have the potential to allow reduced listening levels, counseling by qualified hearing professionals is likely required to realize this benefit. The authors proposed that the most accurate way to assess an individual musician’s risk of NIHL is to measure SPLs at the eardrum during a soundcheck or performance.
Contact author: Jeremy Federman, Vanderbilt University Medical Center, Department of Hearing and Speech Sciences, Medical Center East, South Tower, 1215-21st Avenue South, Room 8310, Nashville, TN 37232. E-mail: [email protected].