It is well documented that individuals who have hearing loss often complain of considerable difficulty understanding speech, especially in a background of noise. Recent advancements in digital hearing aid technology appear to have minimized this difficulty, as evidenced by the subjective reports provided by many self-assessment hearing aid outcome measures. However, some empirical research has not shown significant measurable advantages of digital technology over analog technology (Valente, Fabry, Potts, & Sandlin, 1998; Walden, Surr, Cord, Edwards, & Olson, 2000). This lack of evidence to support patients' subjective perceptions of digital technology suggests that their responses on self-assessment questionnaires may be artificially inflated. The focus of this paper is to emphasize the need for both subjective and objective documentation of hearing aid outcomes, particularly in the area of speech perception performance. First, a review of some terminology is in order.
Verification and Validation
The terms verification and validation are often confused when used in the context of the hearing aid fitting process. Hearing aid verification techniques primarily focus on ways to confirm that the gain in the hearing aid matches the prescribed targets. Hearing aid verification is an important component of the hearing aid evaluation, but it does not evaluate whether the matched hearing aid targets are actually appropriate for the patient with regard to improvements in speech perception, or whether the patient will benefit from such prescribed hearing aid gain. Therefore, recent research in hearing aid fitting has focused more on hearing aid validation techniques. Hearing aid validation refers to outcome measures designed to assess treatment efficacy (i.e., whether the hearing aids are beneficial). Weinstein (1997) defines treatment efficacy in three different areas: "(1) treatment effectiveness: do the hearing aids improve speech intelligibility in quiet and in noise or do they restore normal loudness perceptions, (2) treatment efficiency: are certain hearing aids or hearing aid settings/adjustments better than others for improving speech understanding, and (3) treatment effects: does the use of hearing aids improve the patient's social or emotional well-being or his overall quality of life?" Because it is critically important for audiologists to demonstrate the outcomes of such treatments as hearing aids, much of our current clinical focus has shifted toward hearing aid validation.
There are generally two philosophies regarding how hearing aid validation techniques can document outcomes from the hearing aid fitting process: those that focus on subjective outcomes (i.e., using questionnaires and interviews to document the opinions and attitudes of the patient), and those that focus on objective outcomes (i.e., using empirical data to verify improvements in performance) (Cox, 1999). The position taken in this paper is that both subjective and objective outcomes are important and necessary in documenting hearing aid validation and should be incorporated into the hearing aid evaluation process. What follows is a review of approaches that have been or are currently being used for hearing aid evaluation, along with a discussion of the role of both subjective outcome measures (self-assessment questionnaires) and objective outcome measures (speech recognition tests) that may be useful in documenting speech perception performance. Finally, current research establishing the effectiveness of specific subjective and objective outcome measures will be reviewed.
Approaches to Hearing Aid Evaluation
Behavioral Comparisons
The earliest approaches to evaluating hearing aid benefit were comparison methods where patient performance with two or three analog hearing aids was evaluated using traditional word recognition tests, such as NU-6 and CID W-22 monosyllabic word lists (Carhart, 1946). Such tests were administered to the patient in the sound field, and speech recognition thresholds (SRTs) and word recognition scores were recorded in unaided and aided conditions to determine if the hearing aids provided benefit. Comparisons were also made among the aided conditions to evaluate a variety of hearing aids to determine which hearing aid provided the lowest SRT and the best word recognition score.
Unfortunately, these speech perception measures of hearing aid benefit have long been criticized for not being sensitive enough to provide the information necessary to determine and define specific hearing aid benefit (Carhart, 1965; Wiley, Stoppenbach, Feldhake, Moss, & Thordardottir, 1995; Mendel & Danhauer, 1997). Many of these tests are considered insensitive for objectively and accurately measuring aspects of listeners' speech perception abilities as a reflection of their performance in realistic listening situations (Mendel & Danhauer, 1997). The diversity of this criticism reflects both the simplicity of the traditional measurement approaches and the complexity of speech perception by listeners with hearing impairment (Walden, 1984).
Real-Ear Verification Measures
Given the lack of sensitivity of early speech perception tests, other approaches to assessing hearing aid benefit were incorporated in the 1980s. With the advent of computerized probe microphone real-ear technology, the challenge of developing scientifically based methods of selecting, evaluating, and fitting hearing aids became much easier (Northern, 1992). These objective probe-microphone measurements were used to verify that the prescribed real-ear gain of the hearing aid met desired targets. Such verification data are critical for determining how the hearing aid is performing in the individual ear. The good news is that objective measures of insertion gain provide excellent information about the amount of real-ear gain delivered by the hearing aid; the bad news is that they do not necessarily supply any information about the patient's speech understanding ability in realistic listening situations using that hearing aid.
It is critically important that audiologists use objective probe microphone real-ear measurements to verify the performance of the hearing aid in matching prescription targets. However, such techniques do not serve as a replacement for using speech recognition tests as measures of benefit in speech understanding. It is important that sensitive speech recognition testing be a part of the hearing aid evaluation process, as long as the tests used have documented validity and reliability (Martin, Champlin, & Chambers, 1998; Wilson, 2004; Mendel, 2007).
Unstandardized Procedures
More recently, advancements in digital hearing aid technology have changed the hearing aid evaluation process once again. With extensive computer software now available to fit these digital instruments, not only are speech recognition test materials not being used, but in some cases, even real-ear probe microphone verification techniques are not being employed. Some manufacturers provide CD-ROMs with environmental sounds and unstandardized speech materials with their fitting software. These stimuli, if used, are typically played directly from a computer through uncalibrated loudspeakers, often outside a sound-treated room. Thus, the results from such evaluations are very difficult to quantify.
Subjective Outcome Measures
Over the years, several self-assessment inventories have been developed in an effort to quantify patients' subjective perceptions of their hearing handicap and/or hearing aid benefit. Many of these questionnaires also have versions that assess the perceptions of significant others as well. Some common questionnaires that assess hearing handicap include the Self Assessment of Communication (SAC) and the Communication Scale for Older Adults (CSOA; Schow & Nerbonne, 1982), the Hearing Performance Inventory (HPI; Giolas, Owens, Lamb, & Schubert, 1979), the Hearing Handicap Inventory for Adults (HHIA; Newman, Weinstein, Jacobson, & Hug, 1990), and the Hearing Handicap Inventory for the Elderly (HHIE; Newman & Weinstein, 1988). Many of these self-assessment tools provide subjective impressions from the individuals with hearing loss and their significant others regarding communication difficulties and their subsequent social and emotional consequences, while others provide situational questions used to create a profile of the individual's communication abilities.
The subjective outcome measures that are part of the hearing aid validation process include questionnaires that specifically assess hearing aid benefit. These hearing aid benefit outcomes are designed to more directly assess treatment efficacy, or the subjective benefits perceived by the listener. Some popular assessment tools for measuring hearing aid benefit include the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995), the Client Oriented Scale of Improvement (COSI; Dillon, James, & Ginis, 1997), the Glasgow Hearing Aid Benefit Profile (GHABP; Gatehouse, 1999), and the Hearing Aid Performance Inventory (HAPI; Walden, Demorest, & Hepler, 1984). Although these instruments vary considerably in their format and content, each of them assesses the individual's perception of their listening capabilities in various situations with and without their hearing aids, providing a measure of subjective hearing aid benefit.
Objective Outcome Measures
Traditionally, hearing aid benefit was measured using more objective assessments of speech perception by means of speech recognition materials. As indicated earlier, many of the original speech perception materials lacked appropriate standardization data, making such tests insensitive for determining a listener's true capabilities with the hearing instrument. In addition, most studies of objectively measured hearing aid benefit have been conducted in a laboratory or clinical setting, which limits the generalization of those findings to more realistic listening environments (Cox, Alexander, & Gilmore, 1991).
In recent years, however, several speech perception tests (e.g., the Connected Speech Test [CST; Cox, Alexander, & Gilmore, 1987], the Speech in Noise test [SIN; Fikret-Pasa, 1993], the Hearing in Noise Test [HINT; Nilsson, Soli, & Sullivan, 1994], the Lexical Neighborhood Test [LNT; Kirk, Pisoni, & Osberger, 1995], the Quick Speech in Noise Test [QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004], and the Words in Noise test [WIN; Wilson, 2003]) have been developed with the goal of maximizing their validity and reliability in order to provide a more accurate reflection of a listener's speech understanding. These newer speech recognition materials were developed using appropriate psychometric methods, and all have considerable standardization data available that document their validity and reliability.
Although tests of speech perception are tests of sensory capacity, they are analogous to tests of mental ability in their development and standardization (Bilger, 1984). Thus, principles of psychometric theory apply to the standardization of speech perception tests. Many of the early speech recognition tests were not subjected to such rigor in standardization and were used in situations for which they were not originally designed. Thus, the results of these tests were questionable, at best, as accurate measures of speech perception abilities. The data collected from tests of speech perception will serve their purpose accurately only if their validity and reliability have been established on the population of subjects for whom the tests were designed (Mendel & Danhauer, 1997).
Validity and Reliability
Any measurement used to assess one's behavioral performance should be subjected to rigorous standards with regard to its development to ensure that the measure accurately reflects the behavior of interest. A thorough review of the important concepts relevant to test standardization can be found in Mendel and Danhauer (1997). However, a few of these central issues are highlighted below.
Of the four types of validity that can be measured (predictive, content, construct, and face validity), a global judgment of a test's validity is often based on face validity, that is, the extent to which a test instrument appears to measure what it is supposed to measure. More formally, the degree of validity is measured as the correlation between test instrument scores and criterion-related variables; generally, the higher the correlation, the greater the degree of validity. Thus, validity is a matter of degree rather than an all-or-none property, and such measures should therefore be ongoing so that appropriate modifications of existing tests can be made as necessary.
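To illustrate this in general psychometric terms (a standard formulation, not one tied to any particular speech test discussed here), the criterion-related validity coefficient is simply the Pearson correlation between scores on the test instrument, X, and scores on the criterion measure, Y:

r_{XY} = \operatorname{cov}(X, Y) / (s_X s_Y)

For example, a new sentence recognition test whose scores correlated r = .85 with an established criterion measure would be judged to have strong criterion-related validity, whereas a correlation of r = .30 would indicate weak validity (the values here are purely illustrative).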
Reliability is another psychometric principle that is important in speech perception test development and standardization. Reliability refers to the extent to which measurements are repeatable by the same individual using the same measures of a particular attribute, by the same individual using different measures of the attribute, or by different people using the same measure of the attribute without the interference of error. Thus, a highly reliable test is one for which the differences among individuals are appreciably larger than the error of measurement (Bilger, 1984; Mendel, 2008).
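In classical test theory terms (again, a general formulation offered only as illustration), an observed score X is modeled as the sum of a true score T and measurement error E, and the reliability coefficient is the proportion of observed-score variance attributable to true differences among individuals:

X = T + E, \quad r_{XX} = \sigma_T^2 / (\sigma_T^2 + \sigma_E^2)

A reliability coefficient approaching 1.0 thus indicates that the differences among individuals are large relative to the error of measurement, consistent with the description above.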
The relationship between validity and reliability is critically important for the sensitivity of the test measure. A high degree of test-retest reliability does not necessarily mean that the test has high validity. Thus, even though a test may be repeatable or precise, the assessment itself may not be a valid or accurate measure of the behavior of interest. Therefore, a test can be reliable, but not valid; however, in order for it to be valid, it must be reliable.
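This relationship can also be expressed quantitatively. Under classical test theory (a general psychometric result, not a property of any particular speech test), a test's validity coefficient cannot exceed the square root of its reliability:

r_{XY} \le \sqrt{r_{XX}}

Thus, a test with a reliability coefficient of only .50 can correlate no higher than about .71 with any criterion measure, regardless of how well its content is designed.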
With these psychometric concepts of test development and standardization in mind, audiologists should be aware that some tests that have been used in clinical and research settings for many years may not necessarily meet the criteria for standardization outlined above. Audiologists should always seek to find out whether such standardization data exist for specific materials; if not, another test should perhaps be considered, or at least caution should be exercised in interpreting the test results.
Methodologic Variability
In addition to ensuring that speech recognition tests are developed with proper attention to validity and reliability and that appropriate rules of test construction and standardization have been followed, carefully controlled test methodologies must also be used for sensitive and accurate assessments. For example, variables such as response format, method of scoring, and use of alternative test forms may influence the type and amount of information obtained from a particular test and how those results can be interpreted. Other methodologic variables such as stimulus familiarity, phonetic and phonemic balancing, use of a carrier phrase, presentation format, stimulus presentation levels, use of partial- vs. full-lists, use of background noise, and inter- and intra-subject variability should also be considered. These variables may result in potential weaknesses in some speech perception tests, which could lead to test insensitivity and prevent accurate measurement of speech perception (Mendel & Danhauer, 1997; Mendel, 2008). A thorough review of the methodologic variables that can affect the measurement of speech perception can be found in Mendel and Danhauer (1997).
Speech Recognition Assessments
As indicated earlier, comparative hearing aid evaluations employed speech audiometry as an important measure of performance, but these measures were not found to be sensitive enough to predict performance. In addition, many audiologists stopped using speech recognition testing because the industry moved toward in-the-ear hearing aid fittings that essentially eliminated the comparative hearing aid evaluation approach. Further, many audiologists found the testing too time consuming for the limited benefit received. The annual Hearing Journal survey (Mueller, 2001) reported that about half of dispensers perform aided speech recognition testing using monosyllabic words in quiet, some test binaurally, and only 6% use sentence stimuli such as the HINT (Nilsson et al., 1994), SIN (Fikret-Pasa, 1993), or Revised Speech Perception in Noise Test (R-SPIN; Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984). Sentence stimuli should be considered part of the hearing aid evaluation process because many of these materials have high validity and reliability. Factors that contribute to this standardization include the fact that these tests were designed specifically for evaluating hearing aids, are often administered with a competing stimulus, are available as digital recordings, and can be used to evaluate performance outside the audiometric test booth.
Mueller (2001) encouraged audiologists to perform unaided and aided speech recognition testing as part of the hearing aid evaluation. Unaided speech recognition testing can help determine hearing aid candidacy and provide patients with realistic expectations about speech understanding. Aided speech recognition testing can help demonstrate when aided performance is better than unaided, illustrate the advantages of special hearing aid features, and provide information useful for counseling.
Mueller (2001) and Wilson (2004) also suggest that speech recognition testing be performed in the presence of background noise. Speech-in-noise testing can help identify a cognitive or auditory processing disorder, predict the aided outcome with amplification (i.e., estimate the upper limit of performance), and assist in determining amplification characteristics and features of the hearing aids. Many audiologists are also measuring signal-to-noise ratio (SNR) loss, which is the dB increase in SNR required by a person with hearing impairment to understand speech in noise compared to someone with normal hearing. Because SNR loss cannot be reliably predicted from the pure-tone audiogram or other standard audiometric tests, it can be useful in predicting how well patients may do with their hearing aids. It is also useful in counseling patients appropriately regarding what to expect from given technologies, and it is helpful in improving satisfaction with hearing aids because the patient's difficulty in noise has been documented.
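As a simple worked example of this definition (the specific values here are hypothetical), suppose a listener with hearing loss requires a +8 dB SNR to repeat 50% of key words correctly on a sentence-in-noise test, whereas the average normal-hearing listener reaches 50% correct at +2 dB SNR on the same materials. The SNR loss is the difference between these two values:

\text{SNR loss} = \text{SNR-50}_{impaired} - \text{SNR-50}_{normal} = 8\ \text{dB} - 2\ \text{dB} = 6\ \text{dB}

A patient with a 6 dB SNR loss therefore needs the signal to be 6 dB more favorable than a normal-hearing listener does to achieve comparable understanding in noise, information that can be applied directly in counseling and in selecting features such as directional microphones.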
Research on Subjective and Objective Hearing Aid Outcomes
Our most recent work has examined the use of both objective and subjective outcome measures as a way to validate hearing aid benefit (Mendel, 2007). We investigated whether some newly developed sentence recognition assessments were sensitive enough to demonstrate objective hearing aid benefit and whether such results would correlate well with patients' subjective perceptions of that benefit. The R-SPIN, HINT (in quiet and in three separate noise conditions), and QuickSIN were administered to 21 hearing aid users to determine whether the tests could adequately document improvements in speech understanding with hearing aids and how those improvements compared with the participants' self-assessments of their own performance. Comparisons were made between unaided and aided performance on these sentence tests and on the HAPI, which served as the subjective outcome measure.
Comparison of unaided and aided performance using both objective and subjective outcomes led to significant correlations between some (but not all) of the test measures. Aided responses on the HAPI were significantly better than unaided responses in most conditions, suggesting that the HAPI was successful at assessing overall subjective hearing aid benefit. In addition, the R-SPIN, HINT threshold in quiet, and QuickSIN SNR were the most sensitive and robust of the objective outcome measures evaluated and provided useful objective information regarding speech perception. Unfortunately, objective speech perception performance in the HINT noise conditions did not yield statistically significant outcomes. However, the significant correlations that were measured between the other objective tests and the HAPI ratings indicated that as HAPI ratings improved, speech perception scores improved as well. These findings are important because they demonstrate that both subjective and objective outcome measures were, in these cases, equally able to document improvements in speech perception performance among these hearing aid wearers.
Although this study was not able to show significant correlations among all of the subjective and objective measures, the findings are encouraging and warrant further study of other objective and subjective outcome measures. The fact that HAPI ratings for conversations in quiet and in noise were highly correlated with R-SPIN, HINT Quiet thresholds, and QuickSIN SNR Loss suggests an encouraging trend documenting not only the relationship between these subjective and objective measures, but also the sensitivity of these instruments in validating performance.
Conclusion
Subjective outcomes seem to have become the "gold standard" against which hearing aid benefit results are compared. The results of Mendel (2007) better define the relationship between the scores obtained on some objective sentence tests and subjective responses on the HAPI. Objective documentation of subjective impressions is essential for determining the efficacy of our treatment outcomes in hearing aid fitting. That is, objective documentation that some speech recognition test materials are sensitive measures of speech perception performance should support the use of such test materials in the hearing aid evaluation process. Such findings have the potential for strong clinical impact and may ultimately help provide more standardization to the hearing aid evaluation process across clinics.
The results of Mendel (2007) also suggest that collecting objective outcome data from the R-SPIN, HINT threshold in quiet, and QuickSIN SNR Loss along with subjective outcome data from the HAPI can help quantify the most important and desirable benefit of hearing aids: improved speech perception performance. If such improvement is not documented both subjectively and objectively in hearing aid evaluations, then it is difficult to verify that improvement has occurred. These results serve as an initial step toward defining the relationship between objective and subjective outcome measures in an attempt to better characterize true hearing aid benefit.
References
Bilger, R. C. (1984). Speech recognition test development. In E. Elkins (Ed.), Speech recognition by the hearing-impaired. Asha Reports, 14, 2-15. Rockville, MD: ASHA.
Bilger, R. C., Nuetzel, J. M., Rabinowitz, W. M., & Rzeczkowski, C. (1984). Standardization of a test of speech perception in noise. Journal of Speech and Hearing Research, 27, 32-48.
Carhart, R. (1946). Speech reception in relation to pattern of pure tone loss. Journal of Speech Disorders, 11, 97-108.
Carhart, R. (1965). Problems in the measurement of speech discrimination. Archives of Otolaryngology, 82, 253-260.
Cox, R. (1999). Measuring hearing aid outcomes: Part 1. Journal of the American Academy of Audiology, 10, Editorial.
Cox, R. & Alexander, G. C. (1995). The abbreviated profile of hearing aid benefit. Ear and Hearing, 16, 176-183.
Cox, R., Alexander, G. C., & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing, 8, 119S-126S.
Cox, R., Alexander, G. C., & Gilmore, C. (1991). Objective and self-report measures of hearing aid benefit. In G. A. Studebaker, F. H. Bess, & L. B. Beck (Eds.), The Vanderbilt Hearing Aid Report II (pp. 201-213). Parkton, MD: York Press, Inc.
Dillon, H., James, A., & Ginis, J. (1997). Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. Journal of the American Academy of Audiology, 8, 27-43.
Fikret-Pasa, S. (1993). The effects of compression ratio on speech intelligibility and quality. Ph.D. Dissertation, Northwestern University.
Gatehouse, S. (1999). Glasgow hearing aid benefit profile: Derivation and validation of a client-centered outcome measure for hearing aid services. Journal of the American Academy of Audiology, 10, 80-103.
Giolas, T. G., Owens, E., Lamb, S. H., & Schubert, E. D. (1979). Hearing performance inventory. Journal of Speech and Hearing Disorders, 44, 169-195.
Killion, M. C., Niquette, P. A., Gudmundsen, G. I., Revit, L. J., & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 116(4, Pt. 1), 2395-2405.
Kirk, K.I., Pisoni, D.B., & Osberger, M.J. (1995). Lexical effects on spoken word recognition by pediatric cochlear implant users. Ear and Hearing, 16, 470-481.
Martin, F. N., Champlin, C. A., & Chambers, J. A. (1998). Seventh survey of audiometric practices in the United States. Journal of the American Academy of Audiology, 9, 95-104.
Mendel, L.L. (2007). Objective and subjective hearing aid assessment outcomes. American Journal of Audiology, 16, 118-129.
Mendel, L.L. (In Press). Current considerations in pediatric speech audiometry. International Journal of Audiology.
Mendel, L.L. & Danhauer, J.L. (1997). Audiologic evaluation and management and speech perception assessment. San Diego, CA: Singular Publishing Company, Inc.
Mueller, H.G. (2001). Speech audiometry and hearing aid fittings: Going steady or casual acquaintances. The Hearing Journal, 54, 19-20, 24-26, 28.
Newman, C. & Weinstein, B. E. (1988). The Hearing Handicap Inventory for the Elderly as a measure of hearing aid benefit. Ear and Hearing, 9, 81-85.
Newman, C. W., Weinstein, B. E., Jacobson, G. P., & Hug, G. A. (1990). The hearing handicap inventory for adults: Psychometric adequacy and audiometric correlates. Ear and Hearing, 11, 430-433.
Nilsson, M., Soli, S. D., & Sullivan, J. A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.
Northern, J. L. (1992). Introduction to computerized probe microphone real ear measurements in hearing aid evaluation procedures. In H.G. Mueller, D. B. Hawkins, & J. L. Northern (Eds.), Probe microphone measurements: Hearing aid selection and assessment (1st ed., pp. 1-19). San Diego: Singular Publishing Group, Inc.
Schow, R.L. & Nerbonne, M.A. (1982). Communication screening profile: Use with elderly clients. Ear and Hearing, 3, 135-147.
Valente, M., Fabry, D. A., Potts, L. G., & Sandlin, R. E. (1998). Comparing the performance of the Widex SENSO digital hearing aid with analog hearing aids. Journal of the American Academy of Audiology, 9, 342-360.
Walden, B. E. (1984). Validity issues in speech recognition testing. In E. Elkins (Ed.), Speech recognition by the hearing-impaired. Asha Reports, 14, 16-18. Rockville, MD: ASHA.
Walden, B. E., Demorest, M., & Hepler, E. (1984). Self report approach to assessing benefit derived from amplification. Journal of Speech and Hearing Research, 27, 49-56.
Walden, B. E., Surr, R. K., Cord, M. T., Edwards, B., & Olson, L. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology, 11, 540-560.
Weinstein, B. E. (1997). Introduction: Customer satisfaction with and benefit from hearing aids. Seminars in Hearing, 18, 3-5.
Wiley, T. L., Stoppenbach, D. T., Feldhake, L. J., Moss, K. A., & Thordardottir, E. T. (1995). Audiologic practices: What is popular versus what is supported by evidence. American Journal of Audiology, 4, 26-34.
Wilson, R.H. (2003). Development of a speech in multitalker babble paradigm to assess word-recognition performance. Journal of the American Academy of Audiology, 14, 453-470.
Wilson, R.H. (2004). Adding speech-in-noise testing to your clinical protocol: Why and how. The Hearing Journal, 57(2), 10, 12, 16-19.