My study guide for next week's Hearing Aids II quiz, which contains a bunch of things I didn't discuss in the first post.

Describe the major advantages that digitally programmable hearing aids introduced to the hearing aid fitting process that we still enjoy today.

  1. They're smaller. (I like this. I've already got glasses; I sometimes need to take off my hearing aids because the backs of my ears are sore from having so much stuff crammed onto them.)
  2. They use less power. (I like this. Running out of battery mid-lecture sucks. Buying more batteries on a grad student's stipend also sucks.)
  3. They're reproducible -- you can copy an algorithm perfectly from processor to processor, and it behaves identically even in somewhat extreme conditions. Analog components (as I'm painfully aware, as an electrical engineer) are always different; no matter how carefully they're manufactured, you'll get capacitors out of tolerance, resistors changing value as the temperature fluctuates, and so on.
  4. You can far more easily compare one algorithm to another (numbers to numbers) than you can a thing made out of analog components.

Describe the Hearing Aid Placebo Effect and your obligations for dealing with it.

The placebo effect is that if people think something's better, they'll perceive it as better, regardless of whether it is or not. In other words, if a user or clinician sees shiny advertising (or similar) for a hearing aid, or gets told that a technology is "new" or "better," it biases them positively towards that device regardless of its actual technical performance. (This isn't limited to patients; I'm pretty sure my parents and teachers were amazed at nonexistent hearing aid effects when I was in elementary school.)

The really weird thing is that this holds even for functionally identical hearing aids: take two aids that are exactly the same, label one "new" and one "conventional," give patients a hearing test with both, and they'll actually perform better on the test with the "new" aids. The expectation of improvement is so powerful it actually influences test performance.

As a hearing aid user, it's good to be aware of this; it makes me think about things like asking my audiologist to let me try on hearing aids blind (compare models without knowing which is which), about what sort of advertising I want to expose myself to when making a decision, and also about how I can tweak my own psychology to get better results from my hearing aids by expecting them to happen.

Identify 4 advantages that digital hearing aids offer over analog hearing aids.

  1. They can connect with digital accessories like cell phones, televisions, etc, which are becoming increasingly prevalent in some people's lives (mostly affluent first-worlders, but I'm one of them).
  2. They aren't limited to hardwired gain and frequency responses; analog hearing aids are forever bound to their components, but digital hearing aids can be reprogrammed.
  3. They can also do things like noise reduction, signal shaping, and other sorts of funky things beyond gain/frequency response. (As an engineer, I'm totally geeking out about this. As a hearing aid user, I'm insanely happy.)
  4. They can be smaller. Analog hearing aids are controlled with tiny screw trim pots (potentiometers), one screw for every adjustment to a physical component that you want to make. Every little screw makes the hearing aid bigger and more complicated. Digital hearing aids can have everything programmed off a single cable regardless of how many things you're trying to adjust, or even wirelessly in some cases.

They are, however, NOT "cleaner" at amplifying speech in quiet environments; digital hearing aids can actually have more potential to distort the signal than analog ones. But they can do all sorts of other things -- expansion, automatic feedback reduction, loudness dampening.

List the 4 essential hearing aid functions and for each, provide 1 or 2 examples of how digital technology has contributed.

It's the hierarchy o' hearing aids from yesterday's blog post! Here's the Mel version: imagine you and me talking in a quiet hallway and then stepping into a loud restaurant and continuing our conversation.

  1. Audibility and comfort in quiet. (I want to hear things in the quiet hallway.)
  2. Comfort in noise. (But when we go into a loud space, I don't want to have to claw my ears out.)
  3. Intelligibility in noise. (Ideally, I'd still like to be able to understand you in that loud space.)
  4. Convenience and ease of use. (And please make the UI decent; if I need to stop and fiddle with a bunch of buttons as we walk through the door, it's going to interrupt our conversational flow.)

And how has digital technology contributed to each of them?

  1. Well, when we're walking through the quiet hallway, my hearing aids may be doing an expansion -- amplifying your speech without amplifying the air conditioning system.
  2. But when we walk into the restaurant, dishes are clanging -- those are impulses, sharp spikes in the spectrogram. My hearing aids can detect those spikes and mute them down. They can also guess at what the background noise is by sampling it during the times when we're not speaking, then try to screen that background noise out.
  3. And when you speak in the restaurant, my directional microphones will focus on the sound in front of me -- which is you talking -- instead of the sounds around me, which are dishes clanging and other people having conversations. As you move around me to get your tray, my microphones adjust their polar pattern, swivelling around to listen more to where you are instead of where you're not.
  4. And all of this is happening automatically. I don't need to fiddle with buttons unless I want to explicitly change the mode (and there are multiple modes I can try if I want). And I can hook my hearing aids up to an FM unit you're wearing, or to my phone.
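
The expansion in item 1 can be sketched as a level-dependent gain rule: below a kneepoint, gain falls off so that very soft inputs get less amplification than soft speech. All the numbers below (kneepoint, ratio, gain) are made-up illustration values, not anything from a real product:

```python
def expansion_gain_db(input_level_db, kneepoint_db=40.0, normal_gain_db=20.0, ratio=2.0):
    """Downward expansion: below the kneepoint, gain drops off so very soft
    inputs (air conditioning hum) get less amplification than soft speech.
    All parameter values here are invented for illustration."""
    if input_level_db >= kneepoint_db:
        return normal_gain_db
    # below the kneepoint, each 1 dB drop in input loses (ratio - 1) dB of gain
    return normal_gain_db - (kneepoint_db - input_level_db) * (ratio - 1.0)

print(expansion_gain_db(60.0))  # speech-level input: full 20.0 dB gain
print(expansion_gain_db(25.0))  # faint hum: only 5.0 dB gain
```

So speech passing the kneepoint gets the full prescribed gain, while the air conditioner's hum, sitting 15 dB below it, gets much less -- which is exactly the "amplify your speech without amplifying the air conditioning" effect.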

These things were less true with my analog hearing aids as a kid. It looked more like this:

  1. Classmates are talking as we walk down the hallway to lunch. I can't really hear them, so I tune out.
  2. We walk into the lunchroom. It is loud. It hurts. I rip my hearing aids out of my ears and stick them in my pocket.
  3. Now I can hear even less of what my classmates are saying. I've already learned that keeping my hearing aids in during lunch does nothing except amplify the background noise of a hundred kids shouting -- they wouldn't help me pick out a classmate's voice any better -- so this is actually the best tradeoff available to me.
  4. At the age of 5, I'm juggling multiple devices and switches: the FM unit that loops around my neck and clips onto my belt, with its own set of on/off switches for various modes and a volume control, then the volume control and on/off/mode switches for my hearing aids, and then the FM mic unit, which I carry from teacher to teacher and have to explain to each new classroom speaker. As you can imagine, this doesn't do wonders for my social status (which, as a young geek who likes math and science, is already pretty dismal).

Needless to say, I wasn't a huge fan of my hearing aids in elementary school, even with the option of hot pink earmolds. I preferred to withdraw into a world of books and text, because none of the deaf kids I was introduced to were particularly keen on math -- if I wanted to talk with someone about math or physics, the closest I could get was reading a book by a person who was probably already dead. (I also won't discuss the social stigma of wearing hearing aids, other than to say that adults who see a kid with hearing aids tend to assume a low level of intelligence, and thus don't typically talk about Shakespeare or relativity or other interesting things with that kid. I began to consciously and prominently carry around "difficult" books when I was in 6th grade to counteract that.)

Understand what fidelity means and why we should be concerned about it when it comes to hearing aid design. Define fidelity and describe five physical features that contribute to it.

Fidelity is how accurately the nuances of the physical signal are captured and reproduced. An oversimplified version is "how close are you getting to normal hearing in real life?" It's the objective counterpart of sound quality, which is about how good the subject thinks something sounds; fidelity is the geeky engineering version that goes "well, let's talk about the numbers here." It's made of five components:

Frequency response - think of an equalizer. How much gain (loudness) do different frequencies get? Two useful terms here are bandwidth (what are the lowest and highest notes the equalizer reaches? Electrical engineers use "bandwidth" to refer to the span between the min and max frequencies, but audiologists usually refer only to the highest frequency) and smoothness (are the equalizer sliders level, or do a couple of them jag up above the rest? Bumps of about 3 dB are generally detectable). A wide bandwidth and a smooth response make for good fidelity.
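
A toy way to make the smoothness idea concrete: flag any band that sticks out from the average gain by more than the roughly 3 dB a listener can detect. The bands and gain numbers below are invented, and real measurements use standardized frequency bands:

```python
# a hypothetical frequency response, in dB of gain per band (invented numbers)
response_db = [20, 22, 21, 28, 22, 21, 20]  # one band "jags up" well above the rest

def bumps(response_db, threshold_db=3.0):
    """Flag bands that deviate from the mean gain by more than threshold_db,
    since bumps of roughly 3 dB are generally audible."""
    mean = sum(response_db) / len(response_db)
    return [i for i, g in enumerate(response_db) if abs(g - mean) > threshold_db]

print(bumps(response_db))  # → [3]: only the 28 dB band is an audible bump
```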

Dynamic range - the ratio of maximum to minimum signal level. Can you hear really loud things and really soft things (high dynamic range, good), or only middling-loud stuff (low dynamic range, annoying as hell)? The minimum signal level ("what's the softest stuff you can hear?") is usually dictated by the noise floor. The maximum is usually dictated by the saturation limit (the microphone or processor can't register anything louder) and battery voltage (the speakers can't pump any more sound out). A dynamic range of at least 90 dB is required for fidelity; the higher the dynamic range, the better the fidelity.
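
The dB figure is just the max/min ratio on a log scale. A quick sketch (the 30,000x amplitude ratio is an invented example, not a spec from any device):

```python
import math

def dynamic_range_db(p_max, p_min):
    """Dynamic range: the dB ratio between the loudest usable signal level
    (saturation limit) and the softest (noise floor), as amplitudes."""
    return 20.0 * math.log10(p_max / p_min)

# a noise floor 30,000x quieter in amplitude than the saturation limit
# gives ~89.5 dB -- just shy of the ~90 dB the notes say fidelity requires
print(round(dynamic_range_db(30000.0, 1.0), 1))  # → 89.5
```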

(Nonlinear) distortion - distortion is when the signal gets warped somehow, and nonlinear distortion is when that warping makes the sound come out at different frequencies. For instance -- and to once again oversimplify this -- I might be hearing a trumpet playing an A note, but nonlinear distortion might cause a bunch of noise to happen so that I also hear parts of a D note that aren't really there in the signal. Sometimes this can be intentional, constructive distortion, like when we shape a signal so that speech is audible but painfully loud noises are less so. Or maybe with the Example of the Oversimplified Trumpet, I can't hear the A but could hear the D, so the sound was being transposed down. But we can also have destructive distortion, which is usually unwanted and unintentional distortion that makes things less understandable. That's bad. We want to make that go away. So some distortion helps fidelity, and some hurts it.
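
One way to see "new frequencies from a warped signal" concretely: hard-clip a sine wave and watch energy appear at a harmonic that wasn't in the input. This is a generic DSP illustration (stdlib-only DFT), not a model of how any particular hearing aid distorts:

```python
import math, cmath

def dft_mag(x, k):
    """Magnitude of DFT bin k of signal x, normalized by length."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))) / N

N = 256
# a pure sine at 8 cycles per frame, slightly too big for a clipping stage...
clean = [1.2 * math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
# ...and the same sine after hard clipping at +/-1.0 (a nonlinear operation)
clipped = [max(-1.0, min(1.0, s)) for s in clean]

# the clean sine has energy only at bin 8; clipping creates energy at the
# 3rd harmonic (bin 24) -- a frequency that was never in the input
print(dft_mag(clean, 24))    # ~0
print(dft_mag(clipped, 24))  # clearly nonzero (roughly 0.04)
```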

Internal noise - noise that comes from inside the hearing aids, added by the presence of the components themselves. Most of it comes from the microphone. The less internal noise we have, the higher fidelity we have.

Processing delay / phase distortion - how long does it take the hearing aid to process the sound (and therefore how long are you waiting before you hear it)? Signal-path delays can vary with signal frequency, causing relative phase shift, which basically makes stuff sound different (and typically like crap). But even if all the frequencies have the same delay, you can't delay too long, or you're forever hearing things after people say them -- not enough time to react, and a weird out-of-sync-with-visuals effect takes hold after 2-5 milliseconds. (This is like trying to lipread actors in a movie where the audio and video are out of sync. It is impossible.) The less delay (and corresponding phase distortion) we have, the higher the fidelity.
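
The delay-to-phase relationship is just phase = 360 x frequency x delay (in degrees, modulo a full cycle). A tiny sketch with invented numbers, showing why a frequency-dependent delay shifts components relative to each other:

```python
def phase_shift_deg(freq_hz, delay_s):
    """Phase (in degrees, mod 360) that a pure tone accumulates over a delay."""
    return (360.0 * freq_hz * delay_s) % 360.0

# if the delay varies with frequency, components of one sound slide around
# relative to each other -- that relative phase shift is what warps the waveform
print(phase_shift_deg(500.0, 0.004))   # 500 Hz delayed 4.0 ms -> ~0 deg (whole cycles)
print(phase_shift_deg(500.0, 0.0045))  # same tone delayed 4.5 ms -> ~90 deg
```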

Understand the physical characteristics that help determine sound quality for speech and music.

Distortion, smoothness, and dynamic range. Distortion is more noticeable in speech than music, but smoothness and dynamic range are more noticeable in music than speech.

Do hearing-impaired users care about fidelity? Well, yes. Intelligibility in noise correlates positively with fidelity. People with mild/moderate loss tend to care more; they want to hear things as they "ought to be" because they're already reasonably close. People with severe/profound loss (like me) tend to care more about functionality and intelligibility than fidelity; who cares how close you can come to reproduction if you can't understand a thing? (This is why cochlear implants, even if they don't sound anything like "normal" hearing, still make some of their users really happy. I am debating getting one, but want to find out what it's going to do to my piano playing first.)

According to Killion, what is one reason why hearing-impaired listeners reject hearing aids?

Um... because they're expensive and don't actually help?

Citation: Killion, M. C. (2004). “Myths that discourage improvements in hearing aid design,” The Hearing Review 11(1). [pdf]

Dispel the four myths discussed by Killion.

  1. Hearing people can't judge hearing aid fidelity. Researchers asked hearing and hearing-impaired people to rate the fidelity of some hearing aids. Their ratings were almost identical.
  2. Deaf people don't care about fidelity; they can't hear anyway. Actually, they care more.
  3. Intelligibility vs fidelity: pick one and sacrifice the other. The higher the fidelity, the higher the intelligibility; the best hearing aids do as little as possible.
  4. Transducer limitations make high fidelity hearing aids impossible. Right. So why do audiophiles and their recording studios have way better stuff than we do when they're using hearing aid receivers as components in their best earphones?