One of the great things about having engineers for friends is that they can imagine in tech - a blue sky brainstorming session about filter design makes perfect sense in the right group. So it's David Klempner (UIUC '06) that I've got to thank for the following kick back onto my original SigSys project path, abandoned two years ago.
Simply put: Make hearing aids that don't stink. (Or ones that don't stink for me.)
Since the traditional "why don't we amplify selected frequencies?" method doesn't do much for folks with a hearing profile like mine, where the upper ranges are essentially gone (I've heard that mine is the worst kind of hearing loss to make hearing aids for), I'd been trying to come up with ways to interpret the full normal-human-hearing frequency range of sound within my limited frequency range of hearing.
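For contrast, that traditional scheme is easy to sketch. Here's a toy frequency-domain version of "amplify selected frequencies" (real hearing aids use filter banks and per-band compression, not a raw FFT); the band edges and gain are made-up numbers for illustration:

```python
import numpy as np

def amplify_band(x, fs, f_lo, f_hi, gain_db):
    """Boost one frequency band of signal x by gain_db decibels.

    Crude frequency-domain stand-in for what a conventional hearing
    aid does; f_lo, f_hi, and gain_db are illustration values only.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    X[band] *= 10 ** (gain_db / 20)          # dB to linear gain
    return np.fft.irfft(X, n=len(x))

fs = 16000
t = np.arange(fs) / fs                       # one second of audio
tone = np.sin(2 * np.pi * 4000 * t)          # a 4 kHz test tone
louder = amplify_band(tone, fs, 3000, 5000, 20)  # +20 dB in the 3-5 kHz band
```

The catch, for a profile like mine: a 20 dB boost in a band that's simply inaudible buys nothing, which is exactly what motivates moving information down in frequency instead.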
It's like suddenly having the bandwidth of a communications channel halved, but having to transmit the same information. Fortunately, things like the English language are coded with plenty of redundancy, so there's wiggle room that enables me to understand speech. For more information on this, check out What It Looks Like To Hear Like Mel. Basically, I look like a first-order low pass filter.
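That "first-order low pass filter" line can be made concrete. Here's a minimal numpy sketch of a one-pole low-pass filter applied to two test tones; the 1 kHz cutoff is a number I'm inventing for illustration, not an actual audiogram value:

```python
import numpy as np

def lowpass1(x, fc, fs):
    """First-order (one-pole) low-pass filter with cutoff fc."""
    alpha = 1 - np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += alpha * (sample - acc)    # classic leaky integrator
        y[i] = acc
    return y

fs = 16000
t = np.arange(fs) / fs
fc = 1000.0   # hypothetical cutoff; the real profile comes from an audiogram

gains = {}
for f in (250, 4000):
    tone = np.sin(2 * np.pi * f * t)
    out = lowpass1(tone, fc, fs)
    # compare steady-state RMS (skip the first half so the filter settles)
    gains[f] = np.sqrt(np.mean(out[fs // 2:] ** 2) / np.mean(tone[fs // 2:] ** 2))
```

With these made-up numbers, a 250 Hz tone passes almost untouched while a 4 kHz tone comes through at roughly a quarter of its amplitude, and a first-order filter keeps attenuating at 6 dB per octave above the cutoff.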
The audiograms (graphs) in that post were generated via hearing tests, which mostly involve beeping various frequencies at different volumes into headphones and having you indicate which ones you can hear (press a button, raise your hand). It's a sort of primitive way of determining a person's FIR, and a flawed one: as a kid, I swore I could hear the little microphone and equipment clicks when they tried to test the high frequencies. I couldn't hear the notes, but I heard the tiny pop similar to the one that happens when you turn on a microphone hooked up to a great big speaker (I'm not entirely sure what causes the impulse in the circuit, but the pop is the response of the sound system settling down from it).
Also, "raise your hand if you hear it" is subject to all sorts of experimental problems, because people are inconsistent, easily confused, and... ah... cheat. (Hey, I raised my hand when I heard the microphone clicks, because I wanted to be "as hearing as possible.") It would probably be more accurate to measure brain activity (which, as Raymond's first lecture mentioned, also depends on SigSys - the stuff is everywhere). Anyhow, the conversation went something like this, with my first chime-in being a laundry list of hearing aid schemas that aren't straight amplification:
David: The obvious thing to try would be outright compressing frequencies downwards; it would make everyone sound weird but you might get more useful information out of it.
Me: A few years back I tried all sorts of funky little things [as alternative hearing aids] - compressing all frequencies down, folding higher frequencies, modulating all frequencies above a certain cutoff downwards...
David: You were, in fact, both compressing and correcting for the volume and speed effects, right?
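One of the schemes in that laundry list, modulating frequencies downward, can be sketched in a few lines: take the analytic signal, then multiply by a complex exponential, which slides every frequency component down by the same offset (heterodyning). The 6 kHz tone and 3 kHz shift below are arbitrary illustration values:

```python
import numpy as np

fs = 16000
n = fs                                 # one second, so FFT bins land on whole hertz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 6000 * t)       # a tone well above a low cutoff

# Analytic signal via FFT: zero the negative frequencies, double the positive
X = np.fft.fft(x)
X[n // 2 + 1:] = 0
X[1:n // 2] *= 2
analytic = np.fft.ifft(X)

# Heterodyne: multiplying by exp(-j*2*pi*f_shift*t) shifts every
# frequency down by f_shift (3 kHz here, an arbitrary choice)
f_shift = 3000
shifted = np.real(analytic * np.exp(-2j * np.pi * f_shift * t))

peak_hz = int(np.argmax(np.abs(np.fft.rfft(shifted))))  # 1 Hz per bin
```

One reason everything "sounds weird" under a constant shift: a harmonic at twice the fundamental no longer lands at twice the shifted fundamental, so harmonic relationships get mangled.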
So David suggested this:
I'd start with whatever sampling rate your [laptop] hardware supports (presumably there's an LPF there for anti-aliasing purposes), and then compress that down by a factor of two. Or, for that matter, maybe less than two; a 20% compression might give you a noticeable improvement without as much distorting effect. (e.g., people's voices still sound human.)
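A naive first pass at David's suggestion, assuming nothing fancier than linear-interpolation resampling: stretching the waveform in time by some ratio divides every frequency by that ratio when it's played back at the original rate. The catch is the very speed effect David asked about above; the audio also plays back slower, which a real device would undo with something like a phase vocoder. The tone frequency and ratios here are made up for illustration:

```python
import numpy as np

def stretch(x, ratio):
    """Resample x so it is `ratio` times longer. Played at the original
    sample rate, every frequency drops by that same ratio, and the clip
    lasts `ratio` times longer (the speed effect)."""
    idx = np.arange(int(len(x) * ratio)) / ratio
    return np.interp(idx, np.arange(len(x)), x)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 4000 * t)    # a 4 kHz test tone

peaks = {}
for ratio in (2.0, 1.25):              # factor of two vs the gentler ~20% squeeze
    y = stretch(tone, ratio)
    spectrum = np.abs(np.fft.rfft(y))
    peaks[ratio] = np.argmax(spectrum) * fs / len(y)   # dominant frequency, Hz
```

With these numbers, the factor-of-two stretch moves the 4 kHz tone to 2 kHz and the 1.25x stretch moves it to 3.2 kHz, consistent with David's point that a gentler ratio trades less distortion for less relocation.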
Cool - a starting point for experimentation.