Can you give us an example?
If we have three acoustic parameters – low, mid, and high frequencies – and each can be set to 13 different levels, that's 13 × 13 × 13, or 2,197, combinations of the three settings. Comparing every one of those combinations against every other to find the optimal listening settings would mean over 2.4 million pairwise comparisons.
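A quick sanity check of that arithmetic (an illustration only, not anything from Widex's tooling):

```python
from math import comb

levels = 13      # levels per acoustic parameter
parameters = 3   # low, mid, and high frequencies

settings = levels ** parameters   # 13^3 = 2,197 distinct combinations
pairs = comb(settings, 2)         # every head-to-head comparison

print(settings)  # 2197
print(pairs)     # 2412306 -> over 2.4 million comparisons
```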
No person can work through that many comparisons to find their optimal setting. But SoundSense Learn can reach the optimal outcome in 20 interactive steps with a human, or fewer.
So the listener gets immediate gratification without changing any of the programming that the hearing healthcare professional has done. But the listener has the power to refine and adjust their acoustic settings to meet their specific listening intention in real time – right here, right now. That's an example of a symbiotic collaboration that goes beyond what a human can achieve alone.
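To make the "20 interactive steps" concrete, here is a minimal sketch of an interactive paired-comparison loop, assuming a simple hill-climb in place of the real model. SoundSense Learn's actual engine is a machine-learning model and far more sample-efficient; the function names, the simulated listener, and the target levels below are all hypothetical.

```python
import random

LEVELS = range(13)  # each band (low, mid, high) has 13 possible levels

def listener_prefers(a, b, ideal=(4, 9, 6)):
    """Stand-in for the listener's A/B choice: True if setting `a` sounds
    closer to their (unknown) ideal than `b`. In a real system this is the
    user tapping A or B, not a function the software can evaluate."""
    def distance(s):
        return sum((x - t) ** 2 for x, t in zip(s, ideal))
    return distance(a) < distance(b)

def refine(steps=20):
    """Refine the (low, mid, high) levels with a budget of ~20 A/B comparisons."""
    current = tuple(random.choice(LEVELS) for _ in range(3))
    for _ in range(steps):
        # Propose a neighbouring setting by nudging one random band up or down.
        band = random.randrange(3)
        candidate = list(current)
        candidate[band] = min(12, max(0, candidate[band] + random.choice((-1, 1))))
        candidate = tuple(candidate)
        # Keep whichever setting of the pair the listener prefers.
        if listener_prefers(candidate, current):
            current = candidate
    return current

print(refine())  # the refined (low, mid, high) levels after 20 comparisons
```

The point of the sketch is the loop structure: each step presents two settings and keeps the preferred one, so the listener only ever answers "A or B" while the search covers a 2,197-setting space. A learned model lets the real feature choose far more informative comparisons than random nudges, which is what makes roughly 20 steps enough.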
How does this kind of machine learning benefit the hearing aid user?
It allows a hearing aid user to customize and personalize a soundscape environment to their preference and intention. Previously, a hearing aid user would have to try to remember all of the details of a difficult listening situation so they could explain it to their hearing healthcare professional at a later point. That's really hard to do.
You’re collecting anonymous data from the SoundSense Learn feature to understand how it’s used. What interesting findings have you made?
One of the areas to really highlight is users creating a program to reuse their preferred settings – namely in the workplace. We saw 141 programs created and reused by users in their workplace, and they were all very different.
A couple of years ago, a MarkeTrak study told us that 83% of hearing aid users reported being satisfied with their hearing aids in their workplace. That still leaves 17% who are either not satisfied with the sound in their workplace or would like it to be more customized. SoundSense Learn is a powerful and effective way for them to achieve that goal.
What are some examples of machine learning or AI from the hearing aid industry?
Currently, Widex is the only manufacturer that uses real-time machine learning. I know that other manufacturers are using things like proximity sensors for fall detection and the Internet of Things, but my concern is that none of these features focus on sound quality, patient preference, or the patient's listening intentions.
What do these new technologies mean for hearing care professionals?
Hearing technology has evolved, and we must evolve with it. Hearing healthcare providers will not be replaced by technology. But as in most professions, providers will need to master emerging technology to keep pace with the industry and with technological development in general.
In your article, you write that in the future, “real-life applications of machine learning and AI go beyond what a human can achieve alone” – what might we be seeing as a result of this in the future of hearing aids?
We are on the cusp of integrating advanced technology that has never been seen before into the hearing industry. I can only imagine what the future holds… but I will leave that up to the incredible engineers in the hearing aid industry.