What is My Sound and how does it work?

My Sound is our powerful yet easy-to-use, AI-powered sound personalisation feature, which gives users two ways to get the sound that feels right to them. The first is Made for You. It lets users get sound recommendations through their hearing aids’ smartphone app, based on how other users preferred to hear in similar situations. From the user’s perspective, it couldn’t be simpler: they select the situation and listening intention that best match their needs, and they instantly get a recommendation.
If they want to refine their sound even further, they can choose Create Your Own, where the AI guides them to a bespoke sound profile through a series of simple sound comparisons. This is where things get really interesting, because every time someone uses Create Your Own, the insights gained from their choices are fed back into the system to refine and improve the Made for You recommendations for other users. This is how our AI empowers users all over the world to help one another hear better.
How do choices turn into recommendations?

Even though the user’s experience of My Sound is one of speed and simplicity, the technology required to turn choices into recommendations is highly complex and requires the processing of vast amounts of data. But since Widex pioneered the use of real-time artificial intelligence in hearing aids in 2018, we’ve refined our techniques and gained a lot of insight along the way into how people like to hear.
For instance, when using Create Your Own, a user goes through a number of sound comparisons in order to arrive at a sound that they like. The end result is of course interesting for us to analyse, but we realised that the data generated on the way to that sound profile is a much richer dataset, and just the thing for an AI to get its teeth into.
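To make the idea concrete, here is a toy sketch (not Widex’s actual pipeline, and the numbers are invented) of why the comparisons themselves are such rich data: each step presents two candidate band settings and records which one the user picked, so even the rejected setting tells us something about the direction of the user’s preference.

```python
import numpy as np

# Each comparison offers two candidate settings (bass, mid, treble gains
# in dB) and records the user's pick. Values here are purely illustrative.
comparisons = [
    {"a": (0, -2, -3), "b": (0, 1, 1), "chosen": "a"},
    {"a": (-1, -3, -4), "b": (2, 0, 0), "chosen": "a"},
    {"a": (0, -4, -5), "b": (0, -1, -2), "chosen": "a"},
]

wins = np.array([c[c["chosen"]] for c in comparisons], dtype=float)
losses = np.array(
    [c["b" if c["chosen"] == "a" else "a"] for c in comparisons], dtype=float
)

# A crude preference signal: the average direction from the rejected
# setting to the chosen one. For this user it points towards reduced
# mid and treble, with the bass roughly unchanged.
preference_direction = (wins - losses).mean(axis=0)
```

Every comparison contributes a data point like this, which is why the path to the final profile is so much richer than the final profile alone.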
Revealing the insights hidden in the data

Using AI allows us to identify patterns that might not be obvious at first glance. As you can see in the diagram below, where each dot represents a different programme created as a result of a Create Your Own process, the raw data across situations does not show any obvious patterns.
(PICTURE)

But when we apply the AI, using Gaussian process modelling and mean-shift clustering to analyse the data, we start to see interesting patterns for the individual listening situations.
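For readers curious about the clustering step, here is a minimal sketch of flat-kernel mean shift on synthetic band-adjustment data. The data, the bandwidth, and the group centres are all made up for illustration; this is not the production algorithm, just the general technique named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (bass, mid, treble) adjustments in dB for one situation:
# one group turns mid/treble down, a smaller group turns bass/mid up.
group_a = rng.normal(loc=(0.0, -4.0, -5.0), scale=0.5, size=(40, 3))
group_b = rng.normal(loc=(3.0, 3.0, 0.0), scale=0.5, size=(15, 3))
points = np.vstack([group_a, group_b])

def mean_shift(points, bandwidth, n_iter=30):
    """Flat-kernel mean shift: repeatedly move each point to the mean of
    the original points within `bandwidth`, then merge nearby modes."""
    shifted = points.copy()
    for _ in range(n_iter):
        for i in range(len(shifted)):
            dist = np.linalg.norm(points - shifted[i], axis=1)
            shifted[i] = points[dist < bandwidth].mean(axis=0)
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) < bandwidth / 2 for m in modes):
            modes.append(p)
    return np.array(modes)

# Two well-separated groups converge to two modes near the group centres.
modes = mean_shift(points, bandwidth=3.0)
```

A key property of mean shift, and a reason it suits this kind of exploratory analysis, is that the number of clusters is discovered from the data rather than fixed in advance.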
Take the “Dining” situation, for example. Here the most prominent cluster (in blue) is one where the middle and treble frequency bands are turned down, while the bass is kept at the original level or only adjusted slightly. This cluster, which represents the majority of cases (64%), might reflect users wanting to reduce noise from utensils, or to focus on closer sound sources by reducing the overall level in the higher frequencies. The second most common cluster (in orange, representing 12%) has the bass and middle bands turned up while the treble is kept stable, perhaps for improved speech understanding.
Another interesting example is the “Quiet” situation. Here, there are two very different patterns that are almost equally frequent: in the blue cluster, bass and treble are turned up while the middle is kept close to the original or a little higher, probably indicating a wish for more awareness of the surroundings. In contrast, the orange cluster shows all three parameters being generally turned down, presumably representing a need for lower volume. So, two different kinds of ‘quiet’!