
Widex My Sound – an interview with Oliver Townend

Widex My Sound takes user personalisation to the next level, using Dual Artificial Intelligence (AI) and Big Data. We caught up with Oliver Townend, Widex Lead Audiologist, to learn more.

A couple of months ago, we introduced you to Widex My Sound, the new intuitive user interface that uses dual AI to offer WIDEX MOMENT™ owners two options for perfecting their individual sound, empowering every wearer to hear the way they want.

 

Two paths to personalisation for every kind of wearer

SoundSense Learn™ has provided wearers with true personalisation power through advanced artificial intelligence since it launched in 2018. However, we know that not all users want to be so engaged with their hearing aids; some would rather have their sound personalised automatically.

‘Made For You’ gives Widex Moment owners a second option for personalisation by automatically recommending two ideal settings for every environment. By bringing all of these personalisation options together in one intuitive user interface, My Sound, we take the legwork out of personalising sound for wearers who want a more instantaneous solution.

We caught up with Oliver Townend, Widex Lead Audiologist, to find out more about this exciting upgrade to the AI capabilities of Widex Moment.

 

What is My Sound and how does it work?

My Sound is all about AI, the individual user and users all around the globe coming together to deliver smart, fast and personal solutions in sound. We know our automatic features do a lot of great work for individual wearers, but sometimes what the wearer wants is different to what the automatic features deliver.

When we launched SoundSense Learn™ (SSL) in 2018, we began a journey that brought AI and the individual wearer together to solve this problem. By involving the user directly to train SSL we could quickly find individual sound solutions, in a simple but powerful feature. The other great outcome of thousands of users around the world interacting with our AI interfaces was the huge amount of preference data created. Using AI modelling and clustering we could use all this rich data to create highly qualified recommendations to offer to users when they find themselves in a particular listening scenario. The best bit is that this is a combined effort of thousands of users and AI, leading to benefits for even more users.
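To make the clustering idea concrete, here is a toy sketch of how preferences logged in one listening scenario could be grouped into a couple of representative settings to recommend. Everything here is an illustrative assumption (the scenario name, the two-dimensional bass/treble preference vectors, the use of plain k-means), not Widex's actual pipeline or data.

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Toy k-means: group preference vectors and return the k cluster
    centroids, which act as representative settings to recommend."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        new = []
        for i, cl in enumerate(clusters):
            if cl:
                new.append(tuple(sum(xs) / len(cl) for xs in zip(*cl)))
            else:
                new.append(centroids[i])
        if new == centroids:
            break
        centroids = new
    return centroids

# Hypothetical (bass_dB, treble_dB) preferences logged in a "restaurant"
# scenario: one group of wearers boosts bass, another boosts treble.
prefs = [(4.0, -2.0), (5.0, -1.5), (4.5, -2.5),
         (-3.0, 3.0), (-2.5, 2.5), (-3.5, 3.5)]

# Two representative settings, one per preference group.
settings = sorted(kmeans(prefs, k=2))
print(settings)
```

With clearly separated groups like these, the two centroids land on the group means, which mirrors the idea of offering a small number of well-qualified recommendations per environment rather than every individual data point.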

What is the technology behind My Sound and how was this developed?

The technology is part AI, part data and part real people! We had all this rich preference data from real people, shared with consent via a secure data architecture. Our very smart AI scientists put together a system that learns from that data to create these suggestions, and finally we built a new app with a new home for our AI features: ‘My Sound’.

We are at the forefront of AI features for personalising sound. We have incredible automation that really allows people to just get on with their listening day, but when someone wants something unique and different, we now have more than one way to provide a potential solution. We expect even more innovations to come. It felt right to have one place in the app where your sound is found, and that is ‘My Sound’.

How is the data collected? How much data has been collected since SoundSense Learn was first introduced?

The data is collected via the app, of course with consent, and handled incredibly securely in the cloud. We are only interested in information such as the situation someone is in and their sound preferences in that situation; this is how we can make smart recommendations to others. As for the amount of data, it is getting difficult to quantify. We have actually had to use representative samples of our large pool of data to make some calculations; in other words, we have too much to handle!

What are the benefits to the hearing aid user?

The clear benefit is being able to get the sound you want. Widex aims to provide that automatically, and we are very successful at doing so: with our Fluid Sound Technology and multiple Sound Classes we can steer the hearing aid through most day-to-day situations. But even the most sophisticated automatic system must follow pre-determined rules, so on occasion it will make a choice that is not exactly what the hearing aid user wants.

Widex is the only company using AI to solve this problem, helping a user out of a difficult listening situation in moments, either by applying a recommendation from our app or by finding a unique sound setting just for them. The solution is in the palm of their hand; this is incredibly empowering and puts the user back in control of their hearing.

 

What are the benefits to hearing care professionals?

Having satisfied, empowered clients should be a great benefit to professionals. We want to keep in touch with our client base, but it would be great if clients could also help themselves in the moment. Plus, offering ground-breaking, cutting-edge AI technology helps you differentiate your services.

 

In what kind of situations would My Sound be used?

I don’t think there really is a situation it wouldn’t be used in, except maybe one where you shouldn’t be using your phone! Everyone is unique and we all have our own tastes; until automation can accurately predict what each individual wants in a given moment, there will always be times when the user would prefer something else. If you can think of such a situation, then you have a moment when My Sound could be used.

 

To hear Oliver talking some more about Widex My Sound, check out the podcast.
