Wearing hearing aids for the first time can be overwhelming and frustrating. The brain is suddenly bombarded with auditory input, and it takes time to relearn what should be filtered out and what should be emphasized. Through small pieces of content and daily wearing-time goals, the hearing coach feature supports new wearers through this learning process.
The hearing coach was introduced to guide new hearing aid wearers through familiarization, motivating them to complete it and keep their newly acquired devices.
The feature was divided into two sections. The top section displayed how many hours the hearing aids were worn on the previous day, the wearing-time goal for the current day, and the goal for the next day. The bottom section displayed a carousel of articles for new hearing aid wearers. These articles were unlocked one day at a time, pacing the onboarding process and gamifying the experience.
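The one-article-per-day pacing can be expressed as a simple unlock rule. The sketch below is illustrative only; the function name and the idea of counting days since activation are assumptions, not the actual implementation:

```python
from datetime import date

def unlocked_article_count(activation_date: date, today: date, total_articles: int) -> int:
    """Hypothetical pacing rule: one article unlocks per day, starting on day one."""
    days = (today - activation_date).days
    if days < 0:
        return 0
    # The first article is available immediately; cap at the catalog size.
    return min(days + 1, total_articles)
```

A rule like this keeps the carousel growing daily without any server-side scheduling.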
Usage metrics and feedback gathered from customer consultants indicated that the feature was not well understood by most of our user base. An update was planned to introduce near real-time wearing-time measurement, but we needed to lay the groundwork before introducing it so that we could reliably measure its impact.
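For context, wearing-time measurement ultimately reduces to summing the wear sessions reported by the hearing aids for a given day. This is a minimal sketch under assumed data shapes; the session format and function name are hypothetical:

```python
from datetime import datetime

def daily_wearing_hours(sessions: list[tuple[datetime, datetime]]) -> float:
    """Sum (start, end) wear sessions into total hours for one day (assumed data shape)."""
    total_seconds = sum((end - start).total_seconds() for start, end in sessions)
    return round(total_seconds / 3600, 1)
```

The near real-time variant would update this sum as new sessions arrive instead of once per day.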
Feedback from customer consultants and current usage metrics helped us paint part of the picture, but we lacked usability documentation and other forms of concept validation for what had been implemented. We needed to establish a baseline and document the state of the feature before starting any design work.
I ran a 20-second test to gather first impressions of visual communication and comprehension among users similar to ours (U.S. residents aged 60+). Unsurprisingly, we learned that people didn't understand what was being measured or the time frame of the measurements. They expressed frustration over the layout and the sparse wording.
I followed up with a usability test. Here I was interested in the affordance and discoverability of the bottom section of the feature. I asked users to find and open an article from the bottom section of the screen, which required them to scroll down, navigate through the carousel of articles, and open the correct one.
Half of the users struggled with the task and openly expressed frustration over the functionality of the screen. Many were misled by the affordances of the screen elements, tapping on places that weren't interactive or realizing late (or not at all) that the screen could be scrolled.
I partnered with the Product Owner and the developers on my team in sessions to discuss the test results, exchange ideas, share inspiration, and agree on a direction for the design and the next tests.
I created new designs and prototypes for the screen, and, to gather independent results, ran two separate usability tests, one per section of the screen. The new designs improved on all usability metrics (time on task, average success rate, misclick rate, etc.), and participants complimented their clear communication.
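Comparing metrics across test rounds comes down to simple aggregation over per-participant results. The sketch below shows one way such metrics might be computed; the field names are illustrative, not taken from the actual testing tool:

```python
def summarize(results: list[dict]) -> dict:
    """Aggregate per-participant task results into common usability metrics.

    Each result dict is assumed to hold: success (0/1), time_s (seconds on
    task), misclicks, and clicks (total taps recorded for the task).
    """
    n = len(results)
    return {
        "success_rate": sum(r["success"] for r in results) / n,
        "avg_time_s": sum(r["time_s"] for r in results) / n,
        "misclick_rate": sum(r["misclicks"] for r in results)
        / sum(r["clicks"] for r in results),
    }
```

Running `summarize` on the results of each round makes the before/after comparison a side-by-side of three numbers per task.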
The fourth usability test was the most important one: I asked users to complete two different tasks on both the old and the new design. The tasks were followed by a small set of questions about their opinions on the designs and their ease of use. I wanted a solution that was not only easy but also pleasing to use.
Fortunately, the results of our user tests confirmed that the changes were the right ones: all usability metrics improved, and the visual design was praised.
I made a final visual refinement to improve the multimodal relationship between the bars and the measurement numbers, and validated the changes with another 20-second test. After confirming everything worked as intended, we were ready to hand off the design and break down the development work for the upcoming sprints.