Have you ever pushed the wrong button? Clicked the wrong link? Forgotten how to activate that feature on your smartphone?
Well, it might not necessarily be your fault.
A great deal of time, thought and effort went into designing your intended experience with the app, website, or piece of technology in front of your eyes (or in or on your ears, in our case).
Every aspect is carefully considered:

- That moment when you first open the box.
- The time it takes you to get the product up and running.
- How far apart the buttons are and how firmly you have to push them.
- What material the case is made of.
Sometimes these things are done really well, and we don’t notice the process of setting up a new device or using it. Other times, we’re left confused and frustrated, and swear we’ll never use that thing again.
“Having positive, successful, and joyful experiences early on matter a lot to the brand,” Dr. Robert Schumacher, UX expert, Managing Director of UX consultancy Bold Insight, and Adjunct Instructor at Northwestern University, told researchers attending this year’s GN’s Advanced Science conference.
As User Experience (UX) designers know, and as Dr. Schumacher went to lengths to emphasize, making a person's experience with a product – from the first time they use it to the 1000th – positive and empowering is a crucially important task. That is why it is the domain of experienced researchers and designers in the fields of Human-Computer Interaction (HCI) and UX, who painstakingly refine and optimize your every interaction with that product.
At the intersection of behavioral science, computer science, design, and media studies, Human-Computer Interaction puts users in focus. It is concerned with how users interact with technology, as well as how technology feeds back to users, and it was the theme of this year's GN Advanced Science conference. GN researchers travelled from our labs in the US, the Netherlands, China and Copenhagen for the annual conference, taking in high-level strategy updates, a keynote session, and poster sessions where they shared research in progress.
Dr. Robert Schumacher digs into the history of user-centered design at the GN Advanced Science conference
Airplanes, missile alerts, and everyday tech: Why HCI matters

While forgetting which button does what on your television remote is annoying, user-centered design finds its origins in much more serious stuff. After World War II, engineers began to consider how reducing errors in the cockpit could improve flight safety. They began to think about not just the functioning of the machine itself, but how pilots intuitively interact with it – where flicking the wrong switch could, unfortunately, literally be a matter of life and death.
Fast forward to the 1970s, when computers moved from research settings to everyday use. Suddenly, there was a need to redesign computer interfaces so they could be operated by everyday users, not just experts. Out of HCI research came the computer mouse, as well as the graphical user interface popularized by Microsoft Windows, which remains the basic interface most of us work with every day.
As Dr. Schumacher pointed out in his keynote session, "You can have the best tech in the world, but if people can’t use it, it doesn’t work."
A key challenge for companies in this space is that as technology is invested with ever-more capabilities and features, we as users hold high expectations that increasingly sophisticated features should still be easy to learn and use. I mean, when was the last time you read a user manual?
Good usability has always been a core focus in the design of such highly personal products as GN's hearing aids and headsets. In the ongoing challenge of bridging the widening gap between users and technology, our focus falls in several key areas.
People buy products and services to get jobs done, so the starting point must always be to ask, “In which ways are user needs unmet, or only partially met?” From these insights, we can work out how new features should be designed to meet these needs, and how interactions with this technology can be optimized.
Designing interactions must take into account people's limitations – cognitive, social, and physical – but also the uniquely human abilities they bring to that interaction.
In fact, as artificial intelligence plays a greater role in our society, how we utilize our uniquely human strengths becomes particularly relevant. There are things we can do that computers can’t – yet – and working in symbiosis is key. The best experiences for us as users will take place at the intersection where we combine computers’ strengths, like raw processing power and memory, with our human abilities, such as planning, awareness of context, and complex learning.
Inevitably, as devices become more intelligent, they become more complex, and this pushes people and technology further apart. Therefore, important work lies in closing this gap by designing features that are as close as possible to how people naturally behave.
How could this look? Take the example of needing to adjust the settings of your hearing aids in the middle of a conversation. Instead of pulling your smartphone out of your pocket or pushing a physical button, could other more natural, less intrusive behaviors like shaking your head or pointing your finger initiate that action instead?
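To make the idea concrete, here is a minimal sketch of how a head-shake gesture might be detected from a hearing aid's motion sensor. Everything here – the function name, the thresholds, and the use of a lateral acceleration trace – is an illustrative assumption for this article, not GN's actual implementation: the core idea is simply that a deliberate shake produces several strong left-right reversals, while ordinary movement does not.

```python
# Hypothetical head-shake detector. All names, thresholds, and the
# input format (a list of lateral acceleration samples, in m/s^2)
# are illustrative assumptions, not a real product's implementation.

def detect_head_shake(lateral_accel, threshold=2.0, min_reversals=3):
    """Return True if the trace looks like a deliberate head shake:
    several strong left-right direction reversals in a row."""
    reversals = 0
    prev_sign = 0
    for sample in lateral_accel:
        if abs(sample) < threshold:
            continue  # ignore weak movement (walking, nodding, drift)
        sign = 1 if sample > 0 else -1
        if prev_sign and sign != prev_sign:
            reversals += 1  # direction flipped: one shake reversal
        prev_sign = sign
    return reversals >= min_reversals

# A vigorous left-right-left-right trace counts as a shake...
print(detect_head_shake([0.1, 3.0, -3.1, 2.8, -2.9, 0.2]))  # True
# ...while gentle ambient movement does not.
print(detect_head_shake([0.3, 0.5, -0.4, 0.2]))  # False
```

In a real device, a detector like this would have to be tuned carefully so that everyday motion never triggers a settings change – exactly the kind of refinement HCI researchers spend their time on.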
“Designing good experiences is hard,” Dr. Schumacher admitted to GN’s brainiest brains.
This rings particularly true as people expect more from their devices and want more personalized solutions. And that is exactly why HCI research such as gesture recognition, which could make increasingly advanced technology simpler and more intuitive to use, will only become more valuable in the future. Because the more people are able to take advantage of the advanced capabilities in their devices, the more they actually use them – benefitting from better hearing, better conversations, more connectivity, better concentration, and ultimately a higher quality of life.