
How the ‘sound’ of AI can affect brands

Earlier this year, we experienced the audio equivalent of the infamous ‘The Dress’ Twitter debate, as one sound clip bemused listeners, who split into camps depending on whether they heard “Yanny” or “Laurel” in the same recording.

The illusion underlined the complexity of the brain’s aural processing systems, and the scope for technology to radically alter what we hear – factors like bass/treble balance, sound quality, volume and speakers versus earphones all seemed to influence what people heard in the same clip.

It’s a timely reminder, as tech giants release new waves of sophisticated voice tech, from Google’s Duplex to Amazon’s roll-out of eight new voices for Alexa and Microsoft’s acquisition of conversational AI start-up Semantic Machines. Software platforms are progressing towards more nuanced voice propositions that more closely mimic natural human speech.

It seems likely that this race to sound human could be the next battlefield for voice assistants, as brands look to “voice” their content or product in a way that offers the least possible friction.

Not only is this already something users look for in a voice assistant – research has shown that 45% of regular voice users choose it because it’s faster, whilst 35% turn to voice when they’re feeling lazy and don’t want to type* – but it’s also likely to affect how the brain responds to the content it is served.

In a study we conducted for Mindshare and JWT as part of their recent Speak Easy research, we found that – in line with users’ inclination to be “lazy” – using voice significantly eased the brain’s cognitive load compared with typing, demonstrating that voice interactions are far more intuitive than text-based forms of communication. The study itself consisted of observing 102 smartphone users while they interacted with the likes of Amazon’s Alexa, Google Assistant, text-based search and, finally, a real person. As they carried out various tasks, we monitored their brain responses on a second-by-second basis using Steady State Topography (SST).

For brands, the implications are double-edged.

In the first instance, offering a frictionless journey improves the customer experience, as users stumble across fewer pain points that come with clunky voice assistants. But, on the other hand, easing the cognitive load means significantly lower left- and right-brain memory encoding, as the brain has to work a lot less to make sense of the content it is processing.

This is potentially an issue for brands, because successful memory encoding – when a piece of content is stored in long-term memory – is key to effective communications: a brand cannot be recalled at pivotal moments in the purchasing journey if the brain has failed to store any relevant information away.

The good news, however, is that a lightened cognitive load means less focus on the interaction itself, and potentially more on the brand content being communicated. For brands that learn to exploit this through a clear and identifiable voice, it can increase the likelihood of successful memory encoding and subsequent recall at those crucial purchasing decision-making moments further down the line.

The desire of brands to create a positive voice-based user journey goes beyond just ease of use, however.

One key driver of successful memory encoding is emotional engagement, and the more human-like a voice assistant sounds, the stronger the emotional bond we’re able to form with it – and, by extension, with the brands being communicated through it.

Indeed, Mindshare and JWT’s study revealed that 37% of the regular voice users questioned loved their voice assistants so much they wished they were real! It is therefore unsurprising that many tech giants, including Amazon, Google and Microsoft, are looking to capitalise on this by making their assistants sound more human.

Developing an emotional connection between user and technology requires a fine balance, however, and developers must be careful not to fall into “uncanny valley” territory – when an almost-human piece of technology elicits feelings of unease. Thankfully, developers need not rush the process; in the course of our research for Speak Easy, we found that emotional responses to voice assistants increased over time, suggesting that, for now, frequency of interaction trumps sophistication. Moreover, our research found that frequent users of voice technology who ask spoken questions involving brand names show much higher levels of emotional response than when typing the same question.

For brands looking to capitalise on voice technology, it’s clear there is further potential just waiting to be activated.

Crucially, however, it will be those who take the time to fine-tune their approach and understand how the brain responds to voice-based communications who will experience the most resounding success – for the rest, there’s the danger of simply becoming white noise.


By Heather Andrew, UK CEO of Neuro-Insight

*Speak Easy, Mindshare and J. Walter Thompson Innovation Group
