The frameworks and concepts that the behavioural sciences have defined over the last few decades give us a set of very powerful tools with which to understand and steer behaviour, but we need to read the instructions carefully.
Behavioural science is increasingly popular and in recent years has been experiencing a period of exponential growth, becoming rapidly more mainstream. Over the course of the last decade, ‘nudges’ have been demonstrated as effective and low-cost ways to steer behaviour. A tiny change in the contextual environment – choice architecture – or a small change in wording or layout can lead to significant and measurable changes in behaviour.
We are seeing more behavioural insight intelligence being disseminated by academics in online talks and public seminars, new books being published every month, more courses in the behavioural sciences, more in-house behavioural economics positions in global companies, more start-ups grounded in behavioural science and more governments applying behavioural science to public policy. There are now several government-based behavioural insights teams around the world, including the UK, US, Australia, Germany, Singapore and the Netherlands, with more being set up.
Inspired by a book, a talk or a short course, people all over the world are quite rightly excited about applying what they have learnt, because it feels simple. People pick up on the most salient examples, such as increasing organ donation registration by changing the default, or the impact of watching eyes. These types of examples often appear easy and straightforward to execute partly because of the simple and appealing way in which they are described. As behavioural scientists and psychologists themselves note, we are more likely to believe and be persuaded by information and ideas when they are presented to us in a simple way with high levels of what experts call ‘cognitive ease’.
This can lead to some enthusiastic but often confused applications.
- Firstly, new BE practitioners may sometimes be keen to try out what they have learnt without having fully grasped how to apply a particular concept. For example, we’ve seen several organisations applying social norms in a sub-optimal way, such as a sign on the rear of a car in Australia saying “2 in 3 car seats are not being used properly”, or in the UK “57% of parents don’t use any parental controls on their home computers”. As I’m sure you’ll have spotted, these examples draw attention to the fact that the majority of people are practising the wrong behaviour, rather than the right one. We want to do what others are doing, so knowing that the majority of people are doing the wrong thing won’t help to persuade us to change our own behaviour.
People may not be aware of the detail of the initial research which was carried out in order to understand behaviour, or the scientific trials that may have also taken place to isolate the effects of an intervention. So we need to be careful that we don’t end up taking shortcuts and applying the nudges and strategies offered by behavioural science without a deep enough understanding of the behaviour we seek to influence.
- Secondly, although the frameworks and concepts coming from behavioural science over the last few decades have illustrated how complex human behaviours can be better understood, much of the original research has been conducted in a lab context and not in the real world, where there are many more variables and holistic factors to account for. It’s also evident that we're not yet clear on how ‘generalisable’ some of these effects are. Nick Chater, Professor of Behavioural Science at Warwick University notes: “What the past research shows is what things matter and what might be important. It won’t define which factors are essential.” He adds that interventions “still need to be tested before being rolled out on a grand scale.”
So the findings of behavioural science – the varied concepts and interventions – need very careful and thoughtful application in the field, and we might need to adapt or fine-tune them when implementing them in the real world.
In this article we highlight some of the more surprising results of recent trials and interventions, identifying two key factors behind these varying results, before outlining three steps to conduct the most effective and robust behavioural interventions.
The dual influence of context and time on behavioural interventions
Behavioural scientists and practitioners have come across some unexpected outcomes in behavioural trials in the real world, such as in a commercial or policy setting, creating a valid worry that a little knowledge and enthusiasm might in fact be dangerous.
We discuss some of these findings below, which often relate to the impact of:
- the specific context on the intervention
- holistic or long term effects from a single intervention
Part One: How context can affect outcomes of an intervention
While behavioural science has identified a plethora of cognitive biases and concepts impacting on our behaviour, the impact can sometimes be stronger or weaker depending on the context in which they are applied. Here are six fascinating examples from recent trials:
1) Commitment and priming: One well-known study a few years ago demonstrated how simply signing an insurance form at the start, as opposed to the end, helped to make people more honest, possibly by increasing commitment to telling the truth and reminding people that it’s important to be honest. A trial by a car insurance company found that people who signed the form at the start tended to be more honest about reporting the number of miles they drove annually – something which ultimately increases their premium.[1] However, David Halpern, CEO of the UK Behavioural Insights Team (BIT), recently noted that when BIT ran the same trial in the UK they found very little effect, failing to replicate the strength of the original findings. Asking people to sign their name at the start worked no better than rewriting the letter in plain English.[2]
2) Defaults: Another study found very different results from changing the default – a well-known ‘nudge’ concept. Previous research has shown how effective automatically enrolling people into a scheme can be. Be it into a retirement savings scheme or registering as an organ donor, allowing people to opt-out if they want to leads to much higher participation rates than letting people choose to opt-in. Behavioural scientists believe that it helps to overcome procrastination; people have good intentions to set up their pension, but often don’t get round to it. This is an incredibly powerful finding, but we can't assume it will be effective in all contexts. We need to understand behaviour fully in every context and ideally test an intervention on a small scale first.
Take this recent finding: A study by Erin Bronchetti and her colleagues found that defaults seemed to be ineffective in one financial context. They conducted a field experiment on low-income individuals (household income under $50,000 a year) filing their taxes in Pennsylvania, US. On filing taxes, most people receive a refund of some kind. President Obama recently identified tax refunds as a ‘saveable moment’ – a great opportunity to save and put some of that lump sum away. Inspired by this, the researchers tested to see if defaults might have an impact on people saving some of their tax refund.
- Participant tax filers in the control group simply received their refund and were told about the opportunity to save into US Savings Bonds. As expected, few of them chose to save.
- The intervention group however, were informed that 10% of their refund would be allocated to US Savings Bonds unless they opted out. All of them opted out, ignoring the default.
Why might the default have been ineffective in this context? The researchers believe, based on enquiries with the participants, that the majority had already ear-marked their expected refund for other plans. Crucially, the tax refund was not an unexpected gain of surplus money, but one that had already been ‘spent’ in the minds of the money-stretched participants.[3]
3) Peer and social norms effects: One of the most well-known concepts in behavioural science is that knowing how our peers are behaving can influence our own behaviour – we want to do what others are doing – and there is considerable research demonstrating its effect. However, some recent studies have found scenarios where the concept has been less effective in changing behaviour. A Fortune 500 manufacturing company wanted to increase enrolment and contributions into retirement savings plans – 401(k)s – and recruited a team of behavioural scientists to devise an approach. They designed a simplified enrolment letter, which included information about the proportion of the employee’s peers who were saving, e.g. “Join the 87% of 25-29 year old employees at our company who are already enrolled in our 401(k) plan.”
Although these mailings led to a dramatic increase in enrolment overall, the effects were unequal across employees. Low wage workers on the shop floor tended to carry out upwards social comparisons and were actually discouraged by the information about their peers; they found it demotivating to know that so many of their peers were already saving for retirement. As the authors state, “social norms marketing may have limited power and can even produce the opposite of the intended effect in important settings.” So keeping in mind the social context and mindset of people when applying concepts such as peer effects and social norms is essential.[4]
4) Messenger effects: A large scale experiment run by the UK’s Financial Conduct Authority looked at how to improve communication to consumers being offered redress for mis-sold products using different behavioural science based interventions. Companies often write to their customers notifying them that they might be eligible for compensation, but usually achieve a poor response rate. The FCA investigated how to increase response rates by working with a firm that was voluntarily writing to almost 200,000 customers about a past failing in its sales process. Using insights from the behavioural sciences, they designed several different versions of a letter with the aim of increasing customer response rates.
While many of the interventions had positive effects on the response rate, such as the inclusion of bullet-pointed information which raised response rates by 3.8% over the control[5], one intervention had no effect, despite promising findings elsewhere. Previous research has demonstrated the potential of an authoritative messenger to put weight behind a request or finding. For example, one study by Robert Metcalfe testing different interventions to reduce paper usage in an organisation found that if an email request to use less paper came from the CEO rather than another employee, the effect on reducing consumption was doubled.[6] For the FCA though, including the signature of the CEO on the letter actually had no impact – even a slight negative effect - on response rates. This illustrates again how the context is a big factor in the success of any intervention and concepts may not be generalisable or global in their effect. Just like an actual toolbox, some BE tools are useful in some circumstances, but others less so.
5) Anchoring effects: Anchors are probably one of the most tested and well-researched concepts in behavioural science – in the lab at least. So researchers at University of California, Berkeley wanted to find out what sort of effects they might have in a real retail setting.
Over the course of several years’ research, they found anchoring effects to be a little more complex than perhaps first thought, with widely varying effects. They tested numerous anchoring effects across 16 different large-scale field trials in a range of commercial settings, from online retailers selling ebooks and museum tickets to a stall selling doughnuts. In each of these contexts their initial trials revealed little evidence of price anchoring. For example, the researchers collaborated with an online ebook retailer to test the impact of anchors on pay-what-you-want pricing for bundles of ebooks. Yet customers paid just over $8 regardless of whether they were exposed to a $12 or $15 price anchor online. In another experiment selling doughnuts at an outdoor plaza, signs with different suggested price anchors for pay-what-you-want pricing – $1 or $3 – had no impact on the price paid. Customers all paid around $0.90 on average.
Four of their studies did show robust and statistically significant evidence for anchoring effects though. In fact, one study produced a considerable impact, larger even than some studies based in a lab. Another set of customers at the same online ebook retailer were asked what proportion of the price they paid should go to the ebook author. Here, anchors of higher proportions (89%, 90% and 91%) yielded much higher nominations by customers – 89% on average – than anchors of lower proportions (49%, 50% and 51%), which yielded an average 62% allocation of a customer’s payment to an author. Their other three studies – using the doughnut stand and the online ebook retailer again, but with small changes to the choice and presentation of the anchors – also yielded statistically significant results.
Their overall experience illustrates the complex challenge of using anchoring – it was only by taking a closer look to understand the context and adding little tweaks here and there to the purchase context that their trials began to demonstrate price anchoring. It’s not quite as simple as ‘one anchor fits all’, but if we invest, work hard and keep on experimenting and learning in the context we are studying, we will find an anchor variation which works. As the researchers note:
“eliciting anchoring in the real world is more difficult than […] the anchoring literature might suggest. … It took many attempts (and tens of thousands of participants) before we had some sense of the variables that mattered.”[7]
We need to take time to understand how people are navigating the specific purchase context and what is influencing current decision making. This will then inform and inspire ways of tweaking and changing the choice architecture and what different anchors might be effective – as always, context is king.
6) Online versus offline context: The research on anchoring at the ebook retailer above also hints at a new challenge – the possibility that the principles and concepts of behavioural science might vary between online and offline behaviour. Behavioural economist Shlomo Benartzi and his colleagues are now investing time in understanding how people behave differently on different types of screen. For example, some concepts may work less effectively online:
- Research is showing that we think faster on smaller screens, making us more prone to biases and sub-optimal decision making
- One study on hospital choice has suggested that online defaults make people less likely to choose the default option – they unclick it!
- Behavioural scientists Robert Metcalfe and Paul Dolan also found that social norms nudges seen online by customers were not effective in reducing household energy use in one experiment.
Conversely Benartzi has found that other concepts outlined by behavioural science work better online. For example, visual biases are more pronounced if we are in a visual environment and in a ‘System 1’ fast-thinking mode. And the more we multi-task – as we increasingly do in our digitally connected, busy lives - the more those biases affect our decisions.[8]
Critically, we don’t yet know if and how the principles and concepts of behavioural science differ in an online context versus an offline context, so we need to explore further, test and learn.
Being conscious of some of the potential contextual causes
One reason is that we're all different, or heterogeneous, in the way that cognitive biases and heuristics affect our decision making and behaviour. We exist on a spectrum and are affected by cognitive biases to different degrees depending on our genetics, the environment we were brought up in, age, intelligence, numeracy and cultural factors. So some of us are more loss averse or conformist, while others are more impulsive or more prone to framing effects.
Another reason is that the extent to which we might be influenced by biases and heuristics can depend on both the context or location we are in and the time or state of mind we are in. So a ‘nudge’ might work well in one location but not another, or might work badly at one time, perhaps when we are busy or focused on something else, but be more effective at another time. Sometimes our System 2 may be overloaded meaning we may not engage with a complex message; something which appeals more to our System 1 may be more effective, or receiving a complex message at a time when we are feeling more alert may lead to a better response.
The impact of an intervention can depend on what other factors might be influencing behaviour. Attending to one cognitive bias but not another may lead an intervention to have little impact. For example, if there are well-designed anchors for a series of products but there is too much choice, or the choices are poorly structured, then the impact of the anchors may not be reflected in sales. Similarly, if loss aversion has been leveraged in a message to a consumer, but the message lacks cognitive ease, then any impact of loss aversion might be lessened.
Part two: Thinking holistically about behavioural interventions
Beyond these contextual effects, a second factor needs to be kept in mind. For some behavioural change interventions, particularly behaviours which are repeated frequently over the long term, such as consuming energy in the home, daily medication or exercise, we also need to consider the wider holistic effects of nudging and steering behaviour in order to be sure of the overall outcome. These holistic effects include five concepts, some positive, and others which often end up neutralising the effect of any nudge:
Positive effects:
- Persistent, sustainable impacts - how persistent and sustainable is a behavioural change intervention over the long term, particularly once an intervention is removed or ceases?
- Spillover effects – if we steer behaviour change in one issue, does it also change other related behaviours?
Neutralising effects:
- Displacement effects – if we nudge in one place, does it simply shift behaviour elsewhere?
- Licensing effects – if we nudge a positive behaviour in the morning, do people ‘nudge back’ in the afternoon and license themselves to do something less ‘good’?
- Compensating effects – if our behaviour has been less than exemplary we might try to compensate by doing something more worthy.
Below we discuss these five concepts in more detail:
- Persistent, sustainable behaviours: In many interventions and experiments, it’s important to consider the longer term consumer behavioural journey. It’s actually relatively easy to change someone’s behaviour once - and for infrequent, occasional behaviours, that’s great to know. But in cases where a behaviour is carried out on a regular basis, we need to know how sustainable an intervention is and measure the impact of an intervention over several months or even years.
Does an intervention change behaviour and habits for the long term? Is it a sustainable change in behaviour? To make it sustainable, we might need to run an intervention continually over a period of time and for habitual behaviours, this is likely to be at least a couple of months.
For example, researchers at UCLA ran a recent trial in a residential community in Los Angeles aimed at nudging households to reduce their energy use and found evidence of persistent and durable effects – people were still saving a considerable amount of energy three months after the start of the intervention. By framing messages reminding households of the need to reduce energy consumption in order to reduce pollution and prevent impacts on health such as asthma and cancer, households reduced their energy consumption by 6% over the longer term.[9]
Earlier research into energy savings by Hunt Allcott and Todd Rogers also investigated how persistent a behavioural intervention had been over the long term. They tracked 78,000 households on the US West coast from 2008 to 2012 who had been receiving Home Energy Reports sent by the behavioural science based energy software company, Opower. Opower’s reports nudge households into using less energy by informing them how much their neighbours are using – knowing that we are using more energy than our neighbour often compels us to reduce our own consumption in line with others.
Although these reports had already been shown to reduce households’ energy consumption by between 2-3%, Allcott and Rogers were interested in how long these effects lasted after households no longer received these reports. When the reports stopped, there was some ‘backsliding’ and customers began using more energy – but not as much as they had used originally; the savings decayed by only 10-20% each year. Two years after the reports stopped, households were still using less energy – probably because new habits had become more engrained, such as automatically switching off lights or turning down the thermostat if leaving the house, or due to small but permanent changes in the technology now fitted in the house such as installing energy saving light bulbs.[10]
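The backsliding arithmetic above can be sketched as simple geometric decay. This is an illustrative back-of-the-envelope model, not the paper’s estimation method; the initial 2.5% saving and the 10-20% annual decay rates are assumptions drawn loosely from the figures quoted above.

```python
# Illustrative sketch: how a treatment effect fades once an intervention stops.
# Assumed figures (not the paper's exact estimates): an initial energy saving
# of 2.5% that 'backslides' by 10-20% of its remaining size each year.

def remaining_effect(initial_saving: float, annual_decay: float, years: int) -> float:
    """Treatment effect left after `years` of geometric decay."""
    return initial_saving * (1 - annual_decay) ** years

for decay in (0.10, 0.20):
    after_two_years = remaining_effect(0.025, decay, 2)
    print(f"decay {decay:.0%}/yr -> saving after 2 years: {after_two_years:.2%}")
```

Under these assumptions, most of the saving is still in place two years after the reports stop, which is consistent with the persistence the authors report.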
- Positive spillover effects: Tracking behaviour in order to spot possible spillover effects is also crucial and field experiments have found evidence of these effects in recent interventions. For example, prompting hotel guests to pledge to reduce their impact on the environment by reusing their hotel towels led to spillover effects during their hotel stay. Researchers found that guests were not only more likely to reuse their towels, but that they were also more likely to carry out other more environmentally friendly behaviour such as turning the lights off.[11] So the overall behavioural impact might be much more powerful than the initial objective.
- Displacement effects or ‘Kicking the can down the road’: This effect can occur when we nudge – successfully – in one location, but merely displace behaviour, shifting it elsewhere.
For example, cycle theft signs inspired by behavioural science placed above a bike rack in Newcastle University dramatically reduced bike theft at that location. Bike racks which had ‘watching eye-posters’ placed above them experienced 62% fewer thefts than the previous year, but those racks elsewhere without watching-eye posters saw thefts increase by 65%. The thief merely stole from another rack![12] Although we may think we have had a positive impact on negative anti-social behaviour, it might not get rid of the problem but simply move it elsewhere, displacing it.
Displacement effects have also been illustrated in a field trial looking at vaccinations. At first glance, it appeared that automatically opting people in to receive a flu shot at a clinic – writing to them with an appointment date – had a big impact on behaviour. 56% of patients who had been defaulted in this way kept the appointment, as opposed to just 5% of those who had to phone to schedule their appointment. However, when researchers tracked vaccinations over the longer term, they found that much of the uptake in vaccinations was explained by patients who usually received their flu vaccination from another clinic and had merely switched surgery. So taking the time to get a comprehensive understanding of how the results have been generated and including a carefully constructed control group is critical.[13]
This result leads us to ponder other findings. For example, a recent trial at one company to increase charitable giving via payroll found that opting people into a scheme had a dramatic impact on participation rates which rose from 10% to 49%.[14] But it would be interesting to understand what impact the opt-in had on charitable donations overall. Did the workplace scheme crowd out giving elsewhere? Did those people who were now donating through their workplace give less to other charities they had been donating to before? Or maybe there were positive spillover effects from donating in the workplace so that people donated a little more elsewhere too?
- Licensing and compensating effects or ‘nudging back’: Another reason why it’s important to research the long term, dynamic impacts of an intervention is that there is growing evidence that sometimes people may ‘nudge back’ or license themselves in a series of connected behaviours throughout the day or week. They can for instance do something ‘good’ and because of this feel permission to then do something ‘bad’ – a concept known as a licensing effect. Conversely, if we feel our behaviour has been less than exemplary we might try to compensate by doing something more worthy, purging ourselves of the tainted past – known as a compensating effect. Several studies have found evidence of licensing and compensating effects, from purchasing environmentally friendly products to healthy eating behaviours.
For example, a study by Nina Mazar and Chen-Bo Zhong, researchers at the University of Toronto, showed that when people had purchased an organic and environmentally friendly or ‘green’ product online as opposed to a more conventional product, they were less inclined to act altruistically in a task and were more likely to lie and cheat.[15]
So being aware of the holistic effects – displacement or spillover effects, or licensing and compensating effects - and having a better understanding of the long term impact of a behavioural intervention is important.
Overall, it’s clear that we need to investigate the context more closely and also measure for evidence of holistic and long term effects, in order to have a more complete understanding of the behaviour change we have achieved.
Conclusion: The behavioural change checklist
Keeping the above points in mind, how should we set out to effect behaviour change, knowing what we know and also being aware of what we don’t know? Firstly, we should recognise that behavioural science has huge potential and has already had notable impacts on a huge variety of issues and challenges. But to ensure the most effective and powerful behavioural interventions we recommend a three-step process.
- First, we need to understand existing behaviours, and the triggers and barriers to behaviour or actions currently operating in the specific context we are researching. This will often involve carrying out and investing in consumer insight research, often longitudinal, to better understand behaviour. Research today can utilise technology such as mobile apps or simple devices which record and track behaviour without being intrusive. This can be combined with research techniques which disrupt habitual behaviour and existing preferences, allowing what may be influencing a consumer’s decision-making and behaviour to emerge.
- Second, we can then use the findings from this research to develop a number of hypotheses about behaviour and decision making and what interventions might have the potential to change behaviour. Behavioural economics can provide a robust framework with a clear set of concepts to make sense of what we have observed and also to inspire ways to nudge and steer behaviour.
- Third, we need to test behavioural interventions in the real world, running trials and experimenting to see what works in our own situation and context. Wharton Business School behavioural scientist Katy Milkman argues that “Experimentation is key to learning in organizations… only through careful experimentation can causal inferences be drawn about the success or failure of new ideas.”[16]
In a perfect world we would test each of the interventions in isolation against a control. However, sometimes this is not possible and in that case we need to be open to what might also be impacting behaviour and having an effect. But we should try to test each intervention separately, or at least, incrementally so we can see the layered effects. For example, we might test the impact of defaults on response rate to a letter separately from the impact of a novel envelope or a salient image. When testing, keep in mind some of the holistic and long term effects on behaviour, such as how sustainable a behaviour change is over the long run. With the growth of technology-based tools to track behaviour, this is likely to become easier to do and some companies are already gathering data to enable this to happen.
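Testing one intervention at a time against a control typically comes down to comparing response rates between two randomised groups. As a minimal sketch, the two-sample proportion z-test below compares a control letter against a redesigned letter; the sample sizes and response counts are invented purely for illustration.

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sample z-test for a difference in response rates.

    Group A is the control, group B the intervention. Returns the
    uplift (p_b - p_a) and the z statistic under a pooled-variance null.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z

# Hypothetical trial: 10,000 letters per arm; the redesigned letter
# lifts responses from 21.5% to 25.3% (a 3.8 percentage point uplift).
lift, z = two_proportion_z(2150, 10000, 2530, 10000)
print(f"uplift = {lift:.1%}, z = {z:.2f}")
```

A z statistic well above ~1.96 would indicate the uplift is unlikely to be chance at the 5% level; testing variants separately (or incrementally) keeps each comparison as clean as this two-arm case.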
These new frameworks and concepts grounded in behavioural science are incredibly powerful tools for both understanding and influencing behaviour, but we need to ‘BE careful’; we need to read the manual fully, to take the time to understand existing behaviour and the context surrounding it before we get to work. We have great power in our hands.
[1] Shu, L. L., Mazar, N. Gino, F., Ariely D., and Bazerman, M.H., 'Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end’, 2012, Proceedings of the National Academy of Sciences, 109 (38), 15197-15200
[2] David Halpern speaking on BBC Radio 4 Today programme, August 2014; BBC News, “Just how well has the 'nudge unit' done?”, 26 August 2014
[3] Bronchetti, E.T., Dee, T.S., Huffman, D.B., Magenheim, E. “When a Nudge Isn’t Enough: Defaults and Saving Among Low-Income Tax Filers”, March 2011, NBER Working Paper No. 16887
[4] Beshears, J., Choi, J.J., Laibson, D., Madrian, B., Milkman, K. “The Effect of Providing Peer Information on Retirement Savings Decisions” NBER Working Paper August 2011
[5] FCA “Encouraging consumers to claim redress: evidence from a field trial” April 2013
[6] http://www.britac.ac.uk/policy/Nudge-and-beyond.cfm
[7] Jung, M.H., Perfecto, H., Nelson, L. “Anchoring in payment: Evaluating a Judgemental Heuristic in Field Experimental Settings” Available at SSRN, May 2014
[8] NSW Govt, Dept of Premier & Cabinet ‘Behavioural Insights Community of Practice’ Event Review: Professor Shlomo Benartzi on ‘Digital Nudging’, 14th Nov 2014; http://bi.dpc.nsw.gov.au/blog/event-review-professor-shlomo-benartzi-on-digital-nudging; http://www.thelavinagency.com/speaker-shlomo-benartzi.html
[9] Households were told “"Last week, you used XX% more/less electricity than your efficient neighbours. You are adding/avoiding XX pounds of air pollutants, which contribute to known health impacts such as childhood asthma and cancer." - Asensio, O. and Delmas, M.A., “The Dynamics of Information framing: The case of energy conservation behaviour”, Draft: May 2014
[10] Allcott, H., Rogers, T. “The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation” NBER Working Paper, October 2012
[11] Baca-Motes, K., Brown, A., Gneezy, A., Keenan, E.A., and Nelson, L.D., “Commitment and Behavior Change: Evidence from the Field” Journal of Consumer Research, Vol. 39, Feb 2013
[12] Nettle D, Nott K, Bateson M (2012) ‘Cycle Thieves, We Are Watching You’: Impact of a Simple Signage Intervention against Bicycle Theft. PLoS ONE 7(12): e51738. doi:10.1371/journal.pone.0051738
[13] Chapman, G.B., Li, M., Leventhal, H., Wisnivesky, J., Leventhal, E.A. (2014). Defaults for influenza vaccination appointments in ‘Judge, Nudge, Dodge‘ presented at the SJDM conference, Nov 2014
[14] Behavioural Insights Team, UK “Applying behavioural insights to charitable giving” 28 May 2013
[15] Mazar, N. and Zhong, C. “Do Green Products make us better people?”, Psychological Science, 21(4) 494-498, 2010
[16] Milkman, K. “The Importance of Experimentation” Winter 2014, Wharton Magazine http://whartonmagazine.com/issues/winter-2014/the-importance-of-experimentation/