In 2010, a new organisation was set up in the Cabinet Office called the Behavioural Insights Team (BIT), better known as “The Nudge Unit”. Its purpose was to apply its own brand of nudge theory within the British government. The unit’s creation arose out of an epiphany that L&D departments are currently undergoing, namely: just because you think something’s good for people doesn’t mean they’ll do it (a fact marketing has long known).
Getting people to engage with learning more systematically is a daunting challenge for L&D departments, and not one you would think could be fixed with subtle shifts in messaging. But you might equally doubt that messaging alone could make people pay more tax. Yet BIT’s simple reminders about self-assessment brought forward £200m in tax revenue in a year, and it increased payment of vehicle excise duty from 40% to 49% simply by adding pictures of the offending vehicles to its letters.
Learning’s most common mistake is hoping that the intrinsic value of a platform will draw users to it. Instead, you need to form habits; you need to meet learners halfway. Behavioural science takes a lot of the guesswork out of this engagement process, because organisations like BIT have done all the trial and error for you. That’s why the framework that BIT has developed is intrinsic to how Filtered runs engagement campaigns.
On its own, this framework isn’t enough; more so than most bureaucratic policy, the kind of learning Filtered wants to inspire is deeply personal and relies on underlying, human motivations. That said, the BIT method gives you a simple, evidence-backed formula for persuading people to drive their own development.
The EAST Method
You can download the full BIT whitepaper here, but here’s a quick overview of their four-part EAST method: Easy, Attractive, Social, Timely.
The first component of the nudge method is making it easy. If you want to encourage a behaviour, remove every barrier that might stop a person doing it. This means, firstly, reducing hassle and effort, and secondly, simplifying messaging. People don’t want to start a big, complex task, but are far more willing to try if it’s broken down into simpler components. Finally, BIT emphasises the benefits of harnessing ‘defaults’: an example of this in action is the fact that people now have to opt out of registering as an organ donor.
The second component is making it attractive, which has two main parts. The first is attracting attention simply by making your offer stand out - in terms of branding, visibility, or personalisation. The second is rewards and sanctions, which in the BIT blueprint could be financial or personal.
The third component, making it social, relies on the insight that we’re influenced by the community around us. First, BIT argues that people follow the crowd: if you tell an individual how ‘everyone else’ responds to a choice, that’s probably the response they’ll choose too.
The other side of the coin is harnessing the power of networks. At a government level, this means supporting and fostering reciprocity. An example of this is collective purchasing, because people feel safer performing an action in a herd. For instance, the “Which? Big Switch” created a network of people who wanted to find a better energy deal: 287,365 people signed up and over 37,000 people switched to a better deal with an average saving of £223 a year.
Finally, BIT encourages driving a behaviour by getting people to make a commitment to others - it works in the same way that agreeing to go for your early morning run with a friend does.
The final component is making it timely: encouraging a behaviour requires picking your moments. People are more and less receptive at different times, though the right moment varies from person to person. They’re also far more willing to focus on short-term costs and benefits and ignore the longer term. One way BIT recommends countering this is to suggest pre-planned responses that prompt the actions you want people to perform.
We don’t follow the EAST method like a blueprint, and don’t suggest doing so. Learning initiatives have their own complications and processes. What the EAST method can act as, however, is a checklist to make sure you’re squeezing the most effectiveness out of each stage. Given how hard, and how important, it is to get engagement in learning, every percentage point of click-through rate you can lift is a battle won.
Getting a foot in the door
Our onboarding process relies on two key components of the BIT method: timely and social.
A learning platform or initiative launch needs to push the right buttons by hitting the workplace zeitgeist and being immediately relatable to prospective learners. This means finding out the most pressing issues for the workforce and (as far as the data allows) for individual departments and workers, then presenting an immediate response to them.
A key finding of BIT research is that people are more likely to make lasting behaviour changes when habits are already disrupted. This means launching onboarding efforts in moments of organisational upheaval, say as a business pivots to follow new procedures or goals. Relentlessly emphasising your learning offer in these moments of flux is likely to embed it as the change settles into a new routine.
As BIT findings show, people aren’t willing to embark on a change without social proof. And that’s what we collect before anything else. By running iterative campaigns with an initially small audience, we generate internal case studies that resonate with prospective users. Putting a name, face, and shared experience onto our platform demonstrably drums up trust throughout an organisation.
Ease is something that flows through every stage of a Filtered learning campaign for obvious reasons. One thing to emphasise at the early stages is that the platform has to be accessible in as few clicks as possible. That means SSO and seamless integration.
Keeping it going
It’s not enough to get a single use. Learning has to become part of users’ routines or it’s too easy to forget. This is also the most difficult part of the process to get consistently right, and, arguably, the one in which nudge methods are most effective.
The trick is to start easy. Don’t try to get your learners to clock in three hours a week - they might for the first week, but it’s far less likely by the second or third. Instead, start small: ask learners to commit to learning for just a couple of minutes a week, no more. Actively resist increasing the allotted time (if they happen to do more, great, but that’s just extra). This carves out a small niche of commitment in a user’s day - one they’re willing to accept because it’s so low effort. The investment will eventually pay off in the form of defaults.
Defaults are more powerful than exceptions, so once some level of commitment has been attained, leverage and encourage those defaults. We do this in lots of ways: providing learners with the notes they’ve made about a learning asset, so they can build on what they’ve learned from it; getting them to commit to a learning slot and suggesting our integration adds it to their calendar; helping learners set regular learning reminders; and sending emails or Slack/Teams messages reminding learners of assets they’ve started and suggesting “why not finish it now?”
Especially when married with the ‘timely’ component (fitting into schedules), this habit-building method can make learning into something people have to decide not to do.
Don’t let it slip
People will drop off occasionally. Learning is not always the number one priority and users can fall out of routines. The key to getting them back on is making learning attractive.
Once the novelty factor has worn off, you can no longer guarantee a click if your learning doesn’t have an immediate pull. One way we combat this is through personalisation. Hand-picked, relevant content usually tips learners over the persuasion point that more generic messaging can’t reach. Well-curated content also adds fresh appeal for learners who lost their commitment because the original offering didn’t resonate with them.
Rewards also pull in more tentative learners. Gamification has its limitations, especially if it’s arbitrary. However, there is an undeniable pull if you construct the right messages to show learners what they’ve accomplished. Even something as simple as awarding consistent-improvement badges can help tip a user’s opinion in your favour - which, at a large scale, makes a difference.
The limits of nudges
Nudge theory is effective at a large scale because it compounds small changes made to people’s attitudes. That’s why it works for learning much like it did in government - people’s willingness to do a relatively low-risk task can be won and lost on a feature as minor as an MS Teams reminder, or a picture of an untaxed car.
However, the two contexts aren’t identical. Nudges worked so well in government because the context is shared: people pay taxes and take out pensions for the same reasons. Much of the learning we facilitate and monitor, by contrast, is intensely personal - people’s individual motivations and contexts are so varied that no single solution (or single type of nudge) catches them all. That’s where connections with real people, not anonymous systems, come in.
If you can capture the deep-seated reason why someone is learning (remembering that it’s likely to be the same kind of reason that motivates them to get up and go to work in the first place) and get them to set a good, achievable goal linked to it, the habit-forming process will flow very naturally and your nudges will be especially effective. Nudge campaigns are all the rage in corporate L&D for good reason, but a good learning ecosystem will take care to complement the role of mentors, career guides and even the life-changing books, films and stories that we use to come up with good, achievable goals in the first place.