When I started my career as a learning designer, it’s fair to say that my interest in data analytics was limited. It seemed equal parts intimidating, impenetrable and uninteresting. But with more experience of designing and delivering learning interventions, and the battle scars to prove it, I quickly realised that data could be my best friend. Data can dissolve stakeholders’ fears, overturn long-standing assumptions, and ultimately win arguments.
I’ve stood at the shore waving off learning modules as they sailed away into the Bermuda Triangle of the corporate LMS too many times not to realise that finding better ways of measuring learning effectiveness is vital to ensuring our industry remains relevant. And contrary to what you’ll read, Learning and Development has been using data to inform decision making for a long time. The trouble is that the two rudimentary metrics focussed on above all others have been cost of production and the number of employees reached.
It’s notoriously difficult to measure the contribution of learning interventions to a company’s output. That’s why they fall into the territory Cal Newport describes as ‘the metric black hole.’
In Deep Work, Newport identifies the Principle of Least Resistance as the reason why companies often embrace behaviours that run counter to productivity:
“In a business setting, without clear feedback on the impact of various behaviours to the bottom line, we tend towards behaviours that are easiest in the moment.”
Essentially, when there aren’t reliable metrics to point to, people default to doing what’s easiest. It’s easy to see how this dynamic has been at play in L&D. Because it’s far easier to measure the efficiency of learning interventions, efforts to evaluate and improve effectiveness have been neglected.
This isn’t to say improving efficiency is unimportant. But if it comes at the cost of delivering effective solutions that improve organisational performance, L&D departments risk being viewed as a cost to be minimised rather than a source of value creation.
But things can and should change now that a broader data set provides insight into the wider context of how and why people learn on the job. There are more ways than ever to build a data-driven learning strategy. Using this data productively can instil three behaviours that transform how learning teams operate, shifting the focus back to improving effectiveness and delivering genuine value.
For too long, L&D practitioners only had the black box of SCORM packages and unreliable survey data to draw insight from. The difficulty of evaluating effectiveness has meant that L&D has had to justify its activities through a rigorous upfront process that creates a sense of false certainty.
But the idea that a robust upfront process leads to valuable outcomes and progress is largely unfounded. If progress, technological or otherwise, depended on careful upfront planning, the bicycle wouldn’t exist: how it stays upright is still something of a scientific mystery. Progress in many domains relies on a process of trial and error.
Now that data is available to evaluate why interventions succeed or fail, L&D teams should stop setting arbitrary goals and instead focus on generating hypotheses to test and learn from. By testing hypotheses about learning interventions, and exposing incorrect ones quickly, L&D teams can learn faster and become more agile. Increasingly, activity can be based on evidence rather than assumptions.
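To make this concrete, here’s a minimal sketch in Python of what one test-and-learn cycle might look like: comparing pass rates between two variants of an intervention with a two-proportion z-test. The hypothesis, the variants and all of the figures are hypothetical illustrations, not real results.

```python
# A minimal sketch of one test-and-learn cycle: did a redesigned module
# (variant B) move the assessment pass rate compared with the existing
# module (variant A)? All counts below are made-up, illustrative numbers.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Return the z-statistic and two-sided p-value for H0: p_a == p_b."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))                           # two-sided p-value

# Hypothesis: the scenario-based redesign improves assessment pass rates.
z, p = two_proportion_ztest(successes_a=112, n_a=400,   # existing module
                            successes_b=148, n_b=410)   # scenario-based redesign
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Evidence of a real difference: iterate on the winning variant.")
else:
    print("Hypothesis not supported: discard it and test the next idea.")
```

Either outcome is useful: a supported hypothesis tells you what to double down on, while a refuted one is exposed quickly and cheaply rather than lingering as an assumption.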
If learning solutions are designed to be one-size-fits-all, the result will be the perfect fit for no one. There is no such thing as a typical learner, and what works for one audience might not work for another.
Understanding how people learn on the job, like any human behaviour, is complex. Looking at the number of completions and survey data reveals only part of the picture, and an unreliable part at that. Similarly, shallow attempts at persona mapping, however well-intentioned, may simply entrench existing biases and myths about how people learn.
Data means we can devise more effective audience segmentation, looking at behaviour and learners’ revealed preferences to better personalise the solutions designed for them. Revealed preferences are a far better indication of what learners really want than their reported preferences, and the most sophisticated technology can now personalise learning based on each individual learner’s skills signature. This approach is the only way of revealing your organisation’s learning culture and finding ways of addressing its unique needs and challenges.
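By way of illustration, here’s a minimal sketch of behaviour-based segmentation using k-means clustering. The behavioural features, the synthetic data and the choice of four segments are all assumptions made for the example; real platform metrics and proper cluster validation would be needed in practice.

```python
# A minimal sketch of segmenting learners by observed behaviour with k-means.
# The feature set and data are synthetic stand-ins for real platform metrics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative behavioural features per learner:
# [sessions per month, median session minutes, search-to-browse ratio, completion rate]
rng = np.random.default_rng(seed=42)
learners = rng.random((500, 4)) * [20, 45, 1.0, 1.0]  # 500 synthetic learners

# Standardise so no single feature dominates the distance calculation.
features = StandardScaler().fit_transform(learners)

# Cluster into behavioural segments; k=4 is an arbitrary starting point
# that should be validated (e.g. with silhouette scores) on real data.
segments = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(features)

# Profile each segment's average raw behaviour to inform design decisions.
for seg in range(4):
    profile = learners[segments == seg].mean(axis=0)
    print(f"Segment {seg}: n={(segments == seg).sum()}, mean behaviour={np.round(profile, 2)}")
```

Profiling each segment’s average behaviour gives designers something concrete to personalise against, rather than a persona built on guesswork.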
User-centred design is an admirable aim, but it doesn’t necessitate giving people what they say they want. Most big business ideas weren’t asked for. Jeremy Stoppelman, co-founder and CEO of Yelp, revealed that the platform’s core feature was an afterthought. At the time, the idea that people would post reviews for free seemed preposterous, but that assumption couldn’t have been more wrong.
The lesson here is that if valuable discoveries were obvious, someone would already have made them. Given the metric black hole that learning interventions have found themselves in, it’s easy to understand why risk aversion has played a big part in their design, but it has also limited the scope for innovation.
Now that more data and more robust methods of evaluating effectiveness are available, learning teams should be emboldened to experiment and try out unusual approaches. With studies suggesting that only 10% of corporate learning is effective, the truth is that clinging to the status quo and familiar approaches is riskier than trying something different.