In April I put up an article called ‘Let’s ditch 70/20/10 and all L&D mantras’ on LinkedIn. It was designed to provoke a debate. And it did. The response was big (for the L&D world): 2000+ views, 200 likes, 70+ comments.
Lots of readers were supportive and shared stories. One reported feeling ‘numb’ when first confronted with 70/20/10 as a supposedly unchangeable reality. Another told a horror story about being ordered to implement learning according to these percentages. And there were RFPs that mandated a vendor’s ‘compliance’ with 70/20/10 as a scoreable component. Gee, I thought, this myth can bite!
However, there were also a number of very valid challenges to my article. While not expecting anything like the same level of response, I want to address some of them here.
The post led to some discussion about the problems with being negative about the L&D profession: ‘I'm becoming fatigued with the finger pointing about how terrible the L&D industry is and the people within it are’ and, in response, ‘I share your concern about the increasing incivility and near-misanthropy in our industry’.
We do need to stamp out bullying and abusive behaviour online and everywhere, I agree.
But I don’t think that attacking L&D shibboleths weakens L&D. You must be ready to throw out bad ideas if you want to improve things. Matthew Syed calls this a growth mindset.
And there is an increasingly vocal community of ‘negative’ L&D people who want to take on the true complexity of learning, reject the myths and seek to use research instead. Guy W Wallace, who is great on this, steered me towards the brilliant ‘Debunker Club’. Dr David Chandross has produced great analyses of whether VR works and of how serious games do. Donald Clark, on his excellent blog Plan B, evaluates all the key learning theories.
So where do the numbers come from?
In the article I said one thing that I need to clarify. Citing De Bruyckere, Hulshof and Kirschner’s chapter on 70/20/10 in Urban Myths about Learning and Education, I said: ‘there is no study that supports these figures. There has never been a situation in which people have self-reported - or it has otherwise been measured - that they learn things in these proportions’.
It’s a bit more complicated than that.
De Bruyckere, Hulshof and Kirschner refer to the work of Bob Eichinger, Michael Lombardo and Morgan McCall. This group, discussing leadership training for executives, originally proposed the rule using cautious language like ‘the odds are that’ or ‘there is a good chance’.
When De Bruyckere, Hulshof and Kirschner conclude that ‘we could find no evidence in the scientific literature to support the ratio’, I think they’re right. However, a conversation after the article prompted me to dig deeper for the source of the actual numbers.
I found that Eichinger had written a short blog post describing the research on which they based the numbers. Here’s what he says:
The study interviewed 191 currently successful executives from multiple organizations. As part of an extensive interview protocol, researchers asked these executives about where they thought they learned things from that led to their success – The Lessons of Success. The interviewers collected 616 key learning events which the research staff coded into 16 categories.
The 16 categories were too complex to use in the course so we in turn re-coded the 16 categories into five to make them easier to communicate.
The five categories were learning from challenging assignments, other people, coursework, adverse situations and personal experiences (outside work). Since we were teaching a course about how to develop effective executives, we could not use the adverse situations (can’t plan for or arrange them for people) and personal experiences outside of work (again, can’t plan for them). Those two categories made up 25% of the original 16 categories. That left us with 75% of the Lessons of Success for the other three categories.
So the final easy-to-communicate meme was: 70% Learning from Challenging Assignments; 20% Learning from Others; and 10% Learning from Coursework. And thus we created the 70-20-10 meme widely quoted still today.
Eichinger goes on to suggest that the basic findings of the study ‘have been duplicated at least nine times that I know of’ and he quotes various ratios close to 70/20/10, but he doesn’t provide any references or citations. Here’s what I find problematic about this study:
The study itself is not published in a peer-reviewed journal, and I suspect De Bruyckere, Hulshof and Kirschner would challenge Bob’s work on that basis. Peer-reviewed work is research whose methodology has been checked and criticised by people qualified to do so. Academic journals are more likely to be peer-reviewed than books. A lot of research isn’t peer-reviewed, since peer review is expensive, and that isn’t always a big problem. But it’s an important distinction: no one has checked Eichinger’s methods here, and neither can we.
The sample-size problem with the research is also worth thinking about, because it affects nearly any research we might do in our own organisations, especially with leaders. There’s a final thought about this problem, and help at hand from the Centre for Evidence-Based Management, below.
So can we never generalise with numbers?
Not at all. On the contrary, some sets of numbers are objectively validated.
An interesting example of this, suggested to me by Lloyd Dean in the aftermath of the post, is the 80/20 rule - the Pareto principle that ‘80% of the effects come from 20% of the causes’. Here we have numbers that do correspond to reality in scenarios like income distribution or software development, with all sorts of interesting mathematical applications.
I agree with Lloyd that the Pareto principle could be relevant to situations in L&D, especially because we are often building software features or looking to deliver efficiency and simplicity.
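To make that concrete, here is a minimal sketch in Python of the kind of check the principle invites: what share of total activity comes from the top 20% of courses? The completion counts are invented for illustration.

# Minimal sketch: does the top 20% of courses account for ~80% of activity?
# The completion counts below are invented example data.
completions = [1200, 720, 150, 120, 90, 60, 40, 25, 10, 5]

total = sum(completions)
top_fifth = sorted(completions, reverse=True)[: max(1, len(completions) // 5)]
share = sum(top_fifth) / total

print(f"Top 20% of courses account for {share:.0%} of completions")
# Prints roughly 79% here: a small set of resources does most of the work.

If your own usage data looks like this, that is a validated number you can build a case on, unlike 70/20/10.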
I am singling out 70/20/10 because it is unfounded and unprovable, not because I think we can’t generalise about reality using numbers.
Provided we are measuring things we can measure, rather than searching for the intangible notion of ‘learning’, we can and should look for predictable patterns.
You said 70/20/10 is a helpful push towards ‘resources instead of courses’ but isn’t that just another mantra?
This is what one commenter said, and it is a good point. If you insisted on providing resources instead of courses as an unthinking rule, you would make plenty of mistakes. But we should be mindful that not all mantras are unhelpful. I mentioned at the beginning of my article that sometimes the use of a mantra as mental shorthand is ‘benign’.
In this case, I do think a general bias towards resources instead of courses is helpful, and it’s grounded in evidence. In a previous post I suggested that, in general, only 5% of users complete a non-mandatory course. With this in mind, in a pinch we would do better to assume that a course is not the solution needed, rather than to assume it is.
This is actually how Clark Quinn defends the use of 70/20/10 - in some situations - in his useful book, Debunking Learning Myths and Superstitions. Quinn says: ‘If you have trouble getting people in your organisation to let you start working on a performance consulting approach or using extended models of organisational learning, 70-20-10 can help’ but ‘if you are doing performance consulting and creating solutions that focus on meaningful outcomes, you’re not likely in need’ (pp. 100-101).
I think a mantra like ‘resources instead of courses’ is useful but it’s no substitute for making some effort to understand the specific problem you face and experimenting to find a solution. That may well be a course, not a resource!
If you’re going to apply a mantra or general principle to learning or cite it in a business case, use one which is backed up by a body of research. Learning is complex, but the research is abundant, and we included some of the best of it in our free instance of magpie for L&D, such as:
One chapter of Urban Myths about Learning and Education brings these insights together to summarise what we do know (pp. 86-92) about effective learning experiences:
Just doing these things consistently and well as part of our learning design is not expensive and it would be a substantial improvement on many corporate learning programmes.
Using data. Being more agile
If you want to base a decision on existing organisational data, be aware that internal data like surveys or sales figures are frequently unreliable, because the sample sizes are too small and variables are not controlled for.
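To see why small samples are a problem, here is a minimal sketch in Python (the survey numbers are invented for illustration, and it uses the standard normal approximation for a proportion) of how wide the uncertainty around a typical internal survey result really is:

import math

# Minimal sketch: how uncertain is a small internal survey?
# Suppose 18 of 30 respondents (invented numbers) rate a course as useful.
n, successes = 30, 18
p = successes / n

# 95% confidence interval using the normal approximation
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Observed: {p:.0%}, plausible range: {p - margin:.0%} to {p + margin:.0%}")
# With n = 30 the true figure could plausibly be anywhere from ~42% to ~78%,
# far too wide to support a confident claim either way.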
To help with these problems, Rob Ashton directed me to the Centre for Evidence-Based Management, a resource to use when seeking to establish organisational facts as evidence to back up your plans. You could also try to validate your findings against external research relevant to your audience, such as the UK government’s Employer Skills Survey.
It would be even better to generate continuous evidence that what you do is working by adopting elements of agile product development, prioritising the frequent delivery and improvement of working L&D solutions that satisfy your end customers. Matt Ash has written and spoken extensively on applying elements of agile, like MVPs, to L&D.
I could imagine many solution elements associated with 70/20/10 - performance consulting, resources, mentoring - being honed and successfully deployed in this way. But let’s do away with the meaningless numbers.