TL;DR: It’s known that if you don’t do a good training needs analysis (TNA), you can’t design useful learning or measure its effectiveness. But do we do TNAs well? Research recommends a systematic approach to TNA at three levels. In practice, we seem to rely mainly on ad-hoc, competency-based surveys. The big problem with surveys is that they are unreliable and non-objective. Is it time to rethink?
How do you run your training needs analyses? The scientific research suggests that it’s best to follow a three-levelled process: 1) organisational analysis, 2) job-task analysis, and finally 3) person analysis. In terms of best practice, the research advises that:
- We should not base a TNA on gathering requirements for any particular L&D programme, but should run it regularly and systematically, in the context of business needs
- There is a difference between the knowledge that people need to memorise and the information they need access to in order to do their jobs
- People are bad at diagnosing their own needs, so treat self-assessments with caution
But there seems to be a gap between this guidance and what we do in practice. I’ve seen lots of examples where we do 1) at a very superficial level, skip 2), and go straight to 3), person analysis, by means of a survey. Many tools depend on this mechanism. This guide from no less than Gartner and Pluralsight hinges, in the end, on a self-assessment.
As one major review puts it: ‘Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions such as “what training do you want to take?”' (Salas et al., 2012, p. 81).
The problem is that self-assessments of skill are known to be unreliable. Knowing your own strengths and weaknesses is itself a skill, and a relatively rare one: few people do it well.
In Allen Tough’s landmark study of adult learning, his researchers had to use protracted and probing interviews to draw out authentic accounts of learning needs and the activities involved in skill development. Not surveys.
There is a great article summarising the psychological research in this area that explains why we might be bad at answering questions about learning. It tends to be the case that people who are bad at certain things also lack the ‘self-monitoring’ skills to know that they need to improve. On the other hand, people who are good at certain things are also poor at identifying this fact: they tend to believe that everyone performs as well as they do.
It is only the experience of testing and training in a given area that can recalibrate these mistaken assumptions. As anyone who designs digital or physical retail experiences will tell you: surveys are useless; look at behaviour instead.
Because of this known weakness, self-assessment has never been a cornerstone of how TNAs were conceptualised. A review from 1978 suggests a far broader range of factors and data points to draw on, such as job descriptions, demographics, performance data and environmental data (Moore & Dutton, 1978, pp. 534–535).
Yet look at almost any internal or off-the-shelf product and you will see the same thing: reliance on an individual self-assessment. Often the self-assessment is calibrated by input from a manager and peers in the form of a 360 assessment. But the numerical ratings used in 360s are also unreliable, or at least subject to four paradoxes, because it is very difficult for participants to offer objective feedback on peers and managers.
So if our main method of running a TNA is unreliable - if the people who are talented enough to know their own strengths and weaknesses are few and far between - is it any surprise when the learning programmes we implement as a result of these flawed methods fail?
For all this, there is little recent research on how big corporations actually run training needs analyses. Are they generally ad-hoc or systematic? Do they generally use a range of data about roles and performance, or do they rely too much on surveys? I don’t know. I’d love it if someone could point me to a recent and robust study of corporate TNAs in big companies.
Instead, all I have is anecdotal evidence, via a discussion with L&D people in a Slack channel, that we often do not even reach the level of a meaningful person analysis. Learning and development priorities are instead ‘quite theoretical, based on senior opinions, not grounded in reality’. Participants also felt their organisations needed to get better at allowing people to diagnose their own needs in the first place.
It seems like we have a long way to go to get to best practice. What do you think?
In part 2, I’m going to use your responses to try to reconceptualise the TNA.
Further reading:
- Michael L. Moore and Philip Dutton, ‘Training Needs Analysis: Review and Critique’, The Academy of Management Review, 3.3 (July 1978), pp. 532–545. https://www.jstor.org/stable/257543
- R. Reed and M. Vakola, ‘What role can a training needs analysis play in organisational change?’, Journal of Organizational Change Management, 19 (2006), pp. 393–407. https://www.researchgate.net/publication/240260450
- Eduardo Salas, Scott I. Tannenbaum, Kurt Kraiger and Kimberly A. Smith-Jentsch, ‘The Science of Training and Development in Organizations: What Matters in Practice’, Psychological Science in the Public Interest, 13.2 (2012), pp. 74–101. https://pdfs.semanticscholar.org/0181/b9aa533fd262df009ff113ac42a887afdf95.pdf
- David A. Dunning on why self-assessments of skills don’t work: https://www.sfgate.com/news/article/Incompetent-People-Really-Have-No-Clue-Studies-2783375.php
- Marcus Buckingham on why ratings in 360 assessments don’t work: https://hbr.org/2011/10/the-fatal-flaw-with-360-survey
- Maury Peiperl on how to manage the paradoxes and get 360s right: https://hbr.org/2001/01/getting-360-degree-feedback-right