Rules of Thumb vs Content Chaos

By Marc Zao-Sanders

4 minute read

Over time, human beings have discovered or established many rules of thumb (aka heuristics) to help navigate the unpredictable, chaotic waters of business and life. Too many of us stuff our lives with more than we can do or even comprehend, and we often find ourselves overwhelmed, stressed and burnt out.

At their best, rules of thumb can be a useful means of taking stock and working out where we are, so that we can make what adjustments we need, utilising the experience of people who have been in similar situations before us. They serve as guidance rather than strict rules, dogma or mantras, and should be considered contextually rather than adopted religiously. There’s something comforting in seeing someone else’s footsteps, even if we don’t choose to follow them precisely. 

They’re everywhere.

In business…

One of the most famous is the Pareto Principle, which says that in many domains 80% of the outcomes result from just 20% of the causes. This teaches us to prioritise the clients, costs, collateral etc that will be most productive, and gives us a crude but useful way of quantifying the benefits of that focus. The idea has been championed by McKinsey (and it happens to be the concept that underpins both our business idea and our name).

David Allen’s 2-minute rule advises that for any task that can be completed in less than 2 minutes, do it right away. For a large category of tasks, we can use this rule to avoid inefficient procrastination and delay.

In the world of SaaS, business owners and investors apply the Rule of 40, which says that a healthy, growing software company’s growth rate and profit margin combined should add up to 40% or more. The idea is that this somewhat artificial number captures two important business needs (growth and profit), which are often in tension.
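The rule reduces to a single comparison. A minimal sketch (the function name and the example figures are illustrative, not from any real company):

```python
def meets_rule_of_40(growth_rate_pct: float, profit_margin_pct: float) -> bool:
    # Rule of 40: annual revenue growth rate plus profit margin
    # (both in percentage points) should sum to 40 or more.
    return growth_rate_pct + profit_margin_pct >= 40

# 30% growth with a 15% margin passes (30 + 15 = 45);
# 25% growth with a 5% margin does not (25 + 5 = 30).
print(meets_rule_of_40(30, 15))  # → True
print(meets_rule_of_40(25, 5))   # → False
```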

In learning…

Closer to learning, Tomasz Tunguz and others have combined the ideas of compound interest and self-improvement. If you improve yourself (or simply something you do) by 1% each day for a year, you’d be nearly 38 times better at it by the end (1.01^365 ≈ 37.8). Some people find it motivating that small, incremental, imperceptible improvements will have a major impact down the line.
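The arithmetic behind that claim is simple compounding:

```python
daily_improvement = 0.01  # improve by 1% each day
days = 365

# Each day's gain builds on the previous day's level, so the
# multipliers compound rather than add.
multiplier = (1 + daily_improvement) ** days
print(round(multiplier, 1))  # → 37.8, i.e. nearly 38x after a year
```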

In L&D, the most obvious example is 70:20:10, popularised and probably best articulated by Charles Jennings. As many readers of this blog will know, 70:20:10 suggests that 70% of learning happens while people are working, 20% happens in their interactions with others and 10% happens in a formal educational setting. The underlying lesson is that in trying to optimise the process of learning for colleagues, we should take their jobs and social interactions substantially into account.

Albert Mehrabian, in a paper fabulously titled ‘Silent messages: Implicit communication of emotions and attitudes’ claimed that much of communication is not to do with words at all, specifically that communication is 55% non-verbal, 38% vocal, and 7% words. In fact, the context of his research was a little more nuanced but the headlines have stuck. Nonetheless, the idea that the impact of communication lies beyond the words expressed is probably correct and useful.

Notice that none of the explicitly stated numbers above are required for us to derive some kind of lesson from the rule.

In learning content…

We see and generate a lot of data through Content Intelligence. In this work, we have observed many patterns, and some of them may be useful to you as you find your own ways to manage your version of Content Chaos.

Of course, every organisation, industry and situation is different, so, as with all the heuristics above, consider these in your own context and use them to stimulate further thinking. We’ve been deliberately specific below (stating the measure followed by a recommended number), to make this as useful and provocative as possible.

Assets per skill: 50

For each important skill (or topic, capability, value, behaviour, etc) for your organisation, how much content do you really need? Many organisations have many thousands of assets for many skills, e.g. 3,000 data visualisation resources. Is that better or worse for users than 50?

Relevance of asset to a skill: 0.8

This is specific to Filtered and Content Intelligence. For each asset a client has built or bought, we calculate a numeric value between zero and 1 which represents how much that asset is about a given skill. A threshold of 0.8 strikes a good balance between relevance and volume.
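In practice the threshold is just a cut-off applied to per-asset scores. A minimal sketch, with made-up asset names and scores (this is not Filtered’s actual scoring code):

```python
# Hypothetical relevance scores (0 to 1) of assets against one skill.
relevance = {
    "Intro to dashboards": 0.92,
    "Chart design basics": 0.81,
    "Company town hall recording": 0.15,
}

RELEVANCE_THRESHOLD = 0.8  # the recommended cut-off

# Keep only assets scoring at or above the threshold
qualifying = [asset for asset, score in relevance.items()
              if score >= RELEVANCE_THRESHOLD]
print(qualifying)  # → ['Intro to dashboards', 'Chart design basics']
```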

Number of enterprise-wide libraries per employer: 4

Large employers already have a lot of their own (built) content. Then there’s the web and what employees will search out for themselves. And there’s considerable overlap between the larger libraries. So, in the end, it rarely makes sense to pay for and provide more than four libraries, especially not large ones. We can help quantify this overlap against your skills for you, by the way.

Cost Per Qualifying Asset (CPQA): £50

Content libraries routinely stock and add to vast numbers of videos, articles, courses etc. But how many of them are relevant to your organisation’s needs and priorities? Let’s say a library has 20k assets in total, 1,000 of them are directly (above 0.8!) relevant, and the vendor charges £40k each year. Well, then you’re paying £40 for each relevant (qualifying) asset.
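The worked example above, spelled out (figures taken from the example, not from any real vendor):

```python
annual_licence_cost = 40_000  # £ per year
total_assets = 20_000         # everything the library stocks
qualifying_assets = 1_000     # assets with a relevance score of 0.8 or above

# CPQA ignores the 19,000 assets that don't qualify: you're really
# paying for the 1,000 that do.
cpqa = annual_licence_cost / qualifying_assets
print(f"£{cpqa:.0f} per qualifying asset")  # → £40 per qualifying asset
```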

Completes-per-start: 50%

Robustly measuring the quality of learning content has been one of L&D’s holy grails for some time, and it is a difficult use-case for AI: the data is sparse and the concept of quality is subjective, so there’s been little progress. While work in that field goes on (and it’s an area of R&D for Filtered, fyi), you could do worse than looking at completes-per-start (as featured in this Bersin case study). It’s probably a lot more useful than raw completes (which is as much a measure of culture and marketing as of the quality of the asset).

Data per asset: 10 data points

If you don’t measure it, you can’t manage it. And if you have inordinate amounts of content, you won’t have much data per asset. Any form of intelligence, artificial or human, will need at least a few data points (opens, clicks, completes, explicit feedback) to draw a useful conclusion about an asset’s worth. Here’s an explicit sense in which less really is more: less content → more data per item of content → better decision-making about that content.

Descriptive metadata per asset: 20 words

We see a lot of demand to fill gaps in, or make enrichments to, metadata: in other words, to tag or retag content. Algorithms like ours can be extremely powerful here. But even the smartest algorithm needs a minimal quantity of information to go on. We look at the titles and descriptions of learning assets. These vary in length and quality, but the rule of thumb we’ve arrived at is that with ~20 words of reasonably representative natural language, algorithms can produce one or several tags to a human level.

So, there are seven content-oriented rules of thumb for you to consider as you manage your learning content, enhance the skills and spark the curiosity of your workforce. None is perfect in every situation. But I hope some will be useful in yours.

