Why we've all had enough of bad recommendations

By James Wann

4-minute read

More than ever, our lives are dictated by digital recommendations and the whirring algorithms behind them. What should I listen to on Spotify? What should I watch on Netflix? What should I spend my paycheque on? (Other than bottomless brunches, that is).

When you type ‘hangers’ into Amazon’s search box, you’ll be presented with “over 200,000” options. Users upload more than 400 hours of video to YouTube every minute. There’s just too much stuff. Gone are the days when the problem was finding something to watch, read or buy. Now, the problem is finding the right thing.

[Image: the space shuttle Endeavour]

I just wanted to hang up my clothes, and I got this.

Recommendation and curation systems are the solution to content overload, but they still need a lot of work. And the ones used by learning and development departments (when they’re used at all) lag far behind what learners are used to in their everyday consumer lives. Sometimes to jarring effect.

Thankfully, the technology available is now starting to reach that consumer-grade experience. However good it gets, though, it will always need some human input to really resonate. There will always be a role for people in shaping the tech behind high-quality, meaningful recommendations.

'We don’t just want correlations – we want a why, a narrative, which machines can’t provide.'

Michael Bhaskar’s words almost perfectly capture the problem I have with some of the recommendations I receive online. Although I would add a ‘yet’ to the end of them.

There are plenty of examples of what can happen if you ignore the human element: unfortunate recommendations based on correlation alone. Some of them are funny, some of them are disturbing, and others are downright offensive.

I have to flea-treat my cat every couple of weeks, so inevitably I’ll run out of the medicine at some point. Show me an ad that reminds me I need to buy more pipettes - great. But (hopefully) nobody needs a new toilet seat every couple of weeks. AI assumes we want lorry loads because we bought one before. Humans know why we buy toilet seats, and so why we don’t need another one right now. And they can flag that.
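To make that concrete, here’s a minimal sketch of the difference. Everything in it is illustrative - the product names, intervals and both functions are invented for this example, not any retailer’s real logic:

```python
from datetime import date, timedelta

# Human knowledge encoded as data: *why* each item is bought, and how
# often it realistically needs replacing (illustrative values only).
REPURCHASE_INTERVAL = {
    "flea pipettes": timedelta(weeks=2),  # consumable: re-buy often
    "toilet seat": timedelta(days=3650),  # durable: almost never
}

def naive_recommend(purchases, today):
    """Correlation-only logic: you bought it once, so you must want more."""
    return sorted(purchases)

def human_vetted_recommend(purchases, today):
    """Only re-suggest items whose realistic repurchase window has passed."""
    return sorted(
        item for item, bought in purchases.items()
        if today - bought >= REPURCHASE_INTERVAL.get(item, timedelta(days=365))
    )

history = {"flea pipettes": date(2019, 6, 1), "toilet seat": date(2019, 6, 1)}
print(naive_recommend(history, date(2019, 6, 20)))
# ['flea pipettes', 'toilet seat']  <- lorry loads of toilet seats
print(human_vetted_recommend(history, date(2019, 6, 20)))
# ['flea pipettes']                 <- the one thing I actually need
```

One small table of human-supplied context is all it takes to turn ‘you bought this once’ into ‘you plausibly need this again’.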

[Image: an example of unfortunate ad placement]

AI alone won’t stop unfortunate ad placement like this, either. It has no feelings (trust me, Siri’s rejected every advance). There’s no empathy, no taste. It works purely on correlation. We need human gatekeepers to help.

Ultimately, Google and Amazon have complete control over the digital territory we visit every day (even if they may have lost control of their own algorithms), and they aren’t going anywhere. Amazon’s product recommendations generate 35% of its total revenue. These companies have no reason to shoulder the cost of more human employees when they can simply optimise their targeting on conversion rates. But if they committed to kinder recommendations, human vetting could leave far fewer customers disappointed or disturbed.

YouTube’s recommendations are pretty questionable, too. YouTube makes money from ads on videos, so consumption is prioritised over all else. That’s reflected in the features added over the last few years: take the introduction of autoplay, an effective feature designed to encourage ‘bingeing’. And there’s always a pull toward the more extreme, the more captivating. One minute you’re watching a vegetarian burger recipe, the next you’re being shown the inside of a slaughterhouse.

YouTube’s real issue is properly policing what it recommends people binge on. Since 2017, YouTube has been under fire for facilitating paedophile rings through monetised videos of kids (read this article or watch this if you can stomach it), from ‘ElsaGate’ to more recent allegations. Just a few clicks could place you in a black hole of recommendations, where the algorithm clustered similar innocent videos and served them, ad infinitum, to users with disturbing intentions. There have to be people working to make sure this content doesn’t get recommended for the wrong reasons. AI needs human ethics, and without empathy it can’t understand or implement them on its own.

Thankfully, our learning ecosystems are largely insulated from the algorithmic avalanches that huge datasets can unleash. Nevertheless, we’re still competing with them for our users’ attention. The issue of relevancy remains a major stumbling block in improving digital learning engagement.

'Amongst the ocean of digital content available is the growing sea of the L&D industry'

Myles Runham

There's lots of L&D content online, good and bad. According to a Training Industry research report, a third of survey respondents identified content relevancy as a challenge within their organisations. Learning needs recommendations.

But unlike Google, Amazon or YouTube, we don’t enjoy a monopoly on learners’ digital territory. We can’t depend on fail-to-win optimisation and risk serving already disengaged learners irrelevant content. We have to get our recommendations right the first time. Otherwise, learners will go elsewhere.

That’s why, at Filtered, we combine the science of our algorithm stack, Bifrost, with human vetting by our Implementation Team - the people who curate content for Filtered. They eliminate the recommendations that slip through the net by appearing relevant but actually serving little purpose.

As we’ve seen, algorithms aren’t infallible. My mum could write an article on what it’s like to work in Customer Success; it could be marked relevant for containing all the right buzzwords and metadata, yet have little real value, because she’s not an expert. Thankfully, Implementation would spot it and flag it as bad content. This two-pronged approach is intended to ensure the ‘kind recommendations’ our CEO Marc endorses.
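In code terms, that two-pronged approach looks something like the sketch below. To be clear, this is a toy illustration under my own assumptions - the scores, threshold and flag set are invented for the example, and it isn’t how Bifrost is actually implemented:

```python
# Stage 1: an algorithm scores each piece of content for relevance
# (faked here with pre-computed scores). Stage 2: human curators veto
# items that look relevant on paper but serve little purpose.

algorithm_scores = {
    "intro-to-customer-success": 0.91,
    "mums-customer-success-article": 0.88,  # right buzzwords, no expertise
    "cat-compilation-vol-7": 0.12,
}

# A curator who actually read Mum's article flags it as bad content.
human_flagged = {"mums-customer-success-article"}

def recommend(scores, flagged, threshold=0.7):
    """Recommend items the algorithm rates relevant AND humans haven't vetoed."""
    return [
        item for item, score in sorted(scores.items())
        if score >= threshold and item not in flagged
    ]

print(recommend(algorithm_scores, human_flagged))
# ['intro-to-customer-success']
```

Neither prong is enough on its own: the scorer filters the ocean of content down to candidates, and the humans catch the plausible-looking duds it can’t see through.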

There's proof that this works for us, too. Users can rate the relevance of our recommendations. Across all learners and all clients, that score currently sits at over 90%. And we aim to keep improving; the struggle to find time for learning at work is real, so we want to make discovery as efficient as possible.

As someone who works every day to help deliver meaningful recommendations, I think the quality of recommendations comes down to the ethos of the recommender, and to the balance between AI and human guidance. Each can amplify the other. But one thing is for sure - it’s time to stop dumping metaphorical toilet seats on our learners.

Want to try some of Filtered's recommendations for yourself?

Free learning content library benchmark

Get the best return on your L&D spend.