When it comes to AI, our imaginations are miles ahead of reality.
True AI, strong AI, artificial general intelligence (whichever name you know it by) does not exist, and isn’t likely to for another decade, if that. As a tech company with an AI-led platform, we get a lot of enquiries from customers in search of an omniscient, omnipresent chatbot that understands the nuances of language the way a human would and can act as a personal learning coach, an intelligent assistant, for their employees. The simple truth is that this technology doesn’t exist yet.
That’s the sad news out of the way, but here’s the good.
Narrow AI isn’t all-knowing, but it can focus on specific tasks and do them better than a human can. For example, when our algorithm stack, Filtered, went head to head with our CEO on tagging content (labelling it with the skills it builds), it won on 3 out of 4 metrics. Arguably it was a landslide, because where it took Marc just over two hours to complete the task, it took Filtered five seconds. How valuable are a few hours of your time, or your CEO’s time, to your organisation?
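To make the task concrete, here’s a toy sketch in Python. It is emphatically not our actual algorithm stack - just an illustration of what “tagging content with the skills it builds” means, with made-up skill names and keywords.

```python
# Toy illustration of skill-tagging content by keyword matching.
# This is NOT Filtered's algorithm - only a minimal sketch of the task:
# label each piece of content with the skills it appears to build.

SKILL_KEYWORDS = {
    "data analysis": ["spreadsheet", "pivot table", "sql", "statistics"],
    "communication": ["presentation", "writing", "storytelling"],
    "leadership": ["delegation", "feedback", "coaching"],
}

def tag_content(description: str) -> list[str]:
    """Return the skills whose keywords appear in the content description."""
    text = description.lower()
    return [
        skill
        for skill, keywords in SKILL_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(tag_content("A short course on building pivot tables and basic statistics"))
# ['data analysis']
```

A real tagger is far more sophisticated than keyword matching, but even this toy version makes the speed advantage obvious: it labels thousands of items in the time a human takes to read one.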
As this TechTalks article mentions, “Narrow AI can automate the boring, repetitive parts of most jobs and let the humans take care of the parts that require human care and attention.” So where does narrow AI currently exist and how is it impacting our lives?
Hannah Fry’s latest book, "Hello World: How to Be Human in the Age of the Machine", does a fantastic job of explaining where narrow AI exists in our world beyond the consumer platforms, like Spotify and Amazon, that we often mention. As a marketer working in a tech company, I was intrigued, and I spent three weeks lost between its pages. Each chapter tackles a different area of life in which algorithms have made their presence felt as assistants to human experts.
Cars
In the book, Hannah quotes a statement from Car and Driver magazine: “No car company actually expects the futuristic, crash-free utopia of streets packed with driverless vehicles to transpire anytime soon, nor for decades. But they do want to be taken seriously by Wall Street as well as stir up the imaginations of a public increasingly disinterested in driving. And in the meantime, they hope to sell lots of vehicles with the latest sophisticated driver assistance technology.” The purpose of AI in cars today, then, isn’t to replace humans but to make driving safer. Some studies suggest this technology has contributed to a 10% fall in road accidents over five years and is projected to save around 2,500 lives by 2030.
Medicine
In medicine (and justice) there is a delicate balance between false negatives and false positives. A false negative occurs when the pathologist or algorithm identifies an individual as not having a disease when they in fact do; a false positive is when someone is identified as having a disease when they in fact do not. In breast cancer studies, pathologists achieved over 96% accuracy in diagnosis, versus 92% for an algorithm on the same task. However, the pathologists produced more false negatives and the algorithms more false positives. Combined, the algorithm could sift through the slides at speed, flagging the ones it deemed abnormal to the pathologist, who would have the final say. This combination of humans and algorithms brought diagnostic accuracy up to 99.5%. I guess machines aren’t quite coming for all the jobs after all!
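If you like to think in code, here is a minimal sketch of that triage pattern - purely illustrative, not the study’s actual pipeline. A deliberately sensitive algorithm (few false negatives, more false positives) flags slides for the pathologist, who makes the final call; the threshold and scores below are invented for the example.

```python
# Illustrative "algorithm screens, human decides" workflow.
# The algorithm is tuned to be sensitive: it would rather over-flag
# than miss a case, and every flagged slide goes to the pathologist.

from dataclasses import dataclass

@dataclass
class Slide:
    slide_id: str
    algorithm_score: float  # model's estimated probability of abnormality

FLAG_THRESHOLD = 0.2  # deliberately low threshold to minimise false negatives

def triage(slides: list[Slide]) -> tuple[list[Slide], list[Slide]]:
    """Split slides into those sent for pathologist review and those cleared."""
    flagged = [s for s in slides if s.algorithm_score >= FLAG_THRESHOLD]
    cleared = [s for s in slides if s.algorithm_score < FLAG_THRESHOLD]
    return flagged, cleared

slides = [Slide("A1", 0.05), Slide("B2", 0.35), Slide("C3", 0.92)]
flagged, cleared = triage(slides)
print([s.slide_id for s in flagged])  # ['B2', 'C3'] -> pathologist reviews these
print([s.slide_id for s in cleared])  # ['A1'] -> cleared by the screen
```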
Learning
Fortunately for AI, in L&D the stakes aren’t as high as wrongful imprisonment, misdiagnosis, or dying at the hands of a driverless car. And I’d like to think that if Hannah had written a chapter on learning, our “little filter that could” would have featured as a helpful assistant to the humans it serves. The worst-case scenario in our industry is that someone doesn’t access the piece of content that would help them most - which is pretty much where we’re at now anyway.
As with some of the examples above, at Filtered we use human curators to quality-check the content made available in the platform. We don’t intend to be a search engine; we cull and curate so that only the highest-quality, most relevant content is available. Then we let Filtered do the work of understanding each learner’s goals - what content they like, what skills they’re most interested in - and continuously tweak recommendations based on that. The goal is to put the most relevant piece of learning in front of each learner. And if you’d like, you can try it out here.
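For the technically curious, here is a hypothetical sketch of the underlying idea: scoring curated content against the skills a learner cares about. It isn’t Filtered’s actual engine - the catalogue, skill names and scoring rule are all made up for illustration.

```python
# A minimal, hypothetical sketch of skill-based content recommendation.
# Curated items are ranked by how well their skill tags overlap with the
# skills a learner has said they care about - not Filtered's real method.

def recommend(items: list[dict], learner_skills: set[str], top_n: int = 3) -> list[dict]:
    """Rank curated content by overlap with the learner's skill interests."""
    def score(item: dict) -> int:
        return len(set(item["skills"]) & learner_skills)
    ranked = sorted(items, key=score, reverse=True)
    return [item for item in ranked if score(item) > 0][:top_n]

catalogue = [
    {"title": "Giving better feedback", "skills": ["leadership", "communication"]},
    {"title": "Intro to SQL", "skills": ["data analysis"]},
    {"title": "Storytelling with data", "skills": ["data analysis", "communication"]},
]

print([c["title"] for c in recommend(catalogue, {"data analysis", "communication"})])
# ['Storytelling with data', 'Giving better feedback', 'Intro to SQL']
```

In practice the signals are richer (what a learner actually opens, rates and finishes, not just what they say), but the human-curated catalogue plus algorithmic ranking is the same division of labour as the pathology example: humans handle judgement, the algorithm handles scale.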
As a collective human mind, I hope we can agree that the dystopian future of AI is far out of sync with the technology currently available. But I also hope I’ve convinced you that strong AI isn’t where the opportunities are. Narrow AI is. What we have available today - from medicine to learning and development - is really...well...cool!