
Arms-length AI

Written by Marc Zao-Sanders | Sep 27, 2024 11:20:30 AM

Along with all the hype and excitement, there are many concerns, and corporates need to act, and to be seen to be acting, prudently and responsibly. Many companies we talk to have imposed blanket or partial bans on the use of AI, especially the new, popular strain -- Generative AI -- that has driven most of the hype over the past two years. More than 1 in 4 companies have issued a ban on GenAI.

The infamous case of early adopter Samsung, whose employees pasted confidential source code into ChatGPT, had IT and legal teams everywhere beefing up their IT and AI policies.

What risks exactly?

Obviously, we’re not talking here about the existential risk posed by AI, the kind neatly described and explained in Nick Bostrom’s Superintelligence.

We’re talking about legal, reputational, and productivity-related risks in corporates.

And there are so many! And they touch on many different areas of business. Most of us don’t have a great handle on the main risks, so we’re listing them out here, in approximate decreasing order of corporate concern:

  • Data privacy and the increased likelihood of security breaches.
  • Bias and discrimination in output data and in decision-making.
  • Inaccurate, misleading or unreliable AI outputs, eg hallucinations.
  • Job displacement and employee resistance.
  • Uncontrolled scaling.
  • Contribution to carbon emissions.
  • Reduced human oversight and accountability.
  • Lack of transparency and explainability.
  • Reputational damage resulting from misuse or poor use.
  • Copyright infringement and associated legal risk.
  • Misalignment with organisational values or policies.

Of course, set against all these risks is the risk of not adopting AI. Failing to act decisively enough, fast enough, or in the right way lets the competition steal a march on you with their new-found, augmented strengths.

So what are the…

Ways to reduce risk

There are many ways to reduce the risk of using AI in general, including in L&D. Here are four categories to think about.

First, you’ll need to be clear about your AI plans, educate the workforce and put governance guidelines in place. Your company might draw up and disseminate an AI manifesto / framework to set out your vision, along with certain parameters (what you will do, what you might do, what you won’t ever do, etc). You could address workers’ concerns about the risks in general and, in particular, the risk to their jobs. You might also set up an AI taskforce, ideally with some budget and hard responsibilities.

You’ll need to be (even) better at managing data. Data literacy just grew in importance for everyone at your organisation, so you’ll need to make some L&D provision there. You could restrict the AI’s access to sensitive data (training data as well as the data it’s drawing insights from) -- personally identifiable information (PII) is a clear marker here. You should perform data privacy checks at several points along the way.
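
To make that concrete, here is a minimal sketch of the kind of PII check you might run before any text or metadata leaves your organisation for an external AI service. The patterns and function names are illustrative assumptions, not a production-grade detector; a real deployment would use a dedicated PII-detection tool and a human review step.

```python
# Minimal sketch (assumptions throughout): a regex-based PII check run
# before any text or metadata is sent to an external AI service.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "uk_ni_number": re.compile(r"\b[A-Za-z]{2}\d{6}[A-Da-d]\b"),
}

def pii_findings(text: str) -> dict[str, list[str]]:
    """Return any PII-like strings found, keyed by pattern name."""
    findings = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[name] = matches
    return findings

def safe_to_send(text: str) -> bool:
    """Gate: only let text through if no PII-like strings are detected."""
    findings = pii_findings(text)
    if findings:
        print("Blocked, possible PII detected:", findings)
        return False
    return True

if __name__ == "__main__":
    sample = "Feedback from jane.doe@example.com: loved the data module."
    print("OK to send" if safe_to_send(sample) else "Redact before sending")
```

The point is where the check sits, at the boundary before data leaves your control, rather than the sophistication of the matching itself.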

It’s important to select and utilise AI models carefully. Before you dive into opaque, complex neural nets, give some consideration to Good Old Fashioned AI (GOFAI, ie rules-based) approaches. You might even be able to achieve most of what you want in Excel or Power BI. Retain human oversight and accountability, eg by keeping a human in the loop to sense-check outputs systematically. Depending on what kind of data you’re working with, you can and should look for bias mitigation techniques from the supplier. Select credible, road-tested vendors: many vendors talk a big AI game, but not all that glitters is gold. Most importantly and obviously, run experiments, pilots and proofs of concept before wider roll-outs.
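
As a toy illustration of both points (rules-based before neural, and a human in the loop), here is a hedged sketch. The skills, keywords and console sign-off are invented for this post; a real setup would use your own taxonomy and a proper review workflow.

```python
# Toy sketch (all names and rules invented for illustration): a rules-based
# (GOFAI) content tagger, with a human sign-off before anything is published.
RULES = {
    "data literacy": ["sql", "spreadsheet", "dashboard", "statistics"],
    "leadership": ["coaching", "delegation", "feedback"],
}

def suggest_tags(text: str) -> list[str]:
    """Tag content by simple keyword rules rather than an opaque model."""
    lowered = text.lower()
    return [skill for skill, keywords in RULES.items()
            if any(keyword in lowered for keyword in keywords)]

def human_approves(item: str, tags: list[str]) -> bool:
    """Keep a human in the loop; in practice this would be a review queue."""
    answer = input(f"Approve tags {tags} for '{item}'? [y/n] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    item = "Intro to dashboards and basic statistics"
    tags = suggest_tags(item)
    if human_approves(item, tags):
        print("Published:", item, tags)
    else:
        print("Held back for manual review:", item)
```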

But for us in L&D, these measures don’t do what we need them to do. They’re a lot of work for one thing. They’re not in our (L&D’s) control for another. And on top of that, they’re not guaranteed to ensure complete AI safety.

Here’s what will.

Risk eradication: tech-enabled services

You can actually eradicate the risk if you adopt AI as a tech-enabled service. This way, the vendor handles the hot potato and delivers insights to you via conversation and data, which you can choose to use, adapt, ask more questions about, or not use at all. Nothing gets pumped straight out of your firm, or back into it, or to your people.

Although Filtered is known for algorithmic AI and technology, tech-enabled services have always been at least 15% of what we do. They make the experience more human, they’re more interesting for us, and, while AI is still treated with kid gloves by many corporates, that educational dialogue has been essential.

How it works with Filtered:

  • We take your skills (not sensitive) from your framework/taxonomy. Sometimes we help clients to generate this.
  • We take your content metadata (not sensitive, especially if internal content is not involved). Often we’ll get this straight from the LMS/LXP.
  • We run our algorithms over the content through the lens of your skills. That tells us what’s relevant to you and what’s not (there’s an illustrative sketch of this step after the list).
  • We deliver those insights via spreadsheet, presentations, and iterative, consultative conversations.
  • No APIs, no licences, no Single Sign-On, no major IT clearance needed. There is still some data transfer (the first two steps), it’s true, so make sure the vendor has ISO 27001 (which we just passed, for the 7th year).
  • And you can do all this alongside getting set up for a SaaS contract, if you want to be in the driver’s seat.
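
To give a feel for the shape of that pipeline, here is a deliberately simplified sketch: scoring content metadata against a skills list and writing the results to a spreadsheet-friendly CSV. The keyword-overlap scoring and example data are assumptions made for illustration only; they are not Filtered’s actual algorithms.

```python
# Illustrative only: scoring content metadata against a skills list and
# writing a spreadsheet-friendly report. The keyword-overlap scoring and
# example data are stand-ins, not Filtered's production algorithms.
import csv

skills = {
    "data literacy": {"data", "analytics", "excel", "statistics"},
    "communication": {"presentation", "writing", "influence", "storytelling"},
}

catalogue = [
    {"title": "Storytelling with data", "description": "turn analytics into clear narratives"},
    {"title": "Advanced pivot tables", "description": "excel techniques for analysts"},
    {"title": "Health and safety refresher", "description": "annual compliance module"},
]

def relevance(item: dict, keywords: set[str]) -> float:
    """Crude relevance score: share of skill keywords present in the metadata."""
    words = set((item["title"] + " " + item["description"]).lower().split())
    return len(words & keywords) / len(keywords)

with open("relevance_report.csv", "w", newline="") as report:
    writer = csv.writer(report)
    writer.writerow(["title", "skill", "score"])
    for item in catalogue:
        for skill, keywords in skills.items():
            writer.writerow([item["title"], skill, round(relevance(item, keywords), 2)])
```

The point is the flow (skills in, metadata in, relevance report out), not the scoring itself.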

The client (and we) get to enjoy a human dialogue alongside all the AI-generated outputs.

***

So know the risks, and know some of the ways to mitigate them. And know that if you use a credible conduit (like Filtered) to keep the AI at arm’s length, almost all of the risk evaporates. This is a sensible way to spark meaningful AI adoption as you, your colleagues and the world gradually get more comfortable with it.

Please feel free to get in touch - we’d be happy to talk to you about any of the above and more.