We’re adopting a set of principles to guide our decision making on learner data capture, analysis and communication. We believe they will help us develop Filtered in a way that most benefits our learners.
We have always been careful with data, and last year we formalised data security principles in our infosec management system. But recently we’ve been thinking more broadly about how the information we capture can add most value for our customers and learners. We’ve researched how other progressive organisations think about data decisions (for example, we’ve adopted and broadened Google’s data privacy principle of never selling on user data), and how our clients and users derive value from data. We’ve now crystallised this thinking into six principles:
- Transparency. Learners and clients need to be able to make informed choices about how they use Filtered. So we will be clear with learners and client organisations about which data we collect and why.
- Actionable insight. Data without context is of little practical use and at worst misleading. We will turn data into information and information into actionable insights that support learners’ and clients’ goals.
- A safe space to learn. Filtered is a safe space for learners - for it to work, learners need to be open about their concerns and priorities. So we will not share identifiable learner-level data with anyone except the learner.
- Integrity. We will never sell personal or organisational information to anyone.
- Openness. We are part of a global community of learners and a leader in learning and development. We want to be good citizens. Where we can - for instance where it does not affect our learners’ and clients’ interests - we will share anonymised insights to benefit all.
- Security. We will follow robust policy and process to protect learner data, including continuing to follow the ISO27001 standard for information security management.
Some of the principles are obvious choices: of course we’ll follow robust security policies to protect our data. But for others the choice is trickier - in making the decision, we’re closing off some opportunities. For example, with the ‘safe space to learn’ principle, we debated the balance between (occasionally conflicting) organisational and learner interests, before deciding that the overriding consideration was that learners need to be able to be completely honest and unguarded in their use of Filtered. In research into how and when learners identify a self-development requirement, we observed that learners withhold self-diagnoses they consider sensitive when they know their learning is monitored. And since learning is fundamentally focused on areas of perceived shortfall in capability (at least relative to a target level), monitored learners are likely to self-censor in exactly the areas that could drive the most valuable learning recommendations.
What do you think about the trade-offs behind some of the choices we’ve made? We’re keen to hear your thoughts as we work to embed the principles in our decision making and practice.
Originally posted on LinkedIn.