We’re adopting a set of principles to guide our decision making on learner data capture, analysis and communication. We believe they will help us develop Filtered in a way that most benefits our learners.
We have always been careful with data, and last year we formalised data security principles in our infosec management system. But recently we’ve been thinking more broadly about how the information we capture can add the most value for our customers and learners. We’ve researched how other progressive organisations think about data decisions (for example, we’ve adopted and broadened Google’s data privacy principle of never selling on user data), and how our clients and users derive value from data. We’ve now crystallised this thinking into six principles.
Some of the principles are obvious choices: of course we’ll follow robust security policies to protect our data. But for others the choice is trickier, because adopting the principle closes off some opportunities. For example, with the ‘safe space to learn’ principle, we debated the balance between (occasionally conflicting) organisational and learner interests, before deciding that the overriding consideration was that learners must be able to be completely honest and unguarded in their use of Filtered. In research into how and when learners identify a self-development need, we observed that learners withhold self-diagnoses they consider sensitive when they know their learning is monitored. And since learning is fundamentally focused on areas of perceived shortfall in capability (at least relative to a target level), monitored learners are likely to self-censor in exactly the areas that could drive the most valuable learning recommendations.
What do you think about the trade-offs behind some of the choices we’ve made? We’re keen to hear your thoughts as we work to embed the principles in our decision making and practice.
Originally posted on LinkedIn.