Well, well, there we were, having almost swallowed all of the new EU General Data Protection Regulation to the letter … hardly, yet, and seeing that there’s still much interpretation to come as to how its principles will play out, let alone over the long term (I mean, you’re capable of discussing 10+ years ahead, aren’t you? Or take a walk on the wild side), and then there’s this:
Late last week, though, academic researchers laid out some potentially exciting news when it comes to algorithmic transparency: citizens of EU member states might soon have a way to demand explanations of the decisions algorithms make about them. … In a new paper, sexily titled “EU regulations on algorithmic decision-making and a ‘right to explanation,’” Bryce Goodman of the Oxford Internet Institute and Seth Flaxman at Oxford’s Department of Statistics explain how a couple of subsections of the new law, which govern computer programs making decisions on their own, could create this new right. … These sections of the GDPR do a couple of things: they ban decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her.” In other words, algorithms and other programs aren’t allowed to make negative decisions about people on their own.
The news article being here, the original paper being tucked away here.
Including the serious, for now very serious, caveats, but also offering glimpses of a better future (contra the title, and some parts of the content, of that piece). So, let’s all start lobbying, there and elsewhere. And:
[The classical way to protect one’s independence and privacy; Muiderslot]