Does GDPR do enough to police AI?

Algorithms are increasingly powerful, and researchers have recently been grappling with how they can operate ethically and effectively in society. The broad consensus to date is that five core things are required:

  1. Someone responsible for each instance of AI
  2. The ability to explain what is done, how it's done and why it's done
  3. Confidence in the accuracy of the system, and knowledge of where biases may exist
  4. The ability for third parties to probe and audit the algorithms
  5. AI that is developed with fairness in mind

I wrote earlier this year about a fascinating project where researchers had developed an AI capable of explaining its own workings.

The researchers developed an algorithm that not only performs its task, but also translates how it achieved it into reasonably plain English, via a documentation process performed at each stage of its work.
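To make the idea concrete, here is a minimal sketch of that documentation pattern. This is a hypothetical illustration, not the researchers' actual system: a toy rule-based loan scorer (the thresholds, variable names, and `score_applicant` function are all invented for this example) that appends a plain-English note at each stage, so its verdict arrives with an explanation trail.

```python
def score_applicant(income, debt, years_employed):
    """Return (approved, explanation_log) for a toy loan decision.

    Each stage of the decision appends a human-readable note to the
    log, mirroring the 'document as you work' approach described above.
    """
    log = []

    # Stage 1: compute and record the debt-to-income ratio.
    ratio = debt / income
    log.append(f"Computed debt-to-income ratio: {ratio:.2f} "
               f"(debt {debt} / income {income}).")

    # Stage 2: check and record employment stability.
    stable = years_employed >= 2
    log.append(f"Employment check: {years_employed} year(s) "
               f"{'meets' if stable else 'falls short of'} the 2-year threshold.")

    # Stage 3: combine the criteria and record the reason for the verdict.
    approved = ratio < 0.4 and stable
    if approved:
        log.append("Approved: ratio below 0.40 and employment stable.")
    else:
        reasons = []
        if ratio >= 0.4:
            reasons.append("ratio at or above 0.40")
        if not stable:
            reasons.append("employment under 2 years")
        log.append("Declined: " + "; ".join(reasons) + ".")
    return approved, log

approved, log = score_applicant(income=50000, debt=15000, years_employed=3)
```

A real system would explain statistical models rather than hand-written rules, but the principle is the same: the explanation is generated alongside the decision, not reconstructed after the fact.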

Still work to be done

Despite official attempts to build these principles into our rules and regulations, however, researchers suggest there is still some way to go. A paper from a team at The Alan Turing Institute suggests that the EU's General Data Protection Regulation does little to legally compel tech companies to explain their algorithms.

What's more, the authors also raise doubts about just what kind of information may be included when explanations are provided. They suggest that urgent clarification is required in the face of a growing number of algorithms in operation.

The paper argues that the 'right to explanation' is more akin to a 'right to be informed'. Indeed, data controllers are only required to inform us that our data is being used, and to outline the basic design of the algorithm behind decisions. They aren't required to provide details behind specific decisions.

Right to reply

What's more, the authors argue that GDPR does little to support people who wish to contest the decision making of algorithms. The legislation provides a number of stipulations and criteria, but these fall well short of covering all circumstances.

Indeed, they also believe that the information shared with us about the way algorithms make their decisions is likely to be limited, in large part due to the rules protecting trade secrets.

The risk, therefore, is that we will receive little meaningful information should we wish to complain about an algorithm, and have little recourse should we believe a decision was made improperly.

The authors suggest a number of improvements, not least the creation of a regulator drawing on a wide range of expertise, whose primary responsibility would be to ensure that automated decisions are made in an appropriate way.

"Our article shows there is a lot of ambiguity and vagueness in the General Data Protection Regulation, which could result in fragmented standards across Europe," they say.

"An open dialogue with industry, government, academia and civil society is needed on ways forward, and we aim to provide a clear path forward to achieve meaningful accountability and transparency in rapidly emerging algorithmic and artificially intelligent applications.

"It is important that we discuss these issues now, as algorithmic decisions affect our lives in increasingly significant ways."