New Study Is Scathing Of DeepMind's Use Of Patient Data


Google DeepMind were in the news recently after they announced the launch of a blockchain-like ledger technology for storing medical data. Whilst I urged a degree of caution over the apparent lack of clarity as to who would own the data Google collected, it is a positive step towards ensuring the data is more secure.
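DeepMind have only described the system in outline, so to give a flavour of how such a ledger makes data handling more secure, here is a minimal, purely illustrative sketch in Python (the class and field names are my own, not DeepMind's): each entry in the log commits to the hash of the entry before it, so any retroactive tampering with an access record breaks the chain and is detectable.

import hashlib
import json
import time

class AuditLedger:
    """Illustrative append-only, hash-chained log: each entry commits to
    the previous entry's hash, so later tampering breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        # Chain this entry to the hash of the previous one (or a zero
        # hash for the first entry).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every hash; any edited or reordered entry fails.
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("record", "prev_hash", "ts")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

# Hypothetical example: log each access to a patient record, then audit.
ledger = AuditLedger()
ledger.append({"actor": "clinician_42", "action": "read", "record_id": "p123"})
ledger.append({"actor": "research_app", "action": "query", "record_id": "p123"})
assert ledger.verify()

The point of such a design is not to prevent access, but to make every access permanently and verifiably visible after the fact.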

A recent study examined the data sharing partnership struck between DeepMind and the Royal Free NHS Foundation Trust in London in 2015, which saw data on 1.6 million patients opened up to the tech giant. It suggests that whilst things like ledger technologies are welcome, we cannot rely on tech companies to regulate themselves in such matters, and that more independent oversight is needed to ensure data is managed properly.

Fuzzy logic

Central to the concern were the apparently fuzzy terms of the agreement itself. Whilst ostensibly the data was to be used to improve diagnostic capabilities for kidney patients, the wording of the agreement was much more open-ended.

Subsequent investigations have revealed that DeepMind secured access to identifiable patient records, which were not part of the original agreement, and the Trust were not able to monitor how these were used. Despite the original agreement being replaced with a new one in November 2016, the Information Commissioner's Office (ICO) still felt the need to investigate the initial contract, with DeepMind retaining access to the data even whilst this investigation is under way.

DeepMind assure me that this aspect of the agreement was strictly above board, that the Royal Free were in full control of it at all times, and that being able to identify patients was crucial to the viability of the project. Whilst that may be true, the same level of awareness is surely not the case for the patients whose data was shared.

The Cambridge study examines the agreement in depth, and whilst the authors don't believe it presented a security risk, they nonetheless consider it highly questionable, not least because of the perceived lack of transparency and legal basis.

"Data mining and machine learning offer huge promise in improving healthcare and clearly digital technology companies will have a major role to play," the researchers say. "Nevertheless, we think that there were inadequacies in the case of this particular deal."

"The deal betrays a level of naivety regarding how public sector organisations set up data-sharing arrangements with private firms, and it demonstrates a major challenge for the public and public institutions. It is worth noting, for example, that in this case DeepMind, a machine learning company, had to make the bizarre promise that it would not yet use machine learning, in order to engender trust."

Implied consent

The deal is particularly interesting because none of the patients whose data was shared ever gave consent for this to be done. The Department of Health guidelines state that providers have a duty to share knowledge freely when it revolves around the direct care of the patient. Yet not only do the guidelines primarily concern themselves with data sharing between providers rather than with commercial third parties, there is also murkiness about what is and what is not direct care. The data shared with DeepMind included many patients with no history of kidney trouble. Indeed, the paper suggests just one in six of the records shared involved kidney patients. DeepMind argue that this broader dataset was crucial to allow them to spot potential kidney problems in patients who are in hospital for other issues, but it’s easy to see why the authors are skeptical of this assertion.

The authors also argue that there is a distinct lack of transparency in the agreement, with neither party clear on the volume of data involved, or on the fact that it could be used to identify individual patients. What's more, despite DeepMind having an 'ethics committee', there has been no scrutiny of how that data has been used. It's an approach the authors liken to a one-way mirror.

"Once our data makes its way onto Google-controlled servers, our ability to track it - to understand how and why decisions are made about us - is at an end," they write.

Again, it’s a claim DeepMind deny, saying that the whole process has been driven by the Royal Free, which decides exactly how many data records are shared, and to what end.

Lessons to be learned

Suffice to say, the paper presents a cautionary tale for how health data should be handled. I've written numerous times about the enormous potential for improvements to healthcare through better use of both the data currently held within the system and user-generated data.

That is really not in question, but it's crucial that the process is done correctly and transparently. Not only must any data sharing be independently scrutinized, patients must also be consulted throughout the process. Neither appears to have happened here, which, given that a similar lack of public buy-in was largely to blame for the failure of Care.data, suggests NHS officials have not learned from past mistakes.

The authors warn of the very real risk that a company such as DeepMind will gain a de facto monopoly on patient data, which would be terrible news for providers, patients and the wider ecosystem.

"The reality is that the exact nature and extent of Google's interests in NHS patient data remain ambiguous," the authors say. "I personally think that because data like this can get out there, we are almost becoming resigned to the idea. This case stresses that we shouldn't be. Before public institutions give away longitudinal data sets of our most sensitive details, they should have to account to a comprehensive, forward-thinking and creative regulatory system."

The authors plan to apply the same level of scrutiny to the revised agreement between DeepMind and the Royal Free, feeding in the findings of the ongoing regulatory investigations too. Hopefully, their work will prompt officials in the NHS and Department of Health to start taking this matter seriously and adopting a joined-up approach.

*Update: I have been contacted by DeepMind with a statement, included below:

The Royal Free and DeepMind: "The Royal Free London and DeepMind are committed to working together to use technology to support world class care for patients. We also have the highest regard for patient privacy and confidentiality.

"This paper completely misrepresents the reality of how the NHS uses technology to process data. It makes a series of significant factual and analytical errors, assuming that this kind of data agreement is unprecedented. In fact, every trust in the country uses IT systems to help clinicians access current and historic information about patients, under the same legal and regulatory regime.

"The Streams app is already being used on the wards at the Royal Free Hospital and the feedback from clinicians has been overwhelmingly positive. Nurses estimate that it is saving them up to two hours every day because patient information is now available via a secure mobile app. It also means that clinicians are able to respond to patients who are at risk of developing life-threatening conditions in minutes rather than hours or days."

I have also contacted the chair of the Independent Reviewers for DeepMind Health for commentary, and will await with interest the official announcement on the deal from the Information Commissioner's Office, which is believed to be due for public release in the next week.
