Google’s DeepMind Runs Afoul Of UK Regulators Over Patient Data Access

Back in February, I recounted the tale of DeepMind, a standout AI startup acquired by Google a few years ago. In the story, I noted that DeepMind had announced that it would be working with the Royal Free London NHS Foundation Trust, which oversees three hospitals, to test out its healthcare app.

DeepMind’s healthcare app, Streams, is designed to push patient status updates to the physicians and nurses working with those patients. Under the terms of the deal, which was to span five years, DeepMind was supposed to gain access to 1.6 million patient records managed by the hospitals.

Now, the agreement seems to have collapsed under regulatory scrutiny. The UK’s data protection watchdog has ruled that DeepMind’s deal with the Trust “failed to comply with data protection law,” according to a story in Business Insider. The watchdog, known as the Information Commissioner’s Office (ICO), has spent a year investigating the deal, BI reports.

As it turns out, the agreement empowered the Trust hospitals to share the data without the patients’ prior knowledge, something that presumably wouldn’t fly in the U.S. either. The shared data, intended for use in developing the Streams app’s kidney monitoring technology, included information on whether people are HIV-positive, along with details of drug overdoses and abortions.

In their defense, DeepMind and the Royal Free Trust argued that patients had provided “implied consent” for such data sharing, given that the app was delivering “direct care” to patients using it. (Nice try. Got any other bridges you wanna sell?) Not surprisingly, that didn’t satisfy the ICO, which found several other shortcomings in how the data was handled as well.

While the ICO has concluded that the DeepMind/Royal Free Trust deal was illegal, it doesn’t plan to sanction either party, despite having the power to hand out fines of up to £500,000, BI said. But DeepMind, which last year set up its own independent review panel to oversee its data sharing agreements, privacy and security measures and product roadmaps, is taking a closer look at this deal. Way to self-police, guys! (Or maybe not.)

Not to be provincial, but what worries me about this is less the politics of UK patient protection laws, and more the potential for Google subsidiaries to engage in other questionable data sharing activities. DeepMind has always said that it does not share patient data with its corporate parent, and while this might be true now, Google could do incalculable harm to patient privacy if it doesn’t maintain this firewall.

Hey, just consider that even for an entity the size of Google, healthcare data is an incredibly valuable asset. Reportedly, even street-level data thieves pay 10x as much for healthcare data as they do for, say, credit card numbers. It’s hard to even imagine what an entity the size of Google could do with such data if crunched in incredibly advanced ways. Let’s just say I don’t want to find out.

Unfortunately, as far as I know U.S. law hasn’t caught up with the idea of crime-by-analytics, which could be an issue even if an entity has legal possession of healthcare data. But I hope it does soon. The amount of harm this kind of data manipulation could do is immense.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

2 Comments

  • Thanks for sharing. Would you mind giving a few examples of what you are worried Google might do with the kind of healthcare data that DeepMind would have gained access to if this deal went through?

    What makes DeepMind and Google not trustworthy to have access to this data? Why can’t they be held to the same HIPAA requirements (and similar UK privacy laws) that the original holders of the data are bound by?

    I may be naive, but I don’t see why we should hamstring companies that are trying to innovate and improve healthcare by breaking up deals over privacy concerns. If all companies/parties involved are willing and able to abide by HIPAA and other privacy laws, they should be given access to the data!

  • Hi Mike,

    Thanks for your feedback and questions. I’ll address the questions as best I can.

    * What might DeepMind or Google do with healthcare data?

    I’m not a database engineer or developer, so I’m not qualified to address questions of how, physically, DeepMind and Google might manipulate data. And maybe I’m a bit paranoid. But I understand that it’s quite possible to re-attach the names of patients to de-identified data. Once a company like Google had access to patient-identifiable data, it could in theory be resold to pharmas, insurers and the like.

    * What makes DeepMind/Google not trustworthy, and why can’t they be held to data privacy regs?

    I’m not suggesting that Google or DeepMind are untrustworthy per se, but I do think that if patient data matching *can* be done and might add countless millions to revenue, it’s a huge temptation. Yes, privacy laws exist, but I’m not sure they even contemplate the situations I have in mind here.

    Certainly, innovation is good. I’m merely suggesting that we should address potential privacy threats, or at least be aware of them, before we barrel ahead with patient data sharing schemes.
