Google’s semi-secret deal with Ascension is testing the limits of HIPAA as society grapples with the future impact of machine learning and artificial intelligence.
Glenn Cohen points out that HIPAA may not be keeping up with how patients and society consent to the ways personal data is used. Is prior consent, particularly consent from vulnerable patients seeking care, a good way to regulate secret commercial deals with their caregivers? The answer to a question is strongly influenced by how you ask it.
Here’s a short review of the current scandal and related ones. It also links to a recent deal between Mayo and Google, also semi-secret. A scholarly investigative-journalism report on the 2016 Google AI scandal with the London NHS Foundation Trust might be summarized as: the core issue is not consent; it is a conflict of interest at the very foundation of the information governance process. The foxes are guarding the patient-data henhouse. When the secrecy of a deal is broken, a scandal ensues.
The parts of the Google-Ascension deal that are secret are likely designed to misdirect attention away from the intellectual property value of the business relationship.
The Oct. 22 announcement starts with: “U.S. Sens. Mark R. Warner (D-VA), Josh Hawley (R-MO) and Richard Blumenthal (D-CT) will introduce the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, bipartisan legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.”
Although the scope of this bill is limited to the largest of the data brokers (messaging, multimedia sharing, and social networking) that currently mediate between us as individuals, it contains groundbreaking provisions for delegation by users that are a road map for privacy regulation in general in the 21st century.
The bill’s Section 5: Delegation describes a new right for us as data subjects at the mercy of the institutions we are effectively forced to use. This is the right to choose and delegate authority to a third-party agent that can manage interactions with the institutions on our behalf. The third-party agent can be anyone we choose, subject to registration with the Federal Trade Commission. This right to digital representation by an entity of our choice, with access to the full range of our direct-control capabilities, is unprecedented, as far as I know.
Medical AI testing is unsafe, and that isn’t likely to change anytime soon.
No regulator is seriously considering implementing “pharmaceutical style” clinical trials for AI prior to marketing approval, and evidence strongly suggests that pre-clinical testing of medical AI systems is not enough to ensure that they are safe to use. As discussed in a previous post, factors ranging from the laboratory effect to automation bias can contribute to substantial disconnects between pre-clinical performance of AI systems and downstream medical outcomes. As a result, we urgently need mechanisms to detect and mitigate the dangers that under-tested medical AI systems may pose in the clinic.
In a recent preprint co-authored with Jared Dunnmon from Chris Ré’s group at Stanford, we offer a new explanation for the discrepancy between pre-clinical testing and downstream outcomes: hidden stratification. Before explaining what this means, we want to set the scene by saying that this effect appears to be pervasive and underappreciated, and that it could lead to serious patient harm even in AI systems that have been approved by regulators.
But there is an upside here as well. Looking at the failures of pre-clinical testing through the lens of hidden stratification may offer us a way to make regulation more effective, without overturning the entire system and without dramatically increasing the compliance burden on developers.
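To make the idea concrete, here is a minimal sketch of how a single aggregate metric can mask a failing subgroup. The numbers are invented for illustration and are not drawn from the preprint:

```python
# Hypothetical illustration of hidden stratification: overall accuracy
# looks strong while a clinically critical subgroup fares much worse.
# All counts below are invented for demonstration purposes.

def accuracy(correct, total):
    return correct / total

# Suppose a classifier is evaluated on 1,000 cases: 950 "common"
# presentations and 50 with a rare but dangerous subtype that the
# evaluation labels never distinguished.
common_correct, common_total = 912, 950   # 96% on common cases
rare_correct, rare_total = 25, 50         # coin-flip on the hidden stratum

overall = accuracy(common_correct + rare_correct, common_total + rare_total)
print(f"Overall accuracy: {overall:.1%}")   # 93.7% - looks approvable
print(f"Rare-subtype accuracy: {accuracy(rare_correct, rare_total):.1%}")  # 50.0%
```

A regulator or developer who only sees the 93.7% headline number would never know the model performs no better than chance on the cases where being wrong matters most.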
Robust exchange of health information is absolutely critical to improving health care quality and lowering costs. In the last few months, government leaders at the US Department of Health and Human Services (HHS) have advanced ambitious policies to make interoperability a reality. Overall, this is a great thing. However, there are places where DC regulators need help from the frontlines to understand what will really work.
As California’s largest nonprofit health data network, Manifest MedEx has submitted comments and met with policymakers several times over the last few months to discuss these policies. We’ve weighed in with Administrator Seema Verma and National Coordinator Dr. Don Rucker. We’ve shared the progress and concerns of our network of over 400 California health organizations, including hospitals, health plans, nurses, physicians, and public health teams.
With the comment periods now closed, here’s a high-level look at what lies ahead:
CMS is leading on interoperability (good). Big new proposals from the Centers for Medicare and Medicaid Services (CMS) will set tough parameters for sharing health information. With a good prognosis to roll out in final form around HIMSS 2020, we’re excited to see requirements that health plans give patients access to their claims records via a standard set of APIs, so patients can connect their data to apps of their choosing. In addition, hospitals will be required to send admit, discharge, transfer (ADT) notifications on patients to community providers, a massive move to make transitions from hospital to home safe and seamless for patients across the country. Studies show that readmissions to the hospital are reduced by as much as 20% when patients are seen by a doctor within the first week after a hospitalization. Often the blocker is not knowing a patient was discharged. CMS is putting some serious muscle behind getting information moving and is using its leverage as a payer to create new economic reasons to share. We love it.
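To make the ADT requirement concrete, here is a minimal sketch of what consuming a discharge notification can look like. ADT feeds are typically HL7 Version 2 messages; the sample message, facility names, and field choices below are fabricated for illustration, and real feeds vary by site and interface engine:

```python
# A minimal sketch of parsing an HL7v2 ADT message of the kind hospitals
# would send to community providers under the CMS proposal. The message
# content here is entirely fabricated for illustration.

SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|HOSP|FAC|HIE|MMX|20191101120000||ADT^A03|MSG0001|P|2.5",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "PV1|1|I|MED^201^A",
])

def parse_adt(message):
    """Extract the trigger event and patient identifier from an ADT message."""
    segments = {line.split("|")[0]: line.split("|")
                for line in message.split("\r")}
    # The message-type field carries "ADT^A03"; A03 is a discharge event.
    event = segments["MSH"][8].split("^")[1]
    # The patient-identifier field carries the MRN as its first component.
    mrn = segments["PID"][3].split("^")[0]
    return event, mrn

event, mrn = parse_adt(SAMPLE_ADT)
print(event, mrn)  # A03 123456
```

A network like ours sits on the receiving end of thousands of such messages a day; the hard part is not the parsing but routing the notification to the right community provider in time to schedule that first-week follow-up visit.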
Despite an area under the ROC curve of 1, Cassandra’s prophecies were never believed. She neither hedged nor relied on retrospective data – her predictions, such as the Trojan War, were prospectively validated. In medicine, a new type of Cassandra has emerged – one who speaks in probabilistic tongue, forked unevenly between the probability of being right and the possibility of being wrong. One who, by conceding that she may be categorically wrong, is technically never wrong. We call these new Minervas “predictions.” The Owl of Minerva flies above its denominator.
Deep learning (DL) promises to transform the prediction industry from a stepping stone for academic promotion and tenure into something vaguely useful for clinicians at the patient’s bedside. Economists studying AI believe that AI is revolutionary – revolutionary like the steam engine and the internet – because it predicts better.
Recently published in Nature, a sophisticated DL algorithm was able to predict acute kidney injury (AKI), continuously, in hospitalized patients by extracting data from their electronic health records (EHRs). The algorithm interrogated nearly a million EHRs of patients in Veterans Affairs hospitals. As intriguing as their methodology is, it’s less interesting than their results. For every correct prediction of AKI, there were two false positives. The false alarms would have made Cassandra blush, but they’re not bad for prognostic medicine. The DL-generated ROC curve stands head and shoulders above the diagonal representing randomness.
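The arithmetic behind that alarm burden is worth spelling out. Two false positives per true positive pins the positive predictive value at one in three; the ward numbers below are invented purely for illustration:

```python
# Back-of-envelope arithmetic for the alarm burden implied by
# "two false positives for every correct prediction of AKI".
# The hypothetical ward volume below is an invented assumption.

true_positives = 1
false_positives = 2
ppv = true_positives / (true_positives + false_positives)
print(f"Positive predictive value: {ppv:.2f}")  # 0.33

# On a hypothetical ward where the model flags 9 patients a day,
# clinicians would chase alarms for 6 patients who never develop AKI.
flags_per_day = 9
false_alarms = flags_per_day * (1 - ppv)
print(f"False alarms per day: {false_alarms:.0f}")  # 6
```

Every one of those false alarms is a creatinine check, a fluid order, or a nephrology consult looking for a kidney injury that isn’t there.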
The researchers used a technique called “ablation analysis.” I have no idea how that works, but it sounds clever. Let me make a humble prophecy of my own – if unleashed at the bedside, the AKI-specific, DL-augmented Cassandra could wreak havoc on a scale one struggles to comprehend.
Leaving aside that the accuracy of algorithms trained retrospectively falls in the real world – as doctors know, there’s a difference between book knowledge and practical knowledge – the major problem is the effect that the availability of information has on decision making. Prediction is fundamentally information. Information changes us.
When you ask the ‘big data guy’ at a massive health system what’s wrong with EMRs, it’s surprising to hear that his problem is NOT with the EMRs themselves but with the fact that health systems are just not using the data they’re collecting in any meaningful way. Atul Butte, Chief Data Scientist for the University of California Health System, says interoperability is not the big issue! Instead, he says it’s the fact that health systems are not using some of the most expensive data in the country (we are using doctors to enter it…) to draw big, game-changing conclusions about the way we practice medicine and deliver care. Listen in to find out why Atul thinks the business incentives are misaligned for a data revolution and what we need to do to help.
Filmed at Health Datapalooza in Washington DC, March 2019.
Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt.
Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health.
The 2019 Health 2.0 conference just wrapped up after several days of compelling presentations, panels, and networking. As in the past, attendees were a cross section of the industry: providers, payers, health IT (HIT) companies, investors, and others who are passionate about innovation in healthcare.
One of the more refreshing themes of the conference was an emphasis on how health IT can enable the delivery of services. This is a welcome perspective, as too often organizations believe that simply deploying technology will solve their problems. In my 30+ years in healthcare, I’ve never seen that work. What does work is careful attention to the iron triad of people, process, and technology. Neglect one of these and you will fall short of your goals. Framing opportunities as services that are enabled and enhanced by technology helps us avoid the common pitfall of believing “Tech = Solution” and forces us to account for process and people.
Provider Burn-out and Health IT
Several sessions focused on the impact technology is having on end-users, especially clinicians. One session featured a “reverse-pitch” where practicing physicians “pitched” to health IT experts on the challenges they face, especially with EHRs, and what they need in order to do their job and have a life. This was summed up elegantly by a physician participant as, “Please make all the stupid sh*t stop!” There’s increasing evidence that the deployment of EHRs is a major factor in clinician burnout, and the impassioned pleas of the attendees resonated throughout the conference.
Other sessions explored how we might address these problems through improvements in user-interface design, workflow, and interoperability. Demonstrations of advanced technologies like voice-driven interfaces, artificial intelligence, enhanced communications, and smart devices showed where we are headed and hold out the promise of more efficient and pleasing HIT for providers and patients.
This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.
A question I hear quite often, sometimes whispered, is: Why should anyone care about health data interoperability? It sounds pretty technical and boring.
If I’m talking with a “civilian” (in my world, someone not obsessed with health care and technology) I point out that interoperable health data can help people care for themselves and their families by streamlining simple things (like tracking medication lists and vaccination records) and more complicated things (like pulling all your records into one place when seeking a second opinion or coordinating care for a chronic condition). Open, interoperable data also helps people make better pocketbook decisions when they can comparison-shop for health plans, care centers, and drugs.
Sometimes business leaders push back on the health data rights movement, asking, sometimes aggressively: Who really wants their data? And what would they do with it if they got it? Nobody they know, including their current customers, is clamoring for interoperable health data.
While patients can often find comfort, compassion, and support in Facebook Groups dedicated to their health conditions, they don’t realize that their identity, location, and email addresses can be found quite easily by other members of their closed group — some of whom may not have well-meaning purposes for that information. Called a Strict Inclusion Closed Group Reverse Lookup (SICGRL) attack, this is a privacy violation of unprecedented magnitude.
Fred Trotter is one of the leaders of a group of activists co-led by Andrea Downing and David Harlow that is taking on Facebook to correct this health data privacy violation.
While this interview was filmed at Health Datapalooza in the Spring of this year, Fred has just published an update that details how Facebook continues to ignore the issue and remains unwilling to collaborate on a solution.
Catch up on the background behind this data privacy issue — currently, one of the most important opportunities we as healthcare innovators have to learn about what NOT to do when it comes to user privacy and sensitive data.
Sharing a hotel room, however, does not a marriage make. In order to get better digital health interventions to market faster, we need what I’m calling a Partnership for Innovators, Policymakers and Evidence-generators (PIPE). As someone who functions variously in the policy, tech and academic worlds, I believe PIPE needn’t be a dream.