The question of how much time I spend in front of a screen has pestered me both professionally and personally.
A recent topic of conversation among parents at my children’s preschool has been how much screen time my toddlers’ brains can handle. It was spurred on by a study in JAMA Pediatrics that evaluated the association between screen time and brain structure in toddlers. The study reported that children who spent more time with electronic devices had lower measures of organization in brain pathways involved in language and reading.
As a neurologist, I find these results worrying, for my children and for myself. I wonder if I’m changing the structure of my brain for the worse through prolonged time spent in front of a computer completing medical documentation. I think that, without the move to electronic medical records, I might be better off – in more ways than one. Not only is using them potentially affecting my brain; they also pose a danger to my patients by threatening their privacy.
As any practicing physician can tell you, electronic medical records represent a Pyrrhic victory of sorts. They present a tangible benefit in that medical documentation is now legible and information from different institutions can be obtained with the click of a button — compared to the method of decades past, in which a doctor hand-wrote notes in a paper chart — but there’s also a downside.
That history should provide a sobering perspective on the distinction between inevitable and imminent (a difference at least as important to investors as to intellectuals), even on hot-button topics such as new data uses involving the electronic health record (EHR).
I’ve been one of the optimists. Earlier this year, my colleague Adrian Gropper and I wrote about pending federal regulations requiring providers to give patients access to their medical record in a format usable by mobile apps. This, we said, could “decisively disrupt medicine’s clinical and economic power structure.”
Google’s semi-secret deal with Ascension is testing the limits of HIPAA as society grapples with the future impact of machine learning and artificial intelligence.
Glenn Cohen points out that HIPAA may not be keeping up with how patients and society consent to the ways personal data is used. Is prior consent, particularly consent from vulnerable patients seeking care, a good way to regulate secret commercial deals with their caregivers? The answer to a question is strongly influenced by how you ask it.
Here’s a short review of the current scandal and related ones. It also links to a recent deal between Mayo and Google, also semi-secret. A scholarly investigative journalism report on the Google AI scandal with the London NHS Foundation Trust in 2016 might be summarized as follows: the core issue is not consent; it is a conflict of interest at the very foundation of the information governance process. The foxes are guarding the patient-data henhouse. When the secrecy of a deal is broken, a scandal ensues.
The parts of the Google-Ascension deal that are secret are likely designed to misdirect attention away from the intellectual property value of the business relationship.
The Oct. 22 announcement starts with: “U.S. Sens. Mark R. Warner (D-VA), Josh Hawley (R-MO) and Richard Blumenthal (D-CT) will introduce the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, bipartisan legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.”
Although the scope of this bill is limited to the largest of the data brokers (messaging, multimedia sharing, and social networking) that currently mediate between us as individuals, it contains groundbreaking provisions for delegation by users that amount to a road map for privacy regulation in general in the 21st century.
The bill’s Section 5: Delegation describes a new right for us as data subjects at the mercy of the institutions we are effectively forced to use. This is the right to choose and delegate authority to a third-party agent that can manage interactions with the institutions on our behalf. The third-party agent can be anyone we choose subject to their registration with the Federal Trade Commission. This right to digital representation by an entity of our choice with access to the full range of our direct control capabilities is unprecedented, as far as I know.
Medical AI testing is unsafe, and that isn’t likely to change anytime soon.
No regulator is seriously considering implementing “pharmaceutical style” clinical trials for AI prior to marketing approval, and evidence strongly suggests that pre-clinical testing of medical AI systems is not enough to ensure that they are safe to use. As discussed in a previous post, factors ranging from the laboratory effect to automation bias can contribute to substantial disconnects between pre-clinical performance of AI systems and downstream medical outcomes. As a result, we urgently need mechanisms to detect and mitigate the dangers that under-tested medical AI systems may pose in the clinic.
In a recent preprint co-authored with Jared Dunnmon from Chris Ré’s group at Stanford, we offer a new explanation for the discrepancy between pre-clinical testing and downstream outcomes: hidden stratification. Before explaining what this means, we want to set the scene by saying that this effect appears to be pervasive and underappreciated, and that it could lead to serious patient harm even in AI systems that have been approved by regulators.
But there is an upside here as well. Looking at the failures of pre-clinical testing through the lens of hidden stratification may offer us a way to make regulation more effective, without overturning the entire system and without dramatically increasing the compliance burden on developers.
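To make the idea concrete, here is a toy illustration of our own (the numbers and the classifier are invented, not taken from the preprint): an aggregate metric can look excellent while a rare but clinically important subgroup is badly served, which is the kind of disconnect the term "hidden stratification" points at.

```python
# Toy illustration: aggregate accuracy hides a failing subgroup.
# Each entry records whether a hypothetical classifier got a case right.
common_cases = [True] * 950 + [False] * 10   # 960 routine presentations
rare_subtype = [True] * 5 + [False] * 35     # 40 cases of a rare, dangerous subtype

def accuracy(results):
    """Fraction of cases the model classified correctly."""
    return sum(results) / len(results)

overall = accuracy(common_cases + rare_subtype)
subgroup = accuracy(rare_subtype)

print(f"overall accuracy:      {overall:.1%}")   # 95.5% -- looks ready for the clinic
print(f"rare-subtype accuracy: {subgroup:.1%}")  # 12.5% -- invisible in the headline number
```

A pre-clinical evaluation that only reports the first number would pass this model; the patients in the second line would pay for it.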
Robust exchange of health information is absolutely critical to improving health care quality and lowering costs. In the last few months, government leaders at the US Department of Health and Human Services (HHS) have advanced ambitious policies to make interoperability a reality. Overall, this is a great thing. However, there are places where DC regulators need help from the frontlines to understand what will really work.
As California’s largest nonprofit health data network, Manifest MedEx has submitted comments and met with policymakers several times over the last few months to discuss these policies. We’ve weighed in with Administrator Seema Verma and National Coordinator Dr. Don Rucker. We’ve shared the progress and concerns of our network of over 400 California health organizations, including hospitals, health plans, nurses, physicians, and public health teams.
With the comment periods now closed, here’s a high-level look at what lies ahead:
CMS is leading on interoperability (good). Big new proposals from the Centers for Medicare and Medicaid Services (CMS) will set tough parameters for sharing health information. With a good prognosis to roll out in final form around HIMSS 2020, we’re excited to see requirements that health plans give patients access to their claims records via a standard set of APIs, so patients can connect their data to apps of their choosing. In addition, hospitals will be required to send admit, discharge, transfer (ADT) notifications on patients to community providers, a massive move to make transitions from hospital to home safe and seamless for patients across the country. Studies show that readmissions to the hospital are reduced by as much as 20% when patients are seen by a doctor within the first week after a hospitalization. Often the blocker is not knowing a patient was discharged. CMS is putting some serious muscle behind getting information moving and is using its leverage as a payer to create new economic reasons to share. We love it.
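For readers who haven’t seen one, an ADT notification is typically an HL7 v2 message. The schematic below is our own illustration, not taken from the rule text; every field value is a placeholder, and real messages carry many more fields:

```
MSH|^~\&|ADT_APP|GENERAL_HOSPITAL|NOTIFY_APP|HIE|20200101120000||ADT^A03|MSG00001|P|2.5
EVN|A03|20200101120000
PID|1||123456^^^GENERAL_HOSPITAL^MR||DOE^JANE||19600101|F
PV1|1|I|MED^101^A
```

The `A03` event type signals a discharge; a community provider’s system receiving this knows, within minutes rather than weeks, that their patient has left the hospital.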
Despite an area under the ROC curve of 1, Cassandra’s prophecies were never believed. She neither hedged nor relied on retrospective data – her predictions, such as the Trojan War, were prospectively validated. In medicine, a new type of Cassandra has emerged – one who speaks in a probabilistic tongue, forked unevenly between the probability of being right and the possibility of being wrong. One who, by conceding that she may be categorically wrong, is technically never wrong. We call these new Minervas “predictions.” The Owl of Minerva flies above its denominator.
Deep learning (DL) promises to transform the prediction industry from a stepping stone for academic promotion and tenure into something vaguely useful for clinicians at the patient’s bedside. Economists studying AI believe that AI is revolutionary – revolutionary like the steam engine and the internet – because it predicts better.
Recently published in Nature, a sophisticated DL algorithm was able to predict acute kidney injury (AKI), continuously, in hospitalized patients by extracting data from their electronic health records (EHRs). The algorithm interrogated nearly a million EHRs of patients in Veterans Affairs hospitals. As intriguing as their methodology is, it’s less interesting than their results. For every correct prediction of AKI, there were two false positives. The false alarms would have made Cassandra blush, but they’re not bad for prognostic medicine. The DL-generated ROC curve stands head and shoulders above the diagonal representing randomness.
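The reported ratio – one correct prediction for every two false positives – translates directly into the familiar screening-test arithmetic. A back-of-the-envelope sketch:

```python
# "For every correct prediction of AKI, there were two false positives."
true_positives = 1
false_positives = 2

# Positive predictive value: of all alarms raised, how many are real?
ppv = true_positives / (true_positives + false_positives)

print(f"positive predictive value: {ppv:.0%}")  # 33% -- two of every three alarms are false
```

A strong ROC curve and a one-in-three hit rate can coexist comfortably; the ROC curve is indifferent to how often the alarm bell actually rings for nothing.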
The researchers used a technique called “ablation analysis.” I have no idea how that works, but it sounds clever. Let me make a humble prophecy of my own – if unleashed at the bedside, the AKI-specific, DL-augmented Cassandra could unleash havoc of a scale one struggles to comprehend.
Leaving aside that the accuracy of algorithms trained retrospectively falls in the real world – as doctors know, there’s a difference between book knowledge and practical knowledge – the major problem is the effect that the availability of information has on decision making. Prediction is fundamentally information. Information changes us.
When you ask the ‘big data guy’ at a massive health system what’s wrong with EMRs, it’s surprising to hear that his problem is NOT with the EMRs themselves but with the fact that health systems are just not using the data they’re collecting in any meaningful way. Atul Butte, Chief Data Scientist for the University of California Health System, says interoperability is not the big issue! Instead, he says, it’s the fact that health systems are not using some of the most expensive data in the country (we’re using doctors to do the data entry…) to draw big, game-changing conclusions about the way we practice medicine and deliver care. Listen in to find out why Atul thinks the business incentives are misaligned for a data revolution and what we need to do to help.
Filmed at Health Datapalooza in Washington DC, March 2019.
Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt.
Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health.