By ROBERT C. MILLER, JR. and MARIELLE S. GROSS, MD, MBE
The problem with porridge
Today, we regularly hear stories of research teams using artificial intelligence to detect and diagnose diseases earlier, with more accuracy and speed than a human would have ever dreamed of. Increasingly, we are called to contribute to these efforts by sharing our data with the teams crafting these algorithms, sometimes by healthcare organizations appealing to altruistic motivations. A crop of startups has even appeared to let you monetize your data to that end. But given the sensitivity of your health data, you might be skeptical of this, doubly so when you take into account tech’s privacy track record. We have begun to recognize the flaws in our current privacy-protecting paradigm, which relies on thin notions of “notice and consent” and inappropriately places the responsibility for data stewardship on individuals who remain extremely limited in their ability to exercise meaningful control over their own data.
Emblematic of a broader trend, the “Health Data Goldilocks Dilemma” series calls attention to the tension and necessary tradeoffs between privacy and the goals of our modern healthcare technology systems. Not sharing our data at all would be “too cold,” but sharing freely would be “too hot.” We have been looking for policies “just right” to strike the balance between protecting individuals’ rights and interests while making it easier to learn from data to advance the rights and interests of society at large.
What if there were a way for you to allow others to learn from your data without compromising your privacy?
To date, a major strategy for striking this balance has involved the practice of sharing and learning from deidentified data, by virtue of the belief that individuals’ only risks from sharing their data are a direct consequence of that data’s ability to identify them. However, artificial intelligence is rendering genuine deidentification obsolete, and we are increasingly recognizing a problematic lack of accountability to individuals whose deidentified data is being used for learning across various academic and commercial settings. In its present form, deidentification is little more than a sleight of hand to make us feel more comfortable about the unrestricted use of our data without truly protecting our interests. A wolf in sheep’s clothing, deidentification is not solving the Goldilocks dilemma.
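The weakness of deidentification can be shown with a toy linkage attack: if a “deidentified” record retains quasi-identifiers such as ZIP code, birth year, and sex, joining it against any public roster that shares those fields restores the identity. All names, ZIP codes, and diagnoses below are invented for illustration.

```python
# Toy illustration: "deidentified" records re-identified by linking
# quasi-identifiers (ZIP code, birth year, sex) against a public roster.
deidentified_visits = [
    {"zip": "21205", "birth_year": 1984, "sex": "F", "diagnosis": "C-section"},
    {"zip": "21218", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]
public_roster = [
    {"name": "Jane Doe", "zip": "21205", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "21218", "birth_year": 1990, "sex": "M"},
]

def reidentify(visits, roster):
    """Link any visit and roster entry sharing the same quasi-identifier triple."""
    matches = []
    for v in visits:
        for p in roster:
            if (v["zip"], v["birth_year"], v["sex"]) == (p["zip"], p["birth_year"], p["sex"]):
                matches.append((p["name"], v["diagnosis"]))
    return matches

print(reidentify(deidentified_visits, public_roster))
# → [('Jane Doe', 'C-section'), ('John Roe', 'asthma')]
```

When each quasi-identifier triple is unique in the roster, every stripped record is recovered; this is exactly why removing names alone does not protect interests.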
Tech to the rescue!
Fortunately, there are a handful of exciting new technologies that may let us escape the Goldilocks Dilemma entirely by enabling us to gain the benefits of our collective data without giving up our privacy. This sounds too good to be true, so let me explain the three most revolutionary ones: zero knowledge proofs, federated learning, and blockchain technology.
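To make the second of these concrete, here is a minimal sketch of federated averaging, the core move in federated learning: each site trains on its own records and shares only model weights, never raw patient data. The one-dimensional linear model, learning rate, and the two hypothetical “hospital” datasets are all illustrative assumptions, not any real system.

```python
# Minimal federated-averaging (FedAvg) sketch: each site runs local
# gradient steps, and only the resulting weights are pooled centrally.
def local_update(w, data, lr=0.1):
    """One local pass of gradient descent for a 1-D linear model y = w * x."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, site_datasets):
    """Average the sites' locally updated weights; raw data never leaves a site."""
    local_ws = [local_update(global_w, d) for d in site_datasets]
    return sum(local_ws) / len(local_ws)

hospital_a = [(1.0, 2.0), (2.0, 4.0)]  # both sites' private data follow y = 2x
hospital_b = [(3.0, 6.0)]
w = 0.0
for _ in range(50):
    w = federated_round(w, [hospital_a, hospital_b])
print(round(w, 2))  # converges near the shared truth, 2.0
```

The aggregator sees only weight averages, so the collective pattern is learned while each hospital’s records stay behind its own firewall; production systems add secure aggregation and differential privacy on top of this basic loop.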
You might not know it yet, but there’s a revolution coming
While digitization has driven innovation across the healthcare sector, the advent of 5G is set to spark a fourth industrial revolution.
3G and 4G networks enabled large-scale change and rapid modernization. However, 5G delivers what these networks could not: blazing speeds and ultra-low latencies that permit enormous data transfers between devices in near-real time. That means that technologies like artificial intelligence, machine learning and augmented reality will be capable of transforming the industry as we know it.
Whether it’s strengthening telemedicine connections, implementing new teaching methods at medical school, or connecting large hospitals and clinics, 5G-powered technologies will open the door for innovation in healthcare.
At long last, we seem to be on the threshold of departing the earliest phases of AI, defined by the always tedious “will AI replace doctors/drug developers/occupation X?” discussion, and are poised to enter the more considered conversation of “Where will AI be useful?” and “What are the key barriers to implementation?”
As I’ve watched this evolution in both drug discovery and medicine, I’ve come to appreciate that in addition to the many technical barriers often considered, there’s a critical conceptual barrier as well: the threat some AI-based approaches can pose to our “explanatory models” (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula), our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. In contrast, AI often relates imperceptible observations to outcomes in a fashion that’s unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.
In an effort to help women make informed decisions about where to deliver their babies, we set out to collect a comprehensive, nationwide database of hospitals’ C-section rates. Knowing that the federal government mandates surveillance and reporting of vital statistics through the National Vital Statistics System, we contacted all 50 states’ (+Washington D.C.) Departments of Public Health (DPH) asking for access to de-identified birth data from all of their hospitals. What we learned might not surprise you — the lack of transparency in the United States healthcare system extends to quality information, and specifically C-section data.
Adoption of technology in the healthcare field has been happening at an incredibly slow pace. This is a fact that few would disagree with. The market is saturated with health tech companies that are vying to be the next big unicorn in the field, but long sales cycles and simple underestimations of what is needed for HIPAA compliance and FDA approval have led to the demise of many of these projects. The ones that do receive enough Series funding to produce finessed products for health systems and pharmaceutical companies, however, soon realize that the battle against time is not over.
Simply getting into a health system is not enough. Once a contract is finally ironed out and the software is exchanged, the next uphill battle against the slow pace of internal adoption is mounted. Not only is a speedy adoption important for hospitals to demonstrate that their purchases and investments were appropriate, but it is also key for founders who hope to demonstrate that their product works. Nothing is worse than the painfully slow internal adoption of a piece of technology. One bad experience has the potential to tarnish an organization’s appetite for future tech ventures.
On July 24, the new administration kicked off its version of interoperability work with a public meeting of the incumbent trust brokers. They invited the usual suspects: Carequality, CARIN Alliance, CommonWell, Digital Bridge, DirectTrust, eHealth Exchange, NATE, and SHIEC, with the goal of arriving at an understanding of how these groups will work with each other to end information blocking and deliver longitudinal health records as mandated by the 21st Century Cures Act.
Of the eight would-be trust brokers, some go back to 2008, but only one is contemporary with the 21st Century Cures Act: the CARIN Alliance. The growing list of trust brokers over our decade of digital health tracks with the growing frustration of physicians, patients, and Congress over information blocking, but is there causation beyond mere correlation?
One way to get data to move is open APIs, which the 21st Century Cures Act mandates by tasking EHR vendors to open up patient data “without special effort, through the use of application programming interfaces.”
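In practice, these mandated open APIs expose patient data as FHIR resources that any authorized app can parse. The sketch below assumes a hypothetical response body from such an endpoint (the resource shapes follow FHIR R4 conventions, but the patient, id, and names are invented); a real client would first fetch this JSON over an authenticated HTTP call.

```python
import json

# Hypothetical JSON a patient-facing FHIR API might return: a Bundle
# wrapping Patient resources (structure per FHIR R4; contents invented).
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Patient", "id": "example-123",
                  "name": [{"family": "Doe", "given": ["Jane"]}]}}
  ]
}
"""

def patient_names(bundle):
    """Pull display names out of the Patient resources in a FHIR Bundle."""
    names = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") == "Patient":
            for n in res.get("name", []):
                names.append(" ".join(n.get("given", []) + [n.get("family", "")]))
    return names

print(patient_names(json.loads(bundle_json)))  # → ['Jane Doe']
```

The point of “without special effort” is exactly this: once the resource format is standard, a few lines of generic JSON handling replace a bespoke integration with each EHR vendor.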
Contrary to what you may think, most doctors do want to make eye contact. They aren’t antisocial. They want to engage. But they can’t. They’re too distracted by one of the worst computer games ever invented—the electronic medical record (EMR).
You may be surprised to see the EMR compared to a computer game, but there are many similarities. Both offer a series of clicks with an often-maddening array of tasks to solve. There are templates to follow, boxes to fill in & scoring. However, unlike most electronic games, the points accrued in the EMR often translate into payment—real dollars for either your doctor or the hospital.
Although these clicks and boxes may be necessary to document your visit, it’s distracting. And your doctor begins to feel more like a librarian cataloging information rather than, say, a historian capturing your story.
The first time I met one of my staff physicians on Internal Medicine, he told our team he had just one rule:
“Our team must contact the patient’s family physician during the admission, inform him or her of the situation and plan for appropriate patient follow up after discharge.”
If you talk to any hospital physician or family doctor, they would almost certainly agree that this type of integration between hospital and community is essential for reducing avoidable ER visits, readmissions and improving other key health outcomes. Put more simply, it’s just good care.
And so you would think contacting a patient’s family doctor during a hospital admission would be the standard of care, but it’s not. There’s no rule or expectation; rather, it’s just something nice to do.
I’m not here to criticize health care providers who do or don’t act a certain way. I’m sure there are many best practices that some providers follow and others don’t.
That said, I don’t think we can deny the harsh truth: it’s no longer about figuring out what needs to be done to provide higher-quality care at a lower cost. We already know enough to begin implementing.
I am writing this from the Apple Worldwide Developer Conference (WWDC) here in San Francisco, where I got to substitute for John Halamka at the Keynote (now I keep having urges to raise Alpacas); John missed the most amazing seats [front row center!].
There were many, many, many new technologies announced, demoed and discussed (I cannot recall a set of software announcements of this scale from Apple), but I will limit this entry to a few technologies that have implications for healthcare.
If you remember the state of digital music prior to the introduction of the iPod and iTunes Music Store, that is where I feel the healthcare app industry is today: there is no common infrastructure between any of the offerings, and consumers have been somewhat ambivalent toward them because everything is a data island; switching apps causes data loss and is not a pleasant experience for patients.
Amazingly, there are 40,000+ apps on Apple’s App Store alone, showing huge demand from users, but probably only a handful can talk to each other in a meaningful way; this is true on both the consumer and professional sides of healthcare.
Individual vendors such as Withings have made impressive strides toward data consolidation on the platform, but these are not baked into the OS, so they will always have a lower adoption rate. If we take the music industry example further, Apple entering a market with a full push of an ecosystem at their scale legitimizes the technology in ways that other vendors simply can’t match.
People hate to hear it, but the robots are coming and it’s only a matter of time before they start competing for skilled, white collar jobs.
Nurses are vulnerable (but before you get excited and start attacking me, so too are consultants and bloggers). So get used to it, and figure out how you’re going to co-exist with and leverage the bots.
One of my pet peeves about robots is when their programmers try to make them act human by intentionally making them imperfect or have them simulate (feign?) empathy.
For example, I can’t stand it when the voice recognition airline rep talks in a sympathetic sounding voice when “she” can’t understand what I’m saying.
But apparently we’ll be seeing more of these little “humanizing” tricks, thanks to research from MIT that concludes that people like this kind of stuff. From the Wall Street Journal, we learn:
People like their therapy robots to be baby-faced
We feel emotionally closer to robots that sound like our own gender
When robots mimic our activity (like folding their arms) we like it
And then there’s this one:
“One study showed that people rated online travel booking and dating services more positively when the service communicated clearly that it was working for the consumer (e.g., “We are now searching 100 sites for you”) than when they simply provided search results. Surprisingly, having to wait 30 seconds for results but also receiving this communication of effort slightly increased users’ satisfaction, compared with receiving results instantaneously. Being made aware of the website’s willingness to work on their behalf made people feel that the service was sympathetic to their needs.”