By ROBERT C. MILLER, JR. and MARIELLE S. GROSS, MD, MBE
The problem with porridge
Today, we regularly hear stories of research teams using artificial intelligence to detect and diagnose diseases earlier, with more accuracy and speed than a human would have ever dreamed of. Increasingly, we are called to contribute to these efforts by sharing our data with the teams crafting these algorithms, sometimes by healthcare organizations appealing to altruistic motivations. A crop of startups has even appeared to let you monetize your data to that end. But given the sensitivity of your health data, you might be skeptical of this—doubly so when you take into account tech’s privacy track record. We have begun to recognize the flaws in our current privacy-protecting paradigm, which relies on thin notions of “notice and consent” and inappropriately places the responsibility for data stewardship on individuals who remain extremely limited in their ability to exercise meaningful control over their own data.
Emblematic of a broader trend, the “Health Data Goldilocks Dilemma” series calls attention to the tension and necessary tradeoffs between privacy and the goals of our modern healthcare technology systems. Not sharing our data at all would be “too cold,” but sharing freely would be “too hot.” We have been looking for policies “just right” to strike the balance between protecting individuals’ rights and interests while making it easier to learn from data to advance the rights and interests of society at large.
What if there were a way for you to allow others to learn from your data without compromising your privacy?
To date, a major strategy for striking this balance has involved the practice of sharing and learning from deidentified data, on the belief that individuals’ only risks from sharing their data are a direct consequence of that data’s ability to identify them. However, artificial intelligence is rendering genuine deidentification obsolete, and we are increasingly recognizing a problematic lack of accountability to individuals whose deidentified data is being used for learning across various academic and commercial settings. In its present form, deidentification is little more than a sleight of hand that makes us feel comfortable about the unrestricted use of our data without truly protecting our interests. A wolf in sheep’s clothing, deidentification is not solving the Goldilocks dilemma.
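The weakness of deidentification can be made concrete. The sketch below, using entirely hypothetical data and made-up names, shows the classic linkage attack: even after names are stripped, a few “quasi-identifiers” (ZIP code, birth year, sex) can be joined against an outside dataset to put names back on health records.

```python
# Illustrative sketch (hypothetical data): a linkage attack on
# "deidentified" records using only quasi-identifiers.

deidentified_health = [
    {"zip": "21205", "birth_year": 1984, "sex": "F", "diagnosis": "BRCA1+"},
    {"zip": "21205", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "21205", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "21205", "birth_year": 1990, "sex": "M"},
]

def reidentify(health_rows, identified_rows):
    """Join the two datasets on quasi-identifiers alone."""
    matches = []
    for h in health_rows:
        candidates = [p for p in identified_rows
                      if (p["zip"], p["birth_year"], p["sex"])
                      == (h["zip"], h["birth_year"], h["sex"])]
        if len(candidates) == 1:  # a unique match re-identifies the record
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

print(reidentify(deidentified_health, public_voter_roll))
# [('Jane Doe', 'BRCA1+'), ('John Roe', 'asthma')]
```

No machine learning is needed here; AI only makes such joins easier and more reliable at scale.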
Tech to the rescue!
Fortunately, there are a handful of exciting new technologies that may let us escape the Goldilocks Dilemma entirely by enabling us to gain the benefits of our collective data without giving up our privacy. This sounds too good to be true, so let me explain the three most revolutionary ones: zero knowledge proofs, federated learning, and blockchain technology.
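To see how one of these technologies works in miniature, here is a toy sketch of federated averaging, the core idea behind federated learning. The numbers and the two-hospital setup are invented for illustration: each hospital computes a local model update on its own data, and only the updated weights, never the patient records, leave the premises.

```python
# Toy federated-averaging (FedAvg) sketch with invented data:
# hospitals share model weights, not patient records.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step for a 1-D least-squares model y = w*x."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(global_weight, hospitals):
    """Each site updates locally; the server averages the weights."""
    updates = [local_update(global_weight, data) for data in hospitals]
    return sum(updates) / len(updates)

# Two hospitals, each holding private (x, y) pairs generated by y = 2x.
hospitals = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]

w = 0.0
for _ in range(50):
    w = federated_round(w, hospitals)
print(round(w, 2))  # converges toward the true slope, 2.0
```

Real deployments add secure aggregation and differential privacy on top, since raw weight updates can themselves leak information; the point of the sketch is only the data-stays-home architecture.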
US healthcare is exceptional among rich economies. Exceptional in cost. Exceptional in disparities. Exceptional in the political power hospitals and other incumbents have amassed over decades of runaway healthcare exceptionalism.
The latest front in healthcare exceptionalism is over who profits from patient records. Parallel articles in the NYTimes and THCB frame the issue as “barbarians at the gate” when the real issue is an obsolete health IT infrastructure and how ill-suited it is for the coming age of BigData and machine learning. Just check out the breathless announcement of “frictionless exchange” by Microsoft, AWS, Google, IBM, Salesforce and Oracle. Facebook already offers frictionless exchange. Frictionless exchange has come to mean that one data broker, like Facebook, adds value by aggregating personal data from many sources and then uses machine learning to find a customer, like Cambridge Analytica, that will use the predictive model to manipulate your behavior. How will the six data brokers in the announcement be different from Facebook?
The NYTimes article and the THCB post imply that we will know the barbarians when we see them and then rush to talk about the solutions. Aside from calls for new laws in Washington (weaken behavioral health privacy protections, preempt state privacy laws, reduce surprise medical bills, allow a national patient ID, treat data brokers as HIPAA covered entities, and maybe more) our leaders have to work with regulations (OCR, information blocking, etc…), standards (FHIR, OAuth, UMA), and best practices (Argonaut, SMART, CARIN Alliance, Patient Privacy Rights, etc…). I’m not going to discuss new laws in this post and will focus on practices under existing law.
Patient-directed access to health data is the future. This was made clear at the recent ONC Interoperability Forum as opened by Don Rucker and closed with a panel about the future. CARIN Alliance and Patient Privacy Rights are working to define patient-directed access in what might or might not be different ways. CARIN and PPR have no obvious differences when it comes to the data models and semantics associated with a patient-directed interface (API). PPR appreciates HL7 and CARIN efforts on the data models and semantics for both clinics and payers.
This post is part of the series “The Health Data Goldilocks Dilemma: Privacy? Sharing? Both?”
In our previous post, we described the “Wild West of Unprotected Health Data.” Will the cavalry arrive to protect the vast quantities of your personal health data that are broadly unprotected from sharing and use by third parties?
Congress is seriously considering legislation to better protect the privacy of consumers’ personal data, given the patchwork of existing privacy protections. For the most part, the bills, while they may cover some health data, are not focused just on health data – with one exception: the “Protecting Personal Health Data Act” (S.1842), introduced by Senators Klobuchar and Murkowski.
In this series, we committed to looking across all of the various privacy bills pending in Congress and identifying trends, commonalities, and differences in their approaches. But we think this bill, because of its exclusive health focus, deserves its own post. Concerns about health privacy outside of HIPAA are receiving increased attention in light of the push for interoperability, which makes this bill both timely and potentially worthy of your attention.
For example, greater interoperability with patients means that even more medical and claims data will flow outside of HIPAA to the “Wild West.” The American Medical Association noted:
“If patients access their health data—some of which could contain family history and could be sensitive—through a smartphone, they must have a clear understanding of the potential uses of that data by app developers. Most patients will not be aware of who has access to their medical information, how and why they received it, and how it is being used (for example, an app may collect or use information for its own purposes, such as an insurer using health information to limit/exclude coverage for certain services, or may sell information to clients such as to an employer or a landlord). The downstream consequences of data being used in this way may ultimately erode a patient’s privacy and willingness to disclose information to his or her physician.”
The rather esoteric issue of a national patient identifier has come to light as a difference between two major health care bills making their way through the House and the Senate.
The bills are linked to outrage over surprise medical bills, but they have major implications for how underlying health care costs will be controlled through competitive insurance and regulatory price-setting schemes. This Brookings comment on the Senate HELP Committee bill summarizes some of the issues.
Those in favor of a national patient identifier are mostly hospitals and data brokers, along with their suppliers. More support is discussed here. The opposition comes mostly from privacy and libertarian perspectives. A more general opposition discussion of the Senate bill is here.
Although obscure, national patient identifier standards can help clarify the role of government in the debate over how to reduce the unusual health care costs and disparities in the U.S. system. What follows is a brief analysis of the complexities of patient identifiers and their role relative to health records and health policy.
Our Experience on Facebook Offers Important Insight Into Mark Zuckerberg’s Future Vision For Meaningful Groups
By ANDREA DOWNING
Seven years ago, I was utterly alone and seeking support as I navigated a scary health experience. I had a secret: I was struggling with the prospect of making life-changing decisions after testing positive for a BRCA mutation. I am a Previvor. This was an isolating and difficult experience, but it turned out that I wasn’t alone. I searched online for others like me, and was incredibly thankful that I found a caring community of women who could help me through the painful decisions that I faced.
As I found these women through a Closed Facebook Group, I began to understand that we had a shared identity. I began to find a voice, and understand how my own story fit into a bigger picture in health care and research. Over time, this incredible support group became an important part of my own healing process.
This group was founded by my friends Karen and Teri, and has a truly incredible story. With support from my friends in this group of other cancer previvors and survivors I have found ways to face the decisions and fear that I needed to work through.
Two years ago we wouldn’t have believed it — the U.S. Congress is considering broad privacy and data protection legislation in 2019. There is some bipartisan support and a strong possibility that legislation will be passed. Two recent articles in The Washington Post and AP News will help you get up to speed.
Federal privacy legislation would have a huge impact on all healthcare stakeholders, including patients. Here’s an overview of the ground we’ll cover in this post:
Six Key Issues for Healthcare
We are aware of at least 5 proposed Congressional bills and 16 Privacy Frameworks/Principles. These are listed in the Appendix below; please feel free to update these lists in your comments. In this post we’ll focus on providing background and describing issues. In a future post we will compare and contrast specific legislative proposals.
I recently had the opportunity to join Boston news media veteran, Dan Rea, on his AM radio program, Nightside with Dan Rea. It was a one-hour call in program, and an eye opening experience for me. Dan and I chatted about connected health and how it can truly disrupt care delivery and put the individual at the center of their own health. Then Dan opened the lines to the fine citizens of New England for questions, and the phones started ringing off the hook.
The overwhelming concern – actual fear – among callers was maintaining their privacy in an increasingly connected world, especially their personal health data. This is a topic I touched upon in my recent book, The Internet of Healthy Things, and one which I will explore further in my upcoming talk at our Connected Health Symposium in a few weeks. But I was so struck by the extent of concern, I thought I’d present a few theories I’ve been contemplating on the subject.
This weekend the NYTimes published an editorial titled Give Up Your Data to Cure Disease. When will we stop seeing mindless memes and tropes claiming that cures and innovation require the destruction of the most important human and civil right in democracies, the right to privacy? In practical terms, privacy means the right of control over personal information, with rare exceptions like saving a life.
Why aren’t government and industry interested in win-win solutions? Privacy and research for cures are not mutually exclusive.
How is it that government and the healthcare industry have zero comprehension that the right to determine uses of personal information is fundamental to the practice of Medicine, and an absolute requirement for trust between two people?
Why do the data broker and healthcare industries have so little interest in computer science and great technologies that enable research without compromising privacy?
Today healthcare “innovation” means using technology for spying, collecting, and selling intimate data about our minds and bodies.
This global business model exploits and harms the population of every nation. Today no nation has a map that tracks the millions of hidden databases where health information is collected and used, inaccessible and unaccountable to us. How can we weigh risks when we don’t know where our data are held or how data are used? See www.theDataMap.org.
Healthcare is abuzz with calls for Universal Patient Identifiers. Universal people identifiers have been around for decades, and that experience can help us understand what, if anything, makes patients different from people. This post argues that surveillance may be a desirable side-effect of access to a health service, but the use of unique patient identifiers for surveillance needs to be managed separately from the use of identifiers in a service relationship. Surveillance uses must always be clearly disclosed to the patient or their custodian each time identifiers are sent by the service provider or “matched” by the surveillance agency. This includes health information exchanges and research data registries.
As a medical device entrepreneur, physician, engineer, and CTO of Patient Privacy Rights, I have decades of experience with patient identifier practices and standards. I feel particularly qualified to discuss patient identifiers because I serve on the Board and Management Council of the NIST-founded Identity Ecosystems Steering Group (IDESG) where I am the Privacy and Civil Liberties Delegate. I am also a core participant to industry standards groups Kantara-UMA and OpenID-HEART working on personal data and I consult on patient and citizen identity with public agencies.
The Internet is abuzz criticizing Anthem for not encrypting its patient records. Anthem has been hacked, for those not paying attention.
Anthem was right, and the Internet is wrong. Or at least, Anthem should be “presumed innocent” on the issue. More importantly, by creating buzz around this issue, reporters are missing the real story: that multinational hacking forces are targeting large healthcare institutions.
Most lay people, clinicians and apparently, reporters, simply do not understand when encryption is helpful. They presume that encrypted records are always more secure than unencrypted records, which is simplistic and untrue.
Encryption is a mechanism that ensures that data is useless without a key, much in the same way that your car is made useless without a car key. Given this analogy, what has apparently happened to Anthem is the security equivalent to a car-jacking.
When someone uses a gun to threaten a person into handing over both the car and the car keys, no one says “well, that car manufacturer needs to invest in more secure keys.”
In general, systems that rely on keys to protect assets are useless once the bad guy gets ahold of the keys. Apparently, whoever hacked Anthem was able to crack the system open enough to gain “programmer access”. Without knowing precisely what that means, it is fair to assume that even in a given system implementing “encryption-at-rest”, the programmers have the keys. Typically it is the programmer that hands out the keys.
Most of the time, hackers seek to “go around” encryption. Suggesting that we use more encryption or suggesting that we should use it differently is only useful when “going around it” is not simple. In this case, that is what happened.
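The car-jacking analogy can be shown in a few lines. The cipher below is a deliberately toy XOR scheme, not real cryptography, and the record is invented; the point is only that encryption-at-rest protects data solely while the key is out of reach. An attacker who gains “programmer access” typically reaches the key too, and the ciphertext then opens as easily as a car handed over along with its keys.

```python
# Toy illustration (XOR cipher, NOT real cryptography): encrypted data
# is only as safe as the key that unlocks it.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"                       # held by the application layer
record = b"member_id=1234, dx=asthma"
stored = xor_cipher(record, key)          # what sits "encrypted at rest"

assert stored != record                   # useless to a thief without the key
assert xor_cipher(stored, key) == record  # trivially readable with the key
```

This is why “encrypt everything” is not a complete answer: once the attacker operates with the privileges of the system that legitimately decrypts the data, the encryption layer is simply carried along for the ride.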