This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.
Alice makes an appointment in the breast cancer practice using the Mayo patient portal. Mayo asks permission to access her health records. Alice is offered two choices: one uses HIPAA without her consent, and the other is under her control. Her options are to:
Enter her demographics and insurance info and have The Platform use HIPAA surveillance to gather her records wherever Mayo can find them, or
Copy her Mayo Clinic ID and enter it into the patient portal of any hospital, lab, or payer to request that her records be sent directly to Mayo.
Alice feels vulnerable. What other information will The Platform gather using its HIPAA surveillance power? She recalls a 2020 law that expanded HIPAA to allow access to her behavioral health records at Austin Rehab.
Alice prefers to avoid HIPAA surprises and picks the patient-directed choice. She enters her Mayo Clinic ID into Ascension’s patient portal. Unfortunately, Ascension is using the CARIN Alliance code of conduct and best practices. Ascension tells Alice that they will not honor her request to send records directly to Mayo. Ascension tells Alice that she must use the Apple Health platform or some other intermediary app to get her records if she wants control.
US healthcare is exceptional among rich economies. Exceptional in cost. Exceptional in disparities. Exceptional in the political power hospitals and other incumbents have amassed over decades of runaway healthcare exceptionalism.
The latest front in healthcare exceptionalism is over who profits from patient records. Parallel articles in the NYTimes and THCB frame the issue as “barbarians at the gate” when the real issue is an obsolete health IT infrastructure and how ill-suited it is for the coming age of BigData and machine learning. Just check out the breathless announcement of “frictionless exchange” by Microsoft, AWS, Google, IBM, Salesforce and Oracle. Facebook already offers frictionless exchange. Frictionless exchange has come to mean that one data broker, like Facebook, adds value by aggregating personal data from many sources and then uses machine learning to find a customer, like Cambridge Analytica, that will use the predictive model to manipulate your behavior. How will the six data brokers in the announcement be different from Facebook?
The NYTimes article and the THCB post imply that we will know the barbarians when we see them and then rush to talk about solutions. Aside from calls for new laws in Washington (weaken behavioral health privacy protections, preempt state privacy laws, reduce surprise medical bills, allow a national patient ID, treat data brokers as HIPAA covered entities, and maybe more), our leaders have to work with regulations (OCR, information blocking, etc.), standards (FHIR, OAuth, UMA), and best practices (Argonaut, SMART, CARIN Alliance, Patient Privacy Rights, etc.). I’m not going to discuss new laws in this post and will focus on practices under existing law.
Patient-directed access to health data is the future. This was made clear at the recent ONC Interoperability Forum, which Don Rucker opened and which closed with a panel about the future. CARIN Alliance and Patient Privacy Rights are working to define patient-directed access in ways that may or may not differ. CARIN and PPR have no obvious differences when it comes to the data models and semantics associated with a patient-directed interface (API). PPR appreciates HL7 and CARIN efforts on the data models and semantics for both clinics and payers.
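At its simplest, a patient-directed API of the kind CARIN and PPR are discussing amounts to an OAuth-authorized request against a FHIR endpoint. The sketch below builds such a request; the base URL and token are hypothetical placeholders, not any real organization’s API, and a production client would obtain the token through a full OAuth consent flow:

```python
# Hypothetical FHIR endpoint -- a placeholder, not any real organization's API.
FHIR_BASE = "https://portal.example-hospital.org/fhir"


def build_records_request(patient_id: str, access_token: str) -> tuple[str, dict]:
    """Build the URL and headers for a patient-authorized FHIR $everything
    query, which asks the server for all records in the patient's
    compartment as a FHIR Bundle. The caller issues the request with any
    HTTP client; authorization rides on the patient's own OAuth token
    rather than on HIPAA treatment/payment/operations disclosures."""
    url = f"{FHIR_BASE}/Patient/{patient_id}/$everything"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/fhir+json",
    }
    return url, headers
```

The point of the sketch is that the consent decision lives in the token: the records holder honors the patient’s authorization directly, instead of routing the request through a HIPAA-based exchange the patient never sees.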
Vermont Governor Peter Shumlin devoted his entire annual address to the problem of drug addiction. On the national news, Shumlin pointed out the link between prescription painkillers and death, and he called for treating opiate addiction as a medical problem no different from cancer. The White House praised the governor’s position.
Meanwhile, in another part of Washington, I’m involved in the federal effort to link the law-enforcement Prescription Drug Monitoring Program (PDMP) databases to the health records physicians use, and to link those databases across state lines.
The unintended consequences of criminalizing addiction and driving medical problems underground need to be considered here as well.
Physician-patient confidentiality is important to public health, and networked electronic health records have both individual privacy and public health consequences. Privacy is essential in infectious disease testing, domestic violence, mental health, adolescent, reproductive, and addiction medicine. Subjecting clinical encounters to law enforcement surveillance beyond the physician’s discretion is life-threatening.
Well-meaning people are now working to link PDMP databases to EHRs and across state lines. The evidence offered to justify this coerced crossing of the criminal–medical boundary consists of anecdotal findings from pilot studies suggesting that more physicians are in a position to uncover addiction and offer treatment.
The other goal is to reduce illegal diversion of prescription drugs by both physicians and patients. What could possibly go wrong?
While there has been much focus lately on the ways in which ObamaCare is chilling the growth of private business, we should not overlook the continuing deleterious effects of the one surviving relic of HillaryCare, the Health Insurance Portability and Accountability Act (HIPAA). Quietly, September 23 came and went as the compliance effective date for a new rule, expanding the reach of HIPAA, and likely driving many smaller players out of the health care industry.
Spearheaded by then-First Lady Clinton, HIPAA was enacted in 1996 to improve the privacy of personal health information, referred to as protected health information, or PHI. It requires health care providers, known as “covered entities,” and their vendors, contractors, and agents with access to PHI, known as “business associates,” to comply with certain privacy standards under its “Privacy Rule,” and with certain security standards under its “Security Rule,” in order to protect sensitive health information that is held or transferred in electronic form.
Over the past decade, equipped with the noble aim of protecting our privacy, HIPAA has successfully demonstrated the power of the law of unintended consequences. Improved protection of PHI has been marginal. However, HIPAA has impeded communication among physicians, reduced physician time devoted to patient care, and deterred medical research. And all at an enormous cost of compliance. While estimates vary widely, the cost of compliance for many providers has been in the millions.
Now, rather than take heed, the government has decided to double down through expansion. Under the Health Information Technology for Economic and Clinical Health Act (HITECH), a corollary of HIPAA promulgated to create incentives for the development of health care information technology, the government has sought to update HIPAA’s requirements in light of the changing dynamics of technology and health practices, increasing the safeguards and obligations of health care providers and their business associates.
Thanks to the flood of new data expected to enter the health field from all angles (patient sensors, public health requirements in Meaningful Use, records on providers released by the US government, previously suppressed clinical research to be published by pharmaceutical companies), the health field faces a fork in the road: one direction heads toward chaos and the other toward order.
The road toward chaos is forged by the providers’ and insurers’ appetites for categorizing us, marketing to us, and controlling our use of the health care system, abetted by lax regulation. The alternative road is toward a healthy data order where privacy is protected, records contain more reliable information, and research is supported or even initiated by cooperating patients.
This was my main take-away from a day of meetings and a panel held recently by Patient Privacy Rights, a non-profit for which I have volunteered during the past three years. The organization itself has evolved greatly during that time, tempering much of the negativity in which it began and producing a stream of productive proposals for improving the collection and reuse of health data. One recent contribution consists of measuring and grading how closely technology systems, websites, and applications meet patients’ expectations to control and understand personal health data flows.
With sponsorship by Microsoft at their Innovation and Policy Center in Washington, DC, PPR offered a public panel on privacy–which was attended by 25 guests, a very good turnout for something publicized very modestly–to capitalize on current public discussions about government data collection, and (without taking a stand on what the NSA does) to alert people to the many “little NSAs” trying to get their hands on our personal health data.
It was a privilege and an eye-opener to be part of Friday’s panel, which was moderated by noted privacy expert Daniel Weitzner and included Dr. Deborah Peel (founder of PPR), Dr. Adrian Gropper (CTO of PPR), Latanya Sweeney of Harvard and MIT, journalist Sydney Brownstone of Fast Company, and me. Although this article incorporates much that I heard from the participants, it consists largely of my own opinions and observations.
Henrietta Lacks did not give researchers permission to take her cancer cells and study them. After she died in 1951, her family was not asked permission as her immortalized cells were used in countless laboratories. This month, the National Institutes of Health finally took a step in righting that wrong, announcing that the Lacks family would help decide who can access Henrietta’s DNA.
Today, getting a patient’s permission, often in writing, is standard in experimental medical research. Well, not always. Currently, there are at least nine ongoing studies involving 62 U.S. cities and towns with a combined population of more than 45 million that do not involve getting permission. They take place during emergencies, such as when ambulances arrive at an accident where patients are too injured to give permission.
For example, imagine this scenario based on a recent study sponsored by the University of Washington. You are involved in a car accident. Paramedics find you bleeding severely. They give you fluids to keep your blood pressure up, but they intentionally give you a bag of fluid that is smaller than the standard. Then they monitor your medical outcome and compare it with patients who received the larger amount of fluids. During the emergency, neither you nor your family know about the study.
Research on medical emergencies is vital in determining how to care for people with life-threatening injuries, because we often lack proof that standard methods are best. But the people involved should be told that their records are being used this way.
In 1996, the Department of Health and Human Services and the Food and Drug Administration issued regulations allowing research on emergency treatment to occur without permission. For a study to qualify, patients must have a life-threatening condition, current standards of care must be unproven or performing poorly, and obtaining permission must not be feasible (for example, because the patient is unconscious or the condition does not allow time for informed consent).
Secrecy breeds suspicion. Secrecy has practically no legitimate role in health care, so when we do see examples of it, as in the operational details of the Federal Data Services Hub, we get the recent outcry from a range of politicians and journalists waving privacy flags. For Patient Privacy Rights, this is a teachable moment for both advocates and detractors of the Affordable Care Act.
There’s a clear parallel between the recent concerns around NSA communications surveillance and health care surveillance under the ACA. Some surveillance is justified, to combat terrorism and fraud respectively, but unwarranted secrecy breeds suspicion and may not help our civil society.
“For all marketplaces, CMS [the Centers for Medicare and Medicaid Services] is also building a tool called the Data Services Hub to help with verifying applicant information used to determine eligibility for enrollment in qualified health plans and insurance affordability programs. The hub will provide one connection to the common federal data sources (including but not limited to SSA, IRS, DHS) needed to verify consumer application information for income, citizenship, immigration status, access to minimum essential coverage, etc.
CMS has completed the technical design, and reference architecture for this work, is establishing a cross-agency security framework as well as the protocols for connectivity, and has begun testing the hub. The hub will not store consumer information, but will securely transmit data between state and federal systems to verify consumer application information. Protecting the privacy of individuals remains the highest priority of CMS.”
Here’s where the secrecy comes in: I tried to find out some specific information about the Hub. Technical or policy details that would enable one to apply Fair Information Practice Principles? Some open evidence of privacy by design? Some evidence of participation by privacy experts? I got nothing. Where’s Mr. Snowden when we need him?
At my infectious-diseases clinic in Southeast Washington, I work with some of the city’s most indigent patients. Some don’t have jobs, a home, a car or enough to eat. But recently, I saw a patient whose problem made these issues seem trivial.
Dealing with fatigue, a cough and a fever for several months, this woman in her 40s had been evaluated by four internists. They had tested her for a variety of conditions but not HIV. Each had recommended rest, two prescribed antibiotics, and one suggested an over-the-counter cough medicine. Experiencing no physical relief from these suggestions, the woman had decided to “lay down and die.”
However, after her longtime partner insisted she get medical help, she agreed to go to a hospital emergency room. After a rapid test, which she initially refused because she said she was not at risk for HIV, she learned that she was HIV-positive.
After that ER visit, she brought her partner, whom she credits with saving her life, to my clinic to be tested; she was concerned that she had transmitted the virus to him. He tested positive. About a week later, when he accompanied her to an appointment with me, I asked if he had been seen by a doctor to discuss treatment. He said no and indicated that he wanted to establish care in the clinic.
When I asked if he had ever been on HIV drugs, he gazed at the medication chart and pointed out his previous regimen, a cocktail that contained indinavir. Because I and many other doctors stopped prescribing this medication a decade ago, I knew he had been diagnosed, and had been keeping his condition from her, for years. He stopped talking and avoided my gaze. It was clear he knew that I had learned his secret. I had many questions for him, but this visit was for her.
It was not the right moment to dredge up this history and ask how he could keep his diagnosis hidden while watching his partner struggle with her health. I chose not to ask about his dishonesty, their relationship and whether they had used condoms to protect her from getting HIV. At this point, I needed to help her understand that, even though she felt weak and sick, the medications would soon make her feel better. And that, with the right treatment, she could still live a long life.
While talking with my patient about her treatment, my mind kept wandering back to her partner’s secret. Was it my role to admonish him in front of her, or would that make things worse? What would they say to each other when they got home? I wanted to discuss these questions, but did I have a right to insert my judgment into this situation? At a private visit with me two weeks later, she let me know that this was the moment she realized he’d been keeping his diagnosis from her for years.
As a physician, I am not allowed to reveal any medical information about my patients or their circumstances without their written permission. This confidentiality is sacred. But in this case, that constraint felt inappropriate and irresponsible.
As my head reels at the implications of the IRS scandal mushrooming in Washington, the IRS’s recently disclosed ability to access e-mails without a warrant, the intricacy of the NSA PRISM wiretap techniques, including the ability to acquire tech firms’ digital data, and even the Justice Department’s secret acquisition of telephone toll records from the Associated Press, I wonder, as a doctor, what all this means for the privacy protections afforded by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in our new era of mandated electronic medical records. Are such privacy protections credible at all?
It doesn’t seem so.
Now it seems everyone’s health data is just as vulnerable to federal review as their Google search data. This is not a small issue. We have already seen that discovering “leaks” of personal health information has produced some very handsome rewards for the feds, so it is not beyond reason to think that HIPAA might also serve as a funding tool for our government health care administration, disguised as a beneficent effort to protect the health care data of our populace.
But even more concerning are the implications of the IRS scandal for America’s health care system. After all, the Affordable Care Act ultimately runs through the IRS, which administers some 47 of its tax provisions. These include the power to levy a penalty against businesses and individuals who don’t provide or acquire insurance, and the task of determining how to distribute annual subsidies to 18 million people who make less than $45,000 a year and thus qualify for help buying health coverage. In addition, the agency will collect taxes on medical devices and a surtax on people making more than $200,000 a year, and it will conduct compliance audits of tax-exempt hospitals.