This story was co-published with NPR’s “Shots” blog.
In the name of patient privacy, a security guard at a hospital in Springfield, Missouri, threatened a mother with jail for trying to take a photograph of her own son. In the name of patient privacy, a Daytona Beach, Florida, nursing home said it couldn’t cooperate with police investigating the alleged rape of one of its residents.
In the name of patient privacy, the U.S. Department of Veterans Affairs allegedly threatened or retaliated against employees who were trying to blow the whistle on agency wrongdoing.

When the federal Health Insurance Portability and Accountability Act passed in 1996, its laudable provisions included preventing patients’ medical information from being shared without their consent, along with other important privacy assurances.

But as the litany of recent examples shows, HIPAA, as the law is commonly known, is open to misinterpretation – and sometimes provides cover for health institutions that are protecting their own interests, not patients’.
“Sometimes it’s really hard to tell whether people are just genuinely confused or misinformed, or whether they’re intentionally obfuscating,” said Deven McGraw, partner in the healthcare practice of Manatt, Phelps & Phillips and former director of the Health Privacy Project at the Center for Democracy & Technology.

For example, McGraw said, a frequent health privacy complaint to the U.S. Department of Health and Human Services Office for Civil Rights is that health providers have denied patients access to their medical records, citing HIPAA. In fact, access to one’s own records is one of the law’s signature guarantees.

“Often they’re told [by hospitals that] HIPAA doesn’t allow you to have your records, when the exact opposite is true,” McGraw said.
I’ve seen firsthand how HIPAA can be incorrectly invoked.
In 2005, when I was a reporter at the Los Angeles Times, I was asked to help cover a train derailment in Glendale, California, by trying to talk to injured patients at local hospitals. Some hospitals refused to help arrange any interviews, citing federal patient privacy laws. Other hospitals were far more accommodating, offering to contact patients and ask if they were willing to talk to a reporter. Some did. It seemed to me that the hospitals that cited HIPAA simply didn’t want to ask patients for permission.
At the first White House public workshop on Big Data, Latanya Sweeney, a leading privacy researcher at Carnegie Mellon and Harvard who is now the chief technologist for the Federal Trade Commission, was quoted as asking about privacy and big data: “Computer science got us into this mess; can computer science get us out of it?”
There is a lot computer science and other technology can do to help consumers in this area. Some examples:
• The same predictive analytics and machine learning used to understand and manage preferences for products or content and improve user experience can be applied to privacy preferences. This would take some of the burden off individuals to manage their privacy preferences actively and enable providers to adjust disclosures and consent for differing contexts that raise different privacy sensitivities.
Computer science has done a lot to improve user interfaces and user experience by making them context-sensitive, and the same can be done to improve users’ privacy experience.
• Tagging and tracking privacy metadata would strengthen accountability by making it easier to ensure that the use, retention, and sharing of data are consistent with the expectations under which the data was first provided.
• Developing features and platforms that let consumers see what data is collected about them, that use visualizations to make that data easier to interpret, and that return to consumers more of the benefit of the data they themselves generate would provide far more dynamic and meaningful transparency than static privacy policies, which few consumers read and only experts can interpret usefully.
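The metadata-tagging idea in the list above can be made concrete. Below is a minimal sketch in Python, with entirely hypothetical field names, purposes, and retention terms: each record carries the consent and retention terms it was collected under, and a check runs before any use of the data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaggedRecord:
    """A data record that carries its own privacy metadata."""
    payload: dict
    collected_at: datetime
    consented_purposes: set = field(default_factory=set)
    retention_days: int = 365

    def permits(self, purpose: str, on: datetime) -> bool:
        """True if using this record for `purpose` at time `on` is consistent
        with the consent and retention terms it was collected under."""
        within_retention = (on - self.collected_at).days <= self.retention_days
        return purpose in self.consented_purposes and within_retention

record = TaggedRecord(
    payload={"blood_pressure": "120/80"},
    collected_at=datetime(2014, 1, 1, tzinfo=timezone.utc),
    consented_purposes={"treatment"},
)

now = datetime(2014, 6, 1, tzinfo=timezone.utc)
print(record.permits("treatment", now))  # True: consented and within retention
print(record.permits("research", now))   # False: no consent for research use
```

Because the tag travels with the data, a downstream system can enforce the original expectations mechanically rather than relying on policy documents alone.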
In a recent speech to MIT’s industrial partners, I presented examples of research on privacy-protecting technologies.
T was never a star service tech at the auto dealership where he worked for more than a decade. If you lined up all the techs, he wouldn’t stand out: medium height, late-middle age, pudgy, he was as middle-of-the-pack as a guy could get.
He was exactly the type of employee that his employer’s wellness vendor said was their ideal customer. They could fix him.
A genial sort, T thought nothing of sitting with a “health coach” to have his blood pressure and blood taken, get weighed, and then use the coach’s notebook computer to answer, for the first time in his life, a health risk appraisal.
He found many of the questions oddly personal: how much did he drink, how often did he have (unprotected) sex, did he use sleeping pills or pain relievers, was he depressed, did he have many friends, did he drive faster than the speed limit? But, not wanting to rock the boat, and anxious to get the $100/month bonus that came with being in the wellness program, he coughed up this personal information.
The feedback T got, in the form of a letter sent to both his home and his company mailbox, was that he should lose weight, lower his cholesterol and blood pressure, and keep an eye on his blood sugar. Then came the perfect storm that T never saw developing.
His dealership started cutting employees a month later. In the blink of an eye, a decade of service ended with a “thanks, it’s been nice to know you” letter and a few months of severance.
T found the timing of dismissal to be strangely coincidental with the incentivized disclosure of his health information.
The field of analytics has fallen into a few big holes lately that represent both its promise and its peril. These holes pertain to privacy, policy, and predictions.
Policy. 2.2/7. The biggest analytics project in recent history is the $6 billion federal investment in the health exchanges. The goals of the health exchanges are to enroll people in the health insurance plans of their choice, determine insurance subsidies for individuals, and inform insurance companies so that they could issue policies and bills.
The project touches on all the requisites of analytics including big data collection, multiple sources, integration, embedded algorithms, real time reporting, and state of the art software and hardware. As everyone knows, the implementation was a terrible failure.
The CBO’s conservative estimate was that 7 million individuals would enroll in the exchanges. Only 2.2 million had done so by the end of 2013. (This does not include Medicaid enrollment, which had its own projections.) The big federal vendor, CGI, is being blamed for the mess.
Note that CGI was also the vendor for the Commonwealth of Massachusetts, which had the worst performance of all states in meeting enrollment numbers despite its long head start as the Romney reform state and its groundbreaking exchange, the Connector. New analytics vendors, including Accenture and Optum, have been brought in for the rescue.
Was it really a result of bad software, hardware, and coding? Was it that the design to enroll and determine subsidies had “complexity built-in” because of the legislation that cobbled together existing cumbersome systems, e.g. private health insurance systems? Was it because of the incessant politics of repeal that distracted policy implementation? Yes, all of the above.
The big “hole,” in my view, was the lack of communication between the policy makers (the business) and the technology people. The technologists complained that the business could not make decisions and provide clear guidance. The business expected the technology companies to know all about the complicated analytics and get the job done, on time.
The ensuing rift, in which neither group knew how to talk to the other, is now recognized as a critical failure point. In fact, those stepping into the rescue role have emphasized that there will be daily management status checks “at 9 AM and 5 PM” to bring people together, know the plan, manage the project, stay focused, and solve problems.
Walking around the hole will require a better understanding of why the business and the technology folks do not communicate well, and a recognition that soft people skills can avert hard technical catastrophes.
It’s 8.30 am, just before clinic opens. It is 2010. Dr Byte* checks an online forum, and something catches his eye.
A female patient is complaining about a doctor. Her posting has led to strident reactions from other doctors. Patients are taking her side. It looks ugly.
It turns out that the patient had asked her family doctor whether she could use her smartphone to record the encounter. Her doctor was apparently taken aback and had paused to gather his thoughts. He asked the patient to put her smartphone away, saying that it was not the policy of the clinic to allow patients to take recordings.
The patient described how the mood of the meeting shifted. Initially jovial, the doctor had become defensive. She complied and turned off her smartphone.
The patient wrote that as soon as the smartphone was turned off the doctor raised his voice and berated her for making the request, saying that the use of a recording device would betray the fundamental trust that is the basis of a good patient-doctor relationship.
The patient wrote that she tried to reason, explaining that the recording would be useful to her and her family. But the doctor shouted at her, asking her to leave immediately and find another doctor.
Some participants on the online forum expressed disbelief. But the patient then stated that she could prove it had happened, because she had a recording of the encounter. Although she had turned off her smartphone, a second recording device in her pocket, still turned on, had captured every word.
Today, ONC released a report on patient matching practices, and to the casual reader it will look like a byzantine subject. It’s not.
You should care about patient matching, and you will.
It impacts your ability to coordinate care, purchase life and disability insurance, and maybe even keep your job. Through ID theft, it also impacts your safety and security. Patient matching’s most significant impact, however, could be on your pocketbook, as it’s being used to fix prices and reduce competition in a high-deductible insurance system that makes families subject to up to $12,700 of out-of-pocket expenses every year.
Patient matching is the healthcare cousin of NSA surveillance.
Health IT’s watershed will come when people finally realize that hospital privacy and security practices are unfair and we begin to demand consent, data minimization, and transparency for our most intimate information. The practices suggested by Patient Privacy Rights are relatively simple and obvious and will be discussed toward the end of this article.
Health IT tries to be different from other IT sectors. There are many reasons for this; few of them are good. Health IT practices are dictated by HIPAA, whereas the rest of IT is governed by the FTC or the Fair Credit Reporting Act. Healthcare is mostly paid for by third-party insurance, so the risks of fraud are different than in traditional markets.
Healthcare is delivered by strictly licensed professionals who are regulated differently than the institutions that purchase the health IT. These are the major reasons for healthcare IT exceptionalism, but they are not a good excuse for bad privacy and security practices, so this is about to change.
Health IT privacy and security are in tatters, and nowhere is it more evident than the “patient matching” discussion. Although HIPAA has some significant security features, it also eliminated a patient’s right to consent and Fair Information Practice.
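Under the hood, patient matching is record linkage: scoring demographic fields to decide whether two records held by different institutions refer to the same person. A minimal sketch follows; the field names, weights, and threshold are purely illustrative (real systems use calibrated probabilistic methods), but it shows why matching works without a national patient ID, and also why it can silently link records you never intended to be linked.

```python
def normalize(s: str) -> str:
    """Lowercase and strip punctuation so 'Smith' and 'SMITH' compare equal."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

# Illustrative weights: how strongly agreement on a field suggests a match.
WEIGHTS = {"last_name": 4.0, "first_name": 2.0, "dob": 5.0, "zip": 1.0}
MATCH_THRESHOLD = 8.0  # arbitrary cutoff for this sketch

def match_score(a: dict, b: dict) -> float:
    """Sum the weights of every demographic field the two records agree on."""
    return sum(
        w for f, w in WEIGHTS.items()
        if a.get(f) and normalize(a.get(f, "")) == normalize(b.get(f, ""))
    )

hospital = {"first_name": "Jon", "last_name": "Smith",
            "dob": "1960-04-02", "zip": "65801"}
clinic   = {"first_name": "John", "last_name": "Smith",
            "dob": "1960-04-02", "zip": "65801"}

score = match_score(hospital, clinic)
print(score, score >= MATCH_THRESHOLD)  # 10.0 True: matched despite "Jon"/"John"
```

Note that the records are linked without the patient's involvement or consent at any step, which is exactly the Fair Information Practice gap described above.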
I’ve recently returned from the 7th ID Ecosystem Steering Group Plenary in Atlanta. This is an international public-private project focused on the anything-but-trivial issue of issuing people authoritative cyber-credentials: digital passports you can use to access government services, healthcare, banks and everything else online.
Cyber ID is more than a single-sign-on convenience or a money-saver that lets businesses stop asking you for the names of your pets; it’s rapidly becoming a critical foundation for cyber-security because it impacts the resiliency of our critical infrastructure.
Healthcare, it turns out, is becoming a design center for IDESG because healthcare represents the most diverse collection of human interactions of any large market sector. If we can solve cyber-identity for healthcare, we will have solved most of the other application domains.
The cyber-identity landscape includes:
• proving who you are without showing a physical driver’s license
• opening a new account without having to release private information
• eliminating the risk of identity theft
• civil or criminal accountability for your actions based on a digital ID
• reducing your privacy risks through anonymous or pseudonymous ID
• enabling delegation to family members or professional colleagues without impersonation
• reducing hidden surveillance by state or private institutions
• when appropriate, shifting control of our digital tools to us and away from corporations
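The anonymous or pseudonymous ID item deserves a concrete illustration. One well-known construction, sketched below with a made-up secret and hypothetical service names, has the identity provider derive a different HMAC-based pseudonym for each relying party, so that two services cannot correlate the same user while each service still sees a stable ID across visits.

```python
import hashlib
import hmac

IDP_SECRET = b"identity-provider-secret-key"  # held only by the ID provider

def pseudonym(user_id: str, relying_party: str) -> str:
    """Derive a stable, party-specific pseudonym for a user."""
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()[:16]

# The same user gets different, unlinkable IDs at each service...
at_clinic = pseudonym("alice", "clinic.example")
at_bank   = pseudonym("alice", "bank.example")
print(at_clinic != at_bank)  # True

# ...but a stable one at any single service across visits.
print(at_clinic == pseudonym("alice", "clinic.example"))  # True
```

The design choice matters for the surveillance item too: because linkage requires the provider's secret, no outside party can join a user's clinic identity to their bank identity from the pseudonyms alone.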
The IDESG process is deliberate and comprehensive. It impacts many hot issues in health care including patient matching, information sharing for accountable care and population health, health information exchanges, prescription drug monitoring programs, accounting for disclosures, patient engagement and meaningful use, the physician’s ability to communicate and refer without institutional censorship, the patient’s ability to control information from our increasingly connected devices and implants, and more.
Hospitals and health industry incumbents that seek to solve the hot issues raised by health reform are not eager to wait for a deliberate and comprehensive process. For them, privacy and cyber-security are a nice-to-have. Who will pay for this digital enlightenment?
A Facebook user’s timeline provides both a snapshot of who that user is and a historical record of the user’s activity on Facebook. My Facebook timeline is about me, and fittingly, I control it. It’s also one, single profile. Anyone I allow to view my timeline views my timeline—they don’t each create their own copies of it.
Intuitive, right? So why don’t medical records work that way? There is no unified, single patient record—every doctor I’ve ever visited has his or her own separate copy of my records. And in an age when we can conduct banking transactions on our smartphones, many patients still can’t access or contribute to the medical records their doctors keep for them.
My proposal? Medical records should follow Facebook’s lead.
Cross-industry innovation isn’t new. BMW borrowed from the tech world to create its iDrive; Fischer Sports reduced the oscillation of its skis by using a technology created for stringed instruments. So I asked myself: Who has mastered the user-centric storing and sharing platform? The more I thought about it, the more I decided a Facebook timeline approach could be just what medical records need.
To see what I mean, let’s explore some of the Facebook timeline’s key features and how each could map to features of the ideal medical record.
“About” for Complete, Patient-Informed Medical History
On Facebook: The “about” section is the one that most closely resembles the concept of a user profile. It includes a picture selected by the user and lists information such as gender; relationship status; age; political and religious views; interests and hobbies; favorite quotes, books, and movies; and free-form biographical information added by the user.
In medical records: The “about” section would be a snapshot of the patient’s health and background. It should include the patient’s age, gender, smoking status, height, weight, address, phone number, and emergency contact information; the patient’s primary care provider; and insurance information. This section would include a summary list of the patient’s current diagnoses and medications, as well as family history. And importantly, both the doctor and the patient would be able to add details.
“Privacy Settings” and “Permissions” for Controlled Sharing
On Facebook: Privacy settings allow users to control who can see the information they post or that is posted about them. For example, in my general privacy settings I can choose to make my photos visible only to the people I’ve accepted as “friends.” However, if I post a photo I want the entire world to see, I can change the default setting for that photo to be visible publicly instead.
Facebook also allows users to grant “permissions” for outside applications to access their profiles. For example, let’s say I use TripAdvisor to read travel reviews. TripAdvisor lets me sign in to its site using my Facebook account, rather than creating a separate TripAdvisor account. But, to do this I must grant TripAdvisor “permission” to access my Facebook account.
In medical records: Patients could use “privacy settings” to control whether all or part of their information can be seen by a family member or caregiver. For example, if my aging mother wanted to give me access to her “events” (upcoming doctor’s appointments), she could do so. If my college-aged son who is still on my health plan wanted to give me access to his knee X-rays, he could.
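The scoped sharing described in these examples can be sketched as a toy permission model; the class, section names, and data below are all hypothetical, meant only to show how per-section grants would work:

```python
class PatientRecord:
    """A single patient-owned record with Facebook-style sharing grants."""

    def __init__(self, owner: str):
        self.owner = owner
        self.sections = {}   # section name -> content, e.g. "events"
        self.grants = {}     # viewer -> set of section names they may read

    def share(self, viewer: str, section: str) -> None:
        """Owner grants `viewer` read access to one section only."""
        self.grants.setdefault(viewer, set()).add(section)

    def view(self, viewer: str, section: str):
        """Return the section if the viewer is the owner or holds a grant."""
        if viewer == self.owner or section in self.grants.get(viewer, set()):
            return self.sections.get(section)
        raise PermissionError(f"{viewer} may not view {section}")

mom = PatientRecord(owner="mother")
mom.sections["events"] = ["2014-03-01 cardiology follow-up"]
mom.sections["lab_results"] = ["cholesterol panel"]

mom.share("daughter", "events")         # share appointments only
print(mom.view("daughter", "events"))   # visible: explicitly shared
try:
    mom.view("daughter", "lab_results") # never shared
except PermissionError as e:
    print("blocked:", e)
```

As on Facebook, the default is private and each disclosure is an explicit, revocable, per-section choice by the record’s owner.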
The Los Angeles Times has reported that Covered California, the largest state health insurance exchange under the Affordable Care Act, has started releasing to insurance agents throughout the state the names and contact information of tens of thousands of people who started an application using the state’s online system but failed to complete it.
The Covered California director Peter Lee acknowledges the practice but says that the outreach program still complies with privacy laws and was reviewed by the exchange’s legal counsel. “I can see a lot of people will be comforted and relieved at getting the help they need to navigate a confusing process,” explained Lee.
I am hardly as confident as Covered California’s lawyers apparently were that this practice was legal.
The law requires that disclosures to third parties be necessary, and I do not see why Covered California could not have contacted non-completers directly and asked them if they wanted help from an insurance agent, rather than disclosing their identities to insurance agents. But even if the practice could be called borderline legal, it is difficult to imagine a practice more likely to sabotage enrollment efforts in California — and, since California’s interpretation could become precedent for other states — elsewhere.
For every person unable to complete an application online in California who will, with the comforting help of insurance agents, now complete it, there are likely 10 who will be turned off by the cavalier attitude toward privacy exhibited by this government agency. Beyond a violation of ACA privacy safeguards, the action is either a sign of desperation about enrollment figures, even in a state such as Peter Lee’s California that boasts of its success, or monumental stupidity.
If California wanted to create an adverse selection death spiral, it would be difficult to be more effective than, without notice or consent, releasing personally identifiable information to insurance agents.
An aspect of EHR vendor contracts that is common in healthcare yet virtually unheard of elsewhere is that the EHR vendor lays claim to the data entered into its system. Rob and I, who co-authored this post, have worked in many industries as analysts. Nowhere in our collective experience have we seen such a thing. Manufacturers, retailers, financial institutions, and the like would never think of relinquishing their data to their enterprise software vendor of choice.
It confounds us why healthcare organizations let their vendors of choice get away with this and, frankly, in this day of increasing concern about patient privacy, why the practice is allowed in the first place.
The Office of the National Coordinator for Health Information Technology (ONC) released a report this summer defining EHR contract terms and lending some advice on what should and should not be in your EHR vendor’s contract.
The ONC recommendations are good but incomplete and come from a legal perspective.
As we approach the 3-5 year anniversary of the beginning of the upsurge in EHR purchasing via the HITECH Act, cracks are beginning to show. Roughly a third of healthcare organizations are now looking to replace their EHR. To assist HCO clients, we wrote an article, published in our recent October Monthly Update for CAS clients, expanding on some of the points made by the ONC and adding a few more critical considerations for HCOs trying to lower EHR costs and reduce risk.
The most troubling item in many EHR contracts is the notion that the patient data HCOs enter into their EHR becomes the property, in whole or in part, of the EHR vendor.
It’s Your Data. Act Like it.
Prior to the internet age, the concept that any data input into software, whether on the desktop, on-premise, or in the cloud (a.k.a. hosted or time-sharing), was not owned entirely by the user was unheard of. But with the emergence of search engines and social media, rights to data have slowly eroded away from the user in favor of the software/service provider.
Facebook is notorious for making subtle changes to its data privacy agreements that raise the ire of privacy rights advocates.