Thanks to the flood of new data expected to enter the health field from all angles–patient sensors, public health requirements in Meaningful Use, records on providers released by the US government, previously suppressed clinical research to be published by pharmaceutical companies–the health field faces a fork in the road, one direction headed toward chaos and the other toward order.
The road toward chaos is forged by the providers’ and insurers’ appetites for categorizing us, marketing to us, and controlling our use of the health care system, abetted by lax regulation. The alternative road is toward a healthy data order where privacy is protected, records contain more reliable information, and research is supported or even initiated by cooperating patients.
This was my main take-away from a day of meetings and a panel held recently by Patient Privacy Rights, a non-profit for which I have volunteered during the past three years. The organization itself has evolved greatly during that time, tempering much of the negativity in which it began and producing a stream of productive proposals for improving the collection and reuse of health data. One recent contribution consists of measuring and grading how closely technology systems, websites, and applications meet patients’ expectations to control and understand personal health data flows.
With sponsorship by Microsoft at their Innovation and Policy Center in Washington, DC, PPR offered a public panel on privacy–which was attended by 25 guests, a very good turnout for something publicized very modestly–to capitalize on current public discussions about government data collection, and (without taking a stand on what the NSA does) to alert people to the many “little NSAs” trying to get their hands on our personal health data.
It was a privilege and an eye-opener to be part of Friday’s panel, which was moderated by noted privacy expert Daniel Weitzner and included Dr. Deborah Peel (founder of PPR), Dr. Adrian Gropper (CTO of PPR), Latanya Sweeney of Harvard and MIT, journalist Sydney Brownstone of Fast Company, and me. Although this article incorporates much that I heard from the participants, it consists largely of my own opinions and observations.
Poorly regulated data paths
Hospitals undoubtedly face a lot of cost pressures, including patients who lack insurance and the costs of billing insurers themselves. The emerging pay-for-performance payment regime will, one hopes, ease these pressures by eliminating unnecessary costs, such as errors that lengthen patient stays and all too often kill the patient. In the meantime, it seems the medical complaints we bring to providers are not sufficient to support their revenue: they are always angling to persuade us we need new services.
Even when providers try to reduce costs, they gravitate toward paternalistic strategies that fail to give us patients the right to decide for ourselves how to handle the most important decisions of our lives. As an example of such data-driven initiatives, some hospitals monitor emergency room use to identify patients who are “frequent flyers” and intervene to direct them to cheaper forms of treatment.
Of course, if providers published what they charge for treatment, all of us could make better choices and rescue the health care system from its spiraling price increases, but they don’t choose that route. Patient-centered medical homes and the use of telemedicine for frequent patient contact could also reduce the anxiety or attention-seeking that leads to excessive emergency room use.
It’s only fair to note that some very useful changes are emerging from data sharing, such as interventions aimed at people who have unusually high risks of hospital admission. One of the main services offered by Health Information Exchanges (HIEs) to justify their use by providers is a system for tracking such high-risk patients, such as those diagnosed with both diabetes and heart disease. The providers can then take such measures as scheduling check-ups and phoning or texting patients to urge them to come in and take advantage of the extra care. Once again, this kind of health management would be good for all of us, but given that providers have limited resources, it’s good that they’re starting somewhere.
Few providers have the staff expertise to handle the data crunching required for marketing and risk management, so most bring in third-party vendors. Regulators allow this to happen without patient consent or even notification of the patient, placing such activities under a rather broad exemption for “treatment, payment, and operations.” Recent changes to HIPAA that have just taken effect, luckily, place a few boundaries around these data exchanges.
To sense the chaos in the flow of data, check out the data map created by Professor Latanya Sweeney and colleagues. This graphic is constantly growing as the researchers discover new critters lining up at the data trough. Sweeney points out, “Most people want their medical information made available to medical research to help bring forth new discoveries. Surprisingly, healthcare researchers do not top the list of recipients in the data map. The biggest growing section are business analytics companies whose revenue streams rely on combining health data with other forms of health data.”
I do not want to be an alarmist concerning data flows. You can see that most of the flows in the map remove the patient’s name. Now, this doesn’t mean that the data is sufficiently anonymized. In fact, a lot of institutions that think they have anonymized patient data have done a bad job, as Sweeney found in the highly publicized re-identification of patients in Washington state. But it’s a signal (often backed up by contractual requirements) that individuals should not be identified.
The eighteen-item “Safe Harbor” list of fields that HIPAA requires providers to remove from patient data is crude and inadequate. Therefore, public pressure (and maybe some lawsuits) will be required to force better techniques for anonymization. (One good start, if I may be permitted a plug, is O’Reilly’s book Anonymizing Health Data.) But even if patients can possibly be re-identified from the data exchanged, I don’t see this as one of the big risks to worry about. I am more worried about data breaches, which are common–some institutions can’t seem to learn, suffering repeat breaches–and which I examine later in this article.
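To make the inadequacy of field-removal concrete, a standard way to gauge re-identification risk is k-anonymity: the size of the smallest group of records that share the same combination of “quasi-identifiers” (ZIP code, birth year, sex, and the like). This is a minimal sketch with toy, hypothetical records–not from any real dataset–showing how a dataset with names stripped can still have k = 1, meaning at least one person is uniquely identifiable:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing the same quasi-identifier
    values. A low k means some individuals are easy to re-identify."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Toy "de-identified" hospital records (hypothetical): names removed,
# but the combination of ZIP code, birth year, and sex can be unique.
records = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "flu"},
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "diabetes"},
]

print(k_anonymity(records, ["zip", "birth_year", "sex"]))  # 1: one record is unique
```

Real anonymization work raises k by generalizing fields (truncating ZIP codes, bucketing ages) until no combination singles anyone out, which is exactly the kind of technique that goes beyond HIPAA’s checklist.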
Consider: even if a patient can theoretically be re-identified, it becomes a concern only when all the following are true:
- The patient must have a condition that can normally be hidden. If the patient’s condition is obvious to people around him, anonymization protects only against remote snoopers.
- The condition must have some stigmatizing quality. There must be some reason the patient is afraid of losing a job, insurance, child custody, or at least friends and associates.
- Someone must want to do harm to the sufferer. This could be an acquaintance with malicious intent. It could also be an institution with some relationship to the patient, but institutions are unlikely to do something unethical when contractually required not to re-identify data.
- The patient must be one of those who can be re-identified. In the Washington state case mentioned earlier, the researchers used news reports as a crowbar to open up patient data. Similar data about hospital admissions or other medical conditions may be known to our acquaintances, such as employers or former spouses–an element of risk known as adversary power.
- The malicious party must have the capability of exploiting the opportunity for re-identification. This requires specialized expertise, but analytics firms are increasingly making such expertise available as a service. (Just think of all the ads you see offering to show you someone’s arrest record.) Furthermore, more and more data sets about individuals are coming online, and services deliberately combine them to correlate various data fields and uniquely identify people.
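The last point–combining data sets to identify people–is the classic linkage attack, which Sweeney famously demonstrated by matching “anonymized” hospital discharge data against public voter rolls. Here is a minimal sketch with hypothetical data (the names and values are invented for illustration) of how little code such an attack requires once the right data sets are in hand:

```python
def link_records(deidentified, public, keys):
    """Naive linkage attack: match 'anonymized' records against a
    public dataset on shared quasi-identifiers. When exactly one
    public record matches, the medical record is re-identified."""
    hits = []
    for med in deidentified:
        matches = [p for p in public
                   if all(med[k] == p[k] for k in keys)]
        if len(matches) == 1:  # a unique match pins down the person
            hits.append((matches[0]["name"], med["diagnosis"]))
    return hits

# Hypothetical example: a "de-identified" discharge record and a
# public voter-roll entry share ZIP code, birth date, and sex.
medical = [{"zip": "02138", "dob": "1965-07-31", "sex": "F",
            "diagnosis": "hypertension"}]
voters = [{"name": "J. Doe", "zip": "02138", "dob": "1965-07-31",
           "sex": "F"}]

print(link_records(medical, voters, ["zip", "dob", "sex"]))
# [('J. Doe', 'hypertension')]
```

The analytics services mentioned above do essentially this at scale, with fuzzier matching and many more data sources.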
Peel, drawing on her experience as a psychiatrist, says, “I have spent 35+ years hearing from individuals who were harmed when their records were seen by people who should never have had access.” So it’s possible for our data to be used against us individually, but I think we should all be worried about it being used against us collectively.
Companies are essentially making policy, setting prices, and mandating treatments with the data they collect on us. A well-known example is the campaign by drug companies to track the way doctors treat various conditions, and tailor marketing campaigns to increase sales of their drugs. To do this, they collect information about how doctors prescribe drugs to patients.
Of course, I want data to be shared and exploited for the purpose of finding better treatments and discovering risks. But the interests of
drug companies, insurers, and hospitals are not always in line with the interests of patients and their doctors.
Barn door open on business associates
As I mentioned earlier, doctors can give patient data to third parties without patient consent or knowledge, under a wide range of conditions. This leads not only to the risk of data abuse, but to data breaches that can enable medical identity theft–a growing problem that brings not only fraud and increased medical costs, but real harm to patients whose records are polluted with bad data.
Until recently, HIPAA did not require providers’ business associates to adhere to the same basic standards required of the health providers themselves: encrypting records both in storage and in transit, maintaining audit trails, conveying clear policies to all responsible employees, and so on. But HIPAA has belatedly attempted to close the loophole. The requirements are straightforward and simply reflect what security experts have long recommended. Still, the health industry is worried that compliance will be a heavy burden.
I have some sympathy for the providers and their business associates. If you have been lax, getting your act together can be hard. As Attorney Helen Oscislawski says about marketing and fund-raising, “The biggest challenge for business associates in particular is that they haven’t really paid attention to the rules.” (15:27 into the podcast)
The intent of the new HIPAA rules is to remind business associates that they are acting solely on behalf of the providers who gave them the data. A third party can’t start doing fund-raising for a different organization or promote other products and services unless it obtains patient consent. I can see why enforcement and verification are difficult. It’s just too bad that the value of our medical data is so great that keeping it within privacy regulations is like restraining a team of runaway stallions.
A few other comments during Friday’s panel included these:
- Weitzner said that trust in the virtues of open data is strongest in the United States, whereas people in other countries show more concern for the risks of revealing sensitive information about individuals.
- Peel offered the statistic that one out of eight patients puts their own care at risk out of fear of letting information get into malicious hands. Some withhold information from their clinicians or lie about it, whereas others request that clinicians leave data out of their records (and many clinicians comply).
- Sweeney and Brownstone suggested that the public needs to hear a horror story before they mobilize behind a policy of protecting health data.

I think people are already concerned about privacy, and just don’t see an alternative to the gauntlet of risks their data runs each day. If we offer them a simple and effective alternative, they’ll turn into advocates.
Massive data collection by faceless bureaucracies intent on manipulating our treatment options isn’t the inevitable face of future health care. The alternative is to take data into our own hands through a personal health record–not a portal offered by a health institution, but a true repository owned by the patient and operated by an agent beholden to the patient, such as MyDataCan or Microsoft’s HealthVault.
Leaving data in patient hands would facilitate good sharing and suppress bad sharing. Patients would take data that currently languishes behind the brick-and-mortar or digital walls of the doctors who collect it, and offer it to their other providers to improve care and eliminate unnecessary tests. They could also set up rules for releasing data to researchers. Providers who put patients first would use data in very different ways, as suggested by Amik Ahmad.
Gropper reports that regulators and providers are subverting laws meant to give patients control over their data. HIEs have turned into just another way for doctors to monopolize the benefits of patient information and withhold it from the patients themselves. The Direct project, which was explicitly set up to allow patient participation in data exchange, is turning into a provider-to-provider enclave.
So, as usual, byzantine arrangements within the health industry–what one reformer calls a “freek-osystem” (12:40 into the video)–are being used to bypass regulation as well as patient rights. Costs go up along the way, partly because simple activities are made complex, but mostly because the public and the payers can’t figure out what’s going on. Reasserting control over where our data goes would be an
important start to getting a grip on our own health care.
Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. His work for O’Reilly includes the influential 2001 title Peer-to-Peer, the 2005 ground-breaking book Running Linux, and the 2007 best-seller Beautiful Code.