One guarantee in the healthcare sector is that when it comes to personal health information (PHI), there is no shortage of issues, or of pundits eager to discuss the security and privacy of that data. If you do not jump up and down bleating about the sanctity of PHI and the need to protect it at all costs, well, you may be labeled a heretic and burned at the proverbial stake.
Now don’t get us wrong. Here at Chilmark Research we firmly believe that your PHI is arguably the most personal information you have, and you do have a right to know exactly how it is used. Whether or not you own it remains to be seen, for we have seen, read and heard on more than one occasion that some healthcare providers believe it is their data, not yours, and may only begrudgingly give you access to some circumscribed portion of the PHI they have stashed in their vast HIT fortress, or worse, scattered across a number of chart folders.
But where we do differ with many on the sanctity of PHI is here: the collective use of our de-identified PHI at a community, regional, state or even national level can give us some amazing insights into what is and is not working in this convoluted thing we call a healthcare system in the US, and that use needs to be strongly supported. Unfortunately, as a country we do a terrible job of educating the populace on the collective value of their data for understanding health trends and treatments and, ultimately, ascertaining accurate comparative effectiveness. This leaves the door wide open for others to use the old FUD (fear, uncertainty and doubt) factor to keep patients from actively sharing their de-identified PHI.
Data, information, interpretation and decision-making are among the vital components of prevention, diagnosis, management and treatment.
The problem we have today is how to gather and manage the data that our bodies radiate.
In order to solve this problem, we have to surmount other problems – which are not just technological but also behavioral, cultural and financial.
But if you want an idea of what an extreme version of data-collection might look like, check out the application Placeme.
Now Placeme is *not* a healthcare application. What Placeme does do, however, is continually track, in near real time, the places you visit. No check-ins, no need to enter any data – the application simply runs in the background and does its magic.
When you think about that (from the cultural perspective of today), that’s creepy.
And yet, this “creepy” model is the future. It represents the technological and cultural arc along which social software is carrying us. We can fight it (and should, in order to flesh out the nuances so we can ensure safety), but in the long run we shall have to accept the trend and work accordingly.
So think of Placeme in terms of what the ‘Quantified Self’ movement is attempting to achieve.
I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.
I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.
When I think of new frontiers on the internet I’m reminded of a science fiction story I read in college by my favorite SciFi author, Isaac Asimov. It’s called “The Dead Past,” and it goes something like this: Scientists have invented a machine called a chronoscope that can be used to view any time in the past, anywhere in the world, but this technology is strictly regulated by the government. Historians try to get licenses to view ancient Carthage or Rome, but government bureaucrats churlishly deny most requests based on mundane considerations of cost and convenience. So a frustrated historian teams up with a frustrated physicist and a frustrated journalist and together they reverse-engineer the chronoscope. They are eventually apprehended, but by that time the journalist had sent the plans to half a dozen of his news outlets; the secret is out and can never be retrieved.
And there, in the closing pages of the story, Asimov explains why the government had been so secretive about this invention: the chronoscope can view not only the distant past but the past of a fraction of a second ago – meaning anyone, anywhere, can be watched at virtually any moment. The “dead past” is really the living present, and privacy is gone for good.
At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced the European Commission’s proposal to create a sweeping new privacy right—the “right to be forgotten.” The right, which has been hotly debated in Europe for the past few years, has finally been codified as part of a broad new proposed data protection regulation. Although Reding depicted the new right as a modest expansion of existing data privacy rights, in fact it represents the biggest threat to free speech on the Internet in the coming decade. The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already. Unless the right is defined more precisely when it is promulgated over the next year or so, it could precipitate a dramatic clash between European and American conceptions of the proper balance between privacy and free speech, leading to a far less open Internet.
In theory, the right to be forgotten addresses an urgent problem in the digital age: it is very hard to escape your past on the Internet now that every photo, status update, and tweet lives forever in the cloud. But Europeans and Americans have diametrically opposed approaches to the problem. In Europe, the intellectual roots of the right to be forgotten can be found in French law, which recognizes le droit à l’oubli—or the “right of oblivion”—a right that allows a convicted criminal who has served his time and been rehabilitated to object to the publication of the facts of his conviction and incarceration. In America, by contrast, publication of someone’s criminal history is protected by the First Amendment, leading Wikipedia to resist the efforts by two Germans convicted of murdering a famous actor to remove their criminal history from the actor’s Wikipedia page.
We live in an age of “big data.” Data has become the raw material of production, a new source of immense economic and social value. Advances in data mining and analytics and the massive increase in computing power and data storage capacity have expanded, by orders of magnitude, the scope of information available to businesses, government, and individuals. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data. Data create enormous value for the global economy, driving innovation, productivity, efficiency, and growth. At the same time, the “data deluge” presents privacy concerns that could stir a regulatory backlash, dampening the data economy and stifling innovation. In order to craft a balance between beneficial uses of data and the protection of individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information,” the role of consent, and the principles of purpose limitation and data minimization.
Big Data: Big Benefits
The uses of big data can be transformative, and the possible uses of the data can be difficult to anticipate at the time of initial collection. For example, the discovery of Vioxx’s adverse effects, which led to its withdrawal from the market, was made possible by the analysis of clinical and cost data collected by Kaiser Permanente, a California-based managed-care consortium. Had Kaiser Permanente not connected these clinical and cost data, researchers might not have been able to attribute 27,000 cardiac arrest deaths occurring between 1999 and 2003 to use of Vioxx. Another oft-cited example is Google Flu Trends, a service that predicts and locates outbreaks of the flu by making use of information—aggregate search queries—not originally collected with this innovative application in mind. Of course, early detection of disease, when followed by rapid response, can reduce the impact of both seasonal and pandemic influenza.
The Supreme Court’s decision in Sorrell v. IMS Health is being touted in many quarters as a privacy case, and a concerning one at that. Example: Senator Patrick Leahy (D-VT) released a statement saying “the Supreme Court has overturned a sensible Vermont law that sought to protect the privacy of the doctor-patient relationship.” That’s a stretch.
The Vermont law at issue restricted the sale, disclosure, and use of pharmacy records that revealed the prescribing practices of doctors if that information was to be used in marketing by pharmaceutical manufacturers. Under the law, prescription drug salespeople—”detailers” in industry parlance—could not access information about doctors’ prescribing to use in focusing their efforts. As the Court noted, the statute barred few other uses of this information.
It is a stretch to suggest that this is a privacy law, given the sharply limited scope of its “protections.” Rather, the law was intended to advance the state’s preferences in the area of drug prescribing, which skew toward generic drugs rather than name brands. The Court quoted the Vermont legislature itself, finding that the purpose of the law was to thwart “detailers, in particular those who promote brand-name drugs, convey[ing] messages that ‘are often in conflict with the goals of the state.’” Accordingly, the Court addressed the law as a content- and viewpoint-oriented regulation of speech which could not survive First Amendment scrutiny (something Cato and the Pacific Legal Foundation argued for in their joint brief).
If they can hack your home computer, your mobile phone, apps, your store, your social networks, your bank account, your gaming system, your medical records, your school records, the government and its records, and pretty much anything anyone sets their mind to – isn’t it only a matter of time until someone finds a way to hack your heart?
Not through a musical hook or melody that you can’t shake. Or a well-timed smile by someone your soul connects with. Or a box of chocolates. Or a poem. People have been penetrating the human heart with those Luddite-ish tools since the beginning of civilization.
I was thinking more about that electronic device your doctor might have implanted into your chest to keep your heart beating. Or the little box stuck in your gut to help you and your pancreas regulate your diabetes. Or the mini-computer surgically inserted to keep your neurological systems on track.
Hacking the medical miracles put inside people to let them live longer, more normal lives.
While to my limited knowledge nobody has reported a single case, and the likelihood is extremely low, it is a real enough concern that the New England Journal of Medicine published a paper last year about the need to improve security.
Personal data privacy has once again taken center stage in Sorrell v. IMS Health, Inc. Vermont passed the Vermont Confidentiality of Prescription Information Law, which allows doctors who prescribe drugs to decide whether pharmacies can sell their prescription records. IMS Health, along with other health information companies, contested the law, arguing that it restricts commercial speech, since access to such information helps pharmaceutical companies market their drugs effectively to doctors. The Supreme Court is now tasked with determining the constitutionality of the restriction on access to prescription information under the First Amendment.
However, this post is focused on the secondary effects, asserted in amici curiae briefs supporting the petitioners, of allowing companies to purchase such information – specifically the concern of data privacy and patient re-identification. Under the Health Insurance Portability and Accountability Act (HIPAA), personal health information is de-identified by your local pharmacy before it is shared with any third party. By de-identifying the data, it is believed, your personal data cannot be linked or traced back to you. De-identifying your health information is a way for covered entities to share your information without your consent or authorization and in accordance with the law. The information, once shared, is completely anonymized. After the transfer to a third party like IMS Health, your information is merely zeros and ones that translate to dates of dispensing and drug names. No longer does your prescription record list your name or your month or day of birth.
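The de-identification step described above – stripping direct identifiers and generalizing the birth date before the record leaves the pharmacy – can be sketched roughly as follows. This is a minimal illustration only: the field names, the record shape, and the short identifier list are hypothetical, not HIPAA’s actual enumeration of identifiers or any real pharmacy system’s schema.

```python
# Hypothetical sketch of HIPAA-style de-identification. Field names and the
# identifier set are invented for illustration.
HIPAA_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "birth_date"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers; keep only the birth year, not month or day."""
    cleaned = {k: v for k, v in record.items() if k not in HIPAA_IDENTIFIERS}
    if "birth_date" in record:
        # Generalize "YYYY-MM-DD" to just the year.
        cleaned["birth_year"] = record["birth_date"][:4]
    return cleaned

prescription = {
    "name": "Jane Doe",
    "birth_date": "1970-06-15",
    "drug": "atorvastatin",
    "dispensed": "2011-03-02",
}
print(deidentify(prescription))
# {'drug': 'atorvastatin', 'dispensed': '2011-03-02', 'birth_year': '1970'}
```

The re-identification concern raised in the briefs is precisely that records cleaned this way can sometimes still be linked back to individuals by cross-referencing the remaining fields with other datasets.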
Today the Supreme Court will hear oral arguments in Sorrell v. IMS Health. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data if the doctors who generated the data failed to affirmatively permit its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But I think it is easy to “miss the forest for the trees” in this complex case, and want to make some points below about its stakes.**
Privacy Promotes Freedom of Expression
Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
There is an old tension between privacy and the First Amendment, best crystallized in Eugene Volokh’s effort to characterize privacy protections as the troubling right to stop others from speaking about you. Neil Richards has dissected the flaws in Volokh’s Lochneresque effort to reduce the complex societal dynamics of fair data practices to Hohfeldian trump cards held by individuals and corporations. Societies reasonably conclude that certain types of data shouldn’t influence certain types of decisions all the time. And courts have acquiesced, allowing much “of the vast universe of speech [to] remain untouched (and thus unprotected) by the First Amendment.”
One day before the first of April, HHS published the much anticipated rules defining the creation and operations of Accountable Care Organizations (ACOs), spanning 429 pages of business regulation, analysis of various options, proposed solutions, and ways to measure and reward (punish) success (failure) in achieving HHS’s seemingly incompatible goals of providing better care for less money. I am fairly certain that health policy experts, health care economists and the multitude of industry stakeholders will be dissecting and analyzing the hefty document in great detail in the coming weeks. I started reading the document with an eye toward the ACO implications for HIT, which as expected are many, but something on page 108 made me stop in my tracks. HHS is proposing to share personally identifiable health information (PHI) contained in Medicare claims with ACO providers unless patients “opt out”.
Beginning on page 108 and through 22 pages of tortured arguments, HHS makes the case for the legality and benefits of providing ACOs with PHI contained in Medicare claims, unless the patient actively withdraws consent for this type of transaction. The argument for the legality of claim data sharing rests on the nebulous HIPAA clause which allows disclosure of PHI for “health care operations” within a web of covered entities and business associates connecting the ACO with Medicare and other providers of health care services for a particular patient. HHS is proposing to make available four types of medical information to participating ACOs: