Patients who search on free health-related websites for information related to a medical condition may have the health information they provide leaked to third party tracking entities through code on those websites, according to a research letter by Marco D. Huesch, M.B.B.S., Ph.D., of the University of Southern California, Los Angeles.
The research letter, entitled “Privacy Threats When Seeking Online Health Information,” was recently published in JAMA Internal Medicine and looked at how 20 health-related websites track visitors, ranging from the sites of the National Institutes of Health to the health news section of The New York Times online. Thirteen of the sites had at least one potentially worrisome tracker, according to the analysis performed by Dr. Huesch.
He also found evidence that health search terms he tried — herpes, cancer and depression — were shared by seven sites with outside companies. According to the paper:
“A patient who searches on a “free” health-related website for information related to “herpes” should be able to assume that the inquiry is anonymous. If not anonymous, the information knowingly or unknowingly disclosed by the patient should not be divulged to others.
Unfortunately, neither assumption may be true. Anonymity is threatened by the visible Internet address of the patient’s computer or the often unique configuration of the patient’s web browser. Confidentiality is threatened by the leakage of information to third parties through code on websites (eg, iframes, conversion pixels, social media plug-ins) or implanted on patients’ computers (eg, cookies, beacons).”
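As a minimal sketch of the leakage mechanism the letter describes (the URL and site name here are hypothetical, not taken from the study): when a results page embeds a third-party element such as a conversion pixel, the browser typically sends the page’s full URL in the Referer header, so a search term encoded in that URL reaches the tracker without any deliberate hand-off by the site.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Referer header a third-party tracker would receive when its
# pixel loads on a health site's search-results page. The search term rides
# along in the page URL's query string.
referer = "https://example-health-site.org/search?q=herpes"

# The tracker only has to parse its own server logs to recover the term.
query = parse_qs(urlparse(referer).query)
leaked_term = query.get("q", [""])[0]
print(leaked_term)  # prints: herpes
```

Nothing here requires the health site to share data on purpose; merely embedding the third-party element is enough, which is what makes this class of leakage so easy to overlook.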
Dr. Huesch says he was inspired to investigate this area by The Wall Street Journal’s archive of coverage on how the technology and the market for your online information work. The most recent piece in this series is on Facebook privacy settings and some of the risks associated with “Graph Search.” The entire series is very good and worth the read.
The sharing of patient information in the US is out of whack — we lean far too much toward hoarding information vs. sharing it. While care providers have an explicit duty to protect patient confidentiality and privacy, two things are missing:
- the explicit recognition of a corollary duty to share patient information with other providers when doing so is in the patient’s interests, and
- a recognition that there is potential tension between the duty to protect patient confidentiality/privacy and the duty to share — with minimal guidance on how to resolve the tension.
In this essay we’ll discuss:
1. A recent recognition in the UK
2. The need for an explicit duty to share patient information in the US
As the instigators of the OpenNotes initiative, we are thrilled that OpenNotes is being adopted by the VA. Prompted by Dr. Kernisan’s thoughtful post, the ensuing lively discussion, and our experiment with 100 primary care physicians and 20,000 of their patients, we thought it useful to offer some observations drawing both on our experiences as clinicians and on ongoing conversations with clinicians and patients.
First and foremost, we don’t have “answers” for Dr. Kernisan. Our hope is to contribute to new approaches to these sticky questions over time. And remember that patients’ right to review their records is by no means new: since 1996, virtually all patients have had the right to access their full medical records. What’s new is that OpenNotes takes down barriers such as filling out forms and charging per page, while actively inviting far more patients to exercise this right in an easier and more accessible way.
We think of open visit notes as a new medicine, designed like all therapies to help more than it hurts. But every medicine is inevitably accompanied by relative and absolute contraindications, and it’s useful to remember that it’s up to the medical and patient community to learn to take a medicine wisely as it becomes more widely available. A few specific thoughts:
Dementia and diminished physical capacity:
When a clinician notices symptoms or signs of dementia, chances are the patient and/or family has already been worrying about this for some time. Is it safe for the patient to live alone? What about driving? How and when could things get worse? They may actually be relieved when the doctor brings up these topics and articulates the issues in a note. Moreover, their worst fears may prove unfounded, and reading that in a note can be reassuring. But we need to consider the words we write so we don’t rush to label a condition as “Alzheimer’s.” Being descriptive is often better and more helpful than assigning one-word labels. In itself, OpenNotes reminds the health professional to choose words wisely. That doesn’t have to mean more work, but we believe it can certainly mean better notes that are more easily understood by the patient. We urge colleagues to stay away from “The patient denies…,” or “refuses,” or “is SOB.”
Abuse or diversion of drugs, possible substance abuse, or unhealthy alcohol use:
These subjects are always tough, and what to write down has been an issue for clinicians long before they worried about open records. Over the course of our experiment in primary care, we have heard stories from patients about changing their attitudes and behavior after reading a note and “seeing in black and white” what their doctors were most worried about. Though substance abuse may seem like a particularly sensitive topic, at least one doctor in our study is convinced that some of his patients in trouble with drugs or medications did better as a result of reading his notes. And while some patients may reject our spoken (or unspoken) thoughts that we document in notes, experience to date makes us believe that more patients will be helped than hurt, and that it is worth the tradeoff.
In a time of EHR naysayers, mean-spirited election year politics, and press misinterpretation (ONC and CMS do not intend to relax patient engagement provisions), it’s important that we all send a unified message about our progress on the national priorities we’ve developed by consensus.
1. Query-based exchange – every country in the world that I’ve advised (Japan, China, New Zealand, Scotland/UK, Norway, Sweden, Canada, and Singapore) has started with push-based exchange, replacing paper and fax machines with standards-based technology and policy. Once “push” is done and builds confidence with stakeholders, “pull” or query-response exchange is the obvious next step. Although there are gaps to be filled, we can and should make progress on this next phase of exchange. The naysayers need to realize that there is a process for advancing interoperability and we’re all working as fast as we can. Query-based exchange will be built on top of the foundation created by Meaningful Use Stage 1 and 2.
2. Billing – although several reports have linked EHRs to billing fraud/abuse and the recent OIG survey seeks to explore the connection between EHR implementation and increased reimbursement, the real issue is that EHRs, when implemented properly, can enhance clinical documentation. The work of the next two years as we prepare for ICD-10 is to embrace emerging natural language processing technologies and structured data entry to create highly reproducible/auditable clinical documentation that supports the billing process. Meaningful Use Stage 1 and 2 have added content and vocabulary standards that will ensure future documentation is much more codified.
3. Safety – some have argued that electronic health records introduce new errors and safety concerns. Although it is true that bad software implemented badly can cause harm, the vast majority of certified EHR technology enhances workflow and reduces error. Meaningful Use Stage 1 and 2 enhance medication accuracy and create a foundation for improved decision support. The HealtheDecisions initiative will bring us guidelines/protocols that add substantial safety to today’s EHRs.
Because nearly one billion users produce a lot of data, Facebook has had a hand in publishing more than 30 research papers since 2009, including research that may link social-networking activity and loneliness.
But outside researchers have been unable to validate those studies because Facebook refused to release the underlying raw data, citing the need to protect users’ privacy. Now Facebook is considering changes to its policy. Nature News reports:
Facebook is now exploring a plan that could allow external researchers to check its work in future by inspecting the data sets and methods used to produce a particular study. A paper currently submitted to a journal could prove to be a test case, after the journal said that allowing third-party academics the opportunity to verify the findings was a condition of publication.
When the University of Pennsylvania Health System sought new patients for its lung transplant service last year, it turned to Facebook and Google.
The results of the $20,000 advertising campaign on the websites exceeded administrators’ expectations.
During a few weeks in August and September, more than 4,600 people clicked on the ads and 36 people made appointments for consultations. One of those is now on the hospital’s lung transplant waiting list, and several others are being evaluated, hospital officials say. While the response may seem small, each transplant brings in about $100,000 in revenue.
“We wanted to test the theory of how successful a digital marketing campaign could be,” said Suzanne Sawyer, the health system’s chief marketing officer. “It was like looking for a needle in a haystack,” she said, noting only about 60 lung transplants are done each year in Philadelphia, where the health system is based.
One guarantee in the healthcare sector is that when it comes to personal health information (PHI), there is no lack of issues and pundits to discuss security and privacy of such information/data. If one does not jump up and down bleating on about the sanctity of PHI and the need to protect it at all costs, well then you may be labeled a heretic and burned at the proverbial stake.
Now don’t get us wrong. Here at Chilmark Research we firmly believe that your PHI is arguably the most personal information you have and you do have a right to know exactly how it is used. Whether or not you own it remains to be seen, for we have seen, read and heard on more than one occasion that some healthcare providers believe it is their data, not yours, and may only begrudgingly give you access to some circumscribed portion of the PHI they have stashed in their vast HIT fortress or, worse, scattered across a number of chart folders.
But where we do differ with many on the sanctity of PHI is this: the collective use of our de-identified PHI at a community, regional, state or even national level can give us some amazing insights into what is and is not working in this convoluted thing we call a healthcare system in the US, and that use needs to be strongly supported. Unfortunately, we do a terrible job as a country of educating the populace on the collective value of their data for understanding health trends and treatments and, ultimately, ascertaining accurate comparative effectiveness. This leaves the door wide open for others to use the old FUD (fear, uncertainty and doubt) factor to keep patients from actively sharing their de-identified PHI.
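For readers unfamiliar with what “de-identified” means in practice, here is a toy sketch (field names and rules are illustrative only; the actual HIPAA Safe Harbor method enumerates 18 identifier categories): direct identifiers are dropped outright, and quasi-identifiers such as birth date and ZIP code are generalized before records are pooled for analysis.

```python
# Illustrative only: strip direct identifiers from a record before pooling it
# for community-level analysis. Real de-identification (e.g., HIPAA Safe
# Harbor) covers 18 identifier categories, not just these example fields.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Generalize quasi-identifiers: keep only birth year and 3-digit ZIP.
    if "birth_date" in clean:
        clean["birth_year"] = clean.pop("birth_date")[:4]
    if "zip" in clean:
        clean["zip3"] = clean.pop("zip")[:3]
    return clean

record = {"name": "Jane Doe", "zip": "02138", "birth_date": "1964-07-01",
          "diagnosis": "type 2 diabetes"}
print(deidentify(record))
```

The point of the sketch is that the clinically useful signal (the diagnosis, the coarse geography, the age band) survives while the fields that point back to an individual do not, which is what makes aggregate analysis possible without the privacy exposure of raw records.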
Data, information, interpretation and decision-making are among the vital components of prevention, diagnosis, management and treatment.
The problem we have today is how to gather and manage the data that our bodies radiate.
In order to solve this problem, we have to surmount other problems – which are not just technological but also behavioral, cultural and financial.
But if you want an idea of what an extreme version of data-collection might look like, check out the application Placeme.
Now Placeme is *not* a healthcare application. What Placeme does do, however, is continually (in almost real time) track the places that you visit. No check-ins; no need to enter any data – the application simply runs in the background and does its magic.
When you think about that (from the cultural perspective of today), that’s creepy.
And yet, this “creepy” model is the future. It represents the technological and cultural arc along which social software is carrying us. We can fight it (and should, in order to flesh out the nuances so we can ensure safety), but in the long run we shall have to accept the trend and work accordingly.
So think of Placeme in terms of what the ‘Quantified Self’ movement is attempting to achieve.
I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.
I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.
When I think of new frontiers on the internet I’m reminded of a science fiction story I read in college by my favorite SciFi author, Isaac Asimov. It’s called “The Dead Past,” and it goes something like this: Scientists have invented a machine called a chronoscope that can be used to view any time in the past, anywhere in the world, but this technology is strictly regulated by the government. Historians try to get licenses to view ancient Carthage or Rome, but government bureaucrats churlishly deny most requests based on mundane considerations of cost and convenience. So a frustrated historian teams up with a frustrated physicist and a frustrated journalist and together they reverse-engineer the chronoscope. They are eventually apprehended, but by that time the journalist had sent the plans to half a dozen of his news outlets; the secret is out and can never be retrieved.
And there, in the closing pages of the story, Asimov explains why the government had been so secretive about this invention:
At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced the European Commission’s proposal to create a sweeping new privacy right—the “right to be forgotten.” The right, which has been hotly debated in Europe for the past few years, has finally been codified as part of a broad new proposed data protection regulation. Although Reding depicted the new right as a modest expansion of existing data privacy rights, in fact it represents the biggest threat to free speech on the Internet in the coming decade. The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already. Unless the right is defined more precisely when it is promulgated over the next year or so, it could precipitate a dramatic clash between European and American conceptions of the proper balance between privacy and free speech, leading to a far less open Internet.
In theory, the right to be forgotten addresses an urgent problem in the digital age: it is very hard to escape your past on the Internet now that every photo, status update, and tweet lives forever in the cloud. But Europeans and Americans have diametrically opposed approaches to the problem. In Europe, the intellectual roots of the right to be forgotten can be found in French law, which recognizes le droit à l’oubli—or the “right of oblivion”—a right that allows a convicted criminal who has served his time and been rehabilitated to object to the publication of the facts of his conviction and incarceration. In America, by contrast, publication of someone’s criminal history is protected by the First Amendment, leading Wikipedia to resist the efforts by two Germans convicted of murdering a famous actor to remove their criminal history from the actor’s Wikipedia page.