Contemplating Health Data Rights as Civil Rights

BY ERIC PERAKSLIS ON BEHALF OF THE LIGHT COLLECTIVE

Recently, despite decades of experience in cybersecurity, privacy, and data science, I got sent back to school.  

As a member of the Council of the Wise at the Light Collective, a patient advocacy group focused on healthcare technology and privacy, I attended a town hall event titled “No Aggregation Without Representation,” which featured four eminently qualified leaders of the BIPOC and data advocacy communities: Dr. Maya Rockeymore Cummings, Tiah Tomlin-Harris, Jillian Simmons, JD, and Valencia Robinson. I was unprepared for the ownership and authority of these four leaders – and unprepared, too, for how they transformed professional truisms I have often relayed myself into something far more personal, meaningful, urgent, and authentic.

The commercial practice of aggregating health and consumer data crosses the borders of healthcare, retail, marketing, communications, transportation, and other sectors. The risks and consequences (including abuse) related to the exploitation and monetization of personal health data are enormous, complex, and largely or entirely borne by the individuals whose data is being collected – whether they know it or not.

What’s more, no agency in the current patchwork of three-letter federal regulatory bodies (FDA, FTC, HHS, FCC) has the remit or resources to encompass the totality of it. Armed with pre-Internet privacy laws from the 1970s or with insurance portability laws from the mid-1990s that have been stretched to cover health and consumer privacy protections, a constellation of advocacy organizations, ethicists, and academics identifies problems and offers opinions. All the while, data flows like oil to fuel a surveillance economy in which consumers and patients are exploited and harmed. For vulnerable communities, including BIPOC communities and persons with frailties, disabilities, or mental illness, these harms are not only real but, in light of a sad history, also unsurprising.

The Deep Roots of Systemic Harm

During the town hall, the panelists described an ongoing lack of trust that has grown steadily from events such as the Tuskegee syphilis experiments, which began in the early 1930s and continued into the early 1970s; the case of tissues taken without consent or compensation from Henrietta Lacks in the 1950s and used in thousands of experiments; and the stigma surrounding diseases such as HIV, which is no less potent today than it was in the 1980s. As I listened to the panelists, these events were no longer historical to me; instead, they were current events unfolding before my eyes and deeply affecting communities who are well aware that they are being harmed and exploited so that others may profit.

But just what are those harms, and who feels their effects? Those who pontificate about the virtues and value of data without boundaries often argue that the benefits to the many outweigh the risks to the few, but it is not “just a few” when we are talking about significant numbers of individuals within minority and underserved communities. In the last few weeks alone, we have read stories about how data from persons seeking help for suicidal ideation was captured and monetized by a crisis text line, seen credible evidence that middleware ad-tracking software can track users on some health sites regardless of those sites’ stated privacy policies, and witnessed another class-action lawsuit against Meta (Facebook’s parent company) for the misuse of personal biometric data.

Patient populations need rights.  So what?

I once had a mentor who advised me to ask myself, “so what?” at least three times before deciding whether an idea could stick. Let’s play that game using the wisdom shared by these four amazing leaders… 

Dr. Rockeymore Cummings spoke about how she had shared parts of her health journey online to support others on similar journeys. But the ultimate outcome of this sharing was that her insurance company asserted that, because she had advocated for preventative options, the procedures she had undergone were elective – and therefore denied coverage for them. Where is the internet warning label advising consumers that the thoughts and experiences they share online may be scraped, scrutinized, and interpreted by their health insurance companies? Interestingly, when the human genome was first mapped, scientists and ethicists helped ensure the passage of the Genetic Information Nondiscrimination Act (GINA) because they foresaw the potential for abuse once information about individuals’ genetic makeup became available. But over the decades that the internet has been proliferating, no comparable nondiscrimination protections have emerged to cover the myriad other kinds of personal data available online. To surveillance capitalists, this situation is a feature, not a bug.

Patient populations’ data are owned by companies.  So what?  

Today we live in a world where any given patient population’s collective health data is owned by companies, and what we share is increasingly part of our permanent public record. So what? What if I did say something like that once, and I changed my mind? Or what if I signed up for a service that required some of my data but then decided I was no longer interested – or, worse, became worried about things I learned as I grew more familiar with the service? Jillian Simmons pointed out that not a single state has passed comprehensive “right to be forgotten” privacy protections, which would grant people the right to have their personal data removed from specific and aggregated datasets. Beyond the obvious assurances this approach affords patients and consumers, it is also an extremely effective way to ensure accountable stewardship of data, even in large, aggregated datasets. The right to be forgotten imposes a duty on the dataset owner or aggregator to know where each individual’s data resides and where it has been shared; otherwise, the data cannot be located and erased when the individual so desires. As they would say in my hometown of Boston, “wicked smaht.”

So what?

If denial of care and loss of personal autonomy over participation in health websites are not sufficiently compelling issues, let’s get to the tougher stuff. As our preprint article (recently covered by Wired magazine) shows, health sites are using – sometimes without even realizing it – middleware that contains ad trackers. Ad tracking may not seem obviously harmful, but such trackers can aggregate strikingly personal information from seemingly benign data. For instance, if you’ve used a ride-sharing app, your physical location might be shared across multiple platforms. Use that ride-sharing platform to visit Planned Parenthood, and various inferences could be drawn. Data from the multiple browser windows left open on your phone can be aggregated to reveal your online purchases, the stores you frequent, the banking tools you use, the restaurants you visit – or when you’re at home waiting for a pizza. At this point, the question of “so what?” should be thoroughly answered. Does all of this still sound benign?

Real-World Data: Meet the Real-World Hazards

Pharma and digital health companies often tout the benefits of real-world data for improving population health. These data hold the power to create huge advances in our understanding of disease; we need real-world data. Yet without proper protections and stewardship, these data also hold hazards. All of these data and more are being grabbed by stalkerware, which is proliferating at epidemic proportions, and vulnerable people are the most affected. One study by the European Institute for Gender Equality found that 7 out of 10 women in Europe who had experienced cyberstalking were also victims of intimate partner violence. According to this report, the digital control exercised by a stalker can be immense: reading anything the surveilled person types, including usernames and passwords for services such as banking applications, online shops, and social networks; tracking that person’s movements in real time with GPS; eavesdropping on or even recording phone calls; reading text messages (regardless of whether encryption is used); monitoring social network activity; viewing photos and videos; and even switching on the camera.

Another survey showed that 85% of domestic violence workers reported having cared for victims whose attackers located them by GPS. Further, statistics on the adverse effects of internet crime show that members of BIPOC communities face identity theft more often, that their social media accounts are attacked more often, and that elderly persons are targeted more often than those under 65. Put simply: the more vulnerable are more vulnerable.

So why do we say “no aggregation without representation”? It’s not an abstract ethical question about privacy or even autonomy. It is about the right to be safe, the right to feel secure. That makes it no less than a civil right.

This conversation about collective rights, representation, and stewardship of real-world health data is only getting started. Those of us in healthcare who hold power, privilege, and influence have a responsibility to listen and act. So what comes next?

This incredible group will meet again at the National Health Policy Conference in Washington, DC, on April 4th and 5th. I hope you will join them. Thank you, Dr. Rockeymore Cummings, Ms. Tomlin-Harris, Ms. Simmons, and Ms. Robinson. I promise I won’t forget what I’ve learned.

Eric D. Perakslis, PhD (@eperakslis) is a Rubenstein Fellow at Duke University.