The 19th century was about the Industrial Revolution. The 20th century, the Digital Revolution. As we march closer to the third decade of the 21st century, it is becoming clearer that this century’s revolution will be the Data Revolution. After all, companies are monetizing it, countries are weaponizing it and people are producing it.
In the medical space, this has fostered conflicting aims. The promise afforded by collecting and analyzing digital health data for insights into population health and personalized medicine is tempered by haziness on who owns and leverages that data.
But even as government actors struggle with the question of how to regulate data, technological progress marches on. Given the dizzying array of technological products claiming medical benefits hitting the marketplace, regulatory agencies have had to contemplate, and take, drastic steps to keep up. For instance, in the past two years the FDA has taken the following steps:
- In July 2016, the FDA clarified what constituted a “low-risk” device such as fitness trackers or mobile apps tracking dietary activity.
- In June 2017, new FDA Commissioner Scott Gottlieb outlined his vision for a more streamlined process for digital technologies, one that moves from a “case-by-case” approach to one that allows developers to apply consistent safety standards to innovation.
- Just a month later, the FDA announced a pilot digital health pre-certification program for individual companies. Firms that demonstrate a “culture of quality and organizational excellence,” and therefore a need for only minimal regulation, can bring new digital health tools to market with less information communicated to the FDA, sometimes with no “premarket submission at all”.
- By September 2017, nine companies, including tech heavyweights Apple, Samsung and Alphabet-backed Verily, had been selected for the pre-cert process.
- On February 13, 2018, the FDA further specified that low-risk products would be evaluated by looking at the firm’s practices rather than the product itself. It also announced its intent to create a new Center of Excellence on Digital Health, which would be tasked with establishing a new regulatory paradigm, evaluating and recognizing third-party certifiers, and hosting a new cybersecurity unit to complement new advances.
With this flurry of activity, the FDA is clearly moving toward a principles-oriented and firm-based approach to regulating digital technologies. This means shifting certification away from individual medtech products and toward the producers themselves.
From an efficiency standpoint, this is a logical and overdue re-orientation of regulatory purpose. If we live in and wish to foster an innovation economy, it stands to reason that government should seek to develop trust and relationships with innovators large and small alike. Doing so can also theoretically ramp up competition in the marketplace by reducing the barriers to entry for smaller, nimbler competitors who may have less time and money to navigate the byzantine FDA approval process for each medtech product before they go bust.
But the FDA has still punted on the elephant in the room – what do we do about the data collected and transmitted by these devices? Currently, we have disparate companies, agencies and medical facilities collecting reams of health data. But for what purpose? And to what end?
The truth is that governments the world over don’t seem to know how to answer that question. As noted by the Centre for International Governance Innovation (CIGI) when speaking about Canadian healthcare, “vast amounts of health data remain isolated and underutilized”. But why?
CIGI points to the lack of common standards that allow for even basic interoperability as the key culprit for current failures to leverage health data to derive insights into population health and to spur advances in personalized medicine. With information as a new driver of profit, companies are often disincentivized from bringing down the technological walled gardens which prevent the exporting of data for use in other applications, both in the U.S. and abroad. One need only look at the interoperability mess created by EHRs, a technology not regulated by the FDA.
However, there are other barriers at play as well. Legitimate concerns such as data privacy and cybersecurity plague the sharing of health data. In the U.S., this is compounded by an utter lack of governmental clarity on how to address these issues. Not only does the U.S. lack a single, centralized piece of federal data-protection legislation, but various agencies (e.g., the FTC, FCC, HHS and FDA) have legitimate and overlapping claims to jurisdiction when disputes arise, particularly when it comes to health data.
This poses numerous problems in regulation, and questions abound, ranging from the technical (what standards do we employ?), to the legal (to what extent does which agency have jurisdiction?), to the philosophical (what do we want to research with standardized health data?). In practice, we’re left with an absurd situation in which issues are punted between agencies and stakeholders are left unclear on where to turn for answers when it comes to how new technology deals with existing data. This has a knock-on effect on innovation, with the FDA itself noting that the “current regulatory framework is not well-suited for driving the development of safer, more effective software-based devices, including the use of machine learning and artificial intelligence”.
With the U.S. failing to overcome a sectoral approach to regulation, there is no clear, singular voice tasked with guiding, educating and coordinating the public and private sectors on best practices. In fact, the hodge-podge of federal, state and local laws, regulations and rules relating to health data has created a confusing, and even contradictory, “alphabet soup of legislation” – that is, if the data is even regulated at all.
Simply put, continuing to do what we’re doing is a prescription for failure. It also seems both inefficient and naive to place faith in the private sector to bail us out of this mess. As noted previously, there are lucrative incentives to hoard health data rather than to figure out the fairest way to share and leverage it for the common good.
Instead, the time has come for real national leadership to address this issue. Health data should no longer be viewed as a byproduct of treatment or technology but instead as a key economic good to be contextualized, defined and regulated as part of an integrated national strategy and, ideally, tasked to a single national regulator. Doing so will give all Americans the best shot at sharing equitably in the benefits of the proliferation of health data.
And, as Americans know, all good revolutions are based on the promotion of freedom and equality. Why should the Data Revolution be any different?
Author’s Note: For a Canadian view of the need for a national data strategy, including in the healthcare sector, click here.
Jason Chung is the Law & Technology Editor at The Health Care Blog. He also writes on the intersection of health, technology and sports as the senior researcher and attorney at NYU Sports and Society, a think tank dedicated to the study of sports and social issues.
Who should be in charge? In a place like India, where there are neither incumbent health records nor privacy laws, we see Big Hospitals, Government, and Global Corporations all saying me, me, me. Let me be in charge of your personal data!
As the three would-be kings compete to run our lives, who will speak for the alternative of communities and individuals in charge of technology?
The title of this post is brilliant if we avoid the distraction of interoperability, including TEFCA. There’s enough written about interoperability already. I hope we can use Jason’s initiative to focus on the issues of privatizing medicine and if it relates to reducing $1 T of waste in the US.
Must we equate corporate profit with scientific secrecy? There’s plenty of profit in healthcare already, maybe $1 T too much.
If patient data is essential to medical science, including AI, are patients being double-charged – once when we pay for treatment and again when our data is used to profit from the intellectual property component of the data?
Is it ethical to use data collected at a time a person is most vulnerable for secondary profit that then drives costs up for every other patient who follows? How is that not a positive feedback mechanism: we collect more data at greater cost to the sick person so that we have more data to turn into secret medicine, so we can charge the next patient more?
The $2 B Roche Flatiron purchase demonstrates the scope and functioning of surveillance capitalism in medicine. The regulation we need is to force that database to be made public or erased, to make clear that society is not going to go down the path of secret science or secret medicine.
I agree with you that there’s enough written about interoperability already. At heart, the reason we don’t have it is because those in charge don’t want it. There’s too much economic incentive for walled gardens.
As you mention, the problem is with who’s in charge. Right now, with data becoming a core resource, you’ve got companies free-riding on the data generated by people simply trying to be healthy and live their lives. Granted, companies have created tools to capture and analyze that data but does that mean they now get to own and manipulate that data for their own purposes, free and clear and behind closed doors?
We’ve put the cart before the horse on health data by talking about standards before purpose. If people and their data are now the product, we need to be discussing who pulls our strings, why, and, at the very least, to what extent.
“You say you want a revolution…well, you know, we all want to change the world…”
(Well, maybe just healthcare!)
Great article and POV. If ever the word “revolution” fits, it is U.S. healthcare. Perhaps one of the most frustrating issues is that this is an industry where 99% of the people in the industry really want to help people live fuller, healthier, happier lives. So how do we spend so much and not have the most efficient health system in the world with the best outcomes?
I’d suggest that before we regulate the revolution, we let the early blooms of consumer engagement take hold and begin to drive an actual revolution.
For the first time, consumers of health services are beginning to make choices in where/when and how they engage in care. This is an exciting development.
I submitted comments on TEFCA and have started reviewing the other 219 or so commentators. First – hats off to the ONC team. As you read the comments, you quickly realize everyone has an issue(s) and recognize the almost impossible task of corralling all these points of view around a functional solution. Second, everyone is very congratulatory on all the wonderful progress – keep up the great work! ($38 billion in incentives, EMR deployment reaching new highs….but….we still print electronic records and fax them… Oh, the systems were supposed to talk with each other….)
So before we look to regulators to “work their magic”, let’s give the 21st century health consumer an opportunity to influence the digital future of healthcare. In the hands of the consumer lies the overarching organization that may well force a connected industry. The issue is no longer getting an EMR record from physician A to physician B (20th century healthcare) – it is about getting relevant health data from wherever it is created to wherever the consumer engages in care (21st century healthcare).
Appreciate your thoughts. I’d stress that I don’t view having responsible regulators and an engaged healthcare consumer as mutually exclusive. In fact, I think the relationship can be symbiotic to promote and focus the democratization of healthcare data.
For instance, Australia has outlined a national digital health strategy (https://www.digitalhealth.gov.au/about-the-agency/publications/australias-national-digital-health-strategy) that recognizes that there are impediments in law and technology to consumers engaging in their care. A strategy (and responsible regulator) must place the patient at the center of the healthcare system and encourage continuous public engagement and consultation to figure out a supporting framework. That doesn’t mean just engaging with expert commentators but having staff tasked with responsibilities like public education, policy analysis, investigations and citizen engagement to focus and corral the disparate views out there from all strata of American society.
Deliberative democracy is tricky but it should be prioritized. And it won’t happen without a dedicated and clear sheriff in charge that Americans can turn to and trust.
By any sensible estimation, our nation’s economy could not withstand any new investment in infrastructure, no matter how well intended. The portion of our economy devoted to health spending was 5.0% in 1960. In 2016, it was 18.0%. The rate of increase has worsened since the full implementation of ACA 2010 in 2013 (see the Health Sector Economic Indicators Spending Brief for February 2018 by the Altarum Center for Value in Health Care). All of the other OECD nations have health spending that clusters around 11-12% of their nations’ economies. The difference between 13% and 18% of our national economy (aka GDP) in 2017 represented nearly $1 Trillion. That is the equivalent of fighting 10 Iraqi/Afghanistan wars SIMULTANEOUSLY in 2005. Yep, “10”!
Thanks for the thorough review, Jason and I agree that personal data is an increasingly important economic good. You lose me, however, when you call for national regulation. Unless we plan a “Great Firewall of America” for personal data there’s no way to regulate where data goes and how it’s processed unless we keep the personal data away from the person and insist it be managed, like the nuclear stockpile, by the state.
Your review misses one very important trend toward secrecy and black box medicine. Before computers and Big Hospital EHRs, no aspect of medicine was secret. Sure, we had intellectual property in the form of patents and copyright, but the knowledge itself was open, accessible for peer review by anyone for extension and innovation, and for regulation via licensed professionals “at the edge of the value chain.” This allowed for innovation through off-label use, for example, where the licensed practitioner is responsible rather than the corporation.
As medicine shifts in the direction of machine learning, artificial intelligence, and secret databases like the $2 B paid by Roche for Flatiron Health, medicine is going dark at a phenomenal rate. Our personal data and health records are being privatized as surveillance capitalism sweeps over the medical industry. The surveillance capitalism business model has brought us Facebook and the fake news challenge of the day. Do we really want to double down on this model and apply it to medicine as well?
Thanks for this. Always great to hear your thoughts on health data and how it should be managed.
I’d just like to clarify what I meant by regulation. I know the word generally carries negative connotations evoking Five-Year Plans, but I’m actually calling for a regulator to take on a more principles- and standards-based role: one that facilitates the establishment of interoperability standards and works with government, academia and industry to negotiate (and sometimes set) parameters for research on pressing public health issues.
Like you, I’m not a fan of the black box approach to medicine. However, it’s a balance – we also have to acknowledge that companies have built the infrastructure/products to engage in the greatest collection of health data in human history. It’s hard to argue that they shouldn’t get to leverage their collected data, and their analysis of it, for profit. That being said, my hope is that a clear governmental point of contact through a regulator/agency can bring down barriers to data sharing (such as walled gardens) to facilitate and coordinate research priorities.
I also think that having a dedicated regulator/agency to deal with health data might alleviate some of the concerns you have about surveillance capitalism. Having a public organization that understands and is responsible for setting the limits of surveillance, and that acts as a watchdog on the algorithms that dictate corporate action and strategy, might represent a potential compromise. But to do so, the proposed regulator/agency needs to have a clear mandate and teeth.
Of course, for any of this to work you’d need to guard against regulatory capture but that’s true of any society.