On one hand, regulators are reluctant to limit private corporate action lest they reduce innovation and patient choice or create moral hazard. On the other hand, a privatized marketplace for services requires transparency of cost and quality and a minimum of economic externalities that privatize profit while socializing costs.
For over two decades, the HIPAA law and regulations have dominated the way personal health data is used and abused to manipulate physician practice and increase costs. During these decades, digital technology has brought marvels of innovation and competition to markets as diverse as travel and publishing while healthcare technology is burning out physicians and driving patients to bankruptcy.
Let’s give the Office of the National Coordinator (ONC) credit for trying. In what’s arguably its first significant piece of health IT policymaking, the newly Republican-led HHS issued a draft Trusted Exchange Framework and Common Agreement (TEFCA) that aims to implement the massively bipartisan 21st Century Cures Act mandate to end information blocking. Are they succeeding?
Why should you care? After almost a decade and tens of billions of dollars spent on health information technology, neither physicians nor patients have access to a longitudinal health record, transparency of quality or cost, access to independent decision support, or even the ability to know what their out-of-pocket costs will be. After eight years of regulation, precious little benefit has trickled down to patients and physicians. This post looks at the TEFCA proposal from the patient experience perspective.
The patient perspective matters because, under HIPAA, patients have no choice about how our data is accessed or used. This has led to information blocking as hospitals and EHR vendors slow-walk the ability of patients to direct data to the information services we choose. Patients lost the “right of consent” in 2002. This puts a regulation-shy administration in a quandary: how do they regulate to implement Cures when current HIPAA- and HITECH-era regulations give all of the power to provider institutions bent on locking in patients as the key to value-based compensation?
21st Century Cures is now law. Aside from its touted research and mental health provisions, it’s the most significant health information technology legislation since HITECH, now eight years ago. A decent summary of the bill’s health IT provisions by John Halamka concludes with “That is just not realistic.” He’s almost certainly right to the extent that your perspective is the hospital-centered mega-EHR model. You can’t get there from here.
Halamka and others who think that consolidated institutions will drive interoperability are in denial about the gap between financial integration and clinical integration. This recent post by Kip Sullivan describes some of the wishful thinking. But there’s another reason why HITECH’s institutional EHRs cannot get us to the Triple Aim, and it’s mostly about liability.
Halamka ignored one of the items in 21st Century Cures that could lead to clinical integration around a patient: a longitudinal health record. Section 4006 on page 149 includes:
“(1) IN GENERAL.—The Secretary shall use existing authorities to encourage partnerships between health information exchange organizations and networks and health care providers, health plans, and other appropriate entities with the goal of offering patients access to their electronic health information in a single, longitudinal format that is easy to understand, secure, and may be updated automatically.”
Useful longitudinal health records require curation and, almost by definition, the curators will not be affiliated with any single hospital or other institution operating a traditional EHR. Allowing licensed physicians, family caregivers, and patients themselves to edit an institutional EHR is risky to the point of impossibility. That’s why current initiatives to introduce modern APIs into EHRs, such as SMART and Sync for Science, are read-only.
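The read-only pattern can be made concrete with a small sketch. This is not any vendor’s actual client library; the base URL, token, and resource IDs are hypothetical, and it only illustrates the design choice that SMART- and Sync for Science-style access permits FHIR read and search interactions (HTTP GET) while refusing writes, so the institution remains the sole author of its records.

```python
# Illustrative sketch only: a client wrapper enforcing read-only FHIR access.
# The base URL and resource IDs are hypothetical, not a real EHR endpoint.

class ReadOnlyFHIRClient:
    """Builds FHIR requests but permits only read/search (HTTP GET)."""

    def __init__(self, base_url, access_token):
        self.base_url = base_url.rstrip("/")
        self.token = access_token  # OAuth bearer token in a real deployment

    def read(self, resource_type, resource_id):
        # e.g. GET [base]/Patient/123 -- the FHIR "read" interaction
        return ("GET", f"{self.base_url}/{resource_type}/{resource_id}")

    def search(self, resource_type, **params):
        # e.g. GET [base]/Observation?patient=123 -- the FHIR "search" interaction
        query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
        return ("GET", f"{self.base_url}/{resource_type}?{query}")

    def update(self, resource_type, resource_id, body):
        # Writes (PUT/POST) are deliberately refused: the EHR remains
        # the sole author of its records, which is the liability point above.
        raise PermissionError("This API is read-only; edits are not allowed")

client = ReadOnlyFHIRClient("https://ehr.example.org/fhir", "token-abc")
print(client.read("Patient", "123"))
```

Any curated longitudinal record therefore has to live outside the institutional EHR, assembled from these read-only feeds.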
President Obama’s legacy for health information technology is about to see its first test at the hands of a little-known project for access to Medicare beneficiary data. The President’s Precision Medicine Initiative (PMI) database is the big brother of Medicare’s database. Although both databases will be managed by the Government, the PMI one will also have our DNA and as many of our health records as we are willing to move there. How much control will patients have over our data in either of these databases? Federal policy on these databases will impact all of healthcare.
The test is whether either of these databases will limit one’s ability to control and use our own data.
Can I have free first-class network access to my own data?
Can I send my own data instantly to anywhere I choose?
Can I direct my data digitally, without paper forms?
These three questions apply equally to my Medicare data, my data in a private-sector EHR, and my PMI data. Current HIPAA law allows all three, but will the Government and hospitals actually implement them? The policy for the Medicare database is being implemented as Blue Button on FHIR this summer, and so far it doesn’t look good.
If our Federal Health Architecture (FHA) will not allow us the maximum control allowed by the law, then how can we expect private-sector healthcare systems to do it? I wrote about the current HIPAA law and how it needs to be changed to make a patient’s first-class access a right, instead of an option, in a previous post.
Now it’s clear. On Thursday, the Office for Civil Rights, responsible for HIPAA enforcement and protecting the public, published new guidance interpreting HIPAA with respect to data blocking. The limits of the current law are now evident. In the interest of affordable health care, the Precision Medicine Initiative, and common sense, it’s time for Congress to update HIPAA. Believe it or not, HIPAA still allows hospitals and other electronic health record (EHR) systems to require paper forms before they release data under patient direction. Along with an allowed 30-day delay in access to electronic health records, this data blocking makes second opinions and price comparisons practically inaccessible. Over $30 billion in stimulus funds have been spent on EHRs, and it is still up to Congress to give patients full digital access to their digital data.
Data blocking is the result of deliberate barriers designed into current EHRs that prevent patients from using their own data in efficient and innovative ways. It is practiced by both EHR vendors and healthcare institutions to avoid competition by favoring the services they control. As hospitals consolidate into massive “integrated delivery networks”, the business logic for data blocking becomes clear and irrefutable. Data blocking ensures the largest health delivery networks will get larger and control pricing. The bigger they are, the more data they have about each patient and the more money each patient’s data is worth to outside interests like pharmaceutical companies and data brokers. The results are ruinous healthcare costs and hidden discrimination in insurance, credit, employment, and other key life opportunities.
Why Are Apple’s Competitors Staying Silent On the iPhone Unlocking Fight? is the question of the day on tech blogs. The answer is hardly technical and may not even be legal; it’s all about privacy policies and business strategy, and it is very evident in healthcare.
Class 3 – “We will use your data according to xyz policy and if you don’t like it, take your illness elsewhere.” This is pretty much how healthcare and much of the Web world runs today. We have limited rights to our own data. On the other hand, the services that hold our data can sell it and profit in dozens of ways. This includes selling de-identified data. In Class 3, you, the subject of the data, are a third-class citizen at best. In many cases, the subject doesn’t even know that the data exists. See, for example, The Data Map.
We are so completely engulfed by Class 3 privacy policies that we have lost perspective on what could or should be. A Class 1 policy like Apple’s is widely seen as un-American. A Class 2 policy like PPR’s is indirectly attacked as “insurmountable”.
Healthcare is abuzz with calls for Universal Patient Identifiers. Universal people identifiers have been around for decades, and that experience can help us understand what, if anything, makes patients different from people. This post argues that surveillance may be a desirable side-effect of access to a health service, but the use of unique patient identifiers for surveillance needs to be managed separately from the use of identifiers in a service relationship. Surveillance uses must always be clearly disclosed to the patient or their custodian each time an identifier is sent by the service provider or “matched” by the surveillance agency. This includes health information exchanges and research data registries.
As a medical device entrepreneur, physician, engineer, and CTO of Patient Privacy Rights, I have decades of experience with patient identifier practices and standards. I feel particularly qualified to discuss patient identifiers because I serve on the Board and Management Council of the NIST-founded Identity Ecosystem Steering Group (IDESG), where I am the Privacy and Civil Liberties Delegate. I am also a core participant in the industry standards groups Kantara-UMA and OpenID-HEART working on personal data, and I consult on patient and citizen identity with public agencies.
The essence of controlling Ebola is surveillance. To accept surveillance, the population must trust the system responsible for surveillance. That simple fact is as true in Liberia as it is in the US. The problem is that health care surveillance has been privatized and interoperability is at the mercy of commerce.
Today I listened to the JASON Task Force meeting. The two hours were dedicated to a review of their report to be presented next week at a joint HIT Committee Meeting.
The draft report is well worth reading. Today’s discussion was almost exclusively on Recommendations 1 and 6. I can paraphrase the main theme of the discussion as “Interoperability moves at the speed of commerce and the commercial interests are not in any particular hurry – what can we do about it?”
Health information technology in the US is all about commerce. In a market that is wasting $1 Trillion per year in unwarranted and overpriced services, interoperability and transparency are a risk. Public health does not pay the bills for EHR vendors or their hospital customers.
Today, ONC released a report on patient matching practices and to the casual reader it will look like a byzantine subject. It’s not.
You should care about patient matching, and you will.
It impacts your ability to coordinate care, purchase life and disability insurance, and maybe even your job. Through ID theft, it also impacts your safety and security. Patient matching’s most significant impact, however, could be to your pocketbook, as it’s being used to fix prices and reduce competition in a high-deductible insurance system that subjects families to up to $12,700 of out-of-pocket expenses every year.
Patient matching is the healthcare cousin of NSA surveillance.
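To see why matching is surveillance by another name, it helps to look at the mechanics. In the absence of a universal identifier, systems score agreement across demographic fields and declare a match above a threshold; the weights, threshold, and records below are illustrative assumptions, not ONC’s or any vendor’s actual values.

```python
# A minimal sketch of demographic "patient matching": score fuzzy agreement
# across fields and accept a match above a threshold. Weights and threshold
# are illustrative assumptions only.

from difflib import SequenceMatcher

# Illustrative weights: rarer agreements (like date of birth) count more.
WEIGHTS = {"last_name": 2.0, "first_name": 1.0, "dob": 3.0, "zip": 1.0}
THRESHOLD = 5.0  # assumed cutoff for declaring a match

def similarity(a, b):
    """Fuzzy string agreement in [0, 1], tolerant of typos."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a, rec_b):
    """Weighted agreement score across shared demographic fields."""
    return sum(w * similarity(rec_a[f], rec_b[f]) for f, w in WEIGHTS.items())

def is_match(rec_a, rec_b):
    return match_score(rec_a, rec_b) >= THRESHOLD

a = {"last_name": "Smith", "first_name": "John", "dob": "1950-01-02", "zip": "02139"}
b = {"last_name": "Smyth", "first_name": "Jon",  "dob": "1950-01-02", "zip": "02139"}
print(is_match(a, b))  # True: typos tolerated because DOB and ZIP agree
```

The point of the sketch is that matching works by correlating demographics across databases without the patient’s participation, which is exactly why it doubles as a surveillance mechanism.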
Health IT’s watershed will come when people finally realize that hospital privacy and security practices are unfair and we begin to demand consent, data minimization, and transparency for our most intimate information. The practices suggested by Patient Privacy Rights are relatively simple and obvious and will be discussed toward the end of this article.
Health IT tries to be different from other IT sectors. There are many reasons for this, few of them good. Health IT practices are dictated by HIPAA, whereas the rest of IT is governed by either the FTC or the Fair Credit Reporting Act. Healthcare is mostly paid for by third-party insurance, so the risks of fraud are different than in traditional markets.
Healthcare is delivered by strictly licensed professionals who are regulated differently than the institutions that purchase the health IT. These are the major reasons for healthcare IT exceptionalism, but they are not a good excuse for bad privacy and security practices, so this is about to change.
Health IT privacy and security are in tatters, and nowhere is it more evident than the “patient matching” discussion. Although HIPAA has some significant security features, it also eliminated a patient’s right to consent and Fair Information Practice.
I’ve recently returned from the 7th ID Ecosystem Steering Group Plenary in Atlanta. This is an international public-private project focused on the anything-but-trivial issue of issuing people authoritative cyber-credentials: digital passports you can use to access government services, healthcare, banks and everything else online.
Cyber ID is more than a single-sign-on convenience or a money-saver that lets businesses stop asking you for the names of your pets; it’s rapidly becoming a critical foundation for cyber-security because it impacts the resiliency of our critical infrastructure.
Healthcare, it turns out, is becoming a design center for IDESG because healthcare represents the most diverse collection of human interactions of any large market sector. If we can solve cyber-identity for healthcare, we will have solved most of the other application domains.
The cyber-identity landscape includes:
proving who you are without showing a physical driver’s license
opening a new account without having to release private information
eliminating the risk of identity theft
civil or criminal accountability for your actions based on a digital ID
reducing your privacy risks through anonymous or pseudonymous ID
enabling delegation to family members or professional colleagues without impersonation
reducing hidden surveillance by state or private institutions
when appropriate, shifting control of our digital tools to us and away from corporations
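One of these capabilities, delegation without impersonation, is worth sketching. Instead of handing a caregiver your password (impersonation), the account holder issues a signed, scoped, time-limited grant that names the delegate, so every action stays attributable. The names, scopes, and token format below are hypothetical; real deployments would use OAuth- or UMA-style tokens as in the standards work mentioned above.

```python
# Illustrative sketch of "delegation without impersonation": the account
# holder issues a signed, scoped, time-limited grant naming the delegate.
# Scopes and names are hypothetical; real systems use OAuth/UMA tokens.

import hashlib, hmac, json, time

SECRET = b"authorization-service-signing-key"  # held by the auth service

def issue_grant(delegate, scopes, ttl_seconds):
    """Create a grant identifying the delegate and limiting what they can do."""
    claims = {"delegate": delegate, "scopes": sorted(scopes),
              "expires": time.time() + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_grant(grant, requested_scope):
    """Verify signature, expiry, and scope. The delegate never holds the
    account holder's credentials, so every action is attributable."""
    expected = hmac.new(SECRET, grant["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, grant["sig"]):
        return False  # tampered or forged grant
    claims = json.loads(grant["payload"])
    return time.time() < claims["expires"] and requested_scope in claims["scopes"]

g = issue_grant("caregiver@example.org", ["read:medications"], ttl_seconds=3600)
print(check_grant(g, "read:medications"))  # True: in scope and unexpired
print(check_grant(g, "write:records"))     # False: scope was never granted
```

The design choice is that authority is narrowed and logged per delegate rather than cloned wholesale, which is what distinguishes delegation from password sharing.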
Hospitals and health industry incumbents seeking to solve the hot issues raised by health reform are not eager to wait for a deliberate and comprehensive process. For them, privacy and cyber-security are a nice-to-have. Who will pay for this digital enlightenment?