In the future, everything will be connected.
That future is almost here.
Over a year ago, the Federal Trade Commission held an Internet of Things workshop, and it has finally issued a report summarizing the comments and recommendations that came out of that conclave.
Much as the HITECH Act sought to increase public confidence in electronic health records by ramping up privacy and security protections for health data, the IoT report (together with an accompanying publication recommending that industry take a risk-based approach to development and adhere to industry best practices such as encryption and authentication) seeks to increase the public's confidence, but it does so the FTC way: no actual rules, just guidance that the FTC may later invoke in enforcement cases. The FTC can take action against an entity that engages in unfair or deceptive business practices, but such practices are defined by case law (administrative and judicial), not by regulations, creating the U.S. Supreme Court's pornography conundrum: I can't define it, but I know it when I see it (see Justice Stewart's timeless concurring opinion in Jacobellis v. Ohio).
To anyone actively involved in data privacy and security, the recommendations seem frighteningly basic:
– build security into devices at the outset, rather than as an afterthought in the design process;
– train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization;
– ensure that outside service providers, when hired, are capable of maintaining reasonable security, and provide reasonable oversight of those providers;
– when a security risk is identified, consider a "defense-in-depth" strategy whereby multiple layers of security may be used to defend against a particular risk (a sketch follows this list);
– consider measures to keep unauthorized users from accessing a consumer's device, data, or personal information stored on the network;
– monitor connected devices throughout their expected life cycle and, where feasible, provide security patches to cover known risks;
– consider data minimization, that is, limiting the collection of consumer data and retaining that information only for a set period of time rather than indefinitely;
– notify consumers and give them choices about how their information will be used, particularly when the data collection goes beyond consumers' reasonable expectations.
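To make a few of these recommendations concrete, here is a minimal sketch in Python of three of the layers the report contemplates: authentication, data minimization, and encryption at rest. It assumes the third-party cryptography package; the device ID, token store, and allowed-field list are hypothetical placeholders, not anything prescribed by the FTC.

```python
# A sketch of three layers from the FTC's list: authentication,
# data minimization, and encryption at rest. Device IDs, the token
# store, and the allowed-field list are hypothetical placeholders.
import hashlib
import hmac
import json
from cryptography.fernet import Fernet

STORAGE_KEY = Fernet.generate_key()  # in practice, from a key-management service
fernet = Fernet(STORAGE_KEY)

# Layer 1: authenticate the device (hashed shared secret, compared in
# constant time to avoid timing leaks).
DEVICE_TOKENS = {"thermostat-01": hashlib.sha256(b"example-secret").hexdigest()}

def authenticate(device_id: str, token: str) -> bool:
    expected = DEVICE_TOKENS.get(device_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, hashlib.sha256(token.encode()).hexdigest())

# Layer 2: minimize, keeping only the fields the feature actually needs.
ALLOWED_FIELDS = {"temperature", "timestamp"}

def minimize(reading: dict) -> dict:
    return {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}

# Layer 3: encrypt at rest, so a copied datastore is not itself a breach.
def store(device_id: str, token: str, reading: dict) -> bytes:
    if not authenticate(device_id, token):
        raise PermissionError("unauthenticated device")
    return fernet.encrypt(json.dumps(minimize(reading)).encode())

# Extraneous fields (here, a street address) never reach storage.
blob = store("thermostat-01", "example-secret",
             {"temperature": 68, "timestamp": "2015-01-27T09:00:00Z",
              "address": "1 Main St"})
```

The point of the layering is that a failure of any single control (a leaked token, an over-collected field, a stolen disk) does not by itself expose consumer data.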
Stakeholders and FTC staff agreed that it is too soon for IoT-specific privacy and security legislation, and the report reiterated the agency's 2012 call for broad-based, flexible, technology-neutral data security and breach notification legislation. (See Health Populi for more on the IoT report.) The President seems to be in favor of strong, uniform data privacy and security rules as well.
Uniformity would be a good thing. As things stand now, the FTC and OCR have overlapping jurisdiction when it comes to enforcing privacy and security rules with respect to health data. (Oh, and let's not forget about state attorneys general and, while we're at it, private lawsuits as vehicles for enforcement.) While overlapping jurisdiction should not matter to those of us who are in compliance with the rules, the problem is that the rules (at least on the FTC side) are not necessarily clear. That issue is magnified because FTC enforcement can include long-term monitoring and reporting on remediation and compliance, and can drive a company out of business. (Consider the LabMD case, just for instance.)
In addition, the IoT report covers some of the same ground as the FDA's recently issued draft guidance entitled General Wellness: Policy for Low Risk Devices, which complements last year's mHealth guidance.
When the federals put a stake in the ground, as they have with all of these issuances, innovation can proceed because we all have a better sense of the contours of the regulatory landscape. The problem is that these guideposts can shift in unpredictable ways, or can easily disappear, like the landmarks disappearing under a blanket of heavy snow falling outside my window as I type this post.
Nevertheless, it is possible to plan for that inscrutable future by building products and services, and communicating with partners, consumers and regulators, in a way that honors public expectations and the policies underpinning the government’s various declarations about data privacy and security.
David Harlow practices health law with a focus on technology in Boston. He blogs at HealthBlawg, where this post first appeared.
Definitely NOT reassuring, is it?
Here we go again… and this is just the beginning:
"Anthem Records Hacked" headlines, Feb. 5th.
Here is a quote from the Reuters story:
“The FBI had warned last August that healthcare industry companies were being targeted by hackers, publicizing the issue following an attack on U.S. hospital group Community Health Systems Inc that resulted in the theft of millions of patient records.
Medical identity theft is often not immediately identified by patients or their provider, giving criminals years to milk such credentials. That makes medical data more valuable than credit cards, which tend to be quickly canceled by banks once fraud is detected.
Security experts say cyber criminals are increasingly targeting the $3 trillion U.S. healthcare industry, which has many companies still reliant on aging computer systems that do not use the latest security features.”
Dr. Palmer is right: “I just have this feeling that there are going to be patients, and maybe users, who refuse to allow some of their sensitive medical information to enter the EHR system.”
Bill, thanks. You know better than I, but I believe anything can be hacked. However, if things are linked to an individual patient, they can only be hacked one at a time. That makes it more difficult. Also, I get notified from Google if someone tries to enter my email account, so I would expect that the individual could be notified any time his personal data is accessed.
We could use biometrics to access an account, and we could demand a code entry that has a legitimate address. I think today we have all the technology we need.
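A minimal sketch of the notification idea above, analogous to Google's login alerts: every per-patient record access leaves an audit trail and triggers an alert to the patient. The record store, notifier, and address directory are hypothetical stand-ins, not a real EHR interface.

```python
# Hypothetical access-notification hook: each lookup of one patient's
# record alerts that patient. Names and addresses are illustrative only.
import datetime

NOTIFY_ADDRESSES = {"patient-123": "patient@example.com"}  # hypothetical directory

def notify(address: str, message: str) -> None:
    # Stand-in for an email or SMS gateway call.
    print(f"to {address}: {message}")

def access_record(records: dict, patient_id: str, accessor: str) -> dict:
    """Return one patient's record, alerting the patient to each access."""
    record = records[patient_id]  # per-patient lookup: one record at a time
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    notify(NOTIFY_ADDRESSES[patient_id],
           f"Your record was accessed by {accessor} at {stamp}")
    return record
```

Note how the per-patient lookup embodies Allan's point: with no bulk query, a compromise exposes records one at a time rather than all at once.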
The problem I see is all the busybodies who want certain information but are unwilling to pay for it, spend the time getting it, or even ask permission to get it. They are the ones blocking the development of a physician-friendly EHR.
@Allan
Well said, Allan. Amen.
Say you live in Florida and you bumped your head on a trampoline at Circus Circus in Reno on a ski trip. You go to the local hospital, and it is very simple for them to see your last five years of PHI. Is this an easy system to hack? Yes___ No___
“Can we have medical records that ONLY the patient can view?”
Bill, I think that is what we need, but it doesn't run parallel with what government wants. I think we have the technology to make things reasonably secure if we didn't insist on putting everyone's records in one place. Different places, I believe, would mean accessing a person's data one person at a time, which works for the physician and hospital treating a patient, but doesn't work for snoopy enterprises that want all the data in one place.
We can’t trust the government because we saw how the government invaded people’s privacy by using the IRS for political gain.
I have several thoughts, but am not a techie, so maybe they don’t make much sense anyway:
It seems that pretty much anything online is fair game for hackers and/or abuse by other parties. If we are making individual health records available to potentially any party that may need them, would other, unidentified parties also be able to get them?
Secondly, there continue to be debates about interoperability. How does this work? It seems every institution has its own system. Is it possible to do this, and if we do, does that make personal medical information more accessible to hacking?
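One common answer to the "every institution has its own system" problem is a shared exchange format: each system maps its internal records into a common representation (HL7 FHIR is the real-world example) instead of every pair of systems translating directly. A minimal sketch of that idea, with invented internal schemas standing in for real hospital systems:

```python
# A sketch of interoperability via a shared intermediate format.
# The two hospital schemas and field names are invented for
# illustration; real exchange formats (e.g., HL7 FHIR) are far richer.

def hospital_a_to_common(rec: dict) -> dict:
    """Hospital A stores names as one string and dates as MM/DD/YYYY."""
    month, day, year = rec["dob"].split("/")
    return {"name": rec["patient_name"], "birth_date": f"{year}-{month}-{day}"}

def hospital_b_to_common(rec: dict) -> dict:
    """Hospital B stores first/last separately and dates as ISO already."""
    return {"name": f'{rec["first"]} {rec["last"]}', "birth_date": rec["birthDate"]}

# Each institution writes one mapping to the common format (N mappings),
# instead of a custom translator for every other institution (N * (N-1)).
common = hospital_a_to_common({"patient_name": "Jane Doe", "dob": "02/05/1960"})
assert common == hospital_b_to_common({"first": "Jane", "last": "Doe",
                                       "birthDate": "1960-02-05"})
```

The hacking worry is real, though: a common format lowers the cost of legitimate exchange and of bulk extraction alike, which is exactly the tension the question raises.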
Is it possible to design a system whereby one’s personal medical info can be stored on a memory stick and they can decide when and where they need to use it? That way the patient is in control of his/her own information.
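One way to realize the memory-stick idea is to encrypt the record with a key derived from a passphrase only the patient knows, so possession of the stick alone reveals nothing. A minimal sketch, assuming the third-party cryptography package; the file layout and passphrase handling are illustrative only:

```python
# Hypothetical patient-held record: encrypted at rest on removable media,
# readable only with the patient's passphrase.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a Fernet key from a patient-held passphrase via PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def write_to_stick(path: str, record: bytes, passphrase: str) -> None:
    salt = os.urandom(16)
    token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(record)
    with open(path, "wb") as f:
        f.write(salt + token)  # the salt is not secret; it rides along

def read_from_stick(path: str, passphrase: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return Fernet(key_from_passphrase(passphrase, salt)).decrypt(token)
```

The trade-off is the usual one for patient-controlled storage: a lost stick or forgotten passphrase means lost data, and an unconscious patient cannot present either.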
I just have this feeling that there are going to be patients, and maybe users, who refuse to allow some of their sensitive medical information to enter the EHR system. Examples have been abundant on this blog. Providers also might not want certain discussions about their patients to enter the digital world. An easy one: do you want the blood type of newborns to enter the system? Why is this fraught with danger?
All that needs to happen is for some good writer, like Rachel Carson, to come out with a dynamite exposé about leaks and hackers and the porosity of health data. This vulnerability to dangerous leaks is bound to be made public. The entire project could easily be foiled by a few books or movies.
Thus, careful planning has to cover this possible future. What are we going to do?
To manage this we have to clear up the logic of EHRs a bit. Who is to benefit primarily? What is this effort really for? Can we have medical records that ONLY the patient can view? If the raison d’être is not just for patient beneficence, don’t we need patient permission for every tangential use? If all internet data is forever going to be leaky and insecure, what is our fallback? Do we design some LAN just for healthcare that does not rely on TCP/IP? Would this also degrade to insecurity over time? Interoperability surely eases hacking. Do we really want this?
These questions are political as well as technological. Do we ask patients and the public to weigh in pretty soon?
If you were betting in a futures market on the 20 year survivability of the full EHR, what odds would you give?