As if healthcare executives needed more to worry about, the recent hacker attack on Sony Pictures should send yet another reminder that data security can’t be ignored. On an international stage, Sony management learned the hard way that their e-mails, text messages, and private conversations were vulnerable to attack. Hackers accessed everything from the company’s sensitive financial information to its confidential employee communications. In the immediate aftermath of the attack, Sony is facing government inquiries, class action lawsuits from employees and business partners, and a significantly tarnished reputation.
Many executives in our industry might think that healthcare facilities are better prepared to withstand hacker attacks, with numerous government agencies regulating how we store and transmit protected health information (PHI) and personally identifiable information (PII). In reality, a significant number of healthcare facilities have already suffered damaging hacker attacks over the last few years, and hacker attacks are expected to remain a threat for the foreseeable future. The question healthcare executives must ask is: “What are we going to do about it?”
Secure Communication Is Crucial
For Sony executives, one of the most embarrassing aspects of the hacking scandal was the exposure of conversational e-mails between executives, which were filled with casual, off-hand remarks and confidential information. Not long ago, those conversations would likely have occurred in person or on a phone call, ephemeral forms of communication with no lasting record. Today, the vast majority of private communication between healthcare professionals is handled via e-mail or text, a major shift from the way business was conducted just a few years ago. Instead of a phone conversation or a face-to-face meeting, private, confidential conversations are now written and fixed in a permanent medium of exchange, and then stored in the cloud forever.
The shift to e-mail and text is a natural byproduct of modern times. Scheduling conference calls or face-to-face meetings is often too great a challenge in today’s workplace. E-mail and text save us tremendous amounts of time. However, most executives don’t recognize how much confidential information they put into e-mails and texts. Even for savvy users, efforts to be circumspect often fail – it’s simply too difficult to continuously monitor our own activity. Even messages that have no negative legal implications (e.g., those that do not contain PHI or PII) can be incredibly damaging if released to the public. The Sony scandal vividly revealed how easily casual e-mails can spark a public relations nightmare and an avalanche of litigation.
How to Mitigate Our Risk
With the candid acknowledgement that real-time messaging is too convenient to be eliminated, healthcare executives must take risk mitigation efforts seriously. But simply focusing on how to build a better defense system against hackers is a losing battle. There will always be a hacker who manages to beat the system and access this vast new storehouse of potentially damaging e-mails and texts in the cloud.
The answer is simple – we must limit the amount, duration, and types of information that are stored in the cloud forever. By eliminating copious amounts of confidential messaging, there is less for hackers to steal. And the reality is that the vast majority of our confidential messages have no reason to live forever. Most phone calls are not recorded and stored forever, and neither should 99 percent of our daily communications be.
Fortunately, technology already exists to make this happen, providing secure messaging that duplicates the benefits of email and text, with the added benefit of encryption and ephemerality. Dozens of healthcare facilities use the technology, and its effectiveness has already been proven within the high standards set by the industry. Yet there are still holdouts, and some facilities are still unprotected.
A Legal Duty to Take Action
Let’s be honest: while tremendous amounts of time and money are put into cutting-edge medical devices, research, and clinical facilities, the healthcare industry has taken an extremely conservative approach to updating its internal communication platforms. Outdated computer and software systems often live on in healthcare facilities long after they have been replaced in other industries. However, as the fallout from Sony and numerous other hacker attacks illustrates, healthcare executives no longer have the option of slowly adopting newer technologies.
In the legal arena, forces are moving quickly to punish executives who don’t take action. Class action lawsuits and Federal Trade Commission (FTC) complaints are quickly establishing that corporate executives have a legal duty to meet “reasonable” industry standards in protecting their data. And the duty is owed to the interests of a wide range of “stakeholders,” including shareholders, employees, patients, business partners, regulatory agencies, and even the general public.
So what is a “reasonable” standard with respect to data security? It’s a flexible standard depending on the size and nature of your business. But court cases are making it clear that data disposal plays an important role in reasonable data security. The general rule, articulated in court rulings and by the FTC, is that data should be stored only for so long as it serves a legitimate business need. Companies must create specific policies regarding the length of time their data will be stored in conjunction with those legitimate business needs.
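The retention rule described above – keep data only as long as it serves a legitimate business need – can be made concrete in software. The sketch below is purely illustrative, not any vendor’s actual implementation; the category names and retention periods are hypothetical placeholders that would in practice be set by counsel and the facility’s record-retention policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods, in days, per message category.
# Real values must come from legal counsel and applicable regulations;
# some PHI-related documentation has regulatory minimums (e.g., HIPAA
# requires certain compliance documentation be kept for six years).
RETENTION_DAYS = {
    "casual": 7,        # everyday chatter: no business need to keep
    "operational": 90,  # scheduling and logistics
    "clinical": 2190,   # roughly six years, pending regulatory review
}

def purge_expired(messages, now=None):
    """Return only the messages still inside their category's retention window.

    Each message is a dict with a "category" key (mapping into
    RETENTION_DAYS) and a timezone-aware "sent_at" datetime.
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for msg in messages:
        limit = timedelta(days=RETENTION_DAYS[msg["category"]])
        if now - msg["sent_at"] <= limit:
            kept.append(msg)
    return kept
```

Run on a nightly schedule, a purge like this implements the policy automatically, so “data disposal” does not depend on individual users remembering to delete anything.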
After the Sony attack, it’s hard to imagine a healthcare executive successfully arguing in 2015 that he or she was unaware of the risk of storing every message in the cloud forever. There is clearly no business reason to do so, and ample evidence of the dangers. With ephemeral messaging solutions gaining widespread adoption, slow adopters risk claims that they failed to act in accordance with “reasonable” industry standards. Why take this risk? The reality is that most healthcare facilities don’t think it can happen to them … until it does.
About the author
Dean Steinbeck is General Counsel at TigerText.
Dean, I agree that secure communication is critical.
First question – with Anthem as an example, can we really trust the encryption and “security” of communication?
Which leads into: you mention that we should not keep communications forever, which seems like a good idea. How long should we keep them for? Because that would greatly decrease the risk.
Thanks for your comments.
I think you raise very valid concerns about SMS. However, with respect to TigerText’s secure messaging product (and I mention this one because it’s the one I know best), there is a feature that provides delivery notification. So you know if your message was sent, received, and even read. Plus, if it’s not read within a certain period of time, the message can be escalated to a recipient’s email address. So while no technology is perfect (e.g., the doctor might lose his phone), I do believe TigerText’s secure messaging product is superior to SMS in many, many ways.
Also, as you correctly state, doctors should not be using SMS to message PHI. They must use a HIPAA compliant communications platform (like TigerText) or risk very serious legal consequences. It continually amazes me how frequently PHI breaches occur even though there are excellent and inexpensive alternatives to SMS that comply with HIPAA.
I have to wonder what would happen if somebody went after this kind of detailed behind the scenes information at a pharma or health plan and released to the public or to the media. We might learn some interesting things.
Not totally sold on the use of text messages in healthcare, I’m going to need to be sweet talked a bit on this one. I’ve heard way too many stories about delayed messages and network issues to feel comfortable about using SMS as a front line weapon.
There’s a story going around about a young doctor at Yale who nearly lost a patient because he and his colleagues were texting updates and missed a crucial “we’re going to the OR with him, what do I do now?” text.
Now, it’s certainly true that may be the equivalent of the guys who were using Google Docs to share patient notes in clear contravention of a bunch of privacy laws – the kind of experimentation we want to be encouraging, not stamping out – but I think it’s an interesting story and something we need to think about.
I also know that a lot of docs are using the SMS camera feature on their phones to get second opinions from colleagues every day – a practice that opens some fairly serious legal questions.
“we must limit the amount, duration, and types of information that are stored in the cloud forever.”
Limit the duration of data that are stored ‘forever’?
Does no one ever proofread anything anymore? A lawyer, no less.