The Internet is abuzz criticizing Anthem for not encrypting its patient records. Anthem has been hacked, for those not paying attention.
Anthem was right, and the Internet is wrong. Or at least, Anthem should be “presumed innocent” on the issue. More importantly, by creating buzz around this issue, reporters are missing the real story: that multinational hacking forces are targeting large healthcare institutions.
Most lay people, clinicians, and, apparently, reporters simply do not understand when encryption is helpful. They presume that encrypted records are always more secure than unencrypted records, which is simplistic and untrue.
Encryption is a mechanism that ensures that data is useless without a key, much in the same way that your car is made useless without a car key. Given this analogy, what has apparently happened to Anthem is the security equivalent to a car-jacking.
When someone uses a gun to threaten a person into handing over both the car and the car keys, no one says, "Well, that car manufacturer needs to invest in more secure keys."
In general, systems that rely on keys to protect assets are useless once the bad guy gets hold of the keys. Apparently, whoever hacked Anthem was able to crack the system open enough to gain "programmer access". Without knowing precisely what that means, it is fair to assume that even in a system implementing "encryption-at-rest", the programmers have the keys. Typically it is the programmer who hands out the keys.
Most of the time, hackers seek to "go around" encryption rather than break it. Suggesting that we use more encryption, or that we use it differently, is only useful when going around it is not simple. In this case, going around the encryption is exactly what happened.
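The point above can be made concrete with a short sketch. This is a toy, not real cryptography (the cipher, field names, and dummy record below are illustrative assumptions): it shows only that "encryption-at-rest" protects data exactly as long as the key stays secret, and that an attacker who obtains the key decrypts the data the same way the application does.

```python
import secrets

# Toy repeating-key XOR "cipher" (NOT real cryptography) used purely to
# illustrate the role of the key: applying it twice with the same key
# restores the original data.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"patient: Jane Doe (dummy record)"
key = secrets.token_bytes(16)          # the key the application server holds

ciphertext = xor_cipher(record, key)   # what sits "encrypted at rest" on disk
assert ciphertext != record            # unreadable without the key

# An attacker with "programmer access" has the key too, so the encryption
# buys nothing: they decrypt exactly as the legitimate application does.
stolen = xor_cipher(ciphertext, key)
assert stolen == record
```

Swapping in a real cipher changes nothing about the argument: whoever holds the key reads the data, so a breach that captures the keys goes around the encryption entirely.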
We are residents and software developers. Before starting residency, we worked as developers in the startup community, where we witnessed tremendous enthusiasm for solving problems and engaging people in their health. The number of startups trying to disrupt healthcare using data and technology has grown dramatically, and every day established healthcare companies appear eager to feed this frenzy through app and design competitions.
When we started residency, the restricted access to data and the reliance on decades-old technology were glaringly apparent. Culturally, hospitals are an environment of budgets and deadlines better suited to maintaining the status quo than to promoting the creative process. Hospital IT departments harbor a deep cover-your-rear-end mentality and are incentivized to do two things: first, keep systems running; second, prevent security breaches. Perhaps rightly so, since privacy and security need to be prioritized. But this environment has kept hospitals from facing the inevitable challenges of effectively using their own data and investing in new tools, including ones that could advance the triple aim of better care, better patient experience, and lower cost.
In the future, as hospitals and health systems become more accountable for patients' long-term outcomes, we are optimistic that they will innovate as much out of cost-cutting necessity as out of a desire to provide a better product to patients. We have little evidence that established players can power innovation solely on their own engines, and we expect many of the solutions will come from problem-solvers outside medicine. Doctors and patients will choose from an arsenal of apps to interact with the health information in EMRs. These healthcare apps fall into three major categories: education, workflow, and decision support.
In a recent column, security expert Bruce Schneier proposed breaking up the NSA – handing its offensive capabilities work to US Cyber Command and its law enforcement work to the FBI, and terminating its programme of attacking internet security.
In place of this, Schneier proposed that “instead of working to deliberately weaken security for everyone, the NSA should work to improve security for everyone.” This is a profoundly good idea for reasons that may not be obvious at first blush.
People who worry about security and freedom on the internet have long struggled with the problem of communicating the urgent stakes to the wider public. We speak in jargon that’s a jumble of mixed metaphors – viruses, malware, trojans, zero days, exploits, vulnerabilities, RATs – that are the striated fossil remains of successive efforts to come to grips with the issue.
When we do manage to make people alarmed about the stakes, we have very little comfort to offer them, because internet security isn't something individuals can solve.
I remember well the day this all hit home for me. It was nearly exactly a year ago, and I was out on tour with my novel Homeland, which tells the story of a group of young people who come into possession of a large trove of government leaks that detail a series of illegal programmes through which supposedly democratic governments spy on people by compromising their computers.
I kicked the tour off at the gorgeous, daring Seattle Public Library main branch, in a hi-tech auditorium to an audience of 21st-century dwellers in one of the technology revolution’s hotspots, home of Microsoft and Starbucks (an unsung technology story – the coffee chain is basically an IT shop that uses technology to manage and deploy coffee around the world).
I explained the book’s premise, and then talked about how this stuff works in the real world. I laid out a parade of awfuls, including a demonstrated attack that hijacked implanted defibrillators from 10 metres’ distance and caused them to compromise other defibrillators that came into range, implanting an instruction to deliver lethal shocks at a certain time in the future.