The Internet is abuzz with criticism of Anthem for not encrypting its patient records. Anthem has been hacked, for those not paying attention.
Anthem was right, and the Internet is wrong. Or at least, Anthem should be “presumed innocent” on the issue. More importantly, by creating buzz around this issue, reporters are missing the real story: that multinational hacking forces are targeting large healthcare institutions.
Most lay people, clinicians, and, apparently, reporters simply do not understand when encryption is helpful. They presume that encrypted records are always more secure than unencrypted records, which is simplistic and untrue.
Encryption is a mechanism that ensures that data is useless without a key, much in the same way that your car is made useless without a car key. Given this analogy, what has apparently happened to Anthem is the security equivalent to a car-jacking.
When someone uses a gun to threaten a person into handing over both the car and the car keys that make it useful, no one says “well, that car manufacturer needs to invest in more secure keys”.
In general, systems that rely on keys to protect assets are useless once the bad guy gets hold of the keys. Apparently, whoever hacked Anthem was able to crack the system open enough to gain “programmer access”. Without knowing precisely what that means, it is fair to assume that even in a system implementing “encryption-at-rest”, the programmers have the keys. Typically, it is the programmer who hands out the keys.
Most of the time, hackers seek to “go around” encryption. Suggesting that we use more encryption, or that we use it differently, is only useful when “going around it” is not simple. In this case, the hackers simply went around it.
The average lay person, as well as the average clinician, does not bother to think carefully about security in general. Making an investment in the wrong set of defenses serves to decrease, not increase, the overall security of the system. This reasoning is at the heart of the case against the TSA, which serves to make us “feel” more secure without actually increasing our security. The phrase for this is “Security Theater”.
You see, encryption at rest, unlike encryption in transit, comes with significant risks. The first risk is that keys might be lost. Unlike car keys, once encryption keys are lost there is no way to “make new ones”. Of course you could back up your keys, securely, off-site, but that means extra cost and extra steps. Second, if encrypted data becomes corrupted, it is much more difficult to recover than unencrypted data.
In short, there are cases where encryption-at-rest can be dangerous and there are only a few cases where it can be helpful.
For clinicians, it is easy to draw a parallel: the risks associated with unneeded testing. A lay person assumes that if there is any chance that the “CAT scan might catch it”, then they should have a CAT scan. The clinician understands that this test comes with a cost (i.e. increased long-term cancer risk) and is not as “free” as the patient feels it is. The public only becomes aware of this when a testing scandal occurs, like the famous PSA test, where the harm was massively larger than the good the test provided.
The “Human Body” and “Information Technology” are both complex systems, and complex systems in general do not respond well at all to oversimplified interventions.
Moving back to Anthem.
Anthem has a responsibility, under HIPAA, to ensure that records remain accessible. That is much easier to do with unencrypted data. The fact that this data was not encrypted means very little: there is little that would have stopped a hacker with the level of access that these hackers achieved. Encryption probably would not have helped.
By focusing on the encryption-at-rest issue, the mainstream press is missing the main story here. If indeed Anthem was targeted by sophisticated international hackers, then there is little that could have been done to stop them. In fact, assuming international actors were involved, this is not so much a failure for Anthem as a failure of the NSA, which is the government agency tasked with both protecting US resources and attacking other nations’ resources.
As much as the NSA has been criticized for surveilling Americans, it is their failure to protect against foreign hackers that should be making frequent news. Currently, the NSA continues to employ a strategy where they do not give US companies all of the information that those companies could use to protect themselves, but instead reserve some information to ensure that they can break into foreign computer systems. This is a point that Snowden, and other critics like Bruce Schneier, continue to hammer: the NSA makes it easy to spy, for themselves and for others too.
It is fine to be outraged at Anthem, and I am sure they could have done more, but I can assure you that no insurance company or hospital in the United States is prepared to defend against nation-state-level attacks on our infrastructure. In fact, Anthem is to be applauded for detecting and cutting off the attack that it did find. Hackers are much like roaches: if you can spot one, there are likely dozens more successfully hiding in the walls.
Just because encryption may not have prevented this attack does not mean Anthem was right not to encrypt. Anthem was wrong not to, regardless of whether it even played a role in all this.
Mm. Sorry. There is no excuse for not encrypting SSNs at rest. NONE. Also, they have admitted to subpar logging techniques, no multi-factor authentication, and subpar intrusion monitoring. Stop giving them a pass. They skimped on security and got hit. They should have to pay for it. Unfortunately, the penalty is only 2 years of credit monitoring. In my opinion, they should be hit with the requirement to provide lifetime credit monitoring for all involved. Raise the bar of punishment and companies will start acting professionally.
By the way, your car-jacking analogy is weak. Data encryption at rest can happen reliably. Split keys and spread them around. Data/multi-key sharding. There are many possibilities. Think your car will get jacked if there is no battery installed, no tires, and no fuses? You just raised the barrier.
Anthem’s admission that it did not encrypt this database has, oddly, occupied everyone’s attention even after Anthem admitted to other things it did not do–which are far more relevant to whether the company could have prevented this attack. As I wrote about on Feb. 14, http://www.ibj.com/articles/51789, Anthem told its employer clients that it did not use multi-factor authentication throughout its IT systems (even though every ATM in America uses that concept and even though Medicare requires its use around all sensitive data) and it did not employ user behavior analytics. That second concept would have given Anthem a much better chance of noticing that someone was transferring an estimated 35 gigabytes of data out of its system over the course of seven weeks. One cybersecurity expert described the fact that Anthem did not detect that much data leaking out of its system over such a long period of time as “outrageous” and “shocking.”

Encryption most likely would not have prevented this data breach, although it might have slowed the hackers down a bit, giving Anthem a better chance at detecting their activity. Anthem’s lack of encryption seems most significant as a signal that it was not doing everything it could to protect consumers’ data. Encryption is a standard feature on the most common database software, and encryption at rest adds about an 8 percent delay in processing time. So the price of encrypting seems fairly small given the size of the risk. Then again, hindsight is always 20-20.
“Many people have been surprised to hear that this sensitive data was not encrypted and that the federal mandate for securing health-related data, HIPAA, does not require it to be. In fact, HIPAA only “strongly encourages” encryption. Organizations that choose not to use encryption are supposed to document the reasons why not and implement an “equivalent alternative measure if reasonable and appropriate.” The vagueness of this requirement is the crux of class action and other lawsuits being filed against Anthem.
But even if Anthem had used encryption, the data could still have been compromised. Encryption is just one part of the arsenal that organizations need to deploy to secure sensitive data. Encryption is great for securing data in transit and at rest, but if the credentials and keys are compromised it does little to protect the data.
The bigger issue in many breaches is that organizations haven’t properly implemented data access security controls. They need to have safeguards in place in case attackers can bypass perimeter defenses and compromise administrator level credentials.”
” … but if the credentials and keys are compromised it does little to protect the data.”
This is a next-to-impossible scenario when the data, encryption tool, key, and secret are stored in separate locations… a common design, but again something few people do, so few people know about. Not storing them separately would be like having a pile of money with a lock sitting on top of it, and the key in the lock. Your scenario of all items compromised (as I pointed out in my comment) would require an inside job.
If Anthem is not cypher-texting my demographic data into the database tables then yes… I blame them and everyone else who is not “taking the trouble” to do so. It’s not that hard, it’s just not a priority for most healthcare companies because it’s “extra work”. Not ALL of the data needs to be encrypted… just the identifying attributes. This renders the rest of the health data de-identified.

Encrypted discs, connections, and database files are useless when someone gains access to database login credentials. It is very easy to write a Trojan horse that looks for databases, tables, and column names like… Insurance, Patient, SSN, etc. Then it is easy enough to wrap that data up and send it out to a web server somewhere. But it is extremely difficult to decrypt this data if it’s encrypted with AES-256 keys stored somewhere else in the network, and salted with additional values compiled in an application somewhere else on the network. To brute-force decrypt this stolen data would be very expensive in compute resources from the electrical bill alone. The thief would have to decide if the data was worth the cost to decrypt. In some cases it would be. It’s all about lowering risk.

Cyphering the demographic data with a key stored elsewhere, executed by an application compiled elsewhere, would be nearly impossible to hack without absolute help on the inside. That would then not be a “sophisticated” hack; it would be an inside job.

Under HIPAA, demographic data should be shown only when absolutely necessary to a caregiver who has an actual use for the information. This is usually handled by the front-end software and rarely shown all together. Sometimes the argument for not encrypting demographic information is related to the need to easily search the data. I have handled this by decrypting the data in memory and then searching, and with modern servers this can perform well. I also perform patient matching and de-duping using similar in-memory techniques.
I am not assuming to know the root cause of the breach at Anthem. What I detail is still a major potential issue at most healthcare organizations.
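The design this comment describes — identifying columns ciphered with keys held elsewhere on the network, a per-record salt, and searching done by transient in-memory decryption — can be sketched roughly as follows. To stay self-contained, the sketch substitutes a toy SHA-256 counter-mode stream cipher for AES-256, and a `KeyService` stub for a real key store on a separate host; both are illustrative assumptions, not the commenter's actual program.

```python
import hashlib
import os

def stream_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher standing in for AES-256.
    XOR is its own inverse, so one function both encrypts and decrypts."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + nonce + (i // 32).to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

class KeyService:
    """Stub for a key store that would live on a different host than the
    database, so a dumped table is useless without a second compromise."""
    def __init__(self):
        self._key = os.urandom(32)
    def fetch_key(self) -> bytes:
        return self._key

def encrypt_field(ks: KeyService, plaintext: str):
    nonce = os.urandom(16)  # per-record salt/IV, stored beside the ciphertext
    return nonce, stream_cipher(ks.fetch_key(), nonce, plaintext.encode())

def decrypt_field(ks: KeyService, nonce: bytes, ciphertext: bytes) -> str:
    return stream_cipher(ks.fetch_key(), nonce, ciphertext).decode()

def search_ssn(ks: KeyService, rows, target: str):
    """Searchability via transient in-memory decryption, as described above."""
    return [i for i, (nonce, ct) in enumerate(rows)
            if decrypt_field(ks, nonce, ct) == target]
```

A Trojan that exfiltrates the table gets only nonces and ciphertext; recovering the SSNs would additionally require compromising the host running `KeyService`, which is the “inside job” threshold the comment is pointing at.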
In the end – encryption had little to do with the breach. The roots of the Anthem breach go as far back as possibly April of 2014 – which may link it to the CHS breach – and the same vulnerability – the Heartbleed bug. This isn’t Anthem’s first bite at the HHS penalty apple either. They paid $1.7M (under Wellpoint banner) to HHS in 2013 – for 612,000 records that were accessible over the internet – between 2009 and 2010.
Are The Data Breaches At Anthem And CHS Linked? http://onforb.es/1CeAiKD via @forbes
My wife and I are not even Anthem customers, but her employer (where we get our health insurance) has notified us that if we’ve used providers who also take Anthem insurance, we may have been picked up in the hack. That is infuriating. Goes to some of Adrian Gropper’s points above (minimization, and persistence). If I’m not an Anthem subscriber, what right do they have to my personal information from a provider I’ve seen who also contracts with them? I would think that Anthem’s competitors will not be amused to know that Anthem is mining the data of THEIR customers as well.
Thanks for commenting Bobby… it’s always nice to hear from you.
If I understand what you are saying correctly here, you might consider publishing a blurred version of the evidence letter on your blog. This is the first I had heard that non-Anthem members might be included. If that is true it is a very, very big deal.
It IS a big deal. That’s what employees at Cheryl’s company were told in the wake of the breach notice — that if our treating provider(s) also took Anthem insurance, they (Anthem) may also have our data as well, notwithstanding that they are not HIPAA CE’s with respect to us, and have no legal right to our PHI / PII.
I expect that there will be a lot of talk on the matter, and I also expect that Anthem will have resisted putting anything like this in writing.
If you do have something in writing… well that would make for good fun wouldn’t it?
>> RE “They presume that encrypted records are always more secure than encrypted records, which is simplistic and untrue.”
This appears to be a typo — shouldn’t it read: “They presume that encrypted records are always more secure than un-encrypted records..”?
I think my editors might have fixed this for us… thank you for pointing it out…
NP. And yes, it was fixed pretty quickly — maybe you can just delete my (now obsolete) comment.
No I think not. Since the error was copied to other blogs. You deserve credit for finding it. My editors deserve credit for fixing it, and I deserve the blame for the original mistake.
Nicely written post. There’s clearly a balance of responsibility here, and I expect we will learn that Anthem could have done more. It is time that major health care institutions build those higher walls and deeper moats–and the cost in dollars and inefficiency will be a cost of doing business.
Also, nice to see this linked on Morning Consult.
Wow, this is an interesting perspective to say the least. Let me give a technologist’s response.
Encryption is a valuable technology if you understand what it is and what it is not. Encryption at rest is certainly a best practice for hospitals.
At the strict end of the scale, encrypting at rest can mean a different key for every file or even block stored on media. This is generally considered impracticable, but it can be done. On the other end of the scale, whole disk encryption uses the premise that a single key (ideally entered at boot) accesses the entire data set.
Whole disk encryption is very common. It is most useful when you need to protect media that may be stolen, e.g. when a thief enters your data center or when a laptop is removed from the boot of your car.
Whole disk is of basically no value against online penetration attacks, vis-à-vis the attack against Anthem. Once the attacker has administrator access, all files are in the clear.
Did Anthem provide whole disk encryption? That doesn’t appear to be clear. Whole disk encryption would certainly meet HIPAA requirements and show intent to protect. It wouldn’t help much in this specific situation.
So, what about single file or even directory encryption on top of whole disk encryption? It can be done. However, unless the keys are managed in such a way that the systems administrator cannot obtain them at will, it is of no value. In most hospitals the infrastructure and budget for such a system does not exist. Practically speaking you are more likely to see this kind of approach in a defense environment.
So what’s right? Well, whole disk encryption is base protection that should always be there. It is valuable and necessary to prevent against physical theft. More complex forms of encryption such as file and directory level encryption must be combined with software and systems administration practices that support their use, and frankly, there is a huge skills gap in this area.
Unfortunately, Anthem was wrong and I think HHS will inform them of that in no uncertain terms.
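The whole-disk point in the comment above can be illustrated with a toy model. This is not real disk-encryption code — it just models the property that matters: a transparently encrypted volume protects against physical theft of the media, but once the running system has unlocked the volume, any reader with valid credentials sees plaintext.

```python
from typing import Optional

class WholeDiskVolume:
    """Toy model of transparent (whole-disk) encryption: once the volume
    is unlocked at boot, every authenticated reader sees plaintext."""
    def __init__(self):
        self._files = {}        # pretend these bytes are encrypted on media
        self.unlocked = False   # True once the boot-time key has been entered
    def write(self, name: str, data: bytes) -> None:
        self._files[name] = data
    def read(self, name: str) -> Optional[bytes]:
        # the OS transparently decrypts for ANY reader on the live system
        return self._files[name] if self.unlocked else None

vol = WholeDiskVolume()
vol.write("patients.db", b"SSN=123-45-6789")

# A thief who steals the powered-off disk gets nothing:
assert vol.read("patients.db") is None

# But an online attacker using stolen admin credentials reads it fine,
# because a production system is, by definition, already unlocked:
vol.unlocked = True
assert vol.read("patients.db") == b"SSN=123-45-6789"
```

This is why whole-disk encryption is base protection against physical theft yet contributes nothing against an attacker who has already authenticated, which appears to be what happened at Anthem.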
I still maintain that encrypting at REST is only sometimes “best practice”. It is certainly not common practice, and it certainly would not have been helpful here.
Insurance companies need to run reports covering large numbers of patients all of the time. The person whose credentials were hacked appears to have had that type of role… So even the subtle partial-encryption methods you are describing here would not have helped much.
It will be very interesting to see if HHS says anything about the Anthem situation, as that will be telling about what they think “best practices” are. That is a great point!
“This decision will depend on a variety of factors, such as, among others, the entity’s risk analysis, risk mitigation strategy, what security measures are already in place, and the cost of implementation. The decisions that a covered entity makes regarding addressable specifications must be documented in writing. The written documentation should include the factors considered as well as the results of the risk assessment on which the decision was based.”
Is it clear? No. But if you choose not to encrypt at rest, you had better write down WHY and get good counsel.
Can you please point us to the “root cause” documentation of this breach at Anthem? Something that shows exactly how this data was stolen, including versions of operating systems, databases, password policies, etc. Without a clear understanding it is really difficult to draw conclusions about what would and would not have helped SPECIFICALLY in this case. Generally, encrypting “properly” both in transit and at rest is a best practice as defined by HIPAA regulation and any cypher professional. Unfortunately, the guidance provided by HIPAA law is not specific enough for “newbies” and is also a bit outdated. My experience in healthcare sees a minimal effort exerted by Health IT companies. I wish this was not true.
There is one word for this: fragile.
EMR will turn out to be one of civilization’s most fragile innovations.
These data were filched out of an EMR? Ya learn something every day.
No, but EMR will have its own problems, which this episode highlights.
Integrated systems tend to be fragile. They offer several advantages though.
Anthem had a duty to protect the personally identifiable information (PII) of its customers. Encryption at rest is a minimum “best practice” for Social Security numbers and dates of birth. Other techniques, such as tokenization, could also have been employed to protect PII. Unfortunately, we will never know if these techniques would have protected our PII. If Anthem had used them I would be more willing to cut them some slack.
By the way, there is a reason that cars come with keys. While keys do not always stop the theft of an automobile, how much worse would car theft be if we lived in a world in which car keys never existed? Do you leave your house unlocked? After all, by your logic the thieves are going to get in anyway.
You are extending a metaphor way past the point I would defend it.
My whole point is that encryption at Rest is only sometimes the “best practice”.
Merely claiming, briefly, that something is the best practice doesn’t make it so.
I call BS. Best practices are not subjective. They either are or are not a best practice. Encrypting SSN at rest is most definitely a best practice that always makes sense.
Lawsuits have begun….
Nice mention of this post in Medcity News 🙂
I am withholding judgement until more information is available on the specifics of Anthem, but I would say that encryption is generally a good thing even if it would not have stopped the hackers in this specific case. I maintain it should be a requirement for healthcare data to be encrypted at rest (however, see Ars Technica: http://arstechnica.com/security/2015/02/why-even-strong-crypto-wouldnt-protect-ssns-exposed-in-anthem-breach/).
I also strongly agree with Michael Turpin – a national patient identifier can at least stem the bleeding in these cases. SSN should not be used in healthcare unless they are disability claims, and even then may not be completely necessary…
Change patient identifiers to exclude social security numbers and you solve the insurer issue. I’m pretty certain the plans don’t need SSNs to delineate patients. As for the failing of the NSA, buckle up America, this is just the beginning. The digital age is fraught with unintended consequences. I’m not sure that we can lay the crime at the feet of anyone. The key is risk mitigation. I wonder if anyone shorted Anthem prior to the breach. Now that would be interesting….
See my reply to Peggy…
I appreciate the notion that doctors and health care providers need access to my medical records. At least according to what I have read on the Anthem sites, those records have not been compromised. However, it is the other information, the social security number and birthdate, address, et al, that is my greatest concern–and apparently not the concern of Anthem.
At a local hospital system, I use a patient identifier, verified secondarily by my name and address. To my knowledge, that is safer than described by Anthem, whose use of the SSN and address data puts me and my finances at risk.
We do need to understand the totality of the issues that surround this, and as unexciting as these kinds of issues are to me in general, the massive nature of this hack is a wake-up call to me… and to eighty million others.
PS I have yet to hear from my Anthem provider.
I am fairly certain that SSNs may actually be required for health insurance now, since health insurance now has tax implications.
Would like to hear what Harlow thinks, since I get conflicting information from my Google search.
My Massachusetts Form 1099-HC, which I’ve been getting for the past several years from my health plan, does not include my SSN (though it does include name, address, DOB, and subscriber number from my health plan). It allows for all the matching needed to confirm to the People’s Republic of Massachusetts that I am adequately insured, and thus do not need to be fined, or taken in chains to the gulag. Federal draft forms I’ve seen have fields for SSNs. Why can’t the IRS do whatever the Mass. Dept. of Revenue is doing and avoid use of SSNs in this context? (I mean, besides Congress barring HHS from using unique healthcare identifiers, which would, of course, help in the data minimization department, since then maybe we could get away without using DOB and address on the tax form.) Even if we can’t get away from using SSNs in some way to do the matching required for tax credits/penalties, it seems to me that insurers/exchanges could issue the forms using name and some partial SSN data (e.g., name plus last 6 digits of SSN), plus bare minimum coverage information needed to make the IRS calculations, in a separate database firewalled from any other member data (DOB, address, claims data, etc.).
Thanks for such a great post and exchange! Adrian, Fred, David and I all agree that the Anthem breach is a wake-up call for our accidental national identity management system, particularly now that we have systems that work better. Here’s the agonized post I issued the day before Fred’s (during a weekend of nonstop counseling for Anthem breach victims): https://www.linkedin.com/pulse/why-anthem-worst-breach-how-wed-protect-us-all-we-cared-jon-neiditz (or datalaw.net if you don’t like LinkedIn). And in response, the leading privacy scholar of our time, Dan Solove, responded by reminding me of his excellent post arguing that the FTC has the authority to stop the use of SSNs as identifiers: https://www.teachprivacy.com/ftc-can-readily-halt-identity-theft/. (Thanks also to David in his blog for invoking the decades of pain over the Patient Identifier).
About a decade ago a person on the phone from a Blue Cross plan told me she needed to authenticate me with my Member ID. After I searched and couldn’t find it, she whispered helpfully if conspiratorially (because there were state laws then as there are now) “It’s your Social Security number, Sir.” Yes, Adrian, it will take lots of unraveling, but let’s make this a REALLY teachable moment! Thanks again.
Fred — I agree and I disagree. Encryption is clearly not the be-all and end-all. Exploits such as this rely on human factors and vulnerabilities in systems and tools (exploiting a vulnerability in an Adobe product was apparently one of the issues in this particular case). Data minimization (which could be improved if Congress would stop banning development of a universal healthcare identifier) and better control of human factors are probably the most productive lines of defense available. However, encryption is also required by many state laws, even though it is not required by HIPAA. It is unclear to me why these laws were not seen as applicable in the situation at hand.
An excellent point. I wonder how many different states the affected insureds live in… and I wonder if different notifications might have to be sent out as a result.
That would be an interesting source of new data about what happened here… which gets us closer to the lessons learned phase.
Fred’s post is misleading in the extreme. The Business of Medicine needs to be held accountable for their practices. Laying the Anthem breach at the feet of the NSA is equivalent to wishing for a police state. A public interest perspective to what’s going on comes to a very different conclusion. There are at least three things Anthem can be held accountable for: encryption, minimization, and persistence.
First, data encryption can be done securely for a bit more money. The keys are kept separate from the data and fetched as needed. In this case, an excess of 80 million fetches of the keys would have been noticed earlier, wouldn’t you think? Also, I would like an email from Anthem each time the keys to my data are fetched. How hard would that be? Ahh, but what about the expense? Our US private insurance system has 3X the administrative cost of single-payer. Maybe we need to increase that to 4X so they can afford encryption and accounting for disclosures? Or maybe we should save all that money and let the NSA handle the whole thing.
Second, how much data does Anthem need about me? Do they really need my social security number? Why can’t Anthem give me an Anthem ID to use? If a service provider wants to get paid, they need to supply my Anthem ID number, period. Much of the Business of Medicine is still paving the cow path of our paper-based history. I’m old enough to remember the little books of tiny credit card numbers that merchants would be required to check prior to accepting payment. That was before everything got connected. Today, I can buy a cup of coffee with a debit card and the bank responds in a second. Why can’t healthcare payments that average 100X that amount be made without reference to my SSN and other personal info?
Third, how long does Anthem need to keep a copy of my data? An hour, a day, three months, three years, forever? The answer obviously depends, but in an age when storage cost is effectively zero relative to my $10,000 health insurance bill, what keeps Anthem and every other actor in the Business of Medicine from storing all of my private data forever?
Privacy comes at a cost. When VIPs go to the hospital or the pharmacy their information isn’t treated the same way as mine. The Anthem breach is a teachable moment for how we’re paying for the most intimate and important information we have. This is not a time to be letting the Business of Medicine and our regulators off the hook.
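The “accounting for disclosures” idea in the first point above — noticing an anomalous volume of key fetches — is simple to sketch. The `KeyServer` below is hypothetical (its account names and threshold are invented for illustration): it hands out the data key as usual, but counts fetches per account and raises an alert when an account crosses a volume threshold.

```python
from collections import Counter

class KeyServer:
    """Hypothetical key store that audits fetches and flags anomalous volume."""
    def __init__(self, alert_threshold: int):
        self.alert_threshold = alert_threshold
        self.fetch_counts = Counter()
        self.alerts = []

    def fetch_key(self, account: str) -> bytes:
        self.fetch_counts[account] += 1
        if self.fetch_counts[account] == self.alert_threshold:
            self.alerts.append(f"{account} exceeded {self.alert_threshold} key fetches")
        return b"\x00" * 32  # placeholder for the real data key

ks = KeyServer(alert_threshold=1000)
for _ in range(80_000):  # an exfiltration-sized run of per-record fetches
    ks.fetch_key("dba_account")
assert ks.alerts == ["dba_account exceeded 1000 key fetches"]
```

Whether such an alert would fire in practice depends entirely on where legitimate bulk-reporting volume sits relative to the threshold, which is exactly the sticking point debated in the replies below.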
Adrian… I am shocked… shocked that you disagree with me…
As per normal I take issue with a number of your specific claims, which I shall enumerate just to make my position clearer. As always, I appreciate you forcing me to be clear.
> Fred’s post is misleading in the extreme.
Perhaps the title is a little “flashy”, but the content of the article is fairly defensible. Which is why I defend it… below.
>The Business of Medicine needs to be held accountable for their practices.
Agreed, and I hope that Anthem loses clients because they were hacked; we need market forces to operate here. But I do not think we should knee-jerk on the solutions we suggest they should undertake.
> Laying the Anthem breach at the feet of the NSA is equivalent to wishing for a police state.
In general, it is their job to “defend” us, and they spend more resources “attacking”. That policy does not make sense. The NSA should be helping American companies defend themselves from precisely this kind of attack. I think that help should be explicit and uncomplicated by hidden agendas. That is not the kind of help we have been getting. Sadly, I think the help that we have been getting is much more compatible with the notion of a “police state”… because the “secret police” is perhaps the most dangerous aspect of a police state. So we are obviously talking past each other here.
> First, data encryption can be done securely for a bit more money.
Money alone does not change how fragile encrypted data is.
> The keys are kept separate from the data and fetched as needed. In this case, an excess 80 million fetches of the keys would have been noticed earlier, wouldn’t you think?
If you are presuming any design that requires a key fetch per request, then no, I doubt 80 million would be noticed. Frequent key requests would just be a normal part of operations. What was noticed here was something that was done normally (database queries) being done in an abnormal way (the actual owner of the ID did not start the query). Nothing about using keys would have helped if the account had the privilege of fetching keys…
> Also, I would like an email from Anthem each time the keys to my data are fetched.
You are alone in this. Everyone else would ignore it.
> How hard would that be?
You want a feature that no one else actually wants; it’s not about hard vs. easy, it’s about feature prioritization. Even if you were getting emails during every query, there is no way you would have been able to act on them. This query would have been one among hundreds performed every day.
> Ahh, but what about the expense? Our US private insurance system has 3X the administrative cost of single-payer.
I agree with you my friend, but I fear that battle has been lost.
> Do they really need my social security number?
I think they might if you don’t want to pay the tax penalty… but that is an excellent question…
> Why can’t healthcare payments that average 100X that amount be made without reference to my SSN and other personal info?
I am not sure, but I certainly agree with that vision.
> Third, how long does Anthem need to keep a copy of my data?
That is an excellent question, and it should be one of the big questions the press is asking instead of the encryption drama.
> The Anthem breach is a teachable moment for how we’re paying for the most intimate and important information we have.
> This is not a time to be letting the Business of Medicine and our regulators off the hook.
While my title is admittedly a little dramatic, it certainly was not “Anthem is awesome and did nothing wrong”. Given how little detail we have on what actually happened, and Anthem’s unwillingness to actually face direct questions on the matter, I suspect that there is much more to this story: places where “mistakes were made” that either will, or will not, become clear later.
I am writing this because there should be lots of questions that they should be answering, and there should be lots of attention here, but we need to stop dulling that attention with a focus on one poorly understood technology solution when it is fairly obvious that this is a security “posture” problem, either at Anthem, or at the NSA or both.
Again, thanks for holding me to account for both what I have said, and what you read in between the lines… it helps to make the “between the lines” much clearer and I can always count on you to not pull your punches on these matters.
“Nothing about using keys would have helped if the account had the privilege of fetching keys…”
Much about using keys would have rendered the data de-identified.
It is a poor cipher program that stores the encrypted data, the cipher program itself, the key, secret, salt, IV, and the rest of the encryption configuration in the same location, all under the same credentials. That would be like leaving the key in a lock on top of a pile of money in your front yard.
Usually the account that can pull all of this together is the most locked down, and can only decrypt data according to the software's feature designs. Few HIPAA-compliant software features these days would let you export all of the demographic data in one big export through the UI. The only place you can do that (most of the time) is from the database. Rendering the demographic data unreadable in the database tables would have meant that stealing a database login alone was not enough to use the stolen data.
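To make that concrete, here is a minimal sketch of one way to do it (my illustration, not Anthem's design; the key value and function names are invented). With keyed pseudonymization, the demographic columns hold only tokens, and the key lives outside the database, so a stolen table dump plus a stolen database login still yields nothing readable:

```python
import hashlib
import hmac

# Assumption: in production this key would be fetched from a separate
# key service or HSM, never stored in the same database or under the
# same credentials as the data it protects.
DEMOGRAPHIC_KEY = b"key-material-from-a-separate-key-service"

def pseudonymize(value: str) -> str:
    """Replace a demographic value with a keyed, non-reversible token."""
    return hmac.new(DEMOGRAPHIC_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# What actually lands in the demographics table: tokens for the sensitive
# columns, plaintext only for the operational ones.
row = {"name": "Jane Doe", "ssn": "123-45-6789", "plan": "PPO-Silver"}
stored = {col: (pseudonymize(val) if col in ("name", "ssn") else val)
          for col, val in row.items()}
# A thief who dumps the table sees 64-character hex tokens, not SSNs.
```

Joins and exact-match lookups still work, because the same input always maps to the same token; only a caller holding the key can regenerate a token to match against.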
I would be happy to show you one of my cipher programs and architectures via a GoToMeeting.
Fred, I think we're all suffering from a lack of information on this case. The most detail I've read on the hack was from the L.A. Times and the Wall Street Journal, which seem to agree that someone was able to obtain the ID and password of someone inside the company who had database access. Not only did they get the login credentials, but they were somehow able to then run a query and export the data to an offsite data storage vendor.
The operation was caught, according to sources, when the owner of the network ID (the DBA or programmer) saw his own ID on a screen executing a database query that he didn’t personally initiate.
In this case, encryption couldn’t have done much because the hacker had already gained access to the system as though he were the employee (either remotely or from inside the company), so he’d obtained all the rights the programmer had to open that database.
To me, this seems like an inside job, although it could have been run remotely by someone who had installed some really good malware, or who had very good knowledge of how to exploit remote access applications that may have been in use, e.g. for when employees work from home. I suspect that whoever executed the hack at least had some help from the inside, although it’s also possible that they’re just that good.
Encryption will help if files are on portable media, like a laptop or USB drive, and the media is stolen. Or if files are sent through e-mail. Then the thief would need the key to open it up, or at least some good software and bad encryption. In the case of stealing a network or database admin’s login credentials… The thief has the same access as the administrator.
It’s rare for data inside any company to be encrypted, because people need to access the data constantly. It’s assumed that firewalls and physical access will protect onsite data from external network hacks, but in this case someone exploited the open avenue to the offsite storage facility. Criminals will look for any way to get around existing security, and in this case they found that one weak link and exploited it.
No word yet on whether they’ve figured out who picked it up once it was parked at the storage vendor…
Thanks for replying. Obviously your points about when encryption is effective are particularly salient. What you are describing is using the right tool to solve the right problem.
I am really interested to understand how those credentials were acquired, and I have heard nothing about that.
Also, good point about the internal threat. Like nation-state hacking, internal threats are underestimated. Still, I doubt an internal threat would have earned the term "sophisticated" hack, and I don't know why a "sophisticated" insider would use offsite uploads rather than just walking a discreet USB drive out the door…
Still, good point that an internal threat could technically have been at play.
All that needs to be ciphertext is the demographic data. This data does NOT need to be CONSTANTLY accessed (or read) by back-end developers and DBAs like me. I have personally ciphered demographic text data in database tables at a few healthcare companies (the ones who prioritize security), and it works out great.
Stoney, this is exactly the point about encryption. I believe that when data encryption is required by statute or policy, the easiest solution is to implement it at the hardware level. But that doesn't provide any control once you have access to the database. Generally, anyone who can connect directly to the database has full access to all the data.
I believe it is better to encrypt at the application layer and to enforce user-level access controls on the data. There should never be a single account that can access all of the data with a simple database query.
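As a toy illustration of that principle (the role names and the key-service call are invented for the example), the application layer can refuse to release field-decryption keys to accounts whose job never requires plaintext, so even a full database login cannot decrypt everything with one query:

```python
class AccessDenied(Exception):
    pass

# Assumption: per-role grants would normally live in a policy store;
# they are hard-coded here for the sketch. Note the DBA role gets *no*
# decryption rights, despite having full database access.
ROLE_GRANTS = {
    "claims_processor": {"decrypt_member_name"},
    "dba": set(),
}

def fetch_field_key(role: str, operation: str) -> bytes:
    """Release a field-decryption key only for a narrowly granted operation."""
    if operation not in ROLE_GRANTS.get(role, set()):
        raise AccessDenied(f"{role!r} may not perform {operation!r}")
    # Placeholder: a real system would fetch this from a separate key service.
    return b"key-material-for-one-field-only"

# A stolen DBA credential can still dump ciphertext, but the key gate
# refuses to turn that dump into readable demographics:
try:
    fetch_field_key("dba", "decrypt_member_name")
except AccessDenied:
    pass  # exactly the failure mode we want
```

The design choice is that the database holds ciphertext and the key service holds policy; stealing either one alone gets the attacker nothing.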
Agreed! It's the "just-do-the-minimal-amount-of-security-to-keep-the-client-off-our-back" mentality. Plus, vendors are touting disc encryption as the solution to encryption at rest, when really that is only going to protect you if someone steals your discs. Disc encryption is an important thing to do, but it is not the whole solution. I call it "layers of an onion". Discs are an outer layer of the onion, while the demographic text in the cells of the database table (usually obviously named something like PatientDemographics) is the core. If I want to steal your data (the core of the onion), then I should have a difficult time a) finding it and b) reading it. Unfortunately, most organizations just encrypt the outer layers instead of the inner core.
I should mention that I come from a position of authority on this subject. I have been consulting on health data systems for 16 years. I have designed, built, and improved systems for insurance companies, physician organizations, patient portals, and national aggregation services. I have seen the darkness and I have seen the light. Come to the light, everyone! Cipher and obfuscate your demographic text! ALWAYS! DO NOT use "difficulty" as an excuse. If difficulty were a legitimate excuse, we would all still be dying at 30 in the wilderness.
Partial encryption strategies like this are the hallmark of targeted solutions designed to balance business goals against real security.
I do think this is the type of defensive sophistication that people should be looking into.
It does appear, however, that this database was primarily for managing demographic information. Since so little clinical data was inside, it is reasonable to assume its function was as a "demographics management system". Given that, are you still confident that your techniques would be practical?
In your post you mention the possibility that the Anthem attack may have been a state-sponsored action, similar to the now-notorious Sony-Amy Pascal hack. I wasn't aware that anybody had made that connection yet. Did I miss something?
When we look at the Anthem incident, how technically complicated a hack was this? And what is the likelihood that hackers were able to gain system-wide access (i.e., temporary access to all company assets) in the way they did at Sony?
John, I’ve seen this report saying the FBI may be linking this to a Chinese hacker group http://j.mp/174fRWl
The key word to pay attention to is "sophisticated". Generally, this means that standard Internet security practices were followed but did not help. They are specifically withholding information about who the hacker was, but generally a "sophisticated" hack dramatically increases the likelihood that state-sponsored hacking was involved.
More on this angle: http://www.bloomberg.com/news/articles/2015-02-05/anthem-hacked-in-sophisticated-attack-exposing-customer-data
It's interesting that you highlight the word "sophisticated".
I think I blank out the word "sophisticated" when I read stories like this, in the same way I'd blank out similar words in national security stories: a "dangerous" terrorist group, an "unpredictable" third-world despot. Hacking attempts are assumed to be sophisticated, by definition, just as terrorist groups are almost always dangerous (friendly ISIS cells being few and far between) and third-world despots are almost always unpredictable, and rarely highly reliable allies.
The word "hack" may even denote "sophisticated", "high level", or "tech magic" in the public consciousness.
I’ll rethink this. The key is probably differentiating the sophisticated attacks from the unsophisticated ones. It sounds like Anthem may actually be the latter.
In general, where new techniques are used, or custom methods are developed to gain entry to a specific target, I would call that sophisticated. Of course, this might be a case where Anthem is trying to save face: it is much better to have suffered a sophisticated hack than it is to have suffered an unsophisticated one.
Still, there is just no information here, so I hesitate to judge Anthem too quickly.
There will be more information coming out about this over time (I would very much love to hear from the database administrator in question), and I hope that there is still a will to listen when the real "lessons learned" start coming out.
Good job, Fred. Sane and well said.
Hackers want to make money too, selling data; it's a big $180-billion-a-year business. As Fred said, there's more out there. Read my story about the clinical trial solicitation I got from China: a phone call saying they have me on record taking blood thinners. Not so; I never have, and I have never been prescribed them. More of the data-selling epidemic, with flawed data. Drop a few coins if you like, as I have been working on indexing and licensing data sellers for 3 years now. Openly on Twitter I get mathematicians and quants all agreeing with me, as this is database 101. How do you regulate a group when you don't know who they are? It's pretty tough :)
Sadly, lawyers with their non-data-mechanic perceptions have been in the lead with all of this, and their perception that this business can self-regulate is truly 100% wrong, but that's what we have had from nutty lawyers. Come on: at $180 billion a year, these folks won't cheat and code people? Give me a break, and go watch the videos at the Killer Algorithms page. By the way, Chinese banks want source code for any apps that interact with their banks; yeah, they cheat just like our banks do with code. Takes one to know one.
I'm an old, way-back policyholder, from like 8 years ago, and now I hear that I might have been exposed too? I thought it was rather entertaining that everyone expected some big post of the data on the web. Well, duh: 80 million records, that's money to repackage and sell our data. All they have had to do is watch how banks and corporations have been doing this in the wild west here in the US. Repackaged with no index or license, stolen data, just like counterfeit drugs, will filter into the marketplace, with people buying stolen and pilfered data without knowing.
One thing I do agree with here, though, is asking why such a massive database. I'm sure Fred agrees there too; regionalize some of it. You have to look at performance too with encryption, so there's that argument: do you want to hear everyone complain about access times and how slow the site is? At some point we may all have to put up with that; decisions will be made.
As far as the data taken, oh boy, we certainly hope that all those credit card and FICO medication scores are not out there. But that's OK, you probably don't know about them anyway, as it's all part of the secret scoring of America. Folks over at Krebs are really interested in this. Again, support my cause to index and license data sellers; we need it.
Let me introduce you to Argus and E-Scoring; they are buying, selling, and scoring you every which way but loose.
If you never read about the Experian breach, it's even bigger than this one. Experian didn't do enough due diligence and bought a thief along with the business; he ran over a million queries, and what those results were, we don't know, but he had sold a lot of data by the time the Secret Service found him. Experian didn't.