Lessons From the 100 Nation Ransomware Attack

The world is reeling from the massive ransomware attack on computer systems in at least a hundred nations. The unprecedented outbreak infected hundreds of thousands of computers, and would have infected millions more but for a 22-year-old computer science student who found a weakness in the malware itself: before spreading, it checked for a non-existent URL, so he registered that domain and found he could stop the infection from spreading. Of course, now that the hackers know this, it is an easy matter to update the malware to use other URLs and other techniques. Clearly, this iconic malware attack is not going to be the last.
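The kill-switch logic is simple to sketch. The snippet below is an illustration, not WannaCry's actual code: the domain name is a made-up placeholder, and the real malware used Windows networking APIs rather than Python. The point is only that the malware probes a URL and halts if it gets a response, which is why registering the domain stopped it worldwide.

```python
import urllib.request
import urllib.error

# Hypothetical stand-in for the hard-coded kill-switch domain.
KILL_SWITCH_URL = "http://example-killswitch-domain.invalid/"

def should_proceed(url: str = KILL_SWITCH_URL, timeout: float = 5.0) -> bool:
    """Mimic the malware's check: if the URL resolves and responds,
    halt; if the request fails, continue spreading."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return False  # URL is live: the kill switch is engaged, stop.
    except (urllib.error.URLError, OSError):
        return True   # URL unreachable: the malware would carry on.
```

Once the student registered the real domain, every new infection's probe succeeded and the malware shut itself down before encrypting anything.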

What do we know about the malware? The NSA (the US National Security Agency) discovered a vulnerability in some of Microsoft’s operating systems and developed an exploit for it, called “EternalBlue.” The NSA was itself hacked, the exploit leaked, and ransomware built around it was distributed on the black market. EternalBlue thus became the engine of the now notorious ransomware program called “WannaCry.” It is important to note that no special skills are required to actually use ransomware: WannaCry is just a tool a criminal buys in the hope of causing chaos, making money, or gaining fame.

The NSA has an interesting problem. When it discovers a vulnerability that may help it fight terrorism, it makes sense to keep the flaw secret. But once the flaw becomes known to hackers it is quickly exploited, and the rest of the world is unprepared for the consequences. In any case, on March 14, 2017, Microsoft issued patches for the vulnerability in its currently supported software, which is what most people use on a day-to-day basis.

That left the most vulnerable software, Windows XP, which is so old that Microsoft no longer issues security patches for it. However, perhaps because of the scale of the problem, Microsoft belatedly released a patch for the old XP software, a rare event for the company. It is worth asking why it did not release that patch in early March (or before), since it is well known that many hospital systems rely on XP.

The computers most easily infected were those on which the patches had not been applied, including, most notably in US press accounts, about a quarter of the computer systems running hospitals and medical practices in England and Scotland. We have hundreds of reports of hospitals cancelling operations, of scheduling horrors, of doctors treating patients blind to their medications and medical data, of failed phone systems, of chaos and fear.

Perhaps the key questions for us are: Why would professional organizations not update their security software when the patches were made available and the organizations were notified about them? Why would professional organizations run critical systems that are no longer supported by their manufacturers? Why would professional organizations not have an immediate and effective strategy for when things go wrong?

The answers are our lessons:

  1. Few take IT, let alone cybersecurity, seriously enough. Many hospital IT departments are not populated with skilled practitioners or with those who can understand the complexity of the systems they run. Pay for cybersecurity workers in public facilities seldom matches what is offered in the private sector. Many “IT leaders” in hospitals are well-meaning clinicians who have taken on the role part time.
  2. Complexity is dangerous for cybersecurity. Each connection is a vulnerability. Each link is an opportunity for malware to enter the system. In the USA, EHRs are connected to literally hundreds of other IT systems — pharmacy IT, smart infusion pump systems, insurance companies, inventory, dietary, internal labs, outside labs, outside pharmacies, other medical institutions, local, state and federal regulatory bodies, and so on. And that’s before we start counting the central heating, the sterilization equipment, and every staff member’s own mobile phone and tablet. Few healthcare IT departments could even map the myriad connections, let alone enact the comprehensive and constantly changing protections needed. Few, if any, hospitals have an inventory of their IT equipment — not least because it is not obvious what counts as IT. Even a defibrillator in a corridor is full of IT, and is vulnerable to hacking.
  3. It’s important to emphasize the “constantly changing” part of that previous point. There are new drug packages, new suppliers, new or changing lab arrangements, modifications to existing systems, mergers, health information exchanges (HIEs), and so on. If the IT guys think they’ve fixed it, they’ve fooled themselves.
  4. There are substantial investments in big equipment like MRI scanners and medical linear accelerators. These investments were expected to pay off over years — which means the equipment is still in use years later but using the original software it was bought with. Microsoft and often even the original suppliers no longer support these older systems, and they are very vulnerable but irreplaceable.
  5. The malware is always being improved and refined, by hackers around the world. There are organized groups developing malware. If the NSA is doing it, as they evidently are, so are China and Russia and many other states. There are thus enormous resources behind malware, and blaming hospitals for their unmanaged vulnerabilities is a distraction from bigger problems.
  6. Returning to the complexity issue: among the many reasons a hospital or other organization might still be running XP is that it spent years building linkages to other software and hardware systems across the hospital. Changing one basic platform can be onerous or lethal, because the platform may stop working with other systems. It may even stop working with itself — a system like an MRI will have many interlinked computers inside it. Most organizations with complex systems simply can’t afford the time or money to build new architectures for embedded, legacy systems. Even if they did, Microsoft and others would have changed some of the systems before the hospital had even caught up.
  7. For the UK, there is also the legacy of Maggie Thatcher and other Tory governments that sought to economize by underfunding the NHS. Eventually, the lack of equipment and, more importantly, the underpaying of key staff create opportunities for disaster. With the widespread ignorance of IT, it is very easy to say “it hasn’t happened, therefore we are OK” and so save money. Worse, politicians do not understand IT, so their eyes glaze over before they even start thinking.
  8. Worse than not understanding IT is thinking you understand IT. Returning to the NHS, there’s enough irony to fill an Oscar Wilde play. The NHS recently issued its current views on “digital maturity,” which focus on being paperless, as opposed to actually understanding IT or cybersecurity. See https://www.england.nhs.uk/digitaltechnology/info-revolution/maturity-index/ or the recent Wachter Report, “Making IT Work: Harnessing the Power of Health Information Technology to Improve Care in England,” the 2016 report of the National Advisory Group on Health Information Technology in England.
  9. To be charitable: if hospitals had employed more front-line IT staff, they could have kept their systems up to date with patches from Microsoft and other vendors. If hospitals had employed more senior IT staff, their backup and recovery procedures might have worked. If hospital executives and managers understood the critical role of IT in their organizations, this resourcing would have been available. Nevertheless, it is not just a financial problem: if the people recruiting IT staff (or external consultants) don’t understand IT, then they won’t recruit competent workers — the problems of not understanding IT go all the way to the top of the organizations.
  10. You will note that some of the worst ransomware attacks are in Russia and India. One of the reasons for this is that organizations that use old or pirated software do not get the routine updates and patches. In this sense, the USA was less vulnerable.

In sum, we must learn from this attack that healthcare IT departments require a level of cybersecurity awareness and skill that is not ubiquitous. The system is only as strong as its weakest link. Relying on vendors—who focus on sales of individual products or services—will not provide enterprise-level security.

As of this writing, we don’t know how successful the efforts have been to re-establish systems. We know some were backed up, at least partially. But why not all of them? We know some systems are now working again. But we don’t know what is being done with the data, and we certainly don’t know the consequences of delayed healthcare, frozen emergency departments, lost information, untreated illnesses and trauma, etc.

Theresa May (the current UK Prime Minister) has assured the UK that patient data were not affected or leaked, just that the computers weren’t working. One might ask: how does she know? She doesn’t; she couldn’t possibly know.

We should take cybersecurity seriously, and healthcare IT leaders need a synoptic vision and understanding if they are to protect the systems they operate — and the patients that depend on those systems working. Hospitals also need sufficient funding to make that protection possible. However, funding is not enough. Hospitals need wisdom to use that funding sensibly.

If that’s the main lesson, there are many difficult questions remaining for the future. A selection would include:

  1. Cybersecurity vulnerabilities have been around for a long time. Why isn’t there a standard operating procedure in place?
  2. If patient data are restored, that sounds great, until you ask: how do you know they have been restored correctly? What happens when the patient’s condition has changed, so the restored data are out of date? What systematic process will fix the ongoing mess?
  3. How should we change the incentives and economic models (and the laws and regulation) to encourage hospitals and vendors to work better together to solve these complex problems? One can bet that all the legal contracts will blame the doctors and nurses for not providing professional care. Nothing much will improve until there is radical action from the top to change the culture.

If we can’t answer the questions, we either fail (and we harm patients) or we have to go back to school and learn some more.

Professor Koppel is a leading scholar of healthcare IT, and of the interactions of people, computers and workplaces. His articles in the major medical journals are considered seminal works. He is on the faculty of the Sociology Department and of the Medical School at the University of Pennsylvania, where he is also a Senior Fellow of the Leonard Davis Institute at Penn’s Wharton School. For the past 6 years he has also worked on cybersecurity.

Prof Harold Thimbleby is an internationally recognized computer scientist who works in healthcare IT. He has a particular interest in patient safety and user interfaces (e.g., the design of infusion pumps). His work has been recognized with honorary fellowships of medical colleges, like the Royal College of Physicians, where he is Expert Advisor on IT.



8 replies

  1. Right. More resources? Have more resources allowed Microsoft to correct its habitual vulnerabilities?

  2. Great session! Currently there is no way to fix a computer that’s infected by WannaCry. But at the same time, paying the ransom isn’t your best bet, since you are basically giving money to criminals. Professional organizations do not have an immediate and effective strategy for when things go wrong. I would like to suggest cybersecurity communities like Connected Health Community and the Society of Cyber Risk Management & Compliance Professionals | Opsfolio (opsfolio.com), which are good sources for more ransomware information.

  3. Haven’t we also learned that the words cybersecurity and Microsoft should not be in the same sentence?

  4. The article discusses the cost of up-to-date software and the cost of competent in-house or contract IT to support it. When it comes to critical infrastructure, which includes almost all healthcare IT, the true cost of using secret proprietary software is now becoming clear.

    Consider how a WannaCry problem would be handled if the software, both operating system and health records applications, was Free and Open Source. There would be no economic reluctance to upgrade, there would be no reluctance to make the bugs and vulnerabilities public (sunshine as disinfectant vs. security through obscurity) and institutions would pay for _competitive_ support either through contractors or in-house IT instead of giving the software vendor an effective monopoly on support.

    Today, it’s rare to find core security programs that are secret software, for this very reason. We are learning of the need to treat all of our critical infrastructure as a security system and demand that it be open source.

  5. Great article. The obvious problem is that the cost to do what you suggest is enormous. It will dwarf any bricks and mortar costs for the next decade (which it should in any event). The cost is far more than any non-acutely aware vendor realizes, and the difficulty of convincing Boards of Directors and CEOs to spend that money when so many other demands exist is daunting. Even when a hospital spends $200 Million on its core platform, there’s nothing for the Board or the public to see, feel, or hear. It’s vapor, but wicked important vapor. We need to educate Boards and other strategic decision makers of the importance of what you wrote; otherwise, the sheer numbers will scare them off.

  6. Wonderful article! Thank you.

    I have a problem with your message: you seem to be pleading that with enough resources and acumen we can make this IT work well enough for society. Is this really true?

    Can you show us that the security difficulties with IT are theoretically finite?…that we can get ahead of them and finally “win”? Aren’t some experts saying that there are always going to be deficiencies in the software?…i.e. we have essentially an infinite job ahead of us? …that we are always going to be on an asymptotic curve approaching, but never reaching, safety?

    It sometimes feels as if we have fallen in love with these computers, almost as if they are toys. And we keep finding all these difficulties that we are trying to whip into submission. But they never end. They go on and on, and they are becoming more and more dangerous as the penetration of this toy into our lives becomes ever deeper and more synoptic. It is managing us. It no longer feels like our servant.

    Aren’t there any revolutionary remedies out there? Totally new operating systems? internet protocols? decryption mathematics? Do we have to give up on interoperability? Unplug everything and use LANs?

    But, anyway, thanks for your fine effort in this blog piece.

  7. So, do we really want a vast system of variably protected systems all hooked up together? OR, do we just disconnect everything in the ED, Surgical Suite and the pharmacy from the internet? By the way, is your hospital’s standby diesel electricity generator connected to the internet? I just hope everyone gets more than 1 hour prior notice in each operating room when to stop their anesthesia machines!
    Finally, each hospital should update its risk management plan if it doesn’t include internet protection that the hospital can explain to each active medical staff member.