The Food and Drug Administration has spent decades refining its processes for approving drugs and devices (and is still refining them), so what would happen if it extended its scope to the exploding health software industry?
The FDA and its parent organization, the Department of Health and Human Services, are facing an unpleasant and politically difficult choice.
Sticking regulatory fences into the fertile plains of software development and low-cost devices will arouse their untamed denizens, who are already lobbying Congress to warn the FDA against overreaching. But to abandon the field is to leave patients and ordinary consumers unprotected. This is the context in which the Food and Drug Administration and the Office of the National Coordinator for Health IT (ONC), after consultation with outside stakeholders, released a recent report on health IT.
I myself was encouraged by the report. It brings together a number of initiatives that have received little attention and, just by publicizing the issues, places us one step closer to a quality program. Particular aspects that pleased me are:
- The suggestion that quality programs should start to look at electronic health records (p. 8). EHRs have been certified by various bodies, but usually just to check off boxes and declare that the systems comply with regulations–neither the quality of their user interfaces nor the quality of their implementations has been questioned. Reportedly, the FDA considered “safety and quality standards” for electronic health records in 2010 but couldn’t get them adopted. The FDA also reviews certain forms of clinical decision support, but only if they are built into a regulated device. The current HHS report refers back to aspirational documents such as a Health Information Technology Patient Safety Action & Surveillance Plan and a set of guidelines on the safety of EHRs.
- A call for transparent reporting and sharing of errors, including the removal of “disincentives to transparent reporting”–i.e., legal threats by vendors (p. 25). Error reporting is clearly a part of the “environment of learning and continual improvement” I mentioned earlier. A regulation subgroup stated the need most starkly: “It is essential to improve adverse events reporting, and to enable timely and broader public access to safety and performance data.” Unfortunately, the vague talk of a Health IT Safety Center (p. 4, pp. 14-15) seems to stop at education, with no enforcement power. I respectfully disagree with the assessment of two commentators who compared the Health IT Safety Center to the National Transportation Safety Board and assigned it some potential power. However, I will ask ONC and the FDA for clarification.
- A recognition that software is part of a larger workflow and social system, that designing it to meet people’s needs is important, and that all stakeholders should have both a say in software development and a responsibility to use it properly.
Don’t imagine that the FDA is unused to regulating software. For quite some time it has maintained practices for the software used in some medical devices, and has tried to keep them up to date.
A waterfall-like process of risk assessment and testing called computer system validation has long been required for pharma and devices.
Key aspects in the current FDA guidelines about medical devices and the software they contain are:
- Requiring documentation about the quality control practices used by the vendor, such as validating that the software performs the designated tasks.
- Providing a searchable database of errors reported in devices.
These principles will return later in this article as possible responses to the new environment for health-related and medical software. I use dual terms here because anything “medical” tends to draw scrutiny from regulators, whereas one can evade that bureaucracy by talking about “health,” “wellness,” or “fitness.”
Examples of devices that lie outside the FDA’s jurisdiction can be found on pp. 20-22 of this guidance document. In addition, the FDA has decided not to regulate devices that help patients with peripheral medical tasks such as managing their illness or organizing their health information (pp. 16-18 of the guidance document).
Why is the question of regulation currently so hot? And why is it a dilemma for the department?
In one word: mobile. Developers are just as jazzed about creating sensors and apps for health as for other fields such as social media.
Miniaturization and low-cost components open up the field of medical devices to scads of developers who would never imagine creating something on the order of an MRI or CT scanner. As I explain in my report, The Information Technology Fix for Health, this new democratic rush in medicine extends to the clinicians and even the general public, who can take vital signs and do other things that used to require expensive equipment in a clinical setting.
Even the devices built into ubiquitous cell phones can offload complicated functions. For instance, check out the award recently given to an app for assessing sports concussions.
But all these enticing possibilities in hardware and software lead to a clash of cultures. On one side tromps in the waterfall model, heavy on requirements and documentation, that has always been used for critical embedded systems such as we depend on in airplanes, power plants, and military equipment.
On the other side is the agile methodology with constant innovation preferred by app developers. There is no “eventual consistency” in the development of consumer-facing apps, because while programmers are ironing out the wrinkles in one release, new requirements are piling in. So we come down to the essential question of how to square the potential benefits of rapid roll-outs and fast-as-light updates with the assurance that health-related software will work well under all circumstances.
The HHS report (to which the FCC also contributed) endorses the agile approach when summarizing that “an appropriate regulatory framework for health IT should be flexible enough to accommodate innovative, continuously-evolving products undergoing rapid product iterations, upgrades, modifications, and customization, and should account for the complex environment in which the products operate and the multiple stakeholders that play key roles in the successful development, implementation and use of health IT.” (p. 11)
But saying good-bye to the waterfall model is harder than modern developers might think. Requirements, test plans, reviews, and loquacious documentation all the way down the line guide the development of embedded systems that shepherd us through life minute by minute.
And there are certainly spectacular examples of failures following this model, from Healthcare.gov to the suppressed dangers in General Motors and Toyota cars. For more scares, readers will enjoy the presentation by embedded systems expert Michael Barr, which includes a look at the notorious Therac-25 radiation overdoses and ends with calls for more transparency in the field.
The HHS report does find a role for “conformity assessment,” including certification and accreditation, and it was highlighted in a teleconference held Thursday, but I question how far one can apply it to fast-changing software. If a product is truly innovative, there is not yet a standard for it to “conform” to. The workgroup assembled to advise HHS and the FCC asked the agencies to avoid certification (p. 31 of their recommendations).
Any novice programmer knows that a one-character fix to a program can cause another part to unexpectedly and catastrophically fail, but we can’t require programs to run a gauntlet of verifications for every trivial change. Not only would that squelch new development, but it would leave devices running ancient operating systems and unpatched software. A lot of them already do–not just in health care, but in many industries. The security risks are already a known problem, but so are routine bugs.
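To make the point concrete, here is a contrived sketch in Python, with every name invented for illustration (no real device implied), of how a one-character edit can change behavior far from the line that was touched:

```python
# Contrived sketch: a one-character edit in a shared helper silently
# alters a safety check made elsewhere in the program.

MAX_DOSE_MG = 50

def is_dose_safe(dose_mg: float) -> bool:
    # Deleting the "=" below (turning "<=" into "<") would silently
    # reject the maximum legal dose; a check written the other way
    # around could just as easily let an over-limit dose through.
    return dose_mg <= MAX_DOSE_MG

def administer(dose_mg: float) -> str:
    # This caller, nowhere near the edited line, depends on the exact
    # boundary behavior of the helper above.
    if not is_dose_safe(dose_mg):
        raise ValueError(f"dose {dose_mg} mg exceeds limit")
    return f"administering {dose_mg} mg"

print(administer(50))  # works today; one character away from failing
```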
And in its embrace of agility, the department bends over backwards to assure readers it will not over-regulate (“no new or additional areas of FDA oversight are needed,” p. 3). During a teleconference presenting the report, speakers repeatedly beat their breasts over promises to support innovation above all. This did not prevent the usual critics of HHS from accusing it of imposing a heavy hand on the industry–that’s how open-ended the report is.
It’s sad to report that, after many decades of research, software quality is still a matter of culture and personal talent, not well supported by its own technology. Quality improvement tools such as static analysis and model checkers have been demonstrated effective in experiments, and are used at a few sites (Microsoft has created and adopted a lot of these tools, for instance), but they are still rarely found in production programs and may prove inadequate for them. We’re still searching for processes that bridge the requirements of agility and reliability.
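As a small taste of what such tools catch, consider this toy function (not from any real product): a static type checker such as mypy flags the bad return value before the program ever runs, whereas ordinary testing would miss the bug unless the rare branch happened to be exercised.

```python
# Toy example: the annotation promises an int, but one branch
# returns None. A static type checker reports this without running
# the code, e.g. mypy: 'Incompatible return value type
# (got "None", expected "int")'.

def parse_heart_rate(raw: str) -> int:
    if raw.isdigit():
        return int(raw)
    return None  # flagged statically; crashes callers only at runtime
```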
I would like to see two strong initiatives come from the HHS’s request for comments:
- Require some coherent program for testing software and determining that it is fit for its purpose, including usability.
- Collect error reports in an open database, along with tools to measure vendors’ responses (a sketch of such a report follows this list). As usual, open source software projects are more transparent and handle critical bug fixes faster than proprietary companies.
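To give a feel for what an open error database might collect, here is a minimal sketch of a machine-readable report; every field name is a hypothetical illustration, not a proposed standard.

```python
# Minimal sketch of a machine-readable health IT error report.
# All field names are hypothetical illustrations.

import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ErrorReport:
    product: str        # vendor and product name
    version: str        # exact software version in use
    description: str    # what went wrong, in plain language
    severity: str       # e.g. "no-harm", "near-miss", "patient-harm"
    reported_at: str    # ISO 8601 timestamp
    days_to_vendor_fix: Optional[int] = None  # basis for measuring vendor response

report = ErrorReport(
    product="ExampleEHR",
    version="4.2.1",
    description="Medication list failed to display after an upgrade",
    severity="near-miss",
    reported_at="2014-04-10T15:04:00Z",
)

# Serialized form suitable for submission to a public, searchable database.
print(json.dumps(asdict(report), indent=2))
```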
In addition, all sites adopting software should have back-up plans in case of system failure–which means also training staff to use those plans in a pinch.
We need a third paradigm, because we know the limitations of both waterfall and agile.
Hopefully, the field of health IT will advance along with the rest of software. In fact, because lives are at stake, it would be great to see health IT take the lead and step out in front of the rest of the software industry for a change.
Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. His work for O’Reilly includes the influential 2001 title Peer-to-Peer, the 2005 ground-breaking book Running Linux, and the 2007 best-seller Beautiful Code.