Our healthcare system is now facing a problem that has plagued business leaders for years: how do you balance consistency and innovation?
The drive for consistency in healthcare is based upon the fundamental observation that physicians across the country treat similar medical conditions in dramatically different fashions. Sometimes, these different approaches are costly, such as using a more expensive treatment when a less expensive approach might be as effective. In other cases, these practice variations are dangerous – failing to provide patients with treatment the evidence suggests is best.
Standardizing the delivery of care – identifying “best practices” and then insisting physicians follow these guidelines – could, in theory, save money while improving quality, and is the basis of Obama’s healthcare proposal.
Businesses have long known about the benefits of standardization – lower costs, higher baseline quality – and have aspired to achieve it. The ability to make the same product in exactly the same way every single time has contributed materially to the success of companies from McDonald’s to Intel. The global adoption of the “Six Sigma” program, an initiative originally developed by Motorola to reduce variability and ensure consistency, is perhaps the most visible example of the value most industries place upon achieving uniformity.
For many managers, one of the great attractions of consistency initiatives is that they offer instant metrics: quantitative methods of evaluating how well you are doing simply by measuring how closely you adhere to the established standard.
Yet, these exact metrics are also what most concern many physicians, as the drive for standardization seems to have far outstripped our ability to identify appropriate standards. Many practice guidelines are based on limited data, and in many cases, it’s not clear that strict adherence to these guidelines actually improves patient outcomes. (The ubiquitous use of “best practice” benchmarks in the corporate world likely rests on an even shakier foundation.)
The administration hopes that through improved communication, and aided by modernized information technology systems, physicians can be nudged to standardize themselves, motivated by a professional desire to provide the best care at the cheapest cost. However, given both the profound challenges of defining what “best” is, and the complexity of physician motivation, the only way to achieve the required cost-savings may be to mandate strict adherence to practice guidelines.
At some level, standardized algorithms might be good for medicine, reducing the blatant mismanagement of patients by physicians who have not stayed current, and discouraging doctors from reflexively selecting expensive procedures or medications that have been shown to offer little benefit. In simplifying the physician’s decision tree, such guidelines may also enable doctors to spend more time listening to patients, and less time running through a confusing litany of therapeutic alternatives.
At the same time, if medicine lurches in the direction of guidelines and algorithms, two important opportunities may be lost:
– First, we may lose the chance to individualize care; as Stephen Jay Gould famously wrote, “The median isn’t the message,” and a treatment ineffective for most patients may be enormously useful for some. A key driver of personalized medicine is the urgent clinical need to identify just which patients are most likely to benefit from a particular drug or intervention.
– Second, we may lose the opportunity to tinker and innovate – so many powerful discoveries originated with a clinician’s chance observation or slight deviation from standard treatment. If the role of physicians is dumbed down to the point where they are simply expected to mechanically execute on established protocols, the ability to intelligently improvise may be curtailed, thwarting medical progress.
Regrettably, the current fashion for standards has consumed not only medical practice but also medical training, as young doctors, nurses, and other healthcare providers are continually compelled to demonstrate “proficiency” in a series of expensive (and, for the sponsors, quite lucrative) certification examinations, despite minimal evidence that the score produced by this testing correlates in any meaningful way with the care subsequently delivered to patients. In a healthcare system fixated on metrics, the proliferation of such unvalidated testing instruments will only get worse.
Is there an intelligent way to harness the cost and quality benefits of standardization in a fashion that avoids the dangers of guidance creep and preserves innovation?
One approach is to clearly differentiate guidelines based on the most robust evidence – strong recommendations that truly deserve to guide clinical practice – from all other guidance, which can inform care but should not dictate it. This will require a measure of humility from the clinical leaders who develop guidelines – something often in short supply.
A second approach is to ensure that, to the extent standardized treatment protocols are employed, they are routinely used to evaluate and improve care, not just deliver it. Treatment algorithms could enable the rigorous comparison of different therapeutic approaches when there are several reasonable alternatives, potentially yielding more actionable conclusions than an army of tinkering practitioners. Success in this research endeavor would require planning, expertise, commitment, and funding (presumably also in short supply).
My own experience working with a range of companies suggests that balancing consistency and creativity can represent an overwhelming challenge. While many managers harbor a genuine desire to promote innovation, ultimately the allure of standardization, and the seductive comfort of quantitative metrics (however meaningless) and rigid processes (however cumbersome), is often too powerful to resist. Our healthcare system is too important to suffer this same fate.
Ensuring good data are translated into clinical practice is essential, but replacing true uncertainty with false precision will only hurt patients, inhibiting innovation while obscuring the relatively few well-established standards that have clearly been shown to make a difference.
David Shaywitz, MD, PhD, is a management consultant in New Jersey, and co-founder of Harvard’s PASTEUR program in translational research. This essay originally appeared in abridged form in the Second Opinions Forum of the Washington Post.