Jeremy Hunt, secretary of state for health in Britain, recently toured the Virginia Mason Medical Center in Seattle. He said the visit was “inspirational” and announced plans to have the British National Health Service (NHS) sign up “heart and soul” to a similar culture of safety and transparency. Hunt wants doctors and nurses in the NHS to “say sorry” for mistakes and wants hospitals to be more open in disclosing safety events.
I had a similar reaction to my tour of Virginia Mason. The hospital appears impressive—and truly gets impressive results. My nonprofit, the Leapfrog Group, annually takes a cold, hard look at the hospital’s data and named Virginia Mason one of two “top hospitals of the decade” in 2010. Every year, it ranks near the top of our national ratings.
Virginia Mason’s success is rooted in its famous application of the principles of Japanese manufacturing to disrupt how it delivered care, partly at the behest of one of Seattle’s flagship employers, Boeing. There are numerous media stories and a book recounting the culture of innovation Virginia Mason deployed to achieve its great results, so I won’t belabor the point here. But at its essence is Virginia Mason’s unusual approach to transparency. Employees are encouraged to “stop the line” – that is, report when there’s a near miss or error. Just as Toyota assembly workers are encouraged to stop production if they spot an engineering or safety problem, Virginia Mason looks for every opportunity to publicly disclose and closely track performance.
It is not normal for a hospital to clamor for such transparency. Exhibit A: the Leapfrog Hospital Survey, my organization’s free, voluntary national survey that publicly reports performance by hospital on a variety of quality and safety indicators. More than half of U.S. hospitals refuse the invitation of their regional business community to participate in Leapfrog, suggesting that transparency isn’t at the top of their agenda. But for Virginia Mason and an elite group of other hospital systems, not only is the transparency of Leapfrog a welcome feature, but they challenge us to report even more data, faster.
The Leapfrog Group has just released its latest report grading the safety of hundreds of individual hospitals, but the real news isn’t the “incremental progress.” It’s how a group started by some of the most powerful corporations in America has quietly devolved into just one more organization hoping press releases produce change.
Amid the current enthusiasm for “value-based purchasing” by employers and possible privatization of Medicare, it is worth examining why Leapfrog’s initial notion that corporations would spearhead a crackdown on crummy care failed and what we can learn from that publicly unacknowledged failure.
Leapfrog was launched with the hoopla of a high-powered initiative. A widely publicized 1999 report by the Institute of Medicine declared that up to 98,000 patients die every year in hospitals from preventable errors and more than one million are injured. In November 2000, the newly formed Leapfrog Group announced three targeted “leaps” in patient safety that promised to save some 58,000 lives, prevent a half million medication errors and (in calculations that came later) save billions of dollars.
“The number of tragic deaths brought about by preventable medical errors is too striking for those of us in the business community to ignore,” declared Lewis Campbell, chairman and CEO of Textron, at the group’s launch.
Campbell was head of a health care task force of the Business Roundtable, an elite group of corporate leaders that sponsored Leapfrog. Wielding the power of the checkbook to enforce “aggressive but feasible target dates” was “a straightforward business approach to tackling a complex problem,” Campbell explained.
In the past, neither hospitals nor practicing physicians were accustomed to being measured and judged. Aside from periodic inspections by the Joint Commission (for which they had years of notice and on which failures were rare), hospitals did not publicly report their quality data, and payment was based on volume, not performance.
Physicians endured an orgy of judgment during their formative years – in high school, college, medical school, and in residency and fellowship. But then the judgment stopped, or at least it used to. At the tender age of 29, having passed “the boards,” I remember the feeling of relief knowing that my professional work would never again be subject to the judgment of others.
In the past few years, all of that has changed, as society has found our healthcare “product” wanting and determined that the best way to spark improvement is to measure us, to report the measures publicly, and to pay differentially based on these measures. The strategy is sound, even if the measures are often not.
Last week, yet another alarming Computerized Physician Order Entry (CPOE) study made headlines. According to Healthcare IT News, The Leapfrog Group, a staunch advocate of CPOE, is now “sounding the alarm on untested CPOE” as its new study “points to jeopardy to patients when using health IT.” Until now we had inconclusive studies pointing to increased, and also decreased, mortality in one hospital or another following CPOE implementation, but never an alarm from a nonprofit group that made it its business to improve quality in hospitals by encouraging CPOE adoption. And this time the study involved 214 hospitals using a special CPOE evaluation tool over a period of a year and a half.
According to the brief Leapfrog report, 52% of medication errors and 32.8% of potentially fatal errors in adult hospitals did not receive appropriate warnings (42.1% and 33.9%, respectively, for pediatrics). A similar study published in the April edition of Health Affairs (subscription required), using the same Leapfrog CPOE evaluation tool but only 62 hospitals, provides some more insight into the results. The hospitals in this study used seven commercial vendors and one homegrown system (not identified), and most interestingly, the CPOE vendor had very little to do with the system’s ability to provide appropriate warnings. For basic adverse events, such as drug-to-drug or drug-to-allergy interactions, an average of 61% of events across all systems generated appropriate warnings. For more complex events, such as drug-to-diagnosis or dosing errors, appropriate alerts were generated less than 25% of the time. The results varied significantly among hospitals, including hospitals using the same product. To understand the implications of these studies, we must first understand the Leapfrog CPOE evaluation tool, or “flight simulator” as it is sometimes called.