Leapfrogging CPOE

Last week, yet another alarming Computerized Physician Order Entry (CPOE) study made headlines. According to Healthcare IT News, The Leapfrog Group, a staunch advocate of CPOE, is now “sounding the alarm on untested CPOE” as its new study “points to jeopardy to patients when using health IT”. Until now we had inconclusive studies pointing to increased, and also decreased, mortality in one hospital or another following CPOE implementation, but never an alarm from a non-profit group that made it its business to improve quality in hospitals by encouraging CPOE adoption. And this time the study involved 214 hospitals using a special CPOE evaluation tool over a period of a year and a half.

According to the brief Leapfrog report, 52% of medication errors and 32.8% of potentially fatal errors in adult hospitals did not receive appropriate warnings (42.1% and 33.9%, respectively, for pediatric hospitals). A similar study published in the April edition of Health Affairs (subscription required), using the same Leapfrog CPOE evaluation tool but only 62 hospitals, provides some more insight into the results. The hospitals in this study use 7 commercial systems and one homegrown system (not identified), and, most interestingly, the CPOE vendor had very little to do with a system’s ability to provide appropriate warnings. For basic adverse events, such as drug-drug or drug-allergy interactions, an average of 61% of events across all systems generated appropriate warnings. For more complex events, such as drug-diagnosis interactions or dosing errors, appropriate alerts were generated less than 25% of the time. The results varied significantly among hospitals, including hospitals using the same product. To understand the implications of these studies we must first understand the Leapfrog CPOE evaluation tool, or “flight simulator” as it is sometimes called.

The CPOE “simulator” administers a 6-hour test. It is a web-based tool from which hospitals print out a list of 10-12 test patients with pertinent profiles, i.e. age, gender, problem list, medication and allergy lists, and possibly test results. The hospital needs to enter these patients into its own EHR system. According to Leapfrog, this is best done by admissions folks, lab and radiology resources, and maybe a pharmacist. Once the test patients are in the EHR, the hospital logs back into the “simulator” and prints out about 50 medication orders for those test patients, along with instructions and a paper form for recording CPOE alerts. Once the paper artifacts are created, the hospital is supposed to enter all medication orders into the EHR and record any warnings generated by the EHR on the paper form provided by the “simulator”. This step is best done by a physician with experience ordering meds in the EHR, but Leapfrog also suggests that the CMIO would be a good choice for entering orders. Finally, the recorded warnings are reentered into the Leapfrog web interface, and the tool calculates and displays the hospital’s scores.
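To make the mechanics concrete, here is a minimal sketch, in Python, of what that final scoring step might look like. The category names, the record layout, and the percentage-based scoring rule are my own assumptions for illustration; Leapfrog does not publish the tool’s internal logic.

```python
# Hypothetical sketch of the Leapfrog "simulator" scoring step.
# Category names and the scoring rule are illustrative assumptions,
# not Leapfrog's actual implementation.
from collections import defaultdict

# Each recorded result notes which decision-support category the test
# order probes and whether the EHR produced an appropriate warning.
recorded_alerts = [
    {"order_id": 1, "category": "drug-drug",      "alerted": True},
    {"order_id": 2, "category": "drug-allergy",   "alerted": False},
    {"order_id": 3, "category": "dosing",         "alerted": False},
    {"order_id": 4, "category": "drug-diagnosis", "alerted": True},
    # ... a real test run would cover roughly 50 orders
]

def score_by_category(alerts):
    """Percentage of orders in each category that generated an
    appropriate warning."""
    hits, totals = defaultdict(int), defaultdict(int)
    for a in alerts:
        totals[a["category"]] += 1
        hits[a["category"]] += a["alerted"]
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}

print(score_by_category(recorded_alerts))
# {'drug-drug': 100.0, 'drug-allergy': 0.0, 'dosing': 0.0, 'drug-diagnosis': 100.0}
```

Note that everything the tool sees comes from the paper form: it scores what the hospital reported, not what the EHR actually did.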

If the process above sounds familiar, it is probably because it is very similar to how CCHIT certifies clinical decision support in electronic prescribing. Preset test patients, followed by the application of test scripts, are intended to verify, or in this case assess, which modules of medication decision support are activated and how the severity levels for each are configured. As Leapfrog’s disclaimer correctly states, this tool only tests the implementation, or configuration, of the system. This is a far cry from a flight simulator, where pilot (physician) response is measured against simulated real-life circumstances (a busy ED, rounding, discharge). The only alarm the Leapfrog study is sounding, and it is an important one, is that most hospitals need to turn on more clinical decision support functionality.
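For illustration, a configuration-level check of this kind reduces to a very small data structure. The field names below are hypothetical, not CCHIT’s or Leapfrog’s actual script format; the point is that such a script can verify which decision support modules fire and at what severity, and nothing more.

```python
# Illustrative shape of a configuration-level test script entry.
# Field names are hypothetical; neither CCHIT nor Leapfrog publishes
# its script format.
from dataclasses import dataclass

@dataclass
class ScriptedOrder:
    patient_id: str         # one of the preset test patients
    medication: str         # the drug the tester is told to order
    expected_category: str  # module that should fire, e.g. "drug-allergy"
    expected_severity: str  # severity level the module should be set to

# The pass/fail question is simply: did the expected module fire at
# the expected severity? User behavior never enters the equation.
order = ScriptedOrder("TEST-07", "warfarin 5 mg PO daily",
                      "drug-drug", "high")
```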

It is not clear whether doctors will actually heed decision support
warnings, or just ignore them. Since the medication orders are scripted,
we have no way of knowing if, hampered by the user interface, docs
without a script would end up ordering the wrong meds. And since the
“simulator” is really not a simulator, we have no way of knowing if an
unfriendly user interface caused the physician to enter the wrong
frequency, or dose, or even the wrong medication (Leapfrog has no actual
access to the EHR). We have no indication that the system actually
recorded the orders as entered, subsequently displayed a correct
medication list or transmitted the correct orders to the pharmacy. We
cannot be certain that a decision support module which generates
appropriate alerts for the test scripts, such as duplicate therapy, will
not generate dozens of superfluous alerts in other cases. We do know
that alerts are overridden in up to 96% of cases, so more is not necessarily better.

Do the high-scoring hospitals have a higher rate of preventing errors, or do they just have more docs mindlessly dismissing more alerts?

All in all, the Leapfrog CPOE evaluation tool is a pretty blunt
instrument. However, the notion of a flight simulator for EHRs is a good
one. A software package that allows users to simulate response to
lifelike presentations, and scores the interaction from beginning to
end, accounting for both software performance and user proficiency,
would facilitate a huge Leap forward in the quality of HIT. This would
be an awesome example of true innovation.
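As a thought experiment, such a simulator’s score might weigh the two dimensions explicitly. The sketch below is purely hypothetical; the weights and event fields are invented for illustration, and nothing like this exists in the Leapfrog tool.

```python
# Hypothetical end-to-end score for a simulated encounter, crediting
# both software behavior and user response. Weights and fields are
# invented for illustration.
def encounter_score(events, w_software=0.5, w_user=0.5):
    """Score one simulated encounter (e.g. a busy ED admission).

    Each event records whether the EHR behaved correctly (fired the
    right alert, saved the order as entered) and whether the user
    acted correctly (heeded a valid alert, entered the right dose).
    """
    if not events:
        return 0.0
    sw = sum(e["software_ok"] for e in events) / len(events)
    usr = sum(e["user_ok"] for e in events) / len(events)
    return 100.0 * (w_software * sw + w_user * usr)

# One short scenario: the system missed an alert (second event) and
# the user overrode a valid alert (third event).
scenario = [
    {"software_ok": True,  "user_ok": True},
    {"software_ok": False, "user_ok": True},
    {"software_ok": True,  "user_ok": False},
]
print(round(encounter_score(scenario), 1))  # 66.7 under equal weights
```

Whether such a score is fair would depend entirely on scenario design, but it would at least measure the pilot as well as the plane.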

Margalit Gur-Arie blogs frequently at her website, On Healthcare Technology. She was COO at GenesysMD (Purkinje), an HIT company focusing on web based EHR/PMS and billing services for physicians. Prior to GenesysMD, Margalit was Director of Product Management at Essence/Purkinje and HIT Consultant for SSM Healthcare, a large non-profit hospital organization.


Replies

  1. The evidence is very clear that having CPOE is far safer than not having CPOE. That is why Leapfrog continues to stand behind our standard that every hospital in America should adopt CPOE. Indeed, they should have done so a long time ago.
    But just having a CPOE system, while safer than not having one, is inadequate. Systems must be monitored and tested over time. What our simulation shows is the extent to which CPOE systems work as intended. Our simulation tests whether systems alert physicians to important errors. We also test whether systems alert physicians to unimportant errors; thus we test for the problem noted in the blog of “alert fatigue”, when there are too many alerts and physicians begin to ignore all of them.
    The reason systems don’t always work is not only about the quality of a vendor’s product. CPOE systems are not plug-and-play; they must be customized within the hospital over time. For that reason, their effectiveness should be monitored over time. Our tool is meant for that purpose, but there need to be many more tools for hospitals to use.

  2. Health care will not reform itself. The health care system is too proud, too certain, and stubbornly refuses to acknowledge the concerns of outsiders, be they the government or the general public. And so efforts against medical errors, despite all the innovative ideas about making improvements, will fail, just as attempts to improve the doctor-patient relationship have failed. Until physicians are required to have a much greater focus on patients as people, they will continue to fail to meet the needs of the public. Let’s have patient-centric, not medico-centric, medicine. Until then, talk of improving patient safety is moonshine, in my opinion.

  3. I will never use CPOE. At least now the secretary also knows what the orders are. If I enter them myself, then I am the only one who knows what is going on, since the nurse is massaging her information into the computer. Do I need to call lab and x-ray myself, too? Why don’t I get my own vital signs?
    We have med errors because the nurses are not listening to the doctors and because too many patient contacts are from non-nurses.
    Barring that, just have the patient order from a menu what they want.

  4. Dr. Marcinko,
    I am very happy to see that CPOE is doing so well in some hospitals. Would you happen to have more information on that one “hospital in the southeast” who achieved the tremendous 72% reduction in medication errors? If there is a study published somewhere, I would love to read it.
    I think all of us here would like to see CPOE succeed in saving lives and generally improving quality of care, and that is probably why we would like to see the FDA involved in improving these systems.

  5. The problem with this simulation tool, if I am understanding it correctly, is that it doesn’t simulate normal everyday use of the CPOE system (which might generate even worse results). To make another analogy to the clinical laboratory, my area of experience: as part of our accreditation process we received quarterly proficiency testing samples for most of our analytes. These were reconstituted and disguised as normal patient samples, put on a normal run by normal lab techs who were unaware that this was a “test.” Suggesting that the CMIO enter the CPOE orders is ridiculous!
    Also, a better simulation would be to construct an entire set of known “problem entries” to really test the system’s ability to catch known areas of weakness.
    I have to agree with those who criticize Leapfrog for blindly including CPOE in their initial set of goals years ago, without really understanding what they were getting into.

  6. The problem with this tool, and indeed the entire study, is that it doesn’t really measure avoidance of errors. It doesn’t measure errors at all. One would have to infer that issuance of alerts translates into prevention of errors, but this is a bit of a stretch, since there is no conclusive evidence to that effect.
    To be sure, the study didn’t show that patients are in more danger than if paper was used and zero alerts were generated.
    I think we need more studies of actual errors, like this very thoughtful one:
    http://pediatrics.aappublications.org/cgi/content/abstract/121/3/e421

  7. HIT is the American health care equivalent of BP’s Deepwater Horizon. The poor documentation stemming from terrible HIT will be fertile ground for garden-variety medical malpractice suits.

  8. Wonderful effort to get this out on the web.
    These data are of no surprise to the users of this crap. This so-called Leapfrog Group has caused hospitals to waste millions of dollars on devices that have not been approved by the FDA, and that they now say are dangerous.
    Why did they not do the tests before promoting this unsafe equipment to Congress and the hospitals?
    How many guinea pig patients died in this experiment because Leapfrog intimidated hospitals into deploying these devices under the pretense that they would not get paid because they were not safe without CPOE devices?
    The conduct of Leapfrog and its leadership merit criminal investigation.

  9. This is what Leapfrog states on its website:
    http://www.leapfroggroup.org/for_hospitals/leapfrog_hospital_survey_copy/leapfrog_safety_practices
    “Leapfrog consulted many renowned patient safety experts in their field to develop our safety practices. These practices were intended to give a focus for recognition and rewards at the same time delivering significant ‘leaps’ in patient safety.
    Research shows that if the first three leaps (Computerized Physician Order Entry, Intensive Care Unit Physician Staffing and Evidence-Based Hospital Referral) were implemented in all urban hospitals in the U.S. we could save over 57,000 lives, prevent as many as 3 million serious medication errors, and save $12.0 billion each year (Lwin 2008).
    The safety practices are reviewed each year by our experts to ensure they are in line with current research and medical thinking.”
    Is it not shocking that this FDA-imitating organization is still promoting virally toxic CPOE devices as a “leap” to safety after their “tool” (which, additionally, is not approved by the FDA) shows these devices to be meaningfully useless?