I’ve written several posts about the frustrating aspects of Meaningful Use Stage 2 Certification. The Clinical Quality Measures (CQMs) are certainly one of the problem spots, using standards that are not yet mature and requiring computation of numerators and denominators that are not based on data collected as part of the clinical care workflow.
There is a chasm between quality measurement expectations and EHR workflow realities, and it causes pain to all the stakeholders – providers, government, and payers. Quality measures are often based on data that can be gathered only via manual chart abstraction, or by interrupting clinicians’ documentation to prompt them for esoteric data elements.
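To make the numerator/denominator idea concrete, here is a minimal sketch of how a simple proportion CQM is computed from structured data. The measure, field names, and codes below are hypothetical illustrations of mine, not taken from any official eCQM specification:

```python
# Illustrative sketch: a proportion CQM is a fraction over a patient
# population -- denominator: patients to whom the measure applies;
# numerator: those in the denominator who meet the quality criterion.
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    diagnoses: set = field(default_factory=set)  # structured codes, e.g. SNOMED CT
    meds: set = field(default_factory=set)       # e.g. RxNorm codes

def compute_cqm(patients, in_denominator, in_numerator):
    """Return (numerator count, denominator count, rate) for a proportion measure."""
    denom = [p for p in patients if in_denominator(p)]
    numer = [p for p in denom if in_numerator(p)]
    rate = len(numer) / len(denom) if denom else 0.0
    return len(numer), len(denom), rate

# Hypothetical measure: adult diabetics on a listed diabetes medication.
DIABETES_DX = {"44054006"}            # example SNOMED CT code
DIABETES_MEDS = {"860975", "861007"}  # example RxNorm codes

patients = [
    Patient(55, {"44054006"}, {"860975"}),  # in denominator and numerator
    Patient(62, {"44054006"}, set()),       # in denominator only
    Patient(34, set(), set()),              # excluded from denominator
]

n, d, rate = compute_cqm(
    patients,
    in_denominator=lambda p: p.age >= 18 and bool(p.diagnoses & DIABETES_DX),
    in_numerator=lambda p: bool(p.meds & DIABETES_MEDS),
)
print(n, d, rate)  # 1 2 0.5
```

The point of the sketch is that the computation is trivial *if* the diagnosis and medication fields are reliably populated as structured data – the pain comes when a measure demands elements that only exist in free text or not at all.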
How do we fix CQMs?
1. Realign the expectations of quality measurement entities by limiting calculations (call it the CQM developers’ palette) to data that are likely to exist in EHRs. Yale recently created a consensus document identifying data elements that are consistently populated and reliable enough to serve in measure computations. This is a good start.
2. Add data elements to EHRs over time and ensure that structured data input fields use value sets from the Value Set Authority Center (VSAC) at the National Library of Medicine. NLM maintains a Meaningful Use data element catalog that is likely to expand in future stages of Meaningful Use.
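Binding a structured input field to a value set can be sketched in a few lines. The OID and codes below are placeholders of my own invention – in practice the value sets would be downloaded from the NLM Value Set Authority Center:

```python
# Illustrative sketch: restrict a structured EHR field to a published value set,
# so that data entered during care is computable for quality measurement.
# The OID and codes are hypothetical placeholders, not real VSAC content.

VALUE_SETS = {
    "2.16.840.1.113883.example": {  # placeholder value-set OID
        "8867-4",   # example LOINC code
        "8480-6",   # example LOINC code
    }
}

def validate_entry(oid: str, code: str) -> bool:
    """Accept a structured entry only if its code belongs to the bound value set."""
    return code in VALUE_SETS.get(oid, set())
```

Validation like this at data-entry time is what makes the “healthy tension” resolvable: the measure palette can safely assume these fields exist and contain only codes from the agreed value set.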
3. Greatly reduce the number of CQMs required by private and public entities to a consistent, manageable set, so that we can focus on ensuring the integrity of the data elements used in quality measures.
This approach will create a “healthy tension.” If HHS restricts measure developers to a finite palette of data elements, measure developers will express concern that available technology is limiting quality measurement. If measure developers continue to include data that do not exist in the EHR, then EHR developers will create burdensome add-on data entry screens that prompt providers for extra information just for the sake of CQMs.
A few years ago, Jacob Reider (now the interim National Coordinator) created slides illustrating how to cross the quality measurement chasm: modify the expectations of quality measure developers, while also enhancing EHRs with value sets from the VSAC and continuing to develop standards that support quality measurement (such as FHIR), optimizing workflow and usability.
As I’ve said before, I will do everything in my power to support Jacob Reider, ONC and “polishing” of Meaningful Use Stage 2.
Revising CQMs is likely to be a high priority of the HIT Standards Committee over the next year. Watch for that discussion at the November 13 HIT Standards Committee meeting.
John Halamka, MD, is the CIO at Beth Israel Deaconess Medical Center and the author of the popular Life as a Healthcare CIO blog, where he writes about technology, the business of healthcare and the issues he faces as the leader of the IT department of a major hospital system. He is a frequent contributor to THCB.
I completely agree that quality metrics reporting should be less complex for clinics. If you want providers to use data to improve outcomes, the time spent collecting data can’t overwhelm them. I think a clinic could use data to significantly improve its care if it had more leeway to take a deep dive into the data associated with a problem it can actually impact. Patients will benefit from real improvement, and that doesn’t [usually] happen without some time to reflect and act. You can’t do that on more than one or two issues at a time.
Would like to see this article translated into English if possible. Thanks!
“all the stakeholders – providers, government, and payers”