To understand how a landmark new report on diagnostic error breaks the mold, go past the carefully crafted soundbite (“Most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences”) and rummage around the report’s interior.
You can’t get much more medical establishment than the Institute of Medicine (IOM), recently renamed the National Academy of Medicine and author of the just-released Improving Diagnosis in Health Care. Yet in a chapter discussing the role clinician characteristics play in diagnostic accuracy, there’s a shockingly forthright discussion of the perils of age and arrogance.
“As clinicians age, they tend to have more trouble considering alternatives and switching tasks during the diagnostic process,” the report says. Personality factors can cause errors, too: “Arrogance, for instance, may lead to clinician overconfidence.”
Wow. Sure, both those assertions are extensively footnoted and hedged later with talk of the importance of teams (see below). Still, given the source, this practically qualifies as “trash talking.”
Of course, those quotes didn’t make it into the press release. There, inflammatory language was deliberately avoided so as not to give opponents any easy targets. (Disclosure: I was an advocate of an IOM report on this topic while consulting to an organization that eventually helped fund it. After testifying at the first committee meeting, I had no subsequent involvement.)
A plethora of health care quality data is being pushed out to the public, yet there are no rules to ensure the accuracy of what is presented. The health care industry lacks standards for how valid a quality measure should be before it is used in public reporting or pay-for-performance initiatives, although some standards have been proposed.
The NQF does a good job of reviewing and approving proposed measures presented to it, but lacks the authority to establish definitive quantitative standards that would apply broadly to purveyors of performance measures. However, as discussed earlier, many information brokers publicly report provider performance without transparency and without meeting basic validity standards. Indeed, even CMS, which helps support NQF financially, has adopted measures for the Physician Quality Reporting System that have not undergone NQF review and approval. Congress is now considering sustainable growth rate (“SGR”) repeal legislation that would have CMS work directly with specialty societies to develop measures and measurement standards, presumably without requiring NQF review and approval.
Without industry standards, payers, policy makers, and providers often become embroiled in a tug-of-war, with payers and policy makers asserting that existing measures are good enough and providers arguing they are not. Most often, neither side has data on how good the contested measures actually are. Most importantly, the public lacks valid information about quality, especially outcomes, and costs.
Indeed, most quality measurement efforts struggle to find measures that are scientifically sound yet feasible to implement with the limited resources available. Unfortunately, too often feasibility trumps sound science. In the absence of valid measures, bias in estimating the quality of care provided will likely increase in proportion to the risks and rewards associated with performance. The result is that the focus of health care organizations may change from improving care to “looking good” to attract business. Further, conscientious efforts to reduce measurement burden have significantly compromised the validity of many quality measures, making some nearly meaningless, or even misleading. Unfortunately, measurement bias often remains invisible because of limited reporting of data collection methods that produce the published results. In short, the measurement of quality in health care is neither standardized nor consistently accurate and reliable.
The unfortunate reality is that there is no body of expertise with responsibility for addressing the science of performance measurement. The National Quality Forum (NQF) comes closest, and while it addresses some scientific issues when deciding whether to endorse a proposed measure, NQF is not mandated to explore broader issues to advance the science of measure development, nor does it have the financial support or structure to do so.
An infrastructure is needed to gain national consensus on what to measure; how to define the measures; how to collect the data and survey for events; the accuracy of EHRs as a source of performance data; the cost-effectiveness of various measures; how to reduce the costs of data collection; how to set accuracy thresholds for measures; and how to prioritize the measures collected (informed by the relative value of the information and the costs of collecting it).
Despite this broad research agenda, there is little research funding to advance the basic science of performance measurement. Given the anticipated broad use of measures throughout the health system, funding could come from a public/private partnership modeled after the Patient-Centered Outcomes Research Institute or from a federally funded initiative, perhaps centered at AHRQ. Given budgetary constraints, finding the funding to support the science of measurement will be a challenge. Yet the costs of misapplying measures and making incorrect judgments about performance are substantial.
In the past, neither hospitals nor practicing physicians were accustomed to being measured and judged. Aside from periodic inspections by the Joint Commission (for which they had years of notice and on which failures were rare), hospitals did not publicly report their quality data, and payment was based on volume, not performance.
Physicians endured an orgy of judgment during their formative years – in high school, college, medical school, and in residency and fellowship. But then it stopped, or at least it used to. At the tender age of 29 and having passed “the boards,” I remember the feeling of relief knowing that my professional work would never again be subject to the judgment of others.
In the past few years, all of that has changed, as society has found our healthcare “product” wanting and determined that the best way to spark improvement is to measure us, to report the measures publicly, and to pay differentially based on these measures. The strategy is sound, even if the measures are often not.
The tendency of governments to impose crude performance metrics on hospitals is a well-known phenomenon, and the practice is growing as jurisdictions look for ways to cut their budgets. The latest example comes from Massachusetts.
As reported by the MA Hospital Association:
Governor Deval Patrick’s FY2013 state budget proposal includes $40 million in rate cuts for hospitals. A significant portion of these cuts would be made through highly questionable policy changes. One of the more troubling policies would double penalties on hospitals for re-admissions that occurred in 2010.
The 2012 MassHealth acute hospital RFA – the main contract between the state and hospitals serving Medicaid patients – introduced a new preventable readmission penalty for hospitals that MassHealth determined had higher-than-expected preventable readmission rates.
Inpatient payment rates for 24 hospitals were reduced by 2.2% in FY2012. Now the administration is proposing to double the penalty to 4.4% in FY2013. There are so many things wrong with this. First, as I have reported in the past:
Even if the readmission rate is the right metric to use for comparison purposes, we don’t have a model that would accurately compare one hospital to the others. This suggests that the time is not ripe to use this measure for financial incentives or penalties. It might give the impression of precision, but it is not, in fact, analytically rigorous enough for regulatory purposes.
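The imprecision problem can be sketched with hypothetical numbers. The example below (all hospital figures are invented for illustration) shows why a raw readmission rate carries very different statistical certainty depending on hospital volume, using a simple normal-approximation confidence interval:

```python
import math

def readmission_ci(readmits, discharges, z=1.96):
    """95% (normal-approximation) confidence interval for a raw readmission rate."""
    p = readmits / discharges
    se = math.sqrt(p * (1 - p) / discharges)  # binomial standard error
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical hospitals with the same observed rate but very different volumes.
small = readmission_ci(22, 150)     # small hospital, ~14.7% observed
large = readmission_ci(1320, 9000)  # large hospital, ~14.7% observed

for name, (p, lo, hi) in [("small", small), ("large", large)]:
    print(f"{name}: rate {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

For the small hospital the interval spans more than ten percentage points, while the large hospital’s spans under two; keying a fixed payment penalty to the point estimate treats both as equally precise. A real comparison would also require risk adjustment, which this sketch deliberately omits.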
With unsustainably high costs and tremendous gaps in quality and patient safety, the health care system is rife with opportunities for improvement. For years, many have seen quality measurement as a means to drive needed change. Private and public payers, public health departments, and independent accreditation organizations have asked health care providers to report on quality measures, and those measures have been publicly reported, tied to financial reimbursement, or both.
Throughout the Affordable Care Act (ACA), quality measures are tied to reimbursement in multiple programs. It is critical that the Department of Health and Human Services (HHS) move forward with a strategy for measure harmonization that accommodates local and national needs to evaluate outcomes and value. Additionally, a standard for calculating measures, such as the use of a minimal data set for the universe of measures, should be considered.
The field of quality measurement is at a critical juncture. The ACA, which mentions “quality measures,” “performance measures,” or “measures of quality” 128 times, heightened an already growing emphasis on quality measurement. With so much focus on quality, the resource burden on health care providers of collecting and reporting measures for multiple agencies and payers is significant.
Furthermore, the field itself is being transformed by the continued adoption of electronic health records (EHRs). Traditional measures are largely based on administrative or claims data. The increased use of EHRs creates the opportunity to develop sophisticated electronic clinical quality measures (eQMs) that leverage clinical data and which, when linked with clinical decision support tools and payment policy, have the potential to improve quality and decrease costs more dramatically than traditional measures. Innovative electronic measures on the horizon include “delta measures,” which calculate changes in patient health over time, and care coordination measures for the electronic transfer of patient information (e.g., a hospital discharge summary or consultant note successfully transmitted to the primary care physician). Additionally, traditional abstraction of clinical data requires labor-intensive chart review, which would be eliminated if the data could be extracted electronically.
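The “delta measure” idea can be illustrated with a minimal sketch. The records, field layout, and the `delta_measure` helper below are all hypothetical simplifications, not an actual eQM specification; the point is only that EHR data makes per-patient change over time directly computable:

```python
from datetime import date

# Hypothetical, simplified EHR lab results: (patient_id, result_date, hba1c_percent)
records = [
    ("p1", date(2014, 1, 10), 9.1),
    ("p1", date(2014, 7, 2), 7.8),
    ("p2", date(2014, 2, 5), 8.4),
    ("p2", date(2014, 8, 20), 8.9),
]

def delta_measure(records):
    """For each patient, the change between the first and last observed value."""
    by_patient = {}
    # Sort by patient, then date, so first/last values are chronological.
    for pid, _d, value in sorted(records, key=lambda r: (r[0], r[1])):
        by_patient.setdefault(pid, []).append(value)
    return {pid: vals[-1] - vals[0] for pid, vals in by_patient.items()}

print(delta_measure(records))  # p1 improved (negative delta); p2 worsened
```

A production eQM would add eligibility criteria, measurement-period windows, and risk adjustment, but the core computation is this simple once the clinical values are electronically extractable rather than chart-abstracted.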
Earlier this month, the National Quality Forum released its revised list of “Serious Reportable Events in Healthcare, 2011,” with four new events added to the list. While the NQF no longer refers to this list as “Never Events,” it doesn’t really matter, since everyone else does. And this shorthand has helped make this list, which will soon mark its tenth anniversary, a dominant force in the patient safety field.
The NQF was founded in 1999 at the recommendation of Al Gore’s Presidential Advisory Commission on healthcare quality. For its founding chair, the organization selected Ken Kizer, a no-nonsense, seasoned physician-administrator who had just done a spectacular job of transforming the VA system from the subject of scathing articles and movies into a model of high-quality healthcare, a veritable star in the patient safety galaxy.
Kizer’s original charge at NQF was to develop a Good Housekeeping seal-equivalent for quality measures (“NQF-endorsed measures”). But soon after he arrived, Kizer added another item to the NQF’s wish list: the creation of a list of medical errors and harm that might ultimately be the subject of a nationwide state-based reporting system. As Kizer said at the time,
This is intended to be a list of things that just should not happen in health care today. For example, operating on the wrong body part [or] a mother dying during childbirth. That’s such a rare event today that it’s generally viewed as something that just shouldn’t happen. Now, there’s probably going to be an occasion now and then when it happens and everything was done right, but it’s so infrequent that it means you have to investigate it every time it occurs. So “never” has quotes around it in this case. Now, wrong-site surgery is a different story—that should never happen. There’s no way that you should take off the right leg when you’re supposed to do the left one. So in this case, never really means never.
Unsurprisingly, the items on the list quickly became known as “Never Events.” Twenty-seven of them were announced in 2002, and the list was expanded and revised four years later. (This primer, written by my colleague Sumant Ranji for our patient safety website, AHRQ Patient Safety Network, is the best description of the list and some of its policy implications.)