A trio of groundbreaking publications on healthcare came out this April. They are my required reading list for CEOs. First is a study published in last week’s Journal of the American Medical Association (JAMA) by Eappen and colleagues (among them Atul Gawande). The study found that infections occurred in 5 percent of all surgeries in an unnamed southern hospital system. For U.S. hospitals this is not an unusual error rate, even though it is about 100 times higher than what most manufacturing plants would tolerate. No automaker would stay in business if 5 percent of its cars had a potentially fatal mechanical flaw.
If that’s not bad enough, the second finding is where we enter the realm of the absurd: according to the study, purchasers paid the hospital to make these errors. Medicare paid a bonus of more than $3,000 for each one of the infections; Medicaid got a relative “bargain,” paying only $900 per infection. But the real chumps were the commercial purchasers (CEOs, that’s you). Employers and other purchasers paid $39,000 for each infection, twelve times as much as your government paid through Medicare. Most companies could create a good job with $39,000, but instead they paid a hospital for the privilege of infecting an employee. How many good jobs have gone uncreated so businesses can pay for this waste?
Most employers are far more hard-nosed about managing their purchases of, say, office supplies than about purchasing health care, even though, unlike healthcare, paperclips never killed anyone and no stapler can singlehandedly sap a company’s quarterly profit margin. Yet, according to the Catalyst for Payment Reform, only about 11 percent of the dollars purchasers pay to healthcare providers are tied in any way to quality. The results reflect this neglect of fundamental business principles for purchasing: quality and safety problems remain rampant and unabated in health care, while employer health costs have doubled in a decade.
What if you had access to all of the medical research in the world? Or better yet, what if the physician treating your particularly complex or rare condition had access to the latest research? Or what if a public health organization in your community could access that research to inform policymakers of measures to advance public health?
“Wait,” you may think, “can’t they already access that research? Doesn’t the Internet make that possible?” Unfortunately, the answer to the first question is no; fortunately, the Internet can make such access possible. As it is today, most physicians and public health professionals have very limited access to health research, almost all of which is published online. Only about a quarter of the research published today ends up available to those working outside of universities, whose libraries subscribe to a good proportion of the research journals.
So, what are these health professionals missing? What difference to their work would access to research make? Cheryl Holzmeyer, Lauren Maggio, Laura Moorhead and I seek to answer these questions with a new National Science Foundation study for which we are currently recruiting physicians and staff of public health NGOs.
We seek to demonstrate the difference it makes to the daily work of these health professionals to have easy electronic access to all the biomedical and public health research – or at least that large proportion held by Stanford University Library – for a period of eleven months (with one month of limited access as a control). To assess the impact of this access, we provide participants with a special portal to the research literature and track when and what research is viewed, while following up with interviews on the use and value of this access.
Much has already been written about the Oregon Medicaid study that just came out in the New England Journal of Medicine. Unfortunately, the vast majority is reflex, rather than reflection. The study seems to serve as a Rorschach test of sorts, confirming people’s biases about whether Medicaid is “good” or “bad”.
The proponents of Medicaid point to all the ways in which Medicaid seems to help those who were enrolled – and the critics point to all the ways in which it didn’t. But, if we take a step back to read the study carefully and think about what it teaches us, there is a lot to learn.
Here is a brief, and inadequate, summary (you should really read the study): In 2008, Oregon used a lottery system to give a set of uninsured people access to Medicaid. This essentially gave Kate Baicker and her colleagues a natural experiment to study the effects of being on Medicaid.
Those who won the lottery and gained access were compared to a control group who participated in the lottery but weren’t selected. Opportunities to conduct such an experiment are rare and represent the gold standard for studying the effect of anything (e.g. Medicaid) on anything (like health outcomes).
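The logic of such a lottery-based natural experiment can be sketched in a few lines: because winning the lottery is random, a simple difference in mean outcomes between winners and non-winners gives an unbiased estimate of the effect of being offered Medicaid. The sketch below is a minimal illustration with made-up numbers, not data from the Oregon study; the outcome variable and effect size are invented for the example.

```python
import random
import statistics

def diff_in_means(treated, control):
    """Intention-to-treat estimate: mean outcome difference
    between lottery winners and non-winners."""
    return statistics.mean(treated) - statistics.mean(control)

# Hypothetical illustration (numbers are made up, not from the study):
# outcome = annual doctor visits per person.
random.seed(0)
control = [random.gauss(3.0, 1.0) for _ in range(1000)]   # lost the lottery
treated = [random.gauss(3.6, 1.0) for _ in range(1000)]   # won the lottery

effect = diff_in_means(treated, control)
print(round(effect, 2))  # estimated effect of winning the lottery
```

Because assignment is random, no regression adjustment is strictly needed for the comparison to be valid, which is exactly why such experiments are considered the gold standard.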
Two years after enrollment, Baicker and colleagues examined what happened to people who got Medicaid versus those who remained uninsured. There are six main findings from the study. Compared to people who did not receive Medicaid coverage:
- People with Medicaid used more healthcare services – more doctor visits, more medications and even a few more ER visits and hospitalizations, though these last two were not statistically significant.
- People with Medicaid were more likely to get lots of tests – some of them probably good (cholesterol screening, Pap smears, mammograms) and some of them probably bad (PSA tests).
- People with Medicaid, therefore, not surprisingly, spent more money on healthcare overall.
The Illinois hospital dinosaurs continue to defy evolution and prove that they are not extinct. I am talking about our health facilities planning board, which just turned down another Certificate of Need application for a new hospital, this time in the northwest suburbs of Chicago. The board justified the decision by stating that the new hospital would harm existing hospitals.
I know that the Chicago School of economics tells us that regulators serve the interests of those they regulate, usually at the expense of the public. But just because the Illinois planning board sits in Chicago, that doesn’t mean they have to slavishly follow the Chicago School. They could act in the public interest at least once in a while! (Though if the board started approving too many new health facilities, someone might notice that they are not needed and put them out of a job.)
Ensuring that Americans who live in rural areas have access to health care has always been a policy priority. In healthcare, where nearly every policy decision seems contentious and partisan, there has been widespread, bipartisan support for helping providers who work in rural areas. The hallmark of the policy effort has been the Critical Access Hospital (CAH) program – and new evidence from our latest paper in the Journal of the American Medical Association suggests that our approach needs rethinking. In our desire to help providers that care for Americans living in rural areas, we may have forgotten a key lesson: it’s not about access to care. It’s about access to high-quality care. And on that policy goal, we’re not doing a very good job.
A little background will be helpful. In the 1980s and 1990s, a large number of rural hospitals closed as the number of people living in rural areas declined and Medicare’s Prospective Payment System made it more difficult for some hospitals to manage their costs. A series of policy efforts culminated in Congress creating the Critical Access Hospital program as part of the Balanced Budget Act of 1997. The goals of the program were simple: provide cost-based reimbursement so that hospitals that were in isolated areas could become financially stable and provide “critical access” to the millions of Americans living in these areas. Congress created specific criteria to receive a CAH designation: hospitals had to have 25 or fewer acute-care beds and had to be at least 35 miles from the nearest facility (or 15 miles if one needed to cross mountains or rivers). By many accounts, the program was a “success” – rural hospital closures fell as many institutions joined the program. There was widespread consensus that the program had worked.
Despite this success, there were two important problems in the legislation, and the way it was executed, that laid the groundwork for the difficulties of today.
In the past, neither hospitals nor practicing physicians were accustomed to being measured and judged. Aside from periodic inspections by the Joint Commission (for which they had years of notice and on which failures were rare), hospitals did not publicly report their quality data, and payment was based on volume, not performance.
Physicians endured an orgy of judgment during their formative years – in high school, college, medical school, and in residency and fellowship. But then it stopped, or at least it used to. At the tender age of 29 and having passed “the boards,” I remember the feeling of relief knowing that my professional work would never again be subject to the judgment of others.
In the past few years, all of that has changed, as society has found our healthcare “product” wanting and determined that the best way to spark improvement is to measure us, to report the measures publicly, and to pay differentially based on these measures. The strategy is sound, even if the measures are often not.
This month the Agency for Healthcare Research and Quality (AHRQ) published a new report that identifies the most promising practices for improving patient safety in U.S. hospitals.
An update to the 2001 publication Making Health Care Safer: A Critical Analysis of Patient Safety Practices, the new report reflects just how much the science of safety has advanced.
A decade ago the science was immature; researchers posited quick fixes without fully appreciating the difficulty of challenging and changing accepted behaviors and beliefs.
Today, based on years of work by patient safety researchers—including many at Johns Hopkins—hospitals are able to implement evidence-based solutions to address the most pernicious causes of preventable patient harm. According to the report, here is a list of the top 10 patient safety interventions that hospitals should adopt now.
Writing in the March 20 issue of JAMA, Drs. Douglas Noble and Lawrence Casalino say that supporters of Accountable Care Organizations (ACOs) are all muddled over “population health.”
This correspondent says it is the article that is muddled, and that the readers of JAMA deserve better.
According to the authors, after the Affordable Care Act launched the Medicare Accountable Care Organizations (ACOs), their stated purpose morphed from Health System 2.0, controlling the chronic care costs of their assigned patients, to Health System 3.0, collaboratively addressing “population health” for an entire geography.
Between the here of “improving chronic care” and the there of “population health,” Drs Noble and Casalino believe ACOs are going to have to confront the additional burdens of preventive care, social services, public health, housing, education, poverty and nutrition. That makes the authors wonder if the term “population health” in the context of ACOs is unclear. If so, that lack of clarity could ultimately lead naive politicians, policymakers, academics and patients to be disappointed when ACOs start reporting outcomes that are limited to chronic conditions.
American consumers know more about the quality and prices of restaurants, cars, and household appliances than they do about their health care options, which can be a matter of life and death. While we have made some progress in getting consumers reliable quality information thanks to organizations like Bridges to Excellence and The Leapfrog Group, for most Americans, shockingly little information still exists about health care prices, even for the most basic services. And several studies have shown us that the price for an identical procedure can vary as much as 700 percent with no difference in quality. Moreover, with health care comprising 18 percent of the US economy and costs rising every day, it is extremely troubling that most health care prices are still shrouded in mystery.
Our organizations have been steadily pushing health plans and providers to share price information more freely, and we are seeing progress. But public policy—or even just pending legislation—can provide a powerful motivator as well.
Unfortunately, our new Report Card on State Price Transparency Laws shows most states are not doing their part to help consumers be informed and empowered to shop for higher value care. In the Report Card released Monday, 72 percent of states failed, receiving a “D” or an “F.” Just two, Massachusetts and New Hampshire, received an “A.” The Report Card based grades on criteria including: sharing information about the price of both inpatient and outpatient services; sharing price information for both doctors and hospitals; sharing data on a public website and in public reports; and allowing patients to request pricing information prior to a hospital admission.
Not long ago the Atlantic published a provocative article entitled “The Robot Will See You Now.” Using the supercomputer Watson as a starting point, the author explored the mind-bending possibilities of e-care. In this near future, so many aspects of medicine will be captured by automated technology that the magazine asked whether your doctor is becoming obsolete.
The IT version of health care includes continuous medical monitoring (e.g., a watch that checks all vital signs), robotic surgery without human supervision, a lifelong personal database built around your genetic code, and intensive preventive care modeled to each person’s needs; all supervised by artificial intelligence with access to a complete file of medical research and findings. The e-doctor will never forget, never tire, never get confused, never take a day off, and will give 24/7 medical care at any location, anywhere in the world, for a fraction of the cost. Perfect care, everywhere, at every moment, for a pittance.
While the transformation for doctors seems clear, a shift from being at the core of medicine to being what the article described as “super-quality-control officers,” what intrigues me is not how doctors will change (or retire). The real question is how patients will adapt to this new healthcare world. Particularly when experiencing extreme or life-threatening illness, will patients accept that family, friends and a pumped-up iPad are enough?