A blistering attack by the national editor of the New England Journal of Medicine against the “less is more” movement in medicine omitted that the publication’s former editor-in-chief played a foundational role in popularizing the idea of widespread medical waste.
The commentary in late December by Dr. Lisa Rosenbaum, “The Less-Is-More Crusade – Are We Overmedicalizing or Oversimplifying?”, has attracted intense attention. Rosenbaum berates a “missionary zeal” to reduce putative overtreatment that she says is putting dangerous pressure on physicians to abstain from recommending some helpful treatments. She also asserts that the research by Dartmouth investigators and others who claim 30 percent waste in U.S. health care, in which she once fervently believed, is actually based on suspect methodology.
What Rosenbaum fails to mention is that the policy consensus she seeks to puncture – that the sheer magnitude of wasted dollars in U.S. health care offers “the promise of a solution without trade-offs” – originated in the speeches, articles and editorials of the late Dr. Arnold Relman, the New England Journal’s editor from 1977 to 1991.
I am not sure how many docs still do this, but I still read the actual hard copy of my New England Journal of Medicine, and that means I flip past ad pages with smiling grandfathers playing with grandchildren thanks to supercalifragilistic products on my way to scholarly papers with tables and figures. But this time, I stopped in puzzlement when I came across an ad from Intermountain.
Intermountain is a health system based in Utah, highly respected for its sound approach to quality and cost control, but not broadly known for cancer care in the way of centers like Dana-Farber or Sloan Kettering. Digging further on the website uncovers the actual offering, a streamlined five-step process:
- Send tumor sample
- Deep sequencing of 96 key cancer genes
- Genomic data analysis
- Tumor board makes a treatment recommendation
- Facilitated procurement of the relevant cancer drugs
Turn-around time is about two weeks, fast enough to wait for the information before starting a regimen.
“I just want you to know, I won’t have a colonoscopy,” my new patient said with some fervor. “And I don’t want to take a lot of medications.”
I looked him straight in the eyes and said, “This is America; you don’t have to do anything, and I work for you. My job is to help you know your options.”
He seemed to relax. I reflected on the words I had just uttered, yet another time – it is the way I often try to set the tone as a non-authoritarian, patient focused physician.
“You don’t have to do anything”, of course, only applies to the patient.
The doctor has to do a lot of things, like document a treatment or follow-up plan for Medicare patients with a BMI over 30, or provide computer generated patient education to a minimum percentage of patients, and achieve a certain percentage of e-prescriptions. And right about now, we are starting to see financial consequences if too many of our patients, like the man I had just met, don’t want to take the medications that can bring their blood pressures or blood sugars below certain targets.
Last week, a study in the New England Journal of Medicine called into question the effectiveness of surgical checklists for preventing harm.
Atul Gawande—one of the original researchers demonstrating the effectiveness of such checklists and author of a book on the subject—quickly wrote a rebuttal at The Incidental Economist.
He writes, “I wish the Ontario study were better,” and I join him in that assessment, but want to take it a step further.
Gawande first criticizes the study for being underpowered. I had a hard time swallowing this argument, given that they looked at over 200,000 cases from 100 hospitals. I had to do the math. A quick calculation shows that, given the rates of death in their sample, they had only about 40% power.
Then I became curious about Gawande’s original study. They achieved better than 80% power with just over 7,500 cases. How is this possible?!?
The most important thing I keep in mind when I think about statistical significance (other than the importance of clinical significance) is that it depends not only on the sample size, but also on the baseline prevalence and on the magnitude of the difference you are looking for. In Gawande’s original study, the baseline prevalence of death was 1.5%.
This is substantially higher than the 0.7% in the Ontario study. When your baseline prevalence is very low, as with rare outcomes like surgical death, the absolute differences you can hope to detect shrink, and you have to pump up the sample size to achieve statistical significance.
So, Gawande’s study achieved adequate power because their baseline rate was higher and the difference they found was bigger. The Ontario study would have needed a little over twice as many cases to achieve 80% power.
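The comparison above can be sketched with a back-of-the-envelope power calculation. This uses the standard arcsine (Cohen's h) approximation for comparing two proportions; the death rates and per-arm case counts are rounded assumptions based on the figures quoted in the two papers, not exact values.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cohens_h(p1, p2):
    """Cohen's h: arcsine-transformed difference of two proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

def power_two_proportions(p1, p2, n_per_arm):
    """Approximate power of a two-sided, two-sample test of proportions."""
    z_crit = 1.959964  # two-sided critical value at alpha = 0.05
    delta = cohens_h(p1, p2) * math.sqrt(n_per_arm / 2)
    return norm_cdf(delta - z_crit) + norm_cdf(-delta - z_crit)

# Gawande's study: death fell from ~1.5% to ~0.8%, roughly 3,750 cases per period.
power_gawande = power_two_proportions(0.015, 0.008, 3_750)

# Ontario study: ~0.71% vs ~0.65%, roughly 100,000 cases per period.
power_ontario = power_two_proportions(0.0071, 0.0065, 100_000)

print(f"Gawande: power = {power_gawande:.2f}")  # large effect on a higher baseline rate
print(f"Ontario: power = {power_ontario:.2f}")  # tiny effect on a rarer outcome
```

Under these assumed rates, the small study clears 80% power while the far larger one lands near 40%, which is exactly the point: the effect size and baseline rate matter as much as raw case counts.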
This raises an important question: why didn’t the Ontario study look at more cases?
The Kaiser Family Foundation (KFF) recently released a study showing that 42% of Americans are unaware that Obamacare (the Affordable Care Act) remains the “law of the land.” News like this seems to us to act as a Rorschach test of how observers feel about the law. Considering 50% of Americans can’t identify New York on a map, we tend not to read too much into these polls. However, by the logic of extrapolation, since we know that the ACA remains law, we are in the elite 58% (it’s about time we made it into the elite of something).
Almost in parallel with the KFF news, the New England Journal of Medicine published a follow-up study of the “Oregon experiment.” For those who haven’t been following closely, the study found that previously uninsured people who were enrolled in Medicaid did not see an improvement in clinical measures when compared to those who remained uninsured. The study did, however, seem to show a reduction in financial distress for the insured.
Another contentious study, another Rorschach test. The problem we see with the polarity of views is that both sides seem to be cranking up the extrapolation machine, using single studies and data points to draw broad conclusions and gin up opinions about the ACA’s success or lack thereof. In light of the fact that for most practical matters the ACA doesn’t really get going until 2014, the extrapolation-noise-generator approach smacks of a lack of analytical rigor in our view. We will know soon enough how the program is doing: exchanges start enrolling on 10/1.
As investors, we should state upfront that we tend to give more weight to financial returns than to what the philosopher-kings might call the political context. So what caught our eye in the Oregon study was that Medicaid recipients had higher healthcare utilization rates (and associated costs) than the uninsured. The connection between gaining insured status and healthcare utilization should not come as a surprise, since there is a very extensive literature elucidating this connection.
Conservatives love to apply “cost-benefit analysis” to government programs—except in health care. In fact, working with drug companies and warning of “death panels,” they slipped language into Obamacare banning cost-effectiveness research. Here’s how that happened, and why it can’t stand.
Why are you reading this when you could be doing jumping jacks?
And how come you’ve gone on to read this sentence when you could be having a colonoscopy?
You and I could be doing all sorts of things right now that we have reason to believe would improve our health and life expectancy. We could be working out at the gym, or waiting in a doctor’s office to have our bodies scanned and probed for tumors and polyps. We could be using this time to eat a steaming plate of broccoli, or attending a support group to help us overcome some unhealthy habit.
Yet you are not doing those things right now, and the chances are very strong that I am not either. Why not?
In November 2008, the New England Journal of Medicine convened a small roundtable to discuss “Redesigning Primary Care.”
“U.S. primary care is in crisis,” the roundtable’s description reads. “As a result … [the] ranks are thinning, with practicing physicians burning out and trainees shunning primary care fields.”
Nearly five years out — and dozens of reforms and pilots later — the primary care system’s condition may still be acute. But policymakers, health care leaders and other innovators are more determined than ever: after decades in which primary care’s problems were largely ignored, they’re not letting this crisis go to waste.
Ongoing Shortage Forcing Decisions
The NEJM roundtable summarized the primary care problem thusly: too few primary care doctors are trying to care for too many patients with a rising number of chronic conditions, for relatively little compensation.
A national shortage of critical care physicians and beds means difficult decisions for healthcare professionals: how to determine which of the sickest patients are most in need of access to the intensive care unit. What if patients’ electronic health records could help a physician determine ICU admission by reliably calculating which patient had the highest risk of death?
Emerging health technologies – including reliable methods to rate the severity of a patient’s condition – may provide powerful tools to efficiently use scarce and costly health resources, says a team of University of Michigan Health System researchers in the New England Journal of Medicine.
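As a purely hypothetical sketch of that idea — the variables, thresholds, and weights below are invented for illustration and are not the study's model — an EHR-derived severity score could rank candidates for a scarce ICU bed:

```python
# Hypothetical sketch: rank ICU candidates by a toy severity score
# built from EHR vitals and labs. All cutoffs and weights are invented
# for illustration only.

def severity_score(patient):
    score = 0
    if patient["systolic_bp"] < 90:  score += 3   # hypotension
    if patient["resp_rate"] > 30:    score += 2   # respiratory distress
    if patient["lactate"] > 4.0:     score += 3   # poor perfusion
    if patient["age"] > 75:          score += 1
    return score

patients = [
    {"id": "A", "systolic_bp": 85, "resp_rate": 34, "lactate": 5.1, "age": 68},
    {"id": "B", "systolic_bp": 118, "resp_rate": 18, "lactate": 1.2, "age": 80},
]

# Highest-risk patient first in the queue for the available bed.
queue = sorted(patients, key=severity_score, reverse=True)
print([p["id"] for p in queue])  # → ['A', 'B']
```

A real model would be fit to outcomes data rather than hand-weighted, but the design question is the same one the authors raise: should the queue for the ICU actually track severity?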
“The lack of critical care beds can be frustrating and scary when you have a patient who you think would benefit from critical care, but who can’t be accommodated quickly. Electronic health records – which provide us with rich, reliable clinical data – are untapped tools that may help us efficiently use valuable critical care resources,” says hospitalist and lead author Lena M. Chen, M.D., M.S., assistant professor in internal medicine at the University of Michigan and an investigator at the Center for Clinical Management Research (CCMR), VA Ann Arbor Healthcare System.
The UMHS and VA study referenced in the article finds that patients’ severity of illness is not always strongly associated with their likelihood of being admitted to the ICU, challenging the notion that limited and expensive critical care is reserved for the sickest patients. ICU admissions for non-cardiac patients closely reflected severity of illness (i.e., sicker patients were more likely to go to the ICU), but ICU admissions for cardiac patients did not, the study found. While the reasons for this are unclear, authors note that the ICU’s explicit role is to provide care for the sickest patients, not to respond to temporary staffing issues or unavailable recovery rooms.
Twenty-five years ago this month, the New England Journal of Medicine published a special report on something that’s become medical gospel: aspirin.
That’s right. Not as in “take two and call me in the morning,” but in the realm of the randomized double-blinded placebo-controlled trial. Or what we generally consider the gold standard of evidence in medical research.
If you’ve often heard that bit of jargon but always wondered why it’s so exalted, break it down:
- randomized: the assignment of the treatment (aspirin) or placebo (‘inert’ sugar pill) is made by chance, not in any planned sequence.
- double-blinded: neither the researchers nor the subjects know who is taking what (everything is coded so that analysts can find out at the end).
- placebo-controlled: the study compares the treatment against placebo to see if it’s helpful or harmful.
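A toy simulation shows how the three ingredients fit together. Everything here — the subject IDs, the arm codes, the seed — is invented for illustration:

```python
import random

def randomize(subject_ids, seed=None):
    # Randomized: each assignment is made by chance, not in a planned sequence.
    rng = random.Random(seed)
    return {sid: rng.choice(["A", "B"]) for sid in subject_ids}

# Placebo-controlled: one coded arm gets the treatment, the other an inert pill.
arm_contents = {"A": "aspirin", "B": "placebo"}

# Double-blinded: only the coded key circulates during the trial; neither
# subjects nor researchers see arm_contents until the analysis is unblinded.
assignment_key = randomize(range(10), seed=42)
print(assignment_key)
```

Real trials use more careful schemes (blocked or stratified randomization) to keep arm sizes balanced, but the principle is the same: chance decides, and the code stays sealed until the end.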
Even though the pain-relieving and fever-reducing properties of salicylates had been known since the time of Hippocrates, it was in 1899 that Bayer first patented and marketed acetylsalicylic acid as what came to be known worldwide as aspirin.
A mere 89 years later, researchers from the “Physicians Health Study” did something unusual. Citing aspirin’s “extreme beneficial effects on non-fatal and fatal myocardial infarction”–doctor speak for heart attacks–the study’s Data Monitoring Board recommended terminating the aspirin portion of the study early (the study also was looking at the effects of beta-carotene). In other words, the benefit in preventing heart attacks was so clear at 5 years instead of the planned 12 years of study that it was deemed unethical to continue blinding participants or using placebo.
The paper from the New England Journal of Medicine that reports azithromycin might cause cardiovascular death is not new to electrophysiologists tasked with deciding antibiotic choices in patients with Long QT syndrome or in those who take other antiarrhythmic drugs. Heck, even the useful Arizona CERT QTDrugs.org website could have told us that.
What was far scarier to me, though, was how the authors of this week’s paper reached their estimates of the magnitude of azithromycin’s cardiovascular risk.
Welcome to the underworld of Big Data Medicine.
Careful review of the Methods section of this paper reveals that “persons enrolled in the Tennessee Medicaid program” were the subjects, and that the data collected were “Computerized Medicaid data, which were linked to death certificates and to a state-wide hospital discharge database” and “Medicaid pharmacy files.” Subjects were anyone prescribed azithromycin from 1992–2006 who had “not had a diagnosis of drug abuse or resided in a nursing home in the preceding year and had not been hospitalized in the prior 30 days.” They also had to be “Medicaid enrollees for at least 365 days and have regular use of medical care.”
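Those Methods amount to a cohort filter. A hypothetical sketch (the field names are invented for illustration and are certainly not the actual Medicaid schema):

```python
# Hypothetical sketch of the cohort filter the Methods describe; field
# names are invented, not taken from the real Medicaid files.
def in_cohort(person):
    return (
        person["drug"] == "azithromycin"
        and 1992 <= person["rx_year"] <= 2006
        and not person["drug_abuse_dx_past_year"]
        and not person["nursing_home_past_year"]
        and not person["hospitalized_past_30d"]
        and person["medicaid_enrolled_days"] >= 365
        and person["regular_medical_care"]
    )

records = [
    {"drug": "azithromycin", "rx_year": 1999, "drug_abuse_dx_past_year": False,
     "nursing_home_past_year": False, "hospitalized_past_30d": False,
     "medicaid_enrolled_days": 400, "regular_medical_care": True},
    {"drug": "azithromycin", "rx_year": 1999, "drug_abuse_dx_past_year": False,
     "nursing_home_past_year": False, "hospitalized_past_30d": True,  # excluded
     "medicaid_enrolled_days": 400, "regular_medical_care": True},
]

cohort = [r for r in records if in_cohort(r)]
print(len(cohort))  # → 1
```

Every `and` clause prunes the population a little further, which is exactly why the resulting cohort may look nothing like the patients who actually get the drug.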
Hey, no selection bias introduced with those criteria, right? But the authors didn’t stop there.