NEJM

Last week, a study in the New England Journal of Medicine called into question the effectiveness of surgical checklists for preventing harm.

Atul Gawande—one of the original researchers demonstrating the effectiveness of such checklists and author of a book on the subject—quickly wrote a rebuttal at The Incidental Economist.

He writes, “I wish the Ontario study were better,” and I join him in that assessment, but want to take it a step further.

Gawande first criticizes the study for being underpowered. I had a hard time swallowing this argument given that they looked at over 200,000 cases from 100 hospitals, so I had to do the math. A quick calculation shows that, given the rates of death in their sample, they had only about 40% power [1].

Then I became curious about Gawande’s original study. They achieved better than 80% power with just over 7,500 cases. How is this possible?!?

The most important thing I keep in mind when I think about statistical significance—other than the importance of clinical significance [2]—is that it depends not only on the sample size, but also on the baseline prevalence and on the magnitude of the difference you are looking for. In Gawande’s original study, the baseline prevalence of death was 1.5%.

This is substantially higher than the 0.7% in the Ontario study. When your baseline prevalence approaches the extremes (i.e., 0% or 50%), you have to pump up the sample size to achieve statistical significance.

So, Gawande’s study achieved adequate power because their baseline rate was higher and the difference they found was bigger. The Ontario study would have needed a little over twice as many cases to achieve 80% power.
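To make that concrete, here is a minimal sketch of a two-proportion power calculation in Python (using statsmodels). The 0.7% and 1.5% baseline death rates come from the discussion above; the treated-arm rates and per-arm sample sizes are illustrative assumptions, not the exact figures from either paper.

    # Minimal sketch: power to detect a drop in mortality between two groups.
    # Baseline rates (0.7% Ontario, 1.5% checklist study) are from the text above;
    # the "after" rates and per-arm sample sizes are illustrative assumptions.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    def power_for(baseline, treated, n_per_arm, alpha=0.05):
        effect = proportion_effectsize(baseline, treated)
        return NormalIndPower().solve_power(effect_size=effect,
                                            nobs1=n_per_arm,
                                            alpha=alpha,
                                            ratio=1.0)

    # Ontario-like scenario: low baseline, small absolute difference (assumed).
    print(power_for(0.007, 0.0064, n_per_arm=100_000))   # well under 0.8
    # Checklist-study-like scenario: higher baseline, bigger difference (assumed).
    print(power_for(0.015, 0.008, n_per_arm=3_750))      # above 0.8

The point of the exercise is that power is driven as much by the baseline rate and the size of the difference you are hunting for as by the raw case count.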

This raises an important question: why didn’t the Ontario study look at more cases?

Continue reading “Why Bad Research Makes It into Good Medical Journals”


The Kaiser Family Foundation (KFF) recently released a study showing that 42% of Americans are unaware that Obamacare (the Affordable Care Act) remains the “law of the land.” News like this seems to us to act as a Rorschach test for how observers feel about the law. Considering that 50% of Americans can’t identify New York on a map, we tend not to read too much into these polls. However, according to the logic of extrapolation, since we know that the ACA remains law, we are in the elite 58% (it’s about time we made it into the elite of something).

Almost in parallel with the KFF news, the New England Journal of Medicine published a follow-up study of the “Oregon experiment.” For those who haven’t been following closely, the study found that previously uninsured people who were enrolled in Medicaid did not see an improvement in clinical measures when compared with those who remained uninsured. The study did, however, seem to show a reduction in financial distress for the insured.

Another contentious study, another Rorschach test (example, example). The problem we see with the polarity of views is that both sides seem to be cranking up the extrapolation machine, using single studies and data points to gin up broad conclusions about the ACA’s success or lack thereof. Given that, for most practical matters, the ACA doesn’t really get going until 2014, the extrapolation-noise-generator approach smacks of a lack of analytical rigor in our view. We will know soon enough how the program is doing… exchanges start enrolling on 10/1.

As investors, we should state upfront that we tend to give more weight to financial returns than what the philosopher-kings might call the political context. So what caught our eye in the Oregon study was that Medicaid recipients had higher healthcare utilization rates (and associated costs) than the uninsured. The connection between gaining insured status and healthcare utilization should not come as a surprise since there is a very extensive literature elucidating this connection.

Continue reading “Into the Extrapolation Machine”


Conservatives love to apply “cost-benefit analysis” to government programs—except in health care. In fact, working with drug companies and warning of “death panels,” they slipped language into Obamacare banning cost-effectiveness research. Here’s how that happened, and why it can’t stand.

Why are you reading this when you could be doing jumping jacks?

And how come you’ve gone on to read this sentence when you could be having a colonoscopy?

You and I could be doing all sorts of things right now that we have reason to believe would improve our health and life expectancy. We could be working out at the gym, or waiting in a doctor’s office to have our bodies scanned and probed for tumors and polyps. We could be using this time to eat a steaming plate of broccoli, or attending a support group to help us overcome some unhealthy habit.

Yet you are not doing those things right now, and the chances are very strong that I am not either. Why not?

Continue reading “The Republican Case For Waste In Health Care”


In November 2008, the New England Journal of Medicine convened a small roundtable to discuss “Redesigning Primary Care.”

“U.S. primary care is in crisis,” the roundtable’s description reads. “As a result … [the] ranks are thinning, with practicing physicians burning out and trainees shunning primary care fields.”

Nearly five years out — and dozens of reforms and pilots later — the primary care system’s condition may still be acute. But policymakers, health care leaders and other innovators are more determined than ever: After decades where primary care’s problems were largely ignored, they’re not letting this crisis go to waste.

Ongoing Shortage Forcing Decisions

The NEJM roundtable summarized the primary care problem thusly: too few primary care doctors, receiving relatively little compensation for their efforts, are trying to care for too many patients with a rising number of chronic conditions.

Continue reading “The Radical Rethinking of Primary Care Starts Now”


It wasn’t until I had read this:

A national shortage of critical care physicians and beds means difficult decisions for healthcare professionals: how to determine which of the sickest patients are most in need of access to the intensive care unit. What if patients’ electronic health records could help a physician determine ICU admission by reliably calculating which patient had the highest risk of death?

Emerging health technologies – including reliable methods to rate the severity of a patient’s condition – may provide powerful tools to efficiently use scarce and costly health resources, says a team of University of Michigan Health System researchers in the New England Journal of Medicine.

“The lack of critical care beds can be frustrating and scary when you have a patient who you think would benefit from critical care, but who can’t be accommodated quickly. Electronic health records – which provide us with rich, reliable clinical data – are untapped tools that may help us efficiently use valuable critical care resources,” says hospitalist and lead author Lena M. Chen, M.D., M.S., assistant professor in internal medicine at the University of Michigan and an investigator at the Center for Clinical Management Research (CCMR), VA Ann Arbor Healthcare System.
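As a thought experiment on the “what if” posed above, here is a minimal sketch of how an EHR-derived mortality risk score could be used to rank patients competing for scarce ICU beds. The features, the synthetic data, and the logistic-regression model are all illustrative assumptions, not the severity measure used in the UMHS/VA study.

    # Hypothetical sketch: rank patients for scarce ICU beds by a mortality
    # risk predicted from EHR data. Features, data, and model are illustrative
    # assumptions, not the severity measure used in the study discussed here.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for EHR features (e.g., age, heart rate, BP, lactate)
    X = rng.normal(size=(1000, 4))
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, 0.5, -0.6, 1.0]))))

    model = LogisticRegression().fit(X, y)

    # New patients awaiting an ICU decision: higher score = higher predicted risk
    candidates = rng.normal(size=(5, 4))
    risk = model.predict_proba(candidates)[:, 1]
    triage_order = np.argsort(-risk)              # the model's "sickest first" ranking
    print(triage_order, np.round(risk[triage_order], 3))

Whether such a score should drive admission decisions is exactly the kind of question the study raises; the sketch only shows that the ranking itself is easy to compute once the EHR data are in hand.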

The UMHS and VA study referenced in the article finds that patients’ severity of illness is not always strongly associated with their likelihood of being admitted to the ICU, challenging the notion that limited and expensive critical care is reserved for the sickest patients. ICU admissions for non-cardiac patients closely reflected severity of illness (i.e., sicker patients were more likely to go to the ICU), but ICU admissions for cardiac patients did not, the study found. While the reasons for this are unclear, authors note that the ICU’s explicit role is to provide care for the sickest patients, not to respond to temporary staffing issues or unavailable recovery rooms.

Continue reading “Building a Better Health Care System: Electronic Health Records Could Help Identify Which Patients Most Need ICU Resources”


Twenty-five years ago this month, the New England Journal of Medicine published a special report on something that’s become medical gospel:

Aspirin.

That’s right. Not as in “take two and call me in the morning,” but in the realm of the randomized double-blinded placebo-controlled trial. Or what we generally consider the gold standard of evidence in medical research.

If you’ve often heard that bit of jargon but always wondered why it’s so exalted, break it down:

  • randomized: treatment (aspirin) or placebo (‘inert’ sugar pill) is assigned by chance, not in any planned sequence.
  • double-blinded: neither the researchers nor the subjects know who is taking what (everything is coded so that analysts can find out at the end; a small sketch of this coding follows the list).
  • placebo-controlled: the study compares the treatment against placebo to see if it’s helpful or harmful.
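As a toy illustration of how that coding works mechanically (a generic sketch, not the Physicians’ Health Study’s actual procedure), assignments can be generated at random and stored only under a code, with the code-to-arm key held back until the analysis:

    # Toy sketch of randomized, coded (blinded) assignment. Illustrative only;
    # not the actual procedure of the Physicians' Health Study.
    import random

    ARMS = ["aspirin", "placebo"]

    def assign(participant_ids, seed=1988):
        rng = random.Random(seed)
        key = {}          # code -> true arm; held back until analysis
        assignments = {}  # participant -> code; all that clinicians and subjects see
        for i, pid in enumerate(participant_ids):
            arm = rng.choice(ARMS)        # randomized: no planned sequence
            code = f"SUBJ-{i:05d}"
            key[code] = arm               # revealed only when the study is unblinded
            assignments[pid] = code
        return assignments, key

    assignments, key = assign(range(10))
    print(assignments[0], key[assignments[0]])   # unblinding a single subject

Real trials typically use blocked or stratified randomization rather than a simple per-subject coin flip, but the blinding logic is the same.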

Even though the pain-relieving and fever-reducing properties of salicylates had been known since the time of Hippocrates, it was in 1899 that Bayer first patented and marketed acetylsalicylic acid, which came to be known worldwide as aspirin.

A mere 89 years later, researchers from the “Physicians Health Study” did something unusual. Citing aspirin’s “extreme beneficial effects on non-fatal and fatal myocardial infarction”–doctor speak for heart attacks–the study’s Data Monitoring Board recommended terminating the aspirin portion of the study early (the study also was looking at the effects of beta-carotene). In other words, the benefit in preventing heart attacks was so clear at 5 years instead of the planned 12 years of study that it was deemed unethical to continue blinding participants or using placebo.

Continue reading “What the Story of a Famous Little White Pill Says About How Medical Research Works”


The paper from the New England Journal of Medicine reporting that azithromycin might cause cardiovascular death is not new to electrophysiologists tasked with deciding antibiotic choices in patients with Long QT syndrome or in those who take other antiarrhythmic drugs. Heck, even the useful Arizona CERT QTDrugs.org website could have told us that.

What was far scarier to me, though, was how the authors of this week’s paper reached their estimates of the magnitude of azithromycin’s cardiovascular risk.

Welcome to the underworld of Big Data Medicine.

Careful review of the Methods section of this paper reveals that the subjects were “persons enrolled in the Tennessee Medicaid program” and that the data collected were “Computerized Medicaid data, which were linked to death certificates and to a state-wide hospital discharge database” and “Medicaid pharmacy files.” The cohort included anyone prescribed azithromycin from 1992-2006 who had “not had a diagnosis of drug abuse or resided in a nursing home in the preceding year and had not been hospitalized in the prior 30 days.” Subjects also had to be “Medicaid enrollees for at least 365 days and have regular use of medical care.”

Hey, no selection bias introduced with those criteria, right?  But the authors didn’t stop there.
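To see how much filtering happens before any risk is even estimated, here is a sketch of those inclusion criteria applied to a hypothetical claims table; the column names are invented for illustration, but the criteria are the ones quoted above.

    # Sketch of the quoted cohort filters applied to a hypothetical claims
    # DataFrame. Column names are invented for illustration.
    import pandas as pd

    def build_cohort(claims: pd.DataFrame) -> pd.DataFrame:
        return claims[
            (claims["drug"] == "azithromycin")
            & claims["rx_date"].between("1992-01-01", "2006-12-31")
            & ~claims["drug_abuse_dx_past_year"]
            & ~claims["nursing_home_past_year"]
            & ~claims["hospitalized_past_30d"]
            & (claims["enrolled_days"] >= 365)
            & claims["regular_medical_care"]
        ]

Every row dropped by a filter like this is another way the study population can drift away from the patients who actually get azithromycin in everyday practice, which is the selection-bias worry raised above.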

Continue reading “How Bad Is Azithromycin’s Cardiovascular Risk?”


It was during my residency that the heart toxicity of antibiotics first affected me personally. The threat related to the use of the first of the non-drowsy antihistamines, Seldane, in combination with macrolide antibiotics such as erythromycin, a combination that could cause a potentially fatal heart arrhythmia. I remember the expressions of fear from other residents, as we had used this combination of medications often. Were we killing people when we treated their bronchitis? We had no idea, but we were consoled by the fact that the people who had gotten our arrhythmia-provoking combo were largely anonymous to us (ER patients).

Fast forward to 2012 and the study (published in the holy writings of the New England Journal of Medicine) showing that Zithromax is associated with more dead people than no Zithromax. Here’s the headline-provoking conclusion:

During 5 days of therapy, patients taking azithromycin, as compared with those who took no antibiotics, had an increased risk of cardiovascular death (hazard ratio, 2.88; 95% confidence interval [CI], 1.79 to 4.63; P<0.001) and death from any cause (hazard ratio, 1.85; 95% CI, 1.25 to 2.75; P=0.002).  Patients who took amoxicillin had no increase in the risk of death during this period. Relative to amoxicillin, azithromycin was associated with an increased risk of cardiovascular death (hazard ratio, 2.49; 95% CI, 1.38 to 4.50; P=0.002) and death from any cause (hazard ratio, 2.02; 95% CI, 1.24 to 3.30; P=0.005), with an estimated 47 additional cardiovascular deaths per 1 million courses; patients in the highest decile of risk for cardiovascular disease had an estimated 245 additional cardiovascular deaths per 1 million courses. (Emphasis Mine).
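Those quoted figures also let you back out the absolute risks, which puts the hazard ratios in context. A rough sketch, treating the hazard ratio as a simple risk ratio over the five-day course (an approximation):

    # Back-of-the-envelope arithmetic on the figures quoted above.
    # Approximation: treat the hazard ratio as a risk ratio over one course.
    hr = 2.49                    # azithromycin vs. amoxicillin, CV death
    excess_per_million = 47      # additional CV deaths per 1 million courses

    # excess = baseline * (hr - 1)  =>  baseline = excess / (hr - 1)
    baseline_per_million = excess_per_million / (hr - 1)
    azithro_per_million = baseline_per_million * hr

    print(round(baseline_per_million, 1))   # ~31.5 CV deaths per million courses
    print(round(azithro_per_million, 1))    # ~78.5 per million courses

The same arithmetic applied to the 245-per-million figure for the highest-risk decile implies a correspondingly higher baseline, which is where the signal matters most clinically.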

Continue reading “Z-Packing”


How many nurses does it take to care for a hospitalized patient? No, that’s not a bad version of a light bulb joke; it’s a serious question, with thousands of lives and billions of dollars resting on the answer. Several studies (such as here and here) published over the last decade have shown that having more nurses per patient is associated with fewer complications and lower mortality. It makes sense.

Yet these studies have been criticized on several grounds. First, they examined staffing levels for hospitals as a whole, not at the level of individual units. Secondly, they compared well-staffed hospitals against poorly staffed ones, raising the possibility that staffing levels were a mere marker for other aspects of quality such as leadership commitment or funding. Finally, they based their findings on average patient load, failing to take into account patient turnover.

Last week’s NEJM contains the best study to date on this crucial issue. It examined nearly 200,000 admissions to 43 units in a “high quality hospital.” While the authors don’t name the hospital, they do tell us that the institution is a US News top rated medical center, has achieved nursing “Magnet” status, and, during the study period, had a mortality rate nearly 40 percent below that predicted for its case-mix. In other words, it was no laggard.

As one could guess from its pedigree and outcomes, the hospital’s approach to nurse staffing was not stingy. Of 176,000 nursing shifts during the study period, only 16 percent were significantly below the established target (the targets are presumably based on patient volume and acuity, but are not well described in the paper). The authors found that patients who experienced a single understaffed shift had a 2 percent higher mortality rate than ones who didn’t. Each additional understaffed shift carried a similar, and additive, risk. This means that the one-in-three patients who experienced three such shifts during their hospital stay had a 6 percent higher mortality than the few patients who didn’t experience any. If the FDA discovered that a new medication was associated with a 2 percent excess mortality rate, you can bet that the agency would withdraw it from the market faster than you could say “Sidney Wolfe.”
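To make the “similar, and additive” arithmetic explicit, here it is as a tiny sketch; the 2-percent-per-shift figure is the study’s, and the additive reading is the one described above.

    # Excess mortality from understaffed shifts, per the additive reading above.
    per_shift = 0.02                               # 2% higher mortality per shift
    shifts = 3
    additive = per_shift * shifts                  # 0.06 -> the 6% figure cited
    compounded = (1 + per_shift) ** shifts - 1     # ~0.0612, nearly identical here
    print(additive, round(compounded, 4))

At small per-shift risks the additive and compounding readings barely differ, which is why the additive summary is a fair shorthand.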

The effects of high patient turnover were even more striking. Exposure to a shift with unusually high turnover (7 percent of all shifts met this definition) was associated with a 4 percent increased odds of death. Apparently, patient turnover – admissions, discharges, and transfers – is to hospital units and nurses as takeoffs and landings are to airplanes and flight crews: a single 5-hour flight (one takeoff/landing) is far less stressful, and much safer, than five hour-long flights (5 takeoffs/landings).

Continue reading “Nurse Staffing, Patient Mortality, And a Lady Named Louise”


For most of the past decade, Democrats and Republicans in Congress have competed over who could pour more money into the National Institutes of Health, the largest funder of biomedical research in the world.

But the party is over. The budget cuts proposed by a leading House Republican this week included cancellation of the $1 billion that the Obama administration wanted to add to the $31 billion NIH budget.

It was part of a broad assault on science funding that was announced by appropriations chairman Hal Rogers, R-Ky., who also called for large cuts at the National Science Foundation, the White House Office of Science, the National Oceanic and Atmospheric Administration and the National Aeronautics and Space Administration.

The purpose, according to Rogers, is “to rein in spending to help our economy grow and our businesses create jobs.”

If creating jobs is his goal, Rogers might want to take a look at a new study that appeared yesterday in the New England Journal of Medicine, which found that publicly-funded research is a far more important contributor to the creation of new drugs and vaccines than previously thought. The classical view of innovation is that government funds basic science, while industry comes up with the new and innovative products based on that science.

Continue reading “NIH and Drug Innovation”

