Two weeks ago, the Kellogg School of Management was privileged to host Joe Doyle, an outstanding economist from MIT.
In a broad research portfolio, Joe has focused on the effects of differing intensities of medical treatment.
This research is shattering some long-held beliefs about the relationship between health spending and outcomes.
We think that Joe’s work is not known widely enough outside of the academic community, so we are using our blog to let you know what you have been missing and, in the process, perhaps change the way you think about healthcare spending.
It is well known that the U.S. far outspends other nations on healthcare, yet the outcomes for Americans (in terms of coarse aggregate measures such as life expectancy and infant mortality) are quite average.
Of course, these outcomes are not the only things that we value in health care.
A lot of our spending is on drugs and medical services that improve our quality of life and won’t show up in these aggregate outcomes. For example, more effective pain management can reduce suffering and improve quality of life – often with important economic benefits.
Despite this fact, most health policy analysts have concluded that we can cut back on health spending without harming quality on any dimension.
This is not a new idea, of course. In a famous 1978 New England Journal article, Alain Enthoven coined the term “flat of the curve medicine” to describe how the U.S. had reached the point of diminishing returns in health spending. And for nearly 30 years the Dartmouth Atlas has documented how health spending dramatically varies across communities without any apparent correlation with outcomes.
The question has always been: which health spending to cut? Garthwaite’s previous work has shown that broad regulations requiring longer hospital stays for new mothers and their babies have provided only limited benefits and that more targeted rules could save money without sacrificing quality.
Beyond some wasteful regulations, we can always point to gross examples of overspending such as the rapid proliferation of proton beam treatments. But beyond those clear examples how can one identify what is waste and what is medically necessary?
In two important papers, Joe Doyle and co-authors ask a more fundamental question – is the often-cited broad variation in health spending actually wasteful at all? They find that even in healthcare, there really is no such thing as a free lunch.
His work should be mandatory reading for everyone who believes that broad spending cuts will have no adverse consequences.
For those who lack the time to read these papers, we provide the “Cliff’s Notes” versions.
The settings for Joe’s two studies are broadly similar. He compares the outcomes for patients who receive emergency treatment at different hospitals, some of which are much more costly than others. Getting unbiased results from such a comparison is not as simple as it seems.
At the aggregate level, hospitals with more spending may simply be systematically treating sicker patients.
This will tend to make the outcomes look worse for the patients at high cost hospitals. Of course, this says nothing about efficiency or whether these sick patients would have fared as well had they been treated at low cost hospitals. Any fair comparison of hospitals must control for severity of illness in order to avoid what statisticians call “omitted variable bias.”
Unfortunately, the available data is not usually up to this task, so prior to Joe’s work, nearly all studies that compare costs and outcomes have been subject to this bias. (This is certainly true for cross-nation studies.)
From a research perspective, the best way to remove omitted variable bias is to conduct an experiment in which patients are randomly assigned to hospitals with different costs. For pragmatic and ethical reasons, researchers are unable to perform such experiments.
Therefore, Joe has identified situations in the real world that meet the statistical requirements for random assignment – statisticians call these “natural experiments.” Through these natural experiments, Joe removes omitted variable bias and provides compelling evidence that contradicts the conventional wisdom that variations in health spending are pure waste.
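The logic of this identification strategy is easiest to see in a small simulation. The sketch below is our own illustration with made-up numbers, not anything from Doyle’s papers: it builds in a true 5-point mortality benefit at high-cost hospitals, but lets sicker patients sort into them. The naive observational comparison then makes high-cost hospitals look worse, while random assignment recovers the true benefit.

```python
import random

random.seed(0)

def simulate(assignment):
    """Simulate mortality under observational vs. random hospital assignment.

    Illustrative numbers only -- not taken from Doyle's papers.
    True effect: high-cost hospitals cut mortality risk by 5 points.
    Omitted variable: sicker patients are likelier to land at them.
    """
    deaths = {"high": 0, "low": 0}
    counts = {"high": 0, "low": 0}
    for _ in range(200_000):
        severity = random.random()          # unobserved illness severity
        if assignment == "observational":
            # sicker patients sort into high-cost hospitals
            hospital = "high" if random.random() < severity else "low"
        else:
            hospital = random.choice(["high", "low"])  # "natural experiment"
        risk = 0.10 + 0.30 * severity       # baseline risk rises with severity
        risk -= 0.05 if hospital == "high" else 0.0    # true treatment benefit
        if random.random() < risk:
            deaths[hospital] += 1
        counts[hospital] += 1
    return {h: deaths[h] / counts[h] for h in counts}

obs = simulate("observational")
rnd = simulate("random")
print(obs)  # high-cost hospitals *look* worse: their patients were sicker
print(rnd)  # random assignment reveals the true ~5-point benefit
```

Under sorting, the high-cost hospitals show higher raw mortality despite delivering better care – exactly the omitted variable bias that plagues cross-hospital (and cross-nation) comparisons.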
In one study, Joe examined what happened to individuals who had medical emergencies while vacationing in Florida. Joe argues (quite plausibly) that when individuals choose a vacation spot, they give little thought to the potential for a medical emergency and the relative quality of healthcare providers in their chosen destination.
(That is, visitors to Florida choose Orlando over Fort Lauderdale because they love Mickey Mouse and not because they are worried about having a heart attack and perceive that Orlando hospitals have better emergency medical services.)
If one believes this argument, then vacationers who are treated in high cost areas of Florida will be no more or less sick than vacationers who are treated in low cost areas – Joe provides convincing evidence to bolster this claim. What does he find? Vacationers who have medical emergencies in high cost areas in Florida have better outcomes than vacationers who have emergencies in low cost areas.
The effect is not subtle. On average, additional spending of $50,000 (in billed charges) is associated with saving one year of life. This falls well within the range used by stingy governments in Europe when determining whether to allocate additional money towards healthcare.
In a second study, Joe (working with John Graves and Jon Gruber) exploits an institutional feature of New York State’s ambulance service, namely, that most communities have several competing ambulance services. A central dispatcher who receives an emergency call examines availability before assigning an ambulance from a particular company.
This creates an effective random assignment of patients to ambulances.
As Joe and his coauthors discovered, each company tends to take their patients to different hospitals, which introduces an effectively random assignment of patients to hospitals. They develop a statistical method that teases out the purely random part and then compare outcomes for patients who, due to purely random chance, are taken to low cost versus high cost hospitals.
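The ambulance design can likewise be sketched in a few lines of code. This hypothetical simulation (our own illustration, not the authors’ actual method or data) treats the dispatcher’s company assignment as an instrument: the company is effectively random but shifts which hospital a patient reaches, so dividing the outcome gap between companies by the hospital-use gap (a Wald-style instrumental-variables estimate) recovers the true effect even though unobserved severity confounds the direct comparison.

```python
import random

random.seed(1)

# Hypothetical numbers, written for this post. Company assignment is
# effectively random, but each company favors a different hospital.
sums = {"A": [0.0, 0.0, 0], "B": [0.0, 0.0, 0]}  # [deaths, high-cost trips, patients]

for _ in range(300_000):
    severity = random.random()               # unobserved confounder
    company = random.choice(["A", "B"])      # effectively random dispatch
    # Company A leans toward the high-cost hospital; sicker patients also
    # end up there more often, which would bias a naive comparison.
    p_high = (0.7 if company == "A" else 0.3) + 0.2 * severity
    d = 1 if random.random() < p_high else 0      # 1 = high-cost hospital
    risk = 0.10 + 0.30 * severity - 0.05 * d      # true effect: -5 points
    y = 1 if random.random() < risk else 0
    sums[company][0] += y
    sums[company][1] += d
    sums[company][2] += 1

ey = {c: s[0] / s[2] for c, s in sums.items()}   # mortality by company
ed = {c: s[1] / s[2] for c, s in sums.items()}   # high-cost share by company

# Wald/IV estimate: outcome gap between companies, scaled by the gap
# in high-cost-hospital use between companies.
iv_effect = (ey["A"] - ey["B"]) / (ed["A"] - ed["B"])
print(round(iv_effect, 3))   # close to the true -0.05
```

Because severity is independent of which company the dispatcher sends, the company-level comparison isolates the purely random part of hospital assignment, which is the essence of the method.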
Once again, they find that emergency patients do better when they are taken to higher cost hospitals, and once again the effects are not subtle.
For every 10 percent increase in spending at the high cost hospitals, there is a 4 percent reduction in the one-year mortality rate. Importantly, these differences in outcomes cannot be accounted for with standard measures of hospital quality such as indicators for the use of “appropriate care” after a heart attack.
Joe Doyle will be the first to admit to the limitations in his work. He only studies emergency patients and does not compare across states. But the results in Joe’s studies did not have to come out as they did. If the conventional wisdom were unconditionally correct, then Joe would have found no relationship between spending and outcomes.
Joe has produced two studies, free of omitted variable bias, that show conditions where the conventional wisdom fails.
A hallmark of excellent economic research is the ability to provide bias-free estimates of relationships of great social importance. It has been our pleasure to introduce you to Joe Doyle and his excellent work.
David Dranove, PhD is the Walter McNerney Distinguished Professor of Health Industry Management at Northwestern University’s Kellogg Graduate School of Management, where he is also Professor of Management and Strategy and Director of the Health Enterprise Management Program. He has published over 80 research articles and book chapters and written five books, including “The Economic Evolution of American Healthcare” and “Code Red.”
Craig Garthwaite, PhD is an assistant professor of management and strategy at Northwestern University’s Kellogg Graduate School of Management.
Dranove and Garthwaite are the authors of the blog, Code Red, where this post originally appeared.
This is extremely interesting info from 2 apparently methodologically valid studies.
My assumption had been that more care = more opportunity for error, meaning that hospitals providing the least amount of care would generate the best patient outcomes. That’s based on Dartmouth-type analyses.
These studies turn that assumption on its head, suggesting the benefits of additional tests and interventions outweigh the risks. False positives and overdiagnoses, though real risks, may ultimately have less impact than the marginal information gain.
I hope there’s more to come.
Another way to study this might be to look at two groups of folks sent to the same EHR in the same hospital: one group would have health plans with high actuarial values and hence would spend more insurance money for their care. The other group would have low actuarial value plans and would spend less insurance money on their plans…and more OOP. The OOP payments would on average be slower and less efficiently collected so that the actual amount of money spent on these two groups would differ, with the former spending more.
By eliminating a few more variables this way we might have a more truthful result.
Though I like the idea behind the study, I agree with the paramedic and would need a lot more information. The choice of a vacation in Florida is very much determined by socioeconomic factors. That is true even for where people choose to live.
Don’t have the time to read the full papers – thank you for the summary.
One point though (as a former paramedic): where EMS takes a patient is FAR from random.
1) Trauma/burn, etc. activations introduce concurrent acuity and destination bias (e.g., a guy in an MVA gets taken to a low-cost center of care, the tertiary/county hospital, and does not do well).
2) Well-insured (read: higher SES and [thus] probably better health at baseline) patients will categorically state which hospital they want to go to. And they don’t ask to go to ABC County Hospital. They ask to go to a higher cost center of care. Again, destination and prior health status bias.
Would love to hear if the EMS-driven studies corrected for these.
Less spending = fewer nurses spending more time filling out EHR grids = less surveillance of the patients who need it the most = medical catastrophes and death.
Then, there is the obligatory “patient was OK and we just found him/her unresponsive.”