Avik Roy has read and posted about the papers I reviewed as part of my Medicaid-IV series. If you’ve forgotten, the purpose of that series of posts was to examine studies that use sound, proven methods to infer the causal effect of Medicaid enrollment on health outcomes, as opposed to a mere correlation between them. From that series, I concluded that there is no credible evidence that Medicaid is worse for health than being uninsured. Considering only studies that show correlations (not causation), Avik disagrees.
Avik’s post is long, but you can save yourself some trouble by skipping the gratuitous attack on economists in general, and Jon Gruber in particular, as well as the troubled description of instrumental variables (IV).* About halfway down is his actual review of the papers; look for the bold text.
The point I want to drive home in this post is why an IV approach is necessary in studying Medicaid outcomes. People who enroll in Medicaid differ from those who don’t. They differ for reasons we can observe and for reasons we can’t. An ideal study would be a randomized controlled trial (RCT) that randomizes people into Medicaid and uninsured status. That’s neither practical nor ethical. So we’re stuck, unless we can be more clever.
The next best thing we can do is look for natural experiments. That’s what IV exploits. In this case, the studies I examined use state-level variation in Medicaid eligibility (and related programs). That variation obviously affects enrollment in Medicaid (you can’t enroll unless you’re eligible), though it is not determinative. Importantly, state-level variation in Medicaid eligibility rules does not itself affect individual-level health. Other than figuratively, do you suddenly take ill when a law is passed or a regulation is changed? Do you see how Medicaid eligibility rules are somewhat like the randomization that governs an RCT, affecting “treatment” (Medicaid enrollment) but not outcomes directly? (If this is unclear, go here.)
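To see why selection into Medicaid poisons a naive comparison and how an instrument fixes it, here is a minimal simulation sketch. Everything in it is hypothetical and for illustration only: the variable names, the coefficient values, and the assumed true effect are all invented, not drawn from any of the studies under discussion. It mimics the logic above: an unobserved "frailty" makes people both sicker and more likely to enroll, while eligibility rules shift enrollment without directly touching health.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: sicker people (higher frailty) are more likely
# to enroll in Medicaid. Frailty is unobserved by the researcher.
frailty = rng.normal(size=n)
# Instrument: state eligibility rules, assigned independently of health.
eligible = rng.binomial(1, 0.5, size=n)

# Enrollment depends on eligibility AND on unobserved frailty (selection).
enroll_prob = 1 / (1 + np.exp(-(-1.0 + 2.0 * eligible + 1.0 * frailty)))
enrolled = rng.binomial(1, enroll_prob)

# Assumed true benefit of Medicaid on a stylized health index.
true_effect = 0.5
health = true_effect * enrolled - 1.0 * frailty + rng.normal(size=n)

# Naive comparison (simple regression of health on enrollment):
# biased downward, because enrollees were sicker to begin with.
naive = np.cov(health, enrolled)[0, 1] / np.var(enrolled, ddof=1)

# IV (Wald estimator with a binary instrument): scale the effect of
# eligibility on health by the effect of eligibility on enrollment.
iv = np.cov(health, eligible)[0, 1] / np.cov(enrolled, eligible)[0, 1]

print(f"true effect:  {true_effect:+.2f}")
print(f"naive:        {naive:+.2f}")  # Medicaid looks harmful
print(f"IV estimate:  {iv:+.2f}")     # recovers (roughly) the true effect
```

In this toy world, the naive estimate comes out negative (Medicaid "looks" worse than uninsurance) even though the true effect is positive, while the IV estimate lands near the truth, because eligibility is correlated with enrollment but not with frailty.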
I believe I am one of the few commentators on the Internet who routinely compares the fields of health and education (see previous posts here and here). The reason: lessons from one field are often applicable to the other.
The parallels are obvious: In both fields (1) we have systematically suppressed normal market forces; (2) the entity that pays the bill is usually separate from the beneficiaries of the spending; (3) providers of the services see the payers, not the beneficiaries, as their real customers and often shape their practice to satisfy the payers’ demands — even if the beneficiaries are made worse off; (4) even though the providers and the payers are in a constant tug-of-war over what is to be paid for and how much, the beneficiaries are almost never part of these discussions; and (5) there is rampant inefficiency on a scale not found in other markets.
Long before there was a Dartmouth Atlas for health care, education researchers found large differences in per pupil spending (e.g., more than three to one among large school districts) that were unrelated to differences in results. In fact, study after study has found no correlation between education spending and education results. (See Linda Gorman’s summary at Econlog.)
Internationally, the parallels continue. Just as the United States is said to spend more than any other country and produce worse outcomes in health care, the same claim is now made for education.
I sat at home with a sense of relief. I had just finished my first month of residency – a grueling inpatient hospital month where I was pushed to new limits. I now finally had my first “golden weekend” (meaning I had both Saturday and Sunday off). More importantly, I had survived my first month without any patient deaths on my service. Given how sick people are when they come to the hospital, I felt pretty good about this result.
That feeling lasted less than 24 hours. As I logged in from home to the electronic medical record to finish some documentation, I realized one of my patients was in a coma due to a sudden stroke. This patient had few clinical symptoms and appeared the healthiest among all the patients I had managed the entire month. A heavy knot quickly developed in my stomach, as I could not shed the feeling that perhaps I had done something wrong. I scoured the medical records, retracing my management. Over the next couple of days I discussed the case with colleagues and experts in the field, and read in depth on the management of this condition. To my relief, it was clear that neither I nor anyone else involved in the patient’s care had made an error in management. Unfortunately, however, this patient eventually passed away.
As I reflect on the experience, an important point stands out in my mind. This patient exhibited few signs of being “sick” and was managed well by all the physicians during the course of the hospital stay, but died. On the other end of the spectrum are patients who appear incredibly sick and, despite a poor prognosis, survive against the odds. One of the goals of residency is to learn to assess a patient and quickly identify who is in imminent danger and may need immediate attention. Unfortunately, however, physicians cannot predict everything, and situations like the one above are not uncommon. Given this fact, discussions about measuring healthcare quality and pay for performance become very cloudy.
Sepsis is the number one cause of death in American hospitals, higher than cancer or stroke. Your chance of dying from sepsis can triple if you choose a hospital that doesn’t have a good sepsis response team.
Care outcomes always vary from site to site and from caregiver to caregiver. For instance, if you have cystic fibrosis, your life expectancy can be diminished by a decade if you choose one of the lower-performing care programs for that disease.
But people don’t know where to go for best care for almost any level or category of care. That is the missing link in our healthcare delivery infrastructure. The least successful cancer centers will not get better if neither they nor the world knows how relatively low their success levels are. The world needs a scorecard for care performance that is mathematically sound and scientifically valid. It should only measure and report outcomes where outcomes vary and matter.
Enough of those areas exist now, but others still need to be created. The survival rates for each stage of each major cancer should be in a publicly accessible database, and patients with cancer should be able to consult that database to see where to go for best care. The database should also show clearly what the survival rates are for each major type of treatment for each stage of cancer. For example, surgery survival rates, hospital infection rates and cancer treatment survival rates would be a nice starter set for improving patient choices about care.
Such a database is entirely feasible, but we need people with authority and purchasing power to demand it. Employers, care purchasers, governmental care buyers, and the new health insurance exchanges created under the American health care reform act should all insist on these data sets.