Should we blame technology for the growth in healthcare spending? Austin Frakt, a healthcare economist who writes for the New York Times, thinks so. Citing studies conducted over the last several years, he claims that technology could account for up to two-thirds of per capita healthcare spending growth.
In this piece, Frakt contrasts the contribution of technology with that of the aging of the population. Frakt notes that age per se is a poor marker of costs associated with healthcare utilization. What matters is the amount of money spent near death. If you’re 80 years old and healthy, your use of healthcare services won’t be much greater than that of a 40-year-old.
So far, so good. But should we accept the proposition that technology is the culprit for healthcare spending growth? Says Frakt:
Every year you age, health care technology changes — usually for the better, but always at higher cost. Technology change is responsible for at least one-third and as much as two-thirds of per capita health care spending growth.
Frakt’s position is common among mainstream economists who come to their conclusions through the application of complex mathematical models of the economy. The studies Frakt cites all use statistical analysis to try to disentangle the relationships between a number of interacting cost factors (e.g., demographics, GDP growth, income growth, insurance growth) before drawing conclusions about the relative contribution of each of these factors.
The models, however, necessitate making assumptions that may not hold true. Moreover, technology spending is generally not measured directly. Instead, the models first explain spending on the basis of other measurable factors (e.g., demographics), and then attribute to technology the share of spending that remains “unexplained.”
But if we resist the seduction of quantitative models and instead apply common-sense reasoning, it becomes apparent that the conclusion that technology per se drives the crisis of out-of-control spending growth is untenable.
To see this, it is helpful to imagine a simpler context where healthcare spending is decided voluntarily by patients and their families.
In such a context, a company may speculate that a particular technology (say, one that produces artificial limbs) could serve a certain need. The company then makes an entrepreneurial decision to develop, manufacture, and sell artificial limbs on the basis of an estimate of the willingness of patients to pay for the limbs at a price high enough to cover the costs of production and allow for some profit.
The technology company obviously takes a risk. It may err in its estimation of how patients will value its product: If the asking price is above the one patients are willing to pay, it will incur a loss and may go out of business. On the other hand, if the asking price is below the level at which patients value artificial limbs, the company will succeed and make a profit.
What is certain, however, is this: if the company succeeds and patients are willing to pay for the product, healthcare spending will increase, but that will not be viewed as a problem. If patients voluntarily pay for artificial limbs—or for bionic hearts, xeno-transplanted pancreata, or miracle longevity pills—it is because they value the technology more than the money they have parted with, or else they would keep the money. Overall welfare is increased, and there is no reason to blame technology.
Admittedly, some patients may later regret their purchase. But such regret does not in itself indicate that technology is at fault for the increased spending. It simply means that those patients miscalculated the value they personally derived from the technology.
This potential for miscalculation is something many mainstream healthcare economists frown upon. In 1963, Nobel Prize-winning economist Kenneth Arrow gave fresh impetus to the field of healthcare economic theory in a seminal paper calling attention to this potential for miscalculation and attributing it to “product uncertainty”: because of sickness, and because of the complexity of medical care and technology, patients are unable to make proper value decisions. They can miscalculate in two ways.
First, producers and service providers may take advantage of the situation and obtain a higher price than would otherwise be established under normal “competitive” market mechanisms. Arrow (and many economists following him) therefore recommended various government regulations to mitigate the effect of this “information asymmetry.” (I have previously shown that the standard assumptions put forth by Arrow and others regarding the effects of information asymmetry in medical care are refuted by historical evidence.)
Second, patients may miscalculate in the other direction and forgo technology that could potentially be beneficial to them. Healthcare economists also find this possibility intolerable and invariably favor government intervention to promote or finance health insurance so as to prevent self-rationing by patients.
The problem with these interventions, apart from their inherent paternalism, is that they do nothing to “bridge” the maligned information gap that can lead patients to miscalculate value. In fact, they widen it.
In the first instance, the regulation of technology means that regulators substitute their own values for those of patients. It is regulators who decide what level of evidence and what level of risk is acceptable for a technology to be legalized. In doing so, they deprive patients of even knowing about certain products. They thus make the information gap infinitely large.
In the second instance, the provision of health insurance impairs the ability of patients to make proper value decisions since they no longer bear the full cost (or even any cost) of the technology. Therefore, they are more likely to seek out technology that they might not have purchased at an unhampered market price.
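The logic of this second case can be made concrete with a toy calculation. The sketch below uses entirely hypothetical numbers: a patient who values a technology at $3,000 declines it when facing its full $10,000 price, but purchases it once insurance reduces the out-of-pocket share. The function names and figures are illustrative, not drawn from any study.

```python
def out_of_pocket(price, coinsurance):
    """Patient's share of the bill given a coinsurance rate (0.0 to 1.0)."""
    return price * coinsurance

def will_purchase(value_to_patient, price, coinsurance):
    """The patient buys when the technology is worth more to them than
    what they personally pay, not more than its full price."""
    return value_to_patient >= out_of_pocket(price, coinsurance)

PRICE = 10_000   # hypothetical full price of the technology
VALUE = 3_000    # hypothetical value this patient places on it

# Uninsured: the patient faces the full price and declines.
print(will_purchase(VALUE, PRICE, coinsurance=1.0))   # False

# Insured with 20% coinsurance: out-of-pocket cost is $2,000, so the
# patient buys a product they value well below its full price.
print(will_purchase(VALUE, PRICE, coinsurance=0.2))   # True
```

The point of the sketch is only that cost-sharing changes the purchase decision without changing either the price or the patient’s valuation, which is the mechanism the paragraph above describes.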
The tendency of patients who are shielded from costs to over-utilize healthcare technologies naturally drives the price of technology upward, so long as the insurer is willing to accommodate this demand. In most cases, in fact, insurance companies do end up paying for technology. This goes to show that Frakt and the modeling studies he cites have it exactly backwards: it is increased spending that causes increasingly high technology prices, not the other way around.
Mainstream healthcare economists have long minimized the potential for health insurance to lead to increased spending. In the same 1963 paper, published two years before the enactment of Medicare, Arrow had asserted that
The welfare case for insurance policies of all sorts is overwhelming. It follows that the government should undertake insurance in those cases where this market, for whatever reasons, has failed to emerge.
Arrow did consider that health insurance might increase demand for healthcare, but he minimized that possibility and left it to future economists to obtain empirical evidence to determine the extent to which so-called “moral hazard” (the tendency for insurance to increase demand) would affect prices in healthcare. With Arrow’s reassurance, the government embarked on a massive program that has subsidized the demand for not only healthcare technology, but for services and products across the entire healthcare sector.
Because economic analysis is poorly suited to empirical study (the factors involved change constantly, may not be fully accounted for, and interact with one another), persuasive evidence of the effect of health insurance on spending took decades to materialize. Recently, however, Amy Finkelstein, a prominent MIT healthcare economist, was able to analyze a large set of historical data on spending patterns before and after the introduction of Medicare. Regarding the relationship between spending growth and technology, she commented:
…there is widespread consensus that technological change is the driving force behind the growth in health spending. But this just kicks the can down the road. What then drives technological change in medicine?
…[In my recent study] I find evidence that the introduction of Medicare encouraged the adoption of new medical technologies…Now we find that when large-scale insurance changes lead to a big aggregate increase in demand, hospitals have an incentive to adopt new medical technologies. People will use these technologies because they are not paying for them out-of-pocket…
It therefore looks like insurance, by increasing demand because it lowers the price [to the patient] of medical care, encourages both the adoption of new technologies…and, further down the pipeline, the innovation and development of these new technologies.
In fact, Finkelstein showed that “the introduction of Medicare [caused] …enormous spending effects” and that “the spread of insurance played a very big role in driving health care spending growth over the second half of the twentieth century.”
Whether Finkelstein’s study will eventually persuade other economists, such as Frakt, remains to be seen. But it is noteworthy that her historical evidence only confirms what should have been demonstrable by careful reasoning all along: subsidies raise prices, and massive subsidies raise prices massively.
So here’s a paradox to conclude with. Compared to technology, ideas are cheap. But when bad ideas are concocted into a widely embraced but faulty economic theory, the result can be ruinously expensive.