British Petroleum’s Wellness Program is Spewing Invalidity

A critical observation in Cracking Health Costs is that you need not “challenge the data” to invalidate claims that wellness saves money.  Instead, you can simply read the data as presented.  You’ll find it usually invalidates itself.

Nowhere is that more true than in a study published this month by Mercer, Staywell and British Petroleum (“BP America”) in the Journal of Occupational and Environmental Medicine (JOEM).   As we’ll demonstrate, the results completely contradict Staywell’s own statements, and are also mathematically impossible.  Indeed, Mercer was a wise partner choice by BP America because their validations are often unconstrained by the limits of possibility.   For instance, they validated massive savings both for infants in a North Carolina Medicaid program that did not enroll infants, and for a Georgia Medicaid disease management program that did not manage diseases, at least according to the FBI.

Along those lines, let’s see what happens when one compares the JOEM conclusion – that the Staywell wellness program for BP America achieved almost $20,000,000 in savings on 20,343 BP participants after only two short years – to the limits of possibility.

It turns out this overall savings claim of $1,000/person would require completely wiping out wellness-sensitive medical events (heart attacks, diabetes events, etc.) not just on those 20,000+ people, but also on perhaps 40,000 of their closest friends.  The authors elected not to disclose the change in wellness-sensitive medical events across the entire eligible population, perhaps because they were embarrassed by the size of the decline, if indeed those events declined at all.

The authors went a step further and declined even to acknowledge that a wellness program should reduce wellness-sensitive medical events – a perfectly logical nexus, not unlike saying that an antihyperlipidemic drug should reduce cholesterol levels, an asthma rescue drug should improve breathing, or sunblock should prevent sunburn.

It’s not just the two of us who think logic should prevail in connecting wellness initiatives to wellness-sensitive events.  Health Affairs laid those events out too, in a seminal 2013 study that Mercer and Staywell didn’t footnote, despite its being the most recent scholarly article on exactly this topic.  They also selectively omitted other key citations.  The equally seminal RAND study – which concludes that wellness programs produce no two-year population-wide savings – also went unacknowledged.

Curiously, biometric screens were noted as one of the sources of BP’s alleged savings, but this once again breaches the wall of possibility:  another – also unacknowledged – Health Affairs article showed that it is mathematically impossible not to lose money on biometric screens, which is why government agencies, expert panels and books warn against using them.  (They can and do also harm patients, which is perhaps why most people participate only when threatened with large penalties, as was the case with BP, where 30% of employees elected to forfeit $1,200 to stay out of the program.)

These selective citation omissions are indicative of the wellness industry researchers’ distaste for facts.  This is not to say that wellness researchers are stupid.  Quite the opposite: they are smart enough to realize that facts are their worst nightmare.

The Data that Didn’t Bark in the Nighttime

And in this case, the most troubling fact is right in their own study, albeit buried deep enough (page eight) that the JOEM peer reviewers may not have noticed it.  That is the percentage of people whose risk factors actually improved:  5.6%.  As modest as this figure is, it actually excludes the people most likely to get worse – those who dropped out of the program and those who wanted nothing to do with it in the first place.

Understandably, the study authors decided not to draw attention to the simple division of the savings ($19,700,000) by the number of people who even theoretically could have avoided wellness-sensitive medical events (5.6% of 20,343, or 1,139 people), because that division would have yielded a whopping $17,300 in savings per person whose risks declined.

And once again, the limits of possibility don’t stand a chance against Mercer and Staywell, for three reasons.  $17,300 is:

  1. Almost three times what a company spends in total on the average covered person.

  2. At least 30 times what a company spends on wellness-sensitive medical events for the average covered person.

  3. More than 100 times what the lead Staywell author says on Staywell’s own website is possible.

This assumes there were any savings at all, which we’ll never know because, recall, the authors failed to disclose the wellness-sensitive medical event tally.

Lessons for Wellness Researchers

So, what can be learned from this valuable addition to our growing library of teaching tools for population health management, other than how easy it is for vendors and consultants to convince their HR clients in large corporations that they are saving massive sums of money through wellness?

First, the most sophisticated-sounding study designs in the world – and this one looks like at least six figures’ worth of sophistication – can’t ignore the question:  “Is this result even mathematically possible?”

Second, those expensive “matched control” study designs are worthless anyway when comparing actively motivated participants to unmotivated non-participants.  You can match demographics but you can’t match the difference in mindset between (for example) smokers who want to quit and smokers who don’t.  Health Fitness Corporation showed that would-be participants significantly outperform non-participants even without a program.

Third, researchers need to stop attributing all good things to wellness and instead employ a basic biostatistical tenet, which is that the outcome should bear some relationship to the intervention.  In the case of wellness, that means a program designed to reduce wellness-sensitive risks should have its greatest if not only effect in avoidance of medical events of a wellness-sensitive nature.

Fourth, they need to understand the relationship between risk factor reduction and cost savings.  As Why Nobody Believes the Numbers shows mathematically, costs decline about a tenth as much as risk factors decline, meaning that BP could attribute roughly a 0.56% decline in costs to the 5.6% risk factor decline.  This 0.56% gross decline would be offset by more preventive physician visits, more preventive drugs, the cost of the program and incentives.  It also assumes that the 5.6% figure itself is real, which it almost certainly isn’t because…
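That rule of thumb can also be sketched in a few lines.  The one-tenth ratio is the book’s heuristic, and the annual per-person spend below is an illustrative assumption, not a figure from the study:

```python
# Heuristic from Why Nobody Believes the Numbers:
# cost decline is roughly one-tenth of the risk-factor decline
risk_decline = 0.056          # the study's 5.6% risk-factor improvement
cost_to_risk_ratio = 0.1      # the book's rule of thumb, not a study finding

cost_decline = risk_decline * cost_to_risk_ratio   # 0.0056, i.e. 0.56% gross

# Illustrative annual per-person health spend (hypothetical figure)
annual_spend = 9_000
gross_savings_per_person = annual_spend * cost_decline  # about $50, before offsets

print(round(cost_decline * 100, 2), round(gross_savings_per_person))
```

Even before subtracting the offsets listed above (extra physician visits, preventive drugs, program fees and incentives), the gross figure is two orders of magnitude below the $17,300 the study implies.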

…Fifth, this industry needs to stop ignoring the overwhelming preponderance of facts that don’t support its belief system.  In particular, it needs to stop citing the single company (Johnson & Johnson) which has a strong wellness culture that few others have duplicated, and instead look at RAND and (for example) IBM, Dow Chemical or Salt Lake County to see a much more realistic view of what companies can accomplish.   Wellness industry leaders specialize in distorting fundamental scientific and analytic principles to meet their own needs.  Increasingly, however, health policy experts are calling them on this, which means that there is hope for introducing integrity into the wellness outcomes reporting process.

Along with their distaste for facts is a distaste for arithmetic.  Page eight of the JOEM paper asserts that “there are no industry standards on how to calculate ROI…”  They could easily have added:  “…because most vendors, including us, refuse to disclose the rate of wellness-sensitive medical events, which would be the obvious choice as an industry standard if we were able to actually reduce these events.” Instead the vendor association uses an invalid consensus methodology, making wellness/disease management the only industry where mathematical proof is trumped by popular vote.

Finally, industry supporters need to stop citing the single article in a first-tier journal with a positive finding about wellness’s impact on health spending.  That article is now approaching its fourth birthday, it was a meta-analysis of studies mostly drawn from third-tier journals that among them had never published an article that found anything other than positive results from wellness (there is even a term for this — “publication bias”), and even its authors no longer seriously defend it.  (Reflecting the dearth of more recent positive press in first-tier journals, a leading wellness proponent recently wrote that this article “should be cited much more frequently by health promotion and wellness professionals.”)

This industry needs to acknowledge that almost all credible recent literature – ours or anyone else’s – finds largely failure, and then develop or identify some best practices.   Hence we once again invite people to identify a single wellness vendor that saves money by reducing population-wide wellness-sensitive medical events, for us to highlight in a future column.

Al Lewis is the author of Why Nobody Believes the Numbers, co-author of Cracking Health Costs: How to Cut Your Company’s Health Costs and Provide Employees Better Care, and president of the Disease Management Purchasing Consortium.

Vik Khanna is a St. Louis-based independent health consultant with extensive experience in managed care and wellness.  An iconoclast to the core, he is the author of the Khanna On Health Blog.  He is also the Wellness Editor-At-Large for THCB.
