In an effort to help women make informed decisions about where to deliver their babies, we set out to build a comprehensive, nationwide database of hospitals’ C-section rates. Knowing that the federal government mandates surveillance and reporting of vital statistics through the National Vital Statistics System, we contacted the Departments of Public Health (DPH) of all 50 states plus Washington, D.C., asking for access to de-identified birth data from all of their hospitals. What we learned might not surprise you — the lack of transparency in the United States healthcare system extends to quality information, and specifically to C-section data.
Value-based healthcare is gaining popularity as an approach to increase sustainability in healthcare. It has its critics, possibly because its roots are in a health system where part of the drive for a hospital to improve outcomes is to increase market share by being the best at what you do. This is not really a solution for improving population health and does not translate well to publicly-funded healthcare systems such as the NHS. However, when we put aside dogma about how we would wish to fund healthcare, value-based healthcare provides us with a very useful set of tools with which to tackle some of the fundamental problems of sustainability in delivering high quality care.
What is value?
As defined by Professor Michael Porter at Harvard Business School, value is a function of outcomes and costs. To achieve high value, therefore, we must deliver the best possible outcomes in the most efficient way — outcomes that matter from the perspective of the individual receiving healthcare, not provider process measures or targets. Sir Muir Gray expands on the idea of technical value (outcomes/costs) to specifically describe ‘personal value’ and ‘allocative value’, encouraging us to focus also on shared decision making and individual preferences for care, and on ensuring that resources are allocated for maximum value.
This article seeks to demonstrate that the role of data and informatics in supporting value-based care goes much further than the collection and remote analysis of big datasets – in fact, the true benefit sits much closer to the interaction between clinician and patient.
Despite (some might say, because of) a raft of new biological methods, pharma R&D has struggled with its EROOM problem, the fact that the cost of successfully developing a new drug, including the cost of failures, has been relentlessly increasing, rather than decreasing, over time (EROOM is Moore spelled backwards, as in Moore’s Law, describing the rapid pace of technology improvement over time).
Given the impact of technology in so many other areas, the question many are now asking is whether technology could do its thing in pharma, and make drug development faster, cheaper, and better.
Many major pharmas believe the answer has to be yes, and have invested in some version of a by-now familiar data initiative aimed at aggregating and organizing internal data, supplementing this with available public data, and overlaying this with a set of analytical tools that will help the many data scientists these pharmas are urgently hiring to extract insights and accelerate research.
Artificial intelligence requires data. Ideally that data should be clean, trustworthy and above all, accurate. Unfortunately, medical data is far from it. In fact medical data is sometimes so far removed from being clean, it’s positively dirty.
Consider the simple chest X-ray, the good old-fashioned posterior-anterior radiograph of the thorax. It is one of the longest-standing radiological techniques in the medical diagnostic armoury, performed across the world in the billions. So many, in fact, that radiologists struggle to keep up with the sheer volume, and sometimes forget to read the odd 23,000 of them. Oops.
Surely, such a popular, tried and tested medical test should provide great data for training AI? There’s clearly more than enough data to have a decent attempt, and the technique is so well standardised and robust that surely it’s just crying out for automation?
Data is not always the path to identifying good medicine. Quality and cost measures should not be perceived as “scores,” because the health care process is neither simplistic nor deterministic; it involves as much art and perception as science—and never is this more the case than in the first step of that process, making a diagnosis.
I share the following story to illustrate this lesson: we should stop behaving as if good quality can be delineated by data alone. Instead, we should be using that data to ask questions. We need to know more about exactly what we are measuring, how we can capture both the physician and patient inputs to care decisions, and how and why there are variations among different physicians.
A Tale of Two Doctors
“As soon as I start swimming, my chest feels heavy and I have trouble breathing. It is a dull pain. It is scary. I swim about a lap of the pool, and, thankfully, the pain goes away. This is happening every time I go to work out in the pool.”
Her primary physician listened intently. With more than 40 years of experience, the physician, a stalwart in the medical community, loved by all, who scored high on the “physician compare” web site listing, stopped the interview after the description and announced, with concern, that she needed to have a cardiac stress test. The stress test would require walking on a “treadmill” to monitor her heart and would include, additionally, an echocardiogram test to see if her heart was being compromised from a lack of blood flow.
“But I have had three echocardiogram tests in the last year as part of my treatment for breast cancer, and each was normal. Why would I need another?”
“Well, I understand your concern about more tests, but the echocardiograms were done without having your heart stressed by exercise. The echo tests may be normal under those circumstances, but be abnormal when you are on the treadmill. You still need the test, unfortunately. I want to order the test today, and you should get it done in the next week.”
I don’t know why, but even as a young person I never could make sense of the saying, “seeing is believing.” Seeing — vision — is nothing more than a data collection instrument, not an arbiter of insight. I saw my wife frown at me the other day, for example, after I claimed to have washed the dishes so thoroughly that no spot of grease could be left behind. I have made this claim before and been incorrect, so the frown, the data, triggered an anticipation of being rebuffed. However, nothing of that sort followed. I asked, “Why the frown?” She responded, “I just cut my finger.” The frown was obvious; the cause was unclear. I believed I was about to be reprimanded and missed the chance to notice her accident. This story suggests that a truer aphorism might instead be that “believing is seeing.”
The phrase “healthcare data” either evokes fear and loathing or inspires understanding and resolve in the minds of administrators, clinicians, and nurses everywhere. Which emotion it brings out depends on how the data will be used. Data employed as a weapon for purposes of accountability generates fear. Data used as a teaching instrument for learning inspires trust and confidence.
Not all data for accountability is bad. Data used for prescriptive analytics within a security framework, for example, is necessary to reduce or eliminate fraud and abuse. And data for improvement isn’t without its own faults, such as the tendency to perfect it to the point of inefficiency. But the general culture of collecting data to hold people accountable is counterproductive, while collecting data for learning leads to continuous improvement.
This isn’t a matter of eliminating what some may consider to be bad metrics. It’s a matter of shifting the focus away from using metrics for accountability and toward using them for learning so your hospital can start to collect data for improving healthcare.
Get a group of health policy experts together and you’ll find one area of near-universal agreement: we need more transparency in healthcare. The notion behind transparency is straightforward: greater availability of data on provider performance helps consumers make better choices and motivates providers to improve. And there is some evidence to suggest it works. In New York State, after cardiac surgery reporting went into effect, some of the worst-performing surgeons stopped practicing or moved out of state, and overall outcomes improved. But when it comes to hospital care, the impact of transparency has been less clear-cut.
In 2005, Hospital Compare, the national website run by the Centers for Medicare and Medicaid Services (CMS), started publicly reporting hospital performance on process measures – many of which were evidence based (e.g. using aspirin for acute MI patients). By 2008, evidence showed that public reporting had dramatically increased adherence to those process measures, but its impact on patient outcomes was unknown. A few years ago, Andrew Ryan published an excellent paper in Health Affairs examining just that, and found that more than 3 years after Hospital Compare went into effect, there had been no meaningful impact on patient outcomes. Here’s one figure from that paper:
The paper was widely covered in the press — many saw it as a failure of public reporting. Others wondered if it was a failure of Hospital Compare, where the data were difficult to analyze. Some critics shot back that Ryan had only examined the time period when public reporting of process measures was in effect and it would take public reporting of outcomes (i.e. mortality) to actually move the needle on lowering mortality rates. And, in 2009, CMS started doing just that – publicly reporting mortality rates for nearly every hospital in the country. Would it work? Would it actually lead to better outcomes? We didn’t know – and decided to find out.
Several years ago, both Microsoft and Google invested millions of dollars based on a flawed assumption: if they built a useful and free healthcare application, people would flock to it. In both cases, the effort failed. At its peak, Microsoft HealthVault was only able to enroll a few thousand — largely inactive — users. Google Health was discontinued after a few years.
The problem was (and is) that unlike almost any other business, healthcare is a negative good.
Even if it’s “free,” as was the case with both the Microsoft and Google offerings, most people find tracking their health to be, in some sense, an admission of frailty, imperfection and mortality. Except for occasional blips related more to vanity (weight loss is the prime example), when it comes to our health most of us are in denial. So when people talk about technology for patient engagement, I tend to pause and wonder: Should we be building apps and services just for patients, or for the people who care about them too?
I am an IT-geek physician. I have my own EHR, which I created and control.
Today, I wanted to understand my diabetic practice a little more, so I dumped all my HbA1c data out of my EHR and into a spreadsheet where I was able to manipulate the data and learn a few things about my practice.
I learned that:
If my patient had an HbA1c ≥ 8, the likelihood that the HbA1c would be < 8 at the next visit was 68%.
If my patient had an HbA1c ≥ 8, the likelihood that the HbA1c would be even higher at the next visit was 29%.
If my patient had an HbA1c ≥ 8, the average change in HbA1c at the next visit was −0.7.
If my patient had an HbA1c < 8, the likelihood that the HbA1c would exceed 8 at the next visit was 15%.
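Statistics like these are straightforward to compute once the data is exported. The sketch below shows one way it might be done in Python rather than a spreadsheet; the data format (a dictionary of per-patient chronological result lists) and the function name are assumptions for illustration, not the actual EHR export.

```python
# Sketch: visit-to-visit HbA1c transition statistics, assuming results
# arrive as per-patient lists in chronological order (illustrative only).

def transition_stats(results_by_patient, threshold=8.0):
    """For consecutive visit pairs where the first HbA1c >= threshold,
    report the share that fell below threshold at the next visit, the
    share that rose further, and the average change at the next visit."""
    n_high = below_next = rose = 0
    total_change = 0.0
    for results in results_by_patient.values():
        for prev, nxt in zip(results, results[1:]):  # consecutive visit pairs
            if prev >= threshold:
                n_high += 1
                total_change += nxt - prev
                if nxt < threshold:
                    below_next += 1
                if nxt > prev:
                    rose += 1
    if n_high == 0:
        return None
    return {
        "pct_below_next": 100 * below_next / n_high,
        "pct_higher_next": 100 * rose / n_high,
        "avg_change": total_change / n_high,
    }

# Toy example with fabricated values (not real patient data):
sample = {
    "pt1": [9.1, 7.8, 7.5],
    "pt2": [8.4, 8.9, 8.0],
}
print(transition_stats(sample))
```

The same conditional counts could be reproduced in a spreadsheet with a lagged column and a few COUNTIF-style formulas; the point is simply that each of the four figures above is a tally over consecutive visit pairs, filtered on the first visit's value.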