I’ve been thinking a lot about “big data” and how it is going to affect the practice of medicine. It’s not really my area of expertise, but here are a few thoughts on the tricky intersection of data mining and medicine.
First, some background: these days it’s rare to find a company that doesn’t use data mining and predictive models to make business decisions. For example, financial firms regularly use analytic models to figure out whether a credit applicant will default; health insurance firms predict downstream medical utilization from historical healthcare visits; and the IRS spots tax fraud by looking for suspicious patterns in tax returns. Predictive analytics vendors are seeing explosive growth: Forbes recently noted that big data hardware, software, and services will grow at a compound annual growth rate of 30% through 2018.
Big data isn’t rocket surgery. The key to each of these models is pattern recognition: correlating a particular variable with another and linking variables to a future result. More and better data typically leads to better predictions.
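As a minimal sketch of that pattern-recognition idea, here is a toy credit-default model, assuming scikit-learn and entirely made-up training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: each row is (income in $10k, number of late payments);
# each label marks whether that applicant later defaulted.
X = np.array([[9, 0], [3, 4], [6, 1], [2, 5], [8, 0], [4, 3]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated default probability for a new applicant: $50k income, 2 late payments.
print(model.predict_proba([[5, 2]])[0, 1])
```

Adding more rows and more informative columns generally nudges such a model’s predictions closer to reality, which is the intuition behind “more and better data.”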
It seems that the unstated, implicit belief in the world of big data is that as you add more variables and get deeper into the weeds, interpretation improves and predictions become more accurate.
In December, THCB asked industry insiders and pundits across health care to give us their armchair quarterback predictions for 2015. What tectonic trends do they see looming on the horizon? What’s overrated? What nasty little surprises do they see lying in wait? What will we all be talking about this time next year? Over the next few weeks, we’ll be featuring their responses in a series of quick takes.
Joe DeSantis, Vice President of HealthShare Platforms, InterSystems
Information Exchange is dead. Long live Information Exchange: There was a lot of talk in 2014 about the failure of information exchange. When people take a closer look, they are going to see there are actually some good examples of this working and changing how care is delivered. We’ll see lots more examples in 2015.
(Big) garbage in, (big) garbage out: People are looking to big data and analytics to tackle population health and other problems. They will soon find that without addressing data quality and conditioning up front, the results will be disappointing at best. This will be the year of clean data.
Keep it simple: The mobile revolution has not yet had the impact on healthcare that it has had in other sectors. Recreating desktop applications on a phone is not the answer, nor are retreads of messaging standards. We will have to rethink how healthcare information is presented and used.
One portal, please: Everyone agrees that patient engagement is essential – but giving me four separate portals, six more for my wife and three more for my mother makes me enraged, not engaged! Thought leaders will begin to realize that patient engagement must be built atop true information sharing.
If another case of Ebola emanates from the unfortunate Texas Health Presbyterian Hospital, the Root Cause Analysts might mount their horses, the Six Sigma Black Belts will skydive, and the Safety Champions will tunnel their way clandestinely to rendezvous at the sentinel place.
What might be their unique insights? What will be their prescriptions?
One never knows what pearls one will encounter from ‘after-the-fact’ risk managers. I can imagine Caesar consulting a Sibyl as he was being stabbed by Brutus. “Obviously, Jules, you should have shared Cleo with Brutus.” Thanks, Sibyl. Perhaps you should have told him that last night.
Nevertheless, permit me to conjecture.
First, they might say that the hospital ‘lacks a culture of safety which resonates with the values and aspirations of the American people.’
That’s always a safe analysis when the Ebola virus has just been mistaken for a coronavirus. It’s sufficiently nebulous to never be wrong. The premise supports the conclusion. How do we know the hospital lacks a culture of safety? ‘Cos, they is missing Ebola, innit,’ as Ali G might not have said.
They would be careful about blaming the electronic health record (EHR), because it represents one of the citadels of the Toyotafication of Healthcare. But they would remind us of the obvious: ‘EHRs don’t go to medical school, doctors do.’ A truism that shares a phenotype with the pro-gun lobby’s favorite: ‘guns don’t kill, people kill.’
Put the question in 1880: Will technology replace farmers? Most of them. In the 19th century, some 80% of the population worked in agriculture. Today? About 2% — and they are massively more productive.
Put it in 1980: Will technology replace office workers? Some classes of them, yes. Typists, switchboard operators, stenographers, file clerks, mail clerks — many job categories have diminished or disappeared in the last three decades. But have we stopped doing business? Do fewer people work in offices? No, but much of the rote mechanical work is carried out in vastly streamlined ways.
Similarly, technology will not replace doctors. But emerging technologies have the capacity to replace, streamline, or even render unnecessary much of the work that doctors do — in ways that actually increase the value and productivity of physicians. Imagine some of these scenarios with me:
· Next-generation EMRs that are transparent across platforms and organizations, so that doctors spend no time searching for and re-entering longitudinal records, images, or lab results; and that obviate the need for a separate coding capture function — driving down the need for physician hours of labor.
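To make the interoperability scenario concrete, here is a minimal sketch of a cross-platform record lookup against a FHIR-style REST API; the server URL and patient ID are hypothetical, and real deployments add authentication, paging, and error handling:

```python
import requests

# Hypothetical FHIR server base URL; real endpoints vary by deployment.
FHIR_BASE = "https://ehr.example.org/fhir"

def fetch_lab_observations(patient_id: str) -> list:
    """Fetch a patient's lab results from a FHIR-style REST endpoint."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # FHIR search results come back as a Bundle of entries.
    return [entry["resource"] for entry in bundle.get("entry", [])]

print(fetch_lab_observations("example-patient-123"))
```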
The term Big Data is ubiquitous and enigmatic. It’s so overused that it has practically morphed into a meme for using fancy math to make technology better. In a recent Center for Technology Innovation analysis of Big Data in education, the term was defined as a “group of statistical techniques that uncover patterns.” But others disagree. So what is Big Data?
To answer that question, Jenna Dutcher, Community Relations Manager for datascience@berkeley, the UC Berkeley School of Information’s online master’s in data science, asked subject matter experts from industry, academia, and the public sector how they define Big Data. All of the answers are fascinating, but several are worth highlighting.
Everywhere we turn these days, it seems “Big Data” is being touted as a solution for physicians and physician groups who want to participate in Accountable Care Organizations (ACOs) and/or accountable care-like contracts with payers.
We disagree, and think the accumulated experience about what works and what doesn’t work for care management suggests that a “Small Data” approach might be good enough for many medical groups, while being more immediately implementable and a lot less costly. We’re not convinced, in other words, that the problem for ACOs is a scarcity of data or second-rate analytics. Rather, the problem is that we are not taking advantage of, and using more intelligently, the data and analytics already in place, or nearly in place.
For those of you who are interested in the concept of Big Data, Steve Lohr recently wrote a good overview in his column in the New York Times, in which he said:
“Big Data is a shorthand label that typically means applying the tools of artificial intelligence, like machine learning, to vast new troves of data beyond that captured in standard databases. The new data sources include Web-browsing data trails, social network communications, sensor data and surveillance data.”
Applied to health care and ACOs, the proponents of Big Data suggest that some version of IBM’s now-famous Watson, teamed up with arrays of sensors and a very large clinical data repository containing virtually every known fact about all of the patients seen by the medical group, is a needed investment. Of course, many of these data are not currently available in structured (that is, computable) format. So one of the costly requirements Big Data may impose on us is the need to convert large amounts of unstructured or poorly structured data into structured data. But once that is accomplished, so advocates tell us, Big Data is not only good for quality care but “absolutely essential” for attaining the cost efficiency needed by doctors and nurses to have a positive and money-making experience with accountable care shared-savings, gain-share, or risk contracts.
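For a sense of what that conversion step involves, here is a minimal sketch that pulls a few discrete values out of free-text clinical narrative; the note text and field names below are invented for illustration, and production systems rely on far more sophisticated natural language processing:

```python
import re

# Hypothetical free-text note; real clinical narrative is far messier.
note = "BP 142/88, HR 76, A1c 8.2%. Pt reports taking metformin 500 mg BID."

# Simple patterns that pull a few discrete values out of the narrative.
patterns = {
    "systolic_bp":  r"BP\s+(\d{2,3})/\d{2,3}",
    "diastolic_bp": r"BP\s+\d{2,3}/(\d{2,3})",
    "heart_rate":   r"HR\s+(\d{2,3})",
    "a1c_percent":  r"A1c\s+(\d+(?:\.\d+)?)%",
}

structured = {}
for field, rx in patterns.items():
    match = re.search(rx, note)
    structured[field] = match.group(1) if match else None

print(structured)
# {'systolic_bp': '142', 'diastolic_bp': '88', 'heart_rate': '76', 'a1c_percent': '8.2'}
```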
Healthcare costs far too much. We can do it better for half the cost. But if we did cut the cost in half, we would cut the jobs in half, wipe out 9% of the economy (healthcare being roughly 18% of GDP), and plunge the country into a depression.
Really? It’s that simple? Half the cost equals half the jobs? So we’re doomed either way?
Actually, no. It’s not that simple. We cannot, of course, forecast with any precision the economic consequences of doing healthcare for less. But a close examination of exactly how we get to a leaner, more effective healthcare system reveals a far more intricate and interrelated economic landscape.
In a leaner healthcare, some types of tasks will disappear, diminish, or become less profitable. That’s what “leaner” means. But other tasks will have to expand. Those most likely to wane or go “poof” are different from those that will grow. At the same time, a sizable percentage of the money that we waste in healthcare is not money that funds healthcare jobs; it is simply profit being sucked into the Schwab accounts and ski boats of high-income individuals and the shareholders of profitable corporations.
Let’s take a moment to walk through this: how we get to half, what disappears, what grows, and what that might mean for jobs in healthcare. (A rough arithmetic sketch follows the list below.)
Getting to half
How would this leaner Next Healthcare be different from today’s?
Waste disappears: Studies agree that some one-third of all healthcare is simple waste. We do these unnecessary procedures and tests largely because, in a fee-for-service system, we can get paid to do them. If we pay for healthcare differently, this waste will tend to disappear.
Prices rationalize: As healthcare becomes something more like an actual market with real buyers and real prices, prices will rationalize close to today’s 25th percentile. The lowest prices in any given market are likely to rise somewhat, while the high-side outliers will drop like iron kites.
Internal costs drop: Under these pressures, healthcare providers will engage in serious, continual cost accounting and “lean manufacturing” protocols to get their internal costs down.
The gold mine in chronic: There is a gold mine at the center of healthcare: the prevention and control of chronic disease, which brings acute costs down through close, trusted relationships between patients, caregivers, and clinicians.
Tech: Using “big data” internally to drive performance and cost control; externally to segment the market and target “super users”; and using widgets, dongles, and apps to maintain that key trusted relationship between the clinician and the patient/consumer/caregiver.
Consolidation: Real competition on price and quality, plus the difficulty of managing hybrid risk/fee-for-service systems, means that we will see wide variations in the market success of providers. Many will stumble or fail. This will drive continued consolidation in the industry, creating large regional and national networks of healthcare providers capable of driving cost efficiency and risk efficiency through the whole organization.
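To see how these levers could compound toward half, here is the promised back-of-the-envelope sketch; the percentages are illustrative assumptions, not estimates drawn from the studies above:

```python
# Illustrative assumptions only: rough levers, not measured estimates.
total_spend = 1.00          # normalize today's healthcare spend to 1.0

waste_share = 1 / 3         # "some one-third of all healthcare is simple waste"
after_waste = total_spend * (1 - waste_share)          # ~0.67

price_cut = 0.15            # assumed average drop as prices rationalize
after_prices = after_waste * (1 - price_cut)           # ~0.57

lean_savings = 0.10         # assumed savings from lean internal cost control
after_lean = after_prices * (1 - lean_savings)         # ~0.51

print(f"Remaining spend: {after_lean:.2f} of today's total")
# Remaining spend: 0.51 of today's total
```

The point of the sketch is simply that no single lever gets to half; it is the compounding of several that does.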
European health care systems are already awash in “big data.” The United States is rushing to catch up, although clumsily, thanks to the need to corral a century’s worth of heterogeneity. To avoid confounding the chaos further, the United States is postponing the adoption of the ICD-10 classification system. Hence, it will be some time before American “big data” can be put to the task of defining the accuracy, costs, and effectiveness of individual tests and treatments with the exquisite analytics already being employed in Europe. From my perspective as a clinician and clinical educator, of all the many failings of the American “health care” system, the inability to massage “big data” in this fashion is the least pressing. I am no Luddite, but I am cautious if not skeptical when “big data” intrudes into the patient-doctor relationship.
The driver for all this is the notion that “health care” can be brought to heel with a “systems approach.”
This was first advocated by Lucian Leape in the context of patient safety and reiterated in “To Err Is Human,” the influential document published by the National Academies Press in 2000. This is an approach that borrows heavily from the work of W. Edwards Deming and later Bill Smith. Deming (1900-1993) was an engineer who earned a PhD in physics at Yale. The aftermath of World War II found him on General Douglas MacArthur’s staff, offering lessons in statistical process control to Japanese business leaders. He continued to do so as a consultant for much of his later life and is considered the genius behind the Japanese industrial resurgence. The principle underlying Deming’s approach is that focusing on quality increases productivity and thereby reduces cost; focusing on cost does the opposite.

Bill Smith was also an engineer, one who honed this approach for Motorola with a methodology he introduced in 1987. The principle of Smith’s “six sigma” approach is that all aspects of production, even output, can be reduced to quantifiable data, allowing the manufacturer complete control of the process. Such control allows collective effort and teamwork to achieve the quality goals. These landmark achievements in industrial engineering have been widely adopted in industry, championed by giants such as Jack Welch of GE. No doubt they can result in improvements in the quality and profitability of myriad products, from jet engines to cell phones. Every product is the same, every product well designed and built, and every product profitable.
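As a minimal illustration of how six sigma reduces quality to quantifiable data, here is the methodology’s standard defect arithmetic, assuming scipy; the 1.5-sigma long-term shift is a convention of the method itself, not something from the text above:

```python
from scipy.stats import norm

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level.

    Six sigma convention applies a 1.5-sigma long-term drift, so the
    defect rate is the normal tail beyond (sigma_level - shift).
    """
    defect_prob = norm.sf(sigma_level - shift)  # one-sided tail probability
    return defect_prob * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO")
# 6 sigma works out to the canonical ~3.4 defects per million.
```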
An organization’s “business model” means: How does it make a living? What revenue streams sustain it? How it does that makes all the difference in the world.
Saturday, Natasha Singer wrote in the New York Times about health plans and healthcare providers using “big data,” including your shopping patterns, car ownership and Internet usage, to segment their markets.
The beginning of the article featured the University of Pittsburgh Medical Center (UPMC) using “predictive health analytics” to target the people who would benefit most from intervention, so that they would not need expensive emergency services and surgery. Later, the article mentioned organizations that used big data to find their best customers among the worried well and get them in for more tests and procedures. The article quoted experts fretting that this would just lead to more unnecessary and unhelpful care to fatten the providers’ bottom lines.
The article missed the real news here: Why is one organization (UPMC) using big data so that people end up using fewer expensive healthcare resources, while others use it to get people to use more healthcare, even if they don’t really need it?
Because they are paid differently. They have different business models.
UPMC is an integrated system with its own insurance arm covering 2.4 million people. As a system it has largely found a way out of the fee-for-service model. It has a healthier bottom line if its customers are healthier and so need fewer acute and emergency services. The other organizations are fee-for-service. Getting people in for more tests and biopsies is a revenue stream. For UPMC it would just be a cost.
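The incentive gap is easy to see with toy numbers; the figures below are assumed for illustration, not taken from UPMC or the article:

```python
# Toy numbers, assumed for illustration: one extra MRI under each payment model.
mri_price, mri_cost = 1200, 400       # what the payer pays vs. what the scan costs to run

ffs_margin = mri_price - mri_cost     # fee-for-service: the scan adds $800 of margin
capitated_margin = -mri_cost          # capitation: revenue is fixed, so the scan is a $400 loss

print(f"Fee-for-service: {ffs_margin:+d}; capitated: {capitated_margin:+d}")
# Fee-for-service: +800; capitated: -400
```

Same procedure, opposite effect on the bottom line, which is the whole difference between the two business models.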
The evil here is not using predictive modeling to segment the market. The evil here is the fee-for-service system that rewards waste and profiteering in medicine.
At the first White House public workshop on Big Data, Latanya Sweeney, a leading privacy researcher at Carnegie Mellon and Harvard who is now the chief technologist for the Federal Trade Commission, was quoted as asking about privacy and big data, “computer science got us into this mess; can computer science get us out of it?”
There is a lot computer science and other technology can do to help consumers in this area. Some examples:
• The same predictive analytics and machine learning used to understand and manage preferences for products or content and improve user experience can be applied to privacy preferences. This would take some of the burden off individuals to manage their privacy preferences actively and enable providers to adjust disclosures and consent for differing contexts that raise different privacy sensitivities.
Computer science has done a lot to improve user interfaces and user experience by making them context-sensitive, and the same can be done to improve users’ privacy experience.
• Tagging and tracking privacy metadata would strengthen accountability by making it easier to ensure that the use, retention, and sharing of data are consistent with the expectations under which the data was first provided. (A minimal sketch of such tagging follows this list.)
• Developing features and platforms that let consumers see what data is collected about them, that use visualizations to make that data more interpretable, and that give consumers back the data they themselves generate would provide much more dynamic and meaningful transparency than static privacy policies that few consumers read and only experts can usefully interpret.
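Here is that sketch: a hypothetical record wrapper whose field names and policy values are illustrative assumptions, not an established metadata vocabulary:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TaggedRecord:
    """Data record carrying privacy metadata alongside its payload."""
    payload: dict
    purpose: str                              # why the data was collected
    consented_uses: set = field(default_factory=set)
    retain_until: date = date.max

    def allows(self, use: str, today: date) -> bool:
        """Check a proposed use against the consent and retention tags."""
        return use in self.consented_uses and today <= self.retain_until

record = TaggedRecord(
    payload={"zip": "15213", "visits": 4},
    purpose="care-coordination",
    consented_uses={"care-coordination", "quality-reporting"},
    retain_until=date(2016, 12, 31),
)

print(record.allows("marketing", date(2015, 1, 15)))          # False
print(record.allows("quality-reporting", date(2015, 1, 15)))  # True
```

Because the tags travel with the data, any downstream system can check a proposed use against the original expectations instead of relying on policy documents alone.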
In a recent speech to MIT’s industrial partners, I presented examples of research on privacy-protecting technologies.