Big Data

If another case of Ebola emanates from the unfortunate Texas Health Presbyterian Hospital, the Root Cause Analysts might mount their horses, the Six Sigma Black Belts might skydive, and the Safety Champions might tunnel their way clandestinely to rendezvous at the sentinel place.

What might be their unique insights? What will be their prescriptions?

One never knows what pearls one will encounter from ‘after-the-fact’ risk managers. I can imagine Caesar consulting a Sibyl as he was being stabbed by Brutus. “Obviously, Jules, you should have shared Cleo with Brutus.” Thanks, Sibyl. Perhaps you should have told him that last night.

Nevertheless, permit me to conjecture.

First, they might say that the hospital ‘lacks a culture of safety which resonates with the values and aspirations of the American people.’

That’s always a safe analysis when the Ebola virus has just been mistaken for a coronavirus. It’s sufficiently nebulous to never be wrong. The premise supports the conclusion. How do we know the hospital lacks a culture of safety? ‘Cos, they is missing Ebola, innit,’ as Ali G might not have said.

They would be careful about blaming the electronic health record (EHR), because it represents one of the citadels of the Toyotafication of Healthcare. But they would remind us of the obvious: ‘EHRs don’t go to medical school, doctors do.’ A truism which shares a phenotype with the pro-gun lobby’s favorite, ‘guns don’t kill, people kill.’

Continue reading “Six Sigma vs Ebola”

Joe Flower

Put the question in 1880: Will technology replace farmers? Most of them. In the 19th century, some 80% of the population worked in agriculture. Today? About 2% — and they are massively more productive.

Put it in 1980: Will technology replace office workers? Some classes of them, yes. Typists, switchboard operators, stenographers, file clerks, mail clerks — many job categories have diminished or disappeared in the last three decades. But have we stopped doing business? Do fewer people work in offices? No, but much of the rote mechanical work is carried out in vastly streamlined ways.

Similarly, technology will not replace doctors. But emerging technologies have the capacity to replace, streamline, or even render unnecessary much of the work that doctors do — in ways that actually increase the value and productivity of physicians. Imagine some of these scenarios with me:

· Next-generation EMRs that are transparent across platforms and organizations, so that doctors spend no time searching for and re-entering longitudinal records, images, or lab results; and that obviate the need for a separate coding capture function — driving down the need for physician hours of labor.

Continue reading “Will Technology Replace Doctors?”

The term Big Data is ubiquitous and enigmatic. It’s so overused that it has practically morphed into a meme for using fancy math to make technology better. In a recent Center for Technology Innovation analysis of Big Data in education, the term was defined as a “group of statistical techniques that uncover patterns.” But others disagree, so what is Big Data?

To answer that question, Jenna Dutcher, Community Relations Manager for datascience@berkeley, the UC Berkeley School of Information’s online master’s in data science, asked subject matter experts from industry, academia, and the public sector how they define Big Data. All of the answers are fascinating, but several are worth highlighting.

Continue reading “What Does Big Data Actually Mean?”

Everywhere we turn these days it seems “Big Data” is being touted as a solution for physicians and physician groups who want to participate in Accountable Care Organizations (ACOs) and/or accountable care-like contracts with payers.

We disagree, and think the accumulated experience about what works and what doesn’t work for care management suggests that a “Small Data” approach might be good enough for many medical groups, while being more immediately implementable and a lot less costly. We’re not convinced, in other words, that the problem for ACOs is a scarcity of data or second-rate analytics. Rather, the problem is that we are not taking advantage of, and using more intelligently, the data and analytics already in place, or nearly in place.

For those of you who are interested in the concept of Big Data, Steve Lohr recently wrote a good overview in his column in the New York Times, in which he said:

“Big Data is a shorthand label that typically means applying the tools of artificial intelligence, like machine learning, to vast new troves of data beyond that captured in standard databases. The new data sources include Web-browsing data trails, social network communications, sensor data and surveillance data.”

Applied to health care and ACOs, the proponents of Big Data suggest that some version of IBM’s now-famous Watson, teamed up with arrays of sensors and a very large clinical data repository containing virtually every known fact about all of the patients seen by the medical group, is a needed investment. Of course, many of these data are not currently available in structured (that is, computable) format. So one of the costly requirements Big Data may impose on us is the need to convert large amounts of unstructured or poorly structured data into structured data. But once that is accomplished, so advocates tell us, Big Data is not only good for quality care but “absolutely essential” for attaining the cost efficiency needed by doctors and nurses to have a positive and money-making experience with accountable care shared-savings, gain-share, or risk contracts.

Continue reading “The Power of Small”

Healthcare costs far too much. We can do it better for half the cost. But if we did cut the cost in half, we would cut the jobs in half, wipe out 9% of the economy and plunge the country into a depression.

Really? It’s that simple? Half the cost equals half the jobs? So we’re doomed either way?

Actually, no. It’s not that simple. We cannot of course forecast with any precision the economic consequences of doing healthcare for less. But a close examination of exactly how we get to a leaner, more effective healthcare system reveals a far more intricate and interrelated economic landscape.

In a leaner healthcare system, some types of tasks will disappear, diminish, or become less profitable. That’s what “leaner” means. But other tasks will have to expand. Those most likely to wane or go “poof” are different from those that will grow. At the same time, a sizable percentage of the money that we waste in healthcare is not money that funds healthcare jobs; it is simply profit being sucked into the Schwab accounts and ski boats of high-income individuals and the shareholders of profitable corporations.

Let’s take a moment to walk through this: how we get to half, what disappears, what grows and what that might mean for jobs in healthcare.

Getting to half

How would this leaner Next Healthcare be different from today’s?

Waste disappears: Studies agree that some one third of all healthcare is simple waste. We do these unnecessary procedures and tests largely because in a fee-for-service system we can get paid to do them. If we pay for healthcare differently, this waste will tend to disappear.

Prices rationalize: As healthcare becomes something more like an actual market with real buyers and real prices, prices will rationalize close to today’s 25th percentile. The lowest prices in any given market are likely to rise somewhat, while the high-side outliers will drop like iron kites.
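
To make the arithmetic concrete, here is a minimal sketch, with invented prices, of what pulling a market’s prices toward its 25th percentile does to total spending:

```python
import numpy as np

# Hypothetical prices charged for the same procedure across one market.
prices = np.array([900, 1200, 1500, 2200, 2600, 3400, 5100, 8800])

p25 = np.percentile(prices, 25)  # today's 25th-percentile price

# Lower prices rise toward the rationalized price; high-side outliers drop to it.
current_total = int(prices.sum())
rationalized_total = p25 * len(prices)

print(f"25th-percentile price: ${p25:,.0f}")                       # $1,425
print(f"Current spend: ${current_total:,}")                        # $25,700
print(f"Rationalized spend: ${rationalized_total:,.0f}")           # $11,400
print(f"Reduction: {1 - rationalized_total / current_total:.0%}")  # 56%
```

With these made-up numbers the cut happens to land near half; real distributions vary by market, but the mechanism is the same.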

Internal costs drop: Under these pressures, healthcare providers will engage in serious, continual cost accounting and “lean manufacturing” protocols to get their internal costs down.

The gold mine in chronic: There is a gold mine at the center of healthcare in the prevention and control of chronic disease, getting acute costs down through close, trusted relationships between patients, caregivers, and clinicians.

Tech: Using “big data” internally to drive performance and cost control; externally to segment the market and target “super users”; as well as using widgets, dongles, and apps to maintain that key trusted relationship between the clinician and the patient/consumer/caregiver.

Consolidation: Real competition on price and quality, plus the difficulty of managing hybrid risk/fee-for-service systems, means that we will see wide variations in the market success of providers. Many will stumble or fail. This will drive continued consolidation in the industry, creating large regional and national networks of healthcare providers capable of driving cost efficiency and risk efficiency through the whole organization.

Continue reading “Half the Cost. Half the Jobs?”

Nortin Hadler

European health care systems are already awash in “big data.” The United States is rushing to catch up, although clumsily, thanks to the need to corral a century’s worth of heterogeneity. To avoid confounding the chaos further, the United States is postponing the adoption of the ICD-10 classification system. Hence, it will be some time before American “big data” can be put to the task of defining the accuracy, costs and effectiveness of individual tests and treatments with the exquisite analytics already being employed in Europe. From my perspective as a clinician and clinical educator, of all the many failings of the American “health care” system, the inability to massage “big data” in this fashion is the least pressing. I am no Luddite – but I am cautious, if not skeptical, when “big data” intrudes into the patient-doctor relationship.

The driver for all this is the notion that “health care” can be brought to heel with a “systems approach.”

This was first advocated by Lucian Leape in the context of patient safety and reiterated in “To Err Is Human,” the influential document published by the National Academies Press in 2000. This is an approach that borrows heavily from the work of W. Edwards Deming and, later, Bill Smith. Deming (1900-1993) was an engineer who earned a PhD in physics at Yale. The aftermath of World War II found him on General Douglas MacArthur’s staff offering lessons in statistical process control to Japanese business leaders. He continued to do so as a consultant for much of his later life and is considered the genius behind the Japanese industrial resurgence. The principle underlying Deming’s approach is that focusing on quality increases productivity and thereby reduces cost; focusing on cost does the opposite. Bill Smith was also an engineer, one who honed this approach for Motorola with a methodology he introduced in 1987. The principle of Smith’s “six sigma” approach is that all aspects of production, even output, can be reduced to quantifiable data, allowing the manufacturer complete control of the process. Such control allows for collective effort and teamwork to achieve the quality goals. These landmark achievements in industrial engineering have been widely adopted in industry, championed by giants such as Jack Welch of GE. No doubt they can result in improvement in the quality and profitability of myriad products, from jet engines to cell phones. Every product is the same, every product well designed and built, and every product profitable.

Continue reading “Missing the Forest For the Granularity”

An organization’s “business model” means: How does it make a living? What revenue streams sustain it? How it does that makes all the difference in the world.

Saturday, Natasha Singer wrote in the New York Times about health plans and healthcare providers using “big data,” including your shopping patterns, car ownership and Internet usage, to segment their markets.

The beginning of the article featured the University of Pittsburgh Medical Center (UPMC) using “predictive health analytics” to target people who would benefit the most from intervention so that they would not need expensive emergency services and surgery. The latter part of the article mentioned organizations that used big data to find their best customers among the worried well and get them in for more tests and procedures. The article quoted experts fretting that this would just lead to more unnecessary and unhelpful care to fatten the providers’ bottom lines.

The article missed the real news here: Why is one organization (UPMC) using big data so that people end up using fewer expensive healthcare resources, while others use it to get people to use more healthcare, even if they don’t really need it?

Because they are paid differently. They have different business models.

UPMC is an integrated system with its own insurance arm covering 2.4 million people. As a system it has largely found a way out of the fee-for-service model. It has a healthier bottom line if its customers are healthier and so need fewer acute and emergency services. The other organizations are fee-for-service. Getting people in for more tests and biopsies is a revenue stream. For UPMC it would just be a cost.
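
The incentive flip is easy to see in a toy calculation. Here is a minimal sketch, with invented fee and cost figures, of the same marginal test under the two business models:

```python
# Invented figures for one marginal test; only the sign of the result matters.
FEE_PER_TEST = 400   # what a fee-for-service provider bills per test
COST_PER_TEST = 250  # what performing the test actually costs

def ffs_margin(extra_tests: int) -> int:
    """Fee-for-service: every extra test adds fee-minus-cost to the bottom line."""
    return (FEE_PER_TEST - COST_PER_TEST) * extra_tests

def capitated_margin(extra_tests: int) -> int:
    """Capitated/integrated: premiums are fixed, so every extra test is pure cost."""
    return -COST_PER_TEST * extra_tests

print(ffs_margin(1000))        # +150000: more testing is a revenue stream
print(capitated_margin(1000))  # -250000: more testing erodes the margin
```

Same test, same patient, opposite sign on the provider’s income statement.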

The evil here is not using predictive modeling to segment the market. The evil here is the fee-for-service system that rewards waste and profiteering in medicine.

At the first White House public workshop on Big Data, Latanya Sweeney, a leading privacy researcher at Carnegie Mellon and Harvard who is now the chief technologist for the Federal Trade Commission, was quoted as asking about privacy and big data, “computer science got us into this mess; can computer science get us out of it?”

There is a lot computer science and other technology can do to help consumers in this area. Some examples:

•    The same predictive analytics and machine learning used to understand and manage preferences for products or content, and to improve user experience, can be applied to privacy preferences. This would take some of the burden off individuals to manage their privacy preferences actively and enable providers to adjust disclosures and consent for differing contexts that raise different privacy sensitivities.

Computer science has done a lot to improve user interfaces and user experience by making them context-sensitive, and the same can be done to improve users’ privacy experience.

•    Tagging and tracking privacy metadata would strengthen accountability by making it easier to ensure that the use, retention, and sharing of data is consistent with expectations when the data was first provided; a sketch of this idea follows the list.

•    Developing features and platforms that enable consumers to see what data is collected about them, employ visualizations to make that data easier to interpret, and give consumers more of the benefit of the data they themselves generate would provide much more dynamic and meaningful transparency than static privacy policies that few consumers read and only experts can interpret usefully.
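
As a rough illustration of the metadata idea above, here is a minimal sketch; the record fields, tag names, and check are all invented for this example:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PrivacyTag:
    """Metadata attached to a record when the data is first provided."""
    source: str        # where the data came from
    collected: date    # when it was provided
    allowed_uses: set  # purposes the consumer consented to
    retain_until: date # agreed retention limit

@dataclass
class Record:
    payload: dict
    tag: PrivacyTag

def use_is_permitted(record: Record, purpose: str, today: date) -> bool:
    """Accountability check: is this use consistent with the original consent?"""
    return purpose in record.tag.allowed_uses and today <= record.tag.retain_until

rec = Record(
    payload={"zip": "15213", "visits": 3},
    tag=PrivacyTag(
        source="patient portal",
        collected=date(2014, 1, 10),
        allowed_uses={"care_management"},
        retain_until=date(2016, 1, 10),
    ),
)

print(use_is_permitted(rec, "care_management", date(2014, 5, 1)))  # True
print(use_is_permitted(rec, "marketing", date(2014, 5, 1)))        # False
```

The point is not the particular fields but that the consent context travels with the data, so downstream use can be audited against it.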

In a recent speech to MIT’s industrial partners, I presented examples of research on privacy-protecting technologies.

Continue reading “Using Technology to Better Inform Consumers about Privacy Decisions”

Human beings are big data. We aren’t just 175 pounds of meat and bone. We aren’t just piles of hydrogen and carbon and oxygen. What makes us all different is how it’s all organized, and that is information.

We can no longer treat people based on simple numbers like weight, pulse, blood pressure, and temperature. What makes us different is much more complicated than that.

We’ve known for decades that we are all slightly different genetically, but now we can increasingly see those differences. The Hippocratic oath will require doctors to take this genetic variability into account.

I’m not saying there isn’t a place for hands-on medicine, empathy, psychology and moral support. But the personalized handling of each patient is becoming much more complicated. The more data we gather, the more we see how each individual differs from the others.

In our genome, we have approximately 3 billion base pairs in each of our trillions of cells. We have more than 25,000 genes in that genome; the protein-coding portion is sometimes called the exome. Each gene contains instructions on how to make a useful protein. And then there are long stretches of our genomes that regulate those protein-manufacturing genes.
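
A back-of-the-envelope sketch of why those figures amount to a data problem (2 bits can encode one of the four bases; the genome size is the rough number above):

```python
# Back-of-the-envelope: one genome as raw data.
BASE_PAIRS = 3_000_000_000  # ~3 billion base pairs
BITS_PER_BASE = 2           # A, C, G, T -> 2 bits each

raw_gb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e9
print(f"One genome, naively encoded: ~{raw_gb:.2f} GB")  # ~0.75 GB

# Sequencers read every position many times over ("coverage"), and raw
# reads carry quality scores, so real per-patient files are far larger.
COVERAGE = 30  # a commonly cited whole-genome sequencing depth
print(f"At {COVERAGE}x coverage: ~{COVERAGE * raw_gb:.0f} GB of raw base calls")
```

Multiply that by millions of patients, and “big data” is not hyperbole.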

In the early days, some researchers called this “junk DNA” because they didn’t know what it did. But this was foolish: why would evolution conserve these DNA sequences between genes if they did nothing? Now we know they too do things that make us unique.

Continue reading “Is Medicine a Big Data Problem?”


In their best-selling 2013 book Big Data: A Revolution That Will Transform How We Live, Work and Think, authors Viktor Mayer-Schönberger and Kenneth Cukier selected Google Flu Trends (GFT) as the lede of chapter one.

They explained how Google’s algorithm mined five years of web logs, containing hundreds of billions of searches, and created a predictive model utilizing 45 search terms that “proved to be a more useful and timely indicator [of flu] than government statistics with their natural reporting lags.”

Unfortunately, no. The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic. Last year, Nature reported that Flu Trends overestimated the peak Christmas-season flu of 2012 by 50%. Last week came the most damning evaluation yet.

In Science, a team of Harvard-affiliated researchers published their findings that GFT has overestimated the prevalence of flu for 100 out of the last 108 weeks; it’s been wrong since August 2011.

The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.
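
The baseline the researchers describe is essentially a lagged regression on the series’ own recent past. Here is a minimal sketch of that kind of model; the weekly flu-activity numbers are invented (the actual evaluation used CDC surveillance data):

```python
import numpy as np

# Invented weekly flu activity (% of doctor visits for influenza-like illness).
ili = np.array([1.1, 1.3, 1.8, 2.4, 3.1, 3.0, 2.6, 2.0, 1.5, 1.2])

# The "simplistic" baseline: predict each week from the two prior weeks
# by ordinary least squares -- no search logs required.
X = np.column_stack([ili[:-2], ili[1:-1], np.ones(len(ili) - 2)])
y = ili[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

next_week = np.array([ili[-2], ili[-1], 1.0]) @ coef
print(f"Forecast for next week: {next_week:.2f}%")
```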

In short, you wouldn’t have needed big data at all to do better than Google Flu Trends. Ouch.

In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated.

As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The amount of data still tends to dominate discussion of big data’s value. But more data does not in itself lead to better analysis, as amply demonstrated with Flu Trends. Large datasets don’t guarantee valid datasets. That’s a bad assumption, but one used all the time to justify the use of, and the results from, big data projects.

Continue reading “Google Flu Trends Shows Good Data > Big Data”
