I am happy to announce the release of the doctor “referral” social graph. This dataset, which I obtained using a Freedom of Information Act request against the Medicare claims database, details how most doctors, hospitals and other providers team together to deliver care in the United States. This graph is nothing less than a map of how healthcare is delivered in this country.
For the time being, the only way to get a copy of this data set is to support the Medstartr crowdfunding campaign for either $100 (for the viral "open source eventually" version of the data) or $1000 (for the proprietary-friendly version of the data, which any business can freely "merge" with other data). If you need consulting around this data, you can buy in at the $5k or $10k levels. Also, we are going to have really awesome t-shirts.
I will be writing a more in-depth technical article about this dataset over on the brand new O’Reilly Strata blog (which focuses specifically on Big Data) so I will gloss over most of the technical details here, with a few important exceptions.
First, when I say a “graph” I am not talking about a diagram. I am talking about a mathematical model that supports nodes and connections between those nodes. These are visualized as diagrams, but it is not possible to really analyze large graphs without a database. In this case, the nodes are doctors, hospitals and other providers and the connections between those nodes represent the degree to which they collaborate on specific patients.
Also, despite my branding to the contrary, this is not strictly a “referral” data set, although a fairly large portion of the data do represent referral relationships. Instead, it depicts the degree to which any healthcare provider “works” on a patient in the same time frame as some other provider. This means, for instance, that many primary care doctors are linked to emergency rooms. But this just means that a patient they were seeing was also seen by the emergency room in the same time period. Referral relationships can be inferred from this data, but not presumed.
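To make the graph model concrete, here is a minimal sketch of how such a provider graph could be represented and queried. The provider names and shared-patient counts are invented for illustration; the real dataset keys providers differently (e.g., by NPI) and this is not its actual schema:

```python
from collections import defaultdict

# Adjacency map: provider -> {neighbor: shared-patient count}.
# Names and counts below are invented for illustration only.
graph = defaultdict(dict)

def add_edge(a, b, shared_patients):
    """Record that providers a and b treated shared_patients in the same window."""
    graph[a][b] = shared_patients
    graph[b][a] = shared_patients

add_edge("Dr. Smith (primary care)", "General Hospital ER", 42)
add_edge("Dr. Smith (primary care)", "Dr. Jones (cardiology)", 17)
add_edge("Dr. Jones (cardiology)", "General Hospital ER", 9)

def strongest_ties(provider, top_n=2):
    """Neighbors of a provider, sorted by shared-patient count, descending."""
    return sorted(graph[provider].items(), key=lambda kv: -kv[1])[:top_n]

top = strongest_ties("Dr. Smith (primary care)")
```

Analyses like "which providers collaborate most" reduce to simple queries over this weighted adjacency structure, which is why a graph database (rather than a diagram) is the natural home for data at this scale.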
Pascal Lardier, Director, International Events of Health 2.0, answers questions about the co-production of health by patients and physicians today and in the future.
Health 2.0. What exactly does this fairly new term describe? When did you use it for the first time?
Pascal Lardier: It is quite a new term indeed. Our first conference was in 2007 in San Francisco and at the time some people called the movement a fad. Since then our organization Health 2.0 has introduced over 500 technology companies to the world stage, hosted more than 9,000 attendees at our conferences and code-a-thons around the world, awarded more than $1,400,000 in prizes through our developer challenge program and inspired the formation of 46 new chapters in cities around the globe! The movement was obviously far from a fad. Just as web 2.0 was a new version of the web, Health 2.0 describes a new era for health innovation where stakeholders collaborate, patients are empowered and the production of health becomes participatory.
Many people associate the word with social media and related things such as blogs, health platforms and health websites. Is that correct? How does “Health 2.0” differ from “e-Health” or “ICT”, for example?
PL: Communities such as online patient forums, and the content they produced, played an important role in the Health 2.0 movement from the start. But it’s not just about social media and communities anymore: it’s also about patient-physician communication, personalized medicine, population health management, wellness, sensors/devices/unplatforms, data, analytics, system reform and more. In the beginning, health content became participatory. It is now becoming more and more personalized. All these profound transformations called for a new name, and Health 2.0 was a good candidate for describing the extension of eHealth.
I am an EMR geek who isn’t so thrilled with the direction of EMR. So what, I have been asked, would make EMR something that is really meaningful? What would be the things that would truly help, and not just make more hoops for me to jump through? A lot of this is not in the hands of the gods of MU, but in the realm of the demons of reimbursement, but I will give it a try anyhow. Here’s my list:
- Require all visits to have a simple summary.
One of the biggest problems I have with EMR is the “data diarrhea” it creates, throwing piles of words into notes that are not useful for anything but ensuring compliance with billing codes. I waste a huge amount of time trying to figure out what specialists, colleagues, and even my own assessment and plan was for any given visit. Each note should have an easily accessible visit summary (and not at the bottom of 5 pages of historical data I already know because I sent them the patient in the first place!).
- Allow coding gibberish to be hidden.
Related to #1 would be the ability to hide as much “fluff” in notes as possible. I only care about the review of systems and a repetition of past histories 1 out of 100 times. Most of the time I am only interested in the history of the present illness, pertinent physical findings, and the plan generated from any given encounter. The rest of the note (which is about 75% of the words used) should be hidden, accessed only if needed. It is only input into the note for billing purposes.
Who owns a patient’s health information?
- The patient to whom it refers?
- The health provider that created it?
- The IT specialist who has the greatest control over it?
The notion of ownership is inadequate for health information. For instance, no one has an absolute right to destroy health information. But we all understand what it means to own an automobile: You can drive the car you own into a tree or into the ocean if you want to. No one has the legal right to do things like that to a “master copy” of health information.
All of the groups above have a complex series of rights and responsibilities relating to health information that should never be trivialized into ownership.
Raising the question of ownership at all is a hash argument. What is a hash argument? Here’s how Julian Sanchez describes it:
In our rush to establish a national electronic medical record (EMR) system as part of the American Recovery and Reinvestment Act of 2009, powerful silos of independent EMR systems have sprung up nationwide.
While most systems are being developed responsibly, many, as in the Wild West, have been developed without an objective eye toward quality and the potential harm they may be causing our patients.
As most readers of this blog are aware, since 2005 the medical device industry in which I work has had widely publicized defibrillator malfunctions, resulting in a small number of patient deaths, splashed all over the New York Times and other mainstream media outlets.
The backlash in response to these deaths was significant: device registries were developed, software improvements to devices were made, and billions of dollars in legal fees and damages were paid to patients and their families on the path to improvement. We also learned about the limits of corporate responsibility for these deaths thanks to the legal precedent established by Riegel v. Medtronic.
In an article posted earlier this year on this blog I argued that hospitals have traditionally done a sub-par job of leveraging what has now been dubbed “big data.” Effectively mining and managing the ever rising oceans of data presents both a major challenge – and a significant opportunity – for hospitals.
By doing a better job of connecting the dots of their big data assets, hospital management teams can start to develop the crucial insights that enable them to make the right and timely decisions that are vital to success today. And better, timelier decisions lead to improved results and a higher quality of patient care.
That’s the good news. The less than positive story is that hospitals are still way behind in using the mountains of data that are being generated within their institutions every day. Nowhere is this more apparent than in the advanced data management practice of predictive modeling.
At its most basic, predictive modeling is the process by which data models are created and used to try to predict the probability of an outcome. The exciting promise of predictive modeling is that it literally gives hospitals the ability to see into (and predict) the future. Given the massive changes and continuing uncertainty that are buffeting all sectors of the healthcare industry (and especially healthcare providers), having a clearer future view represents an important strategic advantage for any hospital leader.
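To ground the idea, here is a minimal sketch of the core of a predictive model: a logistic regression fit by gradient descent on an invented, toy "readmission risk" dataset. The feature names, values, and single-model design are illustrative assumptions, not how a production hospital model would be built:

```python
import math

# Toy, invented training data: each row is (prior_admissions, chronic_conditions);
# label 1 means the patient was readmitted within 30 days. Real models use far
# richer features (labs, demographics, utilization history, etc.).
X = [(0, 1), (1, 0), (1, 2), (3, 2), (4, 3), (5, 4)]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    """Estimated probability of readmission for feature vector x."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Fit by plain stochastic gradient descent on the log-loss.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    for x, label in zip(X, y):
        err = predict(weights, bias, x) - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

low_risk = predict(weights, bias, (0, 1))   # few prior admissions
high_risk = predict(weights, bias, (5, 4))  # many prior admissions
```

The output is exactly what the paragraph above describes: a probability of an outcome, which leadership can then act on (for example, flagging high-risk patients for follow-up before discharge).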
Last week’s New York Times article on cardiac care at some HCA-owned hospitals yielded a chorus of comments from readers who argued that for-profit hospital care is inherently low-quality care. As it happened, in working on a history of the investor-owned hospital sector, I had just been crunching data that might either support or refute that assertion. The results are surprising, if far from decisive.
Last September, the Joint Commission released the first of what it said would be annual lists recognizing “Top Performers on Key Quality Measures™” among the nation’s accredited hospitals. The all-star roster is based on “core measure performance data” that hospitals report to the Commission. The data cover adherence to “accountability measures” established as best practice in the eyes of the Commission – making sure to prescribe beta-blockers for heart attack patients at discharge, for example, or to discontinue prophylactic antibiotics within 24 hours after surgery.
Unlike hospital quality measures that look at results – death rates and other outcomes – this one looks at processes. In theory, then, it ought to be more fair to hospitals that tend to serve sicker or more compromised patients, such as government-run hospitals in inner cities.
There is a corner of the health care industry where rancor is rare, the chance to banish illness beckons just a few mouse clicks away and talk revolves around venture deals, not voluminous budget deficits.
Welcome to the realm of Internet-enabled health apps. Politicians and profit-seeking entrepreneurs alike enthuse about the benefits of “liberating data” – the catch-phrase of U.S. Chief Technology Officer Todd Park – to enable it to move from government databases to consumer-friendly uses. The potential for better information to promote better care is clear. The question that remains unanswered, however, is what role these consumer applications can play in prompting fundamental health system change.
Michael W. Painter, a physician, attorney and senior program officer at the Robert Wood Johnson Foundation, is optimistic. “We think that by harnessing this data and getting it into the hands of developers, entrepreneurs, established businesses, consumers and academia, we will unleash tremendous creativity,” Painter said. “The result will be improved and more cost efficient care, more engaged patients and discoveries that can help drive the next generation of care.”
The foundation is backing up that belief with an open checkbook. RWJF recently awarded $100,000 to Symcat, a multi-functional symptom checker for web and mobile platforms. Developed by two Johns Hopkins University medical students, the app determines a possible diagnosis far more precisely than is possible by just typing in symptoms as a list of words to be searched by “Dr. Google.” Symcat also links to quality information on different providers and can even direct users to nearby emergency care and provide an estimate of the cost.
In a piece just posted at TheAtlantic.com, I discuss what I see as the next great quest in applied science: the assembly of a unified health database, a “big data” project that would collect in one searchable repository all the parameters that measure or could conceivably reflect human well-being.
I don’t expect the insights gained from these data to make physicians obsolete, but rather to empower them (as well as patients and other stakeholders) and make them better, informing their clinical judgment without supplanting their empathy.
I also discuss how many companies and academic researchers are focusing their efforts on defined subsets of the information challenge, generally at the intersection of data domains. I observe that one notable exception seems to be big pharma, as many large drug companies seem to have decided that hefty big data analytics is a service to be outsourced, rather than a core competency to be built. I then ask whether this is savvy judgment or a profound miscalculation, and suggest that if you were going to create the health solutions provider of the future, arguably your first move would be to recruit a cutting-edge analytics team.
The question of core competencies is more than just semantics – it is perhaps the most important strategic question facing biopharma companies as they peer into a frightening and uncertain future.
With nearly one billion users producing a vast amount of data, Facebook has had a hand in publishing more than 30 research papers since 2009, including research that may link social-networking activity and loneliness.
But outside researchers have been unable to validate those studies because Facebook refused to release the underlying raw data, citing the need to protect users’ privacy. Now Facebook is considering changes to its policy. Nature News reports:
Facebook is now exploring a plan that could allow external researchers to check its work in future by inspecting the data sets and methods used to produce a particular study. A paper currently submitted to a journal could prove to be a test case, after the journal said that allowing third-party academics the opportunity to verify the findings was a condition of publication.