Natural language processing might seem a bit arcane and technical – the type of thing that software engineers talk about deep into the night, but of limited usefulness for practicing docs and their patients.
Yet software that can “read” physicians’ and nurses’ notes may prove to be one of the seminal breakthroughs in digital medicine. Exhibit A, from the world of medical research: a recent study linked the use of proton pump inhibitors to subsequent heart attacks. It did this by plowing through 16 million notes in electronic health records. While legitimate epidemiologic questions can be raised about the association (more on this later), the technique may well be a game-changer.
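The core idea is easier to picture with a toy example. The sketch below is purely illustrative and not the study’s actual pipeline: the notes, keyword lists, and simple 2×2 analysis are all assumptions, meant only to show how keyword-level NLP over free-text notes can feed an exposure–outcome comparison.

```python
import re
from math import inf

# Hypothetical toy notes standing in for EHR free text.
notes = [
    "Started omeprazole for GERD. Later admitted with myocardial infarction.",
    "Patient on omeprazole, no cardiac events reported.",
    "Not on acid suppression therapy. Presented with myocardial infarction.",
    "Healthy follow-up visit, no medications.",
]

def mentions(note, terms):
    """Crude keyword matching -- real clinical NLP must also handle
    negation ('denies chest pain'), abbreviations, and misspellings."""
    return any(re.search(t, note, re.IGNORECASE) for t in terms)

ppi_terms = ["omeprazole", r"\bPPI\b", "proton pump inhibitor"]
mi_terms = ["myocardial infarction", "heart attack"]

# Build a 2x2 contingency table: exposure (PPI mention) vs outcome (MI mention).
a = b = c = d = 0
for note in notes:
    ppi, mi = mentions(note, ppi_terms), mentions(note, mi_terms)
    if ppi and mi:
        a += 1          # exposed, outcome
    elif ppi:
        b += 1          # exposed, no outcome
    elif mi:
        c += 1          # unexposed, outcome
    else:
        d += 1          # unexposed, no outcome

# Odds ratio from the 2x2 table; inf if a cell of the denominator is zero.
odds_ratio = (a * d) / (b * c) if b * c else inf
print(f"exposed+outcome={a}, OR={odds_ratio:.2f}")
```

In this toy data the odds ratio comes out to 1.0 (no association); the epidemiologic caveats mentioned above – confounding by indication, note quality, negation – are exactly what separates a sketch like this from a publishable analysis.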
Let’s start with a little background.
One of the great tensions in health information technology centers on how to record data about patients. This used to be simple. At the time of Hippocrates, the doctor chronicled the patient’s symptoms in prose. The chart was, in essence, the physician’s journal. Medical historian Stanley Reiser describes the case of a gentleman named Apollonius of Abdera, who lived in the 5th century BCE. The physician’s note read:
There were exacerbations of the fever; the bowels passed practically nothing of the food taken; the urine was thin and scanty. No sleep. . . . About the fourteenth day from his taking to bed, after a rigor, he grew hot; wildly delirious, shouting, distress, much rambling, followed by calm; the coma came on at this time.
The cases often ended with a grim coda. In the case of Apollonius, it read: “Thirty-fourth day. Death.”
As the health care community waits for the outcome of King v. Burwell, the latest Affordable Care Act (ACA) challenge, the focus has been on a key question: What happens if the Supreme Court doesn’t allow the federal healthcare marketplace to continue to offer premium tax subsidies? But how such a decision would affect insurance coverage rates is just the tip of the iceberg. Eliminating federal subsidies would affect a whole range of ACA policies that were carefully navigated during the legislative process. As we wait for the legal decision, we have an opportunity to examine whether the choices made in 2010 remain on solid ground if a significant portion of subsidized coverage disappears.
The ACA is the result of a complex web of compromise and, of course, a healthy dose of politics. By its very nature, the legislative process seeks to balance interests and assign responsibilities. In the case of the ACA, this meant that a dramatic coverage expansion helped define which stakeholders – providers, insurers, employers, and others – would benefit down the line in the form of new customers (and revenue) or reduced costs. In turn, it was reasoned, these stakeholders would bear burdens, in the form of reduced revenue or new tax or regulatory obligations, to help pay for the legislation.
If only the trade-offs were that simple. In reality, complex and often charged discussions took place with numerous stakeholders and were linked to policies that extended beyond healthcare coverage (e.g., Medicaid drug rebates). Additionally, since the law passed, numerous efforts to repeal, amend, or delay key ACA financing components – including insurer fees, medical device taxes, hospital subsidies, and the small business mandate – have surfaced and threatened to upend the ACA’s attempted balancing act.
On May 4, 2015, Department of Health and Human Services (HHS) Secretary Burwell announced that the Pioneer ACO program had saved the federal government $384 million and improved quality in its first two years, and would therefore be expanded. HHS also released a 130-page independent program evaluation by L&M Policy Research that served as the basis for the Centers for Medicare and Medicaid Services (CMS) Actuary’s certification of the Pioneer program.
Burwell’s triumphant announcement was an intended shot in the arm for the troubled Pioneer ACO program, 40 percent of whose initial 32 members dropped out in the first two years. It also illustrated the yawning reality gap between DC policymakers and the provider-based managed care community. In reality, the Pioneer program badly damaged CMS’s credibility with that community and sharply reduced the likelihood that the ACO model will be broadly adopted.
We are data druggies.
We spend our days like desperate junkies crawling the carpet, sifting through the shaggy strands of patient histories with shaky fingers in search of facts. Every word our patients utter we feed to the never-ending demands of the electronic chart.
We find a fact and we enter it. The database grows. Someone somewhere adds another question we are supposed to ask our patients. We get back on our hands and knees. We start sifting once again.
Have you been to the continent of Africa in the last twenty-one days? Click. Do you or a loved one feel threatened at home? Click. How was your experience today? Click.
In the background the blood pressure cuff inflates, the quiet hiss filling the room. The monitor beeps along with the patient’s pulse, each ding another penny tossed into the ever-growing bank of patient data.
Congress is infected with the budget-cutting bug, and building an effective immune system requires political savvy. Sometimes, it’s simple (“We bomb terrorists” or “We process Social Security checks”), but sometimes an agency struggles. Case in point: AHRQ.
A House subcommittee recently voted to eliminate the Agency for Healthcare Research and Quality (AHRQ) as of Oct. 1, 2015, the start of fiscal 2016. If you hadn’t heard the news or aren’t sure why you should care, that’s exactly the point.
The GOP-led House Subcommittee on Health, Employment, Labor and Pensions (HELP) first voted to ax AHRQ back in 2012, along with other big government cuts; the agency escaped thanks to political gridlock that led to continuing budget resolutions instead of individual appropriations bills. Now, with the GOP in control of both houses of Congress, AHRQ has again been “terminated,” to quote the legislative language. But before railing against the Republicans, look at it from their viewpoint.
What HELP did was take about a half-billion dollars from Obamacare bureaucrats and use it as part of the budget boost given to scientists seeking to cure cancer, Alzheimer’s disease and similar ills at the National Institutes of Health, and to those at the Centers for Disease Control and Prevention working to protect Americans from dangerous epidemics such as Ebola.
You got a problem with that?
Can I fool you with the picture above? Apparently, some people think so.
I’m a Twitter newbie, but I’ve already discovered that sometimes you can tweet what you think is a helpful piece of data, then find yourself suddenly caught up in an explosive controversy. When it’s the Brookings Institution and US News and World Report on one side and passionate e-patients on the other, a research tweep is liable to feel like a nerdy accountant who wandered into the OK Corral at high noon with neither Kevlar nor a gun.
This happened to me when Niam Yaraghi of Brookings posted on the US News blog and the Brookings blog that people shouldn’t trust Yelp reviews in health care—the URL for the post actually ends “online-doctor-ratings-are-garbage”—because patients hadn’t been to medical school.
Last night my friend and mentor Marie Ennis-O’Connor (@JBBC) highlighted her recent post on Medium entitled: “Patients As A Prop.”
Marie’s post pointed me to an opinion piece written by Niam Yaraghi (@niamyaraghi) for US News and World Report. (Niam is a fellow at the Brookings Institution’s Center for Technology Innovation with a special focus on healthcare economics and health information.) Niam’s post, titled “Don’t Yelp Your Doctor,” discusses whether or not patients are capable or qualified to evaluate their physicians.
“Patients are neither qualified nor capable of evaluating the quality of the medical services that they receive.”
In the future, doctors who provide better healthcare will be paid more. When a doctor gives good care, she will get credit. For factors out of that doctor’s control, she won’t be penalized. The patient, too, will be rewarded for taking care of his own health. In short, payments will align with good care, and good care will become more common.
This is the promise of value-based care, which is coming, according to almost everyone. Medicare is pushing it. Private payers are preparing for it. Top providers are tooling up.
And yet, the question lingers — how exactly do we measure quality? Today quality measurement is rigid, periodic, and manual. Here’s a peek behind the curtain of what we measure today — and what’s possible tomorrow.
There’s been a lot of talk about crowdsourcing lately. Everything from criminal investigations, to the tax code, to ski resorts has been crowdsourced or considered for crowdsourcing. And now medicine has thrown its hat in this trendy ring. KQED’s “Future of You” recently reported on a company called CrowdMed that wants to be the “Wikipedia of medicine.” (Due to space constraints, this blog post will not engage the important question of whether Wikipedia itself is, in fact, the Wikipedia of medicine.)
CrowdMed touts itself as harnessing the wisdom of the crowd to improve and expedite diagnosis and treatment for patients whose doctors don’t have the answer. (The company was inspired by the difficulty its founder’s sister had in getting a rare condition diagnosed.) “Patients” pay CrowdMed a subscription fee ranging from $99-$249 per month in order to submit an account of their symptoms and medical history to CrowdMed’s “Medical Detectives.”
The Medical Detectives – who might be physicians or other healthcare professionals, but also might be any average Joe – read patients’ cases, and interact directly with patients to ask questions about their cases.
Prior to attending medical school, Parth Desai took a gap year to help his mom manage his dad’s small internal medicine practice. She was worried about how she was going to handle the looming transition from ICD-9 to ICD-10. Parth said he would help her out.
He looked at different consultants and programs, but they were all too complicated, too expensive, or both. He also reviewed a number of ICD-10 training programs without finding anything he thought was very good. He wanted help with code conversions, but everything he saw was slow, required additional personnel, or was too costly.
So, he did what lots of entrepreneurs do, he decided to build what he needed himself. He enlisted his former college roommate, Will Pattiz, a “tech whiz, outdoor enthusiast, and filmmaker” to help him and together they developed software that automates the conversion of ICD-9 to ICD-10 codes.
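At its core, a tool like this can be pictured as a lookup against a crosswalk table. The sketch below is a hypothetical illustration, not Desai and Pattiz’s actual software: it assumes a CMS GEM-style (General Equivalence Mappings) crosswalk loaded into a dict, and the four entries shown are a tiny hand-picked subset.

```python
# Hypothetical GEM-style crosswalk: ICD-9 code -> candidate ICD-10 codes.
# Note that one ICD-9 code can map to several ICD-10 codes, which is a
# big part of why the transition was so painful for small practices.
GEM = {
    "250.00": ["E11.9"],             # type 2 diabetes w/o complications
    "401.9":  ["I10"],               # essential (primary) hypertension
    "786.50": ["R07.9"],             # chest pain, unspecified
    "V72.0":  ["Z01.00", "Z01.01"],  # eye exam: a one-to-many mapping
}

def convert(icd9_code):
    """Return candidate ICD-10 codes for an ICD-9 code.

    An empty list means the code isn't in the crosswalk; a list with
    more than one entry is ambiguous and still needs human review --
    which is why a pure table lookup isn't the whole product.
    """
    return GEM.get(icd9_code.strip(), [])

print(convert("401.9"))   # ['I10']
print(convert("V72.0"))   # ['Z01.00', 'Z01.01'] -- needs clinician review
```

In practice, automating the conversion well means layering logic on top of this lookup: choosing among one-to-many candidates from chart context, flagging unmappable codes, and batching conversions across a practice’s whole billing history.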