Leonard Kish, Principal at VivaPhi, sat down with Ed Park, COO of athenahealth, to discuss how interoperability is defined and how it might be accelerating faster than we think.
LK: Ed, how do you define interoperability?
EP: Interoperability is the ability of different systems to exchange information and then use that information in a way that is helpful to the users. It’s not simply the movement of data; it’s the useful movement of it to achieve some goal that the end user can understand and digest.
LK: So do you have measures of interoperability you use?
EP: The way we think about interoperability is in three major tiers. The first tier can be defined by the standard HL7 definitions that have been around for the better part of three decades at this point. Those are the standard pipes that are being built all the time: lab interoperability, prescription interoperability, hospital discharge summary interoperability. Those are the basic sorts of notes encapsulated in HL7. The second tier of interoperability we are thinking about is the semantic interoperability that has been enabled by meaningful use. The most useful thing that meaningful use did from an interop standpoint was to standardize all the data dictionaries. By that I mean they standardized the data dictionaries for medications, immunizations, allergies, and problems.
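The "standard pipes" in that first tier are literal: HL7 v2 messages are pipe-delimited segments that any receiving system can parse. As a minimal sketch (the segment below is invented for illustration, not taken from any real system), a lab result travels as an OBX observation segment:

```python
# Illustrative only: a hypothetical HL7 v2 lab-result (OBX) segment,
# showing the pipe-delimited structure the first tier of interop rests on.
obx = "OBX|1|NM|2345-7^GLUCOSE^LN||95|mg/dL|70-99|N|||F"

fields = obx.split("|")
segment_type = fields[0]       # "OBX" marks an observation/result segment
value_type = fields[2]         # "NM" means a numeric value
observation_id = fields[3]     # coded test identifier (LOINC-style here)
value, units = fields[5], fields[6]

print(segment_type, observation_id.split("^")[1], value, units)
# prints: OBX GLUCOSE 95 mg/dL
```

Exchanging segments like this is the syntactic layer; the second tier, semantic interoperability, is about both sides agreeing on what codes like `2345-7` mean.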
Making a decision requires you to compare tests/treatments that have been contrasted in research studies to see whether one rather than another results in improved chances of good outcomes. In a sense, medical decision making is a competition. To assess the competition, you compare the chances of outcomes, or results, from groups of people taking different options. The comparison is a simple subtraction of the amounts of outcomes that occur in each studied group.
Subtracting results in a difference that is either a benefit (if better for you) or a harm (if worse for you). For nearly all decisions, however, the test/treatment that is better for disease outcomes (benefit) is worse for complications (harm). Comparing, then, results in the following possibilities:
The chances of outcomes associated with the condition you have and the tests/treatments available will be the same for all options. In this case, choose the cheapest option.
The chance of outcomes associated with the condition you have will be less with one option. That option provides added benefit.
The chance of a complication caused by the test/treatment that adds benefit for the disease outcomes will be greater (harm).
Since the test/treatment that is better for you in terms of the disease you have will be, simultaneously, worse for you in terms of complications caused by that test/treatment, a trade-off of benefit and harm is required.
Hence, the definition of “works” is that:
A test/treatment works when you feel there is more to gain from the greater chance of better disease associated outcomes than there would be to lose from suffering the complications caused by your chosen treatment.
So, medical decision making is a competition between options, and there is always some good to be balanced against some bad.
The balance of good and bad from your perspective is what makes one treatment work over another.
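The subtraction described above can be made concrete with a toy example. The trial numbers below are invented for illustration, not real study data:

```python
# A toy sketch of the benefit/harm comparison: subtract the chance of an
# outcome in one studied group from the chance in the other.

def risk(events, group_size):
    """Chance of an outcome in a studied group."""
    return events / group_size

# Disease outcomes (e.g. strokes) per 1,000 people in each arm (made up):
risk_treated = risk(30, 1000)    # option A
risk_control = risk(50, 1000)    # option B

# Complications caused by the treatment, per 1,000 people (made up):
harm_treated = risk(15, 1000)
harm_control = risk(5, 1000)

benefit = risk_control - risk_treated   # 0.02: 20 fewer bad outcomes per 1,000
harm = harm_treated - harm_control      # 0.01: 10 extra complications per 1,000

print(f"added benefit: {benefit:.3f}, added harm: {harm:.3f}")
```

Whether option A "works" then depends on whether you value 20 fewer disease outcomes more than you fear 10 extra complications; the arithmetic supplies the trade-off, and your perspective settles it.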
Robert McNutt, MD is a board certified internist in Clarendon Hills, Illinois. He is a Professor at Rush Medical College of Rush University.
What if policymakers, science reporters and even scientists can’t distinguish between weak and trustworthy research studies that underlie our health care decisions?
Many studies of healthcare treatments and policies do not prove cause-and-effect relationships because they suffer from faulty research designs. The result is a pattern of mistakes and corrections: early studies of new treatments tend to show dramatic positive health effects, which diminish or disappear as more rigorous studies are conducted.
Indeed, when experts on research evidence do systematic reviews of research studies they commonly exclude 50%-75% because they do not meet basic research design standards required to yield trustworthy conclusions.
In many such studies researchers try to statistically manipulate data to ‘adjust for’ irreconcilable differences between intervention and control groups. Yet it is these very differences that often create the reported, but invalid, effects of the treatments or policies that were studied.
In this accessible and graph-filled article published recently by the US Centers for Disease Control and Prevention, we describe five case examples of how some of the most common biases and flawed study designs impact research on important health policies and interventions, such as comparative effectiveness of medical treatments, cost-containment policies, and health information technology.
That’s because falling premiums are causing the size of the Obamacare tax credits to fall even faster in Indiana. And since 87 percent of Indiana’s exchange buyers this year received a tax credit, smaller tax credits will make the out-of-pocket cost far higher for those Hoosiers.
How much more? According to my analysis of insurers’ filings with the Indiana Department of Insurance, 30 percent, 60 percent, 90 percent and even 180 percent increases will be common for Hoosiers buying Silver plans for 2016, depending on their age and incomes.
Imagine what critics of Obamacare would be saying about those figures.
This topsy-turvy result is due to the convoluted system the Affordable Care Act set up to determine the size of tax credits in each state.
A recent New York Times article profiled a pair of ultra-expensive pain medications designed to go easy on the stomach. Common pain relievers like aspirin, ibuprofen and naproxen are prone to irritate the stomach if taken repeatedly throughout the day. A newer class of pain medication, the COX-2 inhibitors, is the preferred option for those who cannot take traditional nonsteroidal anti-inflammatory drugs (NSAIDs) on a long-term basis. Celecoxib, the generic version of Celebrex, is now available at a cost of about $2 per tablet, but that can add up to about $700 to $1,000 per year.
More than a decade ago researchers found that taking heartburn medications with common NSAIDs could mimic the benefits of the costly COX-2 inhibitors. However, the study found (at that time) that combining heartburn medications and NSAIDs would not deliver any cost savings because of the high price of prescription heartburn treatments. A lot has changed in the years since the study. The costly proton pump inhibitors for heartburn are now available over the counter (OTC) for $0.31 to $0.60 apiece. The drugs mentioned in the Times article, Duexis and Vimovo, are based on the premise of combining NSAIDs with heartburn medications.
The catch? Each drug costs more than $1,500 for only a month’s supply. The cost per tablet is $17 and $25 respectively. Why so much? That’s a good question that doesn’t have a logical answer. Although nearly 90 percent of the drugs Americans take are inexpensive generics, a small segment – about 1 percent of all drugs prescribed – falls into a category known as “specialty drugs”.
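The price gap is easiest to see as annual arithmetic. The sketch below uses the per-unit figures quoted above; the once-daily celecoxib dosing and the generic-NSAID price (~$0.05/tablet) are assumptions for illustration:

```python
# Rough annual-cost comparison from the figures quoted in the article.
# Assumes one celecoxib tablet/day, a generic NSAID three times daily
# (~$0.05/tablet, assumed) plus one mid-priced OTC PPI (~$0.45).

days = 365

celecoxib_yr = 2.00 * days                # ~$730, within the quoted $700-$1,000
otc_combo_yr = (0.05 * 3 + 0.45) * days   # generic NSAID + OTC heartburn pill
duexis_yr = 1500 * 12                     # $1,500/month for the branded combo

print(celecoxib_yr, round(otc_combo_yr), duexis_yr)
# prints: 730.0 219 18000
```

Even with generous assumptions, the branded combination pill costs roughly 80 times the do-it-yourself OTC combination it is built from, which is the puzzle without a logical answer.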
The SEC has at last finalized its crowdfunding rule (presser) under the JOBS Act. The health innovation crowdfunding crowd has been waiting for these rules for quite some time, as has the rest of the crowdfunding fan club. (It’s only taken three and a half years.)
So, was it worth the wait?
The crowdfunding rule (full text) sets the stage for broader participation in early-stage investing and may empower crowdfunding platforms (“intermediaries,” in SEC-speak) to compete with angel funding platforms servicing “accredited investors” (SEC-speak for high net worth folks who can afford to lose their entire investment in a startup). It is a democratizing move consistent with the ethos of the internet and digital innovation.
Let’s look at some of the particulars and then think about whether this is a good thing for startup companies (“issuers”) that might want to sell securities rather than their products or promotional T-shirts, and for intermediaries — such as Kickstarter etc. — that might want to have a role in matchmaking individual investors with issuers. (Kickstarter itself has reportedly said it’s not interested in going down this path; IndieGoGo is interested, though.)
We all know Luddites. They proudly pronounce their rejection of Facebook and feign disgust about how they finally “broke down” and bought that awesome Motorola Razr they still carry. Maybe you are a Luddite or pretend to be because you can’t make Gmail work on your phone. So who was this Ludd and why is he the timeless symbol of rage against the machine?
My guess was that the original Ludd was probably some horse breeder that bet the farm against the future of the automobile. As it turns out, the Ludd story is not at all what you’d expect.
Legend has it that in 1779, a fed up British factory worker named Ned Ludd took his aggression out against the knitting machines he was employed to operate, smashing two of them to pieces with a hammer. In this one brazen act of defiance, he became the symbol of man’s rebellion against automation, technological displacement, the death of artisanship, and the worsening conditions of the working class.
Not long after, as the Industrial Revolution gained steam (terrible, I know), young Ned became the poster boy, quite literally, for factory worker uprisings, each of which was punctuated with the destruction of machines.
The Luddites met in secret and their operations ranged from sabotage to all-out warfare, including a battle with the British Army. They became so fearsome that industrialists had secret chambers constructed in their factories in which they could hide should the Luddites come knocking. Fearing that the name “Ned” lacked gravitas, his PR team apparently took to branding him King Ludd or General Ludd.
Britain’s health secretary wants to uncharm his way to a revolution.
To galvanize support for a seven-day National Health Service (NHS), which the NHS was before Jeremy Hunt’s radical plans, and still is, he asserted that thousands die because there is a shortage of senior doctors during weekends. This is an expedient interpretation of a study which showed that mortality was higher in patients admitted on weekends. Hunt ignored the fact that patients admitted on Friday night are actually sicker than those admitted on Wednesday morning.
When logos failed, and after briefly dabbling with pathos, Hunt resorted to ethos. He insinuated that doctors were clock watchers (“service that cranks up on a Monday morning and starts to wind down after lunch on a Friday”). This led to a hashtag on Twitter: #ImInWorkJeremy.
Hunt wants to modernize the NHS. Leaving aside whether modernization is modernization, post-modernization or pre-post-pre-modernization, presumably this endeavor benefits from having doctors on board. How has Hunt enticed the doctors? He prophesied that GPs’ diagnostic skills could be obsolete in twenty years. He wanted to replace doctors’ clinical judgment with computers, sooner rather than later (he’d just returned from Silicon Valley).
In our healthcare system, the “middleman” is not who you think
During my recent podcast interview with Jeff Deist, president of the Ludwig von Mises Institute, I remarked that third-party payers are not, in fact, intermediaries between doctors and patients. In reality, it is the physician who has become a “middleman” in the healthcare transaction or, as I argued, a subcontractor to the insurer.
Important as it is, this reality is not well recognized—not even by physicians—because when doctors took on this “role” in the late 1980s, the process by which healthcare business was conducted did not seem to change in any visible way.
When health insurance was first introduced on a large scale in the 1940s and 1950s, a patient would see a doctor and pay the bill directly. The doctor would issue a receipt and the patient would submit the receipt to the insurance company for reimbursement. The insurance company was, in that sense, a financial intermediary since it would enable the patient to afford the care and see the doctor.
Uber has long stirred controversy and consternation over the higher “surge” prices it charges at peak times. The company has always said the higher prices actually help passengers by encouraging more drivers to get on the road. But computer scientists from Northeastern University have found that higher prices don’t necessarily result in more drivers.
Researchers Le Chen, Alan Mislove and Christo Wilson created 43 new Uber accounts and virtually hailed cars over four weeks from fixed points throughout San Francisco and Manhattan. They found that many drivers actually leave surge areas in anticipation of fewer people ordering rides.
“What happens during a surge is, it just kills demand,” Wilson told ProPublica. “So the drivers actually drive away from the surge.”
When contacted this week, Uber said its own analysis has shown that surge pricing does, in fact, attract more drivers to surge areas. “Contrary to the findings in this report — which is based on extremely limited, public data — we’ve seen this work in practice day in day out, in cities all around the world,” Uber spokeswoman Molly Spaeth wrote in an email.
The researchers also uncovered a few tips about how to avoid surge prices. They found that changing your location, even by a few hundred feet, can influence the price you get. They also discovered that you can often get back to normal fare levels by waiting as few as five minutes.
“The vast majority of surges are short-lived, which suggests that savvy Uber passengers should ‘wait-out’ surges rather than pay higher prices,” the authors wrote in a new study they are presenting at a conference in Tokyo on Friday.