For more than a decade, a running joke among electronic health record (EHR) skeptics has been that the EHR's clunky "decision support" functions, defined as the on-screen provision of clinical knowledge and patient information that helps physicians enhance patient care, are condemned to always remain an innovation of the future. Yet, while published studies like this continue to fuel doubt about the prime-time readiness of this EHR-based technology, a growing body of clinical research suggests that the science is getting better. Jonathan Cohn, writing in The Atlantic, points out that IBM's Watson has achieved a level of sophistication sufficient to warrant clinical trials at prestigious institutions such as the Cleveland Clinic and Memorial Sloan-Kettering.
Unfortunately, there is an under-recognized threat to EHR-based decision support: the dysfunctional U.S. tort system.
The experience of Google’s “driverless car” may be instructive. According to National Public Radio, years of testing are putting this technology within reach of consumers. Thanks to the prospect of fewer accidents, better transportation options for the disabled, reduced traffic congestion and lower hydrocarbon consumption, some states have responded by attempting to support this promising technology with “enabling legislation.”
Unfortunately, the legislation in some state jurisdictions is being hindered by the prospect of complicated lawsuits. As physicians know all too well, when a single mishap lands in court, adroit attorneys can use the legal doctrine of joint and several liability to tap multiple deep pockets to increase the potential size of the award. In the case of driverless cars – in which the owner is more of a passenger than a driver – the accidents that are bound to happen could metastasize upstream from the owner and tie up the driverless automobile manufacturers and all of their business partners in time-consuming and expensive litigation.
Ditto the EHR’s decision support technology. Even with Watson’s intelligence, medicine will remain imperfect and allegations of medical mistakes will be inevitable. When lawsuits arise, the defendant medical providers will likely argue that their judgment was clouded by the very technology that otherwise helped them better serve their other patients. Personal injury lawyers are unlikely to let that theory of liability go unused. Tapping the same kind of lucrative joint-and-several legal theories that have served them so well in decades of standard malpractice litigation, they’ll undoubtedly be happy to name the EHR manufacturer and all of its decision-support business partners in these lawsuits.
Unfortunately, this may be one more reason for EHR vendors and their health technology partners to harden their stance with their notorious “hold harmless” contracting provisions. Since physicians are learning to push back against these clauses, manufacturers of decision support systems will probably delay product releases until they better understand the risks and embed the anticipated legal costs in their products. In the meantime, gun-shy providers will prefer algorithms that combine defensive as well as evidence-based medicine. As a result, otherwise cost-saving information technology is likely to ironically fuel the nation’s $54 billion “malpractice” tab.
What should happen?
In addition to supporting some or all of the tort reform options outlined here, producers as well as consumers of decision support should support federal and state legislation that provides reasonable safe harbors for this health information technology. Last but not least, physicians, developers and vendors need to collaborate on systems that reconcile defensible local standards of care with national guidelines.
We may be finally reaching a decision support “tipping point.” Health consumers and their providers are far more likely to embrace “Health 2.0” when the “information” in technology and the “meaningful” in use make diagnoses more accurate, testing more intelligent and treatments more tailored. Unfortunately, without tort reform that protects EHR-based decision support, when physicians don their Google glasses to access their patients’ records, all they’re likely to see are a lot of lawyers.
Jaan Sidorov, MD, is a primary care internist with over 20 years of experience in medical professional liability insurance. The opinions expressed in this article are those of the author, and they do not reflect in any way those of any institutions to which he has been or is affiliated.
John William Lambert’s Buckeye gasoline buggy, made in 1891, is considered the first practical gasoline-powered automobile made in the United States. But it took the Federal Aid Road Act of 1916, which allocated $75 million for building roads (read: recovery act), and Ford to really make the auto the standard of transportation and travel. We are early; we are still primitive. Think of a delivery man deciding in 1906 whether to re-shoe the horse Daisy or get that thing without a horse. Yup, it was hard to deliver goods with no roads, no maps and a pile of junk built in someone’s back yard, but by 1916, just a decade later, the guy with the horse was struggling to stay in business, and by 1926 he was out of business. I saw that in three small practices in upstate NY where data and PCMH were a driver. We are still pre-Ford in the USE of data to support ourselves and our patients with CDS. The tools we have are early and primitive — ten years after this started, those places are the future now.
So I hope. We just had a large group of Chinese visitors at South East Texas Associates, and here is what they saw.
The new paradigm (it already works in Dr. Holly’s practice, SETMA):
· Thinking about patients when they are NOT in the office!
· Thinking about patients as a person, a population and a pattern.
· Personalizing care with data, genomics, plans of care and treatment plans.
Focus on the patient: In the near future, and now at SETMA, data will support medical homes in helping patients and families to manage, organize and participate in health care decisions as fully informed partners in their care. This leads to patients seeking the right care, from the right place, at the right time. Whether visiting your primary care doctor or a specialist, your entire family’s health records are centrally located and can be quickly and easily accessed by any provider, from childhood through old age.
Comprehensive coordination: Supported by HIT, a team of care providers in a medical home is wholly accountable for the patient’s physical and mental health care needs, covering the entire spectrum of care from prevention and wellness to chronic, long-term care. Physicians and nurses ensure care is organized across the broader health care system, should patients require a hospital stay or a visit to a specialist, like a cardiologist.
This level of comprehensive coordination means patients are less likely to seek care from emergency settings, delay care or leave conditions untreated. It also means providers are less likely to order duplicate tests or procedures, which can lead to lower costs and more efficient treatment for patients.
Accessibility: HIT will no longer leave us trapped in a world whose most advanced means of communication is the auto. Medical homes, supported by apps, mobile devices and portals, will engage patients differently: they will reduce wait times, increase patients’ access to their doctor and keep better electronic health records. A more accessible doctor means a patient is more likely to receive preventive care, reducing the incidence and severity of chronic diseases.
Commitment to quality and safety: Knowing how patients fare after leaving the doctor’s office results in fewer future hospitalizations and readmissions. So physicians and staff are tasked with monitoring quality improvement goals and using the data to track their patients’ outcomes.
PROACTIVE — electronic health records and medication management allow doctors and staff to let patients know when it’s time for vaccinations, check-up appointments or physical exams. The result is a bigger focus on wellness and preventive care, instead of a patient only seeking care when they are sick or injured.
All this sounds very good and some of it can be helpful, but in total the attitude projected and the predictions all end up with the creation of multiple new expensive bureaucracies that distance the patient from the doctor and create many of the problems we are all hoping to solve.
Take note how so many seemingly logical conclusions have turned out to be terribly wrong. Healthcare requires organic development from the bottom up, not the top down.
The auto and then the process of mass production was developed from the bottom up. The government gave a push with a road system that might be considered a public good, but in the process of creating roads for individual cars suppressed rail transport and other forms of mass transport that would have reduced costs, congestion, pollution, dependence upon foreign nations for our oil and a whole host of other things. The cost of the road system, a top down solution, and the oil supply was not worked into the calculations of the costs of owning a car.
Remind me: what year was it that the government mandated that everyone own and drive an automobile, or be punished financially?
Perry raises a good point: if the work station interferes with a collaborative doctor-patient relationship, decision support could suffer.
Perry also suggests that the entire tort system is in need of reform. No one can disagree with that reasonable argument, but lacking that, what can be done to increase the chances that the emerging promise of decision support will be fulfilled?
CDS is a medical device and should undergo robust evaluation by the FDA.
The other problem I hear complained about most often with EHRs is lack of patient-physician contact/communication. This only serves to widen the gap between patient and physician, which in turn increases the likelihood of a lawsuit if something bad occurs.
Nothing in the whole scheme of trying to control costs, EHR, EBM, etc, will be effective unless the tort system is reformed, not to allow bad docs and institutions to get away with anything, but to allow good, reasonable docs to practice reasonable medicine without constant fear of a lawsuit.
The trial attorneys and most Democrats would have you believe otherwise.
Jordon joins the chorus arguing that there is less to EHRs than meets the eye. I agree, but I also hope that his disdain for the billing functions of the EHR doesn’t infect the real promise of a wholly different technology, called decision support, that is tethered to the EHR work station.
EMR’s are cash registers attached to hard drives.
They are built by engineers, not clinicians. I can’t wait for the day when doctors build SDK kits for developers and see what happens.
All people want is simplicity, predictability and reliability – and to feel connected to their health team (especially when they are sick and unable to think clearly).
When technology understands the nuances of how health, well-being, and existential anxiety permeate the human operating system (hOS), we’re likely to hit an inflection point that matters.
Every single patient wants the best outcome.
Every single doctor wants the best outcome.
Every single healthcare system wants the best outcome.
Technology can only enable us to figure out how to communicate with people more efficiently – if we lose empathy and one’s life narrative, we’re only 1’s and 0’s.
The Human Operating System is analog – and the interface between analog and digital is where you can either create magic or over-engineered technology that masquerades as something clinical.
“combining both (human and computer) results in a practically unbeatable combination.”
See the Weeds’ “Medicine in Denial.” They make precisely that point.
I’m sympathetic to Joe Flower’s point about badgering docs with recommendations that upset workflows, don’t quite fit the patient’s needs and carry an unspoken threat. That being said, it’s been pointed out that while high-end computer chess programs can beat a chess master, combining both (human and computer) results in a practically unbeatable combination.
The same could be true for physicians with high-end, high-performing decision support systems. The best combination of physician and decision support should include the option of letting the human physician/chess master decide on the next diagnostic or therapeutic move without any subliminal threats. It’s possible; we just need to create the legal framework to enable it to happen.
With more and more information, greater degrees of liability are created. When the computer suggests something, that suggestion automatically adds to liability whether or not it is followed.
The plaintiff attorney shows in black and white that the computer system thought something else should be done and points to the plaintiff with pity. Alternatively the physician follows what the computer said using the same black and white image and the plaintiff’s attorney says you are the doctor not the computer. If all you do is follow the computer’s advice then why do we need doctors? All these appeals to the jury while showing a sick and desperate plaintiff can make a jury award quite high no matter how good the medical care was.
Rob is naturally correct: our legal atmosphere could strangle the promising technology of high performing decision support in the crib.
Vik – with whom I often disagree – finally makes a good point, this time about safe harbors, but that’s an all-or-none approach. If, in the name of a greater societal good, “enabling legislation” protects the emerging technology of Google cars, why can’t the same be considered for decision support? Isn’t there a template in there somewhere where reasonable persons can agree?
allan is understandably upset about the dysfunctions of the tort system and the EHR system, raising a point that needs to be said: the combined dysfunction will be greater than the sum of its parts. That serves to underline my point that a medical technology that could make health care faster, better and cheaper is more vulnerable than most people realize.
BobbyGvegas points out the unpredictability of the joint and several doctrine, and he’s also right: it’s only used when it’s to the advantage of the plaintiff. In the case of decision support, we can expect the same dynamic.
I can imagine many doctors would say, “Fine, give me information, give me options. Don’t tell me what to do. And that means don’t give me recommendations, since in this legal climate any recommendation is inherently coercive.”
Good post, good comments — and there is a deeper problem, which Rob hints at. I’ll use computerized mammogram reading support as an example. Studies have shown that statistically speaking, we can’t point to a single extra tumor found, a single life saved by this new technology. What we can see is a lot of false positives, a lot of unnecessarily worried women, a lot of call-backs, a lot of unnecessary biopsies. The computer flags certain cloudy shapes as possible tumors. The experienced physician examining the scan may say, “Nah. No chance.” But he or she is more or less forced to prove that the shape is not tumorous by the legal system — because the computer has circled it.
If the decision support system really is based on truly the best practice at any given juncture in the medical decision process, then great. If it’s not perfect, it puts physicians in the spot of having to practice defensive medicine just to prove on the record that the recommendations were wrong in this case — even in the many cases where the decision is really arguable either way.
Rob resists PSA tests. I understand that this is a quite reasonable position. But if the decision support software recommends the test, Rob may feel coerced into providing it so that he doesn’t find himself on the stand trying to prove that the recommendation was wrong when one of his patients does get prostate cancer.
“As physicians know all too well, when a single mishap lands in court, adroit attorneys can use the legal doctrine of joint and several liability to tap multiple deep pockets to increase the potential size of the award. ”
I started my white collar career in a forensic-level environmental radiation lab in Oak Ridge (radionuclide dose and exposure analytics for litigation support). “Joint and several,” yeah. If you caused 1% of the contamination relative to the contributions of other participating entities, but have 99% of the money, guess what? There was no “hold harmless” in that arena — except for the fine print in the “terms and conditions” (“…no express or implied merchantability or suitability…”) of the compiler apps I used for writing and compiling my code to generate my production executables.
I’ve been adversarially deposed right down to my rounding algorithms. No such grinding scrutiny applies to EHR developers. It was all on me, the “end-user,” not on my compiler vendor.
The tort system is dysfunctional and now we have added a dysfunctional record keeping system, the EHR. The dysfunction is more than additive. We also have ObamaCare with its push for brainless physicians that are incentivized not to think or innovate. I hate to think of what medical care will be like in another decade or so. The only bright spot is medical technology that blunts some of the negative effects of undesirable government intervention.
The tort system is clearly dysfunctional, but I don’t see it being made more functional by creating a separate safe harbor for EHR/decision support technologies. That perpetuates our unsuccessful policy making template of “pick me, pick me!,” which asks legislators to choose a side, generally the one that’s been most useful for campaign fund raising.
But the issues raised in this post — and the comments — are important. Maybe the push towards EHR/decision support technologies can serve as a tipping point for fixing a tort system that is badly in need of repair, so that injured patients are properly compensated in a timely and rational manner and bad institutional and individual actors are identified and punished, also in a timely and appropriate manner. EHR/decision support manufacturers should be part of a global fix, not given a safe harbor of their own.
Here’s the big irony of this: most docs feel like the EHR gets in the way of our ability to give quality care. It’s not the computer itself; it’s the tasks of note/code generation and data collection for which the EHR is designed, which take time away from the patient and toward the record. Despite this, the EHR manufacturers are not held responsible for the non-care created by the wasted time. I don’t think they should be, but it is definitely ironic in light of this post, in which EHR vendors are worried about the legal implications of offering something that would potentially improve point-of-care quality for fear that there would be some outliers that cause harm. I’ve used a significant amount of decision support in EHR products and feel that it is the thing that the EHR does best. There is a big risk of docs being force-fed guidelines they may or may not agree with, so it has to be set up in such a way as to inform, rather than coerce (PSA testing, for example, would be a guideline that I would resist in many instances). Regardless, the point is clear that the legal atmosphere does inhibit tools that could really help both doctors and patients, yet there is little legal consequence for the computer-assisted bureaucracy, which harms patients daily. Like I said: ironic.
MrRoberts usefully reminds us that there is nothing about the EHR that automatically reduces the risk of an allegation of malpractice. I agree, but I think of “decision support” as a separate function that just happens to use the EHR platform.
Bubba agrees with MrRoberts and suggests the travails of the EHR could undermine any good will that physicians may have about decision support. I agree with that also. Perhaps decision support architects need to keep that in mind: if it fails to account for physician work flow, even the brightest version will fail.
The legal implications of electronic medical records are a can of worms.
This is a reality that is rarely acknowledged.
Asking providers to document their work using a badly automated and cumbersome system that produces electronic evidence that can be used against them in a court of law was a recipe for dissatisfied, rebellious users.
In theory, algorithm-driven decision support is exactly the kind of medicine that is capable of helping. We can make health care faster, better and safer. The problem is we’re not quite there yet, and it may be a little while before we get there.
We don’t do anybody any favors by pretending that this technology is more advanced than it is.
This, of course, is a reality that vendors are unwilling to admit (refreshing candor about the realistic limitations of allegedly revolutionary, world-changing products not being a trait Silicon Valley encourages). So I’ll provide the candor. Yes, there are risks, both to consumers and providers. And yes, decision-making power is going to have to change hands.
Why? Because change is inevitable. And the risks are worth it.
Until things change, doctors and nurses are going to keep complaining about being crushed to death by an oppressive volume of electronic paperwork, products are going to suck, and we’re going to be stuck with clunky technology that is, in some cases, ten years behind where it could be.
If you’re wondering why electronic medical records are, for want of a better word, dumb as a proverbial post, this is one of your answers. Many of the others have similar roots.
A thank you to THCB’s editors for allowing me to replace my earlier, rushed comment with this …
I don’t think it’s the worst thing for EMR vendors to have some real liability and skin in the game. Maybe that will teach them both humility and the importance of collaboration rather than just focusing on self interest and profit.
That being said, I go through each day with my hospital’s EMR system agonizing over each word I write in a note, email or patient communication. I have been sued and know exactly how words can be twisted and manipulated to create a false impression.
While I appreciate the goal of EMR technology, I end each day exhausted from dealing with it. It makes my job take longer, it documents and seals every possible mistake I could make especially when I’m rushing to see all the patients that have been packed into the schedule, and provides relatively limited decision support of any significant value.