I’m involved in a whole scad of research at the moment about why American medical care isn’t working as well as it could. Obviously there are many, many factors involved, but one of them is the general problem that, in the spirit of the Defense Secretary, the "things we know we know" we don’t know how to do in practice. Or as it was put much more clearly to me 7 years ago by John Mattison, IT guru at Kaiser, "we know what to do, but we don’t know how to do it". The subject, of course, is how to implement best medical practices, otherwise known as evidence-based medicine.
Well I’ve come across two very interesting articles from the UK where, as I pointed out the other day, the government actually cares enough about improving the health care system that it’s spending money to make it better and encouraging things like best-practice dissemination. But even there, implementing evidence-based medicine is very hard.
Why so? Well, the BMJ had a study in 2001 in which they actually got several GPs to open up and talk about cases where they had knowingly done the "wrong" thing; in other words, not followed the guidelines, in this case on how to deal with patients with severe hypertension.
Of course, it’s not as simple as you might think. The GPs tended to believe that they had to deal with the whole patient, while the specialists (consultants, in Brit-speak) only had to deal with their cardiology issues. It took a combination of patience and conning to get patients to try something different, and even after using these skills patients frequently didn’t want to know about the "best" treatment:
Implementation was influenced by the relationships that doctors developed with their patients. "Even if the evidence was extremely good," one general practitioner said, "most of us would only ever interpret it in the context of the patient." Perceived patient characteristics could have a positive or negative effect on implementation. "Of course, if they’re the sort who always want the specialist, then you follow their [the specialist’s] advice." Another explained, "I think you have to judge how people feel about it. I try to get patients to reveal to me where they lie in the game . . . from I want it mate to I don’t want to know nothing about it doc . . . I make tremendous judgments."
However, there is also the all too human side of interpreting evidence in terms of what the individual has experienced. Several comments were of the type that suggested that personal experience outweighed the data:
Accidents, mishaps, or spectacular clinical successes have a direct influence on subsequent practice. Commenting again on anticoagulation in atrial fibrillation, a participant exclaimed, "I’m back on it." This doctor had previously been uneasy about anticoagulating patients in atrial fibrillation but had recently seen one of his patients who was not given warfarin have a cerebrovascular event . . . One doctor summed up this view thus: "We are influenced at least as much, if not more, by the experiences of individual patients as we are by the evidence."
Meanwhile, despite the fact that health administrators have been pushing the use of guidelines, and that those GPs assumed specialists were using them, guidelines are not uniformly followed by consultants either. A different study, which surveyed several hundred doctors and health officials on their use of guidelines, found that:
There was little variation in the belief that the evidence-based guidance was of "good quality", but respondents from the health authorities (87%) were significantly more likely than either hospital consultants (52%) or GPs (57%) to perceive that any of the specified evidence-based guidance had influenced a change of practice.
My conclusion is that no evidence-based guideline will be perfectly applied. Some don’t take into account the human situation of the patient. Meanwhile, physicians will find it very hard to do something that their experience tells them is wrong, no matter what the data says.
But of course in the US this is more or less moot, as we don’t have the data.
UPDATE: Over at DB’s Medical Rants, Robert Centor has an excellent post about this post and links to some of his earlier posts and other articles about this issue. He makes some glaringly obvious but all too often overlooked points about how technology/innovation gets adopted and has a very nice version of the classic "S" curve, as applied to medical adoption. He’s also been working directly in the field for several years, so I defer to him when he says I’ve "partially" nailed it; better than hitting my thumb, I suppose! Robert’s point is that plenty of work is being done in the US on evidence-based medicine, and that it is changing practice patterns. He therefore quibbles with me when I say that we "don’t have the data". My response is that the "data" we have is the numerous studies that he and others have been involved in about the best way to treat condition X, Y or Z. In other words, we have the "we know what to do" part; it’s the "how to do it" part that’s missing.
I, of course, know that evidence-based medicine is studied intensely in the US, as are health technology assessment, health services research and regional health planning. Unfortunately, like those other worthy disciplines (and I have a degree in one of them!), its study stays mostly in academia and makes precious little impact on general patterns of medical practice. The "data" we do not have, and the data that I was (obtusely) referring to earlier in this post, is the data gathered directly from physicians’ records about how they actually practice. It’s the lack of accessible electronic records that stops us accurately understanding (and then managing) how practice works in real life and real time. Several medical directors of leading medical groups have been telling me for years that they don’t have an accurate picture of what their MDs are doing, because they can only get statistical glimpses of their practice patterns at the end of each month. Of course the vast majority of physicians do not practice in groups that have this kind of collegial monitoring, and end up having their performance assessed only by adversarial health plans, trial lawyers, the occasional academic study, or, most likely, not at all. Given that you cannot assess performance when the data is locked up in paper charts, I believe I’m justified in saying that, on balance, we "don’t have the data".
Of course, if you look at the statistical glimpses that Wennberg and his colleagues at Dartmouth have extracted, mostly from Medicare claims data, the wide regional variations in practice show that evidence-based medicine cannot logically be being applied nationwide. Otherwise you wouldn’t find three times as much surgery for the same condition in Denver as in Salt Lake City. Part of the reason behind the UK’s investment in electronic records is the desire to get at the information source that is the everyday recording of clinical activity. If that’s achieved, the resulting data set will be used both to monitor medical care and to assess what the best evidence-based practice is, drawn directly from that huge pool of routine data rather than from chart-abstracted studies done later. And eventually the one (practice) will be monitored against the other (evidence-based guidelines), something not all doctors will welcome.
In the US the lack of electronic records prevents this, and as I’ve explained in this post, we don’t seem to be in too much of a hurry to change that situation. And even if we did, all the problems of actually changing practice patterns that Robert and I have been discussing would still have to be overcome.