Making a diagnosis is easy when the test we use defines the disease. These "reference-standard" tests make the diagnosis outright: the presence of the finding, at any level of the test's result, confirms the disease. A spinal fluid culture growing Listeria, or opioids in the urine, are examples.
Using reference-standard tests, however, is not the norm in clinical medicine. Reference-standard tests often don't exist, and when they do they may be dangerous, difficult to obtain, or costly. Hence, we most often use non-reference-standard tests that can only raise or lower the likelihood of disease. There is nothing particularly new in these comments. Every reader will know concepts such as the "sensitivity and specificity" of a test. Every reader will remember hearing about, or be able to construct, 2×2 tables showing the sensitivity of a test and the corresponding false-negative percent, and the specificity of the test and the corresponding false-positive percent.
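The 2×2 bookkeeping described above can be sketched in a few lines. The counts here are assumed, illustrative numbers, not data from any study:

```python
# A hypothetical 2x2 table: 100 people with the disease, 900 without.
tp, fn = 90, 10    # diseased patients: true positives, false negatives
fp, tn = 50, 850   # healthy patients: false positives, true negatives

sensitivity = tp / (tp + fn)   # fraction of the diseased the test detects
specificity = tn / (tn + fp)   # fraction of the healthy the test clears
false_negative_pct = 100 * (1 - sensitivity)
false_positive_pct = 100 * (1 - specificity)

print(f"sensitivity {sensitivity:.2f}, false-negative {false_negative_pct:.0f}%")
print(f"specificity {specificity:.3f}, false-positive {false_positive_pct:.1f}%")
```

The false-negative and false-positive percents are simply the complements of sensitivity and specificity, which is the "corresponding" relationship the 2×2 table makes visible.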
But despite the ever-present teaching of how tests "work," it is my experience that physicians and patients have difficulty using these measures of a test's value in clinical care. This difficulty is manifest in the observation that diagnostic mistakes may be common; a perceived diagnostic mistake is the inciting event in up to 40% of malpractice cases. If the conceptual ideas for appropriate test characteristics are so clear and so well taught, why is there so much difficulty in using tests to make a correct diagnosis? I contend that the way we teach and understand testing has not allowed us to advance an ideal, numerate approach to making a diagnosis accurately. I claim, also, that the concept of a single "sensitivity and specificity" for a test is suspect, even incorrect.
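One numerate reason clinicians stumble when using a test: the same sensitivity and specificity yield very different post-test probabilities depending on how common the disease is in the patient being tested. A minimal sketch of Bayes' theorem, with assumed numbers:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Same hypothetical test (90% sensitive, 90% specific) in two settings:
sens, spec = 0.90, 0.90
for prev in (0.01, 0.50):  # screening a low-risk group vs. a symptomatic patient
    p = post_test_probability(prev, sens, spec)
    print(f"prevalence {prev:.0%}: P(disease | positive test) = {p:.2f}")
```

At 1% prevalence a positive result leaves the disease probability under 10%; at 50% prevalence the identical result pushes it to 90%. The test's "characteristics" did not change; its usefulness did.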
Making a decision requires you to compare tests or treatments that have been contrasted in research studies to see whether one, over another, improves the chances of good outcomes. In a sense, medical decision making is a competition. To assess the competition, you compare the chances of outcomes, the results from groups of people taking different options. The comparison is a simple subtraction of the rates of outcomes that occur in each studied group.
Subtracting results in a difference that is either a benefit (if better for you) or a harm (if worse for you). For nearly all decisions, however, the test/treatment that is better for disease outcomes (benefit) is worse for complications (harm). Comparing, then, results in the following possibilities:
The chances of outcomes associated with the condition you have and the tests/treatments available will be the same for all options. In this case, choose the cheapest option.
The chance of outcomes associated with the condition you have will be less with one option. That option provides added benefit.
The chance of a complication caused by the test/treatment that adds benefit for the disease outcomes will be greater (harm).
Since the test/treatment that is better for you in terms of the disease you have will be, simultaneously, worse for you in terms of complications caused by that test/treatment, a trade-off of benefit and harm is required.
Hence, the definition of “works” is that:
A test/treatment works when you feel there is more to gain from the greater chance of better disease associated outcomes than there would be to lose from suffering the complications caused by your chosen treatment.
So, medical decision making is a competition between options, and there is always some good to be balanced against some bad.
The balance of good and bad from your perspective is what makes one treatment work over another.
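The subtraction described above is the absolute risk difference, and its reciprocal is the familiar "number needed to treat" (or to harm). A minimal sketch of the trade-off, with assumed, illustrative event rates from two hypothetical options A and B:

```python
# Assumed outcome rates per patient; not from any actual trial.
disease_rate_A, disease_rate_B = 0.10, 0.07   # bad disease outcomes
harm_rate_A, harm_rate_B = 0.01, 0.03         # treatment-caused complications

benefit = disease_rate_A - disease_rate_B   # B averts ~3 disease events per 100
harm = harm_rate_B - harm_rate_A            # B causes ~2 extra complications per 100

nnt = 1 / benefit   # patients treated with B per one averted disease outcome
nnh = 1 / harm      # patients treated with B per one extra complication

print(f"benefit {benefit:.2f}, harm {harm:.2f}, NNT {nnt:.0f}, NNH {nnh:.0f}")
```

The arithmetic settles nothing by itself: whether averting one disease outcome per ~33 patients is worth one added complication per ~50 is exactly the value judgment the author says belongs to the patient.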
Robert McNutt, MD is a board certified internist in Clarendon Hills, Illinois. He is a Professor at Rush Medical College of Rush University.
Our comments regarding this interesting blog, and the comments to the blog, may seem tangential to the author’s points.
The blog and comments point, we think, to a confusing set of principles being considered, perhaps, out of context.
Those comments range from the claim that ACOs will lead to figuring out what is best (impossible) to mismatched information regarding a specific clinical case (reasonable). What is striking is that we have medical students worrying about the costs of care.
Instead, shouldn’t we be teaching them to understand the value of information for decision-making? Shouldn’t we be teaching them the concepts of co-dependent testing leading to all tests being less useful than we think?
Shouldn’t we be teaching students the concepts of decision analysis, and thresholds, and patients being involved in the decisions? Shouldn’t we be teaching that it is better to know than to think we know? Shouldn’t we be doing studies rather than scratching at the “tragedy of the commons” (so many physicians feasting on the grassy fields of a sick patient)?
Any confusion over the recent news of cholesterol guidelines in the U.S. is perfectly understandable. On the one hand, the guidelines suggest that nearly half the population should use statins to stave off heart attacks and strokes. On the other, use of the drugs is not without potential side effects and, for many, will offer no substantive benefit. The controversy highlights a problem mired in an outdated way of thinking about health care and the doctor-patient relationship.
Guidelines came about after generations of physicians wanted to bring something more than “opinion and experience” to the patient’s bedside. In the late 1960s, legislation for the U.S. Food and Drug Administration was amended to call for a demonstration of efficacy and an assessment of benefits and risks as a prerequisite to the licensing of any pharmaceutical. Modern clinical science resulted, first slowly and now with an avalanche of clinical trials, each pouring forth outcome data galore.
The Burden of Clinical Data
Clinicians are expected to stay current with this wealth of information. The modern medical curriculum instructs all budding physicians on how to evaluate the quality and the clinical relevance of all such contributions to the body of clinical science. Because some (or perhaps many) find this exercise overwhelming, there are organizations—many academic and some without any discernible relationships with purveyors that could pose a conflict of interest—that attempt to bundle the information in a fashion that might be relevant to particular physicians or physicians in particular specialties. Some of this bundling is quite systematic, some quite helter-skelter.
Occasionally there is a contribution to the literature that offers an unequivocal advantage for a particular patient group. More often, the bundlers are faced with a heterogeneous literature that often demonstrates little, if any, efficacy. Faced with these circumstances, biostatistics has offered up many a method to impute more value to the literature than is apparent at first blush. The result is that all this bundling adds to an enormous and ever-expanding secondary literature.
What is the clinician to do?
In a previous blog we demonstrated how guidelines can compromise the care of individual patients when designed to serve the health care system.
Why should treating physicians defer to guideline committees at all, we asked? For decades medical students have been taught to read and understand information from published papers.
We are all trained in critical appraisal and can keep up with the clinically meaningful literature, the literature that is relevant and accurate enough to present to patients. Just because there are nearly 20,000 biomedical journals does not mean that any, let alone all, are replete with meaningful information. We can discern the valuable from the not valuable; why do we need others to tell us?
In fact, we even argued in our last post that patients can and should judge the value of medical information. After all, they face the consequences of misinterpreting the likelihoods of benefit and of harm associated with various options for care.
No one remembers the numbers that describe the chances of benefit and harm, or asks more questions about the veracity of information, better than a patient who must choose. The smartest information managers we have ever encountered are our patients: when informed, they quickly determine the validity of the information and apply their personal values to the estimates of the chances of benefit and harm.
Take the example of a patient who recently entered into a therapeutic dialogue with one of us, RAM. This was not the traditional clinical interview. This patient had been diagnosed with prostate cancer and was scheduled for an approach to treatment that the diagnosing physician had offered as the most sensible. However, the decision did not rest easily.
The appointment with RAM was scheduled because the patient sought a dialogue that might offer a chance to reflect on the rationale for the approach he was about to initiate. Two hours into the dialogue, the patient, a 40ish-year-old African-American man accompanied by his wife, was mulling over the marginal benefits and harms of the options for treating an early-stage prostate cancer.
The wife asked how many African-Americans were in the study under discussion. “None.” The husband perked up and asked, “How many people in the study were my age?” “None.” They then asked whether the difference in benefit was a certain, fixed amount. “No, it varies over this range,” we said, examining the descriptive statistics together.
They then asked when the study was started and whether it pertained to the present day. “It started over 15 years ago,” and the stage of disease of the men in the study was generally more aggressive than in this particular case.
We entered the 21st century awash in “evidence” and determined to anchor the practice of medicine on the evidentiary basis for benefit. There is the sense of triumph; in one generation we had displaced the dominance of theory, conviction and hubris at the bedside. The task now is to make certain that evidence serves no agenda other than the best interests of the patient.
Evidence-based medicine is “the conscientious and judicious use of current best evidence from clinical care research in the management of individual patients.” [1,2]
But what does “judicious” mean? What does “current best” mean? If the evidence is tenuous, should it hold sway because it is currently the best we have? Or should we consider the result “negative” pending a more robust demonstration of benefit? Ambiguity is intolerable when defining evidence because of the propensity of people to decide to do something rather than nothing. Can we and our patients make “informed” medical decisions on thin evidentiary ice? How thin? Does tenuous evidence mean that no one is benefited, that the occasional individual may be benefited, or that many may be benefited but only a very little bit?