As a physician and writer on the topic of medical careers, I’ve noticed extensive interest in nonclinical career options for physicians. These include jobs in health care administration, management consulting, pharmaceuticals, health care financing, and medical writing, to name a few. This anecdotal evidence is supported by survey data. Of over 17,000 physicians surveyed in the 2016 Survey of America’s Physicians: Practice Patterns and Perspectives, 13.5% indicated that they planned to seek a nonclinical job within the subsequent one to three years, which was an increase from less than 10% in a similar survey fielded in 2012.
The causes of this mounting interest in nonclinical work have not been adequately investigated. Speculated reasons tend to be related to burnout, such as increasing demands placed on physicians in clinical practice, loss of autonomy, barriers created by insurance companies, and administrative burdens. However, attributing interest in nonclinical careers to burnout is misguided and unjustified.
Physicians are needed now – more than ever – to take on nonclinical roles in a variety of industries, sectors, and organizational types. By assuming that physicians interested in such roles are simply burned out, and by focusing efforts on trying to retain them in clinical practice, we miss an opportunity to promote the medical profession and improve the public’s health.
Supporting medical students and physicians in learning about and pursuing nonclinical career options can help prepare them for those responsibilities and enable them to use their medical training and experience more effectively in helping organizations of many types carry out missions related to health and health care.
By PRANAV PURI, PUNEET KAUR, and MARCUS WIGGINS, MBA
As current medical students, we see the ongoing COVID-19 pandemic as the most significant healthcare crisis of our lifetimes. COVID-19 has upended nearly every element of healthcare in the United States, including medical education. The pandemic has exposed shortcomings in healthcare delivery ranging from the care of nursing home residents to the lack of interoperable health data. However, the pandemic has also exposed shortcomings in the residency match process.
Consider the United States Medical Licensing Examination (USMLE) Step 1. A 2018 survey of residency program directors cited USMLE Step 1 scores as the most important factor in selecting candidates to interview. Moreover, program directors frequently apply numerical Step 1 score cutoffs to screen applicants for interviews. Accordingly, there are marked variations in mean Step 1 scores across clinical specialties. For example, in 2018, US medical graduates who matched into neurosurgery had a mean Step 1 score of 245, while those matching into neurology had a mean Step 1 score of 231.
One would assume that, at a minimum, Step 1 scores are a standardized, objective measure that statistically distinguishes applicants. Unfortunately, this does not hold true. In its score interpretation guidelines, the National Board of Medical Examiners (NBME) provides Step 1’s standard error of difference (SED) as an index for determining whether the difference between two scores is statistically meaningful. The NBME reports a SED of 8 for Step 1. Assuming Step 1 scores are normally distributed, the 95% confidence interval of a Step 1 score can thus be estimated as the score plus or minus 1.96 times the SED, or roughly 16 points (Figure 1). For example, consider Student A, who is interested in pursuing neurosurgery and scores 231. The 95% confidence interval of this score would span from 215 to 247. Now consider Student B, who is also interested in neurosurgery and scores 245. The 95% confidence interval of this score would span from 229 to 261. The confidence intervals of these two scores clearly overlap; therefore, there is no statistically significant difference between Student A’s and Student B’s exam performance. If these scores represented the results of a clinical trial, we would describe them as null and dismiss the difference as mere chance.
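The overlap check described above can be sketched in a few lines of Python. The SED of 8 and the two example scores come from the passage; the helper function names are mine, chosen for illustration:

```python
# 95% confidence interval for a Step 1 score, using the NBME's
# reported standard error of difference (SED) of 8 and assuming
# normally distributed scores, as in the example above.
SED = 8
Z = 1.96  # two-sided 95% z-value for a normal distribution

def score_ci(score, sed=SED, z=Z):
    """Return the (low, high) 95% confidence interval for a score."""
    half_width = z * sed  # 1.96 * 8 = 15.68, roughly 16 points
    return (score - half_width, score + half_width)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

student_a = score_ci(231)  # spans roughly 215 to 247
student_b = score_ci(245)  # spans roughly 229 to 261

# Overlapping intervals: the 14-point difference is not
# statistically distinguishable at the 95% level.
print(intervals_overlap(student_a, student_b))  # True
```

Under this framing, two applicants' scores must differ by more than about 16 points before the difference clears the NBME's own threshold for statistical meaning.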
The United States Medical Licensing Examination (USMLE) Step 1, a test co-sponsored by the Federation of State Medical Boards (FSMB) and the National Board of Medical Examiners (NBME), has long been the exam that people love to hate. For many years, blogs, Twitter feeds, and opinion pieces have been accumulating, urging the presidents of the FSMB and NBME to stop reporting a 3-digit score and instead report a pass/fail result. This animosity toward the Step 1 exam stems from the reality that medical schools have increasingly focused their curricula on teaching what Step 1 rewards – medical trivia that almost always has no bearing on how to approach a clinical problem.
This “Step 1 Madness” is unhealthy. The reasons for its existence are many: residency and fellowship programs sustain it by idolizing higher scores; some believe the score is a metric that can predict future quality of care, board pass rates, and the like; and some are naïve enough to think that what is tested on Step 1 is actually useful medical knowledge! It may be some combination of the above that has landed Step 1 in such a peculiar spot. Whatever the cause, the emphasis on the Step 1 score means that medical students’ fates are being determined by a single test. Nobody wants their fate to be so unmalleable.
Surely every resident has had the experience of trying to explain to a patient or family what, exactly, a resident is. “Yes, I’m a real doctor… I just can’t do real doctor things by myself.”
In many ways, it’s a strange system we have. How come you can call yourself a doctor after medical school, but you can’t actually work as a physician until after residency? How – and why – did this system get started?
These are fundamental questions – and as we answer them, it will become apparent why some problems in the medical school-to-residency transition have been so difficult to fix.
In the beginning…
Go back to the 18th or 19th century, and medical training in the United States looked very different. Medical school graduates were not required to complete a residency – and in fact, most didn’t. The average doctor just picked up his diploma one day, and started his practice the next.
But that’s because the average doctor was a generalist. He made house calls and took care of patients in the community. In the parlance of the day, the average doctor was undistinguished. A physician who wanted to distinguish himself as being elite typically obtained some postdoctoral education abroad in Paris, Edinburgh, Vienna, or Germany.
Many patients make this request, or similar ones, especially in January, it seems.
This phenomenon has its roots in two things. The first is the common misconception that out-of-range blood test results are early warning signs of disease rather than statistical or biochemical aberrations and false alarms. The other is the perverse policy of many insurance companies of covering physicals and screening tests with zero copay while applying deductibles and copays when people need tests or services because they are sick.
It is crazy to financially penalize a person with chest pain for going to the emergency room when it turns out to be acid reflux rather than a heart attack, while at the same time providing free blood counts, chemistry profiles, and lipid tests every year to people without health problems or prior laboratory abnormalities.
A lot of people don’t know, or don’t remember, that what we call “normal” is the range within which 95% of healthy people fall. That goes for thyroid and blood sugar values, white blood cell counts, height and weight – anything you can measure. If a number falls outside the “normal” range, you need to see whether other parameters hint at the same possible diagnosis, because 5% of perfectly healthy people will have an abnormal result on any given test we order. So on a 20-item blood panel, you can pretty much expect one abnormal result even if you are perfectly healthy.
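The arithmetic behind that expectation is quick to verify. This sketch uses only the 5% false-flag rate per test implied by the 95% reference-range convention described above:

```python
# Each test flags about 5% of healthy people as "abnormal" by
# construction, because the normal range is defined as where 95%
# of healthy people fall.
p_abnormal = 0.05
n_tests = 20

# Expected number of flagged results on a 20-item panel
# for a perfectly healthy person.
expected_abnormal = n_tests * p_abnormal  # 1.0

# Probability of at least one flagged result, assuming the
# tests are independent (a simplifying assumption).
p_at_least_one = 1 - (1 - p_abnormal) ** n_tests  # about 0.64

print(expected_abnormal)          # 1.0
print(round(p_at_least_one, 2))   # 0.64
```

In other words, under these assumptions roughly two out of three perfectly healthy people will have at least one "abnormal" value on a 20-item panel, and one flagged result is exactly what you should expect on average.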
“YOUR LIKELIHOOD OF SECURING RESIDENCY TRAINING DEPENDS ON MANY FACTORS – INCLUDING THE NUMBER OF RESIDENCY PROGRAMS YOU APPLY TO.”
So begins the introduction to Apply Smart: Data to Consider When Applying to Residency – an informational campaign from the Association of American Medical Colleges (AAMC) designed to help medical students “anchor [their] initial thinking about the optimal number of applications.”
In the era of Application Fever – where the mean number of applications submitted by graduating U.S. medical students is now up to 60 – some data-driven guidance on how many applications to submit would be welcome, right?
And yet, the more I review the AAMC’s Apply Smart campaign, the more I think that it provides little useful data – and the information it does provide is likely to encourage students to submit even more applications.
This topic will be covered in two parts. In the first, I’ll explore the Apply Smart analyses and air my grievances against their logic and data presentation. In the second, I’ll suggest what the AAMC should do to provide more useful information to students.
Introduction to Apply Smart
The AAMC unveiled Apply Smart for Residency several years ago. The website includes plenty of information for students, but the pièce de résistance is the set of analyses and graphics relating the number of applications submitted to the likelihood of successfully entering a residency program.
One of the most fun things about the United States Medical Licensing Examination (USMLE) pass/fail debate is that it’s accessible to everyone. Some controversies in medicine are discussed only by the initiated few – but if we’re talking USMLE, everyone can participate.
Simultaneously, one of the most frustrating things about the USMLE pass/fail debate is that everyone’s an expert. See, everyone in medicine has experience with the exam, and on the basis of that, we all think that we know everything there is to know about it.
Unfortunately, there’s a lot of misinformation out there – especially when we’re talking about Step 1 score interpretation. In fact, some of the loudest voices in this debate are the most likely to repeat misconceptions and outright untruths.
Hey, I’m not pointing fingers. Six months ago, I thought I knew all that I needed to know about the USMLE, too – just because I’d taken the exams in the past.
But I’ve learned a lot about the USMLE since then, and in the interest of helping you interpret Step 1 scores in an evidence-based manner, I’d like to share some of that with you here.
If you think I’m just going to freely give up this information, you’re sorely mistaken. Just as I’ve done in the past, I’m going to make you work for it, one USMLE-style multiple choice question at a time.
Recently, I was on The Accad and Koka Report to share my opinions on USMLE Step 1 scoring policy. (If you’re interested, you can listen to the episode on the show website or iTunes.)
Most of the topics we discussed were ones I’ve already dissected on this site. But there was an interesting moment in the show, right around the 37:30 mark, that raises an important point that is worthy of further analysis.
ANISH: There’s also the fact that nobody is twisting the arms of program directors to use [USMLE Step 1] scores, correct? Even in an era when you had clinical grades reported, there still seems to be value that PDs attach to these scores. . . There’s no regulatory agency that’s forcing PDs to do that. So if PDs want to use, you know, a number on a test to determine who should best make up their class, why are you against that?
BRYAN: I’m not necessarily against that if you make that as a reasoned decision. I would challenge a few things about it, though. I guess the first question is, what do you think is on USMLE Step 1 that is meaningful?
ANISH: Well – um – yeah…
BRYAN: What do you think is on that test that makes it a meaningful metric?
ANISH: I – I don’t- I don’t think that – I don’t know that memorizing… I don’t even remember what was on the USMLE. Was the Krebs Cycle on the USMLE Step 1?
I highlight this snippet not to pick on Anish, who was a gracious host; despite our back-and-forth on Twitter, we actually agreed much more than we disagreed. And as a practicing clinician who is 15 years removed from the exam, I’m not surprised in the least that he doesn’t recall exactly what was on the test.
I highlight this exchange because it illuminates one of the central truths in the #USMLEPassFail debate, and that is this:
Physicians who took Step 1 more than 5 years ago honestly don’t have a clue about what is tested on the exam.
That’s not because the content has changed. It’s because the memories of minutiae fade over time, leaving behind the false memory of a test that was more useful than it really was.
The United States medical education system is heralded as among the best in the world for medical training. Given its strict educational standards, multiple licensing boards, and continuous oversight by governing bodies, securing a training position in the US is extremely competitive. In 2017 alone, more than 7,000 non-US citizens (commonly referred to as “foreign medical graduates”) applied alongside more than 24,000 US citizens for American residency spots to pursue specialty training. The reasons for this competitiveness are simple. The vast majority of US medical institutions boast a comprehensive curriculum that spans basic sciences, clinical principles, practical and hands-on didactics, and enriched exposure to the clinical aspects of patient care. This training produces astute clinicians capable of resolving the most complex diagnoses while providing comprehensive care.

Yet it is high time to recognize that being a shrewd clinician is no longer sufficient for the demands of today’s healthcare market. That is to say, the scope of medicine for a physician now goes far beyond resolving complex medical problems; it demands a broader command of multidisciplinary skillsets, most importantly finance and legal theory. In these respects, the US medical education system direly underprepares physicians and thus requires thorough reevaluation.
The art of medicine, as much as it was originally meant to be purely about the betterment of patient health, has become yet another siloed service industry. Simply put, patients are customers, and physicians are increasingly held accountable for the financial metrics and revenue their work produces. Compensation models increasingly favor productivity-based payment methods, such as the relative value unit (RVU) system, and are moving away from the traditional salaried physician. The result is increased pressure on physicians to become more efficient with their workload and patient panel while managing the often turbulent and contradictory interests of insurers, patients, and hospital administration.