By Shiv Gaglani
This week a debate has been brewing about why so many young doctors are failing their board exams. On one side, John Schumann writes that young clinicians may not have the time or study habits to engage in lifelong learning, so they default to “lifelong googling.” On the other, David Shaywitz blames the tests themselves, calling them outmoded rites of passage administered by guild-like medical societies. He poses the question: Are young doctors failing their boards, or are we failing them?
The answer is: (C) All of the above.
I can say this with high confidence because, as a young doctor-in-training who just completed my second year of medical school, I’ve become pretty good at answering test questions. Well before our White Coat Ceremonies, medical students have been honed into lean, mean, test-taking machines by a series of now-distant acronyms: AP, SAT, ACT, MCAT. Looming ahead are even more acronyms, only these are slightly longer and significantly more expensive: NBME, COMLEX, USMLE, ABIM. Though their letters and demographics differ, what each of these acronyms shares is the ability to ideologically divide a room in less time than Limbaugh.
This controversy results directly from the dichotomy between the theory behind the exams and their practical consequences. In theory, these exams serve necessary and even agreeable purposes, including:
1) Ensuring a minimum body of knowledge or skill before advancing a student to the next level in her education, and
2) Providing an “objective” measure to compare applicants in situations where demand for positions exceeds supply.
So apart from the common, albeit inconvenient, side effects that students experience (fatigue, irritability, proctalgia), what are the problems with these tests in practice? These are five of the core issues most often cited as the basis for reforming our current examination model:
1) Lack of objectivity. Tests are created by humans and thus are inherently biased. While they aim to assess a broad base of knowledge or skills, performance can be underestimated not because that base is lacking but because of issues with the testing format, such as duration, question types, and scoring procedure (e.g., the SAT penalizes guessers, whereas the ACT does not). Just as our current model of clinical trial testing is antithetical to personalized medicine (What is a standard dose? Or, more puzzlingly, a standard patient?), our current model of testing does not take these individual differences into account.