We need more doctors.
Between older care providers retiring and the general population shift that is the aging of the Baby Boomers, we are facing a massive demographic of more, older patients living longer and managing more chronic conditions. This not only puts incredible pressure on the remaining doctors and nurses to make up the gap, but also strains the capacity of schools to recruit, train, and produce competent medical professionals.
So how can schools do more to reach students and empower them to enter the healthcare field?
The increasing popularity of online programs (particularly at the Master's level, among working professionals looking for a boost to their career advancement) has called forth a litany of studies and commentaries questioning everything from their technology to their academics, compared to traditional, on-campus programs. More productive would be questioning the structure and measuring the outcomes of degree programs in general, rather than judging the value of a new delivery mechanism against an alternative more rooted in tradition than science.
In terms of sheer practicality, though, a distance education—yes, even for doctors and surgeons—makes a certain amount of sense. One of the hottest topics in the medical community right now is Electronic Health Records (EHRs) and the ongoing struggle to fully implement and realize the utility of such technology.
Rolling out in October of 2015 comes the sidecar for the EHR vehicle: ICD-10, the international medical coding system that the U.S. has long postponed adopting. While the digital nature of modern records platforms at least makes ICD-10 viable, it still represents a steep learning curve for current care providers.
Then there is the intriguing promise of pharmacogenetics, whereby medication is developed, tested, and prescribed, all on the basis of a patient’s individual genetic profile. Combined with an EHR and a personal genetic profile, a patient could be observed, screened, diagnosed, referred to a pharmacist, and able to order and receive a prescription, all without leaving home. Taking into consideration the growing need for medication therapy management—driven by the Baby Boomers living longer with more conditions under care—the value of such a high-tech system is clear.
This draws on what is perhaps the most lucrative (in terms of health outcomes and large-scale care delivery) set of possibilities enabled by the shift to digital: telemedicine. From consultations to check-ups, telehealth in the digital age no longer necessitates sacrificing face-to-face interaction; streaming video chat means patients and doctors can still look one another in the eye, albeit through the aid of cameras.
Proponents of the technology take it further, claiming that world-class surgeons will no longer be anchored to a single facility—human-guided robotic surgery (telesurgery) will bring expertise to even the most remote locations.
If industry leaders anticipate so much being done remotely, why then are others squeamish about delivering an education online? It would seem that the medical skillset of the future requires greater comfort and competence in dealing with virtual settings, online interaction, and digital record-keeping.
The problem many have is not with online med school in particular so much as online degree programs in general. How can a virtual setting possibly hope to compete with the unique, collaborative, community-oriented environment of the college campus—whatever the area of study?
Forward-thinking professors like Sharon Stoerger at Rutgers have pioneered at least one possible answer to this question. Adopting the online immersive social platform known as Second Life, Stoerger and her like-minded peers have constructed virtual classrooms with accompanying courses, and successfully guided several cohorts (of students as well as instructors) through the experience.
For the aspects of learning that simply require hands-on practice, of course, there are limits to the promise of such virtual environments. Then again, synthetic patient models, known as Human Patient Simulators (HPS), are already proving their merits as an efficient, effective way to let students gain practical experience in a controlled environment. Ohio University instructors have pioneered the use of HPS in the school's nursing programs, and advancing technology continues to push the functional limits of such systems.
In order to realize the potential of modern delivery of patient care, we first need to realize the potential of modern instructional delivery. The technology is already showing that the real limits of online learning are not practical considerations; they are attitudes and assumptions about what learning ought to look like.
Edgar T. Wilson is a healthcare and policy analyst.
“In terms of sheer practicality, though, a distance education—yes, even for doctors and surgeons—makes a certain amount of sense. One of the hottest topics in the medical community right now is Electronic Health Records (EHRs) and the ongoing struggle to fully implement and realize the utility of such technology.”
I’m having a difficult time seeing what those two topics have to do with each other, as they comprise this one paragraph. You may want to re-write this.
“Then there is the intriguing promise of pharmacogenetics, whereby medication is developed, tested, and prescribed, all on the basis of a patient’s individual genetic profile. Combined with an EHR and a personal genetic profile, a patient could be observed, screened, diagnosed, referred to a pharmacist, and able to order and receive a prescription, all without leaving home.”
A couple of things: docs often don't have enough time TODAY to get through an electronic SOAP note effectively. Adding in “omics” data may be problematic, both in terms of the sheer number of additional potential dx/tx variables to be considered in a short amount of time, and then there are the questions of “omics” analytic competency (which, of course, maps back to your pedagogy riff here).
The complexity will be daunting. Consider:
“Just as we are getting to grips with the idea of sequencing millions of genomes, evidence is suggesting that even one per person might not be enough. The dogma that each of us has one genome to sequence is crumbling under the weight of evidence. It seems that we might be genomic mosaics and the new paradigm could be ‘one human, multiple genomes’.
The most common source of our multiple human genomes is cancer. Genetic disease is conventionally thought to arise from inherited genetic lesions found in the germ line— the sperm and eggs that combine to form the first human cells from which we all grow. In contrast, cancer is a disease that can arise from genetic mutations occurring within cells in the body— somatic cells (for soma, meaning body). Cancerous cells are aggressive in their attempts to grow and spread to places they are not meant to go.
We all possess precancerous or slow-growing cancerous cells. In an autopsy study of six individuals, high rates of cellular mosaicism were found across different tissues. Mosaics were classified as having one or more large insertions, deletions, or duplications of DNA compared to the original ‘parent genome’ created at conception.
Mosaicism goes far beyond cancer. An increasing number of somatic mutations are being linked to other genetic diseases. These include neurodevelopmental diseases that can arise in prenatal brain formation and cause recognizable symptoms even when present at low levels. Brain malformations associated with these changes are linked to epilepsy and intellectual disability.
Humans can also be mosaics of ‘foreign’ genomes. Rare cases of confounded identities brought to light the first examples. In one case, a woman needing a kidney transplant did not genetically match her children; her kidney grew from the cells of her lost twin brother. In another case, the identity of a criminal was masked because cells from his bone marrow transplant had migrated into the lining of his cheek. Cheek swabs were taken for his DNA test. Even more remarkable, observations suggest that many women who have been pregnant might be genomic chimeras. In samples from brain autopsies of 59 women, for example, 63 per cent of neurons contained Y chromosomes originating from their male offspring (actually from the fathers).
Doctors and geneticists are just starting to explore what having a multiplicity of genomes means for human health. At this point they are busy mapping the extent of the phenomenon but the message is already loud and clear: genomics continues to astonish us and genomic diversity is appearing everywhere we imagine to look, including inside our own bodies.
Beyond genomics, epigenomics is perhaps an even higher mountain of diversity to scale. Genomes might be relatively static entities at the level of their nucleotides A, C, G, and T, but the double helix can be decorated in numerous ways that change how genes are turned on and off, and in which combinations. In essence, exactly the same genome sequence can have very different effects depending on its history and context. Gene expression patterns can change frequently, and in some cases the modifications are even passed on to the next generation. It never ends. Human genetic variation continues to blindside us with its enormity and complexity.”
Field, Dawn; Davies, Neil (2015-01-31). Biocode: The New Age of Genomics (pp. 33-34). Oxford University Press. Kindle Edition.
President Obama’s current infatuation with “Precision Medicine” notwithstanding, just dumping a bunch of “omics” data into EHRs (insufficiently vetted, and inadequately understood) is likely to set us up for our latest HIT disappointment.
Specifically with regard to clinical pedagogy (and its intersection with HIT), see my citation of Lawrence and Lincoln Weed's seminal book “Medicine in Denial.”
See also my current look into Martin Ford’s “Rise of the Robots.” Significant implications for clinical pedagogy there as well.
“The Robot will see you now — assuming you can pay” http://regionalextensioncenter.blogspot.com/2015/05/the-robot-will-see-you-now-assuming-you.html
Chapter 4, “Rise of the Robots”
“The fact that two teams of doctors can struggle to make the same diagnosis— and that they can do so even when the answer to the mystery has been broadcast to millions of prime-time television viewers— is a testament to the extent to which medical knowledge and diagnostic skill are compartmentalized in the brains of individual physicians, even in an age when the Internet has enabled an unprecedented degree of collaboration and access to information. As a result, the fundamental process that doctors use to diagnose and treat illnesses has remained, in important ways, relatively unchanged. Upending that traditional approach to problem solving, and unleashing all the information trapped in individual minds or published in obscure medical journals, likely represents one of the most important potential benefits of artificial intelligence and big data as applied to medicine.
In general, the advances in information technology that are disrupting other areas of the economy have so far made relatively few inroads into the health care sector. Especially hard to find is any evidence that technology is resulting in meaningful improvements in overall efficiency. In 1960, health care represented less than 6 percent of the US economy. By 2013 it had nearly tripled, having grown to nearly 18 percent, and per capita health care spending in the United States had soared to a level roughly double that of most other industrialized countries. One of the greatest risks going forward is that technology will continue to impact asymmetrically, driving down wages or creating unemployment across most of the economy, even as the cost of health care continues to climb. The danger, in a sense, is not too many health care robots but too few. If technology fails to rise to the health care challenge, the result is likely to be a soaring, and ultimately unsustainable, burden on both individual households and the economy as a whole.
Artificial Intelligence in Medicine
The total amount of information that could potentially be useful to a physician attempting to diagnose a particular patient’s condition or design an optimal treatment strategy is staggering. Physicians are faced with a continuous torrent of new discoveries, innovative treatments, and clinical study evaluations published in medical and scientific journals throughout the world.”…
Ford, Martin (2015-05-05). Rise of the Robots: Technology and the Threat of a Jobless Future (pp. 146-147). Basic Books. Kindle Edition.
Ford devotes a good bit of discussion to issues pertaining to tactics such as “MOOCs” (Massively Open Online Courses) and “distance education” methods more broadly.
“If industry leaders anticipate so much being done remotely, why then are others squeamish about delivering an education online? It would seem that the medical skillset of the future requires greater comfort and competence in dealing with virtual settings, online interaction, and digital record-keeping.
The problem many have is not with online med school in particular so much as online degree programs in general. How can a virtual setting possibly hope to compete with the unique, collaborative, community-oriented environment of the college campus—whatever the area of study?”
Yeah. Pedagogy again. Former K-P CEO George Halvorson has spoken to the fact that “we seem to be hard-wired to learn from each other.” I worry about our increasing reliance on cloistered online content delivery. Maybe I’m just an old coot, though.
@BobbyGvegas, I don’t think you are just an old coot. Quite the contrary, and I’m glad you took these ideas to task.
So, pedagogy. I think the notion that MOOCs, virtual classrooms, and other emerging elements of online education fail to account for interactive learning is flawed and unfair. It is probably the most obvious risk and potential shortcoming of online content delivery, which is why you see so many different solutions (or at least creative attempts to compensate) emerging in the academic and business worlds. See Stoerger, et al.
Yes, age and cultural differences may influence comfort with ‘virtual’ interactions, but digital natives are the ones who will be learning and operating in the increasingly connected environments of the future, so immersion is not inherently far-fetched. Digital natives are conditioned almost from birth to interact with computers and electronic media in a way previous generations simply cannot learn to do. Not only does that give the natives a fighting chance of overcoming current challenges with EHRs, SOAP notes, etc., but it means they may actually learn more effectively in alternative settings, distance programs included.
That is what I meant when I connected the practicality of distance education with the rise of electronic records. I concede I could have been more explicit. But my greater point is that teaching and practice can be brought closer together with technology, and the tools, interfaces, and general trappings of virtual classrooms may soon look more like modern doctors’ offices than brick and mortar college classrooms.
I am highly skeptical of notions that relatively untested methods of learning and teaching are inherently less effective than traditional ones. I don't mean to advocate for distance learning, of which I am also skeptical, but I do think giving results a chance to speak for themselves makes more sense than prejudging the different ways we can find to present skills, knowledge, and evolving notions of fact, best practices, etc.
Perhaps the challenges we are experiencing in bringing together EHRs, ‘omics,’ and all the other modern elements supposedly set to revolutionize medicine are as much a user problem as a technological one. In that case, I would say give more people more chances to learn and solve these problems, and don't let geography get in the way.
As for the economics of it all, that is a real can of worms. Information technology stands to potentially slash the costs of medicine and education alike; both are untenably expensive, and show no signs of changing course. Our entire system of valuation, delivery, and public-private coordination is flawed, and I do not believe technology represents either a silver-bullet solution or the straw that will break the camel's back. I don't see rapid development and integration of new tech and toys continuing indefinitely, ahead of proven efficiency and affordability for providers and consumers alike, without a major correction occurring.
The full cost-cutting potential of technology will certainly require industrial adaptation, and the two will (ideally, and probably) stimulate one another. Clearly, medical professionals are pushing back against Meaningful Use and ICD-10 mandates, which emerged in response to a clear, felt need in the industry to account for evolving technology. This tussle is ongoing, and I don't see the promise or threat of technology, economics, or bureaucracy lunging us past this process into any particular end result too quickly. That is why I don't think robots will put everyone out of work without a macroeconomic adjustment to go along with it. But that is speculation.
I think you bring up very worthwhile and significant questions, which is all I hoped to do with my observations. I think the answers will emerge more in transit, than as a clear-cut final word.