In the future, implanted chips will be able to stop food absorption once caloric intake reaches 2,200 calories. Cells in our forearms will monitor our glucose levels and adjust our insulin accordingly. These implantable cells, or “chips,” will have their own IP addresses and circuitry, connected to a network 24/7. Through this network, they will communicate with real-time supercomputers to synthesize the next step for an individual’s body. If Dr. Anthony Atala can use 3D printers to create a new kidney, then it is only a matter of time before we can embed within an organ the circuitry necessary to monitor its function wirelessly.
This was the future I was challenged to paint in my talk at TEDMED 2012 at the Kennedy Center for the Performing Arts in Washington, DC. With the conclusion of TEDMED 2013 last week, I ask myself, where are we one year later?
A caveat: The following are simple overviews of novel technologies I have been tracking over the past year; they do no justice to the many amazing leaps we have made in innovative science and medicine during this time.
At the turn of the 20th century, we built a healthcare system on responding to acute, curative, episodic issues. This system saw the eradication of many diseases and the advent of vaccinations and new treatments. The model was truly developed to be a “sickcare system,” which was what we needed at the time, and saw huge successes.
Fast forward 100 years and Americans are sicker than ever — but with different illnesses. What’s more, there is finally a national consensus that our healthcare system is broken. With increasingly tragic consequences, the reactionary medical paradigm has not provided the preventive care or chronic illness management that our culture needs. Healthcare spending currently consumes 17 percent of our GDP and without a radical shift in thinking, this number may grow even higher.
Sadly, patients are not the only ones suffering. The status quo is breeding a morale crisis among our nation’s doctors. If you asked one of the many thousands of medical students who are just beginning their fall semester why they chose medicine, many of them would give you confused, anxious responses about the field they are entering. This does not bode well for the health of future generations.
Last spring, we met at TEDMED, an annual “grand gathering” in Washington, DC, where forward thinkers from all sectors explore the promise of technology and the potential of human achievement as it pertains to health and medicine. Here, we presented our respective positions. One of us, Ali, argued that new technologies will actively change our health behavior. Another, Sunny, argued that we need systems thinking in public health, focusing on the causes of the causes. Yet another, Jacob, argued for stopping the “imaginectomies” and fostering creativity in medical training by rethinking selection criteria and curricula for entrance to medical school.
At a time when one in three Americans report difficulty paying medical bills, up to $750 billion is being spent on care that does not help patients become healthier. Although physicians are routinely required to manage expensive resources, traditional medical training offers few opportunities to learn how to deliver the highest quality care at the lowest possible cost. While the gap is glaring, the problem is not new.
In 1975, the department of medicine at Charlotte Memorial Hospital initiated a system to monitor medical costs generated by house officers. In the Journal of Medical Education, leaders of the Charlotte initiative described how simply making clinicians aware of how their decisions affect the cost of care could decrease inpatient length of stay by 21 percent. Over the last four decades there have been dozens of similar efforts to educate medical students and residents about opportunities to improve the value of care. Some interventions, like the one in Charlotte, simply revealed the cost of routine tests to trainees. Others provided more sophisticated didactics, interrogated medical records to give trainee-specific feedback on utilization, or creatively leveraged hospital computerized order-entry systems.
Recently, I was having a discussion with a colleague about being a doctor. She confided in me that if someone asked her about becoming a doctor, she would tell him or her to become a nurse practitioner instead. After reading the emotional open letter to our policymakers in Washington, DC, that may sound like a reasonable suggestion. After all, why go into so much debt and spend so much time in training if your prospects are not much better? More recently, a New York Times article pointed out that job prospects for radiology trainees are thinning, meaning the well-known “ROAD” (Radiology, Ophthalmology, Anesthesiology, and Dermatology) to success may soon become a road to nowhere if there are no jobs.
Therein lies the question: why become a doctor? If the answer is to make money or to have an easy life, then you should probably look for a new profession. With healthcare payment reform, doctors can expect lower salaries as bundled payments and cost-cutting measures are instituted. Moreover, demand for healthcare will rise as more patients gain insurance, leading to higher patient volumes and the expectation to see more patients in the same amount of time.
The progeny of the iPhone and the iPad will change the shape of your institution — and your balance sheet.
One of the more striking images, to me, out of the online spew in the last few months was from the inauguration. It was a wide view of an inaugural ball. There was the president waltzing with the first lady, and a crowd of several hundred watching them. What was striking about that image was that the several hundred people held several hundred small glowing rectangles in their hands. Practically every member of the crowd was carrying a smartphone and was photographing or videotaping the moment.
The scene was commonplace in its moment, remarkable only in the perspective of history — but such a short history. We could not have imagined so many people carrying smartphones at Obama’s first inaugural only four years ago. Four years before that, we could not have imagined any. The iPhone had not been invented.
There had been attempts at smartphones before the iPhone, and devices like tablets before the iPad. But the rampant success of iOS devices did far more than establish two profitable niches. It changed our relationship with the world.
Should I be prescribing apps, and if so, which ones?
I recently came across this video of Happtique’s CEO Ben Chodor describing his company to Health 2.0’s Matthew Holt. In it, the CEO explains that Happtique is creating a safe and organized space to make it easy for doctors to prescribe apps and otherwise “engage with patients.”
Because, he says “we believe that the day is going to come that doctors, and care managers, are going to prescribe apps. It’s going to be part of going to the doctor. He’s going to prescribe you Lipitor, and he’s going to give you a cholesterol adherence app.”
He goes on to say that they have a special process to make sure apps are “safe” and says this could be like the good housekeeping seal of approval for apps.
Hmm. I have to admit that I really can’t imagine myself ever prescribing a “cholesterol adherence” app. (More on why below; I also found myself wondering what exactly it meant for Happtique to say an app was safe. What would an unsafe cholesterol app look like?)
“What does the 21st Century Physician look like?”
Lisa Fields (@PracticalWisdom) cc’ed me on a tweet about this; it’s the featured question at www.tomorrowsdoctor.org, an organization founded by three young professionals who spoke at TEDMED last year.
I’ll admit that the question on the face of it struck me as a bit absurd, especially when juxtaposed with the term “tomorrow’s doctor.”
Tomorrow’s doctor needs to be doing a much better job of dealing with today’s medical challenges, because they will all still be here tomorrow. (Duh!) And the day after tomorrow.
(As for the 21st century in general, given the speed at which things are changing around us, it seems hard to predict what we’ll be doing by 2050. I think it’s likely that we’ll still need to take care of elderly people with physical and cognitive limitations, but I sincerely hope medication management won’t still be a big problem. That, I do expect technology to solve.)
After looking at the related Huffington Post piece, however, I realized that this trio really seems to be thinking about how medical education should be changed and improved. In which case, I kind of think they should change their organization’s name to “Next Decade’s Doctor,” but I can see how that perhaps might not sound catchy enough.
A few quick impressions from last week’s FutureMed extravaganza put on by Singularity University at the Museum of Computer History, a stone’s throw from Google’s Mountain View headquarters.
The event featured an exhibition session where emerging digital health companies (with some others) demo’d their initial products, followed by a plenary session introduced by FutureMed Executive Director (and former MGH medicine colleague) Daniel Kraft, and featuring presentations to the packed house by several leading innovators – including one of the developers of IBM’s Watson, which is pivoting from Jeopardy to clinical medicine.
Given the high density of reporters there – to say nothing of innovators, would-be innovators, VCs, and assorted poseurs (categories not mutually exclusive) – I expect there should be lucid coverage available elsewhere on the web.
Instead, I want to capture the three sequential reactions I had, which strike me as somewhat analogous to Haeckel’s Law (ontogeny recapitulates phylogeny), as each response seems to reflect a distinct stage of professional development.
The inevitable initial, and most visceral, reaction to this sort of event is that technology is wicked cool and will deliver us all; I think this two-minute introductory video captures the vibe more effectively than any description I could offer. I’m also certain any student of semiotics would find it especially rewarding.
Accordingly, even much of the informal discussion at the event seemed to revolve around Big Questions, lofty ideas, and the Next Big Thing. New technologies and approaches – artificial organs from stem cells! Computers that can read your mind! Bottom-up innovation! Exponentials! – were discussed expectantly, the key question being not if, but when. The remarkable progress many in the tech crowd had seen in other disciplines suggested that technology advances in health would be similarly achievable, and just as inevitable.