At long last, we seem to be on the threshold of departing the earliest phases of AI, defined by the always tedious “will AI replace doctors/drug developers/occupation X?” discussion, and are poised to enter the more considered conversation of “Where will AI be useful?” and “What are the key barriers to implementation?”
As I’ve watched this evolution in both drug discovery and medicine, I’ve come to appreciate that in addition to the many technical barriers often considered, there’s a critical conceptual barrier as well – the threat some AI-based approaches can pose to our “explanatory models” (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula): our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. In contrast, AI relates often-imperceptible observations to outcome in a fashion that’s unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.
In an effort to help women make informed decisions about where to deliver their babies, we set out to collect a comprehensive, nationwide database of hospitals’ C-section rates. Knowing that the federal government mandates surveillance and reporting of vital statistics through the National Vital Statistics System, we contacted all 50 states’ (+Washington D.C.) Departments of Public Health (DPH) asking for access to de-identified birth data from all of their hospitals. What we learned might not surprise you — the lack of transparency in the United States healthcare system extends to quality information, and specifically C-section data.
Adoption of technology in the healthcare field has been happening at an incredibly slow pace – a fact few would disagree with. The market is saturated with health tech companies vying to be the next big unicorn in the field, but long sales cycles and simple underestimations of what is needed for HIPAA compliance and FDA approval have led to the demise of many of these projects. The ones that do receive enough series funding to produce finessed products for health systems and pharmaceutical companies, however, soon realize that the battle against time is not over.
Simply getting into a health system is not enough. Once a contract is finally ironed out and the software is delivered, the next uphill battle – against the slow pace of internal adoption – begins. Speedy adoption is important not only for hospitals, to demonstrate that their purchases and investments were appropriate, but also for founders who hope to demonstrate that their product works. Nothing is worse than the painfully slow internal adoption of a piece of technology. One bad experience has the potential to tarnish an organization’s appetite for future tech ventures.
On July 24, the new administration kicked off its version of interoperability work with a public meeting of the incumbent trust brokers. It invited the usual suspects – Carequality, the CARIN Alliance, CommonWell, Digital Bridge, DirectTrust, eHealth Exchange, NATE, and SHIEC – with the goal of understanding how these groups will work with each other to solve information blocking and deliver longitudinal health records, as mandated by the 21st Century Cures Act.
Of the eight would-be trust brokers, some go back to 2008, but only one is contemporary with the 21st Century Cures Act: the CARIN Alliance. The growing list of trust brokers over our decade of digital health tracks with the growing frustration of physicians, patients, and Congress over information blocking – but is there causation beyond mere correlation?
One way to get data to move is open APIs, which the 21st Century Cures Act mandates by tasking EHR vendors to open up patient data “without special effort, through the use of application programming interfaces.”
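As a concrete (and entirely hypothetical) sketch of what “without special effort” could mean in practice, here is how a third-party app might pull observations out of a FHIR-style search bundle. The bundle below is hand-made sample data, the resource shapes are simplified, and a real EHR endpoint would add a base URL and an authorization step:

```python
# A hand-made stand-in for the JSON "Bundle" a FHIR-style open API might
# return from a search such as GET {base-url}/Observation?patient=...
# (illustrative only; real responses carry many more fields).
sample_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Blood pressure"},
                      "valueString": "120/80 mmHg"}},
        {"resource": {"resourceType": "Observation",
                      "code": {"text": "Heart rate"},
                      "valueString": "72 bpm"}},
    ],
}

def list_observations(bundle):
    """Collect (label, value) pairs from Observation resources in a bundle."""
    results = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "Observation":
            results.append((resource["code"]["text"],
                            resource.get("valueString", "")))
    return results

print(list_observations(sample_bundle))
# [('Blood pressure', '120/80 mmHg'), ('Heart rate', '72 bpm')]
```

The point of the mandate is that this kind of consumption should require nothing more exotic than the standard web-API plumbing shown here.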
Contrary to what you may think, most doctors do want to make eye contact. They aren’t antisocial. They want to engage. But they can’t. They’re too distracted by one of the worst computer games ever invented—the electronic medical record (EMR).
You may be surprised to see the EMR compared to a computer game, but there are many similarities. Both offer a series of clicks with an often-maddening array of tasks to solve. There are templates to follow, boxes to fill in, and scoring. However, unlike most electronic games, the points accrued in the EMR often translate into payment—real dollars for either your doctor or the hospital.
Although these clicks and boxes may be necessary to document your visit, they’re distracting. And your doctor begins to feel more like a librarian cataloging information than, say, a historian capturing your story.
The first time I met one of my staff physicians on Internal Medicine, he told our team he had just one rule:
“Our team must contact the patient’s family physician during the admission, inform him or her of the situation and plan for appropriate patient follow up after discharge.”
If you talk to any hospital physician or family doctor, they would almost certainly agree that this type of integration between hospital and community is essential for reducing avoidable ER visits and readmissions and for improving other key health outcomes. Put more simply, it’s just good care.
And so you would think contacting a patient’s family doctor during a hospital admission would be the standard of care – but it’s not. There’s no rule or expectation; rather, it’s just something nice to do.
I’m not here to criticize health care providers who do or don’t act a certain way. I’m sure there are many best practices which some providers do that others don’t, and vice versa.
That said, I don’t think we can deny the harsh truth: it’s no longer about knowing what needs to be done to provide higher-quality care at a lower cost. We already know enough to begin implementing.
I am writing this from the Apple Worldwide Developer Conference (WWDC) here in San Francisco, where I got to substitute for John Halamka at the Keynote (now I keep having urges to raise Alpacas); John missed the most amazing seats [front row center!].
There were many, many, many new technologies announced, demoed, and discussed (I cannot recall a set of software announcements of this scale from Apple), but I will limit this entry to a few technologies that have implications for healthcare.
If you remember the state of digital music prior to the introduction of the iPod and the iTunes Music Store, that is where I feel the healthcare app industry is today: there is no common infrastructure between any of the offerings, and consumers have been somewhat ambivalent toward them because everything is a data island; switching apps causes data loss and is not a pleasant experience for patients.
Amazingly, there are 40,000+ health apps on Apple’s App Store alone, showing huge demand from users, but probably only a handful can talk to each other in a meaningful way; this is true on both the consumer and professional sides of healthcare.
Individual vendors such as Withings have made impressive strides toward data consolidation on the platform, but these efforts are not baked into the OS, so they will always have a lower adoption rate. To take the music industry example further: Apple entering a market with a full push of an ecosystem at its scale legitimizes the technology in ways that other vendors simply can’t match.
People hate to hear it, but the robots are coming and it’s only a matter of time before they start competing for skilled, white collar jobs.
Nurses are vulnerable – but before you get excited and start attacking me, so, too, are consultants and bloggers. So get used to it, and figure out how you’re going to co-exist with and leverage the bots.
One of my pet peeves about robots is when their programmers try to make them act human by intentionally making them imperfect or having them simulate (feign?) empathy.
For example, I can’t stand it when the voice recognition airline rep talks in a sympathetic sounding voice when “she” can’t understand what I’m saying.
But apparently we’ll be seeing more of these little “humanizing” tricks, thanks to research from MIT that concludes that people like this kind of stuff. From the Wall Street Journal, we learn:
People like their therapy robots to be baby-faced
We feel emotionally closer to robots that sound like our own gender
When robots mimic our activity (like folding their arms) we like it
And then there’s this one:
“One study showed that people rated online travel booking and dating services more positively when the service communicated clearly that it was working for the consumer (e.g., “We are now searching 100 sites for you”) than when they simply provided search results. Surprisingly, having to wait 30 seconds for results but also receiving this communication of effort slightly increased users’ satisfaction, compared with receiving results instantaneously. Being made aware of the website’s willingness to work on their behalf made people feel that the service was sympathetic to their needs.”
After decades of bravely keeping them at bay, health care is beginning to be overwhelmed by “fast, cheap, and out of control” new technologies, from BYOD (“bring your own device”) tablets in the operating room, to apps and dongles that turn your smart phone into a Star Trek Tricorder, to 3-D printed skulls. (No, not a souvenir of the Grateful Dead, a Harley decoration or a pastry for the Mexican Día de los Muertos, but an actual skullcap to repair someone’s head. Take measurements from a scan, set to work in a CAD/CAM program, press Cmd-P and boom! There you have it: new ear-to-ear skull top, ready for implant.)
Each new category, we are told, will Revolutionize Health Care, making it orders of magnitude better and far less expensive. Yet the experience of the last three decades is that each new technology only adds complexity and expense.
So what will it be? Will some of these new technologies actually transform health care? Which ones? How can we know?
There is an answer, but it does not lie in the technologies. It lies in the economics. It lies in the reason we have so much waste in health care. We have so much waste because we get paid for it.
Yes, it’s that simple. In an insurance-supported fee-for-service system, we don’t get paid to solve problems. We get paid to do stuff that might solve a problem. The more stuff we do, and the more complex the stuff we do, the more impressive the machines we use, the more we get paid.
A Tale of a Wasteful Technology
A few presidencies back, I was at a medical conference at a resort on a hilltop near San Diego. I was invited into a trailer to see a demo of a marvelous new technology — computer-aided mammography. I had never even taken a close look at a mammogram, so I was immediately impressed with how difficult it is to pick possible tumors out of the cloudy images. The computer could show you the possibilities, easy as pie, drawing little circles around each suspicious nodule.
But, I asked, will people trust a computer to do such an important job?
Oh, the computer is just helping, I was told. All the scans will be seen by a human radiologist. The computer just makes sure the radiologist does not miss any possibilities.
I thought, Hmmm, if you have a radiologist looking at every scan anyway, why bother with the computer program? Are skilled radiologists in the habit of missing a lot of possible tumors? From the sound of it, I thought what we would get is a lot of false positives, unnecessary call-backs and biopsies, and a lot of unnecessarily worried women. After all, if the computer says something might be a tumor, now the radiologist is put in the position of proving that it isn’t.
I didn’t see any reason that this technology would catch on. I didn’t see it because the reason was not in the technology, it was in the economics.
Years later, as the industry trends toward standardizing on this technology, the results of various studies have shown exactly what I suspected they would: lots of false positives, call-backs and biopsies, and not a single tumor found that would have been missed without the computer. Not one. At an added cost trending toward half a billion dollars per year.
When it comes to discussing exercise with friends, family and patients, it seems that many of us are at a loss for words. What kind of exercise should we recommend? How much exercise is enough? How much is too much? How do I know that my patient is actually exercising? How do I prescribe exercise?
According to the U.S. Department of Health and Human Services, U.S. adults should engage in moderately intense physical activity for a minimum of 150 minutes each week; this is equivalent to 30 minutes a day, 5 days per week. While it is relatively easy to keep track of the duration and frequency of exercise, it is much more difficult to quantify the intensity of an activity, let alone ensure that the activity is “moderate” for the entire 30 minutes.
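The easy half of that bookkeeping – duration and frequency – is trivial to automate. A minimal sketch, using made-up session logs, of totaling a week of moderate-intensity sessions against the 150-minute guideline:

```python
# HHS guideline: 150 minutes/week of moderate activity,
# i.e. 30 minutes a day, 5 days a week.
GUIDELINE_MINUTES_PER_WEEK = 150

def meets_guideline(session_minutes):
    """session_minutes: durations (in minutes) of a week's moderate sessions."""
    return sum(session_minutes) >= GUIDELINE_MINUTES_PER_WEEK

print(meets_guideline([30, 30, 30, 30, 30]))  # True  (5 x 30 minutes)
print(meets_guideline([20, 25, 30]))          # False (only 75 minutes)
```

What this sketch cannot capture, of course, is the hard half: whether any of those minutes were actually of moderate intensity.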
In fact, in a 2008 study of women’s understanding of “moderate-intensity” physical activity as presented in the popular media, the authors found that it is not enough to simply hear or read a description of the activity level; gauging it requires practice.
So, what are we to do? Should we have our patients log their daily activities? Should we have our patients show us sign-in sheets from the local gym?
It turns out that the dilemma of how to quantify physical activity has been a hot topic for more than 50 years. In 1965, a Japanese inventor developed the first pedometer to give people the opportunity to meet measurable goals and, thus, increase their physical activity. The device was called the Manpo-Kei (meaning “10,000 steps meter”) and it was based on research by Dr. Yoshiro Hatano that demonstrated that 10,000 steps per day allowed for a proper balance between the traditional Japanese caloric intake and the activity-based caloric expenditure of walking approximately five miles per day (the average person’s stride length is approximately 2.5 feet, hence roughly 2,000 steps per mile).
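The stride arithmetic in that parenthetical checks out, using the 2.5-foot average stride cited above:

```python
# 5,280 feet per mile divided by a 2.5-foot stride gives the steps-per-mile
# figure that is conventionally rounded to 2,000; 10,000 steps then lands
# close to the five miles per day in Dr. Hatano's research.
FEET_PER_MILE = 5280
stride_feet = 2.5

steps_per_mile = FEET_PER_MILE / stride_feet      # 2112.0 exactly
miles_for_10k_steps = 10_000 / steps_per_mile     # about 4.7 miles
print(round(steps_per_mile), round(miles_for_10k_steps, 1))  # 2112 4.7
```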