Health Policy

As Shared Decision-Making Ails, AI May Save This Human Interaction

By MICHAEL MILLENSON

Shared decision-making between doctors and patients may be “the pinnacle of patient-centered care,” but three new medical journal articles suggest it’s encountering more problems than peaks. Yet counterintuitively, it may be artificial intelligence that rescues this intimately human interaction.

“Shared decision-making is at a crossroads,” declares a Perspective in the Journal of General Internal Medicine, “Saving Shared Decision-Making.” Unfortunately, its more-research-and-education recommendations for “advancing the science of SDM implementation” seem more crossing guard than crisis management.

Even a cursory historical perspective shows that SDM is suffering from a failure to flourish. Back in 1982, a report by a presidential commission on ethics in medicine declared SDM “the appropriate ideal for patient-professional relationships” and called on doctors “to respect and enhance their patients’ capacities for wise exercise of their autonomy.”

Yet 43 years later, the Perspective authors – 18 members of the Agency for Healthcare Research and Quality Shared Decision-Making Learning Community – acknowledged that while some doctors respectfully ask patients, “What do you think you would like to do, given these options?” many others still believe that “Let’s do this option, sound OK?” is a shared decision process.

That attitude reminded me of a tongue-in-cheek comment by comedian Stephen Colbert. “See what we can accomplish when we work together by you doing what I say?” he told a 2015 Colbert Nation audience. “It’s called a partnership.”

Cancer Communication Curtailed

In cancer, where patient-doctor interactions have the highest stakes, shared decision-making was named one of the central components of quality care in a 1999 report, Ensuring Quality Cancer Care, by the Institute of Medicine (now the National Academy of Medicine). Nonetheless, a review of SDM among cancer patients in the journal Psycho-Oncology found that for physicians, “making decisions and taking responsibility for the decisions remain an important part of the physicians’ professional identity.” The fear of losing this identity, the authors wrote, “tends to hinder the patient involvement and implementation of SDM.”

Not surprisingly, cancer patients who want to speak up feel as if they won’t be listened to or can’t really refuse whatever their oncologist considers clinically “optimal.” And, it turns out, oncologists are actually less open to SDM if a patient does speak up and resists the recommendations they feel are in the patient’s best interest.

Meanwhile, for those hoping Gen Z doctors will naturally be more sensitive, a JAMA Perspective, “When Patients Arrive With Answers,” brought discouraging news. When the topic of patients bringing in a treatment recommendation from ChatGPT came up among a group of medical students in the Seattle area, these Internet-native physicians of tomorrow bristled with an old-fashioned dismissiveness of the patient who’s “going to tell us what to order.”

There’s an implicit message that “we still know best,” lamented Dr. Kumara Raja Sundar.

AI Addresses Chronic Problem

When you take a hard look at SDM use, misuse and non-use, it’s clear this is a chronic problem, not an acute one. Good intentions collide with cultural norms going back to Hippocrates. The idea of patient self-determination, writes medical ethicist Dr. Jay Katz in The Silent World of Doctor and Patient, represents “a radical break with medical practices, as transmitted from teacher to student during more than two thousand years of recorded medical history.”

Perhaps equally important, individual physicians are increasingly unlikely to control their own time. In the 1980s, 80% of physicians worked in practices of ten or fewer doctors, according to the American Medical Association, and the overwhelming majority of those were in private practice. In 2024, for the first time, private practice doctors were a minority, at just 42%, and about one in five doctors worked in practices of 50 or more.

Paradoxically, AI may push shared decision-making onto what is now often an extremely time-pressured agenda precisely because the detailed, personalized information it can provide forces a reassessment of physician professional identity. Similarly, the scale, scope and depth of the AI revolution will also compel the group practice leaders, health system executives, private equity satraps and all others who now pull the strings on so many physicians to adapt to the democratization of medical knowledge.

There may be no other choice. Already, individuals with breast, lung or prostate cancer can go to a well-funded start-up that will help them transfer their medical record into a platform that compares their treatment plan to the clinical practice guidelines of the National Comprehensive Cancer Network. Separately, a cancer survivor and entrepreneur has launched an online platform to make personalized agentic AI, a sophisticated search of the medical literature, available to every cancer patient. And real-world evidence in cancer care, now being marketed to clinicians and researchers, will inevitably be available directly to patients. Meanwhile, online venues like the PatientsUseAI Substack help those who wish to be full partners in their care learn to use the new tools.

The question no longer will be whether there is shared decision-making, but how it takes place. Sundar, a family physician, suggests “relational humility,” with doctors “seeing AI-informed visits as opportunities for deeper dialogue rather than threats to clinical authority.”

He adds, “If patients are arming themselves with information to be heard, our task as clinicians is to meet them with recognition, not resistance.”

Michael L. Millenson is president of Health Quality Advisors and a regular THCB contributor. This post first appeared in his column at Forbes.