
How Did the AI “Claude” Get Its Name?

By MIKE MAGEE

Let me be the first to introduce you to Claude Elwood Shannon. If you have never heard of him but consider yourself informed and engaged, including at the interface of AI and Medicine, don’t be embarrassed. I taught a semester of “AI and Medicine” in 2024 and only recently was introduced to “Claude.”

Let’s begin with the fact that the product, Claude, is not the same as the person, Claude. The person died a quarter century ago and, except among those deep in the field of AI, has largely been forgotten – until now.

Among those in the know, Claude Elwood Shannon is often referred to as the “father of information theory.” He graduated from the University of Michigan in 1936, where he majored in electrical engineering and mathematics. At 21, as a Master’s student at MIT, he wrote a Master’s thesis titled “A Symbolic Analysis of Relay and Switching Circuits,” which has been called “the birth certificate of the digital revolution,” earning him the Alfred Noble Prize in 1939 (no, not that Nobel Prize).

None of this was particularly obvious in those early years. A University of Michigan profile claims, “If you were looking for world changers in the U-M class of 1936, you probably would not have singled out Claude Shannon. The shy, stick-thin young man from Gaylord, Michigan, had a studious air and, at times, a playful smirk—but none of the obvious aspects of greatness. In the Michiganensian yearbook, Shannon is one more face in the crowd, his tie tightly knotted and his hair neatly parted for his senior photo.”

But that was one of the historic misreads of all time, according to his alma mater. “That unassuming senior would go on to take his place among the most influential Michigan alumni of all time—and among the towering scientific geniuses of the 20th century…It was Shannon who created the “bit,” the first objective measurement of the information content of any message—but that statement minimizes his contributions. It would be more accurate to say that Claude Shannon invented the modern concept of information. Scientific American called his groundbreaking 1948 paper, “A Mathematical Theory of Communication,” the “Magna Carta of the Information Age.””

I was introduced to “Claude” just 5 days ago by Washington Post technology columnist Geoffrey Fowler – Claude the product, not the person. His article, titled “5 AI bots took our tough reading test. One was smartest — and it wasn’t ChatGPT,” caught my eye. As he explained, “We challenged AI helpers to decode legal contracts, simplify medical research, speed-read a novel and make sense of Trump speeches.”

Judging the results of the medical research test was Scripps Research Translational Institute luminary Eric Topol. The five AI products were asked 115 questions on the content of two scientific research papers: “Three-year outcomes of post-acute sequelae of COVID-19” and “Retinal Optical Coherence Tomography Features Associated With Incident and Prevalent Parkinson Disease.”

Not to bury the lede: Claude – the product – won decisively, not only in science but also overall against four name-brand competitors I was familiar with – Google’s Gemini, OpenAI’s ChatGPT, Microsoft Copilot, and Meta AI. Which left me a bit embarrassed. How had I never heard of Claude the product?

For the answer, let’s retrace a bit of AI history.

Continue reading…

What AI and Grief-bots Can Teach Us About Supporting Grieving People

By MELISSA LUNARDINI

The Rise of Digital Grief Support

We’re witnessing a shift in how we process one of humanity’s most universal experiences: grief. Several companies have emerged in recent years to develop grief-related technology, where users can interact with AI versions of deceased loved ones or turn to general AI platforms for grief support.

This isn’t just curiosity; it’s a response to a genuine lack of human connection and support. The rise of grief-focused AI reveals something uncomfortable about our society: people are turning to machines because they’re not getting what they need from the humans around them.

Why People Are Choosing Digital Over Human Support

The grief tech industry is ramping up, with MIT Technology Review reporting that “at least half a dozen companies” in China are offering AI services for interacting with deceased loved ones. Companies like Character.AI, Nomi, Replika, StoryFile, and HereAfter AI offer users the ability to create and engage with the “likeness” of deceased persons, while many other users turn to general AI platforms to normalize their grief and seek answers. This digital migration isn’t happening in a vacuum. It’s a direct response to the failures of our current support systems:

  • Social Discomfort: Our grief-illiterate society struggles with how to respond to loss. Friends and family often disappear within weeks, leaving mourners isolated just when they need support most, especially in the months that follow.
  • Professional Barriers: Traditional grief counseling is expensive, with long wait times. Many therapists lack proper grief training, with some reporting no grief-related education in their programs. This leaves people without accessible, qualified support when they need it most.
  • Fear of Judgment: People often feel safer sharing intimate grief experiences with AI than with humans who might judge, offer unwanted advice, or grow uncomfortable with the intensity of their grief.

The ELIZA Effect

To understand why grief-focused AI is succeeding, we must look back to 1966, when the first AI companion program, ELIZA, was developed. Created by MIT’s Joseph Weizenbaum, ELIZA simulated conversation using simple pattern matching, specifically mimicking a Rogerian psychotherapist practicing person-centered therapy.

Rogerian therapy was perfect for this experiment because it relies heavily on mirroring what the person says. The AI companion’s role was simple: reflect back what the person said with questions like “How does that make you feel?” or “Tell me more about that.” Weizenbaum was surprised that people formed deep emotional connections with this simple program, confiding their most intimate thoughts and feelings. This phenomenon became known as the “ELIZA effect”.

ELIZA worked not because it was sophisticated but because it embodied the core principles of effective emotional support, something we as a society can learn from (or in some cases relearn).
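To make concrete just how simple that machinery was, here is a minimal, illustrative Python sketch in the spirit of ELIZA (not Weizenbaum’s code, and with rules invented for this example). The whole trick is two moves: match a keyword pattern, then mirror the speaker’s own words back as an open-ended question.

```python
import re

# Toy illustration of ELIZA-style pattern matching (invented rules, not
# Weizenbaum's original program). ELIZA had no understanding of grief or
# anything else: it matched keywords and reflected the speaker's words back.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i miss (.*)", "Tell me more about {0}."),
    (r".*\bmother\b.*", "What does your mother mean to you?"),
    (r"(.*)", "How does that make you feel?"),  # catch-all mirror
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my mom' -> 'your mom')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Tell me more."

print(respond("I feel lost without my mom"))      # Why do you feel lost without your mom?
print(respond("Nothing has felt normal lately"))  # How does that make you feel?
```

Modern grief-bots replace those hand-written rules with large language models, but the conversational posture – reflect, validate, ask – is recognizably the same.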

What AI and Grief-bots Get Right

Modern grief-focused AI succeeds for the same reasons ELIZA did, but with enhanced capabilities. Here’s what AI is doing right:

  • Non-Judgmental Presence: AI doesn’t recoil from grief’s intensity. It won’t tell you to “move on,” suggest you should be “over it by now,” or change the subject when your pain becomes uncomfortable. It simply witnesses and reflects.
  • Unconditional Availability: Grief doesn’t follow business hours. It strikes at 3 AM on a Tuesday, during family gatherings, while you’re at work, or on a grocery run. AI works 24/7, providing instant support by quickly normalizing common grief experiences like “I just saw someone who looked like my mom in the grocery store, am I going mad?” AI’s response demonstrates effective validation: “You’re not going mad at all. This is actually a very common experience when grieving someone close to you. Your brain is wired to recognize familiar patterns, especially faces of people who were important to you… This is completely normal. Your mind is still processing your loss, and these moments of recognition show just how deeply your mom is still with you in your memories and awareness.” Simple, on-demand validation helps grievers instantly feel normal and understood.
  • Pure Focus on the Griever: AI doesn’t hijack your story to share its own experiences. It doesn’t offer unsolicited advice about what you “should” do or grow weary of hearing the same story repeatedly. Its attention is entirely yours.
  • Validation Without Agenda: Unlike humans, who may rush to make you feel better (often for their own comfort), AI validates emotions without trying to fix or change them. It normalizes grief without pathologizing it.
  • Privacy and Safety: AI holds space for the “good, bad, and ugly” parts of grief confidentially. There’s no fear of social judgment, no worry about burdening someone, no concern about saying the “wrong” thing.
  • No Strings Attached: AI doesn’t need emotional reciprocity. It won’t eventually need comforting, grow tired of your grief, or abandon you if your healing takes longer than expected.

AI Can Do It, But Humans Can Do It Better. Much Better.

According to a 2025 article in Harvard Business Review, the #1 use of AI so far in 2025 is therapy and companionship.

Continue reading…

Now is the Time to Modernize Communication in the Medicaid Program

By ABNER MASON

What do the television shows 60 Minutes, Roseanne, Designing Women, and Murder, She Wrote all have in common? They were top 10 prime time shows in the 1991–92 season, according to Nielsen Media Research. Obviously, what Americans want to watch has changed in 34 years. The decline in market share that the major networks – ABC, CBS, and NBC – have experienced, and the dramatic growth of streaming services, prove the point. It makes sense to let people watch what they want on the device of their choice, and to use new technologies like streaming services to access those shows. It would be foolish for us to insist that Americans watch only shows from the legacy networks on traditional TVs. But this is basically what we are doing now when we force Medicaid Managed Care Plans (Plans) to comply with a 1991 federal law when they communicate with Medicaid recipients.

Here’s the problem. Federal legislation called the Telephone Consumer Protection Act (TCPA), enacted in 1991, makes it very difficult for States and Plans to use text messaging to communicate with their members, even though texting is the primary and preferred mode of communication for all Americans, including Medicaid recipients. TCPA requires a State or Plan sending a text to have permission from the person receiving the text before the text is sent. Violations of TCPA result in significant financial penalties for each infraction, and penalties are tripled if the sender knowingly sent the text without consent.

Medicaid recipients are typically assigned to Plans; they do not choose their Plan. As a result, in light of TCPA and the potentially enormous financial penalties that can be assessed, Plans have taken the position that they do not have consent from recipients to text them. And that is the problem.

Texting is the way most Americans communicate today. Other modalities like US mail (called snail mail for a reason), phone calls (who answers calls anymore?), and email (likely to go without a response for days or weeks) are dramatically less effective. Because they are low income, many Medicaid recipients do not have a landline or a laptop. They rely on their mobile phone for all their communication, including healthcare-related communication. Texting is their preferred, and often only, way of communicating.

As founder and CEO of SameSky Health, I spent over a decade working with Plans to help them engage their members and navigate them into healthcare at the right time and the right place. Again and again, we found that when we could maneuver around the outdated restrictions TCPA placed on Plans, we got higher engagement, which translated into more well-child visits, more breast cancer screenings, more diabetes (A1c) screenings, and so on. Using modern tools of communication is a way of meeting people where they are. It builds trust and leads to better health outcomes. But sadly, because of TCPA, we were not able to text members in most instances.

What has been a significant problem will be made exponentially worse when Federal Work Requirements are implemented as now seems likely. A Federal Medicaid Work requirement will dramatically increase the need to modernize how States and Plans communicate with Medicaid recipients. Compliance with TCPA is standing in the way of this modernization. And if it is not fixed, many, many people will lose their Medicaid benefits for purely procedural reasons.

To improve health outcomes, allow efficient communication to verify work status, and provide twice-yearly redetermination information, States and Plans must be exempted from the outdated provisions of TCPA. Senate action on, and final passage of, the Reconciliation legislation offers the best opportunity to get an exemption from TCPA passed and signed into law.

The time to act is now.

So let’s focus on (1) getting the exemption language into the Senate version of the Reconciliation legislation, (2) working with HHS and CMS to ensure post-legislation guidance directs States and Plans to include texting as a best practice when implementing work requirement programs and communicating with recipients more generally, and (3) implementing a media strategy to build support for using modern technology to create easier, more efficient ways for Medicaid recipients to comply with the new work requirements.

We have two months – June and July – to get action on an exemption in the Senate, and the remainder of the year to influence Administration guidance on work requirement programs.

Medicaid beneficiaries will be the biggest winners if we succeed because an exemption is a key strategy to reduce unnecessary loss of Medicaid benefits.

What can you do? Call your Senator and ask them to support modernizing how States and Plans communicate with Medicaid recipients. And please share this blog post with your network.

Abner Mason is Chief Strategy and Transformation Officer for GroundGame Health. He serves on the Board of Manifest MedEx, California’s largest health information exchange, is Vice-Chair of the Board of the California Black Health Network, and is a member of the National Commission on Climate and Workforce Health. Here are just some of the articles and interviews he has published over the past 10 years pushing for States and Plans to be able to text Medicaid recipients.

Owen Tripp, Included Health

Owen Tripp is CEO of Included Health. It started way back in the 2010s as a second opinion service but has since added telehealth, continuous primary care, behavioral health, and guidance for its populations. He’s taken to calling what they do all-in-one personalized healthcare. Underlying all this is a data integration and analytics platform that’s now being used by some of the biggest employers, including Walmart, Comcast, CalPERS, and more. Essentially, Included Health is building the new multi-specialty medical group. Owen & I really got into the details and had a great conversation about how we develop a “3rd way” between the payers and providers–Matthew Holt

I’m Sensing Some Future

By KIM BELLARD

One of my frequent laments is that here we are, a quarter of the way into the 21st century, yet too much of our health care system still looks like the 20th century, and not enough like the 22nd century. It’s too slow, too reactive, too imprecise, and uses too much brute force. I want a health care system that seems more futuristic, that does things more elegantly.

So here are three examples of the kinds of things that give me hope, in rough order of when they might be ready for prime time:

Floss sensor: You know you’re supposed to floss every day, right? And you know that your oral health is connected to your overall health, in a number of ways, right? So some smart people at Tufts University thought, hmm, perhaps we can help connect those dots.

 “It started in a collaboration with several departments across Tufts, examining how stress and other cognitive states affect problem solving and learning,” said Sameer Sonkusale, professor of electrical and computer engineering. “We didn’t want measurement to create an additional source of stress, so we thought, can we make a sensing device that becomes part of your day-to-day routine? Cortisol is a stress marker found in saliva, so flossing seemed like a natural fit to take a daily sample.”

The result: “a saliva-sensing dental floss looks just like a common floss pick, with the string stretched across two prongs extending from a flat plastic handle, all about the size of your index finger.”

It uses a technology called electropolymerized molecularly imprinted polymers (eMIPs) to detect the cortisol. “The eMIP approach is a game changer,” said Professor Sonkusale. “Biosensors have typically been developed using antibodies or other receptors that pick up the molecule of interest. Once a marker is found, a lot of work has to go into bioengineering the receiving molecule attached to the sensor. eMIP does not rely on a lot of investment in making antibodies or receptors. If you discover a new marker for stress or any other disease or condition, you can just create a polymer cast in a very short period of time.”

The sensor is designed to track rather than to diagnose, but the scientists are optimistic that the approach can be used to track other conditions, such as estrogen for fertility tracking, glucose for diabetes monitoring, or markers for cancer. They also hope to develop a sensor that can track multiple conditions, “for more accurate monitoring of stress, cardiovascular disease, cancer, and other conditions.”

They believe that their sensor has comparable accuracy to the best performing sensors currently available, and are working on a start-up to commercialize their approach.

Nano-scale biosensor: Flossing is all well and good, but many of us are not as diligent about it as we should be, so, hey, what about sensors inside us that do the tracking without us having to do anything? That’s what a team at Stanford is suggesting in A biochemical sensor with continuous extended stability in vivo, published in Nature.

The researchers say:

The development of biosensors that can detect specific analytes continuously, in vivo, in real time has proven difficult due to biofouling, probe degradation and signal drift that often occur in vivo. By drawing inspiration from intestinal mucosa that can protect host cell receptors in the presence of the gut microbiome, we develop a synthetic biosensor that can continuously detect specific target molecules in vivo.

“We needed a material system that could sense the target while protecting the molecular switches, and that’s when I thought, wait, how does biology solve this problem?” said Yihang Chen, the first author of the paper. Their modular biosensor, called the Stable Electrochemical Nanostructured Sensor for Blood In situ Tracking (SENSBIT) system, can survive more than a week in live rats and a month in human serum.

Continue reading…

High-Profile Start-Ups Inato And Prenosis Show AI ‘Best Practice’

By MICHAEL MILLENSON

Treating artificial intelligence as just one ingredient in a business success recipe was a prominent theme at the MedCity INVEST 2025 conference, with this AI “best practice” advice epitomized by high-profile start-ups Inato and Prenosis.

“You need to build a business model that makes sense, then use AI,” cautioned Raffi Boyajian, principal at CIGNA Ventures and a panelist at the MedCity INVEST 2025 conference in Chicago.

That sentiment was echoed and emphasized by fellow investors Aman Shah, vice president of new ventures at VNS Health, and Dipa Mehta, managing partner of Valeo Ventures. Both emphasized the necessity in a tough economic environment to find a “burning platform” that could immediately boost a customer’s bottom line.

In a separate panel, high-profile start-ups Inato and Prenosis accentuated that AI approach.

Innovation Customers Need

Inato was named by Fast Company magazine as one of the Most Innovative Companies of 2024, and that same year chosen by Fierce Healthcare as one of its Fierce 15. The Paris-based company connects drugmakers with otherwise hard-to-enroll patients for clinical trials by means of an AI-based platform that has attracted more than 3,000 community research sites in over 70 countries. By making clinical trials “more accessible, inclusive, and efficient,” in the company’s words, breaking a shocking pattern where 96% of trials do not include a representative population, Inato has established partnerships with more than a third of the top 30 pharmaceutical firms.

In describing its technology, Inato says it “assembled an AI agent to de-identify patient records, quickly determine which trials are relevant to each patient and evaluate patients against inclusion and exclusion criteria to assess eligibility” accurately and at scale. However, that phrase, “assembled an AI agent,” obscures a subtler process.

Liz Beatty, Inato’s co-founder and chief strategy officer, described using “off-the-shelf” large language models like ChatGPT and Claude and then optimizing them for a particular process with algorithms attuned to each model. As new models appear, the company adjusts accordingly. Although Beatty did not offer an analogy, there seemed an obvious parallel to a chef choosing the right ingredients in the right proportions to ensure a recipe’s success.
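Inato has not published its pipeline, so the sketch below is only an illustration of the general pattern Beatty describes: wrap an off-the-shelf LLM with a process-specific prompt, and keep the model call swappable so a newer model can be dropped in as it appears. The criteria, patient summary, and helper names here are hypothetical.

```python
# Illustrative only: Inato's actual system is proprietary. This shows the
# general "off-the-shelf LLM plus process-specific prompting" pattern, with a
# swappable model call so new models can be substituted as they improve.
from typing import Callable

def build_eligibility_prompt(patient_summary: str, criteria: list[str]) -> str:
    criteria_list = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are screening a de-identified patient record for clinical trial "
        "eligibility. For each criterion, answer MET, NOT MET, or UNCLEAR, and "
        "quote the part of the record that supports your answer.\n\n"
        f"Criteria:\n{criteria_list}\n\nPatient record:\n{patient_summary}"
    )

def screen_patient(patient_summary: str, criteria: list[str],
                   llm: Callable[[str], str]) -> str:
    """`llm` is any text-in/text-out model call (ChatGPT, Claude, a local model...)."""
    return llm(build_eligibility_prompt(patient_summary, criteria))

# Example with a stub in place of a real API call:
stub_llm = lambda prompt: "1. Age 18-75: MET ...\n2. No prior dialysis: UNCLEAR ..."
print(screen_patient("62-year-old with type 2 diabetes, eGFR 55, no dialysis noted.",
                     ["Age 18-75", "No dialysis in the past 6 months"], stub_llm))
```

The model-specific tuning Beatty mentions would live in how the prompt is built and how the answers are parsed and validated for each model, which is exactly the part a vendor would keep proprietary.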

Said Beatty, “I hear, ‘Let’s apply AI to everything.’ That’s not the right answer.” Investors are convinced enough that Inato does have the right answer that they’ve poured in $38.2 million, according to Pitchbook.

AI has also been central to the success of Prenosis. The company’s Sepsis ImmunoScore was the first Food and Drug Administration-approved tool using AI to predict the imminent onset of an often-deadly condition known as sepsis. Integrated into the clinical workflow, it was hailed by Time magazine as one of “the best inventions of 2024,” while Bobby Reddy Jr., Prenosis co-founder and chief executive officer, was subsequently named to the Time100 Health List recognizing influential individuals in global health.

Chicago-based Prenosis describes itself as an artificial intelligence company tailoring therapy to individual patient biology as part of “a new era of precision medicine.” As with Inato, though, the AI headline hides a more complex reality.

Sepsis is a heterogeneous syndrome with close to 200 different symptoms possibly at play. “AI brings it together so we can understand the process of deterioration,” Reddy said. The company used machine learning to develop and validate a sophisticated algorithm, according to a New England Journal of Medicine study.

But the right AI was only one product ingredient. Prenosis also assembled a database of thousands of patients and set up a “wet lab” to find sepsis biomarkers – and to use for other conditions as the company expands its offerings – based on what is now 120,000 blood samples. Adding biomarkers to EHR data enabled the company to position itself as a more accurate, real-time complement to the sepsis tool Epic provides free to hospitals using its EHR.

“That’s our competitive advantage,” Reddy said.

Focused AI

Just as Inato focused on AI for its specific purposes, Prenosis also focused on a crucial goal. The AI was used “first and foremost to fit the FDA model for approval,” said Reddy.

Sepsis is caused by an overactive immune response to infection. It costs the U.S. health care system billions of dollars annually while claiming the lives of at least 350,000 people – more than all cancers combined, according to the Prenosis website. The World Health Organization has labeled sepsis a threat to global health, and the economic impact of just this one condition amounts to an average 2.7% of a nation’s health care costs, according to a 2022 study.

Unmentioned by Reddy at the INVEST conference was that a U.S. hospital’s performance in preventing and effectively treating sepsis is a factor in value-based payment by Medicare and in the hospital patient safety score published by the Leapfrog Group. A “burning platform,” indeed.

For Prenosis and Inato alike, AI best practice is based on practicality. As Reddy put it, AI is “just a tool” in product development.

Michael L. Millenson is president of Health Quality Advisors & a regular THCB Contributor. This first appeared in his column at Forbes.

This One Weird Trick Can Fix U.S. Healthcare

By OWEN TRIPP

Creating a healthcare experience that builds trust and delivers value to people and purchasers isn’t a quick fix, but it’s the only way to reverse the downward spiral of high costs and poor outcomes

Entrepreneurs like to say the U.S. healthcare system is “broken,” usually right before they explain how they intend to fix it. I have a slightly different diagnosis.

The U.S. healthcare system is the gold standard. Our institutions and enterprises, ranging from 200-year-old academic medical centers to digital health startups, are the clear world leaders in clinical expertise, research, innovation, and technology. Capabilities-wise, the system is far from broken.

What’s broken is trust in the system, because of the glaring gap between what the system is capable of and what it actually delivers. Every day across the country, people drive past world-class hospitals, but then have to wait months for a primary care appointment. They deduct hundreds for healthcare from each paycheck, only to be told at the pharmacy that their prescription isn’t covered. While waiting for a state-of-the-art scan, they’re handed a clipboard and asked to recap their medical history.

This whipsaw experience isn’t due to incompetence or poor infrastructure. It’s the product of the dysfunction between the two biggest players in healthcare: providers and insurers, two entities that have optimized the hell out of their respective businesses, in opposition to one another, and inadvertently at the expense of people.

Historically, hospitals and health systems — including those 200-year-old AMCs — have dedicated themselves fully to improving and saving lives. I’m not saying they’ve lost sight of this, but until recently, margin took a back seat to mission. With industry consolidation and the persistence of the fee-for-service model, however, providers have been forced to maximize the volume of care at the highest possible unit cost, which in turn has become a main driver of the out-of-control cost trend at large.

This push from providers has prompted an equal-and-opposite reaction from insurers. Though the industry has been villainized (rightly, in some cases) for a heavy-handed approach to utilization management and prior authorization, insurers are merely doing what their primary customers — private employers — have hired them to do: manage cost. Insurers have gotten very good at it, not just by limiting care, but also through product innovation that has created more tiers and cost-sharing options for plan sponsors.

Meanwhile, healthcare consumers (people!) have been sidelined amid this tug-of-war. Doctors and hospitals say they’re patient-centered, and insurers say they’re member-centric — but the jargon is a dead giveaway. Each side is focused on their half of the pie, and neither is accountable for the whole person: the person receiving care and paying for care, not to mention navigating everything in between.

It should come as no surprise that trust is falling. Only 56% of Americans trust their health insurer to act in their best interest. Even trust in doctors — the good guys — has plummeted. In a startling reversal from just four years ago, a whopping 76% of people believe hospitals care more about revenue than patient care.

[Chart: Loss of Trust in Healthcare Providers. Hospitals in the U.S. are mostly focused on: caring for patients vs. making money. Source: Jarrard/Chartis (2025)]

This trust deficit is the root cause of so many healthcare problems. It’s the reason people disengage, delay and skip care, and end up in the ER or OR for preventable issues. When a good chunk of the population falls into this cycle, as they have, you end up with the status quo: unrelenting costs and deteriorating outcomes that are dragging down households, businesses, and the industry itself.

There’s no quick fix. Despite what my fellow entrepreneurs might say, no one point solution or technology (no, not even AI) can rebuild trust. The only way to reverse the downward spiral is by serving up a modern experience that is genuinely designed around people’s needs.

Continue reading…

How to Buy and Sell AI in Health Care? Not Easy.

By MATTHEW HOLT

It was not so long ago that you could create one of those maps of health care IT or digital health and be roughly right. I did it myself back in the Health 2.0 days, including the old subcategories of the “Rebel Alliance of New Provider Technologies” and the “Frontier of Patient Empowerment Technologies.”

But those easy days of matching a SaaS product to the intended user, and differentiating it from others are gone. The map has been upended by the hurricane that is generative AI, and it has thrown the industry into a state of confusion.

For the past several months I have been trying to figure out who is going to do what in AI health tech. I’ve had lots of formal and informal conversations, read a ton, and been to three conferences in the past few months, all focused squarely on this topic. And it’s clear no one has a good answer.

Of course this hasn’t stopped people trying to draw maps like this one from Protege. As you can tell there are hundreds of companies building AI first products for every aspect of the health care value (or lack of it!) chain.

But this time it’s different. It’s not at all clear that AI will stop at the border of a user or even have a clearly defined function. It’s not even clear that there will be an “AI for Health Tech” sector.

This is a multi-dimensional issue.

The main AI LLMs–ChatGPT (OpenAI/Microsoft), Gemini (Google/Alphabet), Claude (Anthropic/Amazon), Grok (X/Twitter), Llama (Meta/Facebook)–are all capable of incredible work inside health care and of course outside it. They can now write in any language you like, code, and create movies, music, and images, and they are all getting better and better.

And they are fantastic at interpretation and summarization. I literally dumped a pretty incomprehensible, dense 26-page CMS RFI document into ChatGPT the other day and in a few seconds it told me what they asked for and what they were actually looking for (that unwritten subtext). The CMS official who authored it was very impressed and was a little upset they weren’t allowed to use it. If I had wanted to help CMS, it would have written the response for me too.
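Holt doesn’t describe his exact steps, but the workflow is easy to reproduce. Here is a minimal sketch using the pypdf library and OpenAI’s Python client; the file name, model choice, and prompt wording are my own assumptions, not his.

```python
# Minimal sketch of the "dump a dense RFI into an LLM and ask what it really
# wants" workflow described above. File name, model, and prompt are assumptions.
from pypdf import PdfReader
from openai import OpenAI  # expects OPENAI_API_KEY in the environment

def summarize_rfi(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    full_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable general-purpose model would do
        messages=[{
            "role": "user",
            "content": (
                "Summarize this CMS RFI in plain English. List (1) what is "
                "explicitly being requested and (2) what the agency seems to be "
                "looking for between the lines.\n\n" + full_text
            ),
        }],
    )
    return response.choices[0].message.content

print(summarize_rfi("cms_rfi.pdf"))  # hypothetical local copy of the RFI
```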

The big LLMs are also developing “agentic” capabilities. In other words, they are able to conduct multistep business and human processes.

Right now they are being used directly by health care professionals and patients for summaries, communication and companionship. Increasingly they are being used for diagnostics, coaching and therapy. And of course many health care organizations are using them directly for process redesign.

Meanwhile, the core workhorses of health care are the EMRs used by providers, and the biggest kahuna of them all is Epic. Epic has a relationship with Microsoft, which has its own AI play and also its own strong relationship with OpenAI – or at least as strong as investing $13bn in a non-profit will make your relationship. Epic is now using Microsoft’s AI both in note summaries, patient communications and the like, and also using DAX, the ambient AI scribe from Microsoft’s subsidiary Nuance. Epic also has a relationship with DAX rival Abridge.

But that’s not necessarily enough, and Epic is clearly building its own AI capabilities. In an excellent review over at Health IT Today, John Lee breaks down Epic’s non-trivial use of AI in its clinical workflow:

  • The platform now offers tools to reorganize text for readability; generate succinct, patient-friendly summaries, hospital course summaries, and discharge instructions; and even translate discrete clinical data into narrative instructions.
  • We will be able to automatically destigmatize language in notes (e.g., changing “narcotic abuser” to “patient has opiate use disorder”).
  • Even as a physician, I sometimes have a hard time deciphering the shorthand that my colleagues so frequently use. Epic showed how AI can translate obtuse medical shorthand – like “POD 1 sp CABG. HD stable. Amb w asst.” – into plain language: “Post op day 1 status post coronary bypass graft surgery. Hemodynamically stable. Patient is able to ambulate with assist.” (A rough prompt sketch for this kind of expansion follows this list.)
  • For nurses, ambient documentation and AI-generated shift notes will be available, reducing manual entry and freeing up time for patient care.
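Epic’s implementation is not public, but shorthand expansion like the example above is, at heart, a prompting exercise. Here is a rough, hypothetical few-shot prompt sketch for the task; the worked example comes from Lee’s review, while the helper function and the second shorthand note are mine.

```python
# Hypothetical prompt sketch for shorthand expansion; Epic's actual approach is
# not public. The worked example is the one quoted in John Lee's review.
FEW_SHOT = [(
    "POD 1 sp CABG. HD stable. Amb w asst.",
    "Post op day 1 status post coronary bypass graft surgery. "
    "Hemodynamically stable. Patient is able to ambulate with assist.",
)]

def shorthand_prompt(note: str) -> str:
    examples = "\n\n".join(f"Shorthand: {s}\nPlain language: {p}" for s, p in FEW_SHOT)
    return (
        "Expand clinical shorthand into plain language without adding or "
        "dropping any clinical facts.\n\n"
        f"{examples}\n\nShorthand: {note}\nPlain language:"
    )

# Any chat-capable LLM client could consume this prompt, for example:
#   expanded = my_llm(shorthand_prompt("Pt c/o SOB, afebrile, sats 94% RA."))
print(shorthand_prompt("Pt c/o SOB, afebrile, sats 94% RA."))
```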

And of course Epic isn’t the only EHR (honestly!). Its competitors aren’t standing still. Meditech’s COO Helen Waters gave a wide-ranging interview to HISTalk. I paid particular attention to her discussion of their work with Google in AI and I am quoting almost all of it:

This initial product was built off of the BERT language model. It wasn’t necessarily generative AI, but it was one of their first large language models. The feature in that was called Conditions Explorer, and that functionality was really a leap forward. It was intelligently organizing the patient information directly from within the chart, and as the physician was working in the chart workflow, offering both a longitudinal view of the patient’s health by specific conditions and categorizing that information in a manner that clinicians could quickly access relevant information to particular health issues, correlated information, making it more efficient in informed decision making.  <snip>

Beyond that, with the Vertex AI platform and certainly multiple iterations of Gemini, we’ve walked forward to offer additional AI offerings in the category of gen AI, and that includes both a physician hospital course-of-stay narrative at the end of a patient’s time in the hospital to be discharged. We actually generate the course-of-stay, which has been usually beneficial for docs to not have to start to build that on their own.

We also do the same for nurses as they switch shifts. We give a nurse shift summary, which basically categorizes the relevant information from the previous shift and saves them quite a bit of time. We are using the Vertex AI platform to do that. And in addition to everyone else under the sun, we have obviously delivered and brought live ambient scribe capabilities with AI platforms from a multitude of vendors, which has been successful for the company as well.

The concept of Google and the partnership remains strong. The results are clear with the vision that we had for Expanse Navigator. The progress continues around the LLMs, and what we’re seeing is great promise for the future of these technologies helping with administrative burdens and tasks, but also continued informed capacities to have clinicians feel strong and confident in the decisions they’re making. 

The voice capabilities in the concept of agentic AI will clearly go far beyond ambient scribing, which is both exciting and ironic when you think about how the industry started with a pen way back when, we took them to keyboards, and then we took them to mobile devices, where they could tap and swipe with tablets and phones. Now we’re right back to voice, which I think will be pleasing provided it works efficiently and effectively for clinicians.


So if you read–not even between the lines but just what they are saying–Epic, which dominates AMCs and big non-profit health systems, and Meditech, the EMR for most big for-profit systems like HCA, are both building AI into their platforms for almost all of the workflow that most clinicians and administrators use.

I raised this issue a number of different ways at a meeting hosted by Commure, the General Catalyst-backed provider-focused AI company. Commure has been through a number of iterations in its 8-year life, but it is now an AI platform on which it is building several products or capabilities. (For more, here’s my interview with CEO Tannay Tandon.) These include (so far!) administration, revenue cycle, inventory and staff tracking, ambient listening/scribing, clinical workflow, and clinical summarization. You can bet there’s more to come via development or acquisition. In addition, Commure is doing this not only with the deep-pocketed backing of General Catalyst but also with partial ownership from HCA–incidentally Meditech’s biggest client. That means HCA has to figure out what Commure is doing compared to Meditech.

Finally, there’s also a ton of AI activity using the big LLMs internally within AMCs and in providers, plans, and payers generally. Don’t forget that all these players have heavily customized many of the tools (like Epic) that external vendors have sold them. They are also making their AI vendors “forward deploy” engineers to customize their AI tools to the clients’ workflow. But they are also building stuff themselves. For instance, Stanford just released a homegrown product that uses AI to communicate lab results to patients. Not bought from a vendor, but developed internally using Anthropic’s Claude LLM. There are dozens and dozens of these homegrown projects happening in every major health care enterprise. All those data scientists have to keep busy somehow!
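Stanford has not released that code, so the fragment below is only a guess at the general shape of such a homegrown tool. It uses Anthropic’s Python SDK because the article says the product was built on Claude; the lab values, prompt wording, and model name are all hypothetical.

```python
# Hypothetical sketch of a lab-results-to-patient-message generator in the
# spirit of the Stanford project (their actual code is not public).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def explain_labs(lab_results: dict[str, str], reading_level: str = "8th grade") -> str:
    labs = "\n".join(f"{name}: {value}" for name, value in lab_results.items())
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # model choice is an assumption
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": (
                f"Draft a {reading_level} reading-level message explaining these "
                "lab results to a patient. Be accurate and reassuring, avoid "
                "jargon, and make clear the care team will answer questions.\n\n"
                + labs
            ),
        }],
    )
    return message.content[0].text

print(explain_labs({"Hemoglobin A1c": "6.1% (slightly above the 5.7% target)"}))
```

In a real deployment the interesting work would sit around this call: EHR integration, clinician review before anything is sent, safety guardrails, and audit logging.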

So what does that say about the role of AI?

First it’s clear that the current platforms of record in health care–the EHRs–are viewing themselves as massive data stores and are expecting that the AI tools that they and their partners develop will take over much of the workflow currently done by their human users.

Second, the law of tech has usually been that water flows downhill. More and more companies and products end up becoming features on other products and platforms. You may recall that there used to be separate software for writing (WordPerfect), presentations (Persuasion), and spreadsheets (Lotus 1-2-3), and now there is MS Office and Google Suite. Last month a company called Brellium raised $16m from presumably very clever VCs to summarize clinical notes and analyze them for compliance. Now watch them prove me wrong, but doesn’t it seem that everyone and their dog has already built AI to summarize and analyze clinical notes? Can’t one more analysis for compliance be added on easily? It’s a pretty good bet that this functionality will be part of some bigger product very soon.

(By the way, one area that might be distinct is voice conversation, which right now does seem to have a separate set of skills and companies working in it because interpreting human speech and conversing with humans is tricky. Of course that might be a temporary “moat” and these companies or their products may end up back in the main LLM soon enough). 

Meanwhile, Vince Kuraitis, Girish Muralidharan, and the late Jody Ranck just wrote a 3-part series on how the EMR is moving anyway towards becoming a bigger unified digital health platform, which suggests that the clinical part of the EMR will be integrated with all the other process stuff going on in health systems. Think staffing, supplies, finance, marketing, etc. And of course there’s still the ongoing integration between EMRs and medical devices and sensors across the hospital and eventually the wider health ecosystem.

So this integration of data sets could quickly lead to an AI dominated super system in which lots of decisions are made automatically (e.g. AI tracking care protocols as Robbie Pearl suggested on THCB a while back), while some decisions are operationally made by humans (ordering labs or meds, or setting staffing schedules) and finally a few decisions are more strategic. The progress towards deep research and agentic AI being made by the big LLMs has caused many (possibly including Satya Nadella) to suggest that SaaS is dead. It’s not hard to imagine a new future where everything is scraped by the AI and agents run everything globally in a health system.

This leads to a real problem for every player in the health care ecosystem.

If you are buying an AI system, you don’t know if the application or solution you are buying is going to be cannibalized by your own EHR, or by something that is already being built inside your organization.

If you are selling an AI system, you don’t know if your product is a feature of someone else’s AI, or if the skill is in the prompts your customers want to develop rather than in your tool. And worse, there’s little penalty in your potential clients waiting to see if something better and cheaper comes along.

And this is happening in a world in which there are new and better LLM and other AI models every few months.

I think for now the issue is that, until we get a clearer understanding of how all this plays out, there will be lots of false starts, funding rounds that don’t go anywhere, and AI implementations that don’t achieve much. Reports like the one from Sofia Guerra and Steve Kraus at Bessemer may help, laying out 59 “jobs to be done.” I’m just concerned that no one will be too sure what the right tool for the job is.

Of course I await my robot overlords telling me the correct answer.

Matthew Holt is the Publisher of THCB

Patrick Quigley, Sidecar Health

Patrick Quigley is the CEO of Sidecar Health. It’s a start-up health insurance company that has a new approach to how employers and employees buy health care. Sidecar is betting on the radical pricing transparency idea. Instead of going down the contracting and narrow network route, Sidecar presents average area pricing and individual provider pricing to its members, and rewards them if they go to lower-cost providers. How does this all work and is it real? Patrick took me through an extensive demo and explained how this all works. There’s a decent amount of complexity behind the scenes, but Sidecar is creating something very rare in America: a priced health care market allowing consumers to choose–Matthew Holt

And Now for Some Fun Future

By KIM BELLARD

I feel like I’ve been writing a lot about futures I was pretty worried about, so I’m pleased to have a couple developments to talk about that help remind me that technology is cool and that healthcare can surely use more of it.

First up is a new AI algorithm called FaceAge, published last week in The Lancet Digital Health by researchers at Mass General Brigham. What it does is use photographs to determine biological age – as opposed to chronological age. We all know that different people seem to age at different rates – I mean, honestly, how old is Paul Rudd??? – but until now the link between how people look and their health status was intuitive at best.

Moreover, the algorithm can help determine survival outcomes for various types of cancer.

The researchers trained the algorithm on almost 59,000 photos from public databases, then tested it on photos of 6,200 cancer patients taken prior to the start of radiotherapy. Cancer patients appeared to FaceAge to be some five years older than their chronological age. “We can use artificial intelligence (AI) to estimate a person’s biological age from face pictures, and our study shows that information can be clinically meaningful,” said co-senior and corresponding author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) program at Mass General Brigham.

Curiously, the algorithm doesn’t seem to care about whether someone is bald or has grey hair, and may be using more subtle clues, such as muscle tone. It is unclear what difference makeup, lighting, or plastic surgery makes. “So this is something that we are actively investigating and researching,” Dr. Aerts told The Washington Post. “We’re now testing in various datasets [to see] how we can make the algorithm robust against this.”

Moreover, it was trained primarily on white faces, which the researchers acknowledge as a deficiency. “I’d be very worried about whether this tool works equally well for all populations, for example women, older adults, racial and ethnic minorities, those with various disabilities, pregnant women and the like,” Jennifer E. Miller, the co-director of the program for biomedical ethics at Yale University, told The New York Times.  

The researchers believe FaceAge can be used to better estimate survival rates for cancer patients. It turns out that when physicians try to gauge survival simply by looking, their guess is essentially a coin toss. When paired with FaceAge’s insights, accuracy can go up to about 80%.

Dr. Aerts says: “This work demonstrates that a photo like a simple selfie contains important information that could help to inform clinical decision-making and care plans for patients and clinicians. How old someone looks compared to their chronological age really matters—individuals with FaceAges that are younger than their chronological ages do significantly better after cancer therapy.”

I’m especially thrilled about this because ten years ago I speculated about using selfies and facial recognition AI to determine if we had conditions that were prematurely aging us, or even if we were just getting sick. It appears the Mass General Brigham researchers agree. “This opens the door to a whole new realm of biomarker discovery from photographs, and its potential goes far beyond cancer care or predicting age,” said co-senior author Ray Mak, MD, a faculty member in the AIM program at Mass General Brigham. “As we increasingly think of different chronic diseases as diseases of aging, it becomes even more important to be able to accurately predict an individual’s aging trajectory. I hope we can ultimately use this technology as an early detection system in a variety of applications, within a strong regulatory and ethical framework, to help save lives.”

The researchers acknowledge that much has to be accomplished before it is introduced for commercial purposes, and that strong oversight will be needed to ensure, as Dr. Aerts told WaPo, “these AI technologies are being used in the right way, really only for the benefit of the patients.” As Daniel Belsky, a Columbia University epidemiologist, told The New York Times: “There’s a long way between where we are today and actually using these tools in a clinical setting.”

The second development is even more out there. Let me break down the CalTech News headline: “3D Printing.” OK, you’ve got my attention. “In Vivo.” Color me highly intrigued. “Using Sound.” Mind. Blown.

That’s right. This team of researchers has “developed a method for 3D printing polymers at specific locations deep within living animals.”

Continue reading…