
Tag: AI

Owen Tripp, Included Health, talks AI

“So far AI in health care is being used to drive existing profits on workflows and increase revenue per event that patients in the end have to pay for. That’s not a win for anyone long term!” Included Health’s CEO Owen Tripp dives into the present and future use of AI, LLMs, patient self-triage and self-treatment, and all that. Another interesting conversation on where patient-facing AI will end up — Matthew Holt

BTW here’s my Conversation with Ami Parekh & Ankoor Shah

Here’s Owen Tripp discussing Included Health.

Here’s Owen’s piece on AI, What’s in your chatbot?

Après AI, le Déluge

By KIM BELLARD

I have to admit, I’ve steered away from writing about AI lately. There’s just so much going on, so fast, that I can’t keep up. Don’t ask me how GPT-5 differs from GPT-4, or what Gemini does versus Genie 3. I know Microsoft really, really wants me to use Copilot, but so far I’m not biting. DeepMind versus DeepSeek?  Is Anthropic the French AI, or is that Mistral?  I’m just glad there are younger, smarter people paying closer attention to all this.

Still, I’m very much concerned about where the AI revolution is taking us, and whether we’re driving it or just along for the ride. In Fast Company, Sebastian Buck, co-founder of the “future design company” Enso, posits a great attitude about the AI revolution:

The scary news is: We have to redesign everything.

The exciting news is: We get to redesign everything.

He goes on to explain:

Technical revolutions create windows of time when new social norms are created, and where institutions and infrastructure are rethought. This window of time will influence daily life in myriad ways, from how people find dates, to whether kids write essays, to which jobs require applications, to how people move through cities and get health diagnoses.

Each of these are design decisions, not natural outcomes. Who gets to make these decisions? Every company, organization, and community that is considering if—and how—to adopt AI. Which almost certainly includes you. Congratulations, you’re now part of designing a revolution.

I want to pick out one area in particular where I hope we redesign everything intentionally, rather than in our normal short-sighted, laissez-faire manner: jobs and wealth.

It has become widely accepted that offshoring led to the demise of U.S. manufacturing and its solidly middle class blue collar jobs over the last 30 years. There’s some truth to that, but automation was arguably more of a factor – and that was before AI and today’s more versatile robots. More to the point, today’s AI and robots aren’t coming just to manufacturing but pretty much to every sector.

Former Transportation Secretary Pete Buttigieg warned:

The economic implications are the ones that I think could be the most disruptive, the most quickly. We’re talking about whole categories of jobs, where — not in 30 or 40 years, but in three or four — half of the entry-level jobs might not be there. It will be a bit like what I lived through as a kid in the industrial Midwest when trade and automation sucked away a lot of the auto jobs in the nineties — but ten times, maybe a hundred times more disruptive.

Mr. Buttigieg is no AI expert, but Erik Brynjolfsson, senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence and director of the Stanford Digital Economy Lab, is. When asked about those comments, he told Morning Edition: “Yeah, he’s spot on. We are seeing enormous advances in core technology and very little attention is being paid to how we can adapt our economy and be ready for those changes.”

You could look, for example, at the big layoffs in the tech sector lately. Natasha Singer, writing in The New York Times, reports on how computer science graduates have gone from expecting mid-six-figure starting salaries to working at Chipotle (and wait till Chipotle automates all those jobs). The Federal Reserve Bank of New York says the unemployment rate for computer science and computer engineering majors is lower than for anthropology majors but, astonishingly, higher than for pretty much all other majors.

And don’t just feel sorry for tech workers. Neil Irwin of Axios warns: “In the next job market downturn — whether it’s already starting or years away — there just might be a bloodbath for millions of workers whose jobs can be supplanted by artificial intelligence.” He quotes Federal Reserve governor Lisa Cook: “AI is poised to reshape our labor market, which in turn could affect our notion of maximum employment or our estimate of the natural rate of unemployment.”

In other words, you ain’t seen nothing yet.

While manufacturing was taking a beating in the U.S. over the last thirty years, tech boomed. Most of the world’s largest and most profitable companies are tech companies, and most of the world’s richest people got their wealth from tech. Those are, by and large, the ones investing most heavily in AI — and the most likely to benefit from it.

Professor Brynjolfsson worries about how we’ll handle the transition to an AI economy:

The ideal thing is that you find ways of compensating people and managing a transition. Sad to say, with trade, we didn’t do a very good job of that. A lot of people got left behind. It would be a catastrophe if we made a similar mistake with technology, [which] also is going to create enormous amounts of wealth, but it’s not going to affect everyone evenly. And we have to make sure that people manage that transition.

“Catastrophe” indeed. And I fear it is coming.

Continue reading…

China Goes “Democratic” on Artificial General Intelligence

By MIKE MAGEE

Last week, following a visit to the White House, Jensen Huang instigated a wholesale reversal of policy from Trump, who had been blocking Nvidia’s sales of its H20 chip to China. What did Jensen say?

We can only guess, of course. But he likely shared the results of a proprietary report from noted AI researchers at Digital Science that suggested an immediate policy course correction was critical. Beyond the fact that over 50% of all AI researchers are currently based in China, their study documented that “In 2000, China-based scholars produced just 671 AI papers, but in 2024 their 23,695 AI-related publications topped the combined output of the United States (6,378), the United Kingdom (2,747), and the European Union (10,055).”

David Hook, CEO of Digital Science, was declarative in the opening of the report, stating “U.S. influence in AI research is declining, with China now dominating.”

China now supports about 30,000 AI researchers compared to only 10,000 in the US. And that number is shrinking thanks to US tariff and visa shenanigans, and overt attacks by the administration on our premier academic institutions.

Economics professors David Autor (MIT) and Gordon Hanson (Harvard), known for “their research into how globalization, and especially the rise of China, reshaped the American labor market,” famously described the elements of “China Shock 1.0” in 2013. It was “a singular process—China’s late-1970s transition from Maoist central planning to a market economy, which rapidly moved the country’s labor and capital from collective rural farms to capitalist urban factories.”

As a result, a quarter of all US manufacturing jobs disappeared between 1999 and 2007. Today China’s manufacturing workforce tops 100 million, dwarfing the US manufacturing job count of 13 million. Those numbers peaked a decade ago, when China’s supply of low-cost labor crested. But these days China is clearly looking forward while this administration and its advisers are being left behind in the rear-view mirror.

Welcome to “China Shock 2.0” wrote Autor and Hanson in a recent New York Times editorial. But this time, their leaders are focusing on “key technologies of the 21st century…(and it) will last for as long as China has the resources, patience and discipline to compete fiercely.”

The highly respected Australian Strategic Policy Institute, funded by Australia’s Defence Department, has been tracking the volume of published innovative technology research in the US and China for over a quarter century. They see this as a measure of expert opinion on where the greatest innovations are originating. In 2007, the US had led China over the prior four years in 60 of 64 “frontier technologies.”

Two decades later, the tables have turned, with China well ahead of the US in 57 of 64 categories measured.

Continue reading…

Healthcare AI: What’s in your chatbot?

By OWEN TRIPP

So much of the early energy around generative AI in healthcare has been geared toward speed and efficiency: freeing doctors from admin tasks, automating patient intake, streamlining paperwork-heavy pain points. This is all necessary and helpful, but much of it boils down to established players optimizing the existing system to suit their own needs. As consumers flock to AI for healthcare, their questions and needs highlight the limits of off-the-shelf bots — and the pent-up demand for no judgment, all-in-one, personalized help.

Transforming healthcare so that it actually works for patients and consumers — ahem, people — requires more than incumbent-led efficiency. Generative AI will be game-changing, no doubt, but only when it’s embedded and embraced as a trusted guide that steers people toward high-quality care and empowers them to make better decisions.

Upgrading Dr. Google

From my vantage point, virtual agents and assistants are the most important frontier in healthcare AI right now — and in people-centered healthcare, period. Tens of millions of people (especially younger generations) are already leaning into AI for help with health and wellness, testing the waters of off-the-shelf apps and tools like ChatGPT.

You see, people realize that AI isn’t just for polishing emails and vacation itineraries. One-fifth of adults consult AI chatbots with health questions at least once a month (and given AI’s unprecedented adoption curve, we can assume that number is rising by the day). For most, AI serves as a souped-up, user-friendly alternative to search engines. It offers people a more engaging way to research symptoms, explore potential treatments, and determine if they actually need to see a doctor or head to urgent care.

But people are going a lot deeper with chatbots than they ever did with Dr. Google or WebMD. Beyond the usual self-triage, the numbers tell us that up to 40% of ChatGPT users have consulted AI after a doctor’s appointment. They were looking to verify and validate what they’d heard. Even more surprising, after conferring with ChatGPT, a similar percentage then re-engaged with their doctor — to request referrals or tests, ask for changes to medications, or schedule a follow-up.

These trends highlight AI’s enormous potential as an engagement tool, and they also suggest that people are defaulting to AI because the healthcare system is (still) too difficult and frustrating to navigate. Why are people asking ChatGPT how to manage symptoms? Because accessing primary and preventive care is a challenge. Why are they second-guessing advice and prescriptions? Sadly, they don’t fully trust their doctor, are embarrassed to speak up, or don’t have enough time to talk through their questions and concerns during appointments.

Chatbots have all the time in the world, and they’re responsive, supportive, knowledgeable, and nonjudgmental. This is the essence of the healthcare experience people want, need, and deserve, but that experience can’t be built with chatbots alone. AI has a critical role to play, to be sure, but to fulfill its potential it has to evolve well beyond off-the-shelf chatbot competence.

Chatbots 2.0

When it comes to their healthcare, the people currently flocking to mass-market apps like ChatGPT will inevitably realize diminishing returns. Though the current experience feels personal, the advice and information are ultimately very generic, built on the same foundation of publicly available data, medical journals, websites, and countless other sources. Even the purpose-built healthcare chatbots in the market today overwhelmingly rely on public data and outsourced AI models.

Generic responses and transactional experiences have inherent shortcomings. As we’ve seen with other health-tech advances, including 1.0 telehealth and navigation platforms, impersonal, one-off services driven primarily by in-the-moment need, efficiency, or convenience don’t equate to long-term value.

For chatbots to avoid the 1.0 trap, they need to do more than put the world’s medical knowledge at our fingertips.

Continue reading…

Watching Where and How You’re Walking

By MIKE MAGEE

In a speech to the American Philosophical Society in January 1946, J. Robert Oppenheimer said, “We have made a thing …that has altered abruptly and profoundly the nature of the world…We have raised again the question of whether science is good for man, of whether it is good to learn about the world, to try to understand it, to try to control it, to help give to the world of men increased insight, increased power.”

Eight decades later, those words reverberate, and we once again are at a seminal crossroads. This past week, Jensen Huang, the CEO of Nvidia, was everywhere, a remarkably skilled communicator celebrating the fact that his company was now the first publicly traded company to exceed a $4 trillion valuation.

As he explained, “We’ve essentially created a new industry for the first time in three hundred years. The last time there was an industry like this, it was a power generation industry…Now we have a new industry that generates intelligence…you can use it to discover new drugs, to accelerate diagnosis of disease…everybody’s jobs will be different going forward.”

Jensen, as I observed him perform on that morning show, seemed just a bit overwhelmed, awed, and perhaps even slightly frightened by the pace of recent change. “We reinvented computing for the first time since the ’60s, since IBM introduced the modern computer architecture… it’s able to accelerate applications from computer graphics to physics simulations for science to digital biology to artificial intelligence… in the last year, the technology has advanced incredibly fast… AI is now able to reason, it’s able to think… Before it was able to understand, it was able to generate content, but now it can reason, it can do research, it can learn about the latest information before it answers a question.”

Of course, this is hardly the first time technology has triggered flashing ethical warning lights. I recently summarized the case of Facial Recognition Technology (FRT). The US has the largest number of closed-circuit cameras in the world, at 15.28 per 100 people. On average, every American is caught on a closed-circuit camera 238 times a week, but experts say that’s nothing compared to where our “surveillance” society will be in a few years.

The field of FRT is on fire. 

Continue reading…

Roy Schoenberg, AileenAI

Last week longtime AmWell CEO Roy Schoenberg announced, in the New England Journal of Medicine no less, that he was building a companion AI for the elderly called Aileen. We took a dive into the state of play for digital health, what happened at AmWell, and what the goal is for the AI companion. It’s early days, but Roy has an interesting idea of how AI will work in the future as the underlying platform managing the elder consumer experience. Always a great conversation with Roy and this is no exception–Matthew Holt

How Did the AI “Claude” Get Its Name?

By MIKE MAGEE

Let me be the first to introduce you to Claude Elwood Shannon. If you have never heard of him but consider yourself informed and engaged, including at the interface of AI and Medicine, don’t be embarrassed. I taught a semester of “AI and Medicine” in 2024 and only recently was introduced to “Claude.”

Let’s begin with the fact that the product, Claude, is not the same as the person, Claude. The person died a quarter century ago and, except for those deep in the field of AI, has largely been forgotten – until now.

Among those in the know, Claude Elwood Shannon is often referred to as the “father of information theory.” He graduated from the University of Michigan in 1936, where he majored in electrical engineering and mathematics. At 21, as a Master’s student at MIT, he wrote a thesis titled “A Symbolic Analysis of Relay and Switching Circuits,” which those in the know claim was “the birth certificate of the digital revolution,” earning him the Alfred Noble Prize in 1939 (No, not that Nobel Prize).

None of this was particularly obvious in those early years. A University of Michigan biopic claims, “If you were looking for world changers in the U-M class of 1936, you probably would not have singled out Claude Shannon. The shy, stick-thin young man from Gaylord, Michigan, had a studious air and, at times, a playful smirk—but none of the obvious aspects of greatness. In the Michiganensian yearbook, Shannon is one more face in the crowd, his tie tightly knotted and his hair neatly parted for his senior photo.”

But that was one of the historic misreads of all time, according to his alma mater. “That unassuming senior would go on to take his place among the most influential Michigan alumni of all time—and among the towering scientific geniuses of the 20th century…It was Shannon who created the “bit,” the first objective measurement of the information content of any message—but that statement minimizes his contributions. It would be more accurate to say that Claude Shannon invented the modern concept of information. Scientific American called his groundbreaking 1948 paper, “A Mathematical Theory of Communication,” the “Magna Carta of the Information Age.”

I was introduced to “Claude” just 5 days ago by Washington Post technology columnist Geoffrey Fowler – Claude the product, not the person. His article, titled “5 AI bots took our tough reading test. One was smartest — and it wasn’t ChatGPT,” caught my eye. As he explained, “We challenged AI helpers to decode legal contracts, simplify medical research, speed-read a novel and make sense of Trump speeches.”

Judging the results of the medical research test was Scripps Research Translational Institute luminary Eric Topol. The five AI products were asked 115 questions on the content of two scientific research papers: “Three-year outcomes of post-acute sequelae of COVID-19” and “Retinal Optical Coherence Tomography Features Associated With Incident and Prevalent Parkinson Disease.”

Not to bury the lead, Claude – the product – won decisively, not only in science but also overall against four name-brand competitors I was familiar with – Google’s Gemini, OpenAI’s ChatGPT, Microsoft Copilot, and Meta AI. Which left me a bit embarrassed. How had I never heard of Claude the product?

For the answer, let’s retrace a bit of AI history.

Continue reading…

High-Profile Start-Ups Inato And Prenosis Show AI ‘Best Practice’

By MICHAEL MILLENSON

Treating artificial intelligence as just one ingredient in a business success recipe was a prominent theme at the MedCity INVEST 2025 conference, with this AI “best practice” advice epitomized by high-profile start-ups Inato and Prenosis.

“You need to build a business model that makes sense, then use AI,” cautioned Raffi Boyajian, principal at CIGNA Ventures and a panelist at the MedCity INVEST 2025 conference in Chicago.

That sentiment was echoed and emphasized by fellow investors Aman Shah, vice president of new ventures at VNS Health, and Dipa Mehta, managing partner of Valeo Ventures. Both emphasized the necessity in a tough economic environment to find a “burning platform” that could immediately boost a customer’s bottom line.

In a separate panel, high-profile start-ups Inato and Prenosis accentuated that AI approach.

Innovation Customers Need

Inato was named by Fast Company magazine as one of the Most Innovative Companies of 2024, and that same year chosen by Fierce Healthcare as one of its Fierce 15. The Paris-based company connects drugmakers with otherwise hard-to-enroll patients for clinical trials by means of an AI-based platform that has attracted more than 3,000 community research sites in over 70 countries. By making clinical trials “more accessible, inclusive, and efficient,” in the company’s words, breaking a shocking pattern where 96% of trials do not include a representative population, Inato has established partnerships with more than a third of the top 30 pharmaceutical firms.

In describing its technology, Inato says it “assembled an AI agent to de-identify patient records, quickly determine which trials are relevant to each patient and evaluate patients against inclusion and exclusion criteria to assess eligibility” accurately and at scale. However, that phrase, “assembled an AI agent,” obscures a subtler process.

Liz Beatty, Inato’s co-founder and chief strategy officer, described using “off-the-shelf” large language models like ChatGPT and Claude and then optimizing them for a particular process with algorithms attuned to each model. As new models appear, the company adjusts accordingly. Although Beatty did not offer an analogy, there seemed an obvious parallel to a chef choosing among the right ingredients in the right proportions to ensure a recipe’s success.

Said Beatty, “I hear, ‘Let’s apply AI to everything.’ That’s not the right answer.” Investors are convinced enough that Inato does have the right answer that they’ve poured in $38.2 million, according to Pitchbook.

AI has also been central to the success of Prenosis. The company’s Sepsis ImmunoScore was the first Food and Drug Administration-approved tool using AI to predict the imminent onset of an often-deadly condition known as sepsis. Integrated into the clinical workflow, it was hailed by Time magazine as one of “the best inventions of 2024,” while Bobby Reddy Jr., Prenosis co-founder and chief executive officer, was subsequently named to the Time100 Health List recognizing influential individuals in global health.

Chicago-based Prenosis describes itself as an artificial intelligence company tailoring therapy to individual patient biology as part of “a new era of precision medicine.” As with Inato, though, the AI headline hides a more complex reality.

Sepsis is a heterogeneous syndrome with close to 200 different symptoms possibly at play. “AI brings it together so we can understand the process of deterioration,” Reddy said. The company used machine learning to develop and validate a sophisticated algorithm, according to a New England Journal of Medicine study.

But the right AI was only one product ingredient. Prenosis also assembled a database of thousands of patients and set up a “wet lab” to find sepsis biomarkers – and to use for other conditions as the company expands its offerings – based on what is now 120,000 blood samples. Adding biomarkers to EHR data enabled the company to position itself as a more accurate, real-time complement to the sepsis tool Epic provides free to hospitals using its EHR.

“That’s our competitive advantage,” Reddy said.

Focused AI

Just as Inato focused on AI for its specific purposes, Prenosis also focused on a crucial goal. The AI was used “first and foremost to fit the FDA model for approval,” said Reddy.

Sepsis is caused by an overactive immune response to infection. It costs the U.S. health care system billions of dollars annually while claiming the lives of at least 350,000 people – more than all cancers combined, according to the Prenosis website. The World Health Organization has labeled sepsis a threat to global health, and the economic impact of just this one condition amounts to an average 2.7% of a nation’s health care costs, according to a 2022 study.

Unmentioned by Reddy at the INVEST conference was that a U.S. hospital’s performance in preventing and effectively treating sepsis is a factor in value-based payment by Medicare and in the hospital patient safety score published by the Leapfrog Group. A “burning platform,” indeed.

For Prenosis and Inato alike, AI best practice is based on practicality. As Reddy put it, AI is “just a tool” in product development.

Michael L. Millenson is president of Health Quality Advisors & a regular THCB Contributor. This first appeared in his column at Forbes

How to Buy and Sell AI in health care? Not Easy.

By MATTHEW HOLT

It was not so long ago that you could create one of those maps of health care IT or digital health and be roughly right. I did it myself back in the Health 2.0 days, including the old subcategories of the “Rebel Alliance of New Provider Technologies” and the “Frontier of Patient Empowerment Technologies.”

But those easy days of matching a SaaS product to the intended user, and differentiating it from others are gone. The map has been upended by the hurricane that is generative AI, and it has thrown the industry into a state of confusion.

For the past several months I have been trying to figure out who is going to do what in AI health tech. I’ve had lots of formal and informal conversations, read a ton, and been to three conferences in the past few months, all focused dead on this topic. And it’s clear no one has a good answer.

Of course this hasn’t stopped people trying to draw maps like this one from Protege. As you can tell there are hundreds of companies building AI first products for every aspect of the health care value (or lack of it!) chain.

But this time it’s different. It’s not at all clear that AI will stop at the border of a user or even have a clearly defined function. It’s not even clear that there will be an “AI for Health Tech” sector.

This is a multi-dimensional issue.

The main AI LLMs–ChatGPT (OpenAI/Microsoft), Gemini (Google/Alphabet), Claude (Anthropic/Amazon), Grok (X/Twitter), Llama (Meta/Facebook)–are all capable of incredible work inside of health care and of course outside it. They can now write in any language you like, write code, and create movies, music, and images, and they are all getting better and better.

And they are fantastic at interpretation and summarization. I literally dumped a pretty incomprehensible, dense 26-page CMS RFI document into ChatGPT the other day and in a few seconds it told me what they asked for and what they were actually looking for (that unwritten subtext). The CMS official who authored it was very impressed and was a little upset they weren’t allowed to use it. If I had wanted to help CMS, it would have written the response for me too.
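
To make that concrete, here is a minimal sketch of what such a summarization pass might look like using the OpenAI Python SDK. It is an illustration only, not the exact workflow described above; the model name, file name, and prompt wording are placeholder assumptions.

```python
# Minimal sketch: asking an LLM to summarize a long RFI and surface its subtext.
# The model name, file name, and prompts are placeholders, not the workflow above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

rfi_text = open("cms_rfi.txt").read()  # the RFI, already extracted to plain text

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a health policy analyst. Be concise."},
        {"role": "user",
         "content": "Summarize what this RFI explicitly asks for, then describe "
                    "the unwritten subtext of what the agency is really looking for:\n\n"
                    + rfi_text},
    ],
)

print(response.choices[0].message.content)
```

The point is less the code than how little of it there is: the hard part is having the document and knowing what to ask.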

The big LLMs are also developing “agentic” capabilities. In other words, they are able to conduct multistep business and human processes.

Right now they are being used directly by health care professionals and patients for summaries, communication and companionship. Increasingly they are being used for diagnostics, coaching and therapy. And of course many health care organizations are using them directly for process redesign.

Meanwhile, the core workhorses of health care are the EMRs used by providers, and the biggest kahuna of them all is Epic. Epic has a relationship with Microsoft, which has its own AI play and also has its own strong relationship with OpenAI – or at least as strong as investing $13bn in a non-profit will make your relationship. Epic is now using Microsoft’s AI in note summaries, patient communications, and more, and is also using DAX, the ambient AI scribe from Microsoft’s subsidiary Nuance. Epic also has a relationship with DAX rival Abridge.

But that’s not necessarily enough and Epic is clearly building its own AI capabilities. In an excellent review over at Health IT Today, John Lee breaks down Epic’s non-trivial use of AI in its clinical workflow:

  • The platform now offers tools to reorganize text for readability and to generate succinct, patient-friendly summaries, hospital course summaries, and discharge instructions, and even to translate discrete clinical data into narrative instructions.
  • We will be able to automatically destigmatize language in notes (e.g., changing “narcotic abuser” to “patient has opiate use disorder”).
  • Even as a physician, I sometimes have a hard time deciphering the shorthand that my colleagues so frequently use. Epic showed how AI can translate obtuse medical shorthand – like “POD 1 sp CABG. HD stable. Amb w asst.” – into plain language: “Post op day 1 status post coronary bypass graft surgery. Hemodynamically stable. Patient is able to ambulate with assist.”
  • For nurses, ambient documentation and AI-generated shift notes will be available, reducing manual entry and freeing up time for patient care.

And of course Epic isn’t the only EHR (honestly!). Its competitors aren’t standing still. Meditech’s COO Helen Waters gave a wide-ranging interview to HISTalk. I paid particular attention to her discussion of their work with Google on AI, and I am quoting almost all of it:

This initial product was built off of the BERT language model. It wasn’t necessarily generative AI, but it was one of their first large language models. The feature in that was called Conditions Explorer, and that functionality was really a leap forward. It was intelligently organizing the patient information directly from within the chart, and as the physician was working in the chart workflow, offering both a longitudinal view of the patient’s health by specific conditions and categorizing that information in a manner that clinicians could quickly access relevant information to particular health issues, correlated information, making it more efficient in informed decision making.  <snip>

Beyond that, with the Vertex AI platform and certainly multiple iterations of Gemini, we’ve walked forward to offer additional AI offerings in the category of gen AI, and that includes both a physician hospital course-of-stay narrative at the end of a patient’s time in the hospital to be discharged. We actually generate the course-of-stay, which has been usually beneficial for docs to not have to start to build that on their own.

We also do the same for nurses as they switch shifts. We give a nurse shift summary, which basically categorizes the relevant information from the previous shift and saves them quite a bit of time. We are using the Vertex AI platform to do that. And in addition to everyone else under the sun, we have obviously delivered and brought live ambient scribe capabilities with AI platforms from a multitude of vendors, which has been successful for the company as well.

The concept of Google and the partnership remains strong. The results are clear with the vision that we had for Expanse Navigator. The progress continues around the LLMs, and what we’re seeing is great promise for the future of these technologies helping with administrative burdens and tasks, but also continued informed capacities to have clinicians feel strong and confident in the decisions they’re making. 

The voice capabilities in the concept of agentic AI will clearly go far beyond ambient scribing, which is both exciting and ironic when you think about how the industry started with a pen way back when, we took them to keyboards, and then we took them to mobile devices, where they could tap and swipe with tablets and phones. Now we’re right back to voice, which I think will be pleasing provided it works efficiently and effectively for clinicians.


So if you read–not even between the lines but just what they are saying–Epic, which dominates AMCs and big non-profit health systems, and Meditech, the EMR for most big for-profit systems like HCA, are both building AI into their platforms for almost all of the workflow that most clinicians and administrators use.

I raised this issue a number of different ways at a meeting hosted by Commure, the General Catalyst-backed provider-focused AI company. Commure has been through a number of iterations in its 8-year life but it is now an AI platform on which it is building several products or capabilities. (For more, here’s my interview with CEO Tannay Tandon). These include (so far!) administration, revenue cycle, inventory and staff tracking, ambient listening/scribing, clinical workflow, and clinical summarization. You can bet there’s more to come via development or acquisition. In addition, Commure is doing this not only with the deep-pocketed backing of General Catalyst but also with partial ownership from HCA–incidentally Meditech’s biggest client. That means HCA has to figure out what Commure is doing compared to Meditech.

Finally, there’s also a ton of AI activity using the big LLMs internally within AMCs and in providers, plans and payers generally. Don’t forget that all these players have heavily customized many of the tools (like Epic) which external vendors have sold them. They are also making their AI vendors “forward deploy” engineers to customize their AI tools to the clients’ workflow. But they are also building stuff themselves. For instance, Stanford just released a homegrown product that uses AI to communicate lab results to patients. Not bought from a vendor, but developed internally using Anthropic’s Claude LLM. There are dozens and dozens of these homegrown projects happening in every major health care enterprise. All those data scientists have to keep busy somehow!
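
To give a sense of how small the core of such a homegrown tool can be, here is a rough sketch of a lab-results explainer built on Anthropic’s Claude. To be clear, this is not Stanford’s actual implementation; the model name, prompt, and data shape are assumptions for illustration.

```python
# Hypothetical sketch of an internal tool that drafts a patient-friendly
# explanation of a lab result using Anthropic's Claude. Not Stanford's actual
# implementation; the model name, prompt, and data shape are assumed.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

lab_result = {
    "test": "Hemoglobin A1c",
    "value": 6.1,
    "units": "%",
    "reference_range": "4.0-5.6",
}

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    system=(
        "You write calm, plain-language explanations of lab results for patients. "
        "Do not diagnose; always suggest discussing results with the care team."
    ),
    messages=[
        {"role": "user",
         "content": f"Explain this result to a patient: {lab_result}"},
    ],
)

print(message.content[0].text)
```

Most of the real work in a production version sits around a snippet like this (EHR integration, clinician review, safety guardrails), which is exactly why these projects keep all those in-house data scientists busy.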

So what does that say about the role of AI?

First it’s clear that the current platforms of record in health care–the EHRs–are viewing themselves as massive data stores and are expecting that the AI tools that they and their partners develop will take over much of the workflow currently done by their human users.

Second, the law of tech has usually been that water flows downhill. More and more companies and products end up becoming features on other products and platforms. You may recall that there used to be a separate set of software for writing (WordPerfect), presentations (Persuasion), and spreadsheets (Lotus 1-2-3), and now there is MS Office and Google Suite. Last month a company called Brellium raised $16m from presumably very clever VCs to summarize clinical notes and analyze them for compliance. Now watch them prove me wrong, but doesn’t it seem that everyone and their dog has already built AI to summarize and analyze clinical notes? Can’t one more analysis for compliance be added on easily? It’s a pretty good bet that this functionality will be part of some bigger product very soon.

(By the way, one area that might be distinct is voice conversation, which right now does seem to have a separate set of skills and companies working in it because interpreting human speech and conversing with humans is tricky. Of course that might be a temporary “moat” and these companies or their products may end up back in the main LLM soon enough). 

Meanwhile, Vince Kuraitis, Girish Muralidharan & the late Jody Ranck just wrote a three-part series on how the EMR is in any case moving towards becoming a bigger unified digital health platform, which suggests that the clinical part of the EMR will be integrated with all the other process stuff going on in health systems. Think staffing, supplies, finance, marketing, etc. And of course there’s still the ongoing integration between EMRs and medical devices and sensors across the hospital and eventually the wider health ecosystem.

So this integration of data sets could quickly lead to an AI dominated super system in which lots of decisions are made automatically (e.g. AI tracking care protocols as Robbie Pearl suggested on THCB a while back), while some decisions are operationally made by humans (ordering labs or meds, or setting staffing schedules) and finally a few decisions are more strategic. The progress towards deep research and agentic AI being made by the big LLMs has caused many (possibly including Satya Nadella) to suggest that SaaS is dead. It’s not hard to imagine a new future where everything is scraped by the AI and agents run everything globally in a health system.

This leads to a real problem for every player in the health care ecosystem.

If you are buying an AI system, you don’t know if the application or solution you are buying is going to be cannibalized by your own EHR, or by something that is already being built inside your organization.

If you are selling an AI system, you don’t know if your product is a feature of someone else’s AI, or if the skill is in the prompts your customers want to develop rather than in your tool. And worse, there’s little penalty in your potential clients waiting to see if something better and cheaper comes along.

And this is happening in a world in which there are new and better LLM and other AI models every few months.

I think for now the issue is that, until we get a clearer understanding of how all this plays out, there will be lots of false starts, funding rounds that don’t go anywhere, and AI implementations that don’t achieve much. Reports like the one from Sofia Guerra and Steve Kraus at Bessemer may help, giving 59 “jobs to be done.” I’m just concerned that no one will be too sure what the right tool for the job is.

Of course I await my robot overlords telling me the correct answer.

Matthew Holt is the Publisher of THCB

Elevare Law launches!

There’s a new health innovation law firm in town! Rebecca Gwilt & Kaitlyn O’Connor have started Elevare Law to help health tech companies. We spent a little time talking about the new firm and who it’s going to work with, and a lot about the different legal and regulatory challenges facing digital health companies. Deep dives into the regs around RPM, RTM & more, and also a lot about what we might expect from the FDA and the rest of the chaos in the new Administration. Plus a little about how AI helps lawyers be more efficient and a lot about how AI may or may not be influenced by health care regulation (TL;DL: it’s going to be slow & state by state) – Matthew Holt
