Artificial intelligence is quickly becoming a core part of healthcare operations. It drafts clinical notes, summarizes patient visits, flags abnormal labs, triages messages, reviews imaging, helps with prior authorizations, and increasingly guides decision support. AI is no longer just a side experiment in medicine; it is becoming a key interpreter of clinical reality.
That raises an important question for physicians, administrators, and policymakers alike: Is AI accurately reflecting the real world? Or subtly reshaping it?
The data are straightforward. According to the U.S. Census Bureau’s July 2023 estimates, about 75 percent of Americans identify as White (including Hispanic and non-Hispanic), around 14 percent as Black or African American, roughly 6 percent as Asian, and smaller percentages as Native American, Pacific Islander, or multiracial. Hispanic or Latino individuals, who can be of any race, make up roughly 19 percent of the population.
In brief, the data are measurable, verifiable, and accessible to the public.
I recently carried out a simple experiment with broader implications beyond image creation. I asked two top AI image-generation platforms to produce a group photo that reflects the racial composition of the U.S. population based on official Census data.
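As a sanity check on what such a photo should look like, the Census shares cited above can be turned into expected head counts with a few lines of arithmetic. This is a hypothetical sketch: the grouping of the smaller categories into one "Other" bucket is my own simplification, not the Census Bureau's.

```python
# Approximate July 2023 Census shares cited above (smaller groups
# collapsed into "Other" for illustration).
shares = {
    "White": 0.75,
    "Black or African American": 0.14,
    "Asian": 0.06,
    "Other (Native American, Pacific Islander, multiracial)": 0.05,
}

def expected_counts(group_size: int, shares: dict[str, float]) -> dict[str, float]:
    """Expected number of people from each group in a photo of group_size."""
    return {group: round(group_size * p, 1) for group, p in shares.items()}

print(expected_counts(20, shares))
# A faithful 20-person group photo would show roughly 15 White,
# 3 Black, 1 Asian, and 1 other individual.
```

Any image generator that claims to reflect the population can be judged against these simple expected values.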
In 1872, English mathematician and sometime poet Augustus de Morgan published this catchy rhyme: “Great fleas have little fleas upon their backs to bite ‘em, And little fleas have lesser fleas, and so ad infinitum.”
This truism about competition among species for access to nutrition and reproduction could have come in handy for Napoleon 60 years earlier, when he tragically underestimated his enemies’ will to live. It wasn’t so much the stubborn Russians as the microbes that were his undoing.
When he launched his invasion with a staggering force of 615,000 men, 200,000 horses, and 1,372 mobile guns, he appeared unstoppable. But on his way to Moscow (according to Tolstoy’s account of the misadventure in “War and Peace”) he lost 130,000 men to Shigella dysentery. Confronted with harsh weather and a Russian force that refused to engage in defense of Moscow, Napoleon lost two-thirds of his remaining retreating force to typhus, carried by Rickettsia prowazekii, housed in body lice embedded in his soldiers’ rancid clothing.
Under more favorable circumstances, the soldiers’ immune systems would have been their ally. Human bioengineering has evolved side by side with pathogenic microbes determined to chemically outsmart their human hosts.
Humans rely on innate and adaptive mechanisms to detect and destroy pathogens. But to do so while sparing their own cells, they must be able to distinguish self from non-self. And they must adapt and remember, producing long-lived immune cells and protein receptors that allow them to “capture” and destroy repeat offenders.
If the system experiences a breakdown in self-tolerance, the protective processes may overshoot, resulting in a chronic inflammatory response that destroys healthy tissues and marks the emergence of autoimmune disease.
One special circumstance in which immune tolerance is both normal and essential is maternal self-suppression during pregnancy, which allows two immunologically separate organisms to survive side by side in the most intimate of quarters.
“There are decades where nothing happens; and there are weeks where decades happen,” said Lenin, probably never. It’s also a remarkably apt characterization of the last year in generative AI (genAI) — the last week in particular — which has seen the AI landscape shift so dramatically that even skeptics are now updating their priors in a more bullish direction.
In September 2025, Anthropic, the AI company behind Claude, released what it described as its most capable model yet, saying it could stay on complex coding tasks for about 30 hours continuously. Reported examples include building a web app from scratch, with some runs described as generating roughly 11,000 lines of code. In January 2026, two Wall Street Journal reporters who said they had no programming background used Claude Code to build and publish a Journal project, describing the capability as “a breakout moment for Anthropic’s coding tool” and for “vibe coding” — the idea of creating software simply by describing it.
Around the same time, OpenClaw went viral as an open-source assistant that runs locally and works through everyday apps like WhatsApp, Telegram, and Slack to execute multi-step tasks. The deeper shift, though, is architectural: the ecosystem is converging on open standards for AI integration. One such standard called MCP — the “USB-C of AI” — is now being downloaded nearly 100 million times a month, suggesting that AI integration has moved from exploratory to operational.
Markets are watching the evolution of AI agents into potentially useful economic actors and reacting accordingly. When Anthropic announced plans to move into high-revenue verticals — including financial services, law, and life sciences — the Journal headline read: “Threat of New AI Tools Wipes $300B Off Software and Data Stocks.”
Economist Tyler Cowen observed that this moment will “go down as some kind of turning point.” Derek Thompson, long concerned about an AI bubble, said his worries “declined significantly” in recent weeks. Heeding Wharton’s Ethan Mollick — “remember, today’s AI is the worst AI you will ever use” — investors and entrepreneurs are busily searching for opportunities to ride this wave.
Some founders are taking their ambition to healthcare and life science, where they see a slew of problems for which (they anticipate) genAI might be the solution, or at least part of it. The approach one AI-driven startup is taking towards primary care offers a glimpse into what such a future might hold (or perhaps what fresh hell awaits us).
Two Visions of Primary Care
There is genuine crisis in primary care. Absurdly overburdened and comically underpaid, primary care physicians have fled the profession in droves — some to concierge practices where (they say) they can provide the quality of care that originally attracted them to medicine, many out of clinical practice entirely. Recruiting new trainees grows harder each year.
What’s being lost is captured with extraordinary power by Dr. Lisa Rosenbaum in her NEJM podcast series on the topic.
It’s not well known, but many people in hospitals spend a lot of time creating patient registries for quality programs, CMS reporting, clinical trials, and more. The work requires extremely detailed abstraction of patient data from records and mapping it to registry requirements. Wouldn’t it be clever if an AI system could read the chart and help the people doing that work (usually very expensive nurses) do it more quickly? That’s the premise behind Carta Healthcare. Greg Miller and Jared Crapo from Carta demoed the system for me and told me about the market for it – Matthew Holt
In its Strategy for Artificial Intelligence (V.3), the Department of Health and Human Services (“HHS”) acknowledges that: “For too long, our Department has been bogged down by bureaucracy and busy work.” HHS promises that it will accelerate artificial intelligence (“AI”) innovation, including “accelerating drug and biologic approvals at the FDA.”
History shows that well-intended but cumulative regulatory intervention – more so than scientific complexity – is the primary deterrent to rapid technological progress. If AI is subject to the typical pattern of regulatory creep, its potential to accelerate drug discovery and development will be significantly reduced. To avoid this outcome, HHS should develop a plan that is premised on a zero-based regulatory approach. That is, each new technology such as AI should start with a clean slate and only the minimum requirements deemed necessary to show effectiveness and safety should be applied in the approval process for that technology.
The Pace of Innovation
Medical innovation has lagged the pace of other sectors of the economy. As Dr. Scott Podolsky of Harvard Medical School observed: “Medicine in 2020 is much closer to medicine in 1970 than medicine in 1970 was to medicine in 1920.” Podolsky points to breakthroughs such as antibiotics, antihypertensives, antidepressants, antipsychotics, and steroids, whose impact has not been matched by innovations in the latter 50 years.
Two explanations have been offered for this phenomenon: 1) the inherent complexity of biological processes; and 2) the regulatory approval process.
As a benchmark for comparison to the following case studies, the development of 4G communications spanned less than a decade, with discussions starting around 2001, technical specifications being released in 2004, and the first commercial networks launching in 2009.
Regulatory Intervention in New Technologies
The Human Genome (Great Science Leads to Regulatory Paralysis)
The Human Genome Project (HGP) ran from 1990 to 2003 and has been lauded as one of the world’s greatest scientific achievements. The project identified the specific locations of genes within human DNA, creating a “roadmap” of the human genetic code and facilitating the identification of disease-related genes.
The HGP focused on balancing rapid scientific progress with ethical safeguards. Oversight was primarily managed through internal ethical programs and international data-sharing agreements rather than a single overarching legislative or regulatory body.
Under this structure, the HGP beat its target date by two years. That is to say that the complexity of the problem did not cause any delays, and progress was not impeded by the standard drug-approval bottleneck.
However, once the genetic roadmap was handed off for drug discovery and development, progress slowed dramatically.
On December 19th, the Department of Health and Human Services (“HHS”) issued a Request for Information seeking to harness artificial intelligence (“AI”) to deflate health care costs and make America healthy again.
As described herein, AI can be used in many dimensions to help lower healthcare costs and improve care. However, to achieve significant breakthroughs with AI, HHS will need to completely revamp the regulatory approach to drug discovery and development.
Dimension #1. Incorporation of AI into Drug Discovery
The biggest benefit to the healthcare industry’s performance from AI is achievable from drug discovery. Accounting for the costs of failures, the average FDA drug approval costs society almost $3 billion and takes decades to reach the market from its inception in the lab.
In contrast, AI identifies potential treatments much faster than traditional methods by processing vast amounts of biological data, uncovering hidden causal relationships, and generating new actionable insights.
AI is particularly promising for complex, multifactorial conditions – such as neurodegenerative diseases, autism spectrum disorders, and multiple chronic illnesses – where conventional reductionist approaches have failed.
In the short run, HHS should direct its grants toward AI-generated basic research, with particular emphasis on hard-to-solve illnesses. At the same time, the FDA should put in place a new approval system for AI-initiated programs to enable breakthrough treatments on a compressed timetable.
Dimension #2. Incorporation of AI into the Drug Development Process
Simply relying on AI for drug discovery while subjecting its advances to the current approval process would undermine the value of the technology.
Rather, AI can already deliver improvements in fulfilling the exhaustive regulatory documentation requirements, which today account for as much as 30% of compliance costs.
Kai Romero is Head of Clinical Success at Evidently. The company is one of many using AI to dive into the EMR and extract data to deliver it to clinicians. It works to get really great information from the EMR to various flavors of clinicians in a fast and innovative way. Kai leads me on a detailed exploration of how the technology gets used as a layer over the EMR, and shows me the new version that allows an LLM to deliver immediate answers from the data. This is a demo you really need to see to understand how AI is changing, and improving, the clinical experience. Meanwhile Kai is fascinating: she was an ER doc who became a specialist in hospice. We didn’t get into that too much, but you can see her influence on Evidently’s design — Matthew Holt
Artificial intelligence (“AI”) has taken root in the field of drug discovery and development and already has shown signs of running past the traditional model of doing research. Congress should take note of these rapid changes and: 1) direct the Department of Health and Human Services (“HHS”) to phase down the government’s basic research grant program for non-AI applicants, 2) require HHS to redirect these monies to fund nascent artificial intelligence applications, and 3) require HHS to revamp the roadmap for drug approvals of AI-driven trials to reflect the new capabilities for drug discovery and development.
Background
There are four distinguishing features of the U.S. healthcare industry.
First, the industry’s costs as a percentage of GDP have increased from 8% in 1980 to 17% today, and are expected to exceed 20% by 2030. The federal government subsidizes roughly one-third of these costs. These subsidies are not sustainable as healthcare costs continue to skyrocket, especially in the face of an overall $37 trillion federal debt.
Second, the industry is regulated under a system that results in an average of 18 years of basic research and 12 years of clinical research for each drug approval. The clinical cost per newly approved drug now exceeds $2 billion. The economics of drug discovery are so unattractive to investors that the federal government and charitable foundations fund virtually all basic research. The federal government does so to the tune of $44 billion per year. When this cost is spread among the 50 or so drug approvals per year, it adds roughly $880 million to each drug, bringing the total cost to nearly $3 billion per drug approval. Worse yet, the process is getting slower and more costly each year. As such, drug discoveries under the current research approach will not be a significant contributor to lowering overall healthcare costs.
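The arithmetic behind these per-approval figures is easy to verify, using only the numbers cited in this paragraph:

```python
# Back-of-the-envelope check of the per-approval cost figures above.
basic_research_budget = 44e9   # federal basic-research spending per year
approvals_per_year = 50        # approximate new drug approvals per year
clinical_cost_per_drug = 2e9   # clinical cost per approved drug

# Spread the research budget across the year's approvals, then add
# the clinical cost to get the all-in cost per approved drug.
research_share = basic_research_budget / approvals_per_year
total = clinical_cost_per_drug + research_share

print(f"research share per drug: ${research_share / 1e6:.0f}M")  # $880M
print(f"total per approval:      ${total / 1e9:.2f}B")           # $2.88B
```

The result, about $2.9 billion per approval, is the basis of the "almost $3 billion" figure.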
Third, the Trump administration has undercut the federal government’s role in healthcare by firing several thousand employees from HHS. Thus, the agency can no longer effectively administer its previously adopted rules and regulations, and therefore, cannot be expected to shepherd drug discovery into lowering healthcare costs.
Fourth, on the positive side, artificial intelligence software combined with the massive and growing computational capacity of supercomputers have shown the potential to dramatically lower the cost of drug discovery and to radically shorten the timeline to identify effective treatments.
Enter Artificial Intelligence (AI) into Drug Discovery
For the past decade, a handful of companies have been exploring advanced automation techniques to improve the many facets of the drug discovery process. Improvements can now be had in fulfilling regulatory documentation requirements, which today account for as much as 30% of compliance costs. More significantly, AI can be used to accurately create comprehensive clinical documents from raw data with citations and cross-references – and continually update and validate the documentation.
The top AI drug discovery companies include Insilico Medicine, Atomwise, and Recursion, which leverage AI to accelerate various stages of drug development, from target identification to clinical trials. Other notable companies are BenevolentAI, Insitro, Owkin, and Schrödinger, alongside technology providers like Nvidia that supply critical AI infrastructure for the life sciences sector.
I haven’t blogged this yet, which kinda surprises me, since I find myself describing it often. Let’s start with an overview. We can look at health information through the lens of a lifecycle.
The promise of Health Information Technology has been to help us – ideally to achieve optimal health in the people we serve.
The concept at the beginning of the HITECH act was: “ADOPT, CONNECT, IMPROVE.”
These were the three pillars of the Meaningful Use Incentive programs.
Adopt technology so we can connect systems and therefore improve health.
Simple, yes?
Years later, one can argue that adoption and even connection have (mostly) been accomplished.
But the bridge between measurement and health improvement isn’t one we can easily cross with the current tools available to us.
Why?
Many of the technical solutions, particularly those that promote dashboards, are missing the most crucial piece of the puzzle. They get us close, but then they drop the ball.
And that’s where this simple “AAAA” model becomes useful.
For data and information to be truly valuable in health care, it needs to complete a full cycle.
It’s not enough to just collect and display. There are four essential steps:
1. Acquire. This is where we gather the raw data & information. EHR entries, device readings, patient-reported outcomes … the gamut of information flowing into our systems. Note that I differentiate between data (transduced representations of the physical world: blood pressure, CBC, the DICOM representation of an MRI, medications actually taken) and information (diagnoses, ideas, symptoms, the problem list, medications prescribed) because data is reliably true and information is possibly true, and possibly inaccurate. We need to weigh these two kinds of inputs properly – as data is a much better input than information. (I’ll resist the temptation to go off on a vector about data being a preferable input for AI models too … perhaps that’s another post.)
2. Aggregate. Once acquired, this data and information needs to be brought together, normalized, and cleaned up. This is about making disparate data sources speak the same language, creating a unified repository so we can ask questions of one dataset rather than tens or hundreds.
3. Analyze. Now we can start to make sense of it. This is where clinical decision support (CDS) begins to take shape, how we can identify trends, flag anomalies, predict risks, and highlight opportunities for intervention. The analytics phase is where most current solutions end. A dashboard, an alert, a report … they all dump advice – like a bowl of spaghetti – into the lap of a human to sort it all out and figure out what to do.
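For concreteness, the acquire → aggregate → analyze flow can be sketched as a minimal pipeline. Everything here is illustrative rather than drawn from any real system: the Reading record, the stubbed sources, and the blood-pressure threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    kind: str      # e.g. "bp_systolic"
    value: float
    source: str    # EHR entry, device reading, patient-reported

def acquire() -> list[Reading]:
    # Step 1 (Acquire): gather raw data from disparate sources (stubbed here).
    return [
        Reading("p1", "bp_systolic", 151.0, "device"),
        Reading("p1", "bp_systolic", 149.0, "ehr"),
        Reading("p2", "bp_systolic", 118.0, "device"),
    ]

def aggregate(readings: list[Reading]) -> dict[str, list[float]]:
    # Step 2 (Aggregate): normalize into one unified repository keyed
    # by patient, so questions are asked of a single dataset.
    repo: dict[str, list[float]] = {}
    for r in readings:
        repo.setdefault(r.patient_id, []).append(r.value)
    return repo

def analyze(repo: dict[str, list[float]], threshold: float = 140.0) -> list[str]:
    # Step 3 (Analyze): flag anomalies -- here, patients whose mean
    # systolic BP exceeds a threshold. This is where most dashboards stop:
    # the flagged list lands in a human's lap.
    return [pid for pid, values in repo.items()
            if sum(values) / len(values) > threshold]

flagged = analyze(aggregate(acquire()))
print(flagged)  # ['p1']
```

Note that the pipeline ends by producing a list for a person to act on, which is exactly the limitation the next paragraphs describe.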
Sure … you can see patterns, understand populations, and identify areas for improvement … all good things. The maturity of health information technology means that aggregation, normalization, and sophisticated analysis are now far more accessible and robust than ever before. We no longer need a dozen specialized point solutions to handle each step; modern platforms can integrate it all. This is good – but not good enough.
A dashboard or analytics report, no matter how elegant, is ultimately passive. It shows you the truth, but it doesn’t do anything about it.
Lisbeth Votruba, the Chief Clinical Officer, and Dana Peco, the AVP of Clinical Informatics, from Avasure came on THCB to explain how their AI-enabled surveillance system improves the care team experience in hospitals and health care facilities. Their technology enables remote nurses and clinical staff to monitor patients and manage their care in a tight virtual nursing relationship with the staff at the facility, and also to deliver remote specialty consults. They showed their tools and services, which are now present in thousands of facilities and are helping with the nursing shortage. A demo and great discussion about how technology is improving the quality of care and the staff experience – Matthew Holt