Kai Romero is Head of Clinical Success at Evidently. The company is one of many using AI to dive into the EMR and extract data for clinicians, working to get really great information from the EMR to various flavors of clinicians in a fast and innovative way. Kai leads me on a detailed exploration of how the technology gets used as a layer over the EMR, and shows me the new version that allows an LLM to deliver immediate answers from the data. This is a demo you really need to see to understand how AI is changing, and improving, the clinical experience. Meanwhile Kai is fascinating. She was an ER doc who became a specialist in hospice. We didn’t get into that too much, but you can see her influence on Evidently’s design — Matthew Holt
Artificial Intelligence Renders the FDA’s Current Drug Approval Process Obsolete

By STEVEN ZECOLA
Artificial intelligence (“AI”) has taken root in the field of drug discovery and development and has already shown signs of outpacing the traditional research model. Congress should take note of these rapid changes and: 1) direct the Department of Health and Human Services (“HHS”) to phase down the government’s basic research grant program for non-AI applicants, 2) require HHS to redirect these monies to fund nascent artificial intelligence applications, and 3) require HHS to revamp the roadmap for drug approvals of AI-driven trials to reflect the new capabilities for drug discovery and development.
Background
There are four distinguishing features of the U.S. healthcare industry.
First, the industry’s costs as a percentage of GNP have increased from 8% in 1980 to 17% today, and are expected to exceed 20% by 2030. The federal government subsidizes roughly one-third of these costs. These subsidies are not sustainable as healthcare costs continue to skyrocket, especially in the face of an overall $37 trillion federal debt.
Second, the industry is regulated under a system that results in an average of 18 years of basic research and 12 years of clinical research for each drug approval. The clinical cost per newly approved drug now exceeds $2 billion. The economics of drug discovery are so unattractive to investors that the federal government and charitable foundations fund virtually all basic research. The federal government does so to the tune of $44 billion per year. When this cost is spread among the 50 or so drug approvals per year, it adds roughly $880 million to each drug, bringing the total cost to over $3 billion per drug approval. Worse yet, the process is getting slower and more costly each year. As such, drug discoveries under the current research approach will not be a significant contributor to lowering overall healthcare costs.
Third, the Trump administration has undercut the federal government’s role in healthcare by firing several thousand employees from HHS. Thus, the agency can no longer effectively administer its previously adopted rules and regulations, and therefore cannot be expected to shepherd drug discovery toward lowering healthcare costs.
Fourth, on the positive side, artificial intelligence software combined with the massive and growing computational capacity of supercomputers has shown the potential to dramatically lower the cost of drug discovery and to radically shorten the timeline to identify effective treatments.
Enter Artificial Intelligence (AI) into Drug Discovery
For the past decade, a handful of companies have been exploring advanced automation techniques to improve the many facets of the drug discovery process. Improvements can now be had in fulfilling regulatory documentation requirements, which today account for as much as 30% of the cost of compliance. More significantly, AI can be used to accurately create comprehensive clinical documents from raw data, with citations and cross-references, and to continually update and validate that documentation.
The top AI drug discovery companies include Insilico Medicine, Atomwise, and Recursion, which leverage AI to accelerate various stages of drug development, from target identification to clinical trials. Other notable companies are BenevolentAI, Insitro, Owkin, and Schrödinger, alongside technology providers like Nvidia that supply critical AI infrastructure for the life sciences sector.
Continue reading…
AAAA (the four As)
By JACOB REIDER
I haven’t blogged this yet, which kinda surprises me, since I find myself describing it often.
Let’s start with an overview. We can look at health information through the lens of a lifecycle.

The promise of Health Information Technology has been to help us – ideally to achieve optimal health in the people we serve.
The concept @ the beginning of the HITECH Act was: “ADOPT, CONNECT, IMPROVE.”
These were the three pillars of the Meaningful Use Incentive programs.
Adopt technology so we can connect systems and therefore improve health.
Simple, yes?
Years later, one can argue that adoption and even connection have (mostly) been accomplished.
But the bridge between measurement and health improvement isn’t one we can easily cross with the current tools available to us.
Why?
Many of the technical solutions, particularly those that promote dashboards, are missing the most crucial piece of the puzzle. They get us close, but then they drop the ball.
And that’s where this simple “AAAA” model becomes useful.
For data and information to be truly valuable in health care, it needs to complete a full cycle.
It’s not enough to just collect and display. There are four essential steps:
1. Acquire. This is where we gather the raw data & information. EHR entries, device readings, patient-reported outcomes … the gamut of information flowing into our systems. Note that I differentiate between data (transduced representations of the physical world: blood pressure, CBC, the DICOM representation of an MRI, medications actually taken) and information (diagnoses, ideas, symptoms, the problem list, medications prescribed) because data is reliably true and information is possibly true, and possibly inaccurate. We need to weigh these two kinds of inputs properly – as data is a much better input than information. (I’ll resist the temptation to go off on a vector about data being a preferable input for AI models too … perhaps that’s another post.)
2. Aggregate. Once acquired, this data and information needs to be brought together, normalized, and cleaned up. This is about making disparate data sources speak the same language, creating a unified repository so we can ask questions of one dataset rather than tens or hundreds.
3. Analyze. Now we can start to make sense of it. This is where clinical decision support (CDS) begins to take shape, how we can identify trends, flag anomalies, predict risks, and highlight opportunities for intervention. The analytics phase is where most current solutions end. A dashboard, an alert, a report … they all dump advice – like a bowl of spaghetti – into the lap of a human to sort it all out and figure out what to do.
Sure … you can see patterns, understand populations, and identify areas for improvement … All good things. The maturity of health information technology means that aggregation, normalization, and sophisticated analysis are now far more accessible and robust than ever before. We no longer need a dozen specialized point solutions to handle each step; modern platforms can integrate it all. This is good – but not good enough.
A dashboard or analytics report, no matter how elegant, is ultimately passive. It shows you the truth, but it doesn’t do anything about it.
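To make the first three A’s concrete, here is a minimal sketch, in Python, of how acquire, aggregate, and analyze might be separated in a small pipeline. The field names, the data-versus-information tagging, and the blood-pressure threshold are all hypothetical illustrations of the ideas above, not anything from a real system.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List

@dataclass
class Observation:
    """One acquired item, tagged as measured data or asserted information."""
    patient_id: str
    kind: str    # "data" (e.g., a device BP reading) or "information" (e.g., a reported value)
    name: str
    value: float

def acquire() -> List[Observation]:
    # Acquire: in practice these would stream in from EHRs, devices,
    # and patient-reported outcomes; hard-coded here for illustration.
    return [
        Observation("p1", "data", "systolic_bp", 168),
        Observation("p1", "information", "reported_systolic_bp", 140),
        Observation("p2", "data", "systolic_bp", 122),
    ]

def aggregate(observations: Iterable[Observation]) -> Dict[str, Dict[str, float]]:
    # Aggregate: normalize disparate sources into one queryable record per patient.
    unified: Dict[str, Dict[str, float]] = {}
    for obs in observations:
        unified.setdefault(obs.patient_id, {})[f"{obs.kind}:{obs.name}"] = obs.value
    return unified

def analyze(unified: Dict[str, Dict[str, float]]) -> List[str]:
    # Analyze: flag anomalies, preferring measured data over asserted information,
    # echoing the weighting point above. The 160 mmHg threshold is a toy rule.
    flags = []
    for patient_id, record in unified.items():
        bp = record.get("data:systolic_bp", record.get("information:reported_systolic_bp"))
        if bp is not None and bp >= 160:
            flags.append(f"{patient_id}: elevated systolic BP ({bp:.0f} mmHg), consider follow-up")
    return flags

if __name__ == "__main__":
    for flag in analyze(aggregate(acquire())):
        print(flag)
```

Notice that the sketch stops at a list of flags printed for a human to deal with, which is exactly the gap this post is describing.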
Continue reading…
Avasure: Tech for helpful watching & remote care in hospitals
Lisbeth Votruba, the Chief Clinical Officer, and Dana Peco, the AVP of Clinical Informatics, from Avasure came on THCB to explain how their AI-enabled surveillance system improves the care team experience in hospitals and health care facilities. Their technology enables remote nurses and clinical staff to monitor patients, manage their care in a tight virtual nursing relationship with the staff at the facility, and deliver remote specialty consults. They showed their tools and services, which are now present in thousands of facilities and are helping with the nursing shortage. A demo and great discussion about how technology is improving the quality of care and the staff experience–Matthew Holt
What A Digital Health Doc Learned Recertifying His Boards

By JEAN LUC NEPTUNE
I recently got the good news that I passed the board recertification exam for the American Board of Internal Medicine (ABIM). As a bit of background, ABIM is a national physician evaluation organization that certifies physicians practicing internal medicine and its subspecialties (every other specialty has its own board certification body like ABOG for OB/GYNs and ABS for surgeons). Doctors practicing in most clinical environments need to be board-certified to be credentialed and eligible to work. Board certification can be accomplished by taking a test every 10 years or by participating in a continuing education process known as LKA (Longitudinal Knowledge Assessment). I decided to take the big 10-year test rather than pursue the LKA approach. For my fellow ABIM-certified docs out there who are wondering why I did the 10-year vs. the LKA, I’m happy to have a side discussion, but it was largely a career timing issue.
Of note, board certification is different from the USMLE (United States Medical Licensing Examination) which is the first in a series of licensing hurdles that doctors face in medical school and residency, involving 3 separate tests (USMLE Step 1, 2 and 3). After completing the USMLE steps, acquiring a medical license is a separate state-mediated process (I’m active in NY and inactive in PA) and has its own set of requirements that one needs to meet in order to practice in any one state. If you want to be able to prescribe controlled substances (opioids, benzos, stimulants, etc.), you will need a separate license from the DEA (the Drug Enforcement Administration, which is a federal entity). Simply put, you need to complete a lot of training, score highly on many standardized tests, and acquire a bunch of certifications (that cost a lot of money, BTW) to be able to practice medicine in the USofA.
What I learned in preparing for the ABIM recertification exam:
1.) There’s SO MUCH TO KNOW to be a doctor!
To prepare for the exam I used the New England Journal of Medicine (NEJM) review course which included roughly 2,000 detailed case studies that covered all the subspecialty areas of internal medicine. If you figure that each case involves mastery of dozens of pieces of medical knowledge, the exam requires a physician to remember tens of thousands of distinct pieces of information just for one specialty (remember that the medical vocabulary alone consists of tens of thousands of words). In addition, the individual facts mean nothing without a mastery of the basic underlying concepts, models, and frameworks of biology, biochemistry, human anatomy, physiology, pathophysiology, public health, etc. etc. Then there’s all the stuff you need to know for your specific specialty: medications, diagnostic frameworks, treatment guidelines, etc. It’s a lot. There’s a reason it takes the better part of a decade to gain any competency as a physician. So whenever I hear a non-doc saying that they’ve been reading up on XYZ and “I think I know almost as much as my doctor!”, my answer is always “No you don’t. Not at all. Not even a little bit. Stop it.”
2.) There is so much that we DON’T KNOW as doctors!
What was particularly striking to me as I did my review was how often I encountered a case or a presentation where:
- It’s unclear what causes a disease,
- The natural history of the disease is unclear,
- We don’t know how to treat the disease,
- We know how to treat the disease but we don’t know how the treatment works,
- We don’t know what treatment is most effective, or
- We don’t know what diagnostic test is best.
- And on, and on, and on…
It’s estimated that there are more than 50,000 (!!) active journals in the field of biomedical sciences publishing more than 3 million (!!!!) articles per year. Despite all this knowledge generation there’s still so much we don’t know about the human body and how it works. I think some people find doctors arrogant, but anyone who really knows doctors and physician culture can tell you that doctors possess a deep sense of humility that comes out of knowing that you actually know very little.
3.) Someday soon the computer doctor will FOR SURE be smarter than the human doctor.
The whole time I was preparing for the test, I kept telling myself that there was nothing I was doing that a sufficiently advanced computer couldn’t accomplish.
Continue reading…
Penguin–The Flightless Bird of Health AI
Fawad Butt and Missy Krasner started a new AI company which is building a big platform for both plans and providers in health care. Penguin Ai has a cute name, but is serious about trying to provide an underlying platform that is going to enable agents across the enterprise. They are health care only, as opposed to the big LLMs. But does health care need a separate AI company? Are the big LLMs going to give up health? And what about that Epic company? Join us as we discuss how this AI thing is going to be deployed across health care, and how Penguin is going to play. Oh and they raised a $30m Series A to start getting it done–Matthew Holt
Dr Kaelee Brockway on AI for physical therapy training
Dr Kaelee Brockway is a professor of education and physical therapy who has built a series of AI-based “patients” for her PT students to train on. Kaelee is a pioneer in using these tools for training. She showed me the personas that she has built with LLMs that are now being used by her students to figure out how to train their soft skills–a huge part of any training. This is a great demo and discussion about how clinical professionals are going to use LLMs in their training and their work–Matthew Holt
Owen Tripp, Included Health, talks AI
“So far AI in health care is being used to drive existing profits on workflows and increase revenue per event that patients in the end have to pay for. That’s not a win for anyone long term!” Included Health’s CEO Owen Tripp dives into the present and future use of AI, LLMs, patient self-triage and self treatment and all that. Another interesting conversation on where patient facing AI will end up — Matthew Holt
BTW here’s my Conversation with Ami Parekh & Ankoor Shah
Here’s Owen Tripp discussing Included Health.
Here’s Owen’s piece on AI, What’s in your chatbot?
Après AI, le Déluge

By KIM BELLARD
I have to admit, I’ve steered away from writing about AI lately. There’s just so much going on, so fast, that I can’t keep up. Don’t ask me how GPT-5 differs from GPT-4, or what Gemini does versus Genie 3. I know Microsoft really, really wants me to use Copilot, but so far I’m not biting. DeepMind versus DeepSeek? Is Anthropic the French AI, or is that Mistral? I’m just glad there are younger, smarter people paying closer attention to all this.
Still, I’m very much concerned about where the AI revolution is taking us, and whether we’re driving it or just along for the ride. In Fast Company, Sebastian Buck, co-founder of the “future design company” Enso, offers a great way to think about the AI revolution:
The scary news is: We have to redesign everything.
The exciting news is: We get to redesign everything.
He goes on to explain:
Technical revolutions create windows of time when new social norms are created, and where institutions and infrastructure is rethought. This window of time will influence daily life in myriad ways, from how people find dates, to whether kids write essays, to which jobs require applications, to how people move through cities and get health diagnoses.
Each of these are design decisions, not natural outcomes. Who gets to make these decisions? Every company, organization, and community that is considering if—and how—to adopt AI. Which almost certainly includes you. Congratulations, you’re now part of designing a revolution.
I want to pick out one area in particular where I hope we redesign everything intentionally, rather than in our normal short-sighted, laissez-faire manner: jobs and wealth.
It has become widely accepted that offshoring led to the demise of U.S. manufacturing and its solidly middle-class blue-collar jobs over the last 30 years. There’s some truth to that, but automation was arguably more of a factor – and that was before AI and today’s more versatile robots. More to the point, today’s AI and robots aren’t coming just to manufacturing but pretty much to every sector.
Former Transportation Secretary Pete Buttigieg warned:
The economic implications are the ones that I think could be the most disruptive, the most quickly. We’re talking about whole categories of jobs, where — not in 30 or 40 years, but in three or four — half of the entry-level jobs might not be there. It will be a bit like what I lived through as a kid in the industrial Midwest when trade and automation sucked away a lot of the auto jobs in the nineties — but ten times, maybe a hundred times more disruptive.
Mr. Buttigieg is no AI expert, but Erik Brynjolfsson, senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence and director of the Stanford Digital Economy Lab, is. When asked about those comments, he told Morning Edition: “Yeah, he’s spot on. We are seeing enormous advances in core technology and very little attention is being paid to how we can adapt our economy and be ready for those changes.”
You could look, for example, at the big layoffs in the tech sector lately. Natasha Singer, writing in The New York Times, reports on how computer science graduates have gone from expecting mid-six figure starting salaries to working at Chipotle (and wait till Chipotle automates all those jobs). The Federal Reserve Bank of New York says unemployment for computer science & computer engineering majors is lower than for anthropology majors but, astonishingly, higher than for pretty much all other majors.
And don’t just feel sorry for tech workers. Neil Irwin of Axios warns: “In the next job market downturn — whether it’s already starting or years away — there just might be a bloodbath for millions of workers whose jobs can be supplanted by artificial intelligence.” He quotes Federal Reserve governor Lisa Cook: “AI is poised to reshape our labor market, which in turn could affect our notion of maximum employment or our estimate of the natural rate of unemployment.”
In other words, you ain’t seen nothing yet.
While manufacturing was taking a beating in the U.S. over the last thirty years, tech boomed. Most of the world’s largest and most profitable companies are tech companies, and most of the world’s richest people got their wealth from tech. Those are, by and large, the ones investing most heavily in AI — and the most likely to benefit from it.
Professor Brynjolfsson worries about how we’ll handle the transition to an AI economy:
The ideal thing is that you find ways of compensating people and managing a transition. Sad to say, with trade, we didn’t do a very good job of that. A lot of people got left behind. It would be a catastrophe if we made the similar mistake with technology, [which] is also going to create enormous amounts of wealth, but it’s not going to affect everyone evenly. And we have to make sure that people manage that transition.
“Catastrophe” indeed. And I fear it is coming.
Continue reading…
China Goes “Democratic” on Artificial General Intelligence

By MIKE MAGEE
Last week, following a visit to the White House, Jensen Huang instigated a wholesale reversal of policy from Trump, who had been blocking Nvidia’s sales of its H20 chip to China. What did Jensen say?
We can only guess of course. But he likely shared the results of a proprietary report from noted AI researchers at Digital Science that suggested an immediate policy course correction was critical. Beyond the fact that over 50% of all AI researchers are currently based in China, their study documented that “In 2000, China-based scholars produced just 671 AI papers, but in 2024 their 23,695 AI-related publications topped the combined output of the United States (6378), the United Kingdom (2747), and the European Union (10,055).”
Daniel Hook, CEO of Digital Science, was declarative in the opening of the report, stating “U.S. influence in AI research is declining, with China now dominating.”
China now supports about 30,000 AI researchers compared to only 10,000 in the US. And that number is shrinking thanks to US tariff and visa shenanigans, and overt attacks by the administration on our premier academic institutions.
Economics professors David Autor (MIT) and Gordon Hanson (Harvard), known for “their research into how globalization, and especially the rise of China, reshaped the American labor market,” famously described the elements of “China Shock 1.0” in 2013. It was “a singular process—China’s late-1970s transition from Maoist central planning to a market economy, which rapidly moved the country’s labor and capital from collective rural farms to capitalist urban factories.”
As a result, a quarter of all US manufacturing jobs disappeared between 1999 and 2007. Today China’s manufacturing workforce tops 100 million, dwarfing the US manufacturing job count of 13 million. Those numbers peaked a decade ago, when China’s supply of low-cost labor crested. But these days China is clearly looking forward while this administration and its advisers are being left behind in the rear-view mirror.
Welcome to “China Shock 2.0” wrote Autor and Hanson in a recent New York Times editorial. But this time, their leaders are focusing on “key technologies of the 21st century…(and it) will last for as long as China has the resources, patience and discipline to compete fiercely.”
The highly respected Australian Strategic Policy Institute, funded by Australia’s Defense Department, has been tracking the volume of published innovative technology research in the US and China for over a quarter century. They see this as a measure of expert opinion on where the greatest innovations are originating. In 2007, measured over the prior four years, we led China in 60 of 64 “frontier technologies.”
Two decades later, the table has flipped, with China well ahead of the US in 57 of 64 categories measured.
Continue reading…