
Above the Fold

Biden’s cancer diagnosis should be a teaching moment

By DANIEL STONE

Joe Biden’s metastatic cancer diagnosis brings together two controversial issues: PSA testing for prostate cancer and presidential politics. To understand what is at stake, Americans need basic information about PSA testing and a frank discussion of the reasoning behind the prostate cancer screening decisions in the former president’s case. The dribble of information we’ve gotten only creates more uncomfortable questions for Biden and his family. The absence of an adequate explanation also does nothing to build public appreciation of these important medical issues.

The prostate, a walnut-shaped gland at the base of the bladder, produces “prostate specific antigen,” or PSA. Chemically classed as a glycoprotein, a sugar/protein aggregate, it leaks from the prostate into the blood, where its level can be measured with routine blood testing.

As men age, the prostate enlarges, increasing PSA levels. Screening tests take advantage of the fact that prostate cancer usually leaks more PSA than normal prostate tissue. And in the case of prostate cancer, the PSA typically rises relatively fast.

Beyond these basic facts, the PSA story becomes hazy. Although an elevated PSA may signal cancer, most men with an elevated PSA have benign prostate enlargement, not prostate cancer. Worse yet for screening, many men with prostate cancer have a mild and slow-moving disease that requires no treatment. They coexist with their disease rather than dying of it. This fact leads to the old adage that prostate cancer is the disease of long-lived popes and Supreme Court justices.

Medical advisory panels view PSA screening with skepticism partly due to the challenges of distinguishing benign PSA elevations from those related to cancer. Confirming a suspected cancer diagnosis requires prostate biopsies that can be painful and can produce side effects. Additionally, once a diagnosis is made, patients who might have coexisted with their disease may needlessly be subject to the harms of treatment, such as radiation and surgery. Finally, the benefits of early treatment of prostate cancer have been difficult to prove in clinical studies.

For all these reasons, medical advisory panels have discouraged widespread testing or recommended a nuanced approach, with careful discussion of risks and benefits between patients and their physicians.

Despite these concerns, the pendulum has swung toward more PSA testing in recent years. One reason is that improvements in radiographic imaging, such as MRI, allow for “active surveillance” that tracks early lesions for signs of spread, letting doctors distinguish between relatively benign cases of prostate cancer and those likely to progress. Interventions can then be directed more specifically to those at high risk.

In my medical practice, I have generally been an advocate for prostate cancer screening despite the controversy surrounding the clinical benefits. My experience leads me to believe that early diagnosis improves prognosis. But even without improved medical outcomes, patients and their families still benefit from early diagnosis for the purposes of planning. No one wants to be sideswiped by a late-stage symptomatic disease that limits both clinical and life choices.

Continue reading…

High-Profile Start-Ups Inato And Prenosis Show AI ‘Best Practice’

By MICHAEL MILLENSON

Treating artificial intelligence as just one ingredient in a business success recipe was a prominent theme at the MedCity INVEST 2025 conference, with this AI “best practice” advice epitomized by high-profile start-ups Inato and Prenosis.

“You need to build a business model that makes sense, then use AI,” cautioned Raffi Boyajian, principal at CIGNA Ventures and a panelist at the MedCity INVEST 2025 conference in Chicago.

That sentiment was echoed by fellow investors Aman Shah, vice president of new ventures at VNS Health, and Dipa Mehta, managing partner of Valeo Ventures. Both emphasized the necessity, in a tough economic environment, of finding a “burning platform” that could immediately boost a customer’s bottom line.

In a separate panel, high-profile start-ups Inato and Prenosis exemplified that approach to AI.

Innovation Customers Need

Inato was named by Fast Company magazine as one of the Most Innovative Companies of 2024, and that same year was chosen by Fierce Healthcare as one of its Fierce 15. The Paris-based company connects drugmakers with otherwise hard-to-enroll patients for clinical trials by means of an AI-based platform that has attracted more than 3,000 community research sites in over 70 countries. By making clinical trials “more accessible, inclusive, and efficient,” in the company’s words, and by breaking a shocking pattern in which 96% of trials do not include a representative population, Inato has established partnerships with more than a third of the top 30 pharmaceutical firms.

In describing its technology, Inato says it “assembled an AI agent to de-identify patient records, quickly determine which trials are relevant to each patient and evaluate patients against inclusion and exclusion criteria to assess eligibility” accurately and at scale. However, that phrase, “assembled an AI agent,” obscures a subtler process.

Liz Beatty, Inato’s co-founder and chief strategy officer, described using “off-the-shelf” large language models like ChatGPT and Claude and then optimizing them for a particular process with algorithms attuned to each model. As new models appear, the company adjusts accordingly. Although Beatty did not offer an analogy, there seemed an obvious parallel to a chef choosing the right ingredients in the right proportions to ensure a recipe’s success.
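
Beatty did not go into implementation detail, but the pattern she describes – general-purpose models wrapped in task-specific prompts and post-processing that can be swapped as better models ship – might look roughly like the hypothetical sketch below. The task names, model identifiers, and prompts are illustrative assumptions, not Inato’s actual design.

```python
# Hypothetical sketch of the pattern Beatty describes: off-the-shelf models,
# each wrapped with its own task-specific prompt and post-processing.
# Task names, model identifiers, and prompts are illustrative, not Inato's system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    model_name: str                     # which off-the-shelf LLM to call
    prompt_template: str                # prompt tuned to that model's quirks
    postprocess: Callable[[str], str]   # per-model cleanup of the raw output

# Each task gets its own model plus tuning; swap configs as new models appear.
TASK_CONFIGS = {
    "eligibility_check": ModelConfig(
        model_name="claude-latest",     # assumed placeholder identifier
        prompt_template=("Given these inclusion/exclusion criteria:\n{criteria}\n"
                         "Does this de-identified record qualify?\n{record}"),
        postprocess=lambda text: text.strip().splitlines()[0],
    ),
    "record_summary": ModelConfig(
        model_name="gpt-latest",        # assumed placeholder identifier
        prompt_template="Summarize the clinically relevant facts:\n{record}",
        postprocess=str.strip,
    ),
}

def run_task(task: str, call_model: Callable[[str, str], str], **fields) -> str:
    """Route a task to its configured model, then apply model-specific cleanup."""
    cfg = TASK_CONFIGS[task]
    raw = call_model(cfg.model_name, cfg.prompt_template.format(**fields))
    return cfg.postprocess(raw)
```

Swapping in a new model then means changing a configuration entry rather than rebuilding the product, which appears to be the point of the “right ingredients” approach.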

Said Beatty, “I hear, ‘Let’s apply AI to everything.’ That’s not the right answer.” Investors are convinced enough that Inato does have the right answer that they’ve poured in $38.2 million, according to Pitchbook.

AI has also been central to the success of Prenosis. The company’s Sepsis ImmunoScore was the first Food and Drug Administration-approved tool using AI to predict the imminent onset of an often-deadly condition known as sepsis. Integrated into the clinical workflow, it was hailed by Time magazine as one of “the best inventions of 2024,” while Bobby Reddy Jr., Prenosis co-founder and chief executive officer, was subsequently named to the Time100 Health List recognizing influential individuals in global health.

Chicago-based Prenosis describes itself as an artificial intelligence company tailoring therapy to individual patient biology as part of “a new era of precision medicine.” As with Inato, though, the AI headline hides a more complex reality.

Sepsis is a heterogeneous syndrome with close to 200 different symptoms possibly at play. “AI brings it together so we can understand the process of deterioration,” Reddy said. The company used machine learning to develop and validate a sophisticated algorithm, according to a New England Journal of Medicine study.

But the right AI was only one product ingredient. Prenosis also assembled a database of thousands of patients and set up a “wet lab” to find sepsis biomarkers – and to use for other conditions as the company expands its offerings – based on what is now 120,000 blood samples. Adding biomarkers to EHR data enabled the company to position itself as a more accurate, real-time complement to the sepsis tool Epic provides free to hospitals using its EHR.

“That’s our competitive advantage,” Reddy said.

Focused AI

Just as Inato focused on AI for its specific purposes, Prenosis also focused on a crucial goal. The AI was used “first and foremost to fit the FDA model for approval,” said Reddy.

Sepsis is caused by an overactive immune response to infection. It costs the U.S. health care system billions of dollars annually while claiming the lives of at least 350,000 people – more than all cancers combined, according to the Prenosis website. The World Health Organization has labeled sepsis a threat to global health, and the economic impact of this one condition amounts to an average of 2.7% of a nation’s health care costs, according to a 2022 study.

Unmentioned by Reddy at the INVEST conference was that a U.S. hospital’s performance in preventing and effectively treating sepsis is a factor in value-based payment by Medicare and in the hospital patient safety score published by the Leapfrog Group. A “burning platform,” indeed.

For Prenosis and Inato alike, AI best practice is based on practicality. As Reddy put it, AI is “just a tool” in product development.

Michael L. Millenson is president of Health Quality Advisors and a regular THCB contributor. This first appeared in his column at Forbes.

How Bright A Light Do We Shine This Memorial Day?

By MIKE MAGEE 

According to Veterans Administration historians, the origin of Memorial Day dates back to 1864, when three women from Boalsburg, Pennsylvania, joined in grief to decorate the graves of family members who had died in the Civil War. A year later, other townspeople joined in, and in 1866 women in Columbus, Mississippi, joined the event in honor of fallen Confederate soldiers. That was 14 years after the publication of Harriet Beecher Stowe’s Uncle Tom’s Cabin in 1852.

In the first year it was published, Uncle Tom’s Cabin sold over 300,000 copies. Author and critic Alfred Kazin called it “The most powerful and most enduring work ever written about American slavery.” Its prominence in the American lexicon speaks for itself, and its relevance to goodness and governance, leadership by legislation, women’s roles in creating civil societies, and the underpinnings of Christianity in the unrealized potential of the American dream speaks to the continued value of the publication.

On page 2 of the preface, Harriet Beecher Stowe comments on “memorializing” human hatred and cruelty to the ash bin of history. She writes, “It is a comfort to hope, as so many of the world’s sorrows and wrongs have, from age to age, been lived down, so a time shall come when sketches similar to these shall be valuable only as memorials of what has long ceased to be.” 

To this, we must respond today, “Not yet. There is work that remains.” 

On the last page of her book, Harriet Beecher Stowe in 1852 reflects (as if on our modern day predicament), “This is an age of the world when nations are trembling and convulsed. A mighty influence is abroad, surging and heaving the world, as with an earthquake. And is America safe? Every nation that carries in its bosom great and unredressed injustice has in it the elements of this last convulsion.”

To this, we believers in human goodness and democracy must respond, “We will never be free, safe and healthy if our elected leaders promote policies – whether here or abroad – that belie our finer instincts, promote fear, and trigger predation.” 

The White House, until recently, has largely been a sacred and treasured shrine. Back in 2013, our President at the time, Barack Obama, hosted our former President, George H.W. Bush, and his family there to commemorate the 5,000th “Daily Point of Light” award, a program the former President had launched to “honor individuals who demonstrate the transformative power of service, and who are driving significant and sustained impact through their everyday actions and words that light the path for other points of light.”

Here in part, is what President Obama said that day: “…given the humility that’s defined your life, I suspect it’s harder for you to see something that’s clear to everybody else around you, and that’s how bright a light you shine — how your vision and example have illuminated the path for so many others, how your love of service has kindled a similar love in the hearts of millions here at home and around the world. And, frankly, just the fact that you’re such a gentleman and such a good and kind person I think helps to reinforce that spirit of service. So on behalf of us all, let me just say that we are surely a kinder and gentler nation because of you and we can’t thank you enough.” 

Just a dozen years ago, just to be publicly “thanked” seemed enough. And “active citizenship” as a member of this great nation was viewed by many – by most – as a duty and an honor – even to the point of sacrificing one’s life in defense of this nation. 

That, after all, is what Memorial Day commemorates. Action is required, as is goodness and virtue by example and daily behavior. 

We continue to struggle in the shadow of Uncle Tom’s Cabin. We lack perfection, but we certainly could, and should, do better. Because, to be healthy in America, to realize our full potential, to be civilized, as Ralph Waldo Emerson said, “to make good the cause of freedom against slavery you must be… Declaration of Independence walking.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex. (Grove/2020)

This One Weird Trick Can Fix U.S. Healthcare

By OWEN TRIPP

Creating a healthcare experience that builds trust and delivers value to people and purchasers isn’t a quick fix, but it’s the only way to reverse the downward spiral of high costs and poor outcomes

Entrepreneurs like to say the U.S. healthcare system is “broken,” usually right before they explain how they intend to fix it. I have a slightly different diagnosis.

The U.S. healthcare system is the gold standard. Our institutions and enterprises, ranging from 200-year-old academic medical centers to digital health startups, are the clear world leaders in clinical expertise, research, innovation, and technology. Capabilities-wise, the system is far from broken.

What’s broken is trust in the system, because of the glaring gap between what the system is capable of and what it actually delivers. Every day across the country, people drive past world-class hospitals, but then have to wait months for a primary care appointment. They deduct hundreds for healthcare from each paycheck, only to be told at the pharmacy that their prescription isn’t covered. While waiting for a state-of-the-art scan, they’re handed a clipboard and asked to recap their medical history.

This whipsaw experience isn’t due to incompetence or poor infrastructure. It’s the product of the dysfunction between the two biggest players in healthcare: providers and insurers, two entities that have optimized the hell out of their respective businesses, in opposition to one another, and inadvertently at the expense of people.

Historically, hospitals and health systems — including those 200-year-old AMCs — have dedicated themselves fully to improving and saving lives. I’m not saying they’ve lost sight of this, but until recently, margin took a back seat to mission. With industry consolidation and the persistence of the fee-for-service model, however, providers have been forced to maximize the volume of care at the highest possible unit cost, which in turn has become a main driver of the out-of-control cost trend at large.

This push from providers has prompted an equal-and-opposite reaction from insurers. Though the industry has been villainized (rightly, in some cases) for a heavy-handed approach to utilization management and prior authorization, insurers are merely doing what their primary customers — private employers — have hired them to do: manage cost. Insurers have gotten very good at it, not just by limiting care, but also through product innovation that has created more tiers and cost-sharing options for plan sponsors.

Meanwhile, healthcare consumers (people!) have been sidelined amid this tug-of-war. Doctors and hospitals say they’re patient-centered, and insurers say they’re member-centric — but the jargon is a dead giveaway. Each side is focused on their half of the pie, and neither is accountable for the whole person: the person receiving care and paying for care, not to mention navigating everything in between.

It should come as no surprise that trust is falling. Only 56% of Americans trust their health insurer to act in their best interest. Even trust in doctors — the good guys — has plummeted. In a startling reversal from just four years ago, a whopping 76% of people believe hospitals care more about revenue than patient care.

[Chart: Loss of Trust in Healthcare Providers. Hospitals in the U.S. are mostly focused on caring for patients vs. making money. Source: Jarrard/Chartis (2025)]

This trust deficit is the root cause of so many healthcare problems. It’s the reason people disengage, delay and skip care, and end up in the ER or OR for preventable issues. When a good chunk of the population falls into this cycle, as they have, you end up with the status quo: unrelenting costs and deteriorating outcomes that drag down households, businesses, and the industry itself.

There’s no quick fix. Despite what my fellow entrepreneurs might say, no one point solution or technology (no, not even AI) can rebuild trust. The only way to reverse the downward spiral is by serving up a modern experience that is genuinely designed around people’s needs.

Continue reading…

How to Buy and Sell AI in health care? Not Easy.

By MATTHEW HOLT

It was not so long ago that you could create one of those maps of health care IT or digital health and be roughly right. I did it myself back in the Health 2.0 days, including the old subcategories of the “Rebel Alliance of New Provider Technologies” and the “Frontier of Patient Empowerment Technologies.”

But those easy days of matching a SaaS product to the intended user, and differentiating it from others are gone. The map has been upended by the hurricane that is generative AI, and it has thrown the industry into a state of confusion.

For the past several months I have been trying to figure out who is going to do what in AI health tech. I’ve had lots of formal and informal conversations, read a ton, and been to three conferences in the past few months, all focused squarely on this topic. And it’s clear no one has a good answer.

Of course this hasn’t stopped people trying to draw maps like this one from Protege. As you can tell there are hundreds of companies building AI first products for every aspect of the health care value (or lack of it!) chain.

But this time it’s different. It’s not at all clear that AI will stop at the border of a user or even have a clearly defined function. It’s not even clear that there will be an “AI for Health Tech” sector.

This is a multi-dimensional issue.

The main AI LLMs–ChatGPT (OpenAI/Microsoft), Gemini (Google/Alphabet), Claude (Anthropic/Amazon), Grok (X/Twitter), Llama (Meta/Facebook)–are all capable of incredible work inside of health care and of course outside it. They can now write in any language you like, code, and create movies, music and images, and they are all getting better and better.

And they are fantastic at interpretation and summarization. I literally dumped a pretty incomprehensible, dense 26-page CMS RFI document into ChatGPT the other day, and in a few seconds it told me what they asked for and what they were actually looking for (that unwritten subtext). The CMS official who authored it was very impressed and a little upset they weren’t allowed to use it. If I had wanted to help CMS, it would have written the response for me too.
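
The mechanics of that kind of summarization are now almost trivial to wire up programmatically. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are placeholder assumptions rather than exactly what was used above, and obviously nothing sensitive should be pasted into a public API.

```python
# Minimal sketch of LLM document summarization.
# Model name and prompt are assumptions; any capable chat model would do.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize_rfi(document_text: str) -> str:
    """Ask for both the explicit asks and the unwritten subtext of a policy document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": "You are a health policy analyst."},
            {"role": "user", "content": (
                "Summarize what this RFI explicitly asks for, and then what the "
                "agency actually seems to be looking for:\n\n" + document_text)},
        ],
    )
    return response.choices[0].message.content

# Usage: print(summarize_rfi(open("cms_rfi.txt").read()))
```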

The big LLMs are also developing “agentic” capabilities. In other words, they are able to conduct multistep business and human processes.

Right now they are being used directly by health care professionals and patients for summaries, communication and companionship. Increasingly they are being used for diagnostics, coaching and therapy. And of course many health care organizations are using them directly for process redesign.

Meanwhile, the core workhorses of health care are the EMRs used by providers, and the biggest kahuna of them all is Epic. Epic has a relationship with Microsoft, which has its own AI play and its own strong relationship with OpenAI – or at least as strong as investing $13bn in a non-profit will make your relationship. Epic is now using Microsoft’s AI both in note summaries, patient communications and the like, and in DAX, the ambient AI scribe from Microsoft’s subsidiary Nuance. Epic also has a relationship with DAX rival Abridge.

But that’s not necessarily enough, and Epic is clearly building its own AI capabilities. In an excellent review over at Health IT Today, John Lee breaks down Epic’s non-trivial use of AI in its clinical workflow:

  • The platform now offers tools to reorganize text for readability and to generate succinct, patient-friendly summaries, hospital course summaries, discharge instructions, and even narrative instructions translated from discrete clinical data.
  • We will be able to automatically destigmatize language in notes (e.g., changing “narcotic abuser” to “patient has opiate use disorder”).
  • Even as a physician, I sometimes have a hard time deciphering the shorthand that my colleagues so frequently use. Epic showed how AI can translate obtuse medical shorthand – like “POD 1 sp CABG. HD stable. Amb w asst.” – into plain language: “Post op day 1 status post coronary bypass graft surgery. Hemodynamically stable. Patient is able to ambulate with assist.”
  • For nurses, ambient documentation and AI-generated shift notes will be available, reducing manual entry and freeing up time for patient care.

And of course Epic isn’t the only EHR (honestly!). Its competitors aren’t standing still. Meditech’s COO Helen Waters gave a wide-ranging interview to HISTalk. I paid particular attention to her discussion of their work with Google on AI, and I am quoting almost all of it:

This initial product was built off of the BERT language model. It wasn’t necessarily generative AI, but it was one of their first large language models. The feature in that was called Conditions Explorer, and that functionality was really a leap forward. It was intelligently organizing the patient information directly from within the chart, and as the physician was working in the chart workflow, offering both a longitudinal view of the patient’s health by specific conditions and categorizing that information in a manner that clinicians could quickly access relevant information to particular health issues, correlated information, making it more efficient in informed decision making.  <snip>

Beyond that, with the Vertex AI platform and certainly multiple iterations of Gemini, we’ve walked forward to offer additional AI offerings in the category of gen AI, and that includes both a physician hospital course-of-stay narrative at the end of a patient’s time in the hospital to be discharged. We actually generate the course-of-stay, which has been usually beneficial for docs to not have to start to build that on their own.

We also do the same for nurses as they switch shifts. We give a nurse shift summary, which basically categorizes the relevant information from the previous shift and saves them quite a bit of time. We are using the Vertex AI platform to do that. And in addition to everyone else under the sun, we have obviously delivered and brought live ambient scribe capabilities with AI platforms from a multitude of vendors, which has been successful for the company as well.

The concept of Google and the partnership remains strong. The results are clear with the vision that we had for Expanse Navigator. The progress continues around the LLMs, and what we’re seeing is great promise for the future of these technologies helping with administrative burdens and tasks, but also continued informed capacities to have clinicians feel strong and confident in the decisions they’re making. 

The voice capabilities in the concept of agentic AI will clearly go far beyond ambient scribing, which is both exciting and ironic when you think about how the industry started with a pen way back when, we took them to keyboards, and then we took them to mobile devices, where they could tap and swipe with tablets and phones. Now we’re right back to voice, which I think will be pleasing provided it works efficiently and effectively for clinicians.


So if you read–not even between the lines but just what they are saying–Epic, which dominates AMCs and big non-profit health systems, and Meditech, the EMR for most big for-profit systems like HCA, are both building AI into their platforms for almost all of the workflow that most clinicians and administrators use.

I raised this issue a number of different ways at a meeting hosted by Commure, the General Catalyst-backed provider-focused AI company. Commure has been through a number of iterations in its 8-year life, but it is now an AI platform on which it is building several products or capabilities. (For more, here’s my interview with CEO Tannay Tandon). These include (so far!) administration, revenue cycle, inventory and staff tracking, ambient listening/scribing, clinical workflow, and clinical summarization. You can bet there’s more to come via development or acquisition. In addition, Commure is doing this not only with the deep-pocketed backing of General Catalyst but also with partial ownership from HCA–incidentally Meditech’s biggest client. That means HCA has to figure out what Commure is doing compared to Meditech.

Finally there’s also a ton of AI activity using the big LLMs internally within AMCs and in providers, plans and payers generally. Don’t forget that all these players have heavily customized many of the tools (like Epic) which external vendors have sold them. They are also making their AI vendors “forward deploy” engineers to customize their AI tools to the clients’ workflow. But they are also building stuff themselves. For instance Stanford just released a homegrown product that uses AI to communicate lab results to patients. Not bought from a vendor, but developed internally using Anthropic’s Claude LLM. There are dozens and dozens of these homegrown projects happening in every major health care enterprise. All those data scientists have to keep busy somehow!

So what does that say about the role of AI?

First it’s clear that the current platforms of record in health care–the EHRs–are viewing themselves as massive data stores and are expecting that the AI tools that they and their partners develop will take over much of the workflow currently done by their human users.

Second, the law of tech has usually been that water flows downhill. More and more companies and products end up becoming features on other products and platforms. You may recall that there used to be a separate set of software for writing (Wordperfect), presentation (Persuasion), spreadsheets (Lotus123) and now there is MS Office and Google Suite. Last month a company called Brellium raised $16m from presumably very clever VCs to summarize clinical notes and analyze them for compliance. Now watch them prove me wrong, but doesn’t it seem that everyone and their dog has already built AI to summarize and analyze clinical notes? Can’t one more analysis for compliance be added on easily? It’s a pretty good bet that this functionality will be part of some bigger product very soon.

(By the way, one area that might be distinct is voice conversation, which right now does seem to have a separate set of skills and companies working in it because interpreting human speech and conversing with humans is tricky. Of course that might be a temporary “moat” and these companies or their products may end up back in the main LLM soon enough). 

Meanwhile, Vince Kuraitis, Girish Muralidharan and the late Jody Ranck just wrote a three-part series on how the EMR is in any case moving toward becoming a bigger unified digital health platform, which suggests that the clinical part of the EMR will be integrated with all the other process stuff going on in health systems. Think staffing, supplies, finance, marketing, etc. And of course there’s still the ongoing integration between EMRs and medical devices and sensors across the hospital and eventually the wider health ecosystem.

So this integration of data sets could quickly lead to an AI dominated super system in which lots of decisions are made automatically (e.g. AI tracking care protocols as Robbie Pearl suggested on THCB a while back), while some decisions are operationally made by humans (ordering labs or meds, or setting staffing schedules) and finally a few decisions are more strategic. The progress towards deep research and agentic AI being made by the big LLMs has caused many (possibly including Satya Nadella) to suggest that SaaS is dead. It’s not hard to imagine a new future where everything is scraped by the AI and agents run everything globally in a health system.

This leads to a real problem for every player in the health care ecosystem.

If you are buying an AI system, you don’t know if the application or solution you are buying is going to be cannibalized by your own EHR, or by something that is already being built inside your organization.

If you are selling an AI system, you don’t know if your product is a feature of someone else’s AI, or if the skill is in the prompts your customers want to develop rather than in your tool. And worse, there’s little penalty in your potential clients waiting to see if something better and cheaper comes along.

And this is happening in a world in which there are new and better LLM and other AI models every few months.

I think for now the issue is that, until we get a clearer understanding of how all this plays out, there will be lots of false starts, funding rounds that don’t go anywhere, and AI implementations that don’t achieve much. Reports like the one from Sofia Guerra and Steve Kraus at Bessemer may help, giving 59 “jobs to be done.” I’m just concerned that no one will be too sure what the right tool for the job is.

Of course I await my robot overlords telling me the correct answer.

Matthew Holt is the Publisher of THCB

Patrick Quigley, Sidecar Health

Patrick Quigley is the CEO of Sidecar Health, a start-up health insurance company with a new approach to how employers and employees buy health care. Sidecar is betting on the idea of radical pricing transparency. Instead of going down the contracting and narrow-network route, Sidecar presents average area pricing and individual provider pricing to its members, and rewards them if they go to lower-cost providers. How does this all work, and is it real? Patrick took me through an extensive demo and explained how it all works. There’s a decent amount of complexity behind the scenes, but Sidecar is creating something very rare in America: a priced health care market allowing consumers to choose–Matthew Holt

And Now for Some Fun Future

By KIM BELLARD

I feel like I’ve been writing a lot about futures I was pretty worried about, so I’m pleased to have a couple developments to talk about that help remind me that technology is cool and that healthcare can surely use more of it.

First up is a new AI algorithm called FaceAge, published last week in The Lancet Digital Health by researchers at Mass General Brigham. It uses photographs to estimate biological age – as opposed to chronological age. We all know that different people seem to age at different rates – I mean, honestly, how old is Paul Rudd??? – but until now the link between how people look and their health status was intuitive at best.

Moreover, the algorithm can help predict survival outcomes for various types of cancer.

The researchers trained the algorithm on almost 59,000 photos from public databases, then tested against the photos of 6,200 cancer patients taken prior to the start of radiotherapy. Cancer patients appeared to FaceAge some five years older than their chronological age. “We can use artificial intelligence (AI) to estimate a person’s biological age from face pictures, and our study shows that information can be clinically meaningful,” said co-senior and corresponding author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) program at Mass General Brigham.

Curiously, the algorithm doesn’t seem to care about whether someone is bald or has grey hair, and may be using more subtle clues, such as muscle tone. It is unclear what difference makeup, lighting, or plastic surgery makes. “So this is something that we are actively investigating and researching,” Dr. Aerts told The Washington Post. “We’re now testing in various datasets [to see] how we can make the algorithm robust against this.”

Moreover, it was trained primarily on white faces, which the researchers acknowledge as a deficiency. “I’d be very worried about whether this tool works equally well for all populations, for example women, older adults, racial and ethnic minorities, those with various disabilities, pregnant women and the like,” Jennifer E. Miller, the co-director of the program for biomedical ethics at Yale University, told The New York Times.  

The researchers believe FaceAge can be used to better estimate survival rates for cancer patients. It turns out that when physicians try to gauge them simply by looking, their guess is essentially like tossing a coin. When paired with FaceAge’s insights, the accuracy can go up to about 80%.

Dr. Aerts says: “This work demonstrates that a photo like a simple selfie contains important information that could help to inform clinical decision-making and care plans for patients and clinicians. How old someone looks compared to their chronological age really matters—individuals with FaceAges that are younger than their chronological ages do significantly better after cancer therapy.”

I’m especially thrilled about this because ten years ago I speculated about using selfies and facial recognition AI to determine if we had conditions that were prematurely aging us, or even if we were just getting sick. It appears the Mass General Brigham researchers agree. “This opens the door to a whole new realm of biomarker discovery from photographs, and its potential goes far beyond cancer care or predicting age,” said co-senior author Ray Mak, MD, a faculty member in the AIM program at Mass General Brigham. “As we increasingly think of different chronic diseases as diseases of aging, it becomes even more important to be able to accurately predict an individual’s aging trajectory. I hope we can ultimately use this technology as an early detection system in a variety of applications, within a strong regulatory and ethical framework, to help save lives.”

The researchers acknowledge that much has to be accomplished before it is introduced for commercial purposes, and that strong oversight will be needed to ensure, as Dr. Aerts told WaPo, “these AI technologies are being used in the right way, really only for the benefit of the patients.” As Daniel Belsky, a Columbia University epidemiologist, told The New York Times: “There’s a long way between where we are today and actually using these tools in a clinical setting.”

The second development is even more out there. Let me break down the CalTech News headline: “3D Printing.” OK, you’ve got my attention. “In Vivo.” Color me highly intrigued. “Using Sound.” Mind. Blown.

That’s right. This team of researchers has “developed a method for 3D printing polymers at specific locations deep within living animals.”

Continue reading…

Health Deserves A Vision More Capacious Than Dashboard Metrics

By DAVID SHAYWITZ

Consumer health and wellness is experiencing a flurry of activity. 

The lab testing company Function (motto: “It’s time to own your health”) acquired Ezra, a whole body MRI company promising “the world’s most advanced longevity scan.”   

Oura, maker of the popular smart ring, recently added an integration for continuous glucose measurement as well as the ability to calculate meal nutrition based on a photo. Oura also hired Dr. Ricky Bloomfield as its first Chief Medical Officer; Dr. Bloomfield had previously served as Clinical and Health Informatics Lead at Apple, and is known for his expertise in health data interoperability. 

Meanwhile, Oura competitor Whoop, maker of a smart band, just announced the latest versions of its device, with the ability to monitor blood pressure and ECG and to assess what it describes as a measure of biological age, which it calls “Whoop Age.” Whoop now says it seeks to “unlock human performance and healthspan,” enticing users with the pitch, “Get a complete picture of your health.”

Towards a Personal Health Operating System (OS)

Notice a pattern yet? 

What unites these approaches and so many others, as the industry newsletter Fitt Insider (FI) recently observed, is that they reflect an attempt to create a “personal health OS,” intended to “give individuals agency over their well-being” and, more generally, to wrest control back from a health system that’s often perceived (especially by young adults) as somewhere between useless and obstructive.

Citing a recent Edelman survey, FI reports,

 …nearly half of young adults believe well-informed people can be as knowledgeable as doctors, two-thirds see lived experience as expertise, and 61% view institutions as barriers to care.

Fed up with reactive care, many already collect data across wearables, lifestyle apps, DTC diagnostics, and more, but most are siloed. Rolling up, Function is architecting a unified platform capable of generating clinically relevant insights from raw inputs.

FI points to the proliferation of companies like Bright OS, Gyroscope, and Guava Health focused on “day-to-day data management,” as well as startups like Superpower (“Delivering concierge-level metrics minus the PCP”) and Mito Health (a “pocket-sized AI doctor” that “generates comprehensive digital health profiles by merging labs, medical records, family history, lifestyle info, and more.”)

AI seems poised to play an increasingly central role in many of these companies. 

FI speculates,

A step further, end-to-end LLMs could close the loop, linking cause and effect, turning insights into actions, syncing with PCPs, and laying the foundation for an AI-powered medical future.

This is a good time to take a deep breath – as well as a closer, more critical look at this vision of consumer-empowered, data-fortified health.

A Powerful Vision

Unquestionably, there’s a lot to embrace here, including in particular:

  • The opportunity for individuals to gather more and richer health data from a greater variety of sources, particularly wearables;
  • The increased possibility of relevant insights (a key deficiency of early “Quantified Self” efforts) from these data;
  • The explicit centralization of your health data around you (Superpower’s tagline is “Health Data, In One Place”), a long-promised but often frustratingly elusive healthcare goal in practice. Today, still (still!), so many patients find themselves having to beg and plead for efficient access to their own health information, data that health systems tend to view as a competitive advantage and aren’t eager to let go.

A tech-enabled approach to health, with more abundant data about you, explicitly in your control, that could lead to healthier behaviors, represents the sort of progress that deserves to be celebrated.

At the same time, when I look at many of these approaches to health, I see two broad categories of concerns.

Concern One: Plural of Fragile Data May Not Be Insight

The first, perhaps more concrete worry, is that, to paraphrase comedian Dennis Miller, “two of [crap] is [crap],” and simply the collection of a lot of data, much of which may be fragile, isn’t sure to translate into brilliant insight, even if the magical power of AI is fervently invoked.

In an especially incisive “Ground Truths” blog post focused on “The business of promoting longevity and healthspan,” Dr. Eric Topol writes that “getting hundreds of biomarker results and imaging tests in an individual greatly increases the likelihood of false-positive results,” a concerning possibility.

I’ve discussed the challenge of false positives here, and get into some of the details around Bayes Theorem (which informs the assessment) here. The OG reference in this space may be this 2006 paper by Zak Kohane and colleagues, in which they introduce the term “incidentalome.”
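
To make the arithmetic behind that concern concrete, here is a back-of-the-envelope sketch in Python; the specificity, sensitivity, and prevalence figures are hypothetical round numbers chosen purely for illustration, not values from Topol’s post or the Kohane paper.

```python
# Back-of-the-envelope illustration of the false-positive problem.
# All numbers are hypothetical round figures, not values from the cited papers.

specificity = 0.95   # assume each biomarker correctly rules out 95% of healthy people
n_tests = 100        # a "comprehensive" panel of 100 roughly independent biomarkers

p_any_false_positive = 1 - specificity ** n_tests
print(f"Chance of at least one false positive across the panel: {p_any_false_positive:.1%}")
# ~99.4% -- nearly everyone gets flagged for something.

# Bayes' theorem for a single test: P(disease | positive result)
sensitivity = 0.90   # assume the test catches 90% of true cases
prevalence = 0.01    # assume the condition affects 1% of the screened population

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"Probability a positive result is a true positive: {ppv:.1%}")
# ~15% -- in a low-prevalence population, most positives are false alarms.
```

With enough tests, almost everyone gets flagged for something, and in a low-prevalence population most of those flags are false alarms – the “incidentalome” problem in a nutshell.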

To be fair, at least some of the proponents of extensive testing recognize the challenge of false positives but feel that the opportunity to collect dense data on individuals over time enables important inflections to be observed, a point Dr. Peter Attia explicitly emphasizes in Outlive; I discuss his “risk-management” mindset here.

Similarly, Nathan Price, a professor at the Buck Institute and the CSO of Thorne, has argued that close inspection (assisted by AI) of rich individual data could identify (for example) opportunities for supplement intervention.  These interventions may not make much of a difference on the population level (hence the paucity of persuasive clinical trial data for supplements, as Dr. Topol notes in his latest book, Super Agers – my WSJ review here), but could in selected individuals. (I also discuss Price here, here).

Proponents of the “personal health OS” also might emphasize the presence of tailwinds – the likelihood of improved predictions as measurement technologies continue to get better, denser data become available, and the AI tools become ever-more capable.  Perhaps we’re not quite at the point of realizing the future we imagine, advocates might argue, but we’re close enough to start to see what it might look like.

Continue reading…

Tracy DeTomasi, Callisto

Callisto is a non-profit tech company that helps survivors of sexual violence identify repeat offenders. The company was started by Jess Ladd a few years back, and Tracy DeTomasi later took over as CEO. It focuses on college campuses, where 90% of assaults are perpetrated by repeat offenders, who on average commit six offenses, and where 90% of assaults go unreported. Callisto is providing an anonymous solution, and Tracy also gives a demo of how it works. This is a tough conversation about a difficult topic.–Matthew Holt

Seriously, Aon, you think weight loss drugs save money?

By AL LEWIS

Last month Aon, the major benefits consulting firm, released a “study” claiming:

A significant opportunity to reduce healthcare costs for employers and enhance overall workforce health through a comprehensive obesity management program that includes GLP-1 medications.

This, of course, is the opposite of what most researchers have shown.  And in the immortal words of the great philosophers Dire Straits: “Two men say they’re Jesus, one of them must be wrong.” We’ll shortly see who’s wrong (um, meaning about weight loss drugs) when we dive into the study in a minute. But first, let’s review Aon’s previous analyses. 

A brief history of Aon

Aon claimed that Accolade saved 8%, but it looks like they must coincidentally have been absent both on the day that the biostatistics professor explained how control groups work, and also on the day the fifth-grade math teacher explained how averages work. 

Then, they claimed that Lyra – which is a mental health company – achieved the following non-mental-health improvements in the set of patients who had at least one mental health encounter with one of their “220,000 high-quality providers”:

  • A 30% reduction in non-mental health-related ER visits
  • A 30% reduction in generic drug spending
  • A 20% reduction in specialty drug spending

Thanks in part to starting the y-axis at $4000 to improve the optics, Aon also revealed that Lyra achieved a very high “efficiency ratio”:

[Chart: Aon’s “efficiency ratio” graph for Lyra, with the y-axis starting at $4,000]

I can’t object to that finding because – despite three decades in this field, about 100 articles/interviews/quotes/citations, including in the Wall Street Journal, two trade-bestselling books and one Harvard Business School case study – I still don’t know what an “efficiency ratio” is, other than that it has nothing to do with comparing participants to non-participants in a mental health study. Apparently an “efficiency ratio” in healthcare measures how quickly a hospital turns over its inventory. So Aon’s use of the term recalls the immortal words of the great philosopher Bob Uecker: “Juuussst a bit outside.”

When publicly and privately asked to explain any of these things, Aon clammed up. That was likely wise on their part.

Continue reading…