Tag: AI

AAAA (the four A’s)

By JACOB REIDER

I haven’t blogged this yet, which kinda surprises me, since I find myself describing it often.  
Let’s start with an overview. We can look at health information through the lens of a lifecycle. 

The promise of Health Information Technology has been to help us – ideally to achieve optimal health in the people we serve.

The concept at the beginning of the HITECH Act was: “ADOPT, CONNECT, IMPROVE.”

These were the three pillars of the Meaningful Use Incentive programs.

Adopt technology so we can connect systems and therefore improve health.

Simple, yes?

Years later, one can argue that adoption and even connection have (mostly) been accomplished.

But the bridge between measurement and health improvement isn’t one we can easily cross with the current tools available to us.

Why?

Many of the technical solutions, particularly those that promote dashboards, are missing the most crucial piece of the puzzle. They get us close, but then they drop the ball.

And that’s where this simple “AAAA” model becomes useful.

For data and information to be truly valuable in health care, it needs to complete a full cycle.

It’s not enough to just collect and display. There are four essential steps:

1. Acquire. This is where we gather the raw data & information. EHR entries, device readings, patient-reported outcomes … the gamut of information flowing into our systems. Note that I differentiate between data (transduced representations of the physical world: blood pressure, CBC, the DICOM representation of an MRI, medications actually taken) and information (diagnoses, ideas, symptoms, the problem list, medications prescribed) because data is reliably true, while information is only possibly true, and possibly inaccurate. We need to weigh these two kinds of inputs properly – as data is a much better input than information. (I’ll resist the temptation to go off on a vector about data being a preferable input for AI models too … perhaps that’s another post.)

2. Aggregate. Once acquired, this data and information needs to be brought together, normalized, and cleaned up. This is about making disparate data sources speak the same language, creating a unified repository so we can ask questions of one dataset rather than tens or hundreds.

3. Analyze. Now we can start to make sense of it. This is where clinical decision support (CDS) begins to take shape: identifying trends, flagging anomalies, predicting risks, and highlighting opportunities for intervention. The analytics phase is where most current solutions end. A dashboard, an alert, a report … they all dump advice – like a bowl of spaghetti – into the lap of a human to sort it all out and figure out what to do.

Sure … you can see patterns, understand populations, and identify areas for improvement … All good things. The maturity of health information technology means that aggregation, normalization, and sophisticated analysis are now far more accessible and robust than ever before. We no longer need a dozen specialized point solutions to handle each step; modern platforms can integrate it all. This is good – but not good enough.

A dashboard or analytics report, no matter how elegant, is ultimately passive. It shows you the truth, but it doesn’t do anything about it.
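To make the first three A’s concrete, here’s a minimal sketch of an Acquire → Aggregate → Analyze pipeline in Python. Everything in it is invented for illustration – the Observation record, the unit-conversion table, the 180 mmHg threshold – a toy, not a description of any real product. The point is where it stops: at a printout.

```python
from dataclasses import dataclass

# Acquire: every input is tagged by kind. "data" is a transduced measurement
# (a device blood-pressure reading); "information" is an assertion (a
# problem-list diagnosis). The tag lets us weigh the two differently later.
@dataclass
class Observation:
    patient_id: str
    concept: str   # e.g., "systolic_bp"
    value: float
    unit: str
    kind: str      # "data" (measured) or "information" (asserted)

def acquire() -> list[Observation]:
    # Stand-ins for EHR feeds, device readings, patient-reported outcomes.
    return [
        Observation("p1", "systolic_bp", 182.0, "mmHg", "data"),
        Observation("p1", "systolic_bp", 24.3, "kPa", "data"),   # another source, different unit
        Observation("p2", "systolic_bp", 128.0, "mmHg", "data"),
    ]

# Aggregate: normalize disparate sources into one unit system so we can ask
# questions of one dataset rather than tens or hundreds.
TO_MMHG = {"mmHg": 1.0, "kPa": 7.50062}

def aggregate(observations: list[Observation]) -> list[Observation]:
    return [
        Observation(o.patient_id, o.concept,
                    round(o.value * TO_MMHG[o.unit], 1), "mmHg", o.kind)
        for o in observations
    ]

# Analyze: a toy decision-support rule that flags anomalies, trusting
# measured data more than asserted information.
def analyze(observations: list[Observation]) -> list[str]:
    flags = []
    for o in observations:
        if o.concept == "systolic_bp" and o.value >= 180:
            weight = "high confidence" if o.kind == "data" else "needs review"
            flags.append(f"{o.patient_id}: systolic {o.value} mmHg ({weight})")
    return flags

if __name__ == "__main__":
    # This is where most current solutions end: a "dashboard" (here, a print).
    # Nothing below acts on the alert -- which is exactly the gap described above.
    for flag in analyze(aggregate(acquire())):
        print("ALERT:", flag)
```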

Continue reading…

Avasure: Tech for helpful watching & remote care in hospitals

Lisbeth Votruba, Chief Clinical Officer, and Dana Peco, AVP of Clinical Informatics, at Avasure came on THCB to explain how their AI-enabled surveillance system improves the care team experience in hospitals and health care facilities. Their technology enables remote nurses and clinical staff to monitor patients and manage their care in a tight virtual nursing relationship with the staff at the facility, and also to deliver remote specialty consults. They showed their tools and services, which are now present in thousands of facilities and are helping with the nursing shortage. A demo and great discussion about how technology is improving the quality of care and the staff experience–Matthew Holt

What A Digital Health Doc Learned Recertifying His Boards

By JEAN LUC NEPTUNE

I recently got the good news that I passed the board recertification exam for the American Board of Internal Medicine (ABIM). As a bit of background, ABIM is a national physician evaluation organization that certifies physicians practicing internal medicine and its subspecialties (every other specialty has its own board certification body like ABOG for OB/GYNs and ABS for surgeons). Doctors practicing in most clinical environments need to be board-certified to be credentialed and eligible to work. Board certification can be accomplished by taking a test every 10 years or by participating in a continuing education process known as LKA (Longitudinal Knowledge Assessment). I decided to take the big 10-year test rather than pursue the LKA approach. For my fellow ABIM-certified docs out there who are wondering why I did the 10-year vs. the LKA, I’m happy to have a side discussion, but it was largely a career timing issue.

Of note, board certification is different from the USMLE (United States Medical Licensing Examination), which is the first in a series of licensing hurdles that doctors face in medical school and residency, involving three separate tests (USMLE Steps 1, 2, and 3). After completing the USMLE steps, acquiring a medical license is a separate state-mediated process (I’m active in NY and inactive in PA) with its own set of requirements that one needs to meet in order to practice in any one state. If you want to be able to prescribe controlled substances (opioids, benzos, stimulants, etc.), you will need a separate license from the DEA (the Drug Enforcement Administration, which is a federal entity). Simply put, you need to complete a lot of training, score highly on many standardized tests, and acquire a bunch of certifications (that cost a lot of money, BTW) to be able to practice medicine in the USofA.

What I learned in preparing for the ABIM recertification exam:

1.) There’s SO MUCH TO KNOW to be a doctor!

To prepare for the exam I used the New England Journal of Medicine (NEJM) review course, which included roughly 2,000 detailed case studies covering all the subspecialty areas of internal medicine. If you figure that each case involves mastery of dozens of pieces of medical knowledge, the exam requires a physician to remember tens of thousands of distinct pieces of information just for one specialty (remember that the medical vocabulary alone consists of tens of thousands of words). In addition, the individual facts mean nothing without a mastery of the basic underlying concepts, models, and frameworks of biology, biochemistry, human anatomy, physiology, pathophysiology, public health, etc. etc. Then there’s all the stuff you need to know for your specific specialty: medications, diagnostic frameworks, treatment guidelines, etc. It’s a lot. There’s a reason it takes the better part of a decade to gain any competency as a physician. So whenever I hear a non-doc saying that they’ve been reading up on XYZ and “I think I know almost as much as my doctor!”, my answer is always “No you don’t. Not at all. Not even a little bit. Stop it.”

2.) There is so much that we DON’T KNOW as doctors!

What was particularly striking to me as I did my review was how often I encountered a case or a presentation where:

  • It’s unclear what causes a disease,
  • The natural history of the disease is unclear,
  • We don’t know how to treat the disease,
  • We know how to treat the disease but we don’t know how the treatment works,
  • We don’t know what treatment is most effective, or
  • We don’t know what diagnostic test is best.
  • And on, and on, and on…

It’s estimated that there are more than 50,000 (!!) active journals in the field of biomedical sciences publishing more than 3 million (!!!!) articles per year. Despite all this knowledge generation there’s still so much we don’t know about the human body and how it works. I think some people find doctors arrogant, but anyone who really knows doctors and physician culture can tell you that doctors possess a deep sense of humility that comes out of knowing that you actually know very little.

3.) Someday soon the computer doctor will FOR SURE be smarter than the human doctor.

The whole time I was preparing for the test, I kept telling myself that there was nothing I was doing that a sufficiently advanced computer couldn’t accomplish.

Continue reading…

Penguin–The Flightless Bird of Health AI

Fawad Butt and Missy Krasner started a new AI company, which is building a big platform for both plans and providers in health care. Penguin Ai has a cute name, but is serious about trying to provide an underlying platform that is going to enable agents across the enterprise. They are health care only, as opposed to the big LLMs. But does health care need a separate AI company? Are the big LLMs going to give up health? And what about that Epic company? Join us as we discuss how this AI thing is going to be deployed across health care, and how Penguin is going to play. Oh, and they raised a $30m Series A to start getting it done–Matthew Holt

Dr Kaelee Brockway on AI for physical therapy training

Dr Kaelee Brockway is a professor of education and physical therapy who has built a series of AI-based “patients” for her PT students to train on. Kaelee is a pioneer in using these tools for training. She showed me the personas that she has built with LLMs that are now being used by her students to figure out how to train their soft skills–a huge part of any training. This is a great demo and discussion about how clinical professionals are going to use LLMs in their training and their work–Matthew Holt

Owen Tripp, Included Health, talks AI

“So far AI in health care is being used to drive existing profits on workflows and increase revenue per event that patients in the end have to pay for. That’s not a win for anyone long term!” Included Health’s CEO Owen Tripp dives into the present and future use of AI, LLMs, patient self-triage and self-treatment, and all that. Another interesting conversation on where patient-facing AI will end up — Matthew Holt

BTW here’s my Conversation with Ami Parekh & Ankoor Shah

Here’s Owen Tripp discussing Included Health.

Here’s Owen’s piece on AI, What’s in your chatbot?

Après AI, le Déluge

By KIM BELLARD

I have to admit, I’ve steered away from writing about AI lately. There’s just so much going on, so fast, that I can’t keep up. Don’t ask me how GPT-5 differs from GPT-4, or what Gemini does versus Genie 3. I know Microsoft really, really wants me to use Copilot, but so far I’m not biting. DeepMind versus DeepSeek?  Is Anthropic the French AI, or is that Mistral?  I’m just glad there are younger, smarter people paying closer attention to all this.

Still, I’m very much concerned about where the AI revolution is taking us, and whether we’re driving it or just along for the ride. In Fast Company, Sebastian Buck, co-founder of the “future design company” Enso, posits a great attitude about the AI revolution:

The scary news is: We have to redesign everything.

The exciting news is: We get to redesign everything.

He goes on to explain:

Technical revolutions create windows of time when new social norms are created, and where institutions and infrastructure are rethought. This window of time will influence daily life in myriad ways, from how people find dates, to whether kids write essays, to which jobs require applications, to how people move through cities and get health diagnoses.

Each of these are design decisions, not natural outcomes. Who gets to make these decisions? Every company, organization, and community that is considering if—and how—to adopt AI. Which almost certainly includes you. Congratulations, you’re now part of designing a revolution.

I want to pick out one area in particular where I hope we redesign everything intentionally, rather than in our normal short-sighted, laissez-faire manner: jobs and wealth.

It has become widely accepted that offshoring led to the demise of U.S. manufacturing and its solidly middle-class, blue-collar jobs over the last 30 years. There’s some truth to that, but automation was arguably more of a factor – and that was before AI and today’s more versatile robots. More to the point, today’s AI and robots aren’t coming just to manufacturing but pretty much to every sector.

Former Transportation Secretary Pete Buttigieg warned:

The economic implications are the ones that I think could be the most disruptive, the most quickly. We’re talking about whole categories of jobs, where — not in 30 or 40 years, but in three or four — half of the entry-level jobs might not be there. It will be a bit like what I lived through as a kid in the industrial Midwest when trade and automation sucked away a lot of the auto jobs in the nineties — but ten times, maybe a hundred times more disruptive.

Mr. Buttigieg is no AI expert, but Erik Brynjolfsson, senior fellow at Stanford’s Institute for Human-Centered Artificial Intelligence and director of the Stanford Digital Economy Lab, is. When asked about those comments, he told Morning Edition: “Yeah, he’s spot on. We are seeing enormous advances in core technology and very little attention is being paid to how we can adapt our economy and be ready for those changes.”

You could look, for example, at the big layoffs in the tech sector lately. Natasha Singer, writing in The New York Times, reports on how computer science graduates have gone from expecting mid-six-figure starting salaries to working at Chipotle (and wait till Chipotle automates all those jobs). The Federal Reserve Bank of New York says unemployment for computer science & computer engineering majors is lower than for anthropology majors but, astonishingly, higher than for pretty much every other major.

And don’t just feel sorry for tech workers. Neil Irwin of Axios warns: “In the next job market downturn — whether it’s already starting or years away — there just might be a bloodbath for millions of workers whose jobs can be supplanted by artificial intelligence.” He quotes Federal Reserve governor Lisa Cook: “AI is poised to reshape our labor market, which in turn could affect our notion of maximum employment or our estimate of the natural rate of unemployment.”

In other words, you ain’t seen nothing yet.

While manufacturing was taking a beating in the U.S. over the last thirty years, tech boomed. Most of the world’s largest and most profitable companies are tech companies, and most of the world’s richest people got their wealth from tech. Those are, by and large, the ones investing most heavily in AI — most likely to benefit from it.

Professor Brynjolfsson worries about how we’ll handle the transition to an AI economy:

The ideal thing is that you find ways of compensating people and managing a transition. Sad to say, with trade, we didn’t do a very good job of that. A lot of people got left behind. It would be a catastrophe if we made a similar mistake with technology, [which] also is going to create enormous amounts of wealth, but it’s not going to affect everyone evenly. And we have to make sure that people manage that transition.

“Catastrophe” indeed. And I fear it is coming.

Continue reading…

China Goes “Democratic” on Artificial General Intelligence

By MIKE MAGEE

Last week, following a visit to the White House, Jensen Huang instigated a wholesale reversal of policy from Trump, who had been blocking Nvidia’s sales of its H20 chip to China. What did Jensen say?

We can only guess, of course. But he likely shared the results of a proprietary report from noted AI researchers at Digital Science suggesting that an immediate policy course correction was critical. Beyond the fact that over 50% of all AI researchers are currently based in China, their study documented that “In 2000, China-based scholars produced just 671 AI papers, but in 2024 their 23,695 AI-related publications topped the combined output of the United States (6378), the United Kingdom (2747), and the European Union (10,055).”

Daniel Hook, CEO of Digital Science, was declarative in the opening of the report, stating that “U.S. influence in AI research is declining, with China now dominating.”

China now supports about 30,000 AI researchers compared to only 10,000 in the US. And that number is shrinking thanks to US tariff and visa shenanigans, and overt attacks by the administration on our premier academic institutions.

Economics professors David Autor (MIT) and Gordon Hanson (Harvard), known for “their research into how globalization, and especially the rise of China, reshaped the American labor market,” famously described the elements of “China Shock 1.0” in 2013. It was “a singular process—China’s late-1970s transition from Maoist central planning to a market economy, which rapidly moved the country’s labor and capital from collective rural farms to capitalist urban factories.”

As a result, a quarter of all US manufacturing jobs disappeared between 1999 and 2007. Today China’s manufacturing workforce tops 100 million, dwarfing the US manufacturing job count of 13 million. Those numbers crested a decade ago, when China’s supply of low-cost labor peaked. But these days China is clearly looking forward, while this administration and its advisers are left behind in the rear-view mirror.

Welcome to “China Shock 2.0,” wrote Autor and Hanson in a recent New York Times editorial. But this time, China’s leaders are focusing on “key technologies of the 21st century…(and it) will last for as long as China has the resources, patience and discipline to compete fiercely.”

The highly respected Australian Strategic Policy Institute, funded by Australia’s Department of Defence, has been tracking the volume of published innovative technology research in the US and China for over a quarter century. They see this as a measure of expert opinion on where the greatest innovations are originating. In 2007, we led China over the prior four years in 60 of 64 “frontier technologies.”

Two decades later, the table has flipped, with China well ahead of the US in 57 of 64 categories measured.

Continue reading…

Healthcare AI: What’s in your chatbot?

By OWEN TRIPP

So much of the early energy around generative AI in healthcare has been geared toward speed and efficiency: freeing doctors from admin tasks, automating patient intake, streamlining paperwork-heavy pain points. This is all necessary and helpful, but much of it boils down to established players optimizing the existing system to suit their own needs. As consumers flock to AI for healthcare, their questions and needs highlight the limits of off-the-shelf bots — and the pent-up demand for no-judgment, all-in-one, personalized help.

Transforming healthcare so that it actually works for patients and consumers — ahem, people — requires more than incumbent-led efficiency. Generative AI will be game-changing, no doubt, but only when it’s embedded and embraced as a trusted guide that steers people toward high-quality care and empowers them to make better decisions.

Upgrading Dr. Google

From my vantage point, virtual agents and assistants are the most important frontier in healthcare AI right now — and in people-centered healthcare, period. Tens of millions of people (especially younger generations) are already leaning into AI for help with health and wellness, testing the waters of off-the-shelf apps and tools like ChatGPT.

You see, people realize that AI isn’t just for polishing emails and vacation itineraries. One-fifth of adults consult AI chatbots with health questions at least once a month (and given AI’s unprecedented adoption curve, we can assume that number is rising by the day). For most, AI serves as a souped-up, user-friendly alternative to search engines. It offers people a more engaging way to research symptoms, explore potential treatments, and determine if they actually need to see a doctor or head to urgent care.

But people are going a lot deeper with chatbots than they ever did with Dr. Google or WebMD. Beyond the usual self-triage, the numbers tell us that up to 40% of ChatGPT users have consulted AI after a doctor’s appointment, looking to verify and validate what they’d heard. Even more surprising, after conferring with ChatGPT, a similar percentage then re-engaged with their doctor — to request referrals or tests, ask for changes to medications, or schedule a follow-up.

These trends highlight AI’s enormous potential as an engagement tool, and they also suggest that people are defaulting to AI because the healthcare system is (still) too difficult and frustrating to navigate. Why are people asking ChatGPT how to manage symptoms? Because accessing primary and preventive care is a challenge. Why are they second-guessing advice and prescriptions? Sadly, they don’t fully trust their doctor, are embarrassed to speak up, or don’t have enough time to talk through their questions and concerns during appointments.

Chatbots have all the time in the world, and they’re responsive, supportive, knowledgeable, and nonjudgmental. This is the essence of the healthcare experience people want, need, and deserve, but that experience can’t be built with chatbots alone. AI has a critical role to play, to be sure, but to fulfill its potential it has to evolve well beyond off-the-shelf chatbot competence.

Chatbots 2.0

When it comes to their healthcare, the people currently flocking to mass-market apps like ChatGPT will inevitably realize diminishing returns. Though the current experience feels personal, the advice and information are ultimately very generic, built on the same foundation of publicly available data, medical journals, websites, and countless other sources. Even the purpose-built healthcare chatbots on the market today rely overwhelmingly on public data and outsourced AI models.

Generic responses and transactional experiences have inherent shortcomings. As we’ve seen with other health-tech advances, including 1.0 telehealth and navigation platforms, impersonal, one-off services driven primarily by in-the-moment need, efficiency, or convenience don’t equate to long-term value.

For chatbots to avoid the 1.0 trap, they need to do more than put the world’s medical knowledge at our fingertips.

Continue reading…

Watching Where and How You’re Walking

By MIKE MAGEE

In a speech to the American Philosophical Society in January 1946, J. Robert Oppenheimer said, “We have made a thing … that has altered abruptly and profoundly the nature of the world…We have raised again the question of whether science is good for man, of whether it is good to learn about the world, to try to understand it, to try to control it, to help give to the world of men increased insight, increased power.”

Eight decades later, those words reverberate, and we once again are at a seminal crossroads. This past week, Jensen Huang, the CEO of Nvidia, was everywhere, a remarkably skilled communicator celebrating the fact that his company was now the first publicly traded company to exceed a $4 trillion valuation.

As he explained, “We’ve essentially created a new industry for the first time in three hundred years. The last time there was an industry like this, it was a power generation industry…Now we have a new industry that generates intelligence…you can use it to discover new drugs, to accelerate diagnosis of disease…everybody’s jobs will be different going forward.”

Jensen, as I observed him perform on that morning show, seemed just a bit overwhelmed, awed, and perhaps even slightly frightened by the pace of recent change. “We reinvented computing for the first time since the ’60s, since IBM introduced the modern computer architecture… it’s able to accelerate applications from computer graphics to physics simulations for science to digital biology to artificial intelligence… In the last year, the technology has advanced incredibly fast… AI is now able to reason, it’s able to think… Before, it was able to understand, it was able to generate content, but now it can reason, it can do research, it can learn about the latest information before it answers a question.”

Of course, this is hardly the first time technology has triggered flashing ethical warning lights. I recently summarized the case of Facial Recognition Technology (FRT). The US has the largest number of closed-circuit cameras in the world, at 15.28 per 100 people. On average, every American is caught on a closed-circuit camera 238 times a week, but experts say that’s nothing compared to where our “surveillance” society will be in a few years.

The field of FRT is on fire. 

Continue reading…