
Category: Health Tech

GPT-4o: What’s All The Fuss About?

By MIKE MAGEE

If you follow my weekly commentary on HealthCommentary.org or THCB, you may have noticed over the past 6 months that I appear to be obsessed with mAI, the intrusion of Artificial Intelligence into the health sector.

So today, let me share a secret. My deep dive has been part of a long preparation for a lecture (“AI Meets Medicine”) I will deliver this Friday, May 17, at 2:30 PM in Hartford, CT. If you are in the area, it is open to the public. You can register to attend HERE.

This image is one of 80 slides I will cover over the 90-minute presentation on a topic that is massive, revolutionary, transformational and complex. It is also a moving target, as illustrated in the final row above, which I added this morning.

The addition was forced by Mira Murati, OpenAI’s chief technology officer, who announced from a perch in San Francisco yesterday that “We are looking at the future of the interaction between ourselves and machines.”

The new application, designed for both computers and smartphones, is GPT-4o. Unlike prior members of the GPT family, which distinguished themselves by their self-learning generative capabilities and an insatiable thirst for data, this new application is not so much focused on the search space as on creating a “personal assistant” that is speedy and conversant in text, audio and image (“multimodal”).

OpenAI says this is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” And they are quick to reinforce that this is just the beginning, stating on their website this morning: “With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.”

It is useful to remember that this whole AI movement, in Medicine and every other sector, is about language. And as experts in language remind us, “Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology,” requiring a working knowledge of “Phonetics, Anatomy, Acoustics and Human Development, Syntax, Lexicon, Gesture, Phonological Representations, Syllabic Organization, Speech Perception, and Neuromuscular Control.”

The notion of instantaneous, multimodal communication with machines has seemingly come out of nowhere, but it is actually the product of nearly a century of imaginative, creative and disciplined discovery by information technologists and human speech experts, who have only recently fully converged with each other. As paleolithic archeologist Paul Pettit, PhD, puts it, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” That is to say, “Your multimodal computer imagery is part of a conversation begun a long time ago in ancient rock drawings.”

Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”  The shorthand: We humans have moved “From babble to concordance to inclusivity…”

GPT-4o is just the latest advance, but it is notable not because it emphasizes the capacity for “self-learning,” which the New York Times correctly bannered as “Exciting and Scary,” but because it is focused on speed and efficiency in the effort to compete on an even playing field with human-to-human language. As OpenAI states, “GPT-4o is 2x faster, half the price, and has 5x higher (traffic) rate limits compared to GPT-4.”

Practicality and usability are the words I’d choose. In the company’s words, “Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food’s history and significance, and get recommendations.”
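For developers, the same multimodal capability is exposed through OpenAI’s API. Here is a minimal sketch, assuming the official openai Python client, an OPENAI_API_KEY in the environment, and a placeholder menu-image URL, of how one might pass an image and a question to GPT-4o in a single request:

# Minimal sketch (illustrative, not from the article): send an image plus a
# text prompt to GPT-4o via OpenAI's chat completions API. The menu URL is a
# placeholder; swap in your own image.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Translate this menu into English and recommend a dish."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/menu.jpg"}},
            ],
        }
    ],
)

# The reply comes back as ordinary text in the first choice.
print(response.choices[0].message.content)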

In my lecture, I will cover a great deal of ground, as I attempt to provide historic context, relevant nomenclature and definitions of new terms, and the great potential (both good and bad) for applications in health care. As many others have said, “It’s complicated!”

But as yesterday’s announcement in San Francisco makes clear, the human-machine interface has blurred significantly. Or as Mira Murati put it, “You want to have the experience we’re having — where we can have this very natural dialogue.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Chakri Toleti, Care.ai

Chakri Toleti is an occasional Bollywood film producer (you can Google that) and also the CEO of Care.ai–one of the leading companies using sensors and AI to figure out what is going on in that hospital room. They’ve grown very fast in recent years, fundamentally by using technology to monitor patients, improve their care and safety, and figure out what else is needed to improve the care process. You’ll also see me doing a little bit of self-testing!–Matthew Holt

Will AI Revolutionize Surgical Care?  Yes, But Maybe Not How You Think

By MIKE MAGEE

If you talk to consultants about AI in Medicine, it’s full speed ahead: GenAI assistants, “upskilling” the workforce, reshaping customer service, new roles supported by reallocation of budgets, and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious. They wrote, “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

The automatic emergency C-Section in Prometheus–Coming, but not quite yet!

So the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flashy performances. In an environment where risk is palpable, and a simple tremor at the wrong time and in the wrong place can be deadly, surgical players are well-rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”

Continue reading…

Nvidia’s AI Bot Outperforms Nurses: Here’s What It Means for You  

By ROBERT PEARL

Soon after Apple released the original iPhone, my father, an unlikely early adopter, purchased one. His plan? “I’ll keep it in the trunk for emergencies,” he told me. He couldn’t foresee that this device would eventually replace maps, radar detectors, traffic reports on AM radio, CD players, and even coin-operated parking meters—not to mention the entire taxi industry.

His was a typical response to revolutionary technology. We view innovations through the lens of what already exists, fitting the new into the familiar context of the old.

Generative AI is on a similar trajectory.

As I planned the release of my new book in early April, “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine,” I delved into the promise and perils of generative AI in medicine. Initially, I feared my optimism about AI’s potential might be too ambitious. I envisioned tools like ChatGPT transforming into hubs of medical expertise within five years. However, by the time the book hit the shelves, it was clear that these changes were unfolding even more quickly than I had anticipated.

Three weeks before “ChatGPT, MD” became number one on Amazon’s “Best New Books” list,  Nvidia stunned the tech and healthcare industries with a flurry of headline-grabbing announcements at its 2024 GTC AI conference. Most notably, Nvidia announced a collaboration with Hippocratic AI to develop generative AI “agents,” purported to outperform human nurses in various tasks at a significantly lower cost.

According to company-released data, the AI bots are 16% better than nurses at identifying a medication’s impact on lab values, 24% more accurate at detecting toxic dosages of over-the-counter drugs, and 43% better at identifying condition-specific negative interactions from OTC meds. All that at $9 an hour, compared to the $39.05 median hourly pay for U.S. nurses.

Although I don’t believe this technology will replace dedicated, skilled, and empathetic RNs, it will assist and support their work by identifying when problems unexpectedly arise. And for patients at home who today can’t obtain information, expertise and assistance for medical concerns, these AI nurse-bots will help. Although not yet available, they will be designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians’ advice.

These rapid developments suggest we are on the cusp of a technology revolution, one that could reach global ubiquity far faster than the iPhone. Here are three major implications for patients and medical practitioners:

1. GenAI In Healthcare Is Coming Faster Than You Can Imagine

The human brain can easily predict the rate of arithmetic growth (whereby numbers increase by a constant amount: 1, 2, 3, 4). And it does reasonably well at comprehending geometric growth (a pattern that increases at a constant ratio: 1, 3, 9, 27).

But even the most astute minds struggle to grasp the implications of continuous, exponential growth. And that’s what we’re witnessing with generative AI.
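A toy example makes the gap concrete. The short script below is my illustration, not the author’s: it compares ten steps of growth by a constant amount with ten steps of growth by a constant ratio.

# Illustration only: constant-amount (arithmetic) growth vs. constant-ratio
# (geometric/exponential) growth over the same ten steps.
arithmetic = [1 + n for n in range(10)]   # 1, 2, 3, ..., 10
geometric = [3 ** n for n in range(10)]   # 1, 3, 9, 27, ..., 19683

print(arithmetic[-1])  # 10
print(geometric[-1])   # 19683 -- the same number of steps, a wildly different scale

The point is not the particular numbers but the shape of the curve: after only a handful of doublings or triplings, the compounding sequence leaves the linear one far behind, which is why exponential change so reliably outruns intuition.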

Continue reading…

What Walmart Said & What Walmart Did: Not the Same Thing

Walmart surprised us all and changed its mind about primary care yesterday. It’s out.

Because so few people have seen it, I want to show what Walmart’s head of health care said just 18 months ago (Nov 2022). Today they are finally killing off the 6th different strategy they’ve had (maybe it was the 4th). I guess (unlike CVS & Walgreens) they don’t have to write down an investment in Oak Street or VillageMD, but they never worked out that primary care is only profitable if it’s 1) very low overhead, 2) a loss leader for more expensive services (as most hospitals run it), or 3) getting a cut of the $$ for stopping more expensive services (Oak Street, ChenMed, Kaiser).

At HLTH 18 months ago I interviewed Cheryl Pegus, who was then running Walmart’s health business, and I asked why anyone should trust them, given how often they changed course. Sachin H. Jain, MD, MBA, answered for her and said, “Because they have Cheryl!” Cheryl then said, “At Walmart the commitment to delivering health care is bigger than anywhere I have ever worked. Right now I have 35 centers; in 3 years I’ll have 100s.” See 11.00 onwards in the video below, although the whole thing is worth a look.

Cheryl, though, left Walmart THE NEXT WEEK!

What’s behind all these assessments of digital health?

By MATTHEW HOLT

A decent amount of time in recent weeks has been spent hashing out the conflict over data. Who can access it? Who can use it for what? What do the new AI tools and analytics capabilities allow us to do? Of course the idea is that this is all about using data to improve patient care. Anyone who is anybody, from John Halamka at the Mayo Clinic down to the two guys with a dog in a garage building clinical workflows on ChatGPT, thinks they can improve the patient experience and improve outcomes at lower cost using AI.

But if we look at the recent changes to patient care, especially those brought on by digital health companies founded over the past decade and a half, the answer isn’t so clear. Several of those companies, whether they are trying to reinvent primary care (Oak, Iora, One Medical) or change the nature of diabetes care (Livongo, Vida, Virta et al) have now had decent numbers of users, and their impact is starting to be assessed. 

A cottage industry of organizations is emerging to assess these interventions. Of course the companies concerned have their own studies, in some cases several years’ worth. Their logic always goes something like “XY% of patients used our solution, most of them like it, and after they use it hospital admissions and ER visits go down, and clinical metrics get better.” But organizations like the Validation Institute, ICER, RAND and more recently the Peterson Health Technology Institute have declared themselves neutral arbiters, and started conducting studies or meta-analyses of their own. (FD: I was for a brief period on the advisory board of the Validation Institute). In general the answers are that digital health solutions ain’t all they’re cracked up to be.

There is of course a longer history here. Since the 1970s policy wonks have been trying to figure out if new technologies in health care were cost effective. The discipline is called health technology assessment and even has its own journal and society, at a meeting of which in 1996 I gave a keynote about the impact of the internet on health care. I finished my talk by telling them that the internet would have little impact on health care and was mostly used for downloading clips of color videos and that I was going to show them one. I think the audience was relieved when I pulled up a video of Alan Shearer scoring for England against the Netherlands in Euro 96 rather than certain other videos the Internet was used for then (and now)!

But the point is that, particularly in the US, assessment of the cost effectiveness of new tech in health care has been a sideline. So much so that when the Congressional Office of Technology Assessment was closed by Gingrich’s Republicans in 1995, barely anyone noticed. In general, we’ve done clinical trials that were supposed to show if drugs worked, but we have never really bothered figuring out if they worked any better than the drugs we already had, or if they were worth the vast increase in costs that tended to come with them. That doesn’t seem to be stopping Ozempic from making Denmark rich.

Likewise, new surgical procedures get introduced and trialed long before anyone figures out whether we should systematically be doing them or not. My favorite tale here is of general surgeon Eddie Joe Reddick, who discovered some French surgeons doing laparoscopic gallbladder removal in the 1980s and imported it to the US. He traveled around the country charging a pretty penny to teach other surgeons how to do it (and how to bill more for it than for the standard open surgery technique). It’s not like there was some big NIH-funded study behind this. Instead an entrepreneurial surgeon changed an entire very common procedure in under five years. The end of the story was that Reddick made so much money teaching surgeons how to do the “lap chole” that he retired and became a country & western singer.

Similarly, in his very entertaining video, Eric Bricker points out that we do more than double the amount of imaging that is common in European countries. Back in 2008 Shannon Brownlee spent a good bit of her great book Overtreated explaining how the rate of imaging skyrocketed while there was no improvement in our diagnosis or outcomes rates. Shannon, by the way, declared defeat and also got out of health care, although she’s a potter, not a country singer.

You can look at virtually any aspect of health care and find uses of technology that don’t appear to be cost effective, and yet are widespread and paid for.

So why are the knives out for digital health specifically?

And they are out. ICER helped kill the digital therapeutics movement by declaring several solutions for opioid use disorder ineffective, and letting several health plans use that as an excuse not to pay for them. Now Peterson, which is using a framework from ICER, has basically said the same thing about diabetes solutions and is moving on to MSK, with presumably more categories to be debunked on deck.

Continue reading…

Ready for Robots?

By KIM BELLARD

When I was young, robots were Robby the Robot (Forbidden Planet, etc.), the unnamed robot in Lost in Space, or The Jetsons’ Rosey the Robot. Gen X and Millennials might think instead of the more malevolent Terminators (which, of course, are actually cyborgs). But Gen Z is likely to think of the running, jumping, back-flipping Atlas from Boston Dynamics, whose videos have entertained millions.

Alas, last week Boston Dynamics announced it was discontinuing Atlas. “For almost a decade, Atlas has sparked our imagination, inspired the next generations of roboticists and leapt over technical barriers in the field,” the company said. “Now it’s time for our hydraulic Atlas robot to kick back and relax.”

The key part of that announcement was describing Atlas as “hydraulic,” because the very next day Boston Dynamics announced a new, all-electric Atlas: “Our new electric Atlas platform is here. Supported by decades of visionary robotics innovation and years of practical experience, Boston Dynamics is tackling the next commercial frontier.” Moreover, the company brags: “The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations.”

The introductory video is astounding:

Boston Dynamics says: “Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities.”

They’re right about that.

CEO Robert Playter told Evan Ackerman of IEEE Spectrum: “We’re going to launch it as a product, targeting industrial applications, logistics, and places that are much more diverse than where you see Stretch—heavy objects with complex geometry, probably in manufacturing type environments.”

He went on to elaborate:

This is our third product [following Spot and Stretch], and one of the things we’ve learned is that it takes way more than some interesting technology to make a product work. You have to have a real use case, and you have to have real productivity around that use case that a customer cares about. Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer. And you don’t get there without all this other stuff—the reliability, the service, the integration.

The company will work with Hyundai (which, ICYMI, owns Boston Dynamics). Mr. Playter says Hyundai “is really excited about this venture; they want to transform their manufacturing and they see Atlas as a big part of that, and so we’re going to get on that soon.”

Continue reading…

Jeff Gartland, Relatient

Relatient focuses on intelligent scheduling, specifically for larger specialty groups. They touch over 50m patients and 45,000 providers a year, and are now a significant player in a key part of the patient experience–converting a patient who is looking for care into an actual appointment with the provider. I spoke with CEO Jeff Gartland at HIMSS in March 2024.–Matthew Holt

Aasim Saeed, CEO of Amenities

Aasim Saeed is the CEO of Amenities. He’s a doc and ex-McKinsey consultant who spent a lot of time building a version of his tool for Baylor Scott & White. We had a wide-ranging conversation about how health systems treat patients (not well), whether health systems know the value of their customers (no!), and how to bump up “in network” utilization. Amenities is a front door tool that essentially replaces those sh*tty MyChart portals, and eventually will lead to creating a loyalty membership experience. He gave me a tour of the new-ish tool that is live at MemorialCare in southern California, and coming soon to a system near you.–Matthew Holt