Tag: AI

Artificial Intelligence Plus Data Democratization Requires New Health Care Framework

By MICHAEL MILLENSON

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the plan’s organizational author, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information used to generate inferences about treatment and health improvement.

Meanwhile, the content of health information, its capabilities, and, crucially, the loci of control are all undergoing radical shifts due to the combined effects of data democratization and artificial intelligence. The increasing sophistication of consumer-facing AI tools such as biometric monitoring and web-based analytics is being seen as a harbinger of “fundamental changes” in interactions between health care professionals and patients.

In that context, a framework of information sharing I’ve called “collaborative health” could help proactively create a therapeutic alliance designed to respond to the emerging new realities of the AI age.

The term (not to be confused with the interprofessional coordination known as “collaborative care”) describes a shifting constellation of relationships for health maintenance and sickness care shaped by individuals based on their life circumstances. At a time when people can increasingly find, create, control, and act upon an unprecedented breadth and depth of personalized information, the traditional care system will often remain a part of these relationships, but not always. For example, a review of breast cancer apps found that about one-third now use individualized, patient-reported health data obtained outside traditional care settings.

Collaborative health has three core principles: shared information, shared engagement, and shared accountability. They are meant to enable a framework of mutual trust and obligation with which to address the clinical, ethical, and legal issues AI and data democratization are bringing to the fore. As the white paper AI Rights for Patients noted, digital technologies can be vital tools, but they can also expose patients to privacy breaches, illegal data sharing and other “cyber harms.” Involving patients “is not just a moral imperative; it is foundational to the responsible and effective deployment of AI in health and in care.” (While “responsible” is not defined, one plausible definition might be “defensible to a jury.”)

Below is a brief description of how collaborative health principles might apply in practice.

Shared information

While the OurNotes initiative represents a model for co-creation of information with clinicians, important non-traditional inputs that should be shared are still generally absent from the record. These might include not just patient-provided data from vetted wearables and sensors, but also information from important non-traditional providers, such as the online fertility companies often accessed through an employee benefit. Whatever is in the record, the 21st Century Cures Act and subsequent regulations addressing interoperability through mechanisms such as Fast Healthcare Interoperability Resources (more commonly known as FHIR) have made much of that information available for patients to access and share electronically with whomever they choose.
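
For a concrete sense of what FHIR-based patient access looks like in practice, here is a minimal sketch in Python. The server URL and patient ID are hypothetical placeholders, and a real patient-access API would also require a SMART on FHIR OAuth2 access token, which is omitted here.

```python
# Minimal sketch of a FHIR R4 read and search against a hypothetical
# server. Real patient-access endpoints require a SMART on FHIR OAuth2
# token; authentication is omitted here for brevity.
import requests

BASE = "https://fhir.example.org/r4"  # hypothetical endpoint
HEADERS = {"Accept": "application/fhir+json"}

# Read a single Patient resource by its (hypothetical) logical ID.
patient = requests.get(f"{BASE}/Patient/123", headers=HEADERS).json()
print(patient.get("name"))

# Search for that patient's laboratory observations.
bundle = requests.get(
    f"{BASE}/Observation",
    params={"patient": "123", "category": "laboratory"},
    headers=HEADERS,
).json()
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    print(obs.get("code", {}).get("text"), obs.get("valueQuantity", {}).get("value"))
```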

Provider sharing of non-traditional information that comes from outside the EHR could be more problematic. So-called “commercially available information,” not protected by HIPAA, is being used to generate inferences about health improvement interventions. Individually identified data can include shopping habits, online searches, living arrangements, and many other variables analyzed by proprietary AI algorithms that have undergone no public scrutiny for accuracy or bias. Since provider use is often motivated by value-based payment incentives, voluntary disclosure would help distance clinicians from a questionable form of surveillance capitalism.

Continue reading…

Innovators: Avoid Health Care

By KIM BELLARD

NVIDIA founder and CEO Jensen Huang has become quite the media darling lately, due to NVIDIA’s skyrocketing market value over the past two years ($3.3 trillion now, thank you very much; a year ago it first hit $1 trillion). His company is now the world’s third largest company by market capitalization. Last week he gave the commencement speech at Caltech, and offered those graduates some interesting insights.

Which, of course, I’ll try to apply to healthcare.

Mr. Huang founded NVIDIA in 1993 and took the company public in 1999, but for much of its existence the company struggled to find its niche. He figured NVIDIA needed to go to a market where there were no customers yet – “because where there are no customers, there are no competitors.” He likes to call these “zero billion dollar markets” (a phrase I gather he did not invent).

About a decade ago the company bet on deep learning and A.I. “No one knew how far deep learning could scale, and if we didn’t build it, we’d never know,” Mr. Huang told the graduates. “Our logic is: If we don’t build it, they can’t come.”

NVIDIA did build it, and, boy, they did come.

He believes we all should try to do things that haven’t been done before, things that “are insanely hard to do,” because if you succeed you can make a real contribution to the world. Going into zero billion dollar markets allows a company to be a “market maker, not a market taker.” He’s not interested in market share; he’s interested in developing new markets.

Accordingly, he told the Caltech graduates:

I hope you believe in something. Something unconventional, something unexplored. But let it be informed, and let it be reasoned, and dedicate yourself to making that happen. You may find your GPU. You may find your CUDA. You may find your generative AI. You may find your NVIDIA.

And in that group, some may very well.

He didn’t promise it would be easy, citing his company’s own experience and stressing the need for resilience. “One setback after another, we shook it off and skated to the next opportunity. Each time, we gain skills and strengthen our character,” Mr. Huang said. “No setback that comes our way doesn’t look like an opportunity these days… The world can be unfair and deal you tough cards. Swiftly shake it off. There’s another opportunity out there — or create one.”

He was quite pleased with the Taylor Swift reference; the crowd seemed somewhat less impressed.

Continue reading…

Who Needs Humans, Anyway?

By KIM BELLARD

Imagine my excitement when I saw the headline: “Robot doctors at world’s first AI hospital can treat 3,000 a day.” Finally, I thought – now we’re getting somewhere. I must admit that my enthusiasm was somewhat tempered to find that the patients were virtual. But, still.

The article was in Interesting Engineering, and it largely covered the source story in Global Times, which interviewed research team leader Yang Liu, a professor at China’s Tsinghua University, where he is executive dean of the Institute for AI Industry Research (AIR) and associate dean of the Department of Computer Science and Technology. The professor and his team just published a paper detailing their efforts.

The paper describes what they did: “we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs).” They modestly note: “To the best of our knowledge, this is the first simulacrum of hospital, which comprehensively reflects the entire medical process with excellent scalability, making it a valuable platform for the study of medical LLMs/agents.”

In essence, “Resident Agents” randomly contract a disease, seek care at the Agent Hospital, where they are triaged and treated by Medical Professional Agents, who include 14 doctors and 4 nurses (that’s how you can tell this is only a simulacrum; in the real world, you’d be lucky to have 4 doctors and 14 nurses). The goal “is to enable a doctor agent to learn how to treat illness within the simulacrum.”
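
The paper’s own code isn’t reproduced here, but the underlying pattern is straightforward to sketch: each agent is a role prompt plus a running dialogue, and the simulacrum is a loop that routes messages between agents. The sketch below is a simplified illustration of that pattern, assuming OpenAI’s Python client; the prompts, model choice, and two-agent loop are stand-ins, not the authors’ actual Agent Hospital implementation.

```python
# Rough sketch of the LLM-agent pattern behind a simulacrum like Agent
# Hospital: each actor is a system prompt plus the shared dialogue.
# Prompts and model are illustrative stand-ins, not the paper's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def agent_turn(role_prompt: str, dialogue: str) -> str:
    """One reply from one agent, conditioned on the conversation so far."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": dialogue},
        ],
    )
    return resp.choices[0].message.content

doctor = "You are a doctor agent in a simulated hospital. Triage, examine, and treat."
patient = "You are a patient agent with flu-like symptoms. Answer the doctor briefly."

dialogue = "Patient: I have had a fever and a dry cough for three days."
for _ in range(3):  # a few rounds of consultation
    dialogue += "\nDoctor: " + agent_turn(doctor, dialogue)
    dialogue += "\nPatient: " + agent_turn(patient, dialogue)
print(dialogue)
```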

The Agent Hospital has been compared to the AI town developed at Stanford last year, which had 25 virtual residents living and socializing with each other. “We’ve demonstrated the ability to create general computational agents that can behave like humans in an open setting,” said Joon Sung Park, one of the creators. The Tsinghua researchers have created a “hospital town.”

Gosh, a healthcare system with no humans involved. It can’t be any worse than the human one. Then again, let me know when the researchers include AI insurance company agents in the simulacrum; I want to see what bickering ensues.

Continue reading…

AI Cognition – The Next Nut To Crack

By MIKE MAGEE

OpenAI says its new GPT-4o is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” So it can speak human, but can it think human?

The “concept of cognition” has been a scholarly football for the past two decades, centered primarily on “Darwin’s claim that other species share the same ‘mental powers’ as humans, but to different degrees.” But how about genAI-powered machines? Do they think?

The first academician to attempt to define the word “cognition” was Ulric Neisser in the first ever textbook of cognitive psychology in 1967. He wrote that “the term ‘cognition’ refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation…”

The word cognition is derived from “Latin cognoscere ‘to get to know, recognize,’ from assimilated form of com ‘together’ + gnoscere ‘to know’ …”

Knowledge and recognition would not seem to be highly charged terms. And yet, in the years following Neisser’s publication there has been a progressively intense, and sometimes heated debate between psychologists and neuroscientists over the definition of cognition.

The focal point of the disagreement has (until recently) revolved around whether the behaviors observed in non-human species are “cognitive” in the human sense of the word. The discourse in recent years had bled over into the fringes to include the belief by some that plants “think” even though they are not in possession of a nervous system, or the belief that ants communicating with each other in a colony are an example of “distributed cognition.”

What scholars in the field do seem to agree on is that no suitable definition of cognition exists that will satisfy all. But most agree that the term encompasses “thinking, reasoning, perceiving, imagining, and remembering.” Tim Bayne, PhD, a Melbourne-based professor of Philosophy, adds that these various qualities must be able to be “systematically recombined with each other,” and not simply triggered by some provocative stimulus.

Allen Newell, PhD, a professor of computer science at Carnegie Mellon, sought to bridge the gap between human and machine cognition when he published a paper in 1958 that proposed “a description of a theory of problem-solving in terms of information processes amenable for use in a digital computer.”

Machines have a leg up in the company of some evolutionary biologists who believe that true cognition involves acquiring new information from various sources and combining it in new and unique ways.

Developmental psychologists carry their own unique insights from observing and studying the evolution of cognition in young children. What exactly is evolving in their young minds, and how does it differ from, yet eventually lead to, adult cognition? And what about the explosion of screen time?

Pediatric researchers, confronted with AI-obsessed youngsters and worried parents, are coming at it from the opposite direction. With 95% of 13- to 17-year-olds now using social media platforms, machines are a developmental force, according to the American Academy of Child and Adolescent Psychiatry. The machine has risen in status and influence from a sideline assistant coach to an on-field teammate.

Scholars admit, “It is unclear at what point a child may be developmentally ready to engage with these machines.” At the same time, they concede that the technological tidal waves leave few alternatives. “Conversely, it is likely that completely shielding children from these technologies may stunt their readiness for a technological world.”

Bence P. Ölveczky, an evolutionary biologist from Harvard, is pretty certain what cognition is and is not. He says it “requires learning; isn’t a reflex; depends on internally generated brain dynamics; needs access to stored models and relationships; and relies on spatial maps.”

Thomas Suddendorf, PhD, a research psychologist from New Zealand who specializes in early childhood and animal cognition, takes a more fluid and nuanced approach. He says, “Cognitive psychology distinguishes intentional and unintentional, conscious and unconscious, effortful and automatic, slow and fast processes (for example), and humans deploy these in diverse domains from foresight to communication, and from theory-of-mind to morality.”

Perhaps the last word on this should go to Descartes. He believed that humans’ mastery of thoughts and feelings separated them from animals, which he considered to be “mere machines.”

Were he with us today, witnessing generative AI’s insatiable appetite for data, its hidden recesses of learning, the speed and power of its insurgency, and human uncertainty about how to turn the thing off, perhaps his judgement of these machines would be less disparaging; more akin to that of Mira Murati, OpenAI’s chief technology officer, who announced with some degree of understatement this month, “We are looking at the future of the interaction between ourselves and machines.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Getting the Future of Health Care Wrong

By KIM BELLARD

Sure, there’s lots of A.I. hype to talk about (e.g., the AI regulation proposed by Chuck Schumer, or the latest updates from Microsoft, Google, and OpenAI), but a recent column by Wall Street Journal tech writer Christopher Mims – What I Got Wrong in a Decade of Predicting the Future of Tech – reminded me how easily we get overexcited by such things.

I did my own mea culpa about my predictions for healthcare a couple of years ago, but since Mr. Mims is both smarter and a better writer than I am, I’ll use his structure and some of his words to try to apply them to healthcare.  

Mr. Mims offers five key learnings:

  1. Disruption is overrated
  2. Human factors are everything
  3. We’re all susceptible to this one kind of tech B.S.
  4. Tech bubbles are useful even when they’re wasteful
  5. We’ve got more power than we think

Let’s take each of these in turn and see how they relate not just to tech but also to healthcare.

Disruption is overrated

“It’s not that disruption never happens,” Mr. Mims clarifies. “It just doesn’t happen nearly as often as we’ve been led to believe.”  Well, no kidding. I’ve been in healthcare for longer than I care to admit, and I’ve lost count of all the “disruptions” we were promised.

The fact of the matter is that healthcare is a huge part of the economy. Trillions of dollars are at stake, not to mention millions of jobs and hundreds of billions in profits. Healthcare is too big to fail, and possibly too big to disrupt in any meaningful way.

If some super genius came along and offered us a simple solution that would radically improve our health but slash more than half of that spending and most of those jobs, I honestly am not sure we’d take the offer. Healthcare likes its disruption in manageable gulps, and disruptors often have their eye more on their share of those trillions than in reducing them.

For better or worse, change in healthcare usually comes in small increments.

Human factors are everything

“But what’s most often holding back mass adoption of a technology is our humanity,” Mr. Mims points out. “The challenge of getting people to change their ways is the reason that adoption of new tech is always much slower than it would be if we were all coldly rational utilitarians bent solely on maximizing our productivity or pleasure.” 

Boy, this hits the healthcare nail on the head. If we all simply ate better, exercised more, slept better, and spent less time on our screens, our health and our healthcare system would be very different. It’s not rocket science, but it is proven science.

But we don’t. We like our short-cuts, we don’t like personal inconvenience, and why skip the Krispy Kreme when we can just take Wegovy? Figure out how to motivate people to take more charge of their health: that’d be disruption.

We’re all susceptible to this one kind of tech B.S.

Mr. Mims believes: “Tech is, to put it bluntly, full of people lying to themselves,” although he is careful to add: “It’s usually not malicious.” That’s true in healthcare as well. I’ve known many healthcare innovators, and almost without exception they are true believers in what they are proposing. The good ones get others to buy into their vision. The great ones actually make some changes, albeit rarely quite as profoundly as hoped.

But just because someone believes something strongly and articulates it very well doesn’t mean it’s true. I’d like to see significant changes as much as anyone, and more than most, and I know I’m too often guilty of looking for what Mr. Mims calls “the winning lottery ticket” when it comes to healthcare innovation, even though I know the lottery is a sucker’s bet.

To paraphrase Ronald Reagan (!), hope but verify.

Tech bubbles are useful even when they’re wasteful

Healthcare has its bubbles as well, many but not all of them tech-related. How many health start-ups over the last twenty years can you name that did not survive, much less make a mark on the healthcare system? How many billions of investments do they represent?

But, as Mr. Mims recounts Bill Gates once saying, most startups were “silly” and would go bankrupt, but the handful of ideas—he specifically said ideas, and not companies—that persist would later prove to be “really important.”

The trick, in healthcare as in tech, is separating the proverbial wheat from the chaff, both in terms of what ideas deserve to persist and in which people/organizations can actually make them work. There are good new ideas out there, some of which could be really important.

We’ve got more power than we think

Many of us feel helpless when encountering the healthcare system. It’s too big, too complicated, too impersonal, and too full of specialized knowledge for us to have the kind of agency we might like.

Mr. Mims’ advice, when it comes to tech, is: “Collectively, we have agency over how new tech is developed, released, and used, and we’d be foolish not to use it.” The same is true with healthcare. We can be the patient patients our healthcare system has come to expect, or we can be the assertive ones that it will have to deal with.

I think about people like Dave deBronkart or the late Casey Quinlan when it comes to demanding our own data. I think about Andrea Downing and The Light Collective when it comes to privacy rights. I think about all the biohackers who are not waiting for the healthcare system to catch up on how to apply the latest tech to their health. And I think about all those patient advocates – too numerous to name – who are insisting on respect from the healthcare system and a meaningful role in managing their health.

Yes, we’ve got way more power than we think. Use it.

————

Mr. Mims is humble in admitting that he fell for some people, ideas, gadgets, and services that perhaps he shouldn’t. The key thing he does, though, to use his words, is “paying attention to what’s just over the horizon.” We should all be trying to do that and doing our best to prepare for it.

My horizon is what a 22nd century healthcare system could, will, and should look like. I’m not willing to settle for what our early 21st century one does. I expect I’ll continue to get a lot wrong, but I’m still going to try.

GPT-4o: What’s All The Fuss About?

By MIKE MAGEE

If you follow my weekly commentary on HealthCommentary.org or THCB, you may have noticed over the past 6 months that I appear to be obsessed with mAI, or Artificial Intelligence intrusion into the health sector space.

So today, let me share a secret. My deep dive has been part of a long preparation for a lecture (“AI Meets Medicine”) I will deliver this Friday, May 17, at 2:30 PM in Hartford, CT. If you are in the area, it is open to the public. You can register to attend HERE.

This image is one of 80 slides I will cover over the 90-minute presentation on a topic that is massive, revolutionary, transformational, and complex. It is also a moving target, as illustrated in the final row above, which I added this morning.

The addition was forced by Mira Murati, OpenAI’s chief technology officer, who announced from a perch in San Francisco yesterday, “We are looking at the future of the interaction between ourselves and machines.”

The new application, designed for both computers and smartphones, is GPT-4o. Unlike prior members of the GPT family, which distinguished themselves by their self-learning generative capabilities and an insatiable thirst for data, this new application is focused less on the search space and more on creating a “personal assistant” that is speedy and conversant in text, audio, and image (“multimodal”).

OpenAI says this is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” And they are quick to reinforce that this is just the beginning, stating on their website this morning: “With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.”

It is useful to remember that this whole AI movement, in Medicine and every other sector, is about language. And as experts in language remind us, “Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology,” requiring a working knowledge of “Phonetics, Anatomy, Acoustics and Human Development, Syntax, Lexicon, Gesture, Phonological Representations, Syllabic Organization, Speech Perception, and Neuromuscular Control.”

The notion of instantaneous, multimodal communication with machines has seemingly come out of nowhere but is actually the product of nearly a century of imaginative, creative, and disciplined discovery by information technologists and human speech experts, who have only recently fully converged with each other. As paleolithic archeologist Paul Pettit, PhD, puts it, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” That is to say, “Your multimodal computer imagery is part of a conversation begun a long time ago in ancient rock drawings.”

Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”  The shorthand: We humans have moved “From babble to concordance to inclusivity…”

GPT-4o is just the latest advance, but it is notable not because it emphasizes the capacity for “self-learning,” which the New York Times correctly bannered as “Exciting and Scary,” but because it is focused on speed and efficiency in the effort to compete on an even playing field with human-to-human language. As OpenAI states, “GPT-4o is 2x faster, half the price, and has 5x higher (traffic) rate limits compared to GPT-4.”

Practicality and usability are the words I’d choose. In the company’s words, “Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food’s history and significance, and get recommendations.”
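
For the technically curious, that menu scenario maps onto a single API call. A minimal sketch, assuming OpenAI’s standard Python client and a placeholder image URL:

```python
# Minimal sketch of a mixed text-plus-image GPT-4o request via OpenAI's
# chat completions API. The menu image URL is a hypothetical placeholder;
# the real-time audio conversation demoed on stage is not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Translate this menu to English and recommend a dish."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/menu.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```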

In my lecture, I will cover a great deal of ground, as I attempt to provide historic context, relevant nomenclature and definitions of new terms, and the great potential (both good and bad) for applications in health care. As many others have said, “It’s complicated!”

But as yesterday’s announcement in San Francisco makes clear, the human-machine interface has blurred significantly. Or as Mira Murati put it, “You want to have the experience we’re having — where we can have this very natural dialogue.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Will AI Revolutionize Surgical Care?  Yes, But Maybe Not How You Think

By MIKE MAGEE

If you talk to consultants about AI in Medicine, it’s full speed ahead. GenAI assistants, “upskilling” the work force, reshaping customer service, new roles supported by reallocation of budgets, and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious. They wrote, “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

The automatic emergency C-section in Prometheus – coming, but not quite yet!

So the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flashy performances. In an environment where risk is palpable, and a simple tremor at the wrong time and in the wrong place can be deadly, surgical players are well-rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”

Continue reading…

Nvidia’s AI Bot Outperforms Nurses: Here’s What It Means for You  

By ROBERT PEARL

Soon after Apple released the original iPhone, my father, an unlikely early adopter, purchased one. His plan? “I’ll keep it in the trunk for emergencies,” he told me. He couldn’t foresee that this device would eventually replace maps, radar detectors, traffic reports on AM radio, CD players, and even coin-operated parking meters—not to mention the entire taxi industry.

His was a typical response to revolutionary technology. We view innovations through the lens of what already exists, fitting the new into the familiar context of the old.

Generative AI is on a similar trajectory.

As I planned the release of my new book in early April, “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine,” I delved into the promise and perils of generative AI in medicine. Initially, I feared my optimism about AI’s potential might be too ambitious. I envisioned tools like ChatGPT transforming into hubs of medical expertise within five years. However, by the time the book hit the shelves, it was clear that these changes were unfolding even more quickly than I had anticipated.

Three weeks before “ChatGPT, MD” became number one on Amazon’s “Best New Books” list,  Nvidia stunned the tech and healthcare industries with a flurry of headline-grabbing announcements at its 2024 GTC AI conference. Most notably, Nvidia announced a collaboration with Hippocratic AI to develop generative AI “agents,” purported to outperform human nurses in various tasks at a significantly lower cost.

According to company-released data, the AI bots are 16% better than nurses at identifying a medication’s impact on lab values, 24% more accurate in detecting toxic dosages of over-the-counter drugs, and 43% better at identifying condition-specific negative interactions from OTC meds. All that at $9 an hour, compared to the $39.05 median hourly pay for U.S. nurses.

Although I don’t believe this technology will replace dedicated, skilled, and empathetic RNs, it will assist and support their work by identifying when problems unexpectedly arise. And for patients at home who today can’t obtain information, expertise, and assistance for medical concerns, these AI nurse-bots will help. Although not yet available, they will be designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians’ advice.

These rapid developments suggest we are on the cusp of a technology revolution, one that could reach global ubiquity far faster than the iPhone. Here are three major implications for patients and medical practitioners:

1. GenAI In Healthcare Is Coming Faster Than You Can Imagine

The human brain can easily predict the rate of arithmetic growth (whereby numbers increase at a constant rate: 1, 2, 3, 4). It also does reasonably well at comprehending geometric growth (a pattern that increases at a constant ratio: 1, 3, 9, 27).

But even the most astute minds struggle to grasp the implications of continuous, exponential growth. And that’s what we’re witnessing with generative AI.
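
A few lines of code make the gap vivid. The sequences below are the ones from the text; by step 30 the geometric series has outrun the arithmetic one by a factor in the trillions.

```python
# The growth patterns from the text: arithmetic adds a constant step,
# geometric multiplies by a constant ratio. Intuition tracks the first
# and badly underestimates the second.
for n in range(0, 31, 5):
    arithmetic = 1 + n   # 1, 2, 3, 4, ...
    geometric = 3 ** n   # 1, 3, 9, 27, ...
    print(f"step {n:>2}: arithmetic {arithmetic:>3}   geometric {geometric:>18,}")
```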

Continue reading…

Ready for Robots?

By KIM BELLARD

When I was young, robots were Robby the Robot (Forbidden Planet, etc.), the unnamed robot in Lost in Space, or The Jetsons’ Rosey the Robot. Gen X and Millennials might think instead of the more malevolent Terminators (which, of course, are actually cyborgs). But Gen Z is likely to think of the running, jumping, back-flipping Atlas from Boston Dynamics, whose videos have entertained millions.

Alas, last week Boston Dynamics announced it was discontinuing Atlas. “For almost a decade, Atlas has sparked our imagination, inspired the next generations of roboticists and leapt over technical barriers in the field,” the company said. “Now it’s time for our hydraulic Atlas robot to kick back and relax.”

The key part of that announcement was describing Atlas as “hydraulic,” because the very next day Boston Dynamics announced a new, all-electric Atlas: “Our new electric Atlas platform is here. Supported by decades of visionary robotics innovation and years of practical experience, Boston Dynamics is tackling the next commercial frontier.” Moreover, the company brags: “The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations.”

The introductory video is astounding.

Boston Dynamics says: “Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities.”

They’re right about that.

CEO Robert Playter told Evan Ackerman of IEEE Spectrum: “We’re going to launch it as a product, targeting industrial applications, logistics, and places that are much more diverse than where you see Stretch—heavy objects with complex geometry, probably in manufacturing type environments.”

He went on to elaborate:

This is our third product [following Spot and Stretch], and one of the things we’ve learned is that it takes way more than some interesting technology to make a product work. You have to have a real use case, and you have to have real productivity around that use case that a customer cares about. Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer. And you don’t get there without all this other stuff—the reliability, the service, the integration.

The company will work with Hyundai (which, ICYMI, owns Boston Dynamics). Mr. Playter says Hyundai “is really excited about this venture; they want to transform their manufacturing and they see Atlas as a big part of that, and so we’re going to get on that soon.”

Continue reading…

The Latest AI Craze: Ambient Scribing

By MATTHEW HOLT

Okay, I can’t do it any longer. As much as I tried to resist, it is time to write about ambient scribing. But I’m going to do it in a slightly odd way.

If you have met me, you know that I have a strange English-American accent, and I speak in a garbled manner. Yet I’m using the inbuilt voice recognition that Google supplies to write this story now.

Side note: I dictated this whole thing on my phone while watching my kids’ water polo game, which has a fair amount of background noise. And I think you’ll be modestly amused by how terrible the original transcript was. But then I put that entire mess of a text into ChatGPT and told it to fix the mistakes. It did an incredible job, and the output required surprisingly little editing.

Now, it’s not perfect, but it’s a lot better than it used to be, and that is due to a couple of things. One is the vast improvement in acoustic recording, and the second is the combination of Natural Language Processing and artificial intelligence.
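
The cleanup step from the side note above (pasting a messy transcript into an LLM and asking it to repair the errors) is a single API call if you want to script it. A minimal sketch, assuming OpenAI’s Python client; the file name and prompt wording are my own stand-ins:

```python
# Minimal sketch of the transcript-cleanup step described above: feed a
# raw, error-riddled dictation to an LLM and ask it to fix the speech-
# recognition mistakes. File name and prompt are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def clean_transcript(raw: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": ("Fix the speech-recognition errors in this dictated "
                         "text. Preserve the speaker's wording, tone, and "
                         "meaning; do not add or summarize.")},
            {"role": "user", "content": raw},
        ],
    )
    return resp.choices[0].message.content

raw_dictation = open("dictation.txt").read()  # hypothetical raw transcript
print(clean_transcript(raw_dictation))
```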

Which brings us to ambient listening. It’s now very common in the applications we use in business, like Zoom, and in transcript creation from videos on YouTube. Of course, we have had something similar in the medical business for many years, particularly in radiology and voice recognition. It has only been in the last few years that transcribing the toughest job of all – the clinical encounter – has gotten easier.

The problem is that doctors and other professionals are forced to write up the notes and history of all that has happened with their patients. The introduction of electronic medical records made this a major pain point. Doctors used to take notes mostly in shorthand, leaving the abstraction of these notes for coding and billing purposes to be done by some poor sap in the basement of the hospital.

Alternatively, in the past, doctors used to dictate and then send tapes or voice files off to parts unknown, but then would have to get those notes back and put them into the record. Since the 2010s, when most American health care moved towards using electronic records, most clinicians have had to type their notes. And this was a big problem for many of them. It has led to a lot of grumpy doctors not only typing in the exam room and ignoring their patients, but also having to type up their notes later in the day. And of course, that’s a major contributor to burnout.

To some extent, the issue of having to type has been mitigated by medical scribes–actual human beings wandering around behind doctors pushing a laptop on wheels and typing up everything that was said by doctors and their patients. And there have been other experiments. Augmedix started off using Google Glass, allowing scribes in remote locations like Bangladesh to listen and type directly into the EMR.

But the real breakthrough has been in the last few years. Companies like Suki, Abridge, and the late Robin started to promise doctors that they could capture the ambient conversation and turn it into proper SOAP notes. The biggest splash was made by the biggest dictation company, Nuance, which in the middle of this transformation got bought by one of the tech titans, Microsoft. Six years ago, they had a demonstration at HIMSS showing that ambient scribing technology was viable. I attended it, and I’m pretty sure that it was faked. Five years ago, I also used Abridge’s tool to try to capture a conversation I had with my doctor – at that time, they were offering a consumer-facing tool – and it was pretty dreadful.

Fast forward to today, and there are a bunch of companies with what seem to be really very good products.

Continue reading…