Tag: AI

Getting the Future of Health Care Wrong

By KIM BELLARD

Sure, there’s lots of AI hype to talk about (e.g., the AI regulation proposed by Chuck Schumer, or the latest updates from Microsoft, Google, and OpenAI), but a recent column by Wall Street Journal tech writer Christopher Mims, “What I Got Wrong in a Decade of Predicting the Future of Tech,” reminded me how easily we get overexcited by such things.

I did my own mea culpa about my predictions for healthcare a couple of years ago, but since Mr. Mims is both smarter and a better writer than I am, I’ll use his structure and some of his words to try to apply them to healthcare.  

Mr. Mims offers five key learnings:

  1. Disruption is overrated
  2. Human factors are everything
  3. We’re all susceptible to this one kind of tech B.S.
  4. Tech bubbles are useful even when they’re wasteful
  5. We’ve got more power than we think

Let’s take each of these in turn and see how they relate not just to tech but also to healthcare.

Disruption is overrated

“It’s not that disruption never happens,” Mr. Mims clarifies. “It just doesn’t happen nearly as often as we’ve been led to believe.”  Well, no kidding. I’ve been in healthcare for longer than I care to admit, and I’ve lost count of all the “disruptions” we were promised.

The fact of the matter is that healthcare is a huge part of the economy. Trillions of dollars are at stake, not to mention millions of jobs and hundreds of billions in profits. Healthcare is too big to fail, and possibly too big to disrupt in any meaningful way.

If some super genius came along and offered us a simple solution that would radically improve our health but slash more than half of that spending and most of those jobs, I honestly am not sure we’d take the offer. Healthcare likes its disruption in manageable gulps, and disruptors often have their eye more on their share of those trillions than on reducing them.

For better or worse, change in healthcare usually comes in small increments.

Human factors are everything

“But what’s most often holding back mass adoption of a technology is our humanity,” Mr. Mims points out. “The challenge of getting people to change their ways is the reason that adoption of new tech is always much slower than it would be if we were all coldly rational utilitarians bent solely on maximizing our productivity or pleasure.” 

Boy, this hits the healthcare nail on the head. If we all simply ate better, exercised more, slept better, and spent less time on our screens, our health and our healthcare system would be very different. It’s not rocket science, but it is proven science.

But we don’t. We like our short-cuts, we don’t like personal inconvenience, and why skip the Krispy Kreme when we can just take Wegovy? Figure out how to motivate people to take more charge of their health: that’d be disruption.

We’re all susceptible to this one kind of tech B.S.

Mr. Mims believes: “Tech is, to put it bluntly, full of people lying to themselves,” although he is careful to add: “It’s usually not malicious.” That’s true in healthcare as well. I’ve known many healthcare innovators, and almost without exception they are true believers in what they are proposing. The good ones get others to buy into their vision. The great ones actually make some changes, albeit rarely quite as profoundly as hoped.

But just because someone believes something strongly and articulates it very well doesn’t mean it’s true. I’d like to see significant changes as much as anyone, and more than most, and I know I’m too often guilty of looking for what Mr. Mims calls “the winning lottery ticket” when it comes to healthcare innovation, even though I know the lottery is a sucker’s bet.

To paraphrase Ronald Reagan (!), hope but verify.

Tech bubbles are useful even when they’re wasteful

Healthcare has its bubbles as well, many but not all of them tech related. How many health start-ups over the last twenty years can you name that did not survive, much less make a mark on the healthcare system? How many billions of investments do they represent?

But, as Mr. Mims recounts, Bill Gates once said that most startups were “silly” and would go bankrupt, but that the handful of ideas (he specifically said ideas, and not companies) that persist would later prove to be “really important.”

The trick, in healthcare as in tech, is separating the proverbial wheat from the chaff, both in terms of what ideas deserve to persist and in which people/organizations can actually make them work. There are good new ideas out there, some of which could be really important.

We’ve got more power than we think

Many of us feel helpless when encountering the healthcare system. It’s too big, too complicated, too impersonal, and too full of specialized knowledge for us to have the kind of agency we might like.

Mr. Mims’s advice, when it comes to tech, is: “Collectively, we have agency over how new tech is developed, released, and used, and we’d be foolish not to use it.” The same is true with healthcare. We can be the patient patients our healthcare system has come to expect, or we can be the assertive ones that it will have to deal with.

I think about people like Dave deBronkart or the late Casey Quinlan when it comes to demanding our own data. I think about Andrea Downing and The Light Collective when it comes to privacy rights. I think about all the biohackers who are not waiting for the healthcare system to catch up on how to apply the latest tech to their health. And I think about all those patient advocates – too numerous to name – who are insisting on respect from the healthcare system and a meaningful role in managing their health.

Yes, we’ve got way more power than we think. Use it.

————

Mr. Mims is humble in admitting that he fell for some people, ideas, gadgets, and services that perhaps he shouldn’t. The key thing he does, though, to use his words, is “paying attention to what’s just over the horizon.” We should all be trying to do that and doing our best to prepare for it.

My horizon is what a 22nd century healthcare system could, will, and should look like. I’m not willing to settle for what our early 21st century one does. I expect I’ll continue to get a lot wrong, but I’m still going to try.

GPT-4o: What’s All The Fuss About?

By MIKE MAGEE

If you follow my weekly commentary on HealthCommentary.org or THCB, you may have noticed over the past 6 months that I appear to be obsessed with mAI, or Artificial Intelligence intrusion into the health sector space.

So today, let me share a secret. My deep dive has been part of a long preparation for a lecture (“AI Meets Medicine”) I will deliver this Friday, May 17, at 2:30 PM in Hartford, CT. If you are in the area, it is open to the public. You can register to attend HERE.

This image is one of 80 slides I will cover over the 90 minute presentation on a topic that is massive, revolutionary, transformational and complex. It is also a moving target, as illustrated in the final row above which I added this morning.

The addition was forced by Mira Murati, OpenAI’s chief technology officer, who announced from a perch in San Francisco yesterday that, “We are looking at the future of the interaction between ourselves and machines.”

The new application, designed for both computers and smart phones, is GPT-4o. Unlike prior members of the GPT family, which distinguished themselves by their self-learning generative capabilities and an insatiable thirst for data, this new application is not so much focused on the search space, but instead creates a “personal assistant” that is speedy and conversant in text, audio and image (“multimodal”).

OpenAI says this is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” And they are quick to reinforce that this is just the beginning, stating on their website this morning: “With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.”

It is useful to remind that this whole AI movement, in Medicine and every other sector, is about language. And as experts in language remind us, “Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology,” requiring a working knowledge of “Phonetics, Anatomy, Acoustics and Human Development, Syntax, Lexicon, Gesture, Phonological Representations, Syllabic Organization, Speech Perception, and Neuromuscular Control.”

The notion of instantaneous, multimodal communication with machines has seemingly come out of nowhere but is actually the product of nearly a century of imaginative, creative and disciplined discovery by information technologists and human speech experts, who have only recently fully converged with each other. As paleolithic archaeologist Paul Pettitt, PhD, puts it, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” That is to say, “Your multimodal computer imagery is part of a conversation begun a long time ago in ancient rock drawings.”

Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”  The shorthand: We humans have moved “From babble to concordance to inclusivity…”

GPT-4o is just the latest advance, but it is notable not because it emphasizes the capacity for “self-learning,” which the New York Times correctly bannered as “Exciting and Scary,” but because it is focused on speed and efficiency in the effort to compete on an even playing field with human-to-human language. As OpenAI states, “GPT-4o is 2x faster, half the price, and has 5x higher (traffic) rate limits compared to GPT-4.”

Practicality and usability are the words I’d choose. In the company’s words, “Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food’s history and significance, and get recommendations.”

In my lecture, I will cover a great deal of ground, as I attempt to provide historic context, relevant nomenclature and definitions of new terms, and the great potential (both good and bad) for applications in health care. As many others have said, “It’s complicated!”

But as yesterday’s announcement in San Francisco makes clear, the human-machine interface has blurred significantly. Or as Mira Murati put it, “You want to have the experience we’re having — where we can have this very natural dialogue.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Will AI Revolutionize Surgical Care? Yes, But Maybe Not How You Think

By MIKE MAGEE

If you talk to consultants about AI in Medicine, it’s full speed ahead. GenAI assistants, “upskilling” the work force, reshaping customer service, new roles supported by reallocation of budgets, and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious. They wrote, “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

The automatic emergency C-section in Prometheus: coming, but not quite yet!

So the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flashy performances. In an environment where risk is palpable, and a slight tremor at the wrong time, in the wrong place, can be deadly, surgical players are well-rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”

Continue reading…

Nvidia’s AI Bot Outperforms Nurses: Here’s What It Means for You  

By ROBERT PEARL

Soon after Apple released the original iPhone, my father, an unlikely early adopter, purchased one. His plan? “I’ll keep it in the trunk for emergencies,” he told me. He couldn’t foresee that this device would eventually replace maps, radar detectors, traffic reports on AM radio, CD players, and even coin-operated parking meters—not to mention the entire taxi industry.

His was a typical response to revolutionary technology. We view innovations through the lens of what already exists, fitting the new into the familiar context of the old.

Generative AI is on a similar trajectory.

As I planned the release of my new book in early April, “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine,” I delved into the promise and perils of generative AI in medicine. Initially, I feared my optimism about AI’s potential might be too ambitious. I envisioned tools like ChatGPT transforming into hubs of medical expertise within five years. However, by the time the book hit the shelves, it was clear that these changes were unfolding even more quickly than I had anticipated.

Three weeks before “ChatGPT, MD” became number one on Amazon’s “Best New Books” list, Nvidia stunned the tech and healthcare industries with a flurry of headline-grabbing announcements at its 2024 GTC AI conference. Most notably, Nvidia announced a collaboration with Hippocratic AI to develop generative AI “agents,” purported to outperform human nurses in various tasks at a significantly lower cost.

According to company-released data, the AI bots are 16% better than nurses at identifying a medication’s impact on lab values, 24% more accurate in detecting toxic dosages of over-the-counter (OTC) drugs, and 43% better at identifying condition-specific negative interactions from OTC meds. All that at $9 an hour, compared to the $39.05 median hourly pay for U.S. nurses.
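The cost gap the company is touting is easy to quantify from the figures above; a quick back-of-the-envelope calculation, using only the numbers reported in this article, looks like this:

```python
# Figures reported above: $9/hour for the AI agent versus a
# $39.05 median hourly wage for U.S. nurses.
AI_HOURLY = 9.00
NURSE_HOURLY = 39.05

savings = NURSE_HOURLY - AI_HOURLY          # dollars saved per staffed hour
savings_pct = savings / NURSE_HOURLY * 100  # relative saving

print(f"${savings:.2f}/hour saved ({savings_pct:.0f}% less than the median RN wage)")
```

In other words, the claimed price point is roughly a quarter of the median nurse wage, which is the economic argument driving these announcements.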

Although I don’t believe this technology will replace dedicated, skilled, and empathetic RNs, it will assist and support their work by identifying when problems unexpectedly arise. And for patients at home who today can’t obtain information, expertise, and assistance for medical concerns, these AI nurse-bots will help. Although not yet available, they will be designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians’ advice.

These rapid developments suggest we are on the cusp of a technology revolution, one that could reach global ubiquity far faster than the iPhone. Here are three major implications for patients and medical practitioners:

1. GenAI In Healthcare Is Coming Faster Than You Can Imagine

The human brain can easily predict the rate of arithmetic growth (whereby numbers increase at a constant rate: 1, 2, 3, 4). And it does reasonably well at comprehending geometric growth (a pattern that increases at a constant ratio: 1, 3, 9, 27), as well.

But even the most astute minds struggle to grasp the implications of continuous, exponential growth. And that’s what we’re witnessing with generative AI.
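The distinction the author draws is easy to see numerically. Here is a minimal, illustrative sketch (not from the book) comparing constant-step growth with constant-ratio growth:

```python
# Arithmetic growth adds a constant step each time; geometric growth
# multiplies by a constant ratio, so it is exponential in the step count.
def arithmetic(start, step, n):
    return [start + step * i for i in range(n)]

def geometric(start, ratio, n):
    return [start * ratio**i for i in range(n)]

print(arithmetic(1, 1, 4))  # [1, 2, 3, 4]
print(geometric(1, 3, 4))   # [1, 3, 9, 27]

# After 20 more steps the two curves are in different worlds:
print(arithmetic(1, 1, 21)[-1])  # 21
print(geometric(1, 3, 21)[-1])   # 3486784401
```

Intuition tracks the first few terms of both sequences just fine; it is the later terms of the geometric one that catch us off guard.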

Continue reading…

Ready for Robots?

By KIM BELLARD

When I was young, robots were Robby the Robot (Forbidden Planet, etc.), the unnamed robot in Lost in Space, or The Jetsons’ Rosey the Robot. Gen X and Millennials might think instead of the more malevolent Terminators (which, of course, are actually cyborgs). But Gen Z is likely to think of the running, jumping, back-flipping Atlas from Boston Dynamics, whose videos have entertained millions.

Alas, last week Boston Dynamics announced it was discontinuing Atlas. “For almost a decade, Atlas has sparked our imagination, inspired the next generations of roboticists and leapt over technical barriers in the field,” the company said. “Now it’s time for our hydraulic Atlas robot to kick back and relax.”

The key part of that announcement was describing Atlas as “hydraulic,” because the very next day Boston Dynamics announced a new, all-electric Atlas: “Our new electric Atlas platform is here. Supported by decades of visionary robotics innovation and years of practical experience, Boston Dynamics is tackling the next commercial frontier.” Moreover, the company brags: “The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations.”

The introductory video is astounding:

Boston Dynamics says: “Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities.”

They’re right about that.

CEO Robert Playter told Evan Ackerman of IEEE Spectrum: “We’re going to launch it as a product, targeting industrial applications, logistics, and places that are much more diverse than where you see Stretch—heavy objects with complex geometry, probably in manufacturing type environments.”

He went on to elaborate:

This is our third product [following Spot and Stretch], and one of the things we’ve learned is that it takes way more than some interesting technology to make a product work. You have to have a real use case, and you have to have real productivity around that use case that a customer cares about. Everybody will buy one robot—we learned that with Spot. But they won’t start by buying fleets, and you don’t have a business until you can sell multiple robots to the same customer. And you don’t get there without all this other stuff—the reliability, the service, the integration.

The company will work with Hyundai (which, ICYMI, owns Boston Dynamics). Mr. Playter says Hyundai “is really excited about this venture; they want to transform their manufacturing and they see Atlas as a big part of that, and so we’re going to get on that soon.”

Continue reading…

The Latest AI Craze: Ambient Scribing

By MATTHEW HOLT

Okay, I can’t do it any longer. As much as I tried to resist, it is time to write about ambient scribing. But I’m going to do it in a slightly odd way.

If you have met me, you know that I have a strange English-American accent, and I speak in a garbled manner. Yet I’m using the inbuilt voice recognition that Google supplies to write this story now.

Side note: I dictated this whole thing on my phone while watching my kids’ water polo game, which has a fair amount of background noise. And I think you’ll be modestly amused at how terrible the original transcript was. But then I put that entire mess of a text into ChatGPT and told it to fix the mistakes. It did an incredible job, and the output required surprisingly little editing.

Now, it’s not perfect, but it’s a lot better than it used to be, and that is due to a couple of things. One is the vast improvement in acoustic recording, and the second is the combination of Natural Language Processing and artificial intelligence.

Which brings us to ambient listening. It’s now very common in the applications we use in business, like Zoom, and in transcript creation from videos on YouTube. Of course, we have had something similar in the medical business for many years, particularly in radiology voice recognition. It has only been in the last few years that transcribing the toughest job of all–the clinical encounter–has gotten easier.

The problem is that doctors and other professionals are forced to write up the notes and history of all that has happened with their patients. The introduction of electronic medical records made this a major pain point. Doctors used to take notes mostly in shorthand, leaving the abstraction of these notes for coding and billing purposes to be done by some poor sap in the basement of the hospital.

Alternatively in the past, doctors used to dictate and then send tapes or voice files off to parts unknown, but then would have to get those notes back and put them into the record. Since the 2010s, when most American health care moved towards using electronic records, most clinicians have had to type their notes. And this was a big problem for many of them. It has led to a lot of grumpy doctors not only typing in the exam room and ignoring their patients, but also having to type up their notes later in the day. And of course, that’s a major contributor to burnout.

To some extent, the issue of having to type has been mitigated by medical scribes–actual human beings wandering around behind doctors pushing a laptop on wheels and typing up everything that was said by doctors and their patients. And there have been other experiments. Augmedix started off using Google Glass, allowing scribes in remote locations like Bangladesh to listen and type directly into the EMR.

But the real breakthrough has been in the last few years. Companies like Suki, Abridge, and the late Robin started to promise doctors that they could capture the ambient conversation and turn it into proper SOAP notes. The biggest splash was made by the biggest dictation company, Nuance, which in the middle of this transformation got bought by one of the tech titans, Microsoft. Six years ago, they had a demonstration at HIMSS showing that ambient scribing technology was viable. I attended it, and I’m pretty sure that it was faked. Five years ago, I also used Abridge’s tool to try to capture a conversation I had with my doctor – at that time, they were offering a consumer-facing tool – and it was pretty dreadful.

Fast forward to today, and there are a bunch of companies with what seem to be really very good products.

Continue reading…

Are AI Clinical Protocols A Dobb-ist Trojan Horse?

By MIKE MAGEE

For most loyalist Americans at the turn of the 20th century, Justice John Marshall Harlan’s decision in Jacobson v. Massachusetts (1905) was a “slam dunk.” In it, he elected to force a reluctant Methodist minister in Massachusetts to undergo smallpox vaccination during a regional epidemic or pay a fine.

Justice Harlan wrote at the time: “Real liberty for all could not exist under the operation of a principle which recognizes the right of each individual person to use his own, whether in respect of his person or his property, regardless of the injury that may be done to others.”

What could possibly go wrong here? Of course, citizens had not fully considered the “unintended consequences,” let alone the presence of President Wilson and others focused on “strengthening the American stock.”

This involved a two-prong attack on “the enemy without” and “the enemy within.”

The Immigration Act of 1924, signed by President Calvin Coolidge, was the culmination of an attack on “the enemy without.” Quotas for immigration were set according to the 1890 Census, which had the effect of advantaging the selective influx of Anglo-Saxons over Eastern Europeans and Italians. Asians (except Japanese and Filipinos) were banned.

As for “the enemy within,” rooters for the cause of weeding out “undesirable human traits” from the American populace had the firm support of premier academics from almost every elite university across the nation. This came in the form of new departments focused on advancing the “Eugenics Movement,” an excessively discriminatory, quasi-academic approach based on the work of Francis Galton, cousin of Charles Darwin.

Isolationists and Segregationists picked up the thread and ran with it, focusing on vulnerable members of the community labeled as paupers, mentally disabled, dwarfs, promiscuous, or criminal.

In a strategy eerily reminiscent of that employed by Mississippi Pro-Life advocates in Dobbs v. Jackson Women’s Health Organization in 2021, Dr. Albert Priddy, activist director of the Virginia State Colony for Epileptics and Feebleminded, teamed up with radical Virginia state senator Aubrey Strode to hand pick and literally make a “federal case” out of a young institutionalized teen resident named Carrie Buck.

Their goal was to force the nation’s highest courts to sanction state sponsored mandated sterilization.

In a strange twist of fate, the Dobbs name was central to this case as well.

Continue reading…

The 7 Decade History of ChatGPT

By MIKE MAGEE

Over the past year, the general popularization of AI, or Artificial Intelligence, has captured the world’s imagination. Of course, academicians often emphasize historical context. But entrepreneurs tend to agree with Thomas Jefferson, who said, “I like dreams of the future better than the history of the past.”

This particular dream, however, is all about language, its standing and significance in human society. Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”

Well before ChatGPT became a household phrase, there was LDT, or the laryngeal descent theory. It professed that humans’ unique capacity for speech was the result of a voice box, or larynx, that is lower in the throat than other primates’. This permitted the “throat shape, and motor control” to produce vowels that are the cornerstone of human speech. Speech – and therefore language arrival – was pegged to anatomical evolutionary changes dated at between 200,000 and 300,000 years ago.

That theory, as it turns out, had very little scientific evidence. And in 2019, a landmark study set about pushing the date of primate vocalization back to at least 3 to 5 million years ago. As scientists summarized it in three points: “First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns.”

Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology. If you want to study speech science, you better have a working knowledge of “phonetics, anatomy, acoustics and human development,” say the experts. You could add to this “syntax, lexicon, gesture, phonological representations, syllabic organization, speech perception, and neuromuscular control.”

Professor Paul Pettitt, who makes a living at the University of Oxford interpreting ancient rock paintings in Africa and beyond, sees the birth of civilization in multimodal language terms. He says, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” Google CEO Sundar Pichai maintains a similarly expansive view when it comes to language. In his December 6, 2023, introduction of their groundbreaking LLM (large language model), Gemini (a competitor of ChatGPT), he described the new product as “our largest and most capable AI model with natural image, audio and video understanding and mathematical reasoning.”

Continue reading…

The Optimism of Digital Health

By JONATHON FEIT

Journalists like being salty. Like many venture investors, we who are no longer “green” have finely tuned BS meters that like to rip off the sheen of a press release to reach the truthiness underneath. We ask, is this thing real? If I write about XYZ, will I be embarrassed next year to learn that it was the next Theranos?

Yet journalists must also be optimistic—a delicate balance: not so jaded that one becomes boooring, not so optimistic that one gets giddy at each flash of potential; and still enamored of the belief that every so often, something great will remake the present paradigm.

This delicately balanced worldview is equally endemic to entrepreneurs that stick around: Intel founder Andy Grove famously said “only the paranoid survive,” a view that is inherently nefarious since it points out that failure is always lurking nearby. Nevertheless, to venture is to look past the risk, as in, “Someone has to reach that tall summit someday—it may as well be our team!” Pragmatic entrepreneurs seek to do something else, too: deliver value for one’s clients / customers / partners / users in excess of what they pay—which makes them willing to pay in excess of what the thing or service costs to produce. We call that metric “profit,” and over the past several years, too many young companies, far afield of technology and healthcare, forgot about it.

Once upon a time, not too many years ago, during the very first year that my company (Beyond Lucid Technologies) turned a profit, I presented to a room of investors in San Francisco, and received a stunning reply when I noted that people were willing to pay us for our work. “But don’t you want to grow?” the investor asked.

Continue reading…

Can Generative AI Improve Health Care Relationships?

By MIKE MAGEE

“What exactly does it mean to augment clinical judgement…?”

That’s the question that Stanford Law professor Michelle Mello asked in the second paragraph of a May 2023 article in JAMA exploring the medical-legal boundaries of large language model (LLM) generative AI.

This cogent question triggered unease among the nation’s academic and clinical medical leaders who live in constant fear of being financially (and more important, psychically) assaulted for harming patients who have entrusted themselves to their care.

That prescient article came out just one month before news leaked about a revolutionary new generative AI offering from Google called Gemini. And that lit a fire.

Mark Minevich, a “highly regarded and trusted Digital Cognitive Strategist,” writing in a December issue of Forbes, was knee-deep in the issue: “Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

Health professionals have been negotiating this space (information exchange with their patients) for roughly a half century now. Health consumerism emerged as a force in the late seventies. Within a decade, the patient-physician relationship was rapidly evolving, not just in the United States, but across most democratic societies.

That previous “doctor says – patient does” relationship moved rapidly toward a mutual partnership fueled by health information empowerment. The best patient was now an educated patient. Paternalism had to give way to partnership, teams over individuals, and mutual decision making. Emancipation led to empowerment, which meant information engagement.

In the early days of information exchange, patients literally would appear with clippings from magazines and newspapers (and occasionally the National Enquirer) and present them to their doctors with the open-ended question, “What do you think of this?”

But by 2006, when I presented a mega trend analysis to the AMA President’s Forum, the transformative power of the Internet, a globally distributed information system with extraordinary reach and penetration armed now with the capacity to encourage and facilitate personalized research, was fully evident.

Coincident with these new emerging technologies, long hospital length of stays (and with them in-house specialty consults with chart summary reports) were now infrequently-used methods of medical staff continuous education. Instead, “reputable clinical practice guidelines represented evidence-based practice” and these were incorporated into a vast array of “physician-assist” products making smart phones indispensable to the day-to-day provision of care.

At the same time, a several-decade struggle to define policy around patient privacy and fund the development of medical records ensued, eventually spawning bureaucratic HIPAA regulations in its wake.

The emergence of generative AI, and new products like Gemini, whose endpoints are remarkably unclear and disputed even among the specialized coding engineers who are unleashing the force, has created a reality where (at best) health professionals are struggling just to keep up with their most motivated (and often most complexly ill) patients. Needless to say, the Covid health crisis, and the human isolation it provoked, have only made matters worse.

Continue reading…