What We Can Learn From the Change Healthcare Hack

By ZACHARY AMOS

The health care sector is no stranger to cyberattacks. Still, large incidents like the February 2024 ransomware attack on Change Healthcare are enough to shake up the industry. In the wake of such a massive breach, medical organizations of all types and sizes should take the opportunity to review their security postures.

What Happened in the Change Healthcare Cyberattack

On February 21, Change Healthcare — the largest medical clearinghouse in the U.S. — suffered a ransomware attack, forcing it to take over 100 systems offline. Many of its electronic services remained down for weeks, with full restoration taking until early April.

A week after the attack, the infamous ransomware-as-a-service gang BlackCat claimed responsibility. BlackCat, widely regarded as a rebrand of the group behind 2021’s Colonial Pipeline shutdown, was also responsible for several attacks on health care organizations throughout 2023. This latest act against Change Healthcare, however, stands as one of its most disruptive yet.

Because Change and its parent company — UnitedHealth Group (UHG) — are such central industry players, the hack had industry-wide ripple effects. A staggering 94% of U.S. hospitals suffered financial consequences from the incident, and 74% experienced a direct impact on patient care. Change’s services touch one in every three patient records, so the massive outage created a snowball effect of disruptions, delays and losses.

Most of Change’s pharmacy and electronic payment services came back online by March 15. As of early April, nearly everything is running again, but the financial fallout continues for many enterprises reliant on UHG, thanks to substantial backlogs.

What It Means for the Broader Health Care Sector

Considering the Change Healthcare cyberattack affected almost the entire medical sector, it has significant implications. Even the few medical groups untouched by the hack should consider what it means for the future of health care security.

1. No Organization Is an Island

It’s difficult to ignore that an attack on a single entity impacted almost all hospitals in the U.S. This massive ripple effect highlights how no business in this industry is a self-contained unit. Third-party vulnerabilities affect everyone, so due diligence and thoughtful access restrictions are essential.

While the Change Healthcare hack is an extreme example, it’s not the first time the medical sector has seen large third-party breaches. In 2021, the Red Cross experienced a breach of over 515,000 patient records when attackers targeted its data storage partner.

Health care enterprises rely on multiple external services, and each of these connections represents another vulnerability the company has little control over. In light of that risk, organizations must be more selective about who they do business with. Even with trusted partners like UHG, brands must restrict data access privileges as much as possible and demand high security standards.

2. Centralization Makes the Industry Vulnerable

Relatedly, this attack reveals how centralized the industry has become. Not only are third-party dependencies common, but many organizations depend on the same third parties. That centralization makes these vulnerabilities exponentially more dangerous, as one attack can affect the whole sector.

The health care industry must move past these single points of failure. Some external dependencies are inevitable, but medical groups should avoid them wherever possible. Splitting tasks between multiple vendors may be necessary to reduce the impact of a single breach.

Regulatory changes may support this shift. During a Congressional hearing on the incident, some lawmakers expressed concerns over consolidation in the health care industry and the cyber risks it poses. This growing sentiment could lead to a sector-wide reorganization, but in the meantime, private companies should take the initiative to move away from large centralized dependencies where they can.

AI Cognition – The Next Nut To Crack

By MIKE MAGEE

OpenAI says its new GPT-4o is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” So it can speak human, but can it think human?

The “concept of cognition” has been a scholarly football for the past two decades, centered primarily on “Darwin’s claim that other species share the same ‘mental powers’ as humans, but to different degrees.” But how about genAI-powered machines? Do they think?

The first academician to attempt to define the word “cognition” was Ulric Neisser in the first ever textbook of cognitive psychology in 1967. He wrote that “the term ‘cognition’ refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation…”

The word cognition is derived from “Latin cognoscere ‘to get to know, recognize,’ from assimilated form of com ‘together’ + gnoscere ‘to know’ …”

Knowledge and recognition would not seem to be highly charged terms. And yet, in the years following Neisser’s publication there has been a progressively intense, and sometimes heated debate between psychologists and neuroscientists over the definition of cognition.

The focal point of the disagreement has (until recently) revolved around whether the behaviors observed in non-human species are “cognitive” in the human sense of the word. The discourse in recent years has bled over into the fringes to include the belief by some that plants “think” even though they are not in possession of a nervous system, or the belief that ants communicating with each other in a colony are an example of “distributed cognition.”

What scholars in the field do seem to agree on is that no suitable definition for cognition exists that will satisfy all. But most agree that the term encompasses “thinking, reasoning, perceiving, imagining, and remembering.” Tim Bayne PhD, a Melbourne-based professor of philosophy, adds that these various qualities must be able to be “systematically recombined with each other,” and not be simply triggered by some provocative stimulus.

Allen Newell PhD, a professor of computer science at Carnegie Mellon, sought to bridge the gap between human and machine when it came to cognition when he published a paper in 1958 that proposed “a description of a theory of problem-solving in terms of information processes amenable for use in a digital computer.”

Machines have a leg up in the company of some evolutionary biologists who believe that true cognition involves acquiring new information from various sources and combining it in new and unique ways.

Developmental psychologists carry their own unique insights from observing and studying the evolution of cognition in young children. What exactly is evolving in their young minds, and how does it differ, but eventually lead to adult cognition? And what about the explosion of screen time?

Pediatric researchers, confronted with AI-obsessed youngsters and worried parents, are coming at it from the opposite direction. With 95% of 13- to 17-year-olds now using social media platforms, machines are a developmental force, according to the American Academy of Child and Adolescent Psychiatry. The machine has risen in status and influence from a sideline assistant coach to an on-field teammate.

Scholars admit “It is unclear at what point a child may be developmentally ready to engage with these machines.” At the same time, they are forced to admit that the technological tidal waves leave few alternatives. “Conversely, it is likely that completely shielding children from these technologies may stunt their readiness for a technological world.”

Bence P Ölveczky, an evolutionary biologist from Harvard, is pretty certain what cognition is and is not. He says it “requires learning; isn’t a reflex; depends on internally generated brain dynamics; needs access to stored models and relationships; and relies on spatial maps.”

Thomas Suddendorf PhD, a research psychologist from New Zealand, who specializes in early childhood and animal cognition, takes a more fluid and nuanced approach. He says, “Cognitive psychology distinguishes intentional and unintentional, conscious and unconscious, effortful and automatic, slow and fast processes (for example), and humans deploy these in diverse domains from foresight to communication, and from theory-of-mind to morality.”

Perhaps the last word on this should go to Descartes. He believed that humans’ mastery of thoughts and feelings separated them from animals, which he considered to be “mere machines.”

Were he with us today, witnessing generative AI’s insatiable appetite for data, its hidden recesses of learning, the speed and power of its insurgency, and human uncertainty about how to turn the thing off, perhaps his judgment of these machines would be less disparaging; more akin to Mira Murati, OpenAI’s chief technology officer, who announced with some degree of understatement this month, “We are looking at the future of the interaction between ourselves and machines.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Your Water, or Your Life

By KIM BELLARD

Matthew Holt, publisher of The Health Care Blog, thinks I worry too much about too many things. He’s probably right. But here’s one worry I’d be remiss in not alerting people to: your water supply is not as safe – not nearly as safe – as you probably assume it is.

I’m not talking about the danger of lead pipes. I’m not even talking about the danger of microplastics in your water. I’ve warned about both of those before (and I’m still worried about them). No, I’m worried we’re not taking the danger of cyberattacks against our water systems seriously enough.

A week ago the EPA issued an enforcement alert about cybersecurity vulnerabilities and threats to community drinking water systems. This was a day after EPA head Michael Regan and National Security Advisor Jake Sullivan sent a letter to all U.S. governors warning them of “disabling cyberattacks” on water and wastewater systems and urging them to cooperate in safeguarding those infrastructures.

“Drinking water and wastewater systems are an attractive target for cyberattacks because they are a lifeline critical infrastructure sector but often lack the resources and technical capacity to adopt rigorous cybersecurity practices,” the letter warned. It specifically cited known state-sponsored attacks from Iran and China.

The enforcement alert elaborated:

Cyberattacks against CWSs are increasing in frequency and severity across the country. Based on actual incidents we know that a cyberattack on a vulnerable water system may allow an adversary to manipulate operational technology, which could cause significant adverse consequences for both the utility and drinking water consumers. Possible impacts include disrupting the treatment, distribution, and storage of water for the community, damaging pumps and valves, and altering the levels of chemicals to hazardous amounts.

NextGov/FCW paints a grim picture of how vulnerable our water systems are:

Multiple nation-state adversaries have been able to breach water infrastructure around the country. China has been deploying its extensive and pervasive Volt Typhoon hacking collective, burrowing into vast critical infrastructure segments and positioning along compromised internet routing equipment to stage further attacks, national security officials have previously said.

In November, IRGC-backed cyber operatives broke into industrial water treatment controls and targeted programmable logic controllers made by Israeli firm Unitronics. Most recently, Russia-linked hackers were confirmed to have breached a slew of rural U.S. water systems, at times posing physical safety threats.

We shouldn’t be surprised by these attacks. We’ve come to learn that China, Iran, North Korea, and Russia have highly sophisticated cyber teams, but, when it comes to water systems, it turns out the attacks don’t have to be all that sophisticated. The EPA noted that over 70% of water systems it inspected did not fully comply with security standards, including basic protections such as disallowing default passwords.

NextGov/FCW pointed out that last October the EPA was forced to rescind requirements that water agencies at least evaluate their cyber defenses, due to legal challenges from several (red) states and the American Water Works Association. Take that in. I’ll bet China, Iran, and others are evaluating them.

“In an ideal world … we would like everybody to have a baseline level of cybersecurity and be able to confirm that they have that,” Alan Roberson, executive director of the Association of State Drinking Water Administrators, told AP. “But that’s a long ways away.”

Tom Kellermann, SVP of Cyber Strategy at Contrast Security, told Security Magazine: “The safety of the U.S. water supply is in jeopardy. Rogue nation states are frequently targeting these critical infrastructures, and soon we will experience a life-threatening event.” That doesn’t sound like a long ways away.

Getting the Future of Health Care Wrong

By KIM BELLARD

Sure, there’s lots of A.I. hype to talk about (e.g., the AI regulation proposed by Chuck Schumer, or the latest updates from Microsoft, Google, and OpenAI), but a recent column by Wall Street Journal tech writer Christopher Mims – What I Got Wrong in a Decade of Predicting the Future of Tech – reminded me how easily we get overexcited by such things.

I did my own mea culpa about my predictions for healthcare a couple of years ago, but since Mr. Mims is both smarter and a better writer than I am, I’ll use his structure and some of his words to try to apply them to healthcare.  

Mr. Mims offers five key learnings:

  1. Disruption is overrated
  2. Human factors are everything
  3. We’re all susceptible to this one kind of tech B.S.
  4. Tech bubbles are useful even when they’re wasteful
  5. We’ve got more power than we think

Let’s take each of these in turn and see how they relate not just to tech but also to healthcare.

Disruption is overrated

“It’s not that disruption never happens,” Mr. Mims clarifies. “It just doesn’t happen nearly as often as we’ve been led to believe.”  Well, no kidding. I’ve been in healthcare for longer than I care to admit, and I’ve lost count of all the “disruptions” we were promised.

The fact of the matter is that healthcare is a huge part of the economy. Trillions of dollars are at stake, not to mention millions of jobs and hundreds of billions of profits. Healthcare is too big to fail, and possibly too big to disrupt in any meaningful way.

If some super genius came along and offered us a simple solution that would radically improve our health but slash more than half of that spending and most of those jobs, I honestly am not sure we’d take the offer. Healthcare likes its disruption in manageable gulps, and disruptors often have their eye more on their share of those trillions than on reducing them.

For better or worse, change in healthcare usually comes in small increments.

Human factors are everything

“But what’s most often holding back mass adoption of a technology is our humanity,” Mr. Mims points out. “The challenge of getting people to change their ways is the reason that adoption of new tech is always much slower than it would be if we were all coldly rational utilitarians bent solely on maximizing our productivity or pleasure.” 

Boy, this hits the healthcare nail on the head. If we all simply ate better, exercised more, slept better, and spent less time on our screens, our health and our healthcare system would be very different. It’s not rocket science, but it is proven science.

But we don’t. We like our short-cuts, we don’t like personal inconvenience, and why skip the Krispy Kreme when we can just take Wegovy? Figure out how to motivate people to take more charge of their health: that’d be disruption.

We’re all susceptible to this one kind of tech B.S.

Mr. Mims believes: “Tech is, to put it bluntly, full of people lying to themselves,” although he is careful to add: “It’s usually not malicious.” That’s true in healthcare as well. I’ve known many healthcare innovators, and almost without exception they are true believers in what they are proposing. The good ones get others to buy into their vision. The great ones actually make some changes, albeit rarely quite as profoundly as hoped.

But just because someone believes something strongly and articulates it very well doesn’t mean it’s true. I’d like to see significant changes as much as anyone, and more than most, and I know I’m too often guilty of looking for what Mr. Mims calls “the winning lottery ticket” when it comes to healthcare innovation, even though I know the lottery is a sucker’s bet.

To paraphrase Ronald Reagan (!), hope but verify.

Tech bubbles are useful even when they’re wasteful

 Healthcare has its bubbles as well, many but not all of them tech related. How many health start-ups over the last twenty years can you name that did not survive, much less make a mark on the healthcare system? How many billions of investments do they represent?

But, as Mr. Mims recounts, Bill Gates once said that most startups were “silly” and would go bankrupt, but that the handful of ideas — he specifically said ideas, and not companies — that persist would later prove to be “really important.”

The trick, in healthcare as in tech, is separating the proverbial wheat from the chaff, both in terms of what ideas deserve to persist and in which people/organizations can actually make them work. There are good new ideas out there, some of which could be really important.

We’ve got more power than we think

Many of us feel helpless when encountering the healthcare system. It’s too big, too complicated, too impersonal, and too full of specialized knowledge for us to have the kind of agency we might like.

Mr. Mims’ advice, when it comes to tech, is: “Collectively, we have agency over how new tech is developed, released, and used, and we’d be foolish not to use it.” The same is true with healthcare. We can be the patient patients our healthcare system has come to expect, or we can be the assertive ones that it will have to deal with.

I think about people like Dave deBronkart or the late Casey Quinlan when it comes to demanding our own data. I think about Andrea Downing and The Light Collective when it comes to privacy rights. I think about all the biohackers who are not waiting for the healthcare system to catch up on how to apply the latest tech to their health. And I think about all those patient advocates – too numerous to name – who are insisting on respect from the healthcare system and a meaningful role in managing their health.

Yes, we’ve got way more power than we think. Use it.

————

Mr. Mims is humble in admitting that he fell for some people, ideas, gadgets, and services that perhaps he shouldn’t. The key thing he does, though, to use his words, is “paying attention to what’s just over the horizon.” We should all be trying to do that and doing our best to prepare for it.

My horizon is what a 22nd-century healthcare system could, will and should look like. I’m not willing to settle for what our early-21st-century one does. I expect I’ll continue to get a lot wrong, but I’m still going to try.

Calum Yacoubian, Linguamatics

Calum Yacoubian is product director of Linguamatics, a company acquired in 2019 by IQVIA, the huge data/clinical trials company that is itself the product of a merger between IMS and Quintiles about a decade ago. Linguamatics is in the business of helping providers, payers and others with clinical background research based on NLP, and now, with generative AI, it is democratizing the ability to use NLP. Their case studies include, for example, trying to find patients with SDOH limitations; they can aim LLMs at their data for summarization and use them to help with data extraction from unstructured medical records. Calum thinks that LLMs are expanding the market for NLP!–Matthew Holt

A Call for Responsible Antibiotic Use in the Era of Telehealth

By PHIYEN NGUYEN

Telehealth has revolutionized health care as we know it, but it may also be contributing to the overuse of antibiotics and antimicrobial resistance.

Antibiotics and the Risks

Antibiotics treat infections caused by bacteria, like strep throat and whooping cough. They do this by either killing or slowing the growth of bacteria. Antibiotics save millions of lives around the world each year, but they can also be overprescribed and overused.

Excessive antibiotic use can lead to antimicrobial resistance (AMR). AMR happens when germs from the initial infection continue to survive, even after a patient completes a course of antibiotics. In other words, the germs are now resilient against that treatment. Resistance to even one type of antibiotic can lead to serious complications and prolonged recovery, requiring additional courses of stronger medicines.

The Centers for Disease Control and Prevention reported that AMR leads to over 2.8 million infections and 35,000 deaths each year in the United States. By 2050, AMR is predicted to cause about 10 million deaths annually, resulting in a global public health crisis.

Increase in Telehealth and Antibiotic Prescriptions

Surprisingly, the growth of telehealth care may be contributing to antibiotic overprescribing and overuse.

Telehealth exploded during the COVID-19 pandemic and, today, 87 percent of physicians use it regularly. Telehealth allows patients to receive health care virtually, through telephone, video, or other forms of technology. It offers increased flexibility, decreased travel time, and less risk of spreading disease for both patients and providers.

Popular platforms like GoodRx and Doctor on Demand market convenient and easy access to health care. Others offer specialized services, like WISP, which focuses on women’s health. Despite its benefits, telehealth is not perfect.

It limits physical examinations (by definition) and rapport building, which changes the patient-provider relationship. It’s also unclear whether providers can truly make accurate diagnoses in a virtual setting in some cases.

Studies also show higher antibiotic prescribing rates in virtual consultations compared to in-person visits.

For instance, physicians were more likely to prescribe antibiotics for urinary tract infections during telehealth appointments (99%) compared to office visits (49%). In another study, 55 percent of telehealth visits for respiratory tract infections resulted in antibiotic prescriptions; many of these cases were later found not to require them.

Glen Tullman, CEO, Transcarent, talks about their new Wayfinding AI service

Glen Tullman came on THCB to talk about Transcarent’s new Wayfinding AI service. Transcarent has spent more than $125m (of the some $450m or so it’s raised so far) plugging an AI chatbot called Wayfinding into its various segments–which include the former 98.6, now rebranded as Transcarent Everyday Care. Wayfinding brings benefits, clinical guidance and care delivery together on one intelligent chatbot platform. It’s no coincidence that this was released the same week as OpenAI’s GPT-4o and the latest Google Gemini release. Make no mistake, this is a huge bet and probably the most aggressive, if obvious, use of an AI agent in health care I’ve seen so far. I saw a demo earlier which was pretty impressive, and I had fun talking with Glen about what it’s capable of doing now, and what it will be–Matthew Holt

GPT-4o: What’s All The Fuss About?

By MIKE MAGEE

If you follow my weekly commentary on HealthCommentary.org or THCB, you may have noticed over the past 6 months that I appear to be obsessed with mAI, or Artificial Intelligence’s intrusion into the health sector space.

So today, let me share a secret. My deep dive has been part of a long preparation for a lecture (“AI Meets Medicine”) I will deliver this Friday, May 17, at 2:30 PM in Hartford, CT. If you are in the area, it is open to the public. You can register to attend HERE.

This image is one of 80 slides I will cover over the 90-minute presentation on a topic that is massive, revolutionary, transformational and complex. It is also a moving target, as illustrated in the final row above, which I added this morning.

The addition was forced by Mira Murati, OpenAI’s chief technology officer, who announced from a perch in San Francisco yesterday that, “We are looking at the future of the interaction between ourselves and machines.”

The new application, designed for both computers and smartphones, is GPT-4o. Unlike prior members of the GPT family, which distinguished themselves by their self-learning generative capabilities and an insatiable thirst for data, this new application is not so much focused on the search space but instead creates a “personal assistant” that is speedy and conversant in text, audio and image (“multimodal”).

OpenAI says this is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” And they are fast to reinforce that this is just the beginning, stating on their website this morning “With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.”

It is useful to remind that this whole AI movement, in Medicine and every other sector, is about language. And as experts in language remind us, “Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology,” requiring a working knowledge of “Phonetics, Anatomy, Acoustics and Human Development, Syntax, Lexicon, Gesture, Phonological Representations, Syllabic Organization, Speech Perception, and Neuromuscular Control.”

The notion of instantaneous, multimodal communication with machines has seemingly come out of nowhere but is actually the product of nearly a century of imaginative, creative and disciplined discovery by information technologists and human speech experts, who have only recently fully converged with each other. As paleolithic archeologist Paul Pettit, PhD, puts it, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” That is to say, “Your multimodal computer imagery is part of a conversation begun a long time ago in ancient rock drawings.”

Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”  The shorthand: We humans have moved “From babble to concordance to inclusivity…”

GPT-4o is just the latest advance, but it is notable not because it emphasizes the capacity for “self-learning,” which the New York Times correctly bannered as “Exciting and Scary,” but because it is focused on speed and efficiency in the effort to compete on an even playing field with human-to-human language. As OpenAI states, “GPT-4o is 2x faster, half the price, and has 5x higher (traffic) rate limits compared to GPT-4.”

Practicality and usability are the words I’d choose. In the company’s words, “Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food’s history and significance, and get recommendations.”
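For readers who want to see what that menu scenario looks like from the developer side, here is a minimal sketch of how a multimodal request mixing text and an image is typically structured. Only the payload construction is shown; the prompt wording, the example URL, and the `build_menu_request` helper are illustrative assumptions, not details from OpenAI’s announcement.

```python
# Hypothetical sketch: building a multimodal chat request like the menu
# example above. The prompt text and image URL are illustrative only.

def build_menu_request(image_url: str) -> list:
    """Build a chat message whose content mixes a text part with an
    image reference, in the content-parts style used by multimodal
    chat APIs."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Translate this menu into English and suggest one dish."},
                {"type": "image_url",
                 "image_url": {"url": image_url}},
            ],
        }
    ]

# With an API client, the payload would then be sent roughly like:
#   response = client.chat.completions.create(
#       model="gpt-4o", messages=build_menu_request(url))

if __name__ == "__main__":
    messages = build_menu_request("https://example.com/menu.jpg")
    print(messages[0]["content"][1]["image_url"]["url"])
```

The key design point is that text and image arrive in a single message, so the model reasons over both together rather than handling them in separate passes.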

In my lecture, I will cover a great deal of ground, as I attempt to provide historic context, relevant nomenclature and definitions of new terms, and the great potential (both good and bad) for applications in health care. As many others have said, “It’s complicated!”

But as yesterday’s announcement in San Francisco makes clear, the human-machine interface has blurred significantly. Or as Mira Murati put it, “You want to have the experience we’re having — where we can have this very natural dialogue.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Chakri Toleti, Care.ai

Chakri Toleti is an occasional Bollywood film producer (you can Google that) and also the CEO of Care.ai–one of the leading companies using sensors and AI to figure out what is going on in that hospital room. They’ve grown very fast in recent years, fundamentally by using technology to monitor patients and help improve their care, improve patient safety and figure out what else is needed to improve the care process. You’ll also see me doing a little bit of self-testing!–Matthew Holt

Will AI Revolutionize Surgical Care?  Yes, But Maybe Not How You Think

By MIKE MAGEE

If you talk to consultants about AI in Medicine, it’s full speed ahead. GenAI assistants, “upskilling” the work force, reshaping customer service, new roles supported by reallocation of budgets, and always with one eye on “the dark side.”

But one area that has been relatively silent is surgery. What’s happening there? In June 2023, the American College of Surgeons (ACS) weighed in with a report that largely stated the obvious. They wrote, “The daily barrage of news stories about artificial intelligence (AI) shows that this disruptive technology is here to stay and on the verge of revolutionizing surgical care.”

Their summary self-analysis was cautious, stating: “By highlighting tools, monitoring operations, and sending alerts, AI-based surgical systems can map out an approach to each patient’s surgical needs and guide and streamline surgical procedures. AI is particularly effective in laparoscopic and robotic surgery, where a video screen can display information or guidance from AI during the operation.”

The automatic emergency C-Section in Prometheus–Coming, but not quite yet!

So the ACS is not anticipating an invasion of robots. In many ways, this is understandable. The operating theater does not reward hyperbole or flash performances. In an environment where risk is palpable, and simple tremors at the wrong time, and in the wrong place, can be deadly, surgical players are well-rehearsed and trained to remain calm, conservative, and alert members of the “surgical team.”
