Tag: AI

You’re Not Going to Automate MY Job

By KIM BELLARD

Earlier this month U.S. dockworkers struck, for the first time in decades. Their union, the International Longshoremen’s Association (ILA), was demanding a 77% pay increase, rejecting the shipping companies’ offer of 50%. People worried about the impact on the economy, how it might affect the upcoming election, even whether Christmas would be ruined. Some panic hoarding ensued.

Then, just three days later, the strike was over, with an agreement for a 60% wage increase over six years. Work resumed. Everyone’s happy right? Well, no. The agreement is only a truce until January 15, 2025. While money was certainly an issue – it always is – the real issue is automation, and the two sides are far apart on that.

Most of us aren’t dockworkers, of course, but their union’s attitude towards automation has lessons for our jobs nonetheless.

The advent of shipping containers in the 1960s (if you haven’t read The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, by Marc Levinson, I highly recommend it) made increased use of automation in the shipping industry not only possible but inevitable. The ports, the shipping companies, and the unions all knew this, and have been fighting about it ever since. Add better robots and, now, AI to the mix, and one wonders when the whole process will be automated.

Curiously, the U.S. is not a leader in this automation. Margaret Kidd, program director and associate professor of supply chain logistics at the University of Houston, told The Hill: “What most Americans don’t realize is that American exceptionalism does not exist in our port system. Our infrastructure is antiquated. Our use of automation and technology is antiquated.”

Eric Boehm of Reason agrees:

The problem is that American ports need more automation just to catch up with what’s considered normal in the rest of the world. For example, automated cranes in use at the port of Rotterdam in the Netherlands since the 1990s are 80 percent faster than the human-operated cranes used at the port in Oakland, California, according to an estimate by one trade publication.

The top-rated U.S. port in the World Bank’s annual performance index ranks only 53rd.

Continue reading…

Pete Hudson, Alta Partners & Transcarent Investor (Part 2)

Pete Hudson is one of the OGs of digital health. As an emergency room doc he was fed up with his friends bothering him with their medical problems and he created a tool called iTriage, which helped patients figure out what condition they had, and where to go to deal with it. This was fifteen years ago and we’re now starting to see the evolution of that. Pete is now a venture capitalist and an investor in Transcarent–the sponsor of a new video series on THCB. This is part 2 of our conversation (part 1 is here) and we dive much more into AI and what Transcarent’s Wayfinding tool and other AI like it could do to change health care and the patient experience–Matthew Holt

Pete Hudson, Alta Partners & Transcarent Investor (Part 1)

Pete Hudson is one of the OGs of digital health. As an emergency room doc he was fed up with his friends bothering him with their medical problems and he created a tool called iTriage, which helped patients figure out what condition they had, and where to go to deal with it. This was fifteen years ago and we’re now starting to see the evolution of that. Pete is now a venture capitalist and an investor in Transcarent–the sponsor of a new video series on THCB. We had a long conversation about the evolution of digital health, what went right, what opportunities got missed, and what to expect next. This is part one of our conversation, and allows two guys who were there close to the start of this world to survey what’s happened since–Matthew Holt

The Silicon Curtain Descends on SB 1047

By MIKE MAGEE

Whether you’re talking health, environment, technology or politics, the common denominator these days appears to be information.  And the injection of AI, not surprisingly, has managed to reinforce our worst fears about information overload and misinformation. As the “godfather of AI”, Geoffrey Hinton, confessed as he left Google after a decade of leading their AI effort, “It is hard to see how you can prevent the bad actors from using AI for bad things.”

Hinton is a 75-year-old British expatriate who has been around the world. In 1972 he began to work with neural networks that are today the foundation of AI. Back then he was a graduate student at the University of Edinburgh. Mathematics and computer science were his life, but they co-existed alongside a well-evolved social conscience, which caused him to abandon a 1980s post at Carnegie Mellon rather than accept Pentagon funding with a possible endpoint that included “robotic soldiers.”

By 2013, he was comfortably resettled at the University of Toronto, where he managed to create a computer neural network able to teach itself image identification by analyzing data over and over again. That caught Google’s eye and made Hinton $44 million richer overnight. It also won Hinton the Turing Award, the “Nobel Prize of Computing,” in 2018. But on May 1, 2023, he unceremoniously quit over a range of safety concerns.

He didn’t go quietly. At the time, Hinton took the lead in signing on to a public statement by scientists that read, “We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.” This was part of an effort to encourage Governor Newsom of California to sign SB 1047, which the California Legislature passed to codify regulations that the industry had already pledged to pursue voluntarily. They failed, but more on that in a moment.

At the time of his resignation from Google, Hinton didn’t mince words. In an interview with the BBC, he described generative AI as “quite scary…This is just a kind of worst-case scenario, kind of a nightmare scenario.”

Hinton has a knack for explaining complex mathematical and computer concepts in simple terms.

Continue reading…

Tiny Is Mighty

By KIM BELLARD

I am a fanboy for AI; I don’t really understand the technical aspects, but I sure am excited about its potential. I’m also a sucker for a catchy phrase. So when I (belatedly) learned about TinyAI, I was hooked.  

Now, as it turns out, TinyAI (also known as Tiny AI) has been around for a few years, but with the general surge of interest in AI it is now getting more attention. There are also TinyML and Edge AI, the distinctions between which I won’t attempt to parse. The point is, AI doesn’t have to involve huge datasets run on massive servers somewhere in the cloud; it can happen on about as small a device as you care to imagine. And that’s pretty exciting.

What caught my eye was an overview in Cell by Farid Nakhle, a professor at Temple University, Japan Campus: Shrinking the Giants: Paving the Way for TinyAI. “Transitioning from the landscape of large artificial intelligence (AI) models to the realm of edge computing, which finds its niche in pocket-sized devices, heralds a remarkable evolution in technological capabilities,” Professor Nakhle begins.

AI’s many successes, he believes, “…are demanding a leap in its capabilities, calling for a paradigm shift in the research landscape, from centralized cloud computing architectures to decentralized and edge-centric frameworks, where data can be processed on edge devices near to where they are being generated.” The demands for real time processing, reduced latency, and enhanced privacy make TinyAI attractive.

Accordingly: “This necessitates TinyAI, here defined as the compression and acceleration of existing AI models or the design of novel, small, yet effective AI architectures and the development of dedicated AI-accelerating hardware to seamlessly ensure their efficient deployment and operation on edge devices.”

Professor Nakhle gives an overview of those compression and acceleration techniques, as well as architecture and hardware designs, all of which I’ll leave as an exercise for the interested reader.  
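To make the compression idea concrete, here is a minimal sketch of one widely used technique, post-training dynamic quantization, using PyTorch. The toy network and layer choices are my own illustrative assumptions, not anything taken from Professor Nakhle’s paper; the point is simply that storing weights as 8-bit integers instead of 32-bit floats shrinks a model enough to sit more comfortably on an edge device.

```python
# A minimal sketch of post-training dynamic quantization with PyTorch.
# The toy network below is a hypothetical stand-in for a model being
# shrunk to run on an edge device; it is not from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Store Linear-layer weights as 8-bit integers instead of 32-bit floats,
# cutting weight memory roughly 4x and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```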

If all this sounds futuristic, here are some current examples of TinyAI models:

  • This summer Google launched Gemma 2 2B, a 2 billion parameter model that it claims outperforms OpenAI’s GPT-3.5 and Mistral AI’s Mixtral 8x7B. VentureBeat opined: “Gemma 2 2B’s success suggests that sophisticated training techniques, efficient architectures, and high-quality datasets can compensate for raw parameter count.”
  • Also this summer OpenAI introduced GPT-4o mini, “our most cost-efficient small model.” It “supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future.”
  • Salesforce recently introduced its xLAM-1B model, which it likes to call the “Tiny Giant.” It supposedly has only 1 billion parameters, yet Marc Benioff claims it outperforms models 7x its size and boldly says: “On-device agentic AI is here.”
  • This spring Microsoft launched Phi-3 Mini, a 3.8 billion parameter model, which is small enough for a smartphone. Microsoft claims it compares well to GPT-3.5 as well as Meta’s Llama 3.
  • H2O.ai offers Danube 2, a 1.8 billion parameter model that Alan Simon of Hackernoon calls the most accurate of the open-source tiny LLMs.

A few billion parameters may not sound so “tiny,” but keep in mind that other AI models may have trillions.
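If you want a feel for how small these models are in practice, here is a hedged sketch of running one of them locally with the Hugging Face transformers library. The model identifier and generation settings are assumptions for illustration (check the model hub for the exact name and license); the larger point is that a few-billion-parameter model will load on an ordinary laptop, where a trillion-parameter model will not.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library is
# installed and the named checkpoint is available on the model hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # assumed model ID; any small open model works
    device_map="auto",                          # uses a GPU if present, otherwise CPU
)

prompt = "In one sentence, why might tiny AI models matter for health care?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```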

Continue reading…

Artificial Intelligence Plus Data Democratization Requires New Health Care Framework

By MICHAEL MILLENSON

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the organizational author of the plan, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information being used to generate inferences about treatment and health improvement.

Meanwhile, the content of health information, its capabilities, and, crucially, the loci of control are all undergoing radical shifts due to the combined effects of data democratization and artificial intelligence. The increasing sophistication of consumer-facing AI tools such as biometric monitoring and web-based analytics is being seen as a harbinger of “fundamental changes” in interactions between health care professionals and patients.

In that context, a framework of information sharing I’ve called “collaborative health” could help proactively create a therapeutic alliance designed to respond to the emerging new realities of the AI age.

The term (not to be confused with the interprofessional coordination known as “collaborative care”) describes a shifting constellation of relationships for health maintenance and sickness care shaped by individuals based on their life circumstances. At a time when people can increasingly find, create, control, and act upon an unprecedented breadth and depth of personalized information, the traditional care system will often remain a part of these relationships, but not always. For example, a review of breast cancer apps found that about one-third now use individualized, patient-reported health data obtained outside traditional care settings.

Collaborative health has three core principles: shared information, shared engagement, and shared accountability. They are meant to enable a framework of mutual trust and obligation with which to address the clinical, ethical, and legal issues AI and data democratization are bringing to the fore. As the white paper AI Rights for Patients noted, digital technologies can be vital tools, but they can also expose patients to privacy breaches, illegal data sharing and other “cyber harms.” Involving patients “is not just a moral imperative; it is foundational to the responsible and effective deployment of AI in health and in care.” (While “responsible” is not defined, one plausible definition might be “defensible to a jury.”)

Below is a brief description of how collaborative health principles might apply in practice.

Shared information

While the OurNotes initiative represents a model for co-creation of information with clinicians, important non-traditional inputs that should be shared are still generally absent from the record. These might include not just patient-provided data from vetted wearables and sensors, but also information from important non-traditional providers, such as the online fertility companies often accessed through an employee benefit. Whatever is in the record, the 21st Century Cures Act and subsequent regulations addressing interoperability through mechanisms such as Fast Healthcare Interoperability Resources, more commonly known as FHIR, have made much of that information available for patients to access and share electronically with whomever they choose.
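As a concrete illustration of the patient-directed access FHIR enables, here is a minimal sketch that pulls a patient’s Observation resources (vital signs, lab results) from a FHIR server’s REST API. The base URL and patient ID are placeholders for illustration only; a real application would point at the provider’s endpoint and authenticate via SMART on FHIR / OAuth rather than making an anonymous request.

```python
import requests

# Placeholder FHIR endpoint and patient ID (illustrative assumptions only).
FHIR_BASE = "https://example-fhir-server.org/baseR4"
PATIENT_ID = "12345"

# FHIR exposes clinical data as RESTful resources; Observations hold
# items such as vital signs and lab results.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "_count": 20},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()

# Results come back as a Bundle; each entry is one Observation resource.
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text", "unknown observation")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```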

Provider sharing of non-traditional information that comes from outside the EHR could be more problematic. So-called “commercially available information,” not protected by HIPAA, is being used to generate inferences about health improvement interventions. Individually identified data can include shopping habits, online searches, living arrangements and many other variables analyzed by proprietary AI algorithms that have undergone no public scrutiny for accuracy or bias. Since use by providers is often motivated by value-based payment incentives, voluntary disclosure will distance clinicians from a questionable form of surveillance capitalism.

Continue reading…

Innovators: Avoid Health Care

By KIM BELLARD

NVIDIA founder and CEO Jensen Huang has become quite the media darling lately, due to NVIDIA’s skyrocketing market value over the past two years ($3.3 trillion now, thank you very much; a year ago it first hit $1 trillion). It is now the world’s third-largest company by market capitalization. Last week he gave the commencement speech at Caltech, and offered those graduates some interesting insights.

Which, of course, I’ll try to apply to healthcare.

Mr. Huang founded NVIDIA in 1993, and took the company public in 1999, but for much of its existence it struggled to find its niche. Mr. Huang figured NVIDIA needed to go to a market where there were no customers yet – “because where there are no customers, there are no competitors.” He likes to call these “zero billion dollar markets” (a phrase I gather he did not invent).

About a decade ago the company bet on deep learning and A.I. “No one knew how far deep learning could scale, and if we didn’t build it, we’d never know,” Mr. Huang told the graduates. “Our logic is: If we don’t build it, they can’t come.”

NVIDIA did build it, and, boy, they did come.

He believes we all should try to do things that haven’t been done before, things that “are insanely hard to do,” because if you succeed you can make a real contribution to the world. Going into zero billion dollar markets allows a company to be a “market-maker, not a market-taker.” He’s not interested in market share; he’s interested in developing new markets.

Accordingly, he told the Caltech graduates:

I hope you believe in something. Something unconventional, something unexplored. But let it be informed, and let it be reasoned, and dedicate yourself to making that happen. You may find your GPU. You may find your CUDA. You may find your generative AI. You may find your NVIDIA.

And in that group, some may very well.

He didn’t promise it would be easy, citing his company’s own experience, and stressing the need for resilience. “One setback after another, we shook it off and skated to the next opportunity. Each time, we gain skills and strengthen our character,” Mr. Huang said. “No setback that comes our way doesn’t look like an opportunity these days… The world can be unfair and deal you with tough cards. Swiftly shake it off. There’s another opportunity out there — or create one.”

He was quite pleased with the Taylor Swift reference; the crowd seemed somewhat less impressed.

Continue reading…

Who Needs Humans, Anyway?

By KIM BELLARD

Imagine my excitement when I saw the headline: “Robot doctors at world’s first AI hospital can treat 3,000 a day.” Finally, I thought – now we’re getting somewhere. I must admit that my enthusiasm was somewhat tempered to find that the patients were virtual. But, still.

The article was in Interesting Engineering, and it largely covered the source story in Global Times, which interviewed the research team leader Yang Liu, a professor at China’s Tsinghua University, where he is executive dean of the Institute for AI Industry Research (AIR) and associate dean of the Department of Computer Science and Technology. The professor and his team just published a paper detailing their efforts.

The paper describes what they did: “we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs).” They modestly note: “To the best of our knowledge, this is the first simulacrum of hospital, which comprehensively reflects the entire medical process with excellent scalability, making it a valuable platform for the study of medical LLMs/agents.”

In essence, “Resident Agents” randomly contract a disease, seek care at the Agent Hospital, where they are triaged and treated by Medical Professional Agents, who include 14 doctors and 4 nurses (that’s how you can tell this is only a simulacrum; in the real world, you’d be lucky to have 4 doctors and 14 nurses). The goal “is to enable a doctor agent to learn how to treat illness within the simulacrum.”

The Agent Hospital has been compared to the AI town developed at Stanford last year, which had 25 virtual residents living and socializing with each other. “We’ve demonstrated the ability to create general computational agents that can behave like humans in an open setting,” said Joon Sung Park, one of the creators. The Tsinghua researchers have created a “hospital town.”

Gosh, a healthcare system with no humans involved. It can’t be any worse than the human one. Then, again, let me know when the researchers include AI insurance company agents in the simulacrum; I want to see what bickering ensues.

Continue reading…

AI Cognition – The Next Nut To Crack

By MIKE MAGEE

OpenAI says its new GPT-4o is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” So it can speak human, but can it think human?

The “concept of cognition” has been a scholarly football for the past two decades, centered primarily on “Darwin’s claim that other species share the same ‘mental powers’ as humans, but to different degrees.” But how about genAI powered machines? Do they think?

The first academician to attempt to define the word “cognition” was Ulric Neisser in the first ever textbook of cognitive psychology in 1967. He wrote that “the term ‘cognition’ refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation…”

The word cognition is derived from “Latin cognoscere ‘to get to know, recognize,’ from assimilated form of com ‘together’ + gnoscere ‘to know’ …”

Knowledge and recognition would not seem to be highly charged terms. And yet, in the years following Neisser’s publication there has been a progressively intense, and sometimes heated debate between psychologists and neuroscientists over the definition of cognition.

The focal point of the disagreement has (until recently) revolved around whether the behaviors observed in non-human species are “cognitive” in the human sense of the word. The discourse in recent years had bled over into the fringes to include the belief by some that plants “think” even though they are not in possession of a nervous system, or the belief that ants communicating with each other in a colony are an example of “distributed cognition.”

What scholars in the field do seem to agree on is that no suitable definition for cognition exists that will satisfy all. But most agree that the term encompasses “thinking, reasoning, perceiving, imagining, and remembering.” Tim Bayne PhD, a Melbourne-based professor of philosophy, adds that these various qualities must be able to be “systematically recombined with each other,” and not be simply triggered by some provocative stimulus.

Allen Newell PhD, a professor of computer science at Carnegie Mellon, sought to bridge the gap between human and machine cognition when he published a 1958 paper proposing “a description of a theory of problem-solving in terms of information processes amenable for use in a digital computer.”

Machines have a leg up in the company of some evolutionary biologists who believe that true cognition involves acquiring new information from various sources and combining it in new and unique ways.

Developmental psychologists carry their own unique insights from observing and studying the evolution of cognition in young children. What exactly is evolving in their young minds, and how does it differ, but eventually lead to adult cognition? And what about the explosion of screen time?

Pediatric researchers, confronted with AI-obsessed youngsters and worried parents, are coming at it from the opposite direction. With 95% of 13 to 17 year olds now using social media platforms, machines are a developmental force, according to the American Academy of Child and Adolescent Psychiatry. The machine has risen in status and influence from a sideline assistant coach to an on-field teammate.

Scholars admit “It is unclear at what point a child may be developmentally ready to engage with these machines.” At the same time, they are forced to admit that the technological tidal waves leave few alternatives. “Conversely, it is likely that completely shielding children from these technologies may stunt their readiness for a technological world.”

Bence P Ölveczky, an evolutionary biologist from Harvard, is pretty certain what cognition is and is not. He says it “requires learning; isn’t a reflex; depends on internally generated brain dynamics; needs access to stored models and relationships; and relies on spatial maps.”

Thomas Suddendorf PhD, a research psychologist from New Zealand, who specializes in early childhood and animal cognition, takes a more fluid and nuanced approach. He says, “Cognitive psychology distinguishes intentional and unintentional, conscious and unconscious, effortful and automatic, slow and fast processes (for example), and humans deploy these in diverse domains from foresight to communication, and from theory-of-mind to morality.”

Perhaps the last word on this should go to Descartes. He believed that humans’ mastery of thoughts and feelings separated them from animals, which he considered to be “mere machines.”

Were he with us today, and witnessing generative AI’s insatiable appetite for data, its hidden recesses of learning, the speed and power of its insurgency, and human uncertainty about how to turn the thing off, perhaps his judgement of these machines would be less disparaging, more akin to that of Mira Murati, OpenAI’s chief technology officer, who announced with some degree of understatement this month, “We are looking at the future of the interaction between ourselves and machines.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

Getting the Future of Health Care Wrong

By KIM BELLARD

Sure, there’s lots of AI hype to talk about (e.g., the AI regulation proposed by Chuck Schumer, or the latest updates from Microsoft, Google, and OpenAI), but a recent column by Wall Street Journal tech writer Christopher Mims – What I Got Wrong in a Decade of Predicting the Future of Tech – reminded me how easily we get overexcited by such things.

I did my own mea culpa about my predictions for healthcare a couple of years ago, but since Mr. Mims is both smarter and a better writer than I am, I’ll use his structure and some of his words to try to apply them to healthcare.  

Mr. Mims offers five key learnings:

  1. Disruption is overrated
  2. Human factors are everything
  3. We’re all susceptible to this one kind of tech B.S.
  4. Tech bubbles are useful even when they’re wasteful
  5. We’ve got more power than we think

Let’s take each of these in turn and see how they relate not just to tech but also to healthcare.

Disruption is overrated

“It’s not that disruption never happens,” Mr. Mims clarifies. “It just doesn’t happen nearly as often as we’ve been led to believe.”  Well, no kidding. I’ve been in healthcare for longer than I care to admit, and I’ve lost count of all the “disruptions” we were promised.

The fact of the matter is that healthcare is a huge part of the economy. Trillions of dollars are at stake, not to mention millions of jobs and hundreds of billions of profits. Healthcare is too big to fail, and possibly too big to disrupt in any meaningful way.

If some super genius came along and offered us a simple solution that would radically improve our health but slash more than half of that spending and most of those jobs, I honestly am not sure we’d take the offer. Healthcare likes its disruption in manageable gulps, and disruptors often have their eye more on their share of those trillions than in reducing them.

For better or worse, change in healthcare usually comes in small increments.

Human factors are everything

“But what’s most often holding back mass adoption of a technology is our humanity,” Mr. Mims points out. “The challenge of getting people to change their ways is the reason that adoption of new tech is always much slower than it would be if we were all coldly rational utilitarians bent solely on maximizing our productivity or pleasure.” 

Boy, this hits the healthcare nail on the head. If we all simply ate better, exercised more, slept better, and spent less time on our screens, our health and our healthcare system would be very different. It’s not rocket science, but it is proven science.

But we don’t. We like our short-cuts, we don’t like personal inconvenience, and why skip the Krispy Kreme when we can just take Wegovy? Figure out how to motivate people to take more charge of their health: that’d be disruption.

We’re all susceptible to this one kind of tech B.S.

Mr. Mims believes: “Tech is, to put it bluntly, full of people lying to themselves,” although he is careful to add: “It’s usually not malicious.” That’s true in healthcare as well. I’ve known many healthcare innovators, and almost without exception they are true believers in what they are proposing. The good ones get others to buy into their vision. The great ones actually make some changes, albeit rarely quite as profoundly as hoped.

But just because someone believes something strongly and articulates it very well doesn’t mean it’s true. I’d like to see significant changes as much as anyone, and more than most, and I know I’m too often guilty of looking for what Mr. Mims calls “the winning lottery ticket” when it comes to healthcare innovation, even though I know the lottery is a sucker’s bet.

To paraphrase Ronald Reagan (!), hope but verify.

Tech bubbles are useful even when they’re wasteful

Healthcare has its bubbles as well, many but not all of them tech-related. How many health start-ups over the last twenty years can you name that did not survive, much less make a mark on the healthcare system? How many billions of investments do they represent?

But, as Mr. Mims recounts, Bill Gates once said that most startups were “silly” and would go bankrupt, but that the handful of ideas—he specifically said ideas, and not companies—that persist would later prove to be “really important.”

The trick, in healthcare as in tech, is separating the proverbial wheat from the chaff, both in terms of what ideas deserve to persist and in which people/organizations can actually make them work. There are good new ideas out there, some of which could be really important.

We’ve got more power than we think

Many of us feel helpless when encountering the healthcare system. It’s too big, too complicated, too impersonal, and too full of specialized knowledge for us to have the kind of agency we might like.

Mr. Mims’s advice, when it comes to tech, is: “Collectively, we have agency over how new tech is developed, released, and used, and we’d be foolish not to use it.” The same is true with healthcare. We can be the patient patients our healthcare system has come to expect, or we can be the assertive ones that it will have to deal with.

I think about people like Dave deBronkart or the late Casey Quinlan when it comes to demanding our own data. I think about Andrea Downing and The Light Collective when it comes to privacy rights. I think about all the biohackers who are not waiting for the healthcare system to catch up on how to apply the latest tech to their health. And I think about all those patient advocates – too numerous to name – who are insisting on respect from the healthcare system and a meaningful role in managing their health.

Yes, we’ve got way more power than we think. Use it.

————

Mr. Mims is humble in admitting that he fell for some people, ideas, gadgets, and services that perhaps he shouldn’t. The key thing he does, though, to use his words, is “paying attention to what’s just over the horizon.” We should all be trying to do that and doing our best to prepare for it.

My horizon is what a 22nd century healthcare system could, will, and should look like. I’m not willing to settle for what our early 21st century one does. I expect I’ll continue to get a lot wrong, but I’m still going to try.
