Tag: AI

You Can’t Spell Fair Pay Without AI

By KIM BELLARD

Everything’s about AI these days. Everything is going to be about AI for a while. Everyone’s talking about it, and most of them know more about it than I do. But there is one thing about AI that I don’t think is getting enough attention. I’m old enough that the mantra “follow the money” resonates, and, when it comes to AI, I don’t like where I think the money is ending up.

I’ll talk about this both at a macro level and also specifically for healthcare.

On the macro side, one trend that I have become increasingly radicalized about over the past few years is income/wealth inequality. I wrote a couple weeks ago about how the economy is not working for many workers: executive-to-worker compensation ratios have skyrocketed over the past few decades, resulting in wage stagnation for many workers; income and wealth inequality are at levels that make the Gilded Age look positively progressive; intergenerational mobility in the United States is moribund.

That’s not the American Dream many of us grew up believing in.

We’ve got a winner-take-all economy, and it’s leaving behind more and more people. If you are a tech CEO, a hedge fund manager, or a highly skilled knowledge worker, things are looking pretty good. If you don’t have a college degree, or even if you have a college degree but with the wrong major or have the wrong skills, not so much.  

All that was happening before AI, and the question for us is whether AI will exacerbate those trends, or ameliorate them. If you are in doubt about the answer to that question, follow the money. Who is funding AI research, and what might they be expecting in return?

It seems like every day I read about how AI is impacting white collar jobs. It can help traders! It can help lawyers! It can help coders! It can help doctors! For many white collar workers, AI may be a valuable tool that will enhance their productivity and make their jobs easier – in the short term. In the long term, of course, AI may simply come for their jobs, as it is starting to do for blue collar workers.

Automation has already cost more blue collar jobs than outsourcing, and that was before anything we’d now consider AI. With AI, that trend is going to happen on steroids; jobs will disappear in droves. That’s great if you are an executive looking to cut costs, but terrible if you are one of those costs.

Continue reading…

Sean Bell, Spring Health

Sean Bell is head of new ventures at Spring Health, a very well-funded mental health company. They’ve built a tech platform that its providers (both contractors and full-time employees) work on, and they spend a lot of time using machine learning to match patients to therapists, to augment that care, and to measure its impact. Sean told me how Spring Health works, how much it’s grown, and what new specialized care is being introduced in 2025. He talks quickly and we covered a lot of ground, including the business of being a highly valued private mental health company when there are some lower-priced public companies out there. Interesting interview — Matthew Holt

Will Trump and RFK Jr. Revive His Covid Pandemic Performance?

By MIKE MAGEE

It has been a collision of past, present and future this week in the wake of Trump’s victory on November 6, 2024. The country, both for and against, has been unusually quiet. It is unclear whether this is in recognition of political exhaustion, or the desire of victors to be “good winners” and no longer “poor losers.”

Who exactly “the enemy within” are remains to be seen. But Trump is fast at work defining his cabinet and top agency officials. In his first term as President, Trump famously placed himself at the front of the line of scientific experts, sowing confusion and chaos in the early Covid response.

His 2024 campaign alliance with Robert F. Kennedy Jr. suggests health policy remains a strong interest. As his spokesperson suggested, his up-front leadership led to a resounding victory “because they trust his judgement and support his policies, including his promise to Make America Healthy Again alongside well-respected leaders like RFK Jr.”

For those with a memory of Trump’s checkered and disruptive management of the Covid crisis, it is useful to remind ourselves of those days not long ago, and to consider whether throwing Bobby Kennedy Jr. into the mix back then would have been helpful.

I have been revisiting the Covid pandemic as I have prepared a 3-session course on “AI and Medicine” at the University of Hartford’s Presidents College. The course includes a number of case studies, notably the multi-pronged role of AI in addressing the Covid pandemic as it spun out of control in 2020.

The early Covid timeline reads like this:

Continue reading…

You’re Not Going to Automate MY Job

By KIM BELLARD

Earlier this month U.S. dockworkers struck, for the first time in decades. Their union, the International Longshoremen’s Association (ILA), was demanding a 77% pay increase, rejecting an offer of a 50% pay increase from the shipping companies. People worried about the impact on the economy, how it might affect the upcoming election, even whether Christmas would be ruined. Some panic hoarding ensued.

Then, just three days later, the strike was over, with an agreement for a 60% wage increase over six years. Work resumed. Everyone’s happy right? Well, no. The agreement is only a truce until January 15, 2025. While money was certainly an issue – it always is – the real issue is automation, and the two sides are far apart on that.

Most of us aren’t dockworkers, of course, but their union’s attitude towards automation has lessons for our jobs nonetheless.

The advent of shipping containers in the 1960s (if you haven’t read The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, by Marc Levinson, I highly recommend it) made increased use of automation in the shipping industry not only possible but inevitable. The ports, the shipping companies, and the unions all knew this, and have been fighting about it ever since. Add better robots and, now, AI to the mix, and one wonders when the whole process will be automated.

Curiously, the U.S. is not a leader in this automation. Margaret Kidd, program director and associate professor of supply chain logistics at the University of Houston, told The Hill: “What most Americans don’t realize is that American exceptionalism does not exist in our port system. Our infrastructure is antiquated. Our use of automation and technology is antiquated.”

Eric Boehm of Reason agrees:

The problem is that American ports need more automation just to catch up with what’s considered normal in the rest of the world. For example, automated cranes in use at the port of Rotterdam in the Netherlands since the 1990s are 80 percent faster than the human-operated cranes used at the port in Oakland, California, according to an estimate by one trade publication.

The top-rated U.S. port in the World Bank’s annual performance index comes in only 53rd.

Continue reading…

Pete Hudson, Alta Partners & Transcarent Investor (Part 2)

Pete Hudson is one of the OGs of digital health. As an emergency room doc he was fed up with his friends bothering him with their medical problems and he created a tool called iTriage, which helped patients figure out what condition they had, and where to go to deal with it. This was fifteen years ago and we’re now starting to see the evolution of that. Pete is now a venture capitalist and an investor in Transcarent–the sponsor of a new video series on THCB. This is part 2 of our conversation (part 1 is here) and we dive much more into AI and what Transcarent’s Wayfinding tool and other AI like it could do to change health care and the patient experience–Matthew Holt

Pete Hudson, Alta Partners & Transcarent Investor (Part 1)

Pete Hudson is one of the OGs of digital health. As an emergency room doc he was fed up with his friends bothering him with their medical problems and he created a tool called iTriage, which helped patients figure out what condition they had, and where to go to deal with it. This was fifteen years ago and we’re now starting to see the evolution of that. Pete is now a venture capitalist and an investor in Transcarent–the sponsor of a new video series on THCB. We had a long conversation about the evolution of digital health, what went right, what opportunities got missed, and what to expect next. This is part one of our conversation, and allows two guys who were there close to the start of this world to survey what’s happened since–Matthew Holt

The Silicon Curtain Descends on SB 1047

By MIKE MAGEE

Whether you’re talking health, environment, technology or politics, the common denominator these days appears to be information.  And the injection of AI, not surprisingly, has managed to reinforce our worst fears about information overload and misinformation. As the “godfather of AI”, Geoffrey Hinton, confessed as he left Google after a decade of leading their AI effort, “It is hard to see how you can prevent the bad actors from using AI for bad things.”

Hinton is a 75-year-old British expatriate who has been around the world. In 1972 he began to work with the neural networks that are today the foundation of AI. Back then he was a graduate student at the University of Edinburgh. Mathematics and computer science were his life, but they co-existed alongside a well-evolved social conscience, which caused him to abandon a 1980s post at Carnegie Mellon rather than accept Pentagon funding with a possible endpoint that included “robotic soldiers.”

By 2013, he was comfortably resettled at the University of Toronto, where he managed to create a computer neural network able to teach itself image identification by analyzing data over and over again. That caught Google’s eye and made Hinton $44 million richer overnight. It also won Hinton the Turing Award, the “Nobel Prize of Computing,” in 2018. But on May 1, 2023, he unceremoniously quit over a range of safety concerns.

He didn’t go quietly. At the time, Hinton took the lead in signing on to a public statement by scientists that read, “We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.” This was part of an effort to encourage Governor Newsom of California to sign SB 1047 which the California Legislature passed to codify regulations that the industry had already pledged to pursue voluntarily. They failed, but more on that in a moment.

At the time of his resignation from Google, Hinton didn’t mince words. In an interview with the BBC, he described generative AI as “quite scary…This is just a kind of worst-case scenario, kind of a nightmare scenario.”

Hinton has a knack for explaining complex mathematical and computer concepts in simple terms.

Continue reading…

Tiny Is Mighty

By KIM BELLARD

I am a fanboy for AI; I don’t really understand the technical aspects, but I sure am excited about its potential. I’m also a sucker for a catchy phrase. So when I (belatedly) learned about TinyAI, I was hooked.  

Now, as it turns out, TinyAI (also known as Tiny AI) has been around for a few years, but with the general surge of interest in AI it is now getting more attention. There are also TinyML and Edge AI, the distinctions between which I won’t attempt to parse. The point is, AI doesn’t have to involve huge datasets run on massive servers somewhere in the cloud; it can happen on about as small a device as you care to imagine. And that’s pretty exciting.

What caught my eye was an overview in Cell by Farid Nakhle, a professor at Temple University, Japan Campus: Shrinking the Giants: Paving the Way for TinyAI. “Transitioning from the landscape of large artificial intelligence (AI) models to the realm of edge computing, which finds its niche in pocket-sized devices, heralds a remarkable evolution in technological capabilities,” Professor Nakhle begins.

AI’s many successes, he believes, “…are demanding a leap in its capabilities, calling for a paradigm shift in the research landscape, from centralized cloud computing architectures to decentralized and edge-centric frameworks, where data can be processed on edge devices near to where they are being generated.” The demands for real time processing, reduced latency, and enhanced privacy make TinyAI attractive.

Accordingly: “This necessitates TinyAI, here defined as the compression and acceleration of existing AI models or the design of novel, small, yet effective AI architectures and the development of dedicated AI-accelerating hardware to seamlessly ensure their efficient deployment and operation on edge devices.”

Professor Nakhle gives an overview of those compression and acceleration techniques, as well as architecture and hardware designs, all of which I’ll leave as an exercise for the interested reader.  
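That said, the core idea behind one of those compression techniques, quantization, is simple enough to sketch in a few lines. The snippet below is my own illustration, not code from the paper; the weight values are made up, and real systems use dedicated libraries rather than hand-rolled arithmetic:

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus a scale factor (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale for all-zero weights
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [qi * scale for qi in q]

# Hypothetical float32 weights from one layer of a model.
weights = [0.52, -1.27, 0.03, 0.88]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4, at the cost of a small
# rounding error bounded by half the scale factor.
max_error = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
assert max_error <= scale / 2
```

Storing each weight in one byte instead of four cuts the model’s memory footprint by roughly 75%, which is exactly the kind of saving that lets a model fit on a phone or a sensor; the trade-off is that small, bounded rounding error.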

If all this sounds futuristic, here are some current examples of TinyAI models:

  • This summer Google launched Gemma 2 2B, a 2 billion parameter model that it claims outperforms OpenAI’s GPT 3.5 and Mistral AI’s Mixtral 8X7B. VentureBeat opined: “Gemma 2 2B’s success suggests that sophisticated training techniques, efficient architectures, and high-quality datasets can compensate for raw parameter count.”
  • Also this summer OpenAI introduced GPT-4o mini, “our most cost-efficient small model.” It “supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future.”
  • Salesforce recently introduced its xLAM-1B model, which it likes to call the “Tiny Giant.” It has only 1 billion parameters, yet Marc Benioff claims it outperforms models 7x its size, boldly declaring: “On-device agentic AI is here.”
  • This spring Microsoft launched Phi-3 Mini, a 3.8 billion parameter model, which is small enough for a smartphone. It claims to compare well to GPT 3.5 as well as Meta’s Llama 3.
  • H2O.ai offers Danube 2, a 1.8 billion parameter model that Alan Simon of Hackernoon calls the most accurate of the open-source tiny LLMs.

A few billion parameters may not sound so “tiny,” but keep in mind that other AI models may have trillions.
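To put those parameter counts in perspective, here is a rough back-of-the-envelope calculation (mine, not the article’s) of the memory needed just to hold model weights at 16-bit precision:

```python
def weight_memory_gb(num_params, bytes_per_param=2):
    """Approximate gigabytes needed to store the weights alone (2 bytes = fp16)."""
    return num_params * bytes_per_param / 1e9

# The "tiny" models above vs. a hypothetical trillion-parameter model.
for name, params in [("Gemma 2 2B", 2e9),
                     ("Phi-3 Mini", 3.8e9),
                     ("1T-parameter model", 1e12)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
```

By this crude measure, the “tiny” models need a few gigabytes, within reach of a high-end phone, while a trillion-parameter model needs terabytes of fast memory, which is firmly data-center territory (and this ignores the additional memory inference itself requires).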

Continue reading…

Artificial Intelligence Plus Data Democratization Requires New Health Care Framework

By MICHAEL MILLENSON

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the organizational author of the plan, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information being used to generate inferences about treatment and health improvement.

Meanwhile, the content of health information, its capabilities, and, crucially, the loci of control are all undergoing radical shifts due to the combined effects of data democratization and artificial intelligence. The increasing sophistication of consumer-facing AI tools such as biometric monitoring and web-based analytics is being seen as a harbinger of “fundamental changes” in interactions between health care professionals and patients.

In that context, a framework of information sharing I’ve called “collaborative health” could help proactively create a therapeutic alliance designed to respond to the emerging new realities of the AI age.

The term (not to be confused with the interprofessional coordination known as “collaborative care”) describes a shifting constellation of relationships for health maintenance and sickness care shaped by individuals based on their life circumstances. At a time when people can increasingly find, create, control, and act upon an unprecedented breadth and depth of personalized information, the traditional care system will often remain a part of these relationships, but not always. For example, a review of breast cancer apps found that about one-third now use individualized, patient-reported health data obtained outside traditional care settings.

Collaborative health has three core principles: shared information, shared engagement, and shared accountability. They are meant to enable a framework of mutual trust and obligation with which to address the clinical, ethical, and legal issues AI and data democratization are bringing to the fore. As the white paper AI Rights for Patients noted, digital technologies can be vital tools, but they can also expose patients to privacy breaches, illegal data sharing and other “cyber harms.” Involving patients “is not just a moral imperative; it is foundational to the responsible and effective deployment of AI in health and in care.” (While “responsible” is not defined, one plausible definition might be “defensible to a jury.”)

Below is a brief description of how collaborative health principles might apply in practice.

Shared information

While the OurNotes initiative represents a model for co-creation of information with clinicians, important non-traditional inputs that should be shared are still generally absent from the record. These might include not just patient-provided data from vetted wearables and sensors, but also information from important non-traditional providers, such as the online fertility companies often accessed through an employee benefit. Whatever is in the record, the 21st Century Cures Act and subsequent regulations addressing interoperability through mechanisms such as Fast Healthcare Interoperability Resources, more commonly known as FHIR, have made much of that information available for patients to access and share electronically with whomever they choose.

Provider sharing of non-traditional information that comes from outside the EHR could be more problematic. So-called “commercially available information,” not protected by HIPAA, is being used to generate inferences about health improvement interventions. Individually identified data can include shopping habits, online searches, living arrangements and many other variables analyzed by proprietary AI algorithms that have undergone no public scrutiny for accuracy or bias. Since use by providers is often motivated by value-based payment incentives, voluntary disclosure will distance clinicians from a questionable form of surveillance capitalism.

Continue reading…

Innovators: Avoid Health Care

By KIM BELLARD

NVIDIA founder and CEO Jensen Huang has become quite the media darling lately, due to NVIDIA’s skyrocketing market value over the past two years ($3.3 trillion now, thank you very much; a year ago it first hit $1 trillion). His company is now the world’s third largest by market capitalization. Last week he gave the commencement speech at Caltech, and offered those graduates some interesting insights.

Which, of course, I’ll try to apply to healthcare.

Mr. Huang founded NVIDIA in 1993, and took the company public in 1999, but for much of its existence it struggled to find its niche. Mr. Huang figured NVIDIA needed to go to a market where there were no customers yet – “because where there are no customers, there are no competitors.” He likes to call these “zero billion dollar markets” (a phrase I gather he did not invent).

About a decade ago the company bet on deep learning and A.I. “No one knew how far deep learning could scale, and if we didn’t build it, we’d never know,” Mr. Huang told the graduates. “Our logic is: If we don’t build it, they can’t come.”

NVIDIA did build it, and, boy, they did come.

He believes we all should try to do things that haven’t been done before, things that “are insanely hard to do,” because if you succeed you can make a real contribution to the world. Going into zero billion dollar markets allows a company to be a “market-maker, not a market-taker.” He’s not interested in market share; he’s interested in developing new markets.

Accordingly, he told the Caltech graduates:

I hope you believe in something. Something unconventional, something unexplored. But let it be informed, and let it be reasoned, and dedicate yourself to making that happen. You may find your GPU. You may find your CUDA. You may find your generative AI. You may find your NVIDIA.

And in that group, some may very well.

He didn’t promise it would be easy, citing his company’s own experience, and stressing the need for resilience. “One setback after another, we shook it off and skated to the next opportunity. Each time, we gain skills and strengthen our character,” Mr. Huang said. “No setback that comes our way doesn’t look like an opportunity these days… The world can be unfair and deal you tough cards. Swiftly shake it off. There’s another opportunity out there — or create one.”

He was quite pleased with the Taylor Swift reference; the crowd seemed somewhat less impressed.

Continue reading…