Tag: AI

Searching For The Next Search

By KIM BELLARD

I didn’t write about ChatGPT when it was first introduced a month ago because, well, it seemed like everyone else was. I didn’t play with it to see what it could do.  I didn’t want it to write any poems. I didn’t have any AP tests I wanted it to pass. And, for all you know, I’m not using it to write this. But when The New York Times reports that Google sees ChatGPT as a “Code Red” for its search business, that got my attention.

A few months ago I wrote about how Google saw TikTok as an existential threat to its business, estimating that 40% of young people used it for searches. It was a different kind of search, mind you, with video results instead of links, but that’s what made it scary – because it didn’t just incrementally improve “traditional” search, as Google had done to Lycos or AltaVista, it potentially changed what “search” was.

TikTok may well still do that (although it is facing existential issues of its own), but ChatGPT could pose an even greater threat. Why get a bunch of search results that you still have to investigate when you could just ask ChatGPT to tell you exactly what you want to know?

Look, I like Google as much as anyone, but the prospect that its massive dominance of the search engine market could, in the near future, suddenly come to an end gives me hope for healthcare.  If Google isn’t safe in search, no company is safe in any industry, healthcare included.

Continue reading…

OK, trying more AI to write this

by MATTHEW HOLT

In the far-off halcyon days of last month, I used an AI system from Lex.page to write something about digital health. In the past few days the world has gone nuts over a new AI author called ChatGPT, so I thought I would compare and contrast.

Last month Lex.page was asked about Digital Health, and it said:

Digital Health is one of the most rapidly growing and important sectors in health care today. It is a broad term that encompasses the use of technology to improve health and health care.

Today I asked ChatGPT the same thing. In fact I just wrote “Digital Health” as the Q and this was the A

Digital health, also known as telehealth, is the use of digital technologies to deliver healthcare services remotely. This includes the use of video conferencing, mobile health apps, remote patient monitoring, and other digital tools to provide healthcare services to patients. Digital health can be used to improve access to primary care, reduce costs, and improve patient outcomes.

OK, pretty much a longer version of the same thing, with more details. So then I went BACK to Lex.page to ask it again, though this time I had to phrase it as “What is Digital Health?”

Digital health is a term used to describe the use of technology to improve healthcare. It covers everything from diagnostic tools and monitoring devices to electronic medical records and healthcare analytics. It’s an exciting and rapidly evolving area that has the potential to revolutionize the delivery of healthcare.

So in a month Lex.page has gotten a lot better!

Last month Lex.page said in response to my question/statement, “The biggest challenges for primary care are”

-Access to care

-Cost of care

-Quality of care

So it seems that digital health can help with all of those things. By providing tools that help patients better self-manage their health, digital health can help to improve access to care. By providing tools that help to lower the cost of care, digital health can help to make care more affordable. And by providing tools that help to improve the quality of care, digital health can help to make

I put the same sentence into ChatGPT, and at the bottom it told me: “Warning: Your text ends in a trailing space, which causes worse performance due to how the API splits text into tokens.” But I asked it anyway and it gave me:

A: 1. Accessibility: Ensuring that everyone has access to quality primary care services.

I surmised that it had more answers, so I kept hitting submit! It kept going all the way to answer 12, with no new question!

Q: A: 2. Cost: Finding ways to make primary care services more affordable and accessible to all.

Q: A: 3. Workforce: Recruiting and retaining a sufficient number of primary care providers to meet the needs of the population.

Q: A: 4. Technology: Incorporating new technologies into primary care to improve efficiency and patient outcomes.

Q: A: 5. Coordination: Improving the coordination of care between primary care providers and other healthcare professionals.

Q: A: 6. Quality: Ensuring that primary care services are of the highest quality and meet the needs of patients.

Q: A: 7. Education: Educating patients about the importance of preventive care and the value of primary care services.

Q: A: 8. Mental Health: Integrating mental health services into primary care to better address the needs of patients.

Q: A: 9. Population Health: Developing strategies to improve the health of the population as a whole.

A: 10. Data: Utilizing data to inform decision-making and improve patient outcomes.

A: 11. Payment: Developing new payment models that incentivize quality care and reduce costs.

Q: A: 12. Social Determinants of Health: Addressing the social and environmental factors that influence health outcomes.
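As an aside on the trailing-space warning above, here is a minimal sketch in Python (hypothetical, not the workflow actually used here) of tidying a prompt before submitting it, since a trailing space gets split into awkward tokens:

  # Minimal sketch (hypothetical): strip trailing whitespace from a prompt
  # before submitting it, since a trailing space tokenizes poorly.
  prompt = "The biggest challenges for primary care are "
  clean_prompt = prompt.rstrip()  # drops the trailing space the warning flagged
  print(repr(clean_prompt))       # 'The biggest challenges for primary care are'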

OK, so those all came through as separate statements, but it’s pretty damn impressive. How is Lex.page doing? Well, I asked its chatbot the same statement and it now answered:

The biggest challenges for primary care are likely to be related to access and cost. In many areas of the world, access to quality primary care is limited by geography, poverty, or both. Cost is also an issue, especially in countries with expensive healthcare systems. Additionally, many primary care providers struggle to effectively manage chronic conditions, such as diabetes and hypertension, due to a lack of resources, funding, and staff. Finally, the rapid advancements in technology have presented a challenge in terms of staying up to date with the latest medical developments.

That’s not only pretty smart, it’s also quite an advance on the answer it gave me to the same vague question/statement just 31 days ago.

We are seeing a technology emerge right before our eyes.

Presumably the next step is to set it to read some of my favorite pieces on THCB and just have it write for me! Then I will be happily replaced by a robot!

Go Ahead, AI—Surprise Us

By KIM BELLARD

Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me, and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare.  With the WHO releasing its “first global report” on A.I. — Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic. 

My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t. 

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement.  He’s right on both counts.

WHO’s proposed six principles are:

  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility 
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity 
  • Promoting AI that is responsive and sustainable

All valid points, but, as we’re already learning, easier to propose than to ensure.  Just ask Timnit Gebru.  When it comes to using new technologies, we’re not so good about thinking through their implications, much less ensuring that everyone benefits.  We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future AI overlords don’t laugh too much about that. 

As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins?…we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.”   She further notes that “we haven’t even established what those benefits should be.”

Continue reading…

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in medical AI studies

By LUKE OAKDEN-RAYNER

Anyone who has read my blog or tweets before has probably seen that I have issues with some of the common methods used to analyse the performance of medical machine learning models. In particular, the most commonly reported metrics we use (sensitivity, specificity, F1, accuracy and so on) all systematically underestimate human performance in head-to-head comparisons against AI models.

This makes AI look better than it is, and may be partially responsible for the “implementation gap” that everyone is so concerned about.

I’ve just posted a preprint on arXiv titled “Docs are ROCs: A simple off-the-shelf approach for estimating average human performance in diagnostic studies”, which provides what I think is a solid solution to this problem, and I thought I would explain it in some detail here.

Disclaimer: not peer reviewed, content subject to change 


A (con)vexing problem

When we compare machine learning models to humans, we have a bit of a problem. Which humans?

In medical tasks, we typically take the doctor who currently does the task (for example, a radiologist identifying cancer on a CT scan) as proxy for the standard of clinical practice. But doctors aren’t a monolithic group who all give the same answers. Inter-reader variability typically ranges from 15% to 50%, depending on the task. Thus, we usually take as many doctors as we can find and then try to summarise their performance (this is called a multi-reader multicase study, MRMC for short).

Since the metrics we care most about in medicine are sensitivity and specificity, many papers have reported the averages of these values. In fact, a recent systematic review showed that over 70% of medical AI studies that compared humans to AI models reported these values. This makes a lot of sense. We want to know how the average doctor performs at the task, so the average performance on these metrics should be great, right?
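As a rough illustration of why that intuition can fail, here is a toy simulation in Python (a sketch with made-up numbers, not code from the preprint): several readers share one underlying binormal ROC curve but operate at different thresholds, and naively averaging their sensitivities and specificities yields a point that sits below the curve they all lie on.

  # Toy sketch (made-up numbers): readers share one binormal ROC curve but
  # use different decision thresholds. Averaging their sensitivities and
  # specificities understates the performance of the "average" reader.
  import numpy as np
  from scipy.stats import norm

  mu = 1.5                                     # assumed separation between classes
  thresholds = np.array([0.2, 0.6, 1.0, 1.4])  # four hypothetical readers

  sens = 1 - norm.cdf(thresholds, loc=mu)      # true positive rate per reader
  spec = norm.cdf(thresholds, loc=0.0)         # true negative rate per reader
  avg_sens, avg_spec = sens.mean(), spec.mean()

  # Sensitivity the shared curve actually achieves at the averaged specificity
  curve_sens = 1 - norm.cdf(norm.ppf(avg_spec), loc=mu)

  print(f"naive average point:     sens={avg_sens:.3f}, spec={avg_spec:.3f}")
  print(f"shared ROC at that spec: sens={curve_sens:.3f}")
  # The curve's sensitivity comes out higher because ROC curves are concave,
  # so the mean of points lying on one sits strictly below it.

In this made-up example the averaged point reports a sensitivity of roughly 0.74, while the curve every simulated reader operates on reaches about 0.78 at the same specificity, which is the kind of bias an ROC-based summary of readers is meant to avoid.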

Continue reading…

Will AI-Based Automation Replace Basic Primary Care? Should It?

By KEN TERRY

In a recent podcast about the future of telehealth, Lyle Berkowitz, MD, a technology consultant, entrepreneur, and professor at Northwestern University’s Feinberg School of Medicine, confidently predicted that, because of telehealth and clinical automation, “In 10-20 years, we won’t need primary care physicians [for routine care]. The remaining PCPs will specialize in caring for complicated patients. Other than that, if people need care, they’ll go to NPs or PAs or receive automated care with the help of AI.”

Berkowitz isn’t the first to make this kind of prediction. Back in 2013, when mobile health was just starting to take hold, a trio of experts from the Scripps Translational Science Institute—Eric Topol, MD, Steven R. Steinhubl, MD, and Evan D. Muse, MD—wrote a JAMA Commentary arguing that, because of mHealth, physicians would eventually see patients far less often for minor acute problems and follow-up visits than they did then.

Many acute conditions diagnosed and treated in ambulatory care offices, they argued, could be addressed through novel technologies. For example, otitis media might be diagnosed using a smartphone-based otoscope, and urinary tract infections might be assessed using at-home urinalysis. Remote monitoring with digital blood pressure cuffs could be used to improve blood pressure control, so that patients would only have to visit their physicians occasionally.

Continue reading…

Trying to Make AI Less Squirrelly

By KIM BELLARD

You may have missed it, but the Association for the Advancement of Artificial Intelligence (AAAI) just announced its first annual Squirrel AI award winner: Regina Barzilay, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).   In fact, if you’re like me, you may have missed that there was a Squirrel AI award.  But there is, and it’s kind of a big deal, especially for healthcare – as Professor Barzilay’s work illustrates. 

The Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity (Squirrel AI is a China-based, AI-powered “adaptive education provider”) “recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.”  The award carries a prize of $1,000,000, which is about the same as a Nobel Prize.

Yolanda Gil, a past president of AAAI, explained the rationale for the new award: “What we wanted to do with the award is to put out to the public that if we treat AI with fear, then we may not pursue the benefits that AI is having for people.”

Dr. Barzilay has impressive credentials, including a MacArthur Fellowship.   Her expertise is in natural language processing (NLP) and machine learning, and she focused her interests on healthcare following a breast cancer diagnosis.  “It was the end of 2014, January 2015, I just came back with a totally new vision about the goals of my research and technology development,” she told The Wall Street Journal. “And from there, I was trying to do something tangible, to change the diagnostics and treatment of breast cancer.”

Continue reading…

It’s complicated. A deep dive into the Viz/Medicare AI reimbursement model.

By LUKE OAKDEN-RAYNER

In the last post I wrote about the recent decision by CMS to reimburse a Viz.AI stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.

To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!

So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.


Uncharted territory

The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not what was included as “an example” in the CMS documents (25k/yr per hospital), and I have seen some discussion on Twitter that it is more than this per annum, but the actual cost is pretty irrelevant to this discussion.

For the purpose of this piece, I’ll pretend that the cost is the 25k/yr in the CMS document, just for simplicity. It is order-of-magnitude right, and that is what matters.

A subscription is not the only way that AI can be sold (I have seen other companies who charge per use as well) but it is a fairly common approach. Importantly though, it is unusual for a medical technology. Here is what CMS had to say:

Continue reading…

The Medical AI Floodgates Open, at a Cost of $1000 per Patient

By LUKE OAKDEN-RAYNER

In surprising news this week, CMS (the Centers for Medicare & Medicaid Services) in the USA approved the first reimbursement for AI-augmented medical care. Viz.ai have a deep learning model which identifies signs of stroke on brain CT and automatically contacts the neurointerventionalist, bypassing the first read normally performed by a general radiologist.

From their press material:

Viz.ai demonstrated to CMS a significant reduction in time to treatment and improved clinical outcomes in patients suffering a stroke. Viz LVO has been granted a New Technology Add on Payment of up to $1,040 per use in patients with suspected strokes.

https://www.prnewswire.com/news-releases/vizai-granted-medicare-new-technology-add-on-payment-301123603.html

This is enormous news, and marks the start of a totally new era in medical AI.

Especially that pricetag!


Doing it tough

It is widely known in the medical AI community that it has been a troubled marketplace for AI developers. The majority of companies have developed putatively useful AI models, but have been unable to sell them to anyone. This has led to many predictions that we are going to see a crash amongst medical AI startups, as capital runs out and revenue can’t take over. There have even been suggestions that a medical “AI winter” might be coming.

Continue reading…

Your Face is Not Your Own

By KIM BELLARD

I swear I’d been thinking about writing about facial recognition long before I discovered that John Oliver devoted his show last night to it.  Last week I wrote about how “Defund Police” should be expanded to “Defund Health Care,” and included a link to Mr. Oliver’s related episode, only to have a critic comment that I should have just given the link and left it at that.  

Now, I can’t blame anyone for preferring Mr. Oliver’s insights to mine, so I’ll link to his observations straightaway…but if you’re interested in some thoughts about facial recognition and healthcare, I hope you’ll keep reading.

Facial recognition is, indeed, in the news lately, and not in a good way.  Its use, particularly by law enforcement agencies, has become more widely known, as have some of its shortcomings.  At best, it is still weak at accurately identifying minority faces (or women), and at worst it poses significant privacy concerns for, well, everyone.  The fact that someone using such software could identify you in a crowd using publicly available photographs, and then track your past and subsequent movements, is the essence of Big Brother.  

Continue reading…

Health in 2 Point 00, Episode 115 | Olive, Bright.md and AristaMD

Today on Health in 2 Point 00, we have a no-nonsense April 1st episode—with deals this time! On Episode 115, Jess asks me about Olive raising $51 million for its AI-enabled revenue cycle management solution, Bright.md raising an $8 million Series C for its asynchronous telemedicine platform, and AristaMD raising $18 million for a different sort of telemedicine, eConsults, which allow primary care physicians to consult with specialists virtually. —Matthew Holt
