
Category: Artificial Intelligence

Need to Choose a Doctor? What Does AI Think About the Choices?

By ZEESHAN SYED

Tens of millions of Americans rely on consumer experience apps to help them find the best new restaurant or the right hairdresser. But while relying on customer opinion might make sense for figuring out where to get dinner tonight, when it comes to picking which doctor is best for you, AI might be more trustworthy than the wisdom of the crowd.

Consumer apps provide us with rich data categories that take preferences into account, from location to free wi-fi, to help users narrow down choices. Navigating your health insurer’s network of physicians is a different proposition, and some of the popular physician ranking systems reportedly have significant limitations. Doctors are often categorized by specialty, insurance, hospital, or location. Those categories may be effective for logistics, but they fail to take into account a patient’s unique health conditions and say very little about what an individual patient can expect in terms of health outcomes. Research from my company Health at Scale shows that 83% of Medicare patients seeking cardiology care and 88% of those seeking orthopedic care may not be choosing providers that are highly rated for best predicted outcomes based on each patient’s individual health conditions.

Deep personalization is exactly what physicians, health systems, and insurers need to offer patients to improve outcomes and lower costs across the board. A study using our data, recently published in the Journal of Medical Internet Research, sought to quantify how consumer, quality, and volume metrics may be associated with outcomes. Researchers analyzed data from 4,192 Medicare fee-for-service beneficiaries undergoing elective hip replacements between 2013 and 2018 in the greater Chicago area, comparing post-procedure hospitalization rates, emergency department visits, and total costs of care at hospitals ranked highly by popular consumer ratings systems and CMS star ratings against those ranked highly by a machine intelligence algorithm for personalized provider navigation.

Continue reading…

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in medical AI studies

By LUKE OAKDEN-RAYNER

Anyone who has read my blog or tweets before has probably seen that I have issues with some of the common methods used to analyse the performance of medical machine learning models. In particular, the most commonly reported metrics we use (sensitivity, specificity, F1, accuracy, and so on) all systematically underestimate human performance in head-to-head comparisons against AI models.

This makes AI look better than it is, and may be partially responsible for the “implementation gap” that everyone is so concerned about.

I’ve just posted a preprint on arXiv titled “Docs are ROCs: A simple off-the-shelf approach for estimating average human performance in diagnostic studies”, which provides what I think is a solid solution to this problem, and I thought I would explain it in some detail here.

Disclaimer: not peer reviewed, content subject to change 


A (con)vexing problem

When we compare machine learning models to humans, we have a bit of a problem. Which humans?

In medical tasks, we typically take the doctor who currently does the task (for example, a radiologist identifying cancer on a CT scan) as a proxy for the standard of clinical practice. But doctors aren’t a monolithic group who all give the same answers. Inter-reader variability typically ranges from 15% to 50%, depending on the task. Thus, we usually take as many doctors as we can find and then try to summarise their performance (this is called a multi-reader, multi-case study – MRMC for short).

Since the metrics we care most about in medicine are sensitivity and specificity, many papers have reported the averages of these values. In fact, a recent systematic review showed that over 70% of medical AI studies that compared humans to AI models reported these values. This makes a lot of sense. We want to know how the average doctor performs at the task, so the average performance on these metrics should be great, right?
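To see why it isn’t, here is a minimal simulation (my own sketch, not the method from the preprint) of readers who all sit on one underlying binormal ROC curve and differ only in how aggressively they call cases positive. The reader count, thresholds, and separation parameter are invented for illustration:

```python
# Minimal sketch: readers share one binormal ROC curve (healthy scores ~ N(0,1),
# diseased scores ~ N(d', 1)) and differ only in their operating threshold.
import numpy as np
from scipy.stats import norm

d_prime = 1.5                            # assumed separation between the two score distributions
thresholds = np.linspace(0.0, 1.5, 7)    # seven hypothetical readers, from lenient to strict

specificity = norm.cdf(thresholds)                 # P(score below threshold | healthy)
sensitivity = 1 - norm.cdf(thresholds - d_prime)   # P(score above threshold | diseased)

avg_spec, avg_sens = specificity.mean(), sensitivity.mean()

# Sensitivity of the shared ROC curve evaluated at the averaged specificity:
curve_sens = 1 - norm.cdf(norm.ppf(avg_spec) - d_prime)

print(f"averaged operating point: sens={avg_sens:.3f}, spec={avg_spec:.3f}")
print(f"shared ROC curve at spec={avg_spec:.3f}: sens={curve_sens:.3f}")
```

Because the ROC curve is concave, the average of points lying on it falls strictly below it, so the “average doctor” constructed this way looks worse than the curve every individual reader is actually operating on.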

Continue reading…

Will AI-Based Automation Replace Basic Primary Care? Should It?

By KEN TERRY

In a recent podcast about the future of telehealth, Lyle Berkowitz, MD, a technology consultant, entrepreneur, and professor at Northwestern University’s Feinberg School of Medicine, confidently predicted that, because of telehealth and clinical automation, “In 10-20 years, we won’t need primary care physicians [for routine care]. The remaining PCPs will specialize in caring for complicated patients. Other than that, if people need care, they’ll go to NPs or PAs or receive automated care with the help of AI.”

Berkowitz isn’t the first to make this kind of prediction. Back in 2013, when mobile health was just starting to take hold, a trio of experts from the Scripps Translational Science Institute—Eric Topol, MD, Steven R. Steinhubl, MD, and Evan D. Muse, MD—wrote a JAMA Commentary arguing that, because of mHealth, physicians would eventually see patients far less often for minor acute problems and follow-up visits than they did then.

Many acute conditions diagnosed and treated in ambulatory care offices, they argued, could be addressed through novel technologies. For example, otitis media might be diagnosed using a smartphone-based otoscope, and urinary tract infections might be assessed using at-home urinalysis. Remote monitoring with digital blood pressure cuffs could be used to improve blood pressure control, so that patients would only have to visit their physicians occasionally.

Continue reading…

Trying to Make AI Less Squirrelly

By KIM BELLARD

You may have missed it, but the Association for the Advancement of Artificial Intelligence (AAAI) just announced its first annual Squirrel AI award winner: Regina Barzilay, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).   In fact, if you’re like me, you may have missed that there was a Squirrel AI award.  But there is, and it’s kind of a big deal, especially for healthcare – as Professor Barzilay’s work illustrates. 

The Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity (Squirrel AI is a China-based, AI-powered “adaptive education provider”) “recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.” The award carries a prize of $1,000,000, about the same as a Nobel Prize.

Yolanda Gil, a past president of AAAI, explained the rationale for the new award: “What we wanted to do with the award is to put out to the public that if we treat AI with fear, then we may not pursue the benefits that AI is having for people.”

Dr. Barzilay has impressive credentials, including a MacArthur Fellowship.   Her expertise is in natural language processing (NLP) and machine learning, and she focused her interests on healthcare following a breast cancer diagnosis.  “It was the end of 2014, January 2015, I just came back with a totally new vision about the goals of my research and technology development,” she told The Wall Street Journal. “And from there, I was trying to do something tangible, to change the diagnostics and treatment of breast cancer.”

Continue reading…

It’s complicated. A deep dive into the Viz/Medicare AI reimbursement model.

By LUKE OAKDEN-RAYNER

In the last post I wrote about the recent decision by CMS to reimburse a Viz.ai stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.

To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!

So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.


Uncharted territory

The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not what was included as “an example” in the CMS documents ($25k/yr per hospital), and I have seen some discussion on Twitter suggesting the actual figure is higher per annum, but the exact cost is pretty irrelevant to this discussion.

For the purpose of this piece, I’ll pretend that the cost is the $25k/yr in the CMS document, just for simplicity. It is right to an order of magnitude, and that is what matters.
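As a back-of-the-envelope sketch, pairing that illustrative subscription figure with the up-to-$1,040 New Technology Add-on Payment discussed in the last post (real contract prices and actual payment amounts will differ):

```python
# Back-of-the-envelope: how many reimbursed uses offset the subscription?
# Both figures are illustrative, as discussed in the text.
subscription_per_year = 25_000   # the CMS documents' example figure, USD
ntap_per_use = 1_040             # maximum New Technology Add-on Payment, USD

break_even_uses = subscription_per_year / ntap_per_use
print(f"uses per year to offset the subscription: {break_even_uses:.1f}")  # ~24
```

Past roughly two dozen reimbursed cases a year, the add-on payments would exceed that (illustrative) subscription cost.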

A subscription is not the only way that AI can be sold (I have seen other companies that charge per use as well), but it is a fairly common approach. Importantly though, it is unusual for a medical technology. Here is what CMS had to say:

Continue reading…

The Medical AI Floodgates Open, at a Cost of $1000 per Patient

By LUKE OAKDEN-RAYNER

In surprising news this week, CMS (the Centers for Medicare & Medicaid Services) in the USA approved the first reimbursement for AI-augmented medical care. Viz.ai have a deep learning model which identifies signs of stroke on brain CT and automatically contacts the neurointerventionalist, bypassing the first read normally performed by a general radiologist.

From their press material:

Viz.ai demonstrated to CMS a significant reduction in time to treatment and improved clinical outcomes in patients suffering a stroke. Viz LVO has been granted a New Technology Add on Payment of up to $1,040 per use in patients with suspected strokes.

https://www.prnewswire.com/news-releases/vizai-granted-medicare-new-technology-add-on-payment-301123603.html

This is enormous news, and marks the start of a totally new era in medical AI.

Especially that price tag!


Doing it tough

It is widely known in the medical AI community that it has been a troubled marketplace for AI developers. The majority of companies have developed putatively useful AI models, but have been unable to sell them to anyone. This has led to many predictions that we are going to see a crash amongst medical AI startups, as capital runs out and revenue can’t take over. There have even been suggestions that a medical “AI winter” might be coming.

Continue reading…

Your Face is Not Your Own

By KIM BELLARD

I swear I’d been thinking about writing about facial recognition long before I discovered that John Oliver devoted his show last night to it.  Last week I wrote about how “Defund Police” should be expanded to “Defund Health Care,” and included a link to Mr. Oliver’s related episode, only to have a critic comment that I should have just given the link and left it at that.  

Now, I can’t blame anyone for preferring Mr. Oliver’s insights to mine, so I’ll link to his observations straightaway…but if you’re interested in some thoughts about facial recognition and healthcare, I hope you’ll keep reading.

Facial recognition is, indeed, in the news lately, and not in a good way.  Its use, particularly by law enforcement agencies, has become more widely known, as have some of its shortcomings.  At best, it is still weak at accurately identifying minority faces (or women), and at worst it poses significant privacy concerns for, well, everyone.  The fact that someone using such software could identify you in a crowd using publicly available photographs, and then track your past and subsequent movements, is the essence of Big Brother.  

Continue reading…

CT scanning is just awful for diagnosing Covid-19

By LUKE OAKDEN-RAYNER, MBBS

I got asked the other day to comment for Wired on the role of AI in Covid-19 detection, in particular for use with CT scanning. Since I didn’t know exactly what resources they had on the ground in China, I could only make some generic, vaguely negative statements. I thought it would be worthwhile to expand on those ideas here, so I am writing two blog posts on the topic: one on CT scanning for Covid-19, and one on using AI on those CT scans.

As background, the pro-AI argument goes like this:

  1. CT screening detects 97% of Covid-19, viral PCR only detects 70%!
  2. A radiologist takes 5-10 minutes to read a CT chest scan. AI can do it in a second or two.
  3. If you use CT for screening, there will be so many studies that radiologists will be overwhelmed.

In this first post, I will explain why CT, with or without AI, is not worthwhile for Covid-19 screening and diagnosis, and why that 97% sensitivity report is unfounded and unbelievable.
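As a hedged worked example of why a high sensitivity alone can’t justify screening – the specificity and prevalence below are my assumptions for illustration, not reported values:

```python
# Positive predictive value under illustrative assumptions.
sensitivity = 0.97   # the contested figure from the pro-CT argument
specificity = 0.80   # assumed for illustration; reported values vary widely
prevalence = 0.01    # assumed: 1% of the screened population actually infected

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"PPV at {prevalence:.0%} prevalence: {ppv:.1%}")  # ~4.7%
```

At low prevalence, even a 97%-sensitive test produces mostly false positives unless its specificity is near-perfect, which is part of why the screening claim deserves close scrutiny.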

Next post, I will address the use of AI for this task specifically.

Continue reading…

Can AI diagnose COVID-19 on CT scans? Can humans?


By VASANTH VENUGOPAL MD and VIDUR MAHAJAN MBBS, MBA

What can Artificial Intelligence (AI) do?

AI can, simply put, do two things. One, it can do what humans can do: tasks like watching CCTV feeds, detecting people’s faces, or, in this case, reading CT scans and identifying ‘findings’ of pneumonia that radiologists could also find – just automatically and fast. Two, AI can do things that humans can’t do – like telling you the exact time it would take to go from point A to point B (i.e., Google Maps), or, as in this case, diagnosing COVID-19 pneumonia on a CT scan.

Pneumonia on CT scans?

Pneumonia, an infection of the lungs, is a killer disease. According to WHO statistics from 2015, Community Acquired Pneumonia (CAP) is the deadliest communicable disease and the third leading cause of mortality worldwide, responsible for 3.2 million deaths every year.

Pneumonias can be classified in many ways, including by the type of infectious agent (etiology), the source of infection, and the pattern of lung involvement. From an etiological perspective, the most common causative agents of pneumonia are bacteria (typical, like Pneumococcus and H. influenzae, and atypical, like Legionella and Mycoplasma), viruses (influenza, respiratory syncytial virus, parainfluenza, and adenoviruses), and fungi (Histoplasma and Pneumocystis carinii).

Continue reading…

Artificial Intelligence vs. Tuberculosis – Part 2

By SAURABH JHA, MD

This is the part two of a three-part series. Catch up on Part One here.

Clever Hans

Preetham Srinivas, the head of the chest radiograph project in Qure.ai, summoned Bhargava Reddy, Manoj Tadepalli, and Tarun Raj to the meeting room.

“Get ready for an all-nighter, boys,” said Preetham.

Qure’s scientists began investigating the algorithm’s mysteriously high performance on chest radiographs from a new hospital. To recap, the algorithm had an area under the receiver operating characteristic curve (AUC) of 1 – that’s 100% on a multiple-choice test.

“Someone leaked the paper to AI,” laughed Manoj.

“It’s an engineering college joke,” explained Bhargava. “It means that you saw the questions before the exam. It happens sometimes in India when rich people buy the exam papers.”

Just because you know the questions doesn’t mean you know the answers. And AI wasn’t rich enough to buy the AUC.
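For readers less familiar with the metric, here is a minimal sketch of what an AUC of 1 looks like – the labels and scores below are invented, not Qure’s data:

```python
# An AUC of 1 means perfect ranking: every abnormal case scores higher
# than every normal case.
from sklearn.metrics import roc_auc_score

labels = [0, 0, 0, 0, 1, 1, 1, 1]                   # 0 = normal, 1 = abnormal
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # hypothetical model outputs

print(roc_auc_score(labels, scores))  # 1.0 – suspiciously perfect
```

On real clinical data, a perfect score almost always signals a shortcut – a “Clever Hans” cue such as a site-specific artifact – rather than genuine skill.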

The four lads were school friends from Andhra Pradesh. They had all studied computer science at the Indian Institute of Technology (IIT), a freaky improbability given that only a hundred out of a million aspiring youths are selected for this most coveted discipline in India’s most coveted institute. They had revised for exams together, pulling all-nighters – in working together, they worked harder and made work more fun.

Continue reading…
