Tag: AI

Go Ahead, AI—Surprise Us

By KIM BELLARD

Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me, and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare.  With the WHO releasing its “first global report” on A.I. – Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.

My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t. 

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement.  He’s right on both counts.

WHO’s proposed six principles are:

  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility 
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity 
  • Promoting AI that is responsive and sustainable

All valid points, but, as we’re already learning, easier to propose than to ensure.  Just ask Timnit Gebru.  When it comes to using new technologies, we’re not so good about thinking through their implications, much less ensuring that everyone benefits.  We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future AI overlords don’t laugh too much about that. 

As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins?…we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.”   She further notes that “we haven’t even established what those benefits should be.”

Continue reading…

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in medical AI studies

By LUKE OAKDEN-RAYNER

Anyone who has read my blog or tweets before has probably seen that I have issues with some of the common methods used to analyse the performance of medical machine learning models. In particular, the most commonly reported metrics we use (sensitivity, specificity, F1, accuracy and so on) all systematically underestimate human performance in head-to-head comparisons against AI models.

This makes AI look better than it is, and may be partially responsible for the “implementation gap” that everyone is so concerned about.

I’ve just posted a preprint on arXiv titled “Docs are ROCs: A simple off-the-shelf approach for estimating average human performance in diagnostic studies”, which provides what I think is a solid solution to this problem, and I thought I would explain it in some detail here.

Disclaimer: not peer reviewed, content subject to change 


A (con)vexing problem

When we compare machine learning models to humans, we have a bit of a problem. Which humans?

In medical tasks, we typically take the doctor who currently does the task (for example, a radiologist identifying cancer on a CT scan) as a proxy for the standard of clinical practice. But doctors aren’t a monolithic group who all give the same answers. Inter-reader variability typically ranges from 15% to 50%, depending on the task. Thus, we usually take as many doctors as we can find and then try to summarise their performance (this is called a multi-reader multi-case study, MRMC for short).

Since the metrics we care most about in medicine are sensitivity and specificity, many papers have reported the averages of these values. In fact, a recent systematic review showed that over 70% of medical AI studies that compared humans to AI models reported these values. This makes a lot of sense. We want to know how the average doctor performs at the task, so the average performance on these metrics should be great, right?
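To see why averaging is a problem, here is a toy simulation (my illustration of the issue only, not the method from the preprint). The readers below all sit on the same underlying ROC curve but choose different operating thresholds; because the curve is concave, their averaged sensitivity/specificity point lands strictly below the curve they all share.

```python
# Toy demonstration: readers share one ROC curve but operate at
# different thresholds, so averaging their sensitivity/specificity
# understates the curve they all sit on. Numbers are made up.
import numpy as np
from scipy.stats import norm

auc = 0.90                       # assumed common discriminative ability
mu = norm.ppf(auc) * np.sqrt(2)  # binormal class separation for that AUC

thresholds = np.linspace(0.0, mu, 7)   # 7 readers, varying strictness
sens = 1 - norm.cdf(thresholds - mu)   # P(call positive | diseased)
spec = norm.cdf(thresholds)            # P(call negative | healthy)

avg_sens, avg_spec = sens.mean(), spec.mean()

# Sensitivity the shared ROC curve actually achieves at the averaged
# specificity -- by concavity, always >= the averaged sensitivity.
curve_sens = 1 - norm.cdf(norm.ppf(avg_spec) - mu)

print(f"averaged reader point: sens={avg_sens:.3f}, spec={avg_spec:.3f}")
print(f"shared ROC curve at spec={avg_spec:.3f}: sens={curve_sens:.3f}")
```

The gap between those two numbers is the amount by which “average doctor” metrics sell the doctors short, and it grows with inter-reader variability.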

Continue reading…

Will AI-Based Automation Replace Basic Primary Care? Should It?

By KEN TERRY

In a recent podcast about the future of telehealth, Lyle Berkowitz, MD, a technology consultant, entrepreneur, and professor at Northwestern University’s Feinberg School of Medicine, confidently predicted that, because of telehealth and clinical automation, “In 10-20 years, we won’t need primary care physicians [for routine care]. The remaining PCPs will specialize in caring for complicated patients. Other than that, if people need care, they’ll go to NPs or PAs or receive automated care with the help of AI.”

Berkowitz isn’t the first to make this kind of prediction. Back in 2013, when mobile health was just starting to take hold, a trio of experts from the Scripps Translational Science Institute—Eric Topol, MD, Steven R. Steinhubl, MD, and Evan D. Muse, MD—wrote a JAMA Commentary arguing that, because of mHealth, physicians would eventually see patients far less often for minor acute problems and follow-up visits than they did then.

Many acute conditions diagnosed and treated in ambulatory care offices, they argued, could be addressed through novel technologies. For example, otitis media might be diagnosed using a smartphone-based otoscope, and urinary tract infections might be assessed using at-home urinalysis. Remote monitoring with digital blood pressure cuffs could be used to improve blood pressure control, so that patients would only have to visit their physicians occasionally.

Continue reading…

Trying to Make AI Less Squirrelly

By KIM BELLARD

You may have missed it, but the Association for the Advancement of Artificial Intelligence (AAAI) just announced its first annual Squirrel AI award winner: Regina Barzilay, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).   In fact, if you’re like me, you may have missed that there was a Squirrel AI award.  But there is, and it’s kind of a big deal, especially for healthcare – as Professor Barzilay’s work illustrates. 

The Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity (Squirrel AI is a China-based, AI-powered “adaptive education provider”) “recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects.”  The award carries a prize of $1,000,000, which is about the same as a Nobel Prize.

Yolanda Gil, a past president of AAAI, explained the rationale for the new award: “What we wanted to do with the award is to put out to the public that if we treat AI with fear, then we may not pursue the benefits that AI is having for people.”

Dr. Barzilay has impressive credentials, including a MacArthur Fellowship.   Her expertise is in natural language processing (NLP) and machine learning, and she focused her interests on healthcare following a breast cancer diagnosis.  “It was the end of 2014, January 2015, I just came back with a totally new vision about the goals of my research and technology development,” she told The Wall Street Journal. “And from there, I was trying to do something tangible, to change the diagnostics and treatment of breast cancer.”

Continue reading…

It’s complicated. A deep dive into the Viz/Medicare AI reimbursement model.

By LUKE OAKDEN-RAYNER

In the last post I wrote about the recent decision by CMS to reimburse a Viz.ai stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.

To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!

So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.


Uncharted territory

The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not the figure included as “an example” in the CMS documents ($25k/yr per hospital) – I have seen some discussion on Twitter suggesting the real figure is higher per annum – but the actual cost is pretty irrelevant to this discussion.

For the purpose of this piece, I’ll pretend that the cost is the $25k/yr in the CMS document, just for simplicity. It is order-of-magnitude right, and that is what matters.
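At that assumed price, the arithmetic a hospital faces is a one-liner (a back-of-the-envelope sketch using the CMS example figure and the NTAP cap – hypothetical numbers, not Viz.ai’s actual contract terms):

```python
# Back-of-the-envelope only: both figures are the public example
# numbers, not actual contract terms.
subscription_per_year = 25_000  # CMS "example" price per hospital
ntap_per_use = 1_040            # maximum add-on payment per eligible use

# Eligible uses per year at which the add-on payments alone
# cover the subscription.
break_even_uses = subscription_per_year / ntap_per_use
print(f"break-even: {break_even_uses:.1f} reimbursed uses per year")
# ~24 eligible cases a year; past that, the NTAP more than covers
# the (assumed) subscription cost.
```

The point of the sketch is just scale: for a comprehensive stroke centre, a couple of dozen eligible cases a year is a low bar.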

A subscription is not the only way that AI can be sold (I have seen other companies that charge per use as well) but it is a fairly common approach. Importantly though, it is unusual for a medical technology. Here is what CMS had to say:

Continue reading…

The Medical AI Floodgates Open, at a Cost of $1000 per Patient

By LUKE OAKDEN-RAYNER

In surprising news this week, CMS (the Centers for Medicare & Medicaid Services) in the USA approved the first reimbursement for AI-augmented medical care. Viz.ai have a deep learning model which identifies signs of stroke on brain CT and automatically contacts the neurointerventionalist, bypassing the first read normally performed by a general radiologist.

From their press material:

Viz.ai demonstrated to CMS a significant reduction in time to treatment and improved clinical outcomes in patients suffering a stroke. Viz LVO has been granted a New Technology Add-on Payment of up to $1,040 per use in patients with suspected strokes.

https://www.prnewswire.com/news-releases/vizai-granted-medicare-new-technology-add-on-payment-301123603.html

This is enormous news, and marks the start of a totally new era in medical AI.

Especially that price tag!


Doing it tough

It is widely known in the medical AI community that the marketplace has been troubled for AI developers. The majority of companies have developed putatively useful AI models, but have been unable to sell them to anyone. This has led to many predictions that we are going to see a crash amongst medical AI startups, as capital runs out and revenue can’t take over. There have even been suggestions that a medical “AI winter” might be coming.

Continue reading…

Your Face is Not Your Own

By KIM BELLARD

I swear I’d been thinking about writing about facial recognition long before I discovered that John Oliver devoted his show last night to it.  Last week I wrote about how “Defund Police” should be expanded to “Defund Health Care,” and included a link to Mr. Oliver’s related episode, only to have a critic comment that I should have just given the link and left it at that.  

Now, I can’t blame anyone for preferring Mr. Oliver’s insights to mine, so I’ll link to his observations straightaway…but if you’re interested in some thoughts about facial recognition and healthcare, I hope you’ll keep reading.

Facial recognition is, indeed, in the news lately, and not in a good way.  Its use, particularly by law enforcement agencies, has become more widely known, as have some of its shortcomings.  At best, it is still weak at accurately identifying the faces of minorities and women, and at worst it poses significant privacy concerns for, well, everyone.  The fact that someone using such software could identify you in a crowd using publicly available photographs, and then track your past and subsequent movements, is the essence of Big Brother.

Continue reading…

Health in 2 Point 00, Episode 115 | Olive, Bright.md and AristaMD

Today on Health in 2 Point 00, we have a no-nonsense April 1st episode—with deals this time! On Episode 115, Jess asks me about Olive raising $51 million for its AI-enabled revenue cycle management solution, Bright.md raising an $8 million Series C for its asynchronous telemedicine platform, and AristaMD raising $18 million for a different sort of telemedicine, eConsults, which allow primary care physicians to consult with specialists virtually. —Matthew Holt

Can AI diagnose COVID-19 on CT scans? Can humans?

By VASANTH VENUGOPAL MD and VIDUR MAHAJAN MBBS, MBA

What can Artificial Intelligence (AI) do?

AI can, simply put, do two things. One, it can do what humans can do – tasks like watching CCTV feeds, detecting people’s faces, or, in this case, reading CT scans and identifying ‘findings’ of pneumonia that radiologists could otherwise also find – just automatically and fast. Two, it can do things that humans can’t do – like telling you the exact time it will take to go from point A to point B (i.e. Google Maps), or, as in this case, diagnosing COVID-19 pneumonia on a CT scan.
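To make the first kind of task concrete, here is a minimal sketch of what a “findings of pneumonia” classifier looks like in code (our illustration only – the backbone, input size, and untrained weights are stand-ins; real products use far more elaborate pipelines trained on large labeled CT datasets):

```python
# Minimal sketch of task one: a binary "pneumonia findings" classifier
# over CT slices. Untrained stand-in weights; illustrative only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)  # generic CNN backbone
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)  # CT slices are 1-channel
model.fc = nn.Linear(model.fc.in_features, 1)   # one "findings" logit

model.eval()
ct_slice = torch.randn(1, 1, 224, 224)  # stand-in for a normalized slice
with torch.no_grad():
    prob = torch.sigmoid(model(ct_slice)).item()
print(f"P(pneumonia findings) = {prob:.2f}")  # ~random until trained
```

The structure is the point: the model reduces each slice to a single probability, slice after slice, at machine speed – exactly the “what humans can do, but automatically and fast” category.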

Pneumonia on CT scans?

Pneumonia, an infection of the lungs, is a killer disease. According to WHO statistics from 2015, Community-Acquired Pneumonia (CAP) is the deadliest communicable disease and the third leading cause of mortality worldwide, causing 3.2 million deaths every year.

Pneumonias can be classified in many ways, including by the type of infectious agent (etiology), the source of infection, and the pattern of lung involvement. From an etiological perspective, the most common causative agents are bacteria (typical agents like Pneumococcus and H. influenzae, and atypical agents like Legionella and Mycoplasma), viruses (influenza, respiratory syncytial virus, parainfluenza, and adenoviruses), and fungi (Histoplasma and Pneumocystis carinii).

Continue reading…

The FDA Needs to Set Standards for Using Artificial Intelligence in Drug Development

By CHARLES K. FISHER, PhD

Artificial intelligence has become a crucial part of our technological infrastructure and the brain underlying many consumer devices. In less than a decade, machine learning algorithms based on deep neural networks evolved from recognizing cats in videos to enabling your smartphone to perform real-time translation between 27 different languages. This progress has sparked the use of AI in drug discovery and development.

Artificial intelligence can improve efficiency and outcomes in drug development across therapeutic areas. For example, companies are developing AI technologies that hold the promise of preventing serious adverse events in clinical trials by identifying high-risk individuals before they enroll. Clinical trials could be made more efficient by using artificial intelligence to incorporate other data sources, such as historical control arms or real-world data. AI technologies could also be used to magnify therapeutic responses by identifying biomarkers that enable precise targeting of patient subpopulations in complex indications.

Innovation in each of these areas would provide substantial benefits to those who volunteer to take part in trials, not to mention downstream benefits to the ultimate users of new medicines.
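As a purely illustrative sketch of the first of those examples – screening would-be enrollees with a risk model built on baseline covariates – the shape of the code is simple (everything below is synthetic: no real trial, endpoint, threshold, or vendor method is implied):

```python
# Illustrative only: flag high-risk trial candidates with a risk model
# over baseline covariates. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))            # baseline covariates (age, labs, ...)
true_w = np.array([1.2, 0.8, 0.0, 0.0, -0.5])
p = 1 / (1 + np.exp(-(X @ true_w - 2.0)))
y = rng.binomial(1, p)                 # 1 = serious adverse event occurred

model = LogisticRegression().fit(X, y)  # fit on historical participants

candidates = rng.normal(size=(10, 5))   # would-be enrollees
risk = model.predict_proba(candidates)[:, 1]
flagged = risk > 0.20                   # hypothetical review threshold
print(f"flagged {flagged.sum()} of {len(flagged)} candidates for review")
```

The hard part, of course, is not the model but the standards question: what evidence should regulators require before a score like this can keep someone out of a trial?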

Misapplication of these technologies, however, can have unintended harmful consequences. To see how a good idea can turn bad, just look at what’s happened with social media since the rise of algorithms. Misinformation spreads faster than the truth, and our leaders are scrambling to protect our political systems.

Continue reading…
