
Tag: Artificial intelligence

AI are (going to be) people too

BY KIM BELLARD

My heart says I should write about Uvalde, but my head says, not yet; there are others more able to do that.  I’ll reserve my sorrow, my outrage, and any hopes I still have for the next election cycle.  

Instead, I’m turning to a topic that has long fascinated me: when and how are we going to recognize when artificial intelligence (AI) becomes, if not human, then a “person”?  Maybe even a doctor.

Continue reading…

DALL-E, Draw an AI Doctor

BY KIM BELLARD

I can’t believe I somehow missed when OpenAI introduced DALL-E in January 2021 – a neural network that could “generate images from text descriptions” — so I’m sure not going to miss now that OpenAI has unveiled DALL-E 2. As they describe it, “DALL-E 2 is a new AI system that can create realistic images and art from a description in natural language.” The name, by the way, is a playful combination of the animated robot WALL-E and the idiosyncratic artist Salvador Dalí.

This is not your father’s AI.  If you think it’s just about art, think again.  If you think it doesn’t matter for healthcare, well, you’ve been warned.

Here are further descriptions of what OpenAI is claiming:

“DALL·E 2 can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.

DALL·E 2 can make realistic edits to existing images from a natural language caption. It can add and remove elements while taking shadows, reflections, and textures into account.

DALL·E 2 can take an image and create different variations of it inspired by the original.”

Here’s their video:

I’ll leave it to others to explain exactly how it does all that, aside from saying it uses a process called diffusion, “which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.”  The end result is that, relative to DALL-E, DALL-E 2 “generates more realistic and accurate images with 4x greater resolution.”  
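To make the diffusion idea slightly more concrete, here is a toy Python sketch. It is emphatically not OpenAI’s implementation – a real system uses a large learned neural denoiser conditioned on the text prompt – but it shows the basic loop of starting from random dots and repeatedly nudging them toward an image.

import numpy as np

# Toy illustration of the diffusion idea: start from random noise and
# repeatedly nudge it toward an image. The "denoiser" here is a stand-in
# that already knows the target; a real model predicts it from the noisy
# image plus the text prompt.
rng = np.random.default_rng(0)
target = rng.random((64, 64))   # stand-in for the image the model is aiming at
image = rng.random((64, 64))    # start from a pattern of random dots

for step in range(50):
    predicted_clean = target                     # real systems predict this with a neural network
    image = 0.9 * image + 0.1 * predicted_clean  # blend a little of the prediction in each step

print(f"mean absolute error vs target: {np.abs(image - target).mean():.4f}")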

Continue reading…

Health Care Organizations Must Prioritize Cybersecurity Before Undergoing Digital Transformation

By TRAVIS GOOD

The health care industry is rapidly embracing new technologies. Covid-19 changed the way many industries operate, and health care was particularly affected by the pandemic. Many health care organizations were already undergoing digital transformations, but Covid dramatically accelerated those processes. Health care providers and health-tech companies were forced to adapt to the new normal and change the way they operate. Here are three major ways health care has changed in recent times.

1. Increased popularity of telehealth services:

Covid made telehealth appointments a necessity, but even in a post-Covid world virtual visits are likely to remain a core component of modern healthcare. According to McKinsey, telehealth utilization was 78 times higher in April 2020 than in February 2020, and through 2021 it remained nearly 40 times pre-pandemic levels.

Research shows that both patients and physicians are fans of telehealth. Many patients prefer the convenience of being able to speak to their doctor from home and physicians feel that offering telemedicine allows them to operate more efficiently. Phone and video-based medical appointments became mainstream in 2020, and they are unlikely to go away anytime soon. 

2. More wearable medical devices with connected ecosystems:

The number of wearable medical devices in use has skyrocketed over the past five years. The wearable medical device market is expected to reach $23 billion in 2023, a major increase from $8 billion in 2017. Gadgets like heart rate sensors, oxygen meters, and exercise trackers are all becoming increasingly popular. Many popular consumer products such as cell phones and smartwatches ship with built-in medical tracking technology.

Continue reading…

It’s complicated. A deep dive into the Viz/Medicare AI reimbursement model.

By LUKE OAKDEN-RAYNER

In the last post I wrote about the recent decision by CMS to reimburse a Viz.AI stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.

To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!

So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.


Uncharted territory

The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not what was included as “an example” in the CMS documents ($25k/yr per hospital) – I have seen some discussion on Twitter suggesting it is more than this per annum – but the actual cost is pretty irrelevant to this discussion.

For the purpose of this piece, I’ll pretend that the cost is the $25k/yr in the CMS document, just for simplicity. It is order-of-magnitude correct, and that is what matters.
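For readers who like to see the arithmetic, here is a back-of-the-envelope sketch. Only the $25k/yr figure comes from the CMS example; the annual scan volume and the per-use fee are hypothetical numbers of my own, there purely to show the shape of the comparison with per-use pricing (which comes up below).

# Back-of-the-envelope comparison of a flat subscription vs per-use pricing.
# The subscription figure is the CMS "example"; the volume and per-use fee
# are made up for illustration.
ANNUAL_SUBSCRIPTION = 25_000          # $/yr per hospital (CMS example figure)
suspected_lvo_scans_per_year = 500    # hypothetical volume at one hospital
hypothetical_per_use_fee = 100        # $ per scan, purely illustrative

subscription_per_scan = ANNUAL_SUBSCRIPTION / suspected_lvo_scans_per_year
per_use_annual_total = hypothetical_per_use_fee * suspected_lvo_scans_per_year

print(f"subscription: ${ANNUAL_SUBSCRIPTION:,}/yr flat, ~${subscription_per_scan:.0f} per scan")
print(f"per-use:      ${per_use_annual_total:,}/yr at ${hypothetical_per_use_fee} per scan")

The point is only that a flat subscription decouples what the hospital pays from how often the model actually fires.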

A subscription is not the only way that AI can be sold (I have seen other companies who charge per use as well) but it is a fairly common approach. Importantly though, it is unusual for a medical technology. Here is what CMS had to say:

Continue reading…

CT scanning is just awful for diagnosing Covid-19

By LUKE OAKDEN-RAYNER, MBBS

I got asked the other day to comment for Wired on the role of AI in Covid-19 detection, in particular for use with CT scanning. Since I didn’t know exactly what resources they had on the ground in China, I could only make some generic, vaguely negative statements. I thought it would be worthwhile to expand on those ideas here, so I am writing two blog posts on the topic: one on CT scanning for Covid-19, and one on using AI on those CT scans.

As background, the pro-AI argument goes like this:

  1. CT screening detects 97% of Covid-19, viral PCR only detects 70%!
  2. A radiologist takes 5-10 minutes to read a CT chest scan. AI can do it in a second or two.
  3. If you use CT for screening, there will be so many studies that radiologists will be overwhelmed.

In this first post, I will explain why CT, with or without AI, is not worthwhile for Covid-19 screening and diagnosis, and why that 97% sensitivity report is unfounded and unbelievable.
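Before getting to the evidence, a quick illustration of why a sensitivity headline on its own tells you very little about a screening test. The snippet below takes the claimed 97% sensitivity at face value; the specificity and the prevalences are my own assumptions, chosen only to show the arithmetic, not to estimate the real numbers.

# Positive predictive value: of the people who test positive, how many
# actually have the disease? Sensitivity is the headline CT claim; the
# specificity and prevalence values are assumptions for illustration.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sensitivity = 0.97   # the claimed CT sensitivity
specificity = 0.80   # assumed: ground-glass change is not specific to Covid-19

for prevalence in (0.01, 0.05, 0.20):
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"prevalence {prevalence:4.0%}: PPV = {ppv:.0%}")

Even granting the 97%, at screening-level prevalence most positives under these assumptions are false positives.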

Next post, I will address the use of AI for this task specifically.

Continue reading…

Can AI diagnose COVID-19 on CT scans? Can humans?


By VASANTH VENUGOPAL MD and VIDUR MAHAJAN MBBS, MBA

What can Artificial Intelligence (AI) do?

AI can, simply put, do two things. One, it can do what humans can do: tasks like watching CCTV feeds, detecting faces, or, in this case, reading CT scans and identifying ‘findings’ of pneumonia that radiologists could also find – it just happens automatically and fast. Two, AI can do things that humans can’t do – like telling you the exact time it would take to go from point A to point B (as Google Maps does), or, in this case, diagnosing COVID-19 pneumonia on a CT scan.
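As a rough illustration of the first category – reading a CT slice and flagging pneumonia-like findings – here is a minimal, untrained sketch of the kind of convolutional classifier typically involved. The architecture and input size are arbitrary assumptions for illustration, not any particular vendor’s model.

import torch
import torch.nn as nn

# A tiny convolutional network that maps one CT slice to a single
# "pneumonia-like findings" score. Untrained; illustrative only.
class SliceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: findings present vs absent

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SliceClassifier()
fake_ct_slice = torch.randn(1, 1, 256, 256)         # stand-in for a preprocessed CT slice
score = torch.sigmoid(model(fake_ct_slice)).item()  # probability-like score (random weights)
print(f"pneumonia-like findings score: {score:.2f}")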

Pneumonia on CT scans?

Pneumonia, an infection of the lungs, is a killer disease. According to WHO statistics from 2015, Community Acquired Pneumonia (CAP) is the deadliest communicable disease and the third leading cause of mortality worldwide, causing 3.2 million deaths every year.

Pneumonias can be classified in many ways, including by the type of infectious agent (etiology), the source of infection, and the pattern of lung involvement. From an etiological perspective, the most common causative agents of pneumonia are bacteria (typical, like Pneumococcus and H. influenzae, and atypical, like Legionella and Mycoplasma), viruses (influenza, respiratory syncytial virus, parainfluenza, and adenoviruses), and fungi (Histoplasma and Pneumocystis carinii).

Continue reading…

Artificial Intelligence vs. Tuberculosis – Part 2

By SAURABH JHA, MD

This is part two of a three-part series. Catch up on Part One here.

Clever Hans

Preetham Srinivas, the head of the chest radiograph project at Qure.ai, summoned Bhargava Reddy, Manoj Tadepalli, and Tarun Raj to the meeting room.

“Get ready for an all-nighter, boys,” said Preetham.

Qure’s scientists began investigating the algorithm’s mysteriously high performance on chest radiographs from a new hospital. To recap, the algorithm had an area under the receiver operating characteristic curve (AUC) of 1 – that’s 100% on a multiple-choice test.

“Someone leaked the paper to AI,” laughed Manoj.

“It’s an engineering college joke,” explained Bhargava. “It means that you saw the questions before the exam. It happens sometimes in India when rich people buy the exam papers.”

Just because you know the questions doesn’t mean you know the answers. And AI wasn’t rich enough to buy the AUC.
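For readers less familiar with the metric: an AUC of 1 means the model’s scores separate every abnormal film from every normal one perfectly, with no overlap at all. The toy snippet below (entirely made-up numbers) shows how a score that has latched onto a perfectly separating shortcut – say, an artifact unique to the new hospital’s abnormal films – yields an AUC of exactly 1.0, where an honestly imperfect model does not.

from sklearn.metrics import roc_auc_score

# 0 = normal film, 1 = abnormal film; all numbers invented for illustration
labels = [0, 0, 0, 0, 1, 1, 1, 1]

honest_scores = [0.2, 0.4, 0.6, 0.3, 0.5, 0.7, 0.8, 0.4]  # some overlap between classes
leaky_scores  = [0.1, 0.2, 0.1, 0.2, 0.9, 0.8, 0.9, 0.8]  # perfectly separates the classes

print("honest AUC:", roc_auc_score(labels, honest_scores))  # well below 1.0
print("leaky  AUC:", roc_auc_score(labels, leaky_scores))   # exactly 1.0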

The four lads were school friends from Andhra Pradesh. They had all studied computer science at the Indian Institute of Technology (IIT), a freaky improbability given that only a hundred out of a million aspiring youths are selected into this most coveted discipline at India’s most coveted institute. They had revised for exams together, pulling all-nighters – in working together, they worked harder and made work more fun.

Continue reading…

Detecting Heart Conditions Faster: The Case for Biomarkers-PLUS-AI | Dean Loizou, Prevencio

BY JESSICA DAMASSA

Can artificial intelligence help prevent cardiovascular diseases? Biotech startup Prevencio has developed a proprietary panel of biomarkers that uses blood proteins and sophisticated AI algorithms to detect cardiovascular conditions like coronary and peripheral artery disease, aortic stenosis, risk for stroke, and more. Dean Loizou, Prevencio’s VP of Business Development, breaks down the process step-by-step and explains exactly how Prevencio reports its clinically viable scores to doctors. How does the AI fit into all this? We get to that too, plus the details around this startup’s plans for raising a B-round on the heels of this work with Bayer.
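For a sense of the general shape of this kind of approach – a generic illustration on synthetic data, not Prevencio’s model, features, or results – several blood-protein measurements go in, a fitted statistical model combines them, and a single risk score comes out:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic illustration only: synthetic "protein panel" data and labels.
rng = np.random.default_rng(42)
proteins = rng.normal(size=(200, 4))  # 4 hypothetical protein levels for 200 synthetic patients
disease = (proteins[:, 0] + 0.5 * proteins[:, 1]
           + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(proteins, disease)

new_patient = rng.normal(size=(1, 4))
print(f"risk score: {model.predict_proba(new_patient)[0, 1]:.2f}")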

Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.

Radiology Gets an “App Store” for its AI Tools | Ben Panter, Blackford Analysis

AI in radiology is not new. In fact, the field is swarming with various apps and tools seeking to find a place in the radiologist’s toolkit to get more value out of medical imaging and improve patient care. So, how does a radiology team pick which tools to invest in? Enter Blackford Analysis, a health tech startup that has, simply put, designed an “app store” for radiology departments that liberates access to life-saving tech for radiologists. CEO Ben Panter explains how the platform not only gives radiologists access to a curated group of best-in-class AI radiology tools, but does so en masse, circumventing the need for one-off approvals from hospital administrators and procurement teams.

Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.

Continue reading…

Explain yourself, machine. Producing simple text descriptions for AI interpretability

By LUKE OAKDEN-RAYNER, MD

One big theme in AI research has been the idea of interpretability. How should AI systems explain their decisions to engender trust in their human users? Can we trust a decision if we don’t understand the factors that informed it?

I’ll have a lot more to say some other time on the latter question, which is philosophical rather than technical in nature, but today I wanted to share some of our research into the first question. Can our models explain their decisions in a way that can convince humans to trust them?


Decisions, decisions

I am a radiologist, which makes me something of an expert in the field of human image analysis. We are often asked to explain our assessment of an image – to our colleagues, to other doctors, or to patients. In general, there are two things we express.

  1. What part of the image we are looking at.
  2. What specific features we are seeing in the image.

This is partially what a radiology report is. We describe a feature, give a location, and then synthesise a conclusion. For example:

There is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy.
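As an aside, here is a toy sketch of how that structure – a location, a list of features, a conclusion – might be represented and rendered by a machine. It is my own construction rather than our actual model, but it shows how each piece can be checked on its own by a human reader.

from dataclasses import dataclass
from typing import List

# A structured finding: where we looked, what we saw, what we concluded.
@dataclass
class Finding:
    location: str
    features: List[str]
    conclusion: str

    def as_report(self) -> str:
        return (f"There is {' with '.join(self.features)} in the {self.location}. "
                f"Findings are consistent with {self.conclusion}.")

finding = Finding(
    location="upper outer quadrant of the breast",
    features=["an irregular mass", "microcalcification"],
    conclusion="malignancy",
)
print(finding.as_report())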

You don’t need to understand the words I used here, but the point is that the features (irregular mass, microcalcification) are consistent with the diagnosis (breast cancer, malignancy). A doctor reading this report already sees internal consistency, and that reassures them that the report isn’t wrong. A common example of a wrong report could be:

Continue reading…
