In the last post I wrote about the recent decision by CMS to reimburse a Viz.AI stroke detection model through Medicare/Medicaid. I briefly explained how this funding model will work, but it is so darn complicated that it deserves a much deeper look.
To get more info, I went to the primary source. Dr Chris Mansi, the co-founder and CEO of Viz.ai, was kind enough to talk to me about the CMS decision. He was also remarkably open and transparent about the process and the implications as they see them, which has helped me clear up a whole bunch of stuff in my mind. High fives all around!
So let’s dig in. This decision might form the basis of AI reimbursement in the future. It is a huge deal, and there are implications.
The first thing to understand is that Viz.ai charges a subscription to use their model. The cost is not the figure included as “an example” in the CMS documents ($25k per hospital per year), and I have seen some discussion on Twitter suggesting the real price is higher per annum, but the actual cost is pretty irrelevant to this discussion.
For the purpose of this piece, I’ll pretend that the cost is the $25k/yr in the CMS document, just for simplicity. It is order-of-magnitude right, and that is what matters.
A subscription is not the only way that AI can be sold (I have seen other companies that charge per use as well), but it is a fairly common approach. Importantly, though, it is unusual for a medical technology. Here is what CMS had to say:
I got asked the other day to comment for Wired on the role of AI in Covid-19 detection, in particular for use with CT scanning. Since I didn’t know exactly what resources they had on the ground in China, I could only make some generic, vaguely negative statements. I thought it would be worthwhile to expand on those ideas here, so I am writing two blog posts on the topic: one on CT scanning for Covid-19, and one on using AI on those CT scans.
As background, the pro-AI argument goes like this:
CT screening detects 97% of Covid-19, viral PCR only detects 70%!
A radiologist takes 5-10 minutes to read a CT chest scan. AI can do it in a second or two.
If you use CT for screening, there will be so many studies that radiologists will be overwhelmed.
In this first post, I will explain why CT, with or without AI, is not worthwhile for Covid-19 screening and diagnosis, and why that 97% sensitivity report is unfounded and unbelievable.
Next post, I will address the use of AI for this task specifically.
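Even taking the numbers above at face value, a quick back-of-the-envelope calculation shows why high sensitivity alone does not make a good screening test. In the sketch below, the 97% sensitivity is the claimed figure; the specificity and prevalence are illustrative assumptions of mine, not measured values:

```python
# Positive predictive value (PPV) of a screening test, via Bayes' rule.
# Sensitivity is the claimed figure; specificity and prevalence are
# illustrative assumptions, not measured values.

def ppv(sensitivity, specificity, prevalence):
    """Probability that a positive screen is a true case."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens = 0.97  # claimed CT sensitivity for Covid-19
spec = 0.80  # assumed: CT signs of viral pneumonia are non-specific
prev = 0.01  # assumed: 1% of the screened population is infected

print(f"PPV: {ppv(sens, spec, prev):.1%}")  # ~4.7%
```

Under these assumptions, roughly 95% of positive CT screens would be false alarms, which is the core problem with screening a low-prevalence population.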
By VASANTH VENUGOPAL MD and VIDUR MAHAJAN MBBS, MBA
What can Artificial Intelligence (AI) do?
AI can, simply put, do two things. One, it can do what humans can do: tasks like watching CCTV cameras, detecting people’s faces or, in this case, reading CT scans and identifying ‘findings’ of pneumonia that radiologists could otherwise also find – just that this happens automatically and fast. Two, AI can do things that humans can’t do – like telling you the exact time it would take to go from point A to point B (i.e. Google Maps) or, as in this case, diagnosing COVID-19 pneumonia on a CT scan.
What is pneumonia, and why look for it on CT scans?
Pneumonia, an infection of the lungs, is a killer disease. According to WHO statistics from 2015, Community Acquired Pneumonia (CAP) is the deadliest communicable disease and the third leading cause of mortality worldwide, responsible for 3.2 million deaths. Pneumonia can be classified in many ways, including by the type of infectious agent (etiology), the source of infection, and the pattern of lung involvement. From an etiological perspective, the most common causative agents of pneumonia are bacteria (typical, like Pneumococcus and H. influenzae, or atypical, like Legionella and Mycoplasma), viruses (influenza, respiratory syncytial virus, parainfluenza and adenoviruses) and fungi (Histoplasma and Pneumocystis carinii).
This is part two of a three-part series. Catch up on Part One here.
Preetham Srinivas, the head of the chest radiograph project at Qure.ai, summoned Bhargava Reddy, Manoj Tadepalli, and Tarun Raj to the meeting room.
“Get ready for an all-nighter, boys.”
Qure’s scientists began investigating the algorithm’s mysteriously high performance on chest radiographs from a new hospital. To recap, the algorithm had an area under the receiver operating characteristic curve (AUC) of 1 – that’s 100% on multiple-choice questions.
“Someone leaked the paper to AI.”
“It’s an engineering college joke,” explained Bhargava. “It means that you saw the questions before the exam. It happens sometimes in India when rich people buy the exam papers.”
Just because you know the questions doesn’t mean you know the answers. And AI wasn’t rich enough to buy the AUC.
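For readers who prefer the machine-learning framing, “buying the exam papers” is what the field calls label leakage. A purely synthetic toy example (my own sketch, not Qure.ai’s data or code) shows how a feature that quietly copies the label produces a perfect AUC:

```python
# Toy demonstration of label leakage inflating AUC. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)            # ground-truth labels
honest = rng.normal(y, 2.0, n)       # weak genuine signal
leaked = y + rng.normal(0, 0.01, n)  # feature that secretly copies the label

for name, x in [("honest feature", honest), ("leaked feature", leaked)]:
    model = LogisticRegression().fit(x[:800, None], y[:800])
    scores = model.predict_proba(x[800:, None])[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y[800:], scores):.3f}")
# expected: honest feature ~0.6, leaked feature ~1.0
```

The leak doesn’t have to be this blatant in practice – anything in the image that correlates with the label for non-medical reasons (a scanner type, a hospital-specific marker) can play the same role.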
The four lads were school friends from Andhra Pradesh. They had all studied computer science at the Indian Institute of Technology (IIT) – a freaky improbability, given that only a hundred out of a million aspiring youths are selected for this most coveted discipline in India’s most coveted institute. They had revised for exams together, pulling all-nighters – in working together, they worked harder and made work more fun.
Can artificial intelligence help prevent cardiovascular diseases? Biotech startup Prevencio has developed a proprietary panel of biomarkers that uses blood proteins and sophisticated AI algorithms to detect cardiovascular conditions like coronary and peripheral artery disease, aortic stenosis, risk of stroke, and more. Dean Loizou, Prevencio’s VP of Business Development, breaks down the process step by step and explains exactly how Prevencio reports its clinically viable scores to doctors. How does the AI fit into all this? We get to that too, plus the details of this startup’s plans to raise a B round on the heels of its work with Bayer.
Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.
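For a rough sense of the shape of a system like Prevencio’s – a hypothetical sketch only, since the actual proteins, weights, and algorithm are proprietary and not disclosed in this interview – a multi-protein panel can be collapsed into a single risk score by a trained classifier:

```python
# Hypothetical sketch: turning a blood-protein panel into one risk score.
# Protein names, data, and model here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

proteins = ["protein_A", "protein_B", "protein_C", "protein_D"]  # placeholders
rng = np.random.default_rng(7)

# Synthetic training data: rows are patients, columns are protein levels.
X_train = rng.normal(size=(500, len(proteins)))
y_train = (X_train @ np.array([1.5, -0.8, 0.6, 0.2])
           + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

new_patient = rng.normal(size=(1, len(proteins)))
risk = model.predict_proba(new_patient)[0, 1]
print(f"Reported risk score: {risk:.0%}")  # the single number a clinician sees
```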
AI in radiology is not new. In fact, the field is swarming with various apps and tools seeking to find a place in the radiologist’s toolkit, to get more value out of medical imaging and improve patient care. So how does a radiology team pick which tools to invest in? Enter Blackford Analysis, a health tech startup that has, simply put, designed an “app store” for radiology departments that liberates access to life-saving tech for radiologists. CEO Ben Panter explains how the platform not only gives radiologists access to a curated group of best-in-class AI radiology tools, but does so en masse, circumventing the need for one-off approvals from hospital administrators and procurement teams.
Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.
One big theme in AI research has been the idea of interpretability. How should AI systems explain their decisions to engender trust in their human users? Can we trust a decision if we don’t understand the factors that informed it?
I’ll have a lot more to say some other time about the latter question, which is philosophical rather than technical in nature, but today I wanted to share some of our research into the first question: can our models explain their decisions in a way that convinces humans to trust them?
I am a radiologist, which makes me something of an expert in the field of human image analysis. We are often asked to explain our assessment of an image, to our colleagues or other doctors or patients. In general, there are two things we express.
What part of the image we are looking at.
What specific features we are seeing in the image.
This is partially what a radiology report is. We describe a feature, give a location, and then synthesise a conclusion. For example:
There is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy.
You don’t need to understand the words I used here, but the point is that the features (irregular mass, microcalcification) are consistent with the diagnosis (breast cancer, malignancy). A doctor reading this report already sees internal consistency, and that reassures them that the report isn’t wrong. A common example of a wrong report would be one where the described features don’t support the conclusion.
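The first of those two things – what part of the image we are looking at – maps naturally onto the saliency maps common in the deep learning literature. As a minimal, generic sketch (plain gradient saliency, with a placeholder model and input, not necessarily the exact method from our own research):

```python
# Minimal gradient-saliency sketch: *where* is the model looking?
# The model (random weights) and input are placeholders, not a clinical system.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a scan

logits = model(image)
logits[0, logits.argmax()].backward()  # d(top-class score) / d(pixels)

# Collapse colour channels into a per-pixel importance map.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 224, 224])
```

Note that a map like this only answers the first question (where), not the second (what features are being seen) – which is why a saliency map alone rarely reads like a radiology report.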
AI in medical imaging entered the consciousness of radiologists just a few years ago, notably peaking in 2016 when Geoffrey Hinton declared radiologists’ time was up, swiftly followed by the first AI startups booking exhibition booths at RSNA. Three years on, the sheer number and scale of AI-focussed offerings has gathered significant pace, so much so that this year the RSNA organising committee decided to move the ever-growing AI showcase to a new space in the lower level of the North Hall. In some ways it made sense to offer a larger, dedicated show hall to this expanding field; in others, not so much. With so many startups, wiggle room for booths was always going to be an issue; however, integration of AI into the workflow was supposed to be a key theme this year, and that theme was made distinctly futile by this purposeful and needless segregation.
By moving the location, the show hall for AI startups was made more difficult to find, with many vendors remarking that their natural booth footfall was not as substantial as last year, when AI was upstairs next to the big-boy OEM players. One witty critic quipped that the only way to find it was to ‘follow the smell of burning VC money, down to the basement’. Indeed, at a conference where the average step count for the week can easily hit 30 miles or more, an extra few minutes’ walk may well have put off some of the less fleet-of-foot. Several startup CEOs told us that the clientele arriving at their booths were the dedicated few, firming up existing deals, rather than new potential customers seeking a glimpse of a utopian future. At a time when startups are desperate for traction, this could have a disastrous knock-on effect on an as-yet nascent industry.
It wasn’t just the added distance that caused concern, however. By placing the entire startup ecosystem in an underground bunker, there was an overwhelming feeling that the RSNA conference had somehow buried the AI startups alive. There were certainly a couple of tombstones on the show floor – wide-open gaps where larger booths should have been, scaled back by companies double-checking their diminishing VC-funded runway. Zombie copycat booths from South Korea and China had also appeared, and to top it off, the very first booth you came across was none other than Deep Radiology, a company so ineptly marketed and indescribably mysterious that entering the show hall felt like you’d entered some sort of twilight zone for AI, rather than the sparky, buzzing and upbeat showcase of last year. It should now be clear to everyone who attended that Gartner’s hype cycle has well and truly swung, and we are heading swiftly into deep disillusionment.
No one knows who gave Rahul Roy tuberculosis. Roy’s charmed life as a successful trader involved traveling in his Mercedes C class between his apartment on the plush Nepean Sea Road in South Mumbai and his offices in the Bombay Stock Exchange. He cared little for Mumbai’s weather. He seldom rolled down his car windows – his ambient atmosphere, optimized for his comfort, rarely changed.
Historically TB, or “consumption” as it was known, was a Bohemian malady; the chronic suffering produced a rhapsody which produced fine art. TB was fashionable in Victorian Britain, in part, because consumption, like aristocracy, was thought to be hereditary. Even after Robert Koch discovered that the cause of TB was a rod-shaped bacterium – Mycobacterium tuberculosis (MTB) – TB had a special status denied to its immoral peer, syphilis, and its unaesthetic cousin, leprosy.
TB became egalitarian in the early twentieth century but retained an aristocratic noblesse oblige. George Orwell may have contracted TB when he voluntarily lived with miners in crowded squalor to understand poverty. Unlike Orwell, Roy had no pretensions of solidarity with poor people. For Roy, there was nothing heroic about getting TB. He was embarrassed not because of TB’s infectivity – TB sanitariums are a thing of the past – but because TB signaled a decline in social class. He believed rickshawallahs, not traders, got TB.
Today on THCB Spotlights, Matthew speaks with Jeremy Orr, CEO of Medial EarlySign. Medial EarlySign builds algorithms that detect elevated risk trajectories for high-burden serious diseases, as well as progression towards chronic diseases such as diabetes. Tune in to hear more about this AI/ML company that has been working on its algorithms since before many people had even heard of machine learning, what it has been doing with Kaiser Permanente and Geisinger, and where it is going next.
Filmed at the HLTH Conference in Las Vegas, October 2019.