
Tag: AI

The FDA Needs to Set Standards for Using Artificial Intelligence in Drug Development

By CHARLES K. FISHER, PhD

Artificial intelligence has become a crucial part of our technological infrastructure and the brain underlying many consumer devices. In less than a decade, machine learning algorithms based on deep neural networks evolved from recognizing cats in videos to enabling your smartphone to perform real-time translation between 27 different languages. This progress has sparked the use of AI in drug discovery and development.

Artificial intelligence can improve efficiency and outcomes in drug development across therapeutic areas. For example, companies are developing AI technologies that hold the promise of preventing serious adverse events in clinical trials by identifying high-risk individuals before they enroll. Clinical trials could be made more efficient by using artificial intelligence to incorporate other data sources, such as historical control arms or real-world data. AI technologies could also be used to magnify therapeutic responses by identifying biomarkers that enable precise targeting of patient subpopulations in complex indications.

Innovation in each of these areas would provide substantial benefits to those who volunteer to take part in trials, not to mention downstream benefits to the ultimate users of new medicines.

Misapplication of these technologies, however, can have unintended harmful consequences. To see how a good idea can turn bad, just look at what’s happened with social media since the rise of algorithms. Misinformation spreads faster than the truth, and our leaders are scrambling to protect our political systems.

Continue reading…

Artificial Intelligence vs. Tuberculosis – Part 2

By SAURABH JHA, MD

This is part two of a three-part series. Catch up on Part One here.

Clever Hans

Preetham Srinivas, the head of the chest radiograph project at Qure.ai, summoned Bhargava Reddy, Manoj Tadepalli, and Tarun Raj to the meeting room.

“Get ready for an all-nighter, boys,” said Preetham.

Qure’s scientists began investigating the algorithm’s mysteriously high performance on chest radiographs from a new hospital. To recap, the algorithm had an area under the receiver operating characteristic curve (AUC) of 1 – that’s 100% on a multiple-choice question test.

“Someone leaked the paper to AI,” laughed Manoj.

“It’s an engineering college joke,” explained Bhargava. “It means that you saw the questions before the exam. It happens sometimes in India when rich people buy the exam papers.”

Just because you know the questions doesn’t mean you know the answers. And AI wasn’t rich enough to buy the AUC.
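For readers who don’t live and breathe ROC curves, an AUC of 1 simply means the model ranked every abnormal radiograph above every normal one, with no overlap at all. Here is a minimal sketch with made-up scores (scikit-learn assumed; these numbers have nothing to do with Qure’s actual data):

```python
# Toy illustration: AUC of 1 means every positive case scores higher than every negative one.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1]  # ground truth: 0 = normal film, 1 = abnormal film

perfect_scores   = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]  # positives always outrank negatives
realistic_scores = [0.1, 0.4, 0.6, 0.3, 0.5, 0.7, 0.2, 0.9]  # some positives slip below negatives

print(roc_auc_score(y_true, perfect_scores))    # 1.0 - the "100% on the exam" result
print(roc_auc_score(y_true, realistic_scores))  # 0.75 - closer to what real models achieve
```

An AUC that perfect on data from a brand-new hospital is exactly the kind of result that should make you suspect the model has, somehow, seen the answers.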

The four lads were school friends from Andhra Pradesh. They had all studied computer science at the Indian Institute of Technology (IIT), a freaky improbability given that only a hundred out of a million aspiring youths are selected for this most coveted discipline in India’s most coveted institute. They had revised for exams together, pulling all-nighters – in working together, they worked harder and made work more fun.

Continue reading…

Radiology Gets an “App Store” for its AI Tools | Ben Panter, Blackford Analysis

AI in radiology is not new. In fact, the field is swarming with apps and tools seeking a place in the radiologist’s toolkit, promising to get more value out of medical imaging and improve patient care. So, how does a radiology team pick which tools to invest in? Enter Blackford Analysis, a health tech startup that has, simply put, designed an “app store” for radiology departments that liberates access to life-saving tech for radiologists. CEO Ben Panter explains how the platform not only gives radiologists access to a curated group of best-in-class AI radiology tools, but does so en masse to circumvent the need for one-off approvals from hospital administrators and procurement teams.

Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.

Continue reading…

Explain yourself, machine. Producing simple text descriptions for AI interpretability

By LUKE OAKDEN-RAYNER, MD

One big theme in AI research has been the idea of interpretability. How should AI systems explain their decisions to engender trust in their human users? Can we trust a decision if we don’t understand the factors that informed it?

I’ll have a lot more to say some other time on the latter question, which is philosophical rather than technical in nature, but today I wanted to share some of our research into the first question. Can our models explain their decisions in a way that can convince humans to trust them?


Decisions, decisions

I am a radiologist, which makes me something of an expert in the field of human image analysis. We are often asked to explain our assessment of an image, to our colleagues or other doctors or patients. In general, there are two things we express.

  1. What part of the image we are looking at.
  2. What specific features we are seeing in the image.

This is partially what a radiology report is. We describe a feature, give a location, and then synthesise a conclusion. For example:

There is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy.

You don’t need to understand the words I used here, but the point is that the features (irregular mass, microcalcification) are consistent with the diagnosis (breast cancer, malignancy). A doctor reading this report already sees internal consistency, and that reassures them that the report isn’t wrong. A common example of a wrong report could be:

Continue reading…

RSNA 2019 AI Round-Up


By HUGH HARVEY, MBBS and SHAH ISLAM, MBBS

AI in medical imaging entered the consciousness of radiologists just a few years ago, peaking notably in 2016 when Geoffrey Hinton declared that radiologists’ time was up, swiftly followed by the first AI startups booking exhibition booths at RSNA. Three years on, the sheer number and scale of AI-focussed offerings has gathered significant pace, so much so that this year the RSNA organising committee decided to move the ever-growing AI showcase to a new space on the lower level of the North Hall. In some ways it made sense to offer a larger, dedicated show hall to this expanding field; in others, not so much. With so many startups, wiggle room for booths was always going to be an issue. However, integration of AI into the workflow was supposed to be a key theme this year, an ambition made distinctly futile by this purposeful and needless segregation.

Moving the location made the show hall for AI startups more difficult to find, with many vendors remarking that their natural booth footfall was not as substantial as last year, when AI was upstairs next to the big-boy OEM players. One witty critic quipped that the only way to find it was to ‘follow the smell of burning VC money, down to the basement’. Indeed, at a conference where the average step count for the week can easily hit 30 miles or more, adding an extra few minutes’ walk may well have put off some of the less fleet-of-foot. Several startup CEOs told us that the clientele arriving at their booths were the dedicated few, firming up existing deals, rather than new potential customers seeking a glimpse of a utopian future. At a time when startups are desperate for traction, this could have a disastrous knock-on effect on an as-yet nascent industry.

It wasn’t just the added distance that caused concern, however. Placing the entire startup ecosystem in an underground bunker created an overwhelming feeling that the RSNA conference had somehow buried the AI startups alive in an open grave. There were certainly a couple of tombstones on the show floor: wide open gaps where larger booths should have been, scaled back by companies double-checking their diminishing VC-funded runway. Zombie copycat booths from South Korea and China had also appeared, and to top it off, the very first booth you came across was none other than Deep Radiology, a company so ineptly marketed and indescribably mysterious that entering the show hall felt like stepping into some sort of twilight zone for AI, rather than the sparky, buzzing and upbeat showcase it was last year. It should now be clear to everyone who attended that Gartner’s hype cycle has well and truly swung, and we are swiftly heading into deep disillusionment.

Continue reading…

THCB Spotlights: Jeremy Orr, CEO of Medial EarlySign

Today on THCB Spotlights, Matthew speaks with Jeremy Orr, CEO of Medial EarlySign. Medial EarlySign does complex algorithmic detection of elevated risk trajectories for high-burden serious diseases, and of the progression towards chronic diseases such as diabetes. Tune in to hear more about this AI/ML company that has been working on their algorithms since before many had even heard of machine learning, what they’ve been doing with Kaiser Permanente and Geisinger, and where they are going next.

Filmed at the HLTH Conference in Las Vegas, October 2019.

The FDA has approved AI-based PET/MRI “denoising”. How safe is this technology?

By LUKE OAKDEN-RAYNER, MD

Super-resolution* promises to be one of the most impactful medical imaging AI technologies, but only if it is safe.

Last week we saw the FDA approve the first MRI super-resolution product, from the same company that received approval for a similar PET product last year. This news seems as good a reason as any to talk about the safety concerns that I and many other people have with these systems.

Disclaimer: the majority of this piece is about medical super-resolution in general, and not about the SubtleMR system itself. That specific system is addressed directly near the end.

Zoom, enhance

Super-resolution is, quite literally, the “zoom and enhance” CSI meme in the gif at the top of this piece. You give the computer a low quality image and it turns it into a high resolution one. Pretty cool stuff, especially because it actually kind of works.

In medical imaging though, it’s better than cool. You ever wonder why an MRI costs so much and can have long wait times? Well, it is because you can only do one scan every 20-30 minutes (with some scans taking an hour or more). The capital and running costs are only spread across one to two dozen patients per day.

So what if you could get an MRI of the same quality in 5 minutes? Maybe two to five times more scans per day (the “getting the patient ready for the scan” time becomes the bottleneck), meaning lower cost per scan and more throughput.
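A rough back-of-the-envelope version of that argument (the scanning-day length and setup time below are my assumptions, not figures from the article):

```python
# Back-of-the-envelope MRI throughput under assumed numbers.
SCAN_DAY_MINUTES = 10 * 60   # assume a 10-hour scanning day
SETUP_MINUTES = 5            # assumed per-patient positioning/prep time

def scans_per_day(scan_minutes: int) -> int:
    """Patients scanned per day for a given scan duration."""
    return SCAN_DAY_MINUTES // (scan_minutes + SETUP_MINUTES)

print(scans_per_day(30))  # ~17 patients/day: the "one to two dozen" status quo
print(scans_per_day(5))   # ~60 patients/day: setup, not the scan, becomes the bottleneck
```

Under these assumptions that is roughly a three- to four-fold jump in throughput, consistent with the “two to five times more scans” ballpark once you vary the scan and setup times.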

This is the dream of medical super-resolution.

Continue reading…

Why I’m Not Buying Healthcare’s AI Hype…Yet | Enrico Coiera, Macquarie University

By JESSICA DAMASSA, WTF HEALTH

Everyone seems to be amazed by artificial intelligence (AI) and machine learning in healthcare, but Enrico Coiera, Professor of Medical Informatics at Macquarie University, is not impressed — yet. Instead of designing algorithms, he advocates for designing “human-machine systems” that work with the best part of the health system: the people. An interesting anecdote about how AI can go wrong? Diagnoses of thyroid cancer in South Korea have increased 15-fold, but not because of a higher prevalence of the disease…it’s because of more sensitive AI diagnostics that are over-diagnosing people and subjecting many to chemo and other treatments they don’t need. So, what should technologists do to ensure that tech doesn’t fail patient outcomes? Enrico gives his best advice for a healthcare industry that’s “in love with technology and can’t often see the simple solution for the sexy tech one.”

Filmed in the HISA Studio at HIC 2019 in Melbourne, Australia, August 2019.

Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt. Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health.

Improving Medical AI Safety by Addressing Hidden Stratification


By LUKE OAKDEN-RAYNER, MD and JARED DUNNMON, PhD

Medical AI testing is unsafe, and that isn’t likely to change anytime soon.

No regulator is seriously considering implementing “pharmaceutical style” clinical trials for AI prior to marketing approval, and evidence strongly suggests that pre-clinical testing of medical AI systems is not enough to ensure that they are safe to use.  As discussed in a previous post, factors ranging from the laboratory effect to automation bias can contribute to substantial disconnects between pre-clinical performance of AI systems and downstream medical outcomes.  As a result, we urgently need mechanisms to detect and mitigate the dangers that under-tested medical AI systems may pose in the clinic.  

In a recent preprint co-authored with Jared Dunnmon from Chris Ré’s group at Stanford, we offer a new explanation for the discrepancy between pre-clinical testing and downstream outcomes: hidden stratification. Before explaining what this means, we want to set the scene by saying that this effect appears to be pervasive, underappreciated, and could lead to serious patient harm even in AI systems that have been approved by regulators.

But there is an upside here as well. Looking at the failures of pre-clinical testing through the lens of hidden stratification may offer us a way to make regulation more effective, without overturning the entire system and without dramatically increasing the compliance burden on developers.

Continue reading…

The Rise and Rise of Quantitative Cassandras

By SAURABH JHA, MD

Despite an area under the ROC curve of 1, Cassandra’s prophecies were never believed. She neither hedged nor relied on retrospective data – her predictions, such as the Trojan War, were prospectively validated. In medicine, a new type of Cassandra has emerged – one who speaks in probabilistic tongue, forked unevenly between the probability of being right and the possibility of being wrong. One who, by conceding that she may be categorically wrong, is technically never wrong. We call these new Minervas “predictions.” The Owl of Minerva flies above its denominator.

Deep learning (DL) promises to transform the prediction industry from a stepping stone for academic promotion and tenure to something vaguely useful for clinicians at the patient’s bedside. Economists studying AI believe that AI is revolutionary, revolutionary like the steam engine and the internet, because it predicts better.

A sophisticated DL algorithm, recently published in Nature, was able to predict acute kidney injury (AKI) continuously in hospitalized patients by extracting data from their electronic health records (EHRs). The algorithm interrogated nearly a million EHRs of patients in Veterans Affairs hospitals. As intriguing as the methodology is, it’s less interesting than the results. For every correct prediction of AKI, there were two false positives. The false alarms would have made Cassandra blush, but they’re not bad for prognostic medicine. The DL-generated ROC curve stands head and shoulders above the diagonal representing randomness.
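To make that “two false positives per hit” figure concrete, here is a toy calculation (the weekly alert count is invented purely for illustration):

```python
# Toy alert-burden arithmetic implied by "two false positives per correct prediction".
true_per_hit = 1
false_per_hit = 2

ppv = true_per_hit / (true_per_hit + false_per_hit)
print(f"Positive predictive value: {ppv:.2f}")  # 0.33: one in three alerts is a real AKI

# Hypothetical ward: if the model fires 30 alerts in a week...
alerts_per_week = 30
print(f"Expected false alarms per week: {alerts_per_week * (1 - ppv):.0f}")  # ~20
```

One in three alerts being real is, as the author says, respectable for prognostic medicine, but it also means most of the warnings a clinician receives are noise.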

The researchers used a technique called “ablation analysis.” I have no idea how that works but it sounds clever. Let me make a humble prophecy of my own – if unleashed at the bedside, the AKI-specific, DL-augmented Cassandra could wreak havoc of a scale one struggles to comprehend.

Leaving aside that the accuracy of algorithms trained retrospectively falls in the real world – as doctors know, there’s a difference between book knowledge and practical knowledge – the major problem is the effect that the availability of information has on decision making. Prediction is fundamentally information. Information changes us.

Continue reading…
