Could computers develop the drugs of the future? The short answer: probably, but not yet.
Computer simulation is a cornerstone of the development and optimization of “mission-critical” systems in industries ranging from aerospace to finance. Even the smooth functioning of nuclear reactors – where failure would be catastrophic – relies on a computational model called a Virtual Reactor, which allows scientists and engineers to observe the reactor’s real-time response to operating conditions.
The analogous model in medicine – a “virtual human” – doesn’t yet exist. We still rely on living, breathing animals and humans to test drugs and devices. Discoveries are made largely by trial and error. But the age-old approach that led to the discovery of antibiotics, cardiac catheterization, and organ transplantation is becoming increasingly unsustainable.
According to the Tufts Center for the Study of Drug Development, the cost of developing a new drug now exceeds $2.5 billion, 75% of which is spent in various phases of development. Less well known than the price tag of successful drugs is the fate of most candidates: 90% of potential drugs backed by funding of this magnitude fail in late-stage clinical trials before making it to market. Perhaps most worrisome of all is that when a drug or device fails, no one really understands why. The traditional clinical trial simply isn’t designed to tell us why an adverse event occurred, or why the observed efficacy was strikingly less than predicted. The failed therapeutics are largely abandoned in an all-or-nothing mentality that ultimately dampens innovation.
Enter in silico clinical trials. Avicenna, the European Commission-funded coalition dedicated to the support and development of in silico technology, defines in silico clinical trials (ISCT) as “the use of individualized computer simulation in the development or regulatory evaluation of a medicinal product, medical device, or medical intervention.” In other words, test the idea on a computer, not a person.
In all fairness, computer modeling in drug and device development isn’t particularly groundbreaking. A handful of pharmaceutical companies use computational methods to model pharmacodynamics and pharmacokinetics in pre-clinical studies. And medical device companies use computational fluid dynamics to model how blood moves around an implanted device.
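To make that pre-clinical modeling concrete, here is a minimal sketch of the simplest tool in the pharmacokinetic kit: a one-compartment model of drug concentration after an IV bolus. Every value below (the dose, volume of distribution, and elimination rate) is an illustrative assumption, not data from any real drug.

```python
# One-compartment pharmacokinetics: after an IV bolus, plasma
# concentration decays exponentially as C(t) = (dose / Vd) * exp(-ke * t).
# All parameter values are invented for illustration.
import numpy as np

def concentration(t_hours, dose_mg=100.0, vd_liters=40.0, ke_per_hour=0.1):
    """Plasma concentration (mg/L) at time t after an IV bolus."""
    return (dose_mg / vd_liters) * np.exp(-ke_per_hour * t_hours)

for t in range(0, 25, 4):
    print(f"t = {t:2d} h   C = {concentration(t):.3f} mg/L")
```

Real pre-clinical models chain many such compartments together (gut, plasma, tissue, clearance organs), but the principle is the same: physiology expressed as equations that can be run forward in time.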
What’s missing is a model that can be tailored to each patient, with the option to punch in a patient’s physiologic parameters and learn how that individual will respond. The goal is to create – you guessed it – a Virtual Physiologic Human, or VPH.
Although the VPH model is far from attaining the reliability needed to meaningfully transform the clinical trial process, it isn’t just science fiction. A recent UCSF computer model, for instance, correctly predicted side effects of over 650 drugs. Another study of 300 virtual type 1 diabetics accurately predicted trends in blood glucose after “meals” and with “insulin.” And a computer model in patients with sepsis identified subgroups of virtual patients for whom TNF-alpha inhibitors could be life-saving, and those in whom the immunomodulatory drug did more harm than good.
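To give a feel for what a “virtual diabetic” is, here is a deliberately toy version of the idea: a two-equation glucose-insulin model (a stripped-down Bergman minimal model) stepped forward in time, with an insulin “dose” delivered mid-simulation. The shape of the equations is standard, but every parameter and input value is an assumption chosen for illustration, not taken from the study above.

```python
# Toy glucose-insulin dynamics for one "virtual patient", integrated
# with simple Euler steps. Parameters are illustrative assumptions.
G, X = 180.0, 0.0           # plasma glucose (mg/dL), insulin action (1/min)
Gb = 100.0                  # basal glucose the model relaxes toward
p1, p2, p3 = 0.02, 0.025, 1.3e-5   # made-up rate constants
dt = 1.0                    # time step in minutes

def insulin_level(t):
    """Plasma insulin above basal (uU/mL): a bolus 'given' at t = 60 min."""
    return 40.0 if 60 <= t < 90 else 0.0

for t in range(0, 240):
    dG = -p1 * (G - Gb) - X * G          # glucose falls with insulin action
    dX = -p2 * X + p3 * insulin_level(t)  # insulin action builds, then decays
    G, X = G + dG * dt, X + dX * dt
    if t % 60 == 0:
        print(f"t = {t:3d} min   glucose = {G:6.1f} mg/dL")
```

A study like the one described would run hundreds of such patients, each with different parameters, and compare the simulated glucose curves against real clinical data.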
In the not-so-distant future, ISCT could be used to answer questions that current clinical trials can’t. How would a drug’s effect change if a patient’s body weight differed by 20%? What genetic profile is most likely to respond best to the drug? What would happen if the dose were doubled, or halved? Ultimately, the hope is that ISCT technology could reduce the size and duration of clinical trials, refine clinical outcomes with greater explanatory power, and even partially replace clinical trials in cases where ISCT can generate sound and reliable evidence.
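As a hedged sketch of how an in silico trial might probe those what-if questions, the snippet below sweeps a dose across a small cohort of virtual patients whose body weights and drug sensitivities are drawn at random. The Emax-style response curve and every parameter range are invented for illustration; a real ISCT would rest on validated physiologic models.

```python
# Dose sweep over a cohort of virtual patients. All distributions and
# the response model are hypothetical, chosen only to show the workflow.
import numpy as np

rng = np.random.default_rng(42)
n_patients = 300

# Each virtual patient gets physiologic parameters drawn from assumed ranges.
body_weight_kg = rng.normal(75, 15, n_patients)
sensitivity = rng.lognormal(mean=0.0, sigma=0.3, size=n_patients)

def simulated_response(dose_mg, weight, sens):
    """Toy Emax dose-response: effect saturates as weight-adjusted exposure rises."""
    exposure = dose_mg / weight * sens
    return exposure / (exposure + 1.0)   # normalized so maximum effect = 1

for dose in (50.0, 100.0, 200.0):        # halved, reference, doubled
    effect = simulated_response(dose, body_weight_kg, sensitivity)
    print(f"dose {dose:5.1f} mg: mean effect {effect.mean():.2f}, "
          f"responders (>0.5) {np.mean(effect > 0.5):.0%}")
```

The point of the exercise: questions like “what if the dose were halved?” become a one-line change and a re-run, rather than a new multi-year trial arm.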
If successful, the implications of in silico trials – from individualized drug development to massive cost savings to much-needed and overdue development of orphan drugs – are hard to overestimate.
Nicole Van Groningen is an internal medicine physician in New York City. She tweets from @NVanGroningenMD and blogs at theavantmed.com.
It goes further than that, I think. You have to know the basic assumptions, which can only come from human studies (until we can reduce biology to a set of differential equations). If the human studies are too small and underpowered, how good would those assumptions be? The smallest flaw can be magnified by orders of magnitude and propagated throughout the entire model, and through every model built on the original.
Then, of course, comes malfeasance. Computer algorithms are created by people. People who get paid by other people who get paid ultimately by a set of financial interests. We can hope for the best. We can ask to see the code. We can have checks and balances. But if history is an indicator, we are going to get taken for another ride.
And finally, how much information about patients are we going to need to collect, both to validate algorithms and to apply results? What else will the “collectors” do with this information? How many data points will be tacked on, just because we can, for other, unrelated, utilization by the financiers of this entire scheme?
All in all, I think this could be great technology, but it seems to me that it’s high time legislation and ethics were brought in line with current and future technologies. Maybe we need a new bill of rights for the 21st century and beyond, instead of bickering about selling muskets to well regulated militias.
My first reaction was the same as yours, Nicky ..
This is SERIOUSLY COOL stuff. I mean, just imagine! The things we could simulate, the efficiencies we could create, the breakthroughs waiting for us just out of reach ..
Then it hit me. This one is going to be TRICKY. If you stop and think about the controversies around the way we do clinical trials in the real world – the questions about what is and isn’t a legitimate finding, the validity of people’s numbers – you can imagine those same fights playing out over simulations.
On the other hand, the kind of medicine you describe could theoretically answer many of those questions by allowing us to test a hundred scenarios we could never manage in real life.
Let’s get working on it! But let’s get working on it with eyes wide open. And let’s start planning now to build safeguards into the system to prevent abuse and assure transparent results. More evidence that a Uniform Data Code is needed, requiring open access to data sources and research.
To be fair, we’ll also have to have some sort of access to the code that drives these kinds of simulations. Otherwise we’ll have to be taking people’s word for it, and evidence suggests that is probably not a very good idea …
John Irvine
This is so cool. VPH seems promising. Would def save us a lot of money!