A Museum of Modern Art exhibit by Michael Burton once proposed that human beings themselves would be the soil for a “future farm”:
Future Farm predicts that the emerging pharmaceutical research in harvesting adult stem cells from fat tissues and its convergence with future nanotechnologies, will bring with it scenarios that reconsider the body as income. We live in a world where industries exist to offer financial rewards for those willing to sell a kidney or produce hair to beautify others. Industries have grown to facilitate transplant tourism as a result of the success of contemporary surgery. And scientific and technological advances continue to bring new possibilities for the practice of farming the body.
This may seem like an overly dramatic, even science-fictional, account of desperation driven by poverty and larger economic trends. But the global economic race to the bottom has now so influenced medical research that Burton’s dark vision is coming closer to realization.
A recent article by Bartlett and Steele and a book by Carl Elliott describe the rise of “contract research organizations” that organize the initial phases of drug trials. Bartlett and Steele choose a provocative metaphor to describe the trend:
To have an effective regulatory system you need a clear chain of command—you need to know who is responsible to whom, all the way up and down the line. There is no effective chain of command in modern American drug testing. Around the time that drugmakers began shifting clinical trials abroad, in the 1990s, they also began to contract out all phases of development and testing, putting them in the hands of for-profit companies.
It used to be that clinical trials were done mostly by academic researchers in universities and teaching hospitals, a system that, however imperfect, generally entailed certain minimum standards. The free market has changed all that. Today it is mainly independent contractors who recruit potential patients both in the U.S. and—increasingly—overseas. They devise the rules for the clinical trials, conduct the trials themselves, prepare reports on the results, ghostwrite technical articles for medical journals, and create promotional campaigns. The people doing the work on the front lines are not independent scientists. They are wage-earning technicians who are paid to gather a certain number of human beings; sometimes sequester and feed them; administer certain chemical inputs; and collect samples of urine and blood at regular intervals. The work looks like agribusiness, not research.
Because of the deference shown to drug companies by the F.D.A.—and also by Congress, which has failed to impose any meaningful regulation—there is no mandatory public record of the results of drug trials conducted in foreign countries. Nor is there any mandatory public oversight of ongoing trials.
Therefore, it is up to journalists like Bartlett and Steele to uncover problems. And they are legion:
The Argentinean province of Santiago del Estero, with a population of nearly a million, is one of the country’s poorest. In 2008 seven babies participating in drug testing in the province suffered what the U.S. clinical-trials community refers to as “an adverse event”: they died. . . . In New Delhi, 49 babies died at the All India Institute of Medical Sciences while taking part in clinical trials over a 30-month period. . . . In 2007, residents of a homeless shelter in Grudziadz, Poland, received as little as $2 to take part in a flu-vaccine experiment. The subjects thought they were getting a regular flu shot. They were not. At least 20 of them died.
Bartlett and Steele also discuss problems with research in the US. Exploitation probably should not be a surprise in a country where unpaid prison labor appears to be a strategy to boost productivity. US companies are also driving the “initial stages of distributed human computing that can be directed at mental tasks the way that surplus remote server rackspace or Web hosting can be purchased to accommodate sudden spikes in Internet traffic.” (Such “human intelligence tasks” can be purchased for as little as a penny each on Amazon’s Mechanical Turk.) But the slow infiltration of less developed countries’ standards into US drug testing should be a concern for the FDA.
The system also appears to give drug companies wide latitude to manipulate results, leading to the rise of “rescue countries” that are particularly prone to produce positive results:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. . . . In 2004—on April Fools’ Day, as it happens—the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey. The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data.
Massive global inequalities render populations around the world vulnerable to exploitative testing conditions.
Carl Elliott’s book White Coat, Black Hat covers similar terrain, as well as the conflicts of interest and other issues we’ve addressed at Seton Hall’s health law center. His review of recent books on medical research described a “mild torture economy.” His piece “Guinea Pigging” suggests that “rescue counties” in the US may complement the “rescue countries” of Bartlett and Steele:
This unit was in a university hospital, not a corporate lab, and the staff had a casual attitude toward regulations and procedures. “The Animal House of research units” is what [one research subject] calls it. . . . Although study guidelines called for stringent dietary restrictions, the subjects got so hungry that one of them picked the lock on the food closet. “We got giant boxes of cookies and ran into the lounge and put them in the couch,” Rockwell says. “This one guy was putting them in the ceiling tiles.” Rockwell has little confidence in the data that the study produced. “The most integral part of the study was the diet restriction,” he says, “and we were just gorging ourselves at 2 A.M. on Cheez Doodles.”
Elliott’s litany of poorly controlled or ramshackle studies gives us one more item to add to Dr. John Ioannidis’s many reasons for doubting medical research:
Ioannidis [has] laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. . . .
When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X . . . But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. . . . [S]tudies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe. . . .
And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge). . . . If a study somehow avoids every one of these problems and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
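The “detailed mathematical proof” mentioned in the excerpt is, at its core, a positive-predictive-value calculation from Ioannidis’s 2005 paper “Why Most Published Research Findings Are False.” A minimal sketch in Python; the parameter values below are purely illustrative assumptions, not estimates for any particular field:

```python
# Ioannidis's argument as arithmetic: given prior odds R that a probed
# relationship is real, false-positive rate alpha, false-negative rate
# beta, and a bias term u (the fraction of analyses reported as positive
# that otherwise would not have been), the chance that a claimed
# positive finding is actually true is
#
#   PPV = ((1 - beta) * R + u * beta * R)
#         / (R + alpha - beta * R + u - u * alpha + u * beta * R)

def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Probability that a claimed positive finding is true."""
    true_pos = (1 - beta) * R + u * beta * R          # real effects found (or rescued by bias)
    false_pos = alpha + u * (1 - alpha)               # null effects reported as positive
    return true_pos / (true_pos + false_pos)

# A well-powered trial of an even-odds hypothesis with no bias:
print(round(ppv(R=1.0), 3))                           # 0.941
# Exploratory research: long-shot hypotheses (R = 0.1), modest power
# (beta = 0.4), and some bias (u = 0.3):
print(round(ppv(R=0.1, beta=0.4, u=0.3), 3))          # 0.177
```

Under these (assumed) exploratory-research conditions, fewer than one in five positive findings would be true—no outright fraud required, just long odds, modest power, and a little wiggle room.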
To summarize: Ioannidis casts some doubt on even the best of studies, and Elliott, Bartlett, and Steele show that bad studies may be far more common than we suspect. It’s a troubling set of observations for all concerned. We should at the very least insist on much more systematic monitoring of global drug trials.
This post originally appeared on Health Reform Watch, the web log of the Seton Hall University School of Law.
Frank Pasquale, JD, is the Schering-Plough Professor in health care regulation and enforcement at Seton Hall Law School and is the Associate Director of the Center for Health & Pharmaceutical Law & Policy. He has distinguished himself as an internationally recognized scholar in health, intellectual property, and information law and has made numerous academic presentations at universities across North America and at the National Academy of Sciences. A prolific writer, Professor Pasquale’s work has been featured in top law reviews, books, peer-reviewed journals, and online blogs, including Health Reform Watch, of which he is Editor-in-Chief. A frequent media presence, he has appeared in the New York Times, San Francisco Chronicle, Los Angeles Times, Boston Globe, Financial Times, and on CNN, WNYC’s Brian Lehrer Show, and National Public Radio’s Talk of the Nation.