By ANISH KOKA, MD
Something didn’t seem right to epidemiologist Eric Weinhandl when he glanced at an article published in the venerated Journal of the American Medical Association (JAMA) on a crisp fall evening in Minnesota. Eric is a smart guy – a native Minnesotan and a math major who fell in love with clinical quantitative database-driven research because he happened to work with a nephrologist early in his training. After finishing his doctorate in epidemiology, he cut his teeth at the Chronic Disease Research Group, a division of the Hennepin Healthcare Research Institute that has held the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) contract for the United States Renal Data System Coordinating Center. The research group Eric worked for from 2004 to 2015 essentially organized the data generated by almost every dialysis patient in the United States. He didn’t just work with the data as an end user; he helped maintain the largest and most important database on chronic kidney disease in the United States.
For all these reasons, this particular JAMA study, which sought to examine the association between dialysis facility ownership and access to kidney transplantation, piqued Eric’s interest. The provocative hypothesis was that for-profit dialysis centers are financially motivated to keep patients hooked to dialysis machines rather than refer them for kidney transplantation. A number of observational studies have tracked better outcomes in not-for-profit settings, so the theory wasn’t implausible, but mulling over the results more carefully, Eric noticed how large the reported effect sizes were. Specifically, the hazard ratios for for-profit vs. not-for-profit facilities were 0.36 for being placed on a waiting list, 0.50 for receiving a living donor kidney transplant, and 0.44 for receiving a deceased donor kidney transplant. This roughly translates to patients being one-half to one-third as likely to be referred for, and ultimately receive, a transplant. These are incredible numbers when you consider that it can be major news when a study reports a hazard ratio of 0.9. Part of the reason one doesn’t usually see hazard ratios this large is that they signal an effect so obvious to the naked eye that it doesn’t require a trial. There’s a reason there are no trials on the utility of cauterizing an artery to stop bleeding during surgery.
But it really wasn’t the hazard ratios that first caught his eye. What stuck out were the reported event rates in the study. A total of 1.9 million incident end-stage kidney disease patients over 17 years made sense. Excluding 90,000 patients who were wait-listed or received a kidney transplant before ever starting dialysis, and 250,000 patients without any dialysis facility information, left ~1.5 million patients for the primary analysis. The original paper listed 121,000 first wait-list events, 23,000 living donor transplants, and ~50,000 deceased donor transplants. But the United Network for Organ Sharing (UNOS), the organization that manages the US organ transplantation system, reported 280,000 transplants during the same period.
The paper somehow was missing almost 210,000 transplants.
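Eric’s cross-check is nothing more than arithmetic on the figures quoted above. A minimal sketch of that back-of-the-envelope calculation (the variable names are mine, not the paper’s):

```python
# Figures as quoted from the JAMA paper and UNOS, in rounded counts
incident_patients = 1_900_000       # incident ESKD patients over 17 years
excluded_pre_dialysis = 90_000      # wait-listed/transplanted before ever starting dialysis
excluded_no_facility = 250_000      # no dialysis facility information

# Cohort left for the primary analysis (~1.5 million)
analysis_cohort = incident_patients - excluded_pre_dialysis - excluded_no_facility

living_donor = 23_000               # living donor transplants in the paper
deceased_donor = 50_000             # deceased donor transplants in the paper
paper_transplants = living_donor + deceased_donor

unos_transplants = 280_000          # UNOS count over the same period

# Transplants unaccounted for in the paper (~210,000)
missing = unos_transplants - paper_transplants
```

Running the numbers gives an analysis cohort of about 1.56 million and a gap of roughly 207,000 transplants, which is the discrepancy Eric noticed.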
Henrietta Lacks did not give researchers permission to take her cancer cells and study them. After she died in 1951, her family was not asked permission as her immortalized cells were used in countless laboratories. This month, the National Institutes of Health finally took a step in righting that wrong, announcing that the Lacks family would help decide who can access Henrietta’s DNA.
Today, getting a patient’s permission, often in writing, is standard in experimental medical research. Well, not always. Currently, there are at least nine ongoing studies involving 62 U.S. cities and towns with a combined population of more than 45 million that do not involve getting permission. They take place during emergencies, such as when ambulances arrive at an accident where patients are too injured to give permission.
For example, imagine this scenario based on a recent study sponsored by the University of Washington. You are involved in a car accident. Paramedics find you bleeding severely. They give you fluids to keep your blood pressure up, but they intentionally give you a bag of fluid that is smaller than the standard. Then they monitor your medical outcome and compare it with patients who received the larger amount of fluids. During the emergency, neither you nor your family know about the study.
Research on medical emergencies is vital in determining how to care for people with life-threatening injuries, because we often do not have proof that standard methods are the best. Still, the people involved should be told how their records are being used.
In 1996, the Department of Health and Human Services and the Food and Drug Administration passed regulations allowing research about emergency treatment to occur without permission. For a study to qualify, patients need to have a life-threatening condition, current standards of care must be unproven or performing poorly, and obtaining permission must not be feasible (such as an unconscious patient or a patient whose condition does not allow time for informed consent).
Most people I work with in medicine have never heard of GitHub.
For the unfamiliar, GitHub is an online repository hosting service, an essential tool computer programmers use to store their code. It has a number of virtues, including the ability to track multiple versions of a codebase (sort of like remembering every tracked change you ever made to a Word document). That version history alone makes it essential for programmers, but its value goes further: GitHub facilitates open-source collaboration through its “social” features, similar to networks like Facebook or Twitter, in which you follow the content of others and others follow you.
The most amazing thing about GitHub is that many users post their code (their work, their blood, sweat, and tears) publicly on their GitHub profiles. Individuals will comment on others’ code, providing valuable input that the owner can use to improve the work. In addition, users can “fork” another person’s repository and work directly on the code in their own GitHub profile to make changes or improvements, sort of like a tag-team collaboration. GitHub is the tool that facilitates large-scale open-source collaboration in the software/web programming world (the kind of collaboration that led to the Linux revolution).
By early 2012 there were reportedly 1.2 million users hosting over 3.6 million repositories. Now that’s collaboration at scale!
So again, you may ask, why should physicians or medical researchers care about GitHub? Because it can have broader application beyond the software/web programming world, as shown by its use among non-programmers, who are currently repurposing GitHub to advance collaboration in their own fields. They are posting book projects and transcripts of talks on the site to encourage conversation and collaboration. One user even published his personal DNA information to encourage the development of open-source DNA analysis. It has been suggested that GitHub could even be used by US citizens to “fork” the law and propose their own amendments to their elected officials.
How might we use GitHub to democratize the world of medical research?
As researchers, we perform so many activities in isolation, which forces us to constantly “reinvent the wheel” – from drafting ethics board applications, to creating research protocols, to writing snippets of statistical code or code for web programs.
Twenty-five years ago this month, the New England Journal of Medicine published a special report on something that’s become medical gospel: aspirin.
That’s right. Not as in “take two and call me in the morning,” but in the realm of the randomized double-blinded placebo-controlled trial. Or what we generally consider the gold standard of evidence in medical research.
If you’ve often heard that bit of jargon but always wondered why it’s so exalted, break it down:
- randomized: each subject is assigned the treatment (aspirin) or placebo (‘inert’ sugar pill) by chance, not in any planned sequence.
- double-blinded: neither the researchers nor the subjects know who is taking what (everything is coded so that analysts can find out at the end).
- placebo-controlled: the study compares the treatment against placebo to see if it’s helpful or harmful.
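To make the jargon concrete, here is a minimal, hypothetical sketch (not the Physicians’ Health Study’s actual procedure) of how a double-blinded randomization might be coded: each subject receives an opaque coded assignment, and the code-to-arm key stays sealed until the analysis.

```python
import random

def randomize(subject_ids, seed=42):
    """Assign each subject to 'aspirin' or 'placebo' at random,
    returning coded labels plus a sealed key for the analysts."""
    rng = random.Random(seed)
    key = {}          # code -> actual arm; kept sealed from researchers and subjects
    assignments = {}  # subject -> opaque code printed on the pill bottle
    for sid in subject_ids:
        arm = rng.choice(["aspirin", "placebo"])   # randomized: assignment by chance
        code = f"RX-{rng.randrange(10**6):06d}"    # double-blinded: only a code is visible
        key[code] = arm
        assignments[sid] = code
    return assignments, key

assignments, key = randomize(["S001", "S002", "S003", "S004"])
# Neither clinicians nor subjects see `key` until the trial ends;
# "unblinding" means looking up key[assignments[sid]] at analysis time.
```

The placebo arm in the key is what makes the comparison controlled: outcomes in the coded aspirin group are measured against the coded sugar-pill group, not against no treatment at all.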
Even though acetylsalicylic acid’s properties as a pain reliever and fever reducer had been known in the time of Hippocrates, it was in 1899 that Bayer first patented and marketed what came to be known as aspirin worldwide.
A mere 89 years later, researchers from the “Physicians Health Study” did something unusual. Citing aspirin’s “extreme beneficial effects on non-fatal and fatal myocardial infarction”–doctor speak for heart attacks–the study’s Data Monitoring Board recommended terminating the aspirin portion of the study early (the study also was looking at the effects of beta-carotene). In other words, the benefit in preventing heart attacks was so clear at 5 years instead of the planned 12 years of study that it was deemed unethical to continue blinding participants or using placebo.