Is 5 too few and 40 too many? That’s one of many questions that researcher David Chan is asking about the clinical reminders embedded into those electronic health record (EHR) systems increasingly used at your doctor’s office or local hospital. Electronic reminders, which are similar to the popups that appear when installing software on your computer, flag items for healthcare professionals to consider when they are seeing patients. Depending on the type of reminder used in the EHR—and there are many types—these timely messages may range from a simple prompt to write a prescription to complex recommendations for follow-up testing and specialist referrals.
Chan became interested in this topic when he was a resident at Brigham and Women’s Hospital in Boston, where he experienced the challenges of seeing many patients and keeping up with a deluge of health information in a primary-care setting. He had to write prescriptions, schedule lab tests, manage chronic conditions, and follow up on suggested lifestyle changes, such as weight loss and smoking cessation. In many instances, he says electronic reminders eased his burden and facilitated his efforts to provide high quality care to patients.
Still, Chan was troubled by the lack of quantitative evidence.
Late last month, President Obama unveiled a $215 million Precision Medicine initiative, which has won early bipartisan support. The centerpiece of this proposal is an ambitious effort to integrate disparate clinical datasets to advance science and improve health. The question now is whether the National Institutes of Health officials entrusted to carry out this program will seize this opportunity to leverage the thinking and experiences of the entrepreneurs, engineers, and data scientists from the private sector who have been wrestling these sorts of challenges to the ground. The early indications are encouraging.
(Disclosure/reminder: I work at a cloud-enabled genomic data management company in Mountain View, California.)
Data is the organizing principle of Silicon Valley; the landscape is dotted with companies – from behemoths like Facebook, Google, Salesforce, and Palantir to younger entrants like ours – devoted to collecting, analyzing, and collaborating around huge amounts of data, often enabled by cloud computing.
The same engineers who gave us photo sharing, Angry Birds, and smart thermostats are increasingly bringing their talents to healthcare, trying to enable health data sharing, motivate healthy behaviors, and empower elders living at home alone.
It was 1970. I was in my laboratory at the NIH sequencing a murine myeloma protein in order to define the structure of its antibody combining region. Studies of protein conformation were at the cutting edge of science then; enthusiasm abounded. But it was clear to me that this work, in all its scientific elegance, had little to do with treating myeloma or anything else in mice or man. The reason for all the painstaking effort was the joy of pushing back the frontier of ignorance, even if only a bit. No one could foresee clinical utility then, nor would any become apparent for decades. Today such monoclonal antibodies are widely used to treat many diseases, sometimes with efficacy that justifies their cost.
Genomics is in a bigger hurry.
Thanks to 40 years of breakthroughs, many earning Nobel Prizes, the chromosome carrying the gene defective in Huntington’s disease was identified in 1983, and the gene itself was sequenced a decade later. In short order, defective genes underlying a number of single-gene diseases were defined: cystic fibrosis, hemophilia, and others. We all wait with bated breath for these elegant insights to transform into primary treatments for single-allele genetic diseases. Attempts to transfect patients with normal genes are encouraging, but barely so; it has proved difficult to get the right gene to stay in the right cells. Likewise, directly modifying the abnormal genetic apparatus is still largely just promising. The fallback remains working downstream from the genetic apparatus, replacing or modifying the defective products of many of these pathogenetic genes. Nonetheless, optimism about modifying the genetic apparatus itself is rational, as is ever more boldness on the part of molecular biologists.
Between October 1 and 17, the federal government ceased all nonessential operations because of a partisan stalemate over Obamacare. Although it is premature to declare this the greatest example of misgovernance in modern U.S. Congressional history, this impasse ranks highly.
One casualty of the showdown was any consideration of changes to lessen the impact of the across-the-board sequestration cuts that began on March 1. The cuts have caused economic and other distress across the nation, including serious impacts within the health care sector. Nearly eight months into sequestration, we can move beyond predictions and begin to quantify these effects.
Consider the following impacts of sequestration on Federal health agencies and activities:
NATIONAL INSTITUTES OF HEALTH
- Cuts to the FY13 budget: $1.71 billion, or 5.5%
- A 5.8% cut to the National Cancer Institute, including 6% to ongoing grants, 6.5% to cancer centers, and 8.5% to existing contracts
- A 5.0% cut to the National Institute of General Medical Sciences, and a 21.6% drop in new grant awards
Among the effects:
- 703 fewer new and competing research projects
- 1,357 fewer research grants in total
- 750 or 7% fewer patients admitted to NIH Clinical Center
- $3 billion in lost economic activity and 20,500 lost jobs
- Estimated lost medical and scientific funding in California, Massachusetts, and New York alone of $180, $128, and $104 million respectively.
Dr. Randy Schekman, whose first major grant was from the National Institutes of Health in 1978, said winning this year’s Nobel Prize for Medicine made him reflect on how his original proposal might have fared in today’s depressed funding climate. “It would have been much, much more difficult to get support,” he said. Congresswoman Zoe Lofgren (D-Calif.) noted the irony that because of sequester cuts, NIH funding was reduced for the research that resulted in Yale’s James Rothman sharing in the 2013 Nobel Prize for Medicine.
Much attention has been paid to the government shutdown that started last week. Many of us heard heart-tugging stories on public radio about the NIH closing down new subject enrollment at its “House of Hope,” the clinical trial hospital on the NIH main campus. These stories gave many people the impression that clinical research halted around the country when the federal government failed to approve a Continuing Resolution.
The reality is both less dramatic in the short term and more concerning for the long term. For the most part, federally funded projects at university campuses and hospitals are continuing as usual (or the new “usual,” as reduced by sequestration), because the grants already awarded are like I.O.U.s from the government. By and large, university researchers will keep spending on their funded grants, with the knowledge that reimbursement will come once the government re-opens for business. The universities and hospitals are, in a sense, acting like banks that loan the government money while waiting for these expenses to be reimbursed.
Also, many clinical trials are funded by the pharmaceutical industry, so it is not the case that hospitals are closing their doors to research en masse. But a prolonged shutdown will have lasting and compounding effects on our science pipeline. The U.S. federal government is the single largest funder of scientific research at American universities. Each month, thousands of grant proposals are sent to the various federal funding agencies for consideration.
These in turn are filtered and assigned to peer review committees. The whole process of review, scoring, and funding approval typically takes months, sometimes more than a year.
The shutdown could not stop the rollout of the state and federal exchanges.
That’s because the Obama administration, sensing a political fight in the offing with Republicans, wisely prepaid the bill for the insurance exchanges and other key components of the rollout.
On the other hand, the fiscal standoff is having a very real impact on the infrastructure that supports healthcare across the United States. Agencies from the Centers for Disease Control and Prevention to the National Institutes of Health have seen their money turned off. Others have seen their staffing levels sharply reduced, with non-essential employees furloughed.
It doesn’t take a wild imagination to see potentially deadly consequences if something goes wrong: if, for example, flu season strikes early or a drug recall is needed. Much of the pain will be felt over time. As the shutdown drags on, you can expect problems that are brewing under the surface to become much more visible …
Here’s a review of what’s happening:
Centers for Disease Control and Prevention
Funding for monitoring of disease outbreaks turned off. Lab operations sharply scaled back. 24/7 operations center to remain online. With some scientists predicting a severe 2013-2014 flu season, this is cause for concern …
National Institutes of Health
Enrollment in new clinical trials suspended, impacting thousands of patients suffering from serious diseases. No action on grant proposals. Minimal support for ongoing protocols.
Food and Drug Administration
Food safety inspections sharply cut back. Monitoring of imports eliminated. Oversight of production facilities curtailed, again potentially an issue with flu season on the way. The good news? Because drug approvals are funded by industry “user fees,” FDA approvals of new drugs will continue.
Centers for Medicare & Medicaid Services
Key ACA-related operations intact. The bad news for docs and patients – claims and payment processing are expected to continue, but with slower service than usual. With purse strings tight, this is likely to become more of a problem as the shutdown drags on. In the unlikely event that a shutdown continues for more than a month, the impact on physician practices could be much more serious.
Henrietta Lacks did not give researchers permission to take her cancer cells and study them. After she died in 1951, her family was not asked for permission as her immortalized cells were used in countless laboratories. This month, the National Institutes of Health finally took a step toward righting that wrong, announcing that the Lacks family would help decide who can access Henrietta’s DNA.
Today, getting a patient’s permission, often in writing, is standard in experimental medical research. Well, not always. Currently, there are at least nine ongoing studies involving 62 U.S. cities and towns with a combined population of more than 45 million that do not involve getting permission. They take place during emergencies, such as when ambulances arrive at an accident where patients are too injured to give permission.
For example, imagine this scenario based on a recent study sponsored by the University of Washington. You are involved in a car accident. Paramedics find you bleeding severely. They give you fluids to keep your blood pressure up, but they intentionally give you a bag of fluid that is smaller than the standard. Then they monitor your medical outcome and compare it with patients who received the larger amount of fluids. During the emergency, neither you nor your family know about the study.
Research on medical emergencies is vital in determining how to care for people with life-threatening injuries, because we often do not have proof that standard methods are the best. People involved should be told that this is how their records are being used.
In 1996, the Department of Health and Human Services and the Food and Drug Administration issued regulations allowing research on emergency treatment to occur without permission. For a study to qualify, patients must have a life-threatening condition, current standards of care must be unproven or performing poorly, and obtaining permission must not be feasible (for example, when the patient is unconscious or the patient’s condition does not allow time for informed consent).
A couple of weeks ago, President Obama launched a new open data policy (pdf) for the federal government. Declaring that, “…information is a valuable asset that is multiplied when it is shared,” the Administration’s new policy empowers federal agencies to promote an environment in which shareable data are maximally and responsibly accessible. The policy supports broad access to government data in order to promote entrepreneurship, innovation, and scientific discovery.
If the White House needed an example of the power of data sharing, it could point to the Psychiatric Genomics Consortium (PGC). The PGC began in 2007 and now boasts 123,000 samples from people with a diagnosis of schizophrenia, bipolar disorder, ADHD, or autism and 80,000 controls collected by over 300 scientists from 80 institutions in 20 countries. This consortium is the largest collaboration in the history of psychiatry.
More important than the size of this mega-consortium is its success. There are perhaps three million common variants in the human genome. Amidst so much variation, it takes a large sample to find a statistically significant genetic signal associated with disease. Showing a kind of “selfish altruism,” scientists began to realize that by pooling data, combining computing efforts, and sharing ideas, they could detect the signals that had been obscured by lack of statistical power. In 2011, with 9,000 cases, the PGC was able to identify 5 genetic variants associated with schizophrenia. In 2012, with 14,000 cases, they discovered 22 significant genetic variants. Today, with over 30,000 cases, over 100 genetic variants are significant. None of these alone is likely to be a genetic cause of schizophrenia, but they define the architecture of risk and collectively could be useful for identifying the biological pathways that contribute to the illness.
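The arithmetic behind that lack of statistical power is worth making concrete. Here is a minimal sketch, using the roughly three-million-variant figure from the paragraph above and a standard Bonferroni-style correction for multiple testing (the specific correction method is my illustration, not a claim about the PGC's actual analysis):

```python
# Why pooling data matters: with ~3 million variants tested, a naive
# p < 0.05 cutoff would flag enormous numbers of variants by chance alone,
# so each individual test must clear a far stricter threshold.
n_tests = 3_000_000          # approximate number of common variants tested
naive_alpha = 0.05           # conventional single-test significance level

# Expected false positives if every test used p < 0.05:
expected_false_positives = n_tests * naive_alpha
print(expected_false_positives)   # 150000.0

# Bonferroni-corrected per-test threshold:
corrected_alpha = naive_alpha / n_tests
print(corrected_alpha)            # ~1.7e-08
```

Clearing a threshold near 10⁻⁸ for variants of modest effect requires tens of thousands of cases, which is why the significant signals emerged only as the consortium's pooled sample grew.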
We are seeing a similar culture change in neuroimaging. The Human Connectome Project is scanning 1,200 healthy volunteers with state-of-the-art technology to define variation in the brain’s wiring. The imaging data, cognitive data, and de-identified demographic data on each volunteer are available, along with a workbench of web-based analytical tools, so that qualified researchers can obtain access and interrogate one of the largest imaging data sets anywhere. How exciting to think that a curious scientist with a good question can now explore a treasure trove of human brain imaging data—and possibly uncover an important aspect of brain organization—without ever doing a scan.
A useful and well-written summary of open access to publications in the medical field triggered some thoughts I’d like to share. The thrust of the article was that doctors need more access to a wide range of journal publications in order to make better decisions. The article also praises NIH’s open access policy, which has inspired the NSF and many journals.
My additional points are:
- Open publication adds to the flood of information already available to most doctors, placing a burden on them to search and filter it. IBM’s Watson is one famous attempt to approach the ideal where the doctor would be presented, right at the point of care, with exactly the information he or she needs to make a better decision. Elsewhere, I have reported on a proposal to help expert doctors filter and select the important information and provide it to their peers upon demand–a social networking approach to evidence-based medicine.
- Not only published papers, but the data that led to those research results should be published online, to help researchers reproduce the results and build on them to make new discoveries. I reported in an earlier article on this site about the work of Sage Bionetworks to get researchers to open their data. Of course, putting up raw data leaves many challenges: one has to be careful to de-identify it according to accepted standards. One has to explain the provenance of the data carefully: how it was collected and massaged (because data sets always require some culling and error-correction) so it can be understood and properly reused. Finally, combining different data sets is always difficult because they are collected under different conditions and with different assumptions.
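To make the first of those challenges concrete, here is a minimal, hypothetical sketch of stripping direct identifiers from tabular data before sharing it. The column names and records are invented for illustration; real de-identification must follow accepted standards such as the HIPAA Safe Harbor rules, which also cover quasi-identifiers like dates and zip codes:

```python
# Hypothetical sketch only: dropping direct-identifier columns before
# publishing raw data. Column names are invented; this is NOT a complete
# de-identification procedure.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}

def strip_identifiers(rows):
    """Return copies of each record with direct-identifier fields removed."""
    return [
        {field: value for field, value in row.items()
         if field not in DIRECT_IDENTIFIERS}
        for row in rows
    ]

raw = [{"name": "Jane Doe", "ssn": "000-00-0000", "age": "54", "dx": "T2D"}]
print(strip_identifiers(raw))   # [{'age': '54', 'dx': 'T2D'}]
```

Even then, the harder problems above remain: documenting provenance and reconciling data sets collected under different conditions cannot be solved by dropping columns.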
My job and my life intersected in a profound way when my daughter was diagnosed with Type I diabetes. Years working in mobile innovation didn’t prepare me for how personally relevant mHealth so quickly became. Her clinical trial at Stanford University, supported by the National Institutes of Health through Congress’ Special Diabetes Program, featured a world-class endocrinologist working alongside software coders, applications developers, algorithm writers, network engineers and other mobile innovators. They were all pushing together for what could be a revolution in diabetes management—the artificial pancreas.
Recently I had the opportunity to talk about my daughter’s experience and share my thoughts on how government can help encourage the next wave of mHealth innovation, when I was invited to testify before Congress on mobile innovation and health care.
America’s leadership in the mobile economy — 40,000 apps and counting in the broad mHealth category — matches America’s leadership at the cutting edge of medical technology.
Mobile devices, wireless networks and targeted applications are enabling better, more seamless and cost-effective care that empowers and informs stakeholders on both sides of the stethoscope.
The virtuous cycle of investment in the mobile ecosystem — from networks, to handsets and tablets, to applications — provides an unparalleled foundation for dramatic advances in the nation’s health and wellness. My message to Congress was to lean in and strike a reasonable and circumspect balance that both protects patient safety and privacy and propels the dramatic, mobile-fueled advances we are seeing through American medicine today.