Abbott Ventures chief Evan Norton may have spent part of his youth on a farm, but there’s no manure in his manner when speaking of the medical device and diagnostics market landscape. The key, he says, is to avoid being blindsided by the transformational power of digital data.
“We’ve been competing against Medtronic and J&J, so that has the risk of us being disintermediated by other players that come into the market,” Norton told attendees at MedCity Invest, a meeting focused on health care entrepreneurs. “Physicians are coming to us and asking for access to data for decisions, and they don’t care who the manufacturer [of the device] is. Are we enabling data creation?”
Abbott, said Norton, wrestles with whether it is simply a data creator or wants to get paid for providing algorithmic guidance on how the data is used. (Full disclosure: I own Abbott shares.) Other panelists agreed that making sense of the digital data deluge remains the central business challenge.
Senate leaders now say they won’t consider companion legislation to the House-passed 21st Century Cures Act until September, after months of delay. Lawmakers would then have to reconcile the differing House and Senate versions, presumably by year’s end during a lame-duck Congress.
We believe the summer delay is a good thing, and that Congress should actually extend consideration of the complex legislation into 2017, when must-pass FDA funding through industry user fees will be on the congressional calendar. That way, lawmakers can debate the implications of the proposed bills in the context of the resources FDA needs.
Why further delay? Because the legislation—which makes substantial changes to the way the Food and Drug Administration (FDA) approves drugs and devices—is flawed. As currently crafted, it lowers standards for drug and device approvals and safety, and risks adding to the rising cost of prescription drugs.
The ostensible rationale for the legislation—being pushed by drug and device companies—is that the FDA stifles innovation and advances in treatment by approving drugs and devices too slowly compared with other countries.
After decades of bravely keeping them at bay, health care is beginning to be overwhelmed by “fast, cheap, and out of control” new technologies, from BYOD (“bring your own device”) tablets in the operating room, to apps and dongles that turn your smartphone into a Star Trek tricorder, to 3-D printed skulls. (No, not a souvenir of the Grateful Dead, a Harley decoration, or a pastry for the Mexican Día de los Muertos, but an actual skullcap to repair someone’s head. Take measurements from a scan, set to work in a CAD/CAM program, press Cmd-P, and boom! There you have it: a new ear-to-ear skull top, ready for implant.)
Each new category, we are told, will Revolutionize Health Care, making it orders of magnitude better and far less expensive. Yet the experience of the last three decades is that each new technology only adds complexity and expense.
So what will it be? Will some of these new technologies actually transform health care? Which ones? How can we know?
There is an answer, but it does not lie in the technologies. It lies in the economics. It lies in the reason we have so much waste in health care. We have so much waste because we get paid for it.
Yes, it’s that simple. In an insurance-supported fee-for-service system, we don’t get paid to solve problems. We get paid to do stuff that might solve a problem. The more stuff we do, and the more complex the stuff we do, the more impressive the machines we use, the more we get paid.
A Tale of a Wasteful Technology
A few presidencies back, I was at a medical conference at a resort on a hilltop near San Diego. I was invited into a trailer to see a demo of a marvelous new technology: computer-aided mammography. I had never even taken a close look at a mammogram, so I was immediately impressed with how difficult it is to pick possible tumors out of the cloudy images. The computer could show you the possibilities, easy as pie, drawing little circles around each suspicious nodule.
But, I asked, will people trust a computer to do such an important job?
Oh, the computer is just helping, I was told. All the scans will be seen by a human radiologist. The computer just makes sure the radiologist does not miss any possibilities.
I thought, Hmmm, if you have a radiologist looking at every scan anyway, why bother with the computer program? Are skilled radiologists in the habit of missing a lot of possible tumors? From the sound of it, I thought what we would get is a lot of false positives, unnecessary call-backs and biopsies, and a lot of unnecessarily worried women. After all, if the computer says something might be a tumor, now the radiologist is put in the position of proving that it isn’t.
I didn’t see any reason that this technology would catch on. I didn’t see it because the reason was not in the technology, it was in the economics.
Years later, as the industry trends toward standardizing on this technology, the results of various studies have shown exactly what I suspected they would: lots of false positives, call-backs, and biopsies, and not one tumor that would not have been found without the computer. Not one. At an added cost trending toward half a billion dollars per year.
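The false-positive arithmetic here is easy to see with a quick base-rate calculation. The numbers below are purely illustrative, not from any study: assume roughly 5 cancers per 1,000 screens, and suppose the computer's prompts push the radiologist's false-positive rate up by even one percentage point.

```python
# Illustrative base-rate arithmetic for screening false positives.
# All numbers are hypothetical, chosen only to show what a small
# specificity drop does to a low-prevalence screened population.

screens = 1_000_000          # annual screens in some population (assumed)
prevalence = 0.005           # ~5 cancers per 1,000 screens (assumed)
extra_fp_rate = 0.01         # computer flags an extra 1% of healthy scans (assumed)

cancers = screens * prevalence
healthy = screens - cancers
extra_false_positives = healthy * extra_fp_rate

print(f"cancers in population: {cancers:,.0f}")
print(f"extra false positives: {extra_false_positives:,.0f}")
```

Under these assumed numbers, a one-point specificity drop generates nearly 10,000 extra call-backs per million screens, roughly two for every true cancer present. Because the disease is rare, almost everything the extra flags catch is healthy tissue.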
In an era of sophisticated information technology and rapid communication, the medical device community lags far behind other fields in its ability to alert patients about safety concerns.
For example, auto manufacturers and government regulators are able to quickly identify potential safety concerns by linking reports of crashes, malfunctions and defects with individual vehicle identification numbers (VINs).
They can then communicate recalls to affected customers by using their VIN. Manufacturers will issue notifications via mail or e-mail, or offer customers the ability to search the manufacturer’s website using their VIN.
In health care, drugs are tracked using a system established in the 1970s called National Drug Codes (NDCs). A 10-digit NDC is assigned to every manufactured medication. The code identifies the vendor (labeler), product, and package, and can be captured in electronic health records and the FDA’s national database.
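As a sketch of how the segmented code works: a 10-digit NDC is written in one of three segment patterns (4-4-2, 5-3-2, or 5-4-1 digits for labeler, product, and package), and systems commonly normalize it to a uniform 11-digit 5-4-2 form by zero-padding the short segment. A minimal, illustrative normalizer (the example codes are made up, not real products):

```python
def normalize_ndc(ndc: str) -> str:
    """Convert a hyphenated 10-digit NDC (4-4-2, 5-3-2, or 5-4-1)
    to the zero-padded 11-digit 5-4-2 form used in many billing systems."""
    labeler, product, package = ndc.split("-")
    # Zero-pad each segment to its 5-4-2 target width, then concatenate.
    return f"{labeler:0>5}{product:0>4}{package:0>2}"

# Made-up example codes, one per segment pattern:
print(normalize_ndc("1234-5678-90"))   # 4-4-2 -> 01234567890
print(normalize_ndc("12345-678-90"))   # 5-3-2 -> 12345067890
print(normalize_ndc("12345-6789-0"))   # 5-4-1 -> 12345678900
```

The padding matters because the same 11-digit string can only be recovered unambiguously if each segment is padded in place; stripping the hyphens first would lose which segment was short.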
Unfortunately, we do not yet have a similar national system that can identify and communicate potential concerns for the tens of millions of patients with implantable devices such as pacemakers, glucose meters, artificial joints, and defibrillators.
Patients are bombarded by news stories about device recalls, but unless they have access to information about the exact make and model of their device, they have no way of knowing if they should be concerned. Since most medical device procedures take place in a hospital, a patient’s health care providers may also lack this critical, sometimes life-saving information.
The patient is then burdened with the task of tracking down their specialist or surgeon, in hopes that they documented the specific device information.
Clearly, the current health care information infrastructure does not yet support a robust surveillance system.
The United States spends more than $2 trillion per year on health care, surpassing all other countries in per capita terms and as a percentage of gross domestic product.
New, expensive medical technologies are a leading driver of ballooning U.S. health care spending. While many new drugs and devices are worthwhile because they substantially extend lives and reduce suffering, many others provide little or no health benefit.
Many studies grapple with how to control spending by considering changing how existing technologies are used. But what if the problem could be attacked at its root by changing which drugs and devices are invented in the first place?
Recently, my colleagues and I explored how medical product innovation could be redirected to reduce spending with little, if any, sacrifice in health, and to ensure that any spending increases are justified by sufficient health benefits.
The basic approach is to use “carrots and sticks” to alter financial incentives for drug and device companies, their investors, health care payers and providers, and patients.
The ten policy options below could change which technologies are invented and how they’re used. In turn, this could cut spending or increase the value (health benefits per dollar spent) derived from new products that do increase spending.
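The “value” yardstick in that sentence can be made concrete with the standard cost-effectiveness calculation: the incremental cost-effectiveness ratio (ICER), the extra dollars spent per extra unit of health gained, often measured in quality-adjusted life years (QALYs). The figures below are invented purely for illustration:

```python
def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars spent
    per extra QALY gained by the new technology over the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical device: costs $12,000 more and adds 0.3 QALYs per patient.
value = icer(cost_new=20_000, cost_old=8_000, qaly_new=1.5, qaly_old=1.2)
print(f"${value:,.0f} per QALY")  # roughly $40,000 per QALY
```

Whether $40,000 per QALY counts as good value is a policy judgment; commonly cited U.S. thresholds range from about $50,000 to $150,000 per QALY. A product that adds cost without adding QALYs has an unbounded ICER, which is exactly the kind of invention the policy options aim to discourage.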
We urge policymakers—both public and private—to consider these options soon and to implement those that are most promising. Policymakers should also consider how to reduce spending and get more value from health services that don’t involve drugs or devices.
The longer the delay, the more money will be badly spent.
1. Encourage Creativity in Funding Basic Science
The National Institutes of Health (NIH), the leading funder of basic biomedical research, typically favors low-risk projects. Funded researchers who fail to achieve their goals are much less likely to secure additional NIH funding. Encouraging more creativity and risk-taking could increase major breakthroughs.
2. Reward Inventors with Prizes
Public entities, private health care systems, the philanthropic sector, or public-private partnerships could award prizes to the first to invent drugs or devices that satisfy certain performance criteria, including a potential to decrease spending. Winners could receive a share of future savings that their product brings the Medicare program, which spends more than $500 billion annually.
Scott Erven is head of information security for a healthcare provider called Essentia Health, and his Friday presentation at Chicago’s Thotcon, “Just What The Doctor Ordered?” is a terrifying tour through the disastrous state of medical device security.
Wired’s Kim Zetter summarizes Erven’s research, which ranges from the security of implanted insulin pumps and defibrillators to surgical robots and MRIs. Erven and his team discovered that hospitals are full of fundamentally insecure devices, and that these insecurities are not obscure bugs buried deep in a codebase (as with the disastrous Heartbleed vulnerability) but incredibly simple, easy-to-discover mistakes, such as hardcoded default passwords.
For example: Surgical robots have their own internal firewall. If you run a vulnerability scanner against that firewall, it just crashes, and leaves the robot wide open.
The backups for image repositories for X-rays and other scanning equipment have no passwords. Drug-pumps can be reprogrammed over the Internet with ease. Defibrillators can be made to deliver shocks — or to withhold them when needed.
Doctors’ instructions to administer therapies can be intercepted and replayed, adding them to other patients’ records.
You can turn off the blood fridge, crash life-support equipment and reset it to factory defaults. The devices themselves are all available on the whole hospital network, so once you compromise an employee’s laptop with a trojan, you can roam free.
You can change CT scanner parameters and cause them to over-irradiate patients.
The Food and Drug Administration has spent decades refining its processes for approving drugs and devices (and is still refining them), so what would happen if they extended their scope to the exploding health software industry?
The FDA, and its parent organization, the Department of Health and Human Services, are facing an unpleasant and politically difficult choice.
Sticking regulatory fences into the fertile plains of software development and low-cost devices will arouse their untamed denizens, who are already lobbying Congress to warn the FDA about overreaching. But to abandon the field is to leave patients and regular consumers unprotected. This is the context in which the Food and Drug Administration and the Office of the National Coordinator for Health IT (ONC), after consultation with outside stakeholders, released a recent report on Health IT.
I myself was encouraged by the report. It brings together a number of initiatives that have received little attention and, just by publicizing the issues, places us one step closer to a quality program. Particular aspects that pleased me are:
A call for transparent reporting and sharing of errors, including the removal of “disincentives to transparent reporting”–i.e., legal threats by vendors (p. 25). Error reporting is clearly a part of the “environment of learning and continual improvement” I mentioned earlier. A regulation subgroup stated the need most starkly: “It is essential to improve adverse events reporting, and to enable timely and broader public access to safety and performance data.” Vague talk of a Health IT Safety Center (p. 4, pp. 14-15) unfortunately seems to stop with education, lacking enforcement. I distinctly disagree with the assessment of two commentators who compared the Health IT Safety Center to the National Transportation Safety Board and assigned it some potential power. However, I will ask ONC and FDA for clarification.
A recognition that software is part of a larger workflow and social system, that designing it to meet people’s needs is important, and that all stakeholders should have both a say in software development and a responsibility to use it properly.
In the New York Times on Thursday, October 17, Topher Spiro wrote an important op-ed explaining why we need to hold onto the medical device tax that helps pay for parts of the Affordable Care Act. Spiro backs up his argument by pointing out how profitable the device industry is. To his argument I would add that the law will also provide the industry with more paying customers. Certainly it can afford to pay the taxes.
But I diverge from Spiro on a proposal he floated near the end of his piece:
“To complement these efforts, the new Patient-Centered Outcomes Research Institute [PCORI], a non-governmental body created by the Affordable Care Act, should pay for research that compares the effectiveness of devices so physicians can make informed choices. (Three years into its existence, the institute has initiated few, if any, studies of medical devices.)”
Listen to me, PCORI. Don’t follow this advice, unless you plan not to survive to celebrate your fourth birthday.
Consider what happened to the Agency for Health Care Policy and Research (AHCPR) when it tried to help physicians figure out the best way to treat low back pain. AHCPR was created as a stand-alone research institute, akin to the NIH, but one that would focus not on the basic science of treating disease, but instead on evaluating how well existing treatments worked.
NEHI recently convened a meeting on health care innovation policy at which the Harvard economist David Cutler noted that debate over innovation has shifted greatly in the last decade. Not that long-running debates about the FDA, regulatory approvals, and drug and medical device development have gone away: far from it.
But these concerns are now matched or overshadowed by demands for proven value, proven outcomes and, increasingly, the Triple Aim, health care’s analog to the “faster, better, cheaper” goal associated with Moore’s Law.
To paraphrase Cutler, the market is demanding that cost come out of the system, that patient outcomes be held harmless if not improved, and it is demanding innovation that will do all this at once. Innovation in U.S. health care is no longer just about meeting unmet medical need. It is about improving productivity and efficiency as well.
In this new environment it’s the science-driven innovators (the pharma, biotech, and medtech people) who seem like the old school players, despite their immersion in truly revolutionary fields such as genomic medicine. It’s the tech-driven innovators (the health care IT, predictive analytics, process redesign, practice transformation, and mobile health people) who are the cool kids grabbing the attention and a good deal of the new money.
To make matters worse for pharma, biotech and medtech, long-held assumptions about our national commitment to science-driven innovation seem to be dissolving. There’s little hope for reversing significant cuts to the National Institutes of Health. User fee revenues painstakingly negotiated with the FDA just last year have only barely escaped sequestration this year. Bold initiatives like the Human Genome Project seem a distant memory; indeed, President Obama’s recently announced brain mapping project seems to barely register with the public and Congress.
With the announcement that the FDA granted 510(k) clearance for the AliveCor EKG case for the iPhone 4/4s, the device became available to “licensed U.S. medical professionals and prescribed patients to record, display, store, and transfer single-channel electrocardiogram (ECG) rhythms.”
While this sounds nice, how, exactly, does one become a “prescribed patient?” Once a doctor “prescribes” such a device, what are his responsibilities? Does this obligate the physician to 24/7/365 availability for EKG interpretations? How are HIPAA-compliant tracings sent between doctor and patient? How are the tracings and medical care documented in the (electronic) medical record? What are the legal risks to the doctor if the patient transmits OTHER patients’ EKGs to OTHER people, non-securely?
At this point, no one knows. We are entering into new, uncharted medicolegal territory.
But the legal risks for prescribing a device to a patient are, sadly, probably real, especially since the FDA has now officially sanctioned this little iPhone case as a real, “live” medical device. But I must say, I am not a legal expert in this area and would defer to others with more legal expertise to comment on these thorny issues.
This issue came up because a patient saw the device demonstrated in my office and wanted me to prescribe it for them. So I sent AliveCor’s Dr. Dave Alpert a tweet and later received this “how to” e-mail response from their support team: