You might have missed it amongst all the headlines about the U.S.P.S., the 2020 elections, and, of course, that little thing we call the pandemic, but Fortnite got kicked off Apple’s App Store (and subsequently Google Play).
I’m not a gamer, but I am fascinated by gaming, because, as Steven Johnson put it, “The Future is where people are having the most fun.” Tim Sweeney, the founder and CEO of Epic Games, Inc., which makes Fortnite, seems to be having a lot of fun. And he thinks the future is the Metaverse.
Healthcare, take note.
The tech giants were reacting to Epic offering “permanent discounts” on in-game purchases made directly, rather than through Apple or Google, thereby letting developers avoid the 30% commission those stores charge. Mr. Sweeney has been railing against the commission level for some time, which led to the recent standoff. Apple explained in a statement:
Today, Epic Games took the unfortunate step of violating the App Store guidelines that are applied equally to every developer and designed to keep the store safe for our users. As a result their Fortnite app has been removed from the store. Epic enabled a feature in its app which was not reviewed or approved by Apple, and they did so with the express intent of violating the App Store guidelines regarding in-app payments that apply to every developer who sells digital goods or services.
If you’re lucky, you’ve been working from home these past couple of months. That is, you’re lucky you’re not one of the 30+ million people who have lost their jobs due to the pandemic. That is, you’re lucky you’re not an essential worker whose job has required you to risk exposure to COVID-19 by continuing to go into your workplace.
What’s interesting is that many of the stay-at-home workers, and the companies they work for, are finding it a surprisingly suitable arrangement. And that has potentially major implications for our society, and, not coincidentally, for our healthcare system.
Twitter was one of the first to announce that employees could keep working from home indefinitely. “Opening offices will be our decision; when and if our employees come back will be theirs,” a company spokesperson wrote in a blog post. “So if our employees are in a role and situation that enables them to work from home and they want to continue to do so forever, we will make that happen.”
Other tech companies are also letting the work-from-home experiment continue. According to The Washington Post, Amazon and Microsoft have told such workers they can keep working from home until at least October, while Facebook and Google say at least until 2021. Microsoft president Brad Smith observed: “We found that we can sustain productivity to a very high degree with people working from home.”
In this post, I write down all my strategy and business development knowledge in healthcare and organize it into the top 9 commandments for selling as a healthcare startup. I think everyone from the founder to the most junior person on the team should know these commandments, because all startups must grow. I should also note these tenets are most applicable for selling into large enterprise healthcare incumbents (e.g., payers, providers, medical device and drug companies). Although I appreciate the direct-to-consumer game, these principles are less applicable in that domain. If your startup needs help developing or implementing your business development strategy, shoot me an email and we can discuss a potential partnership. Enjoy!
1. Understand Everything About the Product and Market
You must also understand the competitive landscape: who else is in the marketplace, and how do they differentiate themselves? What has been their preferred go-to-market approach, and is your startup capable of replicating a similar strategy with your current team members? Also, do you understand the federal and state policy that most affects your vertical, whether that be pharmaceuticals or medical devices (e.g., FDA), health plans (e.g., state insurance commissioners), or providers (e.g., CMS)? For example, if your company is focused on “value-based care” and shifting physicians’ payment structures toward downside risk, do you intimately understand the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) and the requisite CMS demonstration models from the Innovation Center (e.g., MSSP, BPCI-A)? Make sure you do, or at least hire someone who can explain what is important now and in the future.
We must ensure their relevance to contemporary patient care
By LONNY REISMAN, MD
It’s 1992 and disruptive technologies of the day are making headlines: AT&T releases the first color videophone; scientists start accessing the World Wide Web; Apple launches the PowerBook Duo.
In healthcare, with less fanfare, a Harvard physician named Dr. Burton “Bud” Rose converts his entire nephrology textbook onto a floppy disk, launching the clinical tool that would ultimately become UpToDate. Instead of flipping through voluminous medical reference texts, such as the Washington Manual, doctors could for the first time input keywords to find the clinical guidance they needed to make better treatment decisions.
The medical community embraced UpToDate’s unique ability to put knowledge at their fingertips. Today more than 1.7 million clinicians around the world use UpToDate to provide evidence-based patient care with confidence. For many, it, along with other reference sources, has become foundational to providing high-quality medical care.
More than just an easy-to-use reference, UpToDate has gone on to improve patient outcomes, according to the Journal of Hospital Medicine.
In the new era of 21st century digital medicine, however, there are opportunities to go further in support of clinicians and patients. Reference tools must be powered by predictive and prescriptive analytics, be personalized to individual patient circumstances, and be integrated into clinician workflow. In some cases, clinicians may be unaware of which questions to ask of a computerized reference manual, or how to incorporate the nuances of an individual patient’s case into the general insights of a reference. Searching for heart failure treatment, for example, may be too broad a query and the resulting recommendations therefore may not provide optimal care for a specific patient’s unique medical circumstances. New digital health solutions that consider patients’ co-illnesses, contraindications, symptomatology, current treatment regimens, and hereditary risks are essential.
By ROBERT C. MILLER, JR. and MARIELLE S. GROSS, MD, MBE
This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.
The problem with porridge
Today, we regularly hear stories of research teams using artificial intelligence to detect and diagnose diseases earlier, with more accuracy and speed than a human would have ever dreamed of. Increasingly, we are called to contribute to these efforts by sharing our data with the teams crafting these algorithms, sometimes by healthcare organizations appealing to altruistic motivations. A crop of startups has even appeared to let you monetize your data to that end. But given the sensitivity of your health data, you might be skeptical of this, doubly so when you take into account tech’s privacy track record. We have begun to recognize the flaws in our current privacy-protecting paradigm, which relies on thin notions of “notice and consent” that inappropriately place the responsibility of data stewardship on individuals who remain extremely limited in their ability to exercise meaningful control over their own data.
Emblematic of a broader trend, the “Health Data Goldilocks Dilemma” series calls attention to the tension and necessary tradeoffs between privacy and the goals of our modern healthcare technology systems. Not sharing our data at all would be “too cold,” but sharing freely would be “too hot.” We have been looking for policies “just right” to strike the balance between protecting individuals’ rights and interests while making it easier to learn from data to advance the rights and interests of society at large.
What if there was a way for you to allow others to learn from your data without compromising your privacy?
To date, a major strategy for striking this balance has involved the practice of sharing and learning from deidentified data—by virtue of the belief that individuals’ only risks from sharing their data are a direct consequence of that data’s ability to identify them. However, artificial intelligence is rendering genuine deidentification obsolete, and we are increasingly recognizing a problematic lack of accountability to individuals whose deidentified data is being used for learning across various academic and commercial settings. In its present form, deidentification is little more than a sleight of hand to make us feel more comfortable about the unrestricted use of our data without truly protecting our interests. More of a wolf in sheep’s clothing, deidentification is not solving the Goldilocks dilemma.
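To see why deidentification offers such thin protection, consider a classic linkage attack: removing names does little when quasi-identifiers such as zip code, birth year, and sex can be joined against a public dataset. The sketch below uses entirely fabricated records and names to illustrate the mechanics.

```python
# Toy linkage attack: "deidentified" medical records are re-identified by
# joining on quasi-identifiers (zip code, birth year, sex). All data below
# is fabricated for illustration.

deidentified_records = [
    {"zip": "02138", "birth_year": 1960, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "birth_year": 1985, "sex": "M", "diagnosis": "asthma"},
]

# A public auxiliary dataset (e.g., a voter roll) that carries names.
voter_roll = [
    {"name": "Alice Example", "zip": "02138", "birth_year": 1960, "sex": "F"},
    {"name": "Bob Example", "zip": "10001", "birth_year": 1970, "sex": "M"},
]

def reidentify(records, auxiliary):
    """Match records to named individuals on shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in auxiliary:
            if all(rec[k] == person[k] for k in ("zip", "birth_year", "sex")):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

# A single unique match links a name back to a diagnosis.
print(reidentify(deidentified_records, voter_roll))
```

The point is not that any one dataset is careless, but that the combination of datasets defeats deidentification; no consent was violated at any step, yet a diagnosis is now attached to a name.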
Tech to the rescue!
Fortunately, there are a handful of exciting new technologies that may let us escape the Goldilocks Dilemma entirely by enabling us to gain the benefits of our collective data without giving up our privacy. This sounds too good to be true, so let us explain the three most revolutionary ones: zero-knowledge proofs, federated learning, and blockchain technology.
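As a preview of the second of these, here is a minimal sketch of federated learning in the federated-averaging (FedAvg) style: each site trains on its own data and shares only model parameters, never raw records. The one-parameter model, the sites, and the data are illustrative assumptions, not any production system.

```python
# Minimal federated averaging sketch: two hypothetical sites fit y = w * x
# without ever pooling their raw data. Only the weight w crosses site
# boundaries.

def local_update(weight, data, lr=0.1):
    """One pass of gradient descent on y = w * x, using local data only."""
    for x, y in data:
        grad = 2 * (weight * x - y) * x   # d/dw of squared error (w*x - y)^2
        weight -= lr * grad
    return weight

def federated_round(global_weight, client_datasets):
    """Each client refines the global weight locally; the server averages."""
    local_weights = [local_update(global_weight, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Two sites whose (fabricated) data is consistent with y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w = 0.0
for _ in range(20):
    w = federated_round(w, [site_a, site_b])
print(round(w, 3))  # converges toward 2.0
```

A real deployment would average gradients over deep networks and add protections such as secure aggregation, but the privacy shape is the same: the learning travels, the data stays home.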
You might not know it yet, but there’s a revolution coming
While digitization has driven innovation across the healthcare sector, the advent of 5G is set to spark a fourth industrial revolution.
3G and 4G networks enabled large-scale change and rapid modernization. However, 5G delivers what these networks could not: blazing speeds and ultra-low latencies that permit enormous data transfers between devices in near-real time. That means that technologies like artificial intelligence, machine learning and augmented reality will be capable of transforming the industry as we know it.
Whether it’s strengthening telemedicine connections, implementing new teaching methods at medical schools, or connecting large hospitals and clinics, 5G-powered technologies will open the door for innovation in healthcare.
At long last, we seem to be on the threshold of departing the earliest phases of AI, defined by the always tedious “will AI replace doctors/drug developers/occupation X?” discussion, and are poised to enter the more considered conversation of “Where will AI be useful?” and “What are the key barriers to implementation?”
As I’ve watched this evolution in both drug discovery and medicine, I’ve come to appreciate that in addition to the many technical barriers often considered, there’s a critical conceptual barrier as well – the threat some AI-based approaches can pose to our “explanatory models” (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula here): our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. In contrast, AI often relates imperceptible observations to outcome in a fashion that’s unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.
In an effort to help women make informed decisions about where to deliver their babies, we set out to collect a comprehensive, nationwide database of hospitals’ C-section rates. Knowing that the federal government mandates surveillance and reporting of vital statistics through the National Vital Statistics System, we contacted all 50 states’ (+Washington D.C.) Departments of Public Health (DPH) asking for access to de-identified birth data from all of their hospitals. What we learned might not surprise you — the lack of transparency in the United States healthcare system extends to quality information, and specifically C-section data.
Adoption of technology in the healthcare field has been happening at an incredibly slow pace. Few would disagree with that. The market is saturated with health tech companies vying to be the next big unicorn in the field, but long sales cycles and simple underestimation of what HIPAA compliance and FDA approval require have led to the demise of many of these projects. The ones that do receive enough funding to produce polished products for health systems and pharmaceutical companies, however, soon realize that the battle against time is not over.
Simply getting into a health system is not enough. Once a contract is finally ironed out and the software is delivered, the next uphill battle, against the slow pace of internal adoption, begins. Speedy adoption is important both for hospitals, which need to demonstrate that their purchases and investments were appropriate, and for founders, who hope to demonstrate that their product works. Nothing is worse than the painfully slow internal adoption of a piece of technology, and one bad experience has the potential to tarnish an organization’s appetite for future tech ventures.
On July 24, the new administration kicked off its version of interoperability work with a public meeting of the incumbent trust brokers. It invited the usual suspects: Carequality, the CARIN Alliance, CommonWell, Digital Bridge, DirectTrust, eHealth Exchange, NATE, and SHIEC, with the goal of understanding how these groups will work with each other to solve information blocking and deliver longitudinal health records, as mandated by the 21st Century Cures Act.
Of the eight would-be trust brokers, some go back to 2008, but only one is contemporary with the 21st Century Cures Act: the CARIN Alliance. The growing list of trust brokers over our decade of digital health tracks with the growing frustration of physicians, patients, and Congress over information blocking, but is there causation beyond mere correlation?
One way to get data to move is open APIs, which the 21st Century Cures Act mandates by tasking EHR vendors to open up patient data “without special effort, through the use of application programming interfaces.”
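To make “without special effort” concrete: open health-data APIs in practice exchange resources as JSON over HTTP, in the style of HL7 FHIR. The sketch below parses a minimal Patient resource the way a third-party app might after fetching it from such an API; the resource content and identifiers are fabricated for illustration.

```python
# Parse a minimal (fabricated) FHIR-style Patient resource, as a third-party
# app would after a GET /Patient/{id} request against an open API.
import json

fhir_response = """
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Example", "given": ["Pat"]}],
  "birthDate": "1970-01-01"
}
"""

patient = json.loads(fhir_response)
assert patient["resourceType"] == "Patient"

# FHIR represents names as a list of structured HumanName entries.
full_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(full_name, patient["birthDate"])  # Pat Example 1970-01-01
```

The significance is that any authorized app can consume this format with commodity tools, which is exactly the “without special effort” bar the statute sets for EHR vendors.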