In the battle over health care that lies ahead, how strongly will the public rally around the need for innovation in confronting health care costs? Does the public view innovation as relevant to the challenge in the first place?
These aren’t idle questions. The news that growth in overall national health care spending has been moderating has raised speculation that innovations in payment and health care delivery are already paying off, notwithstanding the unquestioned impact of the Great Recession.
Looking ahead, uncertainty over the fate of the Affordable Care Act and the likelihood of federal budget cuts yet to come have many fearing that innovations will be vulnerable. And it is not just federal spending that will be at risk. Hospitals and health plans will all be watching their margins carefully to assess how far and how fast they can keep making investments that support innovation (such as investments in healthcare IT, analytics, and care coordination) but that may take months or years to generate a return.
All of which places the role of innovation in controlling costs center stage. After all, this is what undergirds the Triple Aim that so many health care leaders have embraced as the only realistic alternative to arbitrary cutbacks in health care services and spending. Health care leaders can defend innovation if they have public support. But do they?
As an ever-increasing amount of money seems determined to chase an ever-greater number of questionable ideas, it’s perhaps not surprising that inquiring minds want to know: (1) Are we really in a tech bubble? (2) If so, when will it pop? (3) What should I do in the meantime?
I’m not sure about Question 1: I’ve heard some distinguished valley wags insist we’re not in a tech bubble, and that current valuations are justified, but I also know many technology journalists feel certain the end is nigh, and view the bubble as an established fact of life – see here and here. The surge of newly minted MBAs streaming to start-ups has been called out as a likely warning sign of the upcoming apocalypse as well.
I have the humility to avoid Question 2: as Gregory Zuckerman reviews in The Greatest Trade Ever, even if you’re convinced you’re in a bubble, and you’re right, the real challenge is figuring out when to get out. Isaac Newton discovered this the hard way in the South Sea Bubble, leading him to declare, “I can calculate the motions of heavenly bodies but not the madness of people.”
I do have a thought about Question 3, however – what to do: reconsider digital health — serious digital health.
Here’s why: Instagram and similar apps are delightful, but hardly essential; most imitators and start-ups inspired by their success are neither. It doesn’t strain credulity to imagine investors in these sorts of companies waking up one day and experiencing their own Seinfeld moment, as it occurs to them they’ve created a portfolio built around nothing.
It didn’t appear on the lightning strike map, but lightning did indeed strike a young medical student inside the Washington Convention Center right in front of about 1,500 amazed spectators on the first day of The Health Data Initiative Forum III: The Health Datapalooza. Everyone is fine—though our medical student may never be the same again.
Actually, this story began long before Datapalooza, of course. Fourth-year medical student Craig Monsen and his Johns Hopkins Medical School classmate, David Do, started collaborating on software applications soon after they met in first-year anatomy class. Craig graduated from Harvard with degrees in Engineering and Computer Science, and David from the University of Minnesota in Bioengineering.
They’re not quite Jobs and Wozniak—neither dropped out of anything (yet)—although Craig, at least, is planning to skip or delay residency. You see, after seeing the Robert Wood Johnson Foundation (RWJF) Aligning Forces for Quality Developer Challenge last year, they got very serious about bringing to life their vision of new applications that could help patients and consumers make great health care decisions.
Venture capitalists like to use the word “traction.” It sounds all glamorous, like an ad showing a Range Rover toughing it out up some impossible incline. But when I hear a company talk about “early traction” in its pitch, I’m always leery of the “First and Worst” effect.
My first customer at my first company was a grandfatherly CIO at a big hospital. Of course I wanted to please him, and was enthusiastic about doing so. But I was also very focused on taking over the world with our software. I told him, “We’ll change anything you want about the product, as long as it’ll be good for all our future [gazillion] customers.”
Of course, The Grandfather wanted lots of one-off customizations that would really only be good for him. I told him that all the time we spent doing custom work for him was going to make us less profitable, and less likely to be able to sell the product to other people. And to survive long enough to do any improvements to the product at all, we needed to be profitable. He seemed to think that made sense, and begrudgingly agreed.
In the end this arrangement was a win for both of us. Our product was a home run for his hospital. We got an evangelical reference customer, and his hospital helped make our product better. The precedent we’d set with The Grandfather gave us the courage to refuse other customers who wanted one-off changes. Sure, we could have done this for one or two hospitals, but by the time we got to hospital 300, it would have been a mess.
For the majority of my career I have been obsessed with creating technologies to modernize our largely dysfunctional U.S. healthcare system. To me, it is very clear that the emergence of cloud computing has finally created the opportunity to truly address this daunting problem. Cloud-based solutions are the only viable option for effectively getting providers, patients and other key stakeholders online so that the necessary efficiencies find their way into the system.
To the rest of healthcare IT, however, it is not so clear, as evidenced by the dearth of truly cloud-based companies in the marketplace.
Most of the large, established players in this industry continue to rely on outdated client/server architectures or even older technologies, such as MUMPS. Some of these companies’ products trace their roots as far back as 1969. These companies and their software were built before the World Wide Web, before Facebook, the iPhone and iPad, salesforce.com – and even email, for God’s sake! There also exists a tremendous amount of confusion related to the morass of small, bootstrapped EMR companies, which number in the hundreds. People do not understand the difference between buying a monolithic single-purpose app and adopting a robust, cloud-based platform approach.
This lack of understanding has made me realize that we need a better way to explain what the cloud has the power to do, and what true cloud-based technology even is. Easier said than done!
I was recently afforded a breakthrough, though unfortunately at the expense of an ancient treasure. Allow me to explain:
For the last six years, I’ve written this blog under the title “Medinnovation” with the tag line, “Where Innovation, Health Reform, and Physician Practices Meet.”
The novelty of the word “innovation” is wearing thin, and for good reason.

Sad to say, as a piece in the Wall Street Journal notes, “Companies love to say they innovate, but the term has begun to lose its meaning.” Companies are touting chief innovation officers, innovation teams, innovation strategies, and even innovation days.
- Companies last year mentioned “innovation” 33,552 times in their annual and quarterly reports.
- Publishers issued 255 books in the last 90 days with “innovation” in their titles.
- 43% of 260 companies said they have appointed chief “innovation” officers.
- 28% of business schools use the word “innovation” in their mission statements.
So what is “innovation”?
When I entered the VC business 10 years ago, I tried to keep thinking about venture capital as a business, where the key focus was meeting the needs of our target customers — entrepreneurs and limited partner investors.
In the case of entrepreneurs, those needs have changed radically in these last 10 years. The surge in seed investing over the last few years has been well-reported and analyzed. With advances in cloud computing, open source infrastructure, development tools and general “Lean Start-Up” techniques, entrepreneurs need less capital than ever before. And when entrepreneurs’ needs change (i.e., requiring less capital), smart investors adjust to meet those new needs. Hence, the rise of angels, super-angels, incubators, accelerators, micro-VCs and VC-led seed programs.
But as the “Great Seed Experiment” (as my partner, Michael Greeley, calls it) matures, a new trend is emerging. Entrepreneurs are beginning to learn the difference between what I’ll call Passive Seeds and Activist Seeds. And entrepreneurs are learning that the difference between the two, although somewhat subtle, matters greatly.
In a Passive Seed, a VC invests a small amount of money (typically $250K or less for a mid-sized $200–500M fund; perhaps $500K or less for a large $1B fund) to acquire a very small ownership stake (typically less than 5%), simply to create an option to participate as a more meaningful investor in the future. Passive seed programs get most of the press attention because of their sheer volume.
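To see how small these checks are relative to the companies they buy into, here is a hypothetical back-of-the-envelope calculation (the specific figures below are illustrative assumptions drawn from the ranges above, not from any actual deal):

```python
def implied_post_money(check_size: float, ownership: float) -> float:
    """Post-money valuation implied by paying `check_size` for `ownership`.

    If a check buys a fraction of the company, the whole company is
    implicitly valued at check_size / ownership.
    """
    return check_size / ownership

# A mid-sized fund writing a $250K check for a 5% stake implies a
# $5M post-money valuation -- a rounding error for a $200-500M fund.
print(implied_post_money(250_000, 0.05))   # 5000000.0

# A $1B fund writing a $500K check for the same 5% stake implies
# a $10M post-money valuation.
print(implied_post_money(500_000, 0.05))   # 10000000.0
```

The point of the arithmetic is the asymmetry: the check is a fraction of a percent of the fund, which is what makes these seeds “passive” options rather than conviction bets.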
In just about a month, the third annual Health Datapalooza will take place in Washington, DC – a celebration of data-driven healthcare innovation (taxpayer-funded data, by the way). The part of the program that I’m personally looking forward to is the Apps Expo of about a hundred or so health apps that will be showcased throughout the event. While there will be center stage presentations by a cavalcade of inspiring leaders (including Thomas Goetz and Bob Kocher), what is noteworthy is that there will be the opportunity to participate in roundtable discussions and deep dive sessions on top-of-mind areas of development such as big data, ACOs, and consumer data liberation. (¡liberación!)
But what is the value of attending? A better question: why has the event attracted more and more new attendees recently?
I’ve spent the last few years supporting private-sector healthcare innovation – especially around health IT. What I’ve come to appreciate from those dedicated to the space – whether a two person startup or a carve-out within a large technology prime – is that success at every stage of innovative development is predicated on how quickly one can create value based on the expectations of the relevant stakeholders at that stage.
Last week I found my usually diverse Twitter feed had coalesced into a single hashtag, the trolley buses chugging through the streets of Washington, D.C. were sporting bold logos on their sides, and all around the city people were wearing giant nametags bearing their name, face, and three things they liked to talk about. There was no mistaking it: TEDMED was in town.
For the world of health care, TEDMED was the only party at which to see and be seen. The thousand or so delegates had been specifically “curated” to encapsulate the epitome of health care innovation. For 3.5 days they basked in cutting-edge, quirky talks by people “shaping and creating the future of health and medicine,” punctuated by lavish dinners and parties, TEDMED-themed M&Ms, and morning runs, as sanctioned by the Cookie Monster (one of the celebrity speakers at this extravaganza). Meanwhile, the rest of the medical world followed the #TEDMED hashtag on Twitter or soaked up the inspiration in real time at one of TEDMED’s mostly academic simulcast venues around the U.S.
And as for me? I threw myself into getting invited to the cool kids’ party. Or to be more accurate, the cool, privileged kids’ party. Because as well as being accepted on merit, attending TEDMED in person costs an eye-watering $4,950. A wealth of sponsors paid for 200 people to attend on scholarships (and for the Simulcasts), but by the time I’d realized this and persuaded them of my innovative brilliance, they’d already allocated their funds and I was consigned to their priority waiting list. But at the last minute, delightfully, my persistence and anticipation were rewarded with a pass for the Thursday night party and the final Friday morning session.
When I was a teenager, the older women in my family taught me to cook. I learned it was traditional not to add salt when cooking lentils, because it would slow down the cooking. For some reason, perhaps the sheer pleasure of being difficult, I insisted on taking two identical pots and cooking identical quantities of lentils, one with salt and one without. That caused quite a stir, and not only because I proved that the salted lentils cooked just as fast. On the one hand, my mother, grandmother, and aunts sensed more difficulties were to come. On the other, they knew they’d participated in something different and important: a scientific experiment.
The women in my family were courageous, smart, and resourceful. They knew many things: useful wonderful things. For the most part, their knowledge was received knowledge, knowledge they’d been given, not figured out on their own. This is a common situation. The idea that anybody can be taught to figure things out, that there is a logic to discovery and invention, would have struck our ancestors as radical and strange. Until quite recently — until science education became institutionalized and widespread — the creation of new knowledge depended on either genius or luck.