
Tag: David Shaywitz

Nate Silver Is King, Long Live Nate Silver

My Twitter stream is awash in math this morning, cheering Nate Silver's exceptional forecasting ("Triumph of the Nerds: Nate Silver Wins In Fifty States," Chris Taylor wrote), and celebrating the victory of math and big data over pompous punditry.  Jeff Greenfield tweeted, "I, for one, welcome our new Algorithmic Overlord."

At some level, I thrill to the ascendancy of math, and of math nerds – and I write this as a proud former math team captain (and math team T-shirt designer), and as someone whose very best summers as a teenager were spent in math (and writing) camp at Duke University.  It’s also one of the reasons I love Silicon Valley so much – it’s where nerds rule, and where even emerging VCs promote themselves as “Geeks.”

However, before we turn all of life over to algorithms, as some are suggesting, it’s important to place the election prediction in context.

The accomplishment of Silver’s splendid forecasting was to intelligently aggregate existing data, to accurately summarize the current, expressed intentions of the national electorate.  And we’ve learned that careful analysis is far more useful than blustery experts – something Philip Tetlock has been trying to tell us for years.

At the same time, not all forecasting challenges are created equal, and summarizing current public opinion is a much lower bar than predicting events far into the future.  Silver has been clear about this; it's others who seem to be leaping ahead.

Continue reading…

Being Human

The human connection is threatened by medicine’s increasingly reductive focus on data collection, algorithms, and information transaction.

If you follow digital health, Rachel King’s recent Wall Street Journal piece on Stanford physician Abraham Verghese should be required reading, as it succinctly captures the way compassionate, informed physicians wrestle with emerging technologies — especially the electronic medical record.

For starters, Verghese understands its appeal: “The electronic medical record is a wonderful thing, in general, a huge improvement on finding paper charts and finding the old records and trying to put them all together.”

At the same time, he accurately captures the problem: "The downside is that we're spending too much time on the electronic medical record and not enough at the bedside."

This tension is not unique to digital health, and reflects a more general struggle between technologists who emphasize the efficient communication of discrete data, and others (humanists? Luddites?) who worry that in the reduction of complexity to data, something vital may be lost.

Technologists, it seems, tend to view activities like reading and medicine as fundamentally data transactions. So it makes sense to receive reading information electronically on your Kindle — what could be more efficient?

Continue reading…

Digital Health: Almost a Real, Live Business

While the evolution of the digital health ecosystem has seemed at times almost painfully contrived, it now appears to have reached the point where it requires but a few sprinkles of magic fairy dust to be truly alive.

The basic idea behind digital health is pretty clear: we can (and must) do health better, and technology should be able to help.

There’s also an ever-increasing amount of support for early-stage innovators in this space. A remarkably large number of digital health incubators have sprung up around the country, as Lisa Suennen captured with characteristic verve in a recent Venture Valkyrie post.

On top of this, a slew of corporate VCs have now emerged – many from payors, but some from communication companies, and even a few from big pharmas such as Merck – all keen to invest strategically in the digital health space.

Deliberately, many of these large corporations also represent likely buyers for the products or services that will be produced, so it really does seem like an example of the savvy external sourcing of innovation.

So we’re good, then – right?

Well, not so fast.

It turns out that many high profile VCs continue to eschew this space, other than perhaps an occasional investment or two. The reason? As one extremely well-regarded VC – with extensive healthcare experience – told me yesterday, “I haven’t seen a viable business model yet.”

Translation: how do you make (serious) money here? Where’s the revenue?

Continue reading…

Don’t Confuse Hard Science With Bad Pharma

A key lesson of science is the importance of a control group; I worry that a lot of coverage and discussion of the biopharma industry (in which I work) neglects this lesson, and instead contrasts (implicitly or explicitly) industry behavior with an imagined, idealized standard of perfection, failing to place industry's actions in the context of medical science as a whole.

I appreciate critical coverage of the industry: reporters should always maintain high standards, approach new information skeptically, and not take anything at face value.

However, what disappoints me is the common, implicit assumption that industry science deserves to be treated as a special case, rather than considered within the broader framework of contemporary research.  I’m especially disappointed by the frequent assumption that the behavior of industry scientists should be viewed more skeptically than the behavior of academic scientists; this strikes me as a magical, often self-serving belief that has now become elevated to the status of conventional wisdom.

Take data sharing, a topic in the news today (and discussed very thoughtfully here by John Wilbanks, the guru of open science).  While most media coverage of this topic (both today and over the years) has focused on the transparency of industry research, I’ve been attending the annual Sage Commons Congress since its inception in 2010 (disclosure: I served as a founding advisor to Sage, a non-profit organization focused on open science, founded by Eric Schadt and Stephen Friend), and hearing every year about how incredibly difficult it is to get academic groups to share with each other, for a wide variety of reasons.  (See this exceptional talk from Josh Sommer of the Chordoma Foundation at the First Sage Congress).  Getting scientists (or any group of competitive human beings) to exchange data turns out to be a real problem — especially in the highly-regulated environment in which clinical data sit.

Continue reading…

Rethinking the Provider Certification Game

Quality is the new watchword in healthcare; it’s what we seek – and increasingly, what we try to measure.  Medications, devices, care delivery, hospital services – all are now scrutinized as we seek to gauge their benefit, and justify their cost.

The idea of using metrics to evaluate quality makes sense, but only if we can trust the metrics themselves.  Otherwise, we risk becoming party to an updated version of craniometry: systematized false precision that focuses on easily measurable parameters (such as head circumference) that may not represent meaningful proxies for the assessments we're really after (i.e. intelligence).

The good news is that the science of testing, of developing evaluation instruments, has improved over time.  We’re now better able to recognize the qualities and properties of good tests – and to identify where they’re likely to fall short.

We’re also getting more comfortable with demanding robust evaluation instruments.  For example, the FDA’s approach to patient-reported outcomes places exceptional (and appropriate) emphasis on the assessment tool chosen, and requires that it demonstrate the appropriate properties before relying on its results.
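One of the standard properties test developers check is internal consistency: whether the items on an instrument hang together well enough to be measuring one underlying trait.  A common summary statistic is Cronbach's alpha; a minimal sketch of the textbook formula (the data below are made up for illustration):

```python
def cronbach_alpha(item_scores):
    """Internal-consistency estimate for a multi-item test.

    `item_scores` is a list of columns, one per test item; each column
    holds the scores every examinee received on that item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(col) for col in item_scores)
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Three items scored by four examinees; perfectly consistent items give alpha = 1
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

Values near 1 suggest the items measure a common construct; low or negative values flag an instrument whose results should not be trusted, which is exactly the scrutiny certification exams rarely receive.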

Unfortunately, one critically important area within our healthcare system that seems to have escaped such careful review is the way the competence of care providers is typically assessed and certified.

Whether you are an X-ray technician, a physical therapist, a registered nurse, or a transplant surgeon, you are required to pass through a gauntlet of costly certification exams.  These tests, already significant, are assuming an even greater importance as the healthcare system increasingly looks to them as proxies for quality.  Certification can be required for employment and for admission privileges, and frequently impacts the reimbursement rate for healthcare providers.

All this makes complete sense – provided the certification tests themselves are sound.

Unfortunately, the world of healthcare worker certification remains a bit like the wild west, as medical organizations and professional societies approach certification testing with profoundly different degrees of rigor — and generally little-to-no transparency.

Continue reading…

Better Than Pandora for Cats

Health VCs: Desperate…

As a tech VC recently told me, refuting the latest flimsy rumor of a huge tech-dominated fund contemplating significant new investment in life science, “Wow, you healthcare guys are really desperate for some good news!”

It’s true; not only are LPs looking ever more critically at VC as an asset class – especially since the publication of the Kauffman report – but the life science sector, in particular, has been devastated, and health VCs have been hurting.  (Added Sept 27: See this fascinating, just-posted Xconomy profile of Avalon’s Kevin Kinsella and discussion of the current sorry state of healthcare VC.)

Part of the issue, as Bruce Booth and Bijan Salehizadeh have described previously, and as Sarah Lacy summarized nicely this week in PandoDaily, is that the return profile of life science venture investments looks very different than tech in general, and consumer web (the focus of Lacy’s article) in particular.

The sex appeal of tech investing is that a relatively small initial investment can blossom very quickly to yield huge returns; the catch, of course, is that this happens very rarely, and, much like at a casino, the tremendous attention lavished upon the winners can almost make you forget how infrequently they occur.

Continue reading…

Folly To Forecast Startup Performance?

Several days ago, Paul Graham, co-founder of noted Silicon Valley accelerator Y Combinator (YC), wrote an exceptional post, “Black Swan Farming,” observing how crazy difficult it is to predict success in the startup space, and noting that just two companies – Airbnb and Dropbox – account for about 75% of the total value created by all YC-associated companies.

Yesterday, Dave McClure (the white-hot seed-stage Silicon Valley investor, familiar to readers of this column – see this discussion of his small bets style in connection with digital health) responded in a post titled (what else?) “Screw the Black Swans” that his investment model (at 500 Startups) is slightly different.

While most VCs are looking for the big score, McClure said, he’s deliberately seeking singles and doubles, which he expects will yield a similar expected value for his portfolio while reducing the chances of getting shut out.  He anticipates, and is hoping for, a greater number of successes (albeit more modest ones) than other VCs achieve.
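McClure's arithmetic can be made concrete with a toy simulation: two portfolios with the same expected value per bet can carry very different odds of returning nothing.  All the payoff and hit-rate figures below are invented for illustration, not either investor's actual numbers:

```python
import random

def portfolio_return(n_bets, p_win, payoff, rng):
    """Total multiple returned on n equal bets, each paying `payoff` times
    the stake with probability p_win and zero otherwise."""
    return sum(payoff for _ in range(n_bets) if rng.random() < p_win)

rng = random.Random(0)
trials = 10_000

# Same expected value per bet (0.5x the stake):
# rare 50x home runs vs frequent 2.5x "singles and doubles"
swings  = [portfolio_return(30, 0.01, 50.0, rng) for _ in range(trials)]
singles = [portfolio_return(30, 0.20, 2.5,  rng) for _ in range(trials)]

print("P(zero return), swinging big   :", sum(s == 0 for s in swings) / trials)
print("P(zero return), singles/doubles:", sum(s == 0 for s in singles) / trials)
```

With 30 bets, the home-run portfolio returns nothing roughly 74% of the time (0.99^30), while the singles-and-doubles portfolio almost never does (0.80^30), even though both average the same total return.  That is the shut-out risk McClure is trading away.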

This will be a familiar dialog not only to investors but also to those in biopharma (who perhaps should be thought of as investors as well), as they continuously need to decide whether to go for a risky potential blockbuster or for more of a sure thing, albeit one ostensibly associated with a smaller market.

I’ve been fascinated with this exact question for a while (see here and here), and I’ve always looked at the problem a bit differently than McClure – which, if I’m right, may actually be good news for him.

Continue reading…

Do You Believe Doctors Are Systems, My Friends?

In the current issue of The New Yorker, surgeon Atul Gawande provocatively suggests that medicine needs to become more like The Cheesecake Factory – more standardized, better quality control, with a touch of room for slight customization and innovation.

The basic premise, of course, isn’t new, and seems closely aligned with what I’ve heard articulated from a range of policy experts (such as Arnold Milstein) and management experts (such as Clayton Christensen, specifically in his book The Innovator’s Prescription).

The core of the argument is this: the traditional idea that your doctor is an expert who knows what’s best for you is likely wrong, and is both dangerous and costly.  Instead, for most conditions, there is a clear set of guidelines, perhaps even algorithms, that should guide care, and by not following these pathways, patients are subjected to what amounts to arbitrary, whimsical care that in many cases is unnecessary and sometimes even harmful – often with the best of intentions.

According to this view, the goal of medicine should be to standardize where possible, to the point where something like 90% of all care can be managed by algorithms – ideally, according to many, not requiring a physician’s involvement at all (most care would be administered by lower-cost providers).  A small number of physicians still would be required for the difficult cases – and to develop new algorithms.
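To see what "care as algorithm" looks like in miniature, consider the well-known CHADS2 stroke-risk score for atrial fibrillation, which reduces a clinical judgment to a point tally.  This sketch is deliberately simplified and the treatment thresholds are illustrative only, not clinical guidance:

```python
def chads2_score(chf, hypertension, age, diabetes, prior_stroke):
    """Simplified CHADS2 stroke-risk score for atrial fibrillation:
    one point each for congestive heart failure, hypertension,
    age >= 75, and diabetes; two points for prior stroke or TIA."""
    score = sum([chf, hypertension, age >= 75, diabetes])
    score += 2 * prior_stroke
    return score

def treatment_suggestion(score):
    """Illustrative decision rule keyed to the score; real guidelines
    are more nuanced and have since evolved (e.g., CHA2DS2-VASc)."""
    return "anticoagulant" if score >= 2 else "aspirin or none"

# An 80-year-old with hypertension and a prior stroke
score = chads2_score(chf=False, hypertension=True, age=80,
                     diabetes=False, prior_stroke=True)
print(score, treatment_suggestion(score))
```

The appeal for standardizers is obvious: a rule like this is cheap, auditable, and does not require a physician to execute; the worry, equally obvious, is everything the point tally leaves out.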

Continue reading…

Closing the Translational Gap: A Challenge Facing Innovators in Medical Science — and in Digital Health

The gap between promising solutions on paper and solutions that work in the real world – the translational gap – is arguably the greatest challenge we have in healthcare, and it appears in both medical science and digital health.

Translational Gap in Medical Science

The single most important lesson I learned from my many years as a bench scientist was how fragile most data are, whether presented by a colleague at lab meeting or (especially) if published by a leading academic in a high-profile journal.  It was not uncommon to watch colleagues spend months or even years trying to build upon an exciting reported finding, only to eventually discover the underlying result was not reproducible.

This turns out to be a problem not only for other university researchers, but also for industry scientists who are trying to translate promising scientific findings into actual treatments for patients; obviously, if the underlying science doesn’t hold up, there isn’t anything to translate.  Innovative analyses by John Ioannidis, now at Stanford, and more recently by scientists from Bayer and Amgen, have highlighted the surprising prevalence of this problem.
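The arithmetic behind Ioannidis-style arguments is worth seeing once: if only a minority of tested hypotheses are true, even well-powered studies at conventional significance thresholds produce a surprising share of false positives.  A minimal sketch (the function name and the example numbers are illustrative choices, not figures from any specific paper):

```python
def false_positive_share(prior, power, alpha):
    """Fraction of statistically 'positive' findings that are false,
    given the prior probability a tested hypothesis is true, the
    studies' power, and their significance threshold.

    Of all positives, true positives occur at rate power * prior and
    false positives at rate alpha * (1 - prior).
    """
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return false_pos / (true_pos + false_pos)

# If only 10% of tested hypotheses are true, with 80% power at p < 0.05,
# over a third of "significant" findings are false:
print(round(false_positive_share(0.10, 0.80, 0.05), 2))  # → 0.36
```

Lower power or a more speculative field (smaller prior) pushes the false-positive share higher still, which is consistent with what the Bayer and Amgen replication efforts reported in practice.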

Continue reading…

Time For Biopharma To Jump On The “Big Data” Train?

In a piece just posted at TheAtlantic.com, I discuss what I see as the next great quest in applied science: the assembly of a unified health database, a “big data” project that would collect in one searchable repository all the parameters that measure or could conceivably reflect human well-being.

I don’t expect the insights gained from these data will render physicians obsolete, but rather empower them (as well as patients and other stakeholders) and make them better, informing their clinical judgment without supplanting their empathy.

I also discuss how many companies and academic researchers are focusing their efforts on defined subsets of the information challenge, generally at the intersection of data domains.  I observe that one notable exception seems to be big pharma, as many large drug companies seem to have decided that hefty big data analytics is a service to be outsourced, rather than a core competency to be built.  I then ask whether this is savvy judgment or a profound miscalculation, and suggest that if you were going to create the health solutions provider of the future, arguably your first move would be to recruit a cutting-edge analytics team.

The question of core competencies is more than just semantics – it is perhaps the most important strategic question facing biopharma companies as they peer into a frightening and uncertain future.

Continue reading…
