We Are Not A Dashboard: Contesting The Tyranny Of Metrics, Measurement, And Managerialism


The dashboard is the potent symbol of our age. It offers an elegant visualization of data, intended to capture and represent the performance of a system, revealing current status at a glance and pointing out emerging concerns. Dashboards are a prominent feature of most every “big data” project I can think of, offered by every vendor, and constructed to provide a powerful sense of control to the viewer. It seemed fitting that Novartis CEO Dr. Vas Narasimhan, a former McKinsey consultant, would build (then tweet enthusiastically about) “our new ‘control tower’” – essentially a multi-screen super dashboard – “to track, analyse and predict the status of all our clinical studies. 500+ active trials, 70+ countries, 80 000+ patients – transformative for how we develop medicines.” Dashboards are the physical manifestation of the ideology of big data: the idea that if you can measure it, you can manage it.

I am increasingly concerned, however, that the ideology of big data has taken on a life of its own, assuming a sense of both inevitability and self-justification. Rather than measuring in service of people, we increasingly seem to be measuring in service of data, setting up systems and organizations where constant measurement often appears to be an end in itself.

My worries, it turns out, are hardly original. I’ve been delighted to discover over the past year what feels like an underground movement of dissidents who question the direction we seem to be heading, and who’ve thoughtfully discussed many of the issues that I stumbled upon. (Special hat-tip to “The Accad & Koka Report” podcast, an independent and original voice in the healthcare podcast universe, for introducing me to several of these thinkers, including Jerry Muller and Gary Klein.)

A good place to start may be a 2013 essay by Kenneth Cukier and Viktor Mayer-Schönberger in Technology Review, warning,

“We are more susceptible than we may think to the ‘dictatorship of data’—that is, to letting the data govern us in ways that may do as much harm as good. The threat is that we will let ourselves be mindlessly bound by the output of our analyses even when we have reasonable grounds for suspecting that something is amiss.”

Citing the example of metrics-obsessed Vietnam-era Secretary of Defense Robert McNamara, Cukier and Mayer-Schönberger conclude,

“Big data will be a foundation for improving the drugs we take, the way we learn, and the actions of individuals. However, the risk is that its extraordinary powers may lure us to commit the sin of McNamara: to become so fixated on the data, and so obsessed with the power and promise it offers, that we fail to appreciate its inherent ability to mislead.”

Jerry Muller

Historian Jerry Muller, in his essential new book The Tyranny of Metrics, offers what may be the best summary of how McNamara-like thinking has pervaded our own era. In the same way that Steven Levy wrote in 1984 that the (recently introduced) spreadsheet “is a tool, but it is also a world view,” Muller offers a similar view of metrics, which he worries have evolved into a fixation.

“The most characteristic feature of metric fixation is the aspiration to replace judgment based on experience with standardized measurement. For judgment is understood to be personal, subjective, and self-interested. Metrics, by contrast, are supposed to provide information that is hard and objective. The strategy is to improve institutional efficiency by offering rewards to those whose metrics are highest, or whose benchmarks or targets have been reached, and to penalize those who fall behind….

To be sure, there are many situations where decision-making based on standardized measurement is superior to judgment based upon personal experience and expertise…. [U]sed judiciously, then, the measurement of the previously unmeasured can provide real benefits….

If what is actually measured is a reasonable proxy for what is intended to be measured, and if it is combined with judgment then measurement can help practitioners to assess their own performance, both for individuals and for organizations. But problems arise when such measures become the criteria used to reward and punish – when metrics become the basis of pay-for-performance or ratings.”

He observes that “metrics fixation leads to a diversion of resources away from frontline producers toward managers, administrators, and those who gather and manipulate data.”

Muller’s key takeaway: “Not everything that is important is measurable, and much that is measurable is unimportant.”

Nassim Taleb

The Black Swan author Nassim Taleb has also worried about the way we think about data, writing in Antifragile (and excerpted here),

“In business and economic decision-making, data causes severe side effects —data is now plentiful thanks to connectivity; and the share of spuriousness in the data increases as one gets more immersed into it. A not well discussed property of data: it is toxic in large quantities —even in moderate quantities….

The more frequently you look at data, the more noise you are disproportionally likely to get (rather than the valuable part called the signal); hence the higher the noise to signal ratio.”
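Taleb’s claim about observation frequency has a simple statistical core: genuine change accumulates with time, while measurement noise arrives fresh with every look, so the more often you check a metric, the less each reading means. A minimal sketch in Python (the drift and noise figures are hypothetical, chosen purely for illustration):

```python
def snr_per_reading(drift_per_day, noise_sd, interval_days):
    """Signal-to-noise ratio of a single reading: the genuine change that
    accumulates between looks, divided by the measurement noise that
    arrives with every look."""
    return (drift_per_day * interval_days) / noise_sd

# A metric that truly improves by 0.1 units/day, measured with noise (sd = 5):
print(snr_per_reading(0.1, 5.0, 1))    # daily check:  ~0.02 -> almost all noise
print(snr_per_reading(0.1, 5.0, 365))  # yearly check: ~7.3  -> mostly signal
```

The same underlying process looks like pure static on a daily dashboard and like an unmistakable trend in an annual review.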

Indeed, in testimony to Congress following the financial crisis, Taleb said,

“Some may use the argument about predicting risks equal or better than nothing; using arguments like ‘we are aware of the limits.’ Risk measurement and prediction —any prediction — has side effects of increasing risk-taking, even by those who know that they are not reliable. We have ample evidence of so called ‘anchoring’ in the calibration of decisions. Information, even when it is known to be sterile, increases overconfidence.”

Frank Pasquale

A particularly cogent summary of our present state has been offered by law professor Frank Pasquale, whose 2017 essay on professional judgment, while a bit of tough sledding, is nevertheless required reading.

Pasquale zeroes in on the reductionist essence of many data-focused approaches,

“Robotics and AI, including even advanced machine-learning systems, comprehend professions as jobs, jobs as tasks, and tasks as observation, information processing, and actuation. Though such strategies to divide labor are sensible in many industrial contexts, they ignore the irreducibly holistic assessments that are hallmarks of good judgment.”

Yet, Pasquale writes,

“Instead of reductionism, an encompassing holism is a hallmark of professional practice—an ability to integrate facts and values, the demands of the particular case and prerogatives of society, and the delicate balance between mission and margin….

For over a decade, business books have exhorted managers to be ‘supercrunchers’— numbers-obsessed quantifiers, quick to make important decisions as ‘data driven’ as possible. There is an almost evangelical quality to this work, a passionate belief that older, intuition-driven decisions are a sinful relic of a fallen world….

The commensurating power of numbers, sweeping aside contestable narratives, promises a simple rank ordering of merit, whether in schools, hospitals, or beyond. Measurements are not simply imposed from the top down. They also colonize our own understandings of merit.”

Such managerial approaches, Pasquale recognizes, elevate “the ‘data-driven,’ while minimizing the all-too-human process of gathering, cleaning, and analyzing data.”

Pasquale also reiterates Muller’s point that metrics “often distort the social practice that they ostensibly measure,” citing Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” (This idea has also found expression in Goodhart’s Law, essentially, “When a measure becomes a target, it ceases to be a good measure.”)
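One mechanism behind Goodhart’s Law can be shown in a few lines: when you reward whoever scores highest on a noisy proxy, the winners’ scores are inflated by luck, so the measure begins to flatter itself the moment it becomes the target. A toy simulation (all numbers hypothetical, not drawn from any of the works cited):

```python
import random

random.seed(42)

# 1,000 units, each with a true quality; we only observe a noisy proxy metric.
true_quality = [random.gauss(0, 1) for _ in range(1000)]
proxy_metric = [q + random.gauss(0, 1) for q in true_quality]

# Reward the top 10 by the proxy -- the metric is now the target.
winners = sorted(range(1000), key=lambda i: proxy_metric[i], reverse=True)[:10]

avg_metric = sum(proxy_metric[i] for i in winners) / len(winners)
avg_true = sum(true_quality[i] for i in winners) / len(winners)

# The winners' metric scores systematically overstate their true quality,
# because selecting on (quality + noise) preferentially selects lucky noise.
print(f"winners' average proxy score:  {avg_metric:.2f}")
print(f"winners' average true quality: {avg_true:.2f}")
```

Note that this gap appears even before anyone consciously games the measure; deliberate gaming only widens it.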

Finally, while emphasizing that “all-pervasive quantification and metricization” is not “the ineluctable logic of economic progress,” Pasquale recognizes that, “For true believers in metrics and standardization, problems with existing metrics are simply a prompt to improve metrics.”

My Reflections

The authors cited above deserve a careful read and thoughtful critique; I’ve quoted them to provide a sense that there’s an alternative world view to the one in which so many of us seem to live, whether working in a multinational corporation or a local non-profit hospital. It is a world dominated by a religious faith in managerialism, the idea that the right way to improve performance is to measure more, capture more data, and use these data to manage in an increasingly granular fashion, in a behaviorist style that can be traced back to Taylor but often feels more indebted to Pavlov.

To be clear, it’s not that the critics can’t appreciate the utility of data, of measurement, of metrics; they go out of their way to explain how each of these can be critically important – an invaluable tool. The issue seems to be that we’ve taken a tool, an approach, a mindset (to return to Levy’s phrase), and started to apply it almost indiscriminately, with a near-religious fervor. We do this because we can – there are always more data to capture, and there are very real examples where data provide essential insight that human intuition alone missed, or might have missed.

But the idea that data enables human biases to be replaced with pure objectivity is a fantastical illusion. Human judgment is biased, but the worship of data doesn’t magically transport us to The Realm Of Wholesome Objectivity. In the world of data, bias abounds and permeates all we do – including how we decide what data to collect (and what counts as “collectable”), how we decide to measure, how we decide to analyze, and what we decide to do with the results of these analyses.

The answer isn’t to reject all data, measurement, or metrics, but rather to ensure that these tools aren’t accorded undue respect, or given an unearned benefit of the doubt. We need to leverage data and metrics selectively and judiciously, just as we should recognize and leverage selectively the accumulated wisdom of reflective practitioners in a range of domains. We must also recognize the value of tacit knowledge and accumulated expertise in some areas, while at the same time thoughtfully challenging received wisdom and cherished assumptions.

As Klein and Kahneman observed in a captivating 2009 essay attempting to synthesize their contrasting views of expertise, physicians (and, they believe, most professionals) exhibit “fractionated expertise,” meaning that there are situations where expert wisdom deserves to be trusted, and situations where it shouldn’t be (the former captivate Klein; the latter, Kahneman).

I suspect Klein and Kahneman are correct that “the fractionation of expertise is the rule, not an exception,” but I worry that we’ve lost our equilibrium, and have become so obsessed with the failure of expertise and intuition that we’ve neglected to leverage deep reservoirs of existing expertise, especially among experienced practitioners. Meanwhile, we seem to have developed an excessive faith in the ability of putatively objective data and detached analytics to deliver us.

If there’s one thing managerialism offers above all else, it’s an implementable framework – an all-purpose way to approach most every organization, no matter how large, and every problem, no matter how difficult. It has thrived in no small part because it clearly works in some well-defined situations; and while it may fail abysmally in others (pharma R&D comes to mind), there doesn’t seem to be an alternative worldview capable of replacing it, so the cycle persists and the behavior patterns become ever more ingrained.

I recently asked Muller about this concern, via Twitter, wondering if there was another way to think about this. He replied, “Management based upon experience, expertise in the subject matter of the organization, and demonstrated talent within the organization; along with the judgment to use measurement and data selectively and efficiently.”

“I can’t see how competent experts could ignore metrics,” Muller continued. “The question is their ability to evaluate the significance of the metrics, and to recognize the role of the unmeasured.”

This captures, perfectly, where we need to be headed, especially as we contemplate how to meaningfully improve areas like drug development and healthcare delivery. Data sciences and technology could, and must, play a vital role. But they haven’t earned the right to be considered an end in themselves. They represent potentially valuable tools, ideally in the hands of experienced and inquisitive practitioners, who uniquely appreciate the subtleties of their domain – McClintock’s phrase “feeling for the organism” comes to mind; who have the humility to recognize the limits of knowledge; and who will actively seek to leverage the benefits potentially offered by data, analytics, and measurement, thoughtfully applied.

Addendum: Further Reading

In addition to the books and articles cited above, readers might also enjoy:

This Bill Gardner essay about checklist burden, and how well-intentioned but excessively narrow metric-based thinking can lead to unintended, suboptimal outcomes.

This Financial Times op-ed Nassim Taleb and I wrote in 2008 about the challenges of industrializing drug discovery.

This Atlantic essay I wrote about the impact of reductionism in business strategies around healthcare cost reduction.

This short Forbes piece I wrote about fetishization of metrics.

This engaging talk by OptumLabs CMO Darshak Sanghavi (discussed on our recent TechTonics podcast with Sanghavi and co-host Lisa Suennen) highlighting (through analogies with reality TV!) the importance of measuring what matters – and understanding what matters.


David Shaywitz is a Senior Partner with Takeda Ventures and a Visiting Scientist at Harvard Medical School.

This piece originally appeared in Forbes here.



  1. Having served during the Vietnam War (not in theater) and in Desert Storm (in theater), I have had an interest in McNamara and his Whiz Kids. Just a couple of points.

    1) They lied about or made up some of their data. I hope the conclusions about the usefulness of this data are obvious.

    2) The senior military knew some of the data was wrong, and knew that they were measuring some of the wrong stuff, but they kept quiet. (Read McMaster’s Dereliction Of Duty) They did this for a number of reasons, but it seems clear that if the metrics are set solely by people who do not really understand the field they are measuring, you are probably not going to get helpful results.

    Finally, from my POV, having been around for a while, it seems like we have always known that some studies are wrong. This just doesn’t seem new. I still don’t see a practical alternative. If we are going to have people go on their gut instincts, at what point in their career should that happen? Internship? End of residency? 5 years into practice? 20? I think we should be data-based, but skeptical. Don’t be the first one on the train; wait for confirmation studies (with rare exceptions). Remember that how we respond to data is where we often make our mistakes.

