The Rise of the Chief Cognitive Officer

Cogito potestas est (Thinking & learning is power)


In a recent blog post titled ‘A computer that allows the doctor to be more human,’ Toby Cosgrove, CEO of the Cleveland Clinic, stated, “It may sound odd, but technology like Watson will make healthcare less robotic and more human.” The reasoning behind putting an AI through a version of medical school is that human physicians can’t possibly read and process the exponentially growing volumes of clinical trials, medical journals, and individual cases available in the digital domain. A computer that digests them can transform them into useful support options for patient care. Furthermore, no human can be a part of every case and learn from every physician. But by pairing a physician with the capacity of a computer acting as an assistant, physicians can focus on the many things they are uniquely able to do in the complex domain of medicine, including the critical conversations with patients and their families.

In short, sharing the workload between human and machine can make the physician interaction more human. The upshot of the shift to cognitive clinical decision support is that we will likely see an increasingly interdependent marriage between AI (artificial intelligence) thinking and human provider thinking within medicine. It will not be a trivial transition, and it demands qualified, empowered leadership to carry organizations through it and achieve that vision.

Even without cognitive computing, many physicians describe the changes brought by health IT with remarks like “the physicians came to work once the EHR was implemented and they didn’t know how to think anymore.” With the rise of more robust medical AI systems holding virtual degrees earned through machine learning training programs, healthcare organizations need to change how they think about the totality of their thinking assets. In this new health system, humans will need to continuously train and retool their medical capacities, learning to think differently as they discover how to work collaboratively with relatively smart machines whose capacity to advise on patient cases changes dynamically.

This is why I am proposing and predicting a new title: CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer), modernizing the construct of the CMIO (Chief Medical Information Officer). In the next generation, the core focus of health IT shouldn’t be informatics so much as collaborative thinking, cognition on behalf of the patient shared between medical professional and machine. As important as informatics seems today, it sits on a lower layer of Maslow’s hierarchy than thinking. Cognition in people is as important as, if not more important than, cognition in machines, so the CCMO should focus as much on helping physicians change how they do their work as on helping the computer do its component of medical work.

The physician and computer would, in essence, train each other: the computer transfers expertise at scale into the human one case at a time, while the human pushes back with decisions and nuances about the case or its documentation that only a physician can enrich through their specific perspective and undocumented knowledge. The mechanisms we commonly use to denote charges in today’s volume-based mode, such as CPT codes, often include thinking, so physician compensation may need to be reviewed in the new symbiotic mode.

Cognitive assets also come into play. The traditional way to acquire medical expertise was to hire a qualified physician. Data is an asset, and digital knowledge is an asset too, but the latter may not, and likely won’t, come from the local institution. AIs and the data sets they use are not people; they are licensed like software and services. Organizations will most likely need to choose among them: an AI trained on GI care in Indiana is different from an AI trained on GI care in China.
With such a burden of reengineering work, training, and acquisition of ‘digital talent,’ a CCMO may take on some responsibilities traditionally handled by human resources or talent organization groups, if not be expected to reframe those groups to incorporate the computer as a new form of ‘worker.’

This Chief Cognitive Officer should try to obtain a holistic picture of the work so that the physician/computer thinking model becomes a real system designed to work for both physicians and patients. They would be responsible for continuously and safely introducing new cognitive capabilities into the organization with effective change management. They would design from the human backwards, finding the areas where physicians most need leverage, rather than from the computer forward, based on whatever computers happen to do well.

A trained AI, whether it was trained within the organization or is a brain transplant from a top-ranking academic medical center, is a new kind of asset. Figuring out how it evolves, in the same way people evolve, will likely be complex. If left untrained for a year or two, should the AI lose its credentials? How does training get combined between organizations with different styles or systems of care? How should it be trained? How should the trainers be compensated if training takes away from time spent caring for patients? The natural trainers, after all, are the same people providing front-line care, so many of these decisions affect them directly. The digital capital of data also gives rise to a new cognitive capital of educated AIs; ownership of that capital may need to change, and the financial exchanges for that education are likely to get messy. All of these challenges would roll up to the Chief Cognitive Officer.

Overall, I think the most natural fit for such a role today (if we don’t add a new title) is the CMIO, who is normally a computer- and data-savvy trained MD. Cognitive is the natural next jump for them, although it is likely to be as big a leap across a chasm as implementing an EHR. They might have to staff many areas beyond informatics (the science of data), since knowledge and thinking go far beyond data. New disciplines are needed, such as design thinking and data science (applying math and science to put data to work), to help people understand how human capital can work alongside the introduction of AI capital.

The word cognitive comes from the Latin root cognoscere, “to learn, to know.” Learning tends to be an active requirement for success in modern medicine, captured in ideas such as ‘the learning health system,’ and the modern learning health system is typically able to learn and know things because algorithms do a chunk of the learning that people can’t scale to do.

At the moment I don’t work within a health system. Because I am focusing on cognitive computing, I am considering declaring myself the Chief Cognitive Officer and taking over the establishment of anything related to systems of thinking and knowledge. I wonder if anyone will stop me. Maybe the chief cognitive officer’s job should belong to the CEO? I’d say so, except that ‘executive’ implies decisions. Thinking isn’t always about big decisions but about the tools used to make the millions of little decisions needed for every interaction to generate value. Someone has to make this next-generation stuff work.

At least that is how my AI and I think.

Dan Housman is CMO for ConvergeHealth by Deloitte.


3 replies

  1. Clearly our pressing need is the designation of yet another administrative/executive title and acronym, one which in this instance sort of implies that all other subordinate clinicians in an organization are lesser thinkers.

    Been riffing on this AI topic on my blog,

    “At this point it looks to me that AI will be more “IA” in the health care domain — “Intelligence Augmentation” that helps the clinicians sift through the otherwise increasingly unmanageable torrents of data they face when trying to arrive at accurate dx’s and efficacious px’s and tx’s.”

    immediately after which I cited your post.


  2. AI, as a dream, is just what medicine needs: diagnostic help. It is easy to look up therapy. But what are the best tests for scleroderma? and symptoms of Mediterranean fever? and genetics of hereditary coagulation problems? Our brains are bursting.

    Is it good enough theoretically to help us? Should we push for AI or fund its research? Can it crudely scan and digest a Harrison’s textbook of internal medicine? I keep hoping that the researchers at the AI tip-of-the-spear could tell us the prognosis for this stuff. Where are we now?

  3. There’s no particular reason for the AI to come through any organization. The seeds of competition between the organization’s EHR vendor and the cognitive computing vendor are slowly becoming apparent. IBM Watson has already announced a collaboration with the American Cancer Society to provide access directly to physicians and patients. As a physician, do I want my AI to be an extension of organizational control the way my EHR is? As a patient, do I want my AI to be fed my record from some inscrutable organization with an EHR “VDT portal” or do I want my AI to be looking at the health record I’ve accumulated and control?

    Medicine begins and ends with the physician-patient relationship. We have ample evidence of the limitations of the EHR and organizational Meaningful Use model. The AI will be way too smart to fall for that one the way the hospitals did.