Leonard Kish talks to Douglas Fridsma, President and CEO of the American Medical Informatics Association, about his work in the Office of the National Coordinator for Health Information Technology, or ONC, and the barriers to implementing MIPS in the most useful and transparent way. In order to communicate the data, of course, we’ll need informatics; but how will that work? And which comes first, policy or technology?
Leonard Kish: When you first began your studies in medical informatics, was there a sense that the field was a science?
Doug Fridsma: After working on the Standards and Interoperability Framework for the National Cancer Institute – which was essentially crowdsourced; I engaged government, research organizations, pharmaceutical companies and standards organizations to come up with what that standard should be – I had an opportunity to go out to the University of Arizona and ASU. Ted Shortliffe, who had been my mentor at Stanford, had just been appointed Dean of the new medical school at the University of Arizona.
Fundamentally what physicians do is information management. The thought we had was, if we really wanted to change the way we use technology in medicine, then one of the biggest ways we can make an impact is to train medical students to engage in and to learn this information, then we can actually start to develop the physicians of the 21st Century.
It was a great opportunity to get in on the ground floor with a blank slate, where there could be a new effort to include informatics and information management in the training.
As I started working on developing the curriculum at ASU, we approached informatics from a new angle. If you want to teach someone about pharmacology, and you want them to prescribe medication, you need to teach them pathophysiology, pharmacology and basically all of the basic sciences. You don’t teach them how to write a prescription – that’s not the first thing you do. You certainly don’t have the pharmaceutical companies come in and teach how to write good prescriptions. Yet the way information technology was being taught at the time was just that: you would bring in the big vendors, who would spend a week teaching medical students how to use their system and then reap the rewards.
We had to figure out whether there was a way to educate students in the basic sciences of how to represent and use information, and why to collect it in certain ways (and not in others), since how you collect it is often determined by how it will be used afterwards. It was a very interesting set of activities, and much of that work is continuing even today.
LK: Teaching actual care delivery, of which health IT is a part, hasn’t until recently been part of the medical curriculum, and still only at a few medical schools. Now that we’re focusing economic attention on value and outcomes, you see a little more of it.
DF: I think we have to change the way we document care delivery. We use something called the SOAP note (Subjective, Objective, Assessment, Plan) as an outline of how medical information is captured by physicians and others. The problem is that none of those components addresses outcomes.
We talk about the subjective complaints, objective tests and physical exams, and, given that information, what our assessment is of what’s going on. Then we describe a plan of what we want to do. But nowhere in that approach to documentation do we capture what the outcome is. The next time you as a patient are seen, you talk to your medical provider about your subjective, objective, assessment and plan. But no one ever just asks the patient, “Did those medications work for you?”
As we think about value-based purchasing and quality measurement, we go through these conniptions to infer what the quality and outcome were, whereas sometimes maybe we just need to ask the patient, or have the doctor record what the outcome was. I think there need to be fundamental changes in medical school curricula just for us to make outcome assessment happen.
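To make the gap concrete, here is a minimal sketch of a note structure with an explicit outcome field; the shape and field names are illustrative assumptions, not any standard Fridsma endorses.

```typescript
// Illustrative sketch only: a SOAP-style note extended with an explicit
// outcome field. Field names are hypothetical, not drawn from any standard.
interface SoapNote {
  subjective: string;  // the patient's reported complaints
  objective: string;   // exam findings, test results
  assessment: string;  // the clinician's interpretation
  plan: string;        // intended treatment and follow-up
}

interface SoapNoteWithOutcome extends SoapNote {
  // The missing piece Fridsma describes: what actually happened.
  // E.g. the answer to "Did those medications work for you?"
  outcome?: string;
  outcomeRecordedAt?: Date;
}
```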
LK: We work with a telemedicine company, and your description of SOAP comes up in our conversations. We talk about going from event-based encounters, where a plan has to be made, to a really longitudinal perspective. Information technology is just beginning to allow us to track a patient day in and day out, and this shift necessitates interoperability.
Interoperability is becoming an unusually hot political topic. Some of that is perhaps because of the high tech industry; some of it is coming out of 21st Century Cures, the bill making its way through the health committee; and some of it comes with the Precision Medicine Initiative. There seems to be recognition that for medicine to move forward, it depends on the availability of information. In your opinion, what’s going on there?
DF: I’ve tried to maintain a consistent definition of interoperability for the last six years. The first thing I can describe is what interoperability is not. It is not a state of utopia in which there is information liquidity. You will hear this all the time: we want ubiquitous information, free-flowing data, data liquidity and all those things.
But interoperability is not that state of the world. Interoperability is defined operationally. I use one of the definitions from the IEEE folks. The best version is that interoperability has two parts: the first part is the ability of two or more systems to exchange information, and the second – the one we usually overlook – is the ability of the systems to use the information that has been exchanged. It’s about exchange and use.
An example: I use a Mac and you use a PC. I use my mail program to write an e-mail and send it to you. We can exchange information: you open up Outlook and read my message. We’ve exchanged information. But if I wrote my e-mail in German and you only speak French, then we’ve exchanged information but there’s nothing you can do with that information… unless you translate it.
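In code, the exchange-versus-use distinction might look something like this (an illustrative sketch; the message shape and the French-only receiver are invented for the example, not part of the IEEE definition):

```typescript
// Exchange: the bytes arrive intact. Use: the receiver can act on them.
type Message = { language: string; body: string };

function receive(msg: Message): string {
  // Exchange has already succeeded by the time we get here: we hold the message.
  if (msg.language !== "fr") {
    // We "speak" only French: exchanged, but not usable, so no
    // interoperability for this purpose until someone translates.
    throw new Error(`Exchanged but unusable: body is in "${msg.language}"`);
  }
  return msg.body; // exchanged AND usable: interoperable for this purpose
}

receive({ language: "de", body: "Guten Tag" }); // throws: exchange without use
```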
The question is: have we had true interoperability? We’ve had exchange, but not really interoperability. The reason why I like the definition of exchange and use is that, when someone says, “I’m so upset, my system is not interoperable,” my first question can be, “What do you want it to do?”
Interoperability cannot be achieved in the abstract. It can only be achieved, given a definition, in the concrete. Defining interoperability as exchange and use forces us to ask ourselves, “What are we trying to accomplish?”
When I was seeing patients at Mayo Clinic, I often worked in their Urgent Care Center. I saw a lot of folks that had been seen the night before, perhaps in the emergency room, and they just needed follow up.
Frequently, they’d come to me with this yellow carbon copy that had been written with a felt-tipped marker, where the notes didn’t come through as well as they could have. I’m trying to look at this yellow sheet of paper to determine what medications they’re on, what tests they got and what the results of the tests were, and it’s all scribbled at 3 o’clock in the morning and I can’t really read any of it.
If I had an electronic copy of that in my inbox, even if it was a scanned image of the original, I’d have interoperability, because I’d be able to exchange the information and use it for the purpose of understanding what happened the night before.
On the other hand, if my goal is to have that document, and any new medications the patient had been prescribed, automatically incorporated into my electronic health record, I wouldn’t have interoperability for that function.
Interoperability for legibility is different from interoperability for long-term care. If we have “exchange” and “use”, then it’s the “use” part that really defines what interoperability is.
LK: Where can we go from here in terms of the baseline of interoperability? I perceive there is a base level of frustration in the industry at large about how one hospital on one system can get basic information to another system. What are the best measures to figure out whether it’s about “uses” or about something else? How can we assess the quality of our interoperability?
DF: Healthcare is what I would term an “ultra large-scale system,” and a report that the Department of Defense did back in 2006 is illuminating. What do you do when you have a billion lines of code that all have to work together and seamlessly exchange information across systems? The report was focused on communication, navigation and other kinds of systems, but it does a good job of articulating the characteristics of an ultra large-scale system.
We need to frame the question. If you frame it as “we need to build some giant architecture that makes this all work,” you’re going to end up with AOL and not the World Wide Web. If you frame it around achieving a policy, like lower costs or higher quality outcomes, technology can help you get to that policy.
I think you need to think more strategically, realizing you are building an ecosystem, not a product. The large scale approach frames the problem better, accepting that we’re never going to centralize medical informatics; it’s always going to be decentralized.
The World Wide Web, which is probably the best example of a successful ultra large scale system, breaks down the technology into a fundamental stack of building blocks. One of our biggest challenges is that we haven’t thought strategically about how we create a set of fundamental building blocks so that the next use case can actually leverage what has been done instead of starting over with new approaches.
It’s sort of like those early telephone systems with all the cords, where an operator had to pull a cord to make each connection. It becomes tremendously complicated because everything is a one-to-one connection; scaling becomes unmanageable.
Whereas if, as with the World Wide Web, you create individual packets to be routed and transferred, you can have a set of standards rather than these one-off solutions.
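The scaling argument can be made concrete with a little arithmetic: with n systems, one-to-one wiring needs n(n-1)/2 custom interfaces, while a shared standard needs one adapter per system. A small sketch (the numbers, not the code, are the point):

```typescript
// With point-to-point connections, every pair of systems needs its own
// custom interface: n * (n - 1) / 2 of them.
function pointToPointInterfaces(n: number): number {
  return (n * (n - 1)) / 2;
}

// With a shared standard, each system needs just one adapter to the standard.
function sharedStandardAdapters(n: number): number {
  return n;
}

console.log(pointToPointInterfaces(10)); // 45 custom connections
console.log(sharedStandardAdapters(10)); // 10 adapters to one standard
```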
LK: It seems like the lack of interoperability is in some ways used as a strategic advantage. So how do we get from this insular or institution-based perspective of interoperability to a global perspective? Does it have to be a legislative solution? And how do we communicate and bridge to the consumer (which may actually be key to decentralized thinking)?
DF: There are three fundamental things we need to turn the ship in a better direction. The first is that we need to focus on those fundamental building blocks: how we represent meaning, how we structure information, how we transport it, and how we secure it. Those four things are really just an API.
But unless we think about what those fundamental building blocks are, even if we develop APIs, we’re still going to be in the situation with the switchboard and the cords. The building blocks are the first goal, and one of the pieces that’s missing is how we represent granular data, because most of our exchange right now is document-centric.
We need to move from document-centric to data-centric exchange. We need a way to represent data at a granular level, because that’s how we’re going to be able to calculate quality measures, do decision support, and do a lot of the other sophisticated, computable things that we need to do.
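As one illustration of what granular, data-centric exchange can look like, here is a sketch loosely shaped like an HL7 FHIR Observation; Fridsma does not name FHIR in this interview, so treat the structure as an assumption for the example, not his prescription:

```typescript
// Illustrative sketch of granular, data-centric exchange. The shape loosely
// follows an HL7 FHIR Observation resource; it is an example, not a mandate.
const serumCreatinine = {
  resourceType: "Observation",
  status: "final",
  code: {
    coding: [{
      system: "http://loinc.org", // shared vocabulary: "represent meaning"
      code: "2160-0",
      display: "Creatinine [Mass/volume] in Serum or Plasma",
    }],
  },
  valueQuantity: { value: 1.3, unit: "mg/dL" }, // computable, not a scanned page
};
```

Because each value carries its own code and units, a receiving system can compute with it – run a quality measure, trigger decision support – rather than merely display it.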
The second thing we need to realize is that it’s very likely we’re going to develop ecosystems with a high degree of interoperability within them and a low degree between them. It’s like smart phones or computers: some people have iPhones, others have Androids; some people use PCs, some Macs, and some Linux.
APIs can be really helpful within an ecosystem but can be challenging between ecosystems. We don’t require that Android works on iPhones or vice versa, but it’s really useful to have ways of exchanging information between those things so that if I’ve got a PC and I’ve got my iPhone, I can still move music back and forth because there’s this thing called an mp3 or an mp4.
I think in healthcare we need that granular data that will help us develop good APIs and clinical decision support, but I also need to be able to move from the Android system to the Microsoft system if I need to. To me, that means a patient having a full extract in a computable – maybe not interoperable, but computable – format that they could take from their Cerner system and move to their Epic system, or take from Epic and move to a NextGen system.
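A minimal sketch of what such a full, computable extract could look like, assuming newline-delimited JSON as the carrying format (the format and function names are invented for illustration; nothing here is a real vendor API):

```typescript
// A "full extract": every resource in the record, serialized one per line.
type Resource = { resourceType: string; [key: string]: unknown };

function exportFullRecord(resources: Resource[]): string {
  // The patient carries this file from one ecosystem to another.
  return resources.map((r) => JSON.stringify(r)).join("\n");
}

function importFullRecord(extract: string): Resource[] {
  // The receiving system can at least parse every entry, even if it only
  // understands some of them: computable first, interoperable later.
  return extract.split("\n").map((line) => JSON.parse(line) as Resource);
}
```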
HIPAA allows that to happen, but there’s been a lot of pushback, and vendors would rather say, “Well, we’re not going to give patients the entire extract; we’re going to give them access to selected information in an API that we develop.” The issue with that is that it’s then the vendors who get to decide what information they will share and what information they won’t. That entrenches those ecosystems and sort of locks them in.
If you can create a way of having granular data for things like clinical decision support and quality measurement, and the ability for patients to have a full extract of their information and actually move it between ecosystems – not entirely interoperable, but at least computable – it becomes possible to support things like precision medicine.
Precision medicine is about discovering new associations and new ways you can relate genetic information to behavioral, environmental and medical information. APIs presuppose that you know precisely what information you need to share. Precision medicine says, “We don’t even know what we’re looking for.” So we need a way to have all the data accessible, to move between ecosystems and to be available for discovery by science.
“we haven’t thought strategically about how we create a set of fundamental building blocks so that the next use case can actually leverage what has been done instead of starting over with new approaches.”
With all due respect, all we’ve done is think “strategically.” That’s why we do not have total interoperability today and only hope to have it by 2024.
If the likes of Ma Bell and other early telephone companies had pursued Dr. Fridsma’s approach to build the “ideal” granular network, the world would have been silent for decades.
Compared to today’s electronic switchboards, the switchboards of the recent and distant past appear primitive. But you know what? They met the need for people to communicate with the tools they had available at the time and within the constraints they faced. And the world was changed forever.
Similarly, think how many millions of patients benefitted when doctors could share patient records by fax. Should we have refused to embrace the fax while we sought granular data?
Then came the cell phone. It removed the requirement that phones be connected by land lines and now enables billions more people to talk by phone. Should we have waited until cell phones could be created before adopting land line technology?
Of course not. Improvements will always emerge. But there is no reason to ignore what we can do today.
The fact is, we can achieve total interoperability in healthcare today with the tools we have today. Yes. Now. Today.
We can meet the rapidly growing demand and anger of doctors who want to access their patients’ complete medical record so they can avoid mistakes, coordinate care and reduce the cost of care—but can’t because we are looking for theoretical, granular solutions. And we can meet the growing demand and anger of patients who want to control their records and know their providers can access their complete record anytime, anywhere.
So let’s do today what we can do today to benefit the millions who need their records available today! And when new use cases and approaches emerge, let’s be open to adopting them, too.
They told us the reason for the EHR was to save money and to make the ACA budget neutral.
I think the real reason might have been a ‘pay-off’ to gain supporters for the Act…
A patient with a creatinine of 1.3 mg/dL does not want his new employer’s plan to get this data from his old plan.
A physician who has a patient with a history of a positive ANA is going to want to repeat it, willy-nilly interoperability’s promise of reducing lab duplication.
A husband does not want to know through interoperability that it is not possible, serologically, for him to be his wife’s baby’s father.
Interoperability gives data wings and flight. Hospitals have to own data and make it theirs. They can’t abide wide data dissemination. E.g., a nurse writes in the nursing notes: “We couldn’t find the hospitalist to renew drug orders.” The patient dies needing this drug. The hospital has to remove this note. It must. It would otherwise be dead in litigation. In several days, the note has disappeared… surprise!
Where’s the dad-gummed “Like” button? Thank you.
The web was not built on APIs. It was built on linguistic references within a common information space. Trying to build a complex, adaptive system out of fixed APIs and standards is simply not going to work. The web started with amazingly simple initial conditions and evolved from there. As Tim Berners-Lee said, “What was often difficult for people to understand about the design of the web was that there was nothing else beyond URLs, HTTP, and HTML. There was no central computer “controlling” the web, no single network on which these protocols worked, not even an organization anywhere that “ran” the Web. The web was not a physical “thing” that existed in a certain “place.” It was a “space” in which information could exist.”
The glue that holds the web together is symbolic references, not hard-coded APIs.
The lesson to take away from the web is that we need to look at health care as an information space, linked together symbolically rather than through APIs.
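As a sketch of that idea, here is what a generic client over a linked information space might look like: records reference one another by URL, and one generic operation follows any link (the record shape and URLs are invented for illustration):

```typescript
// "Symbolic references" rather than fixed APIs: each record simply links to
// related records by URL, and a generic client follows the links.
type LinkedRecord = {
  url: string;
  body: string;
  links: string[]; // symbolic references to other records
};

async function follow(url: string): Promise<LinkedRecord> {
  // One generic verb (GET) over one common space of names (URLs):
  // no record-specific API is needed to discover related information.
  const res = await fetch(url);
  return (await res.json()) as LinkedRecord;
}
```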
See some of my other “interoperababble” rants here: http://regionalextensioncenter.blogspot.com/search?q=interoperababble
“Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems…”
___
THAT would require a standard data dictionary (used by all EHR systems) or, less optimally, comprehensive n-dimensional cross data mapping (“interfaces”). Think of a standard data dictionary perhaps as “type-O blood.” http://regionalextensioncenter.blogspot.com/2014/02/we-should-not-prescribe-specific.html
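To illustrate the contrast this comment draws, a small sketch under the assumption of hypothetical local codes: a standard dictionary needs one map per system, while pairwise cross-mapping needs one per pair of systems:

```typescript
// Option A: every system maps its local codes to one standard dictionary
// ("type-O blood"), so n systems need n maps. Local codes here are made up;
// "2160-0" is the LOINC code for serum creatinine.
const toStandard: Record<string, string> = {
  LAB_CREAT_A: "2160-0", // vendor A's local code -> the standard code
  CR_SERUM_B: "2160-0",  // vendor B's local code -> the same standard code
};

// Option B (n-dimensional cross-mapping) would instead need a pairwise
// interface between every two systems: n*(n-1)/2 maps rather than n.
function translate(localCode: string): string | undefined {
  return toStandard[localCode];
}
```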
Since the days of the CHINs (i.e., Community Health Information Networks), back in the late 1990s, we have spoken about “semantic interoperability.” After reading the post, all I see is “semantic interoperability” updated to include a “transportation layer.”
Am I too far from what is being described as the “real goal” of “interoperability”…?
Any comments welcome.
I see “interoperababble” is alive and well, inclusive of leaving out a key phrase in the IEEE definition of interoperability: “…without special effort on the part of the user.” No amount of calling n-dimensionally interfaced “data exchange” “interoperability” will make it so.
Moreover, with respect to “SOAP,” it would properly be “SOAPe” (kudos to my former sup Keith Parker at my QIO/REC/HIE for the observation), wherein the “e” refers to “evaluation” – i.e., the “outcome” evaluation of the assessment and plan. In PDSA terms, the “e” would be the “S,” the “Study” component of science-based QI.