
Defining Interoperability: An Interview with Grahame Grieve

Grahame Grieve is a long-time leader within HL7 and one of the key drivers behind FHIR. He chats with Leonard Kish about what’s been happening and what’s ahead for interoperability.

LK: First tell me how you got into standards… it’s kind of an odd business to get into.  Why have you chosen this and why are you excited about it?

G: It happened by accident. I was working for a vendor and we were tasked with building some data exchanges, and I wanted them to be right the first time. That was the philosophy of the vendor: if we did it right the first time, we wouldn’t have to keep revisiting it, and that meant using the standards correctly. The more I got involved, the more I discovered that it wasn’t obvious how to do that, and that the standards themselves weren’t good. I felt personally that we need really good standards in healthcare. So it became a personal mission. I got more involved through the company I was working for, and eventually I left so I could keep doing what I wanted to do with the standards. I enjoy the community aspect of standards work and feel very strongly that it’s worth investing time in, and I had the opportunity to build a business out of it, which not many people do. So now I freelance in standards development and standards implementation.

LK: There’s a lot of talk in Congress about the lack of interoperability and everyone probably has their own definition. Do you have a working definition of interoperability or is there a good definition you like for interoperability?

G: The IEEE definition, getting data from one place to another and being able to use it correctly, is pretty widely used. I guess when you’re living and breathing interoperability you’re kind of beyond asking about definitions.

LK: Are there ways to measure it then?  Some people talk about different levels; data interoperability, functional interoperability, semantic interoperability.  Are there different levels and are there different ways to measure interoperability?

G: We don’t really have enough metrics. It’s actually relatively easy to move data around. What you’ve got to do is consider the costs of moving it, the fragility of the solution, and whether the solution meets the user’s needs around appropriateness, availability, security, and consent. Given the complexity of healthcare and business policy, it’s pretty hard to get a handle on those things. One thing that is key is that interoperability of data is neither here nor there in the end, because if providers continue with their current work practices, the availability of data is basically irrelevant; they treat themselves as an island. They don’t know how to depend on each other. So I think the big open area is clinical interoperability.

LK: Interoperability in other verticals mostly works. We hear talk about Silicon Valley and open APIs. There’s perhaps less commotion about standards, maybe because there are fewer conflicting business interests than in healthcare. Why is healthcare different?

G: First of all, from an international perspective, I don’t think other countries, where the incentives are different, are by and large better off. They all have the same issues, and even though they don’t have the business competition or the funding insanity that you do in the US, they still have the same fundamental problems. So I hear a lot of stuff from the US media about that, and I think it’s overblown. The problem is more around micro-level transactions and the motivations for them, and fundamentally it’s the same problem of getting people to provide integrated clinical care when the system works against them doing that.

LK: So can you give me an example of how things are maybe the same with the NHS or another country vs. the US, in terms of people not wanting to exchange clinical data?

G: In Australia, there’s a properly funded health care system where the system is overwhelmed by the volume of work to be provided. No one gets any business benefit from not sharing content with other people. Still, because you have to invest time up front to exchange data and other people get the benefits later, there are very low participation rates for any kind of voluntary data sharing scheme that you set up. There are scandalously low adoption rates. And that’s not because it’s not a good business idea to get involved; it’s because the incentives are misaligned at the individual level (and the costs are up front).

LK: Right, so maybe it’s also a lack of consumer drive? It’s their data and you’d expect the incentives to align behind them, but they don’t ask and don’t get, maybe because we (or our providers) only access the record when we really need it. It’s not like banking or email or other things we use on a daily basis?

G: Probably that’s part of it, but from a consumer’s point of view, what does getting access to their data actually do for them?

If they can’t share their data with other clinicians, and have those clinicians use the shared data effectively, then it’s not that significant. And that’s where the challenges are: getting clinicians to start thinking of care in an integrated framework. One of the things we’ve got running here in this country is disease-focused clinics. So there’s a breast cancer clinic, and if you’re diagnosed with breast cancer they get you in, and in one day they do all the scans and all the consults and everything. If you don’t get involved with one of those, and there’s only a few of them in the country, then you have to schedule each of those things separately, and it can take you months to go through the process instead of hours. And all it is is a combined, coherent scheduling problem. We’ve almost got the ability to integrate scheduling systems really tightly. It’s a scandal. This happened to a friend of mine: there were weeks between each appointment. It’s a disgrace.

LK:  Any idea how that kind of stuff is going to get fixed, as standards come and go?

G: I think of standards as a precondition. If they can exchange data correctly, then they can start asking themselves whether they want to. Whereas if they can’t exchange data correctly and usefully, then they don’t even get to ask the question. So standards are just a precondition to asking, “How do we have patient-focused care without having to build specific institutions around a particular process?”
                  
LK: So let’s move on to FHIR.  You’ve worked with HL7 a long time and now have brought FHIR forward.  Has it been kind of an easier process?

G: In some ways it’s easier.  The process we’ve set up relies very heavily on social media and existing internet development infrastructure in order to be transparent and open.  We inherit very much from the open source movement.  And that kind of has always been the ethos of HL7 as well, but HL7 was established before those things.  So there’s a legacy way of operating that’s not optimal and it’s a matter of gradually moving the working organization to a new set of processes and technologies based on collaborative media that have come out of the webspace.

LK: When you say the process uses social media, you’re referring to a kind of internal social media. You don’t mean Twitter and Facebook right?

G: Well, we use Twitter, we use a wiki, we use Skype extensively, we use blogs: some of the less high-profile areas of social media. Although we do use LinkedIn for notifications, and there’s often discussion in the LinkedIn groups around FHIR. But probably the wiki is the one we use the most.

LK: Tell me about Version 2 of FHIR.  What’s in it?

G: Technically we made 1500 changes to the spec, some of them pretty sweeping, so it was very much a thorough overhaul. I’ll forward you the press release, which is not yet finalized but should be by the time we finish this call. It has the best summary around.

Note: you can find the (.pdf) Version 2 Press Release here:

LK: So is it more of an update, or are there expanded resources?

G: Yes there’s a bunch of new resources around clinical, administrative and financial stuff.  The financial stuff is still pretty preliminary.  And we’ve explored some clinical areas where there’s never been interoperability before so we’ll have to see how that goes.

LK: Can you tell us a little bit about how SMART on FHIR enhances FHIR, and are there going to be other things on FHIR as we go forward, with different enabling bodies working together?

G: FHIR is a base API for all sorts of usages. One of the most common usages is going to be exchanging healthcare information between EHRs and within an EHR in its internal extensibility environment. And that’s where SMART on FHIR fits in and provides a really neat solution for what EHRs need to do. So personally I think most data exchange using FHIR will use SMART on FHIR, because the driving need is in the EHR space. And I think SMART on FHIR is a great extension to FHIR around that. I think there’ll be other extensions that are more in the corporate backbone space and more around knowledge-based services, things that are not so user-specific. Those aren’t formed yet, and SMART on FHIR is the one we’re throwing all our weight behind because it meets the immediate needs.
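
Note: for readers new to the API, here is a minimal sketch of what a SMART on FHIR-style read looks like in practice: an app presents an OAuth2 access token and pulls a resource over FHIR’s REST interface. The server URL, token, and patient ID below are hypothetical placeholders, not any real endpoint.

    # Minimal sketch of a SMART on FHIR-style resource read.
    # The endpoint, token, and patient ID are hypothetical placeholders.
    import requests

    FHIR_BASE = "https://ehr.example.com/fhir"    # hypothetical FHIR server
    ACCESS_TOKEN = "example-oauth2-token"         # would come from SMART's OAuth2 launch flow

    def read_patient(patient_id):
        """Fetch a single Patient resource as JSON over the standard REST API."""
        response = requests.get(
            f"{FHIR_BASE}/Patient/{patient_id}",
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Accept": "application/fhir+json",
            },
        )
        response.raise_for_status()
        return response.json()

    patient = read_patient("123")
    print(patient.get("name"))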

LK: How does Argonaut fit into this?

G: Argonaut is a supporting movement. It’s a volunteer organization. It was not obvious originally that the FHIR project would keep to its self-described deadlines (for adoption). We had people going, “I don’t know if we’re going to make it; there’s big pieces of work that we probably don’t have time to do.” And the big EHR vendors said, “No, we don’t want that to slide. What can we do about that?” So they did two things that we talked about. First, they funded a specific piece of work that wouldn’t otherwise have happened and that was a precondition for us, and then they banded together to drive adoption of the standard, both as a quality measure for the standard itself and also as a way of encouraging ONC to adopt the standard earlier. They are working with HL7 and ONC to clarify further the appropriate way to use FHIR in the USA, because there are some parts of FHIR that are very international in perspective. For instance, because there is no consistency around the world in how medication is encoded, we don’t make any rules in FHIR about how medication is encoded. But clearly in the US context, you want to agree on how the medication is encoded. So we’re working through issues like that in association with the Argonaut members.
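
Note: to make the medication-coding point concrete, here is a rough sketch of the same FHIR medication element expressed two ways: first with only free text, which is all the international base specification demands, and then constrained to an RxNorm coding, as a US realm agreement might require. The code and display values are made up for illustration.

    # Illustrative only: the same medication element, first left open (base spec),
    # then pinned to RxNorm as a US agreement might require. Code values are hypothetical.
    open_medication = {
        "medicationCodeableConcept": {
            "text": "Amoxicillin 500 mg capsule"   # free text alone is permitted internationally
        }
    }

    us_profiled_medication = {
        "medicationCodeableConcept": {
            "coding": [{
                "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
                "code": "308191",                  # hypothetical RxNorm code
                "display": "Amoxicillin 500 MG Oral Capsule",
            }],
            "text": "Amoxicillin 500 mg capsule",
        }
    }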

LK: Right, and you said that most of the big EHR vendors were pushing for deadlines not to slip, so presumably most vendors were in favor of FHIR and seeing it go forward. Do you feel like the religion around interoperability among the big EHR vendors has changed in the last few years?

G: I don’t do that much work in the implementation space for EHRs in the States, so I don’t work with the operational EHRs, I work with the development teams. And from where I sit, they’re all under instructions to make interoperability happen. They’re all believers. Maybe it’s a time-to-market thing, since this is going to take seven to ten years to come to fruition, and maybe in that time frame the market chatter will change. Because as far as I can see, the vendors believe in data exchange. As a vendor myself, we rapidly saw that holding on to the data in order to protect a business relationship effectively equated to reducing your service to your customer in order to make the customer happy. Well, that’s crap. Everyone at the IT level is behind interoperability, because we’ve been shown it’s a better way to build systems. But as I said, the business/clinical people are not yet converts to the cause.

LK: You hear things about different sales teams pushing independent providers onto a single system and things like that.  That’s a different part of the organization.  But to your point, I think developers want to see things happen and it’s great to hear that they’re under orders to make it happen.  It’s refreshing actually.  

G: Well, that’s what I see. It might be that they want to have the capability but won’t roll it out with the same certainty with which they’ve developed it. But most certainly they’re saying, “Oh, I want to develop the capabilities.”

LK: Anybody can use FHIR right?  Do you have a sense of scale as to how fast adoption is going with FHIR, and how consumer-generated data might fit in?

G: There’s some production usage integrating disparate healthcare systems around the world, EHRs in late preparation to go live, and integration systems (for instance, the Commonwell patient integration system is based on an early version of FHIR). Adoption is running way ahead of where we thought it would be for what was effectively an alpha release. Today’s release is kind of a beta, and we’ll get a lot of adoption.

There isn’t a lot of production adoption in patient-collected data yet. With very patient-focused content, the question is going to become: “The standard is complex, more complex than rolling your own, so why shouldn’t you roll your own?” And there’s a bunch of reasons the standards are more complicated: we cater to more use cases. For example, a simple BMI is based on height and weight, so you could do an API based on height and weight. But there’s a bunch of particular populations for which that is not appropriate. Amputees are a great example: you get an amputation and my formula is no longer appropriate for you. Standards around patient data become, in the end, a human rights issue. If you’re going to offer a patient-focused service but your API makes assumptions that don’t cater for particular minority populations, and these will be different kinds of minority populations than the ones traditionally focused on, then in essence those patients will be clinically handicapped, and that will become a human rights issue. And I predict that there will be quite a lot of focus on this issue two to three years from now, when social health services are really ramping up and a significant population of patients find they can’t use the services because they don’t meet the simplifications people assumed they could get away with making.
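
Note: Grieve’s BMI example, sketched in code. A “roll your own” height-and-weight API bakes the standard formula in and quietly gives wrong answers for the populations it never considered; a richer standard carries the context that tells you when the formula no longer applies. The adjustment shown is a simplified placeholder, not a clinical rule.

    # Sketch of the BMI point above. The naive service hard-codes weight / height^2;
    # the context-aware version uses extra record context (here, a hypothetical
    # limb_loss_fraction field) to estimate full-body weight first.
    # Simplified illustration only, not a clinical rule.
    def naive_bmi(weight_kg, height_m):
        """What a minimal height/weight API typically does."""
        return weight_kg / (height_m ** 2)

    def context_aware_bmi(weight_kg, height_m, limb_loss_fraction=0.0):
        """Estimate full-body weight before applying the formula when part of
        the body mass is absent (e.g. after an amputation)."""
        if limb_loss_fraction:
            weight_kg = weight_kg / (1 - limb_loss_fraction)
        return weight_kg / (height_m ** 2)

    print(round(naive_bmi(70, 1.75), 1))                # 22.9 for a typical adult
    print(round(context_aware_bmi(66, 1.75, 0.06), 1))  # ~22.9 after correcting for missing mass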

LK:  So, what’s next?

G: Well, we regard this as a beta now, so we have a huge amount of implementation work to do with Argonaut and other partners around the world. We know specifically that there’s a couple of big areas where we haven’t got a solution. One of those is the workflow process of ordering: asking other people to do things, and having people be able to refuse to do them and recommend alternative actions. So there’s a bunch of work to do around capturing that workflow and making it pretty easy, because people vary widely in how they handle those things. We’ve certainly got a pile of work to do around clinical decision support. Those are our big agendas for the next little while, and they will certainly keep our plates full.
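
Note: a rough sketch of the ordering workflow described above: one party asks another to do something, and the receiver can accept, refuse, or refuse while recommending an alternative action. The states and fields are made-up placeholders for illustration, not the FHIR workflow model that was still being designed at the time.

    # Illustrative placeholder for the ordering workflow: a request can be
    # accepted, refused, or refused with a recommended alternative. These states
    # are made up for illustration and are not the FHIR specification's model.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class RequestStatus(Enum):
        REQUESTED = "requested"
        ACCEPTED = "accepted"
        REFUSED = "refused"
        COMPLETED = "completed"

    @dataclass
    class Order:
        description: str
        status: RequestStatus = RequestStatus.REQUESTED
        recommended_alternative: Optional[str] = None

        def refuse(self, alternative: Optional[str] = None):
            """The receiving party declines, optionally suggesting another action."""
            self.status = RequestStatus.REFUSED
            self.recommended_alternative = alternative

    order = Order("High-resolution chest CT")
    order.refuse(alternative="Plain chest X-ray first")
    print(order.status.value, order.recommended_alternative)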


              

2 replies »

  1. Data of course varies in reliability and truth to reality. You can have levels of potassium done in a clinical chemistry lab that are accurate to, say, plus or minus 3%. On the other extreme you might have the rambling discussion of a radiologist giving the differential diagnoses seen on a high-resolution CT of the lungs. And, finally, you can have a sort of meta-data that consists of the actual physical magnetic recordings of an echocardiogram on a hard disk or solid state memory. Accordingly, do you have boundaries that you have defined as to what data you are dealing with? Do you only want the professional’s final diagnosis? His summary? His rambling? The entire transaction, including the half-hour recording of an entire echocardiogram? And, also, much of this stuff is almost incomprehensible to almost everyone. I could give you discussions of immunohistochemistry on a given patient’s microscopic slide that would bring instant torpor. Thus, no one wants some of this stuff. So, it seems that you must have to draw lines around the data “space” that you are working with. This means that there is no such thing as completeness, ever. But, I think you are to be congratulated for even attempting such an endless job as data standardization… because someone has to do it. I guess little parts have been done, like SNOMED and the ICDs and RVSs. Good luck.