TECH: Interoperability/schminteroperability

This week the Clinton/Frist (or should it be Frist/Clinton?) legislation got on breakfast-time TV, and Brailer’s office announced that it was going to start the first few pilots toward interoperability, with some $60m available. A more ambitious $4bn bill was introduced too, although that won’t go anywhere unless someone adds the words "Terror" or "Iraq" to the title. But while all the fuss is about interoperability of data transfer, there is a whole set of players whose data needs to become electronic before it can be made "interoperable". While the larger medical groups and hospitals are rapidly getting on the EMR adoption curve, adoption is a much slower process among the small practices that account for 75% of America’s doctors and patients — most of their information is stuck on paper. Other countries solved this problem the old-fashioned way — the government paid for doctors to get EMRs in their offices. Before we get too worked up about interoperability and RHIOs, a bigger national push to get smaller practices using clinical information technology might be a better idea.


18 replies »

  1. When Kaiser or any organization, large or small, stores patient data, it risks all the concerns about HIPAA violations (see the news of the $200k fine for a HIPAA violation). Hence, it is critical to look at the challenges of protecting individual privacy in a centralized data repository and in local data as one problem, not separately.
    Since technology will always have failures at some point, we need to make sure all eHealthcare applications that access such data adhere to the requirements of HIPAA.

  2. //applications don’t run off of their own duplicated “marts” but on the same physical version of the source data that every other application in the enterprise is running on.//
    It depends on the application.
    Let’s say you have a data warehouse with patient info. Some applications will access the data directly, and the “data set” will be limited to whatever the user asked for. This data set is only at risk if the user prints it out, makes an electronic copy, or forwards it somewhere.
    However, the IT world (especially Health IT!) is not that simple. Imagine it’s not a user that needs the data set, but another application. Let’s imagine the Chronic Conditions System. The Chronic Conditions System not only has rules to access this particular set of data directly, it actually creates the data set automatically – as a “job” – because that data is needed by a downstream application.
    Now let’s say we have 3 or 4 applications that make use of the Chronic Conditions System. One of them is Angina Alerts. Angina Alerts gets its daily report from the Chronic Conditions System – that’s a traveling data set, detached from the original warehouse. Now imagine there are 3 or 4 applications downstream from Angina Alerts: the Patient Contact System, the Diabetes Information Management System, the ER Coordination System, the federally mandated Angina Demography System, etc.
    The issue is that applications become users of other applications, and these chains peel off into infinity. One of the things that was really surreal at Kaiser was that the IT Division put on this unified front and talked a lot about integration and rationalization, but the IT Division only ruled over a particular realm of national systems. Each region of Kaiser had its own systems and its own IT oversight. And even that wasn’t total oversight, because medical centers had their own little “start-up” ventures where physicians were developing their own applications for whatever they needed. These applications were registered (there was a large concern to protect them when Kaiser was trying to extract itself from Lotus Notes, for instance), but they weren’t on the agenda for “integration”. No one wanted to stifle the innovation occurring on the front lines, where physicians know what they need.
    The point is that these chains of applications get really long, and each batch job that calls for a data set is getting a copy of that data. It’s only the “foundation systems” that access the original data and create sets on the fly. The downstream systems need copies of the data.
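    The pattern described above can be sketched in a few lines (the system names here are illustrative examples, not any real architecture): the foundation system queries the warehouse, and each downstream batch job materializes its own detached copy of the data set, so the number of physical copies to secure grows with the chain.

```python
# Illustrative sketch: each downstream batch "job" materializes its own
# copy of a data set extracted from the source warehouse. All system and
# field names are hypothetical.

warehouse = [
    {"patient_id": 1, "condition": "angina"},
    {"patient_id": 2, "condition": "diabetes"},
    {"patient_id": 3, "condition": "angina"},
]

def extract(source, condition):
    """A batch job that creates a detached copy of a slice of the data."""
    return [dict(row) for row in source if row["condition"] == condition]

# The foundation system reads the warehouse directly; every system
# downstream of it receives (and holds) its own physical copy.
chronic_conditions_set = extract(warehouse, "angina")          # copy 1
angina_alerts_set = [dict(r) for r in chronic_conditions_set]  # copy 2
patient_contact_set = [dict(r) for r in angina_alerts_set]     # copy 3

copies = [chronic_conditions_set, angina_alerts_set, patient_contact_set]
# Three identical record sets now live outside the warehouse, each a
# separate thing to secure.
```

    Securing the warehouse itself does nothing for copies 1 through 3, which is the point of the comment above.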

  3. I agree that securing the source data is but one aspect of information security. Having a single physical set of data – one thing to secure – is preferable to having the data split into many parts and physical copies.
    True “data warehousing” seeks to minimize (or eliminate) physical duplication of data, since duplication results in inaccuracies (and wasted resources): physical cubes are replaced by virtual cubes (in the warehouse), and applications don’t run off their own duplicated “marts” but on the same physical version of the source data that every other application in the enterprise is running on. Wal-Mart, 3M, Highmark BCBS, and many others have taken this approach… although none are 100% adherent to it.

  4. My point, though, is that securing the data warehouses won’t secure the data. I agree an organization will save money on staff and operations with fewer data warehouses – it just won’t mean anything from the perspective of protecting information. Data sets have to travel. Encryption helps, especially en route, but the data is reproduced in readable format in any application where someone queries for output results.

  5. While I agree that there won’t be a single data warehouse, fewer data warehouses are easier to secure – from a technology, staffing, and operations perspective – than having more… which was my point.

  6. I want to add that huge segments of the MegaDatabase will also be reproduced as datasets for any number of downstream applications. The reason Integration is such a hot topic today is because these applications tend to split off into thousands of unconnected little fiefdoms – patient alert system here, population management data there, billing here and there…not to mention tribes of contractors doing prototyping…there’s no such thing as keeping the data in some Fortress of Safety.

  7. There wouldn’t be one database. There would have to be several for redundancy. And then there would be tape/platter backups. I worked in a department that did such backups just for IRA accounts at Bank of America, and I can assure you this is where corporations try to cut corners. One employee was faking an understanding of Access programming, and the president of that division was getting fake summary information simply because he had no way of knowing otherwise. People would forget to label the platters, and they would just stack up in the storage room. A couple of utterly unqualified people, and hundreds of thousands of personal records could be at risk. I kid you not.

  8. Is it easier to ensure that one database (holding 300 billion records) is secure, or 5,000 databases that each hold 60 million?
    Each database contains the medical records for lots and lots of people… and compromise of any one would be a big problem.
    Insurance companies are already centralizing claims data for all kinds of analysis. Centralizing real clinical (EHR) data would make their analysis that much more accurate (in comparison to what they’re doing today with claims) and result in fewer “false” positives. How is this not a good thing?
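    One way to frame the trade-off posed in the question above, as back-of-the-envelope arithmetic. The numbers are purely illustrative, and the simplifying assumption – that every database has the same independent chance of being breached – is doing all the work here:

```python
# Hypothetical sketch of the centralization question above. Assumes,
# purely for illustration, that each database has the same independent
# annual probability of being breached.

def expected_records_exposed(n_databases, records_each, breach_prob):
    """Expected number of records exposed per year across all databases."""
    return n_databases * breach_prob * records_each

total_records = 300_000_000_000  # 300 billion, as in the comment above

# One central database vs. 5,000 shards holding the same total.
central = expected_records_exposed(1, total_records, breach_prob=0.01)
sharded = expected_records_exposed(5_000, total_records // 5_000,
                                   breach_prob=0.01)

# Under this (strong) assumption the *expected* exposure is identical.
# What differs is the worst case: one breach of the central store exposes
# everything at once, while sharding caps the damage per incident at
# 60 million records.
```

    So the question is less about expected loss than about blast radius per incident – and about whether one fortress really can be guarded better than 5,000 smaller ones, which is the disagreement running through this thread.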

  9. //errors in records (bad coding, aggressive prescribing, poor diagnosis or a diagnosis deliberately exaggerated //
    The thing I have a hard time wrapping my mind around is that it’s not just the physician’s interests that might distort a medical record. The patient’s interest in accurate diagnosis might vary over time. A patient might under-report symptoms for a range of reasons, from macho psychology to insurance consequences. Other times a patient might want a diagnosis in order to access certain services: for instance, parents might want a child diagnosed with Attention Deficit Disorder so the child will get disability services at school. A middle-aged woman with arthritis or circulation problems might want a diagnosis formalized so her insurance will pay for one of those go-carts.
    However, while one day it might be in the patient’s interest to get the go-cart, two years later they might regret that formal diagnosis when they can’t get individual insurance or an HSA. One patient might fear and avoid diagnosis at one point in time, and another might sue the physician for failing to diagnose. The patient’s interests are different at different times.
    One of the things universal insurance would do is separate medical decisions and diagnosis from long-term financial considerations (and perhaps some of the psychological issues as well). Just take the “interest” right out of it. My thought is that accuracy of diagnosis would improve across the board when the distorting factors of business are taken out.
    After the business distortion is removed, it will also be safer for all parties to centralize the data. Once that happens, I think one improvement that could be made is that patients should formally record their symptoms, and they should sign off on what was communicated to the physicians. Physicians under-report what patients tell them, and this becomes an issue in patient-physician disputes over misdiagnosis and failure to diagnose or offer the correct tests. Physicians are deliberately obscuring the information to avoid culpability and then hand-wringing over malpractice suits and patient anger. A few years ago there was a movement for “no fault” reporting in hospitals so physicians wouldn’t shirk responsibility out of fear: instead, we should be working to clarify what happened. Perhaps patients wouldn’t be so angry (which ups the demand for compensation for psychological pain and suffering) if they could quickly and easily make their point? Any coddling of physicians for their mistakes should be offered by their employer – not taken at the expense of patient victims.

  10. //given the proper safeguards//
    One word, dude: Mastercard. There will never be proper safeguards to protect the data. There needs to be proper safeguards to protect people from the consequences of its theft.
    My Mom became the victim of identity theft when someone stole her purse two years ago. She did all the right things: she filed a police report, cancelled her credit cards and implemented a fraud alert with the credit agencies, cancelled her checks and had an identity theft affidavit notarized at the bank, etc. This was a lot of time and effort. But despite all this, to this day my Mom is the one who has to deal with the collections agencies when businesses foolishly accept checks in her name. She has to copy the affidavit and police report, buy a stamp, and go to the post office to mail the proof that her identity was stolen. Worse, I worry that one of the collections agencies might actually turn out to be a scam to steal more from her.
    While I haven’t had my identity abused yet, I’ve received various letters warning that my identity information had been stolen: most recently from the big U.C. Berkeley computer theft, in which somebody got my social security number. Creepier still – the University felt obliged to gather more information on me through “research” in order to contact me about the matter.
    Every other night there’s a story on the news about data theft. Pundits and politicians are pouring on the big rhetoric about the need to secure data.
    This is the wrong approach to take. And I point to my Mom’s experience to show how sensitive I am to the issue when I say that.
    I’m also not a technology Luddite: I agree that indexed data could be used to save lives through rapid access to information and to coordinate a response to bioterrorism. There are many ways technology could emancipate people and improve the quality of people’s lives… if our society were responsible enough to use it that way instead of handing technology over to the interests of robber barons and social control that only benefits the wealthy and the privileged.
    My argument is that we need to reduce the stress and anxiety caused by data theft: we need to make it not worth stealing. We need to develop technology that protects individual rights and interests instead of business interests. In the case of the EMR: how about technology to make it easier for patients to get billing issues settled quickly, instead of technology to harass people and wear them down until they pay overcharges? In the case of financial data: why is it that banks and credit card agencies have the technology to trigger fees and penalties, but my Mom can’t just register her identity theft once and be automatically protected from calls from collection agencies and demands to copy and mail in proof?
    My problem with technology is that there just isn’t enough of a public-interest pull to make sure that the technology that will improve lives is the technology that gets developed. Businesses, including HMOs, make business decisions: they look for return on investment and for what will make life easier for the doctors. When I worked for the CTO of Northern California, our technology clients were the *physicians*, not the *patients*. Our mandate was to make life easier for the *physicians*. When the CEO decided to buy the EMR from Epic, all the shoptalk was about the billing system and how Kaiser was going to repackage Epic with the patient data it had the capacity to gather and resell it to other HMOs and the Government. Not one word about how the patient might benefit or how the patient’s interests might be protected.

  11. “Underwriting is constrained by laws, not by the capabilities of insurance companies to do better underwriting. They have enough data today to figure out who’s a good and bad member / group. They’re just -legally – not allowed to underwrite that way.”
    They can’t legally charge an individual more than the rest of the group the individual is placed in, but they can use that data to determine the premium group the individual is placed in and in many cases whether or not the individual’s initial application should be denied or coverage modified by exclusion of pre-existing conditions. That’s the issue I worry about.

  12. My concern relative to centralized data collection isn’t that people won’t be able to hide medical conditions from underwriting, but that errors in records (bad coding, aggressive prescribing, poor diagnosis or a diagnosis deliberately exaggerated to receive higher reimbursement) could in fact result in relatively healthy people being rated improperly and being forced to pay higher premiums. Think about the challenges consumers face when mistakes are made in their credit reports. There is a process to correct that but it is time-consuming and puts the burden on the consumer. The complexity of medical records would make correcting a consumer’s medical transaction record a lot more difficult because “bad coding” or “poor diagnosis” is hard to argue.

  13. “The thing I fear is that it will create a centralized database that will make it easy for insurance companies to access in underwriting…”
    Underwriting is constrained by laws, not by the capabilities of insurance companies to do better underwriting. They have enough data today to figure out who’s a good and bad member / group. They’re just – legally – not allowed to underwrite that way.
    The more that data is decentralized, the less it’s useful for analysis… for good things like identifying and recalling dangerous drugs, reducing regional, local, racial, and economic disparities, reacting to bioterror, etc, etc.
    And although data that is centralized might be a bigger target, it certainly could – given the proper safeguards – be more secure than a series of decentralized databases, each requiring the same safeguards / protection.

  14. All I know is that the billing module was what Kaiser wanted when they made their deal with Epic, and it was the first thing they were going to implement (starting in Colorado, if I remember correctly). I’m not sure whether Kaiser was after the point-of-sale extractions, for patient profiling, or just the ability to keep on top of billing efforts made in order to implement fees and penalties.

  15. How does anyone see the EMR working in billing? Will it help to decrease billing errors? How?
    Does anyone have any thoughts about the unintended consequences of this? Or the new errors that will be created (as with electronic prescription writing – errors due to poor handwriting decreased, while other types of errors increased)?

  16. And…it puts more power in the hands of technical contractors, and the security of that database of information is going to be subject to the random decisions of cheap programmers in other countries who are not subject to the laws and public scrutiny of this country. My prediction is that the national EMR is going to turn into a nightmare of Orwellian proportions.

  17. The thing I fear is that it will create a centralized database that will make it easy for insurance companies to access in underwriting and ultimately lead to computerized “health scoring” with no patient input.

  18. As a voice in the wilderness, I once again have to point out that the EMR that will be adopted first is point-of-service charging and billing systems. This has nothing to do with delivery of health care: the public is going to pay for new ways to deny care and harass people, and the improvement of health services is going to be put way down at the bottom of the list. This is not an issue about deadbeat patients: this is an issue of the normal citizen who won’t be able to stand up against technologized billing when they get ripped off. And this is about people who are ill or in pain being forced to reconsider seeking medical care as they go from point to point, using credit to pay for services they can’t afford. Transparency of costs makes it clear that someone has to pay: but it also makes it clear that a segment of society can be priced out of health care – leaving the rest of society to pick up the tab for the falling productivity of people who can’t work and for preventable incidents of disability.
