
Should Small to Medium-Sized Practices Use Cloud-Based EHR?

Recently I was asked if SaaS/Cloud computing is appropriate for small practice EHR hosting.

I responded: “SaaS in general is good. However, most SaaS is neither private nor secure. Current regulatory and compliance mandates require that you find a cloud hosting firm that will indemnify you against privacy breaches caused by security issues in the SaaS hosting facility. Also, SaaS is only as good as the internet connections of the client sites. We’ve had a great deal of experience with ‘last mile’ issues.”

To add further detail, Bill Gillis, the CIO of the Beth Israel Deaconess Care Organization (BIDCO), responded:

“We built, manage, and maintain our own private cloud in a colocation facility. Our EHR cloud is served to the practices via the public internet over SSL. One challenge we struggle with is ISP availability and service level/stability. In metro Boston one would expect a robust internet infrastructure; instead we’ve found heterogeneous public internet capabilities and quality of service. We’ve also found that a good ping response is not truly an indicator of meeting application performance requirements. Many cloud-hosted applications are sensitive to latency, packet loss, fragmentation, and jitter.

In the first year of our deployment we struggled because ISP connectivity did not appear to be the culprit. A practice would have a 10+ megabit connection with ping returns under 25 ms, yet it would experience application freezes, crashes, or very slow response times. From the public ISP’s perspective ‘the lights were green,’ and they would take no further action. After engaging a third-party network sniffing firm, we discovered the real culprit: network latency. We were able to take the data from that engagement back to the ISP to illustrate the problems with the packets in transit.

The network sniffing engagement was time-consuming and costly, and repeating it for the 100+ practice locations we support is not sustainable. Luckily we found a company in Boston called Apparent Networks (now called Appneta). Appneta makes a small, low-cost black-box appliance that reports deep, detailed network data back to a secure cloud. We place a device in a practice that communicates with a device we keep at our hosted/central site. The devices continually probe each other and log network performance metrics to the cloud. The best part is that we preconfigure the devices and mail them to the practices reporting issues.

All the practice staff need to do is provide power and plug it into an open Ethernet port, which saves us from deploying a technician on-site. Since we first deployed these devices we’ve been able to get to the root cause of performance issues and resolve them rapidly. We’ve identified everything from an ISP charging for a certain level of bandwidth while delivering only half that speed, to staff streaming media during high-volume hours and saturating the local router. The performance data is stored in the cloud indefinitely, which gives us a longitudinal view of the network/internet connectivity for a specific practice. Recently we were able to head off a potential issue by noticing that a practice’s connection stability had been slowly degrading over the past year; working with the ISP, we discovered they had an issue at a local central office/substation. The reality is that most ISPs are not willing to work with us until we show them the data. Once we have the smoking gun, they tend to dig deeper and work with us to resolve the problems. For all the high-tech equipment we’ve leveraged for our private cloud, this device was the real Swiss Army knife of the project.”
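Gillis’s point that a good average ping can mask an unusable link comes down to jitter and loss, not mean round-trip time. A minimal sketch of the idea (the RTT samples are invented for illustration; the jitter figure here is simply the mean absolute difference between consecutive round trips, in the spirit of RFC 3550):

```python
# Sketch: why "ping looks fine" can mislead. Two links with the same
# ~20 ms mean RTT can behave very differently once you account for
# jitter (RTT variation) and packet loss. Sample data is hypothetical.

def summarize_rtts(rtts_ms):
    """Summarize round-trip samples in ms; None marks a lost probe."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    mean_rtt = sum(received) / len(received)
    # Jitter as mean absolute difference between consecutive RTTs.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs)
    return {"mean_rtt_ms": round(mean_rtt, 1),
            "jitter_ms": round(jitter, 1),
            "loss_pct": round(loss_pct, 1)}

# Two links with a similar ~20 ms average: one steady, one erratic.
steady = [19, 20, 21, 20, 19, 20, 21, 20]
erratic = [2, 45, 3, None, 40, 2, 44, 3]

print(summarize_rtts(steady))   # low jitter, no loss
print(summarize_rtts(erratic))  # similar mean RTT, high jitter and loss
```

Both links would pass a casual ping test, but an interactive EHR session over the second one would freeze and stall exactly as described above.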

I’ve described cloud computing as “your mess run by someone else.” It can be done successfully, but SaaS is only as good as the privacy protections you purchase or build yourself, and performance is only as good as your network connection.
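The longitudinal value of stored path data that Gillis describes (spotting a connection that degrades slowly over a year) can be sketched with something as simple as a least-squares trend over periodic loss readings. The monthly figures and the alert threshold below are invented for illustration, and this is of course not Appneta’s actual analytics:

```python
# Sketch: flag slow degradation from stored monthly packet-loss
# readings using a least-squares slope. Data and threshold are
# hypothetical, chosen only to illustrate the longitudinal idea.

def loss_trend(monthly_loss_pct):
    """Least-squares slope of packet loss, in % per month."""
    n = len(monthly_loss_pct)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_loss_pct) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, monthly_loss_pct))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A year of readings: loss creeps from 0.2% toward 2.0%.
year = [0.2, 0.3, 0.3, 0.5, 0.6, 0.8, 0.9, 1.1, 1.3, 1.5, 1.7, 2.0]
slope = loss_trend(year)
if slope > 0.05:  # arbitrary alert threshold for the sketch
    print(f"degrading: +{slope:.2f}% loss per month")
```

No single month looks alarming on its own; only the year-long record makes the trend visible, which is exactly the “smoking gun” that gets an ISP to dig deeper.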

I hope this is helpful.

John Halamka, MD, is the CIO at Beth Israel Deaconess Medical Center and the author of the popular Life as a Healthcare CIO blog, where he writes about technology, the business of healthcare, and the issues he faces as the leader of the IT department of a major hospital system. He is a frequent contributor to THCB.



  3. I think he highlights the two major concerns that healthcare CIOs have in regard to SaaS EHR. The first is a strong agreement/relationship with the vendor. This is even more important thanks to the new HIPAA Omnibus business associate requirements.

    The second challenge is whether your network infrastructure can support the hosted application. If you have a reliable internet connection (possibly with a redundant backup), then you’ll be fine. If your internet connection can’t support the SaaS EHR application, then you’ll hate life. Pretty basic stuff, but I think his example illustrates that it’s hard to know whether your internet connection is of high enough quality to support a cloud EHR. You likely won’t know until you try, and diagnosing the issue can be challenging and expensive.

  4. Was excited to read this article as someone closely following the implementation of EHRs, but couldn’t appreciate any of the key points hidden in all the tech speak and jargon…would love to know how he really feels!

  5. “project deployment”
    “impacting performance”
    “data from that engagement”

    It’s poetry.

  6. Can somebody translate this into English?

    “In the first year of our project deployment we struggled because the ISP connectivity did not appear to be the culprit. A practice would have 10+ megabit connections with ping returns under 25ms. Yet the practice would experience application freezing, crashes or very poor/slow response time. From the public ISP’s perspective ‘the lights were green’ and they would take no further action. After engaging third party network sniffing firm, we discovered the real culprit impacting performance – network latency. We were able to take the data from that engagement back to the ISP to illustrate the problems with the packets in transit.”

  7. Is this based on number of known breaches? To me it’s looked like the biggest risk is unencrypted laptops/thumbdrives and on-premises servers. But I haven’t seen a comprehensive study. I think that would be interesting to see if anyone’s done it.