EHR Usability

A few days ago, I wrote about Innovation, a term so overused in the EHR industry that it has lost all meaning. Here is another such term: Usability.

Just like Innovation, Usability is the weapon du jour against the large and/or established EHR vendors. After all, it is common knowledge that these “legacy” products all look like old Windows applications and lack usability to the point of endangering patients’ lives. On the other hand, the new and innovative EHRs, anticipated to make their debut any day now, will have so much usability that users will intuitively know how to use them before even laying their eyes on the actual product. With this new generation of EHR technology, users will be up and running their medical practice in 5 minutes and everybody in the office will be able to complete their tasks in a fraction of the time it took with the clunky, legacy EMRs built in the 90s. And all this because the new EHRs have Usability, not functionality, a.k.a. bloat, not analytical business intelligence and definitely not massive integration, a.k.a. monolithic. No, this is the minimalist age of EHR haiku. Less is better, as long as it has Usability.

Usability, according to the Usability Professionals Association, is “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [ISO 9241-11]”. Based on this definition, it stands to reason that any prospective EHR buyer should want a product with lots of Usability. Everybody wants to be effective, efficient and satisfied. So how does one go about finding such an EHR?

Well, as always, CCHIT picked up the gauntlet, and as always, CCHIT will be criticized for doing so. The 2011 Ambulatory EHR Certification includes Usability Ratings from 1 to 5 stars. The ratings are based on a Usability Testing Guide. Jurors are instructed to assess the Usability of the product during and after the certification testing based on three criteria: Effectiveness, Efficiency and the subjective Satisfaction, as required by the ISO standard. The tools for this assessment consist of three types of questionnaires:

  • After Scenario Questionnaire (ASQ) – jurors rate perceived efficiency (time and effort), learnability, and confidence after viewing scenarios; 4 questions after each scenario, 16 overall

  • Perceived Usability Questionnaire (PERUSE) – jurors rate screen-level design attributes based on reasonably observable characteristics; 20 questions divided among the scenarios

  • System Usability Survey (SUS) – jurors rate overall usability of, and satisfaction with, the application; 10 questions after all four scenarios have been demonstrated

The questions range from general subjective assessments in the ASQ to very specific inquiries in PERUSE, such as whether table headers clearly indicate the content of the table columns. Following the certification testing, results from all jurors are combined and weighted, with more weight given to specific answers and less to subjective overall impressions. The final result is a star rating, ranging from 1 to 5 Usability stars.
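CCHIT does not publish the exact formula behind the star rating, but the kind of weighted aggregation described above can be sketched roughly as follows. The weights, the juror scores, the averaging order, and the rounding to whole stars are all illustrative assumptions for this sketch, not CCHIT's actual method.

```python
# Hypothetical sketch of weighted usability aggregation: specific,
# observable items (PERUSE) count more than subjective overall
# impressions (ASQ, SUS). Weights and scores are made up for illustration.

def star_rating(asq, peruse, sus, weights=(0.25, 0.5, 0.25)):
    """Combine questionnaire averages (each on a 1-5 scale) into a
    single 1-5 star rating, rounded to the nearest whole star."""
    w_asq, w_peruse, w_sus = weights
    score = w_asq * asq + w_peruse * peruse + w_sus * sus
    return max(1, min(5, round(score)))

# Average each questionnaire across jurors first, then weight the types.
jurors = [
    {"asq": 4.5, "peruse": 4.8, "sus": 4.2},
    {"asq": 4.0, "peruse": 4.6, "sus": 4.4},
    {"asq": 4.7, "peruse": 4.9, "sus": 4.0},
]
avg = {k: sum(j[k] for j in jurors) / len(jurors)
       for k in ("asq", "peruse", "sus")}
print(star_rating(avg["asq"], avg["peruse"], avg["sus"]))  # prints 5
```

With weighting like this, a product can score modestly on the subjective questionnaires and still land on 5 stars if the specific, observable PERUSE items check out — which may help explain why so many certified products cluster at the top.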

As of this writing, 19 Ambulatory EHRs have obtained CCHIT 2011 certification and all of them have been rated for Usability, presumably according to the model described above. Of those, 12 achieved 5 stars, 6 have 4 stars and 1 has 3 stars. Among the 5-star winners, one can find such “legacy” products as Epic, Allscripts and NextGen. The 4- and 3-star awardees are rather obscure. So what can we learn from these results?

The futuristic EHR movement will probably dismiss these rankings as the usual CCHIT bias towards large vendors. Having gone through a full CCHIT certification process a couple of years ago, I can attest that the only large vendor bias I observed was in the functionality criteria, which seemed tailored to large products. Big problem. However, the testing and the jurors seemed very fair and competent. Looking at the CCHIT Usability Testing Guide, I cannot detect any bias towards any type of software. I would encourage folks to read the guide and form their own unbiased opinions. Are we then to assume that the 5-star EHRs have high Usability and will therefore provide satisfaction?

I don’t have a clear answer to this question. Obviously these EHRs have all their buttons and labels and text conforming to the Usability industry standards, and obviously a handful of jurors watching a vendor representative go through a bunch of preset tasks on a Webex screen felt comfortable that they understood, and could use, the system themselves without too much trouble. Many physicians feel the same way during vendor sales demos. However, efficiency and effectiveness can only be measured by repetitive use of the software in real life settings, for long periods of time and by a variety of users. Measuring satisfaction, the third pillar of Usability, is a different story altogether. There isn’t much satisfaction about anything in the physician community nowadays, and when one is overwhelmed with patients, contemplating pay cuts every 30 days or so and bracing for the unwelcome intrusion of regulators into one’s business, it’s hard to find joy in a piece of software, no matter how well aligned the checkboxes are.

The bottom line for doctors looking for EHRs remains unchanged: caveat emptor. The footnote is that the bigger EHRs are as usable as the Usability standards dictate, just as they are as meaningful as the Meaningful Use standards dictate. When all is said and done, it is still up to the individual physician user to pick the best EHR for his/her own Satisfaction.

Margalit Gur-Arie is COO at GenesysMD (Purkinje), an HIT company focusing on web based EHR/PMS and billing services for physicians. Prior to GenesysMD, Margalit was Director of Product Management at Essence/Purkinje and HIT Consultant for SSM Healthcare, a large non-profit hospital organization.

41 Comments on "EHR Usability"


John
Jun 4, 2010

It is absolutely astonishing that Epic’s EMR received 5 stars for its usability. That rating alone should trigger an investigation of CCHIT’s business practices.

bev M.D.
Jun 4, 2010

Margalit;
Thanks for a fair post from a trustworthy expert. I’m sure you have been following the Epic brouhaha as exposed in HIStalk 6/2/10 and on the Health Care Renewal blog. There seems to be great disagreement on how subjective or objective usability measurement is, but from my computer-dumb user perspective, it’s pretty darned important to us. And you are absolutely right about vendor demos vs. real life experience. Our institutions got caught on that multiple times.
I don’t really know what the answer is for the average buyer, a hospital or dr’s office who has little comprehension of technicalities and much vulnerability to sales tactics.

tcoyote
Jun 4, 2010

Thank you for this posting. It’s the central issue in rapid adoption and use of physician EHRs. Unfortunately, I agree w/ John. If those systems are five stars, then we need another fifteen stars in the rating system.

propensity
Jun 4, 2010

Does the author know anything about the CCHIT process of testing usability? Please explain how CCHIT does that to us naive users.

Guest
Jun 4, 2010

Congrats on appearing on The Health Care Blog, Margalit. I had read this one over at your blog, but it’s still good food for thought. Usability is a tricky thing: for example, I’m happy to just pull up an empty Notepad file, type some HTML, and then post it online. Of course, that’s not how most people would want to keep a blog, but I don’t like using programs that “simplify” to the point that typing a simple sentence requires clicking five different buttons. That type of system is “unfriendly” to this particular user.
Plus, what’s most important for one practice might not be for another. What parts of an EHR should be given highest priority (i.e., fewer clicks to get to) and what should not? How do you make a structure general enough for anyone to use but also able to be customized for the different types of practices out there? What about color, contrast, font size, screen resolution, popups, and code language requirements? Touchscreens or mouse functions? Keyboard shortcuts? And finally, how do you make data easy to access and use, but still consider privacy and security?
@propensity: I believe Margalit summarized the CCHIT testing process starting in paragraph 4, with a link to CCHIT’s own testing guidelines for further reading.

Mark Spohr
Jun 4, 2010

Propensity,
I don’t know if you have a serious question here or if you are just trolling. I will assume (perhaps incorrectly) that you have a deficit in reading comprehension and ask you to re-read the article and note: the article explained CCHIT usability testing in detail as well as giving references (those “linky” things) to the usability testing documents (which contain further links to additional documents). If you can’t follow these, then I suggest Google which I have found to be very useful. A simple search for “CCHIT Usability” returns a wealth of information.
If your comment was intended to make some other point then I can’t help you.

Raule Vasquez, MD
Jun 4, 2010

Propensity is correct: CCHIT is a scam. CCHIT jurors are sworn to secrecy. Their tests are meaningfully useless for usability. It is a HIMSS organization created to provide a facade of legitimacy for the defective care products they are promoting.

Guest
Jun 4, 2010

Where can one find the usability indices? I agree that true usability cannot be judged until six months after installation. Is that how they were rated? A one or two hour webex or even onsite evaluation is meaningless.

Guest

propensity, let me address what I believe you are asking, not what you actually typed in.
We can measure basic design features that are prerequisites to usability, like whether fonts are readable, buttons are consistently labeled and consistently placed on screens, checkboxes are aligned, response time, etc. However, a software product having satisfactory grades on all these basic items is not necessarily usable, or not judged equally usable by different users. For example, I prefer clicking to scrolling and I know that other folks prefer the opposite, and there are many more such examples, as Michelle also indicated. This is problem #1.
Problem #2, which is much more explosive (as the HIStalk thread bev mentioned certainly was), is the contention that lack of usability kills people. There is absolutely no way to test that during a certification inspection. It may be that the screens flow nicely and all the buttons are as they should be, but some clinically pertinent information is, for example, not displayed when it should be, or there is a software bug manifesting itself in rare circumstances and corrupting actual data. These sorts of usability measures can only be assessed through long-term use in a real, live environment by clinician users.
For ambulatory products, the AAFP for example is running surveys and publishes results regarding user satisfaction. This may be a better usability assessment, but still imperfect.
All in all, I don’t believe either CCHIT or the ONC/NIST can really and truly certify usability. They can certify the basic prerequisites, and maybe they should, but just like a car that passes inspection can turn out to be a horrific lemon, an EMR certified this way may turn out to have fatal flaws.
So what do we do? We could choose to stop the effort to implement HIT. We could choose to have HIT go through FDA approval processes. We could just go ahead and hope for the best. We could try to think creatively and constructively and come up with a workable solution. I was hoping this post would trigger such a conversation here…

Guest

Gary,
The rules are in the Usability Guide link above. Here it is explicitly http://www.cchit.org/sites/all/files/CCHIT%20Usability%20Testing%20Guide%20Ambulatory%20EHRs%202011%20v6.pdf
If I remember correctly, a typical CCHIT inspection takes about 8-10 hours and is done over Webex, with the vendor driving and the jurors watching. The usability ratings are performed during and immediately after the inspection.

pcp
Jun 4, 2010

Who are the jurors?
If they’re not docs actually in the office and on the floor seeing patients, this is just rating sales pitches.

Guest

pcp,
From CCHIT’s website http://www.cchit.org/participate/jurors
“For its CCHIT Certified comprehensive program inspection, the Commission empanels a team of three clinical jurors, one of whom must be a practicing physician, and an IT security evaluator to assess a technology’s conformance to the CCHIT certification criteria. The inspection occurs by observing the performance of the applicant’s technology in executing a series of test scripts and reviewing required materials supplied by the applicant.”
However, as I said before, this is much like watching a scripted, and rehearsed ad nauseam, vendor demo.

pcp
Jun 4, 2010

Thanks.
It’s as if Consumer Reports rated cars by watching advertisements, but never actually driving them.
It’s measuring something, but doesn’t sound like it’s something worth measuring.

bev M.D.
Jun 4, 2010

I am increasingly of the opinion that at least parts of an EMR do constitute a medical device and should be so regulated by the FDA. As I have mentioned before, there is precedent for this in their regulation of blood bank/transfusion service software. I recognize the delays and bureaucracy this will add to this process, but it works quite well with the blood bank software. Besides, the onus should be on the vendors to prove that, as Margalit says,
“some clinically pertinent information is, for example, not displayed when it should be or there is a software bug manifesting itself in rare circumstances and corrupting actual data.”
We have actually seen this happen while implementing new laboratory information systems, where critical pieces of information in, say, a microbiology report are omitted. This is particularly problematic in interfaces between the LIS and HIS. Why should the hospitals have to find these bugs for the vendors?

Guest
Jun 4, 2010

Margalit,
Thanks for your balanced perspectives.
It will be obvious to Joe the Doctor that CCHIT is destroying what little is left of its credibility with this new usability rating system and process.
The fact that 18 of 19 vendors can get a 4- or 5-star usability score doesn’t pass the straight-face test…
If there is a doctor on the planet who finds this information useful in making an EHR purchasing decision, would you please identify yourself and help me understand what I’m missing.