TECH: Manhattan on PHRs, by Erika Fishman with UPDATE from Matthew

Erika Fishman from Manhattan Research, who wrote the report that I mentioned yesterday in my piece on "PHRs, EMRs and pretty much useless surveys", is rightfully a little grumpy about her survey being called useless. She very kindly wrote a very detailed reply to the questions I asked her. Here’s her answer. I’ll be back with my response to it later today (when I have a couple of deadlines out the way), but note carefully her explanation of why surveys are different and how they produce different answers.

I’ve now moved this up to Friday (originally posted on Thursday) with my comments after Erika’s in bold italics

I read your blog “PHRs, EMRs and pretty much useless surveys.”  As you can imagine, I do not agree that these surveys are useless (and I believe our loyal client base of leading global health and pharmaceutical companies would agree with me on that point). Furthermore, I believe the basis for your comparison is not entirely sound (given difference in approach, questions, methodology and sample).

Erika is right to defend surveys in general (and not just because she, Mark Bard and the crew at Manhattan make their living off them). Surveys are very, very good at explaining what is going on in the trenches, rather than what "accepted wisdom" says about something, and much better than the anecdotes that actually "inform" our debate. They are also good at picking up shifts in consumer and business/organization behavior and thinking. However, they are less good at understanding consumers’ future intentions, as opposed to business people’s likely decisions. And yeah, calling them "useless" is blogger’s hyperbole — no well-researched data is useless. I’ve sold (and bought) several surveys in the past and will in the future, so no arguments from me — even if clients aren’t always sure exactly what they’re buying them for, nor are the answers about what to do with the results always that obvious. However, it's better than tossing money at management consultants, I say!

As you know, methodology is critical to the validity of results. Our survey sample included more than 4,000 adults and was conducted via random digit-dialed telephone methodology, providing a representative mix of online and offline consumers. The 2005 Markle study used a sample of 800 registered voters (800 for each of the two separate sub-studies), introducing potential selection bias due to the demographics of registered voters. Markle’s 2003 study surveyed 1,246 online consumers who were solicited via email. Those consumers are already more tech-savvy than the general population base we surveyed, especially since they are “interactive” by virtue of taking online surveys.

Erika’s point here is very important, and unfortunately it gets completely lost in the way surveys are reported. When the press says "a survey said X or Y," no one ever bothers to check the exact language, let alone the methodology. I worked for IFTF and Harris for years tracking the growth of the Internet and of health care users of it, and our data and Cyberdialogue’s data consistently disagreed (Harris always showed more users). I never found out what their questions were or where the discrepancy came from, but I only ever had ONE person (the very smart Sam Karp at CHCF) ask me why they were different, even though plenty of companies were buying both surveys and the results were widely reported.

And the inside-baseball discussion about online versus telephone surveys is so deep in the mire that I can’t believe you’re not already asleep reading this. Suffice it to say that the online-only crowd has a ton of data about how their surveys correct for the online/offline distinction. And somehow the registered-versus-likely-voter discussion from last year gets picked up in this mess too, because of the choices made by Markle’s pollsters (probably because they were combining polling questions with some policy work)! One thing we do know is that very, very close elections (less than 2–3% differences) cannot be picked up by surveys, which have built-in margins of error. And exit polls (questions of fact) are more reliable than pre-election surveys (questions of intention), even if the voting machines or other disqualification techniques screw over the connection to the end result (as in Ohio in 2004 and Florida in 2000).

The additional point — and here Manhattan seems to have done a better job than Markle’s survey guys, the Republican-affiliated Public Opinion Strategies (why Markle used them rather than a "neutral" firm is a little curious) — is that it’s very hard to determine a consumer’s intention to use something that they’ve never used before or even seen. It depends (as I said in my last piece) on how well it’s marketed to them, and how it fits into their daily workflow. Most consumers cannot say, and can’t really imagine, what technologies they will be using in the future — after all, if we had all known the iPod was going to be a big deal we’d have invented it ourselves! So when they say that 15% of adults are likely to use a PHR, my guess is that it could be 5% and it could be 30%, but it won’t be 60% (which is what Markle said).

I am also not keen on the comparison because there are different meanings behind the words used in each study. While Markle uses “favor” and “want”, our question asks: “How likely are you to use a place to access/update your personal medical records online?” I may still favor personal health records but not be likely to use them. In fact, that is how I personally feel about PHRs currently.

To be fair to me, in my piece I was contrasting the difference in numbers between the two surveys (the 15% and the 60%) and suggesting that Manhattan’s much lower number was more likely to be right, based on the slow take-up of PHRs and of people emailing their physician. Of course I’m much more interested in the numbers people report of what they are actually doing (and in doctors’ reports of what technologies they are using), because you can then see whether a trend is taking off or not. And it seems that the EMR/PHR trend is taking off much more slowly than optimists from a few years back (who included me) guessed.

Our press releases are not chock-full of data because we do not want people/companies taking the data and re-purposing it as their own — a problem we encounter frequently. Instead, we prefer to invite interested parties for more controlled webinar and multimedia presentations. Please let me know if you are interested in accessing the presentation for this module, and I will arrange it.

Here I must disagree a little. The important part of the survey data for a client is not the top-line stuff that Harris puts in their press releases, it’s in the tabs — the information sub-dividing the studied population by their attributes and their intended actions. And it’s in the advice from the survey company about what to do with that information. Of course Manhattan and all survey companies are struggling to get people to come back to buy their surveys, and they don’t want their data stolen and re-purposed by other companies without giving them credit (and money!), but there is something to be said for letting a little more of it get out there in order to stop observers like me being forced to read the tea leaves. It certainly doesn’t seem to have hurt Harris, whose healthcare business is much bigger now than when I was there, and which is putting out great information in its newsletters and in its Wall Street Journal articles. I suspect what they’re gaining in publicity vastly exceeds the sales they’re losing from clients on the margin.

The 7.6 million emailers would include you — we ask: “Have you ever emailed with a physician office?”

The 29.8 million includes only consumers age 18 and older.

I still don’t understand, even given that the margin of error in a 4,000 person survey is relatively low, why they don’t just say "about 30 million."
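A quick back-of-the-envelope sketch shows why "about 30 million" is the honest way to report this. The standard 95% margin of error for a proportion estimated from n respondents is roughly 1.96·√(p(1−p)/n); the population figure used below (roughly 210 million US adults) is an illustrative assumption, not a number from the survey:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n respondents.
    p=0.5 gives the worst case (the widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(4000)  # about 0.0155, i.e. roughly +/- 1.5 points

# Assumed US adult population of ~210 million (illustrative, not from
# the survey). +/- 1.5 points of 210 million is about +/- 3 million
# people, so a projection of "29.8 million" carries more decimal places
# than the underlying sample can support.
uncertainty_in_people = moe * 210e6  # roughly 3 million people
```

On those assumptions, the survey's own sampling error swamps the difference between 29.8 million and 30 million, which is the point of the complaint above.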

Formerly, we asked consumers if they were interested in using personal health records, but we have changed it to likelihood to use. I do not think these are necessarily comparable questions.

And there is the bane of the surveyor’s life. You want a trend, but you asked the slightly wrong question way back when, when you weren’t quite sure what you were asking about. Now you’ve changed the phrasing to get it right and blown your trend data. Not much you can do about this other than invent a time machine.

Another factor here is space (or time) on a survey. In the physician computing survey I did at Harris in 1999 & 2000 I skated around EMR use by asking about what technologies doctors used for certain functions. In future surveys (after I’d left) they changed all that detail to simply "Do you use an EMR?"– a simpler but much less helpful question.  Unless of course you know exactly what an EMR is and you’re sure that all doctors share your exact opinion. I suspect the change was made to fit in other questions, but it goes to show the complexity of what you have to juggle when you’re designing a survey — and, no, it’s not easy.

I do agree with you that consumers will not be using PHRs until they “get used to it,” as you put it. This is the main point of our Consumer Health Interactivity module. If consumers are not yet adopting online interactive tools and programs, PHRs will not be a reality for a long time. Furthermore, taking advantage of highly interactive features, such as email with a physician, can help prime consumers for future PHR use. In fact, our study reveals that consumers with chronic health conditions who are currently emailing with their physicians are 2.3 times as likely to be interested in transmitting personal health data online as online consumers with chronic conditions who have no interest in emailing with physicians.

The example of chronically ill emailers is a very juicy tidbit of information which is more likely to get me interested if I’m a prospective client, and it reveals that Manhattan (and Erika) understands that this is a much more complex consumer market than is suggested by some of the "80% of consumers say they like EMRs"-type surveys that we’ve seen.

I still maintain that surveys of what consumers actually do, which technologies doctors actually use, and what technologies health care organizations are actually installing, are the most helpful. Perhaps HHS agrees with me, as (apparently frustrated by the numerous surveys out there on health care IT use) they have commissioned a series of their own. Who knows if they’ll get that right, and I hope that they ask Manhattan, Harris and me for advice.

But I stand by my final point. Irrespective of what they think they may or may not do in the future, consumers will use PHRs if they are provided and marketed to them in a logical, constructive way that fits into their use of the health care system and connects them with their providers. And unfortunately that depends on their providers having their patients’ data in a useful and complete format. That in turn suggests that something like RHIOs and complete interoperability will be needed before the PHR becomes absolutely complete, and for that we’ll have to wait a while.
