MS-HUG Awards; let’s see you, Health 2.0 gang!

Last year I was a judge in the MS-HUG award for the HealthVault applications category. The quantity and standard of the entries were pitiful; I suspect that a few sales reps rounded up a few entries at the last minute.

Given that many, if not most, Health 2.0 applications now link to HealthVault, I really hope the entries this year are way better. Here’s the blurb; if you are a cool Health 2.0 company linked to HealthVault, please enter. You have a week or so. (And no, Microsoft is not paying me to write this! In fact, I didn’t even get paid to be a judge!)

Nominations are accepted in the following categories: 

Clinical Records – Inpatient
Clinical Records – Ambulatory
HIE and Interoperability
Microsoft HealthVault Applications
The nominations have been open since mid-December and will close on January 22 at 5:00 pm Central Standard Time. All of this year’s awards information is on the Microsoft HUG website at:
www.mshug.org/awards.

Nurse Practitioners – Doctors?

By Barbara Ficarra

Doctors like to assert authority, maintain control, and continuously patrol their territories; at least some do. In a recent post on THCB, “Nurseanomics,” Maggie Mahar addresses the heated debate over the difference between a doctor and a nurse. Mahar tackles the question that legislators in twenty-eight states are dealing with: should a nurse practitioner (NP) with an advanced degree provide primary care without an M.D. being in charge? But another pressing question needs to be addressed: should nurse practitioners be called doctors? (DNP is a Doctor of Nursing Practice.) That is the question I will address here. I reached out to the medical community to get their reaction. It’s not surprising that the immediate response of some doctors, when asked whether nurse practitioners should be called doctors (DNP), is “No!” as evidenced by Dr. Stangl’s comment.

“NO! Nurse practitioners should NOT be called “doctors” because they are NOT! While many NPs do an excellent job of handling certain types of problems in certain settings, they do not have near the depth or length of education that physicians do and should be credited for what they Do have, which is their nursing background and expertise.” Susan Stangl, MD

Take a look at this comment that appears in THCB:

“An NP has mostly on the job training…they NEVER went to a formal hard-to-get into school like medical school,” wrote one doctor. “I have worked with NPs before, and their basic knowledge of medical science is extremely weak. They only have experiential knowledge and very little of the underpinning principles. It would be like allowing flight attendants to land an airplane because pilots are too expensive. HEY NURSIE, IF YOU WANT TO WORK LIKE A DOCTOR…THEN GET YOUR BUTT INTO MEDICAL SCHOOL AND THEN DO RESIDENCY FOR ANOTHER 3-4 YEARS. NO ONE IS PREVENTING YOU IF YOU COULD HACK IT![his emphasis]”

Continue reading…

The Cost of Mammography Screening for Women Under 50

The tempest that greeted the United States Preventive Services Task Force guidelines on mammography screening for women in their 40s prompted the Senate to insert a mandate in its health care reform bill that every insurer cover every mammography screening test at no cost to beneficiaries. If it passes, it will spark an upsurge in mammography screening, especially among women under 50, and raise the nation’s health care tab.

The Journal of the American Medical Association this morning provides a timely article (subscription required) reminding physicians and women about the serious health costs of adopting that policy.

Continue reading…

State vs. National Exchanges – Why it Matters

Does it matter whether health insurance exchanges are state-level or national? I used to think that it wasn’t a major issue, but my opinion has changed.

During the health reform debate early in 2009, I thought that other exchange design issues were more important than whether they are organized at the state or national level. In my view, who is eligible to join (all small business employees or just those who receive subsidies?), whether the exchange is the exclusive market for individuals and small groups, and how the exchange will be protected from an adverse selection “death spiral” are critical design features and will determine whether the exchanges are successful.

It seemed to me that the arguments put forward by advocates of a national exchange were not compelling. The most common argument was that a national exchange was needed in order to gain sufficient size, which would supposedly give the exchange more bargaining power with health insurers. But I always thought that size was more important at the local level. Health insurers negotiate provider contracts locally, not nationally, and they gain leverage based on their size locally regardless of how big they are nationwide. In addition, the “bargaining power” argument is relevant only if the exchange is negotiating rates with insurers. In an “all comers” model, the exchange isn’t negotiating rates; it relies on healthy competition among insurers to drive down premiums.

Continue reading…

“Comparative Effectiveness Research” and Kindred Delusions

By NORTIN HADLER, MD

Early last year President Obama signed the American Recovery and Reinvestment Act into law. Tucked into the legislation was $1.1 billion to support comparative effectiveness research (CER). The legislation charged the Institute of Medicine with defining CER. Its Committee on Comparative Effectiveness Research Prioritization rapidly came up with:

    …the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition, or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.

The Committee then elicited over 2500 opinions from 1500 stakeholders and produced a list of the 100 highest-ranked topics for CER (www.iom.edu/cerpriorities). Proposals to undertake CER are pouring forth from investigators across the land. There is no doubt that an enormous amount of data will be generated by 2015. But there is every reason to doubt whether many inferences can be teased out of these data that will actually advantage patients, consumers, or the health of the nation.

I am no Luddite. For me “evidence based medicine” is not a shibboleth; it’s an axiom. Furthermore, having trained as a physical biochemist, I am comfortable with the most rigorous of the quantitative sciences, let alone biostatistics. However, you can’t compare treatments for effectiveness unless you are quite certain that one of the comparators is truly efficacious. There must be a group of patients for whom one treatment has unequivocal and important efficacy. Otherwise, the comparison may simply be discerning differences in relative ineffectiveness.

The academic epidemiologists who spearheaded the CER agenda are aware of the analytic challenges but are convinced these can be overcome. I would argue that CER can never succeed as the primary mechanism to assure the provision of rational health care. It has a role as a secondary mechanism, a surveillance method to fine-tune the provision of rational health care, once such is established.

The difference between efficacy and effectiveness

My assertion may seem counter-intuitive. After all, we hear every day about pharmaceuticals that are licensed by the FDA because of a science that supports the assertion of benefit. In epidemiology-speak, the science that the FDA reviews does not speak to the effectiveness of the drug, but to its efficacy. The science of efficacy tests the hypothesis that a particular drug or other intervention works in a particular group of similar patients. CER asks whether an intervention works better than other interventions in practice, where the patients and the doctors are heterogeneous. The rationale for the CER movement is the perceived limitations of efficacy research. I argue that the limitations of efficacy research are much more readily overcome than the limitations of CER.

Efficacy research

The gold standard of efficacy research is the randomized controlled trial (RCT). In an RCT, patients with a particular disease are randomly assigned to receive either a study intervention or a comparator (often a placebo). After a pre-determined interval, the previously defined clinical outcome is compared in the active and control limbs of the trial. If there is no difference, one can argue that the intervention offers no demonstrable clinical benefit to patients such as those in the study. If there is a difference, the contrary argument is tenable.

This elegant approach to establishing clinical utility has its roots in antiquity, at least as far back as Avicenna. The modern era commences after World War II and escalates dramatically after 1962 when the Kefauver-Harris Amendment to the laws regulating the US Food and Drug Administration mandated demonstration of efficacy before pharmaceuticals could be licensed. Modern biostatistics has probed every nuance of the RCT paradigm. The result is a highly sophisticated understanding of the limitations of the RCT, an understanding that has fueled the call for CER:

  1. The more homogeneous the study population, the more likely any efficacy will be demonstrated and the more compelling any assertion that efficacy is lacking. However, the homogeneity compromises the ability to assume the result generalizes to different kinds of patients.
  2. Many important clinical outcomes are either infrequent or occur late in the course of disease. It is difficult to maintain and fund RCTs that require years or decades before one can hope to see a difference between the active and control limbs. The compromise is to study “surrogate” outcomes, measures that in theory reflect the disease process, but are not themselves clinically important outcomes. Thus we have thousands of studies of blood pressure, cholesterol, blood sugar, PSA and the like but comparatively few studies that use heart attacks, death from prostate cancer, or other untoward clinical outcomes as the end-point.
  3. How big a difference between the active and control limbs is important? Biostatistics has dictated that we should pay attention to any difference that is unlikely to happen by chance too often. “Too often” traditionally is considered no more than 5% of the time, but that’s a matter of risk-taking philosophy. What are we to make of a difference that is clinically very small, even if it is unlikely to happen by chance more than 5% of the time? Is it possible that the small effect will be important, perhaps less small, when the constraints of homogeneity are removed in practice? In practice, drugs licensed for one disease are even tried for other “off label” indications where effectiveness may emerge.
  4. The corollary limitation relates to the negative trial. If there is no demonstrable difference, does that mean that there is no effect? Or could the effect have been too small to detect because of the duration of the trial or the size or homogeneity of the population studied? Even a very small effect, advantaging only the occasional patient, can translate into many benefited people when tens of thousands are treated.
  5. Devices and surgical procedures are used in practice; rigorous testing as to efficacy is not a statutory requirement. Maybe in the “real world” a treatment that was never studied, or was studied only in a limited fashion, turns out to really advantage patients in practice, or advantage some patients – or not.

CER to the rescue?

The methodology employed for CER is not the RCT. CER is an exercise in “observational research”. CER examines real world data sets to deduce benefit or lack thereof. This entails the development of large-scale, clinical and administrative networks to provide the observational data. Then biostatistics must come to grips with issues that make defining the heterogeneity of populations recruited into RCTs seem trivial. In the RCT, the volunteers can be examined and questioned individually and in detail and the criteria for admission into the trial defined a priori. Nothing about the validity of diagnosis, clinical course, interventions, coincident diseases, personal characteristics or outcomes can be assumed in observational data sets. There must be efforts at validating all such crucial variables. No matter how compulsively this is done, CER demands judgments about the importance of each of these variables. It is argued that some of these limitations are overcome because CER is not attempting to ask whether a particular intervention works in practice, but whether it works better than another option also in practice. It is even suggested that encouraging or introducing particular interventions or practice styles into some practice communities and not others would facilitate CER. Perhaps.

The object lesson of interventional cardiology

Interventional cardiology for coronary artery disease is the engine of the American “health care” enterprise. Angioplasties, stents of various kinds, and coronary artery bypass grafting (CABG) have attained “entitlement” status. There are thousands of RCTs comparing one with another, generally leading to much ado about very small differences, usually in surrogate measures such as costliness or patency of the stent. But there are very few RCTs comparing the invasive intervention with non-invasive best medical care of the day: 3 for CABG and 4 for angioplasty with or without stenting. In these large and largely elegant RCTs, the likelihood of death or a heart attack if treated invasively is no different from the likelihood if treated medically. Whether anyone might be spared some degree of chest pain by submitting to an invasive treatment is arguable since the results are neither compelling nor consistent. Yet, interventional cardiology remains the engine of the American “health care” enterprise. It carries on despite the RCTs because its advocates launch such arguments as “We do it differently” or “The RCTs were keenly focused on particular populations of patients and we reserve these interventions for others we deem appropriate.” These arguments walk a fine line between hubris and quackery.

So many invasive procedures are done to the coronary arteries of the young and the elderly that interventional cardiology has long lent itself to CER. We know from observational studies that it does not seem to matter much whether the heart attack patient has an invasive intervention quickly, has it delayed, or does not have it at all. We know from observational studies, and even from trials rewarding some but not all hospitals for getting doctors to adhere to the “guidelines” for managing heart disease, that adherence does not make much of a difference. Do the results of this CER mean that we need to further improve the efficiency and quality of the performance of invasive treatments, as many would argue? Or can we hope that more exacting CER can parse out some meaningful indication from large data sets, some compelling inference that only particular people with particular conditions are advantaged and therefore are the only candidates for interventional cardiology?

Or are we using the promise of CER to postpone calling a halt to the ineffective and inefficacious engine of American “health care”? The available science is consistent with the argument that interventional cardiology is not contributing to the health of the patient. I would argue that interventional cardiology should be halted until someone can demonstrate substantial efficacy and a meaningful benefit-to-risk ratio in some subset. Then CER can ask whether the benefit demonstrated in the efficacy trial translates to benefit in common practice.

Efficacy research is the horse; CER is the cart

Interventional cardiology for coronary artery disease is but one of many object lessons. There is much in common practice that has never been shown to be efficacious in any subset of patients. Some practices take up residence in the common sense despite having never been studied. Some practices, like interventional cardiology, persist because intellectual and fiscal interests are vested in the entrenchment despite the results of efficacy trials. CER cannot inform efficacy, and CER cannot inform effectiveness unless there is an example of efficacious therapy against which practices are compared. Otherwise, CER may simply be comparing degrees of ineffectiveness.

The way forward is to design efficacy trials that are more efficient in providing gold standards for comparison and as efficient in defining false starts that are not allowed into common practice until the approach is superseded by one of demonstrated efficacy. This is not all that difficult to do. Let’s return to the limitations of efficacy trials listed above:

  1. Homogeneity of study populations is not a limitation for the quest for a meaningful standard of efficacy. At least we will know the intervention is good for someone.
  2. Surrogate measures are useful to bolster the hypothesis that something might work. They have a dismal track record for testing the hypothesis that something does work. Clinically important outcomes must be invoked for such a test. If that is not feasible because the clinical outcome is too slow to develop or too infrequent, compromise is not an option. The intervention cannot be studied at all, or it cannot be studied until an appropriate subpopulation can be identified, or one must bite the bullet and undertake a lengthy RCT.
  3. Surrogate outcomes are not the only way that RCT results can lead to spurious clinical assumptions. “Composite outcomes” are even worse. RCTs in cardiology are notorious for an outcome such as “death from heart disease or heart attack or the need for another procedure.” When these studies are closely read, one learns that any difference detected is almost exclusively in “the need for another procedure” which is a highly subjective and interactive outcome that can speak to preconceptions on the part of the doctor or the patient rather than the efficacy of the intervention.
  4. Modern epidemiology is so wedded to the notion of statistical significance that the question of the statistical significance of what gets overwhelmed. The “what” is clinical significance. Just because the difference observed between the active and control limbs of the RCT wouldn’t have happened by chance too often does not mean that the difference is clinically important even in the occasional patient. I’ll illustrate this by touching the Third Rail that the debate over the clinical utility of mammography has become. Malmö is a city in Sweden where women were invited to volunteer for an RCT; half would be offered routine screening mammography for a decade and the other half encouraged to see their physicians whenever they had concern about the health of their breasts. That’s the difference between screening and diagnostic protocols; in screening one agrees to a test simply as a matter of course, in diagnostics one agrees to the testing in response to a clinical complaint. Back to the Malmö RCT. Over 40,000 women between the ages of 40 and 60 volunteered for the RCT. Invasive cancer was detected in statistically significantly more women in the screened group than in the diagnostic group. Impressed? How about if I told you that 7 of 2000 women screened for a year were found to have invasive breast cancer, versus 5 of 2000 women in the diagnostic group? Was all the screening worth this difference in the absolute number of additional cancers detected? I could have told you that screening detected 40% more cancers, but you won’t be swayed by the relative increase now that you know the absolute increase was 0.1%, will you? (This arithmetic is worked through in the short sketch after this list.) Would you consider the screening valuable if I told you that for every woman whose invasive breast cancer was treated so that she lived long enough to die from something else at a ripe old age, another two were treated unnecessarily since they died from something else before their breast cancer could be their reaper? How about all the false positive mammograms and false positive biopsies? There is a debate about mammography because it is a very marginal test that clearly is not doing as well as the common sense assumes.
  5. How small an effect can we detect in an RCT? Theoretically we can detect a very small effect, even smaller than the Malmö result. In order to do so, you need to randomize a large, homogeneous population whose size is determined by the level of statistical significance you choose and the nature of the health effect you seek; death is the least equivocal outcome, for example. The quest for the small effect is the mantra of modern epidemiology. However, I consider such “small effectology” a sophism. No human population is homogeneous; we differ one from another in obvious, often measurable ways but also in less obvious, immeasurable ways. When we randomize individuals in any homogeneous population into a treatment group and a control group, we assume that all the immeasurable differences randomize 50:50 or, if not, that the randomization errors counterbalance. The smaller the effect we are seeking, the more likely we are to be fooled by randomization errors that account for the difference rather than the treatment. That’s why so many small effects that emerge from RCTs do not reproduce.
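
A minimal sketch of the Malmö arithmetic quoted above, using the round figures in the text rather than the trial’s exact counts, shows how the same data yield a dramatic relative increase and a trivial absolute one:

    # Illustrative figures from the Malmö example above, not the trial's exact counts:
    # 7 invasive cancers per 2,000 women screened for a year vs. 5 per 2,000 in the diagnostic group.
    screened_rate = 7 / 2000        # 0.35% per year
    diagnostic_rate = 5 / 2000      # 0.25% per year

    absolute_increase = screened_rate - diagnostic_rate                      # 0.001, i.e. 0.1 percentage points
    relative_increase = (screened_rate - diagnostic_rate) / diagnostic_rate  # 0.40, i.e. "40% more cancers"

    print(f"Absolute increase in detection: {absolute_increase:.3%}")
    print(f"Relative increase in detection: {relative_increase:.0%}")
    # Both lines describe exactly the same data; only the framing differs.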

Evidence Based Medicine can be more than a Shibboleth

The philosophical challenge in the design of efficacy trials relates to the notion of “clinically significant.” How high should we set the bar for the absolute difference in outcome between the treated and control groups in the RCT to be considered compelling? One way to get one’s mind around this question is to convert the absolute difference into a more intuitively appealing measure, the Number Needed to Treat (NNT). If the outcome is readily measured and unequivocal, such as death or stroke or heart attack, I would find the intervention valuable if I had to treat 20 patients to spare 1. Few students of efficacy would be persuaded if we had to treat more than 50 to spare 1. Between 20 and 50 delineates the communitarian ethic; smaller effects are ephemeral. For an outcome that is more difficult to measure than death or the like, an outcome that relates to symptoms or quality of life, I would argue for a more stringent bar.
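
As a minimal sketch of the NNT arithmetic (the event rates below are invented for illustration; the 20-to-50 range is the threshold proposed above, not an established convention):

    # Number Needed to Treat: the reciprocal of the absolute risk reduction (ARR).
    def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
        arr = control_event_rate - treated_event_rate     # absolute risk reduction
        return 1 / arr if arr > 0 else float("inf")       # no benefit means NNT is unbounded

    # A treatment that lowers an unequivocal outcome (say, death) from 10% to 5%: ARR = 0.05, NNT = 20.
    print(number_needed_to_treat(0.10, 0.05))   # 20.0 -- compelling by the standard suggested above
    # One that lowers it from 10% to 9%: ARR = 0.01, NNT = 100.
    print(number_needed_to_treat(0.10, 0.09))   # ~100 -- well beyond the 50-patient threshold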

If we applied this logic to RCTs, the trials would be far more efficient (in investigator/volunteer time, materiel, and cost) and the results far more reliable. If we applied this logic to RCTs, we would eliminate trials designed only to license agents no better than those already licensed (“me too” trials) and trials designed only for marketing purposes (“seed” trials). If we only licensed clinically efficacious interventions going forward, we could turn to CER to understand their effectiveness in practice. If we applied this logic retrospectively, to the trials that have already accumulated, we would soon realize how much of what is common practice is on the thinnest of evidentiary ice, how much has fallen through and how much supports an enterprise that is known to be inefficacious. It would take great transparency and political will to apply this razor retrospectively. We, the people, deserve no less.

Nortin M. Hadler, MD, MACP, FACR, FACOEM (AB Yale University, MD Harvard Medical School) trained at the Massachusetts General Hospital, the National Institutes of Health in Bethesda, and the Clinical Research Centre in London. He joined the faculty of the University of North Carolina in 1973 and was promoted to Professor of Medicine and Microbiology/Immunology in 1985. He serves as Attending Rheumatologist at the University of North Carolina Hospitals.

For 30 years he has been a student of “the illness of work incapacity”; over 200 papers and 12 books bear witness to this interest. He has lectured widely, garnered multiple awards, and served lengthy Visiting Professorships in England, France, Israel and Japan. He has been elected to membership in the American Society for Clinical Investigation and the National Academy of Social Insurance.  He is a student of the approach taken by many nations to the challenges of applying disability and compensation insurance schemes to such predicaments as back pain and arm pain in the workplace. He has dissected the fashion in which medicine turns disputative and thereby iatrogenic in the process of disability determination, whether for back or arm pain or a more global illness narrative such as is labeled fibromyalgia. He is widely regarded for his critical assessment of the limitations of certainty regarding medical and surgical management of the regional musculoskeletal disorders. Furthermore, he has applied his critical razor to much that is considered contemporary medicine at its finest.

Urgently Needed: Useful Meaning of Meaningful Use

One day before 2009 passed into history, the much anticipated final definition of “meaningful use” was released by CMS and ONC, 556 pages and 136 pages, respectively. The blogosphere experts rushed to summarize the contents, some accurate and some less so, and just like everything that has to do with health care reform, for every rule making there are a dozen new questions being raised by the already thoroughly confused stakeholders at large.

Just so that THCB is not left out, here is a quick qualitative summary of the contents:

Requirements:

1. Data Collection – The following structured data elements will need to be collected by the software: Demographics, Vitals (plus Smoking status), Electronic Lab Results, Problem Lists, Medications and Allergies. The important thing to note here is that the requirement to record Advance Directives has been dropped in the final ruling. (A purely illustrative sketch of these elements appears after requirement 2 below.)

2. Medical Records for Patients – Providers will need to provide patients with electronic copies of Visit Summaries, Care Summaries, Discharge Summaries and Complete Medical Records upon request. In addition, continuous online access to medical records is to be provided.
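
Purely as an illustration of requirement 1, a minimal record carrying those structured elements might look like the following; the field names and layout are my own sketch, not vocabulary from the rule.

    # Hypothetical, minimal record holding the structured data elements named in the rule summary above.
    # Field names and coding are illustrative only; the regulation does not prescribe this layout.
    patient_record = {
        "demographics": {"date_of_birth": "1948-03-14", "sex": "F", "preferred_language": "en"},
        "vitals": {"height_cm": 165, "weight_kg": 70, "blood_pressure": "128/82", "smoking_status": "former smoker"},
        "lab_results": [{"test": "HbA1c", "value": 6.8, "units": "%", "resulted": "2009-12-01"}],
        "problem_list": ["Type 2 diabetes", "Hypertension"],
        "medications": ["metformin 500 mg twice daily", "lisinopril 10 mg daily"],
        "allergies": ["penicillin"],
    }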

Continue reading…

Nancy Turett, Edelman: “Health is the new Green”

Late last year PR/Communications giant Edelman released a survey called the Health Engagement Pulse. (Here’s the press release and here are the charts.) This is separate from both Edelman’s Trust Barometer, which has looked at consumer engagement and trust in business and institutions for years, and their Health Engagement Barometer (HEB), which looked at engagement in health in five countries in 2008 and is going to be run again this spring. At Health 2.0 we’ve worked with Edelman and featured the HEB data in our meetings, and will continue to do so. Recently I “chatted” with Edelman’s President for Health, Nancy Turett, to find out what she thinks the data is telling us about people’s attitudes towards “health”.

Matthew Holt: Nancy, Edelman’s been looking at Health for a long time and also Engagement with the well known Engagement Barometer separately. In late 2008 you did the first Health Engagement Barometer. What does Health Engagement mean, and why have you put the two concepts together now?

Nancy Turett: Over the past several years, our engagement in all things health has grown dramatically, giving us a particularly useful whole-egg look at the health industry, its issues, and especially the growing convergence of public and personal health imperatives. With clients from all industries and sectors grappling with health — costs, social expectations, pressures to innovate, and policy changes underway — we’ve found it useful to provide insights to all about what the public-at-large — wearing their many health hats — knows, wants, cares about and does as it relates to health.  And as a communications and engagement firm, we’ve delved particularly deeply into how people are influenced and how they influence others.

The Health Engagement Barometer, which we created and conducted for the first time a year ago, shone a bright light on some key issues and identified a fascinating cohort of people who, by dint of their engagement, involvement, and information about health, have high influence over the attitudes and actions of others. We called them the “Health Info-entials.” We also learned a lot about people’s interest in engaging with health brands and companies — we found people crave more connection than they’re getting — and that transparency and completeness trump perfection when it comes to building trust between a health-involved brand and a consumer.

Continue reading…

An Open Letter to Speaker Pelosi from C-SPAN Founder Brian Lamb

By BRIAN LAMB

Dear Speaker Pelosi:

As your respective chambers work to reconcile the differences between the House and Senate health care bills, C-SPAN requests that you open all important negotiations, including any conference committee meetings, to electronic media coverage.

The C-SPAN networks will commit the necessary resources to covering all of these sessions LIVE and in their entirety. We will also, as we willingly do each day, provide C-SPAN’s multi-camera coverage to any interested member of the Capitol Hill broadcast pool.

Continue reading…

EHRs for a Small Planet

Right now, American health care information technology is undergoing two enormous leaps. First, it is moving onto Web-based and mobile platforms – which are less expensive and facilitate information exchange – and away from client-server enterprise-centric technologies, which are more expensive and have limited interoperability. In addition, more EHR development activity is headed into the cloud, driven by large consumer-based firms with the technological depth to take it there. Both these trends will facilitate greater openness, lower user cost, improved ease of use, and faster adoption of EHRs.

But they could also impact the shape of EHR technologies in another profoundly important way. What is often lost in our discussions about electronic health record technology in the US is the relationship these tools have to our health and health care problems…globally. We could be designing our health IT in ways that are good for the health of people both here and around the world, not simply to enhance care in the US.

Designing health data and management tools only for the particular operational needs of the current US health system may be doubly wrongheaded: It risks locking us into outdated technology and an expensive, dead-end path, while, at the same time, it could restrict collaborative exchanges of ideas and innovations that could improve health care here and abroad through better designed information technology.

Perhaps we should design EHRs for a small planet.

Rene Dubos (1901-1982) was a microbiologist who produced the first commercially marketed antibiotic. He also wrote widely about the relationship of humans with their environment, notably in So Human an Animal (1968), which won a Pulitzer Prize. In 1972, with economist Barbara Ward, he co-authored Only One Earth: The Care and Maintenance of a Small Planet, which set the issues and tone for the first major international conference on the environment. Dubos also first used the phrase “think globally, act locally,” advice to consider the widest possible consequences of our behaviors, but to take action in our own communities.

What would our EHR technology design efforts in the US look like if we incorporated Dubos’ more expansive framework? What principles might shift our thinking about EHRs away from America’s failing health system paradigm — with its illusion of unlimited resources, delivered by a fixed and ritualized set of professionals and institutions, and costs that double with each passing decade — towards a vision in which EHRs promote sustainable efforts in disease prevention, health improvement, social responsibility, and environmental protection? How might we think about EHRs globally while acting locally?

Principle 1: Define success with local health and health care problems in mind.

Defining EHR success is important, partly because US federal policy for EHR adoption is currently so dynamic. It would be easy to simply define success in terms of physicians’ short term acquisition of today’s EHRs, and the economic boost that might result from new government IT spending (e.g., IT jobs and EHR vendor profits). But Dubos might argue that successful EHR adoption should require measurable social and ecological benefits in the communities where the technologies are deployed, after consideration of the ‘big picture’ in which health spending is one among many societal priorities competing for limited societal resources, and therefore ought to be conservative.

The US’s current EHR adoption strategy channels money directly to doctors and hospitals, among the most privileged professional groups in any community. It could, instead, send those funds directly into the communities served, focusing on the local circumstances that result in fragmented, disorganized, and inconsistent health care delivery within driving distance of its citizens. EHR technologies could address communities’ continuity and access-to-care problems, and relate these to major preventative and chronic illness management challenges, e.g. vaccinations, obesity, and risks of heart disease. More and more people in adjoining communities could be reached by building on successes. Lowering health costs nationally is an important goal, to be sure. Maybe the best way to get there is to stimulate uses of health IT to improve individual and community health through local action. (It goes without saying that the system’s financial incentives would also have to be re-aligned.)

Thinking globally and acting locally would require us to study and plan how EHRs might benefit different communities, as unique populations with particular health risks, public health problems, and care delivery challenges. We would have to study those risks and challenges in each community, or in groups of neighboring communities. This is not easy, and it can be time consuming.

But the alternative, which seems to be to spend huge amounts of state, federal, and local dollars on one-size-fits-all health IT projects, on top-down EHR systems that work for the VA or DOD but probably nowhere else, or on data exchange efforts that may not be capable of solving, or even suited to, the problems most at hand in that locale, could be disastrously wasteful by comparison. What works in central Indiana, quite honestly, may not be the right thing for Green Bay, Wisconsin; Helena, Montana; or Pamlico County, North Carolina.

Principle 2: Make the best possible use of existing IT resources before building or installing expensive new EHR systems.

Rather than ask, “What could we do if everyone had computer systems like the most advanced large groups, e.g., Kaiser or the VA?” let’s ask, “What could we accomplish if we utilized the computers everyone already has?”

Experience has shown that it is not wise to expect big and complicated things to somehow become small and simple. For one thing, costs don’t necessarily scale. In contrast, though, the evidence is now overwhelming that with browser-based software running on personal computers and cell phones, and small applications running on hand-held devices, like the iPhone, consumer use can grow at extremely rapid rates and lead to complex social networks, rapid communications and feedback loops, and massive search and data analysis capability.

Examples abound of the kinds of resources available through inexpensive personal computers connected to the Internet, cell phones, and the newer smart phone technologies. Skype, the Internet-based voice communications company, has over 500 million registered users world-wide, which would make it the largest telecom carrier, if it were one. The top 25 wireless providers globally already service over 3 billion registered customers. The iPhone, introduced in 2008, has more than 57 million users, the fastest user growth in consumer technology in history, many times faster than the earlier rapid growth in PCs or the Apple iPod. Facebook – the social network platform where people send email, chat, share photos, and share interests – now has 350 million users and is growing at 660,000 per day! Lest we forget, these ubiquitous technologies are not just used for fun and games: massive amounts of data are being exchanged as well. And they are getting cheaper to own and operate all the time.

And yet they are for the most part useful only at the margins of health care, an industry that has somehow walled itself off from IT modernity. We certainly have not yet capitalized on the health and medical uses of the extraordinary networked computing resources available now in almost every home and work site in this country. EHRs for a small planet need not cost $54,000 per physician, which is the current estimate used by ONC and HHS.

It would be a critical mistake to waste our resources, time, and effort building new specialized state or regional data centers requiring complex and proprietary identity management technology for access, and to train a generation of IT professionals how to manage these expensive centers and the technology deployed there, when better design and efficiency could be obtained by use of the existing “off the shelf” general- and multi-purpose data highways, application platforms, and end-user computing capacity now available for health data exchange.

Principle 3: Design EHRs for the smallest unit of care delivery, with a focus on connectivity and communications.

Connectable EHRs can be designed for small medical practices and clinics in primary care, where the great majority of care is delivered, and for patients themselves — in their homes and places of work. Designed from the local, grassroots perspective, EHR technologies would also focus on affordability, ease-of-use, and especially on connectivity and continuity of information across those units in a given community, using existing computers, cell phones, the Internet, and yes, even fax machines.

Our current approach to health care IT, in contrast, is biased towards the needs of a handful of professionals working in a relatively small number of large enterprises, such as hospital systems, and in large multi-specialty practices. These large units typically represent the most complex “use cases” for EHRs, based on the needs of the most complicated and sickest patients, requiring the most intensive usage of drugs and pharmaceuticals, and at the far end of the spectrum in terms of complicated ancillary medical devices, such as MRIs, medicated stents and proton accelerators.

These large health care units are often fiercely competitive and have little use for data exchange with competitors, and even less interest in using computing resources to reach across the communities they serve. As a result, they may be among the least appropriate and least competent stewards of community-based health IT resources. And yet their representatives dominate the steering committees and governance boards for the nation’s health information exchanges (HIEs) and regional health information organizations (RHIOs), where a big chunk of the federal funding is now going.

If waste is the failure of design, then designing EHRs for a small planet would avoid lengthy and disruptive installations and long training cycles involving expert consultants. Instead, such designs would favor modular, browser-based EHR software that is familiar to physicians, their staffs, and their patients, and that can be navigated simply.

Implicit in this design principle is a requirement for minimal training that focuses on how to use the software to best improve care, rather than on which buttons to push in which sequence to optimize fee-for-service reimbursement. EHR software that looks more like Facebook and less like a database manager’s tool kit, that can work through web browsers and mobile devices, and that can be incrementally expanded as new uses arise, is not only likely to be more adoptable than today’s EHRs, but also less expensive to own and operate.

Principle 4: Recognize that what sustains most information technologies is people’s desire to connect with one another.

Email is the “killer app” of the Internet. Facebook and Twitter have become amazingly fast-growing online social networks. Human beings seek connection at nearly every opportunity. Technologies that facilitate that connectedness and then provide key utilities are most likely to succeed.

Maintaining and restoring health, preventing disease, and the act of caring for others who are in need due to problems of the body and mind: these are among the most basic social activities of human beings, our communities, and our cultures. And yet, for complex reasons associated with money and power, our health system and the care it delivers are too often fragmented, disconnected, and isolated. And its technological disconnection is both a symptom and a substrate of this phenomenon. Physicians and nurses face many barriers in communicating amongst themselves, with their patients and with their patients’ caregivers. The current crop of EHR products does virtually nothing to address this problem. In fact, EHRs in the US may have exacerbated our health care disconnectedness.

EHRs that can share data and information, and that connect the experience of patients, caregivers and doctors more directly, are much more likely to be utilized at the community level than EHRs that in essence capture and remove data, isolating them and their potential social uses in faraway databases that no one can get into.

The huge success of health-related social websites – like PatientsLikeMe.com, DiabeticConnect.com and Sermo.com – is testament to the desire that many people have to close what Adam Bosworth has called the “collaboration gap” that stands between the limitations of the legacy health care system and the almost infinite benefits that arise from participating in self-help and online socializing activities. People who share their experiences – and data about themselves – know that this is helping them close the collaboration gap. But this gap is being perpetuated by EHRs that are organization- and enterprise-centered, and can only be substantially closed if physicians and medical groups in communities around the country use EHR technology to leapfrog over the communications and socialization barriers inherent in their older technologies. This will require new forms of EHR technology capable of socialization, which we have described elsewhere as Clinical Groupware.

Principle 5: Separate data from the applications and from the transport layer.

It is a stunningly simple yet powerful feature of the most familiar and widely-used information technologies that data – the message – is deliverable regardless of the sending or receiving applications, and independent of the network or transport layer that carries it. Email messages can be sent and received via many hundreds of client applications (what you and your computer use to compose the email or to display a received email.) Email and messaging services can carry many dozens of different kinds of attachments, e.g. pdf documents, across both open and secure networks, and networks with different kinds and levels of security protection in place.

This is a small planet idea that is the direct consequence of the openness of Internet protocols, but one that has not yet become incorporated in US health care, where data messages, applications, and network transport protocols remain unendingly, even stupefyingly, proprietary. Not only do these approaches perpetuate “walled gardens” – hospitals using one EHR system can’t send a simple electronic medical summary to another hospital using another EHR system across the street – but they are also a barrier to the innovators who would design, build and implement new, low-cost applications like modular EHRs.

Clay Shirky made this point in a recent blog post:

Thus the question for broad participation… is not: “What will the most complete system look like for the richest and most technically adept institutions?” Rather, it is: “What’s the simplest and most low cost way for a small vendor or new market entrant to get a small practice tied in?”

…Here’s what a workable set of transport standards will not do: It will not assume to know what kind of applications any given network participant is running locally. Once the data are delivered, it should be usable by everything from the simplest to the most complex application, since the recipient of the data will have the best understanding of what works in their local context.

This ability to separate data from transport and applications from data is the essential pre-condition for innovation — a group that has a valuable new idea for presentation of data for clinical use should not also be forced to think about the data encoding or the way the data are transported. Groups working on new data encodings should not be tied to a pre-existing suite of potential applications, nor should they have to change anything in the transport layer to send the new data out, and so on.
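
As a minimal sketch of this separation (my own illustration, not any standard or vendor interface): the clinical payload is encoded once, and different transports can carry the same bytes without knowing anything about the applications on either end.

    import base64
    import json

    # A small clinical summary -- the "message". Field names are illustrative only.
    summary = {
        "patient": "example-id-123",
        "problem_list": ["Type 2 diabetes"],
        "medications": ["metformin 500 mg twice daily"],
        "allergies": ["penicillin"],
    }

    # Encode the data once, independent of any application or network.
    payload = json.dumps(summary).encode("utf-8")

    # Transport A: hand the payload off as a file on a shared drive or physical media.
    def send_via_file(data: bytes, path: str) -> None:
        with open(path, "wb") as f:
            f.write(data)

    # Transport B: wrap the same bytes as a base64 attachment, as an email gateway might.
    def send_as_email_attachment(data: bytes) -> str:
        return base64.b64encode(data).decode("ascii")

    send_via_file(payload, "summary.json")
    attachment = send_as_email_attachment(payload)

    # Whichever route the bytes took, the receiving application decodes the same JSON
    # and decides for itself how to present or use it.
    received = json.loads(payload.decode("utf-8"))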

Patients and doctors in offices, homes, laboratories and pharmacies most often need information, and most often they need it in the form of small amounts of summary data such as a medication or problem/diagnosis list, a specific allergy, a limited number of recent or historically important lab tests or images. Where there is continuity of care and information flow, especially, there is rarely the need to access the complete or comprehensive medical record or its full contents.

For most ambulatory and outpatient clinical care needs, simple dashboard and summary health “EHR light” products may be sufficient, and there is a logical progression towards more complex health IT as the acuity of care increases. Modular design of EHR technologies may help to bridge this gap without creating large discontinuities of user interfaces and may also keep prices for health IT in the community setting at a lower point than otherwise.

*****

In the U.S., many of our health problems result from the growing burden of chronic diseases occasioned both by an aging population and our sedentary lifestyles. In much of the developing world, by contrast, the local health problems – pandemics like HIV/AIDS, malaria, and drug-resistant tuberculosis – result from poverty and a lack of basic public health resources. However, similar EHR technology in each of these settings can provide efficient health data exchange and information management. Both individual and population health status could be improved with medical records that are inexpensive, simple to use, and capable of network exchange.

To this point, each of the above principles for small planet health IT is already being put in place effectively in many developing countries, where cell phones are used to remind patients of their medication regimens and are the vehicle for relaying laboratory test results and vaccination information from provider to provider in sparsely populated and very resource-limited communities. As part of the Millennium Villages Project in Ghana, for example, cell phones are part of a program that is dramatically improving the chances of survival for pregnant women and their newborns.

Our brethren in other countries, developed and developing, face many of the same challenges obtaining good quality health care that we do here in the United States, including realizing the promise and hope offered by health IT. If we persist in federal EHR policies that “over-serve” local US communities’ needs by developing complex and expensive systems of health IT, we may not only be missing the mark at home. We might also be missing the opportunity of helping the other inhabitants of this small planet.

David C. Kibbe MD, MBA and Brian Klepper, PhD write together about health care technology, market dynamics and reform. Their collected writings can be found here.