Cleveland Clinic is the health care industry trailblazer when it comes to publishing its clinical outcomes. As discussed in this earlier story (“How To Report Quality To The Public”), the Ohio hospital system annually publishes Outcomes Books that detail the clinical performance of each of its departments.
If you doubt this is radical, go to your local hospital’s Web site. See if it publishes how many patients died during heart surgery last year.
At Cleveland Clinic that number is easy to find. The hospital performed 459 bypass surgeries and only three patients died in the hospital, a mortality rate of about 0.7%. That is roughly a third of the rate recorded at other hospitals for the same procedure.
Yet Cleveland Clinic does not only publish data that casts itself in a favorable light. In the third quarter of last year, 3% of bypass patients had strokes after their operations, when that number should have been around 1%.
I called the hospital’s corporate office to find out more about the history of the Outcomes Books, how they affect hospital operations, and whether there were lessons to share. I asked to speak to the de facto “Chief Transparency Officer” and assumed I’d be directed to a middle manager working in the office of public affairs or marketing.
Instead, I soon found myself on the phone with the CEO. It turns out that Delos “Toby” Cosgrove, who runs the $6 billion health system, is also the organization’s unofficial transparency officer. He was the guy who developed the Outcomes Book concept in the first place.
Almost thirty years ago when he became chair of heart surgery—and 20 years before he ascended to his current role—he started measuring and sharing surgical outcomes as a way to hold staff accountable.
Cosgrove is still very involved with the day-to-day data collection that drives quality improvement. He told me that just that morning he had been in a meeting to determine why there was a spike in ICU central-line infections. Collecting and reporting such data is the first step in making improvements.
Here are five lessons I distilled from Dr. Cosgrove as he described the Outcomes Books and, more broadly, how to run a hospital in an increasingly transparent world of health care.
1. Tap Your Staff’s Own Competitive Desire to Improve Quality
Transparency in health care often implies external reporting. Regulators force hospitals to release their charges. Medicare reports infection rates to the public.
But long before those mandates, Cosgrove was experimenting with transparency inside the hospital as a management tool. As chair of cardiac surgery, each year he would report on his physicians’ complication and mortality rates during a special grand rounds presentation.
“We wanted to compare our quality with our reputation,” he says.
Surgeons with worse-than-average outcomes had to lower their gaze. Not every famous surgeon looks good when judged with objective metrics. The exercise had a sobering effect. But more importantly, staff rushed to improve their outcomes before the following year’s public reckoning.
2. Methodology Matters
Successful heart surgery seems like it would be easy to measure. “You either walk out of the hospital or you get carried out,” Cosgrove says.
But some surgeons have more complicated cases than others. Some patients may suffer from underlying conditions that worsen their survival chances regardless of the surgeon’s skill.
Risk adjustment has become the standard practice, where outcomes are put on a curve. But how to adjust for risk is controversial. There’s also the question of how to assess a surgeon’s performance when a patient survives. “The learning of how to evaluate cardiac surgery results became more sophisticated,” Cosgrove says.
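The idea behind risk adjustment can be sketched with a simple observed-to-expected (O/E) ratio: compare the deaths that actually occurred with the deaths a risk model would predict for that surgeon’s case mix. This is a minimal, illustrative sketch only; real cardiac surgery risk models (such as the STS risk calculator) are far more sophisticated, and the patient data below is hypothetical.

```python
def observed_to_expected(died, predicted_risk):
    """Risk-adjusted outcome ratio.

    died: list of 0/1 outcomes (1 = patient died)
    predicted_risk: per-patient predicted probability of death
        from a risk model (hypothetical values here)

    A ratio below 1.0 means fewer deaths than the case mix
    would predict; above 1.0 means more.
    """
    observed = sum(died)
    expected = sum(predicted_risk)
    return observed / expected

# Hypothetical data: 10 patients, 1 death, relatively high-risk case mix
died = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
risk = [0.02, 0.05, 0.30, 0.02, 0.10, 0.04, 0.08, 0.02, 0.15, 0.02]

print(round(observed_to_expected(died, risk), 2))  # 1 death vs. 0.80 expected
```

The controversy Cosgrove alludes to lives in the `predicted_risk` column: two reasonable models can assign the same patient very different risks, which changes who looks good on the curve.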
Cleveland Clinic tries to assess heart procedures with other measures besides mortality. It tracks complication rates. For some surgeries it reports whether a patient required a ventilator or dialysis afterward.
But there is room for improvement. Cleveland reports data on how many times it performed a particular procedure but does not couple that with data on how patients do over a follow-up period. The follow-up data is missing for catheterization (angiogram, stent) and electrophysiology (pacemaker, defibrillator) procedures. An exception, however, is heart transplants. It tracks those patients for three years and finds that its 82% survival rate is in line with expectations. (See page 52.)
As Cosgrove recounts, everything is a work in progress as methodology and data collection improve. And what works for heart surgery may be very different from what works for dermatology.
From a patient’s perspective a big question is whether the data that Cleveland Clinic reports is useful when making comparisons. If you are choosing between Cleveland and another major medical center, can you find out which had the fewest complications in a very specific procedure?
Unfortunately that is usually not possible. Cosgrove says that for a while the University of Pennsylvania and Brigham & Women’s Hospital in Boston coordinated cardiac surgery reporting so that patients could make apples-to-apples comparisons. But those hospitals no longer report the same data as Cleveland.
3. Develop Best Practices
Another reason to be transparent is that best practices emerge. “Every time we looked at the outcomes we found things to do better,” Cosgrove says.
A critical step was keeping a patient registry. That registry, started in 1972 for heart surgery patients, became a source of data for the outcomes reporting.
Filling in the registry with long-term outcomes sometimes required private detectives to track down patients who had moved or stopped responding to calls. Researchers also used it to mine data for papers.
Cosgrove recalls how registry data showed that the long-term success of bypass surgery improved greatly for patients whose surgeons used arterial grafts instead of vein grafts. He says Cleveland realized this in the late 1970s but artery grafting did not become the standard of care for another 13 years.
Ironically, patient registries are in vogue right now. This is being driven by accountable care organizations and pay-for-performance programs, which both tie reimbursement to long-term outcomes.
All this activity should lead to greater transparency and more robust reporting, as it has in Cleveland.
4. Hold Yourself Accountable
Cosgrove disputes the notion that the Outcomes Books are just another form of marketing. In a technical sense, he is correct. The books are not sent to patients (though they are publicly available). And you cannot put a spreadsheet or a line graph up on a billboard.
Yet even if the effort is not purely for marketing, the books, and the exercise of reporting data, do help build the brand. And that brand, Cosgrove acknowledges, stands for a commitment to quality and accountability.
Clinical outcomes are only one piece of how a more transparent hospital builds credibility with patients, he says.
For example, several years ago Cleveland Clinic physicians were criticized for prescribing, without full disclosure, products made by companies they consulted for. Now every physician has to list any corporate consulting work on her Web site.
There’s also a move to give patients full access to their electronic health records, through a website called MyChart.
HCAHPS scores, which are Medicare’s measure of customer satisfaction, are available to the public at the hospital level. Cleveland calculates them by unit and links patient surveys to individual physicians as well.
However, as with the outcomes data, even Cleveland Clinic is not yet ready to release physician-specific HCAHPS data to the public.
5. Tie Transparency to Sales
Cosgrove’s original exercise in transparency, which was imposed on his own cardiac surgery department as a quality control initiative, had a secondary purpose.
After presenting it to his surgeons, Cosgrove took the data and handed it over to the cardiologists at Cleveland Clinic. Cardiologists are the doctors who refer patients for surgery, and thus deserve to know the pre-operative risks of death or injury. They also should know which surgeons had the best performance.
In this way Cosgrove tied the outcomes to referrals and thus to his business.
As the annual Outcomes study got bigger, and Cosgrove worked his way up to CEO, he turned the internal presentations into their current book form. He also made every department publish one. “We publish 40,000 or 50,000 copies for public consumption,” he says. “It’s a major investment.”
Cleveland also started mailing copies to referring physicians around the country. “Outside physicians are appreciative of our candor,” Cosgrove says.
Working in Cleveland Clinic’s favor is that specific, service-level outcomes are becoming more important, as buyers of health care become more sophisticated about quality.
This fall Boeing cut a deal in Seattle whereby 83,000 of its workers can now spend less out of pocket by getting their heart surgery at Cleveland Clinic. The company health plan will pay for the patient to travel with a companion across the country. These patients will be opting out of getting care at local medical centers like the University of Washington, and flying over countless others. Cleveland Clinic has cut similar deals with other major employers, including Wal-Mart and Lowe’s.
In the Seattle Times coverage of the Boeing deal, the story specifically references the fact that Cleveland Clinic publishes its clinical results. And a manager at Boeing is quoted saying that the aerospace company is considering doing a similar deal for orthopedics care.
Tying transparency to sales gives it a long-term business purpose.
Many health care buyers—individuals or employers—are not yet so sophisticated about quality. They will continue to patronize a local hospital because of proximity and word of mouth, without doing much homework. But that world is changing, as the Boeing deal signifies.
That is not to say that we have fully arrived in the future.
Few other hospitals even attempt to self-report outcomes. And when they do, it is difficult to make comparisons among facilities and physicians. Even Cleveland Clinic does not take the most radical step of publicly ranking its own staff physicians, which Cosgrove acknowledges would be difficult to do externally.
Still, there is something reassuring about the swagger with which Cleveland Clinic pulls back the curtain on the quality of its care. Maybe other health systems will someday be just as bold.
David Whelan is a contributing editor at Forbes, where he was a staff writer for 8 years covering health care payers, providers and policy. He’s currently studying and working in hospital administration. Follow him on Twitter @WhelanHealth. This post first appeared on Forbes.
Six CMS investigations reveal that Toby has problems at Cleveland Clinic. The investigations suggest he is NOT running the day-to-day operations. CMS cited Cleveland Clinic for having NO governing board. Read!
Toby runs a very harmful and dangerous hospital. Read the following findings from CMS investigations:
What a great article and very well done!
What I found fascinating is that I referred a patient to CC for valve surgery. It demanded the images of all imaging studies on this patient, many of which were done within the last 3 months. It was a time consuming task to gather the images and get them there.
All that was needed was a heart catheterization at the CC, but the CC repeated everything, including cardiac ultrasound from the chest and esophagus, without ever looking at the studies that were sent. Who paid for the duplicates, exactly?
I do not object to outcome indicators and welcome them in principle, but I think this has to be done very carefully, and if they are overemphasized, individuals/clinics/systems will find ways to game the system. Cherry-picking patients may actually be a blessing if sick, risky patients are spared a hazardous therapy, but it may also lead to physicians jumping on mild or even questionable disease. Biological age and comorbidity are hard to quantify, and a, say, 72-year-old with diabetes may look like a disaster or like a quite healthy fellow. Patients with very poor surgical outcomes could be discharged to specialty hospitals where patients can be ventilated and receive complex care. The ingenious human mind will probably find many more weak spots in outcome indicator generation. I would therefore take all numbers with skepticism. Assessing quality of care is simply more complex than airline or car performance reviews.
CC has a national reputation for cutting edge methods of forcing extortionary fees on insurers and patients. Until there’s some transparency there, everything else is just “swagger” (and bullshit).
To follow up on my prior comment, it’s important to note that The Cleveland Clinic is a well known academic medical center with a strong reputation and significant local and regional market power. As a result, it commands very high prices, at least in its local market, for the services, tests and procedures it offers. It’s quite likely that other nearby competitors can offer much of this care just as competently for significantly less money. That’s why we need price and quality transparency.
At the same time, deals with large employers for heart surgery are probably priced more aggressively in order to win business that CC would clearly not otherwise get. The Cleveland region is not growing and is probably losing population so CC may well have excess capacity. Offering reasonable bundled prices for cardiac procedures to attract patients from distant states is a good way to fill excess capacity with lucrative business.
I applaud the Cleveland Clinic for its outcomes data reporting and the maintenance of registries. However, we still need public reporting of commercial insurers’ contract reimbursement rates for services, tests and procedures so both patients and referring doctors can compare prices and outcomes before services are rendered, at least for tests and procedures that are scheduled in advance.
As I’ve said before, there needs to be special pricing rules for care delivered under emergency conditions either on an out-of-network basis or to those who lack health insurance. Disclosure of chargemaster rates (list prices) is meaningless as virtually nobody pays them.