There are two definitions of the word “Hacker”. One is the original, authentic term that geekdom uses with respect, a cherished label in the technical community, which might read something like:
“A person adept at solving technical problems in clever and delightful ways”
The other, portrayed by popular culture, is what real hackers call “crackers”:
“Someone who breaks into other people’s computers and causes havoc on the Internet”
People who aspire to be hackers, like me, resent it when other people use the term in a demeaning and co-opted manner. Or at least, that is what I used to think. For years, I have had a growing unease about the “split” between these two definitions. The original Hackers at the MIT AI Lab did spend time breaking into computing resources; it is not an accident that the word has come to mean two things. It is from observing e-patients, whom I consider to be the hackers of the healthcare world, that I have come to understand a higher-level definition that encompasses both of these terms.
Hacking is the act of using clever and delightful technical workarounds to reject the morality embedded in the default settings of a given system.
This puts “Hacking” on a footing with “Protesting”. This is why crackers give real Hackers a bad name: while crackers might technically be engaged in Hacking, they are doing so in a base and ethically bankrupt manner. Martin Luther King Jr. certainly deserves the moniker of “protester,” and this is not made any less noble because Westboro Baptist Church members are labeled protesters too.
The sharing of patient information in the US is out of whack — we lean far too much toward hoarding information vs. sharing it. While care providers have an explicit duty to protect patient confidentiality and privacy, two things are missing:
- the explicit recognition of a corollary duty to share patient information with other providers when doing so is in the patient’s interests, and
- a recognition that there is potential tension between the duty to protect patient confidentiality/privacy and the duty to share — with minimal guidance on how to resolve the tension.
In this essay we’ll discuss:
1. A recent recognition in the UK
2. The need for an explicit duty to share patient information in the US
A useful and well-written summary of open access to publications in the medical field triggered some thoughts I’d like to share. The thrust of the article was that doctors need more access to a wide range of journal publications in order to make better decisions. The article also praises NIH’s open access policy, which has inspired the NSF and many journals.
My additional points are:
- Open publication adds to the flood of information already available to most doctors, placing a burden on them to search and filter it. IBM’s Watson is one famous attempt to approach the ideal where the doctor would be presented right at the point of care with exactly the information he or she needs to make a better decision. Elsewhere, I have reported on a proposal to help expert doctors filter and select the important information and provide it to their peers upon demand — a social networking approach to evidence-based medicine.
- Not only published papers, but the data that led to those research results should be published online, to help researchers reproduce the results and build on them to make new discoveries. In an earlier article on this site I reported on the work of Sage Bionetworks to get researchers to open their data. Of course, putting up raw data raises many challenges: one has to be careful to deidentify it according to accepted standards. One has to explain the provenance of the data carefully: how it was collected and massaged (because data sets always require some culling and error-correction) so it can be understood and properly reused. Finally, combining different data sets is always difficult because they are collected under different conditions and with different assumptions.
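To make the deidentification point concrete, here is a minimal sketch of one small step in preparing a data set for release: dropping direct identifiers and coarsening dates. The column names are invented for illustration; real deidentification under an accepted standard (e.g. HIPAA Safe Harbor) covers many more identifier categories and should be reviewed by an expert.

```python
# Illustrative sketch only: strip assumed direct-identifier columns and
# coarsen birth dates to a year. Not a complete deidentification procedure.
DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email"}  # assumed names

def deidentify(rows):
    """Remove direct-identifier columns; keep only the year of birth."""
    cleaned = []
    for row in rows:
        kept = {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
        if "birth_date" in kept:
            kept["birth_year"] = kept.pop("birth_date")[:4]  # "1948-03-17" -> "1948"
        cleaned.append(kept)
    return cleaned

raw = [{"name": "Jane Doe", "ssn": "000-00-0000",
        "birth_date": "1948-03-17", "hba1c": "7.2"}]
print(deidentify(raw))
```

Even a toy like this shows why provenance notes matter: a downstream researcher needs to know that birth dates were coarsened, or any age-related analysis will silently degrade.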
In 2004, I was managing a hospital division at the University of Chicago and our clinic director walked into my office and asked whether I thought that all physicians should be issued with smartphones. My first internal thought was, “Hmm, what’s a smartphone?”
Today, we all know how dramatically different mobile phones are from what they were a year or two ago, much less back in 2004. But as the power of mobile technology increases, tech entrepreneurs have taken the lead on challenging old rules that haven’t been discussed in decades. What if the development of the smartphone could give us some clues into the future of healthcare IT?
Recently, I was on a business trip to Boston and met a friend for dinner. As we discussed where to go, I wanted to go someplace close, thinking that getting a taxi would be a pain. My friend pulled out his smartphone and requested a car to pick us up through the car-sharing service Uber. If you haven’t heard of Uber, or Sidecar, or Lyft, the essence is that the headache, the wait, and sometimes the expense of getting a taxi are virtually eliminated.
A recent RAND(1) study concluded that the implementation of health information technology (HIT) has effected neither a reduction in the cost of healthcare nor an improvement in its quality. The RAND authors confidently predicted that the widespread adoption of HIT will eventually achieve these goals if certain “conditions” are met. I do not believe that there is sufficient scientific data to support the authors’ conclusion, nor to validate the Federal Government’s decision to encourage the universal installation of “certified” electronic medical records (EMRs).
As a “geek” physician who runs a solo private practice, and the creator of one of the older EMRs, I believe I can provide a distinctive perspective on the HIT debate, one that will resonate with a large fraction of private practitioners.
Last week, five health IT vendors came together to announce the CommonWell Health Alliance, a nonprofit focused on developing a national secure network and standards that will:
- Unambiguously identify patients
- Provide a national, secure record locator service, so that, for treatment purposes, providers can know where a patient’s records are located
- Enable peer-to-peer sharing of patient records requested via a targeted (or directed) query
- Enable patients and consumers to withhold consent / authorization for participation in the network
Unambiguous patient identity matters
In banking, without certainty about identity, ATMs would not give out cash. And in healthcare, without certainty about identity, physicians are working with one hand tied behind their backs.
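At its core, unambiguous patient identity is a record-linkage problem: deciding whether two demographic records refer to the same person. The toy sketch below compares a few fields with string similarity; the field names and weights are my own assumptions, and production master-patient-index systems use far richer probabilistic (Fellegi-Sunter style) scoring over many attributes.

```python
# Toy illustration of patient matching: score how likely two demographic
# records describe the same person. Weights and fields are assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec1: dict, rec2: dict) -> float:
    """Weighted similarity over a few demographic fields."""
    score = 0.0
    score += 0.4 * similarity(rec1["last_name"], rec2["last_name"])
    score += 0.3 * similarity(rec1["first_name"], rec2["first_name"])
    score += 0.3 * (1.0 if rec1["dob"] == rec2["dob"] else 0.0)
    return score

a = {"first_name": "Jon", "last_name": "Smith", "dob": "1960-01-02"}
b = {"first_name": "John", "last_name": "Smith", "dob": "1960-01-02"}
print(match_score(a, b))  # high score -> likely the same patient
```

The hard part, of course, is not the arithmetic but choosing thresholds: a false merge exposes one patient’s record to another, while a false split recreates exactly the fragmentation the record locator service is meant to fix.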
This problem will never be solved by the Feds; in fact, Congress has barred the government from spending anything on it at all. Industry working together may be the only practical alternative.
The big news at HIMSS13 was the unveiling of CommonWell (Cerner, McKesson, Allscripts, athenahealth, Greenway and RelayHealth) to “get the ball rolling” on data exchange across disparate technologies. The shame is that another program with opaque governance by the largest incumbents in health IT is being passed off as progress. The missed opportunity is to answer the call for patient engagement and the frustrations of physicians with EHRs, and to reverse the institutional control over the physician-patient relationship. Physicians take an oath to put their patients’ interests above all others, while in reality we are manipulated into participating in massive amounts of unwarranted care.
There’s a link between healthcare costs and health IT. The past months have seen frustration with this manipulation by industry hit the public media like never before. Early this year, National Coordinator for Health Information Technology Farzad Mostashari, MD, called for “moral and right” action on the part of some EHR vendors, particularly when it comes to data lock-in and pricing transparency. On February 19, a front-page article in the New York Times exposed the tactics of some of the founding members of CommonWell in grabbing much of the $19 billion of health IT incentives while consolidating the industry and locking out startups and innovators. That same week, Time magazine’s cover story was a special report on health care costs, analyzing how the US wastes $750 billion a year and what that means to patients. To round things out, the March issue of Health Affairs published a survey showing that “the average physician would lose $43,743 over five years” as a result of EHR adoption, while the financial benefits go to the vendors and the larger institutions.
Several email lists I am on were abuzz last week about the publication of a paper that was described in a press release from Indiana University as demonstrating that “machine learning — the same computer science discipline that helped create voice recognition systems, self-driving cars and credit card fraud detection systems — can drastically improve both the cost and quality of health care in the United States.” The press release referred to a study published by an Indiana faculty member in the journal Artificial Intelligence in Medicine.
While I am a proponent of computer applications that aim to improve the quality and cost of healthcare, I also believe we must be careful about the claims being made for them, especially those derived from results from scientific research.
After reading and analyzing the paper, I am skeptical of the claims made not only by the press release but also by the authors themselves. My concern is less with their research methods, although I have some serious qualms about them that I will describe below, than with the press release issued by their university public relations office. Furthermore, as always seems to happen when technology is hyped, the press release was picked up and echoed across the Internet, followed by the inevitable conflation of its findings. Sure enough, one high-profile blogger wrote, “physicians who used an AI framework to make patient care decisions had patient outcomes that were 50 percent better than physicians who did not use AI.” It is clear from the paper that physicians did not actually use such a framework, which was only applied retrospectively to clinical data.
What exactly did the study show? Basically, the researchers obtained a small data set for one clinical condition in one institution’s electronic health record and applied some complex data mining techniques to show that lower cost and better outcomes could have been achieved by following the options suggested by the machine learning algorithm instead of what the clinicians actually did. The claim, therefore, is that if clinicians followed the algorithm’s suggestions instead of their own decision-making, better and cheaper care would ensue.
“Hey doctor, what do you think about this product/solution/service?”
These days, I look at a lot of websites describing some kind of product or solution related to the healthcare of older adults. Sometimes it’s because I have a clinical problem I’m trying to solve. (Can any of these sleep gadgets provide data — sleep latency, nighttime awakenings, total sleep time — on my elderly patient’s sleep complaints?)
In other cases, it’s because a family caregiver asks me if they should purchase some gizmo or sensor system they heard about. (“Do you think this will help keep my mom safe at home?”)
And increasingly, it’s because an entrepreneur asks me to check out his or her product.
So far, it’s been a bit of a bear to try to check out products. Part of it is that there are often too many choices, and there’s not yet a lot of help sifting through them. (And research has shown that choices create anxiety, decision-fatigue, and dissatisfaction with one’s ultimate pick.)
But even when I’m just considering a single product and trying to decide what to think of it, I find myself a bit stumped by most websites. And let’s face it, if I visit a website and it doesn’t speak to my needs and concerns fairly quickly, I’m going to bail. (Only in exceptional cases will I call or email for more information.)
So I thought it might be interesting to try to articulate what would help me more thoughtfully consider a product or service that is related to the healthcare of older adults.
Somewhere between the 20th-century bank ATM and the 25th-century Tricorder lies the EMR that we should have today.
Somewhere between the government-designed Meaningful Use EMR and the Holographic Doctor in Star Trek, there should be a long stretch of disposable trial-and-error cycles of technology, changing and morphing from good to better to magical. For this to happen, we must release the EMR from its ball and chain. We must release the EMR from its life sentence in the salt mines of reimbursement, and understand that EMRs cannot, and will not, and should not, be held responsible for fixing the financial and physical health of the entire nation. In other words, lighten up folks …
A patient’s medical record contains all sorts of things, most of which diminish in importance as time goes by. Roughly speaking, a medical record contains quantifiable data (numbers), Boolean data (positive/negative), images (sometimes), and lots of plain, and not so plain, English (in the US).
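The four categories above could be sketched as a simple record type. The field names below are my own assumptions purely for illustration; real EMR schemas (e.g. HL7/FHIR resources) are vastly richer.

```python
# A sketch of the record categories the paragraph describes, with assumed
# field names. Real EMR data models are far more elaborate than this.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MedicalRecordEntry:
    hemoglobin_g_dl: Optional[float] = None     # quantifiable data (numbers)
    strep_test_positive: Optional[bool] = None  # Boolean data (positive/negative)
    chest_xray: Optional[bytes] = None          # images (sometimes)
    progress_note: str = ""                     # plain (and not so plain) English

entry = MedicalRecordEntry(hemoglobin_g_dl=13.5,
                           strep_test_positive=False,
                           progress_note="Patient feeling well today.")
print(entry.progress_note)
```

Notice that three of the four fields are structured and trivially computable, while the free-text note, often the richest part of the record, is the one a machine can do the least with.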
The proliferation of prose and medical abbreviations in the medical record was attacked a very long time ago by the World Health Organization (WHO), which gave us the International Classification of Diseases (fondly known as ICD), attaching a code to each disease. With roots in the 19th century and the explicit rationale of facilitating international statistical research and public health, the codification of disease introduced the concept that caring for an individual patient should also be viewed as a global learning experience for humanity at large. Medicine was always a personal service, but medicine was also a science, and as long as those growing the science were not far removed from those delivering the service, both could symbiotically coexist.