By ANDREW DORSCH, MD
The question of how much time I spend in front of a screen has nagged at me, both professionally and personally.
A recent topic of conversation among parents at my children’s preschool has been how much screen time a toddler’s brain can handle. It was spurred by a study in JAMA Pediatrics that evaluated the association between screen time and brain structure in toddlers. The study reported that children who spent more time with electronic devices had lower measures of organization in brain pathways involved in language and reading.
As a neurologist, I find these results worrying, both for my children and for myself. I wonder if I’m changing the structure of my brain for the worse through the prolonged hours I spend in front of a computer completing medical documentation. Without the move to electronic medical records, I might be better off in more ways than one. Not only may using them be affecting my brain; they also pose a danger to my patients by threatening their privacy.
As any practicing physician can tell you, electronic medical records represent a Pyrrhic victory of sorts. They present a tangible benefit in that medical documentation is now legible and information from different institutions can be obtained with the click of a button — compared to the method of decades past, in which a doctor hand-wrote notes in a paper chart — but there’s also a downside.
For one, while electronic records are supposed to maximize the efficiency of documentation, the use of auto-filling “smart” phrases and other techniques designed to save time spent writing notes makes them that much more difficult to read. Bloated notes bury limited nuggets of useful information within reams of data, where they serve as treasure troves for data miners but as barriers to efficient communication between medical providers.
Aside from the fact that any type of screen time can potentially degrade the structure of my brain, more time spent face-to-screen and less time face-to-face with the patient drains the medical encounter of its essential humanity.
If anyone can disrupt a human connection, it’s the big tech companies. Last month Google announced a collaboration with the Ascension medical system, which operates hospitals across the country. In a blog post, Google stated that they would utilize their cloud computing and artificial intelligence expertise to develop tools that enable care providers to “more quickly and easily access relevant patient information.”
This isn’t new; the announcement followed collaborations between Google and academic medical centers such as Stanford, UCSF, and the University of Chicago.
Leveraging the large patient populations of these institutions, Google has developed technologies that intersect with patient care in ways ranging from the automatic recognition of words spoken during conversations in the doctor’s office to predictive models aimed at preventing unnecessary hospitalizations. These provide enticing solutions to the current drudgery of documentation.
But I am still hesitant to celebrate them. I’m already wary of big tech companies’ use and monitoring of consumers’ private data, and my concerns are only heightened by these businesses’ entry into the healthcare space.
The collaboration between Google and the University of Chicago, for example, is the focus of a lawsuit claiming that personal health information was shared without the express written consent of patients. Once companies like Google enter into the healthcare space, how do we know they will abide by the rules protecting the personal health information contained in medical records and, more importantly, who would know if they didn’t?
In an age where individuals can be identified from purportedly anonymous DNA samples and imaging algorithms have been used to identify individual faces reconstructed from routine MRI scans, Google’s being adjacent to — if not outright inside — my and my patients’ medical files requires more protections than the Health Insurance Portability and Accountability Act (HIPAA) currently offers.
Back in 1996, the framers of that seminal privacy law didn’t anticipate that people’s activity, financial, and search data would be stored alongside — perhaps even among — our medical diagnoses and symptoms. Questions about the provenance, permission, and permanence of the metadata linking these types of information could hardly have been conceived; no one then thought it possible that the technology would know us better than we know ourselves.
If HIPAA is insufficient to protect us, it’s probably easier to amend it than stop the steamroller that is big tech. For one, HIPAA should include explicit provisions about separating medical data from what is essentially marketing data. Google is here to make a sale. I’m here to save lives.
Efforts to address the documentation problem at its source have been proposed by the Centers for Medicare & Medicaid Services, which will implement new requirements for clinical encounters in 2021. While these changes will make electronic medical records easier to manage, they will not make the records safer from invasion. We need updated methods to protect all types of medical data and to prevent the complete erosion of privacy that has already occurred with other online activities.
Andrew Dorsch, MD, is an Assistant Professor in the Department of Neurological Sciences at Rush University Medical Center in Chicago and a Public Voices Fellow with The OpEd Project.
Andrew, you did an excellent job framing the issue. Thanks.
Of course, the problem of misadventure in the use of human data extends to many, many other sectors of our activity, from knowing everything we eat, to where we are at all times, to everything we write or are interested in. It’s like we all have a newspaper reporter living with us, constantly telling the world what is going on with John Doe. And because this is such valuable data — can you believe how wealthy Google is? — the political effort required to change this surveillance capitalism is unbelievable.
Here are just a few ideas:
Make it a felony not to report a ransomware attack. Of course, many firms are too embarrassed and pay the ransom to keep it quiet.
Symmetry. Allow the public to use the same digital tools to investigate what the insurers and hospitals and drug firms and government agencies are doing. In other words, enlarge the reporting requirements of, say, insurers, to include details of their salary structures, what is done with all their data, to whom it is sold, kickbacks from PBMs and GPOs… get as much data from them as is gotten from the patient and other providers. We need not only an EHR, but an Electronic Stakeholder Report, an ESR — from insurers, hospitals, drug firms, government agencies.
Allow the legal sector full access to what is going on in all these sectors of the economy.
Very severe penalties have to be applied to hackers and ransomware attackers and to firms that sell patient data without sharing these profits with the patient. Congress has to do lots of work here.
Once, say, employers find how useful it is to know what is in an EHR pertaining to a potential future employee, there are going to be massive sales of this data… What is his BMI? Does he have back pain? Does he have diabetes or hypertension? Employers can save fortunes by hiring only healthy people. All these ways of screwing with health data are just beginning, and we desperately need to abort this nefarious future.