Tag: Research

Sexism vs. Cultural Imperialism

By SARAH HEARNE

As I was getting ready for bed last night a friend shared a tweet that immediately caught my attention.

https://twitter.com/sbattrawden/status/1143465003409915905

The tweet was about a paper that had just been published online, titled “Does physician gender have a significant impact on first-pass success rate of emergency endotracheal intubation?”, and showed the abstract, which began:

It is unknown whether female physicians can perform equivalently to male physicians with respect to emergency procedures.

Understandably, this got a lot of people’s backs up, myself included. Who on earth thinks that’s a valid question to be researching in this day and age? Are we really still having to battle assumptions of female inferiority when it comes to things like this? Who on earth gave this ethics approval, let alone got it through peer review?

I then took a deep breath and asked myself why a respected journal, The American Journal of Emergency Medicine, would publish such idiocy. Maybe there was something else going on. The best way to find out is to read the paper, so I got a copy and started reading. The first thing that struck me was the author affiliations – both are associated with hospitals in Seoul, South Korea. The second author had an online profile; he is a Clinical Professor of Emergency Medicine. I couldn’t find the first author anywhere, which made me think they are probably quite early in their career. The subject matter wasn’t something I could imagine a male early-career researcher being interested in, so I figured they were probably female (not knowing Korean names, I couldn’t work out whether the name was feminine or masculine).

Continue reading…

Young People Need To Turn Out For Their Health

By MERCEDES CARNETHON PhD

This month, we saw historic turnout at the polls for midterm elections, with over 114 million ballots cast. One noteworthy observation regarding voter turnout is the record rate of participation by younger voters aged 18 to 29. Around 31 percent of people in that age group voted in the midterms this year, an increase from 21 percent in 2014, according to a day-after exit poll by Tufts University.

Surely their political engagement counters the criticism that millennials are disengaged and disconnected from society, and demonstrates that millennials are fully engaged when issues are relevant to them, their friends, and their families. Why, then, do we not see the same level of passion, engagement and commitment when young adults are asked to consider their health and well-being?

I have had the privilege of being a member of the National Heart, Lung and Blood Institute-funded Coronary Artery Risk Development in Young Adults (CARDIA) study research team. In over 5,000 black and white adults who were initially enrolled when they were 18 to 30 years old and have now been followed for nearly 35 years, we have described the decades-long process by which heart disease develops. We were able to do this because, in the 1980s when these studies began, young adults could be reached at their home telephone numbers. When a university researcher called claiming to be funded by the government, there was a greater degree of trust.

Unfortunately, that openness and that trust have eroded, particularly among younger adults and those who may feel marginalized from our society for any number of valid reasons. However, the results look like disengagement: unanswered phone calls from researchers, no-shows at the research clinic, and the absence of an entire group of adults from today’s research studies. Disengagement is a very real public health crisis with consequences that are as dire as any political crisis.
Continue reading…

The ACO Information Vacuum

In my three-part series on why we know so little about ACOs, I presented three arguments:

  1. We have no useful information on what ACOs do for patients;
  2. that’s because the definition of “ACO” is not a definition but an expression of hope; and
  3. the ACO’s useless definition is due to dysfunctional habits of thought within the managed care movement that have spread throughout the health policy community.

Judging from the comments from THCB readers, there is no disagreement about points 1 and 3. With one exception (David Introcaso), no one took issue with point 2 either. Introcaso  agreed with point 1 (we have no useful information on ACOs), but he argued that the ACO has been well defined by CMS regulations, and CMS, not the amorphous definition of “ACO,” is the reason researchers have failed to produce useful information on ACOs.

Another reply by Michael Millenson did not challenge any of the three points I made. Millenson’s point was that people outside the managed care movement use manipulative labels, so what’s the problem?

I’ll reply first to Introcaso’s post, and then Millenson’s. I’ll close with a plea for more focus on specific solutions to specific problems and less tolerance for the unnecessarily abstract diagnoses and prescriptions (such as ACOs) celebrated today by far too many health policy analysts.

Summary of Introcaso’s comment and my response

I want to state at the outset I agree wholeheartedly with Introcaso’s statement that something is very wrong at CMS. I don’t agree with his rationale, but his characterization of CMS as an obfuscator is correct.

Continue reading…

The Limitations of Healthcare Science

Every once in a while on the wards, one of the attending physicians will approach me and ask me to perform a literature review on a particular clinical question. It might be a question like “What does the evidence say about how long Bactrim should be given for a UTI?” or “Which is more effective in the management of atrial fibrillation, rate control or rhythm control?” A chill usually runs down my spine, like the feeling one gets when a cop siren wails from behind while one is driving. But thankfully, summarizing what we know about a subject is actually a pretty formulaic exercise, involving a PubMed search followed by an evaluation of the various studies with consideration for generalizability, bias, and confounding.
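
That first, mechanical step lends itself to automation. Below is a minimal sketch of such a PubMed query using Biopython’s Entrez module; the search terms, contact email, and result count are illustrative placeholders, and the actual appraisal of each study for generalizability, bias, and confounding still has to be done by the reader.

    # Illustrative PubMed search for the Bactrim/UTI duration question above.
    # Requires Biopython; the query string and email address are placeholders.
    from Bio import Entrez

    Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

    query = '"urinary tract infection" AND trimethoprim-sulfamethoxazole AND duration'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=20, sort="relevance")
    record = Entrez.read(handle)
    handle.close()

    pmids = record["IdList"]
    print(f"{record['Count']} articles match; fetching the first {len(pmids)} abstracts")

    # Pull the abstracts so each study can be appraised by hand for
    # generalizability, bias, and confounding.
    fetch = Entrez.efetch(db="pubmed", id=",".join(pmids), rettype="abstract", retmode="text")
    print(fetch.read())
    fetch.close()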

A more interesting question, in my opinion, is to ask why we do not know what we do not know. Delving into this question requires some understanding of how research is conducted, and it has implications for how clinicians make decisions with their patients. Below, I hope to provide some insights into the ways in which clinical research is limited. In doing so, I hope to illustrate why we know less about some topics, and why some questions are perhaps even unknowable.
Continue reading…

Secrets to Choosing the Right Medical School

The competition to get into medical school is fierce.  The Association of American Medical Colleges just announced that this year, nearly 50,000 students applied for just over 20,000 positions at the nation’s 141 MD-granting schools – a record.  But medical schools do not have a monopoly on selectivity.  The average student applies to approximately 15 schools, and many are accepted by more than one.  Students attempting to sort out where to apply and which admission offer to accept face a big challenge, and they often look to medical school rankings for guidance.

Among the organizations that rank medical schools, perhaps the best-known is US News and World Report (USNWR).  It ranks the nation’s most prestigious schools using the assessments of deans and chairs (20%), assessments by residency program directors (20%), research activity (grant dollars received, 30%), student selectivity (difficulty of gaining admission, 20%), and faculty resources (10%).   Based on these methods, the top three schools are Harvard, Stanford, and Johns Hopkins.
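
To make the arithmetic behind that weighting concrete, here is a small illustrative calculation of a composite score from the percentages quoted above; the component scores for the hypothetical school are invented, and USNWR’s actual data and normalization are not reproduced here.

    # Weighted composite using the USNWR weights quoted above.
    # The component scores below are made-up values on a 0-100 scale.
    weights = {
        "peer_assessment": 0.20,        # deans and chairs
        "residency_directors": 0.20,
        "research_activity": 0.30,      # grant dollars received
        "student_selectivity": 0.20,
        "faculty_resources": 0.10,
    }

    hypothetical_school = {
        "peer_assessment": 88,
        "residency_directors": 91,
        "research_activity": 95,
        "student_selectivity": 90,
        "faculty_resources": 70,
    }

    composite = sum(w * hypothetical_school[k] for k, w in weights.items())
    print(f"Composite score: {composite:.1f}")  # 17.6 + 18.2 + 28.5 + 18.0 + 7.0 = 89.3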

Rankings seem important, but do they tell applicants what they really need to know?  I recently sat down with a group of a dozen fourth-year medical students who represent a broad range of undergraduate backgrounds and medical specialty interests.  I posed this question: How important are medical school rankings, and are there any other factors you wish you had paid more attention to when you chose which school to attend?

Continue reading…

A Case for Open Data

A couple of weeks ago, President Obama launched a new open data policy (pdf) for the federal government. Declaring that “…information is a valuable asset that is multiplied when it is shared,” the Administration’s new policy empowers federal agencies to promote an environment in which shareable data are maximally and responsibly accessible. The policy supports broad access to government data in order to promote entrepreneurship, innovation, and scientific discovery.

If the White House needed an example of the power of data sharing, it could point to the Psychiatric Genomics Consortium (PGC). The PGC began in 2007 and now boasts 123,000 samples from people with a diagnosis of schizophrenia, bipolar disorder, ADHD, or autism and 80,000 controls collected by over 300 scientists from 80 institutions in 20 countries. This consortium is the largest collaboration in the history of psychiatry.

More important than the size of this mega-consortium is its success. There are perhaps three million common variants in the human genome. Amidst so much variation, it takes a large sample to find a statistically significant genetic signal associated with disease. Showing a kind of “selfish altruism,” scientists began to realize that by pooling data, combining computing efforts, and sharing ideas, they could detect the signals that had been obscured because of lack of statistical power. In 2011, with 9,000 cases, the PGC was able to identify 5 genetic variants associated with schizophrenia. In 2012, with 14,000 cases, they discovered 22 significant genetic variants. Today, with over 30,000 cases, over 100 genetic variants are significant. None of these alone are likely to be genetic causes for schizophrenia, but they define the architecture of risk and collectively could be useful for identifying the biological pathways that contribute to the illness.
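
A rough back-of-the-envelope sketch, using illustrative numbers rather than anything from the PGC analyses, shows why sample size matters so much here: with millions of variants tested, the per-variant significance threshold becomes vanishingly small, and the power to detect a modest frequency difference between cases and controls only becomes adequate once case counts reach the tens of thousands.

    # Why large samples: multiple-testing threshold and approximate power.
    # All numbers are illustrative; this is not a reanalysis of PGC data.
    from scipy import stats

    n_variants = 3_000_000        # "perhaps three million common variants"
    alpha = 0.05 / n_variants     # Bonferroni-style threshold, ~1.7e-8
    print(f"Per-variant significance threshold: {alpha:.1e}")

    def power_two_proportions(n_cases, n_controls, p_control, odds_ratio, alpha):
        """Approximate power of a two-proportion z-test to detect a difference
        in variant frequency between cases and controls (one observation per
        subject, a deliberate simplification)."""
        odds_case = (p_control / (1 - p_control)) * odds_ratio
        p_case = odds_case / (1 + odds_case)
        pooled = (n_cases * p_case + n_controls * p_control) / (n_cases + n_controls)
        se_null = (pooled * (1 - pooled) * (1 / n_cases + 1 / n_controls)) ** 0.5
        se_alt = (p_case * (1 - p_case) / n_cases
                  + p_control * (1 - p_control) / n_controls) ** 0.5
        z_crit = stats.norm.ppf(1 - alpha / 2)
        return float(stats.norm.cdf((abs(p_case - p_control) - z_crit * se_null) / se_alt))

    # Roughly the case counts cited above, with equal numbers of controls,
    # a 30% control frequency, and a small effect (odds ratio 1.10).
    for n in (9_000, 14_000, 30_000):
        print(n, round(power_two_proportions(n, n, 0.30, 1.10, alpha), 3))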

We are seeing a similar culture change in neuroimaging. The Human Connectome Project is scanning 1,200 healthy volunteers with state of the art technology to define variation in the brain’s wiring. The imaging data, cognitive data, and de-identified demographic data on each volunteer are available, along with a workbench of web-based analytical tools, so that qualified researchers can obtain access and interrogate one of the largest imaging data sets anywhere. How exciting to think that a curious scientist with a good question can now explore a treasure trove of human brain imaging data—and possibly uncover an important aspect of brain organization—without ever doing a scan.

Continue reading…

Is Health Care about to Go the Way of the Dodo?

As the new year started, all kinds of predictions came to our attention, mostly about things that will enter our lives.

How about things that will disappear from our lives?

Of all the species that have become extinct, the Dodo has become more or less synonymous with extinction. To “go the way of the Dodo” means something is headed for extinction. (Picture and quote source: The Smithsonian)

So this goes not only for species but also for stuff we use or things we do.

You might want to have a look at the extinction timeline and find things you did ‘some’ time ago and don’t do anymore.

But what about health care? What will vanish? Will the doctor, or the nurse, disappear because of all this new technology? Will we no longer go to a hospital or to the doctor’s office? I don’t think so.

We will still need professionals with compassion and care. However, a shift is happening, and some things will start becoming obsolete. In what follows I am in no way trying to be exhaustive, so feel free to add comments or thoughts on what you think will disappear from our lives in terms of health(care).

Continue reading…

Getting the Patient’s POV

One major challenge for the new Patient Centered Outcomes Research Institute (PCORI) is to make good on its stated mission to improve health care by producing evidence “that comes from research guided by patients, caregivers and the broader health care community.”

In order to “guide” that research, patients will offer their time and their experience to serve on various panels alongside scientists and other stakeholders, many of whom have competing agendas. This means that representing the patient perspective in research governance, priority-setting, design, execution and dissemination is not a good task for the shy or the ill-prepared.  Not only do you have to have reflected on your own experience as a patient, but you have to have a good sense of how much you can generalize from that experience. This is, after all, not about you. It is about us – all of us patients.

Sometimes this means gathering information from others who have a similar diagnosis and who have been treated with similar approaches. What was getting chemotherapy for breast cancer like for you?

Sometimes it means learning about how people with different kinds of heart conditions or kinds of cancer experience their diagnoses and treatments or health care in general. What happened when you were discharged from the hospital?

Continue reading…

Invalidated Results Watch

My friend Ivan Oransky runs a highly successful blog called Retraction Watch; if you have not yet discovered it, you should! In it he and his colleague Adam Marcus document (with shocking regularity) retractions of scientific papers. While most of the studies are from the bench setting, some are in the clinical arena. One of the questions they have raised is what should happen with citations of these retracted studies by other researchers? How do we deal with this proliferation of oftentimes fraudulent and occasionally simply mistaken data?

A more subtle but no less difficult conundrum arises when papers cited are recognized to be of poor quality, yet they are used to build the defense of one’s thesis. The latest case in point comes from the paper I discussed at length yesterday, describing the success of the Keystone VAP prevention initiative. And even though I am very critical of the data, I do not mean to single out these particular researchers. In fact, because I am intimately familiar with the literature in this area, I can judge what is being cited. I have seen similar transgressions from other authors, and I am sure that they are ubiquitous. But let me be specific.

In the Methods section on page 306, the investigators lay out the rationale for their approach (bundles) by stating that the “ventilator care bundle has been an effective strategy to reduce VAP…” As supporting evidence they cite references #16-19. Well, it just so happens that these are the references that yours truly had included in her systematic review of the VAP bundle studies, and the conclusions of that review are largely summarized here. I hope that you will forgive me for citing myself again:
Continue reading…

Is There Something Wrong With the Scientific Method?

A recurring theme on this blog is the need for empowered, engaged patients to understand what they read about science. It’s true when researching treatments for one’s condition, it’s true when considering government policy proposals, it’s true when reading advice based on statistics. If you take any journal article at face value, you may be severely misled; you need to think critically.

Sometimes there’s corruption (e.g. the fraudulent vaccine/autism data reported this month, or “Dr. Reuben regrets this happened”), sometimes articles are retracted due to errors (see the new Retraction Watch blog), sometimes scientists simply can’t reproduce a result that looked good in the early trials.

But an article a month ago in the New Yorker sent a chill down my spine tonight. (I wish I could remember which Twitter friend cited it.) It’ll chill you, too, if you believe the scientific method leads to certainty. This sums it up:

Many results that are rigorously proved and accepted start shrinking in later studies.

This is disturbing. The whole idea of science is that once you’ve established a truth, it stays put: you don’t combine hydrogen and oxygen in a particular way and sometimes you get water, and other times chocolate cake.

Reliable findings are how we’re able to shoot a rocket and have it land on the moon, or step on the gas and make a car move (predictably), or flick a switch and turn on the lights. Things that were true yesterday don’t just become untrue. Right??

Bad news: sometimes the most rigorous published findings erode over time. That’s what the New Yorker article is about.

I won’t try to teach here everything in the article; if you want to understand research and certainty, read it. (It’s longish, but great writing.) I’ll just paste in some quotes. All emphasis is added, and my comments are in [brackets].

Continue reading…
