We’re already living in an era of unprecedented misinformation/disinformation, as we’ve seen repeatedly with COVID-19 (e.g., hydroxychloroquine, ivermectin, anti-vaxxers), but deepfakes should alert us that we haven’t seen anything yet.
ICYMI, here’s the 60 Minutes story:
The trick behind deepfakes is a type of deep learning called a “generative adversarial network” (GAN): two neural networks compete, one generating media (e.g., audio or video) that is as realistic as possible while the other tries to flag it as fake. They can be trying to replicate a real person or to create entirely fictitious people. The more they iterate, the more realistic the output gets.
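To make the “compete” part concrete, here is a toy sketch of my own (a deliberately stripped-down illustration, not anything from the 60 Minutes story): a two-parameter “generator” learns to mimic one-dimensional Gaussian data by fooling a logistic-regression “discriminator.” Real GANs use deep networks on images or audio, but the adversarial loop is the same shape.

```python
# Toy adversarial training loop (hypothetical illustration): the generator
# g(z) = a*z + b learns to mimic data drawn from N(4, 1) by fooling a
# simple logistic-regression discriminator D(x) = sigmoid(w1*x + w0).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def real_batch(n):
    # "Real" data the generator has never seen directly
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0      # generator parameters (starts far from the data)
w1, w0 = 0.0, 0.0    # discriminator parameters

lr, n = 0.05, 128
for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    x = np.concatenate([real_batch(n), fake])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    d = sigmoid(w1 * x + w0)
    err = d - y                      # gradient of the logistic loss
    w1 -= lr * np.mean(err * x)
    w0 -= lr * np.mean(err)

    # Generator step: nudge samples toward what D currently calls "real"
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    d = sigmoid(w1 * fake + w0)
    g = -(1.0 - d) * w1              # non-saturating loss -log D(fake)
    a -= lr * np.mean(g * z)         # chain rule through fake = a*z + b
    b -= lr * np.mean(g)

z = rng.normal(0.0, 1.0, 1000)
print(np.mean(a * z + b))  # generated-sample mean drifts toward the real mean
```

Neither player ever sees the other’s parameters; each only sees how well it is currently winning, which is exactly why iterating makes the fakes better.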
Two years ago, I interrupted a speaker at a big health/tech conference, right in the middle of his presentation. I still blush at the memory. But the speaker was citing data — my data — incorrectly, and I couldn’t let it pass.
Brian Dolan recently wrote about how he wished he’d spoken up when he heard someone spreading misinformation at a conference:
Unfortunately, about 80 people sitting in the room either accepted this as new information or failed to stand up to correct the speaker. I wish I had pulled a Susannah Fox and done the latter.
What style of conference is the right one for the health/tech field? The TED-style “sage on stage” who does not take questions? Or the scientific-meeting style of engaged debate? Or is there a place for both?
Do different rules apply to start-ups? Is it OK to fudge a little bit to make a good point, as one might do in a pitch? Personally, I do not think people are entitled to their own facts. There’s too much at stake.
We can’t let misinformation—or worse—go by without comment.
I think it’s time for more people to speak up in health care.