BY MICHAEL MILLENSON
If you ask ChatGPT how many procedures a certain surgeon performs or what a specific hospital’s infection rate is, the OpenAI and Microsoft chatbot invariably replies with some version of, “I don’t do that.”
But depending upon how you ask, Google’s Bard provides a very different response, even recommending a “consultation” with particular clinicians.
Bard told me how many knee replacement surgeries were performed by major Chicago hospitals in 2021, their infection rates and the national average. It even told me which Chicago surgeon does the most knee surgeries and his infection rate. When I asked about heart bypass surgery, Bard provided both the mortality rate for some local hospitals and the national average for comparison. While sometimes Bard cited itself as the information source, beginning its response with, “According to my knowledge,” other times it referenced well-known and respected organizations.
There was just one problem. As Google itself warns, “Bard is experimental…so double-check information in Bard’s responses.” When I followed that advice, truth began to blend indistinguishably with “truthiness” – comedian Stephen Colbert’s memorable term for information that’s accepted as true not because of supporting facts, but because it “feels” true.