By KIM BELLARD
Last week I was on a fun podcast with a bunch of people who were, as usual, smarter than me and, in particular, more knowledgeable about one of my favorite topics – artificial intelligence (A.I.), particularly for healthcare. With the WHO releasing its “first global report” on A.I. – Ethics & Governance of Artificial Intelligence for Health – and with no shortage of other experts weighing in recently, it seemed like a good time to revisit the topic.
My prediction: it’s not going to work out quite like we expect, and it probably shouldn’t.
“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” Dr Tedros Adhanom Ghebreyesus, WHO Director-General, said in a statement. He’s right on both counts.
WHO’s proposed six principles are:
- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
All valid points, but, as we’re already learning, easier to propose than to ensure. Just ask Timnit Gebru. When it comes to using new technologies, we’re not so good about thinking through their implications, much less ensuring that everyone benefits. We’re more of a “let the genie out of the bottle and see what happens” kind of species, and I hope our future AI overlords don’t laugh too much about that.
As Stacey Higginbotham asks in IEEE Spectrum, “how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins? … we have no idea how to make it work for society’s goals, rather than a company’s, or an individual’s.” She further notes that “we haven’t even established what those benefits should be.”