Responsible AI Seminar: Black-box decision making in medicine: some thoughts and questions

On 18 June, Jens Christian Krarup Bjerring (Aarhus University) will give an online talk as part of a new interdisciplinary, inter-university seminar series on responsible AI.


Black-box decision making in medicine: some thoughts and questions


Advanced machine learning algorithms are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before these algorithms surpass human practitioners in accuracy, reliability, and knowledge. If so, practitioners will have a prima facie epistemic obligation to align their medical verdicts with those of the algorithms. However, because of their complexity, machine learning algorithms, and notably deep learning algorithms, often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood even by human experts. When AI systems reach this level of complexity, we can speak of black-box medicine. In this talk, I'll explore some of the consequences of black-box medicine for core values and ideals in medical decision making.

For details on how to join, see the Responsible AI Seminars page.

About the Responsible AI Seminar Series

Responsible AI draws on widely different scientific disciplines, from the technical aspects of AI, via ethics, philosophy, and law, to the individual realities of different application domains. We wish to take advantage of the limitations imposed by Covid-19 to start an informal conversation across Denmark about different aspects of responsible AI via a hybrid-format seminar series. We hope to catch your interest with three seminar talks before the summer vacation, after which we hope to merge efforts across universities to start a truly inter-university seminar series. Initiators: Aasa Feragen, Melanie Ganz and Sune Hannibal Holm from the DFF-funded project Bias and Fairness in Medicine.

Learn more about the Responsible AI Seminars here.