With the increasing use of artificial intelligence (AI) in medical research and practice, its ethical implications have come under scrutiny. The potential benefits of AI systems must be weighed against the ethical hazards they might cause. Ethical concerns in the implementation of AI in medicine include fairness, explainability, interpretability and accountability, yet there is little consensus on how to tackle these challenges and minimise the potential harm caused by AI systems in medicine. In the present symposium, we will focus on the ethical challenges of artificial intelligence in clinical neuroscience and give an update on the current discussion.
Derya Şahin will discuss algorithmic fairness using the examples of predictive models for Alzheimer’s disease and for psychosis in high-risk individuals, highlighting the challenges in implementing fair artificial intelligence in medicine.
Sarah Genon will present research findings on racial bias in AI-assisted neuroimaging studies, providing an empirical example of fairness-accuracy trade-offs.
Didem Stark will present findings demonstrating bias in deep-learning algorithms trained on structural MRI data for the detection of Alzheimer’s disease. Even when training data are balanced with respect to gender and age, these models achieve significantly higher classification accuracy for women, which may have important implications for model development and translation to clinical practice.
Philipp Kellmeyer will give an overview of the explainability and interpretability of AI in the neurosciences and present an ethics-by-design approach to the implementation of an AI-assisted EEG analysis tool based on convolutional neural networks.
In summary, the present symposium will provide a comprehensive overview of the potential ethical implications of AI in clinical neuroscience and illustrate its hazards across a range of disorders and use cases.
08:30
Identification of fairness bias in clinical prediction models for psychosis and Alzheimer’s disease
D. Şahin (Köln, DE)
Algorithmic fairness, the pursuit of unbiased algorithms, is one of the key concepts of ethically responsible AI. Echoing the popular dictum “Garbage in, garbage out”, biases in the data may lead to biased algorithms: bias in, bias out. As AI gains importance in medical research and practice, fairness should be incorporated into medical AI algorithms from development to deployment. Failing to examine and correct algorithms for fairness could perpetuate biases and reinforce health inequities.
This talk will give an introduction to the concept of algorithmic fairness, illustrate it with examples of prediction algorithms for Alzheimer’s dementia and psychosis, and summarize central concepts, questions and challenges regarding algorithmic fairness in medicine.
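To make the “bias in, bias out” idea concrete, the following is a minimal sketch of a group-fairness audit for a binary clinical risk classifier. The data and all names (y_true, y_pred, group) are hypothetical illustrations, not the models discussed in this talk.

    # Minimal sketch of a group-fairness audit for a binary risk model.
    # Toy data only; names and values are hypothetical.
    import numpy as np

    def group_rates(y_true, y_pred, group):
        """Per-group positive prediction rate and sensitivity."""
        rates = {}
        for g in np.unique(group):
            m = group == g
            ppr = y_pred[m].mean()                  # P(pred=1 | group=g)
            tpr = y_pred[m & (y_true == 1)].mean()  # sensitivity within group
            rates[g] = {"positive_rate": ppr, "sensitivity": tpr}
        return rates

    # Toy predictions for 8 patients from two demographic groups.
    y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    rates = group_rates(y_true, y_pred, group)
    # Demographic parity difference: gap in positive prediction rates.
    dpd = abs(rates["A"]["positive_rate"] - rates["B"]["positive_rate"])
    # Equal opportunity difference: gap in sensitivity.
    eod = abs(rates["A"]["sensitivity"] - rates["B"]["sensitivity"])
    print(rates, dpd, eod)

A large gap in either rate would flag the model for closer fairness analysis before deployment; which metric matters most depends on the clinical use case.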
08:52
Cross-ethnicity/race generalization failure of behavioral prediction from resting-state functional connectivity
S. Genon (Jülich, DE)
Algorithmic biases that favor majority populations pose a key challenge to the application of machine learning in precision medicine. Here, we assessed such bias in models predicting behavioral phenotypes from brain functional magnetic resonance imaging. We examined prediction bias using two independent datasets (preadolescent versus adult) of mixed ethnic/racial composition. When predictive models were trained on data dominated by white Americans (WA), out-of-sample prediction errors were generally higher for African Americans (AA) than for WA. This bias toward WA corresponds to more WA-like brain-behavior association patterns learned by the models. When models were trained only on AA, prediction accuracy for AA improved relative to training only on WA or on an equal number of AA and WA participants, but remained below that for WA. Overall, the results call for caution and further research before current brain-behavior prediction models are applied in minority populations.
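As a rough illustration of the evaluation logic described above, the sketch below trains a phenotype predictor on a majority-dominated sample and compares out-of-sample error between groups. The data are synthetic placeholders; the feature count, the ridge model and all names are assumptions for illustration, not the study’s actual pipeline.

    # Hedged sketch: train on a WA-dominated sample, compare per-group
    # out-of-sample error. Synthetic data only, not the study's dataset.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n, d = 400, 50                    # participants x connectivity features
    X = rng.normal(size=(n, d))
    group = rng.choice(["WA", "AA"], size=n, p=[0.8, 0.2])  # WA-dominated
    # Synthetic brain-behavior association that differs slightly between
    # groups, so a model fit mainly to WA transfers imperfectly to AA.
    w = rng.normal(size=d)
    shift = np.where(group == "AA", 0.5, 0.0)
    y = X @ w + shift * (X @ rng.normal(size=d)) + rng.normal(scale=0.5, size=n)

    train = rng.random(n) < 0.7       # random train/test split
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    pred = model.predict(X[~train])

    for g in ("WA", "AA"):
        m = group[~train] == g
        print(g, mean_absolute_error(y[~train][m], pred[m]))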
09:14
Gender bias in MRI-based deep-learning models for the detection of Alzheimer’s disease
D. Stark (Berlin, DE)