The AI-Mind strategy for the ethical and trustworthy communication of AI-based dementia risk prediction to people with MCI in the clinical setting
In honour of World Alzheimer’s Month, we are thrilled to share insights from AI-Mind’s work on the ethical and trustworthy communication of artificial intelligence-based solutions in medical settings. A full report will be available soon.
Artificial intelligence (AI) may sound somewhat futuristic, but it is increasingly being used in many different domains (such as self-driving cars, the interpretation of X-ray images, Google Translate and Alexa). One branch of AI is predictive analytics, which involves evaluating historical and real-time data to predict which people are most at risk of developing certain medical conditions. In this context, AI has several advantages: it can classify, analyse and make predictions based on large amounts of data; it can handle far more data than humans can typically manage; and it is much faster than humans while remaining potentially very accurate. Indeed, a key goal of AI-Mind is to enable clinicians to make more effective and more efficient operational and clinical decisions, eventually allowing care to be personalised to each individual.
Several studies suggest that, should a reliable test become available, most people would be interested in knowing their risk of developing dementia. Healthcare providers and governments would also find such information useful, as it would allow them to plan for healthcare needs at the population level, and clinicians would eventually be able to provide personalised dementia care and treatment to their patients.
However, whilst AI-based risk prediction tools have the potential to predict, more accurately than is currently possible, which people with Mild Cognitive Impairment (MCI) will develop dementia, the potential benefits must be balanced against possible ethical, social and practical concerns. Concerns have been raised in the literature about confidentiality, respect for autonomy, harm versus benefit and the well-known “black box” problem, whereby highly accurate risk predictions may be made but it is difficult, if not impossible, to trace the specific factors behind a given prediction. This could have implications for the perceived value of such tools for patients. Another key concern is that of bias (e.g. in the questions being asked and in the training and test data sets) and the implications for the future use of AI-based dementia risk prediction tools in clinical practice for certain marginalised groups (i.e. regarding access to and use of those tools).
The successful use and ongoing development of this kind of technology depends heavily on all those affected by its use finding it trustworthy and being willing and able to use and understand it. Lack of trust or doubts about the value of AI for risk prediction, misunderstandings, fear and resistance are just some of the factors which may determine the success or failure of implementing even the best AI-based risk prediction tools. Despite the wealth of information on AI and on risk prediction, literature on the ethical use and communication of AI-based dementia risk prediction specifically for people with MCI was scarce. AI-Mind researchers therefore conducted a rapid review of the literature to identify criteria that should be met or addressed for an AI-based risk prediction tool to be considered trustworthy and for the risk communication to be considered ethical.
We also conducted a series of consultations with key stakeholders in the context of Public and Stakeholder Involvement to help ensure that the various ethical, societal and practical issues considered in the context of this project do not reflect solely the views, beliefs, assumptions and priorities of a limited number of published scholars and AI specialists. This also helps ensure that moral norms are established through a fair dialogue in which everyone’s perspectives and viewpoints are heard and taken into account.
Indeed, failure to successfully establish trust by patients and doctors in clinical practice would represent a huge waste of public funds which could have been better invested elsewhere, a misuse of the time and efforts of research participants or patients who shared their data and a lost opportunity to improve the lives of millions of people – all of which would be hugely unethical.
The review and series of consultations, organised in the framework of the AI-Mind project, led to the development of a comprehensive strategy for the ethical and trustworthy communication of AI-based dementia risk prediction to people with MCI in the clinical setting. This will soon be available on the AI-Mind website. This document will hopefully be revisited and reviewed towards the end of the AI-Mind project. If you have any feedback or comments, please send them to Dr Dianne Gove at [email protected]
With thanks to:
- Ana Diaz, Angela Bradshaw and Anna Rita Øksengård for their support with the consultation of key stakeholders.
- Richard Milne and Angela Bradshaw for their support with the literature review.
- Lina Plataniti, Harry Hallock and Vebjørn Andersson for their help with the survey for clinicians.
- All members of AI-Mind Work Package 1, the AI specialists and the clinicians in the AI-Mind project for their support and contributions.
- The people at risk of dementia, people with MCI, people with dementia, their supporters and the clinicians who provided valuable feedback on this topic.