
The most recent scientific publication from AI-Mind

We are delighted to announce that a new scientific publication from the AI-Mind consortium, A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective, has recently been published in the International Journal of Human-Computer Interaction.

Artificial intelligence (AI) in the health system

An ongoing AI revolution is redefining the health sector, and personalised healthcare is on the verge of advancing from concept to reality. The European Commission emphasises the possibilities that AI and supercomputing offer to health systems, and the European and AI-Mind ambition to apply advanced predictive and preventive methods of diagnosis and intervention for dementia is nearing realisation. Yet the world of machine learning (ML) remains too disconnected from computational biology, partly because of the limited size of biological datasets and the lack of clinical validation, and partly because of a lack of user trust in new technologies.

Prof. Sonia Sousa

About Trust

"There is more than one way to define trust, and context matters. User trust is influenced by socio-ethical considerations, technical and design features, and user characteristics. Even though trust seems to be a nebulous concept, we can still measure it through surveys, interviews and focus groups."

Prof. Sonia Sousa


Dr Tita Alissa Bach and Dr Harry Hallock, together with Amna Khan, Gabriela Beltrão and Prof. Sonia Sousa from Tallinn University, investigated what influences user trust in AI, how trust is defined in various contexts, and how it can be measured. To better understand this complex matter, they conducted a systematic literature review on the topic.


The review provides an overview of user trust definitions, influencing factors, and measurement methods drawn from 23 empirical studies, gathering insight for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust and that the focus should be on selecting the most appropriate trust definition for a specific context rather than on comparing definitions. They also highlight that user trust in AI-enabled systems is influenced by socio-ethical considerations, technical and design features, and user characteristics.

Read the full paper to learn, among other things, more about potential ways of increasing user trust and about what shapes an environment in which a trusted user-AI relationship can be fostered and maintained.

Click here: https://doi.org/10.1080/10447318.2022.2138826

