Humans + AI – The trust journey of humans towards intelligent systems

Artificially intelligent systems are becoming both much more pervasive and much better. There is considerable evidence, however, that the interaction between human planners and AI systems is far from hassle-free: AI-generated decisions are overridden or adapted when they should have been left alone, and AI systems are trusted when they should not be. The literature suggests several factors that influence the trust a planner has in AI systems: some related to the planner (experience and expertise, for instance), some related to the system (transparency, reliability, fairness, …), and some related to the context in which the interaction takes place (high-risk vs. low-risk decisions, complex vs. more standard, …). An overlooked issue is that in many organizations planners interact with the AI system repeatedly. As a result, the more often planners interact with the system, the more their feelings about and behavior towards it depend on their accumulated experience with the system, and the less on the more commonly studied initial factors. This project focuses on trust in AI systems over time and on how a planner's past interactions with an AI system shape future interactions.

September 2022 – ACES (Annual Conference of Experimental Sociology). Poster on first results on the development of trust in human-AI relationships.

June 2023 – ESCF Workshop on ‘Data-Driven Automated Planning’. Presentation on trust in AI.

August 2024 – Paper submitted to the Computer-Supported Cooperative Work and Conference on Human Factors in Computing Systems venues, based on the collaboration with an ESCF member. Five more papers/conferences. EAISI lunch lecture, June 2024. Keynote at the ESCF D2M event, July 2023.

Patricia Kahr, Chris Snijders, Gerrit Rooks, Martijn Willemsen

More info: escf@tue.nl