Keynotes

Towards Personalized Explainable AI

The AI community is increasingly interested in investigating explainability to foster user acceptance and trust in AI systems. However, there is still limited understanding of the actual relationship between AI explainability, acceptance and trust, and which factors might impact this relationship. I argue that one such factor relates to user individual differences, including long-term traits (e.g., cognitive abilities, personality, preferences) and short-term states (e.g., cognitive load, confusion, emotions). Namely, given a specific AI application, different types and forms of explanations may work best for different users and even for the same user at different times, depending to some extent on their long-term traits and short-term states. As such, our long-term goal is to develop personalized XAI tools that adapt dynamically to the relevant user factors. In this talk, I focus on research investigating the relevance of long-term traits in XAI personalization. I will present a general methodology for this investigation and examples of how we applied it to understand the importance of personalized XAI in an intelligent tutoring system and a recommender system. I discuss how to move forward from these insights and research paths that should be explored to make personalized XAI happen.

Cristina's research is at the intersection of Artificial Intelligence (AI), Human-Computer Interaction (HCI), and Cognitive Science, with the goal of creating AI systems that can both perform useful tasks and be well accepted by their users. A key aspect of this endeavor is enabling AI systems to predict and monitor relevant properties of their users (e.g., states, skills, needs, emotions) and personalize the interaction accordingly, in a manner that maximizes both task performance and user satisfaction. Toward this goal, Cristina is especially interested in investigating how to enable AI technology to strike the right balance between providing accurate predictions and decision-making while maintaining transparency, user control, and trust.

For more details on current and past projects, see https://hai.cs.ubc.ca/

Cristina Conati
Professor of Computer Science and Distinguished Scholar at the Sauder School of Business, University of British Columbia

What Knowledge for What Paradise

Paradise: from the Persian pairidaeza, which also gave rise to the Hebrew pardes, with the original meaning of "protected garden". We are the gardeners who must protect the natural habitat in which we live. Drawing on Yuval Harari's formula of knowledge and Maestro Michelangelo Pistoletto's formula of creation, this talk considers AI and HCI within a perimeter of development that places human beings, understood as part of nature, from which they are distinct but never separated, at the center of the design of intelligent systems that are as transparent, understandable, and accessible as possible, drawing ever more from that source of knowledge which science has not yet understood: intuition, inspiration, imagination. The talk also presents cases of integrated design that have reached the market, stimulating new sensibilities.

For over 35 years I have been involved in marketing and communication, with the task of analyzing competitive scenarios and finding the best strategies to help companies and brands make their way into increasingly competitive markets, using every physical and digital touchpoint. I founded and managed agencies in Italy and abroad, with experiences in New York, London, and San Francisco, where I began to learn about the challenges and perspectives of the digital revolution.
I would like to see the world in 100 years. And I think I will.

Valerio Saffirio
Brand builder and innovation manager