The Role of Human Knowledge in Explainable AI

Machine learning and AI are facing a new challenge: making models more explainable.

This means developing new methodologies to describe the behaviour of widely adopted black-box models, i.e., high-performing models whose internal logic is hard to describe, justify, and understand from a human perspective.

The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to its users, who can thus gain a better understanding of its logic, increasing trust in and acceptance of the system.

Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus on collecting human knowledge that AI systems can then employ, or on involving humans directly in achieving their objectives (e.g., evaluating or improving the system).

Based on these assumptions and requirements, we published a review article that aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. The paper features a discussion on the challenges, state-of-the-art, and future trends in explainability.

The paper starts from the definition of an “explanation” as an “interface between humans and a decision-maker that is, at the same time, both an accurate proxy of the decision-maker and comprehensible to humans”. Such a description highlights two fundamental features an explanation should have. It must be accurate, i.e., it must faithfully represent the model’s behaviour, and comprehensible, i.e., any human should be able to understand the meaning it conveys.
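The “accurate proxy” requirement is often quantified as fidelity: how frequently an interpretable surrogate agrees with the black box on the same inputs. The sketch below is purely illustrative (a hypothetical scoring rule stands in for the black box, and a one-feature threshold rule for the surrogate) and is not the method from the paper:

```python
import random

random.seed(0)

# Hypothetical "black-box" model: an opaque scoring rule over two features.
def black_box(x1, x2):
    return 1 if 0.7 * x1 + 0.3 * x2 + 0.1 * x1 * x2 > 0.5 else 0

# Simple interpretable surrogate: a single-feature threshold rule
# that a human can read and verify at a glance.
def surrogate(x1, x2):
    return 1 if x1 > 0.55 else 0

# Fidelity: fraction of sampled inputs on which the surrogate
# agrees with the black box.
samples = [(random.random(), random.random()) for _ in range(10_000)]
agree = sum(black_box(a, b) == surrogate(a, b) for a, b in samples)
fidelity = agree / len(samples)
print(f"Surrogate fidelity: {fidelity:.3f}")
```

A high fidelity means the simple rule is a faithful stand-in for the black box on this input distribution; comprehensibility, the second requirement, is what the human-in-the-loop methods surveyed in the paper are meant to assess.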

[Figure: The Role of Human Knowledge in Explainable AI]

The figure above summarizes the four main ways to use human knowledge in explainability, namely: knowledge collection for explainability (red), explainability evaluation (green), understanding the human perspective in explainability (blue), and improving model explainability (yellow). In the schema, the icons represent human actors.

You may cite the paper as:

Tocchetti, Andrea; Brambilla, Marco. The Role of Human Knowledge in Explainable AI. Data 2022, 7, 93. https://doi.org/10.3390/data7070093

The Final TRIGGER Conference

We will join and contribute to the final TRIGGER conference, scheduled for May 31st, 2022, in Brussels.

The theme is: “Rethinking the EU’s role in global governance”. In this context, the TRIGGER project will present the main research outcomes of the H2020 research program that started in 2018 and set the stage for a collaboration among 14 international partners.

We will present our main contributions, namely PERSEUS and COCTEAU.

A quick intro to PERSEUS is available in this video:

Further details about the event are available here:

Modeling and Analyzing Engagement in Social Network Challenges

Within a completely new line of research, we are exploring the power of modeling for human behaviour analysis, especially within social networks and/or on the occasion of large-scale live events. Participation in challenges within social networks is a very effective instrument for promoting a brand or event, and it is therefore regarded as an excellent marketing tool.
Our first research on the topic was published in November 2016 at the WISE Conference, covering the analysis of user engagement within social network challenges.
In this paper, we take the challenge organizer’s perspective and study how to raise the engagement of players in challenges where players are stimulated to create and evaluate content, thereby indirectly raising awareness of the brand or event itself. Slides are available on SlideShare:

We illustrate a comprehensive model of the actions and strategies that can be exploited to progressively boost social engagement as the challenge evolves. The model covers the organizer-driven management of interactions among players and evaluates the effectiveness of each action in light of several other factors (time, repetition, third-party actions, interplay between different social networks, and so on).
We evaluate the model through a set of experiments on a real case, the YourExpo2015 challenge. Overall, our experiments lasted 9 weeks and engaged around 800,000 users on two different social platforms; our quantitative analysis assesses the validity of the model.

The paper is published by Springer here.

To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my Twitter account (@MarcoBrambi).