The Role of Human Knowledge in Explainable AI

Machine learning and AI are facing a new challenge: making models more explainable.

This means developing new methodologies to describe the behaviour of widely adopted black-box models, i.e., high-performing models whose internal logic is challenging to describe, justify, and understand from a human perspective.

The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can thus gain a better understanding of its logic, increasing their trust in and acceptance of the system.

Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee that explanations are fully understandable from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus either on collecting human knowledge that AI systems can then employ, or on involving humans directly in achieving their objectives (e.g., evaluating or improving the system).

Based on these assumptions and requirements, we published a review article that aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. The paper features a discussion on the challenges, state-of-the-art, and future trends in explainability.

The paper starts from the definition of the notion of “explanation” as an “interface between humans and a decision-maker that is, at the same time, both an accurate proxy of the decision-maker and comprehensible to humans”. Such a description highlights two fundamental features an explanation should have. It must be accurate, i.e., it must faithfully represent the model’s behaviour, and comprehensible, i.e., any human should be able to understand the meaning it conveys.
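To make the first requirement concrete, a common way to quantify how accurate a proxy an interpretable model is of a black box is to measure their agreement on the same inputs. The sketch below is a minimal illustration (not taken from the paper) that fits a shallow decision tree as a surrogate and computes its fidelity; the function names and the use of scikit-learn are assumptions made only for this example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fidelity(black_box, surrogate, X):
    """Fraction of inputs on which the surrogate reproduces the
    black-box predictions (the 'accurate proxy' requirement)."""
    return np.mean(black_box.predict(X) == surrogate.predict(X))

def explain_with_tree(black_box, X, max_depth=3):
    """Fit a shallow, human-readable decision tree that mimics the
    black box; keeping the tree small addresses comprehensibility."""
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, black_box.predict(X))  # train on the black-box labels
    return surrogate, fidelity(black_box, surrogate, X)
```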

Overview figure: the four ways of using human knowledge in explainable AI.

The figure above summarizes the four main ways of using human knowledge in explainability, namely: knowledge collection for explainability (red), explainability evaluation (green), understanding the human perspective in explainability (blue), and improving model explainability (yellow). In the schema, the icons represent human actors.

You may cite the paper as:

Tocchetti, Andrea; Brambilla, Marco. The Role of Human Knowledge in Explainable AI. Data 2022, 7, 93. https://doi.org/10.3390/data7070093

EXP-Crowd: Gamified Crowdsourcing for AI Explainability

The spread of AI and black-box machine learning models makes it necessary to explain their behavior. Consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by humans, who are the final beneficiaries of the model.

In our research, just published in Frontiers in Artificial Intelligence, we frame the explainability problem from the crowd’s point of view and engage both users and AI researchers through a gamified crowdsourcing framework. We investigate whether it is possible to improve the crowd’s understanding of black-box models and the quality of the crowdsourced content by engaging users in gamified activities through a crowdsourcing framework called EXP-Crowd. While users engage in such activities, AI researchers organize and share AI- and explainability-related knowledge to educate users.

The next diagram shows the interaction flows of researchers (dashed cyan arrows) and users (orange plain arrows) with the activities devised within our framework. Researchers organize users’ knowledge and set up activities to collect data. As users engage with such activities, they provide Content to researchers. In turn, researchers give users feedback about the activity they performed. Such feedback aims to improve users’ understanding of the activity itself and of the knowledge and context provided within it.

Interaction flows of researchers (dashed cyan arrows) and users (orange plain arrows) in the EXP-Crowd framework.
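To make this loop explicit, here is a minimal sketch of the exchange described above; the class and function names are hypothetical and do not come from the EXP-Crowd implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """A data-collection activity set up by researchers."""
    topic: str
    instructions: str
    contributions: list = field(default_factory=list)

def user_engages(activity: Activity, content: str) -> None:
    # Users engage with the activity and provide Content to researchers.
    activity.contributions.append(content)

def researcher_feedback(activity: Activity) -> list[str]:
    # Researchers return feedback that improves the users' understanding
    # of the activity, the knowledge, and the context provided within it.
    return [f"Feedback on '{activity.topic}': thanks for contributing '{c}'"
            for c in activity.contributions]
```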

In our recent paper published in Frontiers in Artificial Intelligence, we present the preliminary design of a game with a purpose (GWAP) to collect features describing real-world entities, which can then be used for explainability purposes.

One of the crucial steps in the process is the questioning and annotation challenge, where Player 1 asks yes/no questions about the entity to be explained. Player 2 answers such questions and is then asked to complete a series of simple tasks to identify the guessed feature, answering further questions and potentially annotating the picture, as shown below.

Questioning and annotation steps within the explanation game.
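A minimal sketch of the questioning step is shown below, assuming a simple turn-based loop in which Player 2 acts as an oracle over a known set of features; the function names and data structures are hypothetical and only illustrate the mechanics described above.

```python
def play_round(entity_features: set[str], questions: list[str]):
    """Player 1 asks yes/no questions about the hidden entity;
    Player 2 answers, and positive answers become candidate
    features that are then annotated in the picture."""
    collected = []
    for question in questions:                 # Player 1 asks
        answer = question in entity_features   # Player 2 answers yes/no
        collected.append((question, answer))
        if answer:
            annotate(question)                 # follow-up annotation task
    return collected

def annotate(feature: str) -> None:
    # Placeholder for the annotation step, e.g. drawing a bounding box
    # around the feature in the picture shown to Player 2.
    print(f"Please annotate the region showing: {feature}")

# Hypothetical usage
play_round({"has wings", "has a beak"}, ["has wings", "has four legs"])
```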

If you are interested in more details, you can read the full EXP-Crowd paper on the journal site (full open access):

You can cite the paper as:

Tocchetti A., Corti L., Brambilla M., and Celino I. (2022). EXP-Crowd: A Gamified Crowdsourcing Framework for Explainability. Frontiers in Artificial Intelligence 5:826499. doi: 10.3389/frai.2022.826499

Using Crowdsourcing for Domain-Specific Languages Specification

In the context of Domain-Specific Modeling Language (DSML) development, the involvement of end-users is crucial to assure that the resulting language satisfies their needs.

In our paper presented at SLE 2017 in Vancouver, Canada, on October 24th within the SPLASH Conference context, we discuss how crowdsourcing tasks can be exploited to assist in domain-specific language definition processes. This is in line with the vision towards the cognification of model-driven engineering.

The slides are available on slideshare:

 

Indeed, crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational and information collection tasks. In language design, by relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption.

Ready to accept improper use of your tools?

We propose a systematic (and automatic) method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users’ needs. We also report on an experiment validating the approach.

Improving the quality of the language notation may dramatically improve its acceptance and adoption, as well as the way people use the notation and the associated tools.

Essentially, our idea is to spawn a set of questions to the crowd regarding the concrete syntax of visual modeling languages and collect opinions. Based on different strategies, we then generate an optimal notation and check how good it is.
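As a concrete (and deliberately simplistic) example of such a strategy, the sketch below picks, for each language concept, the symbol preferred by the majority of the crowd; the data layout is hypothetical, and the paper considers richer question-ordering and generation strategies.

```python
from collections import Counter

def majority_notation(votes: dict[str, list[str]]) -> dict[str, str]:
    """Map each language concept to the symbol most voted by the crowd."""
    return {concept: Counter(choices).most_common(1)[0][0]
            for concept, choices in votes.items()}

# Hypothetical crowd answers about a BPMN-like concrete syntax
votes = {
    "task":    ["rounded rectangle", "rounded rectangle", "circle"],
    "gateway": ["diamond", "diamond", "hexagon"],
}
print(majority_notation(votes))
# {'task': 'rounded rectangle', 'gateway': 'diamond'}
```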

In the paper we also validate the approach and experiment with it in a practical use case, namely by studying some variations of the BPMN modeling language.

The full paper can be found here: https://dl.acm.org/citation.cfm?doid=3136014.3136033. The paper is titled: “Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages” and was co-authored by Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, and Andrea Mauri.

You can also access the Web version on Jordi Cabot blog.

The artifacts described in this paper are also referenced on findresearch.org.

Pattern-Based Specification of Crowdsourcing Applications – ICWE 2014 best paper

I’m really proud to announce that our paper “Pattern-Based Specification of Crowdsourcing Applications” has received the BEST PAPER award at ICWE 2014 (International Conference on Web Engineering), held in Toulouse in July 2014. The paper was authored by Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Andrea Mauri, and Riccardo Volonterio.

The work addresses the fact that in many crowd-based applications, the interaction with performers is decomposed into several tasks that, collectively, produce the desired results.
Emerging crowd-based applications cover very different scenarios, including opinion mining, multimedia data annotation, localised information gathering, marketing campaigns, expert response gathering, and so on.
In most of these scenarios, applications can be decomposed into tasks that collectively produce their results; task interactions give rise to arbitrarily complex workflows.

In this paper we propose methods and tools for designing crowd-based workflows as interacting tasks.
We describe the modelling concepts that are useful in such a framework, including typical workflow patterns, whose function is to decompose a cognitively complex task into simple interacting tasks so that the complex task is co-operatively solved.
We then discuss how workflows and patterns are managed by CrowdSearcher, a system for designing, deploying, and monitoring applications on top of crowd-based systems, including social networks and crowdsourcing platforms. Tasks performed by humans consist of simple operations applied to homogeneous objects; the complexity of aggregating and interpreting task results is embodied within the framework. We show our approach at work on a validation scenario and report quantitative findings, which highlight the effect of workflow design on the final results.
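To give an idea of what “interacting tasks” means in practice, here is a minimal, hypothetical sketch of a sequential workflow pattern in which each task applies a simple operation to homogeneous objects; it is not CrowdSearcher code, only an illustration of the decomposition idea.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A simple crowd task: one operation applied to homogeneous objects."""
    name: str
    operation: Callable[[object], object]

def run_sequence(objects: list, tasks: list[Task]) -> list:
    """A minimal 'sequence' pattern: the output of each task feeds the next;
    aggregating and interpreting results is left to the surrounding framework."""
    results = objects
    for task in tasks:
        results = [task.operation(obj) for obj in results]
    return results

# Hypothetical example: annotate images, then validate the annotations
workflow = [
    Task("annotate", lambda img: {"image": img, "label": "crowd label"}),
    Task("validate", lambda ann: {**ann, "validated": True}),
]
print(run_sequence(["img1.jpg", "img2.jpg"], workflow))
```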

Here are the slides presented by Alessandro Bozzon during the ICWE conference:

 

Here is Alessandro Bozzon presenting:

and here is the picture of the actual award:

ICWE 2014 Best Paper Award Certificate to Pattern-Based Specification of Crowdsourcing Applications. Bozzon, Brambilla, Ceri, Mauri, Volonterio


Community-based Crowdsourcing – Our paper at WWW2014 SOCM

Today Andrea Mauri presented our paper “Community-based Crowdsourcing” at the SOCM Workshop co-located with the WWW 2014 conference.

SOCM is the 2nd International Workshop on the Theory and Practice of Social Machines, an interesting venue for discussing instrumentation, tooling, and software system aspects of online social networks. The full program of the event is here.

Our paper focuses on community-based crowdsourcing applications, i.e., the ability to spawn crowdsourcing tasks over multiple communities of performers, thus leveraging the peculiar characteristics and capabilities of the community members.
We show that dynamically adapting crowdsourcing campaigns to community behaviour is particularly relevant. We demonstrate that this approach can be very effective for obtaining answers from communities with very different size, precision, delay, and cost, by exploiting the social networking relations and the features of the crowdsourcing task. We show the approach at work within the CrowdSearcher platform, which allows configuring and dynamically adapting crowdsourcing campaigns tailored to different communities. We report on an experiment demonstrating the effectiveness of the approach.

The figure below shows a declarative reactive rule that dynamically adapts the crowdsourcing campaign by moving the task executions from one community of workers to another when the average quality score of the community falls below a given threshold.
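In event-condition-action terms, the rule behaves roughly like the following sketch; the threshold value, attribute names, and reallocation logic are hypothetical and only mirror the behaviour shown in the figure.

```python
QUALITY_THRESHOLD = 0.7  # hypothetical minimum average quality score

def on_execution_completed(community, backup_community, open_tasks):
    """Event: a task execution completes in `community`.
    Condition: the community's average quality score drops below threshold.
    Action: move the remaining task executions to another community."""
    avg_quality = sum(community.scores) / len(community.scores)
    if avg_quality < QUALITY_THRESHOLD:
        for task in open_tasks:
            task.assigned_community = backup_community  # re-plan executions
        return f"moved {len(open_tasks)} executions to {backup_community.name}"
    return "no adaptation needed"
```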

The slides of the presentation are available on Slideshare. If you want to know more or see some demos, please visit:

http://crowdsearcher.search-computing.org

 

The full paper will be available on the ACM Digital Library shortly.
