Blog

M.Sc. Thesis Topics and Proposals at Polimi Data Science Lab – 2023/24

Within the context of our Data Science Lab and research team, we offer a variety of thesis options.

Check them out in these slide decks:

Online courses on Data, Policies, and COVID-19


Our lab is participating in the PERISCOPE H2020 project, a partnership of 30+ top European universities and professional associations that has worked together for the last two years to study the data, policies, actions, and effects of pandemic management. Besides its high-impact research results, the consortium has also produced educational materials and courses.

Photo by cottonbro studio on Pexels.com

Among those, five online courses (MOOCs) that collect technical and policy solutions to pandemic challenges have been published on Coursera.

Ahead of the publication, the courses were tested by health authorities, policymakers, and public bodies. All courses are free to access.

You can access the courses from this list on Coursera.

Exploring the bi-verse: a trip across the digital and physical ecospheres

I’ve been invited to give a keynote talk at the WISE 2022 Conference. Thinking about it, I decided to focus on my idea of a bi-verse. To me, the bi-verse is the duality between the physical and digital worlds.

On one side, the Web and social media are the environments where people post their content, opinions, activities, and resources. Therefore, a considerable amount of user-generated content is produced every day for a wide variety of purposes.

On the other side, people live their everyday life immersed in the physical world, where society, economy, politics, and personal relations continuously evolve. These two opposite yet complementary environments are now fully integrated: they reflect each other and interact with each other ever more strongly.

Exploring and studying content and data coming from both environments offers a great opportunity to understand the ever-evolving modern society, in terms of topics of interest, events, relations, and behavior.

This slide deck summarizes my contribution:

In my speech, I discuss business cases and socio-political scenarios to show how we can extract insights and understand reality by combining and analyzing data from the digital and physical worlds, so as to reach a better overall picture of reality itself. Along this path, we need to take into account that reality is complex and varies in time, space, and many other dimensions, including societal and economic variables. The speech highlights the main challenges that need to be addressed and outlines some data science strategies that can be applied to tackle them.

The Role of Human Knowledge in Explainable AI

Machine learning and AI are facing a new challenge: making models more explainable.

This means developing new methodologies to describe the behaviour of widely adopted black-box models, i.e., high-performing models whose internal logic is challenging to describe, justify, and understand from a human perspective.

The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can thus gain a better understanding of its logic, increasing trust in and acceptance of the system.

Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee that explanations are fully understandable from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches either collect human knowledge that AI systems can then employ, or involve humans directly in achieving their objectives (e.g., evaluating or improving the system).

Based on these assumptions and requirements, we published a review article that aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. The paper features a discussion on the challenges, state-of-the-art, and future trends in explainability.

The paper starts from the definition of the notion of “explanation” as an “interface between humans and a decision-maker that is, at the same time, both an accurate proxy of the decision-maker and comprehensible to humans”. Such a description highlights two fundamental features an explanation should have. It must be accurate, i.e., it must faithfully represent the model’s behaviour, and comprehensible, i.e., any human should be able to understand the meaning it conveys.
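To make these two requirements concrete, here is a toy sketch (not taken from the paper) of a classic global-surrogate explanation: a small, readable model is trained to mimic a black box, its fidelity to the black-box predictions captures the “accurate” part, and its shallow, rule-like form the “comprehensible” part. All model and parameter choices below are illustrative.

```python
# Toy global-surrogate sketch (illustrative only, not the paper's method):
# a shallow decision tree is trained to mimic a black-box classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "black box" whose behaviour we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# The surrogate learns the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Accuracy w.r.t. the black box = fidelity of the explanation.
fidelity = accuracy_score(bb_pred, surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# Comprehensibility: the surrogate can be printed as a short set of rules.
print(export_text(surrogate))
```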

The Role of Human Knowledge in Explainable AI

The figure above summarizes the four main ways to use human knowledge in explainability, namely: knowledge collection for explainability (red), explainability evaluation (green), understanding humans’ perspective in explainability (blue), and improving model explainability (yellow). In the schema, the icons represent human actors.

You may cite the paper as:

Tocchetti, Andrea; Brambilla, Marco. The Role of Human Knowledge in Explainable AI. Data 2022, 7, 93. https://doi.org/10.3390/data7070093

The VaccinEU dataset of COVID-19 Vaccine Conversations on Twitter in French, German, and Italian

Despite increasing restrictions on unvaccinated people, in many European countries there is still a non-negligible fraction of individuals who refuse to get vaccinated against SARS-CoV-2, undermining governmental efforts to eradicate the virus.

Within the PERISCOPE project, we studied the role of online social media in influencing individuals’ opinions about getting vaccinated by designing a large-scale collection of Twitter messages in three different languages — French, German, and Italian — and providing public access to the data collected. This work was carried out in collaboration with the Observatory on Social Media at Indiana University, Bloomington, USA.

Focusing on the European context, we built an open dataset, called VaccinEU, that aims to help researchers better understand the impact of online (mis)information about vaccines and design more effective communication strategies to maximize vaccination coverage.

The dataset is openly accessible in a Dataverse repository and a GitHub repository.
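As a practical note, platforms like Twitter typically allow sharing only tweet IDs, which must then be re-hydrated through the API. The sketch below shows how per-language ID files could be loaded and combined; the file and column names are my own assumptions for illustration, not the actual layout of the repositories.

```python
# Minimal sketch for assembling the per-language tweet-ID lists.
# File names are assumed, not the actual repository layout.
import pandas as pd

LANGUAGES = ["fr", "de", "it"]

frames = []
for lang in LANGUAGES:
    # hypothetical per-language export from the Dataverse/GitHub repository
    ids = pd.read_csv(f"vaccineu_{lang}_tweet_ids.csv", dtype=str)
    ids["language"] = lang
    frames.append(ids)

all_ids = pd.concat(frames, ignore_index=True)
print(all_ids.groupby("language").size())  # number of tweet IDs per language

# The IDs can then be re-hydrated with any Twitter/X API client (e.g., twarc),
# subject to the platform's current access policies.
```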

Furthermore, a description has been published in a paper at ICWSM 2022 (open access), which can be cited as:

Di Giovanni, M., Pierri, F., Torres-Lugo, C., & Brambilla, M. (2022). VaccinEU: COVID-19 Vaccine Conversations on Twitter in French, German and Italian. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1236-1244. https://ojs.aaai.org/index.php/ICWSM/article/view/19374

Model Driven Software Engineering in Practice now published by Springer Nature

Starting June 2022, our book “Model Driven Software Engineering in Practice” (co-authored with Jordi Cabot and Manuel Wimmer) is now also available via Springer. This means the price is actually lower, and if you are affiliated with an academic institution, you may even have free access to the book through your institutional subscription. Check it here.

Together with the book, we provided free bonus material, including over 500 ready-to-use slides on MDE for classes and all the book examples in the book’s GitHub repository.

As of today, there are 130 institutions already using the book. Make sure you join the list if you are not there yet.

The new book cover. Not particularly appealing, but aligned with the Springer style.

EXP-Crowd: Gamified Crowdsourcing for AI Explainability

The spread of AI and black-box machine learning models makes it necessary to explain their behavior. Consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by a human as the final beneficiary of the model.

In our research, just published in Frontiers in Artificial Intelligence, we frame the explainability problem from the crowd’s point of view and engage both users and AI researchers through a gamified crowdsourcing framework called EXP-Crowd. We investigate whether it is possible to improve the crowd’s understanding of black-box models and the quality of the crowdsourced content by engaging users in gamified activities within this framework. While users engage in such activities, AI researchers organize and share AI- and explainability-related knowledge to educate users.

The next diagram shows the interaction flows of researchers (dashed cyan arrows) and users (orange plain arrows) with the activities devised within our framework. Researchers organize users’ knowledge and set up activities to collect data. As users engage with such activities, they provide Content to researchers. In turn, researchers give users feedback about the activity they performed. Such feedback aims to improve users’ understanding of the activity itself, the knowledge, and the context provided within it.

Interaction flows of researchers (dashed cyan arrows) and users (orange plain arrows) in the EXP-Crowd framework.

In our recent paper published in Frontiers in Artificial Intelligence, we present the preliminary design of a game with a purpose (GWAP) to collect features describing real-world entities, which can then be used for explainability purposes.

One of the crucial steps in the process is the questioning and annotation challenge, where Player 1 asks yes/no questions about the entity to be explained. Player 2 answers such questions and is then asked to complete a series of simple tasks to identify the guessed feature, by answering questions and potentially annotating the picture as shown below.

Questioning and annotation steps within the explanation game.
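To make this step concrete, here is a small, purely illustrative sketch (not the actual EXP-Crowd implementation) of the data exchanged: yes/no questions from Player 1, answers and optional picture annotations from Player 2, and the positively answered questions collected as candidate features.

```python
# Illustrative data structures for the questioning-and-annotation step.
# Names and fields are hypothetical, not taken from the EXP-Crowd codebase.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Annotation:
    box: Tuple[int, int, int, int]   # region on the picture as (x, y, width, height)
    label: str

@dataclass
class QuestionRound:
    entity: str                               # the entity to be explained
    question: str                             # yes/no question asked by Player 1
    answer: Optional[bool] = None             # Player 2's answer
    annotation: Optional[Annotation] = None   # optional region supporting the answer

def collected_features(rounds: List[QuestionRound]) -> List[str]:
    """Questions answered 'yes' become candidate descriptive features."""
    return [r.question for r in rounds if r.answer]

game = [
    QuestionRound("zebra", "Does it have stripes?", answer=True,
                  annotation=Annotation(box=(40, 30, 120, 80), label="stripes")),
    QuestionRound("zebra", "Does it have wings?", answer=False),
]
print(collected_features(game))  # -> ['Does it have stripes?']
```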

If you are interested in more details, you can read the full EXP-Crowd paper on the journal site (full open access):

You can cite the paper as:

Tocchetti A., Corti L., Brambilla M., and Celino I. (2022). EXP-Crowd: A Gamified Crowdsourcing Framework for Explainability. Frontiers in Artificial Intelligence 5:826499. doi: 10.3389/frai.2022.826499

The Final TRIGGER Conference

We will join and contribute to the final TRIGGER conference, scheduled for May 31st, 2022, in Brussels.

The theme is: “Rethinking the EU’s role in global governance”. In this context, the TRIGGER project is going to present the main research outcomes of the H2020 research program that started in 2018, setting the stage for the collaboration among 14 international partners. 

We will present our main contributions, namely PERSEUS and COCTEAU.

A quick intro to PERSEUS is available in this video:

Further details about the event are available here:

Analysis of Online Reviews for Evaluating the Quality of Cultural Tourism

Online reviews have long represented a valuable source for data analysis in the tourism field, but these data sources have been mostly studied in terms of the numerical ratings offered by the review platforms.

In a recent article (available as full open access) and a respective blog post, we explored whether social media and online review platforms can be a good source for quantitatively evaluating the service quality of cultural venues, such as museums and theaters. Our paper applies automatic analysis of online reviews, comparing two different automated approaches to evaluate which of the two is more adequate for assessing the quality dimensions. The analysis covers user-generated reviews of the top 100 Italian museums.

Specifically, we compare two approaches:

  • a ‘top-down’ approach, based on supervised classification driven by the strategic choices defined in policy makers’ guidelines at the national level; 
  • a ‘bottom-up’ approach, based on unsupervised topic modelling of the reviewers’ own words.

The misalignment between the results of the ‘top-down’ strategic approach and the ‘bottom-up’ data-driven approach highlights how data science can make an important contribution to decision making in cultural tourism. Both approaches have been applied to the same dataset of 14,250 Italian reviews.

We identified five quality dimensions that follow the ‘top-down’ perspective: Ticketing and Welcoming, Space, Comfort, Activities, and Communication. Each of these dimensions has been considered as a class in a classification problem over user reviews. The ‘top-down’ approach allowed us to tag each review as descriptive of one of these five dimensions. Classification has been implemented both as a machine learning classification problem (using BERT, accuracy 88%) and as keyword-based tagging (accuracy 80%).
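As an illustration of the simpler of the two variants, the sketch below tags a review with the dimension whose keywords it mentions most often. The keyword lists are placeholder examples, not the ones actually derived from the policy maker’s guidelines.

```python
# Sketch of keyword-based tagging for the five 'top-down' dimensions.
# The keyword lists are illustrative placeholders.
from typing import Optional

DIMENSION_KEYWORDS = {
    "Ticketing and Welcoming": ["ticket", "queue", "entrance", "staff", "welcome"],
    "Space": ["room", "building", "layout", "garden", "architecture"],
    "Comfort": ["seat", "toilet", "accessibility", "crowded", "cloakroom"],
    "Activities": ["exhibition", "workshop", "tour", "event", "audio guide"],
    "Communication": ["sign", "label", "information", "website", "explanation"],
}

def tag_review(text: str) -> Optional[str]:
    """Return the dimension whose keywords occur most often, or None if no match."""
    text = text.lower()
    scores = {
        dim: sum(text.count(kw) for kw in keywords)
        for dim, keywords in DIMENSION_KEYWORDS.items()
    }
    best_dim, best_score = max(scores.items(), key=lambda item: item[1])
    return best_dim if best_score > 0 else None

print(tag_review("Long queue at the entrance, but the staff was very welcoming."))
# -> 'Ticketing and Welcoming'
```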

The ‘bottom-up’ approach has been implemented through unsupervised topic modelling, namely LDA (Latent Dirichlet Allocation), tuned over a range of up to 30 topics. The best ‘bottom-up’ model we selected identifies 13 latent dimensions in the review texts. We further aggregated them into 3 main topics: Museum Cultural Heritage, Personal Experience, and Museum Services.
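A minimal scikit-learn sketch of such a tuning loop is shown below. The vectorizer settings and the use of held-out perplexity as the selection criterion are assumptions for illustration, not necessarily the choices made in the paper.

```python
# Sketch of tuning an LDA topic model over a range of topic counts (up to 30).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

reviews = [
    "Bellissimo museo, opere straordinarie ma code lunghe alla biglietteria.",
    "Personale gentile, percorso ben organizzato, audioguida molto utile.",
    "Sale ampie e luminose, ottima esperienza complessiva.",
    "Biglietti un po' cari e poca segnaletica, ma collezione eccellente.",
    # ...replace with the full corpus of 14,250 review texts
]

# In practice, stop-word removal and frequency cut-offs would also be tuned.
X = CountVectorizer().fit_transform(reviews)
X_train, X_valid = train_test_split(X, test_size=0.25, random_state=0)

best_model, best_perplexity = None, float("inf")
for k in range(5, 31, 5):  # explore topic counts up to 30
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    perplexity = lda.perplexity(X_valid)  # lower is better on held-out reviews
    if perplexity < best_perplexity:
        best_model, best_perplexity = lda, perplexity

print(best_model.n_components, round(best_perplexity, 1))
```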

The ‘top-down’ approach (based on a set of keywords defined from the standards issued by the policy maker) left 63% of the online reviews unmatched to any of the predefined quality dimensions.

63% of the reviews could not be assessed against the official top-down service quality categories.

The ‘bottom-up’ data-driven approach overcomes this limitation by searching for the aspects of interest using reviewers’ own words. Indeed, museum reviews usually discuss a museum’s cultural heritage aspects (46% average probability) and personal experiences (31% average probability) more than the services offered by the museum (23% average probability).
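For concreteness, the sketch below shows how per-review topic probabilities (e.g., obtained from the LDA model above) can be summed into the three macro-topics and averaged over the corpus to produce figures like those reported here; the topic-to-macro-topic mapping is hypothetical.

```python
# Aggregating per-review topic probabilities into the three macro-topics.
# The topic grouping is hypothetical; the probabilities are random placeholders.
import numpy as np

# Shape (n_reviews, 13), e.g. best_model.transform(X) from the LDA sketch above.
doc_topics = np.random.dirichlet(np.ones(13), size=100)

MACRO_TOPICS = {
    "Museum Cultural Heritage": [0, 1, 2, 3, 4],
    "Personal Experience": [5, 6, 7, 8],
    "Museum Services": [9, 10, 11, 12],
}

for name, topic_ids in MACRO_TOPICS.items():
    avg_probability = doc_topics[:, topic_ids].sum(axis=1).mean()
    print(f"{name}: {avg_probability:.0%}")
```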

Among the various quantitative findings of the study, I think the most important one is that the aspects considered quality dimensions by the decision maker can differ greatly from those perceived as quality dimensions by museum visitors.

You can find out more about this analysis by reading the full article published online as open access, or this longer blog post. The full reference to the paper is:

Agostino, D.; Brambilla, M.; Pavanetto, S.; Riva, P. The Contribution of Online Reviews for Quality Evaluation of Cultural Tourism Offers: The Experience of Italian Museums. Sustainability 2021, 13, 13340. https://doi.org/10.3390/su132313340