Exploring the bi-verse: a trip across the digital and physical ecospheres

I’ve been invited to give a keynote talk at the WISE 2022 Conference. Thinking about it, I decided to focus on my idea of a bi-verse. To me, the bi-verse is the duality between the physical and digital worlds.

On one side, the Web and social media are the environments where people post their content, opinions, activities, and resources. Therefore, a considerable amount of user-generated content is produced every day for a wide variety of purposes.

On the other side, people live their everyday lives immersed in the physical world, where society, economy, politics, and personal relations continuously evolve. These two opposite and complementary environments are now fully integrated: they reflect each other and interact with each other ever more strongly.

Exploring and studying content and data coming from both environments offers a great opportunity to understand the ever-evolving modern society, in terms of topics of interest, events, relations, and behavior.

This slidedeck summarizes my contribution:

In my speech, I discuss business cases and socio-political scenarios to show how we can extract insights and understand reality by combining and analyzing data from the digital and physical worlds, so as to reach a better overall picture of reality itself. Along this path, we need to take into account that reality is complex and varies across time, space, and many other dimensions, including societal and economic variables. The speech highlights the main challenges that need to be addressed and outlines some data science strategies that can be applied to tackle them.

The VaccinEU dataset of COVID-19 Vaccine Conversations on Twitter in French, German, and Italian

Despite increasing restrictions for unvaccinated people, in many European countries there is still a non-negligible fraction of individuals who refuse to get vaccinated against SARS-CoV-2, undermining governmental efforts to eradicate the virus.

Within the PERISCOPE project, we studied the role of online social media in influencing individuals' opinions about getting vaccinated, by designing a large-scale collection of Twitter messages in three different languages — French, German, and Italian — and providing public access to the data collected. This work was carried out in collaboration with the Observatory on Social Media, Indiana University, Bloomington, USA.

Focusing on the European context, we devised an open dataset called VaccinEU, which aims to help researchers better understand the impact of online (mis)information about vaccines and design more accurate communication strategies to maximize vaccination coverage.

The dataset is openly accessible in a Dataverse repository and a GitHub repository.

Furthermore, a description has been published in a paper at ICWSM 2022 (open access), which can be cited as:

Di Giovanni, M., Pierri, F., Torres-Lugo, C., & Brambilla, M. (2022). VaccinEU: COVID-19 Vaccine Conversations on Twitter in French, German and Italian. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1236-1244. https://ojs.aaai.org/index.php/ICWSM/article/view/19374

Analysis of Online Reviews for Evaluating the Quality of Cultural Tourism

Online reviews have long represented a valuable source for data analysis in the tourism field, but these data sources have been mostly studied in terms of the numerical ratings offered by the review platforms.

In a recent article (available as full open access) and a related blog post, we explored whether social media and online review platforms can be a good source of quantitative evaluation of the service quality of cultural venues, such as museums and theaters. Our paper applies automatic analysis of online reviews, comparing two different automated approaches to assess which is more adequate for evaluating quality dimensions. The analysis covers user-generated reviews of the top 100 Italian museums.

Specifically, we compare two approaches:

  • a ‘top-down’ approach, based on supervised classification driven by the strategic choices defined in national policy makers’ guidelines;
  • a ‘bottom-up’ approach, based on an unsupervised topic model of the reviewers’ own words.

The misalignment between the results of the ‘top-down’ strategic studies and the ‘bottom-up’ data-driven approaches highlights how data science can offer an important contribution to decision making in cultural tourism. Both approaches were applied to the same dataset of 14,250 Italian reviews.

We identified five quality dimensions that follow the ‘top-down’ perspective: Ticketing and Welcoming, Space, Comfort, Activities, and Communication. Each of these dimensions was treated as a class in a classification problem over user reviews. The ‘top-down’ approach allowed us to tag each review as descriptive of one of those five dimensions. Classification was implemented both as a machine learning problem (using BERT, 88% accuracy) and as keyword-based tagging (80% accuracy).
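The keyword-based variant can be sketched in a few lines of Python. The keyword lists below are illustrative placeholders invented for this sketch, not the official ones derived from the policy maker's guidelines:

```python
# Minimal sketch of keyword-based tagging of reviews into quality
# dimensions. Keywords here are invented placeholders for illustration.
DIMENSION_KEYWORDS = {
    "Ticketing and Welcoming": ["ticket", "queue", "entrance", "staff"],
    "Space": ["room", "hall", "layout"],
    "Comfort": ["seat", "rest", "toilet", "air"],
    "Activities": ["tour", "workshop", "guide"],
    "Communication": ["sign", "label", "website", "information"],
}

def tag_review(text):
    """Return the dimension whose keywords occur most often in the
    review, or None when no keyword matches (review left unclassified)."""
    tokens = text.lower().split()
    scores = {
        dim: sum(tokens.count(kw) for kw in kws)
        for dim, kws in DIMENSION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Reviews matching no keyword at all are exactly the ones that end up outside the predefined quality dimensions, which is the limitation discussed below.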

The ‘bottom-up’ approach has been implemented through an unsupervised topic modelling technique, namely LDA (Latent Dirichlet Allocation), tuned over a range of up to 30 topics. The best ‘bottom-up’ model we selected identifies 13 latent dimensions in review texts, which we further grouped into 3 main topics: Museum Cultural Heritage, Personal Experience, and Museum Services.
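The tuning loop can be sketched as follows. The toy English snippets stand in for the 14,250 Italian reviews, and the library choice (scikit-learn) is mine; the paper does not mandate a specific LDA implementation:

```python
# Sketch of the 'bottom-up' analysis: fit LDA models over a range of
# topic counts and keep the best-scoring one. Toy data for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "beautiful paintings and rich cultural heritage",
    "long queue but helpful staff at the ticket office",
    "great guided tour, a wonderful personal experience",
    "the exhibition halls are spacious and well organized",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Tune the number of topics up to a maximum (30 in the paper; tiny here)
# and keep the model with the highest approximate log-likelihood.
best_model, best_score = None, float("-inf")
for n_topics in range(2, 4):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    score = lda.score(X)
    if score > best_score:
        best_model, best_score = lda, score

# Each review gets a probability distribution over the latent topics.
doc_topics = best_model.transform(X)
```

The per-document topic probabilities are what the average probabilities reported below (46%, 31%, 23%) are computed from, after grouping the latent topics into the three main ones.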

The ‘top-down’ approach (based on a set of keywords defined from the standards issued by the policy maker) left 63% of online reviews not fitting into any of the predefined quality dimensions.


The ‘bottom-up’ data-driven approach overcomes this limitation by searching for the aspects of interest using reviewers’ own words. Indeed, museum reviews usually discuss a museum’s cultural heritage (46% average probability) and personal experiences (31% average probability) more than the services offered by the museum (23% average probability).

Among the various quantitative findings of the study, I think the most important point is that the aspects considered as quality dimensions by the decision maker can differ greatly from the aspects perceived as quality dimensions by museum visitors.

You can find out more about this analysis by reading the full article published online as open access, or this longer blog post. The full reference to the paper is:

Agostino, D.; Brambilla, M.; Pavanetto, S.; Riva, P. The Contribution of Online Reviews for Quality Evaluation of Cultural Tourism Offers: The Experience of Italian Museums. Sustainability 2021, 13, 13340. https://doi.org/10.3390/su132313340

Coronavirus stories and data

Coronavirus COVID-19 is an extreme challenge for our society, economy, and individual lives. Governments should have learnt from each other: the impact has been spreading slowly across countries, so there has been plenty of time to take action. But apparently people and governments can’t grasp the risk until it’s upon them. And the way European and American governments are acting is too slow and incremental.

I live in Italy, a country that ranks second in the world for healthcare quality. The mindset of “this won’t happen here” was the attitude at the beginning of this challenge, and look at what happened. I’m reporting here two links to articles that present a data-driven vision, but also the human, psychological, and behavioural aspects involved. They are two simple stories that report the Italian perspective on the virus.

Coronavirus Stories From Italy

And why now it’s the time for YOU to worry, fellow Europeans and Americans

#Coronavirus: Updates from the Italian Front

A preview of what will happen in a week in the rest of the world. Things have dramatically changed in our society.

Content-based Classification of Political Inclinations of Twitter Users

Social networks are huge continuous sources of information that can be used to analyze people’s behavior and thoughts.

Our goal is to extract such information and predict political inclinations of users.

In particular, we investigate the importance of syntactic features of texts written by users when they post on social media. Our hypothesis is that people belonging to the same political party write in similar ways, thus they can be classified properly on the basis of the words that they use.

We analyze tweets because Twitter is commonly used in Italy for discussing politics; moreover, it provides an official API that can be easily exploited for data extraction. We applied many classifiers to different kinds of features and NLP vectorization methods in order to find the combination that best confirms our hypothesis.
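The classifier comparison can be sketched as follows. The toy tweets, party labels, and the two specific classifiers are illustrative choices of mine; the study compared many more vectorizers and models:

```python
# Sketch of comparing classifiers over vectorized tweet text.
# Tweets and party labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "lower taxes and more freedom for businesses",
    "cut the taxes, support our companies",
    "invest in public schools and healthcare",
    "more funding for public healthcare and workers",
]
parties = ["A", "A", "B", "B"]

# Same TF-IDF features, different classifiers: compare predictions.
for clf in (LogisticRegression(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(tweets, parties)
    print(type(clf).__name__, model.predict(["healthcare for everyone"]))
```

In the actual study, accuracy on the ground-truth deputies (below) is what decides which feature/classifier combination wins.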

To evaluate their accuracy, we selected as ground truth a set of current Italian deputies with consistent Twitter activity, and then predicted their political party. Using the results of our analysis, we also gained interesting insights into current Italian politics. Here are the clusters of users:

[Figure: clusters of Twitter users, from the IEEE Big Data 2018 paper]

Results in understanding political alignment are quite good, as reported in the confusion matrix:

[Figure: confusion matrix of predicted vs. actual political parties]

Our study is described in detail in the paper published in the IEEE Big Data 2018 conference and linked at:

DOI: 10.1109/BigData.2018.8622040

The article can be downloaded here if you don’t have access to the IEEE library.

You can also look at the slides on SlideShare:

You can cite the paper as follows:

M. Di Giovanni, M. Brambilla, S. Ceri, F. Daniel and G. Ramponi, “Content-based Classification of Political Inclinations of Twitter Users,” 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 4321-4327.
doi: 10.1109/BigData.2018.8622040

Brand Community Analysis using Graph Representation Learning on Social Networks – with a Fashion Case

In an increasingly connected world, new and complex interaction patterns can be extracted from the communication between people.

This is extremely valuable for brands, which can better understand users’ interests and social media trends in order to better target their products. In this paper, we aim to analyze the communities that arise around commercial brands on social networks to understand the meaning of similarity, collaboration, and interaction among users.

We exploit the network that builds around the brands by encoding it into a graph model. We build a social network graph, considering user nodes and friendship relations; then we compare it with a heterogeneous graph model, where posts and hashtags are also considered as nodes and connected to the different node types; finally, we build a reduced network, generated by inducing direct user-to-user connections through the intermediate nodes (posts and hashtags). These different variants are encoded using graph representation learning, which generates a numerical vector for each node. Machine learning techniques are applied to these vectors to extract valuable insights for each user and for the communities they belong to.
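The reduction step — inducing direct user-to-user edges through intermediate nodes — can be sketched in plain Python. The users and hashtags below are toy data invented for illustration (posts are omitted for brevity):

```python
# Induce a reduced user-to-user network from a heterogeneous graph:
# two users become directly connected when they share at least one
# intermediate hashtag node, weighted by the number of shared hashtags.
from itertools import combinations
from collections import defaultdict

# Toy heterogeneous graph: users linked to the hashtags they used.
user_hashtags = {
    "alice": {"#fashion", "#style"},
    "bob": {"#fashion", "#sports"},
    "carol": {"#style"},
}

reduced_edges = defaultdict(int)
for u, v in combinations(sorted(user_hashtags), 2):
    shared = user_hashtags[u] & user_hashtags[v]
    if shared:
        reduced_edges[(u, v)] = len(shared)

print(dict(reduced_edges))
```

In the paper, the resulting reduced graph (like the other variants) is then fed to graph representation learning to obtain one vector per node.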

We report on our experiments performed on an emerging fashion brand on Instagram, and we show that our approach is able to discriminate potential customers for the brand and to highlight meaningful sub-communities composed of users that share the same kind of content on social networks.

The use case is taken from a joint research project with the Fashion in Process group in the Design Department of Politecnico di Milano, within the framework of FAST (Fashion Sensing Technology).

This study has been published as part of the proceedings of ACM SAC 2019, Cyprus.

Here is the slideset presenting the idea:

The paper can be referenced as:

Marco Brambilla, Mattia Gasparini: Brand Community Analysis On Social Networks Using Graph Representation Learning. ACM Symposium on Applied Computing (SAC) 2019, pp. 2060-2069.

The link to the officially published paper in the ACM Library will be available shortly.

Predictive Analysis on U.S. Midterm Elections on Twitter with RNN

We implemented an analysis (meaning both a method and a system) that aims to gauge local support for the two major US political parties in the 68 most competitive House of Representatives districts during the 2018 U.S. mid-term elections.

The analysis attempts to mirror the “Generic Ballot” poll, i.e., a survey of voters of a particular district which aims to measure local popularity of national parties by querying participants on the likelihood they would vote for a “generic” Democrat or Republican candidate. We collect the tweets mentioning national parties and politicians in the 68 most competitive districts. By most competitive we mean those rated as “toss-up”, “50%-50%”, or “lean” by the Cook Political Report.

This means we are addressing an extremely challenging analysis and prediction problem, while disregarding the simpler cases (everyone is good at predicting the obvious!).

Our solution employs the Twitter Search API to query for tweets mentioning a national leader or party, posted from a limited geographic region (i.e., each specific congressional district). For example, the following query extracts tweets on Republicans:

TRUMP OR REPS OR Republicans OR Republican OR MCCONNELL OR ‘MIKE PENCE’ OR ‘PAUL RYAN’ OR #Republicans OR #REPS OR @realDonaldTrump OR @SpeakerRyan OR @senatemajldr OR @VP OR GOP OR @POTUS

To limit the search to each congressional district, we use the geocode field in the search query of the API, which queries a circular area defined by the coordinates of the center and a radius. Because of the irregular shape of the congressional districts, multiple queries are needed for each of them; we therefore built a custom set of bubbles that approximate each district’s shape.
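The bubble-to-query mapping can be sketched as follows. The district names, coordinates, and radii are invented placeholders, and the query string is abridged; the `geocode` parameter format (`lat,long,radius`) is the one used by the Twitter Search API:

```python
# Each congressional district is approximated by a set of circular
# "bubbles"; each bubble becomes one geocoded search query.
# Coordinates and radii below are invented placeholders.
district_bubbles = {
    "CA-25": [(34.58, -118.43, "12mi"), (34.41, -118.56, "8mi")],
    "NY-19": [(41.93, -74.02, "20mi")],
}

QUERY = "TRUMP OR Republicans OR GOP"  # abridged version of the query

def geocoded_queries(district):
    """Build one Search API parameter set per bubble covering a district."""
    return [
        {"q": QUERY, "geocode": f"{lat},{lon},{radius}"}
        for lat, lon, radius in district_bubbles[district]
    ]

for params in geocoded_queries("CA-25"):
    print(params["geocode"])
```

The union of the results over all bubbles of a district approximates the set of tweets posted from within it.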

For the analysis of the tweets, we adopted a Recurrent Neural Network, namely an RNN-LSTM binary classifier trained on tweets.

To build training and testing data, we collected tweets of users with clear political affiliation, including candidates, political activists, and also lesser-known users well versed in the political vernacular.
The selected accounts yielded around 280,000 tweets in the 6 months before election day, labeled based on the author’s political affiliation.

Notice that the method is a general-purpose, language-independent political analysis framework that can be applied to any national or local context.

Further details and the results can be found on this Medium post.

This work has been published as a short scientific paper presented at the IEEE Big Data Conference in Seattle, WA, in December 2018, and in a previous Medium post by Antonio Lopardo.

You can also download a poster format reporting the work:

poster-midterm

In case you want to cite the work, you can do it in this way:

A. Lopardo and M. Brambilla, “Analyzing and Predicting the US Midterm Elections on Twitter with Recurrent Neural Networks,” 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 5389–5391.
doi: 10.1109/BigData.2018.8622441.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8622441&isnumber=8621858

The online running prototype, the full description of the project, its results, and source code are available at http://www.twitterpoliticalsentiment.com/USA/.

Understanding Polarized Political Events through Social Media Analysis

Predicting the outcome of elections is a topic that has been extensively studied in political polls, which have generally provided reliable predictions by means of statistical models. In recent years, online social media platforms have become a potential alternative to traditional polls, since they provide large amounts of post and user data, also referring to socio-political aspects.

In this context, we designed a study aimed at defining a user modeling pipeline to analyze discussions and opinions shared on social media regarding polarized political events (such as a public poll or referendum).

The pipeline follows a four-step methodology.


  • First, social media posts and user metadata are crawled.
  • Second, a filtering mechanism is applied to filter out spammers and bot accounts.
  • Third, demographic information (gender, age, ethnicity, and location) is extracted for the valid users.
  • Fourth, the political polarity of the users with respect to the analyzed event is predicted.
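The four steps above can be sketched as a simple pipeline skeleton. Every function body below is an invented stand-in for the actual components (the real crawler, bot filter, demographic extractor, and polarity classifier described in the paper):

```python
# Skeleton of the four-step user modeling pipeline, with toy stand-ins.
def crawl_posts(keyword):
    # Step 1: crawl posts and user metadata (toy data here).
    return [
        {"user": "u1", "text": "yes to independence", "bot_score": 0.1},
        {"user": "u2", "text": "vote no", "bot_score": 0.9},
    ]

def filter_bots(posts, threshold=0.5):
    # Step 2: drop likely spammers and bot accounts.
    return [p for p in posts if p["bot_score"] < threshold]

def extract_demographics(post):
    # Step 3: infer gender, age, ethnicity, location (stubbed out).
    return {"user": post["user"], "location": "unknown"}

def predict_polarity(post):
    # Step 4: naive keyword polarity, standing in for the real model.
    return "favor" if "yes" in post["text"] else "against"

posts = filter_bots(crawl_posts("referendum"))
results = [(extract_demographics(p), predict_polarity(p)) for p in posts]
```

Aggregating the per-user polarities yields the poll-level prediction that is compared against the actual referendum outcomes.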

In the scope of this work, our proposed pipeline is applied to two referendum scenarios:

  • independence of Catalonia in Spain
  • autonomy of Lombardy in Italy

We used these real-world examples to assess the performance of the approach with respect to the capability of collecting correct insights on the demographics of social media users and of predicting the poll results based on the opinions shared by the users.


Experiments show that the method was effective in predicting the political trends for the Catalonia case, but not for the Lombardy one. Among the various possible reasons, we noticed that in general Twitter was more representative of the users opposing the referendum than of those in favor.

The work has been presented at the KDWEB workshop at the ICWE 2018 conference.

A preprint of the paper can be downloaded from ArXiv and cited as reported here:

Roberto Napoli, Ali Mert Ertugrul, Alessandro Bozzon, Marco Brambilla. A User Modeling Pipeline for Studying Polarized Political Events in Social Media. KDWeb Workshop 2018, co-located with ICWE 2018, Caceres, Spain, June 2018. arXiv:1807.09459

Data Cleaning for Knowledge Extraction and Understanding on Social Media


Social media platforms let users share their opinions through textual or multimedia content. In many settings, this becomes a valuable source of knowledge that can be exploited for specific business objectives. Brands and companies often ask to monitor social media in order to understand the stance, opinion, and sentiment of their customers and of their current and potential audience. This is crucial for them because it lets them understand trends and future commercial and marketing opportunities.

However, all this relies on a solid and reliable data collection phase, which guarantees that all the analyses, extractions, and predictions are applied to clean, solid, and focused data. Indeed, topic-based collection of social media content through keyword-based search typically yields very noisy results.

We recently carried out a simple study aimed at cleaning the data collected from social content within specific domains or related to given topics of interest. We propose a basic method for data cleaning and removal of off-topic content based on supervised machine learning techniques, i.e., classification, over data collected from social media platforms through keywords regarding a specific topic. We define a general method and then validate it through an experiment of data extraction from Twitter, with respect to a set of famous cultural institutions in Italy, including theaters, museums, and other venues.
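The cleaning step can be sketched as a binary on-topic/off-topic classifier. The training examples below are invented (the study used expert-labeled tweets and compared several feature extraction strategies), and the choice of TF-IDF plus a linear SVM is mine:

```python
# Sketch of supervised data cleaning: separate on-topic posts about
# cultural venues from off-topic noise matched by the same keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = [
    "amazing exhibition at the museum last night",    # on-topic
    "the theater season opens with a classic opera",  # on-topic
    "museum of my life, what a mess",                 # off-topic
    "this match was pure theater, what acting",       # off-topic
]
labels = ["on", "on", "off", "off"]

cleaner = make_pipeline(TfidfVectorizer(), LinearSVC())
cleaner.fit(texts, labels)

# Keep only posts classified as on-topic before any downstream analysis.
collected = ["wonderful paintings in the museum exhibition"]
kept = [t for t in collected if cleaner.predict([t])[0] == "on"]
```

Note how the off-topic examples contain the collection keywords (“museum”, “theater”) used figuratively: this is precisely the noise that keyword-based collection lets through and that the classifier is meant to remove.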

For this case, we collaborated with domain experts to label the dataset, and then we evaluated and compared the performance of classifiers that are trained with different feature extraction strategies.

The work has been presented at the KDWEB workshop at the ICWE 2018 conference.

A preprint of the paper can be downloaded and cited as reported here:

Emre Calisir, Marco Brambilla. The Problem of Data Cleaning for Knowledge Extraction from Social Media. KDWeb Workshop 2018, co-located with ICWE 2018, Caceres, Spain, June 2018.

The slides used in the workshop are available online here:


Iterative knowledge extraction from social networks

Yesterday, we presented a new work at The Web Conference in Lyon, along the research line on knowledge extraction from human-generated content started with our paper “Extracting Emerging Knowledge from Social Media”, presented at the WWW 2017 Conference (see also this past post).

Our motivation starts from the fact that knowledge in the world continuously evolves, and thus ontologies and knowledge bases are largely incomplete, especially regarding data belonging to the so-called long tail. Therefore, we proposed a method for discovering emerging knowledge by extracting it from social content. Once initialized by domain experts, the method is capable of finding relevant entities through a mixed syntactic-semantic approach. It uses seeds, i.e., prototypes of emerging entities provided by experts, to generate candidates; it then associates candidates with feature vectors built from terms occurring in their social content, and ranks the candidates by their distance from the centroid of the seeds, returning the top candidates.
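The seed-centroid ranking can be sketched with toy feature vectors. The entity names and vectors below are invented; real vectors are built from the terms occurring in the entities' social content:

```python
# Rank candidate entities by distance from the centroid of the seeds:
# closer candidates are more likely to be relevant emerging entities.
import math

# Toy term-based feature vectors for seeds and candidates (invented).
seeds = {"brand_a": [1.0, 0.0, 1.0], "brand_b": [0.8, 0.2, 1.0]}
candidates = {"cand_x": [0.9, 0.1, 1.0], "cand_y": [0.0, 1.0, 0.0]}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

c = centroid(list(seeds.values()))

# Sort candidates by Euclidean distance to the seed centroid; the top
# of this ranking is returned as newly discovered entities.
ranking = sorted(candidates, key=lambda name: math.dist(candidates[name], c))
print(ranking)  # cand_x is closer to the seeds than cand_y
```

The iterative variant studied in this paper feeds the top-ranked candidates back in as new seeds and repeats the process.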

Based on this foundational idea, we explored the possibility of running our method iteratively, using the results as new seeds. In this paper we address the following research questions:

  1. How does the reconstructed domain knowledge evolve if the candidates of one extraction are recursively used as seeds?
  2. How does the reconstructed domain knowledge spread geographically?
  3. Can the method be used to inspect the past, present, and future of knowledge?
  4. Can the method be used to find emerging knowledge?

This is the presentation given at the conference:

This work was presented at The Web Conference 2018, in the Modeling Social Media (MSM) workshop.

The paper is in the official proceedings of the conference through the ACM Digital Library.

You can also find here a PDF preprint version of “Iterative Knowledge Extraction from Social Networks” by Brambilla et al.