Content-based Classification of Political Inclinations of Twitter Users

Social networks are huge continuous sources of information that can be used to analyze people’s behavior and thoughts.

Our goal is to extract such information and predict political inclinations of users.

In particular, we investigate the importance of syntactic features of the texts that users write when they post on social media. Our hypothesis is that people belonging to the same political party write in similar ways, and can therefore be classified on the basis of the words they use.

We analyze tweets because Twitter is commonly used in Italy to discuss politics; moreover, it provides an official API that can be easily exploited for data extraction. We applied many classifiers to different kinds of features and NLP vectorization methods in order to identify the approach that best confirms our hypothesis.
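As a rough illustration of this kind of pipeline (a minimal sketch with toy data, not the exact features, vectorizers, or classifiers used in the paper), a TF-IDF representation combined with a standard scikit-learn classifier looks like this:

```python
# Minimal sketch (toy placeholder data, assumed setup): predict a user's party from tweet text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data: one string of concatenated tweets per user, plus the party label.
tweets = [
    "lavoro crescita imprese sviluppo economia",
    "tasse sicurezza immigrazione confini",
    "diritti scuola sanità pubblica lavoro",
    "sicurezza legalità tasse famiglia",
]
parties = ["PartyA", "PartyB", "PartyA", "PartyB"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LogisticRegression(max_iter=1000),    # one of many classifiers one could try
)
model.fit(tweets, parties)
print(model.predict(["economia lavoro scuola"]))  # -> predicted party label
```

In the actual study, the dataset is of course the deputies’ real tweet histories, and several alternative feature sets, vectorizers and classifiers are compared.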

To evaluate accuracy, we selected as ground truth a set of current Italian deputies with consistent activity on Twitter, and then predicted their political party. Using the results of our analysis, we also gained interesting insights into current Italian politics. Here are the clusters of users:

[Figure: clusters of Twitter users]

Results in understanding political alignment are quite good, as reported in the confusion matrix below.

[Figure: confusion matrix of the predicted parties]

Our study is described in detail in the paper published at the IEEE Big Data 2018 conference:

DOI: 10.1109/BigData.2018.8622040

The article can be downloaded here, if you don’t have access to the IEEE library.

You can also look at the slides on SlideShare:

You can cite the paper as follows:

M. Di Giovanni, M. Brambilla, S. Ceri, F. Daniel and G. Ramponi, “Content-based Classification of Political Inclinations of Twitter Users,” 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 4321-4327.
doi: 10.1109/BigData.2018.8622040

The role of Big Data in Banks

I was listening to R. Martin Chavez, Goldman Sachs deputy CFO, just last month at Harvard during the ComputeFest 2017 event, more precisely at the Symposium on the Future of Computation in Science and Engineering on “Data, Dollars, and Algorithms: The Computational Economy”, held on Thursday, January 19, 2017.

His claim was that

Banks are essentially API providers.

The entire structure and infrastructure of Goldman Sachs is being restructured for that. His case is that you should not compare a bank with a shop or store; you should compare it with Google. Just imagine that every time you wanted to search on Google you needed to get in touch (i.e., make a phone call or submit a request) with some Google employee, who at some point would come back to you with the result. Nonsense, right? Well, this is what actually happens with banks. It was happening with consumer-oriented banks before online banking, and it’s still largely happening for business banks.

But this is going to change. The amount of data and the speed and volume of financial transactions don’t allow that any more.

Banks are actually among the richest players (not [just] in terms of money, but in terms of data ownership). But they are also craving further, “less official” big data sources.

Juri Marcucci: Importance of Big Data for Central (National) Banks.

Today, at the ISTAT National Big Data Committee meeting in Rome, Juri Marcucci from the Bank of Italy discussed their research activity on integrating Google Trends information into their financial predictive analytics.

Google Trends provides insights into user interests, expressed as the (normalized and scaled) probability that a random user searches for a particular keyword, with geographical detail down to the city level.

The Bank of Italy is using Google Trends data to complement its predictions of unemployment rates in the short and medium term. It’s definitely a big challenge, but preliminary results are promising in terms of the confidence of the obtained models. More details are available in this paper.
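As a sketch of how such a signal can be pulled in (assuming the unofficial pytrends client for Google Trends; the Bank of Italy setup is certainly more sophisticated), the idea is simply to fetch a search-interest series and use it as an extra regressor:

```python
# Sketch only: fetch a search-interest index for a job-related keyword in Italy,
# to be joined later with official unemployment series in a nowcasting model.
# Assumes the unofficial "pytrends" package (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="it-IT", tz=60)
pytrends.build_payload(["offerte di lavoro"], timeframe="today 5-y", geo="IT")
trends = pytrends.interest_over_time()  # 0-100 interest index over time

print(trends.tail())
# The resulting series can then be fed, together with official statistics,
# to any standard regression or time-series model.
```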

Paolo Giudici from the University of Pavia showed how one can correlate the risk of bank defaults with their exposure on Twitter:

Paolo Giudici: bank risk contagion based (also) on Twitter data.

Obviously, all this must take into account the bias of the sources and the quality of the data collected. This was also pointed out by Paolo Giudici: assessing the “trustability” of online sources is crucial. In their research, they defined a T-index for Twitter accounts in a way very similar to how academics define the h-index for the relevance of publications, as reported in the photographed slide below.

Paolo Giudici: T-index describing the quality of Twitter authors in finance.
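The precise definition is in their work; as a loose analogy only, an h-index-style score computed over an account’s per-tweet engagement counts would look like this:

```python
# Loose analogy, not the authors' exact T-index definition: an h-index-style
# score over a Twitter account's per-tweet engagement counts.
def h_style_index(engagements):
    """Largest h such that the account has h tweets with at least h engagements each."""
    h = 0
    for i, count in enumerate(sorted(engagements, reverse=True), start=1):
        if count >= i:
            h = i
        else:
            break
    return h

print(h_style_index([25, 12, 8, 5, 3, 1]))  # -> 4
```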

It’s very interesting to see how creative the use of (non-traditional, web-based) big data is becoming, in very diverse fields, including very traditional ones like macroeconomics and finance.

And once again, I think the biggest challenges and opportunities come from the fusion of multiple data sources together: mobile phones, financial tracks, web searches, online news, social networks, and official statistics.

This is also the path that ISTAT (the official institute for Italian statistics) is pursuing. For instance, web scraping of e-commerce prices, covering more than 40,000 products, is now integrated into the calculation of official national inflation rates.

 

 

Modeling and Analyzing Engagement in Social Network Challenges

Within a completely new line of research, we are exploring the power of modeling for human behaviour analysis, especially within social networks and/or on the occasion of large-scale live events. Participation in challenges within social networks is a very effective instrument for promoting a brand or event, and therefore it is regarded as an excellent marketing tool.
Our first research on this topic was published in November 2016 at the WISE conference, covering the analysis of user engagement within social network challenges.
In this paper, we take the challenge organizer’s perspective, and we study how to raise the engagement of players in challenges where the players are stimulated to create and evaluate content, thereby indirectly raising awareness of the brand or event itself. Slides are available on SlideShare:

We illustrate a comprehensive model of the actions and strategies that can be exploited for progressively boosting social engagement during the challenge evolution. The model studies the organizer-driven management of interactions among players, and evaluates the effectiveness of each action in light of several other factors (time, repetition, third-party actions, interplay between different social networks, and so on).
We evaluate the model through a set of experiments on a real case, the YourExpo2015 challenge. Overall, our experiments lasted 9 weeks and engaged around 800,000 users on two different social platforms; our quantitative analysis assesses the validity of the model.

The paper is published by Springer here.


 


CityOmeters, our solution for smart city analysis and management, presented at EXPO2015

CityOmeters, the complete solution proposed by Fluxedo for smart city management, which includes social engagement via micro-planning and big data flow analytics over social content and IoT, was presented today at EXPO 2015 in Milano, in the Samsung and TIM pavilion.
See the slides below:


My interview on Social Media and Society: what I said (and what I didn’t)

My recent interview on the evolution of social media and its role in modern society is available on YouTube (in Italian only, sorry about that).

While the 3+ minutes of speech necessarily had to be a general overview of the role and recent changes of social media, I wish to summarise here some technical aspects of it.

As I mentioned in the presentation:

  • social media have changed a lot since their early days: from being consumed on PCs to mobile devices, from general-purpose social networks connecting friends to digital stages where we “sell” our life to the entire world, from places for sharing personal information to platforms for publishing also objective information coming from real-world experience.
  • social media are nowadays a valuable source of information for companies, which look for (and find) their customers through social media marketing and advertising, and for public institutions and researchers, which can leverage large amounts of data to provide benefits to our everyday life
YourExpo2015 - the Instagram Photo Challenge of Expo2015 Milano

What I didn’t say is how you can do that. Well, it’s pretty simple.
The ingredients of the recipe:
  • A lot of users sharing their profile
  • A lot of content (photos, statuses, geotags, descriptions) shared by people
  • (which makes up a VERY big data problem)
  • crawlers (or stream capturing systems) capturing all this, plus storage as needed
  • MODELS of the context, the problem and the solution
  • and DATA ANALYSIS TOOLS for studying the data and extracting meaningful information
To me, the most valuable points are MODELS and ANALYSIS TOOLS. We are doing a lot of experiments on mixing model-driven techniques with semantic analysis, NLP, and social media monitoring. One example of our experiments is the YourExpo2015 Instagram Photo Challenge.
Have a look and participate if you like. More on this coming soon!
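Purely as a toy sketch of the “ingredients” listed above (the crawler here is a hypothetical stand-in for a real Instagram/Twitter capture component):

```python
# Toy sketch of the recipe: capture -> store -> analyze.
import json
from collections import Counter

def fetch_posts(hashtag):
    """Hypothetical crawler / stream client returning post records for a hashtag."""
    return [
        {"user": "alice", "hashtag": hashtag, "geotag": "Milano", "likes": 12},
        {"user": "bob", "hashtag": hashtag, "geotag": "Roma", "likes": 3},
    ]

# Capture the content shared by users and store it for later analysis.
posts = fetch_posts("YourExpo2015")
with open("posts.jsonl", "w") as f:
    for p in posts:
        f.write(json.dumps(p) + "\n")

# A (very simplified) analysis step: where is the challenge most active?
print(Counter(p["geotag"] for p in posts).most_common())
```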


Community-based Crowdsourcing – Our paper at WWW2014 SOCM

Today Andrea Mauri presented our paper “Community-based Crowdsourcing” at the SOCM Workshop co-located with the WWW 2014 conference.

SOCM is the 2nd International Workshop on the Theory and Practice of Social Machines and is an interesting venue for discussing instrumentation, tooling, and software system aspects of online social networks. The full program of the event is here.

Our paper is focused on community-based crowdsourcing applications, i.e., the ability to spawn crowdsourcing tasks over multiple communities of performers, thus leveraging the peculiar characteristics and capabilities of the community members.
We show that dynamic adaptation of crowdsourcing campaigns to community behaviour is particularly relevant. We demonstrate that this approach can be very effective for obtaining answers from communities with very different size, precision, delay and cost, by exploiting the social networking relations and the features of the crowdsourcing task. We show the approach at work within the CrowdSearcher platform, which allows configuring and dynamically adapting crowdsourcing campaigns tailored to different communities. We report on an experiment demonstrating the effectiveness of the approach.

The figure below shows a declarative reactive rule that dynamically adapts the crowdsourcing campaign by moving the task executions from a community of workers to another, when the average quality score of the community is below some threshold.
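The actual rules are written in CrowdSearcher’s declarative syntax; purely to illustrate the behaviour described above, the underlying event-condition-action logic is roughly the following (names and threshold are invented for the example):

```python
# Illustrative sketch only, not CrowdSearcher's actual rule language:
# when a community's average quality drops below a threshold, its open task
# executions are moved to the best-performing alternative community.
QUALITY_THRESHOLD = 0.6  # assumed value

communities = {
    "students":   {"avg_quality": 0.45, "open_tasks": ["t1", "t2", "t3"]},
    "professors": {"avg_quality": 0.85, "open_tasks": []},
}

def on_quality_update(name):                     # event: quality score recomputed
    src = communities[name]
    if src["avg_quality"] < QUALITY_THRESHOLD:   # condition: quality too low
        target = max(communities, key=lambda c: communities[c]["avg_quality"])
        if target != name:                       # action: re-plan the executions
            communities[target]["open_tasks"] += src["open_tasks"]
            src["open_tasks"] = []

on_quality_update("students")
print(communities)  # the three tasks now sit with the "professors" community
```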

The slides of the presentation are available on Slideshare. If you want to know more or see some demos, please visit:

http://crowdsearcher.search-computing.org

 

The full paper will be available on the ACM Digital Library shortly.


Multiplatform Reactive Crowdsourcing based on Social Networks – WWW2013

In this post I want to report on our paper on Reactive Crowdsourcing presented at the WWW 2013 conference in Rio de Janeiro, Brazil.

Here is a quick summary of motivation and idea, together with some relevant materials:

Need for control

We believe that an essential aspect for building effective crowdsourcing computations is the ability to “control the crowd”, i.e., to dynamically adapt the behaviour of the crowdsourcing system in response to the quantity and quality of completed tasks or to the availability and reliability of performers.
This new paper focuses on the machinery and methodology for deploying configurable, cross-platform, and adaptive crowdsourcing campaigns through a model-driven approach.

Control through declarative active rules

In the paper we present an approach to crowdsourcing which provides powerful and flexible crowd controls. We model each crowdsourcing application as a composition of elementary task types, and we progressively transform these high-level specifications into the features of a reactive execution environment that supports task planning, assignment and completion, as well as performer monitoring and exclusion. Controls are specified as declarative, active rules on top of data structures which are derived from the model of the application; rules can be added, dropped or modified, thus guaranteeing maximal flexibility with limited effort. The paper applies modeling practices (as also explained in our book on model-driven software engineering).
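As a purely illustrative sketch of one such control (invented names and threshold; the real controls are declarative active rules derived from the application model), performer exclusion boils down to an event-condition-action pattern:

```python
# Illustrative only: exclude a performer whose error rate exceeds a threshold.
MAX_ERROR_RATE = 0.5  # assumed value

performers = {
    "p1": {"answers": 20, "wrong": 3, "excluded": False},
    "p2": {"answers": 10, "wrong": 7, "excluded": False},
}

def on_task_completed(performer_id):                                 # event
    p = performers[performer_id]
    if p["answers"] and p["wrong"] / p["answers"] > MAX_ERROR_RATE:  # condition
        p["excluded"] = True                     # action: stop assigning tasks

for pid in performers:
    on_task_completed(pid)
print(performers)  # p2 is now excluded
```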

Here is the presentation that Alessandro Bozzon gave at WWW 2013:

Reactive crowdsourcing presentation on slideshare.

Prototype and experiments

We have a prototype platform that implements the proposed framework, and we have done extensive experiments with it. Our experiments with different rule sets demonstrate how simple changes to the rules can substantially affect the time, effort and quality involved in crowdsourcing activities.

Here is a short video demonstrating our approach through the current prototype (mainly centered on the crowdsourcing campaign configuration phase):

Paper and related activities

The paper can be downloaded for free from the ACM Digital Library through this link:
http://dl.acm.org/citation.cfm?id=2488403 (or alternatively from the WWW site)

The paper is a follow-up of our WWW2012 paper on Crowdsearcher, which focused on exploiting social networks and crowdsourcing platforms for improving search.
The paper nicely combines with another recent contribution of ours, presented at EDBT 2013, on finding the right crowd of experts on social networks for addressing a specific problem.


My invited post on Modeling Social Web Apps (on www.modeling-languages.com)

It’s with great pleasure that I announce my invited post on the modeling-languages.com blog, curated by Jordi Cabot.
First, I’m glad he invited me. Second, I’m happy that he asked for a post on Social-enabled Web application modeling.
I mean, we all see how social technologies are transforming our life. And yet, the modeling community and the software engineering community at large are paying very limited attention to this phenomenon.
That’s why I decided to address the problem by proposing a model-driven approach that is specifically focused on the development of Web applications that exploit social features, and my invited post is exactly focusing on this.
Basically, the proposal is to go from requirement specification down to static and dynamic design and to code generation of social applications with a pattern-based approach, which exploits goal-based requirement specification, UML modeling, and WebML models (enriched with social-specific primitives) for describing the user interaction. This image intuitively summarizes the idea:

You can find more details in the invited post on Social web application modeling on Jordi’s blog, and also check out the video that summarizes the approach:


Efficient Subgraph Matching – Keynote by V.S. Subrahmanian at ASONAM 2012

As part of the Data Management in the Social Semantic Web workshop (DMSSW workshop) at the ASONAM 2012 conference in Istanbul, Turkey, V.S. Subrahmanian (University of Maryland) gave an interesting talk on efficient subgraph matching on (social) networks.
Queries are themselves defined as graphs, with some of the nodes defined as constants and some defined as variables.
The complexity of queries over graphs is high, due to the large number of joins to be performed even in the case of fairly simple queries.
The size of the query is typically small relative to the entire dataset (network). The proposed approaches are useful at scales of at least tens of millions of nodes in the network.

How to work on disk

The mechanism implemented is called DOGMA index and applies an algorithm called K-merge.
The algorithm builds a hierarchical index where I put at most K nodes of the graph in each index item. To obtain that, I merge together connected nodes. You can do that randomly or, more intelligently, by trying to minimize connections between nodes in different index items.
Example of a DOGMA index, where nodes of the original network (at the bottom) are merged into higher-level representations in the level above (in this example, K = 4, since we have 4 nodes in each index position).
I don’t want to build the index by partitioning the whole graph, because it’s painful for large graphs. 
I start from a graph G0 and I merge nodes to obtain graphs G1, G2, …, Gn, each of which is more or less half the size of the previous one, until Gn has K nodes or fewer. Then I build the DOGMA index over Gn.
For the query, I can use a basic approach: identify the variable nodes that are immediately close to a constant node, and then find the possible satisfying values for those variables, starting from the constants. I can apply conditions considering distance constraints between constants and variables, as well as between candidate variable assignments. To allow this, I also save in every node of the index the distance to the closest node in the other index items.
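A rough sketch of the coarsening idea (a simplified greedy merge along edges, not the smarter cut-minimizing strategy mentioned above, and certainly not the authors’ exact K-merge implementation):

```python
# Simplified sketch of graph coarsening: merge pairs of adjacent nodes to roughly
# halve the graph, and repeat until at most k supernodes remain.
def coarsen(adj):
    """Merge each node with one still-unmerged neighbour."""
    merged, mapping = set(), {}
    for u in adj:
        if u in merged:
            continue
        partner = next((v for v in adj[u] if v not in merged and v != u), None)
        group = (u, partner) if partner is not None else (u,)
        for n in group:
            merged.add(n)
            mapping[n] = group
    # Adjacency of the coarsened graph: supernodes are the groups created above.
    coarse = {g: set() for g in set(mapping.values())}
    for u, nbrs in adj.items():
        for v in nbrs:
            if mapping[u] != mapping[v]:
                coarse[mapping[u]].add(mapping[v])
    return coarse

def build_hierarchy(adj, k):
    """Repeatedly coarsen G0 -> G1 -> ... -> Gn until at most k supernodes remain."""
    levels = [adj]
    while len(levels[-1]) > k:
        coarse = coarsen(levels[-1])
        if len(coarse) >= len(levels[-1]):  # no progress (e.g. fully disconnected graph)
            break
        levels.append(coarse)
    return levels

g0 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
print([len(g) for g in build_hierarchy(g0, k=2)])  # e.g. [6, 3, 2]
```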

How to work on the cloud

This approach has also been implemented in the cloud, through the so-called COSI architecture, assuming a cloud of k+1 computing nodes. The implementation of the edge cuts that generate the index must be very quick and must produce fairly good cuts (not necessarily optimal ones).
The image below lists some references to V.S. Subrahmanian’s works on the topic.
Some references to V.S. Subrahmanian’s works on subgraph matching.
