Data Cleaning for Knowledge Extraction and Understanding on Social Media


Social media platforms let users share their opinions through textual or multimedia content. In many settings, this becomes a valuable source of knowledge that can be exploited for specific business objectives. Brands and companies often ask for social media monitoring as a way to understand the stance, opinions, and sentiment of their customers and of their current and potential audience. This is crucial for them because it lets them understand trends and anticipate future commercial and marketing opportunities.

However, all this relies on a solid and reliable data collection phase, one that guarantees that all the analyses, extractions and predictions are applied to clean, solid and focused data. Indeed, the typical topic-based collection of social media content, performed through keyword-based search, tends to yield very noisy results.

We recently carried out a study aimed at cleaning the data collected from social content, within specific domains or related to given topics of interest. We propose a basic method for data cleaning and removal of off-topic content based on supervised machine learning techniques, i.e. classification, applied to data collected from social media platforms through keywords regarding a specific topic. We define a general method and then validate it through an experiment of data extraction from Twitter, with respect to a set of famous cultural institutions in Italy, including theaters, museums, and other venues.

For this case study, we collaborated with domain experts to label the dataset, and we then evaluated and compared the performance of classifiers trained with different feature extraction strategies.
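To make the approach concrete, here is a minimal Python sketch of the cleaning step (the paper does not prescribe this code): a supervised classifier trained on expert-labeled posts, compared across two illustrative feature extraction strategies. The scikit-learn components, the logistic regression model, and the toy data are all assumptions chosen for illustration.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in for the expert-labeled dataset: 1 = on-topic, 0 = off-topic noise.
# Keyword search for "scala" (as in Teatro alla Scala) also matches homonyms.
posts = [
    "Wonderful opera night at Teatro alla Scala",
    "La Scala's new exhibition on stage design is stunning",
    "Visited the museum today, the Caravaggio room is a must",
    "Great guided tour of the theatre backstage",
    "buying a new ladder (scala) for the garage",
    "scala di valori completamente sbagliata",
    "my cat climbed the stairs again",
    "discount codes for online shopping inside",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Two illustrative feature extraction strategies to compare.
strategies = {
    "bag-of-words": CountVectorizer(ngram_range=(1, 2)),
    "tf-idf": TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
}

for name, vectorizer in strategies.items():
    clf = make_pipeline(vectorizer, LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, posts, labels, cv=2, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.2f}")

In a real deployment the classifier would be trained on the expert-labeled corpus and then used to filter the incoming keyword-collected stream before any downstream analysis.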

The work has been presented at the KDWEB workshop at the ICWE 2018 conference.

A preprint of the paper can be downloaded and cited as reported here:

Emre Calisir, Marco Brambilla. The Problem of Data Cleaning for Knowledge Extraction from Social Media. KDWeb Workshop 2018, co-located with ICWE 2018, Cáceres, Spain, June 2018.

The slides used in the workshop are available online here:


Analysis of user behaviour and social media content for art and culture events

In our most recent study, we analysed user behaviour and profiles, as well as the textual and visual content posted on social media about art and culture events.

The corresponding paper has been presented at CD-MAKE 2017 in Reggio Calabria on August 31st, 2017.

Nowadays people share everything on online social networks, from daily life stories to the latest local and global news and events. In our paper, we address the specific problem of user behavioural profiling in the context of cultural and artistic events.

We propose an analysis pipeline that examines the profiles of online users based on the textual content they publish online. The pipeline covers the following steps: data extraction and enrichment; topic modeling based on LDA; dimensionality reduction; user clustering; prediction of interests; and content analysis, including profiling of images and subjects.
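As a rough illustration of the central steps of this pipeline (not the actual implementation used in the paper), the sketch below runs LDA topic modeling over users' aggregated posts and then clusters users in the resulting low-dimensional topic space. The scikit-learn components, the parameter choices, and the toy documents are assumptions made for brevity.

from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical input: one aggregated text per user, built from their posts.
user_docs = [
    "floating piers christo installation lake iseo walk on water",
    "walked the floating piers amazing art experience at sunset",
    "traffic trains crowded station long queue to reach the piers",
    "queue transport shuttle bus crowded parking at the lake",
    "photography drone shots orange fabric lake landscape",
    "photo gallery pictures of the orange walkway from above",
]

counts = CountVectorizer().fit_transform(user_docs)

# LDA doubles as dimensionality reduction: each user becomes a
# low-dimensional vector of topic proportions.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
user_topics = lda.fit_transform(counts)

# Cluster users according to the topics they talk about.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(user_topics)
print(clusters)

Clustering in the topic space rather than on raw word counts keeps the user representation compact and groups users by what they discuss (the artwork itself, logistics, photography) rather than by exact vocabulary overlap.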

We show our approach at work on the monitoring of participation in a large-scale artistic installation, The Floating Piers by Christo and Jeanne-Claude, which attracted more than 1.5 million visitors in just two weeks. In the paper we report our findings and discuss the pros and cons of the approach.

The full paper is published by Springer in the LNCS series in volume 10410, pages 219-236.

The slides used for the presentation are available on SlideShare:


Extracting Emerging Knowledge from Social Media

Today I presented our full paper titled “Extracting Emerging Knowledge from Social Media” at the WWW 2017 conference.

The work is based on a rather obvious assumption, i.e., that knowledge in the world continuously evolves, and that ontologies are largely incomplete with respect to low-frequency data belonging to the so-called long tail.

Socially produced content is an excellent source for discovering emerging knowledge: it is huge, and it immediately reflects the relevant real-world changes in which emerging entities are hidden.

In the paper we propose a method and a tool for discovering emerging entities by extracting them from social media.

Once instrumented by experts through a very simple initialization, the method is capable of finding emerging entities; we propose a mixed syntactic and semantic method. The method uses seeds, i.e. prototypes of emerging entities provided by experts, to generate candidates; it then associates each candidate with a feature vector built from the terms occurring in its social content, ranks the candidates by their distance from the centroid of the seeds, and returns the top candidates as the result.

The method can be iterated continuously or periodically, using the results as new seeds.
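The following Python sketch illustrates the ranking idea under stated assumptions; the actual tool (linked below) is more sophisticated. Here each seed and candidate is represented by a TF-IDF vector of the terms occurring in its social content, candidates are ranked by Euclidean distance from the seed centroid, and top-ranked candidates could be promoted to seeds for the next iteration. The rank_candidates helper and the toy texts are hypothetical.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_candidates(seed_texts, candidate_texts, top_k=3):
    """Return candidate indices sorted by distance from the seed centroid."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(seed_texts + candidate_texts).toarray()
    seed_vecs = vectors[: len(seed_texts)]
    cand_vecs = vectors[len(seed_texts):]
    # Centroid of the seed vectors; closer candidates are better matches.
    centroid = seed_vecs.mean(axis=0)
    distances = np.linalg.norm(cand_vecs - centroid, axis=1)
    return np.argsort(distances)[:top_k]

# Hypothetical social content around seed entities (known emerging brands)
# and around candidate entities generated from the seeds.
seeds = [
    "young designer debut collection runway show milan fashion week",
    "new streetwear label pop-up store buzz milan fashion week",
]
candidates = [
    "emerging label presents debut collection at milan fashion week",
    "best pizza recipe for a quick weeknight dinner",
    "young designer runway show praised by fashion critics",
]
print(rank_candidates(seeds, candidates, top_k=2))

Feeding the top-ranked candidates back in as new seeds is what lets the process run continuously, tracking the long tail as it evolves.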

The PDF of the full paper presented at WWW 2017 is available online (open access with a Creative Commons license).

You can also check out the slides of my presentation on Slideshare.

A version of the tool is available online for free use, thanks also to our partners Dandelion API and Microsoft Azure. The most recent version of the tool is available on GitHub here.

Data Science for Good City Life

On March 10, 2017, we hosted a seminar by Daniele Quercia at the Como Campus of Politecnico di Milano, on the topic:

Good City Life

Daniele Quercia

Daniele Quercia leads the Social Dynamics group at Bell Labs in Cambridge (UK). He has been named one of Fortune magazine's 2014 Data All-Stars, and spoke about "happy maps" at TED. His research focuses on urban informatics and has received best paper awards from Ubicomp 2014 and ICWSM 2015, and an honourable mention from ICWSM 2013. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate at the department of Urban Studies and Planning at MIT. He received his PhD from University College London. His thesis was sponsored by Microsoft Research and was nominated for the BCS Best British PhD dissertation in Computer Science.

His presentation contrasts the corporate smart-city rhetoric about efficiency, predictability, and security with a different perspective on cities, which I find very inspiring and visionary.

“You’ll get to work on time; no queue when you go shopping, and you are safe because of CCTV cameras around you”. Well, all these things make a city acceptable, but they don’t make a city great.


Daniele is launching goodcitylife.org – a global group of like-minded people who are passionate about building technologies whose focus is not necessarily to create a smart city but to give a good life to city dwellers. The future of the city is, first and foremost, about people, and those people are increasingly networked. We will see how a creative use of network-generated data can tackle hitherto unanswered research questions. Can we rethink existing mapping tools [happy-maps]? Is it possible to capture smellscapes of entire cities and celebrate good odors [smelly-maps]? And soundscapes [chatty-maps]?

The complete video of the seminar was streamed live on YouTube and is now available online at https://www.youtube.com/watch?v=Z0IprrZ7phc.

The seminar was open to the public and hosted at the Polo Regionale di Como headquarters of Politecnico di Milano, located in Via Anzani 42, III floor, Como.

You can also download the Good City Life flyer.

Success story paper: Large-scale Model-Driven Engineering of Web User Interaction with WebML and WebRatio

Our paper “Large-scale Model-Driven Engineering of Web User Interaction: The WebML and WebRatio experience” has been published online on Elsevier’s journal: Science of Computer Programming, in the special issue Success Stories in Model Driven Engineering (edited by Davide Di Ruscio, Richard Paige, Alfonso Pierantonio).

The history we report spans a decade that has seen a dramatic change in the way software applications are built, which can be summarized in three fundamental factors that impacted the evolution of WebML and WebRatio:
  • The progressive consolidation of the Web as an application development platform
  • At the front-end, the multiplication of access devices and usage scenarios
  • At the back-end, the emergence of Business Process Models as a uniform way of representing cross-organization functionality, and of Service Oriented Architecture as the technical vehicle for deploying process enactment on top of heterogeneous IT infrastructures
These change drivers put much strain on a DSL like WebML, born for capturing the features of the Web, and produced the timeline shown below:


The paper reports on our experience with WebML and WebRatio and describes the perspective of the new IFML standard adopted by the OMG. It tells the story of our company in the MDE tool market, facing the challenges of deploying MDE solutions in large-scale industrial players, with a focus on the model-driven design of user interaction and on code generation across all the tiers of Web/SOA applications. We describe our decisions on the DSL (domain-specific language) and on the features we decided to implement (or not) in the tool.
The paper includes an overview of WebRatio and of its accompanying DSL for Web application design (WebML); it describes the parallel evolution of the WebML language and of the WebRatio development environment; it reports the lessons learnt from the joint design of the DSL and of its support tool; and it presents a sample of customer histories, together with quantitative measures of WebRatio usage and statistics on WebML model size and development effort. Finally, we take the occasion to reflect on the success and failure factors for MDE that emerged from the WebRatio experience.

The paper is available from Elsevier and also here in our open-access preprint version.

To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my twitter account (@MarcoBrambi).