Together with the Urbanscope team, we gave a TEDx talk on the topics and results of the project here at Politecnico di Milano. The talk was given by our junior researchers, as we wanted it to be a choral performance as opposed to the typical one-man show.
The message is that cities are not merely physical and organizational devices: they are informational landscapes, where places are shaped more by streams of data than by traditional physical evidence. We devise tools and analyses for understanding these streams and the phenomena they represent, in order to better understand our cities.
Two layers coexist: a thick and dynamic layer of digital traces – the informational membrane – grows every day on top of the material layer of the territory, its buildings, and its infrastructure. Observing, analyzing, and representing these two layers together provides valuable insights into how the city is used and lived.
Urbanscope is a research laboratory that experiments with the collection, organization, analysis, and visualization of cross-domain geo-referenced data.
The research team is based at Politecnico di Milano and encompasses researchers with competencies in Computing Engineering, Communication and Information Design, Management Engineering, and Mathematics.
The aim of Urbanscope is to systematically produce compelling views on urban systems to foster understanding and decision making. Views are like new lenses of a macroscope: they are designed to support the recognition of specific patterns, thus enabling new perspectives.
If you enjoyed the show, you can explore our beta application at:
Today I presented our full paper titled “Extracting Emerging Knowledge from Social Media” at the WWW 2017 conference.
The work is based on a rather obvious assumption, i.e., that knowledge in the world continuously evolves, while ontologies remain largely incomplete with respect to low-frequency data belonging to the so-called long tail.
Socially produced content is an excellent source for discovering emerging knowledge: it is huge, and it immediately reflects the relevant changes within which emerging entities hide.
In the paper we propose a method and a tool for discovering emerging entities by extracting them from social media.
Once instrumented by experts through a very simple initialization, the method is capable of finding emerging entities; we propose a mixed syntactic and semantic method. The method uses seeds, i.e., prototypes of emerging entities provided by experts, to generate candidates; it then associates each candidate with a feature vector built from the terms occurring in its social content, ranks the candidates by their distance from the centroid of the seeds, and returns the top candidates as the result.
The method can be continuously or periodically iterated, using the results as new seeds.
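The ranking step described above can be sketched in a few lines. The feature vectors below are toy term-frequency vectors, and `rank_candidates` is an illustrative name for the idea, not the actual implementation from the paper:

```python
import numpy as np

def rank_candidates(seed_vectors, candidate_vectors, top_k=10):
    """Rank candidate entities by their distance from the centroid of the seeds.

    seed_vectors: dict entity -> feature vector (e.g., term frequencies
                  from the entity's social content)
    candidate_vectors: dict entity -> feature vector
    Returns the top_k candidates closest to the seed centroid.
    """
    centroid = np.mean(list(seed_vectors.values()), axis=0)
    scored = {
        name: np.linalg.norm(vec - centroid)  # Euclidean distance from centroid
        for name, vec in candidate_vectors.items()
    }
    return sorted(scored, key=scored.get)[:top_k]

# Toy example with 3-dimensional term-frequency vectors.
seeds = {"brand_a": np.array([1.0, 0.0, 0.2]),
         "brand_b": np.array([0.8, 0.1, 0.0])}
candidates = {"new_brand": np.array([0.9, 0.05, 0.1]),
              "noise_term": np.array([0.0, 1.0, 0.9])}
print(rank_candidates(seeds, candidates, top_k=1))  # ['new_brand']
```

The top-ranked candidates can then be fed back as new seeds for the next iteration, as described above.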
Social media are getting more and more important in the context of live events, such as fairs, exhibits, festivals, concerts, and so on, as they play an essential role in communicating such events to fans, interest groups, and the general population. These kinds of events are geo-localized within a city or territory and are scheduled within a public calendar.
Together with the people in the Fashion in Process group of Politecnico di Milano, we studied the impact on social media of a specific scenario, the Milano Fashion Week (MFW), which is an important event in Milano for the whole fashion business.
We focus our attention on the spreading of social content in space, measuring how the event propagates across the territory. We build different clusters of fashion brands, characterize several spatial features of propagation, and correlate them with brand popularity and temporal propagation.
We show that the clusters along the space, time, and popularity dimensions are only loosely correlated, and therefore trying to understand the dynamics of such events based on popularity alone would not be appropriate.
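As a rough sketch of the spatial side of such an analysis: given the geotags of a brand's posts, one can measure spatial spread and then correlate it with popularity. The function names and the equirectangular distance approximation below are illustrative choices, not the paper's actual methodology:

```python
import math
from statistics import mean

def spatial_spread(points):
    """Spread of geotagged posts: mean distance (km, approximate)
    of each point from the posts' centroid."""
    lat_c = mean(p[0] for p in points)
    lon_c = mean(p[1] for p in points)

    def dist(p):
        # Equirectangular approximation, adequate at city scale.
        dx = (p[1] - lon_c) * math.cos(math.radians(lat_c)) * 111.32
        dy = (p[0] - lat_c) * 110.57
        return math.hypot(dx, dy)

    return mean(dist(p) for p in points)

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return cov / var
```

With one spread value per brand and a matching popularity value, `pearson(spreads, popularities)` quantifies how loosely the two dimensions correlate.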
Daniele Quercia leads the Social Dynamics group at Bell Labs in Cambridge
(UK). He has been named one of Fortune magazine’s 2014 Data All-Stars, and spoke about “happy maps” at TED. His research focuses on urban informatics and has received best-paper awards from UbiComp 2014 and ICWSM 2015, and an honourable mention from ICWSM 2013. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate at the Department of Urban Studies and Planning at MIT. He received his PhD from University College London. His thesis was sponsored by Microsoft Research and was nominated for the BCS Best British PhD Dissertation in Computer Science.
His presentation will contrast the corporate smart-city rhetoric about efficiency, predictability, and security with a different perspective on cities, which I think is very inspiring and visionary.
“You’ll get to work on time; no queue when you go shopping, and you are safe because of CCTV cameras around you”. Well, all these things make a city acceptable, but they don’t make a city great.
Daniele is launching goodcitylife.org – a global group of like-minded people who are passionate about building technologies whose focus is not necessarily to create a smart city but to give a good life to city dwellers. The future of the city is, first and foremost, about people, and those people are increasingly networked. We will see how a creative use of network-generated data can tackle hitherto unanswered research questions. Can we rethink existing mapping tools [happy-maps]? Is it possible to capture smellscapes of entire cities and celebrate good odors [smelly-maps]? And soundscapes [chatty-maps]?
When people talk about smart cities, they tend to think about them in either a technology-oriented or a sociology-oriented manner.
However, smart cities are the places where we live and work every day now.
Here is a very broad perspective (in Italian) on the experience of big data analysis and smart city instrumentation for the town of Como, in Italy: an account of how phone calls, mobility data, social media, and people counters can contribute to making and evaluating decisions.
Within a completely new line of research, we are exploring the power of modeling for human behaviour analysis, especially within social networks and/or on the occasion of large-scale live events. Participation in challenges within social networks is a very effective instrument for promoting a brand or event, and it is therefore regarded as an excellent marketing tool.
Our first research result was published in November 2016 at the WISE conference, covering the analysis of user engagement within social network challenges.
In this paper, we take the challenge organizer’s perspective, and we study how to raise the engagement of players in challenges where the players are stimulated to create and evaluate content, thereby indirectly raising awareness about the brand or event itself. Slides are available on SlideShare:
We illustrate a comprehensive model of the actions and strategies that can be exploited for progressively boosting social engagement as the challenge evolves. The model studies the organizer-driven management of interactions among players, and evaluates the effectiveness of each action in light of several other factors (time, repetition, third-party actions, interplay between different social networks, and so on).
We evaluate the model through a set of experiments on a real case, the YourExpo2015 challenge. Overall, our experiments lasted 9 weeks and engaged around 800,000 users on two different social platforms; our quantitative analysis assesses the validity of the model.
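As a minimal sketch of how the effect of a single organizer action on engagement might be quantified (an illustrative before/after measure, much simpler than the model in the paper):

```python
from datetime import datetime, timedelta

def action_lift(events, action_time, window=timedelta(hours=6)):
    """Lift of an organizer action on engagement: ratio of engagement
    events in the window after the action to those in the window before it."""
    before = sum(1 for t in events if action_time - window <= t < action_time)
    after = sum(1 for t in events if action_time <= t < action_time + window)
    return after / before if before else float("inf")

# Toy timeline: 2 engagement events before an organizer action, 4 after it.
action = datetime(2015, 6, 1, 12, 0)
events = [action - timedelta(hours=2), action - timedelta(hours=1),
          action + timedelta(minutes=30), action + timedelta(hours=1),
          action + timedelta(hours=2), action + timedelta(hours=3)]
print(action_lift(events, action))  # 2.0
```

Comparing such lift values across actions, repetitions, and platforms is the kind of quantitative question the model addresses in a far more principled way.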
CityOmeters, the complete solution proposed by Fluxedo for smart city management that includes social engagement via micro-planning and big data flow analytics over social content and IoT, has been presented today at EXPO 2015 in Milano, in the Samsung and TIM pavilion.
See the slides below:
My recent interview on the evolution of social media and its role in modern society is available on YouTube (in Italian only, sorry about that).
While the 3+ minutes of speech necessarily had to be a general overview of the role and recent changes of social media, I wish to summarise here some technical aspects of it.
As I mentioned in the presentation:
social media have changed a lot since their early days: from being consumed on PCs to mobile devices, from general-purpose social networks connecting friends to digital stages where we “sell” our life to the entire world, from places where we share personal information to platforms where we also publish objective information coming from real-world experience.
social media are nowadays a valuable source of information for companies, who look for (and find) their customers through social media marketing and advertising, and for public institutions and researchers, who can leverage a large amount of data to provide benefits to our everyday life.
What I didn’t say is how you can do that. Well, it’s pretty simple.
The ingredients of the recipe:
A lot of users sharing their profiles
A lot of content (photos, statuses, geotags, descriptions) shared by those users (which makes up a VERY big data problem)
Crawlers (or stream-capturing systems) collecting this content, plus storage as needed
MODELS of the context, the problem, and the solution
DATA ANALYSIS TOOLS for studying the data and extracting meaningful information
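A minimal sketch of the storage-plus-analysis part of the recipe, assuming the posts have already been captured by a crawler (the schema and function names here are hypothetical, just to make the pipeline concrete):

```python
import sqlite3
from collections import Counter

def store_posts(posts, db_path=":memory:"):
    """Storage step: persist captured posts into a (hypothetical) table."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS posts
                   (id TEXT PRIMARY KEY, user TEXT, text TEXT,
                    lat REAL, lon REAL)""")
    con.executemany(
        "INSERT OR IGNORE INTO posts VALUES (:id, :user, :text, :lat, :lon)",
        posts)
    con.commit()
    return con

def top_terms(con, k=3):
    """Toy analysis step: most frequent terms across all stored posts."""
    counts = Counter()
    for (text,) in con.execute("SELECT text FROM posts"):
        counts.update(text.lower().split())
    return [term for term, _ in counts.most_common(k)]

posts = [
    {"id": "1", "user": "a", "text": "expo milano expo", "lat": 45.46, "lon": 9.19},
    {"id": "2", "user": "b", "text": "expo fashion", "lat": 45.47, "lon": 9.18},
]
con = store_posts(posts)
print(top_terms(con, 1))  # ['expo']
```

In practice the storage layer and the analysis tools are far richer, but the shape of the pipeline (capture, store, model, analyze) is exactly this.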
To me, the most valuable points are MODELS and ANALYSIS TOOLS. We are doing a lot of experiments on mixing model-driven techniques with semantic analysis, NLP, and social media monitoring. One example of our experiments is the YourExpo2015 Instagram Photo Challenge.
Have a look and participate if you like. More on this coming soon!
To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my twitter account (@MarcoBrambi).
Today Andrea Mauri presented our paper “Community-based Crowdsourcing” at the SOCM Workshop co-located with the WWW 2014 conference.
SOCM is the 2nd International Workshop on the Theory and Practice of Social Machines and is an interesting venue for discussing instrumentation, tooling, and software system aspects of online social networks. The full program of the event is here.
Our paper focuses on community-based crowdsourcing applications, i.e., the ability to spawn crowdsourcing tasks over multiple communities of performers, thus leveraging the peculiar characteristics and capabilities of the community members.
We show that dynamic adaptation of crowdsourcing campaigns to community behaviour is particularly relevant. We demonstrate that this approach can be very effective for obtaining answers from communities with very different size, precision, delay, and cost, by exploiting social networking relations and the features of the crowdsourcing task. We show the approach at work within the CrowdSearcher platform, which allows configuring and dynamically adapting crowdsourcing campaigns tailored to different communities. We report on an experiment demonstrating the effectiveness of the approach.
The figure below shows a declarative reactive rule that dynamically adapts the crowdsourcing campaign by moving the task executions from one community of workers to another, when the average quality score of the first community falls below a given threshold.
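In spirit, such a rule is an event-condition-action triple. The sketch below is illustrative Python, not the actual CrowdSearcher rule syntax:

```python
QUALITY_THRESHOLD = 0.7  # illustrative threshold

class Campaign:
    """Toy crowdsourcing campaign: pending tasks assigned to communities,
    each community with a running average quality score."""
    def __init__(self):
        self.quality = {}   # community -> average quality score
        self.pending = {}   # community -> list of pending task ids

    def on_quality_update(self, community, score):
        """Event: a new average quality score arrives for `community`.
        Condition: the score is below the threshold.
        Action: move the community's pending tasks to the best other one."""
        self.quality[community] = score
        if score < QUALITY_THRESHOLD:
            target = max((c for c in self.quality if c != community),
                         key=self.quality.get)
            self.pending.setdefault(target, []).extend(
                self.pending.pop(community, []))

camp = Campaign()
camp.quality = {"experts": 0.9}
camp.pending = {"experts": [], "students": ["t1", "t2"]}
camp.on_quality_update("students", 0.5)   # triggers the reassignment rule
print(camp.pending)  # {'experts': ['t1', 't2']}
```

In the actual platform the rule is declarative (it can be added, dropped, or modified without touching the execution engine), which is precisely what makes the campaign adaptable.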
The slides of the presentation are available on Slideshare. If you want to know more or see some demos, please visit:
We believe that an essential aspect of building effective crowdsourcing computations is the ability to “control the crowd”, i.e., to dynamically adapt the behaviour of the crowdsourcing system in response to the quantity and quality of completed tasks, or to the availability and reliability of performers. This new paper focuses on a machinery and methodology for deploying configurable, cross-platform, and adaptive crowdsourcing campaigns through a model-driven approach.
Control through declarative active rules
In the paper we present an approach to crowdsourcing that provides powerful and flexible crowd controls. We model each crowdsourcing application as a composition of elementary task types, and we progressively transform these high-level specifications into the features of a reactive execution environment that supports task planning, assignment, and completion, as well as performer monitoring and exclusion. Controls are specified as declarative, active rules on top of data structures derived from the model of the application; rules can be added, dropped, or modified, thus guaranteeing maximal flexibility with limited effort. The paper applies modeling practices (as also explained in our book on model-driven software engineering).
We have a prototype platform that implements the proposed framework, and we have run extensive experiments with it. Our experiments with different rule sets demonstrate how simple changes to the rules can substantially affect the time, effort, and quality involved in crowdsourcing activities.
Here is a short video demonstrating our approach through the current prototype (mainly centered on the crowdsourcing campaign configuration phase):
The paper is a follow-up of our WWW2012 paper on Crowdsearcher, which focused on exploiting social networks and crowdsourcing platforms for improving search. The paper nicely combines with another recent contribution of ours, presented at EDBT 2013, on finding the right crowd of experts on social networks for addressing a specific problem.