Instrumenting Continuous Knowledge Extraction, Sharing, and Benchmarking

This is a contribution in response to the Call for Linked Research for the workshop at ESWC 2017 entitled Enabling Decentralised Scholarly Communication.

Authors: Marco Brambilla, Emanuele Della Valle, Andrea Mauri, Riccardo Tommasini.

Affiliation: Politecnico di Milano, DEIB, Data Science Lab. Milano, Italy.

You can also read and download the FULL ARTICLE IN PDF.
“Nanos gigantum humeris insidentes”
(Bernard of Chartres, 1115 ca.)

Introduction

Science aims at creating new knowledge on top of the existing one, starting from the observation of physical phenomena, their modeling, and empirical validation. This combines the well-known motto “standing on the shoulders of giants” (attributed to Bernard of Chartres and later rephrased by Isaac Newton) with the need to design and validate new experiments.
However, knowledge in the world continuously evolves, at a pace that cannot be tracked even by large crowdsourced bodies of knowledge such as Wikipedia. A large share of the data generated today is never analysed and consolidated into exploitable information and knowledge (Ackoff 1989). In particular, the process of ontological knowledge discovery tends to focus on the most popular items, those that are most often quoted or referenced, and is less effective in discovering less popular items belonging to the so-called long tail, i.e. the portion of the entity distribution with few occurrences (Brambilla 2016).
This becomes a challenge for practitioners, enterprises, and scholars, who need to stay up to date with innovation and emerging facts. The scientific community also needs to make sure there is a structured and formal way to represent, store, and access such knowledge, for instance as ontologies or linked data sources.
Our idea is to propose a vision towards a set of (possibly integrated) publicly available tools that can help scholars keep pace with evolving knowledge. This implies the capability of integrating informal sources, such as social networks, blogs, and user-generated content in general. One can conjecture that somewhere, within the massive content shared by people online, any low-frequency, emerging concept or fact has left some traces. The challenge is to detect such traces, assess their relevance and trustworthiness, and transform them into formalized knowledge (Stieglitz 2014). An appropriate set of tools that improves the effectiveness of knowledge extraction, storage, analysis, publishing, and experimental benchmarking could be extremely beneficial for the entire research community, across fields and interests.

Our Vision towards Continuous Knowledge Extraction and Publishing

We foresee a paradigm where knowledge seeds can be planted and subsequently grow, finally leading to the generation and collection of new knowledge, as depicted in the exemplary process shown below: knowledge seeding (through types, context variables, and example instances), growing (for instance by exploring social media), and harvesting for extracting concepts (instances and types).

[Figure: the knowledge seeding, growing, and harvesting paradigm.]
We advocate for a set of tools that, when implemented and integrated, enable the following prospective reality (a minimal pipeline sketch follows the list):

  • possibility of selecting any kind of source of raw data, independently of its format, type, or semantics (spanning quantitative data, textual content, and multimedia content), covering both data streams and pull-based data sources;
  • possibility of applying different data cleaning and data analysis pipelines to the different sources, in order to increase data quality and the level of abstraction / aggregation;
  • possibility of integrating the selected sources;
  • possibility of running homogeneous knowledge extraction processes over the integrated sources;
  • possibility of publishing the results of the analysis and semantic enrichment as new and further (richer) data sources and streams, in a coherent, standard, and semantic way.
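Purely as an illustration of the intended shape of such a tool chain (every function and type name below is hypothetical, not an existing API), the envisioned process can be thought of as a composable pipeline whose published output can be registered as a source for a later run:

```python
# Illustrative sketch only: every function and source name here is hypothetical.
# It shows the intended shape of the pipeline, not an actual implementation.
from typing import Callable, Iterable, List

Record = dict  # a generic raw or enriched data item

def run_pipeline(sources: Iterable[Iterable[Record]],
                 cleaners: List[Callable[[Record], Record]],
                 extract: Callable[[Iterable[Record]], Iterable[Record]],
                 publish: Callable[[Iterable[Record]], None]) -> None:
    """Select sources, clean them, integrate, extract knowledge, publish."""
    integrated: List[Record] = []
    for source in sources:                 # 1. any source: stream or pull-based
        for record in source:
            for clean in cleaners:         # 2. per-source cleaning / analysis
                record = clean(record)
            integrated.append(record)      # 3. integration of the selected sources
    enriched = extract(integrated)         # 4. homogeneous knowledge extraction
    publish(enriched)                      # 5. publish results as a new (richer) source
```

Each stage would in practice be one of the advocated tools; the point is only that the published output of one run can be fed back as a source for the next.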

This enables the generation of new sources, which in turn can be used in subsequent knowledge extraction processes of the same kind. The results of this process must be available at any stage, to be shared for building an open, integrated, and continuously evolving body of knowledge for research, innovation, and dissemination purposes.

A Preliminary Feasibility Perspective

Whilst beneficial and powerful, the vision we propose is far from being achieved today. However, we are convinced that it is not out of reach in the mid term. To give a hint of this, we report here our experience with the research, design, and implementation of a few tools that point in the proposed direction:

  1. Social Knowledge Extractor (SKE) is a publicly available tool for discovering emerging knowledge by extracting it from social content. Once instrumented by experts through a very simple initialization, the tool is capable of finding emerging entities by means of a mixed syntactic-semantic method. The method uses seeds, i.e. prototypes of emerging entities provided by experts, to generate candidates; it then associates candidates with feature vectors, built from the terms occurring in their social content, and ranks the candidates by their distance from the centroid of the seeds, returning the top candidates as result (a minimal sketch of this ranking step is shown after this list). The tool can run continuously or in periodic iterations, using the results as new seeds. Our research on this has been published in (Brambilla et al., 2017); a simplified implementation is currently available online for demo purposes at http://datascience.deib.polimi.it/social-knowledge/, and the code is available as open source under an Apache 2.0 license on GitHub at https://github.com/DataSciencePolimi/social-knowledge-extractor.
  2. TripleWave is a tool for disseminating and exchanging RDF streams on the Web. To support processing information streams in real time and at Web scale, TripleWave integrates nicely with RDF Stream Processing (RSP) and Stream Reasoning (SR), solutions that combine semantic technologies with stream and event processing techniques. In particular, it integrates with an existing ecosystem of solutions to query, reason, and perform real-time processing over heterogeneous and distributed data streams. TripleWave can be fed with existing Web streams (e.g. Twitter and Wikipedia streams) or time-annotated RDF datasets (e.g. the Linked Sensor Data dataset), and it can be invoked through both pull- and push-based mechanisms, thus enabling RSP engines to automatically register and receive data from TripleWave. The tool is described in (Mauri et al., 2016) and the code is available as open source on GitHub at https://github.com/streamreasoning/TripleWave/.
  3. RSPlab enables the efficient design and execution of reproducible experiments, as well as the sharing of their results. It integrates two existing RSP benchmarks (LSBench and CityBench) and two RSP engines (C-SPARQL engine and CQELS). It provides a programmatic environment to deploy RDF streams and RSP engines in the cloud, interact with them using TripleWave and RSP Services, continuously monitor their performance, and collect statistics. RSPlab is released as open source under an Apache 2.0 license, is currently under submission to the ISWC Resources Track, and is available on GitHub at https://github.com/streamreasoning/rsplab.
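As an aid to intuition for the SKE ranking step described in item 1, here is a minimal, purely illustrative sketch (not the actual SKE implementation): seeds and candidates are represented as bag-of-words feature vectors, and candidates are ranked by their cosine distance from the centroid of the seed vectors.

```python
# Illustrative sketch of centroid-based candidate ranking (not the actual SKE code).
import math
from collections import Counter
from typing import Dict, List

def vectorize(terms: List[str]) -> Dict[str, float]:
    """Bag-of-words feature vector from the terms found in an entity's social content."""
    return dict(Counter(terms))

def centroid(vectors: List[Dict[str, float]]) -> Dict[str, float]:
    """Average of the seed feature vectors."""
    total: Dict[str, float] = {}
    for v in vectors:
        for term, w in v.items():
            total[term] = total.get(term, 0.0) + w
    return {term: w / len(vectors) for term, w in total.items()}

def cosine_distance(a: Dict[str, float], b: Dict[str, float]) -> float:
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in set(a) | set(b))
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def rank_candidates(seed_terms: List[List[str]],
                    candidate_terms: Dict[str, List[str]],
                    top_k: int = 10) -> List[str]:
    """Rank candidate entities by distance from the centroid of the seed vectors."""
    c = centroid([vectorize(t) for t in seed_terms])
    scored = {name: cosine_distance(vectorize(t), c)
              for name, t in candidate_terms.items()}
    return sorted(scored, key=scored.get)[:top_k]
```

In the continuous mode mentioned above, the top-ranked candidates would then be fed back as new seeds for the next iteration.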

Conclusions and Outlook

We believe that knowledge intake by scholars is going to become more and more time-consuming and expensive, due to the amount of knowledge that is built and shared every day. We envision a comprehensive approach based on integrated tools for data collection, cleaning, integration, analysis, and semantic representation that can run continuously, keeping formalized knowledge bases aligned with the evolution of knowledge, with limited cost and high recall on the facts and concepts that emerge or decay. These tools do not need to be implemented by the same vendor or provider; we instead advocate open-source publishing of all the implementations, as well as the definition of an agreed-upon integration platform that allows them all to communicate appropriately.

References:

  • Russell L. Ackoff. From data to wisdom. Journal of applied systems analysis 16, 3–9 (1989).

  • Marco Brambilla, Stefano Ceri, Florian Daniel, Emanuele Della Valle. On the quest for changing knowledge. In Proceedings of the Workshop on Data-Driven Innovation on the Web – DDI 16. ACM Press, 2016. Link

  • Stefan Stieglitz, Linh Dang-Xuan, Axel Bruns, Christoph Neuberger. Social Media Analytics. Business & Information Systems Engineering 6, 89–96 Springer Nature, 2014. Link

  • Marco Brambilla, Stefano Ceri, Emanuele Della Valle, Riccardo Volonterio, Felix Xavier Acero Salazar. Extracting Emerging Knowledge from Social Media. In Proceedings of the 26th International Conference on World Wide Web – WWW 17. ACM Press, 2017. Link

  • Andrea Mauri, Jean-Paul Calbimonte, Daniele Dell’Aglio, Marco Balduini, Marco Brambilla, Emanuele Della Valle, Karl Aberer. TripleWave: Spreading RDF Streams on the Web. In Lecture Notes in Computer Science, 140–149. Springer International Publishing, 2016. Link

Extracting Emerging Knowledge from Social Media

Today I presented our full paper titled “Extracting Emerging Knowledge from Social Media” at the WWW 2017 conference.

The work is based on a rather obvious assumption, i.e., that knowledge in the world continuously evolves, and that ontologies are largely incomplete as far as low-frequency data, belonging to the so-called long tail, is concerned.

Socially produced content is an excellent source for discovering emerging knowledge: it is huge, and it immediately reflects the relevant changes in which emerging entities are hidden.

In the paper we propose a method and a tool for discovering emerging entities by extracting them from social media.

Once instrumented by experts through a very simple initialization, the method is capable of finding emerging entities; we propose a mixed syntactic-semantic method. It uses seeds, i.e. prototypes of emerging entities provided by experts, to generate candidates; it then associates candidates with feature vectors, built from the terms occurring in their social content, and ranks the candidates by their distance from the centroid of the seeds, returning the top candidates as result.

The method can be continuously or periodically iterated, using the results as new seeds.

The PDF of the full paper presented at WWW 2017 is available online (open access with Creative Common license).

You can also check out the slides of my presentation on Slideshare.

A demo version of the tool is available online for free use, thanks also to our partners Dandelion and Microsoft Azure.

You can TRY THE TOOL NOW if you want.

Social Media Behaviour during Live Events: the Milano Fashion Week #MFW case

Social media are getting more and more important in the context of live events, such as fairs, exhibits, festivals, and concerts, as they play an essential role in communicating such events to fans, interest groups, and the general population. These kinds of events are geo-localized within a city or territory and are scheduled in a public calendar.

Together with the people in the Fashion in Process group of Politecnico di Milano, we studied the impact on social media of a specific scenario, the Milano Fashion Week (MFW), which is an important event in Milano for the whole fashion business.

We presented this work at the Location and the Web workshop co-located with the WWW 2017 Conference in Perth, Australia.

We focus our attention on the spreading of social content in space, measuring how the event propagates geographically. We build different clusters of fashion brands, characterize several features of their propagation in space, and correlate them with brand popularity and temporal propagation.

We show that the clusters along the space, time, and popularity dimensions are only loosely correlated, and that therefore trying to understand the dynamics of the events based on popularity aspects alone would not be appropriate.

The paper is available as an open-access PDF on the WWW 2017 Conference web site. You can download it here.

The PowerPoint presentation is available on SlideShare.

Data Science for Good City Life

On March 10, 2017 we hosted a seminar by Daniele Quercia at the Como Campus of Politecnico di Milano, on the topic:

Good City Life

[Photo: Daniele Quercia]

Daniele Quercia leads the Social Dynamics group at Bell Labs in Cambridge (UK). He has been named one of Fortune magazine’s 2014 Data All-Stars, and spoke about “happy maps” at TED. His research focuses on urban informatics and has received best paper awards from Ubicomp 2014 and ICWSM 2015, and an honourable mention from ICWSM 2013. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate at the Department of Urban Studies and Planning at MIT. He received his PhD from University College London. His thesis was sponsored by Microsoft Research and was nominated for the BCS Best British PhD dissertation in Computer Science.

His presentation contrasted the corporate smart-city rhetoric about efficiency, predictability, and security with a different perspective on cities, which I find very inspiring and visionary.

“You’ll get to work on time; no queue when you go shopping, and you are safe because of CCTV cameras around you”. Well, all these things make a city acceptable, but they don’t make a city great.


Daniele is launching goodcitylife.org – a global group of like-minded people who are passionate about building technologies whose focus is not necessarily to create a smart city but to give a good life to city dwellers. The future of the city is, first and foremost, about people, and those people are increasingly networked. We will see how a creative use of network-generated data can tackle hitherto unanswered research questions. Can we rethink existing mapping tools [happy-maps]? Is it possible to capture smellscapes of entire cities and celebrate good odors [smelly-maps]? And soundscapes [chatty-maps]?

The complete video of the seminar was streamed live on YouTube and is now available online at https://www.youtube.com/watch?v=Z0IprrZ7phc&w=560&h=315.

The seminar was open to the public and hosted at the Polo Regionale di Como headquarters of Politecnico di Milano, located in Via Anzani 42, III floor, Como.

You can also download the Good City Life flyer.

When a Smart City gets Personal

When people talk about smart cities, the tendency is to think about them in a technology-oriented or sociology-oriented manner.

However, smart cities are the places where we live and work every day now.

Here is a very broad perspective (in Italian) on the experience of big data analysis and smart city instrumentation for the town of Como, Italy: an account of how phone calls, mobility data, social media, and people counters can contribute to making and evaluating decisions.


You can read it on my Medium channel.


The role of Big Data in Banks

I was listening to R. Martin Chavez, Goldman Sachs deputy CFO, just last month at Harvard, at the ComputeFest 2017 event; more precisely, at the Symposium on the Future of Computation in Science and Engineering on “Data, Dollars, and Algorithms: The Computational Economy”, held at Harvard on Thursday, January 19, 2017.

His claim was that

Banks are essentially API providers.

The entire structure and infrastructure of Goldman Sachs is being restructured for that. His case is that you should not compare a bank with a shop or store; you should compare it with Google. Just imagine that every time you wanted to search on Google you had to get in touch (i.e., make a phone call or submit a request) with some Google employee, who at some point would come back to you with the result. Nonsense, right? Well, this is what actually happens with banks. It was happening with consumer-oriented banks before online banking, and it is still largely happening for business banks.

But this is going to change. The amount of data and the speed and volume of financial transactions no longer allow it.

Banks are actually among the richest players (not [just] in terms of money, but in terms of data ownership). But they are also craving further, “less official”, big data sources.

[Slide: Juri Marcucci on the importance of big data for central (national) banks.]

Today at the ISTAT National Big Data Committee meeting in Rome, Juri Marcucci from the Bank of Italy discussed their research on integrating Google Trends information into their financial predictive analytics.

Google Trends provides insights into user interests, expressed as the probability that a random user will search for a particular keyword (normalized and scaled, with geographical detail down to the city level).

The Bank of Italy is using Google Trends data to complement its short- and mid-term predictions of unemployment rates. It is definitely a big challenge, but preliminary results are promising in terms of confidence in the obtained models. More details are available in this paper.
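As a toy illustration of the general idea (this is not the Bank of Italy’s actual model, and all numbers below are made up), a lagged-unemployment baseline can be augmented with a Google Trends search-intensity feature:

```python
# Toy nowcasting sketch: augment a lagged-unemployment baseline with a
# Google Trends search-intensity feature. All data here is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical quarterly series: unemployment rate (%) and a scaled
# Google Trends index for job-search-related keywords.
unemployment = np.array([9.8, 10.1, 10.4, 10.6, 10.9, 11.2, 11.1, 10.9, 10.7, 10.4])
trends_index = np.array([42, 48, 55, 60, 66, 71, 69, 64, 58, 51], dtype=float)

# Predict next-quarter unemployment from its lagged value plus the Trends index.
X = np.column_stack([unemployment[:-1], trends_index[:-1]])
y = unemployment[1:]

model = LinearRegression().fit(X, y)
next_quarter = model.predict([[unemployment[-1], trends_index[-1]]])
print(f"Predicted next-quarter unemployment: {next_quarter[0]:.2f}%")
```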

Paolo Giudici from the University of Pavia showed how one can correlate the risk of bank defaults with their exposure on Twitter:

[Slide: Paolo Giudici on bank risk contagion based (also) on Twitter data.]

Obviously, all this must take into account the bias of the sources and the quality of the collected data, as Paolo Giudici also pointed out. Assessing the “trustability” of online sources is crucial. In their research, they define a T-index for Twitter accounts in much the same way academics define the h-index for the relevance of publications, as reported in the slide below.

[Slide: Paolo Giudici on the T-index, describing the quality of Twitter authors in finance.]
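For intuition only, here is a toy h-index-style computation of such an account-level index; the exact T-index definition used in the cited research may differ:

```python
# h-index-style sketch: the largest T such that the account has at least
# T tweets with at least T retweets each. This is an analogy, not the
# exact T-index definition from the cited research.
from typing import List

def t_index(retweet_counts: List[int]) -> int:
    counts = sorted(retweet_counts, reverse=True)
    t = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            t = i
        else:
            break
    return t

print(t_index([120, 40, 7, 5, 5, 3, 1]))  # -> 5
```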

It’s very interesting to see how creative the use of (non-traditional, web-based) big data is becoming, in very diverse fields, including very traditional ones like macroeconomics and finance.

And once again, I think the biggest challenges and opportunities come from the fusion of multiple data sources together: mobile phones, financial tracks, web searches, online news, social networks, and official statistics.

This is also the path that ISTAT (the Italian national institute of statistics) is pursuing. For instance, web-scraping techniques (for e-commerce prices), covering more than 40,000 product prices, are now integrated into the calculation of the official national inflation rates.
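As a toy illustration of how scraped price quotes can feed an elementary price index, here is a Jevons-style geometric mean of price relatives over matched products (the products and prices below are made up, and this is not ISTAT’s actual procedure):

```python
# Toy elementary price index from scraped price quotes: the Jevons index,
# i.e. the geometric mean of price relatives for matched products.
# Product names and prices are made up for illustration.
from math import prod

prices_previous = {"pasta_500g": 1.10, "olive_oil_1l": 5.40, "coffee_250g": 3.20}
prices_current  = {"pasta_500g": 1.15, "olive_oil_1l": 5.60, "coffee_250g": 3.15}

def jevons_index(p0: dict, p1: dict) -> float:
    """Geometric mean of price relatives p1/p0 over matched products (base = 100)."""
    matched = p0.keys() & p1.keys()
    relatives = [p1[k] / p0[k] for k in matched]
    return 100 * prod(relatives) ** (1 / len(relatives))

print(f"Elementary index: {jevons_index(prices_previous, prices_current):.2f}")
```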


The Harvard-Politecnico Joint Program on Data Science in full bloom

After months of preparation, here we are.

This week we kicked off the second edition of the DataShack program on Data Science, which brings together interdisciplinary teams of data science, software engineering & computer science, and design students from Harvard (Institute for Applied Computational Science) and Politecnico di Milano (faculties of Engineering and Design).

The students will address big data extraction, analysis, and visualization problems provided by two real-world stakeholders in Italy: the Como city municipality and Moleskine.

The Moleskine DataShack project will explore the popularity and success of different Moleskine products co-branded with other famous brands (also known as special editions) and launched at specific points in time. The main field of analysis is the impact that the different products have on social media channels. The social media analysis will then be correlated with product distribution and sales performance data, along multiple dimensions (temporal, geographical, etc.) and product features.

The Como project consists of collecting and analyzing data about the city and the way people live and move within it, by integrating multiple and diverse data sources. The problems to be addressed may include providing estimates of human density and movements within the city, predicting the impact of hypothetical future events, determining the best allocation of sensors in the streets, and defining the optimal user experience and interaction for exploring the city data.

[Photo: the kickoff meeting of the DataShack 2017 projects, at Harvard.]
Faculty members Pavlos Protopapas, Stefano Ceri, Paola Bertola, Paolo Ciuccarelli, and myself (Marco Brambilla) are involved in the program.

The teams have been formed and the problems assigned. I really look forward to advising the groups in the coming months and seeing the results that will come out. The students have already shown commitment and engagement. I’m confident that they will be excellent and innovative this year!

For further activities on data science within our group you can refer to the DataScience Lab site, Socialometers, and Urbanscope.


"What’s special about us?" Harvard computational science symposium on Brain+Computer systems

On Friday, January 22, 2016 I attended a very interesting symposium organised by the Harvard Institute for Applied Computational Science on “BRAIN + MACHINES: EXPLORING THE FRONTIERS OF NEUROSCIENCE AND COMPUTER SCIENCE”.

Although it fell outside my main research fields, I found it very interesting and enlightening, and the topics discussed could also imply a crucial role for modelling practices.
The introductory speech by David Cox addressed the role and span of brain studies. First, he pointed out that when we say we want to study the brain, at a deep level we are saying that we want to study ourselves.
Indeed, we all perceive the human species as special. But why is that? We are not the biggest, longest-living, most numerous, or most adapted species. We simply cover a niche, like any other species.

What’s special about us is complexity: not complexity in a general sense (nature is full of complexity), but specifically the complexity of our brain.
Our brain includes 100 billion neurons and 100 trillion connections.
We are able to deal with complex information in incredible ways, because each neuron is actually a small computer, and globally our brain is enormously more powerful than any computer built so far.
We therefore build clusters of computers. But this is still not enough to reach the brain’s power: we need to understand how the brain works in order to treat and replicate it.
Typical and crucial problems include the study of vision and image processing, positioning and mobility, and so on.
That’s where I think modelling can play a crucial role.
As we clearly pointed out in our book Model-Driven Software Engineering in Practice, modelling and abstraction are a natural way of working for our brain, and I got confirmation of this from renowned Harvard luminaries today.
I really think that, should we discover the modelling approaches of our mind, we could unveil important aspects of several research fields.
Just imagine if:

  • we could represent human brain processes through models
  • we could replicate these processes and apply modelling techniques for improving, transforming and exploiting such models.

This would pave the way to countless applications and research directions. However, one big challenge opens up for the modelling community: are we able to deal with models including trillions of items?

Any further insights on this?

If you want further details on the event, check out the official website of the symposium here and my storified social media report here.


Efficient Subgraph Matching – Keynote by V.S. Subrahmanian at ASONAM 2012

[Photo: V.S. Subrahmanian (Univ. of Maryland)]
As part of the Data Management in the Social Semantic Web (DMSSW) workshop at the ASONAM 2012 conference in Istanbul, Turkey, V.S. Subrahmanian (University of Maryland) gave an interesting talk on efficient subgraph matching over (social) networks.
Queries are defined as graphs themselves, with some of the nodes bound to constants and others left as variables.
The complexity of queries over graphs is high, due to the large number of joins to be performed even for fairly simple queries.
The size of the query is typically small with respect to the entire dataset (network). The proposed approaches are useful at scales of at least tens of millions of nodes in the network.

How to work on disk

The mechanism implemented is called the DOGMA index and applies an algorithm called K-merge.
The algorithm builds a hierarchical index in which each index item holds at most K nodes of the graph; to obtain this, connected nodes are merged together. This can be done randomly or, more intelligently, by trying to minimize the connections between nodes placed in different index items.
[Figure: example of a DOGMA index, where nodes of the original network (at the bottom) are merged into higher-level representations in the level above; here K = 4, since each index position holds four nodes.]
Building the index by partitioning the whole graph at once would be too painful for large graphs. Instead, starting from the original graph G0, nodes are merged to obtain G1, G2, …, Gn, each roughly half the size of the previous one, until Gn has K nodes or fewer; the DOGMA index is then built over Gn.
For querying, a basic approach is to identify the variable nodes that are immediately adjacent to a constant node and to find the possible satisfying values for those variables, starting from the constants. Conditions can then be applied considering distance constraints between constants and variables, as well as between candidate variable bindings. To support this, every node of the index also stores its distance to the closest node in the other index items.
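For intuition, here is a purely illustrative sketch of the coarsening idea described above (repeatedly contracting pairs of adjacent nodes so that each level is roughly half the size of the previous one, until at most K supernodes remain); it is not the actual DOGMA/K-merge implementation and it ignores the edge-cut minimization mentioned earlier:

```python
# Illustrative graph-coarsening sketch (not the actual DOGMA K-merge code):
# repeatedly contract pairs of adjacent nodes so that each level is roughly
# half the size of the previous one, until at most K supernodes remain.
from typing import Dict, Set

Graph = Dict[str, Set[str]]  # adjacency: node -> set of neighbour nodes

def coarsen_once(graph: Graph) -> Graph:
    """Contract a greedy matching of adjacent nodes into supernodes."""
    merged_into: Dict[str, str] = {}
    used: Set[str] = set()
    for u in graph:
        if u in used:
            continue
        partner = next((v for v in graph[u] if v not in used and v != u), None)
        super_name = u if partner is None else f"{u}+{partner}"
        merged_into[u] = super_name
        used.add(u)
        if partner is not None:
            merged_into[partner] = super_name
            used.add(partner)
    coarse: Graph = {s: set() for s in set(merged_into.values())}
    for u, neighbours in graph.items():
        for v in neighbours:
            su, sv = merged_into[u], merged_into[v]
            if su != sv:
                coarse[su].add(sv)
                coarse[sv].add(su)
    return coarse

def coarsen(graph: Graph, k: int) -> Graph:
    """Coarsen until at most k supernodes remain (levels G0, G1, ..., Gn)."""
    level = graph
    while len(level) > k:
        coarser = coarsen_once(level)
        if len(coarser) == len(level):   # no adjacent pair left to merge
            break
        level = coarser
    return level
```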

How to work on the cloud

This approach has also been implemented in the cloud, through the so-called COSI architecture, assuming a cloud of k+1 computing nodes. The computation of the edge cuts that generate the index must be very quick and produce reasonably good cuts (not necessarily optimal ones).
The image below lists some of the references to V.S. Subrahmanian’s work on the topic.
[Figure: some references to V.S. Subrahmanian’s work on subgraph matching.]


Keynote by David Campbell (Microsoft) on Big Data Challenges at VLDB 2011, Seattle

This post is about the insights I got from the interesting keynote speech by David Campbell (Microsoft) on big data challenges, given on August 31st, 2011 at VLDB 2011 in Seattle.

The challenge of big data is not interesting just because of the “big” per se. It is a multi-faceted concept, and all of its perspectives need to be considered.
The point is that this big data must be available on small devices and with a shorter time-to-concept or time-to-insight than in the past.
We can no longer afford the traditional paradigm, in which the lifecycle is:

  • pose question
  • conceptual model
  • collect data
  • logical model
  • physical model
  • respond question

The question lifecycle can be summarized by the graph below:

However, the current lead time of this cycle is too long (weeks or months). The true challenge is that we have much more data than we can model. The bottleneck is becoming the modeling phase, as shown below:

The right cycle to adopt is the sensemaking cycle developed by Pirolli and Card in 2005 in the intelligence analysis community.
The notion is to have a frame that explains the data while, vice versa, the data supports the explanatory frame, in a continuous feedback and interdependent relationship (see the data-frame theory of sensemaking by Klein et al.).
So far this has been viable in modeled domains, while big data expands it to unmodeled domains.
This calls for automatic model generation.
The other challenge is to guarantee that the new paradigm can encompass traditional data applications, so that it gets the best of both traditional data and big data.
A few patterns have been identified for big data:

  • Digital shoebox: retain all the ambient data to enable sensemaking. This is motivated by the cost of data acquisition and storage going toward zero: the raw data is simply augmented with a sourceID and an instanceID and kept for future usage or sensemaking (a minimal sketch follows this list).
  • Information production: turn the acquired data from the digital shoebox into other events, states, and results, thus transforming raw data into information (still requiring subsequent processing). The results go back into the digital shoebox.
  • Model development: enable sensemaking directly over the digital shoebox, without extensive up-front modeling, so as to create knowledge. Simple visualizations often suffice to get the big picture of a trend or a behaviour (e.g., home-automation sensors can reveal the habits of a family).
  • Monitor, mine, manage: develop and use the generated models to perform active management or intervention. Models (or algorithms) are automatically generated so that they can be installed as a new system (think, e.g., of fraud detection and similar fields).
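
A minimal sketch of the digital shoebox idea mentioned in the first pattern, assuming an append-only store where raw records are only tagged with a source and an instance identifier (the file format and field names are illustrative choices, not a standard):

```python
# Minimal "digital shoebox" sketch: keep every raw record, augmented only with
# a sourceID and an instanceID, so it stays available for later sensemaking.
# The JSON-lines file and field names are illustrative choices, not a standard.
import json
import uuid
from datetime import datetime, timezone

SHOEBOX_PATH = "shoebox.jsonl"

def store(raw_record: dict, source_id: str) -> str:
    """Append a raw record to the shoebox, tagged with source and instance IDs."""
    instance_id = str(uuid.uuid4())
    entry = {
        "sourceID": source_id,
        "instanceID": instance_id,
        "ingestedAt": datetime.now(timezone.utc).isoformat(),
        "payload": raw_record,          # raw data, kept as-is for future use
    }
    with open(SHOEBOX_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return instance_id

# Example: ambient data from a (hypothetical) home-automation sensor.
store({"sensor": "livingroom-temp", "value": 21.5}, source_id="home-sensors")
```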

I think that these patterns can actually be seen more as new development phases than as patterns. Their application can significantly shorten the time-to-insight and is independent of the size of the data source.
On the other hand, I think this paradigm applies more to sensor data than to big data in general (e.g., data sources on the web), but it still has huge potential for personal information management, social networking data, and enterprise management.
