Efficient Subgraph Matching – Keynote by V.S. Subrahmanian at ASONAM 2012

V.S. Subrahmanian (Univ. of Maryland)
As part of the Data Management in the Social Semantic Web workshop (DMSSW workshop) at the ASONAM 2012 conference in Istanbul, Turkey, V.S. Subrahmanian (University of Maryland) gave an interesting talk on efficient subgraph matching on (social) networks.
Queries are themselves defined as graphs, in which some nodes are bound to constants and others are left as variables.
The complexity of answering queries over graphs is high, due to the large number of joins to be performed even for fairly simple queries.
The size of the query is typically small with respect to the entire dataset (network). The proposed approaches are useful at scales of at least tens of millions of nodes in the network.
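To make the setting concrete, here is a toy illustration of how such a query graph could be represented (my own notation for the constants-plus-variables idea, not the one used in the talk):

```python
# A toy subgraph query: find ?X and ?Y such that ?X works with Alice,
# ?Y is a friend of ?X, and ?Y lives in Istanbul.
# Node labels starting with '?' are variables; the others are constants.
query_edges = [
    ("?X", "works_with", "Alice"),
    ("?Y", "friend_of", "?X"),
    ("?Y", "lives_in", "Istanbul"),
]

def is_variable(node):
    """A node is a variable if it still has to be bound to a concrete entity."""
    return node.startswith("?")

constants = {n for s, _, o in query_edges for n in (s, o) if not is_variable(n)}
variables = {n for s, _, o in query_edges for n in (s, o) if is_variable(n)}
```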

How to work on disk

The mechanism implemented is called the DOGMA index and relies on an algorithm called K-merge.
The algorithm builds a hierarchical index that places at most K nodes of the graph in each index item. To obtain this, connected nodes are merged together; the merging can be done randomly or, more intelligently, by trying to minimize the number of connections between nodes that end up in different index items.
Example of a DOGMA index, where nodes of the original network (at the bottom) are merged into higher-level representations at the level above (in this example, K = 4, since each index position holds 4 nodes).
The index is not built by partitioning the whole graph at once, because that is too expensive for large graphs.
Instead, starting from the original graph G0, nodes are merged to obtain graphs G1, G2, ..., Gn, each roughly half the size of the previous one, until Gn has K nodes or fewer. The DOGMA index is then built over Gn.
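A minimal sketch of this coarsening idea, based on random pairwise merging of adjacent nodes (the data structures and function names are my own assumptions, not the actual K-merge implementation):

```python
import random

def coarsen_once(graph):
    """One coarsening pass: merge adjacent nodes pairwise so that the
    result has roughly half as many (super)nodes.
    `graph` maps each node to the set of its neighbours (undirected)."""
    merged_into = {}
    unmatched = set(graph)
    for node in list(graph):
        if node not in unmatched:
            continue
        # prefer merging with a still-unmatched neighbour, to keep merged nodes connected
        partners = [n for n in graph[node] if n in unmatched and n != node]
        partner = random.choice(partners) if partners else node
        unmatched.discard(node)
        unmatched.discard(partner)
        supernode = (node, partner)
        merged_into[node] = supernode
        merged_into[partner] = supernode
    # rebuild the adjacency over supernodes
    coarse = {s: set() for s in merged_into.values()}
    for node, neighbours in graph.items():
        s = merged_into[node]
        for n in neighbours:
            t = merged_into[n]
            if t != s:
                coarse[s].add(t)
                coarse[t].add(s)
    return coarse

def build_hierarchy(g0, k):
    """Coarsen G0 into G1, G2, ..., Gn until Gn has at most k supernodes."""
    levels = [g0]
    while len(levels[-1]) > k:
        coarser = coarsen_once(levels[-1])
        if len(coarser) >= len(levels[-1]):  # no progress (e.g. only isolated nodes)
            break
        levels.append(coarser)
    return levels
```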
For query answering, a basic approach is the following: identify the variable nodes that are immediately adjacent to a constant node, and then find the possible satisfying values for those variables, starting from the constants. Candidates can be pruned using distance constraints between constants and variables, as well as between the candidate bindings of different variables. To enable this, every node of the index also stores the distance to the closest node in the other index items.
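A simplified sketch of this candidate-generation-and-pruning idea, using plain breadth-first search in place of the distances precomputed in the index (the function names and interface are illustrative assumptions, not the algorithm from the talk):

```python
from collections import deque

def bfs_distances(graph, source):
    """Hop distances from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def candidates_near(graph, constant, max_dist):
    """Candidate bindings for a variable that the query places within
    `max_dist` hops of the constant node."""
    dist = bfs_distances(graph, constant)
    return {v for v, d in dist.items() if 0 < d <= max_dist}

def prune_candidates(graph, constraints):
    """Intersect the candidate sets induced by each (constant, max_dist)
    constraint attached to the same variable."""
    result = None
    for constant, max_dist in constraints:
        cand = candidates_near(graph, constant, max_dist)
        result = cand if result is None else result & cand
    return result if result is not None else set()
```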

How to work on the cloud

This approach has also been implemented in the cloud, through the so-called COSI architecture, which assumes a cloud of k+1 computing nodes. The edge-cut computation that generates the index must be very fast and produce fairly good cuts (not necessarily optimal ones).
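As an illustration of what a fast, non-optimal edge-cut heuristic could look like, here is a simple greedy streaming partitioner (my own sketch, not the algorithm actually used by COSI):

```python
def greedy_partition(adjacency, num_partitions, capacity):
    """Assign each vertex to one of `num_partitions` compute nodes,
    favouring the partition that already holds most of its neighbours,
    with `capacity` enforcing a rough balance."""
    assignment = {}
    sizes = [0] * num_partitions
    for v, neighbours in adjacency.items():
        counts = [0] * num_partitions
        for n in neighbours:
            if n in assignment:
                counts[assignment[n]] += 1
        # prefer many already-placed neighbours, then the lighter partition
        for p in sorted(range(num_partitions), key=lambda p: (-counts[p], sizes[p])):
            if sizes[p] < capacity:
                assignment[v] = p
                sizes[p] += 1
                break
        else:
            raise ValueError("all partitions are full")
    return assignment

def cut_size(adjacency, assignment):
    """Number of edges whose endpoints end up on different compute nodes."""
    cut = sum(1 for v, ns in adjacency.items()
              for n in ns if assignment[v] != assignment[n])
    return cut // 2  # each undirected edge is counted twice
```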
The image below lists some references to V.S. Subrahmanian's works on the topic.
Some references to V.S. Subrahmanian's works on subgraph matching.


Keynote by David Campbell (Microsoft) on Big Data Challenges at VLDB 2011, Seattle

This post is about the insights I got from the interesting keynote speech by David Campbell (Microsoft) on Big Data Challenges that was given on August 31st 2011 at VLDB 2011, Seattle.

The challenge of big data is not interesting just because of the "big" per se. It is a multi-faceted concept, and all its perspectives need to be considered.
The point is that this big data must be available on small devices and with a shorter time-to-concept or time-to-insight than in the past.
We can no longer afford the traditional paradigm in which the lifecycle is:

  • pose question
  • conceptual model
  • collect data
  • logical model
  • physical model
  • answer the question

The question lifecycle can be summarized by the graph below:

However, the current lead time of this cycle is too long (weeks or months). The true challenge is that we have much more data than we can model. The bottleneck is becoming the modeling phase, as shown below:

The correct cycle to adopt is the sensemaking cycle developed by Pirolli and Card in 2005 in the intelligence analysis community.
The notion is to have a frame that explains the data and, vice versa, data that supports the explanatory frame, in a continuous feedback and interdependent relationship (see the data-frame theory of sensemaking by Klein et al.).
So far this has been viable in modeled domains; big data expands it to unmodeled domains.
This requires enabling automatic model generation.
The other challenge is to guarantee that the new paradigm can encompass traditional data applications, so that it gets the best of both traditional data and big data.
A few patterns have been identified for big data:

  • Digital shoebox: retain all the ambient data to enable sensemaking. This is motivated by the cost of data acquisition and data storage going toward zero. The raw data is simply augmented with a sourceID and an instanceID and kept for future use or sensemaking (see the sketch after this list).
  • Information production: turn the data acquired in the digital shoebox into other events, states, and results, thus transforming raw data into information (which may still require subsequent processing). The results go back into the digital shoebox.
  • Model development: enable sensemaking directly over the digital shoebox, without extensive up-front modeling, so as to create knowledge. Simple visualizations often suffice for getting the big picture of a trend or a behaviour (e.g., home automation sensors can reveal the habits of a family).
  • Monitor, mine, manage: develop and use the generated models to perform active management or intervention. Models (or algorithms) are automatically generated so that they can be installed as a new system (e.g., think of fraud detection or other fields).
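As a small illustration of the digital-shoebox and information-production patterns described above (the sourceID and instanceID fields come from the talk; the storage format and the example events are my own assumptions):

```python
import json
import time
import uuid

def shoebox_record(source_id, raw_payload):
    """Wrap a raw reading with a sourceID and an instanceID, leaving the payload untouched."""
    return {
        "sourceID": source_id,             # which sensor or feed produced the data
        "instanceID": str(uuid.uuid4()),   # unique id of this observation
        "timestamp": time.time(),
        "raw": raw_payload,
    }

def append_to_shoebox(path, record):
    """Append-only storage: one JSON record per line, never modified afterwards."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Digital shoebox: keep the raw reading as-is.
reading = shoebox_record("kitchen-thermometer", {"celsius": 31.5})
append_to_shoebox("shoebox.jsonl", reading)

# Information production: derive an event and put it back into the shoebox.
if reading["raw"]["celsius"] > 30.0:
    event = shoebox_record("information-production",
                           {"event": "high-temperature", "from": reading["instanceID"]})
    append_to_shoebox("shoebox.jsonl", event)
```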

I think that these patterns can actually be seen more as new development phases than as patterns. Their application can significantly shorten the time-to-insight and is independent of the size of the data source.
On the other hand, I think this paradigm applies more to sensor data than to big data in general (e.g., data sources on the web), but it still has huge potential for personal information management, social networking data, and enterprise management.
