Call for good practices proven effective in the management and containment of the COVID-19 pandemic effects on economy, society, and healthcare

PERISCOPE (“Pan-European Response to the Impacts of COVID-19 and future Pandemics and Epidemics”) is a large-scale project that aims at mapping and analysing the impacts of the COVID-19 pandemic, developing solutions and guidance for policymakers and health authorities on how to mitigate the impact of the pandemic, and enhancing Europe’s preparedness for future similar events. We plan to promote science-based policies for the post-pandemic society, in a way that orients future recovery towards enhanced resilience and sustainability. PERISCOPE is funded by the European Union Horizon 2020 programme for research and innovation, for the period November 2020-October 2023.

In our three-year journey, we plan to continuously collect good practices and innovative solutions that have proven effective in the containment of the pandemic, in the protection of the economy and society, in the management and organisation of healthcare facilities, or in the mitigation of indirect effects of the restrictions adopted throughout Europe, including mental health and inequalities. From the reorganisation of hospitals to the use of technology in social distancing and contact tracing, to innovative modes of disbursing funds to citizens and businesses, we commit to keeping our eyes open to all successful applications or solutions that could potentially be emulated in other parts of Europe, or inspire socially beneficial innovation.

Give us a hint. We’ll do the rest

We are launching a call for good practices directed at public authorities, businesses, civil society, and academics from all over Europe and beyond, in order to identify solutions implemented during 2020 that proved useful and effective in achieving their intended objectives. We only ask respondents to provide a very short description, help us classify the good practices according to the categories specified below, and possibly be available for further clarifications in case we need additional information. We at PERISCOPE will do the rest: we will analyse the proposed practice, evaluate its transferability to other parts of the European territory, and identify good practices to be promoted throughout Europe.

The areas of interest in our collection of good practices include: education and training (for example, modes of distance learning, organising student rotations at school, training teachers on online tools, training healthcare professionals, etc.); use of digital technologies (e.g. contact-tracing apps, use of data from mobile operators or tech platforms, crowdsourcing solutions, use of Artificial Intelligence in testing and tracing, etc.); financial aid to citizens and businesses (direct payments, access to subsidies, rating resilience or sustainability of recipients of funds); reorganisation of hospital and intensive care facilities; transportation and logistics; and more.

The link to the online form is:

https://ec.europa.eu/eusurvey/runner/PERISCOPEgoodpractices

The first cut-off date for submitting good practices is December 31, 2020. After that date, we will compile a first report and publish it on our website, in the press, and in scientific articles. By contributing your valuable experience, you can help us learn and transfer practices that can save lives and improve individual well-being in Europe and beyond.

PERISCOPE: the EU project on socio-economic and behavioral impacts of the COVID-19 pandemic

Starting today, our team at the Data Science Lab Polimi will participate in the PERISCOPE European project.

PERISCOPE will investigate the broad socio-economic and behavioral impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks.

The European Commission approved PERISCOPE (PAN-EUROPEAN RESPONSE TO THE IMPACTS OF COVID-19 AND FUTURE PANDEMICS AND EPIDEMICS), a large-scale research project that brings together 32 European institutions and is coordinated by the University of Pavia. PERISCOPE is a Horizon 2020 research project that was funded with almost 10 million Euros under the Coronavirus Global Response initiative launched in May 2020 by the European Commission President Ursula von der Leyen.
The goal of PERISCOPE is to shed light on the broad socio-economic and behavioral impacts of COVID-19. A multidisciplinary consortium will bring together experts on all aspects of the current outbreak: clinical and epidemiological; socio-economic and political; statistical and technological.

The partners of the consortium will carry out theoretical and experimental research to contribute to a deeper understanding of the short- and long-term impacts of the pandemic and the measures adopted to contain it. Such research-intensive activities will allow the consortium to propose measures to prepare Europe for future pandemics and epidemics in a relatively short timeline.

The main goals of PERISCOPE are:

  • to gather data on the broad impacts of COVID-19 in order to develop a comprehensive, user-friendly, openly accessible COVID Atlas, which should become a reference tool for researchers and policymakers, and a dynamic source of information to disseminate to the general public;
  • to perform innovative statistical analysis on the collected data, with the help of various methods including machine learning tools;
  • to identify successful practices and approaches adopted at the local level, which could be scaled up at the pan-European level for a better containment of the pandemic and its related socio-economic impacts; and
  • to develop guidance for policymakers at all levels of government, in order to enhance Europe’s preparedness for future similar events and to propose reforms in the multi-level governance of health.

PERISCOPE started on 1 November 2020 and will last until 31 October 2023. You can reach the project members and follow our activities through these social media profiles:

Twitter: @PER1SCOPE_EU

LinkedIn: http://www.linkedin.com/company/periscopeproject/

Instagram: @periscope_project

Generation of Realistic Navigation Paths for Web Site Testing using RNNs and GANs

Weblogs capture the navigation activity generated by a given number of users on a website. This type of data is fundamental because it contains information on the behaviour of users and on how they interact with the company’s product itself (website or application). If a company could have a realistic weblog before the release of its product, it would gain a significant advantage, because it could analyse the log to identify the least navigated web pages or those to put in the foreground.

Producing sensible and useful log data requires a large audience of users and typically a long time frame, which makes it an expensive task.

To address this limit, we propose a method that focuses on the generation of REALISTIC NAVIGATIONAL PATHS, i.e., weblogs.

Our approach is highly relevant because it tackles the lack of publicly available data about web navigation logs, and at the same time it can be adopted in industry for the AUTOMATIC GENERATION OF REALISTIC TEST SETTINGS for websites yet to be deployed.

The generation has been implemented using deep learning methods for producing realistic navigation activities (a minimal illustrative sketch follows the list below), namely:

  • Recurrent Neural Networks (RNNs), which are very well suited to temporally evolving data;
  • Generative Adversarial Networks (GANs): neural networks aimed at generating new data, such as images or text, very similar to the original data and sometimes indistinguishable from it, which have become increasingly popular in recent years.
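To make the setup more concrete, here is a minimal sketch (in PyTorch, and not the implementation used in the paper) of a recurrent model that learns to predict the next page in a navigation session and can then sample synthetic sessions; the page vocabulary, toy sessions and hyperparameters are illustrative assumptions.

# Minimal sketch (not the paper's implementation): an LSTM that learns to
# predict the next page in a navigation session, then samples synthetic
# sessions. Page vocabulary, sizes and training data are toy assumptions.
import torch
import torch.nn as nn

PAGES = ["<s>", "home", "products", "cart", "checkout", "blog", "</s>"]
IDX = {p: i for i, p in enumerate(PAGES)}

class NextPageLSTM(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.emb(x), state)
        return self.out(h), state

# Toy training sessions (sequences of page visits).
sessions = [
    ["<s>", "home", "products", "cart", "checkout", "</s>"],
    ["<s>", "home", "blog", "home", "products", "</s>"],
]
data = [torch.tensor([IDX[p] for p in s]) for s in sessions]

model = NextPageLSTM(len(PAGES))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    for seq in data:
        x, y = seq[:-1].unsqueeze(0), seq[1:]   # teacher forcing: predict next page
        logits, _ = model(x)
        loss = loss_fn(logits.squeeze(0), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def sample(max_len=10):
    """Sample a synthetic navigation path from the trained model."""
    path, state = [IDX["<s>"]], None
    for _ in range(max_len):
        logits, state = model(torch.tensor([[path[-1]]]), state)
        nxt = torch.distributions.Categorical(logits=logits[0, -1]).sample().item()
        path.append(nxt)
        if PAGES[nxt] == "</s>":
            break
    return [PAGES[i] for i in path]

print(sample())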

We ran experiments using open weblog datasets for training, and we ran tests to assess the performance of the methods. Results in generating new weblog data are quite good, as reported in the summary table, with respect to the two evaluation metrics adopted (BLEU and human evaluation).


Comparison of the performance of the baseline statistical approach, RNN and GAN for generating realistic weblogs. Evaluation is done using human assessments and BLEU metrics.
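As a side note, a BLEU-style comparison between generated and reference sessions can be sketched with NLTK as below; the sequences are placeholders and not our experimental data.

# Hedged sketch: scoring a generated session against reference sessions
# with corpus BLEU (NLTK). The sessions shown are placeholders.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["home", "products", "cart", "checkout"],
               ["home", "blog", "products", "cart"]]]    # references for one hypothesis
hypotheses = [["home", "products", "cart", "checkout"]]  # generated session

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")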

 

Our study is described in detail in the paper published at ICWE 2020 (International Conference on Web Engineering) with DOI: 10.1007/978-3-030-50578-3. It is available online on the Springer website and can be cited as:

Pavanetto S., Brambilla M. (2020) Generation of Realistic Navigation Paths for Web Site Testing Using Recurrent Neural Networks and Generative Adversarial Neural Networks. In: Bielikova M., Mikkonen T., Pautasso C. (eds) Web Engineering. ICWE 2020. Lecture Notes in Computer Science, vol 12128. Springer, Cham

The slides are online too:

Together with a short presentation video:

 

Coronavirus stories and data

Coronavirus COVID-19 is an extreme challenge for our society, economy, and individual life. Governments should have learnt from each other: the impact has been spreading slowly across countries, so there has been plenty of time to take action. But apparently people and governments can’t grasp the risk until it is upon them, and the way European and American governments are acting is too slow and incremental.

I live in Italy, which ranks second in the world for healthcare quality. The mindset of “this won’t happen here” was the attitude at the beginning of this challenge, and look at what happened. I’m reporting here two links to articles that present a data-driven vision, but also the human, psychological and behavioural aspects involved. They are two simple stories that report the Italian perspective on the virus.

Coronavirus Stories From Italy

And why now is the time for YOU to worry, fellow Europeans and Americans

#Coronavirus: Updates from the Italian Front

A preview of what will happen in a week in the rest of the world. Things have dramatically changed in our society

Data Science for Business Innovation. Live courses for executives and managers in Italy and The Netherlands

Starting October 2019, we open a new opportunity for companies:

a 2-day hands-on course on data-driven innovation for executives and managers.

The course is specially designed for executives, managers, and decision-makers who need to grasp the foundations of data analysis in order to make informed decisions on data-driven business, innovation paths and strategies within the enterprise. It consists of keynotes, success stories, and quick introductory lectures spanning big data, machine learning, data valorization and communication. The course covers terminology and concepts, tools and methods, use cases and success stories of data science applications.

The course explains what value Data Science can create, what problems Data Science can solve, what the difference is between descriptive, predictive and prescriptive analytics, and what the roles of machine learning and artificial intelligence are.

The teaching style is very practical, with use cases, hands-on sessions, workgroup activities, and networking sessions for applying what you learn directly to real projects.

The live events will be:

If you are interested, you can visit the pages for the Italian [ITA] and English [ENG] editions respectively, and/or download the detailed brochures:

You can always get in touch to ask for more details.

Similar initiatives that we held in the past include the Urban Data Science Bootcamp, delivered in Milano and Amsterdam in 2017 (see a Medium story on the event here to understand the style and activities, although the examples reported there are specific to the smart city sector).

The event is also integrated with an online mini MOOC available on Coursera.

The course is offered by Politecnico di Milano in collaboration with Cefriel and EIT Digital.

 

Are open source projects governed by rich clubs?

The network of collaborations in an open source project can reveal relevant emergent properties that influence its prospects of success.

In our recent joint work with the Open University of Catalunya / ICREA, we analyze open source projects to determine whether they exhibit a rich-club behavior, that is, a phenomenon where contributors with a high number of collaborations (i.e., strongly connected within the collaboration network) are likely to cooperate with other well-connected individuals.

The presence or absence of a rich club has an impact on the sustainability and robustness of the project. In fact, if a member of the rich club leaves the project, it is easier for other members of the rich club to take over; with fewer collaborations, taking over would require more effort from more users.

The work has been presented at OpenSym 2019, the 15th International Symposium on Open Collaboration, in Skövde (Sweden), on August 20-22, 2019.

The full paper is available on the conference Web Site (or locally here), and the slides presenting our results are available on Slideshare:

For this analysis, we built and studied a dataset with the 100 most popular projects on GitHub, exploiting connectivity patterns in the graph structure of collaborations that arise from commits, issues and pull requests. Results show that rich-club behavior is present in all the projects, but only a few of them have an evident club structure.

For instance, the network of contributors for the Materialize project seems to go against the open source paradigm. The project is “owned” by very few users:

Established in 2014 by a team of 4 developers, at the time of the analysis the project featured 3,853 commits and 252 contributors. Nevertheless, it only has two top contributors (with more than 1,000 commits), who belong to the original team, and no other frequent contributors.

For all the projects, we compute coefficients both for single-source graphs and for the overall interaction graph, showing that rich-club behavior varies across different layers of software development. We provide possible explanations of our results, as well as implications for further analysis.
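For readers who want to experiment with the notion on their own data, a rich-club coefficient can be computed directly on a collaboration graph with networkx; the toy graph below is an illustrative assumption, not one of the analyzed GitHub projects, and the study’s actual pipeline is more elaborate.

# Minimal sketch: computing the rich-club coefficient of a collaboration
# graph with networkx. The toy graph is illustrative only.
import networkx as nx

# Undirected collaboration graph: an edge means two contributors worked
# on the same commits / issues / pull requests.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # densely connected core
    ("alice", "dave"), ("bob", "eve"), ("carol", "frank"),    # peripheral contributors
    ("dave", "grace"), ("eve", "heidi"),
])

# phi(k) is the density of connections among nodes of degree > k; a value
# that grows with k suggests a tightly knit "club" of top contributors.
# networkx also offers normalized=True, which compares against
# degree-preserving random graphs.
rc = nx.rich_club_coefficient(G, normalized=False)
for k, phi in sorted(rc.items()):
    print(f"degree > {k}: coefficient = {phi:.2f}")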

Data Science for Business Innovation. A new MOOC on Coursera

Breaking news!

We just published our new MOOC “Data Science for Business Innovation” on Coursera!

Our course is available for free on Coursera and is jointly offered by Politecnico di Milano and EIT Digital, as a compendium of the must-have expertise in data science for non-technical people, from executives to middle managers, to foster data-driven innovation.

The course is an introductory, non-technical overview of the concepts of data science.

You can enrol in the first edition of the course starting today.

The course is completely free and you can enjoy content at any time, with professional English speakers and animated, engaging materials.

Here is a short intro to the course:

The course consists of introductory lectures spanning big data, machine learning, data valorization and communication.
All the remaining details can be found on Coursera:


Topics cover the essential concepts and intuitions on data needs, data analysis, machine learning methods, respective pros and cons, and practical applicability issues. The course covers terminology and concepts, tools and methods, use cases and success stories of data science applications.

The course explains what Data Science is and why it is so hyped. It discusses the value that Data Science can create, the main classes of problems that Data Science can solve, the difference between descriptive, predictive and prescriptive analytics, and the roles of machine learning and artificial intelligence.

From a more technical perspective, the course covers supervised, unsupervised and semi-supervised methods, and explains what can be obtained with classification, clustering, and regression techniques. It discusses the role of NoSQL data models and technologies, and the role and impact of scalable cloud-based computation platforms.

All topics are covered with example-based lectures, discussing use cases, success stories and realistic examples.

If you are interested in these topics, feel free to look at it on Coursera.

We look forward to seeing you there!

Content-based Classification of Political Inclinations of Twitter Users

Social networks are huge continuous sources of information that can be used to analyze people’s behavior and thoughts.

Our goal is to extract such information and predict political inclinations of users.

In particular, we investigate the importance of syntactic features of the texts written by users when they post on social media. Our hypothesis is that people belonging to the same political party write in similar ways, and thus they can be classified properly on the basis of the words that they use.

We analyze tweets because Twitter is commonly used in Italy for discussing politics; moreover, it provides an official API that can be easily exploited for data extraction. We applied many classifiers to different kinds of features and NLP vectorization methods in order to find the approach that best confirms our hypothesis.
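As a rough illustration of this kind of content-based setup (not the exact features or models used in the paper), tweet texts can be vectorized with TF-IDF and fed to a linear classifier; the texts and party labels below are invented placeholders.

# Rough illustration of content-based classification (not the paper's exact
# setup): TF-IDF features plus a linear SVM predicting a party label from
# tweet text. Texts and labels are invented placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = [
    "tagli alle tasse e meno burocrazia per le imprese",
    "difendiamo la sanità pubblica e i diritti dei lavoratori",
    "flat tax subito, basta sprechi dello stato",
    "più investimenti in scuola e welfare per tutti",
]
parties = ["party_A", "party_B", "party_A", "party_B"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, parties)

print(clf.predict(["meno tasse per le famiglie"]))  # likely 'party_A' on this toy data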

To evaluate their accuracy, we selected as ground truth a set of current Italian deputies with consistent activity on Twitter, and we then predicted their political party. Using the results of our analysis, we also got interesting insights into current Italian politics. Here are the clusters of users:


Results in understanding political alignment are quite good, as reported in the confusion matrix here:

Our study is described in detail in the paper published at the IEEE Big Data 2018 conference and linked at:

DOI: 10.1109/BigData.2018.8622040

The article can be downloaded here, if you don’t have access to the IEEE library.

You can also look at the slides on SlideShare:

You can cite the paper as follows:

M. Di Giovanni, M. Brambilla, S. Ceri, F. Daniel and G. Ramponi, “Content-based Classification of Political Inclinations of Twitter Users,” 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 4321-4327.
doi: 10.1109/BigData.2018.8622040


News Sharing Behaviour on Twitter. A Dataset and a Pipeline

Online social media are changing the news industry and revolutionizing the traditional role of journalists and newspapers. In this scenario, investigating user behaviour in relation to news sharing is relevant, as it provides a means for understanding the impact of online news, their propagation within social communities, and their effect on the formation of opinions, as well as for effectively detecting individual stances relative to specific news or topics, and for understanding the role of journalism today.

Our contribution is two-fold.

First, we build a robust pipeline for collecting datasets describing news sharing; the pipeline takes as input a list of news sources and generates a large collection of articles, of the accounts that share them on social media either directly or by retweeting, and of the social activities performed by these accounts.
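As a simplified illustration of the idea behind such a pipeline (and not the actual NewsAnalyzer code), the snippet below takes a list of news domains and a stream of tweet records and groups shared article URLs with the accounts that posted or retweeted them; the tweet records are hypothetical placeholders.

# Simplified illustration (not the actual NewsAnalyzer pipeline): given a
# list of news domains and a stream of tweet records, group shared article
# URLs with the accounts that shared them. Tweet records are placeholders.
from collections import defaultdict
from urllib.parse import urlparse

NEWS_SOURCES = {"nytimes.com", "repubblica.it", "theguardian.com"}

def is_news_url(url: str) -> bool:
    """True if the URL points to one of the monitored news sources."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in NEWS_SOURCES)

def collect_sharing(tweets):
    """Map each shared article URL to the accounts that shared it."""
    sharing = defaultdict(set)
    for t in tweets:
        account = t["user"]
        # both direct posts and retweets count as sharing activity
        for url in t.get("urls", []):
            if is_news_url(url):
                sharing[url].add(account)
    return sharing

# Hypothetical tweet records, e.g. parsed from the Twitter API.
tweets = [
    {"user": "@alice", "urls": ["https://www.nytimes.com/2019/some-article"]},
    {"user": "@bob",   "urls": ["https://www.nytimes.com/2019/some-article"],
     "retweet_of": "@alice"},
    {"user": "@carol", "urls": ["https://example.com/not-news"]},
]

for url, accounts in collect_sharing(tweets).items():
    print(url, "shared by", sorted(accounts))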

The dataset is published on Harvard Dataverse:

https://doi.org/10.7910/DVN/5XRZLH

Second, we also provide a large-scale dataset that can be used to study the social behavior of Twitter users and their involvement in the dissemination of news items. Finally, we show an application of our data collection in the context of political stance classification, and we suggest other potential usages of the presented resources.

The code is published on GitHub:

https://github.com/DataSciencePolimi/NewsAnalyzer

The details of our approach are published in a paper at ICWSM 2019, accessible online.

You can cite the paper as:

Giovanni Brena, Marco Brambilla, Stefano Ceri, Marco Di Giovanni, Francesco Pierri, Giorgia Ramponi. News Sharing User Behaviour on Twitter: A Comprehensive Data Collection of News Articles and Social Interactions. AAAI ICWSM 2019, pp. 592-597.

Slides are on Slideshare:

You can also download a summary poster.


 

Brand Community Analysis using Graph Representation Learning on Social Networks – with a Fashion Case

In an increasingly connected world, new and complex interaction patterns can be extracted from the communication between people.

This is extremely valuable for brands, which can better understand the interests of users and the trends on social media in order to better target their products. In this paper, we analyze the communities that arise around commercial brands on social networks to understand the meaning of similarity, collaboration, and interaction among users.

We exploit the network that builds up around the brands by encoding it into a graph model. We build a social network graph, considering user nodes and friendship relations; then we compare it with a heterogeneous graph model, where posts and hashtags are also considered as nodes and connected to the different node types; we finally build a reduced network, generated by inducing direct user-to-user connections through the intermediate nodes (posts and hashtags). These different variants are encoded using graph representation learning, which generates a numerical vector for each node. Machine learning techniques are then applied to these vectors to extract valuable insights for each user and for the communities they belong to.
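To make the idea concrete, here is a small DeepWalk-style sketch of graph representation learning (random walks over the graph, embedded with word2vec); it is an illustrative stand-in for the method used in the study, and the toy user/hashtag graph is an assumption.

# DeepWalk-style sketch of graph representation learning: random walks over
# a small heterogeneous user/hashtag graph, embedded with word2vec. This is
# an illustrative stand-in, not the exact method or data of the study.
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy heterogeneous graph: users connected to the hashtags they post.
G = nx.Graph()
G.add_edges_from([
    ("user:anna", "tag:streetwear"), ("user:anna", "tag:sneakers"),
    ("user:luca", "tag:sneakers"),   ("user:luca", "tag:denim"),
    ("user:sara", "tag:hautecouture"), ("user:sara", "tag:runway"),
])

def random_walks(graph, num_walks=20, walk_len=8, seed=42):
    """Uniform random walks starting from every node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes():
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

# Treat each walk as a "sentence" and learn node embeddings with skip-gram.
model = Word2Vec(random_walks(G), vector_size=32, window=3,
                 min_count=0, sg=1, epochs=20, seed=42, workers=1)

# Nodes that co-occur in walks (i.e., similar communities) end up close in space.
print(model.wv.most_similar("user:anna", topn=3))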

We report on our experiments performed on an emerging fashion brand on Instagram, and we show that our approach is able to discriminate potential customers for the brand, and to highlight meaningful sub-communities composed of users that share the same kind of content on social networks.

The use case is taken from a joint research project with the Fashion in Process group in the Design Department of Politecnico di Milano, within the framework of FAST (Fashion Sensing Technology).

This study has been published as part of the proceedings of ACM SAC 2019, held in Cyprus.

Here is the slideset presenting the idea:

The paper can be referenced as:

Marco Brambilla, Mattia Gasparini: Brand Community Analysis On Social Networks Using Graph Representation Learning. ACM Symposium on Applied Computing (SAC) 2019, pp. 2060-2069.

The link to the officially published paper in the ACM Digital Library will be available shortly.