Driving Style and Behavior Analysis based on Trip Segmentation over GPS Information through Unsupervised Learning

Over one billion cars interact with each other on the road every day. Each driver has their own driving style, which can impact safety, fuel economy and road congestion. Knowledge of a driver's style could be used to encourage "better" driving behaviour through immediate feedback while driving, or by scaling auto insurance rates based on the aggressiveness of the driving style.
In this work we report on our study of driving behaviour profiling based on unsupervised data mining methods. The main goal is to detect different driving behaviours and, on that basis, to cluster drivers with similar profiles. This paves the way for new business models in the driving sector, such as Pay-How-You-Drive insurance policies and car rentals.

Driver behavioural characteristics are studied by collecting information from GPS sensors on the cars and by applying three different analysis approaches (DP-means, Hidden Markov Models, and Behavioural Topic Extraction) to the problem of contextual scene detection on car trips, i.e., detecting the different behaviours exhibited along each trip. Drivers are then clustered into similar profiles based on the detected behaviours, and the results are compared with a human-defined ground truth on driver classification.
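To give a feel for the segmentation step, below is a minimal DP-means sketch in Python. It assumes each trip has already been cut into fixed-length windows described by GPS-derived features; the feature choice, the lambda penalty value and the synthetic data are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """DP-means (Kulis & Jordan, 2012): k-means-style updates, but a new
    cluster is opened whenever a point lies farther than `lam`
    (squared Euclidean distance) from every existing centroid."""
    centroids = [X.mean(axis=0)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            dists = [np.sum((x - c) ** 2) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] > lam:                 # too far from every cluster: open a new one
                centroids.append(x.copy())
                labels[i] = len(centroids) - 1
            else:
                labels[i] = j
        # Recompute each centroid; keep the old one if its cluster became empty.
        centroids = [X[labels == k].mean(axis=0) if np.any(labels == k) else c
                     for k, c in enumerate(centroids)]
    return labels, np.array(centroids)

# Illustrative trip-segment features (one row per fixed-length GPS window),
# e.g. [mean speed, speed std, mean |acceleration|] -- synthetic placeholder data.
rng = np.random.default_rng(0)
segments = np.vstack([rng.normal([30.0, 2.0, 0.5], 1.0, size=(50, 3)),   # steady cruising
                      rng.normal([10.0, 6.0, 2.0], 1.0, size=(50, 3))])  # stop-and-go
labels, centroids = dp_means(segments, lam=25.0)
print(len(centroids), "behaviour clusters found")
```

Unlike k-means, DP-means does not fix the number of clusters in advance: the lambda penalty controls how readily new behaviour clusters are created.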

The proposed framework is tested on a real dataset containing sampled car signals. While the approaches show notable differences in how they classify trip segments, the final driver clusterings they produce are surprisingly coherent with one another.
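As a sketch of how driver-level results can be compared across approaches, the snippet below turns per-segment behaviour labels into per-driver histograms, clusters the drivers, and measures the agreement between two clusterings with the adjusted Rand index. The helper names, the toy label sequences, and the choice of k-means and ARI are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def driver_profiles(segment_labels_per_driver, n_behaviours):
    """Turn each driver's sequence of segment labels into a normalised
    histogram over behaviour clusters (the driver's profile)."""
    profiles = []
    for labels in segment_labels_per_driver:
        hist = np.bincount(labels, minlength=n_behaviours).astype(float)
        profiles.append(hist / hist.sum())
    return np.array(profiles)

# Hypothetical output of two segmentation approaches: one behaviour label per
# trip segment, for each of four drivers (placeholder values, not real data).
approach_a = [np.array([0, 0, 1, 0]), np.array([1, 1, 1, 0]),
              np.array([0, 0, 0, 0]), np.array([1, 1, 0, 1])]
approach_b = [np.array([2, 2, 0, 2]), np.array([0, 0, 0, 2]),
              np.array([2, 2, 2, 2]), np.array([0, 0, 2, 0])]

clusters_a = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    driver_profiles(approach_a, n_behaviours=2))
clusters_b = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    driver_profiles(approach_b, n_behaviours=3))

# Agreement between the two driver partitions (1.0 means identical clusterings).
print("adjusted Rand index:", adjusted_rand_score(clusters_a, clusters_b))
```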

This work was published at the 2017 IEEE International Conference on Big Data (IEEE Big Data 2017), held in Boston in December 2017. The full paper can be cited as:

M. Brambilla, P. Mascetti and A. Mauri, “Comparison of different driving style analysis approaches based on trip segmentation over GPS information,” 2017 IEEE International Conference on Big Data (Big Data), Boston, MA, 2017, pp. 3784-3791.
doi: 10.1109/BigData.2017.8258379

You can download the full paper PDF from the IEEE Xplore Digital Library at this URL:

https://ieeexplore.ieee.org/document/8258379/

If you are interested in further contributions from the conference, you can also read my summaries of the keynote speeches on human-in-the-loop machine learning and on increasing human perception through text mining.

Special session on Multimedia indexing for content-based search engines at IEEE CBMI 2009

I chaired the special session SS3, Multimedia indexing for content-based search engines, at CBMI 2009 (the 7th International Workshop on Content-Based Multimedia Indexing), held in Chania, Crete (Greece).

All the main scientific contributors of the Pharos project were represented in the session, together with selected invited speakers from other European projects. The event was a great opportunity for discussion, joint work and presentation of project outcomes.

The presented papers were:

  1. Petros Daras and Apostolos Axenopoulos. A Compact Multi-View Descriptor for 3D Object Retrieval
  2. Alessandro Bozzon, Marco Brambilla and Piero Fraternali. Model-Driven Design of Audiovisual Indexing Processes for Search-Based Applications
  3. Reede Ren and Joemon Jose. Query Generation From Multiple Multimedia Examples
  4. Azeddine Zidouni, Mohamed Quafafou and Herve Glotin. Structured Named Entity Retrieval in Audio Broadcast News
  5. Oliver Schreer, Ingo Feldmann, Isabel Alonso, Pedro Concejero, Abdul Sadka and Rafiq Swash. RUSHES – Retrieval of Multimedia Semantic Units for Enhanced Reusability
  6. Peter Dunker, Christian Dittmar, Andre Begau, Stefanie Nowak and Matthias Gruhne. Semantic High-Level Features for Automated Cross-Modal Slideshow Generation
  7. Georges Quenot, Tien Ping Tan, Viet Bac Le, Stephane Ayache, Laurent Besacier and Philippe Mulhem. Content-Based Search in Multi-Lingual Audiovisual Documents using the International Phonetic Alphabet
  8. Cyril Laurier, Owen Meyers, Joan Serra, Martin Blech and Perfecto Herrera. Music Mood Annotator Design and Integration
  9. Alan Smeaton and Sandra Rothwell. Biometric Responses to Music-Rich Segments in Films: The CDVPlex

Here are a few pictures of the presenters (and of the beautiful setting of the conference venue).