Modern User Interfaces (UIs) are becoming complex software artifacts in their own right, through the integration of AI-enhanced software components that enable ever more natural interactions, including the use of Natural Language Processing (NLP) via chatbots or voicebots (also known as Conversational User Interfaces, or CUIs).
Sometimes, several types of UIs are combined as part of the same application (e.g., a chatbot embedded in a web page), which is known as a Multiexperience User Interface. These multiexperience UIs may be built together using a Multiexperience Development Platform (MXDP).
“Multiexperience development involves ensuring a consistent user experience across web, mobile, wearable, conversational and immersive touchpoints.” [Gartner]
A typical scenario of multiexperience user interaction could unfold as follows (see image below too). Suppose that a customer on a Sunday morning wants to buy a new technical product (a cell phone or a home theater system). He first interacts with his home assistant (like Alexa or Google Assistant) to ask it to find the best nearby tech store open on Sunday. With this information in mind, he looks at the store website on his PC and, being satisfied with the kind of store, he asks the website chatbot to find the type of products he is looking for. After browsing the various alternatives, he finds one item he likes, and sets the place and the product as preferences on his mobile phone. He reads the details of the product on the phone while walking to his car. When he reaches the car, he transfers the information about the place to the car navigation system and drives there. Finally, in the store, he looks around, tries various items, reads the reviews about them on a dedicated mobile app, and finally picks up the product and pays for it.
This kind of dynamic and seamless interaction demands a variety of complex design and implementation mechanisms to be put in place. These CUIs also raise critical integration, evolution, and maintenance challenges. Developers need to handle the coordination of the cognitive services that build up multiexperience UIs, integrate them with external services, and worry about extensibility, scalability, and maintenance.
We believe a model-driven approach for MXDP could be an important first step towards facilitating the specification of rich UIs able to coordinate and collaborate to provide the best experience for end-users. Indeed, most non-trivial systems adhere to some kind of model-based philosophy, where software design models (including GUI models) are transformed into the production code the system executes at run-time. This transformation can be (semi)automated in some cases.
Our recent research tackles the application of model-driven techniques to the development of software applications embedding a multiexperience UI. First, we raise the abstraction level used in the definition of this new kind of conversational and smart interfaces. Second, we show how these CUI models can be used in conjunction with more “traditional” GUI models to combine the benefits of all these different types of interfaces in a multiexperience development project.
In practice, we propose a new Domain-Specific Language (DSL) that generalizes the one defined by the Xatkit model to cover all types of CUIs, and we show how it seamlessly integrates with appropriate extensions of the IFML model to design comprehensive multiexperience interfaces.
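To give a flavor of what raising the abstraction level means here, the sketch below models a conversational intent and binds it to a GUI view in the spirit of the approach. All names are hypothetical illustrations (this is not the actual syntax of the proposed DSL, nor of Xatkit or IFML):

```python
from dataclasses import dataclass

@dataclass
class Intent:
    # A user intention, recognized from example training utterances
    name: str
    training_sentences: list

@dataclass
class ViewComponent:
    # A front-end element in the IFML sense (e.g., a list of products)
    name: str

@dataclass
class MultiexperienceFlow:
    # Binds a recognized conversational intent to a navigation step
    # in the graphical UI, so chatbot and GUI stay coordinated
    intent: Intent
    target: ViewComponent

search = Intent("SearchProduct",
                ["find me a phone", "show home theater systems"])
results = ViewComponent("ProductListView")
flow = MultiexperienceFlow(search, results)
print(flow.intent.name, "->", flow.target.name)
```

The point of such models is that the same intent definition can drive a web chatbot, a voicebot, or any other conversational touchpoint, while the flow binding keeps it consistent with the graphical front end.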
You can refer to the full paper for all the details. The paper reference is:
Planas, E., Daniel, G., Brambilla, M., Cabot, J. Towards a model-driven approach for multiexperience AI-based user interfaces. Software and Systems Modeling (SoSyM) 20, 997–1009 (2021). https://doi.org/10.1007/s10270-021-00904-y
Weblogs record the navigation activity generated by a set of users on a given website. This type of data is fundamental because it captures how users behave and how they interact with the company’s product itself (website or application). If a company could obtain a realistic weblog before the release of its product, it would gain a significant advantage, because it could use the techniques explained above to spot the least navigated web pages or those to put in the foreground.
Producing sensible and useful log data requires a large audience of users and, typically, a long time frame, which makes it an expensive task.
To address this limitation, we propose a method that focuses on the generation of REALISTIC NAVIGATIONAL PATHS, i.e., weblogs.
Our approach is highly relevant because it tackles the lack of publicly available data about web navigation logs and, at the same time, can be adopted in industry for the AUTOMATIC GENERATION OF REALISTIC TEST SETTINGS for websites yet to be deployed.
The generation has been implemented using two deep learning methods for producing realistic navigation activities, namely:
Recurrent Neural Networks (RNNs), which are very well suited to temporally evolving data;
Generative Adversarial Networks (GANs), neural networks aimed at generating new data, such as images or text, very similar to the original ones and sometimes indistinguishable from them, which have become increasingly popular in recent years.
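To make the task concrete: a weblog can be seen as a set of page sequences, and a generator learns transition behavior from them. As a minimal sketch (not the paper’s neural models, but the kind of statistical baseline they are compared against), a first-order Markov chain over page transitions can already produce plausible paths. Page names and session data here are invented for illustration:

```python
import random
from collections import defaultdict

def train_markov(sessions):
    """Learn first-order page-transition counts from logged sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a synthetic navigation path from the learned transitions."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(length - 1):
        nxt = counts.get(path[-1])
        if not nxt:  # no outgoing transitions observed: end the session
            break
        pages, weights = zip(*nxt.items())
        path.append(rng.choices(pages, weights=weights)[0])
    return path

logs = [["home", "search", "product", "cart"],
        ["home", "search", "product", "product"],
        ["home", "product", "cart"]]
model = train_markov(logs)
print(generate(model, "home", 4))
```

The RNN and GAN approaches in the paper replace this memoryless sampling with learned representations of whole sessions, which is what allows them to capture longer-range navigation patterns.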
We ran experiments using open datasets of weblogs for training, and we ran tests to assess the performance of the methods. The results in generating new weblog data are quite good, as reported in the summary table, with respect to the two evaluation metrics adopted (BLEU and human evaluation).
Comparison of the performance of a baseline statistical approach, RNN, and GAN for generating realistic weblogs. Evaluation is done using human assessments and the BLEU metric.
Our study is described in detail in the paper published at ICWE 2020 – International Conference on Web Engineering, with DOI 10.1007/978-3-030-50578-3. It is available online on the Springer website and can be cited as:
Pavanetto S., Brambilla M. (2020) Generation of Realistic Navigation Paths for Web Site Testing Using Recurrent Neural Networks and Generative Adversarial Neural Networks. In: Bielikova M., Mikkonen T., Pautasso C. (eds) Web Engineering. ICWE 2020. Lecture Notes in Computer Science, vol 12128. Springer, Cham
The network of collaborations in an open source project can reveal relevant emergent properties that influence its prospects of success.
In our recent joint work with the Open University of Catalonia / ICREA, we analyze open source projects to determine whether they exhibit a rich-club behavior, that is, a phenomenon where contributors with a high number of collaborations (i.e., strongly connected within the collaboration network) are likely to cooperate with other well-connected individuals.
The presence or absence of a rich club has an impact on the sustainability and robustness of the project. If a member of the rich club leaves the project, it is easier for the other members of the rich club to take over; with fewer collaborations, the replacement would require more effort from more users.
For this analysis, we build and study a dataset with the 100 most popular projects in GitHub, exploiting connectivity patterns in the graph structure of collaborations that arise from commits, issues and pull requests. Results show that rich-club behavior is present in all the projects, but only few of them have an evident club structure.
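The rich-club coefficient underlying this kind of analysis has a simple definition: for a threshold k, it is the edge density of the subgraph induced by the nodes with degree greater than k. The sketch below computes the raw coefficient on a toy collaboration graph with invented contributor names (actual studies additionally normalize it against degree-preserving random graphs before claiming rich-club behavior):

```python
def rich_club_coefficient(edges, k):
    """phi(k) = density of the subgraph induced by nodes of degree > k.
    edges: set of undirected pairs (u, v), no duplicates or self-loops."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    rich = {n for n, d in degree.items() if d > k}
    n = len(rich)
    if n < 2:
        return 0.0
    internal = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * internal / (n * (n - 1))

# Toy collaboration graph: a, b, c form a densely connected core,
# each with one peripheral collaborator
edges = {("a", "b"), ("a", "c"), ("b", "c"),
         ("a", "d"), ("b", "e"), ("c", "f")}
print(rich_club_coefficient(edges, 2))  # core {a,b,c} fully connected -> 1.0
```

In the study, such graphs are built per interaction layer (commits, issues, pull requests) and for the merged interaction graph, which is why the coefficient can differ across layers.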
For instance, this network of contributors for the Materialize project seems to go against the open source paradigm. The project is “owned” by very few users:
Established in 2014 by a team of 4 developers, at the time of the analysis it featured 3,853 commits and 252 contributors. Nevertheless, the project only has two top contributors (with more than 1,000 commits), which belong to the original team, and no other frequent contributors.
For all the projects, we compute coefficients both for single source graphs and the overall interaction graph, showing that rich-club behavior varies across different layers of software development. We provide possible explanations of our results, as well as implications for further analysis.
A long time ago, in the past century, the international database research community used to meet to assess new research directions, opening the meetings with two-minute gong shows where everyone stated their opinion and influenced the follow-up discussion. Bruce Lindsay from IBM had just delivered his message:
There are 3 important things in data management: performance, performance, performance.
Stefano Ceri had a chance to speak out immediately after and to give a syntactically similar but semantically orthogonal message:
There are 3 important things in data management: modeling, modeling, modeling.
Data management is continuously evolving for serving the needs of an increasingly connected society. New challenges apply not only to systems and technology, but also to the models and abstractions for capturing new application requirements.
In our retrospective paper, we describe several models and abstractions which have been progressively designed to capture new forms of data-centered interactions in the last twenty-five years, a period of huge changes due to the spread of web-based applications and the increasingly relevant role of social interactions.
We initially focus on Web-based applications for individuals and then discuss applications among enterprises; this part is all about WebML and IFML. We then discuss how these applications may include rankings computed using services or crowds, which relates to our work on crowdsourcing (the Liquid Query and CrowdSearcher tools). We conclude with hints at recent research on how social sources can be used for capturing emerging knowledge (the Social Knowledge Extractor perspective and tooling).
It’s also true that model-driven engineering is not necessarily the tool of choice for this to happen. Why? As technicians, we always tend to blame the customer for not understanding our product. But maybe we should look into ourselves and the kind of tools (conceptual and technical) the MDE community is offering. I’m pretty sure we could find plenty of room for improvement.
Jordi Cabot, Robert Clarisó, Marco Brambilla and Sébastien Gerard submitted a visionary paper on Cognifying Model-driven Software Development to the workshop GrandMDE (Grand Challenges in Modeling), co-located with STAF 2017 in Marburg (Germany), on July 17, 2017. The paper advocates for the cross-domain fertilization of disciplines such as machine learning and artificial intelligence, behavioural analytics, social studies, cognitive science, crowdsourcing and many more, in order to help model-driven software development. But actually, what is cognification?
Cognification is the application of knowledge to boost the performance and impact of any process.
The thesis of our paper is that cognification will also revolutionize the way software is built. In particular, we discuss the opportunities and challenges of cognifying Model-Driven Software Engineering (MDSE or MDE) tasks.
MDE has seen limited adoption in the software development industry, probably because the perception from developers’ and managers’ perspective is that its benefits do not outweigh its costs.
We believe cognification could drastically improve the benefits and reduce the costs of adopting MDSE, and thus boost its adoption.
At the practical level, cognification comprises tools that range from artificial intelligence (machine learning, deep learning) to human cognitive capabilities, exploited through online activities, crowdsourcing, gamification and so on.
Opportunities (and challenges) for MDE
Here is a set of MDSE tasks and tools whose benefits can be especially boosted thanks to cognification.
A modeling bot playing the role of virtual assistant in the modeling tasks
A model inferencer able to deduce a common schema behind a set of unstructured data coming from the software process
A code generator able to learn the style and best practices of a company
A real-time model reviewer able to give continuous quality feedback
A morphing modeling tool, able to adapt its interface at run-time
A semantic reasoning platform able to map modeled concepts to existing ontologies
A data fusion engine that is able to perform semantic integration and impact analysis of design-time models with runtime data
A tool for collaboration between domain experts and modeling designers
Obviously, we are aware that some research initiatives aiming at cognifying specific tasks in Software Engineering exist (including some activities of ours). But what we claim here is a change in magnitude of their coverage, integration, and impact in the short-term future.
If you want to get a more detailed description, you can go through the detailed post by Jordi Cabot that reports the whole content of the paper.
This is the summary of a joint contribution with Eric Umuhoza to ICEIS 2017 on Model-driven Development of User Interfaces for IoT via Domain-specific Components & Patterns.
Internet of Things technologies and applications are evolving and continuously gaining traction in all fields and environments, including homes, cities, services, industry and commercial enterprises. However, still many problems need to be addressed.
For instance, the IoT vision is mainly focused on the technological and infrastructure aspects, and on the management and analysis of the huge amount of generated data, while the development of front ends and user interfaces for IoT has so far not played a relevant role in research.
On the contrary, we believe that user interfaces in the IoT ecosystem can play a key role in the acceptance of solutions by final adopters.
In this paper we present a model-driven approach to the design of IoT interfaces: we define a specific visual design language and design patterns for IoT applications, and we show them at work. The language we propose is defined as an extension of the OMG standard language IFML.
The slides of this talk are available online on Slideshare as usual:
I’m really proud to announce that our paper “Pattern-Based Specification of Crowdsourcing Applications” has received the BEST PAPER award at ICWE 2014 (International Conference on Web Engineering), held in Toulouse in July 2014. The paper was authored by Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Andrea Mauri, and Riccardo Volonterio.
The work addresses the fact that in many crowd-based applications, the interaction with performers is decomposed into several tasks that, collectively, produce the desired results.
A number of emerging crowd-based applications cover very different scenarios, including opinion mining, multimedia data annotation, localised information gathering, marketing campaigns, expert response gathering, and so on.
In most of these scenarios, applications can be decomposed into tasks that collectively produce their results; task interactions give rise to arbitrarily complex workflows.
In this paper we propose methods and tools for designing crowd-based workflows as interacting tasks.
We describe the modelling concepts that are useful in such framework, including typical workflow patterns, whose function is to decompose a cognitively complex task into simple interacting tasks so that the complex task is co-operatively solved.
We then discuss how workflows and patterns are managed by CrowdSearcher, a system for designing, deploying and monitoring applications on top of crowd-based systems, including social networks and crowdsourcing platforms. Tasks performed by humans consist of simple operations which apply to homogeneous objects; the complexity of aggregating and interpreting task results is embodied within the framework. We show our approach at work on a validation scenario and we report quantitative findings, which highlight the effect of workflow design on the final results.
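One of the framework’s responsibilities mentioned above is aggregating the results of simple tasks performed redundantly by several workers. As a minimal sketch of that idea (not CrowdSearcher’s actual API; all names and data are invented for illustration), a majority-vote aggregation step could look like this, with low-agreement objects fed back into a follow-up task, as in the workflow patterns discussed in the paper:

```python
from collections import Counter

def aggregate_majority(answers, min_agreement=2):
    """Aggregate redundant worker answers per object by majority vote.
    answers: {object_id: [worker answers]}. Objects without enough
    agreement are returned separately for re-issuing in another task."""
    accepted, reissue = {}, []
    for obj, votes in answers.items():
        label, count = Counter(votes).most_common(1)[0]
        if count >= min_agreement:
            accepted[obj] = label
        else:
            reissue.append(obj)  # feeds a follow-up round of the workflow
    return accepted, reissue

answers = {
    "img1": ["cat", "cat", "dog"],   # agreement reached
    "img2": ["car", "bus", "bike"],  # no agreement: re-issue the task
}
accepted, reissue = aggregate_majority(answers)
print(accepted, reissue)  # {'img1': 'cat'} ['img2']
```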
Here are the slides presented by Alessandro Bozzon during the ICWE conference:
This year, ICWE – International Conference on Web Engineering, took place in Toulouse, France.
Given the upcoming adoption of IFML by the OMG – Object Management Group, I decided to give a tutorial on it there. The Interaction Flow Modeling Language (IFML) is designed for expressing the content, user interaction and control behaviour of the front end of software applications, as well as the binding to the persistence and business logic layers. IFML is the missing piece for modeling the front end of software applications and perfectly complements other modeling dimensions in broad system modeling projects. Therefore, IFML works best when integrated with other modeling languages in the MDA suite, such as UML and BPMN. The tutorial illustrates the basic concepts of IFML, presents design best practices and the integration with other modelling languages, and discusses some industrial experiences (also featuring quantitative measures of productivity) achieved with the companion tool WebRatio. At the end of the tutorial, attendees will have a general knowledge of IFML (they will be able to design simple models and to derive models from existing interfaces), will be able to relate front-end design to system modelling at large, will see the associated MDE tool WebRatio at work, and will get a glimpse of real-life industrial applications developed for large enterprises. This will let them appreciate the advantages of a model-driven development approach within large-scale industrial projects.