Modern User Interfaces (UIs) are becoming complex software artifacts in their own right, through the integration of AI-enhanced components that enable ever more natural interactions, including the use of Natural Language Processing (NLP) via chatbots or voicebots (a.k.a. Conversational User Interfaces, or CUIs).
Sometimes, several types of UIs are combined as part of the same application (e.g., a chatbot in a web page), which is known as a Multiexperience User Interface. These multiexperience UIs may be built together by using a Multiexperience Development Platform (MXDP).
“Multiexperience development involves ensuring a consistent user experience across web, mobile, wearable, conversational and immersive touchpoints”. [Gartner]
A typical scenario of multiexperience user interaction could unfold as follows (see also the image below). Suppose that on a Sunday morning a customer wants to buy a new technical product (a cell phone or a home theater system). He first interacts with his home assistant (such as Alexa or Google Assistant) and asks it to find the best nearby tech store open on Sunday. With this information in mind, he looks at the store's web site on his PC and, satisfied with the kind of store, asks the web site's chatbot to find the type of products he is looking for. After browsing the various alternatives, he finds one item he likes and sets the place and the product as preferences on his mobile phone. He reads the details of the product on the phone while walking to his car. When he reaches the car, he transfers the information about the place to the car's navigation system and drives there. Finally, in the store he looks around, tries various items, reads the reviews about them on a dedicated mobile app, and in the end picks up the product and pays for it.
This kind of dynamic and seamless interaction demands a variety of complex design and implementation mechanisms to be put in place. These CUIs also raise very critical integration, evolution, and maintenance challenges. Developers need to handle the coordination of the cognitive services used to build multiexperience UIs, integrate them with external services, and worry about extensibility, scalability, and maintenance.
We believe a model-driven approach for MXDP could be an important first step towards facilitating the specification of rich UIs able to coordinate and collaborate to provide the best experience for end-users. Indeed, most non-trivial systems adhere to some kind of model-based philosophy, where software design models (including GUI models) are transformed into the production code the system executes at run-time. This transformation can be (semi)automated in some cases.
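As a trivial illustration of such a model-to-code transformation, here is a toy Python sketch; the GUI model, its fields, and the HTML output format are invented for the example and do not come from any specific platform.

```python
# A toy model-to-text transformation: a design-time GUI model is
# rendered into the markup the application will actually serve.
gui_model = {
    "form": "SearchProduct",
    "fields": [{"name": "keyword", "type": "text"},
               {"name": "maxPrice", "type": "number"}],
}

def generate_form(model):
    """Emit production code (here, plain HTML) from the GUI model."""
    inputs = "\n".join(
        f'  <input name="{f["name"]}" type="{f["type"]}">'
        for f in model["fields"])
    return f'<form id="{model["form"]}">\n{inputs}\n</form>'

print(generate_form(gui_model))
```

Real MDE toolchains use richer metamodels and template engines, but the principle is the same: the model is the source of truth, and the code is derived from it.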
Our recent research tackles the application of model-driven techniques to the development of software applications embedding a multiexperience UI. First, we raise the abstraction level used in the definition of this new kind of conversational and smart interfaces. Second, we show how these CUI models can be used in conjunction with more “traditional” GUI models to combine the benefits of all these different types of interfaces in a multiexperience development project.
In practice, we propose a new Domain-Specific Language (DSL) that generalizes the one defined by Xatkit to cover all types of CUIs, and we show how it seamlessly integrates with appropriate extensions of the IFML model to design comprehensive multiexperience interfaces.
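To give a flavor of what such CUI models capture, here is a minimal Python sketch of the core abstractions; the class and event names are hypothetical illustrations, not the actual syntax of our DSL or of Xatkit (see the paper for those).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    """A user intention, recognized from example training sentences."""
    name: str
    training_sentences: List[str]

@dataclass
class Transition:
    """Links an event (a matched intent or a GUI action) to the next state."""
    event: str
    target_state: str

@dataclass
class State:
    """A conversation state: the bot's reaction plus outgoing transitions."""
    name: str
    body: str                                  # what the bot says on entry
    transitions: List[Transition] = field(default_factory=list)

# A tiny store-assistant fragment. Note that the same target state can be
# reached from a conversational event or from a GUI event, which is where
# the integration with IFML-style GUI models comes into play.
find_product = Intent("FindProduct", ["I am looking for a phone",
                                      "show me home theater systems"])
awaiting = State("AwaitingRequest", "How can I help you?", [
    Transition(event="FindProduct", target_state="ShowResults"),
    Transition(event="GUI.SearchButtonClicked", target_state="ShowResults"),
])
show_results = State("ShowResults", "Here is what I found.")
```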
You can refer to the full paper for all the details. The paper reference is:
Planas, E., Daniel, G., Brambilla, M., Cabot, J.: Towards a model-driven approach for multiexperience AI-based user interfaces. Software and Systems Modeling (SoSyM) 20, 997–1009 (2021). https://doi.org/10.1007/s10270-021-00904-y
In the context of Domain-Specific Modeling Language (DSML) development, the involvement of end-users is crucial to ensure that the resulting language satisfies their needs.
In our paper presented at SLE 2017 in Vancouver, Canada, on October 24th within the SPLASH conference context, we discuss how crowdsourcing tasks can be exploited to assist in domain-specific language definition processes. This is in line with the vision towards the cognification of model-driven engineering.
The slides are available on SlideShare:
Indeed, crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational and information collection tasks. In language design, by relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption.
We propose a systematic (and automatic) method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users’ needs. We also report on an experiment validating the approach.
Improving the quality of the language notation may dramatically improve its acceptance and adoption, as well as the way people use the notation and the associated tools.
Essentially, our idea is to send the crowd a bunch of questions regarding the concrete syntax of visual modeling languages and collect opinions. Based on different strategies, we generate an optimal notation and then check how good it is.
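As a rough illustration of the mechanics, here is a minimal Python sketch; the candidate notations, the pairwise question format, and the simple majority strategy are our own simplifications, not the exact steps or ordering strategies defined in the paper.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: for each language concept, the candidate notations
# (e.g., alternative shapes) we want the crowd to compare.
candidates = {
    "Task":  ["rounded_rectangle", "plain_rectangle"],
    "Event": ["thin_circle", "thick_circle", "double_circle"],
}

def generate_questions(candidates):
    """One pairwise preference question per pair of alternatives per concept."""
    for concept, options in candidates.items():
        for a, b in combinations(options, 2):
            yield {"concept": concept, "options": (a, b)}

def pick_notation(answers):
    """Majority strategy: for each concept, keep the most-voted alternative.
    `answers` is a list of (concept, chosen_option) pairs from the crowd."""
    tally = {}
    for concept, choice in answers:
        tally.setdefault(concept, Counter())[choice] += 1
    return {concept: counts.most_common(1)[0][0]
            for concept, counts in tally.items()}

for question in generate_questions(candidates):
    print(question)          # each of these becomes a crowd task

answers = [("Task", "rounded_rectangle"),
           ("Task", "rounded_rectangle"),
           ("Task", "plain_rectangle")]
print(pick_notation(answers))  # {'Task': 'rounded_rectangle'}
```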
In the paper we also validate the approach and experiment with it in a practical use case, namely by studying some variations of the BPMN modeling language.
For centuries, science (in German “Wissenschaft”) has aimed to create (“schaffen”) new knowledge (“Wissen”) from the observation of physical phenomena, their modelling, and empirical validation.
Recently, a new source of knowledge has emerged: not (only) the physical world any more, but the virtual world, namely the Web with its ever-growing stream of data materialized in the form of social network chatter, content produced on demand by crowds of people, and messages exchanged among interlinked devices in the Internet of Things. The knowledge we may find there can be dispersed, informal, contradictory, unsubstantiated and ephemeral today, while already tomorrow it may be commonly accepted.
The challenge is once again to capture and create consolidated knowledge that is new, has not been formalized yet in existing knowledge bases, and is buried inside a big, moving target (the live stream of online data).
The myth is that existing tools (spanning fields like the semantic web, machine learning, statistics, NLP, and so on) suffice for the objective. While this may still be far from true, some existing approaches are actually addressing the problem and provide preliminary insights into the possibilities that successful attempts may lead to.
I gave a few keynote speeches on this matter (at ICEIS, KDWEB, …), and I also use this argument as a motivating class in academic courses to let students understand how crucial it is to focus on the problems related to big data modeling and analysis. The talk, reported in the slides below, explores, through real industrial use cases, the mixed realistic-utopian domain of data analysis and knowledge extraction, and reports on some tools and cases where the digital and physical worlds have been brought together for a better understanding of our society.
A long time ago, in the past century, the International DB Research Community used to meet to assess new research directions, starting the meetings with 2-minute gong shows where each participant would state their opinion and influence the follow-up discussion. Bruce Lindsay from IBM had just delivered his message:
There are 3 important things in data management: performance, performance, performance.
Stefano Ceri had a chance to speak out immediately after and to give a syntactically similar but semantically orthogonal message:
There are 3 important things in data management: modeling, modeling, modeling.
Data management is continuously evolving for serving the needs of an increasingly connected society. New challenges apply not only to systems and technology, but also to the models and abstractions for capturing new application requirements.
In our retrospective paper, we describe several models and abstractions which have been progressively designed to capture new forms of data-centered interactions over the last twenty-five years – a period of huge changes due to the spreading of web-based applications and the increasingly relevant role of social interactions.
We initially focus on Web-based applications for individuals, then discuss applications among enterprises; this part is all about WebML and IFML. We then discuss how these applications may include rankings which are computed using services or crowds, which relates to our work on crowdsourcing (the Liquid Query and CrowdSearcher tools). We conclude with hints at recent research discussing how social sources can be used for capturing emerging knowledge (the social knowledge extractor perspective and tooling).
It’s also true that model-driven engineering is not necessarily the tool of choice for this to happen. Why? As technicians, we always tend to blame the customer for not understanding our product. But maybe we should look at ourselves and at the kind of tools (conceptual and technical) the MDE community is offering. I’m pretty sure we could find plenty of room for improvement.
Jordi Cabot, Robert Clarisó, Marco Brambilla and Sébastien Gerard submitted a visionary paper on Cognifying Model-driven Software Development to the GrandMDE workshop (Grand Challenges in Modeling), co-located with STAF 2017 in Marburg (Germany) on July 17, 2017. The paper advocates the cross-domain fertilization of disciplines such as machine learning and artificial intelligence, behavioural analytics, social studies, cognitive science, crowdsourcing and many more, in order to help model-driven software development. But actually, what is cognification?
Cognification is the application of knowledge to boost the performance and impact of any process.
The thesis of our paper is that cognification will also revolutionize the way software is built. In particular, we discuss the opportunities and challenges of cognifying Model-Driven Software Engineering (MDSE or MDE) tasks.
MDE has seen limited adoption in the software development industry, probably because developers and managers perceive that its benefits do not outweigh its costs.
We believe cognification could drastically improve the benefits and reduce the costs of adopting MDSE, and thus boost its adoption.
At the practical level, cognification comprises tools that range from artificial intelligence (machine learning, deep learning) to human cognitive capabilities, exploited through online activities, crowdsourcing, gamification, and so on.
Opportunities (and challenges) for MDE
Here is a set of MDSE tasks and tools whose benefits can be especially boosted thanks to cognification.
A modeling bot playing the role of virtual assistant in the modeling tasks
A model inferencer able to deduce a common schema behind a set of unstructured data coming from the software process (a toy sketch of this idea follows the list)
A code generator able to learn the style and best practices of a company
A real-time model reviewer able to give continuous quality feedback
A morphing modeling tool, able to adapt its interface at run-time
A semantic reasoning platform able to map modeled concepts to existing ontologies
A data fusion engine that is able to perform semantic integration and impact analysis of design-time models with runtime data
A tool for collaboration between domain experts and modeling designers
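To make one of these concrete, here is a deliberately naive Python sketch of the model inferencer idea: deducing a tentative schema (fields, types, optionality) from a set of unstructured records. A real cognified inferencer would rely on learning techniques rather than this simple type tally, and the record fields below are invented for the example.

```python
def infer_schema(records):
    """Collect every observed field with its value types and optionality."""
    schema = {}
    for record in records:
        for key, value in record.items():
            schema.setdefault(key, set()).add(type(value).__name__)
    total = len(records)
    return {key: {"types": sorted(types),
                  "optional": sum(key in r for r in records) < total}
            for key, types in schema.items()}

records = [
    {"id": 1, "name": "sensor-a", "value": 21.5},
    {"id": 2, "name": "sensor-b"},
]
print(infer_schema(records))
# {'id': {'types': ['int'], 'optional': False},
#  'name': {'types': ['str'], 'optional': False},
#  'value': {'types': ['float'], 'optional': True}}
```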
Obviously, we are aware that some research initiatives aiming at cognifying specific tasks in Software Engineering exist (including some activities of ours). But what we claim here is a change in magnitude of their coverage, integration, and impact in the short-term future.
If you want to get a more detailed description, you can go through the detailed post by Jordi Cabot that reports the whole content of the paper.
This is the summary of a joint contribution with Eric Umuhoza to ICEIS 2017 on Model-driven Development of User Interfaces for IoT via Domain-specific Components & Patterns.
Internet of Things technologies and applications are evolving and continuously gaining traction in all fields and environments, including homes, cities, services, industry and commercial enterprises. However, many problems still need to be addressed.
For instance, the IoT vision is mainly focused on the technological and infrastructural aspects, and on the management and analysis of the huge amount of generated data, while so far the development of front-ends and user interfaces for IoT has not played a relevant role in research.
On the contrary, we believe that user interfaces in the IoT ecosystem can play a key role in the acceptance of solutions by final adopters.
In this paper we present a model-driven approach to the design of IoT interfaces, defining a specific visual design language and design patterns for IoT applications, and we show them at work. The language we propose is defined as an extension of IFML, the OMG standard Interaction Flow Modeling Language.
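To sketch the spirit of such an extension, the fragment below models an IoT-specific view component and an event raised by a device rather than by a user. This is an illustrative Python rendering with hypothetical names, not the actual concepts or notation defined in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewComponent:
    """Core IFML concept: a unit of the interface that displays content."""
    name: str

@dataclass
class SensorReadingComponent(ViewComponent):
    """IoT extension: a component bound to a stream of device readings."""
    device_id: str = ""
    unit: str = ""

@dataclass
class IoTEvent:
    """IoT extension of IFML events: raised by a device, not by the user."""
    name: str
    source_device: str
    target: Optional[ViewComponent] = None   # component refreshed by the event

temperature_view = SensorReadingComponent("LivingRoomTemp",
                                          device_id="thermo-01", unit="°C")
new_reading = IoTEvent("NewTemperatureReading", source_device="thermo-01",
                       target=temperature_view)
```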
The slides of this talk are available online on SlideShare as usual:
This year I’m co-organizing with Davide Rossi and a bunch of experts in Business Process Management and Enterprise Architecture a new event called BPM-EA, which aims at bringing together the broad topics of business processes, modeling, and enterprise architecture.
These disciplines are quickly evolving and intertwining with each other, and are often referred to with the broad term of business modeling.
I believe there is a strong need for exploring new paths of improvement, integration and consolidation of these disciplines.
If you are interested in participating and contributing, we seek contributions in the areas of enterprise and systems architecture and modeling, multilevel model tracing and alignment, model transformation, and IT & business alignment (both in terms of modeling and goals), tackling both technical (languages, systems, patterns, tools) and social (collaboration, human-in-the-loop) issues.
The deadline for submitting a paper is September 15, 2016.
You can find the complete call and further details on the event website:
This year we decided to be present at ICWSM 2016 in Cologne, with two contributions that basically blend model-driven software engineering and big data analysis, to provide value to users and citizens both in terms of high-quality software and added-value information provision.
Studying Multicultural Diversity of Cities and Neighborhoods through Social Media Language Detection, presented at the CityLab workshop at ICWSM 2016. The focus of this work is to study cities as melting pots of people with different cultures, religions, and languages. Through multilingual analysis of Twitter content shared within a city, we analyze the prevalent language in the different neighborhoods of the city and compare the results with census data, in order to highlight any parallels or discrepancies between the two data sources. We show that the officially identified neighborhoods actually represent significantly different communities, and that using social media as a data source helps detect those weak signals that are not captured by traditional data. Slides here:
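As a rough sketch of the core analysis step, the snippet below tallies the detected language of tweets per neighborhood. It assumes tweets have already been geo-matched to neighborhoods, uses the off-the-shelf langdetect library, and the data is invented; the actual pipeline, data volumes, and thresholds in the paper differ.

```python
from collections import Counter
from langdetect import detect  # pip install langdetect

# Hypothetical geo-tagged tweets already mapped to a neighborhood.
tweets = [
    {"neighborhood": "Loreto",  "text": "Ci vediamo stasera in piazza!"},
    {"neighborhood": "Loreto",  "text": "Best kebab in town, hands down."},
    {"neighborhood": "Navigli", "text": "Aperitivo sul naviglio grande."},
]

def languages_by_neighborhood(tweets):
    """Tally the detected language of each tweet per neighborhood."""
    stats = {}
    for tweet in tweets:
        lang = detect(tweet["text"])       # e.g., 'it', 'en'
        stats.setdefault(tweet["neighborhood"], Counter())[lang] += 1
    return stats

for hood, counts in languages_by_neighborhood(tweets).items():
    prevalent, _ = counts.most_common(1)[0]
    print(hood, dict(counts), "-> prevalent:", prevalent)
```

The per-neighborhood language distributions can then be compared against census data to surface the parallels and discrepancies discussed above.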
We now continuously look for new datasets and computational challenges. Feel free to ask or to propose ideas!
As model-driven engineering practitioners, we sometimes encounter weird modelling notations for the languages we use… and this is also definitely true for modelling language adopters!
We always end up wondering who could ever come up with such terrible syntax for a language, even for very well-established notations (including, for instance, some pieces of UML or BPMN). I take it for granted this is a common experience (raise your hand if not).
This led to the idea that syntax definition should also be a more collaborative task. Therefore, we decided to give it a try and test whether crowdsourcing techniques can be used to create and validate language constructs, in particular their concrete syntax (i.e., notation).
As part of our research work in this area, together with Jordi Cabot’s group, we have set up, as an experiment, a crowdsourcing campaign using our tool CrowdSearcher.
This boils down to a very simple case: we are asking anyone on the web to look into a very small subset of BPMN and to participate in 3 simple tasks, including questions for selecting the best notation for some of the BPMN concepts (it won’t take more than 3 minutes).
We ask people to help us by responding to these quick questions. Three notes before you start:
1. We don’t care if you are the world’s expert in BPMN or if you have never heard about it. We want you!
2. We ask you to register before taking the task (just click on the Register button once you enter the task), simply to make sure we only have one performance per person. All the analysis will run on anonymous data.
3. The results of the survey will be made publicly available in the following months.
To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my twitter account (@MarcoBrambi).