Starting June 2022, our book “Model Driven Software Engineering in Practice” (co-authored with Jordi Cabot and Manuel Wimmer) is also available via Springer. This means the price is actually lower, and if you are affiliated with an academic institution, you may even have free access to the book through your institutional subscription. Check it here.
Together with the book, we provided free bonus material, including over 500 slides on MDE ready to use for classes, and all the book examples in the book’s GitHub repository.
Modern User Interfaces (UIs) are becoming complex software artifacts themselves, through the integration of AI-enhanced software components that enable even more natural interactions, including the possibility to use Natural Language Processing (NLP) via chatbots or voicebots (a.k.a. Conversational User Interfaces, or CUIs).
Sometimes, several types of UI are combined in the same application (e.g., a chatbot embedded in a web page), which is known as a Multiexperience User Interface. These multiexperience UIs can be built together using a Multiexperience Development Platform (MXDP).
“Multiexperience development involves ensuring a consistent user experience across web, mobile, wearable, conversational and immersive touchpoints”. [Gartner]
A typical scenario of multiexperience user interaction could unfold as follows (see the image below). Suppose a customer, on a Sunday morning, wants to buy a new technical product (a cell phone or a home theater system). He first interacts with his home assistant (such as Alexa or Google Assistant) and asks it to find the best nearby tech store open on Sunday. With this information in mind, he looks at the store’s web site on his PC and, being satisfied with the kind of store, asks the web site’s chatbot to find the type of product he is looking for. After browsing the various alternatives, he finds an item he likes and saves the place and the product as preferences on his mobile phone. He reads the details of the product on the phone while walking to his car. When he reaches the car, he transfers the information about the place to the car navigation system and drives there. Finally, in the store he looks around, tries various items, reads the reviews about them on a dedicated mobile app, and eventually picks up the product and pays for it.
This kind of dynamic and seamless interaction demands a variety of complex design and implementation mechanisms. Very critical integration, evolution, and maintenance challenges also need to be faced for these CUIs. Developers need to handle the coordination of the cognitive services used to build multiexperience UIs, integrate them with external services, and worry about extensibility, scalability, and maintenance.
We believe a model-driven approach for MXDP could be an important first step towards facilitating the specification of rich UIs able to coordinate and collaborate to provide the best experience for end-users. Indeed, most non-trivial systems adhere to some kind of model-based philosophy, where software design models (including GUI models) are transformed into the production code the system executes at run-time. This transformation can be (semi)automated in some cases.
Our recent research tackles the application of model-driven techniques to the development of software applications embedding a multiexperience UI.
The research has been published in our paper Towards a Model-Driven Approach for Multiexperience AI-based User Interfaces, co-authored by Elena Planas, Gwendal Daniel, Marco Brambilla and Jordi Cabot, which recently appeared in the International Journal on Software and Systems Modeling (SoSyM) and is available online here (open access).
The contribution of the paper is twofold:
we raise the abstraction level used in the definition of these new kinds of conversational and smart interfaces;
we show how these CUI models can be used in conjunction with more “traditional” GUI models to combine the benefits of all these different types of interfaces in a multiexperience development project.
In practice, we propose a new Domain-Specific Language (DSL) that generalizes the one defined by Xatkit to cover all types of CUIs, and we show how it seamlessly integrates with appropriate extensions of the IFML model to design comprehensive multiexperience interfaces.
IFML model integrating traditional navigation of a web interface and a chatbot component.
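To give a concrete flavour of the idea, here is a minimal sketch, in plain Python, of how a conversational intent could be modelled and linked to the same navigation target that the GUI part of the model points to. All class, intent, and view names are hypothetical illustrations: this is not the paper’s DSL nor the actual Xatkit API.

```python
# Hypothetical, simplified illustration of a CUI model: intents with
# training sentences and parameters, plus transitions to UI targets.
# This is NOT the paper's DSL nor the Xatkit API, just a sketch.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Intent:
    """A user intention recognised by the conversational front end."""
    name: str
    training_sentences: List[str]
    parameters: Dict[str, str] = field(default_factory=dict)  # name -> entity type


@dataclass
class Transition:
    """Links a recognised intent to a target shared with the GUI model."""
    intent: Intent
    target_view: str  # e.g. a view container the web navigation also reaches
    action: str       # e.g. a service call both UIs can invoke


# Example: a product-search intent that leads to the same "ProductList"
# view a traditional web navigation flow would reach.
search_products = Intent(
    name="SearchProducts",
    training_sentences=["show me phones under 300 euros",
                        "I am looking for a home theater system"],
    parameters={"category": "ProductCategory", "maxPrice": "Number"},
)

to_product_list = Transition(
    intent=search_products,
    target_view="ProductList",
    action="CatalogService.search",
)

print(f"{to_product_list.intent.name} -> {to_product_list.target_view}")
```

The point is simply that the conversational part (intents) and the graphical part (views and actions) refer to shared abstractions, which is what the combined CUI and IFML models make explicit.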
You can refer to the full paper here for the details. The paper reference is:
Planas, E., Daniel, G., Brambilla, M., Cabot, J. Towards a model-driven approach for multiexperience AI-based user interfaces. Software and Systems Modeling (SoSyM) 20, 997–1009 (2021). https://doi.org/10.1007/s10270-021-00904-y
In the context of Domain-Specific Modeling Language (DSML) development, the involvement of end-users is crucial to ensure that the resulting language satisfies their needs.
In our paper presented at SLE 2017 in Vancouver, Canada, on October 24th within the SPLASH conference context, we discuss how crowdsourcing tasks can be exploited to assist in domain-specific language definition processes. This is in line with the vision towards the cognification of model-driven engineering.
The slides are available on slideshare:
Indeed, crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational and information collection tasks. In language design, by relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption.
Ready to accept improper use of your tools?
We propose a systematic (and automatic) method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users’ needs. We also report on an experiment validating the approach.
Improving the quality of the language notation may dramatically improve acceptance and adoption, as well as the way people use your notation and the associated tools.
Essentially, our idea is to submit to the crowd a set of questions regarding the concrete syntax of visual modeling languages and collect opinions. Based on different strategies, we generate an optimal notation and then check how good it is.
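As a rough illustration of the aggregation step, the following sketch picks, for each concrete-syntax question, the alternative preferred by most crowd workers. The data and function names are made up for this example and do not reproduce the actual campaign format or the strategies compared in the paper.

```python
# Minimal sketch of aggregating crowd answers on notation alternatives.
# Data and names are illustrative only, not the actual campaign format.
from collections import Counter
from typing import Dict, List

# Each question asks the crowd to choose among alternative symbols for a
# language concept; answers map question id -> list of chosen alternatives.
crowd_answers: Dict[str, List[str]] = {
    "task_shape":   ["rounded_rect", "rounded_rect", "circle", "rounded_rect"],
    "gateway_icon": ["diamond", "diamond", "hexagon"],
    "flow_arrow":   ["solid", "dashed", "solid", "solid"],
}


def majority_notation(answers: Dict[str, List[str]]) -> Dict[str, str]:
    """Pick, per question, the alternative with the most votes."""
    return {q: Counter(votes).most_common(1)[0][0] for q, votes in answers.items()}


print(majority_notation(crowd_answers))
# {'task_shape': 'rounded_rect', 'gateway_icon': 'diamond', 'flow_arrow': 'solid'}
```

Plain majority voting is just the simplest conceivable baseline to convey the idea; the generation strategies explored in the paper go beyond it.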
In the paper we also validate the approach and experiment with it in a practical use case, namely by studying some variations of the BPMN modeling language.
The full paper can be found here: https://dl.acm.org/citation.cfm?doid=3136014.3136033. The paper is titled: “Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages” and was co-authored by Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, and Andrea Mauri.
You can also access the Web version on Jordi Cabot’s blog.
The artifacts described in this paper are also referenced on findresearch.org, namely referring to the following materials:
This is the summary of a joint contribution with Eric Umuhoza to ICEIS 2017 on Model-driven Development of User Interfaces for IoT via Domain-specific Components & Patterns.
Internet of Things technologies and applications are evolving and continuously gaining traction in all fields and environments, including homes, cities, services, industry and commercial enterprises. However, many problems still need to be addressed.
For instance, the IoT vision is mainly focused on the technological and infrastructure aspects, and on the management and analysis of the huge amount of generated data, while so far the development of front ends and user interfaces for IoT has not played a relevant role in research.
On the contrary, we believe that user interfaces in the IoT ecosystem can play a key role in the acceptance of solutions by final adopters.
In this paper we present a model-driven approach to the design of IoT interfaces, by defining a specific visual design language and design patterns for IoT applications, and we show them at work. The language we propose is defined as an extension of the OMG standard language called IFML.
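As an illustration of what a domain-specific UI component for IoT could capture, here is a small Python sketch of a “sensor display” pattern bound to a device data source. The structures and names are invented for this post and do not reproduce the IFML metamodel or the extension defined in the paper.

```python
# Illustrative sketch (not the actual IFML metamodel or our extension) of a
# domain-specific UI component for IoT: a view component bound to a device
# data source, reusable as a pattern across applications.
from dataclasses import dataclass


@dataclass
class DeviceDataSource:
    """Abstraction of an IoT data feed the UI component reads from."""
    device_id: str
    metric: str   # e.g. "temperature"
    unit: str     # e.g. "°C"


@dataclass
class SensorDisplayComponent:
    """A reusable 'show latest sensor readings' UI pattern."""
    name: str
    source: DeviceDataSource
    refresh_seconds: int = 30


living_room = SensorDisplayComponent(
    name="LivingRoomTemperature",
    source=DeviceDataSource(device_id="thermo-42", metric="temperature", unit="°C"),
    refresh_seconds=10,
)

print(f"{living_room.name}: {living_room.source.metric} every {living_room.refresh_seconds}s")
```

The idea is that such components become reusable building blocks (patterns) that the visual language can instantiate for different devices and metrics.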
The slides of this talk are available online on Slideshare as usual:
While this is definitely not a statistically significant benchmark, I think it offers a significant insight into the field and into how we ourselves (MDE practitioners and researchers) see it.
Basically, there is absolutely no agreement or common understanding!
On the question of whether MDE is a sound engineering discipline, one third of respondents said yes, one third said no, and one third were not sure. A perfectly even distribution!
In summary, if you don’t count uncertainty, here is what we collected:
This year I’m involved in the program committee of the Foundations track of ECMFA.
ECMFA 2016 is the 12th European Conference on Modelling Foundations and Applications and is co-located with STAF 2016, on 4-8 July, 2016, in Vienna, Austria. Here are some core excerpts from the call for papers, which could be of interest for software modelling practitioners.
The ECMFA conference series is dedicated to advancing the state of knowledge and fostering the industrial application of Model-Based Engineering (MBE, an approach to the design, analysis and development of software and systems based on high-level models and computer-based automation). Its focus is on engaging the key figures of research and industry in a dialog which will result in stronger and more effective practical application of MBE, hence producing more reliable software based on state-of-the-art research results.
ECMFA 2016 will be co-located with ICMT, TAP, SEFM, ICGT and TTC as part of the STAF federation of conferences, leading conferences on software technologies (http://stafconferences.info). The joint organization of these prominent conferences provides a unique opportunity to gather practitioners and researchers interested in all aspects of software technology, and allows them to interact with each other.
ECMFA has two distinct Paper Tracks: one for research papers (Track F) dealing with the foundations of MBE, and one for industrial/applications papers (Track A) dealing with the applications of MBE, including experience reports on MBE tools.
Research Papers (Track F)
In this track, we are soliciting papers presenting original research on all aspects of MBE. Typical topics of interest include, among others:
Foundations of (Meta)modelling
Domain Specific Modelling Languages and Language Workbenches
Model Reasoning, Testing and Validation
Model Transformation, Code Generation and Reverse Engineering
Model Execution and Simulation
Model Management aspects such as (Co-)Evolution, Consistency, Synchronization
Model-Based Engineering Environments and Tool Chains
Foundations of Requirements Modelling, Architecture Modelling, Platform Modelling
Foundations of Quality Aspects and Modelling non-functional System Properties
Scalability of MBE techniques
Collaborative Modeling
Industrial Papers (Track A)
In this track, we are soliciting papers representing views, innovations and experiences of industrial players in applying or supporting MBE. In particular, we are looking for papers that set requirements on the foundations, methods, and tools for MBE. We are also seeking experience reports or case studies on the application, successes or current shortcomings of MBE. Quantitative results reflecting industrial experience are particularly appreciated. All application areas of MBE are welcomed, including but not limited to any of the following:
MBE for Large and Complex Industrial Systems
MBE for Safety-Critical Systems
MBE for Cyber-Physical Systems
MBE for Software and Business Process Modelling
MBE Applications in Transportation, Health Care, Cloud & Mobile Computing, etc.
Model-Based Integration and Simulation
Model-Based System Analysis
Application of Modeling Standards
Comparative Studies of MBE Methods and Tools
Metrics for MBE Development
MBE Training
Research papers should be up to 16 pages long; Industrial papers should be 12 pages long (full papers) or 2 pages long (short papers). Short papers will be given shorter presentation slots.
The authors of selected best papers from the foundations track will be invited to submit extended versions to a special issue of the SoSyM journal (with another review process).
Important dates for authors:
Abstract submission deadline: February 15, 2016 AoE
Papers submission deadline: March 1, 2016 AoE
Notification to authors: April 7, 2016
Camera ready versions due: April 28, 2016
The complete call for papers is available here in text and here as pdf.
To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my twitter account (@MarcoBrambi).
With Aldo Bongio (WebRatio), Jordi Cabot (ICREA and UOC), Hamza Ed-douibi (EMN) and Eric Umuhoza (Politecnico di Milano), we worked on research on Automatic Code Generation for Cross-platform, Multi-Device Mobile Apps.
We presented our work at the MobileDeLi workshop, where we reported on a comparative study conducted to identify the best trade-offs among different automatic code generation strategies.
Here are the slides presented there:
We covered the following strategies, implementing them using different technologies and target platforms (a toy sketch contrasting two of them follows the list):
PIM-to-Native Code (NC)
PIM-to-PSM-to-NC
PSM-to-NC
PIM-to-Cross Platform Code (CPC)
PIM-to-Framework Specific Model (FSM)-to-CPC
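To make the comparison more tangible, here is a toy sketch contrasting a PIM-to-PSM-to-Native-Code pipeline with direct PIM-to-Cross-Platform-Code generation. The models, transformation steps and emitted stubs are deliberately fake and hypothetical; they are not the generators we implemented in the study.

```python
# Toy illustration of two of the compared strategies (names and emitted
# stubs are hypothetical, not the actual generators from the study).
from dataclasses import dataclass
from typing import List


@dataclass
class PIM:
    """Platform-independent model: just an app name and a list of screens."""
    app: str
    screens: List[str]


@dataclass
class PSM:
    """Platform-specific model refining the PIM for one target platform."""
    app: str
    platform: str
    views: List[str]


def pim_to_psm(pim: PIM, platform: str) -> PSM:
    # Refine the PIM with platform-specific view names.
    return PSM(pim.app, platform, [f"{platform}:{s}View" for s in pim.screens])


def psm_to_native_code(psm: PSM) -> str:
    # Emit a (fake) native source stub per view.
    return "\n".join(f"// {psm.platform} class {v}" for v in psm.views)


def pim_to_cross_platform_code(pim: PIM) -> str:
    # Emit a single (fake) cross-platform stub reused on every target.
    return "\n".join(f"<view name='{s}'/>" for s in pim.screens)


model = PIM(app="Store", screens=["Home", "ProductList"])
print(psm_to_native_code(pim_to_psm(model, "android")))   # PIM -> PSM -> NC
print(pim_to_cross_platform_code(model))                  # PIM -> CPC
```

This roughly mirrors the trade-off investigated in the study: the PSM step adds platform-specific detail (and effort), while the cross-platform route trades that detail for portability.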
Some additional details are available in this post by Eric on Jordi’s blog.
Our study showed that no approach is better than the others in absolute terms, but it provided useful guidelines (e.g., cross-platform approaches are generally advisable for companies with limited resources) that helped us identify the best strategy for WebRatio in particular.
Obviously, further investigations are ongoing…
I’m glad to share the video of the most recent webinar on the WebRatio BPM Platform, the BPMN-based tool designed to support you in building high-end BPM web and mobile apps with a tailored user experience. If you have never experienced the WebRatio BPM Platform, here is a summary of what you can do with it:
DEVELOP WEB AND MOBILE APPS through prototypes, then change them as many times as you need. No more time wasted building mockups on paper.
NO VENDOR LOCK IN thanks to highly optimized generated code that is open, human readable and based on the most recent Java and JS frameworks.
DEFINE A CUSTOM WEB OR MOBILE FRONT END for your BPM App and create a customized user interface, giving every channel a different user experience.
SUPPORT YOUR USERS’ MOBILITY thanks to the mobile BPM capabilities that let you work on your BPM App on any device, desktop or mobile, and deliver a seamless user experience.
Discover more on the WebRatio site or watch the video of the webinar on YouTube:
So far, no proposals have been submitted, but Sparx Systems and HP have declared their interest and intent to submit.
The main controversy related to the RFP, and subsequently to the proposals, concerns the role and positioning of a UML profile with respect to the actual ArchiMate standard.
The deadline for proposals and for participating in the voting expires on May 18 (in a week!).
To keep updated on my activities you can subscribe to the RSS feed of my blog or follow my twitter account (@MarcoBrambi).
Here is a short clip of the interview regarding the Interaction Flow Modeling Language (IFML), recorded in March 2015 on the occasion of the release of IFML 1.0.
In the interview, we discuss with Richard Soley the relevance of user interaction modelling, the way it can be integrated with broader modelling projects, and the impact it has on the overall design effort of software systems. Emanuele Molteni also discusses some success stories in the application of IFML in large-scale industrial projects in the US, by means of the WebRatio tool.