
From MBSE models to generated documents

This article concerns document generation from MBSE models. We explain why and how to use the document generation capability provided by modelling tools to get the most out of your Systems Engineering models.

We start by explaining why generating documents from your models is useful. This can seem obvious, but it is not. Next, we present the generic mechanisms behind document generation, regardless of the language, method and tool used for your model. Then we suggest a three-step method to generate a document. Finally, to make things concrete, we illustrate document generation on an identical system model formalized with two different languages and tools, to show that we can reach the same results independently of the tool used. For this practical demonstration we have chosen two combinations of languages and tools widely known and used by the MBSE community:

  • the Systems Modelling Language (SysML) in combination with the Cameo Systems Modeler tool,
  • the Arcadia Language in combination with the Capella tool.

This will also demonstrate that the document generation can be done almost independently of the method chosen to model your system.

Why generate documents?

We want to use models but we still need documents…

First, let us imagine a world where everyone would be trained to read and write models just as we learn to speak natural languages. We could assume that, at some point in time, models would replace most of the documents exchanged to define, build, verify and validate a complex system.

However, in the current industrial reality, this situation is not likely to occur for decades, if at all. Admittedly, within the system definition team, we can assume that team members work with models and consider models as engineering artefacts, from which they are able to derive functions, requirements, components, interfaces, parameters, etc., and from which the team (the same one or a different team) can perform verification and validation. But systems engineering is far more than the isolated work of one system definition team. There are many exchanges with other teams, for example to:

  • Capture the problem space (needs and constraints), discuss requirements, external interfaces and architecture with customers and intended users,
  • Request advice, feasibility and assessment analysis on parts or the totality of the envisioned solution from domain specialists (mechanical, thermal, structure, control, safety…)
  • Specify the definition of a subsystem or a component for a different team to realize it, whether in the same company or outside of it (partners and subcontractors)
  • Request verification & validation of the product, at different engineering levels, from internal or external teams
  • Capture the needs and constraints to industrialize and manufacture the solution
  • Capture requirements from purchase/supply and legal departments when creating engagements that cross the boundaries of the company

These other teams do not always understand the modelling language used by the system definition team, and they do not always need access to the whole model. Hence, it is key to be able to extract a consistent set of model information and share it in a more commonly known language. This is where the document comes back on the scene.

Note: Here we use the term “document” in an inclusive manner to refer to different types of static media, including text, tables, drawings, etc.

For many years to come, systems engineers will need to export parts of their model in document formats to allow people not trained, or not trained enough, in the system model notation (SysML, Arcadia or any other formalism) to understand and analyze the information contained in the model. The goal should be for the model to remain the central repository, the source of truth for all the actors of the project, with exported views adapted to the specific needs of each actor or team involved in the project.

Once we are aware that we need to share documents with other teams (and sometimes within the same team), the central question becomes:

“Shall we build those documents manually or generate them from our model?”

Just as the system model can be the hub for various analysis tools, it can be the hub for your documents, ensuring consistency between the documents relating to the system.

Is it really a good idea to generate documents from a model?

This may sound like a silly question, because generating documents seems obviously useful and time-saving. But document generation takes time to set up, and it requires significant effort to become familiar with the document generation approach and the associated tools. For a project with strict deadlines and a limited budget, investing in document generation is not an obvious choice.

In the short term, if the project has short and strict deadlines for the first deliveries, and if the question of generating documents has not been anticipated, the answer is quite simple: “Do not try to generate your documents from your model”. Creating a document (architecture, interface, specification, verification plan, …) will always be faster by hand the first time. You start from a template or from an existing document that stems from another project, then you copy and paste some diagrams, you extract some requirements from your requirements database, and you complete it with some explanations and drawings. It takes time to arrange the information, time to review the document and to check the completeness and global consistency, but you know you will succeed in writing this document because you know what information to include and where to find it. If you have only just discovered document generation, you have no idea of the time you will need to set up everything required for the document generation to work, so it adds a lot of stress if you are short on time.

After a few iterations, as your system definition evolves, you will probably need to update your document (architecture, interfaces, …). You may realize that it becomes tedious to identify the parts of the document that must be updated and to perform those modifications, especially if those modifications have already been made in the system definition model. In this case, you have two repositories that share the same information: the model and the document. So now you spend time reflecting any change in both places, with most of the effort spent laying out the information, and with risks of inconsistencies between the two repositories. This time is not spent on engineering, but on synchronization.

After a few iterations it becomes very clear to everyone on the team that this double update is expensive and does not bring any value. This is where a high risk appears: giving priority to the document…

and losing control of the model.

At Samares Engineering we have a lot of experience in using models. We know that when the project is operational, with strict deadlines to meet, the pressure on the system team regularly rises to deliver documents, because those documents are contractual. Priority is given to their delivery, and when time is in short supply, some changes are reflected directly and only in the documents. The synchronization between the model and the documents is not maintained. The consequence is simple: the model becomes progressively obsolete and loses most of its value because it no longer reflects the current problem or solution space. The reference is now the document and no longer the model. This is the end of the value of the model: a good example of failure in MBSE. And there will be people who ask you why you spent so much time on building a model…

When looking at the long term, generating documents from the model is the only option.

You know that you cannot afford to maintain the same information in two repositories. If you really want to use models to support systems engineering, then your documents must be derived from the model, which requires some automation: generate the document(s) from the model. Now you know, and you can prepare your project team 😉

Which documents can we generate?

From our experience, the main kinds of documents generated from a system model concern the system architecture: list of functions, functional breakdown, functional architecture, components, Product Breakdown Structure, interfaces, functional behavior, state machines (automata).

We also often find the description of needs and expected functionalities: use cases, context diagram, external interfaces, scenarios, some state machines, and the generation of Interface Control Documents, either as text (Word documents) or as Excel sheets.

Note: we can also use the document generation capabilities to extract views not defined in the model, as long as the information is included in the model. For instance, we can build tables that relate use cases and requirements with test procedures described as sequences or state machines, to give the project manager metrics on traceability.

Some tools, like Cameo Systems Modeler, have the capability to compare two versions of a model and to generate the differences between them in a document. In this way, document generation can be a tool for analyzing the evolution of the models between 2 versions.

Finally, any information stored anywhere in the model can be put into the generated document. The customization of templates makes it possible to adapt the generated document to the specific needs of your company, project or team.

Examples of documents that may be useful to generate at different stages in the project life, represented on an example of the V-cycle.

How does generating documents work?

The ability to generate documents is not part of the modeling language itself; it is provided by the tool used for the modeling. Typically, a template is written, containing queries that specify where in the model some specific information can be found, and how this information should be presented in the document. This template, in combination with a model, is then processed by a document generation tool in order to produce the generated document. As mentioned in the introduction, we will focus on the modelling tools Cameo Systems Modeler and Capella, which each use a different document generation tool.

Illustration of the principle of generating documents from a model.

The principle, as illustrated in the figure above, is the same no matter the tool used. The template and the model must be constructed respecting the same metamodel and the same modeling rules. This is the only way to ensure that the correct information can be found in the correct place in the generated document. For languages like Arcadia/Capella, that have fewer customization options, the same template can often be used across several different projects with minimal adaptations. For languages like SysML, that are extensible by nature, and where different extensions (profiles) may exist within the same company in order to address different specific cases, the templates may require more rework in order to be reused. However, if this is known in advance, it is possible to make the template compatible with all the different extensions (profiles) from the get-go. This requires more effort in the beginning, but less maintenance than keeping several templates.

All this depends on the specific needs of your company and the different teams within your company.

Using Cameo Systems Modeler for document generation

In Cameo Systems Modeler, the document generation tool is called Report Wizard. It is a technology developed specifically for Cameo Systems Modeler and comes natively with this tool. Report Wizard is based on Velocity, a Java-based template engine. It requires templates to be written in Velocity Template Language (VTL) (Velocity User Guide).

To generate a document from a model using the Report Wizard, it is enough to open the Report Wizard from the model that the document should be generated from and select the desired template. Just like any other wizard, it guides the user through the steps of the document generation. Cameo Systems Modeler comes with a fairly wide selection of templates, though in order to obtain a document that is truly useful, custom templates will be necessary. However, these existing templates do give good starting points for developing custom templates.

The Velocity Template Language is written in plain text directly in the template document, where the formatting applied to the template queries (code) reflects the formatting in the finalized generation.

In VTL, each variable is prefaced with “$”, and each command line to be executed starts with “#”. Any lines of text not prefaced with “#” will be reproduced in the generated document, though any variables (starting with “$”) will be replaced with their value.

In addition to the basic queries and operations native to VTL, some helper modules and special variables have been developed specifically for use with Cameo Systems Modeler, that make some of the information in the model much easier to access. For instance, the variable $elements is a list of every single element that exists in the scope of the model selected for generation, and the helper module $report makes it possible to obtain a filtered list of elements.
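To make this concrete, here is a minimal, hedged VTL sketch (the humanType test is our assumption of a convenient way to filter by element type; it is not taken from a specific official template):

```
## Iterate over every element in the generation scope ($elements is provided
## by the Report Wizard) and keep only the requirements.
#foreach ($element in $elements)
#if ($element.humanType == "Requirement")
$element.name
#end
#end
```

Because the body of the #foreach is plain document text, the same pattern can produce list items, table rows or headings, depending on where it is placed in the Word template.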

For example, if your model looks like this:

Using this template:

Will give this result:

VTL can be used with many different formats, including Word, Excel, PowerPoint and HTML. In this article, we only focus on the generation of Word documents.

Using Capella with M2Doc for document generation

Capella does not have a natively integrated document generation tool. However, since M2Doc (M2Doc reference documentation version 3.1.1) is a technology developed to work with any Eclipse Modeling Framework (EMF) -based model and Capella is based on EMF, the two are compatible. The M2Doc plugin must be installed separately, as it is not natively delivered with the Capella installation. M2Doc requires a Generation Configuration file (.genconf) to serve as the “glue” between the template file and the model. Wizards exist to guide the user in creating the Generation Configuration file and the Template file. Only one template example for use with Capella is available, developed based on the In-Flight Entertainment system (IFE) example by OBEO. This template is not likely to be useful for a company as-is, but it provides a good starting point for developing a custom template.

The template language used with M2Doc is the Acceleo Query Language (AQL) (AQL documentation), which in turn is based on the Object Constraint Language (OCL) (Object Constraint Language Specification Version 2.4 – omg.org). There are no Capella-specific helper modules or variables, since M2Doc and AQL are generic to all EMF-based models, but Capella provides an interpreter view that makes it possible to see the result of any AQL query immediately, which is very helpful when writing custom templates.

Example of the Interpreter view in Capella used to see the result of an AQL query.

In AQL, the variable name “self” is used to refer to the current object. When using the Interpreter view, “self” refers to the currently selected element, and it changes dynamically as you click on different elements. When writing the template, this works differently. You need to declare a variable in the template properties (it is possible to declare several) that will serve as the starting point for exploring the model. It is generally recommended to name this variable “self” and to set it to the System Engineering element. This element is the top-level root element, just below the .aird, and it is obligatory in all Capella models.

M2Doc currently only targets Word documents. The AQL queries are included in code fields within the Word template, and the formatting applied to the queries reflects the formatting of the result. Each code field starts with “m:” to signal that it should be interpreted by M2Doc. Standard Word fields, such as the ones used to calculate the page number or the figure number, remain compatible and can be used just like in any regular Word document.
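As a hedged sketch (assuming the eAllContents() AQL service and the la:: prefix for the Logical Architecture metamodel, as used in the M2Doc documentation for Capella), the code fields for listing logical functions could look like this:

```
{m:for function | self.eAllContents(la::LogicalFunction)}
{m:function.name}
{m:endfor}
```

Each {m:...} construct is a Word field; the content between the for and endfor fields is repeated once per logical function found below self.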

As an example, if your Capella model looks like this:

Example of a Capella model, where the template (Word) file, the Generation Configuration file and the System Engineering element are indicated.

And your template (Word) document looks like this:

Example of M2Doc code. The variable “self” indicates the System Engineering element in the model. This code collects and lists all the Logical Functions that exist anywhere below the System Engineering element.

Then the result of the generation will look like this:

The result of the generation of the above template applied to the above model.

Presentation of the Use Case: UAV for Agriculture

To show off the capabilities of both modelling tools, we present here the same system modelled in a similar manner using both Cameo Systems Modeler and Capella. This system is an Unmanned Aerial Vehicle (UAV), also called a drone, dedicated to treating fields against pests and diseases. For this article, we focus on the logical architecture, with functions and requirements allocated to the logical components. We chose this layer to show how to use the model to generate a preliminary specification for one or several components of your system. This specification can, once it has been completed, be provided to a subcontractor, or to a team internal to the company, responsible for buying or making this component.

Note that we are talking about a preliminary specification. Most companies that write specifications have some sort of template to serve as a support for writing new specifications, and the exact contents of the specification will vary from company to company. However, it is common for there to be an introductory part with information about the project context for the specification, to help the reader better understand the component being specified. While some parts of this context can be included in the model (name of the global system, other components or external actors that interface with the specified component, etc.) other information such as contact information and referenced documents are rarely included in the model. While it is technically possible to include ALL the information in the model, this can make the model hard to maintain, and the system model is not necessarily adapted for this kind of information. We recommend using classic tools for all information that does not naturally belong in the system model, and completing the partially generated document by hand after it has been generated.

The AgriUAV technical (logical) architecture, modelled in Cameo Systems Modeler using SysML and custom stereotypes.

Logical Architecture of the AgriUAV modelled with Capella.

Creating the Specification document – 3 steps

We will generate a preliminary specification document from each of our models in order to illustrate one of the uses of document generation.

In the next paragraphs we present a method to set up the document generation.

Note: This method is illustrated with the specification document, but it applies to any kind of document to generate.

Step 1 – Characterization of the output

The first step in the document generation setup process is figuring out what we want to display in the output document.

In our case (the system specification), most of the time the system team knows what information to include, because specification documents are used in most projects and there is often a Word template defined by the company to guide systems engineers and ensure that all the key information will be filled in (or marked as not applicable). This step then consists in taking the template document and using it to describe precisely what we want to display in each chapter, and with which layout: list, table, image… It is a good idea to use examples of information from previous projects (modified if confidentiality applies) so that we can get a good idea of the contents and the layout of the target output document.

For documents that are less common or project-specific, where there is no pre-defined template to base the generation template on, this step will consist in identifying all the information that we want to see in the generated document and to determine the layout of this information. Examples remain useful to ensure everyone understands the same thing and agrees on the final result.

Step 2 – Map dynamic document information to model data

We can separate the contents of the generated document into two categories: static and dynamic. Static content remains the same no matter the model it is generated from, and is included in the generation template in the same way it would be included in a regular template. This content typically includes the cover page with the company information and logo, as well as (some) titles, a table for the revision history, etc. Dynamic content is content that changes based on the model it is generated from. In other words, it is any and all information obtained from the model.
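As a small, hedged VTL illustration of this distinction (the cover text is invented for the sketch):

```
## Static content: reproduced verbatim in every generated document.
System Specification - ACME Company

## Dynamic content: replaced by a value queried from the model at generation time.
Number of model elements in scope: $elements.size()
```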

In this second step, we focus on the source of the dynamic information to display in the document. We assume that the static information is already put in place in the template document based on the information obtained in step 1.

For all the different dynamic information (use cases, scenarios, context, functions, requirements, interfaces, components, traceability links…), we first have to identify whether this information is present in the model. Sometimes we face a case where part of the dynamic information we want to include is not present in the system model. For instance, in the specification document we might want to display the traceability to the customer requirements, but our system model does not contain the customer requirements. These are instead stored in a dedicated requirements database. In this example, we realize that the dynamic information we want (the traceability links to customer requirements) is not available in the system model. In this case, there are two solutions: either we complete our model to contain ALL the dynamic information required for the specification document generation, or we need to build our document from several sources (example: one model + one requirements database). This last option generally means generating a partial system specification and completing it manually after generation with the information that is missing.

In this first article about document generation, we want to keep things simple. For the rest of this article, we assume that all the required dynamic information is available in our system model.

Once you have confirmed that all the dynamic information is present in the system model, you have to identify how to access this information from the model.

Let us see an example:

  1. You want to display in the document a table of system requirements with some of their attributes (id, text, verification method) and with their trace (if any) to one or several of the customer requirements.
  2. You have identified the two modelling language concepts that contain this dynamic information: Requirement, and trace link.
  3. You need to be able to distinguish the customer requirements from the system requirements in the model and select only the “trace links” that relate system requirements to customer requirements. Let us assume here that the customer requirements are contained in a package called “Customer requirements” and the system requirements are stored in a package called “System requirements”. In this case, only the location in the model allows us to differentiate the customer requirements from the system requirements.
    Note: Alternatively, we could use a tag, a stereotype or a special attribute on the “requirement” concept to differentiate between “customer” and “system” requirements.

We see here that we need formal rules to identify the information in the model. If customer requirements exist at different places in the model and cannot be easily identified (by an attribute, a tag or a finite list of locations) the document generation cannot be automated.

The mapping of dynamic information to model data requires formal rules (also called “queries”) to identify the model data. If the rules are unclear or ambiguous, the document generation is likely to retrieve the wrong data or miss some of the data that should be included.
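For instance, here is a hedged VTL sketch of such a formal rule, using the package-based convention of the example above (the owner and name navigations are our assumption of the element properties exposed to templates):

```
## Keep only the requirements owned by the "System requirements" package.
#foreach ($element in $elements)
#if ($element.humanType == "Requirement" && $element.owner.name == "System requirements")
$element.name
#end
#end
```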

What confidence can you place in your document if you cannot trust the document generation and need to verify that the output document really contains what was expected?

Step 3 – Finalizing the template document

In this step we write out the formal rules (queries) used to obtain the dynamic information from the model. The language used for these queries depends on the document generation technology used; in our case it is VTL when generating documents from Cameo Systems Modeler, and AQL when using M2Doc to generate documents from Capella.

There are different strategies that can be applied when writing queries for the template, independently of the language used. One strategy is to write the different queries as independently as possible from each other. This strategy has the advantage of allowing the work to be distributed between different people and teams, and also makes reuse of parts of the templates easier. However, depending on the structure of your document and how the different dynamic information correlates, it is not always possible in practice.

Here are two examples of differently structured documents:

  1. Say you have different chapters in your document that are completely independent. In the first chapter you look only at the requirements. In the second you list all the functions in your model. In the third, you list all the components, and so on. In this case, the separation strategy is perfect; each team will get their own chapter to work on.
  2. However, often we want the information to be linked. Instead of listing components independently of requirements and independently of the functions, it is often more desirable to have a chapter for each component, where we get to see all the information relevant to that component; all the requirements it is traced to, all the functions it realizes, all its interfaces, and so on. In this case, all these queries are linked to the same component, and are not independent. In this case, the chapter for a component would be defined once, as one huge query, and then applied to a collection of all the components in the system.
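As an illustration of the second structure, here is a hedged M2Doc sketch of such a per-component “chapter query” (assuming the Capella derived reference allocatedFunctions; heading and list styles would be applied to these fields directly in Word):

```
{m:for component | self.eAllContents(la::LogicalComponent)}
{m:component.name}

Allocated functions:
{m:for function | component.allocatedFunctions}
{m:function.name}
{m:endfor}
{m:endfor}
```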

The formatting of the final output document is also done in this step. This is because, for most template languages including both languages presented in this article, the formatting is applied directly to the query. If you want to present the information obtained from a query in a table, then you put that query in a table. If you want the result to be in bold and / or in italics, then you apply bold and / or italic formatting to the query. Since the formatting (layout) and the queries are so linked, it makes no sense to separate the two.

Sometimes the company template document addresses the presentation and provides guidelines for how the different information should be laid out. However, these rules often change when using a model to generate the document, as the model comes with diagrams that may replace some textual information.

This step can take a lot of time if not prepared in advance, especially if stakeholders need a first generated version of the document to get an idea of the output, and then change their minds about the presentation. This is a bad practice, as it leads to many iterations of the document generation and makes the process very sequential and time-consuming.

A better practice is to discuss the presentation in step 1 with dummy data, so that this “presentation” of data can be completely specified in step 1.

Document Generation illustration with Cameo Systems Modeler and Capella

Both Cameo Systems Modeler and Capella (+ M2Doc) have similar document generation capabilities when it comes to generating Word documents. They are both able to extract diagrams and any information from the model and format it as paragraphs, lists, titles or tables, with any style.

We created one template for each tool, containing the same dynamic information presented in the same way with the same formatting.

View in Word of the two generated documents side by side. On the left is the document generated from Cameo Systems Modeler, and on the right is the document generated from Capella.

In the following chapters we focus on some extracts of each of the two documents generated from the two different tools. There are some minor differences between the two documents, as we will see.

Context chapter

As an introductory chapter, we extracted the diagram image seen above and created a list of the external actors. Except for the fact that the title of the diagram, and of course the diagram itself, is slightly different, this chapter is identical for both generated documents.

Context chapter generated from Cameo Systems Modeler.

Context chapter generated from Capella.

Interfaces chapter

In this chapter we list the external interfaces of the AgriUAV (interfaces that cross the border of the UAV), with their source, target, and the flow that they transport.

Note: This is just an example and is extremely simplified. Generally, a specification would also include information about the flow type, specific interface constraints, etc. For this article, we have decided to keep things simple for presentation and readability purposes.

Interfaces chapter generated from Cameo Systems Modeler.

Interfaces chapter generated from Capella.

At first glance this chapter looks identical, but there are some minor differences between the two due to the differences in the two modelling languages used. In SysML, the connector is directionless; it cannot be used to determine the direction of the flow, so we have to look at the direction of the port to figure it out. For this reason, the “source” and the “target” are obtained in an arbitrary order (we obtain both ports and list them: an “out” port is a source and an “in” port is a target). In Capella, the component exchanges have a source and a target directly available in their properties, making it easier to obtain this information in a fixed order.

Furthermore, in SysML we have the possibility to use delegated ports, that is, a port that is not a final end but an intermediary port at the border of a block that contains other blocks. In Capella, this kind of port is calculated if we decide to hide some internal parts of a component, for instance, but it does not exist as a standalone modeling element. Because we decided to use these delegated ports in our SysML model, it is easiest to obtain just the first level of the block inside the AgriUAV, instead of the final end port, when generating from Cameo Systems Modeler, while it is easier to obtain the final end when generating from the Capella model. This is why the target for the “actual_pos” is listed as UAV in the document generated from Cameo Systems Modeler, while it is listed as “Perception” in the document generated from Capella.

We could have obtained exactly the same result for both generations, either by altering the code in the Report Wizard template used by Cameo Systems Modeler or in the M2Doc template used by Capella. We could also decide to model our connection directly from end-port to end-port in SysML, without using the delegation port, to obtain the same result as with Capella. However, for this demonstration, we found it more interesting to highlight this difference, as one solution is not necessarily better than another, and it ultimately comes down to how this information will be used.

Components chapter

This chapter lists the direct components of the AgriUAV, with their directly allocated functions and requirements. For this chapter, there are no differences between the document generated from Capella and the one generated from Cameo Systems Modeler.

Components chapter generated from Cameo Systems Modeler

Components chapter generated from Capella.

This chapter shows that we can obtain the same structured chapter for multiple similar elements. In this case, there is a header for each component (obtained dynamically), with two sub-headers, one for allocated functions and one for allocated requirements. In this example, the same code has been used in both cases (applied to a collection containing all the components), and it reacts differently depending on whether the query result is empty or not. We also see that we are able to format the obtained information as a table, if we so choose.

When looking at the three parts of this generated document, it might seem poor in terms of information. This is true; there is a lot of information we could have chosen to include, but for a demonstration in an article, we have to limit ourselves. The goal of this article is not to show what you need to include in a specification, but rather to demonstrate the capabilities of the document generation tool and inspire you to come up with your own templates!

Conclusion

As explained, document generation is a key feature when using models for systems engineering. Once ready (automated), this transformation reduces the time and effort needed to build one or several documents and allows other teams to access key, up-to-date information about the project without needing to read the model.

However, the effort required to set up the document generation must not be underestimated, as it is not negligible. There are several steps needed to reach the first generated document, the first one consisting in determining exactly what the result of the generation should be when it is not already defined from an existing template. Document generation requires formal rules (queries) to extract information from the model and the related constraints can sometimes lead to changes in the model structure or in the way we store information in the model.

While the creation of custom templates requires some work, the result is reusable across all similar projects, which will benefit future projects (if the documents are similar).

Both Cameo Systems Modeler and Capella have good capabilities for generating Word documents and make it possible to reach the same level of presentation (layout) in a Word document.

For beginners, adapting an existing template is generally easier than creating one from scratch, at least to understand how it works. For more experienced users, it is better to focus on reuse: templates can be improved to contain parameters and alternatives that allow the use of the same template in different contexts.

Document generation makes it possible to draw even more benefit from the model, thanks to the vast customization possibilities that exist.

Enjoy MBSE! 


Part 9 – Co-Simulation of SysML and others models through FMI

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In this article 9, we explain how to distribute some sub-systems of a logical architecture to a set of suppliers, and how to integrate and co-simulate their behavioural models in the SysML tool with the FMI standard.

This is the first article about using the FMI standard for co-simulation, but not the last! There is a lot to say about using FMI, and this article 9 can be considered as an introduction on using FMI in extended enterprise, through a fairly simple sample case. At the end of this article, we mention some challenges and advanced topics that we plan to work on during 2021, and this work will lead to other articles.

Context

In the previous articles (part 1 to part 8), we introduced a method based on the SysML notation to support the main systems engineering activities.

This article starts with the availability of the logical architecture of an Aircraft Inspection By Drone Assistant (AIDA, a model inspired by the IRT St Exupery case study). It is illustrated below:

The different subsystems (logical components) will be assigned to different suppliers. We expect each supplier to develop a behavioural model of the assigned subsystem before developing the real subsystem. The idea is to integrate all the behavioural models provided by the different suppliers into a central repository, and to co-simulate all those models (simulate all those models concurrently) to evaluate the global behaviour and validate that it behaves as expected. We will use the FMI standard for co-simulation. This is explained in the next paragraphs, before we show the practical steps for integration and co-simulation of the behavioural models through FMI.

Functional Mock-Up Interface (FMI) Standard Overview

The Functional Mock-Up Interface (FMI) is a standardized interface to exchange dynamic models produced by various simulation tools. The model is packaged as a combination of an XML model description file, executable binary files and, optionally, C source files into a single zipped file.

https://fmi-standard.org/

This makes it possible to extract models from several different simulation tools, to integrate these heterogeneous models into a single simulation tool, and to provide each model as a “black-box” to preserve intellectual property when required.


Introduction

  • All components that need to support FMI shall comply with a standard interface that provides services to perform a time step in a model: set input data, advance the behaviour by one time step (fmi2DoStep, or fmi3DoStep in the most recent version 3), and get results (output data)
  • Basically, an FMI-compliant component is packaged into a Functional Mockup Unit (FMU), which is a zipped file (*.fmu) containing:
    • modelDescription.xml
    • Implementation in source and/or binary files that comply with the FMI services
    • Additional resources if necessary

In the next paragraph we present the model description file and the principles of FMI for both model exchange and co-simulation, with a particular focus on co-simulation.

Model Description file structure

The modelDescription file structure is presented below:

Model description structure (from FMI specification)
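To give an idea of its contents, here is a hedged, simplified sketch of a modelDescription.xml for a co-simulation FMU (FMI 2.0 structure; the names, GUID and value references are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<fmiModelDescription fmiVersion="2.0" modelName="UAVSystem"
                     guid="{00000000-0000-0000-0000-000000000000}">
  <!-- Declares co-simulation support and the prefix of the C entry points -->
  <CoSimulation modelIdentifier="UAVSystem"/>
  <ModelVariables>
    <!-- One ScalarVariable per exposed input or output -->
    <ScalarVariable name="target_position" valueReference="0" causality="input">
      <Real start="0.0"/>
    </ScalarVariable>
    <ScalarVariable name="actual_position" valueReference="1" causality="output">
      <Real/>
    </ScalarVariable>
  </ModelVariables>
  <!-- FMI 2.0 also requires the outputs to be listed in the model structure -->
  <ModelStructure>
    <Outputs>
      <Unknown index="2"/>
    </Outputs>
  </ModelStructure>
</fmiModelDescription>
```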

Principles of model sharing with FMI

  • FMI for Model Exchange:

    This approach is proposed to extract model data from a simulation tool without its solver. It is then possible to re-integrate the extracted model in another tool where an appropriate solver is available.

Note: As our article will focus on FMI for Co-Simulation, we do not go into detail on the Model-Exchange principles in this article. If you are interested in more details about Model Exchange, please refer to the FMI Standard.

  • FMI for Co-Simulation

This approach makes it possible to extract an executable simulation model from a specific tool. The executable is then used as a component library to be integrated in a wider environment. This mode is particularly useful to integrate several models coming from different suppliers to evaluate the overall consistency of the complete system.

FMI for Co-Simulation

  • FMI for Co-Simulation Export:
    • Generation of C code and an XML description file, with the solver embedded
    • Sources and binaries archived (.zip) into a single .fmu file

FMU for Co-Simulation Export

  • FMI for Co-Simulation Import:
    • Requires a Master Algorithm (MA) which synchronizes the FMU exchanges
    • FMUs can be connected to the other parts of the model

FMU for Co-Simulation Import


The Master Algorithm executes the individual FMUs at regular communication time steps (hC) and propagates the model outputs to the connected models. This mechanism relies on the standardized FMI API calls, as illustrated below:
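As a hedged sketch of what happens at each communication step, here is a minimal loop body using the FMI 2.0 C API (instantiation and initialization of the two FMUs are assumed to be already done; the value references and the connection are illustrative):

```c
#include "fmi2Functions.h"  /* header from the FMI 2.0 standard distribution */

/* One communication step of a (very) simplified master algorithm:
   read an output, propagate it to the connected input, advance both FMUs. */
static void do_communication_step(fmi2Component station, fmi2Component uav,
                                  fmi2Real t, fmi2Real hC)
{
    const fmi2ValueReference vrFlightPlan = 0; /* illustrative value references */
    const fmi2ValueReference vrTargetPos  = 1;
    fmi2Real flightPlan;

    fmi2GetReal(station, &vrFlightPlan, 1, &flightPlan); /* output of the station */
    fmi2SetReal(uav, &vrTargetPos, 1, &flightPlan);      /* input of the UAV      */

    fmi2DoStep(station, t, hC, fmi2True);                /* advance both models   */
    fmi2DoStep(uav, t, hC, fmi2True);
}
```

A real master algorithm would also check the fmi2Status return codes and handle every connection declared between the FMUs.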

Systems Design and Supply chain process

As explained in the introduction of the article, we want to perform early Verification and Validation of system requirements and verify the global behaviour that is split between the different subsystems. We distribute the subsystems definition models to different suppliers (specialists of different domains) and request them to provide a detailed behavioural model that is compliant with the FMI standard.

This process is illustrated in the figure below:

We collect the different models provided as executable files (“black-box”) to preserve the supplier Intellectual Property (IP). From the integrator side, we can now integrate those potentially heterogeneous models (i.e.: developed with different simulation tools and solvers) into a single tool and verify the overall execution of the different models and their interoperability. This is possible thanks to the FMI standard that ensures this interoperability.

Note: This article briefly presents the co-simulation of different logical components developed in a well-defined sequence from mature specifications. In industrial reality, it may happen that the subsystem behavioural models are developed concurrently with the system architecture definition. We do not give details about the full industrial process of exchange with suppliers and gap analysis for alignment with definition models when both definition and detailed behaviour have been defined concurrently. If you want to know more about this industrial process, you can look at our CSDM 2020 conference joint article with Renault: “Applying Model Identity Card for ADAS V&V“.

Illustration on AIDA case study

In our example we use the AIDA Unmanned Aerial Vehicle (UAV) system. The logical architecture of the UAV System is shown in the figure below:

It is possible to define verification criteria based on requirements formalized by SysML constraints, as illustrated below for controller accuracy (0.5m):
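Such a constraint typically boils down to a boolean expression over the observed and expected values; a hedged sketch with illustrative variable names:

```
{abs(target_position - actual_position) <= 0.5}
```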

Now we provide specifications to our suppliers (including a SysML model of the context of each subsystem and requirements on the expected behaviour). Suppliers will have to develop the behavioural models.

Note: we illustrate the full process for only two subsystems (UAV Control Station and UAV System) and one external system (Air / Terrestrial Gravity) to limit the size of this article. But what is shown can be applied in the same way to many other subsystems and external systems.

  • UAV Control Station – Behavioural Model Provider:

This actor is in charge of developing and providing the behavioural model of the UAV Control Station, with a dedicated focus on the behaviour of the Build Flight Plan function. In this example, the UAV Control Station model provider uses the MATLAB/Simulink 2019b tool and wants to provide the UAV Control Station behavioural model as a “black-box” executable model to the System Integrator, using the FMI for Co-Simulation 2.0 export capability integrated in MATLAB/Simulink.

  • UAV System (vehicle) – Behavioural Model Provider:

This actor is in charge of developing and providing the behavioural model of the UAV System, and in particular of the behaviour of the UAV Control Position and Attitude functions. This behavioural model is developed using the MATLAB/Simulink 2019b tool and is provided as a “black-box” executable model to the System Integrator, using the FMI for Co-Simulation 2.0 export capability integrated in MATLAB/Simulink.

  • Environment (Air / Terrestrial Gravity) – Behavioural Model Provider:

This actor is in charge of developing and providing the behavioural model of the UAV environment, especially of the air and terrestrial gravity. This model is required to represent the effects of the external environment on the UAV System. We suppose that OpenModelica v1.16.2 is used to develop these models and to export the behavioural model using the FMI for Co-Simulation 2.0 export capability integrated in OpenModelica.


This responsibility sharing is summarised in the figure below:

Generation of Functional Mock-Up Unit from MATLAB/Simulink

Each supplier shall develop its behavioural models and define its associated interfaces:

When a supplier has finalised the Simulink behavioural model and has tested it through simulation, it is then possible to create a MATLAB/Simulink project and share its contents as an FMU for Co-simulation:


Generation of Functional Mock-Up Unit from OpenModelica

When a behavioural model has been developed and simulated successfully on the supplier side, the Modelica model can be exported as an FMU:

The supplier should configure the FMI options in the OMEdit configuration window:

Finally, the supplier can export the Functional Mock-Up Unit for Co-Simulation.

Assembly and execution of Functional Mock-Up Units in Cameo 

When all the Functional Mock-Up Units have been received from suppliers as “black-box” executable models, the System Integrator can assemble these models. We can use the Cameo Systems Modeler tool and the Cameo Simulation Toolkit features to verify interface compatibility and the overall behaviour execution.

To perform this action, Cameo Systems Modeler (CSM) offers a drag-and-drop feature for an FMU into a SysML IBD (Internal Block Diagram) and proposes the following import menu:

When all the FMUs have been imported into the Cameo Systems Modeler project, it is possible to assemble them together and check the consistency of the interfaces with regard to the established specification:

In order to simulate this model, it was necessary to “break” the feedback loop between Air Terrestrial Gravity and UAV System. This was done by inserting a “delay” component, which introduces a discrete delay (1/z) configured to 1 communication step size. This kind of annoying effect may appear depending on the tool used for FMU integration and on the underlying co-simulation master algorithm. We propose to address this topic in more detail in a future article. 

Next, the System Integrator is in charge of configuring the Simulation execution and especially the communication step size and the simulation duration. This configuration is done in the Simulation Configuration Diagram, as illustrated below:

Now, before starting the simulation execution, it is necessary to launch the MATLAB Console for each MATLAB/Simulink FMU (at least for MATLAB/Simulink 2019b models) and execute the ShareMATLABforCosimFMU command (where communication between MATLAB FMU and MATLAB runtime is required).

During simulation, the results can be observed from plots available in Cameo Systems Modeler, and it is possible to verify requirement compliance using co-simulation results inside Cameo:

In the execution results, we can observe that the requirement (constraint) concerning the maximum error of 0.5 m between the target position and the real position is verified for the Y coordinate (constraint is respected, in green) but not for the X coordinate (constraint is violated, in red).

 This approach makes it possible to detect requirement violations during co-simulation execution, which would have been very hard to detect without co-simulation, or that would have been detected later in the product verification, perhaps too late…


Synthesis

In this example, we have illustrated the capability to execute heterogeneous FMUs (Modelica and MATLAB/Simulink) and to co-simulate them within Cameo Systems Modeler. This capability allows for system verification while collecting the subsystems' behaviours from suppliers as Functional Mock-Up Units.

Then, the Systems Engineer can integrate these models in a co-simulation environment and define the appropriate communication time-step (hC) size for the FMU communication. To define the appropriate time-step size, the Systems Engineer should consider the overall expected parameters, such as the overall simulation time and the dynamics/periods of the signals and behaviours.

Going further


In this article, the initial model was created manually from the SysML model. However, it would be possible to have seamless code generation of Modelica partial models from the SysML models. Indeed, since partial models play the role of specification (interface contracts), it would be possible to adapt the approach to take into account the full process of configuration and change management.

Next, as Cameo Systems Modeler offers FMI co-simulation capabilities, it would be possible to generate and assemble FMUs from the Modelica models produced for some subsystems with FMUs generated from Simulink models (for the control subsystems). This would make it possible to characterize the co-simulation architecture from the logical architecture.

Then, it would be interesting to explore the usage of FMI standard companions such as System Structure and Parameterization (SSP), which supports the standardization of the co-simulation graph and configuration, and the Distributed Co-Simulation Protocol (DCP), which supports the standardization of communication protocols for co-simulation distributed over several execution nodes such as computers.

Finally, we plan to contribute to different initiatives (like the AFIS/NAFEMS working group) that aim at bridging the gap between system definition models developed by system architects with an MBSE approach, and detailed behavioural models sometimes called “simulation models” developed by domain specialists. The idea is to leverage the integration of those different models to address different purposes including feasibility, evaluation of performances, verification of system requirements and validation of expected behaviour.

Next articles about FMI/FMU

We consider the use of the FMI standard as a key practice to leverage the MBSE approach in extended enterprises. In future articles we plan to address the following complementary topics:

  • Complex multi-physical model: in the current article, we addressed simplified Modelica and MATLAB/Simulink models for the Flight Control System. In a future article we plan to address a more complex multi-physical model which will combine discrete state machines behavioural models, control-command systems and physical acausal systems (pipes, fluids, mechanics, …).
  • Mixing virtual and real systems: we plan to explore a progressive integration of real systems in combination with co-simulated systems to allow for an incremental Verification and Validation process.
  • Connect operational scenarios with a simulated environment: we intend to detail the links between scenarios defined at the operational and functional architectural levels, scenarios used for Verification and Validation and to automate/link Simulation Environment and associated results.
  • Provide feedback on the identification of the communication step size: Detail more complex models and give best practices to define the accurate value for the communication step size.

Enjoy MBSE!


Part 8 – Digital continuity between SysML and Modelica

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the second set of articles, this series explains how to complete the top-level system definition model, formalized in SysML, with other modeling languages and tools, considered as more efficient to perform the system detailed design or certain kinds of system analysis. The focus is put on digital continuity with guidelines concerning coupling semantics and coupling automation between languages and tools.

In this article 8, we present an approach to refine the system definition into a multi-physical specialized architecture with the support of the Modelica language and associated toolbox.

Executive Summary

  • This article focuses on performing multi-physical modeling with Modelica. It uses a SysML logical architecture to initiate a Modelica model composed of “Partial Models” that can be implemented in different ways. Therefore, partial models play the role of specification for the Modelica engineer. Then, after the design of the subsystems, simulation is performed to assess the system requirements. We show that some requirements are not satisfied, which leads to a request for change for the systems engineer (new interface between the subsystems). Finally, the preferred design is capitalized into the SysML physical architecture.

Context

In the previous articles (part 1 to part 5), we introduced a method based on the SysML notation to support the main systems engineering activities.

SysML focuses on abstraction, requirements, functional decomposition, systems decomposition, allocation, and traceability. Within SysML, and especially in its implementation with Cameo Systems Modeler, it is possible to perform simulation and animation of state machines, activity diagrams, IBDs, or sequence diagrams, and evaluation of parametric diagrams. So, we will show that we can complete the SysML definition with Modelica concepts and use the Modelica toolbox to perform analysis and assessment of the architecture, refine our knowledge and the system specification.

This article starts with the availability of a UAV logical architecture for the agricultural domain. It is illustrated below:

Agri UAV Logical Architecture

Modelica Language Overview

Image from https://www.modelica.org/modelicalanguage

Modelica is an object-oriented and equation-based language dedicated to the modeling and analysis of multi-physical systems. It is defined by the Modelica Association. The Modelica language relies on both graphical and textual syntax. It makes it possible to combine Differential and Algebraic Equations (DAE) with discrete event systems. This language is well suited to represent flows of energy, signals or materials, and any continuous interactions.
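To give a feel for the equation-based, acausal style, here is a minimal Modelica sketch (names and values are purely illustrative): a connector pairs a potential variable with a flow variable, and a component states its physics as equations rather than assignments.

```modelica
connector HydraulicPort
  Real p      "pressure (potential variable)";
  flow Real q "volume flow rate; flow variables sum to zero at a connection node";
end HydraulicPort;

model Tank "water level driven by the net flow through its port"
  parameter Real A = 0.5 "cross-section area (m2)";
  Real level(start = 1.0) "water level (m)";
  HydraulicPort port;
equation
  port.p = 1.0e5 + 1000*9.81*level;  // hydrostatic pressure at the bottom port
  A*der(level) = port.q;             // mass balance, solved in either direction
end Tank;
```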

With Modelica, it is possible to create architectures made of sub-models connected by ports (undirected physical flows and directed signal flows). Models can be causal or acausal, and can represent hybrid systems (discrete and continuous). Many free and commercial Modelica libraries are available for domains such as chemistry, automotive, neural-network-based AI, etc. The following figure shows examples of signal-based components (integrator, PID, …) and physical components (magnet, spring, tank, …) provided by Simulation X.

Signal library from Simulation X

Modelica Physical Components

Physical components library from Simulation X

Application on the Agri UAV case study

To illustrate the full approach, we use a simplified model of an Unmanned Aerial Vehicle (UAV) for the agricultural domain. We use the following languages and tools:

  • SysML in CAMEO Systems Modeler 19.0 SP4
  • Modelica in Simulation X

The goal is to provide a physical solution for the “Water Container Subsystem” and for the “Treatment Subsystem”. The following figures show the Modelica elements that will be used to design the physical solution of each subsystem.

Systems Requirements and SysML logical architecture

First we start with the following logical architecture made by a systems engineer.

Agri UAV Logical Architecture

In this article, we focus on the following requirements that shall be satisfied by the final architecture. Each of these requirements specifies the valid definition domain for the variables to be observed.

Systems requirements

Note that in some of the requirements (req 3 and 5) we have introduced the notion of derivative. These requirements specify that when the treatment subsystem is requested to stop, the flow shall decrease continuously. This is illustrated in the following figure. All the curves are valid except the red one, which has a positive derivative at some point in time.

Initiate Logical Architecture in Modelica

The first step is to translate the functional/logical architectures defined in the SysML language into an initial Modelica architecture, in order to focus on the “Water Container Subsystem” and “Treatment Subsystem” with a language well adapted to the formalization and simulation of physical phenomena. This step results in a Modelica model containing Modelica partial models. The main advantage of using partial models to represent logical subsystems resides in their ability to be implemented using variants. Therefore, partial models can be seen as interface contracts that shall be respected by the engineers. Then, each individual model can be implemented with different architectures: this is what we will see next.

Modelica Initial Logical Architecture (Partial model)

In order to perform this translation, each Subsystem (SysML Block) is converted into a Modelica partial model. Each information flow is transformed into a Modelica Signal input or output and the hydraulic flow is converted into hydraulic ports and connectors in Modelica. We have used the SysPhS library (SysML Extension for Physical Interactions and Signal Flows Simulation) to type the ports. This library is available in Cameo Systems Modeler SP4 and has been specially built to specify physical and signal flows independently of the targeted simulation platform. Also, using this extension makes it possible to generate Simulink or Modelica models directly from the SysML model. 
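As a hedged illustration of this mapping (the model and port names below are invented for the sketch; the real translation types the ports with the SysPhS interfaces), a subsystem contract and one candidate implementation could look like this:

```modelica
// Interface contract derived from the SysML logical block (illustrative names).
partial model TreatmentSubsystem
  Modelica.Blocks.Interfaces.BooleanInput spray_cmd "start/stop treatment command";
  Modelica.Blocks.Interfaces.RealOutput volume_flow(unit = "l/min", start = 0);
end TreatmentSubsystem;

// One design variant respecting the contract; a first-order lag stands in
// for the pump/pipe/nozzle dynamics of a real hydraulic design.
model TreatmentSubsystemDesign1
  extends TreatmentSubsystem;
equation
  der(volume_flow) = ((if spray_cmd then 10 else 0) - volume_flow) / 0.2;
end TreatmentSubsystemDesign1;
```

Because both variants extend the same partial model, one design can be swapped for another without touching the connections of the architecture.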

In this first structural approach, we propose the following mapping:
 
| Systems Engineering Concept | SysML Concept | Modelica Concept |
| --- | --- | --- |
| Function | Block | Partial model |
| Technical System Element | Block | Partial model |
| Data Port | Proxy Port + Interface Block (SysPhS Signal Interface) | Signal port |
| Trigger Port | Proxy Port + Interface Block (SysPhS Signal Interface) | Signal port |
| Enable/Disable Port | Proxy Port + Interface Block (SysPhS Signal Interface) | Signal port |
| Energy Port | Proxy Port + Interface Block (SysPhS Physical Interface) | Physical port |
| Functional Flow | Connector | Connect equation |
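As an illustration of this mapping, here is a minimal sketch of how such a translation could be automated. This is not the automation provided by the tools; the block and port names are hypothetical, and only the three port kinds of the table are covered.

```python
# Sketch: emit a Modelica partial model (the interface contract) from a
# SysML-like block description, following the mapping table above.

# Port kinds from the table, mapped to Modelica Standard Library connectors.
SYSML_TO_MODELICA = {
    "data":    "Modelica.Blocks.Interfaces.RealInput",
    "trigger": "Modelica.Blocks.Interfaces.BooleanInput",
    "energy":  "Modelica.Fluid.Interfaces.FluidPort_a",  # acausal physical port
}

def emit_partial_model(name, ports):
    """ports: list of (port_name, kind, direction) tuples."""
    lines = ["partial model {}".format(name)]
    for pname, kind, direction in ports:
        mtype = SYSML_TO_MODELICA[kind]
        if direction == "out" and mtype.endswith("Input"):
            mtype = mtype[:-len("Input")] + "Output"  # directed signal flow
        lines.append("  {} {};".format(mtype, pname))
    lines.append("end {};".format(name))
    return "\n".join(lines)

print(emit_partial_model("WaterContainerSubsystem",
                         [("fill_cmd", "trigger", "in"),
                          ("level", "data", "out"),
                          ("outlet", "energy", "acausal")]))
```

Each partial model produced this way can then be implemented by several variants (for instance `model WaterContainerV1 extends WaterContainerSubsystem; … end WaterContainerV1;`), which is how the design alternatives of the next sections are explored.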

 
 

Define Physical components that fit with the logical architecture

Once we have created a logical architecture in Modelica that is semantically equivalent to the SysML logical architecture, we can explore the different physical solutions able to refine this logical architecture and determine the appropriate physical components that satisfy all the requirements presented above.

To design the “Water Container Subsystem” and the “Treatment Subsystem”, we will use the following elements from a hydraulic library available in Simulation X.

First design of the logical subsystems

Using an available hydraulic library, we perform the following design for the “Water Container Subsystem” and the “Treatment Subsystem”. In addition, we build mock-ups for the other components in order to support simulation (this is not presented here).

Water Container Subsystem Design

Treatment Subsystem Design

 

The simulation results of this first design allow us to check whether the requirements are satisfied.

Volume flow (left curve) and Pressure (right curve)

 

At t = 70 s, the stop command is received, and the volume flow (left side) and the pressure (right side) start decreasing. We see that the volume flow cannot reach 0.001 l/min in less than 0.5 s, and that the pressure cannot reach 0.02 bar in less than 0.5 s.

The requirement is not satisfied in the interval [70 s; 72 s].

 


From the simulation results, we can see that the first part of requirement 2 (“Volume Flow Stop Perf 1”) and requirement 3 (“Perform Stop Perf 1”) are not satisfied. Indeed, the spray does not stop in less than 0.5 s because of the remaining pressure in the pipes. A solution consists in adding a valve before each nozzle that can be opened/closed on demand. The valve ensures that there is no remaining flow from the nozzles when spraying is not required. In addition, it ensures safety in case of failure of the controller or the pump. However, it requires the creation of a new interface between the Mission Management subsystem and the Treatment subsystem. Therefore, this would trigger a change request to the systems engineer, to assess the impact of creating this new interface.

Second design of the logical subsystems

A new interface is created as seen in the following image (valve_cmd).

SysML Logical Architecture with new interfaces (valve_cmd)

The design in Modelica results in the following Treatment Subsystem model:

The following results show that the requirements 2 and 3 are now satisfied by the design.

Feedback in SysML

The physical interfaces of the subsystems can be generated from the Modelica model. Here is an excerpt of the physical architecture that corresponds to a specific Modelica design. Note that we may find many other types of interfaces (electrical, mechanical, …). In that case, we suggest creating one IBD per physical viewpoint (physical domain). The Treatment subsystem is now “almost completely” specified (the electrical and mechanical viewpoints are still missing). We have identified a design solution that can satisfy the requirements. Hence, physical interfaces and sub-components can be built in SysML and traced to the rest of the model.

 

Synthesis

In this article we have proposed a coupling method between SysML (a Systems Engineering language) and Modelica (a physical modeling language). The proposed method includes the definition of system requirements and of the logical architecture in SysML, and the initialisation of a partial Modelica model for physical architecting. Finally, when the virtual product can be verified against its requirements, this activity can lead to a change management loop, with updates to perform in the system definition (system requirements) and potential impacts on the SysML model.

Design loop between SysML and Modelica

Going further

 

First, the initial Modelica model has been created manually from the SysML model. However, it would be possible to have seamless code generation of Modelica partial models from the SysML models. Indeed, since partial models play the role of specifications (interface contracts), it would be possible to adapt the approach to take into account the full process of configuration and change management.

Second, as Cameo Systems Modeler offers FMI co-simulation capabilities, it would be possible to generate and assemble FMUs from the Modelica models produced for some subsystems with FMUs generated from Simulink models (for the control subsystems). This would make it possible to characterize the co-simulation architecture from the logical architecture.

Enjoy MBSE!

 

 

Next articles to come…

  • January 2021 – Co-simulation of SysML and other models through FMI


Part 7 – Digital continuity between SysML and AADL

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages“.

In the second set of articles, this series explains how to complete the top-level system definition model, formalized in SysML, with other modeling languages and tools, considered as more efficient to perform the system detailed design or certain kinds of system analysis. Focus is put on digital continuity with guidelines concerning coupling semantics and coupling automation between languages and tools.

In this article 7, we start from a System Definition model developed with SysML and we present an approach that uses a specialised real-time architecture language to refine this definition into an Electric/Electronics and Software Architecture. We use AADL (Architecture Analysis and Design Language) as an example, but we could use languages with similar concepts and purposes, such as AUTOSAR (used in the automotive industry) or UML MARTE.

We present the allocation of timing budgets on SysML models, starting from operational scenarios, and through the concept of Functional Chains. We refine these timing requirements through the definition of physical components and a physical architecture formalised with AADL.

Executive Summary

  • This article focuses on performing Software and Hardware co-design. Indeed, the satisfaction of timing requirements by the selected design is highly influenced by the end-to-end architecture: the delays and execution times induced by the selected devices, processors, network topologies and software scheduling properties. These elements cannot be analyzed independently; they should be analyzed as a whole.

  • SysML is a good language to define requirements and it is recommended to define timing requirements (timing budgets) at system level. Then, it is possible to verify the compliance of the selected detailed architecture using analysis languages such as AADL.

  • AADL is an appropriate (and industrial domain agnostic) language to define the hw/sw architecture of embedded real-time and safety critical systems. It offers capabilities of analysis related to hw/sw architecture properties (latency, scheduling, …) and makes it possible to verify the suitability of the selected design to achieve the System Requirements.    

Context elements

In the previous articles (part 1 to part 5), we have introduced a method based on the SysML notation to support the following systems engineering activities:

This article starts with the availability of a logical architecture for a case study called AIDA (that comes from the Saint Exupery Research Institute). It is illustrated below:

 

AIDA Logical Architecture

 

End to End Timing requirements and Functional Chains concept in SysML

End-to-end timing requirements are important non-functional requirements (amongst others) to consider in order to reduce the solution space and choose the appropriate physical solution. An end-to-end timing requirement generally specifies the maximum acceptable duration from an input to a specific output (of a function or a system), following a specific flow path in a specific operational scenario. End-to-end timing requirements may be imposed by a Stakeholder Requirement, or may emerge as a System Requirement needed to satisfy a Stakeholder Requirement.

Functional chains play a key role in specifying end-to-end timing requirements. A functional chain may be seen as an abstraction of a set of execution paths from an input to an output of the System of Interest (SoI). In this article, we propose to formalise a functional chain with a SysML block, and the end-to-end timing requirement with a time duration constraint element directly available in the SysML notation, as illustrated below:

 

End to end timing requirement on a functional chain

 

 

Then, we propose to show the realisation of the functional chain with a SysML IBD diagram. This diagram allows us to visualise the end-to-end flow from the selected function/system input(s) to the selected function/system output(s). Each function input and output that participates in the functional chain is linked with a dependency link.

As an example, the following IBD diagram focuses on the functional chain that controls the position of the UAV and the required thrust value. The end-to-end timing requirement applies to the whole functional chain, which means in our example that the final solution shall take at most 10 ms to perform the loop. Therefore, it is necessary to divide this timing budget and to allocate a time duration to each function, in order to find a solution that can satisfy the requirement.

 

Functional chain for control loop

 

In practice, the proposed process consists, for the System Engineer, in defining timing requirements as budgets for all the elements of the overall chain. Then, System Designers will have to demonstrate how their selected design meets these execution budgets. This is where we suggest using the AADL language to refine the timing properties induced by the selected hardware components and software properties. The next paragraphs quickly present the AADL language and provide an application on our example.
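Before moving to AADL, note that the budget allocation itself can be sanity-checked very simply. The sketch below (function names and values are hypothetical) verifies that the sum of the allocated durations stays within the 10 ms end-to-end requirement:

```python
# Sketch: check that the per-function timing budgets allocated by the
# System Engineer fit within the end-to-end requirement of the chain.

END_TO_END_BUDGET_MS = 10.0

budgets_ms = {                    # hypothetical allocation
    "Sense Attitude/Position": 2.0,
    "Control UAV Position":    2.5,
    "Control UAV Attitude":    2.5,
    "Compute Thrust":          1.5,
    "Generate Thrust":         1.0,
}

total = sum(budgets_ms.values())
print("Allocated: {} ms / {} ms".format(total, END_TO_END_BUDGET_MS))
assert total <= END_TO_END_BUDGET_MS, "budgets exceed the end-to-end requirement"
```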

AADL Language overview

AADL (Architecture Analysis and Design Language) is a language dedicated to the modeling and analysis of real-time, safety critical, embedded systems. It is a standard published by the Society of Automotive Engineers as reference AS5506C.

The AADL language relies on both graphical and textual syntax and it includes the following concepts and extensions:

In this article, we focus on the following Base Standard concepts:

  • Structure Concepts:
    • Abstract Components
    • Software Components (Processes, Threads and Subprograms)


    • Software Timing properties (periods, priorities, …)
    • Hardware Components (Processors, Devices and Buses)


    • AADL Ports


    • Data Types
    • Connections Properties
  • End to End flows and associated latency requirements:
    • concept of end to end flow (from specific input to specific output) and definition of latency requirements
    • concept of latency on flows
  • Bindings:
    • concept of mapping of functions (abstract components) to software elements
    • concept of mapping of software components to hardware components elements

Application on the AIDA case study

To illustrate the full approach, we use a simplified model of an Unmanned Aerial Vehicle (UAV) based on the AIDA case study developed at the St-Exupery Research Institute. We use the following languages and tools:

  • CAMEO Systems Modeler 19.0 SP4
  • AADL Inspector from Ellidiss
  • STOOD for AADL from Ellidiss

The hardware architecture has been defined with inspiration from the Crazyflie AADL architectural model.

The AIDA Logical Architecture in SysML has been recalled at the beginning of this article. Here we put the focus on the UAV system and in particular the co-design of the electric/electronic and distributed software architecture.

The functional goal is to control the actual position of the UAV so that it follows the expected trajectory around the aircraft. Therefore, one must find the right control parameters so that the UAV can follow the expected trajectory within an expected maximum timing latency. Then, we want to verify the compliance of the selected hardware and software architecture with the timing requirements.

 

Recall of the AIDA Functional and Logical Architectures

The AIDA Functional Architecture is defined in SysML and presented below:

 

AIDA Functional Architecture

 

 

In this functional architecture, we detail the position and attitude control functions, as well as the compute thrust and generate thrust functions of the UAV.

The logical architecture is established with regard to the emerging system and sub-system decomposition. In this example, we decompose the UAV into the following subsystems:

  • Perception Subsystem
  • Flight Management Subsystem
  • Propulsion Subsystem
  • Mission Management Subsystem

Then, we allocate the functions to the subsystems as follows:

 

AIDA Logical Architecture

 

 

Definition of Timing Requirements (end to end latency) in SysML

Next, we define the expected maximum latency of the measurement to thrust force control (performance of the control loop) in a specific functional chain defined in SysML, as presented below:

 

Functional Chain for Control Loop

 

 

This chain is extended with the duration constraint property, defined as a range between min and max values; in our example: 0 ms .. 10 ms.

Initiate Logical Architecture in AADL

From this step on, we design the system using the AADL language, in order to support software design while taking hardware constraints into account and performing timing verification.

The first step is to translate the functional/logical architectures and functional chains defined in the SysML language toward a logical architecture using the AADL language. This step results in an AADL System Implementation using Abstract Components and end to end flows as illustrated below:

Resulting AADL Architecture from the SysML Logical Architecture

In order to perform this translation, we convert each SysML flow of information of type “Trigger” into an AADL event exchange, and each flow of information of type “Data” into an AADL data exchange. The functional chain is converted into the equivalent AADL concept of end-to-end flow with a latency specification.

 

In this first structural approach, we propose the following mapping: 

 

 

| System Engineering Concept | SysML Concept | AADL Concept |
| --- | --- | --- |
| Function | Block | Abstract Component |
| Technical System Element | Block | Abstract Component |
| Trigger Port | Port | Event Port |
| Enable/Disable Port | Port | Event Port |
| Any other Port (Data, Material or Energy) | Port | Data Port |
| Internal Dependency Relation | Dependency | Flow path |
| End to End Functional Chain from operational scenario | Block, IBD | End to end flow |
| Functional Chain Duration | Time Duration Constraint | Latency |
| Functional Flow | Connector | Port Connection (between Features) |
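As an illustration of this mapping, here is a minimal sketch of what the translated AADL text could look like, generated from the table above. The component, port and flow names are hypothetical, and the emitted AADL fragment is a simplified illustration of the target syntax, not the output of an actual tool.

```python
# Sketch: emit an AADL abstract component and an end-to-end flow with its
# latency requirement, following the mapping table above.

def emit_abstract(name, in_events, out_data):
    """in_events: trigger/enable ports; out_data: any other ports."""
    feats = ["    {}: in event port;".format(p) for p in in_events]
    feats += ["    {}: out data port;".format(p) for p in out_data]
    return ("abstract {}\nfeatures\n".format(name)
            + "\n".join(feats) + "\nend {};".format(name))

print(emit_abstract("FlightMgt", ["start_mission"], ["thrust_cmd"]))

# The functional chain becomes an end-to-end flow in the enclosing system
# implementation, with the duration constraint mapped to a Latency property:
print("""
  etef_control: end to end flow
    perception.f_src -> c1 -> flight_mgt.f_sink
    { Latency => 0 ms .. 10 ms; };""")
```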

Define Hardware / Software components that fit with the logical architecture

 

Once we have created a logical architecture in AADL that is semantically equivalent to the SysML logical architecture, we can start exploring the physical architecture definition using AADL concepts.

We initiate the Physical Architecture from the Logical Architecture and start to design Physical Components (defined as System Components in AADL), one for each Subsystem:

  • Mission Mgt Sub-System
  • Perception Sub-System
  • Flight Mgt Sub-System
  • Propulsion Sub-System

Now we can explore the different physical technical solutions able to refine this logical architecture, and we determine the appropriate physical components (hardware and software) that satisfy all the requirements, functional as well as non-functional (timing requirements, expected temperature range, safety level of the selected devices, constraints due to the selected solutions, …).

This solution space is wide and shall be explored carefully with regard to the compatibility of the physical interfaces. In the following example, we select a design solution that minimises the cost and the execution time of the Flight Control Management, from the Targeted Attitude and Position input up to the generation of the Thrust Force. The engineering goal is to move the UAV within an expected maximum timing budget of 10 ms.

With this design, we propose a mapping from Sub-Systems to Physical Components as follows:

  • Mission Mgt and Flight Mgt are implemented by the FlightMgtController System Electronic Control Unit (Hw and Sw components)
  • Propulsion Sub-System is implemented by a PropulsionMgt System which is composed of 4 Electrical Motors and associated Propeller / Power adapter.
  • Perception Sub-System is implemented by a Sensors_Module System which is composed of an integrated circuit which implements 3 sensors (accelerometer, gyroscope, magnetometer).

Note: in the Logical Architecture, the Perception Sub-System shall perform the following internal functions:

  • SenseAttitudePosition
  • FuseData

Depending on the expected performance and the available/selected technologies, some sensors may provide hardware support for FuseData. In the proposed solution we perform the selection based on costs, so we decide to implement FuseData in dedicated software and we allocate the FuseData function to the Flight Controller instead of the Sensors module.

Define Hardware / Software architectures and bind them to the logical architecture

Then, we can design the software architecture while taking into account hardware constraints, and study the technical solutions to implement the logical components (selection of appropriate devices and processors, decomposition of functions into several processes, allocation of processes to one or more processors, definition of the interactions between the software and hardware elements, …). At this stage, hardware and software engineers shall analyse the suitability of the architecture regarding functional and non-functional requirements (such as timing requirements).

For this article, we propose a simplified electronic architecture using the following physical components:

  • One MPU9250 device for attitude/position sensing, usable with an I2C interface. This device integrates the following sensors:
    • Accelerometer: digital-output triple-axis accelerometer with a programmable full-scale range of ±2g, ±4g, ±8g and ±16g, and integrated 16-bit ADCs
    • Gyroscope: digital-output X-, Y-, and Z-axis angular rate sensors (gyroscopes) with a user-programmable full-scale range of ±250, ±500, ±1000, and ±2000°/sec, and integrated 16-bit ADCs
    • Magnetometer: 3-axis silicon monolithic Hall-effect magnetic sensor with magnetic concentrator
  • One 32-bit FlightController processor for FlightControl function execution: STM32F7x5
  • One 32-bit Connectivity processor for connectivity features (Bluetooth low power mode): STM32WX5
  • 4 electrical motors controlled by PWM outputs from the Main Processor
  • 1 UART bus for communication between the Main Processor and the Companion processor

With those choices, we suggest the following AIDA HW Architecture:

AIDA Hardware Architecture in STOOD for AADL

For the software design, we suggest decomposing the software into 2 processes (connectivity and flight management). This is mainly due to the following elements:

  • Different criticality level between the 2 software applications
  • Different timing constraints between the 2 software applications
  • Limited computing power to execute both applications on one processor
  • The main processor does not support all the required connectivity and interfaces

That is the reason why, in this proposal, we have performed the following decomposition: 

  • One Process dedicated to the Connectivity Stack (focus on communication interfaces with Uav Ctl Station and Pilot)
  • One Process dedicated to FlightMgt related functions:
    • Thread MissionModesMgt: implements RetrievePOI and MissionMgtModes functions
    • Thread FlightControlLoop
      • ControlUAVAttitude: implements Control UAV Attitude function
      • ControlUAVPosition: implements Control UAV Position function
      • ComputeThrust: implements Compute Thrust function
    • Thread Driver I2C MPU9250: implements the interface with the Attitude Sensors Devices and functions of Perception Subsystem (AcquireAttitudePosition and FuseData)
    • Thread CommunicationMgt: implements the communication interface with the Connectivity processor
    • Thread Propeller Interface: implements the interface with the electrical motor command (PWM)

 

The proposed AIDA SW Architecture is presented below:

AIDA SW Breakdown Structure

AIDA SW Architecture in Ellidiss STOOD for AADL

Finally, we can perform the binding in a 3-step process:

1. Allocate the software processes to the hardware processors.

2. Perform the correspondence/allocation between the abstract components (translated from the SysML logical components) and the Hw/Sw components.

3. Verify the consistency of the design by checking that all the logical components are bound to processes and hardware components.

 

AIDA UAV Hw/Sw Architecture

Perform Timing Analysis thanks to AADL 

Now, it is possible to use integrated AADL analyses (static analysis, scheduling analysis) thanks to AADL-compatible tools such as AADL Inspector.

We can analyze the compliance with the expected end-to-end timing requirements:

Then, we can perform scheduling analysis with an appropriate simulator to check the suitability of the technical solution parameters. Indeed, defining appropriate scheduling properties may be difficult, because hardware elements have physical limits (sensor sampling times, …) and software timing properties (priorities, worst-case execution times, data transmission delays, periods, …) are not obvious to determine. Moreover, you may encounter feasibility issues in satisfying the allocated timing budget if the System Engineer does not have these constraints in mind.

In this example of scheduling simulation, we can observe that some deadlines are missed, which was not obvious to anticipate during the initial allocation phase.
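As a side illustration of why such deadline misses are plausible (this is not the analysis performed by AADL Inspector), a quick necessary check is the classic rate-monotonic utilization bound of Liu and Layland. The thread periods and worst-case execution times below are hypothetical values:

```python
# Sketch: rate-monotonic utilization test (Liu & Layland, 1973) on a few
# threads of the FlightMgt process. Periods and WCETs are hypothetical.

threads = {  # name: (period_ms, wcet_ms)
    "FlightControlLoop":  (10.0, 4.0),
    "Driver_I2C_MPU9250": ( 5.0, 2.0),
    "CommunicationMgt":   (20.0, 3.0),
}

n = len(threads)
utilization = sum(wcet / period for period, wcet in threads.values())
bound = n * (2 ** (1 / n) - 1)  # ~0.78 for n = 3

print("U = {:.2f}, RM bound = {:.2f}".format(utilization, bound))
if utilization > bound:
    print("Schedulability not guaranteed: run a scheduling simulation.")
```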

Feedback in SysML

The Systems Architect has to analyse the impacts of the change requests coming from the proposed hardware/software solutions.

In this example, the systems engineer has performed an initial allocation of functions to logical components based on his initial knowledge. Then, the Hardware/Software design team has analyzed the existing components, devices and software components available as libraries. In our case, the analysis has shown a mismatch between the initially expected interfaces and the interfaces provided by the selected sensor device (which implements the Perception Subsystem). So, the Hardware/Software design team proposes to change the initial allocation of the FuseData function from the Perception Subsystem to the Flight Management Subsystem.

So we update the Logical Architecture in SysML accordingly, as shown in the following figure:

Modified Logical Architecture after AADL analysis

 

In addition, some timing budgets between subsystems can be adjusted with regard to the feasibility of the proposed solutions, when possible. Some changes in timing requirements or in interface definitions may have consequences on other engineering specialities or on other components. So, this impact analysis is not always straightforward (it is rather iterative).

Synthesis

In this article we have proposed a coupling methodology between SysML (a Systems Engineering language) and a hardware/software design language (AADL). Note that this method can easily be tuned for other languages with similar concepts (real-time, scheduling, …) such as AUTOSAR or UML MARTE.

The proposed method includes the definition of system requirements using SysML (including end-to-end timing requirements with the Functional Chain concept) and the initialisation of a specialised model for hardware/software architectures (and associated network topologies). This translation allows us to study the detailed design in the AADL language, in order to benefit from suitable concepts and from the various timing analysis capabilities available in AADL toolboxes.

Finally, when the virtual product can be verified against its requirements, this activity can lead to a change management loop with some updates to perform in the system definition (system requirements) with potential impacts on the SysML model.

Perspectives

To complete the present work, we will later refine the analysis of the mapping between data types defined at system level and the data types resulting from the selected design (in particular, studying the influence of implementation choices, such as implementing a system-level Real value as 32-bit fixed-point data, and verifying the suitability of the selected implementation type with regard to accuracy requirements).

In further work, we plan to investigate how to create an initial physical architecture in the SysML model by defining component libraries (ECUs, sensors, mechanical components, …) and network topologies, and how to convert these into the AADL “world”. Then, we will propose adequate automation to perform the “bridge” from SysML to AADL based on the mappings (still to be refined).

We also intend to explore in more detail the other AADL language capabilities and associated annexes such as the behavioural annex (how to initiate AADL modes management from SysML behavioural description) or Error Modelling Annex (how to coordinate Systems Safety Analyses, Systems Engineering model and Safety Analyses for hardware/software design in AADL).

 

Enjoy MBSE!

Acknowledgements

We are warmly grateful for the support of the Ellidiss company and in particular Pierre Dissaux concerning the AADL modeling and simulation activity in the context of this article.

A special thanks to Jerôme Hugues for his advice and interesting discussions about these topics.

Next articles to come…

  • November 2020 – Digital continuity between SysML and Modelica
  • January 2021 – Co-simulation of SysML and other models through FMI


Part 6 – Digital continuity between SysML and Simulink

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages“.

In the second set of articles, this series explains how to complete the top-level system definition model, formalized in SysML, with other modeling languages and tools, considered as more efficient to perform the system detailed design or certain kinds of system analysis. Focus is put on digital continuity with guidelines concerning coupling semantics and coupling automation between languages and tools.

In this article 6, we start from a System Definition model developed with SysML and we present an approach that uses Simulink to define or refine part of the system’s behaviour, such as the control loop of the system in its environment. We discuss 2 different ways of using Simulink.

Executive Summary

  • This article focuses on Control Engineering. This discipline requires analyzing and detailing several interdependent elements defined in the System Architecture (such as control functions, the time/frequency response of input signals, the fidelity of external environment behavior models to the real physical environment, …) to verify the consistency of the interfaces and verify that the behavior complies with the requirements. Those activities require the use of existing knowledge and assets available in mathematical simulation tools like MathWorks Simulink.

  • The communication between the Systems Architect and the Control Engineer is key to get a fast and clear transition between the SysML and the MathWorks MATLAB/Simulink models.

  • There are 2 different ways to transition from SysML to Simulink. The “new” way uses System Composer to initialize an architecture model from the SysML model. System Composer can preserve the stereotypes defined in SysML and provides features useful to assess the architecture like “multi views” and “architecture analysis”. This is the way we recommend for the future.

    Today, some important features like “static consistency check of the interfaces” are still missing, and there is no automation to support the transition between SysML and System Composer. So the “traditional” transition to Simulink, even if it does not preserve SysML stereotypes and does not provide features to characterize and assess the architecture, remains useful. And there is some automation to support this transition. 

  • When using Simulink, the Control Engineer verifies the system architecture, the interfaces, and the behavior, to ensure that the requirements can be satisfied. If this is not the case, the Control Engineer shall propose a change to either adapt the architecture or refine the requirements. This change will be managed by the System Architect in the SysML model and it should be reflected in the Simulink model in order to maintain the consistency of the overall system definition.

Context elements

In the previous articles (part 1 to part 5), we introduced a method using the SysML notation to support the following systems engineering activities:

This article starts with the availability of a logical architecture for a case study called AIDA (coming from the Saint Exupery Research Institute). It is illustrated below:

From a Logical Architecture to a Detailed Definition using Simulink

Once a logical architecture has been defined, Systems Engineers start to communicate it to the various specialists involved in the system detailed definition (software engineers, mechanical engineers, command-control engineers, hardware engineers, …). These specialists will have to analyze the system requirements (including interfaces definition, expected behavior and associated performance) and will verify the requirements feasibility (is there a solution that can satisfy all of these requirements?).

In this article we focus on the Control Engineer. This specialist applies control theory to one or several components of the system architecture and to its environment. He needs to define equations, reuse operators and generic components from libraries and toolboxes, use solvers and timed simulation, access optimization tools, use matrix-based computations, etc. All these features are offered by math-based simulation tools. Among these tools, we choose to restrict our focus to the MathWorks MATLAB/Simulink/System Composer suite, as it is, to our knowledge, the most commonly used in the industry today.

Transition between SysML and MathWorks Simulink

In the next paragraph we detail a process to refine the definition of internal control. It contains the following steps:

  1. Definition of interface requirements in the SysML model,
  2. Export/translation of data to the Simulink (and SystemComposer) tool,
  3. Refinement of the interfaces and of the expected behavior in the Simulink model thanks to the component libraries and the support of simulation,
  4. Feedback to the Systems Architect about changes or refinements needed on system requirements and on the system architecture,
  5. Impact analysis of the change requests and update of the system definition model.

Transition from the SysML Logical Architecture to a Simulink behavioral model: the “traditional” approach

In this first approach, the Logical Architecture of the SysML model is translated into a Simulink model while preserving the allocation of system functions to logical components (subsystems). Simulink component libraries are used to refine the functional behavior. Then, time-based simulation is used to verify that the interfaces between system functions and between subsystems are consistent, and that a solution exists that can satisfy the system functional and performance requirements. The interface definitions can be confirmed or refined by the control engineer.

Transition from the SysML Logical Architecture to a Simulink architecture model: the “new” System Composer approach

In the second approach, we still transition to Simulink but with 2 different steps and usages of Simulink. First we use System Composer (Simulink facet) to characterize and assess the architecture, thanks to features like multi-views, filtered view and analysis. Second, we use the “more traditional” Simulink component libraries to refine the function’s behavior.

The expected benefit of using the System Composer facet is a better separation of concerns: in the first step, the logical architecture can be characterized with the support of stereotypes on ports or on connectors, and assessed with the support of analysis features like “dynamic consistency checks of interfaces”. In the second step, the different components and their allocated functions can be refined, especially for the behavior, with the support of a wide diversity of generic component libraries, patterns and other useful features.

Note: each step has its own interest and may be performed by different users with different experiences.

Application on the AIDA case study

To illustrate and give elements of comparison between these 2 scenarios, we use a simplified model of control for the trajectory of an Unmanned Aerial Vehicle (UAV) based on the AIDA case study developed at the St-Exupery Research Institute. The AIDA Logical Architecture in SysML has been recalled in the context at the beginning of this article. Here we put the focus on the control loop between the perception subsystem, the Flight Management Subsystem and the Thrust Management Subsystem.

The goal is to control the actual position of the UAV to fit the expected trajectory around the aircraft. Therefore, one must find the right control parameters so that the UAV can follow the expected trajectory within a minimum error margin (that shall be defined in the performance requirements).

Simulink model initialisation

First, we define the scope of the transformation between the SysML logical architecture and the Simulink Model because we do not need to translate the full SysML model. We restrict the scope of this transformation to the sole functions and components useful for the UAV trajectory control loop (including the Air / Terrestrial Gravity model to reflect the physical environments effect on the control loop).

Note: in this SysML model, ports have been split (for instance current x and current y, which come from a Current Position flow) in order to be able to use the automations available in the tool.

Concerning the specification of the interface types and units, the SysML tool (Cameo Systems Modeler) provides access to the ISO 80000 standard units:

 

Translation of the logical architecture from SysML to Simulink/System Composer

The SysML logical architecture can be translated to the MathWorks Simulink modeling environment through two different methods:

  • With System Composer:

With this method, it is possible to create a logical architecture semantically equivalent to the SysML model (same components and interfaces) as illustrated below:

 

One of the main benefits of this approach is the preservation of the stereotypes initially defined in the SysML model. For example, in the Functional Architecture model of our case study, we have defined the following stereotypes on function inputs and outputs: Information, Energy, Material.

 

Within System Composer, it is possible to define the same stereotypes and apply them to the System Architecture (functions or component interfaces):

 

System Composer also offers a feature to filter different views of the same system architecture, which is very useful to ease the architecture reviews:

 

  • Without System Composer, using the direct transition from the SysML tool to Simulink:

With this method, we can use some automations available out of the box in Cameo Systems Modeler (the SysML tool we have used) to create the Simulink Model skeleton from the SysML filtered model (model filtered with the UAV trajectory scope).

 

If data types and units have been specified in SysML, the automated transition propagates the data types and units in the Simulink model:

 

But without System Composer, the additional semantics defined in the SysML model through the stereotypes are lost.

In both cases (with or without System Composer), the resulting model contributes to the specification for the control engineer.

 

Simulink model refinement and simulation

Next, we complete the functions’ behavior with existing control assets and knowledge, such as PID controllers, and we refine the associated parameter values with the support of simulation.

Change requests on requirements and architecture

Once the simulation seems to satisfy the requirements expressed at the logical level, it is possible to derive new lower-level requirements. For instance, it may be possible to add requirements on stability, or on the expected control accuracy. The PID’s parameters can be finalized only in a physically realistic environment. However, the simulation gives an idea of the feasibility and of the range of values to be implemented later.

Feedback in SysML

The Systems Architect has to analyse the impacts of the change requests from the control engineer. Some changes in requirements or in interface definitions may have consequences on other engineering specialties or on other components. So, this impact analysis is not always straightforward.

Discussions on the two possible transitions and synthesis

This discussion is based on the use of the following tools and configurations:

  • Cameo Systems Modeler (CSM) V19SP4 (SysML tool)
  • MATLAB/Simulink 2020a

From SysML Logical Architecture to Simulink with the “traditional” (direct) approach

  • Interests of this “traditional” transition:

    1. It is possible to achieve digital continuity between some SysML tools and Simulink: during a simulation session started in CSM (the SysML tool), the Cameo Simulation Toolkit (CST) can call a Simulink model from a SysML block. In practice, CST hands over to MATLAB, which runs the Simulink model. At the end of the Simulink model execution, CST can retrieve data from the Simulink model and make it available for use in the SysML model, or for visualization in the simulation console.
    2. Co-Simulation of SysML and Simulink models using FMI standard (will be detailed in a future article): both SysML behavioral model and Simulink behavioral model are simulated concurrently through their respective solvers (in fact there is an orchestrator that drives both simulations time step by time step)
  • Issues with this “traditional” transition:

      1. Stereotypes defined in the SysML model are lost after translation into the Simulink model. There is no simple way to retrieve those stereotypes, even with additional automation, because the Simulink meta model does not handle stereotypes.
  • Additional remark :

    Most SysML tools provide automation to generate a Simulink model from a SysML IBD representing the logical architecture, but the translation of buses from a SysML Logical Architecture model is generally not implemented; an automation could be developed to fill this gap.

From SysML Logical Architecture to Simulink with the “System Composer” (new) approach

  • Interests of this “new” transition:

      1. Ability to keep the SysML stereotypes when translating into System Composer, and to use them to create several architectural views (for instance electrical view, mechanical view, control view…)
      2. Assess the architecture in terms of consistency of interfaces thanks to the availability of stereotypes put on ports and connectors and the use of dynamic checks.
  • Issues with this “new” transition:

      1. Static consistency between ports is not ensured today (will come in a future version).
      2. No transformation available between CSM and System Composer.
        1. No automation available out of the box in CSM V19SP4 to export a SysML model as a System Composer model.
        2. No import of a SysML model available from System Composer so far.

Synthesis

If your MBSE method is still in the definition stage, and if there is a need to go from a SysML logical architecture to Simulink in order to benefit from mathematical simulation tools, it is clear to us that System Composer is the right target from SysML. System Composer is the MathWorks tool that can preserve the SysML stereotypes put on the logical architecture (components and interfaces), and it thus provides good support for architecture views and analysis in the Simulink environment. As soon as automation becomes available to support the transition between SysML and System Composer, together with features to statically check the consistency of interfaces, we will strongly recommend this way of transitioning from SysML to Simulink.

Otherwise, if you need to go from SysML to Simulink today, with the capabilities provided by Cameo Systems Modeler V19SP4 and MATLAB/Simulink 2020a, it is probably more efficient to use the “traditional” (direct) transition from SysML to Simulink, thanks to the automations that already exist to support part of this transition.

Perspectives

In this article we have discussed the transition from a Logical Architecture formalized in SysML, to a Simulink model limited to the structure (components, their allocated functions, and their associated interfaces). Note that export from Cameo Systems Modeler to Simulink (with direct transition) also supports the translation of behavioral elements such as constraint blocks and state machines. Those aspects will be detailed in a future article.

Concerning multi-physical aspects, we plan to explore the usage of the SysPhS standard, which is available in Cameo Systems Modeler through the SysPhS library, to support the automated transition of physical elements (structure and behavior) to MathWorks tools.

Additionally, we plan to explore in further detail the change analysis process that is performed when there are updates of the System Logical Architecture in the SysML model, and the possible consequences on the Simulink model. We will focus on the method but also on the tools able to support the difference/merge between SysML and Simulink models.

Finally, we will address in a future article (planned for January 2021) how to perform co-simulation between a SysML behavioral model and other behavioral models using the FMI for Co-Simulation standard.


Part 5 – Coupling optimization of logical architecture using genetic algorithm

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages“.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

In the previous article, we explained how it is possible to define a Logical Architecture from a Functional architecture, using an allocation matrix between functions and logical components.

In this article, we go a step further by extracting the coupling metric between functions from the Functional Architecture (using an N² diagram technique) and using an optimization algorithm to minimize the coupling between logical components.

It is possible to consider several criteria with this method such as end to end latency requirements on interfaces. In this case, the algorithm tries to find the best solution that satisfies the coupling minimization, allocation constraints, and also timing constraints. In this article, we will focus on coupling minimization only.

Minimizing the coupling between components, a good systems engineering practice!

Among Systems Engineering best practices, as stated in many standards, it is key to minimize the coupling between the sub-systems in order to master the product complexity. For instance, in the IEC 61508:2000 we find:

“The interfaces between subsystems are kept as simple as possible and the cross-section (i.e. shared data, exchange of information) is minimised.”

IEC 61508:2000 – Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 7: Overview of techniques and measures

Are there any techniques or methods to support systems architects in minimizing the coupling between components?

Yes. To achieve such minimization, a well-known method consists of using coupling matrices (also called N² diagrams) and then reorganizing them to identify architectures with minimal coupling.

Let us first explain how an N² diagram is defined, and let us illustrate that explanation with a case study. Secondly we will focus on the computation of the N² diagram to identify coupling optimization.

The coupling between components concerns the dependencies between the components. As explained in the previous article (part 4), the dependencies between the logical components mainly stem from the functional interfaces. So it is no surprise that we first start with the functional dependencies, use them to compute a coupling metric, and finally suggest an allocation of functions to components that minimizes the coupling.

Introduction to the N² diagram method

The N² chart, also referred to as N² diagram, N-squared diagram or N-squared chart, is a diagram in the shape of a matrix, representing functional or physical interfaces between system elements. It is used to systematically identify, define, tabulate, design, and analyze functional and physical interfaces. It applies to system interfaces and hardware and/or software interfaces.

[2] Wikipedia: https://en.wikipedia.org/wiki/N2_chart


Here below is an example of an N² diagram for a project with 9 functions that have dependencies.

In this matrix, a “1” represents an existing interface between the function of the concerned row and the function of the concerned column. A “0” indicates that there is no relation between those functions. In this example the matrix is symmetric, indicating that all the links are bi-directional. This does not follow the standard rules of the N² chart, which are better explained by the following figure.

Illustration of how to read an N² diagram

The placement of the “1” (above or below the diagonal) determines which function is the source and which function is the target of the link.
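For illustration, an N² matrix is straightforward to build programmatically from the list of functional dependencies. Here is a minimal sketch with hypothetical function names:

```python
# Sketch: build a directed N² matrix from functional dependencies
# (source function -> target function). Function names are hypothetical.
import numpy as np

functions = ["F1", "F2", "F3", "F4"]
links = [("F1", "F2"), ("F2", "F3"), ("F3", "F1"), ("F2", "F4")]

idx = {f: i for i, f in enumerate(functions)}
n2 = np.zeros((len(functions), len(functions)), dtype=int)
for src, dst in links:
    n2[idx[src], idx[dst]] = 1  # row = source, column = target

print(n2)
```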

The use of a coupling matrix is mentioned in the INCOSE SE Handbook as a useful practice:

Coupling matrices (also called N² diagrams) are a basic method to define the aggregates and the order of integration (Grady, 1994). They are used during architecture definition, with the goal of keeping the interfaces as simple as possible… Simplicity of interfaces can be a distinguishing characteristic and a selection criterion between alternate architectural candidates. The coupling matrices are also useful for optimizing the aggregate definition and the verification of interfaces.

INCOSE Systems Engineering Handbook, 4th edition, 2015, chapter 4.4.2.6 “Coupling matrix”

From this matrix we can compute a coupling value for the interfaces defined between the logical components (deduced from the interfaces between the functions allocated to these components). The coupling value is an evaluation of the coupling complexity between logical components, based on the following formula derived from the software coupling metric in Dhama, “Quantitative models of cohesion and coupling in software”, Journal of Systems and Software, vol. 29, Apr. 1995:

 

`"Coupling"(C_(M_(k))) = 1-1/(d_(i)+2*c_(i)+d_(o)+2*c_(o)+w+r) `

`"Coupling Value" (C_(v)) = sum_(k=1)^n[C_(M_(k))] `

 

where the parameters are defined as follows:

  • `M_(k)`: logical component under consideration
  • `d_(i)`: number of input data parameters
  • `c_(i)`: number of input control parameters
  • `d_(o)`: number of output data parameters
  • `c_(o)`: number of output control parameters
  • `w`: number of modules called (fan-out)
  • `r`: number of callers of the module under consideration (fan-in)
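As a minimal sketch, the metric translates directly into code (the example values are hypothetical):

```python
# Sketch of the coupling metric above (derived from Dhama, 1995).

def coupling(di, ci, do, co, w, r):
    """Coupling of one module; the closer to 1, the more coupled."""
    return 1 - 1 / (di + 2 * ci + do + 2 * co + w + r)

def coupling_value(modules):
    """Sum of the per-module couplings over a candidate architecture."""
    return sum(coupling(*m) for m in modules)

# Hypothetical module: 2 data inputs, 1 control input, 1 data output,
# no output control parameter, no fan-out, 1 caller:
print(coupling(di=2, ci=1, do=1, co=0, w=0, r=1))  # 1 - 1/6 = 0.833...
```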

Now, let us see an illustration on a case study.

A sample case to illustrate the definition and use of the N² diagram

Our sample case is based on a case study elaborated within IRT St-Exupery called AIDA (Aircraft Inspection by Drone Assistant). This example was initially developed in a Capella environment and is available at https://sahara.irt-saintexupery.com/AIDA/AIDAArchitecture. For this article, we have translated the sample case to the SysML language.

In the previous article (part 4), we used this sample case to show how we can initialize the logical architecture from the functional architecture with the use of an allocation matrix between functions and logical components. In this article, we use it again, but this time we explain how to use optimization techniques to determine automatically the “best fit” to minimize the coupling. In other words, we want to define one or several possible allocations (illustrated by allocation matrices) between functions and logical components that minimize the coupling between the components.

Let us see this in practice.

N² diagram from the Functional Architecture

In the previous article (part 4) we showed a possible functional architecture elaborated for this sample case. We recall it in the figure below:

 

For this functional architecture, we can extract 2 N² diagrams for the leaf functions by analyzing their dependencies:

  • Data/energy/material flows
  • Control (Enable/Disable or Trigger) flows

The results are displayed in the 2 figures below:

N² matrix for data/energy/material flow


N² matrix for control flow

Now we want to define a logical architecture that minimizes the number of interfaces between its subsystems.

Optimization of allocation between functions and logical components

From a functional N² diagram to a logical architecture…

To perform this optimization, we analyze the function-to-function coupling matrices introduced previously and we use them as input for a genetic algorithm presented later in this article. This algorithm will progressively iterate over different possible logical architectures and will calculate the coupling between components. In the end, it will select the architectures that minimize the coupling metric.

Let us look at a possible logical architecture. How do we define it? We simply define the components (or modules) as groups (or partitions) of functions.

As an example, in the figure below, the orange part illustrates an allocation (or partitioning) strategy of the 9 functions into 3 modules: M1, M2, and M3. In this figure, we are not interested in the internal structure of each module, which is why we do not represent the functional interfaces between functions of the same module. However, we want to see the functional interfaces between functions allocated to different modules, because they will give us the logical interfaces. If we focus on the M3 module, we see the M3 inputs in green and the M3 outputs in blue.


Example of coupling analysis for 3 modules (green highlight on Module 3 inputs, blue highlight on Module 3 outputs)

Note: we recommend reading the previous article for more details on the relationships between the functional architecture and the logical architecture.

From this matrix with modules, we can now compute a coupling metric regarding the interfaces of the modules.

Using a Genetic Algorithm to optimize the allocation of functions

Genetic algorithms are algorithms inspired by evolutionary principles. The main purpose of this kind of algorithm is to explore the solution space of a problem in order to satisfy a set of criteria. The general principles of genetic algorithms are illustrated in the figure below.

Genetic Algorithm for Functions Allocation

Genetic Algorithm Process

The first step is to randomly create a set of initial subjects (1). This set is called the initial population. The initial population is composed of subjects, each representing a possible set of function allocations. Then, the algorithm evaluates each subject using a fitness function (2). This function makes it possible to give a value, or a rank, to a subject, in order to estimate its proximity to the “optimal” solution. In our case the fitness function is the coupling equation C. The candidates that are too far from the desired solution are deleted (3).

Then the algorithm evaluates the number of remaining subjects. For instance, if the population size is less than or equal to 4, the algorithm returns the best solution amongst the 4 remaining subjects. On the contrary, if the population size is greater than this threshold, the algorithm continues. And this is where things become interesting…

Here begins the core biomimicry part of the genetic algorithm: the remaining subjects cross over, i.e. they exchange their genes to produce new subjects (4). Finally, the newly created children are subject to mutation (5): part of their characteristics change randomly. Cross-over and mutation are useful to stay away from local optima by spreading new subjects through the solution space.

Genetic algorithms are configurable using the following set of parameters:

  1. Initial population size – a key parameter to ensure enough coverage of the solution space at the beginning
  2. Max generation number – a parameter to ensure that the algorithm ends even if the population grows
  3. Percentage of survivors – the percentage of the worst subjects to delete
  4. Percentage of parents – the percentage of subjects that cross over
  5. Percentage of children to mutate – the percentage of new subjects to mutate after the cross-over
  6. Percentage of genes to mutate – the percentage of genes to mutate for each new subject
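To make the process above concrete, here is a minimal sketch of such a genetic algorithm applied to the allocation of 9 functions to 3 modules. The fitness is a simplified coupling term computed from the N² matrix, and all parameter values are hypothetical; this is an illustration of the principle, not the implementation we used:

```python
# Sketch: genetic algorithm allocating N_FUNC functions to N_MOD modules,
# minimizing a simplified Dhama-style coupling. All values are hypothetical.
import random

random.seed(1)
N_FUNC, N_MOD = 9, 3
# n2[i][j] = 1 if function i sends a flow to function j (the N² matrix);
# here it is randomly generated for the sake of the example.
n2 = [[random.randint(0, 1) if i != j else 0 for j in range(N_FUNC)]
      for i in range(N_FUNC)]

def fitness(alloc):  # alloc[i] = module of function i; lower is better
    total = 0.0
    for m in range(N_MOD):
        # count inter-module flows crossing the boundary of module m
        d_in = sum(n2[i][j] for i in range(N_FUNC) for j in range(N_FUNC)
                   if alloc[j] == m and alloc[i] != m)
        d_out = sum(n2[i][j] for i in range(N_FUNC) for j in range(N_FUNC)
                    if alloc[i] == m and alloc[j] != m)
        total += 1 - 1 / (d_in + d_out + 1)  # simplified coupling term
    return total

def evolve(pop_size=50, generations=100, survivors=0.5, mut_rate=0.2):
    pop = [[random.randrange(N_MOD) for _ in range(N_FUNC)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # (2) evaluate and rank
        pop = pop[: int(len(pop) * survivors)]     # (3) delete the worst
        children = []
        while len(pop) + len(children) < pop_size: # (4) cross-over
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, N_FUNC)
            child = a[:cut] + b[cut:]
            for g in range(N_FUNC):                # (5) mutation
                if random.random() < mut_rate:
                    child[g] = random.randrange(N_MOD)
            children.append(child)
        pop += children
    return min(pop, key=fitness)

best = evolve()
print("best allocation:", best, "coupling:", round(fitness(best), 3))
```

Predefined allocations, discussed in the next paragraph, can be handled by simply re-imposing the fixed genes after each cross-over and mutation.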

What about constraints on allocations?

In practice, systems engineers already have good ideas of some allocations between functions and components, or must respect fixed allocation constraints (for different reasons, including security, performance…). So the genetic algorithm shall take these predefined allocations into account.

We have defined our genetic algorithm to be able to take as input a predefined partial allocation matrix with existing constraints. These constraints are considered by the algorithm that will then define possible logical architectures respecting the given constraints.

Selection of the “best” logical architecture that minimizes coupling

The genetic algorithm presented previously gives us one or several possible logical architectures that minimize the coupling between components while conforming to the functional architecture and to any allocation constraints. We can use the results to generate or complete the allocation matrix between our functions and the components, as presented below.

Allocation Matrix

Thanks to the completion of this allocation matrix, we can deduce a logical architecture, as explained in the previous article (part 4), that shows the different logical subsystems with their allocated functions and keeps the functional flows coming from the functional architecture.

Logical architecture after allocation of functions using GA

 

Can we automate some of the steps presented above?

Yes!

At Samares Engineering, we have investigated the automation of the following steps:

  • Extracting the initial N² Matrix from the functional architecture, for both data/energy/material and control flows (a sketch of this extraction follows the list)
  • Exploring candidate logical architectures (functional to logical allocation) to automatically find the candidate architectures where the coupling metric is at a minimum value using the genetic algorithm.
  • Defining allocation constraints (for example UAV control position function can be forced to be allocated to the Flight Control System).
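
The first step is little more than counting flows once they are exported from the model. A minimal sketch, assuming the flows are available as (source, target, kind) triples rather than through any particular tool API, and counting all kinds together for simplicity (one matrix per kind would follow the same pattern):

    def n2_matrix(functions, flows):
        """Build the N2 matrix from a list of functional flows.

        functions: ordered list of function names
        flows: iterable of (source, target, kind) triples, with kind in
               {"data", "energy", "material", "control"}
        """
        index = {name: i for i, name in enumerate(functions)}
        matrix = [[0] * len(functions) for _ in functions]
        for source, target, _kind in flows:
            matrix[index[source]][index[target]] += 1
        return matrix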

 

 

Enjoy MBSE!

Acknowledgements

We are warmly grateful to Yash Khetan and Minghao Wang for their contribution. It was great to work with both of you. See you!

Next articles to come…

  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • January 2021 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again
  • June 2020 – Early validation of stakeholder needs through functional simulation
  • July 2020 – Consistency between functional and logical architectures

 


Part 4 – Consistency between functional and logical architectures

 

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

 

 

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architectures and lower-level requirements, down to configuration items containing software and hardware parts.

 

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

 

 

 

 

This fourth article deals with functional and logical architectures. We discuss the following questions: Why do we need a logical architecture? And how do we ensure the consistency between the functional and logical architecture?

 

Why do we need a logical architecture?

 

In most industrial practices, and in various industrial domains, systems engineers are used to defining one (and sometimes several) functional architecture(s). This architecture formalizes an arrangement of system functions using two viewpoints: the Functional Breakdown Structure (FBS), which shows the decomposition hierarchy as a tree (“parent” and “child” functions), and the connection graph, which shows the functional flows (energy, information, matter) between those functions.

 

As an illustration, let us take the AIDA open-source sample case from the Saint Exupery Technological Research Institute in Toulouse: https://sahara.irt-saintexupery.com/AIDA/AIDAArchitecture.

 

AIDA stands for “Aircraft Inspection by Drone Assistant”. AIDA provides assistance during the inspection of an aircraft before flights: the drone looks for aircraft defects.

 

A320 Pre-Flight Checks Procedure

 

The drone system contains 9 top-level functions:

 

  • Manage mission
  • Build flight plan relative to aircraft type
  • Fly to
  • Retrieve PoI (Points of Interest)
  • Make and record videos
  • Check wind force
  • Monitor UAV control
  • Sense and avoid obstacles
  • Emergency landing

 

The definition of these functions is formalized with Blocks in SysML.

 

We use an IBD to formalize the functional architecture. Practically, this diagram displays the usage of the functions in their operational context (SysML part properties typed by the previously mentioned blocks), the interfaces (connectors with item flows) between the SoI and the other members of the system context, and the interfaces between usages of functions (also connectors with item flows).

 

A possible functional architecture for the identified top-level functions is provided below:

 

AIDA Top Level Functions

 

Some of the top-level functions are still complex and need to be refined through lower-level functions. So we can build a functional architecture that displays several levels of functions as illustrated below:

 

AIDA Functional Architecture Details

 

When developing a system, it is also common to find a description of the physical components. By “physical components”, we mean a hardware part, a software piece, or any combination of those elements. This includes processors, sensors, structure, propellers, etc.

 

The problem comes when we want to allocate our functions to the physical components. In the frame of a complex system, the list of physical components may become very large, especially when this list is not finalized and contains many alternatives. For instance, in order to allocate the “sense wind” function, we may find a lot of different technologies and means to perform the measurement, mixing software and hardware features.

 

As the final physical architecture shall satisfy all non-functional requirements including reliability and availability, we generally introduce redundancy of safety-critical components to ensure its availability even when there are failures in one of the components. In the end, the number of physical elements to consider for allocation is huge.

 

Let us take the previous example to illustrate a non-exhaustive list of physical components:

 

 

The top-level functions, identified from the needs expressed by the customer and users, are hard to allocate to the identified physical components because the abstraction gap between the system functions and the physical components is large. We need an intermediate layer that partitions functions into items representing an abstraction of the final technologies. This is the “logical architecture” layer.

 

The logical architecture as an intermediate layer

 

As stated by the INCOSE Systems Engineering Handbook (4th ed.), the logical architecture definition consists in decomposing and partitioning the system into logical elements

[…]. The elements interact to satisfy system requirements and capture system functionality. Having a logical architecture mitigates the impact of requirements and technology changes on system design.

The logical architecture is an arrangement of “logical components” that perform the functions. This first allocation is easier to perform because we can group functions with criteria such as cohesion, coupling, design for change, reliability, and performance.

Later, we will have to do a second allocation: allocate logical components to physical components (with technology). This second step is also easier than the direct allocation from functions to physical components, because we only have to look for technologies/products available on the market that satisfy an already-defined logical component.

Let’s go back to our AIDA example. Here is a possible set of logical components for our system of interest surrounded by its environment (as in the functional architecture):

  • Mission management subsystem
  • Propulsion subsystem
  • Flight management subsystem
  • Vision Subsystem

Initial logical architecture

Here is an example using the SysML allocation matrix (within the Cameo Systems Modeler environment) to create the allocations of functions to logical subsystems.

 

How do we create the logical architecture?

When creating a logical architecture, it is possible to connect the logical components directly in the diagram, using engineering knowledge: it is sometimes already known that 2 components will exchange information or energy. However, the rationale for connecting the 2 components is then missing. In the end, the logical architecture may miss interfaces or contain useless interfaces.

Therefore, the logical interfaces shall not be fully independent of the functional interfaces. The logical components reflect the partition of functions and should thus reflect the functional flows. There must be consistency between the functional architecture and the logical architecture.

The next chapter explains this in detail.

 

Consistency between the Functional architecture and the logical architecture

We return to the AIDA sample case to illustrate this consistency with a few functions and allocations. Instead of looking at the full functional architecture, we will focus on a simple extract with only 3 leaf functions coming from the “Make and record videos” top-level system function:

  • “Manage Photos Recording”,
  • “Control Camera Orientation”
  • “Record Photos and Videos”

Now we want to allocate the first 2 leaf functions to the “Mission Management Subsystem” (in blue) and allocate “Record Photos and Videos” to the “Vision Subsystem” (in red) as illustrated below:

Note: in SysML, we use the SysML allocation matrix to edit (create and delete) these “allocation” relations. The allocation described above leads to the following matrix.

First allocations

Now we would like to reflect the impact of these allocations on the logical architecture. Practically this means:

  • Display the functions inside their components
  • Display the functional flows between functions through the ports of the logical components because we want to respect the “encapsulation principle” of the components (a component can show or not show its internal structure but its ports do not change)
  • Display the functional flows with the system environment (through the System external ports)

In our example, for the subset of the functional architecture and the 3 allocations, it results in the following logical architecture with the creation of 3 logical flows (in orange):

Logical Architecture result after allocations

We can see that the logical flows (in orange) come directly from the functional architecture: they are deduced from this functional architecture and from the allocation of functions to the logical components.
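
This deduction rule is compact enough to be written down. Here is a minimal sketch of it (not any tool's actual implementation), assuming the functional flows and the allocations are available as plain data:

    def deduce_logical_flows(functional_flows, allocation):
        """Deduce the logical flows implied by the function allocations.

        functional_flows: iterable of (source_function, target_function, kind)
        allocation: dict mapping a function to its logical component,
                    or to None when the function is not yet allocated
        """
        logical_flows = set()
        for src_fn, dst_fn, kind in functional_flows:
            src_c, dst_c = allocation.get(src_fn), allocation.get(dst_fn)
            # A functional flow surfaces at the logical level only when it
            # crosses a component boundary; it then passes through the
            # components' ports (encapsulation principle).
            if src_c and dst_c and src_c != dst_c:
                logical_flows.add((src_c, dst_c, kind))
        return logical_flows

For the extract above, the flow from “Control Camera Orientation” to “Record Photos and Videos” crosses from the Mission Management Subsystem to the Vision Subsystem, so a logical flow is deduced between those two components.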

Conclusion

There exists a relation between the functional architecture and the logical architecture. A logical subsystem can produce or consume flows only if one or several functions are allocated to it. In addition, some functions may appear directly at the logical layer, e.g., interface functions between subsystems, encoding functions, decoding functions, or electrical functions. These functions may make no sense at the functional system level since they depend on the chosen technologies and can be very detailed. But, whatever the abstraction level of the functions, the logical layer shall be consistent with the system functional layer.

Can we automate some of the steps presented above?

Yes!

Overview of the automation

At Samares Engineering, we have created a plugin to automate the update of the logical architecture (display of functions, creation of logical flows) according to the functional architecture and allocation of functions to the logical components. This propagation is done in real-time. And it works in both directions (creation and deletion of allocations, leading potentially to the creation or deletion of logical flows between logical components). So we can ensure that the logical architecture is always consistent with the functional architecture.

We can also show the functions inside each component or hide those functions and only show the components and their logical flows.

Take a look at the video below to see this automation in practice.

Simulation in practice (video)

This video shows how we can ensure consistency between a functional architecture and a logical architecture while editing the allocation of functions to the components, in real-time.

Enjoy MBSE!

Next articles to come…

  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • January 2021 – Co-simulation of SysML and other models through FMI

 

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again
  • June 2020 – Early validation of stakeholder needs through simulation


Part 3 – Early validation of stakeholder needs through functional simulation

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This third article puts a spotlight on a way to validate the stakeholder needs. We show how it is possible to use a modeling approach to structure and refine functional needs into a functional architecture. We also show that it is possible to simulate this functional architecture against operational scenarios expressed by the stakeholders. The simulation allows us to monitor some of the key system parameters and provides good support for validating stakeholder needs early in the development cycle.

Functional architecture is useful to support early validation of the system!

Sometimes we hear from some systems engineers that only the physical architecture is really useful to support validation. That is true if we target the end product. At this stage, we need an architecture as close as possible to reality (a characteristic sometimes called “fidelity”) to limit errors and wrong conclusions drawn from the results of the simulation. But if we focus on the validation of functional requirements, it is not a good idea to wait too long before starting validation, because we may be working with a wrong or incomplete capture of the functional needs. And we can already do a lot to validate these needs early in the development cycle, even with a purely functional (virtual) representation of the end product.

In order to reach early validation, there are different activities to perform:

  • The identification of the validation objectives
  • The identification of the system functions and their functional interactions with system operational context
  • The identification of the internal functional flows that support complete functional chains starting from operational scenarios

Identification of the validation objectives

First, we need to identify what we want to validate. The most important thing to keep in mind is the reason why we develop our system of interest: the mission(s) to support! So let us focus on the mission(s) of our system of interest and check that our system is able to support the mission profile (set of phases and states) and its expected performance in the operational context. You may think that we need to know the complete physical architecture to measure this performance. Yes, for the final detailed figures; but we can already approximate some elements and get a first rough idea without the full list of physical components. We will see in the next paragraphs that we can add behavior to the functions and thereby get good support for the calculation of the system performance.

Identification of the system functions and their functional interactions with the operational context

This activity consists in mixing two approaches: the engineering knowledge of the solution coming from systems engineers experience on one hand, and the needs expressed by the different stakeholders on the other.

The systems engineers will, through their experience, provide a set of functions often called “technical functions”, because they come from knowledge of the technical solutions/products commonly used in a similar context. The needs expressed by the various stakeholders will lead to what we call “service functions” or “required functions”. These functions are generally identified through a set of scenarios that cover the different lifecycle concepts. The functional architecture will arrange the functions so that we can support top-level required functions with technical functions, as illustrated below.

Now let us see in practice how we can use a model-based approach to support these activities.

At the French chapter of INCOSE, called AFIS, in the MBSE technical committee, we have created a working group to discuss the use of functional model simulation as a means to reach early validation of the system functional requirements. We quickly discovered that it would be useful to compare our different approaches on a common sample case, and we have chosen a connected washing machine for this exercise. It is a system that everyone knows, at least as an end-user.

We use this sample case to illustrate the suggested approach.

A sample case to illustrate the use of a functional model simulation as a means for early validation…

Our sample case is a connected washing machine. The “connected” part means that you can start and monitor the progress of the washing from your smartphone.

The description is available online here: https://www.samsung.com/fr/washing-machines/front-loading-ww90m645opw/

Concerning the functional behavior, we are not specialists, so we have extracted knowledge from this website (in French): https://www.spareka.fr/comment-reparer/electromenager/lave-linge/fonctionnement

Focus on the mission and identification of the operational scenarios

Let us start by looking at the missions / use cases for this system of interest. We want to be able to wash clothes, either directly or remotely (from our smartphone).

According to these use cases, we have 2 main scenarios that describe the interactions between the house's inhabitants and the connected washing machine. We use a Use Case diagram to represent the different system missions and Sequence Diagrams to represent the interactions, as illustrated below:

Note: these sequence diagrams are simple, and this is on purpose. We do not introduce advanced operators like loop, parallel, or alternative, in order to keep the diagrams very simple and easy to review by end-users and customers who are not necessarily familiar with the SysML notation.

From an operational scenario to a validation scenario…

The different operational scenarios defined previously (through Sequence Diagrams) can be reused as skeletons for the future validation of the system. You may ask: “What is the difference between an operational scenario and a validation scenario?” A validation scenario is more detailed than the operational scenario. It contains the same list of interactions but also some additional elements:

  • Concrete values for the different stimuli sent to the system (from external systems or humans)
  • Some delays between interactions in order to reflect human behavior (a human is not a robot that can immediately trigger the stimuli one after the other)
  • Some observations about the system behavior during the mission

We can use an activity diagram to formalize a validation scenario. It can be translated from the operational scenario quite easily:

  • Each input message is translated into a “send signal action” so that we can send a signal to the system
  • Each output message coming back to the operator is translated into an “accept event action” that waits for the arrival of the signal.

Then let us see how we complete this scenario to allow some validation.

  1. The concrete values used as inputs for the stimuli are formalized with the “ValueSpecification” concept and are transmitted to the “send signal actions” through “object flows”. In the example shown below, we load 5 kg of dirty clothes and 0.1 liter of detergent.
  2. The delays between the human interactions are formalized with “AcceptTimeEvent” elements, with the delay expressed in “relative” mode (after XX seconds).
  3. The observations are detailed in the next paragraph.

Translation of an operational scenario into a validation scenario with some complements
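
Stripped of the SysML machinery, a validation scenario is an ordered script of stimuli with concrete values, relative delays, and expected observations. Here is a sketch of that structure with the values from the example (5 kg of clothes, 0.1 liter of detergent); the signal names and the simulated system interface (send and wait_for) are illustrative assumptions:

    import time

    # Each step is (action, payload): stimuli with concrete values, relative
    # delays reflecting human behavior, and observations of the system.
    scenario = [
        ("send",   ("LoadClothes",   {"mass_kg": 5})),
        ("delay",  3.0),                                  # after 3 seconds
        ("send",   ("LoadDetergent", {"volume_l": 0.1})),
        ("delay",  2.0),
        ("send",   ("StartProgram",  {"program": "cotton_40"})),
        ("expect", ("WashingDone",   {"timeout_s": 5400})),
    ]

    def run(scenario, system):
        """Replay a validation scenario against a simulated system."""
        for action, payload in scenario:
            if action == "send":                 # send signal action
                signal, values = payload
                system.send(signal, values)
            elif action == "delay":              # accept time event (relative)
                time.sleep(payload)
            elif action == "expect":             # observation with a timeout
                signal, options = payload
                assert system.wait_for(signal, options["timeout_s"]), \
                    signal + " was not observed in time"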

Identification of the functions

We start the identification of the functions by looking at the system operational context. It gives us the inputs and outputs of the system. We can use an Internal Block Diagram (IBD) to represent this context.

The system of interest seen as a black box in its operational context

Note: we distinguish different kinds of flows: information, energy, and matter. We use SysML stereotypes (additional semantics on SysML concepts) in order to manage those specialized flows and their associated ports. We can associate a given color and a given icon with each stereotype, which makes reading easier. A legend is available at the top left of the diagram.

Then the functions are identified by following some simple patterns:

  • The interactions from our System with its context (physical environment, other connected systems, and human interactions) are encapsulated with interface functions that are in charge of managing those interactions.
    • in our sample case, we find “Manage Human interaction” and “Manage Water”
  • The mission progress is managed by a dedicated function. In our sample case, it is called “Manage Washing Program”
  • Finally, we add all the functions required to manage human interactions and to support the mission
    • In our case, we add “Store Water and Clothes” to ensure the human interaction concerning clothes
    • We add “Provide Washing Movement” to clean the clothes.

We represent the usage of those functions as “part properties” inside our System Of Interest (SoI).

Addition of “service functions” to the system seen as a black box

Note: the functions are all enabled by default in this diagram but some may be disabled by other functions in some conditions during system execution. The function adornment would then change with a new symbol as explained in the legend (top left of the diagram).

Support of service functions with technical functions

Once we have a good idea of our service functions, identified from the operational scenarios and the operational context, we have 2 options: either we are able to specify their behavior (to support functional simulation), or we consider that the function is too complex or has too many objectives, and we refine it with lower-level technical functions. In the latter case, we use our engineering knowledge to identify those technical functions.

In our case, the “Manage Water” function is still complex and needs to be refined into lower-level technical functions. From the website used to understand the washing machine behavior, we learn that several functions are needed to manage the water: manage the water level, heat the water, and store the washing detergent. We connect these functions with the external environment (water supply and sewer, human interactions). We use an IBD to show this refinement:

Observations of the SoI to reach validation objectives – support by function behavior

Now we want to ensure that we can observe our system in order to check that the system behaves as expected and supports the mission performance. Once we have identified all key parameters to monitor we will be able to define the functional behavior required to compute those key parameters and to carry them to the end-user.

What are the key elements we want to observe from our system of interest seen as a black box?

  • First, we want to see the progress of the mission for which the system is being developed. In our sample case, we want to monitor the state of the program over time: is it filling with water? washing the clothes? purging the water? spinning the clothes?…
    • We use a dedicated function to manage the mission states (as presented previously) and a state machine diagram to represent the different states and their transitions over time
    • This state machine can be simulated using the Cameo Simulation Toolkit as illustrated below
Simulation of the function “Manage Washing Program” formalized as a state machine diagram

Note concerning the colors:

The red color represents the current active state during the simulation session.
The green color represents all the states that have been simulated since the beginning of the simulation session.
The yellow color represents the current transition being triggered (if any).
  • Then we want to verify the Measures of Effectiveness (MoEs). These are the measures used to ensure that the mission is successful in its operational context. We want to be sure that the system will fulfill its mission with adequate performance. In order to do that, we need to monitor over time the key system parameters used to calculate the MoEs: water level, number of turns per minute, remaining time…
    • We use equations and parametric diagrams to bind the system key parameters with the equation parameters when there are continuous flows (like the water flowing in and out); a plain numerical sketch of this balance equation follows this list.
Equation for calculating the water level over time according to the flow of water
  • Concerning the water level management, we just need to focus on the modes of the function. This can be done through the use of a state machine: transitions are triggered by key events that come from the control of the washing program steps, and for each “state” we define a simple behavior with a simple “Activity” element (using the “doActivity” to reference those activities).
  • Concerning the human interactions, we can use an activity diagram to represent the configuration of the program, as illustrated below:
Use of an activity diagram to formalize a function behavior
  • Finally, we want to visualize the different MoEs in a synthetic way. We can use plots (provided by Cameo Simulation Toolkit) to show the different curves of the key parameters over time
Plots that show the evolution of key parameters over time
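
The parametric diagram introduced above essentially encodes a water balance equation, d(level)/dt = q_in(t) - q_out(t). A plain numerical version (forward Euler, with illustrative flow rates) could look like this:

    def water_level(q_in, q_out, level0=0.0, dt=0.1, duration=60.0):
        """Integrate d(level)/dt = q_in(t) - q_out(t) with forward Euler.

        q_in, q_out: functions of time returning flow rates (liters/second)
        Returns a list of (time, level) samples, e.g. to feed a plot.
        """
        samples, level, t = [], level0, 0.0
        while t <= duration:
            samples.append((t, level))
            level += (q_in(t) - q_out(t)) * dt
            t += dt
        return samples

    # Example: fill at 0.2 l/s during the first 30 seconds, no draining yet.
    samples = water_level(q_in=lambda t: 0.2 if t < 30 else 0.0,
                          q_out=lambda t: 0.0)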

Finalization of the functional architecture

The system functional architecture is finalized by connecting all the system functions with the SoI operational context and with each other using internal flows.

3 kinds of functional flows

Each functional flow is of one of 3 kinds: information, energy, or material. Using different icons and colors to represent these kinds gives the reader immediate feedback and makes the diagram easy to read. The reader can easily focus on one given kind.

Functional architecture formalized as an IBD

We want the functional architecture to support simulation. This means that each function must have an associated behavior that can be executed (simulated). Depending on the function, we can use the following behaviors:

  • State machine (introduced previously to represent the states of the function “Manage Washing Program” over time)
  • Parametric diagram (introduced previously to represent a differential equation with regards to water level over time)
  • Activity diagram to complete a state machine diagram or to specify some constant values as illustrated previously for function “manage human interaction”

Note: we can also use an opaque behavior to represent an external behavior such as a Matlab function, a Modelica equation, or a Functional Mock-up Unit (see the FMI standard for more information).

Driving the simulation with graphical support

  • Finally, we may also want to drive the simulation using a Human Machine Interface (HMI) mock-up that reflects the future operations performed by the end user. In our case, a person can use his/her smartphone to drive and monitor the progress of the washing.
    • For this purpose we can use dedicated widgets provided by the Cameo Simulation Toolkit to represent the future HMI and bind some system states with panels or images as illustrated below:
HMI to drive and monitor the connected washing machine

Can we automate some of the steps presented above?

Yes.

We have created a plugin to automate the transformations between the operational scenarios and the validation scenarios. We still need to complete those validation scenarios but this is easy to do when the list of interactions has already been translated from the sequence diagrams.

We have also defined a dedicated functional architecture editor (called FAS for “Functional Architecture Synthesis”) that provides support for the choice of the different kinds of functional flows and that can create the function ports automatically when needed to ensure the encapsulation principle (all functional flows are passed through the ports of the parent function).

Simulation in practice (video)

Look at the video at the bottom to see 3 validation scenarios executed through model simulation.

3 validation scenarios grouped into one state machine

In the first scenario, we use the simulation console to monitor the key parameters’ values during the simulation in addition to the plots that show the progress over time.

In the second scenario, the focus is put on the HMI used to drive the scenario (representing a smartphone). There is no use of the simulation console: both the control and the monitoring are done through this HMI.

The last scenario is a rainy day scenario. It shows that it is possible to describe dysfunctional scenarios and use them to see how the system behaves in abnormal conditions.

Enjoy MBSE!

Next articles to come…

  • July 2020 – Consistency between functional and logical architectures
  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again

Part 2 – From textual requirements to model and to textual req again

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This second article puts a spotlight on the zig-zag between the top-level system requirements, often expressed as text, the system model that will be used to satisfy those requirements through functional and physical architectures, and the lower-level system requirements that can be partially derived from those architectures. We detail why it is important to clearly define the repository of requirements at each stage of the process. Finally, we demonstrate that we can combine the use of a requirement management tool and of a modeling tool to improve the quality of requirements without duplicating the work.

Functional requirements and functions

The functional analysis consists of analyzing the top-level functional needs and requirements and building one or several functional architectures that satisfy those requirements.

We start by identifying the main functions that satisfy the different functional requirements and we use a traceability matrix to check that each top-level functional requirement has been taken into account by at least one function. Next, we decompose the main functions into lower-level functions that are easier to understand and to manage. This functional breakdown is recursive, down to the level where the leaf function can be fully performed by a component available on the market or internally, or fully allocated to a subsystem (that will be defined by another team).
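
The check behind such a traceability matrix (every top-level functional requirement satisfied by at least one function) is also easy to automate outside the modeling tool. A sketch, assuming the “Satisfy” links are exported as (function, requirement) pairs:

    def uncovered_requirements(requirements, satisfy_links):
        """List top-level functional requirements not satisfied by any function.

        requirements: iterable of requirement identifiers
        satisfy_links: iterable of (function, requirement) pairs
        """
        covered = {requirement for _function, requirement in satisfy_links}
        return [r for r in requirements if r not in covered]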

Let us take the example of a UAV for healthy agriculture, in charge of spraying a treatment solution on crops attacked by pathogenic agents. One of its top-level functional requirements is to treat while flying. We can define a main function called “Fly and treat” that will be in charge of satisfying this top-level functional requirement.

This main function can be decomposed into 2 functional units that address respectively the flight (Follow flight plan at constant speed) and the treatment (Treat the crop). And we can continue the decomposition of these 2 functions…

Now let us look at the way we can use SysML to perform these two activities: traceability and functional breakdown.

When using the SysML notation, we can formalize a function through different concepts:

  • The “Block” concept is enough if we are only focused on the structural decomposition of the functions (also called functional breakdown)
  • The “Activity” concept is well adapted if we want to use the behavior to identify and decompose the functions
  • A combination of both a block and a behavior definition concept (state machine, activity, opaque function) is useful when we want to have the flexibility to specify the function and its behavior separately.

In this article we will use the “Block” concept and we will define a “Function” stereotype to distinguish the functions from components (also based on the “Block” concept). The traceability of main functions to top-level system requirements can be achieved through a “Satisfy” requirements matrix. The Functional Breakdown Structure (FBS) of a given main function can be represented either with a Block Definition Diagram (BDD) or with its dual internal representation, the Internal Block Diagram (IBD).

Extract of a traceability matrix from main functions to top level system requirements (left) and FBS as BDD or IBD (right)

For each new function that has been identified, we specify new functional requirements. This gives us two parallel hierarchies that are strongly related: the functional requirements tree and the functions tree.

And then comes the key question: “Where should I store these new functional requirements? In the SysML modeling tool or in my Requirement Management (RM) tool?”

RM tool and SysML Modeling tool – How can we ensure synchronization?

We are used to managing and storing the requirements in a Requirement Management tool. For small projects, such a tool could be Microsoft Word or Microsoft Excel, but for large projects we generally use a dedicated commercial solution (such as DOORS, DOORS Next Generation, Polarion, Jama, Aras Requirements…). Most of the time, the system specification is entirely built from system requirements managed, documented, and reviewed in this requirement management tool.

If we keep this principle of a dedicated tool to manage requirements, this means that we have to add our new functional requirements into this tool. The challenge is to decide how to distribute the activities between the RM tool and the Modeling tool in order to avoid duplicating the efforts and ensure good consistency between the requirements and functions.

The first option is to use the requirement management tool as the only reference to create and maintain the requirements, at any time, and use the modeling tool only to create and maintain the functions. What about the functional requirements just derived from the functions? Should we put them in the RM tool as soon as we identify them? In that case we need to go back and forth between the RM tool and the SysML tool to ensure the consistency between the new functions formalized in SysML and the functional requirements derived from those functions that have to be created or updated in the RM tool. It means that we need to switch between both tools at every change in the functional architecture. It looks painful… and might be an agility killer…

Another option is to create the requirements in the modeling tool, close to the functions they specify, and keep those requirements in the modeling tool until the functional architecture is finalized. If the functional architecture changes (new functions, removed functions, changes in function inputs, outputs, activation in modes, performance…), it is quite easy to adapt the functional requirements because all elements are in the same tool and we can use traceability links to analyze the impacts. When the functional architecture is considered as finalized, then it is time to extract the functional requirements and put them into the RM tool to complete the specification.

If we look at the previous example, we can create a “relation map” diagram that shows the relations between the top-level functional requirement (Fly and Treat), the main functions associated to it, the sub-functions, and a first draft of their associated functional requirements.

Note: The derivation of requirements from a function is an advanced topic that requires some explanation. Here we show the derivation of only ONE basic (draft) requirement for simplification but a function generally leads to the identification of several requirements, built from the combination of function performance criteria and lifecycle “situation” (phase/mode/state… and conditions) in which the function is active. And each of the identified requirements will later be completed to prepare its verification, leading to an improvement of the requirement maturity and quality.

The derivation of requirements will be detailed in a future dedicated article.

Relation map that explains the extended traceability between top level requirements and lower-level requirements through functions

Note: the relation map can be read through the following sequence of relations: the top-level functional requirement (FlyAndTreat) is satisfied by a Function (Block) that is composed of the two sub-functions fly and treat (part properties) that are each typed (defined) by a function (block) that satisfies requirements.

So far, so good. But what happens if one of my colleagues is defining some lower-level functional requirements in the RM tool while I’m defining my functional breakdown in the modeling tool? We are simply doing the same exercise concurrently through 2 different means and in 2 different tools: refine the functional requirements. Double efforts for the same value…

You may smile at this situation, but it is something that happens regularly in the industry, especially when the modeling activity has not been defined in the development plan. So it is necessary to think about it. The important principle is to ensure that there is only one reference for the modification of the requirements at a given stage of the development process.

When using the second option, we have requirements managed in two repositories. Thus, it is necessary to clarify which requirements can be modified by which tool to conform to this important principle:

  • The top-level functional requirements are defined and maintained in the RM tool and propagated in the modeling tool
  • The lower-level functional requirements are defined and maintained in the modeling tool and propagated later in the RM tool

This approach makes things clear by using only ONE requirement repository at a given stage. It allows an efficient definition of the requirements by using different tools according to each stage.

We suggest 3 different stages to organize the responsibility in the modification of the requirements:

  1. Before the SRR (Systems Requirements Review): the RM tool is used to define and document all top-level system requirements
  2. During the elaboration of the functional architecture: the modeling tool is used to define and document lower-level (refined) functional requirements derived from the functions. This stage ends with the preparation of the Preliminary Design Review (PDR).
  3. From the preparation of the PDR onward: the RM tool is used to gather all system requirements, including the ones coming from the functional model, in order to ease the review of all system requirements and to generate the complete system specification.

In order to support this distributed work on requirements through 2 different tools, we also need to ensure that we can propagate the requirements between tools in an easy way. This question is addressed in the next chapter.

Can we automate the transfer of requirements between tools?

The answer is yes for many situations.

If you use Cameo Systems Modeler as your SysML modeling tool, you may know that 3DS provides a companion tool called “Cameo DataHub” that is able to synchronize objects between Cameo Systems Modeler and DOORS (as well as some other RM tools). Clearly this is a good solution to ensure that requirements are aligned between both tools at a given point in time.

But this is not enough. We also need the traceability between top-level and lower-level system requirements. If we place lower-level requirements in DOORS without traceability to top-level requirements, then the modeling may be considered as useless and a waste of time and efforts. Traceability is very important because it gives us a powerful means to analyze a change in top-level requirements and immediately identify the lower-level requirements on which there may be impacts.

The idea is simple: let us extract this traceability from Cameo and let us create direct links between both levels of requirements as we would have done directly in DOORS. A small CSM plugin can do this: extract the traceability chain and recreate the direct links instead of using intermediate modeling elements (the functions).

Automation to capture the extended traceability links between requirements

Once this is done, we can synchronize both the lower-level requirements and their traceability links to top-level requirements between CSM and DOORS.
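
The core idea of such a plugin (this is a sketch, not its actual code) boils down to composing two sets of links through the functions:

    def direct_links(function_satisfies, requirement_derived_from):
        """Collapse requirement -> function -> requirement chains into
        direct links between requirements.

        function_satisfies: pairs (function, top_level_requirement)
        requirement_derived_from: pairs (lower_level_requirement, function)
        Returns pairs (lower_level_requirement, top_level_requirement) ready
        to be pushed to the RM tool as ordinary traceability links.
        """
        tops_of = {}
        for function, top in function_satisfies:
            tops_of.setdefault(function, set()).add(top)
        return {(low, top)
                for low, function in requirement_derived_from
                for top in tops_of.get(function, ())}

In the real model the chain also traverses the functional breakdown (main function to sub-functions), which composes in exactly the same way before the direct links are pushed to DOORS.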

That’s it! Finally, we get our 2 levels of requirements in DOORS with traceability exactly as if we had worked only in DOORS. But we have in fact used CSM to help us in building a functional architecture as an intermediate step, which leads to a far better quality of the requirements once put back in the RM tool!

Zig-zag with synchronization and automation in practice (video)

This short video shows the presented zig-zag pattern between the RM tool and the SysML tool in practice.

Note: the derivation of lower-level requirements is very basic in this video (as it was not the main topic and we did not want to spend time on it). There will be additional material on this topic at a later date.

Next articles to come…

  • June 2020 – Early validation of stakeholder needs through simulation
  • July 2020 – Consistency between functional and logical architectures
  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
