Technical Articles

Part 6 – Digital continuity between SysML and Simulink

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the second set of articles, this series explains how to complete the top-level system definition model, formalized in SysML, with other modeling languages and tools considered more efficient for performing the system detailed design or certain kinds of system analysis. The focus is put on digital continuity, with guidelines concerning coupling semantics and coupling automation between languages and tools.

In this article 6, we start from a System Definition model developed with SysML and present an approach that uses Simulink to define or refine part of the system’s behaviour, such as the control loop of the system in its environment. We discuss two different ways of using Simulink.

Executive Summary

  • This article focuses on Control Engineering. This discipline requires analyzing and detailing several interdependent elements defined in the System Architecture (such as control functions, the time/frequency response of input signals, the fidelity of external environment behavior models to the real physical environment, …) in order to verify the consistency of the interfaces and to verify that the behavior complies with the requirements. These activities require existing knowledge and assets available in mathematics-based simulation tools like MathWorks Simulink.

  • The communication between the Systems Architect and the Control Engineer is key to achieving a fast and clear transition between the SysML and MathWorks MATLAB/Simulink models.

  • There are two different ways to transition from SysML to Simulink. The “new” way uses System Composer to initialize an architecture model from the SysML model. System Composer can preserve the stereotypes defined in SysML and provides features useful to assess the architecture, like “multi-views” and “architecture analysis”. This is the way we recommend for the future.

    Today, some important features, like a static consistency check of the interfaces, are still missing, and there is no automation to support the transition between SysML and System Composer. So the “traditional” transition to Simulink remains useful, even though it does not preserve SysML stereotypes and does not provide features to characterize and assess the architecture; moreover, some automation already exists to support this transition.

  • When using Simulink, the Control Engineer verifies the system architecture, the interfaces, and the behavior, to ensure that the requirements can be satisfied. If this is not the case, the Control Engineer shall propose a change to either adapt the architecture or refine the requirements. This change will be managed by the System Architect in the SysML model and it should be reflected in the Simulink model in order to maintain the consistency of the overall system definition.

Context elements

In the previous articles (part 1 to part 5), we introduced a method using the SysML notation to support systems engineering activities ranging from the formalization of functional requirements and their early validation through simulation to the definition of consistent functional and logical architectures.

This article starts with the availability of a logical architecture for a case study called AIDA (coming from the Saint Exupery Research Institute). It is illustrated below:

From a Logical Architecture to a Detailed Definition using Simulink

Once a logical architecture has been defined, Systems Engineers start to communicate it to the various specialists involved in the system detailed definition (software engineers, mechanical engineers, command-control engineers, hardware engineers, …). These specialists have to analyze the system requirements (including interface definitions, expected behavior and associated performance) and verify the requirements’ feasibility (is there a solution that can satisfy all of these requirements?).

In this article we focus on the Control Engineer. This specialist applies Control theory to one or several components of the system architecture and to its environment. He needs to define equations, reuse operators and generic components from libraries and toolboxes, use solvers and timed simulation, access optimization tools, use matrix-based computations, etc. All these features are offered by mathematics-based simulation tools. Among these tools, we choose to restrict our focus to the MathWorks MATLAB/Simulink/System Composer suite, as it is, as far as we know, the most commonly used in industry today.

Transition between SysML and MathWorks Simulink

In the next paragraphs, we detail a process to refine the definition of internal control. It contains the following steps:

  1. Definition of interface requirements in the SysML model,
  2. Export/translation of data to the Simulink (and System Composer) tool,
  3. Refinement of the interfaces and of the expected behavior in the Simulink model thanks to the component libraries and the support of simulation,
  4. Feedback to the Systems Architect about changes or refinements needed on system requirements and on the system architecture,
  5. Impact analysis of the change requests and update of the system definition model.

Transition from the SysML Logical Architecture to a Simulink behavioral model: the “traditional” approach

In this first approach, the Logical Architecture of the SysML model is translated into a Simulink model while preserving the allocation of system functions to logical components (subsystems). Simulink component libraries are used to refine the functional behavior. Then, time-based simulation is used to verify that the interfaces between system functions and between subsystems are consistent, and that there exists a solution that can satisfy the system functional and performance requirements. The interface definitions can be confirmed or refined by the control engineer.

Transition from the SysML Logical Architecture to a Simulink architecture model: the “new” System Composer approach

In the second approach, we still transition to Simulink, but in two steps with two different usages of the tool. First, we use System Composer (the architecture facet of Simulink) to characterize and assess the architecture, thanks to features like multi-views, filtered views and analyses. Second, we use the more traditional Simulink component libraries to refine the functions’ behavior.

The expected benefit of using the System Composer facet is a better separation of concerns: in the first step, the logical architecture can be characterized with the support of stereotypes on ports or connectors, and assessed with the support of analysis features like “dynamic consistency checks of interfaces”. In the second step, the different components and their allocated functions can be refined, especially their behavior, with the support of a wide variety of generic component libraries, patterns and other useful features.

Note: each step has its own interest and may be performed by different users with different experiences.

Application on the AIDA case study

To illustrate and give elements of comparison between these two approaches, we use a simplified model of control for the trajectory of an Unmanned Aerial Vehicle (UAV) based on the AIDA case study developed at the St-Exupery Research Institute. The AIDA Logical Architecture in SysML was recalled in the context section at the beginning of this article. Here we put the focus on the control loop between the Perception Subsystem, the Flight Management Subsystem and the Thrust Management Subsystem.

The goal is to control the actual position of the UAV so that it fits the expected trajectory around the aircraft. Therefore, one must find the right control parameters so that the UAV can follow the expected trajectory within a minimal error margin (to be defined in the performance requirements).

Simulink model initialisation

First, we define the scope of the transformation between the SysML logical architecture and the Simulink model, because we do not need to translate the full SysML model. We restrict the scope of this transformation to the sole functions and components useful for the UAV trajectory control loop (including the Air / Terrestrial Gravity model, to reflect the physical environment’s effect on the control loop).

Note: in this SysML model, ports have been split (for instance, current x and current y, which come from a Current Position flow) in order to be able to use the automations available in the tool.

Concerning the specification of the interface types and units, the SysML tool (Cameo Systems Modeler) provides access to the ISO 80000 standard units:

 

Translation of the logical architecture from SysML to Simulink/System Composer

The SysML logical architecture can be translated to the MathWorks Simulink modeling environment through two different methods:

  • With System Composer:

With this method, it is possible to create a logical architecture semantically equivalent to the SysML model (same components and interfaces) as illustrated below:

 

One of the main benefits of this approach is the preservation of the stereotypes initially defined in the SysML model. For example, in the Functional Architecture model of our case study, we have defined the following stereotypes on function inputs and outputs: Information, Energy, Material.

 

Within System Composer, it is possible to define the same stereotypes and apply them to the System Architecture (functions or component interfaces):

 

System Composer also offers a feature to filter different views of the same system architecture, which is very useful to ease the architecture reviews:

 

  • Without System Composer, using the direct transition from the SysML tool to Simulink:

With this method, we can use some automations available out of the box in Cameo Systems Modeler (the SysML tool we have used) to create the Simulink Model skeleton from the SysML filtered model (model filtered with the UAV trajectory scope).

 

If data types and units have been specified in SysML, the automated transition propagates the data types and units in the Simulink model:

 

But without System Composer, the additional semantics defined in the SysML model through the stereotypes are lost.

In both cases (with or without System Composer), the resulting model contributes to the specification for the control engineer.
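For readers who want to experiment with the System Composer side of this transition, the architecture skeleton and the stereotypes can also be created programmatically through System Composer’s MATLAB API. The sketch below is a minimal, hand-written illustration (model, profile and element names are ours; it is not the output of any existing SysML bridge, and API names should be checked against your MATLAB release):

```matlab
% Minimal sketch using System Composer's MATLAB API (R2019a+); all names
% are illustrative and not generated by a SysML-to-System Composer bridge.
profile = systemcomposer.profile.Profile.createProfile("FlowTypes");
addStereotype(profile, "Information");   % same stereotype names as in SysML

model = systemcomposer.createModel("AIDA_Logical");
applyProfile(model, "FlowTypes");
arch = model.Architecture;

% Two of the AIDA logical components and one connection of the control loop
fms = addComponent(arch, "FlightManagementSubsystem");
tms = addComponent(arch, "ThrustManagementSubsystem");
srcPort = addPort(fms.Architecture, "thrustCmd", "out");
dstPort = addPort(tms.Architecture, "thrustCmd", "in");
conn = connect(srcPort, dstPort);

% Re-apply the SysML semantics on the connector
applyStereotype(conn, "FlowTypes.Information");
save(model);
```

Such a script is, of course, only a stopgap until a real automated transition between CSM and System Composer becomes available.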

 

Simulink model refinement and simulation

Next, we complete the functions’ behavior with existing control assets and knowledge, such as PID controllers, and we refine the associated parameter values with the support of simulation.
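To give an idea of what this refinement step looks like in practice, here is a minimal MATLAB sketch: the plant transfer function is invented for the example (the real dynamics come from the refined Simulink model), and it produces a first cut of the PID gains before they are tuned in the Simulink control loop:

```matlab
% Stand-in second-order plant for one UAV position axis (invented dynamics).
s = tf('s');
plant = 1 / (s^2 + 0.8*s);

% First-cut PID gains with the Control System Toolbox, then a quick check
% of the closed-loop position response against the performance requirements.
C = pidtune(plant, 'PID');
closedLoop = feedback(C * plant, 1);
step(closedLoop);                 % position step response
stepinfo(closedLoop)              % overshoot / settling time figures
```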

Change requests on requirements and architecture

Once the simulation seems to satisfy the requirements expressed at the logical level, it is possible to derive new lower-level requirements. For instance, it may be possible to add requirements on stability, or on expected control accuracy. The PID’s parameters can be finalized only in a physically realistic environment. However, the simulation gives an idea of the feasibility and of the range of values to be implemented later.

Feedback in SysML

The Systems Architect has to analyze the impacts of change requests from the control engineer. Some changes in requirements or in interface definitions may have consequences on other engineering specialties or on other components. So, this impact analysis is not always straightforward.

Discussions on the two possible transitions and synthesis

This discussion is based on the use of the following tools and configurations:

  • Cameo Systems Modeler (CSM) V19SP4 (SysML tool)
  • MATLAB/Simulink 2020a

From SysML Logical Architecture to Simulink with the “traditional” (direct) approach

  • Interests of this “traditional” transition:

    1. It is possible to perform digital continuity between some SysML tools and Simulink: during a simulation session started in CSM (the SysML tool), the Cameo Simulation Toolkit (CST) can call a Simulink model from a SysML block. In practice, CST hands control over to MATLAB, which runs the Simulink model. At the end of the Simulink model execution, CST can retrieve data from the Simulink model and make it available for use in the SysML model or for visualization in the simulation console.
    2. Co-simulation of SysML and Simulink models using the FMI standard (to be detailed in a future article): both the SysML behavioral model and the Simulink behavioral model are simulated concurrently through their respective solvers (in fact, an orchestrator drives both simulations time step by time step).
  • Issues with this “traditional” transition:

      1. Stereotypes defined in the SysML model are lost after translation into the Simulink model. There is no simple way to retrieve those stereotypes, even with additional automation, because the Simulink metamodel does not handle stereotypes.
  • Additional remark:

    Most SysML tools provide automation to generate a Simulink model from a SysML IBD representing the logical architecture, but the translation of buses from a SysML Logical Architecture model is generally not implemented; an automation could be developed to fill this gap.

From SysML Logical Architecture to Simulink with the “System Composer” (new) approach

  • Interests of this “new” transition:

      1. Ability to keep SysML stereotypes when translating into System Composer and to use them to create several architectural views (for instance electrical view, mechanical view, control view…)
      2. Ability to assess the architecture in terms of consistency of interfaces, thanks to the stereotypes put on ports and connectors and the use of dynamic checks.
  • Issues with this “new” transition:

      1. Static consistency between ports is not ensured today (will come in a future version).
      2. No transformation available between CSM and System Composer.
        1. No automation available out of the box in CSM V19SP4 to export a SysML model as a System Composer model.
        2. No import of a SysML model available from System Composer so far.

Synthesis

If your MBSE method is still in the definition stage, and if there is a need to go from a SysML logical architecture to Simulink in order to benefit from mathematics-based simulation tools, it is clear for us that System Composer is the right target when coming from SysML. System Composer is the MathWorks tool that can preserve the SysML stereotypes put on the logical architecture (components and interfaces), and it thus provides good support for architecture views and analysis in the Simulink environment. As soon as automation becomes available to support the transition between SysML and System Composer, along with features to check the consistency of interfaces statically, we will strongly recommend this way of transitioning from SysML to Simulink.

However, if you need to go from SysML to Simulink today, with the capabilities provided by Cameo Systems Modeler V19SP4 and MATLAB/Simulink 2020a, it is probably more efficient to use the “traditional” (direct) transition from SysML to Simulink, thanks to the automations that already support part of this transition.

Perspectives

In this article we have discussed the transition from a Logical Architecture formalized in SysML, to a Simulink model limited to the structure (components, their allocated functions, and their associated interfaces). Note that export from Cameo Systems Modeler to Simulink (with direct transition) also supports the translation of behavioral elements such as constraint blocks and state machines. Those aspects will be detailed in a future article.

Concerning multi-physical aspects, we plan to explore the usage of the SysPhS standard, which is available in Cameo Systems Modeler through the SysPhSLibrary, to support the automated transition of physical elements (structure and behavior) to MathWorks tools.

Additionally, we plan to explore in further detail the change analysis process that is performed when there are updates of the System Logical Architecture in the SysML model, and the possible consequences on the Simulink model. We will focus on the method but also on the tools able to support the difference/merge between SysML and Simulink models.

Finally, we will address in a future article (planned for November 2020) how to perform co-simulation between a SysML behavioral model and other behavioral models using the FMI for Co-Simulation standard.


Part 5 – Coupling optimization of logical architecture using genetic algorithm

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

In the previous article, we explained how it is possible to define a Logical Architecture from a Functional architecture, using an allocation matrix between functions and logical components.

In this article, we go a step further by extracting the coupling metric between functions from the Functional Architecture (using an N² diagram technique) and using an optimization algorithm to minimize the coupling between logical components.

It is possible to consider several criteria with this method, such as end-to-end latency requirements on interfaces. In this case, the algorithm tries to find the best solution that satisfies the coupling minimization, the allocation constraints, and also the timing constraints. In this article, we will focus on coupling minimization only.

Minimizing the coupling between components, a good systems engineering practice!

Among Systems Engineering best practices, as stated in many standards, it is key to minimize the coupling between the subsystems in order to master the product complexity. For instance, in IEC 61508:2000 we find:

“The interfaces between subsystems are kept as simple as possible and the cross-section (i.e. shared data, exchange of information) is minimised.”

IEC 61508:2000 – Functional safety of electrical/electronic/
programmable electronic safety-related systems – Part 7:
Overview of techniques and measures

Are there any techniques or methods to support systems architects in minimizing the coupling between components?

Yes. To achieve such minimization, a well-known method consists of using coupling matrices (also called N² diagrams) and then reorganizing them to identify architectures with minimal coupling.

Let us first explain how an N² diagram is defined, and illustrate that explanation with a case study. Second, we will focus on the computation of the N² diagram to identify coupling optimizations.

The coupling between components concerns the dependencies between the components. As explained in the previous article (part 4), the dependencies between the logical components mainly stem from the functional interfaces. So it is no surprise that we first start with the functional dependencies, use them to compute a coupling metric, and finally suggest an allocation of functions to components that minimizes the coupling.

Introduction to the N² diagram method

The N² chart, also referred to as N² diagram, N-squared diagram or N-squared chart, is a diagram in the shape of a matrix, representing functional or physical interfaces between system elements. It is used to systematically identify, define, tabulate, design, and analyze functional and physical interfaces. It applies to system interfaces and hardware and/or software interfaces.

[2] Wikipedia: https://en.wikipedia.org/wiki/N2_chart


Below is an example of an N² diagram for a project with 9 functions that have dependencies.

In this matrix, a “1” represents an existing interface between the function in the corresponding row and the function in the corresponding column. A “0” indicates that there is no relation between those functions. In this example the matrix is symmetric, indicating that all the links are bi-directional. This does not follow the standard rules of the N² chart, which are better explained by the following figure.

Illustration of how to read an N² diagram

The placement of the “1” (above or below the diagonal) determines which function is the source and which function is the target of the link.
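To make this reading rule concrete, here is a small sketch (function names and flows are invented) that builds a directed N² matrix from a list of functional flows, with the row as source and the column as target:

```matlab
% Build a directed N-squared matrix from a list of flows (source -> target).
funcNames = {'F1', 'F2', 'F3'};      % invented function names
flows = [1 2;                        % F1 -> F2
         2 3;                        % F2 -> F3
         3 1];                       % F3 -> F1 (feedback flow)

n = numel(funcNames);
N2 = zeros(n);
for k = 1:size(flows, 1)
    N2(flows(k,1), flows(k,2)) = 1;  % row = source, column = target
end
disp(array2table(N2, 'VariableNames', funcNames, 'RowNames', funcNames))
```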

The use of a coupling matrix is mentioned by the INCOSE SE handbook as a useful practice :

Coupling matrices (also called N² diagrams) are a basic method to define the aggregates and the order of integration (Grady, 1994). They are used during architecture definition, with the goal of keeping the interfaces as simple as possible… Simplicity of interfaces can be a distinguishing characteristic and a selection criterion between alternate architectural candidates. The coupling matrices are also useful for optimizing the aggregate definition and the verification of interfaces.

Systems Engineering Handbook 4th edition 2015 in chapter 4.4.2.6 Coupling matrix

From this matrix, we can compute a coupling value for the interfaces defined between the Logical Components (deduced from the interfaces between the functions allocated to these components). The coupling value represents an evaluation of the coupling complexity between logical components, based on the following formula derived from the software coupling metrics in Dhama, “Quantitative models of cohesion and coupling in software”, Journal of Systems and Software, vol. 29, April 1995 (a small implementation of this metric is sketched after the parameter list below):

 

`"Coupling"(C_(M_(k))) = 1-1/(d_(i)+2*c_(i)+d_(o)+2*c_(o)+w+r) `

`"Coupling Value" (C_(v)) = sum_(k=1)^n[C_(M_(k))] `

 

where the parameters are defined as follows:

  • `M_(k)`: logical component under consideration
  • `d_(i)`: number of input data parameters
  • `c_(i)`: number of input control parameters
  • `d_(o)`: number of output data parameters
  • `c_(o)`: number of output control parameters
  • `w`: number of modules called (fan-out)
  • `r`: number of modules calling the module under consideration (fan-in)
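As announced above, here is a direct implementation of the metric. The per-module counts are invented for the example; in practice they are extracted from the N² matrices:

```matlab
% Dhama coupling for one module, then the coupling value of the whole
% architecture. Each row of M holds [di ci do co w r] for one module
% (invented counts; in practice they come from the N-squared matrices).
coupling = @(di, ci, do, co, w, r) 1 - 1/(di + 2*ci + do + 2*co + w + r);

M = [4 1 2 0 1 2;
     3 0 3 1 2 1;
     2 2 1 1 1 1];

Cm = zeros(size(M, 1), 1);
for k = 1:size(M, 1)
    Cm(k) = coupling(M(k,1), M(k,2), M(k,3), M(k,4), M(k,5), M(k,6));
end
Cv = sum(Cm)   % the value the genetic algorithm will try to minimize
```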

Now, let us see an illustration on a case study.

A sample case to illustrate the definition and use of the N² diagram

Our sample case is based on a case study elaborated within IRT St-Exupery called AIDA (Aircraft Inspection by Drone Assistant). This example was initially developed in a Capella environment and is available at https://sahara.irt-saintexupery.com/AIDA/AIDAArchitecture. For this article, we have translated the sample case to the SysML language.

In the previous article (part 4), we used this sample case to show how we can initialize the logical architecture from the functional architecture with the use of an allocation matrix between functions and logical components. In this article, we use it again, but this time we explain how to use optimization techniques to determine automatically the “best fit” to minimize the coupling. In other words, we want to define one or several possible allocations (illustrated by allocation matrices) between functions and logical components that minimize the coupling between the components.

Let us see this in practice.

N² diagram from the Functional Architecture

In the previous article (part 4) we showed a possible functional architecture elaborated for this sample case. We recall it in the figure below:

 

For this functional architecture, we can extract two N² diagrams for the leaf functions by analyzing their dependencies:

  • Data/energy/material flows
  • Control (Enable/Disable or Trigger) flows

The results are displayed in the 2 figures below:

N² matrix for data/energy/material flow

 


 

N² matrix for control flow

Now we want to define a logical architecture that minimizes the number of interfaces between its subsystems.

Optimization of allocation between functions and logical components

From a functional N² diagram to a logical architecture…

To perform optimization between components, we analyze the function-to-function coupling matrices introduced previously and use them as input for a genetic algorithm presented later in this article. This algorithm progressively iterates over different possible logical architectures and calculates the coupling between components. In the end, it selects the architectures that minimize the coupling metric.

Let us look at a possible logical architecture. How do we define it? We simply define the components (or modules) as groups (or partitions) of functions.

As an example, in the figure below, the orange part of the figure illustrates an allocation (or partitioning) strategy of the 9 functions into 3 modules: M1, M2, and M3. In this figure, we are not interested in the internal structure of each module, which is why we do not represent the functional interfaces between functions of the same module. However, we want to see the functional interfaces between functions allocated to different modules, because they will give us the logical interfaces. If we focus on the M3 module, we see in green the M3 inputs, and in blue the M3 outputs.


Example of coupling analysis for 3 modules (green highlight on Module 3 inputs, blue highlight on Module 3 outputs)

Note: we recommend reading the previous article for more details on the relationships between the functional architecture and the logical architecture.

From this matrix with modules, we can now compute a coupling metric regarding the interfaces of the modules.
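Under the simplifying assumption that every inter-module flow counts as one data parameter (no control parameters), the per-module counts of the coupling formula can be derived mechanically from the functional N² matrix and the allocation vector, as in this sketch:

```matlab
% Derive each module's coupling parameters from the functional N-squared
% matrix and an allocation vector (alloc(k) = module of function k).
% Simplification: every inter-module flow is one data parameter (ci = co = 0).
N2 = [0 1 0; 0 0 1; 1 0 0];    % invented functional N2 (row = source)
alloc = [1 2 2];               % F1 in module 1, F2 and F3 in module 2

nModules = max(alloc);
Cm = zeros(nModules, 1);
for m = 1:nModules
    inside = (alloc == m);
    di = sum(N2(~inside, inside), 'all');    % flows entering the module
    do = sum(N2(inside, ~inside), 'all');    % flows leaving the module
    w  = numel(unique(alloc(any(N2(inside, :), 1) & ~inside)));  % fan-out
    r  = numel(unique(alloc(any(N2(:, inside), 2)' & ~inside))); % fan-in
    Cm(m) = 1 - 1/max(di + do + w + r, 1);
end
couplingValue = sum(Cm)
```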

Using a Genetic Algorithm to optimize the allocation of functions

Genetic algorithms are algorithms inspired by evolutionary principles. The main purpose of this kind of algorithm is to explore the solution space of a problem in order to satisfy a set of criteria. The general principles of genetic algorithms are illustrated in the figure below.

Genetic Algorithm for Functions Allocation

Genetic Algorithm Process

The first step is to randomly create a set of initial subjects (1). This set is called the initial population. It is composed of subjects, each representing a possible set of function allocations. Then, the algorithm evaluates each subject using a fitness function (2). This function makes it possible to give a value, or a rank, to a subject, to estimate its proximity to the “optimal” solution. In our case the fitness function is the coupling equation C. The candidates that are too far from the desired solution are deleted (3).

Then the algorithm evaluates the number of remaining subjects. If the population size is less than or equal to a given threshold (for instance 4), the algorithm returns the best solution amongst the remaining subjects. If the population size is greater than that threshold, the algorithm continues. And this is where things become interesting…

Here begins the core biomimicry part of the genetic algorithm: the remaining subjects cross over, i.e. they exchange their genes to produce new subjects (4). Finally, the newly created children are subject to mutation (5): part of their characteristics change randomly. Crossover and mutation are useful to stay away from local optima by spreading new subjects through the solution space.

Genetic algorithms are configurable using the following set of parameters (a condensed sketch of the whole loop is given after the list):

  1. Initial population size – a key parameter to ensure enough coverage of the solution space at the beginning
  2. Max generation number – a parameter to ensure that the algorithm ends even if the population grows
  3. Percentage of survivors – determines the percentage of the worst subjects to delete
  4. Percentage of parents – the percentage of subjects that cross over
  5. Percentage of children to mutate – the percentage of new subjects to mutate after the crossover
  6. Percentage of genes to mutate – the percentage of genes to mutate for each new subject
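Here is the condensed sketch announced above, combining steps (1) to (5) in plain MATLAB. The N² matrix and all parameter values are invented, and the fitness is simplified to the number of flows crossing a module boundary; the coupling value Cv from the previous section could be plugged in instead:

```matlab
% Condensed genetic algorithm for function-to-module allocation.
% A subject is a vector alloc with alloc(k) = module of function k.
N2 = [0 1 0 1; 0 0 1 0; 1 0 0 1; 0 1 0 0];   % invented functional N2
nFun = size(N2, 1);  nMod = 2;
fitness = @(a) sum(N2(a' ~= a));             % flows crossing module boundaries

popSize = 40;  maxGen = 60;  pMutate = 0.2;
pop = randi(nMod, popSize, nFun);            % (1) random initial population
for gen = 1:maxGen
    f = zeros(popSize, 1);
    for i = 1:popSize, f(i) = fitness(pop(i, :)); end   % (2) evaluation
    [~, order] = sort(f);
    pop = pop(order(1:popSize/2), :);        % (3) delete the worst half
    % (4) crossover: each child takes a prefix from one parent and the
    % rest from another
    children = pop(randi(popSize/2, popSize/2, 1), :);
    mates    = pop(randi(popSize/2, popSize/2, 1), :);
    cut = randi(nFun - 1);
    children(:, cut+1:end) = mates(:, cut+1:end);
    % (5) mutation: reallocate a few genes at random; fixed allocation
    % constraints could be enforced here by re-pinning the fixed genes
    mask = rand(size(children)) < pMutate;
    children(mask) = randi(nMod, nnz(mask), 1);
    pop = [pop; children];
end
f = zeros(size(pop, 1), 1);
for i = 1:size(pop, 1), f(i) = fitness(pop(i, :)); end
[bestCoupling, k] = min(f);
bestAlloc = pop(k, :)                        % best allocation found
```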

What about constraints on allocations?

In practice, systems engineers already have good ideas about some allocations between functions and components, or face constraints that fix some allocations (for different reasons, including security, performance…). So the genetic algorithm shall take these predefined allocations into account.

We have defined our genetic algorithm to be able to take as input a predefined partial allocation matrix with existing constraints. These constraints are considered by the algorithm that will then define possible logical architectures respecting the given constraints.

Selection of the “best” logical architecture that minimizes coupling

The genetic algorithm presented previously gives us one or several possible logical architectures that minimize the coupling between components while conforming to the functional architecture and to any allocation constraints. We can use the results to generate or complete the allocation matrix between our functions and the components, as presented below.

Allocation Matrix

Thanks to the completion of this allocation matrix, we can deduce a logical architecture, as explained in the previous article (part 4) that shows the different logical subsystems with their allocated functions and keeps the functional flows coming from the functional architecture.

Logical architecture after allocation of functions using GA

 

Can we automate some of the steps presented above?

Yes!

At Samares Engineering, we have investigated the automation of the following steps:

  • Extracting the initial N² Matrix from the functional architecture (for both data/energy/material and control flows)
  • Exploring candidate logical architectures (functional to logical allocation) to automatically find the candidate architectures where the coupling metric is at a minimum value using the genetic algorithm.
  • Defining allocation constraints (for example UAV control position function can be forced to be allocated to the Flight Control System).

 

 

Enjoy MBSE!

Acknowledgements

We are warmly grateful to Yash Khetan and Minghao Wang for their contribution. It was great to work with both of you. See you!

Next articles to come…

  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again
  • June 2020 – Early validation of stakeholder needs through functional simulation
  • July 2020 – Consistency between functional and logical architectures

 


Part 4 – Consistency between functional and logical architectures

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architectures and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This fourth article deals with functional and logical architectures. We discuss the following questions: Why do we need a logical architecture? And how do we ensure the consistency between the functional and logical architecture?

Why do we need a logical architecture?

In most industrial practices, and in various industrial domains, systems engineers are used to defining one (and sometimes several) functional architecture(s). This architecture formalizes an arrangement of system functions using two viewpoints: the Functional Breakdown Structure (FBS), which shows the decomposition hierarchy as a tree (“parent” functions and “child” functions), and the connection graph, which shows the functional flows between those functions (energy, information, matter).

As an illustration, let us take the AIDA open-source sample case from the Saint Exupery Technological Research Institute in Toulouse: https://sahara.irt-saintexupery.com/AIDA/AIDAArchitecture.

AIDA stands for “Aircraft Inspection by Drone Assistant”. AIDA provides assistance during the inspection of an aircraft before flights: the drone looks for aircraft defects.

A320 Pre-Flight Checks Procedure

The drone system contains 9 top-level functions:

  • Manage mission
  • Build flight plan relative to aircraft type
  • Fly to
  • Retrieve PoI (Points of Interest)
  • Make and record videos
  • Check wind force
  • Monitor UAV control
  • Sense and avoid obstacles
  • Emergency landing

The definition of these functions is formalized with Blocks in SysML.

We use an IBD to formalize the functional architecture. Practically, this diagram displays the usage of the functions in their operational context (SysML part properties typed by the previously mentioned blocks), the interfaces (connectors with item flows) between the SOI and the other members of the system context, and the interfaces between usages of functions (also connectors with item flows).

A possible functional architecture for the identified top-level functions is provided below:

AIDA Top Level Functions

Some of the top-level functions are still complex and need to be refined through lower-level functions. So we can build a functional architecture that displays several levels of functions as illustrated below:

AIDA Functional Architecture Details

When developing a system, it is also common to find a description of the physical components. By “physical components”, we mean a hardware part, a software piece, or any combination of those elements. This includes processors, sensors, structure, propellers, etc.

The problem comes when we want to allocate our functions to the physical components. In the frame of a complex system, the list of physical components may become very large, especially when this list is not finalized and contains many alternatives. For instance, in order to allocate the “sense wind” function, we may find a lot of different technologies and means to perform the measurement, mixing software and hardware features.

As the final physical architecture shall satisfy all non-functional requirements including reliability and availability, we generally introduce redundancy of safety-critical components to ensure its availability even when there are failures in one of the components. In the end, the number of physical elements to consider for allocation is huge.

Let us take the previous example to illustrate a non-exhaustive list of physical components:

The top-level functions, identified from the needs expressed by the customer and users, are hard to allocate to the identified physical components because the abstraction gap between the system functions and the physical components is high. We need an intermediate layer to partition functions into items that represent an abstraction of the final technologies. This is the “logical architecture” layer.

The logical architecture as an intermediate layer

As stated by the INCOSE Systems Engineering Handbook (4th ed.), the logical architecture definition consists in decomposing and partitioning the system into logical elements

[…]. The elements interact to satisfy system requirements and capture system functionality. Having a logical architecture mitigates the impact of requirements and technology changes on system design.

The logical architecture is an arrangement of “logical components” that perform the functions. This first allocation is easier to perform because we can group functions with criteria such as cohesion, coupling, design for change, reliability, and performance.

Later, we will have to do a second allocation: allocate logical components on physical components (with technology). This second step is also easier to perform than the direct allocation from functions to physical components because we only have to focus on technologies/products available on the market to satisfy a logical component already defined.

Let’s go back to our AIDA example. Here is a possible set of logical components for our system of interest surrounded by its environment (as in the functional architecture):

  • Mission management subsystem
  • Propulsion subsystem
  • Flight management subsystem
  • Vision Subsystem
Initial logical architecture

Here is an example with the use of the SysML allocation matrix (within the Cameo Systems Modeler environment) to create the allocations of functions to logical subsystems.

How do we create the logical architecture?

When creating a logical architecture, it is possible to connect the logical components directly in the diagram, using engineering knowledge: it is sometimes already known that two components will exchange information or energy. However, the rationale for connecting the two components is then missing. In the end, the logical architecture may miss interfaces or contain useless interfaces.

Therefore, the logical interfaces shall not be fully independent of the functional interfaces. The logical components reflect the partitioning of functions and should thus reflect the functional flows. There must be consistency between the functional architecture and the logical architecture.

The next chapter explains this in detail.

Consistency between the Functional architecture and the logical architecture

We return to the AIDA sample case to illustrate this consistency with a few functions and allocations. Instead of looking at the full functional architecture, we will focus on a simple extract with only 3 leaf functions coming from the “make and record videos” top-level system function:

  • “Manage Photos Recording”,
  • “Control Camera Orientation”
  • “Record Photos and Videos”

Now we want to allocate the first two leaf functions to the “Mission Management Subsystem” (in blue) and allocate “Record Photos and Videos” to the “Vision Subsystem” (in red), as illustrated below:

Note: in SysML, we use the SysML allocation matrix to edit (create and delete) these “allocation” relations. The allocation described above leads to the following matrix.

First allocations

Now we would like to reflect the impact of these allocations on the logical architecture. Practically this means:

  • Display the functions inside their components
  • Display the functional flows between functions through the ports of the logical components because we want to respect the “encapsulation principle” of the components (a component can show or not show its internal structure but its ports do not change)
  • Display the functional flows with the system environment (through the System external ports)

In our example, for the subset of the functional architecture and the 3 allocations, it results in the following logical architecture with the creation of 3 logical flows (in orange):

Logical Architecture result after allocations

We can see that the logical flows (in orange) directly come from the functional architecture: they are deduced from this functional architecture and from the allocation of functions to the logical components.
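The deduction rule itself is mechanical and easy to script. Here is a sketch (flow and component names abridged from the example above) that lists the logical flows to create: a functional flow becomes a logical flow exactly when its source and target functions are allocated to different components:

```matlab
% Deduce logical flows from functional flows and the allocation of
% functions to logical components (names abridged from the example).
flows = {'photos',      'ManagePhotosRecording',    'RecordPhotosAndVideos';
         'orientation', 'ControlCameraOrientation', 'RecordPhotosAndVideos'};
alloc = containers.Map( ...
    {'ManagePhotosRecording', 'ControlCameraOrientation', 'RecordPhotosAndVideos'}, ...
    {'MissionManagementSubsystem', 'MissionManagementSubsystem', 'VisionSubsystem'});

for k = 1:size(flows, 1)
    src = alloc(flows{k, 2});
    dst = alloc(flows{k, 3});
    if ~strcmp(src, dst)   % crosses a component boundary -> logical flow
        fprintf('logical flow "%s": %s -> %s\n', flows{k, 1}, src, dst);
    end
end
```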

Conclusion

There exists a relation between the functional architecture and the logical architecture. A logical subsystem can produce or consume flows if one or several functions are allocated to it. In addition, some functions may appear directly at the logical layer, e.g., interface functions between subsystems, encoding functions, decoding functions, or electrical functions. These functions may make no sense at the functional system level since they depend on the chosen technologies and can be very detailed. But, whatever the abstraction level of the functions, the logical layer shall be consistent with the system functional layer.

Can we automate some of the steps presented above?

Yes!

Overview of the automation

At Samares Engineering, we have created a plugin to automate the update of the logical architecture (display of functions, creation of logical flows) according to the functional architecture and allocation of functions to the logical components. This propagation is done in real-time. And it works in both directions (creation and deletion of allocations, leading potentially to the creation or deletion of logical flows between logical components). So we can ensure that the logical architecture is always consistent with the functional architecture.

We can also show the functions inside each component or hide those functions and only show the components and their logical flows.

Take a look at the video below to see this automation in practice.

Simulation in practice (video)

This video shows how we can ensure consistency between a functional architecture and a logical architecture while editing the allocation of functions to the components, in real-time.

Enjoy MBSE!

Next articles to come…

  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again
  • June 2020 – Early validation of stakeholder needs through simulation

Part 3 – Early validation of stakeholder needs through functional simulation

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages”.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This third article puts a spotlight on a way to validate the stakeholder needs. We show how it is possible to use a modeling approach to structure and refine functional needs into a functional architecture. We also show that it is possible to simulate this functional architecture against operational scenarios expressed by the stakeholders. The simulation allows us to monitor some of the key system parameters and provides good support for validating stakeholder needs early in the development cycle.

Functional architecture is useful to support early validation of the system!

Sometimes we hear from systems engineers that only the physical architecture is really useful to support validation. That is true if we target the end product. At this stage, we need an architecture as close as possible to reality (a characteristic sometimes called “fidelity”) to limit errors and wrong conclusions drawn from the results of the simulation. But if we focus on the validation of functional requirements, it is not a good idea to wait too long before starting validation, because we may be working with a wrong or incomplete capture of the functional needs. And we can already do a lot to verify these needs early in the development cycle, even with a purely functional (virtual) representation of the end product.

In order to reach early validation, there are different activities to perform:

  • The identification of the validation objectives
  • The identification of the system functions and their functional interactions with system operational context
  • The identification of the internal functional flows that support complete functional chains starting from operational scenarios.

Identification of the validation objectives

First, we need to identify what we want to validate. The most important thing to keep in mind is the rationale for developing our system of interest: the mission(s) to support! So let us focus on the mission(s) of our system of interest and check that our system is able to support the mission profile (set of phases and states) and its expected performance in the operational context. You may think that we need to know the complete physical architecture to measure this performance. Yes for the final detailed figures, but we can already approximate some elements and get a first rough idea without the full list of physical components. We will see in the next paragraphs that we can add some behavior to the functions and thereby reach good support for the calculation of the system performance.

Identification of the system functions and their functional interactions with the operational context

This activity consists in mixing two approaches: the engineering knowledge of the solution coming from systems engineers experience on one hand, and the needs expressed by the different stakeholders on the other.

The systems engineers will, through their experience, provide a set of functions often called “technical functions”, because they come from the knowledge of the technical solutions/products commonly used in a similar context. The expression of needs coming from the various stakeholders leads to what we call “service functions” or “required functions”. These functions are generally identified through a set of scenarios that cover the different lifecycle concepts. The functional architecture will arrange the functions so that we can support top-level required functions with technical functions, as illustrated below.

Now let us see in practice how we can use a model-based approach to support these activities.

At the French chapter of INCOSE, called AFIS, in the MBSE technical committee, we have created a working group to discuss the use of functional model simulation as a means to reach early validation of the system functional requirements. We quickly discovered that it would be useful to compare our different approaches through a common sample case, and we have chosen a connected washing machine for this exercise. It is a system that everyone knows, at least as an end user.

We use this sample case to illustrate the suggested approach.

A sample case to illustrate the use of a functional model simulation as a means for early validation…

Our sample case is a connected washing machine. The “connected” part means that you can start and monitor the progress of the washing through your smartphone.

The description is available online here: https://www.samsung.com/fr/washing-machines/front-loading-ww90m645opw/

Concerning the functional behavior, we are not specialists, so we have extracted knowledge from this website (in French): https://www.spareka.fr/comment-reparer/electromenager/lave-linge/fonctionnement

Focus on the mission and identification of the operational scenarios

Let us start by looking at the missions / use cases for this system of interest. We want to be able to wash clothes, either directly or remotely (from our smartphone).

According to these use cases, we have 2 main scenarios that describe the interactions between the house inhabitants and the connected washing machine. We use a “UC diagram” to represent the different system missions and “Sequence Diagrams” to represent the interactions, as illustrated below:

Note: these sequence diagrams are simple, and this is on purpose. We do not introduce advanced logic like loops, parallel sections, or alternatives, in order to keep the diagrams very simple and easy to review by end users and customers who are not necessarily familiar with the SysML notation.

From an operational scenario to a validation scenario…

The different operational scenarios defined previously (through Sequence Diagrams) can be reused as skeletons for the future validation of the system. You may ask: “What is the difference between an operational scenario and a validation scenario?” A validation scenario is more detailed than the operational scenario. It contains the same list of interactions but also some additional elements:

  • Concrete values for the different stimuli sent to the system (from external systems or humans)
  • Some delays between interactions in order to reflect human behavior (a human is not a robot that can immediately trigger the stimuli one after the other)
  • Some observations about the system behavior during the mission

We can use an activity diagram to formalize a validation scenario. It can be translated from the operational scenario quite easily:

  • Each input message is translated into a “send signal action” so that we can send a signal to the system
  • Each output message coming back to the operator is translated into an “accept event action” that waits for the arrival of the signal.

Then let us see how we complete this scenario to allow some validation.

  1. The concrete values used as inputs for the stimuli are formalized with the “ValueSpecification” concept and are transmitted to the “send signal actions” through “object flows”. In the example shown below, we load 5 kg of dirty clothes and 0.1 liter of detergent.
  2. The delays between the human interactions are formalized with the use of “AcceptTimeEvent” with a delay expressed in “relative” mode (after XX seconds).
  3. The observations are detailed in the next paragraph.
Translation of an operational scenario into a validation scenario with some complements

Identification of the functions

We start the identification of the functions by looking at the system operational context. It gives us the inputs and outputs of the system. We can use an Internal Block Diagram (IBD) to represent this context.

The system of interest seen as a black box in its operational context

Note: we distinguish different types of flows: information, energy, and matter. We use SysML stereotypes (additional semantics on SysML concepts) in order to manage those specialized flows and associated ports. We can associate a given color and a given icon with each stereotype, which makes the reading easier. A legend is available in the top left of the diagram.

Then the functions are identified by following some simple patterns:

  • The interactions from our System with its context (physical environment, other connected systems, and human interactions) are encapsulated with interface functions that are in charge of managing those interactions.
    • in our sample case, we find “Manage Human Interaction” and “Manage Water”
  • The mission progress is managed by a dedicated function. In our sample case, it is called “Manage Washing Program”
  • Finally, we add all the functions required to manage human interactions and to support the mission
    • In our case, we add “Store Water and Clothes” to ensure the human interaction concerning clothes
    • We add “Provide Washing Movement” to clean the clothes.

We represent the usage of those functions as “part properties” inside our System Of Interest (SoI).

Addition of “service functions” to the system seen as a black box

Note: the functions are all enabled by default in this diagram but some may be disabled by other functions in some conditions during system execution. The function adornment would then change with a new symbol as explained in the legend (top left of the diagram).

Support of service functions with technical functions

Once we have a good idea of our service functions, identified from operational scenarios and from the operational context, we have two options: either we are able to specify their behavior (to support functional simulation), or we consider that the function is too complex or has too many objectives, and then we refine it with lower-level technical functions. In that case, we use our engineering knowledge to identify those technical functions.

In our case, the “Manage Water” function is still complex and needs to be refined into lower-level technical functions. From the website used to understand the washing machine behavior, we learn that we need several functions to manage the water: we need to manage the water level, to heat the water and to store the washing detergent. We connect these functions with the external environment (water supply and sewer, human interactions). We use an IBD to show this refinement:

Observations of the SoI to reach validation objectives – support by function behavior

Now we want to ensure that we can observe our system in order to check that the system behaves as expected and supports the mission performance. Once we have identified all key parameters to monitor we will be able to define the functional behavior required to compute those key parameters and to carry them to the end-user.

What are the key elements we want to observe from our system of interest seen as a black box?

  • First, we want to see the mission progress for which the system is being developed. In our sample case, we want to monitor the state of the program over time: is it filling the water? washing the clothes? purging the water? spinning the clothes?…
    • We use a dedicated function to manage the mission states (as presented previously) and a state machine diagram to represent the different states and their transitions over time
    • This state machine can be simulated using the Cameo Simulation Toolkit as illustrated below
Simulation of the function “Manage Washing Program” formalized as a state machine diagram

Note concerning the colors:

The “red” color represents the current active state during the simulation session.
The “green” color represents all the states that have been simulated since the beginning of the simulation session.
The “yellow” color represents the current transition being triggered (if any).
  • Then we want to verify the Measures of Effectiveness (MoEs). These are the measures used to ensure that the mission is successful in its operational context. We want to be sure that the system will fulfill its mission with accurate performance. In order to do that, we need to monitor, over time, the key system parameters used to calculate the MoEs: water level, number of turns per minute, remaining time…
    • We use equations and parametric diagrams to bind the system key parameters with the equation parameters when there are continuous flows (like the water flowing in and out); a quick prototype of such an equation is sketched after this list.
Equation for calculating the water level over time according to the flow of water
  • Concerning the water level management, we just need to focus on the modes of the function. This can be done through the use of a state machine. Transitions are triggered on key events that come from the control of the washing program steps. And for each state we define a simple behavior with an “Activity” element (using the “DoActivity” to reference those activities).
  • Concerning the human interactions, we can use an activity diagram to represent the configuration of the program, as illustrated below:
Use of an activity diagram to formalize a function behavior
  • Finally, we want to visualize the different MoEs in a synthetic way. We can use plots (provided by Cameo Simulation Toolkit) to show the different curves of the key parameters over time
Plots that show the evolution of key parameters over time
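As an aside for readers who want to prototype such an equation outside the SysML tool, the water level balance bound in the parametric diagram above boils down to a simple differential equation. Here is a MATLAB sketch with invented flow rates and switching times:

```matlab
% Water level balance: dLevel/dt = inflow(t) - outflow(t), in liters/second.
% The flow rates and switching times are invented for the illustration.
inflow  = @(t) 0.20 * (t < 60);       % fill valve open for the first minute
outflow = @(t) 0.15 * (t >= 300);     % purge pump starts at t = 300 s
[t, level] = ode45(@(t, h) inflow(t) - outflow(t), [0 400], 0);
plot(t, level); xlabel('time (s)'); ylabel('water level (L)');
```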

Finalization of the functional architecture

The system functional architecture is finalized by connecting all the system functions with the SoI operational context and with each other using internal flows.

3 kinds of functional flows

Each functional flow can be of 3 different kinds: information, energy or material. Using different icons and colors to represent those different kinds gives the reader immediate feedback and makes the diagram easy to read. The reader can easily focus on one given kind.

Functional architecture formalized as an IBD

We want the functional architecture to support simulation. This means that each function must have an associated behavior that can be executed (simulated). Depending on the function, we can use the following behaviors:

  • State machine (introduced previously to represent the states of the function “Manage Washing Program” over time)
  • Parametric diagram (introduced previously to represent a differential equation with regards to water level over time)
  • Activity diagram to complete a state machine diagram or to specify some constant values as illustrated previously for function “manage human interaction”

Note: we can also use an opaque behavior to reference an external behavior such as a MATLAB function, a Modelica equation, or a Functional Mock-up Unit (see the FMI standard for more information); a minimal sketch of running an FMU is given below.
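For instance, assuming the water-level behaviour had been exported as an FMU (the file name and variable names below are hypothetical), it could be executed from Python with the FMPy library:

```python
# Requires: pip install fmpy
from fmpy import simulate_fmu

# 'WaterLevel.fmu' and its variables 'q_in', 'q_out' and 'level' are hypothetical placeholders.
result = simulate_fmu(
    "WaterLevel.fmu",
    start_time=0.0,
    stop_time=60.0,
    start_values={"q_in": 0.001, "q_out": 0.0},
    output=["level"],
)
print(result["time"][-1], result["level"][-1])  # final time and water level
```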

Driving the simulation with graphical support

  • Finally, we may also want to drive the simulation through a Human Machine Interface (HMI) mock-up that reflects the future operations performed by the end user. In our case, a person can use his/her smartphone to drive the washing machine and monitor the progress of the washing program.
    • For this purpose we can use dedicated widgets provided by the Cameo Simulation Toolkit to represent the future HMI and bind some system states with panels or images as illustrated below:
HMI to drive and monitor the connected washing machine

Can we automate some of the steps presented above?

Yes.

We have created a plugin to automate the transformation of the operational scenarios into validation scenarios. We still need to complete those validation scenarios, but this is easy to do once the list of interactions has been translated from the sequence diagrams.

We have also defined a dedicated functional architecture editor (called FAS for “Functional Architecture Synthesis”). It provides support for choosing among the different kinds of functional flows and can create the function ports automatically when needed to enforce the encapsulation principle (all functional flows pass through the ports of the parent function); a sketch of this port-creation rule is given below.
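To make the encapsulation rule concrete, here is a minimal sketch of the port-creation logic (the path-based data structures are illustrative assumptions, not the editor’s actual implementation): for a flow between two leaf functions, a port is needed on every enclosing function that the flow must cross.

```python
# Minimal sketch of automatic port creation under the encapsulation principle.
# A function is identified by its path, e.g. "Wash/ManageWater/FillTank" (illustrative names).

def ancestors(path):
    """Yield the enclosing functions of a function path, innermost first."""
    parts = path.split("/")
    for i in range(len(parts) - 1, 0, -1):
        yield "/".join(parts[:i])

def ports_to_create(source, target):
    """Return (function, direction) pairs needing a port for a flow from source to target."""
    common = set(ancestors(source)) & set(ancestors(target))
    ports = []
    for parent in ancestors(source):
        if parent in common:
            break  # the flow stays inside this ancestor: no port needed above it
        ports.append((parent, "out"))
    for parent in ancestors(target):
        if parent in common:
            break
        ports.append((parent, "in"))
    return ports

print(ports_to_create("Wash/ManageWater/FillTank", "Wash/ManageProgram/Sequencer"))
# [('Wash/ManageWater', 'out'), ('Wash/ManageProgram', 'in')]
```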

Simulation in practice (video)

Look at the video at the bottom to see 3 validation scenarios executed through model simulation.

3 validation scenarios grouped into one state machine

In the first scenario, we use the simulation console to monitor the values of the key parameters during the simulation, in addition to the plots that show the progress over time.

In the second scenario, the focus is on the HMI used to drive the scenario (representing a smartphone). There is no use of the simulation console: both the control and the monitoring are done through this HMI.

The last scenario is a rainy-day scenario. It shows that it is possible to describe dysfunctional scenarios and use them to see how the system behaves in abnormal conditions.

Enjoy MBSE!

Next articles to come…

  • July 2020 – Consistency between functional and logical architectures
  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements
  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again

Part 2 – From textual requirements to model and to textual req again

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages“.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This second article puts a spotlight on the zig-zag between the top-level system requirements, often expressed as text, the system model that will be used to satisfy those requirements through functional and physical architectures, and the lower-level system requirements that can be partially derived from those architectures. We detail why it is important to clearly define the repository of requirements at each stage of the process. Finally, we demonstrate that we can combine the use of a requirement management tool and of a modeling tool to improve the quality of requirements without duplicating the work.

Functional requirements and functions

The functional analysis consists of analyzing the top-level functional needs and requirements and building one or several functional architectures that satisfy those requirements.

We start by identifying the main functions that satisfy the different functional requirements, and we use a traceability matrix to check that each top-level functional requirement has been taken into account by at least one function (a minimal sketch of such a coverage check is given below). Next, we decompose the main functions into lower-level functions that are easier to understand and to manage. This functional breakdown is recursive, down to the level where a leaf function can be fully performed by a component available on the market or internally, or fully allocated to a subsystem (that will be defined by another team).
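As a minimal illustration of that coverage check (the requirement and function identifiers are hypothetical; in practice the matrix is maintained in the modeling tool):

```python
# Minimal coverage check: every top-level functional requirement must be
# satisfied by at least one main function (identifiers are illustrative).
satisfy_links = {
    "FlyAndTreat_Function": ["REQ-001"],
    "CommunicateStatus_Function": ["REQ-002"],
}
top_level_requirements = ["REQ-001", "REQ-002", "REQ-003"]

covered = {req for reqs in satisfy_links.values() for req in reqs}
uncovered = [req for req in top_level_requirements if req not in covered]
print("Uncovered requirements:", uncovered)  # ['REQ-003'] -> a main function is missing
```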

Let us take the example of a UAV for crop health, in charge of spraying a treatment solution on crops attacked by pathogenic agents. One of its top-level functional requirements is to treat while flying. We can define a main function called “Fly and treat” that will be in charge of satisfying this top-level functional requirement.

This main function can be decomposed into 2 functional units that respectively address the flight (Follow flight plan at constant speed) and the treatment (Treat the crop). And we can continue the decomposition of these 2 functions…

Now let us look at the way we can use SysML to perform these two activities: traceability and functional breakdown.

When using the SysML notation, we can formalize a function through different concepts:

  • The “Block” concept is enough if we are only focused on the structural decomposition of the functions (also called functional breakdown)
  • The “Activity” concept is well adapted if we want to use the behavior to identify and decompose the functions
  • A combination of both a block and a behavior definition concept (state machine, activity, opaque behavior) is useful when we want the flexibility to specify the function and its behavior separately.

In this article we will use the “Block” concept and we will define a “Function” stereotype to distinguish the functions from components (also based on the “Block” concept). The traceability of main functions to top-level system requirements can be achieved through a “Satisfy” requirements matrix. The Functional Breakdown Structure (FBS) of a given main function can be represented either with a Block Definition Diagram (BDD) or with its dual internal representation, the Internal Block Diagram (IBD).

Extract of a traceability matrix from main functions to top level system requirements (left) and FBS as BDD or IBD (right)

For each new function that has been identified, we specify new functional requirements. This gives us two parallel hierarchies that are strongly related: the functional requirements tree and the functions tree.

And then comes the key question: “Where should I store these new functional requirements? In the SysML modeling tool or in my Requirement Management (RM) tool?”

RM tool and SysML Modeling tool – How can we ensure synchronization?

We are used to managing and storing the requirements in a Requirement Management tool. For small projects, such a tool could be Microsoft Word or Microsoft Excel, but for large projects we generally use a dedicated commercial solution (such as DOORS, DOORS Next Generation, Polarion, Jama, Aras Requirements…). Most of the time, the system specification is entirely built from system requirements managed, documented, and reviewed in this requirement management tool.

If we keep this principle of a dedicated tool to manage requirements, this means that we have to add our new functional requirements into this tool. The challenge is to decide how to distribute the activities between the RM tool and the Modeling tool in order to avoid duplicating the efforts and ensure good consistency between the requirements and functions.

The first option is to use the requirement management tool as the only reference to create and maintain the requirements, at any time, and use the modeling tool only to create and maintain the functions. What about the functional requirements just derived from the functions? Should we put them in the RM tool as soon as we identify them? In that case we need to go back and forth between the RM tool and the SysML tool to ensure the consistency between the new functions formalized in SysML and the functional requirements derived from those functions that have to be created or updated in the RM tool. It means that we need to switch between both tools at every change in the functional architecture. It looks painful… and might be an agility killer…

Another option is to create the requirements in the modeling tool, close to the functions they specify, and keep those requirements in the modeling tool until the functional architecture is finalized. If the functional architecture changes (new functions, removed functions, changes in function inputs, outputs, activation in modes, performance…), it is quite easy to adapt the functional requirements because all elements are in the same tool and we can use traceability links to analyze the impacts. When the functional architecture is considered finalized, it is time to extract the functional requirements and put them into the RM tool to complete the specification.

If we look at the previous example, we can create a “relation map” diagram that shows the relations between the top-level functional requirement (Fly and Treat), the main functions associated to it, the sub-functions, and a first draft of their associated functional requirements.

Note: The derivation of requirements from a function is an advanced topic that requires some explanation. Here we show the derivation of only ONE basic (draft) requirement for the sake of simplicity, but a function generally leads to the identification of several requirements, built from the combination of function performance criteria and the lifecycle “situations” (phase/mode/state… and conditions) in which the function is active. Each of the identified requirements will later be completed to prepare its verification, leading to an improvement of the requirement’s maturity and quality.

The derivation of requirements will be detailed in a future dedicated article.

Relation map that explains the extended traceability between top level requirements and lower-level requirements through functions

Note: the relation map can be read through the following sequence of relations: the top-level functional requirement (FlyAndTreat) is satisfied by a Function (Block) that is composed of the two sub-functions fly and treat (part properties) that are each typed (defined) by a function (block) that satisfies requirements.

So far, so good. But what happens if one of my colleagues is defining some lower-level functional requirements in the RM tool while I am defining my functional breakdown in the modeling tool? We are simply doing the same exercise concurrently through 2 different means and in 2 different tools: refining the functional requirements. Double effort for the same value…

You may smile at this situation, but it is something that happens regularly in the industry, especially when the modeling activity has not been defined in the development plan. So it is necessary to think about it. The important principle is to ensure that there is only one reference for the modification of the requirements at a given stage of the development process.

When using the second option, we have requirements managed in two repositories. Thus, it is necessary to clarify which requirements can be modified by which tool to conform to this important principle:

  • The top-level functional requirements are defined and maintained in the RM tool and propagated in the modeling tool
  • The lower-level functional requirements are defined and maintained in the modeling tool and propagated later in the RM tool

This approach makes things clear by using only ONE requirement repository at a given stage. It allows an efficient definition of the requirements by using different tools according to each stage.

We suggest 3 different stages to organize the responsibility in the modification of the requirements:

  1. Before the SRR (Systems Requirements Review): the RM tool is used to define and document all top-level system requirements
  2. During the elaboration of the functional architecture: the modeling tool is used to define and document lower-level (refined) functional requirements derived from the functions. This stage ends with the preparation of the Preliminary Design Review (PDR).
  3. From the preparation of the PDR onward: the RM tool is used to gather all system requirements, including the ones coming from the functional model, in order to ease the review of all system requirements and to generate the complete system specification.

In order to support this distributed work on requirements through 2 different tools, we also need to ensure that we can propagate the requirements between the tools in an easy way. This question is addressed in the next section.

Can we automate the transfer of requirements between tools?

The answer is yes for many situations.

If you use Cameo Systems Modeler (CSM) as a SysML modeling tool, you may know that 3DS provides a companion tool called “Cameo DataHub” that is able to synchronize objects between CSM and DOORS (as well as some other RM tools). Clearly this is a good solution to ensure that requirements are aligned between both tools at a given point in time.

But this is not enough. We also need the traceability between top-level and lower-level system requirements. If we place lower-level requirements in DOORS without traceability to top-level requirements, then the modeling may be considered useless and a waste of time and effort. Traceability is very important because it gives us a powerful means to analyze a change in top-level requirements and immediately identify the lower-level requirements that may be impacted.

The idea is simple: let us extract this traceability from Cameo and create direct links between both levels of requirements, as we would have done directly in DOORS. A small CSM plugin can do this: it extracts the traceability chain and recreates the direct links instead of going through the intermediate modeling elements (the functions). A minimal sketch of this flattening is given below.
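The following sketch shows the idea of the traversal with plain dictionaries rather than the Cameo API (which the actual plugin uses); all identifiers are hypothetical:

```python
# Flatten the chain "top requirement <- satisfy <- function <- derive <- lower requirement"
# into direct links between requirements (identifiers are illustrative).
satisfies = {"FlyAndTreat_Function": ["REQ-TOP-01"]}      # function -> top-level requirements
derived_from = {"REQ-LOW-07": ["FlyAndTreat_Function"]}   # lower requirement -> functions

direct_links = []
for low_req, functions in derived_from.items():
    for function in functions:
        for top_req in satisfies.get(function, []):
            direct_links.append((low_req, top_req))

print(direct_links)  # [('REQ-LOW-07', 'REQ-TOP-01')] -> direct link to synchronize into DOORS
```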

Automation to capture the extended traceability links between requirements

Once this is done, we can synchronize both the lower-level requirements and their traceability links to top-level requirements between CSM and DOORS.

That’s it! Finally, we get our 2 levels of requirements in DOORS with traceability exactly as if we had worked only in DOORS. But we have in fact used CSM to help us in building a functional architecture as an intermediate step, which leads to a far better quality of the requirements once put back in the RM tool!

Zig-zag with synchronization and automation in practice (video)

This short video shows the zig-zag pattern between the RM tool and the SysML tool in practice.

Note: the derivation of lower-level requirements is very basic in this video (as it was not the main topic and we did not want to spend time on it). There will be additional material on this topic at a later date.

Next articles to come…

  • June 2020 – Early validation of stakeholder needs through simulation
  • July 2020 – Consistency between functional and logical architectures
  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI

Previous articles in the series

  • April 2020 – Formalization of functional requirements


Part 1 – Formalization of functional needs with SysML

This article is part of a monthly series entitled “Advanced MBSE with SysML and other languages“.

In the first set of articles, this series explains how to use a modeling approach based on the SysML notation to progressively analyze, structure, refine and derive stakeholder needs and requirements into system architecture and lower-level requirements, down to configuration items containing software and hardware parts.

In the second set of articles, this series will focus on the links to other modeling languages used to detail the design and/or perform detailed analysis and simulations to evaluate, verify or validate the virtual representation of the system.

This first article puts a spotlight on the top-level part of the V Cycle, concerning the translation of stakeholder needs into system requirements. We show how it is possible to use a modeling approach to structure and refine functional needs expressed by the stakeholders of our System of Interest (SoI) and then deduce top-level functions and draft system functional requirements.

We need different views

For the capture, structure, and synthesis of functional needs, we suggest using the following different views:

  • Use Cases view, to define system missions captured from stakeholder needs
  • Operational scenarios view, to show system interactions and expected reactions of the system
  • System context view, to synthesize all external functional interfaces
  • Top-level functions view, to list all the functions derived from operational scenarios or already allocated to the SoI by an enclosing system
  • Operational modes view, which provides boundaries for activation and deactivation of functions
  • Allocation matrix of functions on modes, to specify the validity scope of the functions

Note: there exist other views useful to capture and structure nonfunctional requirements, like Measures of Effectiveness (MoEs) and physical constraints. In this article we only focus on the functional needs.

Now we will show how to use SysML to support those different views, but first we introduce the sample case that is used to illustrate the mappings: the AIDA model. AIDA is an open source model defined at St Exupery Research Institute (Toulouse, France) to formalize a drone in charge of aircraft inspection. It was initially developed with Capella. Here we have translated this example to the SysML notation. The drone used for inspection is our System of Interest (SoI) and the aircraft inspection is the main mission the drone contributes to.

When using SysML, the Use Cases view directly maps to the Use Case Diagram (UCD). We use actors as the roles played by external entities that interact with our drone.

The operational scenarios can be formalized in SysML with interaction diagrams, graphically represented by Sequence Diagrams (SD). We define simple, black-box sequence diagrams showing interactions between the main actor (our drone pilot), the SoI, and the other actors that represent roles played by external entities involved in the scenario. There is no need for other lifelines.

We define as many diagrams as needed to translate all the functional needs of the different stakeholders, including our customers and users. The main goal of these diagrams is not to define the execution logic, but rather to identify the required service functions (functions that provide services to the system end-users) expected by the end-users and customers of the future system. Let us illustrate with 2 sequence diagrams:

The system context view is focused on external functional interfaces. We create this view with SysML by using an Internal Block Diagram (IBD) that shows the system connected to its environment/context. By environment we mean here its physical context with the other systems or external entities that interact with our system of interest. The top-level service functions view can be shown as a tree.

Finally, the operational modes view can be formalized through a SysML state machine. We use a “mode” stereotype to distinguish those modes from other system states. An allocation matrix is used to define which functions are available in which mode.
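As a minimal illustration of such an allocation matrix (the mode and function names are hypothetical):

```python
# Allocation matrix: which functions are available in which operational mode.
allocation = {
    "OnGround": {"ConfigureMission", "CommunicateStatus"},
    "Inspection": {"FollowFlightPlan", "CaptureImages", "CommunicateStatus"},
}

def is_allowed(function, mode):
    """Check the validity of a function in a given mode."""
    return function in allocation.get(mode, set())

print(is_allowed("CaptureImages", "OnGround"))  # False: not available in this mode
```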

But these different views are not independent!

If you create these views independently, you will soon discover that there are several modeling elements that relate to the same concept. You will then realize that your model contains duplicates of the same information, with the possibility of inconsistencies…

How can we avoid this issue? By studying the links between the elements and by providing some rules for the organization of the data in the model (parent / child, owner / owned, whole / part…).

Here is our suggestion:

  • Define your scenarios as behaviors of your use cases. Then you can check that your scenarios use the same actors as (or a subset of) the ones associated with your use case. You can store your scenarios below the UC as “owned behaviors”, so that navigation from the UC to its scenarios is immediate.
  • Initiate and maintain the external interfaces of your system context (IBD) with the messages from the scenarios (at least the messages that go in or out of the SoI). Note: for this you need to map your actors to the elements of your context, to enable the mapping of messages to the context elements.
  • Initiate and maintain a list of top-level required functions identified from the scenario messages: external messages will lead to interface functions, while reflexive messages on the SoI will lead to system internal functions (see the sketch after this list).
  • Identify the modes of your system by looking at the mission profile (stages) and the human interaction steps. This identification is not easy and will probably lead to additional guidance in a future article dedicated to this kind of identification.
  • Once you have both top-level functions and modes, allocate the functions to modes to specify the validity of the functions in accordance with the modes.
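Here is a minimal sketch of the third rule, deriving a draft function list from the scenario messages (the message names and the SoI name are illustrative assumptions):

```python
# Derive draft top-level functions from black-box scenario messages.
# Each message is (sender, receiver, name); "AIDA" stands for the SoI.
messages = [
    ("Pilot", "AIDA", "start inspection"),
    ("AIDA", "AIDA", "compute flight plan"),   # reflexive message on the SoI
    ("AIDA", "Pilot", "send inspection report"),
]

interface_functions, internal_functions = [], []
for sender, receiver, name in messages:
    if sender == receiver == "AIDA":
        internal_functions.append(name)        # reflexive -> system internal function
    elif "AIDA" in (sender, receiver):
        interface_functions.append(name)       # external -> interface function

print("Interface functions:", interface_functions)
print("Internal functions:", internal_functions)
```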

Below is an illustrated summary of the mappings that can be made to check and ensure the consistency between the different views:

Note: when all the semantic mappings have been defined, all the views contribute to the validation of needs or the elaboration of draft functional requirements.

Note: in a future article we will see how these functional requirements can be completed smoothly with additional conditions, performance, and elements of verification to improve their maturity and reach good quality.

Can we automate these mappings?

Yes we can. And we did it.

We have created a plugin to automate the transformations between the views according to changes done by systems engineers. This is a “live mode”: any change in one Sequence Diagram is immediately reflected in the other views (context, functions…) including the requirements view (agile approach).

Automations in practice (video)

Look at the video to see these automations in practice. Enjoy MBSE !

Next articles to come…

  • May 2020 – Derivation of requirements from models: From DOORS to SysML to DOORS again
  • June 2020 – Early validation of stakeholder needs through simulation
  • July 2020 – Consistency between functional and logical architectures
  • August 2020 – Minimization of the coupling in the logical architecture
  • September 2020 – Digital continuity between SysML and Simulink
  • October 2020 – Digital continuity between SysML and AADL
  • November 2020 – Digital continuity between SysML and Modelica
  • December 2020 – Co-simulation of SysML and other models through FMI
