Why Do We Model? Part 2

In an earlier blog I talked about optimizing cost, schedule, and performance of the system as the primary reason for modeling, but I realized that there is another, perhaps simpler way to look at modeling. Before we get to that, though, we have to define the type of modeling we are discussing.

Engineers develop many types of models. The mechanical engineer develops CAD models and analyzes them with computational fluid dynamics models and finite element models. The software developer may use some kind of object-oriented modeling (if they are writing code in an object-oriented (O-O) language like Java or C++; not so much if they are writing UI code in JavaScript). Electrical engineers do all kinds of cool modeling of electronic systems using tools like Ansys Electronics and Maxwell (which also uses finite element analysis techniques).

But systems engineers are not doing physics-based modeling. The kind of modeling we do is functional or behavioral modeling. Why is this type of modeling most important to systems engineering? Because we are trying to control the behavior of systems, which include people, and behavior cannot easily be represented by physics equations (yes, I know about Wyman’s set theory work). As a physicist myself, I understand the non-linear nature of the problem, and even if we could come up with the proper equations, we don’t have the math to solve them.

 

Functional Modeling

So, our functional modeling (even much of SysML is functional in nature; sequence diagrams and activity diagrams, for example, are both functional diagrams) gives us something important: functional requirements. Why are functional requirements so important, you might ask? Those requirements relate best to the user needs that should be driving the design. Functional requirements give the design engineers the freedom to implement them in many different ways, which can go a long way toward system optimization.

Functional requirements provide that bridge between user needs and detailed design. It is notable that the software community realized the importance of functional requirements a couple of decades ago with their adoption of Agile software development techniques. Agile, in all its forms, begins with functional requirements. Where do they get the functional requirements? From the systems engineers (hopefully), who derive those functional requirements by developing behavioral (or, if you prefer, activity) models of “use cases,” “operational threads,” or “scenarios.” Of course, to drive out the full functionality you want from a system requires the right set of scenarios. That’s for another blog, another day.

These functional requirements are also essential for verification and validation (V&V). The operational functional requirements are validated in the Operational Test and Evaluation stage of the lifecycle. The system functional requirements are verified in the Developmental Test and Evaluation stage. So, a good requirements analyst either makes the functional requirements verifiable or traces them to verification requirements. Those requirements are what the testing community uses to design the V&V activities.

Verification and Validation Through Test Center

We can also ensure that we have the proper functional requirements by using simulation to test them. Discrete event simulation provides a means to “step through” the functional model to make sure the process works. It also provides a means to derive performance requirements. By adding timing information to each step, we can use the simulation to predict how long the overall process will take. That prediction is often called “timeliness,” a key metric in the Quantity, Quality, and Timeliness set of measures we use to make decisions about the viability of the system. We can also get at “quantity” by modeling resource production and usage.
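As a minimal sketch of that idea (the step names and durations below are hypothetical examples, not taken from any real model), a discrete event pass can simply walk the functional flow and accumulate the time of each step:

```python
# Minimal sketch: step through a functional model and accumulate time.
# The steps and durations are hypothetical, chosen only for illustration.

process_steps = [
    ("Receive request", 2.0),   # minutes
    ("Validate request", 5.0),
    ("Process request", 12.0),
    ("Send response", 1.0),
]

clock = 0.0
for name, duration in process_steps:
    clock += duration
    print(f"{clock:6.1f} min  completed: {name}")

print(f"Total process time (timeliness): {clock:.1f} minutes")
```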

Of course, there is a lot of variation in most processes. You will go down different paths depending on conditions (decision points). These decision points may even depend on resource availability. Also, you likely do not know the timing precisely, particularly early in process development and modeling. So you will use distributions for the timing of each process step (i.e., each function).
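One way to sketch that (the triangular distributions and the 20% rework branch below are illustrative assumptions, not data from any program) is to replace the fixed durations with random draws and roll the dice at each decision point:

```python
import random

# Hypothetical triangular distributions (low, mode, high) for each step, in minutes.
STEP_DISTRIBUTIONS = [
    ("Receive request",  (1.0, 2.0, 4.0)),
    ("Validate request", (3.0, 5.0, 10.0)),
    ("Process request",  (8.0, 12.0, 30.0)),
    ("Send response",    (0.5, 1.0, 2.0)),
]

REWORK_PROBABILITY = 0.2  # assumed chance that validation fails and the request is resubmitted

def run_once():
    """Simulate one pass through the hypothetical process, including one decision point."""
    clock = 0.0
    for name, (low, mode, high) in STEP_DISTRIBUTIONS:
        clock += random.triangular(low, high, mode)
        if name == "Validate request" and random.random() < REWORK_PROBABILITY:
            # Decision point: a failed validation loops back through receive + validate once.
            for _, (lo2, md2, hi2) in STEP_DISTRIBUTIONS[:2]:
                clock += random.triangular(lo2, hi2, md2)
    return clock

print(f"One sample of total process time: {run_once():.1f} minutes")
```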

Another impact on your timing, and its effect on the functional model, comes in the form of physical constraints, such as the time it takes to transmit data. These physical constraints can be modeled using the “size” of the transmission “pipe” and the amount of material or information being transmitted. We usually characterize the size of the pipe in terms of latency and capacity. Latency comes from the fact that nothing is instantaneous. For example, when you transmit a phone call through satellites in geosynchronous orbit (GEO), it takes several hundred milliseconds to go up and back down to the ground. These latencies can add up quickly as you do multiple “hops.” Capacity comes into play when the amount of information is large relative to the rate at which it can be transmitted, often called bandwidth in communications. You can see this effect when you try to stream video and you get a “spinner” while it waits to load the next scene.
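As a rough back-of-the-envelope sketch (the link numbers are illustrative; a GEO satellite sits at roughly 36,000 km, so one up-and-down hop costs a couple hundred milliseconds), transmission time can be modeled as propagation latency plus the data size divided by the available bandwidth:

```python
SPEED_OF_LIGHT_KM_S = 299_792   # km/s
GEO_ALTITUDE_KM = 35_786        # approximate geosynchronous orbit altitude

def transfer_time_s(data_megabits, bandwidth_mbps, hops=1):
    """Total time to move data over a GEO link: propagation latency plus serialization delay."""
    # Each hop sends the signal up to the satellite and back down to the ground.
    latency_s = hops * (2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S)
    serialization_s = data_megabits / bandwidth_mbps
    return latency_s + serialization_s

# Illustrative link: 100 megabits of data over a 10 Mbps channel with two satellite hops.
one_hop_latency_ms = 1000 * 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
print(f"Single-hop GEO latency: {one_hop_latency_ms:.0f} ms")
print(f"Transfer time, 2 hops:  {transfer_time_s(100, 10, hops=2):.1f} s")
```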

Asset Diagram for Physical Modeling

All this variation means that just running a single pass using discrete event simulation is not enough to ensure that the functional model is correct. And since these processes are stochastic (randomly determined) in nature, we need to use a Monte Carlo technique (simulator) to fully assess the viability of the functional model.

That means that I not only need to model the system, but I also need to simulate it to ensure that the model is correct! And I had better do it using both discrete event and Monte Carlo simulation (at least).
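Continuing the same hypothetical process from the sketches above, a Monte Carlo wrapper is nothing more than running the stochastic pass many times and summarizing the distribution of results, for example the mean and 90th percentile of total process time:

```python
import random
import statistics

def run_once():
    """One stochastic pass through the hypothetical four-step process (minutes)."""
    steps = [(1.0, 2.0, 4.0), (3.0, 5.0, 10.0), (8.0, 12.0, 30.0), (0.5, 1.0, 2.0)]
    return sum(random.triangular(low, high, mode) for (low, mode, high) in steps)

# Monte Carlo: repeat the stochastic simulation many times and look at the spread of outcomes.
trials = sorted(run_once() for _ in range(10_000))

print(f"Mean total time:   {statistics.mean(trials):.1f} min")
print(f"90th percentile:   {trials[int(0.9 * len(trials))]:.1f} min")
print(f"Worst case seen:   {trials[-1]:.1f} min")
```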

Monte Carlo Simulator

 

Now I know you are saying to yourself, “this is a lot of work!” The answer to that is the usual one: “It depends.” What it depends on is the techniques and tools you are using. You can use the common toolset used by many practitioners today (DOORS, MagicDraw, Cameo Simulator, etc.), or you can use an integrated tool that does all of this and is based on an open standard language, such as Innoslate®, which implements the Lifecycle Modeling Language (LML). Innoslate® provides the entire capability discussed above for roughly the cost of any single tool in that toolset. Oh, and by the way, if you use the Cameo suite, you still have to do a lot of coding just to get it to work, let alone deal with the timing and physical constraints discussed above. You also have to deal with synchronizing all those different databases, because each tool has its own. Innoslate® lets you focus on the complexity of the problem, not the complexity of the tool environment. Create a free account at cloud.innoslate.com/sign-up