Everyone is talking about “Model-Based Systems Engineering” or MBSE, but why are we modeling? What are we supposed to be getting out of these models? To answer these questions, we have to go back to basics and talk about what we are doing as systems engineers.
We often say that the job of a systems engineer is to “optimize the system’s cost, schedule, and performance, while mitigating risks in each of these areas.” Note that this is essentially the same thing that the program manager does for the program, hence the close relationship between the two disciplines.
Another aspect of systems engineering is that we need to be the honest broker, balancing the design across all the different design disciplines. The picture below shows what would happen if we let any one discipline dominate the design.

So our modeling must support both of these goals: 1) optimizing cost, schedule, performance, and risk; and 2) balancing the design disciplines. How does modeling support that?
Using the Lifecycle Modeling Language (LML) and its implementation in the Innoslate® tool, we can accomplish both tasks. For cost, schedule, and performance optimization, we use only two diagrams, Action and Asset, along with the ontology's Action, Asset, Input/Output, and Conduit entity classes as the primary entities in those diagrams. Innoslate® also includes Resources, as well as the allocation of Actions to Assets (the "performed by"/"performs" relationship) and of Input/Outputs to Conduits ("transferred by"/"transfers"). This capability to allocate entities to one another lets the functional model be constrained by the physical model: Input/Outputs have a size, and Conduits have latency and capacity, so we can calculate the appropriate delays for transmitting data, power, or any other physical flow.
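To make the delay calculation concrete, here is a minimal sketch of the idea in Python. The class and attribute names are illustrative only, not Innoslate's actual API; the assumed relationship is that a transfer's total delay is the Conduit's fixed latency plus the Input/Output's size divided by the Conduit's capacity.

```python
# Hypothetical sketch: a Conduit's latency and capacity constrain the
# transfer of an Input/Output. Names are illustrative, not Innoslate's API.
from dataclasses import dataclass

@dataclass
class InputOutput:
    name: str
    size_mb: float  # size of the data (or other flow) being transferred, in MB

@dataclass
class Conduit:
    name: str
    latency_s: float        # fixed delay per transfer, in seconds
    capacity_mb_per_s: float  # throughput, in MB per second

def transfer_delay(io: InputOutput, conduit: Conduit) -> float:
    """Total delay = fixed latency + transmission time (size / capacity)."""
    return conduit.latency_s + io.size_mb / conduit.capacity_mb_per_s

telemetry = InputOutput("Telemetry Frame", size_mb=50.0)
downlink = Conduit("RF Downlink", latency_s=0.25, capacity_mb_per_s=10.0)
print(transfer_delay(telemetry, downlink))  # 0.25 + 50/10 = 5.25 seconds
```

The same arithmetic applies whether the flow is data over a network, fluid through a pipe, or power through a cable; only the units change.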
Resources can be used to represent key performance parameters like weight (mass) and power. Actions can produce, seize, or consume Resources. Another key performance parameter is timing. Time is included in each Action as the duration of that step, and of course each Action can be decomposed, so the timings of the subordinate steps accumulate into the overall system timings of interest. We can see how this approach gives us the information needed to predict the performance of the system.
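The duration rollup through decomposition can be sketched as a simple recursion. This is an illustrative model, not Innoslate's implementation; it assumes subordinate steps execute sequentially (parallel branches would take the maximum instead of the sum), and the power figures are invented for the example.

```python
# Illustrative sketch (not Innoslate's API): Action durations roll up through
# decomposition, and Actions consume a Resource such as power while active.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    duration_s: float = 0.0                        # leaf-level step duration
    children: list = field(default_factory=list)   # decomposed sub-actions
    power_consumed_w: float = 0.0                  # Resource consumption

    def total_duration(self) -> float:
        # A decomposed Action's duration is the sum of its children's
        # durations (assuming sequential execution).
        if self.children:
            return sum(child.total_duration() for child in self.children)
        return self.duration_s

mission = Action("Collect Imagery", children=[
    Action("Slew to Target", duration_s=30.0, power_consumed_w=120.0),
    Action("Capture Image", duration_s=5.0, power_consumed_w=200.0),
    Action("Downlink Data", duration_s=45.0, power_consumed_w=80.0),
])
print(mission.total_duration())  # 30 + 5 + 45 = 80.0 seconds
```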
Note that we can model business, operations, or development processes this same way, and thus use the model to derive the overall schedule for the program. So we get to the Schedule part of optimization using the same approach! Talk about reducing confusion between the systems engineering and program management disciplines!
But let's not forget Cost. Since LML defines an independent Cost class, we can use it to capture the costs incurred by personnel in each step of the process, as well as the costs of consuming resources.
So now, if we can dynamically accumulate these performance parameters, schedule elements, and cost elements through process execution, we have the first part of our first optimization goal. We can easily execute the model using the discrete event simulator built into Innoslate®; each execution traverses at least one path through the model. The tool accumulates the cost values, produces a Gantt chart schedule, and tracks Resource usage over time, which leads us to performance.
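The essence of that accumulation can be shown in a few lines. This is a deliberately minimal sketch of what a discrete-event execution of a sequential process gathers, not how Innoslate's simulator works internally; the step names, durations, and labor rates are invented for illustration.

```python
# Minimal sketch: walking a sequential process while accumulating a
# Gantt-style schedule and a running cost total. Values are illustrative.

steps = [
    # (name, duration in hours, labor cost in $/hour)
    ("Requirements Analysis", 40.0, 150.0),
    ("Preliminary Design",    80.0, 140.0),
    ("Detailed Design",      120.0, 130.0),
]

clock = 0.0
total_cost = 0.0
schedule = []
for name, duration, rate in steps:
    start, end = clock, clock + duration
    schedule.append((name, start, end))   # one Gantt bar per step
    total_cost += duration * rate         # Cost accumulates per step
    clock = end                           # advance the simulation clock

print(schedule[-1])  # ('Detailed Design', 120.0, 240.0)
print(total_cost)    # 40*150 + 80*140 + 120*130 = 32800.0
```

A real discrete-event simulator also handles branching, parallelism, and resource contention, but the core bookkeeping is the same: advance a clock, record intervals, and sum the costs.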
But how do we get to risk? That's where we find that the values we use for size, latency, capacity, duration, and the other numerical attributes of these entities can be represented by distributions. With these distributions, we can run the built-in Monte Carlo simulator, executing the model as many times as needed to build distributions for cost, schedule, and performance (Resources). These distributions represent the uncertainty of achieving each target, and that uncertainty is directly related to the probability of occurrence for the risk in each area. Combine that probability with a consequence, and we have an estimated value for Risk. Of course, LML gives us a Risk class, which has been fully implemented in Innoslate® and is visualized using the Risk Matrix diagram.
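The Monte Carlo idea can be sketched as follows: sample each uncertain duration from a distribution, execute the (here, trivially sequential) model many times, and convert the resulting spread into a risk estimate. The distributions, the 100-second requirement, and the $500,000 consequence are all invented assumptions for illustration.

```python
# Hedged sketch of the Monte Carlo approach: sample uncertain durations,
# run the model repeatedly, and turn the spread into a risk probability.
import random

random.seed(42)  # repeatable runs for the example

def run_once() -> float:
    # Each step's duration is drawn from a triangular distribution
    # (low, high, most-likely) instead of a single point estimate.
    slew = random.triangular(25.0, 40.0, 30.0)
    capture = random.triangular(4.0, 8.0, 5.0)
    downlink = random.triangular(40.0, 70.0, 45.0)
    return slew + capture + downlink

trials = [run_once() for _ in range(10_000)]

threshold_s = 100.0                     # an assumed schedule requirement
p_exceed = sum(t > threshold_s for t in trials) / len(trials)
consequence = 500_000.0                 # assumed cost impact of missing it
expected_risk = p_exceed * consequence  # Risk = probability x consequence

print(f"P(overrun) = {p_exceed:.3f}, expected risk = ${expected_risk:,.0f}")
```

The probability of exceeding the threshold comes straight out of the simulated distribution, and multiplying by the consequence gives the risk value that would land on a Risk Matrix.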
OK, so now that the first optimization is complete, how do we get to the next one: optimization across the design disciplines? LML comes into play there as well. LML is an easy-to-understand language designed for all the stakeholders: design engineers, management, users, operators, cost analysts, and so on. They can all play their roles in the system development, many using their own tools. LML provides the common language that anyone can use, and we can easily translate what the electrical, mechanical, or any other engineer does into this language. Innoslate® also provides the capability to store and view CAD files. Results from Computational Fluid Dynamics (CFD) codes or other physics-based models can also be captured as Artifacts. We can take the summary results and translate them into the performance distributions used in the Monte Carlo calculations. For example, if we use Riverbed to characterize the capacity (bandwidth) and latency of a network, we take the resulting distributions and use them to refine our model. We can then rerun the Monte Carlo calculation and see the impact.
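As a hypothetical illustration of that last step, suppose a network analysis tool exports a set of latency measurements. Summarizing them into distribution parameters is all it takes to replace a point estimate in the model; the sample values below are invented, and the normal-distribution assumption is just one reasonable choice.

```python
# Hypothetical example: fold external measurement results back into the
# model by summarizing them as distribution parameters for Monte Carlo runs.
import statistics

# Illustrative latency samples (ms), e.g. exported from a network analyzer
latency_samples_ms = [12.1, 11.8, 13.4, 12.9, 14.2, 12.5, 13.1, 11.9]

mu = statistics.mean(latency_samples_ms)
sigma = statistics.stdev(latency_samples_ms)

# These parameters would replace the conduit's point-estimate latency, so
# the next Monte Carlo run reflects the measured uncertainty.
print(f"latency ~ Normal(mu={mu:.2f} ms, sigma={sigma:.2f} ms)")
```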
So LML and Innoslate® give us the capability to meet the optimization goals of both systems engineering and program management, in a way that is simple and easy to explain to decision makers. Think of LML and Innoslate® as modeling made useful!