I recently attended the first Andrew P. Sage Senior Design Capstone Competition at George Mason University. This conference included student papers and presentations from GMU, West Point, University of Pennsylvania, US Naval Academy, Stevens Institute of Technology, and Virginia Tech. The conference is named for Andy Sage, who was the first Dean of Engineering at GMU and a prolific writer in the field of systems engineering. The students and faculty did him proud.
But perhaps the presentation that had the greatest impact on me was the keynote by Dr. Kirk Borne: “Using Analytics to Predict and to Change the Future.” He approached the problem from a “Big Data” point of view, beginning early in the presentation with the picture below about zettabytes of data from airline engines. A zettabyte is 1 × 10^21 bytes of data.
I have often noted that in systems engineering, particularly in the early concept development phase, I have a sparse dataset, not a large one. In cutting-edge work, such as defense applications, we often have only basic research, and the massive data from other systems may not relate well to the new concept. Still, during the presentation I found myself writing many notes about how the same concepts apply even to smaller datasets.
Then I realized that we are already applying these kinds of techniques to Innoslate, as a result of applying natural language processing (NLP) to the information we are gathering and developing to create the system model.
For those new to NLP, Wikipedia defines it as “an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to fruitfully process large amounts of natural language data.” We currently use NLP in three of Innoslate’s analytical tools: the Requirements Quality Checker, Intelligence View, and the Traceability Assistant. The first two have been around for a while, but the Traceability Assistant is new with version 4.0.
If you are not familiar with the Requirements Quality Checker, it automates one of the more difficult problems in requirements management: knowing when you have good requirements. The picture below shows an example. The NLP algorithm assesses six of the eight quality attributes shown in the sidebar below (Clear, Complete, Consistent, Design, Traceable, and Verifiable) and rolls them up into an overall quality score.
We use this information to identify problems with the requirements and to suggest fixes. Often those fixes are simple, such as adding a missing punctuation mark to complete the sentence or inserting a key verb (e.g., “shall”). You can always override the suggestion and mark the requirement as passing the test. All such changes are recorded in the History record for that entity.
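To make the idea concrete, here is a minimal sketch of how rule-based quality checks like these can be automated. The specific rules, the ambiguous-word list, and the equal-weight scoring are my own illustrative assumptions, not Innoslate’s actual algorithm:

```python
import re

# Toy requirement-quality checks in the spirit of a quality checker.
# The rules and the equal-weight scoring are illustrative assumptions,
# not Innoslate's actual implementation.
AMBIGUOUS_WORDS = {"quickly", "user-friendly", "adequate", "appropriate", "etc"}

def check_requirement(text: str) -> dict:
    """Run simple pass/fail checks and roll them up into a 0-100 score."""
    words = {w.strip(".,;").lower() for w in text.split()}
    results = {
        # Verifiable: a binding verb such as "shall" is present.
        "has_shall": bool(re.search(r"\bshall\b", text, re.IGNORECASE)),
        # Complete: the sentence ends with terminal punctuation.
        "ends_with_period": text.rstrip().endswith("."),
        # Clear: no vague, hard-to-verify words.
        "no_ambiguous_words": not (words & AMBIGUOUS_WORDS),
    }
    # Roll the individual checks up into a single quality score.
    results["score"] = round(100 * sum(results.values()) / 3)
    return results

print(check_requirement("The system shall detect wildfires within 5 minutes."))
# {'has_shall': True, 'ends_with_period': True, 'no_ambiguous_words': True, 'score': 100}
print(check_requirement("The interface should be user-friendly"))
# {'has_shall': False, 'ends_with_period': False, 'no_ambiguous_words': False, 'score': 0}
```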
Intelligence View also applies NLP technology against over 65 heuristics (i.e., rules of thumb) that represent best practices. Here the NLP comes into play by comparing the roots of words, so it quickly recognizes that “Wildfire” and “Wildfires” potentially refer to the same object. You can also select the “Fix” button, and a window pops up that explains the problem and helps you fix it (see image on the right).
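To illustrate the root-matching idea, here is a toy normalizer that treats singular and plural forms as the same term. This crude suffix stripping is only a stand-in for the real stemming or lemmatization a production tool would use:

```python
def crude_stem(word: str) -> str:
    """Very rough suffix stripping; a real tool would use a proper
    stemmer or lemmatizer (e.g., the Porter stemmer)."""
    w = word.lower()
    if w.endswith("ies") and len(w) > 4:
        return w[:-3] + "y"   # "policies" -> "policy"
    if w.endswith("s") and not w.endswith("ss") and len(w) > 3:
        return w[:-1]         # "wildfires" -> "wildfire"
    return w

# "Wildfire" and "Wildfires" reduce to the same root, so a heuristic
# can flag them as potentially referring to the same object.
print(crude_stem("Wildfire") == crude_stem("Wildfires"))  # True
```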
Finally, our newest application of NLP technology comes in the form of the Traceability Assistant. For all of us who have been working with relational databases, Innoslate’s Traceability Assistant is a dream come true: the real challenge has always been relating information between different classes of data. In fact, I was mapping two related policy documents the other day and asked my developers, “Is there some way to automate this process of tracing requirements between documents?” They showed me what they were working on: the Traceability Assistant.

It uses NLP to read the name and description fields of every entity, compare them, and determine whether two entities match and how strong the match is. In the example below, we can see different shades of green, where darker green indicates a higher-probability match. It is still just an algorithm, and you may not agree with its conclusions, so you must place the “X” in the box yourself; the tool also shows the full name and description of the row and column entities so that you can make an informed decision. The best part is that this works with any relationship between entity classes, so we can use it for functional allocation as well as requirements traceability and all the other connecting relationships. Can you imagine the productivity increase from this?
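As a sketch of how such match strengths could be computed (Innoslate’s actual algorithm is not described here, and the entities below are made up), one common approach is TF-IDF cosine similarity over the combined name and description text of each candidate pair:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical entities: name + description text, as an assistant might read them.
requirements = [
    "Detect Wildfire: The system shall detect wildfires within 5 minutes of ignition.",
    "Notify Responders: The system shall notify fire responders of a detected wildfire.",
]
functions = [
    "Monitor Terrain: Continuously scan terrain imagery for signs of fire.",
    "Send Alert: Transmit an alert message to the responder dispatch service.",
]

# Vectorize all texts together so both sides share one vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(requirements + functions)
scores = cosine_similarity(matrix[: len(requirements)], matrix[len(requirements):])

# Higher scores would render as darker green cells in the matrix view.
for i, req in enumerate(requirements):
    for j, fn in enumerate(functions):
        print(f"{req.split(':')[0]} vs {fn.split(':')[0]}: {scores[i, j]:.2f}")
```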
Innoslate also has a Suspect Assistant: if relationships have already been created and reviewed but changes are then made, it helps identify which entities should likely no longer be connected. Many other tools simply flag everything downstream of a change as suspect, so someone cleaning up the grammar can trigger a major review down the entire chain. What a waste of time and energy. Innoslate’s Suspect Assistant instead highlights, in shades of red, the probability that traced entities should no longer be connected. It can also be used after a set of manual connections to identify where the name and description do not provide enough information to validate a connection between the entities, helping you see where you need to enhance the clarity between connected entities.
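Purely as an assumption about how suspect detection could work, the sketch below re-scores a previously accepted link after an edit and flags it only when the similarity drops meaningfully, so a cosmetic grammar fix does not trigger a review; the threshold and scoring are made up for illustration:

```python
import re

def tokens(text: str) -> set:
    """Lowercased word set with crude plural stripping (stand-in for real NLP)."""
    words = re.findall(r"[a-z]+", text.lower())
    return {w[:-1] if w.endswith("s") and not w.endswith("ss") else w for w in words}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two token sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

SUSPECT_DROP = 0.2  # made-up threshold: flag when similarity falls this much

def is_suspect(old_text: str, new_text: str, linked_text: str) -> bool:
    """Flag a link as suspect only when an edit weakens the semantic match."""
    return similarity(old_text, linked_text) - similarity(new_text, linked_text) > SUSPECT_DROP

linked = "The system shall notify fire responders of a detected wildfire."
old = "The system notify responders of wildfires"
# A cosmetic grammar fix barely changes the score: not suspect.
print(is_suspect(old, "The system notifies responders of wildfires.", linked))  # False
# Rewriting the entity to mean something else does: suspect.
print(is_suspect(old, "The system shall archive terrain imagery daily.", linked))  # True
```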
Both of these assistants are available in the traceability matrix diagram provided in Innoslate 4.0. Our commitment to the customer and to applying emerging technologies, such as LML, cloud computing, and NLP, demonstrates that Innoslate is the tool for enabling 21st Century Digital Engineering.