March 18, 2020 | By Melinda Ballou, IDC | 4 min read

The Software Imperative – Product Companies Must Also Be Software Companies for Responsiveness and Competitiveness

Digital transformation is both driving and weaving its way into product development. To differentiate and compete, companies must make the integration of software and systems the keystone of their products. Increasingly software-intensive systems challenge the way products are engineered and affect the underlying business execution, fundamentally changing how companies design, model, develop, test, change, manage and deploy products. At the same time, the sheer volume of software needed, along with extraordinarily high and still-increasing technical, regulatory and business complexity, is driving organizations to optimize engineering processes for better visibility into real-time data. Coordinating electrical, mechanical, systems and software engineering across systems of systems and Internet of Things (IoT) data creates dramatic new opportunities for digital transformation.

But what about the costs and consequences of poor approaches to the creation of intelligent systems, and the imperative to incorporate appropriate engineering lifecycle management? A recent, highly visible failure involving Boeing's Starliner spacecraft resulted from multiple software flaws that were discovered only after the capsule was already in flight. In this instance, the Starliner clock reset itself to the wrong time, depleting fuel and preventing the capsule from reaching the International Space Station. While troubleshooting that issue, the teams uncovered another software defect that could have caused the capsule to be destroyed as it re-entered Earth's atmosphere; both problems had to be addressed while the capsule was still in flight. This systems-of-systems debacle resulted, in part, from not testing software in the context of other systems, and it will ultimately cost Boeing hundreds of millions of dollars. While Boeing has traditionally been a world-class engineering company for aeronautics, it has not been a software company.

Safety-critical engineering with massively increasing amounts of software demands automation and effective models. Teams managing tens of thousands of requirements or more, across broad partner and supplier ecosystems, need software solutions that can handle this type of complexity. Such tools can help ensure that interfaces are well defined and tested even before hardware is ready. Employing automation for analysis and reporting on complex tasks brings shared visibility to key stakeholders. That visibility helps teams maximize reuse, which in turn shortens time to market: functionality that has already been vetted and tested does not have to be reinvented. Up-to-the-minute reporting shows what has been accomplished, how it maps to what was planned and which iterations are still needed, which helps set metrics. Validation and verification underlie every iteration of hardware and software quality, resilience and performance. Adopting an agile methodology, setting sprint boundaries and pulling in various kinds of integration testing can therefore free software teams to execute within broader, larger iterations.
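
To make the reporting point concrete, here is a minimal sketch in Python of the kind of automated requirements-coverage report such tooling can generate. The artifact structure and names (Requirement, TestResult, coverage_report) are hypothetical illustrations, not the data model of any particular ELM product.

```python
# Minimal sketch: automated requirements-coverage reporting.
# Artifact structure and field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    tests: list = field(default_factory=list)   # IDs of linked test cases

@dataclass
class TestResult:
    test_id: str
    passed: bool

def coverage_report(requirements, results):
    """Report, per requirement, whether it is covered by tests and verified."""
    status = {r.test_id: r.passed for r in results}
    report = []
    for req in requirements:
        linked = [t for t in req.tests if t in status]
        report.append({
            "requirement": req.req_id,
            "covered": bool(req.tests),                       # has at least one linked test
            "verified": bool(linked) and all(status[t] for t in linked),
        })
    return report

if __name__ == "__main__":
    reqs = [
        Requirement("REQ-001", "The clock shall synchronize with ground time.", ["TC-10"]),
        Requirement("REQ-002", "Thruster firings shall be logged.", []),
    ]
    results = [TestResult("TC-10", passed=True)]
    for row in coverage_report(reqs, results):
        print(row)
```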

DevOps and Digital Engineering

DevOps practices, as part of digital engineering and Engineering Lifecycle Management (ELM), can be well complemented by machine learning (ML) and advanced analytics. These help teams assess compliance levels and the delta between where they are and where they need to be, so they can contextualize and plan next steps. In the past, for example, teams wrote pattern-matching scripts to review and check requirements, but their usefulness was limited: because the scripts did not understand the semantics of sentences, it was hard to find and fix even simple semantic problems, let alone satisfy complex compliance standards. Natural language processing, by contrast, can apply rule trees to understand content, delineate requirements and identify problems through quality checkers. Coordinating these capabilities with both requirements and quality analysis can help establish baseline scores for successful execution against industry standards such as those published by INCOSE.
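
As a rough illustration of the gap between simple pattern matching and a rule-driven quality checker, here is a minimal sketch in Python. The rules, word lists and scoring (check_requirement, baseline_score, AMBIGUOUS_TERMS) are simplified, hypothetical stand-ins for the richer linguistic analysis a real quality checker applies against guidance such as INCOSE's; they do not represent any product's rule set.

```python
# Minimal sketch of a rule-based requirement quality checker.
# Word lists and rules are simplified, made-up stand-ins for real
# INCOSE-style quality checks.
import re

AMBIGUOUS_TERMS = {"appropriate", "adequate", "as needed", "user-friendly", "fast"}

RULES = [
    ("missing 'shall'",      lambda s: "shall" not in s.lower()),
    ("ambiguous wording",    lambda s: any(t in s.lower() for t in AMBIGUOUS_TERMS)),
    ("compound requirement", lambda s: re.search(r"\b(and|or)\b", s.lower()) is not None),
    ("unbounded quantity",   lambda s: re.search(r"\b(all|every|any)\b", s.lower()) is not None),
]

def check_requirement(statement):
    """Return the names of the quality rules a requirement statement violates."""
    return [name for name, violated in RULES if violated(statement)]

def baseline_score(statements):
    """Crude baseline score: the fraction of statements with no violations."""
    clean = sum(1 for s in statements if not check_requirement(s))
    return clean / len(statements) if statements else 1.0

if __name__ == "__main__":
    reqs = [
        "The system shall report the fuel level at 2 Hz.",
        "The capsule should respond appropriately and log all events.",
    ]
    for r in reqs:
        print(r, "->", check_requirement(r) or "OK")
    print("baseline score:", baseline_score(reqs))
```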

Being able to access these kinds of ELM capabilities in the cloud, in conjunction with development environments, requires coordination. Since organizations have mixed environments, synchronization across the various automation and engineering lifecycle phases is a necessity. Needed capabilities include change management tied to engineering workflow, quality management, model-based testing, and linking to requirements to allow coordination and visibility across the development lifecycle. For engineering data, versioning is core to managing complexity; linking changes and relating them to requirements and quality is vital for shared visibility and management of the ELM process.
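
To illustrate why versioned links matter, here is a minimal sketch with hypothetical class names (ArtifactRef, ChangeRequest): a change record points at specific versions of the requirement and test case it affects, so the delta stays traceable later as part of change management.

```python
# Minimal sketch: a change request recording *versioned* links to the
# requirement and test case it affects. Names and fields are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ArtifactRef:
    artifact_id: str   # e.g. "REQ-001" or "TC-10"
    version: int       # the exact version the change applies to

@dataclass
class ChangeRequest:
    cr_id: str
    summary: str
    affects: list      # list of ArtifactRef

def changes_touching(changes, artifact_id):
    """Find every change request that touches any version of a given artifact."""
    return [c for c in changes if any(ref.artifact_id == artifact_id for ref in c.affects)]

if __name__ == "__main__":
    cr = ChangeRequest(
        cr_id="CR-42",
        summary="Correct mission clock synchronization",
        affects=[ArtifactRef("REQ-001", version=3), ArtifactRef("TC-10", version=2)],
    )
    print([c.cr_id for c in changes_touching([cr], "REQ-001")])
```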

We also have to assume an open world of non-homogeneous tools in a shared configuration context. Tying systems together with standards such as OSLC (Open Services for Lifecycle Collaboration) gives mixed toolsets a way to provide development streams and baselines, load in models from a range of tools, and create links between models and requirements. As the system evolves, this federated context across tools enables broader coordination, including letting users choose when to update to new versions of their models.
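
Because OSLC exposes lifecycle artifacts as linked data over HTTP, cross-tool links can be followed with ordinary web requests. The sketch below, using Python's requests and rdflib libraries, fetches a requirement from a hypothetical server URL and lists the test cases linked to it through the OSLC RM validatedBy property; the endpoint and credentials are placeholders, and a real deployment would also need to supply the appropriate configuration context for versioned data.

```python
# Minimal sketch: follow OSLC links between tools with plain HTTP + RDF.
# The server URL and credentials are placeholders, not a real endpoint.
import requests
from rdflib import Graph, Namespace, URIRef

OSLC_RM = Namespace("http://open-services.net/ns/rm#")
DCTERMS = Namespace("http://purl.org/dc/terms/")

def validating_tests(requirement_url, session):
    """Fetch one OSLC requirement and return its title and validating test cases."""
    resp = session.get(requirement_url, headers={"Accept": "application/rdf+xml"})
    resp.raise_for_status()
    graph = Graph()
    graph.parse(data=resp.text, format="xml")
    subject = URIRef(requirement_url)
    title = graph.value(subject, DCTERMS.title)
    tests = [str(o) for o in graph.objects(subject, OSLC_RM.validatedBy)]
    return title, tests

if __name__ == "__main__":
    session = requests.Session()
    session.auth = ("user", "password")                        # placeholder credentials
    req_url = "https://elm.example.com/rm/resources/REQ-001"   # hypothetical endpoint
    title, tests = validating_tests(req_url, session)
    print(title)
    for t in tests:
        print("validated by:", t)
```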

Even simple products are becoming complex systems of systems, and we need to understand impact and dependencies in a way that is trackable. The notion of global configurations provides tree-structured control over changing systems, with multiple levels of visibility into those changes. When you update requirements or code, you also have to update tests; those tests need to be baselined so that test output is tracked, and all of it needs to be managed by the system and available later as part of change management. Like Russian nesting dolls, there are layers within layers for requirements, architecture, design, testing and change management relating to the system itself. Understanding a single device as part of a system, and then understanding how that system affects and depends on other systems, is vital. This visibility also provides flexibility: taking a successful, documented base model and building a variant that introduces a new product, reusing the existing requirements, models and tests, can save development time, lower development costs and help ensure quality.
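
The nesting-dolls idea can be made concrete with a small tree of configurations. The sketch below is a conceptual illustration only, with hypothetical class names rather than the data model of any particular ELM product: a global configuration aggregates component configurations, a baseline freezes the whole tree, and a variant stream branched from that baseline reuses the existing requirements, models and tests.

```python
# Conceptual sketch of global configurations: a tree of component
# configurations that can be baselined and branched into variants.
# Class and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Configuration:
    name: str
    kind: str                                        # "stream" (changeable) or "baseline" (frozen)
    artifacts: dict = field(default_factory=dict)    # artifact_id -> version
    children: list = field(default_factory=list)     # nested configurations

def baseline(config, label):
    """Freeze a configuration tree (and everything nested in it) under a label."""
    return Configuration(
        name=f"{config.name} [{label}]",
        kind="baseline",
        artifacts=dict(config.artifacts),
        children=[baseline(c, label) for c in config.children],
    )

def variant_stream(base, variant_name):
    """Branch a new, changeable variant from a baseline, reusing its contents."""
    return Configuration(
        name=variant_name,
        kind="stream",
        artifacts=dict(base.artifacts),
        children=[variant_stream(c, f"{variant_name}/{c.name}") for c in base.children],
    )

if __name__ == "__main__":
    device = Configuration("Flight Computer", "stream", {"REQ-001": 3, "MODEL-7": 1})
    vehicle = Configuration("Vehicle", "stream", children=[device])
    frozen = baseline(vehicle, "Rev A")
    export_variant = variant_stream(frozen, "Vehicle (Export)")
    print(export_variant.kind, export_variant.children[0].artifacts)
```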

IDC Recommends:

IDC recommends the following next steps:

1) Move from risky, ad hoc "heroic" efforts to collaborative, agile approaches to product and related software creation, engineering and reuse.

2) Begin by assessing current levels of engineering lifecycle management maturity to address key gaps and establish process and automation strategies, starting where pain points are highest and business benefits clearest.

3) Use this approach to bring together teams from mechanical, electrical, systems and software engineering across product lifecycle phases, from requirements through quality, change management, compliance and product release.

4) Create an integrated product engineering lifecycle strategy that benefits from emerging technologies such as ML and AI, augmenting existing capabilities with advanced analytics that address increasingly complex systems of systems and IoT-enabled offerings.

Get the IDC Analyst Connection on Digital Transformation in Product Development

Learn more about IBM Engineering Lifecycle Management
