In our previous blog, we identified the three layers of network data monetization: the data layer, the analytics layer, and the automation layer. To climb the network data value tree successfully, we must address the complexities of these three layers, which are essential for automated operations in telco. In this part, we discuss the complexities of each layer.
Three layers of complexity
As a recap, we identified the three layers of complexity on the way towards automated operations:
- Data Layer: Collecting the data and making it accessible and understandable to all interested consumers
- Analytics Layer: Analyzing the data for the various use cases to provide actionable insights
- Automation Layer: Acting upon the actionable insights in an automated way
The main idea behind the data layer is data democratization, which rests on two concepts. First, collected data should never be monopolized by the entity that collected it. Second, everyone in the CSP’s organization must be able to leverage the data, irrespective of their technical know-how (provided, of course, that the data access policies allow it). The analytics layer sits on top of the data layer. It is initially an empty but pluggable layer, with management capabilities, that can host analytics functions acting as data consumers and providers of actionable insights. Finally, the top layer is the automation layer. It hosts functions that consume actionable insights from the analytics layer to automate operations and optimization processes in the network.
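To make this layering concrete, here is a minimal sketch of how the three layers could be wired together in Python. All class, method, and field names are illustrative assumptions, not part of any standard or product.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Insight:
    """An actionable insight produced by the analytics layer."""
    subject: str   # e.g. a cell or link identifier
    finding: str   # what was observed or predicted
    severity: str  # used by the automation layer to prioritize


class DataLayer(Protocol):
    """Democratized access point: any permitted consumer can query data."""
    def query(self, dataset: str) -> Iterable[dict]: ...


class AnalyticsModule(Protocol):
    """Pluggable analytics function: consumes data, produces insights."""
    def analyze(self, records: Iterable[dict]) -> Iterable[Insight]: ...


class AutomationFunction(Protocol):
    """Consumes actionable insights and executes automated responses."""
    def act(self, insight: Insight) -> None: ...


def run_pipeline(data: DataLayer, module: AnalyticsModule,
                 automation: AutomationFunction, dataset: str) -> None:
    """Wire the three layers together: data -> analytics -> automation."""
    for insight in module.analyze(data.query(dataset)):
        automation.act(insight)
```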
The key complexities of the network data layer:
- Completeness of the data – Networks often produce so much data that, in classical systems, much of it is simply ignored for practical reasons. An example can be found in the Fault Management domain: if the focus is on major and critical events, warning and informational events may not be stored, even though they are very useful for predicting major and critical events.
- Meaning of the data – Network data is far more abstract than, for example, credit card data. The nomenclature of the data points produced by the network is not necessarily intuitive, and often several data points collectively describe a specific network behavior. In Radio Access Networks, for example, details about the radio access bearer setup procedure are spread across tens of different parameters. This typically requires establishing assets such as data catalogs to support data interpretation. Finally, understanding the meaning of the data is the first step in knowing whether all the data relevant to a given use case is available.
- Volume of the data – Network entities produce very large amounts of data which, when collected, require enormous storage capacities, resulting in increased energy consumption. At the same time, data usage is sparse: not all collected data is consumed by the analytics modules for the valuable use cases. Hence, only data that is actually consumed should be collected; otherwise, the data layer wastes energy on collecting and storing unconsumed data, which raises serious environmental concerns.
- Velocity of the data – Collection intervals need to be very short to meet the real-time requirements of the use cases. In fact, the standards for modern state-of-the-art networks suggest a 10 ms collection interval for near-real-time use cases. Given that the typical collection period in legacy networks is 15 minutes (900,000 ms), data collection must become 90,000 times faster, and the data volume increases by the same factor.
- Variety of the data – Millions of unique KPIs are collected in a real network, as each network element produces many data points. In addition, operators usually have network equipment from multiple vendors, each publishing its data points in its own nomenclature and format, which need to be aligned. The challenge is to consolidate these differences so that the data analyst does not have to be an expert on the specifics of each vendor (a minimal sketch of such alignment follows this list).
- Selection of data for usage – Some network elements produce 10,000 unique KPIs, and the challenge is to identify which ones can add value in a given use case.
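As a minimal illustration of the nomenclature alignment mentioned above, the sketch below maps vendor-specific counter names onto a single canonical KPI name. The vendor names, counter names, and mapping are invented for illustration; in practice such a mapping would live in a data catalog.

```python
# Hypothetical catalog mapping (vendor, raw counter) -> canonical KPI name,
# so a data analyst never has to know each vendor's naming scheme.
CANONICAL_KPI = {
    ("vendorA", "pmRrcConnEstabSucc"): "rrc_setup_success_count",
    ("vendorB", "RRC.ConnEstabSucc"):  "rrc_setup_success_count",
    ("vendorA", "pmRrcConnEstabAtt"):  "rrc_setup_attempt_count",
    ("vendorB", "RRC.ConnEstabAtt"):   "rrc_setup_attempt_count",
}


def normalize(vendor: str, counter: str, value: float) -> tuple[str, float]:
    """Translate a raw vendor counter into the canonical nomenclature."""
    canonical = CANONICAL_KPI.get((vendor, counter))
    if canonical is None:
        raise KeyError(f"No catalog entry for {vendor}/{counter}")
    return canonical, value


# Records from two vendors end up under the same canonical KPI name:
print(normalize("vendorA", "pmRrcConnEstabSucc", 1042.0))
print(normalize("vendorB", "RRC.ConnEstabSucc", 987.0))
```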
The key complexities of the analytics layer:
- Algorithmic complexity – Analytics use cases vary from simple KPI aggregates or threshold-based analysis to advanced AI/ML-based algorithms that predict future values of datapoints. Predictive capabilities are needed to improve the quality of the services provided and to enable the proactive operations that are essential for achieving the stringent SLAs of modern services such as ultra-low latency communication or enhanced mobile broadband.
- Latency requirements – Analytics use cases have various latency requirements, which in turn impose requirements on their physical placement: some can run in central network locations, while others require extreme proximity to the data to analyze it in near-real time.
- Chaining of analytics modules – Insights from one analytics module can trigger another module. The insights must be timestamped against UTC so that they can be ordered and distinguished when consumed (see the sketch after this list).
- Correlation of datapoints from different network elements – Network elements deliver services together; hence, datapoints from different elements need to be analyzed together.
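Below is a minimal sketch of two chained analytics modules exchanging UTC-timestamped insights. The module names, fields, and threshold are hypothetical illustrations of the pattern, not a real product interface.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Insight:
    source_module: str
    subject: str
    finding: str
    # Stamp each insight with the UTC time of its creation,
    # so consumers can order and distinguish chained insights.
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def degradation_detector(kpi_value: float, threshold: float) -> Insight | None:
    """First module: simple threshold-based analysis of one KPI."""
    if kpi_value < threshold:
        return Insight("degradation_detector", "cell-123",
                       f"KPI below threshold ({kpi_value} < {threshold})")
    return None


def root_cause_analyzer(trigger: Insight) -> Insight:
    """Second module: triggered by the first module's insight."""
    return Insight("root_cause_analyzer", trigger.subject,
                   f"Investigating cause of: {trigger.finding}")


first = degradation_detector(kpi_value=92.0, threshold=95.0)
if first is not None:
    second = root_cause_analyzer(first)           # chained analytics
    assert second.created_at >= first.created_at  # UTC stamps give ordering
```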
The key complexities of the automation layer:
- Automating reactions to actionable insights – The actionable insights from the analytics layer are not very useful unless we automate the reactions to them. The main question here is how to ensure that automated responses are aligned with the operator’s operational goals. For this, a set of global policies must be defined to govern the generation and execution of automated responses.
- Conflict detection and resolution – Analytics modules may send conflicting insights, and the automated reactions to those insights may conflict as well. This requires policy conflict management that can detect conflicts and resolve them so that the operator’s global policies are not violated. For example, energy-saving automated actions may conflict with automated actions that remedy degraded service performance. In such a scenario, the latter action must be prioritized and approved, while the former must be denied, as sketched below.
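The following is a minimal sketch of priority-based conflict resolution, assuming a hypothetical two-policy setup; a real policy engine would be far richer.

```python
# Hypothetical global policy priorities set by the operator
# (higher number = higher priority). Both names are invented.
POLICY_PRIORITY = {
    "restore_service_performance": 2,
    "save_energy": 1,
}


def resolve(actions: list[dict]) -> list[dict]:
    """Detect conflicting actions per target and keep the highest-priority one."""
    winners: dict[str, dict] = {}
    for action in actions:
        target = action["target"]
        incumbent = winners.get(target)
        if (incumbent is None or
                POLICY_PRIORITY[action["policy"]] >
                POLICY_PRIORITY[incumbent["policy"]]):
            winners[target] = action  # this action wins the conflict
    return list(winners.values())


proposed = [
    {"target": "cell-123", "policy": "save_energy",
     "command": "switch off capacity carrier"},
    {"target": "cell-123", "policy": "restore_service_performance",
     "command": "keep capacity carrier on"},
]
# Only the performance-restoration action survives, as in the example above.
print(resolve(proposed))
```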
Foundational and aspirational use case examples
Below are some common examples of foundational use cases:
- Automatic root cause analysis for the Network Operations Center (NOC)
- Energy saving in the Radio Access Network
- Prediction of network outages to minimize customer impact
- Analysis of call drops in the network to find their root causes
- Analysis of cross-domain impacts (core, transport, and access domains)
While these use cases are in common demand, their implementation can be challenging, as the following examples show.
- Example 1: A fiber cut will cause hundreds, if not thousands, of events, while the fiber itself is a passive element that does not emit any events. The fiber-cut event class can easily be recognized by the sudden flood of similar events; however, determining the location of the cut is more complex and may require additional network topology information (Completeness of the data). A minimal detection sketch follows these examples.
- Example 2: A 15-minute interval may not be granular enough to detect anomalies accurately, and more granular collection intervals may not be possible due to system limitations (Velocity of the data).
- Example 3: Syslog data is typically voluminous, while the information contained in the messages is cryptic and far from self-explanatory (Volume of the data and Meaning of the data).
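As a rough illustration of Example 1, the sketch below flags a fiber-cut-like flood when many similar link-down events arrive within a short window. The window size, threshold, and event fields are assumptions; locating the cut would still require topology data.

```python
from datetime import datetime, timedelta

FLOOD_WINDOW = timedelta(seconds=10)  # illustrative grouping window
FLOOD_THRESHOLD = 50                  # illustrative event-count threshold


def detect_flood(events: list[dict]) -> bool:
    """Return True if link-down events flood in within FLOOD_WINDOW."""
    link_down = sorted(e["time"] for e in events if e["type"] == "link_down")
    # Slide a window over the sorted timestamps and count events inside it.
    start = 0
    for end, t in enumerate(link_down):
        while t - link_down[start] > FLOOD_WINDOW:
            start += 1
        if end - start + 1 >= FLOOD_THRESHOLD:
            return True  # flood detected; locating the cut needs topology data
    return False


now = datetime(2024, 1, 1, 12, 0, 0)
burst = [{"type": "link_down", "time": now + timedelta(milliseconds=100 * i)}
         for i in range(60)]
print(detect_flood(burst))  # True: 60 similar events within 10 seconds
```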
Examples of aspirational use cases:
- Analysis of potential correlations between seemingly unrelated domains
- Analysis of traffic patterns that precede outages
- Analysis of potential traffic redistribution possibilities for optimized resource utilization
- Analysis of how changes in user and traffic dynamics impact the network’s ability to fulfill user SLAs
How you can deliver successful network analytics projects
To deliver successful network analytics projects, it is important to focus on the value that you want to drive, while not forgetting the essential enablers.
Many network analytics projects struggle because network data is poorly accessible to, and poorly understood by, data scientists. Once the data issue has been overcome, a lack of automation capabilities may still prevent monetization of the derived insights.
A good starting point is a holistic Network Data Assessment, covering all three layers:
- How accessible is the network data?
- What is the network data being used for, and which other usages remain unexploited?
- How well is network data understood by people outside the network domain?
- What types of analytics are applied to the network data to obtain insights that are valuable for your organization (and can be acted upon)?
- What is done with these actionable insights? What level of automation is in place?
The IBM approach for this assessment is vendor-agnostic: we can work with IBM Technology components as well as with technology components from other suppliers and hyperscalers.
The IBM Garage approach can help you optimize the value of your current capabilities. Together with your stakeholders, we can help you create the Network Data Value Tree and establish a roadmap to drive more value from your network data, addressing the complexities of each of the three layers (data, analytics, and automation) incrementally and in parallel.
Want to learn more? Contact us at Maja.Curic@ibm.com and chris.van.maastricht@nl.ibm.com.