With aggressive competition driving a wider selection of service options, flexible subscription models and commodity pricing, customer service has become a key differentiator in the voice and data services market. If customers aren’t happy, they’re quick to change – and that churn can seriously erode margins and impact the bottom line.
This means that the traditional “break-fix” approach to maintaining network quality of service (QoS) is no longer enough. End customers have become so dependent upon always-on connectivity, and so sensitive to service outages, that even short service interruptions can be a deal, and contract, breaker.
Forward-thinking operators are starting to take a more proactive approach: anticipating service issues before they occur, and using that insight to deliver an unrivalled customer experience.
Network data is key
One way to ensure a consistently excellent customer experience is to constantly collect and analyse data from both inside and outside the network. The characteristics and behaviour of this data can reveal trends that machine learning algorithms can now detect, well before issues occur. Such analysis may not predict a bulldozer cutting a fibre, but it can help anticipate fibre, laser, or hardware degradation before any service disruption occurs.
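As a minimal sketch of this idea, the snippet below fits a simple linear trend to recent telemetry samples (say, received optical power readings) and estimates how many collection intervals remain before a warning threshold is crossed. The function name, sample values, and thresholds are all illustrative assumptions, not part of any specific product; real predictive health algorithms are considerably more sophisticated.

```python
from statistics import mean

def detect_degradation(samples, threshold, horizon):
    """Fit a least-squares linear trend to recent telemetry samples and
    estimate how many future intervals remain before the value declines
    to `threshold`. Returns that estimate if it falls within `horizon`
    intervals, else None (no breach predicted soon, or no downward trend)."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
             / sum((x - x_bar) ** 2 for x in xs))
    if slope >= 0:
        return None  # value is flat or improving; no predicted breach
    intercept = y_bar - slope * x_bar
    # Solve intercept + slope * x = threshold, then subtract the intervals
    # already observed to get intervals remaining from "now".
    intervals_to_breach = (threshold - intercept) / slope - (n - 1)
    return intervals_to_breach if intervals_to_breach <= horizon else None
```

A steadily declining series such as `[-3.0, -3.2, -3.4, -3.6]` dBm against a `-5.0` dBm floor yields a predicted breach about seven intervals out, giving an operator time to act; a flat series returns `None`.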
Building an ecosystem for network analytics
Leveraging machine learning algorithms to analyse network data in real time has a lot of potential, but it also requires the right foundation to retrieve, store, and normalise the data. You’ll need an automation platform that includes technologies for tapping into all available data sources and for streaming the real-time telemetry data from them.
You’ll also need a data lake – a repository for multiple data types from multiple sources – to store both network-generated and externally generated data, and to normalise and categorise it for analysis.
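To make the normalisation step concrete, here is a small sketch of mapping records from two differently shaped telemetry feeds onto a common schema before they land in the lake. The source names, field names, and mappings are hypothetical examples, assumed for illustration only.

```python
# Hypothetical field maps: raw field name per source -> common schema name.
FIELD_MAPS = {
    "optical_nms": {
        "ts": "timestamp",
        "port": "interface",
        "rx_pwr": "rx_power_dbm",
    },
    "router_streaming": {
        "time": "timestamp",
        "if_name": "interface",
        "in_power": "rx_power_dbm",
    },
}

def normalise(record, source):
    """Rename a source's vendor-specific fields to the common schema,
    dropping any fields the map does not cover."""
    mapping = FIELD_MAPS[source]
    return {common: record[raw] for raw, common in mapping.items() if raw in record}
```

Once both feeds share one schema, downstream analytics can categorise and compare them without caring which system produced each record.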
These underlying components act as a kind of ‘nervous system’ that collects and stores data from a variety of sources. Extending the analogy, a ‘brain’ is also required to draw conclusions and make decisions based upon the data it now has access to in the ‘lake’. This brain can identify trends, predict what may happen, prescribe the optimal solution, and automate appropriate responses.
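The predict-prescribe-automate loop can be sketched as a simple rule lookup: each predicted condition maps to a prescribed action that can be queued for automated execution, while anything unrecognised is escalated to an operator. The condition and action names below are invented placeholders, not real system identifiers.

```python
# Hypothetical prescriptive rules: predicted condition -> recommended action.
RULES = {
    "laser_degrading": "schedule_transceiver_swap",
    "fibre_loss_rising": "dispatch_fibre_inspection",
}

def decide(predictions):
    """Split (link, condition) predictions into actions to automate
    (condition has a known rule) and escalations for human review."""
    actions, escalations = [], []
    for link, condition in predictions:
        if condition in RULES:
            actions.append((link, RULES[condition]))
        else:
            escalations.append((link, condition))
    return actions, escalations
```

In practice the ‘brain’ would learn and refine such rules over time rather than rely on a static table, but the shape of the loop, prediction in, prescribed action out, is the same.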
Predictive health and the future of networking
To improve the customer experience, we can now imagine a ‘network health predictor’ application that derives insight from network data. Architecturally, this sits above the analytics ecosystem, monitoring the performance of key parameters and helping to predict and address service degradation risks. In effect, it forms the ‘brain’ for improving the customer experience.
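One way such an application might rank risk is by measuring how much margin each monitored parameter has left before its alarm threshold, then surfacing the links with the least headroom first. The parameters, thresholds, and scoring below are simplified assumptions for illustration, not a description of any vendor’s algorithm.

```python
# Hypothetical key parameters: (threshold, direction of the alarm).
THRESHOLDS = {
    "pre_fec_ber": (1e-3, "max"),        # alarm when the value rises to this ceiling
    "optical_power_dbm": (-5.0, "min"),  # alarm when the value falls to this floor
}

def risk_report(measurements):
    """Score each link by its smallest remaining margin to any threshold
    and return link names ordered highest risk first."""
    report = []
    for link, params in measurements.items():
        margins = []
        for name, value in params.items():
            limit, kind = THRESHOLDS[name]
            margin = (limit - value) if kind == "max" else (value - limit)
            margins.append(margin)
        report.append((min(margins), link))
    report.sort()  # smallest margin (highest risk) first
    return [link for _, link in report]
```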
Predictive health analytics applications can anticipate a range of service issues, and algorithms are constantly evolving to provide even broader insights into performance trends. These kinds of predictive health capabilities, combined with a range of other analytics applications such as predictive capacity planning, are paving the way for the self-driving, self-healing networks of the future.
It’s closer than you think
While fully autonomous networks are still an aspiration, technologies that predict future network health already exist. Tools such as Ciena’s Blue Planet Analytics provide simple mechanisms for collecting data from across the network, a data lake for storing it, and predictive health capabilities, such as a Network Health Predictor, that anticipate future performance and service issues. The platform is also completely open, so carriers can integrate ecosystem elements from other network providers quickly and seamlessly as needs dictate.
As customer experience continues to redefine the competitive landscape, technologies that can deliver even small QoS improvements will make a significant difference in terms of competitive advantage. That means that when it comes to implementing network analytics ecosystems and intelligent analytics applications such as predictive health, the time to act is now.