Aggregated data analysis – the pathway towards an integrated model of behavior and customer experience

Why preparing your data platform now is essential for future operational performance and automation.

Why do we need to go beyond the control and user planes?

The process of collecting control and user plane data from our networks continues to evolve. As has long been recognized, it’s an essential task for the smooth operation of complex networks. This data helps build a picture of both network performance and the experiences customers receive.

However, while necessary, simply gathering control and user plane data is no longer sufficient; there’s much more data that can enhance the available view of performance. In this blog, we’ll explain why – and explore new approaches to collecting, aggregating and supplying data across multiple sources, which can build a more integrated model of network and user behavior, and of customer experience.

User experience is becoming a multi-dimensional problem

Today’s users experience the network in vastly different ways from customers of just 20 years ago. They don’t just make calls or send messages; they stream content in real-time, they post their own content to different sites, and they browse and surf, even while performing other tasks. There’s a lot going on.

At the same time, operators must deliver new levels of service agility and dynamic performance – not just for their traditional customers, but also for new opportunities they seek to address, such as high-performance private networks, optimized for different vertical sectors and applications, or slices, orchestrated in real-time to deliver QoS-driven applications to specific users.

As a result, operators need to make faster decisions and to implement actions through automation. This depends on analysis of data in real-time, which, in turn, demands an extended view of data, bringing all possible sources together.

Extending our focus – building a new data warehouse

So, we must consider not only the control and user plane data that has been the traditional input for customer experience analysis and network assurance, along with other current sources such as OSS and BSS elements – we now have to augment these with inputs from areas and streams that may not previously have been part of the operator domain.

Such data might include input from third parties – such as organizations that perform customer speed tests, for example. Similarly, in the context of private networks, data that is directed away from the operator domain to edge computing resources could be considered as an input – but today’s approaches give you no visibility into this potentially valuable information source.
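To make such third-party inputs usable alongside operator-side data, they first need to be mapped onto a common event shape. Here is a minimal Python sketch of that mapping – the raw field names are purely illustrative assumptions, not any real speed-test API:

```python
# Illustrative mapping of a third-party speed-test record onto a common
# event shape. The raw field names below are assumptions, not a real API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ExperienceEvent:
    source: str         # where the measurement came from
    subscriber_id: str   # pseudonymized subscriber reference
    metric: str          # e.g. "downlink_mbps"
    value: float
    observed_at: datetime

def from_speed_test(raw: dict) -> ExperienceEvent:
    """Normalize a (hypothetical) speed-test payload into the shared shape."""
    return ExperienceEvent(
        source="third_party_speed_test",
        subscriber_id=raw["subscriber_ref"],
        metric="downlink_mbps",
        value=float(raw["download_mbps"]),
        observed_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )
```

Once every external feed lands in the same shape, it can be correlated with control and user plane measurements like any other source.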

All of these possible data sources and inputs need to be combined into a single, aggregated data pool that can, in turn, feed any data-dependent application: the new data warehouse.

Until now, that wasn’t possible, because of the diversity of the data (variety), its rapid flow (velocity) and the sheer quantity arriving from different sources (volume). Consequently, we have had to maintain separate data processing silos and separate windows into that data. It may have been possible to share information from these silos, but only at a higher level of aggregation, and not in real-time.

Leveraging cloud-scale to deliver

But, by taking advantage of cloud-scale platforms and cloud native architectures, we can create a single pool that can ingest any data, from any source – and then present this data to any application or process that requires it. Any new source of data can be added to the lake, extending its borders, as required.
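Conceptually, that single pool decouples producers from consumers: sources push events in, applications subscribe to what comes out, and neither side needs to know about the other. The following Python sketch illustrates the pattern only – the class and event names are assumptions for illustration, not our actual implementation:

```python
# Sketch of the "any source in, any consumer out" idea: sources feed one
# pool, and downstream applications subscribe to the unified stream.
from typing import Callable, Iterable

class DataPool:
    def __init__(self) -> None:
        self._consumers: list[Callable[[dict], None]] = []

    def subscribe(self, consumer: Callable[[dict], None]) -> None:
        # New applications attach without touching the ingestion side.
        self._consumers.append(consumer)

    def ingest(self, events: Iterable[dict]) -> None:
        # Every event, whatever its origin, flows through the same path.
        for event in events:
            for consumer in self._consumers:
                consumer(event)

pool = DataPool()
pool.subscribe(lambda e: print("performance app saw:", e))
pool.ingest([{"source": "user_plane", "cell": "A1", "throughput_mbps": 42.0}])
```

The point of the pattern is that adding a new application is a one-line subscription, and adding a new source never requires changes on the consuming side.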

In other words, we’re building the new data warehouse and data ingestion approach that operators will need to go beyond control and user plane data, and to secure the aggregated views they will need for more agile operations and service delivery.

This is important, because we don’t know yet which data sources are going to be needed in the future – so we must have an ability to add any such source to our lake of available resources.

And, given the ongoing diversity of operators’ OSS, BSS, and network infrastructure, we can expect solution silos to endure, whether we want them to or not. So, what matters is being able to take the data, extract it from any silo, and abstract it away from its sources, so that the data collected can be used by any process that needs it.
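In practice, that abstraction step usually means a thin adapter per silo that rewrites silo-specific records into one source-neutral shape. A rough Python sketch, with invented silo formats, shows the idea:

```python
# Abstracting silo-specific records into a source-neutral form. Each silo
# gets a small adapter; consumers only ever see the common shape. The silo
# record formats here are invented for illustration.
def from_oss_alarm(raw: dict) -> dict:
    return {"source": "oss", "kind": "alarm",
            "cell": raw["ne_id"], "severity": raw["sev"]}

def from_bss_ticket(raw: dict) -> dict:
    return {"source": "bss", "kind": "complaint",
            "cell": raw.get("site"), "severity": "customer_reported"}

ADAPTERS = {"oss_alarm": from_oss_alarm, "bss_ticket": from_bss_ticket}

def normalize(silo: str, raw: dict) -> dict:
    # One entry point, whatever the silo came from.
    return ADAPTERS[silo](raw)

print(normalize("oss_alarm", {"ne_id": "A1", "sev": "major"}))
```

Adding a new silo then means writing one more adapter, while every downstream consumer continues to see the same shape.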

DataOps-driven data collection, ingestion and aggregation

That’s what we’re doing with Cardinality. We take data, ingest it, and then make it accessible – providing the unified data lake that is essential to understand what’s happening today – and to adapt for what will happen tomorrow. Of course, doing this at scale is complicated – but the promise of being able to do so is immense, because it can simplify the process of securing insights.

Specific applications and use cases unlocked by this approach include classical functions, such as network performance management (newly enriched with more information about what users are really doing) and network analytics; new cases, such as real-time inputs on customer cell-site experiences (back to those speed tests); and completely new outputs, such as a customer happiness index.

This latter case could be based on a combination of inputs from the call center (call volumes, timing, duration and so on) and social media output (unhappy customers will probably be busy airing their grievances on their platforms of choice), layered on top of traditional sources of information. In fact, we can only speculate about the use cases that will emerge, because we’re only just starting to explore the potential of this new approach.
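To make the idea concrete, here is a toy Python calculation of such an index, blending the inputs mentioned above – call center contact volume, social media sentiment and a traditional network score. The weights and scales are arbitrary assumptions, not a proposed methodology:

```python
# Toy "customer happiness index": blends call center load, social-media
# sentiment and a network KPI into one 0..100 score. Weights are arbitrary.
def happiness_index(calls_per_1k_subs: float,
                    avg_sentiment: float,       # -1.0 (negative) .. 1.0 (positive)
                    network_score: float) -> float:  # 0 .. 100
    """Return a 0..100 score; higher means happier customers."""
    call_penalty = min(calls_per_1k_subs, 50) / 50   # 0..1, more calls = worse
    sentiment = (avg_sentiment + 1) / 2              # rescale to 0..1
    network = network_score / 100                    # 0..1
    blended = 0.3 * (1 - call_penalty) + 0.3 * sentiment + 0.4 * network
    return round(100 * blended, 1)

print(happiness_index(calls_per_1k_subs=12, avg_sentiment=0.2,
                      network_score=70))  # -> 68.8, a single comparable number
```

Whatever the eventual formula, the prerequisite is the same: all three input streams must already be flowing into the aggregated pool.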

Evolving for new use cases and applications

But we do have to be ready to enable this new level of data processing for new applications. It’s an evolution path that starts from a simple premise: creating the new data lake and extending the range of data sources is the key adaptive step. With this in place, the platform is ready for any future use case that may emerge – or any new data source that may need to be integrated and ingested.

In short, that’s what our platform delivers. It enables operators to ingest, analyze and visualize any data from any source in a simple, usable manner. It does so in real-time, with the scalability needed by operators with millions of customers (and billions of data points), and with the speed and reliability to provide insights, consistently.

Working with Tier 1 operators, we confronted and solved the challenges of this massive and diverse data ingestion task, while providing the enrichment that enables users to make sense of it all – finding the needle when you need to.

So, what we give you is the pathway you need towards truly aggregated data analysis – extending the focus from control and user plane to encompass anything that might be relevant to delivering the best possible customer experiences – whether for individual consumers, or for complex applications delivered over private networks or slices.