Random does not Necessarily Mean Random

People looking at pictures, even seemingly random images, can almost always find an object or pattern they recognize. Psychiatrists have used this as a tool since the 1920s, when Hermann Rorschach published his research on the interpretation of inkblots through free association, and the idea has since been formalized in psychology through the Gestalt laws of grouping. As we learn more about how our brains work, we find that there is a scientific process behind why we look for, and find, patterns in seemingly unstructured data.

While the snow on a television screen is random, people will claim to see specific patterns and visual cues in the image. We are always searching for patterns and structure when we examine data sets. This is the concept behind big data and the ability to extract nuggets of insight from large and seemingly unrelated pieces of information.

When we look at this in the context of the Internet, we inspect, collect, and analyze the data that traverses the network. Service providers look for patterns and clues about subscriber behavior so they can make realistic predictions about how the network will be utilized and what content subscribers tend to access. Even though the network supports millions of individuals who are not necessarily related, patterns can be discerned from group habits and general trends.

This is an important concept to understand. Individuals perform unique tasks, but when their activity is aggregated, trends can be established. The graph above, from the Financial Times, shows subscriber consumption over the course of the week, and it is obvious that there are preferred times for using the fixed-line and mobile networks based on a person's daily habits and activities (work, sleep, weekend, etc.). Looking deeper into the pattern, it can be seen that certain applications are more popular at different times, again based on subscriber habits. This high-level insight gives us enough information to start looking for ways to deliver a higher-quality experience.
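To make the aggregation idea concrete, here is a minimal Python sketch, using made-up records and application labels, of how individual usage events could be rolled up into day-of-week and hourly trends per application. The record format is an assumption for illustration only.

```python
from collections import defaultdict
from datetime import datetime

# Each hypothetical record: (subscriber_id, timestamp, application, bytes_transferred)
records = [
    ("sub-001", datetime(2014, 8, 4, 20, 15), "video", 52000000),
    ("sub-002", datetime(2014, 8, 4, 20, 40), "video", 48000000),
    ("sub-003", datetime(2014, 8, 5, 8, 5), "email", 1200000),
]

# Aggregate bytes by (day of week, hour, application); individual behaviour
# disappears, but the group trend emerges.
trend = defaultdict(int)
for subscriber, ts, app, nbytes in records:
    trend[(ts.strftime("%A"), ts.hour, app)] += nbytes

for (day, hour, app), total in sorted(trend.items()):
    print(f"{day} {hour:02d}:00  {app:<6} {total / 1e6:.1f} MB")
```

With weeks of real records instead of three samples, the same grouping would surface the daily and weekly peaks described above.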

For service providers, this is extremely valuable information for capacity planning and traffic management. For example, if one service is known, or expected, to cause congestion at certain regular times, the network can proactively manage it through techniques such as video optimization. Resources can also be allocated according to demand. To optimize content, and its delivery, in a way that maximizes the subscriber experience, service providers must be able to understand what content subscribers are accessing, through content inspection and context correlation.
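As an illustration, the sketch below uses hypothetical hourly utilization figures and an assumed planning threshold to show how recurring busy hours could be identified from collected measurements and turned into proactive optimization windows.

```python
# Hypothetical average link utilization (0.0 - 1.0) for each hour of the day,
# averaged over several weeks of collected measurements.
hourly_utilization = {hour: 0.35 for hour in range(24)}
hourly_utilization.update({19: 0.82, 20: 0.91, 21: 0.88, 22: 0.76})

CONGESTION_THRESHOLD = 0.75  # assumed planning threshold, not a standard value

# Hours that regularly exceed the threshold become candidate windows for
# enabling techniques such as video optimization ahead of the peak.
optimization_windows = [hour for hour, util in sorted(hourly_utilization.items())
                        if util >= CONGESTION_THRESHOLD]

print("Enable video optimization during hours:", optimization_windows)
# -> Enable video optimization during hours: [19, 20, 21, 22]
```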

This requires the service provider to have the technology to identify and classify content; to steer specific content, based on subscriber profile and application context, to value-added services (VAS) for optimization; and to operate a network management infrastructure that collects and analyzes data over time so the network can be orchestrated proactively to adjust to daily, weekly, and regional usage patterns.
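A rough sketch of the classification and steering step might look like the following. The application classes, profile fields, and VAS names are illustrative assumptions rather than any particular vendor's policy model, and real systems classify traffic with DPI signatures rather than port numbers.

```python
def classify(flow):
    """Crude application classification stand-in; real systems rely on DPI signatures."""
    port_map = {443: "web", 1935: "video", 5060: "voip"}
    return port_map.get(flow["dst_port"], "unknown")

def steer(flow, profile):
    """Pick a value-added service (VAS) chain from application and subscriber profile."""
    app = classify(flow)
    if app == "video" and profile.get("plan") == "basic":
        return "video-optimizer"      # transcode/compress for constrained plans
    if app == "web" and profile.get("parental_controls"):
        return "content-filter"
    return "default"                  # no VAS; forward directly

flow = {"src": "10.0.0.7", "dst_port": 1935}
print(steer(flow, {"plan": "basic"}))  # -> video-optimizer
```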

In a theoretical, ideal scenario, these functions are interconnected: inspection, data collection, and steering are performed in the same logical function and sequence, while heuristic data and subscriber profiles are accessed in real time to manage the network infrastructure, all without any discernible loss in customer experience.
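The sketch below, again with assumed names and data shapes, illustrates that idea in miniature: a single pipeline that inspects a flow, records it for later trend analysis, and steers it using the subscriber's profile in real time.

```python
class TrafficPipeline:
    """Inspection, collection, and steering combined in one logical sequence."""

    def __init__(self, profiles):
        self.profiles = profiles   # real-time subscriber profile store
        self.usage_log = []        # heuristic data collected for later trend analysis

    def handle(self, flow):
        # 1. Inspect: classify the flow (port lookup stands in for real inspection).
        app = {1935: "video", 443: "web"}.get(flow["dst_port"], "unknown")
        # 2. Collect: record usage so long-term patterns can be analyzed later.
        self.usage_log.append((flow["subscriber"], app))
        # 3. Steer: choose a path from the subscriber profile and application context.
        profile = self.profiles.get(flow["subscriber"], {})
        if app == "video" and profile.get("plan") == "basic":
            return "video-optimizer"
        return "default"

pipeline = TrafficPipeline({"sub-001": {"plan": "basic"}})
print(pipeline.handle({"subscriber": "sub-001", "dst_port": 1935}))  # -> video-optimizer
```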

Published Aug 04, 2014
Version 1.0