Edge DNS GSLB for Corporate CDN: Improving UX and Bandwidth

As previously described in our "Predictions 2020" blog, workloads will progressively move this year from central cloud locations to edge ones. Digital transformation and new use cases such as device data collection, augmented reality and contextual video advertising demand more capacity and lower latency from IP networks, requirements that centralized cloud computing may not be able to meet. In this context, Edge DNS GSLB is a very simple solution for intelligently directing users to their local computing resources, thus improving both UX and bandwidth utilization.

What is a CDN?

A content delivery network (CDN) solution provides optimized access to content that is frequently downloaded by applications, mainly via internet browsers. Some specific content is either stored or cached at the edge of the network, ideally near the users, in order to reduce access time and minimize the download impact on network bandwidth.

The main objective of a CDN solution is to enhance the user experience and, for ISPs, to optimize network resources. Technically, this is achieved through:

  • Reducing access delay to content, especially large content (video chunks, images, database access). This is particularly efficient for on-demand flows like internet browsing or video-on-demand (or TV replay), since files can be stored or cached at edge locations.
  • Reducing WAN bandwidth usage for ISPs. Since content is pushed from the central source to the edges, the ISP can limit the bandwidth used on its peering links with other Internet providers.

The technologies behind a CDN service can be quite complex, but the basic requirement is to place numerous points of presence close to the users. This is key for user experience, and is delivered through the following solutions:

  • A cache system: if the content is not present at the edge location, the delivery engine knows how to fetch it from an upper-tier server and can store it in order to serve it multiple times. This solution is particularly efficient when a large amount of cacheable data can be accessed, but it is not obvious in advance which data will be requested.
  • A push solution from the source to the edge, in order to have pre-populated caches ready to serve user demands. This solution is efficient when large numbers of users are likely to access the same content; for example, when Netflix releases the first episode of a new season of a popular series, preloading all the caches will probably optimize the hit ratio.
  • An IP routing solution to direct users from their location to the local edge source of content. This is mainly performed using a combination of DNS and Anycast.
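The cache behaviour described above can be sketched in a few lines. This is a minimal in-memory LRU cache for an edge node; the `fetch_from_origin` function, key names and capacity are illustrative assumptions, not part of any real CDN product:

```python
from collections import OrderedDict

def fetch_from_origin(key: str) -> bytes:
    # Hypothetical upstream fetch; a real edge node would call its parent tier.
    return f"content-for-{key}".encode()

class EdgeCache:
    """Tiny LRU cache: serve locally when possible, fetch and store on a miss."""
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key: str) -> bytes:
        if key in self._store:
            self.hits += 1
            self._store.move_to_end(key)      # mark as recently used
            return self._store[key]
        self.misses += 1
        content = fetch_from_origin(key)      # cache miss: go to the upper tier
        self._store[key] = content
        if len(self._store) > self.capacity:  # evict the least recently used item
            self._store.popitem(last=False)
        return content

cache = EdgeCache(capacity=2)
cache.get("/video/chunk1")   # miss: fetched from the upper tier, then stored
cache.get("/video/chunk1")   # hit: served directly from the edge
```

The push model simply amounts to calling `get()` proactively on the expected keys before users ask for them, so that the first real request is already a hit.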

Why do we need edge computing?

The move we saw years ago, consisting of pushing infrastructure to central datacenters and using infrastructure-as-a-service at large cloud provider facilities, was mainly driven by the complexity of infrastructure maintenance. However, computing has changed, taking advantage of virtualization and containers. Even if not all implementations use microservices, the hardware required to run modern workloads has been simplified. Hyperconvergence offers computing resources that are simpler to manage, and some cloud providers now propose their pods as managed on-premises solutions. Consequently, cloud to the edge is certainly a coherent move.

It is possible nowadays to build small computer facilities at the edge of corporate networks in order to accept new types of workload and comply with new requirements. There are many situations where using edge computing for enterprises becomes useful. Examples are when:

  • Data is only valuable at the edge: no requirement to move this data back to a central cloud storage service as it is only processed at the edge.
  • Data is valuable in real time at the edge and centrally for advanced analytics: specific processing is then required at the edge to prepare the data for central migration.
  • Data used at edge quickly loses its value (e.g. localization of a container in a harbour, position of a client in a store, local warehouse stock level).
  • Data governance practices or regulations require local management of the collected data, and only anonymized versions of the data are permitted at a central cloud location.

This requirement for local computing will probably also come from businesses manipulating or generating huge amounts of data from multiple devices. IoT already requires massive storage and compute: in 2017, Gartner forecast that three quarters of the 54 exabytes of data generated would come from automotive (41%) and heavy truck (14%) subsystems, connected field devices (14%) and robots (7%) in manufacturing.

Of course we can try to push all this data towards a central storage and compute location, but that would be unnatural from a design and architecture standpoint. Local computing is therefore the ideal solution.

Edge Computing & CDN

CDN combined with local computing is a new pattern that lets application development engineers improve user experience and process data at the appropriate level of the network infrastructure. The main functions per location tend to be:

  • At the edge: aggregate, transform, filter and forward to the cloud or central DC
  • Central: analyze, store long term, serve global reads

Several use cases exist, across all sectors. Some of the important ones include:

  • Access to the warehouse stock management system: local computing solution and remote recovery solution for maintenance and service continuity
  • Payment system: local + central backup
  • Video broadcasting to multiple devices (HD TV and smaller screens), using a local storage server and a bandwidth/display adaptation solution such as ABR (adaptive bitrate streaming), in combination with a central video processing solution that takes advantage of cloud elasticity
  • Access to detailed information on a product in a distributed PIM (e.g. high quality picture and 3D, maintenance documentation)

How to build this solution

Building local edge computing and a content delivery network can be complex. It usually requires coordination between multiple teams and good alignment between business requirements and I&O teams’ capacities, which is not always easy to achieve.

Installing a small local computing solution is a task which is far easier nowadays than it was just 5 years ago. We now see very interesting solutions with micro datacenters (hyperconvergence, containers – Kubernetes) or simple integrated edge servers (nano datacenters) that ease the compute and data storage parts.

The network part remains complex, since CDN providers sit outside the enterprise network and can only reach external resources located on the Internet. This is very useful when offering computing services to Internet-connected clients, but not relevant for internal users accessing local computing resources. While some CDN providers are starting to propose Function-as-a-Service (FaaS) solutions, these are not yet mature and are still not connected to enterprise internal networks.

In order to absorb specific device traffic – which can be heavy if directed to one single datacenter/server – a traffic distribution strategy is mandatory to avoid the equivalent of a DDoS attack against a centralized service (even with a datacenter load-balancer). Access through a CDN relies on redirection from a global name to a local resource. This can be achieved with IP Anycast routing, conditional DNS responses (e.g. views), or a reverse proxy server.
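The conditional DNS response approach can be illustrated with a short sketch. The subnets, hostnames and addresses below are invented for the example; a real deployment would express the same logic as views or match-clients ACLs on the DNS server itself:

```python
import ipaddress

# Hypothetical mapping: client subnets served by a local edge node.
EDGE_VIEWS = {
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.10",   # site A edge server
    ipaddress.ip_network("10.2.0.0/16"): "10.2.0.10",   # site B edge server
}
CENTRAL_IP = "203.0.113.10"  # central datacenter VIP (illustrative)

def resolve(name: str, client_ip: str) -> str:
    """Return a different A record depending on where the client sits,
    mimicking what DNS views do on a real server."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, edge_ip in EDGE_VIEWS.items():
        if addr in subnet:
            return edge_ip                 # local client: answer with the edge node
    return CENTRAL_IP                      # anyone else: answer with the central VIP

print(resolve("app.corp.example", "10.1.42.7"))    # prints 10.1.0.10
print(resolve("app.corp.example", "192.0.2.55"))   # prints 203.0.113.10
```

The limitation of this static approach is exactly the one discussed below: the answer depends only on where the client is, not on whether the edge node is actually up.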

IP routing using Anycast is a good solution from a pure network perspective, but it requires computing resources to be deeply integrated in the IP network at the routing level in order to maintain localization of the service. Adding dynamic IP routing to a server is not easy, and definitely not standard. Having the computing service announce itself to the network is even more complex. This solution is possible for a single application, but becomes far more complex for a complete ecosystem.

Fortunately, proxy servers and caching engines are powerful solutions and relatively simple to set up. They can handle most traffic patterns of modern applications, though they require client configuration that can be complex to manage. What is simple to perform on a homogeneous group of Microsoft Windows workstations becomes complex when dealing with various device types, and more specifically with IoT.

How can GSLB help?

It is probably now clear to you that the tricky part is to route the user traffic towards the local resource, if available. This is where GSLB (Global Server Load Balancing) can be useful as its main objective is to route traffic from user to application while taking into consideration the overall environment.

Edge DNS GSLB is very simple to activate and configure on local DNS servers in order to supercharge IP routing decisions and direct local users to their local application when it is available. With no complex routing, no complex engine to deploy on infrastructure servers, and no manual reconfiguration in case of incidents, Edge DNS GSLB adapts itself dynamically to the ecosystem. The hierarchy of nodes offering the service is configured in a single pool at the GSLB level, down to each FQDN of the application. And thanks to the health checking system, the right decision is taken for each user connection attempt.

Neither complex IP routing nor conditional DNS configuration are required. Edge DNS GSLB provides a health-aware control plane, able to take the appropriate decision for directing traffic to either edge computing or central cloud nodes, depending on the overall situation of the network and servers. Now, that really is a smart way to optimize bandwidth and enhance user experience.

Intelligent Application Traffic Routing

EfficientIP's Edge DNS GSLB is the world’s first DNS with robust GSLB functionality built into recursive servers.

7 February 2020

EfficientIP