Edge DNS GSLB (Global Server Load Balancing) enables global application traffic routing, making it highly complementary to the ADC load-balancing solutions installed directly in datacenters to improve performance and redundancy. This blog article focuses on the combined benefits of using both technologies to provide more value to users and ease the work of I&O teams.
The role of a load balancer (LB) or application delivery controller (ADC) is mainly to distribute incoming traffic across multiple backend servers in order to provide load distribution and failure avoidance. It is a very common tool in any datacenter and provides advanced features such as compression, encrypted traffic offload and header manipulation, which are a real plus for application load balancing. Edge DNS GSLB, as proposed by EfficientIP SOLIDserver, is an interesting complement to any ADC located within a datacenter: the GSLB at the edge of the network, near the users, takes care of geographical application traffic routing, while the ADC in the datacenter manages local performance routing between application servers.
DNS GSLB brings inter-datacenter load balancing
One important function of an ADC is to enhance performance and local resiliency. Because it sees all the traffic flowing from clients to servers, it requires very high throughput, high-speed network interfaces and significant computing power. Being positioned on the network path, it can also provide advanced traffic optimization such as payload compression, header manipulation for HTTP traffic, TLS offload and certificate storage. However, an ADC has to be located very close to the servers and must maintain all sessions even in case of hardware failure, which requires high-end clustering features. The ADC is therefore not the right tool for inter-datacenter or inter-region load balancing.
Edge DNS GSLB, on the other hand, is very frugal, being directly embedded into the recursive DNS server. It sees the intent of the traffic (the DNS queries) rather than the traffic itself, but this is enough to influence routing. It can reroute traffic following predefined policies and dynamic service-availability constraints such as network latency or service response time. Performing these analyses from the edge of the network, close to users, significantly improves the quality of its routing decisions. In addition, by using the same protocols as the application for its analysis, Edge GSLB takes exactly the same path across complex networks, including multi-homed sites and SD-WAN hybrid networks using both internet VPN and private MPLS links, for example (see also: How Edge DNS GSLB Ensures App Availability During WAN Failure).
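To make the idea concrete, here is a minimal sketch (not EfficientIP's implementation; the datacenter endpoints and thresholds are invented for the example) of how an edge resolver could probe each datacenter over the same transport the application uses and answer DNS queries with the closest healthy site:

```python
import socket
import time

# Hypothetical application endpoints in each datacenter (invented for this sketch).
DATACENTERS = {
    "eu-west": ("10.1.0.10", 443),
    "us-east": ("10.2.0.10", 443),
}

def probe_latency(host, port, timeout=1.0):
    """Measure TCP connect time to an endpoint, following the app's own network path."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None  # unreachable: exclude this site from routing decisions

def pick_datacenter(datacenters, probe=probe_latency):
    """Return the name of the lowest-latency reachable datacenter, or None."""
    results = {name: probe(host, port) for name, (host, port) in datacenters.items()}
    healthy = {name: rtt for name, rtt in results.items() if rtt is not None}
    return min(healthy, key=healthy.get) if healthy else None
```

The DNS layer would then answer queries for the application's name with the record of the chosen site; because the probe runs from the edge, it reflects what the user would actually experience over that path.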
Putting DNS GSLB at the Edge offers intelligent routing between regions
Load balancing between multiple regions is a complex task that requires a specific architecture. When offering applications over the Internet to public customers, it is relatively easy to use either a CDN or a global DNS-based solution like AWS Route 53 (see DNS Cloud on how to manage public zones). But for internal applications used on a private network it is far more complex. Installing a load-balancing solution at each remote location is not a viable option, so intuitively a solution at the edge of the corporate network would seem most efficient. Edge DNS GSLB provides such a solution: easy to install, based on reliable DNS components and able to react very quickly to infrastructure changes. In addition to standard health checks using network and application protocols, Edge GSLB can be configured with more specific health checkers. When cascading GSLB and a standard ADC in each datacenter, the health of the load balancing in front of the servers can be used to determine which region the application traffic should be routed to. For example, the global load of a server pool or the average response time seen at the ADC level can serve as indicators in the routing decision process at the edge. This information is generally available directly through the ADC APIs.
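That cascading decision can be sketched as follows (the metric structure and field names are hypothetical; a real ADC exposes similar data through its own REST API): the edge GSLB excludes unhealthy or saturated pools, then routes to the region whose ADC reports the best response time.

```python
# Hypothetical per-region metrics as an ADC REST API might report them
# (field names are invented for this sketch).
adc_metrics = {
    "eu-west": {"pool_healthy": True, "avg_response_ms": 40, "pool_load_pct": 85},
    "us-east": {"pool_healthy": True, "avg_response_ms": 25, "pool_load_pct": 30},
    "ap-south": {"pool_healthy": False, "avg_response_ms": 15, "pool_load_pct": 10},
}

def pick_region(metrics, max_load_pct=90):
    """Route to the healthy, non-saturated region with the best average response time."""
    candidates = {
        region: m["avg_response_ms"]
        for region, m in metrics.items()
        if m["pool_healthy"] and m["pool_load_pct"] < max_load_pct
    }
    return min(candidates, key=candidates.get) if candidates else None
```

Here the edge decision uses only aggregated indicators from each ADC, so the GSLB stays lightweight while still reacting to datacenter-level conditions.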
Combining both ADC and Edge GSLB technologies brings many benefits to application users:
- No impact on activity when traffic goes from one server to another
- No impact on activity if server failure occurs in the datacenter
- No impact when activity increases, thanks to scale-in and scale-out
- Almost infinite hosting resources for the application
- Geographical routing of traffic to the nearest hosting facility for the application, based on the same experience as the user (e.g. latency, network load, global availability or application performance)
- No impact on activity during planned IT operations as redirection to other hosting facilities can be transparent from a network and server perspective
Combining both technologies also brings multiple benefits for I&O teams:
- Ability to easily add & remove application servers from a pool
- Easy management of maintenance windows with server removal and insertion facilities
- Easy access to advanced traffic features in order to optimize infrastructure and enhance service continuity
- Simple implementation of load balancing strategies with GSLB without ADC, e.g. for development purposes
- Capability to apply specific application traffic routing policies at remote site locations using the Edge GSLB feature
- Implementation of DRP failover rules, which can be easily tested (see DNS GSLB for Disaster Recovery)
- Complementing multipath routing of SDN branch devices with specific rules applied directly at DNS level
- Application of specific regional policies for regulation, testing purposes or cost optimization
- Performing local offload of some application traffic (e.g. corporate CDN)
GSLB easily routes users from remote locations to the appropriate region, depending on the application and its routing requirements. Once the traffic reaches the regional datacenter, it is the responsibility of the ADC, with all the optimization techniques in place, to direct the user traffic to the appropriate back-end server. By combining these two load-balancing techniques, I&O teams get the best from their infrastructure and users can make the best use of their applications in any circumstances.