Today, the user experience of digital services has the greatest impact on the perception of an organization’s ability to operate competently and successfully in a digital world. It is a key competitive differentiator, and essential for avoiding customer churn.
Website visitors, syndicated content consumers, partners consuming the APIs exposed by your web services, and staff members all rely on the organization’s infrastructure being highly responsive as well as always available. If users and machines are unable to query DNS servers to resolve addresses – or if responses are too slow – your digital services will not function as intended or expected. The results can include painful delays, slow page load times, or unexpected failures that cause the loss of work in progress. The quality of your DNS service is therefore key to keeping your users happy.
Exploding browser activity + substandard DNS performance = poor UX
The growth to over 4 billion internet users and 1.24 billion websites worldwide has caused browser activity to explode, putting tremendous demands on DNS. Web pages themselves are ecosystems aggregating various objects – potentially located in numerous places – that contribute to the page content. How quickly this entire ecosystem loads matters enormously to an audience: users expect pages to load in about two seconds, and any longer sends roughly 40% of them elsewhere. There are numerous potential causes of slowness – poor server performance, network latency, etc. – but one is often forgotten: substandard DNS performance.
DNS resolution time has a direct impact on page performance, delaying the “Start Render” event and contributing a significant amount of user-perceived latency. The reason is that every website – BBC News, Reuters and The Economist included – integrates a multitude of third-party services such as APIs, content delivery/caching and marketing technologies, each requiring DNS resolution in order to function. Since these components are likely to be hosted under several domains, a single page can easily trigger more than 32 external domain lookups (e.g. techcrunch.com). The resulting latency will most likely be a few hundred milliseconds, but can rise to one second or more.
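As a rough illustration, the cumulative cost of those external lookups can be estimated with a back-of-the-envelope model. All the numbers below – per-lookup latency, cache hit ratio, browser lookup parallelism – are hypothetical assumptions for the sketch, not measurements:

```python
from math import ceil

def added_dns_latency_ms(external_domains: int,
                         per_lookup_ms: float,
                         cache_hit_ratio: float,
                         parallel_lookups: int) -> float:
    """Estimate the DNS latency added to a page load.

    Assumes uncached lookups are issued in waves of at most
    `parallel_lookups` concurrent queries, each taking `per_lookup_ms`.
    """
    misses = ceil(external_domains * (1 - cache_hit_ratio))
    waves = ceil(misses / parallel_lookups)
    return waves * per_lookup_ms

# 32 external domains, 50 ms per lookup, half cached, 6 parallel lookups:
print(added_dns_latency_ms(32, 50, 0.5, 6))   # 150.0 -> "a few hundred ms"

# A slow resolver (120 ms per lookup) and a cold cache:
print(added_dns_latency_ms(32, 120, 0.0, 6))  # 720.0 -> approaching a second
```

Even under these forgiving assumptions, a slow resolver pushes the page toward the one-second mark on DNS alone.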
At the same time, companies are facing the ever-spreading adoption of IoT, sensor networks, supply chain automation technologies and cloud-based services. All these innovations are fueling the rapid increase in new endpoints and devices connecting to enterprise resources, putting extra strain on DNS resolution tasks and adding to the potential for increased latency. Without a resilient, high-performance DNS service, how can you expect your upstream user experience improvements (web/database server optimizations, application code refactoring, etc.) to materialize if the underlying service architecture is weak and fault-prone?
Getting back visibility and control
Achieving the best possible performance is a key goal for user experience KPIs. To reduce latency and improve responsiveness, network architects can use a DNS Anycast design, which distributes the DNS architecture so that services are delivered as close as possible to the intended base of users and machines. DNS clients always query the same IP address(es), but their packets are systematically routed to the “nearest” server in the topology. If that server is down, clients are automatically redirected to the nearest running server, avoiding DNS fallback to a secondary IP, which is a slow mechanism. This prevents queries from reaching remote servers based on the IP address alone, ensuring that DNS clients reach their local servers first. Because it can be implemented on both recursive and authoritative DNS servers, DNS Anycast adds an extra layer of resilience to your DNS.
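The cost of that fallback can be made concrete with a toy model. A stub resolver typically waits out a full timeout (5 seconds by default in the glibc resolver) before retrying against a secondary IP, whereas with Anycast the same IP simply reaches the next-nearest live server. The latency figures below are illustrative assumptions:

```python
def resolution_time_ms(primary_down: bool,
                       lookup_ms: int = 30,
                       timeout_ms: int = 5000,
                       anycast: bool = False) -> int:
    """Model worst-case resolution time for a single DNS query.

    Unicast: a dead primary costs a full client timeout before the
    secondary IP is tried. Anycast: routing steers the query to a
    live server at the same IP, so only normal lookup latency is paid.
    """
    if anycast or not primary_down:
        return lookup_ms
    return timeout_ms + lookup_ms

print(resolution_time_ms(primary_down=True))                # 5030
print(resolution_time_ms(primary_down=True, anycast=True))  # 30
```

The two-orders-of-magnitude gap is exactly what users feel as a page that hangs versus one that loads normally.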
Importantly, you can’t get the best from your network if you are unable to control it, so monitoring your DNS infrastructure is critical. Periodic reporting of service consumption levels tells you what your actual peak loads are, so you can ensure sufficient capacity and service availability.
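A minimal sketch of such peak-level reporting, assuming you can export per-query timestamps (epoch seconds) from your DNS servers’ logs – the log data here is synthetic:

```python
from collections import Counter

def peak_queries_per_minute(timestamps):
    """Bucket query timestamps (epoch seconds) into minute-long
    windows and return (minute_index, query_count) for the busiest one."""
    buckets = Counter(int(t // 60) for t in timestamps)
    return max(buckets.items(), key=lambda kv: kv[1])

# Synthetic log: 3 queries in minute 0, 5 queries in minute 1.
log = [1.0, 12.5, 59.9, 60.1, 75.0, 80.0, 99.0, 119.9]
minute, count = peak_queries_per_minute(log)
print(f"peak: {count} queries in minute {minute}")  # peak: 5 queries in minute 1
```

Trending that peak figure over weeks is what tells you when to add capacity before users notice degradation.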
Network managers can also take a defensive security posture to ensure user experience is not degraded by DNS issues caused by attacks. Because of its criticality to network service availability, DNS is often targeted by hackers, both as a vector for attacks and as the end target itself. It’s therefore essential to secure it appropriately, for example by investing in technologies that can automatically detect, log and react to events in real time, such as quarantining malicious behavior that is impacting endpoints or services.
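As a simplified illustration of the detect-and-react idea, a resolver-side process could flag clients whose query volume within a time window exceeds a threshold and hand them to a quarantine policy. This is a hypothetical sketch, not how any specific product implements it – real systems weigh many more signals (query types, name entropy, NXDOMAIN ratios):

```python
from collections import Counter

def clients_to_quarantine(query_log, max_queries_per_window):
    """query_log: iterable of (client_ip, queried_name) pairs seen in
    one time window. Returns the set of clients over the threshold."""
    counts = Counter(client for client, _name in query_log)
    return {c for c, n in counts.items() if n > max_queries_per_window}

window = [("10.0.0.5", "example.com"),
          ("10.0.0.5", "a.evil.test"),
          ("10.0.0.5", "b.evil.test"),
          ("10.0.0.9", "example.com")]
print(clients_to_quarantine(window, max_queries_per_window=2))  # {'10.0.0.5'}
```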
A well-functioning DNS means happy customers
DNS is the first component of your infrastructure that customers transparently interact with. Failing to take proper care of your DNS – on which all of your apps and devices depend, every second of every day – means the underlying performance of your network infrastructure will remain an unsolved problem, impacting all of your digital services.
So by putting into practice the recommendations mentioned above, you’ll ensure your infrastructure is proactively helping your organization achieve the best possible user experience. What better way to make sure your customers stay with you?
Learn more about optimal DNS performance and security in the real world.
Read the FFT case study and discover how they “serve” users during the French Open Grand Slam.