Public Cloud Platforms Are Not Waterproof

14 February 2019

Digital transformation is eased by cloud infrastructures

Many organizations start their journey to the cloud by moving some workloads to public providers. Most of the time, the first candidates are development and test environments, as they are generally considered less critical. After this initiation, production moves follow, starting with non-critical front-end applications, sometimes including storage such as files and databases. Then come bigger deployments.

A common strategy starts with a “lift & shift” of existing eligible applications to public cloud compute services, aka IaaS (Infrastructure as a Service), such as AWS EC2, Google Compute Engine or Microsoft Azure. At the same time, enterprises start refactoring and building new applications to make optimal use of elastic serverless resources and containers for microservices. Since refactoring takes time and may have low business value, most workloads will remain on IaaS for a long period of time.

Simplification is enabled by fully managed infrastructure resources

What is really attractive in public cloud offerings is that all the underlying parts of the infrastructure, and the orchestration procedures hosting the application components, are fully managed and mostly hidden. Here we are talking about networking, Internet access and security, storage, servers, and compute virtualization. Everything is easily configurable through a simple user interface or API, and a lot of orchestration and configuration tools are now available to build full application stacks, ready to host software developed in-house.

DNS is a central part of this infrastructure, fully managed by the cloud provider and allowing access to any cloud services and internet resources. Most of the time, no specific configuration is required to get full DNS access from the workloads pushed on public cloud infrastructures.

Security is reinforced by using private networks isolated from the Internet. Or perhaps not?

Deploying multi-tier applications on cloud services still requires some basic security and isolation concepts. To address enterprise security concerns, cloud providers offer private networking solutions for deploying internal resources and back-end services (e.g. databases, file storage, specific computation, back-office management). This allows enterprises to address security and regulatory requirements such as data protection (e.g. GDPR or the US CLOUD Act) and data encryption, or simply to avoid exposing parts of the application directly to the Internet, which is good practice.

However, we would advise deploying computational back-end resources on subnets or networks not connected to the Internet, and only reachable from known sources. Filtering rules can be enforced on managed network components in order to restrict access and comply with security policies. Most of the time, filtering is performed at a higher level on cloud infrastructures than on internal ones, thanks to “infrastructure as code” patterns and automatic deployment strategies.
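To make the idea concrete, here is a minimal, purely illustrative sketch of how such declarative filtering rules can be expressed and evaluated in code. The rule set, ports and CIDR ranges are hypothetical, not any provider's actual API:

```python
import ipaddress

# Hypothetical egress allow-list, in the spirit of "infrastructure as code":
# each rule permits one protocol/port toward a destination CIDR.
EGRESS_RULES = [
    {"protocol": "tcp", "port": 443, "cidr": "10.0.0.0/8"},    # internal APIs only
    {"protocol": "tcp", "port": 5432, "cidr": "10.1.2.0/24"},  # managed database subnet
]

def egress_allowed(protocol: str, port: int, dest_ip: str) -> bool:
    """Return True if an outbound connection matches an allow rule."""
    dest = ipaddress.ip_address(dest_ip)
    return any(
        rule["protocol"] == protocol
        and rule["port"] == port
        and dest in ipaddress.ip_network(rule["cidr"])
        for rule in EGRESS_RULES
    )

# A back-end host may reach the internal database subnet...
print(egress_allowed("tcp", 5432, "10.1.2.7"))      # True
# ...but not an arbitrary Internet address.
print(egress_allowed("tcp", 443, "93.184.216.34"))  # False
```

Because the rules live in code, they can be versioned, reviewed and deployed automatically, which is exactly why cloud filtering tends to be enforced more consistently than on internal infrastructures.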

Are you aware that private networks, without access to the Internet, are still able to communicate with it through DNS?

 

DNS tunneling, DNS file systems and data exfiltration are possible on most public cloud providers by default. This is not a security flaw, but rather a consequence of the standard, built-in DNS implementation, designed mainly to help workloads access cloud serverless services and thus ease digital transformation.

This opens a wide range of possible data leaks. We’ll focus on four that present different impacts and likelihoods:

  • From external: malicious access to the application back-end is gained through standard methods (e.g. SQL injection, heap overflow, known vulnerability, unsecured API). Code is inserted in the back-end, and DNS resolution is then performed to extract data.
  • From internal: people inside the enterprise who have access to a host can modify, install or develop an application that uses DNS to perform malicious operations (contacting a C&C server, pushing data, fetching malware content).
  • From internal: via an automatic code deployment strategy (within CI/CD), a developer could insert specific code that requires no change to the infrastructure and uses DNS to extract production data, events, or account information. The code will pass the quality gates of the continuous integration and testing stages and be automatically deployed to production.
  • From external: inserting malicious code into a widely used library will potentially impact all users of that library (cf. supply chain attacks), knowing that using libraries is standard development practice, regardless of the language (e.g. Java, Python, Node.js).
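To see why DNS makes such an effective covert channel in all four scenarios, here is a minimal sketch of how exfiltrated data is typically encoded into ordinary-looking query names. The domain `attacker.example` is hypothetical; the code only builds the query names, it does not send anything:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_exfil_queries(data: bytes, domain: str = "attacker.example") -> list:
    """Encode arbitrary bytes into DNS query names, one chunk per query.

    Each name looks like a normal lookup, but its left-most labels carry
    base32-encoded data that the attacker's authoritative name server
    simply logs and decodes on the other side.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # A sequence number per chunk lets the receiver reassemble large payloads.
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

for q in encode_exfil_queries(b"card=4111111111111111;cvv=123"):
    print(q)  # e.g. "0.<base32 payload>.attacker.example"
```

A recursive resolver with Internet access will dutifully forward these queries to the attacker's zone, which is why a subnet with "no Internet access" but default DNS is not actually isolated.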

If you manipulate or store business information on private networks hosted in a public cloud, even temporarily, you have to deploy a private DNS service that allows you to filter what should be accessible and what should not.

Cloud architecture and regular audits are now mandatory

This requires specific cloud patterns that are new to most system and network architects. Workloads requiring full access to the Internet are rare; standard architecture patterns require them to be fully autonomous, as data access and security are far simpler to track that way. But sometimes, DevOps people prefer to have direct access to the Internet in order to update the infrastructure or install packages and dependencies, as it simplifies the deployment phase and contributes to regular “time to market” deliveries.

One good approach is an “immutable infrastructure” with prebuilt images, private networks, and controlled inbound and outbound communication. In addition, regular testing phases should be performed, as cloud provider options may change without being integrated into the enterprise's standard change management. Remember, ITIL flowcharts are still used for at least the legacy IT systems.

Multi-cloud approaches are more complex to handle, since each cloud provider implements the features with a different flavor. DNS can be disabled, or not, per subnet, per virtual private cloud network, or per host. Some providers offer an advanced DNS service to host private zones; others allow contact with the enterprise's internal DNS. What is questionable is that DNS may be treated as just another underlying system, network or security service handled directly by the service provider. Don't underestimate the focus required to secure the workloads you push to the public cloud, and more generally to IaaS and PaaS providers.

Privacy must be reinforced with private DNS, even in the cloud

To be effective, private networks in the cloud should be deployed without default DNS access (at least without the recursive part, if possible at the cloud provider level). A private DNS solution is required and will handle any resolution performed by the workloads on IaaS and cloud services (e.g. VMs, function as a service, big data computation clusters, container orchestrators, or batch systems). In addition, it can also include security features based on traffic behavior.
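The core of such a private DNS policy can be sketched in a few lines: resolve only names under explicitly approved zones and refuse everything else, which breaks the tunneling and exfiltration channels described above. The zone names below are purely illustrative:

```python
# Illustrative allow-list of zones a private DNS service would resolve;
# any query outside these zones is refused instead of being recursed
# to the Internet. Zone names are hypothetical.
ALLOWED_ZONES = ("corp.example.", "internal.cloud.example.")

def resolution_allowed(qname: str) -> bool:
    """Return True if the query name falls under an approved zone."""
    name = qname.lower().rstrip(".") + "."   # normalize to a fully qualified name
    return any(name == zone or name.endswith("." + zone) for zone in ALLOWED_ZONES)

print(resolution_allowed("db1.corp.example"))        # True: internal service
print(resolution_allowed("x7f3a.attacker.example"))  # False: refused, tunnel broken
```

Combined with traffic-behavior analysis (query volume, entropy of labels, query rate per client), this kind of policy turns DNS from an open channel into a controlled one.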

Configuring such an on-demand DNS service in the public cloud is made easier with a flexible DDI solution integrated with the cloud orchestrator. The DDI solution will automatically push the appropriate records into the configuration once the service is enabled, bringing time savings and enforcing policies that help secure public cloud deployments.

A French version of this article is available on the Les Echos Opinions site: Les plateformes de cloud public ne sont pas étanches !

Learn more best practices to mitigate DNS-based breaches

Download the IDC Technology Spotlight: Dealing with DNS-Based Data Breaches to Avoid GDPR Non-Compliance

GET IT HERE