April 15, 2019 By Jeffrey Kwong 7 min read

IBM Cloud Private allows the flexibility of a public cloud infrastructure without abandoning enterprise tools and security policies

This is the second post of a series focusing on deploying IBM Cloud Private to various infrastructure platforms. For more background on our team’s work, see the first installment, “Deploying IBM Cloud Private on Amazon Web Services,” by Gang Chen. If you are new to cloud development, see the IBM Cloud Private learning journey for a comprehensive understanding, including container basics, Helm, and Kubernetes.

At IBM, a lot of my team’s focus for the last few months has been on enabling cloud-native development in on-premise data centers using IBM Cloud Private, our Kubernetes-based container orchestration platform. Since cloud-native development involves a culture shift in addition to adopting new technologies, providing a “cloud-like” experience within a client’s own data center can help teams prepare to move existing workloads to the cloud and to create new ones there.

Some of our clients have gone all-in with cloud and are shutting down data centers in favor of cloud infrastructure. These clients ask us how they can provide the same consistent control plane across what used to be their on-premise environments. Although the major cloud vendors have a managed Kubernetes offering, clients see value in a consistent set of management services on every cloud infrastructure the application is deployed on. The principle is that as long as the cloud infrastructure vendor can run my Kubernetes platform, I can run my business there.

Deploying a container platform like IBM Cloud Private on top of cloud infrastructure provides the most control to organizations that want the flexibility the public cloud provides but have adopted enterprise policies around tools and security. An earlier post from my colleague described the planning process we followed; it includes a review of the motivations behind deploying a “private” cloud platform on public cloud infrastructure.

An IBM Cloud Private installation, deployable on a panoply of cloud vendor platforms, requires just a set of VMs with varying CPU, memory, and disk requirements for each Kubernetes node role; a flat, routable network for Calico; and some configuration files that tell the installer the IP addresses of the nodes. Most cloud infrastructure providers fulfill these compute, network, and storage requirements, and a single command to kick off the Ansible-based installation takes care of the rest.
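To make that concrete, here is a minimal sketch of those configuration files and that command. The hosts-file sections follow the IBM Cloud Private convention of one section per node role; the IP addresses and product version tag are placeholders, not values from a real deployment.

```ini
# cluster/hosts -- one section per Kubernetes node role
[master]
10.0.0.10

[proxy]
10.0.0.20

[worker]
10.0.0.30
10.0.0.31
10.0.0.32
```

With the files in place, the installation is kicked off from the cluster directory with a single command (the image tag shown is illustrative):

```sh
docker run --net=host -t -e LICENSE=accept \
  -v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.1.2 install
```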

Applying the multicloud approach by example with AWS

Note: This section outlines how to deploy IBM Cloud Private on Amazon Web Services, but the same approach applies to other infrastructure platforms. For more details, see our Terraform templates on GitHub for highly available IBM Cloud Private deployments on Amazon Web Services, Microsoft Azure, Google Cloud Platform, VMware, and, of course, IBM Cloud.

One of our clients asked us to provide specific guidance for deployment on Amazon Web Services as part of their multicloud strategy, in which they were deploying a microservices application on both IBM Cloud and AWS for redundancy and high availability. We mapped the compute, network, and storage requirements for IBM Cloud Private to Amazon EC2 resources with a VPC in a single region and a single subnet. After the EC2 instances were created, we set up passwordless SSH on each node, installed the Docker runtime on each node, generated the config.yaml, and fired up the icp-inception container to install the cluster.
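For reference, a minimal config.yaml might look like the following. This is only a sketch; the exact keys and defaults vary by IBM Cloud Private version, and every value shown is illustrative.

```yaml
# cluster/config.yaml -- minimal sketch, illustrative values only
network_type: calico
network_cidr: 10.1.0.0/16              # pod network; must not overlap the VPC CIDR
service_cluster_ip_range: 10.2.0.0/16  # cluster service IPs
cluster_name: mycluster
default_admin_password: changeme       # use a real secret in practice
ansible_user: ubuntu                   # SSH login user on each EC2 node
ansible_become: true                   # escalate to root during installation
```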

The resulting topology, which would be similar on each public cloud provider, consists of:

  • One proxy node that accepts external traffic from the internet

  • One master node that serves the control plane

  • Several worker nodes that run the actual container workload

Since there is a single proxy node that accepts client traffic from the internet, this cluster is not highly available and may not scale very well as the number of clients increases.

There are a few advantages to working on public cloud:

  • Client scalability: Leverage robust cloud-native load balancing instead of standing up separate proxy nodes to serve as ingress to the containers.

  • High availability: Public cloud allows us to deploy nodes across multiple availability zones, which are typically connected by a fast, low-latency interconnect.

  • Network security: We used AWS EC2 security groups to block non-cluster traffic in a distributed way instead of using traditional firewall appliances (see the sketch below).

There are also managed DNS services for name resolution, as well as gateway devices that allow northbound and southbound traffic so we can pull images from our external image registry or reach services in an on-premise network. All of these can be provisioned or destroyed on demand through APIs.
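As a sketch of the security-group approach mentioned above, the Terraform resource below allows unrestricted traffic between cluster members while only admitting HTTPS from the internet toward the proxy. The resource names, CIDRs, and ports are illustrative, not our client’s actual values.

```hcl
resource "aws_security_group" "icp_cluster" {
  name   = "icp-cluster"
  vpc_id = "${aws_vpc.icp_vpc.id}"   # hypothetical VPC declared elsewhere

  # Allow all traffic between cluster members (Calico, etcd, kubelet, ...)
  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  # Admit HTTPS from the internet; attach this group to the proxy node only
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Unrestricted egress so nodes can pull images and reach cloud APIs
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```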

Supporting multiple deployments with a single control point

While the IBM Cloud Private requirements seemed simple enough, our client required five separate environments for development, test, staging, performance, and production, as well as plans for disaster recovery and for scaling out to other regions for high availability and performance. Each environment had different requirements as well: non-production environments were not to be exposed on the internet, while production environments needed more worker nodes, cores, and memory and had to be spread across availability zones for high availability.

Because we had to repeat the exercise for each environment, we used Terraform to create declarative infrastructure for an IBM Cloud Private cluster in AWS. Terraform allows us to declare the resources we require as source code and translates these resource declarations into AWS API calls. From a skills perspective, instead of using each cloud’s native automation language (e.g., AWS CloudFormation), learning an abstraction like Terraform lets our engineers reuse their skills on other clouds, since most major cloud vendors have Terraform providers that translate declarative resources into objects on each cloud.
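As a small sketch of what a declarative resource looks like (the AMI ID and instance type are placeholders, not our actual template values):

```hcl
provider "aws" {
  region = "us-east-1"
}

# Three identical worker nodes, declared once; Terraform turns this
# declaration into the corresponding EC2 API calls.
resource "aws_instance" "icp_worker" {
  count         = 3
  ami           = "ami-0123456789abcdef0"   # hypothetical Ubuntu AMI
  instance_type = "m4.xlarge"

  tags = {
    Name = "icp-worker-${count.index}"
  }
}
```

Changing `count` to 5 and committing that change is all it takes to add two more workers; Terraform computes the difference and issues only the API calls needed to converge.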

Automated deployment of IBM Cloud Private on AWS with Terraform

Development of the Terraform scripts occurred in a sandbox account on AWS until we got it right. Provisioning the environment on AWS was divided into these steps:

  • Lay down the infrastructure using the AWS API by specifying Terraform resources.

  • Use Terraform interpolations to generate the configuration files from the resources it created, and push them into an S3 bucket (see the sketch after this list).

  • Pull the configuration files and the installation binaries from S3 into the boot node.

  • Perform the installation and run any post-install commands, such as generating service accounts and keys for our CI/CD tool to use for application deployment.

  • Commit the state file containing the resource mappings to versioned storage, in our case an S3 bucket.
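Here is a sketch of the interpolation step (the second bullet above): render the hosts file from the instances Terraform created, then push it to S3. The template path, bucket resource, and variable names are hypothetical.

```hcl
# Render the IBM Cloud Private hosts file from the created instances
data "template_file" "icp_hosts" {
  template = "${file("templates/hosts.tpl")}"

  vars = {
    master_ip  = "${aws_instance.icp_master.private_ip}"
    proxy_ip   = "${aws_instance.icp_proxy.private_ip}"
    worker_ips = "${join("\n", aws_instance.icp_worker.*.private_ip)}"
  }
}

# Push the rendered file to the S3 bucket the boot node pulls from
resource "aws_s3_bucket_object" "icp_hosts" {
  bucket  = "${aws_s3_bucket.icp_config.id}"
  key     = "cluster/hosts"
  content = "${data.template_file.icp_hosts.rendered}"
}
```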

The declarative nature of Terraform allows us to commit our infrastructure into a source code repository and track it like code. We can iteratively make changes, observe over time how the infrastructure has changed, and see who committed the code changes that resulted in infrastructure changes, as well as provide comments and history as application requirements evolve. Terraform builds a dependency graph of the required resources and can iteratively provision additional resources or modify and remove existing ones by comparing the declarations to a state file. The state file records the mappings of the resources Terraform created, and each execution consults it to determine what has changed.
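Storing the state remotely is a one-time configuration. A minimal sketch, assuming a pre-created S3 bucket with versioning enabled (the bucket and key names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "our-terraform-state"        # versioning enabled on this bucket
    key    = "icp/dev/terraform.tfstate"  # one key per environment
    region = "us-east-1"
  }
}
```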

Each environment is represented in our code repository as a branch. For example, our development environment is stored in the “dev” branch and contains a simple non-HA setup of the free IBM Cloud Private community edition. Our production environment is stored in the “prod” branch and contains a full highly available setup of IBM Cloud Private enterprise edition, where the nodes are deployed across three availability zones, with larger instance types and additional nodes.
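In practice, the branches differ mainly in a small variables file. The variable names below are hypothetical, but they convey the idea:

```hcl
# dev.tfvars -- non-HA, free community edition
icp_edition   = "ce"
worker_count  = 3
instance_type = "m4.xlarge"
azs           = ["us-east-1a"]

# prod.tfvars -- HA enterprise edition across three availability zones
# icp_edition   = "ee"
# worker_count  = 9
# instance_type = "m4.2xlarge"
# azs           = ["us-east-1a", "us-east-1b", "us-east-1c"]
```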

When code is committed on one of the branches in our code repository, our continuous deployment tool (Jenkins) triggers an execution of Terraform to update that environment in the cloud infrastructure. This is commonly referred to as “GitOps,” where all operational tasks are mapped to and/or triggered by git operations. To create a new environment, we branch from an existing one, modify some parameters, and create a new pipeline.
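Conceptually, the Terraform stage of that pipeline reduces to a few standard CLI steps. The branch-to-var-file naming convention here is an assumption for illustration; BRANCH_NAME is supplied by the CD tool.

```sh
terraform init                                                  # fetch providers, attach the S3 backend
terraform plan -var-file="${BRANCH_NAME}.tfvars" -out=plan.out  # compute the change set
terraform apply plan.out                                        # apply exactly the reviewed plan
```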

Important reminders about infrastructure automation

One thing to note about infrastructure automation (and any automation in general) is that it is important to enforce separation of duties:

  • A service account with write permissions should be created on each cloud provider and only the continuous delivery (CD) tool should be given the credentials.

  • Only a code commit should be allowed to update the cloud infrastructure, not humans; manual changes happen outside of the code repository history and become impossible to track and audit.

  • Only authorized users should be allowed to commit code to the repository that results in infrastructure changes.

A good policy is to rotate the AWS access keys every 30 days and update the CD tool accordingly, so that even if someone obtained the keys, they would be valid for at most 30 days.
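The rotation itself uses standard AWS CLI calls; the user name and key ID below are placeholders.

```sh
# Issue a fresh key pair for the CD service account
aws iam create-access-key --user-name terraform-cd

# ...store the new key in the CD tool's credential store, then retire the old one
aws iam delete-access-key --user-name terraform-cd \
  --access-key-id AKIAIOSFODNN7EXAMPLE
```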

We have published the output of our Terraform work on GitHub. Internally, a lot of IBMers have asked us for a prescriptive way of installing IBM Cloud Private on AWS that has been proven at client sites. Instead of writing out all the best practices individually, we just point to our GitHub repo and ask them to fork it and update it according to their needs. Beyond being a fast, one-command way of standing up IBM Cloud Private, the repository also supports good DevOps practice: using automation and continuous delivery to make incremental changes to the environment as needed.

What’s next in the multicloud management series

At IBM, we are seeing the container platform revolution transform our organization. By rallying around a platform like IBM Cloud Private, we are enabling delivery of IBM capabilities anywhere we can deploy our platform, including traditional on-premise infrastructure and public clouds. For more information on IBM Cloud Private reference architectures, please see our Architecture Center and particularly our architecture for IBM Cloud Private on Amazon Web Services.

In the next installment of this series, we will dive deeper into how we integrated our container platform with pre-existing resources on intranet networks and across other clouds.
