September 12, 2022 By Powell Quiring 5 min read

How to design a continuous integration and continuous deployment (CI/CD) for virtual machines in an IBM Virtual Private Cloud (VPC).

Software is crucial for business — even established businesses. The front door is an application running on a phone or a website. Sales and marketing rely on customer relationship management (CRM) systems. Shipping and receiving are automated logistics.

Delivering new versions of software is the cornerstone of continuous improvement. Continuous integration and continuous deployment (CI/CD) is a proven strategy for delivering high-quality software. At its core, CI/CD captures the steps to create and deploy software. The goal is to remove humans from the mundane by automating the steps to improve reliability and deliver fixes and features more frequently.

This blog post will cover the issues around the automated development of software for virtual server instances (VSIs). There is a companion GitHub repository that demonstrates a few of these concepts.

Background

IBM Virtual Private Cloud (VPC) provides compute instances with various flavors of CPU, memory, network and storage options for securely running workloads. Virtual server instance (VSI) images are the initial contents of the boot disk of a VPC instance.

IBM has documented a number of off-the-shelf architectures in the architecture center — like workloads in IBM VPC. Part of implementing the architecture is delivering software to the virtual server instances. Focusing the architecture lens on a single instance looks like this:

  • The app is an image running on a VSI.
  • 1.1 is the version of the app.

It’s reasonable to bake the application into a VSI image. Each application release will create a new version of the image, and the image will pass through a number of phases: build stage, test stage, pre-production stage and the final deployment to production. VSIs in a stage are provisioned with the new image version to deploy the software.

Create a pipeline to create and deploy VSI images

Automated pipelines can be integrated into automation tools like DevOps toolchains. The automated steps start with an IBM stock image and create custom images as the software is developed and fixes are applied. The custom images are then deployed into the staging environment.

Basic pipeline

  • The stock images are provided by IBM and regularly updated.
  • The dept images are created by the department. The image to deploy to stage is tagged with stage. Notice how that tag was “moved” from dept-1-1 to dept-1-2.

Steps:

  • Image pipeline:
    • Start with an IBM stock image.
    • Create a new image with desired changes.
    • Delete the stage tag from the previous version.
    • Add the version tag and stage tag to the new image.
  • Stage pipeline:
    • Notice that a new image with the stage tag is available.
    • Provision the architecture with the new image.

Multi-stage pipeline

An organization can have a central set of images that serve as the base images for all development departments:

Corporate images are base images used by all departments.

Create an image pipeline with Packer

Packer with the IBM plugin can be used to create images. The blog post “Build Hardened and Pre-Configured VPC Custom Images with Packer” provides an introduction. Here are some snippets of the Packer configuration that define a starting point using an IBM stock image. Provisioners are used to install software like nginx or your application. More steps are needed to further configure the application runtime environment, but you get the idea. Below is a cut-down of this full example:

packer {
  required_plugins {
    ibmcloud = {
      source  = "github.com/IBM/ibmcloud"
    }
  }
}

source "ibmcloud-vpc" "ubuntu" {
  vsi_base_image_name = "ibm-ubuntu-22-04-minimal-amd64-1"
}

build {
  sources = ["source.ibmcloud-vpc.ubuntu"]

  # Install packages on the temporary instance used to build the image
  provisioner "shell" {
    inline = [
      "apt -qq -y install nginx < /dev/null",
    ]
  }

  # Copy the application archive into the image
  provisioner "file" {
    source      = "app.tar.gz"
    destination = "/root/app.tar.gz"
  }
}

Basic steps that are triggered by a change in the application:

  • Create an image using Packer.
  • Signal the next stage — the deploy pipeline.
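A minimal sketch of the image-creation step, assuming the Packer template above lives in the current directory:

```shell
# Download the plugins declared in required_plugins, then build the image
packer init .
packer build .
```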

Create a deploy pipeline to deploy the new image

The deploy pipeline in the diagram above is for provisioning new VSIs to run images generated by the image pipeline.

Steps:

  • Create a VPC Subnet and other resources.
  • Wait for signal from previous stage.
  • Provision new VSIs running new image.

The VPC architecture and corresponding VSIs will depend on the problem being solved. They could be as simple as a single VSI or more complicated like the three-tier architecture. The provision/destroy steps will depend on the architecture. It may be sufficient to invoke a Terraform script that uses the dynamic evaluation of tags to identify the image (see example vpc.tf). Alternatively, you can use the IBM Cloud Command Line Interface to find the image with the stage tag:

ibmcloud resource search 'service_name:is AND type:image AND tags:"stage"'
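The deploy pipeline needs the search result in machine-readable form. A hedged sketch, assuming jq is available and the search command accepts --output json:

```shell
# Capture the name of the image currently tagged "stage"
IMAGE_NAME=$(ibmcloud resource search \
  'service_name:is AND type:image AND tags:"stage"' --output json \
  | jq -r '.items[0].name')
echo "Deploying image: $IMAGE_NAME"
```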

You will need to consider a replacement strategy for the existing VSIs. Other resources, such as load balancers or DNS entries, may depend on the private IP addresses of existing VSIs. Here are some possible scenarios:

Preserve the VSI IPs

The reserved IPs capability of VPC allows you to reserve an IP address in a subnet. Destroying and then re-provisioning a VSI with a reserved IP results in the same IP address. Here is an example Terraform snippet:

resource "ibm_is_subnet" "zone" {
  # subnet attributes elided for brevity
}
resource "ibm_is_subnet_reserved_ip" "instance" {
  subnet = ibm_is_subnet.zone.id
}
resource "ibm_is_instance" "test" {
  # other required instance attributes elided for brevity
  image = data.ibm_is_image.name.id // new image version to provision
  primary_network_interface {
    subnet = ibm_is_subnet.zone.id
    primary_ip {
      reserved_ip = ibm_is_subnet_reserved_ip.instance.reserved_ip
    }
  }
}

DNS record or load balancer update

It may be advantageous to provision the new VSI application using a new IP address. After both are running, you can change the dependent resources. Update the DNS record to the new IP address when both the old and new VSIs are active. Load balancer pool members can be handled similarly.
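For the load balancer case, a sketch with the IBM Cloud CLI might look like the following. The IDs, port and IP address are placeholders:

```shell
# Add a pool member for the new VSI's private IP, then remove the old member
ibmcloud is load-balancer-pool-member-create $LB_ID $POOL_ID 80 10.240.0.5
ibmcloud is load-balancer-pool-member-delete $LB_ID $POOL_ID $OLD_MEMBER_ID
```

Removing the old member only after the new one is healthy keeps the application reachable throughout the swap.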

VPC instance group

Instance groups allow bulk provisioning. An instance group can even be the pool for a load balancer. The image is specified by an instance template resource. Create a new instance template for the new image version and connect it to the instance group. New instances will be provisioned using the new image. You will need to remove the instances running the previous image version.

The diagrams below show the before on the left and the after on the right:

  • Create a new Instance Template with version 1.2 of the image.
  • Initialize the Instance Group with the new Instance Template.
  • Delete the Instance Group Members running the previous versions.
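A minimal Terraform sketch of the instance template swap. The resource names, profile, zone and count are illustrative, and the referenced VPC, subnet, SSH key and image data source are assumed to exist elsewhere in the configuration:

```hcl
resource "ibm_is_instance_template" "v1_2" {
  name    = "dept-template-1-2"
  image   = data.ibm_is_image.dept_1_2.id # new image version
  profile = "cx2-2x4"
  vpc     = ibm_is_vpc.vpc.id
  zone    = "us-south-1"
  keys    = [ibm_is_ssh_key.key.id]
  primary_network_interface {
    subnet = ibm_is_subnet.zone.id
  }
}

resource "ibm_is_instance_group" "group" {
  name              = "dept-group"
  instance_template = ibm_is_instance_template.v1_2.id
  instance_count    = 2
  subnets           = [ibm_is_subnet.zone.id]
}
```

Pointing the instance_template reference at a template built from the new image is what causes newly provisioned members to run the new version.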

Summary and next steps

Automating software build, test, integration and deployment will improve software quality. Virtual machine images can be the foundation of the process. IBM Virtual Private Cloud (VPC) has the compute capacity along with the isolation and control to make it simple, powerful and secure.

If you have feedback, suggestions or questions about this post, please email me or reach out to me on Twitter (@powellquiring).
