September 12, 2022 By Powell Quiring 5 min read

How to design a continuous integration and continuous deployment (CI/CD) pipeline for virtual machines in an IBM Virtual Private Cloud (VPC).

Software is crucial for business — even established businesses. The front door is an application running on a phone or a website. Sales and marketing rely on customer relationship management (CRM) systems. Shipping and receiving are automated logistics.

Delivering new versions of software is the cornerstone of continuous improvement. Continuous integration and continuous deployment (CI/CD) is a proven strategy for delivering high-quality software. At its core, CI/CD captures the steps to create and deploy software. The goal is to remove humans from the mundane by automating the steps to improve reliability and deliver fixes and features more frequently.

This blog post will cover the issues around the automated development of software for virtual server instances (VSIs). There is a companion GitHub repository that demonstrates a few of these concepts.

Background

IBM Virtual Private Cloud (VPC) provides compute instances with various flavors of CPU, memory, network and storage options for securely running workloads. Virtual server instance (VSI) images are the initial contents of the boot disk of a VPC instance.

IBM has documented a number of off-the-shelf architectures in the architecture center, like workloads in IBM VPC. Part of implementing an architecture is delivering software to the virtual server instances. Focusing the architecture lens on a single instance, the picture looks like this:

  • The app is an image running on a VSI.
  • 1.1 is the version of the app.

It’s reasonable to bake the application into a VSI image. Each application release will create a new version of the image, and the image will pass through a number of phases: build stage, test stage, pre-production stage and the final deployment to production. VSIs in a stage are provisioned with the new image version to deploy the software.

Create a pipeline to create and deploy VSI images

Automated pipelines can be integrated into tools like DevOps toolchains. The automated steps start with an IBM stock image and create custom images as the software is developed and fixes are applied. The custom images are then deployed into the staging environment.

Basic pipeline

  • The stock images are provided by IBM and regularly updated.
  • The dept images are images created by the department. The image to deploy to stage is tagged with stage. Notice how that tag was “moved” from dept-1-1 to dept-1-2.

Steps:

  • Image pipeline:
    • Start with an IBM stock image.
    • Create a new image with desired changes.
    • Delete the stage tag from the previous version.
    • Add the version tag and stage tag to the new image (a tagging sketch follows this list).
  • Stage pipeline:
    • Notice that a new image with the stage tag is available.
    • Provision the architecture with the new image.
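
A minimal sketch of the tagging steps, assuming the IBM Cloud Terraform provider's ibm_resource_tag resource and an ibm_is_image data source; the image name, tag values and versions are illustrative. Because Terraform manages the tag attachment, re-pointing the resource at the next image version detaches the tags from the previous image on the following apply, which is how the stage tag "moves":

data "ibm_is_image" "dept" {
  name = "dept-1-2" # illustrative name of the image just built by the pipeline
}

# Attach the stage tag and a version tag to the new custom image
resource "ibm_resource_tag" "stage" {
  resource_id = data.ibm_is_image.dept.crn
  tags        = ["stage", "version:1-2"]
}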

Multi-stage pipeline

An organization can have a central set of images that serve as the base images for all development departments:

Corporate images are base images used by all departments.

Create an image pipeline with Packer

Packer with the IBM plugin can be used to create images. The blog post “Build Hardened and Pre-Configured VPC Custom Images with Packer” provides an introduction. Here are some snippets of the Packer configuration that define a starting point using an IBM stock image. Provisioners are used to install software like nginx or your application. More steps are needed to further configure the application runtime environment, but you get the idea. Below is a cut-down of this full example:

packer {
  required_plugins {
    ibmcloud = {
      source  = "github.com/IBM/ibmcloud"
    }
  }
}

source "ibmcloud-vpc" "ubuntu" {
  vsi_base_image_name = "ibm-ubuntu-22-04-minimal-amd64-1"
}

provisioner "shell" {
  inline = [
    "apt -qq -y install nginx < /dev/null",
  ]
}

provisioner "file" {
  source = "app.tar.gz"
  destination = "/root/app.tar.gz"
}
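
For the multi-stage pipeline described earlier, a department pipeline can build on a corporate base image instead of an IBM stock image. A hedged sketch, assuming the plugin resolves a custom image name the same way it resolves a stock image name; the corporate image name is illustrative:

# Department pipeline: start from the corporate base image
source "ibmcloud-vpc" "corporate" {
  vsi_base_image_name = "corporate-base-1-3"
}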

Basic steps that are triggered by a change in the application:

  • Create an image using Packer.
  • Signal the next stage — the deploy pipeline.

Create a deploy pipeline to deploy the new image

The deploy pipeline in the diagram above is for provisioning new VSIs to run images generated by the image pipeline.

Steps:

  • Create a VPC Subnet and other resources.
  • Wait for signal from previous stage.
  • Provision new VSIs running new image.

The VPC architecture and corresponding VSIs will depend on the problem being solved. They could be as simple as a single VSI or more complicated, like a three-tier architecture. The provision/destroy steps will depend on the architecture. It may be sufficient to invoke a Terraform script that uses dynamic evaluation of tags to identify the image (see example vpc.tf). Alternatively, you can use the IBM Cloud Command Line Interface to find the image with the stage tag:

ibmcloud resource search 'service_name:is AND type:image AND tags:"stage"'
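
A minimal sketch of feeding the image into a Terraform deploy, assuming the image name found by the search (or produced by the image pipeline) is passed in as a variable; the variable and resource names are illustrative:

variable "image_name" {
  description = "custom image to deploy, for example dept-1-2"
  type        = string
}

# Look up the custom image by name and provision the instance from it
data "ibm_is_image" "app" {
  name = var.image_name
}

resource "ibm_is_instance" "app" {
  image = data.ibm_is_image.app.id
  # name, profile, vpc, zone, keys and network configuration omitted for brevity
}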

You will need to consider a replacement strategy for the existing VSIs. Other resources may depend on an existing VSI; for example, load balancers or DNS entries depend on the private IP addresses of the VSI. Here are some possible scenarios:

Preserve the VSI IPs

The reserved IPs capability of VPC allows you to reserve an IP address in a subnet. Destroying and then re-provisioning a VSI with the reserved IP will result in the same IP address. Here is an example Terraform snippet:

resource "ibm_is_subnet" "zone" { }
resource "ibm_is_subnet_reserved_ip" "instance" {
  subnet = ibm_is_subnet.zone.id
}
resource "ibm_is_instance" "test" {
  image          = data.ibm_is_image.name.id  // new image version to provision
  primary_network_interface {
    subnet = ibm_is_subnet.zone.id
    primary_ip {
      reserved_ip = ibm_is_subnet_reserved_ip.instance.reserved_ip
    }
  }
}

DNS record or load balancer update

It may be advantageous to provision the new VSI using a new IP address. Once both the old and new VSIs are active, update the dependent resources: point the DNS record at the new IP address and then retire the old VSI. Load balancer pool members can be handled similarly.
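
A hedged sketch of the DNS case, assuming a zone in IBM Cloud DNS Services and the ibm_dns_resource_record resource; the record name, TTL and variables (including the new VSI's private IP) are illustrative and would be supplied by the deploy pipeline:

variable "new_vsi_private_ip" {
  description = "private IP address of the newly provisioned VSI"
  type        = string
}

# Point the A record at the new VSI while the old one is still active;
# dns_instance_guid and dns_zone_id are assumed to be declared elsewhere.
resource "ibm_dns_resource_record" "app" {
  instance_id = var.dns_instance_guid
  zone_id     = var.dns_zone_id
  type        = "A"
  name        = "app"
  rdata       = var.new_vsi_private_ip
  ttl         = 300
}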

VPC instance group

Instance groups allow bulk provisioning. An instance group can even serve as the pool for a load balancer. The image is specified by an instance template resource. Create a new instance template for the new image version and connect it to the instance group; new instances will then be provisioned with the new image. Instances still running the previous image version must be removed.

The diagrams below show the before on the left, and the after on the right:

  • Create a new Instance Template with version 1.2 of the image.
  • Initialize the Instance Group with the new Instance Template.
  • Delete the Instance Group Members running the previous versions.
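
A minimal sketch of the template swap, assuming the instance group is managed in Terraform and that the VPC, subnet, SSH key and image lookups are defined elsewhere; names, profile and counts are illustrative:

# Instance template referencing the new image version (1.2)
resource "ibm_is_instance_template" "app_1_2" {
  name    = "app-template-1-2"
  image   = data.ibm_is_image.app.id # the new custom image
  profile = "cx2-2x4"
  vpc     = ibm_is_vpc.vpc.id
  zone    = "us-south-1"
  keys    = [ibm_is_ssh_key.key.id]
  primary_network_interface {
    subnet = ibm_is_subnet.zone.id
  }
}

# Point the instance group at the new template; newly provisioned members
# use image 1.2, while members still running 1.1 must be deleted separately.
resource "ibm_is_instance_group" "app" {
  name              = "app-group"
  instance_template = ibm_is_instance_template.app_1_2.id
  instance_count    = 2
  subnets           = [ibm_is_subnet.zone.id]
}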

Summary and next steps

Automating software build, test, integration and deployment will improve software quality. Virtual machine images can be the foundation of the process. IBM Virtual Private Cloud (VPC) has the compute capacity along with the isolation and control to make it simple, powerful and secure.

If you have feedback, suggestions or questions about this post, please email me or reach out to me on Twitter (@powellquiring).
