June 7, 2021 | By Phil Downey and George Baklarz | 4 min read

Fast-tracking your database journey to Red Hat OpenShift.

Hybrid cloud is gaining momentum in the enterprise, with many customers adopting a multi-vendor approach to avoid being pinned down to a single cloud vendor. Red Hat OpenShift supports this strategy by providing customers with a containerization framework that is supported across all major cloud platforms. Customers can easily move their OpenShift workloads between cloud providers without the need for disruptive migrations.

Once a containerization strategy has been adopted, the challenge that many customers now face is how to move their Db2 traditional installations (bare metal or virtual machines) to an operator-based cloud-native deployed Db2 service. How do they get there?

The traditional answers most customers expect are database backup and restore, or the more disruptive export and import of data. However, there is another option for Db2 for Linux users: instead of traditional migration methods, a new move utility can relocate the database and, optionally, upgrade it along the way, with no separate migration step required.

IBM Db2 Click to Containerize

Welcome to the new IBM Db2 Click to Containerize (C2C) tool kit. Db2 C2C enables you to inspect a database, review and change any required settings, and then move it into your hybrid cloud environment running Red Hat OpenShift or IBM Cloud Pak® for Data. The program produces an auditable script that moves your database and optionally sets up HADR synchronization from your original database to the new containerized database on Red Hat OpenShift. It is that simple, with no exporting, importing, decrypting or exposing of any data within your database.

Click to Containerize works with the standard Db2 Operator for Red Hat OpenShift or with IBM Db2 for Cloud Pak for Data. It is available as a graphical tool or a command line interface, so it can easily be integrated into scheduled jobs that are parameterized to move multiple databases at one time.
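For example, a scheduled job could invoke the command line interface once per database. The `db2shift` command name and every flag below are illustrative placeholders, not the tool's documented syntax:

```shell
#!/bin/sh
# Hypothetical sketch only: loop over a list of databases and invoke
# a C2C-style shift command for each. The "db2shift" name and its
# flags are placeholders, not the actual documented CLI syntax.
for DB in SALES HR INVENTORY; do
  db2shift --mode=move \
           --source-database "$DB" \
           --dest-type CP4D \
           --dest-server api.cluster.example.com \
           >> "/var/log/c2c/${DB}.log" 2>&1
done
```

A wrapper like this could then be registered in cron or a job scheduler, with the database list supplied as a parameter.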

But Click to Containerize is not limited to migrations; consider the scenario where development teams want to create cloned images of a production database to test containerization, performance, migration and quality assurance. Cloning databases becomes a simple command rather than the tedious backup and restore process.

Click to Containerize supports SMP and MPP deployments along with row and columnar storage formats. Large databases can be moved in stages to avoid long outage windows, and data movement can be parallelized with full recovery and integrity maintained. Performance depends on parallelism, network bandwidth and available processor capacity, but laboratory testing has shown that moving databases between cloud environments and across geographies is efficient and reliable.

Click to Containerize supports many different containerization scenarios; the main ones are summarized below.

Simplified database upgrade inline

The shift process detects that the source database is an older release (Db2 10.5 or 11.1) and upgrades it to 11.5 during the shift.
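For comparison, performing the same upgrade manually involves the standard Db2 upgrade steps, which the shift process automates. The database name SAMPLE and instance name db2inst1 below are illustrative:

```shell
# Manual upgrade path that the inline shift replaces.
# Verify the database is ready for the new release:
db2ckupgrade SAMPLE -l upgrade_check.log

# After installing the Db2 11.5 copy, upgrade the instance
# (typically run as root) and then the database itself:
db2iupgrade db2inst1
db2 UPGRADE DATABASE SAMPLE
```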

Cached containerization

It may not always be practical to shift the database directly from the source location to the destination. The reasons are usually due to the restrictions put in place on the source server:

  • Additional software, such as Click to Containerize, cannot be installed
  • Insufficient resources to run the shift process
  • A need to minimize the impact on production workloads
  • Security or network connectivity restrictions

To accommodate these server restrictions, Click to Containerize can access a split/mirror copy of a database (using the disk subsystem to create a clone) or use the Clone/Deploy feature.

The Clone feature copies the database control files and file system objects to another directory on the local machine or to an attached file system. This takes substantially less time than a full shift operation. The cloned files can then be transferred to another server that has the appropriate software and bandwidth to perform the deployment step.

The Deploy process completes the shift of the original database without requiring access to the original source database. Optionally, HADR can be added to the source database and connected to the new cloned copy so that the database remains synchronized between the two sites.
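Connecting the source and the containerized copy uses standard Db2 HADR configuration. The host names, service ports and instance name below are illustrative values:

```shell
# On the source (primary). The standby needs the mirror-image
# settings (its own local host/service, pointing back at the source).
db2 update db cfg for SAMPLE using \
    HADR_LOCAL_HOST source.example.com \
    HADR_LOCAL_SVC 50100 \
    HADR_REMOTE_HOST target-ocp.example.com \
    HADR_REMOTE_SVC 50100 \
    HADR_REMOTE_INST db2inst1

# Start the containerized copy as the standby first,
# then start the original database as the primary:
db2 start hadr on db SAMPLE as standby   # run on the containerized copy
db2 start hadr on db SAMPLE as primary   # run on the source
```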

Clone an existing service

Click to Containerize is not limited to on-premises-to-cloud deployments. An additional feature allows you to generate a clone of an existing Db2 container service on OpenShift or IBM Cloud Pak for Data.

Performance and design

As mentioned previously, the performance of data movement is limited by your infrastructure, with network bandwidth and storage speed being the largest contributors to run time. Click to Containerize improves performance by compressing the data in flight and parallelizing the data transfers.
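The combination of in-flight compression and parallel transfer can be sketched with ordinary shell tools. This is a minimal stand-in for the tool's internal mechanism, not its implementation: plain gzip provides the compression and a local directory plays the role of the destination:

```shell
#!/bin/sh
# Sketch: split the data, compress and "transfer" the chunks in
# parallel, then reassemble and verify at the destination.
set -e
SRC=source.dat
DEST=dest_dir
mkdir -p "$DEST"
head -c 1048576 /dev/urandom > "$SRC"   # 1 MiB of sample data

# Split into 4 chunks (numeric suffixes keep the ordering) and
# compress/copy each chunk as a background job.
split -n 4 -d "$SRC" chunk.
for c in chunk.*; do
  ( gzip -c "$c" > "$DEST/$c.gz" ) &
done
wait

# At the destination, decompress the chunks in order and reassemble.
for c in "$DEST"/chunk.*.gz; do gzip -dc "$c"; done > reassembled.dat
cmp "$SRC" reassembled.dat && echo "transfer verified"
```

The real tool additionally maintains recovery and integrity guarantees during the move; the sketch only shows why chunked, compressed, parallel movement reduces transfer time on bandwidth-limited links.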

Click to Containerize uses Db2-native APIs and Red Hat OpenShift CLI commands to perform secure transfers with all operations being audited.

In summary, Db2 Click to Containerize can accelerate your move to hybrid cloud on Red Hat OpenShift and IBM Cloud Pak for Data, catering to your availability and speed-of-movement needs and making your journey to cloud significantly easier and faster than traditional migration approaches.

Learn more about how to get started with IBM Db2 Click to Containerize to help with your modernization journey.

If you have questions, you can also contact Phil Downey or George Baklarz.
