IBM Cloud Orchestrator, Version 2.5.0.1

Preparing for the migration

To prepare your environment for migration, you must install a new IBM® Cloud Orchestrator V2.5.0.1 environment that runs in parallel with your existing IBM Cloud Orchestrator V2.4.0.2 environment, by completing the following procedure.

Procedure

  1. Install the IBM Cloud Manager with OpenStack deployment server and apply the latest fix pack.

    For information about installing the IBM Cloud Manager with OpenStack deployment server, see Installing IBM Cloud Manager with OpenStack on Linux. For information about applying the latest fix pack, see Applying fixes and updates.

  2. Prepare the IBM Cloud Orchestrator V2.5.0.1 Server and download the required image files as described in Preparing the IBM Cloud Orchestrator Servers and Downloading the required image files.
  3. Prepare the IBM Cloud Manager with OpenStack controller for the first region and deploy an IBM Cloud Manager with OpenStack cloud according to the topology of the first IBM Cloud Orchestrator V2.4.0.2 region that you want to migrate. For information, see Deploying an IBM Cloud Manager with OpenStack cloud.
    Note: The first region is migrated to the IBM Cloud Manager with OpenStack master controller, which also contains the Keystone component that is shared among the regions.
    Important: When preparing for the migration of a KVM region, ensure that the compute node topology that you create for the IBM Cloud Orchestrator V2.5.0.1 environment is the same as the topology of the IBM Cloud Orchestrator V2.4.0.2 environment. In particular, ensure that both environments have the same number of compute nodes and that the IBM Cloud Orchestrator V2.5.0.1 compute nodes have sufficient capacity to host the virtual machines that are migrated to them.
  4. Configure the IBM Cloud Manager with OpenStack controller for the first region to be managed by IBM Cloud Orchestrator V2.5.0.1 by running the procedure described in [Typical] Configuring the IBM Cloud Manager with OpenStack servers.
  5. Install the IBM Cloud Orchestrator V2.5.0.1 Server by running the following procedures:
    1. Setting and validating the deployment parameters
    2. Adding the OpenStack simple token to the response file
    3. Checking the installation prerequisites
    4. Deploying the IBM Cloud Orchestrator Servers
  6. [For IBM Cloud Orchestrator Enterprise Edition only:] Upgrade SmartCloud Cost Management by following the procedure in Upgrading from SmartCloud Cost Management 2.1.0.4.
  7. Discover the current IBM Cloud Orchestrator V2.4.0.2 and IBM Cloud Orchestrator V2.5.0.1 topologies by running the following procedure:
    1. On the IBM Cloud Orchestrator V2.5.0.1 Server, as user root, run the following commands to edit the ico_upgrade.rsp file:
      cd /opt/ico_install/V2501/installer
      vi ico_upgrade.rsp
      and specify the following information:
      DEPLOYMENT_SERVER
      The IP address or the fully qualified domain name (FQDN) of the IBM Cloud Orchestrator V2.4.0.2 Deployment Server.
      DEPLOYMENT_SERVER_PASSWORD
      The root password of the IBM Cloud Orchestrator V2.4.0.2 Deployment Server.
      ICM_DEPLOYMENT_SERVER
      The IP address or the fully qualified domain name (FQDN) of the IBM Cloud Manager with OpenStack deployment server.
      ICM_DEPLOYMENT_SERVER_PASSWORD
      The root password of the IBM Cloud Manager with OpenStack deployment server.
      SOURCE_ICO_SERVER
      The IP address or the fully qualified domain name (FQDN) of the IBM Cloud Orchestrator V2.4.0.2 Central Server 2.
      DEST_ICO_SERVER
      The fully qualified domain name (FQDN) of the IBM Cloud Orchestrator V2.5.0.1 Server.
      DEST_ICO_SERVER_PASSWORD
      The root password of the IBM Cloud Orchestrator V2.5.0.1 Server.
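      For example, a completed discovery section of the ico_upgrade.rsp file might look like the following sketch. The PARAMETER=value syntax is assumed to match the other IBM Cloud Orchestrator response files, and all host names and passwords are placeholder values for illustration only:
      # Illustrative values only - replace with your own environment details
      DEPLOYMENT_SERVER=ds24.example.com
      DEPLOYMENT_SERVER_PASSWORD=ds24RootPassw0rd
      ICM_DEPLOYMENT_SERVER=icmds.example.com
      ICM_DEPLOYMENT_SERVER_PASSWORD=icmdsRootPassw0rd
      SOURCE_ICO_SERVER=cs2.example.com
      DEST_ICO_SERVER=ico25.example.com
      DEST_ICO_SERVER_PASSWORD=ico25RootPassw0rd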
    2. Perform the upgrade discovery prerequisite check on the systems that you specified in the upgrade response file by running the following command:
      ./upgrade-prereq-checker.py ico_upgrade.rsp
      If the prerequisite check passes successfully, continue with the next step. Otherwise, follow the instructions in the prerequisite checker output, and edit the upgrade response file again to fix any issues.
    3. Discover the topologies by running the following command:
      ./upgrade.py ico_upgrade.rsp --discover

      The discovery reports are created in the /tmp/discovery directory.

      View the report of migrated regions, unmigrated regions, and unattached pre-prepared regions by opening the discoveryMigrationReport.html report file in a browser. Ensure that the details are correct. At this stage, there are no entries in the Migrated Regions section.

  8. Migrate all the images that are currently registered in Glance on the IBM Cloud Orchestrator V2.4.0.2 Region Server to the IBM Cloud Manager with OpenStack controller by running the following procedure:
    1. On the IBM Cloud Orchestrator V2.5.0.1 Server, as user root, run the following commands to edit the ico_upgrade.rsp file:
      cd /opt/ico_install/V2501/installer
      vi ico_upgrade.rsp
      and specify the following information according to the details in the discovery report:
      SOURCE_UNMIGRATED_REGION
      The fully qualified domain name (FQDN) of the IBM Cloud Orchestrator V2.4.0.2 Region Server to migrate from.
      SOURCE_CENTRAL_DB_PASSWORD
      The root password for the IBM Cloud Orchestrator V2.4.0.2 server on which the DB2 database resides. By default, this is Central Server 1; otherwise, it is the server where your external database resides.
      SOURCE_REGION_PASSWORD
      The root password for the IBM Cloud Orchestrator V2.4.0.2 Region Server.
      SOURCE_CENTRAL_SERVER_PASSWORD
      The root password for the IBM Cloud Orchestrator V2.4.0.2 Central Server 2.
      SOURCE_ICO_ADMIN_PASSWORD
      The password of the IBM Cloud Orchestrator V2.4.0.2 admin user.
      DEST_UNATTACHED_REGION
      The fully qualified domain name (FQDN) of the IBM Cloud Manager with OpenStack controller.
      DEST_REGION_PASSWORD
      The root password of the IBM Cloud Manager with OpenStack controller.
      DEST_ICO_ADMIN_PASSWORD
      The password of the IBM Cloud Orchestrator V2.5.0.1 admin user.
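      For example, the region migration entries in the ico_upgrade.rsp file might look like the following sketch, using the same assumed PARAMETER=value syntax; all host names and passwords are placeholder values:
      # Illustrative values only - replace with your own environment details
      SOURCE_UNMIGRATED_REGION=region24.example.com
      SOURCE_CENTRAL_DB_PASSWORD=cs1RootPassw0rd
      SOURCE_REGION_PASSWORD=regionRootPassw0rd
      SOURCE_CENTRAL_SERVER_PASSWORD=cs2RootPassw0rd
      SOURCE_ICO_ADMIN_PASSWORD=adminPassw0rd
      DEST_UNATTACHED_REGION=icmctrl.example.com
      DEST_REGION_PASSWORD=icmctrlRootPassw0rd
      DEST_ICO_ADMIN_PASSWORD=adminPassw0rd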
    2. Perform the upgrade migration prerequisite check on the systems that are specified in the upgrade response file by running the following command:
      ./upgrade-prereq-checker.py ico_upgrade.rsp --check-regions
      If this check passes successfully, continue with the next step. Otherwise, follow the instructions in the prerequisite checker output, and edit the upgrade response file to fix any issues.
    3. Migrate the images by running the following command:
      ./upgrade.py ico_upgrade.rsp --copy-images
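      Optionally, you can verify that the images arrived on the IBM Cloud Manager with OpenStack controller by listing them with the Glance client, for example as follows. This is a suggested check only; the name and location of the OpenStack admin credentials file depend on your deployment:
      source /root/openrc    # load OpenStack admin credentials (path is an assumption)
      glance image-list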
  9. Prepare the upgrade attribute mapping file to specify region environment information that the discovery process cannot retrieve automatically. The upgrade_map.csv file is a two-column CSV file: the first column contains attributes from the source system, and the second column contains attributes from the destination system. In the upgrade_map.csv file, you must specify the following information:
    Physical network mapping
    OpenStack Neutron has a concept of a physical_network that is configured in the Neutron plug-in configuration files. Each plug-in (for example, ML2 or Open vSwitch) configures one or more physical networks to match the underlying network topology. Before the migration, you must configure the Neutron plug-ins on the destination system to match the source system. The names of the physical networks do not need to be the same, but if they are different, you must add the related entry to the upgrade_map.csv file.
    For example, if you configured a Cisco Nexus switch (in the /etc/neutron/plugins/ml2/ml2_conf_cisco.ini file) as physnet0 on the source system, and it is named nexus0 on the destination system, you must add the following line to the upgrade_map.csv file:
    physnet0,nexus0
    In the upgrade_map.csv file, add an entry for each case where the source physical network name differs from the destination physical network name.
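    For reference, physical network names are defined in the Neutron plug-in configuration on each system. The following sketch shows where such a name might appear, assuming an ML2 VLAN configuration with the Open vSwitch agent; the exact files and options depend on the plug-ins that you use:
    # /etc/neutron/plugins/ml2/ml2_conf.ini on the destination system (illustrative)
    [ml2_type_vlan]
    network_vlan_ranges = nexus0:1000:2999

    # Open vSwitch agent configuration (illustrative)
    [ovs]
    bridge_mappings = nexus0:br-eth1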
    [For KVM regions only:] Hypervisor host name mapping
    Neutron uses the binding:host_id attribute to map network ports to hypervisors. This attribute contains the host name of the KVM hypervisor, and it is used for message routing purposes. For each KVM compute node to be migrated, you must add an entry to the upgrade_map.csv file.
    For example, if you are migrating from the source KVM compute node compute00 to the destination KVM compute node compute11, you must add the following line to the upgrade_map.csv file:
    compute00,compute11
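    Combining the two examples above, a complete upgrade_map.csv file for an environment with one renamed physical network and one KVM compute node contains the following lines:
    physnet0,nexus0
    compute00,compute11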
  10. If you have more than one region, you can prepare the second and subsequent regions for migration at any time before you migrate them. Preparing in advance enables you to migrate more regions in the same outage period.

    To prepare the migration for the remaining regions, repeat steps 3, 4, 7, 8, and 9 for each region that you want to migrate.