Migrate a VMware region from the IBM® Cloud Orchestrator V2.4.0.2 environment to an IBM Cloud Manager with OpenStack controller in your IBM Cloud Orchestrator V2.5.0.1 environment.
Before you begin
Alert users that the IBM Cloud Orchestrator V2.4.0.2 region is going to be migrated and that it can no longer be used in the IBM Cloud Orchestrator V2.4.0.2 environment.
Ensure that no new users or projects are added in the IBM Cloud Orchestrator V2.4.0.2 environment until the entire environment is migrated. Additionally, ensure that no new toolkits are imported or modified and that no categories or offerings are created or updated. Any such changes that you make after running this procedure must be manually applied to the IBM Cloud Orchestrator V2.5.0.1 environment.
Procedure
- List the information in the OpenStack databases in your IBM Cloud Orchestrator V2.4.0.2 environment and save the details so that, after the migration procedure is completed, you can verify that it worked correctly:
- If this is the first region to be migrated, log in to the IBM Cloud Orchestrator V2.4.0.2 Central Server 2 as root and save the output of the following commands:
source ~/keystonerc
keystone endpoint-list
keystone user-list
- Log in to the IBM Cloud Orchestrator V2.4.0.2 Region Server as root and save the output of the following commands:
source ~/openrc
glance image-list
heat stack-list
cinder list
neutron net-list
neutron subnet-list
nova list
nova image-list
nova flavor-list
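The listings above can be captured in one pass; a minimal sketch, assuming that ~/openrc is already sourced on the Region Server, and using a hypothetical output directory /tmp/region-precheck and file-naming scheme that are not part of the product:

```shell
#!/bin/sh
# Hypothetical sketch: save each pre-migration listing to its own file
# so that the output can be compared after the migration. The directory
# name and file names are assumptions, not product defaults.
OUTDIR=/tmp/region-precheck
mkdir -p "$OUTDIR"
# . ~/openrc   # load the OpenStack credentials first
for cmd in 'glance image-list' 'heat stack-list' 'cinder list' \
           'neutron net-list' 'neutron subnet-list' \
           'nova list' 'nova image-list' 'nova flavor-list'; do
    # For example, "glance image-list" is saved as glance_image-list.txt.
    $cmd > "$OUTDIR/$(echo "$cmd" | tr ' ' '_').txt" 2>&1 || true
done
ls "$OUTDIR"
```

Keeping one file per command makes the post-migration comparison in a later step a simple file-by-file diff.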
- If any recent changes occurred in your environment topologies, discover the current topologies as described in step 7 of Preparing for the migration.
- Migrate the OpenStack data by logging in to the IBM Cloud Orchestrator V2.5.0.1 Server as root and running the following commands:
cd /opt/ico_install/V2501/installer
./upgrade.py ico_upgrade.rsp --export-region
./upgrade.py ico_upgrade.rsp --import-region
The script exports the OpenStack data from the IBM Cloud Orchestrator V2.4.0.2 Region Server and imports it into the IBM Cloud Manager with OpenStack controller. As part of the export, the Region Server services are stopped and disabled, so they no longer manage the hypervisor.
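The two invocations can be wrapped so that the import runs only when the export succeeds; a minimal sketch, assuming the installer path from this step and a hypothetical log file name:

```shell
#!/bin/bash
# Hypothetical wrapper: run the import only if the export succeeds,
# keeping a combined log. The installer path comes from the procedure;
# the log file name is an assumption.
set -o pipefail   # make each pipeline report upgrade.py's status, not tee's
INSTALLER=/opt/ico_install/V2501/installer
LOG=/tmp/region-migration.log
if [ ! -x "$INSTALLER/upgrade.py" ]; then
    echo "upgrade.py not found under $INSTALLER" | tee "$LOG"
    exit 0   # nothing to run on a machine without the installer
fi
cd "$INSTALLER" || exit 1
if ./upgrade.py ico_upgrade.rsp --export-region 2>&1 | tee -a "$LOG"; then
    ./upgrade.py ico_upgrade.rsp --import-region 2>&1 | tee -a "$LOG"
else
    echo "Export failed; skipping the import. See $LOG" >&2
    exit 1
fi
```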
- Check the details of what was imported in the IBM Cloud Manager with OpenStack controller and compare them with the details that you saved in step 1. Log in to the IBM Cloud Manager with OpenStack controller as root and run the following commands:
source ~/v3rc
openstack endpoint list
openstack user list
openstack project list
openstack domain list
openstack image list
heat stack-list
cinder list
openstack network list
neutron subnet-list
nova list
openstack server list
openstack flavor list
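If each listing from step 1 was saved to its own file, the comparison can be partially automated with diff; a minimal sketch in which sample files stand in for the real pre- and post-migration output (the directory and file names are illustrative):

```shell
#!/bin/sh
# Hypothetical sketch: diff each listing captured before the migration
# against the same listing taken on the controller afterwards.
# The sample files below stand in for real command output.
PRE=/tmp/pre-migration; POST=/tmp/post-migration
mkdir -p "$PRE" "$POST"
printf 'vm-a\nvm-b\n' > "$PRE/servers.txt"   # pre-migration listing (sample)
printf 'vm-a\nvm-b\n' > "$POST/servers.txt"  # post-migration listing (sample)
status=0
for f in "$PRE"/*.txt; do
    base=$(basename "$f")
    if ! diff -u "$f" "$POST/$base"; then
        echo "MISMATCH: $base"
        status=1
    fi
done
[ "$status" -eq 0 ] && echo "All listings match"
```

In practice the V2.4.0.2 clients and the openstack client format their tables differently, so compare the names and IDs that are listed rather than expecting byte-identical output.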
- Check that the OpenStack Dashboard in the IBM Cloud Orchestrator V2.5.0.1 environment is working as expected. Log in to the OpenStack Dashboard as admin at the following URL:
https://icm_controller_fqdn
where icm_controller_fqdn is the fully qualified domain name of the IBM Cloud Manager with OpenStack controller. Check that the details of the users, projects, networks, images, instances, and volumes are as expected. Then log in to the OpenStack Dashboard as a non-admin user and check that the details of the users, projects, networks, images, instances, and volumes are as expected.
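Before logging in interactively, you can confirm that the Dashboard answers over HTTPS at all; a minimal sketch, where icm_controller_fqdn is the placeholder from the procedure and the output file name is an assumption (the log-in check itself still needs a browser):

```shell
#!/bin/sh
# Hypothetical sketch: probe the Dashboard URL and report the HTTP
# status code. Replace icm_controller_fqdn with the real host name.
FQDN="${1:-icm_controller_fqdn}"
# -k skips certificate validation (common with self-signed certificates);
# --max-time keeps the check from hanging on an unreachable host.
code=$(curl -k -s --max-time 10 -o /dev/null -w '%{http_code}' \
       "https://$FQDN/" || true)
echo "Dashboard at https://$FQDN/ returned HTTP ${code:-000}" \
    | tee /tmp/dashboard-check.txt
```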
- Troubleshoot any issues that occurred during the region migration. If you see any critical errors in the output during the migration that indicate that the migration was not successful, complete the following steps:
- To reduce system downtime during the debug process, roll the migrated region back to its pre-migration state by running the following command:
./upgrade.py ico_upgrade.rsp --rollback-region
This command re-enables the Region Server services that were stopped and disabled, and removes any migration flags so that the discovery process considers the region to be unmigrated.
- Debug any errors that are displayed in the console output, and check the export and import logs in the /opt/ico_install/V2501/installer/upgrade/logs directory for any further issues that might have occurred during the data migration.
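A quick way to triage those logs is to scan them for common failure markers; a minimal sketch, where the search patterns and the fallback sample directory are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: scan the export/import logs for likely failure
# lines. The log directory comes from the step above; the patterns and
# the fallback directory are assumptions.
LOGDIR=/opt/ico_install/V2501/installer/upgrade/logs
[ -d "$LOGDIR" ] || LOGDIR=/tmp/sample-upgrade-logs  # lets the sketch run anywhere
mkdir -p "$LOGDIR"
if grep -rniE 'error|traceback|failed' "$LOGDIR"; then
    echo "Review the matches above before retrying the migration."
else
    echo "No obvious errors found in $LOGDIR"
fi
```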
Note: If the VMware discovery process discovers duplicate neutron networks, the cause is a difference between the physical network or VLAN attributes of the manually defined network and those of the discovered network. Either update the manually defined network to match the discovered network, or add the relevant port group to the filter list, as described in VMware driver discovery service.