SevOne NMS Upgrade Process Guide
About
This document describes the SevOne NMS upgrade process. If you are performing an upgrade, you may use SevOne's Self-Service Upgrade tool. For details, please refer to section using Self-Service Upgrade.
As of SevOne NMS 7.0.0, SevOne is distributed using container technology, allowing a more confident deployment of the software. To run administrative commands on a SevOne appliance, the administrator must now execute commands in the context of the intended container.
By default, the container deployment of SevOne is set to be read-only.
In SevOne NMS 6.8.0, the operating system was changed from CentOS 8 Stream to RHEL 8.9.
In SevOne NMS 7.0, a new NMS containerization architecture was introduced.
Upgrade
SevOne NMS 7.0.1 can only be reached from SevOne NMS 6.7.x or SevOne NMS 6.8.x.
To upgrade from SevOne NMS 6.7.x or SevOne NMS 6.8.x to SevOne NMS 7.0.4 (for example), you must first upgrade to SevOne NMS 7.0.1 before you can upgrade to SevOne NMS 7.0.4. SevOne NMS 7.0.1 is a mandatory stop.
To upgrade from SevOne NMS 6.7.x or SevOne NMS 6.8.x to SevOne NMS 7.1.1 (for example), you must first upgrade to SevOne NMS 7.0.1 before you can upgrade to SevOne NMS 7.1.1. SevOne NMS 7.0.1 is a mandatory stop.
This means that if you are upgrading from SevOne NMS 6.7.x or SevOne NMS 6.8.x to SevOne NMS 7.x.x, the first mandatory stop to upgrade to is SevOne NMS 7.0.1 before you can upgrade to a later release.
You cannot upgrade to SevOne NMS 7.0.1 from SevOne NMS 7.0.0. Both SevOne NMS 7.0.0 and SevOne NMS 7.0.1 versions can upgrade to SevOne NMS 7.0.2 or above.
Downgrade:
Downgrade is to the nearest major / minor version. For example,
- SevOne NMS 7.2.0 can downgrade to SevOne NMS 7.1.0. If the last version component is zero (for example, 7.2.0) and you want to perform a downgrade, it will downgrade to the next lower minor release, i.e., 7.1.0.
- SevOne NMS 7.0.4 can downgrade to SevOne NMS 7.0.3.
- SevOne NMS 7.0.2 can downgrade to SevOne NMS 7.0.1.
- SevOne NMS 7.0.1 can downgrade to SevOne NMS 6.<x>.<y>, where 6.<x>.<y> is the actual version used to upgrade to SevOne NMS 7.0.1. For example, if you upgraded from SevOne NMS 6.8.3 to SevOne NMS 7.0.1 and now want to downgrade, it will downgrade to SevOne NMS 6.8.3 (the version from which the upgrade to SevOne NMS 7.0.1 was performed).
KNOWN ISSUE: If you are on SevOne NMS 7.0.1 and want to upgrade to SevOne NMS 7.0.2 or above, you may encounter an issue with the upgrade.
Workaround: If you are upgrading from SevOne NMS 7.0.1, SevOne recommends upgrading to SevOne NMS 7.1.1 instead of a later SevOne NMS 7.0.x release.
DO NOT upgrade to SevOne NMS 7.0.x (regardless of which prior SevOne NMS version you are on) until you have reviewed the list of Deprecated / Removed Features & Functions. For guidance, please reach out to your IBM Technical Account Team, IBM SevOne Support, or IBM Expert Labs.
SevOne NMS 6.8 has migrated from CentOS 8 Stream to RHEL 8.9
SevOne NMS 7.0.x requires Red Hat Enterprise Linux (RHEL) 8.9.
When upgrading from SevOne NMS 6.7.x to SevOne NMS 7.0.x, the upgrade process automatically upgrades the environment from CentOS 8 Stream to RHEL 8.9 using the Command Line Interface. If you encounter any problems during the upgrade, please contact SevOne Support.
Upgrade to SevOne NMS 7.0.x - when performing an upgrade to SevOne NMS 7.0.x, you must be on SevOne NMS 6.7.x or higher. If you have SevOne NMS prior to version 6.7.0, you must first upgrade to SevOne NMS 6.7.x before continuing with an upgrade to SevOne NMS 7.0.x. For upgrade details, please refer to SevOne NMS Upgrade Process Guide for version <= 6.7.0.
Downgrade from SevOne NMS 7.0.x - can only downgrade to the SevOne NMS version you upgraded from. For example,
- if you upgraded from SevOne NMS 6.8.0 to SevOne NMS 7.0.x and now would like to downgrade, you will downgrade back to SevOne NMS 6.8.0.
- if you upgraded from SevOne NMS 6.8.2 to SevOne NMS 7.0.x and now would like to downgrade, you will downgrade back to SevOne NMS 6.8.2.
- assume you have upgraded from SevOne NMS 6.8.2 to SevOne NMS 7.0.1 and then upgraded again from SevOne NMS 7.0.1 to SevOne NMS 7.0.2. If you need to downgrade, you can only downgrade from SevOne NMS 7.0.2 to SevOne NMS 7.0.1. Downgrading to SevOne NMS 6.8.2 is not possible because the system takes a backup during each upgrade. When the user upgrades from SevOne NMS 6.8.2 to SevOne NMS 7.0.1, a backup of SevOne NMS 6.8.2 is created in the /opt directory (for a non-Openstack cluster) or in the /data/upgrade directory (for an Openstack cluster). If the user then performs another upgrade from SevOne NMS 7.0.1 to SevOne NMS 7.0.2, a backup is created for SevOne NMS 7.0.1 and the backup originally created for SevOne NMS 6.8.2 is lost. In this scenario, you cannot downgrade from SevOne NMS 7.0.2 to SevOne NMS 7.0.1 and then to SevOne NMS 6.8.2; only a downgrade to SevOne NMS 7.0.1 is possible.
This document provides details on how to upgrade/downgrade using:
- Command Line Interface - please refer to section using Command Line Interface to Upgrade.
- Self-Service Upgrades - please refer to section using Self-Service Upgrade.
Warning: Self-Service Upgrades
- For Self-Service Upgrades, SevOne requests that the customer raise a proactive ticket to make SevOne Support aware that the customer will be performing the upgrade. By doing this, SevOne Support can assist the customer with the upgrade preparation and readiness.
- If there are Solutions such as SD-WAN, WiFi, and SDN present on your cluster, then please check the product Compatibility Matrix on SevOne Support Customer Portal before proceeding with the upgrade.
- For add-ons/customizations, please engage SevOne's Platform Services team before the upgrade.
- If you are running SevOne Data Insight and you encounter the SOA check rpm error during post-checks, you may ignore the reported error. However, if you see the error during pre-checks, please contact SevOne Support.
For all platforms, if / is greater than 80GB, then 45GB of free disk space is required.
In addition to this (a quick check of the available space is shown after the list below),
- on a cluster without Openstack,
- free disk space on / must be greater than 20GB.
- on a cluster with Openstack,
- free disk space on /data must be greater than 20GB.
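As a quick sanity check before the upgrade, the available space on these mount points can be confirmed with df; a minimal example (output will vary by appliance):
$ df -h /       # free space on / (required on all platforms; > 20GB on a cluster without Openstack)
$ df -h /data   # free space on /data (> 20GB on a cluster with Openstack)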
In this guide, if there is any reference to master (including when a CLI command contains master and/or its output contains master), it means leader.
And, if there is any reference to slave, it means follower.
Execute the following steps when there is a potential IP address overlap between the customer's network and SevOne's Docker IP address range 172.17.0.0/16.
Update bridge IP range
$ vi /etc/docker/daemon.json
{
"bip": "172.17.0.1/24"
}
Restart docker
$ systemctl restart docker
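To confirm that the new bridge range has taken effect after the restart, you can inspect the default Docker bridge; a minimal check (assuming the default docker0 bridge interface):
$ ip addr show docker0 | grep 'inet '        # should now report an address within 172.17.0.1/24
$ docker network inspect bridge | grep -i subnet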
Ansible can be utilized to upgrade SevOne NMS. Please make a request on SevOne Support Portal for the latest forward / reverse migration tarball files along with the signature tools required for the upgrade. Before starting the upgrade, ensure that:
- all the required ports are open. Please refer to SevOne NMS Port Number Requirements Guide for details.
- Port 60006 is required during the upgrade pre-checks.
- you have the required CPUs, total vCPU cores, RAM (GB), Hard Drives, etc. based on SevOne NMS Installation Guide - Virtual Appliance > section Hardware Requirements.
Note: Due to technology advancements, resource requirements may change.
Performing a SevOne NMS version upgrade on a virtual machine deployment means that the virtual machine was deployed on a prior SevOne NMS version. As such, the virtual machine's current hardware specifications correspond to the SevOne NMS version it was previously deployed on or upgraded to. If the target SevOne NMS version of this upgrade has different hardware specifications than the current configuration of the virtual machine, the hardware resources must be aligned, prior to the upgrade, with the documented requirements of the target SevOne NMS version.
For example, SevOne NMS 5.7.2.7 had a 50GB / (root) partition requirement. Let's assume your virtual machine is currently on SevOne NMS 5.7.2.7 and you want to upgrade to a target SevOne NMS version where the / (root) partition requirement has changed to 150GB. Because of this change, you will have to increase the disk size of the / (root) partition on your virtual machine to 150GB before performing the SevOne NMS upgrade.
If CPUs, total vCPU cores, and RAM (GB) do not match the target version requirements, please discuss with your infrastructure team to align the resources required for your virtual machine.
If the Hard Disk space requires an increase to align with SevOne NMS' target version requirements, the following must be considered.
- contact your infrastructure team to increase the disk space on your virtual machine at the hypervisor end.
- once the disk space has been resized successfully at the hypervisor level and before starting the upgrade, complete the procedure in section Expand Logical Volume below.
If your virtual machine has any custom requirements/specifications higher than the documented specifications in SevOne NMS Installation Guide - Virtual Appliance > section Hardware Requirements, please contact your Technical Account Manager to discuss the details of your custom requirements.
using Command Line Interface to Upgrade
Upgrade Steps
As of SevOne NMS 6.5.0, if your flow template contains field 95, then you will see the following name changes in the flow template.
Field # | Pre-existing Field Name | New Field Name |
---|---|---|
95 | Application Tag | Application ID |
45010 | Engine ID-1 | Application Engine ID |
45011 | Application ID | Application Selector ID |
- Using ssh, log in to SevOne NMS appliance (Cluster Leader of SevOne NMS cluster) as
root.
$ ssh root@<NMS appliance>
- Check the version your NMS appliance is running on. For example, the output below indicates that
your NMS appliance is on SevOne NMS 6.8.0. To proceed with the upgrade to SevOne NMS
7.0.x, you must be on SevOne NMS 6.7.x or above.
Example
$ SevOne-show-version
SevOne version: 6.8.0
kernel version: 4.18.0-513.11.1.el8_9.x86_64 #1 SMP Thu Dec 7 03:06:13 EST 2023
nginx version: 0.0.0
MySQL version: 10.6.12-MariaDB
PHP version: 8.1.27
SSH/SSL version: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
REST API version: 2.1.47, Build time 2024-02-15T08:46:36+0000, Hash 3dd4faa
Hardware Serial Number: VMware-42
Intel(R) Xeon(R) CPU
2 cores @ 2199.998MHz
8GB RAM
31GB SWAP
569 GB / Partition
Important: If Openstack cluster,
$ mkdir /data/upgrade
$ cd /data/upgrade
- Verify the OS version. Confirm that version is Red Hat Enterprise Linux
(RHEL).
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.9 (Ootpa)
- Using curl -kO, copy the signature tools checksum file (signature-tools-<latest version>-build.<###>.tgz.sha256.txt) received from SevOne to /opt directory. For example, signature-tools-2.0.3-build.1.tgz.sha256.txt.
- Using curl -kO, copy the signature tools file (signature-tools-<latest version>-build.<###>.tgz) received from SevOne to /opt directory. For example, signature-tools-2.0.3-build.1.tgz.
- Verify the signature tools
checksum file from /opt
directory.
$ cd /opt
$ sha256sum --check signature-tools-<latest version>-build.<###>.tgz.sha256.txt
Example
$ sha256sum --check signature-tools-v2.0.3-build.1.tgz.sha256.txt
- Extract the signature tools tar
file.
$ tar -xzvf signature-tools-<latest version>-build.<###>.tgz -C /
Example
$ tar -xzvf signature-tools-v2.0.3-build.1.tgz -C /
- Using curl -O, copy the forward tarball file (<forward SevOne NMS tarball>.tar.gz) received from SevOne to the /opt directory or, if Openstack, to /data/upgrade. For example, the tarball file for SevOne NMS 7.0.1.
- Using curl -O, copy the checksum file (<SevOne NMS>.sha256.txt) received from SevOne to the /opt directory or, if Openstack, to the /data/upgrade directory.
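For example, assuming the files are staged on an internal HTTPS server reachable from the appliance (the server name and file names below are placeholders), the copy may look like this:
# non-Openstack cluster - stage the files in /opt
$ cd /opt
$ curl -O https://<your download server>/<forward SevOne NMS tarball>.tar.gz
$ curl -O https://<your download server>/<SevOne NMS>.sha256.txt
# Openstack cluster - stage the files in /data/upgrade instead
$ cd /data/upgrade
$ curl -O https://<your download server>/<forward SevOne NMS tarball>.tar.gz
$ curl -O https://<your download server>/<SevOne NMS>.sha256.txt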
- Change directory.
not Openstack
$ cd /opt
for Openstack
$ cd /data/upgrade
- (optional) Validate the signature for the forward tarball.
Ensure valid & trusted certificate is used,
$ SevOne-validate-image -i v7.0.1-build<enter build number>.tar.gz -s v7.0.1-build<enter build number>.tar.gz.sha256.txt
INFO: Extracting code-signing certificates from image file...
Image signed by SevOne Release on Sat, 25 May 2024 00:05:30 +0000. The certificate is trusted.
Certificate subject=
    commonName = International Business Machines Corporation
    organizationalUnitName = IBM CCSS
    organizationName = International Business Machines Corporation
    localityName = Armonk
    stateOrProvinceName = New York
    countryName = US
Certificate issuer=
    commonName = DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1
    organizationName = DigiCert, Inc.
    countryName = US
INFO: Checking the signature of the image
The image can be installed.
Please contact SevOne Support Team if the certificate is not trusted or the signature does not match.
- Before starting the upgrade, it is recommended that you perform the upgrade
pre-checks to identify and address the known potential upgrade blocking issues.
$ SevOne-act check checkout --full-cluster
$ SevOne-act check listening-ports --full-cluster
$ SevOne-act stk checkout --full-cluster
$ SevOne-act stk checkout-full --full-cluster
$ SevOne-act stk get-peer-time --full-cluster
$ SevOne-act stk get-free-disk-space --full-cluster
$ SevOne-act stk partition-read-state --full-cluster
- Once the pre-check completes successfully and you have identified and addressed the known
potential upgrade blocking issues, validate the signature and install the forward tarball.
Important: If you want to upgrade using the self-service upgrade, please refer to section using Self-Service Upgrade. Else, continue.
$ SevOne-validate-image -i v7.0.1-build<enter build number>.tar.gz -s v7.0.1-build<enter build number>.tar.gz.sha256.txt \
  --installer nms
Important: It can take ~60 - 75 minutes to perform the upgrade - please wait until it completes and you see something similar to the following.
Example
... ... ...
cleanup : Remove nmsUpgradeRepo --------------------------------------------------------------------------------- 0.89s
cleanup : Check symlink for nmsDowngradeRepo -------------------------------------------------------------------- 0.63s
cleanup : Remove dereferenced symlink --------------------------------------------------------------------------- 0.03s
cleanup : Remove nmsDowngradeRepo ------------------------------------------------------------------------------- 0.63s
cleanup : Copy ansible.cnf to other appliances ------------------------------------------------------------------ 0.02s
cleanup : Upgrade: Remove the flag file for AWS metadata namespaces check if upgrading to 6.5.0 or higher ------- 0.03s
cleanup : Upgrade: Remove the flag file for AWS metadata namespaces check if upgrading to 6.6.0 or higher ------- 0.02s
cleanup : Downgrade: Remove aws-nms-collector containers (NMS-80065) -------------------------------------------- 0.01s
cleanup : Downgrade: Remove AWS-related metadata namespaces and attributes (NMS-80065) -------------------------- 0.01s
cleanup : Downgrade: Remove AWS-related metadata mappings in 6.6.0 ---------------------------------------------- 0.01s
cleanup : Downgrade: Remove AWS-related metadata namespaces and attributes in 6.6.0 ----------------------------- 0.01s
cleanup : Check if package compat-openssl10 is present and remove it -------------------------------------------- 2.23s
cleanup : Clean out upgrade/downgrade directories from /opt and the package lists ------------------------------- 5.05s
cleanup : Clean up mariadb temporary files ---------------------------------------------------------------------- 0.03s
cleanup : Clean up mysql temporary files ------------------------------------------------------------------------ 0.03s
cleanup : Clean up RHEL upgrade temporary files ----------------------------------------------------------------- 0.64s
Turning on SevOne-masterslaved after the upgrade has finished --------------------------------------------------- 5.35s
Let cluster_state table know upgrade has finished --------------------------------------------------------------- 0.02s
Remove hosts.ini file ------------------------------------------------------------------------------------------- 0.61s
Add upgrade entry in nms_upgrade_history table in database ------------------------------------------------------ 0.03s
===============================================================================
root@sevone:/opt/ansible [7.0.1] [04:01:45] $
Note: Option --installer-opts allows flags to be passed to the installer. The following options are case-sensitive parameters. If more than one flag/parameter is passed, you must pass them in single / double-quotes.
IMPORTANT: For pre-upgrade flags -e and -f,
Flag -e can be added to option --installer-opts to skip the pre-upgrade errors.
Flag -f can be added to option --installer-opts to skip the pre-upgrade checks and to force the install.
However, certain pre-checks are not skipped even if the -e or -f flag is passed. For example, when performing an upgrade, your current SevOne NMS version must be earlier than the SevOne NMS version you are upgrading to. Otherwise, you will get the message 'Starting version of NMS should be less than the forward version of NMS.'
- -a: Avoid playbook tags to run.
- -c: Prevents hosts.ini from being automatically regenerated. If this flag is not passed as an option, hosts.ini will be automatically regenerated.
- -e: Skip pre-upgrade errors if found, applicable only when run WITHOUT -f option.
- -f: Skip pre-upgrade checks and force install.
- -n: Don't start in a screen session. Used for automated builds.
- -s: Run the upgrade without the UI logger.
- -x: Run pre-upgrade checks with --hub-spoke-network option.
- -h: Show this help.
For example,
To run the upgrade by skipping pre-upgrade checks all together,
$ SevOne-validate-image -i v7.0.1-build<enter build number>.tar.gz -s v7.0.1-build<enter build number>.tar.gz.sha256.txt \
  --installer nms --installer-opts -f
To run the upgrade with the hub-spoke flag,
$ SevOne-validate-image -i v7.0.1-build<enter build number>.tar.gz -s v7.0.1-build<enter build number>.tar.gz.sha256.txt \
  --installer nms --installer-opts -x
- The installer starts a screen session named ansible-{version}.
Important: The screen session on the terminal must not be detached as the ongoing process is in the memory and packages may no longer be available on the appliance during the upgrade.
After the ansible-playbook execution completes in the screen, you must exit the screen session.
Using a separate ssh connection to the appliances while upgrading may show some PHP / other warnings on the terminal. SevOne recommends that you wait until the upgrade process completes successfully.
- Packages on all peers / hosts are updated at the same time. [tag: prepare_rpm, install_rpm, docker_setup]
- Database migrations are run on the clustermaster (mysqlconfig and mysqldata) and active peers (mysqldata only). [tag: database]
- System patches are run on all peers at the same time. [tag: systempatches]
- Cleanup actions on all hosts are performed last. [tag: cleanup]
- SevOne NMS 7.0.1 is installed on your machine.
The upgrade opens a TUI (Text-based User Interface, also called Terminal User Interface) window which splits the progress into 3 columns.
Important: If you are unable to interact with the screen session, you can still view the progress in Column 1. Allow the upgrade to complete.
- Column 1: Host panel - shows the progress per host.
- Column 2: Tasks panel - shows the tasks being executed.
- Column 3: Logs panel - shows the logs associated with each task.
Important: FYI
- Press F1 to show / hide HELP.
- Ctrl+C - kills ansible execution.
To navigate between panel / column, press:
- 1 - to select Column 1 (Host panel)
- 2 - to select Column 2 (Tasks panel)
- 3 - to select Column 3 (Logs panel)
- up arrow - to move cursor / log up
- down arrow - to move cursor / log down
- left arrow - to move cursor / log left
- right arrow - to move cursor / log right
To detach from the screen session, press Ctrl+A followed by the letter d. To reattach to the screen session, run screen -r.
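For example, if your ssh session drops during the upgrade, you can find and reattach to the installer's screen session (the session name below is illustrative):
$ screen -ls                    # list running screen sessions, for example ansible-7.0.1
$ screen -r ansible-<version>   # reattach to the named session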
Upgrade
Press F1 to show the HELP menu in the logger.
- After successfully upgrading to SevOne NMS 7.0.x, check the version to ensure that you are on
SevOne NMS 7.0.x. i.e., SevOne NMS 7.0.1 as shown in the example below.
Example
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-show-version
SevOne version: 7.0.1
kernel version: 4.18.0-553.el8_10.x86_64 #1 SMP Thu Dec 7 03:06:13 EST 2023
nginx version: 1.14.1
MySQL version: 10.6.18-MariaDB
PHP version: 8.3.7
SSH/SSL version: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
REST API version: 2.1.47, Build time 2024-05-16T06:53:00+0000, Hash 07f225e
Can't read memory from /dev/mem
Hardware Serial Number:
Intel(R) Xeon(R) CPU
2 cores @ 2199.998MHz
8GB RAM
31GB SWAP
569 GB / Partition
569 GB /data partition
For post-Upgrade steps, please refer to section post-Upgrade Stage below.
using Command Line Interface to Downgrade
Downgrade Steps
The following signature tools files are required for the downgrade:
- signature-tools-<latest version>-build.<###>.tgz
  For example, signature-tools-2.0.3-build.1.tgz
- signature-tools-<latest version>-build.<###>.tgz.sha256.txt
  For example, signature-tools-2.0.3-build.1.tgz.sha256.txt
- Using ssh, log in to SevOne NMS appliance (Cluster Leader of SevOne NMS cluster) as
root.
$ ssh root@<NMS appliance>
- Confirm that NMS appliance is running version SevOne NMS 7.0.x. i.e., SevOne NMS
7.0.1 as shown in the example below.
Example
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-show-version
SevOne version: 7.0.1
kernel version: 4.18.0-553.el8_10.x86_64 #1 SMP Thu Dec 7 03:06:13 EST 2023
nginx version: 1.14.1
MySQL version: 10.6.18-MariaDB
PHP version: 8.3.7
SSH/SSL version: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
REST API version: 2.1.47, Build time 2024-05-16T06:53:00+0000, Hash 07f225e
Can't read memory from /dev/mem
Hardware Serial Number:
Intel(R) Xeon(R) CPU
2 cores @ 2199.998MHz
8GB RAM
31GB SWAP
569 GB / Partition
569 GB /data partition
- Verify the OS version. Confirm that version is Red Hat Enterprise
Linux.
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.9 (Ootpa)
- Go to folder /data.
$ cd /data
- Make directory downgrade.
$ mkdir downgrade
- Go to folder /data/downgrade.
$ cd /data/downgrade
- Using curl -O, copy the signature tools checksum file (signature-tools-<latest version>-build.<###>.tgz.sha256.txt) received from SevOne to /data/downgrade directory. For example, signature-tools-2.0.3-build.1.tgz.sha256.txt.
- Using curl -O, copy the signature tools file (signature-tools-<latest version>-build.<###>.tgz) received from SevOne to /data/downgrade directory. For example, signature-tools-2.0.3-build.1.tgz.
- Verify the signature tools
checksum
file.
$ sha256sum --check signature-tools-<latest version>-build.<###>.tgz.sha256.txt
Example
$ sha256sum --check signature-tools-v2.0.3-build.1.tgz.sha256.txt
- Extract the signature tools tar file.
$ tar -xzvf signature-tools-<latest version>-build.<###>.tgz -C /
Example
$ tar -xzvf signature-tools-v2.0.3-build.1.tgz -C /
- Using curl -O, copy the reverse tarball file (<reverse SevOne NMS tarball>.tar.gz) received from SevOne to /data/downgrade directory. For example, tarball file for SevOne NMS 6.<x>.<y>.
- Using curl -O, copy the checksum file (<reverse SevOne NMS tarball>.sha256.txt) received from SevOne to /data/downgrade directory.
- Change directory.
$ cd /data/downgrade
- (optional) Validate the signature for the reverse
tarball.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-validate-image -i v7.0.1-to-v6.x.y-build<enter build number>.tar.gz -s v7.0.1-to-v6.x.y-build<enter build number>.tar.gz.sha256.txt
INFO: Extracting code-signing certificates from image file...
Image signed by SevOne Release on Sat, 25 May 2024 00:04:54 +0000. The certificate is trusted.
Certificate subject=
    commonName = International Business Machines Corporation
    organizationalUnitName = IBM CCSS
    organizationName = International Business Machines Corporation
    localityName = Armonk
    stateOrProvinceName = New York
    countryName = US
Certificate issuer=
    commonName = DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1
    organizationName = DigiCert, Inc.
    countryName = US
INFO: Checking the signature of the image
The image can be installed.
Important: When downgrading from SevOne NMS 7.0.x, you can only downgrade to the SevOne NMS 6.<x>.<y> version you upgraded from.
- Before starting the downgrade, clean up the /data/downgrade directory, especially the installRPMs.tar file and the ansible folder.
$ rm /data/downgrade/installRPMs.tar
$ rm -rf /data/downgrade/ansible
- Validate the signature and install the reverse tarball.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-validate-image -i v7.0.1-to-v6.x.y-build<enter build number>.tar.gz -s v7.0.1-to-v6.x.y-build<enter build number>.tar.gz.sha256.txt \
  --installer reverse
Important: It can take ~30 - 35 minutes to perform the downgrade - please wait until it completes and you see something similar to the following.
Example
... ... ...
database : Restore previous NMS net.settings ----------------------------------------------------------- 0.38s
database : Finding the upgrade migration schemas ------------------------------------------------------- 0.06s
database : set_fact ------------------------------------------------------------------------------------ 0.05s
database : Finding the downgrade migration schemas ----------------------------------------------------- 0.01s
database : set_fact ------------------------------------------------------------------------------------ 0.06s
Applying data database patches ------------------------------------------------------------------------- 0.12s
Reverting data database patches ------------------------------------------------------------------------ 0.19s
database : Remove time_enabled column for 6.8.0 only --------------------------------------------------- 0.03s
cleanup : Check symlink for nmsUpgradeRepo ------------------------------------------------------------- 0.65s
cleanup : Remove dereferenced symlink ------------------------------------------------------------------ 0.06s
cleanup : Remove nmsUpgradeRepo ------------------------------------------------------------------------ 0.68s
cleanup : Check symlink for nmsDowngradeRepo ----------------------------------------------------------- 0.65s
cleanup : Remove dereferenced symlink ------------------------------------------------------------------ 0.06s
cleanup : Remove nmsDowngradeRepo ---------------------------------------------------------------------- 0.69s
cleanup : Clean repo files in /etc/yum.repos.d --------------------------------------------------------- 1.42s
cleanup : Clean out upgrade/downgrade directories from /opt and the package lists ---------------------- 6.07s
Let cluster_state table know downgrade has finished ---------------------------------------------------- 0.08s
Clean out tmp files ------------------------------------------------------------------------------------ 7.42s
Remove hosts.ini file ---------------------------------------------------------------------------------- 0.62s
===============================================================================
root@sevone ansible]#
Note: Option --installer-opts allows flags to be passed to the installer. The following options are case-sensitive parameters. If more than one flag/parameter is passed, you must pass them in single / double-quotes.
- -a: Avoid playbook tags to run.
- -e: Skip pre-upgrade errors if found, applicable only when run WITHOUT -f option.
- -f: Skip pre-upgrade checks and force install.
- -n: Don't start in a screen session. Used for automated builds.
- -s: Run the upgrade without the UI logger.
- -x: Run pre-upgrade checks with --hub-spoke-network option.
- -h: Show this help.
For example,
To run the downgrade by skipping pre-upgrade checks all together,
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-validate-image -i v7.0.1-to-v6.x.y-build<enter build number>.tar.gz -s v7.0.1-to-v6.x.y-build<enter build number>.tar.gz.sha256.txt \
  --installer reverse --installer-opts -f
To run the downgrade with the hub-spoke flag,
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-validate-image -i v7.0.1-to-v6.x.y-build<enter build number>.tar.gz -s v7.0.1-to-v6.x.y-build<enter build number>.tar.gz.sha256.txt \
  --installer reverse --installer-opts -x
To run the downgrade by skipping pre-upgrade check errors,
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-validate-image -i v7.0.1-to-v6.x.y-build<enter build number>.tar.gz -s v7.0.1-to-v6.x.y-build<enter build number>.tar.gz.sha256.txt \
  --installer reverse --installer-opts -e
- The installer starts a screen session named ansible-{version}.
Important: The screen session on the terminal must not be detached as the ongoing process is in the memory and packages may no longer be available on the appliance during the downgrade.
After the ansible-playbook execution completes in the screen, you must exit the screen session.
Using a separate ssh connection to the appliances while downgrading may show some PHP / other warnings on the terminal. SevOne recommends that you wait until the downgrade process completes successfully.
- Packages on all peers / hosts are updated at the same time. [tag: prepare_rpm, install_rpm, docker_setup]
- Reverse database migrations are run on the clustermaster (mysqlconfig and mysqldata) and active peers (mysqldata only). [tag: database]
- System patches are run on all peers at the same time. [tag: systempatches]
- Cleanup actions on all hosts are performed last. [tag: cleanup]
- If you were on SevOne NMS 7.0.1, SevOne NMS 6.8.0 is now installed on your machine.
The downgrade opens a TUI (Text-based User Interface, also called Terminal User Interface) window which splits the progress into 3 columns.
Important: If you are unable to interact with the screen session, you can still view the progress in Column 1. Allow the downgrade to complete.
- Column 1: Host panel - shows the progress per host.
- Column 2: Tasks panel - shows the tasks being executed.
- Column 3: Logs panel - shows the logs associated with each task.
Important: FYI
- Press F1 to show / hide HELP.
- Ctrl+C - kills ansible execution.
To navigate between panel / column, press:
- 1 - to select Column 1 (Host panel)
- 2 - to select Column 2 (Tasks panel)
- 3 - to select Column 3 (Logs panel)
- up arrow - to move cursor / log up
- down arrow - to move cursor / log down
- left arrow - to move cursor / log left
- right arrow - to move cursor / log right
To detach from the screen session, press Ctrl+A followed by the letter d. To reattach to the screen session, run screen -r.
Downgrade
Press F1 to show the HELP menu in the logger.
- Once the downgrade is complete, check the kernel packages installed.
Check kernel version
$ SevOne-show-version
OR
$ rpm -qa | grep kernel
OR
$ uname -r
Important: The kernel automatically gets updated as part of the downgrade, and not every NMS release has a new kernel.
Depending on the NMS release, the kernel version must be:
SevOne NMS Version | Kernel Version |
---|---|
NMS 7.0.1 | 4.18.0-553.el8_10 |
NMS 6.8.0 | 4.18.0-513.11.1.el8_9.x86_64 |
NMS 6.7.0 | 4.18.0-499.el8.x86_64 |
The kernel version must match the NMS release. If it does not, reboot the entire cluster by executing the step below to apply the new kernel; otherwise, a reboot is not required.
$ SevOne-shutdown reboot
- Confirm that NMS appliance is running version SevOne NMS
6.8.0.
$ SevOne-show-version
SevOne version: 6.8.0
kernel version: 4.18.0-513.11.1.el8_9.x86_64 #1 SMP Thu Dec 7 03:06:13 EST 2023
nginx version: 0.0.0
MySQL version: 10.6.12-MariaDB
PHP version: 8.1.27
SSH/SSL version: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
REST API version: 2.1.47, Build time 2024-02-15T08:46:36+0000, Hash 3dd4faa
Hardware Serial Number: VMware-42
Intel(R) Xeon(R) CPU
2 cores @ 2199.998MHz
8GB RAM
31GB SWAP
569 GB / Partition
- Clean up the /data/downgrade directory after the downgrade, especially the
installRPM.tar file and the ansible
folder.
$ rm /data/downgrade/installRPMs.tar
$ rm -rf /data/downgrade/ansible
- Execute the following command to identify errors, if
any.
$ SevOne-act check checkout --full-cluster --verbose
Check Docker Services
Post downgrade, ensure that the docker services that were running/enabled prior to the downgrade are still running/enabled.
- Using ssh, log in to SevOne NMS appliance as
root.
$ ssh root@<NMS appliance>
- Check docker service is running / enabled.
Check for actual containers. For example, soa
$ rpm -qa | grep soa
SevOne-soa-6.8.0-1.28.el8.x86_64
Check whether docker service is active
$ SevOne-peer-do "systemctl is-active docker.service" --- Gathering peer list... --- Running command on 127.0.0.1... active [ OKAY ]
Check whether docker service is enabled
$ SevOne-peer-do "systemctl is-enabled docker.service" --- Gathering peer list... --- Running command on 127.0.0.1... enabled [ OKAY ]
- Restart SOA.
$ supervisorctl restart soa
Log Files
Log File can be found in /var/SevOne/ansible-reverse/<toVersion>/<timestamp>/<peerIP>.log.
For example, /var/SevOne/ansible-reverse/v6.8.0/<timestamp>/<peerIP>.log
- Each peer will have its own log file located on the cluster leader
- A new log file will be created for each run of the downgrade and is split by the timestamp folder
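For example, to review a peer's downgrade log from the cluster leader (the timestamp and peer IP below are placeholders following the pattern above):
$ ls /var/SevOne/ansible-reverse/v6.8.0/
$ tail -n 100 /var/SevOne/ansible-reverse/v6.8.0/<timestamp>/<peerIP>.log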
using Self-Service Upgrade
To change the port on which the Graphical User Interface installer runs, go to Cluster Manager > Cluster Settings tab > Ports subtab. You may change the SevOne-gui-installer Port to any value desired. If cluster-wide firewall setting is enabled, this will automatically add the new port to the allowed ports list.
Obtain URL from Installer
Obtain the URL by using the GUI Installer or Command Line Interface. Enter the URL obtained in the browser of your choice to perform the steps in section Upgrade Stages.
using GUI Installer
SevOne NMS > Administration > Cluster Manager > Cluster Upgrade tab appears on the active Cluster Leader only.
The Cluster Upgrade tab enables you to download the upgrade artifact via the SFTP server, run the installer using the newly downloaded upgrade artifact, and obtain the URL. In addition to this, it also contains the cluster upgrade history.
The GUI installer returns the URL, for example, https://10.49.14.220:9443/. Enter the URL in the browser of your choice and proceed to the steps mentioned in Upgrade Stages.
using Command Line Interface
- Perform steps 1 - 11 in section using Command Line Interface to Upgrade > subsection Upgrade Steps above.
- Execute the following command to install the Self-Service Upgrade installer on your
appliance.
$ SevOne-validate-image -i v7.0.1-build<enter build number>.tar.gz -s v7.0.1-build<enter build number>.tar.gz.sha256.txt \
  --installer nms-gui
Important: The installer returns the URL, for example, https://10.49.14.220:9443/.
#####################################################################################
#                              SevOne-gui-installer                                 #
#####################################################################################
# Note: please open https://10.49.14.220:9443 in web browser to access GUI          #
#                                                                                    #
#####################################################################################
- Enter the URL, https://10.49.14.220:9443/, in the browser of your choice and proceed to the steps mentioned in Upgrade Stages.
Upgrade Stages
- Using the browser of your choice, enter the URL that the installer has returned in the step above. For example, https://10.49.14.220:9443/.
- Enter the login credentials to launch the upgrade stages.
Check For Upgrade Stage
This stage checks whether an update is available for the SevOne NMS cluster.
- Theme toggle - denotes that you can toggle to the dark theme.
- Current Version - denotes the current version of your SevOne NMS cluster.
- Upgrade Available - denotes the version of the available update. The upgrade artifact (tarball) must be located in /opt directory of the Active Cluster Leader for this stage to detect an available update. However, if Openstack cluster, the upgrade artifact (tarball) must be located in /data/upgrade directory.
- Read the release notes - provides the link to the release notes of SevOne NMS version you are upgrading to.
- Limited functionality - provides the upgrade statistics for the cluster and testing parameters such as,
- Total upgrade time
- Estimated disruption to polling
- Estimated disruption to netflow
- Estimated disruption to alerting
- Estimated disruption to SURF UI (user interface)
- Estimated disruption to SURF reporting
- Estimated disruption to reporting from DI (Data Insight)
- Provided testing parameters for the testing environment of each estimation above.
Note: Statistics may vary based on your cluster size and testing environment variables.
- If there is an artifact for a version higher than your current SevOne NMS version, you may proceed to the next stage by clicking on Continue to Pre-upgrade.
Pre-Upgrade Stage
The Pre-Upgrade stage runs only pre-upgrade checks against your SevOne NMS cluster to ensure your system is ready for the upgrade.
Click Run Pre-Upgrade to ensure that SevOne NMS cluster is in good health for the upgrade. Some of the checks include:
- Interpeer connectivity
- MySQL replication and overall NMS health
- Free disk space
Note: Running the Pre-upgrade checks may take a few minutes.
- The top part shows the overall state and progress of the pre-upgrade checks. The status can be in progress, successful, or failed.
- Under Peers is a list of peers, the peer-wise status, and the completion progress. The status of a peer can be:
- Unreachable - denotes the peer is unreachable while running the pre-checks.
- Failed - denotes that some checks have failed on the peer.
- Completed - denotes that the checks have completed
successfully.
Example
Note: By selecting a row in the Peers section, you can view the status of each task on the individual peer. The search box allows you to search the tasks in the list.
Each download icon in the screenshot above performs a different download. You may download:
- a peer log
- log for each task in the peer
- all logs in the cluster
Click Download System Log to download the system log to a file.
All downloaded files are saved in your default download folder.
When completed, you can view and download the logs for each task.
When you click the log icon, you get a Log Viewer pop-up. Click Copy to clipboard to copy the contents of the log viewer and paste them into a file.
Log Viewer
[ { "content": { "changed": true, "cmd": [ "/usr/local/scripts/SevOne-act", "stk", "topn-incorrect-percentage-config" ], "delta": "0:00:00.210400", "end": "2024-05-27 14:34:30.920980", "rc": 0, "start": "2024-05-27 14:34:30.710580", "stderr_lines": [ "" ], "stdout_lines": [ "[ OK ] No Errors Detected" ] }, "ended": "2024-05-27 14:34:30.962798+00:00", "peer_name": "127.0.0.1", "started": "2024-05-27 14:34:30.081742+00:00", "status": "ok", "task_name": "STK : Checking topn incorrect percentage config" } ]
- Summary - the bottom summary of the stage indicates the breakup of the tasks for the
selected peer and the entire cluster.
- Total - denotes the total number of tasks on the peer or overall tasks on the cluster.
- Ok - denotes the number of tasks which have run successfully.
- Skipped - denotes the number of tasks skipped. Not all tasks may run on all peers. Some tasks may run only on the Cluster Leader or the Active appliances and some may not run on certain appliance types such as, DNC. In such cases, there may be skipped tasks for certain peers.
- Failed - denotes the number of tasks which have failed. You can see individual logs for each task for any selected peer.
- Ignored - denotes the tasks/checks for which failures are ignored. Failure of these tasks/checks will not cause the stage to fail.
- Unreachable - denotes the number of tasks which have failed because the peer was unreachable. This is the first task after the peer has become unreachable and the remaining tasks will not be executed.
- Unexecuted - denotes the number of tasks that were not executed. This can be because the peer was unreachable and/or the checks were stopped in between.
Important: Some checks, such as md5-hack and lsof, may fail. At this time, the results from these two checks are ignored in the overall check status. If either of these two checks fails, the pre-upgrade stage will still show as Passed. However, if any other check is failing and an upgrade needs to be forced, then the upgrade can be performed using the CLI.
It is highly recommended to contact SevOne Support if any pre-check is failing.
After the pre-upgrade stage has completed successfully, click Continue to go to the Backup stage.
Backup Stage
The backup is run before the actual upgrade is performed. Click Run Backup and wait until the backup has completed successfully. This stage executes a few scripts to backup the database and a few folders critical to the system. This stage runs on the Cluster Leader and is optional.
When a peer is selected, it displays the list of tasks for the selected peer. The search box provides the capability to search in the task list.
When backup has completed successfully, click Continue.
Upgrade Stage
At this stage, the actual NMS upgrade is performed to the latest version. You will have limited functionality while upgrade is in progress. Click Run Upgrade. The User Interface workflow is identical to the Pre-Upgrade Stage. You are allowed to run the upgrade from the User Interface if and only if the pre-checks have succeeded. Most of the User Interface components are the same as the Pre-Upgrade checks. You can view individual peers and the overall cluster status. The bottom summary panel shows the peer and overall status. In case of upgrade, the execution will stop on a peer at the first failed task. The remaining tasks will show as Unexecuted in the bottom summary panel.
The search box under Tasks provides the capability to search in the task list.
After the upgrade stage has completed successfully, you are now ready to perform the Health Check. Click Continue.
Health Check Stage
IMPORTANT
If the target release includes a new kernel version, you must reboot the machine prior to performing the health check to load the new kernel and start all services. Please do not skip this step.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Click Run Health Check to run SevOne NMS health checks after a successful upgrade. The checks are identical to the pre-upgrade checks.
When a peer is selected, it displays the list of tasks for the selected peer. The search box provides the capability to search in the task list.
From Command Line Interface, confirm that NMS appliance is running version SevOne NMS 7.0.1.
Example
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-show-version
SevOne version: 7.0.1
kernel version: 4.18.0-553.el8_10.x86_64 #1 SMP Thu Dec 7 03:06:13 EST 2023
nginx version: 1.14.1
MySQL version: 10.6.18-MariaDB
PHP version: 8.3.7
SSH/SSL version: OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021
REST API version: 2.1.47, Build time 2024-05-16T06:53:00+0000, Hash 07f225e
Can't read memory from /dev/mem
Hardware Serial Number:
Intel(R) Xeon(R) CPU
2 cores @ 2199.998MHz
8GB RAM
31GB SWAP
569 GB / Partition
569 GB /data partition
post-Upgrade cleanup
Perform a cleanup by removing the ansible directory and installRPMs.tar file.
$ rm -rf /opt/ansible
$ rm /opt/installRPMs.tar
$ systemctl restart sevone-installer-gunicorn.service
post-Upgrade Stage
After all the upgrade stages have completed successfully, please refer to section post-Upgrade Steps below.
FAQs
How do I change the port on which the installer runs?
The default port is 9443. You may change the port from SevOne NMS > Administration > Cluster Manager > Cluster Settings tab > Ports subtab > field SevOne-gui-installer Port. After changing the port, you must regenerate the installer URL from Administration > Cluster Manager > Cluster Upgrade tab > click Run Installer.
SevOne recommends using any free port (i.e., a port that is not used by any other service) in the range 1024 - 65535.
Execute the following command to check port availability.
$ netstat -anp |grep tcp |grep LISTEN |grep <port_number_desired>
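If netstat is not available on the appliance, an equivalent check can be run with ss (part of iproute2, which ships with RHEL 8):
$ ss -tlnp | grep <port_number_desired>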
Custom certificates are already deployed on SevOne NMS server. Can the same custom certificates be used instead of the self-signed certificates used by the installer?
Self-Service Upgrade, running on the gunicorn server, uses the self-signed certificates of the nginx server by default:
"SSL_CERT_PATH": "/secrets/nginx/nginx.crt",
"SSL_KEY_PATH": "/secrets/nginx/nginx.key"
Execute the following command to restart gunicorn.
$ systemctl restart sevone-installer-gunicorn
If the custom certificate is based on hostname, execute the following steps.
- Using a text editor of your choice, edit /opt/sevone_installer/SevOne-django-settings.sh
file.
$ vi /opt/sevone_installer/SevOne-django-settings.sh
- Update the
variables.
# Change IP, HOSTNAME (variables value to hostname or ip address
# whichever set in valid SSL cert), SSL_CERT_PATH & SSL_KEY_PATH
IP='<hostname_of_server>'
HOSTNAME='<hostname_of_server>'
Note: The /opt/sevone_installer/SevOne-django-settings.sh file contains the following by default.
# Use nginx cert and key, generate open-ssl if (nginx cert/key) doesn't exist.
NGINX_CERT='/secrets/nginx/nginx.crt'
NGINX_KEY='/secrets/nginx/nginx.key'
if [[ -f "$NGINX_CERT" && -f "$NGINX_KEY" ]]; then
    SSL_CERT_PATH="$NGINX_CERT";
    SSL_KEY_PATH="$NGINX_KEY";
- Run SevOne-django-settings.sh
script.
$ ./opt/sevone_installer/SevOne-django-settings.sh
- Restart
gunicorn.
$ systemctl restart sevone-installer-gunicorn
Which address should variables SEVONE_INSTALLER_BIND_IP and url contain?
The upgrade script, SevOne-validate-image, creates the following files.
- /opt/sevone_installer/config.json
- /opt/sevone_installer/build/static/env.json
Variable SEVONE_INSTALLER_BIND_IP in config.json file contains the IPv6 address or the IP address / FQDN of the Cluster Leader stored in the peers table.
In cases where a proxy is set, the config.json and env.json files must be updated manually and the installer service must be restarted. Execute the following steps.
- Using a text editor of your choice, edit /opt/sevone_installer/config.json to change the IP address to the proxy IP address / FQDN of the Cluster Leader in variable SEVONE_INSTALLER_BIND_IP. If you are using certificates, then FQDN of the Cluster Leader must be assigned to variable SEVONE_INSTALLER_BIND_IP.
- Using a text editor of your choice, edit /opt/sevone_installer/build/static/env.json to change the IP address to the proxy IP address / FQDN of the Cluster Leader in variable url. If you are using certificates, then FQDN of the Cluster Leader must be assigned to variable url.
- Change directory to /opt/sevone_installer.
$ cd /opt/sevone_installer
$ python3 manage.py collectstatic
Restart the installer service
$ systemctl restart sevone-installer-gunicorn.service
Why does the User Interface report 'No update available' on the front page?
The upgrade artifact must be placed in the /opt directory or, for Openstack, in the /data/upgrade directory of the active Cluster Leader for the installer to detect an upgrade. If the SevOne NMS cluster is already on the target version, the installer will not detect an available upgrade path and will return a message that no update is available.
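For example, you can confirm that the artifact is present on the active Cluster Leader before re-checking for an update (file names are placeholders):
$ ls -lh /opt/*.tar.gz            # non-Openstack cluster
$ ls -lh /data/upgrade/*.tar.gz   # Openstack cluster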
What happens if pre-check fails?
If pre-checks fail, you will not be allowed to continue with the upgrade. This is intentional. SevOne does not support forcing an upgrade on corrupted environments. However, you can force an upgrade from the CLI, just like before, and then monitor the progress from the User Interface.
What happens if upgrade fails?
If the upgrade fails, you may download the cluster and/or individual peer logs and contact SevOne Support from the link in the error message. For debugging, the existing steps of checking the upgrade in screen, reviewing the logs, and retrying all remain the same.
Can two upgrades be run at the same time?
No. Two upgrades cannot be run at the same time. If an upgrade is running and a new installer instance is launched, it will show the status of the ongoing upgrade only.
One of my peers is unreachable. Will I still be able to upgrade from the installer UI?
No. All peers must be reachable from the Cluster Leader to run an upgrade. The requirements are the same as before. However, if after starting an upgrade, a peer becomes unreachable, you can see its status on the installer's User Interface and the upgrade will stop, like before.
What happens if there is a failover after the upgrade?
Nothing has changed here. Like before, the installer does not handle HSA failover. If there is a failover after setting up the installer, the setup will have to be done separately on the now active Cluster Leader.
The installer is not coming up at the given URL. How can I see what is wrong?
- Make sure you are accessing the URL with https and accept the certificate warning.
- Check the status of the installer service systemctl status sevone-installer-gunicorn.service.
- If systemctl status sevone-installer-gunicorn.service shows the service as running and you are still unable to access the URL, then check the gunicorn log files for more details (an example is shown after this list).
- /var/log/SevOne-gui-installer/gunicorn.log OR
- /var/log/SevOne-gui-installer/access.log
- Check request status codes 500 or 404 in /var/log/SevOne-gui-installer/access.log.
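For example, to review the most recent installer errors and look for failing requests (this assumes a standard access-log format where the status code appears as a separate field):
$ tail -n 50 /var/log/SevOne-gui-installer/gunicorn.log
$ grep -E ' (500|404) ' /var/log/SevOne-gui-installer/access.log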
Can I run the installer from any peer?
No. The installer has to be run from the active Cluster Leader only.
Can I run this server in debug mode?
- Check log files in /var/log/SevOne-gui-installer/.
- Stop installer service systemctl stop sevone-installer-gunicorn.service if it is running.
- Using the text editor of your choice, edit /opt/sevone_installer/sevone_installer/settings.py file. Change variable DEBUG to TRUE.
- Run a development web server, for example: python3 manage.py runserver 0.0.0.0:1234.
How can I upgrade the SOA version?
- Using ssh, log in to SevOne NMS appliance (Cluster Leader of SevOne NMS cluster) as
root.
$ ssh root@<NMS appliance>
- Check the current SOA
version.
Example
$ rpm -qa |grep soa
SevOne-soa-6.8.0-1.28.el8.x86_64
Note: If you need to upgrade the SOA version, then continue with the steps below. Else, you are all set.
- Using curl -O, copy the SOA .rpm file to the /opt directory (or, if Openstack, to /data/upgrade) on the Cluster Leader and its peers.
Important: The latest SOA .rpm file can be downloaded from IBM Passport Advantage (https://www.ibm.com/software/passportadvantage/pao_download_software.html) via Passport Advantage Online. However, if you are on a legacy / flexible SevOne contract and do not have access to IBM Passport Advantage but have an active Support contract, please contact SevOne Support Team for the file.
- Upgrade SOA version. This step will perform the upgrade and restart
SOA.
Example
not Openstack
$ SevOne-peer-do 'rpm -Uvh /opt/SevOne-soa-7.0.0-1.19.el8.x86_64.rpm'
for Openstack
$ SevOne-peer-do 'rpm -Uvh /data/upgrade/SevOne-soa-7.0.0-1.19.el8.x86_64.rpm'
- Check the SOA version to make sure that it is on the version you have upgraded SOA
to.
Example
$ rpm -qa |grep soa
SevOne-soa-7.0.0-1.19.el8.x86_64
How to expand a Logical Volume?
Please refer to section Expand Logical Volume below.
How do I monitor the metrics supported for AWS?
Please refer to AWS Quick Start Guide for details.
What browsers are supported?
Latest versions of Chrome, Firefox, and Safari are supported. Internet Explorer (IE) is not supported.
How much time does the upgrade / downgrade take?
Upgrade / Downgrade | Average Time Taken |
---|---|
Upgrade from NMS 6.8.x to NMS 7.0.x | ~ 60 - 75 minutes |
Downgrade from NMS 7.0.x to NMS 6.8.x | ~ 40-45 minutes |
Debugging
- Ansible logs can be found in the /var/SevOne/ansible-upgrade/v6.8.0-v7.0.1/<timestamp>/127.0.0.1.log on the Cluster Leader.
- If there are failures, resolve them on the relevant peers and re-run the upgrade; the playbook execution starts again and the upgrade starts over.
- Failures on individual peers get recorded in the retry_files folder (in the parent ansible directory with the IP address of all the failed hosts)
- There are tags for each step of the upgrade, so in case there is a failure and you would like to kick off the upgrade for tasks remaining after a certain section, you can do that.
- For example, if you know that ALL the peers have successfully completed the package upgrade phase and the upgrade failed in the database section, you can kick off the upgrade again as shown below.
When upgrading from 6.8.0 to 7.0.1,
startingNMSversion = 6.8.0
upgradeNMSversion = 7.0.1
The following command runs the playbook for all tasks except those under the packages tag, i.e., the RPM upgrade section.
$ ansible-playbook -i hosts.ini main.yml \
  --extra-vars "startingVersion={startingNMSversion} forwardVersion={upgradeNMSversion}" \
  --tags="database, systempatches, cleanup" -v;
- If you know a certain subsection of peers have finished the upgrade completely, you can also remove them from the hosts inventory before restarting the upgrade script. This will result in kicking off all the tasks for all the hosts except the ones you removed from the inventory.
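For example, a hypothetical hosts.ini excerpt where a peer that has already finished is commented out before re-running the playbook (the group name and addresses are illustrative; the inventory is the hosts.ini in the ansible working directory used above):
[allhosts]
10.129.13.121
# 10.129.13.122    <- already completed; excluded from this run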
post-Upgrade Steps
Option 1: Reboot appliances by performing failover between Primary / Secondary
Option 2: Reboot appliances without failover
IMPORTANT: In either option, please make sure to successfully restart all the other appliances first before performing the restart operation on the Cluster Leader active appliance.
If the updater process is running, the command to shutdown/reboot an appliance will not proceed and you will get a message suggesting you to use the --force option.
It is not recommended to use the force option as it can lead to short-term data loss. The updater is scheduled to run every two hours, starting at 30 minutes past each even hour - for example, 00:30, 02:30, 04:30, and so on. The updater process is expected to run for approximately 1800 seconds (30 minutes). However, on very large and busy appliances, it can sometimes take a few minutes longer. Due to this, plan to reboot the appliances at times when the updater process is not running.
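As a hypothetical pre-reboot check, you can look for a running updater process before issuing the shutdown (this assumes the process name contains 'updater'; adjust the pattern to your environment):
$ ps aux | grep -i '[u]pdater'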
Find Cluster Leader
Prior to rebooting the appliances in the cluster using one of the options mentioned above, you must make note of the appliance that is the Cluster Leader in the cluster. From Administration > Cluster Manager > Cluster Overview tab > field Cluster Leader provides the name of the appliance that is the leader. For example, pandora-01 as shown in the screenshot below, is the Cluster Leader.
(Option 1) Reboot appliances by performing failover between Primary / Secondary
It is not necessary to perform failover / failback while performing reboot of appliances, but to minimize the polling downtime due to a reboot, this option can be used to perform failover, reboot, and failback between the primary / secondary appliances.
The failover operation must be done manually on a per peer basis, and it is not recommended to perform failover on more than one peer at the same time. However, the reboot of multiple appliances can be done in batches of 4 to 5 appliances at the same time. It is important to note that the failover steps for Cluster Leader pair must be done last once all other appliances have been rebooted.
Reboot Secondary appliances first (including Cluster Leader Secondary appliance)
Identify and confirm the passive appliance of the pair. From Administration > Cluster Manager > left navigation bar, expand the peer to identify which appliance of the pair is currently passive. For example, 10.129.13.121, as shown in the screenshot below, is the passive appliance of the pair.
- Using ssh, log in to each SevOne NMS passive appliance as root, including
the Cluster Leader passive
appliance.
$ ssh root@<NMS 'passive' appliance>
- Reboot the passive appliance.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Note: Repeat the steps above for each passive appliance, including the Cluster Leader passive appliance.
Important: Multiple passive appliances of different peers can be restarted at the same time in batches. SevOne recommends rebooting no more than 4 to 5 appliances at a time to keep the operation manageable, as in the sketch below.
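The following is a minimal sketch (not an official SevOne script) of how one batch of passive appliances could be rebooted from the Cluster Leader or a jump host; the IP addresses are placeholders, key-based ssh access as root is assumed, and the non-interactive podman exec form of the reboot command is an assumption.
# Sketch: reboot one batch of passive appliances (placeholder IPs; adjust to your peers)
for IP in 10.129.13.121 10.129.13.123 10.129.13.125 10.129.13.127; do
    echo "Rebooting passive appliance ${IP}"
    # assumption: SevOne-shutdown can be invoked through a non-interactive podman exec
    ssh root@${IP} "podman exec nms-nms-nms SevOne-shutdown reboot"
done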
Check and confirm replication status after reboot of Secondary appliances
Once the passive appliance is back up after reboot, confirm the replication is good for each pair - replication can take a few minutes. Execute the following commands to check the system uptime and replication status for the appliance that was rebooted.
$ podman exec -it nms-nms-nms /bin/bash
$ uptime
$ SevOne-act check replication
$ SevOne-masterslave-status
Perform the failover operation to make Secondary the active appliance (all peers except Cluster Leader)
You may now perform the fail over operation. From Administration > Cluster Manager > left navigation bar, expand the peer to select the active appliance of the pair. In the upper-right corner, click and select option Fail Over. For additional details, please refer to Cluster Manager > section Appliance Level Actions.
Check replication state after failover of the appliances
Once the failover is complete, confirm that the replication is good for each pair. It may take a few minutes for replication to catch up if it is lagging. Execute the following commands to check the replication status for the appliance that was rebooted.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-act check replication
$ SevOne-masterslave-status
You are now ready to restart the primary appliances.
Perform restart of Primary appliance(s) (all peers except Cluster Leader)
After the failover, the secondary appliances that were rebooted in the previous step are now the active appliances, and the primary appliances are now in the passive state. Identify and confirm the passive appliance of each pair from the User Interface > Administration > Cluster Manager. In the left navigation bar, expand the peer to identify which appliance of the pair is currently passive.
Refresh the browser to confirm that the failovers are successful and the primary appliances are now reported as passive.
Now, log in using SSH to the passive appliances as root and perform the reboot.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Perform failover operation to make Primary the active appliance (all peers except Cluster Leader)
You may now perform the fail over operation. From Administration > Cluster Manager > left navigation bar, expand the peer to select the active appliance of the pair. In the upper-right corner, click and select option Fail Over. For additional details, please refer to Cluster Manager > section Appliance Level Actions.
Check replication state after failover of the appliances
Once the failover is complete, confirm the replication is good for each pair - replication can take a few minutes. Execute the following commands to check the system uptime and replication status for the appliance that was rebooted.
$ podman exec -it nms-nms-nms /bin/bash
$ uptime
$ SevOne-act check replication
$ SevOne-masterslave-status
Reboot all peers with single appliance (peers that do not have a Secondary)
Identify all appliances that do not have any secondary appliances. From Administration > Cluster Manager > left navigation bar, expand the peers to identify which peer has a single appliance. For example, 10.129.15.139 as shown in the screenshot below, does not have any associated passive appliance.
Now, log in using SSH to all single primary appliances as root and perform the reboot.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Perform failover operation to make Cluster Leader Secondary as the active appliance
Before failing over the Cluster Leader, perform a cluster wide check on replication to ensure no errors are reported.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-act check replication --full-cluster
You may now perform the fail over operation for the Cluster Leader peer. From Administration > Cluster Manager > left navigation bar, expand the peer to select the active appliance of the Cluster Leader pair. In the upper-right corner, click and select option Fail Over. For additional details, please refer to Cluster Manager > section Appliance Level Actions.
Check replication state after failover of the Cluster Leader appliance
Once the failover is complete, confirm that the replication is good for the Cluster Leader pair - replication can take a few minutes. Execute the following commands from the Primary appliance using the Command Line Interface to confirm the replication status.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-act check replication
$ SevOne-masterslave-status
Perform reboot of Cluster Leader Primary appliance
After the failover, the Cluster Leader Secondary appliance will now be the active appliance, and the Primary appliance will be in the passive state. Identify and confirm the passive appliance of the pair from the User Interface > Administration > Cluster Manager. In the left navigation bar, expand the Cluster Leader peer to identify which appliance of the pair is currently passive.
Refresh the browser to confirm that the failovers are successful and the Cluster Leader primary appliance is now reported as passive.
Now, log in using SSH to Cluster Leader passive appliance as root and perform the reboot.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Perform failover operation to make Cluster Leader Primary the active appliance
You may now perform the fail over operation. From Administration > Cluster Manager > left navigation bar, identify the active appliance of the Cluster Leader pair and select the active appliance of the pair. In the upper-right corner, click and select option Fail Over. For additional details, please refer to Cluster Manager > section Appliance Level Actions.
Check replication state after failover of the appliances
Once the failover is complete, confirm the replication is good for the Cluster Leader peer - replication can take a few minutes. Execute the following commands from the Secondary appliance using the Command Line Interface to confirm the replication status.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-act check replication
$ SevOne-masterslave-status
(Option 2) Reboot appliances without failover
If the upgrade is performed within a full maintenance window and a polling outage is acceptable for the duration of the appliance restarts, then you can choose to restart the appliances in any order. However, SevOne always recommends restarting the passive appliances first and then the active appliances for all the peers.
Reboot Secondary appliances first (including Cluster Leader Secondary appliance)
Identify and confirm the passive appliance of the pair. From Administration > Cluster Manager > left navigation bar, expand the peer to identify which appliance of the pair is currently passive. For example, 10.129.13.121, as shown in the screenshot below, is the passive appliance of the pair.
- Using ssh, log in to each SevOne NMS passive appliance as root, including
the Cluster Leader passive
appliance.
$ ssh root@<NMS 'passive' appliance>
- Reboot the passive appliance.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Note: Repeat the steps above for each passive appliance, including the Cluster Leader passive appliance.
Important: Multiple passive appliances of different peers can be restarted at the same time in batches. SevOne recommends rebooting no more than 4 to 5 appliances at a time to keep the operation manageable.
Perform restart of the Primary appliance(s) (all peers except Cluster Leader)
Identify and confirm the active appliance of the pair from the User Interface > Administration > Cluster Manager. In the left navigation bar, expand the peer to identify which appliance of the pair is currently active.
Now, log in using SSH to the active appliance(s) as root and perform the reboot.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Perform reboot of Cluster Leader Primary appliance
Once all other appliances in the cluster have been restarted, the Cluster Leader primary appliance must be restarted. Identify and confirm the active appliance of the Cluster Leader pair from the User Interface > Administration > Cluster Manager. In the left navigation bar, expand the Cluster Leader peer to identify which appliance of the pair is currently active.
Now, log in using SSH to the Cluster Leader active appliance as root and perform the reboot.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
Confirm all appliances in cluster have restarted
Execute the following script from the active Cluster Leader.
$ for IP in $(SevOne-peer-list); do echo -en "IP: $IP \t"; ssh $IP 'echo -e "Hostname: $(hostname) \t System Uptime: $(uptime)"'; done
Cluster Leader does not need to be restarted again.
Load Kernel & Start Services
- Perform some cleanup.
$ rm /opt/installRPMs.tar
$ rm -rf /opt/ansible
- After the reboot completes successfully, SSH back to the active Cluster Leader of your SevOne
NMS cluster to check the installed kernel packages. Please see the table below to confirm
that you have the correct kernel version.
Check kernel version
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-show-version
OR
$ rpm -qa | grep kernel
OR
$ uname -r
Important: The kernel is automatically updated as part of the upgrade, and not every NMS release has a new kernel.
Depending on the NMS release, the kernel version must be:

SevOne NMS Version    Kernel Version
NMS 7.0.1             4.18.0-553.el8_10
NMS 6.8.0             4.18.0-513.11.1.el8_9.x86_64
NMS 6.7.0             4.18.0-499.el8.x86_64
The kernel version must correspond to the NMS release. If it does not, you must reboot the entire cluster by executing the step below to apply the new kernel; otherwise, a reboot is not required. An optional sketch for comparing the running kernel against the expected version follows the reboot command below.
$ podman exec -it nms-nms-nms /bin/bash
$ SevOne-shutdown reboot
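For convenience, the following is a minimal sketch (not an official SevOne check) that compares the running kernel against the expected version from the table above; the expected value shown is the NMS 7.0.1 entry and must be changed to match your target release.
# Sketch: compare the running kernel against the expected version for this release
EXPECTED="4.18.0-553.el8_10"          # example value for NMS 7.0.1, taken from the table above
RUNNING=$(uname -r)
case "$RUNNING" in
  "$EXPECTED"*) echo "Kernel OK: $RUNNING" ;;
  *)            echo "Kernel mismatch: running $RUNNING, expected $EXPECTED - reboot the cluster to apply the new kernel" ;;
esac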
- Verify the Operating System version. Confirm that the version is Red Hat Enterprise Linux
after the upgrade.
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.9 (Ootpa)
- Execute the following command to identify errors, if
any.
$ SevOne-act check checkout --full-cluster --verbose
Check Docker Services
Post-upgrade, ensure that the docker services that were running / enabled prior to the upgrade are still running / enabled.
- Using ssh, log in to SevOne NMS appliance as
root.
$ ssh root@<NMS appliance>
- Check that the docker service is running / enabled.
Check for the actual containers. For example, soa.
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND            CREATED          STATUS          PORTS   NAMES
675c0f79c08f   docker.s1artrtp1.s1.devit.ibm.com/soa:v6.8.0-build.2   "/entrypoint.sh"   31 minutes ago   Up 31 minutes           soa
Check whether docker service is active
$ SevOne-peer-do "systemctl is-active docker.service" --- Gathering peer list... --- Running command on 127.0.0.1... active [ OKAY ]
$ SevOne-peer-do "systemctl is-enabled docker.service"
Check whether docker service is enabled
$ SevOne-peer-do "systemctl is-enabled docker.service" --- Gathering peer list... --- Running command on 127.0.0.1... enabled [ OKAY ]
- Restart SOA.
$ supervisorctl restart soa
Log Files
Log files can be found in /var/SevOne/ansible-upgrade/<fromVersion-toVersion>/<timestamp>/<peerIP>.log.
For example, /var/SevOne/ansible-upgrade/v6.8.0-v7.0.1/<timestamp>/<peerIP>.log
- Each peer will have its own log file located on the Cluster Leader.
- A new log file will be created for each run of the upgrade and is split by the timestamp folder.
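As a minimal sketch, assuming the directory layout described above, you can locate the most recent run and follow a peer's log as shown below; <timestamp> and <peerIP> are placeholders, not literal values.
$ ls -dt /var/SevOne/ansible-upgrade/v6.8.0-v7.0.1/*/ | head -1                  # most recent run for this upgrade
$ tail -f /var/SevOne/ansible-upgrade/v6.8.0-v7.0.1/<timestamp>/<peerIP>.log     # follow one peer's log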
Expand Logical Volume
If you have low storage space and need to increase the capacity of your virtual machine's partitions, the steps below will help extend the storage by using Logical Volume Manager (LVM).
Customer Domain
- Open your VMware vSphere Client.
Note: Your pages may vary from the screenshots in the steps below.
- Search for the virtual machine whose hardware component you want to upgrade. For example, vDNC100_RHEL.
- Right-click on the virtual machine and select Edit
Settings.
Example
where,
- Hard disk 1 is your root partition (/) and is set to 50GB.
- Hard disk 2 is your data partition (/data) and is set to 1024GB.
You will see the current settings of your virtual machine.
SevOne Domain
- SSH into your SevOne NMS virtual machine and log in as
root.
$ ssh root@<virtual machine IP address or hostname>
Example
$ ssh root@10.168.117.98
- Execute the following commands in the order shown below.
- The lsblk command reads the sysfs filesystem and udev db to gather
information.
$ lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                     2:0    1    4K  0 disk
sda                     8:0    0   50G  0 disk
├─sda1                  8:1    0    2M  0 part
├─sda2                  8:2    0  500M  0 part /boot
└─sda3                  8:3    0 49.5G  0 part
  ├─cl-root           253:0    0 45.5G  0 lvm  /
  └─cl-swap           253:1    0    4G  0 lvm  [SWAP]
sdb                     8:16   0    1T  0 disk
└─sdb1                  8:17   0 1024G  0 part
  └─data_vg-data_lv   253:2    0 1024G  0 lvm  /data
sr0                    11:0    1 1024M  0 rom
Important: You will see that sda3 is 49.5G.
- View the existing partitions and storage
devices.
$ parted -l | head -12
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   53.7GB  53.2GB  primary               lvm
Important: Disk /dev/sda is currently 53.7GB.
- The pvs command provides physical volume information in a configurable form, displaying
one line per physical volume.
$ pvs
  PV         VG      Fmt  Attr PSize      PFree
  /dev/sda3  cl      lvm2 a--     <49.51g    0
  /dev/sdb1  data_vg lvm2 a--   <1024.00g    0
- The pvdisplay command allows you to see the attributes of one or more physical volumes, such as size, physical extent size, space used for the volume group descriptor area, and so on. In the command below, the physical volume is passed to obtain its attributes. For example, /dev/sda3.
$ pvdisplay /dev/sda3
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               cl
  PV Size               <49.51 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              12674
  Free PE               0
  Allocated PE          12674
  PV UUID               tJa0zz-UQEc-nKHh-mvoo-RfM6-NMM9-fyeaZC
Important: Free PE is 0 for /dev/sda3.
- Based on the target version's hardware requirements mentioned in SevOne NMS Installation Guide -
Virtual Appliance, if you need to increase your hard disk based on the specifications, go to
your VMware vSphere Client > choose your virtual machine for example, vDNC100_RHEL and
its IP address. Right-click on the virtual machine and select Edit Settings. Increase your
Hard disk 1 from 50GB to 150GB and click OK in the lower-right
corner.
Important: The Hard disk 1 change from 50GB to 150GB will only take effect after the logical volume is extended by executing the lvextend command as shown below.
- The vgs command provides volume group information in a configurable form, displaying one
line per volume group.
$ vgs
  VG      #PV #LV #SN Attr   VSize      VFree
  cl        1   2   0 wz--n-    <49.51g    0
  data_vg   1   1   0 wz--n-  <1024.00g    0
- The vgdisplay command displays volume group properties (such as size, extents, number of physical volumes, etc.) in a fixed form. In the command below, the VG Name, cl, is passed to obtain the attributes of this volume group.
$ vgdisplay cl
  --- Volume group ---
  VG Name               cl
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <49.51 GiB
  PE Size               4.00 MiB
  Total PE              12674
  Alloc PE / Size       12674 / <49.51 GiB
  Free  PE / Size       0 / 0
  VG UUID               7cMstg-INr3-loF0-vT2N-ghpb-IBJx-bzXuv7
Important: VG Size is <49.51 GiB.
- Rescan to see that the disk has resized from 53687091200 (50GB) to 161061273600
(150GB).
$ echo 1 > /sys/class/block/sda/device/rescan; dmesg -T | tail -2
[Thu Dec  8 16:44:33 2022] sd 0:0:0:0: [sda] 314572800 512-byte logical blocks: (161 GB/150 GiB)
[Thu Dec  8 16:44:33 2022] sda: detected capacity change from 53687091200 to 161061273600
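If more than one disk was grown in vSphere, a minimal sketch using the same sysfs rescan mechanism shown above is to trigger a rescan for every SCSI disk:
$ for dev in /sys/class/block/sd*/device/rescan; do echo 1 > "$dev"; done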
- View the existing partitions and storage devices
again.
$ parted -l | head -12
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   53.7GB  53.2GB  primary               lvm
Important: Disk /dev/sda has been resized to 161GB.
- Now, resize the disk partition from 50GB to 150GB.
Example
For this example, at the prompts below, you will enter the following in the order listed.
- print - will show you the disk size.
- print free - shows the free space available. In the example below, you will see the
following.
Number  Start   End     Size    Type     File system  Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   53.7GB  53.2GB  primary               lvm
        53.7GB  161GB   107GB            Free Space
- resizepart - resizes the partition.
- at prompt Partition number?, enter 3 (this is obtained from the Number column in the example above).
- at prompt End? [53.7GB]?, enter 161GB (this is obtained from the End column in the example above).
- print - verifies the disk size change to 161GB.
- quit - allows you to quit the parted utility.
$ parted /dev/sda
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   53.7GB  53.2GB  primary               lvm

(parted) print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   53.7GB  53.2GB  primary               lvm
        53.7GB  161GB   107GB            Free Space

(parted) resizepart
Partition number? 3
End?  [53.7GB]? 161GB
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   161GB   160GB   primary               lvm

(parted) quit
Information: You may need to update /etc/fstab.
$
- Check to see if the disk is
resized.
$ parted -l | head -12
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  3146kB  2097kB  primary
 2      3146kB  527MB   524MB   primary  xfs          boot
 3      527MB   161GB   160GB   primary               lvm
For partition 3, you will see that the End column now has 161GB.
- Now, you need to resize the physical volume in Logical Volume Manager. The pvs command
provides physical volume information in a configurable form, displaying one line per physical
volume.
$ pvs
  PV         VG      Fmt  Attr PSize      PFree
  /dev/sda3  cl      lvm2 a--     <49.51g    0
  /dev/sdb1  data_vg lvm2 a--   <1024.00g    0
- Display the physical volume, for example /dev/sda3.
$ pvdisplay /dev/sda3
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               cl
  PV Size               <49.51 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              12674
  Free PE               0
  Allocated PE          12674
  PV UUID               tJa0zz-UQEc-nKHh-mvoo-RfM6-NMM9-fyeaZC
Important: Free PE is still 0 for /dev/sda3.
- Resize the Physical Volume.
$ pvresize /dev/sda3
  Physical volume "/dev/sda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
- After resizing the physical volume, display the physical volume, /dev/sda3 for example, again.
$ pvdisplay /dev/sda3
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               cl
  PV Size               149.45 GiB / not usable <1.57 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              38259
  Free PE               25585
  Allocated PE          12674
  PV UUID               tJa0zz-UQEc-nKHh-mvoo-RfM6-NMM9-fyeaZC
Free PE is now 25585 for /dev/sda3.
- You are now ready to extend the logical volume and filesystem using the lvextend command with all (100%) of the available free space.
- Display file system information showing the total and available space for the root partition.
$ df -hT /
Filesystem           Type  Size  Used  Avail  Use%  Mounted on
/dev/mapper/cl-root  xfs    46G   17G    29G   37%  /
Size is 46G for 50GB hard disk.
- The lvs command provides logical volume information in a configurable form, displaying one
line per logical volume.
$ lvs
  LV       VG       Attr        LSize      Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root     cl       -wi-ao----     <45.51g
  swap     cl       -wi-ao----       4.00g
  data_lv  data_vg  -wi-ao----   <1024.00g
- The lvdisplay command displays the properties of LVM logical volumes. In the command below, the volume group, cl, is passed to display its logical volumes.
$ lvdisplay cl
  --- Logical volume ---
  LV Path                /dev/cl/swap
  LV Name                swap
  VG Name                cl
  LV UUID                jj8N64-6UPp-WWQz-UKGj-TAxf-MwXW-HJCu7n
  LV Write Access        read/write
  LV Creation host, time localhost, 2017-12-07 22:10:11 +0000
  LV Status              available
  # open                 2
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/cl/root
  LV Name                root
  VG Name                cl
  LV UUID                Jqecl2-YErD-8zwd-JHOG-MPZB-lmdI-HdcT8u
  LV Write Access        read/write
  LV Creation host, time localhost, 2017-12-07 22:10:11 +0000
  LV Status              available
  # open                 1
  LV Size                <45.51 GiB
  Current LE             11650
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
- You are now ready to extend the logical volume using lvextend
command.
$ lvextend -l+100%FREE -r /dev/cl/root
  Size of logical volume cl/root changed from <45.51 GiB (11650 extents) to <145.45 GiB (37235 extents).
  Logical volume cl/root successfully resized.
meta-data=/dev/mapper/cl-root    isize=512    agcount=4, agsize=2982400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=11929600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=5825, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 11929600 to 38128640
- Check the root partition size for
/dev/cl/root.
$ lvdisplay /dev/cl/root
  --- Logical volume ---
  LV Path                /dev/cl/root
  LV Name                root
  VG Name                cl
  LV UUID                Jqecl2-YErD-8zwd-JHOG-MPZB-lmdI-HdcT8u
  LV Write Access        read/write
  LV Creation host, time localhost, 2017-12-07 22:10:11 +0000
  LV Status              available
  # open                 1
  LV Size                <145.45 GiB
  Current LE             37235
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
- Display file system information showing the total and available space for the root partition.
$ df -hT /
Filesystem           Type  Size  Used  Avail  Use%  Mounted on
/dev/mapper/cl-root  xfs   146G   17G   129G   12%  /
You will see that the root partition has been updated and resized. It is now 146G.
- Exit from Command Line Interface.
$ exit
- Your virtual machine resource requirements now align with what is required by SevOne NMS' target version. You may now proceed with the steps to perform the upgrade.
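For quick reference, the commands used above to grow the root logical volume are summarized in the sketch below; it assumes the same device names as in this example (/dev/sda, partition 3, volume group cl, logical volume root) and must be adapted to your layout.
# Sketch: summary of the root-volume expansion performed above (adapt device names to your system)
$ echo 1 > /sys/class/block/sda/device/rescan      # make the kernel see the larger disk
$ parted /dev/sda                                  # then run: resizepart -> partition 3 -> End 161GB -> quit (interactive, as shown above)
$ pvresize /dev/sda3                               # grow the LVM physical volume
$ lvextend -l+100%FREE -r /dev/cl/root             # grow the logical volume and the xfs filesystem (-r)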