Overview of the installation toolkit

The installation toolkit automates the steps that are required to install IBM Storage Scale, deploy protocols, and upgrade to a later IBM Storage Scale release. For a list of the features that are available in each release of the installation toolkit, see Table 1.

When you use the installation toolkit, you provide information about your environment, from which the installation toolkit dynamically creates a cluster definition file. The installation toolkit then installs, configures, and deploys the specified configuration.

The installation toolkit enables you to do the following tasks:

  • Install and configure IBM Storage Scale.
  • Add IBM Storage Scale nodes to an existing cluster.
  • Deploy and configure SMB, NFS, Object, HDFS, and performance monitoring tools.
  • Perform verification before installing, deploying, or upgrading. These checks include verifying that passwordless SSH is set up correctly.
  • Enable and configure call home and file audit logging functions.
  • Upgrade IBM Storage Scale.

Installation and configuration are driven through commands.

From the self-extracting package, the installation toolkit is extracted to this directory, by default:
/usr/lpp/mmfs/package_code_version/ansible-toolkit

The installation toolkit is driven through the spectrumscale command in this directory. You can optionally add this directory to your PATH.
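For example, you can run the command directly from the extraction directory. The following sketch assumes a 5.1.9.0 package; substitute the version directory for your release:

```shell
# Change to the toolkit directory created by the self-extracting package
# (the 5.1.9.0 version directory here is only an example)
cd /usr/lpp/mmfs/5.1.9.0/ansible-toolkit

# Display the available spectrumscale command options
./spectrumscale -h

# Optionally add the directory to PATH for the current session
export PATH=$PATH:/usr/lpp/mmfs/5.1.9.0/ansible-toolkit
```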

The installation toolkit operation consists of four phases:
  1. User input by using spectrumscale commands:
    1. All user input is recorded into a cluster definition file in /usr/lpp/mmfs/5.1.9.0/ansible-toolkit/ansible/vars.
    2. Review the cluster definition file to make sure that it accurately reflects your cluster configuration.
    3. As you input your cluster configuration, you can have the installation toolkit act on only part of the cluster by omitting nodes that have incompatible operating systems, OS versions, or architectures.
  2. A spectrumscale install phase:
    1. Installation acts upon all nodes that are defined in the cluster definition file.
    2. GPFS and performance monitoring packages are installed.
    3. File audit logging and AFM to cloud object storage packages might be installed.
    4. GPFS portability layer is created.
    5. GPFS is started.
    6. A GPFS cluster is created.
    7. Licenses are applied.
    8. GUI nodes might be created and the GUI might be started upon these nodes.
    9. Performance monitoring, GPFS ephemeral ports, and cluster profile might be configured.
    10. NSDs are created.
    11. File systems are created.
  3. A spectrumscale deploy phase:
    1. Deployment acts upon all nodes that are defined in the cluster definition file.
    2. SMB, NFS, HDFS, and Object protocol packages are copied to all protocol nodes and installed.
    3. SMB, NFS, HDFS, and Object services might be started.
    4. File audit logging and message queue might be configured.
    5. Licenses are applied.
    6. GUI nodes might be created and the GUI might be started upon these nodes.
    7. Performance monitoring, call home, file audit logging, GPFS ephemeral ports, and cluster profile might be configured.
  4. A spectrumscale upgrade phase:
    Note: The upgrade phase does not enable new functions. In the upgrade phase, the required packages are upgraded, but new functions must be enabled either before or after the upgrade.
    1. Upgrade acts upon all nodes that are defined in the cluster definition file. However, you can exclude a subset of nodes from the upgrade configuration.
    2. All installed or deployed components are upgraded. During the upgrade phase, any missing packages might be installed and other packages that are already installed are upgraded.
    3. Upgrade can be done in the following ways:
      • Online upgrade (one node at a time)

        Online upgrades are sequential with multiple passes. For more information, see Upgrade process flow.

      • Offline upgrade

        Offline upgrades can be done in parallel, which saves considerable time in the upgrade window.

      • Upgrade while excluding a subset of nodes
    4. Prompting can be enabled on a node so that the upgrade pauses, allowing applications to be migrated from the node before the upgrade proceeds.
    For more information, see Upgrading IBM Storage Scale components with the installation toolkit and Upgrade process flow.
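The four phases above can be sketched as a command sequence. This is an illustrative outline only: the node names, IP address, device, and file system name are placeholders, and the exact options can vary by release (see the spectrumscale command reference):

```shell
# Phase 1: record the cluster configuration in the cluster definition file
./spectrumscale setup -s 198.51.100.10                # installer node IP (example)
./spectrumscale node add node1.example.com -m -q -a   # manager, quorum, admin node
./spectrumscale node add node2.example.com -p         # protocol node
./spectrumscale nsd add -p node1.example.com -fs fs1 /dev/sdb
./spectrumscale node list                             # review the definition

# Phase 2: install GPFS packages, create the cluster, NSDs, and file systems
./spectrumscale install --precheck
./spectrumscale install

# Phase 3: deploy protocol packages and services on the protocol nodes
./spectrumscale deploy --precheck
./spectrumscale deploy

# Phase 4: upgrade all defined nodes (a subset of nodes can be excluded)
./spectrumscale upgrade precheck
./spectrumscale upgrade run
```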
For information about command options available with the spectrumscale command, see spectrumscale command.
The following table lists the features that are available in the installation toolkit, in reverse chronological order of release.
Table 1. Installation toolkit: List of features
Release Features
5.1.9.x
  • Toolkit support for Remote mount configuration.
  • Extended operating system certification and support.
  • ECE SED installation, upgrade, and multi-DA support for vdisk creation.
5.1.8.x
  • Extended operating system certification and support.
  • Problem determination enhancement in the installation and upgrade process.
5.1.7.x
  • Support for prebuilt gplbin installation and upgrade on RHEL and SLES operating systems.
  • Toolkit support for option to specify a port to use for both tscTcpPort (daemon) and mmsdrservPort (sdr/cc).
  • ECE install toolkit enhancement to support checking fault tolerance for node upgrade.
  • ECE install toolkit enhancement to check and show the number of DAs, their type, and the usable space that is available.
5.1.6.x
  • Precheck problem determination enhancement in the installation and upgrade process.
  • Ansible toolkit ECE validation enhancement for block size and with RAID code.
  • Ansible library support for Red Hat Enterprise Linux® 9.0.
  • Support for Red Hat Enterprise Linux 8.7, 9.0, and 9.1 operating system on x86_64, PPC64LE, and s390x architectures. For more information, see IBM Storage Scale FAQ in IBM Documentation.
5.1.5.x
  • Ansible collection support in the toolkit.
  • Precheck problem determination enhancement.
  • Config populate enhancement.
5.1.4.x
  • Support for Ubuntu 22.04 on x86_64.
  • Support for Red Hat Enterprise Linux 8.6 on x86_64, PPC64LE, and s390x.
  • Modified the command to enable workload prompt to allow administrators to stop and migrate workloads before a node is shut down for upgrade. For more information, see Upgrading IBM Storage Scale components with the installation toolkit.
5.1.3.x
  • Support for parallel offline upgrade of all nodes in the cluster.
  • Support for Ansible 2.10.x.
5.1.2.x
  • Support for Red Hat Enterprise Linux 8.5 on x86_64, PPC64LE, and s390x.
  • Support for user-defined profiles of GPFS configuration parameters.
  • Support for populating cluster state information from mixed CPU architecture nodes. The information is populated from nodes that have the same CPU architecture as the installer node.
  • Several optimizations in the upgrade path resulting in faster upgrades than in earlier releases.
5.1.1.x
  • Migration to the Ansible® automation platform:
    • Enables scaling up to a larger number of nodes
    • Avoids issues that arise from using an agent-based tooling infrastructure such as Chef
    • Enables parity with widely adopted, modern tooling infrastructure
  • Support for Red Hat Enterprise Linux 8.4 on x86_64, PPC64LE, and s390x
  • Support for Ubuntu 20.04 on PPC64LE
  • Support for multiple recovery groups in IBM Storage Scale Erasure Code Edition
  • Simplification of file system creation:

    The file system is created during the installation phase rather than the deployment phase.

  • Support for IPv6 addresses
  • Support for the CES interface mode
  • IBM Storage Scale deployment playbooks are now open sourced on GitHub. Users can access the playbooks from the external GitHub repository and implement them in their own environments. For more information, see https://github.com/IBM/ibm-spectrum-scale-install-infra.
    Note: The installation toolkit (./spectrumscale command) is only available as part of the IBM Storage Scale installation packages that can be downloaded from IBM® FixCentral.
  • Added an option to enable workload prompt to allow administrators to stop and migrate workloads before a node is shut down for upgrade.
  • Discontinued the following functions:
    • NTP configuration

      Time synchronization configuration across all nodes in a cluster is recommended. Configure it manually by using an available method.

    • File and object authentication configuration

      File and object authentication configuration must be done by using the mmuserauth command.

    • NSD balance

      Balance the NSD preferred node between the primary and secondary nodes by using the ./spectrumscale nsd servers command.

5.1.0.x
  • Support for Ubuntu 20.04 on x86_64
  • Support for Red Hat Enterprise Linux 8.3 on x86_64, PPC64LE, and s390x
  • Support for Red Hat Enterprise Linux 7.9 on x86_64, PPC64LE, and s390x
  • Support for NFS and SMB protocols, and CES on SLES 15 on x86_64
  • Support for installing and upgrading AFM to cloud object storage (gpfs.afm.cos) package
5.0.5.x
  • Support for Red Hat Enterprise Linux 8.2 on x86_64, PPC64LE, and s390x
  • Support for Red Hat Enterprise Linux 7.8 on x86_64, PPC64, PPC64LE, and s390x
  • Support for packages and repository metadata signed with a GPG (GNU Privacy Guard) key
  • Enhanced handling of host entries in the /etc/hosts file. Support for both FQDN and short name
5.0.4.x
  • Support for Hadoop Distributed File System (HDFS) protocol
  • Support for ESS 3000 environments.
  • Support for Red Hat Enterprise Linux 8.0 and 8.1 on x86_64, PPC64LE, and s390x
  • Support for Red Hat Enterprise Linux 7.7 on x86_64, PPC64, PPC64LE, and s390x
  • Support for SLES 15 SP1 on x86_64 and s390x
    Note: NFS, SMB, and object are not supported on SLES 15 SP1.
  • Improvements in online and offline upgrade paths
  • Removed the installation GUI
  • Support for IBM Storage Scale Developer Edition
5.0.3.x
  • Support for SLES 15 on x86_64 and s390x
    Note: NFS, SMB, and object are not supported on SLES 15.
  • Upgrade related enhancements:
    • Upgrade flow changes to minimize I/O disruptions
    • Enhanced upgrade pre-checks to determine the packages that must be upgraded.

      Compare the versions of the installed packages with the versions in the repository of the packages you want to upgrade to. In a mixed operating system cluster, the comparison is done with the package repository applicable for the operating system running on the respective nodes.

    • Mixed OS support for upgrade
    • Enhanced upgrade post-checks to ensure that all packages have been upgraded successfully
    • Enhanced dependency checks to ensure dependencies are met for each required package
  • IBM Storage Scale Erasure Code Edition
    • Ability to define a new setup type ece
    • Ability to designate a scale-out node
    • Ability to define recovery group, vdisk set, and file system
    • Support for installation and deployment of IBM Storage Scale Erasure Code Edition
    • Support for config populate function in an IBM Storage Scale Erasure Code Edition environment
    • Offline upgrade support for IBM Storage Scale Erasure Code Edition
5.0.2.x
  • Support for IBM Z® (RHEL 7.x, SLES 12.x, Ubuntu 16.04, and Ubuntu 18.04 on s390x)
  • Support for RHEL 7.6 on x86_64, PPC64, PPC64LE, and s390x
  • Support for offline upgrade of nodes or components while they are stopped or down
  • Support for excluding nodes from an upgrade run
  • Support for rerunning an upgrade procedure after a failure
  • Support for watch folder
  • Configuration of message queue for file audit logging and watch folder
  • Enhancements in CES shared root creation and detection in config populate
  • Upgraded bundled Chef package
5.0.1.x
  • Support for Ubuntu 18.04 on x86_64
  • Support for RHEL 7.5 on x86_64, PPC64, and PPC64LE
  • Support for Ubuntu 16.04.4 on x86_64
  • Config populate support for call home and file audit logging
  • Performance monitoring configuration-related changes
5.0.0.x
  • Extended operating system support:
    • Ubuntu 16.04.0, 16.04.1, 16.04.2, 16.04.3 on x86_64
    • RHEL 7.4 on x86_64, PPC64, and PPC64LE
    • SLES 12 SP3 on x86_64
  • Improved deployment integration with Elastic Storage Server: The installation toolkit can detect ESS nodes (EMS and I/O) and validates permitted operations when you add protocol or client nodes to a cluster that contains ESS.
  • File audit logging installation and configuration
  • Call home configuration
  • Cumulative object upgrade support
  • Enhanced network connectivity pre-checks including passwordless SSH validation from the admin node
  • Updated the default file system block size for better out-of-the-box performance