Installing IBM Storage Scale and creating a cluster

After you set up the installer node, define nodes, NSDs, and file systems in the cluster definition file, and set the required configuration parameters, you can install IBM Storage Scale according to the topology that is defined in the cluster definition file.

  1. If call home is enabled in the cluster definition file, specify the minimum call home configuration parameters.
    ./spectrumscale callhome config -n CustName -i CustID -e CustEmail -cn CustCountry
    For more information, see Enabling and configuring call home using the installation toolkit.
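    For example, with hypothetical customer values (substitute your own; the expected format of each value can vary by release), the call home configuration might look like this:
    # Hypothetical values for illustration only
    ./spectrumscale callhome config -n "Example Corp" -i 1234567 -e admin@example.com -cn US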
  2. Run environment prechecks before you initiate the installation procedure.
    ./spectrumscale install -pr

    This step is not mandatory because running ./spectrumscale install with no arguments also runs these checks before the installation.

  3. Start the IBM Storage Scale installation and the creation of the cluster.
    ./spectrumscale install

Understanding what the installation toolkit does during the installation

The installation toolkit automatically performs a different set of steps depending on which of the following scenarios applies:

  • Install IBM Storage Scale on all nodes, create a new GPFS cluster, and create NSDs
  • Add nodes to an existing GPFS cluster and create any new NSDs
  • All nodes in the cluster definition file are already in a cluster

To add nodes to an existing GPFS cluster, at least one node in the cluster definition file must already belong to the cluster to which the new nodes are to be added. The cluster name in the cluster definition file must also exactly match the cluster name that is reported by the mmlscluster command.
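For example, you might verify the existing cluster name and then set the same name in the cluster definition file. The commands below are a sketch: the cluster name is a placeholder, and you should confirm the exact spectrumscale option for your release (for example, with ./spectrumscale config gpfs -h).

    # List the existing cluster; note the cluster name in the output
    mmlscluster
    # Set the matching cluster name in the cluster definition file (placeholder name; option shown is illustrative)
    ./spectrumscale config gpfs -c example-cluster.example.com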

When you run the ./spectrumscale install command, the installation toolkit does these steps:

Install IBM Storage Scale on all nodes, create a new GPFS cluster, and create NSDs
  • Run preinstall environment checks
  • Install the IBM Storage Scale packages on all nodes
  • Build the GPFS portability layer on all nodes
  • Install and configure performance monitoring tools
  • Create a GPFS cluster
  • Configure licenses
  • Set ephemeral port range
  • Create NSDs, if any are defined in the cluster definition
  • Create file systems, if any are defined in the cluster definition
  • Run post-install environment checks
Add nodes to an existing GPFS cluster and create any new NSDs
  • Run preinstall environment checks
  • Install the IBM Storage Scale packages on nodes to be added to the cluster
  • Install and configure performance monitoring tools on nodes to be added to the cluster
  • Add nodes to the GPFS cluster
  • Configure licenses
  • Create NSDs, if any are defined in the cluster definition
  • Create file systems, if any are defined in the cluster definition
  • Run post-install environment checks
Important: The installation toolkit does not alter anything on the existing nodes in the cluster. You can determine the existing nodes in the cluster by using the mmlscluster command.

The installation toolkit might change the performance monitoring collector configuration if you are adding the new node as a GUI node or an NSD node, due to collector node prioritization. However, if you do not want the collector configuration to be changed, you can disable performance monitoring by using the ./spectrumscale config perfmon -r off command before you initiate the installation procedure.
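For example, to leave the existing collector configuration untouched during this run, performance monitoring reconfiguration might be turned off before the installation. This is a sketch based on the -r off option mentioned above; verify the behavior for your release.

    # Prevent the toolkit from reconfiguring performance monitoring
    ./spectrumscale config perfmon -r off
    # Proceed with the installation
    ./spectrumscale install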

All nodes in the cluster definition are in a cluster
  • Run preinstall environment checks
  • Skip all steps until NSD creation
  • Create NSDs, if any new NSDs are defined in the cluster definition (see the example after this list)
  • Run post-install environment checks
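For example, a new NSD might be defined and then created by rerunning the installation. The device path, server names, and file system name below are placeholders, and the exact nsd add options can vary by release; check ./spectrumscale nsd add -h before you run the commands.

    # Define a new NSD on a shared device that is served by two existing NSD servers (placeholder names)
    ./spectrumscale nsd add /dev/dm-3 -p nsdserver1.example.com -s nsdserver2.example.com -fs fs1
    # Rerun the installation to create the newly defined NSD
    ./spectrumscale install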

For more information, see spectrumscale command.

What to do next
Upon completion of the installation, you have an active GPFS cluster. Within the cluster, NSDs and file systems might be created (if any were defined in the cluster definition), performance monitoring is configured, and all product licenses are accepted.
The installation can be rerun in the future, as shown in the sketch after this list, to:
  • Add NSD server nodes
  • Add GPFS client nodes
  • Add GUI nodes
  • Add NSDs
  • Define new file systems
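For example, a later rerun that adds a client node and a GUI node might look like the following sketch. The host names are placeholders, and the node add options can differ by release; check ./spectrumscale node add -h before you run the commands.

    # Add a client node (no additional roles) and a GUI node to the cluster definition (placeholder host names)
    ./spectrumscale node add client1.example.com
    ./spectrumscale node add gui1.example.com -g
    # Rerun the installation; nodes that are already in the cluster are not altered
    ./spectrumscale install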