Deploying protocols

Use this information to deploy protocols in an IBM Storage Scale cluster using the installation toolkit.

Deployment of protocol services is performed on a subset of the cluster nodes that are designated as protocol nodes using the ./spectrumscale node add node_name -p command. Protocol nodes have an additional set of packages installed that allow them to run the NFS, SMB, and Object protocol services.

Data is served through these protocols from a pool of addresses designated as Export IP addresses or CES public IP addresses using ./spectrumscale config protocols -e IP1,IP2,IP3... or added manually using the mmces address add command. The allocation of addresses in this pool is managed by the cluster, and IP addresses are automatically migrated to other available protocol nodes in the event of a node failure.
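
For example, to add addresses to the CES public IP address pool manually and confirm the result (a sketch; the addresses 192.0.2.10 and 192.0.2.11 are hypothetical examples):

# Add two addresses to the CES public IP address pool
/usr/lpp/mmfs/bin/mmces address add --ces-ip 192.0.2.10,192.0.2.11
# List the pool and the current node assignments
/usr/lpp/mmfs/bin/mmces address list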

Before deploying protocols, a GPFS cluster must already exist with GPFS started, and the cluster must have at least one file system for the CES shared root file system.
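
To confirm these prerequisites before you start, you can run the following checks from any cluster node (a minimal sketch; the paths assume the default installation directory):

# Confirm that the GPFS daemon is active on all nodes
/usr/lpp/mmfs/bin/mmgetstate -a
# Confirm that at least one file system exists and is mounted
/usr/lpp/mmfs/bin/mmlsmount all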

Notes: Consider the following when you deploy protocols.
  1. All the protocol nodes must be running supported operating systems, and the protocol nodes must have the same CPU architecture, although the other nodes in the cluster can be on different platforms and operating systems.

    For information about supported operating systems for protocol nodes and their required minimum kernel levels, see IBM Storage Scale FAQ in IBM® Documentation.

  2. The packages for all protocols are installed on every node designated as a protocol node; this is done even if a service is not enabled in your configuration.
  3. Services are enabled and disabled cluster wide; this means that every protocol node serves all enabled protocols.
  4. If SMB is enabled, the number of protocol nodes is limited to 16 nodes.
  5. If your protocol node has Red Hat Enterprise Linux® 7.x installed, there might be an NFS service already running on the node that can cause issues with the installation of IBM Storage Scale NFS packages. To avoid these issues, before starting the deployment, you must do the following:
    1. Stop the NFS service using the systemctl stop nfs.service command.
    2. Disable the NFS service using the systemctl disable nfs.service command.

      Disabling the service ensures that this change persists across system reboots.

  6. The installation toolkit does not support adding protocol nodes to an existing ESS cluster that is at a version earlier than ESS 3.5.

Defining a shared file system for protocols

To use protocol services, a shared file system (CES shared root file system) must be defined. If GPFS has already been configured, the shared file system can be specified manually or by re-running the spectrumscale install command to assign an existing NSD to the file system. If re-running spectrumscale install, be sure that your NSD servers are compatible with the installation toolkit and contained within the cluster definition file.
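
If you need to check what is already recorded in the cluster definition file before assigning the shared file system, the toolkit's listing subcommands can help (a sketch, assuming the standard toolkit syntax):

# Review the NSDs recorded in the cluster definition file
./spectrumscale nsd list
# Review the file systems recorded in the cluster definition file
./spectrumscale filesystem list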

You can use the ./spectrumscale config protocols command to define the shared file system with the -f option and the shared file system mount point or path with the -m option.

For example: ./spectrumscale config protocols -f cesshared -m /gpfs/cesshared.

To view the current settings, issue this command:

./spectrumscale config protocols --list
[ INFO  ] No changes made. Current settings are as follows:
[ INFO  ] Shared File System Name is cesshared
[ INFO  ] Shared File System Mountpoint or Path is /gpfs/cesshared

Adding protocol nodes to the cluster definition file

To deploy protocols on nodes in your cluster, they must be added to the cluster definition file as protocol nodes.

Issue the following command to designate a node as a protocol node:

./spectrumscale node add NODE_IP -p

Enabling NFS and SMB

To enable or disable a set of protocols, use the ./spectrumscale enable and ./spectrumscale disable commands. For example:

./spectrumscale enable smb nfs
[ INFO  ] Enabling SMB on all protocol nodes.
[ INFO  ] Enabling NFS on all protocol nodes.

You can view the current list of enabled protocols by using the ./spectrumscale node list command. For example:

./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 192.0.2.1
[ INFO  ]
[ INFO  ] [Cluster Name]
[ INFO  ] ESDev1
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Disabled
[ INFO  ] SMB : Enabled
[ INFO  ] NFS : Enabled
[ INFO  ]
[ INFO  ] GPFS Node              Admin  Quorum  Manager  NSD Server  Protocol  GUI Server  OS     Arch
[ INFO  ] ESDev1-GPFS1             X       X       X                    X                  rhel7  x86_64
[ INFO  ] ESDev1-GPFS2                             X                    X                  rhel7  x86_64
[ INFO  ] ESDev1-GPFS3                             X                    X                  rhel7  x86_64
[ INFO  ] ESDev1-GPFS4             X       X       X          X                            rhel7  x86_64 
[ INFO  ] ESDev1-GPFS5             X       X       X          X                            rhel7  x86_64 

Configuring object

Attention:
  • The object protocol is not supported in IBM Storage Scale 5.1.0.0. If you want to deploy object, install IBM Storage Scale 5.1.0.1 or a later release.
  • If SELinux is disabled during installation of IBM Storage Scale for object storage, enabling SELinux after installation is not supported.
If the object protocol is enabled, further protocol-specific configuration is required. You can configure these options by using the spectrumscale config object command, which has the following parameters:
usage: spectrumscale config object [-h] [-l] [-f FILESYSTEM] [-m MOUNTPOINT]
                                   [-e ENDPOINT] [-o OBJECTBASE]
                                   [-i INODEALLOCATION]
                                   [-au ADMINUSER] [-ap ADMINPASSWORD]
                                   [-su SWIFTUSER] [-sp SWIFTPASSWORD]
                                   [-dp DATABASEPASSWORD]
                                   [-s3 {on,off}]

The object protocol requires a dedicated fileset as its back-end storage. This fileset is defined by using the --filesystem (-f), --mountpoint (-m), and --objectbase (-o) flags, which specify the file system, mount point, and fileset respectively.

The --endpoint (-e) option specifies the host name that is used for access to the object store. This host name should be a round-robin DNS entry that maps to all CES IP addresses, so that the load of all Keystone and object traffic routed to it is distributed. In other words, the endpoint is a name in DNS or on a load balancer that maps to the group of export IPs (that is, the CES IPs that are assigned to the protocol nodes).

The user name and password options that follow specify the credentials that are used to create an admin user within Keystone for object and container access. The system prompts for these values during the ./spectrumscale deploy pre-check and run phases if they have not already been configured. The following example shows how to configure these options to associate user names and passwords: ./spectrumscale config object -au admin -ap -dp

The ADMINUSER (-au) option specifies the admin user name. This credential is for the Keystone administrator. This user can be local or on a remote authentication server, depending on the authentication type that is used.

The ADMINPASSWORD (-ap) option specifies the password for the admin user.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.

The SWIFTUSER (-su) option specifies the Swift user name. This credential is for the Swift services administrator. All Swift services run in this user's context. This user can be local or on a remote authentication server, depending on the authentication type that is used.

The SWIFTPASSWORD (-sp) option specifies the password for the Swift user.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.
The DATABASEPASSWORD (-dp) option specifies the password for the object database.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.

The -s3 option specifies whether the S3 (Amazon Simple Storage Service) API should be enabled.
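
Pulling these options together, the following is a minimal configuration sketch; the file system fs1, mount point /gpfs/fs1, fileset object_fileset, and endpoint host name protocols.example.com are hypothetical values:

# Define the object back-end storage and endpoint, and enable the S3 API
./spectrumscale config object -f fs1 -m /gpfs/fs1 -o object_fileset -e protocols.example.com -s3 on
# Review the resulting object configuration
./spectrumscale config object -l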

Adding CES IP addresses

Note: Adding CES IP addresses is mandatory for protocol deployment.

CES public IP addresses or export IP addresses are used to export data through the protocols (NFS, SMB, object). File and object clients use these public IPs to access data on GPFS file systems. Export IP addresses are shared between all protocols and they are organized in a public IP address pool; there can be fewer public IP addresses than protocol nodes. Export IP addresses must have an associated host name and reverse DNS lookup must be configured for each export IP address.
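
Because reverse DNS lookup is required, you can verify the forward and reverse mappings for each export IP address before adding it (a sketch; the address 192.0.2.10 and host name ces1.example.com are hypothetical):

# Forward lookup: the host name must resolve to the export IP address
getent hosts ces1.example.com
# Reverse lookup: the export IP address must resolve back to the host name
getent hosts 192.0.2.10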

  1. Add export IP addresses to your cluster by using this command:
    ./spectrumscale config protocols -e EXPORT_IP_POOL

    Where EXPORT_IP_POOL is a comma-separated list of IP addresses.

    In the CES interface mode, you must specify the CES IP addresses to the installation toolkit in Classless Inter-Domain Routing (CIDR) notation, in which the IP address is followed by a forward slash and the prefix length:
    IPAddress/PrefixLength
    For example, 2001:0DB8::/32 (IPv6) or 192.0.2.0/20 (IPv4).
    You must specify the prefix length for every CES IP address; otherwise, you cannot add the IP address by using the installation toolkit. For a concrete example, see the sketch after these steps.
    • When you are using IPv6 addresses, the prefix length must be in the range 1 - 124.
    • When you are using IPv4 addresses, the prefix length must be in the range 1 - 30.
  2. If you are using the CES interface mode, specify the interfaces by using the following command.
    ./spectrumscale config protocols -i INTERFACES

    Where INTERFACES is the comma-separated list of network interfaces. For example, eth0,eth1.

  3. View the current configuration by using the following command:
    ./spectrumscale node list
    View the CES shared root and the IP address pool by using the following command:
    ./spectrumscale config protocols -l
    View the object configuration by using the following command:
    ./spectrumscale config object -l
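
As noted in step 1, in the CES interface mode every address in the pool must carry a prefix length. A minimal sketch, assuming the hypothetical IPv4 addresses 192.0.2.10 and 192.0.2.11 on a /24 network:

# Add two export IP addresses in CIDR notation for the CES interface mode
./spectrumscale config protocols -e 192.0.2.10/24,192.0.2.11/24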

Running the ./spectrumscale deploy command

After adding the protocol-related definition and configuration information described previously to the cluster definition file, you can deploy the protocols specified in that file.

You can also use the mmnetverify command to identify any network problems before doing the deployment. For more information, see mmnetverify command.
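
For example, a basic connectivity check across all nodes might look like the following (a sketch; mmnetverify also supports further operations such as port and data checks):

# Check basic network connectivity between all cluster nodes
/usr/lpp/mmfs/bin/mmnetverify connectivity -N all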

Use the following command to deploy protocols:
./spectrumscale deploy
Note: You are prompted for the Secret Encryption Key that you provided while configuring object or authentication, unless you disabled prompting.
This command does the following:
  • Performs pre-deploy checks.
  • Deploys protocols as specified in the cluster definition file.
  • Performs post-deploy checks.

You can explicitly specify the --precheck (-pr) option to perform a dry run of pre-deploy checks without starting the deployment. This is not required, however, because ./spectrumscale deploy with no argument also runs these checks. Alternatively, you can specify the --postcheck (-po) option to perform a dry run of post-deploy checks without starting the deployment. These options are mutually exclusive.
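
For example, to validate the configuration as a dry run before committing to the full deployment (a sketch):

# Dry run of the pre-deploy checks only
./spectrumscale deploy --precheck
# Full deployment: pre-deploy checks, deployment, post-deploy checks
./spectrumscale deploy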

After a successful deployment, you can verify the cluster and CES configuration by running this command:
$ /usr/lpp/mmfs/bin/mmlscluster --ces
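
A few additional commands can confirm that the CES addresses and protocol services are healthy (a sketch; these are standard CES administration commands):

# List the CES addresses and their current node assignments
/usr/lpp/mmfs/bin/mmces address list
# Show the state of the enabled protocol services on all protocol nodes
/usr/lpp/mmfs/bin/mmces service list -a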

What to do next

You can rerun the ./spectrumscale deploy command in the future to do the following tasks:
  • Add protocol nodes
  • Enable additional protocols