spectrumscale command
Installs and configures GPFS; adds nodes to a cluster; deploys and configures protocols, performance monitoring tools, and authentication services; configures call home and file audit logging; and upgrades GPFS and protocols.
Synopsis
spectrumscale setup [-i SSHIdentity] [-s ServerIP]
[-st {"ss","SS","ess","ESS","ece","ECE"}] [--storesecret]
or
spectrumscale node add [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-b] [-p] [-so] Node
or
spectrumscale node load [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-b] [-p] [-so] NodeFile
or
spectrumscale node delete [-f] Node
or
spectrumscale node clear [-f]
or
spectrumscale node list
or
spectrumscale config gpfs [-l] [-c ClusterName] [-p {default | randomio}]
[-r RemoteShell] [-rc RemoteFileCopy]
[-e EphemeralPortRange]
or
spectrumscale config protocols [-l] [-f FileSystem] [-m MountPoint] [-e ExportIPPool]
or
spectrumscale config object [-f FileSystem] [-m MountPoint][-e EndPoint] [-o ObjectBase]
[-i InodeAllocation] [-t AdminToken]
[-au AdminUser] [-ap AdminPassword]
[-su SwiftUser] [-sp SwiftPassword]
[-dp DatabasePassword]
[-mr MultiRegion] [-rn RegionNumber]
[-s3 {on | off}]
or
spectrumscale config perfmon [-r {on | off}] [-d {on | off}] [-l]
or
spectrumscale config ntp [-e {on | off}] [-l List] [-s Upstream_Servers]
or
spectrumscale config clear {gpfs | protocols | object}
or
spectrumscale config update
or
spectrumscale config populate --node Node
or
spectrumscale nsd add -p Primary [-s Secondary] [-fs FileSystem]
[-po Pool]
[-u {dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache}]
[-fg FailureGroup] [--no-check]
PrimaryDevice [PrimaryDevice ...]
or
spectrumscale nsd balance [--node Node | --all]
or
spectrumscale nsd delete NSD
or
spectrumscale nsd modify [-n Name]
[-u {dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache}]
[-po Pool] [-fs FileSystem] [-fg FailureGroup]
NSD
or
spectrumscale nsd servers
or
spectrumscale nsd clear [-f]
or
spectrumscale nsd list
or
spectrumscale filesystem modify [-B {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}] [-m MountPoint]
[-r {1 | 2 | 3}] [-mr {1 | 2 | 3}] [-MR {1 | 2 | 3}] [-R {1 | 2 | 3}]
[--metadata_block_size {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}]
[--fileauditloggingenable [--degradedperformance] [--degradedperformancedisable]]
[--fileauditloggingdisable] [--logfileset LogFileset]
[--log_fileset_device LogFilesetDevice] [--retention RetentionPeriod]
FileSystem
or
spectrumscale filesystem define [-fs FileSystem] -vs VdiskSet [--mmcrfs MmcrfsParams]
or
spectrumscale filesystem list
or
spectrumscale fileauditlogging enable
or
spectrumscale fileauditlogging disable
or
spectrumscale fileauditlogging list
or
spectrumscale watchfolder enable
or
spectrumscale watchfolder disable
or
spectrumscale watchfolder list
or
spectrumscale recoverygroup define [-rg RGName] [-nc ScaleOutNodeClassName] --node Node
or
spectrumscale recoverygroup undefine RGName
or
spectrumscale recoverygroup change [-rg NewRGName] ExistingRGName
or
spectrumscale recoverygroup list
or
spectrumscale recoverygroup clear [-f]
or
spectrumscale vdiskset define [-vs VdiskSet] [-rg RGName]
-code {4+2P | 4+3P | 8+2P | 8+3P}
-bs {1M | 2M | 4M | 8M | 16M}
-ss VdiskSetSize
or
spectrumscale vdiskset undefine VdiskSet
or
spectrumscale vdiskset clear [-f]
or
spectrumscale vdiskset list
or
spectrumscale callhome enable
or
spectrumscale callhome disable
or
spectrumscale callhome config -n CustomerName -i CustomerID -e CustomerEmail -cn CustomerCountry
[-s ProxyServerIP] [-pt ProxyServerPort]
[-u ProxyServerUserName] [-pw ProxyServerPassword] [-a]
or
spectrumscale callhome clear {--all | -n | -i | -e | -cn | -s | -pt | -u | -pw}
or
spectrumscale callhome schedule {-d | -w} [-c]
or
spectrumscale callhome list
or
spectrumscale auth file {ldap | ad | nis | none}
or
spectrumscale auth object [--https] {local | external | ldap | ad}
or
spectrumscale auth commitsettings
or
spectrumscale auth clear
or
spectrumscale enable {object | nfs | smb}
or
spectrumscale disable {object | nfs | smb}
or
spectrumscale install [-pr] [-po] [-s] [-f] [--skip]
or
spectrumscale deploy [-pr] [-po] [-s] [-f] [--skip]
or
spectrumscale upgrade precheck [--skip]
or
spectrumscale upgrade config offline [-N Node] [--clear]
exclude [-N Node] [--clear]
list
clear
or
spectrumscale upgrade run [--skip]
or
spectrumscale upgrade postcheck [--skip]
or
spectrumscale upgrade showversions
or
spectrumscale installgui {start | stop | status}
Availability
Available on all IBM Spectrum Scale editions.
Description
- Install and configure GPFS.
- Add GPFS nodes to an existing cluster.
- Deploy and configure SMB, NFS, object (OpenStack Swift), and performance monitoring tools on top of GPFS.
- Configure authentication services for protocols.
- Enable and configure the file audit logging function.
- Enable and configure the call home function.
- Enable and configure the watch folder function.
- Configure recovery groups and vdisk sets, and define file systems for an IBM Spectrum Scale Erasure Code Edition environment.
- Upgrade IBM Spectrum Scale components.
- Perform offline upgrade of nodes that have services that are down or stopped.
- Exclude one or more nodes from the current upgrade run.
- Resume an upgrade run after a failure.
- The installation toolkit requires the following packages:
- python-2.7
- net-tools
- TCP traffic from the nodes must be allowed through the firewall so that the nodes can communicate with the installation toolkit: port 8889 is used for communication with the Chef zero server, and port 10080 is used for package distribution.
- The nodes themselves must have external Internet access, or access to local repository replicas that the nodes can reach, to install necessary packages (dependency installation). For more information, see the Repository setup section of the Installation prerequisites topic in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
- To install protocols, there must be a GPFS cluster running a minimum version of 4.1.1.0 with CCR enabled.
- The node that you plan to run the installation toolkit from must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages.
- Any node that is set up to be a call home node must have network connectivity to IBM Support to upload data.
- Check whether passwordless SSH is set up between all admin nodes and all the other nodes in the cluster. If this check fails, a fatal error occurs.
- Check whether passwordless SSH is set up between all protocol nodes and all the other nodes in the cluster. If this check fails, a warning is displayed.
- Check whether passwordless SSH is set up between all protocol nodes in the cluster. If this check fails, a fatal error occurs.
Parameters
- setup
- Installs Chef and its components, and configures the install node in the cluster definition file. The IP address passed in should be that of the node from which the installation toolkit will be run. The SSH key passed in should be the key that the installer uses for passwordless SSH to all other nodes. This is the first command you run to set up IBM Spectrum Scale. This option accepts the following arguments:
- -i SSHIdentity
- Adds the path to the SSH identity file into the configuration.
- -s ServerIP
- Adds the control node IP into the configuration.
- -st {"ss","SS","ess","ESS","ece","ECE"}
- Specifies the setup type. The allowed values are ess, ece, and ss. The default value is ss.
- If you are using the installation toolkit in a cluster containing ESS, specify the setup type as ess.
- If you are using the installation toolkit in an IBM Spectrum Scale Erasure Code Edition cluster, specify the setup type as ece.
- The setup type ss specifies an IBM Spectrum Scale cluster containing no ESS nodes.
Regardless of the mode, the installation toolkit contains safeguards to prevent changing a tuned ESS configuration. While adding a node to the installation toolkit, it checks whether the node is currently in an existing cluster and, if so, it checks the node class. ESS I/O server nodes are detected based upon existence within the gss_ppc64 node class. ESS EMS nodes are detected based upon existence within the ems node class. ESS I/O server nodes are not allowed to be added to the installation toolkit and must be managed by the ESS toolsets contained in the EMS node. A single ESS EMS node is allowed to be added to the installation toolkit; doing so adds this node as an admin node of the installation toolkit functions. While the installation toolkit runs from a non-ESS node, it uses the designated admin node (an EMS node in this case) to run mm commands on the cluster as a whole. Once in the ESS mode, the following assumptions and restrictions apply:
- File audit logging is not configurable using the installation toolkit.
- Call home is not configurable using the installation toolkit.
- EMS node will be the only admin node designated in the installation toolkit. This designation will automatically occur when the EMS node is added.
- EMS node will be the only GUI node allowed in the installation toolkit. Additional existing GUI nodes can exist but they cannot be added.
- EMS node will be the only performance monitoring collector node allowed within the installation toolkit. Additional existing collectors can exist but they cannot be added.
- EMS node cannot be designated as an NSD or a protocol node.
- I/O server nodes cannot be added to the installation toolkit. These nodes must be managed outside the installation toolkit by ESS toolsets contained in the EMS node.
- NSDs and file systems managed by the I/O server nodes cannot be added to the installation toolkit.
- File systems managed by the I/O server nodes can be used for placement of the object fileset as well as the CES shared root file system. Simply point the installation toolkit to the path.
- The cluster name is set upon addition of the EMS node to the installation toolkit. It is determined by mmlscluster being run from the EMS node.
- EMS node must have passwordless SSH set up to all nodes, including any protocol, NSD, and client nodes being managed by the installation toolkit.
- EMS node can be a different architecture or operating system than the protocol, NSD, and client nodes being managed by the installation toolkit.
- If the config populate function is used, an EMS node of a different architecture or operating system than the protocol, NSD, and client nodes can be used.
- If the config populate function is used, a mix of architectures within the non-ESS nodes being added or currently within the cluster cannot be used. To handle this case, use the installation toolkit separately for each architecture grouping. Run the installation toolkit from a node with similar architecture to add the required nodes. Add the EMS node and use the setup type ess.
- --storesecret
- Disables the prompts for the encryption secret. CAUTION: If you use this option, passwords will not be securely stored.
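For example, a hypothetical setup invocation for a non-ESS cluster might look like the following (the server IP address and identity file path are placeholders):
./spectrumscale setup -s 192.168.0.1 -i /root/.ssh/id_rsa -st ss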
- node
- Used to add, remove, or list nodes in the cluster definition file. This command only interacts with this
configuration file and does not directly configure nodes in the cluster itself. The nodes that have
an entry in the cluster definition file will be used during
install, deploy, or upgrade. This option accepts the following arguments:
- add Node
- Adds the specified node and configures it according to the following arguments:
- -g
- Adds GPFS Graphical User Interface servers to the cluster definition file.
- -q
- Configures the node as a quorum node.
- -m
- Configures the node as a manager node.
- -a
- Configures the node as an admin node.
- -n
- Specifies the node as an NSD server.
- -e
- Specifies the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
- -c
- Specifies the node as a call home node.
- -b
- Specifies the node as a broker node for the message queue for file audit logging and watch folder functions. Note: If the setup type is ESS or ess in the cluster definition file, the use of the -b flag is blocked. You must manually enable the message queue and the file audit logging function after the Kafka packages are installed on nodes other than the EMS and I/O server nodes in a cluster containing ESS. For more information, see Manually installing file audit logging.
- -p
- Configures the node as a protocol node.
- -so
- Specifies the node as a scale-out node. The setup type must be ece for adding this type of node in the cluster definition.
- Node
- Specifies the node name.
- load NodeFile
- Loads the specified file containing a list of nodes, one per line; adds the nodes specified in the file and configures them according to the following:
- -g
- Sets the nodes as GPFS Graphical User Interface server.
- -q
- Sets the nodes as quorum nodes.
- -m
- Sets the nodes as manager nodes.
- -a
- Sets the nodes as admin nodes.
- -n
- Sets the nodes as NSD servers.
- -e
- Sets the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
- -c
- Sets the nodes as call home nodes.
- -b
- Sets the nodes as broker nodes for the message queue for file audit logging and watch folder functions.
- -p
- Sets the nodes as protocol nodes.
- -so
- Sets the nodes as scale-out nodes. The setup type must be ece for adding this type of node in the cluster definition.
- delete Node
- Removes the specified node from the configuration. The following option is accepted.
- -f
- Forces the action without manual confirmation.
- clear
- Clears the current node configuration. The following option is accepted:
- -f
- Forces the action without manual confirmation.
- list
- Lists the nodes configured in your environment.
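For example, a hypothetical sequence that adds one node with quorum, manager, and admin roles and then loads NSD server nodes from a file (the host names and the file name are placeholders):
./spectrumscale node add node1.example.com -q -m -a
./spectrumscale node load nodes.txt -n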
- config
- Used to set properties in the cluster definition file that
will be used during install, deploy, or upgrade. This command only interacts with this configuration
file and does not directly configure these properties on the GPFS cluster. This option accepts the following arguments:
- gpfs
- Sets any of the following GPFS-specific properties to be
used during GPFS installation and configuration:
- -l
- Lists the current settings in the configuration.
- -c ClusterName
- Specifies the GPFS cluster name.
- -p
- Specifies the profile to be set on cluster creation. The following values are accepted:
- default
- Specifies that the GpfsProtocolDefaults profile is to be used.
- randomio
- Specifies that the GpfsProtocolRandomIO profile is to be used.
- -r RemoteShell
- Specifies the remote shell binary to be used by GPFS. If no remote shell is specified in the cluster definition file, /usr/bin/ssh will be used as the default.
- -rc RemoteFileCopy
- Specifies the remote file copy binary to be used by GPFS. If no remote file copy binary is specified in the cluster definition file, /usr/bin/scp will be used as the default.
- -e EphemeralPortRange
- Specifies an ephemeral port range to be set on all GPFS nodes. If no port range is specified in the cluster definition file, 60000-61000 will be used as the default.
For information about ephemeral port range, see the topic about GPFS port usage in Miscellaneous advanced administration topics.
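For example, a sketch that names the cluster, sets the default profile, and sets the default ephemeral port range (the cluster name is a placeholder):
./spectrumscale config gpfs -c democluster.example.com -p default -e 60000-61000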
- protocols
- Provides details of the GPFS environment that will be used
during protocol deployment, according to the following options:
- -l
- Lists the current settings in the configuration.
- -f FileSystem
- Specifies the file system.
- -m MountPoint
- Specifies the shared file system mount point or path.
- -e ExportIPPool
- Specifies a comma-separated list of additional CES export IPs to configure on the cluster.
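For example (the file system name, mount point, and export IP addresses are placeholders):
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot -e 10.0.0.101,10.0.0.102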
- object
- Sets any of the following Object-specific properties to be used during Object deployment and
configuration:
- -l
- Lists the current settings in the configuration.
- -f FileSystem
- Specifies the file system.
- -m MountPoint
- Specifies the mount point.
- -e EndPoint
- Specifies the host name that will be used for access to the object store. This should be a round-robin DNS entry that maps to all CES IP addresses, or the address of a load balancer front end; this distributes the load of all keystone and object traffic that is routed to this host name. Therefore, the endpoint is an IP address in a DNS or in a load balancer that maps to a group of export IPs (that is, CES IPs that were assigned on the protocol nodes).
- -o ObjectBase
- Specifies the object base.
- -i InodeAllocation
- Specifies the inode allocation.
- -t AdminToken
- Specifies the admin token.
- -au AdminUser
- Specifies the user name for the admin.
- -ap AdminPassword
- Specifies the admin user password. This credential is for the Keystone administrator. This user can be local or on a remote authentication server based on the authentication type used. Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
- -su SwiftUser
- Specifies the Swift user name. This credential is for the Swift services administrator. All Swift services are run in this user's context. This user can be local or on a remote authentication server based on the authentication type used.
- -sp SwiftPassword
- Specifies the Swift user password. Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
- -dp DatabasePassword
- Specifies the object database user password. Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
- -mr MultiRegion
- Enables the multi-region option.
- -rn RegionNumber
- Specifies the region number.
- -s3 on | off
- Specifies whether S3 is to be turned on or off.
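For example, a minimal object configuration sketch (all values are placeholders; you are prompted for a Secret Encryption Key when passwords are supplied):
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS -e objstore.example.com -o obj_base -au admin -ap AdminPassword -dp DatabasePassword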
- perfmon
- Sets performance monitoring specific properties to be used during installation and configuration:
- -r on | off
- Specifies if the install toolkit can reconfigure performance monitoring. Note: When set to on, reconfiguration might move the collector to different nodes and it might reset sensor data. Custom sensors and data might be erased.
- -d on | off
- Specifies if performance monitoring should be disabled (not installed). Note: When set to on, pmcollector and pmsensor packages are not installed or upgraded. Existing sensor or collector state remains as is.
- -l
- Lists the current settings in the configuration.
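For example, to allow the installation toolkit to reconfigure performance monitoring and then verify the setting:
./spectrumscale config perfmon -r on
./spectrumscale config perfmon -l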
- ntp
- Used to add, list, or remove NTP nodes to the configuration. NTP nodes will be configured on the
cluster as follows: the admin node will point to the upstream NTP servers that you provide to
determine the correct time. The rest of the nodes in the cluster will point to the admin node to
obtain the time.
- -s Upstream_Servers
- Specifies the host names of the upstream NTP servers to be used. You can use upstream servers that you have already configured, but they cannot be part of your IBM Spectrum Scale cluster. Note: NTP works best with at least four upstream servers. If you provide fewer than four, you will receive a warning during installation advising that you add more.
- -l List
- Lists the current settings of your NTP setup.
- -e on | off
- Specifies whether NTP is enabled or not. If this option is set to off, you will receive a warning during installation.
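For example, a hypothetical NTP configuration with four upstream servers (the server names are placeholders):
./spectrumscale config ntp -e on -s ntp1.example.com,ntp2.example.com,ntp3.example.com,ntp4.example.com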
- clear
- Removes specified properties from the cluster definition file:
- gpfs
- Removes GPFS related properties from the cluster definition file:
- -c
- Clears the GPFS cluster name.
- -p
- Clears the GPFS profile to be applied on cluster creation.
The following values are accepted:
- default
- Specifies that the GpfsProtocolDefaults profile is to be cleared.
- randomio
- Specifies that the GpfsProtocolRandomIO profile is to be cleared.
- -r RemoteShell
- Clears the absolute path name of the remote shell command GPFS uses for node communication. For example, /usr/bin/ssh.
- -rc RemoteFileCopy
- Clears the absolute path name of the remote copy command GPFS uses when transferring files between nodes. For example, /usr/bin/scp.
- -e EphemeralPortRange
- Clears the GPFS daemon communication port range.
- --all
- Clears all settings in the cluster definition file.
- protocols
- Removes protocols related properties from the cluster definition file:
- -f
- Clears the shared file system name.
- -m
- Clears the shared file system mount point or path.
- -e
- Clears a comma-separated list of additional CES export IPs to configure on the cluster.
- --all
- Clears all settings in the cluster definition file.
- object
- Removes object related properties from the cluster definition file:
- -f
- Clears the object file system name.
- -m
- Clears the absolute path to your file system on which the objects reside.
- -e
- Clears the host name which maps to all CES IP addresses in a round-robin manner.
- -o
- Clears the GPFS fileset to be created or used as the object base.
- -i
- Clears the GPFS fileset inode allocation to be used by the object base.
- -t
- Clears the admin token to be used by Keystone.
- -au
- Clears the user name for the admin user.
- -ap
- Clears the password for the admin user.
- -su
- Clears the user name for the Swift user.
- -sp
- Clears the password for the Swift user.
- -dp
- Clears the password for the object database.
- -s3
- Clears the S3 API setting, if it is enabled.
- -mr
- Clears the multi-region data file path.
- -rn
- Clears the region number for the multi-region configuration.
- --all
- Clears all settings in the cluster definition file.
- update
- Updates operating system and CPU architecture fields in the cluster definition file. This update is automatically done if you run the upgrade precheck while upgrading to IBM Spectrum Scale release 4.2.2 or later.
- populate
- Populates the cluster definition file with the current
cluster state. In the following upgrade scenarios, you might need to update the cluster definition file with the current cluster state:
- A manually created cluster in which you want to use the installation toolkit to perform administration tasks on the cluster such as adding protocols, adding nodes, and upgrading.
- A cluster created by using the installation toolkit in which manual changes were later done without using the toolkit, and you want to synchronize the installation toolkit with the updated cluster configuration.
- --node Node
- Specifies an existing node in the cluster that is used to query the cluster information. If you want to use the spectrumscale config populate command to retrieve data from a cluster containing ESS, you must specify the EMS node with the --node flag.
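For example, to populate the cluster definition file from an existing cluster (the node name is a placeholder; in a cluster containing ESS, this must be the EMS node):
./spectrumscale config populate --node ems1.example.com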
- nsd
- Used to add, remove, list, or balance NSDs, as well as add file systems in the cluster definition file. This command only interacts with this
configuration file and does not directly configure NSDs on the cluster itself. The NSDs that have an
entry in the cluster definition file will be used during install. This option accepts the following arguments:
- add
- Adds an NSD to the configuration, according to the following specifications:
- -p Primary
- Specifies the primary NSD server name.
- -s Secondary
- Specifies the secondary NSD server names. You can use a comma-separated list to specify up to seven secondary NSD servers.
- -fs FileSystem
- Specifies the file system to which the NSD is assigned.
- -po Pool
- Specifies the file system pool.
- -u
- Specifies NSD usage. The following values are accepted:
- dataOnly
- dataAndMetadata
- metadataOnly
- descOnly
- localCache
- -fg FailureGroup
- Specifies the failure group to which the NSD belongs.
- --no-check
- Specifies not to check for the device on the server.
- PrimaryDevice
- Specifies the device name on the primary NSD server.
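For example, a hypothetical command that adds two NSDs served by a primary and a secondary server (the host names, file system name, and device paths are placeholders):
./spectrumscale nsd add -p nsd1.example.com -s nsd2.example.com -fs fs1 -u dataAndMetadata -fg 1 /dev/dm-1 /dev/dm-2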
- balance
- Balances the NSD preferred node between the primary and secondary nodes. The following options
are accepted:
- --node Node
- Specifies the node to move NSDs from when balancing.
- --all
- Specifies that all NSDs are to be balanced.
- delete NSD
- Removes the specified NSD from the configuration.
- modify NSD
- Modifies the NSD parameters on the specified NSD, according to the following options:
- -n Name
- Specifies the new name for the NSD.
- -u
- Specifies NSD usage. The following values are accepted:
- dataOnly
- dataAndMetadata
- metadataOnly
- descOnly
- localCache
- -po Pool
- Specifies the pool.
- -fs FileSystem
- Specifies the file system.
- -fg FailureGroup
- Specifies the failure group.
- servers
- Adds and removes servers, and sets the primary server for NSDs.
- clear
- Clears the current NSD configuration. The following option is accepted:
- -f
- Forces the action without manual confirmation.
- list
- Lists the NSDs configured in your environment.
- filesystem
- Used to list or modify file systems in the cluster definition file. This command only interacts with this
configuration file and does not directly modify file systems on the cluster itself. To modify the
properties of a file system in the cluster definition file, the
file system must first be added with spectrumscale nsd. This option
accepts the following arguments:
- modify
- Modifies the file system attributes. This option accepts the following arguments:
- -B
- Specifies the file system block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
- -m MountPoint
- Specifies the mount point.
- -r
- Specifies the number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
- -mr
- Specifies the number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
- -MR
- Specifies the default maximum number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
- -R
- Specifies the default maximum number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
- --metadata_block_size
- Specifies the file system metadata block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
- --fileauditloggingenable
- Enables file audit logging on the specified file system.
- --degradedperformance
- Allows file audit logging to be enabled without many default performance enhancements. The --degradedperformance option reduces the amount of local disk space (10 GB instead of 20 GB) that is required per broker node per file system enabled for file audit logging.
- --degradedperformancedisable
- Disables the --degradedperformance option.
- --fileauditloggingdisable
- Disables file audit logging on the specified file system.
- --logfileset LogFileset
- Specifies the log fileset name for file audit logging. The default value is .audit_log.
- --log_fileset_device LogFilesetDevice
- Specifies the log fileset device name for file audit logging.
- --retention RetentionPeriod
- Specifies the file audit logging retention period in number of days. The default value is 365 days.
- FileSystem
- Specifies the file system to be modified.
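For example, a hypothetical command that sets a 4M block size, two replicas of data and metadata, and a mount point for file system fs1:
./spectrumscale filesystem modify -B 4M -m /ibm/fs1 -r 2 -mr 2 fs1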
- define
- Adds file system attributes in an IBM Spectrum Scale Erasure Code Edition environment. The setup type must be ece for using this option. Note: If you are planning to deploy protocols in the IBM Spectrum Scale Erasure Code Edition cluster, you must define a CES shared root file system before initiating the installation toolkit deployment phase by using the following command.
./spectrumscale config protocols -f FileSystem -m MountPoint
- -fs FileSystem
- Specifies the file system to which the vdisk set is to be assigned.
- -vs VdiskSet
- Specifies the vdisk sets to be affected by a file system operation.
- --mmcrfs MmcrfsParams
- Specifies that all command line parameters following the --mmcrfs flag must be passed to the IBM Spectrum Scale mmcrfs command and they must not be interpreted by the mmvdisk command.
- list
- Lists the file systems configured in your environment.
- fileauditlogging
- Enable, disable, or list the file audit logging configuration in the cluster definition file.
- enable
- Enables the file audit logging configuration in the cluster definition file.
- disable
- Disables the file audit logging configuration in the cluster definition file.
- list
- Lists the file audit logging configuration in the cluster definition file.
- watchfolder
- Enable, disable, or list the watch folder configuration in the cluster definition file.
- enable
- Enables the watch folder configuration in the cluster definition file.
- disable
- Disables the watch folder configuration in the cluster definition file.
- list
- Lists the watch folder configuration in the cluster definition file.
- recoverygroup
- Define, undefine, change, list, or clear recovery group related configuration in the cluster definition file in an IBM Spectrum
Scale Erasure Code Edition environment. The setup type must be
ece for using this option.
- define
- Defines recovery groups in the cluster definition file.
- -rg RgName
- Sets the name of the recovery group.
- -nc ScaleOutNodeClassName
- Sets the name of the scale-out node class.
- --node Node
- Specifies the scale-out node within an existing IBM Spectrum Scale Erasure Code Edition cluster for the server node class.
- undefine
- Undefines specified recovery group from the cluster definition file.
- RgName
- The name of the recovery group that is to be undefined.
- change
- Changes the recovery group name.
- ExistingRgName
- The name of the recovery group that is to be modified.
- -rg NewRgName
- The new name of the recovery group.
- clear
- Clears the current recovery group configuration from the cluster definition file.
- -f
- Forces operation without manual confirmation.
- list
- Lists the current recovery group configuration in the cluster definition file.
- vdiskset
- Define, undefine, list, or clear vdisk set related configuration in the cluster definition file in an IBM Spectrum
Scale Erasure Code Edition environment. The setup type must be
ece for using this option.
- define
- Defines vdisk sets in the cluster definition file.
- -vs VdiskSet
- Sets the name of the vdisk set.
- -rg RgName
- Specifies an existing recovery group with which the defined vdisk set is to be associated.
- -code
- Defines the erasure code. This argument accepts the following values: 4+2P, 4+3P, 8+2P, and 8+3P.
- -bs
- Specifies the block size for a vdisk set definition. This argument accepts the following values: 1M, 2M, 4M, 8M, and 16M.
- -ss VdiskSetSize
- Defines the vdisk set size as a percentage of the available storage space.
- undefine
- Undefines specified vdisk set from the cluster definition file.
- VdiskSet
- The name of the vdisk set that is to be undefined.
- clear
- Clears the current vdisk set configuration from the cluster definition file.
- -f
- Forces operation without manual confirmation.
- list
- Lists the current vdisk set configuration in the cluster definition file.
- callhome
- Used to enable, disable, configure, schedule, or list call home configuration in the cluster definition file.
- enable
- Enables call home in the cluster definition file.
- disable
- Disables call home in the cluster definition file. The call home function is enabled by default in the cluster definition file. If you disable it in the cluster definition file, the call home packages are installed on the nodes but no configuration is done by the installation toolkit.
- config
- Configures call home settings in the cluster definition file.
- -n CustomerName
- Specifies the customer name for the call home configuration.
- -i CustomerID
- Specifies the customer ID for the call home configuration.
- -e CustomerEmail
- Specifies the customer email address for the call home configuration.
- -cn CustomerCountry
- Specifies the customer country code for the call home configuration.
- -s ProxyServerIP
- Specifies the proxy server IP address for the call home configuration. This is an optional
parameter.
If you are specifying the proxy server IP address, the proxy server port must also be specified.
- -pt ProxyServerPort
- Specifies the proxy server port for the call home configuration. This is an optional
parameter.
If you are specifying the proxy server port, the proxy server IP address must also be specified.
- -u ProxyServerUserName
- Specifies the proxy server user name for the call home configuration. This is an optional parameter.
- -pw ProxyServerPassword
- Specifies the proxy server password for the call home configuration. This is an optional
parameter.
If you do not specify a password on the command line, you are prompted for a password.
- -a
- When you specify the call home configuration settings by using the ./spectrumscale callhome config command, you are prompted to accept or decline the support information collection message. Use the -a parameter to accept that message in advance. This is an optional parameter.
If you do not specify the -a parameter on the command line, you are prompted to accept or decline the support information collection message.
- clear
- Clears the specified call home settings from the cluster definition file.
- --all
- Clears all call home settings from the cluster definition file.
- -n
- Clears the customer name from the call home configuration in the cluster definition file.
- -i
- Clears the customer ID from the call home configuration in the cluster definition file.
- -e
- Clears the customer email address from the call home configuration in the cluster definition file.
- -cn
- Clears the customer country code from the call home configuration in the cluster definition file.
- -s
- Clears the proxy server IP address from the call home configuration in the cluster definition file.
- -pt
- Clears the proxy server port from the call home configuration in the cluster definition file.
- -u
- Clears the proxy server user name from the call home configuration in the cluster definition file.
- -pw
- Clears the proxy server password from the call home configuration in the cluster definition file.
- schedule
- Specifies the call home data collection schedule in the cluster definition file.
By default, the call home data collection is enabled in the cluster definition file and it is set for a daily and a weekly schedule. Daily data uploads are by default executed at 02:xx AM each day. Weekly data uploads are by default executed at 03:xx AM each Sunday. In both cases, xx is a random number from 00 to 59. You can use the spectrumscale callhome schedule command to set either a daily or a weekly call home data collection schedule.
- -d
- Specifies a daily call home data collection schedule.
If call home data collection is scheduled daily, data uploads are executed at 02:xx AM each day. xx is a random number from 00 to 59.
- -w
- Specifies a weekly call home data collection schedule.
If call home data collection is scheduled weekly, data uploads are executed at 03:xx AM each Sunday. xx is a random number from 00 to 59.
- -c
- Clears the call home data collection schedule in the cluster definition file.
The call home configuration can still be applied without a schedule being set. In that case, you either need to manually run and upload data collections, or you can set the call home schedule to the desired interval at a later time by using one of the following commands: ./spectrumscale callhome schedule -d (daily), ./spectrumscale callhome schedule -w (weekly), or ./spectrumscale callhome schedule -d -w (both daily and weekly).
- list
- Lists the call home configuration specified in the cluster definition file.
- auth
- Used to configure either Object or File authentication on protocols in the cluster definition file. This command only interacts with this
configuration file and does not directly configure authentication on the protocols. To configure
authentication on the GPFS cluster during a deploy,
authentication settings must be provided through the use of a template file. This option accepts the
following arguments:
- file
- Specifies file authentication. One of the following must be specified:
- ldap
- ad
- nis
- none
- object
- Specifies object authentication. Either of the following options are accepted:
- --https
One of the following must be specified:
- local
- external
- ldap
- ad
Both file and object authentication can be set up with the authentication backend server specified. Running this command will open a template settings file to be filled out before installation.
- commitsettings
- Merges authentication settings into the main cluster definition file.
- clear
- Clears your current authentication configuration.
- enable
- Used to enable Object, SMB or NFS in the cluster definition file. This command only interacts with this
configuration file and does not directly enable any protocols on the GPFS cluster itself. The default configuration is that all protocols are
disabled. If a protocol is enabled in the cluster definition file, this protocol will be enabled on the GPFS cluster during
deploy. This option accepts the following arguments:
- obj
- Object
- nfs
- NFS
- smb
- SMB
- disable
- Used to disable Object, SMB or NFS in the cluster definition file. This command only interacts with this
configuration file and does not directly disable any protocols on the GPFS cluster itself. The default configuration is that all protocols are
disabled, so this command is only necessary if a protocol has previously been enabled in the cluster
definition file, but is no longer required. Note: Disabling a protocol in the cluster definition file will not disable this protocol on the GPFS cluster during a deploy; it merely means that this protocol will not be enabled during a deploy.
This option accepts the following arguments:
- obj
- Object. CAUTION: Disabling the object service discards the OpenStack Swift configuration and ring files from the CES cluster. If OpenStack Keystone is configured locally, disabling object storage also discards the Keystone configuration and database files from the CES cluster. However, the data is not removed. For subsequent object service enablement with a clean configuration and new data, remove the object store fileset and set up the object environment. See the mmobj swift base command. For more information, contact the IBM Support Center.
- nfs
- NFS
- smb
- SMB
- install
- Installs GPFS, creates a GPFS cluster, creates NSDs, and adds nodes to an existing GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-install and post-install checks will be performed automatically). For a dry run, the following arguments are accepted:
- -pr
- Performs a pre-install environment check.
- -po
- Performs a post-install environment check.
- -s SecretKey
- Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
- -f
- Forces action without manual confirmation.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef related prompts and all answers are considered as yes.
- deploy
- Creates file systems, deploys protocols, and configures protocol authentication on an existing
GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration
steps have been completed, this option can be run with no arguments (and pre-deploy and post-deploy
checks will be performed automatically). However, the secret key will be prompted for unless it is
passed in as an argument using the -s flag.
For a dry run, the following arguments are accepted:
- -pr
- Performs a pre-deploy environment check.
- -po
- Performs a post-deploy environment check.
- -s SecretKey
- Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
- -f
- Forces action without manual confirmation.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef related prompts and all answers are considered as yes.
- upgrade
- Performs upgrade procedure, upgrade precheck, upgrade postcheck, and upgrade related
configuration to add nodes as offline, or exclude nodes from the upgrade run.
- precheck
- Performs health checks on the cluster prior to the upgrade. During the upgrade precheck, the installation toolkit displays messages in a number of scenarios, including:
- If the installed Chef version is different from the supported versions.
- If there are AFM relationships in the cluster. All file systems that have associated AFM primary or cache filesets are listed and reference to procedure for stopping and restarting replication is provided.
- config
- Manages upgrade related configuration in the cluster definition file.
- offline
- Designates specified nodes in the cluster as offline for the upgrade run.
For entities designated as offline, only the packages are upgraded during the upgrade; the services
are not restarted after the upgrade. You can use this option to designate those
nodes as offline that have services down or
stopped, or that have unhealthy components that are flagged in the upgrade precheck.
- -N Node
- You can specify one or more nodes that are a part of the cluster that is being upgraded with -N
in a comma-separated list. For example: node1,node2,node3
If the nodes being specified as offline are protocol nodes, then all components (GPFS, SMB, NFS, and object) are added as offline in the cluster configuration. If the nodes being specified as offline are not protocol nodes, then GPFS is added as offline in the cluster configuration.
- --clear
- Clears the offline nodes information from the cluster configuration.
- exclude
- Designates specified nodes in a cluster to be excluded from the upgrade run. For nodes
designated as excluded, the installation toolkit does not perform any action during the upgrade.
This option allows you to upgrade a subset of a cluster. Note: Nodes that are designated as excluded must be upgraded at a later time to complete the cluster upgrade.
- -N Node
- You can specify one or more nodes that are a part of the cluster that is being upgraded with -N in a comma-separated list. For example: node1,node2,node3
- --clear
- Clears the excluded nodes information from the cluster configuration.
- list
- Lists the upgrade related configuration information in the cluster definition file.
- clear
- Clears the upgrade related configuration in the cluster definition file.
- run
- Upgrades components of an existing IBM Spectrum Scale cluster.
This command can be used even if not all protocols are enabled. If a protocol is not enabled, the respective packages are still upgraded, but the respective service is not started. The installation toolkit uses environment details in the cluster definition file to perform upgrade tasks.
The installation toolkit includes the ability to determine if an upgrade is being run for the first time or if it is a rerun of a failed upgrade.
To perform environment health checks prior to and after the upgrade, run the ./spectrumscale upgrade command using the precheck and postcheck arguments. This is not required, however, because specifying upgrade run with no arguments also runs these checks.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef related prompts and all answers are considered as yes.
- postcheck
- Performs health checks on the cluster after the upgrade has been completed.
- showversions
- Shows installed versions of GPFS and protocols and available versions of these components in the configured repository.
- installgui
- Invokes the installation GUI that can be used to install the IBM Spectrum Scale software on cluster nodes, create an IBM Spectrum Scale cluster, and configure NTP. The installation GUI is used only for installing the system; a separate management GUI must be used for configuring and managing the system. The installation GUI cannot be used to upgrade the software in an existing IBM Spectrum Scale system. For more information, see Installing IBM Spectrum Scale by using the graphical user interface (GUI). Note: The installation GUI is deprecated and will be removed in a future release.
- start
- Starts the installation GUI.
- status
- Displays the status of the processes that are running on the installation GUI.
- stop
- Stops the installation GUI through the CLI. The installation process through the GUI automatically stops when you exit the installation GUI.
Exit status
- 0
- Successful completion.
- nonzero
- A failure has occurred.
Security
You must have root authority to run the spectrumscale command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
Examples
Creating a new IBM Spectrum Scale cluster
- To instantiate your Chef zero server, issue a command similar to the
following:
spectrumscale setup -s 192.168.0.1
- To designate NSD server nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN -n
- To add four non-shared NSDs seen by a primary NSD server only, issue this
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
- To add four non-shared NSDs seen by both a primary NSD server and a secondary NSD server, issue
this
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server -s FQDN_of_Secondary_NSD_Server \
/dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
- To define a shared root file system using two NSDs and a file system fs1
using two NSDs, issue these
commands:
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale nsd modify nsd1 -fs cesSharedRoot
./spectrumscale nsd modify nsd2 -fs cesSharedRoot
./spectrumscale nsd modify nsd3 -fs fs1
./spectrumscale nsd modify nsd4 -fs fs1
- To designate GUI nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN -g -a
- To designate additional client nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN
- To allow the installation toolkit to reconfigure Performance Monitoring if it detects any
existing configurations, issue this
command:
./spectrumscale config perfmon -r on
- To name your cluster, issue this
command:
./spectrumscale config gpfs -c Cluster_Name
- To configure the call home function with the mandatory parameters,
issue this
command:
./spectrumscale callhome config -n username -i 456123 -e username@example.com -cn US
If you do not want to use call home, disable it by issuing the following command:
./spectrumscale callhome disable
For more information, see Enabling and configuring call home using the installation toolkit.
- To review the configuration prior to installation, issue these
commands:
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs --list
- To start the installation on your defined environment, issue these
commands:
./spectrumscale install --precheck
./spectrumscale install
- To deploy file systems after a successful installation, do one of the following depending on
your requirement:
- If you want to use only the file systems, issue these
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
- If you want to deploy protocols also, see the examples in the Deploying protocols on an existing cluster section.
Deploying protocols on an existing cluster
- To instantiate your Chef zero server, issue a command similar to the
following:
spectrumscale setup -s 192.168.0.1
- To designate protocol nodes in your environment to use for the deployment, issue this
command:
./spectrumscale node add FQDN -p
- To designate GUI nodes in your environment to use for the deployment, issue this
command:
./spectrumscale node add FQDN -g -a
- To configure protocols to point to a file system that will be used as a shared root, issue this
command:
./spectrumscale config protocols -f FS_Name -m Shared_FS_Mountpoint_Or_Path
- To configure a pool of export IPs, issue this
command:
./spectrumscale config protocols -e Comma_Separated_List_of_Exportpool_IPs
- To enable NFS on all protocol nodes, issue this
command:
./spectrumscale enable nfs
- To enable SMB on all protocol nodes, issue this
command:
./spectrumscale enable smb
- To enable Object on all protocol nodes, issue these
commands:
./spectrumscale enable object
./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
./spectrumscale config object -e FQDN
./spectrumscale config object -f FS_Name -m FS_Mountpoint
./spectrumscale config object -o Object_Fileset
- To enable file audit logging, issue the following
command:
./spectrumscale fileauditlogging enable
For more information, see Enabling and configuring file audit logging using the installation toolkit.
- To review the configuration prior to deployment, issue these
commands:
./spectrumscale config protocols
./spectrumscale config object
./spectrumscale node list
- To deploy protocols on your defined environment, issue these
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
Deploying protocol authentication
- To enable file authentication with AD server on all protocol nodes, issue this
command:
./spectrumscale auth file ad
Fill out the template and save the information, and then issue the following commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
- To enable Object authentication with AD server on all protocol nodes, issue this
command:
./spectrumscale auth object ad
Fill out the template and save the information, and then issue the following commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
Upgrading an IBM Spectrum Scale cluster
- Extract the IBM Spectrum
Scale package for the
required code level by issuing a command similar to the following depending on the package
name:
./Spectrum_Scale_Protocols_Standard-5.0.x.x-xxxxx
- Copy the cluster definition file from the prior installation
to the latest installer location by issuing this
command:
cp -p /usr/lpp/mmfs/4.2.3.0/installer/configuration/clusterdefinition.txt \
/usr/lpp/mmfs/5.0.3.x/installer/configuration/
Note: This is a command example of a scenario where you are upgrading the system from 4.2.3.0 to 5.0.3.0. You can populate the cluster definition file with the current cluster state by issuing the spectrumscale config populate command.
- Run the upgrade precheck from the installer directory of the latest code level extraction by
issuing commands similar to the
following:
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
./spectrumscale upgrade precheck
Note: If you are upgrading to IBM Spectrum Scale version 4.2.2, the upgrade precheck updates the operating system and CPU architecture fields in the cluster definition file. You can also update these fields by issuing the spectrumscale config update command.
- [Optional] Specify nodes as offline by issuing the following command, if services running on these nodes are stopped or down.
./spectrumscale upgrade config offline -N Node
- [Optional] Exclude nodes that you do not want to upgrade at
this point by issuing the following
command.
./spectrumscale upgrade config exclude -N Node
- Run the upgrade by issuing this
command:
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
./spectrumscale upgrade run
Adding to an installation process
- To add nodes to an installation, do the following:
- Add one or more node types using the following commands:
- Client nodes:
./spectrumscale node add FQDN
- NSD nodes:
./spectrumscale node add FQDN -n
- Protocol nodes:
./spectrumscale node add FQDN -p
- GUI nodes:
./spectrumscale node add FQDN -g -a
- Install GPFS on the new nodes using the following
commands:
./spectrumscale install --precheck
./spectrumscale install
- If protocol nodes are being added, deploy protocols using the following
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
- To add NSDs to an installation, do the following:
- Verify that the NSD server connecting this new disk runs an OS compatible with the installation toolkit and that the NSD server exists within the cluster.
- Add NSDs to the installation using the following
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server Path_to_Disk_Device_File
- Run the installation using the following
commands:
./spectrumscale install --precheck
./spectrumscale install
- To add file systems to an installation, do the following:
- Verify that free NSDs exist and that they can be listed by the installation toolkit using the following
commands.
mmlsnsd
./spectrumscale nsd list
- Define the file system using the following
command:
./spectrumscale nsd modify NSD -fs File_System_Name
- Deploy the file system using the following
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
- To enable another protocol on an existing cluster that has protocols enabled, do the following
steps depending on your configuration:
- Enable NFS on all protocol nodes using the following
command:
./spectrumscale enable nfs
- Enable SMB on all protocol nodes using the following
command:
./spectrumscale enable smb
- Enable Object on all protocol nodes using the following
commands:
./spectrumscale enable object
./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
./spectrumscale config object -e FQDN
./spectrumscale config object -f FS_Name -m FS_Mountpoint
./spectrumscale config object -o Object_Fileset
- Enable the new protocol using the following
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
Using the installation toolkit in a cluster containing ESS
- Add protocol nodes in the ESS cluster by issuing the following
command.
./spectrumscale node add NodeName -p
You can add other types of nodes such as client nodes, NSD servers, and so on depending on your requirements. For more information, see Defining the cluster topology for the installation toolkit.
- Specify one of the newly added protocol nodes as the installer node and specify the setup type
as ess by issuing the following
command.
./spectrumscale setup -s NodeIP -i SSHIdentity -st ess
The installer node is the node on which the installation toolkit is extracted and from where the installation toolkit command, spectrumscale, is initiated.
- Specify the EMS node of the ESS system to the installation toolkit by issuing the following
command.
./spectrumscale node add NodeName -e
This node is also automatically specified as the admin node. The admin node, which must be the EMS node in an ESS configuration, is the node that has access to all other nodes to perform configuration during the installation.
- Proceed with specifying other configuration options, installing, and deploying by using the installation toolkit. For more information, see Defining the cluster topology for the installation toolkit, Installing GPFS and creating a GPFS cluster, and Deploying protocols.
Manually adding protocols to a cluster containing ESS
For information on preparing a cluster that contains ESS for deploying protocols, see Preparing a cluster that contains ESS for adding protocols.
After you have prepared your cluster that contains ESS for adding protocols, you can use commands similar to the ones listed in the Deploying protocols on an existing cluster section.
Using the installation toolkit in an IBM Spectrum Scale Erasure Code Edition environment
- Specify the installer node and the setup type as ece in the cluster definition file for IBM Spectrum
Scale Erasure Code Edition.
./spectrumscale setup -s InstallerNodeIP -st ece
- Add scale-out nodes for IBM Spectrum
Scale Erasure Code Edition in the cluster definition file.
./spectrumscale node add NodeName -so
- Define the recovery group for IBM Spectrum
Scale Erasure Code Edition in the
cluster definition file.
./spectrumscale recoverygroup define -N Node1,Node2,...,NodeN
- Define vdisk sets for IBM Spectrum
Scale Erasure Code Edition in the cluster definition file.
./spectrumscale vdiskset define -rg RgName -code RaidCode -bs BlockSize -ss SetSize
- Define the file system for IBM Spectrum
Scale Erasure Code Edition in the
cluster definition file.
./spectrumscale filesystem define -fs FileSystem -vs VdiskSet
Diagnosing an error during install, deploy, or upgrade
- Note the screen output indicating the error. This helps in narrowing down the general
failure.
When a failure occurs, the screen output also shows the log file containing the failure.
- Open the log file in an editor such as vi.
- Go to the end of the log file and search upwards for the text FATAL.
- Find the topmost occurrence of FATAL (or first FATAL error that occurred) and look above and below this error for further indications of the failure.
For more information, see Finding deployment related error messages more easily and using them for failure analysis.