Creating an IBM Spectrum Scale cluster
When you create an IBM Spectrum Scale™ cluster, you can either create a small cluster of one or more nodes and later add nodes to it, or you can create a cluster with all of its nodes in one step.
- Run the mmcrcluster command to create a cluster that contains one or more nodes, and later run the mmaddnode command as needed to add nodes to the cluster. This method is more flexible and is appropriate when you want to build a cluster step by step.
- Run the mmcrcluster command to create a cluster and at the same time to add a set of nodes to the cluster. This method is quicker when you already know which nodes you want to add to the cluster.
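The two approaches can be sketched as follows. The node names and the node file path used here are examples only, not values from your environment:

```shell
# Method 1: create a one-node cluster, then grow it step by step.
mmcrcluster -N h135n01.frf.ibm.com:quorum-manager
mmaddnode -N h135n02.frf.ibm.com,h135n03.frf.ibm.com

# Method 2: create the cluster with all of its nodes in one step,
# using a node descriptor file (one node descriptor per line).
mmcrcluster -N /tmp/nodefile
```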
Each node is specified to these commands with a node descriptor of the following format:
NodeName:NodeDesignations:AdminNodeName
- NodeName
- The host name or IP address of the node for GPFS daemon-to-daemon communication. The host name or IP address that is used for a node must refer to the communication adapter over which the GPFS daemons communicate. Alias names are not allowed. You can specify an IP address at NSD creation, but it is converted to a host name that must match the GPFS node name. You can specify a node in any of these forms:
- Short host name (for example, h135n01)
- Long host name (for example, h135n01.frf.ibm.com)
- IP address (for example, 7.111.12.102)
Note: Host names should always include at least one alphabetic character, and they should not start with a hyphen (-). Whichever form you specify, the other two forms must be defined correctly in DNS or the hosts file.
- NodeDesignations
- An optional list of node roles, separated by "-" (hyphen).
- manager | client – Indicates whether a node is part of the node pool from which file system managers and token managers can be selected. The default is client, which means the node is not included in the pool of manager nodes. For detailed information about the manager node functions, see The file system manager.
In general, it is a good idea to define more than one node as a manager node. How many nodes you designate as manager depends on the workload and the number of GPFS server licenses you have. If you are running large parallel jobs, you might need more manager nodes than in a four-node cluster supporting a web application. As a guide, in a large system there should be a different file system manager node for each GPFS file system.
- quorum | nonquorum – Specifies whether the node is included in the pool of nodes from which quorum is derived. The default is nonquorum. You must designate at least one node as a quorum node. It is recommended that you designate at least the primary and secondary cluster configuration servers and the NSD servers as quorum nodes.
How many quorum nodes you designate depends on whether you use node quorum or node quorum with tiebreaker disks. See Quorum.
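If you need to change a node's designation after the cluster exists, the mmchnode command can be used. A minimal sketch, assuming a hypothetical node name:

```shell
# Promote an existing node to a quorum node (node name is an example)...
mmchnode --quorum -N h135n03.frf.ibm.com

# ...or demote it back to a nonquorum node.
mmchnode --nonquorum -N h135n03.frf.ibm.com
```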
- AdminNodeName
- An optional field that specifies a node name to be used by the administration commands to communicate between nodes. If AdminNodeName is not specified, the NodeName value is used.
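Putting the three fields together, a node descriptor file passed to mmcrcluster might look like the following. The host names are examples only; trailing fields can be omitted to accept the defaults:

```
h135n01.frf.ibm.com:quorum-manager
h135n02.frf.ibm.com:quorum
h135n03.frf.ibm.com::h135n03-admin.frf.ibm.com
h135n04.frf.ibm.com
```

The third line shows an AdminNodeName with the NodeDesignations field left empty, so that node takes the default nonquorum and client roles.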
- While a node can mount file systems from multiple clusters, the node itself can belong to only one cluster. Nodes are added to a cluster with the mmcrcluster or mmaddnode command.
- The nodes must be available when they are added to a cluster. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and then issue the mmaddnode command to add those nodes.
- Designate at least one node, and preferably not more than seven nodes, as quorum nodes. When you are not using tiebreaker disks, you can designate more quorum nodes, but it is recommended to use fewer than eight if possible. When using the server-based configuration repository, it is recommended that you designate the cluster configuration servers as quorum nodes. How many quorum nodes you have altogether depends on whether you intend to use the node quorum with tiebreaker algorithm or the regular node-based quorum algorithm. For more details, see Quorum.
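If you choose node quorum with tiebreaker disks, the tiebreaker disks are defined with mmchconfig after the cluster and its NSDs exist. A sketch, assuming hypothetical NSD names:

```shell
# Define up to three NSDs as tiebreaker disks (NSD names are examples).
mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"
```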