db2cluster - Manage Db2 cluster services command
The db2cluster command is used to perform management operations that are related to Db2 cluster services.
The db2cluster command is functionally similar to the interactive db2haicu tool, but it contains a much wider array of options for administering a Db2 pureScale® environment.
Authorization
The options of the db2cluster command that you can use depend on your authorization level. Some options can be specified only by the Db2 cluster services administrator. Other options can be specified only if you are part of the SYSADM, SYSCTL, or SYSMAINT group, and a smaller subset of commands can be run by any user ID on the system. See the Command parameters section for information on the authorities that are required for each option. In addition, there is a set of advanced troubleshooting options for the db2cluster command, which can be used only under the guidance of service.
Command syntax - standard options
Command syntax - advanced options
Command parameters
- -cm
- Specifies a resource-based command or maintenance operation.
- -set
- Specifies the tiebreaker device for the cluster manager, the host failure detection time, or the preferred primary cluster caching facility.
- -tiebreaker
- Specifies the type of device to be used as the Db2 cluster services tiebreaker. This option is only available to the Db2 cluster services administrator. It is important to ensure that the CM and CFS both use the same tiebreaker type. Run db2cluster -verify -req -topology to perform the cluster topology verification.
- -option
-
- HostFailureDetectionTime -value value
- Specifies the length of time (a range of 1 to 60 seconds) for detecting a host failure or network partition in the cluster. This option is only available to the Db2 cluster services administrator.
- force
- If you attempt to set HostFailureDetectionTime without force and the result is an error that indicates the cluster file system is still active, resubmit the command with the force option. Including the force option ensures that the cluster file system remains down while setting HostFailureDetectionTime.
- pprimary -value value
- Specifies which cluster caching facility Db2 cluster services will attempt to start in the primary role. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group.
- autofailback -value
- Specifies whether automatic failback of a member to its home host is immediate (enabled) or delayed (disabled) until automatic failback is manually enabled by the administrator. Automatic failback is enabled by default. A change to this parameter takes effect after the instance is restarted.
- off
- Specify the off parameter to disable immediate automatic failback of the member to its home host when the home host becomes available. This setting provides time to verify the health of the restarted home host before reintegrating it into the cluster.
- on
- Specify the on parameter to enable immediate automatic failback of the member to its home host when the home host becomes available.
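As a sketch, the -cm -set operations above might be invoked as follows. The instance name and CF ID are hypothetical, and a running Db2 pureScale cluster is assumed:

```shell
# Hypothetical instance name; requires a Db2 pureScale installation.
export DB2INSTANCE=db2inst1

# Set the host failure detection time to 8 seconds (valid range: 1-60);
# must be run as the Db2 cluster services administrator.
db2cluster -cm -set -option HostFailureDetectionTime -value 8

# Designate CF 128 (hypothetical ID) as the preferred primary CF.
db2cluster -cm -set -option pprimary -value 128

# Delay automatic failback; takes effect after the instance is restarted.
db2cluster -cm -set -option autofailback -value off
```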
- -list
- Returns details about the following:
- -tiebreaker
- Lists the type of device being used as the Db2 cluster services tiebreaker.
- -zout
- Displays output in a format that can be consumed by an application.
- -alert
- Lists any alerts for cluster elements.
- -HostFailureDetectionTime
- Lists how long it takes Db2 cluster services to detect a host failure or network partition.
- -zout
- Displays output in a format that can be consumed by an application.
- -LocalHostVersion
- Lists the version of SA MP (or IBM® Spectrum Scale) which is currently installed on the host where this command is invoked.
- -DomainCommittedVersion
- Lists the version of SA MP (or IBM Spectrum Scale) which is currently committed in the domain where this command is invoked.
- -pprimary
- Lists which cluster caching facility Db2 cluster services has designated as the preferred primary. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group.
- -autofailback
- Lists automatic failback status.
- -zout
- Displays output in a format that can be consumed by an application.
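The -cm -list options above can be combined into a quick status check; this sketch assumes a configured pureScale instance:

```shell
# Query current cluster manager settings. The tiebreaker and alert
# queries can be run by any user ID; -pprimary requires SYSADM, SYSCTL,
# or SYSMAINT membership.
db2cluster -cm -list -tiebreaker
db2cluster -cm -list -HostFailureDetectionTime
db2cluster -cm -list -alert
db2cluster -cm -list -autofailback
```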
- -verify
- -resources
- Verifies that the resource model for the instance is correct, and that there are no inconsistencies in the resource model.
- -maintenance
- Ensures that the cluster manager is offline on the host so that the binaries can be updated.
- -all
- Queries the maintenance state of all the hosts in the cluster. In previous versions of Db2, this option is only available to the Db2 cluster services administrator; as the Db2 cluster services administrator, the instance name must be specified by setting the DB2INSTANCE environment variable. In V11.1.4.4 and later, this option can also be run as the instance owner.
- -zout
- Displays output in a format that can be consumed by an application.
- -enter -maintenance
- Puts the host on which this command was issued into maintenance mode. This option is only available to the Db2 cluster services administrator. The instance name must be specified by setting the DB2INSTANCE environment variable. If other tools (such as sudo or su) are used to execute with Db2 cluster services administrator privileges, additional platform-specific options for the tool should be specified to preserve the environment variable. Internally, this option issues an asynchronous command to stop the cluster manager on the host. The command polls the state of the cluster manager to determine when the stop operation has completed. On less responsive systems, it might time out and report an error while the cluster manager is still shutting down. Hosts can be put into maintenance mode only one at a time; multiple hosts can be put into maintenance mode successively by using the db2cluster command while maintaining quorum.
- -all
- Puts the entire cluster domain into maintenance mode, stopping the cluster manager on all hosts in the instance. If you use the -all option to enter maintenance mode, you must use the -all option to exit maintenance mode.
- -exit -maintenance
- Removes the host on which this command was issued from maintenance mode, and puts the domain online if it is currently offline. This option is only available to the Db2 cluster services administrator. The instance name must be specified by setting the DB2INSTANCE environment variable. If other tools (such as sudo or su) are used to execute with Db2 cluster services administrator privileges, additional platform-specific options for the tool should be specified to preserve the environment variable. If the entire peer domain was put into maintenance mode using the -all option, you cannot use this option to exit maintenance mode on individual hosts.
- -all
- Removes the cluster domain from maintenance mode, starting the entire cluster manager peer domain, thus bringing the cluster manager online on all hosts in the instance. You must use the -all option to exit maintenance mode if you used the -all option to enter maintenance mode.
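The enter/exit sequence above might look like the following sketch for a rolling, host-by-host update; the instance name is hypothetical and the commands must be run as the Db2 cluster services administrator:

```shell
# DB2INSTANCE must be set (and preserved through sudo/su if used).
export DB2INSTANCE=db2inst1

# Single-host maintenance:
db2cluster -cm -enter -maintenance   # stop the cluster manager on this host
# ... apply binary updates on this host ...
db2cluster -cm -exit -maintenance    # bring the host back online

# Whole-domain variant: entering with -all requires exiting with -all.
db2cluster -cm -enter -maintenance -all
# ... update all hosts ...
db2cluster -cm -exit -maintenance -all
```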
- -commit
- Commits the updates that are made to Db2 cluster services and makes them available to the Db2 database system. This option is only available to the Db2 cluster services administrator.
- -clear -alert
- Clears current alerts on the specified cluster element. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group. Warning: This command has been deprecated and may be removed in the future. Use the db2cluster -clear -alert command instead.
- -member member-id
- -cf cf-id
- -host host-name
- -cfs
- Specifies a cluster-file-system-based command or a maintenance operation.
- -create
- Creates a shared file system according to the following options.
- -filesystem fs-name
- Specifies the name of the shared file system that is created.
- -disk disk1...diskN
- Specifies the storage paths for the shared file system.
- -rdncy_grp_id id
- Refers to a group of resources that are independent of other redundancy groups; together they provide the active-active and transient failover capability in a Db2 pureScale environment. The valid values of this option are 1 and 2, referred to as the primary and secondary respectively. The tiebreaker disks for each GPFS cluster file system belong to a special group known as the file system tiebreaker group. In a geographically dispersed pureScale cluster (GDPC), the host that owns this file system tiebreaker group is also known as the tiebreaker host or site. In a single-site pureScale environment, the resources that are represented by this ID include the storage only. In a GDPC, the resources include the members and CFs that reside in the same physical location as the storage. If two sets of disks are specified by using the -disk option, the associated redundancy group IDs must not be the same.
- -fstiebreaker fstbdisk -host tbhost
- Specifies the tiebreaker disk for the target file system. This option is used only when creating a replicated file system. The option -host tbhost, when specified, indicates the fstbdisk is only accessible by other hosts in the cluster through TCP/IP connection through this host. If the fstbdisk is on SAN storage and accessible by all hosts in the cluster, the -host tbhost option is not required.
- -mount directory-name
- Specifies the mount point for the shared file system. If no mount point is provided, the file system is created under a root file system name of /db2fs.
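A -cfs -create invocation might be sketched as follows. The file system name, device paths, and mount point are hypothetical, and the comma-separated form of the disk list is an assumption based on the disk1...diskN syntax above:

```shell
# Create a shared file system named db2data across two disks
# (hypothetical devices), mounted at /db2fs/db2data.
db2cluster -cfs -create -filesystem db2data \
    -disk /dev/hdisk2,/dev/hdisk3 \
    -mount /db2fs/db2data
```

If -mount is omitted, the file system is created under the /db2fs root as described above.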
- -add
- Adds disks to an existing shared file system cluster. When all file system changes are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -filesystem fs-name
- Specifies the name of the shared file system to which the disks are to be added.
- -host hostname
- Specifies the host name where the specified disks are located. This option is only valid for a non-pureScale instance.
- -disk disk-list
- Specifies the disks to be added to the shared file system cluster.
- -rdncy_grp_id id
- Refers to a group of resources that are independent of other redundancy groups; together they provide the active-active and transient failover capability in a Db2 pureScale environment. The valid values of this option are 1 and 2, referred to as the primary and secondary respectively. The tiebreaker disks for each GPFS cluster file system belong to a special group known as the file system tiebreaker group. In a geographically dispersed pureScale cluster (GDPC), the host that owns this file system tiebreaker group is also known as the tiebreaker host or site. In a single-site pureScale environment, the resources that are represented by this ID include the storage only. In a GDPC, the resources include the members and CFs that reside in the same physical location as the storage. If two sets of disks are specified by using the -disk option, the associated redundancy group IDs must not be the same.
- -fstiebreaker fstbdisk -host tbhost
- Specifies the tiebreaker disk for the target file system. This option is used only when creating a replicated file system. The option -host tbhost, when specified, indicates the fstbdisk is only accessible by other hosts in the cluster through TCP/IP connection through this host. If the fstbdisk is on SAN storage and accessible by all hosts in the cluster, the -host tbhost option is not required.
- -remove
- Removes disks from an existing file system cluster. When all file system changes are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -filesystem fs-name
- Specifies the name of the shared file system from which the disks are to be removed.
- -host hostname
- Specifies the host name where the specified disks are located. This option is only valid for a non-pureScale instance.
- -disk disk-name
- Specifies the disk to be removed from the shared file system cluster. The command fails if the specified disk is the last disk in the file system or if it is the tiebreaker disk.
- -enableReplication
- Converts an existing non-replicated file system to a replicated file system by assigning the list of disks that are currently assigned to the file system to redundancy group ID 1. During this process, GPFS replication is not enabled. GPFS replication is enabled immediately after the redundancy group ID 2 is created for the same file system.
- -delete
- Deletes a shared file system. This option is only available to the Db2 cluster services administrator.
- -filesystem fs-name
- Specifies the name of the shared file system that is to be deleted. The command fails if the file system is not empty.
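The -add and -remove operations above can be sketched as a grow-then-shrink sequence; file system and disk names are hypothetical:

```shell
# Add a disk to an existing shared file system, then remove another.
# The remove fails if the disk is the last one or the tiebreaker disk.
db2cluster -cfs -add -filesystem db2data -disk /dev/hdisk4
db2cluster -cfs -remove -filesystem db2data -disk /dev/hdisk3

# After all file system changes are complete, verify the topology.
db2cluster -verify -req -topology
```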
- -set
- Specifies the tiebreaker type or sets configuration options. This option is only available to the Db2 cluster services administrator.
- -tiebreaker
- Specifies the type of device to be used as the IBM Spectrum Scale tiebreaker. It is important to ensure that the CM and CFS both use the same tiebreaker type. Run db2cluster -verify -req -topology to perform the cluster topology verification.
- -option option
-
Explicitly sets a configuration option for the cluster file system. In most cases, you do not need to set any of these values because they are automatically set to optimal values when the cluster is created. The cluster verification that runs implicitly after installation and updates returns an error message if any of the options is not set to its mandatory value.
- adminMode
- Specifies whether all nodes in the cluster, or just a subset of the nodes, are to be used for issuing IBM Spectrum Scale administration commands. The default and mandatory value is allToAll.
- ccrEnable
- Specifies whether the cluster configuration repository (CCR) is to be used. The default and mandatory value for a Db2 managed Spectrum Scale pureScale cluster is yes. Set it to no to disable it; disabling CCR requires first putting the entire cluster into maintenance mode.
- maxFilesToCache
- Specifies the number of inodes to cache for recently used files that are closed. The default and minimum mandatory value is 15000.
- maxMBpS
- Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. The default value is 150. Increase this value if more I/O is required.
- pagepool
- Specifies the size of the cache on each node. The default and minimum mandatory value is 2G.
- remoteFileCopyCommand
- Specifies the fully-qualified path name for the remote file copy program to be used by IBM Spectrum Scale. The remote copy command must adhere to the same syntax format as the scp command, but may implement an alternate authentication mechanism.
- remoteShellCommand
- Specifies the fully-qualified path name for the remote shell program to be used by IBM Spectrum Scale. The remote shell command must adhere to the same syntax format as the ssh command, but may implement an alternate authentication mechanism.
- (AIX® only)
- Specifies the amount of memory, in megabytes, available to store various IBM Spectrum Scale structures. The default and minimum mandatory value is 2047.
- totalPingTimeout
- Specifies the length of time in seconds that IBM Spectrum Scale waits before expelling nodes it cannot communicate with when the primary subnet is not used for IBM Spectrum Scale communication. The default value is 45 in clusters that use SCSI-3 Persistent Reserve (PR) and 75 in clusters that do not have PR enabled.
- usePersistentReserve
- Specifies whether to enable or disable PR on the disks. The default value is YES. If you change the setting to NO, then fast failure recovery is replaced with a longer GPFS lease timeout wait period, resulting in a slower failure recovery.
- verifyGpfsReady
- Specifies that the peer domain is to be coordinated with the IBM Spectrum Scale cluster, ensuring that the IBM Spectrum Scale mounts the file system at the appropriate time. The default and mandatory value is YES.
- workerThreads
- Specifies the maximum number of concurrent file operations. The default value is 512.
- tscCmdPortRange
- Certain IBM Spectrum Scale commands require an additional socket to be created for the duration of the command. The port numbers assigned to these temporary sockets are controlled with the tscCmdPortRange configuration parameter. If this parameter is not set, the port number is dynamically assigned by the operating system from the range of ephemeral port numbers. To restrict the range of ports used by IBM Spectrum Scale commands, set this option to a valid port range between 1024 and 65535; the range must contain at least 100 ports. The specified port range must not collide with the range specified for the DB2_FIREWALL_PORT_RANGE registry variable. To unset this port range, specify a value of 0.
- -value value
- Specifies a value for the option.
- -filesystem fs-name
- Specifies which file system the configuration option is applied to.
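Setting a configuration option follows the -option/-value pattern above; this sketch uses tscCmdPortRange, and the range syntax 60000-60100 is an assumption about the value format:

```shell
# Restrict IBM Spectrum Scale command sockets to a fixed range of at
# least 100 ports (run as the Db2 cluster services administrator); the
# range must not collide with DB2_FIREWALL_PORT_RANGE.
db2cluster -cfs -set -option tscCmdPortRange -value 60000-60100

# Revert to dynamically assigned ephemeral ports.
db2cluster -cfs -set -option tscCmdPortRange -value 0
```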
- -list
- Returns details about the following:
- -tiebreaker
- Lists the type of device that is being used as the IBM Spectrum Scale tiebreaker.
- -zout
- Displays output in a format that can be consumed by an application.
- -filesystem
- Returns details about the following:
- -zout
- Displays output in a format that can be consumed by an application.
- -filesystem fs-name
- Returns details about the following:
- -configuration
- Lists the current configuration of the file system and configuration parameters that can be changed.
- -disk
- Lists the current disks in the file system.
- -LocalHostVersion
- Lists the version of IBM Tivoli® System Automation for Multiplatforms (SA MP) (or IBM Spectrum Scale) which is currently installed on the host where this command is invoked.
- -DomainCommittedVersion
- Lists the version of SA MP (or IBM Spectrum Scale) which is currently committed in the domain where this command is invoked.
- -configuration
- Lists the current configuration attributes of the IBM Spectrum Scale cluster that have been explicitly set.
- -verify
- -configuration
- Verifies the code version and the release level of the IBM Spectrum Scale cluster, and checks certain configuration settings for the cluster file system. If any of these settings are not set to the recommended value for optimal cluster file system performance, the db2cluster command returns a warning message indicating that the relevant option is not optimally set.
- -maintenance
- Ensures that the shared file system cluster host is offline to allow for updates to the binaries.
- -zout
- Displays output in a format that can be consumed by an application.
- -mount -filesystem fs-name
- Makes the specified file system available to the operating system for read and write access by users.
- -rebalance -filesystem fs-name
- Restripes the data on disk across all disks in the file system. This option should be run during periods of lower system activity for the file system being rebalanced.
- -replicate -filesystem fs-name
- Triggers data replication of the specified file system.
- -unmount -filesystem fs-name
- Makes the specified file system inaccessible to the operating system.
- -enter -maintenance
- Puts the host on which this command was issued into maintenance mode. This option is only available to the Db2 cluster services administrator. The db2cluster -cfs -enter -maintenance command requires that the host already be in CM maintenance mode.
- -all
- Puts all hosts in the shared file system cluster into maintenance mode.
- -exit -maintenance
- Removes the host on which this command was issued from maintenance mode. This option is only available to the Db2 cluster services administrator. The db2cluster -cfs -exit -maintenance command requires that the host not be in CM maintenance mode.
- -all
- Ensures that all hosts in the shared file system cluster are started.
- -commit
- Commits the updates that are made to Db2 cluster services and makes them available to the Db2 database system. This option is only available to the Db2 cluster services administrator.
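The CM and CFS maintenance modes above are ordered: the -cfs enter requires the host to already be in CM maintenance mode, and the -cfs exit requires it no longer be. A host-level update might therefore be sketched as:

```shell
# Run as the Db2 cluster services administrator on the host being updated.
db2cluster -cm -enter -maintenance    # CM maintenance first
db2cluster -cfs -enter -maintenance   # then CFS maintenance
# ... update SA MP / IBM Spectrum Scale binaries on this host ...
db2cluster -cfs -exit -maintenance    # reverse order on the way out
db2cluster -cm -exit -maintenance

# After all hosts are updated, commit the new level to the domain.
db2cluster -cfs -commit
db2cluster -cm -commit
```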
- -add
- Associates the target member or CF with a redundancy group ID in the cluster manager domain. This operation is only valid when the target member or CF has no existing redundancy group ID association. When all operations are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -member N -rdncy_grp_id id
- Associates the target member with the specified redundancy group ID. Use this option to associate the relationship for the first time. If the target member has an existing association, use the -set option to modify it.
- -cf N -rdncy_grp_id id
- Associates the target CF with the specified redundancy group ID. Use this option to associate the relationship for the first time. If the target CF has an existing association, use the -set option to modify it.
- -host host-name
-
- -san_access
- Specifies that the target host has access to the shared storage on the SAN network. This is the default for all members and CFs if not specified.
- -no_san_access
- Specifies that the target host does not have access to the shared storage on the SAN network. This value is specified only when adding a dedicated tiebreaker host.
- -list
- Displays the target member or CF's current redundancy group ID association in the cluster manager domain. If the -member or -cf option is not specified, it displays each member and CF's redundancy group ID association.
- -rdncy_grp_id
- Displays all members' and CFs' current redundancy group ID associations.
- -member N -rdncy_grp_id
- Displays the target member's redundancy group ID association.
- -cf N -rdncy_grp_id
- Displays the target CF's redundancy group ID association.
- -delete
- Removes the target member or CF's redundancy group ID association in the cluster manager domain. This operation is only valid when the target member or CF currently has a redundancy group ID association. When all operations are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -member N -rdncy_grp_id
- Removes the target member's current redundancy group ID association.
- -cf N -rdncy_grp_id
- Removes the target CF's current redundancy group ID association.
- -set
- Associates the target member or CF with a different redundancy group ID in the cluster manager domain. This operation is only valid when the target member or CF currently has a redundancy group ID association. When all operations are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -member N -rdncy_grp_id id
- Modifies the target member's current redundancy group ID association to the specified one. Do not use this option when the target member has no current redundancy group ID association.
- -cf N -rdncy_grp_id id
- Modifies the target CF's current redundancy group ID association to the specified one. Do not use this option when the target CF has no current redundancy group ID association.
- -host host-name
- Modifies the intended usage of the target host. This option is used to convert a dedicated tiebreaker host to a member by granting SAN access to the defined member.
- -san_access
- Specifies that the target host has access to the shared storage on the SAN network. This is the default for all members and CFs if not specified.
- -no_san_access
- Specifies that the target host does not have access to the shared storage on the SAN network. This value is specified only when adding a dedicated tiebreaker host.
- -remove
- When all operations are completed, run db2cluster -verify -req -topology to perform the cluster topology verification.
- -enter -maintenance -mount mount_name
- Places the specified mount point on the host on which this command was issued into maintenance mode. The mount name must be specified with the leading '/'. This command must be issued as the instance owner.
- -all
- Places the specified mount point on all member hosts into maintenance mode. This command requires the mount to not be in host-level maintenance mode on any member host in the cluster. This command must be issued as instance owner.
See Placing mount points associated with database-level mount resources into maintenance mode for further details on this operation.
- -exit -maintenance -mount mount_name
- Exits maintenance mode on the specified mount point on the host on which this command was issued. The mount name must be specified with the leading '/'. This command requires the mount to be in maintenance mode on the current host. The command must be issued as the instance owner.
- -all
- Exits maintenance mode on the specified mount point on all member hosts in the cluster. This command requires the mount to be in maintenance mode on all member hosts. This command must be issued as instance owner.
See Placing mount points associated with database-level mount resources into maintenance mode for further details on this operation.
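A mount-level maintenance cycle might look like the following sketch; the mount path is hypothetical (note the required leading '/'), and the commands must be issued as the instance owner:

```shell
# Put a database-level mount into maintenance mode on this host only.
db2cluster -enter -maintenance -mount /db2fs/mydb
# ... perform maintenance on the underlying storage ...
db2cluster -exit -maintenance -mount /db2fs/mydb

# Cluster-wide variant: requires the mount not already be in host-level
# maintenance on any member host; exit requires it on all member hosts.
db2cluster -enter -maintenance -mount /db2fs/mydb -all
db2cluster -exit -maintenance -mount /db2fs/mydb -all
```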
- -verify
- Defaults to -verify -req
- -req
- Performs a comprehensive list of checks to validate the health of the pureScale cluster. An alert is raised for each failed criterion and is displayed by the instance monitoring command db2instance -list. The validations performed include, but are not limited to, the following:
- Configuration settings in peer domain and IBM Spectrum Scale cluster
- RDMA communication between hosts
- Replication setting for each file system
- Status of each disk in the file system
- Remote access with db2sshid among all nodes through db2locssh and db2scp
- -rdma_ping
- Verifies RDMA communication between hosts. In version 11.5.5 and later, this validation is performed in parallel across multiple adapters. The degree of concurrency is automatically computed based on system resources and SSH configuration, and is capped to ensure a successful run. Refer to Installing and setting up OpenSSH for the relevant configuration parameter to optimize this concurrent validation. This option is available only to users that belong to the SYSADM, SYSCTL, or SYSMAINT groups.
- -topology
- Verifies the topology setup, including cluster membership, quorum configuration, and more.
- -perf
- Indicates that performance-related tasks are to be executed.
- -collect
- Performs only diagnostic data collection, without any analysis.
- -db
- Specifies an existing database against which the performance evaluation is executed.
- -interval
- Performance must always be analyzed against two snapshots of activity in time. The first snapshot is taken when the command is first run, and the second snapshot is taken after the specified interval, in seconds, elapses. The input range is between 1 and 2147483648.
- -maintenance -mount mount_name
- Lists the hosts on which the specified mount point is in maintenance mode.
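The -verify options above can be sketched as follows; the database name is hypothetical, and the exact combination of -perf with -db and -interval is an assumption based on the parameter descriptions above:

```shell
# Full health validation; any alerts raised appear in 'db2instance -list'.
db2cluster -verify -req

# Narrower checks:
db2cluster -verify -req -topology    # membership and quorum configuration
db2cluster -verify -req -rdma_ping   # RDMA connectivity between hosts

# Performance evaluation against database MYDB (hypothetical), using two
# snapshots taken 60 seconds apart; -collect gathers data only.
db2cluster -verify -perf -db MYDB -interval 60
db2cluster -verify -perf -collect -db MYDB -interval 60
```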
- -clear -alert
- Clears current alerts on the specified cluster element. This option is only available to a user
in the SYSADM, SYSCTL, or SYSMAINT group.
- -member member-id
- -cf cf-id
- -host host-name
Advanced command parameters
- -cm
- -add
- Adds either a host or database mounts to the cluster manager domain. To add a host, you must run the command from an online host that is already in the peer domain. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -host host-name
- Adds a host (one at a time) to the cluster manager domain. The software must already be installed on the host that is added to the cluster and be ready for usage.
- -database_mounts database-name
- Adds mount resources for a database that was not created with the instance that is being used. You would use this, for example, if the database is migrated from version 9.7 or if the database was created with another instance and then cataloged to be used with the current instance.
- -create
- Creates a peer domain or a resource model. This option is not normally needed unless directed by service.
- -domain domain-name -host host-name
- Creates a peer domain. The host named in the command must be the local host and must have the Db2 cluster services software already installed. Specify a domain name if you want to change the name of the peer domain to something other than the default: db2domain. This option is only available to the Db2 cluster services administrator.
- -resources
- Creates the cluster manager resource model for the instance based on the information in the db2nodes.cfg file. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group. Upon completion of this command, the HA registry is populated with the resource metadata. If resource metadata already exists in the HA registry, this command fails. Either invoke the db2cluster command with the -delete -resources -force options to remove the HA registry metadata, or invoke db2cluster -repair -resources to re-create the resource model using the resource metadata in the HA registry.
- -unhealthy_host_response -option
- Specifies that Db2 cluster services is to take one of the following actions if a host experiences excessive load or paging that could impact throughput or availability in the rest of the cluster. The criteria for this response are as follows: PctTotalPgSpFree < 95% and VMPgInRate > 20 per second, where PctTotalPgSpFree is the percentage of free page space (meaning that page file usage is greater than 5%) and VMPgInRate is the page-in rate.
- -reboot_host
- Reboots the host. Any resident member on the host will restart in light mode on a guest host until its home host has successfully rebooted. If the primary cluster caching facility is on the host, the secondary cluster caching facility will take over as the new primary; after the old primary’s host has successfully rebooted, it will rejoin the cluster as the secondary cluster caching facility.
- -offline_member
- Forces any member or cluster caching facility on the host offline. An offlined cluster caching facility will not restart until the resulting alert is manually cleared. An offlined member restarts in light mode on a guest host and will not restart on its home host until the resulting alert is manually cleared.
- -option -apply_to_current_host
- Specifies that the unhealthy host response is to be applied to the current host.
Note: This option should be used with caution and can only be run as the Db2 cluster services administrator. For more information, see the Related links section.
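Configuring the unhealthy host response might be sketched as follows; these are advanced options to be used only as the Db2 cluster services administrator and under the guidance of service:

```shell
# Reboot an unhealthy host; resident members restart in light mode on a
# guest host until the home host rejoins.
db2cluster -cm -create -unhealthy_host_response -option -reboot_host

# Alternatively, force resident members/CFs offline instead of rebooting.
db2cluster -cm -create -unhealthy_host_response -option -offline_member

# Remove the automated response (revert to the default: log only).
db2cluster -cm -delete -unhealthy_host_response
```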
- -remove
- Removes either a host or database mounts from the cluster manager domain. To remove a host, you must run the command from an online host that is in the peer domain. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -host host-name
- Removes a single host from the cluster manager domain. Any members or cluster caching facilities must have already been removed from the host.
- -database_mounts database-name
- Removes mount resources for a database.
- -delete
- Deletes either a peer domain or the resource model. This option is not normally needed unless directed by service.
- -domain domain-name
- Deletes the peer domain. This option is only available to the Db2 cluster services administrator.
- -force
- The force option can be used to force the instance domain resources to be deleted, even if there are online cluster resources on the domain. Note: The -force option deletes all the metadata for the resource model, so the instance configuration will be lost.
- -resources
- Deletes the cluster manager resource model. After resources are deleted, the Db2 instance cannot start until the resource model is properly re-created. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group. The contents of db2nodes.cfg are compared to the HA registry metadata; the command fails if there is a discrepancy. The main reason for a discrepancy is that one or more members are currently failed over to a guest host. In such a case, the cause of the failure should be determined and repaired so that the member can fail back to its home host.
- -force
- Bypasses the db2nodes.cfg comparison and deletes the HA registry metadata that exist for the resource model. This option should only be used when directed to do so by IBM support.
- -unhealthy_host_response
- Deletes the automated response (either rebooting the host or taking any member or cluster caching facility on the host offline) on hosts that experience excessive load or paging. If this condition occurs, the instance takes no action and the condition is logged by the member or cluster caching facility (the default behavior).
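As a sketch, reverting to the default logging-only behavior for unhealthy hosts would be:

```
db2cluster -cm -delete -unhealthy_host_response
```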
- -list
- Returns information about these options:
- -domain
- Returns the name of the cluster manager peer domain.
- -host host-name1...host-nameN
- Lists the hosts that are in the peer domain.
- -state
- Returns the state of the hosts that are in the peer domain.
- -configuration
- Lists the current configuration of the file system and configuration parameters that can be changed.
Important: Starting in version 11.5.6, the db2cluster -cm -list -configuration option for Db2 pureScale on all supported platforms is deprecated and will be removed in a future release.
- -tiebreaker
- Lists the type of device being used as the Db2 cluster
services tiebreaker.
- -zout
- Displays output in a format that can be consumed by an application.
- -alert
- Lists any alerts for cluster elements.
- -pprimary
- Lists which cluster caching facility Db2 cluster services has designated as the preferred primary. This option is only available to a user in the SYSADM, SYSCTL, or SYSMAINT group.
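For example, the current tiebreaker device and the preferred primary designation can be checked with the following commands:

```
db2cluster -cm -list -tiebreaker
db2cluster -cm -list -pprimary
```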
- -repair
- Repairs an inconsistent resource model or domain for an instance. The Db2 instance resources
are re-created based on the last good instance configuration; that is, the instance configuration
from which the db2start command was last successful.
- -resources
- Specifies to repair the resource model of the instance.
- -domain domain-name
- Specifies to repair the cluster manager domain. The domain is re-created using the same topology
configuration as the existing cluster manager domain. This includes the existing set of cluster
hosts in addition to initializing the Db2 cluster services
tiebreaker and host failure detection time. Currently only one instance per cluster is supported,
and this instance must be specified by setting the DB2INSTANCE environment
variable. This command can only be run as the Db2 cluster services
administrator. For more information, see the
Related links section.
- -force
- Deletes and re-creates the cluster manager domain. The cluster host failure detection time is set to the default value of 8 seconds. Additionally, the db2nodes.cfg will be corrected if it does not match the contents of the HA registry.
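For example, re-creating an inconsistent resource model from the last good instance configuration, or re-creating the domain itself, could look like the following sketch (the domain name db2domain is illustrative):

```
db2cluster -cm -repair -resources
db2cluster -cm -repair -domain db2domain
```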
- -start
- Starts one of these options:
- -domain domain-name
- Starts the peer domain. Other hosts in the domain can be started only if the domain is already started on the current host. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -filesystem fs-name -disk
- Starts all the disks that are in Down state in the target file system.
- -host host-name1...host-nameN
- Specifies which hosts the peer domain is to be started on. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -stop
- Stops one of these options:
- -domain domain-name
- Stops the cluster manager across the entire peer domain. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -host host-name1...host-nameN
- Stops the cluster manager on the specified hosts if the instance is stopped. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -force
- Forces the shutdown of the entire cluster manager peer domain without performing checks such as ensuring that operational quorum is kept. When this option is run with the -host option, only the specified hosts are shut down.
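As an illustration, stopping the cluster manager on a single host and then across the whole peer domain might look like this (the host and domain names are illustrative):

```
db2cluster -cm -stop -host hostB
db2cluster -cm -stop -domain db2domain
```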
- -cfs
- -create -domain domain-name -host host-name
- Creates a shared file system cluster. The host named in the command must be the local host and must have the Db2 cluster services software already installed. Specify a domain name if you want to change the name of the shared file system cluster to something other than the default: db2gpfsdomain. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -add
- Adds either a host or network resiliency to the shared file system cluster. The command must be
run from an online host that is already in the peer domain. This option is only available to the
Db2 cluster
services
administrator and is not normally needed unless directed by service.
- -host host-name
- Adds a host (one at a time) to the shared file system cluster. The software must already be installed on the host that will be added to the cluster and be ready for usage.
- -network_resiliency
- Adds condition and response resources for the adapter used by the cluster file system. When the adapter changes state (going offline or online), the cluster manager invokes a response script that takes the appropriate action.
- -gpfsadapter
- Specifies that the IBM Spectrum Scale adapter should be operated on. It is not required because this is currently the only network resiliency resource that can be operated upon.
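For example, extending the shared file system cluster to a newly prepared host and then adding adapter monitoring could look like the following, where hostE is an illustrative host name:

```
db2cluster -cfs -add -host hostE
db2cluster -cfs -add -network_resiliency
```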
- -remove -host host-name
- Removes a single host from the shared file system cluster. Any members or cluster caching facilities must have already been removed from the host. This command must be run from an online host that is in the shared file system cluster. This option is only available to the Db2 cluster services administrator and is not normally needed unless directed by service.
- -delete
- Deletes either the shared file system cluster or the network resiliency. This option is only
available to the Db2 cluster
services
administrator and is not normally needed unless directed by service.
- -domain domain-name
- Deletes the shared file system cluster.
- -network_resiliency
- Deletes the condition and response resources for the adapter used by the cluster file system.
- -gpfsadapter
- Specifies the IBM Spectrum Scale adapter for the -network_resiliency resources that are being deleted.
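A minimal sketch of removing the adapter monitoring resources (run as the Db2 cluster services administrator):

```
db2cluster -cfs -delete -network_resiliency
```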
- -start
- This option is only available to the Db2 cluster
services
administrator and is not normally needed unless directed by service.
- -host host-name1...host-nameN
- Starts the shared file system processes on the specified hosts.
- -all
- Starts the shared file system processes on all hosts.
- -trace
- Enables AIX tracing of the IBM Spectrum Scale component.
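For example, the shared file system processes can be started on every host, or on a specific host (hostA is illustrative):

```
db2cluster -cfs -start -all
db2cluster -cfs -start -host hostA
```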
- -stop
- This option is only available to the Db2 cluster
services
administrator and is not normally needed unless directed by service.
- -host host-name
- Stops the shared file system daemons on the specified host.
- -force
- Specifies that there is no check to ensure that operational quorum is kept.
- -all
- Stops the shared file system daemons on all hosts.
- -trace
- Stops the AIX tracing of the IBM Spectrum Scale component.
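Similarly, a sketch of stopping the shared file system daemons on one host (hostA is illustrative) or on all hosts:

```
db2cluster -cfs -stop -host hostA
db2cluster -cfs -stop -all
```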
- -list
- Returns information about these options:
- -domain
- Returns the name of the shared file system cluster.
- -host host-name1...host-nameN
- Lists the hosts that are in the shared file system cluster.
- -state
- Returns the state of the hosts that are in the shared file system cluster.
- -network_resiliency
- Lists the names of the network resiliency condition and response
resources in the cluster.
- -gpfsadapter
- Lists the IBM Spectrum Scale adapter for the -network_resiliency resources.
- -resources
- Lists the contents of the network resiliency condition and response resources in the cluster.
- -repair -network_resiliency
- Repairs the condition and response resources used by the cluster file system on the local host.
- -gpfsadapter
- Specifies the IBM Spectrum Scale adapter for the -network_resiliency resources that are being repaired.
- -all
- Repairs the condition and response resources used by the cluster file system on all hosts in the cluster.
- -verify -network_resiliency
- Verifies the network resiliency resources on the local host.
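For example, the network resiliency resources on the local host can be verified with:

```
db2cluster -cfs -verify -network_resiliency
```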
Examples
- Example 1
- To list the file systems, use the following db2cluster command:
db2cluster -cfs -list -filesystem
- Example 2
- To list any alerts for cluster elements, use the following db2cluster command:
db2cluster -cm -list -alert
The following is a sample output when you run the command:
Alert: Db2 member '0' failed to start on its home host 'HostA'. The cluster manager will attempt to restart the Db2 member in restart light mode on another host. Check the db2diag.log for messages concerning failures on host 'HostA' for member '0'.
Action: This alert must be cleared manually with the command: 'db2cluster -cm -clear -alert'.
Impact: Db2 member '0' will not be able to service requests until this alert has been cleared and the Db2 member returns to its home host.
- Example 3
-
To query the maintenance state of all the hosts in the cluster, use the following db2cluster command:
DB2INSTANCE=<instanceName> <SQLLIB>/bin/db2cluster -cm -verify -maintenance -all
The following is a sample output when you run the command:
Host(s) in maintenance: 'hostA'
Host(s) not in maintenance: 'hostB,hostD'
Host(s) not reachable from current host: 'hostC'. Run 'db2cluster -cm -verify -maintenance' locally to determine the state. Address any accessibility issues first before re-running the command.
- Example 4
-
To test the -rdma_ping option from all adapters on one host to all other adapters in the cluster:
db2cluster -verify -req -rdma_ping -host coralpib257
- Example 5
-
To test the -rdma_ping option from all adapters on one host to all adapters on another host:
db2cluster -verify -req -rdma_ping -host coralpib257 -host coralpib258
- Example 6
-
To test the -rdma_ping option from all adapters on one host to a specific adapter on another host:
db2cluster -verify -req -rdma_ping -host coralpib257 -host coralpib258 -netname coralpib258-ib0
- Example 7
-
To test the -rdma_ping option from a specific adapter to all other adapters in the cluster:
db2cluster -verify -req -rdma_ping -host coralpib257 -netname coralpib257-ib0
- Example 8
-
To test the -rdma_ping option from a specific adapter on one host to a specific adapter on another host:
db2cluster -verify -req -rdma_ping -host coralpib257 -netname coralpib257-ib0 -host coralpib258 -netname coralpib258-ib0
Usage notes
If the ALERT column has a YES entry after you run the db2instance -list command, use the db2cluster -cm -list -alert command to find more information about the corrective action.