mmchfs command

Changes the attributes of a GPFS file system.

Synopsis

mmchfs Device [-A {yes | no | automount}] [-D {posix | nfs4}] [-E {yes | no}]
       [-k {posix | nfs4 | all}] [-K {no | whenpossible | always}]
       [-L LogFileSize] [-m DefaultMetadataReplicas] [-n NumNodes]
       [-o MountOptions] [-p afmAttribute=Value[,afmAttribute=Value...]...]
       [-r DefaultDataReplicas] [-S {yes | no | relatime}]
       [-T Mountpoint] [-t DriveLetter] [-V {full | compat}] [-z {yes | no}]
       [--auto-inode-limit | --noauto-inode-limit]
       [--filesetdf | --nofilesetdf] [--flush-on-close | --noflush-on-close]       
       [--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
       [--inode-segment-mgr {yes | no}] [--log-replicas LogReplicas]
       [--mount-priority Priority] [--nfs4-owner-write-acl {yes | no}]
       [--perfileset-quota | --noperfileset-quota]
       [--rapid-repair | --norapid-repair]
       [--write-cache-threshold HAWCThreshold]

or

mmchfs Device -Q {yes | no}

or

mmchfs Device -W NewDeviceName

or

mmchfs Device --maintenance-mode {yes [--wait] | no}

Availability

Available on all IBM Storage Scale editions.

Description

Use the mmchfs command to change the attributes of a GPFS file system.

Parameters

Device
The device name of the file system to be changed.

File system names need not be fully qualified. fs0 is as acceptable as /dev/fs0. However, file system names must be unique across GPFS clusters.

This must be the first parameter.

-A {yes | no | automount}
Indicates when the file system is to be mounted:
yes
When the GPFS daemon starts.
no
The file system is mounted manually.
automount
On non-Windows nodes, when the file system is first accessed. On Windows nodes, when the GPFS daemon starts.
Note:
  • The file system must be unmounted before the automount settings are changed.
  • IBM Storage Protect for Space Management does not support file systems with the -A option set to automount.
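For example, a minimal sketch of switching an existing file system to mount on first access (fs0 is an assumed device name; as noted above, the file system must be unmounted first):

```shell
mmumount fs0 -a        # unmount on all nodes before changing the automount setting
mmchfs fs0 -A automount
```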
-D {nfs4 | posix}
Specifies whether a deny-write open lock blocks writes, which is required by NFS V4, Samba, and Windows. File systems that support NFS V4 must have -D nfs4 set. The option -D posix allows NFS writes even in the presence of a deny-write open lock. If you intend to export the file system on NFS V4 or Samba, or mount your file system on Windows, you must use -D nfs4. For NFS V3 (or if the file system is not NFS exported at all) use -D posix.
-E {yes | no}
Specifies whether to report exact mtime values. If -E no is specified, the mtime value is periodically updated. If you want to always display exact modification times, specify -E yes.
Important: The new value takes effect the next time the file system is mounted.
-k {posix | nfs4 | all}
Specifies the type of authorization that is supported by the file system:
posix
Traditional GPFS ACLs only (NFS V4 and Windows ACLs are not allowed). Authorization controls are unchanged from earlier releases.
nfs4
Support for NFS V4 and Windows ACLs only. Users are not allowed to assign traditional GPFS ACLs to any file system objects (directories and individual files).
all
Any supported ACL type is permitted. This includes traditional GPFS (posix) and NFS V4 and Windows ACLs (nfs4).

The administrator is allowing a mixture of ACL types. For example, fileA might have a posix ACL, while fileB in the same file system may have an NFS V4 ACL, implying different access characteristics for each file depending on the ACL type that is currently assigned.

Avoid specifying nfs4 or all unless files are exported to NFS V4 or Samba clients, or the file system is mounted on Windows. NFS V4 and Windows ACLs affect file attributes (mode) and have access and authorization characteristics that are different from traditional GPFS ACLs.
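A sketch of enabling NFS V4 and Windows ACL support on an existing file system (fs0 is an assumed device name):

```shell
mmchfs fs0 -k nfs4
mmlsfs fs0 -k      # verify the ACL setting that is now in effect
```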

-K {no | whenpossible | always}
Specifies whether strict replication is to be enforced:
no
Strict replication is not enforced. GPFS tries to create the needed number of replicas, but still returns EOK if it can allocate at least one replica.
whenpossible
Strict replication is enforced provided the disk configuration allows it. If there is only one failure group, strict replication is not enforced.
always
Strict replication is enforced.

For more information, see Strict replication.
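For example, to require that every needed replica be written on all allocations (fs0 is an assumed device name):

```shell
mmchfs fs0 -K always
```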

-L LogFileSize
Specifies the size of the internal log files. The LogFileSize must be a multiple of the metadata block size. The default log file size is 32 MiB in most cases. However, if the data block size (parameter -B) is less than 512 KiB or if the metadata block size (parameter --metadata-block-size) is less than 256 KiB, then the default log file size is either 4 MiB or the metadata block size, whichever is greater. The minimum size is 256 KiB and the maximum size is 1024 MiB. Specify this value with the K or M character, for example: 8M. For more information, see mmcrfs command.

The default log size works well in most cases. An increased log file size is useful when the highly available write cache feature (parameter --write-cache-threshold) is enabled.

The new log file size is not effective until you apply one of the two following methods:
  • The first method requires you in part to restart the GPFS daemon on the manager nodes, but you can do so one node at a time. Follow these steps:
    1. Restart the GPFS daemon (mmfsd) on all the manager nodes of the local cluster. This action is required even if the affected file system is not mounted on any of the manager nodes. You can do this action one manager node at a time.
    2. Remount the file system on all the local and remote nodes that have it mounted. You can do this action one node at a time. The new log file size becomes effective when the file system is remounted on the last affected node.
  • The second method requires you to unmount the file system on all the affected nodes at the same time. Follow these steps:
    1. Unmount the file system on all local and remote nodes that have it mounted. The file system must be in the unmounted state on all the nodes at the same time.
    2. Remount the file system on any or all the affected nodes.
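The second method above might be sketched as follows, assuming a device named fs0 that can be unmounted cluster-wide:

```shell
mmchfs fs0 -L 64M      # change the log file size (not yet effective)
mmumount fs0 -a        # unmount on all nodes at the same time
mmmount fs0 -a         # remount; the new log file size takes effect
```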

-m DefaultMetadataReplicas
Changes the default number of metadata replicas. Valid values are 1, 2, and 3. This value cannot be greater than the value of MaxMetaDataReplicas set when the file system was created.

Changing the default replication settings using the mmchfs command does not change the replication setting of existing files, including system files. Even if you use this option immediately after the file system is created, it is still necessary to restripe all existing files to ensure that the metadata is replicated according to the updated settings. Issue the mmrestripefs command with the -R option to change all the existing files. You can also use the mmchattr command to change a small number of existing files.
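The steps above can be sketched in two commands (fs0 is an assumed device name):

```shell
mmchfs fs0 -m 2          # new default applies to files created from now on
mmrestripefs fs0 -R      # re-replicate existing files to match the new default
```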

-n NumNodes
Changes the number of nodes for a file system but does not change the existing system metadata structures. This setting is just an estimate and can be used only to affect the layout of the system metadata for storage pools created after the setting is changed.
-o MountOptions
Specifies the mount options to pass to the mount command when mounting the file system. For a detailed description of the available mount options, see Mount options specific to IBM Storage Scale.
-p afmAttribute=Value[,afmAttribute=Value...]
Specifies the AFM parameters to be set on the file system for file system-level migration by using AFM. Setting AFM parameters allows migration of data from a source file system to the IBM Storage Scale file system by using the AFM migration method. This method does not require the creation of an AFM-mode fileset. Instead, AFM is enabled on the "root" fileset of the file system. The AFM modes that can be enabled on the file system are AFM LU mode and AFM RO mode. Conversion of the RO mode to the LU mode is permitted. After the migration is completed, you can disable the AFM relationship by using the mmchfileset Device root -p afmTarget=disable command and later use this as a regular file system. After the AFM relationship is disabled, it cannot be enabled again. For more information, see Migration from the legacy hardware by using AFM.
AFM supports the following parameters for file system-level migration:
afmTarget
Identifies the home that is associated with the cache. The home is specified in either of the following forms:
nfs://{Host|Map}/Source_Path
Where:
nfs://
Specifies the transport protocol.
Source_Path
Specifies the export path.
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name. For more information about mapping, see Parallel data transfers.
The afmTarget parameter examples are as follows:
  1. Use NFS protocol without mapping.
    # mmchfs Device ... -p afmTarget=nfs://<Host|IP>/Source_Path,afmMode=ro
  2. Use NFS protocol with mapping.
    # mmchfs Device ... -p afmTarget=nfs://<map1>/Source_Path,afmMode=ro
  3. Define the afmTarget parameter to associate a cloud object bucket with a file system by using the manual updates mode for the file system replication.
    # mmchfs Device ... -p afmTarget=https://s3.us-east.cloud-object-storage.appdomain.cloud:443/bucket1
    https
    Is a protocol.
    s3.us-east.cloud-object-storage.appdomain.cloud:443
    Is an endpoint.
    bucket1
    Is a bucket name. This bucket is created before creating a file system for replication.
afmMode
Specifies the AFM fileset mode. Valid values are as follows:
read-only | ro
Specifies the read-only mode. You can fetch data into the ro-mode fileset for read-only purposes.
local-updates | lu
Specifies the local-updates mode. You can fetch data into the lu-mode fileset and update it locally. The modified data is not synchronized to the home and remains local.

Conversion of the AFM ro mode to the AFM lu mode is supported for file system-level migration. For more information about AFM fileset modes, see Caching modes.

manual-updates | mu
The manual updates (MU) mode supports manual replication of files or objects by using ILM policies or a user-provided object list. The MU mode is supported on AFM to cloud object storage backends for fileset or file system-level replication.
The MU mode fileset provides the flexibility to upload and download files or objects to and from cloud object storage after you finalize the set of objects to replicate at the file system or fileset level. Unlike other AFM to cloud object storage objectfs fileset modes, MU mode depends on manual intervention from administrators to upload and download data and keep it in sync. As an administrator, you can also automate uploads and downloads by using ILM policies that select specific files or objects.
To disable the AFM relationship from the file system, complete the following steps:
  1. Unmount the file system on all cluster nodes.
  2. Disable the AFM relationship by issuing the following command:
    # mmchfileset fs1 root -p afmMode=disable
    A sample output is as follows:
    Warning! Once disabled, AFM cannot be re-enabled on this fileset. Do you wish to continue? (yes/no) yes
    
    Warning! Fileset should be verified for uncached files and orphans. If already verified, then skip this step. Do you wish to verify same? (yes/no) no
    
    Fileset root changed.
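Putting the pieces together, a hedged sketch of a file system-level migration that uses only the commands shown in this section (fs1, homeserver, and the export path are assumed names; the exact mode-conversion step may differ in your release):

```shell
# Point the root fileset of fs1 at the NFS source in read-only mode
mmchfs fs1 -p afmTarget=nfs://homeserver/export/src,afmMode=ro

# Optionally convert RO to LU once reads have been verified
mmchfs fs1 -p afmMode=lu

# After migration completes, sever the AFM relationship permanently
mmchfileset fs1 root -p afmTarget=disable
```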
afmDirLookupRefreshInterval

Controls the frequency of data revalidations that are triggered by lookup operations such as ls or stat (specified in seconds). When a lookup operation is performed on a directory, if the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of that directory has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.

Valid values are 0 - 2147483647. The default is 60. If data at the home cluster changes frequently, a value of 0 is recommended.

afmDirOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by I/O operations such as read or write (specified in seconds). After a directory is cached, open requests that result from I/O operations on that object are directed to the cached directory until the specified amount of time has passed. After that, the open request is directed to a gateway node rather than to the cached directory.

Valid values are 0 - 2147483647. The default is 60. Set a lower value for a higher level of consistency.

afmFileLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by lookup operations such as ls or stat (specified in seconds). When a lookup operation is performed on a file, if the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of the file has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.

Valid values are 0 - 2147483647. The default is 30. If data at the home cluster changes frequently, a value of 0 is recommended.

afmFileOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by I/O operations such as read or write (specified in seconds). After a file is cached, open requests that result from I/O operations on that object are directed to the cached file until the specified amount of time has passed. After that, the open request is directed to a gateway node rather than to the cached file.

Valid values are 0 - 2147483647. The default is 30. Set a lower value for a higher level of consistency.

afmParallelReadChunkSize
Defines the minimum chunk size of a read that needs to be distributed among the gateway nodes during parallel reads. Values are interpreted in bytes. The default value of this parameter is 128 MiB, and the valid range of values is 0 - 2147483647. It can be changed cluster-wide with the mmchconfig command. It can be set at the fileset level by using the mmcrfileset or mmchfileset commands.
afmParallelReadThreshold
Defines the threshold beyond which parallel reads become effective. Reads are split into chunks when the file size exceeds this threshold value. Values are interpreted in MiB. The default value is 1024 MiB. The valid range of values is 0 - 2147483647. It can be changed cluster-wide with the mmchconfig command. It can be set at the fileset level by using the mmcrfileset or mmchfileset commands.
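For a home whose data changes frequently, the revalidation intervals described above can be lowered in a single mmchfs invocation (fs1 is an assumed device name):

```shell
mmchfs fs1 -p afmFileLookupRefreshInterval=0,afmDirLookupRefreshInterval=0
```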
-Q {yes | no}
If -Q yes is specified, quotas are activated automatically when the file system is mounted. If -Q no is specified, the quota files remain in the file system, but are not used.

For more information, see Enabling and disabling GPFS quota management.
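For example, to activate quota enforcement at mount time and then re-count usage (fs0 is an assumed device name):

```shell
mmchfs fs0 -Q yes
mmcheckquota fs0    # re-count disk usage after enabling quotas
```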

-r DefaultDataReplicas
Changes the default number of data replicas. Valid values are 1, 2, and 3. This value cannot be greater than the value of MaxDataReplicas set when the file system was created.

Changing the default replication settings using the mmchfs command does not change the replication setting of existing files. After running the mmchfs command, the mmrestripefs command with the -R option can be used to change all existing files or you can use the mmchattr command to change a small number of existing files.

-S {yes | no | relatime}
Controls how the file attribute atime is updated.
Note: The attribute atime is updated locally in memory, but the value is not visible to other nodes until after the file is closed. To get an accurate value of atime, an application must call subroutine gpfs_stat or gpfs_fstat.
yes
The atime attribute is not updated. The subroutines gpfs_stat and gpfs_fstat return the time that the file system was last mounted with relatime=no. For more information, see the topics mmmount command with the -o parameter and Mount options specific to IBM Storage Scale.
no
The atime attribute is updated whenever the file is read. This option is the default if the minimum release level (minReleaseLevel) of the cluster is less than 5.0.0 when the file system is created.
relatime
The atime attribute is updated whenever the file is read, but only if one of the following conditions is true:
  • The current file access time (atime) is earlier than the file modification time (mtime).
  • The current file access time (atime) is greater than the atimeDeferredSeconds attribute. For more information, see mmchconfig command.
This setting is the default if the minimum release level (minReleaseLevel) of the cluster is 5.0.0 or greater when the file system is created.
For more information, see atime values.
-T Mountpoint
Change the mount point of the file system starting at the next mount of the file system.

The file system must be unmounted on all nodes before this command is issued.
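A minimal sketch of relocating the mount point (fs0 and the new path are assumed names):

```shell
mmumount fs0 -a            # must be unmounted on all nodes first
mmchfs fs0 -T /gpfs/data   # takes effect at the next mount
mmmount fs0 -a
```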

-t DriveLetter
Changes the Windows drive letter for the file system.

The file system must be unmounted on all nodes before the command is issued.

-V {full | compat}
Changes the file system format to the latest format supported by the currently installed level of GPFS. This option might cause the file system to become permanently incompatible with earlier releases of GPFS.
Note: The -V option cannot be used to make file systems that were created earlier than GPFS 3.2.1.5 available to Windows nodes. Windows nodes can mount only file systems that were created with GPFS 3.2.1.5 or later.

Before issuing -V, see Upgrading. Ensure that all nodes in the cluster have been updated to the latest level of GPFS code and that you have successfully run the mmchconfig release=LATEST command.

For information about specific file system format and function changes when you upgrade to the current release, see File system format changes between versions of IBM Storage Scale.

full
Enables all new functionality that requires different on-disk data structures. Nodes in remote clusters that are running an earlier version of IBM Storage Scale will no longer be able to mount the file system. With this option the command fails if it is issued while any node that has the file system mounted is running an earlier version of IBM Storage Scale.
compat
Enables only backward-compatible format changes. Nodes in remote clusters that were able to mount the file system before the format changes can continue to do so afterward.
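A hedged sketch of the upgrade sequence described above (fs0 is an assumed device name; run the mmchconfig step only after all nodes are on the new code level):

```shell
mmchconfig release=LATEST   # commit the cluster to the installed code level
mmchfs fs0 -V full          # then upgrade the file system format
```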
-W NewDeviceName
Assigns NewDeviceName as the device name for the file system.
Note: You cannot change the file system name if file audit logging or clustered watch folder is enabled for the file system.
-z {yes | no}
Enable or disable DMAPI on the file system. Turning this option on requires an external data management application such as IBM Storage Protect hierarchical storage management (HSM) before the file system can be mounted.
Note: IBM Storage Protect for Space Management does not support file systems with the -A option set to automount.

For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events.

--auto-inode-limit
Automatically increases the maximum number of inodes per inode space in the file system. If enabled, the current value that is defined for MaxNumInodes is not used as a limit when the preallocated inodes are expanded on demand. After expansion, if the new number of preallocated inodes is larger than the current value of MaxNumInodes, the maximum number of inodes is increased so that the two values match.
Note: Both the MaxNumInodes and the NumInodesToPreallocate variables are defined for the --inode-limit option.

The --auto-inode-limit option is available only in IBM Storage Scale 5.1.4 with format level 28.00 or later.

--noauto-inode-limit
The maximum number of inodes cannot be expanded on demand. This is the default.
--filesetdf
Specifies that, for a fileset other than the root fileset, the numbers that are reported by the df command are based on the quotas for the specific fileset or on the capacity and usage at the independent fileset level, rather than on the entire file system. The df command reports either the quota limit and usage or the inode space capacity and usage for the fileset, not for the total file system. This option affects the df command behavior only on Linux® nodes.
The df command reports quota limit and quota usage if quota is enabled for the fileset. If quota is disabled and filesetdf is enabled in IBM Storage Scale 5.1.1 or later with file system version 5.1.1 or later, then the df command reports inode space capacity and inode usage at the independent fileset level. However, the df command reports the block space at the file system level because the block space is shared with the whole file system.
Note: In IBM Storage Scale 5.1.3 or later with the file system version 5.1.1 or later, if quota is enabled but the limits are not defined then the df command reports inode space capacity and inode space usage at the independent fileset level.
--nofilesetdf
Specifies that the numbers reported by the df command are not based on the fileset level. The df command returns the numbers for the entire file system. This is the default.
--flush-on-close | --noflush-on-close
Enable or disable automatic flushing of disk buffers when closing files that were opened for write operations on the device. The minimum release level of the cluster must be 5.1.3 or later and the file system format version must be at 5.1.3.0 (27.00) or later to enable this feature.
The automatic flushing of disk buffers is disabled by default.
Note: Enabling the --flush-on-close might impact the performance of workloads that are running on the file systems for which it is enabled.
--inode-limit MaxNumInodes[:NumInodesToPreallocate]
MaxNumInodes specifies the maximum number of files that can be created. Allowable values range from the current number of created inodes (determined by issuing the mmdf command with -F), through the maximum number of files possibly supported as constrained by the formula:

maximum number of files = (total file system space) / (inode size + subblock size)

Note: This formula works only for simpler configurations. For complex configurations, such as separation of data and metadata, this formula might not provide an accurate result.

If your file system has additional disks added or the number of inodes was insufficiently sized at file system creation, you can change the number of inodes and hence the maximum number of files that can be created.

For file systems that do parallel file creates, if the total number of free inodes is not greater than 5% of the total number of inodes, there is the potential for slowdown in file system access. Take this into consideration when changing your file system.

NumInodesToPreallocate specifies the number of inodes that are preallocated by the system right away. If this number is not specified, GPFS allocates inodes dynamically as needed.

The MaxNumInodes and NumInodesToPreallocate values can be specified with a suffix, for example 100K or 2M. Note that in order to optimize file system operations, the number of inodes that are actually created may be greater than the specified value.

This option applies only to the root fileset. Preallocated inodes cannot be deleted or moved to another independent fileset. It is recommended to avoid preallocating too many inodes because there can be both performance and memory allocation costs associated with such preallocations. In most cases, there is no need to preallocate inodes because GPFS dynamically allocates inodes as needed. When there are multiple inode spaces, use the --inode-limit option of the mmchfileset command to alter the inode limits of independent filesets. The mmchfileset command can also be used to modify the inode limit of the root inode space. The --inode-limit option of the mmlsfs command shows the sum of the inode limits of all inode spaces in the file system. Use the mmlsfileset command to see the inode limit of the root fileset.
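The sizing formula above can be evaluated with ordinary shell arithmetic. The figures below (10 TiB of file system space, 4 KiB inodes, 8 KiB subblocks) are hypothetical; substitute your own values, and remember that the formula is accurate only for simpler configurations:

```shell
# Hypothetical sizing inputs
total_space=$((10 * 1024 * 1024 * 1024 * 1024))   # 10 TiB in bytes
inode_size=4096                                   # 4 KiB per inode
subblock_size=8192                                # 8 KiB subblock

# maximum number of files = (total file system space) / (inode size + subblock size)
max_files=$((total_space / (inode_size + subblock_size)))
echo "$max_files"
```

A matching invocation, with an assumed device name and a rounded-down limit, might then be mmchfs fs0 --inode-limit 890M.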

--inode-segment-mgr {yes | no}
Specifies whether Inode Segment Manager is enabled.

When its value is yes, nodes request free inode segments from the file system manager when they allocate inodes while creating files. Choose this value for workloads that create files in parallel.

When its value is no, nodes independently search for a free inode segment. This is how file systems with format versions earlier than 5.2.0.0 (34.00) function.

Note: To enable this feature, the format version of the file system must be 5.2.0 (34.00) or later.
--log-replicas LogReplicas
Specifies the number of recovery log replicas. Valid values are 1, 2, 3, or DEFAULT. If DEFAULT is specified, the number of log replicas is the same as the number of metadata replicas currently in effect for the file system and changes when this number is changed.

Changing the default replication settings using the mmchfs command does not change the replication setting of existing files. After running the mmchfs command, the mmrestripefs command with the -R option can be used to change existing log files.

This option is applicable only if the recovery log is stored in the system.log storage pool. For more information about the system.log storage pool, see The system.log storage pool.

--maintenance-mode {yes [--wait] | no}
Turns file system maintenance mode on or off. The values are yes and no (the default).

Specifying --wait, which is valid only when you use it with the yes parameter, turns on the maintenance mode after you have unmounted the file system. If you specify yes without --wait and the file system is mounted, the command fails.

For more information on maintenance mode, see File system maintenance mode.
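For example, a sketch of entering and leaving maintenance mode (fs0 is an assumed device name):

```shell
mmumount fs0 -a                           # unmount everywhere first
mmchfs fs0 --maintenance-mode yes --wait  # turns on after the unmount completes
# ... perform maintenance ...
mmchfs fs0 --maintenance-mode no
```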

--mount-priority Priority
Controls the order in which the individual file systems are mounted at daemon startup or when one of the all keywords is specified on the mmmount command.

File systems with higher Priority numbers are mounted after file systems with lower numbers. File systems that do not have mount priorities are mounted last. A value of zero indicates no priority.
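For instance, to have one file system mount before another at daemon startup (fs0 and fs1 are assumed device names):

```shell
mmchfs fs0 --mount-priority 1   # mounted first
mmchfs fs1 --mount-priority 2   # mounted after fs0
```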

--nfs4-owner-write-acl {yes | no}
Specifies whether object owners are given implicit NFSv4 WRITE_ACL permission.

A value of yes specifies that object owners are given implicit WRITE_ACL permission.

A value of no specifies that object owners are not given implicit WRITE_ACL permission. The default value is yes.
Note:

The minimum release level of the cluster must be 5.1.5 or later and the file system format version must be at 5.1.5 (29.00) or later to enable this feature.

When the option is set to no, copying files and directories using the cp command with owner's privileges from an NFS client may fail with the error E_PERM as a consequence of SETATTR operation failure. For applications that expect cp to return a success status, consider using the ignore_mode_change=true option for NFS Ganesha exports so SETATTR and, consequently, cp return a success status.

On AIX versions earlier than 7300-02-01-2346, when the option is set to no and the owner does not have WRITE_ACL permission for a directory, copying such a directory by using the cp -r or cp -R option with the owner's privileges returns the error E_PERM. The error is returned because of the underlying chmod call.

--perfileset-quota
Sets the scope of user and group quota limit checks to the individual fileset level, rather than to the entire file system.
--noperfileset-quota
Sets the scope of user and group quota limit checks to the entire file system, rather than to individual filesets.
--rapid-repair
Keeps track of incomplete replication on an individual file block basis (as opposed to the entire file). This may result in a faster repair time when very large files are only partially ill-replicated.
--norapid-repair
Specifies that replication status is kept on a whole file basis (rather than on individual block basis).
--write-cache-threshold HAWCThreshold
Specifies the maximum length (in bytes) of write requests that will be initially buffered in the highly-available write cache before being written back to primary storage. Only synchronous write requests are guaranteed to be buffered in this fashion.

A value of 0 disables this feature. 64K is the maximum supported value. Specify in multiples of 4K.

This feature can be enabled or disabled at any time (the file system does not need to be unmounted). For more information about this feature, see Highly available write cache (HAWC).
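For example, to buffer synchronous writes of up to 64 KiB in the highly available write cache, or to disable the feature again (fs0 is an assumed device name; no unmount is needed):

```shell
mmchfs fs0 --write-cache-threshold 64K
mmchfs fs0 --write-cache-threshold 0    # disable HAWC
```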

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmchfs command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.

Examples

To change the default replicas for metadata to 2 and the default replicas for data to 2 for new files created in the fs0 file system, issue the following command:
# mmchfs fs0 -m 2 -r 2
To confirm the change, issue the following command:
# mmlsfs fs0 -m -r
A sample output is as follows:
flag value          description
---- -------------- -----------------------------------
 -m  2              Default number of metadata replicas
 -r  2              Default number of data replicas

Location

/usr/lpp/mmfs/bin