Storage Pools
View status, capacity, and configuration information for IBM Spectrum Protect storage pools.
- Device class storage pools, which use device classes to determine where data is stored. A device class describes a type of storage device. To use a storage device, the device must be assigned to a device class. A device class is also specified for each storage pool to identify the storage devices for the pool.
- Container storage pools, which simplify administration and provide inline compression and deduplication of data. Containers are dynamically allocated space for storage pool data. Depending on the type of container storage pool, the containers are created in file system directories on disk, in a vendor-supplied cloud located off premises, or in an in-house cloud located on premises.
- Capacity utilization status is determined by the monitoring thresholds that you define for the server. To modify the threshold settings, use the UPDATE STATUSTHRESHOLD command.
- Capacity utilization totals for copy pools and container-copy pools can sometimes differ from their related primary or container pools. This discrepancy is due to differences in how the data is stored and managed in each type of pool. If other status indicators are normal, you can disregard minor differences. To verify that a storage pool is protected, see the activity log messages for the PROTECT STGPOOL process.
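For example, to confirm that a recent PROTECT STGPOOL operation completed, you can search the activity log from an administrative command line. This is a sketch only; the date range and search string are illustrative:
query actlog begindate=today search="PROTECT STGPOOL"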
The following actions and status indicators are available on the page:
- Back Up
- You can back up a primary storage pool to a copy storage pool
to protect your data against a single point of failure in a storage
device. Restrictions:
- You can manually back up device class storage pools, but not container storage pools.
To protect data in a directory-container pool, configure the server to replicate to a target server or copy data to tape by defining a copy storage rule.
- Do not issue a MOVE DRMEDIA command while you are backing up a storage pool.
- To back up a storage pool that has data shredding enabled, you can use the BACKUP STGPOOL command.
Storage pool backups are usually scheduled. If a schedule fails, you can manually start a backup, which runs as a background process.
When you back up multiple storage pools, back them up in migration order. For example, start with the first pool in the migration hierarchy, then continue to the next pool, and so on. This helps you ensure that all eligible files are included in the copy storage pool, which can be critical in a disaster recovery scenario.
If the primary and copy storage pools are configured to use data deduplication, redundant data is removed during the backup.
If the number of backup processes exceeds the number of available mount points or drives, the additional processes wait until mount points or drives become available. The processes are canceled if the wait time exceeds the "Wait for mounting" setting for the device class.
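For example, a manual storage pool backup might be started from an administrative command line as in the following sketch. The pool names are hypothetical and the parameter values are illustrative only:
backup stgpool diskpool copypool maxprocess=2 wait=no
Here, DISKPOOL is a primary device class storage pool, COPYPOOL is the copy storage pool, and MAXPROCESS sets the number of parallel backup processes.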
- Migrate
- Storage pool migration is usually automated by using capacity
thresholds or schedules. If migration fails, you can manually start
the operation as a background process. Restriction: The storage pool migration action applies only to device class storage pools other than container-copy pools. You cannot migrate data from a container storage pool or a container-copy pool.
You can specify the following migration settings:
- Duration
- After the specified time, the server cancels all migration processes for the selected storage pool.
- Stop migration
- For disk-based storage pools, the low threshold calculation is
based on estimated capacity. For FILE pools, this includes scratch
volumes.
For other storage pools, the number of volumes containing data is compared to the total number of volumes in the storage pool (including the maximum number of scratch volumes).
- Reclaim space before starting migration
- If you select this option, any eligible storage pool volumes are reclaimed before migration starts. Eligibility is determined by the value of the RECLAIM parameter of the storage pool definition.
- Processes
- If the number of migration processes exceeds the number of available mount points or drives, the additional processes wait until mount points or drives become available. The processes are canceled if the wait time exceeds the "Wait for mounting" setting for the device class.
If the simultaneous-write function is enabled for migration, each migration process requires a mount point and a drive for each copy storage pool and active-data pool that is defined to the target storage pool.
Tip: While the other migration settings only affect the current operation, changing this setting updates the storage pool definition.
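For example, a manual migration with an explicit low threshold and duration might be started as in the following sketch. The pool name DISKPOOL is hypothetical, the values are illustrative, and the assumption that the Processes setting corresponds to the MIGPROCESS value of the storage pool definition should be verified for your environment:
migrate stgpool diskpool lowmig=20 duration=60
update stgpool diskpool migprocess=4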
- Reclaim
- Storage pool reclamation is usually automated by using space thresholds or schedules. If
reclamation fails, you can manually start the operation as a background process. Restriction: This action reclaims fragmented space on storage pool volumes, and does not apply to container storage pools, which store data in logical containers. You can use the Operations Center to reclaim space for all types of sequential-access storage pools except container-copy pools.
For container-copy pools, reclamation occurs automatically when data is copied to the pool. As an alternative, you can run reclamation manually by using the RECLAIM parameter of the PROTECT STGPOOL command.
Tip: At least two volumes are required for reclamation. If the volumes are not available, either define more volumes or increase the "Maximum scratch volumes" setting.
You can specify the following reclamation settings:
- Duration
- After the specified time, the server cancels all reclamation processes for the selected storage pool.
- Start reclamation
- Reclaimable space is the amount of space that is occupied by files that are expired or deleted
from the IBM Spectrum Protect database. Reclaimable space also
includes unused space.
Tip: Consider selecting a value of at least 50% so that files stored on two volumes can be combined into a single volume.
- Processes
- If the number of reclamation processes exceeds the number of available mount points or drives, the additional processes wait until mount points or drives become available. The processes are canceled if the wait time exceeds the "Wait for mounting" setting for the device class.
Tip: While the other reclamation settings only affect the current operation, changing this setting updates the storage pool definition.
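For example, reclamation might be started manually as in the following sketch. The pool names ONSITEPOOL and CONTPOOL are hypothetical, the threshold and duration values are illustrative, and the RECLAIM=ONLY usage for a container-copy pool should be verified against your server's command reference:
reclaim stgpool onsitepool threshold=60 duration=120
protect stgpool contpool type=local reclaim=only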
- Convert
- Storage pool conversion moves data from a primary storage pool that uses the FILE device class,
a tape device class, or a virtual tape library (VTL) to a directory-container pool or
cloud-container pool. Restrictions:
- If the source pool is specified as a backup, archive, or migration destination in an active policy set that has pending changes, you must activate those changes before you can convert the pool.
- The following data types cannot be converted: table of contents (TOC) backups, virtual volumes, and Network Data Management Protocol (NDMP) data. These data types must be manually deleted from the source pool, moved to another primary pool, or allowed to expire based on policy settings.
- You must create the directory-container pool or cloud-container pool where data will be moved before you start the conversion operation.
- To convert a FILE pool to a directory-container pool, the target pool requires approximately 30% more free space than the capacity used by the source pool. When converting from a VTL or tape-based source pool to a directory-container pool, the target pool requires at least as much free space as the capacity used by the source pool.
- If the source storage pool is used to store TOC backups, another primary storage pool must be
available to store new TOC backups. Existing TOC backups are not moved during conversion.
The TOC pool must use a NATIVE or NONBLOCK data format and a device class other than Centera. To avoid mount delays, use a DISK or FILE device class.
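Following these restrictions, a conversion might look like the following sketch, in which the directory-container target pool is created first. The pool names, directory path, and parameter values are hypothetical:
define stgpool newcontpool stgtype=directory
define stgpooldirectory newcontpool /tsm/containers
convert stgpool filepool newcontpool maxprocess=4 duration=120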
- Type
- The following types can be shown:
- Primary
- Primary storage pools store files that are backed up, archived, or migrated from the client nodes. A device class determines the type of storage device that is used by the storage pool.
- Copy
- Copy storage pools store copies of files from primary storage pools. A device class determines the type of storage device that is used by the storage pool.
- Active-data
- Active-data storage pools store active versions of backup data from primary storage pools. A device class determines the type of storage device that is used by the storage pool.
- Container
- Container storage pools store files that are backed up from clients or replicated from another storage pool. Container pools provide optimized inline data deduplication and compression. Data is stored in logical containers in file system directories, in a vendor-supplied cloud, or in an in-house cloud.
- Container copy
- Container-copy storage pools store data copies from directory-container storage pools. A device class determines the tape storage device that is used by the storage pool.
- Retention
- Retention storage pools store retention set data on tape. A device class determines the type of tape storage device that is used by the pool. The device class can represent 3592 tape devices, LTO tape devices, or StorageTek drives. A retention storage pool has an associated retention-copy storage rule, which is automatically created when you define the pool. The retention-copy storage rule runs once each day to copy retention set data from primary storage to the retention storage pool.
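As a rough sketch, pools of several of these types might be defined with commands like the following. All pool and device class names are hypothetical, and required parameters vary by environment:
define stgpool tapepool lto_class maxscratch=50
define stgpool copypool lto_class pooltype=copy maxscratch=50
define stgpool contpool stgtype=directory
The first command defines a primary pool that uses an assumed tape device class, the second defines a copy pool, and the third defines a directory-container pool.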
- Status
- The status indicator can show a normal, warning, or critical state.
If a warning or critical state is caused by insufficient space, investigate and resolve the problem. For information about resolving insufficient space problems, see the description of the Capacity Used column.
If a warning or critical state is caused by a storage pool's access state, determine whether the access state was intentionally set. If the access state of the storage pool was intentionally set to read-only or unavailable, no action is required. Otherwise, you can change the storage pool setting.
If a warning or critical state for a container storage pool is caused by the access state of a storage pool directory, determine whether the access state was intentionally set. If the access state of the storage pool directory was intentionally set to read-only, unavailable, or destroyed, no action is needed. Otherwise, you can change the access state of the storage pool directory by using the UPDATE STGPOOLDIRECTORY command.
If a critical state for a container storage pool is caused by damaged containers, make sure that the containers are accessible and have valid content. A server marks a container as damaged when it cannot open or write to the container. Make sure that the server can access the file system or cloud storage where the containers are stored. If the server can access the file system or cloud storage, use the AUDIT CONTAINER command to make sure that the containers have valid content.
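For example, a storage pool directory that was unintentionally left read-only might be reset, and containers audited for valid content, with commands like the following sketch. The pool name and directory path are hypothetical:
update stgpooldirectory contpool /tsm/dir01 access=readwrite
audit container stgpool=contpool action=scanall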
- Capacity Used
- The estimated amount of used and free space for the storage pool, rounded up to the nearest
gigabyte. For cloud-container storage pools, there is no free space limit.
For device class storage pools, the "No capacity" state means that no volumes are available for the storage pool, or that no volumes are in use yet. For container storage pools, the "No capacity" state means that no containers have been created for the storage pool yet.
If the available capacity of a storage pool is low, take the following actions:
- If the available capacity is low for a device class primary pool, ensure that the migration thresholds are configured correctly. For disk-based pools, you might need to add volumes or change the space trigger settings.
To manage storage pool settings, use the QUERY STGPOOL and UPDATE STGPOOL commands. You can also change some settings on the Properties page.
- If the available capacity is low for a directory-container storage pool, verify that the file systems where the storage pool directories are defined have adequate free space.
If a file system is running out of space, expand the disk capacity or make space available on the disk. You can also add another directory to the directory-container storage pool so that storage pool performance is not degraded by leaving the server with fewer devices to write to. You can add a storage pool directory by using the DEFINE STGPOOLDIRECTORY command.
Tip: For spoke servers running a version of IBM Spectrum Protect earlier than V8.1, the capacity that is shown for tape pools is based on the maximum scratch volumes setting. By default, this value is set very high if you create a storage pool in the Operations Center. To provide a more accurate capacity estimate, you can upgrade the server to V8.1 or later, or adjust the setting for each storage pool. If you lower the value, ensure that enough volumes are available to handle the expected usage.
You can view and change the maximum scratch volumes setting on the Properties page for a storage pool.
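The following sketch shows commands that are often useful when capacity is low. The pool names, directory path, and scratch volume limit are hypothetical:
query stgpool diskpool f=d
update stgpool tapepool maxscratch=100
define stgpooldirectory contpool /tsm/dir02
The first command shows detailed capacity and threshold information, the second raises the maximum scratch volumes setting for a tape pool, and the third adds a directory to a directory-container pool.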
- Access
- For storage pools that do not have read/write access, one of the
following access states is shown:
- Read-only
- Client nodes cannot write files to the storage pool, but server processes can move files within volumes in the storage pool. However, no new write operations are permitted to volumes in the storage pool from volumes outside the storage pool.
- Unavailable
- Client nodes cannot read or write files in the storage pool. Server processes can move files within volumes in the storage pool and can also move or copy files from this storage pool to another storage pool. However, no new write operations are permitted to volumes in the storage pool from volumes outside the storage pool.
For either access state, the server skips this storage pool if it is configured as a next pool and other storage pools attempt to migrate data to it.
You can change the Access setting on the Properties page.
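If an access state must be reset from the command line, a minimal sketch such as the following can be used, assuming a hypothetical pool named DISKPOOL:
update stgpool diskpool access=readwrite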
- Storage Type
- Additional information about the purpose of the storage pool or the location of stored data.
If a primary storage pool has a storage type of Cold data cache, it is a cold-data-cache storage pool. A cold-data-cache storage pool consists of one or more file system directories on disk. It is used only by object clients as a temporary staging area for sequential volumes during tape backup and restore operations. It is an intermediary storage pool between the object client and a tape device or VTL. The Next Pool column shows the primary sequential access storage pool that represents the tape device or VTL.
If the storage pool is a container storage pool, it creates logical containers for storage pool data, and its storage type shows where the containers are stored. The following storage types can be shown for container storage pools:
- Directory
- The containers are created in one or more file system directories that you identify during configuration. The file system directories map to one or more disk devices.
- On-premises cloud
- The containers are created in a cloud environment that you identify during configuration. The physical location of the cloud is on premises.
- Off-premises cloud
- The containers are created in a cloud environment that you identify during configuration. The physical location of the cloud is off premises, in a vendor-supplied cloud.
- Cloud Read Cache
- Additional information about the purpose of the cloud read cache.
By temporarily storing containers on disk, a cloud read cache can improve the performance of restore operations from cloud-container storage pools. By default, a read cache is not enabled.
When the setting is On, read cache data is not automatically removed during ingest operations, and ingest operations might encounter space-usage issues.
When the setting is On, Prefer Ingest, and ingested data encounters an out-of-space condition for a storage pool directory, read cache data is removed from that directory and caching to that directory is paused for 60 seconds.
The following options are available for container storage pools:
- Off
- Specifies that the read cache is disabled.
- On
- Specifies that the read cache is enabled.
- On, Prefer Ingest
- Specifies that the read cache is enabled. If ingested data has an out-of-space issue for a storage pool directory, the read cache data is removed from that directory.
For more information about managing storage pools or using IBM Spectrum Protect commands, see the IBM Spectrum Protect documentation.