Storage specifications

The storage specifications for the IBM® PureData® System for Operational Analytics are provided here.

External storage

Each module in the system is allocated external storage servers (a flash storage node, a disk storage node, and a disk storage node expansion). Each flash storage node has two controllers and six 16 Gbps Fibre Channel host ports. Each disk storage node has two controllers, 16 GB of cache in total, and eight 8 Gbps Fibre Channel host ports. All fourteen Fibre Channel host ports are connected to the SAN switches.

Expansion of the system is done by adding data modules. A set includes up to four data modules; as data modules are added, they populate any set that does not already contain four data modules. A set consists of the following components:
  • 1, 2, 3, or 4 data modules
  • 1 standby data module
  • 2 SAN switches
Each set belongs to a single HA group.
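As a rough illustration of these expansion rules, the following sketch (a minimal example with hypothetical names, not part of the product tooling) estimates how many sets, standby data modules, and data-node SAN switches a given number of data modules implies:

import math

def data_module_layout(data_modules: int) -> dict:
    """Estimate the set-level components implied by a number of data modules.

    Each set holds up to four data modules, and each set also includes one
    standby data module and two SAN switches, as described above. The
    foundation module's own SAN switch pair is not counted here.
    """
    sets = math.ceil(data_modules / 4)   # a partially filled set still counts
    return {
        "sets": sets,
        "standby_data_modules": sets,    # one standby data module per set
        "san_switches": sets * 2,        # one pair of SAN switches per set
    }

# Example: six data modules fill one set and start a second one.
print(data_module_layout(6))
# {'sets': 2, 'standby_data_modules': 2, 'san_switches': 4}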

The foundation node, its external storage, and the standby foundation node are connected to one pair of SAN switches. Each additional pair of SAN switches for the data nodes supports connections for one standby data node, up to four data nodes, up to four flash storage nodes, up to four disk storage nodes, and up to four disk storage node expansions.

In all cases, the cabling design uses redundant HBAs on the nodes and redundant controllers on the flash storage nodes and the disk storage nodes. For all nodes, redundant SAN switches are used.

All RAID arrays on the disk storage nodes are created using a segment size of 256 KB.

Foundation module: management host and administration host

The foundation module consists of one flash storage node, one disk storage node, and one disk storage node expansion. There are ten 2.9 TB flash modules that are configured into one (8+P+HS) RAID-5 array. There are forty-eight 1.2 TB, 10 K rpm, serial-attached SCSI (SAS) drives: twenty-four are configured into six (3+P) RAID-5 arrays, five into one (3+P+Q) RAID-6 array, eleven into one (9+P+Q) RAID-6 array, and the remaining eight are available as hot-spare drives. Figure 1 and Figure 2 show the arrays, devices (hdisks), and file systems created on the storage allocated to the management host and the administration host.
Note: The hdisk numbering might be different on your system.
Figure 1. External storage configuration for the foundation module, Part 1
Figure 2. External storage configuration for the foundation module, Part 2
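To make the array arithmetic above concrete, the following sketch (illustrative only; it uses the nominal drive capacities quoted above and ignores formatting, spare, and file-system overhead) computes the data capacity contributed by each array type in the foundation module:

def raid_data_capacity(drive_tb: float, data_drives: int) -> float:
    """Nominal data capacity of one array, counting data drives only.

    Parity (P, Q) and hot-spare (HS) drives hold no user data, so only the
    data drives contribute. Formatting overhead is ignored.
    """
    return drive_tb * data_drives

# Foundation module arrays as described above (nominal figures only):
flash_8_p_hs = raid_data_capacity(2.9, 8)        # one (8+P+HS) RAID-5 array on flash
sas_3_p      = raid_data_capacity(1.2, 3) * 6    # six (3+P) RAID-5 arrays
sas_3_p_q    = raid_data_capacity(1.2, 3)        # one (3+P+Q) RAID-6 array
sas_9_p_q    = raid_data_capacity(1.2, 9)        # one (9+P+Q) RAID-6 array

print(f"Flash (8+P+HS): {flash_8_p_hs:.1f} TB")  # 23.2 TB
print(f"SAS 6 x (3+P):  {sas_3_p:.1f} TB")       # 21.6 TB
print(f"SAS (3+P+Q):    {sas_3_p_q:.1f} TB")     # 3.6 TB
print(f"SAS (9+P+Q):    {sas_9_p_q:.1f} TB")     # 10.8 TB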
The following table shows the sizes of the file systems created on the storage allocated to the administration host and management host.
Table 1. File system layout for the administration host and management host
File system                           Size (GB)
/db2fs/bcuaix/NODE000n, n = 0-5       3124 each
/bkpfs/bcuaix/NODE000n, n = 0-5       3350 each
/db2path/bcuaix/NODE000n, n = 0-5     70 each
/stage                                9997
/db2home                              300
/dwhome                               10
/opmfs                                800
/usr/IBM/dwe/appserver_001            200
/BCU_share *                          600
/pscfs *                              100
Note: All of the file systems listed in Table 1 are GPFS™ file systems, except for those indicated by an asterisk (*), which are Enhanced Journaled File System (JFS2) file systems.
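As one way to read Table 1, the following sketch (the sizes are copied from the table; the variable names are illustrative) expands the "each" entries across database partitions 0-5 and totals the allocated space:

# Sizes in GB, taken from Table 1. Entries marked "each" exist once per
# database partition; partitions 0-5 reside on these hosts.
per_partition_gb = {"/db2fs": 3124, "/bkpfs": 3350, "/db2path": 70}
shared_gb = {
    "/stage": 9997, "/db2home": 300, "/dwhome": 10, "/opmfs": 800,
    "/usr/IBM/dwe/appserver_001": 200, "/BCU_share": 600, "/pscfs": 100,
}

partitions = 6  # database partitions 0 through 5
total_gb = sum(per_partition_gb.values()) * partitions + sum(shared_gb.values())
print(f"Total allocated: {total_gb} GB")  # 6544 * 6 + 12007 = 51271 GB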
The following file systems are created on the external storage allocated to the administration host and management host for the IBM PureData System for Operational Analytics:
  • /db2fs/bcuaix/NODE000n: This GPFS file system stores all permanent table space containers for database partition 0, whether or not the table space is an automatic storage table space. The permanent table spaces stored on this database partition are used for small non-partitioned tables, such as dimension tables, lookup tables, and tables used for monitoring. This file system also contains the catalog files for the database and the diagnostic data directory for database partition 0. The /db2fs file system is also used for temporary table spaces. If mirror logging is enabled, the mirrored database logs are stored on this file system.
  • /bkpfs/bcuaix/NODE000n: This GPFS file system is for fast local DB2® database backups and infrequently-accessed (cold) data storage. This backup file system must not be used for frequently-accessed (hot) active data storage. With multi-temperature data management, previously hot or warm data table spaces that are now infrequently or no longer accessed (cooled down) can be relocated to the cold data storage group contained within the backup file system.
  • /db2path/bcuaix/NODE000n: This GPFS file system stores the database directory and holds the primary logs for database partition 0.
  • /db2home: This file system is used for the DB2 instance home directory. It is a GPFS file system that is shared with all core warehouse hosts at the mount point /db2home.
  • /dwhome: This file system is used as a home directory for users. It is a GPFS file system that is shared with all core warehouse hosts at the mount point /dwhome.
  • /stage: This file system is used for scratch space, staging tables, flat files, and other purposes. It is a GPFS file system that is shared with all core warehouse hosts at the mount point /stage.
  • /opmfs: This GPFS file system is used for the database performance manager database.
  • /pscfs: This JFS2 file system is used for the system console database.
  • /BCU_share: This JFS2 file system is used for fix packs. This file system is mounted at the start of fix pack installation and unmounted after the fix pack is installed and the updates are committed.
  • /usr/IBM/dwe/appserver_001: This GPFS file system is used by the warehouse tools.

Data module

The data module consists of one flash storage node, one disk storage node, and one disk storage node expansion. There are eight 5.7 TB flash modules that are configured into one (6+P+HS) RAID-5 array. There are forty-eight 1.2 TB, 10 K rpm, serial-attached SCSI (SAS) drives: forty are configured into ten (3+P) RAID-5 arrays, and the remaining eight are available as hot-spare drives. Figure 3 and Figure 4 show the mapping of the arrays and hdisks to the file systems created on the storage allocated to a data host.
Note: The hdisk numbering might be different on your system.
Figure 3. External storage configuration for a data module, Part 1
Figure 4. External storage configuration for a data module, Part 2
Each database partition on a data host is allocated the following GPFS file systems:
  • /db2fs/bcuaix/NODENNNN
  • /bkpfs/bcuaix/NODENNNN
  • /db2path/bcuaix/NODENNNN
where NNNN is the database partition number padded with zeros to four digits (for example, NODE0006 for database partition 6), and N is the database partition number without padding.
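The following sketch (the helper name is hypothetical) shows how the zero-padded NNNN portion of these mount points is derived from a database partition number:

def partition_file_systems(partition: int) -> list[str]:
    """Build the per-partition GPFS mount points, zero-padding the database
    partition number to four digits (for example, 6 -> NODE0006)."""
    node = f"NODE{partition:04d}"
    return [f"/db2fs/bcuaix/{node}",
            f"/bkpfs/bcuaix/{node}",
            f"/db2path/bcuaix/{node}"]

print(partition_file_systems(6))
# ['/db2fs/bcuaix/NODE0006', '/bkpfs/bcuaix/NODE0006', '/db2path/bcuaix/NODE0006']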
These file systems are used for the same purposes as the file systems created for database partition 0 that were described earlier, with the following exception:
  • /db2fs/bcuaix/NODENNNN: This file system stores all permanent table space containers for database partition NNNN, whether or not the table space is an automatic storage table space. On all database partitions on the data nodes, the permanent table spaces stored on this database partition are used for partitioned data. This file system also contains the diagnostic data directory for database partition NNNN. It is not used for catalog tables for the data hosts. The /db2fs file system is also used for temporary table spaces. If mirror logging is enabled, the mirrored database logs are stored on this file system.
The following table shows the sizes of the file systems created for each database partition on the data host.
Table 2. File system layout for a database partition on the data host
File system               Size (GB)
/db2fs/bcuaix/NODENNNN    3124 each
/bkpfs/bcuaix/NODENNNN    3350 each
/db2path/bcuaix/NODENNNN  70 each
Note: All of the file systems listed in Table 2 are GPFS file systems.