ESS 3000 contents
ESS 3000 version 6.0.2.6 content stack
Component | Version |
---|---|
Red Hat® Enterprise Linux® | |
Mellanox OFED | |
GNU C Library | glibc-2.28-164.el8 |
NVMe drive firmware | SN1MSN1M |
Boot drive firmware (Smart) | FW1361 |
Boot drive firmware (Micron) | ML32 |
Canister firmware (besides boot drive) | 1111 (2.02.000_0B0G_1.73_FB300052_0C32.official) |
BIOS level | 52 |
Podman | |
xCAT | 2.16.3 |
Support RPMs | |
ESA | esagent.pLinux-4.5.7-0 |
POWER9 EMS stack
Item | Version |
---|---|
IBM Spectrum Scale | IBM Spectrum Scale 5.0.5.14 |
Operating system | Red Hat Enterprise Linux 8.2 |
ESS | 6.0.2.6 |
Kernel | |
Systemd | 239-31.el8_2.8 |
Network Manager | 1.22.8-9.el8_2 |
GNU C Library | glibc-2.28-164.el8.ppc64le.rpm |
Mellanox OFED | MLNX_OFED_LINUX-4.9-4.1.7.2 |
ESA | |
Ansible | |
Podman | |
Container OS | Red Hat UBI 8.4 |
xCAT | |
Firmware RPM | |
System firmware | |
Boot drive adapter | IPR |
Boot drive firmware | |
1Gb NIC firmware | |
Support RPM | |
Network adapter | |
Major changes from earlier releases
Release | Major changes |
---|---|
ESS 6.0.2.6 | |
ESS 6.0.2.4 | |
ESS 6.0.2.2 | |
ESS 3000 version 6.0.2.6 editions
Note: The package version mentioned in this document might differ from the version of the installation package available at IBM® FixCentral.
The ESS 3000 software is available in two editions:
- Data Management Edition
ess3000_6.0.2.6_0503-15_dme_ppc64le.tgz
- Data Access Edition
ess3000_6.0.2.6_0503-15_dae_ppc64le.tgz
Fixes and improvements in ESS 3000 version 6.0.2.6
- Updated code stack
- General bug fixes and improvements
Support for signed RPMs
ESS and IBM Spectrum Scale RPMs are signed by IBM. The PGP public key is located in /opt/ibm/ess/tools/conf:
-rw-r-xr-x 1 root root 907 Dec 1 07:45 SpectrumScale_public_key.pgp
You can check whether an ESS or IBM Spectrum Scale RPM is signed by IBM as follows:
- Import the PGP key.
rpm --import /opt/ibm/ess/tools/conf/SpectrumScale_public_key.pgp
- Verify the RPM.
rpm -K RPMFile
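The two commands above can be wrapped in a small loop to check a whole directory of packages at once. This is only a sketch: the directory path is a placeholder for wherever your RPMs are extracted, and the exact wording that rpm -K prints for a verified package can vary slightly across rpm versions.

```shell
#!/bin/sh
# Sketch: verify every RPM in a directory. Assumes IBM's key was already
# imported with:
#   rpm --import /opt/ibm/ess/tools/conf/SpectrumScale_public_key.pgp
# RPM_DIR is a hypothetical location; point it at your extracted packages.
RPM_DIR="${RPM_DIR:-/tmp/ess_rpms}"
fails=0
for f in "$RPM_DIR"/*.rpm; do
    [ -e "$f" ] || continue          # directory may be empty or missing
    # A correctly signed package prints a line containing "signatures OK".
    rpm -K "$f" | grep -q 'signatures OK' || { echo "FAILED: $f"; fails=$((fails+1)); }
done
echo "RPMs failing signature check: $fails"
```

Any package reported as FAILED should not be installed; re-download it and verify again.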
Security law changes
- New systems and switches shipped from manufacturing now have either an expired password (root password, switches) or one set to the serial number of the component (ASMI passwords).
- It is advised that you change all passwords after the deployment is complete.
- The default root password for the OS is ibmesscluster. After the deployment is complete, it is advised that you change this password on each server. It is a best practice to use the same password on each node, but it is not mandatory.
- The default ASMI passwords (login, IPMI, HMC, and so on) are set to the serial number of the server. It is a best practice to set the same IPMI password on each node.
- If the 1Gb Cumulus switch is shipped racked, the default password is the serial number (S11 number, on a label found on the back of the switch). If the switch is shipped unracked, you are required to set the password upon first login. The default password is CumulusLinux!, but you will be prompted to change it upon first login. If you have any issues logging in or need help setting up a VLAN with the switch, consult this documentation link.
ESS 3000 and ESS 5000 server networking requirements
In any scenario, you must have an EMS node and a management switch. The management switch must be split into two VLANs:
- Management VLAN
- Service/FSP VLAN
ESS 3000
POWER8 or POWER9 EMS
It is recommended to buy a POWER9 EMS with ESS 3000. If you have a legacy environment (POWER8), it is recommended to migrate to IBM Spectrum Scale 5.1.x.x and use the POWER9 EMS as the single management server.
- If you are adding ESS 3000 to a POWER8 EMS:
- An additional connection for the container to the management VLAN must be added. A C10-T2 cable must be run to this VLAN.
- A public/campus connection is required in C10-T3.
- A management connection must be run from C10-T1 (This should be already in place if adding to an existing POWER8 EMS with legacy nodes).
- If you are using an ESS 3000 with a POWER9 EMS:
- C11-T1 must be connected on the EMS to the management VLAN.
- Port 1 on each ESS 3000 canister must be connected to the management VLAN.
- C11-T2 must be connected on the EMS to the FSP VLAN.
- HMC1 must be connected on the EMS to the FSP VLAN.
ESS 5000
POWER9 EMS support only
EMS must have the following connections:
- C11-T1 to the management VLAN
- C11-T2 to the FSP VLAN
- HMC1 to the FSP VLAN
ESS 5000 nodes must have the following connections:
- C11-T1 to the management VLAN
- HMC1 to the FSP VLAN
ESS best practices
- ESS 6.x.x.x uses a new embedded license. It is important to know that installation of any Red Hat packages outside of the deployment upgrade flow is not supported. The container image provides everything required for a successful ESS deployment. If additional packages are needed, contact IBM for possible inclusion in future versions.
- For ESS 3000, consider enabling TRIM support. This is outlined in detail in IBM Spectrum Scale RAID Administration. By default, ESS 3000 allocates only 80% of the available space. Consult IBM development on whether going beyond 80% makes sense for your environment, that is, whether the performance implications of this change are acceptable to you.
- You must set up a campus or an additional management connection before deploying the container.
- If running with a POWER8 and a POWER9 EMS in the same environment, it is best to move all containers to the POWER9 EMS. If there is a legacy PPC64LE system in the environment, it is best to migrate all nodes to ESS 6.1.x.x and decommission the POWER8 EMS altogether. This way you do not need to run multiple ESS GUI instances.
- If you have a POWER8 EMS, you must upgrade the EMS by using the legacy flow if there are xCAT based PPC64LE nodes in the environment (including protocol nodes). If there are just an ESS 3000 system and a POWER8 EMS, you can upgrade the EMS from the ESS 3000 container.
- If you are migrating the legacy nodes to ESS 6.1.x.x on the POWER8 EMS, you must first uninstall xCAT and all dependencies. It is best to migrate over to the POWER9 EMS if applicable.
- You must be at ESS 5.3.7 (Red Hat Enterprise Linux 7.7 / Python3) or later to run the ESS 3000 container on the POWER8 EMS.
- You must run the essrun config load command against all the storage nodes (including EMS and protocol nodes) in the cluster before enabling admin mode central or deploying the protocol nodes by using the installation toolkit.
- If you are running a stretch cluster, you must ensure that each node has a unique hostid. The hostid might be non-unique if the same IP addresses and host names are being used on both sides of the stretch cluster. Run gnrhealthcheck before creating recovery groups when adding nodes in a stretch cluster environment. You can manually check the hostid on all nodes as follows:
mmdsh -N { NodeClass | CommaSeparatedListofNodes } hostid
If the hostid on any node is not unique, fix it by running genhostid on that node. These steps must be done when creating a recovery group in a stretch cluster.
- Consider placing your protocol nodes in file system maintenance mode before upgrades. This is not a requirement, but you should strongly consider doing it. For more information, see File system maintenance mode.
- Do not try to update the EMS node while you are logged in over the high-speed network. Update the EMS node only through the management or the campus connection.
- After adding an I/O node to the cluster, run the gnrhealthcheck command to ensure that there are no issues, such as duplicate host IDs, before creating vdisk sets. Duplicate host IDs cause issues in the ESS environment.
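The hostid uniqueness check described in the stretch-cluster best practice can be automated by collecting the mmdsh output and looking for repeated values. The sample data below is illustrative only; on a real cluster you would feed the pipeline the output of the mmdsh command shown in the text.

```shell
#!/bin/sh
# On a real cluster, collect host IDs with (NodeClass is your node class):
#   mmdsh -N NodeClass hostid > /tmp/hostids.txt
# Sample output used here for illustration; each line is "node: hostid".
printf 'io1: 0a0b0c0d\nio2: 0a0b0c0e\nio3: 0a0b0c0d\n' > /tmp/hostids.txt

# Any hostid that appears more than once is a duplicate; fix it on one of
# the offending nodes by running genhostid there, then re-check.
dups=$(awk '{print $2}' /tmp/hostids.txt | sort | uniq -d)
if [ -n "$dups" ]; then
    echo "duplicate hostid(s): $dups"
fi
```

With the sample data, the script reports 0a0b0c0d as a duplicate; an empty result means every node's hostid is unique.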