Fix Readme
Abstract
Release notes for the 3.1.4.31 VIOS Fix Pack release
Content
VIOS 3.1.4.31 Release Notes
Package Information
PACKAGE: Update Release 3.1.4.31
IOSLEVEL: 3.1.4.31
VIOS level | The AIX level of the NIM Master must be equal to or higher than
Update Release 3.1.4.31 | AIX 7200-05-07
General package notes
Be sure to heed all minimum space requirements before installing.
Review the list of fixes included in Update Release 3.1.4.31
To take full advantage of all the functions available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 3.1.4.31.
Microcode or system firmware downloads for Power Systems
If the VIOS being updated has filesets installed from the VIOS Expansion Pack, be sure to update those filesets with the latest VIOS Expansion Pack if updates are available.
Update Release 3.1.4.31 updates your VIOS partition to ioslevel 3.1.4.31. To determine if Update Release 3.1.4.31 is already installed, run the following command from the VIOS command line.
$ ioslevel
If Update Release 3.1.4.31 is installed, the command output is 3.1.4.31.
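The level comparison can be scripted as a quick sanity check. In this sketch the hard-coded value stands in for live `ioslevel` output; on a real VIOS you would replace the assignment with a command substitution.

```shell
# Minimal sketch: compare the reported ioslevel against the target level.
# On a VIOS you would use: current_level=$(ioslevel)
current_level='3.1.4.31'
target_level='3.1.4.31'
if [ "$current_level" = "$target_level" ]; then
  echo "Update Release $target_level is already installed"
else
  echo "current level is $current_level; update to $target_level required"
fi
```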
Note: The VIOS installation DVDs and the level of VIOS preinstalled on new systems might not contain the latest fixes available. It is highly recommended that customers who receive the physical GA level (for example, 3.1.4.30) update to the electronic GA level (for example, 3.1.4.31) as soon as possible, because missing fixes might be critical to the proper operation of your system. Update these systems to a current service pack level from Fix Central.
For Customers using NVMe Over Fabric (SAN) as their Boot Disk
Booting from an NVMeoF disk may fail if certain fabric errors are returned, so a boot disk set up with multiple paths is recommended. If a boot failure occurs, the boot process may continue if you exit from the SMS menu. Another potential workaround is to discover the boot LUNs from the SMS menu and then retry the boot.
For Customers Using Third Party Java-based Software
This only applies to customers who both use third-party Java-based software and have run updateios -remove_outdated_filesets to remove Java 7 from their system.
To prevent errant behavior when editing the customer's /etc/environment file, updateios does not change that file when it runs. If you use software that depends on Java being on the PATH environment variable, make the following edit so that programs that locate Java through PATH can find Java 8.
In the /etc/environment file, customers should see:
PATH=[various directories]:/usr/java7_64/jre/bin:/usr/java7_64/bin
To address a potential issue with Java-dependent third party software, this should be converted to:
PATH=[various directories]:/usr/java8_64/jre/bin:/usr/java8_64/bin
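A hedged sketch of that edit, using sed against a scratch copy rather than the live /etc/environment (AIX sed has no in-place flag, so the result goes to a new file; the file names here are illustrative):

```shell
# Sample line mirroring the PATH entry quoted above.
printf 'PATH=/usr/bin:/usr/java7_64/jre/bin:/usr/java7_64/bin\n' > /tmp/environment.sample

# Rewrite the Java 7 path components to their Java 8 equivalents.
sed 's|/usr/java7_64/jre/bin:/usr/java7_64/bin|/usr/java8_64/jre/bin:/usr/java8_64/bin|' \
    /tmp/environment.sample > /tmp/environment.new
cat /tmp/environment.new
```

After reviewing the new file, the administrator would copy it over /etc/environment; subsequent logins then pick up the Java 8 path.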
ITM Agents Software
ITM (IBM Tivoli Monitoring) filesets continue to be pre-installed as part of VIOS 3.x. The agents can be updated using one of the methods below:
- To update the agent and shared components (e.g., JRE, GSKit) to the latest levels, download the latest image included in the IBM Tivoli Monitoring System P Agents 6.22 Fix Pack 4 bundle. The bundle's readme file explains how to obtain the image and how to install it.
- To update just the agent shared components (e.g., JRE, GSKit), install the latest ITM service pack.
LDAP fileset updates
For VIOS partitions originally installed at 3.1.4.30 or later, errors updating the idsldap 6.4 filesets can be safely ignored as version 10.0 of the idsldap filesets are already present on the system.
Ignore the following errors:
installp: APPLYING software for:
idsldap.license64.rte 6.4.0.25
…
…
Error: IBM Security Directory Server License not detected. Install cannot continue.
installp: Failed while executing the idsldap.license64.rte.pre_i script.
0503-464 installp: The installation has FAILED for the "usr" part
of the following filesets:
idsldap.license64.rte 6.4.0.25
installp: Cleaning up software for:
idsldap.license64.rte 6.4.0.25
Also ignore the following entries in the “Installation Summary”:
idsldap.license64.rte 6.4.0.25 USR APPLY FAILED
idsldap.license64.rte 6.4.0.25 USR CLEANUP SUCCESS
idsldap.cltbase64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.cltbase64.adt 6.4.0.25 USR APPLY CANCELED
idsldap.clt64bit64.rte 6.4.0.25 USR APPLY CANCELED
idsldap.clt32bit64.rte 6.4.0.25 USR APPLY CANCELED
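Before ignoring these failures, it is worth confirming that the version 10.0 idsldap filesets really are present. This sketch parses a sample line in the shape of `lslpp -l "idsldap*"` output; the fileset line shown is illustrative, and on a VIOS you would pipe the live lslpp output through the same awk expression.

```shell
# Sample text standing in for `lslpp -l "idsldap*"` output on a VIOS.
lslpp_output='idsldap.cltbase64.rte  10.0.0.0  COMMITTED'

# Keep only filesets whose level starts with 10.
newer=$(printf '%s\n' "$lslpp_output" | awk '$2 ~ /^10\./ {print $1}')
echo "version 10 filesets found: $newer"
```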
Note
- If the Virtual I/O Servers are installed on POWER10 systems and configured with the 32Gb PCIe4 2-Port FC Adapter (Feature Codes EN1J and EN1K), the adapter microcode must be updated to level 7710812214105106.070115 before the Virtual I/O Server is updated to the 3.1.4.31 level.
Refer to the adapter microcode release notes for details.
3.1.4.31 New Features
VIOS 3.1.4.31 adds the following new features:
VIOS Shared Storage Pool Logging Enhancements
The two major enhancements for the VIOS Shared Storage Pool in this release are as follows:
- The creation of a dbn.log file within a Shared Storage Pool (SSP).
This file tracks all elections and relinquishments of the Database Node (DBN) role, which makes DBN-related problems easier to debug.
- The compression and storage of vio_daemon logs.
The number of logs that can be retained is increased by 15 times with no impact to storage capacity. This is done by compressing old VIOS logs and tagging them with appropriate date and time information, which reduces the risk of logs containing critical information being overwritten by newer logs.
N_Port ID Virtualization (NPIV) Enhancements: NVMeoF Protocol Support
NPIV is a standardized method for virtualizing a physical Fibre Channel (FC) port. An NPIV-capable FC host bus adapter (HBA) can have multiple N_Ports, each with a unique identity. NPIV, coupled with the adapter-sharing capabilities of the Virtual I/O Server (VIOS), allows a physical Fibre Channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER® logical partitions (LPARs) to have virtual Fibre Channel host bus adapters, each with a dedicated worldwide port name. Each virtual Fibre Channel HBA has a unique storage area network (SAN) identity similar to that of a dedicated physical HBA.
The Non-Volatile Memory Express over Fabrics (NVMeoF) protocol in the NPIV stack is supported in Virtual I/O Server Version 3.1.4.0. A single virtual adapter provides access to both Small Computer Systems Interface (SCSI) and NVMeoF protocols if the physical adapter can support them. The application, which is running on the client partition and capable of handling the NPIV-NVMeoF protocol, can send I/Os in parallel to SCSI and NVMeoF disks that are coming from a single virtual adapter. The hardware and software requirements for NVMeoF protocol enablement in the NPIV stack are as follows:
- VIOS Version 3.1.4.0, or later
- NPIV-NVMeoF capable client (currently AIX® Version 7.3 Technology Level 01, or later)
- POWER 10 system with firmware version FW 1030, or later
- 32 or 64 GB FC adapters with physical NVMeoF support
VIOS Operating System Monitoring Enhancement
This release adds support for monitoring the VIOS operating system state by the POWER Hypervisor. If the VIOS partition is not responsive (due to certain conditions), the hypervisor restarts the VIOS partition while taking a system dump for debugging purposes. This helps the VIOS partition recover from errors, for example, when a CPU is monopolized by the highest-priority interrupt and system progress stops. The ioscli viososmon command is added to query the hang-detection interval and the action that is taken when a hang is detected. This support requires POWER firmware version FW 1030, or later, and VIOS Version 3.1.4.0, or later.
Support for NFSv4 Mounts on VIOS
The ioscli mount command, which previously supported only the AIX NFSv3 mount by default, is updated to support NFSv4 mounting. The changes allow the VIOS to invoke commands for the following actions:
- Setting Network File System (NFS) domain using chnfsdom from command line interface (CLI)
The setting of the NFS domain is accomplished by adding a Role-based access control (RBAC) support for the chnfsdom command.
- Invoking the NFSv4 mounting
The ioscli mount command is updated to support invoking the NFSv4 mounting. The current ioscli mount command defaults to NFSv3. You can invoke the “-o vers=4” option through the new “-nfsvers <version>” option that is added to the ioscli mount command, as in “mount -nfsvers <version> <Node>:<Directory> <Directory>”. The values that are supported for the version are 3 and 4.
Note: The ioscli mount command supports NFS versions that are supported by the AIX mount command.
- Starting the nfsrgyd daemon
For version 4, if the mount is successful, a check is done to see whether the nfsrgyd daemon is already running; if it is not, the nfsrgyd daemon is started.
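The version handling above can be sketched minimally, assuming only that the supported values are 3 and 4. The final echo prints an AIX-style mount invocation instead of running it, and the server and paths are placeholders.

```shell
# Map the -nfsvers value onto the underlying "-o vers=N" mount option.
nfsvers=4
case "$nfsvers" in
  3) opts="vers=3" ;;
  4) opts="vers=4" ;;
  *) echo "unsupported NFS version: $nfsvers" >&2; exit 1 ;;
esac
# Print (rather than run) the command a VIOS would issue underneath.
echo "mount -o $opts server:/export /mnt"
```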
Hardware Requirements
VIOS 3.1.4.31 can run on any of the following Power Systems:
POWER 8 or later.
Known Capabilities and Limitations
The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.
Requirements for Shared Storage Pool
- Platforms: POWER 8 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
- System requirements per SSP node:
- Minimum CPU: 1 CPU of guaranteed entitlement
- Minimum memory: 4GB
- Storage requirements per SSP cluster (minimum): 1 Fibre Channel-attached disk for the repository, 1 GB
- At least 1 Fibre Channel-attached disk for data, 10 GB
Limitations for Shared Storage Pool
Software Installation
- When installing Update Release 3.1.4.31 on a VIOS participating in a Shared Storage Pool, the Shared Storage Pool services must be stopped on the node being updated.
SSP Configuration
Feature | Min | Max
Number of VIOS Nodes in Cluster | 1 | 16*
Number of Physical Disks in Pool | 1 | 1024
Number of Virtual Disks (LUs) Mappings in Pool | 1 | 8192
Number of Client LPARs per VIOS node | 1 | 250*
Capacity of Physical Disks in Pool | 10GB | 16TB
Storage Capacity of Storage Pool | 10GB | 512TB
Capacity of a Virtual Disk (LU) in Pool | 1GB | 4TB
Number of Repository Disks | 1 | 1
Capacity of Repository Disk | 512MB | 1016GB
Number of Client LPARs per Cluster | 1 | 2000
*Support for additional VIOS Nodes and LPAR Mappings:
Prerequisites for expanded support:
- Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contains only SSD storage.
- Over 250 Client LPARs per VIOS requires each VIOS have at least 4 CPUs and 8 GB memory.
Here are the new maximum values for each of these configuration options, if the associated hardware specification has been met:
Feature | Default Max | High Spec Max
Number of VIOS Nodes in Cluster | 16 | 24
Number of Client LPARs per VIOS node | 250 | 400
Other notes:
- Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
- The Shared Storage Pool cluster name must be less than 63 characters long.
- The Shared Storage Pool pool name must be less than 127 characters long.
- The maximum supported LU size is 4TB; however, for high I/O workloads, multiple smaller LUs are recommended because they improve performance. For example, 16 separate 16GB LUs yield better performance than a single 256GB LU for applications that read and write to a variety of storage locations concurrently.
- The /var file system should be at least 3GB to ensure proper logging.
Network Configuration
- Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
- A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
- A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use the DNS. For step by step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.
- The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
- It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
Storage Configuration
- Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM is not supported in a Shared Storage Pool environment.
Shared Storage Pool capabilities and limitations
- On the client LPAR, the Virtual SCSI disk is the only peripheral device type supported by SSP at this time.
- When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.
- VIOSs configured for SSP require that Shared Ethernet Adapters (SEAs) be set up for threaded mode (the default mode). SEA in interrupt mode is not supported within SSP.
- VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported.
- LPAR clients that use JFS as their file system are not supported; with JFS, there is a risk of data corruption in the event of a network outage. JFS2 and other file systems are unaffected by this issue.
Installation Information
Pre-installation Information and Instructions
Ensure that your rootvg contains at least 30 GB and has at least 4 GB of free space before you attempt to update to Update Release 3.1.4.31. Run the lsvg rootvg command and confirm that there is enough free space.
Example:
$ lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:   00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:         64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:       511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:        64 (4096 megabytes)
LVs:                14                       USED PPs:        447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:          2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS:  2
STALE PVs:          0                        STALE PPs:       0
ACTIVE PVs:         1                        AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:         32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:          no                       BB POLICY:       relocatable
PV RESTRICTION:     none                     INFINITE RETRY:  no
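The free-space check can be scripted by pulling the FREE PPs megabyte figure out of the lsvg output. The sample line below mimics the example output; on a live VIOS you would pipe `lsvg rootvg` into the same awk expression.

```shell
# Sample line in the shape of the lsvg output shown above.
lsvg_line='MAX LVs:            256      FREE PPs:       64 (4096 megabytes)'

# Pull the number inside the parentheses on the FREE PPs line.
free_mb=$(printf '%s\n' "$lsvg_line" | awk -F'[()]' '/FREE PPs/ {print $2}' | awk '{print $1}')

if [ "$free_mb" -ge 4096 ]; then
  echo "rootvg has ${free_mb} MB free; enough for the update"
else
  echo "rootvg has only ${free_mb} MB free; at least 4096 MB is required"
fi
```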
VIOS upgrades with SDDPCM
A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:
SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x
Virtual I/O Server support for Power Systems
Updating from VIOS version 3.1.0.00
VIOS Update Release 3.1.4.31 may be applied directly to any VIOS at level 3.1.0.00.
Upgrading from VIOS version 2.2.4 and above
The VIOS must first be upgraded to 3.1.0.00 before the 3.1.4.31 update can be applied. Refer to the VIOS upgrade documentation to learn how to do that.
Before installing the VIOS Update Release 3.1.4.31
Warning: The update may fail if there is a loaded media repository.
Instructions: Checking for a loaded media repository
To check for a loaded media repository, and then unload it, follow these steps.
- To check for loaded images, run the following command:
$ lsvopt
The Media column lists any loaded media.
- To unload media images, run the following command on each Virtual Target Device that has a loaded image.
$ unloadopt -vtd <file-backed_virtual_optical_device >
- To verify that all media are unloaded, run the following command again.
$ lsvopt
The command output should show No Media for all VTDs.
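The check-and-unload loop above can be sketched as follows. The sample text stands in for `lsvopt` output, and the loop echoes the unloadopt command instead of running it; on a VIOS you would drop the echo.

```shell
# Sample text in the shape of `lsvopt` output (header, then one VTD per line).
lsvopt_output='VTD             Media                Size(mb)
vtopt0          mymedia.iso          4567
vtopt1          No Media             n/a'

# Collect the VTDs whose Media column is not "No Media".
loaded=$(printf '%s\n' "$lsvopt_output" | awk 'NR > 1 && $2 != "No" {print $1}')

for vtd in $loaded; do
  echo "unloadopt -vtd $vtd"   # echoed instead of executed in this sketch
done
```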
Instructions: Migrate Shared Storage Pool Configuration
Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters. The VIOS can be updated to Update Release 3.1.4.31 by using rolling updates.
A non-disruptive rolling update to VIOS 3.1 requires all SSP nodes to be at VIOS 2.2.6.31 or later. See the detailed instructions in the VIOS 3.1 documentation.
The rolling updates enhancement allows the user to apply Update Release 3.1.4.31 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.
To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:
- All VIOS logical partitions must have VIOS Update Release version 2.2.6.31 or later installed.
- All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.
Instructions: Verify the cluster is running at the same level as your node.
- Run the following command:
$ cluster -status -verbose
- Check the Node Upgrade Status field; you should see one of the following terms:
UP_LEVEL: This means that the software level of the logical partition is higher than the software level the cluster is running at.
ON_LEVEL: This means the software level of the logical partition and the cluster are the same.
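Extracting that field can be scripted. The sample text below stands in for a fragment of `cluster -status -verbose` output; the exact surrounding fields may differ on a live system.

```shell
# Sample fragment in the shape of `cluster -status -verbose` output.
status_output='Node Name:            vios1
Node Upgrade Status:  ON_LEVEL
Node Roles:           DBN'

# Pull the value after "Node Upgrade Status:".
upgrade_status=$(printf '%s\n' "$status_output" | awk -F': *' '/Node Upgrade Status/ {print $2}')
echo "node upgrade status: $upgrade_status"
```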
Installing the Update Release
There is now a method to verify the VIOS update files before installation. This process requires access to openssl by the 'padmin' User, which can be accomplished by creating a link.
Instructions: Verifying VIOS update files.
To verify the VIOS update files, follow these steps:
- Escape the restricted shell:
$ oem_setup_env
- Create a link to openssl:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
- Verify the link to openssl was created:
# ls -alL /usr/bin/openssl /usr/ios/utils/openssl
Verify that both files display similar owner and size.
- Return to the restricted shell:
# exit
Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.
Note: While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.
Version Specific Warning: Version 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1
You must run the updateios command twice to fix the bos.alt_disk_install.boot_images fileset update problem.
Run the following command after the "$ updateios -accept -install -dev <directory_name >" step completes.
$ updateios -accept -dev <directory_name >
Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites", and they may be ignored.
MISSING REQUISITES:
X11.loc.fr_FR.base.lib 4.3.0.0 # Base Level Fileset
bos.INed 6.1.6.0 # Base Level Fileset
bos.loc.pc.Ja_JP 6.1.0.0 # Base Level Fileset
bos.loc.utf.EN_US 6.1.0.0 # Base Level Fileset
bos.mls.rte 6.1.x.x # Base Level Fileset
Warning: If VIOS rules have been deployed.
During updates, there have been occasional issues with VIOS rules files being overwritten and/or system settings being reset to their default values.
To ensure that this doesn’t affect you, we recommend making a backup of the current rules file. This file is located here:
/home/padmin/rules/vios_current_rules.xml
First, to capture your current system settings, run this command:
$ rules -o capture
Then, either copy the file to a backup location, or save off a list of your current rules:
$ rules -o list > rules_list.txt
After this is complete, proceed to update as normal. When your update is complete, check your current rules and ensure that they still match what is desired. If not, either overwrite the original rules file with your backup, or proceed to use the ‘rules -o modify’ and/or ‘rules -o add’ commands to change the rules to match what is in your backup file.
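The before/after comparison can be done with diff on the two captured lists. The sketch below fabricates two small sample lists to show the shape of the check; on a VIOS the files would come from `rules -o list` runs before and after the update.

```shell
# Fabricated sample lists standing in for pre- and post-update `rules -o list` output.
printf 'rule.A  applied\nrule.B  applied\n' > /tmp/rules_before.txt
printf 'rule.A  applied\nrule.B  default\n' > /tmp/rules_after.txt

if diff -q /tmp/rules_before.txt /tmp/rules_after.txt >/dev/null; then
  rules_state="unchanged"
else
  rules_state="drifted"
fi
echo "rules $rules_state after update"
```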
Finally, if you’ve failed to back up your rules, and are not sure what the rules should be, you can deploy the recommended VIOS rules by using the following command:
$ rules -o deploy -d
Then, if you wish to copy these new VIOS recommended rules to your current rules file, just run:
$ rules -o capture
Note: This will overwrite any customized rules in the current rules file.
Applying Updates
Warning:
If the target node to be updated is part of a redundant VIOS pair, the VIOS partner node must be fully operational before beginning to update the target node.
Note:
For VIOS nodes that are part of an SSP cluster, the partner node must be shown in 'cluster -status ' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
Instructions: Applying updates to a VIOS.
- Log in to the VIOS as the user padmin.
- If you use one or more File-Backed Optical Media Repositories, you need to unload the media images before you apply the Update Release. See the instructions for checking for a loaded media repository above.
- If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.
$ clstartstop -stop -n <cluster_name > -m <hostname >
- To apply updates from a directory on your local hard disk, follow these steps:
- Create a directory on the Virtual I/O Server.
$ mkdir <directory_name >
- Using ftp, transfer the update file(s) to the directory you created.
- To apply updates from a remotely mounted file system, where the remote file system is mounted read-only, follow these steps:
- Mount the remote directory onto the Virtual I/O Server:
$ mount remote_machine_name:directory /mnt
- The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the CD/DVD drive, follow this step:
- Place the CD-ROM into the drive assigned to the VIOS.
- Commit previous updates by running the updateios command:
$ updateios -commit
- Verify the update files that were copied. This step can be performed only if the link to openssl was created.
$ cp <directory_path >/ck_sum.bff /home/padmin
$ chmod 755 /home/padmin/ck_sum.bff
$ ck_sum.bff <directory_path >
If there are missing updates or incomplete downloads, an error message is displayed.
To see how to create a link to openssl, refer to the "Verifying VIOS update files" instructions above.
- Apply the update by running the updateios command
$ updateios -accept -install -dev <directory_name >
- To load all changes, reboot the VIOS as user padmin .
$ shutdown -restart
Note: If the shutdown -restart command fails, run swrole -PAdmin as padmin to set authorization and properly establish access to the shutdown command.
- If cluster services were stopped in step 3, restart cluster services.
$ clstartstop -start -n <cluster_name > -m <hostname >
- Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 3.1.4.31.
$ ioslevel
Post-installation Information and Instructions
Instructions: Checking for an incomplete installation caused by a loaded media repository.
After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.
Check the Media Repository by running this command:
$ lsrep
If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.
Running the lsvopt command should show the media images.
Instructions: Recovering from an incomplete installation caused by a loaded media repository.
To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:
- Unload any media images
$ unloadopt -vtd <file-backed_virtual_optical_device>
- Reinstall the ios.cli.rte fileset by running the following commands.
To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory >
To return to the restricted shell:
# exit
- Restart the VIOS.
$ shutdown -restart
- Verify that the Media Repository is operational by running this command:
$ lsrep
Fixes included in this release
APAR | Description
IJ41832 | Install images for bos.loc.utf.JA_JP
IJ42481 | SECLDAPCLNTD DOES NOT RECOGNIZE OPENLDAP PASSWORD POLICY
IJ42703 | LDAP NESTED GROUP DELAYS DUE TO NO CACHING
IJ43419 | PROCFILES FAILS TO PRINT ALL FILENAMES OF DB2SYSC PROCESS
IJ43890 | DBN ELECTION MAY TAKE LONG TIME CAUSING VMRM FAILOVER DELAY
IJ44109 | LNC2ENTDD NEEDS LSO SECURITY CHECK IN VNIC PATH
IJ44502 | LSGROUP WON'T WORK WITH SPACE IN GROUP NAME
IJ44503 | LSUSER MAY NOT DISPLAY ALL OF AN LDAP USER'S GROUPS
IJ44714 | AIXPERT -U DOES NOT RESTORE DEFAULT USER SETTINGS
IJ45056 | POTENTIAL SECURITY ISSUE
IJ45256 | SNAP -A (WITHOUT N) GIVE ERRORS ON CHECKING SPACE WITH NIM
IJ45529 | VIOS CRASH DURING VFCHOST ADAPTER CONFIGURATION
IJ45657 | CHVG USAGE ERROR WHEN CALLED FROM ALT_ROOTVG_OP -W
IJ45823 | vlandd: add tracing to provide better troubleshooting
IJ45879 | VIRTUAL TAPE DEVICE OPERATIONS MAY FAIL IN AIX 7.2 TL5
IJ45885 | THE ISSUE FROM APAR IJ41424 REOCCURS IN AIX 7200-05-05-2246
IJ46163 | P10FIELD_HMC UI ADD VFC ADAPTERS "FAILED" WITH HSCLA9D7 HSCLA27
IJ46352 | VIOS CONFIGURED WITH MANY ROCE ADAPTERS CAN EXHAUST PINNEDMEMORY
IJ46384 | OFED DRIVERS MEMORY LEAK WHEN ROCE ADAPTERS CONFIGURED WITH RDMA
IJ46490 | IANA Olson timezone Rules file updates for 2023c version.
IJ46541 | potential undetected data loss after running chvg -ky or chvg -g
IJ46596 | IMPROPER ASSIGNEMNT FIX IN AIXDISKPCMKE.C
IJ46694 | LDAP user logins stop working
IJ46710 | Set ANA Delay as per ANATT
IJ46755 | NO LVM STATS RETURNED FOR SOME ACTIVE VGS DUE TO INACTIVE VGS
IJ46798 | .VI_HISTORY FILE IS NOT CREATING AUTOMATICALLY
IJ46854 | IO path timeout and IO failures VFC multi-queue on AIX
IJ46855 | vfc adapter reset after migration upon ACA condition from
IJ46866 | IN PS OUTPUT COMMANDS ARGUMENTS MISSING IF ONE ARGUMENT IS NULL
IJ46867 | ZAMBIE PROCESS IS LEFT BEHIND WHEN BOOT IS DONE.
IJ46870 | SNAP OR SNAP SVCOLLECT FAILED TO COLLECT CMDB WHEN RUN IN PADM
IJ46871 | INCORRECT STATE SET FOR NETWORK INTERFACE WITH CONFIGURED ALIAS
IJ46874 | ICONV UCCINIT CORE DUMPS IF UCS TABLE PERMISSIONS ARE RESTRICT
IJ46875 | ALT_DISK_COPY -T -G FAIL IN JFS2J2 BOOTABLE DISK CHECK
IJ46876 | LPPCHK -C SHOWS DIRECTORIES NOT FOUND FOR IOS.DATABASE.RTE
IJ46879 | IPREPORT FORMATTING TCPDUMP TIMESTAMPS INCORRECTLY
IJ46880 | Alternate Master sync using "smit nim_altmstr" fast path fails
IJ46881 | In-Core Crypto enablement fails on P9_base mode
IJ46883 | suma task defaults help for FilterML attribute mislead
IJ46885 | UDAPL communication does not work if used as loopback
IJ46886 | NIMSH service in NIM client stops while Alt_mstr sync
IJ46889 | Esc chars in cmdnim.msg error msgs not reflectd in console
IJ46890 | A POTENTIAL SECURITY ISSUE EXISTS
IJ46891 | getwc illegal wide character
IJ46892 | MULTIPLE LEAKS IN TRUSTCHK -N TREE
IJ46893 | Shutdown hangs when console redirected to a file
IJ46895 | Test fails in tetoldif with Failed on LDAP (TSD)
IJ46896 | SED-enabled adapter going "Defined" state after hotplug
IJ46897 | Incorrect message with SMIT panel to attach the Namespace
IJ46898 | cfgmgr is in hung state in NPIV client lpar
IJ46899 | port speed is 0 for Nool VFC port
IJ46900 | 8680ff1514100000 driver ignoring module 7 burn return code.
IJ46901 | dlsym failed with EINVAL during VMRM operation
IJ46902 | viosbr restore may fail due to xml "id" attr
IJ46939 | SPOT creation should comeout gracefully if nim server at lower
IJ46940 | NIM command is unable to print NIM Master's certificate
IJ46941 | MaxFSSize field doesn't work as per SUMA documentation
IJ46942 | cloud_setup hangs as ftp blocks for ftp.software.ibm.com
IJ46945 | NIM Master doesn't give errors on certificate expiration
IJ46946 | Alt Mstr gives unsecure conn
IJ46947 | nim_master_setup command help is incomplete
IJ46949 | System crashes in mstor_restart_handler with USB devices
IJ46950 | Disk error when using single_path reserves
IJ47025 | Validation failure during migration may be seen
IJ47026 | C_CH_NFSEXP OVERFLOW WITH LARGE MKSYSB FILE
IJ47062 | TAIL -NXX MAY DISPLAY MORE LINES
IJ47077 | SAVEBASE MAY FAIL WITH 4K BLOCK SIZE DISK
IJ47089 | Compiler issue with main source file name
IJ47090 | VFC Client may fail to configure disks
IJ47110 | LPM Failed with LU validation syscall error
IJ47149 | VIOS CRASHES IN NPIV_STOP_PROC_HANDLER DURING LUN VALIDATION
IJ47158 | VIOS LOGS CONTINUOUS SC_DISK_ERR10 FOR PPRC SECONDARY DISK
IJ47199 | Customer facing problem with 'lspv -P' not showing all disks
IJ47209 | BDIFF MUST RETURN NON-ZERO IF FILES ARE DIFFERENT
IJ47210 | Probable system crash during FC driver configuration
IJ47238 | VIOS crashes while LPM validation for FCNVMe disks
IJ47318 | CHANGING LANG in bsh jobs causes termination in SIGSEGV.
IJ47322 | Improve binder Error Message When It Cannot Load libLTO.so
IJ47341 | WHEN VIOS GOES DOWN, VSCSI DISK I/O MIGHT HANG
IJ47355 | IPSEC tunnel PSK AIX <> ZOS fails to rekey for DH 19
IJ47401 | LPM failed for lpar name more than 32 bytes long
IJ47450 | getconfattr returns wrong errno
IJ47455 | After LPM npiv client is not accessible
IJ47459 | Provide 64-bit assembler as non-default assembler
IJ47527 | netstat -P is not displaying proper error message
IJ47534 | LPAR got hung while running IO tests with Livedump
IJ47535 | IPv6 IPs are not pinging from other network
IJ47573 | Migration of multiple lpars fail if low level startinitr fails
IJ47574 | LPAR crashed at .backt+000000 while creating threads post LPM
IJ47597 | A potential security issue exists
IJ47598 | HOSTSALLOWEDLOGIN OR HOSTSDENIEDLOGIN LIST MIGHT FAIL
IJ47637 | Update get_fw_version routine
IJ47640 | LPAR CRASH DURING INTENSE WORKLOAD AT LVM LAYER
IJ47688 | echo command failure
IJ47690 | lldpd daemon is crashing on adding a EtherChannel
IJ47718 | A possibility of memory leak case in viod_daemon.
IJ47719 | DLPAR with Moso results in Core Dump
IJ47720 | ADD /ETC/MAIL/SUBMIT.CF TO 'SNAP -T' DATACOLLECTION
IJ47722 | HMC mgmt obj passwd_file not replicated in Alt_mstr
IJ47723 | bootimage space issue during migration
IJ47743 | NIM SPOT CREATION AND CHECK REPORT NFS_CNTL FAILED
IJ47761 | Incorrect handling of FC asynchronous Extended Link Services commands
IJ47795 | Got postgress@raise in VIOS
IJ47796 | VIOS crashes with stack @npiv_enqueue+000010
IJ47797 | AIX lpar can hang or crash on FC_NVM command in LPM validation.
IJ47798 | '#define malloc vec_malloc' causes error/warning with ibm-clang
IJ47810 | KSH BUILTIN 'PWD' AND 'CD ..' ARE LEAKING MEMORY
IJ47811 | LPAR crashed with stack vfc_cancel_pending_admin_cmds
IJ47814 | Migration validation failure after FC NVM command timeout
IJ47815 | VIOS system dump with stack npiv_cmdq_logout_action
IJ47817 | nim_update_all cmd gives success exit code for failure
IJ47830 | Errors Observed during cache.mgt fileset installation.
IJ47832 | EID being logged in errpt during the adapter firmware upgrade
IJ47833 | In Etherchannel-teaming mode possible to add same adapter name
IJ47839 | LDATA_DESTROY CALLED AFTER FAILED LDATA_CREATE CAUSES AN ASSERT
IJ47868 | APPLYING BOS.RTE.SERV_AID UPDATE MAY LOOP INFINATELY
IJ47887 | SEA creation fails when ipv6 offload is not supported
IJ47888 | IO Erros seen after IO response drop and with Host crash
IJ47900 | SAVEVG FAILS TO BACKUP SOME ATTRIBUTES OF CONCURRENT-CAPABLE VGS
IJ47901 | aso dumps core during mpss optimization
IJ47905 | unamex should be declared in sys/utsname.h
IJ47919 | ALT_DISK_INSTALL CLEANS ALTINST_ROOTVG IF SETUP_WORKDIR FAILS
IJ47924 | lsmap not listing the client id
IJ47944 | RMDEV -L FCSX ON VFC ADAPTER HUNG
IJ47947 | Adapter went into defined state while config
IJ47949 | NFS CAN ABEND WITH TRUSTED BINARY/SCRIPT ON AN NFS FS
IJ47971 | Memory leak or system crash when using cached routes
IJ48036 | SSL not enabled for NIM Primary, Alternate and NIM client
IJ48038 | nim -o showres with fileset attribute throws error on lpp
IJ48039 | Incomplete error msg on LU cmd if NIM client is not conf
IJ48040 | nim master and client conn ok,nim cmds are failing
IJ48044 | alt_rootvg_op -X is removing vg, which is nt copy of rootvg
IJ48050 | Configured 1500 mtu is reset to 9000 if use_jumbo_frames=set
IJ48051 | System crash while stop virtual initiator (SCIOLSTOPINITR)
IJ48239 | Console getting spanned while updating mcr.rte fileset
IJ48245 | READVGDA PREFERRED READ VALUE FOR SVG W/ MIRROR POOLS INCORRECT
IJ48250 | Update Diagnostics VRMF for Fall 2023
IJ48252 | A potential security issue exists
IJ48270 | Avoid starting ksys_hsmon in non VMRM HA environment.
IJ48306 | vmstat output fi/fo seems incorrect when system is in a paging
IJ48348 | bosboot command fails with not enough space error
IJ48354 | Observing SCSI path recovery failed on cable pull tests
IJ48355 | create_ova created image will not import into PowerVC
IJ48471 | tail(1) misidentifies directories as files
IJ48473 | dd doesn't write status to error stream when SIGINT
IJ48474 | cksum issue with multiple argument when env XPG_SUS_ENV is set
IJ48475 | tail -c fails with zero exit status
IJ48476 | A potential security issue exists
IJ48478 | CHNFSMNT CAN INCORRECTLY MODIFY OPTIONS IN /ETC/FILESYSTEM
IJ48479 | cdpd does not add multicast address during port add operation
IJ48481 | A potential security issue exists
IJ48482 | A potential security issue exists
IJ48483 | Increase max ldata pool limit in pfcurh
IJ48488 | VIOS crashed with stack ___memmove64 -> npiv_passthru_
IJ48549 | VIOS CAN CRASH DURING ASYNC EVENT ON UNMAPPED VFCHOST
IJ48604 | Extra break statement in the middleware find_devices code path
IJ48606 | lpar crashed @tcp_trace+00009C
IJ48663 | A potential security issue exists
IJ48671 | A potential security issue exists
IJ48686 | echo writes new line dropping code 72Z and 73B
IJ48765 | DISK OPERATION ERROR: PATH HAS FAILED
Document Information
Modified date:
11 March 2024
UID
ibm17065536