VIOS 4.1.0.10 Release Notes

Package Information

PACKAGE: Update Release 4.1.0.10
IOSLEVEL: 4.1.0.10

The AIX level of the NIM Master must be equal to or higher than the level shown below.

VIOS level                    Minimum AIX level of the NIM Master
Update Release 4.1.0.10       AIX 7300-02-01

General package notes

Be sure to heed all minimum space requirements before installing.

Review the list of fixes included in Update Release 4.1.0.10

To take full advantage of all the functions available in the VIOS, it may be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you update the VIOS to Update Release 4.1.0.10.

Microcode or system firmware downloads for Power Systems
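To confirm the current system and adapter firmware levels before updating, one option is the VIOS lsfware command; the following is a minimal sketch, and flag support may vary by VIOS level.

$ lsfware
$ lsfware -all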

If the VIOS being updated has filesets installed from the VIOS Expansion Pack, be sure to update those filesets with the latest VIOS Expansion Pack if updates are available.

Update Release 4.1.0.10 updates your VIOS partition to ioslevel 4.1.0.10. To determine if Update Release 4.1.0.10 is already installed, run the following command from the VIOS command line.

$ ioslevel

If Update Release 4.1.0.10 is installed, the command output is 4.1.0.10.

Note: The VIOS installation DVDs and the level of VIOS preinstalled on new systems might not contain the latest fixes available. It is highly recommended that customers who receive the physical GA level (that is, 4.1.0.00) update to the electronic GA level (that is, 4.1.0.10) as soon as possible, because missing fixes might be critical to the proper operation of your system. Update these systems to a current service pack level from Fix Central.

Upgrade to VIOS 4.1.x

·       Existing VIOS systems at supported 3.1.x.y levels can be upgraded to VIOS version 4.1.0.00 (DVD image) or 4.1.0.10 (Flash image) by using the viosupgrade tool (a sketch follows this list). It is recommended to be at VIOS 3.1.4.30 or a later SP level before upgrading to a VIOS 4.1.x level.

 

·       VIOS systems with an SSP configuration must be at level 3.1.3.x or later before upgrading to a 4.1.x level or adding 4.1.x nodes to the cluster.

 

·       If Active Memory Sharing (AMS) is configured on the VIOS, it should be unconfigured before upgrading. Refer to the link on how to unconfigure AMS.

 

·       Before upgrading, review the viosupgrade blog in the PowerVM Community, which explains various upgrade scenarios.

 

·       After an upgrade to VIOS 4.1.0.00 or VIOS 4.1.0.10, the padmin user may be unable to log in because of premature password expiration. Apply the ifix before running viosupgrade to avoid this issue. Refer to the support document for more details.
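As referenced in the first bullet, the following is a minimal sketch of a local viosupgrade invocation; the mksysb image path and target disk name are illustrative and must be adapted to your environment.

$ viosupgrade -l -i /home/padmin/vios_4.1.0.10_flash.mksysb -a hdisk1

For a node that is part of an SSP cluster, add the -c flag to the command.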

 

For Customers using NVMe Over Fabric (SAN) as their Boot Disk

 

Booting from an NVMeoF disk may fail if certain fabric errors are returned; therefore, a boot disk that is set up with multiple paths is recommended. If a boot failure occurs, the boot process may continue if you exit from the SMS menu. Another potential workaround is to discover the boot LUNs from the SMS menu and then retry the boot.

 



Note

 

If the Virtual I/O Servers are installed on POWER10 systems and configured with the 32Gb PCIe4 2-Port FC Adapter (Feature Codes EN1J and EN1K), the adapter microcode must be updated to level 7710812214105106.070115 before the Virtual I/O Server is updated to the 4.1.0.10 level.

 

Please refer to the release notes at this link
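To check the current microcode level of the affected FC adapters before the update, the lsfware command can be used; in the following minimal sketch the adapter name fcs0 is illustrative, and the command should be repeated for each adapter port.

$ lsfware -dev fcs0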

 

4.1.0.10 New Features

VIOS 4.1.0.10 adds the following new features:

 

Security enhancements

·       Supports Trusted Execution, Trusted Update and Secure Boot.

·       Supports a new, stronger default password hashing algorithm (SSHA-256) and out-of-the-box long password support (255-character limit).

·       Data protection with LVM encryption for rootvg and dump devices.       

·       Insecure services such as rexec and rsh are removed. The telnet and ftp services are disabled; users can enable telnet or ftp if required, as shown in the sketch after this list.

·       ksh93 is used as the default ksh for VIOS commands and scripts.
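A minimal sketch of re-enabling and later disabling one of these services by using the VIOS startnetsvc and stopnetsvc commands; enable these services only if your security policy allows it.

$ startnetsvc telnet
$ startnetsvc ftp
$ stopnetsvc telnet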

 

viosupgrade enhancements

 

The major viosupgrade enhancements in this release are as follows.

·       A new option, -F devname, is added to preserve the device names of vfchost adapter devices, fcnvme, nvme, fscsi, and iSCSI devices, and network adapter devices.

·       New options "-k" and "-o rerun" are added, both of which are specific to pre-restore script execution (a sketch follows this list).
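A sketch of how the new options might be combined with a local upgrade invocation; the image path and disk name are illustrative, and the exact option combinations should be confirmed against the viosupgrade documentation for this level.

$ viosupgrade -l -i /home/padmin/vios_4.1.0.10_flash.mksysb -a hdisk1 -F devname
$ viosupgrade -o rerun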

 

viosbr enhancements

 

When the same PV is mapped to multiple vhost adapters, all of the PV-backed VTDs are now restored.

 

Others

 

The alt_root_vg command is enhanced to run in phases. This enhancement allows the alt_root_vg command to separate the cloning phase from the update phase.

Hardware Requirements

Please check this link for supported hardware.

Known Capabilities and Limitations

The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.

Requirements for Shared Storage Pool

 

Limitations for Shared Storage Pool

Software Installation

SSP Configuration

Feature                                            Min       Max
Number of VIOS Nodes in Cluster                    1         16*
Number of Physical Disks in Pool                   1         1024
Number of Virtual Disks (LUs) Mappings in Pool     1         8192
Number of Client LPARs per VIOS node               1         250*
Capacity of Physical Disks in Pool                 10GB      16TB
Storage Capacity of Storage Pool                   10GB      512TB
Capacity of a Virtual Disk (LU) in Pool            1GB       4TB
Number of Repository Disks                         1         1
Capacity of Repository Disk                        10GB      1016GB
Number of Client LPARs per Cluster                 1         2000

 

*Support for additional VIOS Nodes and LPAR Mappings:

Prerequisites for expanded support:

Here are the new maximum values for each of these configuration options, if the associated hardware specification has been met:

Feature                                   Default Max     High Spec Max
Number of VIOS Nodes in Cluster           16              24
Number of Client LPARs per VIOS node      250             400

 

Other notes:


Network Configuration


Storage Configuration


Shared Storage Pool capabilities and limitations

Installation Information

Pre-installation Information and Instructions

Please ensure that your rootvg contains at least 30 GB and that there is at least 4 GB of free space before you attempt to update to Update Release 4.1.0.10. Run the lsvg rootvg command, and then verify that there is enough free space.

Example: 

$ lsvg rootvg

VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:       64 (4096 megabytes)
LVs:                14                       USED PPs:       447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no

VIOS upgrades with Third Party Software

When you upgrade from 3.1.x.y to level 4.1.0.00 or later, third-party software is not packaged with the IBM-supplied mksysb image. You need to install the respective third-party software after the upgrade is complete and then run viosupgrade -o rerun to restore the respective devices.

 

 

Updating from VIOS version 4.1.0.00

 

VIOS Update Release 4.1.0.10 may be applied directly to any VIOS at level 4.1.0.00.

 

Before installing the VIOS Update Release 4.1.0.10

 

Warning: The update may fail if there is a loaded media repository.

 

Instructions: Checking for a loaded media repository

 

To check for a loaded media repository, and then unload it, follow these steps.

 

1.     To check for loaded images, run the following command:

$ lsvopt 
The Media column lists any loaded media.

 

2.     To unload media images, run the following command for each virtual target device that has loaded media:

$ unloadopt -vtd <file-backed_virtual_optical_device>

 

3.     To verify that all media are unloaded, run the following command again.

$ lsvopt 
The command output should show No Media for all VTDs.



Instructions: Migrate Shared Storage Pool Configuration

 

The Virtual I/O Server (VIOS) version 3.1.x.y or later supports rolling updates to release 4.1.0.10 for SSP clusters.

 

The rolling updates enhancement allows the user to apply Update Release 4.1.0.10 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are updated.

 

To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following conditions are met:

 

·        All VIOS logical partitions must have VIOS Update Release version 3.1.x.y or later installed.

·        All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new SSP capabilities.

Instructions: Verify the cluster is running at the same level as your node.

 

1.     Run the following command:
$ cluster -status -verbose

2.     Check the Node Upgrade Status field, and you should see one of the following terms:


UP_LEVEL: This means that the software level of the logical partition is higher than the software level the cluster is running at.

ON_LEVEL: This means the software level of the logical partition and the cluster are the same.

 

Installing the Update Release

 

There is now a method to verify the VIOS update files before installation. This process requires that the padmin user has access to openssl, which can be accomplished by creating a link.

 

Instructions: Verifying VIOS update files.

To verify the VIOS update files, follow these steps:

1.     $ oem_setup_env

2.     Create a link to openssl, if required:
       # ln -s /usr/bin/openssl /usr/ios/utils/openssl

3.     Verify that the link to openssl was created:
       # ls -alL /usr/bin/openssl /usr/ios/utils/openssl

4.     Verify that both files display a similar owner and size.

5.     # exit

 

Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes; a minimal backup sketch follows.
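In the following sketch, the file names are illustrative: backupios creates an installable mksysb image of the VIOS, and viosbr backs up the virtual and logical device configuration.

$ backupios -file /home/padmin/vios_backup.mksysb -mksysb
$ viosbr -backup -file vios_config_backup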

 

If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.

 

Note: While running 'updateios' in the following steps, you may see accessauth messages, but these messages can safely be ignored.

 

 

Warning:  If VIOS rules have been deployed.


During an update, there have been occasional issues with the VIOS rules file being overwritten or system settings being reset to their default values.

 

To ensure that this doesn’t affect you, we recommend making a backup of the current rules file.  This file is located here:

/home/padmin/rules/vios_current_rules.xml


First, to capture your current system settings, run this command:

$ rules -o capture

 

Then, either copy the file to a backup location, or save off a list of your current rules:

 

$ rules -o list > rules_list.txt
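If you prefer to keep a copy of the rules file itself, a minimal sketch in which the backup file name is illustrative:

$ cp /home/padmin/rules/vios_current_rules.xml /home/padmin/vios_current_rules.xml.bak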

 

After this is complete, proceed with the update as normal. When your update is complete, check your current rules and ensure that they still match what is desired. If they do not, either overwrite the original rules file with your backup, or use the 'rules -o modify' and 'rules -o add' commands to change the rules to match what is in your backup file.

 

Finally, if you’ve failed to back up your rules, and are not sure what the rules should be, you can deploy the recommended VIOS rules by using the following command:

$ rules -o deploy -d

 

Then, if you wish to copy these new VIOS recommended rules to your current rules file, just run:

 

$ rules -o capture

 

Note: This will overwrite any customized rules in the current rules file.

 

Applying Updates

 

Warning:

If the target node to be updated is part of a redundant VIOS pair, the VIOS partner node must be fully operational before beginning to update the target node.

 

Note:

For VIOS nodes that are part of an SSP cluster, the partner node must be shown in the 'cluster -status' output as having a cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully operational, client LPARs may crash.
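A sketch of checking the partner node state before starting the update; the cluster name is a placeholder for your own cluster name.

$ cluster -status -clustername <cluster_name>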

 

 

Instructions: Applying updates to a VIOS.

 

  1. Log in to the VIOS as the user padmin.

  2. If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See details here.

  3. If you use Shared Storage Pools, Shared Storage Pool services must be stopped.

     $ clstartstop -stop -n <cluster_name> -m <hostname>

  4. Make the update files available to the VIOS by using one of the following methods.

     To apply updates from a directory on your local hard disk, follow these steps:

       1. Create a directory on the Virtual I/O Server.

          $ mkdir <directory_name>

       2. Using ftp, transfer the update file(s) to the directory you created.

     To apply updates from a remotely mounted file system, where the remote file system is mounted read-only, follow this step:

       1. Mount the remote directory onto the Virtual I/O Server:

          $ mount remote_machine_name:directory /mnt

     The update release can also be burned onto a CD by using the ISO image file(s). To apply updates from the CD/DVD drive, follow this step:

       1. Place the CD-ROM into the drive assigned to the VIOS.

  5. Commit previous updates by running the updateios command:

     $ updateios -commit

  6. Verify the update files that were copied. This step can be performed only if the link to openssl was created.

     $ cp <directory_path>/ck_sum.bff /home/padmin
     $ chmod 755 /home/padmin/ck_sum.bff
     $ ck_sum.bff <directory_path>

     If there are missing updates or incomplete downloads, an error message is displayed.

     To see how to create a link to openssl, click here.

  7. Apply the update by running the updateios command:

     $ updateios -accept -install -dev <directory_name>

  8. To load all changes, reboot the VIOS as user padmin.

     $ shutdown -restart

     Note: If the shutdown -restart command fails, run swrole PAdmin so that padmin has the authorization required to access the shutdown command.

  9. If cluster services were stopped in step 3, restart cluster services.

     $ clstartstop -start -n <cluster_name> -m <hostname>

 10. Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 4.1.0.10.

     $ ioslevel

Post-installation Information and Instructions

Instructions: Checking for an incomplete installation caused by a loaded media repository.

 

After installing an Update Release, you can use this method to determine if you have encountered the problem of a loaded media library.

Check the Media Repository by running this command:

$ lsrep

If the command reports: "Unable to retrieve repository data due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.

 

Running the lsvopt command should show the media images.

 

Instructions: Recovering from an incomplete installation caused by a loaded media repository.

 

To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:

1.     Unload any media images

$ unloadopt -vtd <file-backed_virtual_optical_device>

2.     Reinstall the ios.cli.rte fileset by running the following commands.

To escape the restricted shell:
$ oem_setup_env
To install the failed fileset:
# installp -Or -agX ios.cli.rte -d <device/directory>
To return to the restricted shell:
# exit

3.     Restart the VIOS.

$ shutdown -restart

4.     Verify that the Media Repository is operational by running this command:

$ lsrep

 

Content modified in this release

ioscli snap command

 

The snap command is enhanced with a -gzip option to utilize the on-chip NX GZIP accelerator for faster completion.
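A minimal sketch, assuming the -gzip flag is accepted with the default snap collection; confirm the exact usage in the snap command documentation for this level.

$ snap -gzip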

 

Software Updated

- python3 is included in the VIOS image.

- Two versions of the postgres database (15.3 and 13.11) are included in the base image. Postgres version 13 is required to support a mixed-mode cluster in which nodes at versions 3.1.4.x and 4.1.0.x are part of the same cluster.

 

Content removed in this release

ITM Agents software

        ITM (IBM Tivoli Monitoring) filesets are not part of VIOS 4.1.0.00 and later versions. Users need to download and install the ITM software from an external location. The ITM VIOS Premium Agent and ITM CEC Base Agent can be downloaded and installed separately as part of an updated IBM Tivoli Monitoring System P Agents 6.22 Fix Pack 4 or later bundle. Here is a link to the readme file containing information about how to obtain the image and instructions for installing. When these agents are installed in the default directory of /opt/IBM/ITM, they can continue to use the cfgsvc, startsvc, and stopsvc commands to configure, start, and stop the agents (see the sketch below).
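A sketch of managing a reinstalled agent with the VIOS service commands; the agent name ITM_premium reflects the name these agents have historically registered and should be confirmed with lssvc after installation.

$ lssvc
$ cfgsvc -ls ITM_premium
$ startsvc ITM_premium
$ stopsvc ITM_premium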

 

AMS

        The Active Memory Sharing (AMS) feature is removed.

 

Software Removed

        The following filesets, which are deemed unnecessary, are removed.

·       bos.net.tcp.rcmd, bos.net.tcp.rcmd_server (A copy is saved under: /usr/sys/inst.images/installp/ppc)

·       cas.agent tivoli.tivguid

·       rsct.opt.fence.blade rsct.opt.fence.hmc

·       sysmgt.cim.providers.metrics sysmgt.cim.providers.osbase

·       sysmgt.cim.providers.scc sysmgt.cim.providers.smash

·       sysmgt.cim.smisproviders.hba_hdr sysmgt.cim.smisproviders.hhr

·       sysmgt.cim.smisproviders.vblksrv sysmgt.cimserver.pegasus.rte

·       Java7.jre Java7.sdk Java7_64.jre Java7_64.sdk

·       itm.cec.agent itm.premium.rte itm.vios_premium.agent

·       bos.net.nfs.server devices.vdevice.IBM.vfc-client.rte

·       X11.adt.ext X11.adt.motif X11.apps.clients X11.apps.config X11.apps.custom X11.apps.msmit X11.apps.xdm

·       X11.apps.xterm X11.base.xpconfig X11.compat.adt.Motif12 X11.compat.lib.Motif10 X11.compat.lib.Motif114

·       X11.compat.lib.X11R3 X11.compat.lib.X11R4 X11.Dt.bitmaps X11.Dt.helpinfo X11.Dt.helpmin X11.Dt.helprun

·       X11.Dt.lib X11.Dt.rte X11.Dt.ToolTalk X11.fnt.coreX X11.fnt.deform_JP X11.fnt.fontServer X11.fnt.Gr_Cyr_T1

·       X11.fnt.ibm1046 X11.fnt.ibm1046_T1 X11.fnt.iso1 X11.fnt.iso2 X11.fnt.iso3 X11.fnt.iso4 X11.fnt.iso5

·       X11.fnt.iso7 X11.fnt.iso8 X11.fnt.iso8_T1 X11.fnt.iso9 X11.fnt.iso_T1 X11.fnt.ksc5601.ttf X11.fnt.ucs.cjk

·       X11.fnt.ucs.com X11.fnt.ucs.ttf_CN X11.fnt.ucs.ttf_extb X11.fnt.util X11.loc.en_US.base.lib

·       X11.loc.en_US.base.rte X11.loc.en_US.Dt.rte X11.vsm.lib

 

Fixes included in this release

The list of fixes in 4.1.0.10

 

APAR        Description
IJ48965     sysdumpdev -l return error
IJ48995     viosupgrade with paging space device did not fail
IJ49045     topasrec start using smitty gave usage error