GCPs and zIIPs
IBM® z/OS® Connect runs in a JVM and is primarily written in Java™, so a large proportion of the CPU workload can be offloaded to z Integrated Information Processors (zIIPs). In fact, up to 99% of the workload is zIIP-eligible.
It is important to have a good balance of general-purpose central processors (GCPs) and zIIPs for the products that run in your LPAR. While IBM z/OS Connect runs well on zIIPs, applications such as CICS®, IMS, Db2®, and IBM MQ run mainly on GCPs, with only a small amount of their processing eligible for dispatching on zIIPs.
When monitoring performance, if your LPAR is sharing CPs with other LPARs, you might find that your CPU usage varies. If you are looking for less variation in CPU usage, consider using dedicated CPs.
Even with dedicated CPs, the L3 and L4 processor caches are typically shared with CPs that are used by other LPARs. This sharing can lead to CPU variation, because data in those caches can be invalidated by the CPs of the other LPARs.
You must be familiar with the workloads that run in your system, and observe their GCP and zIIP usage. When work arrives at a GCP, z/OS determines whether it can be offloaded to a zIIP. If the work is not eligible, the GCP handles it; if it is eligible, the GCP forwards it to be processed by a zIIP. However, if all the zIIPs are already busy, the work is sent back to a GCP. This extra step costs processing time, so it is important to have enough zIIPs to handle the load.

You can force all offloadable work to run only on zIIPs by setting IIPHONORPRIORITY to NO in the IEAOPTxx member of parmlib. However, this option is typically not good practice: if not enough zIIPs are available, the offloadable work waits for a zIIP, and this delay can make response times unacceptable and affect SLA targets. Most users keep the default of IIPHONORPRIORITY=YES, which indicates that if the zIIP processors are unable to run all zIIP-eligible work, GCPs can run both zIIP-eligible and non-zIIP-eligible work in priority order. For more information, see Reviewing z/OS® parameter settings in the z/OS MVS Planning: Workload Management documentation.
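For example, a minimal IEAOPTxx fragment that states the default behavior explicitly (shown here in the same notation as the SMT settings later in this topic):

IEAOPTxx: IIPHONORPRIORITY=YES

A changed IEAOPTxx member can be activated without an IPL by using the SET OPT=xx console command.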
The following SMF record types are useful for observing this usage:
- SMF 30 subtype 2 records, which are written for every address space in an LPAR and are useful for viewing the CPU usage of long-running tasks like IBM z/OS Connect.
- SMF 70 records, with which RMF can show the CPU activity for the LPAR.
- SMF 72 records, with which RMF can show the CPU usage for each z/OS WLM service class or report class. A report class can represent an individual IBM z/OS Connect server. Furthermore, you can configure report classes for an individual API, service, or API requester, so that you can see the CPU usage of an individual resource (see the sketch after this list).
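As a sketch of the report-class idea, a WLM classification rule under the STC subsystem type can assign a z/OS Connect server's started task to its own report class; the task, service class, and report class names here are hypothetical:

Subsystem Type: STC
  Qualifier Type  Qualifier Name  Service Class  Report Class
  TN              ZCON1           ZCONSRV        RZCON1

Report classes for an individual API, service, or API requester are defined in the same way, against the request-level classification data that the server passes to WLM.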
CPU activity observations with SMF 70 records
The following observations come from the RMF CPU Activity report for the LPAR; row numbers refer to rows of that report (a JCL sketch for generating the report follows this list).
- The level of z/OS running in the LPAR is z/OS 2.3 (row 3)
- The SMF interval is 1 minute (row 3)
- The date and time the SMF interval started (rows 3 and 4)
- The CPU type and MSU (millions of service units per hour) capacity for the entire mainframe (row 5). This mainframe has a capacity of 10017 MSUs.
- HiperDispatch is enabled (row 6). HiperDispatch is a z/OS workload-dispatching feature that steers tasks to the CPUs most likely to have the fastest access to relevant data already in cache. It works alongside z/OS WLM to help achieve SLAs. HiperDispatch "parks" CPUs that are not needed.
- Three dedicated GCPs (rows 11-13) with LPAR and MVS average "busy" times (row 14)
- Three dedicated zIIPs (rows 15-17) with LPAR and MVS average "busy" times (row 18). The report shows that most of the work was run on the zIIPs rather than the GCPs during this interval, because:
  - IBM z/OS Connect is up to 99% offloadable.
  - CICS runs mainly on GCPs.
- SMT mode is not enabled (rows 19-22). See Simultaneous Multithreading (SMT) for zIIPs later in this topic.
- The DISTRIBUTION OF IN-READY WORK UNIT QUEUE shows minimal queuing of requests waiting for processors to be available with 93% not needing to queue at all. This shows that the CPUs were comfortably managing all the work, including IBM z/OS Connect and CICS, running in the LPAR.
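The reports behind these observations are produced by the RMF post-processor. The following JCL is a minimal sketch, assuming the SMF records have already been dumped to a data set (the data set name is hypothetical):

//RMFPP    EXEC PGM=ERBRMFPP
//MFPINPUT DD   DISP=SHR,DSN=YOUR.SMF.DATA
//SYSIN    DD   *
  REPORTS(CPU)
  SYSRPTS(WLMGL(RCLASS))
/*

REPORTS(CPU) produces the CPU Activity report from SMF 70 records, and SYSRPTS(WLMGL(RCLASS)) produces the Workload Activity report for report classes from SMF 72 records.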

CPU activity observations with SMF 72 records
The following observations come from an RMF Workload Activity report; row numbers refer to rows of that report.
- Most of the IBM z/OS Connect work was handled by zIIPs (row 38248). However, for this workload there were not enough zIIPs to handle all the offloadable work. This situation is denoted by the IIPCP value in the APPL% column (row 38247). In this particular scenario, another workload was running at the same time and using the zIIPs.
- Some work was handled by the GCPs (CP 9.00% in row 38246), but the IIPCP value of 8.41% (row 38248) indicates that most of the work that ran on the GCPs could have run on a zIIP if one had been available, as the arithmetic below shows.
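Put another way: of the 9.00% APPL% on the GCPs, 8.41% was zIIP-eligible overflow, so only 9.00 - 8.41 = 0.59% was work that had to run on a GCP. Adding zIIP capacity could therefore reduce GCP usage by up to 8.41% for this workload.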

Simultaneous Multithreading (SMT) for zIIPs
z/OS supports simultaneous multithreading (SMT) for zIIPs, which allows two threads to run concurrently on each zIIP core. SMT is controlled by the following parmlib settings:
LOADxx: PROCVIEW CORE|CPU
IEAOPTxx: MT_ZIIP_MODE={1|2}
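For example, to run two threads on each zIIP core (a minimal sketch; note that changing PROCVIEW requires an IPL to take effect):

LOADxx: PROCVIEW CORE
IEAOPTxx: MT_ZIIP_MODE=2

You can then display the core and thread status from the console with the D M=CORE command.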

Security network encryption offload to zIIP
zIIPs can be used to encrypt and decrypt data that is sent to or from IBM z/OS Connect. By running some of the IPSec network encryption processing on zIIPs, you can reduce your GCP usage and lower CPU costs. For more information, see IP security (IPSec) in the z/OS Communications Server: IP Configuration Guide.
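As a sketch, zIIP-assisted IPSec is enabled in the TCP/IP profile with the GLOBALCONFIG statement:

GLOBALCONFIG ZIIP IPSECURITY

With this setting, the TCP/IP stack makes eligible IPSec protocol processing dispatchable on zIIPs.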
Cryptographic processors
For secure connections to and from IBM z/OS Connect, consider using cryptographic processors to handle the requests, particularly where computationally expensive algorithms are used. Depending on how you configure the security providers in your java.security file, the use of cryptographic processors can offload this work from the GCPs.
You reference your updated java.security override file in your JVM options file by using the java.security.properties system property. For example:

-Djava.security.properties=${server.config.dir}/java.security

The override file lists the security providers in order of preference. For example:
security.provider.1=com.ibm.jsse2.IBMJSSEProvider2
security.provider.2=com.ibm.crypto.ibmjcehybrid.provider.IBMJCEHYBRID
security.provider.3=com.ibm.crypto.hdwrCCA.provider.IBMJCECCA
security.provider.4=com.ibm.crypto.provider.IBMJCE
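With this ordering (an illustrative configuration, not the only valid one), the IBMJCEHYBRID provider routes cryptographic operations to the hardware-backed IBMJCECCA provider when the operation and the hardware support them, and falls back to the software IBMJCE provider otherwise.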