IBM Support

Shared Storage Pool config dump command

How To


Summary

A new command that dumps SSP configuration information. Run it on your Power Systems Virtual I/O Server (VIOS).

Objective


New Shared Storage Pool (SSP) code and command to output all the details of an SSP that can be found via the VIOS libperfstat library.

Steps

I have been testing the new functions in the AIX libperfstat library and got to the SSP ones.

So I knocked up this command to dump the configuration details that can be extracted.

With 128 lines of C code we have the new command. I have not seen a VIOS SSP command that outputs this data yet, so these are new details that I think would help you rebuild an SSP if you had an accident or disaster.

Command line interface

# ./nsspconf -h
Hint: nsspconf [-v] [-h]
        List Shared Storage Pool info: global, disk including tier & failure group, LU including maps to client LPARs
        Note: LUs that are not mapped to a client LPAR seem to be ignored
         -v     include VIOS details. Warning: This can add 2 seconds per VIOS
         -h     This help info and stop

Data

  1. SSP
    1. cluster name
    2. pool name
    3. total space
    4. Total used
  2. Disk
    1. hdisk
    2. capacity
    3. free
    4. tier name
    5. failure group name
  3. LU
    1. LU name
    2. type = THIN or THICK
    3. size in MB
    4. free in MB
    5. usage in MB
    6. client LPAR id
    7. machine type model and serial number
    8. udid unique LU name
  4. VIOS (optional)
    1. VIOS name
    2. IP address
    3. MTMS
    4. VIOS LPAR id
    5. oslevel output
    6. ssp status
    7. pool status

Sample Output

# ./nsspconf -v

Global ClusterName=spiral PoolName=spiral TotalSpace=523776 TotalUsedSpace=287926
DiskName=hdisk5 capacity=131072 free=131008 tier=SYSTEM fg=cocoa
DiskName=hdisk6 capacity=131072 free=131008 tier=SYSTEM fg=cocoa
DiskName=hdisk7 capacity=131072 free=131008 tier=SYSTEM fg=cocoa
DiskName=hdisk8 capacity=131072 free=131008 tier=SYSTEM fg=cocoa
DiskName=hdisk10 capacity=131072 free=131008 tier=SYSTEM fg=coffee
DiskName=hdisk11 capacity=131072 free=131008 tier=SYSTEM fg=coffee
DiskName=hdisk12 capacity=131072 free=131008 tier=SYSTEM fg=coffee
DiskName=hdisk13 capacity=131072 free=131008 tier=SYSTEM fg=coffee
LU=testing type=THIN_LU size=32768 free=32770 usage=0 client_id=5 mtm=8408-E8E0221D494V VTD=vtscsi0 DRC=U8408.E8E.21D494V-V3-C11 udid=34b9ea9b78a538bfff13c10608b00ccb
LU=volume-vm6-bee7a81e-0000001f-boot-0-8309c9c7-cc7e type=THIN_LU size=65536 free=10985 usage=54554 client_id=16 mtm=8408-E8E0221D494V VTD=vtscsi0 DRC=U8408.E8E.21D494V-V4-C2 udid=849f705b87bfaf2754b87e3393ac9ac9
LU=vm59a type=THIN_LU size=65536 free=59590 usage=5949 client_id=4 mtm=9119-MHE0221C4837 VTD=vtscsi0 DRC=U9119.MHE.21C4837-V1-C3 udid=2eabbd5238fc5fd6fc3ae925827ca422
LU=vm59a type=THIN_LU size=65536 free=59590 usage=5949 client_id=4 mtm=9119-MHE0221C4837 VTD=vtscsi0 DRC=U9119.MHE.21C4837-V3-C3 udid=2eabbd5238fc5fd6fc3ae925827ca422
LU=vm60_SLES12 type=THIN_LU size=65536 free=57199 usage=8340 client_id=5 mtm=9119-MHE0221C4837 VTD=vtscsi1 DRC=U9119.MHE.21C4837-V1-C4 udid=f57037e504b95fd22c656af2bc8d48ef
LU=vm60_SLES12 type=THIN_LU size=65536 free=57199 usage=8340 client_id=5 mtm=9119-MHE0221C4837 VTD=vtscsi1 DRC=U9119.MHE.21C4837-V3-C4 udid=f57037e504b95fd22c656af2bc8d48ef
LU=volume-vm6-bee7a81e-0000001f-boot-0-8309c9c7-cc7e type=THIN_LU size=65536 free=10985 usage=54554 client_id=16 mtm=8408-E8E0221D494V VTD=vtscsi1 DRC=U8408.E8E.21D494V-V3-C2 udid=849f705b87bfaf2754b87e3393ac9ac9
LU=brass175a type=THIN_LU size=32768 free=30466 usage=2303 client_id=12 mtm=9040-MR90213601FX VTD=vtscsi10 DRC=U9040.MR9.13601FX-V2-C16 udid=697429078bb8893f41643055a08e6f8f
LU=testtwo type=THICK_LU size=101376 free=0 usage=101382 client_id=12 mtm=9040-MR90213601FX VTD=vtscsi11 DRC=U9040.MR9.13601FX-V2-C15 udid=10a8e5f994eb35bf53f50a095e290b18
LU=volume-brass5-58896c57-00000028-boot-0-294423d0-946d type=THIN_LU size=32768 free=11975 usage=20794 client_id=17 mtm=8408-E8E0221D494V VTD=vtscsi2 DRC=U8408.E8E.21D494V-V4-C8 udid=e3d95b5b8f6b89e3c074e8733f99f10a
LU=volume-brass5-58896c57-00000028-boot-0-294423d0-946d type=THIN_LU size=32768 free=11975 usage=20794 client_id=17 mtm=8408-E8E0221D494V VTD=vtscsi3 DRC=U8408.E8E.21D494V-V3-C6 udid=e3d95b5b8f6b89e3c074e8733f99f10a
LU=volume-brass4-1a82b37e-00000027-boot-0-8d799dd5-382e type=THIN_LU size=32768 free=11288 usage=21481 client_id=18 mtm=9080-M9S02130A148 VTD=vtscsi3 DRC=U9080.M9S.130A148-V2-C12 udid=3e28b766d1e9f6fc3ed2dc8301551fe0
LU=silver2 type=THIN_LU size=32768 free=22972 usage=9797 client_id=4 mtm=9040-MR90213601FX VTD=vtscsi4 DRC=U9040.MR9.13601FX-V1-C7 udid=a66e3cbe397ae243df09f783639157b6
LU=silver2 type=THIN_LU size=32768 free=22972 usage=9797 client_id=4 mtm=9040-MR90213601FX VTD=vtscsi4 DRC=U9040.MR9.13601FX-V2-C7 udid=a66e3cbe397ae243df09f783639157b6
LU=volume-brass4-1a82b37e-00000027-boot-0-8d799dd5-382e type=THIN_LU size=32768 free=11288 usage=21481 client_id=18 mtm=9080-M9S02130A148 VTD=vtscsi4 DRC=U9080.M9S.130A148-V1-C13 udid=3e28b766d1e9f6fc3ed2dc8301551fe0
LU=volume-brass10-f35c2b3e-0000002a-boot-0-10525fd8-80a4 type=THIN_LU size=32768 free=29226 usage=3543 client_id=20 mtm=8408-E8E0221D494V VTD=vtscsi5 DRC=U8408.E8E.21D494V-V3-C8 udid=b5637b33ad1cab0dbec022f95e09a0a5
LU=volume-brass5-58896c57-00000028-boot-0-294423d0-946d type=THIN_LU size=32768 free=11975 usage=20794 client_id=8 mtm=9040-MR90213601FX VTD=vtscsi5 DRC=U9040.MR9.13601FX-V1-C9 udid=e3d95b5b8f6b89e3c074e8733f99f10a
LU=silver5 type=THIN_LU size=32768 free=29702 usage=3067 client_id=7 mtm=9040-MR90213601FX VTD=vtscsi5 DRC=U9040.MR9.13601FX-V2-C9 udid=4764ac54b533ab31ec52e7e4af26787f
LU=volume-brass10-f35c2b3e-0000002a-boot-0-10525fd8-80a4 type=THIN_LU size=32768 free=29226 usage=3543 client_id=20 mtm=8408-E8E0221D494V VTD=vtscsi5 DRC=U8408.E8E.21D494V-V4-C10 udid=b5637b33ad1cab0dbec022f95e09a0a5
LU=volume-brass11-d7cd4e32-0000002b-boot-0-6933efe2-73c8 type=THIN_LU size=32768 free=31679 usage=1090 client_id=21 mtm=8408-E8E0221D494V VTD=vtscsi6 DRC=U8408.E8E.21D494V-V4-C11 udid=c77538ccd14912e0d0f63b62eca41a4e
LU=volume-brass6-c4d6f3c7-00000029-boot-0-004788c4-602a type=THIN_LU size=32768 free=28066 usage=4703 client_id=9 mtm=9040-MR90213601FX VTD=vtscsi6 DRC=U9040.MR9.13601FX-V1-C10 udid=031ca7019f191adb62a3a6a0e86682e9
LU=volume-brass5-58896c57-00000028-boot-0-294423d0-946d type=THIN_LU size=32768 free=11975 usage=20794 client_id=8 mtm=9040-MR90213601FX VTD=vtscsi6 DRC=U9040.MR9.13601FX-V2-C10 udid=e3d95b5b8f6b89e3c074e8733f99f10a
LU=volume-brass11-d7cd4e32-0000002b-boot-0-6933efe2-73c8 type=THIN_LU size=32768 free=31679 usage=1090 client_id=21 mtm=8408-E8E0221D494V VTD=vtscsi6 DRC=U8408.E8E.21D494V-V3-C9 udid=c77538ccd14912e0d0f63b62eca41a4e
LU=volume-brass6-c4d6f3c7-00000029-boot-0-004788c4-602a type=THIN_LU size=32768 free=28066 usage=4703 client_id=9 mtm=9040-MR90213601FX VTD=vtscsi7 DRC=U9040.MR9.13601FX-V2-C11 udid=031ca7019f191adb62a3a6a0e86682e9
LU=volume-AIX72-TL2_on_-762b86ca-0000002f-boot--5c29fade-7e32 type=THIN_LU size=32768 free=30102 usage=2667 client_id=10 mtm=9040-MR90213601FX VTD=vtscsi7 DRC=U9040.MR9.13601FX-V1-C11 udid=6da314afde95ae5040e450e3edab8c44
LU=brass3a type=THIN_LU size=32768 free=32466 usage=303 client_id=11 mtm=9040-MR90213601FX VTD=vtscsi8 DRC=U9040.MR9.13601FX-V1-C13 udid=44996c57c5f404d00f9dcf504c4ee46c
LU=volume-AIX72-TL2_on_-762b86ca-0000002f-boot--5c29fade-7e32 type=THIN_LU size=32768 free=30102 usage=2667 client_id=10 mtm=9040-MR90213601FX VTD=vtscsi8 DRC=U9040.MR9.13601FX-V2-C12 udid=6da314afde95ae5040e450e3edab8c44
LU=brass175a type=THIN_LU size=32768 free=30466 usage=2303 client_id=12 mtm=9040-MR90213601FX VTD=vtscsi9 DRC=U9040.MR9.13601FX-V1-C14 udid=697429078bb8893f41643055a08e6f8f
LU=brass3a type=THIN_LU size=32768 free=32466 usage=303 client_id=11 mtm=9040-MR90213601FX VTD=vtscsi9 DRC=U9040.MR9.13601FX-V2-C14 udid=44996c57c5f404d00f9dcf504c4ee46c
VIOS=rubyvios3.aixncc.uk.ibm.com IP=9.137.62.217 MTMS=8408-E8E0221D494V lparid=3 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=rubyvios4.aixncc.uk.ibm.com IP=9.137.62.218 MTMS=8408-E8E0221D494V lparid=4 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=yellowvios1.aixncc.uk.ibm.com IP=9.137.62.57 MTMS=9119-MHE0221C4837 lparid=1 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=yellowvios2.aixncc.uk.ibm.com IP=9.137.62.58 MTMS=9119-MHE0221C4837 lparid=3 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=silvervios2.aixncc.uk.ibm.com IP=9.137.62.105 MTMS=9040-MR90213601FX lparid=1 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=silvervios1.aixncc.uk.ibm.com IP=9.137.62.104 MTMS=9040-MR90213601FX lparid=2 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=brickvios2.aixncc.uk.ibm.com IP=9.137.62.250 MTMS=9080-M9S02130A148 lparid=2 ioslevel=2.2.6.21 status=OK poolstatus=OK
VIOS=brickvios1.aixncc.uk.ibm.com IP=9.137.62.249 MTMS=9080-M9S02130A148 lparid=1 ioslevel=2.2.6.21 status=OK poolstatus=OK

Comments:

  1. I could change this to a comma-separated values (CSV) format to make scripting the results simpler, but I intend to add this data to njmon in JSON format first.
  2. One thing missing: the output gives the server machine type, model, and serial number (MTMS) and the client LPAR id, but not the client LPAR name or hostname (if different). The LU name might include a reference to the LPAR using it; otherwise, you will need a list of the LPAR names and client LPAR ids from the HMC to match them up.
  3. As this is simple code that uses libperfstat to do all the heavy lifting, I will release it as a code example.
  4. Comments, below, please, on:
    1. the code
    2. the usefulness
    3. the output format
  5. Compile the code with: cc -o nsspconf -O nsspconf.c -lperfstat 

Downloads

  • nsspconf - binary file of the command. Compiled on AIX 6 for VIOS 2.2.6+; not tested on earlier versions of VIOS.
  • nsspconf.c - the C source code. On earlier versions of AIX/VIOS you may have to comment out some fields that only appear in later VIOS versions.


Here is the same source code in case you want a quick look:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>      /* getopt() */
#include <libperfstat.h>

perfstat_ssp_t *ssp_global;
perfstat_ssp_t *ssp_disk;
perfstat_ssp_t *ssp_lu;
perfstat_ssp_t *ssp_node;

int global_count;
int disk_count;
int lu_count;
int node_count;

int vios = 0;

void error(char* where)
{
        perror(where);
        exit(-7);
}

int main(int argc, char* argv[])
{
int i;
int rc;
int c;
    while((c = getopt(argc, argv, "hv")) != -1){
        switch(c){
        default:
        case 'h':
            printf("Hint: %s [-v] [-h]\n", argv[0]);
            printf("\tList Shared Storage Pool info: global, disk including tier & "
                                 "failure group, LU including maps to client LPARs\n");
            printf("\tNote: LUs that are not mapped to a client LPAR seem to be ignored\n");
            printf("\t -v \tinclude VIOS details. Warning: This can add 2 seconds per VIOS\n");
            printf("\t -h \tThis help info and stop\n");
            exit(0);
        case 'v':
            vios = 1;
            break;
        }
    }

    /* Phase 1: Enable the cluster statistics */
    if( perfstat_config(PERFSTAT_ENABLE|PERFSTAT_CLUSTER_STATS, NULL) < 0)
        error("perfstat_config SSP is not available. Only run this on a VIOS 2.2.6+ with a Shared Storage Pool");

    /* Phase 2: Ask how many structures of each type exist (a NULL buffer returns the count) */
    if( (global_count = perfstat_ssp(NULL, NULL, sizeof(perfstat_ssp_t),0,SSPGLOBAL) ) < 0)
            error("perfstat_ssp(global init)");
    if( (disk_count   = perfstat_ssp(NULL, NULL, sizeof(perfstat_ssp_t),0,SSPDISK) ) < 0)
            error("perfstat_ssp(disk init)");
    if( (lu_count     = perfstat_ssp(NULL, NULL, sizeof(perfstat_ssp_t),0,SSPVTD) ) < 0)
            error("perfstat_ssp(lu init)");

    /* printf("Global=%d Disk=%d lu=%d\n",global_count,disk_count,lu_count); */

    /* Phase 3: Prepare memory buffers */
    ssp_global = (perfstat_ssp_t *) malloc(sizeof(perfstat_ssp_t) * global_count);
    ssp_disk   = (perfstat_ssp_t *) malloc(sizeof(perfstat_ssp_t) * disk_count);
    ssp_lu     = (perfstat_ssp_t *) malloc(sizeof(perfstat_ssp_t) * lu_count);
    if(ssp_global == (perfstat_ssp_t *)NULL || 
       ssp_disk == (perfstat_ssp_t *)NULL || 
       ssp_lu == (perfstat_ssp_t *)NULL )
                error("malloc failure requesting space to store perfstat data");

    /* Phase 4: Collect the data and display it */
    if( (rc = perfstat_ssp(NULL, ssp_global, sizeof(perfstat_ssp_t),global_count,SSPGLOBAL) ) <0)
            error("perfstat_ssp(SSPGLOBAL)");

    printf("Global ");
    printf("ClusterName=%s ", ssp_global->cluster_name);
    printf("PoolName=%s ", ssp_global->spool_name);
    printf("TotalSpace=%lld ", ssp_global->u.global.total_space);
    printf("TotalUsedSpace=%lld\n", ssp_global->u.global.total_used_space);

    if( (rc = perfstat_ssp(NULL, ssp_disk, sizeof(perfstat_ssp_t),disk_count,SSPDISK) ) < 0)
        error("perfstat_ssp(SSPDISK)");

    for(i=0; i<rc; i++){
        printf("DiskName=%s ", ssp_disk[i].u.disk.diskname);
        printf("capacity=%lld free=%lld tier=%s fg=%s\n",
               ssp_disk[i].u.disk.capacity,
               ssp_disk[i].u.disk.free,
               ssp_disk[i].u.disk.tiername,
               ssp_disk[i].u.disk.fgname);
    }

    if( (rc = perfstat_ssp(NULL, ssp_lu, sizeof(perfstat_ssp_t),lu_count,SSPVTD) ) < 0)
        error("perfstat_ssp(SSPLU)");

    for(i=0; i<rc; i++){
        printf("LU=%s type=%s size=%lld free=%lld usage=%lld client_id=%d mtm=%s VTD=%s DRC=%s udid=%s\n",
                ssp_lu[i].u.vtd.lu_name,
                ssp_lu[i].u.vtd.lu_type,
                ssp_lu[i].u.vtd.lu_size,
                ssp_lu[i].u.vtd.lu_free,
                ssp_lu[i].u.vtd.lu_usage,
                ssp_lu[i].u.vtd.client_id,
                ssp_lu[i].u.vtd.mtm,
                ssp_lu[i].u.vtd.vtd_name,
                ssp_lu[i].u.vtd.drcname,
                ssp_lu[i].u.vtd.lu_udid);
    }

    /* Phase 5: output VIOS details - warning: this can take 2 seconds per VIOS and
                if a VIOS is shut down or has network or pool issues then libperfstat will
                output annoyingly vague errors */
    if(vios) {
        if( (node_count     = perfstat_ssp(NULL, NULL, sizeof(perfstat_ssp_t),0,SSPNODE) ) < 0)
            error("perfstat_ssp(node init)");

        if( (ssp_node   = (perfstat_ssp_t *) malloc(sizeof(perfstat_ssp_t) * node_count)) == (perfstat_ssp_t *)NULL )
             error("malloc failure");

        if( (rc = perfstat_ssp(NULL, ssp_node, sizeof(perfstat_ssp_t),node_count,SSPNODE) ) < 0)
             error("perfstat_ssp(SSPNODE)");

        for(i=0; i<rc; i++){
            printf("VIOS=%s ", ssp_node[i].u.node.hostname);
            printf("IP=%s ", ssp_node[i].u.node.ip);
            printf("MTMS=%s ", ssp_node[i].u.node.mtms);
            printf("lparid=%d ", ssp_node[i].u.node.lparid);
            printf("ioslevel=%s ", ssp_node[i].u.node.ioslevel);
            printf("status=%s ",      (ssp_node[i].u.node.status==1?"OK":"-"));
            printf("poolstatus=%s\n", (ssp_node[i].u.node.poolstatus==1?"OK":"-"));
        }
    }

    /* Phase 6: disable cluster statistics */
    perfstat_config(PERFSTAT_DISABLE|PERFSTAT_CLUSTER_STATS, NULL);
    return 0;
}

Additional Information


Other places to find Nigel Griffiths IBM (retired)

Document Location

Worldwide


Document Information

Modified date:
13 June 2023

UID

ibm11114839