POWER7 information

viosbr command

Purpose

Performs the operations for backing up the virtual and logical configuration, listing the configuration, and restoring the configuration of the Virtual I/O Server (VIOS).

The viosbr command can be run only by the padmin user.

Syntax

To perform a backup:

viosbr -backup -file FileName [-frequency daily|weekly|monthly [-numfiles fileCount]]

viosbr -backup -file FileName -clustername ClusterName [-frequency daily|weekly|monthly [-numfiles fileCount]]

To view a backup file:

viosbr -view -file FileName [[-type devType] [-detail] | [-mapping]]

viosbr -view -file FileName -clustername ClusterName [[-type devType] [-detail] | [-mapping]]

To view the listing of backup files:

viosbr -view -list [UserDir]

To restore a backup file:

viosbr -restore -file FileName [-validate | -inter] [-type devType]

viosbr -restore -file FileName [-type devType] [-force]

viosbr -restore -clustername ClusterName -file FileName -subfile NodeFileName [-validate | -inter | -force] [-type devType] [-skipcluster]

viosbr -restore -clustername ClusterName -file FileName -repopvs list_of_disks [-validate | -inter | -force] [-type devType] [-currentdb]

viosbr -restore -clustername ClusterName -file FileName -subfile NodeFile -xmlvtds

viosbr -restore -file FileName [-skipcluster]

To disable a scheduled backup:

viosbr -nobackup

To recover from a corrupted shared storage pool (SSP) database:

viosbr -recoverdb -clustername ClusterName [-file FileName]

To migrate a backup file from an older release level to a current release level:

viosbr -migrate -file FileName

Description

The viosbr command uses the parameters -backup, -view, and -restore to perform backup, list, and recovery tasks for the VIOS.

The viosbr command backs up all the relevant data to recover the VIOS after a new installation. The -backup parameter backs up all the device properties and the virtual device configuration on the VIOS. This includes information about logical devices, such as storage pools, file-backed storage pools, the virtual media repository, and PowerVM® Active Memory Sharing (AMS) paging devices. It also includes virtual devices, such as Etherchannel, shared Ethernet adapters (SEAs), virtual server adapters, the virtual log repository, and server virtual Fibre Channel (SVFC) adapters. Additionally, it includes device attributes, such as the attributes for disks, optical devices, tape devices, Fibre Channel SCSI controllers, Ethernet adapters, Ethernet interfaces, and logical Host Ethernet adapters (HEAs). All the configuration information is saved in a compressed XML file. If a location is not specified with the -file option, the file is placed in the default location /home/padmin/cfgbackups.

The command can be run once, or it can be scheduled to run periodically by using the -frequency parameter with the daily, weekly, or monthly option. Daily backups occur at 00:00, weekly backups on Sunday at 00:00, and monthly backups on the first day of the month at 00:01. The -numfiles parameter specifies the number of successive backup files that are saved, with a maximum value of 10. After the given number of files is reached, the oldest backup file is deleted during the next backup cycle. The format of the file name is <givenfilename>.xx.tar.gz, where xx starts from 01. For cluster backups, the format is <givenfilename>.xx.<clustername>.tar.gz.
Note: Ensure that the file system on the VIOS has sufficient free space before you take a backup of the VIOS; otherwise, the backup might fail. For a cluster backup, ensure that the file systems on all nodes have enough free space.
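The free-space check in the note above can be scripted before a backup runs. The following is a minimal sketch; the has_free_space helper and the 1 MB threshold are illustrative, and /tmp stands in for /home/padmin/cfgbackups so the check runs anywhere:

```shell
# Sketch: confirm the backup target file system has free space before
# running viosbr -backup. On a VIOS the target directory would be
# /home/padmin/cfgbackups; /tmp is used here so the check is runnable.
has_free_space() {
    dir=$1
    required_kb=$2
    # df -Pk prints POSIX-format output in 1 KB blocks; field 4 of the
    # second line is the available space.
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    [ "$free_kb" -ge "$required_kb" ]
}

if has_free_space /tmp 1024; then      # require roughly 1 MB
    echo "enough free space; safe to run viosbr -backup"
else
    echo "low on free space; the backup might fail" >&2
fi
```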

The viosbr command does not back up the parent devices of adapters or drivers, device drivers, virtual serial adapters, virtual terminal devices, kernel extensions, the Internet Network Extension (inet0), virtual I/O bus, processor, memory, or cache.

The -view parameter displays the information for all the backed-up entities in a formatted output. This parameter requires an input file in a compressed or noncompressed format that was generated with the -backup parameter. The -view parameter uses the -type and -detail option flags to display detailed or minimal information for all the devices or for a subset of devices. The -mapping option flag provides lsmap-like output for virtual Small Computer System Interface (VSCSI) server adapters, SEAs, server virtual Fibre Channel (SVFC) adapters, and PowerVM Active Memory Sharing paging devices. The entities can be controllers, disks, optical devices, tape devices, network adapters, network interfaces, storage pools, repositories, Etherchannels, virtual log repositories, SEAs, VSCSI server adapters, SVFC adapters, and paging devices. The -list option displays backup files from the default location /home/padmin/cfgbackups or from a user-specified location.

The -restore parameter uses an earlier backup file as input and brings the VIOS partition to the same state as when the backup was created. With the information available from the input file, the command sets the attribute values for physical devices, imports logical devices, and creates virtual devices and their corresponding mappings. The attributes can be set for controllers, adapters, disks, optical devices, tape devices, and Ethernet interfaces. Logical devices that can be imported are volume groups, storage pools, logical volumes (LVs), file systems, and repositories. Virtual devices that can be created are Etherchannel, SEA, server virtual Fibre Channel (SVFC) adapters, virtual target devices, and PowerVM Active Memory Sharing paging devices. The command creates mappings between virtual SCSI server adapters and the VTD-backing devices, between a virtual Fibre Channel (VFC) server adapter and a Fibre Channel (FC) adapter, and between PowerVM Active Memory Sharing paging devices and backing devices. The viosbr command with the -restore option must be run on the same VIOS partition as the one where the backup was performed. The command uses parameters to validate the devices on the system and restores a category of devices. The -restore option runs interactively so that if any devices fail to restore, you can decide how to handle the failure.
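Because the -validate and -inter options cannot be combined with -force, a cautious restore is often done in two passes: validate first, then restore. The sketch below shows that sequence; viosbr exists only on a VIOS, so VIOSBR defaults to "echo viosbr" here to print the commands that would run, and the backup file name is illustrative:

```shell
# Sketch: validate a restore before applying it. On a real VIOS, run
# this as padmin with VIOSBR=viosbr; by default the commands are only
# echoed so the sketch is runnable anywhere.
VIOSBR=${VIOSBR:-echo viosbr}
backup=/home/padmin/cfgbackups/myserverbackup.01.tar.gz   # illustrative name

# First pass: validate the devices in the backup against the system.
$VIOSBR -restore -file "$backup" -validate

# If validation succeeds, run the restore itself.
$VIOSBR -restore -file "$backup"
```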

The viosbr command recovers the data that is used to reconfigure an SSP cluster. It does not recover the data within the cluster, such as the contents of a logical unit (LU); you must take separate action to back up that data.

The viosbr command recovers an entire cluster configuration by using the -clustername option. This operation includes re-creating the cluster, adding all the nodes that comprise the cluster, and re-creating all cluster entities on all the nodes. If a node is down during this operation, the node is recovered when it starts, provided that the cluster is not deleted. However, non-SSP devices are not restored on the nodes that are down.

If a single node is reinstalled and you want to restore the entities of that node, you must use the -subfile option and specify the .xml file that corresponds with the node.

Notes:
  • Do not reboot any other nodes in the cluster when a single node is restored by using the -subfile option.
  • If a node is stopped in a cluster after a backup is taken, it cannot be joined to the cluster during restore, unless it is started from another active node.

If the restore of a cluster fails, rerun the command to resolve the issue. For example, if the restore of a four-node cluster fails after two nodes are restored, rerun the command to restore the other two nodes.

If one of the nodes is not added when you restore a cluster, do not add that node by using the cluster -addnode command. The cluster -addnode command adds a new node to the cluster, which invalidates the existing node information in the database.

Note: If a full cluster is to be restored on nodes with different versions, run the -restore option from the node with the lowest version. Otherwise, you cannot restore all nodes in the cluster.
For example, if a cluster backup is taken on three nodes that are later reinstalled at different version levels, complete the steps as follows:
  1. Install three nodes with Version 2.2.2.0.
  2. Create a 3-node cluster.
  3. Take a backup of the cluster.
  4. Reinstall node1 with Version 2.2.2.0, node2 with Version 2.2.3.0, and node3 with Version 2.2.4.0.
  5. Restore the cluster from node1 with Version 2.2.2.0.

An SSP cluster might incur a database corruption. If a database corruption occurs, you must use the -recoverdb option. If this option is used with the -file option, the viosbr command uses the database information from the specified backup file. If the resources of the SSP cluster change after the backup file is made, those changed resources do not appear. The SSP cluster is updated to make a copy of the SSP database on a daily basis. If you prefer this copy of the database to the database stored in the backup, you can exclude the -file option and the backup file from the command-line call. Use the -view option to get the list of xml files in the cluster, choose the correct file from the list by using the MTM and Partition Number.

Note: Recovery of the database is allowed only when all nodes in the cluster are down except the node where the recovery is initiated.
When the VIOS is reinstalled with a newer level of software, restoring the cluster to the newer software level is a two-step process:
  1. Migrate the existing backup.
  2. Restore the shared storage pool cluster by using the migrated backup.
The -migrate option takes the backup file that is specified with the -file option and migrates it to a new backup file that can be used to restore the shared storage pool configuration on the current release of the VIOS. This option must be run before the restore operation, and a cluster must not be present.
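The two-step process can be sketched as follows. Because viosbr is only available on a VIOS, VIOSBR defaults to "echo viosbr" here so the commands are printed rather than run; the cluster name, backup file, and hdisk5 repository disk are illustrative values taken from the examples later in this page:

```shell
# Sketch of the two-step restore onto a newer VIOS level. On a real
# VIOS, run as padmin with VIOSBR=viosbr; by default the commands are
# only echoed so the sketch is runnable anywhere.
VIOSBR=${VIOSBR:-echo viosbr}

# Step 1: migrate the old backup. This creates
# systemA_MIGRATED.mycluster.tar.gz next to the original file.
$VIOSBR -migrate -file systemA.mycluster.tar.gz

# Step 2: restore the shared storage pool cluster from the migrated
# backup (hdisk5 as the repository disk is illustrative).
$VIOSBR -restore -clustername mycluster -file systemA_MIGRATED.mycluster.tar.gz -repopvs hdisk5
```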

Flags

Flag name Description
-backup Backs up the VIOS configuration.
-clustername Specifies the name of the cluster to back up, restore, or view, including all of its associated nodes.
-currentdb Restores the cluster without restoring the database from the backup file. When mappings are restored, some of them can fail if they are not in the current database.
-detail Displays all the devices from the XML file with all their attribute values.
-file Specifies the absolute or relative path and file name of the file that contains the backup information. If the file name starts with a slash (/), it is considered an absolute path; otherwise, it is a relative path. For a backup, a compressed file is created with the .tar.gz extension; for a cluster backup, a compressed file is created with the <clustername>.tar.gz extension.
-force If this option is specified in noninteractive mode, restoration of a device that has not been successfully validated is attempted. This option cannot be used in combination with the -inter or -validate options.
-frequency Specifies the frequency of the backup to run automatically.
Note: You can add or edit the crontab entry for backup frequencies other than daily, weekly, or monthly. A compressed file in the form file_name.XX.tar.gz is created, where file_name is the argument to the -file option and XX is a number from 01 to the numfiles value that you provide. The maximum numfiles value is 10. The format of the cluster backup file is file_name.XX.clustername.tar.gz.
-inter Interactively deploys each device with your confirmation.
Note: User input can be taken to set the properties of all drivers, adapters, and interfaces (disks, optical devices, tape devices, Fibre Channel SCSI controllers, Ethernet adapters, Ethernet interfaces, and logical HEAs), or of each category of logical or virtual devices. This includes logical devices, such as storage pools, file-backed storage pools, and optical repositories, and virtual devices, such as Etherchannel, SEA, virtual server adapters, and server virtual Fibre Channel adapters.
-list This option displays backup files from either the default location /home/padmin/cfgbackups or a user-specified location.
-mapping Displays mapping information for SEA, virtual SCSI adapters, VFC adapters, and PowerVM Active Memory Sharing paging devices.
-migrate Migrates a backup file from an earlier cluster version to the current version. A new file is created with the _MIGRATED string appended to the given file name.
-nobackup This option removes any previously scheduled backups and stops any automatic backups.
-numfiles When backup runs automatically, this number indicates the maximum number of backup files that can be saved. The oldest file is deleted during the next cycle of backup. If this flag is not given, the default value is 10.
-recoverdb Recovers from shared storage pool database corruption, either from the specified backup file or from the automated daily copy of the database.
-repopvs Takes the list of hdisks to be used as repository disks for restoring the cluster (space-separated list of hdiskX). The given disks must not contain a repository signature.
Note: The first release supports only one physical volume.
-restore Takes a backup file as input and brings the VIOS partition to the same state as when the backup was taken.
-skipcluster Restores all local devices, except cluster0.
-subfile Specifies the node configuration file to be restored. This option must be used when a valid cluster repository exists on the disks. It cannot be used with the -repopvs option. This option is ignored if the backup file is not a cluster backup.
-type Displays information for all instances of the specified device type. The devType value can be pv, optical, tape, controller, interface, sp, fbsp, repository, ethchannel, sea, svsa, svfca, vlogrepo, pool, or paging. With the -restore option, the devType value can be net, vscsi, npiv, cluster, vlogrepo, or ams. When a given type of device is deployed, all the dependent devices are also deployed. For example, when vscsi devices are deployed, the attributes of the related disks are set, the corresponding storage pool is imported, and all file-backed storage pools are mounted.
-validate Validates the devices on the server against the devices on the backed-up file. If the inter option is specified, you are prompted to specify how to handle items that do not validate successfully. Without the inter option, if items do not validate successfully, the -restore operation fails.
-view Displays the information of all the backed-up entities.
-xmlvtds Allows you to restore SSP mappings that are not in the SSP database but are in the backup .xml file. This option is valid only when a node is restored, not when a cluster is restored.
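The -frequency note above states that backup frequencies other than daily, weekly, or monthly require editing the crontab entry directly. A hypothetical entry that runs a backup every 6 hours might look like the following; the ioscli wrapper path and the file name are assumptions, so verify both on your VIOS level:

```shell
# Hypothetical crontab entry (edit the padmin crontab): run a viosbr
# backup every 6 hours. The path to the ioscli wrapper may differ
# between VIOS levels.
0 0,6,12,18 * * * /usr/ios/cli/ioscli viosbr -backup -file mybackup6h
```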

A cluster cannot be restored on a system if the cluster, or a node of the cluster, was removed by using the cluster command with the -delete or -rmnode option.

When a cluster backup is taken, the file name of each node's backed-up .xml file has the following format:
<clusterName>MTM<MachineTypeModel>P<partitionId>.xml
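The per-node file name is the concatenation of the cluster name, the machine type and model string, and the partition ID. As a quick sketch, the myclusterMTM8233-E8B02HV32001P3.xml name seen in the examples below decomposes as:

```shell
# Assemble the per-node backup file name
# <clusterName>MTM<MachineTypeModel>P<partitionId>.xml from its parts.
# Values are taken from the cluster backup examples on this page.
cluster=mycluster
mtm=8233-E8B02HV32001    # machine type-model string as recorded in the backup
partition=3
printf '%sMTM%sP%s.xml\n' "$cluster" "$mtm" "$partition"
# prints myclusterMTM8233-E8B02HV32001P3.xml
```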

Exit Status

Table 1. Command specific return codes
Return code Description
0 Success
-1 Failure

Examples

  1. To back up all the device attributes and logical and virtual device mappings on the VIOS to a file called /tmp/myserverbackup, type the following command:
    viosbr -backup -file /tmp/myserverbackup
  2. To back up all the device attributes and virtual device mappings daily on the VIOS and keep the last five backup files, type the following command:
    viosbr -backup -file mybackup -frequency daily -numfiles 5

    The backup files resulting from this command are located in /home/padmin/cfgbackups with the names mybackup.01.tar.gz, mybackup.02.tar.gz, mybackup.03.tar.gz, mybackup.04.tar.gz, and mybackup.05.tar.gz for the five most recent files.

  3. To display information about all the entities in a backup file called myserverbackup.012909.tar.gz, type the following command:
    viosbr -view -file myserverbackup.012909.tar.gz

    The system displays the following output:

    Controllers:
    Name        Phys Loc
    scsi0       U787B.001.DNWFPMH-P1-C3-T1
    scsi1       U787B.001.DNWFPMH-P1-C3-T2
    fscsi0      U789D.001.DQD42T5-P1-C1-T1
    iscsi0      U787B.001.DNWFPMH-P1-T10
    lhea0       U789D.001.DQD42T5-P1
    fcs0        U789D.001.DQD42T5-P1-C1-T1
    
    Physical Volumes:
    Name         Phys loc
    hdisk1       U787B.001.DNWFPMH-P1-C3-T2-L4-L0
    hdisk2       U789D.001.DQD90N4-P3-D2
    
    Optical Devices:
    Name        Phys loc
    cd0         U78A0.001.DNWGLV2-P2-D2
    
    Tape devices:
    Name        Phys loc
    rmt0        U78A0.001.DNWGLV2-P2-D1
    
    Ethernet Interface(s):
    Name
    en0
    en1
    
    Etherchannels:
    Name  Prim adapter(s)     Backup adapter
    ent4  ent0                NONE
          ent1
    
    Shared Ethernet Adapters:
    Name  Target Adapter            Virtual Adapter(s)
    ent3  ent0                      ent1
                                    ent2
    
    Storage Pools (*-default SP):
    SP name          PV Name
    testsp           hdisk1
                     hdisk2
    
    mysp*            hdisk3
                     hdisk4
    
    File-backed Storage Pools:
    Name             Parent SP
    myfbsp           mysp
    
    Optical Repositories:
    Name             Parent SP
    VMLibrary_LV     mysp
    
    
    VSCSI Server Adapters:
    SVSA      VTD        Phys loc
    vhost0    vtscsi0    U9133.55A.063368H-V4-C3
              vtopt1
    vhost1    vtopt0     U9133.55A.063368H-V4-C4
              vttape0
    
    
    SVFC Adapters:
    Name         FC Adapter   Phys loc
    vfchost0     fcs0         U9117.MMA.06AB272-V5-C17
    vfchost1     -            U9117.MMA.06AB272-V5-C18
    
    VBSD Pools:
    Name
    pool0
    pool1
    
    VRM Pages:
    Name      StreamID
    vrmpage0  0x2000011b7ec18369
    vrmpage1  0x2000011b7dec9128
    
    Virtual Log Repositories:
    =========================
    Virtual Log Repository      State
    ----------------------      -----
    
    vlogrepo0                   AVAILABLE
  4. To display information for only physical disks, type the following command:
    viosbr -view -file myserverbackup.002.tar.gz -type pv

    The system displays the following output:

    Physical Volumes:
    =================
    Name        Phys Loc
    ----        --------
    hdisk0      U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400000000000
    hdisk1      U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400100000000
    hdisk2      U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400400000000
    hdisk3      U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010405C00000000
  5. To restore all the possible devices and display a summary of deployed and nondeployed devices, type the following command:
    viosbr -restore -file /home/padmin/cfgbackups/myserverbackup.002.tar.gz

    The system displays the following output:

    Deployed/changed devices:
    		<Name(s) of deployed devices>
    
    Unable to deploy/change devices:
    		<Name(s) of non-deployed devices>
  6. To back up a cluster and all the nodes (that are UP), type the following command:
    viosbr -backup -clustername mycluster -file systemA
  7. To view the contents of a cluster backup and associated nodes, type the following command:
    viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz

    The system displays the following output:

    Files in the cluster Backup
    ===========================
    myclusterDB
    myclusterMTM8233-E8B02HV32001P2.xml
    myclusterMTM8233-E8B02HV32001P3.xml
    
    Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P2.xml
    ===========================================================================
    Controllers:
    ============
    
    Name         Phys Loc
    ----         --------
    iscsi0
    pager0       U8233.E8B.HV32001-V3-C32769-L0-L0
    vasi0        U8233.E8B.HV32001-V3-C32769
    vbsd0        U8233.E8B.HV32001-V3-C32769-L0
    fcs0         U5802.001.00H1180-P1-C8-T1
    fcs1         U5802.001.00H1180-P1-C8-T2
    sfwcomm0     U5802.001.00H1180-P1-C8-T1-W0-L0
    sfwcomm1     U5802.001.00H1180-P1-C8-T2-W0-L0
    fscsi0       U5802.001.00H1180-P1-C8-T1
    ent0         U5802.001.00H1180-P1-C2-T1
    fscsi1       U5802.001.00H1180-P1-C8-T2
    ent1         U5802.001.00H1180-P1-C2-T2
    ent2         U5802.001.00H1180-P1-C2-T3
    ent3         U5802.001.00H1180-P1-C2-T4
    sfw0         
    fcnet0       U5802.001.00H1180-P1-C8-T1
    fcnet1       U5802.001.00H1180-P1-C8-T2
    
    Physical Volumes:
    ================
    Name         Phys loc
    ----         --------
    caa_private0 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
    hdisk0       U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
    hdisk1       U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
    hdisk2       U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
    hdisk5       U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
    hdisk6       U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
    cldisk1      U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000
    
    Optical Devices:
    ===============
    Name        Phys loc
    ----        --------
    
    Tape devices:
    ============
    Name        Phys loc
    ----        --------
    
    Ethernet Interfaces:
    ====================
    Name
    ----
    en0
    en1
    en2
    en3
    
    Storage Pools:
    =============
    SP name          PV Name
    -------          -------
    rootvg           hdisk2
    caavg_private    caa_private0
    
    Virtual Server Adapters:
    =======================
    SVSA       Phys Loc                    VTD
    ------------------------------------------
    vhost0     U8233.E8B.HV32001-V3-C2    
    vhost1     U8233.E8B.HV32001-V3-C3    
    vhost2     U8233.E8B.HV32001-V3-C4    
    vhost3     U8233.E8B.HV32001-V3-C5
    
    Cluster:
    =======
    Name       State
    ----       -----
    cluster0   UP
    
    Cluster Name        Cluster ID
    ------------        ----------
    mycluster           ce7dd2a0e70911dfac3bc32001017779
    
    Attribute Name      Attribute Value
    --------------      ---------------
    node_uuid           77ec1ca0-a6bb-11df-8cb9-00145ee81e01
    clvdisk             16ea129f-0c84-cdd1-56ba-3b53b3d45174
    
    Virtual Log Repositories:
    =========================
    Virtual Log Repository     State 
    ----------------------     -----
    
    vlogrepo0                  AVAILABLE
    
    Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P3.xml
    ===========================================================================
    
    Controllers:
    ============
    Name               Phys Loc
    ----               --------
    iscsi0
    pager0             U8233.E8B.HV32001-V3-C32769-L0-L0
    vasi0              U8233.E8B.HV32001-V3-C32769
    vbsd0              U8233.E8B.HV32001-V3-C32769-L0
    fcs0               U5802.001.00H1180-P1-C8-T1
    fcs1               U5802.001.00H1180-P1-C8-T2
    sfwcomm0           U5802.001.00H1180-P1-C8-T1-W0-L0
    sfwcomm1           U5802.001.00H1180-P1-C8-T2-W0-L0
    fscsi0             U5802.001.00H1180-P1-C8-T1
    ent0               U5802.001.00H1180-P1-C2-T1
    fscsi1             U5802.001.00H1180-P1-C8-T2
    ent1               U5802.001.00H1180-P1-C2-T2
    ent2               U5802.001.00H1180-P1-C2-T3
    ent3               U5802.001.00H1180-P1-C2-T4
    sfw0
    fcnet0             U5802.001.00H1180-P1-C8-T1
    fcnet1             U5802.001.00H1180-P1-C8-T2
    
    Physical Volumes:
    =================
    Name               Phys Loc
    ----               --------
    caa_private0       U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
    hdisk0             U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
    hdisk1             U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
    hdisk2             U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
    hdisk5             U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
    hdisk6             U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
    cldisk1            U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000
    
    Optical Devices:
    ================
    Name              Phys Loc
    ----              --------
    
    Tape Devices:
    =============
    Name              Phys Loc
    ----              --------
    
    Ethernet Interfaces:
    ====================
    Name
    ----
    en0
    en1
    en2
    en3
    
    Storage Pools:
    ==============
    SP Name          PV Name
    -------          -------
    rootvg           hdisk2
    caavg_private    caa_private0
    
    Virtual Server Adapters:
    ========================
    SVSA       Phys Loc                    VTD
    ------------------------------------------
    vhost0     U8233.E8B.HV32001-V3-C2    
    vhost1     U8233.E8B.HV32001-V3-C3    
    vhost2     U8233.E8B.HV32001-V3-C4    
    vhost3     U8233.E8B.HV32001-V3-C5
    
    Cluster:
    ========
    Cluster    State
    -------    -----
    cluster0   UP
    
    Cluster Name      Cluster ID
    ------------      ----------
    mycluster         ce7dd2a0e70911dfac3bc32001017779
    
    Attribute Name    Attribute Value
    --------------    ---------------
    node_uuid         77ec1ca0-a6bb-11df-8cb9-00145ee81e01
    clvdisk           16ea129f-0c84-cdd1-56ba-3b53b3d45174
  8. To view the details of a cluster backup and associated nodes, type the following command:
    viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz 
    -detail
  9. To restore a particular node within the cluster, type the following command:
    viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile 
    myclusterMTM8233-E8B02HV32001P3.xml
  10. To restore a cluster and its nodes, type the following command:
    viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5
  11. To restore shared storage pool virtual target devices that are in the backup file but not in the shared storage pool database, type the following command:
    viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile 
    myclusterMTM8233-E8B02HV32001P3.xml -xmlvtds
  12. To restore only the shared storage pool database from the backup file, type the following command:
    viosbr -recoverdb -clustername mycluster -file systemA.mycluster.tar.gz
  13. To restore only the shared storage pool database from the automated database backups, type the following command:
    viosbr -recoverdb -clustername mycluster
  14. To migrate the older cluster backup file, type the following command:
    viosbr -migrate -file systemA.mycluster.tar.gz

    A new file systemA_MIGRATED.mycluster.tar.gz is created.

  15. To restore legacy device mappings on a node that is in a cluster by using the cluster backup file, type the following command:
    viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile
    myclusterMTM8233-E8B02HV32001P3.xml -skipcluster
  16. To restore a cluster from the backup file, but use the database that exists on the system, type the following command:
    viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5 -currentdb



Last updated: Wed, June 03, 2015