Performs the operations for backing up the virtual and logical configuration, listing the configuration, and restoring the configuration of the Virtual I/O Server (VIOS).
The viosbr command can be run only by the padmin user.
To perform a backup:
viosbr -backup -file FileName [-frequency daily|weekly|monthly [-numfiles fileCount]]
viosbr -backup -file FileName -clustername ClusterName [-frequency daily|weekly|monthly [-numfiles fileCount]]
To view a backup file:
viosbr -view -file FileName [[-type devType] [-detail] | [-mapping]]
viosbr -view -file FileName -clustername ClusterName [[-type devType] [-detail] | [-mapping]]
To view the listing of backup files:
viosbr -view -list [UserDir]
To restore a backup file:
viosbr -restore -file FileName [-validate | -inter] [-type devType]
viosbr -restore -file FileName [-type devType] [-force]
viosbr -restore -clustername ClusterName -file FileName -subfile NodeFileName [-validate | -inter | -force] [-type devType] [-skipcluster]
viosbr -restore -clustername ClusterName -file FileName -repopvs list_of_disks [-validate | -inter | -force] [-type devType] [-currentdb]
viosbr -restore -clustername ClusterName -file FileName -subfile NodeFile -xmlvtds
viosbr -restore -file FileName [-skipcluster]
To disable a scheduled backup:
viosbr -nobackup
To recover from a corrupted shared storage pool (SSP) database:
viosbr -recoverdb -clustername ClusterName [-file FileName]
To migrate a backup file from an older release level to a current release level:
viosbr -migrate -file FileName
The viosbr command uses the parameters -backup, -view, and -restore to perform backup, list, and recovery tasks for the VIOS.
The viosbr command does not back up the parent devices of adapters or drivers, device drivers, virtual serial adapters, virtual terminal devices, kernel extensions, the Internet Network Extension (inet0), virtual I/O bus, processor, memory, or cache.
The -view parameter displays the information of all the backed-up entities in a formatted output. This parameter requires an input file, in compressed or noncompressed format, that was generated with the -backup parameter. The -view parameter uses the -type and -detail option flags to display detailed information or minimal information for all the devices or for a subset of devices. The -mapping option flag provides lsmap-like output for virtual Small Computer System Interface (VSCSI) server adapters, SEAs, server virtual Fibre Channel (SVFC) adapters, and PowerVM Active Memory Sharing paging devices. The entities can be controllers, disks, optical devices, tape devices, network adapters, network interfaces, storage pools, repositories, Etherchannels, virtual log repositories, SEAs, VSCSI server adapters, SVFC adapters, and paging devices. The -list option displays backup files from the default location /home/padmin/cfgbackups or from a user-specified location.
The -restore parameter uses an earlier backup file as input and brings the VIOS partition to the same state as when the backup was created. With the information available from the input file, the command sets the attribute values for physical devices, imports logical devices, and creates virtual devices and their corresponding mappings. The attributes can be set for controllers, adapters, disks, optical devices, tape devices, and Ethernet interfaces. Logical devices that can be imported are volume groups, storage pools, logical volumes (LVs), file systems, and repositories. Virtual devices that can be created are Etherchannel, SEA, server virtual Fibre Channel (SVFC) adapters, virtual target devices, and PowerVM Active Memory Sharing paging devices. The command creates mappings between virtual SCSI server adapters and the VTD-backing devices, between a virtual Fibre Channel (VFC) server adapter and a Fibre Channel (FC) adapter, and between PowerVM Active Memory Sharing paging devices and backing devices. The viosbr command with the -restore option must be run on the same VIOS partition as the one where the backup was performed. The command uses parameters to validate the devices on the system and restores a category of devices. The -restore option runs interactively so that if any devices fail to restore, you can decide how to handle the failure.
The viosbr command recovers the data that is used to reconfigure an SSP cluster. It does not recover user data, such as the contents of a logical unit (LU); you must take separate action to back up that data.
The viosbr command recovers an entire cluster configuration by using the -clustername option; this includes re-creating the cluster, adding all the nodes that comprise it, and re-creating all cluster entities on all the nodes. If a node is down during this operation, the node is recovered when it starts, provided the cluster has not been deleted. However, non-SSP devices are not restored on nodes that are down.
If a single node is reinstalled and you want to restore the entities of that node, you must use the -subfile option and specify the .xml file that corresponds with the node.
If the restore of a cluster fails, rerun the command to resolve the issue. For example, while restoring a four-node cluster, if the restore fails after two nodes are restored, rerun the command to restore the other two nodes.
If one of the nodes is not added while a cluster is being restored, do not add that node by using the cluster -addnode command. That command adds a new node to the cluster, which invalidates the existing node information in the database.
An SSP cluster might incur a database corruption. If a database corruption occurs, use the -recoverdb option. When this option is used with the -file option, the viosbr command uses the database information from the specified backup file. Any SSP cluster resources that changed after the backup file was made do not appear. The SSP cluster also makes a daily copy of its database; if you prefer this copy to the database stored in the backup, omit the -file option from the command. Use the -view option to get the list of .xml files in the cluster backup, and choose the correct file from the list by using the machine type and model (MTM) and the partition number.
Flag name | Description |
---|---|
-backup | Takes the backup of VIOS configurations. |
-clustername | Specifies the name of the cluster to back up, restore, or view, including all of its associated nodes. |
-currentdb | Restores the cluster without restoring the database from the backup. When mappings are restored, some of them can fail if they are not in the current database. |
-detail | Displays all the devices from the XML file with all their attribute values. |
-file | Specifies the absolute or relative path and file name of the file that contains the backup information. If the file name starts with a slash (/), it is treated as an absolute path; otherwise, it is a relative path. For a backup, a compressed file with a .tar.gz extension is created; for a cluster backup, a compressed file with a <clustername>.tar.gz extension is created. |
-force | If this option is specified in noninteractive mode, restoration of a device that has not been successfully validated is attempted. This option cannot be used in combination with the -inter or -validate options. |
-frequency | Specifies the frequency of the automatic backup. Note: You can add or edit the crontab entry for backup frequencies other than daily, weekly, or monthly. A compressed file in the form file_name.XX.tar.gz is created, where file_name is the argument to -file and XX is a number from 01 to the numfiles value that you provide. The maximum numfiles value is 10. The format of the cluster backup file name is file_name.XX.clustername.tar.gz. |
-inter | Interactively deploys each device with your confirmation. Note: User input can be taken to set the properties of all drivers, adapters, and interfaces (disks, optical devices, tape devices, Fibre Channel SCSI controllers, Ethernet adapters, Ethernet interfaces, and logical HEAs) or of each category of logical or virtual devices. Logical devices include storage pools, file-backed storage pools, and optical repositories; virtual devices include Etherchannel, SEA, virtual server adapters, and server virtual Fibre Channel adapters. |
-list | Displays backup files from either the default location /home/padmin/cfgbackups or a user-specified location. |
-mapping | Displays mapping information for SEA, virtual SCSI adapters, VFC adapters, and PowerVM Active Memory Sharing paging devices. |
-migrate | Migrates a backup file from an earlier cluster version to the current version. A new file is created with the _MIGRATED string appended to the given file name. |
-nobackup | Removes any previously scheduled backups and stops automatic backups. |
-numfiles | When backup runs automatically, this number indicates the maximum number of backup files that can be saved. The oldest file is deleted during the next cycle of backup. If this flag is not given, the default value is 10. |
-recoverdb | Recovers from shared storage pool database corruption, either from a backup file or from the daily copy of the database. |
-repopvs | Takes the list of hdisks to be used as repository disks for restoring the cluster (a space-separated list of hdiskX names). The given disks must not contain a repository signature. Note: The first release supports only one physical volume. |
-restore | Takes a backup file as input and brings the VIOS partition to the same state as when the backup was taken. |
-skipcluster | Restores all local devices, except cluster0. |
-subfile | Specifies the node configuration file to be restored. This option must be used when the valid cluster repository exists on the disks. It cannot be used with the -repopvs option. This option is ignored if the backup file is not a cluster backup. |
-type | Displays information for all instances of the specified device type. The devType can be pv, optical, tape, controller, interface, sp, fbsp, repository, ethchannel, sea, svsa, svfca, vlogrepo, pool, or paging. With the -restore option, the devType can be net, vscsi, npiv, cluster, vlogrepo, or ams. When a given type of device is deployed, all dependent devices are also deployed. For example, when vscsi devices are deployed, attributes are set for the related disks, the corresponding storage pool is imported, and all file-backed storage pools are mounted. |
-validate | Validates the devices on the server against the devices in the backup file. If the -inter option is specified, you are prompted to specify how to handle items that do not validate successfully. Without the -inter option, the -restore operation fails if any item does not validate successfully. |
-view | Displays the information of all the backed-up entities. |
-xmlvtds | Allows you to restore SSP mappings, which are not in SSP database but are in the backup .xml file. This option is valid only while restoring a node and not while restoring clusters. |
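The crontab note under the -frequency flag can be sketched as follows. This is a hypothetical example: the ioscli wrapper path and the exact cron invocation are assumptions, shown only to illustrate the shape of a custom-schedule entry; -frequency itself accepts only daily, weekly, or monthly.

```shell
# Hypothetical cron entry for an hourly backup, a frequency that -frequency
# does not offer directly (it supports only daily, weekly, or monthly).
# The /usr/ios/cli/ioscli path is an assumption about the VIOS CLI wrapper.
entry='0 * * * * /usr/ios/cli/ioscli viosbr -backup -file hourlybackup -numfiles 10'
echo "$entry"
```

The five leading fields are the standard cron minute, hour, day-of-month, month, and weekday columns; on a live system such a line would be added with the crontab editor rather than echoed.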
A cluster cannot be restored on a system if the cluster, or a node of the cluster, was removed by using the cluster command with the -delete or -rmnode option.
The node configuration files in a cluster backup are named in the following format:
<clusterName>MTM<machineTypeModel>P<partitionId>.xml
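This naming format can be checked against the sample cluster backup listing later in this section; a minimal sketch, assembling the pieces in shell:

```shell
# Assemble a node configuration file name from its parts. The values are
# taken from the sample listing for the cluster "mycluster".
cluster=mycluster
mtm=8233-E8B02HV32001   # machine type, model, and serial number, run together
partition=3             # partition ID of the node
nodefile="${cluster}MTM${mtm}P${partition}.xml"
echo "$nodefile"        # myclusterMTM8233-E8B02HV32001P3.xml
```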
Return code | Description |
---|---|
0 | Success |
-1 | Failure |
viosbr -backup -file /tmp/myserverbackup
viosbr -backup -file mybackup -frequency daily -numfiles 5
The backup files resulting from this command are located under /home/padmin/cfgbackups with the names mybackup.01.tar.gz, mybackup.02.tar.gz, mybackup.03.tar.gz, mybackup.04.tar.gz, and mybackup.05.tar.gz for the five most recent files.
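The rotation naming described above can be sketched as:

```shell
# With -file mybackup and -numfiles 5, automatic backups cycle through
# five numbered archives; the oldest is overwritten on the next cycle.
for i in 01 02 03 04 05; do
  echo "mybackup.${i}.tar.gz"
done
```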
viosbr -view -file myserverbackup.012909.tar.gz
The system displays the following output:
Controllers:
Name Phys Loc
scsi0 U787B.001.DNWFPMH-P1-C3-T1
scsi1 U787B.001.DNWFPMH-P1-C3-T2
fscsi0 U789D.001.DQD42T5-P1-C1-T1
iscsi0 U787B.001.DNWFPMH-P1-T10
lhea0 U789D.001.DQD42T5-P1
fcs0 U789D.001.DQD42T5-P1-C1-T1
Physical Volumes:
Name Phys loc
hdisk1 U787B.001.DNWFPMH-P1-C3-T2-L4-L0
hdisk2 U789D.001.DQD90N4-P3-D2
Optical Devices:
Name Phys loc
cd0 U78A0.001.DNWGLV2-P2-D2
Tape devices:
Name Phys loc
rmt0 U78A0.001.DNWGLV2-P2-D1
Ethernet Interface(s):
Name
en0
en1
Etherchannels:
Name Prim adapter(s) Backup adapter
ent4 ent0 NONE
ent1
Shared Ethernet Adapters:
Name Target Adapter Virtual Adapter(s)
ent3 ent0 ent1
ent2
Storage Pools (*-default SP):
SP name PV Name
testsp hdisk1
hdisk2
mysp* hdisk3
hdisk4
File-backed Storage Pools:
Name Parent SP
myfbsp mysp
Optical Repositories:
Name Parent SP
VMLibrary_LV mysp
VSCSI Server Adapters:
SVSA VTD Phys loc
vhost0 vtscsi0 U9133.55A.063368H-V4-C3
vtopt1
vhost1 vtopt0 U9133.55A.063368H-V4-C4
vttape0
SVFC Adapters:
Name FC Adapter Phys loc
vfchost0 fcs0 U9117.MMA.06AB272-V5-C17
vfchost1 - U9117.MMA.06AB272-V5-C18
VBSD Pools:
Name
pool0
pool1
VRM Pages:
Name StreamID
vrmpage0 0x2000011b7ec18369
vrmpage1 0x2000011b7dec9128
Virtual Log Repositories:
=========================
Virtual Log Repository State
---------------------- -----
vlogrepo0 AVAILABLE
viosbr -view -file myserverbackup.002.tar.gz -type pv
The system displays the following output:
Physical Volumes:
=================
Name Phys Loc
---- --------
hdisk0 U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400000000000
hdisk1 U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400100000000
hdisk2 U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400400000000
hdisk3 U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010405C00000000
viosbr -restore -file /home/padmin/cfgbackups/myserverbackup.002.tar.gz
The system displays the following output:
Deployed/changed devices:
<Name(s) of deployed devices>
Unable to deploy/change devices:
<Name(s) of non-deployed devices>
viosbr -backup -clustername mycluster -file systemA
viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz
The system displays the following output:
Files in the cluster Backup
===========================
myclusterDB
myclusterMTM8233-E8B02HV32001P2.xml
myclusterMTM8233-E8B02HV32001P3.xml
Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P2.xml
===========================================================================
Controllers:
============
Name Phys Loc
---- --------
iscsi0
pager0 U8233.E8B.HV32001-V3-C32769-L0-L0
vasi0 U8233.E8B.HV32001-V3-C32769
vbsd0 U8233.E8B.HV32001-V3-C32769-L0
fcs0 U5802.001.00H1180-P1-C8-T1
fcs1 U5802.001.00H1180-P1-C8-T2
sfwcomm0 U5802.001.00H1180-P1-C8-T1-W0-L0
sfwcomm1 U5802.001.00H1180-P1-C8-T2-W0-L0
fscsi0 U5802.001.00H1180-P1-C8-T1
ent0 U5802.001.00H1180-P1-C2-T1
fscsi1 U5802.001.00H1180-P1-C8-T2
ent1 U5802.001.00H1180-P1-C2-T2
ent2 U5802.001.00H1180-P1-C2-T3
ent3 U5802.001.00H1180-P1-C2-T4
sfw0
fcnet0 U5802.001.00H1180-P1-C8-T1
fcnet1 U5802.001.00H1180-P1-C8-T2
Physical Volumes:
================
Name Phys loc
---- --------
caa_private0 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
hdisk0 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
hdisk1 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
hdisk2 U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
hdisk5 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
hdisk6 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
cldisk1 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000
Optical Devices:
===============
Name Phys loc
---- --------
Tape devices:
============
Name Phys loc
---- --------
Ethernet Interfaces:
====================
Name
----
en0
en1
en2
en3
Storage Pools:
=============
SP name PV Name
------- -------
rootvg hdisk2
caavg_private caa_private0
Virtual Server Adapters:
=======================
SVSA Phys Loc VTD
------------------------------------------
vhost0 U8233.E8B.HV32001-V3-C2
vhost1 U8233.E8B.HV32001-V3-C3
vhost2 U8233.E8B.HV32001-V3-C4
vhost3 U8233.E8B.HV32001-V3-C5
Cluster:
=======
Name State
---- -----
cluster0 UP
Cluster Name Cluster ID
------------ ----------
mycluster ce7dd2a0e70911dfac3bc32001017779
Attribute Name Attribute Value
-------------- ---------------
node_uuid 77ec1ca0-a6bb-11df-8cb9-00145ee81e01
clvdisk 16ea129f-0c84-cdd1-56ba-3b53b3d45174
Virtual Log Repositories:
=========================
Virtual Log Repository State
---------------------- -----
vlogrepo0 AVAILABLE
Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P3.xml
===========================================================================
Controllers:
============
Name Phys Loc
---- --------
iscsi0
pager0 U8233.E8B.HV32001-V3-C32769-L0-L0
vasi0 U8233.E8B.HV32001-V3-C32769
vbsd0 U8233.E8B.HV32001-V3-C32769-L0
fcs0 U5802.001.00H1180-P1-C8-T1
fcs1 U5802.001.00H1180-P1-C8-T2
sfwcomm0 U5802.001.00H1180-P1-C8-T1-W0-L0
sfwcomm1 U5802.001.00H1180-P1-C8-T2-W0-L0
fscsi0 U5802.001.00H1180-P1-C8-T1
ent0 U5802.001.00H1180-P1-C2-T1
fscsi1 U5802.001.00H1180-P1-C8-T2
ent1 U5802.001.00H1180-P1-C2-T2
ent2 U5802.001.00H1180-P1-C2-T3
ent3 U5802.001.00H1180-P1-C2-T4
sfw0
fcnet0 U5802.001.00H1180-P1-C8-T1
fcnet1 U5802.001.00H1180-P1-C8-T2
Physical Volumes:
=================
Name Phys Loc
---- --------
caa_private0 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
hdisk0 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
hdisk1 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
hdisk2 U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
hdisk5 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
hdisk6 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
cldisk1 U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000
Optical Devices:
================
Name Phys Loc
---- --------
Tape Devices:
=============
Name Phys Loc
---- --------
Ethernet Interfaces:
====================
Name
----
en0
en1
en2
en3
Storage Pools:
==============
SP Name PV Name
------- -------
rootvg hdisk2
caavg_private caa_private0
Virtual Server Adapters:
========================
SVSA Phys Loc VTD
------------------------------------------
vhost0 U8233.E8B.HV32001-V3-C2
vhost1 U8233.E8B.HV32001-V3-C3
vhost2 U8233.E8B.HV32001-V3-C4
vhost3 U8233.E8B.HV32001-V3-C5
Cluster:
========
Cluster State
------- -----
cluster0 UP
Cluster Name Cluster ID
------------ ----------
mycluster ce7dd2a0e70911dfac3bc32001017779
Attribute Name Attribute Value
-------------- ---------------
node_uuid 77ec1ca0-a6bb-11df-8cb9-00145ee81e01
clvdisk 16ea129f-0c84-cdd1-56ba-3b53b3d45174
viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz -detail
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml -xmlvtds
viosbr -recoverdb -clustername mycluster -file systemA.mycluster.tar.gz
viosbr -recoverdb -clustername mycluster
viosbr -migrate -file systemA.mycluster.tar.gz
A new file systemA_MIGRATED.mycluster.tar.gz is created.
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml -skipcluster
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5 -currentdb