viosbr command
Purpose
Performs the operations for backing up the virtual and logical configuration, listing the configuration, and restoring the configuration of the Virtual I/O Server (VIOS).
Syntax
To perform a backup:
viosbr -backup -file FileName [-frequency daily|weekly|monthly [-numfiles fileCount]]
viosbr -backup -file FileName -clustername clusterName [-frequency daily|weekly|monthly [-numfiles fileCount]]
To view a backup file:
viosbr -view -file FileName [[-type devType] [-detail] | [-mapping]]
viosbr -view -file FileName -clustername clusterName [[-type devType] [-detail] | [-mapping]]
To view the listing of backup files:
viosbr -view -list [UserDir]
To restore a backup file:
viosbr -restore -file FileName [-validate | -inter] [-type devType]
viosbr -restore -file FileName [-type devType] [-force]
viosbr -restore -clustername clusterName -file FileName -subfile NodeFile [-validate | -inter | -force] [-type devType] [-skipcluster] [-skipdevattr]
viosbr -restore -clustername clusterName -file FileName -repopvs list_of_disks [-validate | -inter | -force] [-type devType] [-db]
viosbr -restore -clustername clusterName -file FileName -subfile NodeFile -xmlvtds
viosbr -restore -file FileName [-skipcluster]
To disable a scheduled backup:
viosbr -nobackup
To recover from a corrupted shared storage pool (SSP) database:
viosbr -recoverdb -clustername clusterName [-file FileName]
To migrate a backup file from an older release level to a current release level:
viosbr -migrate -file FileName
To recover the SSP on the secondary setup:
viosbr -dr -clustername clusterName -file FileName -type devType -typeInputs name:value [,...] -repopvs list_of_disks [-db]
To start, stop, check the status of, or save the automatic (autoviosbr) backups:
viosbr -autobackup { start | stop | status } [ -type { cluster | node} ]
viosbr -autobackup save
Description
The viosbr command uses the parameters -backup, -view, and -restore to perform backup, list, and recovery tasks for the VIOS.
The viosbr command does not back up the parent devices of adapters or drivers, device drivers, virtual serial adapters, virtual terminal devices, kernel extensions, the Internet Network Extension (inet0), virtual I/O bus, processor, memory, or cache.
The -view parameter displays the information of all the backed up entities in a formatted output. This parameter requires an input file in a compressed or noncompressed format that is generated with the -backup parameter. The -view parameter uses the option flags type and detail to display information in detail or to display minimal information for all the devices or for a subset of devices. The -mapping option flag provides lsmap-like output for Virtual Small Computer System Interface (VSCSI) server adapters, SEA, server virtual Fibre Channel (SVFC) adapters, and PowerVM Active Memory Sharing paging devices. The entities can be controllers, disks, optical devices, tape devices, network adapters, network interfaces, storage pools, repositories, Etherchannels, virtual log repositories, SEAs, VSCSI server adapters, server virtual Fibre Channel (SVFC) adapters, and paging devices. The -list option displays backup files from the default location /home/padmin/cfgbackups or from a user-specified location.
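As a sketch, the -view variants described above can be combined as follows (the backup file name is illustrative):

```shell
# Show detailed information for one device type from a backup file
viosbr -view -file /home/padmin/cfgbackups/myserverbackup.01.tar.gz -type sea -detail

# Show lsmap-like mapping output for the backed-up virtual devices
viosbr -view -file /home/padmin/cfgbackups/myserverbackup.01.tar.gz -mapping

# List the backup files in the default location /home/padmin/cfgbackups
viosbr -view -list
```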
The -restore parameter uses an earlier backup file as input and brings the VIOS partition to the same state as when the backup was created. With the information available from the input file, the command sets the attribute values for physical devices, imports logical devices, and creates virtual devices and their corresponding mappings. The attributes can be set for controllers, adapters, disks, optical devices, tape devices, and Ethernet interfaces. Logical devices that can be imported are volume groups, storage pools, logical volumes (LVs), file systems, and repositories. Virtual devices that can be created are Etherchannel, SEA, server virtual Fibre Channel (SVFC) adapters, virtual target devices, and PowerVM Active Memory Sharing paging devices. The command creates mappings between virtual SCSI server adapters and the VTD-backing devices, between a virtual Fibre Channel (VFC) server adapter and a Fibre Channel (FC) adapter, and between PowerVM Active Memory Sharing paging devices and backing devices. The viosbr command with the -restore option must be run on the same VIOS partition as the one where the backup was performed. The command uses parameters to validate the devices on the system and restores a category of devices. The -restore option runs interactively so that if any devices fail to restore, you can decide how to handle the failure.
During the cluster restore operation, if the viosbr command detects mismatches between the storage pool disks that are part of the cluster backup and the storage pool disks that are currently on the system, a warning message is displayed and you are prompted for confirmation. If you confirm that you want to proceed with the restore operation, the viosbr command restores the cluster, but the operation might not succeed.
The viosbr command recovers the data that is used to reconfigure an SSP cluster. It does not recover user data, such as the contents of a logical unit (LU). You must take separate action to back up that data.
The viosbr command recovers an entire cluster configuration by using the -clustername option, which includes re-creating a cluster, adding all the nodes that comprise the cluster, and re-creating all cluster entities on all the nodes. If a node is down during this operation, the node is recovered when it is started if the cluster is not deleted. However, the non-SSP devices are not restored on the nodes that are down. The newly restored cluster uses the SSP database that exists on the system. If you also want to restore the SSP database, you must use the -db option.
If a single node is reinstalled and you want to restore the entities of that node, you must use the -subfile option and specify the .xml file that corresponds with the node.
- Do not reboot any other nodes in the cluster when a single node is restored by using the -subfile option.
- If a node in a cluster is stopped after a backup operation is complete, it cannot be joined to the cluster during restore. From VIOS version 2.2.4.0 or later, complete the following steps to restore the stopped node:
- Restore the RSCT node ID on the stopped node, by using the -type rsct option.
- Start the stopped node from another active node, by using the clstartstop command.
- Restore the remaining devices on the current node.
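A sketch of the stopped-node recovery steps above, with illustrative cluster, file, and node names; the clstartstop invocation shown is an assumption about its syntax:

```shell
# Step 1 -- on the stopped node: restore only the RSCT node ID from the cluster backup
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml -type rsct

# Step 2 -- from another active node: start the stopped node
clstartstop -start -n mycluster -m <stopped_node_hostname>

# Step 3 -- back on the restored node: restore the remaining devices
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml
```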
If the restore operation of a cluster fails, rerun the command to resolve the issue. For example, while restoring a four node cluster, if the restore operation fails after restoring two nodes, rerun the command to restore the other two nodes.
If one of the nodes is not added when restoring a cluster, do not add that node by using cluster -addnode. The cluster -addnode command adds a new node to the cluster and this invalidates the existing node information in the database.
- Install three nodes with VIOS Version 2.2.2.0.
- Create a 3-node cluster.
- Take a backup of the cluster.
- Reinstall node1 with VIOS Version 2.2.2.0, node2 with Version 2.2.3.0, and node3 with Version 2.2.4.0.
- Restore the cluster from node1 with VIOS Version 2.2.2.0.
An SSP cluster might incur database corruption. If a database corruption occurs, you must use the -recoverdb option. If this option is used with the -file option, the viosbr command uses the database information from the specified backup file. If the resources of the SSP cluster change after the backup file is created, those changed resources do not appear. The SSP cluster copies the SSP database every day. If you prefer this daily copy of the database to the database stored in the backup, omit the -file option from the command. Use the -view option to get the list of XML files in the cluster, and choose the correct file from the list by using the MTM and partition number.
To restore a cluster from a backup file that was created on an earlier release level:
- Migrate the existing backup.
- Restore the shared storage pool cluster by using the migrated backup.
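The two migration steps above might look like the following (file, cluster, and disk names are illustrative):

```shell
# Step 1: migrate the older backup; this creates systemA_MIGRATED.mycluster.tar.gz
viosbr -migrate -file systemA.mycluster.tar.gz

# Step 2: restore the cluster from the migrated backup file
viosbr -restore -clustername mycluster -file systemA_MIGRATED.mycluster.tar.gz -repopvs hdisk5
```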
The -dr flag is specific to the disaster recovery solution and is used to recover the SSP on the secondary setup. It uses the backup file from the primary setup, a list of host names, and a list of pool disks as input, and brings up the cluster on the secondary setup.
The viosbr command automatically creates a backup whenever there are configuration changes. This functionality is known as the autoviosbr backup. It is triggered every hour and checks for configuration changes; if any changes are detected, a backup is created. Otherwise, no action is taken. The backup files resulting from the autoviosbr backup are located under the default path /home/padmin/cfgbackups with the names autoviosbr_SSP.<cluster_name>.tar.gz for the cluster level and autoviosbr_<hostname>.tar.gz for the node level. The cluster-level backup file is present only in the default path of the database node.
The -autobackup flag is provided for the autoviosbr backup functionality. By default, the autoviosbr backup is enabled on the system. To disable the autoviosbr backup, use the stop parameter; to enable it, use the start parameter. When the autoviosbr backup is disabled, no autoviosbr-related tar.gz file is generated.
To check whether the autoviosbr backup file in the default path is up to date, use the status parameter. To access the cluster-level backup file on any node of the cluster, use the save parameter. This action is necessary because the cluster-level backup file is present only in the default path of the database node.
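Given the naming scheme above, and assuming a cluster named mycluster and a node host name viosA (both illustrative), the generated autoviosbr backup file names can be derived as follows:

```shell
# 'mycluster' and 'viosA' are assumed example values, not defaults.
cluster=mycluster
host=viosA

# Cluster-level autoviosbr backup (present only on the database node):
printf 'autoviosbr_SSP.%s.tar.gz\n' "$cluster"

# Node-level autoviosbr backup:
printf 'autoviosbr_%s.tar.gz\n' "$host"
```

Both files live under the default path /home/padmin/cfgbackups.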
If the node is part of a cluster, you can use the -type flag to specify the parameter. The parameter can be either cluster or node, depending on whether it is a cluster-level or a node-level backup.
Flags
Flag name | Description |
---|---|
-autobackup | Controls the autoviosbr backup functionality. It accepts the following parameters: start, stop, status, and save. |
-backup | Takes a backup of the VIOS configuration. |
-clustername | Specifies the cluster name to back up, restore, or view, including all of its associated nodes. |
-db | Restores the SSP database from the backup file. By default, the database from the shared storage pool is used. |
-detail | Displays all the devices from the XML file with all their attribute values. |
-dr | Restores data on different types of devices, from backups that were created on other devices. You can specify the device types, by using the -type flag. |
-file | Specifies the absolute path or relative path and file name of the file that has backup information. If the file name starts with a slash (/), it is considered an absolute path; otherwise, it is a relative path. For backups, a compressed file is created with a .tar.gz extension; for cluster backups, the compressed file is created with a <clustername>.tar.gz extension. |
-force | If this option is specified in noninteractive mode, restoration of a device that has not been successfully validated is attempted. This option cannot be used in combination with the -inter or -validate options. |
-frequency | Specifies the frequency of the backup to run automatically. Note: You can add or edit the crontab entry for backup frequencies other than daily, weekly, or monthly. A compressed file in the form file_name.XX.tar.gz is created, where file_name is the argument to -file and XX is a number from 01 to the numfiles value that you provide. The maximum numfiles value is 10. The format of the cluster backup file is file_name.XX.clustername.tar.gz. |
-inter | Interactively deploys each device with your confirmation. Note: User input can be taken to set properties of all drivers, adapters, and interfaces (disks, optical devices, tape devices, Fibre Channel SCSI controllers, Ethernet adapters, Ethernet interfaces, and logical HEAs) or each category of logical or virtual devices. This includes logical devices, such as storage pools, file-backed storage pools, and optical repositories, and virtual devices, such as Etherchannel, SEA, virtual server adapters, and server virtual Fibre Channel adapters. |
-list | This option displays backup files from either the default location /home/padmin/cfgbackups or a user-specified location. |
-mapping | Displays mapping information for SEA, virtual SCSI adapters, VFC adapters, and PowerVM Active Memory Sharing paging devices. |
-migrate | Migrates a backup file from an earlier cluster version to the current version. A new file is created with the _MIGRATED string appended to the given file name. |
-nobackup | This option removes any previously scheduled backups and stops any automatic backups. |
-numfiles | When backup runs automatically, this number indicates the maximum number of backup files that can be saved. The oldest file is deleted during the next cycle of backup. If this flag is not given, the default value is 10. |
-recoverdb | Recovers from the shared storage pool database corruption, either from the backup file or from the solid database backup. |
-repopvs | Takes the list of hdisks to be used as repository disks for restoring the cluster (a space-separated list of hdiskX). The given disks must not contain a repository signature. Note: The first release supports only one physical volume. |
-restore | Takes a backup file as input and brings the VIOS partition to the same state as when the backup was taken. |
-skipcluster | Restores all local devices, except cluster0. |
-skipdevattr | Skips the restore of the physical device attributes. This means that it does not modify the current system's physical device attributes. |
-subfile | Specifies the node configuration file to be restored. This option must be used when the valid cluster repository exists on the disks. It cannot be used with the -repopvs option. This option is ignored if the backup file is not a cluster backup. |
-type | Displays information corresponding to all instances of the device type specified. The devType can be pv, optical, tape, controller, interface, sp, fbsp, repository, ethchannel, sea, svsa, svfca, vlogrepo, pool, or paging. With the restore option, the devType option can be net, vscsi, npiv, cluster, vlogrepo, or ams. When deploying a given type of device, all the dependent devices also are deployed. For example, when deploying vscsi, related disks, attributes are set, the corresponding storage pool is imported, and all file-backed storage pools are mounted. |
-typeInputs | Passes additional inputs for the types that are specified with the -type flag. |
-validate | Validates the devices on the server against the devices on the backed-up file. If the inter option is specified, you are prompted to specify how to handle items that do not validate successfully. Without the inter option, if items do not validate successfully, the -restore operation fails. |
-view | Displays the information of all the backed-up entities. |
-xmlvtds | Allows you to restore SSP mappings that are not in the SSP database but are in the backup .xml file. This option is valid only while restoring a node, not while restoring a cluster. |
A cluster cannot be restored on a system if the cluster or node from the cluster is removed by using the cluster command with the -delete or -rmnode option.
The node configuration files in a cluster backup are named in the following format:
<clusterName>MTM<MachineTypeModel>P<partitionId>.xml
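As a sketch, a node configuration file name can be reconstructed from the cluster name, the machine type/model string, and the partition ID; the values below are taken from the examples in this document:

```shell
cluster=mycluster
mtm="8233-E8B02HV32001"   # machine type, model, and serial as embedded in the name
partid=3

# Assemble the <clusterName>MTM<MachineTypeModel>P<partitionId>.xml name
printf '%sMTM%sP%s.xml\n' "$cluster" "$mtm" "$partid"
```

This matches the file names such as myclusterMTM8233-E8B02HV32001P3.xml that appear in the cluster backup examples later in this page.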
Exit Status
Return code | Description |
---|---|
0 | Success |
-1 | Failure |
Examples
- To back up all the device attributes and logical and virtual device mappings on the VIOS to a file called /tmp/myserverbackup, type the following command:
viosbr -backup -file /tmp/myserverbackup
- To back up all the device attributes and virtual device mappings daily on the VIOS and keep the last five backup files, type the following command:
viosbr -backup -file mybackup -frequency daily -numfiles 5
The backup files resulting from this command are located under /home/padmin/cfgbackups with the names mybackup.01.tar.gz, mybackup.02.tar.gz, mybackup.03.tar.gz, mybackup.04.tar.gz, and mybackup.05.tar.gz for the five most recent files.
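For frequencies other than daily, weekly, or monthly, the -frequency note in the Flags section suggests editing the crontab entry directly. A hypothetical crontab line (the schedule, command path, and file name are assumptions, not values generated by viosbr) that runs a backup every six hours might look like:

```shell
# min hour dom mon dow  command
0 0,6,12,18 * * * /usr/ios/cli/ioscli viosbr -backup -file /home/padmin/cfgbackups/mybackup
```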
- To display information about all the entities in a backup file, myserverbackup.012909.tar.gz, type the following command:
viosbr -view -file myserverbackup.012909.tar.gz
The system displays the following output:
Controllers:
Name     Phys Loc
----     --------
scsi0    U787B.001.DNWFPMH-P1-C3-T1
scsi1    U787B.001.DNWFPMH-P1-C3-T2
fscsi0   U789D.001.DQD42T5-P1-C1-T1
iscsi0   U787B.001.DNWFPMH-P1-T10
lhea0    U789D.001.DQD42T5-P1
fcs0     U789D.001.DQD42T5-P1-C1-T1

Physical Volumes:
Name     Phys loc
----     --------
hdisk1   U787B.001.DNWFPMH-P1-C3-T2-L4-L0
hdisk2   U789D.001.DQD90N4-P3-D2

Optical Devices:
Name   Phys loc
----   --------
cd0    U78A0.001.DNWGLV2-P2-D2

Tape devices:
Name   Phys loc
----   --------
rmt0   U78A0.001.DNWGLV2-P2-D1

Ethernet Interface(s):
Name
----
en0
en1

Etherchannels:
Name   Prim adapter(s)   Backup adapter
----   ---------------   --------------
ent4   ent0              NONE
       ent1

Shared Ethernet Adapters:
Name   Target Adapter   Virtual Adapter(s)
----   --------------   ------------------
ent3   ent0             ent1
                        ent2

Storage Pools (*-default SP):
SP name   PV Name
-------   -------
testsp    hdisk1
          hdisk2
mysp*     hdisk3
          hdisk4

File-backed Storage Pools:
Name     Parent SP
----     ---------
myfbsp   mysp

Optical Repositories:
Name           Parent SP
----           ---------
VMLibrary_LV   mysp

VSCSI Server Adapters:
SVSA     VTD       Phys loc
----     ---       --------
vhost0   vtscsi0   U9133.55A.063368H-V4-C3
         vtopt1
vhost1   vtopt0    U9133.55A.063368H-V4-C4
         vttape0

SVFC Adapters:
Name       FC Adapter   Phys loc
----       ----------   --------
vfchost0   fcs0         U9117.MMA.06AB272-V5-C17
vfchost1   -            U9117.MMA.06AB272-V5-C18

VBSD Pools:
Name
----
pool0
pool1

VRM Pages:
Name       StreamID
----       --------
vrmpage0   0x2000011b7ec18369
vrmpage1   0x2000011b7dec9128

Virtual Log Repositories:
=========================
Virtual Log Repository   State
----------------------   -----
vlogrepo0                AVAILABLE
- To display information for only physical disks, type the following
command:
viosbr -view -file myserverbackup.002.tar.gz -type pv
The system displays the following output:
Physical Volumes:
=================
Name     Phys Loc
----     --------
hdisk0   U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400000000000
hdisk1   U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400100000000
hdisk2   U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010400400000000
hdisk3   U789D.001.DQD42T5-P1-C1-T1-W500507630513402B-L4010405C00000000
- To restore all the possible devices and display a summary of deployed
and nondeployed devices, type the following command:
viosbr -restore -file /home/padmin/cfgbackups/myserverbackup.002.tar.gz
The system displays the following output:
Deployed/changed devices:
<Name(s) of deployed devices>

Unable to deploy/change devices:
<Name(s) of non-deployed devices>
- To back up a cluster and all the nodes that are running, type the following command:
viosbr -backup -clustername mycluster -file systemA
The system displays the following output:
Backup of node systemB successful. Backup of this node systemA successful.
Note: If any further changes are made in the cluster configuration, such as adding, removing, or replacing a disk, or adding or removing nodes from the cluster, this backup file cannot be used to restore the full cluster. Also, this backup file cannot be used to restore a single cluster node if the cluster repository disk is changed. In such scenarios, you must take a fresh backup.
- To view the contents of a cluster backup and associated nodes, type the following command:
viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz
The system displays the following output:
Files in the cluster Backup
===========================
myclusterDB
myclusterMTM8233-E8B02HV32001P2.xml
myclusterMTM8233-E8B02HV32001P3.xml

Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P2.xml
===========================================================================
Controllers:
============
Name       Phys Loc
----       --------
iscsi0
pager0     U8233.E8B.HV32001-V3-C32769-L0-L0
vasi0      U8233.E8B.HV32001-V3-C32769
vbsd0      U8233.E8B.HV32001-V3-C32769-L0
fcs0       U5802.001.00H1180-P1-C8-T1
fcs1       U5802.001.00H1180-P1-C8-T2
sfwcomm0   U5802.001.00H1180-P1-C8-T1-W0-L0
sfwcomm1   U5802.001.00H1180-P1-C8-T2-W0-L0
fscsi0     U5802.001.00H1180-P1-C8-T1
ent0       U5802.001.00H1180-P1-C2-T1
fscsi1     U5802.001.00H1180-P1-C8-T2
ent1       U5802.001.00H1180-P1-C2-T2
ent2       U5802.001.00H1180-P1-C2-T3
ent3       U5802.001.00H1180-P1-C2-T4
sfw0
fcnet0     U5802.001.00H1180-P1-C8-T1
fcnet1     U5802.001.00H1180-P1-C8-T2

Physical Volumes:
=================
Name           Phys loc
----           --------
caa_private0   U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
hdisk0         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
hdisk1         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
hdisk2         U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
hdisk5         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
hdisk6         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
cldisk1        U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000

Optical Devices:
================
Name   Phys loc
----   --------

Tape devices:
=============
Name   Phys loc
----   --------

Ethernet Interfaces:
====================
Name
----
en0
en1
en2
en3

Storage Pools:
==============
SP name         PV Name
-------         -------
rootvg          hdisk2
caavg_private   caa_private0

Virtual Server Adapters:
========================
SVSA     Phys Loc                  VTD
------------------------------------------
vhost0   U8233.E8B.HV32001-V3-C2
vhost1   U8233.E8B.HV32001-V3-C3
vhost2   U8233.E8B.HV32001-V3-C4
vhost3   U8233.E8B.HV32001-V3-C5

Cluster:
========
Name       State
----       -----
cluster0   UP

Cluster Name   Cluster ID
------------   ----------
mycluster      ce7dd2a0e70911dfac3bc32001017779

Attribute Name   Attribute Value
--------------   ---------------
node_uuid        77ec1ca0-a6bb-11df-8cb9-00145ee81e01
clvdisk          16ea129f-0c84-cdd1-56ba-3b53b3d45174

Virtual Log Repositories:
=========================
Virtual Log Repository   State
----------------------   -----
vlogrepo0                AVAILABLE

Details in: /home/ios/mycluster.9240654/myclusterMTM8233-E8B02HV32001P3.xml
===========================================================================
Controllers:
============
Name       Phys Loc
----       --------
iscsi0
pager0     U8233.E8B.HV32001-V3-C32769-L0-L0
vasi0      U8233.E8B.HV32001-V3-C32769
vbsd0      U8233.E8B.HV32001-V3-C32769-L0
fcs0       U5802.001.00H1180-P1-C8-T1
fcs1       U5802.001.00H1180-P1-C8-T2
sfwcomm0   U5802.001.00H1180-P1-C8-T1-W0-L0
sfwcomm1   U5802.001.00H1180-P1-C8-T2-W0-L0
fscsi0     U5802.001.00H1180-P1-C8-T1
ent0       U5802.001.00H1180-P1-C2-T1
fscsi1     U5802.001.00H1180-P1-C8-T2
ent1       U5802.001.00H1180-P1-C2-T2
ent2       U5802.001.00H1180-P1-C2-T3
ent3       U5802.001.00H1180-P1-C2-T4
sfw0
fcnet0     U5802.001.00H1180-P1-C8-T1
fcnet1     U5802.001.00H1180-P1-C8-T2

Physical Volumes:
=================
Name           Phys Loc
----           --------
caa_private0   U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400400000000
hdisk0         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402500000000
hdisk1         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4003402600000000
hdisk2         U5802.001.00H1180-P1-C8-T1-W5005076305088075-L4004400100000000
hdisk5         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400600000000
hdisk6         U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400700000000
cldisk1        U5802.001.00H1180-P1-C8-T1-W500507630508C075-L4012400500000000

Optical Devices:
================
Name   Phys Loc
----   --------

Tape Devices:
=============
Name   Phys Loc
----   --------

Ethernet Interfaces:
====================
Name
----
en0
en1
en2
en3

Storage Pools:
==============
SP Name         PV Name
-------         -------
rootvg          hdisk2
caavg_private   caa_private0

Virtual Server Adapters:
========================
SVSA     Phys Loc                  VTD
------------------------------------------
vhost0   U8233.E8B.HV32001-V3-C2
vhost1   U8233.E8B.HV32001-V3-C3
vhost2   U8233.E8B.HV32001-V3-C4
vhost3   U8233.E8B.HV32001-V3-C5

Cluster:
========
Cluster    State
-------    -----
cluster0   UP

Cluster Name   Cluster ID
------------   ----------
mycluster      ce7dd2a0e70911dfac3bc32001017779

Attribute Name   Attribute Value
--------------   ---------------
node_uuid        77ec1ca0-a6bb-11df-8cb9-00145ee81e01
clvdisk          16ea129f-0c84-cdd1-56ba-3b53b3d45174
- To view the details of a cluster backup and associated
nodes, type the following command:
viosbr -view -clustername mycluster -file /home/padmin/cfgbackups/systemA.mycluster.tar.gz -detail
- To restore a particular node within the cluster,
type the following command:
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml
- To restore a cluster and its nodes, type the
following command:
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5
- To restore shared storage pool virtual target
devices that are in the backup file but not in the shared storage
pool database, type the following command:
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml -xmlvtds
- To restore only the shared storage pool database
from the backup file, type the following command:
viosbr -recoverdb -clustername mycluster -file systemA.mycluster.tar.gz
- To restore only the shared storage pool database
from the automated database backups, type the following command:
viosbr -recoverdb -clustername mycluster
- To migrate the older cluster backup file, type
the following command:
viosbr -migrate -file systemA.mycluster.tar.gz
A new file systemA_MIGRATED.mycluster.tar.gz is created.
- To restore legacy device mappings on a node that is in a cluster, by using the cluster backup file, type the following command:
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -subfile myclusterMTM8233-E8B02HV32001P3.xml -skipcluster
- To restore the cluster along with the SSP
database from a backup file, type the following command:
viosbr -restore -clustername mycluster -file systemA.mycluster.tar.gz -repopvs hdisk5 -db
- To restore a cluster when mismatches occur between
the storage pool disks in the backup file and the storage pool disks
currently present on the system, type the following command:
viosbr -restore -clustername mycl -file systemA -repopvs hdisk14
The system displays the following output:
WARNING: There seem to be mismatches in the current pool disks and the disks that are in backup.
The changes are:
The disks that are not in the backup file, but in the pool:
hdisk18
Proceeding further may or may not succeed in restoring cluster.
If cluster gets restored, there may be I/O errors due to missing disks.
Would you like to continue restoring the cluster with the disks that are available in backup file?(y/n):y

Backedup Devices that are unable to restore/change
==================================================

DEPLOYED or CHANGED devices:
============================
Dev name during BACKUP   Dev name after RESTORE
----------------------   ----------------------
- To recover the cluster on another geographic location, type the following
command:
viosbr -dr -clustername mycluster -file systemA.mycluster.tar.gz -type cluster -typeInputs hostnames_file:/home/padmin/nodelist,pooldisks_file:/home/padmin/disklist -repopvs hdisk5
The system displays the following output:
CLUSTER restore successful.

Restore summary on M4SSP3V4:
Backedup Devices that are unable to restore/change
==================================================

DEPLOYED or CHANGED devices:
============================
Dev name during BACKUP   Dev name after RESTORE
----------------------   ----------------------
The file pooldisks_file contains a list of universally unique identifiers (UUIDs) of the disks.
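As an illustration of the -typeInputs arguments, the two input files might contain entries such as the following; the host names are placeholders, the UUID is copied from the sample output earlier in this page, and the exact one-entry-per-line format is an assumption:

```shell
# /home/padmin/nodelist -- one host name per line:
viosA
viosB

# /home/padmin/disklist -- one disk UUID per line:
16ea129f-0c84-cdd1-56ba-3b53b3d45174
```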
- To trigger the node or cluster-level backup, type the following
command:
viosbr -autobackup start -type node
OR
viosbr -autobackup start -type cluster
Note: If the node is not part of the cluster, the -type flag is not necessary.
The system displays the following output:
Autobackup started successfully.
- To stop the autoviosbr backup, type the following
command:
viosbr -autobackup stop
- To check the status of the cluster-level auto backup, type the following
command:
viosbr -autobackup status -type cluster
Note: To check the status of the node-level auto backup, you can type viosbr -autobackup status -type node.
The system displays the following output:
Cluster configuration changes: Complete.
- To access the cluster-level autoviosbr backup on the non-database node, type the following
command:
viosbr -autobackup save
The system displays the following output:
Saved successfully.
After successful completion of the save command, the cluster-level backup file autoviosbr_SSP.<cluster_name>.tar.gz is available in the default path.