IBM Support

File Restore Known Issues and Limitations: IBM Spectrum Protect for Virtual Environments: Data Protection for VMware V7.1.3 or later

Preventive Service Planning


Abstract

This document details the known issues and limitations for file restore operations issued from the IBM Spectrum Protect file restore interface.

Content

Beginning with Version 7.1.3, IBM Tivoli® Storage Manager for Virtual Environments is now IBM Spectrum Protect™ for Virtual Environments. Some applications such as the software fulfillment systems and IBM License Metric Tool use the new product name. However, the software and its product documentation continue to use the Tivoli Storage Manager for Virtual Environments product name. To learn more about the rebranding transition, see technote 1963634.

 

Limitations related to restore of virtual machines

Microsoft BitLocker encryption is not supported (internal reference #200411)
File-level restore for BitLocker-encrypted drives on Windows virtual machines is not possible.
This limitation applies to file-level restore in all versions of VMware and Microsoft Hyper-V environments.

 
Clarification on option Vmnovrdmdisks YES Parameter (internal reference #201050)
In all versions of the documentation, the topic Vmnovrdmdisks describes how the option enables the client to restore configuration information and data for vRDM volumes that are associated with a VMware virtual machine, even if the LUNs that were associated with the volumes cannot be found.
However, the following information is misleading in relation to the YES parameter:

Specify this value if you must restore a virtual machine that you backed up, and the original LUNs that were mapped by the raw device mappings file cannot be located. This setting causes the client to skip attempts to locate the missing LUNs used by the vRDM volumes, and restore the configuration information (disk labels) and the data that was backed up. The vRDM volumes are restored as thin-provisioned VMFS VMDKs.

Instead, a more accurate description applies:
Specify this value if you must restore a virtual machine that you backed up, and the original LUNs that were mapped by the raw device mappings file cannot be located. This setting causes the client to skip attempts to locate the LUNs used by the vRDM volumes, and restore the configuration information (disk labels) and the data that was backed up. All vRDM volumes are restored as thin-provisioned VMFS VMDKs.

The most recent version of this information can be found in the Vmnovrdmdisks topic. Use the corrected description (the second passage above) instead of the existing documentation text (the first passage above).
 



 

Limitations related to interface operations

The file-level restore web interface is not supported for the mount proxy itself (internal reference #190373)
Problem: The file-level restore (FLR) process in a Windows environment creates and exports a network share for the selected guest iSCSI disks that are mounted on the mount proxy host, so that the guest can remotely access its files from that share.
Using the FLR web interface for the Windows mount proxy host VMware virtual guest itself is not possible. This is due to Microsoft Windows WMI and authentication limitations that prevent a user from accessing a network share from the same host on which it was created.

Workaround: Use the legacy Spectrum Protect for Virtual Environments - Data Protection for VMware web GUI or the Recovery Agent native GUI to mount the needed guest disk to copy the files needed for the restore.

Limitation: Permanent restriction, see APAR IT305543.

 
Maximum of 2,000 objects can be restored from search results in a single operation
To restore more than 2,000 objects from search results in the file restore user interface, run multiple restore operations.

 
File restore interface cannot mount a renamed virtual machine (internal reference #99667)
In this scenario, a virtual machine is backed up to the Tivoli Storage Manager server by using the full VM incremental-forever backup type. After the backup completes, the virtual machine is renamed in the VMware vCenter Inventory. A subsequent attempt to mount the virtual machine backup displays the following error message in the file restore interface:

"The system cannot find a backup to load. Contact your administrator and log out."

The backup cannot be loaded because the Tivoli Storage Manager server does not contain a backup image of the renamed virtual machine.

Workaround: To prevent this error, complete either of the following solutions:
-Short term solution: In the VMware vCenter Inventory, rename the virtual machine to the name that was used in the original backup. Then, log in to the file restore interface to access the backup for a file restore operation.
-Long term solution: Back up the renamed virtual machine by using the full VM incremental-forever backup type. Then, log in to the file restore interface to access the backup for a file restore operation. The original virtual machine and the renamed virtual machine both appear in the Tivoli Storage Manager server database and storage pool. To prevent any confusion, the Tivoli Storage Manager administrator can delete the file space for the original virtual machine.

 
Log in attempt with host name or IP address fails (internal reference #101537)
In this scenario, the correct host name or IP address is entered in the login page of the file restore interface. However, the following error message is displayed:

"The host cannot be found. Verify the host name and log in again. If the problem persists, contact your administrator."

This scenario occurs when the data mover system and the VMware vCenter Server use different internet protocols. For example, the data mover uses IPv4 and the vCenter Server uses IPv6.

Workaround: To avoid a login failure, complete either of the following tasks:
  • Enter the fully qualified domain name in the login page of the file restore interface. For example: myhost.mycompany.com
  • If IPv4 is not used in the environment, request the domain administrator to remove any IPv4 entries from the domain DNS server for that host name. If IPv6 is not used in the environment, request the domain administrator to remove any IPv6 entries from the domain DNS server for that host name.
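
To check which address records the DNS server currently returns for the host name, you can query both record types, for example with the dig utility where it is available (a diagnostic sketch only; myhost.mycompany.com is the example host name from above):
  # Query the IPv4 (A) record for the host name
  dig myhost.mycompany.com A
  # Query the IPv6 (AAAA) record for the host name
  dig myhost.mycompany.com AAAA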
     
Log in can take a long time (internal reference #119173)
Logging in to the file restore interface might take a long time depending on the number of guests that are managed by the VMware vCenter server. For example, it can take three minutes to log on for a vSphere environment with 3,000 guests.

 
Date stamp, time stamp, and version number do not display in "Details" or "Job History" view (internal reference #98417)
During a file restore operation, when a file with the same name exists, the restored file's original modification date and time is added to the file name. Subsequent restores of the same file contain a version number (_N) after the original modification date and time. For example: t2.2015-03-07-07-28-03_1.txt

In this scenario, during a restore operation of a file with the same name as an existing file, the restore operation either fails or is canceled by the user. When you view information about the failure or cancellation in the "Details" or "Job History" view, the file's date stamp, time stamp, and version number are not displayed; only the original file name is displayed.

Workaround: To view the most recently restored version of the file on the guest virtual machine, look for the file with the highest version number.

 

Tivoli Storage Manager file restore login page requires repeat credentials (internal reference #96412)
This scenario occurs when either the login page is inactive for an extended period of time or the client acceptor service is restarted on the data mover system. Even though the correct credentials are entered, you are prompted to log in again. To resolve this problem, reenter the correct credentials. The interface then loads the backup selection page.

Workaround: To avoid this issue, refresh the login page after the client acceptor service is restarted on the data mover or the login page has become inactive.

 
No subtitles are shown in the file restore product videos
Subtitles (captions) are not available in the file restore product videos at this time. To view the file restore product videos with subtitles, go to the following websites:
https://www.youtube.com/watch?v=jcvHmk62eZo
https://www.youtube.com/watch?v=IIO6j2iOQjQ

Workaround:  Click the CC button in the video player to display the subtitles in the videos.

 
No sound in the file restore product videos on a remote computer
If you view the product videos from the file restore interface on a remote computer that does not have a sound card, you will not hear any sound in the videos. A sound card must be installed and enabled on the remote computer in order for you to hear sound in the product videos.

Workaround: To view the videos with sound, access the file restore interface from a web browser on a computer that has a sound card installed.

 



 

Limitations related to Linux systems

File restore fails to mount a backup of a Red Hat Enterprise Linux 8 guest VM (internal reference #190065)
Problem: If the XFS file system has an integrity issue, the disk does not mount during FLR. Although this is expected behavior, the mount error is not obvious and requires dmesg output from the Linux mount proxy to diagnose.
In IBM Spectrum Protect traces from the Linux mount proxy, the following error can be seen:
09/30/2019 16:13:19.007 [019377] [2077873920] : FileLevelRestore/Utils.cpp( 478): executeCommand: Full command string: timeout 120 mount -o ro -o nouuid /dev/sdb1 /tsmmount/file_restore/shoe67rh08/2019-09-30-13_49_40/Volume1
09/30/2019 16:13:19.769 [019377] [2077873920] : FileLevelRestore/Utils.cpp( 498): executeCommand: Command Output:
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
The dmesg output shows the following:
[8635258.335130] XFS (sdb1): Superblock has unknown read-only compatible features (0x4) enabled.
[8635258.335621] XFS (sdb1): Mounting V5 Filesystem
[8635258.913203] XFS (sdb1): Starting recovery (logdev: internal)
[8635258.924638] XFS (sdb1): Superblock has unknown read-only compatible features (0x4) enabled.
[8635258.924645] XFS (sdb1): Attempted to mount read-only compatible filesystem read-write.
[8635258.924647] XFS (sdb1): Filesystem can only be safely mounted read only.
[8635258.924673] XFS (sdb1): metadata I/O error: block 0x0 ("xlog_do_recover") error 22 numblks 1
[8635258.924679] XFS (sdb1): log mount/recovery failed: error -22
Although the IBM Spectrum Protect client mounts the file system read-only, the system automatically tries to remount it read-write in an attempt to replay the XFS log. The mount fails as a result.

Workaround: Check the XFS file system for internal errors on the target VM by using tools such as xfs_check or xfs_repair. After the log journal is repaired, the mount succeeds.
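
For example, a minimal check-and-repair sequence with xfs_repair on the target VM might look like the following; the device name /dev/sdb1 is only an example, and the file system must be unmounted before it is repaired:
  # Dry run: report problems without modifying the file system
  xfs_repair -n /dev/sdb1
  # Repair the file system, replaying the internal log where possible
  xfs_repair /dev/sdb1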
 



 

VM guest limitations

File system names cannot be identified (internal reference #199407)
Problem: In the File Level Restore interface, the following information message appears after the backup is loaded for a Linux machine:

The file system names cannot be identified. Files or folders from these file systems can only be restored to an alternate location.

The cause is that the original mount point cannot be found for a file system if both of the following statements are true:
  • /etc/mtab is a symbolic link
  • The file system device is referred to by using an unsupported naming method. The following methods are currently supported:
    • Real device name, e.g. /dev/sdb1
    • Using a UUID, e.g. UUID=6d3997c8-3ac6-403a-8767-c513936435a5
    • Using mapped devices, e.g. /dev/mapper/vg_oc2667817465-lv_home.
Workaround: Use one of the currently supported naming methods, for example in /etc/fstab as sketched below.
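
A minimal sketch of /etc/fstab entries that use the supported naming methods; the mount points and file system types are hypothetical, and the UUID is the example value from above:
  # Real device name
  /dev/sdb1                                   /data1  ext4  defaults  0 2
  # UUID
  UUID=6d3997c8-3ac6-403a-8767-c513936435a5   /data2  xfs   defaults  0 2
  # Mapped device (LVM)
  /dev/mapper/vg_oc2667817465-lv_home         /home   ext4  defaults  0 2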

 
Directories that are created (or recreated) during a restore operation might be assigned incorrect access permissions, ownership information, or both (internal reference #101879)
Upon a successful restore operation, the original user access permissions and ownership information of the restored files are preserved. However, access permissions and ownership information of the recreated directories (if any) change in accordance with the default umask setting and the initial login group of the user that is logged in for restore on the target Linux system. This is a known limitation.
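For example, with a default umask of 022, a recreated directory is assigned mode 755 (drwxr-xr-x), and its group is set to the initial login group of the restoring user.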

 
Linux temporary file system supports file restore to alternate location only (internal reference #102998)
A Linux file system that is mounted in the guest virtual machine, but is not present in the /etc/fstab system configuration file, supports a file restore to alternate location operation only. It does not support a file restore operation to the original location. In this scenario, it does not matter whether the file system is listed in the /etc/mtab file.

A Linux file system that is mounted in the guest virtual machine, but is not present in the /etc/fstab system configuration file, is represented in the file restore user interface as "volume#". This is a known limitation.

 
Linux mount proxy system cannot mount volumes from a virtual machine with a later Linux operating system (internal reference #102284)
In this scenario, a Red Hat Enterprise Linux (RHEL) 6.5 mount proxy system attempts to mount volumes from an RHEL 7.1, SLES 11, or SLES 12 virtual machine. The operation fails and the mount proxy system reboots. This scenario occurs because a Linux mount proxy system cannot mount Btrfs or XFS volumes from a virtual machine with a later Linux operating system. This is a known limitation.

Workaround: To prevent this situation, ensure that the operating system level of the Linux mount proxy system is at the same level as, or a later level than, the level of the protected guest virtual machine.

 
Group identifier is not preserved during restore operation by a non-root user (internal reference #102726)
When a non-root user restores a file that is owned by the same non-root user, the group identifier (GID) for this file is not preserved. This is a known limitation.

Workaround: To preserve the GID for a file that is owned by a non-root user, restore the file by using root user authority.

 
Potential UUID collisions on Linux mount proxy system and guest virtual machines (internal reference #101538 and #131419)
File restore operations are not supported when UUID collisions occur. A UUID collision occurs when the mount proxy system has a UUID that is identical to that of the guest virtual machine whose disks are being mounted. UUID collisions might occur between the guest virtual machines and the mount proxy system in any of the following scenarios:
  • The guest virtual machine is cloned from the mount proxy system.
  • The mount proxy system is cloned from the guest virtual machine.
  • The guest virtual machine and mount proxy system are cloned from the same template.
  • The guest virtual machine is cloned from a virtual machine that is already mounted on the mount proxy system.
When a UUID collision exists between the guest virtual machine and the mount proxy system, a file restore operation cannot identify the original restore points on the mounted guest virtual machine. In this situation, the volumes related to the duplicated UUIDs are skipped.

Before using the file restore interface, be sure to resolve any UUID collisions between the mount proxy system and guest virtual machines in Linux. Engage your Linux administrator, or Linux support, for guidance.

Workaround: The following example shows how to generate a new UUID for most file systems, but not all file systems. The workaround is not valid for Btrfs file systems. Ensure that your Linux administrator, or Linux support, validates the steps that are necessary to generate a new UUID for the file systems in use on either the guest virtual machine or the mount proxy system.
  1. Generate a new UUID on the cloned system by running the uuidgen command.
  2. Assign the new UUID to a specified block device on the cloned system by running the tune2fs command with the -U option (tune2fs -U <new_UUID> <device>).
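
A minimal sketch of these two steps for an ext2, ext3, or ext4 file system on the cloned system; the device name is hypothetical, and the file system must not be mounted while the UUID is changed:
  # Step 1: generate a new UUID
  NEW_UUID=$(uuidgen)
  # Step 2: assign the new UUID to the block device
  tune2fs -U "$NEW_UUID" /dev/sdb1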
     


 
LVM devices are not mounted if the LVM importing tool cannot generate a new UUID
On some Linux distributions, it has been observed that the vgimportclone command cannot assign a new UUID to the device. This results in duplicated device UUIDs on the mount proxy machine. In this situation, the volumes related to the duplicated UUIDs are skipped and the ANS3184W message is reported in the TSM log file. This can be a signal that the guest virtual machine is a clone of the mount proxy system. See also "Potential UUID collisions on Linux mount proxy system and guest virtual machines".
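
To check whether duplicate physical volume UUIDs are present on the mount proxy, you can list the physical volumes with their UUIDs, for example (a diagnostic sketch only):
  # List physical volumes with their UUIDs and volume groups;
  # identical PV UUIDs on different devices indicate a collision
  pvs -o pv_name,pv_uuid,vg_name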

 
The lvmetad daemon on the Linux mount proxy might activate the LVM volumes before they are activated by the Spectrum Protect services
This behavior causes the file-level restore process to not mount one or more guest file systems, and the mount process does not issue any warning or error message.

Workaround: To avoid this behavior, disable lvmetad on the Linux mount proxy host as follows:
  1. Edit the /etc/lvm/lvm.conf file and set the use_lvmetad parameter to 0.
  2. Run the following administration commands:
    • systemctl stop lvm2-lvmetad
    • systemctl disable lvm2-lvmetad
    • systemctl stop lvm2-lvmetad.socket
    • systemctl disable lvm2-lvmetad.socket
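
A minimal excerpt of the relevant /etc/lvm/lvm.conf setting after step 1; the rest of the file is unchanged:
  global {
      use_lvmetad = 0
  }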

       
Linux file restore operations limited to a maximum overall path length of 4096 characters (internal reference #99872)
The overall path for the mount in the file restore interface is composed of the following objects:

/tsmmount/FLR/<VMName>/<SNAPSHOT_DATE>/VolumeX/dir1/dir2/dir3/dir4/dir5/fileName

The maximum path length for the mount in the file restore interface is determined by the total number of combined characters for the objects that compose the path of the mount:

/tsmmount/FLR/<VMName>/<SNAPSHOT_DATE>/VolumeX
Maximum length of the mount path: 38 characters

<VMName>
Maximum length of the virtual machine name: 80 characters

fileName
Maximum length of the file name to be restored: 255 characters

dir1/dir2/dir3/dir4/dir5
Maximum length of the path to the file to be restored is determined by the following formula:

4096
- length of mount path (38)
- length of virtual machine name (80)
- length of file name to be restored (255)
____________________________
= Maximum length of the path to the file to be restored
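
For example, if the virtual machine name uses the maximum of 80 characters and the file name uses the maximum of 255 characters, the directory portion (dir1/dir2/...) can be at most 4096 - 38 - 80 - 255 = 3723 characters.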

If the total number of combined characters for the path of the mount exceeds 4096 characters, the file is skipped and the restore operation continues for other selected files
and directories. No message is displayed to identify files that are skipped.

Workaround: If a Linux virtual machine is included in file restore operations, make sure that the total number of combined characters for the path of the mount does not exceed 4096 characters.

 
Directory does not restore to alternate location on Linux virtual machine (internal reference #101712)
In this scenario, a directory is selected for a Restore to > Alternate Location operation. After the operation completes, the following error message displays for the directory:

"You do not have permission to restore the file to the destination folder."

This error occurs when either of the following conditions exist:
-The directory has the same name as a file that was selected for restore.
-The directory has the same name as a file that exists on the target virtual machine.

 
SUSE Linux Enterprise Server 12 Btrfs subvolumes are not supported (internal reference #99896)
By default, SUSE Linux Enterprise Server 12 uses Btrfs subvolumes for the root partition. During a file restore operation, the default subvolume is mounted as the root partition. However, Btrfs subvolumes such as /home, /usr, or /opt, are excluded from the snapshot and do not display in the file restore interface. As a result, Btrfs subvolumes are not supported for file restore operations.

 
A deleted Linux file system restores to the root directory (internal reference #97738)
In this file restore scenario, a Linux file system (on the guest virtual machine) is unmounted and the file system mount point is deleted. A subsequent attempt to restore the file system restores the file system data to the root ("/") file system. This issue occurs because the file system to restore is unmounted. As a result, only the mount point is recreated, and the data is restored to that directory under the root ("/") file system.

 
Linux SLES 11 mount proxy system cannot mount Btrfs or XFS file systems from a Linux SLES 12 guest virtual machine (internal reference #100670)
This environment consists of a SUSE Linux Enterprise Server 11 (Service Pack 3) mount proxy system and a SUSE Linux Enterprise Server 12 guest virtual machine. In this scenario, the SUSE Linux Enterprise Server 11 mount proxy system cannot mount files from a SUSE Linux Enterprise Server 12 Btrfs or XFS file system. The mount operation fails and the following message is displayed:

"An error occurred while loading the backup ... ."

 
Local user authentication is required for a virtual machine (internal reference #112473)
Make sure that the user who authenticates to the Linux virtual machine that contains the files to be restored is a local user. Authentication is not available through Windows domain, Lightweight Directory Access Protocol (LDAP), Kerberos, or other network authentication methods.


 
File restore operations from a local VVOL snapshot require a free loop device for each disk of the guest virtual machine (internal reference #133752)
Older Linux distributions have a limited number of default loop devices, and this can limit the number of virtual machines that can be mounted on the mount proxy system. For some distributions, this number is set to 8 or to 64. The number can theoretically be set to a maximum of 256 loop devices, but this requires a reboot of the machine and, for some Linux distributions, recompiling part of the kernel.

Installing the Linux mount proxy on an older Linux distribution (for example, SUSE 11) can limit the number of disks that can be mounted on the mount proxy for file restore VVOL operations. In this situation, the file restore operation is stopped and a cleanup is performed.

For example, if a machine has 8 loop devices and only 6 are free, it is not possible to mount a local snapshot of a machine with 7 or more disks.
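
To check how many loop devices exist and which of them are in use on the mount proxy, you can run, for example (a diagnostic sketch only):
  # List the loop device nodes that exist
  ls /dev/loop*
  # Show the loop devices that are currently in use
  losetup -a
  # Show the first unused loop device, if any
  losetup -f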

Workaround: Use a Linux distribution such as SUSE 12 or Red Hat 7.3 for the mount proxy. These distributions dynamically create the loop devices upon request.

 
File restore operations from a local VVOL snapshot create a temporary defunct dsmagent on the Linux mount proxy system (internal reference #129811)
For each mount of a snapshot of a VMDK, a defunct dsmagent is displayed in the process list of the mount proxy. This is due to incorrect handling inside the VMware library. The defunct dsmagent is no longer displayed after the main dsmagent has stopped.

 
File restore for Linux VM from local snapshot might fail with nbdssl (V8.1.0 only) (internal reference #140878)
Due to a VMware restriction, this operation might fail if the <ba client install dir>/lib64/libdiskLibPlugin.so library exists on the Linux mount proxy system and the vmvstortransport nbdssl option is used.
The dsmerror.log of the Linux mount proxy will contain the following error:

<date time> ANS9365E VMware vStorage API error for virtual machine 'VM name'.
IBM Spectrum Protect function name : VixMntapi_OpenDisks
IBM Spectrum Protect file : vmvddksdk.cpp
API return code : 6
API error message : The operation is not supported


Additionally, message ANS3187W is generated in the error log file of the Windows mount proxy.
The user of the file restore interface sees a message that an error occurred while the backup was being loaded.

Workaround: Remove <ba client install dir>/lib64/libdiskLibPlugin.so. Alternatively, use the vmvstortransport nbd option.
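
For the alternative workaround, a minimal sketch of the option as it might appear in the Linux data mover options file (the exact file and stanza depend on your configuration, for example the server stanza in dsm.sys):
  * Use the NBD transport instead of NBDSSL for vStorage API data movement
  VMVSTORTRANSPORT nbd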
 


Limitations related to Windows systems

Directory does not restore to alternate location on Windows virtual machine (internal reference #101733)
In this scenario, a directory is selected for a Restore to > Alternate Location operation. After the operation completes, the following error message displays for the directory:

"File not found during backup, Archive or Migrate processing. No file specification entered."

This error occurs when either of the following conditions exist:
-The directory has the same name as a file that was selected for restore, and the file is restored before the directory.
-The directory has the same name as a file that exists on the target virtual machine.

 
Microsoft Windows user account cannot access its own user folders (internal reference #102926)
User folders are a collection of folders that are created by the Windows operating system for each user account. These folders are typically in "C:\Users\<user account>". A Windows user account cannot access its own user folders in the file restore interface.

 
Directories that are restored to an alternate location by a Windows domain administrator become owned by the domain administrator (internal reference #102543)
In this scenario, a Windows domain administrator restores a directory and files that are owned by a Windows domain user, and Restore to > Alternate Location is selected. The restore operation completes successfully, and the Windows domain user retains ownership of the restored files. However, ownership of the restored directory changes to the Windows domain administrator.

 
Windows operating system identifies date and time stamp as a file type
During a file restore operation, when a file with the same name exists, the restored file's original modification date and time is added to the file name. For example, the file 1.txt can contain this date and time stamp:

1.2015-5-10-04-00.txt

However, a Windows operating system identifies the date and time stamp as a file type. For example, when you view the file 1.2015-5-10-04-00.txt in Windows Explorer, the "Type" column displays 2015-5-10-04-00.txt. The file format is not affected.

 
File restore operation for a Windows guest VM from local backups might fail in V8.1.4 and earlier releases (internal reference #149107)

When the Windows mount proxy and the Windows guest VM are on the same datacenter, but on a different host, the file restore operation might fail for local backups. This happens when the host on which the mount proxy is running does not have direct access to the vVol datastore. 

Workaround:
 - Ensure that the host on which the mount proxy is running has access to the vVol datastore
or
 - Move the mount proxy to the host on which the Windows guest VM is running
         
 

[{"Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSERB6","label":"IBM Spectrum Protect for Virtual Environments"},"ARM Category":[{"code":"a8m0z00000006kpAAA","label":"Virtual Environments (VE)"}],"ARM Case Number":"","Platform":[{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"7.1.3;7.1.4;7.1.6;7.1.8;8.1.0;8.1.10;8.1.11;8.1.2;8.1.4;8.1.6;8.1.7;8.1.8;8.1.9"}]

Document Information

Modified date:
03 March 2021

UID

swg21964753