Linux Kernel-based Virtual Machine (KVM) requirements and support

This topic describes the requirements and support for the Linux Kernel-based Virtual Machine (KVM) virtualization environment in IBM® Flex System Manager VMControl.

KVM requirements when deploying KVM with the Deploy compute node images task

You can use the Deploy compute node images task in the management software to deploy RHEL and KVM hypervisors to multiple X-Architecture compute nodes. If you use this method, some of the requirements that are listed in the following sections might be completed automatically. For example, the Deploy compute node images task automatically installs the KVM Platform Agent and sets up KVM hosts.

For more information about the Deploy compute node images task and the operating-system settings that it configures, see the section Using the Deploying compute node images task to deploy operating systems.

Requirements with NFS-based storage

  1. IBM Flex System Manager VMControl is activated.
  2. An NFS x86_64 Red Hat Enterprise Linux (RHEL) server is set up and configured. The following diagram shows the KVM virtualization environment with NFS storage.
    Figure 1. KVM virtualization environment with NFS storage
    1. At least one NFS export on the NFS server is defined:
      • For image and disk inventory to work, the export path must end in /images. For example: /share/kvm/images. Additionally, if you are not setting up additional security in your environment, you must use the no_root_squash export option. For example:
        $ cat /etc/exports
        /nfs/kvm/images 192.168.0.0/255.255.0.0 \
        (rw,no_root_squash,sync,no_subtree_check)
        If you cannot change your NFS export setup, and image files must be inventoried from an export path that ends in something other than /images, complete the following steps:
        1. In the file /opt/ibm/director/lwi/conf/overrides/USMIKernel.properties, add a line for the director.services.extendeddiscovery.nfs.suffix property. For example: director.services.extendeddiscovery.nfs.suffix=/img-kvm. This setting causes image files to be inventoried within NFS export paths that end in /img-kvm instead of the default, /images.
        2. Restart the IBM Flex System Manager after you add or change the USMIKernel.properties file.
      • For consistency, name image and disk files that are stored on NFS with a .dsk, .img, or .raw extension.
    2. Ensure that the NFS services are started. For example, run the command service nfs start.
    3. KVM Platform Agent is downloaded and installed. For instructions, see "Installing and Uninstalling the Platform Agent."
      Note: If you use the Deploy compute node images task to deploy RHEL and KVM, the KVM Platform Agent is installed automatically. See Using the Deploying compute node images task to deploy operating systems for more information.
    4. The NFS server is discovered, accessed, and inventoried by IBM Flex System Manager.
  3. The image repository is set up and meets all of the following requirements. The image repository is used for storing and deploying virtual appliances.
    1. Common Agent is installed on your image repository server. For instructions, see "Installing common agent."
    2. The shared NFS exported storage is mounted on the Image Repository server.
    3. The image repository server is discovered and inventory is collected.
    4. The image repository is created from VMControl. For instructions to create an image repository, see "Creating and discovering image repositories for KVM."
  4. One or more RHEL KVM hosts are set up and available:
    Note: If you use the Deploy compute node images task to deploy RHEL and KVM, KVM hosts are set up automatically. See Using the Deploying compute node images task to deploy operating systems for more information.
    1. KVM Platform Agent is downloaded and installed on the KVM hosts. For instructions, see "Installing and Uninstalling the KVM Platform Agent."
    2. KVM hosts are discovered, accessed, and inventoried from your IBM Flex System Manager.
    3. Storage is set up. To set up storage, right-click a KVM host, select Edit Host, and click Storage Pools. Configure a shared NFS storage pool for the NFS server export that you created, then click OK to create the storage pools.
Notes:
  • When you configure KVM hosts, specify the fully qualified name as the host name, for example, hostname.company.com. Use the hostname command on the host to determine the system name. If the host is not configured with its fully qualified host name, the IBM Key Exchange providers might fail to exchange SSH keys during relocation. Also, ensure that the host name and IP address for the target system are recorded correctly in the DNS records.
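The NFS export requirements above (an export path that ends in /images and the no_root_squash option) can be spot-checked with a small shell sketch. The helper function is hypothetical, and the sample export line is the example from the requirements; adjust the suffix if you configured director.services.extendeddiscovery.nfs.suffix.

```shell
# Hypothetical helper: check one /etc/exports line against the VMControl
# requirements above - the export path must end in /images (or your
# configured suffix) and must include the no_root_squash option.
check_export_line() {
    line="$1"
    suffix="${2:-/images}"
    case "$line" in
        *"$suffix "*no_root_squash*) echo "OK" ;;
        *) echo "CHECK: path suffix or no_root_squash missing" ;;
    esac
}

# The example export from the requirements passes the check:
check_export_line "/nfs/kvm/images 192.168.0.0/255.255.0.0(rw,no_root_squash,sync,no_subtree_check)"
# → OK
```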

Requirements with SAN storage

This figure shows an overview of a SAN environment. Refer to it as necessary to help you understand the steps that follow.
Figure 2. VMControl and KVM SAN storage configuration
  1. IBM Flex System Manager VMControl is activated.
  2. The Fibre Channel storage network is correctly cabled and configured with the appropriate Fibre Channel switches. KVM virtualization with VMControl supports only SAN storage over Fibre Channel. Typically, one of the fabric switches is configured with the zoning information. Additionally, VMControl requires that the Fibre Channel network has hard zoning enabled. For more information about zoning configuration, see the documentation for your Fibre Channel switch. For a list of supported Fibre Channel switches, see "Supported network devices."
  3. One or more RHEL KVM hosts are set up and available:
    • Ensure that the RHEL KVM host is connected to the Fibre Channel network with a supported adapter. For more information, see "Supported network devices."
    • The KVM Platform Agent is downloaded and installed. For information about where to download the KVM Platform Agent from and the steps to install, see "Installing and Uninstalling the KVM Platform Agent."
    • KVM hosts are discovered, accessed, and inventoried from your IBM Flex System Manager.
  4. The SAN storage controllers (also called storage subsystems) are configured and storage pools are set up with the storage space and RAID levels that you want for virtual disk images. VMControl and Storage Control do not provision these RAID storage pools for you.

    For information about the supported storage controllers with VMControl and KVM, see "Task support for storage products." Scroll to the VMControl Provisioning row in the tables.

    Notes:
    • When you relocate a virtual server from one host to another, there is a time when the server storage volumes are mapped to both hosts. Therefore, if live relocation might happen between KVM hosts and your SAN device allows volumes to be mapped to only a single entity at a time, you must configure a host group on the SAN that contains those KVM hosts.
    • It is best practice to create one host definition for each host. Each host definition must include all worldwide port names (WWPNs) for the host (or hosts) that it represents, even if some ports are not physically connected or active. This practice avoids the potential problem of mapping a single volume under different LUN IDs to the same host.

      For example, assume a KVM host has a Fibre Channel card with host ports WWPN1 and WWPN2. An IBM Storwize® V7000 storage subsystem defines host definition KVM_Host1 for that host. In this situation, the host definition must contain both WWPN1 and WWPN2.

  5. If you have a supported storage subsystem with a built-in provider, skip this step. If your SAN storage subsystem does not come with a built-in storage provider, configure a system with a supported version of the SMI-S proxy storage provider running on it. For more information about SMI-S providers, see "Managing SMI-S providers."
  6. A Fibre Channel switch provider is configured in the environment. This role can be handled by the Brocade SMI-S Agent or the Brocade Network Advisor. For more information about SMI-S providers, see "Managing SMI-S providers." For details about the Brocade Network Advisor, search for information in the Brocade Community Web page.
  7. Storage subsystems, storage pools, and the Fibre Channel switch fabric are discovered and inventoried by Flex System Manager for shared access from endpoints in the KVM environment. These endpoints include KVM hosts and image repository servers as shown in Figure 2.
    The following methods are the main ways to discover the environment:
    • SAN storage that is managed by Storage Control. In this case, there is no external IBM Tivoli® Storage Productivity Center server. Instead, there is an embedded Storage Control component that is running on the IBM Flex System Manager for storage inventory and management.
      1. Encryption keys are needed for some devices, such as the IBM System Storage® SAN Volume Controller and the IBM Storwize V7000. The encryption keys are used for discovery enablement and to enable FlashCopy®. If necessary, generate an encryption key file in OpenSSH format for your SAN device and store this file on your Flex System Manager server. For instructions to generate an encryption key file for your storage, see the documentation for your SAN storage device.
      2. Use the mkdatasource command to define your storage data source. The mkdatasource command enables SAN storage discovery and inventory collection through Storage Control.
        Examples:
        • Example command for an IBM Storwize V7000 controller:
          smcli mkdatasource -c svc -f /opt/ibm/storwize_tpc.ppk -v V7000 -i \
          192.168.42.165
        The command has different syntax for other devices. For more information about the mkdatasource command, see the topic "mkdatasource."
      3. For a Brocade Fibre Channel network switch, use the mkdatasource command to define the switch data source. The mkdatasource command enables switch fabric discovery and inventory collection through Storage Control. This example command targets a Brocade SMI Agent switch provider:
        smcli mkdatasource -c fabric -i 192.168.31.216 -t https -p 5989 -u admin -w admin -n /interop
        The mkdatasource commands associate the data sources with a managed Farm resource that Storage Control uses. The resource type is Farm and the name might be the Flex System Manager operating system name. For more information about the mkdatasource command, see the topic "mkdatasource."
        Note: For a QLogic Fibre Channel switch, the mkdatasource command is not required. For a QLogic switch, discover the switch with its IP address, request access to unlock the switch, and collect inventory for the switch.
      4. Use resource explorer to find the farm-managed resource and collect inventory on it. The result is that your storage resources and Fibre Channel network switch fabric is discovered and inventoried. Wait for this task to complete.
        Note: If you have many zones or zone groups defined on a fabric switch, the inventory collection task might show an error after the default Flex System Manager timeout period expires. However, zone inventory collection continues to run in the background.
    • SAN storage that is managed by an external IBM Tivoli Storage Productivity Center server.
      1. See the IBM Tivoli Storage Productivity Center information center for instructions to install and use IBM Tivoli Storage Productivity Center.
      2. After your IBM Tivoli Storage Productivity Center server is set up and managing your storage devices and Fibre Channel network switches, you can discover it from Flex System Manager. You must create an advanced discovery profile for the IBM Tivoli Storage Productivity Center server. For instructions to create discovery profiles, see "Creating a discovery profile."
      3. Discover the external IBM Tivoli Storage Productivity Center server through the discovery profile you created. This process creates a farm resource.
      4. Use resource explorer to find the farm-managed resource and collect inventory on it. This process discovers and inventories the storage resources that the IBM Tivoli Storage Productivity Center server is managing. Wait for this task to complete.
        Note: If you have many zones or zone groups defined on a fabric switch, the inventory collection task might show an error after the default Flex System Manager timeout period expires. However, zone inventory collection continues to run in the background.
    • SAN storage that is managed by Flex System Manager and VMControl alone. Perform the following steps on your Flex System Manager server.
      Note: This method applies only if you have a supported storage subsystem without a built-in provider, where you can discover and inventory these SAN subsystems without using Storage Control. However, this method is not the recommended method.
      1. If the SMI-S storage and switch providers use default SMI-S ports, perform basic discovery of the LSI SMI-S storage provider and the Brocade SMI Agent switch provider. This method generally applies if you configured a dedicated server for each type of provider. For instructions, see "Running Discovery and unlocking storage devices using SMI-S providers."

        However, if you combine the SMI-S providers on the same server or combine either with another CIM provider server, you must create an SMI-S discovery profile for each provider. You must specify the provider type and SMI-S port in the discovery profile. Run discovery against each discovery profile you created. For instructions, see "Running Direct Connection discovery and unlocking storage devices using SMI-S providers."

      2. Request access to the discovered SMI-S storage and switch providers. If you provided the user name and password in an advanced discovery profile definition, access is automatically requested.
        When the storage provider is unlocked, the provider-handled SAN storage subsystem is discovered. Likewise, when the Fibre Channel switch provider is unlocked, the Fibre Channel switches in your fabric that are handled by the provider or Brocade Network Advisor are discovered.
        Note: A separate SMI-S switch provider is not required for a QLogic switch.
      3. Select the SAN storage subsystem that was discovered and collect inventory on it.
      4. Select all the Fibre Channel switch managed resources that were discovered and collect inventory on them. The inventory collection task might take a long time.
        Note: If you have many switches, zones, or zone groups defined on a fabric switch, the inventory collection task might show an error after the default Flex System Manager timeout period expires. However, zone inventory collection continues to run in the background.
  8. The image repository is set up and meets all of the following requirements. The image repository is used for storing and deploying virtual appliances.
    1. The image repository server is connected to the Fibre Channel network with a supported Fibre Channel HBA. For more information about adapters, see "Supported network devices".
    2. Common Agent is installed on your image repository server. For instructions, see "Installing common agent."
    3. The image repository server is discovered and inventory is collected on it.
    4. The image repository is created from VMControl. For instructions to create an image repository, see "Creating and discovering image repositories for KVM".
  9. Verify that Flex System Manager and VMControl can manage the environment.
    • Run dumpstcfg to see the storage configuration information.

      Example output:

      Host Accessible Containers
      --------------------------
      NAME: STORAGE SUBSYSTEM/POOL
      IBM Host01:   Storwize V7000-2076/RAID5_Pool_KVM
        Storwize V7000-2076/RAID0_Pool_800GB

      IBM Host01 is a KVM host, Storwize V7000-2076 is the storage subsystem, and the KVM host can access both RAID5_Pool_KVM and RAID0_Pool_800GB storage pools. The output indicates that inventory collection has modeled connectivity from the host to the storage correctly.

      Additionally, verify that the image repository server can access the SAN storage containers in the same way. For information about the dumpstcfg command, see the topic "dumpstcfg."

    • Run testluncreate to verify that the SAN storage configuration is complete. The command tries to allocate a volume on a subsystem and storage pool, and then attach it to a host. This host might be your image repository server. For information about this command, see the topic "testluncreate".
    • If dumpstcfg or testluncreate shows problems, there might be a configuration problem. Correct the problem and collect inventory again on each endpoint, farm, storage, and switch resources.

Supported hosts, Linux versions, and firmware versions

KVM virtualized environments must run on X-Architecture compute nodes.

You must use the following Linux versions for the KVM virtualization environment.

  • IBM Flex System Manager: You can use any IBM Flex System Manager with IBM Flex System Manager VMControl activated.
  • Hosts require Red Hat Enterprise Linux version 6.0, 6.1, 6.2, 6.3, or 6.4 with KVM installed. For more information about installing KVM on Red Hat Enterprise Linux 6.0, 6.1, 6.2, 6.3, or 6.4, see the "Red Hat Enterprise Linux Virtualization Guide."

Supported networks

VMControl supports the following network configurations for the KVM hypervisor:
  • Virtual Ethernet Bridging (VEB)
  • Virtual Ethernet Port Aggregator (VEPA) network (Requires IBM Flex System Manager Network Control and that the host is in a network system pool)
  • Limited support for KVM hypervisor networks
Notes:
  • Use paravirtualized (virtio) drivers for enhanced performance.
  • Use Virtio and e1000 model configurations for virtual server network adapters.

Supported storage

Image repository and virtual disk storage options include the following types:

  • NFS version 3 server that is running on RHEL version 6.2 or 6.3.
  • NFS version 3 server that is running on RHEL version 6.0, 6.1, 6.2, 6.3, or 6.4 with KVM installed.
  • Supported SAN devices. See "Task support for storage products" to determine support.

Supported tasks

In the KVM virtualization environment, you can complete these tasks:

  • Create and delete NFS storage pools on a host
  • Create and delete NFS or SAN virtual disks
  • Suspend or resume virtual servers and workloads (without release of resources)
  • Create, edit, and delete virtual servers
  • Power® operations for virtual servers
  • Relocate virtual servers
  • Turn maintenance mode on and off for hosts that are in server system pools
  • Import a virtual appliance package that contains one or more raw disk images
  • Capture a workload or virtual server into a virtual appliance
  • Deploy a virtual appliance package to a new virtual server with hardware and product customizations
  • Deploy a virtual appliance package to an existing virtual server with adequate resources.
  • Start, stop, and edit a workload
  • Create, edit, and delete server system pools
  • Create, edit, and delete network system pools (If you are using IBM Flex System Manager Network Control with VMControl)
  • Adjust the virtualization monitor polling interval for KVM using the KvmPlatformPollingInterval parameter

KVM requirements

In addition to the packages required by the KVM platform agent, the genisoimage.x86_64 package must also be installed for VMControl support. See "Installing and uninstalling KVM Platform Agent" to determine other necessary packages.

Tip: These packages might be available from your installation software.

You must update the firewall to allow INPUT network traffic based on certain rules. To update the firewall, complete the following steps:

  1. Insert the following rules before any reject rules that might exist on the INPUT chain:
    1. iptables -I INPUT -p tcp --dport 427 --syn -j ACCEPT
    2. iptables -I INPUT -p udp --dport 427 -j ACCEPT
    3. iptables -I INPUT -p tcp --dport 22 --syn -j ACCEPT
    4. iptables -I INPUT -p tcp --dport 15988 --syn -j ACCEPT
    5. iptables -I INPUT -p tcp --dport 15989 --syn -j ACCEPT
  2. To save the firewall settings, run the following command: service iptables save
  3. Update the firewall rules to allow Virtual Network Computing (VNC) connectivity to the virtual server (VS) console. Add the following rule before any reject rules that might exist on the INPUT chain: -A INPUT -p tcp -m tcp --dport 5900:5999 -j ACCEPT. This rule allows 100 virtual servers for the host. Update the port numbers if more virtual servers are deployed on the host.
  4. Update the firewall rules to allow virtual server relocation between hosts. Add the following rule before any reject rules that might exist on the INPUT chain: -A INPUT -p tcp --dport 49152:49215 -j ACCEPT.
  5. Firewall reject rules might restrict virtual server network traffic, so update the firewall rules to ensure that traffic on bridges that are used by virtual servers is allowed. For example, if a virtual server uses bridge br500, you can insert the following rule ahead of any reject rules to allow unrestricted network traffic on bridge br500: -A FORWARD -i br500 -j ACCEPT.
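The VNC rule in step 3 covers ports 5900:5999, which corresponds to 100 virtual servers starting at the base VNC port 5900. The arithmetic for a different virtual server count can be sketched as follows; the helper name is illustrative, and the assumption that consoles are assigned consecutively from port 5900 is based on the 5900:5999 example above.

```shell
# Hypothetical helper: compute the VNC port range to open for a given
# number of virtual servers, assuming consoles start at port 5900.
vnc_port_range() {
    count="$1"
    echo "5900:$((5900 + count - 1))"
}

# Build the iptables rule for 100 virtual servers (matches step 3):
echo "-A INPUT -p tcp -m tcp --dport $(vnc_port_range 100) -j ACCEPT"
# → -A INPUT -p tcp -m tcp --dport 5900:5999 -j ACCEPT
```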
Notes:
  • The SSH service must be configured and running on the KVM host. This configuration ensures that an SSH remote service access point for port 22 gets created for each host in addition to the CIM RSAP on ports 15988 and 15989.
  • When a SAN storage solution is used, at least several megabytes of free file system space are required under /var/opt/ibm and /var/lib/libvirt on the KVM host. The user that requests access to the host from IBM Flex System Manager must have authority to write to these directories.
  • For the installation of a Windows guest operating system, if the boot disk is of the VirtIO type, apply the Windows VirtIO drivers for KVM.
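The first note above can be spot-checked by scanning a netstat -tln style listing for the ports that the KVM host must expose: SSH on port 22 and CIM on ports 15988 and 15989. The helper function and the sample listing below are illustrative only.

```shell
# Hypothetical helper: scan a netstat-style listing for required ports
# (SSH on 22, CIM on 15988 and 15989, per the note above).
check_ports() {
    listing="$1"
    shift
    for port in "$@"; do
        case "$listing" in
            *":$port "*) echo "port $port listening" ;;
            *) echo "port $port NOT listening" ;;
        esac
    done
}

# Sample listing with SSH up but the CIM ports not yet started:
sample="tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN"
check_ports "$sample" 22 15988 15989
```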

Restrictions

The following limitations and restrictions apply to using the KVM virtualization environment:
  • Restrictions when you manage X-Architecture compute nodes. The following templates cannot be used to configure the operating system:
    • SNMP Agent Configuration template
    • Asset ID template
    A user account cannot be copied and used to create a user. Asset information cannot be configured for a managed system.
  • Do not use the RHEL Virtual Machine Manager or other means to create virtual servers or to manage them directly on a KVM Managed System. IBM Flex System Manager does not receive events when operations are performed outside of VMControl in these cases. Results might vary if these external interfaces are used.
  • Storage system pools are not supported in VMControl.
  • Only NFS version 3 mounts are supported by VMControl. If both your NFS server and NFS client (which hosts the image repository) support version 4, you might need to either use the nfsvers=3 mount option to downgrade the mount or configure your NFS server to serve only version 3 mounts. To disable NFS version 4, complete the following steps:
    1. Find the NFS version 4 disablement line in the /etc/sysconfig/nfs file and uncomment it as follows:
      # Turn off v4 protocol support
      RPCNFSDARGS="-N 4" 
    2. Restart the NFS service by issuing the following command: service nfs --full-restart
  • Red Hat Enterprise Linux Version 5.5 is not supported for the image repository server that hosts SAN repositories.
  • Network-attached storage (NAS) devices like the IBM NSeries family are not supported as an NFS server in VMControl.
  • For SAN storage, VMControl does not support Device-Mapper Multipath configurations on KVM hosts or the image repository server. Most SAN storage environments are considered multipath I/O enabled because multiple controllers can manage the same storage and there might be multiple cabled ports on the host Fibre Channel card.

    Although KVM can function in these environments, if a failure occurs, VMControl operations that are in progress or running virtual servers might encounter offline storage. This situation might happen if a SAN controller fails, for example.

  • When you use the Linux or Windows VMControl-shipped activation engine, you can activate these guest operating systems:
    • SUSE Linux Enterprise Server (SLES) 10 Service Pack 1, Service Pack 2, Service Pack 3, and Service Pack 4
    • SLES 11 Service Pack 1, Service Pack 2, and Service Pack 3
    • SUSE Linux Enterprise Server 12
    • Red Hat Enterprise Linux versions 5.x, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6 and 7.0.
    • Windows 2008 Server Editions, 64-bit
    Third-party activation engines might support a different set of guest operating systems.
    Note: Discovery of a guest operating system is not required for capture. The capture wizard displays OS types that are supported for capture, with the following restrictions:
    1. The OS type selection wizard is displayed only if the OS type (determined through discovery, or inferred from the virtual appliance that the virtual server was deployed from) is not supported by the activation engine or is unknown.
    2. The user selection from the OS type panel determines the OVF product customization properties to include in the virtual appliance definition. Therefore, a selection of None or Unknown results in no product customization parameters.
    3. Windows 2008 is the default selection on the OS type selection panel if the OS type is an unsupported Windows OS. Linux is selected if the OS type is an unsupported Linux OS. Otherwise, the default is None.
    4. If no product customization parameters are available, then the transport is set to null.
    5. The OS type that is selected from the OS type selection panel is inserted into the OVF envelope as the OS type only if the OS type for the virtual server is not available. If it is available, the OS type in the database for the virtual server is inserted instead. This handles the case in which the discovered OS type is not supported by the activation engine.
  • When a new Linux Kernel-based Virtual Machine (KVM) file-backed disk (non-SAN) is requested and is larger than 65,535 MB, the actual capacity of the newly created disk might be larger because the size is rounded up to the next GB.
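The rounding behavior in the last restriction can be sketched as follows. The helper name is illustrative, and 1 GB = 1024 MB is an assumption about the unit VMControl uses.

```shell
# Hypothetical sketch of the file-backed disk rounding described above:
# requests larger than 65,535 MB are rounded up to the next full GB.
effective_disk_mb() {
    requested="$1"
    if [ "$requested" -gt 65535 ]; then
        echo $(( (requested + 1023) / 1024 * 1024 ))   # ceiling to a whole GB
    else
        echo "$requested"
    fi
}

effective_disk_mb 65535    # → 65535 (no rounding at or below the threshold)
effective_disk_mb 70000    # → 70656 (rounded up to 69 GB)
```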