HyperSwap configuration by using interswitch links

You can use interswitch links (ISLs) in paths between nodes to configure an IBM® HyperSwap® topology system. If the cable distance between the two production sites exceeds 100 km, performance impacts can result.

Using ISLs for node-to-node communication requires configuring two separate SANs, each with two redundant fabrics:
  1. Configure one SAN with two separate fabrics so that it is dedicated for node-to-node communication. This SAN is referred to as a private SAN. This private SAN can be used by more than one HyperSwap topology system if it has enough bandwidth for all these systems. For more information, see Additional bandwidth requirements.
  2. Configure one SAN with two separate fabrics so that it is dedicated for host attachment; storage system attachment; and Global Mirror, Metro Mirror, or IBM HyperSwap operations. This SAN is referred to as a public SAN.

The network hardware that carries the ISLs must maintain the physical separation and independence of the two redundant fabrics. For example, do not connect the two fabrics into a single dark fibre link. If there are two dark fibre links, dedicate one link for each fabric. Do not cross-connect the two fabrics onto each of the two links.
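
The following Python sketch is illustrative only; the fabric and link names are hypothetical. It models the layout that is described above: a private SAN and a public SAN, each with two redundant fabrics, and one dedicated dark fibre link per fabric between the production sites. The check flags any physical link that carries more than one fabric.

    from dataclasses import dataclass

    @dataclass
    class DarkFibreLink:
        name: str
        fabrics: set[str]   # names of the fabrics carried over this physical link

    def separation_violations(links: list[DarkFibreLink]) -> list[str]:
        """A physical intersite link must carry exactly one fabric; anything
        else breaks the required separation of the redundant fabrics."""
        return [f"{link.name} carries {sorted(link.fabrics)}"
                for link in links if len(link.fabrics) != 1]

    # Hypothetical layout: private and public SANs, two fabrics each, and one
    # dedicated dark fibre link per fabric between the production sites.
    links = [
        DarkFibreLink("dark_fibre_1", {"private_A"}),
        DarkFibreLink("dark_fibre_2", {"private_B"}),
        DarkFibreLink("dark_fibre_3", {"public_A"}),
        DarkFibreLink("dark_fibre_4", {"public_B"}),
    ]
    print(separation_violations(links))   # [] -- each link carries one fabric

    links[0].fabrics.add("private_B")     # cross-connect both private fabrics
    print(separation_violations(links))   # flags dark_fibre_1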

Rules for HyperSwap configurations that use ISLs

In a HyperSwap configuration, a site is defined as an independent failure domain. Different failure domains protect against different types of faults. If the system is configured properly, it continues to operate after the loss of one failure domain.

However, the system does not guarantee that it can survive the failure of two sites.

  • For every storage system, create one zone that contains ports from every control enclosure and all storage system ports, unless the zoning guidelines for that storage system state otherwise. However, do not connect a storage system in one site directly to a switched fabric in the other site. Instead, connect each storage system only to the switched fabrics in its local site. (In HyperSwap configurations with ISLs in the control enclosure-to-control enclosure paths, these fabrics belong to the public SAN.)

    Storage systems that are assigned to one of the production sites (site 1 or site 2) must be visible to the control enclosures in that site. Storage systems at site 3, or storage systems that have no site defined, must be zoned to all control enclosures. A sketch of this visibility rule follows this list.

  • Each control enclosure must have direct Fibre Channel or Ethernet connections to at least two fabrics at its local site: one public fabric and one private fabric.
  • Some service actions require the ability to perform actions on the front panel or through the technician port of all control enclosures in a system within a short time window. If you use systems in HyperSwap configurations, you must assist the support engineer and provide communication technology to coordinate these actions between the sites.
  • The storage system at the third site must support extended quorum disks. This information is available in the interoperability matrixes at the following support website:
    www.ibm.com/support
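
The following minimal Python sketch illustrates the zoning visibility rule from the first item in this list; the enclosure names and site assignments are hypothetical. Storage systems at production site 1 or 2 are zoned only to the control enclosures at that site; storage systems at site 3, or with no site defined, are zoned to all control enclosures.

    def required_zoning(storage_site, enclosure_sites):
        """Return the control enclosures that a storage system must be zoned to.

        storage_site    -- 1 or 2 (production site), 3 (quorum site), or None
        enclosure_sites -- dict that maps control enclosure name -> site (1 or 2)
        """
        if storage_site in (1, 2):
            # Production-site storage: visible only to enclosures at the same site.
            return [enc for enc, site in enclosure_sites.items() if site == storage_site]
        # Site 3 storage, or storage with no site defined: zone to all enclosures.
        return list(enclosure_sites)

    enclosures = {"io_grp0_enclosure": 1, "io_grp1_enclosure": 2}   # hypothetical system
    print(required_zoning(1, enclosures))     # ['io_grp0_enclosure']
    print(required_zoning(3, enclosures))     # both enclosures
    print(required_zoning(None, enclosures))  # both enclosures
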
Each SAN consists of at least one fabric that spans both production sites. At least one fabric of the public SAN also includes the quorum site. You can use different approaches to configure the private and public SANs:
  • Use dedicated switches for each SAN.
  • Use separate virtual fabrics or virtual SANs for each SAN.
    Note: ISLs must not be shared between private and public virtual fabrics.
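
As a brief sketch of the preceding note, assume a hypothetical inventory that maps each ISL to the virtual fabrics (VSANs) that are routed over it; an ISL that appears in both a private and a public virtual fabric violates the rule.

    # Hypothetical inventory: ISL name -> virtual fabrics routed over it.
    isl_membership = {
        "isl_1": {"private_vf_A"},
        "isl_2": {"public_vf_A"},
        "isl_3": {"private_vf_B", "public_vf_B"},   # violation: shared ISL
    }

    def shared_isls(membership):
        """Return ISLs that carry both a private and a public virtual fabric."""
        return [isl for isl, vfs in membership.items()
                if any(vf.startswith("private") for vf in vfs)
                and any(vf.startswith("public") for vf in vfs)]

    print(shared_isls(isl_membership))   # ['isl_3']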

To implement the private and public SANs with dedicated switches, you can use any combination of supported switches. For the list of supported switches and for the supported switch partitioning and virtual fabric options, see the interoperability website:

www.ibm.com/support

As with every managed disk, all control enclosures must access the quorum disk through the same storage system ports. If a storage system with active/passive controllers (such as IBM DS3000, IBM DS4000®, IBM DS5000, or IBM FAStT) is attached to a fabric, both internal controllers of that storage system must be connected to this fabric.

You can extend the distance to the quorum site by using FCIP, passive WDM, or active WDM. The connections must be reliable, and it is strictly required that the links from both production sites to the quorum site are independent and do not share any long-distance equipment. FCIP links are also supported for ISLs between the two production sites in the public and private SANs. A private SAN and a public SAN can be routed across the same FCIP link; however, to ensure bandwidth for the private SAN, it is typically necessary to configure FCIP tunnels. Similarly, it is permissible to multiplex multiple ISLs across a single DWDM link.

Note: FCIP routers and active WDM devices that are used only for control enclosure-to-quorum communication do not require UPS protection.

A HyperSwap configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although the system can use other types of storage systems to provide quorum disks, access to these quorum disks is always through a single path.

Additional bandwidth requirements

Intersite communication between I/O groups requires bandwidth in the private SAN equal to the peak write bandwidth, summed over all hosts. Additionally, you need intersite bandwidth in the public SAN for host-to-node communication if a host accesses nodes at the other site, for example, after a failure of the host's local I/O group, or to access volumes that do not use the HyperSwap function.

The guideline of peak write bandwidth for the private SAN is the minimum bandwidth that is supported for HyperSwap operations. In some non-optimal configurations, additional bandwidth is required to avoid potential performance issues. For example, if hosts at different sites share a volume, the private SAN needs bandwidth equal to two times the peak write bandwidth plus the peak read bandwidth.
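
As a worked illustration of these guidelines, the following Python sketch uses hypothetical numbers: the baseline guideline is the peak write bandwidth summed over all hosts, and it grows to two times the peak write bandwidth plus the peak read bandwidth when hosts at different sites share a volume.

    def private_san_bandwidth(peak_write, peak_read=0.0, shared_volumes_across_sites=False):
        """Minimum intersite bandwidth for the private SAN (same unit as the inputs).

        Baseline guideline: the sum of the peak write bandwidth of all hosts.
        If hosts at different sites share a volume, the guideline becomes
        2 x peak write bandwidth + peak read bandwidth.
        """
        if shared_volumes_across_sites:
            return 2 * peak_write + peak_read
        return peak_write

    # Hypothetical workload: 400 MB/s peak writes and 900 MB/s peak reads, summed over all hosts.
    print(private_san_bandwidth(400))              # 400 -- baseline guideline
    print(private_san_bandwidth(400, 900, True))   # 1700 -- hosts at different sites share a volume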