Configuring the NVMe-oF initiator for VMware ESXi

Configure the NVMe-oF initiator for VMware vSphere Hypervisor (ESXi). You can set up a VMware ESXi host as an NVMe/TCP initiator.

About this task

NVMe/TCP is supported on VMware vSphere Hypervisor (ESXi) 7.0U3 or later.
Note: The NVMe-oF gateway supports VMware vSphere APIs for Array Integration (VAAI).
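
To confirm that the host meets the version requirement, you can query the version from the ESXi shell. One way to check (output formatting varies by release):
    esxcli system version get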

Before you begin

  • A VMware ESXi host running VMware vSphere Hypervisor (ESXi) 7.0U3 or later.
  • A Ceph NVMe-oF gateway deployed.
  • A healthy IBM Storage Ceph cluster with a ready ceph-nvmeof configuration.
  • A subsystem defined within the gateway. For more information, see Defining an NVMe-oF subsystem.
  • An NVMe/TCP adapter configured:
    • Enable NVMe/TCP on a physical network interface controller (NIC).
       esxcli nvme fabrics enable --protocol TCP --device vmnicN
      Replace N with the number of the NIC.
    • Tag a VMkernel NIC to permit NVMe/TCP traffic.
      esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
      Replace N with the ID of the VMkernel adapter. A verification sketch follows this list.
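
To verify these prerequisites from the ESXi shell, you can check the tags on the VMkernel adapter and confirm that an NVMe/TCP adapter exists. A minimal sketch, assuming vmk1 is the tagged VMkernel adapter:
      # NVMeTCP should appear in the list of tags (vmk1 is a placeholder).
      esxcli network ip interface tag get -i vmk1
      # An NVMe/TCP adapter (vmhbaN) should be listed for the enabled NIC.
      esxcli nvme adapter list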

Procedure

Configuring the VMware ESXi host for NVMe/TCP transport includes discovering the NVMe/TCP targets and connecting to them.

  1. List the NVMe-oF adapters.
    esxcli nvme adapter list
    For example,
    [root@host01:~] esxcli nvme adapter list
    Adapter  Adapter Qualified Name           Transport Type  Driver     Associated Devices
    -------  -------------------------------- --------------  ---------  ------------------
    vmhba64  aqn:nvmetcp:ac-1f-6b-0a-18-74-T  TCP             nvmetcp    vmnic0
    
  2. Discover the NVMe-oF gateway subsystems.
    esxcli nvme fabrics discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
    For example,
    [root@host01:~] esxcli nvme fabrics discover -a vmhba64 -i 10.0.211.196 -p 4420
    
    Transport Type Address Family Subsystem Type Controller ID Admin Queue Max Size Transport Address Transport Service ID Subsystem NQN              Connected
    -------------- -------------- -------------- ------------- -------------------- ----------------- -------------------- -------------------------- ---------
    TCP            IPv4           NVM            65535         128                  10.0.211.196      4420                 nqn.2016-06.io.spdk:cnode1 false
  3. Connect to the NVMe-oF gateway subsystem. To remove a connection later, see the disconnect sketch after this procedure.
    esxcli nvme fabrics connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
    For example,
    [root@host01:~] esxcli nvme fabrics connect -a vmhba64 -i 10.0.211.196 -p 4420 -s nqn.2016-06.io.spdk:cnode1
  4. List the NVMe/TCP controllers.
    esxcli nvme controller list
    For example,
    [root@host01:~] esxcli nvme controller list
    Name                                                                                        Controller Number  Adapter  Transport Type  Is Online
    ------------------------------------------------------------------------------------------  -----------------  -------  --------------  ---------
    nqn.2016-06.io.spdk:cnode1#vmhba64#10.0.211.196:4420                                                      301  vmhba64  TCP                  true
  5. List the NVMe-oF namespaces in the subsystem.
    esxcli nvme namespace list
    For example,
    [root@host01:~] esxcli nvme namespace list
    Name                                  Controller Number  Namespace ID  Block Size  Capacity in MB
    ------------------------------------  -----------------  ------------  ----------  --------------
    eui.0100000001000000e4d25c00001ae214                256             1         512          953869
    eui.01abc123def456g7e4d25c00001ae214                301             1         512             500
    eui.02abc123def456g7e4d25c00001ae215                301             2         512             500
    eui.03abc123def456g7e4d25c00001ae216                301             3         512             500
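
If you connect to the wrong gateway or need to reconfigure, the fabrics interface also supports disconnecting. A minimal sketch, reusing the adapter and subsystem NQN from the examples above:
    # Disconnect the controller for this subsystem from the given adapter.
    esxcli nvme fabrics disconnect -a vmhba64 -s nqn.2016-06.io.spdk:cnode1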

What to do next

Verify that the initiator is set up correctly. You can use the vSphere Client, as described in the following steps, or the ESXi shell, as shown in the sketch after them.
  1. From the vSphere Client, go to the ESXi host.
  2. On the Storage page, go to the Devices tab.
  3. Verify that the NVMe/TCP disks are listed in the table.
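
Alternatively, you can verify from the ESXi shell. A minimal sketch; the filter pattern is an assumption about how the device display names are labeled in your release:
  # List all storage devices known to the host and filter for NVMe entries.
  esxcli storage core device list | grep -i nvme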