Comparison of SD and NSD

The first figure shows the average ICN transaction response times in milliseconds at a high load level (1000 virtual ICN users) for the SD and NSD cluster configurations.

Figure 1. Average ICN response times for Spectrum Scale cluster configurations
As shown in Figure 1:
  • The ICN transaction response time was around 30% higher for NSD than for SD. This result supports the general finding that the SD cluster configuration provides good performance for smaller Spectrum Scale clusters.
  • Looking at the hypervisor CPU load (z/VM total load) at the highest workload level (1000 virtual ICN users), NSD added an overhead of 1.7 IFL processors for the complete SUT (see the short calculation after this list).
  • For SD, the complete SUT consumed around 24 IFL processors.
  • For NSD, the complete SUT consumed around 25.7 IFL processors.
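The relative size of this overhead follows directly from the measured IFL consumption; the following minimal calculation simply restates the numbers from the list above:

```python
# Measured z/VM total CPU load for the complete SUT at 1000 virtual ICN users
sd_ifls = 24.0    # SD cluster configuration
nsd_ifls = 25.7   # NSD cluster configuration

overhead_ifls = nsd_ifls - sd_ifls            # 1.7 IFL processors
overhead_pct = overhead_ifls / sd_ifls * 100  # about 7% more hypervisor CPU

print(f"NSD overhead: {overhead_ifls:.1f} IFLs ({overhead_pct:.1f}%)")
```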
Figure 2 shows how the CPU costs varied for the cluster configurations.
Figure 2. CPU costs for Spectrum Scale cluster configurations

As shown in Figure 2, the NSD cluster configuration introduced some overhead due to network block I/O and the additional control information flow between the NSD clients and servers. Implementing NSD in the SUT added three z/VM virtual machines acting as NSD servers.
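If it is unclear which disks are served by which NSD servers in such a configuration, the Spectrum Scale mmlsnsd command lists the NSDs together with their file system and server list. The snippet below is only a sketch of how that check could be scripted; it assumes it runs on a cluster node with the Spectrum Scale commands in the PATH:

```python
import subprocess

# List all NSDs with their file system and NSD server list (Spectrum Scale command).
# Must be run on a cluster node with sufficient privileges.
result = subprocess.run(["mmlsnsd"], capture_output=True, text=True, check=True)
print(result.stdout)
```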

Figure 3 shows how the network block I/O overhead for NSD became apparent in the Linux network packet rate metrics. It plots the stacked Linux network packet rates (received [rx] + transmitted [tx]) for a single ECM node (NSD client) out of four; the other three ECM nodes showed the same network packet rates.

Figure 3. ECM node (NSD client) Linux network packet rates for SD and NSD
Specifically in Figure 3:
  • For NSD, the Linux network packet rate rose sharply from about 40,000 packets/sec to over 70,000 packets/sec, an increase of roughly 74%.
  • The rx network packet rate (inbound traffic) was nearly four times as high as the SD rate.
  • This increase in incoming network packets can be explained by the NSD network block I/O performed to read ECM content data from the NSD servers. A minimal sketch for sampling these packet rates on a Linux node follows this list.
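The rx and tx packet rates plotted in Figure 3 correspond to the standard Linux interface counters in /proc/net/dev. The following sketch shows one way to sample them; the interface name eth1 is a placeholder and not the actual device name used in the SUT:

```python
import time

def packet_counters(iface):
    """Return (rx_packets, tx_packets) for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                # /proc/net/dev layout: 8 rx fields followed by 8 tx fields;
                # index 1 is rx packets, index 9 is tx packets.
                return int(fields[1]), int(fields[9])
    raise ValueError(f"interface {iface} not found")

iface = "eth1"     # placeholder for the Spectrum Scale network interface
interval = 10      # sampling interval in seconds

rx0, tx0 = packet_counters(iface)
time.sleep(interval)
rx1, tx1 = packet_counters(iface)

rx_rate = (rx1 - rx0) / interval
tx_rate = (tx1 - tx0) / interval
print(f"{iface}: rx {rx_rate:.0f} pkt/s, tx {tx_rate:.0f} pkt/s, "
      f"stacked {rx_rate + tx_rate:.0f} pkt/s")
```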
Tuning considerations:
  • Because of the increase in network I/O for NSD, it was worth looking at other possibilities to tune the network.
  • The measurement series for the above Spectrum Scale cluster configuration comparison was done with the default network maximum transmission unit (MTU) of 1492 bytes.
  • If the bandwidth of the Spectrum Scale network is not sufficient, it can quickly become a bottleneck for an NSD cluster. Therefore, a common Spectrum Scale recommendation is to consider a larger MTU (jumbo frames) for the Spectrum Scale network. This consideration is discussed in more detail in the next chapter; a quick way to check the currently configured MTU is sketched below.
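As a starting point before any tuning, the MTU currently configured on the Spectrum Scale network interface can be read from sysfs. Again, eth1 is only a placeholder interface name, and the jumbo-frame values mentioned in the comment are typical examples rather than values taken from this measurement series:

```python
# Read the configured MTU for the Spectrum Scale network interface.
# 1492 bytes is the default used in the measurements above; jumbo frames
# would typically show a larger value such as 8992 or 9000.
iface = "eth1"   # placeholder for the Spectrum Scale network interface

with open(f"/sys/class/net/{iface}/mtu") as f:
    mtu = int(f.read().strip())

frame_type = "jumbo frames" if mtu > 1500 else "standard frames"
print(f"{iface}: MTU {mtu} bytes ({frame_type})")
```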