Release slots on blocked hosts

Typically, IBM® Spectrum Symphony does not release any slots remaining on a blocked host, so that running, pending, and new tasks can continue to use those slots. You can, however, enable IBM Spectrum Symphony to gradually release the slots of a blocked host back to EGO as running tasks on the host complete.

Releasing slots on blocked hosts has the following default configuration:
  • Disabled for IBM Spectrum Symphony applications. To enable the feature, edit the application profile from the cluster management console or in an XML editor.
  • Enabled for MapReduce applications. To disable the feature, edit the application profile in an XML editor.
Table 1. Enabling or disabling the release of slots on blocked hosts

IBM Spectrum Symphony applications, from the cluster management console:

Workload > Symphony > Application Profiles > application_name > General Settings > Release all slots on blocked host

  • Select the check box to enable slots on a blocked host to be gradually released back to EGO. Disabled by default.
  • Clear the check box to prevent slots on a blocked host from being gradually released back to EGO.

IBM Spectrum Symphony applications, in an XML editor:

SOAM > RetriedTaskAndBlockedHostCtrl > releaseAllSlotsOnBlockedHost

  • Set the attribute to true to enable slots on a blocked host to be gradually released back to EGO. Disabled by default.
  • Set the attribute to false to prevent slots on a blocked host from being gradually released back to EGO.

MapReduce applications, in an XML editor:

SOAM > RetriedTaskAndBlockedHostCtrl > releaseAllSlotsOnBlockedHost

  • Set the attribute to true to enable slots on a blocked host to be gradually released back to EGO. Enabled by default.
  • Set the attribute to false to prevent slots on a blocked host from being gradually released back to EGO.
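
For example, to enable the feature for an IBM Spectrum Symphony application in an XML editor, the releaseAllSlotsOnBlockedHost attribute might be set in the application profile as in the following sketch. Only the relevant fragment is shown; the <Profile> root element, the placement of surrounding elements, and the comment text are assumptions for illustration, not a complete profile:

    <Profile>
        <SOAM>
            <!-- Gradually release slots on this application's blocked hosts back to EGO
                 as running tasks complete. Disabled (false) by default for IBM Spectrum
                 Symphony applications; enabled (true) by default for MapReduce applications. -->
            <RetriedTaskAndBlockedHostCtrl releaseAllSlotsOnBlockedHost="true"/>
        </SOAM>
    </Profile>

For a MapReduce application, the same attribute applies; because the feature is enabled by default, you would set it to false only to disable it.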

If you unblock a host while slots are being released, the SSM continues to release the existing slots on the host and return them to EGO, and it avoids sending workload to those slots. If EGO allocates slots on the host back to the SSM, the SSM sends workload only to the newly allocated slots. Note, however, that tasks might fail again on the newly allocated slots and trigger the host to be blocked again.

Be aware that if an SSM fails over while slots are being drained on a blocked host, any remaining slots that are still allocated on that host and are in the process of draining might remain allocated and not be drained after the SSM restarts from the recovery.