Stop an inference service

Stop an inference service using the command line interface or the Watson Machine Learning Accelerator console.

Procedure

  • Stop an inference service using the command line interface.
    1. Ensure that you have downloaded and configured the dlim tool. See Download and configure the elastic distributed inference (dlim) command line tool.
    2. Run the following command:
      dlim model stop model-name
      where model-name is the name of the model that you want to stop. A worked example follows this procedure.
  • Stop an inference service from the console.
    1. Log in to Watson Machine Learning Accelerator.
    2. Navigate to Workload > Deployed Models.
    3. Select the model that you want to stop and click Stop the model.
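
For example, the following commands stop a hypothetical model named resnet18-class and then list the deployed models to confirm its state. This is a minimal sketch: the model name is illustrative, and the dlim model list subcommand is assumed to be available in your version of the tool.

  # Stop the inference service for the model
  dlim model stop resnet18-class

  # List deployed models to confirm that the model is no longer running
  dlim model list

Note that stopping differs from undeploying: the model remains deployed, so the inference service can be started again later (for example, with dlim model start, if your version of the tool supports it).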

Results

The inference service is stopped.