Running lifecycle scripts on virtual machines

When a virtual application is deployed, the lifecycle scripts are copied to the deployed virtual machines.

Before you begin

If you must debug and adjust the scripts, connect to the virtual machines by using SSH.

SSH must be configured on the virtual machine that you want to work with. If you do not have SSH set up for the virtual application instance that contains the virtual machine, see "Configuring SSH key-based access."

To troubleshoot plug-ins more effectively, you might also install the debug and unlock plug-ins. For more information, see:
  • "Debug plug-in"
  • "Unlock plug-in"

Procedure

  1. Determine the deployed virtual machine that you want to run scripts on, and get its IP address. The example in these instructions uses 172.16.37.128.
    For information about monitoring virtual application instances and looking at logs, see the following information:
    • "Monitoring virtual machine instances"
    • "Viewing virtual application instance logs"
  2. Log on to the virtual machine by using SSH.
    The following example uses the OpenSSH command with the IP address 172.16.37.128.
    ssh -i id_rsa virtuser@172.16.37.128
    For detailed instructions about making an SSH connection from the command line or by using PuTTY on Windows, see "virtual application instance."
  3. Become the root user.
    sudo su -
  4. Set up the shell variables:
     . /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
    The environment variables are logged at the top of the /0config/0config.log file. The environment variable $NODEDIR is the node working directory, /opt/IBM/maestro/agent/usr/servers/{vm-template-name}.{timestamp}. For example, for an IBM® WebSphere® Application Server node:
    /opt/IBM/maestro/agent/usr/servers/Web_Application-was.11312470007562

    The "vm-templates" element of the topology document contains "name":"Web_Application-was".

  5. Find the script that you want to run, and its request directory, by using reqDirs.sh.
    reqDirs.sh
    The output is similar to the following example:
    $NODEDIR = /opt/IBM/maestro/agent/usr/servers/Web_Application-wasce.11313595092512
    $NODEDIR/python/log_injector.py RequestDir: $NODEDIR/pyworkarea/requests/1444249909829418986
    $NODEDIR/python/log_injector.py RequestDir: $NODEDIR/partsInstall
    …
    $NODEDIR/scripts/WASCE/install.py RequestDir: $NODEDIR/pyworkarea/requests/2799038048538593654
    $NODEDIR/scripts/WASCE/configure.py RequestDir: $NODEDIR/pyworkarea/requests/9078005070867166367
    $NODEDIR/scripts/AGENT/start.py RequestDir: $NODEDIR/pyworkarea/requests/637746887665724204
    $NODEDIR/scripts/SSH/start.py RequestDir: $NODEDIR/pyworkarea/requests/7163718071124984320
    $NODEDIR/scripts/WASCE/start.py RequestDir: $NODEDIR/pyworkarea/requests/7130019476062423261
    Scripts are in $NODEDIR/scripts/{role}.

    For example, to rerun the WASCE install.py script, find WASCE/install.py in the left column of the reqDirs.sh script output, and its request directory in the right column.

  6. Change the current directory to the request directory.
    For example:
    cd $NODEDIR/pyworkarea/requests/2799038048538593654
  7. Print formatted .json files by using dumpJson.sh.
    dumpJson.sh out.json

    The out.json file is the output from the last time the script ran. The in.json file is the input to the script, containing input parameters from the topology document.

    Both in.json and out.json are formatted, so you can view them with the cat or less commands, or open them in a text editor, and still see the formatting. A generic alternative for formatting JSON files is shown after this procedure.

  8. Run the script by using runScript.sh.
    For example:
    runScript.sh $NODEDIR/scripts/WASCE/install.py
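
The following example relates to step 4. The {vm-template-name} part of the node working directory name comes from the "name" field in the "vm-templates" element of the topology document. As an illustration only, a minimal fragment of that element might look like the following; everything except the "name" field is a hypothetical placeholder:

    "vm-templates": [
       {
          "name": "Web_Application-was",
          ...
       }
    ]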
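
The dumpJson.sh helper in step 7 is part of the pdk-debug scripts. If you need to format a JSON file on a system where that helper is not available, the json.tool module from the Python standard library is a generic alternative that produces similar pretty-printed output:

    python -m json.tool in.json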

Deployed node startup flow

  1. Run 0config.sh in /0config.
  2. Download the activator .zip files from BOOTSTRAP_URL, and extract them.
  3. Change to the /0config/start directory and run the .sh scripts in numerical order. Script names start with a number. These instructions use the example 5_exec_vm_tmpl.sh.
  4. The 5_exec_vm_tmpl.sh script calls /0config/exec_vm_tmpl/exec_vm_tmpl.py.
  5. The exec_vm_tmpl.py script reads the topology.json file, and for each node part performs the following tasks:
    1. Downloads the node part.
    2. Runs the setup.py script for the node part, if it exists. Any parameters from the topology document are set in the environment of setup.py and are available from the maestro package, as shown in the sketch after this list.
  6. The 5_exec_vm_tmpl.sh script then calls node part installation scripts (.py or .sh) in numerical order from /0config/nodepkgs/common/install.
    To rerun one of these scripts:
    For a .sh script
    1. Set up the environment:
      . /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
      cd /0config/nodepkgs/common/install
    2. Start the .sh script directly from the command line.
    For a .py script
    Run the script with:
    cd /0config/nodepkgs/common/install
    runScript.sh {script-name}
  7. The 5_exec_vm_tmpl.sh script then calls node part start scripts (.py or .sh) in numerical order from /0config/nodepkgs/common/start.
    To rerun one of these scripts:
    For a .sh script
    1. Set up the environment:
      . /0config/nodepkgs/common/scripts/pdk-debug/setEnv.sh
      cd /0config/nodepkgs/common/start
    2. Start the .sh script directly from the command line.
    For a .py script
    Run the script with:
    cd /0config/nodepkgs/common/start
    runScript.sh {script-name}
  8. The /0config/nodepkgs/common/start/9_agent.sh script starts last. This script starts the maestro agent code, which downloads and installs parts and runs the part lifecycle scripts.
  9. For each part, the following steps occur:
    1. Download the part .tgz file and extract it into {tmpdir}.
    2. Run {tmpdir}/install.py, passing any associated parameters that are specified in the topology document.
    3. Delete {tmpdir} if the script is successful. The directory is not deleted if the script fails, or if the virtual application is deployed with a debug component that has Deployment for manual debugging configured.
  10. The roles in the vm-template run concurrently. For each role (see the sketch after this list):
    1. Run {role}/install.py, if it exists.
    2. For each dependency of the role, run {role}/{dependency}/install.py, if it exists.
    3. Run {role}/configure.py, if it exists.
    4. For each dependency of the role, run {role}/{dependency}/configure.py, if it exists.
    5. Run {role}/start.py, if it exists.
  11. The agent reacts to changes in dependencies by running {role}/{dependency}/changed.py, and to changes in peers by running {role}/changed.py, if they exist.
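
The node part setup.py scripts and the role lifecycle scripts (install.py, configure.py, start.py, and changed.py) are Python scripts that run under the maestro agent. The following sketch is illustrative only: it assumes that the maestro module exposes the topology document parameters as a dictionary named parms, and the INSTALL_ARCHIVE parameter name is hypothetical. Check the plug-in development documentation for the exact interface.

    # install.py - hypothetical sketch of a role lifecycle script
    import os
    import maestro   # provided by the agent at run time (assumption)

    # Read an input parameter from the topology document
    # (assumed to be available in the maestro.parms dictionary)
    archive = maestro.parms.get('INSTALL_ARCHIVE', 'app.war')

    # $NODEDIR is the node working directory; setEnv.sh sets it
    # when you rerun scripts manually
    node_dir = os.environ.get('NODEDIR', '')

    print('Installing %s under %s' % (archive, node_dir))

When you rerun such a script with runScript.sh from its request directory, the in.json file in that directory contains the input parameters that the script received on its original run.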
