Deployment

Consider these aspects of deployment when you design your plug-in.

Activation

Each image contains a startup script, /0config/0config.sh, that executes after the virtual machine starts.

Each virtual machine is assigned a unique name within the deployment. The name is set as an environment variable named SERVER_NAME. The value is formed by appending an instance number to the corresponding vm-template name, for example, application-was.11373380043317, application-was.2237183401478347. Activation proceeds as follows:
  1. Get the vm-template for this virtual machine from the topology document, for example, if SERVER_NAME == application-was.1, then get the vm-template named application-was.
  2. For each node part in the vm-template:
    1. Download the node part .tgz file and extract into the {nodepkgs_root} directory.
    2. Invoke {nodepkgs_root}/setup/setup.py, if the script exists. Associated parameters from the topology document are available as maestro.parms.
    3. Delete {nodepkgs_root}/setup/.
  3. Run the installation scripts ({nodepkgs_root}/common/install/*.sh|.py) in ascending numerical order.
  4. Run the start scripts ({nodepkgs_root}/common/start/*.sh|.py) in ascending numerical order.
In Step 2, node parts must not rely on the order of installation; that is, each setup/setup.py script must rely only on the contents of its own node part. The one exception is the maestro module: its initialization script is already in place, so the setup.py script can use the maestro HTTP client utility methods, for example, maestro.download(), and can read maestro.parms to obtain configuration parameters.
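For illustration, Step 1 derives the vm-template name from the SERVER_NAME environment variable. A minimal sketch, assuming the dot-separated naming convention that is shown earlier:

import os

# Set by activation, for example 'application-was.11373380043317'
server_name = os.environ['SERVER_NAME']

# The vm-template name is the portion before the final dot and instance number
template_name = server_name.rsplit('.', 1)[0]   # 'application-was'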

Both installation and start scripts are ordered. By convention, these scripts are named with a number prefix, such as 5_autoscaling.sh or 9_agent.sh, and are said to be in slot 5 or slot 9. All installation scripts in slot 0 run before any installation script in slot 1. All of the installation scripts run in sequential order, and then all of the start scripts run in sequential order. Setup and installation scripts run one time for each virtual machine; start scripts run on every start or restart. For more information, see the section Recovery: Reboot or replace?. The workload agent is packaged and installed as a node part.
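For example, given these hypothetical script names, activation runs them in the following order:

common/install/0_os_tuning.sh    (first: installation, slot 0)
common/install/5_autoscaling.sh  (second: installation, slot 5)
common/start/0_network.sh        (third: start, slot 0)
common/start/9_agent.sh          (fourth: start, slot 9)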

Node parts

Node parts are installed by the activation script and generally contain binary files and script files to augment the operating system. Review the following information about node parts:
  • Conventions
    Node parts are packaged as .tgz files. The contents are organized into a directory structure by convention. The following files and directories are optional:
    common/python/maestro/{name}.py
    common/scripts/{name}
    common/start/{N}_{name}
    common/stop/{N}_{name}
    {name}/{any files}
    setup/setup.py
    
    The root directory common/ contains the following shared files:
    • common/python

      Added to the PYTHONPATH for the Python scripts that are started by the workload agent. All files in common/python/maestro/*.py are added to the maestro package.

    • common/scripts

      Added to the PATH before the start scripts are run.

    • common/start

      Contains script files that are started automatically by the activation script; scripts are named with a number prefix, for example, 5_autoscaling.sh or 9_agent.sh, and run in ascending order, starting with 0_*.

    • common/stop

      Contains script files that are run automatically by the activation script when the virtual machine stops; scripts are named with a number prefix, for example, 5_autoscaling.sh or 9_agent.sh, and run in reverse sequential order.

    {name}/ is a private directory for the node part.

    setup/ is intended for one-time setup of the node part. The script, setup/setup.py, is started by the activation script with the associated parameters specified in the topology document. The setup/ directory is deleted after the setup/setup.py script returns.

  • Setup script
    If present, the setup/setup.py script is started with the associated parameters. For example, the workload agent node part is configurable for the installation location, the IaaS server, and a command port; the parameters are specified in the topology document within the node part object, as in this excerpt from the sample topology document in the section Application model and topology document examples:
    "node-parts":[
                {
                   'parms':{
                      "iaas-port":"8080",
                      "agent-dir":"\/opt\/IBM\/maestro\/agent",
                      "http-port":9999,
                      "iaas-ip":"127.0.0.1"
                   },
                   "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
                },
    The setup/setup.py script can import the maestro module. Configuration parameters are available as maestro.parms, for example:
    
    import subprocess
    
    import maestro
    
    # Configuration parameters from the topology document
    parms = maestro.parms
    
    # Make the bundled shell scripts executable
    subprocess.call('chmod +x *.sh', shell=True)
    
    # Pass the node part parameters to the setup shell script
    rc = subprocess.call('./setup_agent.sh "%s" %d %s %s' % (parms['agent-dir'], parms['http-port'], parms['iaas-ip'], parms['iaas-port']), shell=True)
    maestro.check_status(rc, 'setup_agent.sh: rc == %s' % rc)
    The implementation is arbitrary. The previous example passes the parameter values to a shell script, which in turn starts sed for token replacement in the support scripts.
    Node parts are added to vm-templates during the resolve phase of deployment, in one of two ways:
    • Explicit inclusion as part of a named package.
    • Implicit inclusion as part of a matching default package.
    For example, the config.json file from the workload agent plug-in is shown here:
    {
       "name":"agent",
       "packages":{
          "default":[
             {
                "requires":{
                   "arch":"x86_64",
                   "os":{
                      "RHEL":"*"
                   }
                },
                "node-parts":[
                   {
                      "node-part":"nodeparts/agent-linux-x64.tgz",
                       "parms":{
                         "agent-dir":"/opt/IBM/maestro/agent",
                         "http-port":9999,
                         "iaas-ip":"@IAAS_IP@",
                         "iaas-port":"@IAAS_PORT@"
                      }
                   }
                ]
             }
          ]
       }
    }
    
    In general, config.json can define any number of named packages. The previous example shows one package that is named "default". Each package is an array that contains any number of objects, where each object is a candidate combination of node parts, parts, or both. The candidates are specified by mapped values for requires, node-parts, and parts. Package contents are additive within a pattern type: if two plug-ins are part of the same pattern type and both define package "FOO" in config.json, then resolving package "FOO" considers the union of candidates from both config.json files, as in the hypothetical fragments that follow. In config.json, "default" is a special package name. The resolve phase always includes the default package; other named packages are resolved only when explicitly named.
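    For example, two hypothetical plug-ins in the same pattern type might both contribute candidates to a package named "FOO"; the resolve phase considers both candidates and applies whichever requires clause matches the deployment.
    Fragment from the config.json file of plug-in A:
    "packages":{
       "FOO":[
          {
             "requires":{ "arch":"x86_64" },
             "node-parts":[ { "node-part":"nodeparts/foo-x86.tgz" } ]
          }
       ]
    }
    Fragment from the config.json file of plug-in B:
    "packages":{
       "FOO":[
          {
             "requires":{ "arch":"ppc_64" },
             "node-parts":[ { "node-part":"nodeparts/foo-ppc.tgz" } ]
          }
       ]
    }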
  • config.json file
    The config.json file must specify the following information:
    Plug-in name
    The plug-in name is a string. The name cannot contain a forward slash (/). For example:
    "name":"was"
    Plug-in version
    The format of the version number must be N.N.N.N. For example, a valid version number is 1.0.0.3.
    Associated pattern type
    At a minimum, one pattern type association must be defined in a patterntypes element, and it can be either a primary or a secondary pattern type. You can specify only one primary pattern type, but you can specify multiple secondary pattern types. For example, the DB2® plug-in declares primary association with the database as a service pattern type, which means the plug-in is covered by the license and enablement of the database as a service pattern type entity. The plug-in also declares secondary association with the Web Application Pattern type, which means that the enabled DB2 plug-in is also available for creating virtual application patterns with the Web Application Pattern type.
    Use a JSON object to define the primary pattern type. For example:
    "patterntypes":{
       "primary":{
          "webapp":"2.0"
        }
    }
    
    Define one or more secondary pattern types by using a JSON array that contains one or more JSON objects. An example of a plug-in without a primary pattern type is the debug plug-in. It uses the following configuration to specify that the plug-in has a secondary association with all pattern types.
    "patterntypes" : {
        "secondary" : [ { "*" : "*" } ]
    },
    If you want to define associations with specific secondary pattern types, you can include multiple pattern types in the secondary element.
    "patterntypes" : {
        "secondary" : [ 
            { "pattern1":"2.0" },
            { "pattern2":"2.1" }
        ]
    },

    Plug-in linking

    The simplest way to associate a plug-in with a pattern type is by directly defining the primary and secondary pattern types in config.json. This approach, however, creates static relationships between plug-ins and pattern types. Linking is a more flexible approach to associating pattern types and plug-ins. In the following example, the hello plug-in declares a link relationship with the dbaas pattern type in config.json.
    "patterntypes": {
         "primary":{
             "hello":"2.0"
         },
         "linked":{
             "dbaas":"1.1"
         }
     }
    When the hello plug-in is imported, any plug-ins that are associated with version 1.1 of the dbaas pattern type are automatically associated with version 2.0 of the hello pattern type as well. For example, the plugin.com.ibm.db2 plug-in declares dbaas as its primary pattern type, so this plug-in is automatically associated with the hello pattern type.
    "patterntypes":{
          "primary":{
             "dbaas":"1.1"
          },
          "secondary":[
             {
                "webapp":"*"
             }
          ]
     },
    Linking relationships are not enforced the same way as primary and secondary pattern type relationships. For the hello plug-in example, the hello plug-in can be installed even if the dbaas pattern type is not installed or enabled. The dbaas pattern type can also be removed if there are no active associations with virtual application instances or if no virtual applications are locked to the pattern type.

    Requires

    The value of requires is an object that qualifies the candidate in terms of applicable operating system and architecture. Supported keys and mapped values are as follows:

    Key arch; mapped value is a string or array of strings. Supported strings include x86 and x86_64 (ESX) and ppc_64 (PowerVM®).

    For example:
    "arch" : "ppc_64"
    or
    "arch" : ["x86", "x86_64"]
    Key os; mapped value is an object of OS name/version pairs. Supported OS name strings include RHEL and AIX®. The following version expressions are valid:
    "*"
    Matches all versions.
    "[a, b]"
    Matches all versions between a and b, inclusive.
    "(a, b)"
    Matches all versions between a and b, exclusive.
    "[*, b]"
    Matches all versions up to b, inclusive.
    "[a, *)"
    Matches all versions a and greater.
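    For example, a hypothetical requires object that matches 64-bit x86 virtual machines that run any RHEL version from 6.0 up to, but not including, 7.0:
    "requires":{
       "arch":"x86_64",
       "os":{
          "RHEL":"[6.0, 7.0)"
       }
    }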
    Package names must be unique across all installed plug-ins, except for the reserved default package name. The default package is additive: all plug-ins can append candidates to the default package, and all candidates are automatically evaluated for a match during the resolve phase of deployment. Thus, the default package is the basis for implicit inclusion; all other named packages are included only explicitly.
  • Standard node parts
    All vm-templates include the workload agent and firewall node parts. Each of these node parts defines a standard interface for integration with, and use by, other node parts.
    • Workload agent

      The workload agent is an OSGi-based application responsible for installing parts and driving the lifecycle of the roles and dependencies in a plug-in.

      The workload agent processes the following sequence of operations to drive roles and dependencies toward a running application:
      1. Get the vm-template for this virtual machine from the topology document, for example, if SERVER_NAME == application-was.1, then get the vm-template named application-was.
      2. For each part in the vm-template, run the following steps sequentially:
        1. Download the part .tgz file and extract the file into {tmpdir}.
        2. Start {tmpdir}/install.py, passing any associated parameters that are specified in the topology document.
        3. Delete {tmpdir}.
      3. For each role in the vm-template, run the following steps concurrently:
        1. Start {role}/install.py, if it exists.
        2. For each dependency of the role, start {role}/{dependency}/install.py, if it exists.
        3. Start {role}/configure.py, if it exists.
        4. For each dependency of the role, start {role}/{dependency}/configure.py, if it exists.
        5. Start {role}/start.py, if it exists.
      4. React to changes in the dependencies ({role}/{dependency}/changed.py) and peers ({role}/changed.py), if the scripts exist. Role status is automatically advanced up to CONFIGURING, but not to RUNNING. A plug-in script must set the role status to RUNNING. Role status must be set only by the {role}/start.py and changed.py scripts (role or dependency). Role status is set as follows:
        import maestro
        maestro.role_status = 'RUNNING'
      The workload agent is extensible: other node parts can install features into the OSGi-based application.
      Complete the following steps to install node part features into the agent:
      1. Provide a .tgz file that contains the following files:
        • lib/{name}.jar - bundle Java™ archive files
        • lib/features/featureset_{name}.blst - list of bundles for the feature set
        • usr/configuration/{name}.cfg - OSGi configuration code
      2. Provide a start script in a slot earlier than slot 9 that installs the .tgz file contents into the agent.
      Other node parts do not need to know the installation location of the agent application. Rather, the agent provides the shared script, agent_install_ext.sh, to install custom features. Shared scripts are always on the PATH, so a typical start script for a node part to install a custom feature is as follows:
      #!/bin/sh
      
      agent_install_ext.sh ../../autoscaling/autoscaling.tgz
      The agent_install_ext.sh script extracts the contents of the specified .tgz file into the agent application. The custom feature is included when the agent is started.
    • Firewall
      The firewall node part defines a generic API for shell scripts and Python scripts to manipulate the firewall settings on the virtual machine. The default implementation for Red Hat Enterprise Linux® is based on iptables; other implementations can be built. The firewall.sh script is the shell script that is provided for Linux. The following content summarizes the shell usage:
      firewall.sh open in [-p <protocol>] [-src <src>] [-sport <port>] \
              [-dest <dest>] [-dport <port>]
      firewall.sh open out [-p <protocol>] [-src <src>] [-sport <port>] \
              [-dest <dest>] [-dport <port>]
      firewall.sh open tcpin [-src <src>] [-dport <port>]
      firewall.sh open tcpout [-dest <dest>] [-dport <port>]

      firewall.sh close in [-p <protocol>] [-src <src>] [-sport <port>] \
              [-dest <dest>] [-dport <port>]
      firewall.sh close out [-p <protocol>] [-src <src>] [-sport <port>] \
              [-dest <dest>] [-dport <port>]
      firewall.sh close tcpin [-src <src>] [-dport <port>]
      firewall.sh close tcpout [-dest <dest>] [-dport <port>]
      
      The open tcpin directive is tailored for TCP connections and opens corresponding rules in the INPUT and OUTPUT chains to allow request and response connections. The open in directive opens the INPUT chain only. For src and dest, private is a valid value; it indicates that <src> and <dest> are limited to the IP range that is defined for the cloud. The value private is defined in the config.json file for the firewall plug-in as follows:
      {
         "name":"firewall",
         "packages":{
            "default":[
               {
                  "requires":{
                     "arch":"x86_64",
                     "os":{
                        "RHEL":"*"
                     }
                  },
                  "node-parts":[
                     {
                        "node-part":"nodeparts/firewall.tgz",
                         "parms":{
                           "private":"PRIVATE_MASK"
                        }
                     }
                  ]
               }
            ]
         }
      }
      
      Currently, the value of PRIVATE_MASK is set as part of the maestro provisioning steps. The cloud-specific value is found in the cloud project build.xml file, for example, cloud.HSLT/build.xml. For an Orion cloud, PRIVATE_MASK == 10.0.0.0/255.0.0.0.
      The Python API is similar. Callers must import the maestro package. The provided firewall methods are as follows:
      maestro.firewall.open_in(**args) 
      maestro.firewall.open_out(**args) 
      maestro.firewall.open_tcpin(**args) 
      maestro.firewall.open_tcpout(**args)  
      
      maestro.firewall.close_in(**args) 
      maestro.firewall.close_out(**args) 
      maestro.firewall.close_tcpin(**args) 
      maestro.firewall.close_tcpout(**args)
      where **args represents keyword args.

      Valid keywords are as follows: protocol, src, sport, dest, and dport.

      The following example is for one valid invocation:
      maestro.firewall.open_tcpin(src='private', dport=8080) 

Parts

Parts are installed by the workload agent and generally contain binary and lifecycle scripts that are associated with roles and dependencies. Review the following information about parts:
  • Conventions

    All parts must have an install.py script at the root. Additional files are allowed.

  • Common scripts
    By default, the maestro package contains the following functions (a usage sketch follows this list):
    • maestro.download(url, f): Downloads the resource from the url and saves the resource locally as file f.
    • maestro.downloadx(url, d): Downloads and extracts a .zip, .tgz, or .tar.gz file into directory d. The .tgz and .tar.gz files are streamed through extraction; the .zip file is downloaded and then extracted.
    • maestro.decode(s): Decodes strings that are encoded with the maestro encoding utility, such as from a transformer by using com.ibm.ws.security.utils.XOREncoder.encode(String).
    • maestro.install_scripts(d1): Utility function for copying lifecycle scripts into {scriptdir} and making the shell scripts executable (dos2unix and chmod +x).
    • maestro.check_status(rc, message): Utility function for logging and exiting a script for non-zero rc.
  • Data objects
    The agent appends data objects or dictionaries to the maestro package when it starts the part installation scripts as follows:
    maestro.parturl : fully qualified URL from which the part .tgz file was obtained (string; RO)
    maestro.filesurl : fully qualified URL prefix for the shared files in storehouse (string; RO)
    maestro.parms : associated parameters specified in the topology document (JSON object; RO)
    maestro.node['java'] : absolute path to the Java executable (string; RO)
    maestro.node['deployment.id'] : deployment ID, for example, d-xxx (string; RO)
    maestro.node['tmpdir'] : absolute path to the working directory; cleared after use (string; RO)
    maestro.node['scriptdir'] : absolute path to the root of the script directory (string; RO)
    maestro.node['name'] : server name (same as the SERVER_NAME environment variable) (string; RO)
    maestro.node['instance']['private-ip'] (string; RO)
    maestro.node['instance']['public-ip'] (string; RO)
    maestro.node['parts'] : shared with all Python scripts invoked on this node (JSON object; RW)
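The following minimal install.py sketch shows several of the common functions and data objects in use; the archive name, install directory, and parameter name are hypothetical:

import maestro

# Parameter for this part from the topology document (hypothetical name)
install_dir = maestro.parms['installDir']

# Download and extract a product archive stored next to the part in storehouse
maestro.downloadx(maestro.filesurl + 'myproduct/myproduct.tgz', install_dir)

# Copy the role lifecycle scripts bundled with this part into {scriptdir}
maestro.install_scripts('scripts')

# Share the location with other Python scripts on this node
maestro.node['parts']['myproduct'] = {'installDir': install_dir}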

Roles

A role represents a managed entity within a virtual application instance. Each role is described in a topology document by a JSON object that is contained within the corresponding vm-template; see the example later in this section. For role scripts, the agent provides one additional data object:

maestro.role['tmpdir'] : role-specific working directory; not cleared (string; RO)

You can import custom scripts; for example, to import my_role/my_lib.py:
import sys

import maestro

utilpath = maestro.node['scriptdir'] + '/my_role'
if utilpath not in sys.path:
    sys.path.append(utilpath)
import my_lib
The following example is a role from a topology document:
"roles":[
{
"plugin":"was\/2.0.0.0",
'parms':{
"ARCHIVE":"$$1",
"USERID":"virtuser",
"PASSWORD":"$$6"
},
"depends":[
{
"role":"database-db2.DB2",
'parms':{
"db_provider":"DB2 Universal JDBC Driver Provider",
"jndiName":"TradeDataSource",
"inst_id":1,
"POOLTIMEOUT":"$$11",
"NONTRAN":false,
"db2jarInstallDir":"\/opt\/db2jar",
"db_type":"DB2",
"db_dsname":"db2ds1",
"resourceRefs":[
{
"moduleName":"tradelite.war",
"resRefName":"jdbc\/TradeDataSource"
}
],
"db_alias":"db21"
},
"type":"DB2",
"bindingType":"javax.sql.DataSource"
}
],

Role names and role types

Role names must be unique within the vm-template. In the topology document, the role name identifies the section of the vm-template where the role parameters are defined. Role names do not need to match the role type; the role type identifies the directory where the role scripts are stored. For example, if your vm-template includes these lines:
{
    "name": "A_name",
    "type": "A",
    "parms": {}
}
The following actions occur when the virtual machine is deployed:
  • A role that is named A_name is created.
  • The Python lifecycle scripts for the role A_name are in scripts/A/, the directory that is named for the role type.
  • The lifecycle scripts can use maestro.parms to read the parameters that are defined for the role A_name.
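For instance, a hypothetical scripts/A/install.py reads those parameters directly:

import maestro

# Parameters defined for the role A_name in the vm-template; the script
# directory (scripts/A) is determined by the role type, not the role name
parms = maestro.parms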
The following example shows a deployment document that is generated for a caching shared service deployment. The fully qualified role name for each node is in the format:
{server name}.{role name}
The {server name} is the vm-template name that is defined in the topology document, with a unique instance number appended. The {role name} is the role name that is defined in the topology document.
 ROLES: [
    {
      time_stamp: 1319543308833,
      state: "RUNNING",
      private_ip: "172.16.68.128",
      role_type: "CachingContainer",
      role_name: "Caching-Container.11319542242188.Caching",
      display_metrics: true,
      server_name: "Caching-Container.11319542242188",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319543269980,
      state: "RUNNING",
      private_ip: "172.16.68.129",
      role_type: "CachingCatalog",
      role_name: "Caching-Catalog.21319542242178.Caching",
      display_metrics: false,
      server_name: "Caching-Catalog.21319542242178",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319544107162,
      state: "RUNNING",
      private_ip: "172.16.68.131",
      role_type: "CachingPrimary_node",
      role_name: "Caching-Primary_node.11319542242139.Caching",
      display_metrics: true,
      server_name: "Caching-Primary_node.11319542242139",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    },
    {
      time_stamp: 1319543249613,
      state: "RUNNING",
      private_ip: "172.16.68.130",
      role_type: "CachingCatalog",
      role_name: "Caching-Catalog.11319542242149.Caching",
      display_metrics: false,
      server_name: "Caching-Catalog.11319542242149",
      pattern_version: "2.0",
      pattern_type: "foundation",
      availability: "NORMAL"
    }
  ],

Role state and status

The agent implements a state machine that drives each role through a basic progression as follows:
  1. INITIAL: Roles start in the initial state. The {role}/install.py script for each role starts. For each dependency of the role, {role}/{dependency}/install.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the INSTALLED state. If the install.py script fails, the role moves to the ERROR state, and the deployment fails.
  2. INSTALLED: From this state, the {role}/configure.py script runs, if one exists. For each dependency of the role, {role}/{dependency}/configure.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the CONFIGURING state. If the configure.py script fails, the role moves to the ERROR state, and the deployment fails.
  3. CONFIGURING: From this state, the start.py script runs, if one exists. The role reacts to changes in the dependent roles ({role}/{dependency}/changed.py) and peers ({role}/changed.py), if they exist.
    Note: For more information about {role}/changed.py and {role}/{dependency}/changed.py, see the pydoc.
  4. STARTING: Automatic state advancement stops. A lifecycle script must explicitly set the role status to RUNNING, as follows:
    import maestro
    maestro.role_status = 'RUNNING'
  5. RUNNING
If the process is stopped or an unrecoverable error occurs, the role moves to the ERROR state. If an error is recoverable, you can keep the role state as RUNNING and change the role status to FAILED. For example, if WebSphere® Application Server crashes and the crash is detected by wasStatus.py, the wasStatus.py script sets maestro.role_status = "FAILED". When a user starts WebSphere Application Server from the Instance Console, one of the following processes occurs:
  • If there is no dependency, operation.py sets maestro.role_status="RUNNING".
  • If WebSphere Application Server depends on another role (such as DB2), operation.py sets maestro.role_status="CONFIGURING". The was/{dependency}/changed.py script starts as the result of the role status change from FAILED to CONFIGURING; the script starts WebSphere Application Server, processes dependency information from maestro.deps, and sets maestro.role_status="RUNNING", as sketched in the following example.
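The following hedged sketch illustrates that reaction; the helper script name is hypothetical, and maestro.deps is assumed to be keyed by the fully qualified dependency role name:

# was/DB2/changed.py (sketch): react to the DB2 role recovering
import subprocess

import maestro

# Exported properties of the dependency (assumed key format)
db = maestro.deps['database-db2.DB2']

# start_was.sh is a hypothetical helper that starts the application server
rc = subprocess.call('./start_was.sh', shell=True)
maestro.check_status(rc, 'start_was.sh: rc == %s' % rc)

# The plug-in script, not the agent, advances the role status to RUNNING
maestro.role_status = 'RUNNING'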

If the deployment is stopped or destroyed, the stop.py script runs, and the role moves to the TERMINATED state. Roles are only moved to the TERMINATED state by external commands.

The role status can change during transitions and within a state. The following table shows the state progression that is described previously, with the details of the status updates and the lifecycle scripts that are started:
Table 1. Role state and status

Role state | Lifecycle scripts that are invoked | Transition | Update status aspect | Role status
Initial | - | Initial => INSTALLED | on entry | INITIAL
INSTALLED | {role}/install.py, then all {role}/{dep}/install.py; {role}/configure.py, then all {role}/{dep}/configure.py; {role}/start.py | INSTALLED => RUNNING | during | INSTALLING, CONFIGURING, STARTING (role status set by script)
RUNNING | {role}/start.py | - | on entry; on changed | role_status (set by script)
For information about status for an entire virtual application instance, see the Related tasks.

Existing resources

Plug-ins can interact with existing resources. Although the existing resource is not a managed entity within the plug-in, it is modeled as a role, which allows for a consistent approach, whether dealing with pattern-deployed or existing resources. Specifically:
  • Integration between resources is modeled as a dependency between two roles. The target role (pattern-deployed or existing) exports properties that are used by a dependency script on the source ({role}/{dep}/changed.py) to realize the integration. This design provides reuse of the source dependency script. For example, in the wasdb2 plug-in, the WAS/DB2/changed.py script manages a WebSphere Application Server data source for any pattern-deployed or existing database.
  • User interactions in the Cloud Pak System Software for x86 deployment user interface are consistent for resources and integrations. Resources (pattern-deployed or existing) are represented as roles, so they are displayed on the Operation tab of the deployment panel in the product user interface. For example, you can look for a role when you change a password. For a pattern-deployed resource, the change is applied to the resource and then exported for dependencies to react to. For an existing resource, where the password was already changed externally, the change is only exported for dependencies to react to.

    Managing configuration of the interactions (links) is handled through the source role.

An existing resource is modeled by a component in the appmodel/metadata.json file. Typical component attributes are those required to connect to the resource, such as the host name or IP address, the port, and the application credentials.

Integration with existing resources is modeled by a link in the appmodel/metadata.json file.

If a type of resource can appear as either pattern-deployed or existing, consolidation is possible by adding a role to represent the external resource. This role can export the same parameters from the existing resource that the dependent role already handles for the pattern-deployed case.

Consider the case of an application that uses an existing resource, as in the wasdb2, imsdb, and wasctg plug-ins. At the application model level, the existing database is a component, and the use of the database by WebSphere Application Server, on behalf of the application, is represented as a link to that component. Typical attributes of the existing database are its host name or IP address and port, and the user ID and password for access.

In older service approaches, the existing database component has a transform that builds a JSON target fragment that stores the attributes, and the link transform uses these attributes. In IMS, for example, the link transform creates a dependency in the WebSphere Application Server role in the WebSphere Application Server node, with the parameters of the existing database passed from the component. The dependent role configure.py script configures WebSphere Application Server to use the existing database based on those parameters. The parameters are sufficient, but in the deployment panel they appear under the WebSphere Application Server role, which is confusing.

In the new role approach, the target component creates a role JSON object, and the link transform adds it to the list of roles in the WebSphere Application Server virtual machine template. The wasdb2 plug-in creates an xDB role to connect to existing DB2 and Informix® databases. IMS can convert to this model and move its configure.py and change.py scripts to a new xIMS role. The advantage is in the deployment panel, which lists each role for a node separately in a left column, where its parameters and operations are better separated for user access.

The wasdb2 plug-in provides an extra feature that IMS and CTG might not use: the plug-in also supports pattern-deployed DB2 instances. In the pattern-deployed scenario, the DB2 target node is started as part of the deployment. The correct model is a dependent role, and the link configuration occurs when both components, the source WebSphere Application Server and the target DB2, are started; the changed.py script is then run. For the existing database scenario, the wasdb2 plug-in exports the same parameters as the DB2 plug-in, so processing for the pattern-deployed and existing cases can be performed in the same changed.py script. IMS and wasctg do not require this process and can use a configure.py role script for new roles.

Repeatable tasks

At run time, a role might need to perform some actions repeatedly. For example, the logging service must back up local logs to a remote server at a fixed interval. To meet this requirement, the plug-in framework can schedule a script to start after a specified delay.

You can schedule a task from any lifecycle script that belongs to a role, such as install.py, configure.py, start.py, and changed.py. Call the maestro.tasks.append() method to schedule the task. For example:
import maestro

# Run backupLog.py 10 seconds after the current script completes
task = {}
task['script'] = 'backupLog.py'
task['interval'] = 10

# Optional parameters; backupLog.py reads them from maestro.task['parms'].
# The values (hostname, directory, user, keyFile) are assumed to be defined
# earlier in the enclosing script.
taskParms = {}
taskParms['hostname'] = hostname
taskParms['directory'] = directory
taskParms['user'] = user
taskParms['keyFile'] = keyFile
task['parms'] = taskParms

maestro.tasks.append(task)

When you are troubleshooting a task that does not run, check the script that calls the task with maestro.tasks.append() first.

You must create a dictionary object, named task in this example; any valid name works. The target script is specified by task['script'], and the delay, in seconds, is specified by task['interval']. Optionally, you can pass parameters to the script by using task['parms']. The maestro.tasks.append(task) call schedules the task. In this sample, backupLog.py, which is in the {role}/scripts folder, is started 10 seconds after the current script completes. In the backupLog.py script, you can retrieve the task parameters from maestro.task['parms'] and the interval from maestro.task['interval']. The script is started only one time. If the backupLog.py script must run repeatedly, add the same scheduling code to the backupLog.py script itself; each time the script completes, it is scheduled again with the newly specified interval and parameters.
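The following is a minimal self-rescheduling sketch of backupLog.py, assuming the parameters shown previously; the scp command is illustrative only:

# backupLog.py (sketch): back up local logs, then reschedule this task
import subprocess

import maestro

parms = maestro.task['parms']

# Copy the local logs to the remote backup server (illustrative command)
rc = subprocess.call('scp -i %s -r /var/log %s@%s:%s' % (parms['keyFile'], parms['user'], parms['hostname'], parms['directory']), shell=True)
maestro.check_status(rc, 'backupLog.py: rc == %s' % rc)

# Append the same task again so that the backup repeats at the same interval
task = {}
task['script'] = 'backupLog.py'
task['interval'] = maestro.task['interval']
task['parms'] = parms
maestro.tasks.append(task)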

Recovery: Reboot or replace?

If a virtual machine stops unexpectedly, the master agent recovers the failed virtual machine. The recovery action depends on the virtual machine type: a persistent virtual machine is rebooted; other virtual machines are replaced.

A virtual machine is persistent if it is an instance of a vm-template with a true-valued persistent property, as follows:
"vm-templates": [
        {
        	"persistent":true,
            "scaling": {
                "min": 1, 
                "max": 1
            },
There are two ways for a plug-in to mark a virtual machine as persistent:
  • Direct

    The transformer adds the persistent property to the vm-template.

  • Indirect

    The package configuration specifies the persistent attribute.

The direct method supersedes the indirect method: if the vm-template is marked persistent (true or false), that is the final value. If the vm-template is not marked, the resolve phase of deployment derives a persistent value for the vm-template from the packages that are associated with it. The vm-template is marked persistent (true) if any package declares persistent true.

The indirect method provides more flexibility to integrate parts and node parts without requiring global knowledge of where persistence is required. For the direct method, a transformer adds the property as follows:
 "vm-templates": [
        {
        	"persistent":true,
            "scaling": {
                "min": 1, 
                "max": 1
            }, 
            "name": "Caching_Primary_node", 
            "roles": [
                {
                    "depends": [{
                        "role": "Caching_Worker_node.Caching"
                    }],
                    "type": "CachingPrimary_node", 
                    "name": "Caching",
                    'parms':{
                	 "PASSWORD": "$XSAPassword"
                	}
                }
            ], 
            "packages": [
                "CACHING"
            ]
        },
Package configuration is specified in the config.json file as follows:
{
    "name" : "db2",
    "version" : "1.0.0.0",
    "packages" : {
        "DB2" : [{
            "requires" : {
                "arch" : "x86_64",
                "memory" : 0
            },
            "persistent" : true,
            "parts" : [
                {
                    "part" : "parts/db2-9.7.0.3.tgz",
                    "parms" : {
                        "installDir" : "/opt/ibm/db2/V9.7"
                    }
                },
                {"part" : "parts/db2.scripts.tgz"}
            ]
        }]
    }
}

Instance Console

You can manage virtual application instances from the Operation tab of the Instance Console. The tab displays the roles in a selected virtual application instance, and each role provides the actions that the underlying plug-ins define for it, such as retrieving data, applying a software update, or modifying a configuration setting.

There are two types of actions you can perform to manage or modify deployed roles. The main difference between these actions is the scope.
Operation
For an operation, the action affects only currently deployed roles. An operation is defined in operation.json.
Configuration
For a configuration, attribute changes are saved in the topology. If a virtual machine is restarted or more virtual machines are deployed, the configuration attribute value is applied to these virtual machines. A configuration is defined in tweak.json.

Operation

  • Define an operation.
    Create a JSON file named operation.json in the plugin/appmodel directory. Each operation.json file must contain a JSONObject. The following code example shows a part of the WebSphere Application Server operation.json file, where the key is the role type WAS, and the value is a JSONArray that contains these operation definitions:
     "WAS":
    [
            {
                "id": "setWASTrace",
                "label": "TRACE_LEVEL_LABEL",
                "description": "TRACE_LEVEL_DESCRIPTION",
                "script": "debug.py setWASTrace",
    		    "attributes": [
                    {
                        "label": "TRACE_STRING_LABEL",
                        "type": "string",
                        "id": "traceString",
                        "description": "TRACE_STRING_DESCRIPTION" 
                    } 
                ] 
            },
    where
    • script defines the operation script (debug.py) that is started when the operation is submitted. The script name can be followed by a method name, such as setWASTrace in the previous code sample; the method name can be retrieved later in the operation script. The operation script must be placed under the role scripts path, for example, plugin/parts/was.scripts/scripts/WAS.
    • attributes define the operation parameters that the user must enter. The operation parameters can be retrieved later by the operation script.
  • Attributes for operations against multiple instances.

    If a role has more than one instance, you can use the following attributes in the operation definition to control how the operation is applied to the instances:
    rolling
    Determines whether an operation is applied sequentially or concurrently on instances.
    • Apply an operation concurrently by setting "rolling": false. This setting is the default.
    • Apply an operation sequentially by setting "rolling": true.
    You can configure a group update by adding rolling_config with the group_size attribute. If group_size is set, the instances in a cluster are divided into the specified number of groups. The operation is invoked group by group. For example, if you set the following attributes:
    {
      "rolling":true,
        "rolling_config": {
            "group_size": 2
         }
    }
    when the operation is invoked, the instances are divided into two groups. The operation is first performed on group one. After the operation completes on all of the instances in group one, the operation is invoked on the instances in group two.
    target
    Determines whether an operation is applied on a single instance or all instances.
    • Apply an operation on all instances by setting "target": "All". This setting is the default.
    • Apply an operation on a single instance by setting "target": "Single".
    See the WebSphere Application Server operation.json file for an example.
  • Setting a particular role status for an operation.

    By default, when an operation is being performed, the role status is set to CONFIGURING and then is set back to RUNNING when the operation is complete. This change in status can sometimes stop the application itself. Some operations, such as exporting logs, do not require a role to change its status from RUNNING. For these types of operations, you can explicitly set the role status to use when the operation starts. For example, to keep the role status as RUNNING when the operation starts, add the following attribute to the operation definition:

    "preset_status": "RUNNING"

    The role status remains as RUNNING unless an error occurs during the operation.

    See the WebSphere Application Server operation.json file for an example.

  • Operation script

    The operation script can import the maestro module. The information that is available in the role lifecycle part scripts can be retrieved in the same way in the operation script, such as maestro.role, maestro.node, maestro.parms, and maestro.xparms. All of the utility methods, such as download and downloadx, can also be used. Parameters that are configured in the deployment panel are passed into the script and are retrieved from maestro.operation['parms']. The method name that is defined in the operation.json file is retrieved from maestro.operation['method'], and the operation ID is retrieved from maestro.operation['id'].
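    The following minimal sketch is an operation script for the setWASTrace operation that is defined earlier; setTrace.sh is a hypothetical helper that is shipped with the role scripts:

    # debug.py (sketch): apply the trace level entered in the Instance Console
    import subprocess

    import maestro

    method = maestro.operation['method']  # for example, 'setWASTrace'
    parms = maestro.operation['parms']    # attribute values from the user interface

    if method == 'setWASTrace':
        trace = parms['traceString']
        # setTrace.sh is hypothetical; it applies the trace specification
        rc = subprocess.call('./setTrace.sh "%s"' % trace, shell=True)
        maestro.operation['successful'] = (rc == 0)
        maestro.operation['return_value'] = 'trace set to %s' % trace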

  • File download and upload
    All downloaded artifacts must be placed under the fixed root path. The operation script can get the root path from maestro.operation['artifacts_path']. To specify a file for later download, insert maestro.return_value="file://key.p12" in the script; the file:// prefix indicates that a file is offered for download. After the script completes, the deployment panel displays a link to download the file. Uploaded files are placed in a temporary folder under the deployment path in the storehouse. The operation script retrieves the full storehouse path of an uploaded file, for example:
    uploaded_file_path=maestro.operation['parms'][{parm_name}]
    After the file path is retrieved, the maestro.download() method downloads the file. When the operation is complete, the temporary files in the storehouse are deleted. When the operation script interacts with kernel services and the storehouse, the script should use the authorization token that is passed from the user interface rather than the agent token, so that the operations can be audited later. The operation script must use maestro.operation["user_token"] to retrieve the user token that is passed from the user interface; the token contains the user UUID and can be used later in communication with kernel services and the storehouse. For example, to upload a file into the storehouse, use upload_if_new(url, file), which uses the agent token to generate the authorization header, or upload_if_new(url, file, user_token), where the passed-in user_token is used to generate the header.
  • Operation result

    You can specify the result of an operation by setting maestro.operation['successful']=True or False. The default value is True. After the operation script completes successfully, the user interface displays the result as SUCCESS or FAIL. If the script fails, for example with a nonzero return code, the user interface displays ERROR, and the corresponding role changes to the ERROR state. If you want to return a more meaningful result, set maestro.operation['return_value']="my return value". The return value is displayed in the user interface when the script completes.

    Use a try and except statement around the operation script code to prevent the role from entering an unrecoverable ERROR state. If an exception occurs and is caught, you can set maestro.operation['successful'] = False to indicate that the operation did not complete successfully while the role status remains RUNNING.

  • Depends-on role operation
    In some scenarios, one role update requires another role to update as well. This scenario is called the depends-on role operation. For example, if DB2 changes a password, WebSphere Application Server must also update its data source configuration. In the operation script, export the changed value, for example: maestro.export['DB_PASSWORD']="<XOR>UIK8CNz". Then, update the depends-on role changed.py script to handle the change:
    if len(updated) == 1:
        myrole = updated[0]
        if deps[myrole]['is_export_changed']:
            print 'export data changed'
        else:
            print 'only status changed'
  • Running scripts concurrently

    By default, all lifecycle scripts and operation scripts run serially on a role to avoid conflicts. You can also run scripts concurrently, with some limitations: for scripts that run in parallel, all maestro.* variables are read-only, except maestro.operation and maestro.tasks. For an operation that must run concurrently but also write other maestro.* variables, the operation script can keep the code that can safely run concurrently and schedule a task for the code that must run serially.

    The following attributes are used to configure parallel running of scripts:
    parallel
    If set to true, scripts run concurrently. If set to false, scripts run serially.
    timeout
    For non-parallel operations, the response from an operation returns immediately after the operation information is added to an operation queue. When scripts run in parallel, the response does not return until all of the operation scripts complete. For scripts that take a long time to run, the default timeout might expire before the operation script completes. To adjust for this issue, specify an appropriate timeout value in seconds.
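    For example, a hypothetical operation definition that runs concurrently with a 10-minute timeout:

    {
        "id": "collectLogs",
        "label": "COLLECT_LOGS_LABEL",
        "description": "COLLECT_LOGS_DESCRIPTION",
        "script": "logs.py collectLogs",
        "parallel": true,
        "timeout": 600
    }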
  • Testing operation scripts
    During plug-in development, you can test your operation scripts incrementally.
    1. Deploy the application.
    2. Log on to the virtual machine with the scripts that you want to update.
    3. Replace the operation scripts with updated versions.
    4. Trigger the operation script by running the operation from the deployment panel. The update takes effect immediately, and the updated operation script runs.

Configuration

In the deployment panel, a configuration update is handled as a special type of operation. The configuration processes include the following tasks:
  • Define a configuration.

    Add the tweak.json file under the plug-in plugin/appmodel folder to specify which configuration parameters can be changed at run time. This means that some parameters in the topology model can be changed and validated at run time. Each tweak.json file is a JSONArray; each object describes a parameter that can be tweaked in the topology model. For example, in the WebSphere Application Server plug-in, the following tweak.json entry makes the ARCHIVE parameter under the WAS role tweakable.

    The value of "id" is composed with {role_type}.{parm_name}. Other attributes such as "label" and "description" are prepared for the user interface, similar to the definition in the metadata.json file in the /appmodel directory.
    {
        "id": "WAS.ARCHIVE",
        "label": "WAR/EAR File",
        "description": "Specifies the web/enterprise application to be uploaded.",
        "type": "file",
        "extensions": [
            "war",
            "ear"
        ]
    }
    For a parameter under the depends section, the value of "id" is composed as {role_type}.{depends_role_type}.{parm_name}:
    
        {
          "id": "WAS.DB2.MINPOOLSIZE",
          "type": "number"
      }
    
    To enable this feature, you must add the following code to the operation.json file:
    {
        "id": "your_id",
        "type": "configuration",
        "label": "CONFIGURATION_LABEL",
        "description": "CONFIGURATION_DESCRIPTION",
        "script": "change.py"
    },
    

    The "configuration" value indicates that parameters must be saved.

    Configuration script: change.py

    The change.py script is similar to an operation script, except for the following differences:
    • The script name must be change.py.
    • When you use the depends role configuration, the change.py script is placed under the plugin/parts/{parts_package}/scripts/{role_type}/{depends_role_type} path.
    • After the script is started successfully, the changed values are automatically persisted to storehouse.
    • For an artifacts-type configuration update, such as updating the WebSphere Application Server ARCHIVE, if the change.py script completes successfully, the artifacts in the temporary folder in the storehouse are cloned to the /deployment/{deployment_id}/artifacts path before they are deleted.
  • Configuration for a non-role component, such as remote DB2 and Tivoli® Directory Service.

    If the transformer needs to support configuration, follow the wasxdb2 example:

    In XDB2Transformer.java:

    Change
    result.put("service", attributes);
    to
    result.put("attributes", attributes);
    result.put("service", prefix);
    In WASXDB2Transformer.java:
    Change
    JSONObject serviceParms = (JSONObject) targetFragment.get("service");
    to
    JSONObject serviceParms = (JSONObject) targetFragment.get("attributes");
    and add the following lines:
    //WASxDB2 acts as an extension, not a dependency (so, no role defined)
    depend.put("type", "xDB2");
    depend.put("service", targetFragment.get("service"));
  • Configuration for multi-links between two components, such as remote WebSphere Application Server and Tivoli Directory Service.
    In addition to the tweak.json file, the transformer needs specific code to handle this scenario. The target topology segment looks like the following example:
      "depends": [
            {
                "role": "User_Registry-tds.TDS1",
                "deps_key_id": "WAS.xLDAP.xLDAP_ROLE_MAPPING",
                'parms': {
                    "manager": {
                        "SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None",
                        "GROUP_xLDAP_ROLE_MAPPING": "manager",
                        "xLDAP_ROLE_MAPPING": "manager",
                        "USER_xLDAP_ROLE_MAPPING": ""
                    },
                    "employee": {
                        "USER_xLDAP_ROLE_MAPPING": "",
                        "GROUP_xLDAP_ROLE_MAPPING": "employee",
                        "xLDAP_ROLE_MAPPING": "employee",
                        "SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None"
                    }
                }
            }
        ]    ]
    The parms are a nested structure, and you must specify a "deps_key_id" as the key for each subgroup in the parms. You can use Java-based code or a template to complete the transformer. In the parts scripts changed.py and change.py, you can retrieve the parameters by using a for loop, as follows:
    for key in parms:
        roleParms = parms[key]
        print key
        print roleParms['USER_xLDAP_ROLE_MAPPING']
        print roleParms['GROUP_xLDAP_ROLE_MAPPING']
        print roleParms['SPECIALSUBJECTS_xLDAP_ROLE_MAPPING']