Deployment
Consider these aspects of deployment when you design your plug-in.
Activation
Each image contains a startup script, /0config/0config.sh, that executes after the virtual machine starts.
Each virtual machine is identified by a unique server name, available to scripts in the environment variable SERVER_NAME. The value is formed by appending an instance number to the corresponding vm-template name, for example, application-was.11373380043317 or application-was.2237183401478347.
Activation proceeds as follows:
1. Get the vm-template for this virtual machine from the topology document. For example, if SERVER_NAME == application-was.1, get the vm-template named application-was.
2. For each node part in the vm-template:
   1. Download the node part .tgz file and extract it into the {nodepkgs_root} directory.
   2. Invoke {nodepkgs_root}/setup/setup.py, if the script exists. Associated parms from the topology document are available as maestro.parms.
   3. Delete {nodepkgs_root}/setup/.
3. Run the installation scripts ({nodepkgs_root}/common/install/*.sh|.py) in ascending numerical order.
4. Run the start scripts ({nodepkgs_root}/common/start/*.sh|.py) in ascending numerical order.
The activation script puts the maestro module initialization in place so that the setup.py script can use the maestro HTTP client utility methods (for example, maestro.download()) and maestro.parms to obtain configuration parameters.

Both installation and start scripts are ordered. By convention, these scripts are named with a number prefix, such as 5_autoscaling.sh or 9_agent.sh, and are said to be in slot 5 or slot 9. All installation scripts in slot 0 are run before any installation script in slot 1. All of the installation scripts are run in sequential order, and then all of the start scripts are run in sequential order. Setup and installation scripts are run one time for each virtual machine; start scripts are run on every start or restart. For more information, see the section, Recovery: Reboot or replace.

The workload agent is packaged and installed as a node part.
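The slot ordering described above can be sketched in Python. This is an illustration only, not the actual activation script; treating scripts without a numeric prefix as slot 0 is an assumption.

```python
import os
import re
import subprocess

def slot_of(name):
    """Numeric slot prefix of a script name: '5_autoscaling.sh' is in slot 5.
    Scripts without a numeric prefix are treated as slot 0 here (an assumption)."""
    m = re.match(r'(\d+)_', name)
    return int(m.group(1)) if m else 0

def run_slot_ordered(directory):
    """Run every .sh and .py script in `directory` in ascending slot order,
    so all slot-0 scripts run before any slot-1 script, and so on."""
    scripts = [f for f in os.listdir(directory) if f.endswith(('.sh', '.py'))]
    for name in sorted(scripts, key=slot_of):
        subprocess.call([os.path.join(directory, name)])
```

Note that a numeric sort on the prefix differs from a plain lexicographic sort: slot 10 runs after slot 9, not after slot 1.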
Node parts
- Conventions
  Node parts are packaged as .tgz files. The contents are organized into a directory structure by convention, with shared files under the common/ root directory. The following files and directories are all optional:

```
common/python/maestro/{name}.py
common/scripts/{name}
common/start/{N}_{name}
common/stop/{N}_{name}
{name}/{any files}
setup/setup.py
```

- common/python
  Added to the PYTHONPATH and applies to the Python scripts started by the workload agent. All files in common/python/maestro/*.py are added to the maestro package.
- common/scripts
  Added to the PATH before the start scripts are run.
- common/start
  Contains script files that are run automatically by the activation script. Scripts are named with a number prefix, for example, 5_autoscaling.sh or 9_agent.sh, and run in sequential order starting with 0_*.
- common/stop
  Contains script files that are run automatically by the activation script at shutdown. Scripts are named with a number prefix, for example, 5_autoscaling.sh or 9_agent.sh, and run in reverse sequential order.
- {name}/
  A private directory for the node part.
- setup/
  Intended for one-time setup of the node part. The script setup/setup.py is started by the activation script with the associated parameters specified in the topology document. The setup/ directory is deleted after the setup/setup.py script returns.
- Setup script
  If present, the setup/setup.py script is started with the associated parameters. For example, the workload agent node part is configurable for the installation location, IaaS server, and a command port. The parameters are specified in the topology document within the node part object, as in this excerpt from the sample topology document in the section, Application model and topology document examples:

```
"node-parts":[
  {
    "parms":{
      "iaas-port":"8080",
      "agent-dir":"\/opt\/IBM\/maestro\/agent",
      "http-port":9999,
      "iaas-ip":"127.0.0.1"
    },
    "node-part":"https:\/\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
  },
```

The setup/setup.py script can import the maestro module. Configuration parameters are available as maestro.parms, for example:

```python
import json
import os
import subprocess
import sys

import maestro

parms = maestro.parms
subprocess.call('chmod +x *.sh', shell=True)
rc = subprocess.call('./setup_agent.sh "%s" %d %s %s' %
                     (parms['agent-dir'], parms['http-port'],
                      parms['iaas-ip'], parms['iaas-port']),
                     shell=True)
maestro.check_status(rc, 'setup_agent.sh: rc == %s' % rc)
```

The implementation is arbitrary. The previous example passes the parameter values to a shell script, which in turn uses sed for token replacement in the support scripts.

Node parts are added to vm-templates during the resolve phase of deployment. Node parts are added to a vm-template in two ways:
1. Explicit inclusion as part of a named package.
2. Implicit inclusion as part of a matching default package.
In general, config.json can define any number of named packages. Each package is an array that contains any number of objects, where each object is a candidate combination of node parts, parts, or both. The candidates are specified by mapped values for requires, node-parts, and parts. Package contents are additive within a pattern type. For example, if two plug-ins are part of the same pattern type and both define package "FOO" in config.json, then resolving package "FOO" considers the union of candidates from both config.json files. In config.json, "default" is a special package name. The resolve phase always includes the default package; other named packages are resolved only when explicitly named. The following example shows one package that is named "default":

```
{
  "name":"agent",
  "packages":{
    "default":[
      {
        "requires":{
          "arch":"x86_64",
          "os":{
            "RHEL":"*"
          }
        },
        "node-parts":[
          {
            "node-part":"nodeparts/agent-linux-x64.tgz",
            "parms":{
              "agent-dir":"/opt/IBM/maestro/agent",
              "http-port":9999,
              "iaas-ip":"@IAAS_IP@",
              "iaas-port":"@IAAS_PORT@"
            }
          }
        ]
      }
    ]
  }
}
```

- config.json file
  The config.json file must specify the following information:
- Plug-in name
  The plug-in name is a string. The name cannot contain a forward slash (/). For example:

```
"name":"was"
```

- Plug-in version
  The format of the version number must be N.N.N.N. For example, a valid version number is 1.0.0.3.
- Associated pattern type
  At a minimum, one pattern type association must be defined in a patterntypes element, and it can be either a primary or a secondary pattern type. You can specify only one primary pattern type, but you can specify multiple secondary pattern types. For example, the DB2® plug-in declares primary association with the database as a service pattern type, which means the plug-in is covered by the license and enablement of the database as a service pattern type entity. The plug-in also declares secondary association with the Web Application Pattern type, which means that the enabled DB2 plug-in is also available for creating virtual application patterns with the Web Application Pattern type.

  Use a JSON object to define the primary pattern type. For example:

```
"patterntypes":{
  "primary":{
    "webapp":"2.0"
  }
}
```

  Define one or more secondary pattern types by using a JSON array that contains one or more JSON objects. An example of a plug-in without a primary pattern type is the debug plug-in. It uses the following configuration to specify that the plug-in has a secondary association with all pattern types:

```
"patterntypes" : {
  "secondary" : [
    { "*" : "*" }
  ]
},
```

  If you want to define associations with specific secondary pattern types, you can include multiple pattern types in the secondary element:

```
"patterntypes" : {
  "secondary" : [
    { "pattern1":"2.0" },
    { "pattern2":"2.1" }
  ]
},
```
Plug-in linking
The simplest way to associate a plug-in with a pattern type is by directly defining the primary and secondary pattern types in config.json. This approach, however, creates static relationships between plug-ins and pattern types. Linking is a more flexible approach to associating pattern types and plug-ins. In the following example, the hello plug-in declares a link relationship with the dbaas pattern type in config.json:

```
"patterntypes": {
  "primary":{
    "hello":"2.0"
  },
  "linked":{
    "dbaas":"1.1"
  }
}
```

When the hello plug-in is imported, any plug-ins that are associated with version 1.1 of the dbaas pattern type are automatically associated with version 2.0 of the hello pattern type as well. For example, the plugin.com.ibm.db2 plug-in declares dbaas as its primary pattern type, so this plug-in is automatically associated with the hello pattern type.
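The linking rule can be illustrated with a small sketch. The data shapes mirror the config.json fragments, but the function is only an illustration of the rule, not the product's resolver:

```python
def linked_associations(plugins):
    """Compute extra plugin-to-pattern-type associations implied by 'linked'
    declarations. `plugins` maps plugin name -> its 'patterntypes' object.
    Returns {plugin_name: set of (pattern_type, version) it gains}."""
    # Index every pattern type some plug-in links to, mapped to the linking
    # plug-in's own primary pattern type.
    links = {}
    for name, pt in plugins.items():
        for ltype, lver in pt.get('linked', {}).items():
            for ptype, pver in pt.get('primary', {}).items():
                links[(ltype, lver)] = (ptype, pver)

    # Any plug-in whose primary matches a linked pattern type gains the
    # association with the linking plug-in's primary pattern type.
    extra = {}
    for name, pt in plugins.items():
        for ptype, pver in pt.get('primary', {}).items():
            if (ptype, pver) in links:
                extra.setdefault(name, set()).add(links[(ptype, pver)])
    return extra
```

With the hello and db2 declarations from the text, the db2 plug-in gains an association with hello 2.0.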
The db2 plug-in declares the following pattern type associations in its config.json:

```
"patterntypes":{
  "primary":{
    "dbaas":"1.1"
  },
  "secondary":[
    { "webapp":"*" }
  ]
},
```

Linking relationships are not enforced the same way as primary and secondary pattern type relationships. In the hello plug-in example, the hello plug-in can be installed even if the dbaas pattern type is not installed or enabled. The dbaas pattern type can also be removed if there are no active associations with virtual application instances or if no virtual applications are locked to the pattern type.

Requires
The value of requires is an object that qualifies the candidate in terms of applicable operating system and architecture. Supported keys and mapped values are as follows:

- Key arch: the mapped value is a string or an array of strings. Supported strings include x86 and x86_64 (ESX), and ppc_64 (PowerVM®). For example:

```
"arch" : "ppc_64"
```

  or

```
"arch" : ["x86", "x86_64"]
```

- Key os: the mapped value is an object of OS name/version pairs. Supported OS name strings include RHEL and AIX®. The following version expressions are valid:
  - "*": matches all versions.
  - "[a, b]": matches all versions between a and b, inclusive.
  - "(a, b)": matches all versions between a and b, exclusive.
  - "[*, b]": matches all versions up to b, inclusive.
  - "[a, *)": matches all versions a and greater.
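The version expressions read as interval matching. The following sketch of a matcher is an illustration only; the product's actual parser may differ in details such as whitespace handling:

```python
def _v(s):
    """Parse a dotted version string into a tuple of ints for comparison."""
    return tuple(int(p) for p in s.split('.'))

def matches(expr, version):
    """Return True if `version` satisfies an os-version expression such as
    "*", "[a, b]", "(a, b)", "[*, b]", or "[a, *)"."""
    expr = expr.strip()
    if expr == '*':
        return True
    lo_inc = expr[0] == '['          # '[' includes the bound, '(' excludes it
    hi_inc = expr[-1] == ']'
    lo, hi = (p.strip() for p in expr[1:-1].split(','))
    v = _v(version)
    if lo != '*':
        if v < _v(lo) or (not lo_inc and v == _v(lo)):
            return False
    if hi != '*':
        if v > _v(hi) or (not hi_inc and v == _v(hi)):
            return False
    return True
```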
- Standard node parts
  All vm-templates include the workload agent and firewall node parts. Each of these node parts defines a standard interface for integration with, and use by, other node parts.
- Workload agent
The workload agent is an OSGi-based application responsible for installing parts and driving the lifecycle of the roles and dependencies in a plug-in.
The workload agent processes the following sequence of operations to drive roles and dependencies toward a running application:
1. Get the vm-template for this virtual machine from the topology document. For example, if SERVER_NAME == application-was.1, get the vm-template named application-was.
2. For each part in the vm-template, run the following steps sequentially:
   1. Download the part .tgz file and extract the file into {tmpdir}.
   2. Start {tmpdir}/install.py, passing any associated parameters that are specified in the topology document.
   3. Delete {tmpdir}.
3. For each role in the vm-template, run the following steps concurrently:
   1. Start {role}/install.py, if it exists.
   2. For each dependency of the role, start {role}/{dependency}/install.py, if it exists.
   3. Start {role}/configure.py, if it exists.
   4. For each dependency of the role, start {role}/{dependency}/configure.py, if it exists.
   5. Start {role}/start.py, if it exists.
   6. React to changes in the dependencies ({role}/{dependency}/changed.py) and peers ({role}/changed.py), if the scripts exist.

Role status is automatically advanced up to CONFIGURING, but not to RUNNING. A plug-in script must set the role status to RUNNING. Role status must be set only by the {role}/start.py and changed.py scripts (role or dependency). Role status is set as follows:

```python
import maestro
maestro.role_status = 'RUNNING'
```
There are several custom features of the workload agent:
- The workload agent is extensible. Other node parts can install features into the OSGi-based application:
  1. Provide a .tgz file that contains the following files:
     - lib/{name}.jar: bundle Java™ archive files
     - lib/features/featureset_{name}.blst: list of bundles for the feature set
     - usr/configuration/{name}.cfg: OSGi configuration code
  2. Provide a start script before slot 9 that installs the .tgz file contents into the agent. The agent_install_ext.sh script extracts the contents of the specified .tgz file into the agent application. The custom feature is included when the agent is started. For example:

```sh
#!/bin/sh
agent_install_ext.sh ../../autoscaling/autoscaling.tgz
```
- Firewall
  The firewall node part defines a generic API for shell scripts and Python scripts to manipulate the firewall settings on the virtual machine. The default implementation for Red Hat Enterprise Linux® is based on iptables; other implementations can be built. The firewall.sh script is the shell script that is provided for Linux. The following content summarizes the shell usage:

```
firewall.sh open in [-p <protocol>] [-src <src>] [-sport <port>] \
    [-dest <dest>] [-dport <port>]
firewall.sh open out [-p <protocol>] [-src <src>] [-sport <port>] \
    [-dest <dest>] [-dport <port>]
firewall.sh open tcpin [-src <src>] [-dport <port>]
firewall.sh open tcpout [-dest <dest>] [-dport <port>]
firewall.sh close in [-p <protocol>] [-src <src>] [-sport <port>] \
    [-dest <dest>] [-dport <port>]
firewall.sh close out [-p <protocol>] [-src <src>] [-sport <port>] \
    [-dest <dest>] [-dport <port>]
firewall.sh close tcpin [-src <src>] [-dport <port>]
firewall.sh close tcpout [-dest <dest>] [-dport <port>]
```

The open tcpin directive is tailored for TCP connections and opens corresponding rules in the INPUT and OUTPUT tables to allow request and response connections. The open in directive opens the INPUT table only. For src and dest, private is a valid value; this value indicates that <src> and <dest> are limited to the IP range defined for the cloud. The value private is defined in the config.json file for the firewall plug-in as follows:

```
{
  "name":"firewall",
  "packages":{
    "default":[
      {
        "requires":{
          "arch":"x86_64",
          "os":{
            "RHEL":"*"
          }
        },
        "node-parts":[
          {
            "node-part":"nodeparts/firewall.tgz",
            "parms":{
              "private":"PRIVATE_MASK"
            }
          }
        ]
      }
    ]
  }
}
```

Currently, the value of PRIVATE_MASK is set as part of the maestro provisioning steps. The cloud-specific value is found in the cloud project build.xml file, for example, cloud.HSLT/build.xml. For an Orion cloud, PRIVATE_MASK == 10.0.0.0/255.0.0.0.

The Python API is similar. Callers must import the maestro package. The provided firewall methods are as follows:

```python
maestro.firewall.open_in(**args)
maestro.firewall.open_out(**args)
maestro.firewall.open_tcpin(**args)
maestro.firewall.open_tcpout(**args)
maestro.firewall.close_in(**args)
maestro.firewall.close_out(**args)
maestro.firewall.close_tcpin(**args)
maestro.firewall.close_tcpout(**args)
```

where **args represents keyword arguments. Valid keywords are protocol, src, sport, dest, and dport. The following example shows one valid invocation:

```python
maestro.firewall.open_tcpin(src='private', dport=8080)
```
Parts
- Conventions
All parts must have an install.py script at the root directory. Additional files are allowed.
- Common scripts
  By default, the maestro package contains the following functions:
  - maestro.download(url, f): downloads the resource from url and saves the resource locally as file f.
  - maestro.downloadx(url, d): downloads and extracts a .zip, .tgz, or .tar.gz file into directory d. The .tgz and .tar.gz files are streamed through extraction; a .zip file is downloaded and then extracted.
  - maestro.decode(s): decodes strings that are encoded with the maestro encoding utility, such as from a transformer by using com.ibm.ws.security.utils.XOREncoder.encode(String).
  - maestro.install_scripts(d1): utility function for copying lifecycle scripts into {scriptdir} and making the shell scripts executable (dos2unix and chmod +x).
  - maestro.check_status(rc, message): utility function for logging and exiting a script for a non-zero rc.
- Data objects
  The agent appends data objects or dictionaries to the maestro package when it starts the part installation scripts, as follows:
  - maestro.parturl: fully qualified URL from which the part .tgz file was obtained (string; RO)
  - maestro.filesurl: fully qualified URL prefix for the shared files in the storehouse (string; RO)
  - maestro.parms: associated parameters specified in the topology document (JSON object; RO)
  - maestro.node['java']: absolute path to the Java executable (string; RO)
  - maestro.node['deployment.id']: deployment ID, for example, d-xxx (string; RO)
  - maestro.node['tmpdir']: absolute path to the working directory; this path is cleared after use (string; RO)
  - maestro.node['scriptdir']: absolute path to the root of the script directory (string; RO)
  - maestro.node['name']: server name (same as the environment variable SERVER_NAME) (string; RO)
  - maestro.node['instance']['private-ip'] (string; RO)
  - maestro.node['instance']['public-ip'] (string; RO)
  - maestro.node['parts']: shared with all Python scripts invoked on this node (JSON object; RW)
Roles
A role represents a managed entity within a virtual application instance. Role lifecycle scripts also have access to maestro.role['tmpdir'], the role-specific working directory, which is not cleared (string; RO). A role script can put shared role utilities on the Python path, for example:

```python
utilpath = maestro.node['scriptdir'] + '/my_role'
if not utilpath in sys.path:
    sys.path.append(utilpath)
import my_lib
```

Each role is described in a topology document by a JSON object, which is contained within a corresponding vm-template, as in the following example:

"roles":[
{
"plugin":"was\/2.0.0.0",
"parms":{
"ARCHIVE":"$$1",
"USERID":"virtuser",
"PASSWORD":"$$6"
},
"depends":[
{
"role":"database-db2.DB2",
"parms":{
"db_provider":"DB2 Universal JDBC Driver Provider",
"jndiName":"TradeDataSource",
"inst_id":1,
"POOLTIMEOUT":"$$11",
"NONTRAN":false,
"db2jarInstallDir":"\/opt\/db2jar",
"db_type":"DB2",
"db_dsname":"db2ds1",
"resourceRefs":[
{
"moduleName":"tradelite.war",
"resRefName":"jdbc\/TradeDataSource"
}
],
"db_alias":"db21"
},
"type":"DB2",
"bindingType":"javax.sql.DataSource"
}
],

Role names and role types
Each role is declared with a name and a type in the vm-template. In the topology, the name identifies a specific section in the vm-template where role parameters are defined. Role names do not need to match the role type. The role type identifies the directory where the role scripts are stored. For example, if your vm-template includes these lines:

```
{
  name: A_name
  type: A
  parms: {}
}
```

The following actions occur when the virtual machine is deployed:
- A role that is called A_name is created.
- The Python lifecycle scripts for the role A_name are in scripts/A/.
- The lifecycle scripts can use maestro.parms to read the parameters that are defined for the role A_name.
The full name of a role instance has the form {server name}.{role name}. The {server name} is based on the vm-template name that is defined in the topology document, with a unique instance number appended. The {role name} is the role name that is defined in the topology document. For example:

ROLES: [
{
time_stamp: 1319543308833,
state: "RUNNING",
private_ip: "172.16.68.128",
role_type: "CachingContainer",
role_name: "Caching-Container.11319542242188.Caching",
display_metrics: true,
server_name: "Caching-Container.11319542242188",
pattern_version: "2.0",
pattern_type: "foundation",
availability: "NORMAL"
},
{
time_stamp: 1319543269980,
state: "RUNNING",
private_ip: "172.16.68.129",
role_type: "CachingCatalog",
role_name: "Caching-Catalog.21319542242178.Caching",
display_metrics: false,
server_name: "Caching-Catalog.21319542242178",
pattern_version: "2.0",
pattern_type: "foundation",
availability: "NORMAL"
},
{
time_stamp: 1319544107162,
state: "RUNNING",
private_ip: "172.16.68.131",
role_type: "CachingPrimary_node",
role_name: "Caching-Primary_node.11319542242139.Caching",
display_metrics: true,
server_name: "Caching-Primary_node.11319542242139",
pattern_version: "2.0",
pattern_type: "foundation",
availability: "NORMAL"
},
{
time_stamp: 1319543249613,
state: "RUNNING",
private_ip: "172.16.68.130",
role_type: "CachingCatalog",
role_name: "Caching-Catalog.11319542242149.Caching",
display_metrics: false,
server_name: "Caching-Catalog.11319542242149",
pattern_version: "2.0",
pattern_type: "foundation",
availability: "NORMAL"
}
],

Role state and status
Roles progress through the following states:

- INITIAL: Roles start in the initial state. The {role}/install.py script for each role starts. For each dependency of the role, {role}/{dependency}/install.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the INSTALLED state. If the install.py script fails, the role moves to the ERROR state, and the deployment fails.
- INSTALLED: From this state, the {role}/configure.py script runs, if one exists. For each dependency of the role, {role}/{dependency}/configure.py starts, if it exists. If the scripts complete successfully, the role progresses automatically to the CONFIGURING state. If the configure.py script fails, the role moves to the ERROR state, and the deployment fails.
- CONFIGURING: From this state, the start.py script runs, if one exists. The role reacts to changes in the dependent roles ({role}/{dependency}/changed.py) and peers ({role}/changed.py), if the scripts exist. Note: For more information about {role}/changed.py and {role}/{dependency}/changed.py, see the pydoc.
- STARTING: The automatic state setting stops. A lifecycle script must explicitly set the role state to RUNNING. Role status is set as follows:

```python
import maestro
maestro.role_status = 'RUNNING'
```

- RUNNING: In this state, plug-in scripts can detect a failure and change the role status to FAILED. For example, if WebSphere® Application Server crashes, and the crash is detected by wasStatus.py, the wasStatus.py script sets maestro.role_status = "FAILED". When a user starts WebSphere Application Server from the Instance Console, one of the following processes occurs:
  - If there is no dependency, operation.py sets maestro.role_status="RUNNING".
  - If WebSphere Application Server depends on another role (such as DB2), operation.py sets maestro.role_status="CONFIGURING". The was/{dependency}/changed.py script starts as the result of the role status change from FAILED to CONFIGURING; the script starts WebSphere Application Server, processes dependency information from maestro.deps, and sets maestro.role_status="RUNNING".
- TERMINATED: If the deployment is stopped or destroyed, the stop.py script runs, and the role moves to the TERMINATED state. Roles are moved to the TERMINATED state only by external commands.
The following table summarizes the role state scripts and transitions:

| Role state | Scripts invoked | Transition | Update status | Role status |
|---|---|---|---|---|
| INITIAL | - | INITIAL => INSTALLED | on entry | INITIAL |
| INSTALLED | {role}/install.py, then all {role}/{dep}/install.py; {role}/configure.py, then all {role}/{dep}/configure.py; {role}/start.py | INSTALLED => RUNNING | during | INSTALLING, CONFIGURING, STARTING (role status set by script) |
| RUNNING | {role}/start.py | - | - | role_status (set by script) |
For information about status for an entire virtual application instance, see the Related tasks.
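The documented state progression can be summarized with a small sketch. This toy advance function only mirrors the transitions described above; the real driver is the OSGi-based workload agent, not this Python loop:

```python
# Automatic transitions stop at STARTING; a lifecycle script must then set
# RUNNING explicitly via maestro.role_status, as described above.
AUTO = {'INITIAL': 'INSTALLED', 'INSTALLED': 'CONFIGURING', 'CONFIGURING': 'STARTING'}

def advance(state, script_ok=True):
    """One documented transition: a failing lifecycle script moves the role
    to ERROR; otherwise the role advances automatically up to STARTING."""
    if not script_ok:
        return 'ERROR'
    return AUTO.get(state, state)
```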
Existing resources
- Integration between resources is modeled as a dependency between two roles. The target role (pattern-deployed or existing) exports properties that are used by a dependency script on the source ({role}/{dep}/changed.py) to realize the integration. This design provides reuse of the source dependency script. For example, in the wasdb2 plug-in, the WAS/DB2/changed.py script manages a WebSphere Application Server data source for any pattern-deployed or existing database.
- User interactions in the Cloud Pak System Software for x86 deployment
user interface are consistent for resources and integrations. Resources
(pattern-deployed or existing) are represented as roles, meaning they
are displayed on the Operation tab of the deployment
panel in the product user interface. For example, you can look for
a role when you change a password. For a pattern-deployed resource, the change is applied to the resource and then exported for dependencies to react. For an existing resource, the change is only exported for dependencies to react, as when the password has already been changed externally.
Managing configuration of the interactions (links) is handled through the source role.
An existing resource is modeled by a component in appmodel/metadata.json file. Typical component attributes are required to connect to the resource, such as host name/IP address, port, and application credentials.
Integration with existing resources is modeled by a link in the appmodel/metadata.json file.
If a type of resource can be either pattern-deployed or existing, consolidation is possible by adding a role to represent the external resource. This role can export the same parameters as the pattern-deployed case, so that the dependent role can handle both.
Consider the case of an application that is using an existing
resource, such as wasdb2, imsdb,
and wasctg plug-ins. At the application model level, the existing database is a component, and the use of the database by WebSphere Application Server, on behalf of the application, is represented as a link to that component.
Typical attributes of the existing database are its host name or
IP address and port, and the user ID and password for access.
In older service approaches, the existing database component has a transform that builds a JSON target fragment that stores the attributes, and the link transform uses these attributes. In IMS, for example, the link transform creates a dependency in the WebSphere Application Server role in the WebSphere Application Server node, with the parameters of the existing database that are passed from the component. The dependent role configure.py script is used to configure WebSphere Application Server to use the existing database that is based on the parameters, which are sufficient, but in the deployment panel the parameters of the existing database appear in the WebSphere Application Server role, which is not sensible.
In the new role approach, the target component
creates a role JSON object and the link transform adds it to the WebSphere Application Server
virtual machine template list of roles. The wasdb2 plug-in
creates an xDB role to connect to existing DB2 and Informix® databases. IMS can convert to this model and move its configure.py and changed.py scripts
to a new xIMS role. The advantage here is in the
deployment panel, which lists each role for a node separately in a
left column where its parameters and operations are better separated
for user access.
The wasdb2 plug-in provides
an extra feature that IMS and
CTG might not use. The plug-in also supports pattern-deployed DB2 instances. In the pattern-deployed
scenario, the DB2 target node
is a node that is started. The correct model is a dependent role
and the link configuration occurs when both components, source WebSphere Application Server
and target DB2, start. The changed.py script
is then run. For the existing database scenario, the wasdb2 plug-in
exports the same parameters as the DB2 plug-in,
and then processing for pattern-deployed and existing cases can be
performed in the changed.py script. IMS and wasctg do not require
this process and can use a configure.py role
script for new roles.
Repeatable tasks
At run time, a role might need to perform some actions repeatedly. For example, the logging service must back up local logs to the remote server in a fixed period. The plug-in framework allows a script that is started after a specified time to meet this requirement.
To schedule a repeatable task, use the maestro.tasks.append() method. For example:

```python
task = {}
task['script'] = 'backupLog.py'
task['interval'] = 10
taskParms = {}
taskParms['hostname'] = hostname
taskParms['directory'] = directory
taskParms['user'] = user
taskParms['keyFile'] = keyFile
task['parms'] = taskParms
maestro.tasks.append(task)
```

When you are troubleshooting a task that does not run, first check the script that calls maestro.tasks.append().
You must have a dictionary object, named task in this sample; you can change the name to any other valid name. The target script is specified by task['script'], and the interval is specified by task['interval']. Optionally, you can pass parameters to the script by using task['parms']. Calling maestro.tasks.append(task) enables the task. In this sample, backupLog.py, which is in the folder {role}/scripts, is started 10 seconds after the current script completes. In the backupLog.py script, you can retrieve the task parameters from maestro.task['parms'] and the interval from maestro.task['interval']. The script is started only one time. If the backupLog.py script must be started repeatedly, add the same registration code to the backupLog.py script itself. Then, each time the current script completes, the script is scheduled again after the specified interval with the specified parameters.
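The re-registration pattern described above might look like the following inside backupLog.py. The maestro object here is a stub for illustration only, since the real module exists only inside the deployed virtual machine:

```python
# Stub standing in for the maestro module available inside the virtual
# machine; its shape here is an assumption for illustration only.
class _MaestroStub(object):
    def __init__(self):
        self.tasks = []   # registered repeatable tasks
        self.task = {}    # the current task's definition

maestro = _MaestroStub()

# Body of a hypothetical backupLog.py that re-registers itself so that it
# keeps running every task['interval'] seconds, as the text above describes.
def backup_log_main():
    parms = maestro.task.get('parms', {})
    # ... perform the backup using parms['hostname'], parms['directory'], ...
    maestro.tasks.append({                  # schedule the next run
        'script': 'backupLog.py',
        'interval': maestro.task.get('interval', 10),
        'parms': parms,
    })
```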
Recovery: Reboot or replace?
If a virtual machine stops unexpectedly, the master agent recovers the failed virtual machine. The action depends on the virtual machine type. A persistent virtual machine is rebooted. Other virtual machines are replaced.
A persistent virtual machine is identified by a vm-template with a true-valued persistent property, as follows:

```
"vm-templates": [
  {
    "persistent":true,
    "scaling": {
      "min": 1,
      "max": 1
    },
```

The persistent property can be set in two ways:

- Direct
  The transformer adds the persistent property to the vm-template.
- Indirect
  The package configuration specifies the persistent attribute.
The direct method supersedes the indirect method. That is, if the vm-template is marked persistent (true or false), that is the final value. If the vm-template is not marked persistent, the resolve phase of deployment derives a persistent value for the vm-template based on the packages that are associated with that vm-template: the vm-template is marked "persistent":true if any package declares persistent true.
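The derivation rule can be sketched as follows; resolve_persistent is a hypothetical helper name, and the actual resolve phase is part of the product:

```python
def resolve_persistent(vm_template, packages):
    """Derive the persistent flag for a vm-template: a directly set value
    wins; otherwise the template is persistent if any associated package
    declares persistent true."""
    if 'persistent' in vm_template:          # direct marking is final
        return vm_template['persistent']
    return any(pkg.get('persistent', False)  # indirect: derived from packages
               for pkg in packages)
```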
"vm-templates": [
{
"persistent":true,
"scaling": {
"min": 1,
"max": 1
},
"name": "Caching_Primary_node",
"roles": [
{
"depends": [{
"role": "Caching_Worker_node.Caching"
}],
"type": "CachingPrimary_node",
"name": "Caching",
"parms":{
"PASSWORD": "$XSAPassword"
}
}
],
"packages": [
"CACHING"
]
},
Package configuration is specified in the config.json file as
follows: {
"name" : "db2",
"version" : "1.0.0.0",
"packages" : {
"DB2" : [{
"requires" : {
"arch" : "x86_64",
"memory" : 0},
"persistent" : true,
"parts" : [
{"part" : "parts/db2-9.7.0.3.tgz",
"parms" : {
"installDir" : "/opt/ibm/db2/V9.7"}},
{"part" : "parts/db2.scripts.tgz"}]
}]
}
}

Instance Console
You can manage virtual application instances from the Operation tab of the Instance Console. The tab displays the roles in a selected virtual application instance, and each role provides actions that the underlying plug-ins define for it, such as retrieving data, applying a software update, or modifying a configuration setting. Two types of actions are available:
- Operation
- For an operation, the action affects only current deployed roles. An operation is defined in operation.json.
- Configuration
- For a configuration, attribute changes are saved in the topology. If a virtual machine is restarted or more virtual machines are deployed, the configuration attribute value is applied to these virtual machines. A configuration is defined in tweak.json.
Operation
- Define an operation. Create a JSON file named operation.json in the plugin/appmodel directory. Each operation.json file must contain a JSONObject. The following code example shows a part of the WebSphere Application Server operation.json file, where the key is the role type WAS, and the value is a JSONArray that contains these operation definitions:
where"WAS": [ { "id": "setWASTrace", "label": "TRACE_LEVEL_LABEL", "description": "TRACE_LEVEL_DESCRIPTION", "script": "debug.py setWASTrace", "attributes": [ { "label": "TRACE_STRING_LABEL", "type": "string", "id": "traceString", "description": "TRACE_STRING_DESCRIPTION" } ] },- script defines the operation debug.py script that is started when the operation is submitted. The operation script name can also be followed by a method name such as setWASTrace that is included in the previous code sample. The method name can be retrieved later in the operation script. The operation script must be placed under the role scripts path, for example, plugin/parts/was.scripts/scripts/WAS.
- attributes define the operation parameters that you must input. The operation parameters can be retrieved later by the operation script.
Attributes for operations against multiple instances.
If a role has more than one instance, you can use the following attributes in the operation definition to control how an operation is applied to the instances. These attributes are validated only if a role has more than one instance:
- rolling
- Determines whether an operation is applied sequentially or concurrently on instances.
  - Apply an operation concurrently by setting "rolling": false. This setting is the default.
  - Apply an operation sequentially by setting "rolling": true. You can configure a group update by adding rolling_config with the group_size attribute. If group_size is set, the instances in a cluster are divided into the specified number of groups, and the operation is invoked group by group. For example, if you set the following attributes:

```json
{
  "rolling": true,
  "rolling_config": {
    "group_size": 2
  }
}
```

    when the operation is invoked, the instances are divided into two groups. The operation is first performed on group one. After the operation completes on all of the instances in group one, the operation is invoked on the instances in group two.
- target
- Determines whether an operation is applied on a single instance or on all instances.
  - Apply an operation on all instances by setting "target": "All". This setting is the default.
  - Apply an operation on a single instance by setting "target": "Single".
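As a rough illustration of the group_size semantics described above. The platform performs this division internally; how it balances uneven groups is an assumption here, and split_into_groups is a hypothetical helper, not a platform API.

```python
# Sketch of dividing role instances into group_size groups, as
# rolling_config does: with group_size=2 the instances form two groups
# and the operation is invoked group by group. The balancing of uneven
# groups shown here is an assumption for illustration.

def split_into_groups(instances, group_size):
    n = max(1, min(group_size, len(instances)))
    base, extra = divmod(len(instances), n)
    groups, start = [], 0
    for i in range(n):
        # Earlier groups receive one extra instance when the split is uneven.
        size = base + (1 if i < extra else 0)
        groups.append(instances[start:start + size])
        start += size
    return groups

groups = split_into_groups(['was.1', 'was.2', 'was.3', 'was.4', 'was.5'], 2)
```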
Setting a particular role status for an operation.
By default, when an operation is being performed, the role status is set to CONFIGURING and then set back to RUNNING when the operation is complete. This change in status can sometimes stop the application itself. Some operations, such as exporting logs, do not require a role to change its status from RUNNING. For these types of operations, you can explicitly set the role status to use when the operation starts. For example, to keep the role status as RUNNING when the operation starts, add the following attribute to the operation definition:

```json
"preset_status": "RUNNING"
```

The role status remains RUNNING unless an error occurs during the operation. See the WebSphere Application Server operation.json file for an example.
- Operation script
The operation script can import the
maestro module. The information that is available in the role lifecycle part scripts, such as maestro.role, maestro.node, maestro.parms, and maestro.xparms, can be retrieved in the same way in the operation script. Also, all of the utility methods, such as download and downloadx, can be used. Parameters that are configured in the deployment panel are passed into the script and are retrieved from maestro.operation['parms']. The method name that is defined in the operation.json file is retrieved from maestro.operation['method'], and the operation ID is retrieved from maestro.operation['id'].
- File download and upload
- All downloaded artifacts must be placed under the fixed root path. The operation script can get the root path from
maestro.operation['artifacts_path']. To specify a file for later download, insert maestro.return_value = "file://key.p12" in the script. The prefix file:// indicates that a file is available for download. After the script completes, the deployment panel displays a link to download the file. Uploaded files are placed in a temporary folder under the deployment path in the storehouse. The operation script retrieves the full storehouse path of an uploaded file, for example:

```
uploaded_file_path = maestro.operation['parms'][{parm_name}]
```

After the file path is retrieved, the maestro.download() method downloads the file. When the operation is complete, the temporary files in the storehouse are deleted. When the operation script interacts with the kernel services and the storehouse, it should use the authorization token that is passed from the user interface rather than the agent token, so that the operations can be audited later. The operation script must use maestro.operation["user_token"] to retrieve the user token that is passed from the user interface, which contains the user UUID. You can use it later in communication with the kernel services and the storehouse. For example, to upload a file into the storehouse, upload_if_new(url, file) uses the agent_token to generate the authorization header, whereas upload_if_new(url, file, user_token) uses the passed-in user_token to generate the header.
- Operation result
You can specify the result of an operation by setting maestro.operation['successful'] = True/False. The default value is True. After the operation script completes successfully, the user interface displays the result as SUCCESS or FAIL. If the script fails, for example with a return code != 0, the user interface displays the result as ERROR, and the corresponding role changes to the ERROR state. If you want to return a more meaningful result, set maestro.operation['return_value'] = "my return value". The return value is displayed on the user interface when the script completes.

Use a try/except statement around the operation script body to prevent the role from entering an unrecoverable ERROR status. If an exception occurs and is caught, you can set maestro.operation['successful'] = False to indicate that the operation did not complete successfully, while the role status remains RUNNING.
- Depends-on role operation
- In some scenarios, one role update causes another role to also need an update. This scenario is called the depends-on role operation. For example, if DB2 changes a password, WebSphere Application Server must also update its data source configuration. In the operation script, export the changed value, such as:
maestro.export['DB_PASSWORD'] = "<XOR>UIK8CNz". Then, change the changed.py script of the depends-on role to add code that handles the update:

```python
if len(updated) == 1:
    myrole = updated[0]
    if deps[myrole]['is_export_changed']:
        print 'export data changed'
    else:
        print 'only status changed'
```

Running scripts concurrently
By default, all lifecycle scripts and operation scripts run serially on a role to avoid conflicts. You can also run scripts concurrently, but with some limitations on the scripts. For scripts that run in parallel, all maestro.* variables are read-only, except maestro.operation and maestro.tasks. For an operation that must run concurrently but that also writes to maestro.* variables, keep the code that can safely run in parallel in the operation script, and schedule a task to run the code that must write to maestro.* variables.

The following attributes are used to configure parallel running of scripts:
- parallel
- If set to
true, scripts run concurrently. If set to false, scripts run serially.
- For non-parallel operations, the response from an operation returns immediately after operation information is added into an operation queue. When scripts run in parallel, the response does not return until all the operation scripts complete. For some scripts that take a long time to run, the default timeout setting might cause a timeout to occur before the operation script can complete. To adjust for this issue, you can specify an appropriate timeout value in seconds.
- Testing operation scripts
- During plug-in development, you can test your operation scripts incrementally.
- Deploy the application.
- Log on to the virtual machine with the scripts that you want to update.
- Replace the operation scripts with updated versions.
- Trigger the operation script by running the operation from the deployment panel. The update takes effect immediately and the updated operation script runs.
Configuration
- Define a configuration.
Add the tweak.json file under the plug-in plugin/appmodel folder to specify which configuration parameters can be changed during run time. This means that some parameters in the topology model can be changed and validated at run time. Each tweak.json file is a JSONArray; each object describes a parameter that can be tweaked in the topology model. For example, in the WebSphere Application Server plug-in, add the following code example to a tweak.json file so that the parameter ARCHIVE under the WAS role can be tweaked:

```json
{
  "id": "WAS.ARCHIVE",
  "label": "WAR/EAR File",
  "description": "Specifies the web/enterprise application to be uploaded.",
  "type": "file",
  "extensions": [ "war", "ear" ]
}
```

The value of "id" is composed as {role_type}.{parm_name}. Other attributes, such as "label" and "description", are prepared for the user interface, similar to the definition in the metadata.json file in the /appmodel directory. For a parameter under the depends section, the value of "id" is composed as {role_type}.{depends_role_type}.{parm_name}:

```json
{
  "id": "WAS.DB2.MINPOOLSIZE",
  "type": "number"
}
```

To enable this feature, you must also add the following code to the operation.json file:

```json
{
  "id": "your_id",
  "type": "configuration",
  "label": "CONFIGURATION_LABEL",
  "description": "CONFIGURATION_DESCRIPTION",
  "script": "change.py"
},
```

The "configuration" value indicates that the parameters must be saved.
- Configuration script: change.py
The change.py script is similar to the operation script, except for the following differences:
- The script name must be change.py.
- When you use the
depends role configuration, the change.py script is placed under the plugin/parts/{parts_package}/scripts/{role_type}/{depends_role_type} path.
- For an artifacts-type configuration update, such as updating the WebSphere Application Server ARCHIVE, if the change.py script completes successfully, the artifacts in the temp folder in the storehouse are cloned to the /deployment/{deployment_id}/artifacts path before the temporary files are deleted.
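The overall shape of a change.py script can be sketched as follows. This is a hypothetical illustration: FakeMaestro stands in for the real maestro module that the platform injects at run time, and the applied dictionary stands in for the real reconfiguration work.

```python
# Hypothetical sketch of a change.py configuration script. FakeMaestro
# is a stand-in for the real maestro module injected at run time.

class FakeMaestro(object):
    def __init__(self, parms):
        # The tweaked parameter values arrive like operation parameters.
        self.operation = {'parms': parms, 'successful': True}

applied = {}

def change(maestro):
    try:
        # Apply each changed parameter; applied[] stands in for the real
        # reconfiguration. On success, the platform persists the new
        # values to the storehouse automatically.
        for name, value in maestro.operation['parms'].items():
            applied[name] = value
    except Exception:
        # Report failure without letting the role fall into ERROR state.
        maestro.operation['successful'] = False

m = FakeMaestro({'MINPOOLSIZE': 5})
change(m)
```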
- Configuration for a non-role component, such as remote DB2 or Tivoli® Directory Service.
- If the transformer needs to support configuration of such a component, follow the wasxdb2 example:
In XDB2Transformer.java, change:

```java
result.put("service", attributes);
```

to:

```java
result.put("attributes", attributes);
result.put("service", prefix);
```

In WASXDB2Transformer.java, change:

```java
JSONObject serviceParms = (JSONObject) targetFragment.get("service");
```

to:

```java
JSONObject serviceParms = (JSONObject) targetFragment.get("attributes");
```

and add the following lines:

```java
// WASxDB2 acts as an extension, not a dependency (so, no role defined)
depend.put("type", "xDB2");
depend.put("service", targetFragment.get("service"));
```

- Configuration for multi-links between two components, such as
remote WebSphere Application
Server and Tivoli Directory
Service. Besides the tweak.json file, the transformer needs specific code to handle this scenario. The target topology segment should look like the following example:

```json
"depends": [
  {
    "role": "User_Registry-tds.TDS1",
    "deps_key_id": "WAS.xLDAP.xLDAP_ROLE_MAPPING",
    "parms": {
      "manager": {
        "SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None",
        "GROUP_xLDAP_ROLE_MAPPING": "manager",
        "xLDAP_ROLE_MAPPING": "manager",
        "USER_xLDAP_ROLE_MAPPING": ""
      },
      "employee": {
        "USER_xLDAP_ROLE_MAPPING": "",
        "GROUP_xLDAP_ROLE_MAPPING": "employee",
        "xLDAP_ROLE_MAPPING": "employee",
        "SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None"
      }
    }
  }
]
```

The parms value is a nested structure, and you must specify a "deps_key_id" as the key for the subgroups in parms. You can use Java-based code or a template to complete the transformer. In the parts scripts changed.py and change.py, you can retrieve the parameters with a for loop, as follows:

```python
for key in parms:
    roleParms = parms[key]
    print key
    print roleParms['USER_xLDAP_ROLE_MAPPING']
    print roleParms['GROUP_xLDAP_ROLE_MAPPING']
    print roleParms['SPECIALSUBJECTS_xLDAP_ROLE_MAPPING']
```
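The same retrieval can be exercised standalone; the parms literal below mirrors the nested topology fragment above, and group_mappings is a hypothetical collector added for illustration.

```python
# Standalone illustration of walking the nested depends parms; the data
# mirrors the topology fragment shown above.
parms = {
    'manager': {
        'xLDAP_ROLE_MAPPING': 'manager',
        'GROUP_xLDAP_ROLE_MAPPING': 'manager',
        'USER_xLDAP_ROLE_MAPPING': '',
        'SPECIALSUBJECTS_xLDAP_ROLE_MAPPING': 'None',
    },
    'employee': {
        'xLDAP_ROLE_MAPPING': 'employee',
        'GROUP_xLDAP_ROLE_MAPPING': 'employee',
        'USER_xLDAP_ROLE_MAPPING': '',
        'SPECIALSUBJECTS_xLDAP_ROLE_MAPPING': 'None',
    },
}

group_mappings = {}
for key in parms:                 # one subgroup per deps_key_id entry
    role_parms = parms[key]
    group_mappings[key] = role_parms['GROUP_xLDAP_ROLE_MAPPING']
```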
as illustrated in Figure 2: Phases of deployment.