IBM Support

Release Notes for Tivoli Workload Scheduler, version 8.5.1

Release Notes


Abstract

This document supplies the release notes for IBM® Tivoli® Workload Scheduler, version 8.5.1.

Content

After December 10th 2013, the Tivoli Workload Scheduler installation might fail because the embedded WebSphere Application Server automatically renews the default Tivoli Workload Scheduler certificates. For more information about the steps to perform before installing to avoid the failure, see the following technote: TWS 8.5, 8.5.1 and 8.6 GA release installation failure after December 10th. If you have already encountered the failure, perform the steps described in the technote and restart the installation process.

The Release Notes for Tivoli Workload Scheduler, version 8.5.1 contain the following topics:

To download the appropriate package for your operating system, see the Tivoli Workload Scheduler download page.

For detailed system requirements for all operating systems, see the Detailed System Requirements page.

To access the Tivoli Workload Scheduler documentation, see the online Information Center.


What is new in this release

This section summarizes the product enhancements included in this version of IBM Tivoli Workload Scheduler:

Dynamic scheduling capability

The new dynamic scheduling capability is provided by integrating the product previously known as IBM Tivoli Dynamic Workload Broker in Tivoli Workload Scheduler and making it a Tivoli Workload Scheduler component. The new dynamic workload broker component adds the possibility at run time to dynamically associate your submitted workload (or part of it) to the best available resources.

The Tivoli Workload Scheduler V8.5.1 installation process includes the option to install the dynamic scheduling capability. If you select this option, you get the functionality previously available with Tivoli Dynamic Workload Broker version 1.2 built into Tivoli Workload Scheduler.

The dynamic scheduling capability helps you maintain business policies and ensure service level agreements by:

  • Automatically discovering scheduling environment resources
  • Matching job requirements to available resources
  • Controlling and optimizing use of resources
  • Automatically following resource changes
  • Requesting additional resources when needed

Dynamic workload broker is deployed in the same embedded WebSphere Application Server instance used by the master domain manager and backup masters, shares the Tivoli Workload Scheduler database, and follows the failover mechanism implemented by the switchmanager command. The component no longer relies on the Common Agent Services (CAS) architecture, but uses the newly upgraded Tivoli Workload Scheduler agent for the scanning of resources within the Tivoli Workload Scheduler network.

You can submit Tivoli Workload Scheduler jobs, including jobs defined to run on extended agents, as well as J2EE applications (if you selected the option to install the runtime environment for Java jobs at installation time). To schedule workload dynamically, you:

  1. Use the Tivoli Dynamic Workload Console to define the agents you want to use for running workload as logical resources or groups of resources.
  2. Update your Tivoli Workload Scheduler job definitions to set the dynamic workload broker workstation as the destination CPU (this workstation acts as a bridge between the scheduler engine and the pool of resources).
  3. For every Tivoli Workload Scheduler job, add a JSDL (Job Submission Description Language) job definition where you match the job with required resources, candidate hosts, and scheduling and optimization preferences. Use the Job Brokering Definition Console (also provided as part of dynamic workload broker) to do this easily.

When a job is thus submitted, either as part of a job stream in the plan or through ad hoc submission, Tivoli Workload Scheduler checks the job requirements, the available resources, and the related characteristics and submits the job to the resource that best meets the requirements.
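For illustration only, step 3 might produce a JSDL fragment along these lines; the namespace URI, element names, and values shown here are assumptions, and real definitions are best generated with the Job Brokering Definition Console:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: names, the namespace URI, and values are
     assumptions; generate real JSDL with the Job Brokering Definition Console. -->
<jsdl:jobDefinition name="nightlyBackup"
    xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl">
  <jsdl:resources>
    <!-- Candidate hosts the broker matches against available resources at run time -->
    <jsdl:candidateHosts>
      <jsdl:hostName>backup-host-1</jsdl:hostName>
      <jsdl:hostName>backup-host-2</jsdl:hostName>
    </jsdl:candidateHosts>
  </jsdl:resources>
</jsdl:jobDefinition>
```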

New command to manage logical resources

When installed with dynamic scheduling capabilities, Tivoli Workload Scheduler version 8.5.1 provides a new command to manage resources.

Using this command, you can manage logical resources, groups of resources, and computers. In particular, you can create, modify, delete, and query resources. You can also set computers, logical resources, or groups of resources online and offline.

There are many scenarios that require automated procedures to update resources. With this command, you have a flexible way to model applications that run in a cluster environment as logical resources: you define one instance of a logical resource of the same type for each clustered node and automatically set offline the logical resource that represents the application component in standby.

Another example involves scenarios where servers or applications must be set offline at specific times of the day, week, or month for scheduled maintenance activities, or where different types of workload must be assigned to servers during the morning, the afternoon, or the night, or servers must be isolated during a scheduled maintenance period.

Support for Informix Dynamic Server database

In addition to DB2 and Oracle, the Tivoli Workload Scheduler database on Linux workstations can now also be Informix Dynamic Server.

Note that using Informix Dynamic Server excludes the dynamic scheduling capability.

FIPS 140-2 compliance

Federal Information Processing Standards (FIPS) are standards and guidelines issued by the National Institute of Standards and Technology (NIST) for federal government computer systems. FIPS are developed when there are compelling federal government requirements for standards, such as for security and interoperability, but acceptable industry standards or solutions do not exist. Government agencies and financial institutions use these standards to ensure that the products conform to specified security requirements.

Tivoli Workload Automation has been enhanced to allow the use of cryptographic modules that are compliant with the Federal Information Processing Standard FIPS-140-2. Certificates used internally are encrypted using FIPS-approved cryptography algorithms. FIPS-approved modules can optionally be used for the transmission of data.

To satisfy the FIPS 140-2 requirement, you must use IBM Global Security Kit (GSKit) version 7d runtime dynamic libraries instead of OpenSSL. GSKit uses IBM Crypto for C version 1.4.5, which is FIPS 140-2 level 1 certified under certificate number 755. IBM Java JSSE FIPS 140-2 Cryptographic is another module used by Tivoli Workload Automation; it is certified under certificate number 409.

GSKit is automatically installed with Tivoli Workload Scheduler. It is based on dynamic libraries and offers several utilities for certificate management.

To comply with FIPS, all the components of Tivoli Workload Automation that you use must be FIPS-compliant. You must use Tivoli Dynamic Workload Console or the command-line client as the interface to Tivoli Workload Scheduler. The Job Scheduling Console is not FIPS-compliant. Also, you must use DB2 as your Tivoli Workload Scheduler database.

If FIPS-compliance is not a concern to your organization, you can continue to use SSL for secure connections across your network.

New tool for capturing configuration data

The metronome tool is replaced by another program that captures Tivoli Workload Scheduler configuration data and requires fewer resources to run. The new script is named tws_inst_pull_info, is included with the product, and is described in the information center.

You can use tws_inst_pull_info to produce information about your Tivoli Workload Scheduler environment and get a snapshot of the Tivoli Workload Scheduler database and configuration data on the master domain manager, saving them as a package with an associated date.



Tivoli Dynamic Workload Console enhancements

This section summarizes the product enhancements included in this version of IBM Tivoli Dynamic Workload Console.

Product update beacon keeps you always up-to-date with product information

This feature keeps you constantly informed about product news and updates. Every time a product update is released, a light bulb appears on your display. Click the light bulb and a popup opens, describing the update and giving you a direct link to it. If you do not want to receive these notifications, you can easily unsubscribe at any time.

Graphical views

A set of new graphical views helps Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS job schedulers and scheduling operators model, monitor, and manage jobs and job streams. They are:

Job stream view (for modeling)
A graphical extension to the Workload Designer that can be used by developers as a graphical editor to create and modify job streams in the database.
Job stream view (for monitoring)
Can be used by operators to open a graphical representation of any job stream in plan, to work with it and with all its dependencies.
Impact view (for monitoring)
Can be used by operators to perform impact and root-cause analysis by navigating successors and predecessors through an expansible graphical representation of job streams and jobs in plan.
Plan view (for monitoring)
Can be used by operators to select a set of job streams in plan and see them and their mutual dependencies in a graphical representation.

Visual help

Short demos can be launched directly from Tivoli Dynamic Workload Console panels to show you the main features available from those panels. By clicking the camera icon on the toolbar, you can choose which demo to launch from the displayed menu.

Bookmark your tasks

Using this feature, you can now save your Monitor jobs and job streams tasks in the list of your favorite bookmarks. As a result, you can directly launch your saved tasks from a browser session.

 

 

What has changed in the IBM Tivoli Workload Automation version 8.5.1 publications

These are the changes made to the Tivoli Workload Automation publications, which are divided into a series of libraries:

  • The welcome page of the Information center has been renewed and enriched with new content.
  • New demos have been published in the Scenarios and How To Demos page, covering usability, getting started and feature-specific topics.
  • New getting started information has been added to the Planning and Installation Guide and more detailed configuration information has been added to the Administration Guide.

Tivoli Workload Automation cross-product publications

The following summarizes the changes made to the Tivoli Workload Automation cross-product publications:

Scenarios and How To Demos
These are live demos that describe at a glance how to accomplish key tasks. They can help you become more familiar with the product and understand how it works.
Tivoli Workload Scheduler distributed library

The following summarizes the changes made to the Tivoli Workload Scheduler distributed library:

Scheduling Workload Dynamically
This new publication describes how to dynamically allocate resources to run your workload using the services of the dynamic workload broker component of Tivoli Workload Scheduler. Dynamic workload broker is an on-demand scheduling infrastructure, which provides dynamic management of your environment.
Tivoli Workload Scheduler for z/OS library

The following summarizes the changes made to the Tivoli Workload Scheduler for z/OS library:

Scheduling End-to-end with Fault Tolerance Capabilities
This publication, whose name has changed from Scheduling End-to-end, explains how to set up IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler to generate an end-to-end scheduling environment.
Scheduling End-to-end with z-centric Capabilities
This new publication describes how to set up IBM Tivoli Workload Scheduler for z/OS to generate a lightweight end-to-end scheduling environment in which to schedule and control workload from the mainframe to distributed systems.

For full details of what is new, see IBM Tivoli Workload Automation Overview.


Interoperability tables

Support at level of older product or component: For all products and components described in this section, the level of support is at that of the older product or component.

In the tables in this section the following acronyms are used:

TWS: Tivoli Workload Scheduler
MDM: Tivoli Workload Scheduler master domain manager
BKM: Tivoli Workload Scheduler backup master domain manager
DM: Tivoli Workload Scheduler domain manager
FTA: Tivoli Workload Scheduler fault-tolerant agent
z/OS Conn: Tivoli Workload Scheduler z/OS® connector feature
z/OS Controller: Tivoli Workload Scheduler for z/OS Controller
JSC: Job Scheduling Console
TDWC: Tivoli Dynamic Workload Console
TWS for Apps: Tivoli Workload Scheduler for Applications
FP: Fix pack

Tivoli Workload Scheduler: compatibility

 

            | MDM 8.5.1, 8.5, 8.4, 8.3 | DM 8.5.1, 8.5, 8.4, 8.3 | FTA 8.5.1, 8.5, 8.4, 8.3 | z/OS Conn 8.5.1, 8.5, 8.3 | z/OS Controller 8.5.1, 8.3, 8.2 | TWS for Apps 8.5, 8.4, 8.3
MDM 8.5.1   | Y | Y | Y |   |   | Y
DM 8.5.1    | Y | Y | Y | Y | Y | Y
Agent 8.5.1 | Y | Y | Y | Y | Y | Y

Tivoli Dynamic Workload Console: compatibility

 
TDWC 8.5 and 8.5.1:
  • MDM/DM/FTA 8.5.1, 8.5: Y
  • MDM/DM/FTA 8.4: Y, with Fix Pack 2 (or later fix packs) installed on the Tivoli Workload Scheduler V8.4 component.
  • MDM/DM/FTA 8.3: Y, with Fix Pack 5 (or later fix packs) installed on the Tivoli Workload Scheduler V8.3 component.

TDWC 8.4, with or without any fix packs:
  • MDM/DM/FTA 8.4: Y, with or without any fix packs installed on the Tivoli Workload Scheduler V8.4 component.
  • MDM/DM/FTA 8.3: Y, with either:
    • Fix Pack 4 (or later fix packs) installed on the Tivoli Workload Scheduler V8.3 component, or
    • Fix Pack 3 plus Fix PK47309 installed on the Tivoli Workload Scheduler V8.3 component.

TDWC 8.3:
  • MDM/DM/FTA 8.3: Y, with Fix Pack 2 (or later fix packs) installed on the Tivoli Workload Scheduler V8.3 component.

Note:
The indicated fix pack levels are the minimum required. It is strongly recommended that you maintain all Tivoli Workload Scheduler components at the latest fix pack level.

Tivoli Workload Scheduler Job Scheduling Console: compatibility

 
        | MDM/DM/FTA 8.5.1, 8.5 | MDM/DM/FTA 8.4 | MDM/DM/FTA 8.3 | z/OS Conn 8.3 | z/OS Controller 8.3 | z/OS Controller 8.2 | z/OS Controller 8.1
JSC 8.4 | Y (1) | Y | Y (3) | Y (4) | Y (5) | Y (6) | Y
JSC 8.3 | Y (1) | Y (2) | Y | Y | Y (5) | Y (6) | Y
Notes
  1. The list below summarizes the behavior and limitations in using the Job Scheduling Console V8.4 or V8.3 with a master domain manager V8.5 or later:
    • Limitations in using variable tables:
      • You cannot manage variable tables.
      • You can manage only the parameters defined in the default variable table.
      • You cannot create parameters when the default variable table is locked.
      • You can perform only the browse action on objects that address variable tables. For example, to modify the properties of a job stream that references a variable table you must use the command line or the Tivoli Dynamic Workload Console.
      • During the submission, you cannot change the properties of a job stream that uses a variable table.
      • You can perform only the browse action on jobs that are flagged as critical.
    • Limitations in using the integration with Tivoli Dynamic Workload Broker:
      • Both in the database and in the plan, you cannot perform any operations on workstations defined as broker workstation; you can only browse them. They are shown as standard agent workstations. In particular, the Create Another function, although not disabled, must not be used, because it does not produce a copy of the original broker workstation definition, but it produces a standard agent workstation definition.
      • Both in the database and in the plan, you cannot perform any operations on jobs that manage Tivoli Dynamic Workload Scheduler jobs; you can only browse them. They are shown as UNKNOWN. In particular, the Create Another function, although not disabled, must not be used because it does not produce a copy of the original job definition, but it produces a job definition without the workload broker specifications.
  2. For firewall support, Java™ Virtual Machine 1.4.2 SR9 must be installed on the computer where the Job Scheduling Console is installed
  3. For firewall support, Fix PK47309 must be installed on the Tivoli Workload Scheduler component
  4. For firewall support, Fix Pack 3 of the z/OS Connector 8.3 must be installed
  5. With fix PK41611 installed on the z/OS controller
  6. If you have applied the standard agent integration PTFs (UK17981, UK18052, UK18053) you must also install fix PK33565 on the z/OS Controller

Tivoli Workload Scheduler Command-line client: compatibility

 

                                 | MDM/BKM 8.5.1, 8.5 | MDM/BKM 8.4 | MDM/BKM 8.3
Command-line client, version 8.5 | Y |   |  
Command-line client, version 8.4 |   | Y |  
Command-line client, version 8.3 |   |   | Y

Tivoli Workload Scheduler for Applications: compatibility

 
                      | MDM/DM/FTA 8.5.1, 8.5, 8.4, 8.3 | JSC 8.4, 8.3 | TDWC 8.5.1, 8.5, 8.4, 8.3
TWS for Apps 8.4, 8.3 | Y | Y | Y
Note:
It is recommended that you upgrade Tivoli Workload Scheduler for Applications to the latest fix pack version before upgrading the Tivoli Workload Scheduler engine to version 8.5 or later. This applies to all Tivoli Workload Scheduler for Applications versions up to version 8.4. If you have already upgraded the Tivoli Workload Scheduler engine to version 8.5 or later and now need to upgrade any version of Tivoli Workload Scheduler for Applications up to version 8.4, refer to Tech Note 1384969.

Tivoli Workload Scheduler database, version 8.5.1: compatibility

 
                                            | Engine V8.5, V8.5.1 | Engine V8.4 | Engine V8.3
Database version V8.5 freshly installed     | Y |   |  
Database version migrated from V8.4 to V8.5 | Y | Y |  
Database version migrated from V8.3 to V8.5 | Y | Y | Y
Note:
A database at a given level can be used with a freshly installed engine only if that engine is at the same level as the database. This means, for example, that a version 8.5.1 database that you migrated yesterday from version 8.4 cannot be used with a version 8.4 engine that you install today, but only with a version 8.4 engine that had already been installed before the migration. It can also be used with any version 8.5.1 engine, whenever installed.

 

Fix packs released for this version of the product

To find the fix packs released for Tivoli Workload Scheduler and the Dynamic Workload Console, including links to the related readme files, see Fixes by version.


Installation limitations and problems, and their workarounds

 

The following are software limitations that affect the installation of Tivoli Workload Scheduler on all platforms:

On the installation window some buttons seem to disappear if you click Next and then Back

In the installation window, if you choose Install a new instance, click Next, and then Back, in the previous window you can no longer see the buttons Install a new instance or Use an existing instance.

Workaround:

When you click Back, the two buttons mentioned do not disappear; they are just moved a little further down in the window. You must scroll down to see the buttons again.

Parentheses () are not permitted in the Tivoli Workload Scheduler installation path

When installing Tivoli Workload Scheduler, you cannot specify parentheses in the installation path field.

In the interactive InstallShield wizard on UNIX® and Linux® platforms, any passwords you enter are not validated at data input

No password validation is performed in the interactive InstallShield wizard on UNIX and Linux platforms at data input. If you make an error in the password, it is only discovered when the installation wizard attempts to use the password during the performance of an installation step.

Workaround:

Rerun the installation, using the correct value for the password.

On Red Hat Enterprise Linux V5.0 the automount feature does not work

On Red Hat Enterprise Linux V5.0 after inserting the DVD and double-clicking the desktop icon, the following message is displayed:

./setup.sh: /bin/sh: bad interpreter: Permission denied

This is because the automount feature that mounts the DVD does so with the option -noexec, which is not compatible with the way Tivoli Workload Scheduler needs to use the DVD.

Workaround:

To solve the issue, unmount the DVD and manually remount it using the following command:

mount /dev/scd0 /media

Multiple installations on the same workstation with the same TWSuser cannot exist (52946)

Two installations cannot coexist on the same workstation if they have the same TWSuser name, where one is a local user account and the other is a domain user account.

Workaround:

Install the two instances of Tivoli Workload Scheduler with different TWSuser names.

In a remote silent installation on Windows 64, the installation fails (52485)

During a remote silent installation on Windows 64, the installation fails due to a defect in InstallShield. On Windows 64, it is not possible to perform a remote Tivoli Workload Scheduler silent installation using a scheduling product.

Workaround:

Use the graphical desktop to install Tivoli Workload Scheduler in silent mode.

Installation fails when DB2 administrator password contains parentheses

If you try to silently install Tivoli Workload Scheduler when the DB2 administrator password contains parentheses "()", the installation fails with the AWSJIS038E error.

Workaround:

To install Tivoli Workload Scheduler correctly when the DB2 password contains special UNIX shell characters, install the provided fix as follows:

  1. Copy the Tivoli Workload Scheduler version 8.5.1 GA image to disk.
  2. Make the Tivoli Workload Scheduler, version 8.5.1 Fix Pack 01 (or higher) CD image available to the system.
  3. Back up the TWS/platform/tws_tools/_createdb_root.sh file.
  4. Copy all the files provided in the fix pack CD under GA_fixes directory to the GA setup directory under TWS\platform, replacing the existing files.
  5. Launch the installation again.
Alternatively, you can manually edit the file as follows:
  1. Open the installation script: TWS CD/tws_tools/_createdb_root.sh
  2. Insert single quotes (') around the seventh input parameter, $7, as follows:
    .......
    su - $DB2_ADMINISTRATOR -c "cd $TWS_TEMPDIR/scripts &&
    ./dbsetup.sh $1
    $2 $3 $4 $5 $6 '$7' $8 $9"
    ...
    ...
    ...

    su - $DB2_ADMINISTRATOR -c "cd $TWS_TEMPDIR/scripts &&
    ./dbmigrate.sh $1_DB $6 '$7'"

  3. Launch the installation again.
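The effect of the added quotes can be reproduced with a small standalone shell experiment, unrelated to the product scripts; the password value is invented:

```shell
#!/bin/sh
# A password containing parentheses, as in the failing scenario.
pass='secret(123)'

# The installer rebuilds a command line inside an su -c "..." string.
# Left unquoted, the parentheses would be parsed as shell syntax and the
# command would fail; wrapped in single quotes, the parameter survives
# as one literal word.
cmd="echo '$pass'"
result=$(sh -c "$cmd")
echo "$result"    # prints secret(123)
```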
 

Upgrade notes

Upgrade procedure for platforms out of support

To upgrade master domain managers or backup masters running on platforms that are no longer supported by version 8.5 (such as AIX® 32 bit kernel), run the parallel upgrade procedure on a supported system. Upgrade procedures are documented in the IBM Tivoli Workload Scheduler: Planning and Installation Guide (44363).

Direct backup master upgrade modifies the security file (52390)

The upgrade of the backup master modifies the security file when centralized security is enabled. In this scenario, the backup master does not function after the upgrade.

Workaround:

Before beginning the backup master upgrade, complete the following steps:

  1. Set the attribute AUTOLINK to OFF for the backup master workstation.
  2. Run JnextPlan -for 0000 to include the backup master workstation changes.
  3. Make a backup of the security file (/TWS/Security) into a temporary directory.

Run the upgrade and perform the following steps after the upgrade:

  1. Substitute the upgraded security file (/TWS/Security) with the copy previously backed up in the temporary directory.
  2. Set the attribute AUTOLINK to ON for the backup master workstation.
  3. Make a backup of the security file (/TWS/Security) into a temporary directory.
  4. Run JnextPlan -for 0000 to include the backup master workstation changes.
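A minimal sketch of the backup in step 3, assuming TWS_HOME points to the TWSuser home directory (both paths are assumptions; adjust them to your installation):

```shell
#!/bin/sh
# Sketch only: save a dated copy of the Security file before the upgrade.
# TWS_HOME and BACKUP_DIR are assumed paths; override them as needed.
TWS_HOME=${TWS_HOME:-/opt/IBM/TWA/TWS}
BACKUP_DIR=${BACKUP_DIR:-/tmp/tws_security_backup}

mkdir -p "$BACKUP_DIR"
cp "$TWS_HOME/Security" "$BACKUP_DIR/Security.$(date +%Y%m%d)"
```

After the upgrade, copy the saved file back over the Security file, as in step 1 of the post-upgrade list.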

Upgrade fails on integrated installation of Tivoli Workload Scheduler components on Windows (51013)

On Windows, the upgrade fails on an integrated installation of two Tivoli Workload Scheduler components sharing an instance of WebSphere Application Server. In this scenario, two instances of a Tivoli Workload Scheduler component are installed with different users and one of them is configured in LDAP. The upgrade fails because the WebSphere Application Server service does not stop.

Workaround:

Stop the WebSphere Application Server service and retry the upgrade.


Software limitations and workarounds

 

The following are software limitations that affect Tivoli Workload Scheduler, and their workarounds (when available):

Runtime libraries required on AIX 5.3

To ensure the correct operation of Tivoli Workload Scheduler on your AIX 5.3 system, make sure you run xlC.aix50.rte.6.0.0.3 or later (9.0.0.1 or later if using version 9 runtime).

If you run Technical Level 5, bos.rte.libc:5.3.0.53 or greater is required.

Interoperability problem on Linux platforms

On Linux platforms there are connectivity problems between Tivoli Dynamic Workload Console version 8.5 and older Tivoli Workload Scheduler versions configured with the Custom user registry.

The connection does not work because of a missing variable in the configuration of the embedded WebSphere Application Server on Tivoli Workload Scheduler.

If you run Tivoli Workload Scheduler version 8.4 (where Custom user registry is the default configuration value on Linux), or if you use PAM authentication, you can expect to experience these connectivity problems with Tivoli Dynamic Workload Console version 8.5.

To check if the Tivoli Workload Scheduler version 8.4 engine on the Linux system is configured based on the Custom user registry, do the following on the master domain manager of the Tivoli Workload Scheduler version 8.4 engine:

  1. Run the showSecurityProperties.sh script from <TWS_home>/wastools and check if the value of the activeUserRegistry property is Custom. Also, take note of the value of LocalOSServerREALM.
  2. If the value of activeUserRegistry is not Custom, there are no connectivity problems and you need do nothing. If the value is Custom, then you need to apply the following workaround on Tivoli Workload Scheduler:
    1. Edit the <TWS_home>/appserver/profiles/twsprofile/config/cells/DefaultNode/security.xml file as follows:
      1. Find the activeUserRegistry key. The value is similar to:
        CustomUserRegistry_<series of numbers>
        
      2. Search the userRegistries section of the file, where the value of xmi:id is the same as the value of the activeUserRegistry key.
      3. In the same section, after the LocalOSServerPassword variable declaration, add the following string:
        realm="LocalOSServerREALM value"

        where LocalOSServerREALM value is the value you took note of in step 1.

    2. Save the file.
    3. Run stopWas.sh and then startWas.sh from <TWS_home>/wastools to restart the embedded WebSphere Application Server (44551).
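As an illustration of steps 1 through 3 above, the relevant userRegistries entry might change as follows; the xmi:id, serverId, password, and realm values are invented for the example:

```xml
<!-- Before (sketch): -->
<userRegistries xmi:id="CustomUserRegistry_1234567890"
    serverId="twsuser" serverPassword="{xor}LDo8LTor"/>

<!-- After (sketch): realm added after the password declaration, set to the
     LocalOSServerREALM value noted in step 1 -->
<userRegistries xmi:id="CustomUserRegistry_1234567890"
    serverId="twsuser" serverPassword="{xor}LDo8LTor"
    realm="myhost.mydomain.com:8880"/>
```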

File eif.templ is recreated when migrating from version 8.4 GA to version 8.5

When Tivoli Workload Scheduler version 8.4.0 (the General Availability version without the addition of any fix pack) is migrated to version 8.5, before the embedded WebSphere Application Server is restarted, the file <TWS_home>/appserver/profiles/twsprofile/temp/TWS/EIFListener/eif.templ is removed and replaced with a new one.

This means that, if you had changed the value of property BuffEvtmaxSize in this file, your changes are lost.

If this happens, you must set the value of this property again in the new version of the file. The section Managing the event processor in the IBM Tivoli Workload Scheduler: Administration Guide documents how to do this. (38971)

Note that the new copy of the file is created in the new path, which is: <TWA_home>/eWas/profiles/twaprofile/temp/TWS/EIFListener/eif.templ

Deploying large numbers of event rules

The rule deployment process (run either automatically or with the planman deploy command) performs slowly when you deploy large numbers of new and changed rules (2000 or more).

Workaround

If you need to deploy large numbers of event rules, there are two actions you can take to improve performance:

Use planman deploy with the -scratch option

To deploy large numbers of rules collectively in an acceptable time limit, use planman deploy with the -scratch option (37011).

Increase the Java heap size of the application server

Increase the Java heap size of the application server, as described in the Scalability section of the Performance chapter in the Administration Guide. The critical point at which you should increase your heap size is difficult to calculate, but consider a deployment of several thousand rules as being at risk of an out-of-memory failure.
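As a sketch, the heap settings are held in the jvmEntries element of the application server's server.xml; initialHeapSize and maximumHeapSize are standard WebSphere Application Server attributes (values in megabytes), and the values below are illustrative only:

```xml
<jvmEntries xmi:id="JavaVirtualMachine_1"
    initialHeapSize="256" maximumHeapSize="1024"/>
```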

"Absolute" keyword required to resolve time zones correctly with enLegacyStartOfDayEvaluation set

If the master domain manager of your network runs with the enLegacyStartOfDayEvaluation and enTimeZone options set to yes to convert the startOfDay time set on the master domain manager to the local time zone set on each workstation across the network, and you submit a job or job stream with the at keyword, you must also add the absolute keyword to make sure that the submission times are resolved correctly.

The absolute keyword specifies that the start date is based on the calendar day rather than on the production day. (41192)

Deploy (D) flag not set after ResetPlan command used

The deploy (D) flag is not set on workstations after the ResetPlan command is used.

This does not affect the processing of events, but only the visualization of the flag that indicates that the event configuration file has been received at the workstation.

Workaround

You can choose to do nothing, because the situation will be normalized the next time that the event processor sends an event configuration file to the workstation.

Alternatively, if you want to take a positive action to resolve the problem, do the following:

  1. Create a dummy event rule that applies only to the affected workstations
  2. Perform a planman deploy to send the configuration file
  3. Monitor the receipt of the file on the agent
  4. When it is received, delete the dummy rule at the event processor. (36924 / 37851)

Some data not migrated when you migrate the database from DB2® to Oracle, or to Informix Dynamic Server, or vice versa

Neither of the two migration procedures migrates the following information from the source database:

  • The pre-production plan.
  • The history of job runs and job statistics.
  • The state of running event rule instances. This means that any complex event rules, where part of the rule has been satisfied prior to the database migration, are generated after the migration as new rules. Even if the subsequent conditions of the event rule are satisfied, the record that the first part of the rule was satisfied is no longer available, so the rule will never be completely satisfied. (38017)

Event LogMessageWritten is not triggered correctly

You are monitoring a log file for a specific log message, using the LogMessageWritten event. The message is written to the file but the event is not triggered.

Cause

The SSM agent monitors the log file. It sends an event when a new message is written to the log file that matches the string in the event rule. However, there is a limitation. It cannot detect the very latest message to be written to the file, but only messages prior to the latest. Thus, when message line "n" is written containing the string that the event rule is configured to search for, the agent does not detect that a message has been written, because the message is the last one in the file. When any other message line is written, whether or not it contains the monitored string, the agent is now able to read the message line containing the string it is monitoring, and sends an event for it.

Workaround

There is no direct workaround for this problem. However, in a typical log file, messages are written frequently by one process or another, perhaps every few seconds, and the writing of a subsequent message line triggers the event in question. If you have log files where few messages are written, you might want to write a dummy blank message after every "real" message, to ensure that the "real" message is never the last line in the file for any length of time. (33723)
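The dummy-message workaround can be sketched as follows; the log file path and the message text are illustrative assumptions, not product defaults:

```shell
# Illustrative sketch of the dummy-message workaround; the log file path
# and message text are assumptions, not product defaults.
LOGFILE=/tmp/monitored.log
echo "ERROR: nightly batch failed" >> "$LOGFILE"   # the message the event rule matches
echo "-" >> "$LOGFILE"                             # dummy line, so the matched message is no longer last
tail -n 2 "$LOGFILE"
```

Because the monitored message is no longer the last line in the file, the SSM agent can read it and raise the event.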

Microsoft® Remote Desktop Connection must be used with "/console" option

If you use the Microsoft Remote Desktop Connection to operate Tivoli Workload Scheduler, you must always use it with the "/console" parameter; otherwise Tivoli Workload Scheduler gives inconsistent results.

The planman showinfo command displays inconsistent times (IZ05400)

The plan time displayed by the planman showinfo command might be inconsistent with the time set in the operating system of the workstation. For example, the time zone set for the workstation is GMT+2 but planman showinfo displays plan times according to the GMT+1 time zone.

This situation arises when the WebSphere Application Server Java Virtual Machine does not recognize the time zone set on the operating system.

Workaround:

Set the time zone defined in the server.xml file equal to the time zone defined for the workstation in the Tivoli Workload Scheduler database. Proceed as follows:

  1. Create a backup copy of this file:
    appserver/profiles/twsprofile/config/cells/DefaultNode
         /nodes/DefaultNode/servers/server1/server.xml
  2. Open server.xml with an editor.
  3. Find the genericJvmArguments string and add:
    genericJvmArguments="-Duser.timezone=time zone"
    where time zone is the time zone defined for the workstation in the Tivoli Workload Scheduler database.
  4. Stop WebSphere Application Server.
  5. Restart WebSphere Application Server.
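As an illustration only, after step 3 the JVM entry in server.xml might look like the following; the surrounding element and the example time zone America/Chicago are assumptions, and only the genericJvmArguments attribute is relevant:

```xml
<!-- Illustrative fragment of server.xml: only genericJvmArguments matters
     here; other attributes are omitted, and America/Chicago is an example
     time zone to be replaced with the workstation's database time zone. -->
<jvmEntries xmi:id="JavaVirtualMachine_1"
    genericJvmArguments="-Duser.timezone=America/Chicago"/>
```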
WebSphere Application Server limitations in a pure IPV6 environment when using the Job Scheduling Console or the Tivoli Dynamic Workload Console (35681)

When you install Tivoli Workload Scheduler, the following WebSphere Application Server variables are initialized as follows to allow communication in a mixed IPV4 and IPV6 environment:

java.net.preferIPv6Addresses=false
java.net.preferIPv4Stack=false
If your configuration requires the use of a pure IPV6 environment, or you have specific firewall configuration settings that block IPV4 packets, the connection between the Tivoli Workload Scheduler master domain manager and the Tivoli Dynamic Workload console or the Job Scheduling Console fails.

Workaround:

To establish a connection in this specific environment, you must initialize the variable as follows:

java.net.preferIPv6Addresses=true

by editing the server.xml file in the following path:

$TWS_home/appserver/profiles/twsprofile/config/cells/DefaultNode
    /nodes/DefaultNode/servers/server1
If, instead, you want to use IPV4 communication exclusively, set:

java.net.preferIPv4Stack=true
Different behavior of UNIX and Windows operating systems at springtime daylight savings (94279)

During the springtime daylight savings time change (01:59:59-03:00:00), different operating systems behave differently.

For example:

    Windows

    The command: conman submit job at=02xx is set to 01xx

    HP-UX

    The command: conman submit job at=02xx is set to 03xx

Workaround

Avoid creating jobs that have a start time in the "lost" hour and are due to run on the night of the daylight savings change (a Saturday night in spring).

The writer process on a fault-tolerant agent does not download the Symphony file (22485)

If you delete the Symphony file on a fault-tolerant agent, writer should automatically download it when it next links to the master domain manager. However, this does not happen if you launch conman before the file has downloaded.

Workaround

Delete the Mailbox.msg file; writer then downloads the Symphony file.

File monitor provider events: older event configurations might stay active on workstations after rule redeployment (34103)

If you deployed rules containing event types from the FileMonitor event provider, and then you redeploy rules that no longer include these file monitoring events, you might find that, despite the new configurations, the local monitoring agents still forward the file monitoring events to the event processor. The event processor correctly ignores these events, in accordance with the new configuration specifications deployed on the server; however, a certain amount of CPU time and bandwidth is needlessly wasted.

The status of the local monitoring configuration on the agent is corrected when one of the following occurs:

  • The planman deploy -scratch command is issued
  • The event processing server is restarted
  • Another rule containing an event condition involving the FileMonitor provider is deployed to the agents.
File monitor provider events: if you specify a file path with forward slashes (/) in an event rule definition, on Windows workstations this results in the failed detection of the file. (53843)

This is because the SSM agent does not automatically convert forward slashes to Windows-compatible backslashes when executing the rule.

Workaround

Use backslashes when you specify file paths in the definitions of rules that will run on Windows systems.

Event rule management: Deploy flag is not maintained in renewed symphony (36924)

The deploy flag (D) indicates that a workstation is using an up-to-date package monitoring configuration and can be displayed by running the conman showcpus command. Testing has revealed that the flag is lost from the symphony file when the file is renewed after a JnextPlan or ResetPlan command. Although the event monitoring configuration deployed to the agents is the latest one, and event management works correctly, an incorrect monitoring agent status is shown on the workstations.

Tivoli Dynamic Workload Broker server is not started when the database is down (52307)

If the database is not running, the Tivoli Dynamic Workload Broker Server is not started.

Workaround

When the database is started, you must manually start the Tivoli Dynamic Workload Broker server. See the Administration Guide for instructions about how to start the Tivoli Dynamic Workload Broker server.

Event rule management: Submit docommand within event rule (49697)

Defining an event rule with the submit docommand (SBD) action that contains the '\' character, for instance:

<action actionProvider="TWSAction" actionType="sbd" responseType="onDetection">

<parameter name="JobUseUniqueAlias">
<value>true</value>
</parameter>
<parameter name="JobWorkstationName">
<value>MyWorkstation</value>
</parameter>
<parameter name="JobTask">
<value>"dir c:\"</value>
</parameter>
<parameter name="JobType">
<value>Command</value>
</parameter>
<parameter name="JobLogin">
<value>TwsUser</value>
</parameter>
</action>

causes the action not to be triggered when the event arrives, because the submit action is incorrect. The problem is caused by the '\' character in the action definition.

Messages in the TWSMERGE.log file in English are corrupt (51518)

Messages that appear in the TWSMERGE.log file in English are corrupt.

Workaround:

On Windows, you must set the TWS_TISDIR environment variable at the system level and then restart the workstation.

Certificates for SSL communication defined at Job Brokering Definition Console startup cannot be modified (52753 / 51957)

In the Job Brokering Definition Console, the certificates for SSL communication defined at the Job Brokering Definition Console startup are used during the entire session and cannot be modified until the Job Brokering Definition Console is restarted.

If you want to set the SSL parameters on the dynamic workload broker component connection, add your own certificates to the keystore and truststore shown in the SSL section. Alternatively, replace the keystore and truststore shown in the SSL section with your own, and restart the Job Brokering Definition Console to apply the updates.

WebSphere Application Server cannot be stopped on AIX (53173)

If the appserverman process is down (for example, after a conman shut command), WebSphere Application Server cannot be stopped on AIX using the conman stopappserver command.

Workaround:

Use wastools with the -direct parameter to stop WebSphere Application Server. Alternatively, if you want to stop WebSphere Application Server using conman stopappserver, you must do so before you run the conman shut command.

SSL V3 Renegotiation vulnerability exposure (53251)

There is a vulnerability exposure if you are using an SSL/TLS client that connects to a server whose certificate contains a DSA or ECDSA key. This does not affect users with an SSL/TLS client that connects to a server whose certificate uses an RSA key. Verification of client certificates of servers for any key type is not affected.

Return code mapping for Tivoli Workload Scheduler jobs that are migrated to broker is ignored (51582)

Return code mapping for Tivoli Workload Scheduler jobs that are migrated to broker using the JSDL template is ignored. Any script or command that exits with a nonzero return code is considered to have abended.

Workaround:

Do not use return code mapping for static jobs that are migrated to dynamic jobs using the JSDL template.

Jobmanrc is unable to load libraries on AIX 6.1 if streamlogon is not root (IZ36977)

If you submit a job using a non-root user as the streamlogon user, you see several error messages like these in the joblog:
  • Could not load program /usr/maestro/bin/mecho.
  • Could not load module /usr/Tivoli/TWS/ICU/3.4.1/lib/libicuuc.a.
  • Dependent module libicudata34.a could not be loaded.
  • Could not load module libicudata34.a.
  • System error: No such file or directory.
  • Could not load module mecho.
  • Dependent module /usr/Tivoli/TWS/ICU/3.4.1/lib/libicuuc.a could not be loaded.
  • Could not load module.
 
If you run the date command after setting the tws_env environment, the date is shown in GMT, not in your time zone. (IZ36977)
 
On AIX 6.1, the batchman process does not correctly recognize the time zone of the local workstation (IZ70267)

On AIX 6.1, the batchman process does not correctly recognize the time zone of the local machine, which is treated as GMT even if it is correctly set to the correct time zone in the Tivoli Workload Scheduler CPU definition. You see the following messages in the stdlist log:

10:29:39 24.11.2008|BATCHMAN:AWSBHT126I Time in CPU TZ (America/Chicago): 2008/11/24 04:29
10:29:39 24.11.2008|BATCHMAN:AWSBHT127I Time in system TZ (America/Chicago): 2008/11/24 10:29
10:29:39 24.11.2008|BATCHMAN:+
10:29:39 24.11.2008|BATCHMAN:+ AWSBHT128I Local time zone time differs from workstation time zone time by 360 minutes.

Batchman does not recognize the correct timezone because AIX 6.1 uses ICU (International Components for Unicode) libraries to manage the timezone of the system, and these ICU libraries are in conflict with the Tivoli Workload Scheduler ones.

Workaround

Before starting Tivoli Workload Scheduler, export the TZ environment variable in the old POSIX format, for example CST6CDT (a POSIX name, instead of an Olson name such as America/Chicago). This bypasses the new default time zone management through the ICU libraries in AIX 6.1 by switching to the old POSIX one (as in AIX 5.x).
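A minimal sketch of the workaround follows; the conman invocation is shown only as a comment:

```shell
# Sketch: export TZ in the old POSIX format before starting TWS on AIX 6.1.
# CST6CDT is the POSIX equivalent of the Olson name America/Chicago.
export TZ=CST6CDT
echo "TZ=$TZ"
# Then start Tivoli Workload Scheduler as usual, for example:
#   conman start
```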


Globalization notes

This section describes software limitations, problems, and workarounds for globalized versions of Tivoli Workload Scheduler.
The InstallShield wizard installation fails if DBCS characters are used in the -is:tempdir path (36979)

If you are installing using the -is:tempdir option and you specify DBCS characters in the path, the installation fails.

Workaround:

Do not specify DBCS characters when using this option.

In the output of the composer list and display commands, the list and report headers are in English (22301, 22621, 22654)

This has been done to avoid a misalignment of the column headers in DBCS versions, which made it difficult to understand the information.

In the output of the product reports, the report headers are in English

This has been done to avoid a misalignment of the column headers in DBCS versions, which made it difficult to understand the information.

Data input is shorter in DBCS languages (IY82553, 93843)

All information is stored and passed between modules as UTF-8, and some characters occupy more than one byte in UTF-8. For DBCS languages, each character is three bytes long; Western European national characters are two bytes long; other Western European characters are one byte long. As a result, fields hold fewer characters in DBCS languages.
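The byte counts can be checked with a quick shell sketch, assuming a UTF-8 locale:

```shell
# UTF-8 byte lengths: 1 byte for plain ASCII, 2 bytes for Western European
# national characters, 3 bytes for DBCS characters such as Japanese.
printf 'a'  | wc -c    # ASCII: 1 byte
printf 'é'  | wc -c    # Western European national character: 2 bytes
printf '日' | wc -c    # DBCS character: 3 bytes
```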

On Windows operating systems, you cannot create a calendar with a name containing Japanese characters using the "makecal" command (123653).

Workaround:

Enclose the calendar name in double quotes.

On Windows operating systems, the Tivoli Workload Scheduler joblog is created with incorrect characters (APAR IY81171)

You are working in a non-English language environment and you have correctly set the LANG and TWS_TISDIR environment variables. However, the Tivoli Workload Scheduler joblog is created with incorrect characters in the body of the log (the headers and footers of the log are correct).

Workaround

The problem is caused by the code page in use. Windows editors and applications use code page 1252, which is correct for writing text files. However, the DOS shell uses the default code page 850. This can cause some problems when displaying particular characters.

To resolve this problem for Tivoli Workload Scheduler jobs, add the following line to the beginning of the file jobmanrc.cmd on the workstation:

chcp 1252

For further details about the jobmanrc.cmd file, see the section on customizing job processing on a workstation in the Tivoli Workload Scheduler: User's Guide and Reference.

It is possible to resolve this problem for all applications on the workstation, by using regedit to set the DOS code page globally in the following registry keyword:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Control\Nls\CodePage\OEMCP = 1252

You must reboot the workstation to implement the change.

Note:
Microsoft warns you to take particular precautions when changing registry entries. Ensure that you follow all instructions in Microsoft documentation when performing this activity.

License files in some languages are not installed (53271)

The following license files are not installed in their respective languages: Greek.txt, Lithuanian.txt, Russian.txt, Slovak.txt.

Workaround:

These license files are not installed but can be retrieved from the DVD media.

APARs fixed in this release


Tivoli Workload Scheduler version 8.5.1 includes all applicable APARs fixed in V8.3 Fix Pack 7 and V8.4 Fix Pack 4, plus the following:
  • IZ38754: MONBOX IS STILL UPDATED (ITS SIZE INCREASES ) EVEN IF EVENT MANAGEMENT HAS BEEN STOPPED

  • IZ40749: LINKING ISSUES AFTER FINAL JOB STREAM IN TWS 8.4.

  • IZ43803: COMPOSER WRONGLY ADD ONUNTIL CANC AND CONT

  • IZ41172: MESSAGES FROM FTA TO DOMAIN MANAGER LOST WHEN ENABLE SWITCH

  • IZ40190: ALTPASS FAILING TO CHANGE PASSWORD CORRECTLY

  • IZ58715: MESSAGE AWSBHU510E IS GARBLED WHEN LANG=JA_JP.

  • IZ52139: AD-HOC JOBS ARE IGNORING DEPENDENCIES

  • IZ32363: THE PART OF "AWSDEB007I" MESSAGE IS GARBLED IF LANG=JA_JP

  • IZ64725: FINAL JOBS ABEND IF MDM IS INSTALLED TO THE DIRECTORY OTHER THAN C-DRIVE ON WINDOWS 2008.

  • IZ63868: ATTEMPTING TO DELETE DUPLICATE OPENS DEPENDENCIES FROM JSC CAUSES APPLICATION SERVER JAVA TO CORE DUMP

  • IZ62730: TWS EVENTS DUPLICATION

  • IZ43716: WAS HANG/CRASH OCCURS BY PERFORMING "CONMAN SBS" AND "SET ALTERNATE PLAN" CONCURRENTLY

  • IZ54566: THE SAME JOBNAMES IN USERJOBS CAN NOT BE WORKED FROM JSC.

  • IZ53812: AWSDEJ005E WHEN ATTEMPTING TO REPLY TO A LONG PROMPT IN COMBINATION WITH A SECURITY FILE NAME FILTER

  • IZ43721: PARMS -E RETURNS RC=1 ALWAYS FROM SCRIPT OR CMD

  • IZ45232: JOBSTDL / MORESTDL GIVE USAGE ERROR WHEN USING -NAME OR SCHEDIDFLAGS ON SOLARIS

  • IZ51311: "SCHEDULED TIME" DISPLAYS IN GMT TZ WHEN TZ IS DISABLED (CONMAN AND JSC AFFECTED)

  • IZ50418: COMPOSER REPLACE RETURNS EXIT CODE 134 ADDING WHEN IMPORTING A JOB DEFINITION WITH LONG SCRIPTNAME OR DOCOMMAND FIELD

  • IZ51761: JSC/TDWC Symphony not refreshed after JnextPlan

  • IZ38892: TWS 8.3 AND HIGHER SUPPORT FOR NIS AUTHENTICATION ON AIX.

  • IZ52178: SCHEDULE IS NOT PLANNED CORRECTLY AFTER DELETING THE SAME JOB STREAM NAME WITH "VALIDFROM" OPTION

  • IZ54736: NETWORK DEPENDENCY STRING TRUNCATED IF SCHED SUBMITTED FROM JSC

  • IZ60209: DEPLOYMENT OF EVENT DRIVEN RULES FAIL WHEN WORKSTATTION NAME IS LESS THAN 3 CHARACTERS LONG

  • IZ60373: Rerun job "from" not resolve parms from WEBUI

  • IZ58709: TEPCONFIG.SH RETURNS THE ERROR "SHIFT: BAD NUMBER".

  • IZ60180: UPDATEWASSERVICE.BAT DOES NOT INITIALIZE %WASPASSWORD% CORRECTLY

  • IZ60517: CONMAN NOT HANDING INTERNETWORK DEPENDENCY WHEN CARRY FORWARD

  • IZ61327: TWA INSTALLATION FAILS ON WINDOWS 2008 64 bit WHEN A DOMAIN ACCOUNT IS USED

  • IZ62447: EDWA: "DOES NOT MATCH" OPTION NOT RESPECTED AT JOBSTREAM LEVEL

  • IZ60614: RESETPLAN -SCRATCH AND JSC QUERY ON JOB STEAMS IN PLAN CAUSE WEBSPHERE TO GO DOWN.

  • IZ31273: "within absolute interval" dependency is lost over JnextPlan

  • IZ55625: OPENS FILE DEPENDENCIES INCORRECT FILE NAMES

  • IZ55767: "CONMAN SBS" FAILS WITH AWSJPL006E ON COCURRENT SUBMISSION

  • IZ55723: DDJ COMMAND ISSUE

  • IZ56263: THE EVENT "MODIFICATIONCOMPLETED" IS TRAPPED ALTHOUGH THE TARGETFILE IS NOT MODIFIED.

  • IZ57373: REPTR SHOWS ONLY A PART OF SCHEDULED JOB STREAMS.

  • IZ55321: AFTER SWITCHMGR FROM DM TO BDM, SOME EVENTS WERE LOST.

  • IZ56691: Jobmon create a file called MAESTROHOME\ UNEXPECTLY.

  • IZ48584: LATE IS PERFORMED TO THE CANCELLED JOB STREAM, IF TWS IS RESTARTED.

  • pIZ48682: MAILMAN SERVERS ON GRAND CHILD DM TERMITATE BY PERFORMING SWITCHMGR TO CHILD DM ON E2E ENVIRONMENT.

  • IZ49339: AWSUI5015E UNABLE TO OPEN THE EVENT RULE EDITOR... RECEIVED AFTER RESTART TDWC SERVER, RUN REPORT, & OPEN EVENT RULE EDITOR

  • IZ44260: SUBMITTING JOB STREAMS WITH NEEDS FAILS DURING JNEXTPLAN FAILS.

  • IZ46776: MAILMAN WRITER FAILURE ON MDM DUE TO ABNORMAL MESSAGE

  • IZ45456: COMPOSER REPLACE CREATES CORE DUE TO OPENSSL CODE

  • IZ47677: XREF, REP7 AND COMPOSER CREATE CORE DUMP

  • IZ47299: The tiimeout rule doesn't work in a specific scenario

  • IZ46911: COMPOSER ERRONEOUSLY RETURNS "DAT:" IF AN EMPTY COMMENT (* CHARACTER ONLY) IS PRESENT IN THE SCHEDULE DEFINITION.

  • IZ49759: MAILMAN FAILURE ON SYMPHONY COMPRESSION WHEN INITIALIZING FTA

  • IZ53569: RESETPLAN DOES NOT REMOVE SYMPHONY FILE IF SCHEDLOG DIRECTORY ISA SYMBOLIC LINK TO A DIRECTORY ON A DIFFERENT FILE SYSTEM

  • IZ52913: TWS EVENTS ARE NOT SHOWN CORRECTLY ON TEP CLIENT.

  • IZ52311: Warning message for cpu ignore state At JnextPlan

  • IZ54609: bacthman abend if programmatic jobs and large Symphony

  • IZ54552: AWSJSY101E THE SYMPHONY PLAN OPERATION "QUERYJOBS" COULD NOT BE COMPLETED

  • IZ54262: DMS AND BDMS GOT UNLINKED AFTER SWITCHMGR.

  • IZ51448: TWS 8.4 DOES NOT ISSUE WARNING AT PLAN GENERATION TIME IF PARAMETERS ARE TOO LONG.

  • pIZ22085: On distributed pure use MDM-DM-DM-fta and stop/start during JnextPlan.

  • IZ51213: REP8 "STAT" FIELD SHOWING INCORRECT OUTPUT

  • IZ52028: MAILMAN UNABLE TO LINK TO AGENTS IF NUMBER OF AGENTS IS MORE THAN HALF THE NUMBER OF FILE DESCRIPTORS

  • IZ51662: UNDOCUMENTED LIMIT TO SECURITY FILE SIZE, 32,767

  • IZ51564: FILE DEPENDENCIES ARE WORKING DIFFERENTLY IN TWS 8.4 THAN IT DID IN TWS 8.2.1 DUE TO A PROBLEM WITH HANDLING OF DOUBLE QUOTES.

  • IZ33462: UNISON_JOB ENVIRONMENT VARIABLE INCORRECTLY SET

  • IZ43713: TWS 8.4 EVENT RUNCOMMAND WON'T WORK WITH ARGUMENT PASSED.

  • IZ41933: NETMAN TERMINATES WITH SIGSEGV FOLLOWING CONMAN SHUT

  • IZ35437: MONMAN PROCESS HOLDING NETMAN AND TWSMERGE FILES OPEN WHICH WERE OPENED AT STARTUP AND NOT SWITCHING FILES FOR NEW DAYS

  • IZ33611: JOBSTREAM IS SAVED EVEN THOUGH A "NON-ALLOWABLE" ALIAS JOBNAME IS USED INSIDE THE SCHEDULE.

  • IZ43228: UNIXSSH EXTENDED AGENT FAILS TO LAUNCH JOBS IF USER NAME ON REMOTE SYSTEM IS GREATER THAN 8 CHARACTERS IN LENGTH

  • IZ31257: AUTHENTICATION CAN FAIL USING PAM MODULE ON LINUX PLATFORM

  • IZ31912: JOB STREAM IS NOT LISTED ON THE LAST DAY OF THE MONTH BY R11XTR COMMAND IF AT=0900 OR LATER AND SOD IS LATER THAN 0901.

  • IZ60401: WRONG FILES PERMISSIONS ON WAS FILES

  • IZ59485: INSTALLING CLI IN TWSHOME CHANGES PERMISSIONS

  • IZ55206: NEED PROCEDURE TO REMOVE TWS_HOM/TRACE DIRECTORY DURING UPGRADE

  • IZ49332: DIRECT UPGRADE FROM 8.3 RESET some Ports not allow JSC connect


Documentation updates

All updates to the Tivoli Workload Scheduler documentation that are required for version 8.5.1 are included in the editions of the publications at the Information Center.
© Copyright International Business Machines Corporation 1991, 2009. All rights reserved. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Original Publication Date

11 December 2009


Document Information

Modified date:
24 January 2019

UID

swg27017194