What's new and changed in IBM Spectrum Conductor?

IBM® Spectrum Conductor 2.5.0 includes various new features and enhancements to existing features.

Key terminology changes

Instance group components

Spark, notebooks (such as Jupyter), and Dask are all components for instance groups, with Dask being the latest addition (see the Dask as an instance group component section in these release notes). Additionally, the cluster management console introduces a new Resources > Frameworks > Component Management option to manage (add, update, or remove) Dask versions. Any future components will also reside under this menu option. Managing Spark versions and notebook packages continues under their respective options under Resources > Frameworks.

Content within IBM Knowledge Center uses the new components terminology when referring to Spark, notebooks, and Dask as a group or in a general sense. Topics specific to each component (such as adding or removing a Spark, notebook, or Dask version) refer to each component by name.

Inclusive terminology
IBM Spectrum Conductor 2.5.0 content has changed to remove the use of any non-inclusive terminology and to replace those terms with more inclusive ones. Specifically, the content now uses the terms primary host (and primary candidate host), allow list, and block list instead of master host (and master candidate host), white list, and black list.

While IBM values the use of inclusive language, terms that are outside of IBM's direct influence are sometimes required for the sake of maintaining user understanding. As other industry leaders join IBM in embracing the use of inclusive language, IBM will continue to update the documentation to reflect those changes.

System configurations and integrations

IBM Spectrum Conductor has been extended to support additional or upgraded system configurations.
Before you install and configure IBM Spectrum Conductor 2.5.0, familiarize yourself with what's supported in this version from the information within Supported system configurations. Here are the highlights of the system configuration changes for this version of IBM Spectrum Conductor:
Supported system configurations
  • IBM Spectrum Conductor 2.5.0 supports and includes a resource orchestrator, enterprise grid orchestrator (EGO) 3.9.
  • Miniconda, instead of Anaconda, is bundled with IBM Spectrum Conductor 2.5.0. You can use Miniconda or Anaconda distributions with IBM Spectrum Conductor 2.5.0; see Supported Spark, Miniconda, Jupyter, and Dask versions for details.
  • Dask 2.5.2 to 2.30.0 is supported for IBM Spectrum Conductor 2.5.0 and is a newly supported component for instance groups. See Supported Spark, Miniconda, Jupyter, and Dask versions for details, and refer to later sections within these release notes for Dask usage and management.
  • If you are installing or upgrading with other IBM Spectrum Computing family products, IBM Spectrum Conductor 2.5.0 supports a multiple product environment with those products; see Supported system configurations for details.
  • You can perform a rolling upgrade of IBM Spectrum Conductor from version 2.4.0 or 2.4.1 to IBM Spectrum Conductor 2.5.0. To upgrade any other version of IBM Spectrum Conductor to 2.5.0, perform a parallel upgrade instead.

Installing, upgrading and configuring

Distinct installation files for the host factory feature
Starting with IBM Spectrum Conductor 2.5.0, the installer installs distinct files for the host factory component within IBM Spectrum Conductor. There are two packages: one for the core host factory framework (hfcore-version.architecture.rpm), and one for GUI management (hfmgmt-version.noarch.rpm).

For details about these files and their installation dependencies, see Files within the installation packages. Additionally, if you need to uninstall the host factory packages, refer to Uninstalling individual packages installed with IBM Spectrum Conductor.

Host factory installs separately from the EGO features

Starting with IBM Spectrum Conductor 2.5.0, the host factory version displays as HF_VERSION (for example, starting at 1.1) instead of as EGO_VERSION (for example, 3.9). Host factory 1.1 depends on EGO 3.9 libraries.

The eservice directory is no longer part of the host factory file path:
  • In IBM Spectrum Conductor 2.4.1 and earlier, the host factory default installation directory was $EGO_TOP/eservice/hostfactory
  • Starting in IBM Spectrum Conductor 2.5.0, the host factory default installation directory is now $EGO_TOP/hostfactory

In addition, EGO_ESRVDIR is no longer an environment variable used for the host factory; it is replaced with $EGO_CONFDIR/../../

See Directory structure for configuration files.

Enhancements to installing product fixes using egoinstallfixes
The egoinstallfixes command installs fixes to your IBM Spectrum Conductor installation. IBM Spectrum Conductor 2.5.0 provides several updates to the egoinstallfixes command:
Option to save space and not create a backup when running egoinstallfixes
For this release, the command includes a new --nobackup option so that the command does not create a backup, and simply applies the fix and overwrites existing files (see the example command after this list). See the Enhanced CLI table in these release notes, and egoinstallfixes, for more details.
Support to delete files no longer required for IBM Spectrum Conductor
The egoinstallfixes command now cleans up the file system by deleting files that the IBM Spectrum Conductor cluster no longer requires. These files might have been installed with IBM Spectrum Conductor or applied during an IBM Spectrum Conductor fix. If the command detects such a file, it deletes it. If the command cannot find the file (for example, it was included in a fix that you did not apply), it provides an informational message to indicate that it will skip deleting the file. See egoinstallfixes for details. Additionally, running pversions -q now also indicates whether a file has been deleted (see pversions for Linux for details).
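For example, a minimal run of the command with the new --nobackup option might look like the following (the fix package file name is a hypothetical placeholder):
  # Apply a fix without backing up the current binary files (package name is a placeholder)
  egoinstallfixes --nobackup conductor2.5.0_fix123456.tar.gz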

Instance groups and notebooks

Dask as an instance group component
New to IBM Spectrum Conductor 2.5.0 is Dask support. You can use the Dask component with your instance groups, and manage Dask versions with the new Resources > Frameworks > Component Management option within the cluster management console to add, update, or remove Dask. For details, see Configuring Dask settings for an instance group and Dask versions.
Decoupling of components for instance groups -- use some or just one
New to IBM Spectrum Conductor is decoupling of the components (Spark, notebooks, and Dask) you can use with instance groups. For example, Spark is no longer required for an instance group. You can have an instance group that is Jupyter only, or Dask only.
Basic Settings tabs simplified: dedicated tabs for instance group components (Spark, notebooks, and Dask)
The Basic Settings tab when creating or modifying an instance group within the cluster management console now contains only the core information required for every instance group: name, deployment directory, execution user (and a few optional settings).
We have introduced new component-specific tabs; a component can be either Spark, a notebook, or Dask. Each component tab contains only the dedicated component settings for the instance group (for more information on creating and using these tabs, see Defining basic settings for an instance group):
Spark tab
By default, the latest Spark version installed on your system is the version of Spark that your instance group uses. This tab contains version specific Spark settings for the instance group.

The Spark tab now also includes other instance group information related to Spark, separated into sections for consumers, resource groups and plans, containers, and data connectors. Some of this information was previously in other tabs. For details on working with these sections, see Spark settings for instance groups.

Notebooks tab
The Notebooks tab does not show by default when you create a new instance group; to add this tab, click Add and select Notebook as the component name. You can then use this tab to select the notebook to deploy with each instance group. The tab contains all configuration settings for your notebook; for multiple notebooks, the tab shows information in applicable sections. For details, see Enabling notebooks for an instance group.
Dask tab
New for IBM Spectrum Conductor 2.5.0 is Dask as a component for instance groups. To add a tab for Dask, click Add and select Dask and the version. For details, see Configuring Dask settings for an instance group.

By default, when you first use the cluster management console to create or modify instance groups or instance group templates, the page shows only a tab for the latest Spark version installed on your system. To customize this list to see tabs for other components (such as notebooks and Dask), or to change the Spark version, specify the component and version information using the new INSTANCE_GROUP_DEFAULT_COMPONENTS parameter within the ascd.conf configuration file. The cluster management console then displays a tab for each component in your list on the create and modify instance group pages.
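For illustration, an entry in ascd.conf might look like the following sketch. The exact value syntax (how components and versions are delimited) is an assumption here; check the INSTANCE_GROUP_DEFAULT_COMPONENTS parameter reference for the format your cluster expects.
  # Hypothetical format: show tabs for a specific Spark version, Jupyter, and Dask by default
  INSTANCE_GROUP_DEFAULT_COMPONENTS=Spark <spark_version>,Jupyter 6.0.0,Dask 2.30.0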

New environment variables for configuring cleanup for instance groups
The following environment variables are new for configuring cleanup for instance groups (an illustrative sketch follows this list):
  • LOCAL_DIR_RETENTION_IN_MINS to achieve the same behavior as FILE_RETENTION_IN_MINS, but specific to the Spark local directory.
  • SERVICE_LOG_RETENTION_IN_MINS to manage the SparkCleanup service for service log files.
  • STAGGER_START_INTERVAL_IN_SECONDS to start the process on each host at a staggered rate.
  • Three new environment variables available to manage the SparkCleanup service for event logs:
    • EVENT_LOG_CLEANUP
    • EVENT_LOG_RETENTION_IN_MINS
    • EVENT_LOG_INCOMPLETE_RETENTION_IN_MINS
  • Two new environment variables to minimize REST calls to ascd and help improve CPU performance by caching data:
    • USE_INSTANCE_GROUP_LIST_CACHE
    • USE_INSTANCE_GROUP_CONFIG_CACHE
For more information, see Configuring cleanup for instance groups.
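As an illustrative sketch only (the retention values below are examples, not defaults), the retention and stagger variables might be set as follows; see Configuring cleanup for instance groups for where each variable is defined and which values it accepts:
  # Keep files in the Spark local directory for 12 hours
  LOCAL_DIR_RETENTION_IN_MINS=720
  # Keep SparkCleanup service log files for 7 days
  SERVICE_LOG_RETENTION_IN_MINS=10080
  # Stagger the cleanup start on each host by 30 seconds
  STAGGER_START_INTERVAL_IN_SECONDS=30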
Multiple resource groups for an instance group
By default, when you create an instance group, you select one resource group for Spark executors, so that Spark workload only runs on one resource group. With IBM Spectrum Conductor 2.5.0, an instance group administrator can change this configuration so that Spark applications run workload on multiple resource groups. See Configuring Spark applications to run on multiple resource groups for details.
Parallel deployment to all hosts in an instance group while in Started state
Configure the new ASC_ALLOW_DEPLOY_IN_STARTED_STATE parameter in the ascd.conf configuration file to specify whether a full redeployment of an application instance or instance group can be performed while the instance group is in Started state. A value of ON for this parameter allows IBM Spectrum Conductor to deploy to all hosts in parallel when the instance group is started.
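For example, in ascd.conf:
  # Allow full redeployment while the instance group remains in Started state
  ASC_ALLOW_DEPLOY_IN_STARTED_STATE=ON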
Control how you block and unblock hosts for services and workload
Use the new host blocking configuration parameters within the ascd.conf configuration file to indicate how IBM Spectrum Conductor handles host blocking: set the ASC_AUTO_BLOCK_NEW_HOSTS parameter to ON to block new hosts. Upon blocking these hosts, if you want IBM Spectrum Conductor to open the hosts when they are added to a resource group, set ASC_AUTO_OPEN_NEW_HOSTS to ON. Finally, after all packages for the instance group have been successfully deployed on the host, you can allow IBM Spectrum Conductor to automatically unblock hosts by setting the ASC_AUTO_UNBLOCK_HOSTS_AFTER_DEPLOY parameter to ON.
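For example, to enable all three behaviors in ascd.conf:
  # Block new hosts, reopen them when added to a resource group,
  # and unblock them automatically after all packages are deployed
  ASC_AUTO_BLOCK_NEW_HOSTS=ON
  ASC_AUTO_OPEN_NEW_HOSTS=ON
  ASC_AUTO_UNBLOCK_HOSTS_AFTER_DEPLOY=ON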
Simplified My Applications & Notebooks page
The My Applications & Notebooks page has been simplified to show notebooks only by default. To view Spark applications and the previous UI, select the Show Applications checkbox, as follows:
Previous flow and naming, compared with the new 2.5.0 flow and naming:
  • Previous: Workload > My Applications & Notebooks menu option. New: Workload > My Notebooks & Applications menu option.
  • Previous: My Notebooks & Applications page. New: My Notebooks page.

To see Spark applications associated with the notebook, select the Show Applications checkbox to change the display to the My Notebooks & Applications page.

All applicable topics have been changed to reflect the new flow and naming.

Notebook data directories preserved until manually removed
We are prioritizing data safety for notebook applications: notebook data directories will remain, even if you undeploy an instance group with a notebook. Further, to be able to reuse the same deployment home directory, first manually remove the notebook data directory that remains. In previous releases of IBM Spectrum Conductor, undeployment would remove the directory and its contents.

The only case where your notebook data directory is deleted is for Jupyter notebooks: if your base data directory is inside your Jupyter deployment directory, and that directory is unused and empty, the directory is removed.

Notebooks support both default and exclusive GPU mode
If your cluster is enabled for GPUs, starting with IBM Spectrum Conductor 2.5.0, you can use either default or exclusive GPU mode (in previous releases, it was exclusive mode only). You select the GPU mode when creating your resource group and when setting the number of GPUs for a notebook. For details, see the new GPU mode check box described in Enabling notebooks for an instance group.
Notebooks no longer automatically restart when the ascd service, REST service, or Spark notebook master instance goes down
Previously, any time the ascd or REST services, or the Spark notebook master instance, restarted after going down, notebooks associated with the instance group also restarted, which could interrupt work. As a usability enhancement, notebooks no longer restart when any of these components restart.
Jupyter 6.0.0 as the built-in Jupyter service package
IBM Spectrum Conductor 2.5.0 provides Jupyter 6.0.0 as the built-in Jupyter service package. This version refers to the service package itself, and is completely independent of the conda Jupyter notebook package that is used to run the Jupyter notebook and Jupyter Enterprise Gateway. The new Jupyter service package supports notebook package version 6.0.0 and higher, and Jupyter Enterprise Gateway version 2.1.1 and higher, but does not include them.

This new service package does not include any conda or pip packages, so there is no risk of conflict with your conda environments. For convenience, the notebook service package deployment lets you know if you do not have the required packages in your conda environment.

The new service package improves the user and administrator experience, and leverages the new sample conda environment files to quickly get your instance groups up and running.

New Jupyter notebook environment variables for defining Python and R script executables
Use the new NOTEBOOK_SPARK_PYSPARK_PYTHON and NOTEBOOK_SPARK_R_COMMAND Jupyter notebook environment variables to specify the path to the Python or R script executables in a notebook. For more information, see Jupyter notebook environment variables.
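For example (the conda environment paths below are hypothetical placeholders):
  # Point PySpark and R workloads in the notebook at specific executables
  NOTEBOOK_SPARK_PYSPARK_PYTHON=/opt/anaconda3/envs/myenv/bin/python
  NOTEBOOK_SPARK_R_COMMAND=/opt/anaconda3/envs/myenv/bin/Rscript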

Spark applications

If you have an Amazon Simple Storage Service (Amazon S3) cloud storage file system enabled, you can configure IBM Spectrum Conductor to access your Amazon S3 file system when submitting Spark applications. This configuration involves adding the Amazon S3-specific access files to your instance group and setting Spark submission parameters.
By default, the Spark resource usage metrics for Spark applications are collected every 30 seconds. You can change the frequency at which these metrics are collected, or even disable collection, by configuring the new application30sIntervalMultiplier parameter in the sparkresusageloader (Spark resource usage data loader).
Multiple tasks on one EGO slot
In addition to supporting one CPU or GPU Spark task on multiple EGO slots, IBM Spectrum Conductor 2.5.0 now supports running multiple tasks on one EGO slot. To configure this, specify a negative integer that is less than -1 (such as -2, -3, or -4) as the value for the applicable parameter:
  • An instance group administrator can set this at the instance group level using the cluster management console to set Spark configuration. Setting SPARK_EGO_SLOTS_PER_TASK=-2 or SPARK_EGO_GPU_SLOTS_PER_TASK=-2 means that there are two tasks running on one slot.
  • Likewise, an instance group user can set this at the application level when running a Spark application using the spark-submit command. For example, setting --conf spark.ego.slots.per.task=-2 or --conf spark.ego.gpu.slots.per.task=-2 also means that there are two tasks running on one slot, but at the application level.

See Spark on EGO instance group parameters and Spark on EGO Spark application parameters for parameter and usage details.
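Building on the application-level example above, a spark-submit invocation might look like the following sketch (the application class, JAR, and any other options are placeholders):
  # Run two tasks per EGO slot for this application only
  spark-submit --class com.example.MyApp \
    --conf spark.ego.slots.per.task=-2 \
    my-application.jar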

Cluster and resource management

New systemd support for managing automatic startup of EGO on Linux systems
To manage automatic startup of EGO on Linux hosts, in addition to the egosetrc.sh and egoremoverc.sh scripts that support the init daemon, IBM Spectrum Conductor now provides the egosetsystemd.sh and egoremovesystemd.sh scripts, which support the systemd daemon. The systemd daemon starts processes in parallel (rather than sequentially, as with the init daemon), and therefore reduces boot time and computational overhead. It is considered the next generation Linux system manager and is supported on some newer Linux distribution versions. If you use a Linux system with IBM Spectrum Conductor that leverages systemd, use the new egosetsystemd.sh and egoremovesystemd.sh scripts to manage EGO startup.
EGO_DISABLE_RECLAIM_HYBRID_OWN parameter now supported for consumer level exclusivity
The EGO_DISABLE_RECLAIM_HYBRID_OWN ego.conf configuration file parameter has been extended to support consumer level exclusivity to enhance reclaim behavior. Therefore, if EGO_DISABLE_RECLAIM_HYBRID_OWN=Y, then when an exclusive policy uses exclusive slots at the consumer level:
  • For a leaf consumer that is an exclusive consumer, EGO will only reclaim the number of slots exceeding the consumer’s hybrid owned slots.
  • For a leaf consumer that is a non-exclusive consumer, the consumer is in the same group as all other leaf consumers that can share the same host with it. This parameter will take effect at the group level (that is, EGO will only reclaim the number of slots exceeding the group’s hybrid owned slots, which is the sum of hybrid owned slots of all leaf consumers in this group).
See Configuring exclusive slots at the consumer level for details on setting the EGO_DISABLE_RECLAIM_HYBRID_OWN parameter for consumer level exclusivity.
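For example, in ego.conf:
  # Extend hybrid-owned slot reclaim behavior to consumer level exclusivity
  EGO_DISABLE_RECLAIM_HYBRID_OWN=Y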
Run tasks to completion for a consumer with reclaimed resources
By default, when there is more than one leaf consumer under the same ancestor (such as the same parent, grandparent, or great-grandparent consumer), EGO uses the reclaimed leaf consumer's grace period. To change this so that EGO uses the ancestor's grace period instead, set the new EGO_USE_ANCESTOR_GRACE_PERIOD_FOR_RECLAIM=Y parameter in the ego.conf file. Refer to Resource reclaim for details on EGO_USE_ANCESTOR_GRACE_PERIOD_FOR_RECLAIM.

Performance and stability

ascd performance enhancements
For enhanced ascd performance, this version of IBM Spectrum Conductor offers two new ascd.conf configuration parameters:
ASC_MONITOR_APP_THREADS
The default monitoring cycle (defined by ASC_MONITOR_SLEEP_MS) is 10000 milliseconds (or 10 seconds). In general, instance groups complete monitoring within 10 seconds, using 5 threads to monitor instance groups and application instances within the monitoring cycle. For larger clusters, if there are delays (such as when starting and stopping instance groups or notebooks), increase the number of threads used for monitoring by setting the ASC_MONITOR_APP_THREADS value to greater than 5 threads, as the CPU capacity of your management host allows (see the example setting after this section).
ASC_MONITOR_APP_CYCLE_TIMEOUT_MS
The timeout for an ascd monitoring cycle is used as a fail-safe measure to prevent problematic instance groups and application instances from stalling other instances. Typically, the default (300000 milliseconds, or 5 minutes) is sufficient, and you should not need to adjust this timeout value.

Additionally, the default value of the ASC_RG_MONITOR_SLEEP_MS parameter has changed from 30000 milliseconds to 10000 milliseconds (or 10 seconds). This reduced interval improves ascd performance for single host deployments.
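For example, on a larger cluster you might raise the monitoring thread count in ascd.conf (the value 10 is illustrative; size it to the CPU capacity of your management host):
  # Use more threads per monitoring cycle to avoid delays on large clusters
  ASC_MONITOR_APP_THREADS=10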

Batch mode: efficiently close or remove resources
Leverage the new -b (batch mode) option when closing or removing resources (hosts). In this mode, the egosh resource command automatically combines resources and submits the request as one for all the listed resources, eliminating the need to run the command multiple times and improving performance. In this mode, the command does not return individual resource action status results; it returns a confirmation that the action for the batch was accepted. This is especially useful for improving performance when removing a dynamic host if you use the host factory feature with IBM Spectrum Symphony Advanced Edition, as it reduces the number of requests sent to the system by handling them as a batch rather than individually.
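For example, assuming the -b option is combined with the existing close action and a list of host names (the host names below are placeholders):
  # Close three hosts as a single batched request
  egosh resource close -b host1 host2 host3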

Security

Security-Enhanced Linux (SELinux) support
SELinux gives users and administrators more control over access control. In an SELinux-enabled environment, IBM Spectrum Conductor can run processes (those started by PEM, and Docker container processes) with a specific SELinux security context. Specifically, SELinux allows these processes to run with the default security context of the execution user, corresponding to the SELinux user's context when the user logs in to the host using SSH. You can switch to an SELinux context for use with IBM Spectrum Conductor.
Deploy instance groups with root user execution disabled
For enhanced security, only PEM should run as root; any EGO services should be executed by a non-root user (such as a cluster administrator user), thereby preventing malicious code from running as the root user. Configure this by disabling root execution for EGO services: add EGO_DISABLE_ROOT_REX=Y to the ego.conf configuration file, and then set a non-root user to execute services from a host. Once configured, you can safely deploy an instance group with root user execution disabled.

Note that this EGO_DISABLE_ROOT_REX=Y setting is analogous to setting the export ROOT_SQUASH_INSTALL=Y parameter during IBM Spectrum Conductor installation. The EGO_DISABLE_ROOT_REX setting is an alternative way to secure your cluster by reducing access rights for the root user. You can set either EGO_DISABLE_ROOT_REX or ROOT_SQUASH_INSTALL; it is not necessary to set both.
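For example, in ego.conf:
  # Prevent EGO services from being executed as the root user
  EGO_DISABLE_ROOT_REX=Y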

Activity level credentials for daemon authentication
Activity level credentials are a new type of EGO service credentials, generated for the EGOSC (EGO service controller) and EGO services. They replace the previous method of generating EGO service credentials for IBM Spectrum Conductor daemon authentication. For added security, activity level credentials are encrypted with RSA.
Use the new EGO_ACTIVITY_LEVEL_CREDENTIALS ego.conf configuration file parameter to control how EGO handles daemon authentication. This parameter is supported for the IBM Spectrum Conductor default (sec_ego_default) security plug-in. With this parameter, you can set EGO to handle daemon authentication as follows:
  • Use activity level credentials, so that credentials have the same lifespan as their corresponding activity. Once the activity is no longer available (for example, an EGO service stops), these credentials are no longer valid; instead, EGO generates new credentials for the cluster to use when the activity restarts.
  • Use previously generated credentials that can be valid for ten years.
  • Use activity level credentials and still accept previously generated credentials. This is the default option.
AES-256 encrypted EGO service credentials when transferred between VEMKD and PEM
If you do not have SSL enabled between VEMKD and PEM, you can leverage the new EGO_KEYFILE_VEMKD_PEM ego.conf configuration file parameter to enable encrypted EGO service credentials when credentials are transferred between VEMKD and PEM. When set, VEMKD generates an AES-256 key, updates it in the key file daily, and uses it to encrypt credentials; PEM then reads the key from the key file to decrypt the received credentials.
New security solution replacing Search-guard
For enhanced security, IBM Spectrum Conductor no longer includes Search-guard. Instead, IBM Spectrum Conductor uses an internal security solution (called the orchestrator search plug-in) that provides EGO-based authentication and authorization, and is included automatically when you install IBM Spectrum Conductor (specifically, in the egoelastic-version.architecture.rpm package).

Host factory framework for cloud bursting

Configuring cloud host monitoring for hours used
Configure cloud hosts that join your cluster to track the core-hours used by each host's cores. This is known as core-hour usage (also known as variable use). You can query the total core-hours used in your cluster between two dates.
Cost-based cloud host selection
When cloud capabilities are enabled by using Host Factory for IBM Spectrum Conductor, the cluster can now scale out by automatically selecting the least-cost combination of cloud hosts across cloud providers and host types. For more information about the template priceInfo parameter, see the reference page of any one of the applicable configuration files. For more information about the resourceRequestParameters parameter, see the hostRequestors.json reference topic.
Rank-based cloud host selection
When cloud capabilities are enabled by using Host Factory for IBM Spectrum Conductor, the cluster can now scale out by automatically selecting hosts based on assigned rank values across cloud providers and host types. For more information about the template rank parameter, see the reference page of any one of the applicable configuration files.
Configuring the log rotation settings
The log rotation settings define the maximum count of log backups and the maximum file size in megabytes (MB) of each log file. By default, log rotation settings are propagated from the host factory service to its sub-components. If you want, you can specify different log rotation settings for different sub-components in their respective configuration files. For more information, see Configuring log rotation.
Scheduled cloud requests with recurrence
You can now schedule cloud requests for specific dates and times, and optionally configure them to recur. You can also schedule return of cloud hosts. Refer to Manually scheduling cloud host requests and returns for details.

Additionally, the IBM Spectrum Conductor 2.5.0 host factory API for cloud requests has been enhanced for scheduled requests, and there are two new APIs (one to close scheduled requests and one to list scheduled requests) to support this new feature.

Multiple instances of the cws requestor plug-in and of the provider plug-ins
To use different cloud bursting configurations in your cluster, you can now configure multiple instances of the cws requestor plug-in, such that each instance is assigned its own configuration. Examples of cloud bursting parameters that you can configure for different instances of the cws requestor plug-in in your cluster:
  • cloud provider accounts
  • monitored applications or host groups
  • parameters for generating resource demand or return requests
  • parameters for processing cloud requests
For more information, see the Multiple dynamic requestors section in the Configuration options for a dynamic requestor which is using the cws requestor plug-in topic.
You can also configure multiple instances of the built-in or custom provider plug-ins. Each instance can encapsulate the parameters of a different cloud account or a different set of available cloud resources. Each instance of the cws requestor plug-in can be associated with one or more provider instances. For more information, see the reference topics for the applicable provider configuration files.
With this change, the configuration file for each requestor instance is named requestorname_config.json, where requestorname is the name of each instance as specified by the name parameter in the hostRequestors.json file. The sample requestor configuration uses cws as the requestor name, with the configuration file generated as cws_config.json.
Also with this change, the configuration file for each provider instance is named providernameprov_*.json, where providername is the name of each instance as specified by the name parameter in the hostProviders.json file.
Enhancements for the cws requestor plug-in
Utilization based scale-out and scale-in policy

The new utilization based scale-out and scale-in policy enables the cluster to dynamically scale out and scale in based on monitoring the utilization of cluster hosts. Using this policy, cloud hosts are added when the utilization of cluster hosts exceeds a utilization threshold beyond a threshold duration, and returned when utilization is lower than a utilization threshold. Monitoring is based on resource (host) groups. This policy supports any type of workload running in the cluster.

The policy is provided in addition to the workloads SLA based scale-out and scale-in policy, which enables the cluster to dynamically scale out and scale in based on workloads' required completion times and workload profiling. Using this policy, cloud hosts are added when more resources are needed to meet workloads' completion time requirements, and returned when there is excess capacity compared with workloads' completion time requirements. This policy supports Spark batch workloads specifically.

For more information, see the Cluster utilization based policy and Workloads' requirements based policy sections of the Configuration options for a dynamic requestor which is using the cws requestor plug-in topic.

Defined limits on cloud slots that can be requested
You can define limits on the number of cloud slots that can be requested, either per time unit or as an absolute limit.

For more information, see the DemandMaxSlotsReqTimeUnitType and DemandMaxSlotsReqPerTimeUnit parameters in the requestorname_config.json reference topic.

Cloud hosts return enhancements
  • Configure the ForceReturnAfterDurationSec parameter to limit the duration of detaching a cloud host from the cluster; after this duration, the host is returned to the cloud provider without graceful detachment, while the detachment procedure from the cluster continues in parallel.
  • Configure the HostReturnIdleOnly parameter to enforce that only idle cloud hosts (namely, hosts that do not run workloads) are returned to the cloud provider.
  • Configure the HostReturnUtilizationLimitPercent parameter to define a host utilization limit for return, where a cloud host is considered for return only if its utilization percent does not exceed this limit.
For more information, see the requestorname_config.json reference topic.
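As a rough sketch of how these parameters might appear in the requestorname_config.json file (the surrounding JSON structure and the values shown are assumptions; see the reference topic for the exact schema and defaults):
  {
    "ForceReturnAfterDurationSec": 600,
    "HostReturnIdleOnly": true,
    "HostReturnUtilizationLimitPercent": 10
  }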
AWS spot instances support
Reclaimed AWS Spot instances are now gracefully detached from the cluster.
Enhancements for the AWS provider plug-in
AWS connections through a proxy server
Enable host factory connections to AWS through a proxy server by configuring the AWS_PROXY_HOST, AWS_PROXY_PORT, and AWS_CONNECTION_TIMEOUT_MS parameters in the awsprov_config.json file.
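For illustration, the proxy-related parameters might appear as follows in awsprov_config.json (the host, port, and timeout values are placeholders, and the surrounding JSON structure is an assumption; see the awsprov_config.json reference for the exact schema):
  {
    "AWS_PROXY_HOST": "proxy.example.com",
    "AWS_PROXY_PORT": 3128,
    "AWS_CONNECTION_TIMEOUT_MS": 10000
  }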
Capacity optimized allocation for Spot instances
Enable a new Capacity Optimized allocation strategy for AWS Spot instances by configuring the capacityOptimized option for the allocationStrategy parameter in the awsprov_templates.json host template.
Custom volume size for Elastic Block Store root devices
Configure the Elastic Block Store (EBS) root device volume size for AWS EC2 On-Demand and Spot instances. By default, the root device volume size for EC2 instances is the root device volume size of the Amazon Machine Image (AMI) used for provisioning instances. When EBS-backed AMIs are used, you can now set a root device volume size that is larger than that of the associated AMI, by configuring the rootDeviceVolumeSize parameter in the awsprov_templates.json host template.
Request retry attempts
A known AWS intermittent timeout error can display in the $EGO_TOP/hostfactory/log/awsinst-provider.hostname.log file:
[2020-07-16 15:28:06.502]-[ERROR]-[com.ibm.spectrum.util.AwsUtil.requestSpotInstance(AwsUtil.java:809)] Create instances error.
com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out
Specify up to 10 request retry attempts by configuring the AWS_REQUEST_RETRY_ATTEMPTS parameter. For more information, see the awsprov_config.json reference.
Enhancements for the IBM Cloud (previously SoftLayer) provider plug-in
API endpoint URL
Specify the API endpoint URL of the IBM Cloud by configuring the SOFTLAYER_API_ENDPOINT_BASE_URL parameter. For more information, see the ibmcloudprov_config.json reference.
Performance enhancement for removing dynamic hosts
Leverage the new -b (batch mode) option when removing dynamic hosts used with the host factory feature. For details on usage, see Batch mode: efficiently close or remove resources in these release notes.

Application template

Application template parameters
Now you can set the custom constraint for consumer parameters to exclusive (consumer_exclusive) or anti-affinity (consumer_antiaffinity). For more information, see the Application template parameters topic.
Application template resource types and properties
Now you can define a conditions property for any resource type in the application template. The conditions property makes sections of an application instance template conditional on the existence or value of an application template parameter. For more information, see the Application template reference topic.

Cluster management console

Usability enhancements for Anaconda or Miniconda operations
Error handling options
Now, when completing Anaconda or Miniconda operations using the cluster management console, if you encounter errors, they are displayed in the console with the following options:
  • Click Retry beside an error message to retry that operation.
  • Click Clear Error beside an error message to remove the error from the list. This cleans up the list; use it if you do not want to retry the operation.
Support for multiple conda channels
Now, when you perform Anaconda or Miniconda operations, you can specify more than one conda channel (using a comma-delimited list) as the Anaconda or Miniconda repository to use. See Managing packages within conda environments for details.
Update conda packages
You can now update existing conda packages in your conda environment with the new Update feature; see Managing packages within conda environments for details.
Access the cluster management console through a proxy
Configure the new REST_PROXY_URL parameter in the pmc.conf configuration file to specify a web proxy server, enabling you to access the cluster management console through the proxy server.
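For example, in pmc.conf (the proxy URL below is a hypothetical placeholder):
  # Route cluster management console access through a web proxy server
  REST_PROXY_URL=https://proxy.example.com:3128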

Troubleshooting and logging

Dynamic debugging support for the egosc daemon
The egosh debug sub-command has been extended to include enabling or disabling dynamic debugging for the EGO service controller (egosc) daemon.
Web service gateway log file contains timestamp information
To help better troubleshoot web service gateway issues, the wsg.log log file now includes timestamp information. This log file is under $EGO_TOP/eservice/wsg/log.
ascd upgraded to use log4j2 logging framework
IBM Spectrum Conductor now uses the log4j2 logging framework for ascd (upgraded from the log4j framework). To configure settings for log files relating to ascd, use the new log4j2.properties file on the host that is running ascd.

Enhanced CLI

This version of IBM Spectrum Conductor offers enhanced command syntax and subcommands to provide more robust command flexibility and support. (For any new commands, refer to the feature descriptions in this what's new and changed topic, instead of this section.)
Table 1. Commands changed in this version
CLI in previous versions:
  egoinstallfixes [-f env_file][--silent] package ...
CLI in IBM Spectrum Conductor 2.5.0:
  egoinstallfixes [-f env_file][--silent] [--nobackup] package ...
Command usage and change:
  The egoinstallfixes command installs fixes to your IBM Spectrum Conductor 2.5.0 installation. By default, the command backs up the current binary files to the fix backup directory before installing a fix. New for this release, you can use the --nobackup option so that the command does not create a backup, and simply applies the fix and overwrites existing files. This option is useful if you want to save space; however, it does not give you the option to roll back files that were updated by the fix. See egoinstallfixes.