Preparing for Database Component Creation

Database Component Descriptions and Installation Requirements

A database component is a grouping of database objects that can be used by one or more products. For example, Integration Servers write service status data to the ISCoreAudit database component, and Monitor reads the data and displays it in its user interface.

This section lists the products that have database components and describes the database components to create. Each database component is named for the type of data that is written to it.

Note: For detailed information on product concepts mentioned in the sections below, see the relevant product documentation.

ActiveTransfer Server Database Components

If you are going to install ActiveTransfer Server, create the ActiveTransfer database component. ActiveTransfer Server writes listener (port), server, user configuration, and other metadata to this database component. ActiveTransfer Server also writes run-time data, such as file transfer transaction information and audit data. If you are clustering ActiveTransfer Servers, create a single ActiveTransfer database component for the cluster to share. If you are not clustering, create an ActiveTransfer database component for each ActiveTransfer Server.

If you want to archive ActiveTransfer transaction data, create a separate schema or database, as appropriate for your RDBMS, in which to store the archived data. The ActiveTransferArchive database component installs stored procedures for archiving the data. The stored procedures need access to the production database tables listed in the section on archiving the database in Managing File Transfers with webMethods ActiveTransfer. If your organization will allow access to production database tables from the schema or database where the archived data resides, install the ActiveTransferArchive database component in the archive schema or database. If your organization will not allow such access, install the ActiveTransferArchive database component in the production schema or database, and give the location of the archive schema or database during the archive process.

Note: If you are using Oracle, you might be able to use partitioning for the ActiveTransferArchive database component. Contact Software AG Professional Services for information.

API Gateway Database Component

API Gateway comes with a data store in which it stores core data, events, and metrics. If you want to store information about runtime event invocations for analytics and auditing purposes, for either one API Gateway or a cluster of API Gateways, create one APIGatewayEvents database component.

CloudStreams Database Component

If you are going to install CloudStreams, create the CloudStreamsEvents database component. CloudStreams Server writes lifecycle (startup/shutdown), error, policy violation, monitoring, performance metric, and, optionally, transaction events to the CloudStreamsEvents database component. CloudStreams Analytics reads the events data and displays it using the MashZone NextGen dashboard. If you have multiple CloudStreams Server instances, create a single CloudStreamsEvents database component for them to share, regardless of whether they are clustered or not clustered.

Integration Server or Microservices Runtime Database Components

The table below describes the types of data Integration Server or Microservices Runtime can persist.

Database Component: ISInternal
Types of data: Service results, scheduled tasks, client certificate mappings, run-time data for pub.storage services, guaranteed delivery transactions, trigger joins, active OpenID requests, WS-ReliableMessaging runtime data, and configuration and runtime data for OAuth and Account Locking.
When data is written: You are using the features listed above.

Database Component: ISCoreAudit
Types of data and when data is written:
  • Error, guaranteed delivery, service, security, and session audit data. Written when the audit logger for the type of data is enabled.
  • Documents that are in doubt, have failed, or have exhausted trigger retries. Written when you are using triggers.

Database Component: CrossReference
Types of data: Cross-referencing data for publish-and-subscribe solutions.
When data is written: You are using publish-and-subscribe solutions.

Database Component: DocumentHistory
Types of data: Document history data for exactly-once processing in publish-and-subscribe solutions. Integration Server uses the data to detect and reject duplicate documents.
When data is written: You are using exactly-once processing.

Database Component: DistributedLocking
Types of data: Information that coordinates access to resources across distributed servers and processes.
When data is written: You are executing services in the pub.storage folder.

External RDBMS versus Embedded Database

You must use an external RDBMS with Integration Server or Microservices Runtime if you are going to:

  • Cluster server instances.
  • Install in a production environment with high transaction volumes or large datasets. The definition of "high" and "large" depends on the amount of memory available to your JVM, the speed of your CPU, and the kind of work your application is doing (for example, more processing intensive, or more IO intensive).
  • Need a database that is scalable, visible, and highly reliable, and that has robust management and recovery tools.
  • Write document history data.
  • Have services log their input pipelines.
  • Use triggers for joins.
  • Use pub.storage for high-volume storage.
  • Write Process Audit Log and Process Engine data for business processes.
  • Use Business Rules with Auditing or Hot Deployment.
  • Use Monitor.

If you are not going to do any of the above, and your database demands are low, your needs might be met by using the embedded database. For example, you can use the embedded database when you are using Integration Server or Microservices Runtime as a runtime for adapters, eStandards Modules, API Gateway, or Deployer.

Using the Embedded Database

Integration Server and Microservices Runtime use Derby, a file-based database, as their embedded database. When you use the embedded database, Integration Server or Microservices Runtime writes IS Internal and Cross Reference data to that database, and writes IS Core Audit Log data to files.

Important: If one database file gets corrupted, the entire database might become unusable. Software AG therefore recommends backing up the \IntegrationServer\instances\instance_name\db directory for Integration Server or the Software AG_directory \db directory for Microservices Runtime periodically so you will be able to return to the last known good state if necessary.

If you later want to write these types of data to an external RDBMS instead, you will need to create the necessary database components in the RDBMS (see Database Component Descriptions and Installation Requirements) and configure Integration Server or Microservices Runtime to write to them (see Connecting Products to Database Components). You must also complete the steps for switching from the embedded database to an external RDBMS (see webMethods Integration Server Administrator’s Guide).

Using the External RDBMS

When you use an external RDBMS, create the ISCoreAudit, ISInternal, and DistributedLocking database components in the RDBMS. You must create the ISInternal and DistributedLocking database components in the same schema or database, as appropriate for your RDBMS. Also create other database components as needed. For example, if you are using publish-and-subscribe solutions and exactly-once processing, you will need the CrossReference and DocumentHistory database components; if you are running business processes, you will need the ProcessAudit and ProcessEngine database components. For complete information about the database components you will need to create, see Database Component Descriptions and Installation Requirements.

If you are clustering server instances, create an ISCoreAudit database component, an ISInternal database component, and a DistributedLocking database component in the same schema or database, as appropriate for your RDBMS, for the cluster to share. Also create a CrossReference database component and a DocumentHistory database component for the cluster to share.

If you are not clustering server instances, create an ISInternal database component and a DistributedLocking database component for each server instance. Also create the ISCoreAudit, CrossReference, and DocumentHistory database components; you can create one of each of those database components for all server instances to share, or you can create each of those database components for each server instance.

Archiving the ISCoreAudit Database Component

If you want to archive data from the ISCoreAudit database component (called the "source" database component in this section), create the Archive database component. The Archive database component includes a copy of the source database components. You must create the Archive database component in the same type of RDBMS and on the same database server as the source database components, but in a different schema or database, as appropriate for your RDBMS, than the source database components.

The OperationManagement database component is automatically created when you create the Archive database component. It provides a common method for configuration, logging, and error handling for the Archive database component.

If you are also going to archive ProcessAudit data, you can use the same Archive database component for both the ISCoreAudit and ProcessAudit database components.

Mobile Support Database Component

If you are going to install the Mobile Support package on Integration Server, create the MobileSupport database component. Mobile Support writes data used for synchronizing mobile solutions, and metadata about that data, to this database component.

My webMethods Server Database Components

Create the MywebMethodsServer database component if you are going to install any of the following:

  • My webMethods Server, with or without Task Engine, the user interface for Monitor, Optimize, or Trading Networks, or the Central User Management package for Integration Server
  • Task Engine on Integration Server
  • The Central User Management package and the Common Directory Service Support runtime library for Integration Server, to use with the Integration Server Administrator user interface (preview feature).

My webMethods Server writes data about deployment, configuration, security, portal pages, and run-time operations to this database component. It uses this data to manage the My webMethods user interfaces (for example, the user interfaces for Monitor, Optimize, and Trading Networks) and to support central user management in Integration Server.

The Task Engine on My webMethods Server and the Task Engine on Integration Server write task status, task distribution, and business data. The Task Engine on My webMethods Server displays this data in its user interface in My webMethods. The Task Engine on Integration Server displays this data in its gadgets in Business Console and in a Task Engine browser-based user interface.

If you are clustering My webMethods Servers, create a single MywebMethodsServer database component for the cluster to share. If you are not clustering, create a MywebMethodsServer database component for each My webMethods Server.

The CentralConfiguration database component is automatically created when you create the MywebMethodsServer database component. If you are using Optimize, the CentralConfiguration database component stores the Optimize configuration information you enter in the Central Configurator interface in My webMethods. The CentralConfiguration database component is not automatically dropped when you drop the MywebMethodsServer database component. If you want to drop the CentralConfiguration database component, you must do so manually.

webMethods OneData Database Components

If you are going to install webMethods OneData, create the webMethods OneData database components.

  • webMethodsOneDataMetadata: webMethods OneData writes internal configuration data to this database component.
  • webMethodsOneDataWorkArea: webMethods OneData users create data objects in this database component and write work-in-progress data values to it.
  • webMethodsOneDataReleaseArea: Contains the same data objects as the webMethodsOneDataWorkArea database component, and approved data values. Data values are deployed from the webMethodsOneDataReleaseArea database component.

You must create each webMethods OneData database component in a separate schema or database, as appropriate for your RDBMS, and use a different database user for each database component. You must create the webMethodsOneDataMetadata database component first, then the webMethodsOneDataWorkArea database component, then the webMethodsOneDataReleaseArea database component.

Optimize Database Components

If you are going to install Optimize, create the Analysis and ProcessTracker database components.

Optimize Analytic Engines write computed analytics and process and monitoring data received from Infrastructure Data Collectors and Web Service Data Collectors to the Analysis database component. The Optimize user interface displays the data. Create a single Analysis database component for all Optimize Analytic Engines to share. If you are going to use root cause analysis for business processes, install the Analysis and ProcessTracker database components in the same schema or database, as appropriate for your RDBMS.

Optimize Analytic Engines write business and process status data received from processes that are not orchestrated by the Process Engine to the ProcessTracker database component. The Optimize user interface displays the data. Monitor reads process status data from this database and displays it in the Monitor interface in My webMethods. Create a single ProcessTracker database component for all Optimize Analytic Engines to share. If you are going to use root cause analysis for business processes, install the ProcessTracker and Analysis database components in the same schema or database, as appropriate for your RDBMS.

The DataPurge and DatabaseManagement database components are automatically created when you create the Analysis or ProcessTracker database component. The DataPurge database component provides a common method for purging data from the Analysis and ProcessTracker database components. The DatabaseManagement database component provides core routines for data purge functionality.

The OperationManagement database component is automatically created when you create the Analysis or ProcessTracker database component. It provides a common method for configuration, logging, and error handling for those database components.

Process Engine Database Components

If you are going to install Process Engine, create the ProcessEngine and ProcessAudit database components.

Process Engines write process execution data for processes they orchestrate to the ProcessEngine database component. If you are distributing business process steps, you cluster the Process Engines that orchestrate the steps. Create a single ProcessEngine database component for the cluster to share. If you are not clustering, create a ProcessEngine database component for each Process Engine.

Process Engine and the following write to the ProcessAudit database component:

  • Process Engines write process audit data for business processes they orchestrate.
  • Task Engines write task audit data for task steps orchestrated by Process Engine.
  • Third-party products can write process execution data.

The following read from the ProcessAudit database component:

  • Optimize Analytic Engines read process data so they can analyze capabilities such as transition duration, write data about analysis-enabled processes, and display this data in the Optimize interface in My webMethods.
  • Monitor reads process data and displays it in the Monitor interface in My webMethods, where you can track the status of process instances, view logged values, and, in some cases, resubmit process instances.
  • Business Console reads and displays process data so you can monitor process instances in real time.

If you are distributing business process steps, you cluster the Process Engines that orchestrate the steps. Create a single ProcessAudit database component for the cluster to share. If you are not distributing business process steps, and therefore not clustering Process Engines, you can create either a ProcessAudit database component for each Process Engine or a single ProcessAudit database component for all Process Engines to share.

If you are using Process Engines, Task Engines, Optimize Analytic Engines, or some combination of these, create a single ProcessAudit database component for all to share.

Note: You might be able to use partitioning for the ProcessAudit database component. Contact Software AG Professional Services for information.

Archiving the ProcessAudit Database Component

If you want to archive data from the ProcessAudit database component (called the "source" database component in this section), create the Archive database component. The Archive database component includes a copy of the source database components. You must create the Archive database component in the same type of RDBMS and on the same database server as the source database components, but in a different schema or database, as appropriate for your RDBMS, than the source database components.

The OperationManagement database component is automatically created when you create the Archive database component. It provides a common method for configuration, logging, and error handling for the Archive database component.

If you are also going to archive ISCoreAudit data, you can use the same Archive database component for both the ProcessAudit and ISCoreAudit database components.

Rules Engine Database Component

If you are going to install the Rules Engine, create the BusinessRules database component. As each Rules Engine instance starts, it registers itself in this database component and stores information about deployed projects and the state of business rules instances. When you modify a business rule, the hot deployment functionality in the Business Rules user interface enables you to deploy changes to all of the instances registered in this database component. If you install multiple Rules Engine instances, create a single BusinessRules database component for the instances to share.

Note:

If the Rules Engine is excluded from hot deployment and does not use auditing, it does not need a database connection, and you do not need to create the BusinessRules database component. For more information on hot deployment and writing business rules auditing information, see the webMethods Business Rules documentation.

Trading Networks Server Database Components

If you are going to install Trading Networks Server, create the TradingNetworks database component. Trading Networks Server writes metadata (partner profiles, trading partner agreements, document types, processing rules, and partner profile groups) and run-time data (documents, document content parts, attributes, and delivery information) to the TradingNetworks database component. Trading Networks Server also logs entries about document processing steps. The Trading Networks user interface displays the data. If you are clustering Trading Networks Servers, create a single TradingNetworks database component for the cluster to share. If you are not clustering, create a TradingNetworks database component for each Trading Networks Server.

If you want to archive Trading Networks data, create the TradingNetworksArchive database component. Create the TradingNetworksArchive database component in the same schema or database, as appropriate for your RDBMS, as the source TradingNetworks database component.

Note: If you are using Oracle, you might be able to use partitioning for the TradingNetworksArchive database component. Contact Software AG Professional Services for information.

Database Drivers for DB2, MySQL Enterprise Edition, Oracle, PostgreSQL, and SQL Server

Software AG products that support DB2, MySQL Enterprise Edition, Oracle, PostgreSQL, or SQL Server use the DataDirect Connect JDBC driver to communicate with those RDBMSs. Oracle and SQL Server use DataDirect Connect JDBC driver 6.x and the other RDBMSs use DataDirect Connect JDBC driver 5.x.

The products come with the client component of this driver; it is a JDBC native-protocol driver that does not have a server component. The products execute DDL commands at install time and sometimes design time for each RDBMS with which they interact. For information about the DataDirect Connect JDBC driver, see the DataDirect Connect documentation, available on the Software AG Documentation website.

Important: You must use the provided DataDirect Connect JDBC driver with your products and the RDBMSs listed above unless the documentation for a product states that it supports another driver.

Prepare for Database Component Creation in MySQL

Software AG products that support MySQL Community Edition use the MySQL Community Edition database driver to communicate with the RDBMS. For information about the driver, see the vendor documentation.

Note: If you experience any issues with the MySQL native driver, or with fixes, service packs, or patch-bundles for the driver, report them directly to the vendor. Software AG is not responsible for addressing driver-related issues.

Install the MySQL Community Edition Database Driver for Integration Server, Microservices Runtime, and Hosted Products

Download the MySQL Community Edition driver to the Software AG_directory \IntegrationServer\lib\jars\custom directory.

For MySQL Community Edition, you must specify the useCursorFetch parameter in the URL to prevent the return of all the query results in a single batch. The MySQL JDBC driver used with the MySQL Community Edition does not honor the cursor fetch size when returning the results of an SQL query. Instead of returning the JDBC-configured number of rows for each cursor fetch to Integration Server, the MySQL JDBC driver returns all the rows for the query in a single batch. This can result in the exhaustion of the available heap memory for Integration Server. Any JDBC connection pool that connects to MySQL Community Edition must specify the following connection option in the Database URL: useCursorFetch=true
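For example, a Database URL for a JDBC connection pool that connects to MySQL Community Edition might look like the following (host, port, and database name are placeholders):

```
jdbc:mysql://dbhost:3306/webmdb?useCursorFetch=true
```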

Install the MySQL Community Edition Database Driver and Edit the MySQL Configuration File for All Other Products

Procedure

  1. Download the MySQL Community Edition driver to the Software AG_directory \common\lib\ext directory.
  2. Rename the MySQL Community Edition driver jar to mysql-connector-java.jar.
  3. Go to the Software AG_directory\common\db\bin directory, open the setEnv.bat file, and add the MySQL Community Edition driver jar to the classpath, as follows:
    set CLASSPATH=%CLASSPATH%;%DCI_HOME%\..\lib\ext\mysql-connector-java.jar
  4. Open the MySQL configuration file.
    1. Remove the values NO_ZERO_IN_DATE and NO_ZERO_DATE from the sql_mode variable.
    2. If MySQL is on a Linux system, set the parameter lower_case_table_names=1. If MySQL is on an AWS RDS system, set the parameter log_bin_trust_function_creators=1.
    3. Add the parameter transaction-isolation=READ-COMMITTED.
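After these edits, the relevant portion of the MySQL configuration file might look like the following sketch. The sql_mode value shown is illustrative; keep whatever other modes your server already uses, minus the two removed values:

```ini
[mysqld]
# sql_mode with NO_ZERO_IN_DATE and NO_ZERO_DATE removed
sql_mode=STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO
# Linux systems only
lower_case_table_names=1
# AWS RDS systems only
log_bin_trust_function_creators=1
transaction-isolation=READ-COMMITTED
```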

Prepare for Database Component Creation for All RDBMSs

Database Administrator Account

You need DBA access only if you want to use the Database Component Configurator to create the necessary database users and storage objects.

Your products are designed to use the database users you will create for the database components using the instructions in this guide.

Character Set and Sort Order

Your products are globalized and support Unicode. Software AG strongly recommends choosing a Unicode encoding for your database and the most appropriate sort order for your environment. A database character set determines which languages a database can represent. Database sort order determines collation and comparison behavior.

The sections below list the most appropriate Unicode character encoding and sort order for each RDBMS that your products support. If you want to use a different character set or sort order than recommended below, consult your database administrator and your RDBMS vendor's documentation so you can carefully choose a database character set that supports the languages your data is in.

If you use the Database Component Configurator to create your database components, you can check whether the selected RDBMS is configured for the Unicode character set. If the RDBMS does not support Unicode, the configurator lists the character set the RDBMS does support.

Important: You must set character set and sort order before creating storage.

DB2

Database schemas for DB2 use character data types. DB2 supports UTF-8 for character data types and UTF-16 for graphic data types.

Software AG recommends the following character sets and sort order:

  • Character set: CCSID 1208 (UTF-8). My webMethods Server requires this character set.
  • Graphic character set: UTF-16
  • Sort order: IDENTITY_16BIT. This sort order ensures the same sorting result for both character and graphic data types.
You can check the database configuration using the GET DATABASE CONFIGURATION command.
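For example, from a DB2 command line processor session, you might run the following (MYDB is a placeholder database alias); the output should include the database code set and collating sequence:

```
db2 GET DATABASE CONFIGURATION FOR MYDB
```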

MySQL Community Edition and Enterprise Edition

The server character set and collation are used as default values if the database character set and collation are not specified in CREATE DATABASE statements. They have no other purpose.

You can determine the current server character set and collation settings from the values of the character_set_server and collation_server system variables. You can change these variables at runtime.

Software AG recommends the following character set and collation:

  • Character set: UTF-8
  • Collation: utf8_general_ci

You can check the database configuration using the SHOW VARIABLES LIKE command.
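For example, the following statements show the current server settings (a sketch; you need a client session with sufficient privileges):

```sql
-- Check the server character set and collation
SHOW VARIABLES LIKE 'character_set_server';
SHOW VARIABLES LIKE 'collation_server';
```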

Oracle

Database schemas for Oracle use character data types. For character data types, Oracle supports the UTF8 and AL32UTF8 Unicode encodings. While UTF8 is CESU-8 compliant and supports the Unicode 3.0 UTF-8 Universal character set, AL32UTF8 conforms to the Unicode 3.1 or higher UTF-8 Universal character set. For nchar data types, Oracle supports the AL32UTF8 and AL16UTF16 Unicode encodings. The supported Unicode version for AL32UTF8 depends on the Oracle database version. Oracle database schemas for your products do not have linguistic indexes.

Software AG recommends the following character sets and sort order:

  • Character set: AL32UTF8
  • Nchar character set: AL16UTF16
  • Sort order: Binary

You can check database configuration and session settings by querying the SYS.NLS_DATABASE_PARAMETERS or V$NLS_PARAMETERS view.
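For example, the following query retrieves the settings relevant to the recommendations above (a sketch; the parameter names are standard Oracle NLS parameters):

```sql
-- Check character sets and sort order
SELECT parameter, value
  FROM SYS.NLS_DATABASE_PARAMETERS
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET', 'NLS_SORT');
```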

PostgreSQL

PostgreSQL uses UTF-8 encoding by default.

SQL Server

Database schemas for SQL Server use nchar data types. SQL Server provides support for UTF-16 through its nchar data types. Since nchar data types are always in UTF-16, you do not have to perform any special database configuration.

Some products, such as Process Engine, require a double-byte character set (DBCS). Choose the most appropriate code page for your environment as a database character set.

Software AG recommends the following character sets and sort order:

  • Character set: The appropriate encoding for the languages your data is in.
  • Nchar character set: UTF-16
  • Sort order: Any case-insensitive collation type. If you do not choose a case-insensitive sort order, you will not be able to create some database components in SQL Server.

You can check the database configuration using the sp_helpdb database stored procedure.
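For example (webmdb is a placeholder database name):

```sql
-- Show database settings, including collation
EXEC sp_helpdb 'webmdb';
-- Or query the collation directly
SELECT DATABASEPROPERTYEX('webmdb', 'Collation') AS collation_name;
```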

Page and Block Size

The table below lists the required page or block size for each type of RDBMS, along with the RDBMS default.

RDBMS        Required Page and Block Size   Default
DB2          32K page size                  4K
MySQL        16K page size                  16K
Oracle       8K page/block size             8K
PostgreSQL   8K page/block size             8K
SQL Server   8K page/block size             8K

Set Database Options

You must set database options for the RDBMSs below.

MySQL

Set the parameter default_time_zone='+00:00' in the MySQL configuration file for the [client] and [mysqld] sections.
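The resulting entries in the MySQL configuration file would look like this sketch:

```ini
[client]
default_time_zone='+00:00'

[mysqld]
default_time_zone='+00:00'
```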

Oracle

For your products to function properly, you must set the NLS_LENGTH_SEMANTICS initialization parameter to BYTE.

Important: Your products use BYTE rather than CHAR semantics because BYTE is the default configuration of Oracle database and is used by most Oracle users. If you try to use your products with CHAR semantics, unexpected results might occur. For example, since CHAR semantics allow storage of longer data than BYTE semantics (for Japanese, varchar(12 byte) takes four characters in UTF8 while varchar(12 char) takes 12), using CHAR semantics could cause the buffer in some products to overflow.
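One way to check and set the parameter is sketched below; whether SCOPE=BOTH is appropriate depends on whether the instance runs from an spfile, and the ALTER SYSTEM statement requires DBA privileges:

```sql
-- Check the current setting
SELECT value FROM V$PARAMETER WHERE name = 'nls_length_semantics';
-- Set BYTE semantics for the instance
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = BYTE SCOPE = BOTH;
```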

SQL Server

Software AG recommends the following database options for your products to function properly:

ALTER DATABASE database_name SET ALLOW_SNAPSHOT_ISOLATION ON;           
ALTER DATABASE database_name SET READ_COMMITTED_SNAPSHOT ON;
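After running these statements, you can verify the settings with a query like the following (database_name is a placeholder):

```sql
-- Verify snapshot isolation settings
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
  FROM sys.databases
 WHERE name = 'database_name';
```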

In addition, enable Named Pipes and TCP/IP protocols using the SQL Server Configuration Manager.

By default, SQL Server uses case-insensitive collations. If you create the My webMethods Server, Integration Server, or Trading Networks database component, do not change this option from the default; My webMethods Server, Trading Networks, and Integration Server do not support case-sensitive collations.

webMethods modules are not tested for case sensitivity. Setting a different collation in the database might or might not work, but it is not tested and not supported.