Input page for write operations

On the Input page, specify the properties to perform a classicfedwrite operation.

General tab

On the General tab, you can specify an optional description of the input link.

Properties tab

On the Properties tab, specify properties for the input link. The properties determine the data source, the operation to be performed on the data, the output resource, and so on. The following table lists the properties and their attributes. A more detailed description of each property follows.

Table 1. Target category properties of the Input page for classicfedwrite operations

Property       Values                              Default value   Required   Property dependent on
Table          Table name                          N/A             Yes        N/A
Write method   Upsert, Write                       Write           Yes        N/A
Write mode     Append, Create, Replace, Truncate   Append          Yes        N/A

Table 2. Connection category properties of the Input page for classicfedwrite operations

Property      Values             Default value   Required   Property dependent on
Data source   Data source name   N/A             Yes        N/A
User          User name          N/A             Yes        Data source
Password      Password           N/A             Yes        Data source

Table 3. Options category properties of the Input page for classicfedwrite operations

Property                Values                                                             Default value      Required   Property dependent on
Insert array size       Integer                                                            2000               No         N/A
Truncate column names   False, True                                                        False              Yes        N/A
Open command            SQL statement                                                      No                 No         N/A
Close command           SQL statement                                                      No                 No         N/A
Isolation level         Read uncommitted, Read committed, Repeatable read, Serializable    Read uncommitted   No         N/A
Create statement        String                                                             N/A                No         Write method and Write mode
Row commit interval     Integer                                                            N/A                No         Insert array size
Drop unmatched column   False, True                                                        False              No         N/A
Delimiter               ASCII character                                                    @                  No         N/A

Target category

Specify the Table, Write Method, and Write Mode values:

Table
Specify the table name that connects the stage to the target file that is located on a remote host.
Write Method
Specify how data is written and exported. The write method can be Write or Upsert.
Write Mode
Specify how the stage writes and exports data to a single table. The write mode defines how the records from the data source are inserted into the destination table. Write Mode can have one of the following values, and each mode requires specific user privileges. A rough SQL sketch of each mode follows the list.
  • Append - This is the default mode. Appends new rows to the existing destination table. The destination table must exist, and the record schema of the data set must be compatible with the schema of the table. The schema of the existing table determines the input interface of the stage.
  • Create - Creates a new destination table. If a table with the same name as the one being created already exists, the operation terminates and an error message is displayed. The schema of the DataStage data set determines the schema of the new table. The table is created with simple default properties. To create a table with properties other than the defaults, such as a partitioned or indexed table or a table in a non-default table space, supply your own create table statement in the Create statement property (the -createstmt option).
  • Replace - Replaces an existing destination table. The classicfedwrite operation drops the table that you specify as the destination table and creates a new table in its place, so a table that has the same name as the one being created is overwritten. The schema of the DataStage data set determines the schema of the new table.
  • Truncate - Requires an existing destination table. The classicfedwrite operation retains the attributes of the destination table but discards the existing records and appends the new records. The schema of the existing table determines the input interface of the stage.
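The following sketch is illustrative only and is not the exact SQL that the stage runs; the SALES table and its columns are hypothetical, and the stage generates the actual statements for the target data source internally.

    -- Append (default): rows are inserted into the existing table.
    INSERT INTO SALES (ORDER_ID, AMOUNT) VALUES (?, ?)

    -- Create: a new table is created; the operation fails if SALES already exists.
    CREATE TABLE SALES (ORDER_ID INTEGER, AMOUNT DECIMAL(10,2))

    -- Replace: the existing table is dropped and re-created before rows are inserted.
    DROP TABLE SALES
    CREATE TABLE SALES (ORDER_ID INTEGER, AMOUNT DECIMAL(10,2))

    -- Truncate: existing rows are discarded, but the table definition is kept.
    DELETE FROM SALES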

Connection category

Specify Data source, Password, and User values:

Data source
This property is mandatory. Specify the database connection with one of these methods:
  • Enter the data source name in the Data source field on the right side of the Properties page.
  • Insert the required value as a job parameter. Click the pointer button on the extreme right side of the Properties page, and then click Insert Job Parameters.

    A dialog is displayed with a list of available job parameters. To create a new parameter for the job, click [New...] from the list and create an appropriate environment variable in the Job Properties dialog.

    Note: With IBM® InfoSphere® DataStage® and QualityStage® Administrator, you can create parameters for all jobs within the project.
User
This property is required. Specify the user name for connections to the data source with one of these methods:
  • Enter the user name in the User field on the right side of the Properties page.
  • Insert the required value as a job parameter. Click the pointer button on the extreme right side of the Properties page, and then click Insert Job Parameters.

    A dialog is displayed with a list of available job parameters. To create a new parameter for the job, click [New...] from the list and create an appropriate environment variable in the Job Properties dialog.

    Note: If you insert some or all of the Connection category values from the job parameter list, the job takes the values of those environment variables from the operating system. At runtime, you are prompted to modify the values of those environment variables.
Password
This property is required. Specify the password for connections to the data source with one of these methods:
  • Enter the password in the Password field on the right side of the Properties page.
  • Insert the required value as a job parameter. Click the pointer button on the extreme right side of the Properties page, and then click Insert Job Parameters.

    A dialog appears with a list of available job parameters from which you can choose. If you want to create a new parameter for the job, click [New...] from the list and create an appropriate environment variable in the Job Properties dialog.
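For example, if you define hypothetical job parameters named DSN, DB_USER, and DB_PASSWORD, the Connection category values can reference them between number signs (#):

    Data source:  #DSN#
    User:         #DB_USER#
    Password:     #DB_PASSWORD#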

Options category

Specify values for the Insert array size, Truncate column names, Open command, Close command, Length to truncate, Drop unmatched column, Isolation level, and Create statement properties. Only the Truncate column names property is displayed by default. You can add the other properties from the Available properties to add list.

Insert Array Size
Specify the size of the insert host array. The value must be an integer. The default value is 2000.
Truncate Column Names
Set one value depending upon your requirements:
  • True - Indicates that column names are truncated to the length that is allowed by the Classic Federation driver.
  • False - Disables truncation of column names.
Close Command
Enter the SQL statement to be run after an insert array is processed. This statement is run only once on the conductor node.
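For example, a close command might record the end of the load in a control table. The LOAD_CONTROL table, its columns, and the literal values are illustrative only:

    UPDATE LOAD_CONTROL SET STATUS = 'COMPLETE' WHERE JOB_NAME = 'SALES_LOAD'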
Drop unmatched column
Set one value depending upon your requirements:
  • True - Indicates that unmatched columns in the data set are dropped. An unmatched column is a column for which there is no corresponding column in the data source table.
  • False - Indicates that unmatched columns of the data set are not dropped.
Open Command
Enter the SQL statement to be run before the insert array is processed. This statement is run only once on the conductor node.
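For example, an open command might clear a staging table before any insert arrays are processed. The SALES_STAGING table name is illustrative only:

    DELETE FROM SALES_STAGING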
Isolation level
Select the isolation level for accessing data. The database that is specified in the data source determines the default isolation level.
Create statement
Specify the SQL statement that creates the table. This property is displayed only if the Write mode is Create or Replace.
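For example, a create statement for a hypothetical SALES table might place the table in a non-default table space. The table, column, and table space names are illustrative only, and the exact syntax depends on the target data source:

    CREATE TABLE SALES (ORDER_ID INTEGER NOT NULL, AMOUNT DECIMAL(10,2)) IN TSSALES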

Partitioning tab

On the Partitioning tab, specify details to determine how the incoming data is partitioned or collected before it is written to the destination database. You can also specify that the data is sorted before it is written to the destination database.

By default, the stage partitions data in Auto mode. In Auto mode, the stage selects the best partitioning method based on the execution mode of the current and preceding stages and on the number of nodes that are specified in the configuration file.

If the stage is operating in sequential mode, the stage collects data before writing the data to the destination database with the default Auto collection method. Use the Partitioning tab to override this default behavior. The properties on this tab behave differently according to whether the current and preceding stages are set to run in parallel or sequential mode.

Columns tab

On the Columns tab, view and modify column metadata for the input link. Use the Save button to save any modifications that you make to the column metadata. Use the Load button to load column definitions from an existing table definition.

From the Table Definitions window, select the appropriate table to load and click OK. The Select Column dialog is displayed. To ensure the appropriate conversion of data types, clear the Ensure all Char columns use Unicode check box.

Advanced tab

On the Advanced tab, you can specify how input data for the stage is buffered. By default, data is buffered so that no deadlocks can arise. A deadlock is a situation in which a number of stages are mutually dependent; for example, one or more stages are waiting for input from another stage and cannot output data until they receive that input.

Columns button

Use the Columns button to define column names for the destination table.