Using Server Studio JE with Informix Dynamic Server

IBM® Informix® Dynamic Server is well known for its ease of administration. When you combine it with Server Studio, administrative tasks become even easier. Learn how Server Studio's proven tools for DBAs and developers emulate natural workflow and improve your efficiency when performing common database tasks. In this article, you'll learn how to connect Server Studio to your Informix servers and get started with some basic database administration tasks.

Prasanna Alur Mathada, Informix-Interoperability Team, India Software Lab, IBM

Prasanna Mathada is a certified Informix Professional. He currently works for the Interoperability team, ISL, India.



Dhanashri Kudgavkar (dkudgavk@in.ibm.com), Informix-Interoperability Team, India Software Lab, IBM

Dhanashri Kudgavkar works for the Interoperability team, ISL, India. She has extensive experience with IDS behaviors and features.



03 January 2007

Also available in Chinese

Introduction

Server Studio JE

Server Studio JE (SSJE) is a stand-alone, Java™-based integrated development environment for IBM Informix Dynamic Server (IDS). Server Studio helps you manage database tasks such as storage management, performance tuning, and process and event response management. In addition, it minimizes error-prone, time-consuming, command-line-driven administrative tasks. An integrated graphical console brings together your development, administration, and maintenance activities for all your IDS data servers, giving you a centralized point of control as you manage for maximum availability and performance, regardless of the location of your data servers.

Informix Dynamic Server

IBM Informix Dynamic Server is a complete and versatile relational database management system with excellent performance, high scalability, and low administration cost. Best suited for online transaction processing, it provides a secure and reliable database environment. With its continuous availability and failover recovery features, it helps ensure continuous business operations.

IDS is well known for its low maintenance cost. With some of the newest features, including a deployment wizard for lowering the server footprint, SQL APIs for administration activities, and a new data query scheduler for monitoring events and resources, you can bring down your administration costs even further.

Connecting SSJE to an IDS database

Connection Manager is the SSJE tool you'll use to define connections between SSJE and IDS. It also helps you to assign connections to connection groups.

To create a new connection to IDS, follow these steps:

  1. From the Tools menu, choose Connection Manager.
  2. In the General tab, specify a name for a new connection, and choose the version of IDS that you are using.
    Figure 1. Configuring your connection
  3. Using the Group list box, you can assign this connection to a connection group if required.
  4. Use the Database tab to provide server-specific JDBC connection parameters, including the server name, the host name where IDS is installed, the port it is listening on, and the database that you wish to connect to. The user ID and password fields are optional and can be provided at connection time.
    Figure 2. Database settings
  5. The Host Access tab allows you to provide configuration information for accessing IDS on the remote host using Telnet or SSH.
  6. Test the connection using the Test button, and then save this new connection using the Save button from this dialog box.
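Under the hood, the Database tab settings combine into an Informix JDBC connection URL. As a rough illustration (the host, port, database, and server names here are hypothetical examples, not values from this article):

```
jdbc:informix-sqli://myhost:9088/stores_demo:INFORMIXSERVER=ol_myserver;user=informix;password=secret
```

The port is the one your IDS instance listens on for the SQLI protocol, and INFORMIXSERVER must match the DBSERVERNAME (or an alias) in the server's ONCONFIG file.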

Managing content

Once you have successfully connected to your IDS database, you should be able to see the general properties of the connection on the General tab of the Property window. In the Object Explorer window, you can see the various functionalities provided by the IDS database, as shown in Figure 3:

Figure 3. Object Explorer

Alongside the General tab are the Profile, Logical Logs, ONCONFIG, and Message Log tabs.

From the Profile tab, you will see various functions associated with the current connection, their values, and their description. For instance, take a look at the following three lines:

delete	21382		Number of times the delete function is called 
commit	43094		Number of times the iscommit() function was called
rollbk	158		Number of times transactions rolled back

Here, in the first line, "delete" is the counter name, "21382" is the number of times the delete function was called, and the rest of the line describes the counter's purpose.
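These Profile counters are also exposed through the sysmaster database, so you can retrieve the same figures from any SQL client. A minimal sketch, assuming you can connect to sysmaster (the exact counter names vary somewhat between IDS versions):

```sql
-- List the server profile counters shown on the Profile tab
SELECT name, value
  FROM sysmaster:sysprofile
 ORDER BY name;
```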

The Logical Logs tab provides information on the states and statistics of all the logical logs. The ONCONFIG tab displays the various parameters associated with the IDS server, showing each parameter's boot-time value, its current value, and, in the last column, its default value, so that you can see at a glance where the values differ.
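The same information backs the ONCONFIG tab. A hedged equivalent query against the sysmaster:sysconfig pseudo-table (column names as in recent IDS versions; verify against your release):

```sql
-- Boot-time, current, and default value of each configuration parameter
SELECT cf_name, cf_original, cf_effective, cf_default
  FROM sysmaster:sysconfig
 ORDER BY cf_name;
```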

Managing databases

When you choose the databases section, you are presented with a high level summary of the existing databases. The summary includes:

  • Database name
  • Database owner
  • Total size allocated
  • Total size being used
  • Name of the DBSpace containing the database
  • The collate value
  • Information on whether the database is logged or not
  • Information on whether the database is buffered or not
  • Type of database (ANSI or non-ANSI)
  • Date created
Figure 4. Database summary
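Much of this summary can also be retrieved with a query against sysmaster:sysdatabases. A sketch (the flag columns are version-dependent):

```sql
-- Name, owner, creation date, and logging flags of every database
SELECT name, owner, created, is_logging, is_buff_log, is_ansi
  FROM sysmaster:sysdatabases
 ORDER BY name;
```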

Select a database that you wish to work with and expand it to view its components. Figure 5 shows the components of an individual database.

Figure 5. Database components

Managing routines (SPL and external)

The following operations can be performed under the Routines section:

Function	Functionality
New	Create a new routine using either the SQL Editor or the Routine option.
Filter	Set a filter on the routines, specifying various criteria.
Debugger	Open, edit, and debug the routine that needs to be debugged.
Update Statistics	Run UPDATE STATISTICS.
DB Schema	Export or copy the routines, either to a script file on the local machine or by deploying directly to the selected database. You can also print the routines.
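Whichever entry point you use, SSJE ultimately issues ordinary SPL DDL. A minimal sketch of the kind of routine you might create in the SQL Editor (the table and column names here are hypothetical):

```sql
-- Raise every unit price in a hypothetical stock table by pct percent
CREATE PROCEDURE raise_prices(pct DECIMAL(5,2))
    UPDATE stock SET unit_price = unit_price * (1 + pct / 100);
END PROCEDURE;

-- Run it, then remove it
EXECUTE PROCEDURE raise_prices(10);
DROP PROCEDURE raise_prices;
```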

Managing tables

The following operations can be performed under the Tables section:

Function	Functionality
New	Create a new table using either the SQL Editor or the Table option, supplying the necessary inputs.
Filter	Set a filter on the tables, specifying various criteria.
Reorganize	Reorganize the selected table.
Calculate Extent Sizes	Calculate the extent sizes being used.
Update Statistics	Run UPDATE STATISTICS.
DB Schema	Export or copy the tables, either to a script file on the local machine or by deploying directly to the selected database. You can also print the tables.
Data	Edit the data within the tables. Select the table from the Object List or type its name directly into the Name field. Clicking OK takes you to the next screen, which offers three ways to retrieve the rows to edit: select all rows, select the first N rows, or use a query to select specific rows.
Generate Test Data	Populate a table with generated fake values by specifying the table name and the number of rows.
Import / Export Data	Import or export data using either the regular data export method or Informix HPL.
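The Table and Update Statistics options above correspond to plain SQL that you could equally run from the SQL Editor. For example (table and column names hypothetical):

```sql
-- Create a small table, then refresh optimizer statistics for it
CREATE TABLE accounts (
    acct_num  SERIAL PRIMARY KEY,
    owner     VARCHAR(30),
    balance   DECIMAL(10,2)
);

UPDATE STATISTICS MEDIUM FOR TABLE accounts;
```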

Managing the system tables

The following operations can be performed under the System Tables section:

Function	Functionality
New	Query existing system tables using the SQL Editor.
Filter	Set a filter on the system tables, specifying various criteria.
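Querying the system catalog from the SQL Editor is straightforward; by Informix convention, catalog tables have tabid values below 100:

```sql
-- List the system catalog tables of the current database
SELECT tabid, tabname, owner
  FROM systables
 WHERE tabid < 100
 ORDER BY tabid;
```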

Managing indexes

The following operations can be performed under the Indexes section:

Function	Functionality
New	Create a new index using either the SQL Editor or the Index option, specifying the required inputs.
Filter	Set a filter on the indexes, specifying various criteria.
Enable / Disable Indexes	Enable or disable the indexes on the required fields of the table.
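Behind the Index and Enable / Disable options sit standard Informix statements, sketched here with hypothetical names:

```sql
-- Create an index, disable it temporarily, then re-enable it
CREATE INDEX ix_accounts_owner ON accounts (owner);

SET INDEXES ix_accounts_owner DISABLED;
SET INDEXES ix_accounts_owner ENABLED;
```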

Managing constraints

The following operations can be performed under the Constraints section:

Function	Functionality
New	Specify new constraints, or edit existing ones, using the SQL Editor. A foreign key can also be specified using the Foreign Key option with the required inputs.
Filter	Set a filter on the constraints, specifying various criteria.
Enable / Disable Constraints	Enable or disable the constraints on the required fields.
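In SQL terms (table and constraint names hypothetical), adding a foreign key and toggling it looks like this:

```sql
-- Add a named foreign-key constraint, then disable and re-enable it
ALTER TABLE orders
  ADD CONSTRAINT (FOREIGN KEY (acct_num)
                  REFERENCES accounts (acct_num)
                  CONSTRAINT fk_orders_acct);

SET CONSTRAINTS fk_orders_acct DISABLED;
SET CONSTRAINTS fk_orders_acct ENABLED;
```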

Managing views

The following operations can be performed under the Views section:

Function	Functionality
New	Create new views using either the SQL Editor or the View option, specifying the required inputs.
Filter	Set a filter on the views, specifying various criteria.
DB Schema	Export or copy the views, either to a script file on the local machine or by deploying directly to the selected database. You can also print the views.
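A view created through the View option is ordinary DDL; for instance (table and column names hypothetical):

```sql
-- A view restricting accounts to those with a positive balance
CREATE VIEW funded_accounts AS
    SELECT acct_num, owner, balance
      FROM accounts
     WHERE balance > 0;
```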

Managing triggers

The following operations can be performed under the Triggers section:

Function	Functionality
New	Create new triggers using either the SQL Editor or the Trigger option, specifying the required inputs.
Filter	Set a filter on the triggers, specifying various criteria.
Enable / Disable Triggers	Enable or disable the triggers on the required fields.
DB Schema	Export or copy the triggers, either to a script file on the local machine or by deploying directly to the selected database. You can also print the triggers.
Debugger	Open, edit, and debug the part of the trigger code that needs to be debugged.
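As an illustration of the trigger DDL the Trigger option generates, here is a sketch with hypothetical tables (the audit table is assumed to exist):

```sql
-- Record every balance change in a hypothetical audit table
CREATE TRIGGER trg_balance_audit
    UPDATE OF balance ON accounts
    REFERENCING OLD AS pre NEW AS post
    FOR EACH ROW
        (INSERT INTO balance_audit
         VALUES (post.acct_num, pre.balance, post.balance, CURRENT));

-- The Enable / Disable option maps onto:
SET TRIGGERS trg_balance_audit DISABLED;
```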

Managing data types

The following operation can be performed under the Datatype section:

Function	Functionality
New	Use the SQL Editor to work with the various data types.

Managing synonyms

The following operations can be performed under the Synonyms section:

Function	Functionality
New	Create a new synonym using either the SQL Editor or the Synonym option, specifying the required inputs.
Filter	Set a filter on the synonyms, specifying various criteria.
DB Schema	Export or copy the synonyms, either to a script file on the local machine or by deploying directly to the selected database. You can also print the synonyms.

Managing security

The following operation can be performed under the Security section:

Function	Functionality
New	Create a new user or a new role using the User and Role options, specifying the required inputs.

Under the "User" sub-category, you can create a new entry with one of the following privilege levels:

  • Connect
  • Resource
  • DBA
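These privilege levels, and the object permissions shown in Figure 7, correspond to standard GRANT statements (user and object names here are hypothetical):

```sql
-- Database-level privileges
GRANT CONNECT TO maria;
GRANT RESOURCE TO carlos;
GRANT DBA TO admin1;

-- Object-level permissions for a specific user
GRANT SELECT, UPDATE ON accounts TO maria;
GRANT EXECUTE ON PROCEDURE raise_prices TO maria;
```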

Figure 6 displays a snapshot of several entries created under User and Role:

Figure 6. Entries created under User and Role

Figure 7 illustrates how you can edit and grant permissions to your users based on what tables, procedures, and views they are going to use.

Figure 7. Summary of permissions provided to a specific user

Managing E/R diagrams

The following operations can be performed under the E/R Diagrams section:

Function	Functionality
New	Create a new diagram using the Diagram option, specifying the required input.
Open Diagram	Open an existing E/R diagram to analyze it and gather information.

Figure 8 shows part of an E/R diagram obtained from querying the 'sysmaster' database.

Figure 8. Section of the ER diagram from Sysmaster database

Managing storage space

The Storage utility is broken down into subparts, namely:

  • Spaces
  • Chunks

Figure 9 shows the summary details that appear when Spaces and Chunks are selected.

Figure 9. Spaces and chunks

We'll examine these individually.

Managing spaces

The Spaces section describes the DBSpaces that currently exist. When you select one of the spaces, as shown in Figure 10, you will see a page providing an overview of that space: its name, owner, type, the number of chunks allocated, the total size allocated, the remaining free size, the size of the individual pages, and whether the space is mirrored or not. All these details appear on the General tab, one of four tabs under the Spaces section.

Figure 10. General tab of rootdbs DBSpace

The Chunks tab provides information on the device holding the space, the total space (number of pages and megabytes used), the available free space, and the percentage of the space used. Double-click an entry to jump to the General tab of the respective chunk.

The third tab provides information on all the database objects in the given space, including the name of each database object, the database containing it, the number of extents used by that object, the size allocated, the size used, and the percentage of the DBSpace consumed.

The fourth tab, Logical Logs, provides information on the number of the log, its unique id, the allocated size and the size used.

Managing chunks

The Chunks section describes the chunks that currently exist. The chunks listed in this section and the chunks listed under the Spaces section point to the same page. Only the General tab varies slightly, containing information on the device on which the chunk resides, the mode of the chunk (which may be offline, recovering, or inconsistent), whether it is mirrored or not, whether a blob chunk is present or not, the allocated size, and the free size available. Figure 11 depicts the General tab for Chunk 1.

Figure 11. General tab of Chunk #1

Adding a space or a chunk

Follow these steps to add a space:

  1. Specify the DBSpace name and select one of the following types:
    • BLOBSpace
    • SBLOBSpace
    • Extspace
  2. Specify the page size in kilobytes.
  3. For the chunk file option, specify the absolute path where you want your DBSpace to reside. Specify the size in kilobytes, megabytes, or gigabytes.
  4. If you need a mirror chunk, check that box and specify the offset.

As you enter this information, the Command Preview box shows you the command that will be executed to create the DBSpace.

Figure 12. Adding a new space

A new chunk can be added in a similar way, specifying the DBSpace name and making use of the options available in the dropdown menu.
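The command shown in the Command Preview is the server's space-creation command. On IDS 11 and later you can achieve the same effect through the SQL administration API mentioned earlier in this article. A hedged sketch (the space name, path, and sizes are hypothetical, and the call must be run against the sysadmin database):

```sql
-- Create a 100 MB dbspace at offset 0 via the SQL admin API
DATABASE sysadmin;
EXECUTE FUNCTION task('create dbspace',
                      'dbspace2',
                      '/opt/informix/chunks/dbspace2_1',
                      '100 MB', '0');
```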

Virtual processors

This section provides information on the various virtual processors provided by the IDS database server. The Virtual Processors section has three tabs: Statistics, Ready Queue, and I/O Queue.

Here are the classes of virtual processors displayed on the Statistics tab, along with the type of work each handles:

  • CPU: kernel-asynchronous I/O
  • AIO: asynchronous I/O
  • PIO: physical-log I/O
  • LIO: logical-log I/O
  • ADM: administrative tasks
  • MSC: miscellaneous tasks

Figure 13 displays the details of all the above mentioned virtual processors:

Figure 13. Classes of virtual processors

Managing the data loader (High Performance Loader)

The High Performance Loader (HPL) is a feature of IDS that allows users to load and unload large amounts of data efficiently to or from a database. The HPL consists of the onpload utility, ipload, and the onpload database. The onpload database can be found in Object Explorer, under the listing of databases available in the connection created to IDS.

You need to configure the HPL in order for it to perform the load and unload activities successfully. Right click on the Data Loader (HPL) in the Object Explorer and choose the Configure option. Provide the appropriate values for various configuration parameters according to your requirements.

The following screen appears when you choose to work with HPL:

Figure 14. HPL Repository

The HPL lets you organize work by specifying projects. A project is a collection of individual pieces that you use to load and unload data. You can assign the load and unload jobs to various projects to organize the jobs into functional groups.

Creating a new load job using the SSJE Data Loader

The example here explains how to load a few rows of data into a table from a file by creating a load job. For information on how to use other kinds of devices, such as tape, pipe, and so on, refer to the documentation links in the Resources section of this article.

Step 1: Right click on Data Loader (HPL) and choose New Load Job. The HPL Project Name list box will have no items in the list at the beginning. Choose the connection that was created earlier from the Connection list box, and the Database list box will be automatically populated with the database name used when you created the connection. Choose Table as the Object Type from the list. The Select button beside Name shows a list of database objects, in this case a list of tables from the selected database. Choose the table that is to be loaded with data rows.

Figure 15. New load job

In the second part of this screen, you need to provide device array details. In the HPL Device Array Name list box, choose Enter New Name and provide a name for the device array that will be used for the loading job. Click the Add button. A dialog box to add a new device will appear, as shown in Figure 16. Provide the details of the device that is going to be used by the load job, in this case the file mydata, located at /ATM on the machine where the IDS instance is running. Click the OK button to continue.

Figure 16. HPL device parameters

The file that will be used for loading the data into the table contains the following data:

5|e|100
6|f|75
7|g|40
8|h|15

The new device added to the device array will show up as follows:

Figure 17. HPL Job Type

Job Type gives you the option of specifying Conversion or No-Conversion. There are two modes of loading, Express and Deluxe. No-Conversion selects Express mode, which is the fast mode, and Conversion selects Deluxe mode. Express mode carries some restrictions: for example, the table is read-only during the load, and no triggers can be invoked on the table during the load. You can find more information on these two modes in the IDS documentation links provided in the Resources section of this article. The HPL map and HPL format are generated automatically during execution for No-Conversion (fast) jobs.

Click on the Next button to continue.

Step 2: Choose the HPL Project Name on this screen. HPL provides a default project named Default. You can choose a different name than the default. In this example, the project name used is Default.

A map specifies the relationship between the fields of a data file and the columns of a database table. To load data into a database, you define a load map, which associates fields from records in a data file to columns in a database table.

Define a name in the Map Name drop-down list. A table on this screen lets you create the required map. Field Name is the name of each field in the data file, and Column Name is the name of each column of the table to be loaded. You can associate any Field Name with any Column Name as required. Click the Apply button to apply the defined map. This example associates the first field of the mydata file with the third column of the table, the second field with the second column, and the third field with the first column.

The Format specifies the organization of the input data. Data files can be formatted in a variety of ways. The HPL supports data file records of the following formats:

  • Fixed length
  • Delimited
  • COBOL
  • Other formats

Provide the format name in the Format Name drop-down box. The Format Type in this example is Delimited, as the fields in the data file are separated by the "|" (pipe) delimiter. Select the field separator ("|" in this example) and the record-end character (newline in this example) from the available list boxes.

Figure 18 illustrates these configuration parameters:

Figure 18. New load job

Click on Next to proceed to the next screen.

Step 3: The HPL also allows you to manipulate and filter the data as load and unload operations are performed. A filter is used to choose which records from the data file should be inserted into the table. The HPL Filter part of the following screen shows how to define a filter. This example loads all the data from the data file mydata, so no filters are defined.

This screen also allows you to define other parameters for the load job, such as Lock Mode, Max allowed errors, Log file, Discarded records file, and so on.

Figure 19. Manipulate and filter the data

Click on the Next button to continue.

Step 4: This is the last screen in the process of creating a new load job. You need to provide a name for the new load job. Save this new job using the Save button.

Figure 20. New load job execution

This new job can be executed by clicking the Execute button from this screen. There is an additional step to perform before the job is executed using SSJE. You must provide the Host Access information to successfully execute a load or unload job from SSJE. Right click on the connection that was created previously, and choose Edit. The Connection Manager window appears. Go to the Host Access tab and provide the necessary information (INFORMIXDIR, ONCONFIG, and protocol) to connect to the host that is running the IDS instance. Click on the Test button to test the connection to the database and also to confirm the correctness of the Host Access configuration.

Figure 21 illustrates setting these parameters from the Connection Manager window:

Figure 21. Connection Manager window

Creating an unload job

As with a load, unload components are grouped together into an unload job. Unload jobs can be saved, retrieved, and rerun as many times as necessary. Unload jobs can be grouped together with load jobs in the same project. The HPL uses the same components for an unload as for a load, with one exception. For an unload, an SQL query is used that extracts selected data from the table.

Step 1: To create a new unload job, right click on Data Loader (HPL) from the Object Explorer and choose New Unload Job. The HPL Unload screen will appear. This is very similar to the screen for a Load job, except that we have one extra drop down box to enter Query Name. This query is used to fetch the data from the specified table.

Also, there are two radio buttons available, one for Single database object and the other for HPL query. Choose HPL query to enter a select query that fetches the rows to unload from the table. Selecting HPL query brings up a small SQL select statement panel that shows a select query for the table in use. You can modify the query as required. This screen also has a section where you can create a new device array or use one from the existing list.

Figure 22. HPL New Job Unload

Click on the Next button to move on to the next screen.

Step 2: The unload job can also be performed in two ways, Conversion and No-Conversion, as explained in the load job creation section. If you choose No-Conversion mode, you do not get the choice to create a map, format, and so on; the next screen asks you to save the new job, and the map and format are created at the time of unload job execution. If Conversion mode is chosen for the unload job, you can create a format, map, filter, and so on. For an unload job, an unload map should be created. The unload map associates the columns that a query retrieves from one or more tables with the fields in a data file. As with a load job, an unload job can be run from SSJE.

Managing memory pools

Wondering how to find out which memory pools have been created and the class where they reside? You can find this information under the Memory Pools section. Along with each pool's name and class, you can also see the session that created it, the total size allocated to the pool, the available free size, and the time elapsed since the pool was created. Note that you can view properties only for memory pools that are associated with a session.

Figure 23 shows you the memory pools summary:

Figure 23. Section of Memory Pool

Report generation

This section comes in handy when you need information on particular operations, the metrics involved, and their values. You can obtain reports related to DBSpaces and the latest values obtained from UPDATE STATISTICS, covering locks, tables, chunks, and their I/O activities.

Figure 24 shows you the summary of the various options on which you can generate reports:

Figure 24. Summary of the options on which reports can be obtained

Choose the options you are interested in to generate a report. You are then offered Run, Save, and Print. Click Run to obtain a report with the latest values, and then save or print the results to use as you require.

Figure 25 illustrates the screen where you select the options to generate a report.

Figure 25. Process of obtaining a report

Conclusion

In this article, we've introduced Server Studio and showed you how to use it to help manage your IDS databases. We've illustrated specific tasks including data load and unload. Follow the links below to download Server Studio and see how it can start making your database management tasks easier and faster.

Resources

Learn

Get products and technologies

Discuss

