Informix Dynamic Server 11.50 Fundamentals Exam 555 certification preparation, Part 4: Examining database objects

Tables, constraints, views, indexes, triggers, sequences, and synonyms

This tutorial continues your journey into IBM® Informix® Dynamic Server by discussing many of the objects that can be created and used inside of a database. Some of these objects include tables, indexes, triggers, and views. This tutorial discusses what they are, how they are used, and how to create them.

Jeffrey S. Bohm (jbohm@us.ibm.com), Advisory Software Engineer, IBM

Jeff has been working with Relational Database Management Systems (RDBMS) products for over 16 years. He has experienced databases from every angle: as a customer, a consultant, an instructor, and a developer. He was hired on at Informix Software in 1994 and has been working with it ever since. His jobs at Informix included advanced technical support, performance tuning and troubleshooting consultant, and customer education specialist. He currently holds a Software Engineer position with the IBM Informix Dynamic Server Stress QA team.

Jeff has worked with hundreds of IDS clients, from the smallest businesses to the largest, including several major retail, grocery, and telco companies. He has helped them get the most out of their IDS installations, from the most basic SQL needs to the most demanding, feature-rich implementations.

Jeff was involved in writing the IDS 9 Certification exams and currently holds Certifications for IDS 5, 7, 9, 10, and 11.

Jeff also helped author several classes that are being taught in IBM's IDS and RDBMS curriculum.



03 September 2009

Before you start

This tutorial teaches you about the objects that can be created and used inside of a database. Some of these objects are there to hold the raw data, while others exist for pure performance reasons, and still others help keep the integrity of the data intact.

The objects being discussed in this tutorial are tables, views, indexes, triggers, constraints, synonyms, and sequences.

This tutorial discusses a little bit about their purpose as well as how to create them.

The tutorial ends with a brief discussion of a built-in tool that prints out the SQL statements needed to recreate or duplicate these objects.

About this series

This complimentary series of nine tutorials has been developed to help you prepare for the IBM Informix Dynamic Server 11.50 Fundamentals certification exam (555). This certification exam will test your knowledge of entry-level administration of IDS 11.50, including basic SQL (Structured Query Language), how IDS 11.50 is installed, how to create databases and database objects, security, transaction isolation, backup and recovery procedures, and data replication technologies and purposes. These tutorials provide a solid base for each section of the exam. However, you should not rely on these tutorials as your only preparation for the exam.

About this tutorial

This tutorial is dedicated to covering the topics in Section 4 of the exam titled "IDS Tables, Views, and Indexes."

Objectives

When you finish this tutorial, you should be able to do the following:

  • Explain what the purpose of a database table is and how to create one
  • Explain the difference between a non-fragmented, fragmented, and partitioned table
  • Describe what a constraint is and the different types of constraints available for use in an Informix database
  • Explain how to create constraints in the database
  • Describe the concept of a view and how it differs from a table
  • Explain how to create a view
  • Describe what role indexes play in a database and how to create them
  • Explain what synonyms and sequences are, and how to create them
  • Describe what a database trigger is and how to create one
  • Explain how to print out the SQL statements (schema) of the objects in the database

Prerequisites

This tutorial is written for up-and-coming database administrators (DBAs). Although some knowledge of basic database concepts may help, it is not required.

System requirements

To complete this tutorial, you do not need a copy of IDS. However, if you have one available to use, you will definitely get more out of the tutorial. If you don't have a copy already, you can download the free trial version (see Resources).


Object list

Now that databases have been discussed, let's move on to some of the objects that can be created inside of the database for use by the users. Table 1 describes the objects that will be discussed in this tutorial. Although this isn't an all-encompassing list of the objects, it will get you familiar with quite a few of them.

Table 1. Objects and their descriptions
Object      Description
Table       Holds data in row and column format
Constraint  Constrains data values and helps keep the integrity of data
View        A virtual table
Index       An access method used to get data from a table
Trigger     An automatic response to an event that occurs in the database
Sequence    A database object that generates unique integers
Synonym     An alternative name for an existing table, view, or sequence object

Tables

As described in Part 1 of this tutorial series, a table is similar to a spreadsheet that houses data in rows and columns. Each column describes a piece of data being held by this table, and each row contains those pieces of data being held about a particular instance of the subject of this table.

The CREATE TABLE SQL statement is used to create a database table. The syntax of the CREATE TABLE statement can get quite complicated with many different options, so this tutorial won't show the syntax, but will instead show examples of using the different clauses of the syntax as it progresses.

The simplest form of the CREATE TABLE statement is creating a table with just basic columns.

Listing 1. Creating a table with basic columns
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	);

This example creates a table called 'customer' with four columns: 'SSN', 'LName', 'Age', and 'Birthday'. Notice how the column definitions are a comma-delimited list set inside a pair of parentheses.

The example in Listing 2 adds on the storage clause. As described in Part 3 of this series, tables are stored in dbspaces. So the storage clause lists the dbspace name in which the table should be created.

Listing 2. Storage clause
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	IN dbspace1;

The next example (Listing 3) adds sizing attributes. The actual data inside a table is stored in pieces of disk known as Informix 'pages'. Informix pages are not the same as O/S disk pages; they are the I/O unit that Informix uses when reading from and writing to disk, and they can be between 2K and 16K in size. Even though data is stored on pages, IDS doesn't want a table's pages scattered all over the disk, so it uses a mechanism called an extent: a collection of contiguous pages. Sizing a table means telling IDS how big, in pages, you want its extents to be. Extents are eight pages by default. IDS lets a DBA specify the size of the very first extent allocated to the table, as well as the size of every extent allocated after it. A second extent is allocated when the first extent's pages are full, a third when the first and second extents' pages are full, and so on; IDS adds more extents as pages are needed to hold data being added to the table.

Note: Extents on disk are in pages, but when defined, the DBA sizes them in kilobytes. So it is important to understand the Informix page size being used on your system. For most UNIX and Linux 32-bit systems, the default page size is 2K. For UNIX and Linux 64-bit systems and Windows systems, the default page size is 4K. You can also specify a non-default page size.

The example in Listing 3 creates a table that has a first extent size of 50 pages and a next extent size of 25 pages, using the assumption that you have a 4K Informix page size:

Listing 3. Sizing attributes
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	EXTENT SIZE 200 NEXT SIZE 100;

The next example (Listing 4) brings in what is called the lock mode clause. When a user attempts to update a row in a table, a lock must be placed on the area being updated so that another user cannot update the same area at the same time. By default, a table is created with lock mode page, which means that the area being locked is the whole page. Since a page can hold multiple rows, even though the user is only updating one row on the page, all the rows on that page are locked and inaccessible to other updating users. The DBA can change the locking granularity by setting lock mode row. With lock mode row set, an updating user locks only the one row being updated, leaving all other rows on that page accessible to other updating users. The example in Listing 4 adds the lock mode row clause; the lock mode page clause is the alternative.

Listing 4. lock mode row clause
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	LOCK MODE ROW;

Each way of locking has its advantages and disadvantages. To more fully understand this, read the "Locking" section of the Administration topics in the Information Center (see Resources).
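The lock mode can also be changed after a table exists, using the ALTER TABLE statement. A minimal sketch, assuming the customer table from Listing 4 has already been created:

```sql
-- Switch an existing table to page-level locking
ALTER TABLE customer LOCK MODE (PAGE);

-- And back to row-level locking
ALTER TABLE customer LOCK MODE (ROW);
```

This lets a DBA experiment with both granularities on a live table without dropping and recreating it.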

Fragmented/partitioned tables

When a table is created, by default, it has to exist entirely within a single dbspace. That dbspace is either chosen by the server or specified in the CREATE TABLE SQL statement with the IN clause (see Listing 2 above). This single-dbspace constraint puts several limitations on a table, including a size limit. To allow for much larger tables, you can create fragmented tables. These special tables break the single-dbspace rule and allow parts of the table to exist in two or more dbspaces. A table is not fragmented unless the DBA explicitly makes it so.

A table can be fragmented in one of two ways:

  • By round-robin – evenly distributes the rows across the fragments listed
  • By expression – specifies an expression that locates the rows in the appropriate fragment

Note: Fragmented tables can only have one fragment per dbspace, so you have to specify a dbspace name for each fragment of the table.

Listing 5 shows how a DBA can create a new table that uses round-robin-based fragmentation to split the table among three dbspaces.

Listing 5. Round-robin-based fragmentation of new table
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	FRAGMENT BY ROUND ROBIN IN dbs1, dbs2, dbs3;

Listing 6 shows how a DBA can create a new table that uses expression-based fragmentation. The expression is evaluated every time the table is queried to determine which fragments need to be accessed.

Listing 6. Expression-based fragmentation
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	FRAGMENT BY EXPRESSION
	(Age>50) IN dbs1
	(Age<51) IN dbs2;

In the expression in Listing 6, the value of the Age column determines which fragment the data resides in. If Age = 28, then the fragment in dbs2 is used to store the row, because 28 < 51. Because each dbspace can only have one fragment of the table, the CREATE TABLE SQL statement uses the dbspace name to identify the table fragment.

The examples in Listings 5 and 6 are just the beginning. A round-robin-based fragmented table can have as many fragments as the number of dbspaces defined in the instance.

An expression-based fragmented table can also have that many fragments, and can use as complicated an expression as needed. An example of a more complicated expression would be:

Listing 7. More complicated expression-based fragmentation
FRAGMENT BY EXPRESSION 
	(Age < 50 AND 
	(Birthday BETWEEN '01/01/1998' AND '04/15/2008') 
		OR LName IN ("Smith","Zimmermann") ) IN dbs1,
	(Age > 51 OR Age = 17 
		AND LName NOT LIKE '%John%') IN dbs2;

Although the expressions can get complicated (and even more complicated than the one in Listing 7), the general rule of thumb is to keep them as simple as possible, while still maintaining good distribution of your data across the fragments.

The right fragmentation scheme can improve performance dramatically. To understand how, read more in the "Fragmentation Guidelines" section of the Performance Guide under the Administering topic in information center.

Partitioned tables are almost identical to fragmented tables, with the exception of two things:

  • They can have more than one partition per dbspace
  • Their CREATE TABLE syntax

The convenience here is that a table with 10 partitions can be spread across 10 dbspaces or fewer, whereas a table with 10 fragments requires 10 dbspaces.

Since partitioned tables allow more than one partition per dbspace, the syntax of the SQL statement has to change because the dbspace name can no longer be used by default to reference the partitions inside it.

Listing 8. Syntax for partitioned tables
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	PARTITION BY EXPRESSION
	PARTITION part1 (Age>50) IN dbs1
	PARTITION part2 (Age<51) IN dbs1;

The example in Listing 8 shows how both partitions are created in the same dbspace (dbs1) and are both given names (part1 and part2).

The name of a fragment or partition is important because it allows the DBA to manipulate that fragment or partition at a later date. A quick example of this, without going into much detail:

Listing 9. Modifying fragment
ALTER FRAGMENT ON TABLE customer 
	MODIFY dbs1 TO Age < 55 IN dbs1;

The example in Listing 9 modifies the expression that was created for that fragment from Age > 50 to Age < 55. Notice how the MODIFY clause specifies exactly which fragment's expression is being modified by naming the dbspace, dbs1, after the MODIFY keyword.

To change this example and instead use the partition example from above, it would look like:

Listing 10. Modifying partition
ALTER FRAGMENT ON TABLE customer 
	MODIFY PARTITION part1 TO Age < 55 IN dbs1;

Notice how the example in Listing 10 uses the partition name, part1, to specify which partition expression is being modified.

Temporary tables

Temporary tables act just like real tables with a few minor exceptions:

  1. Temporary tables are automatically dropped when the user that created them disconnects from the database
  2. Temporary tables can be unlogged in all types of databases
  3. Temporary tables are only visible to the user who created them

A temporary table can be created in one of two ways:

  • Using the SELECT … INTO TEMP… SQL statement
  • Using the CREATE TEMP TABLE… SQL statement

After a temporary table has been created, it can be used just like any other visible table in the database.

Listing 11. Creating a temporary table
CREATE TEMP TABLE temp_cust (Name char(50), Old int, Bday date);
	
SELECT * FROM customer, temp_cust
	WHERE customer.Age = temp_cust.Old;
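The other creation method, SELECT … INTO TEMP, builds and populates the temporary table from a query result in one statement. A sketch, assuming the customer table from the earlier listings (the temporary table name is illustrative):

```sql
-- The column names and data types of the temporary table are taken
-- from the SELECT list; WITH NO LOG could be appended here as well
SELECT SSN, Age FROM customer
	WHERE Age > 21
	INTO TEMP cust_over_21;
```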

Temporary tables don't support all the same options as regular tables, but they do support fragmentation, storage clauses, indexes, and some constraint options.

The WITH NO LOG clause is used to turn logging off on a temporary table:

Listing 12. Turning logging off on a temporary table
CREATE TEMP TABLE temp_cust 
	(Name char(50), Old int, Bday date) WITH NO LOG;

Constraints

It's not possible to talk about table objects without also talking about constraints. In fact, it's one type of constraint that led to the name RDBMS (Relational Database Management System). Although the constraint type is called referential and not relational, referential constraints are built on the idea of relationships (hence relational).

Many times when talking about constraints, we talk about integrity (for example, referential constraints are commonly referred to as referential integrity). In fact, it is the constraints that help us keep the integrity of the data. According to the Merriam-Webster Thesaurus, the words "integrity" and "corrupt" are "near antonyms," or words of opposite meanings. This makes sense to a database because the last thing you want is corrupted data (as in data that doesn't make sense or is not valid).

You therefore use different types of constraints to help keep the integrity of the data intact. A prime example of this would be a unique constraint on a Social Security Number (SSN) column in your customer table. Since every US citizen is supposed to have a different SSN, you would put a constraint on that column to ensure that every row in the customer table has a different, or unique, SSN value. Without the constraint in the database, either the application would have to maintain the uniqueness, or you could end up with multiple customers with the same SSN. If those multiple customers happened to be the same person, then your data wouldn't be corrupted, but nothing would stop someone from entering a different person with an SSN that is already in use by somebody else, leading to corrupted, or incorrect, data.

Note: Constraints do not have to exist in the database; they can also be controlled programmatically in the application. A discussion of which approach is better is too lengthy for this tutorial. Just make sure that you use one of them and don't ignore constraints altogether.

The example in Listing 13 creates a unique constraint on SSN and names it ssn_unq:

Listing 13. Creating a unique constraint
CREATE TABLE customer (
	SSN  char(11) UNIQUE CONSTRAINT ssn_unq,
	LName  char(50),
	Age  integer,
	Birthday  date
	);

Note: IDS also allows for the DISTINCT keyword to be used in place of UNIQUE. IDS considers these two words to be synonymous.

Note: Constraints do not have to be named by the DBA. An internal name is automatically generated for a constraint if the DBA does not specify one. Naming the constraint in the SQL syntax makes it easier to identify a constraint violation when that constraint name comes up in an error condition.

Table 2 describes the different types of constraints available:

Table 2. Constraints and their descriptions
Constraint          Description
UNIQUE or DISTINCT  Constrains every value in a column to be different
Primary key         Uses the unique constraint and allows for referential integrity on this column set
Foreign key         Column(s) used in referential integrity to point back to a primary key column
Check               An expression on a column that is used to verify the data is valid for the column
Not null            Constrains that every row has to have a value for this column
Default             Specifies a default value to be used if a value is not given for this column when inserting a new row

Referential constraint

Referential constraints actually rely on other constraints existing first. Referential constraints exist between two sets of data. In relational databases, those sets of data are the columns of a table. So, referential constraints exist between two or more columns in one or more tables. These constraints are often called primary/foreign key relationships, because the foreign key column references back to the primary key column. Although this is a very common practice, it is not a rule: referential constraints can exist between two columns whether those columns are key columns (primary and foreign) or not. Certain rules do have to apply for a referential constraint to be created.

Some of the rules include (not an exhaustive list):

  • The column being referenced must be unique, either through a primary key constraint or a unique/distinct constraint definition.
  • The columns must be of the same data type or compatible data types.
  • The columns must be in the same database.

The example in Listing 14 expands the example from Listing 13 by adding a second table that has a referential constraint definition back to a column in the first table:

Listing 14. Referential constraint on second table
CREATE TABLE customer (
	SSN  char(11) UNIQUE CONSTRAINT ssn_unq,
	LName  char(50),
	Age  integer,
	Birthday  date
	);
			  
CREATE TABLE orders (
	OrderNumber integer,
	OrderingCustomer char(11) REFERENCES customer(SSN),
	OrderTotal money
	);

Listing 15 provides an example of accomplishing the same thing with a primary key:

Listing 15. Referential constraint on second table using a primary key
CREATE TABLE customer (
	SSN  char(11) PRIMARY KEY CONSTRAINT ssn_unq,
	LName  char(50),
	Age  integer,
	Birthday  date
	);
			  
CREATE TABLE orders (
	OrderNumber integer,
	OrderingCustomer char(11) REFERENCES customer(SSN),
	OrderTotal money
	);

It is also possible to have a table that requires multiple columns for uniqueness. If this is the case, then a different syntax is used to define the unique or primary key constraint. A different syntax is also required for the referencing columns.

Note: If multiple columns are used for the unique or primary key, then the same number of columns must be used on the referencing side of the constraint.

Listing 16. Using multiple columns
CREATE TABLE stock (
	StockNumber integer,
	ManufacturerCode char(3),
	Description  char(50),
	UNIQUE (StockNumber,ManufacturerCode)  CONSTRAINT uq_stock
	);

CREATE TABLE catalog (
	CatalogNumber integer,
	StockNumber integer,
	ManufacturerCode char(3),
	FOREIGN KEY (StockNumber, ManufacturerCode) 
	REFERENCES stock (StockNumber, ManufacturerCode)
	CONSTRAINT fk_catalog
	);

The foreign key syntax in Listing 16 would also be used to set up a referential constraint even if you used the following syntax to denote a multi-column primary key:

Listing 17. Multi-column primary key
CREATE TABLE stock (
	StockNumber integer,
	ManufacturerCode char(3),
	Description  char(50),
	PRIMARY KEY (StockNumber, ManufacturerCode) CONSTRAINT pk_stock
	);

Check constraint

A check constraint is an expression put on a column that specifies the valid data domain for that column. The expression is evaluated every time a new value is inserted into the column or an existing value in the column is updated. If the new value does not meet the conditions of the expression, then an error is returned back to the SQL statement. However, if the evaluation of the check constraint returns a NULL value, then the value is accepted and no error is returned.

Listing 18 provides an example of a check constraint where the data domain for an Age of a person has to be between 1 and 124:

Listing 18. Check constraint
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date,
	CHECK ( 0 < Age AND  Age < 125 )
	);

The following syntax could also have been used when defining the same check constraint:

	CHECK ( Age BETWEEN 1 AND 124)

In this case the values have to be 1 and 124, because BETWEEN is inclusive of its endpoints; if you used 0, then an Age of 0 would be a valid value. In my example, I chose not to allow an age of 0 as a valid value.

NOT NULL constraint

A NOT NULL constraint prevents the column from including NULL values. A NULL value is an unknown or undefined value. A NULL value is not equal to 0, it is not equal to a blank (or space), and it is not equal to a series of blanks (or spaces). In fact, a NULL value is not even equal to another NULL value.

Therefore, the expression WHERE ColumnName = NULL is not valid. Instead, use WHERE ColumnName IS NULL or WHERE ColumnName IS NOT NULL. Since a NOT NULL column does not allow NULL values, sometimes people refer to this as a required field, since it requires an actual value.
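For example, to find customers whose birthday is unknown in the customer table used throughout this tutorial:

```sql
-- NULL values must be tested with IS NULL / IS NOT NULL;
-- WHERE Birthday = NULL is not a valid test
SELECT * FROM customer WHERE Birthday IS NULL;

-- The counterpart: rows that do have a birthday recorded
SELECT * FROM customer WHERE Birthday IS NOT NULL;
```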

Listing 19. NOT NULL constraint
CREATE TABLE customer (
	SSN  char(11) NOT NULL CONSTRAINT nn_customer,
	LName  char(50),
	Age  integer,
	Birthday  date
	);

DEFAULT constraint

A DEFAULT constraint is used to supply a default value for a column when no value is given in an INSERT or UPDATE statement. Customers often use the NOT NULL constraint and the DEFAULT constraint together to stop errors from occurring: if an INSERT statement doesn't supply a value for a NOT NULL column, the DEFAULT value is used instead.

Listing 20 provides an example of putting a DEFAULT constraint on one column and both a DEFAULT and NOT NULL constraint on another column:

Listing 20. DEFAULT constraint
CREATE TABLE customer (
	SSN  char(11) DEFAULT '000-00-0000' NOT NULL CONSTRAINT nn_customer,
	LName  char(50),
	Age  integer,
	Birthday  date DEFAULT TODAY
	);

Views

A view is a virtual table.

A view takes on the appearance of a table and is used just like a table, except that a view has no rows of its own. A view is dynamically populated by the SELECT statement that defines it when the view is accessed.

Example SQL statement to create a view:

CREATE VIEW cust_view_21 AS 
	SELECT * FROM customer WHERE age = 21;

Now this view can be used just like a regular table:

SELECT count(*) FROM cust_view_21 WHERE Birthday = '02/25/1990';

The count that comes back from this SELECT statement is the number of rows in the underlying "real" table (in this case, the customer table) that match both the view's WHERE clause (age = 21) and the WHERE clause of the query against the view.

Notice that this example uses another column, Birthday, from the customer table in the SELECT statement against cust_view_21. This is possible because the SELECT statement that defines the view is using a SELECT * (all columns) from the customer table. If you defined the view as follows:

CREATE VIEW cust_view_21 AS
	SELECT SSN, Age FROM customer WHERE age = 21;

then the above SELECT statement would be invalid because Birthday was not included in the SELECT statement that comprises the view.

It is possible to use different column names in the view than the column names that are used in the underlying tables of the view.

Listing 21. Example with column names
CREATE VIEW cust_view_21(SocialSecurityNumber, HowOld) AS
	SELECT SSN, Age FROM customer;

If you define column names for the view, then you must use those column names when accessing the view:

SELECT HowOld FROM cust_view_21;

A view isn't limited to data or columns from only a single table.

Listing 22. Example of a multi-table view
CREATE VIEW join_view AS
	SELECT SSN, LName, OrderTotal 
	FROM customer, orders
	WHERE SSN = OrderingCustomer;

The SELECT statement is using the same tables defined earlier in the "Constraints: Referential constraints" section. This view has two columns from the customer table, and one column from the orders table.

Now you can select from this view like you would any normal table:

Listing 23. Example SELECT statement against multi-table view
SELECT LName, OrderTotal from join_view WHERE OrderTotal > 50;

Note: The actual join between the customer and orders table will take place when an SQL statement accesses the view. Depending on the complexity of the SELECT statement that comprises the view, this can make what looks like a simple SELECT statement accessing a view take a long time to run since it is actually running a more complex SELECT statement behind the scenes.

Since a view is just a special kind of table, it can both be read from (SELECT) and modified (UPDATE, INSERT, DELETE). Each has its own restrictions.

The SELECT statement that comprises a view cannot use the INTO TEMP clause or the ORDER BY clause. However, the SQL statement that accesses the view can.

SELECT * from join_view ORDER BY OrderTotal DESC;

You can modify a view as long as the underlying SELECT statement that comprises the view does not contain any of the following:

  • A join of two or more tables
  • An aggregate function or the GROUP BY clause
  • The UNIQUE or DISTINCT keyword
  • The UNION keyword
  • Calculated or literal values

Other restrictions apply to each individual statement type as well. An UPDATE statement cannot update a derived column in the view, and an INSERT statement cannot insert into a view with a derived column. In an INSERT statement, if the view does not contain all of the underlying columns of the table, then a NULL value is used for the missing columns.
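As a sketch of that last rule, consider a view that exposes only two of the customer table's four columns (the view name is illustrative):

```sql
CREATE VIEW cust_ssn_age AS
	SELECT SSN, Age FROM customer;

-- The INSERT supplies values for SSN and Age only; LName and Birthday
-- are set to NULL in the underlying customer row
INSERT INTO cust_ssn_age VALUES ('111-22-3333', 21);
```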

The INSTEAD OF trigger is used to overcome some of the above restrictions. (Let's save that discussion until the "Triggers" section.)

The CREATE VIEW SQL statement has a WITH CHECK OPTION clause. When a view uses the WITH CHECK OPTION clause, you cannot modify any rows through the view that do not meet the underlying SELECT statement of the view.

An example to explain:

CREATE VIEW cust_view_21 AS 
	SELECT * FROM customer WHERE age = 21;

The view above is only going to contain rows where the Age column is equal to 21 in the customer table. Since the WITH CHECK OPTION clause was not specified in the creation of this view, and this is a modifiable view, you can run the following statement to insert a new row without errors:

INSERT INTO cust_view_21 
	VALUES ("123-45-6789","Flintstone",29,"01/02/1980");

Notice that the age being inserted, 29, is not equal to 21. So if you then turned around and ran the following:

SELECT * from cust_view_21 WHERE SSN="123-45-6789";

You would get "No rows found." Why? Because Mr. Flintstone's age is not equal to 21, so his row is not visible through the view.

If you changed the view to include the WITH CHECK OPTION:

CREATE VIEW cust_view_21 AS 
	SELECT * FROM customer WHERE age = 21
	WITH CHECK OPTION;

The above insert statement will no longer run successfully. It will fail with a "Data value out of range" error.


Indexes

Regular indexes

Indexes are an integral part of getting optimal performance from your database. An index is an access method for a table. When an SQL statement is run against a table, the database server must access the table to find the rows that match the criteria of the SQL statement. In general, the database server has two ways to access the table: a sequential scan or an index scan. The choice of access method depends on many factors, including the selectivity of the criteria in the SQL statement (the filters in the WHERE clause). The selectivity tells the database server whether the SQL statement wants 0.1% of the table returned or 100% of it.

A sequential scan of a table is just like reading a book. If you want to know the whole story, you have to start on page 1 and read until you get to the end; if you start anywhere in the middle, or stop before reaching the end, you could miss something along the way. So a sequential scan starts on the first page of the table and reads every page until it hits the end of the table. Because of this cost, sequential scans are avoided unless the selectivity of the SQL statement calls for one. This is especially true in OLTP applications, which usually retrieve small subsets of data from larger tables.

Since OLTP environments avoid sequential scans, a faster access method is needed instead. That access method is known as an index. Using the book analogy again, books have indexes in the back of them. Why? For faster lookup. If you were writing an SQL statement and couldn't remember the syntax, you could pull out the Guide to SQL: Syntax and start looking through it for the particular SQL statement you were writing. If you wanted to make sure you didn't miss it, you would start on page 1 and look at every page until you found what you were looking for. Or you could go to the index of the book, look up your topic, and the index would point you directly to the page where the topic is discussed. Which would be faster? If the topic just so happened to be on page 3, then a sequential scan would be faster; but if the topic was on page 695, then the index lookup would be faster. Say that the index in the back of the book was three pages long. During an index lookup, you would have to look at most at four pages: three index pages and the actual page where the topic is discussed. So as long as the topic isn't discussed within the first four pages of the book, the index lookup is faster than the sequential scan. However, remember that a sequential scan cannot stop when it first finds the topic, because that same topic might be discussed on another page as well; if the sequential scan stopped at the first occurrence, it would miss the rest. An index, by contrast, can reference all the pages where the topic appears in a single index entry.
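In IDS, an index is created with the CREATE INDEX SQL statement. A minimal sketch against the customer table used throughout this tutorial (the index names are illustrative):

```sql
-- A simple index on a single column
CREATE INDEX ix_cust_lname ON customer (LName);

-- A unique index: like a unique constraint, it rejects duplicate values
CREATE UNIQUE INDEX ix_cust_ssn ON customer (SSN);

-- A composite index covering more than one column
CREATE INDEX ix_cust_name_age ON customer (LName, Age);
```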

Many people have asked me, "If indexes are so great, why don't we index everything?"

The answer is: every advantage has a disadvantage. We've talked about the advantage of indexes. Let's take a look at the disadvantages of indexes:

  • Disk space – just like the index in a book takes up pages of the book, the index of a table takes up disk space.
  • Performance – an index slows down modifications to the table (INSERTs, UPDATEs, and DELETEs).

Because of the price of disk space these days, the first disadvantage is almost a moot point. But anything that slows down performance, like the second disadvantage, is something that should be looked into further.

The reason modifications to the table are slowed down is the extra work that has to be done. Back to the book analogy: if you add another topic to the book, you had better update the index with information from that topic. If you don't update the index, then anybody who is interested in learning more about that new topic will either have to do a sequential scan of the book to find it, or just think that the book doesn't cover it and go buy somebody else's book.

So when an SQL statement makes a change to the table (a new row inserted, an existing row updated, or a row deleted), the index must be updated immediately so that if another user tries to use the index to find the rows they are looking for, their result set will be accurate based upon the rows in the table. The index must completely and accurately represent all the rows in the table at all times. If it gets out of sync with the table, it becomes a liability and can cause integrity problems for the users.

So when a new row is inserted into the table, if that table has one index on it, you have effectively doubled the amount of work that needs to be done before the insert can complete. You have to insert the row into its data page, and you have to update the appropriate index page with that row's information. Add another index, and you have tripled the work (one data page insert, one index page insert for index #1, and one index page insert for index #2). Add a third index on the table, and you have four times the amount of work to complete, as opposed to inserting a row into a table that has no indexes. The same thing happens for both updates and deletes.

So the more indexes a table has, the more work that has to be done while modifying the data, slowing down performance of those operations.

Now that you have seen what indexes are used for and their advantages and disadvantages, let's look at how to create indexes on your tables.

The CREATE INDEX SQL statement is used to create a new index:

CREATE INDEX SSN_idx ON customer(SSN);

By default, an index allows duplicate values. Just like a topic in a book could be discussed on more than one page, for which all page numbers are listed in the index, an indexed value in a table could exist in more than one row of the table, so the index has to point to all the rows where that value exists.

You can require that each indexed value be unique (can only exist once in the table) by creating a unique index. This is effectively the same as creating a UNIQUE constraint on the table, as described in the "Constraints" section.

CREATE UNIQUE INDEX SSN_idx ON customer(SSN);

An index can exist on more than one column.

CREATE INDEX cust_idx ON customer (LName, Birthday);

When an index is on more than one column, the first column listed in the index is the main sort order of the index, followed by the rest of the columns.

Let's use the phone book to explain this idea. The phone book is ordered by last name (main sort order), but there are multiple listings for Jones in the phone book. So when you look at the pages of Joneses, you see that the phone book goes to a secondary sort order with the Jones values, and that sort order is the first name. Well, there are many John Joneses, so then the phone book goes to another sort level and orders by middle initial. If there are duplicate John J Joneses, then it goes to another sort level, and so on.

This is the same way that multi-column indexes work in IDS. If there are duplicates in the main (first column) sort level, then those duplicates are ordered by the second column listed, and so on. The maximum key size of the index (the sum of all indexed column data type sizes) is 390 bytes.

Just like the CREATE TABLE SQL statement, the CREATE INDEX SQL statement has a storage clause.

CREATE INDEX cust_idx ON customer (SSN) IN dbspace1;

An index can be created in ascending or descending order, with ascending order being the default. Since IDS can traverse the index in a bidirectional fashion, specifying the order of the index is not that important.

Listing 24. Example with a single column index
CREATE INDEX cust_idx ON customer (SSN DESC);
Listing 25. Example with a multi-column index
CREATE INDEX cust_idx ON customer (birthday DESC, age);

The example in Listing 25 creates the index with birthday in descending order. When duplicate birthdays are found, they are ordered by the age value in ascending order, since ascending is the default and no order was specified for age.

Clustered indexes

Clustered indexes are a way of ordering the table the same as the index. An index always has an order to it. The order of the index is decided by the DESC and ASC keywords in the CREATE INDEX statement. However, by default, a table has no order, and the rows get stored into the pages of the table in whatever order they are inserted. As rows are deleted from the table, new rows being inserted fill in the gaps left by the deletions. This can lead to a very unorganized (unordered) table. The disadvantage of an unorganized table comes with duplicate or range-type searches.

For example, say that you want to get all the customer information out of your customer table where the last name of the customer is equal to 'Jones'. If you have more than one customer with the last name of 'Jones', you have to find all the rows that match the last name of 'Jones' no matter what page they exist on. If you have 10 rows with last name of Jones, worst case scenario, you could actually end up reading 10 different pages to get the data (one row per page). What would happen if you "clustered" all the like values together? If you could fit 10 rows per page, you could end up reading only one page to get all 10 rows with the value of Jones. Think of a page as an I/O. Which is better for performance: 10 I/Os or 1 I/O? Since I/O is one of the slowest operations and takes valuable time, the less I/O, the better. So 1 I/O is much better for performance than 10 I/Os.

The same thing can go for range searches.

For example, say that you want to get all the customers that have ordered from you within a certain time frame. So you want to return all the rows where the order date is between two dates. If the table is not ordered by date, then you could end up searching all over the table for the rows that fit between those two dates. But if you order the table by date, then all of the rows with the same date will be "clustered" together in a set of pages. This allows for better performance because only a set of pages have to be read, rather than what could amount to the whole table.

The table is clustered (ordered) based upon an index. To change the order of the table you have to cluster an index. The clustering of an index is done through the CLUSTER keyword of the CREATE INDEX statement:

CREATE CLUSTER INDEX cust_idx ON customer(SSN);

Note: A table can only be clustered (ordered) in one way, based upon one index. So each table can only have one clustered index.

Fragmentation and indexes

Indexes are also allowed to be fragmented. When creating an index on a fragmented table, by default, the index fragmentation will be the same as the table fragmentation. If the table has three fragments, then the index will have three fragments. Each index fragment will only index data in its associated data fragment.

Indexes can also be given an explicit fragmentation scheme that doesn't have to match the table's fragmentation scheme. This can only be done with the FRAGMENT BY EXPRESSION clause. The FRAGMENT BY ROUND ROBIN clause is not valid in the CREATE INDEX SQL statement.

Listing 26. FRAGMENT BY EXPRESSION clause
CREATE TABLE customer (
	SSN  char(11),
	LName  char(50),
	Age  integer,
	Birthday  date
	)
	FRAGMENT BY EXPRESSION 
	(Age < 50) IN dbs1,
	(Age >= 50) IN dbs2;

CREATE INDEX cust_idx1 ON customer(Age);

CREATE INDEX cust_idx2 ON customer (SSN) IN dbspace5;

CREATE INDEX cust_idx3 ON customer(Birthday) 
	FRAGMENT BY EXPRESSION
	(Birthday < '01/01/1980') IN dbspace1,
	(Birthday BETWEEN '01/01/1980' AND '01/01/2000') IN dbspace2,
	(Birthday > '01/01/2000') IN dbspace3;

In Listing 26, the indexes will differ as such:

  • The cust_idx1 index will be fragmented in the same way as the table. Since the table has two fragments, the index will have two fragments, and they will be stored in the same dbspaces as the data.
  • The cust_idx2 index will not be fragmented, and the whole index will be stored in dbspace5.
  • The cust_idx3 index will have three fragments based upon the expression putting each index fragment into its appropriate dbspace.

Just like on tables, the expression used for fragmentation of indexes can be very complex. However, the general rule of thumb is to keep it as simple as possible while still meeting performance requirements. Indexes can also use the partitioned syntax to allow for multiple partitions in a single dbspace.
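As a sketch of the partitioned syntax (the index name, column, and partition expressions here are illustrative, not from the original examples), two index partitions can be placed in the same dbspace:

```sql
-- Illustrative sketch: the PARTITION keyword lets multiple
-- index partitions share a single dbspace (dbs1 here).
CREATE INDEX cust_age_idx ON customer(Age)
    FRAGMENT BY EXPRESSION
    PARTITION part1 (Age < 50) IN dbs1,
    PARTITION part2 (Age >= 50) IN dbs1;
```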

Putting a clustered index on a fragmented table will only cause each fragment of the table to become clustered, not the whole table to become clustered.


Triggers

Triggers are defined objects that produce an automatic response to an event that occurs in the database.

A good example of what triggers are sometimes used for is keeping two tables in sync (replication). Since UPDATE is a single-table statement, you can't update two different tables with one SQL statement. So a trigger would be placed on one table, and when a value changed in that table (the event), the database would automatically update the other table with the appropriate information (the response).
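A minimal sketch of that sync idea might look like the following. Note that the customer_copy table and the column choices are hypothetical, introduced only for illustration:

```sql
-- Hypothetical sketch: keep customer_copy in sync when customer.LName changes.
-- customer_copy is assumed to have matching SSN and LName columns.
CREATE TRIGGER cust_sync_trig
    UPDATE OF LName ON customer
    REFERENCING OLD AS pre NEW AS post
    FOR EACH ROW
    ( UPDATE customer_copy
      SET LName = post.LName
      WHERE customer_copy.SSN = pre.SSN );
```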

Since triggers are an automatic response to an event occurring, there are a couple of basic things that you need to know when defining a trigger:

  • What the event is you are waiting for
  • What your response is going to be
  • When you are going to process the response based upon the event occurring
  • What you are going to name your trigger
  • If you want to use the values of the event in your response
Listing 27. Example trigger
CREATE TRIGGER cust_ins_trig 
	INSERT ON customer
	REFERENCING NEW AS new 
	FOR EACH ROW 
	WHEN (new.Birthday < '01/01/2000')
	  (EXECUTE PROCEDURE ins_mailing_list(new.LName, new.Birthday)
	);

The example in Listing 27 would take the customer information that was being inserted into the customer table and pass two columns of it—the LName and Birthday columns—to a stored procedure that would use that information to do something, like insert them into a mailing list table. But the stored procedure would only be called if the value of Birthday was before 01/01/2000.

The answers to your five-point checklist above would be:

  1. You are waiting for an INSERT event on the customer table.
  2. Your response is to run a stored procedure.
  3. You are going to process this for each row that is inserted.
  4. You are naming your trigger cust_ins_trig.
  5. You are referencing the event's values through the REFERENCING NEW AS new clause.

The example in Listing 27 uses two optional clauses, which are the REFERENCING clause and the WHEN clause.

The triggering event can be any one of the following:

  • Insert of a row in a table
  • Delete of a row in a table
  • Update of a row in a table
  • Update of a column in a table
  • Select of a row in a table
  • Select of a column in a table

The trigger response can be any one of the following:

  • Insert into a table
  • Delete from a table
  • Update of a table or column
  • Execution of a Stored Procedure or Function

The response can be run at three different times:

  • For each row, as shown in the example in Listing 27
  • Before the triggering event processes the SQL statement
  • After the triggering event finishes processing the SQL statement
Listing 28. Another trigger example
CREATE TRIGGER cust_del_orders DELETE on customer
	REFERENCING OLD AS abc
	FOR EACH ROW
	( DELETE FROM orders WHERE SSN = abc.SSN);

The example in Listing 28 deletes any orders from the orders table when the associated customer, matched by SSN, is deleted from the customer table. Here, the REFERENCING clause uses the OLD AS syntax instead, because this is data that is being removed. The correlation name that you give is just a variable and can be anything; this example uses 'abc'.

Note: When a triggering event happens because of an SQL statement that is run, the trigger response is considered part of that SQL statement. This means that if the trigger response fails, both the response and the initial triggering statement will roll back.

INSTEAD OF triggers

The "Views" section talked about the restrictions placed on modifying data through a view. For example, you cannot insert a new row into a multi-table view. This is where INSTEAD OF triggers come in.

INSTEAD OF triggers are triggers that are defined on a view. Remember that two of the most important pieces of a trigger are the triggering event and the trigger response: the triggering event must occur before the trigger response is invoked.

If you create a multi-table view and then try to insert into that view, you would immediately get back an error, unless you create an INSTEAD OF trigger on the view that says: "instead of" performing the event (the insert into the view), just perform the response of the trigger.

Listing 29. Example INSTEAD OF trigger
CREATE TABLE customer ( cust_num int, Name char(50), phone char(12));
CREATE TABLE orders (order_num serial, cust_num int, order_total money); 
CREATE VIEW join_view AS
	SELECT Name, order_total FROM customer, orders
	WHERE customer.cust_num = orders.cust_num;
CREATE TRIGGER instdof_trig 
	INSTEAD OF INSERT ON join_view
	REFERENCING NEW AS new
	FOR EACH ROW
	( INSERT INTO customer VALUES (25, new.Name, "555-1212"), 
	  INSERT INTO orders VALUES (0, 25, new.order_total));

INSERT INTO join_view VALUES ("Fred", 25.68);

The INSERT statement in Listing 29 will not get back an error, because it is not actually inserting a row into the view (which is a no-no on a multi-table view). Instead, it issues two single-table INSERT statements. The example needs a little work, as it hard-codes values for some columns that you wouldn't want hard-coded, but it gives you the idea of what INSTEAD OF triggers are trying to accomplish.


Sequences

A sequence is a database object that users can use to generate unique integers. Unlike a serial data type, which exists within a single table and is part of a transaction on that table, a sequence is a separate object that can be used by multiple tables and is independent of the transactions using it.

A sequence uses the INT8 range from -(2^63 - 1) to 2^63 - 1.

The CREATE SEQUENCE SQL statement is used to create a sequence:

Listing 30. CREATE SEQUENCE statement
CREATE SEQUENCE sequence_1 
	INCREMENT BY 1
	START WITH 0;

The CREATE SEQUENCE statement has several different clauses, only two of which are shown in the example in Listing 30. The two shown allow for the capability to increment by different values, as well as the number with which to start the sequence.

Some other clauses include capabilities like maximum value, minimum value, and cycling around to the minimum value after hitting the maximum value.
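A sketch of those clauses might look like the following (the sequence name and bound values are illustrative, not from the original examples):

```sql
-- Illustrative sketch: a sequence with explicit bounds that cycles
-- back to its minimum value after reaching the maximum.
CREATE SEQUENCE wrap_seq
    INCREMENT BY 1
    START WITH 1
    MINVALUE 1
    MAXVALUE 9999
    CYCLE;
```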

To get the current value of the sequence, use the sequence.CURRVAL expression. To get the next value in the sequence, use the sequence.NEXTVAL expression. These expressions can be used in SELECT, DELETE, INSERT, and UPDATE SQL statements.

Listing 31. Example CREATE SEQUENCE clauses
CREATE SEQUENCE order_num_seq
	INCREMENT BY 1
	START WITH 0;

INSERT INTO orders ( order_num, SSN) 
	VALUES (order_num_seq.NEXTVAL, "111-11-1111");

SELECT order_num_seq.CURRVAL, SSN FROM orders;

Synonyms

A synonym is an alternative name for an existing table, view, or sequence object. Synonyms can be created to reference tables or views in other databases, either on the same database server or on another database server.

This allows the user to write SQL that references a remote table without having to know the full remote-table syntax.

Listing 32. Example synonym
CREATE SYNONYM orders_syns FOR southsales@dallas:informix.orders;
CREATE SYNONYM orders_synw FOR westsales@sanfran:informix.orders;

The first synonym in Listing 32 points to the orders table in the southsales database on the dallas database server. The second synonym in Listing 32 points to the orders table in the westsales database on the sanfran database server.

As a user, you can join these two tables as if they were local tables in your database:

SELECT * from orders_syns, orders_synw WHERE …

Note: Although synonyms allow ease of writing SQL code for developers, they also hide the fact that the tables being used are remote and might need more time to access, causing a slowdown in performance of the SQL statement.


Object modes

Indexes, triggers, and constraints allow for different "modes" of operation. The three basic modes of operation include:

  • Enabled
  • Disabled
  • Filtering (only available on constraints and unique indexes)

If an object is set to "disabled" mode, the database server stops using the object but does not remove the object from the database.

The SET SQL statement is used to change the mode of an object.

Listing 33. Example SET SQL statement for disable/enable
SET INDEXES cust_idx DISABLED;
SET INDEXES cust_idx ENABLED;
SET CONSTRAINTS uniq_lname DISABLED;
SET CONSTRAINTS uniq_lname ENABLED;

More than one object of the same type can be specified in the same SQL statement:

SET INDEXES cust_idx, phone_idx DISABLED;

When an index on a table is disabled, that index is no longer available for use as an access method for that table, and modifications to the table are not reflected in the index. However, the index is still defined as an object in the database. To start using the index again, it needs to be enabled. When an index is enabled, it is completely rebuilt, since changes made to the table while it was disabled are not reflected in the index.

When a constraint is set to disabled, that constraint is no longer used to verify the integrity of the data for the table on which it is defined. When a constraint is re-enabled, all data in the table must be verified against the constraint since some data may have been modified while it was disabled and might not meet the constraint. When re-enabling a constraint, if a row is found to violate the constraint, an error will be returned, and the constraint will not be re-enabled.

When a trigger is set to disabled, that trigger is not fired for any matching event. Once re-enabled, the trigger would then start firing for any future matching event.

Both constraints and unique indexes can also be set to FILTERING mode. FILTERING mode allows the constraint or index to be used. But if a modification to the table would violate the constraint or index, the offending row is written to a violations table in the database, and the modification statement can keep running. In order for this to work successfully, a violations and diagnostics table must be started for the table whose constraint or index is set to filtering mode.

Listing 34. Example filtering mode
CREATE TABLE customer ( Name char(50), phone char(12), age int);

CREATE UNIQUE INDEX uniq_phone ON customer(phone);

SET INDEXES uniq_phone FILTERING WITHOUT ERROR;

START VIOLATIONS TABLE FOR customer;

INSERT INTO  customer VALUES ("Flintstone", "110-555-1212", 28);
INSERT INTO  customer VALUES ("Rubble","110-555-1212", 24);

The second insert in Listing 34 violates the unique constraint on phone by trying to insert a phone number that already exists in another row. The insert would not error out (FILTERING WITHOUT ERROR), but it also would not insert the row into the customer table. Instead, it would insert the row into the customer violations table (customer_vio), along with diagnostics information about the problem in the customer diagnostics table (customer_dia). To see any violation and diagnostic information, run regular SELECT statements against the violations and diagnostics tables.
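For example, assuming the default violations and diagnostics table names (customer_vio and customer_dia) from the example above, you could inspect the filtered rows like this:

```sql
-- Inspect the rows that were filtered out, and the diagnostic
-- records describing why each row violated the unique index.
SELECT * FROM customer_vio;
SELECT * FROM customer_dia;
```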

Once the filtered object is taken out of filtering mode and placed back in enabled mode, it is good practice to stop the violations table as well. The STOP VIOLATIONS TABLE SQL statement is used for that purpose.

Example continued from above:

Listing 35. Stop the violations table
SET INDEXES uniq_phone ENABLED;

STOP VIOLATIONS TABLE FOR customer;

Note: Stopping the violations table will not drop the violations and diagnostics tables. To start the violations tables again for a given table, you first need to drop the existing violation and diagnostic tables for that table.


dbschema command

The dbschema command not only allows a DBA to see what SQL statements were run to create objects in the database, but it also allows for an easy way to recreate objects in a database.

All objects discussed so far have a simple DROP statement to remove the object and all its references from the database.

Listing 36. Example DROP statements
DROP TABLE customer;
DROP INDEX cust_idx;
DROP VIEW cust_view;
DROP SEQUENCE  cust_seq;
DROP TRIGGER cust_ins_trigger;
DROP SYNONYM cust_syn;

When the DROP SQL statement is finished, the object no longer exists and leaves nothing behind to show that it had existed. In order to get the object back, the object must be created again with the CREATE SQL statement.

What happens if the person recreating the object isn't the one who originally created it, so they don't know the CREATE statement that was used? Or it was so long ago that they have forgotten what the CREATE statement looked like?

The answer is the dbschema command. The dbschema command allows a DBA to retrieve the SQL statements that were used to create any object in the database, up to and including the whole database and all the objects in it.

The dbschema command is a command line executable; it is not an SQL statement.

The following code listings (Listings 37 - 41) show different options available to the dbschema command. The four options shown are not the only options available, and they are not mutually exclusive, which means that you can place all of them in the same dbschema command if you want to. The only required option is the -d database-name option.

The example in Listing 37 specifies just the -d option with the database name of testdb. This example will give back ALL objects in the database, including tables, views, synonyms, constraints, permissions, triggers, and more.

Listing 37. -d option
dbschema –d testdb

The example in Listing 38 uses the optional -t tablename option along with the database name to print out only the information for that one table.

Listing 38. -t tablename option
dbschema –d testdb –t customer

Listing 39 shows how the output of this command would look:

Listing 39. Output for -t tablename option
DBSCHEMA Schema Utility       INFORMIX-SQL Version 11.50.FC4    

{ TABLE "informix".customer row size = 69 number of columns = 4 index size = xx }
create table "informix".customer 
  (
    ssn char(11),
    lname char(50),
    age integer,
    birthday date
  );

revoke all on "informix".customer from "public" as "informix";

The example in Listing 40 also takes a table name, but asks for the histograms that are generated for that table during an UPDATE STATISTICS run. These histograms show the distribution of the data in the table based upon the values present when UPDATE STATISTICS was last run against the table.

Listing 40. Request for histograms
dbschema –d testdb –hd customer

The example in Listing 41 uses the -f routine-name option to print out the ins_mailing_list stored procedure:

Listing 41. -f routine-name option
dbschema -d testdb -f ins_mailing_list

Note: The dbschema command must be run before dropping the object. Once the object has been dropped, nothing is left of it in the database, including the definition that dbschema needs to print it out.

Some objects cannot be printed out separately through dbschema. There are no options to print out just indexes, triggers, or constraints, as these all belong to tables. To print them out, the table must be printed out.

Listing 42. Example with more objects defined on customer table
dbschema –d testdb –t customer
Listing 43. Command return
DBSCHEMA Schema Utility       INFORMIX-SQL Version 11.50.FC4    

{ TABLE "informix".customer row size = 69 number of columns = 4 index size = xx }
create table "informix".customer 
  (
    ssn char(11),
    lname char(50),
    age integer,
    birthday date ,
    unique (lname)  constraint "informix".uniq_lname,
    check ((age > 0 ) AND (age < 125 ) )
  );

revoke all on "informix".customer from "public" as "informix";

create index "informix".cust_idx on "informix".customer (ssn) 
    using btree ;

create trigger "informix".cust_ins_trigger insert on "informix"
    .customer referencing new as new
    for each row
        when ((new.birthday < DATE ('01/01/2000' ) ) )
            (
            execute procedure "informix".ins_mailing_list(new.lname 
    ,new.birthday ));

The example in Listings 42 and 43 does not include any server-specific information about the objects that are being printed out. Server-specific information includes table lock mode, table extent sizing, table fragmentation, and more. In order to get the server-specific information, you must specify the –ss option on the command line.

Listing 44. –ss option
dbschema –d testdb –t customer –ss
Listing 45. Command return
DBSCHEMA Schema Utility       INFORMIX-SQL Version 11.50.FC4    

{ TABLE "informix".customer row size = 69 number of columns = 4 index size = xx }
create table "informix".customer 
  (
    ssn char(11),
    lname char(50),
    age integer,
    birthday date,
    unique (lname)  constraint "informix".uniq_lname,   
    check ((age > 0 ) AND (age < 125 ) )
  ) extent size 100 next size 50 lock mode row;
alter fragment on table "informix".customer init 
  fragment by round robin in dbs1 , dbs2 ;

revoke all on "informix".customer from "public" as "informix";
		
create index "informix".cust_idx on "informix".customer (ssn) 
    using btree ;

create trigger "informix".cust_ins_trigger insert on "informix"
    .customer referencing new as new
    for each row
        when ((new.birthday < DATE ('01/01/2000' ) ) )
            (
            execute procedure "informix".ins_mailing_list(new.lname 
    ,new.birthday ));

Notice how the extent sizing, lock mode, and fragmentation scheme were all kept in the statement. The purpose of the –ss option is to be able to recreate an exact duplicate of the database or table, not just a look-alike.

The dbschema command is a very handy command to understand and use. It is common to be at facilities where the dbschema command is used, along with regular backups, as part of a disaster recovery plan. A recent dbschema output can become very useful when somebody drops a table and doesn't remember if that table had any indexes, triggers, or constraints because they forgot to check before dropping it.

Make sure that you get familiar with the command and its various options.

Listing 46. Usage from the dbschema command
>dbschema --

USAGE:

    dbschema [-q] [-t tabname] [-s user] [-p user] [-r rolename] [-f procname]
             [-hd tabname] -d dbname [-w passwd] [-seq sequence] [-l [num]]
             [-u [ia] udtname [all]] [-it [Type]] [-ss [-si]] [filename]
             [-sl length]

    -q      Suppress the db version from header

    -t      table name or "all" for all tables

    -s      synonyms created by user name
            or "all" for all users

    -p      permissions granted to user name
            or "all" for all users

    -r      create and grant of the role
            or "all" for all roles :Not a valid option for SE

    -f      SPL routine name
            or "all" for all SPL routines


    -hd     Histograms of the distribution for columns of
            a specified table, a specific table column,
            or "all" for all tables.

    -d      database name

    -w      database password

    -seq    generate sequence specific syntax

    -u      Prints the definitions of user-defined data types

    -ui     Prints the definitions of user-defined data types,
            including type inheritance

    -ua     Prints the definitions of user-defined data types,
            including all functions and casts defined over a type

    -u all  Directs dbschema to include all the tables
            in the display of distributions

    -it     Type of isolation can be DR, CR, CS or RR

    -l      set lock mode to wait [number] optional

    -ss     generate server specific syntax

    -si     excludes the generation of index storage clauses for
            non-fragmented tables

    -sl     the maximum length (in bytes) of unformatted CREATE TABLE
            and ALTER FRAGMENT statements.

            filename is the name of the file that the SQL script goes in.

Note: dbschema only prints out the definition of the object and has nothing to do with the data being stored in the object. In order to duplicate the data in a table, another means must be used, like the UNLOAD and LOAD SQL statements.
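A minimal sketch of that approach, using the UNLOAD and LOAD statements available in dbaccess (the file name 'customer.unl' is made up for this example):

```sql
-- Illustrative sketch: copy a table's data out to a flat file,
-- then load it back into a (recreated) table of the same shape.
UNLOAD TO 'customer.unl' SELECT * FROM customer;
LOAD FROM 'customer.unl' INSERT INTO customer;
```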


Summary

After finishing this tutorial, you should have a good understanding of many of the objects that can be created and used inside of a database. This list includes tables, indexes, views, constraints, triggers, sequences, and synonyms.

This tutorial has discussed:

  • How tables are used to store data in both un-fragmented and fragmented ways
  • How indexes are used for performance during retrieval of data
  • How constraints help keep the integrity of the data
  • How triggers can be used to perform an automatic response to an event that occurs inside the database, including special triggers known as INSTEAD OF triggers
  • How views are used in place of tables to limit the amount of data a user can interact with
  • How sequences are used to generate values
  • How synonyms are used in place of remote table SQL syntax
  • How object modes can be used to help with error handling and violations
  • How dbschema can be used to help print out the SQL definitions

By now, you should feel comfortable defining and destroying these objects. All of the objects are defined with their own variation of the CREATE SQL statement and destroyed with their own variation of the DROP SQL statement. Remember that once an object is dropped, not only does it no longer exist in the database, but its definition and anything that was defined on it are gone as well. A prime example of this is a table: once a table is dropped, all data in the table, as well as any indexes, constraints, or triggers that belong to it, are also dropped.

The dbschema command is a great addition to any disaster recovery plan, as it can help make sure that you have an up-to-date definition picture of any object in the database, including the whole database.
