The Maximo schema changed significantly starting with version 6. The users are now contained within the schema, so only the schema needs to be exported and imported.
Resolving The Problem
There have been several major changes to the schema beginning with version 6 which affect the process for copying a schema with the Oracle export and import utilities:
- Maximo userids are no longer database userids by default, so you no longer need to create a full database export in order to copy the userids. There is an option for creating some Maximo userids as database userids for report development; if this has been done, those few ids can be recreated after the import.
- Maximo passwords are encrypted and stored in a Maximo table. You must remember at least the password of a Maximo administrator userid (e.g. MAXADMIN) in order to reset passwords on the copied schema.
- Maximo uses the Oracle CLOB datatype in several tables. Oracle import will only import a table containing CLOBs into a tablespace with the same name as the one it was exported from. For this reason you must generally create the same Maximo tablespace names on the target database as exist on the source database.
- Maximo uses Oracle Text indexes to speed up searching by description. These text indexes have lexer preferences that do not export with the Maximo schema, so they must be pre-created with a script prior to import. Oracle 10.2 makes a change to one of these preferences which can require that the indexes be imported separately.
Oracle Data Pump
If you are running on Oracle 10g or higher, then you should use the Oracle Data Pump export and import utilities (expdp and impdp) rather than the old exp and imp utilities. Data Pump is superior to the old utilities. See the Oracle Database Utilities manual for details on using Data Pump.
Exporting a Maximo Schema
NOTE: You must use the same version of the export utility as the source database, and the same version of the import utility as the target database. You must also import into a version of Oracle that is the same as or higher than the source database. For example, to import into a version 10.1.0.3 database you must use import version 10.1.0.3.
WARNING: If you export the schema while data is being added, then one or more sequences will be incorrect after the import and will result in ORA-00001 errors. You will need to compare sequence values with the maximum value in the column that they populate and correct the invalid ones.
WARNING: Before exporting the schema you should stop the Oracle Text index synchronization job so that you do not export text indexes that are in mid-update. If you do not stop this job, the export file will contain invalid text index data for some of these indexes, which will cause errors in Maximo after importing. After the synchronization job has stopped, run the synchronization procedure once by hand to ensure that the text indexes are up to date.
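As a generic sketch, a single Oracle Text index can also be synchronized by hand with the CTX_DDL.SYNC_INDEX procedure (the index name below is only an example; use the names of your own text indexes):

```sql
exec ctx_ddl.sync_index('description_ndx');
```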
WARNING: If you export from a live database, such as production, while data is being added or modified, then the text indexes will be missing some of the changes. This can cause text queries to fail to return rows that exist when run on the target database. The solution is to rebuild all the text indexes on the target database. (This is not a critical problem if the target database is for functional testing, upgrade testing, and so forth.)
Run the Oracle export utility according to the Oracle Database Utilities manual. Export only the Maximo schema (userid). The following is a sample export parameter file for the exp utility (for an export run on Windows):
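A minimal parameter file along these lines can be used (the connect string, password, and file names are illustrative; adjust them for your environment):

```
userid=maximo/maximo@sourcedb
owner=maximo
file=maximo.dmp
log=maximoexp.log
consistent=y
statistics=none
```

Run it with exp parfile=maximoexp.par. CONSISTENT=Y makes the table data read-consistent as of the start of the export, which reduces, but does not eliminate, the sequence and text index issues described in the warnings above.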
Importing the Maximo Schema
Since the target instance must have the same tablespace names for any tables containing CLOB or BLOB datatypes, you can run the following statement in the source instance to list the tablespace names used by tables that contain BLOB or CLOB columns:
select distinct tablespace_name from user_tables where table_name in
(select table_name from user_tab_columns where data_type in ('BLOB','CLOB'));
Before importing the schema you must create the MAXIMO userid. If you are refreshing a test schema from production, you must ensure that all the old objects have been dropped. The only way to guarantee this has been done correctly is to drop the schema and then recreate it.
Run the following commands to create the Maximo schema in the target instance. The sequence of commands below is taken from CREATEMaximoES.SQL and is documented in the Maximo Installation Guide. If you use a schema name or tablespace name other than MAXIMO, the installation guide explains what to change in the script.
create user maximo identified by maximo;
alter user maximo default tablespace maximo quota unlimited on maximo;
alter user maximo temporary tablespace temp;
grant create trigger to maximo;
grant create session to maximo;
grant create sequence to maximo;
grant create synonym to maximo;
grant create table to maximo;
grant create view to maximo;
grant create procedure to maximo;
grant alter session to maximo;
grant execute on ctxsys.ctx_ddl to maximo;
You may need to give the schema owner quota on additional tablespaces. Copy and change the following command for each additional tablespace:
alter user maximo quota unlimited on second_tablespace;
Starting with Maximo patch 6.0.3 on Oracle 10g or higher, the schema will also need the CREATE JOB privilege because the text index synchronization job is run by DBMS_SCHEDULER instead of DBMS_JOB. (This privilege is not needed and does not exist on Oracle 9i.) On Oracle 10g or 11g also execute:
grant create job to maximo;
The schema export will not export the Oracle Text preference and sub-lexer definitions. You must log in as MAXIMO with SQL*Plus and execute the following to set these up before you run the import:
call ctx_ddl.create_preference('global_lexer', 'multi_lexer');
call ctx_ddl.add_sub_lexer('global_lexer','simplified chinese','chinese_lexer','zh');
call ctx_ddl.add_sub_lexer('global_lexer','korean','korean_morph_lexer','ko');
NOTE: The above incorporates the korean_morph_lexer that is required for Oracle 10.2. This lexer is also available with Oracle 9.2 and 10.1, so the above commands can be used with all the supported versions of Oracle.
WARNING: If you are using the Oracle Data Pump utilities, then the text indexes will import without error even if the preferences have not been created. The preferences are only used when creating a text index with the CREATE INDEX command. After the import, if you run Database Configuration, upgrade utilities, and so forth, you will encounter fatal errors if the preferences are missing.
The following is a sample import parameter file for imp. Note the use of FROMUSER/TOUSER syntax. Even if you have a full database export, do not perform a full database import. A full import will update the CTXSYS schema unnecessarily and cause problems with the korean_lexer.
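A minimal parameter file along these lines can be used (the connect string, password, and file names are illustrative; adjust them for your environment):

```
userid=system/manager@targetdb
fromuser=maximo
touser=maximo
file=maximo.dmp
log=maximoimp.log
```

Run it with imp parfile=maximoimp.par. FROMUSER/TOUSER restricts the import to the Maximo schema objects only, which avoids the full-import problems described above.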
If you import into the same instance from which you exported, you must use a new Maximo schema name, and you may receive an error at the end of the import when the maximo_ts_job batch job is created.
Even if the import was successful, the job which automatically updates the Oracle Text indexes has not been started. See Tech Note 1262037 for an explanation of how to correct this. Maximo will run without it, but searches on description fields will not find all the records until it is fixed.
If you have imported any Maximo tables or indexes into different tablespaces than in the original database, you will need to update this information in MAXTABLE, MAXTABLECFG, and MAXSYSINDEXES. See Knowledge Base document 1262981 for the SQL statements used to make these updates.
There are a number of tasks which may need to be performed after the import before you can start the application server(s) and use the new system.
Checking for Valid Text Indexes
After the import has completed, you may have one or more invalid text indexes. This can occur if a text index was being updated while the export was running. The following statement will list any invalid text indexes:
select index_name,index_type,status,domidx_status,domidx_opstatus from user_indexes
where index_type = 'DOMAIN' and (domidx_status <> 'VALID' or domidx_opstatus <> 'VALID');
If you find any invalid indexes, you can fix them with the following command:
alter index indexname rebuild;
If there are a large number of them, you can run SQL*Plus, log in as the Maximo schema owner, and run the following to generate and run a script of rebuild commands (the spool file name is only an example):
set feedback off
set heading off
set pagesize 10000
spool rebuild_text_indexes.sql
select 'ALTER INDEX '||index_name||' REBUILD;' from user_indexes where index_type = 'DOMAIN'
and (domidx_status <> 'VALID' or domidx_opstatus <> 'VALID');
spool off
@rebuild_text_indexes.sql
If the export is performed while the source database is in use, then the sequences may be used after their definitions have been exported which can result in ORA-00001 errors when adding records to some of the tables in the copy of the database.
After exporting from the source database, execute the following query and save the output. This will record the highest number used for each sequence while the export was running:
select sequence_name,last_number from user_sequences order by sequence_name;
After importing into the target database, run the above query on the target and save the output. Compare the results from the two databases. For each instance where the source database sequence has a higher value for last_number than the target, you must drop and recreate that sequence in the target database in order to reset it:
DROP SEQUENCE sequence_name;
CREATE SEQUENCE sequence_name START WITH higher_number+1;
If the source system is running Integration Framework processes, then these must be cleaned up in the target database before starting the application server.
- Set all the Integration Framework crontasks to inactive status by updating CRONTASKINSTANCE.ACTIVE to 0 for crontasks: JMSQSEQCONSUMER, FLATFILECONSUMER, IFACETABLECONSUMER, and XMLFILECONSUMER.
- Drop any tables in the Maximo schema used for the JMS queue data store. (For example, if on WebSphere drop any tables which have a name that starts with SIB.)
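The crontask deactivation in the first step above can be done with a statement like the following (table and column names are per the Maximo 6 schema; verify them against your version):

```sql
update crontaskinstance set active = 0
 where crontaskname in ('JMSQSEQCONSUMER','FLATFILECONSUMER','IFACETABLECONSUMER','XMLFILECONSUMER');
commit;
```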
After starting the application server, the JMS queues may need to be recreated for the new system, new data stores defined, and so on. See the Tech Notes on setting up Integration Framework JMS queues and crontasks for details.
17 June 2018