Topic: Derby size question (pinned)
16 replies. Latest post 2012-05-29T06:52:36Z by FGTX_Gang_Duan

FGTX_Gang_Duan
7 Posts

2012-05-08T08:36:59Z
Guys, I have installed Mashup Center 3.0 on RHEL 5.5, with 5 normal feeds and 1 real-time feed, but the Derby database now takes 40 GB of disk space. Is this normal? Are there any tools I can use to look into the DB and see what is stored there?
Updated on 2012-05-29T06:52:36Z by FGTX_Gang_Duan
  • Stan
    340 Posts

    Re: Derby size question

    2012-05-08T14:31:21Z  in response to FGTX_Gang_Duan
    This is not a normal situation. Excel and CSV feeds are stored in the database, so if you have defined several large ones the database would grow to store them; otherwise I would not expect the DB to be 40 GB. If the large file or files are in the LOG directory, then restarting the server will clear the transaction log on startup and release the space. If the space is used in SEG0, then a large amount of data is stored in the DB.

    If the large feeds are deleted, the data files will still not return space to the OS. Compress table will need to be run before the space cleared by deleting objects is returned to the OS.

    Compress information: http://db.apache.org/derby/docs/10.3/ref/rrefaltertablecompress.html
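
    To see which area is actually holding the space, something like the following should tell you (a sketch; the database path is just a placeholder for wherever your hubdb directory lives):

    cd /path/to/hubdb
    du -sh log seg0
    # log  = Derby transaction log, released after a clean shutdown/restart
    # seg0 = table and index data files; needs pruning plus compress to shrink

    If most of the 40 GB is under seg0, the space is in table/index data and will only come back after rows are deleted and the tables are compressed.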
  • Stan
    340 Posts

    Re: Derby size question

    2012-05-09T21:50:59Z  in response to FGTX_Gang_Duan
    The files you listed support tables of the tracking system:
    MMMS_OFFERING_DATA file Cd20.dat and its indexes
    and OBJECTDATA files: C4d0.dat and indexes Cb*.dat

    You need to prune the old tracking data
    - see: http://publib.boulder.ibm.com/infocenter/mashhelp/v3/topic/com.ibm.muh.apiref.doc/api_metrics_cleartrackingdata.htm

    Then running the Derby compress table command will return the space to the OS.
  • FGTX_Gang_Duan
    7 Posts

    Re: Derby size question

    2012-05-10T08:31:24Z  in response to FGTX_Gang_Duan
    Many thanks, Stan.

    I checked the link about the Clear Tracking Data service, but I do not know how to make a REST call to the service. Using Firefox to view it as a webpage returns a download of a file named prune, which is a text file:

    {"result":"CDHUB0001E: You should be logged into an account.","code":1}

    So, could you please tell me how to make the call?
    • Stan
      340 Posts

      Managing / pruning Tracking data [was: Derby size question]

      2012-05-11T16:23:08Z  in response to FGTX_Gang_Duan
      Once you log in to Mashup Center as the administrator you will get further, but I ran into a succession of other issues, like needing a DATE and then something about the API not being available: CDHUB0699E: The requested tracking API does not exist: prune%2001-01-2011. This really needs an example, or the documentation should highlight the alternative option of doing it via the Admin console.

      Anyway, an error preceding the one above gave me a hint about a URL parameter (from), and it worked. It stated:
      {"result":"CDHUB0604E: The validation of the FROM parameter failed. Ensure that it is in the correct format, mm-dd-yyyy.","code":604}

      working URL to prune from Jan 1 2011:
      https://localhost:9443/mashuphub/client/tracking/prune?from=01-01-2011

      I'll work with pruning via URL as well as the Admin interface and add clarification to the WIKI document.

      From the docs (with the options numbered):
      The removal of old data is requested
      1) either from the MashupHub administration user interface or
      2) through a REST service. The service requires HTTP Basic Auth for authentication as a MashupHub administrator. The prune tracking data service is invoked by issuing a REST call to

      https://host:port/mashuphub/client/tracking/prune

      passing a date. All tracking entries older than that date will be removed. The format of the date is a string in the format mm-dd-yyyy.
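
      From the command line you could also invoke it with curl; a rough sketch (host, port, and the admin credentials are placeholders, and -k skips certificate validation for the default self-signed certificate):

      curl -k -u hubadmin:password \
        "https://localhost:9443/mashuphub/client/tracking/prune?from=01-01-2011"

      The response should be a small JSON result along the lines of the CDHUB messages shown above.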
    • Stan
      340 Posts

      Re: Derby size question

      2012-05-11T16:32:18Z  in response to FGTX_Gang_Duan
      You should just go to the Admin Console - it appears very easy. Logged in as Admin, select:
      Settings > Administration menu: Object Statistics > Statistics settings
      On the resulting screen:
      the section 'Tracking:' - allows you to enable/disable tracking
      'Clear Tracking Information' - allows you to enter the date before which the data should be cleared
      'Schedule Automatic Data Maintenance' - I recommend this one; it allows you to set up automatic pruning by specifying the number of days to keep the data.
      • Stan
        340 Posts

        Compress fails ERROR 25502 'read-only connection' [was: Derby size question]

        2012-05-16T14:45:16Z  in response to Stan
        Responding on the forum for the benefit of others who might need to reduce the disk footprint of the Mashup Center Derby catalog database.

        Problem encountered:
        cd /opt/IBM/MashupCenter/3.0/AppServer/derby/bin/embedded/
        ./ij.sh

        ij>connect 'jdbc:derby:/data/hubdb;user=admin;password=password';
        ij> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('ADMIN','MMMS_OFFERING_DATA',1);
        ERROR 25502: An SQL data change is not permitted for a read-only connection, user or database.

        So could you please tell me how to connect to hubdb not in READ-ONLY mode?

        SUGGESTION:
        Derby will mark a connection 'read-only' when it cannot complete a write operation - usually the failed write is the boot message to derby.log (written, by default, to the current working directory).
        To do: check and correct the file ownership and permissions on the database file tree (/data/hubdb) and on the working directory where derby.log will be written. Please post here whether or not this resolves your problem.

        OTHER THINGS:
        1) Be sure to shut down Mashup Center before connecting to the database, or unrecoverable corruption may result.
        2) Do not use a network mount to access the Derby database files.
        3) Your connection URL shows an odd location for the Derby database: /data/hubdb. If this is a copy of the DB (a very good idea - work with a copy until everything works as expected), be certain the copy was made while Mashup Center was shut down. If a copy is made while the system is running, a file could be updated during the copy operation, making that file inconsistent with the rest of the DB; recovering from backup is then the only way to restore the DB.
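
        For the ownership/permission check, something along these lines should be enough (a sketch; it assumes the server and ij both run as root - adjust the user and group if your installation runs under a different account):

        ls -ld /data/hubdb /data/hubdb/log /data/hubdb/seg0   # who owns the DB tree?
        ls -l derby.log                                       # run from the directory you start ij / the server in
        # only if ownership or permissions turn out to be wrong:
        chown -R root:root /data/hubdb
        chmod -R u+rwX /data/hubdb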
        • FGTX_Gang_Duan
          7 Posts

          Re: Compress fails ERROR 25502 'read-only connection' [was: Derby size question]

          2012-05-17T08:01:27Z  in response to Stan
          Many THANKS, Stan :-)

          There was a time when the hard disk was full, with no space left; maybe that was the reason why Derby was in read-only mode?

          About /data/hubdb: it is the real path of hubdb, while Mashup Center uses a soft link to it, shown below:

          root@rcc-pok-idg-#### mashuphub-enterprise.war# ll
          total 44
          drwxrwxr-x 13 root root 4096 Mar 16 2011 client
          -rw-rw-r-- 1 root root 1486 Oct 22 2010 hubconfig.bat
          -rwxrwxr-x 1 root root 1455 Mar 16 2011 hubconfig.sh
          lrwxrwxrwx 1 root root 12 Nov 29 04:51 hubdb -> /data/hubdb/
          drwxr-xr-x 2 root root 4096 May 17 02:20 hubdb_archived
          -rw-rw-r-- 1 root root 712 Oct 22 2010 hubie.bat
          -rwxrwxr-x 1 root root 713 Mar 16 2011 hubie.sh
          drwxrwxr-x 5 root root 4096 Apr 10 2011 hubsample
          -rw-rw-r-- 1 root root 426 Oct 22 2010 index.jsp
          drwxrwxr-x 6 root root 4096 Mar 16 2011 META-INF
          drwxrwxr-x 5 root root 4096 Mar 16 2011 server
          drwxrwxr-x 5 root root 4096 Mar 16 2011 WEB-INF

          I checked the file owner and permissions, and they all look right: directories with 755, files with 644, owner root.root.
          So I tried to start Mashup Center and watched the log file at the same time; it shows this:

          root@rcc-pok-idg-#### derby# tail -f derby.log

          2012-05-17 07:03:23.273 GMT:
          Booting Derby version The Apache Software Foundation - Apache Derby - 10.3.3.1 - (883215): instance c013800d-0137-599b-a9b9-00004cb62b53
          on database directory /opt/IBM/MashupCenter/3.0/mm/MashupDB

          Database Class Loader started - derby.database.classpath=''

          2012-05-17 07:03:35.883 GMT:
          Booting Derby version The Apache Software Foundation - Apache Derby - 10.3.3.1 - (883215): instance a02ac013-0137-599b-a9b9-00004cb62b53
          on database directory /data/hubdb

          Incomplete log record detected in file /data/hubdb/log/log143922.dat clearing log file from position 61 to eof at 32,768.

          And it stops there for a long time, so I checked the /data/hubdb directory and found some lock files there; is this *NORMAL*?

          root@rcc-pok-idg-#### hubdb# ll
          total 284
          -rw-r--r-- 1 root root 4 May 17 03:03 dbex.lck
          -rw-r--r-- 1 root root 38 May 17 03:03 db.lck
          drwxr-xr-x 2 root root 266240 May 17 03:48 log
          drwxr-xr-x 2 root root 4096 Mar 16 2011 seg0
          -rw-r--r-- 1 root root 913 Mar 16 2011 service.properties
          drwxr-xr-x 2 root root 4096 May 17 03:03 tmp

          I will keep this thread updated if it prints more log output.
          • Stan
            340 Posts

            Re: Compress fails ERROR 25502 'read-only connection' [was: Derby size question]

            2012-05-17T17:21:34Z  in response to FGTX_Gang_Duan
            My concern here is that putting any software layer between the Derby DBMS engine and the physical data files can lead to failures and corruption of the Derby database. This includes, but is not limited to, network mounts, symbolic/soft links, RAM disks, etc. Mashup Center does not create soft links for its catalog database, so this was a dangerous customization. The hubdb directory tree needs to be physically located in mashuphub-enterprise.war, not reached through a symbolic link to the files. I suspect "May 17 02:20 hubdb_archived" might be a backup of hubdb, which will be useful if the DB in /data is corrupted.

            The error reported could very well be caused by having Derby write to the database via the symbolic link. Here is what is needed to determine whether the database needs to be recovered from backup.

            > Shut down Mashup Center
            > Attempt to boot the database in ij using the physical path to the DB (/data/hubdb).
            > If the database will not boot, you will need to recover the database from backup or rebuild the database from scratch.
            > If the database boots, run a consistency check on all tables and indexes.
            > If it runs clean, we are good.
            > If errors are found in indexes, we MAY be able to restore integrity by dropping and recreating the index(es).
            > If errors are found in tables, you will need to recover the database from backup or rebuild the database from scratch.

            Let me know what you find.
            • Stan
              340 Posts

              Derby Consistency check [was: Derby size question]

              2012-05-17T17:25:46Z  in response to Stan
              Forgot to provide information on the consistency check. Remember that multiple runs are needed if the check_table reports errors:

              >> http://wiki.apache.org/db-derby/DatabaseConsistencyCheck
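
              That page boils down to running SYSCS_CHECK_TABLE over every table. A sketch of the query in ij, run against the hubdb connection shown earlier (each row should return 1; an inconsistency shows up as an error):

              ij> SELECT schemaname, tablename,
                      SYSCS_UTIL.SYSCS_CHECK_TABLE(schemaname, tablename)
                  FROM sys.sysschemas s, sys.systables t
                  WHERE s.schemaid = t.schemaid;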
  • FGTX_Gang_Duan
    7 Posts

    Re: Derby size question

    2012-05-22T13:38:09Z  in response to FGTX_Gang_Duan
    Hi Stan,

    Sorry for the late reply; it really takes time to run those compress commands :-(

    ij> select count(*) from ADMIN.MMMS_OFFERING_DATA;
    1

    143482029

    1 row selected
    ij> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('ADMIN','MMMS_OFFERING_DATA',1);
    0 rows inserted/updated/deleted

    ij>commit;
    It is weird that the database still takes the same amount of space; nothing happened. So I checked the seg0 directory and found these big ones, but I do not recognize them - they are not in your 'translation' list. Can you please help with this?

    root@rcc-pok-idg-#### seg0# ll -Sh|head -n20
    total 33G
    -rw-r--r-- 1 root root 19G May 22 07:49 cea0.dat
    -rw-r--r-- 1 root root 7.6G May 22 10:03 cec1.dat
    -rw-r--r-- 1 root root 3.2G May 22 08:13 ceb1.dat
    -rw-r--r-- 1 root root 1.4G May 8 23:49 cb11.dat
    -rw-r--r-- 1 root root 686M May 8 23:49 cb01.dat
    -rw-r--r-- 1 root root 665M May 8 23:49 caf1.dat
    -rw-r--r-- 1 root root 390M Mar 2 02:51 c8f0.dat
    -rw-r--r-- 1 root root 197M Mar 2 02:51 c780.dat
    -rw-r--r-- 1 root root 26M Mar 2 03:33 ca70.dat
    -rw-r--r-- 1 root root 2.6M Mar 16 2011 c990.dat
    -rw-r--r-- 1 root root 896K Mar 2 02:51 c530.dat
    -rw-r--r-- 1 root root 216K May 18 05:55 c230.dat
    -rw-r--r-- 1 root root 132K Sep 15 2011 ce70.dat
    -rw-r--r-- 1 root root 128K Mar 2 02:51 c3c0.dat
    -rw-r--r-- 1 root root 100K May 18 05:55 c90.dat
    -rw-r--r-- 1 root root 100K Sep 15 2011 ce91.dat
    -rw-r--r-- 1 root root 68K May 22 08:13 c20.dat
    -rw-r--r-- 1 root root 68K Mar 2 02:51 c4a0.dat
    -rw-r--r-- 1 root root 68K May 18 05:55 ca1.dat

    your old list:

    DB_TABLE |FILE |ISIN&

    OBJECTS |C3c0.dat |false
    OBJECTS |C3d1.dat |true
    ASSOCIATIONS |C3e0.dat |false
    ASSOCIATIONS |C3f1.dat |true
    ASSOCIATIONS |C401.dat |true
    ASSOCIATIONS |C411.dat |true
    CATEGORIES |C420.dat |false
    CATEGORIES |C431.dat |true
    CATEGORIES |C441.dat |true
    CATEGORYNAMES |C450.dat |false
    CATEGORYNAMES |C461.dat |true
    COMMENTS |C470.dat |false
    COMMENTS |C481.dat |true
    COMMENTS |C491.dat |true
    ACLS |C4a0.dat |false
    ACLS |C4b1.dat |true
    ACLS |C4c1.dat |true
    OBJECTDATA |C4d0.dat |false
    OBJECTDATA |C4e1.dat |true
    SESSIONS |C4f0.dat |false
    SESSIONS |C501.dat |true
    PLUGINS |C510.dat |false
    PLUGINS |C521.dat |true
    PLUGINDATA |C530.dat |false
    PLUGINDATA |C541.dat |true
    PLUGINDATA |C551.dat |true
    PLUGINDATA |C561.dat |true
    FAVORITES |C570.dat |false
    FAVORITES |C581.dat |true
    FAVORITES |C591.dat |true
    PARAMS |C5a0.dat |false
    PARAMS |C5b1.dat |true
    PARAMS |C5c1.dat |true
    PUREXMLDATA |C5d0.dat |false
    PUREXMLDATA |C5e1.dat |true
    RATINGS |C5f0.dat |false
    RATINGS |C601.dat |true
    RATINGS |C611.dat |true
    SQLDATA |C620.dat |false
    SQLDATA |C631.dat |true
    TAGS |C640.dat |false
    TAGS |C651.dat |true
    TAGS |C661.dat |true
    USERS |C670.dat |false
    USERS |C681.dat |true
    GROUPS |C690.dat |false
    GROUPS |C6a1.dat |true
    KEYWORDS |C6b0.dat |false
    KEYWORDS |C6c1.dat |true
    KEYWORDS |C6d1.dat |true
    URLINFO |C6e0.dat |false
    URLINFO |C6f1.dat |true
    TRACKINGCONFIG |C700.dat |false
    TRACKINGCONFIG |C711.dat |true
    TRACKEDEXTERNAL |C720.dat |false
    TRACKEDEXTERNAL |C731.dat |true
    TRACKEDEXTERNAL |C741.dat |true
    TRACKEDLOCAL |C750.dat |false
    TRACKEDLOCAL |C761.dat |true
    TRACKEDLOCAL |C771.dat |true
    RESOURCES |C780.dat |false
    RESOURCES |C791.dat |true
    RESOURCES |C7a1.dat |true
    EVENTS |C7b0.dat |false
    EVENTS |C7c1.dat |true
    EVENTS |C7d1.dat |true
    LOG |C7e0.dat |false
    LOG |C7f1.dat |true
    DB_CONNS |C800.dat |false
    DB_CONNS |C811.dat |true
    DB_CONNS |C821.dat |true
    TMPDB_CONNS |C830.dat |false
    TMPDB_CONNS |C841.dat |true
    CONNECTIONS |C850.dat |false
    CONNECTIONS |C861.dat |true
    TMPCONNECTIONS |C870.dat |false
    TMPCONNECTIONS |C881.dat |true
    TMPOBJECTS |C890.dat |false
    TMPOBJECTS |C8a1.dat |true
    TMPPLUGINDATA |C8b0.dat |false
    TMPPLUGINDATA |C8c1.dat |true
    TMPPARAMS |C8d0.dat |false
    TMPPARAMS |C8e1.dat |true
    TMPRESOURCES |C8f0.dat |false
    TMPRESOURCES |C901.dat |true
    TMPRC |C910.dat |false
    SYSTEMDATA |C920.dat |false
    SYSTEMDATA |C931.dat |true
    FEEDDEPENDENCIES |C940.dat |false
    FEEDDEPENDENCIES |C951.dat |true
    FEEDDEPENDENCIES |C961.dat |true
    FEEDCONTROL |C970.dat |false
    FEEDCONTROL |C981.dat |true
    PLUGINRESOURCES |C990.dat |false
    PLUGINRESOURCES |C9a1.dat |true
    PLUGINRESOURCES |C9b1.dat |true
    PLUGINRESOURCEALIASES |C9c0.dat |false
    PLUGINRESOURCEALIASES |C9d1.dat |true
    PLUGINRESOURCEALIASES |C9e1.dat |true
    PLUGINRESOURCEALIASES |C9f1.dat |true
    TMPPLUGINRESOURCES |Ca00.dat |false
    TMPPLUGINRESOURCES |Ca11.dat |true
    TMPPLUGINRESOURCEALIASES |Ca20.dat |false
    TMPPLUGINRESOURCEALIASES |Ca31.dat |true
    OBJECTMETA |Ca40.dat |false
    OBJECTMETA |Ca51.dat |true
    OBJECTMETA |Ca61.dat |true
    REVISIONS |Ca70.dat |false
    JDBCDRIVERS |Ca80.dat |false
    JDBCDRIVERS |Ca91.dat |true
    OBJECTS |Caa1.dat |true
    OBJECTS |Cab1.dat |true
    OBJECTS |Cac1.dat |true
    OBJECTS |Cad1.dat |true
    OBJECTS |Cae1.dat |true
    OBJECTDATA |Caf1.dat |true
    OBJECTDATA |Cb01.dat |true
    OBJECTDATA |Cb11.dat |true
    ACLS |Cb21.dat |true
    ACLS |Cb31.dat |true
    ACLS |Cb41.dat |true
    TAGS |Cb51.dat |true
    PLUGINRESOURCES |Cb61.dat |true
    PLUGINRESOURCEALIASES |Cb71.dat |true
    RESOURCES |Cb81.dat |true
    USERS |Cb91.dat |true
    CONNECTIONS |Cba1.dat |true
    CONNECTIONS |Cbb1.dat |true
    DB_CONNS |Cbc1.dat |true
    REVISIONS |Cbd1.dat |true
    REVISIONS |Cbe1.dat |true
    OBJECTMETA |Cbf1.dat |true
    TMPOBJECTS |Cc01.dat |true
    TMPPLUGINDATA |Cc11.dat |true
    TMPPLUGINDATA |Cc21.dat |true
    TMPPARAMS |Cc31.dat |true
    TMPRESOURCES |Cc41.dat |true
    TMPRESOURCES |Cc51.dat |true
    TMPCONNECTIONS |Cc61.dat |true
    TMPCONNECTIONS |Cc71.dat |true
    TMPDB_CONNS |Cc81.dat |true
    TMPDB_CONNS |Cc91.dat |true
    TMPDB_CONNS |Cca1.dat |true
    TMPRC |Ccb1.dat |true
    TMPPLUGINRESOURCES |Ccc1.dat |true
    TMPPLUGINRESOURCES |Ccd1.dat |true
    TMPPLUGINRESOURCEALIASES |Cce1.dat |true
    MMMS_VENDERS |Ccf0.dat |false
    MMMS_VENDERS |Cd01.dat |true
    MMMS_VENDERS |Cd11.dat |true
    MMMS_OFFERING_DATA |Cd20.dat |false
    MMMS_OFFERING_DATA |Cd31.dat |true
    MMMS_OFFERING_DATA |Cd41.dat |true
    MMMS_SERVICES |Cd50.dat |false
    MMMS_SERVICES |Cd61.dat |true
    MMMS_SERVICES |Cd71.dat |true
    MMMS_CONSUMERS |Cd80.dat |false
    MMMS_CONSUMERS |Cd91.dat |true
    MMMS_CONSUMERLIMITS |Cda0.dat |false
    MMMS_CONSUMERLIMITS |Cdb1.dat |true
    MMMS_GROUPS |Cdc0.dat |false
    MMMS_GROUPS |Cdd1.dat |true
    MMMS_GROUPLIMITS |Cde0.dat |false
    MMMS_GROUPLIMITS |Cdf1.dat |true
    MMMS_FORMULAS |Ce00.dat |false
    MMMS_FORMULAS |Ce11.dat |true
    MMMS_DLOG |Ce20.dat |false
    MMMS_DLOG |Ce31.dat |true
    MMMS_BLACKLIST |Ce40.dat |false
    MMMS_BLACKLIST |Ce51.dat |true
    MMMS_BLACKLIST |Ce61.dat |true
    MMMS_CDRELEMENTS |Ce70.dat |false
    MMMS_CDRELEMENTS |Ce81.dat |true
    MMMS_CDRELEMENTS |Ce91.dat |true
    • FGTX_Gang_Duan
      7 Posts

      Re: Derby size question

      2012-05-22T14:07:24Z  in response to FGTX_Gang_Duan
      I did some queries against SYS.SYSCONGLOMERATES and found that cea0 corresponds to conglomerate number 3744,
      meaning MMMS_OFFERING_DATA:

      ij> select TABLENAME, CONGLOMERATENUMBER
      from SYS.SYSCONGLOMERATES a, SYS.SYSTABLES b
      where a.TABLEID = b.TABLEID AND b.TABLETYPE <> 'S' AND CONGLOMERATENUMBER=3744 ;

      TABLENAME |CONGLOMERATENUMBER

      MMMS_OFFERING_DATA |3744

      1 row selected

      but there are 3 records in that table, all with table name MMMS_OFFERING_DATA:

      ij> select TABLENAME, CONGLOMERATENUMBER
      from SYS.SYSCONGLOMERATES a, SYS.SYSTABLES b
      where a.TABLEID = b.TABLEID AND b.TABLETYPE <> 'S' AND TABLENAME='MMMS_OFFERING_DATA';

      TABLENAME |CONGLOMERATENUMBER

      MMMS_OFFERING_DATA |3744
      MMMS_OFFERING_DATA |3761
      MMMS_OFFERING_DATA |3777

      3 rows selected

      what are they referring to?
      • Stan
        340 Posts

        Re: Derby size question

        2012-05-22T16:43:51Z  in response to FGTX_Gang_Duan
        The listing is the table and two indexes - include the column isIndex to see which are indexes.
    • Stan
      340 Posts

      Re: Derby size question

      2012-05-22T16:57:59Z  in response to FGTX_Gang_Duan
      We must create the database from scratch during the install, so the file names on your system are not the same as on mine; I had thought we just laid pre-populated files onto disk.

      You need to remove rows from the table before running COMPRESS - Compress only reclaims space left by rows that were removed. Your table has many rows.
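
      In other words: prune first, then compress. A sketch of the follow-up in ij, after clearing the tracking data via the Admin console or the prune REST call (schema and table names taken from your listings):

      ij> select count(*) from ADMIN.MMMS_OFFERING_DATA;
      -- the count should now be far below the 143 million rows you saw earlier
      ij> call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('ADMIN','MMMS_OFFERING_DATA',1);
      -- with rows actually deleted, this pass can return the freed pages to the OS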
      • FGTX_Gang_Duan
        7 Posts

        Re: Derby size question

        2012-05-24T08:38:37Z  in response to Stan
        Hi Stan, I did some queries:

        ij> SHOW INDEXES FROM ADMIN.MMMS_OFFERING_DATA;
        TABLE_NAME |COLUMN_NAME |NON_U&|TYPE|ASC&|CARDINA&|PAGES

        MMMS_OFFERING_DATA |ID |false |3 |A |NULL |NULL
        MMMS_OFFERING_DATA |SID |true |3 |A |NULL |NULL

        2 rows selected
        ij> SELECT TABLENAME, CONGLOMERATENUMBER, ISINDEX
        from SYS.SYSCONGLOMERATES a, SYS.SYSTABLES b
        where a.TABLEID = b.TABLEID AND b.TABLETYPE <> 'S' AND TABLENAME='MMMS_OFFERING_DATA' ;
        TABLENAME |CONGLOMERATENUMBER |ISIN&

        MMMS_OFFERING_DATA |3744 |false
        MMMS_OFFERING_DATA |3761 |true
        MMMS_OFFERING_DATA |3777 |true

        3 rows selected

        So my understanding is like this:

        3744 refers to the actual table, while 3761 and 3777 are the 2 indexes on this table?

        Is there an easy way to rebuild those indexes?
        I only found that http://wiki.apache.org/db-derby/DatabaseConsistencyCheck shows a Java method,
        and the bad part is that I do not know Java well.

        One more thing: now that I know there are many rows in this table, can you please tell me what
        is stored in it? Only tracking information? Can I delete from this table, given that it would
        refill very soon?
        • Stan
          340 Posts

          Re: Derby size question

          2012-05-24T22:05:26Z  in response to FGTX_Gang_Duan
          Compress table (SYSCS_UTIL.SYSCS_COMPRESS_TABLE) will rebuild the indexes.

          Or you can use DBLOOK to dump the DDL for the objects, then drop the indexes and recreate them using the DDL from DBLOOK. See the Tools and Utilities Guide:
          http://db.apache.org/derby/docs/10.3/tools/

          Why do you want to recreate the indexes?
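
          For reference, a sketch of a dblook invocation (the dblook.sh location is an assumption based on where ij.sh lives on your install; the standard tool takes -d for the connection URL, -t for a table list, and -o for an output file):

          cd /opt/IBM/MashupCenter/3.0/AppServer/derby/bin/embedded/
          ./dblook.sh -d 'jdbc:derby:/data/hubdb' -t MMMS_OFFERING_DATA -o offering_ddl.sql
          # offering_ddl.sql should contain the DDL, including index definitions, for that table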
  • FGTX_Gang_Duan
    7 Posts

    Re: Derby size question

    2012-05-29T06:52:36Z  in response to FGTX_Gang_Duan
    Hi Stan,

    It looks like everything is fine now; we freed about 40 GB of space on this DB.

    Many thanks!

    root@rcc-pok-idg-#### embedded# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/vda1 36G 19G 16G 55% /
    tmpfs 941M 0 941M 0% /dev/shm
    /dev/vdb 99G 3.5G 90G 4% /data