Topic
4 replies Latest Post - 2013-01-08T00:26:20Z by SystemAdmin
SystemAdmin

Pinned topic In-Memory Tables (without using large Bufferpool workaround) & w/o solidDB?

2012-11-20T16:51:09Z
Dear Forum,

Is there support for in-memory tables in DB2 without using the workaround of a large (table-fitting) bufferpool and an appropriate tablespace?

And of course without using solidDB? The idea is that we can simply put huge amounts of data into the cloud (read-only) at high performance while saving disk space.

Temporary tables don't seem to do the job, since their lifetime is limited.


Romeo Kienzler
Data Scientist and Architect
M. Sc. (ETH) in Information Systems, Statistics and Bioinformatics
Technology Consultant, IBM Innovation Center
r o m e o . k i e n z l e r ( a - t ) c h . i b m . c o m
Updated on 2013-01-08T00:26:20Z by SystemAdmin
  • SystemAdmin

    Re: In-Memory Tables (without using large Bufferpool workaround) & w/o solidDB?

    2012-11-20T21:57:47Z  in response to SystemAdmin
    Hi,

    It is an interesting question; however, I am not sure I understand the part about saving disk space. How would that work? The bufferpool size is equal to or less than the tablespace size, so there is no size reduction (well, some reduction with deep compression, but that could affect performance).

    If you want a lot of data available at a high rate, my advice is to use a block-based bufferpool (one with a large block area), which keeps contiguous blocks of disk pages together in memory and reduces random "jumps". But this only works if the tables in the related tablespaces fit in that space; if not, you will waste memory.
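    A block-based bufferpool could be set up roughly like this (a sketch for DB2 LUW; the names `BP_BLOCKED` and `TS_BIG` and all sizes are illustrative, not from the thread):

    ```sql
    -- Reserve part of the bufferpool as a block area so that sequential
    -- prefetch can place contiguous disk pages contiguously in memory.
    CREATE BUFFERPOOL BP_BLOCKED
      SIZE 250000 PAGESIZE 16K            -- ~4 GB with 16 KB pages
      NUMBLOCKPAGES 100000 BLOCKSIZE 32;  -- block area used by sequential prefetch

    -- Bind the tablespace holding the hot tables to that bufferpool.
    CREATE TABLESPACE TS_BIG
      PAGESIZE 16K
      BUFFERPOOL BP_BLOCKED;
    ```

    The rest of the bufferpool (SIZE minus NUMBLOCKPAGES) remains page-based for random access.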

    Also, a tablespace whose prefetch size and extent size are tuned to the underlying RAID configuration, with the OVERHEAD (seek time) and TRANSFERRATE parameters adjusted to match the disks, will perform better.
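    For example (a sketch; the tablespace name and all values are illustrative and depend on the actual RAID layout):

    ```sql
    -- Rule of thumb: PREFETCHSIZE = EXTENTSIZE x number of containers,
    -- so one prefetch request drives all disks in parallel.
    CREATE TABLESPACE TS_DATA
      PAGESIZE 16K
      EXTENTSIZE 32
      PREFETCHSIZE 128     -- 4 containers x EXTENTSIZE 32
      OVERHEAD 4.0         -- avg seek + latency, in milliseconds
      TRANSFERRATE 0.04;   -- milliseconds to read one page
    ```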

    Finally, you could have a set of SSD in order to provide a higher speed.

    If the information is read-only, run the queries with UR (Uncommitted Read); that will reduce the locks.
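    In DB2 this is done per statement with the WITH UR clause (the table and columns below are hypothetical):

    ```sql
    -- Uncommitted Read takes no row locks, which is safe here
    -- because the data is read-only anyway.
    SELECT col1, col2
    FROM myschema.big_table
    WHERE col1 = 42
    WITH UR;
    ```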
    • SystemAdmin

      Re: In-Memory Tables (without using large Bufferpool workaround) & w/o solidDB?

      2012-11-21T09:59:11Z  in response to SystemAdmin
      Dear Andres,

      thanks a lot for your answer. My scenario is as follows:

      1. We have a 100 GB database whose read-only versions we want to put into the cloud. Since we prefer to have all data in main memory anyway, and we have a redundant copy "at home", we don't want the cloud copy stored on hard drives but kept in main memory only.

      So if DB2 supported in-memory-only tables, we could load this data into the tables remotely on startup.

      But I think we have to go for solidDB.
      • SystemAdmin

        Re: In-Memory Tables (without using large Bufferpool workaround) & w/o solidDB?

        2012-12-08T00:48:26Z  in response to SystemAdmin
        Create a bufferpool as large as your table and dedicate it to the tablespace the table resides in (assuming you have only one table in this tablespace). I've done this with the dimension tables in our model; the tables stay in memory and attain hit rates of up to 8 million rows per second.
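        The setup described above can be sketched as follows (names and sizes are illustrative, not from the post):

        ```sql
        -- Bufferpool sized to hold the entire table: 65536 x 16 KB = 1 GB.
        CREATE BUFFERPOOL BP_DIM SIZE 65536 PAGESIZE 16K;

        -- Dedicated tablespace bound to that bufferpool.
        CREATE TABLESPACE TS_DIM
          PAGESIZE 16K
          BUFFERPOOL BP_DIM;

        -- The only table in TS_DIM, so nothing competes for the pool
        -- and its pages are never evicted once loaded.
        CREATE TABLE dim_customer (
          id   INTEGER NOT NULL PRIMARY KEY,
          name VARCHAR(100)
        ) IN TS_DIM;
        ```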

        Randy
    • SystemAdmin

      Re: In-Memory Tables (without using large Bufferpool workaround) & w/o solidDB?

      2013-01-08T00:26:20Z  in response to SystemAdmin
      Andres,
      Compression doesn't necessarily mean performance degradation; in most cases it's a performance gain. Think about it: what is the bottleneck in database systems? IO operations. If compression lets you fetch twice as many rows in one IO operation, you have effectively doubled your throughput.