5 replies. Latest post 2012-12-08T19:33:31Z by SystemAdmin
SteveW.
13 Posts

Pinned topic Cube optimization

2012-11-09T14:48:24Z
We have several cubes; one is 20 GB in size. Don't ask me why. IBM claims size doesn't matter, so the customer built it expecting "lightning speed". Well, it is VERY slow compared to their expectations, especially when first opening reports.

I'm wondering whether the data is already in memory when a cube is first used after it has been updated, and whether scheduling a report against the cube would cause the data to be cached in RAM so that the next report is "lightning fast".

Any other optimization tips for large (enormous) cubes are appreciated as well.
Updated on 2012-12-08T19:33:31Z by SystemAdmin
  • SystemAdmin
    658 Posts

    Re: Cube optimization

    2012-11-11T22:13:07Z, in response to SteveW.
    Hi Steve,

    One option would be to use the ViewConstruct function in a TurboIntegrator process to pre-calculate a view and store it in memory.

    You could run this as part of your data-load processes, and also link it to the StartUpChore parameter in the tm1s.cfg file so the process runs each time the server is restarted.
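
    As a minimal sketch of how those two pieces fit together (cube, view and chore names below are placeholders, not from your model):

    # TurboIntegrator process, Prolog tab:
    # build the stored view and hold its calculated values in memory
    # so the first user queries are served from cache
    ViewConstruct('SalesCube', 'PreLoadView');

    # tm1s.cfg:
    # run the chore that contains this process whenever the server starts
    StartUpChore=PreCacheViews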

    In relation to cube design, the basic rule I have always used is to order the dimensions in a cube from small to large (number of elements) and from sparse to dense (data points per element).
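
    A purely hypothetical illustration of that ordering (dimension names and their sparsity are invented):

    # Sparse dimensions first, smallest to largest, then the dense dimensions;
    # here Version, Customer and Product are assumed sparse, Month and the measures dense.
    CubeCreate('Sales Report', 'Version', 'Customer', 'Product', 'Month', 'Sales Measure');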

    Hope this helps.

    Rod
    • SteveW.
      13 Posts

      Re: Cube optimization

      2012-11-12T17:28:09Z, in response to SystemAdmin
      What does "store a view in memory" do for performance?
      • SystemAdmin
        658 Posts

        Re: Cube optimization

        2012-11-12T22:04:28Z, in response to SteveW.
        Hi Steve,

        The primary benefit is that initial response times should be good for all users. Beyond that, I wouldn't expect you to notice much difference, provided the view(s) used for preloading reflect regular usage patterns.

        Potential downsides are:
        • server start-up will take longer, to allow the pre-caching process to run
        • preloading an unreasonably large view may chew up additional RAM
        • if the cube is subject to a lot of writes, the benefits of pre-caching may be lost

        Something I didn't ask: is the 20 GB cube actually smaller than its source data? If the answer is no, then your cube design is probably wrong.

        If performance is a big issue then you may need to review processors and memory as well as your general cube architecture.

        Hope this helps.

        Rod
        • SystemAdmin
          658 Posts

          Re: Cube optimization

          2012-11-22T16:41:07Z, in response to SystemAdmin
          I just want to go back to checking a few basics, as occasionally this gets missed.

          1. Does this 20 GB cube have rules?
          • Are you using SKIPCHECK and FEEDERS?
          • Could you be overfeeding?

          2. Have you considered reordering the cube dimensions using the dimension reorder utility?

          The two items above will help reduce the size of the cube and in turn help with performance.

          3. What front end are you using? If you are using Excel, make sure you are using the "=VIEW" function.
          TM1 caches any requested view in memory, making the second query faster.
          If, however, an underlying number is changed, the cache is invalidated and TM1 will recalculate the view.
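
          A rough worksheet sketch, assuming a three-dimensional cube (server, cube and cell references are all invented; compare with a slice generated by TM1 for the exact arguments):

          =VIEW("myserver:SalesCube", $B$1, $A$5, $B$4)
          =DBRW("myserver:SalesCube", $B$1, $A5, B$4)

          The VIEW formula is typically placed once per sheet and defines a cube view that the DBRW formulas (one per data cell) can draw on for more efficient retrieval.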
  • SystemAdmin
    658 Posts

    Re: Cube optimization

    2012-12-08T19:33:31Z, in response to SteveW.
    Steve,

    Definitely look into reordering the dimensions, and ensure that SKIPCHECK; and FEEDERS; are included in the rules. Overfeeding can be a real killer as well.
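
    A minimal rules skeleton showing where those statements sit (cube and element names are made up, not from your model):

    # SKIPCHECK restores sparse consolidation, so rule-calculated cells must be fed
    SKIPCHECK;

    ['Gross Margin'] = N: ['Revenue'] - ['Cost of Sales'];

    FEEDERS;

    # feed the target only from the cells that drive it, to avoid overfeeding
    ['Revenue'] => ['Gross Margin'];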

    If you are developing "reports," one practice I tend to follow is reporting off a non-rule-based cube, or a cube with very few rules in it. I know there's a temptation to generate the reports off a rule-driven cube, but you may run into latency issues if that cube is heavily rule-driven. You'll want a TI process to move the data into a report cube. Given the amount of data you have, you may need to be careful about how you transfer data to the report cube.
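
    A sketch of that transfer, assuming the TI process uses a view of the rule-driven cube as its data source (cube, dimension and variable names are placeholders):

    # Data tab: vProduct, vMonth, vMeasure and vValue are the variables
    # generated from the source view; write each value into the report cube
    CellPutN(vValue, 'Sales Report', vProduct, vMonth, vMeasure);

    Skipping consolidated values and zeros in the source view keeps the volume of cells transferred manageable.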