1 reply Latest Post - 2011-08-03T01:55:23Z by averyw

Pinned topic Do not seem to be able to get caching right

2011-08-01T23:12:46Z
We are having problems with the data caching of Cubing Services: we cannot seem to ensure the data is cached at the start of a business day. Even if we try a seed query and fire off some prebuilt reports, there are still reports that take a long time to run. When the data is cached, performance is great, with most Cognos reports coming back in just a couple of seconds. When data is not cached, performance is terrible, with reports taking 30 seconds to a minute. We have spent a lot of time on database optimisation and MQTs, but what we really need is certainty that the data is in the cache when the user makes a request. We do not want to execute hundreds of reports every single morning after a Warehouse refresh. Is there an easier way to make sure the cache is loaded? Is there a type of seed query that fills the cache?
Updated on 2011-08-03T01:55:23Z by averyw
  • averyw

    Re: Do not seem to be able to get caching right

2011-08-03T01:55:23Z in response to vmcburney
    Hi -- There are two main areas where we can improve performance: load the cache so that reports run very quickly, and optimize the underlying database using MQTs for times when the cache does not have all the values needed for a report.

    As for loading the data cache, you do have the ability to specify a seed query in the Admin Console. But, it is only one MDX query statement against one Cubing Services cube that runs at cube server start up.
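To make the seed-query option concrete, here is a minimal sketch of the kind of MDX statement you could enter. The cube, dimension, and measure names are illustrative assumptions, not names from your environment; the idea is that a single broad query pre-loads high-level aggregate cells when the cube server starts.

```mdx
-- Hypothetical seed query: cube and member names are illustrative.
-- One broad statement like this warms the top-level aggregates at
-- cube server start-up; the Admin Console accepts one MDX statement
-- per cube.
SELECT
  { [Measures].[Sales Amount], [Measures].[Quantity] } ON COLUMNS,
  { [Time].[Year].Members } ON ROWS
FROM [Sales Cube]
```

Because only one statement runs, it pays to make it cover the cells the widest set of reports start from, and rely on the prebuilt reports for the rest.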

Every query that runs and goes against the underlying data source helps fill the cache. You mentioned firing off prebuilt reports -- this is another technique to fill the cache, since you have the flexibility to add any number of reports to run. The trick is to come up with a representative enough set of cache-populating reports, and for that it helps to know the usage patterns of your end users, such as:
    • if there is a set of saved reports that everyone accesses first and then proceeds to drill from, or
    • if there are common drill patterns from these known starting points, or
    • if there are any particular reports the VIPs run.

    Assuming there is a maintenance window every morning after a Warehouse refresh, and the cube server is restarted, you may choose to run these reports in parallel, or in some particular order of importance to ensure priority reports are done before others.
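The "priority first, then parallel" warm-up described above could be scripted along these lines. This is only a sketch under assumptions: `run_report()` is a hypothetical placeholder for however you trigger a saved Cognos report (a scheduled job, a command-line call, etc.), and the report names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical names -- substitute your own saved reports.
PRIORITY_REPORTS = ["vip_daily_summary", "exec_dashboard"]           # run first, in order
BULK_REPORTS = ["sales_by_region", "margin_trend", "inventory_aging"]  # run in parallel

def run_report(name: str) -> str:
    """Placeholder for firing one report against the cube server."""
    return f"ran {name}"

def warm_cache() -> list:
    results = []
    # Priority reports run serially, so the most important cells land
    # in the cache before anything else competes for the data source.
    for name in PRIORITY_REPORTS:
        results.append(run_report(name))
    # The remaining representative reports run in parallel during the
    # maintenance window after the Warehouse refresh.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results.extend(pool.map(run_report, BULK_REPORTS))
    return results
```

The ordering guarantee matters more than the parallelism: if the refresh window is tight, you at least know the VIP reports' cells are cached before the window closes.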

    There will, inevitably, be some ad hoc reports that require cell values that aren't in the cache yet and will have to go against the underlying data source. This is where database optimization, MQTs, and even updating the model help reduce the time those reports take to run. If you have a report that is taking longer than expected, look at the nature of the report; the right approach to speeding up query times depends on what the report does.

    What are the measures, dimensions, and slices involved? Are they covered by and routed to an MQT? Should they be? Is this case common enough to justify an MQT that covers it?
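For the MQT side, a sketch of the DB2 pattern may help. All table and column names here are illustrative assumptions; the point is that an MQT pre-aggregated at the grain a slow report asks for lets the DB2 optimizer route that query to the summary table instead of the fact table.

```sql
-- Hypothetical DB2 MQT: table and column names are illustrative.
CREATE TABLE mqt_sales_by_month_region AS (
  SELECT t.year, t.month, s.region,
         SUM(f.sales_amount) AS sales_amount,
         COUNT(*) AS row_cnt
  FROM sales_fact f
  JOIN time_dim  t ON f.time_id  = t.time_id
  JOIN store_dim s ON f.store_id = s.store_id
  GROUP BY t.year, t.month, s.region
) DATA INITIALLY DEFERRED REFRESH DEFERRED
  ENABLE QUERY OPTIMIZATION;

REFRESH TABLE mqt_sales_by_month_region;
```

With REFRESH DEFERRED you would fold the REFRESH TABLE step into the same post-Warehouse-refresh window discussed above, so the MQT is current before users arrive.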

    What sort of functions and calculations are involved? Some OLAP functions, such as top/bottom count and percent rank, are more resource intensive than others. Depending on the Cubing Services version, there may be some cube-server-specific tuning options available that can help. If there are calculations, could any calculated measures be rewritten (in the model) to be more efficient? Sometimes re-evaluating whether the underlying model and calculations are as efficient as they can be helps query performance a lot more than trying to speed up the layers on top.
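As an illustration of the resource-intensive case (names again hypothetical), a ranking report looks something like this in MDX:

```mdx
-- Hypothetical example: TopCount forces the engine to evaluate the
-- measure for every member of the set before it can rank and keep
-- the top 10, which is why these reports cost more than plain slices.
SELECT
  { [Measures].[Sales Amount] } ON COLUMNS,
  TopCount( [Product].[Product].Members, 10,
            [Measures].[Sales Amount] ) ON ROWS
FROM [Sales Cube]
```

If a report like this is run every morning, it is a good candidate for the warm-up set, since its cells will not be covered by a broad seed query.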

    Hope this helps.

    BTW, what version of Cubing Services, including fix pack level, are you on?