Using Batch Updates

In direct writeback mode or after committing sandbox data, any edits you apply to cubes are written to the RAM of the IBM® TM1® server containing the cube. Each time a value in RAM is updated, a lock is placed on the server and any views stored in memory that are affected by the updated value are deleted, which degrades performance.

Batch updates improve the performance of input-intensive applications by holding changes to cube data and committing them to server memory in a single batch. A batch update minimizes the time the server is locked and reduces the impact on the views stored in memory.

When you initiate batch updates, a temporary storage structure associated with the selected server is created. All edits to cubes residing on that server are held in this structure until you save the batch update; at that point the edits are committed to the server in a single operation and the temporary storage structure is destroyed.
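The mechanism described above can be sketched as a toy model in Python. This is not the TM1 API; the classes, method names, and view names below are purely illustrative. The point is the contrast in lock traffic: direct writeback takes one server lock per edit, while a batch update holds edits in a temporary structure and commits them all under a single lock.

```python
import threading

class CubeServer:
    """Toy model of a server whose cube data lives in RAM (illustrative only)."""
    def __init__(self):
        self.lock = threading.Lock()   # taken for every committed write
        self.cells = {}                # cube data held in RAM
        self.cached_views = {"Q1 Report", "Budget Summary"}  # views stored in memory
        self.lock_acquisitions = 0

    def write(self, address, value):
        # Direct writeback: every edit locks the server and deletes
        # any cached views affected by the updated value.
        with self.lock:
            self.lock_acquisitions += 1
            self.cells[address] = value
            self.cached_views.clear()

class BatchUpdate:
    """Temporary storage structure that holds edits until the batch is saved."""
    def __init__(self, server):
        self.server = server
        self.pending = {}   # edits held here, not yet on the server

    def write(self, address, value):
        self.pending[address] = value   # no server lock taken yet

    def save(self):
        # All held edits are committed under a single lock acquisition,
        # then the temporary structure is emptied (destroyed).
        with self.server.lock:
            self.server.lock_acquisitions += 1
            self.server.cells.update(self.pending)
            self.server.cached_views.clear()
        self.pending = {}

server = CubeServer()
batch = BatchUpdate(server)
for month in ("Jan", "Feb", "Mar"):
    batch.write(("Sales", "Actual", month), 100)
batch.save()
print(server.lock_acquisitions)  # 1 lock for three edits
```

With direct writeback the same three edits would lock the server three times and invalidate cached views on each write; the batch pays that cost once.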

Attention: Edits held in batch updates are not written to the server's Tm1s.log file until you save the batch updates. Edits lost because of a disconnection from the server cannot be recovered, because no record of them exists in Tm1s.log. You can lose unsaved edits in any of these situations:
  • You do not save your batch updates before disconnecting from the server.
  • Your client loses its connection to the server. This includes instances when an administrator disconnects your client from a server without warning, or when your client is disconnected from a server that is configured to disconnect idle client connections.
  • The server shuts down before you save your batch updates.
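Because unsaved edits cannot be recovered, a defensive client pattern is to save the batch in a `finally` block, so the edits are committed even if the input loop fails partway through. The sketch below uses a hypothetical session class with illustrative method names; it is not the TM1 client API.

```python
class BatchSession:
    """Hypothetical client session (names are illustrative, not TM1 API)."""
    def __init__(self):
        self.pending = []   # edits held in the batch, not yet on the server
        self.saved = []     # edits committed to the server and its Tm1s.log

    def write(self, address, value):
        self.pending.append((address, value))

    def save_batch(self):
        # Once saved, edits are logged on the server and survive a disconnect.
        self.saved.extend(self.pending)
        self.pending.clear()

    def disconnect(self):
        # Any edits still pending at disconnect are lost, as described above.
        self.pending.clear()

session = BatchSession()
try:
    session.write(("Sales", "Jan"), 100)
    session.write(("Sales", "Feb"), 200)
finally:
    session.save_batch()   # always save before disconnecting, even on error
    session.disconnect()
print(len(session.saved))  # 2
```

This guards only against the first cause above (forgetting to save before disconnecting); a forced disconnection or server shutdown can still discard a batch that was never saved.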