Machine sizing

IBM® Db2® Event Store is designed for high-performance insert and query workloads. To achieve this performance, you must deploy Db2 Event Store on a cluster of servers with adequate hardware.

The following tables outline four system sizes and the performance that you can expect from each configuration.

In a multi-node environment, the tables give the sizing for each node in the cluster, not for the entire cluster.

The guidance in the table is based on the following assumptions:

  • You are using a 3-node Enterprise Edition cluster.
  • All configurations require a 10 Gigabit Ethernet (GbE) connection.

    In a multi-node deployment, you must also have a 10 GbE connection between the nodes of the cluster.

  • IOPS requirements assume that 80% of the local storage I/O is write and the remaining 20% is read (see the sketch after this list).
  • Queries are classified as low, medium, or high complexity:
    • Low complexity queries can be answered using the index and typically target the last 7 days of data. For example, a low complexity query might be a point lookup or a range query where the relevant data is available in the index.
    • Medium and high complexity queries target a subset of the database, because the query predicate is used to filter the data. The size of the data set after the data has been filtered determines which configuration you can use:
      • Small: 250 GB
      • Medium: 400 GB
      • Large: 500 GB
      • Extra large: 750 GB
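To make these assumptions concrete, the following Python sketch works through the arithmetic they imply: the raw per-node ingest bandwidth generated by an insert rate of 40-byte rows, and the 80/20 write/read split applied to a configuration's IOPS rating. The sketch is purely illustrative; the constants and function names are not part of any Db2 Event Store API.

    # Illustrative arithmetic only. ROW_BYTES and the 80/20 split come from
    # the assumptions above; the function names are hypothetical.
    ROW_BYTES = 40          # each insert carries 40 bytes
    WRITE_FRACTION = 0.80   # 80% of local storage I/O is write
    READ_FRACTION = 0.20    # the remaining 20% is read

    def ingest_bandwidth_mb_per_s(inserts_per_second):
        """Raw per-node ingest bandwidth implied by an insert rate."""
        return inserts_per_second * ROW_BYTES / 1_000_000

    def iops_split(total_iops):
        """Split a configuration's IOPS rating into write and read IOPS."""
        return total_iops * WRITE_FRACTION, total_iops * READ_FRACTION

    # Small configuration: 250,000 inserts/s against a 20,000 IOPS disk
    print(ingest_bandwidth_mb_per_s(250_000))  # 10.0 (MB/s of raw row data)
    print(iops_split(20_000))                  # (16000.0, 4000.0)

For example, the Small configuration's 250,000 inserts per second amounts to about 10 MB/s of raw row data per node, and its 20,000 IOPS rating breaks down into roughly 16,000 write IOPS and 4,000 read IOPS under the stated split.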
Important: The sizing in Table 1 accounts for the minimum system requirements for IBM Watson™ Studio, which is installed with IBM Db2 Event Store Enterprise Edition. However, it does not account for other IBM Watson Studio workloads that you might run, which require additional resources. For more information on sizing for IBM Watson Studio, refer to System requirements for Watson Studio Local.
Table 1. Per node sizing considerations for IBM Db2 Event Store Standalone/IBM Watson® Studio deployments
Small: You require significant insert speeds but run low complexity queries.
  • Insert workload (per node): 250,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 1 high complexity, 3 medium complexity, 150 low complexity
  • Cores: 24
  • Memory: 128 GB
  • Local disk requirements: 250 GB at 20,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 150 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Medium: You require high insert speeds and run a mix of high complexity and low complexity queries.
  • Insert workload (per node): 500,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 2 high complexity, 6 medium complexity, 500 low complexity
  • Cores: 40
  • Memory: 256 GB
  • Local disk requirements: 500 GB at 40,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 300 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Large: You require very high insert speeds and need to run an analytical workload.
  • Insert workload (per node): 1,000,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 3 high complexity, 9 medium complexity, 750 low complexity
  • Cores: 56
  • Memory: 384 GB
  • Local disk requirements: 1000 GB at 80,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 600 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Extra large: You require very high insert speeds and need to run a complex analytical workload.
  • Insert workload (per node): 1,000,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 3 high complexity, 9 medium complexity, 1000 low complexity
  • Cores: 72
  • Memory: 512 GB
  • Local disk requirements: 1000 GB at 80,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 600 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.
Important: The sizing in Table 2, combined with the sizing for your IBM Cloud Pak for Data configuration, represents the maximum requirement for the dedicated worker nodes. For more information on sizing for IBM Cloud Pak for Data, refer to System requirements for stand-alone Cloud Pak for Data installations.
Table 2. Per node sizing considerations for IBM Db2 Event Store on IBM Cloud Pak for Data deployments
Small: You require significant insert speeds but run low complexity queries.
  • Insert workload (per node): 250,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 1 high complexity, 3 medium complexity, 150 low complexity
  • Cores: 8
  • Memory: 64 GB
  • Local disk requirements: 250 GB at 20,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 150 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Medium: You require high insert speeds and run a mix of high complexity and low complexity queries.
  • Insert workload (per node): 500,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 2 high complexity, 6 medium complexity, 500 low complexity
  • Cores: 24
  • Memory: 192 GB
  • Local disk requirements: 500 GB at 40,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 300 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Large: You require very high insert speeds and need to run an analytical workload.
  • Insert workload (per node): 1,000,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 3 high complexity, 9 medium complexity, 750 low complexity
  • Cores: 40
  • Memory: 320 GB
  • Local disk requirements: 1000 GB at 80,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 600 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.

Extra large: You require very high insert speeds and need to run a complex analytical workload.
  • Insert workload (per node): 1,000,000 inserts (at 40 bytes each) per second
  • Query workload (concurrent queries, per node): 3 high complexity, 9 medium complexity, 1000 low complexity
  • Cores: 56
  • Memory: 448 GB
  • Local disk requirements: 1000 GB at 80,000 IOPS
  • Shared disk requirements: 50 GB on NFS or hostPath (a mounted directory on a cluster file system, such as IBM Spectrum Scale), with a minimum throughput of 600 MB/s. Shared storage can be extended using Cloud Object Storage or on the same NFS/hostPath, based on your data retention policy.
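As a usage illustration only, the following sketch encodes the per-node workload limits transcribed from Table 2 and selects the smallest configuration that covers a planned workload. The helper and its name are hypothetical, not part of any IBM tooling.

    # Hypothetical selection helper. Each tuple transcribes the per-node
    # limits from Table 2: (size, inserts/s, high, medium, low queries).
    TABLE_2 = [
        ("Small",         250_000, 1, 3,  150),
        ("Medium",        500_000, 2, 6,  500),
        ("Large",       1_000_000, 3, 9,  750),
        ("Extra large", 1_000_000, 3, 9, 1000),
    ]

    def pick_size(inserts_per_s, high, medium, low):
        """Return the smallest size whose limits cover the planned workload."""
        for name, max_ins, max_high, max_med, max_low in TABLE_2:
            if (inserts_per_s <= max_ins and high <= max_high
                    and medium <= max_med and low <= max_low):
                return name
        return None  # workload exceeds the largest documented configuration

    print(pick_size(400_000, 2, 4, 200))  # Medium

In this example, 400,000 inserts per second rules out the Small configuration, and the query mix fits within the Medium limits, so Medium is the smallest configuration that covers the workload.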