Detailed System Requirements
The IBM® Maximo® Application Suite 8.7.0 system requirements consist of environment requirements and software dependencies, capacity planning information, and software product compatibility reports.
- Download and open the linked spreadsheet in Excel.
- Select or enter values for the yellow fields only to match your planned application deployment.
- The calculator provides the estimated total system requirements in VPCs (virtual processor cores) and memory (GB) for your configuration in the Resulting Complete Environments Requirements section of the Output table.
- IBM Maximo Application Suite Software Product Compatibility Report
- IBM Maximo Application Suite - Manage Application Compatibility Report
| |Developer|Small|Medium|
|---|---|---|---|
|Max number of simultaneously connected devices|200|5,000|50,000|
|Max data rate (totaled over all connected devices)|0.4 kB/s|10 kB/s|100 kB/s|
|Max msg rate (totaled over all connected devices)|4 msg/s|100 msg/s|1,000 msg/s|
|Max Db2® insert rate|4 inserts/s|100 inserts/s|1,000 inserts/s|
The data rate is determined by the following factors:
- Number of devices
- Number of data points sent in each message
- Number of messages per second
For example, each of the following workloads produces the same data rate of 1,000 data points per minute:
- 1,000 devices that each send 1 message per minute, where each message contains 1 data point.
- 1 device that sends 1,000 messages per minute, where each message contains 1 data point.
- 500 devices that each send 2 messages per minute, where each message contains 1 data point.
- 1 device that sends 1 message per minute, where each message contains 1,000 data points.
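The equivalence of these workloads can be checked with a few lines of arithmetic. The helper below is a hypothetical illustration, not part of the product:

```python
# Hypothetical helper: total data points generated per minute by a fleet of
# devices. Used only to show that the four example workloads are equivalent.
def datapoints_per_minute(devices, messages_per_minute, datapoints_per_message):
    return devices * messages_per_minute * datapoints_per_message

examples = [
    (1000, 1, 1),   # 1,000 devices x 1 msg/min x 1 data point
    (1, 1000, 1),   # 1 device x 1,000 msg/min x 1 data point
    (500, 2, 1),    # 500 devices x 2 msg/min x 1 data point
    (1, 1, 1000),   # 1 device x 1 msg/min x 1,000 data points
]
rates = [datapoints_per_minute(*e) for e in examples]
print(rates)  # each example works out to 1,000 data points per minute
```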
Persistent storage requirements
To view a list of available storage classes in your cluster, run the following OpenShift CLI command:
oc get storageclasses
Choose a storage class and size that are appropriate for your workload.
The storage class can be used to dynamically provision a persistent volume with access mode RWO (ReadWriteOnce).
No specific filesystem permissions are required.
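As an illustration, a claim for such a dynamically provisioned RWO volume might look like the following sketch. The claim name, size, and `storageClassName` are placeholders, not values mandated by the product; substitute a class reported by `oc get storageclasses`:

```yaml
# Illustrative PVC only; name, storage size, and storageClassName are
# placeholders to be replaced with values appropriate for your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iot-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```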
The following storage providers have been tested with the IoT Tool:
- OpenShift Container Storage
- IBM Cloud Block Storage
|Component|Persistent storage|
|---|---|
|Maximo Application Suite core|RLKS: 5 GB|
|IoT - Developer|Message Gateway: 64 GB|
|IoT - Small|Message Gateway: 64 GB|
|IoT - Medium|Message Gateway: 128 GB|
|Kafka|ZooKeeper: 20 GB; Kafka: 50 GB|
|Cloud Pak for Data (Db2)|Cloud Pak for Data: 20 GB; Db2 Warehouse: (100 GB * 2) + Db2 user storage based on retention policy|
User storage requirements for Db2® are primarily determined by two factors:
- Incoming event data rate
- Retention policy
A typical allocation might be 1 MB of storage per day for every 300 bytes/minute of incoming data (3 IoPoints).
The following table is based on the workload size estimates.
|Workload|Data rate|IoPoints|Db2® storage requirements|
|---|---|---|---|
|Developer|400 bytes/sec|200|80 MB storage per day|
|Small|10,000 bytes/sec|5,000|2 GB storage per day|
|Medium|100,000 bytes/sec|50,000|20 GB storage per day|
For example, to store one month (30 days) of data at the benchmark data rates, the user storage requirements are as follows:
- Developer: 2.4 GB (30 * 0.08 GB)
- Small: 60 GB (30 * 2 GB)
- Medium: 0.6 TB (30 * 20 GB)
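The sizing rule (1 MB of storage per day for every 300 bytes/minute of incoming data, multiplied by the retention period) can be sketched as a small calculator. The function name is illustrative, not part of the product:

```python
# Sketch of the Db2 user storage rule of thumb: 1 MB per day for every
# 300 bytes/minute of incoming event data, times the retention period in days.
def db2_user_storage_gb(bytes_per_sec, retention_days):
    bytes_per_min = bytes_per_sec * 60
    mb_per_day = bytes_per_min / 300           # rule of thumb from the text
    return mb_per_day * retention_days / 1000  # convert MB to GB

# Benchmark data rates for the three workload sizes, with 30-day retention
for name, rate in [("Developer", 400), ("Small", 10_000), ("Medium", 100_000)]:
    print(f"{name}: {db2_user_storage_gb(rate, 30):g} GB for 30 days")
```

This reproduces the figures above: 2.4 GB for Developer, 60 GB for Small, and 600 GB (0.6 TB) for Medium.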
| |Small|Medium|
|---|---|---|
|Assets (subset of assets in Health)|600|6,000|
|Number of sensors per asset|10|30|
|Failure history for active assets|10,500|75,000|

Number of models:
- Time to failure: 2 (1 for each failure mode)
- Failure prediction: 2 (1 for each failure mode)
Persistent storage requirements
|Application|Persistent storage|
|---|---|
|Maximo Health|IBM Db2 Warehouse (dedicated instance): 120 GB|
|Maximo Predict*|20 GB|
- At least one 64-bit x86-compatible processor.
- 60 GB of memory.
- NVIDIA Pascal, Volta, or Turing GPU architecture. Other architectures, such as the newer NVIDIA Ampere architecture, are not supported.
- At least 16 GB of GPU memory per GPU.
- A PVC storage class that supports ReadWriteMany access mode.
- NFS-based, file-storage-based, or IBM Cloud File Storage implementations that offer ReadWriteMany access mode are supported.
- Storage provider implementations that offer only ReadWriteOnce access mode, such as most block storage implementations, are not supported.
- A PVC storage size of at least 40 GB for data sets and models.
- Increased numbers for data sets and models lead to increased storage requirements.
- Most storage providers allow storage allocations to be increased.
- If you cannot increase your allocation, use the following rule to determine a rough estimate of your storage utilization:
(100 KB per image) x (number of images) x (number of data sets) + (1 GB) x (number of models)
Note: Images and video clips vary greatly in resolution and compression ratios. This rule might not be appropriate for your workload.
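The estimation rule above can be sketched as a small calculator. The function and the example figures are illustrative assumptions, not product values, and the 100 KB-per-image average is subject to the caveat in the note:

```python
# Rough Visual Inspection storage estimate from the rule above:
# (100 KB per image) x images x data sets + 1 GB x models.
def storage_estimate_gb(images_per_dataset, num_datasets, num_models,
                        kb_per_image=100):
    image_storage_gb = kb_per_image * images_per_dataset * num_datasets / 1_000_000
    model_storage_gb = 1 * num_models  # 1 GB per model
    return image_storage_gb + model_storage_gb

# e.g. 5 data sets of 2,000 images each, plus 10 trained models
print(storage_estimate_gb(2000, 5, 10))  # -> 11.0 (GB)
```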
- At least 10,000 input/output operations per second (IOPS) of random read/write performance for data set uploads and downloads.
- Less storage bandwidth means that certain actions are visibly slower.
- As more storage IOPS are allocated, application performance scales up to approximately 50,000 random read/write IOPS.
- Configuration of IOPS and storage quotas varies from provider to provider. See how to manage IBM Cloud File Storage IOPS or refer to your storage provider's documentation.
24 February 2022