To customize the workload in IBM® Maximo® Manage, modify the `podTemplates` configuration for the pod that is handled by the corresponding custom resource object. Maximo Manage supports the following `podTemplates` fields:
**replicas and resources**

```yaml
apiVersion: apps.mas.ibm.com/v1
kind: ManageWorkspace
metadata:
  name: inst1-masdev
  namespace: mas-inst1-manage
  labels:
    mas.ibm.com/applicationId: manage
    mas.ibm.com/instanceId: inst1
    mas.ibm.com/workspaceId: masdev
spec:
  podTemplates:
    - name: monitoragent
      replicas: 2
      containers:
        - name: monitoragent
          resources:
            limits:
              cpu: 0.25
              memory: 350Mi
            requests:
              cpu: 0.1
              memory: 256Mi
```
**affinity**

```yaml
apiVersion: apps.mas.ibm.com/v1
kind: ManageWorkspace
metadata:
  name: inst1-masdev
  namespace: mas-inst1-manage
  labels:
    mas.ibm.com/applicationId: manage
    mas.ibm.com/instanceId: inst1
spec:
  podTemplates:
    - name: monitoragent
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: runtimeType
                    operator: In
                    values:
                      - frontend
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: security
                    operator: In
                    values:
                      - S1
              topologyKey: topology.kubernetes.io/zone
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: security
                      operator: In
                      values:
                        - S2
                topologyKey: topology.kubernetes.io/zone
```
**tolerations**

```yaml
spec:
  podTemplates:
    - name: monitoragent
      tolerations:
        - key: "key1"
          operator: "Exists"
          effect: "NoSchedule"
```
**securityContext (at both the pod level and the init or container level) and nodeSelector**

```yaml
apiVersion: apps.mas.ibm.com/v1
kind: ManageWorkspace
metadata:
  name: tfin-masdev
  namespace: mas-tfin-manage
  labels:
    mas.ibm.com/applicationId: manage
    mas.ibm.com/instanceId: tfin
    mas.ibm.com/workspaceId: masdev
spec:
  podTemplates:
    - name: manage-maxinst
      nodeSelector:
        reservedFor: MAS
      securityContext:
        fsGroup: 1000870000
        seLinuxOptions:
          level: 's0:c30,c0'
      containers:
        - name: manage-maxinst-maxinst
          securityContext:
            seLinuxOptions:
              level: 's0:c30,c0'
            seccompProfile:
              type: RuntimeDefault
```
**hostAliases and hostnames**

```yaml
spec:
  podTemplates:
    - name: manage-maxinst
      hostAliases:
        - ip: "10.10.1.1"
          hostnames:
            - "ldap1.com"
            - "ldap2.com"
```
**topologySpreadConstraints**

```yaml
apiVersion: apps.mas.ibm.com/v1
kind: ManageServerBundle
metadata:
  name: all
  namespace: mas-tfin-manage
  labels:
    app.kubernetes.io/instance: tfin
    app.kubernetes.io/managed-by: ibm-mas-manage
    app.kubernetes.io/name: ibm-mas-manage
    mas.ibm.com/applicationId: manage
    mas.ibm.com/instanceId: tfin
    mas.ibm.com/workspaceId: masdev
spec:
  podTemplates:
    - name: all
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              mas.ibm.com/appType: serverBundle
              mas.ibm.com/appTypeName: all
              mas.ibm.com/applicationId: manage
              mas.ibm.com/instanceId: tfin
              mas.ibm.com/workspaceId: masdev
          maxSkew: 2
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
```
Remember: For the build-config pod, only the resources and nodeSelector podTemplates fields are applicable.
Note: For the ManageServerBundle pod, the securityContext, affinity, nodeSelector, hostAliases, hostname, and topologySpreadConstraints fields were previously handled only through the passThroughDeployment spec. Starting in Maximo Application Suite 9.0, podTemplates is the new approach and takes precedence when both podTemplates and the passThroughDeployment spec are applied.
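Several of these fields can be set together in a single podTemplates entry. The following sketch combines replicas, nodeSelector, tolerations, and resources for the monitoragent pod, reusing only values that appear in the examples above; whether a particular pod supports every field combination should be verified for your deployment:

```yaml
spec:
  podTemplates:
    - name: monitoragent
      replicas: 2
      nodeSelector:
        reservedFor: MAS
      tolerations:
        - key: "key1"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: monitoragent
          resources:
            requests:
              cpu: 0.1
              memory: 256Mi
            limits:
              cpu: 0.25
              memory: 350Mi
```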
ManageApp custom resource object
The following deployment pods are handled by the ManageApp custom resource
object:
Table 1. Deployment pods that are handled by the ManageApp custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| entitymgr-primary-entity | Container | manager | 1 | requests: cpu: 0.01, memory: 64Mi; limits: cpu: 0.2, memory: 512Mi |
| entitymgr-appstatus | Container | manager | 1 | requests: cpu: 0.2, memory: 300Mi; limits: cpu: 0.8, memory: 1024Mi |
| entitymgr-bdi | Container | manager | 1 | requests: cpu: 0.03, memory: 128Mi; limits: cpu: 0.8, memory: 1024Mi |
| entitymgr-ws | Container | manager | 1 | requests: cpu: 0.2, memory: 500Mi; limits: cpu: 0.8, memory: 2Gi |
| entitymgr-acc | Container | manager | 1 | requests: cpu: 30m, memory: 128Mi; limits: cpu: 800m, memory: 1Gi |
| usersyncagent | Container | manage-usersyncagent | 1 | requests: cpu: 0.03, memory: 128Mi; limits: cpu: 0.25, memory: 256Mi |
| groupsyncagent | Container | manage-groupsyncagent | 1 | requests: cpu: 0.03, memory: 128Mi; limits: cpu: 0.25, memory: 256Mi |
| ibm-mas-imagestitching-operator | Container | imagestitching | 1 | requests: cpu: 0.2, memory: 300Mi, ephemeral-storage: 2Mi; limits: cpu: 0.5, memory: 1024Mi, ephemeral-storage: 2Gi |
| healthext-entitymgr-ws | Container | healthext | 1 | requests: cpu: 0.1, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 0.5, memory: 512Mi, ephemeral-storage: 2Gi |
| ibm-mas-slackproxy-operator | Container | slackproxy | 1 | requests: cpu: 0.5, memory: 300Mi, ephemeral-storage: 2Mi; limits: cpu: 1, memory: 1Gi, ephemeral-storage: 2Gi |
Note: For all the deployments in Table 1, more than one replica is not advisable.
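To change any of the defaults in Table 1, add a podTemplates entry for the pod to the ManageApp custom resource. The following sketch raises the entitymgr-ws CPU limit; the metadata names are illustrative, and the exact ManageApp spec layout should be confirmed against your installed version:

```yaml
apiVersion: apps.mas.ibm.com/v1
kind: ManageApp
metadata:
  name: inst1              # illustrative instance name
  namespace: mas-inst1-manage
spec:
  podTemplates:
    - name: entitymgr-ws
      containers:
        - name: manager
          resources:
            requests:
              cpu: 0.2
              memory: 500Mi
            limits:
              cpu: 1       # raised from the default of 0.8
              memory: 2Gi
```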
ManageWorkspace custom resource object
The following deployment pods are handled by the ManageWorkspace custom
resource object:
Table 2. Deployment pods that are handled by the ManageWorkspace custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| monitoragent | Container | monitoragent | 1 | requests: cpu: 0.1, memory: 256Mi; limits: cpu: 0.25, memory: 350Mi |
| manage-maxinst | Container | manage-maxinst-maxinst | 1 | requests: cpu: 0.5, memory: 500Mi; limits: cpu: 2, memory: 4Gi |
| Server bundle name, which is dynamic when the ManageWorkspace is activated; for example, mea | Container | Server bundle name, which is dynamic when the ManageWorkspace is activated; for example, mea | 1 | requests: cpu: 0.5, memory: 2Gi; limits: cpu: 6, memory: 10Gi |
| Server bundle name, which is dynamic when the ManageWorkspace is activated; for example, mea | Container | monitoragent | 1 | requests: cpu: 0.1, memory: 256Mi; limits: cpu: 1, memory: 512Mi |
| healthext-model-engine | Container | healthext-model-engine | 1 | requests: cpu: 0.1, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 1, memory: 1Gi, ephemeral-storage: 2Gi |
| imagestitching | Container | image-stitching | 1 | requests: cpu: 1, memory: 4Gi; limits: cpu: 3, memory: 16Gi |
| slackproxy | Container | slack-proxy | 1 | requests: cpu: 1, memory: 4Gi; limits: cpu: 3, memory: 16Gi |
| Dynamic name that matches spec.bdiConfiguration.name in the ManageWorkspace custom resource | Container | bdiservice | 1 | requests: cpu: 1, memory: 1Gi; limits: cpu: 3, memory: 16Gi |
| build-config | Container | adminbuild | 1 | requests: cpu: 1, memory: 256Mi, ephemeral-storage: 30Gi; limits: cpu: 2, memory: 512Mi, ephemeral-storage: 100Gi |
| build-config | Container | bundlebuild | 1 | requests: cpu: 1, memory: 256Mi, ephemeral-storage: 30Gi; limits: cpu: 2, memory: 512Mi, ephemeral-storage: 100Gi |
Note: For all the deployments in Table 2, more than one replica is not advisable.
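As noted earlier in this topic, the build-config pod accepts only the resources and nodeSelector podTemplates fields. The following sketch shows such an override in the ManageWorkspace custom resource, reusing the Table 2 defaults except for a larger memory limit; the nodeSelector label is an assumption carried over from the earlier securityContext example:

```yaml
spec:
  podTemplates:
    - name: build-config
      nodeSelector:
        reservedFor: MAS
      containers:
        - name: adminbuild
          resources:
            requests:
              cpu: 1
              memory: 256Mi
              ephemeral-storage: 30Gi
            limits:
              cpu: 2
              memory: 1Gi   # raised from the default of 512Mi
              ephemeral-storage: 100Gi
```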
BDI custom resource object
The following deployment pods are handled by the BDI custom resource
object:
Table 3. Deployment pods that are handled by the BDI custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| Dynamic name that matches spec.bdiConfiguration.name in the ManageWorkspace custom resource | Container | bdiservice | 1 | requests: cpu: 1, memory: 1Gi; limits: cpu: 3, memory: 16Gi |
Imagestitching custom resource object
The following deployment pods are handled by the Imagestitching custom
resource object:
Table 4. Deployment pods that are handled by the Imagestitching custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| imagestitching | Container | image-stitching | 1 | requests: cpu: 1, memory: 4Gi; limits: cpu: 3, memory: 16Gi |
SlackProxy custom resource object
The following deployment pods are handled by the SlackProxy custom resource
object:
Table 5. Deployment pods that are handled by the SlackProxy custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| slackproxy | Container | slack-proxy | 1 | requests: cpu: 1, memory: 4Gi; limits: cpu: 3, memory: 16Gi |
HealthExtWorkspace custom resource object
The following deployment pods are handled by the HealthExtWorkspace custom
resource object:
Table 6. Deployment pods that are handled by the HealthExtWorkspace custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| healthext-model-engine | Container | healthext-model-engine | 1 | requests: cpu: 0.1, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 1, memory: 1Gi, ephemeral-storage: 2Gi |
ManageAccelerators custom resource object
The following deployment pods are handled by the ManageAccelerators custom
resource object:
Table 7. Deployment pods that are handled by the ManageAccelerators custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| healthext-entitymgr-acc | Container | healthext-acc | 1 | requests: cpu: 100m, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 500m, memory: 512Mi, ephemeral-storage: 2Gi |
HealthExtAccelerators custom resource object
The following deployment pods are handled by the HealthExtAccelerators
custom resource object:
Table 8. Deployment pods that are handled by the HealthExtAccelerators custom resource object

| Pod name | Container type | Container name | Default replicas | Default resources |
| --- | --- | --- | --- | --- |
| healthext-acc-job | Container | healthext-acc-job | 1 | requests: cpu: 100m, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 1000m, memory: 1Gi, ephemeral-storage: 2Gi |
| uninstall-health-acc-job | Container | uninstall-health-acc-job | 1 | requests: cpu: 100m, memory: 128Mi, ephemeral-storage: 2Mi; limits: cpu: 1000m, memory: 1Gi, ephemeral-storage: 2Gi |