Ephemeral storage issues
Ephemeral storage is any storage that is not guaranteed to persist, typically an emptyDir volume or a container's writable layer. Use the following sections to resolve issues that are related to ephemeral-storage.
Errors and issues associated with ephemeral-storage
Use the following sections to resolve the issues related to ephemeral-storage for cloud deployments.
Topology pods are evicted for exceeding the ephemeral-storage limits
In cloud deployments, topology pods are evicted for exceeding the ephemeral-storage limits.
Problem
Topology pods can be evicted for exceeding their ephemeral-storage limits for reasons such as:
- Uploading a large file to an observer or the observer-service.
- Excessive pod log output, for example when the log level is set to DEBUG.
- Observer jobs configured with the write_file_observer_file=true option.
The following is an example error message from an evicted pod:
Status:  Failed
Reason:  Evicted
Message: Pod ephemeral local storage usage exceeds the total limit of containers 750Mi.
You can increase the ephemeral-storage limits for any affected pods.
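Before tuning the limits, you can confirm which pods were evicted and why. A minimal sketch, assuming the oc CLI is logged in to the cluster; replace `<namespace>` and `<pod-name>` with your own values:

```shell
# List failed pods in the namespace; evicted pods report the status "Evicted"
oc get pods -n <namespace> --field-selector=status.phase=Failed

# Show the eviction status, reason, and message for a specific pod
oc describe pod <pod-name> -n <namespace> | grep -E "Status:|Reason:|Message:"
```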
Resolution
The ephemeral-storage limits can be tuned to allow pods to run with extra storage, in one of two ways:
- Using the helmValuesASM field in the NOI or NOIHybrid resource.
- Tuning the ephemeral-storage limits for the topology service by using a custom sizing ConfigMap, if you already have one installed.
Using the helmValuesASM field in the NOI or NOIHybrid resource
Add the helmValuesASM field to the NOI resource or the NOIHybrid resource.
- To edit the NOI resource, use the following command:
oc edit noi <noi-instance> --namespace <namespace>
Where <noi-instance> is the name of your Netcool® Operations Insight® instance and <namespace> is the project (namespace) where Netcool Operations Insight is installed.
- To edit the NOIHybrid resource, use the following command:
oc edit noihybrid <hybrid-instance> --namespace <namespace>
Where <hybrid-instance> is the name of your Netcool Operations Insight hybrid deployment instance and <namespace> is the project (namespace) where the hybrid deployment instance is installed.
Properties with the topology prefix apply these limits to the topology service deployment. The following is an example for the topology service deployment:
spec:
  helmValuesASM:
    topology.resources.limits.ephemeral-storage: 800Mi
    topology.resources.requests.ephemeral-storage: 500Mi
Refer to the following table for the properties that you can add in the spec configuration of the NOI or NOIHybrid resource.
| Service Name | Properties |
| --- | --- |
| AAI ONAP Service | aaionap.resources.limits.ephemeral-storage, aaionap.resources.requests.ephemeral-storage |
| ALM Observer | alm-observer.resources.limits.ephemeral-storage, alm-observer.resources.requests.ephemeral-storage |
| Ansible AWX Observer | ansibleawx-observer.resources.limits.ephemeral-storage, ansibleawx-observer.resources.requests.ephemeral-storage |
| AppDisco Observer | appdisco-observer.resources.limits.ephemeral-storage, appdisco-observer.resources.requests.ephemeral-storage |
| AppDynamics Observer | appdynamics-observer.resources.limits.ephemeral-storage, appdynamics-observer.resources.requests.ephemeral-storage |
| Amazon Web Services (AWS) Observer | aws-observer.resources.limits.ephemeral-storage, aws-observer.resources.requests.ephemeral-storage |
| Microsoft Azure Observer | azure-observer.resources.limits.ephemeral-storage, azure-observer.resources.requests.ephemeral-storage |
| BCF Observer | bigcloudfabric-observer.resources.limits.ephemeral-storage, bigcloudfabric-observer.resources.requests.ephemeral-storage |
| BigFix Inventory Observer | bigfixinventory-observer.resources.limits.ephemeral-storage, bigfixinventory-observer.resources.requests.ephemeral-storage |
| Ciena Blue Planet Observer | cienablueplanet-observer.resources.limits.ephemeral-storage, cienablueplanet-observer.resources.requests.ephemeral-storage |
| Cisco ACI Observer | ciscoaci-observer.resources.limits.ephemeral-storage, ciscoaci-observer.resources.requests.ephemeral-storage |
| Juniper Contrail Observer | contrail-observer.resources.limits.ephemeral-storage, contrail-observer.resources.requests.ephemeral-storage |
| Datadog Observer | datadog-observer.resources.limits.ephemeral-storage, datadog-observer.resources.requests.ephemeral-storage |
| DNS Observer | dns-observer.resources.limits.ephemeral-storage, dns-observer.resources.requests.ephemeral-storage |
| Docker Observer | docker-observer.resources.limits.ephemeral-storage, docker-observer.resources.requests.ephemeral-storage |
| Dynatrace Observer | dynatrace-observer.resources.limits.ephemeral-storage, dynatrace-observer.resources.requests.ephemeral-storage |
| File Observer | file-observer.resources.limits.ephemeral-storage, file-observer.resources.requests.ephemeral-storage |
| GitLab Observer | gitlab-observer.resources.limits.ephemeral-storage, gitlab-observer.resources.requests.ephemeral-storage |
| Google Cloud Observer | googlecloud-observer.resources.limits.ephemeral-storage, googlecloud-observer.resources.requests.ephemeral-storage |
| HP NFVD Observer | hpnfvd-observer.resources.limits.ephemeral-storage, hpnfvd-observer.resources.requests.ephemeral-storage |
| IBM Cloud Observer | ibmcloud-observer.resources.limits.ephemeral-storage, ibmcloud-observer.resources.requests.ephemeral-storage |
| Inventory Service | inventory.resources.limits.ephemeral-storage, inventory.resources.requests.ephemeral-storage |
| ITNM Observer | itnm-observer.resources.limits.ephemeral-storage, itnm-observer.resources.requests.ephemeral-storage |
| Jenkins Observer | jenkins-observer.resources.limits.ephemeral-storage, jenkins-observer.resources.requests.ephemeral-storage |
| Juniper Networks CSO Observer | junipercso-observer.resources.limits.ephemeral-storage, junipercso-observer.resources.requests.ephemeral-storage |
| Kubernetes Observer | kubernetes-observer.resources.limits.ephemeral-storage, kubernetes-observer.resources.requests.ephemeral-storage |
| Layout Service | layout.resources.limits.ephemeral-storage, layout.resources.requests.ephemeral-storage |
| Merge Service | merge.resources.limits.ephemeral-storage, merge.resources.requests.ephemeral-storage |
| NetDisco Observer | netdisco-observer.resources.limits.ephemeral-storage, netdisco-observer.resources.requests.ephemeral-storage |
| New Relic Observer | newrelic-observer.resources.limits.ephemeral-storage, newrelic-observer.resources.requests.ephemeral-storage |
| NOI Integration Gateway | noi-gateway.resources.limits.ephemeral-storage, noi-gateway.resources.requests.ephemeral-storage |
| NOI Integration Probe | noi-probe.resources.limits.ephemeral-storage, noi-probe.resources.requests.ephemeral-storage |
| Observer Service | observer-service.resources.limits.ephemeral-storage, observer-service.resources.requests.ephemeral-storage |
| OpenStack Observer | openstack-observer.resources.limits.ephemeral-storage, openstack-observer.resources.requests.ephemeral-storage |
| Rancher Observer | rancher-observer.resources.limits.ephemeral-storage, rancher-observer.resources.requests.ephemeral-storage |
| REST Observer | rest-observer.resources.limits.ephemeral-storage, rest-observer.resources.requests.ephemeral-storage |
| SDC ONAP Observer | sdconap-observer.resources.limits.ephemeral-storage, sdconap-observer.resources.requests.ephemeral-storage |
| ServiceNow Observer | servicenow-observer.resources.limits.ephemeral-storage, servicenow-observer.resources.requests.ephemeral-storage |
| SevOne Observer | sevone-observer.resources.limits.ephemeral-storage, sevone-observer.resources.requests.ephemeral-storage |
| Status Service | status.resources.limits.ephemeral-storage, status.resources.requests.ephemeral-storage |
| TADDM Observer | taddm-observer.resources.limits.ephemeral-storage, taddm-observer.resources.requests.ephemeral-storage |
| Topology Service | topology.resources.limits.ephemeral-storage, topology.resources.requests.ephemeral-storage |
| Topology UI API Service | ui-api.resources.limits.ephemeral-storage, ui-api.resources.requests.ephemeral-storage |
| Viptela Observer | viptela-observer.resources.limits.ephemeral-storage, viptela-observer.resources.requests.ephemeral-storage |
| VMware vCenter Observer | vmvcenter-observer.resources.limits.ephemeral-storage, vmvcenter-observer.resources.requests.ephemeral-storage |
| VMware NSX Observer | vmwarensx-observer.resources.limits.ephemeral-storage, vmwarensx-observer.resources.requests.ephemeral-storage |
| Zabbix Observer | zabbix-observer.resources.limits.ephemeral-storage, zabbix-observer.resources.requests.ephemeral-storage |
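As an illustration, properties for several services can be raised in the same helmValuesASM block. The values below are hypothetical examples, not recommendations; substitute the properties and sizes for the services you need to tune:

```yaml
spec:
  helmValuesASM:
    file-observer.resources.limits.ephemeral-storage: 2000Mi
    file-observer.resources.requests.ephemeral-storage: 500Mi
    observer-service.resources.limits.ephemeral-storage: 2000Mi
    observer-service.resources.requests.ephemeral-storage: 500Mi
```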
Tuning ephemeral-storage limits for the topology service using a custom sizing ConfigMap
If you are already using a custom sizing ConfigMap to tune the resource allocation for the topology pods, then you need to tune the ephemeral-storage settings in the ConfigMap, since the ConfigMap settings take precedence. Review the default settings in the following example and compare them to your own ConfigMap. The defaults shown in the following example are for a production-sized deployment.
apiVersion: v1
kind: ConfigMap
metadata:
name: "noi-topology-sizing"
data:
asm: |
aaionap:
specs:
replicas: 2
containers:
aaionap:
resources:
limits:
cpu: "2.0"
ephemeral-storage: 500Mi
memory: 2500Mi
requests:
cpu: "1.0"
ephemeral-storage: 200Mi
memory: 700Mi
env:
- name: JVM_ARGS
value: "-Xms512M -Xmx2G"
alm-observer:
specs:
replicas: 1
containers:
alm-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
ansibleawx-observer:
specs:
replicas: 1
containers:
ansibleawx-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
appdisco-observer:
specs:
replicas: 1
containers:
appdisco-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
appdynamics-observer:
specs:
replicas: 1
containers:
appdynamics-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
aws-observer:
specs:
replicas: 1
containers:
aws-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
azure-observer:
specs:
replicas: 1
containers:
azure-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
bigcloudfabric-observer:
specs:
replicas: 1
containers:
bigcloudfabric-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
bigfixinventory-observer:
specs:
replicas: 1
containers:
bigfixinventory-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
cienablueplanet-observer:
specs:
replicas: 1
containers:
cienablueplanet-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
ciscoaci-observer:
specs:
replicas: 1
containers:
ciscoaci-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
contrail-observer:
specs:
replicas: 1
containers:
contrail-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
datadog-observer:
specs:
replicas: 1
containers:
datadog-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
dns-observer:
specs:
replicas: 1
containers:
dns-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
docker-observer:
specs:
replicas: 1
containers:
docker-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
dynatrace-observer:
specs:
replicas: 1
containers:
dynatrace-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
file-observer:
specs:
replicas: 1
containers:
file-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 1500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
gitlab-observer:
specs:
replicas: 1
containers:
gitlab-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
googlecloud-observer:
specs:
replicas: 1
containers:
googlecloud-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
hpnfvd-observer:
specs:
replicas: 1
containers:
hpnfvd-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 1500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
ibmcloud-observer:
specs:
replicas: 1
containers:
ibmcloud-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
inventory:
specs:
replicas: 2
containers:
inventory:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 800Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 600Mi
env:
- name: JVM_ARGS
value: "-Xms512M -Xmx512M"
itnm-observer:
specs:
replicas: 1
containers:
itnm-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 1200Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 950Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx800M"
jenkins-observer:
specs:
replicas: 1
containers:
jenkins-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
junipercso-observer:
specs:
replicas: 1
containers:
junipercso-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
kubernetes-observer:
specs:
replicas: 1
containers:
kubernetes-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
layout:
specs:
replicas: 2
containers:
layout:
resources:
limits:
cpu: "1.5"
ephemeral-storage: 500Mi
memory: 2500Mi
requests:
cpu: "1.0"
ephemeral-storage: 200Mi
memory: 700Mi
env:
- name: JVM_ARGS
value: "-Xms512M -Xmx2G"
merge:
specs:
replicas: 1
containers:
merge:
resources:
limits:
cpu: 4500m
ephemeral-storage: 500Mi
memory: 2500Mi
requests:
cpu: 2000m
ephemeral-storage: 200Mi
memory: 2000Mi
env:
- name: JVM_ARGS
value: "-Xms1G -Xmx2G"
netdisco-observer:
specs:
replicas: 1
containers:
netdisco-observer:
resources:
limits:
cpu: "2.0"
ephemeral-storage: 500Mi
memory: 4500Mi
requests:
cpu: "1.0"
ephemeral-storage: 200Mi
memory: 1500Mi
env:
- name: JVM_ARGS
value: "-Xms1G -Xmx4G"
newrelic-observer:
specs:
replicas: 1
containers:
newrelic-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
noi-gateway:
specs:
replicas: 1
containers:
nco-g-xml:
resources:
limits:
cpu: 1000m
ephemeral-storage: 500Mi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 100Mi
memory: 1800Mi
noi-probe:
specs:
replicas: 1
containers:
nco-p-messagebus:
resources:
limits:
cpu: 500m
ephemeral-storage: 500Mi
memory: 512Mi
requests:
cpu: 250m
ephemeral-storage: 100Mi
memory: 256Mi
observer-service:
specs:
replicas: 1
containers:
observer-service:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 1500Mi
memory: 1500Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx1G"
openstack-observer:
specs:
replicas: 1
containers:
openstack-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
rancher-observer:
specs:
replicas: 1
containers:
rancher-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
rest-observer:
specs:
replicas: 1
containers:
rest-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
sdconap-observer:
specs:
replicas: 1
containers:
sdconap-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
servicenow-observer:
specs:
replicas: 1
containers:
servicenow-observer:
resources:
limits:
cpu: "2.0"
ephemeral-storage: 500Mi
memory: 4500Mi
requests:
cpu: "1.0"
ephemeral-storage: 200Mi
memory: 1500Mi
env:
- name: JVM_ARGS
value: "-Xms1G -Xmx4G"
sevone-observer:
specs:
replicas: 1
containers:
sevone-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
status:
specs:
replicas: 1
containers:
status:
resources:
limits:
cpu: "3.0"
ephemeral-storage: 500Mi
memory: 2G
requests:
cpu: "2.0"
ephemeral-storage: 200Mi
memory: 1G
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx1200M"
taddm-observer:
specs:
replicas: 1
containers:
taddm-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
topology:
specs:
replicas: 2
containers:
topology:
resources:
limits:
cpu: "3.0"
ephemeral-storage: 500Mi
memory: 3600Mi
requests:
cpu: "2.0"
ephemeral-storage: 200Mi
memory: 1200Mi
env:
- name: JVM_ARGS
value: "-Dcom.ibm.jsse2.overrideDefaultTLS=true -Xms1G -Xmx3G"
ui-api:
specs:
replicas: 2
containers:
ui-api:
resources:
limits:
cpu: "0.8"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.4"
ephemeral-storage: 200Mi
memory: 350Mi
viptela-observer:
specs:
replicas: 1
containers:
viptela-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
vmvcenter-observer:
specs:
replicas: 1
containers:
vmvcenter-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
vmwarensx-observer:
specs:
replicas: 1
containers:
vmwarensx-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
zabbix-observer:
specs:
replicas: 1
containers:
zabbix-observer:
resources:
limits:
cpu: "1.0"
ephemeral-storage: 500Mi
memory: 750Mi
requests:
cpu: "0.5"
ephemeral-storage: 200Mi
memory: 350Mi
env:
- name: JVM_ARGS
value: "-Xms256M -Xmx400M"
Ensure that the ephemeral-storage field is added for all services, and that the limits that you use in your ConfigMap are greater than or equal to the limits provided in the preceding ConfigMap example.
Restart the pod
Restart the asm-operator pod for the new settings to be applied.
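A minimal sketch of the restart, assuming the oc CLI is logged in to the cluster. The exact pod name, and the `<release-name>-topology` deployment naming pattern used in the last command, are assumptions; substitute the names from your own deployment:

```shell
# Find the asm-operator pod
oc get pods -n <namespace> | grep asm-operator

# Delete it; the owning Deployment recreates it, and the operator
# then reconciles the topology pods with the new settings
oc delete pod <asm-operator-pod-name> -n <namespace>

# Optionally confirm the new limit on the topology deployment afterwards
oc get deployment <release-name>-topology -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits.ephemeral-storage}'
```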