Operator configuration examples
Browse the WebSphere® Liberty operator examples to learn how to use custom resource (CR) parameters to configure your operator.
For more information about the WebSphereLibertyApplication custom resource definition (CRD) configurable parameters, see WebSphereLibertyApplication custom resource.
- Reference image streams (.spec.applicationImage)
- Configure service account (.spec.serviceAccount)
- Add or change labels (.metadata.labels)
- Add annotations (.metadata.annotations)
- Set environment variables for an application container (.spec.env or .spec.envFrom)
- Override console logging environment variable default values (.spec.env)
- Configure static replicas for high availability (.spec.replicas)
- Configure Horizontal Pod Autoscaling for high availability (.spec.autoscaling)
- Set privileges and permissions for a pod or container (.spec.securityContext)
- Persist resources (.spec.statefulSet and .spec.volumeMounts)
- Monitor resources (.spec.monitoring)
- Specify multiple service ports (.spec.service.port* and .spec.monitoring.endpoints)
- Configure probes (.spec.probes)
- Configure file-based probes with mpHealth-4.0 (.spec.probes.enableFileBased)
- Deploy serverless applications with Knative (.spec.createKnativeService)
- Expose applications externally (.spec.expose, .spec.createKnativeService, .spec.route)
- Allow or limit incoming traffic (.spec.networkPolicy)
- Bind applications with operator-managed backing services (.status.binding.name and .spec.service.bindable)
- Limit a pod to run on specified nodes (.spec.affinity)
- Constrain how pods are spread between nodes and zones (.spec.topologySpreadConstraints)
- Configure DNS (.spec.dns.policy and .spec.dns.config)
- Configure tolerations (.spec.tolerations)
Reference image streams (.spec.applicationImage)
To deploy an image from an image stream, you must specify a .spec.applicationImage field in your CR.
spec:
  applicationImage: my-namespace/my-image-stream:1.0
The previous example looks up the 1.0 tag from the my-image-stream image stream in the my-namespace project and populates the CR .status.imageReference field with a referenced image such as image-registry.openshift-image-registry.svc:5000/my-namespace/my-image-stream@sha256:*****. The operator watches the specified image stream and deploys new images as they become available for the specified tag.
To reference an image stream, the .spec.applicationImage field must follow the project_name/image_stream_name[:tag] format. If project_name or tag is not specified, the operator defaults to the namespace of the CR and the latest tag. For example, the applicationImage: my-image-stream configuration is the same as the applicationImage: my-namespace/my-image-stream:latest configuration.
The operator first tries to find an image stream name with the project_name/image_stream_name format and falls back to a registry lookup if it cannot find an image stream that matches the value.
This feature is only available if you are running on Red Hat® OpenShift®. The operator requires
ClusterRole permissions if the image stream resource is in another namespace.
Configure service account (.spec.serviceAccount)
The operator can create a ServiceAccount resource when deploying a WebSphereLibertyApplication custom resource (CR). If .spec.serviceAccount.name is not specified in a CR, the operator creates a service account with the same name as the CR (such as my-app). In addition, this service account is dynamically updated when pull secret changes are detected in the CR field .spec.pullSecret.
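For example, a minimal sketch of a CR that relies on the operator-created service account and supplies a pull secret (the secret name my-registry-secret is illustrative):

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  # Hypothetical secret that holds registry credentials; the operator
  # adds it to the service account that it creates for this CR.
  pullSecret: my-registry-secret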
Alternatively, the operator can use a custom ServiceAccount that you provide. If .spec.serviceAccount.name is specified in a CR, the operator uses the service account as is, with read-only permissions, when provisioning new pods. It is your responsibility to add any required image pull secrets to the service account when accessing images behind a private registry.
By default, the operator verifies that the service account that is used has a reference to a
valid pull secret. If a custom service account is being used, this check can be disabled by setting
.spec.serviceAccount.skipPullSecretValidation to true in the
CR.
Note: .spec.serviceAccountName is now deprecated. The operator still looks up the value of .spec.serviceAccountName, but you must switch to using .spec.serviceAccount.name.

You can set .spec.serviceAccount.mountToken to disable mounting the service account token into the application pods. By default, the service account token is mounted. This configuration applies to either the default service account that the operator creates or to the custom service account that you provide.
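For example, a minimal sketch that references a custom service account and disables token mounting (the service account name my-custom-sa is illustrative):

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceAccount:
    # Hypothetical pre-existing service account in the same namespace
    name: my-custom-sa
    mountToken: false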
If your application requires specific permissions but you still want the operator to create a ServiceAccount, you can manually create a role binding to bind a role to the service account that the operator created. To learn more about role-based access control (RBAC), see the Kubernetes documentation.
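As a sketch, assuming a Role named my-app-role already exists in the namespace, a role binding for the operator-created my-app service account might look like the following:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-rolebinding
subjects:
  - kind: ServiceAccount
    name: my-app  # the service account that the operator created for the CR
roleRef:
  kind: Role
  name: my-app-role  # hypothetical role with the required permissions
  apiGroup: rbac.authorization.k8s.io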
Add or change labels (.metadata.labels)
By default, the operator adds the following labels into all resources that are created for a WebSphereLibertyApplication CR.
| Label | Default value | Description |
|---|---|---|
| app.kubernetes.io/instance | metadata.name | A unique name or identifier for this component. You cannot change the default. |
| app.kubernetes.io/name | metadata.name | A name that represents this component. |
| app.kubernetes.io/managed-by | websphere-liberty-operator | The tool that manages this component. |
| app.kubernetes.io/component | backend | The type of component that is created. For a full list, see the Red Hat OpenShift documentation. |
| app.kubernetes.io/part-of | applicationName | The name of the higher-level application that this component is a part of. If the component is not a stand-alone application, configure this label. |
| app.kubernetes.io/version | version | The version of the component. |
You can add new labels or overwrite existing labels, excluding the
app.kubernetes.io/instance label. To set labels, specify them in your CR as
key-value pairs in the .metadata.labels field.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  labels:
    my-label-key: my-label-value
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of the CR, any changes to its labels are applied only if a spec field is updated.
When running in Red Hat OpenShift, additional labels and annotations are standard on the platform. Overwrite defaults where applicable and, by using the previous instructions, add any labels from the Red Hat OpenShift list that are not set by default.
Add annotations (.metadata.annotations)
To add new annotations to all resources that are created for a WebSphereLibertyApplication CR, specify them in your CR as key-value pairs in the .metadata.annotations field. Annotations in a CR override any annotations specified on a resource, except for the annotations set on a Service with .spec.service.annotations.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  annotations:
    my-annotation-key: my-annotation-value
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of the CR, any changes to its annotations are applied only if a spec field is updated.
When running in Red Hat OpenShift, additional annotations are standard on the platform. Overwrite defaults where applicable and, by using the previous instructions, add any annotations from the Red Hat OpenShift list that are not set by default.
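Annotations that are set with .spec.service.annotations apply only to the Service resource and, as noted earlier, are not overridden by .metadata.annotations. A minimal sketch, where the annotation key and value are illustrative:

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    annotations:
      # Hypothetical annotation that is applied only to the Service resource
      my-service-annotation-key: my-service-annotation-value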
Set environment variables for an application container (.spec.env or .spec.envFrom)
To set environment variables for your application container, specify .spec.env or
.spec.envFrom fields in a CR. The environment variables can come
directly from key-value pairs, ConfigMap, or Secret. The
environment variables set by the .spec.env or
.spec.envFrom fields override any environment variables that are specified in
the container image.
Use .spec.envFrom to define all data in a ConfigMap or a
Secret as environment variables in a container. Keys from
ConfigMap or Secret resources become environment variable names in
your container. The following CR sets key-value pairs in .spec.env and
.spec.envFrom fields.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: DB_NAME
      value: "database"
    - name: DB_PORT
      valueFrom:
        configMapKeyRef:
          name: db-config
          key: db-port
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credential
          key: adminUsername
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credential
          key: adminPassword
  envFrom:
    - configMapRef:
        name: env-configmap
    - secretRef:
        name: env-secrets
For another example that uses .spec.envFrom.secretRef, see Using environment variables for basic authentication credentials. For an example that overrides the console logging environment variable default values, see Override console logging environment variable default values (.spec.env).
Override console logging environment variable default values (.spec.env)
The WebSphere Liberty operator sets environment variables that are related to console logging by default. You can override the console logging default values with your own values in your CR .spec.env list.
The following table lists the console logging environment variables and their default values.
| Variable name | Default value |
|---|---|
| WLP_LOGGING_CONSOLE_LOGLEVEL | info |
| WLP_LOGGING_CONSOLE_SOURCE | message,accessLog,ffdc,audit |
| WLP_LOGGING_CONSOLE_FORMAT | json |
To override default values for the console logging environment variables, set your preferred values manually in your CR .spec.env list. For information about values that you can set, see the Open Liberty logging documentation.
The following example shows a CR .spec.env list that sets nondefault values for the console logging environment variables.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  env:
    - name: WLP_LOGGING_CONSOLE_FORMAT
      value: "DEV"
    - name: WLP_LOGGING_CONSOLE_SOURCE
      value: "message,trace,accessLog"
    - name: WLP_LOGGING_CONSOLE_LOGLEVEL
      value: "error"
For more information about overriding variable default values, see Set environment variables for an application container (.spec.env or .spec.envFrom). For information about monitoring applications and analyzing application logs, see Observing with the WebSphere Liberty operator.
Configure static replicas for high availability (.spec.replicas)
To run multiple instances of your application for high availability with a fixed number of replicas, use the .spec.replicas field. The .spec.replicas field maintains a constant number of application instances regardless of resource consumption. This configuration is ignored when autoscaling is configured using the .spec.autoscaling field.
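For example, the following sketch keeps three instances of the application running:

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  replicas: 3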
Configure Horizontal Pod Autoscaling for high availability (.spec.autoscaling)
- The .spec.autoscaling.maxReplicas field is required for all autoscaling configurations.
- The .spec.resources.requests field is required for autoscaling and sets the minimum allowed amount of compute resources.
- The .spec.resources.requests.cpu field is required for autoscaling based on CPU usage with the .spec.autoscaling.targetCPUUtilizationPercentage field.
- The .spec.resources.requests.memory field is required for autoscaling based on memory usage with the .spec.autoscaling.targetMemoryUtilizationPercentage field. A combined sketch follows this list.
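A minimal sketch that combines these requirements for CPU-based autoscaling; the resource amounts, replica bounds, and utilization target are illustrative values:

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
  autoscaling:
    minReplicas: 2
    maxReplicas: 5
    # Scale out when average CPU usage exceeds 70% of the requested CPU
    targetCPUUtilizationPercentage: 70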
Set privileges and permissions for a pod or container (.spec.securityContext)
A security context controls privilege and permission settings for a pod or application container. By default, the operator sets several .spec.securityContext parameters for an application container as shown in the following example.
spec:
  containers:
    - name: app
      securityContext:
        capabilities:
          drop:
            - ALL
        privileged: false
        runAsNonRoot: true
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault
To override the default values or set more parameters, change the .spec.securityContext parameters, for example:
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  securityContext:
    readOnlyRootFilesystem: true
    runAsUser: 1001
    seLinuxOptions:
      level: "s0:c123,c456"
The WebSphere Liberty operator sets the securityContext field to the RuntimeDefault seccomp profile. If your Kubernetes cluster uses custom security context constraints, seccompProfiles must be set to runtime/default. To use custom security context constraints with your Kubernetes cluster, add the following section.

seccompProfiles:
  - runtime/default
If the application requires seccomp to be disabled, the seccompProfile must be set to unconfined in both the security context constraints and the WebSphereLibertyApplication CR.

seccompProfiles:
  - unconfined

spec:
  securityContext:
    seccompProfile:
      type: Unconfined

For more information, see Set the security context for a Container.
Persist resources (.spec.statefulSet and .spec.volumeMounts)
If storage is specified in the WebSphereLibertyApplication CR, the operator can create a StatefulSet and PersistentVolumeClaim for each pod. If storage is not specified, a StatefulSet resource is created without persistent storage.
The following CR definition uses .spec.statefulSet.storage to provide basic storage. The operator creates a StatefulSet with a 1Gi PersistentVolumeClaim that mounts to the /data folder.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  statefulSet:
    storage:
      size: 1Gi
      mountPath: "/data"
A WebSphereLibertyApplication CR definition can provide more advanced storage. With the following CR definition, the operator creates a PersistentVolumeClaim called pvc with the size of 1Gi and ReadWriteMany access mode. The operator enables users to provide an entire .spec.statefulSet.storage.volumeClaimTemplate for full control over the automatically created PersistentVolumeClaim. To persist to more than one folder, the CR definition uses the .spec.volumeMounts field instead of .spec.statefulSet.storage.mountPath.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  volumeMounts:
    - name: pvc
      mountPath: /data_1
      subPath: data_1
    - name: pvc
      mountPath: /data_2
      subPath: data_2
  statefulSet:
    storage:
      volumeClaimTemplate:
        metadata:
          name: pvc
        spec:
          accessModes:
            - "ReadWriteMany"
          storageClassName: 'glusterfs'
          resources:
            requests:
              storage: 1Gi
Note: After the StatefulSet is created, the persistent storage and PersistentVolumeClaim cannot be added or changed.

The following CR definition does not specify storage and creates StatefulSet resources without persistent storage. You can create StatefulSet resources without storage if you require only ordering and uniqueness of a set of pods.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  statefulSet: {}
Monitor resources (.spec.monitoring)
The WebSphere Liberty operator can create a ServiceMonitor resource to integrate with the Prometheus Operator. At minimum, provide a label for Prometheus that is set on ServiceMonitor objects. In the following example, the .spec.monitoring label is app-prometheus.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  monitoring:
    labels:
      app-prometheus: ''
For more advanced monitoring, set many ServiceMonitor parameters, such as an authentication secret with a Prometheus endpoint.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  monitoring:
    labels:
      app-prometheus: ''
    endpoints:
      - interval: '30s'
        basicAuth:
          username:
            key: username
            name: metrics-secret
          password:
            key: password
            name: metrics-secret
        tlsConfig:
          insecureSkipVerify: true
Specify multiple service ports (.spec.service.port* and .spec.monitoring.endpoints)
To provide multiple service ports in addition to the primary service port, configure the primary service port with the .spec.service.port, .spec.service.targetPort, .spec.service.portName, and .spec.service.nodePort fields. The primary port is exposed from the container that runs the application, and the port values are used to configure the Route (or Ingress), service binding, and Knative service.
To specify an alternative port for the ServiceMonitor, use the .spec.monitoring.endpoints field and specify either the port or targetPort field; otherwise, the primary port is used.
Specify the primary port with the .spec.service.port field and additional ports with the .spec.service.ports field as shown in the following example.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: NodePort
    port: 9080
    portName: http
    targetPort: 9080
    nodePort: 30008
    ports:
      - port: 9443
        name: https
  monitoring:
    endpoints:
      - basicAuth:
          password:
            key: password
            name: metrics-secret
          username:
            key: username
            name: metrics-secret
        interval: 5s
        port: https
        scheme: HTTPS
        tlsConfig:
          insecureSkipVerify: true
    labels:
      app-monitoring: 'true'
Configure probes (.spec.probes)
Probes are health checks on an application container to determine whether it is alive or ready to receive traffic. The WebSphere Liberty operator has startup, liveness, and readiness probes.
Probes are not enabled in applications by default. To enable a probe with the default values, set the probe parameters to {}. The following example enables all three probes to use default values.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  probes:
    startup: {}
    liveness: {}
    readiness: {}

The default values for the startup, liveness, and readiness probes are as follows.

startup:
  httpGet:
    path: /health/started
    port: 9443
    scheme: HTTPS
  timeoutSeconds: 2
  periodSeconds: 10
  failureThreshold: 20
liveness:
  httpGet:
    path: /health/live
    port: 9443
    scheme: HTTPS
  initialDelaySeconds: 60
  timeoutSeconds: 2
  periodSeconds: 10
  failureThreshold: 3
readiness:
  httpGet:
    path: /health/ready
    port: 9443
    scheme: HTTPS
  initialDelaySeconds: 10
  timeoutSeconds: 2
  periodSeconds: 10
  failureThreshold: 10
To override a default value, specify a different value. The following example overrides a liveness probe initial delay default of 60 seconds and sets the initial delay to 90 seconds.
spec:
  probes:
    liveness:
      initialDelaySeconds: 90
When a probe initialDelaySeconds parameter is set to 0, the
default value is used. To set a probe initial delay to 0, define the probe instead
of using the default probe. The following example overrides the default value and sets the initial
delay to 0.
spec:
  probes:
    liveness:
      httpGet:
        path: "/health/live"
        port: 9443
      initialDelaySeconds: 0
Configure file-based probes with mpHealth-4.0 (.spec.probes.enableFileBased)
Starting in Liberty operator version 1.5.2, a new Boolean field, .spec.probes.enableFileBased, allows you to set up file-based health checks by using the MicroProfile Health 4.0 feature.
File-based probes are not enabled in applications by default. The Liberty operator defaults to using HTTP GET probes.
- The Liberty image specified at .spec.applicationImage requires the mpHealth-4.0 feature to be installed and enabled on a Liberty server running version 25.0.0.6 or higher.
- To enable a file-based probe with the default values, set enableFileBased to true and the probe parameters to {}. The following example enables all three probes to use file-based default values.

spec:
  probes:
    enableFileBased: true
    startup: {}
    liveness: {}
    readiness: {}
The file-based startup, liveness, and readiness probes inherit the same defaults (without setting httpGet) as outlined in the Configure probes (.spec.probes) section.
Once file-based probes are enabled, the Liberty operator configures the application
to monitor files within the /output/health directory of the container.
sh-4.4$ ls -la /output/health
total 0
drwxr-x---. 2 1000730000 root 46 Nov 21 16:07 .
drwxrwx---. 1 default root 65 Nov 21 16:07 ..
-rw-r-----. 1 1000730000 root 0 Nov 21 16:07 live
-rw-r-----. 1 1000730000 root 0 Nov 21 16:07 ready
-rw-r-----. 1 1000730000 root 0 Nov 21 16:07 started
Every few seconds, a Liberty server that uses the mpHealth-4.0 feature creates or updates the live, ready, and started files with a new timestamp to indicate an UP status. The interval at which Liberty checks these files can be modified by using the .spec.probes.checkInterval and .spec.probes.startupCheckInterval fields. By default, these check intervals are set to 5s and 100ms, respectively.
spec:
  probes:
    enableFileBased: true
    startup: {}
    liveness: {}
    readiness: {}
    checkInterval: 5s
    startupCheckInterval: 100ms
To align with the behavior of non-file-based probes, set the Liberty server to check the live, ready, or started files faster than the time that it takes for Kubernetes to perform a single probe. For example, set up a buffer so that the check intervals maintain the property interval * 1.5 ≤ periodSeconds. In this case, the WebSphereLibertyApplication probes have a default periodSeconds of 10, which satisfies the constraint with check intervals of 5s and 100ms. If you modify the periodSeconds, checkInterval, or startupCheckInterval property values, maintain this constraint to avoid unexpected pod downtime.
spec:
  probes:
    enableFileBased: true
    startup:
      # periodSeconds: 10
    liveness:
      # periodSeconds: 10
    readiness:
      # periodSeconds: 10
    checkInterval: 5s
    startupCheckInterval: 100ms
Deploy serverless applications with Knative (.spec.createKnativeService)
If Knative is installed on your Kubernetes cluster, you can deploy serverless applications with Knative on the cluster. The operator creates a Knative Service resource, which manages the entire life cycle of the workload. To create a Knative service, set .spec.createKnativeService to true.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  createKnativeService: true
The operator creates a Knative service in the cluster and populates the resource with applicable
WebSphereLibertyApplication fields. Also, it ensures non-Knative
resources such as Kubernetes Service, Route, and
Deployment are deleted.
The CRD fields that can populate the Knative service resource include .spec.applicationImage, .spec.serviceAccount.name, .spec.probes.liveness, .spec.probes.readiness, .spec.service.port, .spec.volumes, .spec.volumeMounts, .spec.env, .spec.envFrom, .spec.pullSecret, and .spec.pullPolicy. The startup probe is not fully supported by Knative, so .spec.probes.startup does not apply when the Knative service is enabled.
For details on how to configure Knative for tasks such as enabling HTTPS connections and setting up a custom domain, see the Knative documentation.
Autoscaling fields in WebSphereLibertyApplication are not used to configure Knative Pod Autoscaler (KPA). To learn how to configure KPA, see Configuring the Autoscaler.
Expose applications externally (.spec.expose, .spec.createKnativeService, .spec.route)
Expose an application externally with a Route, Knative Route, or Ingress resource.
To expose an application externally with a route in a non-Knative deployment, set .spec.expose
to true.
The operator creates a secured route based on the application service when .spec.manageTLS is enabled. To use custom certificates, see information about .spec.service.certificateSecretRef and .spec.route.certificateSecretRef.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  expose: true
To expose an application externally with Ingress in a non-Knative deployment, complete the following steps.
- To use the Ingress resource to expose your cluster, install an Ingress controller such as Nginx or Traefik.
- Ensure that a Route resource is not on the cluster. The Ingress resource is created only if the Route resource is not available on the cluster.
- To use the Ingress resource, set the defaultHostName variable in the Operator ConfigMap object to a hostname such as mycompany.com.
- Enable TLS. Generate a certificate and specify the secret that contains the certificate with the .spec.route.certificateSecretRef field.

  apiVersion: liberty.websphere.ibm.com/v1
  kind: WebSphereLibertyApplication
  metadata:
    name: my-app
    namespace: backend
  spec:
    applicationImage: quay.io/my-repo/my-app:1.0
    expose: true
    route:
      certificateSecretRef: mycompany-tls

- Specify .spec.route.annotations to configure the Ingress resource. Annotations such as Nginx, HAProxy, Traefik, and others are specific to the Ingress controller implementation. The following example specifies annotations, an existing TLS secret, and a custom hostname.

  apiVersion: liberty.websphere.ibm.com/v1
  kind: WebSphereLibertyApplication
  metadata:
    name: my-app
    namespace: backend
  spec:
    applicationImage: quay.io/my-repo/my-app:1.0
    expose: true
    route:
      annotations:
        # You can use this annotation to specify the name of the ingress controller to use.
        # You can install multiple ingress controllers to address different types of incoming traffic such as an external or internal DNS.
        kubernetes.io/ingress.class: "nginx"
        # The following nginx annotations enable a secure pod connection:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        # The following traefik annotation enables a secure pod connection:
        traefik.ingress.kubernetes.io/service.serversscheme: https
      # Use a custom hostname for the Ingress:
      host: app-v1.mycompany.com
      # Reference a pre-existing TLS secret:
      certificateSecretRef: mycompany-tls
To expose an application as a Knative service, set .spec.createKnativeService and
.spec.expose to true. The operator creates an unsecured
Knative route. To configure secure HTTPS connections for your Knative deployment, see Configuring
HTTPS with TLS certificates.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  createKnativeService: true
  expose: true
Allow or limit incoming traffic (.spec.networkPolicy)
By default, network policies for an application isolate incoming traffic.
- The default network policy created for applications that are not exposed limits incoming traffic to pods in the same namespace that are part of the same application. Traffic is limited to only the ports that are configured by the service. By default, traffic is exposed to .spec.service.targetPort when specified, and otherwise falls back to using .spec.service.port. Using the same logic, traffic is exposed for each additional targetPort or port provided in the .spec.service.ports[] array.
- Red Hat OpenShift supports network policies by default. For exposed applications on Red Hat OpenShift, the network policy allows incoming traffic from the Red Hat OpenShift ingress controller on the ports in the service configuration. The network policy also allows incoming traffic from the Red Hat OpenShift monitoring stack.
- For exposed applications on other Kubernetes platforms, the network policy allows incoming traffic from any pods in any namespace on the ports in the service configuration. For deployments to other Kubernetes platforms, ensure that your network plug-in supports the Kubernetes network policies.
To disable the creation of network policies for an application, set .spec.networkPolicy.disable to true.
spec:
  networkPolicy:
    disable: true
You can change the network policy to allow incoming traffic from specific namespaces or pods. By
default, .spec.networkPolicy.namespaceLabels is set to the same namespace to
which the application is deployed, and .spec.networkPolicy.fromLabels is set to
pods that belong to the same application specified by .spec.applicationName.
The following example allows incoming traffic from pods that are labeled with the
frontend role and are in the same namespace.
spec:
  networkPolicy:
    fromLabels:
      role: frontend
The following example allows incoming traffic from pods that belong to the same application in
the example namespace.
spec:
  networkPolicy:
    namespaceLabels:
      kubernetes.io/metadata.name: example
The following example allows incoming traffic from pods that are labeled with the
frontend role in the example namespace.
spec:
  networkPolicy:
    namespaceLabels:
      kubernetes.io/metadata.name: example
    fromLabels:
      role: frontend
Bind applications with operator-managed backing services (.status.binding.name and .spec.service.bindable)
The Service Binding Operator enables application developers to bind applications
together with operator-managed backing services. If the Service Binding Operator is installed on
your cluster, you can bind applications by creating a ServiceBindingRequest custom
resource.
You can configure a WebSphere Liberty
application to behave as a Provisioned Service that is defined by the Service Binding
Specification. According to the specification, a Provisioned Service resource must define a
.status.binding.name that refers to a Secret. To expose your application as a
Provisioned Service, set the .spec.service.bindable field to a value of
true. The operator creates a binding secret that is named
CR_NAME-expose-binding and adds the host,
port, protocol, basePath, and
uri entries to the secret.
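For example, a minimal sketch that exposes a CR named my-app as a Provisioned Service:

spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    port: 9080
    bindable: true

With this configuration, the operator creates the my-app-expose-binding secret with the host, port, protocol, basePath, and uri entries.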
To override the default values for the entries in the binding secret or to add new entries to the
secret, create an override secret that is named
CR_NAME-expose-binding-override and add any entries to the
secret. The operator reads the content of the override secret and overrides the default values in
the binding secret.
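For example, assuming a CR named my-app, an override secret might look like the following sketch; the basePath value is illustrative.

apiVersion: v1
kind: Secret
metadata:
  name: my-app-expose-binding-override
stringData:
  # Overrides the basePath entry in the my-app-expose-binding secret
  basePath: /api/v2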
After a WebSphere Liberty application is exposed as a Provisioned Service, a service binding request can refer to the application as a backing service.
The instructions that follow show how to bind WebSphere Liberty applications as services or producers to other workloads (such as pods or deployments). Two WebSphere Liberty applications that are deployed through the WebSphere Liberty operator cannot be bound. For more information, see Known issues and limitations.
- Set up the Service Binding operator to access WebSphere Liberty applications. By default, the Service Binding operator does not have permission to interact with WebSphere Liberty applications that are deployed through the WebSphere Liberty operator. You must create two RoleBindings to give the Service Binding operator view and edit access for WebSphere Liberty applications.
  - In the Red Hat OpenShift dashboard, navigate to the RoleBindings page.
  - Select Create binding.
  - Set the Binding type to Cluster-wide role binding (ClusterRoleBinding).
  - Enter a name for the binding. Choose a name that is related to service bindings and view access for WebSphere applications.
  - For the role name, enter webspherelibertyapplications.liberty.websphere.ibm.com-v1-view.
  - Set the Subject to ServiceAccount.
  - A Subject namespace menu appears. Select openshift-operators.
  - In the Subject name field, enter service-binding-operator.
  - Click Create.

  Now that you have set up the first role binding, navigate to the RoleBindings list and click Create binding again. Set up edit access by using the following instructions.
  - Set Binding type to Cluster-wide role binding (ClusterRoleBinding).
  - Enter a name for the binding. Choose a name that is related to service bindings and edit access for WebSphere applications.
  - In the Role name field, enter webspherelibertyapplications.liberty.websphere.ibm.com-v1-edit.
  - Set Subject to ServiceAccount.
  - In the Subject namespace list, select openshift-operators.
  - In the Subject name field, type service-binding-operator.
  - Click Create.

  Service bindings from WebSphere Liberty applications (or "services") to pods or deployments (or "workloads") now succeed. After a binding is made, the bound workload restarts or scales to mount the binding secret to /bindings in all containers.
- Set up a service binding by using the Red Hat method. For more information, see the Red Hat documentation or the Red Hat tutorial.
  - On the Red Hat OpenShift web dashboard, click Administrator in the sidebar and select Developer.
  - In the Topology view for the current namespace, hover over the border of the WebSphere application to be bound as a service, and drag an arrow to the Pod or Deployment workload. A tooltip appears, entitled Create Service Binding.
  - The Create Service Binding window opens. Change the name to a value that is fewer than 63 characters. The Service Binding operator might fail to mount the secret as a volume if the name exceeds 63 characters.
  - Click Create.
  - A sidebar opens. To see the status of the binding, click the name of the secret and then scroll until the status appears.
  - Check the pod/deployment workload and verify that a volume is mounted. You can also open a terminal session into a container and run ls /bindings.
- Set up a service binding by using the Spec API Tech Preview / Community method. This method is newer than the Red Hat method but achieves the same results. You must add a label to your WebSphere Liberty application, such as app=frontend, if it does not have any unique labels. Set the binding to use a label selector so that the Service Binding operator looks for a WebSphere Liberty application with a specific label.
  - Install the Service Binding operator by using the Red Hat OpenShift Operator Catalog.
  - Select and set the namespace to the same one used by both your WebSphere application and pod/deployment workload.
  - Open the Service Binding (Spec API Tech Preview) page.
  - Click Create ServiceBinding.
  - Choose a short name for the binding. Names that exceed 63 characters might cause the binding secret volume mount to fail.
  - Expand the Service section.
  - In the Api Version field, enter liberty.websphere.ibm.com/v1.
  - In the Kind field, enter WebSphereLibertyApplication.
  - In the Name field, enter the name of your application. You can get this name from the list of applications on the WebSphere Liberty operator page.
  - Expand the Workload section.
  - Set the Api Version field to the value of apiVersion in your target workload YAML. For example, if the workload is a deployment, the value is apps/v1.
  - Set the Kind field to the value of kind in your target workload YAML. For example, if the workload is a deployment, the value is Deployment.
  - Expand the Selector subsection, and then expand the Match Expressions subsection.
  - Click Add Match Expression.
  - In the Key field, enter the label key that you set earlier. For example, for the label app=frontend, the key is app.
  - In the Operator field, enter Exists.
  - Expand the Values subsection and click Add Value.
  - In the Value field, enter the label value that you set earlier. For example, if using the label app=frontend, the value is frontend.
  - Click Create.
  - Check the Pod/Deployment workload and verify that a volume is mounted, either by scrolling down or by opening a terminal session into a container and running ls /bindings.
Limit a pod to run on specified nodes (.spec.affinity)
Use .spec.affinity to constrain a Pod to run only on specified nodes.
To set required labels for pod scheduling on specific nodes, use the .spec.affinity.nodeAffinityLabels field.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    nodeAffinityLabels:
      customNodeLabel: label1, label2
      customNodeLabel2: label3
The following example requires a large node type and sets scheduling preferences for two zones, which are named zoneA and zoneB.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node.kubernetes.io/instance-type
                operator: In
                values:
                  - large
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 60
          preference:
            matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                  - zoneA
        - weight: 20
          preference:
            matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                  - zoneB
Use pod affinity and anti-affinity to constrain which nodes your pod is eligible to be scheduled on, based on the labels of pods that are already running on the node rather than the labels of the node itself.
The following example shows that pod affinity is required and that the pods for Service-A and Service-B must be in the same zone. Through pod anti-affinity, it is preferable not to schedule Service-B and Service-C on the same host.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: Service-B
  namespace: test
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - Service-A
          topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: service
                  operator: In
                  values:
                    - Service-C
            topologyKey: kubernetes.io/hostname
Constrain how pods are spread between nodes and zones (.spec.topologySpreadConstraints)
Use the .spec.topologySpreadConstraints YAML object to specify constraints on
how pods of the application instance (and if enabled, the Semeru Cloud Compiler instance) are spread
between nodes and zones of the cluster.
Using the .spec.topologySpreadConstraints.constraints field, you can specify a
list of Pod TopologySpreadConstraints to be added, such as in the
following example:
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  topologySpreadConstraints:
    constraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/instance: my-app
By default, the operator adds the following Pod topology spread constraints on the application instance's pods (and if applicable, the Semeru Cloud Compiler instance's pods). The default behavior is to constrain the spread of pods that are owned by the same application instance (or Semeru Cloud Compiler generation instance), denoted by <instance name>, with a maxSkew of 1.
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: <instance name>
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/instance: <instance name>
To remove the operator's default topology spread constraints, set the
.spec.topologySpreadConstraints.disableOperatorDefaults flag to
true.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: my-app
  namespace: test
spec:
  topologySpreadConstraints:
    disableOperatorDefaults: true
Alternatively, override each constraint manually by creating a new TopologySpreadConstraint under
.spec.topologySpreadConstraints.constraints for each topologyKey
you want to modify.
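For example, the following sketch replaces only the hostname constraint with a stricter whenUnsatisfiable setting, while the operator's zone default remains in effect:

spec:
  topologySpreadConstraints:
    constraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule  # stricter than the default ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/instance: my-app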
Note: When you set the disableOperatorDefaults: true flag and cluster-level default constraints are not enabled, the K8s scheduler uses its own internal default Pod topology spread constraints, as outlined in https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints.
Configure DNS (.spec.dns.policy and .spec.dns.config)
DNS can be configured in the WebSphereLibertyApplication CR by using the .spec.dns.policy field or the .spec.dns.config field. The .spec.dns.policy field is the DNS policy for the application pod and defaults to the ClusterFirst policy. The .spec.dns.config field is the DNS config for the application pod.
- Default: The pod inherits the name resolution configuration from the node that the pods run on.
- ClusterFirst: Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to an upstream name server by the DNS server. Cluster administrators can have extra stub-domain and upstream DNS servers configured.
- ClusterFirstWithHostNet: Set the DNS policy to ClusterFirstWithHostNet if the pod runs with hostNetwork. Pods that run with hostNetwork and the ClusterFirst policy behave like the Default policy. Note: ClusterFirstWithHostNet is not supported on Windows. For more information, see DNS Resolution on Windows.
- None: A pod can ignore DNS settings from the Kubernetes environment. All DNS settings are provided by using the .spec.dns.config field of the WebSphereLibertyApplication CR.
Note: Default is not the default DNS policy. If .spec.dns.policy is not explicitly specified, then ClusterFirst is used.

DNS config allows users more control over the DNS settings for an application pod.
The .spec.dns.config field is optional and it can work with any
.spec.dns.policy settings. However, when a
.spec.dns.policy is set to None, the
.spec.dns.config field must be specified.
- .spec.dns.config.nameservers: a list of IP addresses that are used as DNS servers for the pod. Up to three IP addresses can be specified. When .spec.dns.policy is set to None, the list must contain at least one IP address; otherwise, this property is optional. The servers that are listed are combined with the base name servers that are generated from the specified DNS policy, with duplicate addresses removed.
- .spec.dns.config.searches: a list of DNS search domains for hostname lookup in the pod. This property is optional. When specified, the provided list is merged into the base search domain names that are generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows up to 32 search domains.
- .spec.dns.config.options: an optional list of objects where each object must have a name property and can have a value property. The contents of this property are merged with the options that are generated from the specified DNS policy. Duplicate entries are removed.
spec:
  dns:
    policy: "None"
    config:
      nameservers:
        - 192.0.2.1 # this is an example
      searches:
        - ns1.svc.cluster-domain.example
        - my.dns.search.suffix
      options:
        - name: ndots
          value: "2"
        - name: edns0
For more information on DNS, see the Kubernetes DNS documentation.
Configure tolerations (.spec.tolerations)
Node affinity is a property that attracts pods to a set of nodes either as a preference or a hard requirement. However, taints allow a node to repel a set of pods.
Tolerations are applied to pods and allow a scheduler to schedule pods with matching taints. The scheduler also evaluates other parameters as part of its function.
Taints and tolerations work together to help ensure that application pods are not scheduled onto inappropriate nodes. If one or more taints are applied to a node, the node cannot accept any pods that do not tolerate the taints.
Tolerations can be configured in WebSphereLibertyApplication CR
by using the .spec.tolerations field.
spec:
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
For more information on taints and tolerations, see the Kubernetes taints and tolerations documentation.