Configuring cluster security
About this task
Kubernetes can be used to secure a containerized application, enforce role-based access control (RBAC) policies, and restrict the ability of a container to access resources on the host server. Communications in Kubernetes need authentication to minimize the risk that an attack on one pod spreads to other pods.
The following sections provide guidance on how to configure your clusters to protect and secure your data:
- Secure custom routes and ingress
- Proxy servers
- Network policies to manage access to services in the cluster (ingress)
- Deny all network policies
- Network policies to manage access to external services (egress)
- Transport Layer Security (TLS) and cipher configuration
- Protection of secrets and data
- Secure custom routes and ingress
It is recommended to use secure communication protocols for services, so make sure that custom routes and ingresses for the FNCM endpoints are not configured with the unsecured port 9080.
The default routes and ingresses created by the operator use the secured port 9443.
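As a quick check (a sketch, not from this document; the namespace fncm-project is a placeholder), you can list the routes in the deployment namespace and confirm that none of them target the unsecured port:
```shell
# List routes in the FNCM namespace, showing the target port and TLS termination.
# "fncm-project" is a placeholder namespace; substitute your own.
oc get routes -n fncm-project \
  -o custom-columns=NAME:.metadata.name,PORT:.spec.port.targetPort,TLS:.spec.tls.termination
```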
- Proxy servers
By default, FNCM uses fully qualified hostnames when it connects to external services. If you configured a proxy for outbound connections to external services, then set the NO_PROXY environment variable in the cluster to "*.svc". The variable enables all the internal connections to work in a proxy-enabled environment:
```
NO_PROXY=*.svc
```
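One way to set the variable (a sketch, not from this document; the deployment and namespace names are assumptions, so adjust them to the workloads in your environment) is with oc set env:
```shell
# Set NO_PROXY on a deployment so in-cluster *.svc connections bypass the proxy.
# "fncm-operator" and "fncm-project" are assumed names; substitute your own.
oc set env deployment/fncm-operator -n fncm-project NO_PROXY='*.svc'
```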
- Network policies to manage access to services in the cluster (ingress)
FNCM uses network policies to control the communications between its pods and the ingress traffic. You can explicitly allow the following traffic by adding further network policies:
- Ingress traffic from OpenShift Ingress Controller (Router).
- Ingress traffic from OpenShift Monitoring.
- Traffic between pods in the same FNCM namespace.
Example 1: The following example sets a network policy to allow ingress traffic from OpenShift Ingress Controller (Router):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
```
Example 2: The following example sets a network policy to allow ingress traffic from OpenShift Monitoring:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress
```
Example 3: The following example sets a network policy to allow traffic between pods in the same FNCM namespace:
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```
Note: The pod selectors (podSelector) in these examples match all pods deployed in the same namespace, which includes non-FNCM pods. Therefore, all the pods deny traffic that is not explicitly allowed by the network policies of the namespace.
- Deny all network policies
If a deny-all network policy is defined in the cluster, as shown in the following NetworkPolicy, then delete it from the cluster.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
Note: The operator creates network policies to restrict egress during the installation, and they cannot work properly if a deny-all policy exists. For more information, see Network policies to manage access to external services (egress).
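A minimal sketch for locating and removing such a policy (the policy name deny-all matches the example above; your cluster might use a different name or namespace):
```shell
# Find any deny-all network policies across namespaces, then delete the offending one.
oc get networkpolicy --all-namespaces | grep deny-all
oc delete networkpolicy deny-all -n <namespace>
```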
- Network policies to manage access to external services (egress)
When the shared_configuration.sc_egress_configuration.sc_restricted_internet_access parameter in the custom resource is set to "false", the operator creates the following resources:
- A deny-all egress network policy ({{ meta.name }}-fncm-egress-deny-all).
- An allow-all egress network policy ({{ meta.name }}-fncm-egress-allow-all).
- An allow-same-namespace egress network policy in the FNCM deployment namespace ({{ meta.name }}-fncm-egress-allow-same-namespace).
- An egress communications policy for the Kubernetes API and DNS services ({{ meta.name }}-fncm-egress-allow-k8s-services).
- A directory service server or LDAP egress network policy ({{ meta.name }}-fncm-egress-allow-ldap).
- Egress network policies for some databases associated with the deployment.
Where {{ meta.name }} is "fncmdeploy" for FileNet Content Manager deployments.
Attention: When your OpenShift cluster uses OpenShift software-defined networking (SDN) as the network plug-in type, the allow-same-namespace and allow-k8s-services egress network policies for the FNCM deployment use an empty pod selector (podSelector: {}). If non-FNCM pods run in the same namespace as FNCM, then they might be affected by the empty pod selectors. If needed, create your own egress network policy to allow outgoing traffic from the non-FNCM pods.
For CNCF environments, update the following parameter values (see the sketch after this list):
- Set the value of sc_api_namespace to "default".
- Set the value of sc_dns_namespace to "kube-system".
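A minimal sketch of the corresponding custom resource section, assuming the parameter names listed above; the exact nesting in your version of the CR schema might differ:
```yaml
# Excerpt of an FNCM custom resource (hedged sketch; verify against your CR schema).
shared_configuration:
  sc_egress_configuration:
    sc_restricted_internet_access: false
    # CNCF (non-OpenShift) environments only:
    sc_api_namespace: default
    sc_dns_namespace: kube-system
```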
When shared_configuration.sc_egress_configuration.sc_restricted_internet_access is set to "true", all the same network policies are used as when the value is "false", except the allow-all policy. Modify the default access by creating egress network policies for some of the components to work properly with external services. You can create them as a single network policy or as individual network policies for each component.
Note: When the shared_configuration.sc_drivers_url parameter has a valid value, a {{ meta.name }}-fncm-operator-default-egress network policy is also created to provide access to the server that is defined in the parameter value.
If you change the sc_restricted_internet_access value from "false" to "true" to restrict your network, communications between the various FNCM components and external services might be temporarily interrupted because the change can take some time to propagate. The {{ meta.name }}-fncm-egress-allow-all network policy is removed from the cluster.
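To observe the change propagating (a sketch; fncm-project is a placeholder namespace), you can watch the network policies in the deployment namespace and confirm that the allow-all policy disappears:
```shell
# Watch network policies in the FNCM namespace; the *-fncm-egress-allow-all
# policy should be removed after the CR change is reconciled.
oc get networkpolicy -n fncm-project -w
```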
Option 1: Create a custom "catch-all" egress network policy
The following example shows a "catch-all" network policy that uses the podSelector.matchLabels parameter set to com.ibm.fncm.networking/egress-external-app: "true". The policy allows all the FNCM components to communicate with the addresses defined in the ipBlock.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: "{{ meta.name }}-fncm-egress-external-app"
  namespace: "{{ meta.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ meta.name }}-fncm-egress-external-app"
spec:
  podSelector:
    matchLabels:
      com.ibm.fncm.networking/egress-external-app: "true"
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: # IP address of your external application. For example: 1.2.3.4/32
    ports:
    - protocol: # Protocol UDP or TCP
      port: # Port number
  - to:
    - ipBlock:
        cidr: # IP address of your external application. For example: 1.2.3.4/32
    ports:
    - protocol: # Protocol UDP or TCP
      port: # Port number
```
Where {{ meta.name }} is "fncmdeploy" for FileNet Content Manager deployments. The {{ meta.namespace }} value is the namespace of your FNCM deployment (for example, fncm-project).
Tip: Tools such as dig or nslookup can be used to do a DNS lookup on a host to retrieve its IP address.
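For example (a sketch; db.example.com is a placeholder host):
```shell
# Resolve the external host to the IP address to use in the ipBlock.cidr field.
dig +short db.example.com
nslookup db.example.com
```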
Option 2: Create a custom "component-level" egress network policy
The following example shows a "component-level" network policy that uses the podSelector.matchLabels parameter set to com.ibm.fncm.networking/egress-external-app-component: "BAN". The network policy allows the FNCM component to communicate with the addresses defined in the ipBlock.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: "{{ meta.name }}-fncm-egress-external-app-component"
  namespace: "{{ meta.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ meta.name }}-fncm-egress-external-app-component"
spec:
  podSelector:
    # matchExpressions:
    # - key: com.ibm.fncm.networking/egress-external-app-component
    #   operator: In
    #   values:
    #   - BAN
    #   - FNCM
    matchLabels:
      com.ibm.fncm.networking/egress-external-app-component: "BAN"
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 1.2.3.4/32
    ports:
    - protocol: TCP
      port: 443
```
The following table lists the capabilities that need to access external services.
Table 1. Component labels for the pod selector

Capability: Navigator
matchLabels: com.ibm.fncm.networking/egress-external-app-component: "BAN"
Description: The following plug-ins, which are located in /opt/ibm/intPlugins, connect to external services:
- ./eDS/edsPlugin.jar
- ./BAI/BAIPlugin.jar
- ./iccsap/iccsapPlugin.jar
- ./SSO/application-discovery.jar
- ./SSO/platformPlugin.jar
- ./sharePoint/sharePointOnlinePlugin.jar
- ./DocuSign/DocuSignPlugin.jar
Deployments that use an external data service for edsPlugin must allow its endpoint. The application-discovery and platformPlugin plug-ins must be able to connect to the App Engine pods.
Other plug-ins that might need to connect outside of the cluster include:
- Send mail (SMTP) services.
- Microsoft Office for the web.
- SharePoint Online services.
- DocuSign.
- CM8.
- CMOD.
- Box.
Capability: FileNet Content Manager
matchLabels: com.ibm.fncm.networking/egress-external-app-component: "FNCM"
Description: If you extend your deployment to include external services that are not deployed within the same FileNet P8 platform namespace as your deployment, create additional egress network policies by using the component name "FNCM". The following examples are external services that are not deployed within the same namespace as the FileNet P8 platform deployment (see the sketch after this list):
- Custom code that calls out to external services and that is incorporated into CPE as code modules, associated with component queues, or added to the classpath. Also, custom code that is integrated as BAN plug-ins that use external services.
- Full text indexing engines such as OpenSearch or Elasticsearch.
- Storage devices or storage technologies accessible to the CPE component using APIs that do not utilize the Kubernetes storage provisioners and persistent volumes.
- Email services integrated with the CPE component and BAN.
- IBM products associated with the FileNet P8 Platform, such as Content Collector and Enterprise Records that have tools or services residing outside of the Kubernetes cluster where the Kubernetes deployed services use call-back or act as a client.
- External data services invoked from Process Engine (WSInvoke) or BAN (Navigator External Data Services).
- Database services not listed in the CR that are directly invoked from custom code or by using the FileNet Process Engine DBExecute function.
- IBM products with deployments on WebSphere traditional that utilize the services from FNCM or BAN from a containerized deployment such as IBM Content on Demand, or IBM Business Automation Workflow. Components in the Kubernetes-based FNCM or BAN deployments must be able to call out to the services hosted in WebSphere traditional.
- Webhooks as a part of a CS-Graphql application.
- External Key management services.
- Multiple deployments of FNCM in distinct OCP clusters to support a geographically dispersed FileNet P8 domain.
- Legacy deployments of the FileNet P8 Platform, based on application servers, that exist side by side within the same FileNet P8 domain.
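As an illustration of the FileNet Content Manager row above (a hedged sketch: the policy name, the namespace, the OpenSearch address 10.0.0.5/32, and port 9200 are assumptions, not values from this document), an egress policy for traffic to an external full-text indexing engine could look like this:
```yaml
# Hypothetical egress policy allowing FNCM-labeled pods to reach an external
# OpenSearch endpoint. Replace the CIDR and port with your service's values.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: fncmdeploy-fncm-egress-external-opensearch
  namespace: fncm-project
spec:
  podSelector:
    matchLabels:
      com.ibm.fncm.networking/egress-external-app-component: "FNCM"
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.5/32
    ports:
    - protocol: TCP
      port: 9200
```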
Tip: The IP address of an external service can change over time. When an IP address changes, you must update the egress network policy to prevent connection issues. To update an IP address in an egress network policy, use the following steps:
1. Retrieve the new IP address for the host, for example by using nslookup:
```shell
nslookup <hostname>
```
2. Extract the definition of the existing egress network policy by running the following command:
```shell
oc get networkpolicy <network policy that needs to be updated> -oyaml > update_np.yml
```
3. Edit the update_np.yml file to remove everything under metadata, except metadata.labels, metadata.name, and metadata.namespace. If you installed the yq utility, then you can combine the get and edit steps by running the following command:
```shell
oc get networkpolicy <network policy that needs to be updated> -oyaml | \
  yq -r 'del(.metadata.annotations,.metadata.creationTimestamp,.metadata.generation,.metadata.resourceVersion,.metadata.uid,.status)' \
  > update_np.yml
```
4. Edit the update_np.yml file to include the new IP addresses.
5. Apply the updated network policy by running the following command:
```shell
oc apply -f update_np.yml
```
- Transport Layer Security (TLS) and cipher configuration
OCP relies on the OpenShift Ingress Controller (Router) for the TLS and cipher configuration. To configure TLS and ciphers, create a custom TLS profile. For more information, see Ingress controller TLS profiles. The following command provides an example configuration.
```shell
oc patch --type=merge \
  --namespace openshift-ingress-operator ingresscontrollers/default \
  --patch '{"spec": {"tlsSecurityProfile": {"minTLSVersion": "VersionTLS12", "type": "Custom", "custom": {"ciphers": ["ECDHE-ECDSA-AES256-GCM-SHA384", "ECDHE-ECDSA-AES128-GCM-SHA256", "ECDHE-RSA-AES256-GCM-SHA384", "ECDHE-RSA-AES128-GCM-SHA256", "AES256-GCM-SHA384", "AES128-GCM-SHA256"], "minTLSVersion": "VersionTLS12"}}}}'
```
The following information applies to the configuration:
- A custom TLS profile must be created. The other predefined TLS profiles (Old, Intermediate, Modern) do not accept a list of explicit ciphers, and they expose known TLS vulnerabilities.
- The list of ciphers does not include any CBC ciphers. It includes two ciphers for Elliptic Curve certificates (the first two ciphers) and four ciphers for RSA certificates.
- The configuration needs to be applied once per cluster.
- The configuration can be displayed by running the following command:
```shell
oc get ingresscontroller/default -n openshift-ingress-operator -o yaml
```
The command output includes a "tlsProfile" section with the pertinent fields, which confirms that the patch worked.
- You can also verify the configuration by using a browser to inspect the negotiated cipher, or by using other known TLS tests.
- Protection of secrets and data
- Encryption of Kubernetes secrets
By default, the Kubernetes API server stores secrets as base64-encoded plain text in etcd. Cluster administrators must take extra steps to encrypt secret data at rest. For more information, see Encrypting Secret Data at Rest. For extra security, you might also want to use an external Key Management Service (KMS) provider.
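A minimal sketch of such an at-rest encryption configuration for the API server (assuming a self-managed cluster where you control the kube-apiserver flags; the key value is an illustrative placeholder):
```yaml
# EncryptionConfiguration passed to kube-apiserver via --encryption-provider-config.
# Secrets written after this is applied are stored AES-CBC encrypted in etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder; generate your own key
  - identity: {}   # fallback so existing plain-text secrets remain readable
```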
- Encryption of data in transit
The container deployment uses a number of databases. Database connections from the consumers to the databases are internal cluster connections, and the network policy forbids egress traffic for pods that run a database.
By default, all components configure the database connections with Secure Sockets Layer (SSL). If you use a component that provides a configuration setting to enable SSL, then you must enable it.
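As an illustration of the egress restriction mentioned above (a hedged sketch, not the operator's actual policy: the label app: fncm-db and the namespace are assumptions), a policy that forbids all egress from database pods could look like this:
```yaml
# Hypothetical policy denying all egress from pods labeled as database pods.
# Selecting pods under policyTypes: [Egress] with no egress rules blocks all
# outgoing traffic from the selected pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-from-db
  namespace: fncm-project
spec:
  podSelector:
    matchLabels:
      app: fncm-db   # assumed label; match your database pods
  policyTypes:
  - Egress
```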
- Encryption of data at rest
If sensitive information is generated and stored in a database, it is recommended to encrypt the database content. For more information about native database encryption, see your database documentation. The encryption of data at rest is transparent to the FNCM applications.
- Encryption of persistent volumes (PVs)
Persistent volumes must be configured on a cluster as part of the preparation for a deployment. PVs need to be encrypted at the disk level to protect the data that is stored in them. Several vendors provide encryption solutions. It is also possible to use disk encryption, such as Linux® Unified Key Setup (LUKS). For more information, see Configuring LUKS: Linux Unified Key Setup.
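For reference (a sketch of the basic LUKS workflow on a node's data disk; /dev/sdb, the mapper name, and the filesystem are placeholders, and the exact procedure depends on your storage provisioner):
```shell
# Initialize LUKS encryption on the block device backing the persistent volume,
# open it as a mapped device, and create a filesystem on the mapped device.
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb pv_encrypted
mkfs.xfs /dev/mapper/pv_encrypted
```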