Configuring cluster security

Securing a cluster requires extra work and tools beyond Kubernetes itself to protect the communications between its access points. While Kubernetes automates many of the tasks that are needed to deploy containerized applications, it does not manage the security of a cluster.

About this task

Kubernetes can be used to secure a containerized application, enforce role-based access control (RBAC) policies, and restrict the ability of a container to access resources on the host server. Communications in Kubernetes need authentication to minimize the risk that an attack in one pod spreads to other pods.

The following sections provide guidance on how to configure your OpenShift clusters to protect and secure your data:

Secure custom routes and ingress

It is recommended to use secure communication protocols for services, so make sure that your custom routes and ingresses to the Cloud Pak for Business Automation endpoints are not configured with the unsecured port 9080.

The default routes and ingresses created by the Cloud Pak for Business Automation operator use the secured port 9443.
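
Tip: You can check the target port of each route by running a command like the following, where the namespace is a placeholder:

oc get routes -n <cp4ba-namespace> \
  -o custom-columns=NAME:.metadata.name,PORT:.spec.port.targetPort,TLS:.spec.tls.termination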

Proxy servers

By default, Cloud Pak for Business Automation uses fully qualified hostnames when it connects to external services. If you configured a proxy for outbound connections to external services, then set the NO_PROXY environment variable in the cluster to "*.svc". The variable enables all the CP4BA internal connections to work in a proxy-enabled environment.

NO_PROXY=*.svc
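
If the proxy is managed through the OpenShift cluster-wide Proxy object, the noProxy field carries this setting. The following sketch uses placeholder proxy URLs; OpenShift noProxy entries use suffix matching, so .svc covers the *.svc hostnames:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128    # placeholder outbound proxy
  httpsProxy: http://proxy.example.com:3128   # placeholder outbound proxy
  noProxy: .svc                               # keep internal *.svc connections off the proxy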
Network policies to manage access to services in the cluster (ingress)
Cloud Pak for Business Automation uses network policies to control the communications between its pods and the ingress traffic. You can explicitly allow the following traffic by adding further network policies:
  • Ingress traffic from OpenShift Ingress Controller (Router).
  • Ingress traffic from OpenShift Monitoring.
  • Traffic between pods in the same Cloud Pak for Business Automation namespace.
  • Example 1: The following example sets a network policy to allow ingress traffic from OpenShift Ingress Controller (Router):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-ingress
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
      podSelector: {}
      policyTypes:
      - Ingress
  • Example 2: The following example sets a network policy to allow ingress traffic from OpenShift Monitoring:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-monitoring
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: monitoring
      podSelector: {}
      policyTypes:
      - Ingress
    
  • Example 3: The following example sets a network policy to allow traffic between pods in the same Cloud Pak for Business Automation namespace:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-same-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector: {}
    Note: The pod selectors (podSelector) in these examples match all pods. Therefore, all the pods deny any traffic that is not explicitly allowed by the network policies of the namespace. The following example shows how to apply and verify the policies.
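
After you save the policies to YAML files, you can apply and verify them with standard oc commands; the file name and the namespace in this sketch are placeholders:

oc apply -f allow-from-openshift-ingress.yaml -n <cp4ba-namespace>
oc get networkpolicy -n <cp4ba-namespace>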
Deny-all network policies

If a deny-all network policy, such as the following NetworkPolicy, is defined in the cluster, delete it from the cluster.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Note: The CP4BA operator and its dependency operators create network policies to restrict egress during the installation and they cannot work properly if a deny-all policy exists. For more information, see Network policies to manage access to external services (egress).
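
For example, you can locate and remove such a policy with the following commands, where the policy name deny-all is a placeholder for the name that is used in your cluster:

oc get networkpolicy --all-namespaces | grep -i deny
oc delete networkpolicy deny-all -n <namespace>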
Network policies to manage access to external services (egress)
  • When the shared_configuration.sc_egress_configuration.sc_restricted_internet_access parameter in the CP4BA custom resource is set to "false", the following resources are created by the CP4BA operator:
    • A deny-all egress network policy ({{ meta.name }}-cp4a-egress-deny-all).
    • An allow-all egress network policy ({{ meta.name }}-cp4a-egress-allow-all).
    • An allow-same-namespace egress network policy in the CP4A deployment namespace ({{ meta.name }}-cp4a-egress-allow-same-namespace).
    • An egress communications policy for the Kubernetes API and DNS services ({{ meta.name }}-cp4a-egress-allow-k8s-services).
    • A directory service server or LDAP egress network policy ({{ meta.name }}-cp4a-egress-allow-ldap).
    • Egress network policies for some databases associated with the CP4BA deployment.

    Where {{ meta.name }} is "icp4adeploy" by default for the CP4BA multi-pattern deployment and "content" for CP4BA FileNet Content Manager only deployments.

    Attention: When your OpenShift cluster uses OpenShift software-defined networking (SDN) as the network plug-in type, the allow-same-namespace and allow-k8s-services egress network policies for the Cloud Pak for Business Automation deployment use an empty pod selector (podSelector: {}). If non-CP4BA pods run in the same namespace as CP4BA, then they might be affected by the empty pod selectors. If needed, create your own egress network policy to allow outgoing traffic from the non-CP4BA pods.
  • When shared_configuration.sc_egress_configuration.sc_restricted_internet_access is set to true on an OCP cluster that has containerized databases in a different namespace from the CP4BA namespace, the following egress network policy must also be created.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: cp4a-egress-for-db
    spec:
      podSelector:
        matchLabels:
          com.ibm.cp4a.networking/egress-allow-k8s-services: 'true'
      egress:
        - to:
            - podSelector: {}
              namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: "<DB NAMESPACE>"
      policyTypes:
        - Egress

    Replace <DB NAMESPACE> with the name of the namespace of the containerized databases.
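
    The kubernetes.io/metadata.name label is set automatically by Kubernetes on every namespace, so the selector works without any extra labeling. You can confirm the label with the following command:

    oc get namespace <DB NAMESPACE> --show-labels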

  • When shared_configuration.sc_egress_configuration.sc_restricted_internet_access is set to true, the same network policies are used as when the value is false, except the allow-all policy. For some of the capabilities to work properly with external services, modify the default access by creating egress network policies. You can create a single network policy that covers all the capabilities, or individual network policies for each capability or component.

    Note: When the shared_configuration.sc_drivers_url parameter has a valid value, a {{ meta.name }}-cp4ba-operator-default-egress network policy is also created to provide access to the server that is defined in the parameter value.

    If you decide to change the sc_restricted_internet_access value from false to true to restrict your network, communications of the various components of CP4BA with external services might be temporarily interrupted as the change can take some time to propagate. The {{ meta.name }}-cp4a-egress-allow-all network policy is removed from the cluster.
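
    For reference, the parameter is nested under shared_configuration in the CP4BA custom resource. A minimal excerpt might look like the following sketch:

    spec:
      shared_configuration:
        sc_egress_configuration:
          sc_restricted_internet_access: true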

    • Option 1: Create a custom "catch-all" egress network policy

      The following example shows a "catch-all" network policy that uses the podSelector.matchLabels parameter set to com.ibm.cp4a.networking/egress-external-app: "true". The policy allows all the CP4A components to communicate with the addresses that are defined in the ipBlock entries.

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: "{{ meta.name }}-cp4a-egress-external-app"
        namespace: "{{ meta.namespace }}"
        labels:
          app.kubernetes.io/name: "{{ meta.name }}-cp4a-egress-external-app"
      spec:
        podSelector:
          matchLabels:
            com.ibm.cp4a.networking/egress-external-app: "true"
        policyTypes:
        - Egress
        egress:
        - to:
          - ipBlock:
              cidr: # IP address of your external application. For example: 1.2.3.4/32
          ports:
          - protocol: # Protocol: UDP or TCP
            port: # Port number
        - to:
          - ipBlock:
              cidr: # IP address of your external application. For example: 1.2.3.4/32
          ports:
          - protocol: # Protocol: UDP or TCP
            port: # Port number

      Where {{ meta.name }} is "icp4adeploy" by default for the CP4BA multi-pattern deployment and "content" for CP4BA FileNet Content Manager only deployments. The {{ meta.namespace }} value is the namespace of your CP4BA deployment (for example, cp4ba-project).

      Tip: Tools such as dig or nslookup can be used to do a DNS lookup on a host to retrieve the IP address.
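
      For example, assuming a hypothetical external host db.example.com:

      dig +short db.example.com
      nslookup db.example.com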
    • Option 2: Create a custom "component-level" egress network policy

      The following example shows a "component-level" network policy that uses the podSelector.matchLabels parameter set to com.ibm.cp4a.networking/egress-external-app-component: "BAN". The network policy allows the CP4A component to communicate with the addresses that are defined in the ipBlock.

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: "{{ meta.name }}-cp4a-egress-external-app-component"
        namespace: "{{ meta.namespace }}"
        labels:
          app.kubernetes.io/name: "{{ meta.name }}-cp4a-egress-external-app-component"
      
      spec:
        podSelector:
          # matchExpressions:
          #   - key: com.ibm.cp4a.networking/egress-external-app-component
          #     operator: In
          #     values:
          #       - BAN
          #       - BAS
          #       - ADP
          matchLabels:
            com.ibm.cp4a.networking/egress-external-app-component: "BAN"
        policyTypes:
        - Egress
        egress:
        - to:
          - ipBlock:
              cidr: 1.2.3.4/32
          ports:
          - protocol: TCP
            port: 443

      The following table lists the capabilities that might need to access external services.

      Table 1. Capability labels for the pod selector

      EDB Postgres
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "EDB"
        If the EDB Postgres option is enabled for the CP4BA deployment and you want to give the EDB pod access to an external location for backup and restore, then use "EDB" as the component name.

      Automation Document Processing
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "ADP"
        • Use ADP as the component name if you are configuring egress to an external application for the webhook feature.
        • If you have an external GitHub connection, configure the egress for GitHub.
        • If you have two environments for ADP, one for Authoring and one for Runtime, then create a custom egress network policy rule that allows the Content Project Deployment Service (CPDS) to access the remote Content Designer Repo API (CDRA). Perform an nslookup on the CDRA host to retrieve its IP address.

      Business Automation Navigator
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "BAN"
        The following plug-ins, which are located in /opt/ibm/intPlugins, connect to external services:
        • ./eDS/edsPlugin.jar
        • ./BAI/BAIPlugin.jar
        • ./iccsap/iccsapPlugin.jar
        • ./SSO/application-discovery.jar
        • ./SSO/platformPlugin.jar
        • ./sharePoint/sharePointOnlinePlugin.jar
        • ./DocuSign/DocuSignPlugin.jar

        Deployments that use an external data service for edsPlugin must allow its endpoint.

        The application-discovery and platformPlugin plug-ins must be able to connect to the App Engine pods.

        Other plug-ins that might need to connect outside of the cluster include:
        • Send mail SMTP services
        • Microsoft Office for the web
        • SharePoint Online services
        • DocuSign
        • CM8
        • CMOD
        • Box

      FileNet® Content Manager
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "FNCM"
        If you extend your deployment to include external services that are not deployed within the same FileNet P8 platform namespace as your deployment, create additional egress network policies by using the component name "FNCM".

        The following examples are external services that are not deployed within the same namespace as the FileNet P8 platform deployment:
        • Custom code that calls out to external services and is incorporated into CPE as code modules, associated with component queues, or added to the classpath. Also, custom code that is integrated as BAN plug-ins that use external services.
        • Full-text indexing engines such as OpenSearch or Elasticsearch.
        • Storage devices or storage technologies that are accessible to the CPE component through APIs that do not use the Kubernetes storage provisioners and persistent volumes.
        • Email services that are integrated with the CPE component and BAN.
        • IBM products associated with the FileNet P8 platform, such as Content Collector and Enterprise Records, that have tools or services residing outside of the Kubernetes cluster, where the Kubernetes-deployed services use call-backs or act as a client.
        • External data services that are invoked from Process Engine (WSInvoke) or BAN (Navigator External Data Services).
        • Database services that are not listed in the CR and are directly invoked from custom code or by using the FileNet Process Engine DBExecute function.
        • IBM products with deployments on WebSphere traditional that use the services from FNCM or BAN from a containerized deployment, such as IBM Content Manager OnDemand or IBM Business Automation Workflow. Components in the Kubernetes-based FNCM or BAN deployments must be able to call out to the services that are hosted in WebSphere traditional.
        • Webhooks as a part of a CS-GraphQL application.
        • External key management services.
        • Multiple deployments of FNCM in distinct OCP clusters to support a geographically dispersed FileNet P8 domain.
        • Legacy deployments of the FileNet P8 platform that are based on application servers and exist side by side within the same FileNet P8 domain.

      Content Collector for SAP
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "CCSAP"
        An SAP server that is hosted on the same network as the OCP cluster.

      Business Automation Studio
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "BAS"
        Use BAS as the component name if you are configuring egress to an external application.

      Business Automation Workflow
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "BAW"
        Use BAW as the component name if you are configuring egress to an external application.

      Business Automation Application
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "BAA"
        Use BAA as the component name if you are configuring egress to an external application.

      Workflow Process Service
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "WfPS"
        Use WfPS as the component name if you are configuring egress to an external application.

      Automation Decision Services
        matchLabels: com.ibm.cp4a.networking/egress-external-app-component: "ADS"
        Use ADS as the component name if you are configuring egress to:
        • An external Mongo instance
        • A Git server that is configured by users or administrators
        • An S3 instance
        • A Kafka instance
        • An external libraries repository
        • Machine learning providers
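
      If one external service is used by several capabilities, you can select multiple component labels in a single policy with matchExpressions, as shown in the commented-out lines of the previous example. The following sketch uses a placeholder address and port:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: cp4a-egress-external-app-shared
      spec:
        podSelector:
          matchExpressions:
          - key: com.ibm.cp4a.networking/egress-external-app-component
            operator: In
            values:
            - BAN
            - BAS
        policyTypes:
        - Egress
        egress:
        - to:
          - ipBlock:
              cidr: 1.2.3.4/32   # placeholder address of the shared external service
          ports:
          - protocol: TCP
            port: 443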
    Tip: The IP address of an external service can change over time. When an IP address changes, you must update the egress network policy to prevent connection issues. To update an IP address of an egress network policy, use the following steps:
    1. Retrieve the new IP address for a host. For example, by using nslookup:
      nslookup <hostname>
    2. Extract the definition of the existing egress network policy by running the following command:
      oc get networkpolicy <network policy that needs to be updated> -oyaml > update_np.yml
    3. Edit the update_np.yml file to remove everything under "metadata", except "metadata.labels", "metadata.name", and "metadata.namespace". If you installed the yq utility, then you can combine the get and edit commands by running the following command:
      oc get networkpolicy <network policy that needs to be updated> -oyaml | \
      yq -r 'del(.metadata.annotations, .metadata.creationTimestamp, .metadata.generation, .metadata.resourceVersion, .metadata.uid, .status)' \
      > update_np.yml
    4. Edit the update_np.yml to include the new IP addresses.
    5. Apply the updated network policy by running the following command:
      oc apply -f update_np.yml
    Attention: The Operational Decision Manager capability does not honor the egress-external-app-component label to override the default egress and ingress settings. The default network policy for Operational Decision Manager allows all outgoing communication (egress), so you must configure a different network policy to restrict the ingress and egress traffic. For more information, see Configuring the network policy.
  • If you see connection issues to external services, such as databases or an LDAP server, after you install your CP4BA deployment, create an egress network policy to allow communication between your CP4BA deployment namespace and the namespaces of the external services.

    The following YAML provides an example egress network policy to allow the CP4BA deployment pods to communicate with the external services in different namespaces.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: access-to-ldap-db-ocp
      namespace: <cp4ba-namespace>
    spec:
      podSelector: {}
      egress:
        - ports:
            - protocol: TCP
              port: 5432
            - protocol: TCP
              port: 636
          to:
            - podSelector: {}
              namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: <external services namespace>
      policyTypes:
        - Egress 
Transport Layer Security (TLS) and cipher configuration

OCP relies on the OpenShift Ingress Controller (Router) for the TLS and cipher configuration. To configure TLS and ciphers, create a custom TLS profile. For more information, see Ingress controller TLS profiles. The following command provides an example configuration.

oc patch --type=merge \
--namespace openshift-ingress-operator ingresscontrollers/default \
--patch \
'{"spec":
    {"tlsSecurityProfile":
        {
        "type": "Custom",
        "custom": {
            "ciphers": ["ECDHE-ECDSA-AES256-GCM-SHA384",
                        "ECDHE-ECDSA-AES128-GCM-SHA256",
                        "ECDHE-RSA-AES256-GCM-SHA384",
                        "ECDHE-RSA-AES128-GCM-SHA256",
                        "AES256-GCM-SHA384",
                        "AES128-GCM-SHA256"],
            "minTLSVersion": "VersionTLS12"}
        }
    }
}'
Note the following information about the configuration.
  • A custom TLS profile must be created. The other predefined TLS profiles (Old, Intermediate, Modern) do not accept a list of explicit ciphers and can expose known TLS vulnerabilities.
  • The list of ciphers does not include any CBC ciphers. It includes two ciphers for Elliptic Curve certificates (the first two ciphers) and four ciphers for RSA certificates.
  • The configuration needs to be applied once per cluster.
  • The configuration can be displayed by running the following command.
    oc get Ingresscontroller/default -n openshift-ingress-operator -o yaml

    The command output includes a "tlsProfile" section with the pertinent fields, which confirms that the configuration was applied.

  • You can also verify the configuration by using a browser to visualize the chosen cipher, or use other known TLS tests.
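
    For example, with the OpenSSL client you can confirm the negotiated protocol and cipher; the route hostname is a placeholder:

    openssl s_client -connect <route-hostname>:443 -tls1_2 </dev/null 2>/dev/null | grep Cipher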
Protection of secrets and data
Encryption of Kubernetes secrets

By default, the Kubernetes API server stores secrets as base64-encoded, unencrypted data in etcd.

Cluster administrators need to do extra steps to encrypt secret data at rest. For more information, see Encrypting Secret Data at Rest. For extra security, you might also want to use an external Key Management Service (KMS) provider.
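
On OpenShift, etcd encryption is enabled through the cluster APIServer resource. The following sketch shows the relevant fields; aesgcm is also available on newer OCP versions:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc   # encrypts secrets and other sensitive resources that are stored in etcd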

Encryption of data in transit

IBM Cloud Pak® for Business Automation uses a number of databases. Database connections from the consumers to the databases are internal cluster connections, and the network policy forbids egress traffic for pods that run a database.

By default, all components configure the database connections with Secure Sockets Layer (SSL). If you use a component that provides a configuration setting to enable SSL, then you must enable it.
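
For example, the datasource sections of the CP4BA custom resource carry SSL settings. The following excerpt is a sketch only; parameter names, such as dc_ssl_enabled and database_ssl_secret_name, can vary by release and capability, so verify them against the configuration reference for your version:

spec:
  datasource_configuration:
    dc_gcd_datasource:
      dc_ssl_enabled: true                            # use SSL for the GCD database connection
      database_ssl_secret_name: ibm-fncm-ssl-secret   # assumed name of the secret that holds the database certificate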

Encryption of data at rest

If sensitive information is generated and stored in a database for IBM Cloud Pak for Business Automation, it is recommended to encrypt the database content. For more information about native database encryption, see the documentation for your database. The encryption of data at rest is not visible to the IBM Cloud Pak for Business Automation applications.

Encryption of persistent volumes (PVs)

Persistent volumes must be configured on a cluster as part of the preparation for a deployment. PVs need to be encrypted at the disk level to protect the data that is stored in them. Several vendors provide encryption solutions. It is also possible to use disk encryption, such as Linux® Unified Key Setup (LUKS). For more information, see Configuring LUKS: Linux Unified Key Setup.
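
For example, on clusters that use the AWS EBS CSI driver, a StorageClass can request volumes that are encrypted at the disk level; the class name is a placeholder:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com        # assumes that the AWS EBS CSI driver is installed
parameters:
  type: gp3
  encrypted: "true"                 # provision disk-encrypted EBS volumes
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer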