Docker image for Security Verify Access

The Security Verify Access Docker image contains the services that can be used to configure the Security Verify Access environment for Docker.

Consider the following points when you start a container.
  • The container is started as the 'isam' user (UID: 6000). In a standard Docker environment this happens automatically, but in a Kubernetes environment the security context must be set to allow the container to start as this particular user.
  • The following Linux® capabilities are required by the container (these capabilities are allowed by default in a standard Docker environment):
    • CHOWN
    • DAC_OVERRIDE
    • FOWNER
    • KILL
    • NET_BIND_SERVICE
    • SETFCAP
    • SETGID
    • SETUID
  • The following environment variables are used by the container:
    CONTAINER_TIMEZONE
    The time zone that is used by the container. For example, "Australia/Brisbane".
    SNAPSHOT
    The name of the configuration data snapshot that is to be used when the container is started. It defaults to the newest published configuration.
    SNAPSHOT_ID
    The identifier of the snapshot that is used by the container. The full snapshot name is constructed as:
    'isva_<product_version>_<snapshot_id>.snapshot'
    If no identifier is specified, an identifier of 'published' is used. If a full snapshot name is specified by using the SNAPSHOT environment variable, this variable is ignored.
    Note: This environment variable is not available before version 10.0.3.0.
    CONFIG_SNAPSHOT_SECRETS
    The ordered list of secrets that is used to encrypt the configuration snapshot file. The secrets in the list are separated by the || (two pipe) characters, and each secret must be longer than 16 characters. If more than one secret is defined, the first secret in the list is used to encrypt the configuration snapshot file, and each secret in the list is tried in turn when decrypting it. If the configuration snapshot cannot be decrypted with any of the secrets, the container fails to bootstrap. If no configuration snapshot secrets are defined, the configuration snapshot file is not encrypted.
    Note: If the secret that is used to encrypt a snapshot is lost, the snapshot cannot be recovered.
    FIXPACKS
    A space-separated ordered list of fix packs to be applied when the container is started. If this environment variable is not present, any fix packs present in the fixpacks directory of the configuration volume are applied in alphanumeric order.
    CONFIG_SERVICE_URL
    The URL to which the snapshot data is published. When an administrator chooses to publish a snapshot, the generated snapshot file is sent, by way of an HTTP POST operation, to the specified service. Multiple services can be specified as a comma-separated list.
    CONFIG_SERVICE_USER_NAME
    The user that is used when a snapshot is published to a remote service.
    CONFIG_SERVICE_USER_PWD
    The password for the user that is used when a snapshot is published to a remote service.
    CONFIG_SERVICE_TLS_CACERT
    The CA certificate bundle that is used to verify connection to the configuration snapshot service. Valid values for this property are:
    file:<file.pem>
    The file prefix and the path to a PEM formatted certificate bundle. For example: file:/path/to/ca.pem
    disabled
    Disable certificate verification for the configuration service.
    operator
    Use the Kubernetes service account CA certificate that the Kubernetes/OpenShift PKI infrastructure provides. The service account must have permission to read secrets in the namespace that the Verify Access container is deployed to.
    ADMIN_PWD
    The initial seeded password for the built-in 'admin' user that is used when the configuration service is accessed. If this parameter is not specified, the default password 'admin' is used.
    Note: If this environment variable is not supplied, it is strongly recommended to change the password by using the local management interface or REST API after the container starts.
    Note: This environment variable is not available before version 9.0.5.0.
    USE_CONTAINER_LOG_DIR
    This environment variable, if set to any value, indicates that the log files are written to a container-specific logging directory (underneath the '/var/application.logs' path). This allows multiple container replicas to write log information to the same persistent volume. An alternative, in a Kubernetes environment, is to deploy the containers in a 'StatefulSet'. For information about StatefulSets, see the official Kubernetes documentation.
    Note: This environment variable is not available before version 10.0.0.0.
    VERIFY_FILES
    This environment variable, if set to any value, causes the container to verify all binary files in the container at start-up to ensure that they were not modified. If this variable is not set, the files are not checked during start-up, which decreases the time that the container takes to start but means that tampering with the binary files on the file system is not detected.
    LANG
    The language in which messages that are sent to the console are displayed. If no language is specified, the messages appear in English. The following table lists the supported languages:
    Language Environment Variable Value
    Czech cs_CZ.utf8
    German de_DE.utf8
    Spanish es_ES.utf8
    French fr_FR.utf8
    Hungarian hu_HU.utf8
    Italian it_IT.utf8
    Japanese ja_JP.utf8
    Korean ko_KR.utf8
    Polish pl_PL.utf8
    Portuguese (Brazil) pt_BR.utf8
    Russian ru_RU.utf8
    Chinese (Simplified) zh_CN.utf8
    Chinese (Traditional) zh_TW.utf8
    LOGGING_CONSOLE_FORMAT
    The required format for the log messages. Valid values are basic or json. The default value is basic.
    LOG_TO_CONSOLE
    The types of messages, as a space-separated list, that are logged to the console. The following table lists the valid message types.
    Message type Description
    policy.server If set, the policy server message log is sent to the console of the configuration container.
    policy.server.audit.azn If set, the policy server-auditing log for the audit.azn component is enabled and sent to the console of the configuration container.
    Note: If this message type is enabled, the policy.server type is automatically enabled.
    policy.server.audit.authn If set, the policy server-auditing log for the audit.authn component is enabled and sent to the console of the configuration container.
    Note: If this message type is enabled, the policy.server type is automatically enabled.
    policy.server.audit.mgmt If set, the policy server-auditing log for the audit.mgmt component is enabled and sent to the console of the configuration container.
    Note: If this message type is enabled, the policy.server type is automatically enabled.
    system.alerts If set, all system alerts are sent to the console of the configuration container, and to the destinations that are configured by using the web console.
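
Putting several of these variables together, a configuration container start in a standard Docker environment might look like the following sketch. The image reference, volume name, and password are illustrative, not defaults:

```shell
docker run --detach --name isva-config \
  -e CONTAINER_TIMEZONE=Australia/Brisbane \
  -e SNAPSHOT_ID=published \
  -e ADMIN_PWD='choose-a-strong-password' \
  -e LOGGING_CONSOLE_FORMAT=json \
  -e LOG_TO_CONSOLE='policy.server system.alerts' \
  -p 9443:9443 \
  -v isva-shared:/var/shared \
  icr.io/isva/verify-access:latest
```

The published port (9443) is the LMI HTTPS listening port, and '/var/shared' is the mount point of the shared configuration volume.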

Consider the following points about user registry support when you configure Verify Access in a Docker environment:

  • The embedded user registry can be used only to house the secAuthority=Default suffix in conjunction with basic users. If full Security Verify Access users are required, the secAuthority=Default suffix must be stored in an external user registry.
  • An external user registry is always required for the user suffix. Configure the external user registry as a federated user registry if the embedded user registry is being used for the secAuthority=Default suffix.

Migrating an appliance to Docker

To migrate your appliance to the Docker environment, you can create a snapshot of the appliance in its original environment and import the snapshot into a running Security Verify Access configuration container.

You can import a snapshot from an appliance only if the following conditions are met.

  • For a Security Verify Access Base only activation, the snapshot was taken on version 9.0.0.0 or later. For an Advanced Access Control or Federation activation, the snapshot was taken on version 9.0.2.0 or later.
  • The appliance was configured with an embedded configuration database and an external runtime database.
  • The appliance runtime environment was using an external LDAP server. Alternatively, if the appliance was running Security Verify Access 9.0.4.0, an embedded LDAP server can be used if the "wga_rte.embedded.ldap.include.in.snapshot" advanced tuning parameter was set to true before the snapshot was generated.

When a snapshot from an appliance is imported to a Docker container:

  • The LMI HTTPS listening port is rewritten to 9443.
  • Any reverse proxy instances have their HTTPS and HTTP ports rewritten to 9443 and 9080.

Migrating to the Lightweight container

From IBM Security Verify Access v10.0.4, the verify-access Docker image can be started only as a configuration container. If you attempt to start this image with the SERVICE environment variable set to runtime, webseal, or dsc, the container displays an error message and then stops. The corresponding lightweight Docker images in the following table are used to provide the worker services for IBM Security Verify Access:
Image name Worker service
verify-access-dsc Distributed session cache
verify-access-runtime Runtime profile (Federation, Advanced Access Control)
verify-access-wrp Web Reverse Proxy (also known as WebSEAL)
Take note of the following points when you are migrating to the new lightweight Docker containers:
  • The provided container service listens on port 9443 by default, whereas it listens on port 443 when the legacy verify-access image is used.
  • The lightweight containers do not require any elevated container security capabilities and privileges (for example: the SETUID capability is not required).
  • Logging records are sent to the console in JSON format so that the container logging infrastructure can manage the logging records. No support is provided for natively forwarding logging messages to a remote syslog server.
  • The wrpadmin binary is provided, in the verify-access-wrp image, as an alternative to the legacy pdadmin binary for managing aspects of the running WebSEAL process.
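
As an illustration, a Web Reverse Proxy worker might be started from its lightweight image as follows. The image reference, the INSTANCE environment variable, and the volume name are assumptions for this sketch, not documented defaults; check the image documentation for the exact variables that your version requires:

```shell
docker run --detach --name isva-wrp \
  -e INSTANCE=default \
  -p 9443:9443 \
  -v isva-shared:/var/shared \
  icr.io/isva/verify-access-wrp:latest
```

Port 9443 is the default listening port of the lightweight container service.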

Restrictions

Security Verify Access, when run in a Docker environment, has the following restrictions:

  • Any configuration changes require the service containers to be reloaded. You can use the CLI to trigger a manual reload. Changes to the Federation configuration and the policy database do not result in any service downtime. Changes to junction definitions and Web Reverse Proxy configuration result in minimal service downtime while the Web Reverse Proxy is restarted. See CLI in a Docker environment.
  • The authorization server (pdacld) is not supported.
  • The front-end load balancer capability of the Security Verify Access appliance is not supported.
  • The IP reputation policy information point (PIP) capability of Advanced Access Control is not supported.
  • A sample geo-location database is not provided. If a sample geo-location database is required, obtain it from the downloads area of a running virtual or hardware appliance. See Updating location attributes.
  • Preinstalled federation partner templates are not provided. See Managing federation partner templates. The connector package is available from the IBM Security App Exchange site (https://www.ibm.com/security/community/app-exchange) as the 'IBM Security Access Manager Extension for SAML Connectors' package.
  • Web Reverse Proxy flow data and PAM statistics are not supported.
  • The embedded user registry can be used only to hold static data; it is not used to hold any user data. As a result, the embedded user registry is used only in conjunction with a federated registry, which stores the user data and basic users. The Security Verify Access integration component of the SCIM support is not available if the embedded user registry is in use.
  • Authentication that uses RSA SecurID tokens is not supported.
  • The container cannot be run from within a Docker user namespace.
  • A few differences exist when junctions are managed with the configuration container.
    • Validation of junction server connectivity does not take place when a junction is created.
    • Fine-grained authorization checks on junction management operations, and on policy object space operations, do not take place. This means that any administrator who can authenticate to the policy server (by using, for example, pdadmin) can manage junctions and the Web Reverse Proxy policy object space.

Shared configuration data

The shared configuration volume is a section of the file system that is reserved for the storage of data to be shared among multiple containers. The data on the shared configuration volume is persisted even if the containers are deleted.

The shared configuration volume is mounted in a Security Verify Access container at '/var/shared'. Snapshots, support files, and fix packs are stored in this volume. To manage these files, you can use the System > Network Settings > Shared Volume page of the configuration container LMI.

Snapshots

Snapshots are located in the snapshots directory of the configuration volume.

When a snapshot is published from the configuration container, it is stored on the shared volume. When a runtime container is started, it uses the snapshot to perform configuration and bootstrap successfully. Snapshots can be created only by using the configuration container, though an administrator can also manually add or remove snapshots by directly accessing the Docker volume.
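
The snapshot file that a container looks for follows the naming pattern given earlier for the SNAPSHOT_ID environment variable. The following is a minimal sketch of how that name is assembled; PRODUCT_VERSION is an illustrative placeholder (the container derives the real value internally):

```shell
# Assemble the snapshot file name: isva_<product_version>_<snapshot_id>.snapshot
unset SNAPSHOT_ID                          # pretend no identifier was supplied
PRODUCT_VERSION="10.0.4.0"                 # illustrative product version
SNAPSHOT_ID="${SNAPSHOT_ID:-published}"    # 'published' when no identifier is given
SNAPSHOT_FILE="isva_${PRODUCT_VERSION}_${SNAPSHOT_ID}.snapshot"
echo "$SNAPSHOT_FILE"
```

With no identifier supplied, this yields the default published snapshot name for the given product version.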

Fix packs

Fix packs are located in the fixpacks directory of the configuration volume.

When a container is started, fix packs that are specified in the FIXPACKS environment variable will be applied in the order that they are specified. If the FIXPACKS environment variable is not present, any fix packs present in the fixpacks directory of the configuration volume will be applied in alphanumeric order.
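
The ordering rules can be sketched as follows; the fix pack file names are made up for the illustration:

```shell
# Fix packs found in the fixpacks directory of the configuration volume.
available="fp_0002.fixpack fp_0003.fixpack fp_0001.fixpack"

# With FIXPACKS set, the listed order wins; otherwise, apply everything
# in the directory in alphanumeric order.
FIXPACKS="fp_0003.fixpack fp_0001.fixpack"
if [ -n "${FIXPACKS:-}" ]; then
  apply_order="$FIXPACKS"
else
  apply_order=$(printf '%s\n' $available | sort | xargs)
fi
echo "$apply_order"
```

Note that the explicit FIXPACKS order is applied verbatim, even when it differs from the alphanumeric order of the files on disk.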

To manage fix packs, you can either access the Docker volume manually, or use the System > Network Settings > Shared Volume page of the configuration container LMI. On the Shared Volume page, you can view the contents of the fixpacks directory of the configuration volume, upload, delete, or rename fix packs.

The System > Updates and Licensing > Fixpack LMI page is read-only in a Docker environment. You can use that page to see which fix packs were applied, but cannot use it to apply or roll back fix packs.

Log files

By default, Docker uses a layered file system to help reduce the disk space utilization of the Docker containers. However, this file system has slower write speeds than standard file systems. As such, a standard Docker practice is to place any files that are updated frequently (for example, log files) on a shared volume. All of the log files that are used by Security Verify Access are located in the '/var/application.logs' directory. Therefore, the recommended approach is to create this directory as a shared volume when you create your container.
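
For example, the log directory might be supplied as a named volume when the container is created; the volume, container, and image names in this sketch are illustrative:

```shell
docker volume create isva-logs
docker run --detach --name isva-config \
  -v isva-logs:/var/application.logs \
  icr.io/isva/verify-access:latest
```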

You can view the log files through the Monitor > Application Log Files panel of the LMI.

Multiple containers must not reference the same persistent volume for log storage, otherwise multiple containers attempt to write to the same log file at the same time, which causes data write and integrity issues. In a Kubernetes environment, this problem can be overcome by deploying the containers in a StatefulSet (see the official Kubernetes documentation for information on StatefulSets). An alternative is to set the USE_CONTAINER_LOG_DIR environment variable in the container. When this variable is set, the log files are written to a container-specific log subdirectory. This environment variable is not available before version 10.0.0.0.
Note: In IBM Security Verify Access version 9.0.7.0, a container-specific log subdirectory is always used.

The log file directory structure is shown in the following table.

Table 1. Logs directory structure
Log file Subdirectory (relative to the root log directory)
Local management interface log files lmi
Security Verify Access policy server log and trace files isam_runtime/policy
Embedded User Registry log files isam_runtime/user_registry
System log files system
Remote system log forwarder files rsyslog_forwarder
Note: The recommended approach is to configure Security Verify Access to send the log files to a remote syslog server wherever possible.