Sending log output to S3 Cloud Object Storage

You can send server bundle logs for Maximo® Manage to Simple Storage Service (S3) Cloud Object Storage. You can also redirect logs from the administrative server to S3 Cloud Object Storage.

Before you begin

You must provision an S3 Cloud Object Storage location and note its S3 bucket name, endpoint URL, S3 access key properties, and the value of the S3 secret. If you intend to send server bundle logs to S3, you must have or obtain an API key in the Administrative Work Center.

About this task

The Maximo Manage application is deployed in one or more workloads, which are called server bundles. Server bundles include the ui, all, mea, reports, and cron types. Server bundle logs can be uploaded from your log location on WebSphere® Application Server Liberty to your S3 location.

The administrative server handles commands for the updatedb, maxinst, and integrity checker utilities. Administrative server logs can be redirected to your S3 location.

Procedure

  1. In your Red Hat® OpenShift® Container Platform deployment, go to Workloads > Secrets and add a key and value secret for your S3 configuration.
    1. For the secret name, specify s3secretkey.
    2. Set the key name to accessSecretKey.
    3. Get the value from your S3 configuration.
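    The secret from step 1 can also be created from a manifest instead of the console. The following sketch builds an equivalent Kubernetes Secret manifest; the secret name (s3secretkey) and key (accessSecretKey) come from this procedure, and the value is a placeholder for your own S3 secret:

    ```python
    import base64

    def build_s3_secret_manifest(secret_value: str) -> dict:
        """Build a Secret manifest equivalent to the console steps above.

        The name (s3secretkey) and key (accessSecretKey) match this
        procedure; secret_value comes from your S3 configuration.
        """
        return {
            "apiVersion": "v1",
            "kind": "Secret",
            "metadata": {"name": "s3secretkey"},
            "type": "Opaque",
            # Kubernetes stores Secret data values base64-encoded.
            "data": {
                "accessSecretKey": base64.b64encode(
                    secret_value.encode()).decode()
            },
        }

    manifest = build_s3_secret_manifest("example-s3-secret-value")
    ```

    The resulting dictionary can be serialized to YAML or JSON and applied to the cluster with your usual tooling.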
  2. In the Administration section, modify the custom resource (CR) for the ManageWorkspace custom resource definition (CRD) by taking the following steps:
    1. Select the ManageWorkspace custom resource definition (CRD).
    2. Select the Instance tab, then select the custom resource (CR).
    3. On the YAML tab, add or modify the spec.deployment section and ensure that the custom resource (CR) for your instance contains the loggingS3Destination property and the following child properties, with values that match your S3 configuration:
      • bucketName: The S3 bucket name of the log destination.
      • endpointURL: The S3 endpoint URL. URLs for S3 can vary, depending on the S3 service that you use, such as a public, private, or regional URL service. Choose the URL that matches your deployment.
      • accessKey: The S3 access key.
      • secretKey: Contains a secretName child property, which is set to the name of the secret with the S3 access credential that you configured in Step 1.
      After you modify the CR for your instance, the values are propagated to the CRs for the administrative server and the server bundle pods. If any property or value is not specified, logs are not sent to S3 Cloud Object Storage.
      The following code sample for the CR shows the required properties and some fictitious values:
        deployment:
          loggingS3Destination:
            accessKey: 6xg5v1EacUtyMLXgzXbg
            bucketName: xinyutest
            endpointURL: https://s3.us.cloud-object-storage.appdomain.cloud
            secretKey:
              secretName: s3secretkey

    Logs that result from running the maxinst, updatedb, or integrity checker utilities are now redirected to your S3 storage location. Each modified log is compressed and has a name that contains the source location and the timestamp of when the administrative command started to run.
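    Because logs are silently not sent when any property or value is missing, it can help to check the fragment before you apply it. The following sketch validates the structure only, using the property names from the CR sample above (the values shown are the same fictitious examples):

    ```python
    REQUIRED_KEYS = {"accessKey", "bucketName", "endpointURL", "secretKey"}

    def validate_logging_s3_destination(spec: dict) -> list:
        """Return a list of structural problems in a loggingS3Destination
        fragment; an empty list means all required properties are present."""
        dest = spec.get("deployment", {}).get("loggingS3Destination", {})
        problems = [f"missing property: {k}"
                    for k in sorted(REQUIRED_KEYS - dest.keys())]
        if "secretKey" in dest and "secretName" not in dest["secretKey"]:
            problems.append("secretKey must contain a secretName child property")
        return problems

    spec = {
        "deployment": {
            "loggingS3Destination": {
                "accessKey": "6xg5v1EacUtyMLXgzXbg",
                "bucketName": "xinyutest",
                "endpointURL": "https://s3.us.cloud-object-storage.appdomain.cloud",
                "secretKey": {"secretName": "s3secretkey"},
            }
        }
    }
    ```

    This checks only that the properties exist; it does not verify that the bucket, endpoint, or credentials are actually valid.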
  3. To upload server bundle logs from your log location on WebSphere Application Server Liberty, make the following API request: POST https://host:port/maximo/api/service/logging?action=wsmethod:submitUploadLogRequest. Provide the API key in the header of the request. The body is empty.
    The API request creates an entry in the LOGREQUEST table of Maximo Manage for each server bundle. A continuously running cron task uploads the compressed log files to your S3 storage location when the table is updated. The name of each file contains the source location and the timestamp of when the command started to run.
    You can query the LOGREQUEST table to troubleshoot any issues. Clean up old table entries by configuring an instance of the LOGREQUESTCLEANUP cron task to run each day.
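    The upload request in step 3 can be issued from any HTTP client. The following sketch uses the Python standard library; the host, port, and API key are placeholders for your own values, and the apikey header name is an assumption based on common Maximo REST API usage:

    ```python
    import urllib.request

    # Placeholders: replace with your Manage route host/port and the API key
    # that you obtained in the Administrative Work Center.
    HOST = "host"
    PORT = 443
    API_KEY = "your-api-key"

    url = (f"https://{HOST}:{PORT}/maximo/api/service/logging"
           "?action=wsmethod:submitUploadLogRequest")

    # POST with an empty body; the API key goes in the request header.
    request = urllib.request.Request(url, data=b"", method="POST",
                                     headers={"apikey": API_KEY})

    # To send it (requires network access to your deployment):
    # with urllib.request.urlopen(request) as response:
    #     print(response.status)
    ```

    A successful request creates the LOGREQUEST table entries described above, which the cron task then processes.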