Post-upgrade to 10.2409.1

Attention: If you are upgrading across one or more intermediate versions, ensure that you complete the post-upgrade steps for each intermediate release that is listed on the post-upgrade page. If a version is not listed on the page, no post-upgrade steps are required for that release.

To enable querying for empty values in both the Search and Aggregation APIs of Sterling Intelligent Promising, you must reindex the supply and demand Elasticsearch indexes.

About this task

You can run the Elasticsearch API requests by using any of the following tools:
  • Kibana Dev tools: A built-in tool in Kibana that allows you to run API requests directly against your Elasticsearch cluster.
  • HTTP clients: Tools such as Postman or cURL enable you to send HTTP requests to the Elasticsearch APIs.
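For example, a Kibana Dev Tools request such as `GET _cat/indices/supplies-<tenant_id>-*?v` maps to a curl command of the following form. The endpoint URL, user, and password below are placeholders, not values from your environment; substitute the details of your cluster.

```shell
# Build the curl form of the Dev Tools request
#   GET _cat/indices/supplies-<tenant_id>-*?v
# ES_URL, <user>, and <password> are placeholders for your cluster.
ES_URL="https://elasticsearch.example.com:9200"
TENANT_ID="default"
REQUEST="${ES_URL}/_cat/indices/supplies-${TENANT_ID}-*?v"
echo "curl -k -u <user>:<password> \"${REQUEST}\""
```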

Procedure

  • Steps to reindex Elasticsearch supplies index
    1. Stop loading data into the Elasticsearch supplies index by updating the replica count to 0 for the logstash-supply server, and wait until the replicas of the logstash-supply StatefulSet are fully scaled down to 0. The following example is for production mode.
      logstashServers:
        - active: true
          names:
            - logstash-supply
          replicaCount: 0
    2. Retrieve the existing index to identify the source index by using a pattern that aligns with the Sterling Intelligent Promising naming convention. The following command retrieves the index that matches the supplies-<tenant_id>-* pattern, allowing you to identify the correct <previous_date_id> for the source index.
      GET _cat/indices/supplies-<tenant_id>-*?v
      In the following sample output, the supplies-<tenant_id>-<previous_date_id> is the existing source index.
      health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      yellow open   supplies-<tenant_id>-<previous_date_id> abcdefghijklmnop 2   1          100            0      5.2mb          5.2mb
         
    3. Create the destination Elasticsearch supplies index for the corresponding tenant with the appropriate settings, and suffix the index name with the current date in the yyyy-MM-dd format.
      PUT supplies-<tenant_id>-<current_date_id>
      {
        "settings": {
          "index": {
            "number_of_shards": 2,  
            "number_of_replicas": 0,
            "refresh_interval" : "-1"
          }
        }
      }
      • The <tenant_id> in this example is default, as configured in the SIPEnvironment custom resource.
      • An example of <current_date_id> is 2024-10-24.
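As a sketch, the destination index name can be derived from the tenant ID and the current date. The tenant ID `default` below matches the value noted above; the `date` invocation assumes a POSIX shell.

```shell
# Derive the destination index name supplies-<tenant_id>-<current_date_id>,
# where the suffix is the current date in yyyy-MM-dd format.
TENANT_ID="default"
CURRENT_DATE_ID=$(date +%Y-%m-%d)          # for example, 2024-10-24
DEST_INDEX="supplies-${TENANT_ID}-${CURRENT_DATE_ID}"
echo "PUT ${DEST_INDEX}"
```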
    4. Use the Elasticsearch asynchronous reindex API with automatic slicing, based on the number of shards, to copy data from the source index to the newly created destination index.
      • The <source_index> in the following example is the index that you retrieved in step 2, that is, supplies-<tenant_id>-<previous_date_id>.
      • The <destination_index> in the following example is the index that you created in step 3, that is, supplies-<tenant_id>-<current_date_id>.
      POST _reindex?slices=auto&wait_for_completion=false
      {
        "source": {
          "index": "<source_index>"
        },
        "dest": {
          "index": "<destination_index>"
        }
      }
      Expected response:
      {
        "task" : "<task_id>"
      }
    5. Monitor the status of the reindexing task by using the task API.
      GET _tasks/<task_id>
      An example of a <task_id> is 5Eahw9KaTCStcX4GYoqNPA:1191199.
      After the reindexing task is complete, check the task statistics, including total, updated, created, deleted, batches, version_conflicts, noops, retries, and slices. Ensure that these statistics align with expectations, confirming the correct record count and the absence of conflicts.
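A minimal sketch of the completion check follows. The abridged response below is a sample, not output from your cluster; with a live cluster, the variable would be populated from the `GET _tasks/<task_id>` request instead of a literal.

```shell
# Sample (abridged, hypothetical) response from GET _tasks/<task_id>.
TASK_RESPONSE='{
  "completed" : true,
  "task" : {
    "status" : {
      "total" : 100,
      "created" : 100,
      "version_conflicts" : 0
    }
  }
}'
# Poll until "completed" is true before checking the task statistics.
if echo "$TASK_RESPONSE" | grep -q '"completed" : true'; then
  echo "reindex complete"
fi
```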

    6. After the reindexing is complete, re-enable the refresh interval and set the required number of replicas.
      PUT <destination_index>/_settings
      {
        "index": {
          "number_of_replicas": 1,
          "refresh_interval" : "1s"
        }
      }
      1. Wait until the status of the index is green, as the data is copied to the replicas.
        GET <destination_index>/_stats
      2. Check the status for shards total, successful, and failed.

        Expected response:

          "_shards" : {
            "total" : 4,
            "successful" : 2,
            "failed" : 0
          }
          ....
      3. Get the expected count of records for the source Elasticsearch supplies index.
        GET <source_index>/_search?track_total_hits=true
      4. Fetch the value of <expected_count_of_records>.
          "hits" : {
            "total" : {
              "value" : <expected_count_of_records>,
              "relation" : "eq"
            }
          }
      5. Get the actual count of records for the destination Elasticsearch supplies index.
        GET <destination_index>/_search?track_total_hits=true
      6. Compare <expected_count_of_records> with <actual_count_of_records>.
          "hits" : {
            "total" : {
              "value" : <actual_count_of_records>,
              "relation" : "eq"
            }
          }
    7. Update the alias for the new destination index and remove the alias from the old source index. The supplies-<tenant_id> in the following example is the Elasticsearch supplies alias.
      POST _aliases
      {
        "actions": [
          {
            "add": {
              "index": "<destination_index>",
              "alias": "supplies-<tenant_id>",
              "is_write_index": true
            }
          },
          {
            "remove": {
              "index": "<source_index>",
              "alias": "supplies-<tenant_id>"
            }
          }
        ]
      }
      Verify that the Elasticsearch supplies alias points to the destination Elasticsearch supplies index.
      GET _alias/supplies-<tenant_id>
      Expected response:
      {
        "<destination_index>" : {
          "aliases" : {
            "supplies-<tenant_id>" : {
              "is_write_index" : true
            }
          }
        }
      }
    8. After the verification is complete, delete the old source index.
      DELETE <source_index>
    9. To start loading data into the Elasticsearch supplies index, update the replica count for the Logstash supply server to the required number and wait until the replicas are fully scaled up for the logstash-supply StatefulSet. The following example is of production mode.
      logstashServers:
        - active: true
          names:
            - logstash-supply
          replicaCount: 2
  • Steps to reindex Elasticsearch demands index
    1. To stop loading data into the Elasticsearch demands index, update the replica count to 0 for the logstash-demand server, and wait until the replicas of the logstash-demand StatefulSet are fully scaled down to 0. The following example is for production mode.
      logstashServers:
        - active: true
          names:
            - logstash-demand
          replicaCount: 0
    2. Retrieve the existing index to identify the source index by using a pattern that aligns with the Sterling Intelligent Promising naming convention. The following command retrieves the index that matches the demands-<tenant_id>-* pattern, allowing you to identify the correct <previous_date_id> for the source index.
      GET _cat/indices/demands-<tenant_id>-*?v
      In the following sample output, the demands-<tenant_id>-<previous_date_id> is the existing source index.
      health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      yellow open   demands-<tenant_id>-<previous_date_id> abcdefghijklmnop 2   1          100            0      5.2mb          5.2mb
      
    3. Create the destination Elasticsearch demands index for the corresponding tenant with the appropriate settings, and suffix the index name with the current date in the yyyy-MM-dd format.
      PUT demands-<tenant_id>-<current_date_id>
      {
        "settings": {
          "index": {
            "number_of_shards": 2,  
            "number_of_replicas": 0,
            "refresh_interval" : "-1"
          }
        }
      }
      • The <tenant_id> in this example is default, as configured in the SIPEnvironment custom resource.
      • An example of <current_date_id> is 2024-10-24.
    4. Use the Elasticsearch asynchronous reindex API with automatic slicing, based on the number of shards, to copy data from the source index to the newly created destination index.
      • The <source_index> in the following example is the index that you retrieved in step 2, that is, demands-<tenant_id>-<previous_date_id>.
      • The <destination_index> in the following example is the index that you created in step 3, that is, demands-<tenant_id>-<current_date_id>.
      POST _reindex?slices=auto&wait_for_completion=false
      {
        "source": {
          "index": "demands-<tenant_id>-<previous_date_id>"
        },
        "dest": {
          "index": "demands-<tenant_id>-<current_date_id>"
        }
      }
      Expected response:
      {
        "task" : "<task_id>"
      }
    5. Monitor the status of the reindexing task by using the task API.
      GET _tasks/<task_id>
      An example of a <task_id> is 5Eahw9KaTCStcX4GYoqNPA:1191199.
      After the reindexing task is complete, check the task statistics, including total, updated, created, deleted, batches, version_conflicts, noops, retries, and slices. Ensure that these statistics align with expectations, confirming the correct record count and the absence of conflicts.
    6. After the reindexing is complete, re-enable the refresh interval and set the desired number of replicas.
      PUT demands-<tenant_id>-<current_date_id>/_settings
      {
        "index": {
          "number_of_replicas": 1,
          "refresh_interval" : "1s"
        }
      }
      1. Wait until the status of the index is green, as the data is copied to the replicas.
        GET demands-<tenant_id>-<current_date_id>/_stats
      2. Check the status of shards total, successful, and failed.

        Expected response:

          "_shards" : {
            "total" : 4,
            "successful" : 2,
            "failed" : 0
          }
          ....
      3. Get the expected count of records for the source Elasticsearch demands index.
        GET <source_index>/_search?track_total_hits=true
      4. Fetch the value of <expected_count_of_records>.
          "hits" : {
            "total" : {
              "value" : <expected_count_of_records>,
              "relation" : "eq"
            }
          }
      5. Get the actual count of records for the destination Elasticsearch demands index:
        GET <destination_index>/_search?track_total_hits=true
      6. Compare <expected_count_of_records> with <actual_count_of_records>.
          "hits" : {
            "total" : {
              "value" : <actual_count_of_records>,
              "relation" : "eq"
            }
          }
    7. Update the alias for the new destination index and remove the alias from the old source index. The demands-<tenant_id> in the following example is the Elasticsearch demands alias.
      POST _aliases
      {
        "actions": [
          {
            "add": {
              "index": "<destination_index>",
              "alias": "demands-<tenant_id>",
              "is_write_index": true
            }
          },
          {
            "remove": {
              "index": "<source_index>",
              "alias": "demands-<tenant_id>"
            }
          }
        ]
      }
      Verify that the Elasticsearch demands alias points to the destination Elasticsearch demands index.
      GET _alias/demands-<tenant_id>
      Expected response:
      {
        "demands-<tenant_id>-<current_date_id>" : {
          "aliases" : {
            "demands-<tenant_id>" : {
              "is_write_index" : true
            }
          }
        }
      }
    8. After the verification is complete, delete the old source index.
      DELETE <source_index>
    9. To start loading data into the Elasticsearch demand index, update the replica count for the Logstash demand server to the desired number and wait until the replicas are fully scaled up for the logstash-demand StatefulSet. The following example is of production mode.
      logstashServers:
        - active: true
          names:
            - logstash-demand
          replicaCount: 2
  • Steps to help ensure effective synchronization of rules between Inventory Visibility and the Rules services. The following steps apply only when Sterling Intelligent Promising is deployed in production or flexible mode; no action is required for development mode.
    1. Add the RULES_TRIGGER_TOPIC environment variable in serverProperties in the SIPEnvironment custom resource. The value for this environment variable is read from the existing configMap, so reference the same configMap key as shown in the following example.
        serverProperties:
          envVars:
            - groupName: ruletriggertopic
              propertyRef:
                - name: RULES_TRIGGER_TOPIC
                  valueFrom:
                    configMapKeyRef:
                      key: rules-trigger-topic
                      name: iv-kafka-configs
    2. Save the changes that you configured in step 1.
    3. Set the ruletriggertopic groupName under property in IVServiceGroup and then save the custom resource. This step helps ensure that the RulesTriggerConsumer backend server uses the RULES_TRIGGER_TOPIC environment variable.
      - active: true
        names:
          - 'RulesTriggerConsumer:5'
        property:
          envVars: ruletriggertopic
      The RulesTriggerConsumer pod now restarts.
    4. Verify that no errors exist in the RulesTriggerConsumer pod logs before you proceed.
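A sketch of the log check follows. The pod name pattern and the sample log line are assumptions; with a live cluster, you would pipe the output of `kubectl logs <rulestriggerconsumer-pod>` into the grep instead of the sample text.

```shell
# Sample log output standing in for:
#   kubectl logs <rulestriggerconsumer-pod> | grep -i error
SAMPLE_LOG='RulesTriggerConsumer started; consuming from RULES_TRIGGER_TOPIC'
if echo "$SAMPLE_LOG" | grep -qi 'error'; then
  echo "errors found"
else
  echo "no errors found"
fi
```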