After updating the service image to 10.2503.2, complete the post-upgrade tasks so that the new capabilities function correctly and existing indexes operate efficiently.
To support changes in index mappings and enable the new auditing capabilities, you must
reindex the backing indices of the audit data stream. This post-upgrade task helps ensure that the updated
schema is applied to all historical audit data.
About this task
You can run the Elasticsearch API requests by using any of the following tools:
- Kibana Dev Tools: A built-in Kibana tool that runs API requests directly against your
Elasticsearch cluster.
- HTTP clients: Tools such as Postman or cURL that send HTTP requests to the Elasticsearch
APIs.
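For example, with cURL you can send any request in this procedure as a plain HTTP call. The host, port, and credentials in the following sketch are placeholders for your environment:
# Resolve the proposed data stream name against the cluster (see step 2 of the procedure).
curl -u <user>:<password> -X GET "https://<es-host>:<es-port>/_resolve/index/audits-ds-00001"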
To apply updated mappings or settings to the audit data stream without disrupting ongoing
operations, this release uses a two-phase reindexing strategy. This approach helps ensure that the
audit data is preserved and reindexed under the new schema, maintaining data integrity and minimizing
downtime.
The following procedure lists the steps to reindex the backing indices of the Elasticsearch audit data stream.
Procedure
- Stop loading data into the Elasticsearch audit index by updating the replica count to 0
for the audit-logstash server. Wait until the replicas are scaled down to 0 for the audit-logstash
StatefulSet. The following example is for production mode.
logstashServers:
  - active: true
    names:
      - logstash
    replicaCount: 0
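You can watch the scale-down with kubectl; the namespace in the following sketch is a placeholder for your deployment:
# Wait until the READY column shows 0/0 for the audit-logstash StatefulSet.
kubectl get statefulset audit-logstash -n <namespace> -w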
- Choose a name for the new data stream. In the following example,
audits-ds-00001 is used as the name of the new data stream into which the data is
reindexed. Run the following query to confirm that the name is available. The query must not return any
existing indices, aliases, or data streams.
GET /_resolve/index/audits-ds-00001
Output:
{
  "indices" : [ ],
  "aliases" : [ ],
  "data_streams" : [ ]
}
- Use the data stream name that you chose in step 2 to create the new data stream.
PUT /_data_stream/audits-ds-00001
- Fetch the document count for the existing data stream. The backing index that Sterling Intelligent
Promising creates for this data stream is named in the
.ds-audits-<date>-000001 format. Save this count for later
reference.
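For example, assuming the existing data stream is named audits, as in the reindex requests that follow, you can fetch the count with:
# Returns the total number of documents in the existing audit data stream.
GET audits/_count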
- Reindex data to the new data stream that you created in step 3. The script assigns
default values to historical documents that do not have the triggeredBy field set.
POST /_reindex?wait_for_completion=false
{
  "source": {
    "index": "audits"
  },
  "dest": {
    "index": "audits-ds-00001",
    "op_type": "create"
  },
  "script": {
    "source": "if (ctx._source.triggeredBy == null) {ctx._source.triggeredBy = 'USER'; ctx._source.isRootAction = true}",
    "lang": "painless"
  }
}
Expected response:
{
  "task" : "<task_id>"
}
- Verify that the reindexing is complete. Use the task value from the response in step 5
to track the progress of the reindex operation. Verify that the response contains
"completed" : true.
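For example, you can query the Elasticsearch tasks API, replacing <task_id> with the value from the reindex response:
# Shows the status of the asynchronous reindex task.
GET _tasks/<task_id>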
- Fetch the document count for the new data stream. Verify that it matches the count
that you saved in step 4.
GET audits-ds-00001/_count
- Delete the old data stream.
DELETE _data_stream/audits
- Create a new data stream with the name of the old data stream that you deleted in step 8.
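Assuming the old data stream name is audits, the request takes the same form as in step 3:
# Re-create the audit data stream under its original name.
PUT /_data_stream/audits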
- Reindex data to the new data stream that you created in step 9.
POST /_reindex?wait_for_completion=false
{
  "source": {
    "index": "audits-ds-00001"
  },
  "dest": {
    "index": "audits",
    "op_type": "create"
  }
}
Expected response:
{
  "task" : "<task_id>"
}
- Verify that the reindexing is complete. Use the task value from the response in step 10
to track the progress of the reindex operation, as in step 6. Verify that the response contains
"completed" : true.
- Fetch the document count for the audits data stream and verify that it
matches the value that you saved in step 4.
- To start loading data into the Elasticsearch audits data stream, update the replica count
for the audit-logstash server to the required number and wait until the replicas are fully scaled up
for the audit-logstash StatefulSet. The following example is for production mode.
logstashServers:
  - active: true
    names:
      - logstash
    replicaCount: 2
- Delete the unused data stream.
DELETE _data_stream/audits-ds-00001