Update the secret that secures backed-up data for the Analytics subsystem in an API Connect deployment.
About this task
When you configure S3 storage for backing up Analytics data, you encrypt the S3 provider's access
key and save it as a Kubernetes secret for your Analytics deployment. If the S3 access key or your
Kubernetes secret is compromised, obtain a new key from the S3 provider and use it to generate a new
Kubernetes secret.
Important: When you invalidate a key, be sure to save a copy of it with a notation
listing the dates when it was used. If you need to restore a backup that was created with the
invalidated key, you will need to use that key to access the data. One method of saving the key is
to create a new version of the CR every time you change the key, so that you can easily refer to the
settings and key that were used for each backup.
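One way to keep the per-backup record that this note recommends is to archive a dated copy of the CR file every time you rotate the key. A minimal sketch; the file name pattern is illustrative, not part of the product:

```shell
# dated_copy_name: build a dated file name for an archived copy of the CR,
# e.g. analytics-cr.yaml + 2024-05-01 -> analytics-cr-2024-05-01.yaml
dated_copy_name() {
  printf '%s-%s.yaml' "${1%.yaml}" "$2"
}

dated_copy_name analytics-cr.yaml 2024-05-01
# → analytics-cr-2024-05-01.yaml
# In practice: cp analytics-cr.yaml "$(dated_copy_name analytics-cr.yaml "$(date +%F)")"
```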
This task assumes that you have already configured the backup settings for your Analytics subsystem and now want to replace the Kubernetes secret that is specified as the analytics-backup-secret in the Analytics CR.
Procedure
- Update the secret for your S3 storage:
- Obtain a new access key and corresponding access key secret from your S3 provider.
- Invalidate the previous credentials that were specified in the Analytics CR.
- Verify that the secret is now invalid by running the following command:
kubectl -n namespace exec -it storage_master/shared_pod -- curl_es _snapshot/apic-analytics/_verify -XPOST
where:
namespace
is the namespace where Analytics is deployed.
storage_master/shared_pod
is the name of a storage pod in your Analytics deployment.
A successful invalidation operation returns a repository_verification_exception
message as in the following example:
{"error":{"root_cause":[{"type":"repository_verification_exception","reason":"[apic-analytics] path is not accessible on master node"}],"type":"repository_verification_exception","reason":"[apic-analytics] path is not accessible on master node","caused_by":{"type":"i_o_exception","reason":"Unable to upload object [tests-5PQ7Hm0zRs6ryr6wm_ZA1A/master.dat-temp] using a single upload","caused_by":{"type":"amazon_s3_exception","reason":"The AWS Access Key ID you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 8ea5a567-e1dd-4959-bc1a-5933a8a559c6; S3 Extended Request ID: null)"}}},"status":500}
If the invalidation was not successful, then the secret is still valid and the response includes
the names of the analytics-storage pods, as in the following example:
{"nodes":{"NsrFYadiQhumJBb-6XuQUw":{"name":"analytics-storage-master-0"},"uoXoBx4KQHO0bhi-yqWekQ":{"name":"analytics-storage-data-0"}}}
Attention: Make sure that you successfully invalidate the current secret to prevent
unauthorized access to your data.
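Because the two possible responses are long JSON documents, it can help to classify them with a small helper when you script this check. A sketch only; it keys off the two response shapes shown above, and how you capture the command's output is up to you:

```shell
# check_secret_status: classify the JSON returned by the _verify call.
# A repository_verification_exception means the old key is invalidated;
# a "nodes" listing means the key is still (or again) valid.
check_secret_status() {
  if printf '%s' "$1" | grep -q 'repository_verification_exception'; then
    echo invalidated
  elif printf '%s' "$1" | grep -q '"nodes"'; then
    echo valid
  else
    echo unknown
  fi
}

check_secret_status '{"nodes":{"NsrFYadiQhumJBb-6XuQUw":{"name":"analytics-storage-master-0"}}}'
# → valid
```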
- Delete the invalidated Kubernetes secret by running the following command:
kubectl -n namespace delete secret name
where:
namespace
is the namespace where Analytics is deployed.
name
is the name of the current (invalidated) Kubernetes secret, which is specified in the credentials setting in the databaseBackup section of the Analytics CR. Every time you update the secret, you must assign it a new name.
- Create a new Kubernetes secret by running the following command and filling in your access key, access key secret, and namespace:
kubectl -n namespace create secret generic new_secret_name --from-literal=username='S3_access_key' --from-literal=password='S3_access_key_secret'
where:
namespace
is the namespace where Analytics is deployed.
new_secret_name
is a new name for the new Kubernetes secret. Every time you update the secret, you must assign it a new name.
S3_access_key and S3_access_key_secret
are the values that you received from the S3 provider.
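The delete-and-create sequence above can be wrapped in a small function so that the namespace, secret names, and credentials are supplied in one place. A sketch under assumed placeholder values; the commands are the same ones shown in the steps:

```shell
# rotate_backup_secret: delete the invalidated secret and create its
# replacement under a new name. All argument values are placeholders
# that you substitute with your own namespace, names, and credentials.
rotate_backup_secret() {
  local ns="$1" old="$2" new="$3" key="$4" keysecret="$5"
  kubectl -n "$ns" delete secret "$old"
  kubectl -n "$ns" create secret generic "$new" \
    --from-literal=username="$key" --from-literal=password="$keysecret"
}

# Usage (placeholder values):
# rotate_backup_secret my-namespace analytics-backup-secret \
#   analytics-backup-secret-2 NEW_ACCESS_KEY NEW_ACCESS_KEY_SECRET
```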
- Edit the Analytics CR file and replace the invalidated secret with the new secret.
- In the CR, locate the credentials setting in the databaseBackup section.
- Replace the invalidated secret name with the name of the new secret that you just generated.
- Save and close the CR file.
- Run the following command to apply the updated CR so that the new secret takes effect:
kubectl -n namespace apply -f path/to/analytics-cr
where:
namespace
is the namespace where Analytics is deployed.
path/to/analytics-cr
is the path and file name of the Analytics CR.
- After the pods restart, run the following command to verify that the new secret is valid and in use. This is the same command that you used earlier to confirm that the old secret was invalidated, but this time you want to confirm that the new key is valid:
kubectl -n namespace exec -it storage_master/shared_pod -- curl_es _snapshot/apic-analytics/_verify -XPOST
If the new secret is valid and in use, the response includes the names of the analytics-storage
pods, as in the following
example:
{"nodes":{"NsrFYadiQhumJBb-6XuQUw":{"name":"analytics-storage-master-0"},"uoXoBx4KQHO0bhi-yqWekQ":{"name":"analytics-storage-data-0"}}}
If the new secret is not valid, the response indicates a
repository_verification_exception
as in the following example:
{"error":{"root_cause":[{"type":"repository_verification_exception","reason":"[apic-analytics] path is not accessible on master node"}],"type":"repository_verification_exception","reason":"[apic-analytics] path is not accessible on master node","caused_by":{"type":"i_o_exception","reason":"Unable to upload object [tests-5PQ7Hm0zRs6ryr6wm_ZA1A/master.dat-temp] using a single upload","caused_by":{"type":"amazon_s3_exception","reason":"The AWS Access Key ID you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 8ea5a567-e1dd-4959-bc1a-5933a8a559c6; S3 Extended Request ID: null)"}}},"status":500}
In this case, wait 1 minute and then run the command again. If the response still indicates an invalid secret, confirm that the new S3 access key and access key secret are valid before repeating the procedure.
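The wait-and-retry check in this final step can be scripted as a small polling helper. A sketch only; the attempt count, the 60-second interval, and the commented kubectl invocation (namespace and pod names) are assumptions you would adapt:

```shell
# retry_verify: run a verification command up to N times, waiting
# between attempts, until its output reports the storage nodes
# (i.e. the new secret is valid and in use).
retry_verify() {
  local attempts="$1"; shift
  local i out
  for i in $(seq "$attempts"); do
    out="$("$@")"
    if printf '%s' "$out" | grep -q '"nodes"'; then
      printf '%s\n' "$out"
      return 0
    fi
    [ "$i" -lt "$attempts" ] && sleep 60
  done
  echo "secret still not valid after $attempts attempts" >&2
  return 1
}

# Usage (placeholders for namespace and pod):
# retry_verify 5 kubectl -n my-namespace exec -it storage_master/shared_pod -- \
#   curl_es _snapshot/apic-analytics/_verify -XPOST
```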