If you have a two data center warm-standby deployment, you can configure the analytics
subsystem in your active data center to offload incoming API event records to the analytics
subsystem in your warm-standby data center.
About this task
If you have a two data center disaster recovery deployment, with an analytics subsystem
deployed in each data center, then you can replicate the API event data between the data
centers. The offload feature is used for this replication. In 2DCDR deployments, both data
centers have the same contents in their management subsystem databases, so the analytics data
from both data centers can be seen from the management UIs in either data center.
Analytics two data center replication key points:
- Only API event data that is generated by API Gateways can be replicated. Data from V5 compatible
gateways cannot be replicated.
- The feature requires additional CPU, RAM, and storage space. Use dedicated storage and horizontal scaling to meet
the resource requirements.
- The existing limitation that no analytics data is available in portal sites also applies to
this configuration.
- The feature is not recommended on VMware deployments because it is not possible to scale
horizontally beyond three replicas.
The steps in this task refer to your two data centers as DC1 (the active data center) and DC2
(the standby data center).
Note: For OpenShift® users: The example steps in this topic use the Kubernetes kubectl
command. On OpenShift, use the equivalent oc command in its place. If you are using a
top-level CR, you must edit the APIConnectCluster CR (the top-level CR) instead of directly
editing the subsystem CRs. If the subsystem section is not included in the top-level CR, copy
and paste the section from the subsystem CR to the APIConnectCluster CR.
Procedure
- In DC1, identify the analytics ingestion client secret by running this command in the
management subsystem namespace:
kubectl -n <management namespace> get secrets | grep "\-client"
The secret name is either analytics-ingestion-client or a7s-ing-client, possibly prefixed
with other text.
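The lookup above can also be scripted. The helper below is a sketch, not a product command: it filters the `kubectl get secrets` listing for either of the two possible secret names, allowing for a prefix. The sample listing line in the demo is hypothetical.

```shell
# Sketch of a helper (not a product command) that extracts the ingestion
# client secret name from `kubectl get secrets` output. It matches either
# naming pattern, with or without a prefix.
find_ingestion_secret() {
  grep -Eo '[a-z0-9.-]*(analytics-ingestion-client|a7s-ing-client)' | head -n 1
}

# Live usage:
#   kubectl -n <management namespace> get secrets | find_ingestion_secret

# Demo against a hypothetical listing line:
printf '%s\n' 'mgmt-a7s-ing-client   kubernetes.io/tls   3   5d' | find_ingestion_secret
```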
- In DC1, view the contents of the analytics ingestion client
secret:
kubectl -n <management namespace> get secret -o yaml <analytics ingestion secret>
The output should look like this:
apiVersion: v1
data:
ca.crt: <ca.crt encoded certificate string>
tls.crt: <tls.crt encoded certificate string>
tls.key: <tls.key encoded certificate string>
...
- In DC1, convert the tls.key value to PKCS #8 format.
- Base64 decode tls.key, and copy it to a file that is called tls.key:
echo <tls.key encoded certificate string> | base64 --decode > tls.key
- Convert the tls.key file contents to PKCS #8, and Base64 encode
it to a file called converted_tls.key:
openssl pkcs8 -topk8 -in tls.key -nocrypt | base64 -w 0 > converted_tls.key
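The two conversion commands can be checked end to end without touching the cluster. The sketch below generates a throwaway RSA key to stand in for the real secret value (an assumption for demonstration only), then runs the same decode, convert, and re-encode pipeline and verifies the result parses as a private key:

```shell
# Demo of the conversion pipeline using a throwaway key instead of the real
# secret value. Requires openssl and GNU base64.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out demo.key 2>/dev/null

# Simulate the base64-encoded tls.key value taken from the secret:
encoded=$(base64 -w 0 < demo.key)

# The documented steps: decode, convert to PKCS #8, re-encode.
echo "$encoded" | base64 --decode > tls.key
openssl pkcs8 -topk8 -in tls.key -nocrypt | base64 -w 0 > converted_tls.key

# Sanity check: the converted value must decode to a parseable private key.
base64 --decode < converted_tls.key | openssl pkey -noout && echo "PKCS #8 key OK"
```

Note that `base64 -w 0` is GNU coreutils syntax; on other platforms the flag for disabling line wrapping may differ.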
- In DC2, create or update your analytics_offload_certificates.yaml file, using the DC1
certificate strings and TLS key that you obtained in steps 2 and 3.
If you did not previously configure analytics offload targets, then create a new file that is
called analytics_offload_certificates.yaml and paste in the following contents:
apiVersion: v1
kind: Secret
metadata:
name: offload-certificates
data:
replication_ca.crt: <ca.crt string from ingestion secret>
replication_tls.crt: <tls.crt string from ingestion secret>
replication_tls.key: <converted_tls.key file contents>
If you already have other offload targets configured, then edit your existing
analytics_offload_certificates.yaml file and add the certificate strings from
the previous step under the data section:
apiVersion: v1
kind: Secret
metadata:
name: offload-certificates
data:
replication_ca.crt: <ca.crt string from ingestion secret>
replication_tls.crt: <tls.crt string from ingestion secret>
replication_tls.key: <converted_tls.key file contents>
existing_offload_target1_ca.crt: hGywkdWE...
existing_offload_target1_tls.crt: kY7fW..
...
...
- In DC2, apply the analytics_offload_certificates.yaml file to create the
offload-certificates secret:
kubectl -n <analytics namespace> apply -f analytics_offload_certificates.yaml
A new secret that is called offload-certificates is created. Verify that this secret exists
by running:
kubectl -n <analytics namespace> get secrets
If you previously created offload targets, then this secret is updated.
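Before applying the file, you can sanity-check that it carries all three replication keys. The check below is a sketch; it assumes the two-space indentation shown in the example, and the sample file it writes uses placeholder values rather than real certificate strings:

```shell
# Write a sample analytics_offload_certificates.yaml with placeholder values
# (stand-ins for the real certificate strings) so the check can run locally.
cat > analytics_offload_certificates.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: offload-certificates
data:
  replication_ca.crt: cGxhY2Vob2xkZXI=
  replication_tls.crt: cGxhY2Vob2xkZXI=
  replication_tls.key: cGxhY2Vob2xkZXI=
EOF

# Confirm every required replication_* key is present before `kubectl apply`.
for key in replication_ca.crt replication_tls.crt replication_tls.key; do
  grep -q "^  $key:" analytics_offload_certificates.yaml || echo "missing: $key"
done
echo "check complete"
```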
- In DC2, identify the analytics ingestion client secret by running this command in the
management subsystem namespace:
kubectl -n <management namespace> get secrets | grep "\-client"
The secret name is either analytics-ingestion-client or a7s-ing-client, possibly prefixed
with other text.
- In DC2, view the contents of the analytics ingestion client
secret:
kubectl -n <management namespace> get secret -o yaml <analytics ingestion secret>
The output should look like this:
apiVersion: v1
data:
ca.crt: <ca.crt encoded certificate string>
tls.crt: <tls.crt encoded certificate string>
tls.key: <tls.key encoded certificate string>
...
- In DC2, convert the tls.key value to PKCS #8 format.
- Base64 decode tls.key, and copy it to a file that is called tls.key:
echo <tls.key encoded certificate string> | base64 --decode > tls.key
- Convert the tls.key file contents to PKCS #8, and Base64 encode
it to a file called converted_tls.key:
openssl pkcs8 -topk8 -in tls.key -nocrypt | base64 -w 0 > converted_tls.key
- In DC1, create or update your analytics_offload_certificates.yaml file, using the DC2
certificate strings and TLS key that you obtained in steps 7 and 8.
If you did not previously configure analytics offload targets, then create a new file that is
called analytics_offload_certificates.yaml and paste in the following contents:
apiVersion: v1
kind: Secret
metadata:
name: offload-certificates
data:
replication_ca.crt: <ca.crt string from ingestion secret>
replication_tls.crt: <tls.crt string from ingestion secret>
replication_tls.key: <converted_tls.key file contents>
If you already have other offload targets configured, then edit your existing
analytics_offload_certificates.yaml file and add the certificate strings from
the previous step under the data section:
apiVersion: v1
kind: Secret
metadata:
name: offload-certificates
data:
replication_ca.crt: <ca.crt string from ingestion secret>
replication_tls.crt: <tls.crt string from ingestion secret>
replication_tls.key: <converted_tls.key file contents>
existing_offload_target1_ca.crt: hGywkdWE...
existing_offload_target1_tls.crt: kY7fW..
...
...
- In DC1, apply the analytics_offload_certificates.yaml file to create the
offload-certificates secret:
kubectl -n <analytics namespace> apply -f analytics_offload_certificates.yaml
A new secret that is called offload-certificates is created. Verify that this secret exists
by running:
kubectl -n <analytics namespace> get secrets
If you previously created offload targets, then this secret is updated.
- In DC1, edit the analytics CR, and set the spec.external.offload section as shown:
kubectl edit a7s
external:
offload:
enabled: true
output: |
if [gateway_service_name] == "<DC1 gateway service name>" {
http {
url => "https://<DC2 analytics ingestion endpoint>/ingestion"
http_method => "post"
codec => "json"
content_type => "application/json"
id => "offload1_http"
ssl_certificate_authorities => "/etc/velox/external_certs/offload/replication_ca.crt"
ssl_certificate => "/etc/velox/external_certs/offload/replication_tls.crt"
ssl_key => "/etc/velox/external_certs/offload/replication_tls.key"
}
}
secretName: offload-certificates
where:
- <DC1 gateway service name> is the name of the gateway service in DC1.
- <DC2 analytics ingestion endpoint> is the analytics ingestion endpoint of
the analytics service in DC2.
If you have multiple offload targets, then see Multiple offload targets.
After you save the updated CR, the analytics ingestion pod restarts, and new API events are
sent to DC2.
Note: If you later return to edit properties in the external.offload.output section, you
might need to restart the ingestion pod manually:
kubectl delete pod <analytics ingestion pod>
- In DC2, edit the analytics CR, and set the spec.external.offload section as shown:
kubectl edit a7s
external:
offload:
enabled: true
output: |
if [gateway_service_name] == "<DC2 gateway service name>" {
http {
url => "https://<DC1 analytics ingestion endpoint>/ingestion"
http_method => "post"
codec => "json"
content_type => "application/json"
id => "offload1_http"
ssl_certificate_authorities => "/etc/velox/external_certs/offload/replication_ca.crt"
ssl_certificate => "/etc/velox/external_certs/offload/replication_tls.crt"
ssl_key => "/etc/velox/external_certs/offload/replication_tls.key"
}
}
secretName: offload-certificates
where:
- <DC2 gateway service name> is the name of the gateway service in DC2.
- <DC1 analytics ingestion endpoint> is the analytics ingestion endpoint of
the analytics service in DC1.
After you save the updated CR, the analytics ingestion pod restarts, and new API events are
sent to DC1.
- Confirm that replication is working by making a test API call to your DC2 gateway service
and confirming that you see the event in the Discover view of the API Manager UI in DC1.
To verify replication from DC1 to DC2, you must complete a failover of the management
subsystem so that you can access the API Manager UI in DC2 and see data from the analytics
subsystem in DC2.
If API events are not replicating between your data centers, then check the ingestion pod
logs for error messages when API calls are made:
kubectl logs <analytics ingestion pod>
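A small filter can narrow the log output to lines that are likely relevant to offload failures. This helper is an assumption, not a product command, and the log lines in the demo are hypothetical examples:

```shell
# Sketch of a filter for ingestion pod logs; pipe `kubectl logs` output
# into it to surface lines likely related to offload failures.
filter_offload_errors() {
  grep -Ei 'offload|ssl|certificate|error'
}

# Live usage:
#   kubectl logs <analytics ingestion pod> | filter_offload_errors

# Demo on hypothetical log lines:
printf '%s\n' \
  'INFO  pipeline running' \
  'ERROR offload1_http: certificate verify failed' \
  | filter_offload_errors
```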