You can copy the persistent volume claim data from Cloud Object Storage directly to the
pod. Create a network policy to allow access and then create a job to provide the Cloud Object
Storage credentials and restore the persistent volume claim data.
Before you begin
Get the VI user and group ID.
oc get deployment ${MAS_INSTANCE_ID}-service -n mas-${MAS_INSTANCE_ID}-visualinspection -o yaml | yq .spec.template.spec.securityContext.runAsUser
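The command returns the user ID. If the deployment's security context also sets runAsGroup explicitly, a similar query should return the group ID; otherwise, the group ID typically matches the user ID, as it does in the restore job later in this procedure:
oc get deployment ${MAS_INSTANCE_ID}-service -n mas-${MAS_INSTANCE_ID}-visualinspection -o yaml | yq .spec.template.spec.securityContext.runAsGroup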
About this task
You need to create a Red Hat® OpenShift® job to copy the persistent volume claim data to the pod. Because the Maximo® Application Suite namespace blocks egress network traffic by default, you also need to create a network policy that allows the restore job to access Cloud Object Storage.
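To confirm which policies are already in place before you add a new one, you can list the network policies in the namespace; the policy names vary by installation:
oc get networkpolicy -n mas-${MAS_INSTANCE_ID}-visualinspection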
Procedure
- Create a network policy.
In the Red Hat OpenShift web console, open the Import YAML page. Copy and paste the following NetworkPolicy content, replace the variables, and click Create.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-vi-restore
  namespace: mas-{MAS_INSTANCE_ID}-visualinspection
spec:
  podSelector:
    matchLabels:
      job-name: vi-restore
  egress:
    - {}
  policyTypes:
    - Egress
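If you prefer the CLI to the web console, you can save the manifest to a file and apply it with oc; the file name here is illustrative:
oc apply -f allow-vi-restore.yaml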
- Create a Red Hat OpenShift job to restore the persistent volume claim data from Cloud Object Storage to a pod.
The job first clears the existing contents of the data directory, then configures an rclone remote and copies the backup from the bucket to the persistent volume. In the Red Hat OpenShift web console, open the Import YAML page. Copy and paste the following Job content, replace the variables, and click Create.
apiVersion: batch/v1
kind: Job
metadata:
  name: vi-restore
  namespace: mas-{MAS_INSTANCE_ID}-visualinspection
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 6
  template:
    metadata:
      name: vi-restore
      labels:
        app: vi-restore
    spec:
      serviceAccountName: ibm-mas-visualinspection-operator
      serviceAccount: ibm-mas-visualinspection-operator
      securityContext:
        runAsUser: {VI_USER}
        runAsGroup: {VI_USER}
        runAsNonRoot: true
      containers:
        - name: vi-restore
          image: rclone/rclone:1.62.2
          command:
            - sh
            - '-c'
            - >-
              rm -rf /opt/powerai-vision/data/*;
              export RCLONE_CONFIG=/tmp/rclone.conf;
              rclone config create brcos s3 provider={S3_PROVIDER} endpoint={S3_URL} access_key_id={S3_ACCESS_KEY} secret_access_key={S3_SECRET_KEY} region={S3_REGION};
              rclone --links --progress --no-check-certificate --config /tmp/rclone.conf copy brcos:{S3_BUCKET}/data /opt/powerai-vision/data;
          volumeMounts:
            - name: data-mount
              mountPath: /opt/powerai-vision/data
              subPath: data
          securityContext:
            privileged: false
            readOnlyRootFilesystem: false
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      volumes:
        - name: data-mount
          persistentVolumeClaim:
            claimName: {MAS_INSTANCE_ID}-data-pvc
The following values are used in the Job content:
- {MAS_INSTANCE_ID}: The Maximo Application Suite instance ID.
- {S3_PROVIDER}: The Amazon S3 storage provider that you are using. For more information, see Amazon S3 Storage Providers.
- {S3_URL}: The endpoint for the S3 API.
- {S3_ACCESS_KEY}: Your Amazon Web Services access key ID.
- {S3_SECRET_KEY}: Your Amazon Web Services secret access key or password.
- {S3_REGION}: The region to connect to.
- {S3_BUCKET}: The bucket that contains the backup data, which is stored in {S3_BUCKET}/data.
- {VI_USER}: The owner of the data folders.
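Before you run the job, you can optionally verify the credentials and the bucket path by listing the backup directories with rclone. This is a quick check that assumes you can run rclone locally or in a temporary pod; the remote name brcos matches the one that the job creates:
export RCLONE_CONFIG=/tmp/rclone.conf
rclone config create brcos s3 provider={S3_PROVIDER} endpoint={S3_URL} access_key_id={S3_ACCESS_KEY} secret_access_key={S3_SECRET_KEY} region={S3_REGION}
rclone --no-check-certificate --config /tmp/rclone.conf lsd brcos:{S3_BUCKET}/data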
- In the Red Hat OpenShift web console, wait for the vi-restore job to be completed.
You can check the log of this job for details of the progress.
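Alternatively, you can monitor the job from the CLI with standard oc commands; the timeout value here is an example:
oc wait --for=condition=complete job/vi-restore -n mas-${MAS_INSTANCE_ID}-visualinspection --timeout=30m
oc logs -f job/vi-restore -n mas-${MAS_INSTANCE_ID}-visualinspection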
What to do next
After the job is completed, delete the network policy and job.
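For example, from the CLI:
oc delete job vi-restore -n mas-${MAS_INSTANCE_ID}-visualinspection
oc delete networkpolicy allow-vi-restore -n mas-${MAS_INSTANCE_ID}-visualinspection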