To install an MDM ongoing synchronization server instance in an internet-connected
Kubernetes cluster, run a file called
ongoing-playbook-installer-bin-<VERSION>.bin to start the
setup process.
Before you begin
Before you begin installing an MDM ongoing synchronization server in an online Kubernetes cluster:
- Ensure that the client computer from which you plan to run the Ansible® Playbook has access to the internet.
- Review the prerequisites and ensure that they are in place before continuing. This includes provisioning your Kubernetes cluster and setting up a storage provider (NFS is configured by default).
- Download the installation assets.
- Ensure that the following endpoints are installed and configured:
  - InfoSphere® MDM
  - IBM Match 360 with Watson™
If you plan to install more than one instance of the MDM ongoing synchronization server on a
single cluster, see Installing multiple instances of the ongoing synchronization server on a single Kubernetes cluster.
About this task
The MDM ongoing synchronization server is installed and deployed by using an Ansible Playbook. Ansible Playbooks provide a simple, repeatable, and reusable way to run automated configuration and deployment functions.
The ongoing synchronization server uses Apache Kafka and an Eclipse Jetty Server or an IBM WebSphere® Liberty Profile instance to manage the communications between the various ongoing synchronization endpoints.
The MDM Publisher distribution includes an installation file called ongoing-playbook-installer-bin-<VERSION>.bin. When you run the file, it creates a directory called mdm-ongoing-<VERSION>. This directory contains the artifacts required to set up an MDM ongoing synchronization server, including an Ansible Playbook, along with information about using the artifacts to set up and configure your MDM ongoing synchronization server.
Procedure
- Prepare the client computer from which you intend to run the playbook.
- Install the pip installation application:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo yum -y install python3
sudo alternatives --set python /usr/bin/python3
python get-pip.py --user
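To confirm that pip installed correctly, you can check the interpreter and pip versions (a quick sanity check; the exact output depends on your system):
python --version
python -m pip --version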
- Install Ansible:
export PATH=$PATH:/root/.local/bin/
pip install ansible
Note: If you are using CentOS, use the following commands instead:
export PATH=$PATH:/root/.local/bin/
pip3 install ansible
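You can verify that Ansible is available on the PATH before continuing:
ansible --version
To keep the PATH change across sessions, you can also append it to your shell profile, for example:
echo 'export PATH=$PATH:/root/.local/bin/' >> ~/.bashrc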
- Add your Kubernetes cluster to your known hosts. For example:
ssh-keyscan example.ibm.com >> ~/.ssh/known_hosts
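To confirm that the host key was recorded, you can search your known_hosts file for the cluster host name (example.ibm.com is a placeholder for your own master host):
ssh-keygen -F example.ibm.com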
- Install the sshpass tool:
wget https://archives.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum --enablerepo=epel -y install sshpass
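As a quick check that sshpass can reach the cluster, you can run a remote command non-interactively; the host name, user, and password shown here are placeholders:
sshpass -p '<PASSWORD>' ssh root@example.ibm.com hostname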
- Prepare the Kubernetes cluster for the MDM ongoing synchronization server
deployment.
- Install the pip3 installation application:
sudo yum install python3-pip -y
- Install the Python library for Red Hat® OpenShift®:
pip3 install openshift
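You can confirm that the library and its kubernetes dependency import cleanly (a minimal check, assuming the standard openshift Python client package):
python3 -c "import openshift, kubernetes; print('ok')"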
- Create the namespace for this
project:
kubectl create ns <PROJECT-NAME>
For example:
kubectl create ns mdm-ongoing
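To confirm that the namespace exists, and optionally make it the default namespace for subsequent kubectl commands in the current context:
kubectl get ns mdm-ongoing
kubectl config set-context --current --namespace=mdm-ongoing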
- Prepare the cluster's persistent volume.
By default, the MDM Publisher installer is set to use NFS storage for the persistent volume. Make sure that the NFS provisioner is installed, and create an NFS storage class named nfs-ongoing-sc to accommodate the MDM Publisher deployment.
For example, if the NFS provisioner chart is nfs-server-provisioner-1.3.1 and the provisioner release name is nfs-server-provisioner, use the following storage class YAML to create the nfs-ongoing-sc storage class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-ongoing-sc
  annotations:
    meta.helm.sh/release-name: publisher-nfs-server-provisioner
    meta.helm.sh/release-namespace: kube-system
  labels:
    app: nfs-server-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-server-provisioner-1.3.1
    heritage: Helm
    release: nfs-server-provisioner
mountOptions:
  - vers=3
provisioner: cluster.local/publisher-nfs-server-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
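After saving the YAML, for example to a hypothetical file named nfs-ongoing-sc.yaml, you can create the storage class and verify dynamic provisioning with a throwaway claim; the PersistentVolumeClaim below is purely illustrative:
kubectl apply -f nfs-ongoing-sc.yaml
kubectl get sc nfs-ongoing-sc
cat <<EOF | kubectl -n mdm-ongoing apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-ongoing-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs-ongoing-sc
  resources:
    requests:
      storage: 1Gi
EOF
kubectl -n mdm-ongoing get pvc nfs-ongoing-test
kubectl -n mdm-ongoing delete pvc nfs-ongoing-test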
- On your client computer, extract the MDM Publisher Ansible Playbook assets by running ongoing-playbook-installer-bin-<VERSION>.bin:
./ongoing-playbook-installer-bin-<VERSION>.bin
Confirm that the script created a directory called
mdm-ongoing-<VERSION>.
- Go to the mdm-ongoing-<VERSION> directory.
- Edit ./hosts to set the master host name of the remote Kubernetes cluster.
For example, you can delete any existing placeholder host from the ./hosts file and add a new
one, as follows:
[exampleserver]
acme.example.server02.com
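Before running the playbook, you can check that Ansible can reach the host you added; exampleserver matches the group name in ./hosts, and you might need to adjust the remote user or authentication options for your environment:
ansible -i ./hosts exampleserver -m ping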
- Edit ./roles/masterdatamanagementongoing/vars/main.yml to set up the ongoing synchronization endpoints.
For example:
endpoints:
  authorized_endpoints:
    - type: Match360
      alias: mdc444a
      aspera_alias: asp444a
      aspera_host: worker2.acme.cp.com
      aspera_port: 31000
      host: cpd-cpd40.apps.acme.cp.com
      port: 443
    - type: KAFKA
      cluster_composite_url: "acme.server03.com:9092"
    - type: APPSERVER
      host: acme.server03.com
      alias: mdmacme
      port: 9443
      user: mdmadmin
- Optional: If the ongoing synchronization server requires authorization, include the following endpoint:
    - type: ONGOING
      host: <ONGOING-SYNC-POD>.<ONGOING-SYNC-SERVICE-NAME>.<ONGOING-SYNC-NAMESPACE>
      alias: <ALIAS-LABEL>
      port: 9443
For example:
    - type: ONGOING
      host: mdm-ongoing-0.svc-wlp-ongoing.mdm-ongoing
      alias: ongoing_wlp_acme
      port: 9443
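Because indentation errors in main.yml are easy to introduce, you can check that the file still parses as valid YAML before continuing (a minimal check that uses the PyYAML library, which is installed as an Ansible dependency):
python3 -c "import yaml; yaml.safe_load(open('roles/masterdatamanagementongoing/vars/main.yml')); print('valid YAML')"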
- Optional: If you want to perform a silent installation, complete the following silent mode configuration steps.
Silent installation enables you to define the
options for installing the MDM ongoing synchronization server in an installation response file, then
run the installation process without interactive input. This method is useful for performing
repeated installations.
- Edit the env.sh file.
- Configure the SSH connection and the silent mode installation variables for the
ongoing synchronization server playbook and instances.
For example:
# The ssh connection key file to be used by silent mode install
export TLS_KEY_FILE=<id_rsa file location>
#additional silent mode variables
export INST_BIN_PUBLISHER_PASSPHRASE=<password>
export INST_BIN_KEY_PLAIN_PASS=<password>
export INST_BIN_KEY_STORE_PLAIN_PASS=<password>
export INST_BIN_LTPA_KEY_PLAIN_PASSWORD=<password>
export INST_BIN_TRUST_STORE_PASS=<password>
export INST_BIN_WAS_ADMIN_PASSWORD=<password>
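If you do not already have a key file to reference in TLS_KEY_FILE, you can generate one and copy the public key to the cluster host; the key path and host name are placeholders:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''
ssh-copy-id -i ~/.ssh/id_rsa.pub root@example.ibm.com
Remember to source the file so that the variables are set in the shell that runs the installation:
source ./env.sh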
- Start the MDM Publisher installation by running the Ansible Playbook.
cd mdm-ongoing-playbook-<VERSION>/bin
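The exact start command depends on the scripts included with your version of the playbook assets. For example, assuming that provision_instance.sh, the script referenced later in this procedure for applying configuration changes, is the entry point that provisions an instance:
./provision_instance.sh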
- Optional: Configure SSL security for ongoing synchronization jobs.
You can secure your Apache Kafka streaming by enabling SSL either for one-way
communication or two-way communication.
To secure one-way communication with Kafka:
- Use your certificate file to overwrite the following temporary file:
/root/on/mdm-ongoing-playbook-<VERSION>/custom_certificates/kafka_ssl/oneway/to_be_replaced.txt.
- Run the following script to apply the changes:
${INSTALL_LOC}/mdm-ongoing-playbook-<VERSION>/bin/provision_instance.sh
To secure two-way communication with Kafka:
- Use your certificate files to overwrite the following temporary files:
/root/on/mdm-ongoing-playbook-<VERSION>/custom_certificates/kafka_ssl/oneway/to_be_replaced.txt
/root/on/mdm-ongoing-playbook-<VERSION>/custom_certificates/kafka_ssl/twoway/to_be_replaced.txt
- Run the following script to apply the changes:
${INSTALL_LOC}/mdm-ongoing-playbook-<VERSION>/bin/provision_instance.sh
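Before overwriting the temporary files, you can inspect each certificate to confirm its subject and validity period; this is a standard OpenSSL check, and server.crt is a placeholder for your certificate file:
openssl x509 -in server.crt -noout -subject -issuer -dates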
- Enable the Apache Kafka JAAS client.
- Copy the kafka_client_jaas.conf file into the mdm-ongoing-0 pod, placing it at the following location (see the reference example after these steps): /persist-data/kafka_jaas/kafka_client_jaas.conf.
- Obtain jvm.options by running the following command:
export_config.sh jvm.options
- Edit jvm.options to uncomment the following line:
-Djava.security.auth.login.config=/persist-data/kafka_jaas/kafka_client_jaas.conf
For example:
#-Djava.security.properties=/usr/ibmpacks/ongoing/0.1.1209-pr-28885-SNAPSHOT/conf/java.security
-Xmx512m
-Dcom.ibm.ws.logging.log.directory=/var/log/ongoing
-Dfile.encoding=UTF8
-Djava.security.auth.login.config=/persist-data/kafka_jaas/kafka_client_jaas.conf
- Apply your changes by running the following command:
import_config.sh jvm.options
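For reference, a kafka_client_jaas.conf file for SASL/PLAIN typically looks like the following; the login module and credentials are illustrative and must match your Kafka broker configuration:
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="<KAFKA-USER>"
  password="<KAFKA-PASSWORD>";
};
To copy the file into the pod, as described in the first substep, you can use kubectl cp with the namespace that you created earlier, for example:
kubectl cp kafka_client_jaas.conf mdm-ongoing/mdm-ongoing-0:/persist-data/kafka_jaas/kafka_client_jaas.conf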
What to do next
Now that you have installed and deployed the MDM ongoing synchronization server on a Kubernetes
cluster, you might want to take the following actions: