Updating the HDP mount point configuration
Procedure
- Log in to the NFS node as the root user.
- Update the Liberty Docker.
- Open /fcisi/sifs-liberty-instance/config/kafka.properties in a text editor, and replace the bootstrap.servers value with the Kafka host.
- Open /fcisi/sifs-liberty-instance/krb5.conf in a text editor, and replace the admin_server and kdc values with the Ambari host.
- Copy /etc/security/keytabs/sifsuser.keytab from the Ambari node to the /fcisi/sifs-liberty-instance/keytabs directory on the NFS node.
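For reference, after the preceding kafka.properties and krb5.conf edits, the changed entries might look like the following. The kafkahost.example.com and ambarihost.example.com host names and the EXAMPLE.COM realm are placeholders for your environment; the Kafka broker port is typically 6667 on HDP.
In kafka.properties:
bootstrap.servers=kafkahost.example.com:6667
In krb5.conf:
[realms]
  EXAMPLE.COM = {
    kdc = ambarihost.example.com
    admin_server = ambarihost.example.com
  }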
- Copy the kafka.service.keytab file from the HDP Kafka broker host to the directory that is referenced in the useKeytab parameter in the /fcisi/sifs-liberty-instance/config/sifs-jass.conf file.
- Update the principal values in the /fcisi/sifs-liberty-instance/config/sifs-jass.conf file.
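A minimal sketch of the KafkaClient stanza in sifs-jass.conf, assuming the IBM JDK Krb5LoginModule; the keytab path, principal, and realm shown here are illustrative and must match your environment:
KafkaClient {
  com.ibm.security.auth.module.Krb5LoginModule required
  useKeytab="file:///fcisi/sifs-liberty-instance/keytabs/kafka.service.keytab"
  principal="kafka/kafkahost.example.com@EXAMPLE.COM"
  credsType=both;
};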
- From the Hadoop master node, copy the /usr/hdp/2.6.4.0-91/hadoop/ directory to the /fcisi/sifs-liberty-instance/ directory.
scp -r /usr/hdp/2.6.4.0-91/hadoop/ root@<nfs server>:/fcisi/sifs-liberty-instance/
- Update the Solr Docker.
- Copy the /etc/security/keytabs/solr.keytab file from the Ambari node to the /fcisi/sifs-solr-instance/keytabs/ directory on the NFS node.
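For example, run the following command from the Ambari node:
scp /etc/security/keytabs/solr.keytab root@<nfs server>:/fcisi/sifs-solr-instance/keytabs/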
- Create a folder that is named hadoop in the /fcisi/sifs-solr-instance directory on the NFS server.
- Copy the conf directory from the /usr/hdp/2.6.4.0-91/hadoop directory (available at the HDFS namenode) to the /fcisi/sifs-solr-instance/hadoop directory on the NFS server.
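For example, run the following commands (the first on the NFS server, the second from the HDFS namenode):
mkdir /fcisi/sifs-solr-instance/hadoop
scp -r /usr/hdp/2.6.4.0-91/hadoop/conf root@<nfs server>:/fcisi/sifs-solr-instance/hadoop/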
- Open /fcisi/sifs-solr-instance/krb5.conf in a text editor, and replace the values for the NFS server.
- Open /fcisi/sifs-solr-instance/solrconfig.xml in a text editor, and replace the <hdfs.namenode> values with your HDFS node.
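In the file, <hdfs.namenode> typically appears where the cores' HDFS storage location is configured. A sketch of such a line, assuming the standard Solr HdfsDirectoryFactory and the default NameNode port 8020 (the exact layout of your file may differ):
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://<hdfs.namenode>:8020/user/solruser/sifs</str>
</directoryFactory>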
- Run the following command on the Solr container:
kubectl exec -it <solr> bash
cd /opt/ibm/sifs
./create_solr_cores.sh
Note: If you see errors while creating the cores, from the Ambari console, click HDFS and hover your mouse over the Active NameNode. Ensure that you use the displayed hostname as the value for <hdfs.namenode> in the /fcisi/sifs-solr-instance/solrconfig.xml file. Then, rerun create_solr_cores.sh.
After the cores are successfully created, you can see that the data and index directories are created in the /user/solruser HDFS directory, as follows:
[root@servername hadoop]# hadoop fs -ls /user/solruser/sifs/data
Found 3 items
drwxrwxr-x   - solruser hadoop          0 2018-07-10 04:12 /user/solruser/sifs/data/index
drwxrwxr-x   - solruser hadoop          0 2018-07-10 04:02 /user/solruser/sifs/data/snapshot_metadata
drwxrwxr-x   - solruser hadoop          0 2018-07-10 04:12 /user/solruser/sifs/data/tlog
- Establish a mount point for the Kibana pod.
- Run the following command:
kubectl edit deployment sifs-base-kibana
- Modify the paths in env and volumeMounts:
/usr/share/kibana/config/kibana.crt to /usr/share/certificates/kibana.crt
/usr/share/kibana/config/kibana.key to /usr/share/certificates/kibana.key
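A sketch of the relevant fragment after the change; the environment variable names, volume name, and overall layout depend on your deployment and are shown here only as an assumption:
env:
- name: SERVER_SSL_CERTIFICATE
  value: /usr/share/certificates/kibana.crt
- name: SERVER_SSL_KEY
  value: /usr/share/certificates/kibana.key
volumeMounts:
- mountPath: /usr/share/certificates
  name: kibana-certs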
- Run the following command: