Linux-UNIX: Ranger HDFS for Hortonworks and Cloudera 7 FAQs

Find answers to some basic issues with Ranger HDFS integration.

HDFS audits are not being processed and the following message is seen in the S-TAP logs:
2020.04.22 12:55:35 ranger_hdfs_integration/guard_hdfs.cc(78) HDFS: cannot load libhdfs, no path provided
Verify that ranger_hdfs_lib_location is set to the directory containing libhdfs.so on the server hosting S-TAP.
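One way to find the correct value is to search the S-TAP host for the library and take its parent directory. This is a sketch: the search roots (/usr, /opt) are assumptions, and on HDP 3.x the library typically lives under /usr/hdp/&lt;version&gt;/usr/lib.

```shell
# Find libhdfs.so on the S-TAP host; its parent directory is the value
# to set for ranger_hdfs_lib_location. Search roots are an assumption;
# widen them if your Hadoop client libraries are installed elsewhere.
LIBHDFS=$(find /usr /opt -type f -name 'libhdfs.so*' 2>/dev/null | head -n 1)
if [ -n "$LIBHDFS" ]; then
    echo "ranger_hdfs_lib_location=$(dirname "$LIBHDFS")"
else
    echo "libhdfs.so not found; install the Hadoop native client libraries"
fi
```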
HDFS audits are not being processed and the following message is seen in the S-TAP logs:
2020.04.22 12:57:52 ranger_hdfs_integration/guard_hdfs.cc(111) HDFS: unable to open /tmp/libhdfs.so, error: /tmp/libhdfs.so: cannot open shared object file: No such file or directory
Here ranger_hdfs_lib_location is set, but the configured directory does not contain the library. Verify that ranger_hdfs_lib_location is set to the directory that actually contains libhdfs.so on the server hosting S-TAP.
HDFS audits are not being processed and the following message is seen in the S-TAP logs:
2020.04.22 12:59:20 ranger_hdfs_integration/guard_hdfs.cc(111) HDFS: unable to open /usr/hdp/3.1.0.0-78/usr/lib/libhdfs.so, error: libjvm.so: cannot open shared object file: No such file or directory
Verify that ld_library_paths is set to the directory containing libjvm.so on the server hosting S-TAP.
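The directory can be located the same way; this is a sketch that assumes libjvm.so lives under JAVA_HOME (or the common /usr/lib/jvm tree) on the S-TAP host.

```shell
# Find libjvm.so under the JVM installation; its parent directory is the
# value to include in the ld_library_paths S-TAP parameter. The search
# root is an assumption; adjust it for your JVM layout.
LIBJVM=$(find "${JAVA_HOME:-/usr/lib/jvm}" -type f -name libjvm.so 2>/dev/null | head -n 1)
if [ -n "$LIBJVM" ]; then
    echo "ld_library_paths=$(dirname "$LIBJVM")"
else
    echo "libjvm.so not found under ${JAVA_HOME:-/usr/lib/jvm}"
fi
```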
HDFS audits are not being processed and the following message is seen in the S-TAP logs:
hdfsExists: invokeMethod((Lorg/apache/hadoop/fs/Path;)Z) error: ConnectException: Connection refusedjava.net.ConnectException: Call From <STAP_HOSTNAME>/<STAP_IP> to <NN_HOSTNAME>:<PORT> failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Verify that ranger_hdfs_namenode and ranger_hdfs_port are correct. Also, ensure that the HDFS NameNode is up and running.
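A quick TCP probe from the S-TAP host can confirm whether the configured host and port are reachable at all. The hostname and port below are placeholders; substitute the values from ranger_hdfs_namenode and ranger_hdfs_port.

```shell
# Placeholders: substitute your ranger_hdfs_namenode and ranger_hdfs_port.
NN_HOST=namenode.example.com
NN_PORT=8020
# A failed probe here corresponds to the ConnectException above
# (NameNode down, wrong host, or wrong port).
if timeout 5 bash -c "exec 3<>/dev/tcp/${NN_HOST}/${NN_PORT}" 2>/dev/null; then
    REACHABLE=yes
else
    REACHABLE=no
fi
echo "NameNode ${NN_HOST}:${NN_PORT} reachable: ${REACHABLE}"
```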
HDFS audits are not being processed and the following message is seen in the S-TAP logs:
hdfsBuilderConnect(forceNewInstance=0, nn=hw3-cl1-01.swg.usma.ibm.com, port=8020, kerbTicketCachePath=/usr/local/guardium/guard_stap/hdfs_reader_ticket, userName=foobar) error: LoginException: Unable to obtain password from user org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: foobar using ticket cache file: /usr/local/guardium/guard_stap/hdfs_reader_ticket javax.security.auth.login.LoginException: Unable to obtain password from user
There is an issue with the keytab or principal that S-TAP is configured to use. Verify that ranger_hdfs_keytab and ranger_hdfs_user are correct for the environment.
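The keytab and principal can be checked with the standard Kerberos client tools on the S-TAP host. This is a sketch: the keytab path and principal below are hypothetical; substitute the actual ranger_hdfs_keytab and ranger_hdfs_user values.

```shell
# Hypothetical values: replace with your ranger_hdfs_keytab path and
# ranger_hdfs_user principal.
KEYTAB=/path/to/hdfs_reader.keytab
PRINCIPAL=hdfsreader@EXAMPLE.COM
if command -v klist >/dev/null 2>&1; then
    # The configured principal must appear among the keytab's entries.
    klist -kt "$KEYTAB" || echo "cannot read keytab: $KEYTAB"
    # Confirm the keytab can actually authenticate as that principal.
    kinit -kt "$KEYTAB" "$PRINCIPAL" || echo "kinit failed for $PRINCIPAL"
else
    echo "Kerberos client tools (krb5-workstation / krb5-user) not installed"
fi
```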
HDFS audits are not being processed and a message similar to the following is seen in the S-TAP logs:
CannotObtainBlockLengthException: Cannot obtain block length for LocatedBlock {BP-475011667-9.32.164.237-1546906377615:blk_1074457421_717409; getBlockSize()=428; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[9.32.164.137:1019,DS-39fe5b73-f666-4285-bd50-805b4acc9250,DISK], DatanodeInfoWithStorage[9.32.164.148:1019,DS-fd105fa3-aa8e-44f5-a878-7d409ea6d24f,DISK]]}
On an HDFS client for the HDFS cluster, run hdfs debug recoverLease against the Ranger audit log file whose name appears just before the CannotObtainBlockLengthException message, using the syntax:
hdfs debug recoverLease -path <path to audit file> -retries <number of times to try to recover the lease on the HDFS file>
For example:
hdfs debug recoverLease -path /ranger/audit/solr/20200623/solr_ranger_audit_hw3-cl1-02.log -retries 10
hdfs debug recoverLease -path /ranger/audit/storm/20200812/storm_ranger_audit_hw3-cl1-01.log -retries 10
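When several components are affected, the same command can be run over every Ranger audit file for a given day. This is a sketch: it assumes the /ranger/audit/&lt;component&gt;/&lt;date&gt; layout shown in the examples above and an HDFS client on the PATH.

```shell
# Date directory from the examples above; substitute the affected day.
DAY=20200623
# -C lists matching paths one per line; recover the lease on each file.
for f in $(hdfs dfs -ls -C /ranger/audit/*/"$DAY" 2>/dev/null); do
    hdfs debug recoverLease -path "$f" -retries 10
done
```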
For more information, see Cannot obtain block length for LocatedBlock.