IBM Support

IJ02461: ENABLING RANGER HDFS PLUGIN PREVENTS HDFS SERVICE FROM STARTING


APAR status

  • Closed as program error.

Error description

  • When the Ranger HDFS plugin is enabled, the HDFS service
    must be restarted for the change to take effect. When
    Spectrum Scale HDFS Transparency has been integrated,
    enabling the Ranger HDFS plugin causes the HDFS service to
    fail to start.
    
    Reported in:
    Spectrum Scale 4.2.3.4
    HDP 2.6.2
    HDFS Transparency Connector 2.7.3.1
    Spectrum Scale Ambari management pack 2.4.2.1
    RHEL 7.3 ppc64le
    
    Verification Steps:
    When you try to start the HDFS NameNode, Ambari will show
    a message like the following:
    
    safemode: Call From mn01-dat.cluster.com/10.0.0.1 to
    mn01-dat.cluster.com:8020 failed on connection exception:
    java.net.ConnectException: Connection refused; For more
    details see:
    http://wiki.apache.org/hadoop/ConnectionRefused
    
    If you try to query the state of HDFS using the
    mmhadoopctl command directly, you will see an error
    similar to:
    
    # mmhadoopctl connector getstate
    17/12/05 13:14:23 FATAL conf.Configuration: error parsing conf core-default.xml
    javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
      at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
      at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2530)
      at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
      at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
      at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
      at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
      at org.apache.hadoop.hdfs.tools.GetConf.main(GetConf.java:332)
    Exception in thread "main" java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
      at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2648)
      at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
      at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
      at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
      at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
      at org.apache.hadoop.hdfs.tools.GetConf.main(GetConf.java:332)
    Caused by: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
      at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
      at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2530)
      ... 12 more
    
    A similar error message and Java stack trace would also be
    logged by the NameNode start attempt in
    /var/log/hadoop/root/hadoop-root-namenode-'hostname'.log
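
    The "xinclude feature is not recognized" error above
    usually means an older XML parser jar picked up from
    /usr/share/java is shadowing the JDK's built-in parser.
    As a quick check on the affected node (a sketch only;
    exact jar names vary by distribution):

    ```shell
    # List jars in /usr/share/java whose names suggest an XML parser
    # implementation (an old Xerces build here is the likely culprit)
    ls /usr/share/java 2>/dev/null | grep -i -E 'xerces|xml-apis' \
        || echo "no obvious XML parser jars found"
    ```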
    

Local fix

  • There are two options available to work around the issue.
    
    1) Disable the Ranger HDFS plugin. This allows normal
    operation of HDFS Transparency.
    
    2) While no local fix is available for stopping and
    starting HDFS from Ambari, you can modify the
    hadoop-env.sh configuration file by hand and control HDFS
    Transparency using the mmhadoopctl command line.
    
    Remove the following 3 lines from
    /usr/lpp/mmfs/hadoop/etc/hadoop/hadoop-env.sh on the
    HDFS Transparency NameNode server:
    
      for f in /usr/share/java/*.jar; do
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      done
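
    One possible way to remove those three lines without
    hand-editing, assuming the loop appears in the file
    exactly as shown above, is a sed address-range delete
    (this is a sketch, not part of the official fix; keep the
    backup it creates):

    ```shell
    # Path from the APAR text; adjust if your installation differs
    HADOOP_ENV=/usr/lpp/mmfs/hadoop/etc/hadoop/hadoop-env.sh

    # Only edit if the file is present; keep a backup alongside it
    if [ -f "$HADOOP_ENV" ]; then
        cp "$HADOOP_ENV" "$HADOOP_ENV.bak"
        # Delete from the for-loop line through the matching "done"
        sed -i '/for f in \/usr\/share\/java\/\*\.jar; do/,/done/d' "$HADOOP_ENV"
    fi
    ```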
    
    You can now start HDFS Transparency using the following
    command:
    
    /usr/lpp/mmfs/bin/mmhadoopctl connector start
    
    You can query the state on all nodes using
    
    mmhadoopctl connector getstate
    
    And you can stop HDFS Transparency using
    
    mmhadoopctl connector stop
    
    Attempting to restart the HDFS service from Ambari will
    revert hadoop-env.sh to its error-causing state, so use
    only mmhadoopctl until a fix is installed.
    

Problem summary

  • If the HADOOP_CLASSPATH contains all of the jar files from
    the /usr/share/java directory, it can prevent HDFS
    Transparency from starting up.
    

Problem conclusion

  • The HADOOP_CLASSPATH in the
    /usr/lpp/mmfs/hadoop/etc/hadoop/hadoop-env.sh file and in
    the connector.py file for Ambari only needs to add
    /usr/share/java/mysql-connector-java.jar to its exported
    class path instead of all of the packages under the
    /usr/share/java directory.
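
    Based on the conclusion above, the corrected export in
    hadoop-env.sh would look something like the line below (a
    sketch; the exact jar file name can vary by distribution):

    ```shell
    # Add only the MySQL connector jar instead of every jar
    # under /usr/share/java
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/share/java/mysql-connector-java.jar
    ```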
    

Temporary fix

Comments

APAR Information

  • APAR number

    IJ02461

  • Reported component name

    SPECTRUM SCALE

  • Reported component ID

    5725Q01LP

  • Reported release

    423

  • Status

    CLOSED PER

  • PE

    NoPE

  • HIPER

    NoHIPER

  • Special Attention

    NoSpecatt / Xsystem

  • Submitted date

    2017-12-06

  • Closed date

    2018-02-20

  • Last modified date

    2018-02-20

  • APAR is sysrouted FROM one or more of the following:

  • APAR is sysrouted TO one or more of the following:

Fix information

  • Fixed component name

    SPECTRUM SCALE

  • Fixed component ID

    5725Q01LP

Applicable component levels

