Limitations

This topic lists the limitations for CDP Private Cloud Base with IBM Storage Scale.

  • The TLS certificates that are created by using the automation script /usr/lpp/mmfs/hadoop/scripts/gpfs_tls_configuration.py are valid for only 90 days and expire thereafter. This issue will be fixed in a future release of HDFS Transparency. For an interim fix, contact IBM Support.
  • Only OpenJDK is supported. Oracle JDK is not supported.
  • TLS/SSL is supported from CDP Private Cloud Base 7.1.6.
  • IPv6 is not supported.
  • Short-circuit read and short-circuit write are not supported.
  • FPO is not supported.
  • HDFS encryption is supported from CDP Private Cloud Base 7.1.6.
  • Upgrading from HDP to CDP Private Cloud Base is not supported.
  • Kudu and Ozone are not supported.
  • For production, a minimum of two NameNodes (HA) and three DataNodes is recommended for the CES HDFS cluster setup.
  • For Hive Warehouse Connector to work in Spark client mode, the spark.driver.log.persistToDfs.enabled parameter must be set to false. As a result, the driver logs are written to local storage and not to IBM Storage Scale. See the example after this list.
  • The installation toolkit cannot be used if the CES HDFS cluster is kerberized. Instead, use manual installation.
  • Ubuntu and SLES are not supported.
  • Do not use Java™ 1.8.0_242 or later when the Kerberos ticket_lifetime or renew_lifetime parameter is set. With these Java versions, HDFS Transparency fails to start.
  • Hadoop services and CES HDFS cannot be colocated on an ECE node.
  • The NameNode cannot be colocated with a DataNode or with any other Hadoop service.
  • Starting from CDP 7.1.8, Impala is supported only on the x86 platform. Impala is not supported on IBM Power.
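
For the Hive Warehouse Connector limitation above, the following is a minimal PySpark sketch of one way to set spark.driver.log.persistToDfs.enabled to false for a single application. The session-builder approach and the application name are illustrative assumptions; the property can equally be set in spark-defaults.conf or passed to spark-submit with --conf.

    from pyspark.sql import SparkSession

    # Keep the Spark driver logs on local storage instead of persisting them
    # to the distributed file system, as required for Hive Warehouse Connector
    # in Spark client mode.
    spark = (
        SparkSession.builder
        .appName("hwc-client-mode-example")  # hypothetical application name
        .config("spark.driver.log.persistToDfs.enabled", "false")
        .getOrCreate()
    )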

For more information, see Limitations and differences from native HDFS.