Service check failures
- MapReduce service check fails.
Solution:
- If the MapReduce service check fails with /user/ambari-qa not found:
Look for the ambari-qa folder in the DFS user directory. If it does not exist, create it; if this step is skipped, the MapReduce service check fails with the /user/ambari-qa path not found error. As root, run the following commands (see the sketch below):
- mkdir <gpfs mount>/user/ambari-qa
- chown ambari-qa:hadoop <gpfs mount>/user/ambari-qa
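The following is a minimal sketch of the check-and-create step, assuming a GPFS mount point of /gpfs/hadoopfs (substitute your own mount point):
# Run as root. GPFS_MOUNT is an example value; use your GPFS mount point.
GPFS_MOUNT=/gpfs/hadoopfs
if [ ! -d "${GPFS_MOUNT}/user/ambari-qa" ]; then
  mkdir -p "${GPFS_MOUNT}/user/ambari-qa"
  chown ambari-qa:hadoop "${GPFS_MOUNT}/user/ambari-qa"
fi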
- If the MapReduce service check times out or the job fails due to a permission failure for the yarn user:
Ensure that the directories listed in yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs are writable by the user yarn. If they are not, add write permission on those directories for the user yarn (see the sketch below).
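A minimal sketch of the permission check, run as root. The directory paths are examples only; use the actual values of yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs from your YARN configuration, and note that chown is just one way to grant write access.
# Example paths; replace with your configured local-dirs and log-dirs.
for dir in /hadoop/yarn/local /hadoop/yarn/log; do
  # If the yarn user cannot write to the directory, grant it ownership.
  sudo -u yarn test -w "$dir" || chown -R yarn:hadoop "$dir"
done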
- What to do when the Accumulo service start or service check fails?
Solution:
Note: Ensure that the HDFS and Zookeeper services are running before you proceed (a sketch for checking service state follows these steps).
- In a non-root environment, run the commands in the workaround steps only after logging in as the non-root user.
- If GPFS is unintegrated, remove the tserver.wal.blocksize entry from the Accumulo configuration. From Ambari, go to the Accumulo configuration, remove the tserver.wal.blocksize value, and save the configuration.
- If GPFS is integrated, follow the workaround for tserver.wal.blocksize as mentioned in the FAQ Accumulo Tserver failed to start.
If the problem still exists, contact IBM® service.
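A minimal sketch for verifying that HDFS and Zookeeper are running, using the Ambari REST API. The host, credentials, and cluster name are placeholders:
# Query the current state of the HDFS service (repeat with ZOOKEEPER).
curl -u admin:<password> -H 'X-Requested-By: ambari' \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/HDFS?fields=ServiceInfo/state'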
- Atlas service check fails.
Solution:
- Restart the Ambari Infra service.
- Restart the HBase service.
- Re-run the Atlas service check.
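If the Ambari UI is not convenient, these restarts can also be issued through the Ambari REST API, as in this sketch (host, credentials, and cluster name are placeholders; AMBARI_INFRA is the service name that Ambari uses for the Ambari Infra service):
# Stop the service by setting its state to INSTALLED...
curl -u admin:<password> -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Ambari Infra"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/AMBARI_INFRA'
# ...then start it again by setting the state to STARTED.
curl -u admin:<password> -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Ambari Infra"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  'http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/AMBARI_INFRA'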
- Falcon service check fails.
Solution:
This is a known issue with HDP. For information on resolving this issue, see Falcon Web UI is inaccessible (HTTP 503 error) and Ambari Service Check for Falcon fails: "ERROR: Unable to initialize Falcon Client object".
- What to do when the Hive service check fails with the following error:
Templeton Smoke Test (ddl cmd): Failed. : {"error":"Unable to access program: /usr/hdp/${hdp.version}/hive/bin/hcat"}http_code <401>
Solution:
HDP is not able to properly parse the ${hdp.version} value. To set the HDP version, complete the following steps:
- Get the HDP version for your environment by running the /usr/bin/hdp-select versions command on any Ambari node.
- In the Ambari GUI, find the templeton.hcat field under Advanced webhcat-site.
- Replace ${hdp.version} in the templeton.hcat field with the hardcoded hdp.version value found in the first step.
For example, if the value of hdp.version is 2.6.5.0-292, change the templeton.hcat value from /usr/hdp/${hdp.version}/hive/bin/hcat to /usr/hdp/2.6.5.0-292/hive/bin/hcat.
- Restart the Hive service components.
- Re-run the Hive service check.
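As a minimal illustration of the version lookup in the first step above (the tail -1 selection assumes the newest installed version is listed last):
# Pick up the installed HDP version, e.g. 2.6.5.0-292.
HDP_VERSION=$(/usr/bin/hdp-select versions | tail -1)
echo "Set templeton.hcat to: /usr/hdp/${HDP_VERSION}/hive/bin/hcat"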