In "Introduction to YARN," I describe YARN, which is a new architecture of the processing layer in Apache Hadoop. YARN significantly changes how distributed applications are run. It offers improved scalability, higher resource utilization, and an option to run additional kinds of workloads on the same cluster.
Because YARN is now production-ready, and because both small and large Hadoop clusters clearly benefit from using it, there is no reason not to migrate from MRv1 to YARN. Companies such as Yahoo!, Spotify, and eBay have already migrated, and each day they run thousands of applications on top of YARN. Large Hadoop vendors also favor YARN and offer extensive support for Hadoop clusters powered by YARN.
This article explains and demonstrates how to migrate a single-node Hadoop cluster from MapReduce to YARN in an agile way.
The agile migration
Although Hadoop vendors such as Cloudera and Hortonworks provide excellent and detailed documentation for installing YARN, they follow an all-or-nothing approach. With this approach, you perform almost all of the migration steps first, then you start the cluster and verify that it is correctly migrated. If the migration fails, you review the migration steps to determine where the misconfiguration was made. Because the migration to YARN is a complex and error-prone process, it can be challenging to troubleshoot an almost-migrated cluster.
In contrast, this article describes how to use an agile approach with quick and frequent iterations. In the first iteration, you install only the necessary components and start the YARN cluster to verify whether it runs applications successfully. In the next iterations, you extend the cluster's functionality and optimize the most important configuration settings. The goal is to have a working YARN cluster that can process users' applications after each iteration. Using this approach, administrators have the ability to temporarily halt the migration process after each iteration and continue it later at a convenient time.
Scope of the migration
The focus of this article is to outline the steps necessary to migrate from MRv1 to YARN. It assumes you already have the Hadoop MRv1 cluster installed and running.
YARN is available in Hadoop 2.x. If you use Hadoop 1.x, you need to first upgrade the Hadoop Distributed File System (HDFS) to 2.x and make sure that any other components you use, such as Pig and Hive, are compatible with Hadoop 2.x.
This article does not cover the steps needed to upgrade HDFS and components such as Pig and Hive.
To demonstrate the process of the migration, start with a single-node Hadoop cluster, where all Hadoop daemons run in separate Java™ virtual machines (JVMs). The single-node cluster is often referred to as pseudo-distributed. Although a single-node Hadoop cluster is impractical for production use, it provides a simple and efficient method for learning Hadoop basics and experimenting with its configuration.
This migration example uses Ubuntu 12.04 with 4 GB of RAM, one 4-core CPU, and Hadoop MRv1 installed from Cloudera's distribution including Apache Hadoop (CDH4). Although the example uses CDH4, you can use these steps on different distributions, including the standard Apache Hadoop or Hortonworks Data Platform 2.0 (HDP2). CDH4 was selected only because it provides Debian packages for MRv1 used to demonstrate the process of the migration from MRv1 to YARN on Ubuntu server.
Iteration 1: Building a single-node YARN cluster
In this iteration, you prepare a minimal (but working) set of configuration files. Then you uninstall MRv1 daemons and install two required YARN daemons. At the end of the iteration, you have a YARN cluster running and processing data. This cluster, however, has a few important features missing and is configured with many default and non-optimal configuration settings.
Step 1. Configure properties for YARN clusters
To prepare the minimalistic configuration for the YARN cluster, create a custom directory based on your current configuration directory, as shown below.
Listing 1. Create a custom directory
$ sudo cp -r /etc/hadoop/conf.dist /etc/hadoop/conf.yarn
The /etc/hadoop/conf.yarn directory contains all the necessary Hadoop-related configuration files for a YARN cluster. Later, when the configuration is ready, you will make this directory the preferred one by using the alternatives framework (see Step 7).
First, add two configuration settings to mapred-site.xml.
Listing 2. The mapred-site.xml file
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
  <description>The staging dir used while submitting jobs.</description>
</property>
YARN introduces a configuration file, yarn-site.xml, that contains the values of YARN-related configuration settings. The most important settings are the port parameters of the ResourceManager, so clients know where to submit applications.
Check the hostname of your machine by issuing the hostname command.
Listing 3. Check the hostname of your machine
$ hostname
hakunamapdata
Note: Use your own hostname everywhere the hakunamapdata parameter is used. A real hostname, rather than the localhost parameter, makes it possible for the cluster to expand to multiple nodes without having to change the value of the yarn.resourcemanager.address setting to permit new nodes to connect to the ResourceManager.
In the first iteration, the yarn-site.xml file contains only four configuration settings, as shown below.
Listing 4. yarn-site.xml with four configuration settings
<property>
  <name>yarn.resourcemanager.address</name>
  <value>hakunamapdata:8032</value>
  <description>The address of the applications manager interface in the RM.</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
    $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
    $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
    $YARN_HOME/*,$YARN_HOME/lib/*
  </value>
  <description>Classpath for typical applications.</description>
</property>
In MRv1, the shuffle function required to run a MapReduce job is part of the TaskTracker. In YARN, however, TaskTrackers no longer exist; they are replaced by NodeManagers. Moving the MapReduce shuffle code directly into the NodeManager would be a bad design decision, because the NodeManager is a generic process and should not contain MapReduce-specific code.
As a result, the shuffle is implemented as an auxiliary service loaded by the NodeManager. This service starts up a web server and knows how to send the intermediate data from local disks to reduce tasks when they request them.
The yarn.application.classpath parameter in the yarn-site.xml file contains multiple environment variables. Before proceeding further, you need to set two of them in the hadoop-env.sh file.
Listing 5. Setting two environmental variables in hadoop-env.sh
#!/bin/bash
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
Step 2. Adjust the heap size settings for Hadoop daemons
This is an optional, but recommended, step. The default maximum heap size for most of the Hadoop daemons is 1 GB, but because all daemons run on a single 4 GB machine, you need to decrease the sizes. You can use the exact values shown in Listing 6 or in Listing 7, or adjust them to fit your machine. The settings related to heap sizes for YARN daemons are configured in the yarn-env.sh file.
Listing 6. Configure heap sizes of YARN daemons in yarn-env.sh
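A minimal sketch of the yarn-env.sh entries, mirroring the HDFS settings in Listing 7 (the exact variable names and values here are an assumption; adjust them to fit your machine):

```shell
# Limit the YARN daemons to 64 MB heaps, mirroring Listing 7 (values assumed)
export YARN_RESOURCEMANAGER_OPTS="-Xmx64m"
export YARN_NODEMANAGER_OPTS="-Xmx64m"
```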
In contrast, the heap sizes for HDFS daemons are configured in the hadoop-env.sh file.
Listing 7. Configure heap sizes of HDFS daemons in hadoop-env.sh
export HADOOP_NAMENODE_OPTS="-Xmx64m"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx64m"
export HADOOP_DATANODE_OPTS="-Xmx64m"
Step 3. Configure the staging directory
Because users' job files are written to the staging directory (specified by yarn.app.mapreduce.am.staging-dir) on behalf of the user who submitted a job, use the command in Listing 8 to make sure that users can write to the /user directory.
Listing 8. Enable users to write to the /user directory
$ sudo -u hdfs hadoop fs -chmod 777 /user
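Permissions of 777 also let users delete each other's staging files. One possible refinement (an assumption, not part of the original steps) is to add the sticky bit instead:

```shell
# Alternative (assumption): world-writable with the sticky bit set,
# so users cannot remove each other's staging files
$ sudo -u hdfs hadoop fs -chmod 1777 /user
```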
Step 4. Download YARN packages
Cloudera and Hortonworks provide remote yum, SLES, and apt repositories from which you can download and install MRv1 and YARN packages. You must set up access to the appropriate remote repository.
If you have already installed the MRv1 cluster using CDH4, you should already have access to a remote repository that contains YARN packages compatible with your operating system. Otherwise, to gain access, issue the commands shown below.
Listing 9. Access a remote repository that contains YARN packages
$ wget http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb
$ sudo dpkg -i cdh4-repository_1.0_all.deb
Retrieve a new list of packages and verify that YARN packages are accessible.
Listing 10. Retrieve a new list of packages
$ sudo apt-get update
$ sudo apt-cache search yarn
hadoop-0.20-conf-pseudo - Hadoop installation in pseudo-distributed mode with MRv1
hadoop-yarn - The Hadoop NextGen MapReduce (YARN)
hadoop-yarn-nodemanager - Node manager for Hadoop
hadoop-yarn-proxyserver - Web proxy for YARN
hadoop-yarn-resourcemanager - ResourceManager for Hadoop
In a similar way, you can get access to the HDP2 packages. If you are using the standard Apache Hadoop, you can download Hadoop from http://mirrors.ibiblio.org/apache/hadoop/common/stable/hadoop-2.2.0.tar.gz.
Step 5. Create users and groups
A good practice is to run the various Hadoop daemons under separate, dedicated accounts. YARN introduces two new daemons, the ResourceManager and the NodeManager, which run under the yarn account.
Use the command in Listing 11 to determine whether the user yarn has already been created and whether it belongs to the hadoop group (for example, it might have been created when you installed MRv1 using CDH4 for the first time).
Listing 11. Check to see if the user yarn is already created
$ id yarn
uid=118(yarn) gid=129(yarn) groups=129(yarn),126(hadoop)
If not, create a user yarn that is a member of the hadoop group, as shown below.
Listing 12. Create a user yarn
$ groupadd hadoop
$ useradd -g hadoop yarn
Step 6. Stop MRv1 cluster and uninstall MRv1 packages
After the configuration files are created, you can stop MRv1 daemons.
Listing 13. Stop MRv1 daemons and uninstall their packages
$ sudo /etc/init.d/hadoop-0.20-mapreduce-tasktracker stop
$ sudo /etc/init.d/hadoop-0.20-mapreduce-jobtracker stop
If these daemons are successfully stopped, you should not see them in the output of the sudo jps command.
Listing 14. Output of the sudo jps command
$ sudo jps
7757 Jps
5389 NameNode
5581 DataNode
7714 SecondaryNameNode
Remove the MRv1 packages, as shown below.
Listing 15. Remove the MRv1 packages
$ sudo apt-get remove hadoop-0.20-mapreduce-jobtracker
$ sudo apt-get remove hadoop-0.20-mapreduce-tasktracker
$ sudo apt-get remove hadoop-0.20-mapreduce
Double-check that the packages are removed successfully.
Listing 16. Double-check that the packages are removed
$ sudo dpkg-query -l | grep mapreduce
rc hadoop-0.20-mapreduce ...
rc hadoop-0.20-mapreduce-jobtracker ...
rc hadoop-0.20-mapreduce-tasktracker ...
The first column in the output above should contain rc, which means that the packages are removed but their configuration files remain. To remove the packages together with their configuration files, you need to use the purge option.
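The purge variant of the removal commands from Listing 15 looks like this:

```shell
# purge removes the packages together with their configuration files
$ sudo apt-get purge hadoop-0.20-mapreduce-jobtracker
$ sudo apt-get purge hadoop-0.20-mapreduce-tasktracker
$ sudo apt-get purge hadoop-0.20-mapreduce
```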
Step 7. Configure alternatives
By using the alternatives framework, you ensure that the symbolic link /etc/hadoop/conf points to the directory with the YARN configuration.
Listing 17. Use the alternatives framework
$ sudo update-alternatives --verbose --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.yarn 50
Make sure that /etc/hadoop/conf.yarn is the directory with the highest priority and that it is displayed as the current best version.
Listing 18. Ensure that /etc/hadoop/conf.yarn has the highest priority
$ sudo update-alternatives --display hadoop-conf
hadoop-conf - auto mode
link currently points to /etc/hadoop/conf.yarn
/etc/hadoop/conf.dist - priority 30
/etc/hadoop/conf.empty - priority 10
/etc/hadoop/conf.yarn - priority 50
Current 'best' version is '/etc/hadoop/conf.yarn'.
Step 8. Install YARN packages and start YARN daemons
In the first iteration, you install the ResourceManager, the NodeManager, and MRv2 packages, as shown in Listing 19. MRv2 is simply the re-implementation of the classic MapReduce paradigm that runs on top of YARN. The ResourceManager and the NodeManager are new daemons introduced by YARN.
Listing 19. Get the ResourceManager and NodeManager
$ sudo apt-get update
$ sudo apt-get install hadoop-yarn-resourcemanager
$ sudo apt-get install hadoop-yarn-nodemanager
$ sudo apt-get install hadoop-mapreduce
By default, after installing packages with CDH, the daemons are automatically started. (This behavior differs from the HDP2 and standard Apache Hadoop distributions.) If the daemons are not already running, start them using the commands shown below.
Listing 20. Start the daemons
$ sudo /etc/init.d/hadoop-yarn-resourcemanager start
$ sudo /etc/init.d/hadoop-yarn-nodemanager start
Note: If you have adjusted the heap sizes for the HDFS daemons in a previous step, restart the NameNode, the DataNode, and the SecondaryNameNode for these changes to take effect.
Verify that the daemons are started with the new memory settings. In the case of multiple occurrences of the -Xmx option, the value of the last one is used. Issue the command $ sudo jps -v | grep Xmx to verify that the right amount of memory is used by the Hadoop daemons.
If the daemons do not start, troubleshoot by looking at the logs in /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log and /var/log/hadoop-yarn/yarn-yarn-nodemanager-*.log first.
Step 9. Browse the ResourceManager's web user interface
Point your browser to ResourceManager's web UI at http://hakunamapdata:8088/cluster, which is similar to what is shown in Figure 1.
Figure 1. The ResourceManager's web UI after a fresh installation
Step 10. Submit the first MapReduce job on the YARN cluster
To verify the installation, smoke test the cluster using the Pi Estimator MapReduce job.
Note: Use a different JAR file (/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar), as shown in Listing 21. The directory /usr/lib/hadoop-0.20-mapreduce gets deleted when the hadoop-0.20-mapreduce package is removed. The directory /usr/lib/hadoop-mapreduce gets created when the hadoop-mapreduce package is installed.
Listing 21. Smoke test the cluster
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 10
...
14/03/23 20:53:53 INFO client.YarnClientImpl: Submitted application application_1395603842382_0001 to ResourceManager at hakunamapdata/127.0.1.1:8032
14/03/23 20:53:53 INFO mapreduce.Job: The url to track the job: http://hakunamapdata:8088/proxy/application_1395603842382_0001/
14/03/23 20:53:53 INFO mapreduce.Job: Running job: job_1395603842382_0001
14/03/23 20:54:01 INFO mapreduce.Job: map 0% reduce 0%
14/03/23 20:54:47 INFO mapreduce.Job: map 60% reduce 0%
14/03/23 20:55:20 INFO mapreduce.Job: map 100% reduce 0%
14/03/23 20:55:21 INFO mapreduce.Job: map 100% reduce 100%
14/03/23 20:55:21 INFO mapreduce.Job: Job job_1395603842382_0001 completed successfully
...
Job Finished in 89.679 seconds
Estimated value of Pi is 3.20000000000000000000
Ensure that you can see this job on http://hakunamapdata:8088/cluster, as shown in Figure 2.
Figure 2. ResourceManager's web UI that contains a successfully completed application
Step 11. Check the statistics of the MapReduce job
If you click History in the last column to see the statistics of the job, you get an Unable to connect error message, as shown in Figure 3. The browser tries to connect to the MapReduce Job History Server, but that server has not been deployed yet.
Note: /jobhistory/ becomes part of the URL, which takes the form http://hakunamapdata:19888/jobhistory/job/.
Figure 3. Unable to connect error
Iteration 2: Configuring the MapReduce Job History Server
Although the YARN cluster is running and processing data, you cannot see any metrics from MapReduce jobs. This situation is unlike what happens in MRv1, where metrics of currently running and recently finished MapReduce jobs are easily accessed through the JobTracker's web UI.
In YARN, the JobTracker no longer exists. Job life cycle and metrics management becomes the responsibility of the short-lived ApplicationMasters. When a user submits an application, an instance of the ApplicationMaster is started to coordinate the execution of all tasks within the application. When the application completes, the ApplicationMaster terminates, and you no longer have access to the metrics of this application.
For this reason, a new MapReduce Job History Server (MR JHS) has been introduced. It maintains information about MapReduce jobs after their ApplicationMasters end.
When an ApplicationMaster is running, the ResourceManager web UI forwards the requests to this ApplicationMaster. When the ApplicationMaster finishes, the ResourceManager web UI simply forwards the requests to the MR JHS.
The next step is to deploy MapReduce Job History Server.
Step 1. Configure properties for the MapReduce Job History Server
Specify configuration settings that indicate where the MR JHS runs and stores the history files.
Listing 22. Specify location of MR JHS in mapred-site.xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hakunamapdata:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hakunamapdata:19888</value>
  <description>MapReduce JobHistory Server Web UI host:port</description>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value>
  <description>Directory where history files are written by MapReduce jobs.</description>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>
  <description>Directory where history files are archived by the MR JobHistory Server.</description>
</property>
You can use two optional, additional settings:
- mapreduce.jobhistory.max-age-ms — Defaults to 604,800,000 milliseconds (1 week) and specifies the retention period for history files. The default value might be too short if you want to run an analysis of historical MapReduce jobs.
- mapreduce.jobhistory.joblist.cache.size — Defaults to 20,000 jobs and specifies the number of the most recent jobs that the MR JHS should keep in memory (information about older jobs can still be read from HDFS). Obviously, the higher mapreduce.jobhistory.joblist.cache.size is, the more memory the MR JHS consumes.
Step 2. Configure directories
In the previous step, you specified two directories where MapReduce history files are to be temporarily and permanently stored. Create those directories now.
Listing 23. Creating directories for MapReduce history files
$ sudo -u hdfs hadoop fs -mkdir -p /mr-history/tmp /mr-history/done
$ sudo -u hdfs hadoop fs -chown -R mapred:hadoop /mr-history/tmp /mr-history/done
$ sudo -u hdfs hadoop fs -chmod -R 777 /mr-history/tmp /mr-history/done
Step 3. Install and start the Job History Server
Install and start the MR JHS, as shown below.
Listing 24. Install and start the MR JHS
$ sudo apt-get install hadoop-mapreduce-historyserver
$ sudo /etc/init.d/hadoop-mapreduce-historyserver start
After a successful configuration, installation, and startup, the MapReduce JHS should be available at http://hakunamapdata:19888/jobhistory. If you notice problems, you can troubleshoot by looking at the logs in /var/log/hadoop-mapreduce/yarn-mapred-historyserver-*.log first.
Step 4. Submit the first MapReduce job on the YARN cluster
In this step, run the same job as in the previous iteration.
Listing 25. Submit first MapReduce job on YARN cluster
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 10
This time, look for the job's metrics after the job completes.
Point your browser to the ResourceManager to see the job running there. When you click History, you will access the MapReduce JHS to see useful information about the job, the configuration, links to its counters, and the list of map and reduce tasks.
Figure 4. Basic information about a MapReduce job exposed by the MapReduce JHS
To get counters for this job, click Counters in the menu in the top-left corner.
Figure 5. Counters for a MapReduce job exposed by the MapReduce JHS
Step 5. Check the cluster's configuration
At this point, if you go back to the ResourceManager's web UI and click Nodes in the menu in the top-left corner, you might also notice that the single-node YARN cluster is configured to use 8 GB of memory. Because the node has only 4 GB of memory available, the NodeManager is misconfigured and reports more memory than it really has.
Figure 6. List of the NodeManagers in the YARN cluster configured with the default amount of available memory to run application's containers
Iteration 3: Tweaking memory configuration settings
In this iteration, you configure how much memory the NodeManager can use for running application containers. You also specify possible minimum and maximum sizes of any requested container and default sizes for containers that run the MapReduce tasks and the MapReduce ApplicationMasters.
Step 1. Configure the amount of memory available for running containers
The node in this example has 4 GB of RAM and one four-core CPU. Each of the five daemons running (the ResourceManager, NodeManager, NameNode, DataNode, and SecondaryNameNode) consumes 64 MB, so the total is 320 MB (5 x 64 MB).
Of the 4 GB, reserve 1 GB of RAM for system processes and Hadoop daemons. The remaining 3 GB are to be used by the NodeManager for running the containers.
Listing 26. yarn-site.xml
<property> <name>yarn.nodemanager.resource.memory-mb</name> <value>3072</value> <description>Amount of physical memory, in MB, that can be allocated for containers.</description> </property>
Restart the NodeManager for this change to take effect, using the command $ sudo /etc/init.d/hadoop-yarn-nodemanager restart. After restarting the NodeManager, the total memory resource changes to 3 GB, as shown in Figure 7.
Figure 7. A single-node YARN cluster that contains one NodeManager with 3GB of memory available to run application containers
Step 2. Specify the limits of minimum and maximum size of the containers
Define the memory limits for minimum and maximum size of the containers that any ApplicationMaster can request. These limits ensure that developers do not submit applications that request unreasonably small and large containers.
Listing 27. yarn-site.xml
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
  <description>The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this won't take effect, and the specified value will get allocated at minimum.</description>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>3072</value>
  <description>The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this won't take effect, and will get capped to this value.</description>
</property>
Restart the ResourceManager for this change to take effect:
$ sudo /etc/init.d/hadoop-yarn-resourcemanager restart.
Step 3. Specify the default sizes for containers that run map and reduce tasks
Map and reduce tasks are run in the containers. The memory size of these containers can be configured in the mapred-site.xml file.
Listing 28. Memory size in mapred-site.xml
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx410m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx819m</value>
</property>
mapreduce.map.memory.mb is the logical size of a container, while mapreduce.map.java.opts specifies the actual JVM settings for a map task. The JVM heap specified in mapreduce.map.java.opts needs to be smaller than mapreduce.map.memory.mb. In most cases, the heap value in mapreduce.map.java.opts should be equal to around 75 percent of mapreduce.map.memory.mb. The same applies to the corresponding reduce settings.
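The -Xmx values in Listing 28 follow this rule of thumb; a throwaway shell calculation (illustrative only) reproduces them to within a megabyte at a ratio of roughly 80 percent:

```shell
# Derive -Xmx heap sizes as ~80% of the container size (integer arithmetic)
heap_for() {
  echo $(( $1 * 80 / 100 ))
}
echo "map container 512 MB  -> -Xmx$(heap_for 512)m"     # ~410m in Listing 28
echo "reduce container 1024 MB -> -Xmx$(heap_for 1024)m" # 819m in Listing 28
```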
If needed, you can later override these parameters in the job's configuration. For example, memory-intensive jobs can request bigger containers for their map and reduce tasks to use more memory.
Step 4. Specify the MapReduce ApplicationMaster container size
Because ApplicationMasters also run in the containers, you need to specify the size of containers where the MapReduce ApplicationMasters will be started.
Listing 29. MapReduce ApplicationMaster container size in mapred-site.xml
<property> <name>yarn.app.mapreduce.am.resource.mb</name> <value>1024</value> </property> <property> <name>yarn.app.mapreduce.am.command-opts</name> <value>-Xmx819m</value> </property>
Step 5. Submit MapReduce job
Changes introduced in Steps 3 and 4 are referred to as client-side changes. For client-side changes, no daemons need to be restarted for them to take effect. Individual clients can later override these settings and provide their own customized configuration for their jobs (unless administrators mark these configuration settings as final).
Try the settings by running a MapReduce job. Submit the WordCount job that counts the number of occurrences of each word in the configuration files.
Listing 30. Running the MapReduce job
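A plausible form of the submission command (the HDFS input and output paths are assumptions; the output path is chosen to match the one read back in Listing 34):

```shell
# Copy the configuration files into HDFS and count the words in them
# (input and output paths are assumed for illustration)
$ hadoop fs -put /etc/hadoop/conf.yarn conf.yarn
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
    wordcount conf.yarn output/wordcount/conf.yarn
```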
On the sample YARN cluster, this job fails very quickly with an error similar to: Current usage: 63.4 MB of 512 MB physical memory used; 1.2 GB of 1.0 GB virtual memory used. Killing container.
Figure 8. Tasks in a MapReduce job failing due to running beyond virtual memory limits
Tasks in the MapReduce job failed because the containers where the tasks run were killed for consuming too much virtual memory. The NodeManager is responsible for monitoring resource usage in the containers, and it stops a container if it uses more virtual or physical memory than the limits allow.
Because you have not configured any setting that limits the consumption of the virtual memory, a default value was assumed and applied.
The limit of virtual memory that a container can consume is configured in terms of physical memory; by default, it is 2.1 times higher than the limit for physical memory. Use yarn.nodemanager.vmem-pmem-ratio to specify the limit.
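That default explains the 1.0 GB figure in the error message; a quick illustrative calculation:

```shell
# Default virtual memory limit for a 512 MB container:
# vmem limit = physical limit * yarn.nodemanager.vmem-pmem-ratio (default 2.1)
pmem_mb=512
vmem_mb=$(( pmem_mb * 21 / 10 ))
echo "${vmem_mb} MB"   # 1075 MB, roughly the 1.0 GB limit that the killed tasks exceeded
```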
Isn't it interesting that you can use the new YARN cluster to estimate the value of Pi but not to count words?
Step 6. Disable virtual memory limits
Look at all the failed tasks and their messages, such as X.Y MB of 512 MB physical memory used; 1.2 GB of 1.0 GB virtual memory used. In this sample run, no task consumed more than 82.6 MB of physical memory.
This information indicates that application containers do not require much physical memory, but they consume much virtual memory. In such a situation, you can significantly increase the limit for virtual memory or have no limit at all.
Because the virtual memory can be difficult to control, and the virtual-to-physical memory ratio can be high with some versions of Java and Linux®, it is recommended you disable virtual memory limits for the containers. Follow this recommendation, as shown below.
Listing 31. yarn-site.xml
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers.</description>
</property>
Restart the NodeManager for the change to take effect:
$ sudo /etc/init.d/hadoop-yarn-nodemanager restart.
Step 7. Re-run previously failed WordCount job
Re-run the WordCount job, as shown below.
Listing 32. Re-running the WordCount job
Analyze how the job runs. In this setup, the NodeManager can use 3 GB for running containers. One 1-GB container is started for the MapReduce ApplicationMaster. The remaining 2 GB are distributed between containers running map or reduce tasks. Therefore, at most, you can run four map tasks (4 x 512 MB = 2 GB), or two map tasks and one reduce task (2 x 512 MB + 1,024 MB = 2 GB), simultaneously.
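The same arithmetic as a throwaway shell check:

```shell
# How many task containers fit next to the 1 GB ApplicationMaster container?
nm_mb=3072; am_mb=1024; map_mb=512; reduce_mb=1024
remaining=$(( nm_mb - am_mb ))
echo "$(( remaining / map_mb )) map tasks only"                 # 4
echo "$(( (remaining - reduce_mb) / map_mb )) maps + 1 reduce"  # 2
```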
Figure 9. A MapReduce job with multiple tasks running at the same time on YARN cluster
In the ResourceManager logs (by default in /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log), you can see detailed information about the containers allocated to the application.
Listing 33. Checking the ResourceManager logs
2014-03-26 08:56:54,934 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1395788831923_0004_01_000007 of capacity <memory:512, vCores:1> on host hakunamapdata:43466, which currently has 5 containers, <memory:3072, vCores:5> used and <memory:0, vCores:3> available
The WordCount application should finish successfully. Check part of the output, as shown below.
Listing 34. Checking the output
$ hadoop fs -cat output/wordcount/conf.yarn/* | head -n 10
!=	3
""	6
"".	4
"$JAVA_HOME"	2
"$YARN_HEAPSIZE"	1
"$YARN_LOGFILE"	1
"$YARN_LOG_DIR"	1
"$YARN_POLICYFILE"	1
"*"	17
"AS	13
Iteration N. Adding more features
Although you have a YARN cluster running, this step is only the beginning of the Hadoop YARN journey. Many features need to be configured, and numerous configuration settings need to be adjusted. A complete description of all of them is outside the scope of this article, but a few features are highlighted here:
- Enable application log aggregation — In the current setup, you cannot use the web UI or command-line interface to check the logs (such as syslog) of an application after it ends. This limitation makes it difficult to debug failed applications. Enable log aggregation to solve the problem.
- Configure local directories where the NodeManager is to store its localized files (yarn.nodemanager.local-dirs) and container log files (yarn.nodemanager.log-dirs) — Specify a directory on each of the JBOD mount points to spread the load across multiple disks.
- Configure and tweak a scheduler of your choice— For example, use Capacity Scheduler (the default) or Fair Scheduler.
- Enable Uberization— This option makes it possible to run all tasks of a MapReduce job in the ApplicationMaster's JVM if the job is small enough. Using this option for small MapReduce jobs, you can avoid the overhead of requesting containers from the ResourceManager and asking the NodeManagers to start them.
- Enable application recovery after the restart of ResourceManager— The ResourceManager can store information about running applications and completed tasks in HDFS. If the ResourceManager is restarted, it re-creates the state of applications and re-runs only incomplete tasks.
- Take advantage of high availability for the ResourceManager — Although the upstream JIRA work is still in progress, this feature is available in distributions from some vendors.
- Set the minimum fraction of disks that must be healthy for the NodeManager to launch new containers — Use the yarn.nodemanager.disk-health-checker.min-healthy-disks setting.
- Make use of the NodeManager's ability to run health checks on scripts— The NodeManager can check the health of the node by running a configured script frequently.
- Define the size of containers in terms of virtual CPU cores (in addition to memory)
- Enable garbage collection logging on the ResourceManager
This article shows how to migrate a single-node cluster from MRv1 to YARN and explains the most important features and configuration settings.
Although the migration of a production cluster from MRv1 to YARN is a well-documented process, it is still time-consuming and error-prone. Simplify the process by implementing an agile approach, in which the cluster is migrated step by step. The more often you verify the validity of the setup, the shorter time you spend troubleshooting potential issues. After each iteration, it's important to ensure that the cluster is working and users can submit applications to it. By using an agile process, administrators have the ability to temporarily halt the migration process after each iteration and continue it later at a convenient time.
Regardless of whether you spend a day or a day and a night migrating to YARN, you will notice a significant return on your investment of time. After the migration, you can observe almost-perfect resource utilization of the Hadoop cluster (because the fixed slots have been replaced by flexible, generic containers), and you can run many types of applications, such as Giraph, Spark, Tez, Impala, and others, on the same hardware. By migrating to YARN, you can now scale to many thousands of nodes.
I would like to thank Piotr Krewski and Rafal Wojdyla for their technical review of this article.
- YARN articles on developerWorks: Explore Yet Another Resource Negotiator (YARN), included in Apache Hadoop 2.0.
- Apache Hadoop YARN: Moving beyond MapReduce and Batch Processing with Apache Hadoop 2 (Arun C. Murthy et al., Addison-Wesley Professional, 2014): Read how YARN increases scalability and cluster utilization, enables new programming models and services, and opens new options beyond Java and batch processing.
- Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data: Learn details on two key IBM big data technologies.
- Apache Hadoop Project: The Apache website includes getting started information and download links.
- "Using MapReduce and load balancing on the cloud" (developerWorks, July 2010): Learn how to implement the Hadoop MapReduce framework in a cloud environment and how to use virtual load balancing to improve the performance of both a single- and multiple-node system.
- Hadoop: The Definitive Guide (Tom White, O'Reilly Media, ISBN: 1449389732, 2010): A comprehensive resource, this book shows you how to build and maintain reliable, scalable, distributed systems with the Hadoop framework.