Measures reported by HdpMRJobDetTest
Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
The MapReduce process typically goes through four phases, namely:
Splitting: The input to a MapReduce job is divided into fixed-size pieces called input splits. An input split is a chunk of the input that is consumed by a single map task.
Mapping: This is the first phase in the execution of the MapReduce program. In this phase, the data in each split is passed to a mapping function to produce output values.
Shuffling: This phase consumes the output of the Mapping phase. Its task is to consolidate the relevant records from the Mapping phase output.
Reducing: In this phase, the output values from the Shuffling phase are aggregated. This phase combines the values from the Shuffling phase and returns a single output value.
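To make these phases concrete, the canonical word-count job is sketched below in Java (this is the standard tutorial example shipped with Hadoop, lightly commented to map each class to a phase): the mapper implements the Mapping phase, the framework performs the Shuffling between the two classes, and the reducer implements the Reducing phase.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapping phase: each mapper consumes one input split, one record at a
  // time, and emits an intermediate (word, 1) pair for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducing phase: after the shuffle consolidates all values that share a
  // key, the reducer aggregates them into a single output value per word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Splitting phase: the framework divides the input path into splits,
    // one per map task, before any user code runs.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```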
This complete execution process is controlled by the following entities:
JobTracker: Acts like a master (responsible for the complete execution of a submitted job)
Multiple TaskTrackers: Act like slaves, each executing a portion of the job
For every job submitted for execution in the system, there is one JobTracker that resides on the NameNode, and there are multiple TaskTrackers that reside on the DataNodes.
A job is divided into multiple tasks, which are then run on multiple DataNodes in the cluster.
It is the responsibility of the JobTracker to coordinate this activity by scheduling tasks to run on different DataNodes.
The TaskTracker on each DataNode is responsible for executing the tasks assigned to it - i.e., its portion of the job.
Periodically, the TaskTracker on a DataNode sends a progress report to the JobTracker, updating it with the status of the tasks executing on that node.
In addition, the TaskTracker periodically sends a 'heartbeat' signal to the JobTracker to notify it of the current state of the system.
Thus, the JobTracker keeps track of the overall progress of each job. In the event of task failure, the JobTracker can reschedule it on a different TaskTracker.
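This lifecycle can also be observed from the client side. The sketch below uses the standard org.apache.hadoop.mapreduce.Job API (the newer MapReduce client API, which works against both classic JobTracker and YARN clusters) to submit a job and poll the progress that the trackers report back; for brevity it relies on the framework's default identity map and reduce tasks, so only input and output paths are supplied.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobProgressWatcher {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "progress demo");
    // Default (identity) map and reduce tasks; only the paths are required.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.submit(); // hands the job to the framework for scheduling

    // Poll the status that the task trackers report back to the master.
    while (!job.isComplete()) {
      System.out.printf("map %3.0f%%  reduce %3.0f%%  state %s%n",
          job.mapProgress() * 100, job.reduceProgress() * 100,
          job.getJobState());
      Thread.sleep(5000);
    }
    System.out.println(job.isSuccessful() ? "Job SUCCEEDED" : "Job FAILED");
  }
}
```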
Hadoop users generally do not take kindly to the failure of MapReduce jobs, as such failures impact application performance. To assure users of an above-par experience with a Hadoop cluster, therefore, administrators have to monitor the MapReduce jobs that each user is running on the cluster and track their status. This way, they can instantly identify failed jobs, isolate the user who is running such jobs, and investigate the reason for the failure. Periodic evaluation of the performance of MapReduce jobs is also necessary, because it may point administrators to processing bottlenecks and resource crunches caused by improper configuration of the jobs. With the help of the HdpMRJobDetTest test, administrators can achieve all of the above!
This test auto-discovers the users running MapReduce jobs on the cluster, and for each user, reports the count of jobs in different states. In the process, the test alerts administrators to failed jobs and jobs with errors. Additionally, for each user, the test measures how much time that user's jobs took to complete. This points administrators to slow jobs and the users running them. The test also highlights users whose jobs took the maximum time for map/reduce processing. Detailed diagnostics not only shed light on such jobs, but also accurately tell where the job execution was bottlenecked - in running map tasks? or in running reduce tasks? This greatly aids troubleshooting. Moreover, the test pinpoints jobs requiring more heap memory. This way, the test reveals to administrators whether improper job configuration is what caused job execution to slow down.
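The test's internal implementation is not documented here, but as a rough, assumption-laden sketch of how such per-user, per-state job counts can be gathered, the following hypothetical example uses the public org.apache.hadoop.mapreduce.Cluster API to enumerate job statuses and group them by user and state:

```java
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.JobStatus;

public class PerUserJobStates {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster(new Configuration());

    // user -> (job state -> count), mirroring the one-result-set-per-user
    // output of the test (states include PREP, RUNNING, SUCCEEDED,
    // FAILED, and KILLED).
    Map<String, Map<JobStatus.State, Integer>> counts = new HashMap<>();

    for (JobStatus status : cluster.getAllJobStatuses()) {
      counts
          .computeIfAbsent(status.getUsername(),
              u -> new EnumMap<>(JobStatus.State.class))
          .merge(status.getState(), 1, Integer::sum);
    }

    counts.forEach((user, states) ->
        System.out.println(user + " -> " + states));
  }
}
```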
Outputs of the test: One set of results for each user running MapReduce jobs on the cluster
The measurements made by this test are as follows:
| Measurement | Description | Measurement Unit | Interpretation |
| --- | --- | --- | --- |
| Total_jobs | Indicates the total number of MapReduce jobs that are run by this user. | Number | Compare the value of this measure across users to know which user is imposing the maximum MapReduce load on the cluster. |
| New_jobs | Indicates the number of MapReduce jobs started by this user during the last measurement period. | Number | |
| Init_jobs | Indicates the number of MapReduce jobs that this user has initiated. | Number | Job setup is performed during initialization; for example, the temporary output directory for the job is created when the job is initialized. |
| Running_jobs | Indicates the number of jobs of this user that are currently in the RUNNING state. | Number | Once the setup task completes, the job moves to the RUNNING state. |
| Succeded_jobs | Indicates the number of jobs run by this user that completed execution successfully. | Number | A high value is desired for this measure. Ideally, the value of this measure should be equal to the value of the Running_jobs measure. |
| Failed_jobs | Indicates the number of jobs run by this user that failed. | Number | A low or 0 value is desired for this measure. A non-zero value indicates that one or more jobs have failed for this user. In such a case, use the detailed diagnosis of this measure to know which jobs failed.<br><br>By default, a job fails when one or more of its constituent tasks fail more than four times. If a task fails four times, it is not retried again. This value is configurable: the maximum number of attempts to run a task is controlled by the mapreduce.map.maxattempts property for map tasks and the mapreduce.reduce.maxattempts property for reduce tasks (see the configuration sketch after this table).<br><br>The most common occurrence of this failure is when user code in the map or reduce task throws a runtime exception. If this happens, the task JVM reports the error back to its parent application master before it exits, and the error ultimately makes it into the user logs. The application master marks the task attempt as failed and frees up the container so its resources are available for another task.<br><br>Another failure mode is the sudden exit of the task JVM, perhaps because of a JVM bug triggered by a particular set of circumstances exposed by the MapReduce user code. In this case, the node manager notices that the process has exited and informs the application master, so it can mark the attempt as failed.<br><br>Hanging tasks are dealt with differently: the application master notices that it hasn't received a progress update for a while and proceeds to mark the task as failed; the task JVM process is then killed automatically. The timeout period after which tasks are considered failed is normally 10 minutes, and can be configured on a per-job basis (or a cluster basis) by setting the mapreduce.task.timeout property to a value in milliseconds. |
| Kill_wait_jobs | Indicates the number of jobs initiated by this user that are waiting to be killed. | Number | Use the detailed diagnosis of this measure to identify the jobs that are in the KILLWAIT state. |
| Killed_jobs | Indicates the number of jobs that this user killed. | Number | Use the detailed diagnosis of this measure to identify the killed jobs. |
| Error_jobs | Indicates the number of jobs of this user that reported errors. | Number | Ideally, the value of this measure should be 0. If a non-zero value is reported, use the detailed diagnosis of this measure to know which jobs experienced errors during execution. |
| Avg_job_duration | Indicates the average time that the jobs of this user took to execute. | Seconds | An unusually high value for this measure is a cause for concern, as it implies that the user's jobs are taking longer than normal to execute. To identify which jobs are taking the maximum time, use the detailed diagnosis of this measure. |
| Max_job_duration | Indicates the maximum time taken by the jobs of this user to execute. | Seconds | |
| Max_mapreduce_proc_time | Indicates the maximum time taken by the jobs of this user to perform MapReduce processing. | Seconds | If this value is abnormally high for any user, use the detailed diagnosis of this measure to know which jobs took the longest to perform map/reduce processing and where they spent the maximum time - in running map tasks? or in running reduce tasks? |
| Max_gc_pct | Indicates the maximum percentage of time spent by the jobs of this user in garbage collection. | Percent | A high value for this measure indicates that jobs do not have enough heap memory for processing, owing to which garbage collection occurs often. If this value is very high for any user, use the detailed diagnosis of this measure to figure out which specific jobs spent too much time in garbage collection. By increasing the heap size allocated to such jobs (see the configuration sketch after this table), you can reduce this percentage significantly. |
| No_inputrecrd_reducetask | Indicates the number of jobs of this user that are taking the maximum number of input records for processing reduce tasks. | Number | To know which jobs are taking the maximum number of input records, use the detailed diagnosis of this measure. |
| No_spilledrecord_to_dsk | Indicates the number of jobs of this user that have spilled the maximum number of records to disk. | Number | Map output data is stored in memory, but when the buffer fills up, data is dumped to disk. A separate thread is spawned to merge all of the spilled data into a single, larger sorted file for the reducers. Spills to disk should be minimized, as more than one spill results in increased disk activity: the merging thread has to read and write data to disk. Tracking this metric can help determine whether configuration changes need to be made (such as increasing the memory available for map tasks) or whether additional processing (compression) of the input data is necessary.<br><br>If this measure reports a high value for any user, use the detailed diagnosis of this measure to know which jobs spilled the maximum number of records to disk. You can then review the configuration of such jobs (see the sketch after this table) to see if it needs to be tweaked to reduce disk spills and improve performance. |
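Several of the interpretations above point to job configuration properties. As a hedged illustration, the sketch below shows how those knobs could be set on a job through the standard Java client API; the property names are real Hadoop settings, but the values are arbitrary examples for demonstration, not recommendations.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobTuningExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Retry limits that decide when a job is marked FAILED
    // (the default is 4 attempts for both map and reduce tasks).
    conf.setInt("mapreduce.map.maxattempts", 4);
    conf.setInt("mapreduce.reduce.maxattempts", 4);

    // Timeout (in milliseconds) after which a task that reports no
    // progress is considered hung and marked failed (default: 10 minutes).
    conf.setLong("mapreduce.task.timeout", 600_000L);

    // Container memory and task JVM heap; raising the heap can lower
    // Max_gc_pct for jobs that spend too much time in garbage collection.
    conf.setInt("mapreduce.map.memory.mb", 2048);
    conf.set("mapreduce.map.java.opts", "-Xmx1638m");

    // Size (in MB) of the in-memory sort buffer for map output; a larger
    // buffer reduces the record spills tracked by No_spilledrecord_to_dsk.
    conf.setInt("mapreduce.task.io.sort.mb", 256);

    Job job = Job.getInstance(conf, "tuned job");
    // ... mapper, reducer, and input/output paths elided ...
  }
}
```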