Correct answers are marked inline.
1-In Classic MapReduce, which factors determine the number of slots (which run the tasks) on a TaskTracker machine?
- The amount of RAM installed on the TaskTracker node. (Correct)
- The number of CPU cores on the TaskTracker node. (Correct)
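In Hadoop 1.x (Classic MapReduce), the administrator sets the slot counts per TaskTracker explicitly, typically sizing them from the node's RAM and core count. A minimal mapred-site.xml sketch (the values below are illustrative assumptions for a mid-size node, not recommendations):

```xml
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>6</value> <!-- illustrative: sized from cores/RAM on this node -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value> <!-- illustrative -->
</property>
```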
2-A Map emits key-value pairs as it processes records. Which of the following best describes how many key-value pairs a Map can emit?
- A Map can emit any number of key-value pairs per record processed. (Correct)
- A maximum of 1.
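To illustrate, a Hadoop Streaming-style word-count mapper (a common sketch, not part of the original question) emits one pair per word, so a single input record can produce zero, one, or many key-value pairs:

```python
def map_record(line):
    """Emit one (word, 1) pair per word in the record --
    i.e. any number of key-value pairs per input record."""
    pairs = []
    for word in line.split():
        pairs.append((word, 1))
    return pairs

# Feed a couple of sample records instead of real stdin input.
records = ["to be or not", "to be"]
for rec in records:
    for key, value in map_record(rec):
        # Hadoop Streaming expects "key<TAB>value" lines on stdout.
        print(f"{key}\t{value}")
```

Note that the first record produces four pairs and an empty record would produce none.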
3-You are designing a new architecture for your company. Which of the following would you prefer as an Online Transaction Processing (OLTP) database?
- Hadoop Distributed File System.
- Any relational database. (Correct)
4-Arbitrary modifications to files are not supported in Hadoop; only append is supported, and only in the latest releases.
5-I have designed a job and run it with 2 reducers by setting the property “mapred.reduce.tasks” to 2. This results in two output files. What would be the disadvantage of running it with the default partitioner?
- I won’t be able to take advantage of a combiner.
- The two output files would each be sorted, but I cannot concatenate them into one large sorted file. (Correct)
- The reduce phase would take longer to execute.
- None of the above; in fact it is advisable to run multiple reducers.
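The second option is the real drawback: the default HashPartitioner assigns keys to reducers by hash, so each reducer's output file is sorted internally, but the keys are interleaved across files and simple concatenation does not give one globally sorted file. A small Python simulation (the byte-sum hash below is an illustrative stand-in for Hadoop's HashPartitioner, which uses `key.hashCode() % numReduceTasks`):

```python
def default_partition(key, num_reducers):
    # Illustrative stand-in for Hadoop's HashPartitioner.
    return sum(key.encode()) % num_reducers

keys = ["apple", "banana", "cherry", "date", "fig", "grape"]
num_reducers = 2

# Group keys per reducer, as the shuffle would.
outputs = {r: [] for r in range(num_reducers)}
for k in keys:
    outputs[default_partition(k, num_reducers)].append(k)

# Each reducer sorts its own partition (part-00000, part-00001).
part_files = [sorted(outputs[r]) for r in range(num_reducers)]

concatenated = part_files[0] + part_files[1]
print(part_files)                     # each file is sorted on its own
print(concatenated == sorted(keys))   # False: concatenation is not globally sorted
```

A totally ordered output across multiple reducers requires a range-based partitioner (as used by TotalOrderPartitioner) rather than the default hash-based one.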
6-In Classic MapReduce, when a TaskTracker fails, tasks that completed on it but whose jobs have not yet finished are also rescheduled and rerun. The reason is that the intermediate map output stored on that TaskTracker's local disk is lost, and it may still be needed by the incomplete jobs' reduce tasks.