Top 100+ Hadoop Interview Questions And Answers

Question 1. On What Concept The Hadoop Framework Works?

Answer :

It works on MapReduce, which was devised by Google.

Question 2. What Is Mapreduce?

Answer :

MapReduce is an algorithm or concept for processing a huge amount of data in a faster manner. As per its name, you can divide it into Map and Reduce.

The basic MapReduce job usually splits the input data set into independent chunks. (Big data sets are split into multiple small data sets.)
MapTask: processes these chunks in a completely parallel manner (one node can process one or more chunks). The framework sorts the outputs of the maps.
ReduceTask: the above output becomes the input for the reduce tasks, which produce the final result.
Your business logic is written in the MapTask and ReduceTask. Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.

Question 3. What Are Compute And Storage Nodes?

Answer :

Compute Node: This is the computer or machine where your actual business logic will be executed.

Storage Node: This is the computer or machine where your file system resides to store the processing data.

In most of the cases the compute node and storage node are the same machine.

Question 4. How Does The Master-Slave Architecture Work In Hadoop?

Answer :

The MapReduce framework consists of a single master JobTracker and multiple slaves; each cluster node has one TaskTracker. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.

Question 5. What Does A Hadoop Application Look Like, Or What Are Its Basic Components?

Answer :

Minimally a Hadoop application has the following components:

Input location of data
Output location of processed data
A map task
A reduce task
Job configuration
The Hadoop job client then submits the job (jar/executable etc.) and configuration to the JobTracker, which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, and providing status and diagnostic information to the job client.
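For illustration, a minimal driver wiring these pieces together might look like the sketch below (class names such as WordCountDriver, WordCountMapper and WordCountReducer are assumed placeholders, defined elsewhere in the application):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");            // job configuration
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCountMapper.class);                // the map task
    job.setCombinerClass(WordCountReducer.class);             // optional local aggregation
    job.setReducerClass(WordCountReducer.class);              // the reduce task
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));     // input location of data
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output location of processed data
    System.exit(job.waitForCompletion(true) ? 0 : 1);         // submit the job and wait
  }
}

The job client then submits this jar plus configuration to the JobTracker as described above.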

Question 6. Explain The Input And Output Data Formats Of The Hadoop Framework?

Answer :

The MapReduce framework operates exclusively on <key, value> pairs; that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.

See the flow mentioned below:

(input) <k1, v1> -> map -> <k2, v2> -> combine/sort -> <k2, v2> -> reduce -> <k3, v3> (output)

Question 7. What Are The Restrictions On The Key And Value Classes?

Answer :

The key and value classes have to be serialized by the framework. To make them serializable Hadoop provides a Writable interface. As you already know from Java itself, the key of a Map has to be comparable, hence the key has to implement one more interface, WritableComparable.
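As an illustrative sketch (the YearKey class below is hypothetical, not part of the original answer), a custom key type satisfying both restrictions could look like this:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class YearKey implements WritableComparable<YearKey> {
  private int year;

  public YearKey() {}                             // no-arg constructor required by the framework
  public YearKey(int year) { this.year = year; }

  public void write(DataOutput out) throws IOException { out.writeInt(year); }      // serialization
  public void readFields(DataInput in) throws IOException { year = in.readInt(); }  // deserialization
  public int compareTo(YearKey other) { return Integer.compare(year, other.year); } // key ordering

  @Override
  public int hashCode() { return year; }          // used by the default HashPartitioner
}

A value type only needs to implement Writable, i.e. just write() and readFields().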

Question 8. Explain The WordCount Implementation Via The Hadoop Framework?

Answer :

We will count the words in all the input files, with the flow as below:

Input: Assume there are two files, each having a sentence: Hello World Hello World (in file 1), Hello World Hello World (in file 2).
Mapper: There is one mapper for each file. For the given sample input the first map outputs:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>

The 2nd map output:

< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>

Combiner/Sorting (this is performed for each individual map), so the output looks like this. The output of the first map:
< Hello, 2>
< World, 2>

The output of the second map:

< Hello, 2>
< World, 2>

Reducer : It sums up the above output and generates the output as under
< Hello, 4>
< World, 4>

Output

The final output will look like:

Hello 4 times
World 4 times
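A compact sketch of this WordCount job using the org.apache.hadoop.mapreduce API is shown below (class names are illustrative; the driver from Question 5 would wire them into a job):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, ONE);                 // emits < Hello, 1>, < World, 1>, ...
    }
  }
}

class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();                           // sums the counts for each word
    }
    context.write(key, new IntWritable(sum));   // emits < Hello, 4>, < World, 4>
  }
}

If the same reducer class is also registered as the combiner, the per-map aggregation shown above (< Hello, 2>, < World, 2>) happens locally before the shuffle.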

Question 9. Which Interface Needs To Be Implemented To Create Mapper And Reducer For Hadoop?

Answer :

org.apache.hadoop.mapreduce.Mapper
org.apache.hadoop.mapreduce.Reducer

Question 10. What Does The Mapper Do?

Answer :

Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.

Question 11. What Is The InputSplit In MapReduce?

Answer :

An InputSplit is a logical representation of a unit (a chunk) of input work for a map task; e.g., a file name and a byte range within that file to process, or a row set in a text file.

Question 12. What Is The InputFormat?

Answer :

The InputFormat is responsible for enumerating (itemising) the InputSplits, and producing a RecordReader which will turn those logical work units into actual physical input records.

Question 13. Where Do You Specify The Mapper Implementation?

Answer :

Generally the mapper implementation is specified in the Job itself.

Question 14. How Is The Mapper Instantiated In A Running Job?

Answer :

The Mapper itself is instantiated in the running job, and will be passed a MapContext object which it can use to configure itself.

Question 15. Which Are The Methods In The Mapper Interface?

Answer :

The Mapper contains the run() method, which calls its setup() method only once, then calls the map() method for each input, and finally calls its cleanup() method. All of the above methods can be overridden in your code.
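A sketch of a mapper overriding these lifecycle methods might look like this (illustrative only):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LifecycleMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  private static final LongWritable ONE = new LongWritable(1);

  @Override
  protected void setup(Context context) {
    // called once per map task, before any map() call: open resources, read configuration
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // called once for every input key/value pair in the split
    context.write(value, ONE);
  }

  @Override
  protected void cleanup(Context context) {
    // called once per map task, after the last map() call: close resources, flush state
  }
}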

Question 16. What Happens If You Don't Override The Mapper Methods And Keep Them As They Are?

Answer :

If you do not override any methods (leaving even map() as-is), it will act as the identity function, emitting each input record as a separate output.

Question 17. What Is The Use Of Context Object?

Answer :

The Context object allows the mapper to interact with the rest of the Hadoop system. It includes configuration data for the job, as well as interfaces which allow it to emit output.

Question 18. How Can You Add The Arbitrary Key-value Pairs In Your Mapper?

Answer :

You can set arbitrary (key, value) pairs of configuration data in your Job, e.g. with
Job.getConfiguration().set("myKey", "myVal"), and then retrieve this data in your mapper with
Context.getConfiguration().get("myKey"). This kind of functionality is typically done in the Mapper's setup() method.
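A small sketch of this pattern (the "myKey" name is just the example used above):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfiguredMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  private String myVal;

  @Override
  protected void setup(Context context) {
    // In the driver the value was placed on the job configuration with:
    //   job.getConfiguration().set("myKey", "myVal");
    myVal = context.getConfiguration().get("myKey", "defaultVal");
  }
}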

Question 19. How Does The Mapper's run() Method Work?

Answer :

The Mapper.run() method calls map(KeyInType, ValInType, Context) for each key/value pair in the InputSplit for that task.
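Conceptually, the default run() loop behaves like the sketch below (a simplified paraphrase, not the exact library source):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RunLoopMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  @Override
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);                               // called once
    while (context.nextKeyValue()) {              // one iteration per key/value pair in the InputSplit
      map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);                             // called once at the end
  }
}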

Question 20. Which Object Can Be Used To Get The Progress Of A Particular Job ?

Answer :

Context

Question 21. What Is The Next Step After Mapper Or MapTask?

Answer :

The outputs of the Mapper are sorted and partitions are created for the output. The number of partitions depends on the number of reducers.

Question 22. How Can We Control Particular Key Should Go In A Specific Reducer?

Answer :

Users can control which keys (and hence records) go to which Reducer by implementing a custom Partitioner.
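A hypothetical custom Partitioner (the routing rule here is purely illustrative) could look like the following, registered on the job with job.setPartitionerClass(AlphabetPartitioner.class):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class AlphabetPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numReduceTasks) {
    String k = key.toString();
    // keys starting with a-m always go to reducer 0; everything else is hashed across the rest
    if (!k.isEmpty() && Character.toLowerCase(k.charAt(0)) <= 'm') {
      return 0;
    }
    return (k.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}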

Question 23. What Is The Use Of Combiner?

Answer :

It is an optional component or class, and can be specified via Job.setCombinerClass(ClassName), to perform local aggregation of the intermediate outputs, which helps to cut down the amount of data transferred from the Mapper to the Reducer.

Question 24. How Many Maps Are There In A Particular Job?

Answer :

The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.

Generally it is around 10-100 maps per node. Task setup takes a while, so it is best if the maps take at least a minute to execute.

Suppose you expect 10TB of input data and have a block size of 128MB; you'll end up with 82,000 maps. To control the number of maps you can use the mapreduce.job.maps parameter (which only provides a hint to the framework). Ultimately, the number of tasks is controlled by the number of splits returned by the InputFormat.getSplits() method (which you can override).
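As a quick check on that figure, under the stated assumptions: 10 TB = 10 × 1024 × 1024 MB = 10,485,760 MB, and 10,485,760 MB / 128 MB per block = 81,920 blocks, i.e. roughly 82,000 map tasks, since by default there is one map task per input split/block.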

Question 25. What Is The Reducer Used For?

Answer :

Reducer reduces a set of intermediate values which share a key to a (usually smaller) set of values.

The number of reduces for the job is set by the user via Job.setNumReduceTasks(int).

Question 26. Explain The Core Methods Of The Reducer?

Answer :

The API of Reducer is very similar to that of Mapper: there is a run() method that receives a Context containing the job's configuration as well as interfacing methods that return data from the reducer itself back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data by using Context.getConfiguration().

As in Mapper, any or all of these methods can be overridden with custom implementations. If none of these methods are overridden, the default reducer operation is the identity function; values are passed through without further processing.

The heart of Reducer is its reduce() method. This is called once per key; the second argument is an Iterable which returns all the values associated with that key.
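A sketch of a reducer overriding these methods (illustrative, mirroring the earlier WordCount reducer):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class LifecycleReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void setup(Context context) {
    // called once per reduce task, before the first reduce() call
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    // called once per key; the Iterable yields every value grouped under that key
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    context.write(key, new IntWritable(sum));
  }

  @Override
  protected void cleanup(Context context) {
    // called once per reduce task, after the last reduce() call
  }
}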

Question 27. What Are The Primary Phases Of The Reducer?

Answer :

Shuffle, Sort and Reduce.

Question 28. Explain The Shuffle?

Answer :

Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.

Question 29. Explain The Reducer's Sort Phase?

Answer :

The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage. The shuffle and sort phases occur simultaneously; while map outputs are being fetched they are merged (this is similar to merge sort).

Question 30. Explain The Reducer's Reduce Phase?

Answer :

In this phase the reduce(MapOutKeyType, Iterable, Context) method is called for each <key, (list of values)> pair in the grouped inputs. The output of the reduce task is typically written to the FileSystem via Context.write(ReduceOutKeyType, ReduceOutValType). Applications can use the Context to report progress, set application-level status messages and update Counters, or just indicate that they are alive. The output of the Reducer is not sorted.

Question 31. How Many Reducers Should Be Configured?

Answer :

The right number of reduces seems to be 0.95 or 1.75 multiplied by
(<no. of nodes> * mapreduce.tasktracker.reduce.tasks.maximum).

With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces, doing a much better job of load balancing. Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.
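For example, with illustrative numbers: a cluster of 10 nodes configured with mapreduce.tasktracker.reduce.tasks.maximum = 2 has 20 reduce slots, so 0.95 × 20 = 19 reducers would run in a single wave, while 1.75 × 20 = 35 reducers would run in roughly two waves with better load balancing.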

Question 32. Is It Possible That A Job Has 0 Reducers?

Answer :

It is legal to set the number of reduce tasks to zero if no reduction is desired.

Question 33. What Happens If The Number Of Reducers Is Zero?

Answer :

In this case the outputs of the map tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). The framework does not sort the map outputs before writing them out to the FileSystem.
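A minimal sketch of such a map-only job in the driver (assuming the rest of the job is configured as in the driver example for Question 5):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "map only");
    job.setNumReduceTasks(0);   // zero reducers: map output is written, unsorted, straight to the output path
    // ... set the mapper, input and output paths as usual, then submit with job.waitForCompletion(true)
  }
}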

Question 34. How Many Instances Of Jobtracker Can Run On A Hadoop Cluster?

Answer :

Only one

Question 35. What Is The JobTracker And What Does It Perform In A Hadoop Cluster?

Answer :

JobTracker is a daemon service which submits and tracks MapReduce tasks on the Hadoop cluster. It runs in its own JVM process, and usually it runs on a separate machine; each slave node is configured with the job tracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.

JobTracker in Hadoop performs the following actions:

Client applications submit jobs to the JobTracker.
The JobTracker talks to the NameNode to determine the location of the data.
The JobTracker locates TaskTracker nodes with available slots at or near the data.
The JobTracker submits the work to the chosen TaskTracker nodes.
The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
When the work is completed, the JobTracker updates its status.
Client applications can poll the JobTracker for information.
Question 36. How Is A Task Scheduled By The JobTracker?

Answer :

The TaskTrackers send heartbeat messages to the JobTracker periodically, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.

Question 37. How Many Instances Of Tasktracker Run On A Hadoop Cluster?

Answer :

There is one TaskTracker daemon process for each slave node in the Hadoop cluster.

Question 38. What Are The Two Main Parts Of The Hadoop Framework?

Answer :

Hadoop consists of two main parts:

Hadoop Distributed File System, a distributed file system with high throughput, and
Hadoop MapReduce, a software framework for processing large data sets.
Question 39. Explain The Use Of Tasktracker In The Hadoop Cluster?

Answer :

A TaskTracker is a slave node in the cluster which accepts tasks from the JobTracker, such as Map, Reduce or Shuffle operations. The TaskTracker also runs in its own JVM process.

Every TaskTracker is configured with a set of slots; these indicate the number of tasks that it can accept. The TaskTracker starts a separate JVM process to do the actual work (called a Task Instance); this is to ensure that process failure does not take down the task tracker.

The TaskTracker monitors these task instances, capturing the output and exit codes. When the task instances finish, successfully or not, the task tracker notifies the JobTracker.

The TaskTrackers also send heartbeat messages to the JobTracker periodically, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.

Question 40. What Do You Mean By Taskinstance?

Answer :

Task instances are the actual MapReduce tasks which run on each slave node. The TaskTracker starts a separate JVM process to do the actual work (called a Task Instance); this is to ensure that process failure does not take down the entire task tracker. Each Task Instance runs in its own JVM process. There can be multiple task instance processes running on a slave node. This is based on the number of slots configured on the task tracker. By default a new task instance JVM process is spawned for a task.

Question 41. How Many Daemon Processes Run On A Hadoop Cluster?

Answer :

Hadoop is comprised of five separate daemons. Each of these daemons runs in its own JVM.

The following 3 daemons run on master nodes:

NameNode: This daemon stores and maintains the metadata for HDFS.

Secondary NameNode: Performs housekeeping functions for the NameNode.

JobTracker: Manages MapReduce jobs, distributes individual tasks to machines running the TaskTracker.

The following 2 daemons run on each slave node:

DataNode: Stores actual HDFS data blocks.

TaskTracker: It is responsible for instantiating and monitoring individual Map and Reduce tasks.

Question 42. How Many JVMs Can Run On A Slave Node At Maximum?

Answer :

One or multiple instances of Task Instance can run on each slave node. Each task instance is run as a separate JVM process. The number of task instances can be controlled by configuration. Typically a high-end machine is configured to run more task instances.

Question 43. What Is NAS?

Answer :

It is a kind of file system where data resides on one centralized machine and all the cluster members read and write data from that shared storage, which is not as efficient as HDFS.

Question 44. How Does HDFS Differ From NAS?

Answer :

Following are the differences between HDFS and NAS:

In HDFS, data blocks are distributed across the local drives of all machines in a cluster, whereas in NAS data is stored on dedicated hardware.
HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce since data is stored separately from the computations.
HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is provided by a single machine and therefore does not provide data redundancy.
Question 45. How Does A Namenode Handle The Failure Of The Data Nodes?

Answer :

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients.

In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. The NameNode and DataNode are pieces of software designed to run on commodity machines.

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a data node after a certain amount of time, the data node is marked as dead. Since blocks will be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes; the data never passes through the NameNode.

Question 46. Can Reducer Talk With Each Other?

Answer :

No, Reducer runs in isolation.

Question 47. Where The Mapper's Intermediate Data Will Be Stored?

Answer :

The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node. This is typically a temporary directory location which can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.

Question 48. What Is The Use Of Combiners In The Hadoop Framework?

Answer :

Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers.

You can use your reducer code as a combiner if the operation performed is commutative and associative.

The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner. Also, if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiners' execution.

Question 49. What Is The Hadoop Mapreduce Api Contract For A Key And Value Class?

Answer :

The key must implement the org.apache.hadoop.io.WritableComparable interface.
The value must implement the org.apache.hadoop.io.Writable interface.
Question 50. What Is A Identitymapper And Identityreducer In Mapreduce?

Answer :

org.apache.hadoop.mapred.lib.IdentityMapper: Implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the Mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default value.
org.apache.hadoop.mapred.lib.IdentityReducer: Performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the Reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default value.
Question 51. What Is The Meaning Of Speculative Execution In Hadoop? Why Is It Important?

Answer :

Speculative execution is a way of coping with individual machine performance. In large clusters where hundreds or thousands of machines are involved, there may be machines which are not performing as fast as others.

This may result in delays in a complete job due to only one machine not performing well. To avoid this, speculative execution in Hadoop can run multiple copies of the same map or reduce task on different slave nodes. The results from the first node to finish are used.

Question 52. When Are The Reducers Started In A MapReduce Job?

Answer :

In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The programmer-defined reduce method is called only after all the mappers have finished.

If reducers do not start before all mappers finish, then why does the progress on a MapReduce job show something like Map(50%) Reduce(10%)? Why is the reducers' progress percentage displayed when the mappers aren't finished yet?

Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the processing of the data transfer which is done by the reduce process, so the reduce progress starts showing up as soon as any intermediate key-value pair for a mapper is available to be transferred to a reducer.

Though the reducer progress is updated, the programmer-defined reduce method is called only after all of the mappers have completed.

Question 53. What Is HDFS? How Is It Different From Traditional File Systems?

Answer :

HDFS, the Hadoop Distributed File System, is responsible for storing huge data on the cluster. It is a distributed file system designed to run on commodity hardware.

It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.

HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
HDFS provides high-throughput access to application data and is suitable for applications that have large data sets.
HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.
Question 54. What Is HDFS Block Size? How Is It Different From Traditional File System Block Size?

Answer :

In HDFS data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64MB or 128MB in size. Each block is replicated multiple times; the default is to replicate each block three times. Replicas are stored on different nodes. HDFS utilizes the local file system to store each HDFS block as a separate file. HDFS block size cannot be compared with the traditional file system block size.

Question 55. What Is A Namenode? How Many Instances Of Namenode Run On A Hadoop Cluster?

Answer :

The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself.

There is only one NameNode process running on any Hadoop cluster. The NameNode runs on its own JVM process. In a typical production cluster it runs on a separate machine.

The NameNode is a Single Point of Failure for the HDFS cluster. When the NameNode goes down, the file system goes offline.

Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.

Question 56. What Is A Datanode? How Many Instances Of Datanode Run On A Hadoop Cluster?

Answer :

A DataNode stores data in the Hadoop File System (HDFS). There is only one DataNode process running on any Hadoop slave node. A DataNode runs on its own JVM process. On startup, a DataNode connects to the NameNode. DataNode instances can talk to each other, mostly while replicating data.

Question 57. How Does The Client Communicate With HDFS?

Answer :

Client communication with HDFS happens using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can talk directly to a DataNode, once the NameNode has provided the location of the data.

Question 58. How Are The HDFS Blocks Replicated?

Answer :

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.

The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration there are a total of three copies of a data block on HDFS; two copies are stored on DataNodes on the same rack and the third copy on a different rack.

Question 59. What Is The Hadoop Framework?

Answer :

Hadoop is an open source framework written in Java by the Apache Software Foundation. This framework is used to write software applications which need to process vast amounts of data (it can handle multiple terabytes of data). It works in parallel on large clusters which can have thousands of computers (nodes). It also processes data very reliably and in a fault-tolerant manner.

Question 60. What Is Big Data?

Answer :

Big Data is nothing but a collection of data so large and complex that it becomes very tedious to capture, store, process, retrieve and analyze it with the help of on-hand database management tools or traditional data processing techniques.

Question 61. Can You Give Some Examples Of Big Data?

Answer :

There are many real-life examples of Big Data! Facebook is generating 500+ terabytes of data per day, NYSE (New York Stock Exchange) generates about 1 terabyte of new trade data per day, and a jet airliner collects 10 terabytes of sensor data for every 30 minutes of flying time. All these are day-to-day examples of Big Data!

Question 62. Can You Give A Detailed Overview About The Big Data Being Generated By Facebook?

Answer :

As of December 31, 2012, there are 1.06 billion monthly active users on Facebook and 680 million mobile users. On average, 3.2 billion likes and comments are posted every day on Facebook. 72% of the web audience is on Facebook. And why not! There are so many activities going on on Facebook, from wall posts, sharing images and videos, to writing comments and liking posts, etc. In fact, Facebook started using Hadoop in mid-2009 and was one of the initial users of Hadoop.

Question 63. According To IBM, What Are The Three Characteristics Of Big Data?

Answer :

According to IBM, the three characteristics of Big Data are:

Volume: Facebook generating 500+ terabytes of data per day.

Velocity: Analyzing 2 million records each day to identify the reason for losses.

Variety: images, audio, video, sensor data, log files, and so on.

Question 64. How Is Analysis Of Big Data Useful For Organizations?

Answer :

Effective analysis of Big Data provides a lot of business advantage, as organizations will learn which areas to focus on and which areas are less important. Big data analysis provides some early key indicators that can prevent the company from a huge loss or help in grasping a great opportunity with open arms! A precise analysis of Big Data helps in decision making! For instance, nowadays people rely so much on Facebook and Twitter before buying any product or service. All thanks to the Big Data explosion.

Question 65. How Big Is 'Big Data'?

Answer :

With time, data volume is growing exponentially. Earlier we used to talk about megabytes or gigabytes. But the time has arrived when we talk about data volume in terms of terabytes, petabytes and also zettabytes! Global data volume was around 1.8 ZB in 2011 and is expected to be 7.9 ZB in 2015. It is also known that global data doubles every two years!

Question 66. Who Are 'Data Scientists'?

Answer :

Data scientists are soon replacing business analysts or data analysts. Data scientists are experts who find solutions to analyze data. Just as with web analysis, we have data scientists who have good business insight as to how to handle a business challenge. Sharp data scientists are not only involved in dealing with business problems, but also in choosing the relevant issues that can bring value-addition to the organization.

Question 67. Why The Name 'Hadoop'?

Answer :

Hadoop doesn't have any expanded form like 'OOPS'. The charming yellow elephant you see is basically named after Doug's son's toy elephant!

Question 68. Why Do We Need Hadoop?

Answer :

Every day a large amount of unstructured data is getting dumped into our machines. The major challenge is not to store large data sets in our systems but to retrieve and analyze the big data in the organizations, and that too data present in different machines at different locations. In this situation a necessity for Hadoop arises. Hadoop has the ability to analyze the data present in different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce, which enables it to divide the query into small parts and process them in parallel. This is also known as parallel computing.

Question 69. What Are Some Of The Characteristics Of Hadoop Framework?

Answer :

The Hadoop framework is written in Java. It is designed to solve problems that involve analyzing large data (e.g. petabytes). The programming model is based on Google's MapReduce. The infrastructure is based on Google's Big Data and Distributed File System. Hadoop handles large files/data throughput and supports data-intensive distributed applications. Hadoop is scalable as more nodes can easily be added to it.

Question 70. Give A Brief Overview Of Hadoop History?

Answer :

In 2002, Doug Cutting created an open source web crawler project.

In 2004, Google published the MapReduce and GFS papers.

In 2006, Doug Cutting developed the open source MapReduce and HDFS project.

In 2008, Yahoo ran a 4,000-node Hadoop cluster and Hadoop won the terabyte sort benchmark.

In 2009, Facebook launched SQL support for Hadoop.

Question 71. Give Examples Of Some Companies That Are Using The Hadoop Structure?

Answer :

A lot of companies are using the Hadoop structure, including Cloudera, EMC, MapR, Hortonworks, Amazon, Facebook, eBay, Twitter, Google and so forth.

Question 72. What Is The Basic Difference Between Traditional Rdbms And Hadoop?

Answer :

A traditional RDBMS is used for transactional systems to report and archive the data, whereas Hadoop is an approach to store a huge amount of data in a distributed file system and process it. An RDBMS will be useful when you want to seek one record from big data, whereas Hadoop will be useful when you want big data in one shot and perform analysis on it later.

Question 73. What Is Structured And Unstructured Data?

Answer :

Structured data is data that is easily identifiable as it is organized in a structure. The most common form of structured data is a database where specific information is stored in tables, that is, rows and columns. Unstructured data refers to any data that cannot be identified easily. It could be in the form of images, videos, documents, email, logs and random text. It is not in the form of rows and columns.

Question 74. What Are The Core Components Of Hadoop?

Answer :

The core components of Hadoop are HDFS and MapReduce. HDFS is basically used to store large data sets and MapReduce is used to process such large data sets.

Question 75. What Is HDFS?

Answer :

HDFS is a file system designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.

Question 76. What Are The Key Features Of Hdfs?

Answer :

HDFS is highly fault-tolerant, provides high throughput, is suitable for applications with large data sets, offers streaming access to file system data, and can be built out of commodity hardware.

Question 77. What Is Fault Tolerance?

Answer :

Suppose you have a file stored in a system, and due to some technical problem that file gets destroyed. Then there is no chance of getting back the data present in that file. To avoid such situations, Hadoop has introduced the feature of fault tolerance in HDFS. In Hadoop, when we store a file, it automatically gets replicated at two other places as well. So even if one or two of the systems collapse, the file is still available on the third machine.

Question 78. Replication Causes Data Redundancy, Then Why Is It Pursued In HDFS?

Answer :

HDFS works with commodity hardware (systems with average configurations) that has high chances of crashing at any time. Thus, to make the entire system highly fault-tolerant, HDFS replicates and stores data in different places. Any data on HDFS gets stored at at least three different locations. So, even if one of them is corrupted and another is unavailable for some time for any reason, then data can be accessed from the third one. Hence, there is no chance of losing the data. This replication factor helps us attain the feature of Hadoop called Fault Tolerance.

Question 79. Since The Data Is Replicated Thrice In Hdfs, Does It Mean That Any Calculation Done On One Node Will Also Be Replicated On The Other Two?

Answer :

Since there are 3 nodes, when we send the MapReduce programs, calculations will be done only on the original data. The master node will know which node exactly has that particular data. In case one of the nodes is not responding, it is assumed to have failed. Only then will the required calculation be done on the second replica.

Question 80. What Is Throughput? How Does Hdfs Get A Good Throughput?

Answer :

Throughput is the amount of work done in a unit of time. It describes how fast the data is getting accessed from the system and it is also used to measure the performance of the system. In HDFS, when we want to perform a task or an action, the work is divided and shared among different systems. So all the systems will be executing the tasks assigned to them independently and in parallel. So the work will be completed in a very short period of time. In this way, HDFS gives good throughput. By reading data in parallel, we decrease the actual time to read data tremendously.

Question 81. What Is Streaming Access?

Answer :

As HDFS works on the principle of 'Write Once, Read Many', the feature of streaming access is extremely important in HDFS. HDFS focuses not so much on storing the data but on how to retrieve it at the fastest possible speed, especially while reading logs. In HDFS, reading the complete data is more important than the time taken to fetch a single record from the data.

Question 82. What Is A Commodity Hardware? Does Commodity Hardware Include Ram?

Answer :

Commodity hardware is a non-expensive system which is not of high quality or high availability. Hadoop can be installed on any average commodity hardware. We don't need supercomputers or high-end hardware to work on Hadoop. Yes, commodity hardware includes RAM because there will be some services that will be running on RAM.

Question 83. Is Namenode Also A Commodity?

Answer :

No. The Namenode can never be commodity hardware because the entire HDFS relies on it. It is the single point of failure in HDFS. The Namenode has to be a high-availability machine.

Question 84. What Is Metadata?

Answer :

Metadata is the information about the data stored in data nodes, such as the location of the file, the size of the file and so on.

Question 85. What Is A Daemon?

Answer :

A daemon is a process or service that runs in the background. In general, we use this word in the UNIX environment. The equivalent of a daemon in Windows is "services" and in DOS it is "TSR".

Question 86. What Is A Job Tracker?

Answer :

The job tracker is a daemon that runs on a namenode for submitting and tracking MapReduce jobs in Hadoop. It assigns the tasks to the different task trackers. In a Hadoop cluster, there will be only one job tracker but many task trackers. It is the single point of failure for Hadoop and the MapReduce service. If the job tracker goes down, all the running jobs are halted. It receives heartbeats from the task trackers based on which the job tracker decides whether the assigned task is completed or not.

Question 87. What Is A Task Tracker?

Answer :

The task tracker is also a daemon that runs on datanodes. Task trackers manage the execution of individual tasks on slave nodes. When a client submits a job, the job tracker will initialize the job, divide the work and assign it to different task trackers to perform MapReduce tasks. While performing this action, the task tracker will simultaneously communicate with the job tracker by sending a heartbeat. If the job tracker does not receive a heartbeat from a task tracker within the specified time, then it will assume that the task tracker has crashed and assign that task to another task tracker in the cluster.

Question 88. Is Namenode Machine Same As Datanode Machine As In Terms Of Hardware?

Answer :

It depends upon the cluster you are trying to create. The Hadoop VM can be on the same machine or on another machine. For instance, in a single-node cluster, there is only one machine, whereas in a development or testing environment, the namenode and data nodes are on different machines.

Question 89. What Is A Heartbeat In Hdfs?

Answer :

A heartbeat is a signal indicating that a node is alive. A datanode sends a heartbeat to the namenode and a task tracker sends its heartbeat to the job tracker. If the namenode or job tracker does not receive a heartbeat, they will decide that there is some problem in the datanode or that the task tracker is unable to perform the assigned task.

Question 90. Are Namenode And Job Tracker On The Same Host?

Answer :

No, in a practical environment, the namenode is on a separate host and the job tracker is on a separate host.

Question 91. What Is A 'block' In Hdfs?

Answer :

A 'block' is the minimum amount of data that can be read or written. In HDFS, the default block size is 64 MB as compared to the block size of 8192 bytes in Unix/Linux. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks, mainly to minimize the cost of seeks.

Question 92. If A Particular File Is 50 Mb, Will The Hdfs Block Still Consume 64 Mb As The Default Size?

Answer :

No, not at all! 64 MB is just a unit in which the data will be stored. In this particular scenario, only 50 MB will be consumed by an HDFS block and 14 MB will be free to store something else. It is the MasterNode that does data allocation in an efficient manner.

Question 93. What Are The Benefits Of Block Transfer?

Answer :

A file can be larger than any single disk in the network. There is nothing that requires the blocks from a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file simplifies the storage subsystem. Blocks also provide fault tolerance and availability. To insure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.

Question 94. If We Want To Copy 10 Blocks From One Machine To Another, But The Other Machine Can Copy Only 8.5 Blocks, Can The Blocks Be Broken At The Time Of Replication?

Answer :

In HDFS, blocks cannot be broken down. Before copying the blocks from one machine to another, the master node will figure out the actual amount of space required, how many blocks are being used, how much space is available, and it will allocate the blocks accordingly.

Question 95. How Indexing Is Done In Hdfs?

Answer :

Hadoop has its own way of indexing. Depending upon the block size, once the data is stored, HDFS will keep storing the last part of the data, which will indicate where the next part of the data is. In fact, this is the basis of HDFS.

Question 96. If A Data Node Is Full How It's Identified?

Answer :

When data is stored in a datanode, the metadata of that data is stored in the namenode. So the namenode will identify whether the data node is full.

Question 97. If Datanodes Increase, Then Do We Need To Upgrade Namenode?

Answer :

While installing the Hadoop system, the namenode is determined based on the size of the clusters. Most of the time, we do not need to upgrade the namenode because it does not store the actual data, but just the metadata, so such a requirement rarely arises.

Question 98. Are Job Tracker And Task Trackers Present In Separate Machines?

Answer :

Yes, the job tracker and task trackers are present on different machines. The reason is that the job tracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.

Question 99. When We Send A Data To A Node, Do We Allow Settling In Time, Before Sending Another Data To That Node?

Answer :

Yes, we do.

Question 100. Does Hadoop Always Require Digital Data To Process?

Answer :

Yes. Hadoop always requires digital data to be processed.

Question 101. On What Basis Will The Namenode Decide Which Datanode To Write On?

Answer :

As the Namenode has the metadata (information) related to all the data nodes, it knows which datanode is free.

Question 102. Doesn't Google Have Its Very Own Version Of Dfs?

Answer :

Yes, Google owns a DFS known as "Google File System (GFS)", developed by Google Inc. for its own use.

Question 103. Who Is A 'User' In HDFS?

Answer :

A user is someone like you or me, who has some query or who needs some kind of data.

Question 104. Is Client The End User In Hdfs?

Answer :

No, the client is an application which runs on your machine, which is used to interact with the namenode (job tracker) or the datanode (task tracker).

Question 105. What Is The Communication Channel Between Client And Namenode/Datanode?

Answer :

The mode of communication is SSH.

Question 106. What Is A Rack?

Answer :

A rack is a storage area with all the datanodes put together. These datanodes can be physically located at different places. A rack is a physical collection of datanodes which are stored at a single location. There can be multiple racks in a single location.

Question 107. On What Basis Data Will Be Stored On A Rack?

Answer :

When the client is ready to load a file into the cluster, the content of the file will be divided into blocks. Now the client consults the Namenode and gets 3 datanodes for every block of the file, which indicates where the block should be stored. While placing the datanodes, the key rule followed is: "for every block of data, two copies will exist in one rack, the third copy in a different rack". This rule is known as the "Replica Placement Policy".

Question 108. Do We Need To Place The 2nd And 3rd Replicas In Rack 2 Only?

Answer :

Yes, this is to avoid datanode failure.

Question 109. What If Rack 2 And Datanode Fails?

Answer :

If both rack 2 and the datanode present in rack 1 fail, then there is no chance of getting data from it. In order to avoid such situations, we need to replicate that data more times instead of replicating it only 3 times. This can be done by changing the value of the replication factor, which is set to 3 by default.

Question 110. What Is A Secondary Namenode? Is It A Substitute To The Namenode?

Answer :

The secondary Namenode constantly reads the data from the RAM of the Namenode and writes it into the hard disk or the file system. It is not a substitute for the Namenode, so if the Namenode fails, the entire Hadoop system goes down.

Question 111. What Is The Difference Between Gen1 And Gen2 Hadoop With Regards To The Namenode?

Answer :

In Gen 1 Hadoop, the Namenode is the single point of failure. In Gen 2 Hadoop, we have what is known as an Active and Passive Namenode kind of structure. If the active Namenode fails, the passive Namenode takes over the charge.

Question 112. Can You Explain How Do 'map' And 'reduce' Work?

Answer :

The Namenode takes the input, divides it into parts and assigns them to data nodes. These datanodes process the tasks assigned to them, make key-value pairs and return the intermediate output to the Reducer. The reducer collects these key-value pairs from all the datanodes, combines them and generates the final output.

Question 113. What Is 'key Value Pair' In Hdfs?

Answer :

A key-value pair is the intermediate data generated by maps and sent to reduces for generating the final output.

Question 114. What Is The Difference Between Mapreduce Engine And Hdfs Cluster?

Answer :

The HDFS cluster is the name given to the whole configuration of master and slaves where data is stored. The MapReduce Engine is the programming module which is used to retrieve and analyze data.

Question 115. Is Map Like A Pointer?

Answer :



