- August 10, 2018
- Posted by: AFourTech
- Category: Blogs
Today, an enormous amount of data is generated every day. At this rate of data consumption, we are moving into the zettabyte age.
At the same time, rapid technological advancement has made it possible to organize the huge volumes of data now being generated, and this has created demand for new methods of data analysis and storage.
In particular, the practical challenge of extracting knowledge from huge data sets has become extremely important.
After all, “Big Data” is the biggest phenomenon to capture the attention of the modern computing industry since the worldwide expansion of the internet.
Big Data is rapidly gaining popularity because the industrial revolution in data has created a situation where everybody wants to process data in multiple formats and structures. It provides a platform free of the constraints associated with traditional systems and database platforms.
As we know, Big Data usually includes the data generated every second by sensors, mobile devices, and consumer activity on social networks. Beyond this, Big Data is also evolving from many facets of our day-to-day lives.
Some of the crucial parameters are:
- Ambiguity: It arises when metadata lags behind the data itself, obscuring what the data in a Big Data store actually means.
- Viscosity: It measures the resistance to flow in the volume of data. In practice, this resistance can come from business rules or from the technology itself; for example, many enterprises cannot determine what impact the data has on the business, and that uncertainty resists the use of the data.
- Virality: It measures how quickly data is shared across a peer-to-peer network.
Before answering “why MapReduce?”, let’s first find out what MapReduce is.
MapReduce is a popular programming model designed for processing extremely large volumes of data in parallel by splitting a job into a set of independent tasks.
It is essentially a combination of a Map() function and a Reduce() function. Map() performs filtering and sorting, such as sorting customers by first name into queues, generating one queue for each name, while Reduce() performs a summary or aggregation operation, such as counting the number of customers in each queue, thereby yielding the name counts.
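To make the customer example concrete, here is a minimal single-process Python sketch (the data and function names are illustrative, not part of any real framework):

```python
from collections import defaultdict

def map_customers(customers):
    # Map: emit a (first_name, 1) pair for each customer record.
    for customer in customers:
        first_name = customer.split()[0]
        yield (first_name, 1)

def reduce_counts(pairs):
    # Reduce: group the pairs into per-name "queues" and count each queue.
    counts = defaultdict(int)
    for name, one in pairs:
        counts[name] += one
    return dict(counts)

customers = ["Alice Smith", "Bob Jones", "Alice Brown"]
print(reduce_counts(map_customers(customers)))  # {'Alice': 2, 'Bob': 1}
```

In a real MapReduce system the map and reduce calls would run on different machines; the toy above only shows the division of labor between the two functions.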
The “MapReduce system”, also known as the MapReduce framework or architecture, orchestrates the processing across distributed servers, running the various tasks in parallel, managing all communications and data transfers between the parts of the system, and providing for redundancy and fault tolerance.
MapReduce is a framework in which voluminous data is split and distributed across a large number of nodes. The group of nodes is collectively treated as a cluster if all nodes have similar hardware configurations and share the same local network, or as a grid if the nodes are geographically distributed with varying hardware specifications.
Processing may occur on data stored either in the file system or in a database. MapReduce takes advantage of data locality to minimize the distance over which data must be transferred.
MapReduce works in two phases:
Map Phase: In this phase, the master node takes the input, divides it into smaller sub-tasks, and distributes them to worker nodes. A worker node may repeat this in turn, leading to a multi-level tree structure. Each worker node processes its smaller task and passes the intermediate result back to its master node.
Reduce Phase: During the reduce phase, the master node collects the intermediate outputs of all the sub-tasks from the various worker nodes and combines them to form the final output – the answer to the problem it was originally trying to solve.
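The two phases above can be sketched in plain Python. This is a single-process toy with hypothetical helper names, not the Hadoop API; in Hadoop the chunks would be processed in parallel on different nodes:

```python
from itertools import groupby

def run_mapreduce(records, map_fn, reduce_fn, n_workers=3):
    # "Map phase": the master splits the input into sub-tasks,
    # one chunk per worker, and each worker maps its own chunk.
    chunks = [records[i::n_workers] for i in range(n_workers)]
    intermediate = []
    for chunk in chunks:  # in a real cluster these run in parallel
        for record in chunk:
            intermediate.extend(map_fn(record))

    # Shuffle: group the intermediate (key, value) pairs by key.
    intermediate.sort(key=lambda kv: kv[0])
    grouped = groupby(intermediate, key=lambda kv: kv[0])

    # "Reduce phase": combine each group into one final output value.
    return {key: reduce_fn(key, [v for _, v in pairs]) for key, pairs in grouped}

# Word count as the classic example.
lines = ["the quick fox", "the lazy dog"]
result = run_mapreduce(
    lines,
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda word, ones: sum(ones),
)
print(result["the"])  # 2
```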
MapReduce programs are not guaranteed to be fast, however. The main advantage of the programming model is that it exploits the platform’s optimized shuffle operation: the programmer’s only task is to write the Map and Reduce functions, and the platform itself shuffles the intermediate results between the two phases during execution.
However, the amount of data produced by the Map function, together with the partition function, strongly influences the performance of the program. In addition to the partitioner, a Combiner function helps to reduce the amount of data written to disk and transmitted over the network.
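To illustrate why a combiner helps, the sketch below pre-aggregates a mapper's output locally before the shuffle, shrinking the number of (key, value) pairs that would cross the network (the names are illustrative, and a real combiner runs inside the framework, not as a user call):

```python
from collections import Counter

def map_words(line):
    # Mapper: one (word, 1) pair per word.
    return [(word, 1) for word in line.split()]

def combine(pairs):
    # Combiner: sum counts per key locally, on the mapper's node,
    # before anything is written to disk or sent over the network.
    # It typically reuses the same logic as the reducer.
    totals = Counter()
    for word, n in pairs:
        totals[word] += n
    return list(totals.items())

line = "to be or not to be"
raw = map_words(line)           # 6 pairs leave the mapper without a combiner
combined = combine(raw)         # only 4 pairs (to, be, or, not) with one
print(len(raw), len(combined))  # 6 4
```

The saving grows with repetition: for highly skewed keys, a combiner can cut shuffle volume by orders of magnitude.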
MapReduce implementation is a simple yet tricky task. When designing MapReduce programs, the user is not responsible for specifying the number of mappers, since it depends entirely on the input file size and the block size, whereas the number of reducers can easily be configured by the user.
In general, the programmer chooses the number of reducers, or else Hadoop applies its default. If no map() is defined by the user, the output of the RecordReader is passed through an identity mapper with no custom logic, and similarly, if no reducer is defined, an identity reducer is used.
The intermediate output is stored on the local disk of the data node itself and is not sent to HDFS. When multiple mappers are running, some may run very slowly; Hadoop identifies such slow-running tasks and launches duplicates of them on other data nodes, a mechanism known as speculative execution.
Apache Hadoop mainly couples a storage part, the Hadoop Distributed File System (HDFS), with a processing part (MapReduce). Hadoop splits files into smaller blocks and distributes them across the nodes in a cluster.
In this scenario, it is not the programmer’s job to handle distribution over the cluster; Hadoop itself takes care of it.
In Hadoop, MapReduce performs data processing by transferring the program code to the nodes holding the relevant data, i.e., the processing code travels to the node rather than the data traveling to the code.
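Because Hadoop ships the processing code to the nodes, the mapper and reducer can even be plain scripts: Hadoop Streaming runs any executable that reads lines from stdin and writes tab-separated key/value lines to stdout. A minimal word-count pair in that style might look like the following sketch (simplified, and driven here by in-memory lists rather than real stdin):

```python
def mapper(lines):
    # Streaming mapper: one input line in, "word\t1" lines out.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    # Streaming reducer: lines arrive sorted by key, so equal keys
    # are adjacent and can be summed with a single running total.
    current, total = None, 0
    for line in sorted_lines:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# With real Hadoop Streaming, each script reads sys.stdin and Hadoop
# sorts the mapper output between the two phases; sorted() stands in
# for that shuffle here.
mapped = sorted(mapper(["the quick fox", "the lazy dog"]))
print(list(reducer(mapped)))
```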
Hadoop framework is generally encapsulated with the following modules:
- Hadoop Common
- Hadoop Distributed File System (HDFS)
- Hadoop MapReduce
With the advent of new technologies, one must understand the worldwide competition and use Big Data analysis to support wise decision-making.
Today, Big Data analysis is still in its infancy, but in the near future Big Data will definitely bring major social change.
Though tools like R and SPSS are evolving for Big Data analytics, further development is still required to ensure the integrity and security of the large data sets being processed.