Hadoop helps you manage thousands of terabytes of data and makes it simple to run applications on systems with thousands of commodity hardware nodes. It has a distributed file system that enables rapid data transfer rates among the nodes, and it allows the system to keep running if a node fails.

This approach reduces the risk of unexpected data loss or system failure even as the number of inoperative nodes grows. As a result, Hadoop has emerged as a foundation for big data processing tasks such as business analytics and planning, scientific analysis, and processing large volumes of sensor data, including data from IoT sensors.
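
To make the file system concrete, here is a minimal sketch of a client writing and reading a file through Hadoop's Java FileSystem API. The namenode address, file path, and file contents below are placeholders, not prescribed values:

```java
// Minimal sketch: a round trip to HDFS via Hadoop's Java FileSystem API.
// The namenode address and path are placeholders for illustration only.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder address

    try (FileSystem fs = FileSystem.get(conf)) {
      Path path = new Path("/tmp/example.txt"); // placeholder path

      // Write a small file; HDFS replicates its blocks across datanodes,
      // which is what lets the cluster tolerate node failures.
      try (FSDataOutputStream out = fs.create(path, true)) {
        out.write("hello hadoop".getBytes(StandardCharsets.UTF_8));
      }

      // Read the file back from whichever replica is available.
      try (BufferedReader in = new BufferedReader(
          new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
        System.out.println(in.readLine());
      }
    }
  }
}
```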

Computer scientists Doug Cutting and Mike Cafarella created Hadoop in 2006 to support distribution for the Nutch search engine. Hadoop was inspired by Google's MapReduce, a software framework that breaks applications down into small parts.

These parts, also known as blocks or fragments, can be run on any node in the cluster. Hadoop 1.0 became publicly available in November 2012 after nearly six years of development within the open source community, with the Apache Software Foundation sponsoring the project.

Hadoop has been continuously developed and updated since its initial release. Hadoop 2.0, the second version, improved scheduling and resource management, added support for Microsoft Windows, and introduced a high-availability option, among other components.

These additions expand the framework's flexibility for data analytics and processing. Companies can deploy Hadoop components and supporting software packages in their local data centers.

Hadoop modules and projects

As a software framework, Hadoop is composed of several functional modules. At a minimum, Hadoop Common serves as the kernel that provides the framework's essential libraries. Other components include:

  • The Hadoop Distributed File System (HDFS) – stores data across thousands of commodity servers and provides high aggregate bandwidth among the nodes
  • Hadoop MapReduce – provides the programming model for large-scale distributed data processing: mapping input data and reducing it to a result (see the word-count sketch after this list)
  • Hadoop YARN (Yet Another Resource Negotiator) – provides scheduling and resource management for user applications
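
As a hedged illustration of the MapReduce model, here is the classic word-count pattern written against Hadoop's Java MapReduce API: the map step emits (word, 1) pairs, and the reduce step sums the counts after the shuffle/sort. Input and output paths are supplied on the command line:

```java
// A minimal word-count sketch of the MapReduce programming model.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: emit (word, 1) for every word in an input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce step: sum the counts for each word after the shuffle/sort.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

On a cluster, YARN schedules the map and reduce tasks of a job like this across the available nodes.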

Challenges of using Hadoop

The MapReduce model cannot solve every problem. Hadoop is well suited to simple information requests and to tasks that can be divided into independent units, but it is not efficient for iterative and interactive tasks. MapReduce is file-intensive and does not allow applications to reuse fractions of the input data.

Because the nodes do not intercommunicate except through sorts and shuffles, iterative algorithms require multiple map-shuffle/sort-reduce phases to complete. This creates multiple files between MapReduce phases, which is very inefficient for advanced analytic computing.
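
To see why this is costly, consider a hedged sketch of an iterative job chain: each pass is a full MapReduce job whose entire output is written to HDFS before the next pass can read it back. The pass logic and the paths below are hypothetical placeholders:

```java
// Hedged sketch: an iterative computation expressed as chained MapReduce
// jobs. PassMapper/PassReducer and the paths are placeholders; the point is
// that every pass materializes its full output on disk for the next pass.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IterativeDriver {

  // Placeholder pass: a real iterative algorithm would update its state
  // here; this mapper just forwards each input line under a single key.
  public static class PassMapper
      extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(new Text("record"), value);
    }
  }

  // Placeholder reduce: forwards the grouped values unchanged.
  public static class PassReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      for (Text v : values) {
        ctx.write(key, v);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path current = new Path("/data/input"); // hypothetical starting data

    for (int i = 0; i < 10; i++) { // e.g. ten passes of an iterative algorithm
      Job job = Job.getInstance(conf, "iteration " + i);
      job.setJarByClass(IterativeDriver.class);
      job.setMapperClass(PassMapper.class);
      job.setReducerClass(PassReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(Text.class);

      Path next = new Path("/data/iteration-" + i);
      FileInputFormat.addInputPath(job, current);
      FileOutputFormat.setOutputPath(job, next);

      if (!job.waitForCompletion(true)) {
        System.exit(1);
      }
      // The complete intermediate result now sits on HDFS; the next pass
      // must read it back from disk because nodes never hand data to each
      // other directly.
      current = next;
    }
  }
}
```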

There is a wide talent gap

Today, few entry-level programmers have the Java skills needed to be productive with MapReduce. This is one of the main reasons distribution providers are racing to put relational (SQL) technology on top of Hadoop.

Programmers with SQL skills are far easier to find than programmers with MapReduce skills. In addition, managing Hadoop seems part science and part art, requiring low-level knowledge of operating systems, hardware, and Hadoop kernel settings.

Data security

Although new tools and technologies are emerging, fragmented data still raises security issues. This has led to the adoption of the Kerberos authentication protocol, a great step toward securing Hadoop environments.
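
As a hedged sketch of what Kerberos means for a Hadoop client, the snippet below uses Hadoop's UserGroupInformation API to authenticate with a keytab before touching HDFS. The principal name and keytab path are placeholders; real values come from the cluster's security setup:

```java
// Hedged sketch: a Java client logging in to a Kerberos-secured Hadoop
// cluster before accessing HDFS.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");

    // Tell Hadoop's security layer to expect Kerberos, then authenticate
    // with a service principal and its keytab file.
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "etl-user@EXAMPLE.COM",      // placeholder principal
        "/etc/security/etl.keytab"); // placeholder keytab path

    // Subsequent filesystem calls carry the Kerberos credentials.
    try (FileSystem fs = FileSystem.get(conf)) {
      System.out.println(fs.exists(new Path("/secure/data")));
    }
  }
}
```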

Full-fledged data management and governance

Hadoop lacks full-featured tools for data quality and standardization, and the tools it does have are complicated to use. This makes data management, data cleansing, governance, and metadata handling very difficult.
