Hadoop's Tools and Techniques for Big Data

Managing data sets too large for a traditional database requires a dedicated platform. To handle this type of large data set, the distributed platform Hadoop provides a variety of methods for storing, organizing, and analyzing data, and effective tools are needed to extract meaningful results from it. Most of these tools belong to the Apache Hadoop framework, including MapReduce, Mahout, Hive, and others. Below we discuss the tools used to process large health data sets.

Apache Hadoop

The name Hadoop means different things to different people. In 2002 it began as a single software project to support web search engines, and since then it has grown into an ecosystem of tools and applications for a wide variety of data analysis tasks. Hadoop is still sometimes regarded as a single project, but its data processing methods are completely different from those of the traditional relational database model. A more practical definition describes the Hadoop ecosystem and framework as follows: a large collection of open-source tools, libraries, and methods for processing "big data", that is, structured and unstructured data drawn from sources such as Internet images, audio, video, and sensor records. Fig. 3 shows the physical design architecture of Hadoop, which includes MapReduce, HBase, and HDFS.

HDFS

HDFS is designed to handle large data sets. Although several users can access it at the same time, HDFS is not a true parallel file system. Its design assumes large files that are written once and read many times, which allows optimizations and relaxed coherency requirements that a true parallel file system could not afford. HDFS is designed for streaming data transfers, allowing large amounts of data to be read from disk in bulk; the HDFS block size is 64 MB or 128 MB. A cluster has one NameNode and multiple DataNodes, and all the metadata needed to store and retrieve blocks is managed by the NameNode. No file data is stored on the NameNode itself. Files are stored as blocks of equal size, written in order. HDFS is distinguished by its scalability and reliability: metadata and file data are kept separate, with metadata stored on the NameNode and application data stored on the DataNodes.
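
As an illustration of this read/write path, the following minimal Java sketch uses the standard HDFS FileSystem API to write a small file and then stream it back. The path /tmp/hdfs-demo.txt is an arbitrary example, and the NameNode address is assumed to come from the core-site.xml on the classpath.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            // Cluster settings (fs.defaultFS etc.) are read from core-site.xml on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Write a small file; the NameNode records the metadata,
            // while the blocks themselves land on DataNodes.
            Path path = new Path("/tmp/hdfs-demo.txt");
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello from HDFS\n".getBytes(StandardCharsets.UTF_8));
            }

            // Stream the file back; HDFS favours large, sequential reads like this.
            try (FSDataInputStream in = fs.open(path);
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(in, StandardCharsets.UTF_8))) {
                System.out.println(reader.readLine());
            }
            fs.close();
        }
    }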

Map Reduce

Apache Hadoop is closely associated with MapReduce computation. The MapReduce computing model is a powerful tool used in many health applications, yet its underlying concept is simple enough to be grasped by most users. MapReduce has two stages: a map stage and a reduce stage. In the map stage, a map procedure is applied to the input data; in the reduce stage, the mapped results are aggregated. In programming terms, the map phase accepts key-value pairs as input and produces key-value pairs as output, and the reduce phase likewise takes key-value pairs as input and produces key-value pairs as output. Hadoop splits the input data into segments called input splits. The map function generates key-value pairs from each split, and pairs with matching keys are merged before being handed to the reduce function.
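
The classic word-count job below is a minimal Java sketch of these two phases: the mapper emits (word, 1) pairs from each input split, pairs with the same key are grouped, and the reducer sums them. The input and output paths are taken from the command line; this is an illustrative example rather than a production job.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: emit (word, 1) for every word in the input split.
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: all values for the same key arrive together and are summed.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }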

Apache Hive

Hive is a data warehousing layer that sits on top of Hadoop, where analysis and queries can be performed using an SQL-like language. Apache Hive can be used to query, summarize, and analyze data. Hive is widely considered the standard for SQL-style queries over large-scale data in Hadoop, and it offers features for easily accessing and transferring data files stored in HDFS or in other storage systems such as HBase.
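
As a sketch of how this SQL-like access looks from an application, the Java fragment below queries HiveServer2 over JDBC. The host hive-host, the table patient_records, and its diagnosis column are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver; host, port, database, and table name are placeholders.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hive-host:10000/default", "user", "");
                 Statement stmt = conn.createStatement()) {

                // An SQL-like HiveQL query; Hive compiles it into Hadoop jobs behind the scenes.
                ResultSet rs = stmt.executeQuery(
                        "SELECT diagnosis, COUNT(*) AS n FROM patient_records GROUP BY diagnosis");
                while (rs.next()) {
                    System.out.println(rs.getString("diagnosis") + "\t" + rs.getLong("n"));
                }
            }
        }
    }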

Apache Pig

Apache Pig is an open-source platform for better analysis of big data. Pig offers an alternative to writing MapReduce programs directly. Pig allows users to develop their own user-defined functions and supports many traditional data operations such as join, sort, filter, and more.
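
The following minimal Java sketch runs a few Pig Latin statements through the PigServer API in local mode; against a real cluster, ExecType.MAPREDUCE would be used instead. The file visits.csv, its fields, and the output directory sorted_visits are hypothetical.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigExample {
        public static void main(String[] args) throws Exception {
            // Local mode for illustration; use ExecType.MAPREDUCE against a real cluster.
            PigServer pig = new PigServer(ExecType.LOCAL);

            // Pig Latin statements: load, filter, and sort without writing MapReduce by hand.
            // The file visits.csv and its fields are hypothetical.
            pig.registerQuery("visits = LOAD 'visits.csv' USING PigStorage(',') "
                    + "AS (patientId:chararray, age:int, cost:double);");
            pig.registerQuery("adults = FILTER visits BY age >= 18;");
            pig.registerQuery("sorted = ORDER adults BY cost DESC;");

            // Write the result of the 'sorted' alias to an output directory.
            pig.store("sorted", "sorted_visits");
            pig.shutdown();
        }
    }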

Apache HBase

HBase is a column-oriented NoSQL database used with Hadoop, where users can store a large number of rows and columns. HBase provides random read/write access and also supports record-level updates, which are not possible with HDFS. HBase provides parallel data storage on top of the underlying distributed file system running on commodity servers; because of the tight integration between HBase and HDFS, the file system of choice is usually HDFS. HBase is the right choice when a structured, low-latency view of large-scale data stored in Hadoop is required. Its open-source code scales linearly to handle petabytes of data on thousands of nodes.
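
A minimal Java sketch of this random read/write access is shown below, using the HBase client API. It assumes a table named patients with a column family info already exists, and that the cluster address is supplied by hbase-site.xml on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            // Connection details come from hbase-site.xml on the classpath.
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("patients"))) {

                // Random write: update a single cell for row "patient-42".
                Put put = new Put(Bytes.toBytes("patient-42"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("status"),
                        Bytes.toBytes("admitted"));
                table.put(put);

                // Random low-latency read of the same row.
                Result result = table.get(new Get(Bytes.toBytes("patient-42")));
                byte[] status = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("status"));
                System.out.println("status = " + Bytes.toString(status));
            }
        }
    }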

Apache Oozie

If you have complex, system-driven workflows or dependencies between many connected jobs, you need a more sophisticated tool such as Apache Oozie. Apache Oozie can manage and execute various Hadoop-related jobs. Oozie has two parts: a workflow engine that stores and runs workflows composed of Hadoop jobs, and a coordinator engine that runs workflow jobs based on predefined schedules. Oozie makes it possible to build and manage workflows of Hadoop jobs in which the output of one job is used as the input of the next. Oozie is not a replacement for the YARN scheduler. Oozie workflows are directed acyclic graphs (DAGs) of actions. Oozie runs as a service in the cluster, and clients submit workflow definitions for immediate or later execution.
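
As a sketch of how a client submits such a workflow, the Java fragment below uses the Oozie client API. The Oozie server URL, the HDFS application path, and the nameNode/jobTracker properties are placeholders that would normally match your cluster and the parameters referenced in the workflow definition.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class OozieSubmitExample {
        public static void main(String[] args) throws Exception {
            // The Oozie server URL and the HDFS path of the workflow application are placeholders.
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/workflow");
            conf.setProperty("nameNode", "hdfs://namenode:8020");
            conf.setProperty("jobTracker", "resourcemanager:8032");

            // Submit and start the workflow (a DAG of actions defined in workflow.xml).
            String jobId = client.run(conf);
            WorkflowJob job = client.getJobInfo(jobId);
            System.out.println("Workflow " + jobId + " is " + job.getStatus());
        }
    }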

Apache Avro

Avro is a serialization format that makes it possible to exchange data between programs written in any language. It is often used to connect Flume data flows. Avro is based on a schema, and read and write operations against that schema do not depend on the language being used. Avro stores data together with its built-in schema. It serves as the basis both for persistent data serialization and for remote procedure calls between Hadoop nodes and between client programs and Hadoop services.
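
The minimal Java sketch below writes and reads an Avro data file using the generic API. The Reading schema with its sensor and value fields is an invented example; note that the schema travels with the data inside the file.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroExample {
        public static void main(String[] args) throws Exception {
            // A small schema declared inline; real projects usually keep it in a .avsc file.
            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Reading\",\"fields\":["
                  + "{\"name\":\"sensor\",\"type\":\"string\"},"
                  + "{\"name\":\"value\",\"type\":\"double\"}]}");

            // Write a record: the schema is embedded in the file alongside the data.
            File file = new File("readings.avro");
            GenericRecord reading = new GenericData.Record(schema);
            reading.put("sensor", "thermometer-1");
            reading.put("value", 36.6);
            try (DataFileWriter<GenericRecord> writer =
                         new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
                writer.create(schema, file);
                writer.append(reading);
            }

            // Read it back; any language with an Avro library could do the same.
            try (DataFileReader<GenericRecord> reader =
                         new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
                for (GenericRecord r : reader) {
                    System.out.println(r.get("sensor") + " = " + r.get("value"));
                }
            }
        }
    }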

Apache Zookeeper

ZooKeeper is a centralized service that applications use to maintain configuration, health, and other coordination information across nodes. It supports common objects needed in large cluster environments, including configuration information and a hierarchical namespace. These services can be used by various applications to coordinate distributed processing across Hadoop clusters. ZooKeeper also helps guarantee application reliability: if the master application dies, ZooKeeper can appoint a new master to take over its administrative tasks.
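
A minimal Java sketch of this coordination role is shown below: it connects to a ZooKeeper ensemble, publishes a small piece of shared configuration under a znode path, and reads it back. The address zk-host:2181, the path /demo-config, and the stored value are placeholders.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZooKeeperExample {
        public static void main(String[] args) throws Exception {
            // Wait until the client is actually connected before using it.
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("zk-host:2181", 5000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();

            // Publish a piece of shared configuration under a hierarchical path.
            String path = zk.create("/demo-config", "replication=3".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

            // Any node in the cluster can read the same value back.
            byte[] data = zk.getData(path, false, null);
            System.out.println(path + " -> " + new String(data));

            zk.delete(path, -1);
            zk.close();
        }
    }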

Apache Yarn

Hadoop YARN consists of two components: the ResourceManager, which handles job scheduling across the cluster, and the NodeManagers, which run on each host in the cluster, track the resources available on that host, and manage the containers that run dedicated application code, including their memory, CPU, and I/O usage.
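
The short Java sketch below uses the YarnClient API to ask the ResourceManager for a report of each NodeManager's capacity and current usage; the ResourceManager address is assumed to come from yarn-site.xml on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnNodesExample {
        public static void main(String[] args) throws Exception {
            // The ResourceManager address is taken from yarn-site.xml on the classpath.
            Configuration conf = new YarnConfiguration();
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();

            // Ask the ResourceManager what each NodeManager host currently offers.
            for (NodeReport node : yarnClient.getNodeReports()) {
                System.out.println(node.getNodeId() + " capability=" + node.getCapability()
                        + " used=" + node.getUsed());
            }
            yarnClient.stop();
        }
    }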

Apache Sqoop

Apache Sqoop is a powerful tool for extracting data from a relational database management system (RDBMS) into the Hadoop processing architecture. To do this, it uses the MapReduce paradigm or other standard tools, for example Hive. Once the data has been loaded into HDFS, it can be used by Hadoop applications.
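
A minimal sketch of such an import is shown below; it simply launches the standard sqoop import command from Java. The JDBC URL, credentials, table name, and HDFS target directory are placeholders.

    import java.io.IOException;

    public class SqoopImportExample {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Invoke the standard sqoop CLI; the JDBC URL, table name, credentials,
            // and HDFS target directory below are placeholders.
            ProcessBuilder pb = new ProcessBuilder(
                    "sqoop", "import",
                    "--connect", "jdbc:mysql://db-host:3306/clinic",
                    "--username", "etl",
                    "--password", "secret",
                    "--table", "patients",
                    "--target-dir", "/data/patients",
                    "-m", "4");                       // run the import with 4 parallel map tasks
            pb.inheritIO();                           // stream Sqoop's output to this console
            int exitCode = pb.start().waitFor();
            System.out.println("sqoop import finished with exit code " + exitCode);
        }
    }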

Apache Flume

Apache Flume is a reliable service for collecting large volumes of data from independent machines and moving it into HDFS. Transport often proceeds through a number of streaming agents that can span a range of machines and locations. Flume is often used for log files, data generated on social networks, and email.
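
The short Java sketch below pushes a single event to a Flume agent through the Flume RPC client API, assuming the agent runs an Avro source on flume-host:41414 (a placeholder) and that its sink is configured to write to HDFS.

    import java.nio.charset.StandardCharsets;
    import org.apache.flume.Event;
    import org.apache.flume.api.RpcClient;
    import org.apache.flume.api.RpcClientFactory;
    import org.apache.flume.event.EventBuilder;

    public class FlumeSenderExample {
        public static void main(String[] args) throws Exception {
            // Connect to a Flume agent whose Avro source listens on flume-host:41414 (placeholder).
            RpcClient client = RpcClientFactory.getDefaultInstance("flume-host", 41414);
            try {
                // Each log line becomes a Flume event; the agent's sink forwards it to HDFS.
                Event event = EventBuilder.withBody("app started", StandardCharsets.UTF_8);
                client.append(event);
            } finally {
                client.close();
            }
        }
    }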