Big Data Analysis in R Using Hadoop
HAMS Technologies (www.hams.co.in)
HAMS Technologies
Big Data Analysis in R Using Hadoop
» Data size is continuously increasing due to:
˃ Multiple sources of data
˃ Easy and fast availability of data sources
» R is a widely known and accepted language in the field of statistical analysis and prediction techniques, but performing analysis over Big Data is always a challenging task.
» MapReduce is a mechanism for handling large volumes of data. Hadoop is an open-source MapReduce implementation provided by Apache.
» R and Hadoop can be used together with the help of a package called RHadoop.
Before moving further, let us take a look at the MapReduce architecture and the Hadoop implementation.
General flow in the MapReduce architecture:
1. Create a clustered network
2. Load the data into the cluster
3. Process the partitioned data with Map (mapper tasks)
4. Aggregate the results with Reduce (reducer tasks)
[Diagram: three Local Data blocks feed three Map tasks, producing Partial Result-1, Partial Result-2, and Partial Result-3, which a single Reduce task aggregates into the final result.]
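The map/reduce flow above can be mimicked in plain R, with no Hadoop cluster, to make the idea concrete. This is only a toy sketch; the word data and the `merge_counts` helper are invented for illustration.

```r
# Toy map/reduce in plain R: count words across three "local data" blocks.
blocks <- list(c("big", "data"), c("data", "r"), c("r", "big", "data"))

# Map: each block is processed independently, yielding a partial result.
partials <- lapply(blocks, function(b) table(b))

# Reduce: aggregate the partial results into the final word counts.
merge_counts <- function(a, b) {
  words <- union(names(a), names(b))
  sapply(words, function(w) sum(a[w], b[w], na.rm = TRUE))
}
result <- Reduce(merge_counts, partials)
result   # big: 2, data: 3, r: 2
```

On a real cluster the mapper runs on the machine holding each block and only the small partial results travel over the network.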
General attributes of the MapReduce architecture:
1. Distributed file system (DFS)
2. Data locality
3. Data redundancy for fault tolerance
4. Map tasks are applied to partitioned data and scheduled so that the input blocks are on the same machine
5. Reduce tasks process the data partitioned by the Map tasks
Hadoop is an open-source implementation of the MapReduce architecture, maintained by Apache.
[Diagram: Hadoop consists of HDFS (the Hadoop Distributed File System) and MapReduce (job trackers). The master nodes run the name node(s) and job tracker node(s); the slave nodes each run data node(s) and task tracker node(s). hive (Hadoop InteractiVE) sits on top of this framework.]
» Hadoop Streaming allows creating and running MapReduce jobs with an arbitrary script or executable as the mapper and/or the reducer.
» HDFS (Hadoop Distributed File System) is a clustered file system used to store data. HDFS replicates and tracks the different data blocks. An HDFS write is shown below; data is retrieved from HDFS in the reverse manner.
[Diagram: a client file hams.txt is split into Block-1, Block-2, and Block-3. The client asks the Name Node where to write the three blocks; the Name Node replies with target nodes (here, data nodes 1, 2, and 3), and the blocks are written and replicated across Data Node-1 through Data Node-n.]
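In R, the hive package wraps this write/read cycle behind simple DFS functions. The following is only a sketch, assuming a configured and reachable Hadoop cluster; the file contents and the /tmp/hams.txt path are illustrative.

```r
library("hive")                     # Hadoop InteractiVE

hive_start()                        # bring up the Hadoop framework

# Write two lines to HDFS; the name node decides block placement
# and replication behind the scenes.
DFS_write_lines(c("block one", "block two"), "/tmp/hams.txt")

# Retrieve the data in the reverse manner and inspect the directory.
DFS_read_lines("/tmp/hams.txt")
DFS_list("/tmp")

hive_stop()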
» hive (Hadoop InteractiVE) supports high-level functions for handling the Hadoop framework, such as hive_start(), hive_create(), etc.
» It provides functions like hive_stream() to support Hadoop Streaming.
» DFS functions in R, such as DFS_put(), DFS_list(), etc.
» To start working with hive:
˃ Download Hadoop from http://hadoop.apache.org/core
˃ Configure the files
> mapred-site.xml – to configure MapReduce
> core-site.xml – to configure basic Hadoop settings
> hdfs-site.xml – for configuration related to HDFS
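As a minimal sketch, core-site.xml mainly needs to tell Hadoop where the default file system lives; the host and port below are illustrative, not prescribed by these slides.

```xml
<?xml version="1.0"?>
<!-- core-site.xml: basic Hadoop settings (illustrative values) -->
<configuration>
  <property>
    <!-- Default file system URI; point clients at the HDFS name node -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

mapred-site.xml and hdfs-site.xml follow the same `<configuration>`/`<property>` layout for job tracker and replication settings.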
» R and Hadoop: a package, RHadoop, is available on R-Forge.
» At the R prompt:
˃ The Hadoop package can be loaded with
» library("hive")
˃ To start Hadoop:
» hive_start()
˃ Put the data, then list it:
» DFS_put("source_data", "/router_list")
» DFS_list("/router_list")
» A Hive stream can be initialized as:
˃ hive_stream(mapper = <mapper_function_name>,
              reducer = <reducer_function_name>,
              input = <routers>,
              output = <routers_out>)
» Other important functions:
˃ DFS_put_object()
˃ DFS_cat()
˃ hive_create()
˃ hive_get_parameter()
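Putting the pieces together, a word-count job over the router list might look like the sketch below. It assumes a running, configured cluster with input already in HDFS; the paths and helper logic are illustrative, not taken from these slides. With hive_stream(), the mapper and reducer are plain R functions that read key/value lines from stdin and write them to stdout.

```r
library("hive")   # Hadoop InteractiVE

# Mapper: emit "word<TAB>1" for every word on every input line.
mapper <- function() {
  while (length(line <- readLines("stdin", n = 1L)) > 0L) {
    for (w in strsplit(tolower(line), "\\s+")[[1L]])
      if (nzchar(w)) cat(w, "\t1\n", sep = "")
  }
}

# Reducer: sum the counts per word across all mapper outputs.
reducer <- function() {
  counts <- list()
  while (length(line <- readLines("stdin", n = 1L)) > 0L) {
    kv <- strsplit(line, "\t")[[1L]]
    counts[[kv[1L]]] <- sum(counts[[kv[1L]]], as.integer(kv[2L]))
  }
  for (w in names(counts)) cat(w, "\t", counts[[w]], "\n", sep = "")
}

hive_start()
hive_stream(mapper = mapper, reducer = reducer,
            input = "/router_list", output = "/router_list_out")
DFS_cat("/router_list_out/part-00000")   # inspect the aggregated result
hive_stop()
```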
Thank you
Kindly drop us a mail for any suggestions or clarifications. We would love to hear from you.