Detailed Steps for Setting Up a Hadoop 2.x Pseudo-Distributed Environment
This article walks through the entire process of setting up a Hadoop 2.x pseudo-distributed environment, step by step, for your reference. The details are as follows.
1. Edit hadoop-env.sh, yarn-env.sh, and mapred-env.sh
Method: open these three files with Notepad++ (as the beifeng user)
Add the line: export JAVA_HOME=/opt/modules/jdk1.7.0_67
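If you prefer the command line to Notepad++, the same line can be appended from a shell. A minimal sketch, assuming the configuration files live under /opt/modules/hadoop-2.5.0/etc/hadoop (the standard layout for this version):
# Append JAVA_HOME to the three environment scripts (run as the beifeng user)
cd /opt/modules/hadoop-2.5.0/etc/hadoop
for f in hadoop-env.sh yarn-env.sh mapred-env.sh; do
    echo 'export JAVA_HOME=/opt/modules/jdk1.7.0_67' >> $f
done
# Confirm the JDK path actually exists
ls /opt/modules/jdk1.7.0_67/bin/java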
2. Edit the core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml configuration files
1) Edit core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Hadoop-senior02.beifeng.com:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.5.0/data</value>
    </property>
</configuration>
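hadoop.tmp.dir points to a local directory, so it is worth making sure that directory exists and is writable before formatting the NameNode. A quick sketch, using the path from the configuration above:
# Create the data directory referenced by hadoop.tmp.dir
mkdir -p /opt/modules/hadoop-2.5.0/data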
2) Edit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>Hadoop-senior02.beifeng.com:50070</value>
    </property>
</configuration>
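dfs.replication is set to 1 because a pseudo-distributed setup has only one DataNode, so there is nowhere to place extra replicas. If you want to double-check that Hadoop is actually picking up these values, one way (a sketch, not required for the setup) is the getconf tool:
# Print the effective value of a configuration key
bin/hdfs getconf -confKey dfs.replication
bin/hdfs getconf -confKey fs.defaultFS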
3) Edit yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Hadoop-senior02.beifeng.com</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
</configuration>
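With yarn.log-aggregation-enable set to true, container logs are collected into HDFS after an application finishes and kept for 86400 seconds (one day). They can then be read from the command line; a sketch, reusing the application id that appears in the wordcount output in step 5 (substitute your own id):
# View the aggregated logs of a finished application
bin/yarn logs -applicationId application_1462660542807_0001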
4) Edit mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
    </property>
</configuration>
3. Start HDFS
1) Format the NameNode: $ bin/hdfs namenode -format
2) Start the NameNode: $ sbin/hadoop-daemon.sh start namenode
3) Start the DataNode: $ sbin/hadoop-daemon.sh start datanode
4) HDFS web UI: http://hadoop-senior02.beifeng.com:50070
4. Start YARN
1) Start the ResourceManager: $ sbin/yarn-daemon.sh start resourcemanager
2) Start the NodeManager: $ sbin/yarn-daemon.sh start nodemanager
3) YARN web UI: http://hadoop-senior02.beifeng.com:8088
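Once HDFS and YARN are both up, a quick way to confirm that all four daemons are running is jps:
$ jps
# Expected processes (pid values will differ):
#   NameNode
#   DataNode
#   ResourceManager
#   NodeManager
#   Jps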
5. Test the wordcount example jar
1) Change to the Hadoop installation directory: /opt/modules/hadoop-2.5.0
2) Run the example (the input file /input/sort.txt must already exist in HDFS; see the sketch after the results below): bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/sort.txt /output6/
Job output:
16/05/08 06:39:13 INFO client.RMProxy: Connecting to ResourceManager at Hadoop-senior02.beifeng.com/192.168.241.130:8032
16/05/08 06:39:15 INFO input.FileInputFormat: Total input paths to process : 1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: number of splits:1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1462660542807_0001
16/05/08 06:39:16 INFO impl.YarnClientImpl: Submitted application application_1462660542807_0001
16/05/08 06:39:16 INFO mapreduce.Job: The url to track the job: http://Hadoop-senior02.beifeng.com:8088/proxy/application_1462660542807_0001/
16/05/08 06:39:16 INFO mapreduce.Job: Running job: job_1462660542807_0001
16/05/08 06:39:36 INFO mapreduce.Job: Job job_1462660542807_0001 running in uber mode : false
16/05/08 06:39:36 INFO mapreduce.Job: map 0% reduce 0%
16/05/08 06:39:48 INFO mapreduce.Job: map 100% reduce 0%
16/05/08 06:40:04 INFO mapreduce.Job: map 100% reduce 100%
16/05/08 06:40:04 INFO mapreduce.Job: Job job_1462660542807_0001 completed successfully
16/05/08 06:40:04 INFO mapreduce.Job: Counters: 49
3) View the results: bin/hdfs dfs -text /output6/par*
Result:
hadoop 2
jps 1
mapreduce 2
yarn 1
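For reference, the /input/sort.txt file used above has to be uploaded to HDFS before the job is submitted, and the output directory (/output6/ here) must not already exist when the job starts. A minimal sketch, assuming sort.txt is a local file in the current directory:
# Create the input directory in HDFS and upload the test file
bin/hdfs dfs -mkdir -p /input
bin/hdfs dfs -put sort.txt /input/sort.txt
# Only if re-running the job: the output directory must be removed first
bin/hdfs dfs -rm -r /output6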
6. MapReduce JobHistory server
1) Start it: sbin/mr-jobhistory-daemon.sh start historyserver
2) Web UI: http://hadoop-senior02.beifeng.com:19888
7. Roles of HDFS, YARN, and MapReduce
1) HDFS: a distributed, highly fault-tolerant file system designed to run on inexpensive commodity hardware.
HDFS has a master/slave architecture made up of a NameNode and DataNodes: the NameNode manages the file system namespace, while the DataNodes provide the storage. Data is stored on the DataNodes as blocks, 128 MB per block by default.
2) YARN: a general-purpose resource management system that provides unified resource management and scheduling for the applications running on top of it.
YARN is split into a ResourceManager and NodeManagers: the ResourceManager is responsible for scheduling and allocating cluster resources, while each NodeManager runs the tasks and manages the resources on its own node.
3) MapReduce: a computing model with two phases, Map and Reduce.
The map phase processes each line of input and emits key-value pairs, which are passed to the reduce phase; the reduce phase aggregates and summarizes the data received from map.
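As a rough local analogy (not how Hadoop executes the job, just an illustration of the map/shuffle/reduce idea), the same word count can be expressed as a shell pipeline: tr plays the role of map (emit one word per line), sort plays the role of shuffle (group identical keys together), and uniq -c plays the role of reduce (count each group):
# Word count as a plain shell pipeline, run against the local copy of sort.txt
cat sort.txt | tr -s '[:space:]' '\n' | sort | uniq -c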
That is all for this article; hopefully it is helpful for your study.