
Hadoop in Practice 2: starting Hadoop under a fixed hostname, an introduction to jps, deploying YARN, and running a job on YARN

Original · Hadoop · Author: shaozi74108 · 2019-04-03 01:13:32


Goal: make the three HDFS processes start under a fixed hostname

In the earlier setup, the three daemons started on localhost and 0.0.0.0. Production clusters normally bind to a fixed hostname, so we now reconfigure them to start under the new hostname.


[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh


19/02/17 14:33:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
 localhost: starting namenode  , logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop.out
 localhost: starting datanode  , logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
 0.0.0.0: starting secondarynamenode  , logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop.out
19/02/17 14:34:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Check the port of each of the three processes

Check the IP address each of the three processes is bound to
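The two checks above can be scripted. A minimal sketch, assuming the daemons are running and that `ss` is available (older CentOS boxes have `netstat -nlp` instead):

```shell
# Print the local bind address (IP:port) of every listening java process.
# With the default config you would see entries on 127.0.0.1 / 0.0.0.0 here.
ss -tnlp 2>/dev/null | awk '/java/ {print $4}'
```

The fourth column of `ss -tnlp` is the local address; after the reconfiguration below, those entries should show the fixed host address instead of the loopback.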

Start configuring

1. Stop the processes

[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/stop-dfs.sh  

2. Go to the configuration directory

[hadoop@hadoop hadoop]$   cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop  

[hadoop@hadoop hadoop]$ ll

total 152


-rw-r--r-- 1 hadoop hadoop  4436 Mar 24  2016 capacity-scheduler.xml   # core framework settings for HDFS / MapReduce / YARN; the details live in the specific files below


-rw-r--r-- 1 hadoop hadoop  1335 Mar 24  2016 configuration.xsl
-rw-r--r-- 1 hadoop hadoop   318 Mar 24  2016 container-executor.cfg
-rw-r--r-- 1 hadoop hadoop   884 Feb 14 22:18 core-site.xml
-rw-r--r-- 1 hadoop hadoop  3670 Mar 24  2016 hadoop-env.cmd     # files ending in .cmd are the Windows configuration
-rw-r--r-- 1 hadoop hadoop  4335 Feb 14 23:36 hadoop-env.sh      # files ending in .sh are the Linux configuration
-rw-r--r-- 1 hadoop hadoop  2598 Mar 24  2016 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop  2490 Mar 24  2016 hadoop-metrics.properties
-rw-r--r-- 1 hadoop hadoop  9683 Mar 24  2016 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop   867 Feb 14 22:20 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop  1449 Mar 24  2016 httpfs-env.sh
-rw-r--r-- 1 hadoop hadoop  1657 Mar 24  2016 httpfs-log4j.properties
-rw-r--r-- 1 hadoop hadoop    21 Mar 24  2016 httpfs-signature.secret
-rw-r--r-- 1 hadoop hadoop   620 Mar 24  2016 httpfs-site.xml
-rw-r--r-- 1 hadoop hadoop  3523 Mar 24  2016 kms-acls.xml
-rw-r--r-- 1 hadoop hadoop  1611 Mar 24  2016 kms-env.sh
-rw-r--r-- 1 hadoop hadoop  1631 Mar 24  2016 kms-log4j.properties
-rw-r--r-- 1 hadoop hadoop  5511 Mar 24  2016 kms-site.xml
-rw-r--r-- 1 hadoop hadoop 11291 Mar 24  2016 log4j.properties
-rw-r--r-- 1 hadoop hadoop   938 Mar 24  2016 mapred-env.cmd
-rw-r--r-- 1 hadoop hadoop  1383 Mar 24  2016 mapred-env.sh
-rw-r--r-- 1 hadoop hadoop  4113 Mar 24  2016 mapred-queues.xml.template
-rw-r--r-- 1 hadoop hadoop   758 Mar 24  2016 mapred-site.xml.template
-rw-r--r-- 1 hadoop hadoop    10 Mar 24  2016 slaves
-rw-r--r-- 1 hadoop hadoop  2316 Mar 24  2016 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop  2268 Mar 24  2016 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop  2237 Mar 24  2016 yarn-env.cmd
-rw-r--r-- 1 hadoop hadoop  4567 Mar 24  2016 yarn-env.sh
-rw-r--r-- 1 hadoop hadoop   690 Mar 24  2016 yarn-site.xml

3. Map the IP address to the hostname

vi /etc/hosts

192.168.1.100 hadoop      # keep the first two lines of /etc/hosts (the localhost entries); do not delete them
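Before starting HDFS it is worth confirming that the name really resolves to the address you put in /etc/hosts. A small check, shown with localhost so it runs anywhere; on the real box you would pass hadoop:

```shell
# Print the first address a hostname resolves to, the same way the daemons see it.
resolve() {
  getent hosts "$1" | awk '{print $1; exit}'
}

resolve localhost     # on the configured box, 'resolve hadoop' should print 192.168.1.100
```

If `resolve hadoop` prints 127.0.0.1, the daemons will again bind to loopback only, and remote nodes will not be able to reach them.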

4. Configure core-site.xml    # this file drives the namenode process


  cd    /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop  


vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>   <!-- change localhost to hadoop -->
    </property>
</configuration>

Tip: in production and in practice alike, do not deploy by IP address; deploy by a unified machine hostname.

5. Edit slaves   # this file drives the datanode processes:

[hadoop@hadoop hadoop]$ vi slaves

hadoop                # for a cluster with several nodes, list each hostname (hadoop, hadoop1, hadoop2, ...) on its own line

6. Configure the secondarynamenode   # this drives the secondarynamenode process:

See the official configuration reference:

 

Ctrl+F, search for "secondary", and copy the parameter names and values into hdfs-site.xml.


vi hdfs-site.xml


<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop:50091</value>
    </property>
</configuration>

7. Restart Hadoop


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/start-dfs.sh  


19/02/17 15:37:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
The authenticity of host   'hadoop (127.0.0.1)' can't be established.  
RSA key fingerprint is f5:3a:0b:3d:c6:ce:a2:e2:87:1c:e6:55:71:b1:aa:31.
Are you sure you want to continue connecting (yes/no)? ^Chadoop: Host key verification failed.

The cause: passwordless SSH trust for the hadoop user has not been configured.

8. Reconfigure SSH trust for the hadoop user


 cd /home/hadoop/ 


rm -rf .ssh
ssh-keygen
cd .ssh
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys    # sshd ignores an authorized_keys file with looser permissions
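The permissions step matters: sshd silently ignores an authorized_keys file that is group- or world-writable, and the symptom is that ssh keeps prompting for a password. A sketch of the same steps in a throwaway directory (hypothetical paths; on the real machine this is ~/.ssh):

```shell
# Rebuild the trust files in a temporary directory to see what gets created.
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q      # no passphrase, as in the setup above
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
stat -c '%a' "$tmp/authorized_keys"              # should print 600
rm -rf "$tmp"
```

Afterwards, `ssh hadoop date` should log in without a password; if it still prompts, also check that ~/.ssh itself is mode 700.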

9. Start Hadoop again


  [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh  


19/02/17 15:53:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]       # now started under the hostname hadoop
hadoop: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop.out
hadoop: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [hadoop]
hadoop: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop.out
19/02/17 15:53:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

10. Check with jps that all processes are up


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ jps 


7288 NameNode
7532 SecondaryNameNode
7389 DataNode
7742 Jps

That completes the configuration.


The truth about the jps command

2.1 Where does jps live?


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   which jps  


 /usr/java/jdk1.8.0_45/bin/jps  
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   jps  
7926 Jps
7288 NameNode
7532 SecondaryNameNode
7389 DataNode
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   ps -ef | grep hadoop  
root      4889  4751  0 14:32 pts/0    00:00:00 su - hadoop
hadoop    4890  4889  0 14:32 pts/0    00:00:00 -bash
root      7121  6047  0 15:51 pts/1    00:00:00 su - hadoop
hadoop    7122  7121  0 15:51 pts/1    00:00:00 -bash
hadoop    7288     1  1 15:53 ?        00:00:09 /usr/java/jdk1

Conclusion: the processes jps shows are the same ones `ps -ef | grep hadoop` shows; ps -ef just gives more detail.

2.2 Where are the per-process marker files? /tmp/hsperfdata_<username of the process owner>


 [hadoop@hadoop hsperfdata_hadoop]$ pwd 


/tmp/hsperfdata_hadoop   # the directory is named hsperfdata_<username>, not hsperfdata_<hostname>
[hadoop@hadoop hsperfdata_hadoop]$ ll
total 96
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7288    # the file names are pids; they match the jps output below
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7389
-rw------- 1 hadoop hadoop 32768 Feb 17 16:11 7532
[hadoop@hadoop hsperfdata_hadoop]$ jps
8072 Jps
7288 NameNode                 # all three files have content
7532 SecondaryNameNode
7389 DataNode

2.3 What other users see from jps

root sees the jps results of every user.

An ordinary user only sees their own.

2.4 Does "process information unavailable" in jps output mean Hadoop is down?

Simulate it: kill the processes as root

kill -9 1378 1210 1086

At this point jps still lists them as "process information unavailable", even though the processes have already exited; switching to another session as the hadoop user, jps shows nothing at all.

Tip: if entries linger after a kill, you can delete the hsperfdata directory outright; it is recreated on the next start.


 [root@hadoop002 tmp]# rm -rf hsperfdata_hadoop 


[root@hadoop002 tmp]#
[root@hadoop002 tmp]# jps
1906 Jps

Summary, and a small trap: the HDFS components are installed as the hdfs user, so if a script collects information as root, do not blindly conclude from "process information unavailable" that a process is broken; verify with ps -ef instead.

The real test: `ps -ef | grep namenode` is what truly tells you whether the process is alive. Do not go by the pid, because a pid may only reflect stale file contents; grep for the process name (namenode here) instead.
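That check is easy to wrap in a function. A sketch (in practice you would grep for namenode; here a sleep stands in for the daemon so the example is self-contained):

```shell
# Liveness by process *name* from ps -ef, not by pid from jps:
# a stale hsperfdata file can make a dead pid still show up in jps.
is_running() {
  ps -ef | grep -v grep | grep -q "$1"
}

sleep 30 >/dev/null 2>&1 &     # stand-in for the namenode JVM
pid=$!
is_running "sleep 30" && echo alive
kill "$pid"
```

Usage on the real box would be along the lines of `is_running namenode || echo "namenode is down"`.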


 [root@hadoop002 ~]# jps 


1520 Jps
1378 -- process information unavailable
1210 -- process information unavailable
1086 -- process information unavailable

Why does a kill happen?

1. A person did it.

2. The Linux OOM killer chose the process as the largest memory consumer and killed it automatically.

3. The pid files: deleting them by mistake leaves Hadoop unable to stop cleanly


 [hadoop@hadoop tmp]$ pwd 


/tmp
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:56 hadoop-hadoop-datanode.pid
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:56 hadoop-hadoop-namenode.pid
-rw-rw-r-- 1 hadoop hadoop    5 Feb 16 20:57 hadoop-hadoop-secondarynamenode.pid

Under /tmp there are three pid files, but Linux periodically cleans the /tmp directory (typically on a cycle of about 30 days). The Hadoop processes themselves are still there and keep running normally, but when you later try to shut Hadoop down, you find it cannot be stopped.

Starting, on the other hand, still works.

This leaves a real problem: although a restart appears to succeed, the namenode is still the old process rather than a fresh one, which is wrong.

Fix: change the configuration so the pid files are stored in a different directory.


mkdir /data/tmp      # create a fresh directory for the pid files
chmod -R 777 /data/tmp
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
vi hadoop-env.sh
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=/data/tmp      # was: export HADOOP_PID_DIR=${HADOOP_PID_DIR}

Alternative: change the /tmp cleanup rules so that files which must survive are excluded from deletion.

Why the pid file at all?

When starting and stopping, Hadoop relies on hadoop-daemon.sh in the sbin directory.

`cat /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/stop-dfs.sh` shows that stop-dfs.sh calls the hadoop-daemon.sh script.

Restart with start-dfs.sh:

On start, daemons that are already running are not started a second time; only the ones that are down get started.
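The mechanics can be sketched in a few lines. This is a simplified model of what hadoop-daemon.sh does, not the real script (which also detects stale pids):

```shell
piddir=$(mktemp -d)                 # stand-in for HADOOP_PID_DIR
pidfile="$piddir/hadoop-demo.pid"

# start: launch the daemon and remember its pid
sleep 60 >/dev/null 2>&1 &
echo $! > "$pidfile"

# stop: the only link back to the process is this file; if /tmp cleanup
# removed it, the running daemon can no longer be stopped.
if [ -f "$pidfile" ]; then
  kill "$(cat "$pidfile")" && echo "stopped pid $(cat "$pidfile")"
else
  echo "no pid file: cannot stop the daemon"
fi
rm -rf "$piddir"
```

This is also why start still "works" after the pid file is gone: start writes a new file and launches a new JVM, while the old, untracked one keeps running.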

4. Deploying YARN on a single node

MapReduce: runs on YARN and does the computing; jobs reach YARN as submitted jar packages, so MapReduce itself does not need to be deployed.

YARN: handles resource and job scheduling, and does need to be deployed.

Deploy YARN on a single node:


 [hadoop@hadoop hadoop]$  cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop  


[hadoop@hadoop hadoop]$   cp mapred-site.xml.template  mapred-site.xml  
[hadoop@hadoop hadoop]$   vi mapred-site.xml  
 <configuration>  
     <property>  
         <name>mapreduce.framework.name</name>  
         <value>yarn</value>  
     </property>  
 </configuration>  
[hadoop@hadoop hadoop]$   vi yarn-site.xml  
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
   <property>  
         <name>yarn.nodemanager.aux-services</name>  
         <value>mapreduce_shuffle</value>  
     </property>  
</configuration>

That completes the deployment.

The YARN daemons:

ResourceManager daemon: the boss; manages the cluster's resources

NodeManager daemon: the workers; one manager per node

Start YARN


 [hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0  


[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$   sbin/start-yarn.sh  
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop.out   # the YARN log directory
hadoop: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop.out
[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ jps     # check that the YARN processes came up
9520 ResourceManager
9617 NodeManager
7288 NameNode
9996 Jps
7532 SecondaryNameNode
7389 DataNode

Open the YARN web console in a browser:

http://192.168.1.100:8088/

Tip: log tracking and analysis

tail -200f hadoop-hadoop-datanode-hadoop002.log, then restart the process in another window to reproduce the error.

The .out file (hadoop-hadoop-datanode-hadoop002.out) can be ignored.

Alternatively, upload the log with rz to Windows and inspect it in EditPlus, keeping a backup.

Log file naming: hadoop-<user>-<process name>-<machine name>

5. Run a MapReduce job


  cd   /home/hadoop/app/hadoop-2.6.0-cdh5.7.0  


[hadoop@hadoop hadoop-2.6.0-cdh5.7.0]$ find ./ -name '*example*.jar'    # locate a runnable examples jar
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-sources.jar
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-test-sources.jar
 ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar  
./share/hadoop/mapreduce1/hadoop-examples-2.6.0-mr1-cdh5.7.0.jar
Run the MR job:
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
hadoop    # run with no arguments to list the available subcommands, like a help page

Take a look at the bundled classic examples:

hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 5 10     # runs the pi program, much like calling a procedure in Oracle
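What the pi example computes can be imitated locally: throw random points at the unit square and see what fraction lands inside the quarter circle. A rough awk sketch of the idea (the example jar itself uses a quasi-Monte Carlo method, not plain rand()):

```shell
awk 'BEGIN {
  srand(1); n = 100000; hits = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x * x + y * y <= 1) hits++   # point fell inside the quarter circle
  }
  # the area ratio is pi/4, so the estimate is 4 * hits / n
  printf "%.2f\n", 4 * hits / n
}'
```

With 100,000 points the printed estimate lands close to 3.14; the arguments 5 and 10 in the job above play the same role (number of map tasks and samples per map).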

From the outside it looks like map runs first and reduce afterwards, but in fact reduce work already begins while the map phase is still running.

map: mapping

reduce: reduction (aggregation)

The error encountered during this run:

Solution:

Modified to:

Word-count (word frequency) example


[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi a.log    # create file a.log


ruoze
jepson
dashu
adai
fanren
1
a
b
c
a b c ruoze jepon
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi b.txt    # create file b.txt
a b d e f ruoze
1 1 3 5
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -mkdir /wordcount            # create the working directory
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -mkdir /wordcount/input      # create the input directory
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -put a.log /wordcount/input  # upload file a.log
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -put b.txt /wordcount/input  # upload file b.txt
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -ls /wordcount/input/        # list the uploaded files
Found 2 items
-rw-r--r--   1 hadoop supergroup         76 2019-02-16 21:59 /wordcount/input/a.log
-rw-r--r--   1 hadoop supergroup         24 2019-02-16 21:59 /wordcount/input/b.txt

# Run the job; a trailing \ continues the command on the next line

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hadoop jar \
./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar \
wordcount /wordcount/input /wordcount/output1        # output1 is the chosen output directory; output2 would do as well (it must not already exist)

Tip: when you are not sure what arguments a program takes, run it without any.

For example: hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar wordcount
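What wordcount does can be reproduced locally with coreutils, which makes a handy sanity check against the MapReduce output. A sketch over two sample lines from the input files above:

```shell
# Split on whitespace, count identical words, and print word<TAB>count
# (the same shape as part-r-00000 below).
printf 'a b c ruoze jepon\na b d e f ruoze\n' \
  | tr -s ' ' '\n' | sort | uniq -c \
  | awk '{print $2 "\t" $1}'
```

The `sort | uniq -c` pair plays the role of the shuffle and reduce phases: identical keys are brought together, then counted.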

# Check the results


 [hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$   hdfs dfs -cat /wordcount/output1/part-r-00000  


19/02/16 22:05:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1       3
3       1
5       1
a       3
adai    1
b       3
c       2
d       1
dashu   1
e       1
f       1
fanren  1
jepon   1
jepson  1
ruoze   3
      1
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ hdfs dfs -get /wordcount/output1/part-r-00000 ./   # download for easier viewing; the original file lives on HDFS, not on the local Linux filesystem
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ cat part-r-00000
1       3
3       1
5       1
a       3
adai    1
b       3
c       2
d       1
dashu   1
e       1
f       1
fanren  1
jepon   1
jepson  1
ruoze   3
      1
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$



From "ITPUB博客", original link: http://blog.itpub.net/28339956/viewspace-2640192/. Please credit the source when reposting.
