Installing a hadoop-2.6 cluster on Linux

Hadoop · by kinglin_zy · 2015-01-13 12:25:46
Environment:
IP address: 192.168.1.247   hostname: tong1
            192.168.1.248   hostname: tong2
            192.168.1.249   hostname: tong3
OS: CentOS 6.4, 64-bit
Software: hadoop-2.6.0.tar.gz

1. Configure the network environment
The steps are identical on tong1, tong2 and tong3:

[root@tong1 ~]# hostname                 --set the hostname on each node
tong1
[root@tong1 ~]# cat /etc/hosts     --every node's hostname goes into the hosts file so the names resolve
192.168.1.247 tong1
192.168.1.248 tong2
192.168.1.249 tong3

[root@tong1 ~]#
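The hosts entries can also be appended with a small idempotent script instead of editing by hand. The sketch below is a rough illustration that writes to a demo copy (`hosts.demo`) so it can be run safely anywhere; on a real node you would change `HOSTS_FILE` to `/etc/hosts` and run it as root. The IP/hostname pairs are the ones used in this article.

```shell
#!/bin/sh
# Append the cluster's host entries without creating duplicates.
# HOSTS_FILE is a demo copy; on a real node use /etc/hosts (as root).
HOSTS_FILE=./hosts.demo
touch "$HOSTS_FILE"
for entry in "192.168.1.247 tong1" "192.168.1.248 tong2" "192.168.1.249 tong3"; do
    # grep -qF skips entries that are already present, so re-running is safe
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```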

2. Create the hadoop user and group on every node
The steps are identical on tong1, tong2 and tong3:
[root@tong1 ~]# groupadd -g 1000 hadoop               --create the hadoop group and user on every node
[root@tong1 ~]# useradd -u 1000 -g 1000 hadoop
[root@tong1 ~]# passwd hadoop            --set a password for the hadoop user
 
3. Set up passwordless SSH trust between the nodes for the hadoop user
Generate a key pair as the hadoop user on tong1, tong2 and tong3:
[root@tong1 ~]# su - hadoop
[hadoop@tong1 ~]$ ssh-keygen -t rsa       
[hadoop@tong1 ~]$ cd ~/.ssh

[hadoop@tong1 .ssh]$ ll
total 16
-rw-------. 1 hadoop hadoop 1675 Jan  6 10:28 id_rsa
-rw-r--r--. 1 hadoop hadoop  394 Jan  6 10:28 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop 1983 Jan  8 11:14 known_hosts
[hadoop@tong1 .ssh]$ scp tong2:~/.ssh/id_rsa.pub  tong2    --fetch tong2's public key onto tong1 (saved as a file named tong2)
id_rsa.pub                                                                   100%  394     0.4KB/s   00:00   
[hadoop@tong1 .ssh]$ scp tong3:~/.ssh/id_rsa.pub  tong3     --fetch tong3's public key onto tong1 (saved as a file named tong3)
id_rsa.pub                                                                   100%  394     0.4KB/s   00:00   
[hadoop@tong1 .ssh]$ cat tong2  tong3 id_rsa.pub >> authorized_keys    --collect all three nodes' public keys into authorized_keys
[hadoop@tong1 .ssh]$ scp authorized_keys  tong2:~/.ssh/      --push authorized_keys into each node's ~/.ssh directory
authorized_keys                                                          100%  394     0.4KB/s   00:00   
[hadoop@tong1 .ssh]$ scp authorized_keys  tong3:~/.ssh/
authorized_keys                                                          100%  394     0.4KB/s   00:00   
[hadoop@tong1 .ssh]$ ssh tong1 date        --verify that the three nodes now trust each other over ssh
Thu Jan  8 13:48:45 CST 2015
[hadoop@tong1 .ssh]$ ssh tong2 date
Thu Jan  8 13:49:13 CST 2015
[hadoop@tong1 .ssh]$ ssh tong3 date
Thu Jan  8 22:30:32 CST 2015
[hadoop@tong1 .ssh]$
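The merge step above is easy to get wrong: sshd silently ignores an authorized_keys file with permissive modes. The sketch below demonstrates just the merge against throwaway files in a temp directory (the key material is fake), so it is safe to execute anywhere; on the real cluster the three files live in `~/.ssh` and come from the scp commands above.

```shell
#!/bin/sh
# Demo of the authorized_keys merge on a throwaway directory.
# The key strings are placeholders, not real keys.
set -e
demo=$(mktemp -d)
echo "ssh-rsa AAAA...2 hadoop@tong2" > "$demo/tong2"        # copied from tong2
echo "ssh-rsa AAAA...3 hadoop@tong3" > "$demo/tong3"        # copied from tong3
echo "ssh-rsa AAAA...1 hadoop@tong1" > "$demo/id_rsa.pub"   # local key
cat "$demo/tong2" "$demo/tong3" "$demo/id_rsa.pub" > "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"   # sshd ignores key files with loose permissions
wc -l < "$demo/authorized_keys"     # one line per node
```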

4. Download and install the JDK, then verify the installation
[root@tong1 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
[root@tong1 ~]# tar xvf jdk-8u25-linux-x64.tar.gz
[root@tong1 ~]# mv  jdk1.8.0_25 /usr/local/
[root@tong1 local]# chown -R hadoop:hadoop jdk1.8.0_25/
[root@tong1 local]# vim /home/hadoop/.bash_profile       --set the environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_25
export JAR_HOME=/usr/local/jdk1.8.0_25/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAR_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH     # the $HADOOP_HOME entries are added in step 5
export PATH
[root@tong1 local]# . /home/hadoop/.bash_profile    --使环境变量生效
[root@tong1 local]# java -version     --check that java is installed correctly
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
[root@tong1 local]#

5. Download and install hadoop-2.6.0
[root@tong1 ~]# wget http://mirrors.hust.edu.cn/apache/hadoop/core/stable2/hadoop-2.6.0.tar.gz
[root@tong1 ~]# tar xvf hadoop-2.6.0.tar.gz
[root@tong1 ~]# mv hadoop-2.6.0 /usr/local/
[root@tong1 ~]# cd /usr/local/
[root@tong1 local]# chown -R hadoop:hadoop hadoop-2.6.0/
[root@tong1 local]# ll hadoop-2.6.0/
total 56
drwxr-xr-x. 2 hadoop hadoop  4096 Nov 14 05:20 bin
drwxr-xr-x. 3 hadoop hadoop  4096 Nov 14 05:20 etc
drwxr-xr-x. 2 hadoop hadoop  4096 Nov 14 05:20 include
drwxr-xr-x. 3 hadoop hadoop  4096 Nov 14 05:20 lib
drwxr-xr-x. 2 hadoop hadoop  4096 Nov 14 05:20 libexec
-rw-r--r--. 1 hadoop hadoop 15429 Nov 14 05:20 LICENSE.txt
drwxrwxr-x. 2 hadoop hadoop  4096 Jan  8 14:13 logs
-rw-r--r--. 1 hadoop hadoop   101 Nov 14 05:20 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Nov 14 05:20 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Nov 14 05:20 sbin

drwxr-xr-x. 4 hadoop hadoop  4096 Nov 14 05:20 share
[root@tong1 local]# su - hadoop

[hadoop@tong1 ~]$ vim ~/.bash_profile       --add hadoop's executables to the PATH
export JAVA_HOME=/usr/local/jdk1.8.0_25
export JAR_HOME=/usr/local/jdk1.8.0_25/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAR_HOME/lib
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export HBASE_HOME=/usr/local/hbase-0.98.9-hadoop2
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:/usr/local/hbase-0.98.9-hadoop2/bin
export PATH
[hadoop@tong1 ~]$ .  ~/.bash_profile
[hadoop@tong1 ~]$ hadoop         --tab completion lists the hadoop commands, so the variables are in effect
hadoop             hadoop.cmd         hadoop-daemon.sh   hadoop-daemons.sh 

 
6. Edit the hadoop configuration files
[root@tong1 local]# cd hadoop-2.6.0/etc/hadoop/
[root@tong1 hadoop]# vim core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://tong1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoopdata</value>   <!-- where hadoop keeps its data files -->
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
  </property>
</configuration>

[root@tong1 hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_25/     --the java install directory
[root@tong1 hadoop]# vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>tong1:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoopdata/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoopdata/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

[root@tong1 hadoop]# vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>tong1:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>tong1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>tong1:19888</value>
  </property>
</configuration>
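A quick way to sanity-check an edited *-site.xml is to pull a value back out with grep/sed. The sketch below is an illustration that runs against a minimal inline copy of hdfs-site.xml so it needs no cluster; on a real node you would point it at the actual file, e.g. /usr/local/hadoop-2.6.0/etc/hadoop/hdfs-site.xml.

```shell
#!/bin/sh
# Extract the <value> that follows a given <name> in a *-site.xml file.
# Demonstrated on a minimal inline copy of hdfs-site.xml.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
EOF
# grep -A1 prints the <name> line plus the <value> line that follows it;
# sed then strips the XML tags around the value.
replication=$(grep -A1 '<name>dfs.replication</name>' "$cfg" \
              | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
echo "dfs.replication = $replication"
```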


[root@tong1 hadoop]# cd /usr/local/
[root@tong1 local]# scp -r jdk1.8.0_25  tong2:/usr/local/      --copy the JDK and hadoop trees to the other nodes (-r is required for directories)
[root@tong1 local]# scp -r jdk1.8.0_25  tong3:/usr/local/

[root@tong1 local]# scp -r hadoop-2.6.0  tong2:/usr/local/
[root@tong1 local]# scp -r hadoop-2.6.0  tong3:/usr/local/
  
7. Configure the master/slave roles (management node and data nodes)
tong1 node (management node):
[root@tong1 local]# cd hadoop-2.6.0/etc/hadoop/
[root@tong1 hadoop]# vim masters
tong1
[root@tong1 hadoop]# vim slaves
tong2
tong3
[root@tong1 hadoop]#

tong2 and tong3 nodes (data nodes):
[root@tong2 ~]# cd /usr/local/hadoop-2.6.0/etc/hadoop/
[root@tong2 hadoop]# vim masters
tong1
[root@tong2 hadoop]# vim slaves
tong2
tong3
[root@tong2 hadoop]# 
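The masters and slaves files are plain newline-separated host lists, so they can also be written in one shot instead of via vim. The sketch below writes demo copies into a temp directory so it runs anywhere; on the real nodes the target directory is /usr/local/hadoop-2.6.0/etc/hadoop.

```shell
#!/bin/sh
# Write the masters/slaves host lists in one shot.
# dir is a demo directory; on the real nodes use
# /usr/local/hadoop-2.6.0/etc/hadoop instead.
set -e
dir=$(mktemp -d)
printf 'tong1\n' > "$dir/masters"            # management node
printf 'tong2\ntong3\n' > "$dir/slaves"      # data nodes, one per line
cat "$dir/slaves"
```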
  
8. Start the services and check the processes
tong1 node:
[root@tong1 hadoop]# su - hadoop
[hadoop@tong1 ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/01/08 14:13:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [tong1]
tong1: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-namenode-tong1.out
tong3: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-tong3.out
tong2: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-tong2.out
15/01/08 14:13:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-tong1.out
tong2: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-tong2.out
tong3: starting nodemanager, logging to /usr/local/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-tong3.out
[hadoop@tong1 ~]$ jps
7767 ResourceManager
7899 Jps
7518 NameNode
[hadoop@tong1 ~]$

tong2 node:
[root@tong2 hadoop]# su - hadoop
[hadoop@tong2 ~]$ jps
15192 NodeManager
15625 Jps
6107 DataNode
[hadoop@tong2 ~]$

tong3 node:
[root@tong3 ~]# su - hadoop
[hadoop@tong3 ~]$ jps
28721 NodeManager
28897 Jps
2957 DataNode
[hadoop@tong3 ~]$
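Checking every node's jps output by eye scales poorly; the expected daemon names can be grepped for instead. The sketch below is an illustration that runs against the tong1 sample output captured above so it executes without a cluster; on a live node you would replace the hard-coded string with `jps_out=$(jps)`.

```shell
#!/bin/sh
# Check that the expected daemons appear in jps output.
# jps_out is the tong1 sample from this article; on a live node
# use: jps_out=$(jps)
jps_out="7767 ResourceManager
7899 Jps
7518 NameNode"
status=ok
for daemon in NameNode ResourceManager; do
    if echo "$jps_out" | grep -q "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: MISSING"
        status=fail
    fi
done
echo "overall: $status"
```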
  
9. Check the NameNode status in a browser
In Hadoop 2.x the NameNode web UI listens on port 50070 by default and the YARN ResourceManager UI on port 8088, so from any machine that can resolve the hostnames, open http://tong1:50070/ and http://tong1:8088/.

From "ITPUB博客", link: http://blog.itpub.net/781883/viewspace-1400087/. Please credit the source when reposting.
