Study Notes 3: Installing the Hadoop-Based Cloudera CDH3 Platform (Reposted)

Hadoop | Author: a2851407 | 2013-12-03 18:49:02
Reposted from: http://blog.itpub.net/354732/viewspace-720985
Cloudera is a complete-solution vendor for the open-source Hadoop stack, offering software and services built on Apache Hadoop. Last November it raised US$40 million in venture capital, and Oracle, Dell, and others have announced partnerships with Cloudera. IBM, Amazon, Microsoft, and others have also joined the Hadoop club and released their own Hadoop-as-a-Service offerings, so this approach looks set to become a mainstream part of cloud computing platforms. The following are the steps for installing Hadoop from CDH3 on RedHat 5. (CDH3: Cloudera's Distribution including Apache Hadoop, Version 3)

1. Hostnames and IP addresses of the three hosts (in /etc/hosts):
#vi /etc/hosts
172.16.130.136  masternode
172.16.130.137  slavenode1
172.16.130.138  slavenode2

2. Configure SSH for the root user
On masternode:
#ssh-keygen -t rsa
#cat /root/.ssh/id_rsa.pub >>/root/.ssh/authorized_keys
On each slavenode:
#ssh-keygen -t rsa
Copy authorized_keys to the slavenode hosts, for example as sketched below.
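One possible way to copy the key (a sketch; it assumes root password login is still enabled at this point, and any other copy method works just as well):
#scp /root/.ssh/authorized_keys root@slavenode1:/root/.ssh/
#scp /root/.ssh/authorized_keys root@slavenode2:/root/.ssh/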

Test passwordless login:
masternode:
 #ssh slavenode1
 #ssh slavenode2

3. Download the Cloudera repository package from:
http://archive.cloudera.com/redhat/cdh/cdh3-repository-1.0-1.noarch.rpm
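For example, the package can be fetched on each node with wget (a sketch; any download method works):
#wget http://archive.cloudera.com/redhat/cdh/cdh3-repository-1.0-1.noarch.rpm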

4. Run the following on every node:
#sudo yum --nogpgcheck localinstall cdh3-repository-1.0-1.noarch.rpm

5. Install the Hadoop core package on every node:
# yum search hadoop 
# sudo yum install hadoop-0.20

6. On masternode, install the namenode and jobtracker packages:
#sudo yum install hadoop-0.20-namenode
#sudo yum install hadoop-0.20-jobtracker

7. On the slavenode hosts, install the datanode and tasktracker packages:
#sudo yum install hadoop-0.20-datanode
#sudo yum install hadoop-0.20-tasktracker

8. Configure the cluster (on masternode)
#sudo cp -r /etc/hadoop-0.20/conf.empty /etc/hadoop-0.20/conf.my_cluster
Add your own configuration, then register it with alternatives:
#sudo alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster 50
Activate the custom configuration:
#sudo alternatives --set hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster
Display the active configuration:
#sudo alternatives --display hadoop-0.20-conf
Remove the configuration (only if you no longer need it):
#sudo alternatives --remove hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster

9. Edit /etc/hadoop-0.20/conf/core-site.xml (the default NameNode port is 8020)

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://masternode/</value>
  </property>
</configuration>

10. Edit /etc/hadoop-0.20/conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

11. Edit /etc/hadoop-0.20/conf/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>masternode:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map and
    reduce task.</description>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local</value>
  </property>
</configuration>

12. Edit /etc/hadoop-0.20/conf/masters and slaves
Add masternode to the masters file.
Add slavenode1 and slavenode2 to the slaves file, so the two files read as shown below.
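For reference, each file simply lists one hostname per line:

masters:
masternode

slaves:
slavenode1
slavenode2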

13. Create the directories referenced in the configuration
masternode:
#sudo mkdir -p /data/1/dfs/nn /data/2/dfs/nn
#sudo chown -R hdfs:hadoop /data/1/dfs/nn /data/2/dfs/nn
#sudo chmod 700 /data/1/dfs/nn /data/2/dfs/nn
slavenode:
#sudo mkdir -p /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn
#sudo mkdir -p /data/1/mapred/local /data/2/mapred/local /data/3/mapred/local
#sudo chown -R hdfs:hadoop /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn 
#sudo chown -R mapred:hadoop /data/1/mapred/local /data/2/mapred/local /data/3/mapred/local

14. Package the conf.my_cluster configuration directory and distribute it to each slavenode, for example as sketched below.
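One possible way to distribute the configuration (a sketch; any copy method works):
#cd /etc/hadoop-0.20
#tar czf conf.my_cluster.tar.gz conf.my_cluster
#scp conf.my_cluster.tar.gz root@slavenode1:/etc/hadoop-0.20/
#scp conf.my_cluster.tar.gz root@slavenode2:/etc/hadoop-0.20/
Then unpack the archive under /etc/hadoop-0.20 on each slavenode.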

15. As in step 8, register and activate the configuration on each slavenode (commands repeated below).
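On each slavenode this repeats the alternatives commands from step 8:
#sudo alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster 50
#sudo alternatives --set hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster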

16. Format the NameNode on masternode:
#sudo -u hdfs hadoop namenode -format

17. Start the HDFS daemons
masternode:
#sudo service hadoop-0.20-namenode start
slavenode:
#sudo service hadoop-0.20-datanode start
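To check that the DataNodes have registered with the NameNode, a capacity report can be printed (a quick sanity check):
#sudo -u hdfs hadoop dfsadmin -report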

18. Create the HDFS directories
#sudo -u hdfs hadoop fs -mkdir /tmp
#sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
#sudo -u hdfs hadoop fs -mkdir /mapred/system
#sudo -u hdfs hadoop fs -chown mapred:hadoop /mapred/system
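A quick way to confirm the directories and their ownership is to list the HDFS root:
#sudo -u hdfs hadoop fs -ls /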

19. Start the MapReduce daemons
masternode:
#sudo service hadoop-0.20-jobtracker start
slavenode:
#sudo service hadoop-0.20-tasktracker start

20. Configure the daemons to start automatically at boot
masternode:
#sudo chkconfig hadoop-0.20-namenode on
#sudo chkconfig hadoop-0.20-jobtracker on
slavenode:
#sudo chkconfig hadoop-0.20-datanode on
#sudo chkconfig hadoop-0.20-tasktracker on
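To double-check the boot settings, chkconfig can list the configured runlevels (a quick check):
#chkconfig --list | grep hadoop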