- Part 1: Environment configuration
- Part 2: System environment initialization
- Part 3: Installing and configuring Hadoop
- Part 4: Environment testing
### Part 1: Environment configuration

OS: CentOS 6.4 x64. Software: hadoop-2.5.2.tar.gz, hadoop-native-2.5.2.tar.gz, jdk-7u67-linux-x64.tar.gz.

Upload all of the packages to /home/hadoop/yangyang/.

Host mapping for the three nodes:
```
192.168.3.1 master.hadoop.com
192.168.3.2 slave1.hadoop.com
192.168.3.3 slave2.hadoop.com
```
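These entries need to be present in /etc/hosts on every node. A minimal sketch of appending them and sanity-checking resolution (assumes root access on each host; the heredoc content mirrors the table above):

```bash
# Append the cluster host mapping to /etc/hosts (run as root on each node).
cat >> /etc/hosts <<'EOF'
192.168.3.1 master.hadoop.com
192.168.3.2 slave1.hadoop.com
192.168.3.3 slave2.hadoop.com
EOF

# Quick sanity check: each name should resolve to the address above.
ping -c 1 master.hadoop.com
```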
### Part 2: System environment initialization

Configure master.hadoop.com as the NTP server; the other two nodes will sync from it. On master.hadoop.com, sync the clock from the Internet and add the sync command to the boot sequence:

```bash
# ntpdate -u 202.112.10.36
# echo "ntpdate -u 202.112.10.36" >> /etc/rc.d/rc.local
# vim /etc/ntp.conf
```

Uncomment the two required lines in ntp.conf, then edit the daemon options:

```bash
# vim /etc/sysconfig/ntpd
```

Add the required option, then restart ntpd and enable it at boot:

```bash
# service ntpd restart
# chkconfig ntpd on
```
On slave1.hadoop.com and slave2.hadoop.com, set up a cron job that syncs time from master.hadoop.com every 10 minutes:

```bash
crontab -e
*/10 * * * * /usr/sbin/ntpdate master.hadoop.com
```
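A quick way to confirm the setup before relying on cron (a sketch; run the first command on either slave and the second on the master, with ntpd on the master already up):

```bash
# One-off manual sync against the master; prints the adjustment applied.
/usr/sbin/ntpdate master.hadoop.com

# On the master itself, ntpq shows whether upstream peers are reachable.
/usr/sbin/ntpq -p
```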
The same crontab entry goes on both slave1.hadoop.com and slave2.hadoop.com.

Install the JDK (the tarball extracts to jdk1.7.0_67):

```bash
tar -zxvf jdk-7u67-linux-x64.tar.gz
mv jdk1.7.0_67 jdk
```

Configure the environment variables by appending the following to .bash_profile (`vim .bash_profile`):

```bash
export JAVA_HOME=/home/hadoop/yangyang/jdk
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hadoop/yangyang/hadoop
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${HADOOP_HOME}/bin
```
Once all of the software has been deployed, reload the profile and verify:

```bash
source .bash_profile
java -version
```
Generate SSH keys with `ssh-keygen`, pressing Enter at every prompt (same on all three servers).

On slave1 and slave2:

```bash
cd ~/.ssh
scp id_rsa.pub hadoop@192.168.3.1:/home/hadoop/.ssh/slave1.pub   # on slave1
scp id_rsa.pub hadoop@192.168.3.1:/home/hadoop/.ssh/slave2.pub   # on slave2
```

On master:

```bash
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat slave1.pub >> authorized_keys
cat slave2.pub >> authorized_keys
chmod 600 authorized_keys
scp authorized_keys hadoop@slave1.hadoop.com:/home/hadoop/.ssh/
scp authorized_keys hadoop@slave2.hadoop.com:/home/hadoop/.ssh/
```
Test:
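A minimal passwordless-login check, assuming the keys were distributed as above (run from master; each command should print the remote hostname without asking for a password):

```bash
# No password prompt should appear for any of these.
ssh slave1.hadoop.com hostname
ssh slave2.hadoop.com hostname
ssh master.hadoop.com hostname   # master also logs into itself during start-dfs.sh
```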
### Part 3: Installing and configuring Hadoop

3.1 Install Hadoop and prepare the configuration files

```bash
tar -zxvf hadoop-2.5.2.tar.gz
mv hadoop-2.5.2 hadoop
cd /home/hadoop/yangyang/hadoop/etc/hadoop
```

3.2 Replace the native library files

```bash
rm -rf lib/native/*
tar -zxvf hadoop-native-2.5.2.tar.gz -C hadoop/lib/native
cd hadoop/lib/native/
```

Edit the core-site.xml file:
```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master.hadoop.com:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/yangyang/hadoop/data</value>
        <!-- hadoop_temp -->
    </property>
</configuration>
```
Edit the hdfs-site.xml file:
```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master.hadoop.com:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave2.hadoop.com:50090</value>
    </property>
</configuration>
```
Edit the mapred-site.xml file (in Hadoop 2.5.2 this file is created by copying mapred-site.xml.template):
```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>slave2.hadoop.com:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>slave2.hadoop.com:19888</value>
    </property>
</configuration>
```
Edit the yarn-site.xml file:
```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>slave1.hadoop.com</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <!-- 604800 seconds = 7 days -->
        <value>604800</value>
    </property>
</configuration>
```
Edit the hadoop-env.sh file:
```bash
export JAVA_HOME=/home/hadoop/yangyang/jdk
export HADOOP_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp
export HADOOP_SECURE_DN_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp
```
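It does no harm to make sure the PID directory referenced above exists on every node before starting any daemons (a sketch; the path is taken from the settings above):

```bash
# Create the directory the daemons will write their PID files into.
mkdir -p /home/hadoop/yangyang/hadoop/data/tmp
```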
Edit the mapred-env.sh file:
```bash
export JAVA_HOME=/home/hadoop/yangyang/jdk
export HADOOP_MAPRED_PID_DIR=/home/hadoop/yangyang/hadoop/data/tmp
```
Edit the yarn-env.sh file:
```bash
export JAVA_HOME=/home/hadoop/yangyang/jdk
```
Edit the slaves file:
```
master.hadoop.com
slave1.hadoop.com
slave2.hadoop.com
```
3.3 Sync everything to the other nodes (slave1 and slave2)
```bash
cd /home/hadoop/yangyang/
tar -zcvf hadoop.tar.gz hadoop
scp hadoop.tar.gz hadoop@192.168.3.2:/home/hadoop/yangyang/
scp hadoop.tar.gz hadoop@192.168.3.3:/home/hadoop/yangyang/
```
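The archive still has to be unpacked on each slave; a sketch of doing that from master over SSH (hostnames and paths as above):

```bash
# Unpack the synced archive in place on both slaves.
for host in 192.168.3.2 192.168.3.3; do
    ssh hadoop@${host} "cd /home/hadoop/yangyang && tar -zxvf hadoop.tar.gz"
done
```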
3.4 Format the HDFS filesystem
On master.hadoop.com, run:

```bash
cd hadoop/bin/
./hdfs namenode -format
```

3.5 Start HDFS

On master.hadoop.com, run:

```bash
cd hadoop/sbin/
./start-dfs.sh
```

3.6 Start YARN (start-yarn.sh)
On slave1.hadoop.com (the ResourceManager node), run:

```bash
cd hadoop/sbin/
./start-yarn.sh
```

3.7 Start the job history service:
On slave2.hadoop.com (where mapred-site.xml places the history server), run:

```bash
cd hadoop/sbin/
./mr-jobhistory-daemon.sh start historyserver
```

3.8 Check the running processes against the role assignment on the master.hadoop.com, slave1.hadoop.com, and slave2.hadoop.com hosts, as sketched below.
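A sketch of that check with jps; the expected daemon set per host follows from the configuration above (all three hosts are listed in slaves, the ResourceManager is on slave1, and the SecondaryNameNode and JobHistoryServer are on slave2):

```bash
# Run on each host; the Java daemons listed should match its role.
jps
# master.hadoop.com : NameNode, DataNode, NodeManager
# slave1.hadoop.com : ResourceManager, DataNode, NodeManager
# slave2.hadoop.com : SecondaryNameNode, JobHistoryServer, DataNode, NodeManager
```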
### Part 4: Environment testing
Check the web UIs: HDFS on master.hadoop.com (port 50070), YARN on slave1.hadoop.com, and the job history on slave2.hadoop.com (port 19888). Then test and verify the Hadoop environment itself: create a directory, upload a file, and run the wordcount example, as sketched below.
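A minimal end-to-end check, assuming the cluster is running and $HADOOP_HOME is set as in .bash_profile above (the /input and /output paths here are arbitrary):

```bash
# Create a directory in HDFS and upload a file to count.
hdfs dfs -mkdir -p /input
hdfs dfs -put /etc/hosts /input

# Run the bundled wordcount example job.
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar \
    wordcount /input /output

# Inspect the result.
hdfs dfs -cat /output/part-r-00000
```

Reposted from: https://blog.51cto.com/flyfish225/2096361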