Setting Up a Hadoop Cluster on CentOS 7
Table of Contents
- 1. Concepts
- 1.1 Master-Slave Architecture
- 1.2 Hadoop Cluster Role Names
- 2. Pre-installation Preparation
- 2.1 Software Prerequisites
- 2.2 Hadoop Cluster Server Planning
- 3. Installation
- 3.1 Extracting the Files
- 3.2 Editing the Configuration Files
- 3.2.1 Editing `hadoop-env.sh`
- 3.2.2 Editing `core-site.xml`
- 3.2.3 Editing `hdfs-site.xml`
- 3.2.4 Editing `mapred-site.xml`
- 3.2.5 Editing `yarn-site.xml`
- 3.2.6 Editing `slaves`
- 3.3 Adding Hadoop Environment Variables
- 3.4 Cloning the Hadoop Master
- 3.5 Initializing Hadoop
- 3.6 Starting Hadoop
- 3.6.1 Starting HDFS
- 3.6.2 Starting YARN
- 3.6.3 Checking That the Services Started
- 4. Opening the HDFS and YARN Web UIs
- 5. Common Hadoop Commands
1. Concepts

1.1 Master-Slave Architecture

A master-slave architecture is one in which some nodes in the cluster act as master servers while the remaining nodes act as slaves. It generally comes in two forms: one master with many slaves, or many masters with many slaves.

Both HDFS and YARN in Hadoop are master-slave architectures. The pattern goes by several names: master node and slave node, master-slave, manager-worker, leader-follower.

1.2 Hadoop Cluster Role Names

The roles played by each component in a Hadoop cluster:
| Component | Master role | Worker role |
| --- | --- | --- |
| HDFS | NameNode | DataNode |
| YARN | ResourceManager | NodeManager |
2. Pre-installation Preparation

2.1 Software Prerequisites

1. A CentOS 7 virtual machine.
2. A JDK installed on the VM; if you do not have one yet, see the earlier post 《CentOS7安装JDK1.8简单体验(java开发必备)》.
3. The Hadoop installation archive. This walkthrough uses hadoop-2.7.3.tar.gz, which can be downloaded from the Apache Hadoop release archive.
2.2 Hadoop Cluster Server Planning

| VM | IP address | Hostname | User | HDFS roles | YARN roles |
| --- | --- | --- | --- | --- | --- |
| Hadoop-Master | 192.168.223.131 | hadoop-master | root | NameNode, DataNode | ResourceManager, NodeManager |
| Hadoop-Slave1 | 192.168.223.128 | hadoop-slave1 | root | SecondaryNameNode, DataNode | NodeManager |
| Hadoop-Slave2 | 192.168.223.129 | hadoop-slave2 | root | DataNode | NodeManager |
| Hadoop-Slave3 | 192.168.223.130 | hadoop-slave3 | root | DataNode | NodeManager |
Planned installation directory: /usr/local/hadoop/apps

Planned data directory: /usr/local/hadoop/data

Note: create these directories yourself, or substitute paths of your own, as shown below.
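A minimal way to create both planned directories in one step (adjust the paths if you chose different ones):

```bash
# Create the planned installation and data directories
mkdir -p /usr/local/hadoop/apps /usr/local/hadoop/data
```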
3. Installation

3.1 Extracting the Files

First copy the archive into /usr/local/hadoop/apps, then extract it:

```
[root@hadoop-master apps]# cd /usr/local/hadoop/apps/
[root@hadoop-master apps]# ls
hadoop-2.7.3.tar.gz
[root@hadoop-master apps]# tar zxvf hadoop-2.7.3.tar.gz
```

3.2 Editing the Configuration Files
The configuration files live under /usr/local/hadoop/apps/hadoop-2.7.3/etc/hadoop/:

```
[root@hadoop-master apps]# cd /usr/local/hadoop/apps/hadoop-2.7.3/etc/hadoop/
[root@hadoop-master hadoop]# ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-server.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  yarn-env.cmd
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml.template    yarn-env.sh
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      slaves                      yarn-site.xml
hadoop-env.cmd          hdfs-site.xml
```

3.2.1 Editing `hadoop-env.sh`
Open hadoop-env.sh and set the JDK environment variable:

```
[root@hadoop-master hadoop]# vi hadoop-env.sh
```

By default the value falls back to the system environment variable ${JAVA_HOME}, though you can also set it explicitly:

```
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME}
```

Since this machine already had the JDK environment variable configured, I initially left it unchanged:

```
[root@hadoop-master hadoop]# echo $JAVA_HOME
/usr/local/jdk/jdk1.8.0_261
```

Note: that was too optimistic. You must put the absolute JDK path here, or startup fails — the start scripts launch the daemons over SSH in non-interactive shells that do not necessarily inherit $JAVA_HOME.

The correct configuration is the following change:
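The original post showed the fix as a screenshot; reconstructed here from the JDK path echoed above:

```bash
# In hadoop-env.sh: replace the ${JAVA_HOME} fallback with the absolute path
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_261
```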
3.2.2 Editing `core-site.xml`

First open the core-site.xml configuration file:

```
[root@hadoop-master hadoop]# vim core-site.xml
```

The parameters to configure:

fs.defaultFS: the HDFS URI used to communicate with the NameNode. It can be a host plus port, or the name of a NameNode service (a service that internally contains multiple NameNodes to provide an HA NameNode).

hadoop.tmp.dir: the directory where the cluster stores various temporary files while it works.
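The resulting XML appeared as an image in the original post. Below is a sketch consistent with the plan above — the hadoop-master host comes from the server planning, while the 9000 port and the hadoopdata subdirectory are assumptions (any consistent choices work):

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- NameNode host per the plan; port 9000 is an assumed, commonly used value -->
        <value>hdfs://hadoop-master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- working directory under the planned data dir -->
        <value>/usr/local/hadoop/data/hadoopdata</value>
    </property>
</configuration>
```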
3.2.3 Editing `hdfs-site.xml`

```
[root@hadoop-master hadoop]# vi hdfs-site.xml
```

dfs.namenode.name.dir: where the NameNode keeps its data — that is, where the NameNode metadata lives, recording the metadata of every file in the HDFS file system.

dfs.datanode.data.dir: where each DataNode keeps its data — that is, the directory that holds the blocks.

dfs.replication: the HDFS replication factor. When a file is uploaded and split into blocks, this is the number of redundant copies kept of each block; the default is 3.

dfs.secondary.http.address: where the SecondaryNameNode runs — a different node from the NameNode.
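The XML itself was also an image in the original. Here is a sketch that matches what is visible later in the post — the name directory and the replication factor of 2 both show up in the format log below, and the plan puts the SecondaryNameNode on hadoop-slave1; the data directory name and the 50090 port are assumed defaults:

```xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <!-- NameNode metadata; matches the path in the format log -->
        <value>/usr/local/hadoop/data/hadoopdata/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <!-- block storage; directory name assumed -->
        <value>/usr/local/hadoop/data/hadoopdata/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <!-- matches "defaultReplication = 2" in the format log -->
        <value>2</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <!-- SecondaryNameNode node per the plan; 50090 is the usual default port -->
        <value>hadoop-slave1:50090</value>
    </property>
</configuration>
```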
3.2.4 Editing `mapred-site.xml`

```
[root@hadoop-master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop-master hadoop]# vi mapred-site.xml
```

mapreduce.framework.name: sets the MapReduce framework to yarn; second-generation Hadoop MapReduce runs on the YARN resource management system.

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

3.2.5 Editing `yarn-site.xml`
```
[root@hadoop-master hadoop]# vi yarn-site.xml
```

yarn.resourcemanager.hostname: the IPC address of the YARN ResourceManager.

yarn.nodemanager.aux-services: the auxiliary services that run on each NodeManager. This must be set to mapreduce_shuffle for MapReduce programs to run; the default is "".
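A reconstruction of the yarn-site.xml shown as an image in the original, built from the two parameters just described (the ResourceManager host follows the server plan):

```xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <!-- ResourceManager runs on the master per the plan -->
        <value>hadoop-master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <!-- required so the MapReduce shuffle can run on NodeManagers -->
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```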
3.2.6 Editing `slaves`

List every worker node in the slaves file:

```
[root@hadoop-master hadoop]# vi slaves
hadoop-master
hadoop-slave1
hadoop-slave2
hadoop-slave3
```

3.3 Adding Hadoop Environment Variables
Add the Hadoop environment variables to /etc/profile.

Move the cursor to the end of the file and append the following:

```bash
export HADOOP_HOME=/usr/local/hadoop/apps/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Reload the profile and check the Hadoop version:

```
[root@hadoop-master hadoop]# source /etc/profile
[root@hadoop-master hadoop]# hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop/apps/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
```

3.4 Cloning the Hadoop Master
Clone Hadoop-Master three times, as:

Hadoop-Slave1, Hadoop-Slave2, Hadoop-Slave3

Change the hostnames:

Set Hadoop-Slave1's hostname to hadoop-slave1, and Hadoop-Slave2's hostname to hadoop-slave2:

```
[root@hadoop-master ~]# hostnamectl set-hostname hadoop-slave2
[root@hadoop-master ~]# hostname
hadoop-slave2
```

Set Hadoop-Slave3's hostname to hadoop-slave3:

```
[root@hadoop-master ~]# hostnamectl set-hostname hadoop-slave3
[root@hadoop-master ~]# hostname
hadoop-slave3
```

Then add the cluster IP addresses to /etc/hosts on every machine:

```
[root@hadoop-master ~]# vi /etc/hosts
192.168.223.131 hadoop-master
192.168.223.128 hadoop-slave1
192.168.223.129 hadoop-slave2
192.168.223.130 hadoop-slave3
```

For setting up passwordless SSH between the hosts, see 《CentOS7虚拟机之间设置免密登录》; a brief sketch follows.
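In outline, passwordless login amounts to generating a key pair on the master and copying the public key to every node (a sketch of the standard procedure, not the referenced post's exact steps):

```bash
# On hadoop-master: generate an RSA key pair (accept the default prompts)
ssh-keygen -t rsa
# Copy the public key to every node in the cluster, including the master itself
for host in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
    ssh-copy-id root@$host
done
```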
3.5 Initializing Hadoop

Note: the HDFS format must be run on the master node only.

```
[root@hadoop-master hadoop]# hadoop namenode -format
```

The result of the format:
```
[root@hadoop-master hadoop]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

20/08/08 13:23:37 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop-master/192.168.223.131
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /usr/local/hadoop/apps/hadoop-2.7.3/etc/hadoop:... (several thousand characters of jar paths trimmed) ...:/usr/local/hadoop/apps/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_261
************************************************************/
20/08/08 13:23:37 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/08/08 13:23:37 INFO namenode.NameNode: createNameNode [-format]
20/08/08 13:23:38 WARN common.Util: Path /usr/local/hadoop/data/hadoopdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
20/08/08 13:23:38 WARN common.Util: Path /usr/local/hadoop/data/hadoopdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-fce3113f-05f1-4de1-bc28-6927ce8d03ca
20/08/08 13:23:38 INFO namenode.FSNamesystem: No KeyProvider found.
20/08/08 13:23:38 INFO namenode.FSNamesystem: fsLock is fair:true
20/08/08 13:23:38 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20/08/08 13:23:38 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/08/08 13:23:38 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/08/08 13:23:38 INFO blockmanagement.BlockManager: The block deletion will start around 2020 八月 08 13:23:38
20/08/08 13:23:38 INFO util.GSet: Computing capacity for map BlocksMap
20/08/08 13:23:38 INFO util.GSet: VM type       = 64-bit
20/08/08 13:23:38 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
20/08/08 13:23:38 INFO util.GSet: capacity      = 2^21 = 2097152 entries
20/08/08 13:23:38 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/08/08 13:23:38 INFO blockmanagement.BlockManager: defaultReplication         = 2
20/08/08 13:23:38 INFO blockmanagement.BlockManager: maxReplication             = 512
20/08/08 13:23:38 INFO blockmanagement.BlockManager: minReplication             = 1
20/08/08 13:23:38 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
20/08/08 13:23:38 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/08/08 13:23:38 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
20/08/08 13:23:38 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
20/08/08 13:23:38 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
20/08/08 13:23:38 INFO namenode.FSNamesystem: supergroup          = supergroup
20/08/08 13:23:38 INFO namenode.FSNamesystem: isPermissionEnabled = true
20/08/08 13:23:38 INFO namenode.FSNamesystem: HA Enabled: false
20/08/08 13:23:38 INFO namenode.FSNamesystem: Append Enabled: true
20/08/08 13:23:39 INFO util.GSet: Computing capacity for map INodeMap
20/08/08 13:23:39 INFO util.GSet: VM type       = 64-bit
20/08/08 13:23:39 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
20/08/08 13:23:39 INFO util.GSet: capacity      = 2^20 = 1048576 entries
20/08/08 13:23:39 INFO namenode.FSDirectory: ACLs enabled? false
20/08/08 13:23:39 INFO namenode.FSDirectory: XAttrs enabled? true
20/08/08 13:23:39 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
20/08/08 13:23:39 INFO namenode.NameNode: Caching file names occuring more than 10 times
20/08/08 13:23:39 INFO util.GSet: Computing capacity for map cachedBlocks
20/08/08 13:23:39 INFO util.GSet: VM type       = 64-bit
20/08/08 13:23:39 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
20/08/08 13:23:39 INFO util.GSet: capacity      = 2^18 = 262144 entries
20/08/08 13:23:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/08/08 13:23:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/08/08 13:23:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/08/08 13:23:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/08/08 13:23:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/08/08 13:23:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/08/08 13:23:39 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/08/08 13:23:39 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/08/08 13:23:39 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/08/08 13:23:39 INFO util.GSet: VM type       = 64-bit
20/08/08 13:23:39 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
20/08/08 13:23:39 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/08/08 13:23:39 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1166270090-192.168.223.131-1596864219273
20/08/08 13:23:39 INFO common.Storage: Storage directory /usr/local/hadoop/data/hadoopdata/name has been successfully formatted.
20/08/08 13:23:39 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/data/hadoopdata/name/current/fsimage.ckpt_0000000000000000000 using no compression
20/08/08 13:23:39 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/data/hadoopdata/name/current/fsimage.ckpt_0000000000000000000 of size 351 bytes saved in 0 seconds.
20/08/08 13:23:39 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/08/08 13:23:39 INFO util.ExitUtil: Exiting with status 0
20/08/08 13:23:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/192.168.223.131
************************************************************/
```

3.6 Starting Hadoop
3.6.1 Starting HDFS

HDFS can be started from any node in the cluster:

```
[root@hadoop-master ~]# start-dfs.sh
Starting namenodes on [hadoop-master]
root@hadoop-master's password:
hadoop-master: starting namenode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop-master.out
root@hadoop-master's password:
hadoop-slave1: starting datanode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-slave1.out
hadoop-slave2: starting datanode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-slave2.out
hadoop-slave3: starting datanode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-slave3.out
hadoop-master: starting datanode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop-master.out
Starting secondary namenodes [hadoop-slave1]
hadoop-slave1: starting secondarynamenode, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop-slave1.out
[root@hadoop-master ~]#
```

After much struggle it finally started. Along the way the script prompted for a password twice; at first I did not realize it was waiting for input and just sat there, then killed the processes and repeated the whole exercise several times. Here is a quick one-liner to kill all the Hadoop processes:
```
ps -ef | grep hadoop | grep -v grep | cut -c 9-15 | xargs kill -s 9
```

3.6.2 Starting YARN
Note: YARN can only be started on the master node.

```
[root@hadoop-master ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop-master.out
root@hadoop-master's password:
hadoop-slave2: starting nodemanager, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-slave2.out
hadoop-slave3: starting nodemanager, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-slave3.out
hadoop-slave1: starting nodemanager, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-slave1.out
hadoop-master: starting nodemanager, logging to /usr/local/hadoop/apps/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop-master.out
[root@hadoop-master ~]#
```

3.6.3 Checking That the Services Started
On hadoop-master:

```
[root@hadoop-master ~]# jps
8737 Jps
8003 DataNode
8317 ResourceManager
7870 NameNode
8606 NodeManager
```

On hadoop-slave1:

```
[root@hadoop-slave1 hadoop]# jps
6055 NodeManager
5913 SecondaryNameNode
6185 Jps
5818 DataNode
```

On hadoop-slave2:

```
[root@hadoop-slave2 hadoop]# jps
4922 NodeManager
5068 Jps
4751 DataNode
```

On hadoop-slave3:

```
[root@hadoop-slave3 hadoop]# jps
5184 Jps
4868 DataNode
5038 NodeManager
```

4. Opening the HDFS and YARN Web UIs
HDFS: http://192.168.223.131:50070
YARN: http://192.168.223.131:8088

At this point I could see only one node in the HDFS UI.
Searching online, the gist of the explanations was: machines cloned directly from the same image share the same storageID, which keeps causing errors and only one DataNode shows up.

See for example: hadoop集群通过web管理界面只显示一个节点,datanode只启动一个

But — I tried nearly every fix I could find, spent more than three hours without solving it, and was close to giving up when a thought struck me: the VM firewalls were still running. Was the firewall blocking the communication the cluster nodes need with each other?

Try stopping the firewall on the VMs:
```
systemctl stop firewalld.service
```

And then the miracle happened: all the nodes appeared.
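Note that `systemctl stop` only lasts until the next reboot. To keep the firewall from silently breaking the cluster again, you would typically also disable the service on every node (standard systemctl usage; the original post only shows the stop command):

```bash
# Run on every node: stop firewalld now and keep it off across reboots
systemctl stop firewalld.service
systemctl disable firewalld.service
```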
Looking at that result, I did not know whether to laugh or cry. Let this be a lesson for the future.
5. Common Hadoop Commands
1. start-all.sh — start all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
2. stop-all.sh — stop all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
3. start-dfs.sh — start the Hadoop HDFS daemons: NameNode, SecondaryNameNode, and DataNode
4. stop-dfs.sh — stop the Hadoop HDFS daemons: NameNode, SecondaryNameNode, and DataNode
5. hadoop-daemons.sh start namenode — start only the NameNode daemon
6. hadoop-daemons.sh stop namenode — stop only the NameNode daemon
7. hadoop-daemons.sh start datanode — start only the DataNode daemon
8. hadoop-daemons.sh stop datanode — stop only the DataNode daemon
9. hadoop-daemons.sh start secondarynamenode — start only the SecondaryNameNode daemon
10. hadoop-daemons.sh stop secondarynamenode — stop only the SecondaryNameNode daemon
11. start-mapred.sh — start the Hadoop MapReduce daemons: JobTracker and TaskTracker
12. stop-mapred.sh — stop the Hadoop MapReduce daemons: JobTracker and TaskTracker
13. hadoop-daemons.sh start jobtracker — start only the JobTracker daemon
14. hadoop-daemons.sh stop jobtracker — stop only the JobTracker daemon
15. hadoop-daemons.sh start tasktracker — start only the TaskTracker daemon
16. hadoop-daemons.sh stop tasktracker — stop only the TaskTracker daemon

(JobTracker, TaskTracker, and the start-mapred.sh/stop-mapred.sh scripts date from Hadoop 1.x MRv1; on the Hadoop 2.x cluster built here, MapReduce jobs run on YARN instead.)