Hadoop Basics: Environment Setup
1 Steps
- Hardware preparation
- Downloads
- Deployment
2 Distributed Cluster Deployment
2.1 Hardware Preparation
This guide uses three servers (a learning setup only), named Hadoop102, Hadoop103, and Hadoop104; allocate more resources to each machine if available.
Note: you can build one template machine and clone it for the others.
2.2 Downloads
Resource list:
Apache downloads: Apache Distribution Directory
- JDK 1.8
- Hadoop 3.2.3
2.3 Deployment
Prerequisite: make sure the servers and resources above are fully prepared.
It is recommended to disable each server's firewall (otherwise you must configure inbound and outbound rules) so that clients can reach the required ports.
Planned placement of the cluster services (per the configuration below: NameNode on Hadoop102, ResourceManager on Hadoop103, SecondaryNameNode on Hadoop104, with a DataNode and NodeManager on every host):
2.3.1 Server Configuration
Perform the following on every server (Hadoop102, Hadoop103, Hadoop104).
Hostname configuration:
Confirm the hostname matches the plan:
[my@Hadoop102 ~]$ hostname
Hadoop102
If it does not, edit /etc/hostname and enter the desired name (takes effect after a reboot):
[my@Hadoop102 ~]$ sudo vi /etc/hostname
Hadoop102
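If all three machines are cloned from one template, the renaming step can be scripted. A minimal sketch, assuming the IP-to-hostname plan of this guide (`map_ip_to_host` is a hypothetical helper, not part of Hadoop):

```shell
# map_ip_to_host: print the planned hostname for a given IP address
# (mapping taken from this guide's cluster plan).
map_ip_to_host() {
    case "$1" in
        192.168.10.102) echo Hadoop102 ;;
        192.168.10.103) echo Hadoop103 ;;
        192.168.10.104) echo Hadoop104 ;;
        *) echo "unknown ip: $1" >&2; return 1 ;;
    esac
}

# On a freshly cloned machine (run as root):
#   hostnamectl set-hostname "$(map_ip_to_host "$(hostname -I | awk '{print $1}')")"
map_ip_to_host 192.168.10.103    # prints "Hadoop103"
```

`hostnamectl set-hostname` also rewrites /etc/hostname, so no reboot is needed on systemd distributions.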
Configure hostname resolution
Edit /etc/hosts (so the cluster hostnames resolve to the right IPs) and append the Hadoop host list:
[my@Hadoop102 ~]$ sudo vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Hadoop cluster hosts
192.168.10.102 Hadoop102
192.168.10.103 Hadoop103
192.168.10.104 Hadoop104
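To confirm the entries took effect, `getent hosts` consults the same resolver path (/etc/hosts first, by default) that the cluster daemons will use. A small sketch (`check_hosts` is a hypothetical helper, not a standard tool):

```shell
# check_hosts: report whether each given hostname resolves.
# Returns non-zero if any name is missing.
check_hosts() {
    rc=0
    for h in "$@"; do
        if getent hosts "$h" > /dev/null; then
            echo "ok: $h"
        else
            echo "missing: $h"
            rc=1
        fi
    done
    return $rc
}

# On a configured cluster host:
#   check_hosts Hadoop102 Hadoop103 Hadoop104
```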
Disable the firewall
Switch to the root user:
[my@Hadoop102 ~]$ su - root
Stop the firewall:
[root@Hadoop102 ~]# systemctl stop firewalld
Check its status:
[root@Hadoop102 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Disable it at boot:
[root@Hadoop102 ~]# systemctl disable firewalld
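Disabling firewalld is fine for a lab. If you would rather keep it running, you can open just the ports this guide configures (8020 NameNode RPC, 9870 NameNode web UI, 9868 SecondaryNameNode web UI, plus 8088 for the default YARN ResourceManager web UI; DataNode and other inter-daemon ports would also be needed, which is why a lab setup usually just turns the firewall off). A sketch, with `FW` overridable so the commands can be previewed:

```shell
# open_ports: add permanent firewalld rules for each TCP port, then
# reload.  FW defaults to printing the commands; set FW=firewall-cmd
# and run as root to apply them for real.
FW=${FW:-echo firewall-cmd}

open_ports() {
    for port in "$@"; do
        $FW --permanent --add-port="${port}/tcp"
    done
    $FW --reload
}

open_ports 8020 9870 9868 8088
```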
2.3.2 Installing the JDK
Install on Hadoop102.
Upload the JDK archive
Open an sftp session to Hadoop102 and upload the file:
sftp> lcd D:\soft\Hadoop
sftp> lls
hadoop-3.2.3.tar.gz  jdk-8u321-linux-x64.tar.gz
sftp> cd /opt/Hadoop/
sftp> ls
hadoop-3.2.3
sftp> put jdk-8u321-linux-x64.tar.gz
Uploading jdk-8u321-linux-x64.tar.gz to /opt/Hadoop/jdk-8u321-linux-x64.tar.gz
100% 480705KB 48070KB/s 00:00:10
D:/soft/Hadoop/jdk-8u321-linux-x64.tar.gz: 592241961 bytes transferred in 18 seconds (48070 KB/s)
sftp>
Install the JDK
Enter the directory containing the archive and extract it to the target path:
tar zxvf jdk-8u321-linux-x64.tar.gz -C /opt/java
Configure the Java environment variables:
[root@Hadoop102 java]# cd /etc/profile.d/
[root@Hadoop102 profile.d]# ls
256term.csh                   bash_completion.sh  colorls.csh  lang.csh  less.sh        vim.csh  which2.csh
256term.sh                    colorgrep.csh       colorls.sh   lang.sh   vim.sh         which2.sh
abrt-console-notification.sh  colorgrep.sh        flatpak.sh   less.csh  PackageKit.sh  vte.sh
Create myenv.sh under /etc/profile.d/ and add the variables (adjust JAVA_HOME if the path the tar command produced differs):
[root@Hadoop102 profile.d]# vi myenv.sh
#JAVA JDK environment variables
export JAVA_HOME=/opt/java/jdk1.8/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
Reload the environment:
[root@Hadoop102 profile.d]# source /etc/profile
Verify the installation; usage output like the following means java is on the PATH:
[root@Hadoop102 profile.d]# java
Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)
(remaining option listing omitted)
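Paging through the `java` usage text works, but a shorter check confirms both the binary and JAVA_HOME at once. A sketch (`check_java` is a hypothetical helper):

```shell
# check_java: verify that java is on the PATH and show where it and
# JAVA_HOME point, so a mismatch between the two is easy to spot.
check_java() {
    if ! command -v java > /dev/null 2>&1; then
        echo "java not found on PATH"
        return 1
    fi
    java -version 2>&1 | head -n 1
    echo "java binary: $(command -v java)"
    echo "JAVA_HOME:   ${JAVA_HOME:-<unset>}"
}

# check_java
```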
2.3.3 Installing Hadoop
Install on Hadoop102.
Upload the Hadoop archive
Open an sftp session to Hadoop102 and upload the file:
sftp> lcd D:\soft\Hadoop
sftp> lls
hadoop-3.2.3.tar.gz  jdk-8u321-linux-x64.tar.gz
sftp> cd /opt/Hadoop/
sftp> ls
hadoop-3.2.3
sftp> put hadoop-3.2.3.tar.gz
Uploading hadoop-3.2.3.tar.gz to /opt/Hadoop/hadoop-3.2.3.tar.gz
100% 480705KB 48070KB/s 00:00:10
D:/soft/Hadoop/hadoop-3.2.3.tar.gz: 492241961 bytes transferred in 10 seconds (48070 KB/s)
sftp>
Install Hadoop
Enter the directory containing the archive and extract it to the target path:
tar zxvf hadoop-3.2.3.tar.gz -C /opt/Hadoop
Change the owner of the Hadoop files:
chown -R my:my /opt/Hadoop
Configure the Hadoop environment variables by appending to the same /etc/profile.d/myenv.sh created earlier:
[root@Hadoop102 profile.d]# vi myenv.sh
#JAVA JDK environment variables
export JAVA_HOME=/opt/java/jdk1.8/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
#Hadoop environment variables
export HADOOP_HOME=/opt/Hadoop/hadoop-3.2.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload the environment:
[root@Hadoop102 profile.d]# source /etc/profile
Verify the installation; usage output like the following means hadoop is on the PATH:
[root@Hadoop102 profile.d]# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class
(remaining option and subcommand listing omitted)
2.3.4 Configuring the Hadoop Cluster
Configure on Hadoop102.
Edit the main configuration files (under etc/hadoop in the Hadoop install directory; the default values ship with the Hadoop source on the official site, and Hadoop falls back to those defaults for anything not overridden here):
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
- yarn-site.xml
- workers
Configure core-site.xml
[root@Hadoop102 hadoop]# pwd
/opt/Hadoop/hadoop-3.2.3/etc/hadoop
[root@Hadoop102 hadoop]# vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- NameNode address and internal RPC port -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Hadoop102:8020</value>
    </property>
    <!-- Base directory for HDFS data files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/Hadoop/hadoop-3.2.3/Datas</value>
    </property>
</configuration>
(fs.defaultFS is the current name of this property; fs.default.name is deprecated.)
Configure hdfs-site.xml
[root@Hadoop102 hadoop]# ls
capacity-scheduler.xml  core-site.xml  hadoop-env.sh  hdfs-site.xml  log4j.properties  mapred-site.xml  workers  yarn-site.xml  ...
[root@Hadoop102 hadoop]# vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Secondary NameNode (2nn) web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Hadoop104:9868</value>
    </property>
    <!-- NameNode (nn) web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>Hadoop102:9870</value>
    </property>
</configuration>
Configure mapred-site.xml
[root@Hadoop102 hadoop]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License header omitted -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- MapReduce execution framework; this guide uses yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure yarn-site.xml
[root@Hadoop102 hadoop]# vi yarn-site.xml
<?xml version="1.0"?>
<!-- Apache License header omitted -->
<configuration>
<!-- Site specific YARN configuration properties -->
    <!-- Host running the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Hadoop103</value>
    </property>
    <!-- Auxiliary service used by the NodeManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Take care that every XML comment opens with <!-- exactly; a typographic dash (<!—) makes the file unparseable and produces the startup error shown in section 3.1.1.
Configure the cluster worker list
The workers file lists the hostname or IP of each DataNode host; the start scripts use it to launch the Hadoop services across the cluster. Make sure it contains no trailing spaces or blank lines.
[root@Hadoop102 hadoop]# vi workers
Hadoop102
Hadoop103
Hadoop104
Configure passwordless SSH between cluster hosts
Generate an RSA key pair:
ssh-keygen -t rsa
Enter the .ssh directory in the user's home:
cd ~/.ssh
Copy the public key to the target hosts so that Hadoop102 can ssh to Hadoop103 and Hadoop104 without a password (the start scripts also ssh back into Hadoop102 itself, so copy the key there too):
ssh-copy-id -i id_rsa.pub my@Hadoop102
ssh-copy-id -i id_rsa.pub my@Hadoop103
ssh-copy-id -i id_rsa.pub my@Hadoop104
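Once the keys are in place, each hop can be verified non-interactively: with `BatchMode=yes`, ssh fails instead of prompting for a password. A sketch (`check_ssh` is a hypothetical helper):

```shell
# check_ssh: for each user@host target, run a no-op over ssh in batch
# mode; success means key-based login works, failure means it does not.
check_ssh() {
    for target in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$target" true 2> /dev/null; then
            echo "ok: $target"
        else
            echo "no passwordless login: $target"
        fi
    done
}

# check_ssh my@Hadoop102 my@Hadoop103 my@Hadoop104
```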
Distributing the cluster configuration
Copy the Hadoop installation and configuration from Hadoop102 to the other cluster hosts (Hadoop103 and Hadoop104; alternatively, repeat the same setup on each host by hand). Passwordless SSH must be configured per host following the section above.
For the first full copy, use scp.
For later incremental syncs, use rsync.
This is the first full sync, so scp is used.
Sync the Hadoop installation and configuration:
[root@Hadoop102 hadoop]# scp -r /opt/Hadoop my@Hadoop103:/opt
[root@Hadoop102 hadoop]# scp -r /opt/Hadoop my@Hadoop104:/opt
Sync the environment variables:
[root@Hadoop102 hadoop]# scp /etc/profile.d/myenv.sh root@Hadoop103:/etc/profile.d/
[root@Hadoop102 hadoop]# scp /etc/profile.d/myenv.sh root@Hadoop104:/etc/profile.d/
Change the file owner (keep it consistent on every host):
[root@Hadoop103 ~]# chown -R my:my /opt/Hadoop/
[root@Hadoop104 ~]# chown -R my:my /opt/Hadoop/
After syncing, check on each host that Hadoop is installed correctly (remember to source /etc/profile first).
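After the first full copy, most updates touch only a file or two, and typing the rsync line per host gets old. A small wrapper, sketched here as `xsync` (a hypothetical script, not part of Hadoop), pushes a path to the same location on every other host; `RSYNC` is overridable so the transfers can be previewed:

```shell
# xsync: copy a file or directory to the same path on every other
# cluster host with rsync.  Set RSYNC=echo to preview the transfers.
RSYNC=${RSYNC:-rsync -av}
HOSTS=${HOSTS:-"Hadoop103 Hadoop104"}

xsync() {
    path=$1
    dir=$(dirname "$path")
    for host in $HOSTS; do
        $RSYNC "$path" "$host:$dir/"
    done
}

# xsync /opt/Hadoop/hadoop-3.2.3/etc/hadoop/core-site.xml
```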
Start the Hadoop cluster services
Format HDFS (first start only; formatting again wipes the NameNode metadata):
[my@Hadoop102 hadoop-3.2.3]$ hdfs namenode -format
[my@Hadoop102 hadoop-3.2.3]$ ll
total 180
drwxr-xr-x. 2 my my    203 Mar 20 09:58 bin
drwxrwxr-x. 3 my my     17 Apr 17 18:20 Datas
drwxr-xr-x. 3 my my     20 Mar 20 09:20 etc
drwxr-xr-x. 2 my my    106 Mar 20 09:58 include
drwxr-xr-x. 3 my my     20 Mar 20 09:58 lib
drwxr-xr-x. 4 my my    288 Mar 20 09:58 libexec
-rw-rw-r--. 1 my my 150571 Mar 10 13:39 LICENSE.txt
drwxrwxr-x. 2 my my     35 Apr 17 18:20 logs
-rw-rw-r--. 1 my my  21943 Mar 10 13:39 NOTICE.txt
-rw-rw-r--. 1 my my   1361 Mar 10 13:39 README.txt
drwxr-xr-x. 3 my my   4096 Mar 20 09:20 sbin
drwxr-xr-x. 4 my my     31 Mar 20 10:17 share
Start the HDFS services:
[my@Hadoop102 ~]$ $HADOOP_HOME/sbin/start-dfs.sh
Starting namenodes on [Hadoop102]
Starting datanodes
Starting secondary namenodes [Hadoop104]
Start the YARN services (on Hadoop103, where the ResourceManager is planned):
[my@Hadoop103 ~]$ $HADOOP_HOME/sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
Check the running services (confirm each daemon landed where planned):
[my@Hadoop102 ~]$ jps
9328 Jps
8774 DataNode
9225 NodeManager
8653 NameNode
[my@Hadoop103 ~]$ jps
7989 ResourceManager
7670 DataNode
8104 NodeManager
8473 Jps
[my@Hadoop104 ~]$ jps
7968 NodeManager
7590 DataNode
7704 SecondaryNameNode
8104 Jps
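Running jps on each host by hand gets tedious; a common convenience is a small loop over ssh, sketched here as `jpsall` (a hypothetical script; `SSH` is overridable for previewing):

```shell
# jpsall: print the Java daemons running on every cluster host.
# Set SSH="echo ssh" to preview the commands instead of running them.
SSH=${SSH:-ssh}

jpsall() {
    for host in Hadoop102 Hadoop103 Hadoop104; do
        echo "===== $host ====="
        $SSH "$host" jps
    done
}

# jpsall
```

If jps is reported missing over ssh, the remote shell is non-login and skips /etc/profile; run `$SSH "$host" 'source /etc/profile && jps'` instead.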
Once started, open the HDFS web UI in a browser (http://Hadoop102:9870, per dfs.namenode.http-address) to verify the cluster is up.
3 Notes
3.1 Common Errors
3.1.1 Cluster configuration file errors
If one of the XML configuration files is malformed, startup aborts with an error naming the offending file, for example:
2022-04-17 18:17:48,721 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Hadoop102/192.168.10.102
************************************************************/
2022-04-17 18:17:48,767 ERROR conf.Configuration: error parsing conf mapred-site.xml
com.ctc.wstx.exc.WstxParsingException: Unexpected close tag </...>; expected </...>.
 at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
        at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:634)
        at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:504)
        at com.ctc.wstx.sr.BasicStreamReader.reportWrongEndElem(BasicStreamReader.java:3352)
        at com.ctc.wstx.sr.BasicStreamReader.readEndElem(BasicStreamReader.java:3279)
        at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3130)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3023)
        ...
Exception in thread "Thread-1" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag </...>; expected </...>.
 at [row,col,system-id]: [24,15,"file:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml"]
        ... 10 more
Fix the error in the configuration file in question, then sync the corrected file to every cluster host:
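Errors like this can be caught before startup by checking the files for well-formedness. A sketch using xmllint (from the common libxml2 tools, if installed; `validate_conf` is a hypothetical helper):

```shell
# validate_conf: check each XML file for well-formedness with xmllint.
# Prints one line per file and returns non-zero if any file is broken.
validate_conf() {
    rc=0
    for f in "$@"; do
        if xmllint --noout "$f" 2> /dev/null; then
            echo "ok: $f"
        else
            echo "malformed: $f"
            rc=1
        fi
    done
    return $rc
}

# validate_conf /opt/Hadoop/hadoop-3.2.3/etc/hadoop/*-site.xml
```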
[my@Hadoop102 hadoop-3.2.3]$ rsync -av /opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml Hadoop103:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/
my@hadoop103's password:
sending incremental file list
mapred-site.xml

sent 296 bytes  received 43 bytes  96.86 bytes/sec
total size is 900  speedup is 2.65
[my@Hadoop102 hadoop-3.2.3]$ rsync -av /opt/Hadoop/hadoop-3.2.3/etc/hadoop/mapred-site.xml Hadoop104:/opt/Hadoop/hadoop-3.2.3/etc/hadoop/
The authenticity of host 'hadoop104 (192.168.10.104)' can't be established.
ECDSA key fingerprint is SHA256:fE+HBwM03RQA+TNPrpQvWYHV46mYltvqrh9psMUXwos.
ECDSA key fingerprint is MD5:38:8a:6a:6c:a2:f9:43:2e:e4:99:58:53:aa:84:cc:13.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop104,192.168.10.104' (ECDSA) to the list of known hosts.
my@hadoop104's password:
sending incremental file list
mapred-site.xml

sent 296 bytes  received 43 bytes  61.64 bytes/sec
total size is 900  speedup is 2.65
Format HDFS again:
[my@Hadoop102 hadoop-3.2.3]$ hdfs namenode -format
2022-04-17 18:20:30,242 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Hadoop102/192.168.10.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.2.3
STARTUP_MSG:   classpath = /opt/Hadoop/hadoop-3.2.3/etc/hadoop:...
oop-3.2.3/share/hadoop/hdfs/lib/jetty-security-9.4.40.v20210413.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/ lib/snappy-jav a-1.0.5.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/Hadoop/hadoop-3.2.3/share/h adoop/hdfs/lib /kerb-server-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/Hadoop/hadoop-3 .2.3/share/had oop/hdfs/lib/commons-compress-1.21.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/ Hadoop/hadoop3.2.3/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/o pt/Hadoop/hado op-3.2.3/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jetty-uti l-ajax-9.4.40. v20210413.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/shar e/hadoop/hdfs/ lib/commons-daemon-1.0.13.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/Ha doop/hadoop-3. 2.3/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/commons-io-2.8. 0.jar:/opt/Had oop/hadoop-3.2.3/share/hadoop/hdfs/lib/jetty-servlet-9.4.40.v20210413.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop /hdfs/lib/avro -1.7.7.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/share/h adoop/hdfs/lib /commons-lang3-3.7.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/json-smart-2.4.7.jar:/opt/Hadoop/hadoop-3 .2.3/share/had oop/hdfs/lib/kerb-util-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/Hadoo p/hadoop-3.2.3 /share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jackson-jaxrs-1. 
9.13.jar:/opt/ Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jackson -core-2.10.5.j ar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdf s/lib/jsch-0.1 .55.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/h dfs/lib/protob uf-java-2.5.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/Hadoop/hadoop-3 .2.3/share/had oop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/error_prone_ann otations-2.2.0 .jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/jetty-io-9.4.40.v20210413.jar:/opt/Hadoop/hadoop-3.2.3/shar e/hadoop/hdfs/ lib/audience-annotations-0.5.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/Hadoop/ hadoop-3.2.3/s hare/hadoop/hdfs/lib/jetty-util-9.4.40.v20210413.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/lib/stax2-api-4 .2.1.jar:/opt/ Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-h dfs-3.2.3-test s.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-client-3.2.3-tests.jar:/opt/Hadoop/hadoop-3.2.3/sh are/hadoop/hdf s/hadoop-hdfs-nfs-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.3.jar:/opt/Hadoop/ha doop-3.2.3/sha re/hadoop/hdfs/hadoop-hdfs-native-client-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-httpf s-3.2.3.jar:/o pt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.3-tests.jar:/opt/Hadoop/hadoop-3.2.3/sha re/hadoop/hdfs /hadoop-hdfs-client-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.3-tests.jar:/opt/H adoop/hadoop-3 .2.3/share/hadoop/mapreduce/lib/junit-4.13.2.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/lib/hamcrest-c ore-1.3.jar:/o 
pt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.3.jar:/opt/Hadoop/hadoop-3. 2.3/share/hado op/mapreduce/hadoop-mapreduce-examples-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapredu ce-client-app3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.3.jar:/opt/Had oop/hadoop-3.2 .3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/ma preduce/hadoop -mapreduce-client-jobclient-3.2.3-tests.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-cl ient-uploader3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.3.jar:/opt/Hadoop/ha doop-3.2.3/sha re/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop -mapreduce-cli ent-common-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.3.jar: /opt/Hadoop/ha doop-3.2.3/share/hadoop/yarn:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/Hadoop/h adoop-3.2.3/sh are/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/json-io-2.5.1. jar:/opt/Hadoo p/hadoop-3.2.3/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/e hcache-3.3.1.j ar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/bcpkix-jdk15on-1.60.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop /yarn/lib/jack son-jaxrs-json-provider-2.10.5.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/Hadoo p/hadoop-3.2.3 /share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/jakarta.xml.bind-api2.3.2.jar:/opt /Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/li b/objenesis-1. 
0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/snakeyaml-1.26.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/y arn/lib/bcprov -jdk15on-1.60.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.10.5.jar:/op t/Hadoop/hadoo p-3.2.3/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/ya rn/lib/jackson -jaxrs-base-2.10.5.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/fst-2.50.jar:/opt/Hadoop/hadoop-3.2.3/sha re/hadoop/yarn /lib/mssql-jdbc-6.2.1.jre7.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/guice-4.0.jar:/opt/Hadoop/hadoop3.2.3/share/ha doop/yarn/lib/java-util-1.9.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/Ha doop/hadoop-3. 2.3/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/lib/jakarta.activati on-api-1.2.1.j ar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-services-api-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share /hadoop/yarn/h adoop-yarn-server-common-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-client-3.2.3.jar:/opt /Hadoop/hadoop -3.2.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/shar e/hadoop/yarn/ hadoop-yarn-submarine-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3 .2.3.jar:/opt/ Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-common-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/h adoop-yarn-reg istry-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.3.jar:/opt /Hadoop/hadoop -3.2.3/share/hadoop/yarn/hadoop-yarn-server-router-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoopyarn-server-ap plicationhistoryservice-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-api-3.2.3.jar:/opt/Had oop/hadoop-3.2 
.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-y arn-server-tes ts-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.3.jar:/opt/Hadoop/ha doop-3.2.3/sha re/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/h adoop-yarn-ser vices-core-3.2.3.jar:/opt/Hadoop/hadoop-3.2.3/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.3 .jar STARTUP_MSG: build = https://github.com/apache/hadoop -r abe5358143720085498613d399be3bbf01e0f131; compiled b y 'ubuntu' on 2022-03-20T01:18Z STARTUP_MSG: java = 1.8.0_321 ************************************************************/ 2022-04-17 18:20:30,255 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 2022-04-17 18:20:30,438 INFO namenode.NameNode: createNameNode [-format] Formatting using clusterid: CID-98655ee1-8960-41ad-bc6d-2bb95651b12a 2022-04-17 18:20:31,781 INFO namenode.FSEditLog: Edit logging is async:true 2022-04-17 18:20:31,851 INFO namenode.FSNamesystem: KeyProvider: null 2022-04-17 18:20:31,853 INFO namenode.FSNamesystem: fsLock is fair: true 2022-04-17 18:20:31,853 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: fsOwner = my (auth:SIMPLE) 2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: supergroup = supergroup 2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: isPermissionEnabled = true 2022-04-17 18:20:31,872 INFO namenode.FSNamesystem: HA Enabled: false 2022-04-17 18:20:31,952 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profi ling 2022-04-17 18:20:31,968 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, coun ted=60, effect ed=1000 2022-04-17 18:20:31,968 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-ch eck=true 2022-04-17 18:20:31,974 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00: 00.000 2022-04-17 18:20:31,981 INFO blockmanagement.BlockManager: The block deletion will start around 2022 四月 17 18:2 0:31 2022-04-17 18:20:31,985 INFO util.GSet: Computing capacity for map BlocksMap 2022-04-17 18:20:31,985 INFO util.GSet: VM type = 64-bit 2022-04-17 18:20:31,986 INFO util.GSet: 2.0% max memory 481.4 MB = 9.6 MB 2022-04-17 18:20:31,987 INFO util.GSet: capacity = 2^20 = 1048576 entries 2022-04-17 18:20:32,001 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled 2022-04-17 18:20:32,001 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false 2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990 000128746033 2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0 2022-04-17 18:20:32,011 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000 2022-04-17 18:20:32,012 INFO blockmanagement.BlockManager: defaultReplication = 3 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxReplication = 512 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: minReplication = 1 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: encryptDataTransfer = false 2022-04-17 18:20:32,013 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 2022-04-17 18:20:32,050 INFO namenode.FSDirectory: GLOBAL 
serial map: bits=29 maxEntries=536870911 2022-04-17 18:20:32,050 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215 2022-04-17 18:20:32,050 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215 2022-04-17 18:20:32,050 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215 2022-04-17 18:20:32,067 INFO util.GSet: Computing capacity for map INodeMap 2022-04-17 18:20:32,068 INFO util.GSet: VM type = 64-bit 2022-04-17 18:20:32,068 INFO util.GSet: 1.0% max memory 481.4 MB = 4.8 MB 2022-04-17 18:20:32,068 INFO util.GSet: capacity = 2^19 = 524288 entries 2022-04-17 18:20:32,069 INFO namenode.FSDirectory: ACLs enabled? false 2022-04-17 18:20:32,069 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true 2022-04-17 18:20:32,069 INFO namenode.FSDirectory: XAttrs enabled? true 2022-04-17 18:20:32,069 INFO namenode.NameNode: Caching file names occurring more than 10 times 2022-04-17 18:20:32,074 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccess TimeOnlyChange : false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536 2022-04-17 18:20:32,079 INFO snapshot.SnapshotManager: SkipList is disabled 2022-04-17 18:20:32,097 INFO util.GSet: Computing capacity for map cachedBlocks 2022-04-17 18:20:32,098 INFO util.GSet: VM type = 64-bit 2022-04-17 18:20:32,098 INFO util.GSet: 0.25% max memory 481.4 MB = 1.2 MB 2022-04-17 18:20:32,098 INFO util.GSet: capacity = 2^17 = 131072 entries 2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 2022-04-17 18:20:32,108 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 2022-04-17 18:20:32,112 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 2022-04-17 18:20:32,112 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache ent 
ry expiry time is 600000 millis 2022-04-17 18:20:32,120 INFO util.GSet: Computing capacity for map NameNodeRetryCache 2022-04-17 18:20:32,120 INFO util.GSet: VM type = 64-bit 2022-04-17 18:20:32,120 INFO util.GSet: 0.029999999329447746% max memory 481.4 MB = 147.9 KB 2022-04-17 18:20:32,120 INFO util.GSet: capacity = 2^14 = 16384 entries 2022-04-17 18:20:32,173 INFO namenode.FSImage: Allocated new BlockPoolId: BP-56363176-192.168.10.102-1650190832 146 2022-04-17 18:20:32,204 INFO common.Storage: Storage directory /opt/Hadoop/hadoop-3.2.3/Datas/dfs/name has been successfully formatted. 2022-04-17 18:20:32,254 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/Hadoop/hadoop-3.2.3/Datas/d fs/name/curren t/fsimage.ckpt_0000000000000000000 using no compression 2022-04-17 18:20:32,406 INFO namenode.FSImageFormatProtobuf: Image file /opt/Hadoop/hadoop-3.2.3/Datas/dfs/name /current/fsima ge.ckpt_0000000000000000000 of size 397 bytes saved in 0 seconds . 2022-04-17 18:20:32,420 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 2022-04-17 18:20:32,438 INFO namenode.FSNamesystem: Stopping services started for active state 2022-04-17 18:20:32,438 INFO namenode.FSNamesystem: Stopping services started for standby state 2022-04-17 18:20:32,442 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown. 2022-04-17 18:20:32,443 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at Hadoop102/192.168.10.102 ************************************************************/ [my@Hadoop102 hadoop-3.2.3]$ ll 總用量 180 drwxr-xr-x. 2 my my 203 3月 20 09:58 bin drwxrwxr-x. 3 my my 17 4月 17 18:20 Datas drwxr-xr-x. 3 my my 20 3月 20 09:20 etc drwxr-xr-x. 2 my my 106 3月 20 09:58 include drwxr-xr-x. 3 my my 20 3月 20 09:58 lib drwxr-xr-x. 4 my my 288 3月 20 09:58 libexec -rw-rw-r--. 1 my my 150571 3月 10 13:39 LICENSE.txt drwxrwxr-x. 
2 my my 35 4月 17 18:20 logs -rw-rw-r--. 1 my my 21943 3月 10 13:39 NOTICE.txt -rw-rw-r--. 1 my my 1361 3月 10 13:39 README.txt drwxr-xr-x. 3 my my 4096 3月 20 09:20 sbin drwxr-xr-x. 4 my my 31 3月 20 10:17 share |
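The format run ends with "Storage directory ... has been successfully formatted." One way to sanity-check the result is to read the VERSION file the NameNode writes under its name directory and confirm the clusterID matches the one printed during formatting. The snippet below is only a sketch: it parses a sample VERSION file of the same shape (the namespaceID and layoutVersion values here are placeholders; the clusterID and blockpoolID are copied from the log above). On a real node you would point VERSION_FILE at $HADOOP_HOME/Datas/dfs/name/current/VERSION instead.

```shell
# Sketch: extract clusterID from a NameNode VERSION file.
# The sample below mimics the real key=value format; namespaceID and
# layoutVersion are placeholder values, clusterID/blockpoolID come from
# the format log shown earlier.
VERSION_FILE=$(mktemp)
cat > "$VERSION_FILE" <<'EOF'
namespaceID=1113578819
clusterID=CID-98655ee1-8960-41ad-bc6d-2bb95651b12a
cTime=1650190832146
storageType=NAME_NODE
blockpoolID=BP-56363176-192.168.10.102-1650190832146
layoutVersion=-65
EOF
# On a real node: VERSION_FILE=$HADOOP_HOME/Datas/dfs/name/current/VERSION
cluster_id=$(grep '^clusterID=' "$VERSION_FILE" | cut -d= -f2-)
echo "clusterID: $cluster_id"
rm -f "$VERSION_FILE"
```

If the clusterID printed here ever differs from the one a DataNode has registered with, that DataNode will refuse to join — a common symptom of formatting the NameNode a second time.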
3.1.2 Passwordless SSH login not configured on the cluster
| Without passwordless login configured, starting the HDFS cluster services throws the following errors (repeated runs fail the same way):
[my@Hadoop102 current]$ $HADOOP_HOME/sbin/start-dfs.sh
Starting namenodes on [Hadoop102]
Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Hadoop103: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting secondary namenodes [Hadoop104]
Hadoop104: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). |
To fix this, generate an RSA key pair and distribute it for passwordless authentication:
| [my@Hadoop102 current]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/my/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/my/.ssh/id_rsa.
Your public key has been saved in /home/my/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:hrIi7CAZx5vqX0VertHrvbDi7lO1eKMPqECOaaJ6auw my@Hadoop102
(the key's randomart image is omitted here)
[my@Hadoop102 current]$ cd ~
[my@Hadoop102 ~]$ ls -a
.   .bash_history  .bash_profile  .cache   .esd_auth      .local    .ssh      公共  視頻  文檔  音樂
..  .bash_logout   .bashrc        .config  .ICEauthority  .mozilla  .viminfo  模板  圖片  下載  桌面
[my@Hadoop102 ~]$ cd .ssh
[my@Hadoop102 .ssh]$ ls
id_rsa  id_rsa.pub  known_hosts
[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop103
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
my@hadoop103's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'Hadoop103'"
and check to make sure that only the key(s) you wanted were added.
[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop104
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
my@hadoop104's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'Hadoop104'"
and check to make sure that only the key(s) you wanted were added.
[my@Hadoop102 .ssh]$ $HADOOP_HOME/sbin/start-dfs.sh
Starting namenodes on [Hadoop102]
Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
Hadoop102: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Hadoop103: WARNING: /opt/Hadoop/hadoop-3.2.3/logs does not exist. Creating.
Hadoop104: WARNING: /opt/Hadoop/hadoop-3.2.3/logs does not exist. Creating.
Starting secondary namenodes [Hadoop104]
(Hadoop102 is still denied — the key must also be copied to the local host itself:)
[my@Hadoop102 .ssh]$ ssh-copy-id -i id_rsa.pub Hadoop102
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
my@hadoop102's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'Hadoop102'"
and check to make sure that only the key(s) you wanted were added. |
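The three ssh-copy-id runs above — to Hadoop103, Hadoop104, and finally Hadoop102 itself (the NameNode host needs its own key too, or start-dfs.sh keeps printing "Permission denied" locally) — can be driven by a small loop. The sketch below is a dry run that only prints the commands; remove the echo to execute them for real, in which case each host prompts for its password once.

```shell
# Dry-run sketch: print the ssh-copy-id command for every cluster host,
# including this machine itself. Remove "echo" to actually run them.
hosts="Hadoop102 Hadoop103 Hadoop104"
for host in $hosts; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done
```

Remember that this only covers logins *from* Hadoop102; if you also want to run the start scripts from the other nodes, repeat the key generation and distribution there.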
3.1.3 Starting the cluster services as a different, non-cluster user
| [root@Hadoop102 hadoop-3.2.3]# $HADOOP_HOME/sbin/start-dfs.sh
Starting namenodes on [Hadoop102]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [Hadoop104]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
In this case, switch back to the user the cluster was originally configured with and start the services as that user. |
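The errors above come from Hadoop 3.x refusing to run daemons as a user it has not been told about; the intended fix is simply to start the services as the cluster user ("my" in this guide). If you genuinely need to launch them as root, each daemon's user can be declared via the very variables named in the ERROR messages. A sketch of the declarations, which would go in $HADOOP_HOME/etc/hadoop/hadoop-env.sh:

```shell
# Sketch: per-daemon user declarations for hadoop-env.sh. The variable
# names match those in the ERROR messages above; "my" is the cluster
# user used throughout this guide.
export HDFS_NAMENODE_USER=my
export HDFS_DATANODE_USER=my
export HDFS_SECONDARYNAMENODE_USER=my
```

Running the daemons as the dedicated, unprivileged cluster user remains the safer choice.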
Summary

Most first-start failures of the HDFS cluster come down to two things: passwordless SSH not being configured between (and to) the nodes, and starting the daemons as a user other than the cluster user they were configured for.