7. Testing whether Hadoop installed successfully, and running a MapReduce example
Reference: Hadoop 2.6.5 cluster installation and MapReduce test run — http://blog.csdn.net/fanfanrenrenmi/article/details/54232184
[Preparation] Before every test, the files left over from the previous test must be deleted. The commands are as follows:
################################
# On the master machine:
su hadoop      # switch user
################################
rm -r /home/hadoop/hadoop/*      # delete old data
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # recreate directories
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # set permissions
################################
ssh slave1
rm -r /home/hadoop/hadoop/*      # delete old data
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # recreate directories
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # set permissions
################################
ssh slave2
rm -r /home/hadoop/hadoop/*      # delete old data
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # recreate directories
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # set permissions
ssh master
################################
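Since the same three commands run on every node, the whole cleanup can also be driven from master in one loop. A minimal sketch — not from the original article — assuming the passwordless SSH that start-dfs.sh already requires; -f and -p are added so the loop is safe to re-run:

for host in master slave1 slave2; do
    ssh "$host" '
        rm -rf /home/hadoop/hadoop/*      # delete old data (force, in case the directory is empty)
        mkdir -p /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp      # recreate directories
        chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp  # set permissions
    '
done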
=============================================
                Start Testing
=============================================
(I) Start and verify HDFS
1) Format HDFS (on the master machine)
hdfs namenode -format

The output ends with:

17/08/12 22:13:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.222.134
************************************************************/

2) Start HDFS (on the master machine)
start-dfs.sh

The output looks like this:

hadoop@master:~$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-secondarynamenode-master.out

3) Run jps on the master machine
hadoop@master:~$ jps      # 3 processes
10260 NameNode
10581 Jps
10469 SecondaryNameNode

4) Run jps on slave1 and slave2
hadoop@slave1:~/hadoop$ jps      # 2 processes
6688 Jps
6603 DataNode
==================================
hadoop@slave2:~$ jps      # 2 processes
6600 DataNode
6682 Jps

Explanation: jps lists the Java processes running on the current node. The output above shows that NameNode and SecondaryNameNode started successfully on the master node and DataNode started successfully on the slave nodes, which means HDFS started successfully.
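Besides jps, HDFS can report its own view of the cluster. With both slaves healthy, the following command (run as the hadoop user on master) should list two live DataNodes:

hdfs dfsadmin -report      # prints cluster capacity plus one section per live DataNode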
(II) Start and verify YARN
1) On the master machine, start YARN:
start-yarn.sh      # start yarn

The output looks like this:

hadoop@master:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /data/hadoop-2.6.5/logs/yarn-hadoop-resourcemanager-master.out
slave2: nodemanager running as process 6856. Stop it first.
slave1: starting nodemanager, logging to /data/hadoop-2.6.5/logs/yarn-hadoop-nodemanager-slave1.out

(The slave2 line simply means its NodeManager was still running from an earlier start.)

2) Run jps on the master
hadoop@master:~$ jps      # 4 processes
10260 NameNode
10469 SecondaryNameNode
10649 ResourceManager
10921 Jps

3) Run jps on slave1 and slave2
hadoop@slave1:~/hadoop$ jps      # 3 processes
6771 NodeManager
6887 Jps
6603 DataNode
=========================================
hadoop@slave2:~$ jps      # 3 processes
7057 Jps
6600 DataNode
6856 NodeManager

The output above shows that ResourceManager and NodeManager started successfully, which means YARN started successfully.
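Checking jps by hand on three machines gets tedious across repeated test runs. Below is a minimal automation sketch — not from the original article — assuming passwordless SSH between the nodes and jps on each node's non-interactive PATH:

#!/bin/bash
# Check that every node runs the daemons expected after start-dfs.sh + start-yarn.sh.
declare -A expected=(
    [master]="NameNode SecondaryNameNode ResourceManager"
    [slave1]="DataNode NodeManager"
    [slave2]="DataNode NodeManager"
)
for host in master slave1 slave2; do
    procs=$(ssh "$host" jps)
    for daemon in ${expected[$host]}; do
        # -w matches whole words, so "NameNode" does not also match "SecondaryNameNode"
        echo "$procs" | grep -qw "$daemon" \
            && echo "$host: $daemon OK" \
            || echo "$host: $daemon MISSING"
    done
done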
(III) Access the WebUI

On any of master, slave1 or slave2, open Firefox and browse to http://master:8088/. If the ResourceManager web page loads (the original post included a screenshot of it here), the Hadoop cluster has been set up successfully.
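If no browser is at hand, the same check can be scripted: the ResourceManager also serves a REST API on port 8088. A quick sketch, assuming curl is installed:

curl -s http://master:8088/ws/v1/cluster/info      # returns JSON containing "state":"STARTED" when YARN is healthy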
(IV) When testing is finished, shut everything down with the following command:

stop-all.sh

The output looks like this:

hadoop@master:~$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
slave1: stopping datanode
slave2: stopping datanode
Stopping secondary namenodes [master]
master: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
slave1: stopping nodemanager
slave2: stopping nodemanager
no proxyserver to stop

Run jps again on master, slave1 and slave2 to confirm that all the daemons have been shut down.
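As the log itself notes, stop-all.sh is deprecated. The recommended equivalent is to stop the two layers separately, in the reverse of the order they were started:

stop-yarn.sh      # stops ResourceManager and the NodeManagers
stop-dfs.sh       # stops NameNode, SecondaryNameNode and the DataNodes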
(V) Clean up generated files

[Reminder] Clear the files generated by this run so they do not interfere with the next test. The commands are exactly the ones listed in the [Preparation] section at the top of this article: on master, slave1 and slave2, delete everything under /home/hadoop/hadoop/, recreate the datanode, namenode and tmp directories, and chmod -R 777 them.
=============================================
              Applying MapReduce
=============================================
hadoop fs      # lists the available HDFS shell commands

1. Start the Hadoop cluster:
start-all.sh

2. Create an HDFS directory:
hadoop fs -mkdir /input

3. Upload a file:
hadoop fs -put /data/hadoop-2.6.5/README.txt /input/

4. Rename the file:
hadoop fs -mv /input/README.txt /input/readme.txt

5. List the files:
hadoop fs -ls /input

Output:

hadoop@master:~$ hadoop fs -ls /input
Found 1 items
-rw-r--r--   3 hadoop supergroup       1366 2017-08-13 19:58 /input/readme.txt

[Note] The output directory is /output. It must not be created in advance; if it already exists from a previous run, delete it first.

6. Run the example bundled with Hadoop:
hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input /output

Output:

hadoop@master:~$ hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input /output
17/08/13 20:11:18 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.222.139:8032
17/08/13 20:11:21 INFO input.FileInputFormat: Total input paths to process : 1
17/08/13 20:11:21 INFO mapreduce.JobSubmitter: number of splits:1
17/08/13 20:11:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1502625091562_0001
17/08/13 20:11:23 INFO impl.YarnClientImpl: Submitted application application_1502625091562_0001
17/08/13 20:11:23 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1502625091562_0001/
17/08/13 20:11:23 INFO mapreduce.Job: Running job: job_1502625091562_0001
17/08/13 20:11:45 INFO mapreduce.Job: Job job_1502625091562_0001 running in uber mode : false
17/08/13 20:11:45 INFO mapreduce.Job:  map 0% reduce 0%
17/08/13 20:11:59 INFO mapreduce.Job:  map 100% reduce 0%
17/08/13 20:12:29 INFO mapreduce.Job:  map 100% reduce 100%
17/08/13 20:12:30 INFO mapreduce.Job: Job job_1502625091562_0001 completed successfully
17/08/13 20:12:30 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=1836
		FILE: Number of bytes written=218883
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1466
		HDFS: Number of bytes written=1306
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=11022
		Total time spent by all reduces in occupied slots (ms)=26723
		Total time spent by all map tasks (ms)=11022
		Total time spent by all reduce tasks (ms)=26723
		Total vcore-milliseconds taken by all map tasks=11022
		Total vcore-milliseconds taken by all reduce tasks=26723
		Total megabyte-milliseconds taken by all map tasks=11286528
		Total megabyte-milliseconds taken by all reduce tasks=27364352
	Map-Reduce Framework
		Map input records=31
		Map output records=179
		Map output bytes=2055
		Map output materialized bytes=1836
		Input split bytes=100
		Combine input records=179
		Combine output records=131
		Reduce input groups=131
		Reduce shuffle bytes=1836
		Reduce input records=131
		Reduce output records=131
		Spilled Records=262
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=245
		CPU time spent (ms)=2700
		Physical memory (bytes) snapshot=291491840
		Virtual memory (bytes) snapshot=3782098944
		Total committed heap usage (bytes)=138350592
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=1366
	File Output Format Counters
		Bytes Written=1306

7. List the output files:
hadoop fs -ls /output

Output:

hadoop@master:~$ hadoop fs -ls /output
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2017-08-13 20:12 /output/_SUCCESS
-rw-r--r--   3 hadoop supergroup       1306 2017-08-13 20:12 /output/part-r-00000

8. View the word-count results:
hadoop fs -cat /output/part-r-00000

Output:

hadoop@master:~$ hadoop fs -cat /output/part-r-00000
(BIS),	1
(ECCN)	1
(TSU)	1
(see	1
5D002.C.1,	1
740.13)	1
<http://www.wassenaar.org/>	1
Administration	1
Apache	1
BEFORE	1
BIS	1
Bureau	1
Commerce,	1
Commodity	1
Control	1
Core	1
Department	1
ENC	1
Exception	1
Export	2
For	1
Foundation	1
Government	1
Hadoop	1
Hadoop,	1
Industry	1
Jetty	1
License	1
Number	1
Regulations,	1
SSL	1
Section	1
Security	1
See	1
Software	2
Technology	1
The	4
This	1
U.S.	1
Unrestricted	1
about	1
algorithms.	1
and	6
and/or	1
another	1
any	1
as	1
asymmetric	1
at:	2
both	1
by	1
check	1
classified	1
code	1
code.	1
concerning	1
country	1
country's	1
country,	1
cryptographic	3
currently	1
details	1
distribution	2
eligible	1
encryption	3
exception	1
export	1
following	1
for	3
form	1
from	1
functions	1
has	1
have	1
http://hadoop.apache.org/core/	1
http://wiki.apache.org/hadoop/	1
if	1
import,	2
in	1
included	1
includes	2
information	2
information.	1
is	1
it	1
latest	1
laws,	1
libraries	1
makes	1
manner	1
may	1
more	2
mortbay.org.	1
object	1
of	5
on	2
or	2
our	2
performing	1
permitted.	1
please	2
policies	1
possession,	2
project	1
provides	1
re-export	2
regulations	1
reside	1
restrictions	1
security	1
see	1
software	2
software,	2
software.	2
software:	1
source	1
the	8
this	3
to	2
under	1
use,	2
uses	1
using	2
visit	1
website	1
which	2
wiki,	1
with	1
written	1
you	1
your	1

9. Export the file from HDFS to the local filesystem

[Note] First create a directory /home/hadoop/example under /home/hadoop to receive the exported file:

su hadoop
mkdir /home/hadoop/example

Then run:

hadoop@master:~$ hadoop fs -get /output/part-r-00000 /home/hadoop/example

When it completes, the file part-r-00000 appears under /home/hadoop/example (the original post showed a screenshot of the directory here). At this point the test has passed: Hadoop is installed and the example runs successfully.
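For repeated runs, steps 1 through 9 can be wrapped into a single smoke-test script. A minimal sketch — not from the original article — using the same paths as above; it assumes the daemons are not already running and clears any leftover /input and /output first:

#!/bin/bash
# One-shot wordcount smoke test for the cluster configured above.
set -e                                             # stop on the first failed command

start-all.sh                                       # start HDFS and YARN
hadoop fs -rm -r -f /input /output                 # clear leftovers from any previous run
hadoop fs -mkdir /input
hadoop fs -put /data/hadoop-2.6.5/README.txt /input/readme.txt
hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar \
    wordcount /input /output
mkdir -p /home/hadoop/example                      # local directory for the exported result
hadoop fs -get /output/part-r-00000 /home/hadoop/example/
head /home/hadoop/example/part-r-00000             # show the first few word counts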