Setting Up a Hadoop Environment on SUSE (Standalone Mode + Pseudo-Distributed Mode)
【環(huán)境】:
經(jīng)常遭遇因?yàn)橐蕾囓浖姹静黄ヅ鋵?dǎo)致的問題,這次大意了,以為java問題不大,就用本來通過yast安裝的java1.6 openjdk去搞了,結(jié)果可想而知,問題很多,反復(fù)定位,反復(fù)谷歌百度,最后一朋友啟發(fā)下決定換換jdk版本。問題解決了,所以這里貼下我的環(huán)境
java環(huán)境:?java version "1.7.0_51"
? ? ? ? ? ? ? ?Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
? ? ? ? ? ? ? ?Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
系統(tǒng): ? ? ??openSUSE 11.2 (x86_64)
hadoop版本:Hadoop-1.1.2.tar.gz
【Step1:】Create the hadoop user and group
 Group: hadoop
 User: hadoop -> /home/hadoop
 Grant sudo rights: vi /etc/sudoers and add the line  hadoop ALL=(ALL:ALL) ALL
【Step2:】Install Hadoop
 After extracting with tar xf, my directory layout looked like this (for reference):
 /home/hadoop/hadoop-home/[bin|conf]
【Step3:】Configure SSH (so starting Hadoop does not ask for a password)
 (Installing SSH itself is omitted here.)
 ssh-keygen -t rsa -P "" [press Enter through all prompts]
 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 Try ssh localhost [check that it no longer asks for a password]
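The SSH steps above can be collected into a single re-runnable script. It assumes the OpenSSH client tools are installed and skips key generation when a key already exists:

```shell
# Passwordless-SSH setup; guarded so it degrades gracefully without ssh-keygen.
if command -v ssh-keygen >/dev/null 2>&1; then
  mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
  # -P "" gives an empty passphrase; -f avoids any interactive prompt.
  [ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -P "" -f "$HOME/.ssh/id_rsa"
  cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
  chmod 600 "$HOME/.ssh/authorized_keys"   # sshd rejects group/world-writable files
  echo "key installed; try: ssh localhost"
else
  echo "ssh-keygen not found"
fi
```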
【Step4:】Install Java
 See the 【Environment】 section above for the version.
【Step5:】Configure conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_17xxx        #[JDK install directory]
export HADOOP_INSTALL=/home/hadoop/hadoop-home
export PATH=$PATH:$HADOOP_INSTALL/bin        #[directory containing the hadoop scripts]
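A quick sanity check of these settings before moving on; the JDK path below is an assumption matching the environment section, so substitute your own install locations:

```shell
# Verify hadoop-env.sh style settings; both paths are illustrative assumptions.
export JAVA_HOME=/usr/java/jdk1.7.0_51            # assumed JDK location
export HADOOP_INSTALL=/home/hadoop/hadoop-home    # assumed Hadoop location
export PATH=$PATH:$HADOOP_INSTALL/bin
if [ -x "$JAVA_HOME/bin/java" ]; then
  echo "JAVA_HOME looks good"
else
  echo "check JAVA_HOME: $JAVA_HOME/bin/java is not executable"
fi
```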
【Step6:】Standalone mode
 hadoop version
 mkdir input
 man find > input/test.txt
 hadoop jar hadoop-examples-1.1.2.jar wordcount input output
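To see what the wordcount example computes without running Hadoop at all, the same counting can be mimicked with plain shell tools on a tiny sample (demo_input is just an illustrative name, not part of the Hadoop job):

```shell
# Plain-shell equivalent of wordcount on a two-line sample file.
mkdir -p demo_input
printf 'hello world\nhello hadoop\n' > demo_input/test.txt
# Split on spaces, sort, count duplicates, print "word<TAB>count":
tr -s ' ' '\n' < demo_input/test.txt | sort | uniq -c | awk '{print $2"\t"$1}'
# Output:
#   hadoop  1
#   hello   2
#   world   1
```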
【Step7:】Pseudo-distributed mode (namenode, datanode, jobtracker, tasktracker, etc. all on one machine)
 Edit conf/[core-site.xml, hdfs-site.xml, mapred-site.xml]:

 core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

 hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
</configuration>

 mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

【Step8:】Start Hadoop
 Format the namenode: hadoop namenode -format
?cd bin
?sh?start-all.sh
hadoop@linux-peterguo:~/hadoop-home/bin> sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-namenode-linux-peterguo.out
localhost: starting datanode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-datanode-linux-peterguo.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-secondarynamenode-linux-peterguo.out
starting jobtracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-jobtracker-linux-peterguo.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-home/libexec/../logs/hadoop-hadoop-tasktracker-linux-peterguo.out

Run jps to confirm that all five Java processes are up: jobtracker / tasktracker / namenode / datanode / secondarynamenode.
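The jps check can be scripted as a small guarded loop (jps ships with the JDK; the daemon class names below are the Hadoop 1.x defaults):

```shell
# Report which of the five Hadoop daemons jps can see; skipped without a JDK.
if command -v jps >/dev/null 2>&1; then
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    # Exact match on the class-name column, so NameNode does not
    # accidentally match SecondaryNameNode.
    if jps | awk -v n="$d" '$2==n{found=1} END{exit !found}'; then
      echo "$d is up"
    else
      echo "$d is DOWN"
    fi
  done
else
  echo "jps not found; is the JDK on PATH?"
fi
```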
 To confirm the services are healthy, use Hadoop's built-in web interfaces for monitoring cluster status:
http://localhost:50030/ - Hadoop JobTracker admin interface
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS (NameNode) status
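The same three pages can be probed from the shell, which is handy on a headless box (assumes curl; a status of 000 means the port is not reachable):

```shell
# Probe the three default Hadoop 1.x web UI ports and report the HTTP status.
for port in 50030 50060 50070; do
  if command -v curl >/dev/null 2>&1; then
    # -w '%{http_code}' prints just the status; curl emits 000 on connect failure.
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:$port/") || true
    echo "port $port -> HTTP ${code:-000}"
  else
    echo "port $port -> curl not available"
  fi
done
```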
【Step9:】Work with files on DFS
hadoop dfs -mkdir input
hadoop dfs -copyFromLocal input/test.txt input
hadoop dfs -ls input
【Step10:】Run the MapReduce example against DFS
hadoop jar hadoop-examples-1.1.2.jar wordcount input output
hadoop dfs -cat output/*
【Step11:】Shut down
stop-all.sh
Reference: http://blog.csdn.net/zhaoyl03/article/details/8657104
轉(zhuǎn)載于:https://my.oschina.net/sanpeterguo/blog/219656
新人創(chuàng)作打卡挑戰(zhàn)賽發(fā)博客就能抽獎(jiǎng)!定制產(chǎn)品紅包拿不停!總結(jié)
以上是生活随笔為你收集整理的SUSE上搭建Hadoop环境(单机模式+伪分布模式)的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: SCCM 2012 R2---安装客户端
- 下一篇: 博客创世贴