Implementing Hadoop with Cloudera (Part 2)
Installation
With the planning done, it is time to install Hadoop. As mentioned in the introduction, installing Hadoop from Cloudera's distribution is very convenient. Start with a clean operating system on every host (I used Ubuntu 8.04 with a user named hadoop; other versions should work much the same), then install Hadoop itself (the steps below install Hadoop-0.20; Hadoop-0.18 can be installed in much the same way. Note that Hadoop-0.20 and Hadoop-0.18 cannot be enabled at the same time). Since the installation steps are identical on every machine, only one host's installation is described here. It consists of the following steps:
Setting up the Cloudera repository
- Create the Cloudera APT source file (Hadoop-0.20 is used here):
# Stable (Hadoop-0.18)
#deb http://archive.cloudera.com/debian hardy-stable contrib
#deb-src http://archive.cloudera.com/debian hardy-stable contrib
# Testing (Hadoop-0.20)
deb http://archive.cloudera.com/debian hardy-testing contrib
deb-src http://archive.cloudera.com/debian hardy-testing contrib
- Add the repository's signing key:
curl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add -
Installing Hadoop
- Update the package index:
sudo apt-get update
- Install Hadoop:
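The exact command is not shown here; a minimal sketch, assuming the usual CDH package names for Hadoop-0.20 on Ubuntu/Debian (the daemon package names are an assumption, check the Cloudera repository for the exact ones):
# Base Hadoop-0.20 package (client tools, libraries and the bin/ scripts used later in this article)
sudo apt-get install hadoop-0.20
# Optionally, per-daemon packages with init scripts, e.g. on the master:
# sudo apt-get install hadoop-0.20-namenode hadoop-0.20-secondarynamenode hadoop-0.20-jobtracker
# and on the slaves:
# sudo apt-get install hadoop-0.20-datanode hadoop-0.20-tasktracker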
Deployment
Once the Hadoop environment has been installed on these hosts, the next task is to deploy them in fully distributed mode. The first step is to set up connectivity between them.
Host interconnection
Interconnection in a Hadoop environment means that the hosts can reach one another over the network and that host names resolve correctly to IP addresses, so that any host can ping every other host by host name. Note that this means the host name: on hadoop-01, the command ping hadoop-02 should reach the hadoop-02 host (and likewise all of these hosts should be able to ping one another by name). This can be achieved by editing /etc/hosts on each host, as follows:
sudo vi /etc/hosts
127.0.0.1 localhost
10.x.253.201 hadoop-01 hadoop-01
10.x.253.202 hadoop-02 hadoop-02
10.x.253.203 hadoop-03 hadoop-03
10.x.253.204 hadoop-04 hadoop-04
10.x.3.30 firehare-303 firehare-303
Change the hosts file on every host to the settings above; this satisfies the requirement that hosts can reach one another by host name.
Note: strictly speaking, not every host needs to know the host names of all the other hosts in the Hadoop environment. Only the hosts acting as master nodes (such as the NameNode and JobTracker) need entries for every machine's IP address and host name in their hosts files. A machine used only as a DataNode needs entries only for itself and the master nodes. (Whether the JobTracker host, like the NameNode host, also needs entries for every machine, I cannot say for certain since I had no environment to test this, but I suspect it does; anyone interested is welcome to try.) Here, since this is a test setup in which the master host may change, and since I am lazy, I simply added all of them on every host. :)
Computing account setup
Hadoop requires the same hadoop deployment directory structure on every machine, and an account with the same user name on all of them. Since the Cloudera Hadoop packages are used here, no extra setup is needed for this; it is just worth being aware of.
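Should a uniform account ever be needed (for example, with a hand-rolled tarball install rather than the Cloudera packages), a minimal sketch would be to create the same user on every host; the user name hadoop below matches the rest of this article:
# Hypothetical step, not required with the Cloudera packages; run on every host
sudo adduser hadoop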
SSH setup
In a distributed Hadoop environment, the master nodes (NameNode, JobTracker) start and stop the processes on the slave nodes (DataNode, TaskTracker) over SSH. Every machine in the environment must therefore be reachable over SSH, and the master must be able to log in to the slaves without entering a password, so that it can control the other nodes unattended. This is done by configuring SSH on each machine to use passwordless public-key authentication. The open-source SSH implementation on Ubuntu is OpenSSH; it is not installed by default, so it must be installed first.
Installing OpenSSH
Installing OpenSSH is straightforward; the following command installs both openssh-client and openssh-server:
sudo apt-get install ssh
Setting up passwordless public-key authentication for OpenSSH
First, run the following command on hadoop-01:
hadoop@hadoop-01:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (just press Enter here)
Enter same passphrase again: (just press Enter here)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
9d:42:04:26:00:51:c7:4e:2f:7e:38:dd:93:1c:a2:d6 hadoop@hadoop-01
The command above generates a key pair for the current user hadoop on host hadoop-01. The key pair is saved under /home/hadoop/.ssh as the two files id_rsa (the private key) and id_rsa.pub (the public key). Next, append the contents of id_rsa.pub to the end of the file /home/hadoop/.ssh/authorized_keys on every host (including hadoop-01 itself); if that file does not exist, create it by hand.
Note: the contents of id_rsa.pub are one long line; when copying it, do not drop characters or introduce extra line breaks.
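A minimal sketch of one way to do the copy, assuming password-based SSH login still works at this point and the remote user is hadoop:
# Run on hadoop-01 once per target host (hadoop-02, hadoop-03, hadoop-04, firehare-303)
cat ~/.ssh/id_rsa.pub | ssh hadoop@hadoop-02 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# And for hadoop-01 itself:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys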
Testing passwordless public-key SSH connections
From hadoop-01, initiate SSH connections to hadoop-01, hadoop-04 and firehare-303, and make sure each connection succeeds without asking for a password. Note that the first time you connect to a host, a prompt like the following appears:
The authenticity of host [hadoop-01] can't be established.
The key fingerprint is: c8:c2:b2:d0:29:29:1a:e3:ec:d9:4a:47:98:29:b4:48
Are you sure you want to continue connecting (yes/no)?
Type yes; OpenSSH then automatically adds the remote host's information to /home/hadoop/.ssh/known_hosts, and the prompt no longer appears on subsequent connections.
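A quick way to run the whole round of tests from hadoop-01 (a sketch using the host names above; each command should print the remote host name without asking for a password):
for h in hadoop-01 hadoop-02 hadoop-03 hadoop-04 firehare-303; do
    ssh "$h" hostname
done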
Configuring Hadoop on the master node
Setting JAVA_HOME
Hadoop's JAVA_HOME is set in the file /etc/hadoop/conf/hadoop-env.sh, as follows:
sudo vi /etc/hadoop/conf/hadoop-env.sh
export JAVA_HOME="/usr/lib/jvm/java-6-sun"
Hadoop core configuration
Hadoop's core configuration file is /etc/hadoop/conf/core-site.xml; configure it as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-01:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-0.20/cache/${user.name}</value>
  </property>
</configuration>
Configuring Hadoop's distributed storage
Hadoop's distributed storage settings are made mainly in the file /etc/hadoop/conf/hdfs-site.xml; configure it as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
  </property>
</configuration>
Configuring Hadoop's distributed computation
Hadoop's distributed computation uses the Map/Reduce model; it is configured mainly through the file /etc/hadoop/conf/mapred-site.xml, as follows:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop-01:8021</value>
  </property>
</configuration>
Configuring Hadoop's master and slave nodes
First set the master node by editing /etc/hadoop/conf/masters, as shown below:
hadoop-01
Then set the slave nodes by editing /etc/hadoop/conf/slaves, as shown below:
hadoop-02
hadoop-03
hadoop-04
firehare-303
Configuring Hadoop on the slave nodes
Configuring Hadoop on the slave nodes is simple: just copy the master node's Hadoop configuration to each slave node.
scp -r /etc/hadoop/conf hadoop-02:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-03:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-04:/etc/hadoop
scp -r /etc/hadoop/conf firehare-303:/etc/hadoop
Starting Hadoop
Formatting the distributed file system
One last preparation is needed before starting Hadoop: format the distributed file system. This only needs to be done on the master node:
/usr/lib/hadoop-0.20/bin/hadoop namenode -format
Starting the Hadoop services
Hadoop can be started with the following command:
/usr/lib/hadoop-0.20/bin/start-all.sh
Note: the command is run without sudo; with sudo it fails with errors, because the root user has not been set up for passwordless SSH. The output is shown below; note that hadoop-03 was deliberately left disconnected, hence the "No route to host" messages.
hadoop@hadoop-01:~$ /usr/lib/hadoop-0.20/bin/start-all.sh
namenode running as process 4836. Stop it first.
hadoop-02: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-02.out
hadoop-04: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-04.out
firehare-303: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
hadoop-01: secondarynamenode running as process 4891. Stop it first.
jobtracker running as process 4787. Stop it first.
hadoop-02: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-02.out
hadoop-04: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-04.out
firehare-303: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
Hadoop is now up and running!
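To double-check that the daemons really came up, two standard checks can be run on the master (a sketch; jps ships with the Sun JDK):
# NameNode, SecondaryNameNode and JobTracker should appear in the process list on the master
jps
# Ask HDFS for a cluster report; the live DataNodes should be listed
/usr/lib/hadoop-0.20/bin/hadoop dfsadmin -report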
Testing Hadoop
With Hadoop set up, the next step is to test it and see whether it works properly, as follows:
hadoop@hadoop-01:~$ hadoop-0.20 fs -mkdir input
hadoop@hadoop-01:~$ hadoop-0.20 fs -put /etc/hadoop-0.20/conf/*.xml input
hadoop@hadoop-01:~$ hadoop-0.20 fs -ls input
Found 6 items
-rw-r--r-- 3 hadoop supergroup 3936 2010-03-11 08:55 /user/hadoop/input/capacity-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 400 2010-03-11 08:55 /user/hadoop/input/core-site.xml
-rw-r--r-- 3 hadoop supergroup 3032 2010-03-11 08:55 /user/hadoop/input/fair-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 4190 2010-03-11 08:55 /user/hadoop/input/hadoop-policy.xml
-rw-r--r-- 3 hadoop supergroup 536 2010-03-11 08:55 /user/hadoop/input/hdfs-site.xml
-rw-r--r-- 3 hadoop supergroup 266 2010-03-11 08:55 /user/hadoop/input/mapred-site.xml
hadoop@hadoop-01:~$ hadoop-0.20 jar /usr/lib/hadoop-0.20/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
10/03/11 14:35:57 INFO mapred.FileInputFormat: Total input paths to process?: 6
10/03/11 14:35:58 INFO mapred.JobClient: Running job: job_201003111431_0001
10/03/11 14:35:59 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:14 INFO mapred.JobClient: map 33% reduce 0%
10/03/11 14:36:20 INFO mapred.JobClient: map 66% reduce 0%
10/03/11 14:36:26 INFO mapred.JobClient: map 66% reduce 22%
10/03/11 14:36:36 INFO mapred.JobClient: map 100% reduce 22%
10/03/11 14:36:44 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:36:46 INFO mapred.JobClient: Job complete: job_201003111431_0001
10/03/11 14:36:46 INFO mapred.JobClient: Counters: 19
10/03/11 14:36:46 INFO mapred.JobClient: Job Counters
10/03/11 14:36:46 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:36:46 INFO mapred.JobClient: Rack-local map tasks=4
10/03/11 14:36:46 INFO mapred.JobClient: Launched map tasks=6
10/03/11 14:36:46 INFO mapred.JobClient: Data-local map tasks=2
10/03/11 14:36:46 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_READ=12360
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=422
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=204
10/03/11 14:36:46 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input groups=4
10/03/11 14:36:46 INFO mapred.JobClient: Combine output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map input records=315
10/03/11 14:36:46 INFO mapred.JobClient: Reduce shuffle bytes=124
10/03/11 14:36:46 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:36:46 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:36:46 INFO mapred.JobClient: Map input bytes=12360
10/03/11 14:36:46 INFO mapred.JobClient: Combine input records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input records=4
10/03/11 14:36:46 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/03/11 14:36:46 INFO mapred.FileInputFormat: Total input paths to process?: 1
10/03/11 14:36:46 INFO mapred.JobClient: Running job: job_201003111431_0002
10/03/11 14:36:47 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:56 INFO mapred.JobClient: map 100% reduce 0%
10/03/11 14:37:08 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:37:10 INFO mapred.JobClient: Job complete: job_201003111431_0002
10/03/11 14:37:11 INFO mapred.JobClient: Counters: 18
10/03/11 14:37:11 INFO mapred.JobClient: Job Counters
10/03/11 14:37:11 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Launched map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Data-local map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_READ=204
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=232
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=62
10/03/11 14:37:11 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input groups=1
10/03/11 14:37:11 INFO mapred.JobClient: Combine output records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map input records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce shuffle bytes=0
10/03/11 14:37:11 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:37:11 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:37:11 INFO mapred.JobClient: Map input bytes=118
10/03/11 14:37:11 INFO mapred.JobClient: Combine input records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input records=4
As the output shows, the test succeeded, which means Hadoop has been deployed successfully and distributed Map/Reduce jobs can now run on it.
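To look at what the grep example actually produced, the output directory can be read back from HDFS (a sketch; the part-00000 file name assumes a single reducer, as in this run):
hadoop-0.20 fs -ls output
hadoop-0.20 fs -cat output/part-00000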