Implementing Hadoop with Cloudera (Part 2)
Installation
With the planning done, we can install Hadoop. As mentioned in the introduction, Cloudera's Hadoop distribution makes installation very convenient. First, put a clean operating system on each host (I used Ubuntu 8.04, with the user set to hadoop; other versions should be much the same), then install Hadoop itself. The steps below install Hadoop 0.20; Hadoop 0.18 can be installed in much the same way. Note, however, that Hadoop 0.20 and Hadoop 0.18 must not be enabled at the same time. Since the installation steps are the same on every machine, only a single host's installation is described here, in the following steps:
Setting up the Cloudera repository
- Create the Cloudera APT source entries (Hadoop 0.20 is used here):
# Stable (Hadoop 0.18)
#deb http://archive.cloudera.com/debian hardy-stable contrib
#deb-src http://archive.cloudera.com/debian hardy-stable contrib
# Testing (Hadoop 0.20)
deb http://archive.cloudera.com/debian hardy-testing contrib
deb-src http://archive.cloudera.com/debian hardy-testing contrib
- Import the repository key:
curl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add -
Installing Hadoop
- Update the package index:
sudo apt-get update
- Install Hadoop:
sudo apt-get install hadoop-0.20
Deployment
With the Hadoop environment installed on these hosts, they now need to be deployed for distributed operation, starting with setting up connectivity between them.
Host interconnection
In a Hadoop environment, interconnection means that the hosts can reach one another over the network and that host names resolve to IP addresses correctly: any host can ping any other host by its host name. Note that it is the host name that matters here, i.e. on hadoop-01 the command ping hadoop-02 must reach hadoop-02 (and likewise, every host must be able to ping every other host by name). This can be achieved via each host's /etc/hosts file, with settings as follows:
sudo vi /etc/hosts
127.0.0.1 localhost
10.x.253.201 hadoop-01 hadoop-01
10.x.253.202 hadoop-02 hadoop-02
10.x.253.203 hadoop-03 hadoop-03
10.x.253.204 hadoop-04 hadoop-04
10.x.3.30 firehare-303 firehare-303
Change the hosts file on every machine to the settings above, and the hosts can then reach one another by host name.
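Once the hosts files are in place, name resolution can be checked from any node with a quick loop. This is a minimal sketch; the host list is taken from the /etc/hosts entries above:

```shell
# Ping each cluster host once by name; any FAILED line means its
# /etc/hosts entry (or the network path to it) needs a look.
HOSTS="hadoop-01 hadoop-02 hadoop-03 hadoop-04 firehare-303"
for h in $HOSTS; do
  if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
    echo "$h: OK"
  else
    echo "$h: FAILED"
  fi
done
```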
Note: strictly speaking, not every host needs to know the host names of all the other hosts in the Hadoop environment. Only a host acting as a master node (e.g. the NameNode or JobTracker) needs entries in its hosts file for the IP address and host name of every machine in the environment; a machine used as a DataNode only needs entries for itself and the master node (whether the JobTracker host, like the NameNode host, also needs entries for all machines, I cannot say for certain without an environment to test in, though I suspect it does; anyone interested is welcome to try). Here, since this is a test setup and the master host may change, and since I am a bit lazy, I simply added everything everywhere. :)
Account setup
Hadoop requires the same deployment directory layout on every machine, along with an account with the same user name on each. Since the Cloudera Hadoop packages are used here, none of this needs to be set up manually; it is mentioned only for completeness.
SSH setup
In a distributed Hadoop environment, the master node (NameNode, JobTracker) starts and stops the processes on the slave nodes (DataNode, TaskTracker) over SSH. Every machine in the environment must therefore be reachable via SSH, and the master node must be able to log in to the slave nodes without entering a password, so that it can control the other nodes unattended in the background. This is achieved by configuring SSH on each machine for passwordless public-key authentication. The open-source SSH implementation on Ubuntu is OpenSSH; it is not installed by default, so it must be installed first.
Installing OpenSSH
Installing OpenSSH is simple; the following command installs both openssh-client and openssh-server:
sudo apt-get install ssh
Configuring passwordless public-key authentication for OpenSSH
First, run the following command on hadoop-01:
hadoop@hadoop-01:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (just press Enter here)
Enter same passphrase again: (just press Enter here)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
9d:42:04:26:00:51:c7:4e:2f:7e:38:dd:93:1c:a2:d6 hadoop@hadoop-01
The command above generates a key pair for the current user hadoop on host hadoop-01. The private key is saved in /home/hadoop/.ssh/id_rsa, and the public key in id_rsa.pub in the same directory (/home/hadoop/.ssh). Next, append the contents of id_rsa.pub to the end of /home/hadoop/.ssh/authorized_keys on every host (including hadoop-01 itself); if that file does not exist, create it by hand.
Note: the contents of id_rsa.pub are one long line; when copying it, do not drop characters or introduce stray line breaks.
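The manual copy step can be scripted. The sketch below (run here in a temporary directory, with a made-up key string standing in for the real id_rsa.pub line) shows an idempotent append that avoids duplicate entries and sets the permissions sshd expects:

```shell
# Stand-ins for /home/hadoop/.ssh and a real id_rsa.pub line.
SSH_DIR=$(mktemp -d)
PUBKEY="ssh-rsa AAAAB3NzaFAKEKEY hadoop@hadoop-01"   # illustrative, not a real key
touch "$SSH_DIR/authorized_keys"
# Append the key only if it is not already present (safe to re-run).
grep -qxF "$PUBKEY" "$SSH_DIR/authorized_keys" || printf '%s\n' "$PUBKEY" >> "$SSH_DIR/authorized_keys"
grep -qxF "$PUBKEY" "$SSH_DIR/authorized_keys" || printf '%s\n' "$PUBKEY" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
ENTRIES=$(wc -l < "$SSH_DIR/authorized_keys")
echo "entries: $ENTRIES"   # the second append was a no-op, so still 1
```

The same grep-then-append pattern applied to /home/hadoop/.ssh/authorized_keys on each host keeps repeated runs from stacking up duplicate keys.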
Testing the passwordless SSH connections
From hadoop-01, open SSH connections to hadoop-01, hadoop-04, and firehare-303 in turn, and confirm that each succeeds without a password prompt. Note that the first SSH connection to a host prints a message like the following:
The authenticity of host [hadoop-01] can't be established.
The key fingerprint is: c8:c2:b2:d0:29:29:1a:e3:ec:d9:4a:47:98:29:b4:48
Are you sure you want to continue connecting (yes/no)?
Type yes; OpenSSH then records the connecting host's information in /home/hadoop/.ssh/known_hosts automatically, and the prompt does not appear on subsequent connections.
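Checking the nodes one by one gets tedious. The loop below is a sketch (host list assumed from the test above) that uses BatchMode, so ssh fails immediately instead of prompting whenever passwordless login is not yet working:

```shell
# Any node reported FAIL still prompts for a password (or is unreachable).
NODES="hadoop-01 hadoop-04 firehare-303"
STATUS=""
for h in $NODES; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
    STATUS="$STATUS $h=OK"
  else
    STATUS="$STATUS $h=FAIL"
  fi
done
echo "$STATUS"
```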
Configuring Hadoop on the master node
Setting JAVA_HOME
Hadoop's JAVA_HOME is set in the file /etc/hadoop/conf/hadoop-env.sh, as follows:
sudo vi /etc/hadoop/conf/hadoop-env.sh
export JAVA_HOME="/usr/lib/jvm/java-6-sun"
Hadoop core configuration
Hadoop's core configuration file is /etc/hadoop/conf/core-site.xml; configure it as follows:
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop-01:8020</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop-0.20/cache/${user.name}</value>
</property>
Configuring Hadoop's distributed storage
Hadoop's distributed storage environment is configured mainly through the file /etc/hadoop/conf/hdfs-site.xml, as follows:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
</property>
Configuring Hadoop's distributed computation
Hadoop's distributed computation uses the Map/Reduce model, configured mainly through the file /etc/hadoop/conf/mapred-site.xml, as follows:
<property>
  <name>mapred.job.tracker</name>
  <value>hadoop-01:8021</value>
</property>
Configuring the master and slave nodes
First set the master node by editing the file /etc/hadoop/conf/masters, as shown:
hadoop-01
Then set the slave nodes by editing the file /etc/hadoop/conf/slaves, as shown:
hadoop-02
hadoop-03
hadoop-04
firehare-303
Configuring Hadoop on the slave nodes
Configuring the slave nodes is simple: just copy the master node's Hadoop configuration over to each slave.
scp -r /etc/hadoop/conf hadoop-02:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-03:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-04:/etc/hadoop
scp -r /etc/hadoop/conf firehare-303:/etc/hadoop
Starting Hadoop
Formatting the distributed file system
One last piece of preparation is needed before starting Hadoop: formatting the distributed file system. This only has to be done on the master node, as follows:
/usr/lib/hadoop-0.20/bin/hadoop namenode -format
Starting the Hadoop services
Hadoop can then be started with the following command:
/usr/lib/hadoop-0.20/bin/start-all.sh
Note: the command is deliberately run without sudo; with sudo it would report errors, because no passwordless SSH has been set up for the root user. The output is shown below; hadoop-03 was intentionally left disconnected, which is why the "No route to host" messages appear.
hadoop@hadoop-01:~$ /usr/lib/hadoop-0.20/bin/start-all.sh
namenode running as process 4836. Stop it first.
hadoop-02: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-02.out
hadoop-04: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-04.out
firehare-303: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
hadoop-01: secondarynamenode running as process 4891. Stop it first.
jobtracker running as process 4787. Stop it first.
hadoop-02: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-02.out
hadoop-04: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-04.out
firehare-303: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
With that, Hadoop has started up normally.
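A quick way to confirm what is actually running on the master is to look for the daemon processes in the process table (the daemon names below are the ones that appear in the start-up log above; jps from the Sun JDK would show the same information):

```shell
# Count which of the expected master-side daemons show up in the process table.
RUNNING=0; MISSING=0
for d in NameNode SecondaryNameNode JobTracker; do
  if ps ax 2>/dev/null | grep -v grep | grep -q "$d"; then
    RUNNING=$((RUNNING+1)); echo "$d: running"
  else
    MISSING=$((MISSING+1)); echo "$d: not found"
  fi
done
echo "running=$RUNNING missing=$MISSING"
```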
Testing Hadoop
With Hadoop set up, the next step is to test it and see whether it works properly, using the following commands:
hadoop@hadoop-01:~$ hadoop-0.20 fs -mkdir input
hadoop@hadoop-01:~$ hadoop-0.20 fs -put /etc/hadoop-0.20/conf/*.xml input
hadoop@hadoop-01:~$ hadoop-0.20 fs -ls input
Found 6 items
-rw-r--r-- 3 hadoop supergroup 3936 2010-03-11 08:55 /user/hadoop/input/capacity-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 400 2010-03-11 08:55 /user/hadoop/input/core-site.xml
-rw-r--r-- 3 hadoop supergroup 3032 2010-03-11 08:55 /user/hadoop/input/fair-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 4190 2010-03-11 08:55 /user/hadoop/input/hadoop-policy.xml
-rw-r--r-- 3 hadoop supergroup 536 2010-03-11 08:55 /user/hadoop/input/hdfs-site.xml
-rw-r--r-- 3 hadoop supergroup 266 2010-03-11 08:55 /user/hadoop/input/mapred-site.xml
hadoop@hadoop-01:~$ hadoop-0.20 jar /usr/lib/hadoop-0.20/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
10/03/11 14:35:57 INFO mapred.FileInputFormat: Total input paths to process : 6
10/03/11 14:35:58 INFO mapred.JobClient: Running job: job_201003111431_0001
10/03/11 14:35:59 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:14 INFO mapred.JobClient: map 33% reduce 0%
10/03/11 14:36:20 INFO mapred.JobClient: map 66% reduce 0%
10/03/11 14:36:26 INFO mapred.JobClient: map 66% reduce 22%
10/03/11 14:36:36 INFO mapred.JobClient: map 100% reduce 22%
10/03/11 14:36:44 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:36:46 INFO mapred.JobClient: Job complete: job_201003111431_0001
10/03/11 14:36:46 INFO mapred.JobClient: Counters: 19
10/03/11 14:36:46 INFO mapred.JobClient: Job Counters
10/03/11 14:36:46 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:36:46 INFO mapred.JobClient: Rack-local map tasks=4
10/03/11 14:36:46 INFO mapred.JobClient: Launched map tasks=6
10/03/11 14:36:46 INFO mapred.JobClient: Data-local map tasks=2
10/03/11 14:36:46 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_READ=12360
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=422
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=204
10/03/11 14:36:46 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input groups=4
10/03/11 14:36:46 INFO mapred.JobClient: Combine output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map input records=315
10/03/11 14:36:46 INFO mapred.JobClient: Reduce shuffle bytes=124
10/03/11 14:36:46 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:36:46 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:36:46 INFO mapred.JobClient: Map input bytes=12360
10/03/11 14:36:46 INFO mapred.JobClient: Combine input records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input records=4
10/03/11 14:36:46 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/03/11 14:36:46 INFO mapred.FileInputFormat: Total input paths to process : 1
10/03/11 14:36:46 INFO mapred.JobClient: Running job: job_201003111431_0002
10/03/11 14:36:47 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:56 INFO mapred.JobClient: map 100% reduce 0%
10/03/11 14:37:08 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:37:10 INFO mapred.JobClient: Job complete: job_201003111431_0002
10/03/11 14:37:11 INFO mapred.JobClient: Counters: 18
10/03/11 14:37:11 INFO mapred.JobClient: Job Counters
10/03/11 14:37:11 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Launched map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Data-local map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_READ=204
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=232
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=62
10/03/11 14:37:11 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input groups=1
10/03/11 14:37:11 INFO mapred.JobClient: Combine output records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map input records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce shuffle bytes=0
10/03/11 14:37:11 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:37:11 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:37:11 INFO mapred.JobClient: Map input bytes=118
10/03/11 14:37:11 INFO mapred.JobClient: Combine input records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input records=4
As the output shows, the test succeeded: Hadoop is deployed correctly and ready to run distributed Map/Reduce computations.