60. Fully Distributed Cluster, Node Management, NFS Gateway
3.1 Adding a Node
1) Add a new node, node4
node4 ~]# yum -y install rsync
node4 ~]# yum -y install java-1.8.0-openjdk-devel
node4 ~]# mkdir /var/hadoop
nn01 ~]# ssh-copy-id 192.168.1.25
nn01 ~]# vim /etc/hosts
192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
192.168.1.25 node4
nn01 ~]# scp /etc/hosts 192.168.1.25:/etc/
nn01 ~]# cd /usr/local/hadoop/
hadoop]# vim ./etc/hadoop/slaves
node1
node2
node3
node4
//Sync the configuration (use jobs -l to make sure the sync to node4 has completed!)
hadoop]# for i in {22..25}; do rsync -aSH --delete /usr/local/hadoop/ \
192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
[1] 12375
[2] 12376
[3] 12377
[4] 12378
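If you would rather not poll jobs -l by hand, a minimal sketch that blocks until every background rsync has returned (run it in the same shell that launched the loop):
wait       # blocks until all background rsync jobs have exited
jobs -l    # should now show no running jobs, confirming the sync finished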
node4 hadoop]# ./sbin/hadoop-daemon.sh start datanode //start the datanode on node4
2) Check the status
node4 hadoop]# jps
12470 Jps
12396 DataNode
3) Set the balancer bandwidth #the existing DataNodes move part of their data over to the new node; rebalancing must not take up all of the bandwidth
node4 hadoop]# ./bin/hdfs dfsadmin -setBalancerBandwidth 60000000
Balancer bandwidth is set to 60000000
node4 hadoop]# ./sbin/start-balancer.sh
starting balancer...
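The balancer script also accepts a threshold argument (percentage deviation from the cluster's average utilization, default 10); a hedged sketch if a tighter balance is wanted:
./sbin/start-balancer.sh -threshold 5    # stop once every DataNode is within 5% of the cluster average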
nn01 hadoop]# ./bin/hdfs dfsadmin -report #check the status
Live datanodes (4):... (node4 is now listed)
4) Removing a node
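To check the node count without reading the whole report, one possible shortcut is to filter for the summary line shown above:
./bin/hdfs dfsadmin -report | grep 'Live datanodes'    # should print "Live datanodes (4):" once node4 has joined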
nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/slaves
//remove the node4 entry added earlier
node1
node2
node3
nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
//add the following property (four lines) to this configuration file
...
<property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/hadoop/etc/hadoop/exclude</value>
</property>
nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/exclude
node4
5) Migrate the data off node4
nn01 hadoop]# ./bin/hdfs dfsadmin -refreshNodes
Refresh nodes successful
nn01 hadoop]# ./bin/hdfs dfsadmin -report #wait
//watch node4; leave the machine running and wait until node4 shows Decommissioned
Dead datanodes (1):
Name: 192.168.1.25:50010 (node4)
Hostname: node4
Decommission Status : Decommissioned... #only proceed once it shows Decommissioned
node4 hadoop]# ./sbin/hadoop-daemon.sh stop datanode //stop the datanode
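Instead of re-running the report by hand, a rough polling loop can wait for the state change (a sketch that assumes node4 is the only node being decommissioned):
until ./bin/hdfs dfsadmin -report | grep -q 'Decommission Status : Decommissioned'; do
    sleep 30    # check every 30 seconds until node4 reports Decommissioned
done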
stopping datanode
node4 hadoop]# ./sbin/yarn-daemon.sh start nodemanager
//add a YARN NodeManager
node4 hadoop]# jps
10342 NodeManager
10446 Jps
node4 hadoop]# ./sbin/yarn-daemon.sh stop nodemanager //stop the nodemanager
stopping nodemanager #jps will no longer show NodeManager
node4 hadoop]# ./bin/yarn node -list
//check the YARN node status; node4 still appears and only disappears after a while
19/03/02 18:40:27 INFO client.RMProxy: Connecting to ResourceManager at nn01/192.168.1.21:8032
Total Nodes:4
Node-Id          Node-State  Node-Http-Address  Number-of-Running-Containers
node2:36669 RUNNING node2:8042 0
node4:38698 RUNNING node4:8042 0
node3:36146 RUNNING node3:8042 0
node1:34124 RUNNING node1:8042 0
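If you want to wait for node4 to drop out of the YARN node list rather than checking manually, a simple sketch:
while ./bin/yarn node -list 2>/dev/null | grep -q 'node4:'; do
    sleep 60    # node4 disappears once the ResourceManager expires the stopped NodeManager
done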
4. NFS Configuration
Create the proxy user
Start a new system with SELinux and firewalld disabled
Configure the NFS gateway (NFSGW)
Start the services
Mount the NFS share and make it mount automatically at boot
4.1 Basic Preparation
1) Change the hostname and configure /etc/hosts (configure /etc/hosts on both nn01 and nfsgw)
localhost ~]# echo nfsgw > /etc/hostname
localhost ~]# hostname nfsgw
nn01 ~]# ssh-copy-id 192.168.1.26
nn01 hadoop]# vim /etc/hosts
192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
192.168.1.25 node4
192.168.1.26 nfsgw
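The same approach used for node4 can push the updated hosts file to the gateway, for example:
scp /etc/hosts 192.168.1.26:/etc/    # copy the hosts file from nn01 to nfsgw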
2) Create the proxy user (on both nn01 and nfsgw); nn01 is shown as the example
nn01 hadoop]# groupadd -g 200 nfs
nn01 hadoop]# useradd -u 200 -g nfs nfs
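The same two commands must be run on nfsgw as well; keeping the UID and GID identical (200 here) keeps ownership consistent between the gateway and the NameNode:
groupadd -g 200 nfs          # run on nfsgw too
useradd -u 200 -g nfs nfs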
3) Configure core-site.xml
nn01 hadoop]# ./sbin/stop-all.sh //stop all services
This script ...
nn01 hadoop]# cd etc/hadoop
nn01 hadoop]# >exclude //empty the exclude file so no DataNodes are excluded
nn01 hadoop]# vim core-site.xml #add the following
<property>
    <name>hadoop.proxyuser.nfs.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.nfs.hosts</name>
    <value>*</value>
</property>
4) Sync the configuration to node1, node2, and node3
nn01 hadoop]# for i in {22..24}; do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
[4] 2722
[5] 2723
[6] 2724 #use jobs -l to make sure the sync has finished!
5) Start the cluster
nn01 hadoop]# /usr/local/hadoop/sbin/start-dfs.sh
6) Check the status
nn01 hadoop]# /usr/local/hadoop/bin/hdfs dfsadmin -report
Live datanodes (3):... #back to 3 nodes
4.2 NFSGW Configuration
1) Install java-1.8.0-openjdk-devel and rsync
nfsgw ~]# yum -y install java-1.8.0-openjdk-devel
nfsgw ~]# yum -y install rsync
nn01 hadoop]# rsync -avSH --delete \
/usr/local/hadoop/ 192.168.1.26:/usr/local/hadoop/ -e 'ssh'
2) Create the data root directory /var/hadoop (on the NFSGW host)
nfsgw ~]# mkdir /var/hadoop
3) Create the dump directory and give the nfs user ownership of it
nfsgw ~]# mkdir /var/nfstmp
nfsgw ~]# chown nfs:nfs /var/nfstmp
4) Grant the nfs user access to /usr/local/hadoop/logs (on the NFSGW host)
nfsgw ~]# setfacl -m u:nfs:rwx /usr/local/hadoop/logs
nfsgw ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml #newly added
<property>
    <name>nfs.exports.allowed.hosts</name>
    <value>* rw</value>
</property>
<property>
    <name>nfs.dump.dir</name>
    <value>/var/nfstmp</value>
</property>
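Before starting the gateway, a quick sanity check of the permissions granted above can save debugging later (getfacl ships in the same acl package as the setfacl used above):
ls -ld /var/nfstmp                           # should show nfs:nfs ownership
getfacl /usr/local/hadoop/logs | grep nfs    # should list the rwx entry for user nfs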
5) Verify that the nfs user can create and delete files in both directories
nfsgw ~]# su - nfs
nfsgw ~]$ cd /var/nfstmp/
nfsgw nfstmp]$ touch 1
nfsgw nfstmp]$ ls
1
nfsgw nfstmp]$ rm -rf 1
nfsgw nfstmp]$ ls
nfsgw nfstmp]$ cd /usr/local/hadoop/logs/
nfsgw logs]$ touch 1
nfsgw logs]$ ls
1 hadoop-root-secondarynamenode-nn01.log
hadoop-root-datanode-nn01.log hadoop-root-secondarynamenode-nn01.out
hadoop-root-datanode-nn01.out hadoop-root-secondarynamenode-nn01.out.1
hadoop-root-namenode-nn01.log SecurityAuth-root.audit
hadoop-root-namenode-nn01.out yarn-root-resourcemanager-nn01.log
hadoop-root-namenode-nn01.out.1 yarn-root-resourcemanager-nn01.out
nfsgw logs]$ rm -rf 1
nfsgw logs]$ ls
6) Start the services
nfsgw ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh --script ./bin/hdfs start portmap
//the portmap service can only be started as root
starting portmap, logging to /usr/local/hadoop/logs/hadoop-root-portmap-nfsgw.out
nfsgw ~]# jps
1091 Jps
1045 Portmap
nfsgw ~]# su - nfs
nfsgw ~]$ cd /usr/local/hadoop/
nfsgw hadoop]$ ./sbin/hadoop-daemon.sh --script ./bin/hdfs start nfs3
//nfs3 can only be started as the proxy user
starting nfs3, logging to /usr/local/hadoop/logs/hadoop-nfs-nfs3-nfsgw.out
nfsgw hadoop]$ jps
1139 Nfs3
1192 Jps
nfsgw hadoop]# jps //run as root: both Portmap and Nfs3 should be visible
1139 Nfs3
1204 Jps
1045 Portmap
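Before mounting, the exports can be checked from any machine that has nfs-utils installed; a sketch:
rpcinfo -p 192.168.1.26      # portmapper, mountd and nfs should all be registered
showmount -e 192.168.1.26    # should list "/" as the exported path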
7) Mount from a client (node4 can serve as the client host)
node4 ~]# rm -rf /usr/local/hadoop
node4 ~]# yum -y install nfs-utils
node4 ~]# mount -t nfs -o \
vers=3,proto=tcp,nolock,noatime,sync,noacl 192.168.1.26:/ /mnt/
//mount; verify with df -h
node4 ~]# cd /mnt/
node4 mnt]# ls
aaa bbb fa system tmp
node4 mnt]# touch a
node4 mnt]# ls
a aaa bbb fa system tmp
node4 mnt]# rm -rf a
node4 mnt]# ls
aaa bbb fa system tmp
8) Mount automatically at boot
node4 ~]# vim /etc/fstab
192.168.1.26:/ /mnt/ nfs vers=3,proto=tcp,nolock,noatime,sync,noacl,_netdev 0 0
node4 ~]# mount -a
node4 ~]# df -h
192.168.1.26:/ 80G 8.2G 72G 11% /mnt
node4 ~]# rpcinfo -p 192.168.1.26
program vers proto port service
100005 3 udp 4242 mountd
100005 1 tcp 4242 mountd
100000 2 udp 111 portmapper
100000 2 tcp 111 portmapper
100005 3 tcp 4242 mountd
100005 2 tcp 4242 mountd
100003 3 tcp 2049 nfs
100005 2 udp 4242 mountd
100005 1 udp 4242 mountd
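To confirm the fstab entry works without rebooting, one possible check is to unmount and remount from fstab:
umount /mnt
mount -a          # re-mounts everything listed in /etc/fstab
df -h /mnt        # 192.168.1.26:/ should appear again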
01: YARN: the cluster resource management system (core component)
MapReduce: the distributed computing framework
02: HDFS node management: adding, removing, and repairing nodes
03: YARN node management (NodeManager): adding and removing nodes
04: NFS gateway (HDFS client + NFS server)
Accessing the HDFS filesystem normally requires an HDFS client.
The gateway lets the HDFS filesystem be mounted locally (over the NFSv3 protocol, so any machine can mount it directly).
When an external client needs to access HDFS, it mounts it over NFS through the NFS gateway: the gateway translates the NFS operations into HDFS client calls and accesses the backend HDFS filesystem as an HDFS client, so the NFS gateway acts as a forwarding proxy.
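As a small illustration of this proxying, the same HDFS namespace is visible both through a native HDFS client on a cluster node and through the NFS mount on node4 (assuming the mount from step 7 is still in place):
/usr/local/hadoop/bin/hdfs dfs -ls /    # on a cluster node, via the HDFS client
ls /mnt                                 # on node4, via the NFS gateway mount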