hadoop -- HDFS Shell Operations
Contents
- Basic syntax
- Commands
- Common commands
- 1. Prerequisites
- 2. Upload
- 3. Download
- 4. Direct HDFS operations
Basic syntax
hadoop fs [options] / hdfs dfs [options]
Commands
[xiaobai@hadoop102 ~]$ hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-v] [-x] <path> ...]
        [-expunge [-immediate]]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-head <file>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] [-s <sleep interval>] <file>]
        [-test -[defswrz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>          specify an application configuration file
-D <property=value>                 define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <file1,...>                  specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>                 specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>            specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is: command [genericOptions] [commandOptions]

Common commands
1. Prerequisites
1). Start the Hadoop cluster:
Start HDFS on hadoop102:
2). -help: print the help output for a command:
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -help rm
3). Create the /weather directory:
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -mkdir /weather
2. Upload
1). -moveFromLocal: cut and paste from local to HDFS (gone locally, present on HDFS);
[xiaobai@hadoop102 hadoop-3.2.2]$ vim monday.txt
enter: monday is a rainy day
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -moveFromLocal ./monday.txt /weather
2). -copyFromLocal: copy a file from the local filesystem to an HDFS path (kept locally, present on HDFS);
[xiaobai@hadoop102 hadoop-3.2.2]$ vim Tuesday.txt
enter: Tuesday is a sunny day
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -copyFromLocal Tuesday.txt /weather
3). -put: same as copyFromLocal; put is the one used in production (kept locally, present on HDFS);
[xiaobai@hadoop102 hadoop-3.2.2]$ vim Wednesday.txt
enter: Wednesday balabala
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -put ./Wednesday.txt /weather
4). -appendToFile: append a file to the end of an existing file (HDFS files can only be appended to, never modified in place);
[xiaobai@hadoop102 hadoop-3.2.2]$ vim myfeeling
enter: I was not in a good mood that day.
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -appendToFile ./myfeeling /weather/Wednesday.txt
3. Download
1). -copyToLocal: copy from HDFS to local (kept on HDFS, present locally);
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -copyToLocal /weather/monday.txt ./
2). -get: same as copyToLocal; get is the one used in production (kept on HDFS, present locally);
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -get /weather/monday.txt ./monday2.txt
4. Direct HDFS operations
1). -ls: list directory contents;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -ls /weather
2). -cat: display file contents;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -cat /weather/Wednesday.txt
3). -chgrp / -chmod / -chown: same usage as in the Linux filesystem; change a file's group, permissions, or owner;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -chmod 666 /weather/Tuesday.txt
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -chown xiaobai:xiaobai /weather/monday.txt
4). -mkdir: create a directory;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -mkdir /sunny
5). -cp: copy between HDFS paths;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -cp /weather/Wednesday.txt /sunny
6). -mv: move files within HDFS;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -mv /weather/monday.txt /sunny
7). -tail: display the last 1 KB of a file (the end of a file holds the newest/most important data);
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -tail /sunny/Wednesday.txt
8). -rm: delete a file or directory;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -rm /sunny/monday.txt
9). -rm -r: recursively delete a directory and everything in it;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -rm -r /sunny
10). -du: show the size of a directory and its files;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -du -s -h /weather
99  297  /weather
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -du -h /weather
23  69   /weather/Tuesday.txt
54  162  /weather/Wednesday.txt
22  66   /weather/monday.txt
tips: 99 is the total file size; 297 is 99 × 3 replicas; /weather is the directory being queried;
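The relationship between the two size columns in the -du output above can be sketched with plain shell arithmetic (the 99-byte total and 3× replication factor are taken from the listing; this is an illustration of the rule, not an HDFS call):

```shell
# -du prints two columns: <logical size> and <space consumed across all replicas>.
# With the default replication factor of 3, the second column is 3x the first.
logical=99       # total bytes in /weather, per the -du -s output above
replication=3    # HDFS default replication factor
echo $(( logical * replication ))
```

So the 23 + 54 + 22 = 99 logical bytes in /weather occupy 297 bytes of raw disk across the cluster.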
11). -setrep: set the replication factor of a file in HDFS;
[xiaobai@hadoop102 hadoop-3.2.2]$ hadoop fs -setrep 6 /weather/monday.txt
Replication 6 set: /weather/monday.txt
tips: the replication factor set here is only recorded in the NameNode's metadata; that many replicas will not necessarily exist. The target is reached only when there are enough DataNodes; otherwise the missing replicas are created later, once enough DataNodes join. Since this cluster currently has only 3 DataNodes, there can be at most 3 replicas; only when the node count grows to 6 can the replication actually reach 6.
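The cap described in the tip amounts to: effective replicas = min(requested factor, live DataNodes). A minimal sketch of that rule, using the 6-replica/3-node scenario above (local arithmetic only, not a real HDFS call):

```shell
# The number of replicas that can actually exist is capped by the live DataNode count.
requested=6    # value passed to: hadoop fs -setrep 6 /weather/monday.txt
datanodes=3    # live DataNodes in the current cluster
effective=$(( requested < datanodes ? requested : datanodes ))
echo "$effective"
```

The factor recorded in the NameNode's metadata can be inspected with, e.g., hadoop fs -stat %r /weather/monday.txt; it will report 6 even while only 3 physical replicas exist.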