HDFS Commands
In this lesson on Apache Hadoop HDFS commands, we will go through the most common commands which are used for Hadoop administration and to manage files present on a Hadoop cluster.
HDFS Commands
These HDFS commands can be run on any Hadoop cluster, or you are free to use any of the VMs offered by Hortonworks, Cloudera, etc.
In this guide, we will make use of Ubuntu 17.10 (GNU/Linux 4.13.0-37-generic x86_64) machine:
Ubuntu Version
Finally, we will make use of Hadoop v3.0.1 for this lesson:
Hadoop version
Let’s get started.
Hadoop HDFS Commands
We will start with some very basic help commands and go into more detail as we go through this lesson.
Getting all HDFS Commands
The simplest help command for Hadoop HDFS is the following, which lists all the available commands in Hadoop and how to use them:
hadoop fs -help

Let's see the output for this command:
Hadoop fs help
The output is actually quite long, as it prints all the available commands along with a brief description of how to use each of them.
Help on a specific Hadoop command
The information printed by the last command was extensive, since it covered every command, and finding help for a specific command in that output is tricky. Here is a command to narrow your search:
hadoop fs -help ls

Let's see the output of this command:
Hadoop specific command guide
Usage of a specific Hadoop command
To learn the syntax of a command, we don't need to go anywhere apart from the terminal itself. To see a command's syntax and how to use it, use the usage option:
hadoop fs -usage ls

Let's see the output of this command:
Usage of Hadoop Command
Apart from the usage, it also shows all possible options for the specified command.
Listing HDFS files and directories
To list all the available files and subdirectories under default directory, just use the following command:
hadoop fs -ls

Let's see the output for this command:
Listing all files
We ran this in the root user's home directory, which is why the output looks the way it does.
Making an HDFS Directory
We can make a new directory for Hadoop File System using the following command:
hadoop fs -mkdir /root/journaldev_bigdata

Note that if you create a new directory inside the /user/ directory, Hadoop will have read/write permissions on the directory, but for other directories it only has read permission by default.
Copying a File from the Local File System to Hadoop FS
To copy a file from Local file System to Hadoop FS, we can use a simple command:
hadoop fs -copyFromLocal derby.log /root/journaldev_bigdata

Let's see the output for this command:
Copy File from local fs to HDFS
If instead of copying the file you just want to move it, make use of the -moveFromLocal option.
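The difference between the two options is easy to see with the analogous local operations (a quick local sketch; the HDFS flags behave the same way with respect to the source file):

```shell
# copyFromLocal leaves the local source file in place;
# moveFromLocal deletes the local source once it is in HDFS.
# The same semantics, demonstrated with plain cp and mv:
tmp=$(mktemp -d)
echo data > "$tmp/a"
cp "$tmp/a" "$tmp/a.copied"    # after a copy, the source still exists
echo data > "$tmp/b"
mv "$tmp/b" "$tmp/b.moved"     # after a move, the source is gone
ls "$tmp"                      # prints: a, a.copied, b.moved (one per line)
rm -rf "$tmp"
```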
Disk Usage
We can see the disk usage of files under HDFS in a given directory with a simple option as shown:
hadoop fs -du /root/journaldev_bigdata/

Let's see the output for this command:
Disk Usage of a directory in HDFS
If you simply want to check the disk usage of the complete HDFS, run the following command:
Let’s see the output for this command:
Disk Usage of complete HDFS
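By default -du prints raw byte counts; adding -h (as in hadoop fs -du -s -h) formats them in a human-readable way. The formatting itself is just repeated division by 1024, which can be sketched locally (the helper name hr is ours, for illustration):

```shell
# hr: format a raw byte count roughly the way `hadoop fs -du -h` would.
hr() {
  awk -v b="$1" 'BEGIN {
    split("B K M G T", u, " ")
    i = 1
    while (b >= 1024 && i < 5) { b /= 1024; i++ }
    printf "%.1f %s\n", b, u[i]
  }'
}
hr 1048576    # prints: 1.0 M
hr 532676     # prints: 520.2 K
```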
Emptying Trash Data
When we are sure that no files in the trash are usable, we can empty the trash in HDFS by deleting all files with the following command:
hadoop fs -expunge

This simply deletes all trashed data in the HDFS and produces no output.
Modifying the replication factor for a file
As we already know, the replication factor is the number of copies of a file maintained across the Hadoop cluster in its HDFS. We can modify the replication factor of a file using the following command:
hadoop fs -setrep -w 1 /root/journaldev_bigdata/derby.log

Let's see the output of this command:
Modify replication factor in HDFS
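Lowering the replication factor directly reduces raw storage consumption: roughly, raw usage = logical file size × replication factor. A back-of-the-envelope check (the numbers are purely illustrative):

```shell
SIZE_MB=128        # logical size of the file (illustrative)
DEFAULT_REP=3      # the usual HDFS default replication factor
NEW_REP=1          # what `-setrep -w 1` sets it to
echo "before: $(( SIZE_MB * DEFAULT_REP )) MB raw"   # prints: before: 384 MB raw
echo "after:  $(( SIZE_MB * NEW_REP )) MB raw"       # prints: after:  128 MB raw
```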
Updating Hadoop Directory permissions
If you face permission related issues in Hadoop, run the following command:
hadoop fs -chmod 700 /root/journaldev_bigdata/

With this command, you can set the permissions granted on an HDFS directory and restrict its access.
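hadoop fs -chmod uses the same octal notation as the regular Unix chmod: 700 means full access (rwx) for the owner and no access at all for group and others. This is easy to verify locally (on Linux, with GNU stat):

```shell
# Octal mode 700 = rwx------ : owner has read/write/execute, nobody else has anything.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 700 "$tmp/demo"
stat -c '%a %A' "$tmp/demo"   # prints: 700 -rwx------
rm -rf "$tmp"
```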
Removing an HDFS Directory
We can remove an entire HDFS directory using the rm command:
hadoop fs -rm -r /root/journaldev_bigdata

Let's see the output for this command:
Removing directory from HDFS
That’s all for a quick roundup on Hadoop HDFS commands.
Translated from: https://www.journaldev.com/20624/hdfs-commands