Detailed Hive Installation Tutorial

Published: 2023/12/9

I. Hive Installation
1. Download the installation package apache-hive-3.1.1-bin.tar.gz and upload it to the /opt/software directory on your Linux system.
2. Extract the archive:

cd /opt/software
tar -zxvf apache-hive-3.1.1-bin.tar.gz -C /opt/module/

3. Update the system environment variables:

vi /etc/profile

Add the following:

export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

Save the file, then reload it:

source /etc/profile
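As a quick sanity check, you can confirm that Hive's bin directory ends up on PATH. This minimal sketch simulates the two profile additions in the current shell, so it works even before Hive is actually installed:

```shell
# Simulate the profile additions and verify the PATH entry is present.
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HIVE_HOME/bin
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH missing hive bin" ;;
esac
```

After a real `source /etc/profile`, `which hive` should resolve to $HIVE_HOME/bin/hive.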

4、修改hive環(huán)境變量

cd /opt/module/apache-hive-3.1.1-bin/bin/

Edit the hive-config.sh file:

vi hive-config.sh

新增內(nèi)容:

export JAVA_HOME=/opt/module/jdk1.8.0_11
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.1-bin/conf
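If you rerun the setup, appending these lines blindly duplicates them. A minimal sketch of an idempotent append (the `add_export` helper is hypothetical, not part of Hive; it writes to a temp file here rather than the real hive-config.sh):

```shell
# Hypothetical helper: append "export NAME=value" to a config file
# only if that variable is not already exported there.
CFG=$(mktemp)
add_export() {
  grep -q "^export $1=" "$CFG" || echo "export $1=$2" >> "$CFG"
}
add_export JAVA_HOME /opt/module/jdk1.8.0_11
add_export HIVE_HOME /opt/module/apache-hive-3.1.1-bin
add_export JAVA_HOME /opt/module/jdk1.8.0_11   # duplicate call: no effect
grep -c '^export' "$CFG"   # prints 2
```

Point `CFG` at the real hive-config.sh to use it for this step.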

5. Copy the Hive configuration template:

cd /opt/module/apache-hive-3.1.1-bin/conf/
cp hive-default.xml.template hive-site.xml

6. Edit hive-site.xml: find each of the following properties and change it as shown.

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root123</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.202.131:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>
    Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.
    To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended
    for production use cases, run schematool command instead.
  </description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
    schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
  </description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>system:java.io.tmpdir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/iotmp</value>
  <description/>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.metastore.db.type</name>
  <value>mysql</value>
  <description>
    Expects one of [derby, oracle, mysql, mssql, postgres].
    Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
  </description>
</property>
<property>
  <name>hive.cli.print.current.db</name>
  <value>true</value>
  <description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
  <name>hive.cli.print.header</name>
  <value>true</value>
  <description>Whether to print the names of the columns in query output.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
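Because the template is thousands of lines long, hand-editing is error-prone; a scripted substitution can be safer. A sketch using sed, demonstrated on a tiny stand-in file rather than the real hive-site.xml (the replaced default `APP` is the template's stock ConnectionUserName; the temp-file setup is only for illustration):

```shell
# Sketch: swap a property's <value> with sed, shown on a small stand-in
# for hive-site.xml. Point DEMO at $HIVE_CONF_DIR/hive-site.xml for real use.
DEMO=$(mktemp)
cat > "$DEMO" <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>APP</value>
</property>
EOF
# Replace the template's default user "APP" with "root", as in step 6.
sed -i 's#<value>APP</value>#<value>root</value>#' "$DEMO"
grep -o '<value>root</value>' "$DEMO"   # prints <value>root</value>
```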

7. Upload the MySQL driver jar to the /opt/module/apache-hive-3.1.1-bin/lib/ directory.
The driver is distributed as mysql-connector-java-8.0.15.zip; unzip it and take the jar from inside.

8. Make sure MySQL contains a database named hive.
9. Initialize the metastore database:

schematool -dbType mysql -initSchema

10. Make sure Hadoop is running.
11. Start Hive:

hive

12. Check that it started successfully:

show databases;

II. Basic Command Usage
(1) Start Hive:
[linux01 hive]$ bin/hive
(2) List databases:
hive> show databases;
(3) Switch to the default database:
hive> use default;
(4) List the tables in the default database:
hive> show tables;
(5) Create a student table, declaring '\t' as the field delimiter:
hive> create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
(6) Load the /usr/local/data/student.txt file into the student table.
Create student.txt on Linux, edit its contents, and upload the file to HDFS:
hadoop fs -put /usr/local/data/student.txt /data

load data inpath '/data/student.txt' overwrite into table student;
load data local inpath '/usr/local/data/student.txt' into table student;
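The student table above parses each row by splitting on the declared '\t' delimiter. A minimal sketch of a matching data file (the ids and names are made up for illustration):

```shell
# Sketch: a data file matching student(id int, name string), tab-delimited.
printf '1001\tzhangsan\n1002\tlisi\n' > /tmp/student_demo.txt
# Split each row on tab and take the second field -- this mirrors how
# Hive will map the columns when the file is loaded.
awk -F'\t' '{print $2}' /tmp/student_demo.txt
```

Note the two load forms differ: `load data inpath` moves a file that is already in HDFS, while `load data local inpath` copies from the local filesystem.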
(7) Query the table:
hive> select * from student;
Query just the id column:
hive> select id from student;
(8) Exit the Hive shell:
hive(default)> exit;
(9) Inspect the HDFS file system from inside the Hive CLI:
hive(default)> dfs -ls /;
