A Detailed Hive Installation Tutorial
I. Installing Hive
1. Download the installation package: apache-hive-3.1.1-bin.tar.gz
Upload it to /opt/software on the Linux system.
2. Extract the package
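The extraction step might look like the following sketch; the target directory /opt/module is an assumption carried over from the paths used in later steps.

```shell
# Create the install directory and extract the Hive tarball into it.
mkdir -p /opt/module
tar -zxvf /opt/software/apache-hive-3.1.1-bin.tar.gz -C /opt/module/
```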
3. Update the system environment variables
Edit /etc/profile (vi /etc/profile) and add:
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
Save the file, then reload it:
source /etc/profile
4. Update Hive's environment script
cd /opt/module/apache-hive-3.1.1-bin/bin/
Edit the hive-config.sh file:
vi hive-config.sh
Add the following lines:
export JAVA_HOME=/opt/module/jdk1.8.0_11
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.1-bin/conf
5. Copy the Hive configuration file
cd /opt/module/apache-hive-3.1.1-bin/conf/
cp hive-default.xml.template hive-site.xml
6. Edit hive-site.xml, locating each of the following properties and changing it in place
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root123</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.202.131:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>Enforce metastore schema version consistency. True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default) False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>system:java.io.tmpdir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/iotmp</value>
  <description/>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.metastore.db.type</name>
  <value>mysql</value>
  <description>Expects one of [derby, oracle, mysql, mssql, postgres]. Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.</description>
</property>
<property>
  <name>hive.cli.print.current.db</name>
  <value>true</value>
  <description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
  <name>hive.cli.print.header</name>
  <value>true</value>
  <description>Whether to print the names of the columns in query output.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
7. Upload the MySQL driver jar into the /opt/module/apache-hive-3.1.1-bin/lib/ directory
The driver package is mysql-connector-java-8.0.15.zip; unzip it and take the jar from inside.
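A sketch of unpacking the driver and copying the jar into Hive's lib directory; the directory name inside the zip is an assumption based on the connector's usual layout.

```shell
cd /opt/software
unzip mysql-connector-java-8.0.15.zip
# The jar normally sits inside a directory of the same name as the zip.
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar \
   /opt/module/apache-hive-3.1.1-bin/lib/
```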
8. Make sure a database named hive exists in MySQL
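If the database does not exist yet, one way to create it is from the MySQL client; the credentials here match the ones configured in hive-site.xml above, and the character set choice is an assumption.

```shell
# Create the metastore database if it is missing.
mysql -uroot -proot123 -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET utf8;"
```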
9. Initialize the metastore database
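Initialization is typically done with the schematool utility that ships with Hive (on the PATH after step 3); this is a sketch of the usual invocation.

```shell
# Creates the metastore tables in the MySQL database named hive,
# using the connection settings from hive-site.xml.
schematool -dbType mysql -initSchema
```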
10. Make sure Hadoop is running
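Starting Hadoop usually means bringing up HDFS and YARN with the scripts from $HADOOP_HOME/sbin (already on the PATH from step 3); a minimal sketch:

```shell
start-dfs.sh
start-yarn.sh
# jps should now list NameNode, DataNode, ResourceManager, NodeManager, etc.
jps
```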
11. Start Hive
12. Check that it started successfully
show databases;
II. Basic Command Usage
(1) Start Hive
[linux01 hive]$ bin/hive
(2) List the databases
hive>show databases;
(3) Switch to the default database
hive>use default;
(4) List the tables in the default database
hive>show tables;
(5) Create the student table, declaring '\t' as the field delimiter
hive> create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
(6) Load the file /usr/local/data/student.txt into the student table.
Create student.txt on Linux, edit its contents, then upload it to HDFS:
hadoop fs -put /usr/local/data/student.txt /data
load data inpath '/data/student.txt' overwrite into table student;
Alternatively, load directly from the local file system (note the local keyword):
load data local inpath '/usr/local/data/student.txt' into table student;
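For reference, a minimal tab-separated student.txt matching the table schema might be created like this (the sample rows are illustrative, not from the original tutorial):

```shell
mkdir -p /usr/local/data
# Two columns per row: id and name, separated by a tab.
printf '1001\tzhangsan\n1002\tlisi\n' > /usr/local/data/student.txt
```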
(7) Query the data
hive> select * from student;
Query only the id column:
hive> select id from student;
(8) Exit the Hive shell:
hive(default)>exit;
(9) View the HDFS file system from within the Hive CLI
hive(default)>dfs -ls /;