Hadoop HIVE
A data warehousing tool: a data warehouse framework built on top of Hadoop that turns raw structured data in Hadoop into tables in Hive. (Its main use case is ad-hoc, i.e. interactive, querying.)
Supports HQL, a language almost identical to SQL. Apart from updates, indexes, and transactions, it supports nearly all other SQL features.
It can be viewed as a mapper from SQL to MapReduce.
Provides shell, JDBC/ODBC, Thrift, and web interfaces.
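As a quick illustration of how closely HQL tracks SQL, a minimal sketch (the table and columns here are hypothetical):
-- an ad-hoc HQL query; Hive compiles it into one or more MapReduce jobs
SELECT dept, COUNT(*) AS cnt
FROM employees
WHERE hire_year = 2016
GROUP BY dept
ORDER BY cnt DESC;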
HIVE components and architecture
User interfaces: shell, Thrift, web
Thrift server
Metastore database (Derby is Hive's embedded DB, used for local/embedded mode; in practice MySQL is usually used. The term metadata fits because Hive itself stores no data and depends entirely on HDFS and MapReduce: Hive maps structured data files to database tables, tables in Hive are purely logical, and the definitions of those logical tables are the metadata.)
HIVE QL (compiler, optimizer, executor)
Hadoop
Where does HIVE keep its data?
Under the warehouse directory on HDFS; each table corresponds to one subdirectory.
Buckets correspond to reducers (see the sketch below).
The local /tmp directory holds logs and execution plans.
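A minimal sketch of the bucket/reducer relationship (the table and columns are hypothetical; on Hive releases before 2.x you would also run "set hive.enforce.bucketing=true;" before inserting):
CREATE TABLE user_bucketed (id INT, name STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
-- An INSERT ... SELECT into this table runs with 4 reducers, one per bucket,
-- and each bucket is stored as one file in the table's warehouse subdirectory.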
HIVE installation:
Embedded mode: metadata is kept in the embedded Derby database, and only one session can connect (the Hive service and the metastore service run in the same process, and the Derby service runs inside that process too).
Local standalone mode: the Hive service and the metastore service run in the same process; MySQL runs as a separate process, either on the same machine or on a remote one.
In this mode you only need to point ConnectionURL in hive-site.xml at MySQL and configure the driver class name and the database account.
Remote mode: the Hive service and the metastore run in different processes, possibly on different machines.
This mode requires setting hive.metastore.local to false and hive.metastore.uris to the metastore server URI(s), with multiple URIs separated by commas. A metastore server URI has the form thrift://host:port, for example:
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
</property>
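If there are several metastore servers, the value becomes a comma-separated list (the hosts below are hypothetical):
<property>
<name>hive.metastore.uris</name>
<value>thrift://10.0.0.1:9083,thrift://10.0.0.2:9083</value>
</property>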
Hive 2.1.1:
(1) Embedded mode
cp apache-hive-2.1.1-bin.tar.gz /home/hdp/
cd /home/hdp/
tar -zxvf apache-hive-2.1.1-bin.tar.gz
mv apache-hive-2.1.1-bin hive211
(the directory is renamed to hive211 to match HIVE_HOME below)
As root, edit /etc/profile:
export HIVE_HOME=/home/hdp/hive211
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib:$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=/home/hdp/hive211/lib
Switch back to user hdp and edit the following files under /home/hdp/hive211/conf:
hive-env.sh
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
HADOOP_HOME=/home/hdp/hadoop
export HIVE_CONF_DIR=/home/hdp/hive211/conf
hive-site.xml
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
# This parameter sets Hive's data warehouse directory; the default is /user/hive/warehouse on HDFS.
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
# This parameter sets Hive's temporary file directory; the default is /tmp/hive on HDFS.
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/home/hdp/hive211/iotmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>APP</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mine</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.reducers.bytes.per.reducer</name>
  <value>256000000</value>
  <description>size per reducer. The default is 256Mb, i.e if the input size is 1G, it will use 4 reducers.</description>
</property>
<property>
  <name>hive.exec.reducers.max</name>
  <value>1009</value>
  <description>max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is negative, Hive will use this one as the max number of reducers when automatically determine number of reducers.</description>
</property>
Create the directories required by the parameters above:
[hdp@hdp265m conf]$ hadoop fs -mkdir -p /user/hive/warehouse
[hdp@hdp265m conf]$ hadoop fs -mkdir -p /tmp/hive
[hdp@hdp265m conf]$ hadoop fs -chmod 777 /tmp/hive
[hdp@hdp265m conf]$ hadoop fs -chmod 777 /user/hive/warehouse
[hdp@hdp265m conf]$ hadoop fs -ls /
[hdp@hdp265m hive211]$ pwd
/home/hdp/hive211
[hdp@hdp265m hive211]$ mkdir iotmp
[hdp@hdp265m hive211]$ chmod 777 iotmp
Then replace every occurrence of ${system:java.io.tmpdir} in hive-site.xml with /home/hdp/hive211/iotmp.
To do the global replacement in vi: press Esc to enter command mode, type :, paste the following replace command, and press Enter:
%s#${system:java.io.tmpdir}#/home/hdp/hive211/iotmp#g
Run Hive:
./bin/hive
To start a specific service: hive <parameters> --service serviceName <service parameters>
[hadoop@hadoop~]$ hive --service help
Usage ./hive <parameters> --service serviceName <service parameters>
Service List: beeline cli help hiveserver2 hiveserver hwi jar lineage metastore metatool orcfiledump rcfilecat schemaTool version
Parameters parsed:
  --auxpath : Auxillary jars
  --config : Hive configuration directory
  --service : Starts specific service/component. cli is default
Parameters used:
  HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory
  HIVE_OPT : Hive options
For help on a particular service: ./hive --service serviceName --help
Debug help: ./hive --debug --help
Error:
Fix: ./bin/schematool -initSchema -dbType derby
Error:
Fix: delete the /home/hdp/hive211/metastore_db directory (rm -rf metastore_db/), then initialize again: ./bin/schematool -initSchema -dbType derby
Run again:
./bin/hive
Logging initialized using configuration in jar:file:/home/hdp/hive211/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive>
HIVE remote mode
Install MySQL on hdp265dnsnfs.
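One typical way to prepare MySQL for the metastore, as a sketch (the hive database name, root user, and password 111111 match the hive-site.xml below; adjust to your setup):
mysql> CREATE DATABASE hive;
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY '111111';
mysql> FLUSH PRIVILEGES;
(The createDatabaseIfNotExist=true flag in the JDBC URL can also create the database automatically.)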
Edit hive-site.xml:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.56.108:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>111111</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
</configuration>
Error: Caused by: MetaException(message:Version information not found in metastore.)
Fix: add the following to hive-site.xml:
<name>hive.metastore.schema.verification</name>
<value>false</value>
Error: the MySQL JDBC jar is missing.
Fix: copy it (e.g. mysql-connector-java-5.1.42-bin.jar) into $HIVE_HOME/lib.
Error:
Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)
Fix:
# Initialize the metastore database.
bin/schematool -initSchema -dbType mysql
Start:
bin/hive
After startup, MySQL contains a new hive database.
Testing
Create a database:
create database db_hive_test;
Create a test table:
use db_hive_test;
create table student(id int,name string) row format delimited fields terminated by '\t';
Load data into the table.

Create a student.txt file and write in the rows (id and name separated by a Tab):
vi student.txt
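For example, the file could hold the following rows (hypothetical data; the two columns must be separated by a real Tab character to match the table's delimiter):
1	zhangsan
2	lisi
3	wangwu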
load data local inpath '/home/hadoop/student.txt' into table db_hive_test.student;
Query the table:
select * from student;
View detailed table information:
desc formatted student;
View the created table through MySQL:
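For example (a sketch; TBLS and DBS are standard tables of the Hive metastore schema):
mysql> use hive;
mysql> select TBL_ID, TBL_NAME, TBL_TYPE from TBLS;
mysql> select NAME, DB_LOCATION_URI from DBS;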
View Hive's functions:
show functions;
View detailed information about a function:
desc function sum;
desc function extended sum;
Configure hiveserver2
Starting with version 2.0, Hive provides a simple web UI for HiveServer2; configure it in hive-site.xml:
<property>
  <name>hive.server2.webui.host</name>
  <value>192.168.56.104</value>
</property>
<property>
  <name>hive.server2.webui.port</name>
  <value>10002</value>
</property>
Start hiveserver2:
hive --service hiveserver2
http://192.168.56.104:10002/
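Besides the web UI, clients can connect to the running HiveServer2 over JDBC with beeline (a sketch; 10000 is HiveServer2's default thrift port and the user name hdp is an assumption):
beeline -u jdbc:hive2://192.168.56.104:10000 -n hdp
HQL statements such as show databases; can then be run at the beeline prompt.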
Configure hwi (Hive Web Interface)
https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface#HiveWebInterface-WhatIstheHiveWebInterface
http://www.cnblogs.com/xinlingyoulan/p/6025692.html
http://apache.fayea.com/hive/hive-2.1.1/
Download the source tarball apache-hive-2.1.1-src.tar.gz, build the hwi war, and copy it into $HIVE_HOME/lib:
tar -xzf apache-hive-2.1.1-src.tar.gz
[hdp@hdp265m web]$ pwd
/home/hdp/apache-hive-2.1.1-src/hwi/web
[hdp@hdp265m web]$ jar -cvf hive-hwi-2.1.1.war *
[hdp@hdp265m web]$ cp hive-hwi-2.1.1.war $HIVE_HOME/lib
Download and install ant:
http://ant.apache.org/bindownload.cgi
If your JDK is version 7, do not install the newest ant.
tar -zxvf apache-ant-1.9.7-bin.tar.gz
mv apache-ant-1.9.7 ant197
su root and edit /etc/profile:
export ANT_HOME=/home/hdp/ant197
export PATH=$PATH:$ANT_HOME/bin
Verify that ant is installed correctly:
[hdp@hdp265m ~]$ ant -version
Apache Ant(TM) version 1.9.7 compiled on April 9 2016
Start hwi:
hive --service hwi
If you see the following error:
The following error occurred while executing this line: jar:file:/home/linux/application/hive2.1.0/lib/ant-1.9.1.jar!/org/apache/tools/ant/antlib.xml:37: Could not create task or type of type: componentdef.
According to the error message, the ant.jar under ${HIVE_HOME}/lib is version 1.9.1 while the installed ant is version 1.9.7, so the error is caused by the version mismatch. To fix it, copy the ant.jar from ${ANT_HOME}/lib into ${HIVE_HOME}/lib and change its permissions to 777, using the following commands:
cp ${ANT_HOME}/lib/ant.jar ${HIVE_HOME}/lib/ant-1.9.7.jar
cd ${HIVE_HOME}/lib
chmod 777 ant-1.9.7.jar
If all of the problems above are resolved, open the page below (refresh a few times if it does not load at first):
http://192.168.56.104:9999/hwi/
Start the metastore service:
hive --service metastore
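To keep the metastore service alive after logging out, one common sketch is to run it in the background and let clients reach it through the hive.metastore.uris setting shown in the remote-mode section:
nohup hive --service metastore > metastore.log 2>&1 &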
Hive Getting Started
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Hive Language Manual
https://cwiki.apache.org/confluence/display/Hive/LanguageManual
Hive Tutorial
https://cwiki.apache.org/confluence/display/Hive/Tutorial