

HIVE Fully Distributed Cluster Installation Walkthrough (Metastore: MySQL)

Published: 2025/3/12 · Databases · by 豆豆

[root@node01 mysql]# mysql -u hive -p

Enter password:

mysql> create database hive;

Query OK, 1 row affected (0.00 sec)

mysql> use hive;

Database changed

mysql> show tables;

Empty set (0.00 sec)

3) Extract the Hive installation package

tar -xzvf hive-0.9.0.tar.gz

[hadoop@node01 ~]$ cd hive-0.9.0

[hadoop@node01 hive-0.9.0]$ ls

bin  conf  docs  examples  lib  LICENSE  NOTICE  README.txt  RELEASE_NOTES.txt  scripts  src

4) Download the MySQL JDBC driver and copy it into $HIVE_HOME/lib

[hadoop@node01 ~]$ mv mysql-connector-java-5.1.24-bin.jar ./hive-0.9.0/lib

5) Add Hive to the PATH by editing /etc/profile:

export HIVE_HOME=/home/hadoop/hive-0.9.0

export PATH=$PATH:$HIVE_HOME/bin
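After appending those two lines, a quick sanity check (a sketch; the HIVE_HOME path assumes the step-3 extract location) confirms the bin directory landed on PATH:

```shell
# Re-create the two /etc/profile lines in the current shell, then verify.
export HIVE_HOME=/home/hadoop/hive-0.9.0   # assumed extract location from step 3
export PATH=$PATH:$HIVE_HOME/bin

# The hive launcher directory should now appear in PATH.
echo "$PATH" | grep -q "$HIVE_HOME/bin" && echo "PATH ok"
```

Run `source /etc/profile` (or log in again) so the change applies to the current session.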

6) Edit hive-env.sh

[hadoop@node01 conf]$ cp hive-env.sh.template hive-env.sh

[hadoop@node01 conf]$ vi hive-env.sh
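The shipped hive-env.sh.template is entirely commented out; a minimal sketch of what is typically added here (the HADOOP_HOME path matches the one visible in the step-8 kill command; HIVE_CONF_DIR is an assumption for this layout):

```shell
# Point Hive at the Hadoop install (path taken from the job log later in this article).
export HADOOP_HOME=/home/hadoop/hadoop-0.20.2
# Where Hive should look for hive-site.xml (assumed default conf dir).
export HIVE_CONF_DIR=/home/hadoop/hive-0.9.0/conf
```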

7) Copy hive-default.xml to hive-site.xml, then set the four key properties to match the MySQL configuration above:

[hadoop@node01 conf]$ cp hive-default.xml.template hive-site.xml

[hadoop@node01 conf]$ vi hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

8) Start Hadoop, open the hive shell, and test

[hadoop@node01 conf]$ start-all.sh
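The LOAD DATA below presupposes that a records table already exists; its CREATE TABLE statement is not shown in this article. A hypothetical minimal DDL (only the ip column is confirmed by the later query; the space delimiter is a guess for this access log):

```sql
-- Hypothetical sketch: a real access_log table would need the full column
-- list and a matching SerDe; only `ip` appears in the article's query.
CREATE TABLE records (ip STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ';
```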

hive> load data inpath 'hdfs://node01:9000/user/hadoop/access_log.txt'

> overwrite into table records;

Loading data to table default.records

Moved to trash: hdfs://node01:9000/user/hive/warehouse/records

OK

Time taken: 0.526 seconds

hive> select ip, count(*) from records

> group by ip;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=

In order to set a constant number of reducers:

set mapred.reduce.tasks=

Starting Job = job_201304242001_0001, Tracking URL = http://node01:50030/jobdetails.jsp?jobid=job_201304242001_0001

Kill Command = /home/hadoop/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=192.168.231.131:9001 -kill job_201304242001_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2013-04-24 20:11:03,127 Stage-1 map = 0%,  reduce = 0%

2013-04-24 20:11:11,196 Stage-1 map = 100%,  reduce = 0%

2013-04-24 20:11:23,331 Stage-1 map = 100%,  reduce = 100%

Ended Job = job_201304242001_0001

MapReduce Jobs Launched:

Job 0: Map: 1  Reduce: 1   HDFS Read: 7118627 HDFS Write: 9 SUCCESS

Total MapReduce CPU Time Spent: 0 msec

OK

NULL    28134

Time taken: 33.273 seconds
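(The NULL key in the result suggests the records schema did not actually parse an ip field out of the log.) As a local sanity check, the same GROUP BY aggregation can be mimicked with awk over a tiny fabricated sample — purely illustrative, not part of the original run:

```shell
# Count occurrences of the first field (the client IP in a combined log format),
# mirroring: select ip, count(*) from records group by ip;
printf '10.0.0.1 - -\n10.0.0.1 - -\n10.0.0.2 - -\n' |
  awk '{count[$1]++} END {for (ip in count) print ip, count[ip]}' |
  sort
# Expected:
# 10.0.0.1 2
# 10.0.0.2 1
```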

In HDFS, the records table is simply a file:

[hadoop@node01 home]$ hadoop fs -ls /user/hive/warehouse/records

Found 1 items

-rw-r--r--   2 hadoop supergroup    7118627 2013-04-15 20:06 /user/hive/warehouse/records/access_log.txt
