

Apache Hudi in Detail: How to Configure Each Partition Type

Published: 2023/12/4

1. Introduction

Apache Hudi supports datasets with many partition layouts: multi-level partitions, single-field partitions, date partitions, and non-partitioned datasets. Users can pick whichever layout fits their needs. This post walks through how to configure each partition type in Hudi.

2. Partition Handling

To illustrate how Hudi handles the different partition types, assume records are written to Hudi with the following schema:

{
  "type" : "record",
  "name" : "HudiSchemaDemo",
  "namespace" : "hoodie.HudiSchemaDemo",
  "fields" : [
    { "name" : "age", "type" : [ "long", "null" ] },
    { "name" : "location", "type" : [ "string", "null" ] },
    { "name" : "name", "type" : [ "string", "null" ] },
    { "name" : "sex", "type" : [ "string", "null" ] },
    { "name" : "ts", "type" : [ "long", "null" ] },
    { "name" : "date", "type" : [ "string", "null" ] }
  ]
}

A concrete record looks like this:

{
  "name": "zhangsan",
  "ts": 1574297893837,
  "age": 16,
  "location": "beijing",
  "sex": "male",
  "date": "2020/08/16"
}

2.1 Single Partition

A single partition uses one field as the partition field. That field can be a non-date field (such as location) or a date-formatted field (such as date).

2.1.1 Non-Date Field Partition

To use the location field above as the partition field, configure the write to Hudi and the sync to Hive as follows:

df.write().format("org.apache.hudi").
  options(getQuickstartWriteConfigs()).
  option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY(), "COPY_ON_WRITE").
  option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY(), "ts").
  option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY(), "name").
  option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY(), partitionFields).
  option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY(), keyGenerator).
  option(TABLE_NAME, tableName).
  option("hoodie.datasource.hive_sync.enable", true).
  option("hoodie.datasource.hive_sync.table", tableName).
  option("hoodie.datasource.hive_sync.username", "root").
  option("hoodie.datasource.hive_sync.password", "123456").
  option("hoodie.datasource.hive_sync.jdbcurl", "jdbc:hive2://localhost:10000").
  option("hoodie.datasource.hive_sync.partition_fields", hivePartitionFields).
  option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
  option("hoodie.embed.timeline.server", false).
  option("hoodie.datasource.hive_sync.partition_extractor_class", hivePartitionExtractorClass).
  mode(saveMode).
  save(basePath);

The following options deserve attention:

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location;
  • hoodie.datasource.hive_sync.partition_fields is set to location, matching the partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator; it may also be left unset, since org.apache.hudi.keygen.SimpleKeyGenerator is the default;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor.
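To make the SimpleKeyGenerator behavior concrete, here is a minimal sketch (not Hudi source code; the class, method, and fallback constant below are invented for illustration) of how a single-field partition path is derived from a record:

```java
import java.util.Map;

public class SimplePartitionSketch {
    // SimpleKeyGenerator-style: the partition path is the value of one field.
    public static String partitionPath(Map<String, ?> record, String field) {
        Object v = record.get(field);
        // Hudi falls back to a default folder for null values;
        // the exact constant varies across Hudi versions.
        return v == null ? "default" : v.toString();
    }

    public static void main(String[] args) {
        Map<String, String> record = Map.of(
            "name", "zhangsan", "location", "beijing", "sex", "male");
        System.out.println(partitionPath(record, "location")); // beijing
    }
}
```

With location as the partition field, the sample record above lands under the beijing/ folder of the table base path.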

The table Hudi creates in Hive when syncing looks like this:

CREATE EXTERNAL TABLE `notdateformatsinglepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `sex` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/notDateFormatSinglePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816154250',
  'transient_lastDdlTime'='1597563780')

You can then query the table notdateformatsinglepartitiondemo.

Tip: before querying, place the hudi-hive-sync-bundle-xxx.jar under $HIVE_HOME/lib.

2.1.2 Date-Formatted Field Partition

To use the date field above as the partition field, the key options are:

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to date;
  • hoodie.datasource.hive_sync.partition_fields is set to date, matching the partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.SimpleKeyGenerator; it may also be left unset, since org.apache.hudi.keygen.SimpleKeyGenerator is the default;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor.
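The extractor works in the reverse direction: it turns a partition folder name back into the partition value registered in Hive. Here is a hedged sketch of what SlashEncodedDayPartitionValueExtractor does for slash-encoded day folders (illustrative only; the real class lives in org.apache.hudi.hive and this is not its source):

```java
public class SlashDayExtractorSketch {
    // Converts a yyyy/MM/dd partition folder into the yyyy-MM-dd value
    // that becomes the Hive partition value.
    public static String extract(String partitionPath) {
        String[] parts = partitionPath.split("/");
        if (parts.length != 3) {
            throw new IllegalArgumentException("expected yyyy/MM/dd: " + partitionPath);
        }
        return parts[0] + "-" + parts[1] + "-" + parts[2];
    }

    public static void main(String[] args) {
        System.out.println(extract("2020/08/16")); // 2020-08-16
    }
}
```

So the date value 2020/08/16, written as a 2020/08/16 folder hierarchy, is registered in Hive as the partition value 2020-08-16.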

The table Hudi creates in Hive when syncing looks like this:

CREATE EXTERNAL TABLE `dateformatsinglepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `location` string,
  `name` string,
  `sex` string,
  `ts` bigint)
PARTITIONED BY (
  `date` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/dateFormatSinglePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816155107',
  'transient_lastDdlTime'='1597564276')

You can then query the table dateformatsinglepartitiondemo.

2.2 Multi-Field Partitions

A multi-field partition uses several fields as partition fields, for example the location and sex fields above. The key options are:

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location,sex;
  • hoodie.datasource.hive_sync.partition_fields is set to location,sex, matching the partition fields written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.ComplexKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.MultiPartKeysValueExtractor.
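A sketch of the round trip for multiple partition fields, with invented helper names (ComplexKeyGenerator-style joining of one path segment per field, and MultiPartKeysValueExtractor-style splitting back into per-field Hive partition values); this is illustrative, not Hudi's actual code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MultiPartitionSketch {
    // ComplexKeyGenerator-style: one path segment per partition field,
    // in the order the fields are configured.
    public static String partitionPath(Map<String, ?> record, List<String> fields) {
        return fields.stream()
            .map(f -> String.valueOf(record.get(f)))
            .collect(Collectors.joining("/"));
    }

    // MultiPartKeysValueExtractor-style: one Hive partition value per segment.
    public static List<String> extractValues(String partitionPath) {
        return Arrays.asList(partitionPath.split("/"));
    }

    public static void main(String[] args) {
        Map<String, String> record = Map.of("location", "beijing", "sex", "male");
        String path = partitionPath(record, List.of("location", "sex"));
        System.out.println(path);                 // beijing/male
        System.out.println(extractValues(path));  // [beijing, male]
    }
}
```

The sample record therefore lands under beijing/male, and the sync registers location=beijing, sex=male as the Hive partition.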

The table Hudi creates in Hive when syncing looks like this:

CREATE EXTERNAL TABLE `multipartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string,
  `sex` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/multiPartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816160557',
  'transient_lastDdlTime'='1597565166')

You can then query the table multipartitiondemo.

2.3 Non-Partitioned

In the non-partitioned case there is no partition field: the dataset written to Hudi has no partitions at all. The key options are:

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to an empty string;
  • hoodie.datasource.hive_sync.partition_fields is set to an empty string, matching the (absent) partition field written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.NonpartitionedKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.NonPartitionedExtractor.
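For completeness, the non-partitioned case can be sketched the same way (again with invented names, not Hudi source): the key generator emits an empty partition path, so every record sits directly under the table base path, and the extractor reports no partition values to sync:

```java
import java.util.Collections;
import java.util.List;

public class NonPartitionedSketch {
    // NonpartitionedKeyGenerator-style: every record maps to the table root.
    public static String partitionPath() {
        return "";
    }

    // NonPartitionedExtractor-style: no partition values are synced to Hive.
    public static List<String> extractValues(String partitionPath) {
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        System.out.println("'" + partitionPath() + "'"); // ''
        System.out.println(extractValues(""));           // []
    }
}
```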

The table Hudi creates in Hive when syncing looks like this:

CREATE EXTERNAL TABLE `nonpartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `location` string,
  `name` string,
  `sex` string,
  `ts` bigint)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/nonPartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816161558',
  'transient_lastDdlTime'='1597565767')

You can then query the table nonpartitiondemo.

2.4 Hive-Style Partitioning

Besides the common layouts above, there is also a Hive-style partition format, such as location=beijing/sex=male. With location and sex as the partition fields, the key options are:

  • DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY() is set to location,sex;
  • hoodie.datasource.hive_sync.partition_fields is set to location,sex, matching the partition fields written to Hudi;
  • DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY() is set to org.apache.hudi.keygen.ComplexKeyGenerator;
  • hoodie.datasource.hive_sync.partition_extractor_class is set to org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor;
  • DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY() is set to true.

The generated Hudi dataset then uses a directory layout of the form

/location=beijing/sex=male
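A sketch of how such a hive-style path is formed (illustrative names only, not Hudi source): with hive-style partitioning enabled, each path segment becomes field=value instead of the bare value:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HiveStylePartitionSketch {
    // Hive-style partitioning: each segment is "field=value", so tools that
    // understand Hive layouts can discover the partitions from the paths.
    public static String partitionPath(Map<String, ?> record, List<String> fields) {
        return fields.stream()
            .map(f -> f + "=" + record.get(f))
            .collect(Collectors.joining("/"));
    }

    public static void main(String[] args) {
        Map<String, String> record = Map.of("location", "beijing", "sex", "male");
        System.out.println(partitionPath(record, List.of("location", "sex")));
        // location=beijing/sex=male
    }
}
```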

The table Hudi creates in Hive when syncing looks like this:

CREATE EXTERNAL TABLE `hivestylepartitiondemo`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `age` bigint,
  `date` string,
  `name` string,
  `ts` bigint)
PARTITIONED BY (
  `location` string,
  `sex` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'file:/tmp/hudi-partitions/hiveStylePartitionDemo'
TBLPROPERTIES (
  'last_commit_time_sync'='20200816172710',
  'transient_lastDdlTime'='1597570039')

You can then query the table hivestylepartitiondemo.

3. Summary

This post covered how Hudi handles the different partition scenarios. The configurations above cover the vast majority of cases, but Hudi is very flexible and also supports custom partition resolvers; see the KeyGenerator and PartitionValueExtractor classes for details. Every partition-path generator used when writing to Hudi is a subclass of KeyGenerator, and every partition-value resolver used when syncing to Hive is a subclass of PartitionValueExtractor. The sample code above has been uploaded to https://github.com/leesf/hudi-demos; the repository will keep adding Hudi demos to help developers get up to speed with Hudi and build enterprise-grade data lakes. Stars and forks are welcome.

Recommended reading

Automatically syncing Apache Hudi tables to Alibaba Cloud Data Lake Analytics (DLA)

Apache Hudi + AWS S3 + Athena in practice

Official: AWS Athena can now query Apache Hudi datasets

Ecosystem: giving Apache Hudi wings with Alluxio

Apache Hudi RFC deep dive: efficient migration of existing tables

