Testing DistCp migration of Hive data when the source cluster adds or deletes files
1 Overview
During a data migration the upstream jobs keep running, so files may be added to or deleted from the source cluster while the copy is in progress. The tests below cover these scenarios.
2 Data files to be synchronized
Source cluster: 192.168.40.100:8020; target cluster: 192.168.40.200:8020.
The data directory and its size are as follows.
The directory corresponds to the table dwd_pollution_distcp, which contains the partitions 20140801 through 20140930.
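The directory size and the partition list can be checked with commands roughly like the following (paths as used throughout this document; the hive -e invocation is just one way to list partitions):

# Total size of the table directory on the source cluster
hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp

# List the table's partitions (20140801 through 20140930)
hive -e "show partitions iot.dwd_pollution_distcp;"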
3 Metadata migration
Both the source and the target cluster store the Hive metastore in PostgreSQL, so the PostgreSQL database is migrated as follows:
① On the new host, create a PostgreSQL user and database with the same names as on the original host beforehand.
② Back up the database on the original host, in the form: pg_dump -U <user> -p <port> <database> > <dump file>
Export the data from the PostgreSQL database on the source cluster.
③ Import the data into the new host.
Run the import command against the PostgreSQL database on the target cluster.
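A minimal sketch of the dump and restore, assuming the metastore database is named metastore and owned by the user hive (the user, port, and database names here are placeholders, not taken from the original):

# On the source cluster's PostgreSQL host: dump the Hive metastore database
pg_dump -U hive -p 5432 metastore > metastore.sql

# On the target cluster's PostgreSQL host (user and database created beforehand): import the dump
psql -U hive -p 5432 -d metastore -f metastore.sql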
4 Test: files added on the source cluster
4.1 Run the distcp command
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

While distcp was running, the following SQL was executed to insert data into a new partition:
insert overwrite table iot.dwd_pollution_distcp partition(stat_date)
select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,"20201021" as stat_date
from iot.dwd_pollution

The DistCp job completed without errors:
... 20/10/21 11:16:21 INFO mapreduce.Job: Job job_1602207790062_0240 completed successfully
20/10/21 11:16:21 INFO mapreduce.Job: Counters: 38
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=4923884
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=5921518162
        HDFS: Number of bytes written=5921449140
        HDFS: Number of read operations=1962
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=1249
        HDFS: Number of bytes read erasure-coded=0
    Job Counters
        Launched map tasks=22
        Other local map tasks=22
        Total time spent by all maps in occupied slots (ms)=3720826
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=3720826
        Total vcore-milliseconds taken by all map tasks=3720826
        Total megabyte-milliseconds taken by all map tasks=3810125824
    Map-Reduce Framework
        Map input records=245
        Map output records=0
        Input split bytes=2530
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=4309
        CPU time spent (ms)=167940
        Physical memory (bytes) snapshot=8456990720
        Virtual memory (bytes) snapshot=59629797376
        Total committed heap usage (bytes)=13333692416
        Peak Map Physical memory (bytes)=418676736
        Peak Map Virtual memory (bytes)=2735362048
    File Input Format Counters
        Bytes Read=66492
    File Output Format Counters
        Bytes Written=0
    DistCp Counters
        Bandwidth in Btyes=69105400
        Bytes Copied=5921449140
        Bytes Expected=5921449140
        Files Copied=182
        DIR_COPY=63

4.2 Verify the data files
File size on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.7 G  17.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-local-02 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 63 items

File size on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.5 G  16.5 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 62 items

This shows that data inserted while DistCp is running does not break the copy, but the inserted data is not picked up by that run.
4.3 Handling files added on the source cluster
The command can be re-run with DistCp's -update option to copy the incremental data.
hadoop distcp -update -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp

4.4 Verify that the files are consistent after the update
(1) File check
File size on the source cluster:
File size on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.7 G  17.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 63 items

(2) SQL check
First repair the partitions on the target cluster, because the newly inserted partition is not yet present in the target cluster's Hive metastore.
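The repair is the MSCK REPAIR TABLE statement also noted in section 5.3, run on the target cluster, for example from the shell:

hive -e "MSCK REPAIR TABLE iot.dwd_pollution_distcp;"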
Row count from Hive on the source cluster:
hive> select count(*) from iot.dwd_pollution_distcp;
OK
138250266

Row count from Hive on the target cluster:
hive> select count(*) from iot.dwd_pollution_distcp;
OK
138250266

5 Test: files deleted on the source cluster
"Deleting files" here refers to a SQL statement performing an overwrite, which first deletes the existing data and then writes the new data. Before the test, the data previously migrated to the target cluster was cleared and the source cluster was restored to its original state.
5.1 Run the distcp command
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

While distcp was running, the following SQL was executed; it first deletes the data in partitions 20140801 through 20140930 of this table and then re-inserts it:
insert overwrite table iot.dwd_pollution_distcp partition(stat_date)
select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,stat_date as stat_date
from iot.dwd_pollution

The error was:
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0 to hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:256)
    ... 10 more
Caused by: java.io.FileNotFoundException: File does not exist: /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:85)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1909)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:735)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:415)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:870)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:816)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2680)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:855)
    at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1768)
    at org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1714)
    at org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1744)
    at org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1610)
    at org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1607)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1619)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:119)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
    ... 11 more

5.2 Verify the data
A partial listing of the files on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
969.3 K  2.8 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140801
1.7 M    5.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140802
2.2 M    6.7 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140803
2.5 M    7.4 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140804
2.7 M    8.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140805

A partial listing of the files on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
969.3 K  2.8 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140801
16.8 M   50.4 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140802
2.2 M    6.7 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140803
2.5 M    7.4 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140804
26.6 M   79.8 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140805

The two sides clearly do not match.
5.3 Repair with the update and delete options
Re-run the command with the -update and -delete options added:
hadoop distcp -p -update -delete hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

File size and row count on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp/
376.5 M  1.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp

hive> select count(*) from dwd_pollution_distcp;
OK
8919106
Time taken: 7.39 seconds, Fetched: 1 row(s)

File size and row count on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp/
376.5 M  1.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp

hive> select count(*) from dwd_pollution_distcp;
OK
8919106
Time taken: 7.39 seconds, Fetched: 1 row(s)

Note: run MSCK REPAIR TABLE dwd_pollution_distcp in Hive first to repair the partitions, because the earlier tests left the partition metadata in the target metastore out of date.
5.4 Another overwrite scenario
The previous test covered an overwrite that runs before the data has finished copying; the other case is an overwrite that runs after the data has already been copied. Before the test, the data previously migrated to the target cluster was cleared.
(1) Run distcp.
(2) Execute the following SQL just before the distcp job finishes:

insert overwrite table iot.dwd_pollution_distcp partition(stat_date)
select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,'20141001' as stat_date
from iot.dwd_pollution;

The distcp job still completed successfully.
(3) Compare the size of partition 20141001 before and after the SQL ran.
On the source cluster, before:
345.3 K  1.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

On the source cluster, after:
188.2 M  564.7 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

On the target cluster:
345.3 K  1.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

(4) Repair
Re-run the distcp command with the -update and -delete options added.
6 Summary
(1) Adding files under the source directory while DistCp is copying does not affect the copy, but the added files are not in the copy list. They can be migrated afterwards by re-running DistCp with the -update option.
(2) What if files in the source (old) cluster directory are deleted during the migration because a SQL statement performed an overwrite?
There are two cases: ① The affected partition had already been fully copied when the overwrite ran. In this case distcp does not report an error, and the discrepancy can only be detected by comparing row counts with SQL.
② The affected partition had not finished copying when the overwrite ran. In this case distcp fails with an error. The -i option can be used to ignore the failures so that the unaffected partitions are still copied normally. Then repair with -update, which refreshes files at the destination (a file whose name matches the source but whose size differs is overwritten; a file whose name and size both match is skipped), together with -delete, which removes files that exist in the destination directory but not in the source (they go to the HDFS trash). The -delete option matters because, if the overwrite shrank the partition, the destination may hold files that no longer exist on the old cluster and they must be removed. A sketch of this flow is shown below.
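A minimal sketch of the two passes just described, reusing the cluster addresses and paths from the earlier tests (the destination of the second pass mirrors the -update example in section 4.3; adjust it to your own layout):

# Pass 1: -i ignores per-file failures caused by the overwrite, so unaffected partitions still copy
hadoop distcp -i -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

# Pass 2: sync the affected partitions and remove destination files that no longer exist on the source
hadoop distcp -p -update -delete hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp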
(3) What if the job fails halfway through the migration?
① Re-run the command with -update together with -delete.
② Delete the dirty data on the new cluster and re-run the migration command. Without the -overwrite option, files that already exist at the destination are skipped; see the sketch below.
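A minimal sketch of option ②; the partition path used here is only an example of partially copied data, not something fixed by the procedure:

# Remove the partially copied (dirty) data on the target cluster
hdfs dfs -rm -r -skipTrash /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810

# Re-run the original copy; files already present at the destination are skipped since -overwrite is not used
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/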