
DataX-MySQL (Read and Write)

Published: 2023/12/19

Using DataX with MySQL

1. Reading from MySQL

Introduction

The MysqlReader plugin reads data from MySQL. Under the hood, MysqlReader connects to the remote MySQL database via JDBC and executes the appropriate SQL statement to SELECT the data out of the database.

Unlike some other relational-database readers, MysqlReader does not support FetchSize.

How it works

In short, MysqlReader connects to the remote MySQL database through a JDBC connector, generates a SELECT statement from the user's configuration, sends it to the remote MySQL database, assembles the returned result set into DataX's abstract internal dataset using DataX's own data types, and hands it to the downstream Writer.

When the user configures table, column, and where, MysqlReader concatenates them into a SQL statement and sends it to MySQL; when the user configures querySql, MysqlReader sends that statement to MySQL as-is.
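As a rough illustration of that assembly step, here is a minimal Python sketch (a hypothetical helper, not DataX's actual Java implementation) showing how a reader `parameter` block could be turned into the SELECT statements it issues, with `querySql` taking priority over table/column/where:

```python
def build_reader_sql(parameter: dict) -> list:
    """Assemble the SELECT statements a reader would issue from its
    'parameter' block. Illustrative only; DataX's real logic lives in
    CommonRdbmsReader (Java)."""
    conn = parameter["connection"][0]
    # querySql takes precedence: use it verbatim, ignore table/column/where.
    if "querySql" in conn:
        return list(conn["querySql"])
    columns = ", ".join(parameter["column"])
    where = parameter.get("where")  # optional filter clause
    sqls = []
    for table in conn["table"]:
        sql = f"select {columns} from {table}"
        if where:
            sql += f" where {where}"
        sqls.append(sql)
    return sqls
```

For example, `column: ["id","name"]` against `table: ["datax_test"]` yields `select id, name from datax_test`, matching the statements seen later in the run logs.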

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {
                "channel": 3
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "splitPk": "id",
                        "connection": [
                            {
                                "table": ["datax_test"],
                                "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "print": true
                    }
                }
            }
        ]
    }
}

Parameter descriptions

--jdbcUrl

Description: the JDBC connection information for the source database, expressed as a JSON array; one database may list several connection addresses. The reason a JSON array is used is that Alibaba's internal deployments support probing multiple IPs: if several are configured, MysqlReader probes each address in turn until it finds a reachable one, and reports an error only if every connection fails. Note that jdbcUrl must be placed inside the connection block. For use outside Alibaba, simply put a single JDBC URL in the array.

The jdbcUrl follows the official MySQL specification and may carry additional connection-control properties.

Required: yes

Default: none
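The probe-in-order behaviour can be sketched in a few lines of Python. This is a hypothetical helper (DataX does this internally in Java); the connectivity check is injected as a function so the sketch needs no real database:

```python
def pick_jdbc_url(urls, is_reachable):
    """Return the first candidate jdbcUrl accepted by the connectivity
    check, mirroring the probe-in-order behaviour described above.
    `is_reachable` is any callable url -> bool (injected for testing)."""
    for url in urls:
        if is_reachable(url):
            return url  # first usable address wins
    raise ConnectionError(f"all jdbcUrl candidates failed: {urls}")
```

The second example later in this article (bad_ip, bad_port, then a good address) exercises exactly this fall-through: the first two candidates fail the connection test and the third is used.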

--username

Description: user name for the data source

Required: yes

Default: none

--password

Description: password for the specified data-source user

Required: yes

Default: none

--table

Description: the table(s) to synchronize, expressed as a JSON array, so multiple tables can be extracted at once. When multiple tables are configured, the user must ensure they share the same schema; MysqlReader does not check whether they form one logical table. Note that table must be placed inside the connection block.

Required: yes

Default: none

--column

Description: the set of columns to synchronize from the configured table, expressed as a JSON array. Use * to select all columns, e.g. ["*"].

Column pruning is supported: you may export only a subset of the columns.

Column reordering is supported: columns need not be exported in schema order.

Constants are supported, written in MySQL SQL syntax, e.g. ["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3", "true"], where id is an ordinary column name, `table` is a column name that is a reserved word, 1 is an integer constant, 'bazhen.csy' is a string constant, null is a null value, to_char(a + 1) is an expression, 2.3 is a floating-point constant, and true is a boolean.

Required: yes

Default: none

--splitPk

Description: if splitPk is specified, MysqlReader shards the data on that field, and DataX launches concurrent tasks to synchronize it, which can greatly improve throughput.

The table's primary key is the recommended splitPk, since primary keys are usually evenly distributed, so the resulting shards are less prone to data hot spots.

Note: splitPk currently supports only integer sharding; floating-point, string, date, and other types are not supported. If an unsupported type is specified, MysqlReader reports an error.

If splitPk is omitted (the key is absent or its value is empty), DataX synchronizes the table through a single channel.

Required: no

Default: empty
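The sharding itself amounts to cutting [MIN(splitPk), MAX(splitPk)] into contiguous WHERE ranges, plus one extra bucket for NULL keys so no row is missed. A simplified Python sketch (DataX's SingleTableSplitUtil is more elaborate, but the shape is this):

```python
def split_pk_where(min_pk: int, max_pk: int, channels: int, pk: str = "id") -> list:
    """Cut [min_pk, max_pk] into contiguous integer ranges, one WHERE
    clause per reader task: half-open ranges except the last (closed so
    max_pk is included), plus a NULL bucket. Simplified sketch, not
    DataX's exact splitting code."""
    step = max(1, (max_pk - min_pk) // channels)
    cuts = list(range(min_pk, max_pk, step)) + [max_pk]
    clauses = []
    for i, (lo, hi) in enumerate(zip(cuts, cuts[1:])):
        op = "<=" if i == len(cuts) - 2 else "<"  # close the final range
        clauses.append(f"({lo} <= {pk} AND {pk} {op} {hi})")
    clauses.append(f"{pk} IS NULL")  # rows with a NULL splitPk get their own task
    return clauses
```

With MIN(id)=1, MAX(id)=5, and 3 channels this yields the same five task clauses that appear in the run log below: four id ranges plus `id IS NULL`.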

--where

Description: the filter condition. MysqlReader builds its SQL from the configured column, table, and where, and extracts data with that statement. A common pattern is synchronizing only the current day's data by setting where to gmt_create > $bizdate. Note: where may not be set to limit 10; LIMIT is not a valid WHERE clause in SQL.

The where condition is an effective way to do incremental synchronization. If where is omitted (either the key or the value is missing), DataX synchronizes the full table.

Required: no

Default: none

--querySql

Description: in some scenarios where is not expressive enough, so this option lets the user supply a custom filtering SQL statement. When it is configured, DataX ignores the table and column options and filters the data with this statement directly: for example, to synchronize the result of a multi-table join, use select a,b from table_a join table_b on table_a.id = table_b.id

When querySql is configured, MysqlReader ignores the table, column, and where settings; querySql takes precedence over them.

Required: no

Default: none

MysqlReader type-conversion table

Please note:

--Apart from the listed field types, no other types are supported.

--tinyint(1) is treated by DataX as an integer.

--year is treated by DataX as a string.

--bit is undefined behavior in DataX.

Execution

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/reader_all.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:22:04.599 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:22:04.612 [main] INFO Engine - the machine info =>
    osInfo: Oracle Corporation 1.8 25.162-b12
    jvmInfo: Mac OS X x86_64 10.13.4
    cpu num: 4
    totalPhysicalMemory: -0.00G
    freePhysicalMemory: -0.00G
    maxFileDescriptorCount: -1
    currentOpenFileDescriptorCount: -1
    GC Names [PS MarkSweep, PS Scavenge]

    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB

2018-11-18 16:22:04.638 [main] INFO Engine -
{
    "content": [
        {
            "reader": {
                "name": "mysqlreader",
                "parameter": {
                    "column": ["id", "name"],
                    "connection": [
                        {
                            "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"],
                            "table": ["datax_test"]
                        }
                    ],
                    "password": "******",
                    "splitPk": "id",
                    "username": "root"
                }
            },
            "writer": {
                "name": "streamwriter",
                "parameter": {
                    "print": true
                }
            }
        }
    ],
    "setting": {
        "errorLimit": {
            "percentage": 0.02,
            "record": 0
        },
        "speed": {
            "channel": 3
        }
    }
}

2018-11-18 16:22:04.673 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:22:04.678 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:22:04.678 [main] INFO JobContainer - DataX jobContainer starts job.
2018-11-18 16:22:04.681 [main] INFO JobContainer - Set jobId = 0
2018-11-18 16:22:05.323 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:22:05.478 [job-0] INFO OriginalConfPretreatmentUtil - table:[datax_test] has columns:[id,name].
2018-11-18 16:22:05.490 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:22:05.491 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:22:05.492 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2018-11-18 16:22:05.493 [job-0] INFO JobContainer - jobContainer starts to do split ...
2018-11-18 16:22:05.493 [job-0] INFO JobContainer - Job set Channel-Number to 3 channels.
2018-11-18 16:22:05.618 [job-0] INFO SingleTableSplitUtil - split pk [sql=SELECT MIN(id),MAX(id) FROM datax_test] is running...
2018-11-18 16:22:05.665 [job-0] INFO SingleTableSplitUtil - After split(), allQuerySql=[
    select id,name from datax_test where (1 <= id AND id < 2)
    select id,name from datax_test where (2 <= id AND id < 3)
    select id,name from datax_test where (3 <= id AND id < 4)
    select id,name from datax_test where (4 <= id AND id <= 5)
    select id,name from datax_test where id IS NULL
].
2018-11-18 16:22:05.666 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [5] tasks.
2018-11-18 16:22:05.667 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] splits to [5] tasks.
2018-11-18 16:22:05.697 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:22:05.721 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:22:05.744 [job-0] INFO JobContainer - Running by standalone Mode.
2018-11-18 16:22:05.758 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [3] channels for [5] tasks.
2018-11-18 16:22:05.765 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:22:05.766 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:22:05.790 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:22:05.795 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
2018-11-18 16:22:05.796 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.796 [0-0-1-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.820 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] is started
2018-11-18 16:22:05.821 [0-0-2-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.981 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
1	test1
2018-11-18 16:22:06.030 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[241]ms
2018-11-18 16:22:06.033 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] is started
2018-11-18 16:22:06.034 [0-0-3-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.041 [0-0-2-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
3	test3
2018-11-18 16:22:06.137 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[326]ms
2018-11-18 16:22:06.139 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] is started
2018-11-18 16:22:06.139 [0-0-4-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where id IS NULL
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.157 [0-0-1-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2	test2
2018-11-18 16:22:06.243 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[449]ms
2018-11-18 16:22:11.295 [0-0-3-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
4	test4
5	test5
2018-11-18 16:22:11.393 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[5360]ms
2018-11-18 16:22:15.784 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2018-11-18 16:22:25.166 [0-0-4-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where id IS NULL
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:25.413 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[19274]ms
2018-11-18 16:22:25.417 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:22:25.786 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.786 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:22:25.787 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do post work.
2018-11-18 16:22:25.788 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:22:25.788 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:22:25.791 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:22:25.796 [job-0] INFO JobContainer -
[total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%

[total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s

2018-11-18 16:22:25.797 [job-0] INFO JobContainer - PerfTrace not enable!
2018-11-18 16:22:25.798 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.799 [job-0] INFO JobContainer -
Job start time     : 2018-11-18 16:22:04
Job end time       : 2018-11-18 16:22:25
Total elapsed time : 21s
Average traffic    : 1B/s
Record write speed : 0rec/s
Records read       : 5
Read/write failures: 0

The result output can be seen in the console.

2. Reading from MySQL with a query condition

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {
                "channel": 1
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "connection": [
                            {
                                "querySql": ["select * from datax_test where id < 3;"],
                                "jdbcUrl": [
                                    "jdbc:mysql://bad_ip:3306/database",
                                    "jdbc:mysql://127.0.0.1:bad_port/database",
                                    "jdbc:mysql://192.168.1.123:3306/test"
                                ]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "print": true,
                        "encoding": "UTF-8"
                    }
                }
            }
        ]
    }
}

Execution

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/reader_select.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:31:20.508 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:31:20.521 [main] INFO Engine - the machine info =>
    osInfo: Oracle Corporation 1.8 25.162-b12
    jvmInfo: Mac OS X x86_64 10.13.4
    cpu num: 4
    totalPhysicalMemory: -0.00G
    freePhysicalMemory: -0.00G
    maxFileDescriptorCount: -1
    currentOpenFileDescriptorCount: -1
    GC Names [PS MarkSweep, PS Scavenge]

    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB

2018-11-18 16:31:20.557 [main] INFO Engine -
{
    "content": [
        {
            "reader": {
                "name": "mysqlreader",
                "parameter": {
                    "connection": [
                        {
                            "jdbcUrl": [
                                "jdbc:mysql://bad_ip:3306/database",
                                "jdbc:mysql://127.0.0.1:bad_port/database",
                                "jdbc:mysql://192.168.1.123:3306/test"
                            ],
                            "querySql": ["select * from datax_test where id < 3;"]
                        }
                    ],
                    "password": "******",
                    "username": "root"
                }
            },
            "writer": {
                "name": "streamwriter",
                "parameter": {
                    "encoding": "UTF-8",
                    "print": true
                }
            }
        }
    ],
    "setting": {
        "speed": {
            "channel": 1
        }
    }
}

2018-11-18 16:31:20.609 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:31:20.612 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:31:20.613 [main] INFO JobContainer - DataX jobContainer starts job.
2018-11-18 16:31:20.618 [main] INFO JobContainer - Set jobId = 0
2018-11-18 16:31:21.140 [job-0] WARN DBUtil - test connection of [jdbc:mysql://bad_ip:3306/database] failed, for Code:[MYSQLErrCode-02], Description:[The IP address or port of the database service is wrong. Please check the configured IP address and port, or contact your DBA to confirm them. If you are a sync-center user, contact your DBA to confirm that the IP and port recorded in idb match the database's actual current information]. - Underlying error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server..
2018-11-18 16:31:21.143 [job-0] WARN DBUtil - test connection of [jdbc:mysql://127.0.0.1:bad_port/database] failed, for Code:[DBUtilErrorCode-10], Description:[Failed to connect to the database. Please check your username, password, database name, IP, and port, or ask your DBA for help (mind the network environment).]. - Underlying error: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Cannot load connection class because of underlying exception: 'java.lang.NumberFormatException: For input string: "bad_port"'..
2018-11-18 16:31:21.498 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:31:21.512 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:31:21.518 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:31:21.520 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2018-11-18 16:31:21.521 [job-0] INFO JobContainer - jobContainer starts to do split ...
2018-11-18 16:31:21.524 [job-0] INFO JobContainer - Job set Channel-Number to 1 channels.
2018-11-18 16:31:21.546 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [1] tasks.
2018-11-18 16:31:21.548 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] splits to [1] tasks.
2018-11-18 16:31:21.587 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:31:21.592 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:31:21.597 [job-0] INFO JobContainer - Running by standalone Mode.
2018-11-18 16:31:21.629 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2018-11-18 16:31:21.639 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:31:21.639 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:31:21.658 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:31:21.667 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select * from datax_test where id < 3;
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:31:21.814 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select * from datax_test where id < 3;
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
1	test1
2	test2
2018-11-18 16:31:21.865 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[211]ms
2018-11-18 16:31:21.866 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:31:31.685 [job-0] INFO StandAloneJobContainerCommunicator - Total 2 records, 12 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:31:31.685 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:31:31.686 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do post work.
2018-11-18 16:31:31.687 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:31:31.687 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:31:31.688 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:31:31.693 [job-0] INFO JobContainer -
[total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%

[total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s

2018-11-18 16:31:31.693 [job-0] INFO JobContainer - PerfTrace not enable!
2018-11-18 16:31:31.694 [job-0] INFO StandAloneJobContainerCommunicator - Total 2 records, 12 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:31:31.695 [job-0] INFO JobContainer -
Job start time     : 2018-11-18 16:31:20
Job end time       : 2018-11-18 16:31:31
Total elapsed time : 11s
Average traffic    : 1B/s
Record write speed : 0rec/s
Records read       : 2
Read/write failures: 0

3. Reading from MySQL and writing to MySQL

Writing to MySQL: introduction

The MysqlWriter plugin writes data into target tables on a MySQL primary instance. Under the hood, MysqlWriter connects to the remote MySQL database via JDBC and executes insert into ... (or replace into ...) statements to write the data. Writes are committed to the database in batches, and the target database must use the InnoDB engine.

How it works

MysqlWriter receives the protocol data produced by the Reader through the DataX framework and, depending on the configured writeMode, generates either

insert into ... (rows that hit a primary-key or unique-index conflict are not written)

or

replace into ... (behaves like insert into when there is no primary-key or unique-index conflict; on conflict, every field of the existing row is replaced by the new row)

statements to write the data into MySQL. For performance it uses PreparedStatement + Batch with rewriteBatchedStatements=true, buffering data in a per-thread buffer and issuing the write only once the buffer reaches a predefined threshold.
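To make the writeMode distinction concrete, here is a small Python sketch (a hypothetical illustration; the real templates live in DataX's CommonRdbmsWriter) that renders the prepared-statement template per writeMode and groups records into batches:

```python
def build_write_sql(write_mode: str, table: str, columns: list) -> str:
    """Render the prepared-statement template for a given writeMode.
    'update' is rendered as INSERT ... ON DUPLICATE KEY UPDATE."""
    cols = ",".join(columns)
    marks = ",".join("?" for _ in columns)
    if write_mode == "insert":
        return f"INSERT INTO {table} ({cols}) VALUES ({marks})"
    if write_mode == "replace":
        return f"REPLACE INTO {table} ({cols}) VALUES ({marks})"
    if write_mode == "update":
        updates = ",".join(f"{c}=VALUES({c})" for c in columns)
        return (f"INSERT INTO {table} ({cols}) VALUES ({marks}) "
                f"ON DUPLICATE KEY UPDATE {updates}")
    raise ValueError(f"unknown writeMode: {write_mode}")

def batches(records: list, batch_size: int = 1024) -> list:
    """Group records so each batch is one round trip (cf. the batchSize
    parameter described later; 1024 is its documented default)."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
```

Compare the run log below, where the writer echoes its template as `insert INTO %s (id,name) VALUES(?,?)` before the first batch is sent.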

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {
                "channel": 3
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "splitPk": "id",
                        "connection": [
                            {
                                "table": ["datax_test"],
                                "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "writeMode": "insert",
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "session": ["set session sql_mode='ANSI'"],
                        "preSql": ["delete from datax_target_test"],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk",
                                "table": ["datax_target_test"]
                            }
                        ]
                    }
                }
            }
        ]
    }
}

Execution

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/3.mysql2mysql.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:49:13.176 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:49:13.189 [main] INFO Engine - the machine info =>
    osInfo: Oracle Corporation 1.8 25.162-b12
    jvmInfo: Mac OS X x86_64 10.13.4
    cpu num: 4
    totalPhysicalMemory: -0.00G
    freePhysicalMemory: -0.00G
    maxFileDescriptorCount: -1
    currentOpenFileDescriptorCount: -1
    GC Names [PS MarkSweep, PS Scavenge]

    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB

2018-11-18 16:49:13.218 [main] INFO Engine -
{
    "content": [
        {
            "reader": {
                "name": "mysqlreader",
                "parameter": {
                    "column": ["id", "name"],
                    "connection": [
                        {
                            "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"],
                            "table": ["datax_test"]
                        }
                    ],
                    "password": "******",
                    "splitPk": "id",
                    "username": "root"
                }
            },
            "writer": {
                "name": "mysqlwriter",
                "parameter": {
                    "column": ["id", "name"],
                    "connection": [
                        {
                            "jdbcUrl": "jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk",
                            "table": ["datax_target_test"]
                        }
                    ],
                    "password": "******",
                    "preSql": ["delete from datax_target_test"],
                    "session": ["set session sql_mode='ANSI'"],
                    "username": "root",
                    "writeMode": "insert"
                }
            }
        }
    ],
    "setting": {
        "errorLimit": {
            "percentage": 0.02,
            "record": 0
        },
        "speed": {
            "channel": 3
        }
    }
}

2018-11-18 16:49:13.268 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:49:13.271 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:49:13.272 [main] INFO JobContainer - DataX jobContainer starts job.
2018-11-18 16:49:13.280 [main] INFO JobContainer - Set jobId = 0
2018-11-18 16:49:13.991 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:49:14.147 [job-0] INFO OriginalConfPretreatmentUtil - table:[datax_test] has columns:[id,name].
2018-11-18 16:49:14.567 [job-0] INFO OriginalConfPretreatmentUtil - table:[datax_target_test] all columns:[
id,name
].
2018-11-18 16:49:14.697 [job-0] INFO OriginalConfPretreatmentUtil - Write data [
insert INTO %s (id,name) VALUES(?,?)
], which jdbcUrl like:[jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk&yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true]
2018-11-18 16:49:14.698 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:49:14.698 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:49:14.699 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] do prepare work .
2018-11-18 16:49:14.765 [job-0] INFO CommonRdbmsWriter$Job - Begin to execute preSqls:[delete from datax_target_test]. context info:jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk&yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:49:14.770 [job-0] INFO JobContainer - jobContainer starts to do split ...
2018-11-18 16:49:14.771 [job-0] INFO JobContainer - Job set Channel-Number to 3 channels.
2018-11-18 16:49:14.879 [job-0] INFO SingleTableSplitUtil - split pk [sql=SELECT MIN(id),MAX(id) FROM datax_test] is running...
2018-11-18 16:49:14.926 [job-0] INFO SingleTableSplitUtil - After split(), allQuerySql=[
    select id,name from datax_test where (1 <= id AND id < 2)
    select id,name from datax_test where (2 <= id AND id < 3)
    select id,name from datax_test where (3 <= id AND id < 4)
    select id,name from datax_test where (4 <= id AND id <= 5)
    select id,name from datax_test where id IS NULL
].
2018-11-18 16:49:14.926 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [5] tasks.
2018-11-18 16:49:14.928 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] splits to [5] tasks.
2018-11-18 16:49:14.974 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:49:14.991 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:49:14.995 [job-0] INFO JobContainer - Running by standalone Mode.
2018-11-18 16:49:15.011 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [3] channels for [5] tasks.
2018-11-18 16:49:15.022 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:49:15.022 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:49:15.041 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:49:15.052 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
2018-11-18 16:49:15.052 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:15.052 [0-0-1-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:15.057 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] is started
2018-11-18 16:49:15.057 [0-0-2-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:15.175 [0-0-0-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:15.215 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:15.233 [0-0-2-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:19.387 [0-0-1-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:19.457 [0-0-2-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:19.575 [0-0-2-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:19.612 [0-0-1-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:19.687 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[4632]ms
2018-11-18 16:49:19.693 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] is started
2018-11-18 16:49:19.693 [0-0-0-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:19.696 [0-0-3-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:19.796 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[4761]ms
2018-11-18 16:49:19.799 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] is started
2018-11-18 16:49:19.799 [0-0-4-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where id IS NULL
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:19.873 [0-0-3-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:19.882 [0-0-3-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:19.989 [0-0-1-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:20.000 [0-0-4-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where id IS NULL
] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:49:20.074 [0-0-3-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:20.107 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[5055]ms
2018-11-18 16:49:20.142 [0-0-4-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:20.212 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[522]ms
2018-11-18 16:49:25.061 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2018-11-18 16:49:25.578 [0-0-4-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']
2018-11-18 16:49:25.671 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[5872]ms
2018-11-18 16:49:25.671 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:49:35.064 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:49:35.064 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:49:35.065 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] do post work.
2018-11-18 16:49:35.066 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:49:35.067 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:49:35.068 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:49:35.072 [job-0] INFO JobContainer -
[total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%

[total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s

2018-11-18 16:49:35.072 [job-0] INFO JobContainer - PerfTrace not enable!
2018-11-18 16:49:35.073 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:49:35.074 [job-0] INFO JobContainer -
Job start time     : 2018-11-18 16:49:13
Job end time       : 2018-11-18 16:49:35
Total elapsed time : 21s
Average traffic    : 1B/s
Record write speed : 0rec/s
Records read       : 5
Read/write failures: 0

Parameter descriptions:

--jdbcUrl

Description: the JDBC connection information for the target database. At run time, DataX appends the following properties to the jdbcUrl you provide: yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true

Note: 1. Only one jdbcUrl value may be configured per database. This differs from MysqlReader's support for probing multiple replicas, because a database with multiple primaries (dual-primary import) is not supported here.

2. The jdbcUrl follows the official MySQL specification and may carry additional connection-control properties; for example, to set the connection encoding to gbk, append useUnicode=true&characterEncoding=gbk to the jdbcUrl. See the official MySQL documentation or consult your DBA for details.

Required: yes

Default: none

--username

Description: user name for the target database

Required: yes

Default: none

--password

Description: password for the target database

Required: yes

Default: none

--table

Description: the name(s) of the target table(s). Writing to one or more tables is supported. When multiple tables are configured, they must all have identical structures.

Note: table and jdbcUrl must be placed inside the connection block.

Required: yes

Default: none

--column

Description: the fields of the target table to write, separated by commas, e.g. "column": ["id","name","age"]. To write all columns in order, use *, e.g. "column": ["*"].

The column option must be specified and may not be left empty!

Note: 1. We strongly discourage the ["*"] configuration, because if the target table's column count or types change, your job may run incorrectly or fail.

2. column may not contain any constant values.

Required: yes

Default: none

--session

Description: SQL statements DataX executes when it obtains a MySQL connection, modifying the session properties of that connection

Required: no

Default: empty

--preSql

Description: standard SQL statements executed before any data is written to the target table. If the SQL needs to refer to the table being written, use @table; the variable is replaced with the actual table name when the statement is executed. For example, if the job writes to 100 identically structured shard tables on the target side (named datax_00, datax_01, ..., datax_98, datax_99) and you want to delete their existing data before importing, configure "preSql": ["delete from @table"]; before each table is written, the corresponding delete from <that table's name> is executed first.

Required: no

Default: none
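The @table substitution described above can be sketched as follows; this is a hypothetical Python helper (DataX performs this expansion internally), shown just to make the expansion concrete:

```python
def expand_pre_sql(pre_sqls: list, tables: list) -> list:
    """Expand @table placeholders in preSql statements against every
    target table, so each table gets its own statement before data is
    written. Illustrative sketch of the substitution, not DataX code."""
    stmts = []
    for table in tables:
        for sql in pre_sqls:
            stmts.append(sql.replace("@table", table))
    return stmts
```

For instance, `["delete from @table"]` against the shard tables `datax_00` and `datax_01` expands to one DELETE per table.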

--postSql

Description: standard SQL statements executed after the data has been written to the target table (same mechanism as preSql)

Required: no

Default: none

--writeMode

Description: controls whether data is written to the target table with insert into, replace into, or ON DUPLICATE KEY UPDATE statements

Required: yes

Options: insert/replace/update

Default: insert

--batchSize

Description: the number of records submitted in one batch. A larger value greatly reduces the number of network round trips between DataX and MySQL and can improve overall throughput, but setting it too high may cause the DataX process to run out of memory (OOM).

Required: no

Default: 1024

Type conversion

Like MysqlReader, MysqlWriter currently supports most MySQL types, but a few individual types are unsupported, so check your column types.

The MysqlWriter type-conversion table is listed below:

The bit type is currently an undefined type conversion.

Summary

This concludes DataX-MySQL (read and write). Hopefully it helps you solve the problems you ran into.