

DolphinBeat

Other languages: 中文

This is a highly available server that pulls the MySQL binlog, parses it, and pushes incremental update data into different sinks.

The sink types currently supported officially are Kafka and Stdout.

Features:

Supports MySQL and MariaDB.

Supports both GTID and non-GTID modes.

Supports MySQL failover: when using GTID, dolphinbeat keeps working smoothly even if MySQL fails over.

Supports MySQL DDL: dolphinbeat can parse DDL statements and replay them against its own in-memory schema data.

Supports breakpoint resume: dolphinbeat persists its metadata, so it can resume work after a crash.

Supports standalone and election modes: if election is enabled, a dolphinbeat follower takes over when the leader dies.

Supports filter rules based on database and table for each sink.

Supports an HTTP API to inspect dolphinbeat.

Supports metrics in the Prometheus style.

The sink types are extensible: you can implement your own sink if needed (see the sketch below), but I recommend using the Kafka sink and letting the business consume data from Kafka.
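For illustration only, a custom sink might look roughly like the Go sketch below. The names here (Event, Sink, WriteEvents, and so on) are assumptions for the example, not dolphinbeat's real interface, which lives in the project source:

package main

import "fmt"

// Event is a stand-in for dolphinbeat's parsed binlog event type
// (hypothetical; the real type is defined in the project source).
type Event struct {
    Schema string
    Table  string
    Type   string // e.g. "insert", "update", "delete"
}

// Sink is a guess at the contract a custom sink would satisfy.
type Sink interface {
    Open() error
    WriteEvents(events []*Event) error
    Close() error
}

// logSink is a toy sink that just prints each event it receives.
type logSink struct{}

func (s *logSink) Open() error  { return nil }
func (s *logSink) Close() error { return nil }

func (s *logSink) WriteEvents(events []*Event) error {
    for _, e := range events {
        fmt.Printf("%s on %s.%s\n", e.Type, e.Schema, e.Table)
    }
    return nil
}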

Quick start

Prepare your MySQL source: turn on binlog with the ROW format (a minimal config sketch follows), then run the docker command below and you will see JSON printed by dolphinbeat's Stdout sink.
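If binlog is not yet enabled, the relevant my.cnf settings look roughly like this (the server-id value is an arbitrary example, and the GTID lines are only needed when running in GTID mode):

[mysqld]
server-id                = 1
log-bin                  = mysql-bin
binlog_format            = ROW
# Only when running in GTID mode:
gtid_mode                = ON
enforce_gtid_consistency = ON

With the source ready, start the container: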

docker run -e MYSQL_ADDR='8.8.8.8:3306' -e MYSQL_USER='root' -e MYSQL_PASSWORD='xxx' bytewatch/dolphinbeat

{
  "header": {
    "server_id": 66693,
    "type": "rotate",
    "timestamp": 0,
    "log_pos": 0
  },
  "next_log_name": "mysql-bin.000008",
  "next_log_pos": 4
}

...

...

The docker image above is built for MySQL with GTID, with only the Stdout sink enabled.

If your source database does not use GTID, add the -e GTID_ENABLED='false' arg. If your source database is MariaDB, add the -e FLAVOR='mariadb' arg.

If you want to do deeper testing, run the following command to get a shell:

docker run -it -e MYSQL_ADDR='8.8.8.8:3306' -e MYSQL_USER='root' -e MYSQL_PASSWORD='xxx' bytewatch/dolphinbeat sh

In this shell, you can modify the configuration in the /data directory and then start dolphinbeat manually.

The configuration description is presented in the Wiki.

Compile from source

Type the following commands and you will get a built binary distribution in the build/dolphinbeat directory:

go get github.com/bytewatch/dolphinbeat

make

Documents

Sink

Kafka

This is the sink intended for production use. Dolphinbeat writes Protobuf-encoded data into Kafka, and the business consumes the data from Kafka.

The business needs to use the client library to decode the data in Kafka messages and do stream processing on the binlog stream.

The Protobuf protocol is presented in protocol.proto.
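As a rough illustration of the consuming side, the sketch below reads one partition with the Shopify/sarama Kafka client. The topic name and decodeEvent are assumptions for the example; the real decoding call is provided by dolphinbeat's client library against the schema in protocol.proto:

package main

import (
    "fmt"
    "log"

    "github.com/Shopify/sarama"
)

// decodeEvent is a hypothetical placeholder: the real decoding is done
// by dolphinbeat's client library using the protocol.proto schema.
func decodeEvent(value []byte) (string, error) {
    return fmt.Sprintf("%d bytes of Protobuf payload", len(value)), nil
}

func main() {
    // A nil config makes sarama fall back to its defaults.
    consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer consumer.Close()

    // Topic name "dolphinbeat" is an assumption; use whatever topic
    // the Kafka sink is configured to write to.
    pc, err := consumer.ConsumePartition("dolphinbeat", 0, sarama.OffsetOldest)
    if err != nil {
        log.Fatal(err)
    }
    defer pc.Close()

    for msg := range pc.Messages() {
        ev, err := decodeEvent(msg.Value)
        if err != nil {
            log.Printf("decode error at offset %d: %v", msg.Offset, err)
            continue
        }
        fmt.Printf("offset=%d %s\n", msg.Offset, ev)
    }
}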

Kafka sink has following features:

Strong-ordered delivery: the business receives events in the same order as the MySQL binlog.

Exactly-once delivery: the client library can deduplicate messages carrying the same sequence number, which may be caused by producer retries or Kafka failover (see the sketch after this list).

Unlimited event size: dolphinbeat uses a fragmentation algorithm, similar to IPv4, when a binlog event is bigger than Kafka's max message size.
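A minimal sketch of the dedup idea, assuming each decoded event carries a monotonically increasing sequence number (the type and field names here are illustrative; the real sequence number lives in the Protobuf schema):

// Deduper drops events whose sequence number has already been seen,
// which handles duplicates from producer retries or Kafka failover.
type Deduper struct {
    lastSeq uint64
    started bool
}

// Accept reports whether the event with the given sequence number is
// new (true) or a duplicate that should be skipped (false).
func (d *Deduper) Accept(seq uint64) bool {
    if d.started && seq <= d.lastSeq {
        return false // already processed: drop the duplicate
    }
    d.started = true
    d.lastSeq = seq
    return true
}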

A small example using the client library is presented in kafka-consumer.

kafka-consumer is a command-line tool that decodes the data in Kafka messages and prints it out as JSON.

Stdout

This is a sink used for demonstration. Dolphinbeat writes JSON-encoded data to Stdout.

The Stdout sink doesn't support breakpoint resume.

Special thanks

Thanks to siddontang for his popular and powerful go-mysql library!

License

