
ClickHouse: A Detailed Cluster Deployment Guide


Basic introduction:

ClickHouse comes from Yandex, Russia's largest search company (traces of Yandex are still visible in its configuration files), and was open-sourced in 2016. It is a distributed, column-oriented DBMS built for OLAP (online analytical processing); its OLAP performance is excellent and the market response has been very strong. Column-oriented databases are well suited to OLAP workloads (query-heavy scenarios, where processing can be up to 100x faster), and ClickHouse performs even better on SSDs.

Official site: https://clickhouse.tech

Source code: https://github.com/ClickHouse/ClickHouse

Key features:

  • A true column-oriented DBMS
  • Real-time data updates
  • SQL syntax support
  • Multi-core parallel processing
  • Efficient data compression
  • Distributed processing
  • Data replication and integrity
  • Rich indexing
  • Cluster management
  • Can read MySQL data directly (see the SQL sketch after this list)
  • Well suited to online, real-time queries
  • Supports approximate (estimated) computation (also sketched below)
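The last two capabilities can be sketched directly in SQL. The snippet below is illustrative only: the MySQL host, database, table, and credentials are hypothetical placeholders, and the approximate functions run against the ck_all table created later in this guide.

-- Read a MySQL table through ClickHouse's mysql() table function
-- (host, database, table, user, and password here are placeholders):
SELECT count() FROM mysql('mysql-host:3306', 'some_db', 'some_table', 'some_user', 'some_password');

-- Approximate computation: uniq() estimates distinct values and
-- quantile() estimates percentiles, trading exactness for speed:
SELECT uniq(Year) AS approx_years, quantile(0.95)(Year) AS p95_year FROM ck_all;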

Current drawbacks:

  • No secondary indexes
  • No transactions
  • No full UPDATE/DELETE implementation

Use cases:

  • Analysis, reporting, and monitoring over massive data volumes


Environment:

Servers: CentOS Linux release 7.4.1708 (Core) × 3


Install dependencies:

yum install -y curl pygpgme yum-utils coreutils epel-release

Install via yum. These packages come from a third-party repository (note the @Altinity_clickhouse column in the output below), so that repository must be configured on each machine first:

yum install clickhouse-server clickhouse-client clickhouse-server-common clickhouse-compressor

Verify that the packages are installed:

yum list installed 'clickhouse*'

Installed Packages
clickhouse-client.x86_64          19.17.4.11-1.el7   @Altinity_clickhouse
clickhouse-common-static.x86_64   19.17.4.11-1.el7   @Altinity_clickhouse
clickhouse-compressor.x86_64      1.1.54336-3.el7    @Altinity_clickhouse
clickhouse-server.x86_64          19.17.4.11-1.el7   @Altinity_clickhouse
clickhouse-server-common.x86_64   19.17.4.11-1.el7   @Altinity_clickhouse


Run clickhouse-server:

/etc/init.d/clickhouse-server restart

Add a clickhouse user:

useradd clickhouse

Set up passwordless SSH login between the nodes:

chmod 755 ~/.ssh
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
vi /etc/ssh/sshd_config
    PubkeyAuthentication yes
service sshd restart
ssh-copy-id -i ~/.ssh/id_rsa.pub root@xxxx

Connect to clickhouse-server:

clickhouse-client
ClickHouse client version 19.17.4.11.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 19.17.4 revision 54428.

127.0.0.1 :)

Create the required files and directories:

cd /usr/local/clickhouse
> config.xml
> metrika.xml
> users.xml
mkdir cores
mkdir data
mkdir flags
mkdir log
mkdir metadata
mkdir status
mkdir tmp
mkdir format_schemas

tree .
.
├── config.xml
├── cores
├── data
├── flags
├── format_schemas
├── log
├── metadata
├── metrika.xml
├── status
├── tmp
└── users.xml

Configure config.xml. Note that include_from below points to /etc/clickhouse-server/metrika.xml; if you keep metrika.xml under /usr/local/clickhouse as created above, adjust that path (or symlink the file) accordingly:

<?xml version="1.0"?>
<yandex>
    <logger>
        <level>trace</level>
        <log>/usr/local/clickhouse/log/server.log</log>
        <errorlog>/usr/local/clickhouse/log/error.log</errorlog>
        <size>1000M</size>
        <count>10</count>
    </logger>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <interserver_http_port>9009</interserver_http_port>
    <listen_host>0.0.0.0</listen_host>
    <path>/usr/local/clickhouse/data/clickhouse/</path>
    <tmp_path>/usr/local/clickhouse/data/clickhouse/tmp/</tmp_path>
    <users_config>users.xml</users_config>
    <default_profile>default</default_profile>
    <default_database>default</default_database>
    <remote_servers incl="clickhouse_remote_servers" />
    <zookeeper incl="zookeeper-servers" optional="true" />
    <macros incl="macros" optional="true" />
    <include_from>/etc/clickhouse-server/metrika.xml</include_from>
    <mark_cache_size>5368709120</mark_cache_size>
</yandex>

Configure metrika.xml. The <macros> block must differ on each host: the <replica> value below is for 192.168.1.1; set it to 192.168.1.2 and 192.168.1.3 on the other two nodes:

<yandex>
    <clickhouse_remote_servers>
        <report_shards_replicas>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>192.168.1.1</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
                <replica>
                    <host>192.168.1.2</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>192.168.1.2</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
                <replica>
                    <host>192.168.1.3</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
            </shard>
            <shard>
                <weight>1</weight>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>192.168.1.3</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
                <replica>
                    <host>192.168.1.1</host>
                    <port>9000</port>
                    <user>default</user>
                    <password>6lYaUiFi</password>
                </replica>
            </shard>
        </report_shards_replicas>
    </clickhouse_remote_servers>

    <macros>
        <replica>192.168.1.1</replica>
    </macros>

    <networks>
        <ip>::/0</ip>
    </networks>

    <zookeeper-servers>
        <node index="1">
            <host>192.168.1.1</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>192.168.1.2</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>192.168.1.3</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>

Configure users.xml:

<?xml version="1.0"?>
<yandex>
    <profiles>
        <!-- Profile for read-write users -->
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
        </default>
        <!-- Profile for read-only users -->
        <readonly>
            <max_memory_usage>10000000000</max_memory_usage>
            <use_uncompressed_cache>0</use_uncompressed_cache>
            <load_balancing>random</load_balancing>
            <readonly>1</readonly>
        </readonly>
    </profiles>

    <!-- Quotas -->
    <quotas>
        <!-- Name of quota. -->
        <default>
            <interval>
                <duration>3600</duration>
                <queries>0</queries>
                <errors>0</errors>
                <result_rows>0</result_rows>
                <read_rows>0</read_rows>
                <execution_time>0</execution_time>
            </interval>
        </default>
    </quotas>

    <users>
        <!-- Read-write user -->
        <default>
            <password_sha256_hex>967f3bf355dddfabfca1c9f5cab39352b2ec1cd0b05f9e1e6b8f629705fe7d6e</password_sha256_hex>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
        <!-- Read-only user -->
        <ck>
            <password_sha256_hex>967f3bf355dddfabfca1c9f5cab39352b2ec1cd0b05f9e1e6b8f629705fe7d6e</password_sha256_hex>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>readonly</profile>
            <quota>default</quota>
        </ck>
    </users>
</yandex>
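The password_sha256_hex values above are SHA-256 digests of the plaintext passwords. A minimal way to produce one on Linux (replace PASSWORD with your own; the -n flag keeps echo from appending a newline, which would change the hash):

echo -n "PASSWORD" | sha256sum | tr -d ' -'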

Start the service:

/etc/init.d/clickhouse-server start
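As a quick liveness check on each node, ClickHouse's HTTP interface on port 8123 answers a plain GET with "Ok.":

curl http://127.0.0.1:8123/
Ok.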

Inspect the cluster:

clickhouse-client --host=192.168.1.1 --port=9000 --user=default --password=6lYaUiFi

select * from system.clusters;

┌─cluster────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ report_shards_replicas │         1 │            1 │           1 │ 192.168.1.1 │ 192.168.1.1  │ 9000 │        1 │ default │                  │            0 │                       0 │
│ report_shards_replicas │         1 │            1 │           2 │ 192.168.1.2 │ 192.168.1.2  │ 9000 │        0 │ default │                  │            0 │                       0 │
│ report_shards_replicas │         2 │            1 │           1 │ 192.168.1.2 │ 192.168.1.2  │ 9000 │        0 │ default │                  │            0 │                       0 │
│ report_shards_replicas │         2 │            1 │           2 │ 192.168.1.3 │ 192.168.1.3  │ 9000 │        0 │ default │                  │            0 │                       0 │
│ report_shards_replicas │         3 │            1 │           1 │ 192.168.1.3 │ 192.168.1.3  │ 9000 │        0 │ default │                  │            0 │                       0 │
│ report_shards_replicas │         3 │            1 │           2 │ 192.168.1.1 │ 192.168.1.1  │ 9000 │        1 │ default │                  │            0 │                       0 │
└────────────────────────┴───────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

6 rows in set. Elapsed: 0.002 sec.

Create the local table, then the Distributed table on top of it. Both must be created on all three machines, since DDL is not synchronized across nodes:

CREATE TABLE ck_local (
    UnixDate Date,
    Year UInt16
) ENGINE = MergeTree(UnixDate, (Year, UnixDate), 8192);

CREATE TABLE ck_all AS ck_local
ENGINE = Distributed(report_shards_replicas, default, ck_local, rand());
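In this layout replication is driven by the Distributed engine itself (internal_replication is false), so the local table is a plain MergeTree. If you would rather let ClickHouse manage replication through the ZooKeeper ensemble configured in metrika.xml, the local table could be a ReplicatedMergeTree instead. This is a sketch only, assuming internal_replication is switched to true and a <shard> macro is added next to <replica> in metrika.xml (neither change is in the configuration above):

CREATE TABLE ck_local (
    UnixDate Date,
    Year UInt16
) ENGINE = ReplicatedMergeTree(
    '/clickhouse/tables/{shard}/ck_local',  -- ZooKeeper path; the {shard} macro is an assumption
    '{replica}',                            -- expands from <macros> in metrika.xml
    UnixDate, (Year, UnixDate), 8192);      -- same legacy MergeTree parameters as above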

Insert data:

insert into ck_all (UnixDate, Year) values ('2010-03-20', 2010);
insert into ck_all (UnixDate, Year) values ('2011-03-20', 2011);
insert into ck_all (UnixDate, Year) values ('2012-03-20', 2012);
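To confirm the rows were spread across the shards, query the Distributed table for the full set, and each node's local table for its share (run the second statement on every host):

select * from ck_all order by Year;
select count() from ck_local;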


This article is published with the exclusive authorization of the WeChat official account "sir小龍蝦"; reproduction without permission is prohibited.
