17_ClickHouse Distributed Cluster Deployment

發(fā)布時(shí)間:2024/9/27 编程问答 24 豆豆
生活随笔 收集整理的這篇文章主要介紹了 17_clickhouse分布式集群部署 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

23. ClickHouse Distributed Cluster Deployment

23.1. Cluster Deployment

23.1.1. Preparation

Node plan:

Hostname        IP address         Shard     Replica
clickhouse1     192.168.106.103    shard1    replica 1
clickhouse2     192.168.106.104    shard1    replica 2
clickhouse3     192.168.106.105    shard2    replica 1
clickhouse4     192.168.106.106    shard2    replica 2

The plan is 4 nodes, 2 shards, and 2 replicas per shard. The replicas of shard 1 live on clickhouse1 and clickhouse2; the replicas of shard 2 live on clickhouse3 and clickhouse4.

Operating system preparation:
(1) Set the hostname
Set each machine's hostname according to the table above.
(2) Disable the firewall and SELinux.

Disable the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl is-enabled firewalld.service

Configure SELinux:

vim /etc/sysconfig/selinux
# change SELINUX=enforcing to SELINUX=disabled

Check the SELinux status:

[root@localhost etc]# getenforce
Disabled

(3) Configure /etc/hosts
The hostname-to-IP mappings must be present in /etc/hosts; otherwise data will not replicate between replicas.

[root@clickhouse1 ~]# cat /etc/hosts
192.168.106.103 clickhouse1
192.168.106.104 clickhouse2
192.168.106.105 clickhouse3
192.168.106.106 clickhouse4
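A quick sanity check, not part of the original steps, to confirm that every hostname resolves and is reachable (run it on each node):

for h in clickhouse1 clickhouse2 clickhouse3 clickhouse4; do
    ping -c 1 "$h" > /dev/null && echo "$h ok" || echo "$h FAILED"
done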

2 Install and configure ZooKeeper
Download ZooKeeper; version 3.4.5 or later is required.
Copy zoo_sample.cfg in the conf directory and name the copy zoo.cfg.
Environment variables:

export ZOOKEEPER_HOME=/root/apache-zookeeper-3.6.2-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin

ZooKeeper configuration (zoo.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/apache-zookeeper-3.6.2-bin/data
dataLogDir=/root/apache-zookeeper-3.6.2-bin/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

The data and log directories above must be created by hand.
Then start ZooKeeper.
Start command:

$ZOOKEEPER_HOME/bin/zkServer.sh start
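To confirm ZooKeeper came up correctly (a quick check, not in the original steps; for this single-node setup it should report Mode: standalone):

$ZOOKEEPER_HOME/bin/zkServer.sh status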

3 Install ClickHouse on all hosts
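The original gives no install commands. As a sketch, on CentOS/RHEL-family systems the official packages can be installed from the ClickHouse yum repository (repository URL as published in the current ClickHouse docs; adjust for your distribution and the version you want):

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo
sudo yum install -y clickhouse-server clickhouse-client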
4 Adjust the ClickHouse network configuration
Edit the configuration file /etc/clickhouse-server/config.xml.
Uncomment the following lines and change them as shown below:

<listen_host>::1</listen_host>
<listen_host>0.0.0.0</listen_host>

On clickhouse1:

<listen_host>::1</listen_host>
<listen_host>192.168.106.103</listen_host>

On clickhouse2:

<listen_host>::1</listen_host>
<listen_host>192.168.106.104</listen_host>

On clickhouse3:

<listen_host>::1</listen_host>
<listen_host>192.168.106.105</listen_host>

On clickhouse4:

<listen_host>::1</listen_host>
<listen_host>192.168.106.106</listen_host>

5 Add the configuration file /etc/metrika.xml
Configuration on clickhouse1:

<?xml version="1.0" encoding="utf-8"?>
<yandex>
    <clickhouse_remote_servers>
        <mycluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.106.103</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.106.104</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.106.105</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.106.106</host>
                    <port>9000</port>
                </replica>
            </shard>
        </mycluster>
    </clickhouse_remote_servers>
    <zookeeper-servers>
        <node index="1">
            <host>192.168.106.103</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <macros>
        <layer>01</layer>
        <shard>01</shard>
        <replica>192.168.106.103</replica>
    </macros>
</yandex>

The configurations on clickhouse2, clickhouse3, and clickhouse4 are identical except for the <macros> section.

On clickhouse2:

<macros>
    <layer>01</layer>
    <shard>01</shard>
    <replica>192.168.106.104</replica>
</macros>

On clickhouse3:

<macros>
    <layer>01</layer>
    <shard>02</shard>
    <replica>192.168.106.105</replica>
</macros>

On clickhouse4:

<macros>
    <layer>01</layer>
    <shard>02</shard>
    <replica>192.168.106.106</replica>
</macros>
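For these substitutions to take effect, config.xml must reference them. In the classic layout, /etc/metrika.xml is the default substitutions file (the path can be changed with <include_from>), and the substitution names are wired up through incl attributes. A sketch of the relevant config.xml fragments, assuming a legacy-style default config:

<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />
<macros incl="macros" optional="true" />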

The <macros> block is the only part of the substitutions file that differs from node to node. layer is a tier label; shard is the shard number, counted from 01 in the order the shards are declared, so 01 means the first shard; replica is the replica's identifier, here set to the node's IP address. Together, these three values uniquely identify one shard replica. On clickhouse1 they denote the replica of shard 01 in layer 01, identified as 192.168.106.103.

Once configuration is complete, start the ClickHouse service on every node:
systemctl restart clickhouse-server
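A quick verification, not in the original: confirm that each server answers on its listen address and picked up its own macros (swap in each node's IP; system.macros is available on reasonably recent ClickHouse versions):

clickhouse-client --host 192.168.106.103 --query "SELECT version()"
clickhouse-client --host 192.168.106.103 --query "SELECT * FROM system.macros"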

6 Inspect the system tables

clickhouse1 :) select * from system.clusters where cluster='mycluster';

SELECT *
FROM system.clusters
WHERE cluster = 'mycluster'

┌─cluster───┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────┬─host_address────┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ mycluster │         1 │            1 │           1 │ 192.168.106.103 │ 192.168.106.103 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ mycluster │         1 │            1 │           2 │ 192.168.106.104 │ 192.168.106.104 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ mycluster │         2 │            1 │           1 │ 192.168.106.105 │ 192.168.106.105 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ mycluster │         2 │            1 │           2 │ 192.168.106.106 │ 192.168.106.106 │ 9000 │        0 │ default │                  │            0 │                       0 │
└───────────┴───────────┴──────────────┴─────────────┴─────────────────┴─────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

4 rows in set. Elapsed: 0.005 sec.
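With the cluster visible, the macros can be exercised end to end. A sketch (the database, table, and column names are invented for illustration): create a replicated table on all nodes, then a Distributed table that fans queries out across both shards. The {layer}, {shard}, and {replica} placeholders expand from each node's own macros:

CREATE TABLE default.test_local ON CLUSTER mycluster
(
    id UInt32,
    name String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/test_local', '{replica}')
ORDER BY id;

CREATE TABLE default.test_all ON CLUSTER mycluster AS default.test_local
ENGINE = Distributed(mycluster, default, test_local, rand());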
