
Elasticsearch shard allocation and routing configuration

Published on 生活随笔: 2023/12/20

This article is based on Elasticsearch 7.3.
The main cluster-level shard allocation settings are the following:

cluster.routing.allocation.enable: enables or disables allocation for particular kinds of shards. It takes four values:

  • all - (default) Allows shard allocation for all kinds of shards, both primaries and replicas.
  • primaries - Allows shard allocation only for primary shards; after a node restart, replica shards are not recovered.
  • new_primaries - Allows shard allocation only for primary shards of newly created indices. In my testing, the practical difference from primaries seemed small.
  • none - No shard allocation of any kind is allowed for any index; even newly created indices get neither primary nor replica shards assigned.

This setting only affects shards that need to move. By default, when a node leaves the cluster, its shards are recovered onto the remaining nodes; any value other than all prevents that recovery. This is very useful during cluster maintenance, because it avoids the overhead of shards shuffling between nodes while a node restarts. Note that regardless of the value, when a restarted node holds a replica copy of a shard whose primary copy no longer exists in the cluster, that replica copy is promoted to primary. Also, even under none, where new indices get no shards at all, primary shards are still allocated after a full cluster restart.
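As a concrete illustration, a typical rolling-restart sequence with this setting might look like the following (a minimal sketch: the values shown are the standard ones, and setting the key to null simply restores the default of all):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

# ... restart the node and wait for it to rejoin the cluster ...

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
```

While the setting is primaries, the restarted node's replicas are not shuffled around the cluster; they recover locally once allocation is re-enabled.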

cluster.routing.allocation.node_concurrent_incoming_recoveries: how many concurrent incoming shard recoveries are allowed on a single node, i.e. recoveries where this node is the target, typically because rebalancing or another node's failure requires shards to be recovered here. Defaults to 2 concurrent shards.

cluster.routing.allocation.node_concurrent_outgoing_recoveries: how many concurrent outgoing shard recoveries are allowed on a single node, i.e. recoveries where this node is the source, typically because rebalancing is migrating some of its shards to other nodes. Defaults to 2 concurrent shards.

cluster.routing.allocation.node_concurrent_recoveries: a shortcut for setting both of the above at once. I initially wasn't sure whether this is a shared total or applied to each limit separately; judging from the code below, it sets each limit separately.
```java
//ThrottlingAllocationDecider.java
public static final int DEFAULT_CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES = 2;
public static final int DEFAULT_CLUSTER_ROUTING_ALLOCATION_NODE_INITIAL_PRIMARIES_RECOVERIES = 4;
public static final String NAME = "throttling";

public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING =
    new Setting<>("cluster.routing.allocation.node_concurrent_recoveries",
        Integer.toString(DEFAULT_CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES),
        (s) -> Setting.parseInt(s, 0, "cluster.routing.allocation.node_concurrent_recoveries"),
        Property.Dynamic, Property.NodeScope);

public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_INITIAL_PRIMARIES_RECOVERIES_SETTING =
    Setting.intSetting("cluster.routing.allocation.node_initial_primaries_recoveries",
        DEFAULT_CLUSTER_ROUTING_ALLOCATION_NODE_INITIAL_PRIMARIES_RECOVERIES, 0,
        Property.Dynamic, Property.NodeScope);

public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_INCOMING_RECOVERIES_SETTING =
    new Setting<>("cluster.routing.allocation.node_concurrent_incoming_recoveries",
        CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,
        (s) -> Setting.parseInt(s, 0, "cluster.routing.allocation.node_concurrent_incoming_recoveries"),
        Property.Dynamic, Property.NodeScope);

public static final Setting<Integer> CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_OUTGOING_RECOVERIES_SETTING =
    new Setting<>("cluster.routing.allocation.node_concurrent_outgoing_recoveries",
        CLUSTER_ROUTING_ALLOCATION_NODE_CONCURRENT_RECOVERIES_SETTING::getRaw,
        (s) -> Setting.parseInt(s, 0, "cluster.routing.allocation.node_concurrent_outgoing_recoveries"),
        Property.Dynamic, Property.NodeScope);
```

cluster.routing.allocation.node_initial_primaries_recoveries: how many initial primary recoveries may run concurrently on a single node. This covers primaries that already belonged to the node before it restarted and are recovered from local disk, so recovery is fast. Defaults to 4.

cluster.routing.allocation.same_shard.host: whether to prevent multiple copies of the same shard from being placed on the same physical host. Only relevant when several nodes of the same cluster run on one host. Defaults to false.

Other recovery-related settings:

indices.recovery.max_bytes_per_sec: combined inbound and outbound recovery bandwidth limit per node. Defaults to 40mb.
indices.recovery.max_concurrent_file_chunks: number of file chunks each shard recovery may send in parallel. Defaults to 2. A file chunk is simply a slice of a file's content, similar to an operating system page; Oracle's shared pool likewise allocates memory in chunks, although those chunks vary in size.
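For example, when temporarily speeding up a large recovery, both limits can be raised together through the cluster settings API (a sketch only; the values here are illustrative, not recommendations):

```
PUT _cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "100mb",
    "indices.recovery.max_concurrent_file_chunks": 4
  }
}
```

Using transient rather than persistent means the throttles revert on a full cluster restart.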

Cluster-level shard rebalance settings:

cluster.routing.rebalance.enable: enables or disables rebalancing for particular kinds of shards. It takes four values:

  • all - (default) Allows shard balancing for all kinds of shards.
  • primaries - Allows shard balancing only for primary shards.
  • replicas - Allows shard balancing only for replica shards.
  • none - No shard balancing of any kind is allowed for any index.

cluster.routing.allocation.allow_rebalance: specifies when shard rebalancing is allowed. It takes three values:

  • always - Always allow rebalancing.
  • indices_primaries_active - Only when all primary shards in the cluster are active.
  • indices_all_active - (default) Only when all shards (primaries and replicas) in the cluster are active.

cluster.routing.allocation.cluster_concurrent_rebalance: how many shard rebalance moves may run concurrently across the whole cluster. Defaults to 2. It only throttles moves triggered by an unbalanced shard distribution; it does not limit shard movement caused by allocation filtering or forced awareness.
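Putting the three rebalance settings together, an illustrative transient update might look like this (values chosen for the example, not as recommendations):

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": "primaries",
    "cluster.routing.allocation.allow_rebalance": "indices_all_active",
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}
```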

Shard rebalance heuristics settings:

cluster.routing.allocation.balance.shard: the shard-count factor in rebalancing; defaults to 0.45f.
cluster.routing.allocation.balance.index: the per-index factor in rebalancing; defaults to 0.55f. Together with the previous setting, it feeds into the WeightFunction static inner class of BalancedShardsAllocator:

```java
//BalancedShardsAllocator.java
public static final Setting<Float> INDEX_BALANCE_FACTOR_SETTING =
    Setting.floatSetting("cluster.routing.allocation.balance.index", 0.55f, 0.0f, Property.Dynamic, Property.NodeScope);
public static final Setting<Float> SHARD_BALANCE_FACTOR_SETTING =
    Setting.floatSetting("cluster.routing.allocation.balance.shard", 0.45f, 0.0f, Property.Dynamic, Property.NodeScope);
public static final Setting<Float> THRESHOLD_SETTING =
    Setting.floatSetting("cluster.routing.allocation.balance.threshold", 1.0f, 0.0f,
        Property.Dynamic, Property.NodeScope);

@Inject
public BalancedShardsAllocator(Settings settings, ClusterSettings clusterSettings) {
    setWeightFunction(INDEX_BALANCE_FACTOR_SETTING.get(settings), SHARD_BALANCE_FACTOR_SETTING.get(settings));
    setThreshold(THRESHOLD_SETTING.get(settings));
    clusterSettings.addSettingsUpdateConsumer(INDEX_BALANCE_FACTOR_SETTING, SHARD_BALANCE_FACTOR_SETTING, this::setWeightFunction);
    clusterSettings.addSettingsUpdateConsumer(THRESHOLD_SETTING, this::setThreshold);
}

private void setWeightFunction(float indexBalance, float shardBalanceFactor) {
    weightFunction = new WeightFunction(indexBalance, shardBalanceFactor);
}

public static class WeightFunction {
    private final float indexBalance;
    private final float shardBalance;
    private final float theta0;
    private final float theta1;

    public WeightFunction(float indexBalance, float shardBalance) {
        float sum = indexBalance + shardBalance;
        if (sum <= 0.0f) {
            throw new IllegalArgumentException("Balance factors must sum to a value > 0 but was: " + sum);
        }
        theta0 = shardBalance / sum;
        theta1 = indexBalance / sum;
        this.indexBalance = indexBalance;
        this.shardBalance = shardBalance;
    }

    public float weight(Balancer balancer, ModelNode node, String index) {
        return weight(balancer, node, index, 0);
    }

    public float weightShardAdded(Balancer balancer, ModelNode node, String index) {
        return weight(balancer, node, index, 1);
    }

    public float weightShardRemoved(Balancer balancer, ModelNode node, String index) {
        return weight(balancer, node, index, -1);
    }

    private float weight(Balancer balancer, ModelNode node, String index, int numAdditionalShards) {
        final float weightShard = node.numShards() + numAdditionalShards - balancer.avgShardsPerNode();
        final float weightIndex = node.numShards(index) + numAdditionalShards - balancer.avgShardsPerNode(index);
        return theta0 * weightShard + theta1 * weightIndex;
    }
}
```

cluster.routing.allocation.balance.threshold: the threshold; shards are only reallocated when the weight difference between nodes exceeds this value. Defaults to 1.0f; raising it makes reallocation less sensitive:
```java
private static boolean lessThan(float delta, float threshold) {
    /* deltas close to the threshold are "rounded" to the threshold manually
       to prevent floating point problems if the delta is very close to the
       threshold ie. 1.000000002 which can trigger unnecessary balance actions */
    return delta <= (threshold + 0.001f);
}
```
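The weight calculation can be reproduced in a few lines. Below is a simplified sketch, not the ES source: shard counts are passed in directly instead of going through the Balancer/ModelNode objects, but the theta0/theta1 normalization, the weight formula, and the threshold check mirror the Java code above.

```python
# Simplified re-implementation of WeightFunction + lessThan, for illustration only.

def make_weight(index_balance=0.55, shard_balance=0.45):
    s = index_balance + shard_balance
    if s <= 0.0:
        raise ValueError("Balance factors must sum to a value > 0")
    theta0 = shard_balance / s   # weight of the per-node total-shard term
    theta1 = index_balance / s   # weight of the per-index term

    def weight(node_shards, avg_shards, node_index_shards, avg_index_shards):
        weight_shard = node_shards - avg_shards
        weight_index = node_index_shards - avg_index_shards
        return theta0 * weight_shard + theta1 * weight_index

    return weight

def less_than(delta, threshold):
    # deltas very close to the threshold are treated as below it, to stop
    # floating-point noise from triggering unnecessary balance actions
    return delta <= threshold + 0.001

w = make_weight()
# 3-node cluster, 6 shards total (avg 2.0/node); one index with 3 shards (avg 1.0/node)
heavy = w(4, 2.0, 2, 1.0)   # node holding 4 shards, 2 of them from this index
light = w(1, 2.0, 0, 1.0)   # node holding 1 shard, none from this index
delta = heavy - light
# rebalance only when the weight difference exceeds the threshold (default 1.0)
should_move = not less_than(delta, 1.0)
```

With the default factors, heavy evaluates to 0.45*2 + 0.55*1 = 1.45 and light to -1.0, so the delta of 2.45 exceeds the default threshold and a move would be considered.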

Beyond the cluster-level settings above, shard allocation is also affected by disk-based shard allocation and shard allocation awareness.

Elasticsearch takes remaining disk space into account when deciding whether to allocate new shards to a node or relocate shards away from it to other nodes in the cluster. The related settings are:
cluster.routing.allocation.disk.threshold_enabled: whether disk-based allocation is enabled. Defaults to true.
cluster.routing.allocation.disk.watermark.low: the low watermark for disk usage. Defaults to 85%, meaning that once disk usage reaches 85%, Elasticsearch no longer allocates shards to the node, except primary shards of newly created indices and shards that have never been allocated before (unassigned shards). It can also be set to a byte value, e.g. 500mb, in which case it is a limit on remaining free space.
cluster.routing.allocation.disk.watermark.high: the high watermark for disk usage. Defaults to 90%, meaning that once disk usage reaches 90%, Elasticsearch tries to relocate shards away from the node. This applies to all kinds of shards, including previously unassigned ones. It can also be set to a byte value, e.g. 250mb, as a limit on remaining free space.
cluster.routing.allocation.disk.watermark.flood_stage: the hard ceiling on disk usage. Defaults to 95%, meaning that once disk usage reaches 95%, Elasticsearch marks every index with a shard on the node as read-only-but-deletable (index.blocks.read_only_allow_delete). After disk space is freed, the block must be cleared manually:
```
PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```

Note that the three watermark settings must not mix percentages and byte values: use percentages for all three, or byte values for all three. Percentage values must increase from low to flood_stage, while byte values (free-space limits) must decrease.
cluster.info.update.interval: how often disk usage is checked. Defaults to 30s.
cluster.routing.allocation.disk.include_relocations: whether shards currently relocating to a node count toward its disk usage. Defaults to true. This makes the estimate pessimistic: if a relocating 1 GB shard is 50% done, that already-copied 50% is counted again, inflating the estimate by that amount. Example settings:
```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
```

Awareness-based allocation addresses this scenario: an Elasticsearch cluster may span many servers spread across racks, data centers in different locations, or network zones. For fault tolerance, you may want the primary and replica shards of an index placed on different racks; for locality, you may want get requests routed to nodes in the same network zone as the coordinating node. Enabling shard allocation awareness takes the following steps:

1. Set a node attribute in each node's elasticsearch.yml. The attribute name and values are arbitrary; assuming a 3-node cluster, here I use an attribute called my_rack_id:
node1: node.attr.my_rack_id: rack1
node2: node.attr.my_rack_id: rack1
node3: node.attr.my_rack_id: rack2

2. Set cluster.routing.allocation.awareness.attributes in each node's elasticsearch.yml:
cluster.routing.allocation.awareness.attributes: my_rack_id
or set it through the cluster update settings API:
```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "my_rack_id"
  }
}
```

Be careful with cluster.routing.allocation.awareness.attributes: a mistake such as referencing a non-existent attribute breaks shard allocation. New indices cannot allocate shards, replica copies of existing indices cannot be allocated, and cluster health turns yellow or even red.


Note that of the three nodes above, node1 and node2 both set my_rack_id to rack1, while node3 alone sets my_rack_id to rack2.

Now consider this situation: suppose each index is created with 3 primary shards and 1 replica, so the cluster holds 6 shards in total, an average of 3 per node. With the settings above, node3 must hold one copy of each of the 3 shards, so node3 gets 3 shards while the other 3 are spread across the remaining two nodes. The cluster is unbalanced, but that imbalance is exactly what the awareness settings ask for.

Things keep moving, and now node3 goes down. By default, its shards are recovered onto the other two nodes, ignoring the fault-tolerance intent of the awareness setting: node1 and node2 end up splitting all 6 shards between them. To preserve the pre-failure layout, set cluster.routing.allocation.awareness.force so that nodes sharing the same my_rack_id value never hold more than one copy of any shard. After node3 fails, node1 and node2 then hold only one copy from each shard's replication group rather than all of them; every copy on node1 and node2 becomes a primary, no replica copies exist, and the index status is yellow:
```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "my_rack_id",
    "cluster.routing.allocation.awareness.force.my_rack_id.values": "rack1,rack2"
  }
}
```

Shard allocation filtering can likewise include or exclude nodes from holding shards, via three settings:
cluster.routing.allocation.include.{attribute}: allocate shards to a node whose {attribute} has at least one of the comma-separated values.
cluster.routing.allocation.require.{attribute}: only allocate shards to a node whose {attribute} has all of the comma-separated values.
cluster.routing.allocation.exclude.{attribute}: do not allocate shards to a node whose {attribute} has any of the comma-separated values; shards already there are moved away. Note that exclusion is not absolute: other constraints still apply. For example, with node1 and node2 in rack_one, node3 in rack_two, and awareness set on rack_id, primary and replica shards cannot all end up in the same rack_id.
{attribute} supports custom attributes as well as the following built-in ones:

  • _name - Match nodes by node names
  • _ip - Match nodes by IP addresses (the IP address associated with the hostname)
  • _host - Match nodes by hostnames

For example:

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "192.168.2.*,192.168.1.*"
  }
}

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.include._name": "node1,node2"
  }
}

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.require.rack_id": "rack_one,rack_two"
  }
}
```

Because these settings are dynamic, this feature is typically used when decommissioning a node: set cluster.routing.allocation.exclude to drain its shards onto other nodes before shutting it down.

Other settings:
cluster.blocks.read_only: makes the whole cluster read-only (indices accept no write operations) and forbids metadata changes such as creating or deleting indices.
cluster.blocks.read_only_allow_delete: identical to cluster.blocks.read_only, but allows deleting indices to free up resources.
cluster.max_shards_per_node: the number of open shards allowed in the cluster per data node; shards of closed indices do not count. Defaults to 1000. With a fixed number of data nodes, this also caps the total number of shards in the cluster, primaries and replicas included. A create index, restore snapshot, or open index operation fails if it would push the shard count over the limit. Likewise, lowering the setting below the current count (say a node already holds 900 shards and you set the limit to 500) prevents creating or opening further indices.
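The arithmetic behind the limit is simple enough to sketch. This is an illustrative helper, not Elasticsearch source: the effective cluster-wide cap is max_shards_per_node times the number of data nodes, and a new index contributes primaries * (1 + replicas) open shards.

```python
# Sketch of the cluster-wide shard-limit check (illustrative only).

def can_add_index(open_shards, data_nodes, primaries, replicas,
                  max_shards_per_node=1000):
    new_shards = primaries * (1 + replicas)   # primaries plus their replicas
    limit = max_shards_per_node * data_nodes  # effective cluster-wide cap
    return open_shards + new_shards <= limit

can_add_index(open_shards=2900, data_nodes=3, primaries=5, replicas=1)  # True: 2910 <= 3000
can_add_index(open_shards=2995, data_nodes=3, primaries=5, replicas=1)  # False: 3005 > 3000
```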
cluster.metadata.*: user-defined settings; any custom key and value may be stored here.
cluster.indices.tombstones.size: the size of the index graveyard; defaults to 500. A static setting. The cluster state keeps the index_name, index_uuid, and deletion time (delete_date_in_millis) of deleted indices, retrievable with:

```
GET _cluster/state?filter_path=metadata.index-graveyard.tombstones
```

This setting controls how many deleted indices the cluster state remembers. Suppose node A leaves the cluster and, while it is away, an index is deleted; once the deletion completes, the cluster has no record of that index. When node A later rejoins, Elasticsearch normally imports any index the node has but the cluster does not, so it could re-import indices deleted during node A's absence, effectively undoing the deletion. To guard against this, the cluster state records deleted indices. If your cluster deletes indices frequently, you can raise this setting, but remembering too many deleted indices bloats the cluster state, so there is a trade-off.

Persistent task allocation settings:
A persistent task is stored in the cluster state when created, so that it survives cluster restarts; it must still be assigned to a concrete node to run.
cluster.persistent_tasks.allocation.enable: enables or disables persistent task assignment. It does not affect tasks that are already running, only newly created tasks and tasks that must be reassigned to a node (for example after a node disconnects or runs short of resources). It takes the values all and none:

  • all?- (default) Allows persistent tasks to be assigned to nodes
  • none?- No allocations are allowed for any type of persistent task

cluster.persistent_tasks.allocation.recheck_interval: how often task reassignment is rechecked. When a node disconnects, the master reassigns its tasks immediately, because the cluster state change tells it which tasks need a new node. But when a task should move because its node lacks resources, the master only notices through this periodic check. Defaults to 30s; the minimum is 10s.
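As a sketch, both settings can be adjusted through the cluster settings API, for example to pause new task assignment during maintenance (values illustrative):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none",
    "cluster.persistent_tasks.allocation.recheck_interval": "30s"
  }
}
```

Setting enable back to all (or to null) resumes normal assignment.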

Logger settings: these will be covered in a separate article on logging.
