
Lettuce with domain-name config: switching DNS

Published 2023/12/10 in 编程问答, collected by 生活随笔.

Hi everyone, I'm 烤鴨.

Maybe you have run into this too: ops tells you that if the Redis connection is configured with a domain name, then when something goes wrong they can simply repoint the DNS record and nothing needs to be restarted... The dream is beautiful; reality is cruel. If you use the Lettuce connection pool, then congratulations: you must restart the service before the change takes effect.

Testing it

Environment:

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.2</version>
    <relativePath/>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
</dependencies>
```

Configuration:

application.properties

```properties
server.port=8115
spring.redis.open=false
spring.redis.database=0
spring.redis.cluster.max-redirects=3
spring.redis.cluster.nodes=myredis1.test.com:6001,myredis1.test.com:6002,myredis2.test.com:6001,myredis2.test.com:6002,myredis3.test.com:6001,myredis3.test.com:6002
spring.redis.timeout=6000
spring.redis.pool.max-active=200
spring.redis.pool.max-wait=-1
spring.redis.pool.max-idle=10
spring.redis.pool.min-idle=5
spring.redis.password=8efS6Snt
```
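A side note on the properties above: in Spring Boot 2.x the Lettuce pool settings are bound under the `spring.redis.lettuce.pool.*` prefix; `spring.redis.pool.*` is the Spring Boot 1.x form and is not bound by 2.5.2 (and `spring.redis.open` is not a standard Spring Boot property, presumably a custom flag). The equivalent 2.x form would be:

```properties
# Spring Boot 2.x binds Lettuce pool settings under spring.redis.lettuce.pool.*
# (spring.redis.pool.* is the Spring Boot 1.x prefix)
spring.redis.lettuce.pool.max-active=200
spring.redis.lettuce.pool.max-wait=-1
spring.redis.lettuce.pool.max-idle=10
spring.redis.lettuce.pool.min-idle=5
```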

DNS setup:

To switch DNS resolution locally, you can use the switchHosts tool:

https://pan.baidu.com/s/1eSgmYzS

After editing the entry, click the √ in the lower-right corner of the tool to apply it.

Use JMeter to send a request every 2 seconds; they all succeed.

Then take down the current Redis cluster, and the service starts throwing errors.

Switch the local hosts entries and the errors persist: spring-boot-starter-data-redis does not pick up a DNS switch.

```
2021-07-26 10:42:54.796 ERROR 3712 --- [nio-8115-exec-2] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: CLUSTERDOWN The cluster is down] with root cause

io.lettuce.core.RedisCommandExecutionException: CLUSTERDOWN The cluster is down
	at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:137) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:110) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:63) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.cluster.ClusterCommand.complete(ClusterCommand.java:65) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:746) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:681) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598) ~[lettuce-core-6.1.3.RELEASE.jar:6.1.3.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.65.Final.jar:4.1.65.Final]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
```

Source code analysis

PooledClusterConnectionProvider

It hands out a different connection depending on the operation's intent:

```java
@Override
public CompletableFuture<StatefulRedisConnection<K, V>> getConnectionAsync(Intent intent, int slot) {

    if (debugEnabled) {
        logger.debug("getConnection(" + intent + ", " + slot + ")");
    }

    if (intent == Intent.READ && readFrom != null && readFrom != ReadFrom.UPSTREAM) {
        return getReadConnection(slot);
    }

    return getWriteConnection(slot).toCompletableFuture();
}
```

Take a set operation as an example. When looking up the node for a slot, the provider always reads from `partitions`. After `partitions` is initialized, it only gets updated by topology changes such as scaling the cluster; everything else, including the addresses it holds, stays as it was.

```java
private CompletableFuture<StatefulRedisConnection<K, V>> getWriteConnection(int slot) {

    CompletableFuture<StatefulRedisConnection<K, V>> writer;

    // avoid races when reconfiguring partitions.
    synchronized (stateLock) {
        writer = writers[slot];
    }

    if (writer == null) {
        RedisClusterNode partition = partitions.getPartitionBySlot(slot);
        if (partition == null) {
            clusterEventListener.onUncoveredSlot(slot);
            return Futures.failed(new PartitionSelectorException("Cannot determine a partition for slot " + slot + ".",
                    partitions.clone()));
        }

        // Use always host and port for slot-oriented operations. We don't want to get reconnected on a different
        // host because the nodeId can be handled by a different host.
        RedisURI uri = partition.getUri();
        ConnectionKey key = new ConnectionKey(Intent.WRITE, uri.getHost(), uri.getPort());

        ConnectionFuture<StatefulRedisConnection<K, V>> future = getConnectionAsync(key);

        return future.thenApply(connection -> {
            synchronized (stateLock) {
                if (writers[slot] == null) {
                    writers[slot] = CompletableFuture.completedFuture(connection);
                }
            }
            return connection;
        }).toCompletableFuture();
    }

    return writer;
}
```
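For context on where the `slot` argument comes from: Redis Cluster maps every key to one of 16384 slots via CRC16 of the key (or of its `{...}` hash tag, if present). Below is a minimal, self-contained sketch of that mapping, written for illustration; Lettuce ships its own implementation of this logic, so none of this code is the library's actual API.

```java
import java.nio.charset.StandardCharsets;

// Illustrative re-implementation of the Redis Cluster key -> slot mapping.
public class ClusterSlot {

    static final int SLOT_COUNT = 16384;

    // CRC16-CCITT (XModem variant: poly 0x1021, init 0), as used by Redis Cluster.
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // If the key contains a non-empty {...} hash tag, only the tag is hashed,
    // which lets related keys be forced onto the same slot (and node).
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % SLOT_COUNT;
    }

    public static void main(String[] args) {
        System.out.println("slot(foo) = " + slot("foo"));
        System.out.println("slot({user1}.name) = " + slot("{user1}.name"));
        System.out.println("slot({user1}.age)  = " + slot("{user1}.age"));
    }
}
```

Keys sharing a hash tag land on the same slot, which is exactly why `getWriteConnection(slot)` can pick a single node per slot.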

Now look at the method that initializes the connection: once the connection has been created, the objects inside it are never replaced.

```java
/**
 * Create a clustered pub/sub connection with command distributor.
 *
 * @param codec Use this codec to encode/decode keys and values, must not be {@code null}
 * @param <K> Key type
 * @param <V> Value type
 * @return a new connection
 */
private <K, V> CompletableFuture<StatefulRedisClusterConnection<K, V>> connectClusterAsync(RedisCodec<K, V> codec) {

    if (partitions == null) {
        return Futures.failed(new IllegalStateException(
                "Partitions not initialized. Initialize via RedisClusterClient.getPartitions()."));
    }

    topologyRefreshScheduler.activateTopologyRefreshIfNeeded();

    logger.debug("connectCluster(" + initialUris + ")");

    DefaultEndpoint endpoint = new DefaultEndpoint(getClusterClientOptions(), getResources());
    RedisChannelWriter writer = endpoint;

    if (CommandExpiryWriter.isSupported(getClusterClientOptions())) {
        writer = new CommandExpiryWriter(writer, getClusterClientOptions(), getResources());
    }

    if (CommandListenerWriter.isSupported(getCommandListeners())) {
        writer = new CommandListenerWriter(writer, getCommandListeners());
    }

    ClusterDistributionChannelWriter clusterWriter = new ClusterDistributionChannelWriter(getClusterClientOptions(), writer,
            topologyRefreshScheduler);
    PooledClusterConnectionProvider<K, V> pooledClusterConnectionProvider = new PooledClusterConnectionProvider<>(this,
            clusterWriter, codec, topologyRefreshScheduler);

    clusterWriter.setClusterConnectionProvider(pooledClusterConnectionProvider);

    StatefulRedisClusterConnectionImpl<K, V> connection = newStatefulRedisClusterConnection(clusterWriter,
            pooledClusterConnectionProvider, codec, getDefaultTimeout());

    connection.setReadFrom(ReadFrom.UPSTREAM);
    connection.setPartitions(partitions);

    Supplier<CommandHandler> commandHandlerSupplier = () -> new CommandHandler(getClusterClientOptions(), getResources(),
            endpoint);
    Mono<SocketAddress> socketAddressSupplier = getSocketAddressSupplier(connection::getPartitions,
            TopologyComparators::sortByClientCount);
    Mono<StatefulRedisClusterConnectionImpl<K, V>> connectionMono = Mono
            .defer(() -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier));

    for (int i = 1; i < getConnectionAttempts(); i++) {
        connectionMono = connectionMono
                .onErrorResume(t -> connect(socketAddressSupplier, endpoint, connection, commandHandlerSupplier));
    }

    return connectionMono
            .doOnNext(c -> connection.registerCloseables(closeableResources, clusterWriter, pooledClusterConnectionProvider))
            .map(it -> (StatefulRedisClusterConnection<K, V>) it)
            .toFuture();
}
```
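The hostnames are turned into socket addresses when the channel is built; once a channel is live, commands reuse it, so the hostname-to-IP lookup simply never runs again. The demo below shows that lookup in isolation (using `localhost` as a stand-in for `myredis1.test.com` from the config above); note the JVM additionally caches successful lookups, tunable via the `networkaddress.cache.ttl` security property.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Conceptually, this lookup happens once per connection setup. After Lettuce
// has a live channel, it is not repeated, which is why editing hosts/DNS has
// no visible effect until the channel is rebuilt.
public class ResolveOnce {

    static String resolve(String host) throws UnknownHostException {
        return InetAddress.getByName(host).getHostAddress();
    }

    public static void main(String[] args) throws UnknownHostException {
        // "localhost" stands in for the Redis domain names from the config.
        System.out.println("localhost -> " + resolve("localhost"));
    }
}
```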

Solution

Don't get your hopes up; someone already raised this at the start of the year.

https://github.com/lettuce-io/lettuce-core/issues/1572

To pick up a DNS change, the channel has to be rebuilt. The maintainer's answer was that there is no good way to do this short of reflection, and reflection isn't great either.
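Short of patching Lettuce, one pragmatic workaround is to poll DNS yourself and tear down and recreate the client (for example, the Spring `LettuceConnectionFactory`) when a resolved address changes. The sketch below is a hypothetical, dependency-free version of that idea, not anything the library provides: the `resolver` and `onChange` hooks are assumptions you would wire to `InetAddress.getByName` and your own rebuild logic.

```java
import java.util.Objects;
import java.util.function.Function;

// Hypothetical workaround sketch: detect a DNS change and trigger a rebuild.
// The resolver and onChange callback are injected so the detection logic
// stays testable; in practice resolver would wrap InetAddress.getByName and
// onChange would recreate the Redis client / connection factory.
public class DnsChangeWatcher {

    private final String host;
    private final Function<String, String> resolver; // hostname -> IP
    private final Runnable onChange;                  // e.g. rebuild the client
    private String lastIp;

    public DnsChangeWatcher(String host, Function<String, String> resolver, Runnable onChange) {
        this.host = host;
        this.resolver = resolver;
        this.onChange = onChange;
        this.lastIp = resolver.apply(host);
    }

    /** Call this on a schedule; returns true when a change was detected. */
    public boolean check() {
        String ip = resolver.apply(host);
        if (!Objects.equals(ip, lastIp)) {
            lastIp = ip;
            onChange.run();
            return true;
        }
        return false;
    }
}
```

This is blunt (in-flight commands on the old client fail during the swap), but it avoids reflection into Lettuce internals.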

When this problem first appeared, I thought it was a Netty issue and eagerly filed one there. Whether because I explained it poorly or for some other reason, they actually went and changed their source... (it was really a Lettuce problem).

https://github.com/netty/netty/issues/11519

So with a domain name, even after the DNS record is switched, you still have to restart the service for it to take effect.

By the way, this is not just Lettuce; Jedis behaves the same way.

Summary

Lettuce resolves the configured hostnames once, when the connections are first built; a later DNS switch is never picked up, so with domain-name configuration you still end up restarting the service.