

Apache Kafka: Concurrent Consumption via the concurrency Setting


Contents

  • Overview
  • Demo Walkthrough
  • Code
    • POM Dependencies
    • Configuration File
    • Producer
    • Consumer
    • Unit Test
    • Test Results
  • Approach Two
  • @KafkaListener Configuration Options
  • concurrency in a Distributed Deployment
  • Source Code


Overview

By default, a Spring-Kafka @KafkaListener consumes messages serially. The drawback is obvious: when the producer generates messages faster than a single consumer can process them, messages pile up on the consumer side.

We can of course start multiple processes to get multi-process concurrent consumption; how far that scales is bounded by the number of partitions of the topic.

Can we also get multi-threaded concurrent consumption inside a single process? Spring Kafka provides exactly that, and it is very easy to use. The key is to understand the underlying mechanism so you can apply it flexibly.

The concurrency attribute of @KafkaListener specifies the number of concurrent consumer threads.

For example, with concurrency = 2, Spring-Kafka creates 2 threads to consume the messages for that @KafkaListener-annotated method concurrently. There is a precondition, though: do not exceed the number of partitions.

  • concurrency < number of partitions: consumption is uneven; a single consumer thread may consume data from multiple partitions.

  • concurrency = number of partitions: the ideal case; each consumer thread consumes exactly one partition.

  • concurrency > number of partitions: some consumer threads end up with no partition to consume, wasting resources.
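Which of the three cases a deployment falls into can be checked at runtime by inspecting the listener containers through KafkaListenerEndpointRegistry. The sketch below is a minimal illustration, assuming the listener is given an explicit id; the id "artisanListener" and the bean name are placeholders, not from the original post:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class ListenerInspector {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    public void printAssignment() {
        // "artisanListener" is a hypothetical id set via @KafkaListener(id = "artisanListener", ...)
        MessageListenerContainer container = registry.getListenerContainer("artisanListener");
        if (container instanceof ConcurrentMessageListenerContainer) {
            ConcurrentMessageListenerContainer<?, ?> concurrent = (ConcurrentMessageListenerContainer<?, ?>) container;
            // number of consumer threads created for this listener
            System.out.println("concurrency = " + concurrent.getConcurrency());
            // partitions currently assigned across this listener's consumers
            System.out.println("assigned partitions = " + concurrent.getAssignedPartitions());
        }
    }
}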


Demo Walkthrough

  • Create a topic named "RRRR" with 2 partitions (a sketch of declaring it from code follows this list).
  • Create an ArtisanCosumerMock class and annotate its consumer method with @KafkaListener(concurrency = 2).
  • Run the unit test. Based on @KafkaListener(concurrency = 2), Spring Kafka creates 2 Kafka Consumers (two separate KafkaConsumer instances), and each consumer is given its own thread in which it polls and consumes messages.
  • The Kafka broker then assigns each of the 2 consumers 1 partition of topic RRRR (there are only 2 partitions, so this is the ideal case: one each).
  • In short: @KafkaListener(concurrency = 2) creates two Kafka Consumers; each pulls messages from its own partition of topic RRRR in its own thread and consumes them serially, which gives us multi-threaded concurrent consumption within a single process.
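The post itself does not show how the topic was created. One way to declare it from the application, assuming a spring-kafka version that provides TopicBuilder (2.3 or later) and that creating topics through the auto-configured KafkaAdmin is acceptable, is a NewTopic bean; the replication factor of 1 below is an assumption suited to a single-broker test environment:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // Declares topic "RRRR" with 2 partitions; Spring Boot's auto-configured
    // KafkaAdmin creates it on startup if it does not already exist.
    @Bean
    public NewTopic rrrrTopic() {
        return TopicBuilder.name("RRRR")
                .partitions(2)
                .replicas(1)
                .build();
    }
}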

    Aside:

    For concurrent consumption, RocketMQ only needs a single RocketMQ Consumer object: after the consumer pulls messages, it hands them to the consumer's thread pool, which processes them concurrently.

    Spring-Kafka's concurrent consumption instead creates multiple Kafka Consumer objects, each with its own dedicated thread; after pulling messages, each consumer processes them on its own thread.


Code

POM Dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring-Kafka dependency -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Configuration File

spring:
  # Kafka configuration, bound to the KafkaProperties class
  kafka:
    bootstrap-servers: 192.168.126.140:9092 # Kafka broker address(es); multiple entries are comma-separated
    # Kafka producer settings
    producer:
      acks: 1 # 0 - no ack; 1 - leader ack; all - leader and all followers ack
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # key serializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # value serializer
    # Kafka consumer settings
    consumer:
      auto-offset-reset: earliest # new consumer groups start from the earliest offset
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.artisan.springkafka.domain
    # Kafka consumer listener settings
    listener:
      missing-topics-fatal: false # by default the listener fails if a subscribed topic does not exist; disable that check

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka
      apache:
        kafka: ERROR # kafka

Producer

package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import java.util.Random;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:25
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Synchronous send
     * @return
     * @throws ExecutionException
     * @throws InterruptedException
     */
    public SendResult sendMsgSync() throws ExecutionException, InterruptedException {
        // mock the message to send
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "artisanTestMessage-" + id);
        // block until the send completes
        return kafkaTemplate.send(TOPIC.TOPIC, messageMock).get();
    }
}
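The TOPIC constants class and the MessageMock domain object used above are referenced but not listed in the post. Below is a minimal sketch consistent with the constructor call here and the toString output seen later in the test log; the exact field layout is an assumption:

package com.artisan.springkafka.constants;

public class TOPIC {
    // topic name used throughout the demo
    public static final String TOPIC = "RRRR";
}

package com.artisan.springkafka.domain;

public class MessageMock {

    private Integer id;
    private String name;

    // a no-arg constructor is needed for JSON deserialization on the consumer side
    public MessageMock() {
    }

    public MessageMock(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "MessageMock{id=" + id + ", name='" + name + "'}";
    }
}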

Consumer

package com.artisan.springkafka.consumer;

import com.artisan.springkafka.domain.MessageMock;
import com.artisan.springkafka.constants.TOPIC;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:33
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanCosumerMock {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MOCK-A";

    @KafkaListener(topics = TOPIC.TOPIC,
            groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC,
            concurrency = "2")
    public void onMessage(MessageMock messageMock) {
        logger.info("[Received message][thread id: {}, payload: {}]", Thread.currentThread().getId(), messageMock);
    }
}

By adding concurrency = "2" to the @KafkaListener annotation, 2 threads are created to consume the messages of Topic "RRRR".
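Since the concurrency attribute supports property placeholders and SpEL (see the annotation's Javadoc quoted later), the value can also be externalized instead of hard-coded. A hedged variant, assuming a listener.concurrency property that you define yourself in application.yml (the property name is illustrative):

// "listener.concurrency" is a hypothetical property name; 2 is the fallback used if it is not set
@KafkaListener(topics = TOPIC.TOPIC,
        groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC,
        concurrency = "${listener.concurrency:2}")
public void onMessage(MessageMock messageMock) {
    logger.info("[Received message][thread id: {}, payload: {}]", Thread.currentThread().getId(), messageMock);
}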


Unit Test

package com.artisan.springkafka.produceTest;

import com.artisan.springkafka.SpringkafkaApplication;
import com.artisan.springkafka.producer.ArtisanProducerMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.support.SendResult;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:40
 * @mark: show me the code , change the world
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringkafkaApplication.class)
public class ProduceMockTest {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private ArtisanProducerMock artisanProducerMock;

    @Test
    public void testAsynSend() throws ExecutionException, InterruptedException {
        logger.info("start sending");
        // send several messages
        for (int i = 0; i < 10; i++) {
            artisanProducerMock.sendMsgSync();
        }
        // block so the consumers have time to process the messages
        new CountDownLatch(1).await();
    }
}

Test Results

2021-02-18 21:55:35.504  INFO 20456 --- [           main] c.a.s.produceTest.ProduceMockTest : start sending
2021-02-18 21:55:35.852  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 18, payload: MessageMock{id=23, name='artisanTestMessage-23'}]
2021-02-18 21:55:35.852  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 20, payload: MessageMock{id=64, name='artisanTestMessage-64'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 20, payload: MessageMock{id=53, name='artisanTestMessage-53'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 18, payload: MessageMock{id=51, name='artisanTestMessage-51'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 20, payload: MessageMock{id=67, name='artisanTestMessage-67'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 18, payload: MessageMock{id=42, name='artisanTestMessage-42'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 18, payload: MessageMock{id=12, name='artisanTestMessage-12'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 20, payload: MessageMock{id=40, name='artisanTestMessage-40'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 20, payload: MessageMock{id=37, name='artisanTestMessage-37'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [Received message][thread id: 18, payload: MessageMock{id=27, name='artisanTestMessage-27'}]

From the log output, two threads are consuming the messages of Topic RRRR.

The consumer-group view in the console (the screenshot is omitted here) confirms the partition assignment; a command-line check follows.
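A hedged way to see the same assignment from the command line, assuming the standard CLI tools shipped with Kafka and the group name MOCK-ARRRR that the consumer code above produces (CONSUMER_GROUP_PREFIX + TOPIC.TOPIC):

kafka-consumer-groups.sh --bootstrap-server 192.168.126.140:9092 --describe --group MOCK-ARRRR

The output lists each partition of RRRR together with the consumer and client id it is assigned to, so you can confirm that each of the two consumers holds exactly one partition.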


Right after that, the post shows another log (the screenshot is omitted here) in which, at a glance, only one thread is consuming.


Approach Two
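The screenshots that walked through this approach are not reproduced here. A common alternative to setting concurrency on each annotation, and a plausible fit for this section though the surviving text does not confirm it, is to configure it globally on the listener container factory through Spring Boot's spring.kafka.listener.concurrency property:

spring:
  kafka:
    listener:
      # applies to every @KafkaListener that does not set concurrency itself
      concurrency: 2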


Re-test

(The result screenshots are not reproduced here.)


@KafkaListener Configuration Options

/**
 * Example:
 *
 * @KafkaListener(groupId = "testGroup", topicPartitions = {
 *         @TopicPartition(topic = "topic1", partitions = {"0", "1"}),
 *         @TopicPartition(topic = "topic2", partitions = "0",
 *                 partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
 * }, concurrency = "6")
 *
 * concurrency is the number of consumers within the same group, i.e. the number of
 * concurrent consumers; it must be less than or equal to the total number of partitions.
 */

/**
 * The topics to listen to.
 * Entries can be topic names, property-placeholder keys or SpEL expressions resolving
 * to a topic name. Uses group management; Kafka assigns partitions to group members.
 * Mutually exclusive with topicPattern() and topicPartitions().
 */
String[] topics() default {};

/**
 * The topic pattern to listen to.
 * Entries can be a topic pattern, a property-placeholder key or an expression; the
 * framework subscribes to all topics matching the pattern and gets dynamically
 * assigned partitions. Mutually exclusive with topics() and topicPartitions().
 */
String topicPattern() default "";

/**
 * An array of @TopicPartition annotations for manual topic/partition assignment;
 * each one can configure the topic, the partitions and the initial offsets to consume
 * from. Mutually exclusive with topicPattern() and topics().
 */
TopicPartition[] topicPartitions() default {};

/**
 * The consumer group: overrides the consumer factory's group.id property for this
 * listener only. SpEL #{...} and property placeholders ${...} are supported.
 */
String groupId() default "";

/**
 * The bean name of a KafkaListenerErrorHandler to invoke if the listener method
 * throws an exception.
 */
String errorHandler() default "";

/**
 * The concurrency for this listener: overrides the container factory's concurrency
 * setting. May be a property placeholder or SpEL expression evaluating to a Number.
 */
String concurrency() default "";

/**
 * Whether to start the listener automatically; true by default. May be a property
 * placeholder or SpEL expression evaluating to a Boolean or String.
 */
String autoStartup() default "";

/**
 * Extra Kafka consumer properties; they supersede properties with the same name
 * defined in the consumer factory. The key-value syntax is the same as a Java
 * properties file (key=value, key:value, key value). group.id and client.id are ignored.
 */
String[] properties() default {};

/**
 * The unique identifier of the container managing this endpoint; auto-generated if
 * not specified. When provided, it overrides the group id property in the consumer
 * factory configuration unless idIsGroup() is set to false.
 */
String id() default "";

/**
 * Prefix for the client id: when provided it overrides the client id property in the
 * consumer factory configuration; a '-n' suffix is added per container instance to
 * ensure uniqueness when concurrency is used.
 */
String clientIdPrefix() default "";

/**
 * When groupId() is not provided, whether to use id() (if provided) as the group.id
 * for the consumer; set to false to use the group.id from the consumer factory.
 */
boolean idIsGroup() default true;

/**
 * The bean name of the KafkaListenerContainerFactory used to create the message
 * listener container for this endpoint; if not specified, the default container
 * factory is used.
 */
String containerFactory() default "";

/**
 * If provided, the listener container for this listener is added to a bean with this
 * name, of type Collection<MessageListenerContainer>, so that, for example, a subset
 * of containers can be started and stopped together.
 */
String containerGroup() default "";

/**
 * A pseudo bean name used in SpEL expressions within this annotation to reference the
 * bean in which the listener is defined, e.g. topics = "#{__listener.topicList}".
 * Default '__listener'.
 */
String beanRef() default "__listener";
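A short sketch that pulls a few of these options together; the listener below is purely illustrative, and the id, client id prefix and extra property are assumptions, not part of the original demo:

@KafkaListener(
        id = "artisanDemoListener",           // container id; because idIsGroup() defaults to true it is also used as group.id
        clientIdPrefix = "artisan-demo",      // client.id becomes artisan-demo-0, artisan-demo-1, ... one per consumer thread
        topics = "RRRR",
        concurrency = "2",                    // two consumer threads for this listener
        autoStartup = "true",
        properties = {"max.poll.records=100"} // extra consumer property; overrides the consumer factory setting
)
public void onMessage(MessageMock messageMock) {
    // handle the message
}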

concurrency in a Distributed Deployment

Leave the first unit test running and start the unit test again, so that a second instance joins the same consumer group.

Across the two instances there are now 4 consumer threads in the group but still only 2 partitions, so only 2 of the threads can receive a partition. Once the number of nodes equals the number of partitions, each node effectively consumes with a single thread, which is the optimal arrangement.


Source Code

https://github.com/yangshangwei/boot2/tree/master/springkafkaConcurrencyConsume
