

Apache Kafka: Concurrent Consumption via the concurrency Setting


Table of Contents

  • Overview
  • Demo Walkthrough
  • Code
    • POM Dependencies
    • Configuration
    • Producer
    • Consumer
    • Unit Test
    • Test Results
  • Approach 2
  • @KafkaListener Configuration Options
  • concurrency in a Distributed Setup
  • Source Code


Overview

By default, a Spring-Kafka @KafkaListener consumes serially. The drawback is obvious: when the producer generates data faster than a single consumer can process it, messages pile up on the consumer side.

One fix is to start multiple processes and consume concurrently across processes, although the achievable parallelism is still capped by the number of partitions in the topic.

Can we instead get multi-threaded concurrent consumption within a single process? Spring Kafka provides exactly this capability, and it is very simple to use. The key is to understand the mechanism so you can apply it flexibly.

The concurrency attribute of @KafkaListener specifies the number of concurrent consumer threads.

For example, with concurrency = 2, Spring-Kafka creates 2 threads for the method annotated with @KafkaListener, consuming its messages concurrently. There is a precondition, though: do not exceed the number of partitions.

  • concurrency < partition count: consumption is uneven — a single consumer thread may consume data from multiple partitions

  • concurrency = partition count: the optimal state — each consumer thread consumes exactly one partition

  • concurrency > partition count: some consumer threads have no partition to consume, wasting resources
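The three cases above can be illustrated with a toy round-robin assignment model in plain Java (a sketch for intuition only — Kafka's real partition assignors are more sophisticated):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignment {

    // Toy model: deal partitions out round-robin to consumer threads
    // 0..concurrency-1, mimicking how concurrency relates to partitions.
    public static Map<Integer, List<Integer>> assign(int partitions, int concurrency) {
        Map<Integer, List<Integer>> assignment = new LinkedHashMap<>();
        for (int c = 0; c < concurrency; c++) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            assignment.get(p % concurrency).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // concurrency < partitions: thread 0 consumes partitions 0 and 2
        System.out.println(assign(4, 2));
        // concurrency = partitions: one partition per thread
        System.out.println(assign(2, 2));
        // concurrency > partitions: thread 2 sits idle
        System.out.println(assign(2, 3));
    }
}
```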


Demo Walkthrough

  • Create a topic named "RRRR" with its partition count set to 2
  • Create an ArtisanCosumerMock class and annotate its consuming method with @KafkaListener(concurrency = "2")
  • Run the unit test. Based on @KafkaListener(concurrency = "2"), Spring Kafka creates 2 Kafka Consumers (two distinct Kafka Consumer instances), each assigned its own thread in which it polls and consumes messages
  • The Kafka broker then assigns each of the 2 consumers 1 partition of topic RRRR (there are exactly 2 partitions — the optimal case, one partition each)
  • In summary: @KafkaListener(concurrency = "2") creates two Kafka Consumers; each pulls messages from its own partition of topic RRRR in its own thread and consumes them serially, achieving multi-threaded concurrent consumption within a single process

    Aside:

    For concurrent consumption, RocketMQ only needs to create a single RocketMQ Consumer object: after the consumer pulls messages, it hands them to the consumer's thread pool for processing, achieving concurrency that way.

    Spring-Kafka's concurrent consumption instead creates multiple Kafka Consumer objects, each with its own dedicated thread; every consumer pulls messages and processes them on its own thread.


Code

POM Dependencies

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring-Kafka -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```

Configuration

```yaml
spring:
  # Kafka settings, bound to the KafkaProperties configuration class
  kafka:
    bootstrap-servers: 192.168.126.140:9092 # Kafka broker addresses; multiple entries separated by commas
    # Kafka producer settings
    producer:
      acks: 1 # 0 - no ack; 1 - leader acks; all - leader and all followers ack
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # serializer for message keys
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for message values
    # Kafka consumer settings
    consumer:
      auto-offset-reset: earliest # start a new consumer group from the earliest offset
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.artisan.springkafka.domain
    # Kafka consumer listener settings
    listener:
      missing-topics-fatal: false # by default, listening on a missing topic is an error; set false to suppress it

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka
      apache:
        kafka: ERROR # kafka
```

Producer

```java
package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;

import java.util.Random;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:25
 * @mark: show me the code, change the world
 */
@Component
public class ArtisanProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Synchronous send
     */
    public SendResult sendMsgSync() throws ExecutionException, InterruptedException {
        // mock a message to send
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "artisanTestMessage-" + id);
        // block until the send completes
        return kafkaTemplate.send(TOPIC.TOPIC, messageMock).get();
    }
}
```

Consumer

```java
package com.artisan.springkafka.consumer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:33
 * @mark: show me the code, change the world
 */
@Component
public class ArtisanCosumerMock {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MOCK-A";

    @KafkaListener(topics = TOPIC.TOPIC,
            groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC,
            concurrency = "2")
    public void onMessage(MessageMock messageMock) {
        logger.info("[received message][thread id: {} payload: {}]",
                Thread.currentThread().getId(), messageMock);
    }
}
```

The @KafkaListener annotation carries the concurrency = "2" attribute, which creates 2 threads to consume messages from Topic "RRRR".


Unit Test

```java
package com.artisan.springkafka.produceTest;

import com.artisan.springkafka.SpringkafkaApplication;
import com.artisan.springkafka.producer.ArtisanProducerMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:40
 * @mark: show me the code, change the world
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringkafkaApplication.class)
public class ProduceMockTest {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private ArtisanProducerMock artisanProducerMock;

    @Test
    public void testAsynSend() throws ExecutionException, InterruptedException {
        logger.info("start sending");
        // send several messages
        for (int i = 0; i < 10; i++) {
            artisanProducerMock.sendMsgSync();
        }
        // block so the consumer has time to process
        new CountDownLatch(1).await();
    }
}
```

Test Results

```
2021-02-18 21:55:35.504  INFO 20456 --- [           main] c.a.s.produceTest.ProduceMockTest : start sending
2021-02-18 21:55:35.852  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 18 payload: MessageMock{id=23, name='artisanTestMessage-23'}]
2021-02-18 21:55:35.852  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 20 payload: MessageMock{id=64, name='artisanTestMessage-64'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 20 payload: MessageMock{id=53, name='artisanTestMessage-53'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 18 payload: MessageMock{id=51, name='artisanTestMessage-51'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 20 payload: MessageMock{id=67, name='artisanTestMessage-67'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 18 payload: MessageMock{id=42, name='artisanTestMessage-42'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 18 payload: MessageMock{id=12, name='artisanTestMessage-12'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 20 payload: MessageMock{id=40, name='artisanTestMessage-40'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-1-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 20 payload: MessageMock{id=37, name='artisanTestMessage-37'}]
2021-02-18 21:55:35.859  INFO 20456 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock : [received message][thread id: 18 payload: MessageMock{id=27, name='artisanTestMessage-27'}]
```

The logs show two threads (ids 18 and 20) consuming messages from Topic "RRRR".

[Screenshots omitted from the original: the console view of the consumer group, followed by logs in which each partition is consumed by only one thread.]


Approach 2


    重新測試


@KafkaListener Configuration Options

```java
/**
 * Example:
 *
 * @KafkaListener(groupId = "testGroup", topicPartitions = {
 *         @TopicPartition(topic = "topic1", partitions = {"0", "1"}),
 *         @TopicPartition(topic = "topic2", partitions = "0",
 *                 partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
 * }, concurrency = "6")
 *
 * concurrency is the number of consumers in the same group, i.e. the degree of
 * concurrent consumption; it must be less than or equal to the total partition count.
 */

// Topics to listen to. Entries can be topic names, property-placeholder keys, or
// SpEL expressions resolving to a topic name. Uses group management, so Kafka
// assigns partitions to group members. Mutually exclusive with topicPattern()
// and topicPartitions().
String[] topics() default {};

// Topic pattern to listen to; the container subscribes to all topics matching the
// pattern and gets dynamically assigned partitions. Mutually exclusive with
// topics() and topicPartitions().
String topicPattern() default "";

// Array of @TopicPartition annotations for manual topic/partition assignment;
// each can configure the topic, the partitions, and the starting offsets.
// Mutually exclusive with topics() and topicPattern().
TopicPartition[] topicPartitions() default {};

// Consumer group: overrides the consumer factory's group.id for this listener only.
String groupId() default "";

// Bean name of a KafkaListenerErrorHandler to invoke if the listener method
// throws an exception.
String errorHandler() default "";

// Number of concurrent consumers for this listener; overrides the container
// factory's concurrency setting. May be a placeholder or SpEL expression
// evaluating to a Number.
String concurrency() default "";

// Whether to start the listener container automatically (defaults to true).
String autoStartup() default "";

// Extra Kafka consumer properties in key=value form; they supersede same-named
// properties defined in the consumer factory (group.id and client.id are ignored).
String[] properties() default {};

// Unique identifier of the container managing this endpoint; auto-generated if
// not specified. When provided, it overrides the group id from the consumer
// factory configuration unless idIsGroup() is set to false.
String id() default "";

// Prefix that overrides the client id from the consumer factory; a '-n' suffix is
// added per container instance to ensure uniqueness when concurrency is used.
String clientIdPrefix() default "";

// When groupId is not provided, whether to use id (if provided) as the
// consumer's group.id.
boolean idIsGroup() default true;

// Bean name of the KafkaListenerContainerFactory used to create the message
// listener container for this endpoint; the default factory is used if unset.
String containerFactory() default "";

// If provided, the listener container is added to a bean with this name of type
// Collection<MessageListenerContainer>, e.g. to start/stop a subset of containers.
String containerGroup() default "";

// Pseudo bean name usable in SpEL expressions within this annotation to
// reference the enclosing bean, e.g. topics = "#{__listener.topicList}".
// Default: '__listener'.
String beanRef() default "__listener";
```

concurrency in a Distributed Setup

Leave the first unit test running, and start the unit test again in a second process.

With both running, you will find that when the number of nodes equals the number of partitions, each node effectively consumes with a single thread — the optimal state.
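A back-of-envelope way to reason about this (my own simplification, not from the original post): the group has nodes × concurrency consumer threads in total, but at most one active consumer per partition:

```java
public class DistributedConcurrency {

    // Total consumer threads in the group is nodes * concurrency, but only up to
    // `partitions` of them can be actively consuming; the rest sit idle.
    public static int activeConsumers(int nodes, int concurrency, int partitions) {
        return Math.min(nodes * concurrency, partitions);
    }

    public static void main(String[] args) {
        // 1 node, concurrency = 2, 2 partitions: both threads active
        System.out.println(activeConsumers(1, 2, 2));
        // 2 nodes, concurrency = 2, 2 partitions: 4 threads, only 2 active
        System.out.println(activeConsumers(2, 2, 2));
    }
}
```

This is why adding a second node to the demo's 2-partition topic leaves each node with effectively one consuming thread.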


Source Code

https://github.com/yangshangwei/boot2/tree/master/springkafkaConcurrencyConsume
