

Kafka cluster consumption in Java: setting up a Kafka cluster and writing a Kafka producer and consumer in Java


Reposted from: http://chengjianxiaoxue.iteye.com/blog/2190488

1 Kafka cluster setup

1. The ZooKeeper cluster is set up on nodes 110, 111, and 112.
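The original post does not show the ZooKeeper configuration itself. As a rough sketch only (the dataDir and the 2888/3888 quorum ports are assumptions; the IPs and client port 2181 are the ones used throughout this article), each node's zoo.cfg would contain something like:

tickTime=2000
dataDir=/var/zookeeper/data        # assumed path
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.1.110:2888:3888
server.2=192.168.1.111:2888:3888
server.3=192.168.1.112:2888:3888

with each node's myid file holding its own server number (1, 2, or 3).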

2. Kafka uses the same three nodes, 110, 111, and 112. Edit config/server.properties:

broker.id=110
host.name=192.168.1.110
log.dirs=/usr/local/kafka_2.10-0.8.2.0/logs

Copy the file to the other two nodes and adjust each node's config/server.properties accordingly; a sketch of the values is shown below.
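For example (following the same pattern; these exact values are an assumption, consistent with the broker ids that later appear under /brokers/ids), node 111 would use:

broker.id=111
host.name=192.168.1.111
log.dirs=/usr/local/kafka_2.10-0.8.2.0/logs

and node 112 would use broker.id=112 and host.name=192.168.1.112.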

3. Start Kafka by running the following on each of the three nodes:

bin/kafka-server-start.sh config/server.properties >/dev/null 2>&1 &

4. Create a topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test

5. View topic details

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Topic: test    PartitionCount: 3    ReplicationFactor: 3    Configs:

Topic: test    Partition: 0    Leader: 110    Replicas: 110,111,112    Isr: 110,111,112
Topic: test    Partition: 1    Leader: 111    Replicas: 111,112,110    Isr: 111,112,110
Topic: test    Partition: 2    Leader: 112    Replicas: 112,110,111    Isr: 112,110,111

6. Inspect the Kafka cluster registered in ZooKeeper

[zk: localhost:2181(CONNECTED) 5] ls /
[admin, zookeeper, consumers, config, controller, zk-fifo, storm, brokers, controller_epoch]

[zk: localhost:2181(CONNECTED) 6] ls /brokers   ----> view the Kafka brokers registered in ZooKeeper

[topics, ids]

[zk: localhost:2181(CONNECTED) 7] ls /brokers/ids

[112, 110, 111]

[zk: localhost:2181(CONNECTED) 8] ls /brokers/ids/112
[]

[zk: localhost:2181(CONNECTED) 9] ls /brokers/topics

[test]

[zk: localhost:2181(CONNECTED) 10] ls /brokers/topics/test

[partitions]

[zk: localhost:2181(CONNECTED) 11] ls /brokers/topics/test/partitions

[2, 1, 0]

[zk: localhost:2181(CONNECTED) 12]

2 Calling Kafka from Java

2.1 Producing data from Java and consuming it on the Kafka cluster:

1. Create a Maven project and add the following dependency to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.0</version>
</dependency>

2. Java code: write data to the topic test

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer<String, String> producer = createProducer();
        int i = 0;
        while (true) {
            // send one message per second to the topic
            producer.send(new KeyedMessage<String, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer<String, String> createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "192.168.1.110:2181,192.168.1.111:2181,192.168.1.112:2181"); // declare the ZooKeeper ensemble
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "192.168.1.110:9092,192.168.1.111:9093,192.168.1.112:9094"); // declare the Kafka brokers
        return new Producer<String, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // use the topic "test" already created on the Kafka cluster
    }
}

3. Consume the data of topic test on the Kafka cluster:

[root@h2master kafka]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

4. Start the Java code, then watch the data being consumed on the cluster:

message: 0

message: 1

message: 2

message: 3

message: 4

message: 5

message: 6

message: 7

message: 8

message: 9

message: 10

message: 11

message: 12

message: 13

message: 14

message: 15

message: 16

message: 17

message: 18

message: 19

message: 20

message: 21

2.2 Writing the Kafka consumer in Java: run kafkaProducer first, then run kafkaConsumer, and the consumer receives the producer's data:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

/**
 * Receives data; sample output:
 * 接收到: message: 10
 * 接收到: message: 11
 * 接收到: message: 12
 * 接收到: message: 13
 * 接收到: message: 14
 *
 * @author zm
 */
public class kafkaConsumer extends Thread {

    private String topic;

    public kafkaConsumer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        ConsumerConnector consumer = createConsumer();
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 1); // use a single stream (consumer thread) for this topic
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = messageStreams.get(topic).get(0); // the single stream for this topic
        ConsumerIterator<byte[], byte[]> iterator = stream.iterator();
        while (iterator.hasNext()) {
            String message = new String(iterator.next().message());
            System.out.println("接收到: " + message); // "接收到" means "received"
        }
    }

    private ConsumerConnector createConsumer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "192.168.1.110:2181,192.168.1.111:2181,192.168.1.112:2181"); // declare the ZooKeeper ensemble
        properties.put("group.id", "group1"); // consumer group id; applies only to consumers, and consumers sharing the same group.id split the topic's partitions among themselves
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaConsumer("test").start(); // use the topic "test" already created on the Kafka cluster
    }
}
