Build a Single-Machine Kafka Cluster in 10 Minutes

What is a single-machine Kafka cluster good for?

It gives you a local cluster to practice on and get hands-on experience with.

Setting up the ZooKeeper cluster

  • First, download ZooKeeper.

Grab apache-zookeeper-3.5.5-bin.tar.gz from the download page on the Apache ZooKeeper website.

  • Extract the archive:
tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
  • Make three copies of the extracted directory and name them zookeeper-1, zookeeper-2, and zookeeper-3 (the paths below assume they sit under /ashura).
  • In zookeeper-1, copy conf/zoo_sample.cfg and rename the copy to zoo.cfg.
  • Edit conf/zoo.cfg:
    • Set the client port: clientPort=2181
    • Set the data directory: dataDir=/ashura/zookeeper-1/datalog
    • Add the following settings:

server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
admin.serverPort=8000
The finished configuration file looks like this:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/ashura/zookeeper-1/datalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
admin.serverPort=8000
  • Copy this zoo.cfg into the conf directories of zookeeper-2 and zookeeper-3, then adjust each copy (a consolidated shell sketch follows the start-up commands below):
    • In zookeeper-2's zoo.cfg, set clientPort=2182 and dataDir=/ashura/zookeeper-2/datalog.
    • In zookeeper-3's zoo.cfg, set clientPort=2183 and dataDir=/ashura/zookeeper-3/datalog.
    • Give each node its own admin.serverPort (for example 8001 and 8002), since three admin servers on one machine cannot share a port.
  • Create the data directories referenced in the configuration files:
mkdir -p /ashura/zookeeper-1/datalog
mkdir -p /ashura/zookeeper-2/datalog
mkdir -p /ashura/zookeeper-3/datalog
  • Write each node's id into its myid file:
echo "1" > /ashura/zookeeper-1/datalog/myid
echo "2" > /ashura/zookeeper-2/datalog/myid
echo "3" > /ashura/zookeeper-3/datalog/myid
  • Finally, start the three ZooKeeper nodes:
/ashura/zookeeper-1/bin/zkServer.sh start
/ashura/zookeeper-2/bin/zkServer.sh start
/ashura/zookeeper-3/bin/zkServer.sh start
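
For reference, the per-node preparation can also be scripted. Below is a minimal sketch for zookeeper-2, assuming the /ashura layout used above; zookeeper-3 is prepared the same way with clientPort=2183, its own dataDir and admin port, and a myid of 3.

# copy node 1's config and adjust the per-node settings for node 2
cp /ashura/zookeeper-1/conf/zoo.cfg /ashura/zookeeper-2/conf/zoo.cfg
sed -i 's/^clientPort=.*/clientPort=2182/' /ashura/zookeeper-2/conf/zoo.cfg
sed -i 's#^dataDir=.*#dataDir=/ashura/zookeeper-2/datalog#' /ashura/zookeeper-2/conf/zoo.cfg
sed -i 's/^admin.serverPort=.*/admin.serverPort=8001/' /ashura/zookeeper-2/conf/zoo.cfg
# create the data directory and write the node id
mkdir -p /ashura/zookeeper-2/datalog
echo "2" > /ashura/zookeeper-2/datalog/myid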

Use the following commands to check whether the nodes started successfully:

/ashura/zookeeper-1/bin/zkServer.sh status
/ashura/zookeeper-2/bin/zkServer.sh status
/ashura/zookeeper-3/bin/zkServer.sh status
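
A healthy three-node ensemble reports one node in leader mode and two in follower mode. As an extra sanity check, you can connect with the bundled CLI; this is a quick sketch against node 1's client port as configured above.

# connect to node 1 and list the root znodes; a fresh ensemble shows only /zookeeper
/ashura/zookeeper-1/bin/zkCli.sh -server localhost:2181 ls /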

Setting up the Kafka cluster

Download

Grab kafka_2.11-2.2.1.tgz from the Kafka website.
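
If you prefer the command line, the archive can also be pulled directly. A sketch assuming the Apache archive layout; adjust the URL for your mirror or version:

wget https://archive.apache.org/dist/kafka/2.2.1/kafka_2.11-2.2.1.tgz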

Installation

  • Extract the archive:
tar -zxvf kafka_2.11-2.2.1.tgz
  • Make three copies of config/server.properties named server1.properties, server2.properties, and server3.properties.
  • Edit server1.properties:
    • broker.id=1
    • listeners=PLAINTEXT://:9092
    • advertised.listeners=PLAINTEXT://10.1.14.159:9092 (10.1.14.159 is this machine's IP; substitute your own)
    • log.dirs=/ashura/kafka_2.11-2.2.1/logs/kafka1-logs
    • zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
  • Likewise, edit server2.properties:
    • broker.id=2
    • listeners=PLAINTEXT://:9093
    • advertised.listeners=PLAINTEXT://10.1.14.159:9093
    • log.dirs=/ashura/kafka_2.11-2.2.1/logs/kafka2-logs
    • zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
  • Likewise, edit server3.properties:
    • broker.id=3
    • listeners=PLAINTEXT://:9094
    • advertised.listeners=PLAINTEXT://10.1.14.159:9094
    • log.dirs=/ashura/kafka_2.11-2.2.1/logs/kafka3-logs
    • zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
  • Create the three log directories (e.g. mkdir -p /ashura/kafka_2.11-2.2.1/logs/kafka1-logs, and likewise for kafka2-logs and kafka3-logs, so the redirects below have somewhere to write), then start the brokers:
nohup /ashura/kafka_2.11-2.2.1/bin/kafka-server-start.sh /ashura/kafka_2.11-2.2.1/config/server3.properties > /ashura/kafka_2.11-2.2.1/logs/kafka3-logs/startup.log 2>&1 &
nohup /ashura/kafka_2.11-2.2.1/bin/kafka-server-start.sh /ashura/kafka_2.11-2.2.1/config/server2.properties > /ashura/kafka_2.11-2.2.1/logs/kafka2-logs/startup.log 2>&1 &
nohup /ashura/kafka_2.11-2.2.1/bin/kafka-server-start.sh /ashura/kafka_2.11-2.2.1/config/server1.properties > /ashura/kafka_2.11-2.2.1/logs/kafka1-logs/startup.log 2>&1 &
  • Check startup.log, or server.log in the same directory, for errors.
  • Test the cluster:
    • Create a topic: ./kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create --topic fxb_test1 --replication-factor 3 --partitions 3
    • Start a consumer: ./kafka-console-consumer.sh --bootstrap-server 10.1.14.159:9092 --topic fxb_test1
    • Open a new window and start a producer: ./kafka-console-producer.sh --broker-list 10.1.14.159:9092 --topic fxb_test1
      Type messages in the producer window and check that they show up in the consumer window.
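
To confirm that all three brokers joined the cluster and hold replicas of the new topic, you can also describe it with the same tooling; each of the three partitions should list a leader plus a three-broker replica and ISR set.

./kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --describe --topic fxb_test1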

Testing with the Java client

  • Add the dependency:
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.3.0</version>
    </dependency>
</dependencies>
  • Create a producer:
package com.fxb.learn.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * Producer demo: sends ten short string messages to the fxb_test1 topic.
 */
public class ProducerDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.1.14.159:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            // key and value are both the loop index as a string
            producer.send(new ProducerRecord<String, String>("fxb_test1", Integer.toString(i), Integer.toString(i)));
            System.out.println("has sent msg [" + i + "]");
        }
        producer.close();
    }
}
  • Create a consumer:
package com.fxb.learn.kafka.consumer;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

/**
 * Consumer demo: polls the fxb_test1 topic and prints every record it receives.
 */
public class ConsumerDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "10.1.14.159:9092");
        props.setProperty("group.id", "test");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("fxb_test1"));
        // poll in a loop and print every record received
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}
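
Assuming the two classes live in a standard Maven project with the kafka-clients dependency above, they can be compiled and run from the command line roughly like this (a sketch; mvn exec:java relies on the exec-maven-plugin being resolvable):

# run the producer once, then the consumer (the consumer loops forever; stop it with Ctrl-C)
mvn -q compile exec:java -Dexec.mainClass=com.fxb.learn.kafka.producer.ProducerDemo
mvn -q compile exec:java -Dexec.mainClass=com.fxb.learn.kafka.consumer.ConsumerDemo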
