Big Data Notes (32): Spark Streaming Integration with Kafka and Flume

發(fā)布時(shí)間:2025/4/14 编程问答 48 豆豆

III. Integration: Data Sources
1. Apache Kafka: a high-throughput distributed publish-subscribe messaging system
(1)
(*) Message types

Topic: publish-subscribe (like a broadcast)  Queue: point-to-point

(*) Common messaging systems
Kafka, Redis -----> support Topic only
JMS (the Java Messaging Service standard): Topic and Queue -----> e.g. WebLogic

(*) Roles: producer: produces messages
consumer: receives messages (and processes them)

(2) Architecture of the Kafka messaging system



(3) Setting up Kafka: single-machine, single-broker mode

# Start Kafka
bin/kafka-server-start.sh config/server.properties &

Testing Kafka
Create a topic:

bin/kafka-topics.sh --create --zookeeper bigdata11:2181 --replication-factor 1 --partitions 3 --topic mydemo1

Send messages:

bin/kafka-console-producer.sh --broker-list bigdata11:9092 --topic mydemo1

Receive messages (the consumer gets the topic metadata from ZooKeeper):

bin/kafka-console-consumer.sh --zookeeper bigdata11:2181 --topic mydemo1

(4) Integrating with Spark Streaming: two approaches
Note: many jars are required (some of them conflicting), so using Maven is strongly recommended.
The data read from Kafka always arrives as (key, value) pairs.
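A minimal Maven sketch of the dependencies these examples need. The versions below are assumptions for the Spark 1.x / Scala 2.10-era APIs used in this note; match them to your own Spark and Scala versions.

```xml
<!-- Assumed coordinates; adjust versions to your Spark/Scala installation -->
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.6.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.10</artifactId>
    <version>1.6.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.10</artifactId>
    <version>1.6.3</version>
  </dependency>
</dependencies>
```

Letting Maven resolve these pulls in the transitive Kafka and Flume client jars, which is what avoids the version conflicts mentioned above.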
(*) Receiver-based approach

The receiver is implemented with Kafka's high-level consumer API. For all receivers, the received data is stored in the Spark executors, and Spark Streaming then launches jobs to process it.


package main.scala.demo

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaReceiverDemo {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaReceiverDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Topic info: consume the mydemo1 topic with one receiver thread
    val topic = Map("mydemo1" -> 1)

    // Create a Kafka input stream (DStream), receiver-based, connected through ZooKeeper
    // Parameters: StreamingContext, ZK address, group id, topics
    val kafkaStream = KafkaUtils.createStream(ssc, "192.168.153.11:2181", "mygroup", topic)

    // Receive the data and turn each (key, value) pair into a string
    val lines = kafkaStream.map(e => {
      // e is each received (key, value) pair
      e.toString()
    })

    // Print the result
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Start Kafka, send a message from the console producer, and check the program's output.

(*) Direct approach: recommended (more efficient)

This approach periodically queries Kafka for the latest offset of each topic+partition, and uses those offsets to define the range of data each batch will process. When the processing jobs run, Spark reads the corresponding offset range from Kafka using the simple consumer API.


package main.scala.demo

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaDirectDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaDirectDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Topic info
    val topic = Set("mydemo1")

    // Read directly from the brokers: specify the broker address
    val brokerList = Map[String, String]("metadata.broker.list" -> "192.168.153.11:9092")

    // Create a DStream; type parameters: key, value, key decoder, value decoder
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, brokerList, topic)

    // Read the messages
    val message = lines.map(e => {
      e.toString()
    })
    message.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
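The per-partition offset bookkeeping the direct approach performs can be sketched in plain Scala (a toy illustration only, not part of the Spark API): for each batch, the driver pairs the offsets it has already processed with the latest offsets it just queried from Kafka to form the read range of that batch.

```scala
object DirectOffsetSketch {
  // A (topic, partition) pair, as the driver tracks offsets per partition
  type TopicPartition = (String, Int)

  // Given the offsets already processed and the latest offsets queried
  // from Kafka, compute the [from, until) range each batch would read.
  def offsetRanges(processed: Map[TopicPartition, Long],
                   latest: Map[TopicPartition, Long]): Map[TopicPartition, (Long, Long)] =
    latest.map { case (tp, until) =>
      // Partitions never seen before start from offset 0
      tp -> (processed.getOrElse(tp, 0L), until)
    }
}
```

Because each batch is defined purely by such offset ranges, no receiver or write-ahead log is needed, which is why this mode is more efficient.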


2. Integrating Apache Flume: two approaches
Note: this depends on the jars under Flume's lib directory, as well as …
(1) Flume push mode: Flume is normally used to push data between Flume agents. In this approach, Spark Streaming sets up a receiver that acts as an Avro endpoint, and Flume pushes data directly to that receiver.

The a4.conf configuration:

#bin/flume-ng agent -n a4 -f myagent/a4.conf -c conf -Dflume.root.logger=INFO,console
# Define the agent name and the source, channel, and sink names
a4.sources = r1
a4.channels = c1
a4.sinks = k1

# Configure the source
a4.sources.r1.type = spooldir
a4.sources.r1.spoolDir = /root/training/logs

# Configure the channel
a4.channels.c1.type = memory
a4.channels.c1.capacity = 10000
a4.channels.c1.transactionCapacity = 100

# Configure the sink
a4.sinks.k1.type = avro
a4.sinks.k1.hostname = 192.168.153.1
a4.sinks.k1.port = 1234

# Wire source, channel, and sink together
a4.sources.r1.channels = c1
a4.sinks.k1.channel = c1


package flume

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MyFlumeStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkFlumeNGWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Create a DStream of FlumeEvents
    val flumeEvent = FlumeUtils.createStream(ssc, "192.168.153.1", 1234)

    // Convert each FlumeEvent body into a string
    val lineDStream = flumeEvent.map(e => {
      new String(e.event.getBody.array)
    })

    // Print the result
    lineDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Testing:

1. Start the Spark Streaming program MyFlumeStream.

2. Start Flume: bin/flume-ng agent -n a4 -f myagent/a4.conf -c conf -Dflume.root.logger=INFO,console

3. Copy a log file into the /root/training/logs directory.

4. Watch the program's output for the collected data.


(2) Custom sink approach (pull mode): Flume pushes data into a sink, where it stays buffered. Spark Streaming uses a reliable Flume receiver to pull the data from that sink. This approach is more robust and reliable, and requires configuring Flume with a special sink.
(*) Copy the Spark jars into Flume's lib directory
(*) The following jar also needs to be copied into Flume's lib directory

(*) and added to the IDEA project's classpath

#bin/flume-ng agent -n a1 -f myagent/a1.conf -c conf -Dflume.root.logger=INFO,console
a1.channels = c1
a1.sinks = k1
a1.sources = r1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/training/logs

a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 100000

a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = 192.168.153.11
a1.sinks.k1.port = 1234

# Wire source, channel, and sink together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1


package flume

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FlumeLogPull {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkFlumeNGWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Create a DStream of FlumeEvents by polling the Flume SparkSink
    val flumeEvent = FlumeUtils.createPollingStream(ssc, "192.168.153.11", 1234, StorageLevel.MEMORY_ONLY_SER_2)

    // Convert each FlumeEvent body into a string
    val lineDStream = flumeEvent.map(e => {
      new String(e.event.getBody.array)
    })

    // Print the result
    lineDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Start Flume:

bin/flume-ng agent -n a1 -f myagent/a1.conf -c conf -Dflume.root.logger=INFO,console

The test steps are the same as for push mode.

Reposted from: https://www.cnblogs.com/lingluo2017/p/8709122.html
