storm kafkaSpout pitfall notes: the offset problem!

Published: 2025/3/20

There are plenty of examples online of integrating Kafka with Storm; search for one yourself.


Problem description:

  Kafka had been set up long before, and a newly built Storm cluster needed to consume an existing Kafka topic. Because the topic already held many messages, Storm began consuming from the very beginning.


Problem solution:

  The following passage is quoted from the official documentation:

How KafkaSpout stores offsets of a Kafka topic and recovers in case of failures

As shown in the above KafkaConfig properties, you can control from where in the Kafka topic the spout begins to read by setting KafkaConfig.startOffsetTime as follows:

  • kafka.api.OffsetRequest.EarliestTime(): read from the beginning of the topic (i.e. from the oldest messages onwards)
  • kafka.api.OffsetRequest.LatestTime(): read from the end of the topic (i.e. any new messages that are being written to the topic)
  • A Unix timestamp aka seconds since the epoch (e.g. via System.currentTimeMillis()): see "How do I accurately get offsets of messages for a certain timestamp using OffsetRequest?" in the Kafka FAQ

    As the topology runs the Kafka spout keeps track of the offsets it has read and emitted by storing state information under the ZooKeeper path SpoutConfig.zkRoot + "/" + SpoutConfig.id. In the case of failures it recovers from the last written offset in ZooKeeper.

    Important: When re-deploying a topology make sure that the settings for SpoutConfig.zkRoot and SpoutConfig.id were not modified, otherwise the spout will not be able to read its previous consumer state information (i.e. the offsets) from ZooKeeper -- which may lead to unexpected behavior and/or to data loss, depending on your use case.

    This means that when a topology has run once the setting KafkaConfig.startOffsetTime will not have an effect for subsequent runs of the topology because now the topology will rely on the consumer state information (offsets) in ZooKeeper to determine from where it should begin (more precisely: resume) reading. If you want to force the spout to ignore any consumer state information stored in ZooKeeper, then you should set the parameter KafkaConfig.ignoreZkOffsets to true. If true, the spout will always begin reading from the offset defined by KafkaConfig.startOffsetTime as described above.


      The gist of that passage: the SpoutConfig object's startOffsetTime field sets where consumption starts. Its default is kafka.api.OffsetRequest.EarliestTime(), i.e. consume from the oldest messages; to start from the newest messages you have to set it to kafka.api.OffsetRequest.LatestTime() explicitly. Another point: this field only takes effect the first time the topology consumes; after that, consumption resumes from the offset recorded in ZooKeeper (the consumer state is stored under the path given by the SpoutConfig object's zkRoot field; not verified).

      If you want the current topology to pick up where the previous topology's consumption left off, do not change the SpoutConfig object's id. Put another way: if the first run started from the earliest messages and you keep the same id, the spout will consume from the earliest messages all the way through to the newest. If at that point you want to skip the backlog and start directly from the newest messages, changing the SpoutConfig object's id is enough.
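      As a minimal sketch of the above (using the old storm-kafka module's storm.kafka.SpoutConfig; the ZooKeeper address, topic name, zkRoot, and id below are placeholders, not values from this post):

      ```java
      import storm.kafka.BrokerHosts;
      import storm.kafka.KafkaSpout;
      import storm.kafka.SpoutConfig;
      import storm.kafka.ZkHosts;

      public class OffsetConfigSketch {
          public static KafkaSpout build() {
              // Placeholder ZooKeeper connect string, topic, zkRoot, and spout id
              BrokerHosts hosts = new ZkHosts("zk1:2181");
              SpoutConfig spoutConfig =
                      new SpoutConfig(hosts, "my-topic", "/kafka-spout", "my-spout-id");

              // First run only: start from the newest messages instead of the earliest
              spoutConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime();

              // To make startOffsetTime win even after offsets exist in ZooKeeper,
              // ignore the stored consumer state (alternatively, change the id above)
              spoutConfig.ignoreZkOffsets = true;

              return new KafkaSpout(spoutConfig);
          }
      }
      ```

      Keeping zkRoot and id stable preserves the ZooKeeper state path, which is exactly why changing the id gives the spout a fresh consumer state.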


      Below are the meanings of some SpoutConfig fields; they are in fact inherited from KafkaConfig, as the source code shows:

      public int fetchSizeBytes = 1024 * 1024;          // total message size requested in each FetchRequest sent to Kafka
      public int socketTimeoutMs = 10000;               // socket timeout for the connection to the Kafka broker
      public int fetchMaxWait = 10000;                  // how long the consumer waits when the broker has no new messages
      public int bufferSizeBytes = 1024 * 1024;         // read-buffer size of the SocketChannel used by SimpleConsumer
      public MultiScheme scheme = new RawMultiScheme(); // how to deserialize the byte[] fetched from Kafka
      public boolean forceFromStart = false;            // whether to force reading from the smallest offset in Kafka
      public long startOffsetTime = kafka.api.OffsetRequest.EarliestTime(); // from which offset time to start reading; defaults to the oldest
      public long maxOffsetBehind = Long.MAX_VALUE;     // if the spout's read position falls too far behind the target, it drops the messages in between
      public boolean useStartOffsetTimeIfOffsetOutOfRange = true; // if the requested offset no longer exists in Kafka, fall back to startOffsetTime
      public int metricsTimeBucketSizeInSecs = 60;      // how often metrics are reported
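      A hedged tuning sketch using a few of these inherited fields (all addresses, names, and values are illustrative placeholders, not recommendations from this post):

      ```java
      import storm.kafka.BrokerHosts;
      import storm.kafka.SpoutConfig;
      import storm.kafka.ZkHosts;

      public class SpoutTuningSketch {
          public static SpoutConfig build() {
              // Placeholder ZooKeeper address, topic, zkRoot, and id
              BrokerHosts hosts = new ZkHosts("zk1:2181");
              SpoutConfig cfg = new SpoutConfig(hosts, "my-topic", "/kafka-spout", "tuning-demo");

              cfg.fetchSizeBytes = 2 * 1024 * 1024;  // ask for up to 2 MB of messages per FetchRequest
              cfg.socketTimeoutMs = 30000;           // allow slower brokers before the socket times out
              cfg.maxOffsetBehind = 100000;          // if the spout falls >100k offsets behind, skip the gap
              cfg.useStartOffsetTimeIfOffsetOutOfRange = true; // fall back to startOffsetTime if the stored offset expired
              return cfg;
          }
      }
      ```

      The last two fields matter together: when Kafka's retention has deleted the offset stored in ZooKeeper, the spout hits OffsetOutOfRange and, with this flag set, restarts from startOffsetTime instead of failing.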


    Reprinted from: https://www.cnblogs.com/wsss/p/6745493.html
