

Linux server.xml log parameters: log collection with Linux + Log4j + Kafka + KafkaLog4jAppender

Published: 2024/7/5 · linux · 豆豆

Background:

Kafka version: kafka_2.10-0.8.2.1

Server IP: 10.243.3.17

1. Kafka server.properties configuration
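The original post shows this file as an image, so the exact values are not recoverable; below is a minimal sketch for kafka_2.10-0.8.2.1. Every value is an assumption to adapt: in particular, `port` must match whatever the clients connect to (the appendix producer uses 10.243.3.17:8457), and `advertised.host.name` must be reachable from client machines.

```properties
broker.id=0
port=9092
host.name=10.243.3.17
# Must be an address clients can reach; a frequent source of connection failures.
advertised.host.name=10.243.3.17
log.dirs=/usr/local/kafka_2.10-0.8.2.1/kafka-logs-1
num.partitions=1
zookeeper.connect=localhost:2181
```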

2. zookeeper.properties configuration
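This file is also missing from the post; a minimal sketch, with the data directory chosen to match the `zookeeper-logs` directory deleted in the troubleshooting section (an assumption):

```properties
dataDir=/usr/local/kafka_2.10-0.8.2.1/zookeeper-logs
clientPort=2181
maxClientCnxns=0
```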

3. Starting ZooKeeper and Kafka

../bin/zookeeper-server-start.sh -daemon /usr/local/kafka_2.10-0.8.2.1/config/zookeeper.properties

../bin/kafka-server-start.sh -daemon /usr/local/kafka_2.10-0.8.2.1/config/server.properties

4. Creating a topic

../bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

5. log4j.xml configuration
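The log4j.xml itself is not included in the post. A sketch of what it plausibly looked like for Kafka 0.8.x, where the appender class ships in the broker jar as `kafka.producer.KafkaLog4jAppender`; the broker address, topic, and pattern layout here are assumptions matching the rest of the post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <!-- Ships every log event to the Kafka topic "test". -->
    <appender name="KAFKA" class="kafka.producer.KafkaLog4jAppender">
        <param name="BrokerList" value="10.243.3.17:9092"/>
        <param name="Topic" value="test"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d [%t] %-5p %c - %m%n"/>
        </layout>
    </appender>

    <root>
        <priority value="info"/>
        <appender-ref ref="KAFKA"/>
    </root>
</log4j:configuration>
```

With this in place, the `KafkaApp` class in the appendix needs no Kafka code at all: plain `LOGGER.info(...)` calls end up in the topic.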

6. Common problems

If you run into problems, first verify the configuration, especially host, port, and advertised.host.name. Then delete the kafka-logs-1 and zookeeper-logs directories, restart ZooKeeper and Kafka, and recreate the topic.

7. Appendix: connecting to Kafka from plain Java (without log4j), for reference
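The classes below compile against the old Scala producer/consumer API plus Guava. If the project uses Maven, the dependencies would look roughly like this; the kafka coordinates come from the version stated in the Background, while the log4j and Guava versions are assumptions:

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.1</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version>
</dependency>
```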

import org.apache.log4j.Logger;

/**
 * Emits one log line per second; with a Kafka appender configured in
 * log4j.xml, each line is shipped to the broker.
 *
 * @author gengchong
 * @date 2016-01-05 09:21:16
 */
public class KafkaApp {

    private static final Logger LOGGER = Logger.getLogger(KafkaApp.class);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 20; i++) {
            LOGGER.info("Info [" + i + "]");
            Thread.sleep(1000);
        }
    }
}

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * Plain producer using the old Scala producer API (no log4j involved).
 *
 * @author gengchong
 * @date 2016-01-05 13:55:56
 */
public class KafakProducer {

    private static final String TOPIC = "test";
    private static final String CONTENT = "This is a single message";
    private static final String BROKER_LIST = "10.243.3.17:8457";
    private static final String SERIALIZER_CLASS = "kafka.serializer.StringEncoder";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", SERIALIZER_CLASS);
        props.put("metadata.broker.list", BROKER_LIST);

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        // Send one message.
        KeyedMessage<String, String> message =
                new KeyedMessage<String, String>(TOPIC, CONTENT);
        producer.send(message);

        // Send multiple messages.
        List<KeyedMessage<String, String>> messages =
                new ArrayList<KeyedMessage<String, String>>();
        for (int i = 0; i < 5; i++) {
            messages.add(new KeyedMessage<String, String>(
                    TOPIC, "============== send Message. " + i));
        }
        producer.send(messages);
    }
}

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.google.common.collect.ImmutableMap;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

/**
 * High-level consumer that reads the topic with four threads.
 *
 * @author gengchong
 * @date 2016-01-05 09:22:04
 */
public class KafkaConsumer {

    private static final String ZOOKEEPER = "10.243.3.17:2181";
    // The group name can be anything: every group receives its own
    // complete copy of each message in the topic.
    private static final String GROUP_NAME = "test_group";
    private static final String TOPIC_NAME = "test";
    private static final int CONSUMER_NUM = 4;
    private static final int PARTITION_NUM = 4;

    public static void main(String[] args) {
        // Specify some consumer properties.
        Properties props = new Properties();
        props.put("zookeeper.connect", ZOOKEEPER);
        props.put("zookeeper.connectiontimeout.ms", "1000000");
        props.put("group.id", GROUP_NAME);

        // Create the connection to the cluster.
        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        ConsumerConnector consumerConnector =
                Consumer.createJavaConsumerConnector(consumerConfig);

        // Create 4 partitions of the stream for topic "test", to allow 4
        // threads to consume.
        Map<String, List<KafkaStream<byte[], byte[]>>> topicMessageStreams =
                consumerConnector.createMessageStreams(
                        ImmutableMap.of(TOPIC_NAME, PARTITION_NUM));
        List<KafkaStream<byte[], byte[]>> streams =
                topicMessageStreams.get(TOPIC_NAME);

        // Create a pool of 4 threads, one per partition stream.
        ExecutorService executor = Executors.newFixedThreadPool(CONSUMER_NUM);

        // Consume the messages in the threads.
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            executor.submit(new Runnable() {
                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> msgAndMetadata : stream) {
                        // Process the message payload (msgAndMetadata.message()).
                        System.out.println(new String(msgAndMetadata.message()));
                    }
                }
            });
        }
    }
}
