
Integrate CloudWatch Logs with CloudHub Mule



In this blog, I will explain how to enable AWS CloudWatch logs for your Mule CloudHub application. AWS provides the CloudWatch Logs service so that you can manage your logs better, and it is cheaper than Splunk. Since CloudHub automatically rolls over logs beyond 100 MB, we need a mechanism to manage logs more effectively. For this we create a custom appender which sends the logs to CloudWatch.

package com.javaroots.appenders;

import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.toList;

import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Formatter;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

import com.amazonaws.regions.Regions;
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.model.CreateLogGroupRequest;
import com.amazonaws.services.logs.model.CreateLogStreamRequest;
import com.amazonaws.services.logs.model.DataAlreadyAcceptedException;
import com.amazonaws.services.logs.model.DescribeLogGroupsRequest;
import com.amazonaws.services.logs.model.DescribeLogStreamsRequest;
import com.amazonaws.services.logs.model.InputLogEvent;
import com.amazonaws.services.logs.model.InvalidSequenceTokenException;
import com.amazonaws.services.logs.model.LogGroup;
import com.amazonaws.services.logs.model.LogStream;
import com.amazonaws.services.logs.model.PutLogEventsRequest;
import com.amazonaws.services.logs.model.PutLogEventsResult;

@Plugin(name = "CLOUDW", category = "Core", elementType = "appender", printObject = true)
public class CloudwatchAppender extends AbstractAppender {

    private static final long serialVersionUID = 12321345L;

    private static Logger logger2 = LogManager.getLogger(CloudwatchAppender.class);

    private final Boolean DEBUG_MODE = System.getProperty("log4j.debug") != null;

    /** Used to make sure that on close() our daemon thread isn't also trying to sendMessage()s */
    private Object sendMessagesLock = new Object();

    /** The queue used to buffer log entries */
    private LinkedBlockingQueue<LogEvent> loggingEventsQueue;

    /** The AWS Cloudwatch Logs API client */
    private AWSLogs awsLogsClient;

    private AtomicReference<String> lastSequenceToken = new AtomicReference<>();

    /** The AWS Cloudwatch log group name */
    private String logGroupName;

    /** The AWS Cloudwatch log stream name */
    private String logStreamName;

    /** The queue / buffer size */
    private int queueLength = 1024;

    /** The maximum number of log entries to send in one go to the AWS Cloudwatch Logs service */
    private int messagesBatchSize = 128;

    private AtomicBoolean cloudwatchAppenderInitialised = new AtomicBoolean(false);

    private CloudwatchAppender(final String name,
                               final Layout<? extends Serializable> layout,
                               final Filter filter,
                               final boolean ignoreExceptions,
                               String logGroupName,
                               String logStreamName,
                               Integer queueLength,
                               Integer messagesBatchSize) {
        super(name, filter, layout, ignoreExceptions);
        this.logGroupName = logGroupName;
        this.logStreamName = logStreamName;
        this.queueLength = queueLength;
        this.messagesBatchSize = messagesBatchSize;
        this.activateOptions();
    }

    @Override
    public void append(LogEvent event) {
        if (cloudwatchAppenderInitialised.get()) {
            loggingEventsQueue.offer(event);
        }
        // else: not initialised yet, silently drop the event
    }

    public void activateOptions() {
        if (isBlank(logGroupName) || isBlank(logStreamName)) {
            logger2.error("Could not initialise CloudwatchAppender because either or both LogGroupName(" + logGroupName + ") and LogStreamName(" + logStreamName + ") are null or empty");
            this.stop();
        } else {
            // the lines below work with AWS SDK version 1.9.40 for a local build:
            // this.awsLogsClient = new AWSLogsClient();
            // awsLogsClient.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_2));
            this.awsLogsClient = com.amazonaws.services.logs.AWSLogsClientBuilder.standard()
                    .withRegion(Regions.AP_SOUTHEAST_2)
                    .build();
            loggingEventsQueue = new LinkedBlockingQueue<>(queueLength);
            try {
                initializeCloudwatchResources();
                initCloudwatchDaemon();
                cloudwatchAppenderInitialised.set(true);
            } catch (Exception e) {
                logger2.error("Could not initialise Cloudwatch Logs for LogGroupName: " + logGroupName + " and LogStreamName: " + logStreamName, e);
                if (DEBUG_MODE) {
                    System.err.println("Could not initialise Cloudwatch Logs for LogGroupName: " + logGroupName + " and LogStreamName: " + logStreamName);
                    e.printStackTrace();
                }
            }
        }
    }

    private void initCloudwatchDaemon() {
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    if (loggingEventsQueue.size() > 0) {
                        sendMessages();
                    }
                    Thread.sleep(20L);
                } catch (InterruptedException e) {
                    if (DEBUG_MODE) {
                        e.printStackTrace();
                    }
                }
            }
        });
        t.setName("CloudwatchThread");
        t.setDaemon(true);
        t.start();
    }

    private void sendMessages() {
        synchronized (sendMessagesLock) {
            LogEvent polledLoggingEvent;
            final Layout<? extends Serializable> layout = getLayout();
            List<LogEvent> loggingEvents = new ArrayList<>();
            try {
                while ((polledLoggingEvent = loggingEventsQueue.poll()) != null
                        && loggingEvents.size() <= messagesBatchSize) {
                    loggingEvents.add(polledLoggingEvent);
                }
                List<InputLogEvent> inputLogEvents = loggingEvents.stream()
                        .map(loggingEvent -> new InputLogEvent()
                                .withTimestamp(loggingEvent.getTimeMillis())
                                .withMessage(layout == null
                                        ? loggingEvent.getMessage().getFormattedMessage()
                                        : new String(layout.toByteArray(loggingEvent), StandardCharsets.UTF_8)))
                        .sorted(comparing(InputLogEvent::getTimestamp))
                        .collect(toList());
                if (!inputLogEvents.isEmpty()) {
                    PutLogEventsRequest putLogEventsRequest = new PutLogEventsRequest(
                            logGroupName,
                            logStreamName,
                            inputLogEvents);
                    try {
                        putLogEventsRequest.setSequenceToken(lastSequenceToken.get());
                        PutLogEventsResult result = awsLogsClient.putLogEvents(putLogEventsRequest);
                        lastSequenceToken.set(result.getNextSequenceToken());
                    } catch (DataAlreadyAcceptedException dataAlreadyAcceptedException) {
                        putLogEventsRequest.setSequenceToken(dataAlreadyAcceptedException.getExpectedSequenceToken());
                        PutLogEventsResult result = awsLogsClient.putLogEvents(putLogEventsRequest);
                        lastSequenceToken.set(result.getNextSequenceToken());
                        if (DEBUG_MODE) {
                            dataAlreadyAcceptedException.printStackTrace();
                        }
                    } catch (InvalidSequenceTokenException invalidSequenceTokenException) {
                        putLogEventsRequest.setSequenceToken(invalidSequenceTokenException.getExpectedSequenceToken());
                        PutLogEventsResult result = awsLogsClient.putLogEvents(putLogEventsRequest);
                        lastSequenceToken.set(result.getNextSequenceToken());
                        if (DEBUG_MODE) {
                            invalidSequenceTokenException.printStackTrace();
                        }
                    }
                }
            } catch (Exception e) {
                if (DEBUG_MODE) {
                    logger2.error("error inserting cloudwatch:", e);
                    e.printStackTrace();
                }
            }
        }
    }

    private void initializeCloudwatchResources() {
        DescribeLogGroupsRequest describeLogGroupsRequest = new DescribeLogGroupsRequest();
        describeLogGroupsRequest.setLogGroupNamePrefix(logGroupName);
        Optional<LogGroup> logGroupOptional = awsLogsClient
                .describeLogGroups(describeLogGroupsRequest)
                .getLogGroups()
                .stream()
                .filter(logGroup -> logGroup.getLogGroupName().equals(logGroupName))
                .findFirst();
        if (!logGroupOptional.isPresent()) {
            CreateLogGroupRequest createLogGroupRequest = new CreateLogGroupRequest()
                    .withLogGroupName(logGroupName);
            awsLogsClient.createLogGroup(createLogGroupRequest);
        }
        DescribeLogStreamsRequest describeLogStreamsRequest = new DescribeLogStreamsRequest()
                .withLogGroupName(logGroupName)
                .withLogStreamNamePrefix(logStreamName);
        Optional<LogStream> logStreamOptional = awsLogsClient
                .describeLogStreams(describeLogStreamsRequest)
                .getLogStreams()
                .stream()
                .filter(logStream -> logStream.getLogStreamName().equals(logStreamName))
                .findFirst();
        if (!logStreamOptional.isPresent()) {
            CreateLogStreamRequest createLogStreamRequest = new CreateLogStreamRequest()
                    .withLogGroupName(logGroupName)
                    .withLogStreamName(logStreamName);
            awsLogsClient.createLogStream(createLogStreamRequest);
        }
    }

    private boolean isBlank(String string) {
        return null == string || string.trim().length() == 0;
    }

    protected String getSimpleStacktraceAsString(final Throwable thrown) {
        final StringBuilder stackTraceBuilder = new StringBuilder();
        for (StackTraceElement stackTraceElement : thrown.getStackTrace()) {
            new Formatter(stackTraceBuilder).format("%s.%s(%s:%d)%n",
                    stackTraceElement.getClassName(),
                    stackTraceElement.getMethodName(),
                    stackTraceElement.getFileName(),
                    stackTraceElement.getLineNumber());
        }
        return stackTraceBuilder.toString();
    }

    @Override
    public void start() {
        super.start();
    }

    @Override
    public void stop() {
        super.stop();
        // flush whatever is still queued before shutting down
        while (loggingEventsQueue != null && !loggingEventsQueue.isEmpty()) {
            this.sendMessages();
        }
    }

    @Override
    public String toString() {
        return CloudwatchAppender.class.getSimpleName() + "{"
                + "name=" + getName()
                + " loggroupName=" + logGroupName
                + " logstreamName=" + logStreamName;
    }

    @PluginFactory
    @SuppressWarnings("unused")
    public static CloudwatchAppender createCloudWatchAppender(
            @PluginAttribute(value = "queueLength") Integer queueLength,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginAttribute(value = "logGroupName") String logGroupName,
            @PluginAttribute(value = "logStreamName") String logStreamName,
            @PluginAttribute(value = "name") String name,
            @PluginAttribute(value = "ignoreExceptions", defaultBoolean = false) Boolean ignoreExceptions,
            @PluginAttribute(value = "messagesBatchSize") Integer messagesBatchSize) {
        return new CloudwatchAppender(name, layout, null, ignoreExceptions,
                logGroupName, logStreamName, queueLength, messagesBatchSize);
    }
}
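The daemon thread drains the queue in bounded batches and sorts each batch by timestamp before calling PutLogEvents, since CloudWatch expects the events in a batch in chronological order. That drain-and-sort step can be sketched in isolation with plain JDK types; `BatchDrainDemo` and `drainBatch` are illustrative names, not part of the appender:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchDrainDemo {

    // Drain up to batchSize timestamps from the queue and sort them ascending,
    // mirroring the poll-then-sort step in the appender's sendMessages().
    static List<Long> drainBatch(LinkedBlockingQueue<Long> queue, int batchSize) {
        List<Long> batch = new ArrayList<>();
        Long timestamp;
        while (batch.size() < batchSize && (timestamp = queue.poll()) != null) {
            batch.add(timestamp);
        }
        batch.sort(Comparator.naturalOrder());
        return batch;
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<Long> queue = new LinkedBlockingQueue<>(1024);
        queue.offer(300L);
        queue.offer(100L);
        queue.offer(200L);

        List<Long> batch = drainBatch(queue, 2);
        System.out.println(batch.size()); // 2 -> the batch size cap is respected
        System.out.println(batch.get(0)); // 100 -> oldest event first after sorting
        System.out.println(queue.size()); // 1 -> the leftover event waits for the next cycle
    }
}
```

A bounded queue plus a capped batch size keeps the appender from blocking the application under log bursts: `offer()` simply drops events when the buffer is full, and each PutLogEvents call stays well under the API's batch limits.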

We add the dependencies in the pom.xml file.

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-logs</artifactId>
    <!-- for local 3.8.5 we need to use this version; cloudhub 3.8.5 has jackson 2.6.6 -->
    <!-- <version>1.9.40</version> -->
    <version>1.11.105</version>
    <exclusions>
        <exclusion> <!-- declare the exclusion here -->
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-1.2-api</artifactId>
        </exclusion>
        <exclusion> <!-- declare the exclusion here -->
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
        </exclusion>
        <exclusion> <!-- declare the exclusion here -->
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-api -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.5</version>
</dependency>

Now we need to modify log4j2.xml: add the custom CloudWatch appender together with the CloudHub log appender so that we still get the logs on CloudHub as well.

<?xml version="1.0" encoding="utf-8"?>
<Configuration status="trace" packages="com.javaroots.appenders,com.mulesoft.ch.logging.appender">

    <!-- These are some of the loggers you can enable.
         There are several more you can find in the documentation.
         Besides this log4j configuration, you can also use Java VM environment variables
         to enable other logs like network (-Djavax.net.debug=ssl or all)
         and Garbage Collector (-XX:+PrintGC). These will be appended to the console,
         so you will see them in the mule_ee.log file. -->

    <Appenders>
        <CLOUDW name="CloudW" logGroupName="test-log-stream" logStreamName="test44"
                messagesBatchSize="${sys:cloudwatch.msg.batch.size}"
                queueLength="${sys:cloudwatch.queue.length}">
            <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
        </CLOUDW>

        <Log4J2CloudhubLogAppender name="CLOUDHUB"
                addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
                applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"
                appendRetryIntervalMs="${sys:logging.appendRetryInterval}"
                appendMaxAttempts="${sys:logging.appendMaxAttempts}"
                batchSendIntervalMs="${sys:logging.batchSendInterval}"
                batchMaxRecords="${sys:logging.batchMaxRecords}"
                memBufferMaxSize="${sys:logging.memBufferMaxSize}"
                journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}"
                journalMaxFileSize="${sys:logging.journalMaxFileSize}"
                clientMaxPacketSize="${sys:logging.clientMaxPacketSize}"
                clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}"
                clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}"
                serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}"
                serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}"
                statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">
            <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t] CUSTOM: %m%n"/>
        </Log4J2CloudhubLogAppender>
    </Appenders>

    <Loggers>
        <!-- Http Logger shows wire traffic on DEBUG -->
        <AsyncLogger name="org.mule.module.http.internal.HttpMessageLogger" level="WARN"/>
        <!-- JDBC Logger shows queries and parameters values on DEBUG -->
        <AsyncLogger name="com.mulesoft.mule.transport.jdbc" level="WARN"/>
        <!-- CXF is used heavily by Mule for web services -->
        <AsyncLogger name="org.apache.cxf" level="WARN"/>
        <!-- Apache Commons tend to make a lot of noise which can clutter the log -->
        <AsyncLogger name="org.apache" level="WARN"/>
        <!-- Reduce startup noise -->
        <AsyncLogger name="org.springframework.beans.factory" level="WARN"/>
        <!-- Mule classes -->
        <AsyncLogger name="org.mule" level="INFO"/>
        <AsyncLogger name="com.mulesoft" level="INFO"/>
        <!-- Reduce DM verbosity -->
        <AsyncLogger name="org.jetel" level="WARN"/>
        <AsyncLogger name="Tracking" level="WARN"/>

        <AsyncRoot level="INFO">
            <AppenderRef ref="CLOUDHUB" level="INFO"/>
            <AppenderRef ref="CloudW" level="INFO"/>
        </AsyncRoot>
    </Loggers>
</Configuration>
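The CLOUDW attributes read their values from system properties, so on CloudHub they can be supplied as application properties in Runtime Manager. The property names below simply match the ${sys:...} placeholders in the configuration; the values shown are the appender's own defaults, used here only as an illustration:

```
cloudwatch.msg.batch.size=128
cloudwatch.queue.length=1024
```

If the properties are left unset, the ${sys:...} lookups resolve to empty strings and the appender falls back to whatever the plugin factory passes in, so setting them explicitly makes the behaviour predictable across environments.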

Finally, we need to disable the CloudHub logs in the CloudHub Runtime Manager.

This works with CloudHub Mule runtime version 3.8.4. There are some issues with the CloudHub 3.8.5 version: the appender initializes correctly and sends logs, but events and messages are missing.

Translated from: https://www.javacodegeeks.com/2017/10/integrate-cloudwatch-logs-cloudhub-mule.html


