
Log Aggregation Using the ELK Stack

Published: 2023/12/3

1. Introduction

With the use of microservices, it has become easy to build stable distributed applications and leave many legacy problems behind.

But the use of microservices has also brought some challenges, and distributed log management is one of them.

Since microservices are isolated, they do not share databases or log files, so searching, analyzing, and viewing log data in real time becomes challenging.

This is where the ELK stack comes to the rescue.

2. ELK

It is a collection of three open-source products:

  • Elasticsearch is a JSON-based NoSQL database
  • Logstash is a log pipeline tool that accepts input from various sources, performs different transformations, and exports the data to various targets (here, Elasticsearch)
  • Kibana is a visualization layer that works on top of Elasticsearch

Refer to the architecture given below:

ELK stack

Logstash fetches the logs from the microservices.

The fetched logs are transformed into JSON and fed to Elasticsearch.

Developers can view the logs stored in Elasticsearch using Kibana.
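To make that transformation concrete: Logstash turns each raw log line into a structured JSON-like document before indexing it in Elasticsearch. The sketch below is not Logstash code, just a hypothetical Java illustration of how one Log4j-style log line maps onto timestamp, level, and message fields:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogLineToJson {

    // Splits a log line of the form "2018-12-01 10:15:30 INFO some message"
    // into the kind of fields a Logstash filter would typically extract.
    static Map<String, String> parse(String line) {
        Map<String, String> doc = new LinkedHashMap<>();
        String[] parts = line.split(" ", 4);
        doc.put("timestamp", parts[0] + " " + parts[1]);
        doc.put("level", parts[2]);
        doc.put("message", parts[3]);
        return doc;
    }

    public static void main(String[] args) {
        Map<String, String> doc =
                parse("2018-12-01 10:15:30 INFO From Producer method[getEmployeeDetails] start");
        System.out.println(doc);
    }
}
```

Each such document is then indexed in Elasticsearch, which is what makes the logs searchable and filterable from Kibana.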

3. Installing ELK

ELK is Java-based.

Before installing ELK, make sure that JAVA_HOME and PATH are set, and that the installation was done with JDK 1.8.

3.1 Elasticsearch

  • The latest version of Elasticsearch can be downloaded from the download page and extracted into any folder
  • It can be executed from the command prompt using bin\elasticsearch.bat
  • By default, it starts at http://localhost:9200

3.2 Kibana

  • The latest version of Kibana can be downloaded from the download page and extracted into any folder
  • It can be executed from the command prompt using bin\kibana.bat
  • After a successful start, Kibana runs on the default port 5601, and the Kibana UI is available at http://localhost:5601

3.3 Logstash

  • The latest version of Logstash can be downloaded from the download page and extracted into any folder
  • Create a file cst_logstash.conf as per the configuration instructions below
  • It can be started from the command prompt using bin/logstash -f cst_logstash.conf

4. Creating a Sample Microservice Component

A microservice needs to be created so that Logstash has API logs to point at.

The listings below show the code of the sample microservice.

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.xyz.app</groupId>
    <artifactId>ArtDemo1001_Rest_Controller_Full_Deployment_Logging</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <!-- Add Spring repositories -->
    <!-- (you don't need this if you are using a .RELEASE version) -->
    <repositories>
        <repository>
            <id>spring-snapshots</id>
            <url>http://repo.spring.io/snapshot</url>
            <snapshots><enabled>true</enabled></snapshots>
        </repository>
        <repository>
            <id>spring-milestones</id>
            <url>http://repo.spring.io/milestone</url>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>spring-snapshots</id>
            <url>http://repo.spring.io/snapshot</url>
        </pluginRepository>
        <pluginRepository>
            <id>spring-milestones</id>
            <url>http://repo.spring.io/milestone</url>
        </pluginRepository>
    </pluginRepositories>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.2.RELEASE</version>
    </parent>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <spring-cloud.version>Dalston.SR3</spring-cloud.version>
    </properties>
    <!-- Add typical dependencies for a web application -->
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
    <!-- Package as an executable jar -->
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>

The pom.xml above configures the dependencies required for the Spring Boot-based project.

EmployeeDAO.java

package com.xyz.app.dao;

import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.stereotype.Repository;

import com.xyz.app.model.Employee;

@Repository
public class EmployeeDAO {

    /** Map is used to replace the database. */
    static public Map<Integer, Employee> mapOfEmloyees = new LinkedHashMap<Integer, Employee>();
    static int count = 10004;

    static {
        mapOfEmloyees.put(10001, new Employee("Jack", 10001, 12345.6, 1001));
        mapOfEmloyees.put(10002, new Employee("Justin", 10002, 12355.6, 1002));
        mapOfEmloyees.put(10003, new Employee("Eric", 10003, 12445.6, 1003));
    }

    /** Returns all the existing employees. */
    public Collection getAllEmployee() {
        return mapOfEmloyees.values();
    }

    /**
     * Gets Employee details using employeeId.
     * Returns an Employee object with data if the employee is found,
     * else returns null.
     */
    public Employee getEmployeeDetailsById(int id) {
        return mapOfEmloyees.get(id);
    }

    /** Creates Employee details. Returns the auto-generated id. */
    public Integer addEmployee(Employee employee) {
        count++;
        employee.setEmployeeId(count);
        mapOfEmloyees.put(count, employee);
        return count;
    }

    /** Updates the Employee details; receives the Employee object and returns the updated details. */
    public Employee updateEmployee(Employee employee) {
        mapOfEmloyees.put(employee.getEmployeeId(), employee);
        return employee;
    }

    /** Deletes the Employee details; receives the employeeId and returns the deleted employee's details. */
    public Employee removeEmployee(int id) {
        Employee emp = mapOfEmloyees.remove(id);
        return emp;
    }
}

The code above represents the DAO layer of the application.

CRUD operations are performed on a Map collection that holds Employee objects, in order to avoid a database dependency and keep the application lightweight.
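The DAO above depends on an Employee model class that is not listed in the article. Judging from the constructor calls such as new Employee("Jack", 10001, 12345.6, 1001), a minimal sketch could look like the following; only name and employeeId are confirmed by the DAO code, while the salary and departmentId field names are assumptions, and in the project the class would live in the com.xyz.app.model package:

```java
// Hypothetical reconstruction of com.xyz.app.model.Employee.
// The salary and departmentId field names are assumed, not taken from the article.
public class Employee {

    private String name;
    private int employeeId;
    private double salary;      // assumed field name
    private int departmentId;   // assumed field name

    // A no-arg constructor is needed so Jackson can bind @RequestBody JSON.
    public Employee() { }

    public Employee(String name, int employeeId, double salary, int departmentId) {
        this.name = name;
        this.employeeId = employeeId;
        this.salary = salary;
        this.departmentId = departmentId;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getEmployeeId() { return employeeId; }
    public void setEmployeeId(int employeeId) { this.employeeId = employeeId; }

    public double getSalary() { return salary; }
    public void setSalary(double salary) { this.salary = salary; }

    public int getDepartmentId() { return departmentId; }
    public void setDepartmentId(int departmentId) { this.departmentId = departmentId; }

    @Override
    public String toString() {
        return "Employee [name=" + name + ", employeeId=" + employeeId
                + ", salary=" + salary + ", departmentId=" + departmentId + "]";
    }
}
```

The setters are what allow EmployeeDAO.addEmployee to overwrite the id with its auto-generated one.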

EmployeeController.java

package com.xyz.app.controller;

import java.util.Collection;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.xyz.app.dao.EmployeeDAO;
import com.xyz.app.model.Employee;

@RestController
public class EmployeeController {

    @Autowired
    private EmployeeDAO employeeDAO;

    public static Logger logger = Logger.getLogger(EmployeeController.class);

    /** Method is used to get all the employee details and return the same. */
    @RequestMapping(value = "emp/controller/getDetails",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<Collection> getEmployeeDetails() {
        logger.info("From Producer method[getEmployeeDetails] start");
        logger.debug("From Producer method[getEmployeeDetails] start");
        Collection listEmployee = employeeDAO.getAllEmployee();
        logger.debug("From Producer method[getEmployeeDetails] end");
        logger.info("From Producer method[getEmployeeDetails] end");
        return new ResponseEntity<Collection>(listEmployee, HttpStatus.OK);
    }

    /**
     * Method finds an employee using employeeId and returns the found Employee.
     * If no employee exists for the employeeId, HttpStatus.NOT_FOUND is returned.
     */
    @RequestMapping(value = "emp/controller/getDetailsById/{id}",
            method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity getEmployeeDetailByEmployeeId(@PathVariable("id") int myId) {
        logger.info("From Producer method[getEmployeeDetailByEmployeeId] start");
        Employee employee = employeeDAO.getEmployeeDetailsById(myId);
        if (employee != null) {
            logger.info("From Producer method[getEmployeeDetailByEmployeeId] end");
            return new ResponseEntity(employee, HttpStatus.OK);
        } else {
            logger.info("From Producer method[getEmployeeDetailByEmployeeId] end");
            return new ResponseEntity(HttpStatus.NOT_FOUND);
        }
    }

    /** Method creates an employee and returns the auto-generated employeeId. */
    @RequestMapping(value = "/emp/controller/addEmp",
            method = RequestMethod.POST,
            consumes = MediaType.APPLICATION_JSON_VALUE,
            produces = MediaType.TEXT_HTML_VALUE)
    public ResponseEntity addEmployee(@RequestBody Employee employee) {
        logger.info("From Producer method[addEmployee] start");
        logger.debug("From Producer method[addEmployee] start");
        int empId = employeeDAO.addEmployee(employee);
        logger.debug("From Producer method[addEmployee] end");
        logger.info("From Producer method[addEmployee] end");
        return new ResponseEntity("Employee added successfully with id:" + empId, HttpStatus.CREATED);
    }

    /**
     * Method updates an employee and returns the updated Employee.
     * If the Employee to be updated does not exist, null is returned with
     * HttpStatus.INTERNAL_SERVER_ERROR as status.
     */
    @RequestMapping(value = "/emp/controller/updateEmp",
            method = RequestMethod.PUT,
            consumes = MediaType.APPLICATION_JSON_VALUE,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity updateEmployee(@RequestBody Employee employee) {
        logger.info("From Producer method[updateEmployee] start");
        if (employeeDAO.getEmployeeDetailsById(employee.getEmployeeId()) == null) {
            Employee employee2 = null;
            return new ResponseEntity(employee2, HttpStatus.INTERNAL_SERVER_ERROR);
        }
        System.out.println(employee);
        employeeDAO.updateEmployee(employee);
        logger.info("From Producer method[updateEmployee] end");
        return new ResponseEntity(employee, HttpStatus.OK);
    }

    /**
     * Method deletes an employee using employeeId and returns the deleted Employee.
     * If the Employee to be deleted does not exist, null is returned with
     * HttpStatus.INTERNAL_SERVER_ERROR as status.
     */
    @RequestMapping(value = "/emp/controller/deleteEmp/{id}",
            method = RequestMethod.DELETE,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity deleteEmployee(@PathVariable("id") int myId) {
        logger.info("From Producer method[deleteEmployee] start");
        if (employeeDAO.getEmployeeDetailsById(myId) == null) {
            Employee employee2 = null;
            return new ResponseEntity(employee2, HttpStatus.INTERNAL_SERVER_ERROR);
        }
        Employee employee = employeeDAO.removeEmployee(myId);
        System.out.println("Removed: " + employee);
        logger.info("From Producer method[deleteEmployee] end");
        return new ResponseEntity(employee, HttpStatus.OK);
    }
}

The code above represents the controller layer of the application, with its request handlers.

The request handlers invoke the DAO layer functions and perform the CRUD operations.

application.properties

server.port=8090
logging.level.com.xyz.app.controller.EmployeeController=DEBUG
# name of the log file to be created
# same file will be given as input to logstash
logging.file=app.log
spring.application.name=producer

The code above lists the properties configured for the Spring Boot-based application.

5. Logstash Configuration

As mentioned in section 3.3, a configuration file needs to be created for Logstash.

Logstash will use this configuration file to take its input from the microservice logs.

The logs are transformed into JSON and fed into Elasticsearch.

cst_logstash.conf

input {
  file {
    # If more than one log file from different microservices has to be tracked,
    # then a comma-separated list of log files can be provided
    path => ["PATH-TO-UPDATE/app.log"]
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

The Logstash configuration file above listens to the log file and pushes log messages to Elasticsearch.

Note: change the log path according to your setup.
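The multiline codec in the configuration above decides where one log event ends and the next begins: a line that starts with a timestamp opens a new event, while any other line (for example a stack-trace line) is appended to the previous event, because negate is true and what is previous. The grok pattern ^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.* corresponds roughly to the plain regex in this illustrative sketch:

```java
import java.util.regex.Pattern;

public class MultilineBoundary {

    // Plain-regex approximation of the grok pattern
    // "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
    static final Pattern EVENT_START =
            Pattern.compile("^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}.*");

    // Returns true when the line begins a new log event.
    static boolean startsNewEvent(String line) {
        return EVENT_START.matcher(line).matches();
    }

    public static void main(String[] args) {
        // A timestamped application log line starts a new event...
        System.out.println(startsNewEvent("2018-12-01 10:15:30 ERROR Something failed"));
        // ...while a stack-trace continuation line is folded into the previous event.
        System.out.println(startsNewEvent("    at com.xyz.app.dao.EmployeeDAO.getAllEmployee(EmployeeDAO.java:25)"));
    }
}
```

This is what keeps a multi-line Java stack trace together as a single document in Elasticsearch instead of one document per line.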

6. Execution and Output

6.1 Executing the Microservice to Produce Logs

The Spring Boot application can be deployed using mvn clean install spring-boot:run, and the following URL can then be accessed from a browser or a Postman client: http://localhost:8090/emp/controller/getDetails.

This hits the microservice and generates logs on the microservice side.

These logs are read by Logstash and pushed into Elasticsearch; they can then be viewed in Kibana by following the next steps.

6.2 Steps to View the Output in Kibana

  • Configure the index in the management console. Use logstash-* as the default index pattern value. Open the link http://localhost:5601/app/kibana#/management/kibana/index?_g=() and a screen like the following is displayed:

Kibana index creation - 1

  • Click Next, and the following screen is displayed:

Kibana index creation - 2

Select the option highlighted above and click "Create index pattern".

  • After selecting the "Discover" option from the left-hand menu, the page is displayed as follows:

Viewing logs in Kibana - 1

  • The logs can be visualized and filtered based on the properties highlighted above. On hovering over any property, an "Add" button for that property is displayed. Here, after selecting the message property, the view looks as follows:

Viewing logs in Kibana - 2

7. References

  • https://logz.io/learn/complete-guide-elk-stack/
  • https://howtodoinjava.com/microservices/elk-stack-tutorial-example/
  • https://dzone.com/articles/logging-with-elastic-stack

8. Download the Eclipse Project

You can download the full source code of this example here: microservice

Translated from: https://www.javacodegeeks.com/2018/12/log-aggregation-using-elk-stack.html

