

Python Logging Explained [Two Posts Are Enough] -- Part 2: loguru

Published: 2025/3/19, by 豆豆

Table of Contents

Chapter 2: The Python loguru Logging Library in Detail

I. Introduction to loguru

II. Log Levels

III. Common loguru Configuration Parameters

1. rotation

2. retention

3. compression

4. format

IV. Testing Common Configurations

1. Rotating by file size

2. Rotating at a fixed interval

3. Rotating at a fixed time each day

4. Cleaning up expired logs

5. Do not configure two rotation modes at once

6. If you must rotate by both size and time

V. Other Notes

1. loguru vs. logging

2. References


Chapter 2: The Python loguru Logging Library in Detail

I. Introduction to loguru

Loguru is a library which aims to make logging in Python enjoyable. Its central concept is that there is one and only one logger: no Handlers, no Formatters, no Filters, just a single add() function.

Usage is as follows:

from loguru import logger

logger.add("file_1.log", rotation="500 MB")    # start a new file once this one exceeds 500 MB
logger.add("file_2.log", rotation="12:00")     # start a new file every day at 12:00
logger.add("file_X.log", retention="10 days")  # delete log files older than 10 days
logger.add("file_Y.log", compression="zip")    # compress rotated files as zip

Parameter descriptions:

  • sink (file-like object, str, pathlib.Path, callable, coroutine function or logging.Handler) – An object in charge of receiving formatted logging messages and propagating them to an appropriate endpoint. [where log output is sent]
  • level (int or str, optional) – The minimum severity level from which logged messages should be sent to the sink. [minimum log level]
  • format (str or callable, optional) – The template used to format logged messages before being sent to the sink. [log message template]
  • filter (callable, str or dict, optional) – A directive optionally used to decide for each logged message whether it should be sent to the sink or not. [log filter]
  • colorize (bool, optional) – Whether the color markups contained in the formatted message should be converted to ansi codes for terminal coloration, or stripped otherwise. If None, the choice is automatically made based on the sink being a tty or not. [whether to colorize output]
  • serialize (bool, optional) – Whether the logged message and its records should be first converted to a JSON string before being sent to the sink. [whether to serialize to JSON]
  • backtrace (bool, optional) – Whether the formatted exception trace should be extended upward, beyond the catching point, to show the full stacktrace which generated the error.
  • diagnose (bool, optional) – Whether the exception trace should display the variables values to ease the debugging. This should be set to False in production to avoid leaking sensitive data.
  • enqueue (bool, optional) – Whether the messages to be logged should first pass through a multiprocess-safe queue before reaching the sink. This is useful while logging to a file through multiple processes. This also has the advantage of making logging calls non-blocking.
  • catch (bool, optional) – Whether errors occurring while sink handles logs messages should be automatically caught. If True, an exception message is displayed on sys.stderr but the exception is not propagated to the caller, preventing your app from crashing.
  • **kwargs – Additional parameters that are only valid to configure a coroutine or file sink.
  • rotation (str, int, datetime.time, datetime.timedelta or callable, optional) – A condition indicating whenever the current logged file should be closed and a new one started. [rotation condition]
  • retention (str, int, datetime.timedelta or callable, optional) – A directive filtering old files that should be removed during rotation or end of program. [removal of expired logs]
  • compression (str or callable, optional) – A compression or archive format to which log files should be converted at closure. [compression format]
  • delay (bool, optional) – Whether the file should be created as soon as the sink is configured, or delayed until first logged message. It defaults to False.
  • mode (str, optional) – The opening mode as for built-in open() function. It defaults to "a" (open the file in appending mode).
  • buffering (int, optional) – The buffering policy as for built-in open() function. It defaults to 1 (line buffered file).
  • encoding (str, optional) – The file encoding as for built-in open() function. If None, it defaults to locale.getpreferredencoding().
II. Log Levels

    Level      Value   Method
    TRACE      5       logger.trace()
    DEBUG      10      logger.debug()
    INFO       20      logger.info()
    SUCCESS    25      logger.success()
    WARNING    30      logger.warning()
    ERROR      40      logger.error()
    CRITICAL   50      logger.critical()

III. Common loguru Configuration Parameters

1. rotation

The rotation check runs before each message is written. If a file with the same name as the one about to be created already exists, the existing file is renamed by appending the date to its basename, so it is not overwritten.

Examples of accepted values:

"100 MB", "0.5 GB", "1 month 2 weeks", "4 days", "10h", "monthly", "18:00", "sunday", "w0", "monday at 12:00"

2. retention

The maximum retention period for old files, for example:

"1 week, 3 days", "2 months"

3. compression

The compression format for rotated files, for example:

"gz", "bz2", "xz", "lzma", "tar", "tar.gz", "tar.bz2", "tar.xz", "zip"

4. format

    Key         Official description                                    Note
    elapsed     The time elapsed since the start of the program         elapsed time
    exception   The formatted exception if any, None otherwise          -
    extra       The dict of attributes bound by the user (see bind())   -
    file        The file where the logging call was made                source file
    function    The function from which the logging call was made       calling function
    level       The severity used to log the message                    log level
    line        The line number in the source code                      line number
    message     The logged message (not yet formatted)                  message text
    module      The module where the logging call was made              module
    name        The __name__ where the logging call was made            __name__
    process     The process in which the logging call was made          process id or name (id by default)
    thread      The thread in which the logging call was made           thread id or name (id by default)
    time        The aware local time when the logging call was made     timestamp

IV. Testing Common Configurations

1. Rotating by file size

Rotate the log by file size, compressing each rotated file as tar.gz:

logger.add('log6/yuhceng.log', rotation="500KB", compression="tar.gz")

2. Rotating at a fixed interval

With the interval set to 4 seconds, the log rotates every 4 seconds:

logger.add("log7/yuhceng.log", rotation="4s")

3. Rotating at a fixed time each day

Rotate every day at 01:15:

logger.add("log8/yuhceng.log", rotation="01:15")

4. Cleaning up expired logs

Rotate every 5 seconds and keep only the last 1 minute of logs:

logger.add("log8/yuhceng.log", rotation="5s", retention="1 min")

5. Do not configure two rotation modes at once

Rotation can be driven by time or by file size, but only one mode should be configured per file. If the same file is added twice with different rotation settings, each message is logged once per handler, so the output contains duplicates and rotation no longer behaves as expected.

import time

from loguru import logger

logger.add('log/yuhceng.log', rotation="5MB")
logger.add("log/yuhceng.log", rotation="2h")  # second handler on the same file

for i in range(1000000000000000000000000):
    logger.info("yucheng{}".format(str(i)))
    time.sleep(0.1)

As the figure shows, the first file is larger than all the others, which means the log was not rotated as intended.

6. If you must rotate by both size and time

Although the setups above cover most needs, some systems insist on rotating by size and time at the same time. How can that be done?

The file sink accepts either a size limit or a time limit but, for simplicity's sake, not both at once. A custom rotation function, however, can support more advanced scenarios:

import datetime

from loguru import logger


class Rotator:
    def __init__(self, *, size, at):
        now = datetime.datetime.now()
        self._size_limit = size
        self._time_limit = now.replace(hour=at.hour, minute=at.minute, second=at.second)
        if now >= self._time_limit:
            # The current time is already past the target time so it would rotate already.
            # Add one day to prevent an immediate rotation.
            self._time_limit += datetime.timedelta(days=1)

    def should_rotate(self, message, file):
        file.seek(0, 2)
        if file.tell() + len(message) > self._size_limit:
            return True
        if message.record["time"].timestamp() > self._time_limit.timestamp():
            self._time_limit += datetime.timedelta(days=1)
            return True
        return False


# Rotate the file if it exceeds 500 MB, or at 04:00 every day
rotator = Rotator(size=5e+8, at=datetime.time(4, 0, 0))
logger.add("file.log", rotation=rotator.should_rotate)

As you can see, the file produced by the 04:00 rotation is the smallest, because that rotation was triggered by the fixed-time condition rather than the size limit.
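The rotation predicate can be sanity-checked without loguru by passing stand-in objects for the message and file. FakeMessage and make_message below are illustrative stubs, not loguru API; the Rotator class is repeated so the sketch is self-contained:

```python
import datetime
import io


class Rotator:
    def __init__(self, *, size, at):
        now = datetime.datetime.now()
        self._size_limit = size
        self._time_limit = now.replace(hour=at.hour, minute=at.minute, second=at.second)
        if now >= self._time_limit:
            self._time_limit += datetime.timedelta(days=1)

    def should_rotate(self, message, file):
        file.seek(0, 2)
        if file.tell() + len(message) > self._size_limit:
            return True
        if message.record["time"].timestamp() > self._time_limit.timestamp():
            self._time_limit += datetime.timedelta(days=1)
            return True
        return False


class FakeMessage(str):
    """Stand-in for a loguru message: a string carrying a .record dict."""


def make_message(text, when):
    m = FakeMessage(text)
    m.record = {"time": when}
    return m


rot = Rotator(size=100, at=datetime.time(0, 0, 0))

# 10 bytes into an empty file, timestamp before the limit: no rotation
print(rot.should_rotate(make_message("x" * 10, datetime.datetime.now()), io.BytesIO()))
# 200 bytes exceeds the 100-byte size limit: rotate
print(rot.should_rotate(make_message("x" * 200, datetime.datetime.now()), io.BytesIO()))
```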

V. Other Notes

1. loguru vs. logging

(1) Installation: logging ships with the standard library and needs no installation; loguru must be installed.

(2) Configuration: loguru has a single global logger, whereas logging can create multiple loggers distinguished by name.

(3) Configuring loguru is considerably simpler than configuring logging.

(4) Support for other modules, meaning imported third-party modules: for example, if the system imports peewee, which uses the logging module internally, logging can be configured to print peewee's logs, whereas loguru does not pick them up out of the box.

    Aspect                     logging                  loguru
    Loggers                    multiple, by name        one global logger
    Needs installation         no                       yes
    Custom handlers            supported                supported
    Customization effort       high; manual control     low; just specify the rotation condition
    Logs from other modules    supported                not supported out of the box

2. References

https://loguru.readthedocs.io/en/stable/overview.html
