Logback-based system log configuration and FAQ
System log configuration
In the Java ecosystem, frameworks generally log through the slf4j facade; the main implementations are logback and log4j. We use logback, so this guide covers how our logback-based projects are configured.
Customizing the log storage path
To customize where logs are stored, set the logPath property, e.g. in the config center or in bootstrap.properties: logPath=d:\logs
Customizing the log rollover path
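The logPath property is bridged into logback with a `<springProperty>` element (this is the mechanism the full template at the end of this article uses); the `app.log` file name below is purely illustrative:

```xml
<!-- bootstrap.properties (or the config center): logPath=d:\logs -->
<springProperty scope="context" name="LOG_PATH" source="logPath" />
<!-- LOG_PATH can then be referenced anywhere in logback.xml -->
<file>${LOG_PATH}/app.log</file>
```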
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/i18nMS-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
As shown above, a custom rollover path is configured through fileNamePattern; the rolled-over file ends up looking like d:\logs\2021-01-12\i18nMS-2021-01-12.log.gz. Because the pattern ends in .gz, the archived log is compressed as well.
Note: for rollover, rollingPolicy must be ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy or ch.qos.logback.core.rolling.TimeBasedRollingPolicy.
Customizing log retention
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/Swagger-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
As shown above, maxHistory inside rollingPolicy sets the maximum retention period; if it is not configured, archives are kept forever by default.
Note: for rollover, rollingPolicy must be ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy or ch.qos.logback.core.rolling.TimeBasedRollingPolicy.
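Besides the retention period, logback can also cap total archive disk usage with `<totalSizeCap>`, which takes effect only when maxHistory is also set. A sketch (the 2GB cap is an illustrative value):

```xml
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/Swagger-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    <maxHistory>7</maxHistory>
    <!-- once total archive size exceeds the cap, the oldest archives are deleted -->
    <totalSizeCap>2GB</totalSizeCap>
</rollingPolicy>
```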
Rolling logs over by file size
<rollingPolicy
class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/${SERVICE_CODE}.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxFileSize>200MB</maxFileSize>
</rollingPolicy>
As shown above, switch rollingPolicy to ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy, include "%i" in fileNamePattern, and set the rollover size. The example above rolls over at 200MB per file; with SERVICE_CODE resolving to i18nMS, the rolled file looks like d:\logs\2021-01-12\i18nMS-2021-01-12.0.log.gz.
Dynamically enabling and disabling log output
We once had a requirement to capture the logs produced while a service was starting up, mainly error logs, so that a failed startup could be diagnosed quickly. We used the following configuration:
<!-- Service startup log -->
<appender name="EC_GENERATOR_SERVICE_LOG_FILE" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<Key>serviceLogName</Key>
<DefaultValue>serviceLogName</DefaultValue>
</discriminator>
<sift>
<appender name="EC_SERVCIE_LOG_FILE" class="ch.qos.logback.core.FileAppender">
<file>${LOG_PATH}/publish/${serviceLogName}.log</file>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level %logger{80} [%X{EC.Entity}] - %msg %ex%n</Pattern>
</encoder>
</appender>
</sift>
</appender>
<root level="INFO">
<appender-ref ref="EC_GENERATOR_SERVICE_LOG_FILE" />
</root>
And the startup class adds the following:
// Generate a task id before startup and pass it into the logging system via MDC
String taskId = System.getProperty("task.id");
MDC.put("serviceLogName", taskId + "-${bootName}");
SpringApplication.run(${bootName}.class, args);
// Once startup has finished, detach the startup-log appender from the root logger
LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
loggerContext.getLogger("root").detachAppender("EC_GENERATOR_SERVICE_LOG_FILE");
Before startup we generate a taskId and pass it into the logging system through serviceLogName; when publishing finishes, detaching EC_GENERATOR_SERVICE_LOG_FILE through the LoggerContext removes the appender. And since appenders can be removed this way, they can just as well be added or modified at runtime.
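Conversely, an appender can also be attached at runtime through the same LoggerContext API. A minimal sketch, assuming logback-classic is on the classpath; the appender name, file path, and pattern below are illustrative, not part of our standard configuration:

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class DynamicAppender {
    public static void attach() {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Build and start an encoder for the new appender
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level %logger{80} - %msg%n");
        encoder.start();

        // Build and start the appender itself
        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(context);
        appender.setName("RUNTIME_LOG_FILE");   // illustrative name
        appender.setFile("logs/runtime.log");   // illustrative path
        appender.setEncoder(encoder);
        appender.start();

        // Attach to the root logger, mirroring the detachAppender call above
        context.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(appender);
    }
}
```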
Dynamically changing the log level
The approach above is quite flexible and can also change log levels, but it is intrusive to business code. Here are two ways to change log levels dynamically without touching business code.
- Option 1: add logging.level.org.hibernate.SQL=WARN in the config center or a configuration file; this dynamically changes the log level of org.hibernate.SQL.
- Option 2: add spring-boot-starter-actuator as a project dependency and enable the loggers endpoint (our projects enable it by default). Then send a POST request, e.g. with postman, to http://[ip]:[port]/actuator/loggers/org.hibernate.SQL with the following body:
{
    "configuredLevel": "WARN"
}
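The same change can be made with curl; replace [ip] and [port] with your service's host and port:

```shell
# Set org.hibernate.SQL to WARN via the Actuator loggers endpoint
curl -X POST "http://[ip]:[port]/actuator/loggers/org.hibernate.SQL" \
  -H "Content-Type: application/json" \
  -d '{"configuredLevel": "WARN"}'
```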
Basic logback.xml template
Finally, here is a complete logback template; it covers most of the configuration discussed above.
<#noparse><?xml version="1.0" encoding="UTF-8"?>
<configuration>
<!-- Path where logs are stored -->
<springProperty scope="context" name="LOG_PATH" source="logPath" />
<conversionRule conversionWord="clr"
converterClass="org.springframework.boot.logging.logback.ColorConverter"/>
<conversionRule conversionWord="wex"
converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter"/>
<conversionRule conversionWord="wEx"
converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter"/>
<springProperty scope="context" name="SERVER_PORT" source="server.port"/>
<springProperty scope="context" name="SERVER_ADDRESS" source="server.address"/>
<springProperty scope="context" name="SERVICE_CODE" source="serviceCode"/>
<property name="CONSOLE_LOG_PATTERN"
value="${CONSOLE_LOG_PATTERN:-%clr(%d{HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}" />
<property name="FILE_LOG_PATTERN"
value="${FILE_LOG_PATTERN:-%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${SERVER_ADDRESS}:${SERVER_PORT} ${PID:- } --- [%t] %-40.40logger{39} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}" />
<!-- Console output -->
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<encoder>
<pattern>${CONSOLE_LOG_PATTERN}</pattern>
</encoder>
</appender>
<!-- File output -->
<appender name="allFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/${SERVICE_CODE}.log</file>
<filter class="com.demo.orchid.ec.cache.EcCacheLogFilter"/>
<rollingPolicy
class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/${SERVICE_CODE}.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxFileSize>200MB</maxFileSize>
</rollingPolicy>
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<!-- Error file output -->
<appender name="errorFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/${SERVICE_CODE}-error.log</file>
<rollingPolicy
class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/${SERVICE_CODE}-error.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
</rollingPolicy>
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
<append>true</append>
<!-- This file records ERROR level only -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<appender name="REDIS_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/redisMS.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/redisMS-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level ${SERVER_ADDRESS}:${SERVER_PORT} %-28.28thread %-64.64logger{64} %X{medic.eventCode} %msg %ex%n</Pattern>
</encoder>
</appender>
<appender name="I18N_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/i18nMS.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/i18nMS-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level ${SERVER_ADDRESS}:${SERVER_PORT} %-28.28thread %-64.64logger{64} %X{medic.eventCode} %msg %ex%n</Pattern>
</encoder>
</appender>
<appender name="KAFKA_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/KafkaMS.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/KafkaMS-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level ${SERVER_ADDRESS}:${SERVER_PORT} %-28.28thread %-64.64logger{64} %X{medic.eventCode} %msg %ex%n</Pattern>
</encoder>
</appender>
<appender name="SWAGGER_LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/Swagger.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_PATH}/%d{yyyy-MM-dd}/Swagger-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<!-- keep 7 days' worth of history -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level ${SERVER_ADDRESS}:${SERVER_PORT} %-28.28thread %-64.64logger{64} %X{medic.eventCode} %msg %ex%n</Pattern>
</encoder>
</appender>
<!-- Service startup log -->
<appender name="EC_GENERATOR_SERVICE_LOG_FILE" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<Key>serviceLogName</Key>
<DefaultValue>serviceLogName</DefaultValue>
</discriminator>
<sift>
<appender name="EC_SERVCIE_LOG_FILE" class="ch.qos.logback.core.FileAppender">
<file>${LOG_PATH}/publish/${serviceLogName}.log</file>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] %-5level %logger{80} [%X{EC.Entity}] - %msg %ex%n</Pattern>
</encoder>
</appender>
</sift>
</appender>
<logger level="INFO" additivity="false" name="org.redisson">
<appender-ref ref="REDIS_LOG_FILE" />
</logger>
<logger level="INFO" additivity="false" name="com.demo.demo2.framework.cloud.i18n.resource.utils.MessageResourceWrapper">
<appender-ref ref="I18N_LOG_FILE" />
</logger>
<logger level="OFF" additivity="false" name="org.hibernate.orm.deprecation" >
<appender-ref ref="allFile" />
</logger>
<!-- Hibernate SQL logging -->
<logger level="DEBUG" additivity="false" name="org.hibernate.SQL">
<appender-ref ref="allFile" />
</logger>
<logger level="TRACE" additivity="false" name="org.hibernate.type.descriptor.sql.BasicBinder">
<appender-ref ref="allFile" />
</logger>
<logger level="DEBUG" additivity="false" name="com.demo.license">
<appender-ref ref="KAFKA_LOG_FILE" />
</logger>
<logger level="ERROR" additivity="false" name="org.apache.kafka">
<appender-ref ref="KAFKA_LOG_FILE" />
</logger>
<logger level="ERROR" additivity="false" name="org.springframework.kafka">
<appender-ref ref="KAFKA_LOG_FILE" />
</logger>
<logger level="WARN" additivity="false" name="org.springframework.context.support">
<appender-ref ref="KAFKA_LOG_FILE" />
</logger>
<logger level="ERROR" additivity="false" name="springfox.documentation.spring.web">
<appender-ref ref="SWAGGER_LOG_FILE" />
</logger>
<logger level="OFF" additivity="false" name="com.alibaba.nacos.client.config.impl.ClientWorker">
<appender-ref ref="allFile" />
</logger>
<logger level="ERROR" additivity="false" name="com.alibaba.nacos.client.naming">
<appender-ref ref="allFile" />
</logger>
<!-- In production, set this level appropriately so the log volume does not grow too large or hurt performance -->
<root level="INFO">
<appender-ref ref="errorFile" />
<appender-ref ref="allFile" />
<appender-ref ref="console" />