A Comprehensive Guide to Logging in Python

Configuring your logging system

Flask recommends that you use the logging.config.dictConfig() method to
overwrite the default configurations. Here is an example:

from flask import Flask
from logging.config import dictConfig

. . .

app = Flask(__name__)

Let’s take a closer look at this configuration. First of all, the version key
represents the schema version and, at the time of writing, the only valid option
is 1. Having this key allows the schema format to evolve in the future while
maintaining backward compatibility.

Next, the formatters key is where you specify formatting patterns for your log
records. In this example, only a default formatter is defined. To define a
format, you need to use
LogRecord attributes,
which always start with a % symbol.

For example, %(asctime)s is the human-readable time at which the log record was
created (the trailing s indicates that the attribute is rendered as a string).
%(levelname)s is the log level, %(module)s is the name of the module that
emitted the message, and finally, %(message)s is the message itself.

Inside the handlers key, you can create different handlers for your loggers.
Handlers are used to push log records to various destinations. In this case, a
console handler is defined, which uses the logging.StreamHandler class to
push messages to the standard output. Also, notice that this handler uses
the default formatter you just defined.

One last thing you should notice in this example is that the configurations are
defined before the application (app) is initialized. It is recommended to
configure logging behavior as early as possible. If app.logger is accessed
before logging is configured, Flask will create a default handler instead, which
may conflict with your configuration.
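Putting these pieces together, a configuration along the lines described above might look like the following sketch (the format string and the root logger's level are illustrative choices, not requirements):

```python
from logging.config import dictConfig

# A sketch of the configuration described above; the format string and the
# root logger's level are illustrative choices.
dictConfig(
    {
        "version": 1,  # currently the only valid schema version
        "formatters": {
            "default": {
                "format": "[%(asctime)s] %(levelname)s in %(module)s: %(message)s",
            }
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",  # streams records to the console
                "formatter": "default",
            }
        },
        "root": {"level": "DEBUG", "handlers": ["console"]},
    }
)
```

With this in place, creating the Flask app afterwards ensures app.logger picks up the configured handlers instead of creating its own.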

Centralizing your logs in the cloud

After your application has been deployed to production, it will start to
generate logs, which may be stored on various servers. It is very inconvenient
to log into each server just to check some log records. In such cases, it
is probably better to use a cloud-based log management system such as
Logtail, so that you can manage, monitor and
analyze all your log records together.

To use Logtail in your Flask application, first make sure you have registered an
account, then go to the Sources page and click the Connect source
button.

Next, give your source a name, and remember to choose Python as your platform.

Install the logtail-python package:

pip install logtail-python
Collecting logtail-python
 Downloading logtail_python-0.1.3-py2.py3-none-any.whl (8.0 kB)
. . .
Installing collected packages: msgpack, urllib3, idna, charset-normalizer, certifi, requests, logtail-python
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.0 idna-3.3 logtail-python-0.1.3 msgpack-1.0.4 requests-2.28.1 urllib3-1.26.11

Set up the LogtailHandler like this:

. . .
dictConfig(
    {
        "version": 1,
        "formatters": {
            . . .
        },
        "handlers": {
            . . .
        },
    }
)

. . .

app.logger.debug("A debug message")

This time when you run the above code, your log messages will be sent to
Logtail. Go to the Live tail page.
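For reference, the elided handlers entry above is where the Logtail handler would be registered. The class path and source_token parameter below follow logtail-python's usual setup, but treat the exact names as an assumption to verify against the package's own documentation:

```python
"handlers": {
    "console": {. . .},
    "logtail": {
        "class": "logtail.LogtailHandler",  # assumed class path from logtail-python
        "formatter": "default",
        "source_token": "your-source-token-here",  # from the Logtail source you created
    }
},
```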

Understanding log levels

Each of these log levels has a corresponding method, which allows you to send a
log entry with that log level. For instance:

. . .
@app.route("/")
def hello():

    app.logger.debug("A debug message")
    app.logger.info("An info message")
    app.logger.warning("A warning message")
    app.logger.error("An error message")
    app.logger.critical("A critical message")

    return "Hello, World!"

However, when you run this code, only messages with a log level of WARNING or
higher will be logged. That is because you haven’t configured this logger yet,
so Flask uses the default configuration, which drops the DEBUG and INFO
messages.

Remember to restart the server before making a request to the / route:

curl http://127.0.0.1:5000/
[2022-07-18 11:47:39,589] WARNING in app: A warning message
[2022-07-18 11:47:39,590] ERROR in app: An error message
[2022-07-18 11:47:39,590] CRITICAL in app: A critical message

In the next section, we will discuss how to override the default Flask logging
configurations so that you can customize its behavior according to your needs.

Default Logging Configuration

private final Logger LOGGER = LoggerFactory.getLogger(this.getClass());

@RequestMapping("/")
public String home(Map<String, Object> model) {

	LOGGER.debug("This is a debug message");
	LOGGER.info("This is an info message");
	LOGGER.warn("This is a warn message");
	LOGGER.error("This is an error message");

	model.put("message", "HowToDoInJava Reader !!");
	return "index";
}

Start the application. Access the application in the browser and verify log messages in the console.

2017-03-02 23:33:51.318  INFO 3060 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : info log statement printed
2017-03-02 23:33:51.319  WARN 3060 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : warn log statement printed
2017-03-02 23:33:51.319 ERROR 3060 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : error log statement printed
  • The debug log message is not present because the default logging level is INFO.
  • Spring Boot uses a fixed default log pattern configured in its base configuration files.
%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{${LOG_LEVEL_PATTERN}} %clr{${sys:PID}}{magenta} 
%clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}

The above pattern prints these listed log message parts with respective color coding applied:

  • Date and Time — Millisecond precision.
  • Log Level — ERROR, WARN, INFO, DEBUG or TRACE.
  • Process ID.
  • A --- separator to distinguish the start of the actual log message.
  • Thread name — Enclosed in square brackets (may be truncated for console output).
  • Logger name — This is usually the source class name (often abbreviated).
  • The log message

Customizing the log format

Beyond the log message itself, it is essential to include as much diagnostic
information as possible about the context surrounding each event being logged.
Standardizing the structure of your log entries makes it much easier to parse
them and extract only the necessary details. At a minimum, each log entry should
contain the following properties:

  1. Timestamp: indicating the time that the event being logged occurred.
  2. Log level: indicating the severity of the event.
  3. Message: describing the details of the event.

These properties allow for basic filtering by log management tools so you must
ensure they are present in all your Python log entries. At the moment, the
timestamp is missing from our examples, which prevents us from knowing when each
log entry was created. Let’s fix this by changing the log format through the
logging.basicConfig() method as shown below:

import logging

logging.basicConfig(format="%(levelname)s | %(asctime)s | %(message)s")

logging.warning("Something bad is going to happen")

  • %(levelname)s: the log level name (INFO, DEBUG, WARNING, etc).
  • %(asctime)s: human-readable time indicating when the log was created.
  • %(message)s: the log message.

WARNING | 2023-02-05 11:41:31,862 | Something bad is going to happen

You can also customize the timestamp itself through the datefmt argument:

import logging

logging.basicConfig(
    format="%(levelname)s | %(asctime)s | %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

logging.warning("Something bad is going to happen")

WARNING | 2023-02-05T11:45:31Z | Something bad is going to happen

The datefmt argument accepts the same directives as the
time.strftime()
method. We recommend using the
ISO-8601 format shown above because it
is widely recognized and supported which makes it easy to process and sort
timestamps in a consistent manner.
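Since datefmt uses strftime directives, you can preview a format directly with time.strftime before wiring it into basicConfig:

```python
import time

# Render the Unix epoch with the ISO-8601-style directives used above.
stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(0))
print(stamp)  # 1970-01-01T00:00:00Z
```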

Please
refer to the LogRecord documentation
to examine all the available attributes that you can use to format your log
entries. Here’s another example that uses a few more of these attributes to
customize the log format:

import logging

logging.basicConfig(
    format="%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d >>> %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

logging.warning("system disk is 85% full")
logging.error("unexpected error")

root: 2023-02-05T14:11:56Z | WARNING | example.py:8 | 428223 >>> system disk is 85% full
root: 2023-02-05T14:11:56Z | ERROR | example.py:9 | 428223 >>> unexpected error

Setting up custom loggers

So far in this tutorial, we’ve used the default logger (named root) to write
logs in the examples. The default logger is accessible and configurable through
module-level methods on the logging module (such as logging.info(),
logging.basicConfig()). However, it is generally considered a bad idea to
log in this manner for a number of reasons:

  1. Namespace collisions: If multiple modules in your application use the
    root logger, you risk having messages from different modules logged under the
    same name, which can make it more difficult to understand where log messages
    are coming from.

  2. Lack of control: When using the root logger, it can be difficult to
    control the log level for different aspects of your application. This can
    lead to logging too much information or not enough information, depending on
    your needs.

  3. Loss of context: Log messages generated by the root logger may not
    contain enough information to provide context about where the log message
    came from, which can make it difficult to determine the source of issues in
    your application.

In general, it’s best to avoid using the root logger and instead create a
separate logger for each module in your application. This allows you to easily
control the logging level and configuration for each module, and provide better
context for log messages. Creating a new custom logger can be achieved by
calling the getLogger() method as shown below:

import logging

logger = logging.getLogger("example")

logger.info("An info")
logger.warning("A warning")

The getLogger() method returns a
Logger object
whose name is set to the specified name argument (example in this case). A
Logger provides all the functions for you to log messages in your application.
Unlike the root logger, its name is not included in the log output by default.
In fact, we don’t even get the log level, only the log message is present.
However, it defaults to logging to the standard error just like the root logger.

To change the output and behaviour of a custom logger, you have to use the
Formatter
and Handler classes
provided by the logging module. The Formatter does what you’d expect; it helps
with formatting the output of the logs, while the Handler specifies the log
destination which could be the console, a file, an HTTP endpoint, and more. We
also have
Filter objects
which provide sophisticated filtering capabilities for your Loggers and
Handlers.
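As a quick taste of Filter objects, here is a minimal sketch (the filter class and logger names are made up for illustration) that drops records whose message mentions health checks:

```python
import io
import logging

class DropHealthChecks(logging.Filter):
    """Hypothetical filter: reject records that mention health checks."""

    def filter(self, record):
        # Returning a falsy value discards the record.
        return "health" not in record.getMessage()

buffer = io.StringIO()
handler = logging.StreamHandler(stream=buffer)
handler.addFilter(DropHealthChecks())

logger = logging.getLogger("example.filtered")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("GET /health 200")  # rejected by the filter
logger.info("GET /orders 200")  # passes through to the handler

print(buffer.getvalue())  # only the /orders line was emitted
```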

Understanding log levels in Python

Log levels define the severity of the event that is
being logged. They convey implicit meaning about the program state when the
message was recorded, which is crucial when sifting through large logs for
specific events. For example, a message logged at the INFO level indicates a
normal and expected event, while one logged at the ERROR level signifies that
some unexpected error has occurred.

Each log level in Python is associated with a number (from 10 to 50) and has a
corresponding module-level method in the logging module as demonstrated in the
previous example. The available log levels in the logging module are listed
below in increasing order of severity:

  1. DEBUG (10): used to log messages that are useful for debugging.
  2. INFO (20): used to log events within the parameters of expected program
    behavior.
  3. WARNING (30): used to log unexpected events which may impede future program
    function but are not severe enough to be an error.
  4. ERROR (40): used to log unexpected failures in the program. Often, an
    exception needs to be raised to avoid further failures, but the program may
    still be able to run.
  5. CRITICAL (50): used to log severe errors that can cause the application to
    stop running altogether.

It’s important to always use the most appropriate log level so that you can
quickly find the information you need. For instance, logging a message at the
WARNING level will help you find potential problems that need to be
investigated, while logging at the ERROR or CRITICAL level helps you
discover problems that need to be rectified immediately. If an inappropriate log
level is used, you will likely miss important events and your application will
be worse off as a result.

By default, the logging module will only produce records for events that have
been logged at a severity level of WARNING and above. This is why the
debug() and info() messages were omitted in the previous example since they
are less severe than WARNING. You can change this behavior using the
logging.basicConfig() method as demonstrated below:

import logging

logging.basicConfig(level=logging.INFO)

logging.debug("A debug message")
logging.info("An info message")
logging.warning("A warning message")
logging.error("An error message")
logging.critical("A critical message")

Ensure that the call to logging.basicConfig() is placed before any methods such
as info(), warning(), and others are used. It should also be called only once,
as it is a one-off configuration facility; if it is called multiple times, only
the first call will have an effect.

When you execute the program now, you will observe the output below:

INFO:root:An info message
WARNING:root:A warning message
ERROR:root:An error message
CRITICAL:root:A critical message

Since we’ve set the minimum level to INFO, we are now able to view the log
message produced by logging.info() in addition to levels with a greater
severity, while the DEBUG message remains suppressed.

Setting a default log level in the manner shown above is useful for controlling
the volume of logs generated by your application. For example, during
development you could set it to DEBUG to see all the log messages produced by
the application, while production environments could use INFO or WARNING.
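One common way to switch levels per environment is to map a string (such as the value of an environment variable) onto the numeric level before calling basicConfig. The parse_level helper below is hypothetical, not part of the logging module:

```python
import logging

def parse_level(name, default=logging.WARNING):
    """Translate a level name like 'debug' into its numeric value."""
    value = logging.getLevelName(name.upper())  # returns an int for known names
    return value if isinstance(value, int) else default

# The name could come from e.g. os.environ.get("LOG_LEVEL", "info").
logging.basicConfig(level=parse_level("info"))
```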

Adding contextual data to your logs

Adding contextual variables to your log messages can be done by using a
formatting directive in the log message and passing variable data as arguments:

import sys
import logging


logger = logging.getLogger(__name__)

stdout = logging.StreamHandler(stream=sys.stdout)

fmt = logging.Formatter(
    "%(name)s: %(asctime)s | %(levelname)s | %(filename)s%(lineno)s | %(process)d >>> %(message)s"
)

stdout.setFormatter(fmt)
logger.addHandler(stdout)

logger.setLevel(logging.INFO)

name = "james"
browser = "firefox"

logger.info("user %s logged in with %s", name, browser)

__main__: 2023-02-09 08:37:47,860 | INFO | main.py20 | 161954 >>> user james logged in with firefox

You can also use the newer
formatted string literals (f-strings)
syntax which are easier to read:

. . .
logger.info(f"user {name} logged in with {browser}")

However, note that
the logging module is optimized to use the % formatting style
and the use of f-strings might have an extra cost since formatting will occur
even if the message isn’t logged. So even though f-strings are easier to read,
you should probably stick to the % style to guarantee best performance.
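The deferred formatting of the % style can be seen by guarding an expensive message with isEnabledFor(); expensive_summary below is a hypothetical stand-in for a costly computation:

```python
import logging

logger = logging.getLogger("example.perf")
logger.setLevel(logging.INFO)  # DEBUG is disabled

calls = []

def expensive_summary():
    # Hypothetical stand-in for an expensive computation.
    calls.append(1)
    return "..."

# With %-style arguments, the string interpolation is skipped when the level is off:
logger.debug("state dump: %s", "cheap placeholder")

# An f-string would evaluate expensive_summary() regardless, so guard it:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"state dump: {expensive_summary()}")

print(len(calls))  # 0: the guarded call never ran
```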

Another way to add context to your logs is by passing the extra parameter to
any level method to supply a dictionary of custom attributes to the record. This
is useful in cases where all your application logs need to have certain
information present in the entry, but those details are dependent on the
execution context.

Here’s an example that demonstrates the extra argument:

import sys
import logging


logger = logging.getLogger(__name__)

stdout = logging.StreamHandler(stream=sys.stdout)

fmt = logging.Formatter(
    "%(name)s: %(asctime)s | %(levelname)s | %(filename)s%(lineno)s | %(process)d >>> %(message)s"
)

stdout.setFormatter(fmt)
logger.addHandler(stdout)

logger.setLevel(logging.INFO)

logger.info("Info message", extra={"user": "johndoe", "session_id": "abc123"})
logger.warning("Warning message", extra={"user": "joegage", "session_id": "3fe-uz"})
logger.error("Error message", extra={"user": "domingrange", "session_id": "2fe-a1"})

__main__: 2023-02-09 08:43:30,827 | INFO | main.py18 | 175097 >>> Info message
__main__: 2023-02-09 08:43:30,827 | WARNING | main.py19 | 175097 >>> Warning message
__main__: 2023-02-09 08:43:30,827 | ERROR | main.py20 | 175097 >>> Error message
. . .
fmt = logging.Formatter(
    "%(name)s: %(asctime)s | %(levelname)s | %(filename)s%(lineno)s | %(process)d | %(user)s | %(session_id)s >>> %(message)s"
)
. . .
__main__: 2023-02-09 08:46:07,588 | INFO | main.py18 | 178735 | johndoe | abc123 >>> Info message
__main__: 2023-02-09 08:46:07,588 | WARNING | main.py19 | 178735 | joegage | 3fe-uz >>> Warning message
__main__: 2023-02-09 08:46:07,588 | ERROR | main.py20 | 178735 | domingrange | 2fe-a1 >>> Error message

There are two main things to note when using the extra parameter:

First, the field names used in the extra dictionary should not clash with any
of the
LogRecord attributes
otherwise you will get a KeyError when you run the program:

# the message field here clashes with the `message` LogRecord attribute
logger.info(
    "Info message",
    extra={"user": "johndoe", "session_id": "abc123", "message": "override info message"},
)
. . .
File "/usr/lib64/python3.11/logging/__init__.py", line 1606, in makeRecord
 raise KeyError("Attempt to overwrite %r in LogRecord" % key)

KeyError: "Attempt to overwrite 'message' in LogRecord"

Second, if a custom field is referenced in the log format, every log call must
supply that field through extra, otherwise formatting fails:

# notice that the `user` field is missing here
logger.info("Info message", extra={"session_id": "abc123"})
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.11/logging/__init__.py", line 449, in format
    return self._format(record)
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/logging/__init__.py", line 445, in _format
    return self._fmt % values
           ~~~~~~~~~~^~~~~~~~
KeyError: 'user'

These limitations, especially the second one, can make using the extra
parameter impractical for many use cases but I will show how to get around them
in the next section of this tutorial.

Using Formatters

The logging.Formatter class determines the format of the log entries produced
by a Logger. By default, a Logger will not have a specific format, so all it
outputs is the log message (that is %(message)s as seen in the previous
section). We can create and attach a Formatter object to the Logger to give
it a specific format.

import sys
import logging

logger = logging.getLogger("example")

stdout = logging.StreamHandler(stream=sys.stdout)

fmt = logging.Formatter(
    "%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d >>> %(message)s"
)

stdout.setFormatter(fmt)
logger.addHandler(stdout)
logger.setLevel(logging.INFO)

logger.info("An info")
logger.warning("A warning")

The Formatter class above is configured using the same LogRecord attributes
from our earlier logging.basicConfig() examples. It also accepts some other
options but
this is enough for us to get started.

example: 2023-02-05 21:21:46,634 | INFO | example.py:17 | 653630 >>> An info
example: 2023-02-05 21:21:46,634 | WARNING | example.py:18 | 653630 >>> A warning

Notice that the logger name (example) corresponds to the string argument
passed to the getLogger() method. You can also set the logger name to be the
name of the current module by using the special __name__ variable:

logger = logging.getLogger(__name__)
__main__: 2023-02-05 21:42:50,882 | INFO | example.py:17 | 675470 >>> An info
__main__: 2023-02-05 21:42:50,882 | WARNING | example.py:18 | 675470 >>> A warning

Now, the logger name is registered as __main__ indicating that the records
were logged from the module where execution starts. If the example.py file is
imported in some other file and that file is executed, the __name__ variable
will correspond to the module name (example).

example: 2023-02-05 21:21:46,634 | INFO | example.py:17 | 653630 >>> An info
example: 2023-02-05 21:21:46,634 | WARNING | example.py:18 | 653630 >>> A warning

Coloring Python log output

If you’re logging to the console as we’ve been doing so far, it may be helpful
to color your log output so that the
different fields and levels are easily distinguishable at a glance. Here’s an
example that uses the colorlog package to achieve
log output coloring in Python:

import sys
import logging
import colorlog

logger = logging.getLogger("example")

stdout = colorlog.StreamHandler(stream=sys.stdout)

fmt = colorlog.ColoredFormatter(
    "%(name)s: %(white)s%(asctime)s%(reset)s | %(log_color)s%(levelname)s%(reset)s | %(blue)s%(filename)s:%(lineno)s%(reset)s | %(process)d >>> %(log_color)s%(message)s%(reset)s"
)

stdout.setFormatter(fmt)
logger.addHandler(stdout)

logger.setLevel(logging.DEBUG)

logger.debug("A debug message")
logger.info("An info message")
logger.warning("A warning message")
logger.error("An error message")
logger.critical("A critical message")

Ensure to install the package first before executing the above program:

pip install colorlog

The Logback Configuration File

For Logback configuration through XML, Logback expects a logback.xml or logback-test.xml file on the classpath. In a Spring Boot application, you can put the logback.xml file in the resources folder. If your logback.xml file is outside the classpath, you need to point to its location using the logback.configurationFile system property, like this:

-Dlogback.configurationFile=/path/to/logback.xml

In a logback.xml file, all the configuration options are enclosed within the <configuration> root element. On the root element, you can set the debug="true" attribute to inspect Logback’s internal status. You can also enable auto-scanning of the configuration file by setting the scan="true" attribute. When you do so, Logback scans its configuration file for changes; if it finds any, it automatically reconfigures itself. When auto-scanning is enabled, Logback scans for changes once every minute. You can specify a different scanning period by setting the scanPeriod attribute, with a value specified in units of milliseconds, seconds, minutes or hours, like this:

<configuration debug="true" scan="true" scanPeriod="30 seconds" >
  ...
</configuration>

You can also declare properties whose values can be reused throughout the
configuration file:

<configuration debug="true" scan="true" scanPeriod="30 seconds" >
  <property name="LOG_PATH" value="logs"/>
  <property name="LOG_ARCHIVE" value="${LOG_PATH}/archive"/>
  ...
</configuration>

The configuration code above declares two properties, LOG_PATH and LOG_ARCHIVE whose values represent the paths to store log files and archived log files respectively.

At this point, one Logback element worth mentioning is <timestamp>. This element defines a property according to the current date and time – particularly useful when you log to a file. Using this property, you can create a new log file uniquely named by timestamp at each new application launch. The code to declare a timestamp property is this.

<timestamp key="timestamp-by-second" datePattern="yyyyMMdd'T'HHmmss"/>

Next, we’ll look at how to use each of the declared properties from different appenders.
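As a sketch of how the declared properties might be referenced from a file appender (the appender name and file-name pattern here are illustrative, not from the original walkthrough):

```xml
<configuration>
  <timestamp key="timestamp-by-second" datePattern="yyyyMMdd'T'HHmmss"/>

  <appender name="File" class="ch.qos.logback.core.FileAppender">
    <!-- LOG_PATH and the timestamp property combine into a unique file per launch -->
    <file>${LOG_PATH}/log-${timestamp-by-second}.log</file>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
</configuration>
```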

Prerequisites

Before proceeding with this article, ensure that you have a recent version of
Python 3 installed on your machine. To best
learn the concepts discussed here, you should also create a new Flask project so
that you may try out all the code snippets and examples.

Create a new working directory and change into it with the command below:

mkdir flask-logging && cd flask-logging

Adding contextual information

In a production environment, you should include detailed information about the
logged event so that the logs can help you and your team understand and
troubleshoot possible issues. For example, instead of a simple message such as
Order shipped successfully., you could include the order number, the buyer’s
name, the destination, and so on.

2023-04-25 21:21:26.991 [main] INFO com.example.App - Order shipped successfully. Order number: xxxxx. Buyer name: xxxxx. Destination: xxxxx.

To include contextual information with Log4j, you need to use the
ThreadContext class.

package com.example;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;


public class App {

    protected static final Logger logger = LogManager.getLogger();

    public static void main(String[] args) {

        ThreadContext.put("orderNumber", "xxxxx");

        logger.info("Order shipped successfully.");

        ThreadContext.clearAll();
    }
}

The put() method adds new items to the context map, and the clearAll()
method clears the entire map. You must ensure that the logging call is
placed in between. Next, we will discuss how to log in a structured format so
that the contextual information is included in the log message.

FAQs

5.1. How to print Jar files names in logs

Once configured, Logback can include packaging data (the name and version of the jar file) for each line of the stack traces it outputs. This can help in debugging ClassCastException issues caused by having multiple versions of a library's jars on the classpath.

Packaging data is disabled by default and can be enabled on the <configuration> element:
<configuration packagingData="true">
  ...
</configuration>

5.2. Clean Resources on Shutdown

In standalone applications, to correctly shut down Logback and release the associated resources, use the shutdown hook. The hook will close all appenders attached to loggers defined by the context and stop any active threads in an orderly manner. It allows up to 30 seconds for any log file compression tasks running in the background to finish.

<configuration debug="false">
	<shutdownHook/>
	.... 
</configuration>

A shutdown hook is installed automatically in web applications, making this directive redundant there.

Understanding log levels

There are a few things we need to discuss from the previous example. First,
notice the logging call:

logger.info("Hello World!");

The info() method here is used to log an event at the INFO level. In
software development, log levels serve as a way to
categorize log messages based on their severity or importance. Log4j offers six
log levels by default, and each level is associated with an integer value:

  • TRACE (600): this is the least severe log level, typically used to log
    fine-grained information about a program’s execution, such as entering or
    exiting functions, variable values, and other low-level details that can
    help in understanding the internal workings of your code.
  • DEBUG (500): it is used for logging messages intended to be helpful during
    the development and testing process, which is usually program state
    information that can be helpful when ascertaining whether an operation is
    being performed correctly.
  • INFO (400): it is used for informational messages that record events that
    occur during the normal operation of your application, such as user
    authentication, API calls, or database access. These messages help you
    understand what’s happening within your application.
  • WARN (300): events logged at this level indicate potential issues that
    might require your attention before they become significant problems.
  • ERROR (200): it is used to record unexpected errors that occur during the
    course of program execution.
  • FATAL (100): this is the most severe log level, and it indicates an urgent
    situation affecting your application’s core component that should be addressed
    immediately.

Log4j provides a corresponding method for each of these levels:

logger.trace("Entering method processOrder().");
logger.debug("Received order with ID 12345.");
logger.info("Order shipped successfully.");
logger.warn("Potential security vulnerability detected in user input: '...'");
logger.error("Failed to process order. Error: {. . .}");
logger.fatal("System crashed. Shutting down...");
2023-04-20 20:44:47.254 [main] TRACE com.example.App - Entering method processOrder().
2023-04-20 20:44:47.255 [main] DEBUG com.example.App - Received order with ID 12345.
2023-04-20 20:44:47.255 [main] INFO com.example.App - Order shipped successfully.
2023-04-20 20:44:47.255 [main] WARN com.example.App - Potential security vulnerability detected in user input: '...'
2023-04-20 20:44:47.255 [main] ERROR com.example.App - Failed to process order. Error: {. . .}
2023-04-20 20:44:47.255 [main] FATAL com.example.App - System crashed. Shutting down...

In addition to these predefined log levels, Log4j also supports custom log
levels. For example, if your project requires a log level VERBOSE with integer
value 550, which is between levels DEBUG and TRACE, you can use the
forName() method to create it.

package com.example;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;


public class App {

    final Level VERBOSE = Level.forName("VERBOSE", 550);

    protected static final Logger logger = LogManager.getLogger();

    public static void main(String[] args) {

        App app = new App();

        logger.log(app.VERBOSE, "a verbose message");
    }
}
2023-04-24 17:13:30.257 [main] VERBOSE com.example.App - a verbose message

Alternatively, you can define custom log levels directly in the configuration
file:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">

    <CustomLevels>
        <CustomLevel name="VERBOSE" intLevel="550" />
    </CustomLevels>

    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
        </Console>
    </Appenders>

    <Loggers>
        <Root level="trace">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>
</Configuration>

This configuration will make Log4j call the forName() method we just
introduced and create the VERBOSE level internally. You can then access this
level in your application code using the getLevel() method, and it should give
you the same output.

package com.example;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;


public class App {

    protected static final Logger logger = LogManager.getLogger();

    public static void main(String[] args) {

        logger.log(Level.getLevel("VERBOSE"), "a verbose message");
    }
}

Log levels also play a crucial role in controlling the volume of logs generated
by an application. By setting the appropriate log level, you can filter out less
critical log messages, reducing the overall volume.

Head back to the configuration file and notice how the Root logger is defined:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">

    <CustomLevels>
        <CustomLevel name="VERBOSE" intLevel="550" />
    </CustomLevels>

    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
        </Console>
    </Appenders>

    <Loggers>
        <Root level="info">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>
</Configuration>

The level attribute defines the minimum log level a record must have to be
logged. For example, if you set level="info", then the trace and debug
level messages will be excluded from the output. Of course, this works for
custom log levels as well.

logger.trace("Entering method processOrder().");
logger.log(Level.getLevel("VERBOSE"), "Executing method foo() with parameters: [param1, param2]");
logger.debug("Received order with ID 12345.");
logger.info("Order shipped successfully.");
logger.warn("Potential security vulnerability detected in user input: '...'");
logger.error("Failed to process order. Error: {. . .}");
logger.fatal("System crashed. Shutting down...");
2023-04-25 13:58:41.830 [main] INFO com.example.App - Order shipped successfully.
2023-04-25 13:58:41.831 [main] WARN com.example.App - Potential security vulnerability detected in user input: '...'
2023-04-25 13:58:41.831 [main] ERROR com.example.App - Failed to process order. Error: {. . .}
2023-04-25 13:58:41.831 [main] FATAL com.example.App - System crashed. Shutting down...

The Python logging hierarchy

Before closing out this tutorial, let’s touch on Python’s logging hierarchy
and how it works. The Python logging hierarchy is a way of organizing loggers
into a tree structure based on their names, with the root logger at the top.

Each custom logger has a unique name, and loggers with similar names form a
hierarchy. When a logger is created, it inherits log levels and handlers from
the nearest ancestor that does if it doesn’t have those settings on itself. This
allows for fine-grained control over how log messages are handled.

For example, a logger named app.module is the child of the app logger, which
in turn is the child of the root logger:

import logging

app_logger = logging.getLogger("app")
module_logger = logging.getLogger("app.module")

print(app_logger.parent)
print(module_logger.parent)

<RootLogger root (WARNING)>
<Logger app (WARNING)>

Notice how the log level of the app logger is WARNING despite no log level
being set on it directly. This is because the default log level for custom
loggers is a special NOTSET value, which causes Python to traverse the chain of
ancestors until one with a level other than NOTSET is found, or the root is
reached. Since root is set to WARNING by default, the effective level of
app (and app.module) is also WARNING. However, if root also has a level of
NOTSET, then all messages will be processed.
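This resolution can be observed directly with getEffectiveLevel(), which walks the ancestor chain just described. A minimal sketch (logger names are illustrative):

```python
import logging

# Fresh loggers; neither has an explicit level, so both fall back to the
# root logger's default of WARNING when resolving their effective level.
app_logger = logging.getLogger("app")
module_logger = logging.getLogger("app.module")

print(app_logger.level)                   # 0  (NOTSET)
print(app_logger.getEffectiveLevel())     # 30 (WARNING, inherited from root)

# Setting a level on "app" changes the effective level of its descendants too
app_logger.setLevel(logging.DEBUG)
print(module_logger.getEffectiveLevel())  # 10 (DEBUG, inherited from "app")
```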

The propagate argument in the Python logging module is used to control whether
log messages should be passed up the logging hierarchy to parent loggers. By
default, this argument is set to True, meaning that log messages are
propagated up the hierarchy.

If the propagate argument is set to False for a particular logger, log
messages emitted by that logger will not be passed up the hierarchy to its
parent loggers. This allows for greater control over log message handling, as it
allows you to prevent messages from being handled by parent loggers.

For example, consider a logging hierarchy with a root logger, an app logger,
and an app.module1 logger. If the propagate argument is set to False for
the app.module1 logger, log messages emitted by this logger will only be
handled by handlers attached directly to it and will not be passed up to the
app logger or the root logger.

In general, it’s good practice to set the propagate argument to False only
when necessary, as disabling propagation can lead to unexpected behavior and
make logging harder to manage in a complex application, for instance, when
messages silently stop reaching a centrally configured handler on the root
logger.
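The effect of propagate is easy to demonstrate. In this sketch (logger and message names are illustrative), a record emitted by a handler-less child logger reaches the root logger's handler only while propagation is on:

```python
import io
import logging

# Attach a handler to the root logger so propagated records are visible
captured = io.StringIO()
logging.getLogger().addHandler(logging.StreamHandler(captured))

child = logging.getLogger("app.module1")
child.setLevel(logging.INFO)

child.info("first message")    # propagates up and reaches the root handler

child.propagate = False
child.info("second message")   # child has no handlers of its own, so this
                               # record is not emitted anywhere

print(captured.getvalue())     # only "first message" was recorded
```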

File Config Examples ¶

1. File Config to Log Messages with Level DEBUG & Above

The remaining examples explain how to configure the logging module from a configuration file. They mirror the earlier dictionary-based examples; the only change is that a file, rather than a dictionary, is the source of the configuration.


  • fileConfig(filename, defaults=None, disable_existing_loggers=True) — This method takes a configuration file name as input and configures the logging module based on it.
    • Its defaults parameter accepts a dictionary of key-value pairs. When a key is expected in some section of the configuration file but is not present there, the value of that key is taken from this dictionary.

Please make a NOTE that Python internally uses configparser module to parse the configuration file. If you are interested in learning about configparser module then please feel free to check our tutorial on the same.
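Because the file is parsed with configparser, values in it can use %(key)s interpolation, and the defaults dictionary supplies any keys the file leaves out. A minimal sketch (the file name and the loglevel key are illustrative, not from the examples below):

```python
import logging
import logging.config

# A minimal config written out for the demo; the root logger's level is left
# as a %(loglevel)s placeholder to be supplied through `defaults`.
config = """\
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=std_out

[logger_root]
level=%(loglevel)s
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=std_out
args=(sys.stdout,)

[formatter_std_out]
format=%(levelname)s : %(message)s
"""

with open("demo_config.conf", "w") as f:
    f.write(config)

# The missing loglevel key is filled in from the defaults dictionary
logging.config.fileConfig("demo_config.conf", defaults={"loglevel": "INFO"})
print(logging.getLogger().level)  # 20
```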


First, we have created a configuration file named file_config1.conf which will be used to configure logging.

We have defined a single logger named root inside the configuration file, along with one handler named console and one formatter named std_out. The root logger is defined with log level DEBUG and handler console. The console handler uses logging.StreamHandler to log messages to standard output with a log level of DEBUG and the std_out formatter. At last, we have included the definition of the std_out formatter. This file holds exactly the same configuration that we created with a dictionary in our first example.

Our script starts by configuring the logging module with file_config1.conf file. All the remaining code is exactly the same as our first example.

When we run the script, we can notice from the output that it’s exactly the same as our first example.

file_config1.conf

[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=std_out

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=logging.StreamHandler
level=DEBUG
formatter=std_out
args=(sys.stdout,)

[formatter_std_out]
format=%(levelname)s : %(name)s : %(module)s : %(funcName)s : %(message)s
datefmt=%d-%m-%Y %I:%M:%S

logging_config_ex_8.py

import logging
import logging.config

logging.config.fileConfig("file_config1.conf")

def addition(a, b):
    logging.debug("Inside Addition Function")
    if isinstance(a, str):
        logging.warning("Warning : Parameter A is passed as String. Future versions won't support it.")
    if isinstance(b, str):
        logging.warning("Warning : Parameter B is passed as String. Future versions won't support it.")
    try:
        result = float(a) + float(b)
        logging.info("Addition Function Completed Successfully")
        return result
    except Exception as e:
        logging.error("Error Type : {}, Error Message : {}".format(type(e).__name__, e))
        return None

logging.info("Addition of 10 & 20 is : {}".format(addition(10, 20)))
logging.info("Addition of '20' & 20 is : {}".format(addition("20", 20)))
logging.info("Addition of A & 20 is : {}".format(addition("A", 20)))

1.1 Configuration File Format Explanation


The configuration file should have the below-mentioned three sections, which declare the names of the loggers, handlers, and formatters that will be present in the file:

[loggers]
keys=root,<other logger names>

[handlers]
keys=<handler names>

[formatters]
keys=<formatter names>

After these three sections, there will be one section per individual logger, handler, and formatter, as explained below.

Each logger is defined in a section named logger_<name>:

[logger_<name>]
level=<log level>
handlers=<comma-separated handler names>
qualname=<name used to retrieve the logger in code>
propagate=<0 or 1>

Each handler is defined in a section named handler_<name>:

[handler_<name>]
class=<handler class, e.g. logging.StreamHandler>
level=<log level>
formatter=<formatter name>
args=<tuple of positional arguments for the handler class>

Each formatter is defined in a section named formatter_<name>:

[formatter_<name>]
format=<log record format string>
datefmt=<date format string>

2. File Config to Create Logger

As a part of our ninth example, we are explaining how we can create loggers to log messages from the logging module which is configured from the configuration file.

Our code for this example uses the same configuration file file_config1.conf which we have used in our previous example. The rest of our code is exactly the same as our code for the second example.

When we run the below script, we can notice that the output is exactly the same as our output from the second example.

logging_config_ex_9.py

import logging
import logging.config

logging.config.fileConfig("file_config1.conf")

logger = logging.getLogger()

def addition(a, b):
    logger.debug("Inside Addition Function")
    if isinstance(a, str):
        logger.warning("Warning : Parameter A is passed as String. Future versions won't support it.")
    if isinstance(b, str):
        logger.warning("Warning : Parameter B is passed as String. Future versions won't support it.")
    try:
        result = float(a) + float(b)
        logger.info("Addition Function Completed Successfully")
        return result
    except Exception as e:
        logger.error("Error Type : {}, Error Message : {}".format(type(e).__name__, e))
        return None

logger.info("Current Log Level : {}".format(logger.getEffectiveLevel()))

logger.info("Addition of 10 & 20 is : {}".format(addition(10, 20)))
logger.info("Addition of '20' & 20 is : {}".format(addition("20", 20)))
logger.info("Addition of A & 20 is : {}".format(addition("A", 20)))

3. File Config to Create Different Handlers (Direct Logs to File)

As a part of our tenth example, we have explained how we can configure two different handlers with different settings using a configuration file.

We have created a new configuration file named file_config2.conf that includes two handlers this time. The console handler directs messages to standard output. The file handler, defined in the section handler_file with the class logging.FileHandler, directs log messages to a file named all_messages_conf.log. The rest of the configuration is almost the same as in our previous examples.

Our script for this example is exactly the same as our script for the previous example with the only difference that we have used file_config2.conf file to configure logging module.

When we run the below script, we can notice that output includes all log messages at level DEBUG and above. We have also displayed the contents of all_messages_conf.log file which has included all messages at level INFO and above as per configuration.

file_config2.conf

[loggers]
keys=root

[handlers]
keys=console,file

[formatters]
keys=std_out

[logger_root]
level=DEBUG
handlers=console,file

[handler_console]
class=logging.StreamHandler
level=DEBUG
formatter=std_out
args=(sys.stdout,)

[handler_file]
class=logging.FileHandler
level=INFO
formatter=std_out
args=("all_messages_conf.log",)

[formatter_std_out]
format=%(levelname)s : %(name)s : %(module)s : %(funcName)s : %(message)s
datefmt=%d-%m-%Y %I:%M:%S

logging_config_ex_10.py

import logging
import logging.config

logging.config.fileConfig("file_config2.conf")

logger = logging.getLogger()

def addition(a, b):
    logger.debug("Inside Addition Function")
    if isinstance(a, str):
        logger.warning("Warning : Parameter A is passed as String. Future versions won't support it.")
    if isinstance(b, str):
        logger.warning("Warning : Parameter B is passed as String. Future versions won't support it.")
    try:
        result = float(a) + float(b)
        logger.info("Addition Function Completed Successfully")
        return result
    except Exception as e:
        logger.error("Error Type : {}, Error Message : {}".format(type(e).__name__, e))
        return None

logger.info("Current Log Level : {}".format(logger.getEffectiveLevel()))

logger.info("Addition of 10 & 20 is : {}".format(addition(10, 20)))
logger.info("Addition of '20' & 20 is : {}".format(addition("20", 20)))
logger.info("Addition of A & 20 is : {}".format(addition("A", 20)))
cat logging_config_examples/all_messages_conf.log
INFO : root : logging_config_ex_10 : <module> : Current Log Level : 10

INFO : root : logging_config_ex_10 : addition : Addition Function Completed Successfully
INFO : root : logging_config_ex_10 : <module> : Addition of 10 & 20 is : 30.0

WARNING : root : logging_config_ex_10 : addition : Warning : Parameter A is passed as String. Future versions won't support it.
INFO : root : logging_config_ex_10 : addition : Addition Function Completed Successfully
INFO : root : logging_config_ex_10 : <module> : Addition of '20' & 20 is : 40.0

ERROR : root : logging_config_ex_10 : addition : Error Type : ValueError, Error Message : could not convert string to float: 'A'
INFO : root : logging_config_ex_10 : <module> : Addition of A & 20 is : None

4. File Config to Create Hierarchy of Loggers

As a part of our eleventh example, we are explaining how we can create a hierarchy of loggers for logging messages.

Our code for this example is exactly the same as our code for example 5 which has the same code but is configured using a dictionary. We have used our configuration file named file_config2.conf to configure logging in this example.

When we run the below script, we can notice based on log messages which log events are logged by which logger based on logger name present in the log message.

logging_config_ex_11.py

import logging
import logging.config

logging.config.fileConfig("file_config2.conf")

################ Module Logger #################
module_logger = logging.getLogger(__name__)

class Addition:
    def __init__(self):
        ################ Class Logger #################
        self.logger = logging.getLogger(__name__ + ".Addition")

    def addition(self, a, b):
        self.logger.debug("Inside Addition Function")
        if isinstance(a, str):
            self.logger.warning("Warning : Parameter A is passed as String. Future versions won't support it.")
        if isinstance(b, str):
            self.logger.warning("Warning : Parameter B is passed as String. Future versions won't support it.")
        try:
            result = float(a) + float(b)
            self.logger.info("Addition Function Completed Successfully")
            return result
        except Exception as e:
            self.logger.error("Error Type : {}, Error Message : {}".format(type(e).__name__, e))
            return None

module_logger.info("Current Log Level : {}".format(module_logger.getEffectiveLevel()))

add = Addition()
module_logger.info("Addition of 10 & 20 is : {}".format(add.addition(10, 20)))
module_logger.info("Addition of '20' & 20 is : {}".format(add.addition("20", 20)))
module_logger.info("Addition of A & 20 is : {}".format(add.addition("A", 20)))

5. File Config to Create Multiple Loggers

As a part of our twelfth example, we are demonstrating how we can use more than one logger instance to log messages inside of the script based on configuration details from the file.

We have created a new configuration file named file_config3.conf for defining a configuration. We have created two loggers root and main. The root logger has the same configuration which we have been using for many of our examples. The main logger directs log messages at level INFO and above to standard output. We have defined two different handlers (console1 and console2) to be used by two different loggers. We have also defined two different formatters (std_out1 and std_out2) to be used by two different handlers.

Our code for this example is the same as in our previous examples, with minor changes. We have created two loggers this time instead of one: logger1 is used inside the addition method, whereas logger2 is used in the main part of the code.

When we run our script for this example, we can notice from the output which logger has logged which log message.

file_config3.conf

[loggers]
keys=root,main

[handlers]
keys=console1,console2

[formatters]
keys=std_out1,std_out2

[logger_root]
level=DEBUG
handlers=console1

[logger_main]
level=INFO
handlers=console2
qualname=main
propagate=0

[handler_console1]
class=logging.StreamHandler
level=DEBUG
formatter=std_out1
args=(sys.stdout,)

[handler_console2]
class=logging.StreamHandler
level=INFO
formatter=std_out2
args=(sys.stdout,)

[formatter_std_out1]
format=%(levelname)s : %(name)s : %(module)s : %(funcName)s : %(message)s
datefmt=%d-%m-%Y %I:%M:%S

[formatter_std_out2]
format=%(asctime)s : %(levelname)s : %(name)s : %(message)s
datefmt=%d-%m-%Y %I:%M:%S

logging_config_ex_12.py

import logging
import logging.config

logging.config.fileConfig("file_config3.conf")

################ Loggers #################
logger1 = logging.getLogger()
logger2 = logging.getLogger("main")

def addition(a, b):
    logger1.debug("Inside Addition Function")
    if isinstance(a, str):
        logger1.warning("Warning : Parameter A is passed as String. Future versions won't support it.")
    if isinstance(b, str):
        logger1.warning("Warning : Parameter B is passed as String. Future versions won't support it.")
    try:
        result = float(a) + float(b)
        logger1.info("Addition Function Completed Successfully")
        return result
    except Exception as e:
        logger1.error("Error Type : {}, Error Message : {}".format(type(e).__name__, e))
        return None

logger2.info("Current Log Level : {}".format(logger2.getEffectiveLevel()))

logger2.info("Addition of 10 & 20 is : {}".format(addition(10, 20)))
logger2.info("Addition of '20' & 20 is : {}".format(addition("20", 20)))
logger2.info("Addition of A & 20 is : {}".format(addition("A", 20)))

This ends our small tutorial explaining how we can use the logging.config module to configure the logging module from a dictionary and from configuration files.

Aggregating logs in the cloud

Aggregating logs in the cloud has become a popular approach for managing logs,
especially for large projects with distributed systems that generate large
volumes of log records. When these records are distributed across multiple
systems, managing and analyzing them can be challenging. By centralizing logs in
the cloud, you can collect logs in one location, which makes it easier to
search, filter, and analyze log data, helping to identify and troubleshoot
issues more quickly.

Cloud-based log management platforms such as
Logtail offer advanced features like log
search, visualization, and alerting, which can help you gain deeper insights
into your logs and respond quickly to any issues that arise. With Logtail, you
can effortlessly collect, process, and analyze log data from a variety of
sources, including servers, applications, and cloud environments. The tool also
provides advanced features such as alerting and notification, allowing you to
set custom alerts based on specific log patterns or events, so you can respond
to issues quickly. Additionally, Logtail provides an intuitive web-based
interface that allows you to easily search, filter, and visualize your log data
in real-time.

Creating a Logger

We will start by creating an application logger and later configure it through XML. As mentioned earlier, if we are using Spring Boot, we don’t require any additional dependency declaration for Logback in our Maven POM, and we can start writing logging code straight away.

LogbackConfigXml.java

package guru.springframework.blog.logbackxml;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackConfigXml {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());
    public void performTask(){
        logger.debug("This is a debug message.");
        logger.info("This is an info message.");
        logger.warn("This is a warn message.");
        logger.error("This is an error message.");

    }
}

Our test class uses JUnit to unit test the preceding LogbackConfigXml class.

LogbackConfigXmlTest.java

package guru.springframework.blog.logbackxml;

import org.junit.Test;

import static org.junit.Assert.*;


public class LogbackConfigXmlTest {
    @Test
    public void testPerformTask() throws Exception {
        LogbackConfigXml logbackConfigXml=new LogbackConfigXml();
        logbackConfigXml.performTask();
    }
}

Best practices for logging in Python

Let’s wrap up this article by looking at a few best practices to help you put
together an effective logging strategy in your Python applications:

  1. Use meaningful logger names: Give loggers meaningful names that reflect
    their purpose, using dots as separators to create a hierarchy. For example, a
    logger for a module could be named module.submodule. You can use the
    __name__ variable to achieve this naming convention.

  2. Avoid using the root logger: The root logger is a catch-all logger that
    can be difficult to manage. Instead, create specific loggers for different
    parts of your application.

  3. Set the appropriate log levels: For example, you can use WARNING in
    production and DEBUG in development or testing environments.

  4. Centralize your logging configuration: Centralizing your logging
    configuration in a single location will make it much easier to manage.

  5. Aggregate your logs: Consider using a library like
    logtail-python for centralizing
    your logs, as it provides advanced features like centralized logging,
    aggregation, and alerting.

  6. Include as much context as necessary: Ensure that relevant context
    surrounding the event being logged is included in the log record. At a
    minimum, records should always include the severity level, the logger name,
    and the time the message was emitted.

  7. Test your logging configuration: Test your logging configuration in
    different scenarios to ensure that it behaves as expected before deploying to
    production.

  8. Rotate log files regularly: Regularly rotate log files to keep them from
    growing too large and becoming difficult to manage. We recommend using
    Logrotate but you
    can also use the RotatingFileHandler or TimedRotatingFileHandler as
    demonstrated in this article.
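The first two practices above work together: passing __name__ to getLogger() gives each module a logger whose dotted name mirrors the package layout, so configuration applied to a package-level logger covers all of its modules. A small sketch (the module path app.payments is hypothetical):

```python
# Imagine this line at the top of a hypothetical file app/payments.py
import logging

logger = logging.getLogger(__name__)

# When the module is imported as app.payments, the logger's name is
# "app.payments", making it a child of the "app" logger in the hierarchy.
print(logger.name)
```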


Color-coded Logs

If your terminal supports ANSI, color output will be used to aid readability. You can set the spring.output.ansi.enabled property to either ALWAYS, NEVER or DETECT.

Color coding is configured using the %clr conversion word. In its simplest form, the converter will color the output according to the log level.

  • FATAL and ERROR – red
  • WARN – yellow
  • INFO, DEBUG and TRACE – green


Logging to a file

Next, let’s discuss how to store Python logs in a file. This can be done in
many ways, but the most basic involves setting up a Handler that transports the
logs to a file. Fortunately, the logging.FileHandler class exists for this
purpose:

import sys
import logging
from pythonjsonlogger import jsonlogger


logger = logging.getLogger(__name__)

stdoutHandler = logging.StreamHandler(stream=sys.stdout)

fileHandler = logging.FileHandler("logs.txt")

jsonFmt = jsonlogger.JsonFormatter(
    "%(name)s %(asctime)s %(levelname)s %(filename)s %(lineno)s %(process)d %(message)s",
    rename_fields={"levelname": "severity", "asctime": "timestamp"},
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

stdoutHandler.setFormatter(jsonFmt)
fileHandler.setFormatter(jsonFmt)

logger.addHandler(stdoutHandler)
logger.addHandler(fileHandler)

logger.setLevel(logging.DEBUG)

logger.debug("A debug message")
logger.error("An error message")

Once a new FileHandler object is created, you can set your preferred format
for the handler and add it to the Logger as shown above. When you execute the
program, the logs will be printed to the standard output as before but also
stored in a logs.txt file in the current working directory.

{"name": "__main__", "timestamp": "2023-02-06T10:19:34Z", "severity": "DEBUG", "filename": "example.py", "lineno": 30, "process": 974925, "message": "A debug message"}
{"name": "__main__", "timestamp": "2023-02-06T10:19:34Z", "severity": "ERROR", "filename": "example.py", "lineno": 31, "process": 974925, "message": "An error message"}

If you’re logging to multiple Handlers (as above), you don’t have to use the
same format for all of them. For example, structured JSON logs can be a little
hard to read in development, so you can retain the plain text formatting for the
stdoutHandler while JSON-formatted logs go to a file to be further processed
through log management tools.

import sys
import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)

stdoutHandler = logging.StreamHandler(stream=sys.stdout)
fileHandler = logging.FileHandler("logs.txt")

stdoutFmt = logging.Formatter(
    "%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d >>> %(message)s"
)

jsonFmt = jsonlogger.JsonFormatter(
    "%(name)s %(asctime)s %(levelname)s %(filename)s %(lineno)s %(process)d %(message)s",
    rename_fields={"levelname": "severity", "asctime": "timestamp"},
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

stdoutHandler.setFormatter(stdoutFmt)
fileHandler.setFormatter(jsonFmt)

logger.addHandler(stdoutHandler)
logger.addHandler(fileHandler)

logger.setLevel(logging.DEBUG)

logger.debug("A debug message")
logger.error("An error message")

__main__: 2023-02-06 10:34:25,172 | DEBUG | example.py:30 | 996769 >>> A debug message
__main__: 2023-02-06 10:34:25,173 | ERROR | example.py:31 | 996769 >>> An error message
{"name": "__main__", "timestamp": "2023-02-06T10:19:34Z", "severity": "DEBUG", "filename": "example.py", "lineno": 30, "process": 974925, "message": "A debug message"}
{"name": "__main__", "timestamp": "2023-02-06T10:19:34Z", "severity": "ERROR", "filename": "example.py", "lineno": 31, "process": 974925, "message": "An error message"}

You can take this further by setting different minimum log levels on the
Handlers, or by using a Filter object to prevent certain logs from being sent
to any of the destinations. I’ll leave you to experiment further with that on
your own.
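As a starting point, here is a sketch of both ideas together. In-memory streams stand in for the console and the file, and the health-check filter is purely illustrative:

```python
import io
import logging

logger = logging.getLogger("dual-output")
logger.setLevel(logging.DEBUG)

# Two in-memory streams stand in for the console and the log file
console_stream, file_stream = io.StringIO(), io.StringIO()

consoleHandler = logging.StreamHandler(console_stream)
consoleHandler.setLevel(logging.DEBUG)

fileHandler = logging.StreamHandler(file_stream)
fileHandler.setLevel(logging.ERROR)  # the "file" only records ERROR and up

# A Filter can veto records by arbitrary criteria, e.g. health-check noise
class DropHealthChecks(logging.Filter):
    def filter(self, record):
        return "healthcheck" not in record.getMessage()

consoleHandler.addFilter(DropHealthChecks())

logger.addHandler(consoleHandler)
logger.addHandler(fileHandler)

logger.debug("GET /healthcheck 200")  # filtered from console, below file level
logger.error("upstream timed out")    # reaches both destinations
```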

Automatically rotating log files

When you’re logging to files, you need to be careful not to let a file grow
too large and consume a huge amount of disk space. By rotating log files, older
logs can be compressed or deleted, freeing up space and reducing the risk of
disk usage issues. Rotation also keeps the set of log files easily manageable,
and it can reduce the risk of sensitive information exposure by removing logs
after a set period of time.

We generally recommend using
Logrotate to manage
log file rotation since it is the standard way to do it on Linux servers.
However, the logging module also provides two Handlers to help with solving
this problem:
RotatingFileHandler
and
TimedRotatingFileHandler.

from logging.handlers import RotatingFileHandler

fileHandler = RotatingFileHandler("logs.txt", backupCount=5, maxBytes=5000000)

The RotatingFileHandler class takes the filename as before but also a few
other properties. The most crucial of these are backupCount and maxBytes.
The former determines how many backup files will be kept while the latter
determines the maximum size of a log file before it is rotated. In this manner,
each file is kept to a reasonable size and older logs don’t clog up storage
space unnecessarily.

With this setting in place, the logs.txt file will be created and written to
as before until it reaches 5 megabytes. It will subsequently be renamed to
logs.txt.1 and a new logs.txt file will be created once again. When the new
file gets to 5 MB, it will be renamed to logs.txt.1 and the previous
logs.txt.1 file will be renamed to logs.txt.2. This process continues until
we get to logs.txt.5. At that point, the oldest file (logs.txt.5) gets
deleted to make way for the newer logs.
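The rollover sequence described above can be observed with a deliberately tiny maxBytes (the file name and sizes here are illustrative, not the 5 MB production values):

```python
import logging
from logging.handlers import RotatingFileHandler

# Deliberately tiny maxBytes so the rollover happens within a short loop
handler = RotatingFileHandler("rotate_demo.log", maxBytes=200, backupCount=2)
logger = logging.getLogger("rotation-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(50):
    logger.info("message number %d", i)

# rotate_demo.log plus the backups rotate_demo.log.1 and rotate_demo.log.2
# now exist; anything older than the two newest backups has been discarded
```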

from logging.handlers import TimedRotatingFileHandler

fileHandler = TimedRotatingFileHandler(
    "logs.txt", backupCount=5, when="midnight"
)

The TimedRotatingFileHandler rotates based on time rather than size: with
when="midnight", the logs.txt file is rotated at midnight each day, and the
five most recent backups are kept.

Logging in a structured format

So far, we’ve only been working with the PatternLayout, but Log4j also allows
you to format the log messages in other ways, such as JSON, XML, and so on.
These formats make it easier for the log records to be automatically parsed,
analyzed, and monitored by log management systems.

The de facto format for structured logging is JSON, which can be configured
using JsonLayout. Before you proceed, ensure that you have jackson-core and
jackson-databind dependencies in your pom.xml:

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0-SNAPSHOT</version>

  <name>demo</name>
  <!-- FIXME change it to the project's website -->
  <url>http://www.example.com</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencies>
    . . .

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.14.2</version>
    </dependency>

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.14.2</version>
    </dependency>

  </dependencies>

  . . .
</project>

Install the packages with mvn install, then configure the appender to use
JsonLayout:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">

    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <JsonLayout />
        </Console>
    </Appenders>

    <Loggers>
        <Root level="trace">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>

</Configuration>

Rerun your application and you should see the log message in JSON format.

{
  "instant" : {
    "epochSecond" : 1682530421,
    "nanoOfSecond" : 359757000
  },
  "thread" : "main",
  "level" : "INFO",
  "loggerName" : "com.example.App",
  "message" : "Order shipped successfully.",
  "endOfBatch" : false,
  "loggerFqcn" : "org.apache.logging.log4j.spi.AbstractLogger",
  "threadId" : 1,
  "threadPriority" : 5
}

The JsonLayout also takes a set of
optional parameters,
allowing you to customize the output. For example, by setting
properties="true", you can include contextual information in the output.

<JsonLayout properties="true" />
{
  "instant" : {
    "epochSecond" : 1682531418,
    "nanoOfSecond" : 950795000
  },
  "thread" : "main",
  "level" : "INFO",
  "loggerName" : "com.example.App",
  "message" : "Order shipped successfully.",
  "endOfBatch" : false,
  "loggerFqcn" : "org.apache.logging.log4j.spi.AbstractLogger",
  "buyerName" : "jack",
  "destination" : "xxxxxxxxxx",
  "orderNumber" : "1234567890",
  "threadId" : 1,
  "threadPriority" : 5
}

If you need full control over the structure of the output, Log4j also provides
the JsonTemplateLayout, which lets you define the layout with a template file.
It lives in the log4j-layout-template-json module, so first add the dependency
to your pom.xml:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
    <version>2.20.0</version>
</dependency>

And edit your appender setup.

<Appenders>
    <Console name="console" target="SYSTEM_OUT">
        <JsonTemplateLayout eventTemplateUri="classpath:template.json"/>
    </Console>
</Appenders>

The template is defined by the template.json file under the classpath. Since
this tutorial assumes you are using Maven, you can place the file under your
resources directory (same as your log4j2.xml), and it will be automatically
added to the classpath.

And in the template.json, you can customize the layout however you wish. Here
is an example:

{
  "@timestamp": {
    "$resolver": "timestamp",
    "pattern": {
      "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'",
      "timeZone": "UTC"
    }
  },
  "ecs.version": "1.2.0",
  "log.level": {
    "$resolver": "level",
    "field": "name"
  },
  "message": {
    "$resolver": "message",
    "stringified": true
  },
  "process.thread.name": {
    "$resolver": "thread",
    "field": "name"
  },
  "log.logger": {
    "$resolver": "logger",
    "field": "name"
  },
  "labels": {
    "$resolver": "mdc",
    "flatten": true,
    "stringified": true
  },
  "tags": {
    "$resolver": "ndc"
  },
  "error.type": {
    "$resolver": "exception",
    "field": "className"
  },
  "error.message": {
    "$resolver": "exception",
    "field": "message"
  },
  "error.stack_trace": {
    "$resolver": "exception",
    "field": "stackTrace",
    "stackTrace": {
      "stringified": true
    }
  }
}
{"@timestamp":"2023-04-26T18:08:51.430Z","ecs.version":"1.2.0","log.level":"INFO","message":"Order shipped successfully.","process.thread.name":"main","log.logger":"com.example.App","buyerName":"jack","destination":"xxxxxxxxxx","orderNumber":"1234567890"}

Custom Log Levels

In the application.properties file, we can define log levels of Spring Boot loggers, application loggers, Hibernate loggers, Thymeleaf loggers, and more. To set the logging level for any logger, add properties starting with logging.level.

The logging level can be one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL, or OFF. The root logger can be configured using logging.level.root.

If we are using Logback or Log4j2, we can configure different log levels for console logs and file logs using the configuration properties logging.threshold.console and logging.threshold.file.

logging.level.root=WARN

logging.level.org.springframework.web=ERROR
logging.level.com.howtodoinjava=DEBUG

logging.threshold.console=TRACE
logging.threshold.file=INFO

In the above configuration, I upgraded the log level for application classes to DEBUG (from default INFO).

Now observe the logs:

2017-03-02 23:57:14.966 DEBUG 4092 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : debug log statement printed
2017-03-02 23:57:14.967  INFO 4092 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : info log statement printed
2017-03-02 23:57:14.967  WARN 4092 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : warn log statement printed
2017-03-02 23:57:14.967 ERROR 4092 --- [nio-8080-exec-1] c.h.app.controller.IndexController       : error log statement printed

Logging errors in Python

Errors are often the most common target for logging, so it’s important to
understand what tools the logging module provides for logging errors in Python,
so that all the necessary information about the error is captured properly to
aid the debugging process.

The ERROR level is the primary way to record errors in your programs. If a
problem is particularly severe to the functioning of your program, you may log
it at the CRITICAL level instead. The logging module also provides an
exception() method on a Logger, which is essentially an alias for
logging.error(<msg>, exc_info=True). Let’s see it in action:

import sys
import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)

stdoutHandler = logging.StreamHandler(stream=sys.stdout)

jsonFmt = jsonlogger.JsonFormatter(
    "%(name)s %(asctime)s %(levelname)s %(filename)s %(lineno)s %(process)d %(message)s",
    rename_fields={"levelname": "severity", "asctime": "timestamp"},
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

stdoutHandler.setFormatter(jsonFmt)
logger.addHandler(stdoutHandler)

logger.setLevel(logging.INFO)

try:
    1 / 0
except ZeroDivisionError as e:
    logger.error(e, exc_info=True)
    logger.critical(e, exc_info=True)
    logger.exception(e)

{"name": "__main__", "timestamp": "2023-02-06T09:28:54Z", "severity": "ERROR", "filename": "example.py", "lineno": 26, "process": 913135, "message": "division by zero", "exc_info": "Traceback (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 24, in <module>\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero"}
{"name": "__main__", "timestamp": "2023-02-06T09:28:54Z", "severity": "CRITICAL", "filename": "example.py", "lineno": 27, "process": 913135, "message": "division by zero", "exc_info": "Traceback (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 24, in <module>\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero"}
{"name": "__main__", "timestamp": "2023-02-06T09:28:54Z", "severity": "ERROR", "filename": "example.py", "lineno": 28, "process": 913135, "message": "division by zero", "exc_info": "Traceback (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 24, in <module>\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero"}

The error() and exception() methods produced exactly the same output, while
critical() differs only in the severity property. In all three cases, the
exception info is added to the record under the exc_info property. Note that
the exc_info argument should only be used in an exception context; otherwise,
exc_info will be set to NoneType: None in the output.

{"name": "__main__", "timestamp": "2023-02-06T09:35:27Z", "severity": "ERROR", "filename": "example.py", "lineno": 21, "process": 923890, "message": "error", "exc_info": "NoneType: None"}
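To see this for yourself, a minimal sketch (using a plain StreamHandler rather than the JSON setup above) is enough to reproduce the NoneType: None marker:

```python
import sys
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

# No exception is being handled at this point, so there is nothing
# for exc_info=True to capture: the record gets "NoneType: None".
logger.error("error", exc_info=True)
```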

If you want to add a stack trace to any of your logs outside an exception
context, use the stack_info argument instead:

logger.debug("debug", stack_info=True)
logger.error("error", stack_info=True)

The stack trace can be found under the stack_info property:

{"name": "__main__", "timestamp": "2023-02-06T09:39:47Z", "severity": "DEBUG", "filename": "example.py", "lineno": 21, "process": 934209, "message": "debug", "stack_info": "Stack (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 21, in <module>\n    logger.debug(\"debug\", stack_info=True)"}
{"name": "__main__", "timestamp": "2023-02-06T09:39:47Z", "severity": "ERROR", "filename": "example.py", "lineno": 22, "process": 934209, "message": "error", "stack_info": "Stack (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 22, in <module>\n    logger.error(\"error\", stack_info=True)"}

Logging uncaught exceptions

It is helpful to log uncaught
exceptions
at the CRITICAL level so
that you can identify the root cause of the problem and take appropriate action
to fix it. If you’re sending your logs to a log management tool like
Logtail, you can configure alerting on such
errors so that they can be resolved quickly and recurrence prevented.

Here’s how you can ensure such exceptions are logged properly in Python:

import sys
import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)

stdoutHandler = logging.StreamHandler(stream=sys.stdout)

jsonFmt = jsonlogger.JsonFormatter(
    "%(name)s %(asctime)s %(levelname)s %(filename)s %(lineno)s %(process)d %(message)s",
    rename_fields={"levelname": "severity", "asctime": "timestamp"},
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)

stdoutHandler.setFormatter(jsonFmt)

logger.addHandler(stdoutHandler)
logger.setLevel(logging.DEBUG)


def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        # Let Ctrl+C interrupt the program as usual
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return

    logger.critical("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))


sys.excepthook = handle_exception

# Cause an unhandled exception
raise RuntimeError("Test unhandled")

{"name": "__main__", "timestamp": "2023-02-07T14:36:42Z", "severity": "CRITICAL", "filename": "example.py", "lineno": 26, "process": 1437321, "message": "Uncaught exception", "exc_info": "Traceback (most recent call last):\n  File \"/home/betterstack/community/python-logging/example.py\", line 31, in <module>\n    raise RuntimeError(\"Test unhandled\")\nRuntimeError: Test unhandled"}

With this setup in place, you’ll always know when an uncaught exception occurs,
and diagnosing the cause should be a breeze since all the details have been
recorded in the log.
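One caveat worth noting: sys.excepthook only fires for exceptions in the main thread. Since Python 3.8, an exception that escapes a threading.Thread goes through threading.excepthook instead, which can be wired to the same logger. A minimal sketch (the handler name is illustrative):

```python
import logging
import threading

logger = logging.getLogger(__name__)


def handle_thread_exception(args):
    # args is a threading.ExceptHookArgs with exc_type, exc_value,
    # exc_traceback, and the Thread object that raised
    logger.critical(
        "Uncaught exception in thread %s",
        args.thread.name,
        exc_info=(args.exc_type, args.exc_value, args.exc_traceback),
    )


threading.excepthook = handle_thread_exception
```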

Prerequisites

Before we begin, you should have the latest Python version installed on your
machine (v3.10 at the time of writing). If you are missing Python, you can
find the installation instructions here.


Logback Additivity

To understand Logback additivity, let’s add the configured console appender to the application logger. The logger configuration code is this.

. . .
<logger name="guru.springframework.blog.logbackxml" level="info">
   <appender-ref ref="Console-Appender"/>
   <appender-ref ref="File-Appender"/>
   <appender-ref ref="RollingFile-Appender"/>
</logger>
<root>
    <appender-ref ref="Console-Appender"/>
</root>
. . .

The console output on running the test class is this.
[Figure: Console Appender Additivity Output — each message appears twice]

In the figure above, notice the duplicate output; it’s due to additivity. The appender named Console-Appender is attached to two loggers: root and guru.springframework.blog.logbackxml. Since root is the ancestor of all loggers, a logging request made by guru.springframework.blog.logbackxml gets output twice: once by the appender attached to guru.springframework.blog.logbackxml itself, and once by the appender attached to root. You can override this default Logback behavior by setting the additivity flag of a logger to false, like this.

. . .
<logger name="guru.springframework.blog.logbackxml" level="info" additivity="false">
   <appender-ref ref="Console-Appender"/>
   <appender-ref ref="File-Appender"/>
   <appender-ref ref="RollingFile-Appender"/>
</logger>
<root>
    <appender-ref ref="Console-Appender"/>
</root>
. . .

With additivity set to false, Logback will not use the Console-Appender attached to root to log messages.

The complete code of the Logback.xml file is this.

Logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true" scan="true" scanPeriod="30 seconds">
    <property name="LOG_PATH" value="logs" />
    <property name="LOG_ARCHIVE" value="${LOG_PATH}/archive" />
    <timestamp key="timestamp-by-second" datePattern="yyyyMMdd'T'HHmmss"/>
    <appender name="Console-Appender" class="ch.qos.logback.core.ConsoleAppender">
        <layout>
            <pattern>%msg%n</pattern>
        </layout>
    </appender>
    <appender name="File-Appender" class="ch.qos.logback.core.FileAppender">
        <file>${LOG_PATH}/logfile-${timestamp-by-second}.log</file>
        <encoder>
            <pattern>%msg%n</pattern>
            <outputPatternAsHeader>true</outputPatternAsHeader>
        </encoder>
    </appender>
    <appender name="RollingFile-Appender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/rollingfile.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_ARCHIVE}/rollingfile.log%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1KB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>
    <appender name="Async-Appender" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="RollingFile-Appender" />
    </appender>

    <logger name="guru.springframework.blog.logbackxml"  level="info" additivity="false">
        <appender-ref ref="Console-Appender" />
        <appender-ref ref="File-Appender" />
        <appender-ref ref="Async-Appender" />
    </logger>
    <root>
        <appender-ref ref="Console-Appender" />
    </root>
</configuration>

Custom Log Patterns

To change the logging patterns, use logging.pattern.console and logging.pattern.file properties.

# Logging pattern for the console
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} - %msg%n

# Logging pattern for file
logging.pattern.file=%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n

After changing the console logging pattern in the application, log statements are printed as below:

2017-03-03 12:59:13 - This is a debug message
2017-03-03 12:59:13 - This is an info message
2017-03-03 12:59:13 - This is a warn message
2017-03-03 12:59:13 - This is an error message
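As an aside, the file pattern above has a close Python equivalent: %d{...} corresponds to %(asctime)s plus a datefmt, [%thread] to %(threadName)s, %-5level to %(levelname)-5s, and %logger{36} roughly to %(name)s. A sketch of the same layout with logging.Formatter:

```python
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s [%(threadName)s] %(levelname)-5s %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.info("This is an info message")
```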
