Effective Use of Python 'logging' Module

In large systems, logs can be terrifying: they are huge in volume and hard to understand. This note lists suggestions and common misuses of Python's logging module, with the aim of:

  • Reducing redundant logs and spam from libraries.
  • Allowing more control over logging behavior.
  • Making logs more informative to users.

Libraries must not configure the root logger

Loggers are globally identified by a dot-separated name given to logging.getLogger(name), such as library.module.submodule. The logger named by an empty string is the "root logger".

Libraries must not call logging.basicConfig or configure the root logger in any way, unless requested by users.

Configuration of the root logger affects all logs, which is beyond the responsibility of any single library. Only application developers, i.e. those who create the program that interacts with users, should determine how to configure the root logger.

Libraries must not write to the root logger

Never call functions like logging.{info,error} from within a library, because they write to the root logger. I've added this advice into CPython's official documentation.

When a library writes to the root logger, applications that use the library lose control over the library's logging behavior: they cannot turn on/off the logs from the library, apply custom filter/formatter, or redirect the logs from the library.

Instead, a library should write to a logger with an easily and uniquely identifiable name, using

logger = logging.getLogger(good_name)

This way, callers of the library will be able to reconfigure its logger using the same name.
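For example, if a hypothetical library logs to logging.getLogger("my_lib"), its caller can silence or redirect those logs by name, without touching the library's code:

```python
import io
import logging

# Inside the (hypothetical) library "my_lib":
lib_logger = logging.getLogger("my_lib")

# Inside the application: reconfigure the library's logger by its name.
logging.getLogger("my_lib").setLevel(logging.ERROR)  # drop INFO and below
stream = io.StringIO()
logging.getLogger("my_lib").addHandler(logging.StreamHandler(stream))  # redirect

lib_logger.info("noisy detail")    # filtered out by the ERROR level
lib_logger.error("something bad")  # captured by the application's handler
```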

Choice of logger name

__name__, i.e. the current module name, is often a good logger name. Occasionally, __name__ is not good enough:

  • In some deep, internal submodules, a suffix of __name__ may be uninformative and can be removed, e.g. the _internal._impl suffix in my_lib.submodule._internal._impl. Note that there is a trade-off between name simplicity and the granularity of control.
  • In a monorepo, many modules may share a common prefix in __name__, e.g. company.company3.organization. Removing such a common prefix can simplify the logger names while still keeping them unique.
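The prefix-stripping idea can be implemented as a small helper that library modules call instead of logging.getLogger(__name__) directly; the prefix and module names below are hypothetical:

```python
import logging

_COMMON_PREFIX = "company.company3.organization."  # hypothetical monorepo prefix

def get_logger(module_name: str) -> logging.Logger:
    """Return a logger named after the module, minus the common prefix."""
    if module_name.startswith(_COMMON_PREFIX):
        module_name = module_name[len(_COMMON_PREFIX):]
    return logging.getLogger(module_name)
```

Each module would then call get_logger(__name__); the resulting logger names stay unique but shorter.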

The "current function/class name" is often a bad logger name because:

  • The name is likely not unique.
  • The name could be the library's internal detail and therefore not identifiable to users.
  • The name may change more frequently (compared to module names), breaking users' logger configuration.

Fix such issues automatically

I wrote a simple script that processes Python source files and automatically replaces logging.xxx calls with logging.getLogger(__name__).xxx. I've used this script to create PRs in a few projects that misuse the root logger, such as pytorch/72649 and tensorboardX/662.

I hope someone could create a linter that performs this check.
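As a rough sketch of what such a rewrite can look like (this is not the actual script; a robust tool would operate on the AST rather than a regex):

```python
import re

# Matches root-logger calls like logging.info(...), logging.error(...), etc.
_ROOT_LOG_CALL = re.compile(
    r"\blogging\.(debug|info|warning|warn|error|critical|exception)\("
)

def rewrite(source: str) -> str:
    """Rewrite root-logger calls to go through a module-level logger name."""
    return _ROOT_LOG_CALL.sub(r"logging.getLogger(__name__).\1(", source)
```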

Libraries should not add a handler that's visible to users

Handlers can be attached to loggers to decide where and how to log a record.

Unless requested by users, a library should not add a handler anywhere (not even to its own logger) if the handler has a visible effect on users, because the application developer should make the final call on how each library's logs are processed. Pre-existing handlers may cause issues such as duplicated logs. This suggestion also appears in CPython's documentation.

Examples of invisible handlers that libraries may add to their loggers are:

  • logging.NullHandler can be added to disable logs by default.
  • Within a company, one library may add handlers that send logs to a centralized service monitored by the library's developers.
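The first pattern looks like this in a library's top-level __init__.py (the logger name "my_lib" is illustrative):

```python
import logging

# In the library's __init__.py: attach a NullHandler so that, if the
# application configures nothing, the library's records are swallowed
# silently instead of reaching the last-resort stderr handler.
logging.getLogger("my_lib").addHandler(logging.NullHandler())
```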

Be good citizens

Libraries should try to be good citizens by reducing the amount of duplicate/unwanted/useless logs they print. Some tips include:

  • Use levels correctly.
    • In particular, don't use INFO for debugging.
  • Think about the frequency and ask the following questions when writing logs:
    • How often will this line be printed?
    • Can I move logging to an earlier/later place so it's printed less often? e.g. from a method to __init__.
    • Is there a condition we can check and skip printing this message? e.g. if valid(): log(...).
  • Use helpers to control frequency of logs:
    • log_first_n: log only for the first n calls. It limits the total number of certain logs.
      • Example: use it to print deprecation warnings.
    • log_every_n_seconds: limit the frequency of certain logs to at most once every n seconds.
      • Example: use it to print progress in a loop whose iteration speed is unknown.
    • log_every_n: log once every n calls. It reduces the total number of certain logs by a factor of n.
    • I ported these helpers from abseil-py's logging module to work with the logging module directly.
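A simplified sketch of two of these helpers on top of the standard logging module (the real versions also key on the caller's file and line number):

```python
import logging
import time
from collections import defaultdict

_counts = defaultdict(int)  # per-key call counters
_last_time = {}             # per-key timestamp of the last emitted log

def log_first_n(level, msg, n, *, key=None, logger=None):
    """Log `msg` only for the first `n` calls sharing the same key."""
    key = key or msg
    _counts[key] += 1
    if _counts[key] <= n:
        (logger or logging.getLogger()).log(level, msg)

def log_every_n_seconds(level, msg, n, *, key=None, logger=None):
    """Log `msg` at most once every `n` seconds per key."""
    key = key or msg
    now = time.monotonic()
    if key not in _last_time or now - _last_time[key] >= n:
        _last_time[key] = now
        (logger or logging.getLogger()).log(level, msg)
```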

Use a structured logging service

Logs are not only strings. logging.LogRecord is a rich structure with useful attributes, and users can even tag logs with custom attributes through the extra= argument.
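For example, attributes passed through extra= land directly on the LogRecord, where any handler can read them later (the attribute names below are made up for illustration):

```python
import logging

class CaptureHandler(logging.Handler):
    """Toy handler that keeps raw LogRecord objects instead of formatting them."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("structured_demo")
logger.setLevel(logging.INFO)
capture = CaptureHandler()
logger.addHandler(capture)

# Tag the log with custom attributes; they survive as record attributes.
logger.info("request finished", extra={"request_id": "abc123", "latency_ms": 42})
record = capture.records[0]
```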

Large, distributed systems should not rely on printing as the sole method of logging. Whenever logs are printed (to terminal or files), they have to be converted to strings. A lot of useful attributes, such as stack trace and line number, are often lost. The lack of structure also makes it difficult to parse and analyze the logs.

In addition to printing, we can attach another Handler that sends structured logs to a logging service, such as Google Cloud Logging or Humio. The advantages include:

  • Can log a much larger volume than what a terminal can hold.
  • Can keep the rich structure of logs, enabling many features like querying by filters, rendering as different views, etc. Here is a list of logging features Google Cloud provides.
  • Can persist and aggregate logs across different instances/services/users, allowing developers to find patterns from big data.
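As a stand-in for a real client such as Google Cloud Logging's, here is a minimal handler that keeps structure by writing one JSON object per record; the chosen fields are illustrative:

```python
import json
import logging

class JsonLinesHandler(logging.Handler):
    """Serialize each record's useful attributes as one JSON object per line."""
    def __init__(self, stream):
        super().__init__()
        self.stream = stream

    def emit(self, record):
        payload = {
            "time": record.created,        # UNIX timestamp of record creation
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "lineno": record.lineno,       # attributes often lost in plain text
        }
        self.stream.write(json.dumps(payload) + "\n")
```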

Reduce duplicated logs among workers

In an MPI-like distributed job (e.g. many data-parallel deep learning training), workers often print almost identical logs. We should avoid printing them all to the terminal.

A good strategy could be:

  1. Print all logs from the main process / first rank. Print some logs (e.g. only errors) from other workers.
  2. Save all logs from all workers to separate files.
  3. Save all logs from all workers to a structured logging service, tagged by worker rank.

Detectron2's setup_logger implements (1) and (2).
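A sketch of strategies (1) and (2), assuming each worker knows its rank (this is my simplification, not Detectron2's actual code):

```python
import logging
import sys

def setup_worker_logging(rank: int, log_dir: str = "."):
    root = logging.getLogger()    # application code, so configuring
    root.setLevel(logging.DEBUG)  # the root logger is acceptable here

    # (1) Terminal: everything from rank 0, only errors from the rest.
    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.DEBUG if rank == 0 else logging.ERROR)
    root.addHandler(console)

    # (2) Full logs from every worker go to a per-rank file.
    file_handler = logging.FileHandler(f"{log_dir}/log.rank{rank}.txt")
    file_handler.setLevel(logging.DEBUG)
    root.addHandler(file_handler)
```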

Use colors in terminal

When logs are printed to terminal, they are more readable if severity is represented by colors rather than strings. I often use this formatter:

from termcolor import colored
import logging

class ColorFormatter(logging.Formatter):
    def __init__(self):
        super().__init__()
        # One pre-built formatter per severity, differing only in color.
        self._debug, self._info, self._warn, self._error = [
            logging.Formatter(
                colored("[%(asctime)s%(name)s]: ", c) + "%(message)s",
                datefmt="%m/%d %H:%M:%S",
            )
            for c in ["blue", "green", "yellow", "red"]
        ]

    def format(self, record):
        if record.name == "root":
            record.name = ""
        if len(record.name):
            record.name = " " + record.name

        if logging.INFO > record.levelno >= logging.DEBUG:
            return self._debug.format(record)
        elif logging.WARNING > record.levelno >= logging.INFO:
            return self._info.format(record)
        elif logging.ERROR > record.levelno >= logging.WARNING:
            return self._warn.format(record)
        return self._error.format(record)

Attach this formatter only when the handler writes to a terminal (check sys.stdout.isatty()), and we'll get output like:

[04/23 18:24:43 test]: debug
[04/23 18:24:43 test]: info
[04/23 18:24:43 test]: warning
[04/23 18:24:43 test]: error

The logging module is not enough

Be aware that relying on the logging module alone is insufficient. A Python program may produce useful logs that bypass the logging module, e.g.:

  • print statements: they should be avoided, but may still exist.
  • Logs from C libraries linked to the process.
  • Logs from the interpreter itself, e.g. stack traces.
  • Logs printed by the shell, e.g. "Segmentation fault".

To not miss important logs, a comprehensive logging solution needs to integrate both the structured logs from Python and the unstructured logs from the sources above.
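One way to capture output that bypasses logging, e.g. from C libraries or the interpreter itself, is to redirect the OS-level file descriptors into a file; a simplified sketch (it cannot catch messages printed by the parent shell, such as "Segmentation fault"):

```python
import os

def redirect_fds_to_file(path, fds=(1, 2)):
    """Point OS-level stdout/stderr at a log file, capturing output that
    bypasses the logging module. Returns saved descriptors for restoring."""
    log_file = open(path, "ab", buffering=0)
    saved = [os.dup(fd) for fd in fds]
    for fd in fds:
        os.dup2(log_file.fileno(), fd)
    return saved, log_file

def restore_fds(saved, fds=(1, 2)):
    """Undo redirect_fds_to_file using the descriptors it returned."""
    for orig, fd in zip(saved, fds):
        os.dup2(orig, fd)
        os.close(orig)
```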