Set logging levels

I'm trying to use the standard library to debug my code:

This works fine:

import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info('message')

I can't get the logger to work for the lower levels:

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')


logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')

I don't get any output from either of those.


What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root handler if it's already set up:

This function does nothing if the root logger already has handlers configured for it.

To set the level on the root logger explicitly, use logging.getLogger().setLevel(logging.DEBUG). But make sure you've called basicConfig() beforehand so the root logger initially has a handler attached, i.e.:

import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
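As an aside, on Python 3.8+ there is a simpler escape hatch: basicConfig() accepts force=True, which removes any handlers already attached to the root logger before applying the new configuration. A minimal sketch:

```python
import logging

# The first call configures the root logger; a second plain basicConfig()
# would silently do nothing because handlers already exist.
logging.basicConfig(level=logging.INFO)

# force=True (Python 3.8+) tears down the existing root handlers first,
# so the new level and handler actually take effect.
logging.basicConfig(level=logging.DEBUG, force=True)

logging.getLogger(__name__).debug('now visible')
```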

Also note that loggers and their handlers have distinct, independent log levels. So if you've previously loaded some complex logger configuration in your Python script, and it has messed with the root logger's handler(s), then just changing the logger's level with logging.getLogger().setLevel(..) may not work, because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.
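The point about independent levels can be demonstrated directly. In this sketch (the logger name 'demo' is arbitrary), the logger itself would pass a DEBUG record, but the handler's own WARNING level drops it:

```python
import logging

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)          # the logger lets DEBUG records through...

handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)       # ...but the handler filters them out
logger.addHandler(handler)

logger.debug('dropped by the handler')  # passes the logger's level, rejected by the handler
logger.warning('emitted')               # passes both levels
```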

I use the following setup for logging.

YAML-based config

Create a YAML file called logging.yml like this:

version: 1

formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: simple
    filename: Thrift.log

loggers:
  qsoWidget:
    level: INFO
    handlers: [console, file]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes

Python - The main

The "main" module should look like this:

import logging
import logging.config
import yaml

with open('logging.yml', 'rt') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.info("Contest is starting")

Sub Modules/Classes

These should start like this:

import logging


class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug(f'{self.__class__.__name__} initialized')

Hope that helps you...

In my opinion, this is the best approach for the majority of cases.

Configuration via an INI file

Create a file named logging.ini in the project root directory as below:

[loggers]
keys=root


[logger_root]
level=DEBUG
handlers=screen,file


[formatters]
keys=simple,verbose


[formatter_simple]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s


[formatter_verbose]
format=[%(asctime)s] %(levelname)s [%(filename)s %(name)s %(funcName)s (%(lineno)d)]: %(message)s


[handlers]
keys=file,screen


[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=verbose
level=WARNING
args=('debug.log', 'midnight', 1, 5)


[handler_screen]
class=StreamHandler
formatter=simple
level=DEBUG
args=(sys.stdout,)

Then configure it as below:

import logging


from logging.config import fileConfig


fileConfig('logging.ini')
logger = logging.getLogger('dev')

name = "stackoverflow"
logger.info(f"Hello {name}!")
logger.critical('This message should go to the log file.')
logger.error('So should this.')
logger.warning('And this, too.')
logger.debug('Bye!')

If you run the script, the console output will be:

2021-01-31 03:40:10,241 [INFO] dev: Hello stackoverflow!
2021-01-31 03:40:10,242 [CRITICAL] dev: This message should go to the log file.
2021-01-31 03:40:10,243 [ERROR] dev: So should this.
2021-01-31 03:40:10,243 [WARNING] dev: And this, too.
2021-01-31 03:40:10,243 [DEBUG] dev: Bye!

And the debug.log file should contain:

[2021-01-31 03:40:10,242] CRITICAL [my_loger.py dev <module> (12)]: This message should go to the log file.
[2021-01-31 03:40:10,243] ERROR [my_loger.py dev <module> (13)]: So should this.
[2021-01-31 03:40:10,243] WARNING [my_loger.py dev <module> (14)]: And this, too.

All done.

I wanted to leave the default (root) logger at the WARNING level but have more detailed lower-level loggers for my own code. But nothing was shown. Building on the other answer, it's critical to run logging.basicConfig() beforehand.

import logging
logging.basicConfig()
logging.getLogger('foo').setLevel(logging.INFO)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('info')
logging.getLogger('foo').setLevel(logging.DEBUG)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('debug')

Expected output:

INFO:foo:info
INFO:foo:info
DEBUG:foo:debug

For a logging solution across modules, I did this:

# cfg.py


import logging
logging.basicConfig()
logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.info('active')


# main.py


import cfg
cfg.logger.info('main')
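An alternative to importing the logger object from a config module is to lean on the logger hierarchy: configure a parent logger once, and let each module create a dotted-name child that inherits the parent's level and handlers through propagation. A minimal sketch, where 'myapp' is a placeholder package name:

```python
import logging

# Configure the parent logger once, e.g. in the application entry point.
logging.basicConfig()
logging.getLogger('myapp').setLevel(logging.INFO)

# Any module can then do this without importing shared state: 'myapp.db'
# has no level of its own, so its effective level comes from 'myapp',
# and its records propagate up to the root handler set by basicConfig().
logger = logging.getLogger('myapp.db')
logger.info('connected')
```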