Docker container logs are taking up all my disk space

I am running a container on a VM. By default, my container writes its logs to /var/lib/docker/containers/<container_id>/<container_id>-json.log, and the file grows until the disk is full.

At the moment I have to delete this file manually to keep the disk from filling up. I have read that Docker 1.8 has an option to rotate logs. What would you suggest as a solution for now?


Caution: this post relates to docker versions < 1.8 (which don't have the --log-opt option)

Why don't you use logrotate (which also supports compression)?

/var/lib/docker/containers/*/*-json.log {
  hourly
  rotate 48
  compress
  dateext
  copytruncate
}

Configure it either directly on your CoreOS node or deploy a container (e.g. https://github.com/tutumcloud/logrotate) which mounts /var/lib/docker to rotate the logs.

Docker 1.8 has been released with a log rotation option. Adding:

--log-opt max-size=50m

when the container is launched does the trick. You can learn more at: https://docs.docker.com/engine/admin/logging/overview/

CAUTION: This is for docker-compose version 2 only

Example:

version: '2'
services:
  db:
    container_name: db
    image: mysql:5.7
    ports:
      - 3306:3306
    logging:
      options:
        max-size: 50m

Pass log options while running a container. An example is as follows:

sudo docker run -ti --name visruth-cv-container --log-opt max-size=5m --log-opt max-file=10 ubuntu /bin/bash

where --log-opt max-size=5m specifies the maximum log file size to be 5MB and --log-opt max-file=10 specifies the maximum number of files for rotation.
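To get a feel for what those size strings mean in bytes, here is a small sketch. The helper below is hypothetical (not part of Docker), and it assumes the binary (1024-based) multiples used by Docker's go-units size parser:

```python
# Hypothetical helper (not part of Docker): convert docker-style size
# strings such as "5m" or "200k" to bytes, assuming the binary
# (1024-based) multiples used by Docker's go-units size parser.
def log_size_to_bytes(size: str) -> int:
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    suffix = size[-1].lower()
    if suffix in units:
        return int(size[:-1]) * units[suffix]
    return int(size)  # a plain number is taken as bytes

print(log_size_to_bytes("5m"))    # 5242880
print(log_size_to_bytes("200k"))  # 204800
```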

Example for docker-compose version 1:

mongo:
  image: mongo:3.6.16
  restart: unless-stopped
  log_opt:
    max-size: 1m
    max-file: "10"

[This answer covers current versions of docker for those coming across the question long after it was asked.]

To set the default log limits for all newly created containers, you can add the following in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}

Then reload docker with systemctl reload docker if you are using systemd (otherwise use the appropriate restart command for your install).
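A malformed daemon.json can keep dockerd from starting after the reload, so it may be worth validating the file first. A minimal sketch (the `check_daemon_json` helper is hypothetical; the string-typed values mirror what daemon.json expects for log-opts):

```python
import json

# Hypothetical sanity check for /etc/docker/daemon.json before
# reloading docker. log-opts values must be JSON strings, not numbers.
def check_daemon_json(text: str) -> dict:
    cfg = json.loads(text)  # raises ValueError on malformed JSON
    opts = cfg.get("log-opts", {})
    if "max-file" in opts and not isinstance(opts["max-file"], str):
        raise TypeError('"max-file" must be a JSON string, e.g. "3"')
    return cfg

cfg = check_daemon_json('{"log-driver": "json-file", '
                        '"log-opts": {"max-size": "10m", "max-file": "3"}}')
print(cfg["log-driver"])  # json-file
```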

You can also switch to the local logging driver with a similar file:

{
  "log-driver": "local",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}

The local logging driver stores the log contents in an internal format (I believe protobufs), so you will get more log content in the same size of logfile (or use less disk space for the same logs). The downside of the local driver is that external tools, like log forwarders, may not be able to parse the raw logs. Be aware that docker logs only works when the log driver is set to json-file, local, or journald.

The max-size is a limit on the docker log file, so it includes the json or local log formatting overhead. And the max-file is the number of logfiles docker will maintain. After the size limit is reached on one file, the logs are rotated, and the oldest logs are deleted when you exceed max-file.
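To see where that formatting overhead comes from, each line in a json-file driver log is a JSON object wrapping a single log message. A small sketch (the log line here is made up for illustration):

```python
import json

# One line from a json-file driver log, as written by dockerd
# (the content is made up for illustration).
raw = ('{"log":"hello world\\n","stream":"stdout",'
       '"time":"2019-01-01T00:00:00.000000000Z"}')

entry = json.loads(raw)
print(entry["stream"])       # stdout
print(entry["log"], end="")  # hello world

# The JSON wrapper counts toward max-size, so the payload the
# container actually wrote is smaller than the file size suggests.
print(len(raw) > len(entry["log"]))  # True
```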

For more details, docker has documentation on all the drivers at: https://docs.docker.com/config/containers/logging/configure/

I also have a presentation covering this topic. Use P to see the presenter notes: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#logs

With Compose file format 3.9, you can set a limit on the logs as below:

version: "3.9"
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"

The example shown above stores log files until they reach a max-size of 200 kB, and then rotates them. The number of individual log files kept is specified by the max-file value. As logs grow beyond the max limits, older log files are removed to make room for new ones.
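As a rough back-of-the-envelope for the settings above (assuming k means 1024 bytes, as in Docker's size parsing), the on-disk cap per container is about max-size times max-file:

```python
# Worst-case disk usage for the compose example above:
# max-size: "200k", max-file: "10"
max_size_bytes = 200 * 1024  # "200k"
max_file = 10

cap_per_container = max_size_bytes * max_file
print(cap_per_container)      # 2048000 bytes, i.e. roughly 2 MB

# With several containers using the same settings, the cap scales linearly:
print(cap_per_container * 5)  # 10240000 bytes for 5 containers
```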

Logging options available depend on which logging driver you use

  • The above example for controlling log files and sizes uses options specific to the json-file driver. These particular options are not available on other logging drivers. For a full list of supported logging drivers and their options, refer to the logging drivers documentation.

Note: Only the json-file and journald drivers make the logs available directly from docker-compose up and docker-compose logs. Using any other driver does not print any logs.

Source: https://docs.docker.com/compose/compose-file/compose-file-v3/

Just in case you can't stop your container, I have created a script that performs the following actions (you have to run it with sudo):

  1. Creates a folder to store compressed log files as backup.
  2. Looks for the running container's id (specified by the container's name).
  3. Copies the container's log file to a new location (the folder from step 1) under a random name.
  4. Compresses the previous log file (to save space).
  5. Truncates the container's log file to a size you can define.

Notes:

  • It uses the shuf command. Make sure your Linux distribution has it, or replace it with another random generator available to bash.
  • Before use, change the variable CONTAINER_NAME to match your running container; it can be a partial name (doesn't have to be the exact matching name).
  • By default it truncates the log file to 10M (10 megabytes), but you can change this size by modifying the variable SIZE_TO_TRUNCATE.
  • It creates a folder in the path: /opt/your-container-name/logs, if you want to store the compressed logs somewhere else, just change the variable LOG_FOLDER.
  • Run some tests before running it in production.
#!/bin/bash
set -ex


############################# Main Variables Definition:
CONTAINER_NAME="your-container-name"
SIZE_TO_TRUNCATE="10M"


############################# Other Variables Definition:
CURRENT_DATE=$(date "+%d-%b-%Y-%H-%M-%S")
RANDOM_VALUE=$(shuf -i 1-1000000 -n 1)
LOG_FOLDER="/opt/${CONTAINER_NAME}/logs"
CN=$(docker ps --no-trunc -f name=${CONTAINER_NAME} | awk '{print $1}' | tail -n +2)
LOG_DOCKER_FILE="$(docker inspect --format='{{.LogPath}}' "${CN}")"
LOG_FILE_NAME="${CURRENT_DATE}-${RANDOM_VALUE}"


############################# Procedure:
mkdir -p "${LOG_FOLDER}"
cp "${LOG_DOCKER_FILE}" "${LOG_FOLDER}/${LOG_FILE_NAME}.log"
cd "${LOG_FOLDER}"
tar -cvzf "${LOG_FILE_NAME}.tar.gz" "${LOG_FILE_NAME}.log"
rm -f "${LOG_FILE_NAME}.log"
truncate -s "${SIZE_TO_TRUNCATE}" "${LOG_DOCKER_FILE}"

You can create a cronjob to run the previous script every month. First run:

sudo crontab -e

Press a on your keyboard to enter insert mode (crontab -e typically opens vi). Then add the following line:

0 0 1 * * /your-script-path/script.sh

Hit the Escape key to leave insert mode, then save and quit by typing :wq and pressing Enter. Make sure the script.sh file has execute permissions.

The limits can also be set using the docker run command:

docker run -it -d -v /tmp:/tmp -p 49160:8080 --name web-stats-app --log-opt max-size=10m --log-opt max-file=5 mydocker/stats_app