How to set an environment variable in a running Docker container

If I have a Docker container that I started a while back, what is the best way to set an environment variable in that running container? I originally set an environment variable when I ran the run command:

$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress

But now that it has been running for a while, I want to add another environment variable. I don't want to delete the container and just re-run it with the environment variable I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads that I don't want to lose.

I would just like to change the value of an existing environment variable.


Docker doesn't offer this feature.

There is an issue: "How to set an enviroment variable on an existing container? #8838"

Also from "Allow docker start to take environment variables #7561":

Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.

There are generally two options, because Docker doesn't support this feature at the moment:

  1. Create your own script, which will act as a runner for your command. For example:

    #!/bin/bash
    export VAR1=VAL1
    export VAR2=VAL2
    your_cmd
    
  2. Run your command the following way:

    docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"
    

For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:

You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)

This solution updates the environment variables without the need to delete and re-run the container, migrate volumes, or remember the parameters it was run with.

However, this requires a restart of the docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.
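A minimal sketch of that hack on a systemd-based host (the container ID is a placeholder; on newer Docker versions the file is named config.v2.json):

sudo systemctl stop docker
sudo vi /var/lib/docker/containers/<container-id>/config.v2.json   # edit the "Env" array
sudo systemctl start docker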

You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined, or you specified them on the command line with the -v switch.

You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:

$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress

So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.

A well-done container is always stateless. That means its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while.

In that case it is not dangerous to re-create the container with the same parameters (in your case slightly modified ones), assuming you create it from exactly the same image from which the old one was created and re-use the same volumes with the above-mentioned switch.

After the new container has started successfully and you have verified that everything runs correctly, you can delete the old wordpress container. The old volumes are then referenced from the new container and will not be deleted.
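To illustrate the docker diff check mentioned above, the output for a WordPress container might look something like this (the paths are only hypothetical examples; A marks added files, C changed ones, D deleted ones):

C /var/www/html/wp-content
A /var/www/html/wp-content/uploads/2015
A /var/www/html/wp-content/uploads/2015/header.jpg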

Firstly, you can set an env variable inside the container the same way you would on a normal Linux box.

Secondly, you can do it by modifying the config file of your Docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note that you need to restart the Docker service for it to take effect. This way you can also change other things such as port mappings.
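For reference, the Env entry in config.v2.json is just a JSON array of KEY=value strings, for example (the second variable is only an illustrative placeholder):

"Env": [
    "VIRTUAL_HOST=domain.example",
    "MY_NEW_VAR=my_value",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
]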

To:

  1. set up many env vars in one step,
  2. prevent exposing them in the shell history, as happens with the '-e' option (e.g. when passing credentials/API tokens!),

you can use

--env-file key_value_file.txt

option:

docker run --env-file key_value_file.txt $INSTANCE_ID
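A sketch of what key_value_file.txt could contain; the names here are only examples, and each line is a plain KEY=value pair (no export keyword, no quotes):

VIRTUAL_HOST=domain.example
API_TOKEN=s3cr3t
DEBUG=1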

You can set an environment variable for a running Docker container with

docker exec -it -e "your environment Key"="your new value" <container> /bin/bash

Verify it using the command below:

printenv

This will update your key with the new value provided.

Note: This will revert to the old value if Docker is restarted.
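For example, with placeholder names, you can verify in a single command that the new value is visible to the process you exec (the container's main process keeps the environment it was started with):

docker exec -e MY_VAR=new_value <container> printenv MY_VAR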

Here is how to update a Docker container config permanently:

  1. stop container: docker stop <container name>
  2. edit container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
  3. restart docker

If you are running the container as a service using docker swarm, you can do:

docker service update --env-add <your environment variable> <service_name>

You can also remove a variable using --env-rm.

To make sure it was added as you wanted, just run: docker exec -it <container id> env
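A concrete sketch with placeholder names (a service called my-wordpress-service and the variable from the question):

docker service update --env-add VIRTUAL_HOST=domain.example my-wordpress-service
docker service update --env-rm VIRTUAL_HOST my-wordpress-service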

Use export VAR=Value inside the container.

Then type printenv in the terminal to validate that it is set correctly.

Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8

Live Restore

First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.

sudo vim /etc/docker/daemon.json

My file looks like this:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "live-restore": true
}

Taken from here.

Get the Container ID

Save the ID of the container you want to edit for easier access to the files.

export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`

Edit Container Configuration

Edit the configuration file, go to the "Env" section, and add your key.

sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json

My file looks like this:

...,"Env":["TEST=1",...

Stop and Start Docker

I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.

sudo systemctl stop docker
sudo systemctl start docker

Because of live-restore, your containers should stay up.

Verify That It Worked

docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'

Single quotes are important here, so that $TEST is expanded inside the container rather than by your local shell.

You can also verify that the uptime of your container hasn't changed:

docker ps

I solved this problem with docker commit: after making some modifications in the base container, you only need to tag the new image and start that one.

docs.docker.com/engine/reference/commandline/commit

docker commit [container-id] [tag]

docker commit b0e71de98cb9 stack-overflow:0.0.1

Then you can pass environment variables or an env file:

docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1

The quick working hack would be:

  1. Get into the running container: docker exec -it <container_name> bash

  2. Set the env variable; install vim if it is not already installed in the container:

apt-get install vim

Open ~/.profile with vi and at the end of the file add export MAPPING_FILENAME=p_07302021.

Then run source ~/.profile and check whether it has been set with echo $MAPPING_FILENAME (do this from inside the container).

  3. Now you can run whatever you were running outside of the container from inside the container. Note: in case you're worried that you might lose your work if the session you logged in with gets logged off, you can always use screen even before starting step 1. That way, if your session inside the running container is logged off by chance, you can log back in. The steps are also summarized in the sketch below.
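A minimal sketch of those steps as shell commands, assuming a Debian/Ubuntu-based image (appending to ~/.profile with echo instead of editing it in vi):

docker exec -it <container_name> bash          # 1. enter the container

# 2. inside the container: persist the variable in ~/.profile and load it
apt-get update && apt-get install -y vim       # only needed if you prefer editing with vi
echo 'export MAPPING_FILENAME=p_07302021' >> ~/.profile
source ~/.profile
echo $MAPPING_FILENAME                         # verify it is set

# 3. run whatever you need from inside this session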

First, understand that docker runs an image constructed from a Dockerfile, and the only real way to change that image is to build another one, stop everything, and run everything again.

So the easy way to "set an environment variable in a running docker container" is to read the Dockerfile [1] (with docker inspect) and understand how the container starts [1]. In the example [1] we can see that the container starts with /usr/local/bin/docker-php-entrypoint, and we can edit that file with vi and add one line with export myvar=myvalue, since /usr/local/bin/docker-php-entrypoint is a POSIX shell script.

If you can change the Dockerfile, you can instead add a call to a script [2], for example /usr/local/bin/mystart.sh, and set your environment variable in that file.
Of course, after changing the scripts you need to restart the container [3].

[1]

$ docker inspect 011aa33ba92b
[{
    . . .
    "ContainerConfig": {
        "Cmd": [
            "php-fpm"
        ],
        "WorkingDir": "/app",
        "Entrypoint": [
            "docker-php-entrypoint"
        ],
        . . .
}]

[2]

/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd

[3]

docker restart dev-php (container name)
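If you go the Dockerfile route [2], a minimal sketch could look like this (the base image and file names are just the ones used in the example above; adapt them to your setup):

FROM php:fpm
COPY mystart.sh /usr/local/bin/mystart.sh
RUN chmod +x /usr/local/bin/mystart.sh
# mystart.sh exports the variables and should end by exec-ing the original
# command, e.g.: exec docker-php-entrypoint php-fpm
ENTRYPOINT ["/usr/local/bin/mystart.sh"]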

1. Enter your running container:

sudo docker exec -it <container_name> /bin/bash

2. Append all environment variables available in your current session (excluding no_proxy) to /etc/environment, so they are available to the user sessions that need to run the commands:

printenv | grep -v "no_proxy" >> /etc/environment

3. Stop and Start the container

sudo docker stop <container_name>
sudo docker start <container_name>

Basically you can do it like on a normal Linux box, adding export MY_VAR="value" to the ~/.bashrc file.

Instructions

  1. Using VS Code, attach to your running container
  2. Then with VS Code open the ~/.bashrc file
  3. Export your variable by adding this line at the end of the file:
export MY_VAR="value"
  4. Finally, reload .bashrc using the source command:
source ~/.bashrc

The hack of editing Docker's internal configs and then restarting the Docker daemon was unsuitable for my case.

There is a way to recreate the container with new environment settings and use it for some time.

1. Create a new image from the running container:

docker commit my-service
a1b2c3d4e5f6032165497

Docker created a new image and answered with its ID. Note that the image doesn't include mounts and networks.

2. Stop and rename the original container:

docker stop my-service
docker rename my-service my-service-original

3. Create and start a new container with the modified environment:

docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla \
--env OLD_ENV=zzz \
a1b2c3d4e5f6032165497

Here, I did the following:

  • created a new temporary container from the image built in step 1; it will show its output on the terminal, exit on Ctrl+C, and be deleted after that
  • configured its mounts and networks
  • added my custom environment configuration

4. After you have finished working with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:

docker rename my-service-original my-service
docker start my-service
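Optionally, once the old container is back, you can remove the temporary image created in step 1 (using the ID Docker printed there):

docker rmi a1b2c3d4e5f6032165497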