Multiple entry points in Docker

Suppose I have the following Dockerfile:

FROM ubuntu

RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y mongod #pretend this exists

EXPOSE 80

ENTRYPOINT ["/usr/sbin/apache2"]

The ENTRYPOINT command makes apache2 start when the container starts. I would also like mongod to start when the container starts, with the command service mongod start. However, according to the documentation, there must be only one ENTRYPOINT in a Dockerfile. So what would be the correct way to do this?


I can think of several ways:

  • you can write a script that runs all the startup commands, add it to the container (with ADD), and set that script as the ENTRYPOINT
  • I think you can put any shell commands in a shell-form ENTRYPOINT, so you can do service mongod start && /usr/sbin/apache2 (see the sketch after this list)
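
A minimal sketch of that second approach. It needs the shell form of ENTRYPOINT, since the exec (JSON array) form runs the binary directly without a shell to interpret &&; the -DFOREGROUND flag keeps apache2 in the foreground so the container stays alive:

ENTRYPOINT service mongod start && /usr/sbin/apache2 -DFOREGROUND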

I was not able to get the && approach to work, though. I was able to solve this as described here: https://stackoverflow.com/a/19872810/2971199

So in your case you could do:

RUN echo "/usr/sbin/apache2" >> /etc/bash.bashrc
RUN echo "/path/to/mongodb" >> /etc/bash.bashrc
ENTRYPOINT ["/bin/bash"]

You may need/want to edit your start commands.

Be careful if you build from your Dockerfile more than once: you probably don't want multiple copies of the commands appended to your bash.bashrc file. You could use grep and an if statement (or a || guard) to make your RUN command idempotent, as sketched below.
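
A hypothetical sketch of such a guard, using || instead of a full if statement (paths taken from the snippet above):

RUN grep -qxF "/usr/sbin/apache2" /etc/bash.bashrc || echo "/usr/sbin/apache2" >> /etc/bash.bashrc
RUN grep -qxF "/path/to/mongodb" /etc/bash.bashrc || echo "/path/to/mongodb" >> /etc/bash.bashrc

grep -qxF quietly succeeds when the exact line is already present, so the echo only runs the first time.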

My solution is to throw individual scripts into /opt/run/ and execute them with:

#!/bin/bash

LOG=/var/log/all

touch "$LOG"

# launch every script in /opt/run/ in the background, all logging to one file
for a in /opt/run/*; do
    "$a" >> "$LOG" &
done

# keep the container's main process alive by following the shared log
tail -f "$LOG"

And my entry point is just the location of this script, say it's called /opt/bin/run_all:

ADD 00_sshd /opt/run/
ADD 01_nginx /opt/run/

ADD run_all /opt/bin/
ENTRYPOINT ["/opt/bin/run_all"]

As Jared Markell said, if you want to launch several processes in a Docker container, you have to use supervisor. You will have to configure supervisor to tell it to launch your different processes.

I wrote about this in this blog post, but there is also a really nice article here detailing how and why to use supervisor in Docker.

Basically, you will want to do something like:

FROM ubuntu

RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y mongod #pretend this exists
RUN apt-get install -y supervisor # Installing supervisord

ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 80

ENTRYPOINT ["/usr/bin/supervisord"]

And add a configuration file supervisord.conf:

[supervisord]
nodaemon=true

[program:mongodb]
; To adapt, I don't know how to launch your mongodb process
command=/etc/mongod/mongo

[program:apache2]
command=/usr/sbin/apache2 -DFOREGROUND

EDIT: As this answer has received quite a lot of upvotes, I want to add a warning: using Supervisor is not considered a best practice for running several processes. Instead, you may be interested in creating several containers for your different processes and managing them through Docker Compose. In a nutshell, Docker Compose lets you define all the containers needed for your app in one file and launch them with a single command.
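
As a hedged sketch of what that could look like for this question (the service names, the web build context, and the use of the official mongo image are all assumptions):

# docker-compose.yml
services:
  web:
    build: .          # an image that installs and runs only apache2
    ports:
      - "80:80"
    depends_on:
      - mongo
  mongo:
    image: mongo      # official MongoDB image instead of a hand-rolled install

Running docker compose up then starts both containers, and web can reach mongo by its service name over the Compose network.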

The simple answer is that you should not, because it breaks the single responsibility principle: one container, one service. Imagine that you want to spawn additional MongoDB instances because of a sudden workload spike - why increase the number of Apache2 instances as well, and at a 1:1 ratio? Instead, you should link the containers and make them talk over TCP. See https://docs.docker.com/userguide/dockerlinks/ for more info.

You can't specify multiple entry points in a Dockerfile. To run multiple servers in the same Docker container you must use a command that is able to launch them all. Supervisord has already been mentioned, but I can also recommend multirun, a project of mine which is a lighter alternative.

There is an answer in docker docs: https://docs.docker.com/config/containers/multi-service_container/

But in short:

If you need to run more than one service within a container, you can accomplish this in a few different ways.

The first one is to run a script which manages your processes (a minimal sketch follows below).

The second one is to use a process manager like supervisord.
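
A minimal sketch of the first approach, in the spirit of that docs page (the two commands are assumptions carried over from the question):

#!/bin/bash

# start both processes in the background
mongod &
/usr/sbin/apache2 -DFOREGROUND &

# exit as soon as either one dies, so the container stops
# instead of limping along with only half its services
wait -n
exit $?

Note that wait -n needs bash 4.3 or newer.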

If you are trying to run multiple concurrent npm scripts, such as a watch script and a build script, check out:

How can I run multiple npm scripts in parallel?

Typically, you would not do this. It is an anti-pattern because:

  1. You typically have different update cycles for the two processes
  2. You may want to change base filesystems for each of these processes
  3. You want logging and error handling for each of these processes that are independent of each other
  4. Outside of a shared network or volume, the two processes likely have no other hard dependencies

Therefore the best option is to create two separate images, and start the two containers with a compose file that handles the shared private network.


If you cannot follow that best practice, then you end up in a scenario like the following. The parent image contains a line:

ENTRYPOINT ["/entrypoint-parent.sh"]

and you want to add the following to your child image:

ENTRYPOINT ["/entrypoint-child.sh"]

Then the value of ENTRYPOINT in the resulting image is replaced with /entrypoint-child.sh; in other words, there is only a single value for ENTRYPOINT. Docker will only call a single process to start your container, though that process can spawn child processes. There are a couple of techniques to extend entrypoints.
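
For concreteness, a hypothetical child Dockerfile doing exactly that override (the parent image name is made up):

FROM parent-image
COPY entrypoint-child.sh /entrypoint-child.sh
ENTRYPOINT ["/entrypoint-child.sh"]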

Option A: Call your entrypoint, and then run the parent entrypoint at the end, e.g. /entrypoint-child.sh could look like:

#!/bin/sh

echo "Running child entrypoint initialization steps here"
/usr/bin/mongodb ... &

exec /entrypoint-parent.sh "$@"

The exec part is important: it replaces the current shell with the /entrypoint-parent.sh process, which removes issues with signal handling. The result is that you run the first bit of initialization in the child entrypoint, and then delegate to the original parent entrypoint. This does require that you keep track of the name of the parent entrypoint, which could change between versions of your base image. This also means you lose error handling and graceful termination on mongodb since it is run in the background. This could result in a falsely healthy container and data loss, neither of which I would recommend for a production environment.

Option B: Run the parent entrypoint in the background. This is less than ideal since you will no longer have error handling on the parent process unless you take some extra steps. At the simplest, this looks like the following in your /entrypoint-child.sh:

#!/bin/sh

# other initialization steps

/entrypoint-parent.sh "$@" &

# potentially wait for parent to be running by polling

# run something new in the foreground, that may depend on parent processes
exec /usr/bin/mongodb ...

Note: the "$@" notation I keep using passes the value of CMD through as arguments to the parent entrypoint.
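
For illustration, with a hypothetical pair like:

ENTRYPOINT ["/entrypoint-child.sh"]
CMD ["--port", "27017"]

the script receives --port 27017 as its "$@" and forwards those arguments to the parent entrypoint.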

Option C: Switch to a tool like supervisord. I'm not a huge fan of this since it still implies running multiple daemons inside your container, and it is usually best to split that into multiple containers. You need to decide what the proper response is when a single child process keeps failing.

Option D: Similar to Options A and B, I often create a directory of entrypoint scripts that can be extended at different levels of the image build. The entrypoint itself is unchanged; I just add new files into a directory that gets called sequentially based on the filename. In my scenarios, these scripts are all run in the foreground, and I exec the CMD at the end. You can see an example of this in my base image repo, in particular the entrypoint.d directory and bin/entrypointd.sh script which includes the section:

# ...

for ep in /etc/entrypoint.d/*; do
  ext="${ep##*.}"
  if [ "${ext}" = "env" -a -f "${ep}" ]; then
    # source files ending in ".env"
    echo "Sourcing: ${ep}"
    set -a && . "${ep}" && set +a
  elif [ "${ext}" = "sh" -a -x "${ep}" ]; then
    # run scripts ending in ".sh"
    echo "Running: ${ep}"
    "${ep}"
  fi
done

# ...

# run command with exec to pass control
echo "Running CMD: $@"
exec "$@"

However, the above is more for extending the initialization steps, and not for running multiple daemons inside the container. Given the bad options and issues they each have, I hope it's clear why running two containers would be preferred in your scenario.