Does virtualenv serve a purpose (in production) when using docker?

For development we use virtualenv to get an isolated environment when it comes to dependencies. From this question it seems deploying Python applications in a virtualenv is recommended.

Now we're starting to use Docker for deployment. This provides a more isolated environment, so I'm questioning the use of virtualenv inside a Docker container. In the case of a single application I don't think virtualenv has a purpose, as Docker already provides isolation. In the case where multiple applications are deployed in a single Docker container, I do think virtualenv has a purpose, as the applications can have conflicting dependencies.

Should virtualenv be used when a single application is deployed in a docker container?

Should docker contain multiple applications or only one application per container?

If so, should virtualenv be used when deploying a container with multiple applications?


Virtualenv was created long before docker. Today, I lean towards docker instead of virtualenv for these reasons:

  • Virtualenv still means people consuming your product need to download eggs. With Docker, they get something that is "known to work", no strings attached.
  • Docker can do much more than virtualenv, like create a clean environment when you have products that need different Python versions (see the sketch after this list).
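
As a minimal illustration of that last point, each application can simply pin its own interpreter in its image; the image tags, paths and main.py below are my own assumptions, not part of the answer:

```
# app-a/Dockerfile -- this app pins its own interpreter. A sibling app
# that needs a different Python version just changes the FROM line.
FROM python:2.7-slim
COPY . /app
CMD ["python", "/app/main.py"]
```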

The main drawback of Docker used to be its poor Windows support. That changed with Docker for Windows 10.

As for "how many apps per container", the usual policy is 1.

Introducing virtualenv is very easy, so I'd say start without it in your Docker container.

If the need arises, then maybe you can install it. Running pip freeze > requirements.txt will give you a list of all your Python packages, and the resulting file installs cleanly inside an image, as sketched below. However, I doubt you'll ever need virtualenv inside a Docker container, as creating another container is usually the preferable alternative.
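
A minimal sketch of that workflow, assuming a requirements.txt generated on your development machine and an illustrative app.py entry point:

```
# requirements.txt generated beforehand with: pip freeze > requirements.txt
FROM python:3.11-slim
WORKDIR /app

# Install straight into the image's Python -- the container itself
# is the isolated environment, so no virtualenv is involved.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```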

I would not recommend having more than one application in a single container. When you get to this point, your container is doing too much.

Yes. You should still use virtualenv. Also, you should be building wheels instead of eggs now. Finally, you should keep your Docker image lean and efficient by building your wheels in a container that has the full build toolchain, and by installing no build tools into your application container (see the sketch below).
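
A rough sketch of that split using a multi-stage build; the base image tags and main.py are assumptions, not from the answer:

```
# Stage 1: full toolchain, used only to compile the wheels.
FROM python:3.11 AS build
COPY requirements.txt .
RUN pip wheel --wheel-dir=/wheels -r requirements.txt

# Stage 2: slim runtime image; no compilers or build tools installed.
FROM python:3.11-slim
COPY requirements.txt .
COPY --from=build /wheels /wheels
RUN pip install --no-index --find-links=/wheels -r requirements.txt
COPY . /app
CMD ["python", "/app/main.py"]
```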

You should read this excellent article: https://glyph.twistedmatrix.com/2015/03/docker-deploy-double-dutch.html

The key takeaway is:

It’s true that in many cases, perhaps even most, simply installing stuff into the system Python with Pip works fine; however, for more elaborate applications, you may end up wanting to invoke a tool provided by your base container that is implemented in Python, but which requires dependencies managed by the host. By putting things into a virtualenv regardless, we keep the things set up by the base image’s package system tidily separated from the things our application is building, which means that there should be no unforeseen interactions, regardless of how complex the application’s usage of Python might be.
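
A minimal sketch of the pattern the quote describes, keeping the app's dependencies out of the base image's system Python; the base image, paths and main.py are illustrative assumptions:

```
# The base image ships its own Python tooling; a virtualenv keeps the
# application's dependencies cleanly separated from it.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-venv \
    && rm -rf /var/lib/apt/lists/*

RUN python3 -m venv /appenv
COPY requirements.txt .
RUN /appenv/bin/pip install -r requirements.txt

COPY . /app
# Run through the venv's interpreter; the system Python stays untouched.
CMD ["/appenv/bin/python", "/app/main.py"]
```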

If you want to replace virtualenv completely with Docker, you can.

Just create a different Dockerfile for each environment, and map ports and volumes as each environment needs.

As an example for development you can use this project: run docker compose and start coding. Then write your own Dockerfiles for the other environments, like test, staging and production, putting your logs and data in volumes; a rough sketch follows.
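
As a hedged illustration of the per-environment idea (the file names, port and log path are my assumptions):

```
# Dockerfile.dev -- the source tree is bind-mounted at runtime, e.g.
#   docker run -v "$PWD":/app -p 8000:8000 myapp-dev
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "app.py"]

# A Dockerfile.prod would instead bake the code in with COPY . . and
# declare VOLUME /var/log/myapp so logs and data live outside the image.
```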

This link is also useful: https://vsupalov.com/docker-python-development/

I use both, because virtualenv makes multi-stage builds easier: you build your dependencies into a virtualenv in one stage and simply copy the whole thing into a later image/layer. An example can be found here, and a sketch follows.
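
A minimal sketch of that pattern, assuming a requirements.txt and an illustrative main.py; the /opt/venv path and image tags are my own choices:

```
# Stage 1: build all dependencies into a self-contained virtualenv.
FROM python:3.11 AS builder
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt

# Stage 2: copy the entire virtualenv into a slim runtime image.
FROM python:3.11-slim
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app
CMD ["python", "/app/main.py"]
```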