How to set target hosts in a Fabric file

I want to use Fabric to deploy my web application code to development, staging and production servers:

def deploy_2_dev():
    deploy('dev')


def deploy_2_staging():
    deploy('staging')


def deploy_2_prod():
    deploy('prod')


def deploy(server):
    print 'env.hosts:', env.hosts
    env.hosts = [server]
    print 'env.hosts:', env.hosts

Sample output:

host:folder user$ fab deploy_2_dev
env.hosts: []
env.hosts: ['dev']
No hosts found. Please specify (single) host string for connection:

When I create a set_hosts() task as shown in the Fabric docs, env.hosts is set correctly. However, that is not a viable option, and neither is a decorator. Passing hosts on the command line would ultimately result in some kind of shell script that calls the fabfile; I would prefer to have one single tool do the job properly.

The Fabric documentation says that 'env.hosts is simply a Python list object'. From my observations, that is simply not true.

Can anyone explain what is going on here? How can I set the hosts to deploy to?

66802 views

Was stuck on this myself, but finally figured it out. You simply can't set the env.hosts configuration from within a task. Each task is executed N times, once for each Host specified, so the setting is fundamentally outside of task scope.

Looking at your code above, you could simply do this:

@hosts('dev')
def deploy_dev():
    deploy()


@hosts('staging')
def deploy_staging():
    deploy()


def deploy():
    # do stuff...
    pass

Which seems like it would do what you're intending.

Or you can write some custom code in the global scope that parses the arguments manually, and sets env.hosts before your task function is defined. For a few reasons, that's actually how I've set mine up.
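As a rough sketch of that global-scope approach (every name here is illustrative, not part of Fabric's API), you could inspect sys.argv at fabfile import time and pick the host list before any task function is defined:

```python
# Illustrative sketch only: map made-up env names to made-up hosts and
# scan sys.argv when the fabfile is imported. Neither HOSTS_BY_ENV nor
# pick_hosts() is part of Fabric.
import sys

HOSTS_BY_ENV = {
    'dev': ['dev.example.com'],
    'staging': ['staging.example.com'],
    'prod': ['prod.example.com'],
}


def pick_hosts(argv):
    """Return the host list for the first recognized env name in argv."""
    for arg in argv:
        if arg in HOSTS_BY_ENV:
            return HOSTS_BY_ENV[arg]
    return []


# In a fabfile, at module level, before any task function is defined:
# env.hosts = pick_hosts(sys.argv)
```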

You need to set host_string; an example would be:

from fabric.api import run
from fabric.context_managers import settings as _settings


def _get_hardware_node(virtualized):
    return "localhost"


def mystuff(virtualized):
    real_host = _get_hardware_node(virtualized)
    with _settings(host_string=real_host):
        run("echo I run on the host %s :: `hostname -f`" % (real_host,))

You can assign to env.host_string before executing a subtask. Assign to this global variable in a loop if you want to iterate over multiple hosts.
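A minimal sketch of that loop, assuming Fabric 1.x (the helper and the host names are made up; only the env.host_string assignment is Fabric's):

```python
# Hypothetical helper: build the "user@host" strings that would be
# assigned to env.host_string one at a time.
def host_strings(user, hosts):
    return ['%s@%s' % (user, h) for h in hosts]


# In a fabfile (Fabric 1.x), the loop would look like:
# for hs in host_strings('deploy', ['web1.example.com', 'web2.example.com']):
#     env.host_string = hs
#     run('uptime')
```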

Unfortunately for you and me, fabric is not designed for this use case. Check out the main function at http://github.com/bitprophet/fabric/blob/master/fabric/main.py to see how it works.

You need to modify env.hosts at the module level, not within a task function. I made the same mistake.

from fabric.api import *


def _get_hosts():
    hosts = []
    # ... populate 'hosts' list ...
    return hosts


env.hosts = _get_hosts()


def your_task():
    # ... your task ...
    pass

I do this by declaring an actual function for each environment. For example:

def test():
    env.user = 'testuser'
    env.hosts = ['test.server.com']


def prod():
    env.user = 'produser'
    env.hosts = ['prod.server.com']


def deploy():
    # ...
    pass

Using the above functions, I would type the following to deploy to my test environment:

fab test deploy

...and the following to deploy to production:

fab prod deploy

The nice thing about doing it this way is that the test and prod functions can be used before any fab function, not just deploy. It is incredibly useful.

To explain why it's even an issue: the fab command uses the Fabric library to run tasks against the host list. If you try to change the host list inside a task, you're essentially attempting to change a list while iterating over it. And in the case where you have no hosts defined, you loop over an empty list, so the code that would set the list to loop over is never executed.

Using env.host_string is a workaround for this behavior only in that it specifies directly to the functions which host to connect to. It causes some issues of its own, in that you end up re-implementing the execution loop if you want to run on a number of hosts.

The simplest way people enable setting hosts at run time is to keep the env population as a distinct task that sets up all the host strings, users, etc. Then they run the deploy task. It looks like this:

fab production deploy

or

fab staging deploy

Where staging and production are like the tasks you have given, except that they do not call the next task themselves. The reason it has to work like this is that the current task has to finish and break out of its loop of hosts (in your case an empty one, but a loop of one at that point), and only then does the loop over the hosts, now defined by the preceding task, start anew.

Use roledefs

from fabric.api import env, run


env.roledefs = {
    'test': ['localhost'],
    'dev': ['user@dev.example.com'],
    'staging': ['user@staging.example.com'],
    'production': ['user@production.example.com'],
}


def deploy():
    run('echo test')

Choose role with -R:

$ fab -R test deploy
[localhost] Executing task 'deploy'
...
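Conceptually, -R resolves each named role against env.roledefs and runs the task once per resulting host. A rough sketch of that lookup (illustrative only, not Fabric's actual implementation):

```python
# Illustrative only: combine the host lists of the requested roles,
# preserving order and skipping duplicate hosts.
def hosts_for_roles(roledefs, roles):
    hosts = []
    for role in roles:
        for host in roledefs.get(role, []):
            if host not in hosts:
                hosts.append(host)
    return hosts
```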

Here's a simpler version of serverhorror's answer:

from fabric.api import run, settings


def mystuff():
    with settings(host_string='192.0.2.78'):
        run("hostname -f")

It's very simple. Just initialize the env.host_string variable, and all of the following commands will be executed on that host.

from fabric.api import env, run


env.host_string = 'user@example.com'


def foo():
    run("hostname -f")

Using roles is currently considered to be the "proper" and "correct" way of doing this, and is how you "should" do it.

That said, if you are like me, then most of what you "would like" or "desire" is the ability to perform a "twisted sister" and switch target systems on the fly.

So, for entertainment purposes only(!), the following example illustrates what many might consider to be a risky, and yet somehow thoroughly satisfying, manoeuvre that goes something like this:

env.remote_hosts       = env.hosts = ['10.0.1.6']
env.remote_user        = env.user = 'bob'
env.remote_password    = env.password = 'password1'
env.remote_host_string = env.remote_user + '@' + env.remote_hosts[0]


env.local_hosts        = ['127.0.0.1']
env.local_user         = 'mark'
env.local_password     = 'password2'


def perform_sumersault():
    env.host_string = env.local_user + '@' + env.local_hosts[0]
    env.password = env.local_password
    run("hostname -f")
    env.host_string = env.remote_host_string
    env.password = env.remote_password
    run("hostname -f")

Then running:

fab perform_sumersault

Contrary to some other answers, it is possible to modify the env environment variables within a task. However, this env will only be used for subsequent tasks executed using the fabric.tasks.execute function.

from fabric.api import task, roles, run, env
from fabric.tasks import execute


# Not a task, plain old Python to dynamically retrieve list of hosts
def get_stressors():
    hosts = []
    # logic ...
    return hosts


@task
def stress_test():
    # 1) Dynamically generate hosts/roles
    stressors = get_stressors()
    env.roledefs['stressors'] = map(lambda x: x.public_ip, stressors)

    # 2) Wrap sub-tasks you want to execute on the new env in execute(...)
    execute(stress)

    # 3) Note that sub-tasks not nested in execute(...) will use the original env
    clean_up()


@roles('stressors')
def stress():
    # this function will see any changes to env, as it was wrapped in execute(...)
    run('echo "Running stress test..."')
    # ...


@task
def clean_up():
    # this task will NOT see any dynamic changes to env
    pass
Without wrapping sub-tasks in execute(...), your module-level env settings or whatever is passed from the fab CLI will be used.

Since fab 1.5 this is a documented way to dynamically set hosts.

http://docs.fabfile.org/en/1.7/usage/execution.html#dynamic-hosts

Quote from the doc below.

Using execute with dynamically-set host lists

A common intermediate-to-advanced use case for Fabric is to parameterize lookup of one’s target host list at runtime (when use of Roles does not suffice). execute can make this extremely simple, like so:

from fabric.api import run, execute, task


# For example, code talking to an HTTP API, or a database, or ...
from mylib import external_datastore


# This is the actual algorithm involved. It does not care about host
# lists at all.
def do_work():
    run("something interesting on a host")


# This is the user-facing task invoked on the command line.
@task
def deploy(lookup_param):
    # This is the magic you don't get with @hosts or @roles.
    # Even lazy-loading roles require you to declare available roles
    # beforehand. Here, the sky is the limit.
    host_list = external_datastore.query(lookup_param)
    # Put this dynamically generated host list together with the work to be
    # done.
    execute(do_work, hosts=host_list)

I'm totally new to fabric, but to get fabric to run the same commands on multiple hosts (e.g. to deploy to multiple servers, in one command) you can run:

fab -H staging-server,production-server deploy

where staging-server and production-server are 2 servers you want to run the deploy action against. Here's a simple fabfile.py that will display the OS name. Note that the fabfile.py should be in the same directory as where you run the fab command.

from fabric.api import *


def deploy():
    run('uname -s')

This works with fabric 1.8.1 at least.

So, in order to set the hosts, and have the commands run across all the hosts, you have to start with:

from fabric.api import env, sudo


def PROD():
    env.hosts = ['10.0.0.1', '10.0.0.2']


def deploy(version='0.0'):
    sudo('deploy %s' % version)

Once those are defined, then run the command on the command line:

fab PROD deploy:1.5

This will run the deploy task across all of the servers listed in the PROD function, since PROD sets env.hosts before the task runs.
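The deploy:1.5 form is fab's task-argument syntax: everything after the colon becomes positional and keyword arguments to the task function. Conceptually it works roughly like this sketch (illustrative only, not Fabric's actual parser):

```python
# Illustrative sketch: turn "task:arg1,key=val" into (name, args, kwargs),
# the shape in which fab hands arguments to the task function.
def parse_task_spec(spec):
    if ':' not in spec:
        return spec, (), {}
    name, _, argstr = spec.partition(':')
    args, kwargs = [], {}
    for part in argstr.split(','):
        if '=' in part:
            key, _, value = part.partition('=')
            kwargs[key] = value
        else:
            args.append(part)
    return name, tuple(args), kwargs
```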

Here's another "somersault" pattern that enables the fab my_env_1 my_command usage:

With this pattern, we only have to define environments one time, using a dictionary. env_factory creates functions based on the key names of ENVS. I put ENVS in its own directory and file (secrets/config.py) to separate config from the Fabric code.

The drawback is that, as written, adding the @task decorator will break it.

Notes: we use def func(k=k): instead of def func(): in the factory because of Python's late binding of closures. We get the currently running module and patch it to define each function.
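The late-binding point can be seen in plain Python: a closure looks up a free variable when it is called, while a default argument is evaluated when the function is defined:

```python
# Without the k=k default, every generated function sees the loop
# variable's final value; with the default, each captures its own value.
def make_funcs_late(names):
    return {n: (lambda: n) for n in names}


def make_funcs_bound(names):
    return {n: (lambda n=n: n) for n in names}
```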

secrets/config.py

ENVS = {
    'my_env_1': {
        'HOSTS': [
            'host_1',
            'host_2',
        ],
        'MY_OTHER_SETTING': 'value_1',
    },
    'my_env_2': {
        'HOSTS': ['host_3'],
        'MY_OTHER_SETTING': 'value_2',
    },
}

fabfile.py

import sys
from fabric.api import env
from secrets import config


def _set_env(env_name):
    # can easily customize for various use cases
    selected_config = config.ENVS[env_name]
    for k, v in selected_config.items():
        setattr(env, k, v)


def _env_factory(env_dict):
    for k in env_dict:
        def func(k=k):
            _set_env(k)
        setattr(sys.modules[__name__], k, func)


_env_factory(config.ENVS)


def my_command():
    # do work
    pass