How do I register a variable and persist it between plays targeted on different nodes?

I have an Ansible playbook, where I would like a variable I register in a first play targeted on one node to be available in a second play, targeted on another node.

Here is the playbook I am using:

---
- hosts: localhost
  gather_facts: no

  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no

  tasks:
    - debug:
        msg: "{{ foo.stdout }}"

But, when I try to access the variable in the second play, targeted on main, I get this message:

The task includes an option with an undefined variable. The error was: 'foo' is undefined

How can I access foo, registered on localhost, from main?


The problem you're running into is that you're trying to reference facts/variables of one host from those of another host.

You need to keep in mind that in Ansible, the variable foo assigned to the host localhost is distinct from the variable foo assigned to the host main or any other host.
If you want to access one host's facts/variables from another host, then you need to explicitly reference them via the hostvars variable. There's a bit more of a discussion on this in this question.

Suppose you have a playbook like this:

- hosts: localhost
  gather_facts: no

  tasks:
    - command: echo "hello world"
      register: foo

- hosts: localhost
  gather_facts: no

  tasks:
    - debug:
        var: foo

This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays.

The output of this playbook is something like this:

PLAY [localhost] **************************************************

TASK: [command] ***************************************************
changed: [localhost]

PLAY [localhost] **************************************************

TASK: [debug] *****************************************************
ok: [localhost] => {
    "var": {
        "foo": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.004585",
            "end": "2015-11-24 20:49:27.462609",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:49:27.458024",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}

If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered. The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first host's variable.

So modify the first example like this:

- hosts: localhost
  gather_facts: no

  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no

  tasks:
    - debug:
        var: foo
      when: foo is defined

    - debug:
        var: hostvars['localhost']['foo']
        ## alternatively, you can use:
        # var: hostvars.localhost.foo
      when: hostvars['localhost']['foo'] is defined

The output of this playbook shows that the first task is skipped because foo is not defined on the host main, but the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:

TASK: [debug] *************************************************
skipping: [main]

TASK: [debug] *************************************************
ok: [main] => {
    "var": {
        "hostvars['localhost']['foo']": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.005950",
            "end": "2015-11-24 20:54:04.319147",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:54:04.313197",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}

So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:

{{ hostvars['localhost']['foo'] }}
{# alternatively, you can use: #}
{{ hostvars.localhost.foo }}
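Applied to the playbook from the question, the second play would then reference localhost's copy of foo directly in the msg (a sketch, using the same hosts and variable names as the question):

```yaml
- hosts: main
  gather_facts: no

  tasks:
    # foo was registered by the play targeted on localhost,
    # so reach for it through hostvars instead of a bare reference
    - debug:
        msg: "{{ hostvars['localhost']['foo']['stdout'] }}"
```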

I have had similar issues even with the same host, but across different plays. The thing to remember is that facts, not variables, are the persistent things across plays. Here is how I get around the problem.

#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistant
      set_fact:
        aws_region: "{{ aws_region }}"
        aws_state: "{{ aws_state }}"

- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{ hostvars['localhost']['aws_region'] }}"
        aws_state: "{{ hostvars['localhost']['aws_state'] }}"

    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }} id {{ ec2_id }} "

- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }} "

results in

Enter AWS Region: [eu-west-2]:

PLAY [TearDown Infrastructure !!!!!!!] ***************************************

TASK [Make vars persistant] **************************************************
ok: [localhost]

PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************

TASK [set the facts per host] ************************************************
ok: [XXXXXXXXXXXXXXXXX]

TASK [debug] *****************************************************************
ok: [XXXXXXXXXXX] => {
    "changed": false,
    "msg": "state absent region eu-west-2 id i-0XXXXX1 "
}

PLAY [last few bits] *********************************************************

TASK [debug] *****************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "state absent region eu-west-2 "
}

PLAY RECAP *******************************************************************
XXXXXXXXXXXXX              : ok=2    changed=0    unreachable=0    failed=0
localhost                  : ok=2    changed=0    unreachable=0    failed=0

Use a dummy host and its variables

For example, to pass a Kubernetes token and hash from the master to the workers.

On master

- name: "Cluster token"
  shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
  register: K8S_TOKEN

- name: "CA Hash"
  shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  register: K8S_MASTER_CA_HASH

- name: "Add K8S Token and Hash to dummy host"
  add_host:
    name: "K8S_TOKEN_HOLDER"
    token: "{{ K8S_TOKEN.stdout }}"
    hash: "{{ K8S_MASTER_CA_HASH.stdout }}"

- debug:
    msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- debug:
    msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

On worker

- debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

- name: "Kubeadm join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{ K8S_MASTER_NODE_IP }}:{{ K8S_API_SERCURE_PORT }}

You can take advantage of a known Ansible behaviour: loading variables from the group_vars folder of your playbook. This is intended to be used together with inventory groups, but it still works as a global variable declaration. If you put a file or folder in there with the same name as the group you want a variable to be available to, Ansible will make sure it happens!

As an example, let's create a file called all and put a timestamp variable there. Then, whenever you need it, you can call that variable, which will be available to every host declared in any play inside your playbook.

I usually do this to update a timestamp once in the first play and use the value to write files and folders using the same timestamp.

I'm using the lineinfile module to change the line starting with timestamp:

Check if it fits for your purpose.

On your group_vars/all

timestamp: t26032021165953

On the playbook, in the first play:

- hosts: localhost
  gather_facts: no

  tasks:
    - name: Set timestamp on group_vars
      lineinfile:
        path: "{{ playbook_dir }}/group_vars/all"
        insertafter: EOF
        regexp: '^timestamp:'
        line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
        state: present

On the playbook, in the second play:

- hosts: any_hosts
  gather_facts: no

  tasks:
    - name: Check if timestamp is there
      debug:
        msg: "{{ timestamp }}"