A simple playbook
Run the playbook
# ansible-playbook -i hosts site.yml
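For reference, a minimal playbook of the shape this command expects might look like the following sketch; hosts: all and the ping task are illustrative assumptions, not the contents of the actual site.yml:
---
- hosts: all
  remote_user: root
  tasks:
    - name: Verify that the managed hosts are reachable
      ping: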
Sample playbook
Modify the sample configuration file
$ cd ansible-examples/tomcat-standalone
$ vim hosts
[tomcat-servers]
10.0.3.200
Run the playbook
### If a run fails, simply re-run it, adding the --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry argument suggested in the error output
# ansible-playbook -i hosts site.yml
TASK [selinux : Install libselinux-python] *************************************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "Failure talking to yum: Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again"}
to retry, use: --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry
### Run yum update once inside the container to refresh the repository metadata; this only rebuilds the cache, no packages need to be installed
TASK [tomcat : insert firewalld rule for tomcat http port] *********************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "firewalld and its python 2 module are required for this module"}
RUNNING HANDLER [tomcat : restart tomcat] **************************************
to retry, use:
### Install firewalld in the container
# yum search firewalld |grep python
python-firewall.noarch : Python2 bindings for firewalld
# yum install python-firewall.noarch
# systemctl enable firewalld
# systemctl start firewalld
Using roles
Standard role structure
# tree ansible-sshd/
ansible-sshd/
├── CHANGELOG
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── LICENSE
├── meta
│   ├── 10_top.j2
│   ├── 20_middle.j2
│   ├── 30_bottom.j2
│   ├── main.yml
│   ├── make_option_list
│   ├── options_body
│   └── options_match
├── README.md
├── tasks
│   └── main.yml
├── templates
│   └── sshd_config.j2
├── tests
│   ├── inventory
│   ├── roles
│   │   └── ansible-sshd -> ../../.
│   └── test.yml
├── Vagrantfile
└── vars
    ├── Amazon.yml
    ├── Archlinux.yml
    ├── Debian_8.yml
    ├── Debian.yml
    ├── default.yml
    ├── Fedora.yml
    ├── FreeBSD.yml
    ├── OpenBSD.yml
    ├── RedHat_6.yml
    ├── RedHat_7.yml
    ├── Suse.yml
    ├── Ubuntu_12.yml
    ├── Ubuntu_14.yml
    └── Ubuntu_16.yml
Directory layout
defaults
Holds default variables for the role; should contain a main.yml file.
handlers
Should contain a main.yml file defining the handlers used by this role; any additional handler files pulled in via include must also live in this directory.
meta
Should contain a main.yml file defining the role's metadata and its dependencies.
tasks
Should contain at least a main.yml file defining the role's task list; this file may include other task files located in this directory.
templates
The template module automatically looks for Jinja2 template files in this directory.
vars
Defines the variables used by this role.
files
Holds files referenced by modules such as copy and script.
tests
Contains an example of using the role in a playbook.
Using roles
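A role is consumed by listing it under roles: in a play. A minimal sketch, reusing the tomcat-servers group and the ansible-sshd role shown above:
---
- hosts: tomcat-servers
  remote_user: root
  roles:
    - ansible-sshd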
Role task execution order
### The main.yml under meta is processed first
### Then the main.yml under tasks is executed
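Role dependencies declared in meta/main.yml are resolved during that first step, so dependent roles run before this role's own tasks. A minimal sketch (the common role name is hypothetical):
### meta/main.yml
dependencies:
  - role: common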
file
Sets the attributes of files, symlinks, and directories, or removes them.
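A minimal sketch of both uses (the paths are illustrative):
- name: Create a directory with fixed ownership
  file:
    path: "/etc/foo"          # hypothetical path
    state: directory
    owner: root
    mode: "0755"
- name: Remove a stale lock file
  file:
    path: "/tmp/foo.lock"     # hypothetical path
    state: absent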
lineinfile
Ensures that a particular line is present in a file, or replaces an existing line matched by a back-referencing regular expression.
- name: Extra lxc config
  lineinfile:
    dest: "/var/lib/lxc/{{ inventory_hostname }}/config"
    line: "{{ item.split('=')[0] }} = {{ item.split('=', 1)[1] }}"
    insertafter: "^{{ item.split('=')[0] }}"
    backup: "true"
  with_items: "{{ extra_container_config | default([]) }}"
  delegate_to: "{{ physical_host }}"
  register: _ec
  when: not is_metal | bool
  tags:
    - common-lxc
replace
A multi-line counterpart to lineinfile: it replaces every part of the file that matches a regular expression with the supplied content.
### Replace the content between the [ml2] header and the [ml2_type_vlan] section of ml2_conf.ini
- name: Enable ovn in neutron-server
  replace:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    regexp: '\[ml2\][\S\s]*(?=\[ml2_type_vlan\])'
    replace: |+
      [ml2]
      type_drivers = local,flat,vlan,geneve
      tenant_network_types = geneve
      mechanism_drivers = ovn
      extension_drivers = port_security
      overlay_ip_version = 4
      [ml2_type_geneve]
      vni_ranges = 1:65536
      max_header_size = 38
      [ovn]
      ovn_nb_connection = tcp:{{ api_interface_address }}:{{ ovn_northdb_port }}
      ovn_sb_connection = tcp:{{ api_interface_address }}:{{ ovn_sourthdb_port }}
      ovn_l3_mode = False
      ovn_l3_scheduler = chance
      ovn_native_dhcp = True
      neutron_sync_mode = repair
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in groups['network']
  notify:
    - Restart neutron-server container
ini_file
Edits files in INI format.
### Set the external_network_bridge option in the [DEFAULT] section of l3_agent.ini to br-ex
- name: Set the external network bridge
  vars:
    agent: "{{ 'neutron-vpnaas-agent' if enable_neutron_vpnaas | bool else 'neutron-l3-agent' }}"
  ini_file:
    dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini"
    section: "DEFAULT"
    option: "external_network_bridge"
    value: "{{ neutron_bridge_name | default('br-ex') }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item }}"
  with_items: "{{ groups['neutron-server'] }}"
  notify:
    - Restart {{ agent }} container
assemble
Assembles multiple files into a single file.
### Concatenate the files under /etc/haproxy/conf.d into /etc/haproxy/haproxy.cfg
- name: Regenerate haproxy configuration
  assemble:
    src: "/etc/haproxy/conf.d"
    dest: "/etc/haproxy/haproxy.cfg"
  notify: Restart haproxy
  tags:
    - haproxy-general-config
with_items
The standard loop for repeating a task; {{ item }} is expanded, macro-like, to each element in turn.
- name: add several users
  user:
    name: "{{ item.name }}"
    state: present
    groups: "{{ item.groups }}"
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }
with_nested
Loops over the Cartesian product of several lists; item[0], item[1], ... index into each list.
### Set the relevant ml2_conf.ini options on every host in the neutron-server group
- name: Enable ovn in neutron-server
  vars:
    params:
      - { section: 'ml2', option: 'type_drivers', value: 'local,flat,vlan,geneve' }
      - { section: 'ml2', option: 'tenant_network_types', value: 'geneve' }
      - { section: 'ml2', option: 'mechanism_drivers', value: 'ovn' }
      - { section: 'ml2', option: 'extension_drivers', value: 'port_security' }
      - { section: 'ml2', option: 'overlay_ip_version', value: '4' }
      - { section: 'securitygroup', option: 'enable_security_group', value: 'True' }
  ini_file:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    section: "{{ item[0].section }}"
    option: "{{ item[0].option }}"
    value: "{{ item[0].value }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item[1] }}"
  with_nested:
    - "{{ params }}"
    - "{{ groups['neutron-server'] }}"
  notify:
    - Restart neutron-server container
Setting task tags
tasks:
  - yum: name={{ item }} state=installed
    with_items:
      - httpd
      - memcached
    tags:
      - packages
  - template: src=templates/src.j2 dest=/etc/foo.conf
    tags:
      - configuration
### When running the playbook you can execute only the tagged tasks, or skip them
# ansible-playbook example.yml --tags "configuration,packages"
# ansible-playbook example.yml --skip-tags "notification"
failed_when
Controls the condition under which a task is reported as failed, aborting the play for that host.
- name: Check if firewalld is installed
  command: rpm -q firewalld
  register: firewalld_check
  failed_when: firewalld_check.rc > 1
  when: ansible_os_family == 'RedHat'
pre_tasks/post_tasks
Define tasks to run before and after the play's roles.
- name: Install the aodh components
  hosts: aodh_all
  gather_facts: "{{ gather_facts | default(True) }}"
  max_fail_percentage: 20
  user: root
  pre_tasks:
    - include: common-tasks/os-lxc-container-setup.yml
    - include: common-tasks/rabbitmq-vhost-user.yml
      static: no
      vars:
        user: "{{ aodh_rabbitmq_userid }}"
        password: "{{ aodh_rabbitmq_password }}"
        vhost: "{{ aodh_rabbitmq_vhost }}"
        _rabbitmq_host_group: "{{ aodh_rabbitmq_host_group }}"
      when:
        - inventory_hostname == groups['aodh_api'][0]
        - groups[aodh_rabbitmq_host_group] | length > 0
    - include: common-tasks/os-log-dir-setup.yml
      vars:
        log_dirs:
          - src: "/openstack/log/{{ inventory_hostname }}-aodh"
            dest: "/var/log/aodh"
    - include: common-tasks/mysql-db-user.yml
      static: no
      vars:
        user_name: "{{ aodh_galera_user }}"
        password: "{{ aodh_container_db_password }}"
        login_host: "{{ aodh_galera_address }}"
        db_name: "{{ aodh_galera_database }}"
      when: inventory_hostname == groups['aodh_all'][0]
    - include: common-tasks/package-cache-proxy.yml
  roles:
    - role: "os_aodh"
      aodh_venv_tag: "{{ openstack_release }}"
      aodh_venv_download_url: "{{ openstack_repo_url }}/venvs/{{ openstack_release }}/{{ ansible_distribution | lower }}/aodh-{{ openstack_release }}-{{ ansible_architecture | lower }}.tgz"
    - role: "openstack_openrc"
      tags:
        - openrc
    - role: "rsyslog_client"
      rsyslog_client_log_rotate_file: aodh_log_rotate
      rsyslog_client_log_dir: "/var/log/aodh"
      rsyslog_client_config_name: "99-aodh-rsyslog-client.conf"
      tags:
        - rsyslog
  vars:
    is_metal: "{{ properties.is_metal|default(false) }}"
    aodh_rabbitmq_userid: aodh
    aodh_rabbitmq_vhost: /aodh
    aodh_rabbitmq_servers: "{{ rabbitmq_servers }}"
    aodh_rabbitmq_port: "{{ rabbitmq_port }}"
    aodh_rabbitmq_use_ssl: "{{ rabbitmq_use_ssl }}"
  tags:
    - aodh
delegate_to
Runs the current task on a different host than the one being configured.
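A minimal sketch of the classic use case, draining a node from a load balancer; the lb01 host and the script path are hypothetical:
- name: Take the host out of rotation on the load balancer
  command: /usr/local/bin/remove_from_lb {{ inventory_hostname }}   # hypothetical script
  delegate_to: lb01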
local_action
Runs the task on the Ansible control machine (the host running ansible-playbook).
- name: Check if the git cache exists on deployment host
  local_action:
    module: stat
    path: "{{ repo_build_git_cache }}"
  register: _local_git_cache
  when: repo_build_git_cache is defined
Managing users and groups
group
Creates a group.
### Create the haproxy system group; state: present creates the group if it is absent, state: absent removes it if it exists
- name: Create the haproxy system group
  group:
    name: "haproxy"
    state: "present"
    system: "yes"
  tags:
    - haproxy-group
### Create the haproxy:haproxy user and its home directory
- name: Create the haproxy system user
  user:
    name: "haproxy"
    group: "haproxy"
    comment: "haproxy user"
    shell: "/bin/false"
    system: "yes"
    createhome: "yes"
    home: "/var/lib/haproxy"
  tags:
    - haproxy-user
authorized_key
Adds an SSH public key to a user's authorized keys.
- name: Create authorized keys file from host vars
  authorized_key:
    user: "{{ repo_service_user_name }}"
    key: "{{ hostvars[item]['repo_pubkey'] | b64decode }}"
  with_items: "{{ groups['repo_all'] }}"
  when: hostvars[item]['repo_pubkey'] is defined
  tags:
    - repo-key
    - repo-key-store
slurp
Reads a file from a remote host; the content comes back base64-encoded and can be decoded with the b64decode filter.
### Read id_rsa.pub and store its contents in the variable repo_pub
- name: Get public key contents and store as var
  slurp:
    src: "{{ repo_service_home_folder }}/.ssh/id_rsa.pub"
  register: repo_pub
  changed_when: false
  tags:
    - repo-key
    - repo-key-create
uri
Makes HTTP requests, much like running curl.
- name: test proxy URL for connectivity
  uri:
    url: "{{ repo_pkg_cache_url }}/acng-report.html"
    method: "HEAD"
  register: proxy_check
  failed_when: false
  tags:
    - common-proxy
wait_for
Waits for a port to become available, or for a file to reach a given state.
- name: Wait for container ssh
  wait_for:
    port: "22"
    delay: "{{ ssh_delay }}"
    search_regex: "OpenSSH"
    host: "{{ ansible_host }}"
  delegate_to: "{{ physical_host }}"
  register: ssh_wait_check
  until: ssh_wait_check | success
  retries: 3
  when:
    - (_mc is defined and _mc | changed) or (_ec is defined and _ec | changed)
    - not is_metal | bool
  tags:
    - common-lxc
command
Executes a command on the remote host (unlike the shell module, it is not processed through a shell).
### ignore_errors: True lets the playbook continue even if the command fails
- name: Check if clean is needed
  command: docker exec openvswitch_vswitchd ovs-vsctl br-exists br-tun
  register: result
  ignore_errors: True
Checking whether a list is empty
### pip_wheel_install is a list variable
- name: Install wheel packages
  shell: cd /tmp/wheels && pip install {{ item }}*
  with_items:
    - "{{ pip_wheel_install | default([]) }}"
  when: pip_wheel_install | default([]) | length > 0
ternary
Chooses between two return values based on whether the expression is true.
- name: Set container backend to "dir" or "lvm" based on whether the lxc VG was found
  set_fact:
    lxc_container_backing_store: "{{ (vg_result.rc != 0) | ternary('dir', 'lvm') }}"
  when: vg_result.rc is defined
  tags:
    - lxc-container-vg-detect
Returns dir when vg_result.rc is non-zero, otherwise lvm.
A Jinja2 inline if/else expression achieves the same effect, as the agent variable below shows.
- name: Set the external network bridge
  vars:
    agent: "{{ 'neutron-vpnaas-agent' if enable_neutron_vpnaas | bool else 'neutron-l3-agent' }}"
  ini_file:
    dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini"
    section: "DEFAULT"
    option: "external_network_bridge"
    value: "{{ neutron_bridge_name | default('br-ex') }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item }}"
  with_items: "{{ groups['neutron-server'] }}"
  notify:
    - Restart {{ agent }} container
Using Jinja2 in when
Referencing variables with {{ }} inside a when expression is discouraged; if a string value is needed, pipe the variable through | string instead.
- name: Checking free port for OVN
  vars:
    service: "{{ neutron_services[item.name] }}"
  wait_for:
    host: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
    port: "{{ item.port }}"
    connect_timeout: 1
    state: stopped
  when:
    - container_facts[item.facts | string] is not defined
    - service.enabled | bool
    - service.host_in_groups | bool
  with_items:
    - { name: "ovn-nb-db-server", port: "{{ ovn_northdb_port }}", facts: "ovn_nb_db" }
    - { name: "ovn-sb-db-server", port: "{{ ovn_sourthdb_port }}", facts: "ovn_sb_db" }
What you will learn
How to set up an Ansible runtime environment
How to run Ansible commands
How to configure the Inventory
Test environment (role / OS / network address):
control machine: ubuntu 14.04 LTS, 192.168.200.250
managed nodes: ubuntu 16.04 LTS, 192.168.200.11 and 192.168.200.12
Control machine setup
Install Ansible
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
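Afterwards you can confirm the installation:
$ ansible --version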
Set up the managed nodes