Table of Contents

  • 1. Overview
    • Address plan
    • Preparing the ansible directory
    • Network initialization
  • 2. Installing the ansible base module
    • 2.1 Disable the firewall
    • 2.2 Create the uniform user
    • 2.3 Create the uniform repositories
    • 2.4 Base yum packages needed on all hosts
    • 2.8 Tune kernel parameters: `pam_limits`, file descriptors
    • 2.9 Tune kernel parameters: `sysctl`
    • 2.10 The unified entry file main.yml
    • 2.11 The top file
    • 2.12 Test connectivity to all hosts
  • 3. Installing the individual services
    • 3.1 Installing nfs
    • 3.2 Installing rsync
    • 3.3 Real-time sync with sersync
    • 3.4 Installing mysql
    • 3.5 Installing redis
    • 3.6 Installing nginx
    • 3.7 Installing php
    • 3.8 Installing haproxy
    • 3.9 Installing keepalived
    • 3.10 Installing LVS
    • 3.11 Configuring the VIP and ARP suppression on the LVS backend layer-7 proxy (RS) nodes
    • 3.12 Configuring the router
    • 3.13 Configuring dns
  • 4. Deploying the wordpress blog system
    • 4.1 The web site tier
    • 4.2 Layer-7 load balancing with haproxy, nginx as standby
  • 5. Deploying the java project zrlog
    • 5.1 Installing tomcat
    • 5.2 Deploying the zrlog project
    • 5.3 Deploying the layer-7 zrlog-proxy with haproxy
  • 6. Deploying the phpmyadmin project
    • 6.1 Deploying phpmyadmin on the web tier
    • 6.2 nginx on the layer-7 load balancer (haproxy as standby)
    • 6.3 Consolidated parameters for all projects
    • 6.4 Consolidated top.yml
    • 6.5 Overall project directory tree
  • Appendix:

1. Overview


1. System initialization: the initial configuration that every host needs once the operating system is installed, such as installing base software, tuning kernel parameters, and setting up yum repositories.
2. Function modules: the application services used in production, such as Nginx, PHP, Haproxy and Keepalived. We create one role per function to hold its installation and management logic, and call this collection of directories the "function modules".
3. Business modules: the function modules define a large number of basic desired states that the business layer references directly, so the function modules should be as complete and as independent as possible.
Different business types then call different roles, so that each business gets its own specific configuration files. Finally, in site.yml we simply declare the desired state of each business.
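As a sketch, the business layer in site.yml might look like the following; the role and group names here are illustrative placeholders, not taken verbatim from the project:

```yaml
# Hypothetical business-layer entry point: each play wires one host group
# to the function roles it needs, plus the business-specific role on top.
- hosts: webservers
  roles:
    - role: nginx-server     # function module
    - role: php-fpm          # function module
    - role: blog-web         # business module: wordpress-specific config

- hosts: proxyservers
  roles:
    - role: haproxy          # function module
    - role: keepalived       # function module
```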

Address plan

Hostname | External eth0 (NAT) | Internal eth1 (LAN) | Gateway | Role
route | 10.0.0.200 (forwarding + SNAT enabled) | 172.16.1.200 | 10.0.0.2 | routes
dns-master | 10.0.0.91 | 172.16.1.91 | 10.0.0.2 | dnsservers
dns-slave | 10.0.0.92 | 172.16.1.92 | 10.0.0.2 | dnsservers
LVS-Master | - | 172.16.1.3 | 172.16.1.200 | lbservers
LVS-Backup | - | 172.16.1.4 | 172.16.1.200 | lbservers
haproxy-node1 (or nginx-node1) | - | 172.16.1.5 | 172.16.1.200 | proxyservers
haproxy-node2 (or nginx-node2) | - | 172.16.1.6 | 172.16.1.200 | proxyservers
web-node1 | - | 172.16.1.7 | 172.16.1.200 | webservers
web-node2 | - | 172.16.1.8 | 172.16.1.200 | webservers
web-node3 | - | 172.16.1.9 | 172.16.1.200 | webservers
mysql-master | - | 172.16.1.51 | 172.16.1.200 | mysqlservers
mysql-slave | - | 172.16.1.52 | 172.16.1.200 | mysqlservers
redis-cluster | - | 172.16.1.41 | 172.16.1.200 | redservers
redis-cluster | - | 172.16.1.42 | 172.16.1.200 | redservers
redis-cluster | - | 172.16.1.43 | 172.16.1.200 | redservers
nfs-server | - | 172.16.1.32 | 172.16.1.200 | nfservers
rsync | - | 172.16.1.31 | 172.16.1.200 | rsyncservers
jumpserver | - | 172.16.1.61 | 172.16.1.200 | -
openvpn | 10.0.0.60 | 172.16.1.60 | 172.16.1.200 | -

Preparing the ansible directory

[root@manager /]# mkdir /ansible/roles -p
[root@manager /]# cd /ansible/roles/
[root@manager roles]# cp /etc/ansible/hosts ./
[root@manager roles]# cp /etc/ansible/ansible.cfg ./

The ansible.cfg file, caching facts in a local redis instance:

[root@manager roles]# cat ansible.cfg
[defaults]
inventory = ./hosts
host_key_checking = False
gathering = smart
fact_caching_timeout = 86400
fact_caching = redis
fact_caching_connection = 172.16.1.62:6379
forks = 50
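Once a play has gathered facts, you can confirm they really landed in redis. The key naming below (`ansible_facts` concatenated with the inventory hostname) is what recent ansible fact-cache plugins use; treat this as a quick sanity check rather than a guaranteed interface:

```shell
# Inspect the fact cache on the redis host (172.16.1.62 from ansible.cfg)
redis-cli -h 172.16.1.62 keys 'ansible_facts*' | head
redis-cli -h 172.16.1.62 ttl  ansible_facts172.16.1.7   # should be <= 86400 (fact_caching_timeout)
```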

The hosts inventory file:

[root@manager roles]# cat hosts
[routes]
172.16.1.200

[dnsservers]
172.16.1.91
172.16.1.92

[lbservers]
172.16.1.3
172.16.1.4

[proxyservers]
172.16.1.5
172.16.1.6

[webservers]
172.16.1.7
172.16.1.8
172.16.1.9

[mysqlservers]
172.16.1.51

[redisservers]
172.16.1.41

[nfsservers]
172.16.1.32

[rsyncservers]
172.16.1.31

Test connectivity. If a host is unreachable, its public key has not been pushed yet; the ansible control node's key must be distributed to every node beforehand.

[root@manager roles]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.51
[root@manager roles]# ansible all -m ping
[root@manager roles]# ansible all --list-hosts   # list all hosts in the current inventory
  hosts (14):
    172.16.1.7
    172.16.1.8
    172.16.1.9
    172.16.1.31
    172.16.1.3
    172.16.1.4
    172.16.1.5
    172.16.1.6
    172.16.1.41
    172.16.1.200
    172.16.1.32
    172.16.1.91
    172.16.1.92
    172.16.1.51
[root@manager roles]# mkdir group_vars          # create the group variables directory
[root@manager roles]# touch group_vars/all
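If the key has not been pushed to every node yet, one quick way is a small shell loop; this is a sketch that assumes a uniform root password (the `PASSWORD` placeholder) and that `sshpass` is installed on the control node:

```shell
# Hypothetical helper: push the controller's public key to every node in the plan.
for ip in 172.16.1.3 172.16.1.4 172.16.1.5 172.16.1.6 172.16.1.7 172.16.1.8 \
          172.16.1.9 172.16.1.31 172.16.1.32 172.16.1.41 172.16.1.51 \
          172.16.1.91 172.16.1.92 172.16.1.200; do
  sshpass -p 'PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub root@$ip
done
```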

Network initialization

[root@manager roles]# cat network_init.yml
- hosts: all:!dnsservers:!routes
  tasks:
    - name: delete default gateway
      lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth1
        regexp: '^GATEWAY='
        state: absent

    - name: add new gateway
      lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth1
        line: GATEWAY=172.16.1.200

    - name: delete default dns
      lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth1
        regexp: '^DNS'
        state: absent

    - name: add new dns
      lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth1
        line: DNS=223.5.5.5

    - name: restart network
      systemd:
        name: network
        state: restarted

Run it and check that the result is as expected:

[root@manager roles]# ansible-playbook network_init.yml
[root@manager roles]# ansible all -m shell -a "cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep DNS"
[root@manager roles]# ansible all -m shell -a "cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep GATEWAY"

二、ansible基础模块安装

当我们的服务器上架并安装好操作系统后,都会有一些基础的操作,建议将所有服务器都会涉及的基础配置都存放在名为roles中的base 下。我们称其为“初始化模块”
1.关闭防火墙Firewalld selinux
2.创建统一用户www, uid为666 gid为666
3.添加base、 epel仓库
4.特定主机需要添加特定的仓库源nginx 、php、 mysql、zabbix、 elk …
5.安装基础软件包, rsync、 nfs-utils、 net-tools lrzsz、 wget、 unzip、 vim、 tree. …
6.内核升级\内核参数调整\文件描述符调整

2.1 Disable the firewall

[root@manager roles]# mkdir base/{tasks,templates,files,handlers} -p
[root@manager roles]# cat base/tasks/firewalld.yml
- name: disable selinux
  selinux:
    state: disabled

- name: disable firewall
  systemd:
    name: firewalld
    state: stopped
    enabled: no

2.2 Create the uniform user

[root@manager roles]# cat base/tasks/user.yml
- name: create uniform user group www
  group:
    name: "{{ all_group }}"
    gid: "{{ gid }}"
    system: yes
    state: present

- name: create uniform user www
  user:
    name: "{{ all_user }}"
    group: "{{ all_group }}"
    uid: "{{ uid }}"
    system: yes
    state: present

2.3 Create the uniform repositories

[root@manager roles]# cat base/tasks/yum_repository.yml
- name: Add Base Yum Repository
  yum_repository:
    name: base
    description: Base Aliyun Repository
    baseurl: http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
    gpgcheck: yes
    gpgkey: http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

- name: Add Epel Yum Repository
  yum_repository:
    name: epel
    description: Epel Aliyun Repository
    baseurl: http://mirrors.aliyun.com/epel/7/$basearch
    gpgcheck: no

- name: Add Nginx Yum Repository
  yum_repository:
    name: nginx
    description: Nginx Repository
    baseurl: http://nginx.org/packages/centos/7/$basearch/
    gpgcheck: no
  #when: (ansible_hostname is match('web*')) or (ansible_hostname is match('lb*'))

- name: Add PHP Yum Repository
  yum_repository:
    name: php71w
    description: php Repository
    baseurl: http://us-east.repo.webtatic.com/yum/el7/x86_64/
    gpgcheck: no

- name: Add Haproxy Yum Repository
  yum_repository:
    name: haproxy
    description: haproxy repository
    baseurl: https://repo.ius.io/archive/7/$basearch/
    gpgcheck: yes
    gpgkey: https://repo.ius.io/RPM-GPG-KEY-IUS-7

2.4 Base yum packages needed on all hosts

[root@manager roles]# cat base/tasks/yum_pkg.yml
- name: Installed Packages All
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - rsync
    - nfs-utils
    - net-tools
    - wget
    - tree
    - lrzsz
    - vim
    - unzip
    - httpd-tools
    - bash-completion
    - iftop
    - iotop
    - glances
    - gzip
    - psmisc
    - MySQL-python
    - bind-utils
    - python-setuptools
    - python-pip
    - gcc
    - gcc-c++
    - autoconf
    - sudo
    - iptables

2.8 Tune kernel parameters: pam_limits, file descriptors

[root@manager roles]# cat base/tasks/limits.yml
- name: Change Limit /etc/security/limits.conf
  pam_limits:
    domain: "*"
    limit_type: "{{ item.limit_type }}"
    limit_item: "{{ item.limit_item }}"
    value: "{{ item.value }}"
  loop:
    - { limit_type: 'soft', limit_item: 'nofile', value: '100000' }
    - { limit_type: 'hard', limit_item: 'nofile', value: '100000' }

2.9 Tune kernel parameters: sysctl

[root@manager roles]# cat base/tasks/kernel.yml
- name: Change Port Range
  sysctl:
    name: net.ipv4.ip_local_port_range
    value: '1024 65000'
    sysctl_set: yes

- name: Enabled Forward
  sysctl:
    name: net.ipv4.ip_forward
    value: '1'
    sysctl_set: yes

- name: Enabled tcp_reuse
  sysctl:
    name: net.ipv4.tcp_tw_reuse
    value: '1'
    sysctl_set: yes

- name: Change tcp tw_buckets
  sysctl:
    name: net.ipv4.tcp_max_tw_buckets
    value: '5000'
    sysctl_set: yes

- name: Change tcp_syncookies
  sysctl:
    name: net.ipv4.tcp_syncookies
    value: '1'
    sysctl_set: yes

- name: Change tcp max_syn_backlog
  sysctl:
    name: net.ipv4.tcp_max_syn_backlog
    value: '8192'
    sysctl_set: yes

- name: Change tcp Established Maxconn
  sysctl:
    name: net.core.somaxconn
    value: '32768'
    sysctl_set: yes
    state: present

- name: Change tcp_syn_retries
  sysctl:
    name: net.ipv4.tcp_syn_retries
    value: '2'
    sysctl_set: yes
    state: present

- name: Change net.ipv4.tcp_synack_retries
  sysctl:
    name: net.ipv4.tcp_synack_retries
    value: '2'
    sysctl_set: yes
    state: present

2.10 The unified entry file main.yml

[root@manager roles]# cat base/tasks/main.yml
- name: firewalld
  include: firewalld.yml
- name: kernel
  include: kernel.yml
- name: limits
  include: limits.yml
- name: user
  include: user.yml
  tags: create_user
- name: yum_repository
  include: yum_repository.yml
- name: yum_package
  include: yum_pkg.yml

2.11 The top file

[root@manager roles]# cat top.yml
- hosts: all
  roles:
    - role: base

2.12 Test connectivity to all hosts

[root@manager roles]# tree /ansible/roles/
/ansible/roles/
├── ansible.cfg
├── base
│   ├── files
│   ├── handlers
│   ├── tasks
│   │   ├── firewalld.yml
│   │   ├── kernel.yml
│   │   ├── limits.yml
│   │   ├── main.yml
│   │   ├── user.yml
│   │   ├── yum_pkg.yml
│   │   └── yum_repository.yml
│   └── templates
├── group_vars
│   └── all
├── hosts
├── network_init.yml
└── top.yml

[root@manager roles]# ansible-playbook top.yml

3. Installing the individual services

3.1 Installing nfs

[root@manager roles]# mkdir nfs-server/{tasks,templates,handlers,files} -p
[root@manager roles]# cat nfs-server/tasks/main.yml
- name: create nfs share directory    # create the two shared directories
  file:
    path: "{{ item }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    state: directory
    mode: "0755"
    recurse: yes
  loop:
    - "{{ nfs_share_blog }}"
    - "{{ nfs_share_zrlog }}"

- name: configure nfs server
  template:
    src: exports.j2
    dest: /etc/exports
  notify: restart nfs server

- name: start nfs server
  systemd:
    name: nfs
    state: started
    enabled: yes

[root@manager roles]# cat nfs-server/templates/exports.j2
{{ nfs_share_blog }} {{ nfs_allow_ip_range }}(rw,sync,all_squash,anonuid={{ uid }},anongid={{ gid }})
{{ nfs_share_zrlog }} {{ nfs_allow_ip_range }}(rw,sync,all_squash,anonuid={{ uid }},anongid={{ gid }})

[root@manager roles]# cat nfs-server/handlers/main.yml
- name: restart nfs server
  systemd:
    name: nfs
    state: restarted

[root@manager roles]# cat group_vars/all
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog

Run it:

[root@manager roles]# cat top.yml
#- hosts: all
#  roles:
#    - role: base
- hosts: nfsservers
  roles:
    - role: nfs-server
[root@manager roles]# ansible-playbook top.yml

3.2 Installing rsync

The daemon runs as the www user by default here, and rsync itself was already installed in the initialization stage.

[root@manager roles]# mkdir rsync-server/{tasks,templates,handlers,files} -p
[root@manager roles]# cat rsync-server/tasks/main.yml
- name: copy virtual user passwd file
  copy:
    content: "{{ rsync_virtual_user }}:{{ rsync_virtual_passwd }}"
    dest: "{{ rsync_virtual_path }}"
    mode: "0600"

- name: create rsync module dir
  file:
    path: "{{ item }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    state: directory
    recurse: yes
  loop:                              # two directories, one per project
    - "{{ rsync_module_path1 }}"
    - "{{ rsync_module_path2 }}"

- name: configure rsync
  template:
    src: rsyncd.conf.j2
    dest: /etc/rsyncd.conf
  notify: restart rsyncd

- name: start rsyncd
  systemd:
    name: rsyncd
    state: started
    enabled: yes

[root@manager roles]# cat rsync-server/templates/rsyncd.conf.j2
uid = {{ all_user }}
gid = {{ all_group }}
port = {{ rsync_port }}
fake super = yes
use chroot = no
max connections = {{ rsync_max_conn }}
timeout = 600
ignore errors
read only = false
list = true
log file = /var/log/rsyncd.log
auth users = {{ rsync_virtual_user }}
secrets file = {{ rsync_virtual_path }}
# two modules, one per project
[{{ rsync_module_name1 }}]
path = {{ rsync_module_path1 }}

[{{ rsync_module_name2 }}]
path = {{ rsync_module_path2 }}

[root@manager roles]# cat rsync-server/handlers/main.yml
- name: restart rsyncd
  systemd:
    name: rsyncd
    state: restarted

Run and test:


[root@manager roles]# cat top.yml
- hosts: rsyncservers
  roles:
    - role: rsync-server
[root@manager roles]# ansible-playbook top.yml
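A quick way to confirm the daemon actually works, assuming the variables defined in group_vars (module `blog`, virtual user `rsync_backup`, password `123`); this is a manual check, not part of the roles:

```shell
# From any internal host: list the exported modules, then push a test file.
rsync rsync://172.16.1.31/                 # lists the modules (blog, zrlog)
echo 123 > /tmp/client.pass && chmod 600 /tmp/client.pass
rsync -avz /etc/hostname rsync_backup@172.16.1.31::blog --password-file=/tmp/client.pass
```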

3.3 Real-time sync with sersync

Watch the shared directories on the nfs server and, whenever they change, sync them to the rsync backup server in real time.
Install sersync, rsync and inotify-tools.
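For reference, this is the command the role ends up running on the nfs server, with the meaning of the flags as comments (paths are the ones produced by the `rsync_path` variable below):

```shell
# -d  run as a background daemon after the initial sync
# -r  do one full sync of the watched directory on startup
# -o  use the specified config file instead of the default confxml.xml
/usr/local/GNU-Linux-x86/sersync2 -dro /usr/local/GNU-Linux-x86/confxml1.xml
```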

[root@manager roles]# cat sersync-server/tasks/main.yml
- name: install inotify-tools
  yum:
    name: inotify-tools
    state: present

- name: unarchive sersync.tar.gz to remote
  unarchive:
    src: sersync2.5.4_64bit_binary_stable_final.tar.gz
    dest: "{{ rsync_path }}"

- name: configure confxml1.xml.j2 to remote
  template:
    src: confxml1.xml.j2
    dest: "{{ rsync_path }}/GNU-Linux-x86/confxml1.xml"

- name: configure confxml2.xml.j2 to remote
  template:
    src: confxml2.xml.j2
    dest: "{{ rsync_path }}/GNU-Linux-x86/confxml2.xml"

- name: create rsync_client passwd file
  copy:
    content: "{{ rsync_virtual_passwd }}"
    dest: "{{ rsync_virtual_path }}"
    mode: 0600

- name: start sersync1
  shell: "{{ rsync_path }}/GNU-Linux-x86/sersync2 -dro {{ rsync_path }}/GNU-Linux-x86/confxml1.xml"

- name: start sersync2
  shell: "{{ rsync_path }}/GNU-Linux-x86/sersync2 -dro {{ rsync_path }}/GNU-Linux-x86/confxml2.xml"
[root@manager roles]# cat sersync-server/templates/confxml1.xml.j2
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="true"/>
    <filter start="false">
        <exclude expression="(.*)\.svn"></exclude>
        <exclude expression="(.*)\.gz"></exclude>
        <exclude expression="^info/*"></exclude>
        <exclude expression="^static/*"></exclude>
    </filter>
    <inotify>
        <delete start="true"/>
        <createFolder start="true"/>
        <createFile start="false"/>
        <closeWrite start="true"/>
        <moveFrom start="true"/>
        <moveTo start="true"/>
        <attrib start="true"/>
        <modify start="true"/>
    </inotify>
    <sersync>
        <localpath watch="{{ nfs_share_blog }}">
            <remote ip="{{ rsync_ip }}" name="{{ rsync_module_name1 }}"/>
        </localpath>
        <rsync>
            <commonParams params="-avz"/>
            <auth start="true" users="{{ rsync_virtual_user }}" passwordfile="{{ rsync_virtual_path }}"/>
            <userDefinedPort start="false" port="874"/><!-- port=874 -->
            <timeout start="true" time="100"/>
            <ssh start="false"/>
        </rsync>
        <failLog path="/tmp/rsync_fail_log.sh" timeToExecute="{{ timeToExecute }}"/>
        <crontab start="false" schedule="600"><!--600mins-->
            <crontabfilter start="false">
                <exclude expression="*.php"></exclude>
                <exclude expression="info/*"></exclude>
            </crontabfilter>
        </crontab>
        <plugin start="false" name="command"/>
    </sersync>
    <plugin name="command">
        <param prefix="/bin/sh" suffix="" ignoreError="true"/>    <!--prefix /opt/tongbu/mmm.sh suffix-->
        <filter start="false">
            <include expression="(.*)\.php"/>
            <include expression="(.*)\.sh"/>
        </filter>
    </plugin>
    <plugin name="socket">
        <localpath watch="/opt/tongbu">
            <deshost ip="192.168.138.20" port="8009"/>
        </localpath>
    </plugin>
    <plugin name="refreshCDN">
        <localpath watch="/data0/htdocs/cms.xoyo.com/site/">
            <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
            <sendurl base="http://pic.xoyo.com/cms"/>
            <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
        </localpath>
    </plugin>
</head>
[root@manager roles]# cat sersync-server/templates/confxml2.xml.j2
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="true"/>
    <filter start="false">
        <exclude expression="(.*)\.svn"></exclude>
        <exclude expression="(.*)\.gz"></exclude>
        <exclude expression="^info/*"></exclude>
        <exclude expression="^static/*"></exclude>
    </filter>
    <inotify>
        <delete start="true"/>
        <createFolder start="true"/>
        <createFile start="false"/>
        <closeWrite start="true"/>
        <moveFrom start="true"/>
        <moveTo start="true"/>
        <attrib start="true"/>
        <modify start="true"/>
    </inotify>
    <sersync>
        <localpath watch="{{ nfs_share_zrlog }}"><!-- directory to watch -->
            <remote ip="{{ rsync_ip }}" name="{{ rsync_module_name2 }}"/><!-- rsync server IP and module -->
        </localpath>
        <rsync>
            <commonParams params="-avz"/>
            <auth start="true" users="{{ rsync_virtual_user }}" passwordfile="{{ rsync_virtual_path }}"/>
            <userDefinedPort start="false" port="874"/><!-- port=874 -->
            <timeout start="true" time="100"/><!-- timeout 100s -->
            <ssh start="false"/><!-- use plain rsync, not ssh -->
        </rsync>
        <failLog path="/tmp/rsync_fail_log.sh" timeToExecute="{{ timeToExecute }}"/>
        <crontab start="false" schedule="600"><!--600mins-->
            <crontabfilter start="false">
                <exclude expression="*.php"></exclude>
                <exclude expression="info/*"></exclude>
            </crontabfilter>
        </crontab>
        <plugin start="false" name="command"/>
    </sersync>
    <plugin name="command">
        <param prefix="/bin/sh" suffix="" ignoreError="true"/>    <!--prefix /opt/tongbu/mmm.sh suffix-->
        <filter start="false">
            <include expression="(.*)\.php"/>
            <include expression="(.*)\.sh"/>
        </filter>
    </plugin>
    <plugin name="socket">
        <localpath watch="/opt/tongbu">
            <deshost ip="192.168.138.20" port="8009"/>
        </localpath>
    </plugin>
    <plugin name="refreshCDN">
        <localpath watch="/data0/htdocs/cms.xoyo.com/site/">
            <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
            <sendurl base="http://pic.xoyo.com/cms"/>
            <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
        </localpath>
    </plugin>
</head>
[root@manager roles]# ls sersync-server/files/sersync2.5.4_64bit_binary_stable_final.tar.gz
sersync-server/files/sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@manager roles]# cat group_vars/all
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog

# rsync
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.pass
rsync_module_name1: blog
rsync_module_name2: zrlog
rsync_module_path1: /data/blog
rsync_module_path2: /data/zrlog
rsync_virtual_passwd: 123
rsync_port: 873
rsync_max_conn: 200

# sersync
rsync_ip: 172.16.1.31
timeToExecute: 60
rsync_path: /usr/local

3.4 Installing mysql

[root@manager roles]# cat mysql-server/tasks/main.yml
- name: install mariadb mariadb-server
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - mariadb
    - mariadb-server
    - MySQL-python

- name: start mariadb
  systemd:
    name: mariadb
    state: started
    enabled: yes

- name: Removes all anonymous user accounts
  mysql_user:
    name: ''
    host_all: yes
    state: absent

- name: Create database user '{{ mysql_super_user }}' with password '{{ mysql_super_user_passwd }}' and all database privileges
  mysql_user:
    name: "{{ mysql_super_user }}"
    password: "{{ mysql_super_user_passwd }}"
    priv: '{{ mysql_super_user_priv }}'
    host: "{{ allow_ip }}"
    state: present

# loop to create the two databases, then import their data
- name: Create a new database with name 'wordpress' and 'zrlog'
  mysql_db:
    login_host: "{{ mysql_ip }}"
    login_user: "{{ mysql_super_user }}"
    login_password: "{{ mysql_super_user_passwd }}"
    name: "{{ item }}"
    state: present
  loop:
    - wordpress
    - zrlog

- name: Import file.sql similar to mysql -u <username> -p <password> < hostname.sql
  mysql_db:
    login_host: "{{ mysql_ip }}"
    login_user: "{{ mysql_super_user }}"
    login_password: "{{ mysql_super_user_passwd }}"
    state: import
    name: "{{ item.name }}"
    target: "{{ item.target }}"
  loop:
    - { name: 'wordpress', target: '/tmp/wordpress.sql' }
    - { name: 'zrlog', target: '/tmp/zrlog.sql' }

[root@manager roles]# cat group_vars/all
#mysql
mysql_super_user: app
mysql_super_user_passwd: "123456"
mysql_super_user_priv: '*.*:ALL'
allow_ip: '172.16.1.%'

3.5 Installing redis

[root@manager roles]# mkdir redis-server/{tasks,templates,handlers} -p
[root@manager roles]# cat redis-server/tasks/main.yml
- name: install redis server
  yum:
    name: redis
    state: present

- name: configure redis server
  template:
    src: redis.conf.j2
    dest: /etc/redis.conf
    owner: redis
    group: root
    mode: 0640
  notify: restart redis server

- name: start redis server
  systemd:
    name: redis
    state: started
    enabled: yes

[root@manager roles]# cat redis-server/handlers/main.yml
- name: restart redis server
  systemd:
    name: redis
    state: restarted

3.6 Installing nginx

[root@manager roles]# mkdir nginx-server/{tasks,templates,handlers,files} -p
[root@manager roles]# cat nginx-server/tasks/main.yml
- name: install nginx server
  yum:
    name: nginx
    state: present
    enablerepo: nginx

- name: configure nginx server
  template:
    src: nginx.conf.j2
    dest: "{{ nginx_conf_path }}"
  notify: restart nginx server

- name: start nginx server
  systemd:
    name: nginx
    state: started
    enabled: yes

[root@manager roles]# cat nginx-server/templates/nginx.conf.j2
user {{ all_user }};
worker_processes  {{ ansible_processor_vcpus }};

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  {{ ansible_processor_vcpus * 1024 }};
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$http_x_via"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  {{ keepalive_timeout }};

    #gzip  on;

    include {{ nginx_include_path }};
}

[root@manager roles]# cat nginx-server/handlers/main.yml
- name: restart nginx server
  systemd:
    name: nginx
    state: restarted

3.7 Installing php

[root@manager roles]# mkdir php-fpm/{tasks,templates,handlers,files} -p
[root@manager roles]# cat php-fpm/tasks/main.yml
- name: install php-fpm
  yum:
    name: "{{ item }}"
    enablerepo: php71w
    state: present
  loop:
    - php71w
    - php71w-cli
    - php71w-common
    - php71w-devel
    - php71w-embedded
    - php71w-gd
    - php71w-mcrypt
    - php71w-mbstring
    - php71w-pdo
    - php71w-xml
    - php71w-fpm
    - php71w-mysqlnd
    - php71w-opcache
    - php71w-pecl-memcached
    - php71w-pecl-redis
    - php71w-pecl-mongodb

- name: configure php.ini php-fpm file
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - { src: php.ini.j2, dest: "{{ php_ini_path }}" }
    - { src: www.conf.j2, dest: "{{ php_fpm_path }}" }
  notify: restart php-fpm

- name: start php-fpm
  systemd:
    name: php-fpm
    state: started
    enabled: yes

Config file 1:

[root@manager roles]# cat php-fpm/templates/php.ini.j2 | egrep -v "^;|^$"
# only the following two settings change; everything else keeps its default
session.save_handler = {{ session_method }}
session.save_path = "tcp://{{ bind_ip }}:6379"

Config file 2:

[root@manager roles]# cat php-fpm/templates/www.conf.j2
[{{ all_user }}]
user = {{ all_user }}
group = {{ all_user }}
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = {{ pm_max_children }}
pm.start_servers = {{ pm_start_servers }}
pm.min_spare_servers = {{ pm_min_spare_servers }}
pm.max_spare_servers = {{ pm_max_spare_servers }}
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache

[root@manager roles]# cat php-fpm/handlers/main.yml
- name: restart php-fpm
  systemd:
    name: php-fpm
    state: restarted

The variables:

[root@manager roles]# cat group_vars/all
#php-fpm
php_ini_path: /etc/php.ini
php_fpm_path: /etc/php-fpm.d/www.conf
session_method: redis
pm_max_children: 50
pm_start_servers: 5
pm_min_spare_servers: 5
pm_max_spare_servers: 35

3.8 Installing haproxy

[root@manager roles]# mkdir haproxy/{tasks,templates,handlers,files} -p
[root@manager roles]# cat haproxy/tasks/main.yml
- name: install haproxy
  yum:
    name: haproxy22
    enablerepo: haproxy
    state: present

- name: configure haproxy
  template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
  notify: restarted haproxy server

- name: configure conf.d dir to systemctl
  template:
    src: haproxy.service.j2
    dest: /usr/lib/systemd/system/haproxy.service

- name: create conf.d dir
  file:
    path: /etc/haproxy/conf.d
    state: directory

- name: start haproxy
  systemd:
    name: haproxy
    daemon_reload: yes
    state: started
    enabled: yes

# set the variables to suit your own environment
[root@manager roles]# cat haproxy/templates/haproxy.cfg.j2
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     {{ maxconn }}
    user        {{ all_user }}
    group       {{ all_group }}
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin
    #nbproc 4
    #cpu-map 1 0
    #cpu-map 2 1
    #cpu-map 3 2
    #cpu-map 4 3
    nbthread 8

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
listen haproxy-stats
    bind *:{{ stat_port }}
    stats enable
    stats refresh 1s
    stats hide-version
    stats uri /haproxy?stats
    stats realm "HAProxy statistics"
    stats auth admin:123456
    stats admin if TRUE

[root@manager roles]# cat haproxy/templates/haproxy.service.j2
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
Environment="CONFIG_D=/etc/haproxy/conf.d"
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -f $CONFIG_D -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
SuccessExitStatus=143
Type=notify

[Install]
WantedBy=multi-user.target

[root@manager roles]# cat haproxy/handlers/main.yml
- name: restarted haproxy server
  systemd:
    name: haproxy
    state: restarted

[root@manager roles]# tree haproxy/
haproxy/
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    ├── haproxy.cfg.j2
    └── haproxy.service.j2

[root@manager roles]# cat group_vars/all
# haproxy
stat_port: 8888
maxconn: 4000

3.9 Installing keepalived

[root@manager roles]# mkdir keepalived/{tasks,templates,handlers,files} -p
[root@manager roles]# cat keepalived/tasks/main.yml
- name: install keepalived
  yum:
    name: keepalived
    state: present

- name: configure keepalived
  template:
    src: keepalived.conf.j2
    dest: "{{ keepalived_conf_path }}"
  notify: restart keepalived

- name: start keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes

[root@manager roles]# cat keepalived/templates/keepalived.conf.j2
global_defs {
    router_id {{ ansible_hostname }}
}

vrrp_instance VI_1 {
{% if ansible_hostname == "proxy01" %}
    state MASTER
    priority 200
{% elif ansible_hostname == "proxy02" %}
    state BACKUP
    priority 100
{% endif %}
    interface eth0                  # physical interface bound to this virtual router
    virtual_router_id 49            # virtual router identifier, VRID
    advert_int 3                    # vrrp advertisement interval, default 1s
    #nopreempt
    authentication {
        auth_type PASS              # simple password authentication
        auth_pass 1111              # password, at most 8 characters
    }
    virtual_ipaddress {
        {{ proxy_vip }}
    }
}

[root@manager roles]# cat keepalived/handlers/main.yml
- name: restart keepalived
  systemd:
    name: keepalived
    state: restarted

The variables:

[root@manager roles]# cat group_vars/all
#keepalived
keepalived_conf_path: /etc/keepalived/keepalived.conf
proxy_vip: 10.0.0.100

The directory tree:

[root@manager roles]# tree keepalived/
keepalived/
├── files
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── keepalived.conf.j2

3.10 Installing LVS

The layer-4 LVS role depends on keepalived.

[root@manager roles]# mkdir lvs/{tasks,templates,handlers,meta} -p
[root@manager roles]# cat lvs/tasks/main.yml
- name: install ipvsadm packages
  yum:
    name: ipvsadm
    state: present

- name: configure LVS keepalived
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: restart keepalived

- name: start LVS Keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes

[root@manager roles]# cat lvs/templates/keepalived.conf.j2
global_defs {
    router_id {{ ansible_hostname }}
}

vrrp_instance VI_1 {
{% if ansible_hostname == "lvs-master" %}
    state MASTER
    priority 200
{% elif ansible_hostname == "lvs-slave" %}
    state BACKUP
    priority 100
{% endif %}
    interface eth1
    virtual_router_id 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ lvs_vip }}
    }
}

# cluster address (IP + port) that clients access
virtual_server {{ lvs_vip }} {{ lvs_http_port }} {
    # health check interval, in seconds
    delay_loop 6
    # load balancing algorithm
    lb_algo rr
    # LVS mode: NAT|TUN|DR
    lb_kind DR
    # protocol
    protocol TCP
    # real server (RS) nodes behind the balancer
{% for host in groups["proxyservers"] %}
    real_server {{ host }} {{ lvs_http_port }} {
        # weight set to 1
        weight 1
        # health check
        TCP_CHECK {
            # probe backend port 80
            connect_port 80
            # timeout
            connect_timeout 3
            # retry twice
            nb_get_retry 2
            # 3s between retries
            delay_before_retry 3
        }
    }
{% endfor %}
}

# cluster address (IP + port) that clients access
virtual_server {{ lvs_vip }} {{ lvs_https_port }} {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
{% for host in groups["proxyservers"] %}
    real_server {{ host }} {{ lvs_https_port }} {
        weight 1
        TCP_CHECK {
            # probe backend port 443
            connect_port 443
            connect_timeout 3
            nb_get_retry 2
            delay_before_retry 3
        }
    }
{% endfor %}
}

# depends on keepalived
[root@manager roles]# cat lvs/meta/main.yml
dependencies:
  - { role: keepalived }

[root@manager roles]# cat lvs/handlers/main.yml
- name: restart keepalived
  systemd:
    name: keepalived
    state: restarted

[root@manager roles]# cat group_vars/all
#lvs
lvs_vip: 172.16.1.101
lvs_http_port: 80
lvs_https_port: 443

3.11 Configuring the VIP and ARP suppression on the LVS backend layer-7 proxy (RS) nodes

The idea: on each RS node, add a virtual interface bound as lo:0 carrying the VIP, bring that interface up, and enable ARP suppression.

[root@manager roles]# mkdir lvs-rs/{tasks,templates,handlers} -p
[root@manager roles]# cat lvs-rs/tasks/main.yml
- name: configure VIP for lo:0
  template:
    src: ifcfg-lo:0.j2
    dest: /etc/sysconfig/network-scripts/ifcfg-lo:0
  notify: restart network

- name: configure arp_ignore
  sysctl:
    name: "{{ item }}"
    value: "1"
    sysctl_set: yes
  loop:
    - net.ipv4.conf.default.arp_ignore
    - net.ipv4.conf.all.arp_ignore
    - net.ipv4.conf.lo.arp_ignore

- name: configure arp_announce
  sysctl:
    name: "{{ item }}"
    value: "2"
    sysctl_set: yes
  loop:
    - net.ipv4.conf.default.arp_announce
    - net.ipv4.conf.all.arp_announce
    - net.ipv4.conf.lo.arp_announce

[root@manager roles]# cat lvs-rs/templates/ifcfg-lo\:0.j2
DEVICE={{ lvs_rs_network }}
IPADDR={{ lvs_vip }}
NETMASK=255.0.0.0
ONBOOT=yes
NAME=loopback

[root@manager roles]# cat lvs-rs/handlers/main.yml
- name: restart network
  service:                # centos6 uses service, centos7 uses systemd
    name: network
    state: restarted
    args: lo:0

The variables:

[root@manager roles]# cat group_vars/all
#lvs
lvs_vip: 172.16.1.100
lvs_http_port: 80
lvs_https_port: 443
lvs_rs_network: lo:0
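Done by hand on one RS node, the VIP binding and ARP suppression above come down to the following sketch (the VIP is the one from group_vars; a host /32 mask is commonly used on lo:0 so the VIP is not treated as a network route):

```shell
# Bind the VIP to lo:0 and suppress ARP for it, manually:
ifconfig lo:0 172.16.1.100 netmask 255.255.255.255 up
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_announce=2
```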

3.12 Configuring the router

  1. Enable DNAT
  2. Enable SNAT
  3. Enable forwarding (already done in the base stage)
[root@manager roles]# cat route/tasks/main.yml
- name: iptables SNAT to share network
  iptables:
    table: nat
    chain: POSTROUTING
    source: 172.16.1.0/24
    jump: SNAT
    to_source: "{{ ansible_eth0.ipv4.address }}"

- name: iptables DNAT http 80 port
  iptables:
    table: nat
    chain: PREROUTING
    protocol: tcp
    destination: "{{ ansible_eth0.ipv4.address }}"
    destination_port: "{{ lvs_http_port|int }}"
    jump: DNAT
    to_destination: "{{ lvs_vip }}:{{ lvs_http_port }}"

- name: iptables DNAT https 443 port
  iptables:
    table: nat
    chain: PREROUTING
    protocol: tcp
    destination: "{{ ansible_eth0.ipv4.address }}"
    destination_port: "{{ lvs_https_port|int }}"
    jump: DNAT
    to_destination: "{{ lvs_vip }}:{{ lvs_https_port }}"
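For reference, the tasks correspond roughly to these manual iptables rules on the router; 10.0.0.200 is the router's eth0 address and 172.16.1.101 the LVS VIP from the address plan:

```shell
# SNAT: let the 172.16.1.0/24 LAN reach the outside via the router's public IP
iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -j SNAT --to-source 10.0.0.200
# DNAT: forward inbound 80/443 on the public IP to the LVS VIP
iptables -t nat -A PREROUTING -d 10.0.0.200 -p tcp --dport 80  -j DNAT --to-destination 172.16.1.101:80
iptables -t nat -A PREROUTING -d 10.0.0.200 -p tcp --dport 443 -j DNAT --to-destination 172.16.1.101:443
```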

3.13 Configuring dns

[root@manager roles]# mkdir dns/{tasks,templates,handlers} -p[root@manager roles]# cat dns/tasks/main.yml
- name: install bindyum:name: "{{ item }}"state: presentloop:- bind- bind-utils- name: configure dnstemplate:src: named.conf.j2dest: /etc/named.confgroup: namedowner: rootmode: "0640"notify: restart named- name: cofigure zone filetemplate:src: "{{ bertwu_online_zone_name }}.j2"dest: "{{ dns_zone_file_path }}/{{ bertwu_online_zone_name }}"when: ( ansible_hostname == "dns-master" ) # 区域文件只需要拷贝到主上,会同步一份给从notify: restart named- name: start namedsystemd:name: namedstate: startedenabled: yes# 配置文件,主从配置不一样,需要判断生成不同配置文件
[root@manager roles]# cat dns/templates/named.conf.j2
options {
    listen-on port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file "/var/named/data/named.recursing";
    secroots-file "/var/named/data/named.secroots";
    allow-query { any; };
{% if ansible_hostname == "dns-master" %}
    allow-transfer { 172.16.1.92; };  // IPs allowed to transfer zone data from the master
    also-notify { 172.16.1.92; };     // master proactively notifies the slave when the zone changes
{% elif ansible_hostname == "dns-slave" %}
    masterfile-format text;
{% endif %}
    recursion yes;
    allow-recursion { 172.16.1.0/24; };
    dnssec-enable yes;
    dnssec-validation yes;
    bindkeys-file "/etc/named.root.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

zone "." IN {
    type hint;
    file "named.ca";
};

zone "bertwu.online" IN {
{% if ansible_hostname == "dns-master" %}
    type master;
    file "{{ bertwu_online_zone_name }}";
{% elif ansible_hostname == "dns-slave" %}
    type slave;
    file "slaves/{{ bertwu_online_zone_name }}";
    masters { {{ dns_master_ip }}; };
{% endif %}
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

# Zone file template. Note that zone-file comments start with ';' — using '#' here cost hours of debugging
[root@manager roles]# cat dns/templates/bertwu.online.zone.j2
$TTL 600;
bertwu.online. IN SOA ns.bertwu.online. qq.bertwu.online. (
        2021090999      ; serial (YYYYMMDDnn)
        10800           ; refresh
        900             ; retry
        604800          ; expire
        86400           ; minimum TTL
)
bertwu.online. IN NS ns1.bertwu.online.
bertwu.online. IN NS ns2.bertwu.online.
ns1.bertwu.online. IN A {{ dns_master_ip }}
ns2.bertwu.online. IN A {{ dns_slave_ip }}
blog.bertwu.online. IN A 10.0.0.200
zrlog.bertwu.online. IN A 10.0.0.200

[root@manager roles]# cat dns/handlers/main.yml
- name: restart named
  systemd:
    name: named
    state: restarted

# Variables
[root@manager roles]# cat group_vars/all
# dns
bertwu_online_zone_name: bertwu.online.zone
dns_master_ip: 172.16.1.91
dns_slave_ip: 172.16.1.92
dns_zone_file_path: /var/named
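The SOA serial above (2021090999) follows the common YYYYMMDDnn convention, and the slave only re-transfers the zone when the serial increases. A small helper for computing the next serial when editing the zone by hand (a sketch, not part of the role):

```python
from datetime import date

def next_serial(current: int, today: date) -> int:
    """Next DNS SOA serial in YYYYMMDDnn form: increment on the same day,
    otherwise restart at nn=00 for today's date."""
    base = int(today.strftime("%Y%m%d")) * 100
    return current + 1 if current >= base else base

print(next_serial(2021090999, date(2021, 9, 10)))
```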

三、Deploying the WordPress blog system

3.1 Web site tier

0. Back up the existing database:
   [root@localhost ~]# mysqldump -B wordpress > /tmp/wordpress.sql
1. Create the directory that holds the project code (/webcode).
2. Unarchive the locally packaged wordpress project into the remote code directory (it already contains the database connection config, which skips the setup wizard).
3. Push the nginx config file (it must use the same address and port as when the database connection was first established, since those values were written into the database; otherwise unpredictable errors occur).
4. Create the wordpress database.
5. Restore the sql file backed up earlier.
(Note: steps 4 and 5 are best done during database initialization (the install/configure stage), because in a real production environment the database and its data already exist and ops only needs to connect. Otherwise wordpress.sql must be pushed to a fixed directory on each web server (e.g. /tmp/wordpress.sql), and the mariadb client must be installed on each web node for the import to succeed.)

[root@manager roles]# mkdir wordpress-web/{tasks,meta,templates,handlers,files} -p
[root@manager roles]# cat wordpress-web/tasks/main.yml
- name: create code directory
  file:
    path: "{{ wordpress_code_path }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    state: directory
    recurse: yes

- name: unarchive wordpress_web_code to remote
  unarchive:
    src: wordpress.tar.gz
    dest: "{{ wordpress_code_path }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    creates: "{{ wordpress_code_path }}/wordpress/wp-config.php"  # skip if already extracted

- name: create wordpress nginx config
  template:
    src: wordpress.conf.j2
    dest: "{{ nginx_include_dir }}/wordpress.conf"
  notify: restart nginx server  # handler already defined by the nginx role; no handlers file needed here

# Mount point for shared uploads
- name: configure mount nfs
  mount:
    src: "172.16.1.32:{{ nfs_share_blog }}"
    path: "{{ wordpress_code_path }}/wordpress/wp-content/uploads/"
    fstype: nfs
    opts: defaults
    state: mounted

# The following steps were already done while setting up the mariadb server;
# they are kept here, commented out, to illustrate the approach.
#- name: push wordpress.sql to the web nodes
#  (omitted)
#
#- name: install the mariadb client on the web nodes
#  (omitted)
#
#- name: Create a new database with name 'wordpress'
#  mysql_db:
#    login_host: "{{ mysql_ip }}"
#    login_user: "{{ mysql_super_user }}"
#    login_password: "{{ mysql_super_user_passwd }}"
#    name: wordpress
#    state: present
#
#- name: Import file.sql similar to mysql -u <username> -p <password> < hostname.sql
#  mysql_db:
#    login_host: "{{ mysql_ip }}"
#    login_user: "{{ mysql_super_user }}"
#    login_password: "{{ mysql_super_user_passwd }}"
#    state: import
#    name: wordpress
#    target: /tmp/wordpress.sql
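The `creates:` argument on the unarchive tasks is what keeps re-runs cheap: extraction is skipped once the marker file exists. The same guard in plain Python (a sketch; file and marker names are illustrative):

```python
import os
import tarfile

def unarchive_once(src: str, dest: str, creates: str) -> bool:
    """Extract src into dest only if the 'creates' marker file is absent,
    mirroring Ansible's unarchive + creates: idempotence guard.
    Returns True if an extraction actually happened."""
    if os.path.exists(creates):
        return False  # already deployed; skip
    with tarfile.open(src) as tar:
        tar.extractall(dest)
    return True
```

Calling it twice with the same arguments extracts on the first call and is a no-op on the second, exactly like re-running the play.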
[root@manager roles]# cat wordpress-web/meta/main.yml
dependencies:
  - { role: nginx-server }
  - { role: php-fpm }

[root@manager roles]# cat wordpress-web/templates/wordpress.conf.j2
server {
    listen {{ nginx_http_listen_port }};
    server_name {{ wordpress_domain }};
    client_max_body_size 100m;
    root {{ wordpress_code_path }}/wordpress;
    charset utf-8;

    location / {
        index index.php index.html;
    }

    location ~* .*\.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
        fastcgi_param HTTPS {{ https_state }};
    }
}

[root@manager roles]# ls wordpress-web/files/
wordpress.tar.gz  # the project tarball

# Variables
[root@manager roles]# cat group_vars/all
# wordpress
wordpress_domain: blog.bertwu.online
nginx_http_listen_port: 888
wordpress_code_path: /webcode
https_state: "off"

3.2 Layer-7 load balancing with haproxy (nginx as the alternative)

If the architecture only does layer-7 load balancing, the proxy tier must depend on keepalived. If a layer-4 load balancer sits in front, this tier is just a cluster and does not need keepalived; attach keepalived at layer 4 instead.

[root@manager roles]# mkdir wordpress-proxy/{tasks,handlers,templates,meta} -p
[root@manager roles]# cat wordpress-proxy/tasks/main.yml
- name: create https key dir
  file:
    path: "{{ https_key_dir }}"
    state: directory

- name: copy https key_pem
  copy:
    src: blog_key.pem.j2
    dest: "{{ https_key_dir }}/blog_key.pem"

- name: wordpress haproxy configure file
  template:
    src: haproxy.cfg.j2
    dest: "{{ haproxy_include_path }}/wordpress_haproxy.cfg"
  notify: restarted haproxy server

[root@manager roles]# cat wordpress-proxy/templates/haproxy.cfg.j2
frontend web
    bind *:{{ haproxy_port }}
    bind *:443 ssl crt {{ https_key_dir }}/blog_key.pem
    mode http
    # acl rule: send blog traffic to blog_cluster
    acl blog_domain hdr(host) -i {{ wordpress_domain }}
    redirect scheme https code 301 if !{ ssl_fc } blog_domain
    use_backend blog_cluster if blog_domain

backend blog_cluster
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:\ {{ wordpress_domain }}
{% for host in groups["webservers"] %}
    server {{ host }} {{ host }}:{{ nginx_http_listen_port }} check port {{ nginx_http_listen_port }} inter 3s rise 2 fall 3
{% endfor %}
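The `{% for %}` loop at the end of the template expands into one `server` line per host in the webservers group. Rendered in plain Python with the web nodes from the address plan and `nginx_http_listen_port: 80` (an illustration of the generated lines, not part of the role):

```python
webservers = ["172.16.1.7", "172.16.1.8", "172.16.1.9"]

def render_backend_servers(hosts, port):
    """Build the haproxy 'server' lines the Jinja loop generates."""
    return [
        f"server {h} {h}:{port} check port {port} inter 3s rise 2 fall 3"
        for h in hosts
    ]

for line in render_backend_servers(webservers, 80):
    print(line)
```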
[root@manager roles]# cat wordpress-proxy/meta/main.yml
dependencies:
  - { role: haproxy }

# Variables
#wordpress haproxy
haproxy_port: 80
https_key_dir: /ssl

# Certificate
[root@manager roles]# ls wordpress-proxy/files/blog_key.pem.j2
wordpress-proxy/files/blog_key.pem.j2

# How the certificate file is produced:
1. Prepare the certificate — the cert and key must be appended into a single file:
[root@proxy01 ~]# cat /etc/nginx/ssl_key/6152893_blog.bertwu.online.pem > /ssl/blog_key.pem
[root@proxy01 ~]# cat /etc/nginx/ssl_key/6152893_blog.bertwu.online.key >> /ssl/blog_key.pem
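haproxy's `bind ... ssl crt` expects the certificate and private key concatenated into one PEM file, which is exactly what the two `cat` commands above produce. The same step as a Python sketch (paths illustrative):

```python
def build_haproxy_pem(cert_path: str, key_path: str, out_path: str) -> None:
    """Concatenate certificate and key into the single PEM file haproxy expects,
    certificate first, key appended after it."""
    with open(out_path, "w") as out:
        for part in (cert_path, key_path):
            with open(part) as src:
                out.write(src.read())
```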

四、Deploying the Java project zrlog

4.1 Installing Tomcat

1. Copy the rpm-packaged JDK to the remote hosts.
2. Install the JDK locally via yum.
3. Create the /soft directory on the remote hosts.
4. Unarchive tomcat into /soft on the remote hosts.
5. Create the symlink.
6. Push the tomcat start/stop (systemd) unit file.
7. Push the config file, mode 0600.
8. Start tomcat.

[root@manager roles]# cat tomcat/tasks/main.yml
- name: copy jdk_rpm to webservers
  copy:
    src: "{{ jdk_version }}.rpm"
    dest: "/tmp/{{ jdk_version }}.rpm"

- name: install oraclejdk rpm package
  yum:
    name: "/tmp/{{ jdk_version }}.rpm"
    state: present

- name: create tomcat dir
  file:
    path: "{{ tomcat_dir }}"
    state: directory

- name: unarchive tomcat package to remote
  unarchive:
    src: "{{ tomcat_version }}.tar.gz"
    dest: "{{ tomcat_dir }}"
    #creates: "{{ tomcat_dir }}/{{ tomcat_version }}/conf/server.xml"  # skip if already extracted

- name: make tomcat link
  file:
    src: "{{ tomcat_dir }}/{{ tomcat_version }}"
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}"
    state: link

- name: copy systemctl manager file
  template:
    src: tomcat.service.j2
    dest: /usr/lib/systemd/system/tomcat.service

- name: start tomcat
  systemd:
    name: tomcat
    state: started

- name: copy tomcat configure file
  template:
    src: server.xml.j2
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}/conf/server.xml"
  notify: restart tomcat

# systemd unit for start/stop management
[root@manager roles]# cat tomcat/templates/tomcat.service.j2
[Unit]
Description=tomcat - high performance web server
Documentation=https://tomcat.apache.org/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
#Environment=JAVA_HOME=/usr/local/jdk
Environment=CATALINA_HOME={{ tomcat_dir }}/{{ tomcat_name }}
Environment=CATALINA_BASE={{ tomcat_dir }}/{{ tomcat_name }}
ExecStart={{ tomcat_dir }}/{{ tomcat_name }}/bin/startup.sh
ExecStop={{ tomcat_dir }}/{{ tomcat_name }}/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

# Default tomcat config template
[root@manager roles]# cat tomcat/templates/server.xml.j2
<?xml version="1.0" encoding="UTF-8"?>
<!-- tomcat shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">
  <!-- listeners -->
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <!-- global naming resources -->
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <!-- connector -->
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- engine -->
    <Engine name="Catalina" defaultHost="localhost">
      <!-- lockout realm -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <!-- virtual host -->
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>

# Handler to restart tomcat
[root@manager roles]# cat tomcat/handlers/main.yml
- name: restart tomcat
  systemd:
    name: tomcat
    state: restarted

# These two files must be downloaded in advance; tomcat is installed from the binary tarball
[root@manager roles]# ls tomcat/files/
apache-tomcat-8.5.71.tar.gz  jdk-8u281-linux-x64.rpm

# Variables
# tomcat
jdk_version: jdk-8u281-linux-x64
tomcat_dir: /soft
tomcat_version: apache-tomcat-8.5.71
tomcat_name: tomcat
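The `make tomcat link` task gives the versioned install a stable path (/soft/tomcat -> /soft/apache-tomcat-8.5.71), so upgrading later is just repointing the link. The swap as a Python sketch (paths illustrative):

```python
import os

def repoint(link: str, target: str) -> None:
    """Atomically repoint 'link' at 'target' by renaming a temp symlink over it,
    mirroring the role's file: state=link task."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, link)
```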

(Screenshot of the running project omitted.)

4.2 Deploying the zrlog project

[root@manager roles]# mkdir  zrlog-web/{tasks,templates,handlers,meta,files} -p
1. Main tasks file
[root@manager roles]# cat zrlog-web/tasks/main.yml
- name: create code path
  file:
    path: "{{ zrlog_code_path }}"
    state: directory
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    recurse: yes

- name: unarchive zrlog package to remote
  unarchive:
    src: zrlog.tar.gz
    dest: "{{ zrlog_code_path }}"
    creates: "{{ zrlog_code_path }}/zrlog/ROOT/favicon.ico"  # skip if already extracted

- name: configure zrlog server.xml
  template:
    src: server.xml.j2
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}/conf/server.xml"
  notify: restart tomcat

- name: configure mount nfs
  mount:
    src: "172.16.1.32:{{ nfs_share_zrlog }}"
    path: "{{ zrlog_code_path }}/zrlog/ROOT/attached/"
    fstype: nfs
    opts: defaults
    state: mounted

2. Config template
[root@manager roles]# cat zrlog-web/templates/server.xml.j2
<?xml version="1.0" encoding="UTF-8"?>
<!-- tomcat shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">
  <!-- listeners -->
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <!-- global naming resources -->
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <!-- connector -->
  <Service name="Catalina">
    <Connector port="{{ tomcat_port }}" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- engine -->
    <Engine name="Catalina" defaultHost="localhost">
      <!-- lockout realm -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <!-- virtual hosts -->
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
      <Host name="{{ zrlog_domain }}" appBase="{{ zrlog_code_path }}/zrlog"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="zrlog_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>

3. The code directory, packaged in advance
[root@manager roles]# ls zrlog-web/files/zrlog.tar.gz
zrlog-web/files/zrlog.tar.gz

4. Dependencies
[root@manager roles]# cat zrlog-web/meta/main.yml
dependencies:
  - { role: tomcat }

5. Variables
[root@manager roles]# cat group_vars/all
# zrlog
zrlog_code_path: /webcode
tomcat_port: 8080
zrlog_domain: zrlog.bertwu.online

4.3 Deploying the layer-7 zrlog-proxy with haproxy

If the architecture only does layer-7 load balancing, the proxy tier must depend on keepalived. If a layer-4 load balancer sits in front, this tier is just a cluster and does not need keepalived; attach keepalived at layer 4 instead.

1. Create the directories
[root@manager roles]# mkdir zrlog-proxy/{tasks,templates,handlers,meta,files} -p

2. Main tasks file
[root@manager roles]# cat zrlog-proxy/tasks/main.yml
- name: create https key dir
  file:
    path: "{{ https_key_dir }}"
    state: directory

- name: copy https key_pem to remote
  copy:
    src: zrlog_key.pem.j2
    dest: "{{ https_key_dir }}/zrlog_key.pem"

- name: configure zrlog haproxy file
  template:
    src: haproxy.cfg.j2
    dest: "{{ haproxy_include_path }}/zrlog_haproxy.cfg"
  notify: restarted haproxy server

3. haproxy config template
[root@manager roles]# cat zrlog-proxy/templates/haproxy.cfg.j2
frontend webs  # with multiple sites, the frontend name must be unique
    bind *:80
    bind *:443 ssl crt {{ https_key_dir }}/zrlog_key.pem
    mode http
    # acl rule: send zrlog traffic to zrlog_cluster
    acl zrlog_domain hdr(host) -i {{ zrlog_domain }}
    redirect scheme https code 301 if !{ ssl_fc } zrlog_domain
    use_backend zrlog_cluster if zrlog_domain

backend zrlog_cluster
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:\ {{ zrlog_domain }}
{% for host in groups['webservers'] %}
    server {{ host }} {{ host }}:{{ tomcat_port }} check port {{ tomcat_port }} inter 3s rise 2 fall 3
{% endfor %}

4. Dependencies
[root@manager roles]# cat zrlog-proxy/meta/main.yml
dependencies:
  - { role: haproxy }

5. Certificate directory
[root@manager roles]# ls zrlog-proxy/files/zrlog_key.pem.j2
zrlog-proxy/files/zrlog_key.pem.j2

6. Certificate generation — append the cert and key into a single file in the certificate directory:
[root@proxy01 ssl_key]# cat ./6181156_zrlog.bertwu.online.pem >/tmp/zrlog_key.pem
[root@proxy01 ssl_key]# cat ./6181156_zrlog.bertwu.online.key >>/tmp/zrlog_key.pem

(Screenshot of the running project omitted.)

五、Deploying the phpmyadmin project

5.1 Deploying phpmyadmin on the web tier

1. The layer-7 load balancer is nginx; layer 4 is lvs.
2. The phpmyadmin code is packaged locally and unarchived onto the backend web nodes, since it already contains the database connection config.
3. Depends on php-fpm. The redis connection was configured while installing php: /etc/php.ini carries the redis connection info, and /etc/php-fpm.d/www.conf sets how sessions are stored (database or files).
4. Depends on both php-fpm and nginx-server.

[root@manager roles]# mkdir phpmyadmin/{tasks,templates,handlers,files,meta} -p

1. Main tasks file
[root@manager roles]# cat phpmyadmin/tasks/main.yml
- name: create code path
  file:
    path: "{{ phpmyadmin_code_path }}"
    state: directory
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    recurse: yes

- name: unarchive code package to remote
  unarchive:
    src: phpmyadmin.tar.gz
    dest: "{{ phpmyadmin_code_path }}"
    creates: "{{ phpmyadmin_code_path }}/phpmyadmin/config.inc.php"

- name: configure nginx proxy file
  template:
    src: phpmyadmin.conf.j2
    dest: "{{ nginx_include_dir }}/phpmyadmin.conf"
  notify: restart nginx server

2. Dependencies
[root@manager roles]# cat phpmyadmin/meta/main.yml
dependencies:
  - { role: php-fpm }
  - { role: nginx-server }

3. nginx site config template
[root@manager roles]# cat phpmyadmin/templates/phpmyadmin.conf.j2
server {
    listen {{ phpmyadmin_nginx_listen_port }};
    server_name {{ phpmyadmin_domain }};
    client_max_body_size 100m;
    root {{ phpmyadmin_code_path }}/phpmyadmin;
    charset utf-8;

    location / {
        index index.php index.html;
    }

    location ~* .*\.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
        fastcgi_param HTTPS {{ php_https_state }};
    }
}

4. Code package, built in advance
[root@manager roles]# ls phpmyadmin/files/
phpmyadmin.tar.gz

5. Variables
# phpmyadmin
phpmyadmin_code_path: /webcode
phpmyadmin_domain: phpmyadmin.bertwu.online
phpmyadmin_nginx_listen_port: 80
php_https_state: "off"    # switch to "on" when the front end terminates https
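Point 3 above says the php-fpm role points sessions at redis. Those templates are not shown in this article, but the relevant fragment typically looks like the following — an assumption for illustration only; the redis address is `bind_ip` from group_vars, and the exact directives depend on the role's own php.ini/www.conf templates:

```ini
; illustrative fragment — actual values come from the php-fpm role's templates
session.save_handler = redis
session.save_path    = "tcp://172.16.1.41:6379"
```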

5.2 nginx on the layer-7 load balancers (haproxy as the alternative)

Note: the previously running haproxy must be stopped manually first, otherwise ports 80 and 443 conflict and nginx fails to start. Alternatively, nginx and haproxy could listen on different ports, with the front-end lvs proxying a second port cluster, but that is cumbersome — and in production the layer-7 tier would not run haproxy and nginx side by side anyway. This setup is purely an experiment.
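A quick way to confirm the conflict before switching: check whether anything still listens on the ports nginx wants. A small Python sketch, runnable on the proxy node:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something already listens on host:port (e.g. haproxy on 80/443) —
    exactly the condition that makes nginx fail to bind."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

for p in (80, 443):
    print(p, "in use" if port_in_use(p) else "free")
```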

[root@manager roles]# mkdir phpmyadmin-proxy/{tasks,templates,meta,files} -p
1. Main tasks file
[root@manager roles]# cat phpmyadmin-proxy/tasks/main.yml
- name: create certificate dir
  file:
    path: "{{ ssl_key_dir }}"
    state: directory

- name: unarchive ssl_key to ssl_key_dir
  unarchive:
    src: 6386765_phpmyadmin.bertwu.online_nginx.zip
    dest: "{{ ssl_key_dir }}"
    creates: "{{ ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.key"

- name: copy proxy_params file
  template:
    src: proxy_params.j2
    dest: /etc/nginx/proxy_params  # full file path, matching the "include proxy_params;" below

- name: configure nginx file
  template:
    src: phpmyadmin.conf.j2
    dest: "{{ nginx_include_dir }}/phpmyadmin.conf"
  notify: restart nginx server

2. Dependencies
[root@manager roles]# cat phpmyadmin-proxy/meta/main.yml
dependencies:
  - { role: nginx-server }

3. Proxy parameters for the layer-7 load balancer
[root@manager roles]# cat phpmyadmin-proxy/templates/proxy_params.j2
# client ip / host headers
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# http version
proxy_http_version 1.1;
proxy_set_header Connection "";

# timeouts
proxy_connect_timeout 120s;
proxy_read_timeout 120s;
proxy_send_timeout 120s;

# buffering
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;

4. Layer-7 proxy site config
[root@manager roles]# cat phpmyadmin-proxy/templates/phpmyadmin.conf.j2
upstream {{ web_cluster_name }} {
{% for host in groups["webservers"] %}
    server {{ host }}:{{ phpmyadmin_nginx_listen_port }};
{% endfor %}
}

server {
    listen 443 ssl;
    server_name {{ phpmyadmin_domain }};
    ssl_certificate {{ ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.pem;
    ssl_certificate_key {{ ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.key;

    location / {
        proxy_pass http://{{ web_cluster_name }};
        include proxy_params;
    }
}

server {
    listen {{ nginx_proxy_port }};
    server_name {{ phpmyadmin_domain }};
    return 302 https://$server_name$request_uri;
}

5. The Aliyun certificate
[root@manager roles]# ls phpmyadmin-proxy/files/
6386765_phpmyadmin.bertwu.online_nginx.zip

(Screenshot of the running project omitted.)

5.3 Consolidated variables for all projects

[root@manager roles]# cat group_vars/all
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog

# rsync
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.pass
rsync_module_name1: blog
rsync_module_name2: zrlog
rsync_module_path1: /data/blog
rsync_module_path2: /data/zrlog
rsync_virtual_passwd: 123
rsync_port: 873
rsync_max_conn: 200

# sersync
rsync_ip: 172.16.1.31
timeToExecute: 60
rsync_path: /usr/local

# mysql
mysql_ip: 172.16.1.51
mysql_super_user: app
mysql_super_user_passwd: "123456"
mysql_super_user_priv: '*.*:ALL'
allow_ip: '172.16.1.%'

# redis
bind_ip: 172.16.1.41

# nginx
nginx_include_path: /etc/nginx/conf.d/*.conf
nginx_include_dir: /etc/nginx/conf.d
keepalive_timeout: 65
nginx_conf_path: /etc/nginx/nginx.conf

# php-fpm
php_ini_path: /etc/php.ini
php_fpm_path: /etc/php-fpm.d/www.conf
session_method: redis
pm_max_children: 50
pm_start_servers: 5
pm_min_spare_servers: 5
pm_max_spare_servers: 35

# haproxy
stat_port: 8888
maxconn: 4000
haproxy_include_path: /etc/haproxy/conf.d

# keepalived
keepalived_conf_path: /etc/keepalived/keepalived.conf
proxy_vip: 10.0.0.100

# lvs
lvs_vip: 172.16.1.100
lvs_http_port: 80
lvs_https_port: 443
lvs_rs_network: lo:0

# dns
bertwu_online_zone_name: bertwu.online.zone
dns_master_ip: 172.16.1.91
dns_slave_ip: 172.16.1.92
dns_zone_file_path: /var/named

# wordpress
wordpress_domain: blog.bertwu.online
nginx_http_listen_port: 80
wordpress_code_path: /webcode
https_state: "on"

# wordpress haproxy
haproxy_port: 80
https_key_dir: /ssl

# tomcat
jdk_version: jdk-8u281-linux-x64
tomcat_dir: /soft
tomcat_version: apache-tomcat-8.5.71
tomcat_name: tomcat

# zrlog
zrlog_code_path: /webcode
tomcat_port: 8080
zrlog_domain: zrlog.bertwu.online

# zrlog-proxy

# phpmyadmin
phpmyadmin_code_path: /webcode
phpmyadmin_domain: phpmyadmin.bertwu.online
phpmyadmin_nginx_listen_port: 80
php_https_state: "on"

# phpmyadmin-proxy
ssl_key_dir: /etc/nginx/ssl_key
web_cluster_name: phpmyadmin
nginx_proxy_port: 80
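With this many variables in one flat file, it is easy to define a key twice and have YAML silently keep only the last value. A stdlib-only duplicate-key check for vars files in this simple `key: value` form (a sketch, not part of the roles):

```python
def find_duplicate_keys(text: str) -> list:
    """Return keys that appear more than once in a flat 'key: value' vars file,
    skipping blank lines and comments."""
    seen, dupes = set(), []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key = line.split(":", 1)[0].strip()
        if key in seen and key not in dupes:
            dupes.append(key)
        seen.add(key)
    return dupes
```

Run it over group_vars/all before a deploy; an empty result means every key is defined exactly once.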

5.4 Consolidated top.yml

[root@manager roles]# cat top.yml
#- hosts: all
#  roles:
#    - role: base
#
#- hosts: nfsservers
#  roles:
#    - role: nfs-server
#
#- hosts: rsyncservers
#  roles:
#    - role: rsync-server
#
#- hosts: nfsservers
#  roles:
#    - role: sersync-server
#
#- hosts: mysqlservers
#  roles:
#    - role: mysql-server
#
#- hosts: redisservers
#  roles:
#    - role: redis-server
#
#- hosts: webservers
#  roles:
#   # - role: php-fpm
#   # - role: nginx-server
#    - role: wordpress-web
#
#- hosts: proxyservers
#  roles:
#    - role: wordpress-proxy
#
#- hosts: proxyservers
#  roles:
#    - role: keepalived
#
#- hosts: lbservers
#  roles:
#    - role: lvs
#
#- hosts: proxyservers
#  roles:
#    - role: lvs-rs
#
#- hosts: routes
#  roles:
#    - role: route
#
#- hosts: dnsservers
#  roles:
#    - role: dns
#
#- hosts: webservers
#  roles:
#    - role: zrlog-web
#
#- hosts: proxyservers
#  roles:
#    - role: zrlog-proxy
#
#- hosts: webservers
#  roles:
#    - role: phpmyadmin
#
#- hosts: proxyservers
#  roles:
#    - role: phpmyadmin-proxy

5.5 Overall project directory tree

[root@manager /]# tree ansible
ansible
└── roles
    ├── ansible.cfg
    ├── base
    │   ├── files
    │   ├── handlers
    │   ├── tasks
    │   │   ├── firewalld.yml
    │   │   ├── kernel.yml
    │   │   ├── limits.yml
    │   │   ├── main.yml
    │   │   ├── user.yml
    │   │   ├── yum_pkg.yml
    │   │   └── yum_repository.yml
    │   └── templates
    ├── dns
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── bertwu.online.zone.j2
    │       └── named.conf.j2
    ├── group_vars
    │   └── all
    ├── haproxy
    │   ├── 1score.awk
    │   ├── count1.awk
    │   ├── handlers
    │   │   └── main.yml
    │   ├── sca
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── haproxy.cfg.j2
    │       └── haproxy.service.j2
    ├── hosts
    ├── keepalived
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── keepalived.conf.j2
    ├── lvs
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── mail.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── keepalived.conf.j2
    ├── lvs-rs
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── ifcfg-lo:0.j2
    ├── mysql-server
    │   ├── handlers
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    ├── network_init.yml
    ├── nfs-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── exports.j2
    ├── nginx-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── nginx.conf.j2
    ├── php-fpm
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── php.ini.j2
    │       └── www.conf.j2
    ├── phpmyadmin
    │   ├── files
    │   │   └── phpmyadmin.tar.gz
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── phpmyadmin.conf.j2
    ├── phpmyadmin-proxy
    │   ├── files
    │   │   └── 6386765_phpmyadmin.bertwu.online_nginx.zip
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── phpmyadmin.conf.j2
    │       └── proxy_params.j2
    ├── redis-server
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── redis.conf.j2
    ├── route
    │   └── tasks
    │       ├── main.yml
    │       └── main.yml1
    ├── rsync-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── rsyncd.conf.j2
    ├── sersync-server
    │   ├── files
    │   │   └── sersync2.5.4_64bit_binary_stable_final.tar.gz
    │   ├── handlers
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── confxml1.xml.j2
    │       └── confxml2.xml.j2
    ├── tomcat
    │   ├── files
    │   │   ├── apache-tomcat-8.5.71.tar.gz
    │   │   └── jdk-8u281-linux-x64.rpm
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── server.xml.j2
    │       └── tomcat.service.j2
    ├── top.yml
    ├── wordpress-proxy
    │   ├── files
    │   │   └── blog_key.pem.j2
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── haproxy.cfg.j2
    ├── wordpress-web
    │   ├── files
    │   │   └── wordpress.tar.gz
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── wordpress.conf.j2
    ├── zrlog-proxy
    │   ├── files
    │   │   └── zrlog_key.pem.j2
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── haproxy.cfg.j2
    └── zrlog-web
        ├── files
        │   └── zrlog.tar.gz
        ├── handlers
        ├── meta
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── server.xml.j2

Appendix:

1. nginx config used when the wordpress layer-7 load balancer is nginx:

[root@proxy01 conf.d]# cat wordpress.conf
upstream wdblog {
    server 172.16.1.7:80;
    server 172.16.1.8:80;
    server 172.16.1.9:80;
}

server {
    listen 443 default ssl;
    server_name blog.bertwu.online;
    ssl_certificate ssl_key/6188098_blog.bertwu.online.pem;
    ssl_certificate_key ssl_key/6188098_blog.bertwu.online.key;

    location / {
        proxy_pass http://wdblog;
        include proxy_params;
    }
}

server {
    listen 80;
    server_name blog.bertwu.online;
    return 302 https://$server_name$request_uri;
}

2. nginx config used when the zrlog layer-7 load balancer is nginx:

[root@proxy01 conf.d]# cat zrblog.conf
upstream zrlog {
    server 172.16.1.7:8080;
    server 172.16.1.8:8080;
    #server 172.16.1.9:8080;
}

server {
    listen 443 ssl;
    server_name zrlog.bertwu.online;
    ssl_certificate ssl_key/6181156_zrlog.bertwu.online.pem;
    ssl_certificate_key ssl_key/6181156_zrlog.bertwu.online.key;
    ssl_session_timeout 100m;
    ssl_session_cache shared:cache:10m;

    location / {
        proxy_pass http://zrlog;
        include proxy_params;
    }
}

server {
    listen 80;
    server_name zrlog.bertwu.online;
    return 302 https://$server_name$request_uri;
}
