
Super-detailed SaltStack installation and deployment

Purpose: avoid logging in to machines one by one to install salt-minion and repeating the same configuration by hand.

                           SaltStack in Practice: Automated Operations

1.1 Environment

linux-node1 (master, server side)  192.168.0.15
linux-node2 (minion, client side)  192.168.0.16

1.2 The three SaltStack run modes

Local: standalone, masterless
Master/Minion: the traditional mode (a server side plus agent sides)
Salt SSH: agentless, over SSH
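
For reference, Local mode can be tried on any host that has salt-minion installed, with no master involved (a quick check, not part of the original article):

[root@linux-node1 ~]# salt-call --local test.ping   ## run a module function locally, bypassing the master
local:
    True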

1.3 The three main SaltStack capabilities

●Remote execution

●Configuration management

●Cloud management

1.4 Preparing the base environment for installation

[root@linux-node1 ~]# cat /etc/redhat-release  ## check the OS release

CentOS release 6.7 (Final)

[root@linux-node1 ~]# uname -r ## check the kernel version

2.6.32-573.el6.x86_64

[root@linux-node1 ~]# getenforce ## check SELinux status

Enforcing

[root@linux-node1 ~]# setenforce 0 ## put SELinux into permissive mode

[root@linux-node1 ~]# getenforce  

Permissive

[root@linux-node1 ~]# /etc/init.d/iptables stop

[root@linux-node1 ~]# ifconfig eth0|awk -F '[ :]+' 'NR==2{print $4}' ## extract the IP address

192.168.0.15

[root@linux-node1 ~]# hostname ## check the hostname

linux-node1.zhurui.com

[root@linux-node1 yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo   ## Salt requires the EPEL repository

1.5 Installing Salt

Server side:

[root@linux-node1 yum.repos.d]# yum install -y salt-master salt-minion ## the salt-master and salt-minion packages

[root@linux-node1 yum.repos.d]# chkconfig salt-master on  ## start at boot

[root@linux-node1 yum.repos.d]# chkconfig salt-minion on  ## start at boot

[root@linux-node1 yum.repos.d]# /etc/init.d/salt-master start   ## start salt-master

Starting salt-master daemon:                                   [  OK  ]

At this point the minion config file must be edited before the salt-minion service can start:

[root@linux-node1 yum.repos.d]# grep '^[a-z]' /etc/salt/minion   

master: 192.168.0.15  ## point at the master host

[root@linux-node1 yum.repos.d]# cat /etc/hosts

192.168.0.15 linux-node1.zhurui.com linux-node1  ## confirm the hostnames resolve

192.168.0.16 linux-node2.zhurui.com linux-node2

Resolution results:

[root@linux-node1 yum.repos.d]# ping linux-node1.zhurui.com
PING linux-node1.zhurui.com (192.168.0.15) 56(84) bytes of data.
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=4 ttl=64 time=0.060 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=5 ttl=64 time=0.053 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=6 ttl=64 time=0.052 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=7 ttl=64 time=0.214 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=8 ttl=64 time=0.061 ms

[root@linux-node1 yum.repos.d]# /etc/init.d/salt-minion start  ## start the minion on this host too

Starting salt-minion daemon:                               [  OK  ]

[root@linux-node1 yum.repos.d]#

Client side:

[root@linux-node2 ~]# yum install -y salt-minion  ## install salt-minion, the agent package

[root@linux-node2 ~]# chkconfig salt-minion on  ## enable at boot

[root@linux-node2 ~]# grep '^[a-z]' /etc/salt/minion   ## the client points at the master

master: 192.168.0.15

[root@linux-node2 ~]# /etc/init.d/salt-minion start  ## then start the minion

Starting salt-minion daemon:                               [  OK  ]

1.6 Salt key authentication

1.6.1 Before running salt-key -a linux*, the /etc/salt/pki/master directory looks like this:

(screenshot 1)

(screenshot 2)

1.6.2 Accept the keys with salt-key -a linux*; the files under minions_pre then move into the minions directory:

[root@linux-node1 minion]# salt-key -a linux*
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1.zhurui.com
linux-node2.zhurui.com
Proceed? [n/Y] Y
Key for minion linux-node1.zhurui.com accepted.
Key for minion linux-node2.zhurui.com accepted.
[root@linux-node1 minion]# salt-key
Accepted Keys:
linux-node1.zhurui.com
linux-node2.zhurui.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:

(screenshot 3)

1.6.3 The directory structure then changes to:

(screenshot 4)

1.6.4 And on the client, the master's public key appears under /etc/salt/pki/minion/:

(screenshot 5)

1.7 Salt remote execution in detail

1.7.1 The salt '*' test.ping command

[root@linux-node1 master]# salt '*' test.ping  ## in test.ping, test is a module and ping is a function inside that module

linux-node2.zhurui.com:

    True

linux-node1.zhurui.com:

    True

[root@linux-node1 master]# 

(screenshot 6)

1.7.2 The salt '*' cmd.run 'uptime' command

(screenshot 7)
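
The screenshot shows the uptime of each minion, keyed by minion ID. Any shell command can be pushed the same way, for instance (an illustrative target pattern matching the minion IDs above):

[root@linux-node1 master]# salt 'linux-node2*' cmd.run 'df -h'   ## run df -h only on linux-node2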

1.8 SaltStack configuration management

1.8.1 Edit the /etc/salt/master config file and uncomment file_roots

(screenshot 8)

1.8.2 Then run the following

[root@linux-node1 master]# ls /srv/

[root@linux-node1 master]# mkdir /srv/salt

[root@linux-node1 master]# /etc/init.d/salt-master restart

Stopping salt-master daemon:                               [  OK  ]

Starting salt-master daemon:                                 [  OK  ]

[root@linux-node1 salt]# cat apache.sls   ## created under /srv/salt/

(screenshot 9)

[root@linux-node1 salt]# salt '*' state.sls apache  ## then apply the state

The following error appears:

(screenshot 10)

Edit apache.sls and add the following:

(screenshot 11)
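
The apache.sls behind this screenshot matches the version written out in full later in this article:

apache-install:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel

apache-service:
  service.running:
    - name: httpd
    - enable: True
    - reload: True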

Finally it succeeds:

[root@linux-node1 salt]# salt '*' state.sls apache
linux-node2.zhurui.com:
----------
    ID: apache-install
    Function: pkg.installed
    Name: httpd
    Result: True
    Comment: Package httpd is already installed.
    Started: 22:38:52.954973
    Duration: 1102.909 ms
    Changes:
----------
    ID: apache-install
    Function: pkg.installed
    Name: httpd-devel
    Result: True
    Comment: Package httpd-devel is already installed.
    Started: 22:38:54.058190
    Duration: 0.629 ms
    Changes:
----------
    ID: apache-service
    Function: service.running
    Name: httpd
    Result: True
    Comment: Service httpd has been enabled, and is running
    Started: 22:38:54.059569
    Duration: 1630.938 ms
    Changes:
----------
    httpd:
        True

Summary
------------
Succeeded: 3 (changed=1)
Failed:    0
------------
Total states run: 3
linux-node1.zhurui.com:
----------
    ID: apache-install
    Function: pkg.installed
    Name: httpd
    Result: True
    Comment: Package httpd is already installed.
    Started: 05:01:17.491217
    Duration: 1305.282 ms
    Changes:
----------
    ID: apache-install
    Function: pkg.installed
    Name: httpd-devel
    Result: True
    Comment: Package httpd-devel is already installed.
    Started: 05:01:18.796746
    Duration: 0.64 ms
    Changes:
----------
    ID: apache-service
    Function: service.running
    Name: httpd
    Result: True
    Comment: Service httpd has been enabled, and is running
    Started: 05:01:18.798131
    Duration: 1719.618 ms
    Changes:
----------
    httpd:
        True

Summary
------------
Succeeded: 3 (changed=1)
Failed:    0
------------
Total states run: 3
[root@linux-node1 salt]#

1.8.3 Verify that httpd was installed by SaltStack

linux-node1:

[root@linux-node1 salt]# lsof -i:80  ## started successfully

COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

httpd   7397   root    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7399 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7400 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7401 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7403 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7404 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7405 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7406 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

httpd   7407 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)

linux-node2:

[root@linux-node2 pki]# lsof -i:80

COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

httpd   12895   root    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12897 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12898 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12899 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12901 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12902 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12906 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12908 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

httpd   12909 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)

[root@linux-node2 pki]# 

1.8.4 SaltStack state management

(screenshot 12)

[root@linux-node1 salt]# salt '*' state.highstate

2.1 SaltStack data systems: Grains

●Grains

●Pillar

2.1.1 List the available grains with salt

[root@linux-node1 salt]# salt 'linux-node1*' grains.ls
linux-node1.zhurui.com:
- SSDs
- biosreleasedate
- biosversion
- cpu_flags
- cpu_model
- cpuarch
- domain
- fqdn
- fqdn_ip4
- fqdn_ip6
- gpus
- host
- hwaddr_interfaces
- id
- init
- ip4_interfaces
- ip6_interfaces
- ip_interfaces
- ipv4
- ipv6
- kernel
- kernelrelease
- locale_info
- localhost
- lsb_distrib_codename
- lsb_distrib_id
- lsb_distrib_release
- machine_id
- manufacturer
- master
- mdadm
- mem_total
- nodename
- num_cpus
- num_gpus
- os
- os_family
- osarch
- oscodename
- osfinger
- osfullname
- osmajorrelease
- osrelease
- osrelease_info
- path
- productname
- ps
- pythonexecutable
- pythonpath
- pythonversion
- saltpath
- saltversion
- saltversioninfo
- selinux
- serialnumber
- server_id
- shell
- virtual
- zmqversion
[root@linux-node1 salt]#

2.1.2 Full system details:

[root@linux-node1 salt]# salt 'linux-node1*' grains.items
linux-node1.zhurui.com:
----------
SSDs:
biosreleasedate:
    07/31/2013
biosversion:
    6.00
cpu_flags:
    - fpu
    - vme
    - de
    - pse
    - tsc
    - msr
    - pae
    - mce
    - cx8
    - apic
    - sep
    - mtrr
    - pge
    - mca
    - cmov
    - pat
    - pse36
    - clflush
    - dts
    - mmx
    - fxsr
    - sse
    - sse2
    - ss
    - syscall
    - nx
    - rdtscp
    - lm
    - constant_tsc
    - up
    - arch_perfmon
    - pebs
    - bts
    - xtopology
    - tsc_reliable
    - nonstop_tsc
    - aperfmperf
    - unfair_spinlock
    - pni
    - ssse3
    - cx16
    - sse4_1
    - sse4_2
    - x2apic
    - popcnt
    - hypervisor
    - lahf_lm
    - arat
    - dts
cpu_model:
    Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz
cpuarch:
    x86_64
domain:
    zhurui.com
fqdn:
    linux-node1.zhurui.com
fqdn_ip4:
    - 192.168.0.15
fqdn_ip6:
gpus:
    |_
      ----------
      model:
          SVGA II Adapter
      vendor:
          unknown
host:
    linux-node1
hwaddr_interfaces:
    ----------
    eth0:
        00:0c:29:fc:ba:90
    lo:
        00:00:00:00:00:00
id:
    linux-node1.zhurui.com
init:
    upstart
ip4_interfaces:
    ----------
    eth0:
        - 192.168.0.15
    lo:
        - 127.0.0.1
ip6_interfaces:
    ----------
    eth0:
        - fe80::20c:29ff:fefc:ba90
    lo:
        - ::1
ip_interfaces:
    ----------
    eth0:
        - 192.168.0.15
        - fe80::20c:29ff:fefc:ba90
    lo:
        - 127.0.0.1
        - ::1
ipv4:
    - 127.0.0.1
    - 192.168.0.15
ipv6:
    - ::1
    - fe80::20c:29ff:fefc:ba90
kernel:
    Linux
kernelrelease:
    2.6.32-573.el6.x86_64
locale_info:
    ----------
    defaultencoding:
        UTF8
    defaultlanguage:
        en_US
    detectedencoding:
        UTF-8
localhost:
    linux-node1.zhurui.com
lsb_distrib_codename:
    Final
lsb_distrib_id:
    CentOS
lsb_distrib_release:
    6.7
machine_id:
    da5383e82ce4b8d8a76b5a3e00000010
manufacturer:
    VMware, Inc.
master:
    192.168.0.15
mdadm:
mem_total:
    556
nodename:
    linux-node1.zhurui.com
num_cpus:
    1
num_gpus:
    1
os:
    CentOS
os_family:
    RedHat
osarch:
    x86_64
oscodename:
    Final
osfinger:
    CentOS-6
osfullname:
    CentOS
osmajorrelease:
    6
osrelease:
    6.7
osrelease_info:
    - 6
    - 7
path:
    /sbin:/usr/sbin:/bin:/usr/bin
productname:
    VMware Virtual Platform
ps:
    ps -efH
pythonexecutable:
    /usr/bin/python2.6
pythonpath:
    - /usr/bin
    - /usr/lib64/python26.zip
    - /usr/lib64/python2.6
    - /usr/lib64/python2.6/plat-linux2
    - /usr/lib64/python2.6/lib-tk
    - /usr/lib64/python2.6/lib-old
    - /usr/lib64/python2.6/lib-dynload
    - /usr/lib64/python2.6/site-packages
    - /usr/lib64/python2.6/site-packages/gtk-2.0
    - /usr/lib/python2.6/site-packages
pythonversion:
    - 2
    - 6
    - 6
    - final
    - 0
saltpath:
    /usr/lib/python2.6/site-packages/salt
saltversion:
    2015.5.10
saltversioninfo:
    - 2015
    - 5
    - 10
    - 0
selinux:
    ----------
    enabled:
        True
    enforced:
        Permissive
serialnumber:
    VMware-564d8f43912d3a99-eb c4 3b a9 34 fc ba 90
server_id:
    295577080
shell:
    /bin/bash
virtual:
    VMware
zmqversion:
    3.2.5

2.1.3 System version information:

(screenshot 13)

2.1.4 View all of node1's IP addresses:

[root@linux-node1 salt]# salt 'linux-node1*' grains.get ip_interfaces:eth0 ## grains gather system information

linux-node1.zhurui.com:

    - 192.168.0.15

    - fe80::20c:29ff:fefc:ba90

(screenshot 14)

(screenshot 15)

2.1.5 Collect system information with Grains:

[root@linux-node1 salt]# salt 'linux-node1*' grains.get os 

linux-node1.zhurui.com:

    CentOS

[root@linux-node1 salt]# salt -G os:CentOS cmd.run 'w'  ## -G means match on Grains; run w to see who is logged in

linux-node2.zhurui.com:

     20:29:40 up 2 days, 16:09,  2 users,  load average: 0.00, 0.00, 0.00

    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

    root     tty1     -                Sun14   29:07m  0.32s  0.32s -bash

    root     pts/0    192.168.0.101    Sun20   21:41m  0.46s  0.46s -bash

linux-node1.zhurui.com:

     02:52:01 up 1 day, 22:31,  3 users,  load average: 4.00, 4.01, 4.00

    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

    root     tty1     -                Sat20   24:31m  0.19s  0.19s -bash

    root     pts/0    192.168.0.101    Sun02    1.00s  1.33s  0.68s /usr/bin/python

    root     pts/1    192.168.0.101    Sun04   21:36m  0.13s  0.13s -bash

[root@linux-node1 salt]# 

Screenshot:

(screenshot 16)

2.1.6 Use a Grains role to match the memcache hosts and run a command there

[root@linux-node1 salt]# vim /etc/salt/minion ## edit the minion config and uncomment the following lines

88 grains:

 89   roles:

 90     - webserver

 91     - memcache

Screenshot:

(screenshot 17)

[root@linux-node1 salt]# /etc/init.d/salt-minion restart   ## restart to load the new grains

Stopping salt-minion daemon:                               [  OK  ]

Starting salt-minion daemon:                               [  OK  ]

[root@linux-node1 salt]# 

[root@linux-node1 salt]# salt -G 'roles:memcache' cmd.run 'echo zhurui'  ## use grains to match clients whose role is memcache, then run a command

linux-node1.zhurui.com:

    zhurui

[root@linux-node1 salt]#

Screenshot:

(screenshot 18)

2.1.7 Rules can also be defined in a new file, /etc/salt/grains

[root@linux-node1 salt]# cat /etc/salt/grains

web: nginx

[root@linux-node1 salt]# /etc/init.d/salt-minion restart  ## restart the service after changing the file

Stopping salt-minion daemon:                               [  OK  ]

Starting salt-minion daemon:                               [  OK  ]

[root@linux-node1 salt]# 

[root@linux-node1 salt]# salt -G web:nginx cmd.run 'w'  ## run w on hosts whose grain web matches nginx

linux-node1.zhurui.com:

     03:31:07 up 1 day, 23:11,  3 users,  load average: 4.11, 4.03, 4.01

    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

    root     tty1     -                Sat20   25:10m  0.19s  0.19s -bash

    root     pts/0    192.168.0.101    Sun02    0.00s  1.41s  0.63s /usr/bin/python

    root     pts/1    192.168.0.101    Sun04   22:15m  0.13s  0.13s -bash

 

Uses of grains:

1. Collect low-level system information

2. Match minions in remote execution

3. Match minions in top.sls

 

2.1.8 Minions can also be matched from the /srv/salt/top.sls file

 

[root@linux-node1 salt]# cat /srv/salt/top.sls 

base:

  'web:nginx':

    - match: grain

    - apache

[root@linux-node1 salt]# 
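
With this top file in place, the next highstate run applies the apache state only to minions whose grain web equals nginx:

[root@linux-node1 salt]# salt '*' state.highstate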

(screenshot 19)

2.2 The SaltStack Pillar data system

2.2.1 First enable the pillar switch around line 552 of the master config

 

[root@linux-node1 salt]# grep '^[a-z]' /etc/salt/master 

file_roots:

pillar_opts: True

[root@linux-node1 salt]# /etc/init.d/salt-master restart   ## restart the master

Stopping salt-master daemon:                               [  OK  ]

Starting salt-master daemon:                                 [  OK  ]

[root@linux-node1 salt]# salt '*' pillar.items  ## verify with this command

Screenshot:

(screenshot 20)

[root@linux-node1 salt]# grep '^[a-z]' /etc/salt/master

529 pillar_roots:  ## uncomment these lines

530   base:

531     - /srv/pillar

Screenshot:

(screenshot 21)

[root@linux-node1 salt]# mkdir /srv/pillar

[root@linux-node1 salt]# /etc/init.d/salt-master restart  ## restart the master

Stopping salt-master daemon:                               [  OK  ]

Starting salt-master daemon:                                 [  OK  ]

[root@linux-node1 salt]# vim /srv/pillar/apache.sls

[root@linux-node1 salt]# cat /srv/pillar/apache.sls

{% if grains['os'] == 'CentOS' %}

apache: httpd

{% elif grains['os'] == 'Debian' %}

apache: apache2

{% endif %}

[root@linux-node1 salt]# 
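
A state file can later consume this pillar value instead of hard-coding the package name (a minimal sketch, not part of the original walkthrough):

apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}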

Screenshot:

(screenshot 22)

Next, specify which minions can see it:

[root@linux-node1 salt]# cat /srv/pillar/top.sls 

base:

  '*':

    - apache

(screenshot 23)

[root@linux-node1 salt]# salt '*' pillar.items ## verify with this command after the change

linux-node1.zhurui.com:

    ----------

    apache:

        httpd

linux-node2.zhurui.com:

    ----------

    apache:

        httpd

Screenshot:

(screenshot 24)

2.2.2 Targeting hosts with Pillar

(screenshot 25)

Handling the error:

[root@linux-node1 salt]# salt '*' saltutil.refresh_pillar  ## a refresh command is required first

linux-node2.zhurui.com:

    True

linux-node1.zhurui.com:

    True

[root@linux-node1 salt]# 

Screenshot:

(screenshot 26)

[root@linux-node1 salt]# salt -I 'apache:httpd' test.ping

linux-node1.zhurui.com:

    True

linux-node2.zhurui.com:

    True

[root@linux-node1 salt]# 

(screenshot 27)

2.3 Differences between the SaltStack data systems

Name    Stored on     Data type      Collection / refresh                                                                           Typical use
Grains  minion side   static data    collected when the minion starts; refresh with saltutil.sync_grains                           basic minion data, e.g. for matching minions; also usable for asset management
Pillar  master side   dynamic data   defined on the master and assigned to specific minions; refresh with saltutil.refresh_pillar  master-specified data, visible only to the assigned minions; good for sensitive data
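
The two refresh commands named in the table:

[root@linux-node1 ~]# salt '*' saltutil.sync_grains      ## refresh grains on the minions
[root@linux-node1 ~]# salt '*' saltutil.refresh_pillar   ## refresh pillar on the minions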

SaltStack is written in Python and can manage thousands of servers.

1. Environment preparation

Prepare two virtual machines:

Hostname       IP         Role
linux-node1    10.0.0.7   master
linux-node2    10.0.0.8   minion

 

Install master and minion on node 1:

[root@linux-node1 ~]# yum install salt-master salt-minion -y

 

Install minion on node 2:

[root@linux-node2 ~]# yum install salt-minion -y

 

Enable them at boot:

[root@linux-node1 ~]# chkconfig salt-master on

[root@linux-node1 ~]# chkconfig --add salt-master

[root@linux-node1 ~]# chkconfig salt-minion on

[root@linux-node1 ~]# chkconfig --add salt-minion

[root@linux-node2 ~]# chkconfig salt-minion on

[root@linux-node2 ~]# chkconfig --add salt-minion

 

Point the minions at the master:

vim /etc/salt/minion

master: 10.0.0.7

 

Accept the keys for node 1 and node 2:

salt-key -a linux*

 

I. Environment

Repetitive operations work: OS installation, environment deployment, adding monitoring, code releases (secondary development based on git or svn), project migration, scheduled jobs.

2. Testing

Test ping against node 1 and node 2:

salt '*' test.ping

 

Use cmd.run to run a bash command that checks the load:

salt '*' cmd.run 'uptime'

 

Set the sls file paths:

[root@linux-node1 ~]# mkdir -p /srv/salt/base

[root@linux-node1 ~]# mkdir -p /srv/salt/test

[root@linux-node1 ~]# mkdir -p /srv/salt/prod

 

vim /etc/salt/master

file_roots:

  base:

    - /srv/salt/base

  test:

    - /srv/salt/test

  prod:

    - /srv/salt/prod

 

Restart the master:

/etc/init.d/salt-master restart

 

Write YAML to install Apache and manage its service:

cd /srv/salt/base

vim apache.sls

apache-install:

  pkg.installed:

    - names:

      - httpd

      - httpd-devel

 

apache-service:

  service.running:

    - name: httpd

    - enable: True

    - reload: True

 

Apply the state file:

salt '*' state.sls apache

 

Write the highstate top file:

vim top.sls

base:

  'linux-node2':

    - apache

 

salt '*' state.highstate   # run the highstate (top.sls)

 

System environment:

salt is a new infrastructure management platform. It takes only minutes to get running, scales to manage tens of thousands of servers, and moves data between them in seconds.

3. Data system: Grains

salt 'linux-node1' grains.items  # list all key/value grains

 

salt 'linux-node1' grains.get fqdn # query a single value

 

Show all of node1's eth0 IPs:

[root@linux-node1 ~]# salt 'linux-node1' grains.get ip_interfaces:eth0

linux-node1:

    - 10.0.0.7

    - fe80::20c:29ff:fe9d:57e8

 

# run cmd.run on hosts matched by OS name

[root@linux-node1 ~]# salt -G os:CentOS cmd.run 'w'  # -G means match on grains

linux-node2:

     03:47:49 up  9:58,  2 users,  load average: 0.00, 0.00, 0.00

    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

    root     pts/1    10.0.0.1         17:50    1:31m  0.14s  0.14s -bash

    root     pts/0    10.0.0.1         03:37    5:40   0.00s  0.00s -bash

linux-node1:

     03:47:49 up  1:35,  2 users,  load average: 0.00, 0.00, 0.00

    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

    root     pts/0    10.0.0.1         02:13    1:01m  0.08s  0.01s vim top.sls

    root     pts/1    10.0.0.1         03:37    0.00s  0.52s  0.34s /usr/bin/python

 

vim /etc/salt/grains

web: nginx

salt -G web:nginx cmd.run 'w'

 

#cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

salt can do configuration management, remote commands, and package management.

4. Data system: Pillar

Set the pillar file paths:

vim /etc/salt/master

pillar_roots:

  base:

    - /srv/pillar

 

mkdir /srv/pillar # create the default pillar directory

 

/etc/init.d/salt-master restart

vim /srv/pillar/apache.sls  # uses the Jinja template language

{% if grains['os'] == 'CentOS' %}

apache: httpd

{% elif grains['os'] == 'Debian' %}

apache: apache2

{% endif %}

 

vim /srv/pillar/top.sls

base:

  '*':

    - apache

 

[root@linux-node1 ~]# salt '*' pillar.items

linux-node2:

    ----------

    apache:

        httpd

linux-node1:

    ----------

    apache:

        httpd

 

After configuring pillar, refresh so it takes effect:

[root@linux-node1 ~]# salt '*' saltutil.refresh_pillar

[root@linux-node1 ~]#  salt -I 'apache:httpd' test.ping

linux-node2:

    True

linux-node1:

    True

 

http://docs.saltstack.cn/topics/index.html    # SaltStack Chinese documentation site

SaltStack remote execution:

targeting

modules

returners

 

Module-based access control

[root@linux-node1 ~]# vim /etc/salt/master

client_acl:

  oldboy:                      # the oldboy user may only use test.ping and all network.* functions

    - test.ping

    - network.*

  user01:                      # user01 may run test.ping only, and only on linux-node1*

    - linux-node1*:

      - test.ping

 

Permission setup (so a non-root user can use salt):

chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master

 

 

[root@linux-node1 ~]# /etc/init.d/salt-master restart

[root@linux-node1 ~]# su - oldboy

[oldboy@linux-node1 ~]$ salt '*' cmd.run 'df -h'

[WARNING ] Failed to open log file, do you have permission to write to /var/log/salt/master?

Failed to authenticate! This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).
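
The denial is expected here: cmd.run is not in oldboy's ACL. The whitelisted functions still work (illustrative output):

[oldboy@linux-node1 ~]$ salt '*' test.ping
linux-node2:
    True
linux-node1:
    True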

 

Create the schema, 3 tables:

CREATE DATABASE `salt`

DEFAULT CHARACTER SET utf8

DEFAULT COLLATE utf8_general_ci;

USE `salt`;

 

CREATE TABLE `jids` (

`jid` varchar(255) NOT NULL,

`load` mediumtext NOT NULL,

UNIQUE KEY `jid` (`jid`)

) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE INDEX jid ON jids(jid) USING BTREE;

 

CREATE TABLE `salt_returns` (

`fun` varchar(50) NOT NULL,

`jid` varchar(255) NOT NULL,

`return` mediumtext NOT NULL,

`id` varchar(255) NOT NULL,

`success` varchar(10) NOT NULL,

`full_ret` mediumtext NOT NULL,

`alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

KEY `id` (`id`),

KEY `jid` (`jid`),

KEY `fun` (`fun`)

) ENGINE=InnoDB DEFAULT CHARSET=utf8;

 

CREATE TABLE `salt_events` (

`id` BIGINT NOT NULL AUTO_INCREMENT,

`tag` varchar(255) NOT NULL,

`data` mediumtext NOT NULL,

`alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

`master_id` varchar(255) NOT NULL,

PRIMARY KEY (`id`),

KEY `tag` (`tag`)

) ENGINE=InnoDB DEFAULT CHARSET=utf8;

 

Grant privileges to the salt user:

grant all on salt.* to salt@'10.0.0.0/255.255.255.0' identified by 'salt';

 

yum install -y MySQL-python     # syncing results depends on the MySQL-python package

vim /etc/salt/master

Append at the bottom:

master_job_cache: mysql   # with this line, executed commands are saved to the database automatically, without adding --return mysql

mysql.host: '10.0.0.7'

mysql.user: 'salt'

mysql.pass: 'salt'

mysql.db: 'salt'

mysql.port: 3306

/etc/init.d/salt-master restart

 

Test whether command results are synced to the database:

[root@linux-node1 ~]# salt '*' cmd.run 'ls' --return mysql
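
To confirm the rows landed, query the salt_returns table created above (a quick check using the credentials from the grant statement):

[root@linux-node1 ~]# mysql -h 10.0.0.7 -usalt -psalt salt -e 'select fun, id, success from salt_returns order by alter_time desc limit 5;'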

 

Install the dependencies needed for compiling from source:

yum install gcc gcc-c++ glibc autoconf make openssl openssl-devel

 

#python -V

Python 2.7.5

(screenshot 28)

(screenshot 29)

5. Automated deployment of a web cluster architecture

5.1 Installing haproxy

cd /usr/local/src && tar zxf haproxy-1.7.9.tar.gz && cd haproxy-1.7.9 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

cd /usr/local/src/haproxy-1.7.9/examples/

vim haproxy.init

BIN=/usr/local/haproxy/sbin/$BASENAME   # change the default binary path in the init script

cp haproxy.init /srv/salt/prod/haproxy/files/

 

Write the YAML state files:

mkdir /srv/salt/prod/pkg            # sls for the build dependencies

mkdir /srv/salt/prod/haproxy        # sls for the haproxy install

mkdir /srv/salt/prod/haproxy/files  # holds the haproxy source tarball

 

Automated compile-and-install of haproxy.

cd /srv/salt/prod/pkg

 

Automate installing the build dependencies:

vim pkg-init.sls

pkg-init:

  pkg.installed:                 # the pkg.installed state

    - names:

      - gcc

      - gcc-c++

      - glibc

      - make

      - autoconf

      - openssl

      - openssl-devel

  

cd /srv/salt/prod/haproxy

vim install.sls   # YAML for the automated haproxy build and install

include:

  - pkg.pkg-init

 

haproxy-install:

  file.managed:

    - name: /usr/local/src/haproxy-1.7.9.tar.gz

    - source: salt://haproxy/files/haproxy-1.7.9.tar.gz # salt:// maps to /srv/salt/prod here

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: cd /usr/local/src && tar zxf haproxy-1.7.9.tar.gz && cd haproxy-1.7.9 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

    - unless: test -d /usr/local/haproxy

    - require:

      - pkg: pkg-init

      - file: haproxy-install

 

haproxy-init:

  file.managed:

    - name: /etc/init.d/haproxy   # creates the /etc/init.d/haproxy file

    - source: salt://haproxy/files/haproxy.init

    - user: root

    - group: root

    - mode: 755

    - require:

      - cmd: haproxy-install

  cmd.run:

    - name: chkconfig --add haproxy

    - unless: chkconfig --list | grep haproxy # run only when this check returns false (the opposite of onlyif); skip if already registered

    - require:

      - file: haproxy-init

net.ipv4.ip_nonlocal_bind:   # /proc/sys/net/ipv4/ip_nonlocal_bind defaults to 0; setting it to 1 allows binding to a non-local IP

  sysctl.present:             # the state function for setting kernel parameters

    - value: 1

 

haproxy-config-dir:

  file.directory:   # the state function for creating a directory

    - name: /etc/haproxy  # create the /etc/haproxy directory

    - user: root

    - group: root

    - mode: 755

 

Manually apply the haproxy install state on node 1:

salt 'linux-node1' state.sls haproxy.install env=prod # env= selects the prod file_roots
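
Note: on Salt releases newer than the 2015.5 series used here, the env= argument was renamed, so the equivalent call would be:

salt 'linux-node1' state.sls haproxy.install saltenv=prod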

 

Create the cluster directories:

mkdir /srv/salt/prod/cluster

mkdir /srv/salt/prod/cluster/files

cd /srv/salt/prod/cluster/files

vim haproxy-outside.cfg

global

maxconn 100000

chroot /usr/local/haproxy

uid 99

gid 99

daemon

nbproc 1

pidfile /usr/local/haproxy/logs/haproxy.pid

log 127.0.0.1 local3 info

 

defaults

option http-keep-alive

maxconn 100000

mode http

timeout connect 5000ms

timeout client  50000ms

timeout server  50000ms

 

listen stats

mode http

bind 0.0.0.0:8888

stats enable

stats uri       /haproxy-status

stats auth      haproxy:saltstack

frontend frontend_www_example_com

bind    10.0.0.11:80

mode    http

option  httplog

log global

        default_backend backend_www_example_com

 

backend backend_www_example_com

option forwardfor header X-REAL-IP

option httpchk HEAD / HTTP/1.0

balance source

server web-node1        10.0.0.7:8080 check inter 2000 rise 30 fall 15

server web-node2        10.0.0.8:8080 check inter 2000 rise 30 fall 15
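
Before wiring this file into a state, it can be syntax-checked with haproxy's check mode (assuming the binary built in 5.1):

/usr/local/haproxy/sbin/haproxy -c -f /srv/salt/prod/cluster/files/haproxy-outside.cfg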

 

cd ..

vim haproxy-outside.sls

include:

  - haproxy.install

 

haproxy-service:

  file.managed:

    - name: /etc/haproxy/haproxy.cfg

    - source: salt://cluster/files/haproxy-outside.cfg

    - user: root

    - group: root

    - mode: 644

  service.running:

    - name: haproxy

    - enable: True

    - reload: True

    - require:

      - cmd: haproxy-init

    - watch:

      - file: haproxy-service

Edit top.sls:

cd /srv/salt/base/

vim top.sls

base:

  '*':

    - init.env_init

 

prod:

  'linux-node1':

    - cluster.haproxy-outside

  'linux-node2':

    - cluster.haproxy-outside

On node 1 and node 2, change the httpd listen port:

vim /etc/httpd/conf/httpd.conf   # change port 80 to 8080

Listen 8080

Then restart: /etc/init.d/httpd restart

 

vim /var/www/html/index.html

linux-node1  # on node 2, linux-node2

 

In a browser, open 10.0.0.7:8888/haproxy-status for the health/status page.

Credentials: haproxy/saltstack

 

[root@linux-node1 html]# cd /srv/salt/prod/

[root@linux-node1 prod]# tree

.

|-- cluster

|   |-- files

|   |   `-- haproxy-outside.cfg

|   `-- haproxy-outside.sls

|-- haproxy

|   |-- files

|   |   |-- haproxy-1.7.9.tar.gz

|   |   `-- haproxy.init

|   `-- install.sls

`-- pkg

    `-- pkg-init.sls

 

Environment notes for each node:

Salt configuration

Prepare 3 virtual machines and set hostnames per the naming convention: test-c2c-console01, test-c2c-php01, test-c2c-php02.

[root@test-c2c-console01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=test-c2c-console01.bj

 

[root@test-c2c-console01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 oldboylinux
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 oldboylinux

192.168.31.138 test-c2c-php01
192.168.31.137 test-c2c-php02
192.168.31.128 test-c2c-console01.bj

Configure the yum repositories:

[root@test-c2c-console01 ~]# cd /etc/yum.repos.d/
[root@test-c2c-console01 yum.repos.d]# ls
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-Media.repo
CentOS-Base.repo.20161216.oldboy CentOS-fasttrack.repo CentOS-Vault.repo

rpm -ivh

wget

[root@test-c2c-console01 yum.repos.d]# ls
CentOS6-Base-163.repo CentOS-Debuginfo.repo CentOS-Vault.repo
CentOS-Base.repo CentOS-fasttrack.repo epel.repo
CentOS-Base.repo.20161216.oldboy CentOS-Media.repo epel-testing.repo

 

Server side:

yum install salt-master -y

/etc/init.d/salt-master start

chkconfig salt-master on

Client side:

yum install salt-minion -y

 

vim /etc/salt/minion

master: 192.168.31.128 # master address

cachedir: /etc/salt/modules # module directory

log_file: /var/log/salt/minion.log # log file path

log_level: warning # log level

 

/etc/init.d/salt-minion start

chkconfig salt-minion on

5.2 Installing keepalived

wget && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

/usr/local/src/keepalived-1.2.19/keepalived/etc/init.d/keepalived.init # init script

/usr/local/src/keepalived-1.2.19/keepalived/etc/keepalived/keepalived.conf # config template

[root@linux-node1 etc]# mkdir /srv/salt/prod/keepalived

[root@linux-node1 etc]# mkdir /srv/salt/prod/keepalived/files

[root@linux-node1 etc]# cp init.d/keepalived.init /srv/salt/prod/keepalived/files/

[root@linux-node1 etc]# cp keepalived/keepalived.conf /srv/salt/prod/keepalived/files/

[root@linux-node1 keepalived]# cd /usr/local/keepalived/etc/sysconfig/

[root@linux-node1 sysconfig]# cp keepalived /srv/salt/prod/keepalived/files/keepalived.sysconfig

[root@linux-node1 etc]# cd /srv/salt/prod/keepalived/files/

[root@linux-node1 files]# vim keepalived.init

daemon /usr/local/keepalived/sbin/keepalived ${KEEPALIVED_OPTIONS}   # fix the binary path loaded at startup

[root@linux-node1 files]# cp /usr/local/src/keepalived-1.2.19.tar.gz .

[root@linux-node1 files]# cd ..    

[root@linux-node1 keepalived]# vim install.sls

include:

  - pkg.pkg-init

 

keepalived-install:

  file.managed:

    - name: /usr/local/src/keepalived-1.2.19.tar.gz

    - source: salt://keepalived/files/keepalived-1.2.19.tar.gz

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: wget && tar zxf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

    - unless: test -d /usr/local/keepalived

    - require:

      - pkg: pkg-init

      - file: keepalived-install

 

keepalived-init:

  file.managed:

    - name: /etc/init.d/keepalived

    - source: salt://keepalived/files/keepalived.init

    - user: root

    - group: root

    - mode: 755

  cmd.run:

    - name: chkconfig --add keepalived

    - unless: chkconfig --list | grep keepalived

    - require:

      - file: keepalived-init

 

/etc/sysconfig/keepalived:

  file.managed:

    - source: salt://keepalived/files/keepalived.sysconfig

    - user: root

    - group: root

    - mode: 644

 

/etc/keepalived:

  file.directory:

    - user: root

    - group: root

    - mode: 755

 

[root@linux-node1 ~]# cd /srv/salt/prod/cluster/files/

[root@linux-node1 files]# vim haproxy-outside-keepalived.conf

! Configuration File for keepalived

global_defs {

   notification_email {

     saltstack@example.com

   }

   notification_email_from keepalived@example.com

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id {{ROUTEID}}

}

vrrp_instance haproxy_ha {

state {{STATEID}}

interface eth0

    virtual_router_id 36

priority {{PRIORITYID}}

    advert_int 1

authentication {

auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

       10.0.0.11

    }
}

 

[root@linux-node1 cluster]# vim haproxy-outside-keepalived.sls

include:

  - keepalived.install

 

keepalived-service:

  file.managed:

    - name: /etc/keepalived/keepalived.conf

    - source: salt://cluster/files/haproxy-outside-keepalived.conf

    - user: root

    - group: root

    - mode: 644

    - template: jinja

    {% if grains['fqdn'] == 'linux-node1' %}

    - ROUTEID: haproxy_ha

    - STATEID: MASTER

    - PRIORITYID: 150

    {% elif grains['fqdn'] == 'linux-node2' %}

    - ROUTEID: haproxy_ha

    - STATEID: BACKUP

    - PRIORITYID: 100

    {% endif %}

  service.running:

    - name: keepalived

    - enable: True

    - watch:

      - file: keepalived-service

 

[root@linux-node1 cluster]# salt '*' state.sls cluster.haproxy-outside-keepalived env=prod

[root@linux-node1 base]# cd /srv/salt/base/

[root@linux-node1 base]# vim top.sls

base:

  '*':

    - init.env_init

 

prod:

  'linux-node1':

    - cluster.haproxy-outside

    - cluster.haproxy-outside-keepalived

  'linux-node2':

    - cluster.haproxy-outside

    - cluster.haproxy-outside-keepalived

Verify keepalived failover

[root@linux-node1 prod]# ip ad li

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:9d:57:e8 brd ff:ff:ff:ff:ff:ff

    inet 10.0.0.7/24 brd 10.0.0.255 scope global eth0

    inet 10.0.0.11/32 scope global eth0

    inet6 fe80::20c:29ff:fe9d:57e8/64 scope link

       valid_lft forever preferred_lft forever

 

[root@linux-node2 html]# ip ad li

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

    inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

    inet6 fe80::20c:29ff:feca:4195/64 scope link

       valid_lft forever preferred_lft forever    

[root@linux-node1 prod]# /etc/init.d/keepalived stop

Stopping keepalived:                                       [  OK  ]

[root@linux-node2 html]# ip ad li

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

    inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

    inet 10.0.0.11/32 scope global eth0

    inet6 fe80::20c:29ff:feca:4195/64 scope link

       valid_lft forever preferred_lft forever

[root@linux-node1 prod]# /etc/init.d/keepalived start

Starting keepalived:                                       [  OK  ]

[root@linux-node2 html]# ip ad li

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ca:41:95 brd ff:ff:ff:ff:ff:ff

    inet 10.0.0.8/24 brd 10.0.0.255 scope global eth0

    inet6 fe80::20c:29ff:feca:4195/64 scope link

       valid_lft forever preferred_lft forever

[root@linux-node1 prod]# vim /srv/salt/prod/cluster/files/haproxy-outside.cfg    

balance roundrobin   # roundrobin = round-robin; source = pin each client to one backend by source IP.

 

(screenshot 30)

Key management

[root@test-c2c-console01 ~]# salt-key -L
Accepted Keys:      # accepted
Denied Keys:        # denied
Unaccepted Keys:    # not yet accepted
test-c2c-php01
test-c2c-php02
Rejected Keys:      # revoked

 

[root@test-c2c-console01 ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
test-c2c-php01
test-c2c-php02
Proceed? [n/Y] y
Key for minion test-c2c-php01 accepted.
Key for minion test-c2c-php02 accepted.
[root@test-c2c-console01 ~]# salt-key -L
Accepted Keys:
test-c2c-php01
test-c2c-php02
Denied Keys:
Unaccepted Keys:
Rejected Keys:

 

[root@test-c2c-console01 ~]# salt '*' test.ping
test-c2c-php02:
    True
test-c2c-php01:
    True

Common options:

-L: list key status

-A: accept all

-D: delete all

-a: accept the specified key

-d: delete the specified key

-r: reject the specified key (for a key still in the Unaccepted state)
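
For example, removing a single minion's key without the confirmation prompt (an illustrative combination of the options above):

[root@test-c2c-console01 ~]# salt-key -d test-c2c-php01 -y   # -y answers the prompt automatically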

5.3 Installing zabbix-agent

[root@linux-node1 prod]# cd /srv/salt/base/init

[root@linux-node1 init]# vim zabbix_agent.sls

zabbix-agent-install:

  pkg.installed:

    - name: zabbix-agent

 

  file.managed:

    - name: /etc/zabbix/zabbix_agentd.conf

    - source: salt://init/files/zabbix_agent.conf

    - template: jinja

    - defaults:

      Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}

    - require:

      - pkg: zabbix-agent-install

 

  service.running:

    - name: zabbix-agent

    - enable: True

    - watch:

      - pkg: zabbix-agent-install

      - file: zabbix-agent-install

[root@linux-node1 init]# vim /etc/salt/master

pillar_roots:

  base:

    - /srv/pillar/base

[root@linux-node1 init]# mkdir /srv/pillar/base

[root@linux-node1 init]# /etc/init.d/salt-master restart

[root@linux-node1 init]# cd /srv/pillar/base/

[root@linux-node1 base]# vim top.sls

base:

  '*':

    - zabbix

[root@linux-node1 base]# vim zabbix.sls

zabbix-agent:

  Zabbix_Server: 10.0.0.7

[root@linux-node1 base]# cd /srv/salt/base/init/files

[root@linux-node1 files]# cp /etc/zabbix/zabbix_agentd.conf ./zabbix_agent.conf

[root@linux-node1 files]# vim zabbix_agent.conf  # reference the variable with template syntax

Server={{ Server }}  

 

[root@linux-node1 init]# vim env_init.sls

include:

  - init.dns

  - init.history

  - init.audit

  - init.sysctl

  - init.zabbix_agent

[root@linux-node1 ~]# salt '*' state.highstate
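
A quick way to confirm the Zabbix_Server value actually reached the minions (pillar.item fetches a single pillar key):

[root@linux-node1 ~]# salt '*' pillar.item zabbix-agent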

 

Installing nginx, php, and memcache:

https://github.com/a7260488/slat-test

 

percona-zabbix-templates  # software for monitoring MySQL with zabbix

 

Management

Node groups

[root@test-c2c-console01 salt]# pwd

/etc/salt

[root@test-c2c-console01 salt]# vim master

nodegroups:

#  dev: 'L@ops-dev01.bj,ops-dev02.bj' # list match

  dev: 'E@ops-dev0[1-9].bj' # regex match

[root@test-c2c-console01 salt]# salt -N 'php' test.ping # ping the php group
test-c2c-php02:
    True
test-c2c-php01:
    True
[root@test-c2c-console01 salt]# salt -N 'php' cmd.run 'uptime' # check load on the php group
test-c2c-php01:
     11:45:01 up 1:45, 2 users, load average: 0.00, 0.00, 0.00
test-c2c-php02:
     11:44:20 up 1:46, 2 users, load average: 0.00, 0.00, 0.00

Environment configuration

file_roots:
  base:                          # test environment
    - /srv/salt
  dev:                           # development environment
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:                          # production environment
    - /srv/salt/prod/services
    - /srv/salt/prod/states

Ad-hoc management

salt -N 'dev' test.ping # ping the matched node group

salt -N 'dev' cmd.run 'uptime' # run a command

salt -E 'ops-dev(02|03)' test.ping # regex host match

salt '*' cmd.run "ab -n 10 -c 2 " # run a load test from all machines

salt -N 'dev' sys.doc cmd # view a module's documentation

salt -N 'dev' saltutil.sync_all # sync custom modules to the dev group

salt -N 'dev' sys.doc mi # view the mi module's help

salt -N 'dev' mi.sshkey # run that module

salt -N 'dev' state.sls yum -v test=true # dry-run one state module

salt -N 'dev' state.highstate -v test=true # dry-run all states

5.4 Configuring master-syndic

Its role is a bit like zabbix-proxy.

[root@linux-node2 ~]# yum install salt-master salt-syndic -y

[root@linux-node2 ~]# vim /etc/salt/master

syndic_master: 10.0.0.7

[root@linux-node2 ~]# vim /etc/salt/master

[root@linux-node2 ~]# /etc/init.d/salt-master start

Starting salt-master daemon:                               [  OK  ]

[root@linux-node2 ~]# /etc/init.d/salt-syndic start

Starting salt-syndic daemon:                               [  OK  ]

[root@linux-node1 ~]# vim /etc/salt/master

order_masters: True

[root@linux-node1 ~]# /etc/init.d/salt-master restart

[root@linux-node1 ~]# /etc/init.d/salt-minion stop

Stopping salt-minion daemon:                               [  OK  ]

[root@linux-node2 ~]# /etc/init.d/salt-minion stop

Stopping salt-minion daemon:                               [  OK  ]

[root@linux-node2 ~]# salt-key -D

[root@linux-node1 ~]# cd /etc/salt/pki/minion/

[root@linux-node1 minion]# rm -fr *

[root@linux-node2 ~]# cd /etc/salt/pki/minion

[root@linux-node2 minion]# rm -fr *

[root@linux-node1 salt]# vim /etc/salt/minion

master: 10.0.0.8

[root@linux-node2 salt]# vim /etc/salt/minion

master: 10.0.0.8

[root@linux-node1 salt]# /etc/init.d/salt-minion start

Starting salt-minion daemon:                               [  OK  ]

[root@linux-node2 salt]# /etc/init.d/salt-minion start

Starting salt-minion daemon:                               [  OK  ]

[root@linux-node1 minion]# salt-key -A

The following keys are going to be accepted:

Unaccepted Keys:

linux-node2

Proceed? [n/Y] y

Key for minion linux-node2 accepted.

[root@linux-node1 minion]# salt-key

Accepted Keys:

linux-node2

Denied Keys:

Unaccepted Keys:

Rejected Keys:

[root@linux-node2 salt]# salt-key

Accepted Keys:

Denied Keys:

Unaccepted Keys:

linux-node1

linux-node2

Rejected Keys:

[root@linux-node2 salt]# salt-key -A

The following keys are going to be accepted:

Unaccepted Keys:

linux-node1

linux-node2

Proceed? [n/Y] y

Key for minion linux-node1 accepted.

Key for minion linux-node2 accepted.
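
Commands issued on the top master (10.0.0.7) now reach the minions through the syndic, e.g. (illustrative):

[root@linux-node1 ~]# salt '*' test.ping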

 


5.5 SaltStack auto-scaling

zabbix monitoring ---> Action ---> create a VM / Docker container ---> deploy services ---> deploy code ---> test status ---> join the cluster ---> add to monitoring ---> notify

Download etcd:

rz etcd-v2.2.1-linux-amd64.tar.gz (binary package)

[root@linux-node1 src]# cd etcd-v2.2.1-linux-amd64

[root@linux-node1 etcd-v2.2.1-linux-amd64]# cp etcd etcdctl /usr/local/bin/

[root@linux-node1 etcd-v2.2.1-linux-amd64]# ./etcd &

Or start it like this:

nohup etcd --name auto_scale --data-dir /data/etcd/

--listen-peer-urls ''

--listen-client-urls ''

--advertise-client-urls '' &

Set a key's value:

[root@linux-node1 wal]# curl -s -XPUT -d value="Hello world" | python -m json.tool      

{

    "action": "set",

    "node": {

        "createdIndex": 8,

        "key": "/message",

        "modifiedIndex": 8,

        "value": "Hello world"

    },

    "prevNode": {

        "createdIndex": 7,

        "key": "/message",

        "modifiedIndex": 7,

        "value": "Hello world"

    }

}

Get the key's value:

[root@linux-node1 wal]# curl -s | python -m json.tool
{

    "action": "get",

    "node": {

        "createdIndex": 8,

        "key": "/message",

        "modifiedIndex": 8,

        "value": "Hello world"

    }

}

Delete the key:

[root@linux-node1 wal]# curl -s -XDELETE |python -m json.tool      

{

    "action": "delete",

    "node": {

        "createdIndex": 8,

        "key": "/message",

        "modifiedIndex": 9

    },

    "prevNode": {

        "createdIndex": 8,

        "key": "/message",

        "modifiedIndex": 8,

        "value": "Hello world"

    }

}

Getting the key again after deletion returns Key not found:

[root@linux-node1 wal]# curl -s | python -m json.tool
{

    "cause": "/message",

    "errorCode": 100,

    "index": 9,

    "message": "Key not found"

}

Set a key with a TTL of 5 seconds; after 5 seconds it expires with "message": "Key not found":

[root@linux-node1 wal]# curl -s -XPUT -d value="Hello world 1" -d ttl=5 | python -m json.tool

{

    "action": "set",

    "node": {

        "createdIndex": 10,

        "expiration": "2017-11-17T12:59:41.572099187Z",

        "key": "/ttl_use",

        "modifiedIndex": 10,

        "ttl": 5,

        "value": ""

    }

}
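
The same operations are available through etcdctl, copied to /usr/local/bin earlier (v2 syntax; a convenience sketch):

etcdctl set /message "Hello world"             # same as the PUT above
etcdctl get /message                           # same as the GET
etcdctl rm /message                            # same as the DELETE
etcdctl set /ttl_use "Hello world 1" --ttl 5   # key that expires after 5 seconds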

 

[root@linux-node1 ~]# vim /etc/salt/master  # append at the end

etcd_pillar_config:

  etcd.host: 10.0.0.7

  etcd.port: 4001

 

ext_pillar:

  - etcd: etcd_pillar_config root=/salt/haproxy/

 

[root@linux-node1 ~]# /etc/init.d/salt-master restart

[root@linux-node1 ~]# curl -s -XPUT -d value="10.0.0.7:8080" | python -m json.tool       

{

    "action": "set",

    "node": {

        "createdIndex": 10,

        "key": "/salt/haproxy/backend_www_oldboyedu_com/web-node1", #添加一个web-node1的节点

        "modifiedIndex": 10,

        "value": "10.0.0.7:8080"

    }

}

[root@linux-node1 ~]# pip install python-etcd

[root@linux-node1 etcd-v2.2.1-linux-amd64]# salt '*' pillar.items

linux-node2:

    ----------

    backend_www_oldboyedu_com:

        ----------

        web-node1:

            10.0.0.7:8080

    zabbix-agent:

        ----------

        Zabbix_Server:

            10.0.0.7

linux-node1:

    ----------

    backend_www_oldboyedu_com:

        ----------

        web-node1:

            10.0.0.7:8080

    zabbix-agent:

        ----------

        Zabbix_Server:

            10.0.0.7

 

[root@linux-node1 ~]# vi /srv/salt/prod/cluster/files/haproxy-outside.cfg  # append at the end

{% for web,web_ip in pillar.backend_www_oldboyedu_com.iteritems() -%}

server {{ web }} {{ web_ip }} check inter 2000 rise 30 fall 15

{% endfor %}
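
Given the pillar shown above (backend_www_oldboyedu_com with a single web-node1 entry), the loop renders to:

server web-node1 10.0.0.7:8080 check inter 2000 rise 30 fall 15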

vim /srv/salt/prod/cluster/haproxy-outside.sls   # add under file.managed:

- template: jinja

Restart the master.

Apply the states: salt '*' state.highstate

II. hosts file resolution

# vim /etc/hosts

192.168.1.101 salt.node1.com
192.168.1.200 salt.node2.com
192.168.1.201 salt.node3.com

III. Installing salt-ssh

a. Add the yum repository:

*See the SaltStack official site:

# vim /etc/yum.repos.d/salt-stack.repo
[saltstack-repo]
name=SaltStack repo for Red Hat Enterprise Linux $releasever
baseurl=
enabled=1
gpgcheck=1
gpgkey=

b. Install salt-ssh

#yum install salt-ssh -y

c. Configure the roster file

*You can set passwd under user; if you don't, you will be prompted for each password when running salt-ssh '*' test.ping -i

# vim /etc/salt/roster

node1:
  host: 192.168.1.200
  user: root
  port: 22
node2:
  host: 192.168.1.201
  user: root
  port: 22
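
With the roster in place, connectivity can be verified before deploying anything (passwords are prompted interactively since none are set in the roster):

# salt-ssh '*' test.ping -i    # -i skips host-key confirmation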
IV. Write the state.sls files and copy the related files into the deployment directory

a. Create the directories

# mkdir -p /srv/salt/minions

# mkdir -p /srv/salt/minions/conf

# mkdir -p /srv/salt/minions/yum.repos.d

b. Write install.sls, the sls that installs the minions

# cd /srv/salt/minions/

# vim install.sls

#salt_minion_install
minion_yum:             # copy the local minions/yum.repos.d files to /etc/yum.repos.d on the target
  file.recurse:
    - name: /etc/yum.repos.d
    - source: salt://minions/yum.repos.d
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
minion_install:         # install salt-minion
  pkg.installed:
    - pkgs:
      - salt-minion
    - require:
      - file: minion_yum
    - unless: rpm -qa | grep salt-minion
minion_conf:            # install the prepared config as /etc/salt/minion on the target
  file.managed:
    - name: /etc/salt/minion
    - source: salt://minions/conf/minion
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - defaults:
      minion_id: {{ grains['fqdn_ip4'][0] }}
    - require:
      - pkg: minion_install
minion_service:       # start the service and enable it at boot
  service.running:
    - name: salt-minion
    - enable: True
    - require:
      - file: minion_conf

c. Write the minion configuration file

#vim minion

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: 192.168.1.101                     # only the master address needs changing

d. Copy the salt and epel repo files into place

#cp /etc/yum.repos.d/salt-stack.repo /srv/salt/minions/yum.repos.d/

# cp /etc/yum.repos.d/epel.repo /srv/salt/minions/yum.repos.d/ 

e. Finally, review the directory tree:

# pwd
/srv/salt/minions

# tree
.
├── conf
│   └── minion
├── install.sls
└── yum.repos.d
    ├── epel.repo
    └── salt-stack.repo

V. Run salt-ssh to install salt-minion

#salt-ssh -i '*' state.sls minions.install

VI. Verify the result

*Note: at the end I installed salt-master on the salt-ssh host as well (yum install -y salt-master); without it the command below does nothing.

# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
centos7
node1
node2
Rejected Keys:

(screenshot 31)
