Documentation on deploying Calico with Kubernetes is still fairly scarce, so I wrote this post to help you avoid the common pitfalls. If you hit any problems, feel free to leave a comment.

Among current k8s network plugins, Calico is generally the fastest, with flannel somewhat slower; choose based on your own network environment.

I have personally tested Calico v2.1.5 on a Kubernetes 1.6 cluster. This article is based on CentOS 7.

Note that kubelet in k8s 1.6 and above has quite a few bugs, so be careful.

Calico can either be added to an existing cluster or deployed while bootstrapping a new one.

A few notes first. There are two deployment schemes, since clusters typically run etcd either with or without TLS certificates:

  1. HTTP mode: no certificates, connect to etcd directly.

  2. HTTPS mode: load the etcd TLS certificates; this is somewhat more involved.

Calico does not depend on an existing cluster and can be deployed directly:

kubectl create -f Calico.yaml

If the kubelet config already points at the CNI plugin before Calico has started, kubelet will report errors, and kubectl get nodes will show:

jenkins-2       NotReady     1d        v1.6.4
node1.txg.com  NotReady     2d        v1.6.4
node2.txg.com   NotReady     1d        v1.6.4

At this point kubelet cannot establish a normal state with the apiserver, because the config file specifies CNI plugin mode; only DaemonSet pods with hostNetwork: true can start.

Don't panic: once the Calico node components finish deploying, everything returns to normal. Calico runs a resident DaemonSet pod on every k8s node that initializes the CNI plugin, writing into the host directories /etc/cni and /opt/cni.

The DaemonSet pod is permanently resident on each node and runs in hostNetwork mode, which is why it can start at all: k8s cannot yet launch pods in CNI mode, because the CNI network is not finished. While the DaemonSet has not finished initializing, kubectl create -f nginx.yaml will fail, because the cluster is not yet Ready. Once you have confirmed kubelet is configured correctly, the cluster will work normally:

[root@master3 calico]# kubectl get nodes
NAME            STATUS    AGE       VERSION
jenkins-2       Ready     1d        v1.6.4
node1.txg.com   Ready     2d        v1.6.4
node2.txg.com   Ready     1d        v1.6.4

Normal output looks like this:

[root@master3 calico]# kubectl get ds --all-namespaces
NAMESPACE     NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
kube-system   calico-node   5         5         5         5            5           <none>          1d
[root@master3 calico]# 

At this point the k8s network has been initialized.
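Rather than re-running kubectl get nodes by hand while waiting for the DaemonSet, you can poll until every node is Ready. A minimal sketch (the helper name and the poll loop are my own, not from the original deployment):

```shell
# Count nodes whose STATUS column is NotReady, reading
# `kubectl get nodes --no-headers` output on stdin.
not_ready() {
  awk '$2 == "NotReady" { n++ } END { print n + 0 }'
}

# Example poll loop (run on a master):
#   while [ "$(kubectl get nodes --no-headers | not_ready)" -gt 0 ]; do
#     sleep 5
#   done
```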

For the full procedure, copy the YAML below and apply it:

# Calico Version v2.1.5
# http://docs.projectcalico.org/v2.1/releases#v2.1.5
# This manifest includes the following component versions:
# (These are the upstream images; pull all three first. I tagged and pushed them to a private registry.)
#   calico/node:v1.1.3
#   calico/cni:v1.8.0
#   calico/kube-policy-controller:v0.5.4
#   kubelet needs the extra flags "--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
#   kube-proxy needs the flag "--proxy-mode=iptables"
# Kernel tuning, on ALL nodes: echo "net.netfilter.nf_conntrack_max=1000000" >> /etc/sysctl.conf
# Note: kubelet and Docker must be deployed on ALL nodes, including the k8s masters, because the CNI plugin is initialized by a DaemonSet resident on every node.
# Note: calicoctl (used to inspect cluster state) needs a config file to talk to etcd -- this is a big pitfall.
# In every node's /lib/systemd/system/docker.service, comment out the flannel bits: #EnvironmentFile=/etc/profile.d/flanneld.env
# and remove --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}; then run systemctl daemon-reload and restart docker.service.
# wget -c https://github.com/projectcalico/calicoctl/releases/download/v1.1.3/calicoctl && chmod +x calicoctl
# On the masters, configure calicoctl (the cluster management CLI tool); it reads the etcd endpoints from /etc/calico/calicoctl.cfg

Configuration for plain-HTTP etcd:

[root@master3 dashboard]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.1.65:2379,http://192.168.1.66:2379,http://192.168.1.67:2379"

For HTTPS it looks like this:

[root@master3 calico]# cat /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "https://192.168.1.65:2379,https://192.168.1.66:2379,https://192.168.1.67:2379"
  etcdKeyFile: "/etc/kubernetes/ssl/kubernetes-key.pem"
  etcdCertFile: "/etc/kubernetes/ssl/kubernetes.pem"
  etcdCACertFile: "/etc/kubernetes/ssl/ca.pem"

The default pool may contain the host network's IP range, so delete it and create a new ipPool as follows:

[root@master3 calico]# cat pool.yaml
apiVersion: v1
kind: ipPool
metadata:
  cidr: 172.1.0.0/16
spec:
  ipip:
    enabled: true
    mode: cross-subnet
  nat-outgoing: true
  disabled: false

calicoctl delete ipPool 192.168.0.0/16
calicoctl apply -f pool.yaml

Check cluster status:

[root@master1 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+--------------------------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+--------------+-------------------+-------+----------+--------------------------------+
| 192.168.1.62 | node-to-node mesh | up    | 08:29:36 | Established                    |
| 192.168.1.63 | node-to-node mesh | up    | 08:29:36 | Established                    |
| 192.168.1.68 | node-to-node mesh | start | 14:13:42 | Connect Socket: Connection     |
|              |                   |       |          | refused                        |
| 192.168.2.68 | node-to-node mesh | up    | 14:13:45 | Established                    |
| 192.168.2.72 | node-to-node mesh | up    | 14:12:18 | Established                    |
| 192.168.2.69 | node-to-node mesh | up    | 14:12:15 | Established                    |
| 192.168.1.69 | node-to-node mesh | up    | 14:12:22 | Established                    |
+--------------+-------------------+-------+----------+--------------------------------+
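For scripted health checks, the same table can be filtered mechanically. A sketch that prints any BGP peer whose session is not up, assuming the column layout shown above (the function name is hypothetical):

```shell
# Print the address of every BGP peer whose STATE column is not "up",
# reading `calicoctl node status` output on stdin.
bad_peers() {
  awk -F'|' '$2 ~ /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/ && $4 !~ /up/ {
    gsub(/ /, "", $2)
    print $2
  }'
}

# usage:
#   calicoctl node status | bad_peers
```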

Note: if RBAC is enabled on your cluster, create the RBAC rules below; with RBAC on but no rules, Calico will be unable to assign pod IPs. If RBAC is not enabled, you can skip this step.

kubectl create -f rbac.yaml

[root@master3 calico]# cat rbac.yaml
# Calico Version master
# http://docs.projectcalico.org/master/releases#master
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-policy-controller
subjects:
- kind: ServiceAccount
  name: calico-policy-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

1. Plain-HTTP etcd scheme

kubectl create -f Calico.yaml

cat Calico.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster (set your etcd cluster IPs here).
  etcd_endpoints: "http://192.168.1.65:2379,http://192.168.1.66:2379,http://192.168.1.67:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: 192.168.1.103/k8s_public/calico-node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              # value: "192.168.0.0/16"   # the pod IP allocation pool is configured here
              value: "172.1.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: 192.168.1.103/k8s_public/calico-cni:v1.8.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: 192.168.1.103/k8s_public/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

2. HTTPS (TLS) etcd scheme

kubectl create -f Calico-https.yaml

# Note: in this HTTPS variant, Calico talks to etcd over TLS to store its cluster state.
# Make sure these three files exist on every node:
#   /etc/kubernetes/ssl/etcd-ca  /etc/kubernetes/ssl/etcd-cert  /etc/kubernetes/ssl/etcd-key
# They are the kubernetes certs copied under new names (they also serve as the etcd certs):
#   cd /etc/kubernetes/ssl/ ; cp kubernetes-key.pem etcd-key ; cp kubernetes.pem etcd-cert ; cp ca.pem etcd-ca
# Distribute them to /etc/kubernetes/ssl/ on every kubelet node. Calico requires exactly
# these names: the etcd-certs volume below is a hostPath mount of /etc/kubernetes/ssl,
# and the ConfigMap points etcd_ca / etcd_cert / etcd_key at /calico-secrets/etcd-ca,
# /calico-secrets/etcd-cert and /calico-secrets/etcd-key inside the container.

cat Calico-https.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster (set your etcd cluster IPs here).
  etcd_endpoints: "https://192.168.1.65:2379,https://192.168.1.66:2379,https://192.168.1.67:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # With TLS enabled etcd, point these at the mounted cert files.
  etcd_ca: "/calico-secrets/etcd-ca"     # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"   # "/calico-secrets/etcd-key"

---

The Secret, calico-node DaemonSet and calico-policy-controller Deployment are otherwise identical to Calico.yaml in the HTTP scheme, with one difference: in BOTH the DaemonSet and the Deployment, the etcd-certs volume is a hostPath to the certs on each node instead of the Secret:

      volumes:
        # Mount the etcd TLS certs from the host instead of the Secret.
        - name: etcd-certs
          hostPath:
            path: /etc/kubernetes/ssl
        # - name: etcd-certs
        #   secret:
        #     secretName: calico-etcd-secrets
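If you would rather keep the Secret-based mount than the hostPath volume, the Secret's data values must be the base64 encoding of each file's entire contents. A small helper sketch (the function name is mine; -w0 is the GNU coreutils flag that disables line wrapping):

```shell
# Encode a file's entire contents as single-line base64,
# suitable for the etcd-key / etcd-cert / etcd-ca fields
# of the calico-etcd-secrets Secret.
b64file() {
  base64 -w0 "$1"
}

# usage:
#   b64file /etc/kubernetes/ssl/ca.pem              # -> etcd-ca
#   b64file /etc/kubernetes/ssl/kubernetes.pem      # -> etcd-cert
#   b64file /etc/kubernetes/ssl/kubernetes-key.pem  # -> etcd-key
```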

Check the status: all nodes started normally.

[root@master3 calico]# kubectl get ds,pod --all-namespaces -o wide|grep calico
kube-system   ds/calico-node   5         5         5         5            5           <none>          1d        calico-node,install-cni   192.168.1.103/k8s_public/calico-node:v1.1.3,192.168.1.103/k8s_public/calico-cni:v1.8.0   k8s-app=calico-node
kube-system   po/calico-node-7xjtm                           2/2       Running   0          22h       192.168.2.68    node3.txg.com
kube-system   po/calico-node-gpng4                           2/2       Running   6          1d        192.168.1.68    node1.txg.com
kube-system   po/calico-node-kl72c                           2/2       Running   4          1d        192.168.2.69    node4.txg.com
kube-system   po/calico-node-klb4b                           2/2       Running   0          22h       192.168.2.72    jenkins-2
kube-system   po/calico-node-w9f9x                           2/2       Running   4          1d        192.168.1.69    node2.txg.com
kube-system   po/calico-policy-controller-2361802377-2tx4k   1/1       Running   0          22h       192.168.1.68    node1.txg.com
[root@master3 calico]# 

Some may ask what happens in DaemonSet mode if a k8s node dies. You can test this yourself.

Below I use ansible to wipe the config and docker files on all nodes.

Stop all services:

ansible -m shell -a "systemctl daemon-reload; systemctl  stop  kubelet.service kube-proxy.service docker.service "  'nodes'

Delete the files:

ansible -m shell -a " rm -rf /etc/cni/* ;rm -rf /opt/cni/* ; rm -rf /var/lib/docker/*   " 'nodes'

Reboot the nodes:

 ansible -m shell -a " reboot  " 'nodes'

After the reboot, we find that the Calico DaemonSet pods have been recreated on every k8s node, and the cluster is fully back to normal.

Once CNI is up on all nodes, you can create the remaining services (kube-dns, kube-dashboard, etc.) as usual.
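As a quick smoke test that pods really receive addresses from the 172.1.0.0/16 pool configured above, you can check each pod IP against the pool prefix. A sketch (the helper name is mine, and the prefix match is a simplification of real CIDR arithmetic that works for a /16 on an octet boundary):

```shell
# Return success if the given IP lies in 172.1.0.0/16.
in_pool() {
  case "$1" in
    172.1.*) return 0 ;;
    *)       return 1 ;;
  esac
}

# usage:
#   kubectl get pods -o wide --no-headers | awk '{print $6}' |
#     while read -r ip; do in_pool "$ip" || echo "outside pool: $ip"; done
```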
