Binary deployment is more involved than the other deployment methods and has many steps, so to keep the length manageable this guide is split into three parts:

Deploying k8s 1.18 from binaries (Part 1)
Deploying k8s 1.18 from binaries (Part 2)
Deploying k8s 1.18 from binaries (Part 3)

Deploying the kubelet component
kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

On startup, kubelet automatically registers node information with kube-apiserver; its built-in cadvisor collects and monitors the node's resource usage.

For security, this deployment closes kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (such as requests from apiserver or heapster).

Note: unless otherwise specified, all operations in this document are performed on the k8s-master01 node.

Download and distribute the kubelet binary
Refer to 05-1.部署master节点.md.

Create the kubelet bootstrap kubeconfig files

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"

  # create a token
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node_name} \
    --kubeconfig ~/.kube/config)

  # set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # set client credentials
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # set context parameters
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # set the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
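To sanity-check one of the generated files, kubectl can dump it (a quick check; kubectl config view omits the embedded certificate data and shows the token as REDACTED unless --raw is passed; k8s-master01 is one of this cluster's NODE_NAMES):

$ kubectl config view --kubeconfig=kubelet-bootstrap-k8s-master01.kubeconfig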

The kubeconfig contains a token; after bootstrapping finishes, kube-controller-manager creates the client and server certificates for kubelet.
List the tokens kubeadm created for each node:

$ kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
2sb8wy.euialqfpxfbcljby   23h       2020-02-08T15:36:30+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master02
ta7onm.fcen74h0mczyfbz2   23h       2020-02-08T15:36:30+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master01
xk27zp.tylnvywx9kc8sq87   23h       2020-02-08T15:36:30+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master03
  • A token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet, and it is cleaned up by kube-controller-manager's tokencleaner;
  • When kube-apiserver accepts a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers; a ClusterRoleBinding is created for this group later (the Secret backing a token can be inspected as shown below);
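Each bootstrap token is backed by a Secret named bootstrap-token-<token-id> in the kube-system namespace; for example, to inspect the token 2sb8wy from the listing above:

$ kubectl -n kube-system get secret bootstrap-token-2sb8wy -o yaml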

Distribute the bootstrap kubeconfig files to all worker nodes

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done

Create and distribute the kubelet configuration file
Since v1.10, some kubelet parameters must be set in a configuration file; kubelet --help flags them with:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

Create the kubelet configuration file template (see the comments in the kubelet source for the available options):

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
  • address: the address kubelet's secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call kubelet's API;
  • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
  • authentication.anonymous.enabled: set to false to forbid anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA certificate that signed the client certificates; enables HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
    requests that pass neither x509 certificate nor webhook authentication (from kube-apiserver or any other client) are rejected with Unauthorized;
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user or group has permission to operate on a resource (RBAC);
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration flag;
  • kubelet must run as root.
Create and distribute the kubelet configuration file for each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
  scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done

Create and distribute the kubelet systemd unit file
Create the kubelet systemd unit file template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --network-plugin=cni \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
  • If --hostname-override is set, kube-proxy must be set to the same value, otherwise the Node will not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in this file to send a TLS Bootstrapping request to kube-apiserver;
  • after K8S approves the kubelet's CSR, the certificate and private key are written to the --cert-dir directory and then referenced from the --kubeconfig file;
  • --pod-infra-container-image: do not use redhat's pod-infrastructure:latest image, as it cannot reap zombie container processes.
Create and distribute a kubelet systemd unit file for each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
  scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done

Grant kube-apiserver access to the kubelet API
When kubectl exec, run, logs, etc. are executed, apiserver forwards the request to kubelet's https port. The RBAC rule below authorizes the user of apiserver's certificate (kubernetes.pem, CN: kubernetes-master) to access the kubelet API:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master
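To see exactly what this grants, describe the predefined ClusterRole (it covers nodes/proxy, nodes/stats, nodes/log, nodes/metrics, etc.):

$ kubectl describe clusterrole system:kubelet-api-admin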

Bootstrap Token Auth and granting permissions
On startup, kubelet checks whether the file given by --kubeconfig exists; if it does not, kubelet uses the kubeconfig given by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it authenticates the embedded token; on success it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

By default this user and group have no permission to create CSRs, so kubelet fails to start with errors like:

$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 26 12:13:41 k8s-master01 kubelet[128468]: I0526 12:13:41.798230  128468 certificate_manager.go:366] Rotating certificates
May 26 12:13:41 k8s-master01 kubelet[128468]: E0526 12:13:41.801997  128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Automatically approve CSRs and generate kubelet client certificates
After kubelet creates a CSR, it needs to be approved. There are two ways:

kube-controller-manager approves it automatically;
approve it manually with kubectl certificate approve;
After a CSR is approved, kubelet asks kube-controller-manager to create the client certificate; kube-controller-manager's csrapproving controller uses the SubjectAccessReview API to check whether the kubelet request (whose group is system:bootstrappers) has the corresponding permission.

Create three ClusterRoleBindings that grant group system:bootstrappers and group system:nodes the permissions to approve client certificates, renew client certificates, and renew server certificates respectively (server CSRs are approved manually, see below):

cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f csr-crb.yaml

auto-approve-csrs-for-group: automatically approves a node's first CSR; note that the group of the first CSR is system:bootstrappers;
node-client-cert-renewal: automatically approves renewal CSRs for a node's expiring client certificates; the group of these certificates is system:nodes;
node-server-cert-renewal: automatically approves renewal CSRs for a node's expiring server certificates; the group of these certificates is system:nodes (a quick existence check follows);
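A quick existence check for the three bindings (names as defined in csr-crb.yaml above):

$ kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal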
Start the kubelet service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
  ssh root@${node_ip} "/usr/sbin/swapoff -a"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
  • The working directory must be created before starting the service;
  • swap must be disabled, otherwise kubelet will fail to start.
    After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once that CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet and writes the file given by --kubeconfig.
    Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise it will not create the certificate and private key for TLS Bootstrap.
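Optionally, confirm that kubelet came up on every node with a check loop in the same style as the other steps:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  # expect "Active: active (running)" on every node
  ssh root@${node_ip} "systemctl status kubelet | grep Active"
done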

Check kubelet status
After a short wait, the CSRs from all three nodes are automatically approved:

$ kubectl get csr
NAME        AGE   REQUESTOR                     CONDITION
csr-5rwzm   43s   system:node:k8s-master01   Pending
csr-65nms   55s   system:bootstrap:2sb8wy       Approved,Issued
csr-8t5hj   42s   system:node:k8s-master02   Pending
csr-jkhhs   41s   system:node:k8s-master03   Pending
csr-jv7dn   56s   system:bootstrap:ta7onm       Approved,Issued
csr-vb6p5   54s   system:bootstrap:xk27zp       Approved,Issued

The Pending CSRs are for kubelet server certificates and must be approved manually; see below.
All nodes have registered (NotReady is expected and will resolve once the network plugin is installed):

$ kubectl get node
NAME              STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   10h   v1.18.0
k8s-master02   NotReady   <none>   10h   v1.18.0
k8s-master03   NotReady   <none>   10h   v1.18.0
kube-controller-manager generated a kubeconfig file and a key pair for each node:

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2246 Feb  7 15:38 /etc/kubernetes/kubelet.kubeconfig

$ ls -l /etc/kubernetes/cert/kubelet-client-*
-rw------- 1 root root 1281 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem

No kubelet server certificate was generated automatically.
Manually approve the server cert CSRs
For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually:

$ kubectl get csr
NAME        AGE     REQUESTOR                     CONDITION
csr-5rwzm   3m22s   system:node:k8s-master01   Pending
csr-65nms   3m34s   system:bootstrap:2sb8wy       Approved,Issued
csr-8t5hj   3m21s   system:node:k8s-master02   Pending
csr-jkhhs   3m20s   system:node:k8s-master03   Pending
csr-jv7dn   3m35s   system:bootstrap:ta7onm       Approved,Issued
csr-vb6p5   3m33s   system:bootstrap:xk27zp       Approved,Issued

Approve them manually:

$ kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

The server certificates are now generated:

$  ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1281 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:38 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2020-02-07-15-38-21.pem
-rw------- 1 root root 1330 Feb  7 15:42 /etc/kubernetes/cert/kubelet-server-2020-02-07-15-42-12.pem
lrwxrwxrwx 1 root root   59 Feb  7 15:42 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2020-02-07-15-42-12.pem
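To inspect the issued server certificate's subject and validity window (assuming openssl is installed on the node; add -text to also see the SANs):

$ openssl x509 -in /etc/kubernetes/cert/kubelet-server-current.pem -noout -subject -dates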

kubelet API authentication and authorization
kubelet is configured with the following authentication parameters:

  • authentication.anonymous.enabled: set to false to forbid anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA certificate that signed the client certificates; enables HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

and with the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization;

When kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid. A request that passes neither is rejected with Unauthorized:

$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.11.11:10250/metrics
Unauthorized

$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.11.11:10250/metrics
Unauthorized

Once a request is authenticated, kubelet calls the SubjectAccessReview API on kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC).

Certificate authentication and authorization
$ # a certificate with insufficient permissions

$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.11.11:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

$ # the admin certificate with full privileges, created when the kubectl CLI was deployed

$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.11.11:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
  • the values of --cacert, --cert, and --key must be file paths; e.g. ./admin.pem above must keep the leading ./, otherwise the request is rejected with 401 Unauthorized;

Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin, so that it has permission to call the kubelet API:

kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}
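With the token in hand, the metrics request from the earlier example should now succeed (node IP 192.168.11.11 as in the examples above):

$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.11.11:10250/metrics | head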

Deploying the kube-proxy component

kube-proxy runs on all worker nodes; it watches apiserver for changes to services and endpoints and creates routing rules to provide service IPs and load balancing.

This document walks through deploying kube-proxy in ipvs mode.

Note: unless otherwise specified, all operations in this document are performed on the k8s-master01 node, and files/commands are then distributed/executed remotely.

Download and distribute the kube-proxy binary
Create the kube-proxy certificate
Create the certificate signing request:

cd /opt/k8s/work
cat > kube-proxy-csr.json <<EOF
{"CN": "system:kube-proxy","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "BeiJing","L": "BeiJing","O": "k8s","OU": "opsnull"}]
}
EOF
  • CN: sets the certificate's User to system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs (see the check below);
  • this certificate is used by kube-proxy only as a client certificate, so the hosts field is empty;
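You can confirm the predefined binding that this CN relies on (it ships with kube-apiserver):

$ kubectl describe clusterrolebinding system:node-proxier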

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*

Create and distribute the kubeconfig file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
done

Create the kube-proxy configuration file

Since v1.10, some kube-proxy parameters can be set in a configuration file. You can generate this file with the --write-config-to flag, or consult the comments in the source code.

Create the kube-proxy config file template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
  • bindAddress: listening address;
  • clientConnection.kubeconfig: the kubeconfig used to connect to apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish intra-cluster from external traffic; kube-proxy only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is specified;
  • hostnameOverride: must match kubelet's value, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules;
  • mode: use ipvs mode;

Create and distribute a kube-proxy configuration file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  echo ">>> ${NODE_NAMES[i]}"
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
  scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
done

Create and distribute the kube-proxy systemd unit file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Distribute the kube-proxy systemd unit file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  scp kube-proxy.service root@${node_name}:/etc/systemd/system/
done

Start the kube-proxy service

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
  ssh root@${node_ip} "modprobe ip_vs_rr"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
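Optionally, verify the ipvs kernel modules are loaded (ip_vs_rr was loaded explicitly above, and ip_vs is pulled in as its dependency):

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "lsmod | grep ip_vs"
done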

Check the startup result

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
done

Make sure the state is active (running); otherwise inspect the logs to find the cause:

journalctl -u kube-proxy

Check the listening ports

$ sudo netstat -lnpt|grep kube-prox
tcp        0      0 192.168.11.11:10256    0.0.0.0:*               LISTEN      30590/kube-proxy
tcp        0      0 192.168.11.11:10249    0.0.0.0:*               LISTEN      30590/kube-proxy
  • 10249:http prometheus metrics port;
  • 10256:http healthz port;

Check the ipvs routing rules

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done

Expected output:

>>> 192.168.11.11
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.11.12:6443           Masq    1      0          0
  -> 192.168.11.13:6443           Masq    1      0          0
  -> 192.168.11.11:6443           Masq    1      0          0
>>> 192.168.11.12
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.11.12:6443           Masq    1      0          0
  -> 192.168.11.13:6443           Masq    1      0          0
  -> 192.168.11.11:6443           Masq    1      0          0
>>> 192.168.11.13
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.11.12:6443           Masq    1      0          0
  -> 192.168.11.13:6443           Masq    1      0          0
  -> 192.168.11.11:6443           Masq    1      0          0

As shown, all https requests to the K8S Service kubernetes are forwarded to port 6443 on the kube-apiserver nodes.

Deploying the calico network
Kubernetes requires that all nodes in the cluster (including the master nodes) be able to reach each other over the Pod network.

Calico uses IPIP or BGP (IPIP by default) to build an interconnected Pod network across the nodes.

If you use flannel instead, see appendix E.部署flannel网络.md (flannel is used together with docker).

Note: unless otherwise specified, all operations in this document are performed on the k8s-master01 node.

Install the calico network plugin

cd /opt/k8s/work
curl https://docs.projectcalico.org/manifests/calico.yaml -O

Modify the configuration:

$ cp calico.yaml calico.yaml.orig
$ diff calico.yaml.orig calico.yaml
630c630,632
<               value: "192.168.0.0/16"
---
>               value: "172.30.0.0/16"
>             - name: IP_AUTODETECTION_METHOD
>               value: "interface=eth.*"
699c701
<             path: /opt/cni/bin
---
>             path: /opt/k8s/bin
  • change the Pod network CIDR to 172.30.0.0/16;
  • calico auto-detects the inter-node network interface; if a host has multiple NICs, you can give a regular expression for the interface to use, e.g. eth.* above (adjust it to your servers' interface names);

Apply the calico manifest:

$ kubectl apply -f  calico.yaml
  • the calico plugin runs as a daemonset on all K8S nodes.

Check calico status

$ kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
calico-kube-controllers-77c4b7448-99lfq   1/1     Running   0          2m11s   172.30.184.128   k8s-master03   <none>           <none>
calico-node-dxnjs                         1/1     Running   0          2m11s   192.168.11.12   k8s-master02   <none>           <none>
calico-node-rknzz                         1/1     Running   0          2m11s   192.168.11.13   k8s-master03   <none>           <none>
calico-node-rw84c                         1/1     Running   0          2m11s   192.168.11.11   k8s-master01   <none>           <none>

Use crictl to list the images calico uses:

$ crictl  images
IMAGE                                                     TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                      v3.12.0             cb6799752c46c       66.5MB
docker.io/calico/node                                     v3.12.0             fc05bc4225f39       89.7MB
docker.io/calico/pod2daemon-flexvol                       v3.12.0             98793d0a88c82       37.5MB
registry.cn-beijing.aliyuncs.com/images_k8s/pause-amd64   3.1                 21a595adc69ca       326kB
  • If crictl prints nothing or fails, the configuration file /etc/crictl.yaml may be missing; it should contain:

    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    

8. Verifying cluster functionality

This document verifies that the K8S cluster works properly.

Note: unless otherwise specified, all operations in this document are performed on the k8s-master01 node, and files/commands are then distributed/executed remotely.

Check node status

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   15m   v1.18.0
k8s-master02   Ready    <none>   15m   v1.18.0
k8s-master03   Ready    <none>   15m   v1.18.0

Everything is normal when all nodes are Ready at version v1.18.0.

Create a test manifest

cd /opt/k8s/work
cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

Run the test

kubectl create -f nginx-ds.yml

Check Pod IP connectivity across nodes

$ kubectl get pods  -o wide -l app=nginx-ds
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
nginx-ds-j7v5g   1/1     Running   0          61s   172.30.244.1     k8s-master01   <none>           <none>
nginx-ds-js8g8   1/1     Running   0          61s   172.30.82.129    k8s-master02   <none>           <none>
nginx-ds-n2p4x   1/1     Running   0          61s   172.30.184.130   k8s-master03   <none>           <none>

From every Node, ping the three Pod IPs above to verify connectivity:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh ${node_ip} "ping -c 1 172.30.244.1"
  ssh ${node_ip} "ping -c 1 172.30.82.129"
  ssh ${node_ip} "ping -c 1 172.30.184.130"
done

Check service IP and port reachability

$ kubectl get svc -l app=nginx-ds
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-ds   NodePort   10.254.116.22   <none>        80:30562/TCP   2m7s

This shows:

  • Service Cluster IP: 10.254.116.22
  • Service port: 80
  • NodePort: 30562

curl the Service IP from every Node:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh ${node_ip} "curl -s 10.254.116.22"
done

The nginx welcome page should be returned.
Check NodePort reachability

Run on every Node:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh ${node_ip} "curl -s ${node_ip}:30562"
done

The nginx welcome page should be returned.


9. Deploying add-ons

Deploy the coredns add-on
Unless otherwise specified, all operations in this document are performed on the k8s-master01 node.

Download and configure coredns

cd /opt/k8s/work
git clone https://github.com/coredns/deployment.git
mv deployment coredns-deployment

Create coredns

cd /opt/k8s/work/coredns-deployment/kubernetes
source /opt/k8s/bin/environment.sh
./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
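deploy.sh renders the upstream manifest template and writes it to stdout, so you can also save the output for review before applying it (same flags as above):

cd /opt/k8s/work/coredns-deployment/kubernetes
./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} > coredns.yaml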

Check coredns functionality

$ kubectl get all -n kube-system -l k8s-app=kube-dns
NAME                          READY   STATUS    RESTARTS   AGE
pod/coredns-76b74f549-cwm8d   1/1     Running   0          62s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.254.0.2   <none>        53/UDP,53/TCP,9153/TCP   62s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           62s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-76b74f549   1         1         1       62s

Create a new Deployment:

cd /opt/k8s/work
cat > my-nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
kubectl create -f my-nginx.yaml

Expose the Deployment to create the my-nginx service:

$ kubectl expose deploy my-nginx
service "my-nginx" exposed$ kubectl get services my-nginx -o wide
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
my-nginx   ClusterIP   10.254.67.218   <none>        80/TCP    5s    run=my-nginx

Create another Pod and check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured for kubelet, and whether it resolves the service my-nginx to the Cluster IP 10.254.67.218 shown above:

cd /opt/k8s/work
cat > dnsutils-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dnsutils-ds
  labels:
    app: dnsutils-ds
spec:
  type: NodePort
  selector:
    app: dnsutils-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dnsutils-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: dnsutils-ds
  template:
    metadata:
      labels:
        app: dnsutils-ds
    spec:
      containers:
      - name: my-dnsutils
        image: tutum/dnsutils:latest
        command:
        - sleep
        - "3600"
        ports:
        - containerPort: 80
EOF
kubectl create -f dnsutils-ds.yml
$ kubectl get pods -lapp=dnsutils-ds -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
dnsutils-ds-7h9np   1/1     Running   0          69s   172.30.244.3    k8s-master01   <none>           <none>
dnsutils-ds-fthdl   1/1     Running   0          69s   172.30.82.131    k8s-master02   <none>           <none>
dnsutils-ds-w69zp   1/1     Running   0          69s   172.30.184.132   k8s-master03   <none>           <none>
$ kubectl -it exec dnsutils-ds-7h9np  cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local 4pd.io
nameserver 10.254.0.2
options ndots:5
$ kubectl -it exec dnsutils-ds-7h9np nslookup kubernetes
Server:         10.254.0.2
Address:        10.254.0.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1

$ kubectl -it exec dnsutils-ds-7h9np nslookup www.baidu.com
Server:         10.254.0.2
Address:        10.254.0.2#53

Non-authoritative answer:
*** Can't find www.baidu.com: No answer

$ kubectl -it exec dnsutils-ds-7h9np nslookup www.baidu.com.
Server:         10.254.0.2
Address:        10.254.0.2#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 220.181.38.150
Name:   www.a.shifen.com
Address: 220.181.38.149

$ kubectl -it exec dnsutils-ds-7h9np nslookup my-nginx
Server:         10.254.0.2
Address:        10.254.0.2#53

Name:   my-nginx.default.svc.cluster.local
Address: 10.254.67.218

That completes the initial cluster setup. For the remaining add-ons (dashboard, prometheus, ELK, harbor, etc.), see my other articles, where they are covered in detail.

Thanks for reading!
