8. High-Availability Masters

If the cluster was initialized from a configuration file, there is no need to upload the certificates again. If it was initialized from the command line, run the command below on the current master to upload the certificates and obtain the certificate key used when adding new control-plane nodes.

[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
I0111 21:05:14.295131    4977 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
5e4b05cc3ea9a54d172f4895e24caada1496a55061226f4c42cfebba9d50404b

Add master02:

[root@k8s-master02 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e \
--control-plane --certificate-key 5e4b05cc3ea9a54d172f4895e24caada1496a55061226f4c42cfebba9d50404b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.102 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   4m12s   v1.20.14
k8s-master02.example.local   NotReady   control-plane,master   45s     v1.20.14

9. Configuring the Node (Worker) Nodes

The Node (worker) nodes are where the business applications run. In production it is not recommended to schedule anything other than system components on the Master nodes; in a test environment the Masters can be allowed to run ordinary Pods to save resources, as shown in the sketch below.
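A minimal sketch, for test environments only and not part of the original steps: removing the NoSchedule taint that kubeadm places on a master (v1.20 uses node-role.kubernetes.io/master) lets ordinary Pods be scheduled there. The trailing "-" means "remove this taint"; repeat for each master you want to open up.

kubectl taint node k8s-master01.example.local node-role.kubernetes.io/master-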

Add node01:

[root@k8s-node01 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   7m57s   v1.20.14
k8s-master02.example.local   NotReady   control-plane,master   4m30s   v1.20.14
k8s-node01.example.local     NotReady   <none>                 28s     v1.20.14

Add node02:

[root@k8s-node02 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES                  AGE     VERSION
k8s-master01.example.local   NotReady   control-plane,master   9m22s   v1.20.14
k8s-master02.example.local   NotReady   control-plane,master   5m55s   v1.20.14
k8s-node01.example.local     NotReady   <none>                 113s    v1.20.14
k8s-node02.example.local     NotReady   <none>                 34s     v1.20.14

10. Adding a New Master and Node After the Token Expires

Note: the following steps are only required if the token produced by the init command above has expired; if it has not expired, skip them. A quick way to check is shown below.
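A minimal check, not part of the original run: kubeadm bootstrap tokens are valid for 24 hours by default, and the EXPIRES column of the following command shows when the existing token runs out.

kubeadm token list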

# Generate a new token after the old one expires:
root@k8s-master01:~# kubeadm token create --print-join-command
kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i     --discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd

# A new Master also needs a --certificate-key, so re-upload the certificates:
root@k8s-master01:~# kubeadm init phase upload-certs  --upload-certs
I0112 21:30:51.234388   16826 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0a6e1d13be15039189717c81f25dbe369ea0bec0c22e6060a33cb5fedf25e530
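The two commands above can also be combined into one line that prints a complete control-plane join command; a hedged sketch, assuming (as in the output above) that the certificate key is the last line of the upload-certs stdout and that the klog "remote version" line goes to stderr. It creates a fresh token and key rather than reusing the values shown above.

echo "$(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -1)"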

Add master03:

root@k8s-master03:~# kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd \
--control-plane --certificate-key 0a6e1d13be15039189717c81f25dbe369ea0bec0c22e6060a33cb5fedf25e530
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master03.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.103 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master01                 NotReady   control-plane,master   20m   v1.20.14
k8s-master02.example.local   NotReady   control-plane,master   15m   v1.20.14
k8s-master03.example.local   NotReady   control-plane,master   14m   v1.20.14
k8s-node01.example.local     NotReady   <none>                 13m   v1.20.14
k8s-node02.example.local     NotReady   <none>                 13m   v1.20.14

Add node03:

[root@k8s-node03 ~]# kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-master01:~# kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
k8s-master01                 NotReady   control-plane,master   22m   v1.20.14
k8s-master02.example.local   NotReady   control-plane,master   17m   v1.20.14
k8s-master03.example.local   NotReady   control-plane,master   16m   v1.20.14
k8s-node01.example.local     NotReady   <none>                 15m   v1.20.14
k8s-node02.example.local     NotReady   <none>                 15m   v1.20.14
k8s-node03.example.local     NotReady   <none>                 35s   v1.20.14

11. Installing the Calico Component

[root@k8s-master01 ~]# cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:name: calico-etcd-secretsnamespace: kube-system
data:# Populate the following with etcd TLS configuration if desired, but leave blank if# not using TLS for etcd.# The keys below should be uncommented and the values populated with the base64# encoded contents of each file that would be associated with the TLS data.# Example command for encoding a file contents: cat <file> | base64 -w 0# etcd-key: null# etcd-cert: null# etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:name: calico-confignamespace: kube-system
data:# Configure this with the location of your etcd cluster.etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"# If you're using TLS enabled etcd uncomment the following.# You must also populate the Secret below with these files.etcd_ca: ""   # "/calico-secrets/etcd-ca"etcd_cert: "" # "/calico-secrets/etcd-cert"etcd_key: ""  # "/calico-secrets/etcd-key"# Typha is disabled.typha_service_name: "none"# Configure the backend to use.calico_backend: "bird"# Configure the MTU to use for workload interfaces and tunnels.# - If Wireguard is enabled, set to your network MTU - 60# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50# - Otherwise, if IPIP is enabled, set to your network MTU - 20# - Otherwise, if not using any encapsulation, set to your network MTU.veth_mtu: "1440"# The CNI network configuration to install on each node. The special# values in this config will be automatically populated.cni_network_config: |-{"name": "k8s-pod-network","cniVersion": "0.3.1","plugins": [{"type": "calico","log_level": "info","etcd_endpoints": "__ETCD_ENDPOINTS__","etcd_key_file": "__ETCD_KEY_FILE__","etcd_cert_file": "__ETCD_CERT_FILE__","etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__","mtu": __CNI_MTU__,"ipam": {"type": "calico-ipam"},"policy": {"type": "k8s"},"kubernetes": {"kubeconfig": "__KUBECONFIG_FILEPATH__"}},{"type": "portmap","snat": true,"capabilities": {"portMappings": true}},{"type": "bandwidth","capabilities": {"bandwidth": true}}]}---
# Source: calico/templates/calico-kube-controllers-rbac.yaml# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-kube-controllers
rules:# Pods are monitored for changing labels.# The node controller monitors Kubernetes nodes.# Namespace and serviceaccount labels are used for policy.- apiGroups: [""]resources:- pods- nodes- namespaces- serviceaccountsverbs:- watch- list- get# Watch for changes to Kubernetes NetworkPolicies.- apiGroups: ["networking.k8s.io"]resources:- networkpoliciesverbs:- watch- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-kube-controllers
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: calico-kube-controllers
subjects:
- kind: ServiceAccountname: calico-kube-controllersnamespace: kube-system
------
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-node
rules:# The CNI plugin needs to get pods, nodes, and namespaces.- apiGroups: [""]resources:- pods- nodes- namespacesverbs:- get- apiGroups: [""]resources:- endpoints- servicesverbs:# Used to discover service IPs for advertisement.- watch- list# Pod CIDR auto-detection on kubeadm needs access to config maps.- apiGroups: [""]resources:- configmapsverbs:- get- apiGroups: [""]resources:- nodes/statusverbs:# Needed for clearing NodeNetworkUnavailable flag.- patch---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: calico-node
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: calico-node
subjects:
- kind: ServiceAccountname: calico-nodenamespace: kube-system---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:name: calico-nodenamespace: kube-systemlabels:k8s-app: calico-node
spec:selector:matchLabels:k8s-app: calico-nodeupdateStrategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1template:metadata:labels:k8s-app: calico-nodespec:nodeSelector:kubernetes.io/os: linuxhostNetwork: truetolerations:# Make sure calico-node gets scheduled on all nodes.- effect: NoScheduleoperator: Exists# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- effect: NoExecuteoperator: ExistsserviceAccountName: calico-node# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.terminationGracePeriodSeconds: 0priorityClassName: system-node-criticalinitContainers:# This container installs the CNI binaries# and CNI network config file on each node.- name: install-cniimage: docker.io/calico/cni:v3.15.3command: ["/install-cni.sh"]env:# Name of the CNI config file to create.- name: CNI_CONF_NAMEvalue: "10-calico.conflist"# The CNI network config to install on each node.- name: CNI_NETWORK_CONFIGvalueFrom:configMapKeyRef:name: calico-configkey: cni_network_config# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# CNI MTU Config variable- name: CNI_MTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Prevents the container from sleeping forever.- name: SLEEPvalue: "false"volumeMounts:- mountPath: /host/opt/cni/binname: cni-bin-dir- mountPath: /host/etc/cni/net.dname: cni-net-dir- mountPath: /calico-secretsname: etcd-certssecurityContext:privileged: true# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes# to communicate with Felix over the Policy Sync API.- name: flexvol-driverimage: docker.io/calico/pod2daemon-flexvol:v3.15.3volumeMounts:- name: flexvol-driver-hostmountPath: /host/driversecurityContext:privileged: truecontainers:# Runs calico-node container on each Kubernetes node. 
This# container programs network policy and routes on each# host.- name: calico-nodeimage: docker.io/calico/node:v3.15.3env:# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# Location of the CA certificate for etcd.- name: ETCD_CA_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_ca# Location of the client key for etcd.- name: ETCD_KEY_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_key# Location of the client certificate for etcd.- name: ETCD_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_cert# Set noderef for node controller.- name: CALICO_K8S_NODE_REFvalueFrom:fieldRef:fieldPath: spec.nodeName# Choose the backend to use.- name: CALICO_NETWORKING_BACKENDvalueFrom:configMapKeyRef:name: calico-configkey: calico_backend# Cluster type to identify the deployment type- name: CLUSTER_TYPEvalue: "k8s,bgp"# Auto-detect the BGP IP address.- name: IPvalue: "autodetect"# Enable IPIP- name: CALICO_IPV4POOL_IPIPvalue: "Always"# Enable or Disable VXLAN on the default IP pool.- name: CALICO_IPV4POOL_VXLANvalue: "Never"# Set MTU for tunnel device used if ipip is enabled- name: FELIX_IPINIPMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Set MTU for the VXLAN tunnel device.- name: FELIX_VXLANMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Set MTU for the Wireguard tunnel device.- name: FELIX_WIREGUARDMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# The default IPv4 pool to create on startup if none exists. Pod IPs will be# chosen from this range. Changing this value after installation will have# no effect. This should fall within `--cluster-cidr`.# - name: CALICO_IPV4POOL_CIDR#   value: "192.168.0.0/16"# Disable file logging so `kubectl logs` works.- name: CALICO_DISABLE_FILE_LOGGINGvalue: "true"# Set Felix endpoint to host default action to ACCEPT.- name: FELIX_DEFAULTENDPOINTTOHOSTACTIONvalue: "ACCEPT"# Disable IPv6 on Kubernetes.- name: FELIX_IPV6SUPPORTvalue: "false"# Set Felix logging to "info"- name: FELIX_LOGSEVERITYSCREENvalue: "info"- name: FELIX_HEALTHENABLEDvalue: "true"securityContext:privileged: trueresources:requests:cpu: 250mlivenessProbe:exec:command:- /bin/calico-node- -felix-live- -bird-liveperiodSeconds: 10initialDelaySeconds: 10failureThreshold: 6readinessProbe:exec:command:- /bin/calico-node- -felix-ready- -bird-readyperiodSeconds: 10volumeMounts:- mountPath: /lib/modulesname: lib-modulesreadOnly: true- mountPath: /run/xtables.lockname: xtables-lockreadOnly: false- mountPath: /var/run/caliconame: var-run-calicoreadOnly: false- mountPath: /var/lib/caliconame: var-lib-calicoreadOnly: false- mountPath: /calico-secretsname: etcd-certs- name: policysyncmountPath: /var/run/nodeagentvolumes:# Used by calico-node.- name: lib-moduleshostPath:path: /lib/modules- name: var-run-calicohostPath:path: /var/run/calico- name: var-lib-calicohostPath:path: /var/lib/calico- name: xtables-lockhostPath:path: /run/xtables.locktype: FileOrCreate# Used to install CNI.- name: cni-bin-dirhostPath:path: /opt/cni/bin- name: cni-net-dirhostPath:path: /etc/cni/net.d# Mount in the etcd TLS secrets with mode 400.# See https://kubernetes.io/docs/concepts/configuration/secret/- name: etcd-certssecret:secretName: calico-etcd-secretsdefaultMode: 0400# Used to create per-pod Unix Domain Sockets- name: policysynchostPath:type: DirectoryOrCreatepath: /var/run/nodeagent# Used to install Flex Volume Driver- name: flexvol-driver-hosthostPath:type: DirectoryOrCreatepath: 
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---apiVersion: v1
kind: ServiceAccount
metadata:name: calico-nodenamespace: kube-system---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:name: calico-kube-controllersnamespace: kube-systemlabels:k8s-app: calico-kube-controllers
spec:# The controllers can only have a single active instance.replicas: 1selector:matchLabels:k8s-app: calico-kube-controllersstrategy:type: Recreatetemplate:metadata:name: calico-kube-controllersnamespace: kube-systemlabels:k8s-app: calico-kube-controllersspec:nodeSelector:kubernetes.io/os: linuxtolerations:# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- key: node-role.kubernetes.io/mastereffect: NoScheduleserviceAccountName: calico-kube-controllerspriorityClassName: system-cluster-critical# The controllers must run in the host network namespace so that# it isn't governed by policy that would prevent it from working.hostNetwork: truecontainers:- name: calico-kube-controllersimage: docker.io/calico/kube-controllers:v3.15.3env:# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# Location of the CA certificate for etcd.- name: ETCD_CA_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_ca# Location of the client key for etcd.- name: ETCD_KEY_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_key# Location of the client certificate for etcd.- name: ETCD_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_cert# Choose which controllers to run.- name: ENABLED_CONTROLLERSvalue: policy,namespace,serviceaccount,workloadendpoint,nodevolumeMounts:# Mount in the etcd TLS secrets.- mountPath: /calico-secretsname: etcd-certsreadinessProbe:exec:command:- /usr/bin/check-status- -rvolumes:# Mount in the etcd TLS secrets with mode 400.# See https://kubernetes.io/docs/concepts/configuration/secret/- name: etcd-certssecret:secretName: calico-etcd-secretsdefaultMode: 0400---apiVersion: v1
kind: ServiceAccount
metadata:name: calico-kube-controllersnamespace: kube-system---
# Source: calico/templates/calico-typha.yaml---
# Source: calico/templates/configure-canal.yaml---
# Source: calico/templates/kdd-crds.yaml

Modify the following locations in calico-etcd.yaml

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yamletcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yamletcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml# etcd-key: null# etcd-cert: null# etcd-ca: null[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yamletcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkpwdldKZjdpRzhpNFJGeVhXVXlFRFU0dlR5UHprV1IxVE9jaUVpQ0J0N1BsbUJWCmkyOEZVdmZTTFpBbDdZRmJsNTg0STJJV0Y5MU1FOElIRnp4clZyckZ5eVZKdEtwSFZiWEt1OHdUaW81S0FpSEMKUGFrTE43amtrTXRERmJjbDFyaEVVaUxURVRJVFNUYlBmMmE3NU4zc2krTFU5L1packdmQ3hQcTMvQ1NlQjVwdwpDRktzQzhaVi9rbTRoK21LU3pYcmVVbGgyT2tDa3RYWEtReE9Qa1ptTzZDUHB5WkVlV05TL1c2VlArZytZd1I0Ck5KTVE0eUhzODNYTXByMEl4UFk2RVNYNEdVQTQrdVk4VmR6d1Y5aHFKWlcrdy9yMzlSd1VEY2ZhSlA5eXRGZisKVlMwR0dyL053TUd3b0ZZQWJOeGRGUHZJL3prdFRERkJQREZ6cVFJREFRQUJBb0lCQUJ6QWttNzRKSUdGSjlVVgordEJnS0FTdWlHclkrN2RmaGI3eDhsQVlkYklrYjVNbU5vUmVOWHFUaXpnay9KTTdvRUg2Sk8zSCswUkNHV0g5CnQyVUVjZnl6MW9tRXNyclhKcTdiV3YvTU9jSnF0TCtrYzk5QWtSUTZuS1d5UnhUZGFlaFZDUjFZYjhMMFZscFgKLzhRVlhsbWl0M2dQNlpXdnViWDl6NFNHRUZ4ZzJYSmNydWF3algxU1ZGTnRPN2xrU2tqaWgzYjRTb2wvamNNZwo3UExvUUxaOGNvbm5XaUJtUEExRWYzc3N2enBtbkd3M09KRXNJdEhVMEFyQ3VPQ2RxYllHaFpHYWZjdmhmWU1PCnJhKzFIUTg4Tys5VS9ScGkrTVNvelRnUDRTOGtZL1pNbXhmUXAwT2k4d1FRZC9RbmRJMWRuYW5MQy80RlBoSzgKNkVTVFRVMENnWUVBNzdzR0FtZFd2RFhrY1ZVM3hKQk1yK3pyZWV3Nlhla0F0ZGVsU3pBMU81RENWYmQvcEgrNApmOXppd1o4K1dWRC8xWm5wV2NlZ0JBQ0lPVzNsUnUvelJva2NSeXFpRFVGcE5ET0xwWjFEYXJaNzVCekhwQ2QyCjQrNldUdkNDMEVnR0k5enkrYWpKOE5ESjhwcFRPUjR0NDhOR3FTaHorMUdONkRaNU15elpBUmNDZ1lFQXlXY2wKeC9kWkMrVmRCV01uOVpkU0pQcHF1RHQwbVFzS044NG9vSlBIRVhydWphWnd6M3pMcWp0NnJsU1M3SDk2THZWeApaYkxvY1UyQ1hLVVdRaU1vNmhYc1cwa2NmaEJPU0xFQmMrT3o2M0tLWW0zdzl6M3dkYlhGQ1ZPUCtzdlh5bE90CmNkRWNnK1Z2aGZQK0w2VTVZN0d6OW9IL1NnME93d1hPdXk2K3FUOENnWUVBcCsrbUdBejRUOFNaRVdPWE81V3kKZ3hNL0todjREMDFvZC9wbkNyTHN0NXVDNTdVeUw3UmhOUUV4d0YyanVjSHFWbUlKZkNGQjBVdm1JZ1VBTnA5bApGcVo2THNpSTJTeFhYSUEzZFg4amVSLzR6aVh6SE9XZ2ZhL25qOGtnZW5QYUNVbUExTEFQTnltc0xzMDVPNndPCmpaMkFaSU80Sy9oSHBzSnlTUTFEdjJVQ2dZRUFvMHNPUnVNMVAzL251OFo1VDVZdzgrcFZQS3A0RHQzMG11cDcKNWpYcTRURmEyVjVwZU5FbUVBL0ptQzdhTVFYcWVzaGwrSjdsOTNkd2lzMFBEdkNTNjdoNnVraTg0VGszUDVqRQpKTUlwem13LzV5NWNnUm1uTE1rRHlGd0lFTC9WWmlZU0tvWHhLTCtOZkg0blNWb2MvY2ZHc2NjVXhXVnc0bzZDCjN5RTNWT0VDZ1lBNXFHL0t2amxhV3liQndYY3pXVmZWeTJ4VTMwZVNQWVVqWTlUdUR0ZGJqbHFFeTlialZsZzUKWldRb0dKcTVFbjF1YXpUcnc3QlFja2VjaE1zRzBrZkRZSzhZbC9UMThGemxBWDh3TzJaZGlOQnJYVjhGMnRKaQpPYmJwZU45Y0l2ZkVpcjgwOHBVcC9ac05zQWpjMzBERU82THVPblA2VlpmQ1R2Wit4VVJodmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=etcd-cert: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURYVENDQWtXZ0F3SUJBZ0lJRDc0VzRkNnJ0MFl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakF4TVRVd05qTTJNVEJhRncweU16QXhNVFV3TmpNMk1UQmFNQ1V4SXpBaApCZ05WQkFNVEdtczRjeTF0WVhOMFpYSXdNUzVsZUdGdGNHeGxMbXh2WTJGc01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZKcHZXSmY3aUc4aTRSRnlYV1V5RURVNHZUeVB6a1dSMVRPY2lFaUMKQnQ3UGxtQlZpMjhGVXZmU0xaQWw3WUZibDU4NEkySVdGOTFNRThJSEZ6eHJWcnJGeXlWSnRLcEhWYlhLdTh3VAppbzVLQWlIQ1Bha0xON2pra010REZiY2wxcmhFVWlMVEVUSVRTVGJQZjJhNzVOM3NpK0xVOS9aWnJHZkN4UHEzCi9DU2VCNXB3Q0ZLc0M4WlYva200aCttS1N6WHJlVWxoMk9rQ2t0WFhLUXhPUGtabU82Q1BweVpFZVdOUy9XNlYKUCtnK1l3UjROSk1RNHlIczgzWE1wcjBJeFBZNkVTWDRHVUE0K3VZOFZkendWOWhxSlpXK3cvcjM5UndVRGNmYQpKUDl5dEZmK1ZTMEdHci9Od01Hd29GWUFiTnhkRlB2SS96a3RUREZCUERGenFRSURBUUFCbzRHak1JR2dNQTRHCkExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0h3WUQKVlIwakJCZ3dGb0FVQ2ZkNk5va2FXeFJOZlN2Umw4ajk5bU52aUhrd1RnWURWUjBSQkVjd1JZSWFhemh6TFcxaApjM1JsY2pBeExtVjRZVzF3YkdVdWJHOWpZV3lDQ1d4dlkyRnNhRzl6ZEljRXJCOERaWWNFZndBQUFZY1FBQUFBCkFBQUFBQUFBQUFBQUFBQUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZFNRRnlNckFMQWJqcTRDc29QMUoKTThTTU1aRCthMGV6U29xM01EUmJCQWhqWlhEaU5uczMvMWo2aUhTcDUvaWJ6NGRjQnRsaW1HWHk0ek03MGtvcwo5R0JBZzVwaXJXYVFEcWtVSFkxYjdkWUlZSWN4YW9vQWtQeEVoSlZOYTBKYlFyb21qTnJiTVh4MlVsUjVtRGU2CnFMYUtsVDh4WC9zVStSelRxN1VBckxhOWIzWkZvN2V5UkhzZFBUODY3QnZCQnZkNEdMOElxWDdzbVd0VUhLVEkKQWZLMUQrQ3BEUGxNUWE3M1FOOGhvQVRPNTV2ckVjeEFIeDh2VDJ5VUYrYjZaVjJnQm43Z3hJSUNxVUF6OGhWagpTdzMxUVEvTHZ6ME1HZlQ3dFQ0NE52dyt3aHExZVJyNXJ2enRQcml5MUFvSDB1a2hiZ3VEbGExdElOa2FKc01XClRnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURFeE5UQTJNell4TUZvWERUTXlNREV4TXpBMk16WXhNRm93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxRSldKZmVNT3N3CmN4NDZmaUExaWREekU3c05DSk1JRnFXODRxWUdZSEF2ekxBZ2JSS0dXRHdnYi8vM01ZUUhuanhpb1lEN1BjVXUKYnZRRlN0cmtmYW1IWHpaMlAxd0dzRThrSkEvOVhaTStTNWttL0M0UHgxSjFoSHNyYUhjR21wUWYxM3ZCS2IrbgpvdDFHK2lERkZ2ZmdNWVd1U1FvL1M4WGFMZDZTcmZyeTFWOUQzek0zaUF4OGkrVzF3bE41b1hqc0RyRW5XcUFRCmxzVmVteWMxQkZRR0FjSTJLL0dzcXNlUmlUM1dCZ2RhV2JST1RMby83RWoycDdGNHdNcHRiT0kvam56UjM5WkUKSnZ6ZHpvUmJWQlh3NTFXY3cvNFZxOW5aaXJESEY3TWRZVEQ4RXIwRFovd2tYa1FsZ3VidTNBRjNDZEVsSS9wTAoxK1BhaFhvUFZpRUNBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGQW4zZWphSkdsc1VUWDByMFpmSS9mWmpiNGg1TUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQ0lCaGtiQzZ5OTFqNVAvbzB0NjJLeWlDMWdWelJCcHB0NWwvSXZabDRDRVJwVHVWSzJMTkxPZitGbwpMbGVUWlBZTmVWVFVkc2ZYdlVCekYvelpsSjJ6OVdBRUhTbk5Ba0haQVQ4N0tzSGZuRksyQi9NeFFuSEFkMWMzCkNHdzBxQ3RvUVBLdFI1U2UwUngrQUxQSE9iaUEwRG5uN3JESVhuTnBtdkx6VFliY1JTbnVhRTk1cFIwVVBPYzQKWTd5Ulg4MkttRWkxQVR6UEZBNXp2NFg4VnFMbVB2MFNnSjZiRVl1RnM3TUhScFErTkFRZlRBaktLQzg2d3J0QQpUbWlxeUVJU1RtQk03cVliOTl3OWRsWlBVcDIwNS9jZDBmY3ZudTNQQlJRRDhCVWdrOEhtRnhtNG1iZE9wdW9KCktzT05rbVBlNm5ZcDV2dGNiUndKWnlsSzJOdGkKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yamletcd_ca: ""   # "/calico-secrets/etcd-ca"etcd_cert: "" # "/calico-secrets/etcd-cert"etcd_key: ""  # "/calico-secrets/etcd-key"[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml[root@k8s-master01 ~]# grep -E 
"(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yamletcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12

# Note: the step below changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. it replaces 192.168.x.x/16 with your cluster's Pod CIDR and uncomments the setting:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml# - name: CALICO_IPV4POOL_CIDR#   value: "192.168.0.0/16"[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml- name: CALICO_IPV4POOL_CIDRvalue: 192.168.0.0/12[root@k8s-master01 ~]# grep "image:" calico-etcd.yamlimage: docker.io/calico/cni:v3.15.3image: docker.io/calico/pod2daemon-flexvol:v3.15.3image: docker.io/calico/node:v3.15.3image: docker.io/calico/kube-controllers:v3.15.3

Download the Calico images and push them to Harbor

[root@k8s-master01 ~]# vim download_calico_image.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_calico_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Starting Calico image download"${END}
    for i in ${images};do
        docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico image download complete"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_image.sh

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
          image: harbor.raymonds.cc/google_containers/cni:v3.15.3
          image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
          image: harbor.raymonds.cc/google_containers/node:v3.15.3
          image: harbor.raymonds.cc/google_containers/kube-controllers:v3.15.3

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

# Check the Pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-6474888cfb-kgfx8             1/1     Running   0          31s
calico-node-2kpgb                                    1/1     Running   0          31s
calico-node-bpbbp                                    1/1     Running   0          31s
calico-node-cgxdk                                    1/1     Running   0          31s
calico-node-fr6vv                                    1/1     Running   0          31s
calico-node-h8q2d                                    1/1     Running   0          31s
calico-node-tc2nw                                    1/1     Running   0          31s

# Check the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
k8s-master01.example.local   Ready    control-plane,master   133m   v1.20.14
k8s-master02.example.local   Ready    control-plane,master   130m   v1.20.14
k8s-master03.example.local   Ready    control-plane,master   128m   v1.20.14
k8s-node01.example.local     Ready    <none>                 126m   v1.20.14
k8s-node02.example.local     Ready    <none>                 124m   v1.20.14
k8s-node03.example.local     Ready    <none>                 123m   v1.20.14
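An optional smoke test of the Pod network, not part of the captured run and assuming the busybox image is reachable from the cluster: the test Pod should receive an address from the 192.168.0.0/12 pool configured above.

kubectl run net-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl get pod net-test -o wide
kubectl delete pod net-test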

12. Deploying Metrics Server

In newer versions of Kubernetes, system resource metrics are collected by metrics-server, which can be used to gather memory, disk, CPU, and network usage for nodes and Pods.

[root@k8s-master01 ~]# cat components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-serverrbac.authorization.k8s.io/aggregate-to-admin: "true"rbac.authorization.k8s.io/aggregate-to-edit: "true"rbac.authorization.k8s.io/aggregate-to-view: "true"name: system:aggregated-metrics-reader
rules:- apiGroups:- metrics.k8s.ioresources:- pods- nodesverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-servername: system:metrics-server
rules:- apiGroups:- ""resources:- pods- nodes- nodes/stats- namespaces- configmapsverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server-auth-readernamespace: kube-system
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: extension-apiserver-authentication-reader
subjects:- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server:system:auth-delegator
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:auth-delegator
subjects:- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: system:metrics-server
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:metrics-server
subjects:- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: v1
kind: Service
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:ports:- name: httpsport: 443protocol: TCPtargetPort: httpsselector:k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:selector:matchLabels:k8s-app: metrics-serverstrategy:rollingUpdate:maxUnavailable: 0template:metadata:labels:k8s-app: metrics-serverspec:containers:- args:- --cert-dir=/tmp- --secure-port=4443- --metric-resolution=30s- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostnameimage: k8s.gcr.io/metrics-server/metrics-server:v0.4.1imagePullPolicy: IfNotPresentlivenessProbe:failureThreshold: 3httpGet:path: /livezport: httpsscheme: HTTPSperiodSeconds: 10name: metrics-serverports:- containerPort: 4443name: httpsprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /readyzport: httpsscheme: HTTPSperiodSeconds: 10securityContext:readOnlyRootFilesystem: truerunAsNonRoot: truerunAsUser: 1000volumeMounts:- mountPath: /tmpname: tmp-dirnodeSelector:kubernetes.io/os: linuxpriorityClassName: system-cluster-criticalserviceAccountName: metrics-servervolumes:- emptyDir: {}name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:labels:k8s-app: metrics-servername: v1beta1.metrics.k8s.io
spec:group: metrics.k8s.iogroupPriorityMinimum: 100insecureSkipTLSVerify: trueservice:name: metrics-servernamespace: kube-systemversion: v1beta1versionPriority: 100

Copy front-proxy-ca.crt from the Master01 node to all Node nodes

[root@k8s-master01 ~]#  for i in k8s-node01 k8s-node02 k8s-node03;do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt ; done
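An optional verification, not part of the original run, that the certificate landed on every worker (assumes the same passwordless SSH access used by the scp loop above):

for i in k8s-node01 k8s-node02 k8s-node03;do ssh $i ls -l /etc/kubernetes/pki/front-proxy-ca.crt; done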

Modify the following content:

[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        # Add the lines below
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # note: with kubeadm the certificate file is front-proxy-ca.crt
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
...
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        # Add the lines below
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
...
      volumes:
      - emptyDir: {}
        name: tmp-dir
      # Add the lines below
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

Download the image and change the image address

[root@k8s-master01 ~]# grep "image:" components.yamlimage: k8s.gcr.io/metrics-server/metrics-server:v0.4.1[root@k8s-master01 ~]# cat download_metrics_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_metrics_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"Starting Metrics image download"${END}
    for i in ${images};do
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics image download complete"${END}
}

images_download

[root@k8s-master01 ~]# bash download_metrics_images.sh

[root@k8s-master01 ~]# docker images|grep metrics
harbor.raymonds.cc/google_containers/metrics-server            v0.4.1              9759a41ccdf0        14 months ago       60.5MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml

[root@k8s-master01 ~]# grep "image:" components.yaml
        image: harbor.raymonds.cc/google_containers/metrics-server:v0.4.1

Install metrics-server

[root@k8s-master01 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
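An optional check, not part of the captured output, that the aggregated metrics API registered above eventually reports Available:

kubectl get apiservices v1beta1.metrics.k8s.io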

Check the status

[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep metrics
metrics-server-9787b55bd-xhmbx                       1/1     Running            0          50s

[root@k8s-master01 ~]# kubectl top node
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01.example.local   175m         8%     1674Mi          43%
k8s-master02.example.local   187m         9%     1257Mi          32%
k8s-master03.example.local   164m         8%     1182Mi          30%
k8s-node01.example.local     98m          4%     634Mi           16%
k8s-node02.example.local     72m          3%     729Mi           19%
k8s-node03.example.local     104m         5%     651Mi           17%
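Pod-level metrics are available as well; a minimal sketch, output omitted:

kubectl top pod -n kube-system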

13. Deploying the Dashboard

The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to execute commands inside containers.
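Logging in to the Dashboard later requires a token. A minimal sketch, not part of the original steps and assuming the hypothetical account name admin-user; run it only after recommended.yaml (below) has been applied and the kubernetes-dashboard namespace exists. On v1.20 the ServiceAccount's token can be read from its auto-created Secret.

# Hypothetical admin account for Dashboard login (name admin-user is an assumption)
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
# Print the token from the ServiceAccount's Secret
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')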

13.1 Installing the Dashboard

[root@k8s-master01 ~]# cat recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

[root@k8s-master01 ~]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort    #Add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005    #Add this line
  selector:
    k8s-app: kubernetes-dashboard
...

[root@k8s-master01 ~]# grep "image:" recommended.yaml
          image: kubernetesui/dashboard:v2.0.4
          image: kubernetesui/metrics-scraper:v1.0.4

Pull the images and push them to Harbor:

[root@k8s-master01 ~]# cat download_dashboard_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-11
#FileName:      download_dashboard_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/"  '/image:/{print $NF}' recommended.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
    ${COLOR}"开始下载Dashboard镜像"${END}
    for i in ${images};do
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Dashboard镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_dashboard_images.sh

[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' recommended.yaml
[root@k8s-master01 ~]# grep "image:" recommended.yaml
          image: harbor.raymonds.cc/google_containers/dashboard:v2.0.4
          image: harbor.raymonds.cc/google_containers/metrics-scraper:v1.0.4

[root@k8s-master01 ~]# kubectl  create -f recommended.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
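
Optionally, before continuing, confirm that both Dashboard Deployments come up; this is just a quick extra check, not part of the original manifest:

#Check that the Dashboard Pods reach the Running state
kubectl get pod -n kubernetes-dashboard -o wide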

Create the administrator user (admin.yaml):

[root@k8s-master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@k8s-master01 ~]# kubectl apply -f admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.2 Logging in to the Dashboard

Add the following startup parameters to the Google Chrome launcher to work around the problem of being unable to access the Dashboard (see Figure 1-1):

--test-type --ignore-certificate-errors
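
On Linux the same flags can simply be passed on the command line instead of editing the launcher; a sketch, assuming the browser binary on your system is called google-chrome:

#Hypothetical example: start Chrome with the two flags and open the Dashboard directly
google-chrome --test-type --ignore-certificate-errors https://172.31.3.101:30005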

[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.106.189.113   <none>        443:30005/TCP   18s

Access the Dashboard at https://172.31.3.101:30005 (see Figure 1-2).

Figure 1-2 Dashboard login options

13.2.1 Logging in with a token

View the token value:

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-6bvhm
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c6ae76a2-322c-483a-9db1-eaf102859165

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IldlTXE5S29BbW1YYVdJNWljRnBGamVEX1E0YV9xRVU5UWM1Ykh0dGQ0UkkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZidmhtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNmFlNzZhMi0zMjJjLTQ4M2EtOWRiMS1lYWYxMDI4NTkxNjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Y_VSCPM9F3L00v5XBGcT6JwpnLoAfrKUw8ufOpOhH5AhuFcXctRwAWvZDzGPOH3mH0GaIiyi1G7GOlZRRnJJVy5C7I3VKnE4mZzMScBvKCEheU40Y6x28CvkZDTmuaaDgSWrm3cfjAvTEJIg45TrtaN25at79GB27_A1LJ3JQUHY59OpG6YUbnFWjW899bCUN99lmYTMGe9M5cjY2RCufuyEam296QEz6b23tyEHdMCcPDJJH6IEDf2I4XhA5e5GWqfdkX1qX5XZ21MRyXXXTSVYqeLvvdNvQS3MxLlNaB5my0WcruRihydkC_n1UamgzXBu-XWfM4QWwk3gzsQ9yg
ca.crt:     1066 bytes
namespace:  11 bytes
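
If only the raw token is needed (for example in a script), it can also be read straight from the ServiceAccount's secret; a sketch, assuming the admin-user ServiceAccount created above:

#Print only the token of the admin-user ServiceAccount
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d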

Enter the token value in the Token field and click Sign in to access the Dashboard (see Figure 1-3):


13.2.2 Logging in to the Dashboard with a kubeconfig file

[root@k8s-master01 ~]# cp /etc/kubernetes/admin.conf kubeconfig

[root@k8s-master01 ~]# vim kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ERXhNVEV6TURJMU1Gb1hEVE15TURFd09URXpNREkxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0JoCkQxV3h3Myt3bE9WNU02MEtPYjlEZmo1U09EREZBYjdMTGd3ZXgrK3d3eEVHcnFpaGUxVmVLZnlIMHJmTnEvakEKVHArekxyVXhRNHdzNEw3Z29Na2tJcDc3aXRqOHc1VWJXYUh0c3IwMkp1VVBQQzZiWktieG5hTmFXTldTNjRBegpORFhzeSszU3dxcTNyU3h4WkloTS9ubVZRTEZKL21OanU5MUNVWE03ak9jcXhaMUI2QitSbzhSdHFpRStZUlhFCm1JS1ZCeWhpUXhQWE53VEcwN0NKMnY5WnduNmlxK2VUMUdNbVFsZ0Z1M0pqQm9NUTFteWhYODM3QTNTdXVQNDkKYU1HKzd2YTh5TFFkMWltZEZjSVpDcmNHU2FMekR5SDFmMUQ3ZTM4Qm01MTd4S1ZZZkJQQkNBSjROb3VKQmVXSgpPN1lLK2RFb1liaURHWVBtdWxNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVUlaLzVqSSs0WHQ3b1FROC9USU5RQ1gxbXNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBRnVkbFNQUTNTM0VXdUZ0YnhiekVsc2IyR2F2NUQzd1VDWStBdlZWRWhzcmZvYzlqKwp5REEwdjZSamEvS3VRWUpjMG9vVkN5cTkveHVyenZyOU9DS3ZwejBDZDJHWkYyeFFFcDZ6QlMvM3A5VUh5YnU3Cm9Kb0E2S0h4OTd0KzVzaWQyamQ4U29qUGNwSGdzZloySmxJckc3ckJpMktuSTZFSlprdWxjMlVIN09kY2RJWmwKTXpkMWFlVG5xdHlsVkZYSDN6ZkNCTTJyZ045d0RqSHphNjUyMkFRZVQ2ODN0ZTZXRWIxeWwvVEdVUld0RFhmKwpQbXV6b3g5eGpwSFJoVDZlcVYwelVHVGZJUlI3WmRIb3p2TzNRVlhtYmNUdDQxVFFsaDRIMHBkQ2p6dmZLTDA0CnNHMmRIaFRBL0wzUlc0RXlDY2NPQ0o2bWNiT1hyZzNOUnhxWQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==server: https://172.31.3.188:6443name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJUnQ3eHBrbVg3cjh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeE1URXhNekF5TlRCYUZ3MHlNekF4TVRFeE16QXlOVEphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQWxKeHBjcmlNTjh0aGg3ODEKT2FMckxmZmRtU3BRYUdSTmJQZjZUdTMwK1hCMEtUTXloR2EwRC83TWtaajZ5MjAzM0R5SEtpSUlhY2d3QXBnYQpjZE9TcHhwaitsd2pRSy9rN3M3QVJLcVExY2VueUtiaXp0RGMweCt2dGFXN0djcVlQSkpvU2dqWWxuZ0FWSmh4CnlWZDI3R3I2SEVWRFFMSVlra2tqWnFSTzI0U0ZoMDlUK2JCZlhSRGVZaHk1UW1qem5lc0VWbk1nUkdSVElnNTgKYjFBRHR1d1VTZ3BQNTFITTlKWHZtSTBqUytqSXBJNllYQUtodlpLbnhLRjh2d1lpZnhlZDV4ZjhNNVJHWnJEMQpGbFZ5NWQ5ZUNjV2dpQ0tNYVgvdzM4b2pTbE5OZGFwUzlzQXVObXNnbHlMT0MrWVh1TlBNRWZHbDdmeG5yUWl2ClV1dkFMUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVU5UWhuL21NajdoZTN1aEJEejlNZzFBSmZXYXd3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFGU0tBZm9kYi9xSVJmdWIrcXlLdkhEeDFuNEtoNEVWQ2M5ZlRPZG1NdHBHU2tUbCtMbmptc0pNClpncWdTYUtMY0xYWS9JWTVsd3N3OXRsbzBwSElyMUNxYXBYa3M5WDZiSjJzc0pFdGN5ODFocXJSd2pqYzQzdEoKZUp0QkhsNWpvV2tkV0ZCMXpsRVhyWEYwdmU0ckRueVdWL04zSTV3bzVUYXpRMTRZRmZ0c2RVYlYwNXdXa0F6cgo5YWtLd25pWWRVZTRjdlpwNkFMb01uQVJXa29La1h0elI1SElJUFhaTGlHWnEwWGpHMWdpODBvR01ZZXlWb1ZCCnRUMmt1MElJNmhIbzh3VXNJdWlDT3EyQjRMWFpobW9DQU5kcnFDc0FUaXRjTll0bGdkM1RtQUx4ZmpMMkN1cWUKL1lieXZORWhndnh4dFlwN2lJWE9jZks1RDF3VSthdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBbEp4cGNyaU1OOHRoaDc4MU9hTHJMZmZkbVNwUWFHUk5iUGY2VHUzMCtYQjBLVE15CmhHYTBELzdNa1pqNnkyMDMzRHlIS2lJSWFjZ3dBcGdhY2RPU3B4cGorbHdqUUsvazdzN0FSS3FRMWNlbnlLYmkKenREYzB4K3Z0YVc3R2NxWVBKSm9TZ2pZbG5nQVZKaHh5VmQyN0dyNkhFVkRRTElZa2tralpxUk8yNFNGaDA5VAorYkJmWFJEZVloeTVRbWp6bmVzRVZuTWdSR1JUSWc1OGIxQUR0dXdVU2dwUDUxSE05Slh2bUkwalMraklwSTZZClhBS2h2WktueEtGOHZ3WWlmeGVkNXhmOE01UkdackQxRmxWeTVkOWVDY1dnaUNLTWFYL3czOG9qU2xOTmRhcFMKOXNBdU5tc2dseUxPQytZWHVOUE1FZkdsN2Z4bnJRaXZVdXZBTFFJREFRQUJBb0lCQURrV0tHK1lNc3pRQktRWApzRU4yc083VWt6eGVBOHRHRkhQeWdpWEZ4Ti80OGJaTjQyNzI0TjV3RzNjbWs5aUhHUGt5Q3g0Rk9zUWYwVWw5CjBsSzlXazEwbHNrNmtaUXN2VDE3RUdLUVB0alFQRVNZenZGeFRCS1J6blp4dG9DKzBXSWJQNUtJK1dJN3NLek8KYm85UVdPK1NYSWQxbDlNSFZ1Y0N6MldEWW9OeU85bmFobWdzSWpIRnRqVEo5NWQ2cWRmWDNHZXBSRHA0em5EaQprTVFJMWRBdTg1TE9HMVZyd2lMRUxPa2JVOW5hNGdJS1VIVmY5RW90SndXVzI2K2kxS1JNYVJJVmlkbDVqTm1aCnZwM3JVOUM3L253c01pVktMMTF2MW8wdGptc2gzbkxnTVNEcEJtUE5pTGcxR3AxK0FPYVBXVFNDVEJZTDdOOG8KNGJxcEw0VUNnWUVBeEVpSWhKMzNMS0FTTHBGY3NtZ2RKUDBZOWRwZzZmcHlvOXA4NlpuejYxZXpUVkhyZ0p1SQptc09tTXQ0eHRINGVJbHhRYklWSGNJWC9iZis0aCtkUFJ0Q1ExRUdUTWRaaW9qSkJCd2JhRS9xd0YwMjZpRkRnCm9TZFpiemhFbk5BWmV5NjI1Skp2QXdRdldIanRPRHRNdDQ0dWZmYndGRDErZEtQc3JobkQzWThDZ1lFQXdkTHUKdGJTWDZYUFovTndHaXl6TnBrWHZST0hzNU1TaGFiVW9ibmxMbWxsL3gwUS9WQVkxdmhhakFiQ2t2WUk0T3VrUgowZWl2Wmx1bVNrazFJTlB5VXBNQ1dHR1lVTGJlWURidXhnZDlZd3Z1SWZQRmpwWU1RR0FRcE1SangzTCtMMzlQClplRW9lRmF3ZzdIVTgrYWVWWU9jTk5aaHYvbHhadUM5MzRkSW9JTUNnWUVBb3ZiRndiV1ZYb3VZRE9uTFdLUncKYmlGazg5cFgxR3VIZXRzUUVyTXJmUjNYVkQ3TGxIK05yMUQ1VUFxQ29pU0R5R3QwcW1VTnB6TFptKzVRdXlVbApBTnB4SklrOU9JZVNaSy9zcFhUZTR1K2orL1VoQmNTQWU4dzd5TWVpejc5SEtLcmtWbW50bVVlRU42Uk83L3pyCitRb25ONVlxUmVPNGRnY1Rub2p0d2FrQ2dZQTZYeVVHMGdtQ0JDTGROUUkvZmRHOVJvaUZqU2pEeUxmMzF0Z0QKVlVKQWpMMmZyRjBLR0FpdFk3SFp1M0lScEpyOG10NkVBZmg0OGhjRmZrQ2l6MUhHTG9IaFRoc0tDOWl5enpoZgpxVGZJMFhuNC9hbzhnOUhTdlZ1bDA0TmRPTE4yYUhmbjdjUTdZWmd0UVN3cC9BVXBLY2FzWHZmM1VjOG1OWDdaClI2dkdzd0tCZ1FDd2VBcmptSVV1ejV5
cXhkREszbElQd0VsQTlGU3lMTjROU0owNTVrQ2tkTUZMS0xpcUZ0Y2UKSXBrWWhIbXNRc28yRTZwTStHQ0dIMU81YWVRSFNSZWh2SkRZeGdEMVhYaHA5UjRNdHpjTmw2U3cwcTQ4MVNZZQplNVp5Zk9CcWVDbzdOQmZ0dS9ua0tZTDFCTUNMS1hOM0JYNkVpQ0JPUjlSUDJHeEh6S3FBa2c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=token: eyJhbGciOiJSUzI1NiIsImtpZCI6IldlTXE5S29BbW1YYVdJNWljRnBGamVEX1E0YV9xRVU5UWM1Ykh0dGQ0UkkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZidmhtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNmFlNzZhMi0zMjJjLTQ4M2EtOWRiMS1lYWYxMDI4NTkxNjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Y_VSCPM9F3L00v5XBGcT6JwpnLoAfrKUw8ufOpOhH5AhuFcXctRwAWvZDzGPOH3mH0GaIiyi1G7GOlZRRnJJVy5C7I3VKnE4mZzMScBvKCEheU40Y6x28CvkZDTmuaaDgSWrm3cfjAvTEJIg45TrtaN25at79GB27_A1LJ3JQUHY59OpG6YUbnFWjW899bCUN99lmYTMGe9M5cjY2RCufuyEam296QEz6b23tyEHdMCcPDJJH6IEDf2I4XhA5e5GWqfdkX1qX5XZ21MRyXXXTSVYqeLvvdNvQS3MxLlNaB5my0WcruRihydkC_n1UamgzXBu-XWfM4QWwk3gzsQ9yg
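
Instead of pasting the token into the copied file by hand in vim, it can also be appended with kubectl config; a sketch, assuming the same admin-user token as in 13.2.1:

#Write the admin-user token into the kubernetes-admin user entry of the copied kubeconfig
TOKEN=$(kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials kubernetes-admin --token="${TOKEN}" --kubeconfig=kubeconfig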

14. Required configuration changes

Change kube-proxy to ipvs mode. Because the ipvs setting was commented out when the cluster was initialized, it has to be changed manually:

Run the following on the master01 node:

[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
iptables

[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"
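
If you prefer not to open an editor, the same change can be made non-interactively; a sketch, assuming the kubeadm default of an empty mode field (mode: "") in the ConfigMap:

#Non-interactive alternative to kubectl edit
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -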

Update the kube-proxy Pods:

[root@k8s-master01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
daemonset.apps/kube-proxy patched
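
Optionally, wait for the DaemonSet to finish rolling out before checking the mode:

#Wait until every kube-proxy Pod has been recreated
kubectl -n kube-system rollout status daemonset kube-proxy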

Verify the kube-proxy mode:

[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs

[root@k8s-master01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30005 rr
  -> 192.169.111.141:8443         Masq    1      0          0
TCP  172.31.3.101:30005 rr
  -> 192.169.111.141:8443         Masq    1      0          0
TCP  192.162.55.64:30005 rr
  -> 192.169.111.141:8443         Masq    1      0          0
TCP  10.96.0.1:443 rr
  -> 172.31.3.101:6443            Masq    1      0          0
  -> 172.31.3.102:6443            Masq    1      0          0
  -> 172.31.3.103:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 192.170.21.193:53            Masq    1      0          0
  -> 192.170.21.194:53            Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 192.170.21.193:9153          Masq    1      0          0
  -> 192.170.21.194:9153          Masq    1      0          0
TCP  10.99.167.144:443 rr
  -> 192.167.195.132:4443         Masq    1      0          0
TCP  10.101.88.7:8000 rr
  -> 192.169.111.140:8000         Masq    1      0          0
TCP  10.106.189.113:443 rr
  -> 192.169.111.141:8443         Masq    1      0          0
TCP  127.0.0.1:30005 rr
  -> 192.169.111.141:8443         Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 192.170.21.193:53            Masq    1      0          0
  -> 192.170.21.194:53            Masq    1      0          0

15. Notes

Note: in a cluster installed with kubeadm, the certificates are valid for one year by default. On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager and etcd all run as containers, which can be seen with kubectl get po -n kube-system.

Unlike a binary installation:

kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; after modifying them you need to restart the kubelet process.

[root@k8s-master01 ~]# ls /etc/sysconfig/kubelet
/etc/sysconfig/kubelet
[root@k8s-master01 ~]# ls /var/lib/kubelet/config.yaml
/var/lib/kubelet/config.yaml
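
For example, after editing either file, reload systemd and restart kubelet so the change takes effect:

#Restart kubelet to pick up the new configuration
systemctl daemon-reload
systemctl restart kubelet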

The other components' configuration files are the static Pod manifests under /etc/kubernetes/manifests, for example kube-apiserver.yaml. When such a yaml file is changed, kubelet automatically reloads the configuration, i.e. it restarts the Pod. Do not create the file a second time.

[root@k8s-master01 ~]# ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
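
The directory kubelet watches for these static Pod manifests is set by staticPodPath in its configuration; a quick, optional way to confirm it:

#Show the static Pod manifest directory kubelet is watching
grep staticPodPath /var/lib/kubelet/config.yaml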
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                                 READY   STATUS    RESTARTS   AGE   IP                NODE                         NOMINATED NODE   READINESS GATES
calico-kube-controllers-55bfb655fc-bgdlr             1/1     Running   1          16h   192.167.195.130   k8s-node02.example.local     <none>           <none>
calico-node-2bggs                                    1/1     Running   1          16h   172.31.3.102      k8s-master02.example.local   <none>           <none>
calico-node-2rgfb                                    1/1     Running   1          16h   172.31.3.101      k8s-master01.example.local   <none>           <none>
calico-node-449ws                                    1/1     Running   1          16h   172.31.3.110      k8s-node03.example.local     <none>           <none>
calico-node-4p9t5                                    1/1     Running   1          16h   172.31.3.103      k8s-master03.example.local   <none>           <none>
calico-node-bljzq                                    1/1     Running   1          16h   172.31.3.108      k8s-node01.example.local     <none>           <none>
calico-node-cbv29                                    1/1     Running   1          16h   172.31.3.109      k8s-node02.example.local     <none>           <none>
coredns-5ffd5c4586-rvsm4                             1/1     Running   1          18h   192.170.21.194    k8s-node03.example.local     <none>           <none>
coredns-5ffd5c4586-xzrwx                             1/1     Running   1          18h   192.170.21.193    k8s-node03.example.local     <none>           <none>
etcd-k8s-master01.example.local                      1/1     Running   1          18h   172.31.3.101      k8s-master01.example.local   <none>           <none>
etcd-k8s-master02.example.local                      1/1     Running   1          18h   172.31.3.102      k8s-master02.example.local   <none>           <none>
etcd-k8s-master03.example.local                      1/1     Running   1          18h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-apiserver-k8s-master01.example.local            1/1     Running   1          18h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-apiserver-k8s-master02.example.local            1/1     Running   1          18h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-apiserver-k8s-master03.example.local            1/1     Running   1          18h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-controller-manager-k8s-master01.example.local   1/1     Running   2          18h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-controller-manager-k8s-master02.example.local   1/1     Running   1          18h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-controller-manager-k8s-master03.example.local   1/1     Running   1          18h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-proxy-6k8vv                                     1/1     Running   0          88s   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-proxy-flt2l                                     1/1     Running   0          75s   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-proxy-ftqqm                                     1/1     Running   0          42s   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-proxy-m9h72                                     1/1     Running   0          96s   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-proxy-mjssk                                     1/1     Running   0          54s   172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-proxy-zz2sl                                     1/1     Running   0          61s   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-scheduler-k8s-master01.example.local            1/1     Running   2          18h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-scheduler-k8s-master02.example.local            1/1     Running   1          18h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-scheduler-k8s-master03.example.local            1/1     Running   1          18h   172.31.3.103      k8s-master03.example.local   <none>           <none>
metrics-server-5b7c76b46c-2tkz6                      1/1     Running   0          92m   192.167.195.132   k8s-node02.example.local     <none>           <none>

[root@k8s-master01 ~]# kubectl get pod -A -o  wide
NAMESPACE              NAME                                                 READY   STATUS    RESTARTS   AGE     IP                NODE                         NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-55bfb655fc-bgdlr             1/1     Running   1          16h     192.167.195.130   k8s-node02.example.local     <none>           <none>
kube-system            calico-node-2bggs                                    1/1     Running   1          16h     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            calico-node-2rgfb                                    1/1     Running   1          16h     172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            calico-node-449ws                                    1/1     Running   1          16h     172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            calico-node-4p9t5                                    1/1     Running   1          16h     172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            calico-node-bljzq                                    1/1     Running   1          16h     172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            calico-node-cbv29                                    1/1     Running   1          16h     172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            coredns-5ffd5c4586-rvsm4                             1/1     Running   1          18h     192.170.21.194    k8s-node03.example.local     <none>           <none>
kube-system            coredns-5ffd5c4586-xzrwx                             1/1     Running   1          18h     192.170.21.193    k8s-node03.example.local     <none>           <none>
kube-system            etcd-k8s-master01.example.local                      1/1     Running   1          18h     172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            etcd-k8s-master02.example.local                      1/1     Running   1          18h     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            etcd-k8s-master03.example.local                      1/1     Running   1          18h     172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master01.example.local            1/1     Running   1          18h     172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master02.example.local            1/1     Running   1          18h     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master03.example.local            1/1     Running   1          18h     172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master01.example.local   1/1     Running   2          18h     172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master02.example.local   1/1     Running   1          18h     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master03.example.local   1/1     Running   1          18h     172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-proxy-6k8vv                                     1/1     Running   0          2m12s   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            kube-proxy-flt2l                                     1/1     Running   0          119s    172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-proxy-ftqqm                                     1/1     Running   0          86s     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-proxy-m9h72                                     1/1     Running   0          2m20s   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            kube-proxy-mjssk                                     1/1     Running   0          98s     172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            kube-proxy-zz2sl                                     1/1     Running   0          105s    172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master01.example.local            1/1     Running   2          18h     172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master02.example.local            1/1     Running   1          18h     172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master03.example.local            1/1     Running   1          18h     172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            metrics-server-5b7c76b46c-2tkz6                      1/1     Running   0          93m     192.167.195.132   k8s-node02.example.local     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-575d79bd97-l25f4           1/1     Running   0          21m     192.169.111.140   k8s-node01.example.local     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-68965ddf9f-c5f7g                1/1     Running   0          21m     192.169.111.141   k8s-node01.example.local     <none>           <none>

After installation with kubeadm, the master nodes do not allow Pods to be scheduled on them by default. This can be opened up as follows:

View the Taints:

[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

Remove the Taint:

[root@k8s-master01 ~]# kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted

[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
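
If the default behaviour is needed again later, the taint can be re-added; a sketch:

#Re-add the NoSchedule taint on all master nodes
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master=:NoSchedule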

The kube-proxy configuration is stored in a ConfigMap in the kube-system namespace and can be changed with

kubectl edit cm kube-proxy -n kube-system

Once the change is made, kube-proxy can be restarted by patching its DaemonSet:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

16. Cluster verification

#View the Pods in all namespaces of the cluster
[root@k8s-master01 ~]# kubectl get pod --all-namespaces
NAMESPACE              NAME                                                 READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h
kube-system            calico-node-4dbgm                                    1/1     Running   1          3d22h
kube-system            calico-node-4vqrz                                    1/1     Running   1          3d22h
kube-system            calico-node-6fgtr                                    1/1     Running   2          3d22h
kube-system            calico-node-c5g75                                    1/1     Running   1          3d22h
kube-system            calico-node-vnqgf                                    1/1     Running   1          3d22h
kube-system            calico-node-x8hcl                                    1/1     Running   1          3d22h
kube-system            coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h
kube-system            coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h
kube-system            etcd-k8s-master01.example.local                      1/1     Running   1          3d22h
kube-system            etcd-k8s-master02.example.local                      1/1     Running   1          3d22h
kube-system            etcd-k8s-master03.example.local                      1/1     Running   1          3d22h
kube-system            kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h
kube-system            kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h
kube-system            kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h
kube-system            kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h
kube-system            kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h
kube-system            kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h
kube-system            kube-proxy-7xnlg                                     1/1     Running   1          3d22h
kube-system            kube-proxy-pq2jk                                     1/1     Running   1          3d22h
kube-system            kube-proxy-sbsfn                                     1/1     Running   1          3d22h
kube-system            kube-proxy-vkqc8                                     1/1     Running   1          3d22h
kube-system            kube-proxy-xp24c                                     1/1     Running   1          3d22h
kube-system            kube-proxy-zk5n4                                     1/1     Running   1          3d22h
kube-system            kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h
kube-system            kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h
kube-system            kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h
kube-system            metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h
kubernetes-dashboard   dashboard-metrics-scraper-6d5db67fb7-fscb9           1/1     Running   1          3d22h
kubernetes-dashboard   kubernetes-dashboard-6b5967f475-g9qmj                1/1     Running   2          3d22h

#View CPU and memory usage of all Pods
[root@k8s-master01 ~]# kubectl top pod -n kube-system
NAME                                                 CPU(cores)   MEMORY(bytes)
calico-kube-controllers-6fdd497b59-xtldf             2m           18Mi
calico-node-4dbgm                                    18m          105Mi
calico-node-4vqrz                                    22m          102Mi
calico-node-6fgtr                                    18m          66Mi
calico-node-c5g75                                    18m          103Mi
calico-node-vnqgf                                    16m          110Mi
calico-node-x8hcl                                    20m          105Mi
coredns-5ffd5c4586-nwqgd                             2m           13Mi
coredns-5ffd5c4586-z8rs8                             2m           13Mi
etcd-k8s-master01.example.local                      25m          78Mi
etcd-k8s-master02.example.local                      31m          80Mi
etcd-k8s-master03.example.local                      24m          78Mi
kube-apiserver-k8s-master01.example.local            32m          229Mi
kube-apiserver-k8s-master02.example.local            33m          239Mi
kube-apiserver-k8s-master03.example.local            37m          242Mi
kube-controller-manager-k8s-master01.example.local   1m           21Mi
kube-controller-manager-k8s-master02.example.local   13m          54Mi
kube-controller-manager-k8s-master03.example.local   1m           22Mi
kube-proxy-7xnlg                                     1m           24Mi
kube-proxy-pq2jk                                     1m           21Mi
kube-proxy-sbsfn                                     4m           23Mi
kube-proxy-vkqc8                                     1m           24Mi
kube-proxy-xp24c                                     1m           24Mi
kube-proxy-zk5n4                                     1m           22Mi
kube-scheduler-k8s-master01.example.local            2m           20Mi
kube-scheduler-k8s-master02.example.local            2m           18Mi
kube-scheduler-k8s-master03.example.local            2m           21Mi
metrics-server-dd9ddfbb-blwb8                        3m           18Mi

#View the Services
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d22h
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP,9153/TCP   3d22h
metrics-server   ClusterIP   10.104.2.18   <none>        443/TCP                  3d22h

[root@k8s-master01 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
quit
Connection closed by foreign host.

[root@k8s-master01 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
Connection closed by foreign host.

[root@k8s-master01 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE              NAME                                                 READY   STATUS    RESTARTS   AGE     IP                NODE                         NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            calico-node-4dbgm                                    1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            calico-node-4vqrz                                    1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            calico-node-6fgtr                                    1/1     Running   2          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            calico-node-c5g75                                    1/1     Running   1          3d22h   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            calico-node-vnqgf                                    1/1     Running   1          3d22h   172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            calico-node-x8hcl                                    1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h   192.169.111.131   k8s-node01.example.local     <none>           <none>
kube-system            coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h   192.167.195.132   k8s-node02.example.local     <none>           <none>
kube-system            etcd-k8s-master01.example.local                      1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            etcd-k8s-master02.example.local                      1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            etcd-k8s-master03.example.local                      1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-proxy-7xnlg                                     1/1     Running   1          3d22h   172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            kube-proxy-pq2jk                                     1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-proxy-sbsfn                                     1/1     Running   1          3d22h   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            kube-proxy-vkqc8                                     1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-proxy-xp24c                                     1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-proxy-zk5n4                                     1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h   192.170.21.194    k8s-node03.example.local     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6d5db67fb7-fscb9           1/1     Running   1          3d22h   192.167.195.131   k8s-node02.example.local     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-6b5967f475-g9qmj                1/1     Running   2          3d22h   192.169.111.132   k8s-node01.example.local     <none>           <none>

[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6d5db67fb7-fscb9   1/1     Running   1          3d22h
kubernetes-dashboard-6b5967f475-g9qmj        1/1     Running   2          3d22h

[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.99.15.144     <none>        8000/TCP        3d22h
kubernetes-dashboard        NodePort    10.106.152.125   <none>        443:30005/TCP   3d22h

Create containers in k8s and test the internal network:

[root@k8s-master01 ~]# kubectl run net-test1 --image=alpine sleep 500000
pod/net-test1 created
[root@k8s-master01 ~]# kubectl run net-test2 --image=alpine sleep 500000
pod/net-test2 created

[root@k8s-master01 ~]# kubectl get pod  -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP                NODE                       NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          30s   192.169.111.142   k8s-node01.example.local   <none>           <none>
net-test2   1/1     Running   0          25s   192.167.195.133   k8s-node02.example.local   <none>           <none>

[root@k8s-master01 ~]# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping -c4 192.167.195.133
PING 192.167.195.133 (192.167.195.133): 56 data bytes
64 bytes from 192.167.195.133: seq=0 ttl=62 time=0.491 ms
64 bytes from 192.167.195.133: seq=1 ttl=62 time=0.677 ms
64 bytes from 192.167.195.133: seq=2 ttl=62 time=0.408 ms
64 bytes from 192.167.195.133: seq=3 ttl=62 time=0.443 ms

--- 192.167.195.133 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.408/0.504/0.677 ms
/ # exit
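
Service DNS can be checked from the same test Pod as well; a sketch, assuming the CoreDNS Service IP 10.96.0.10 shown above and the example.local cluster domain used in this guide (use cluster.local if your cluster keeps the default):

#Resolve the kubernetes Service through CoreDNS from inside net-test1
kubectl exec -it net-test1 -- nslookup kubernetes.default.svc.example.local 10.96.0.10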

17. kubectl commands in detail

https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

[root@k8s-master01 ~]# kubectl get node
NAME                         STATUS   ROLES                  AGE     VERSION
k8s-master01.example.local   Ready    control-plane,master   3d22h   v1.20.14
k8s-master02.example.local   Ready    control-plane,master   3d22h   v1.20.14
k8s-master03.example.local   Ready    control-plane,master   3d22h   v1.20.14
k8s-node01.example.local     Ready    <none>                 3d22h   v1.20.14
k8s-node02.example.local     Ready    <none>                 3d22h   v1.20.14
k8s-node03.example.local     Ready    <none>                 3d22h   v1.20.14

[root@k8s-master01 ~]# kubectl get node -owide
NAME                         STATUS   ROLES                  AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.101   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-master02.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.102   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-master03.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.103   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node01.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.108   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node02.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.109   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node03.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.110   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15

17.1 kubectl autocompletion

#CentOS
[root@k8s-master01 ~]# yum -y install bash-completion

#Ubuntu
[root@k8s-master01 ~]# apt -y install bash-completion

[root@k8s-master01 ~]# source <(kubectl completion bash) # set up autocomplete in the current bash shell; the bash-completion package must be installed first
[root@k8s-master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell

[root@k8s-master01 ~]# kubectl get
apiservices.apiregistration.k8s.io                            namespaces
certificatesigningrequests.certificates.k8s.io                networkpolicies.networking.k8s.io
clusterrolebindings.rbac.authorization.k8s.io                 nodes
clusterroles.rbac.authorization.k8s.io                        nodes.metrics.k8s.io
componentstatuses                                             persistentvolumeclaims
configmaps                                                    persistentvolumes
controllerrevisions.apps                                      poddisruptionbudgets.policy
cronjobs.batch                                                pods
csidrivers.storage.k8s.io                                     podsecuritypolicies.policy
csinodes.storage.k8s.io                                       pods.metrics.k8s.io
customresourcedefinitions.apiextensions.k8s.io                podtemplates
daemonsets.apps                                               priorityclasses.scheduling.k8s.io
deployments.apps                                              prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
endpoints                                                     replicasets.apps
endpointslices.discovery.k8s.io                               replicationcontrollers
events                                                        resourcequotas
events.events.k8s.io                                          rolebindings.rbac.authorization.k8s.io
flowschemas.flowcontrol.apiserver.k8s.io                      roles.rbac.authorization.k8s.io
horizontalpodautoscalers.autoscaling                          runtimeclasses.node.k8s.io
ingressclasses.networking.k8s.io                              secrets
ingresses.extensions                                          serviceaccounts
ingresses.networking.k8s.io                                   services
jobs.batch                                                    statefulsets.apps
leases.coordination.k8s.io                                    storageclasses.storage.k8s.io
limitranges                                                   validatingwebhookconfigurations.admissionregistration.k8s.io
mutatingwebhookconfigurations.admissionregistration.k8s.io    volumeattachments.storage.k8s.io
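
The cheat sheet linked above also suggests pairing completion with a short alias; a sketch:

#Use "k" as an alias for kubectl and extend completion to the alias
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc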

17.2 kubectl contexts and configuration

Set which Kubernetes cluster kubectl communicates with and modify the configuration information. See the documentation on using kubeconfig files for cross-cluster authorized access for details about the configuration file.

[root@k8s-master01 ~]# ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5565 Jan  4 15:33 /etc/kubernetes/admin.conf

[root@k8s-master01 ~]# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ERXlNVEV3TlRjek1Gb1hEVE15TURFeE9URXdOVGN6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkZTCmU4Nk1MNS9kRHFzMlVQdmVmWmNxbzB3Y0NsR2t0VDVJMUMyV1lZWVpjTE9tRE0xLzlIblhxQ3FENm4zSUpqQWQKNHVvcUI2amYrcyswRHozb1FGc21OeG9ybnlyeG5pWHk0R2ZQczI1WnNYZC9HelhnNGZ1OFBnSHgvbWl1M1RHZApoUFRWVFBTUStrcHhtdU5jTzhFRklQc1V2aXRpempmVWNUaDV2UVRWc0JWU1ZLSGFOK0x0bm02VTY4SGhtaThGCk9GdVB6dUQ0RWlnWjVBcWtaRWRxaUR1Q2VGSnBvaHBCbGNsalpHUXh2b3JWS3BQQVBEZVhESU5PWmkySnVia00KR2tydDhqZTNCMEtFSzFFZVlWTnVQTjA4ek8yc1NhUVdUcUdnMURTQ2FUV1BmRy92cnR5QkV6VmtoQ3JyWXZNWQpDOHYxa01HYU5JQlJoekJHU05FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMc0tlREtrSVVyT2tHRUlVeDhXSGwxUFZnTlRNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYjBxUWZ6aVhFWXJMS3phRUI2UlhlL1FmcllsdytTZDZKeHRET2lScm9XT1U4L0tvSAoxbFdQL3U0cEQ0dnVJUGFZQ211d01McnpNbHM3MHFrV3J0K0R6ZFR2UUxSUEp0SjlDMjVuRFNlT09TUXUxVVpOCmMzTTlHSEVqb2I0OEdDWEkzdHF0eUo4cTZsNnRwL29mVlBBR0NEcFlrUGlKZE5OVWpBMWNUdktlK0wzd2c0SXEKVUNqZ0grZ0ZqcVJMOVlLNzhjRmJTRmlDMk0yNkZUbHRKbXZNUDcyeXM0Uk55eEZGb3RzQTl4eHE4aXV4U1FGLwpNNFFGdTV2OGZJaHh4L09WZ2xicTU3ZWdzUVp1UFlkQ2c4THgrV2psZmdFS1VWNVFBWFBBQktpTmROb0JzV0w3CmsvSUpoTEQ2RCsrZ1k5NXpYdHVwclNzSmlxdEtXSU1DWjdxegotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==server: https://172.31.3.188:6443name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJT1RyTmNmcElBTVV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeE1qRXhNRFUzTXpCYUZ3MHlNekF4TWpFeE1EVTNNekphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW9QSGJpRXdtWU9tbkZveGIKaWRiVUF5a2lQTjdrSi9OZzh0RVV6eEhTT0d0eFhKQklMZnpBN3hFUHBTbC83aUpjUEJzekVTTERIbko5WlRMMwpmMzhYS2tJUnlJM0FSWG4yc1grOWtqZDVtYlVxbzVHd2tVaHkvWmxCTW84TE1ld2tUczNoSW1lTVFPUGxtcndYClJhVy93VWZkK21OWXRMMi9DU211Wk1MclF0VndodzBIRlVHSkx4ampZK3B4L3A1Zkg2b2hQTGxxTktYYm5QaXEKQ3plNTZKMENIY0tSR1hIYnFyd1J1R2E2U1FjSmRKRXF3YllOcTBOUTc5aXI0QXB2QWRNcGJLV05ENEdYR1BvZgpabENCVVdRTkVLNndRcjg4Znl0R2owRFBFM2V4ZzRDRTY2bEsxZjhYV3plSWZ2bmhRRERBYjhMMmZmRzk0Tzk3CjhDcGFFd0lEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVV1d3A0TXFRaFNzNlFZUWhUSHhZZVhVOVdBMU13RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFFdkY1aW11dWVXMkdnekxMZGUyUzR3UmMyYTFZZWJOMWRvYzY3R1Y5eDc2dnQwK1hIMGV1aTRwCkd1YURNUGZIbk8vWUQvYlYrVFcrUUxMRGNBWWt2SHpFd2N1a0lHaU1jVVcrMHRILzNtZk40a0tSczZiaTZ6RnAKZ1lLTm5xbWhFVjhvanBxU0toa3F2QWl4eDhxRnVKREdIdGFrdXVjZ3I3SlE2MDZraDJwRVo0bHd6Z0JRQitLTgpJelU1RzZWZ29ERUVqNzFYcE1ZYmw3ODkyaDB5RGhBUWdPUXZ6RTlseWJYMXh6TVJFV0dDZjBucEx4MFQ3SENICkF6WDd2RzF6Z2JobFg5UnR2anZjak4ycTZJTTFhcmZvNVFCNXF4V1RJb2MrRWYrblh2cmo3U3RvaENVUWRhWWgKUlFsVldNRGZIenJ5ZHhFN1NkSTN2d050VW0rclZrdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb1BIYmlFd21ZT21uRm94YmlkYlVBeWtpUE43a0ovTmc4dEVVenhIU09HdHhYSkJJCkxmekE3eEVQcFNsLzdpSmNQQnN6RVNMREhuSjlaVEwzZjM4WEtrSVJ5STNBUlhuMnNYKzlramQ1bWJVcW81R3cKa1VoeS9abEJNbzhMTWV3a1RzM2hJbWVNUU9QbG1yd1hSYVcvd1VmZCttTll0TDIvQ1NtdVpNTHJRdFZ3aHcwSApGVUdKTHhqalkrcHgvcDVmSDZvaFBMbHFOS1hiblBpcUN6ZTU2SjBDSGNLUkdYSGJxcndSdUdhNlNRY0pkSkVxCndiWU5xME5RNzlpcjRBcHZBZE1wYktXTkQ0R1hHUG9mWmxDQlVXUU5FSzZ3UXI4OGZ5dEdqMERQRTNleGc0Q0UKNjZsSzFmOFhXemVJZnZuaFFEREFiOEwyZmZHOTRPOTc4Q3BhRXdJREFRQUJBb0lCQUJ1V21jMFpVSkxZT240UQovVGY1alVvbGFPc0tRZzNmR0VWSE5jdnhBQm9Qd05UZkhxQlRiVGNOczZMYUpFWEx1Z2ZMbWN0Y0xCb3lBZkN3CjlkL3pCeU9GUThzZkVWQlhnY1FYWTRXRzROOUtRTTdkRUdrM0JBOFlrQ1o4Z3F6Q0Q4ODZWMWN3Yk1oS3lIYm4KdXcyRFJnVjVya0ZYZWtNeEsyZ0VyeEl5bjJWTEwydGlLQ3JVS2pNa2Z5eDhoNXRSalNkUnRrMnpXTCs1Z0FvWQpPVk9IcE1LRnp0VS9ERS9YaFh5ejNiUkp6eWY4aE81MlQ4YmFCSzJkcXMxNkJlTDlONFczYkJvVExZcVo4M2IrCnovYzdDajlqU252VjU4Z0Nkalp5NHp5cyt0ajFJVVduMGZlcWRDZGxjclV2WDIrVWQxNG1HWU4vY3ZlWlRDdTUKWjlqOStwRUNnWUVBMEpMOUZBRU83SCtwcDNOczV0MDBRczl1enB5WWRJOE5RMGdpSGNxcEJtelBieWlxMmlGZgpQR1FSbVFlZXh3NlY1dGtjb3YvdmdDQTlDa1Yxa2NjK2ZUbnN4Tnc3Q0pJN3UwTmlzc3ppWGJDZ29ITndOMU9kCmNCZlFzTGpKTEZ5MTdPYk1pYnQ4aUNnbDhiSCtHVFM1TGJzcjB5eUQ5SUdFV2lPMGY4WFc5czhDZ1lFQXhZcGgKb2IrODZWd2JHQURRTjFIaGdqODJVV2lJdWNnQURzcmZyUjQzZGxFaGlpTWdoVnkxVEViMFArWkFzWU13UW56dgp3eDJwVGNwcHF1Z1QveU52citjQnZoWXlwbkxOVHhNRmc0WG1tN1gzZVpQa1lLVGlIN0llYnFjRWtUYlRacEl5CkdUQlkrU1ZQMHhBaUNrV20zR01mZGE0OGlUdEZQaEgzT1M2MGVYMENnWUFGOFJvS2x0a2cvYVlNb2lvcERZWXUKblJBd0RKLy9PaEFMcWFObks5M1MxQWk0eHZUUEVBSlJpeHhCT3NsWUxGOHkyMTZJZWpnTmMxMnB6RDdFTDJQbApWMkFhWDVmQzc3K0ozeXFSbzJxVGRyT3N2bjBrNWxubTFwYllZZnRCSzBiM2Y3KzE4TVJrY0poY0lWRDIwTnl4Cm85Smt5ckRicDFEbzdIbDQ1bDd3V3dLQmdRQ2tuZHNhaGNRUjIvV2dIUjFtM0U5RzBSS2M2TFgzeTlsd2VsUEgKMm9SeGpzNmFaUWQyMjNraDVZY3BzT0Y4akV5dE81dzZSditObWY1UXRESGx6a3dHbEVWNWVOb2dwMDY4ZEtlRgpvUkk1OUh3VXpzL2tVY00ya3FLVnA0MUF6aVdCTnBlVk1oc1RGS3Jld25UN2htdTFBTTE0cmdnNGZESUp0Y01GCjNndjdxUUtCZ0FkVTBsNjBOU2VobEQ2
MGlSVDM2ZjFuQ3RvNUx0N0FRcDV3MUR2WlZJVFVEa2Q2dGxacTNWNHIKQ3l2aUw5aFg5S0NudHhJNXVOL1R4MHR3Ymt5TUt0bjlzdUxNem1YZFFPU1JIY2NOOURuSzYzaWVYYUVVbjIvVgpFSm9wYlMzMU1LMG15eG4xT3FiWkgvM3ZxMjVXTGphdHJZUlZqcUlYRmh5MFpBZFE2bDgvCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

[root@k8s-master01 ~]# kubectl config view # show merged kubeconfig settings
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.31.3.188:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

[root@k8s-master01 ~]# kubectl config use-context hk8s # set the default context to hk8s
error: no context exists with the name: "hk8s"
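
A few other context commands from the cheat sheet that are safe to run here:

kubectl config get-contexts          # display the list of contexts
kubectl config current-context       # display the current context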

17.3 kubectl create|apply

[root@k8s-master01 ~]# kubectl create -f recommended.yaml
Error from server (AlreadyExists): error when creating "recommended.yaml": namespaces "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (Invalid): error when creating "recommended.yaml": Service "kubernetes-dashboard" is invalid: spec.ports[0].nodePort: Invalid value: 30005: provided port is already allocated
Error from server (AlreadyExists): error when creating "recommended.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": secrets "kubernetes-dashboard-csrf" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": secrets "kubernetes-dashboard-key-holder" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": configmaps "kubernetes-dashboard-settings" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": clusterroles.rbac.authorization.k8s.io "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": clusterrolebindings.rbac.authorization.k8s.io "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": deployments.apps "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": services "dashboard-metrics-scraper" already exists
Error from server (AlreadyExists): error when creating "recommended.yaml": deployments.apps "dashboard-metrics-scraper" already exists
#create: if the resources already exist, it reports that they already exist

[root@k8s-master01 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
#apply,如果创建的实例已存在,不会提示
#如果yaml文件已修改,create 不会立即生效,apply会立即生效,建议使用apply[root@k8s-master01 ~]# kubectl apply -f admin.yaml,recommended.yaml
# When separating files with ",", the filename after the comma does not get shell tab-completion
[root@k8s-master01 ~]# kubectl apply -f admin.yaml -f recommended.yaml
# When a separate "-f" is used for each file, the following filename does get tab-completion
[root@k8s-master01 dashboard]# kubectl create deployment nginx --image=nginx # Start a single-instance nginx Deployment
deployment.apps/nginx created
[root@k8s-master01 dashboard]# kubectl get deploy nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           112s
[root@k8s-master01 dashboard]# kubectl get deploy nginx -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-01-04T12:59:55Z"
  generation: 1
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2022-01-04T12:59:55Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-01-04T13:01:11Z"
  name: nginx
  namespace: default
  resourceVersion: "35672"
  uid: f4564a11-4c56-4ec8-8a5a-9609042d916c
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-01-04T13:01:11Z"
    lastUpdateTime: "2022-01-04T13:01:11Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-04T12:59:55Z"
    lastUpdateTime: "2022-01-04T13:01:11Z"
    message: ReplicaSet "nginx-6799fc88d8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
[root@k8s-master01 dashboard]# kubectl create deployment nginx --image=nginx --dry-run -oyaml
W0104 21:03:42.125588   68384 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
[root@k8s-master01 dashboard]# kubectl create deployment nginx2 --image=nginx --dry-run=client -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx2
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
# Generate a YAML file
[root@k8s-master01 dashboard]# kubectl create deployment nginx2 --image=nginx --dry-run=client -oyaml > nginx2-dp.yaml
[root@k8s-master01 dashboard]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           5m46s
# nginx2 is not actually created; --dry-run=client only renders the manifest
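To actually create the Deployment from the generated manifest, the file can then be applied; a short sketch, assuming the nginx2-dp.yaml produced above:

kubectl apply -f nginx2-dp.yaml           # Create the nginx2 Deployment from the generated YAML
kubectl get deploy nginx2                 # Verify that the Deployment now exists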

17.4 kubectl delete

[root@k8s-master01 dashboard]# kubectl delete deploy nginx
deployment.apps "nginx" deleted
[root@k8s-master01 ~]# kubectl delete -f admin.yaml
serviceaccount "admin-user" deleted
clusterrolebinding.rbac.authorization.k8s.io "admin-user" deleted
[root@k8s-master01 dashboard]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7645f69d8c-hmwc6   1/1     Running   0          3h9m
kubernetes-dashboard-78cb679857-j4kgg        1/1     Running   0          3h9m
[root@k8s-master01 dashboard]# kubectl delete pod kubernetes-dashboard-78cb679857-j4kgg -n kubernetes-dashboard
pod "kubernetes-dashboard-78cb679857-j4kgg" deleted
kubectl delete -f ./pod.json                                              # Delete the Pod of the type and name specified in pod.json
kubectl delete pod,service baz foo                                        # Delete the Pods and Services named "baz" and "foo"
kubectl delete pods,services -l name=myLabel                              # Delete Pods and Services carrying the label name=myLabel
kubectl -n my-ns delete pod,svc --all                                     # Delete all Pods and Services in the my-ns namespace
# Delete all Pods whose names match the awk patterns pattern1 or pattern2
kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs  kubectl delete -n mynamespace pod
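For a Pod that is stuck in Terminating, a forced deletion can be used; this skips the graceful shutdown, so the sketch below (with placeholder names) is only for emergencies:

kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force     # Force-delete a stuck Pod; use with care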

17.5 Viewing and Finding Resources

[root@k8s-master01 ~]# kubectl get services # List all Services in the current namespace
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5h38m
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5h39m
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP,9153/TCP   3d22h
metrics-server   ClusterIP   10.104.2.18   <none>        443/TCP                  3d22h
[root@k8s-master01 ~]# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  3d22h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   3d22h
kube-system            metrics-server              ClusterIP   10.104.2.18      <none>        443/TCP                  3d22h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.99.15.144     <none>        8000/TCP                 3d22h
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.106.152.125   <none>        443:30005/TCP            3d22h
[root@k8s-master01 ~]# kubectl get pod -A -o wide # List the Pods in all namespaces, with more detail
NAMESPACE              NAME                                                 READY   STATUS    RESTARTS   AGE     IP                NODE                         NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            calico-node-4dbgm                                    1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            calico-node-4vqrz                                    1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            calico-node-6fgtr                                    1/1     Running   2          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            calico-node-c5g75                                    1/1     Running   1          3d22h   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            calico-node-vnqgf                                    1/1     Running   1          3d22h   172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            calico-node-x8hcl                                    1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h   192.169.111.131   k8s-node01.example.local     <none>           <none>
kube-system            coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h   192.167.195.132   k8s-node02.example.local     <none>           <none>
kube-system            etcd-k8s-master01.example.local                      1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            etcd-k8s-master02.example.local                      1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            etcd-k8s-master03.example.local                      1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-proxy-7xnlg                                     1/1     Running   1          3d22h   172.31.3.109      k8s-node02.example.local     <none>           <none>
kube-system            kube-proxy-pq2jk                                     1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-proxy-sbsfn                                     1/1     Running   1          3d22h   172.31.3.108      k8s-node01.example.local     <none>           <none>
kube-system            kube-proxy-vkqc8                                     1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            kube-proxy-xp24c                                     1/1     Running   1          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-proxy-zk5n4                                     1/1     Running   1          3d22h   172.31.3.110      k8s-node03.example.local     <none>           <none>
kube-system            kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h   172.31.3.101      k8s-master01.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h   172.31.3.102      k8s-master02.example.local   <none>           <none>
kube-system            kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h   172.31.3.103      k8s-master03.example.local   <none>           <none>
kube-system            metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h   192.170.21.194    k8s-node03.example.local     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6d5db67fb7-fscb9           1/1     Running   1          3d22h   192.167.195.131   k8s-node02.example.local     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-6b5967f475-g9qmj                1/1     Running   2          3d22h   192.169.111.132   k8s-node01.example.local     <none>           <none>
[root@k8s-master01 ~]# kubectl get node -o wide
NAME                         STATUS   ROLES                  AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.101   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-master02.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.102   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-master03.example.local   Ready    control-plane,master   3d22h   v1.20.14   172.31.3.103   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node01.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.108   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node02.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.109   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
k8s-node03.example.local     Ready    <none>                 3d22h   v1.20.14   172.31.3.110   <none>        CentOS Linux 7 (Core)   4.19.12-1.el7.elrepo.x86_64   docker://19.3.15
[root@k8s-master01 ~]# kubectl get clusterrole
NAME                                                                   CREATED AT
admin                                                                  2022-01-21T10:57:46Z
calico-kube-controllers                                                2022-01-21T11:03:47Z
calico-node                                                            2022-01-21T11:03:47Z
cluster-admin                                                          2022-01-21T10:57:46Z
edit                                                                   2022-01-21T10:57:46Z
kubeadm:get-nodes                                                      2022-01-21T10:58:02Z
kubernetes-dashboard                                                   2022-01-21T11:12:25Z
system:aggregate-to-admin                                              2022-01-21T10:57:46Z
system:aggregate-to-edit                                               2022-01-21T10:57:46Z
system:aggregate-to-view                                               2022-01-21T10:57:46Z
system:aggregated-metrics-reader                                       2022-01-21T11:04:57Z
system:auth-delegator                                                  2022-01-21T10:57:46Z
system:basic-user                                                      2022-01-21T10:57:46Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient       2022-01-21T10:57:46Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2022-01-21T10:57:46Z
system:certificates.k8s.io:kube-apiserver-client-approver              2022-01-21T10:57:46Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver      2022-01-21T10:57:46Z
system:certificates.k8s.io:kubelet-serving-approver                    2022-01-21T10:57:46Z
system:certificates.k8s.io:legacy-unknown-approver                     2022-01-21T10:57:46Z
system:controller:attachdetach-controller                              2022-01-21T10:57:46Z
system:controller:certificate-controller                               2022-01-21T10:57:46Z
system:controller:clusterrole-aggregation-controller                   2022-01-21T10:57:46Z
system:controller:cronjob-controller                                   2022-01-21T10:57:46Z
system:controller:daemon-set-controller                                2022-01-21T10:57:46Z
system:controller:deployment-controller                                2022-01-21T10:57:46Z
system:controller:disruption-controller                                2022-01-21T10:57:46Z
system:controller:endpoint-controller                                  2022-01-21T10:57:46Z
system:controller:endpointslice-controller                             2022-01-21T10:57:46Z
system:controller:endpointslicemirroring-controller                    2022-01-21T10:57:46Z
system:controller:expand-controller                                    2022-01-21T10:57:46Z
system:controller:generic-garbage-collector                            2022-01-21T10:57:46Z
system:controller:horizontal-pod-autoscaler                            2022-01-21T10:57:46Z
system:controller:job-controller                                       2022-01-21T10:57:46Z
system:controller:namespace-controller                                 2022-01-21T10:57:46Z
system:controller:node-controller                                      2022-01-21T10:57:46Z
system:controller:persistent-volume-binder                             2022-01-21T10:57:46Z
system:controller:pod-garbage-collector                                2022-01-21T10:57:46Z
system:controller:pv-protection-controller                             2022-01-21T10:57:46Z
system:controller:pvc-protection-controller                            2022-01-21T10:57:46Z
system:controller:replicaset-controller                                2022-01-21T10:57:46Z
system:controller:replication-controller                               2022-01-21T10:57:46Z
system:controller:resourcequota-controller                             2022-01-21T10:57:46Z
system:controller:root-ca-cert-publisher                               2022-01-21T10:57:46Z
system:controller:route-controller                                     2022-01-21T10:57:46Z
system:controller:service-account-controller                           2022-01-21T10:57:46Z
system:controller:service-controller                                   2022-01-21T10:57:46Z
system:controller:statefulset-controller                               2022-01-21T10:57:46Z
system:controller:ttl-controller                                       2022-01-21T10:57:46Z
system:coredns                                                         2022-01-21T10:58:02Z
system:discovery                                                       2022-01-21T10:57:46Z
system:heapster                                                        2022-01-21T10:57:46Z
system:kube-aggregator                                                 2022-01-21T10:57:46Z
system:kube-controller-manager                                         2022-01-21T10:57:46Z
system:kube-dns                                                        2022-01-21T10:57:46Z
system:kube-scheduler                                                  2022-01-21T10:57:46Z
system:kubelet-api-admin                                               2022-01-21T10:57:46Z
system:metrics-server                                                  2022-01-21T11:04:57Z
system:monitoring                                                      2022-01-21T10:57:46Z
system:node                                                            2022-01-21T10:57:46Z
system:node-bootstrapper                                               2022-01-21T10:57:46Z
system:node-problem-detector                                           2022-01-21T10:57:46Z
system:node-proxier                                                    2022-01-21T10:57:46Z
system:persistent-volume-provisioner                                   2022-01-21T10:57:46Z
system:public-info-viewer                                              2022-01-21T10:57:46Z
system:service-account-issuer-discovery                                2022-01-21T10:57:46Z
system:volume-scheduler                                                2022-01-21T10:57:46Z
view                                                                   2022-01-21T10:57:46Z
[root@k8s-master01 ~]# kubectl api-resources --namespaced=true # All namespace-scoped resources
NAME                        SHORTNAMES   APIVERSION                     NAMESPACED   KIND
bindings                                 v1                             true         Binding
configmaps                  cm           v1                             true         ConfigMap
endpoints                   ep           v1                             true         Endpoints
events                      ev           v1                             true         Event
limitranges                 limits       v1                             true         LimitRange
persistentvolumeclaims      pvc          v1                             true         PersistentVolumeClaim
pods                        po           v1                             true         Pod
podtemplates                             v1                             true         PodTemplate
replicationcontrollers      rc           v1                             true         ReplicationController
resourcequotas              quota        v1                             true         ResourceQuota
secrets                                  v1                             true         Secret
serviceaccounts             sa           v1                             true         ServiceAccount
services                    svc          v1                             true         Service
controllerrevisions                      apps/v1                        true         ControllerRevision
daemonsets                  ds           apps/v1                        true         DaemonSet
deployments                 deploy       apps/v1                        true         Deployment
replicasets                 rs           apps/v1                        true         ReplicaSet
statefulsets                sts          apps/v1                        true         StatefulSet
localsubjectaccessreviews                authorization.k8s.io/v1        true         LocalSubjectAccessReview
horizontalpodautoscalers    hpa          autoscaling/v1                 true         HorizontalPodAutoscaler
cronjobs                    cj           batch/v1beta1                  true         CronJob
jobs                                     batch/v1                       true         Job
leases                                   coordination.k8s.io/v1         true         Lease
endpointslices                           discovery.k8s.io/v1beta1       true         EndpointSlice
events                      ev           events.k8s.io/v1               true         Event
ingresses                   ing          extensions/v1beta1             true         Ingress
pods                                     metrics.k8s.io/v1beta1         true         PodMetrics
ingresses                   ing          networking.k8s.io/v1           true         Ingress
networkpolicies             netpol       networking.k8s.io/v1           true         NetworkPolicy
poddisruptionbudgets        pdb          policy/v1beta1                 true         PodDisruptionBudget
rolebindings                             rbac.authorization.k8s.io/v1   true         RoleBinding
roles                                    rbac.authorization.k8s.io/v1   true         Role
[root@k8s-master01 ~]# kubectl api-resources --namespaced=false # All cluster-scoped (non-namespaced) resources
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
componentstatuses                 cs           v1                                     false        ComponentStatus
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumes                 pv           v1                                     false        PersistentVolume
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta1   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta1   false        PriorityLevelConfiguration
nodes                                          metrics.k8s.io/v1beta1                 false        NodeMetrics
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
podsecuritypolicies               psp          policy/v1beta1                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
# List all Services in the current namespace, sorted by name
[root@k8s-master01 ~]# kubectl get services --sort-by=.metadata.name
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5h46m
# List Pods, sorted by restart count
[root@k8s-master01 ~]# kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h
calico-node-4dbgm                                    1/1     Running   1          3d22h
calico-node-4vqrz                                    1/1     Running   1          3d22h
kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h
calico-node-c5g75                                    1/1     Running   1          3d22h
calico-node-vnqgf                                    1/1     Running   1          3d22h
calico-node-x8hcl                                    1/1     Running   1          3d22h
coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h
coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h
etcd-k8s-master01.example.local                      1/1     Running   1          3d22h
etcd-k8s-master02.example.local                      1/1     Running   1          3d22h
etcd-k8s-master03.example.local                      1/1     Running   1          3d22h
kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h
kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h
kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h
kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h
kube-proxy-7xnlg                                     1/1     Running   1          3d22h
kube-proxy-pq2jk                                     1/1     Running   1          3d22h
kube-proxy-sbsfn                                     1/1     Running   1          3d22h
kube-proxy-vkqc8                                     1/1     Running   1          3d22h
kube-proxy-xp24c                                     1/1     Running   1          3d22h
kube-proxy-zk5n4                                     1/1     Running   1          3d22h
kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h
kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h
calico-node-6fgtr                                    1/1     Running   2          3d22h
metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h
# List all PersistentVolumes, sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h
calico-node-4dbgm                                    1/1     Running   1          3d22h
calico-node-4vqrz                                    1/1     Running   1          3d22h
calico-node-6fgtr                                    1/1     Running   2          3d22h
calico-node-c5g75                                    1/1     Running   1          3d22h
calico-node-vnqgf                                    1/1     Running   1          3d22h
calico-node-x8hcl                                    1/1     Running   1          3d22h
coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h
coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h
etcd-k8s-master01.example.local                      1/1     Running   1          3d22h
etcd-k8s-master02.example.local                      1/1     Running   1          3d22h
etcd-k8s-master03.example.local                      1/1     Running   1          3d22h
kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h
kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h
kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h
kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h
kube-proxy-7xnlg                                     1/1     Running   1          3d22h
kube-proxy-pq2jk                                     1/1     Running   1          3d22h
kube-proxy-sbsfn                                     1/1     Running   1          3d22h
kube-proxy-vkqc8                                     1/1     Running   1          3d22h
kube-proxy-xp24c                                     1/1     Running   1          3d22h
kube-proxy-zk5n4                                     1/1     Running   1          3d22h
kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h
kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h
kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h
metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h
[root@k8s-master01 ~]# kubectl get pod -n kube-system --show-labels
NAME                                                 READY   STATUS    RESTARTS   AGE     LABELS
calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h   k8s-app=calico-kube-controllers,pod-template-hash=6fdd497b59
calico-node-4dbgm                                    1/1     Running   1          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
calico-node-4vqrz                                    1/1     Running   1          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
calico-node-6fgtr                                    1/1     Running   2          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
calico-node-c5g75                                    1/1     Running   1          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
calico-node-vnqgf                                    1/1     Running   1          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
calico-node-x8hcl                                    1/1     Running   1          3d22h   controller-revision-hash=7d4955b46f,k8s-app=calico-node,pod-template-generation=1
coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h   k8s-app=kube-dns,pod-template-hash=5ffd5c4586
coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h   k8s-app=kube-dns,pod-template-hash=5ffd5c4586
etcd-k8s-master01.example.local                      1/1     Running   1          3d22h   component=etcd,tier=control-plane
etcd-k8s-master02.example.local                      1/1     Running   1          3d22h   component=etcd,tier=control-plane
etcd-k8s-master03.example.local                      1/1     Running   1          3d22h   component=etcd,tier=control-plane
kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h   component=kube-apiserver,tier=control-plane
kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h   component=kube-apiserver,tier=control-plane
kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h   component=kube-apiserver,tier=control-plane
kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h   component=kube-controller-manager,tier=control-plane
kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h   component=kube-controller-manager,tier=control-plane
kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h   component=kube-controller-manager,tier=control-plane
kube-proxy-7xnlg                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-proxy-pq2jk                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-proxy-sbsfn                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-proxy-vkqc8                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-proxy-xp24c                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-proxy-zk5n4                                     1/1     Running   1          3d22h   controller-revision-hash=78f6767bb8,k8s-app=kube-proxy,pod-template-generation=2
kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h   component=kube-scheduler,tier=control-plane
kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h   component=kube-scheduler,tier=control-plane
kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h   component=kube-scheduler,tier=control-plane
metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h   k8s-app=metrics-server,pod-template-hash=dd9ddfbb
[root@k8s-master01 ~]# kubectl get pod -n kube-system -l k8s-app=calico-node
NAME                READY   STATUS    RESTARTS   AGE
calico-node-4dbgm   1/1     Running   1          3d22h
calico-node-4vqrz   1/1     Running   1          3d22h
calico-node-6fgtr   1/1     Running   2          3d22h
calico-node-c5g75   1/1     Running   1          3d22h
calico-node-vnqgf   1/1     Running   1          3d22h
calico-node-x8hcl   1/1     Running   1          3d22h
[root@k8s-master01 ~]# kubectl get pod -n kube-system -l k8s-app=calico-node | grep Running
calico-node-4dbgm   1/1     Running   1          3d22h
calico-node-4vqrz   1/1     Running   1          3d22h
calico-node-6fgtr   1/1     Running   2          3d22h
calico-node-c5g75   1/1     Running   1          3d22h
calico-node-vnqgf   1/1     Running   1          3d22h
calico-node-x8hcl   1/1     Running   1          3d22h
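Besides label selectors, individual fields can be extracted or filtered directly; a small sketch using jsonpath output and a field selector (both are standard kubectl options):

kubectl get pod -n kube-system -o jsonpath='{.items[*].metadata.name}'    # Print only the Pod names
kubectl get pod -A --field-selector=status.phase=Running                  # List only Pods whose phase is Running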

17.6 Updating Resources

[root@k8s-master01 ~]# kubectl create deploy nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           22s
[root@k8s-master01 ~]# kubectl get deploy nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-01-04T13:32:18Z"
  generation: 1
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2022-01-04T13:32:18Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-01-04T13:32:31Z"
  name: nginx
  namespace: default
  resourceVersion: "39129"
  uid: 00a272bf-8a8b-4bde-9353-784969dac9f4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-01-04T13:32:31Z"
    lastUpdateTime: "2022-01-04T13:32:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-04T13:32:18Z"
    lastUpdateTime: "2022-01-04T13:32:31Z"
    message: ReplicaSet "nginx-6799fc88d8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:v2
deployment.apps/nginx image updated
[root@k8s-master01 ~]# kubectl get deploy nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-01-04T13:32:18Z"
  generation: 2
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2022-01-04T13:32:18Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:unavailableReplicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-01-04T13:53:01Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:image: {}
    manager: kubectl-set
    operation: Update
    time: "2022-01-04T13:53:01Z"
  name: nginx
  namespace: default
  resourceVersion: "41360"
  uid: 00a272bf-8a8b-4bde-9353-784969dac9f4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:v2                # The image name has been changed
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-01-04T13:32:31Z"
    lastUpdateTime: "2022-01-04T13:32:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-04T13:32:18Z"
    lastUpdateTime: "2022-01-04T13:53:01Z"
    message: ReplicaSet "nginx-5b8495f49b" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 2
  unavailableReplicas: 1
  updatedReplicas: 1
[root@k8s-master01 ~]# kubectl edit deploy nginx
[root@k8s-master01 ~]# kubectl create -f admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@k8s-master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:        # Add labels
    cka: "true"
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 ~]# kubectl replace -f admin.yaml
serviceaccount/admin-user replaced
clusterrolebinding.rbac.authorization.k8s.io/admin-user replaced
[root@k8s-master01 dashboard]# kubectl get sa -n kube-system admin-user -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-01-04T13:58:21Z"
  labels:
    cka: "true"
  name: admin-user
  namespace: kube-system
  resourceVersion: "43165"
  uid: 00070fb9-6fb5-4492-a3eb-de22929aa475
secrets:
- name: admin-user-token-6gkkm
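A label can also be added in place without editing or replacing the whole object; a minimal sketch using kubectl patch with a merge patch (re-adding the same cka label used above):

kubectl patch sa admin-user -n kube-system -p '{"metadata":{"labels":{"cka":"true"}}}'   # Patch only the labels of the ServiceAccount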

17.7 Editing Resources

kubectl edit svc/docker-registry                      # Edit the Service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor
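The editor can also be set for the whole shell session instead of per command; a small sketch (choosing vim here is just an example):

export KUBE_EDITOR=vim                    # Use vim for all subsequent kubectl edit invocations in this shell
kubectl edit deploy nginx                 # Edit the nginx Deployment; the change is applied when the file is saved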

17.8 kubectl logs

[root@k8s-master01 ~]# kubectl get pod
NAME                     READY   STATUS             RESTARTS   AGE
nginx-5b8495f49b-vwt78   0/1     ImagePullBackOff   0          3m22s
nginx-6799fc88d8-8vkqf   1/1     Running            0          4m38s
[root@k8s-master01 ~]# kubectl logs -f nginx-6799fc88d8-8vkqf
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/25 09:46:10 [notice] 1#1: using the "epoll" event method
2022/01/25 09:46:10 [notice] 1#1: nginx/1.21.5
2022/01/25 09:46:10 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/01/25 09:46:10 [notice] 1#1: OS: Linux 4.19.12-1.el7.elrepo.x86_64
2022/01/25 09:46:10 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/25 09:46:10 [notice] 1#1: start worker processes
2022/01/25 09:46:10 [notice] 1#1: start worker process 30
2022/01/25 09:46:10 [notice] 1#1: start worker process 31
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fdd497b59-xtldf             1/1     Running   1          3d22h
calico-node-4dbgm                                    1/1     Running   1          3d22h
calico-node-4vqrz                                    1/1     Running   1          3d22h
calico-node-6fgtr                                    1/1     Running   2          3d22h
calico-node-c5g75                                    1/1     Running   1          3d22h
calico-node-vnqgf                                    1/1     Running   1          3d22h
calico-node-x8hcl                                    1/1     Running   1          3d22h
coredns-5ffd5c4586-nwqgd                             1/1     Running   1          3d22h
coredns-5ffd5c4586-z8rs8                             1/1     Running   1          3d22h
etcd-k8s-master01.example.local                      1/1     Running   1          3d22h
etcd-k8s-master02.example.local                      1/1     Running   1          3d22h
etcd-k8s-master03.example.local                      1/1     Running   1          3d22h
kube-apiserver-k8s-master01.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master02.example.local            1/1     Running   1          3d22h
kube-apiserver-k8s-master03.example.local            1/1     Running   1          3d22h
kube-controller-manager-k8s-master01.example.local   1/1     Running   2          3d22h
kube-controller-manager-k8s-master02.example.local   1/1     Running   1          3d22h
kube-controller-manager-k8s-master03.example.local   1/1     Running   1          3d22h
kube-proxy-7xnlg                                     1/1     Running   1          3d22h
kube-proxy-pq2jk                                     1/1     Running   1          3d22h
kube-proxy-sbsfn                                     1/1     Running   1          3d22h
kube-proxy-vkqc8                                     1/1     Running   1          3d22h
kube-proxy-xp24c                                     1/1     Running   1          3d22h
kube-proxy-zk5n4                                     1/1     Running   1          3d22h
kube-scheduler-k8s-master01.example.local            1/1     Running   2          3d22h
kube-scheduler-k8s-master02.example.local            1/1     Running   1          3d22h
kube-scheduler-k8s-master03.example.local            1/1     Running   1          3d22h
metrics-server-dd9ddfbb-blwb8                        1/1     Running   2          3d22h
[root@k8s-master01 ~]# kubectl logs -f calico-node-4dbgm -n kube-system
[root@k8s-master01 ~]# kubectl logs -f calico-node-4dbgm -n kube-system --tail 10
2022-01-25 09:52:08.204 [INFO][45] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2022-01-25 09:52:08.204 [INFO][45] felix/wireguard.go 578: Wireguard is not enabled
2022-01-25 09:52:08.204 [INFO][45] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2022-01-25 09:52:08.206 [INFO][45] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.36139ms
2022-01-25 09:52:08.206 [INFO][45] felix/int_dataplane.go 1259: Finished applying updates to dataplane. msecToApply=2.637458
2022-01-25 09:52:13.384 [INFO][45] felix/int_dataplane.go 1245: Applying dataplane updates
2022-01-25 09:52:13.384 [INFO][45] felix/xdp_state.go 172: Resyncing XDP state with dataplane.
2022-01-25 09:52:13.393 [INFO][45] felix/xdp_state.go 561: Finished XDP resync. family=4 resyncDuration=9.33017ms
2022-01-25 09:52:13.393 [INFO][45] felix/wireguard.go 578: Wireguard is not enabled
2022-01-25 09:52:13.394 [INFO][45] felix/int_dataplane.go 1259: Finished applying updates to dataplane. msecToApply=9.574829
2022-01-25 09:52:18.499 [INFO][45] felix/int_dataplane.go 1245: Applying dataplane updates
2022-01-25 09:52:18.499 [INFO][45] felix/ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2022-01-25 09:52:18.499 [INFO][45] felix/wireguard.go 578: Wireguard is not enabled
2022-01-25 09:52:18.500 [INFO][45] felix/ipsets.go 306: Resyncing ipsets with dataplane. family="inet"
2022-01-25 09:52:18.502 [INFO][45] felix/ipsets.go 356: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.434512ms
2022-01-25 09:52:18.502 [INFO][45] felix/int_dataplane.go 1259: Finished applying updates to dataplane. msecToApply=2.907959
kubectl logs my-pod -c my-container                 # Get the logs (stdout) of a specific container in a multi-container Pod
[root@k8s-master01 ~]# kubectl edit deploy nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: redis                   # Add the following content (a second container)
        imagePullPolicy: Always
        name: redis
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
[root@k8s-master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-689dc9f579-gssz6   2/2     Running   0          16m
[root@k8s-master01 ~]# kubectl logs -f nginx-689dc9f579-gssz6 -c redis
1:C 25 Jan 2022 09:54:55.108 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 25 Jan 2022 09:54:55.108 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 25 Jan 2022 09:54:55.108 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 25 Jan 2022 09:54:55.108 * monotonic clock: POSIX clock_gettime
1:M 25 Jan 2022 09:54:55.109 * Running mode=standalone, port=6379.
1:M 25 Jan 2022 09:54:55.109 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 25 Jan 2022 09:54:55.109 # Server initialized
1:M 25 Jan 2022 09:54:55.109 * Ready to accept connections
[root@k8s-master01 ~]# kubectl logs -f nginx-689dc9f579-gssz6 -c nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/25 09:54:33 [notice] 1#1: using the "epoll" event method
2022/01/25 09:54:33 [notice] 1#1: nginx/1.21.5
2022/01/25 09:54:33 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/01/25 09:54:33 [notice] 1#1: OS: Linux 4.19.12-1.el7.elrepo.x86_64
2022/01/25 09:54:33 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/25 09:54:33 [notice] 1#1: start worker processes
2022/01/25 09:54:33 [notice] 1#1: start worker process 30
2022/01/25 09:54:33 [notice] 1#1: start worker process 31
[root@k8s-master01 ~]# kubectl exec nginx-689dc9f579-gssz6 -- ls
Defaulting container name to nginx.
Use 'kubectl describe pod/nginx-689dc9f579-gssz6 -n default' to see all of the containers in this pod.
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
[root@k8s-master01 ~]# kubectl exec -it nginx-689dc9f579-gssz6 -- bash
Defaulting container name to nginx.
Use 'kubectl describe pod/nginx-689dc9f579-gssz6 -n default' to see all of the containers in this pod.
root@nginx-689dc9f579-gssz6:/# exit
exit
[root@k8s-master01 ~]# kubectl exec -it nginx-689dc9f579-gssz6  -- sh
Defaulting container name to nginx.
Use 'kubectl describe pod/nginx-689dc9f579-gssz6 -n default' to see all of the containers in this pod.
# exit
# Redirect the logs to a file
[root@k8s-master01 ~]# kubectl logs nginx-689dc9f579-gssz6  -c redis > /tmp/a.txt
[root@k8s-master01 ~]# cat /tmp/a.txt
1:C 25 Jan 2022 09:54:55.108 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 25 Jan 2022 09:54:55.108 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 25 Jan 2022 09:54:55.108 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 25 Jan 2022 09:54:55.108 * monotonic clock: POSIX clock_gettime
1:M 25 Jan 2022 09:54:55.109 * Running mode=standalone, port=6379.
1:M 25 Jan 2022 09:54:55.109 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 25 Jan 2022 09:54:55.109 # Server initialized
1:M 25 Jan 2022 09:54:55.109 * Ready to accept connections
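A few other commonly used kubectl logs options, shown as a short sketch with the Pod from this example:

kubectl logs nginx-689dc9f579-gssz6 -c nginx --since=1h      # Only the last hour of logs
kubectl logs nginx-689dc9f579-gssz6 -c nginx --previous      # Logs from the previous (restarted) container instance
kubectl logs -l app=nginx --all-containers=true              # Logs from all containers of every Pod matching the label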

17.9 kubectl top

[root@k8s-master01 ~]# kubectl top node
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01.example.local   127m         6%     1045Mi          27%
k8s-master02.example.local   157m         7%     933Mi           24%
k8s-master03.example.local   131m         6%     876Mi           22%
k8s-node01.example.local     58m          2%     557Mi           14%
k8s-node02.example.local     61m          3%     597Mi           15%
k8s-node03.example.local     65m          3%     588Mi           15%
[root@k8s-master01 ~]# kubectl top pod
NAME                     CPU(cores)   MEMORY(bytes)
nginx-689dc9f579-gssz6   1m           11Mi
[root@k8s-master01 ~]# kubectl top pod -A
NAMESPACE              NAME                                                 CPU(cores)   MEMORY(bytes)
default                nginx-689dc9f579-gssz6                               1m           11Mi
kube-system            calico-kube-controllers-6fdd497b59-xtldf             1m           18Mi
kube-system            calico-node-4dbgm                                    19m          106Mi
kube-system            calico-node-4vqrz                                    21m          103Mi
kube-system            calico-node-6fgtr                                    17m          66Mi
kube-system            calico-node-c5g75                                    19m          104Mi
kube-system            calico-node-vnqgf                                    22m          111Mi
kube-system            calico-node-x8hcl                                    20m          105Mi
kube-system            coredns-5ffd5c4586-nwqgd                             2m           13Mi
kube-system            coredns-5ffd5c4586-z8rs8                             2m           13Mi
kube-system            etcd-k8s-master01.example.local                      25m          75Mi
kube-system            etcd-k8s-master02.example.local                      33m          76Mi
kube-system            etcd-k8s-master03.example.local                      27m          74Mi
kube-system            kube-apiserver-k8s-master01.example.local            36m          243Mi
kube-system            kube-apiserver-k8s-master02.example.local            34m          251Mi
kube-system            kube-apiserver-k8s-master03.example.local            35m          252Mi
kube-system            kube-controller-manager-k8s-master01.example.local   1m           22Mi
kube-system            kube-controller-manager-k8s-master02.example.local   11m          54Mi
kube-system            kube-controller-manager-k8s-master03.example.local   1m           22Mi
kube-system            kube-proxy-7xnlg                                     5m           25Mi
kube-system            kube-proxy-pq2jk                                     3m           22Mi
kube-system            kube-proxy-sbsfn                                     3m           24Mi
kube-system            kube-proxy-vkqc8                                     1m           24Mi
kube-system            kube-proxy-xp24c                                     5m           24Mi
kube-system            kube-proxy-zk5n4                                     4m           23Mi
kube-system            kube-scheduler-k8s-master01.example.local            2m           21Mi
kube-system            kube-scheduler-k8s-master02.example.local            2m           18Mi
kube-system            kube-scheduler-k8s-master03.example.local            3m           21Mi
kube-system            metrics-server-dd9ddfbb-blwb8                        2m           19Mi
kubernetes-dashboard   dashboard-metrics-scraper-6d5db67fb7-fscb9           1m           10Mi
kubernetes-dashboard   kubernetes-dashboard-6b5967f475-g9qmj                1m           13Mi

#Find the pod with the highest memory usage and write its name to a file
[root@k8s-master01 ~]# kubectl top pod -n kube-system
NAME                                                 CPU(cores)   MEMORY(bytes)
calico-kube-controllers-6fdd497b59-xtldf             2m           18Mi
calico-node-4dbgm                                    18m          106Mi
calico-node-4vqrz                                    18m          103Mi
calico-node-6fgtr                                    19m          66Mi
calico-node-c5g75                                    21m          104Mi
calico-node-vnqgf                                    20m          111Mi
calico-node-x8hcl                                    16m          105Mi
coredns-5ffd5c4586-nwqgd                             2m           13Mi
coredns-5ffd5c4586-z8rs8                             2m           13Mi
etcd-k8s-master01.example.local                      25m          75Mi
etcd-k8s-master02.example.local                      34m          75Mi
etcd-k8s-master03.example.local                      27m          74Mi
kube-apiserver-k8s-master01.example.local            31m          243Mi
kube-apiserver-k8s-master02.example.local            41m          251Mi
kube-apiserver-k8s-master03.example.local            33m          252Mi
kube-controller-manager-k8s-master01.example.local   1m           22Mi
kube-controller-manager-k8s-master02.example.local   12m          54Mi
kube-controller-manager-k8s-master03.example.local   1m           22Mi
kube-proxy-7xnlg                                     3m           24Mi
kube-proxy-pq2jk                                     4m           22Mi
kube-proxy-sbsfn                                     4m           24Mi
kube-proxy-vkqc8                                     5m           24Mi
kube-proxy-xp24c                                     1m           24Mi
kube-proxy-zk5n4                                     4m           23Mi
kube-scheduler-k8s-master01.example.local            2m           21Mi
kube-scheduler-k8s-master02.example.local            2m           18Mi
kube-scheduler-k8s-master03.example.local            2m           21Mi
metrics-server-dd9ddfbb-blwb8                        2m           19Mi
[root@k8s-master01 ~]# echo "kube-apiserver-k8s-master03.example.local"  > /tmp/b.txt
[root@k8s-master01 ~]# cat /tmp/b.txt
kube-apiserver-k8s-master03.example.local
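The transcript above simply echoes the name that was read off the table by eye. To derive it from the output itself, sort on the MEMORY(bytes) column instead; a minimal sketch, assuming all values are reported in Mi as in the listing above:

# Sort kube-system pods by memory (column 3), take the top one, keep only its name
kubectl top pod -n kube-system --no-headers | sort -k3 -n -r | head -n 1 | awk '{print $1}' > /tmp/b.txt
cat /tmp/b.txt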

18.k8s Troubleshooting

Possible values of the Pod status field "phase"

Status              Description
Pending             The Pod has been accepted by the Kubernetes system, but one or more containers have not yet been created; use kubectl describe to see why it is stuck in Pending
Running             The Pod has been bound to a node and all of its containers have been created, with at least one container running, starting, or restarting; use kubectl logs to view the Pod's logs
Succeeded           All containers ran to completion successfully and will not be restarted; use kubectl logs to view the Pod's logs
Failed              All containers have terminated and at least one of them failed, i.e. exited with a non-zero status or was killed by the system; use logs and describe to inspect the Pod's logs and state
Unknown             The Pod's state could not be obtained, usually because of a communication problem
ImagePullBackOff / ErrImagePull   The image pull failed, usually because the network is unreachable or the registry requires authentication; use logs or describe to find the exact cause
CrashLoopBackOff    The container failed to start, typically because of an incorrect startup command or a failing health check; use logs to find the exact cause
OOMKilled           The container ran out of memory, usually because its memory Limit is set too low or the application itself leaks memory; use logs to check the application's startup logs
Terminating         The Pod is being deleted; use describe to check its state
SysctlForbidden     The Pod requests custom kernel parameters that the kubelet has not enabled or does not support; use describe to find the exact cause
Completed           The main process inside the container has exited; Jobs/CronJobs normally end in this state; use logs to view the container logs
ContainerCreating   The Pod is being created, usually because the image is still being pulled or something is misconfigured; use describe to find the exact cause

Note: the phase field itself only takes the values Pending, Running, Succeeded, Failed and Unknown; the remaining entries above are reasons for being in one of those phases and can be inspected with kubectl get pod xxx -o yaml, as sketched below.
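A quick way to pull just the phase, and the per-container state behind it, out of that YAML is jsonpath; a minimal sketch using the nginx pod from this chapter:

# Print only the phase of the pod
kubectl get pod nginx-689dc9f579-gssz6 -o jsonpath='{.status.phase}{"\n"}'

# Print the per-container state that explains values such as CrashLoopBackOff or ImagePullBackOff
kubectl get pod nginx-689dc9f579-gssz6 -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'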

19.k8s Cluster Management

19.1 Token Management

[root@k8s-master01 ~]# kubeadm token --help
This command manages bootstrap tokens. It is optional and needed only for advanced use cases.

In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server.
A bootstrap token can be used when a client (for example a node that is about to join the cluster) needs
to trust the server it is talking to. Then a bootstrap token with the "signing" usage can be used.
bootstrap tokens can also function as a way to allow short-lived authentication to the API Server
(the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap.

What is a bootstrap token more exactly?
 - It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token".
 - A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}". The former part is the public token ID,
   while the latter is the Token Secret and it must be kept private at all circumstances!
 - The name of the Secret must be named "bootstrap-token-(token-id)".

You can read more about bootstrap tokens here:
  https://kubernetes.io/docs/admin/bootstrap-tokens/

Usage:
  kubeadm token [flags]
  kubeadm token [command]

Available Commands:
  create      Create bootstrap tokens on the server                                        #Create a token (valid for 24 hours by default)
  delete      Delete bootstrap tokens on the server                                        #Delete a token
  generate    Generate and print a bootstrap token, but do not create it on the server     #Generate and print a token without creating it on the server, so it can be used in other operations
  list        List bootstrap tokens on the server

Flags:
      --dry-run             Whether to enable dry-run mode or not
  -h, --help                help for token
      --kubeconfig string   The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")

Global Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --one-output               If true, only write logs to their native severity level (vs also writing to each lower severity level)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm token [command] --help" for more information about a command.

[root@k8s-master01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
1d8e8a.p35rsuat5a7hp577   3h          2022-01-12T21:03:14+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
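The default token created by kubeadm init expires after 24 hours, so adding a node later usually needs a fresh one. A minimal sketch for generating a new token together with the full join command:

# Create a new bootstrap token and print the matching kubeadm join command
kubeadm token create --print-join-command

# Optionally set a custom TTL (0 means the token never expires - avoid this outside of labs)
kubeadm token create --ttl 48h --print-join-command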

19.2 The reset Command

# kubeadm reset #Revert what kubeadm init/join did on this node
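kubeadm reset does not clean up everything: it leaves the kubeconfig, CNI configuration and iptables rules behind. A minimal cleanup sketch for reusing a node after a reset (destructive, run with care):

# Reset without the interactive confirmation
kubeadm reset -f

# Remove the leftover kubeconfig and CNI configuration
rm -rf $HOME/.kube/config /etc/cni/net.d

# Flush the iptables rules that kube-proxy and the CNI plugin created
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X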

19.3 Check Certificate Expiration

[root@k8s-master01 ~]# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 11, 2023 13:02 UTC   364d            ca                      no
apiserver                  Jan 11, 2023 13:02 UTC   364d            ca                      no
apiserver-etcd-client      Jan 11, 2023 13:02 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 11, 2023 13:02 UTC   364d            ca                      no
controller-manager.conf    Jan 11, 2023 13:02 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 11, 2023 13:02 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 11, 2023 13:02 UTC   364d            etcd-ca                 no
etcd-server                Jan 11, 2023 13:02 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 11, 2023 13:02 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 11, 2023 13:02 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 09, 2032 13:02 UTC   9y              no
etcd-ca                 Jan 09, 2032 13:02 UTC   9y              no
front-proxy-ca          Jan 09, 2032 13:02 UTC   9y              no
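As the deprecation notice says, the kubeadm alpha certs form has moved; the same check is available directly under kubeadm certs, and any individual certificate can be cross-checked with openssl. A minimal sketch:

# Non-deprecated form of the same command
kubeadm certs check-expiration

# Cross-check a single certificate directly
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt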

19.4 Renew Certificates

[root@k8s-master01 ~]# kubeadm alpha certs renew --help
Command "renew" is deprecated, please use the same command under "kubeadm certs"
This command is not meant to be run on its own. See list of available subcommands.

Usage:
  kubeadm alpha certs renew [flags]

Flags:
  -h, --help   help for renew

Global Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --one-output               If true, only write logs to their native severity level (vs also writing to each lower severity level)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

[root@k8s-master01 ~]# kubeadm alpha certs renew all
Command "all" is deprecated, please use the same command under "kubeadm certs"
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

[root@k8s-master01 ~]# kubeadm alpha certs check-expiration
Command "check-expiration" is deprecated, please use the same command under "kubeadm certs"
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 12, 2023 09:30 UTC   364d            ca                      no
apiserver                  Jan 12, 2023 09:30 UTC   364d            ca                      no
apiserver-etcd-client      Jan 12, 2023 09:30 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 12, 2023 09:30 UTC   364d            ca                      no
controller-manager.conf    Jan 12, 2023 09:30 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 12, 2023 09:30 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 12, 2023 09:30 UTC   364d            etcd-ca                 no
etcd-server                Jan 12, 2023 09:30 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 12, 2023 09:30 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 12, 2023 09:30 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 09, 2032 13:02 UTC   9y              no
etcd-ca                 Jan 09, 2032 13:02 UTC   9y              no
front-proxy-ca          Jan 09, 2032 13:02 UTC   9y              no
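As the renew output notes, the control-plane components keep the old certificates in memory until they are restarted. Because they run as static pods, one commonly used approach is to move their manifests out of the watched directory and back again; a sketch only, to be run on each master in turn (the backup directory name is arbitrary):

# Temporarily remove the static pod manifests so kubelet stops the control-plane pods,
# then put them back so the pods are recreated with the renewed certificates
mkdir -p /etc/kubernetes/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-backup/
sleep 60
mv /etc/kubernetes/manifests-backup/*.yaml /etc/kubernetes/manifests/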
