Note: the offline package here is for v1.20.5. This guide installs a single master node and deploys the services on it, verified working. Installing a full cluster is also possible: upload the offline package to every node, import it, and then join the nodes to the K8S cluster. I have personally tested all of this.

K8S v1.20.5 offline package download:
Link: https://pan.baidu.com/s/1gjp8gta8SLAbdLrc8e-zJA?pwd=MAQQ
Extraction code: MAQQ
(shared via Baidu Netdisk)

1. System preparation

hostnamectl set-hostname k8s-master && bash
cat >> /etc/hosts << EOF
192.168.243.215  k8s-master
EOF
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
swapoff -a # temporary
sed -i 's/.*swap.*/#&/' /etc/fstab # permanent
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
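If sysctl --system reports that the net.bridge.* keys are unknown, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it now and on every boot (assuming a standard CentOS 7 host):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system # re-apply; the net.bridge.* keys should now be accepted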

2. Install Docker offline

For the offline Docker installation, see my separate blog post.
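As a rough outline only (the referenced post has the details, and the exact RPM set depends on what you downloaded; these steps are an assumption, not that post's exact procedure), an offline Docker install on CentOS 7 boils down to:

# upload the docker-ce RPMs and all their dependency RPMs into one directory, then:
rpm -ivh *.rpm
systemctl enable --now docker
docker version # confirm the daemon is up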

3. Install kubeadm, kubelet, and kubectl

[root@k8s-master qq]# ls
0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm  libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
356e511f8963b4b68fdf41593e64e92f03f0b58c72aae0613aeff3e770078cf7-kubelet-1.20.5-0.x86_64.rpm        libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm      libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
8593f28d972a6818131c1a6cd34f52b22a6acd0c4c7dcf3d7447ad53a9f24cc3-kubectl-1.20.5-0.x86_64.rpm        socat-1.7.3.2-2.el7.x86_64.rpm
c2634321e0d8ebe24ba7c6f025df171f5d1707c75a90e3bdd08199ab47aac565-kubeadm-1.20.5-0.x86_64.rpm        安装说明.txt
conntrack-tools-1.4.4-7.el7.x86_64.rpm
[root@k8s-master qq]# rpm -ivh *.rpm # install all of them in one go
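The standard kubeadm flow enables the kubelet service right after the RPMs are installed:

systemctl enable --now kubelet
# kubelet will restart in a loop until kubeadm init generates its config; that is normal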

4. Deploy the Kubernetes master

kubeadm init --apiserver-advertise-address=192.168.243.215  --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.5  --service-cidr=10.96.0.0/12  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master mqq]# kubectl get nodes # right after init the status is NotReady
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   33m   v1.20.5
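To join worker nodes later (after importing the same offline images on them, as noted at the top), regenerate the join command on the master; a sketch:

kubeadm token create --print-join-command
# run the printed "kubeadm join 192.168.243.215:6443 --token ..." command on each node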

5. Install the Pod network add-on (CNI)

cat > calico.yaml << 'EOF'
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use
  veth_mtu: "1440"
  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
  - kind: ServiceAccount
    name: calico-kube-controllers
    namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.11.3
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.11.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.11.3
          volumeMounts:
            - name: flexvol-driver-host
              mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.11.3
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within --cluster-cidr.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so "kubectl logs" works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-live
                - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-ready
                - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.11.3
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
EOF
kubectl apply -f calico.yaml
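Since nothing can be pulled from the internet here, the four Calico images must already be in the local Docker image store on every node before the apply. Assuming you exported them as tar archives (these filenames are placeholders for whatever your offline package contains):

docker load -i calico-cni-v3.11.3.tar
docker load -i calico-node-v3.11.3.tar
docker load -i calico-pod2daemon-flexvol-v3.11.3.tar
docker load -i calico-kube-controllers-v3.11.3.tar
docker images | grep calico # all four should show tag v3.11.3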
[root@k8s-master manifests]# kubectl get nodes # the status should now be Ready
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   40m   v1.20.5
[root@k8s-master manifests]# kubectl get pods -n kube-system # all pods should be Running
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6b8f6f78dc-qrw2g   1/1     Running   0          2m39s
calico-node-s5ddr                          1/1     Running   0          2m39s
coredns-7f89b7bc75-b49sr                   1/1     Running   0          40m
coredns-7f89b7bc75-gtft5                   1/1     Running   0          40m
etcd-k8s-master                            1/1     Running   0          40m
kube-apiserver-k8s-master                  1/1     Running   0          40m
kube-controller-manager-k8s-master         1/1     Running   0          40m
kube-proxy-grkw8                           1/1     Running   0          40m
kube-scheduler-k8s-master                  1/1     Running   0          40m
[root@k8s-master manifests]#

6. A single-node k8s cannot deploy services right after install

By default the master node carries a taint and will not schedule pods. Either remove the taint or add a worker node; here I remove the taint.

kubectl get node -o yaml | grep taint -A 5
kubectl taint nodes --all node-role.kubernetes.io/master- # this one command removes the taint
kubectl get node -o yaml | grep taint -A 5 # if nothing comes back, you are done
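If you add real worker nodes later and want the master to stop running ordinary workloads again, the taint can be restored (node name as used throughout this guide):

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule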

7. Deploy the metrics service

cat > metrics-server.yaml << 'EOF'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
      containers:
        - name: metrics-server
          image: lizhenliang/metrics-server:v0.3.7
          imagePullPolicy: IfNotPresent
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
EOF
kubectl apply -f metrics-server.yaml
kubectl top nodes
[root@k8s-master manifests]# kubectl top nodes # once the CPU% column shows values, it works
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   531m         17%    2035Mi          55%
[root@k8s-master manifests]#
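Per-pod metrics work the same way once the server has been scraping for a minute or two:

kubectl top pods -n kube-system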

8. Deploy the Dashboard

cat > recommended.yaml << 'EOF'
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added: expose the Dashboard outside the cluster; mind the two-space indentation
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 # added
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF
kubectl apply -f recommended.yaml
[root@k8s-master manifests]# kubectl get pods -n kubernetes-dashboard # all pods should be Running
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-5n42w   1/1     Running   0          50s
kubernetes-dashboard-74d688b6bc-rdw9r        1/1     Running   0          50s
[root@k8s-master manifests]#

Access URL: https://192.168.243.215:30001/ (substitute your own node IP; the URL must start with https://)


Create a service account, bind it to the built-in cluster-admin role,
then log in to the Dashboard with the token it outputs.

kubectl create serviceaccount dashboard-admin -n kube-system # create the user
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin # grant it cluster-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') # fetch the user's token for the login page
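If you want only the raw token (to paste straight into the login form), this equivalent one-liner decodes it directly; it assumes the same dashboard-admin secret created above:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d; echo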

9. Test that Kubernetes works

cat > nginx.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
EOF
kubectl apply -f nginx.yaml
kubectl get pods,svc
kubectl get pods,svc
[root@k8s-master mqq]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7cf7d6dbc8-8lrzb   1/1     Running   0          46s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        76m
service/nginx        NodePort    10.98.34.181   <none>        80:30762/TCP   46s
[root@k8s-master mqq]#
Open http://<node-ip>:30762 in a browser; if the nginx page loads, the cluster is working.
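The same check from the shell, using the NodePort shown above (30762 here; yours will differ):

curl -I http://192.168.243.215:30762/
# an "HTTP/1.1 200 OK" response means the Deployment, the Service, and the CNI network are all functioning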

10. Install kubectl command completion

# upload bash-completion-2.1-8.el7.noarch.rpm, then:
rpm -ivh bash-completion-2.1-8.el7.noarch.rpm
source /usr/share/bash-completion/bash_completion
kubectl completion bash > /etc/profile.d/kubectl.sh
source /etc/profile.d/kubectl.sh
cat >> /root/.bashrc << EOF
source /etc/profile.d/kubectl.sh
EOF
