Environment Preparation

The OS used is Ubuntu 18.04.

Host IP        Hostname      Docker version
172.31.1.10    k8s-master1   19.03.15
172.31.1.11    k8s-master2   19.03.15
172.31.1.12    k8s-master3   19.03.15
172.31.1.13    harbor        19.03.15
172.31.1.14    haproxy1      -
172.31.1.15    haproxy2      -
172.31.1.16    k8s-node1     19.03.15
172.31.1.17    k8s-node2     19.03.15
172.31.1.18    k8s-node3     19.03.15

Change the hostnames, since Kubernetes distinguishes nodes by hostname.

[root@long-ubuntu ~]# hostnamectl set-hostname k8s-master1.example.local
[root@long-ubuntu ~]# hostnamectl set-hostname k8s-master2.example.local
[root@long-ubuntu ~]# hostnamectl set-hostname k8s-master3.example.local
root@k8s-ubuntu:~# hostnamectl set-hostname harbor.example.local
root@k8s-ubuntu:~# hostnamectl set-hostname ha1.example.local
[root@long-ubuntu ~]# hostnamectl set-hostname k8s-node1.example.local
[root@long-ubuntu ~]# hostnamectl set-hostname k8s-node2.example.local
[root@long-ubuntu ~]# hostnamectl set-hostname k8s-node3.example.local

One-click docker-ce installation on Ubuntu 18.04

#!/bin/bash
# Ubuntu install docker-ce
apt purge ufw lxd lxd-client lxcfs lxc-common -y
apt install -y iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server \
  nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev \
  gcc openssh-server iotop unzip zip
apt-get remove docker docker-engine docker.io
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
apt update
apt install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://rzd1bb7q.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker version

Remember to disable swap.

Disable the firewall.

Optimize the kernel parameters:

[root@long ~]# sysctl -a | grep forward
net.ipv4.ip_forward = 1
[root@long ~]# sysctl -a | grep bridge-nf-call
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
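
The original does not spell out the commands behind these three steps. A minimal sketch, assuming ufw as the firewall and an arbitrary sysctl.d file name:

# Disable swap now and across reboots (kubelet refuses to start with swap enabled by default)
swapoff -a
sed -ri '/\sswap\s/s/^/#/' /etc/fstab
# Disable the firewall if it is still installed (the docker script above purges ufw)
systemctl disable --now ufw 2>/dev/null
# Load the bridge module and persist the forwarding/bridge parameters shown above
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system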

Install keepalived + haproxy

# 172.31.1.14
[root@ha1 ~]# apt -y install keepalived haproxy

Configure keepalived

[root@ha1 ~]# find / -name "*keepalived*"
# Copy the sample config
[root@ha1 ~]# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

Test whether the IP is already in use:

[root@k8s-master1 ~]# ping 172.31.1.188
PING 172.31.1.188 (172.31.1.188) 56(84) bytes of data.
From 172.31.1.10 icmp_seq=1 Destination Host Unreachable
From 172.31.1.10 icmp_seq=2 Destination Host Unreachable
From 172.31.1.10 icmp_seq=3 Destination Host Unreachable
# Unreachable means the address is unused, so it can be assigned as the VIP

Modify the configuration:

[root@ha1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.1.188 dev eth0 label eth0:1
    }
}
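
The document only shows ha1. On ha2 (172.31.1.15) the same file would typically differ only in state and priority; a sketch, assuming ha2 also uses eth0:

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.1.188 dev eth0 label eth0:1
    }
}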

Enable at boot:

[root@ha1 ~]# systemctl enable --now keepalived

Verify:

[root@ha1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:da:36:40 brd ff:ff:ff:ff:ff:ff
    inet 172.31.1.14/21 brd 172.31.7.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.31.1.188/32 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feda:3640/64 scope link
       valid_lft forever preferred_lft forever

Configure HAProxy

[root@ha1 ~]# vim /etc/haproxy/haproxy.cfg
listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth haadmin:123456

listen k8s-m44-6443
    bind 172.31.1.188:6443
    mode tcp
    server 172.31.1.10 172.31.1.10:6443 check inter 2s fall 3 rise 5
    server 172.31.1.11 172.31.1.11:6443 check inter 2s fall 3 rise 5
    server 172.31.1.12 172.31.1.12:6443 check inter 2s fall 3 rise 5

Enable at boot:

[root@ha1 ~]# systemctl enable --now haproxy
Synchronizing state of haproxy.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable haproxy
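
One caveat the original does not cover: on the standby node ha2, haproxy must bind 172.31.1.188:6443 while the VIP is still held by ha1, which normally fails. A common fix (an assumption here, not shown above) is to allow non-local binds:

# On ha2: let haproxy bind an address not yet assigned to this host
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p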

Configure Harbor

First download docker-compose (make sure docker is already installed):

[root@harbor ~]# wget https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
[root@harbor ~]# cp docker-compose-Linux-x86_64 /usr/bin/docker-compose
# Make it executable
[root@harbor ~]# chmod +x /usr/bin/docker-compose
# Check the version
[root@harbor ~]# docker-compose version
docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Install Harbor

# 172.31.1.13
[root@harbor ~]# mkdir /apps
[root@harbor ~]# cp harbor-offline-installer-v2.2.3.tgz /apps/
[root@harbor ~]# cd /apps/
[root@harbor apps]# ll
total 500924
drwxr-xr-x  2 root root      4096 Jul 24 01:38 ./
drwxr-xr-x 26 root root      4096 Jul 24 01:38 ../
-rw-r--r--  1 root root 512937171 Jul 24 01:38 harbor-offline-installer-v2.2.3.tgz
# Extract
[root@harbor apps]# tar xf harbor-offline-installer-v2.2.3.tgz
# Create the certs directory inside the extracted harbor directory
[root@harbor apps]# cd harbor
[root@harbor harbor]# mkdir certs

Generate a self-signed certificate so Harbor can serve HTTPS

The following error may occur:

[root@harbor certs]# openssl req -x509 -new -nodes -key harbor-ca.key  -subj "/CN=harbor.longxuan.vip" -days 7120 -out harbor-ca.crt
Can't load /root/.rnd into RNG
140654265754048:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:88:Filename=/root/.rnd

Solution (create this file in advance):

# Remove the bad certificate
[root@harbor certs]# rm -rf harbor-ca.crt
[root@harbor certs]# touch /root/.rnd

Regenerate the certificate:

[root@harbor certs]# openssl genrsa -out  harbor-ca.key
Generating RSA private key, 2048 bit long modulus (2 primes)
.............+++++
......................................................+++++
e is 65537 (0x010001)
[root@harbor certs]# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.longxuan.vip" -days 7120 -out harbor-ca.crt

Modify the Harbor configuration:

[root@harbor harbor]# vim harbor.yml
hostname: harbor.longxuan.vip

https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /apps/harbor/certs/harbor-ca.crt
  private_key: /apps/harbor/certs/harbor-ca.key

harbor_admin_password: 123456

Run the installer:

[root@harbor harbor]# ./install.sh --with-trivy

Test access to 172.31.1.13 in a browser.

Create the directory (on every machine that pulls images with docker). With a self-signed certificate this is the only way to make docker trust the registry:

[root@harbor harbor]# mkdir /etc/docker/certs.d/harbor.longxuan.vip -p

Copy the certificate (the public part):

[root@harbor harbor]# cd certs/
[root@harbor certs]# ll
total 16
drwxr-xr-x 2 root root 4096 Jul 24 01:50 ./
drwxr-xr-x 4 root root 4096 Jul 24 01:56 ../
-rw-r--r-- 1 root root 1139 Jul 24 01:50 harbor-ca.crt
-rw------- 1 root root 1679 Jul 24 01:42 harbor-ca.key
[root@harbor certs]# scp harbor-ca.crt 172.31.1.10:/etc/docker/certs.d/harbor.longxuan.vip/
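
Every docker host needs both the certs.d directory and the CA file, so a small loop saves repeating the scp; a sketch, assuming root SSH access to the IPs from the host table:

[root@harbor certs]# for ip in 172.31.1.10 172.31.1.11 172.31.1.12 172.31.1.16 172.31.1.17 172.31.1.18; do
>   ssh $ip "mkdir -p /etc/docker/certs.d/harbor.longxuan.vip"
>   scp harbor-ca.crt $ip:/etc/docker/certs.d/harbor.longxuan.vip/
> done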

Test

Every machine that pulls images with docker needs to resolve the registry hostname, so set up name resolution first:

[root@k8s-master1 ~]# echo "172.31.1.13 harbor.longxuan.vip" >> /etc/hosts
# Log in to the registry
[root@k8s-master1 ~]# docker login harbor.longxuan.vip
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Pull a small official image:

[root@k8s-master1 ~]# docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
5843afab3874: Pull complete
Digest: sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest

Tag it:

[root@k8s-master1 ~]# docker tag alpine harbor.longxuan.vip/baseimages/alpine

Push it:

[root@k8s-master1 ~]# docker push harbor.longxuan.vip/baseimages/alpine
The push refers to repository [harbor.longxuan.vip/baseimages/alpine]
72e830a4dff5: Pushed
latest: digest: sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d size: 528

Check in the Harbor web UI that the image exists.

Pull from another machine:

[root@k8s-master2 ~]# vim /etc/hosts
[root@k8s-master2 ~]# docker login harbor.longxuan.vip
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@k8s-master2 ~]#
[root@k8s-master2 ~]# docker pull harbor.longxuan.vip/baseimages/alpine
Using default tag: latest
latest: Pulling from baseimages/alpine
5843afab3874: Pull complete
Digest: sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d
Status: Downloaded newer image for harbor.longxuan.vip/baseimages/alpine:latest
harbor.longxuan.vip/baseimages/alpine:latest

Install kubeadm (run on every machine; do not skip a single step)

# Add the aliyun kubernetes apt repo
[root@k8s-master1 ~]# apt-get update && apt-get install -y apt-transport-https
[root@k8s-master1 ~]# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
[root@k8s-master1 ~]# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
[root@k8s-master1 ~]# apt update

Install on the master nodes:

[root@k8s-master1 ~]# apt-cache madison kubeadm
# Install kubectl only where you need to control the cluster
[root@k8s-master1 ~]# apt install -y kubeadm=1.20.5-00 kubelet=1.20.5-00 kubectl=1.20.5-00

Install on the worker nodes:

[root@k8s-node1 ~]# apt install -y kubeadm=1.20.5-00 kubelet=1.20.5-00
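
Not shown in the original, but commonly done: hold the packages on every machine so a routine apt upgrade cannot move the cluster off v1.20.5 unexpectedly:

[root@k8s-master1 ~]# apt-mark hold kubeadm kubelet kubectl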

kubeadm command completion

[root@k8s-master1 ~]# mkdir /data/scipts -p
[root@k8s-master1 ~]# kubeadm completion bash > /data/scipts/kubeadm_completion.sh
[root@k8s-master1 ~]# source /data/scipts/kubeadm_completion.sh
# Load it at every login
[root@k8s-master1 ~]# vim /etc/profile
...
source /data/scipts/kubeadm_completion.sh
[root@k8s-master1 ~]# chmod a+x /data/scipts/kubeadm_completion.sh

kubectl command completion

[root@k8s-master2 ~]# kubectl completion bash > /data/scipts/kubectl_completion.sh
[root@k8s-master2 ~]# source /data/scipts/kubectl_completion.sh

List the required images (default version):

[root@k8s-master1 ~]# kubeadm config images list
I0724 03:03:42.202676    8619 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.20.9
k8s.gcr.io/kube-controller-manager:v1.20.9
k8s.gcr.io/kube-scheduler:v1.20.9
k8s.gcr.io/kube-proxy:v1.20.9
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

List the images required for a specific version:

[root@k8s-master1 ~]# kubeadm config images list --kubernetes-version v1.20.5
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Pull them from a domestic mirror instead (run this script on every master node; it makes the install much faster):

[root@k8s-master1 ~]# cat k8s-v1.20.5-install.sh
#!/bin/bash
# Images k8s-v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Initialize the cluster

# 172.31.1.10
[root@k8s-master1 ~]# kubeadm  init \
--apiserver-advertise-address=172.31.1.10 \
--control-plane-endpoint=172.31.1.188 \
--apiserver-bind-port=6443  \
--kubernetes-version=v1.20.5 \
--pod-network-cidr=10.100.0.0/16 \
--service-cidr=10.200.0.0/16 \
--service-dns-domain=longxuan.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--ignore-preflight-errors=swap

A successful install prints the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# Join a master node
  kubeadm join 172.31.1.188:6443 --token jumav2.xxqxqx8sm49qqpkb \
    --discovery-token-ca-cert-hash sha256:0ab1061fcfe2543fc53694513329b332cbc78ebf49600ecb40a0ee226cbd4b63 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

# Join a worker node
kubeadm join 172.31.1.188:6443 --token jumav2.xxqxqx8sm49qqpkb \
    --discovery-token-ca-cert-hash sha256:0ab1061fcfe2543fc53694513329b332cbc78ebf49600ecb40a0ee226cbd4b63

Create the kubeconfig as instructed:

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

If other master nodes need to run kubectl commands, do the following:

# Create the directory
[root@k8s-server2 ~]# mkdir /root/.kube/ -p
# Copy the config
[root@k8s-server1 m44]# scp /root/.kube/config 172.31.1.11:/root/.kube/

Download the network plugin manifest:

[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Modify the network plugin

Download the required image:

[root@k8s-master1 ~]# docker pull quay.io/coreos/flannel:v0.14.0

Tag it:

[root@k8s-master1 ~]# docker tag quay.io/coreos/flannel:v0.14.0 harbor.longxuan.vip/baseimages/flannel:v0.14.0

Push it to the Harbor registry:

[root@k8s-master1 ~]# docker push harbor.longxuan.vip/baseimages/flannel:v0.14.0
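
The two image references in kube-flannel.yml can be switched to the Harbor copy by hand, as in the manifest below, or with a one-liner such as:

[root@k8s-master1 ~]# sed -i 's#quay.io/coreos/flannel:v0.14.0#harbor.longxuan.vip/baseimages/flannel:v0.14.0#g' kube-flannel.yml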

Modify the network configuration (note: no # comments may be added inside the file's JSON blocks)

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  # "Network" below is the pod network defined at kubeadm init
  # (the comment sits outside the JSON block; # inside the JSON would break flannel)
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        # Image changed to the copy pushed to the Harbor registry
        image: harbor.longxuan.vip/baseimages/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # Image changed to the copy pushed to the Harbor registry
        image: harbor.longxuan.vip/baseimages/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Deploy the network:

[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the nodes:

[root@k8s-master1 ~]# kubectl get node

Delete a pod (the DaemonSet recreates it):

[root@k8s-master1 ~]# kubectl delete pod kube-flannel-ds-ddpnj -n kube-system
pod "kube-flannel-ds-ddpnj" deleted

List all pods:

[root@k8s-master1 ~]# kubectl get pod -A
NAMESPACE     NAME                                                READY   STATUS                  RESTARTS   AGE
kube-system   coredns-54d67798b7-7t9pv                            1/1     Running                 0          73m
kube-system   coredns-54d67798b7-znmkk                            1/1     Running                 0          73m
kube-system   etcd-k8s-master2.example.local                      1/1     Running                 0          73m
kube-system   kube-apiserver-k8s-master2.example.local            1/1     Running                 0          73m
kube-system   kube-controller-manager-k8s-master2.example.local   1/1     Running                 0          73m
kube-system   kube-flannel-ds-l7n5s                               1/1     Running                 0          6m1s
kube-system   kube-flannel-ds-mcxtp                               1/1     Running                 0          25m
kube-system   kube-proxy-8rrxj                                    1/1     Running                 0          73m
kube-system   kube-proxy-rkt2m                                    1/1     Running                 0          25m
kube-system   kube-scheduler-k8s-master2.example.local            1/1     Running                 0          73m

Join the worker nodes

[root@k8s-node1 ~]# kubeadm join 172.31.1.188:6443 --token jumav2.xxqxqx8sm49qqpkb \
    --discovery-token-ca-cert-hash sha256:0ab1061fcfe2543fc53694513329b332cbc78ebf49600ecb40a0ee226cbd4b63

Join additional master nodes

Generate a certificate key:

[root@k8s-master1 ~]# kubeadm init phase upload-certs --upload-certs
I0724 07:55:01.996821   50433 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
03c8ae8a4b1e298157011910e110d7acf4855c354710f227c692a0be8ac54617
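
Bootstrap tokens expire after 24 hours by default, so if a node joins later, a fresh token and join command can be printed with:

[root@k8s-master1 ~]# kubeadm token create --print-join-command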

Command to join the other master nodes:

[root@k8s-master3 ~]# kubeadm join 172.31.1.188:6443 --token jumav2.xxqxqx8sm49qqpkb \
    --discovery-token-ca-cert-hash sha256:0ab1061fcfe2543fc53694513329b332cbc78ebf49600ecb40a0ee226cbd4b63 \
    --control-plane --certificate-key 03c8ae8a4b1e298157011910e110d7acf4855c354710f227c692a0be8ac54617

Each node joins the cluster automatically, pulls the images, and starts flannel; eventually the master shows every node in Ready state.

For a single-master setup, run the following to allow pods to be scheduled on the master node:

[root@k8s-master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-

Create containers and test the internal network

[root@k8s-master1 ~]# kubectl run net-test1 --image=alpine sleep 60000
pod/net-test1 created
[root@k8s-master1 ~]# kubectl run net-test2 --image=alpine sleep 60000
pod/net-test2 created
[root@k8s-master1 ~]# kubectl run net-test3 --image=alpine sleep 60000
pod/net-test3 created

Check the pod IPs:

[root@k8s-master1 ~]# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP           NODE                      NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          2m34s   10.100.5.2   k8s-node3.example.local   <none>           <none>
net-test2   1/1     Running   0          2m27s   10.100.2.2   k8s-node2.example.local   <none>           <none>
net-test3   1/1     Running   0          2m22s   10.100.1.4   k8s-node1.example.local   <none>           <none>

Test network connectivity:

[root@k8s-master1 ~]# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
    link/ether 7e:0c:85:29:c4:8c brd ff:ff:ff:ff:ff:ff
    inet 10.100.5.2/24 brd 10.100.5.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 10.100.2.2
PING 10.100.2.2 (10.100.2.2): 56 data bytes
64 bytes from 10.100.2.2: seq=0 ttl=62 time=0.979 ms
64 bytes from 10.100.2.2: seq=1 ttl=62 time=0.600 ms
# Test external connectivity
/ # ping www.baidu.com
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=127 time=49.085 ms
64 bytes from 110.242.68.4: seq=1 ttl=127 time=54.397 ms
64 bytes from 110.242.68.4: seq=2 ttl=127 time=128.386 ms
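
To confirm coredns as well, a service lookup from inside the pod should resolve through the cluster DNS (longxuan.local is the --service-dns-domain chosen at init; this check is an addition, not from the original):

/ # nslookup kubernetes.default.svc.longxuan.local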

Troubleshooting

If flannel fails to start, coredns cannot start either; check flannel's logs for errors first (coredns depends on a network add-on such as flannel):

[root@k8s-server1 m44]# kubectl logs -f kube-flannel-ds-bn9sd -n kube-system
I0725 09:49:33.570637       1 main.go:520] Determining IP address of default interface
I0725 09:49:33.571818       1 main.go:533] Using interface with name eth0 and address 172.18.8.149
I0725 09:49:33.571867       1 main.go:550] Defaulting external address to interface address (172.18.8.149)
W0725 09:49:33.572628       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E0725 09:49:34.163087       1 main.go:251] Failed to create SubnetManager: error parsing subnet config: invalid character '#' looking for beginning of object key string

Solution:

# Delete the flannel resources and re-apply
[root@k8s-server1 m44]# kubectl delete -f kube-flannel.yml
[root@k8s-server1 m44]# kubectl apply -f kube-flannel.yml
