Kubernetes Quick Deployment
- 1. Installation Requirements
- 2. Learning Goals
- 3. Preparing the Environment
- 4. Install Docker/kubeadm/kubelet on All Nodes
- 4.1 Install Docker
- 4.2 Add the Aliyun Kubernetes YUM Repository
- 4.3 Install kubeadm, kubelet, and kubectl
- 5. Deploy the Kubernetes Master
- 6. Install a Pod Network Plugin (CNI)
- 7. Join the Kubernetes Nodes
- 8. Test the Kubernetes Cluster
- 9. Troubleshooting Cluster Deployment
- 9.1 Case One
- 9.2 Case Two
1. Installation Requirements
Before you begin, the machines used to deploy the Kubernetes cluster must meet the following conditions:
- At least 3 machines, running CentOS 7 or later
- Hardware: 2 GB RAM or more, 2 CPUs or more, 20 GB disk or more
- Full network connectivity between all machines in the cluster
- Internet access on every node, to pull images
- Swap disabled
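The requirements above can be checked mechanically before installing anything. The helper below is my own sketch (kubeadm runs similar preflight checks itself); on a real host you would feed it values from /proc/meminfo and nproc as shown in the comment.

```shell
# check_resources MEM_KB CPUS SWAP_KB -> prints "ok" or the failing requirement
# (hypothetical helper, not part of kubeadm)
check_resources() {
  local mem_kb=$1 cpus=$2 swap_kb=$3
  [ "$mem_kb" -ge $((2 * 1024 * 1024)) ] || { echo "need >= 2GB RAM"; return 1; }
  [ "$cpus" -ge 2 ]                      || { echo "need >= 2 CPUs"; return 1; }
  [ "$swap_kb" -eq 0 ]                   || { echo "swap must be disabled"; return 1; }
  echo "ok"
}

# On a real machine:
# check_resources "$(awk '/^MemTotal/{print $2}' /proc/meminfo)" \
#                 "$(nproc)" \
#                 "$(awk '/^SwapTotal/{print $2}' /proc/meminfo)"
```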
2. Learning Goals
- Install Docker and kubeadm on all nodes
- Deploy the Kubernetes Master
- Deploy a container network plugin
- Deploy the Kubernetes Nodes and join them to the cluster
3. Preparing the Environment

OS | Role | IP
---|---|---
CentOS 8 | master | 192.168.8.120
CentOS 7 | node1 | 192.168.8.128
CentOS 7 | node2 | 192.168.8.152
// Set the hostnames
// On the master host
[root@localhost ~]# cat /etc/redhat-release
CentOS Stream release 8
[root@localhost ~]# hostnamectl set-hostname master.example.com
[root@localhost ~]# bash
[root@master ~]# hostname
master.example.com
[root@master ~]#

// On node1
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@localhost ~]# hostnamectl set-hostname node1.example.com
[root@localhost ~]# bash
[root@node1 ~]# hostname
node1.example.com
[root@node1 ~]#

// On node2
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@localhost ~]# hostnamectl set-hostname node2.example.com
[root@localhost ~]# bash
[root@node2 ~]# hostname
node2.example.com
[root@node2 ~]#
// Disable the firewall and SELinux
// On the master host
[root@master ~]# systemctl disable --now firewalld
[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

// On node1
[root@node1 ~]# systemctl disable --now firewalld
[root@node1 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

// On node2
[root@node2 ~]# systemctl disable --now firewalld
[root@node2 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

// Disable swap
// Comment out (or delete) the swap entry in /etc/fstab; here it is commented out
// On the master host
[root@master ~]# vim /etc/fstab
#/dev/mapper/cs-swap none swap defaults 0 0

// On node1
[root@node1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0

// On node2
[root@node2 ~]# vim /etc/fstab
#/dev/mapper/cs-swap none swap defaults 0 0
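The edit done by hand in vim above can also be scripted. The filter below is a sketch of that one change: it comments out any active fstab line whose third field is "swap". On a real host the equivalent would be an in-place `sed -ri` on /etc/fstab followed by `swapoff -a`.

```shell
# Comment out active swap entries in fstab-formatted input (sketch).
comment_swap() {
  sed -E 's|^([^#][^[:space:]]*[[:space:]]+[^[:space:]]+[[:space:]]+swap[[:space:]].*)|#\1|'
}

# Real usage (not run here):
# swapoff -a
# comment_swap < /etc/fstab > /etc/fstab.new && mv /etc/fstab.new /etc/fstab
```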
// Add hosts entries on the master
// On the master host
[root@master ~]# vim /etc/hosts
[root@master ~]# sed -n '1,5p' /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.120 master master.example.com
192.168.8.128 node1 node1.example.com
192.168.8.152 node2 node2.example.com

// Test name resolution
[root@master ~]# ping master
PING master (192.168.8.120) 56(84) bytes of data.
64 bytes from master (192.168.8.120): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from master (192.168.8.120): icmp_seq=2 ttl=64 time=0.021 ms
^C
--- master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1046ms
rtt min/avg/max/mdev = 0.021/0.034/0.047/0.013 ms
[root@master ~]# ping node1
PING node1 (192.168.8.128) 56(84) bytes of data.
64 bytes from node1 (192.168.8.128): icmp_seq=1 ttl=64 time=0.358 ms
64 bytes from node1 (192.168.8.128): icmp_seq=2 ttl=64 time=1.69 ms
^C
--- node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1034ms
rtt min/avg/max/mdev = 0.358/1.024/1.690/0.666 ms
[root@master ~]# ping node2
PING node2 (192.168.8.152) 56(84) bytes of data.
64 bytes from node2 (192.168.8.152): icmp_seq=1 ttl=64 time=1.67 ms
64 bytes from node2 (192.168.8.152): icmp_seq=2 ttl=64 time=1.44 ms
64 bytes from node2 (192.168.8.152): icmp_seq=3 ttl=64 time=0.983 ms
^C
--- node2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.983/1.365/1.674/0.286 ms
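The three /etc/hosts lines follow one pattern, so they can be generated from the table in section 3 instead of typed by hand. A small sketch (the helper name and the heredoc usage are my own, and the example.com suffix matches this cluster):

```shell
# Emit "<ip> <short-name> <fqdn>" hosts lines from "ip name" pairs on stdin.
hosts_entries() {
  while read -r ip name; do
    [ -n "$ip" ] && printf '%s %s %s.example.com\n' "$ip" "$name" "$name"
  done
}

# Real usage on the master (not run here):
# hosts_entries >> /etc/hosts << 'EOF'
# 192.168.8.120 master
# 192.168.8.128 node1
# 192.168.8.152 node2
# EOF
```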
// Pass bridged IPv4 traffic to iptables chains
// On the master host
[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@master ~]# sysctl --system
// Configure time synchronization
// On all hosts
[root@master ~]# yum -y install chrony
[root@master ~]# vim /etc/chrony.conf
    3 pool time1.aliyun.com iburst
[root@master ~]# systemctl enable --now chronyd

[root@node1 ~]# yum -y install chrony
[root@node1 ~]# vim /etc/chrony.conf
    3 server time1.aliyun.com iburst
    4 #server 0.centos.pool.ntp.org iburst
    5 #server 1.centos.pool.ntp.org iburst
    6 #server 2.centos.pool.ntp.org iburst
    7 #server 3.centos.pool.ntp.org iburst
[root@node1 ~]# systemctl enable --now chronyd

[root@node2 ~]# yum -y install chrony
[root@node2 ~]# vim /etc/chrony.conf
    3 pool time1.aliyun.com iburst
[root@node2 ~]# systemctl enable --now chronyd
// Configure passwordless SSH
// On the master host
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id node1
[root@master ~]# ssh-copy-id node2

// Test
[root@master ~]# for i in master node1 node2;do ssh $i 'date';done
Sat Dec 18 13:44:45 CST 2021
Sat Dec 18 13:44:45 CST 2021
Sat Dec 18 13:44:45 CST 2021
[root@master ~]#

// Reboot so all the changes above take effect
[root@master ~]# reboot
[root@node1 ~]# reboot
[root@node2 ~]# reboot
4. Install Docker/kubeadm/kubelet on All Nodes
Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
4.1 Install Docker
// On all hosts
// Download the Docker repo file
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node2 ~]# cd /etc/yum.repos.d/
[root@node2 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

// Install Docker
[root@master ~]# yum -y install docker-ce
[root@node1 ~]# yum -y install docker-ce
[root@node2 ~]# yum -y install docker-ce

// Enable Docker at boot
[root@master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node1 ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node2 ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

// Check the Docker version
[root@master ~]# docker --version
Docker version 20.10.12, build e91ed57
[root@node1 ~]# docker --version
Docker version 20.10.12, build e91ed57
[root@node2 ~]# docker --version
Docker version 20.10.12, build e91ed57
// Configure the registry mirror (accelerator)
// On all hosts
What each key does (JSON does not allow comments, so the notes cannot live inside the file itself): registry-mirrors pulls images through the Aliyun accelerator; exec-opts makes Docker manage cgroups with systemd, matching the kubelet; log-driver/log-opts keep JSON logs rotated at 100 MB; storage-driver selects overlay2.
[root@master ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://faq69nhk.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF

// Create the same daemon.json on node1 and node2 with the identical heredoc.

// Restart Docker
[root@master ~]# systemctl restart docker
[root@master ~]# docker info
Client:
 Context: default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)

Server:
 Containers: 0
 Images: 0
 Server Version: 20.10.12
 Storage Driver: overlay2
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Kernel Version: 4.18.0-257.el8.x86_64
 Operating System: CentOS Stream 8
 CPUs: 4
 Total Memory: 3.622GiB
 Name: master.example.com
 Registry Mirrors:
  https://faq69nhk.mirror.aliyuncs.com/    // the mirror is configured successfully
 Live Restore Enabled: false
(output trimmed)

// Restart Docker and check `docker info` on node1 and node2 the same way;
// both should likewise report "Cgroup Driver: systemd" and the Aliyun mirror.
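kubeadm 1.20 expects the kubelet and Docker to agree on the cgroup driver, which is exactly why daemon.json sets native.cgroupdriver=systemd above. Rather than eyeballing `docker info`, the check can be scripted; the helper below is my own sketch, and on a live host you would feed it `docker info -f '{{.CgroupDriver}}'`.

```shell
# Report whether the detected cgroup driver matches what the kubelet expects.
check_cgroup_driver() {
  if [ "$1" = "systemd" ]; then
    echo "ok: cgroup driver is systemd"
  else
    echo "mismatch: got '$1', expected 'systemd'"
    return 1
  fi
}

# Real usage: check_cgroup_driver "$(docker info -f '{{.CgroupDriver}}')"
```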
4.2 Add the Aliyun Kubernetes YUM Repository
// On all hosts
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@master ~]# yum clean all
[root@master ~]# yum makecache

// Create the same kubernetes.repo file on node1 and node2, then run
// `yum clean all` and `yum makecache` on each of them as well.
4.3 Install kubeadm, kubelet, and kubectl
Kubernetes releases frequently, so pin an explicit version here:
[root@master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@master ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

[root@node1 ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@node1 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

[root@node2 ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@node2 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
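Pinning all three packages to the same version matters: a version skew between kubeadm and kubelet can make init or join fail. A tiny helper (my own naming, not a kubeadm tool) that builds the pinned install command so the version is typed only once:

```shell
# Build the pinned yum install command for a given Kubernetes version.
k8s_install_cmd() {
  local ver=$1
  echo "yum install -y kubelet-${ver} kubeadm-${ver} kubectl-${ver}"
}

# Usage on each host (not run here):
# $(k8s_install_cmd 1.20.0) && systemctl enable kubelet
```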
5. Deploy the Kubernetes Master
Run on 192.168.8.120 (the master).
[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.8.120 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
    [WARNING FileExisting-tc]: tc not found in system path
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.example.com] and IPs [10.96.0.1 192.168.8.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master.example.com] and IPs [192.168.8.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master.example.com] and IPs [192.168.8.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.501797 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.example.com as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: m1xenp.ra8y9d28h88dyfe6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:    ## as a regular user, run these three commands

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf    ## as root, this single command is enough

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
    --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[root@master ~]#

// Save the join command above to a file for later use
[root@master ~]# vim init
[root@master ~]# cat init
kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
    --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[root@master ~]#
The default image registry k8s.gcr.io is not reachable from mainland China, which is why the init command above specifies the Aliyun mirror registry.
Use the kubectl tool:
// Make the KUBECONFIG environment variable permanent
// On the master host
[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@master ~]# source /etc/profile.d/k8s.sh

// Check that the control-plane node is registered
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com NotReady control-plane,master 13m v1.20.0
// NotReady is expected at this point; the node turns Ready once a CNI plugin is installed (next section).
6. Install a Pod Network Plugin (CNI)
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]#
Make sure the nodes can reach the quay.io registry.
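After applying the manifest, the flannel pods must reach Running before the nodes go Ready. When eyeballing long `kubectl get pods -n kube-system` listings, a tiny filter helps; the helper below is my own sketch.

```shell
# Count pods reported as Running in `kubectl get pods` output.
count_running() {
  grep -c '[[:space:]]Running[[:space:]]'
}

# Real usage: kubectl get pods -n kube-system | count_running
```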
7. Join the Kubernetes Nodes
Run on 192.168.8.128 and 192.168.8.152 (the nodes).
To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed:
[root@master ~]# cat init
kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
    --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[root@master ~]#

// On node1
[root@node1 ~]# kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
> --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
    [WARNING Hostname]: hostname "node1.example.com" could not be reached
    [WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 192.168.8.2:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#

// On node2
[root@node2 ~]# kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
> --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
    [WARNING Hostname]: hostname "node2.example.com" could not be reached
    [WARNING Hostname]: hostname "node2.example.com": lookup node2.example.com on 192.168.8.2:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node2 ~]#
// Check the nodes. If a node still shows NotReady, its components are still starting in the background; wait a moment and it will turn Ready.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 27m v1.20.0
node1.example.com Ready <none> 4m20s v1.20.0
node2.example.com Ready <none> 4m15s v1.20.0
[root@master ~]#

// NOTE!
// How to delete a node from the cluster (not needed while deploying; shown for reference only)
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 44h v1.20.0
node1.example.com NotReady <none> 43h v1.20.0
node2.example.com Ready <none> 43h v1.20.0
[root@master ~]# kubectl delete nodes node1.example.com
node "node1.example.com" deleted
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 44h v1.20.0
node2.example.com Ready <none> 43h v1.20.0
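`kubectl delete nodes` removes the node object immediately; in practice you usually drain the node first so its pods are evicted cleanly. The sequence below is a sketch of that safer order (the `--ignore-daemonsets` flag is needed because flannel and kube-proxy run as DaemonSets); the helper function is my own, it only prints the commands.

```shell
# Print the usual removal sequence for a worker node (sketch).
node_removal_cmds() {
  local node=$1
  echo "kubectl drain ${node} --ignore-daemonsets"
  echo "kubectl delete node ${node}"
}

# After removal, run `kubeadm reset` on the removed node itself so it can
# rejoin cleanly later.
```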
// Once the join succeeds, node1 and node2 automatically pull some images and run containers
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.15.1 e6ea68648f0c 5 weeks ago 69.5MB
rancher/mirrored-flannelcni-flannel-cni-plugin v1.0.0 cd5235cd7dc2 7 weeks ago 9.03MB
registry.aliyuncs.com/google_containers/kube-proxy v1.20.0 10cc881966cf 12 months ago 118MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 22 months ago 683kB
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6100c28c624 e6ea68648f0c "/opt/bin/flanneld -…" About a minute ago Up About a minute k8s_kube-flannel_kube-flannel-ds-bdpn4_kube-system_96ef077d-4074-4560-94b5-a9f7e501e5e7_0
3e8e1acde157 registry.aliyuncs.com/google_containers/kube-proxy "/usr/local/bin/kube…" 2 minutes ago Up 2 minutes k8s_kube-proxy_kube-proxy-2x7vb_kube-system_39d424f9-8c51-4d21-b88e-4d2e87fd3221_0
3b5bc96fcabc registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-flannel-ds-bdpn4_kube-system_96ef077d-4074-4560-94b5-a9f7e501e5e7_0
1ff87315f455 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-2x7vb_kube-system_39d424f9-8c51-4d21-b88e-4d2e87fd3221_0
[root@node1 ~]#

[root@node2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.15.1 e6ea68648f0c 5 weeks ago 69.5MB
rancher/mirrored-flannelcni-flannel-cni-plugin v1.0.0 cd5235cd7dc2 7 weeks ago 9.03MB
registry.aliyuncs.com/google_containers/kube-proxy v1.20.0 10cc881966cf 12 months ago 118MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 22 months ago 683kB
[root@node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62ccccb7d37e e6ea68648f0c "/opt/bin/flanneld -…" 2 minutes ago Up 2 minutes k8s_kube-flannel_kube-flannel-ds-gxbvs_kube-system_f2ec47a2-388a-489b-984d-1751a3eb8ff8_0
98763f5ae1bd registry.aliyuncs.com/google_containers/kube-proxy "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-pl2mc_kube-system_5bb6310b-88ca-4471-9fbf-dc562645eb06_0
5a00d5d02749 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-proxy-pl2mc_kube-system_5bb6310b-88ca-4471-9fbf-dc562645eb06_0
d1e1898fbb65 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-flannel-ds-gxbvs_kube-system_f2ec47a2-388a-489b-984d-1751a3eb8ff8_0
[root@node2 ~]#
// List the namespaces
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 31m
kube-node-lease Active 31m
kube-public Active 31m
kube-system Active 31m

// Inspect the pods in a given namespace
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-6w5zs 1/1 Running 0 31m
coredns-7f89b7bc75-9jkvz 1/1 Running 0 31m
etcd-master.example.com 1/1 Running 0 31m
kube-apiserver-master.example.com 1/1 Running 0 31m
kube-controller-manager-master.example.com 1/1 Running 0 31m
kube-flannel-ds-bdpn4 1/1 Running 0 8m41s
kube-flannel-ds-gxbvs 1/1 Running 0 8m36s
kube-flannel-ds-pmn55 1/1 Running 0 14m
kube-proxy-2x7vb 1/1 Running 0 8m41s
kube-proxy-d8fb5 1/1 Running 0 31m
kube-proxy-pl2mc 1/1 Running 0 8m36s
kube-scheduler-master.example.com 1/1 Running 0 31m

// See which node each pod is running on
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7f89b7bc75-6w5zs 1/1 Running 0 31m 10.244.0.3 master.example.com <none> <none>
coredns-7f89b7bc75-9jkvz 1/1 Running 0 31m 10.244.0.2 master.example.com <none> <none>
etcd-master.example.com 1/1 Running 0 31m 192.168.8.120 master.example.com <none> <none>
kube-apiserver-master.example.com 1/1 Running 0 31m 192.168.8.120 master.example.com <none> <none>
kube-controller-manager-master.example.com 1/1 Running 0 31m 192.168.8.120 master.example.com <none> <none>
kube-flannel-ds-bdpn4 1/1 Running 0 8m53s 192.168.8.128 node1.example.com <none> <none>
kube-flannel-ds-gxbvs 1/1 Running 0 8m48s 192.168.8.152 node2.example.com <none> <none>
kube-flannel-ds-pmn55 1/1 Running 0 14m 192.168.8.120 master.example.com <none> <none>
kube-proxy-2x7vb 1/1 Running 0 8m53s 192.168.8.128 node1.example.com <none> <none>
kube-proxy-d8fb5 1/1 Running 0 31m 192.168.8.120 master.example.com <none> <none>
kube-proxy-pl2mc 1/1 Running 0 8m48s 192.168.8.152 node2.example.com <none> <none>
kube-scheduler-master.example.com 1/1 Running 0 31m 192.168.8.120 master.example.com <none> <none>
[root@master ~]#
8. Test the Kubernetes Cluster
[root@master ~]# kubectl create deployment nginx --image nginx
deployment.apps/nginx created

// Expose the deployment. What gets exposed is the Service's port: from outside
// the cluster you cannot reach a pod by its IP or hostname, only through the Service.
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx NodePort 10.104.160.232 <none> 80:31085/TCP 24s
[root@master ~]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-2kpth 1/1 Running 0 61s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
service/nginx NodePort 10.104.160.232 <none> 80:31085/TCP 37s
[root@master ~]#
[root@master ~]# curl 10.104.160.232
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master ~]#

// Test whether the pod IP is pingable (this IP can change at any time)
[root@master ~]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.315 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.231 ms

// Test HTTP access to the pod
[root@master ~]# curl http://10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
Access from a browser:
http://<master IP>:<Service NodePort>    (the NodePort here is 31085)
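Because the Service is type NodePort, port 31085 is actually open on every node, not just the master. A small helper to build the candidate URLs for checking each node (the helper name is my own; the port and IPs below are the ones from this run and will differ on yours):

```shell
# Build http URLs for a NodePort service across all node IPs.
nodeport_urls() {
  local port=$1; shift
  local ip
  for ip in "$@"; do
    echo "http://${ip}:${port}"
  done
}

# Real usage:
# for u in $(nodeport_urls 31085 192.168.8.120 192.168.8.128 192.168.8.152); do
#   curl -s "$u" >/dev/null && echo "$u ok"
# done
```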
9. Troubleshooting Cluster Deployment
9.1 Case One
A common failure when using kubeadm join to add a node to the master is the error `error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s` — the node fails to register, and the attempt is abandoned after a five-minute timeout. The details:
[root@node1 ~]# kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 \
> --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
    [WARNING Hostname]: hostname "node1.example.com" could not be reached
    [WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 192.168.8.2:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
// Solution
[root@node1 ~]# echo "1">/proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]# chmod +rw /proc/sys/net/bridge/bridge-nf-call-iptables
chmod: changing permissions of '/proc/sys/net/bridge/bridge-nf-call-iptables': Operation not permitted
[root@node1 ~]# kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
	[WARNING Hostname]: hostname "node1.example.com" could not be reached
	[WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 192.168.8.2:53: no such host
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "m1xenp"
To see the stack trace of this error execute with --v=5 or higher
// Comparing this output with the first attempt, the fatal error below has been resolved, but the join still fails with a different error. That remaining error is addressed in Case 2.
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
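Note that writing to /proc with echo only lasts until the next reboot. To make the setting permanent, the usual approach is a sysctl drop-in file (a sketch; the file name is arbitrary, the paths are the standard sysctl locations):

```
# /etc/sysctl.d/k8s.conf — apply with: sysctl --system
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```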
9.2 Case 2
The token has expired, so a new one must be generated with kubeadm:
[root@node1 ~]# kubeadm join 192.168.8.120:6443 --token m1xenp.ra8y9d28h88dyfe6 --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
	[WARNING Hostname]: hostname "node1.example.com" could not be reached
	[WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 192.168.8.2:53: no such host
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "m1xenp"
To see the stack trace of this error execute with --v=5 or higher
// Solution
// Regenerate the token on the master
[root@master ~]# kubeadm token generate    # generate a new token
fen9ed.98vjvkhle103ufht    # this value is used in the next command
[root@master ~]# kubeadm token create fen9ed.98vjvkhle103ufht --print-join-command --ttl=0    # print the join command for this token
kubeadm join 192.168.8.120:6443 --token fen9ed.98vjvkhle103ufht --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
Then run the kubeadm join command printed above on each node you want to add.
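A mistyped or truncated token is another common cause of the JWS-signature error above. kubeadm bootstrap tokens always follow a fixed pattern: a 6-character ID and a 16-character secret, both lowercase [a-z0-9], joined by a dot. A quick format sanity check (an illustrative helper, not part of kubeadm):

```python
import re

# kubeadm bootstrap tokens are "<token id>.<token secret>":
# a 6-character id and a 16-character secret, both over [a-z0-9]
TOKEN_RE = re.compile(r"[a-z0-9]{6}\.[a-z0-9]{16}")

def is_valid_token_format(token: str) -> bool:
    """Return True if token matches the kubeadm bootstrap-token format."""
    return TOKEN_RE.fullmatch(token) is not None

print(is_valid_token_format("fen9ed.98vjvkhle103ufht"))  # True: token from this article
print(is_valid_token_format("fen9ed.98vjvkhle"))         # False: secret too short
```

A format check like this cannot tell whether the token has expired, of course; for that, run kubeadm token list on the master.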
// The node now joins successfully
[root@node1 ~]# kubeadm join 192.168.8.120:6443 --token fen9ed.98vjvkhle103ufht --discovery-token-ca-cert-hash sha256:7dab997afd127fc4c9921808d04be57953694cc694a99c46925ae0f2b50e4308
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
	[WARNING Hostname]: hostname "node1.example.com" could not be reached
	[WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 192.168.8.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node1 ~]#
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready control-plane,master 45h v1.20.0
node1.example.com Ready <none> 20m v1.20.0
node2.example.com Ready <none> 44h v1.20.0
[root@master ~]#
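Notice that the --discovery-token-ca-cert-hash value stayed the same across both join commands even though the token changed: it is the SHA-256 digest of the cluster CA's public key (the DER-encoded SubjectPublicKeyInfo from /etc/kubernetes/pki/ca.crt), not of the token, so it only changes if the CA does. A minimal sketch of how that value is formed (the byte string here is a placeholder, not a real key):

```python
import hashlib

def ca_cert_hash(spki_der: bytes) -> str:
    """Format a public-key digest the way kubeadm join expects it."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Placeholder input; the real input is the CA's DER-encoded public key
print(ca_cert_hash(b"example-public-key-bytes"))
```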