[K8s Cluster Installation II] K8s Cluster Installation Steps
- K8s cluster initialization
- Troubleshooting
- Change the hostname
- Verify the cluster is working
- K8s deployment summary
- Install the container runtime (v1.24 and later)
- 1. Download containerd-1.6.6-linux-amd64.tar.gz
- Install the containerd systemd unit
- Install runc
- Add the Kubernetes yum repository
- Downgrade packages (optional)
- Customize containerd (run on every node)
- List the required images
- Common commands
- Reinitialize the cluster
- Remove a node from the cluster
- Error messages
- Video link
- Tips
[K8s Cluster Installation I] K8s Cluster Installation Steps
K8s Cluster Initialization
# Pin the Kubernetes version (the kubeadm flag is --kubernetes-version)
kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.24.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.10.100
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.016692 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: bck71j.fog6srhqn4admzag
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token bck71j.fog6srhqn4admzag \
--discovery-token-ca-cert-hash sha256:54dcaff96319c07f8de243c920751f4e54962b1587c8d9d5358990ef00c5b77f
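The `--discovery-token-ca-cert-hash` value in the join command is just the SHA-256 digest of the cluster CA's public key (DER-encoded). If you lose the join command, the hash can be recomputed from /etc/kubernetes/pki/ca.crt; a minimal sketch using a throwaway self-signed certificate as a stand-in for the real CA file:

```shell
# Compute a kubeadm-style discovery hash: sha256 of the CA public key in DER form.
# On a real control plane, replace the throwaway cert with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```

On a working control-plane node, `kubeadm token create --print-join-command` regenerates the complete join command, hash included.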
Troubleshooting
# If initialization fails with the following error:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:[ERROR CRI]: container runtime is not running: output: time="2022-07-19T15:55:47+08:00" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

# Cause: starting with the v1.24 series, the CRI tools are no longer bundled and must be installed separately.
# Fix: install crictl and critest from https://github.com/kubernetes-sigs/cri-tools
VERSION="v1.24.2"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz

VERSION="v1.24.2"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/critest-$VERSION-linux-amd64.tar.gz
sudo tar zxvf critest-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f critest-$VERSION-linux-amd64.tar.gz
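Once crictl is installed, it helps to point it at the containerd socket; otherwise every invocation needs `--runtime-endpoint` (and newer versions warn about deprecated defaults). A sketch of the config file; the path and the containerd socket location are the conventional defaults, adjust if your setup differs:

```shell
# Write crictl's config so it talks to containerd by default.
# CRICTL_CONFIG defaults to the standard /etc/crictl.yaml location.
CRICTL_CONFIG=${CRICTL_CONFIG:-/etc/crictl.yaml}
cat <<'EOF' > "$CRICTL_CONFIG"
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```

After this, `crictl ps -a` and `crictl images` (used later in this article) work without extra flags.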
Change the Hostname
hostnamectl set-hostname master1
# Copy root's cluster-admin kubeconfig to the worker node
scp -r /root/.kube root@worker1:/root
Verify the Cluster Is Working
# List cluster nodes
kubectl get nodes
# Check component health
kubectl get cs
kubectl cluster-info
kubectl get pods --namespace kube-system
K8s Deployment Summary
(flannel installation source) GitHub - flannel-io/flannel: flannel is a network fabric for containers, designed for Kubernetes
Install the Container Runtime (v1.24 and later)
1. First download containerd-1.6.6-linux-amd64.tar.gz
containerd/getting-started.md at main · containerd/containerd
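The release tarball simply unpacks a bin/ directory; the containerd getting-started guide extracts it under /usr/local. A sketch of the extract step, wrapped in a small helper so the prefix is explicit (the v1.6.6 download URL follows the GitHub releases naming scheme):

```shell
# Unpack a containerd release tarball into a prefix (the docs use /usr/local,
# which places containerd, ctr, and the shims under /usr/local/bin).
install_tarball() {
  tarball=$1
  prefix=$2
  mkdir -p "$prefix"
  tar -C "$prefix" -xzf "$tarball"
}

# Usage, after downloading from https://github.com/containerd/containerd/releases:
#   wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
#   install_tarball containerd-1.6.6-linux-amd64.tar.gz /usr/local
```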
# Enable and start the containerd service
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
systemctl status containerd
Install the containerd systemd Unit
# Create the unit directory and install containerd.service
mkdir -p /usr/local/lib/systemd/system
# https://github.com/containerd/containerd/blob/main/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
Install runc
# Download from: https://github.com/opencontainers/runc/releases
install -m 755 runc.amd64 /usr/local/sbin/runc
The CNI network add-on itself is set up after the cluster is initialized with kubeadm init.
Add the Kubernetes yum Repository
Mirror index: https://developer.aliyun.com/mirror/?serviceType=&tag=
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.59aa1b11I1JMTz
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Downgrade Packages (optional)
# To downgrade to a specific release, append versions, e.g. kubelet-<version> kubeadm-<version> kubectl-<version>
yum downgrade -y kubelet kubeadm kubectl
Customize containerd (run on every node)
containerd reads daemon-level options from the configuration file /etc/containerd/config.toml; a sample configuration file ships with the containerd repository.
The default configuration can be generated with containerd config default > /etc/containerd/config.toml.
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Then edit the following settings:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
# registry.aliyuncs.com/google_containers/pause:3.7 is the sandbox image for this install (mind the version number)
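The two edits can also be applied non-interactively. A sketch that assumes the file was freshly generated by `containerd config default`, so `SystemdCgroup` is still `false` and `sandbox_image` still points at the upstream registry:

```shell
# Patch a generated containerd config.toml: enable the systemd cgroup driver
# and switch the pause (sandbox) image to the Aliyun mirror.
patch_containerd_config() {
  cfg=$1
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
  sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' "$cfg"
}

# Usage:
#   patch_containerd_config /etc/containerd/config.toml
#   systemctl restart containerd
```

Restarting containerd afterwards is required for the new sandbox image and cgroup driver to take effect.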
List the Required Images
# List the images kubeadm needs
kubeadm config images list
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.24.2
Common Commands
# Inspect cluster state
kubectl get node -o wide
kubectl get pods --all-namespaces -owide
journalctl -u kubelet
kubectl cluster-info
kubectl get pod -n kube-system |grep "flannel"
kubectl delete pod -n kube-system coredns-74586cf9b6-xjqp7
kubectl apply -f kube-flannel.yml
Reinitialize the Cluster
Shell history (`history` output) from the re-initialization and flannel debugging session:

334 kubeadm reset
335 kubectl get pod -A
336 kubectl get node
337 kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.10.100
338 mkdir -p $HOME/.kube
339 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
340 kubectl get node
341 kubectl get nodes
342 kubectl get pod -A
343 kubectl get pod -n kube-system
344 # delete $HOME/.kube
345 rm -rf $HOME/.kube
346 kubectl get pod -n kube-system
347 mkdir -p $HOME/.kube
348 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
349 sudo chown $(id -u):$(id -g) $HOME/.kube/config
350 kubectl get pod -n kube-system
351 kubectl get pod -A
352 kubectl apply -f kube-flannel.yml
353 kubectl get pod -A
354 kubectl get pod -n kube-system
355 kubectl get pod -A
356 kubectl get pod -n kube-system
357 kubectl get pod -A
358 history
359 kubectl version
360 cd /etc/
361 ll
362 cd containerd/
363 ll
364 vi /etc/containerd/config.toml
365 kubectl get node -o wide
366 kubectl get node -h
367 kubectl get node -o
368 kubectl get node -o -h
369 kubectl get node -o --help
370 kubectl get node -o yaml
371 wide
372 kubectl get node -o wide
373 kubectl get node -A -o wide
374 kubectl get pod -A -o wide
375 scp -r /root/.kube root@worker1:/root
376 scp -r /root/.kube root@worker2:/root
377*
378 kubectl get pod
379 kubectl get pod -A
380 cd /opt/bin/
381 ll
382 cd flanneld/
383 ll
384 kubectl apply -f kube-flannel.yml
385 cat /run/flannel/subnet.env
386 vi /run/flannel/subnet.env
387 kubectl apply -f kube-flannel.yml
388 kubectl get node -o wide
389 journalctl -u kubelet
390 kubectl get pods -n kube-system -owide
391 kubectl get node -o wide
392 kubectl get pods --all-namespaces -owide
393 kubectl describe pod kube-flannel-ds-5v4rt -n kube-flannel
394 ctr images
395 ctr images pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
396 kubectl describe pod kube-flannel-ds-5v4rt -n kube-flannel
397 kubectl get pods --all-namespaces -owide
398 cat /etc/containerd/config.toml
399 journalctl -u flanneld
400 kubectl logs --namespace kube-system kube-flannel-ds-5v4rt -c kube-flannel
401 kubectl logs kube-flannel-ds-5v4rt -c kube-flannel
402 kubectl logs kube-flannel-ds-5v4rt
403 kubectl get pods --all-namespaces -owide
404 kubectl describe pod kube-flannel -n kube-system
405 kubectl describe pod kube-flannel
406 kubectl describe pod kube-flannel-ds-fdtb7 -n kube-flannel
407 kubectl get pods --all-namespaces -owide
408 pwd
409 ll
410 cat kube-flannel.yml
411 kubectl get pods --all-namespaces -owide
412 kubectl describe pod kube-flannel-ds-fdtb7 -n kube-flannel
413 kubectl get pods --all-namespaces -owide
414 crictl images
415 crictl ps -a
416 crictl version
417 crictl -logs a75
418 crictl logs a75
419 crictl logs a75ec7b48983e
420 crictl ps -a
421 crictl logs 18d
422 vim /etc/kubernetes/manifests/kube-controller-manager.yaml
423 crictl logs 18d
424 crictl ps -a
425 systemctl restart kubelet
426 kubectl get nodes
427 kubectl get pods --all-namespaces -owide
428 cat /run/flannel/subnet.env
429 ifconfig -a
430 kubectl get pods --all-namespaces -owide
431 kubectl delete pod -n kube-system kube-flannel-*
432 kubectl delete pod -n kube-flannel kube-flannel-*
433 kubectl delete pod -n kube-flannel kube-flannel-ds-5v4rt
434 kubectl delete pod -n kube-flannel kube-flannel-ds-fdtb7
435 kubectl get pods --all-namespaces -owide
436 kubectl delete pod -n kube-flannel kube-flannel-ds-fdtb7
437 kubectl get pods --all-namespaces -owide
438 kubectl delete pod -n kube-system kube-controller-manager-master1
439 kubectl get pods --all-namespaces -owide
440 kubectl get nodes
441 kubectl delete node worker1
442 kubectl get nodes
443 kubectl delete node worker2
444 kubectl get nodes
445 kubeadm config -h
446 kubeadm config images list
Remove a Node from the Cluster
# Mark the node unschedulable
kubectl cordon <NODE_NAME>
# Then evict the pods running on the node
kubectl drain --ignore-daemonsets <NODE_NAME>
# --ignore-daemonsets skips DaemonSet-managed pods
# List all pods on the node
kubectl get pod -A -o wide | grep <NODE_NAME>
# Delete pods so they are rescheduled onto other nodes
kubectl delete pod -n <NAMESPACE> <POD_NAME>
kubectl delete node worker2
# Verify the node resource is gone
kubectl get node | grep <NODE_NAME>
# Verify no pods remain on that node
kubectl get pod -A -o wide | grep <NODE_NAME>
Error Messages
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
# This usually means $HOME/.kube/config still holds the CA of a previous cluster (e.g. after kubeadm reset). Re-copy the freshly generated admin.conf:
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Video Link
https://www.bilibili.com/video/BV1Qt4y1H7fV/?spm_id_from=333.788.recommend_more_video.2
Tips
Uninstalling k8s
(On the control-plane node I ran these several times, including a final drain.)
kubectl drain mynodename --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node mynodename
kubeadm reset
systemctl stop kubelet
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
rm -rf /var/lib/kubelet/*

Uninstall Docker:
docker rm $(docker ps -a -q)
docker stop (as needed)
docker rmi -f $(docker images -q)
# Check that all containers and images have been removed
docker ps -a
docker images
systemctl stop docker
yum remove yum-utils device-mapper-persistent-data lvm2
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum remove docker-ce
rm -rf /var/lib/docker
rm -rf /etc/docker

Uninstall flannel:
rm -rf /var/lib/cni/
rm -rf /run/flannel
rm -rf /etc/cni/

Remove the docker- and flannel-related network interfaces:
ip link
For each docker or flannel interface, run:
ifconfig <name of interface from ip link> down
ip link delete <name of interface from ip link>
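The per-interface teardown can be scripted. A sketch that iterates over any docker/flannel/cni interfaces reported by `ip link`; the name prefixes (docker0, flannel.1, cni0) are the conventional defaults and may differ on your hosts:

```shell
# Bring down and delete every docker-/flannel-related interface.
# Failures (e.g. missing permissions) are tolerated so the loop keeps going.
deleted=0
for ifc in $(ip -o link show | awk -F': ' '{print $2}' | cut -d@ -f1 \
             | grep -E '^(docker|flannel|cni)'); do
  ip link set "$ifc" down 2>/dev/null || true
  ip link delete "$ifc" 2>/dev/null || true
  deleted=$((deleted + 1))
done
echo "removed $deleted interfaces"
```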