How to delete rc, deployment, and service stuck in an inconsistent state

In some cases the kubectl process hangs, and a subsequent get shows that only half of the resources were deleted while the rest cannot be removed:

[root@k8s-master ~]# kubectl get -f fluentd-elasticsearch/
NAME                          DESIRED   CURRENT   READY     AGE
rc/elasticsearch-logging-v1   0         2         2         15h

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kibana-logging    0         1         1            1           15h
Error from server (NotFound): services "elasticsearch-logging" not found
Error from server (NotFound): daemonsets.extensions "fluentd-es-v1.22" not found
Error from server (NotFound): services "kibana-logging" not found

The commands to delete these deployments, services, or rc are:

kubectl delete deployment kibana-logging -n kube-system --cascade=false
kubectl delete deployment kibana-logging -n kube-system --ignore-not-found
kubectl delete rc elasticsearch-logging-v1 -n kube-system --force --grace-period=0

How to reset etcd when resources cannot be deleted

rm -rf /var/lib/etcd/*

After deleting the data, reboot the master node.

After resetting etcd, the network configuration has to be recreated:

etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'
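To confirm the key is back and that dependent services pick it up, a minimal sketch (assuming flannel reads the etcd v2 key above and etcd listens on the default local endpoint; adjust the unit names to your setup):

etcdctl get /atomic.io/network/config
systemctl restart flanneld docker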

Failed to start apiserver

Every start attempt reports the following:

start request repeated too quickly for kube-apiserver.service

This is not actually a restart-rate problem; check /var/log/messages. In my case the cause was that, with ServiceAccount enabled, ca.crt and related files could not be found, so startup failed.

May 21 07:56:41 k8s-master kube-apiserver: Flag --port has been deprecated, see --insecure-port instead.
May 21 07:56:41 k8s-master kube-apiserver: F0521 07:56:41.692480 4299 universal_validation.go:104] Validate server run options failed: unable to load client CA file: open /var/run/kubernetes/ca.crt: no such file or directory
May 21 07:56:41 k8s-master systemd: kube-apiserver.service: main process exited, code=exited, status=255/n/a
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.
May 21 07:56:41 k8s-master systemd: Unit kube-apiserver.service entered failed state.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service failed.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service holdoff time over, scheduling restart.
May 21 07:56:41 k8s-master systemd: start request repeated too quickly for kube-apiserver.service
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.

When deploying fluentd and other logging components, many problems come from the security setup required once the ServiceAccount option is enabled, so in the end the ServiceAccount still has to be configured properly.

Permission denied errors

When configuring fluentd, the error cannot create /var/log/fluentd.log: Permission denied appears; this is because SELinux has not been disabled.

Set SELINUX=enforcing to disabled in /etc/selinux/config, then reboot (or apply the change immediately as shown below).
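If you prefer not to wait for a reboot, a minimal sketch (assuming root privileges; setenforce only switches the running system to permissive, and the sed edit makes the change permanent):

# switch the running system to permissive immediately
setenforce 0
# persist the change across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config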

ServiceAccount-based configuration

First generate the required keys; replace k8s-master with your master's hostname.

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-master" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
echo subjectAltName=IP:10.254.0.1 > extfile.cnf
# The IP is determined by the following command:
# kubectl get services --all-namespaces |grep 'default'|grep 'kubernetes'|grep '443'|awk '{print $3}'
openssl req -new -key server.key -subj "/CN=k8s-master" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile extfile.cnf -out server.crt -days 10000
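To double-check that the service cluster IP really ended up in the certificate, a quick verification sketch (assuming the files above were generated in the current directory):

openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'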

If you instead put these parameters into the /etc/kubernetes/apiserver config file, starting with systemctl start kube-apiserver fails with:

Validate server run options failed: unable to load client CA file: open /root/keys/ca.crt: permission denied

However, the API Server can be started from the command line:

/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://k8s-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount --insecure-bind-address=0.0.0.0 --client-ca-file=/root/keys/ca.crt --tls-cert-file=/root/keys/server.crt --tls-private-key-file=/root/keys/server.key --basic-auth-file=/root/keys/basic_auth.csv --secure-port=443 &>> /var/log/kubernetes/kube-apiserver.log &
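One possible reason the systemd-managed service cannot read the keys while the manual command can is that the files live under /root. A hedged sketch of relocating them (the /etc/kubernetes/keys path is only an example; adjust the flags in /etc/kubernetes/apiserver to match the new location):

mkdir -p /etc/kubernetes/keys
cp /root/keys/ca.crt /root/keys/server.crt /root/keys/server.key /root/keys/basic_auth.csv /etc/kubernetes/keys/
chmod 644 /etc/kubernetes/keys/ca.crt /etc/kubernetes/keys/server.crt
# restore the SELinux context if SELinux is still enabled
restorecon -Rv /etc/kubernetes/keys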

Start Controller-manager from the command line:

/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-master:8080 --root-ca-file=/root/keys/ca.crt --service-account-private-key-file=/root/keys/server.key >> /var/log/kubernetes/kube-controller-manage.log 2>&1 &

etcd fails to start: problem <1>

etcd plays the same role for a Kubernetes cluster that ZooKeeper does elsewhere: almost every service depends on etcd being up, for example flanneld, apiserver, docker... The error log when starting etcd is:

May 24 13:39:09 k8s-master systemd: Stopped Flanneld overlay address etcd agent.
May 24 13:39:28 k8s-master systemd: Starting Etcd Server...
May 24 13:39:28 k8s-master etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: etcd Version: 3.1.3
May 24 13:39:28 k8s-master etcd: Git SHA: 21fdcc6
May 24 13:39:28 k8s-master etcd: Go Version: go1.7.4
May 24 13:39:28 k8s-master etcd: Go OS/Arch: linux/amd64
May 24 13:39:28 k8s-master etcd: setting maximum number of CPUs to 1, total number of available CPUs is 1
May 24 13:39:28 k8s-master etcd: the server is already initialized as member before, starting as etcd member...
May 24 13:39:28 k8s-master etcd: listening for peers on http://localhost:2380
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:2379
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:4001
May 24 13:39:28 k8s-master etcd: recovered store from snapshot at index 140014
May 24 13:39:28 k8s-master etcd: name = master
May 24 13:39:28 k8s-master etcd: data dir = /var/lib/etcd/default.etcd
May 24 13:39:28 k8s-master etcd: member dir = /var/lib/etcd/default.etcd/member
May 24 13:39:28 k8s-master etcd: heartbeat = 100ms
May 24 13:39:28 k8s-master etcd: election = 1000ms
May 24 13:39:28 k8s-master etcd: snapshot count = 10000
May 24 13:39:28 k8s-master etcd: advertise client URLs = http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: ignored file 0000000000000001-0000000000012700.wal.broken in wal
May 24 13:39:29 k8s-master etcd: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 148905
May 24 13:39:29 k8s-master etcd: 8e9e05c52164694d became follower at term 12
May 24 13:39:29 k8s-master etcd: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 12, commit: 148905, applied: 140014, lastindex: 148905, lastterm: 12]
May 24 13:39:29 k8s-master etcd: enabled capabilities for version 3.1
May 24 13:39:29 k8s-master etcd: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
May 24 13:39:29 k8s-master etcd: set the cluster version to 3.1 from store
May 24 13:39:29 k8s-master etcd: starting server... [version: 3.1.3, cluster version: 3.1]
May 24 13:39:29 k8s-master etcd: raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory
May 24 13:39:29 k8s-master systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
May 24 13:39:29 k8s-master systemd: Failed to start Etcd Server.
May 24 13:39:29 k8s-master systemd: Unit etcd.service entered failed state.
May 24 13:39:29 k8s-master systemd: etcd.service failed.
May 24 13:39:29 k8s-master systemd: etcd.service holdoff time over, scheduling restart.

The key line:

raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory

Go into that directory, delete 0.tmp, and etcd will start again (see the sketch below).
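A minimal sketch of the fix, assuming the data dir shown in the log above:

rm -rf /var/lib/etcd/default.etcd/member/wal/0.tmp
systemctl restart etcd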

etcd fails to start: timeout problem <2>

Background: three etcd nodes were deployed, and one day all three machines lost power and went down at once. After powering back on, the K8s cluster worked normally, but a check of the components found that etcd on one node would not start.

After some investigation the system time turned out to be wrong. It was corrected with ntpdate ntp.aliyun.com, but after restarting etcd it still would not come up, with the following errors:

Mar 05 14:27:15 k8s-node2 etcd[3248]: etcd Version: 3.3.13
Mar 05 14:27:15 k8s-node2 etcd[3248]: Git SHA: 98d3084
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go Version: go1.10.8
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go OS/Arch: linux/amd64
Mar 05 14:27:15 k8s-node2 etcd[3248]: setting maximum number of CPUs to 4, total number of available CPUs is 4
Mar 05 14:27:15 k8s-node2 etcd[3248]: the server is already initialized as member before, starting as etcd member
...
Mar 05 14:27:15 k8s-node2 etcd[3248]: peerTLS: cert = /opt/etcd/ssl/server.pem, key = /opt/etcd/ssl/server-key.pem, ca = , trusted-ca = /opt/etcd/ssl/ca.pem, client-cert-auth = false, crl-file =
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for peers on https://192.168.25.226:2380
Mar 05 14:27:15 k8s-node2 etcd[3248]: The scheme of client url http://127.0.0.1:2379 is HTTP while peer key/cert files are presented. Ignored key/cert files.
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 127.0.0.1:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 192.168.25.226:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: member 9c166b8b7cb6ecb8 has already been bootstrapped
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Mar 05 14:27:15 k8s-node2 systemd[1]: Failed to start Etcd Server.
Mar 05 14:27:15 k8s-node2 systemd[1]: Unit etcd.service entered failed state.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service failed.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service failed.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service holdoff time over, scheduling restart.
Mar 05 14:27:15 k8s-node2 systemd[1]: Starting Etcd Server...
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_LISTEN_PEER_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_ADVERTISE_CLIENT_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER_TOKEN, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER_STATE, but unused: shadowed by corresponding flag

Solution:

The logs show no particularly obvious error. In practice, losing one etcd node has little impact on the cluster, and the cluster was already usable, but the broken etcd member still would not start. The fix is as follows.

Go into etcd's data directory and back up the existing data:

cd /var/lib/etcd/default.etcd/member/
cp -r * /data/bak/

Delete all the data files under this directory:

rm -rf /var/lib/etcd/default.etcd/member/*

Stop the other two etcd nodes as well, because the etcd members need to be started together; once the cluster comes up it can be used again:

# master node
systemctl stop etcd
systemctl restart etcd

# node1 node
systemctl stop etcd
systemctl restart etcd

# node2 node
systemctl stop etcd
systemctl restart etcd
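Once all three members are back, the cluster state can be checked with the etcd v2 etcdctl. A sketch assuming the TLS files from the log above and one of the node endpoints (replace the IP with your own):

etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.25.226:2379" cluster-health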

Setting up mutual SSH trust between hosts on CentOS

On each server, as the user that needs mutual trust, run the following command to generate the public/private key pair; just press Enter at each prompt.

ssh-keygen -t rsa

You can see that the public key file has been generated.

Exchange the public keys; the first time you need to enter the password, after that it is no longer required.

ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.199.132 (-p 2222)

-p specifies the port; leave out -p for the default port, but if the port has been changed you must add it. You can see that an authorized_keys file is created under .ssh/, recording the public keys of the other servers that are allowed to log in to this one.

Test whether you can log in:

ssh 192.168.199.132 (-p 2222)

Changing the hostname on CentOS

hostnamectl set-hostname k8s-master1
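If other nodes refer to this machine by name, remember to update name resolution as well; a hedged example (the IP below is hypothetical, use the machine's real address):

echo "192.168.199.131 k8s-master1" >> /etc/hosts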

Enabling copy and paste for CentOS in VirtualBox

If a package is not installed or a command produces no output, change update to install and run it again.

yum update
yum update kernel
yum update kernel-devel
yum install kernel-headers
yum install gcc
yum install gcc make

After these finish, run:

sh VBoxLinuxAdditions.run

Deleting a Pod stuck in the Terminating state

It can be force-deleted with the following command:

kubectl delete pod NAME --grace-period=0 --force
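If the pod is still stuck because a finalizer blocks deletion, a related technique (not specific to this article, clearing the finalizers via patch) can help:

kubectl patch pod NAME -n NAMESPACE -p '{"metadata":{"finalizers":null}}'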

Deleting a namespace stuck in the Terminating state

It can be force-deleted with the following script:

[root@k8s-master1 k8s]# cat delete-ns.sh
#!/bin/bash
set -e

usage(){
    echo "usage:"
    echo "  delns.sh NAMESPACE"
}

if [ $# -lt 1 ];then
    usage
    exit 1
fi

NAMESPACE=$1
JSONFILE=${NAMESPACE}.json
kubectl get ns "${NAMESPACE}" -o json > "${JSONFILE}"
vi "${JSONFILE}"
curl -k -H "Content-Type: application/json" -X PUT --data-binary @"${JSONFILE}" \
    http://127.0.0.1:8001/api/v1/namespaces/"${NAMESPACE}"/finalize
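The script assumes an API proxy is listening on 127.0.0.1:8001 and that you remove the kubernetes entry from spec.finalizers while the JSON is open in vi. A minimal usage sketch (the namespace name is hypothetical):

kubectl proxy --port=8001 &
./delete-ns.sh stuck-namespace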

What can go wrong when a container has valid CPU/memory requests but no limits?

Below we create such a container, which only has requests set and no limits:

- name: busybox-cnt02
  image: busybox
  command: ["/bin/sh"]
  args: ["-c", "while true; do echo hello from cnt02; sleep 10; done"]
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"

What problem does this container have once created?

In a normal environment there is no real problem, but under resource pressure, a container without a limit can have its resources taken by other pods, which may cause the application in that container to fail. This can be handled with a LimitRange policy so that pods get limits filled in automatically, provided the LimitRange rules are configured in advance (a sketch follows below).
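A minimal LimitRange sketch that fills in default limits for containers that omit them (the name, namespace, and values are illustrative assumptions, not from the original text):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
  namespace: demo              # hypothetical namespace
spec:
  limits:
  - type: Container
    default:                   # applied as the limit when a container sets none
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:            # applied as the request when a container sets none
      cpu: "100m"
      memory: "100Mi"

Once applied with kubectl apply -f, new containers created in that namespace without limits pick up these defaults.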

K8s study notes: things to watch for when etcd fails to start

While recently setting up a K8s cluster I ran into the etcd errors below. Be sure to turn off the firewall, iptables, and SELinux, all three of them!

Mar 26 20:39:24 k8s-m1 etcd[6437]: health check for peer 3de62d4888b330ab could not connect: dial tcp 192.168.26.137:2380: connect: no route to host (prober "ROUND_TRIPPER_SNAPSHOT")
Mar 26 20:39:24 k8s-m1 etcd[6437]: health check for peer 3de62d4888b330ab could not connect: dial tcp 192.168.26.137:2380: connect: no route to host (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar 26 20:39:24 k8s-m1 etcd[6437]: health check for peer c6f4c021208d2dfe could not connect: dial tcp 192.168.26.136:2380: i/o timeout (prober "ROUND_TRIPPER_SNAPSHOT")
Mar 26 20:39:24 k8s-m1 etcd[6437]: health check for peer c6f4c021208d2dfe could not connect: dial tcp 192.168.26.136:2380: i/o timeout (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar 26 20:39:24 k8s-m1 etcd[6437]: publish error: etcdserver: request timed out

Disable the CentOS 7 firewall:

# check the firewall status
firewall-cmd --state
# stop firewalld
systemctl stop firewalld.service
# prevent firewalld from starting at boot
systemctl disable firewalld.service

Disable SELinux:

# edit the SELinux config file
vim /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled

Flush and save iptables:

# flush the iptables rules
iptables -F
# save
iptables-save





References:

Kubernetes 常见问题总结: https://mp.weixin.qq.com/s/h9ceDnnQtFWP9h3sBvzMjg
K8S 学习笔记之 ETCD 启动失败注意事项: https://www.jianshu.com/p/0ce8119acbdb
