MountVolume.NewMounter initialization failed for volume "pvc-61dedc85-ea5a-4ac7-aaf3-e072e2e46e18"
The error

After Kubernetes was restarted in the local test environment, a StatefulSet started reporting an error.
# Error message
MountVolume.NewMounter initialization failed for volume "pvc-61dedc85-ea5a-4ac7-aaf3-e072e2e46e18" : path "/var/openebs/local/pvc-61dedc85-ea5a-4ac7-aaf3-e072e2e46e18" does not exist
Cause
The observed symptom is that the directory does not exist on the host, i.e. the files written inside Docker were never persisted to the local filesystem.
Let's think through the possible causes. The first suspect is volumeMounts, since mounting is involved.
volumeMounts:
- name: proxysql-data
  mountPath: /var/lib/proxysql

volumeMounts:
- name: proxysql-data
  mountPath: /var/lib/proxysql
  mountPropagation: HostToContainer
# Still empty afterwards, so this is not the problem. Here is what the official docs say about mountPropagation:
# HostToContainer - this volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories by the host.
ls /var/openebs/local
Now for the second suspect: the RECLAIM POLICY set when the StorageClass was created.
-> % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b 1Gi RWO Delete Bound proxysql/proxysql-data-proxysql4406-1 sc-file-hdd 6m58s
pvc-72e28c0a-7c65-4920-917d-a9a47841968c 1Gi RWO Delete Bound proxysql/proxysql-data-proxysql4406-0 sc-file-hdd 7m3s
-> % kubectl patch pv pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
persistentvolume/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b patched
-> % kubectl patch pv pvc-72e28c0a-7c65-4920-917d-a9a47841968c -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
persistentvolume/pvc-72e28c0a-7c65-4920-917d-a9a47841968c patched
-> % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b 1Gi RWO Retain Bound proxysql/proxysql-data-proxysql4406-1 sc-file-hdd 8m6s
pvc-72e28c0a-7c65-4920-917d-a9a47841968c 1Gi RWO Retain Bound proxysql/proxysql-data-proxysql4406-0 sc-file-hdd 8m11s
Then minikube was restarted, and the same error appeared again.
What RECLAIM POLICY means:
- The default reclaim policy is "Delete": when the user deletes the corresponding PersistentVolumeClaim, the dynamically provisioned volume is automatically deleted as well.
- With "Retain", the corresponding PersistentVolume is not deleted when the user deletes the PersistentVolumeClaim.
This setting only matters for operations on the PVC; it does nothing for a Kubernetes restart.
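As an aside, instead of patching each PV after the fact, the reclaim policy can be set on the StorageClass so that new PVs are created with it. A minimal sketch; the class name sc-file-hdd matches this environment, while the OpenEBS cas annotations are assumed from a typical local-hostpath setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-file-hdd
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
reclaimPolicy: Retain            # new PVs get Retain instead of the default Delete
volumeBindingMode: WaitForFirstConsumer
```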
At this point it can only be something in the OpenEBS configuration. Why is the local hostpath directory missing?
kubectl get pv pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b -o yaml
local:
  fsType: ""
  path: /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b
-> % ls /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b
ls: /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b: No such file or directory
The PV is clearly bound to this directory, yet it does not exist on the local filesystem.
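Rather than running ls on one PV at a time, every local PV's configured hostpath can be compared against the filesystem in one pass. A sketch, assuming kubectl access and that the PVs carry a spec.local block as shown above:

```shell
# Flag every local PV whose configured hostpath is missing on disk.
check_pv_paths() {
  # expects "name path" pairs, one per line, on stdin
  while read -r name path; do
    [ -n "$path" ] || continue
    if [ -d "$path" ]; then
      echo "$name OK"
    else
      echo "$name MISSING $path"
    fi
  done
}

# Guarded so the snippet is harmless when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pv \
    -o jsonpath='{range .items[*]}{.metadata.name} {.spec.local.path}{"\n"}{end}' \
    | check_pv_paths
fi
```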
The initial install was the most minimal one; see the official installation instructions:
helm install openebs openebs/openebs -n openebs --create-namespace \
  --set legacy.enabled=false \
  --set ndm.enabled=false \
  --set ndmOperator.enabled=false \
  --set localprovisioner.enableDeviceClass=false \
  --set localprovisioner.basePath=/var/openebs/test
# Redeploy OpenEBS as recommended on the official site
If you would like to use only Local PV (hostpath and device), you can install a lite version of OpenEBS using the following command.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
# Conclusion: redeploying did not help either
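After the redeploy, provisioning was re-tested with a throwaway claim. A sketch of what such a PVC could look like; the claim name local-hostpath-pvc and class sc-file-hdd are assumed from the names that show up in the provisioner logs:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: sc-file-hdd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```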
-> % ll -h /var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8
ls: /var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8: No such file or directory
# Next step: read through the logs of every container involved
-> % kubectl get pods -n openebs | awk '{print $1}' | tail -n +2 | while read pod; do echo "--------------$pod--------------"; kubectl logs $pod --tail=20 -n openebs; echo "\n"; done
## Nothing but success in these logs: Successfully provisioned volume pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8. The complete absence of errors is striking; it is hard to know where to start digging.
--------------openebs-localpv-provisioner-6f686f7697-dvjvb-------------
I1030 03:30:40.038022 1 start.go:75] Starting Provisioner...
I1030 03:30:43.162527 1 start.go:139] Leader election enabled for localpv-provisioner via leaderElectionKey
I1030 03:30:43.358356 1 leaderelection.go:248] attempting to acquire leader lease openebs/openebs.io-local...
I1030 03:31:00.804536 1 leaderelection.go:258] successfully acquired lease openebs/openebs.io-local
I1030 03:31:00.804996 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openebs", Name:"openebs.io-local", UID:"9b9afe88-028e-43ff-ab8c-a18f3f50139d", APIVersion:"v1", ResourceVersion:"96722", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openebs-localpv-provisioner-6f686f7697-dvjvb_d4f6a73b-c3d8-465d-9230-0c060f4d5644 became leader
I1030 03:31:00.805392 1 controller.go:810] Starting provisioner controller openebs.io/local_openebs-localpv-provisioner-6f686f7697-dvjvb_d4f6a73b-c3d8-465d-9230-0c060f4d5644!
I1030 03:31:00.905967 1 controller.go:859] Started provisioner controller openebs.io/local_openebs-localpv-provisioner-6f686f7697-dvjvb_d4f6a73b-c3d8-465d-9230-0c060f4d5644!
I1030 04:45:02.585505 1 controller.go:1279] provision "default/local-hostpath-pvc" class "sc-file-hdd": started
I1030 04:45:02.611805 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-hostpath-pvc", UID:"6361f69b-a77e-4a18-9851-be8a2b8183a8", APIVersion:"v1", ResourceVersion:"106794", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-hostpath-pvc"
I1030 04:45:02.613652 1 provisioner_hostpath.go:69] Creating volume pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8 at node with label kubernetes.io/hostname=minikube, path:/var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8,ImagePullSecrets:[]
2021-10-30T04:45:04.693Z INFO app/provisioner_hostpath.go:173 {"eventcode": "local.pv.provision.success", "msg": "Successfully provisioned Local PV", "rname": "pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8", "storagetype": "hostpath"}
I1030 04:45:04.693892 1 controller.go:1384] provision "default/local-hostpath-pvc" class "sc-file-hdd": volume "pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8" provisioned
I1030 04:45:04.693904 1 controller.go:1397] provision "default/local-hostpath-pvc" class "sc-file-hdd": succeeded
I1030 04:45:04.693911 1 volume_store.go:212] Trying to save persistentvolume "pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8"
I1030 04:45:04.713989 1 volume_store.go:219] persistentvolume "pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8" saved
I1030 04:45:04.715491 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-hostpath-pvc", UID:"6361f69b-a77e-4a18-9851-be8a2b8183a8", APIVersion:"v1", ResourceVersion:"106794", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8
# Only startup messages
--------------openebs-ndm-cluster-exporter-5c985f8b77-jrxwv-------------
E1030 04:36:22.033741 1 logs.go:35] unable to set flag, Error: no such flag -logtostderr
I1030 04:36:22.033876 1 exporter.go:52] Starting NDM Exporter...
I1030 04:36:22.033881 1 exporter.go:53] Version Tag : 1.7.0
I1030 04:36:22.033884 1 exporter.go:54] GitCommit : 6745dcf02d78c3b273c2646ff4bf027709c3a038
I1030 04:36:23.756423 1 exporter.go:108] Starting cluster level exporter . . .
I1030 04:36:23.757050 1 server.go:37] Starting HTTP server at http://localhost:9100/metrics
# Only startup messages
--------------openebs-ndm-node-exporter-vfbzv-------------
E1030 04:36:25.332180 1 logs.go:35] unable to set flag, Error: no such flag -logtostderr
I1030 04:36:25.332495 1 exporter.go:52] Starting NDM Exporter...
I1030 04:36:25.332522 1 exporter.go:53] Version Tag : 1.7.0
I1030 04:36:25.332526 1 exporter.go:54] GitCommit : 6745dcf02d78c3b273c2646ff4bf027709c3a038
I1030 04:36:27.040752 1 exporter.go:119] Starting node level exporter . . .
I1030 04:36:27.041052 1 server.go:37] Starting HTTP server at http://localhost:9101/metrics
# Only startup messages
--------------openebs-ndm-operator-6cf55f778b-f89ns-------------
I1030 04:36:00.135434 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135470 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135579 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135603 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135658 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135753 1 deleg.go:130] controller-runtime/injectors-warning "msg"="Injectors are deprecated, and will be removed in v0.10.x"
I1030 04:36:00.135875 1 main.go:61] Go Version: go1.14.7
I1030 04:36:00.135959 1 main.go:62] Go OS/Arch: linux/amd64
I1030 04:36:00.136008 1 main.go:63] Version Tag: 1.7.0
I1030 04:36:00.136062 1 main.go:64] Git Commit: 6745dcf02d78c3b273c2646ff4bf027709c3a038
I1030 04:36:00.136086 1 deleg.go:130] setup "msg"="starting manager"
I1030 04:36:00.136375 1 internal.go:384] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I1030 04:36:00.136851 1 controller.go:164] controller-runtime/manager/controller/blockdevice "msg"="Starting EventSource" "reconciler group"="openebs.io" "reconciler kind"="BlockDevice" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"capacity":{"storage":0,"physicalSectorSize":0,"logicalSectorSize":0},"details":{"deviceType":"","driveType":"","logicalBlockSize":0,"physicalBlockSize":0,"hardwareSectorSize":0,"model":"","compliance":"","serial":"","vendor":"","firmwareRevision":""},"devlinks":null,"filesystem":{},"nodeAttributes":{"nodeName":""},"partitioned":"","path":""},"status":{"claimState":"","state":""}}}
I1030 04:36:00.136969 1 controller.go:164] controller-runtime/manager/controller/blockdeviceclaim "msg"="Starting EventSource" "reconciler group"="openebs.io" "reconciler kind"="BlockDeviceClaim" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"resources":{"requests":null},"deviceType":"","hostName":"","deviceClaimDetails":{},"blockDeviceNodeAttributes":{}},"status":{"phase":""}}}
I1030 04:36:00.237523 1 controller.go:164] controller-runtime/manager/controller/blockdevice "msg"="Starting EventSource" "reconciler group"="openebs.io" "reconciler kind"="BlockDevice" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"capacity":{"storage":0,"physicalSectorSize":0,"logicalSectorSize":0},"details":{"deviceType":"","driveType":"","logicalBlockSize":0,"physicalBlockSize":0,"hardwareSectorSize":0,"model":"","compliance":"","serial":"","vendor":"","firmwareRevision":""},"devlinks":null,"filesystem":{},"nodeAttributes":{"nodeName":""},"partitioned":"","path":""},"status":{"claimState":"","state":""}}}
I1030 04:36:00.237749 1 controller.go:172] controller-runtime/manager/controller/blockdevice "msg"="Starting Controller" "reconciler group"="openebs.io" "reconciler kind"="BlockDevice"
I1030 04:36:00.238095 1 controller.go:210] controller-runtime/manager/controller/blockdevice "msg"="Starting workers" "reconciler group"="openebs.io" "reconciler kind"="BlockDevice" "worker count"=1
I1030 04:36:00.238359 1 controller.go:164] controller-runtime/manager/controller/blockdeviceclaim "msg"="Starting EventSource" "reconciler group"="openebs.io" "reconciler kind"="BlockDeviceClaim" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"resources":{"requests":null},"deviceType":"","hostName":"","deviceClaimDetails":{},"blockDeviceNodeAttributes":{}},"status":{"phase":""}}}
I1030 04:36:00.238521 1 controller.go:172] controller-runtime/manager/controller/blockdeviceclaim "msg"="Starting Controller" "reconciler group"="openebs.io" "reconciler kind"="BlockDeviceClaim"
I1030 04:36:00.239069 1 controller.go:210] controller-runtime/manager/controller/blockdeviceclaim "msg"="Starting workers" "reconciler group"="openebs.io" "reconciler kind"="BlockDeviceClaim" "worker count"=1
# Only startup messages, and NDM ones at that
--------------openebs-ndm-t2k8p-------------
I1030 04:35:27.188837 8 eventhandler.go:63] Processing details for /dev/loop7
I1030 04:35:27.189035 8 probe.go:118] details filled by udev probe
W1030 04:35:27.189248 8 sysfsprobe.go:94] unable to get capacity for device: /dev/loop7, err: block count reported as zero
I1030 04:35:27.189343 8 sysfsprobe.go:97] blockdevice path: /dev/loop7 capacity :0 filled by sysfs probe.
I1030 04:35:27.189520 8 sysfsprobe.go:125] blockdevice path: /dev/loop7 logical block size :512 filled by sysfs probe.
I1030 04:35:27.189688 8 sysfsprobe.go:137] blockdevice path: /dev/loop7 physical block size :512 filled by sysfs probe.
I1030 04:35:27.189797 8 sysfsprobe.go:149] blockdevice path: /dev/loop7 hardware sector size :512 filled by sysfs probe.
I1030 04:35:27.190040 8 sysfsprobe.go:160] blockdevice path: /dev/loop7 drive type :HDD filled by sysfs probe.
I1030 04:35:27.190106 8 probe.go:118] details filled by sysfs probe
E1030 04:35:27.190140 8 smartprobe.go:101] map[errorCheckingConditions:the device type is not supported yet, device type: "unknown"]
I1030 04:35:27.190265 8 probe.go:118] details filled by smart probe
I1030 04:35:27.190808 8 mountprobe.go:134] no mount point found for /dev/loop7. clearing mount points if any
I1030 04:35:27.190904 8 probe.go:118] details filled by mount probe
I1030 04:35:27.191005 8 usedbyprobe.go:122] device: /dev/loop7 is not having any zfs partitions
E1030 04:35:27.191119 8 usedbyprobe.go:159] error reading spdk signature from device: /dev/loop7, error reading from /dev/loop7: EOF
I1030 04:35:27.191172 8 probe.go:118] details filled by used-by probe
I1030 04:35:27.191279 8 probe.go:118] details filled by Custom Tag Probe
I1030 04:35:27.191381 8 addhandler.go:52] device: /dev/loop7 does not exist in cache, the device is now connected to this node
I1030 04:35:27.191454 8 osdiskexcludefilter.go:131] applying os-filter regex ^/dev/vda[0-9]*$ on /dev/loop7
I1030 04:35:27.191579 8 filter.go:89] /dev/loop7 ignored by path filter
The only option left is to read the code and see how provisioning actually succeeds, especially what really happens behind the following lines:
I1030 04:45:02.613652 1 provisioner_hostpath.go:69] Creating volume pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8 at node with label kubernetes.io/hostname=minikube, path:/var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8,ImagePullSecrets:[]
I1030 04:45:04.713989 1 volume_store.go:219] persistentvolume "pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8" saved
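One hypothesis worth checking before diving into the code (an assumption on my part, not something these logs confirm): when minikube runs under a VM or Docker driver, the "host" path lives inside the minikube node, and the minikube docs list only a handful of directories that survive a node restart. /var/openebs/local is not among them, which would explain why the directory vanishes after every restart. A sketch of that check; the preserved-directory list is taken from the minikube documentation and may vary by driver and version:

```shell
# Directories minikube documents as persisted across node restarts
# (assumption: docker/VM driver; list from the minikube persistence docs).
persists_across_restart() {
  case "$1" in
    /data|/data/*|/var/lib/minikube/*|/var/lib/docker/*|/tmp/hostpath_pv/*|/tmp/hostpath-provisioner/*)
      echo yes ;;
    *)
      echo no ;;
  esac
}

persists_across_restart /var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8  # prints "no"
persists_across_restart /data/openebs                                                # prints "yes"
```

If this turns out to be the cause, pointing the provisioner's BasePath under /data (e.g. --set localprovisioner.basePath=/data/openebs) should keep the volumes across minikube restarts.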
Conclusion