//Reference: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/rbd

//Reference: https://www.wenjiangs.com/doc/hqefraum

1. Create a pool, a data pool dedicated to dynamic PVs
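A minimal sketch of this step (the pool name kube and the PG counts match what the rest of this post uses; rbd pool init is assumed to be available on recent Ceph releases):

ceph osd pool create kube 8 8
rbd pool init kube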

2. Create ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: QVFCTVQrUmdxYkxzTUJBQS90ZExaTUVBNjY5bmxtODJkNitCeXc9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: QVFETkJ3UmhxamhVTkJBQVhjWTJoQUlpVnczQmlOc1F6bndoUlE9PQ==

The key of ceph-secret-admin and the key of ceph-secret can have the same value here; the content above follows https://github.com/kubernetes-retired/external-storage/tree/master/ceph/rbd

Using the admin user grants full permissions for all Ceph operations.

If you create a new user instead, you can scope its Ceph permissions, as follows:

ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/key
kubectl create secret generic ceph-secret --from-file=/tmp/key --namespace=kube-system --type=kubernetes.io/rbd

Note that the key returned by get-key needs to be base64-encoded; using it directly as in the official example will cause an error.

ceph auth get-key client.kube | base64
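As a quick sanity check (a sketch): decoding the key stored in the secret should give back the raw output of ceph auth get-key.

kubectl get secret ceph-secret -n kube-system -o jsonpath='{.data.key}' | base64 -d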

//Check the result

//kubectl get secret -n kube-system |grep ceph
ceph-secret                   kubernetes.io/rbd                     1      3d11h
ceph-secret-admin             kubernetes.io/rbd                     1      3d11h

3. Deploy rbd-provisioner

Note: because kube-controller-manager on k8s runs inside a container, it cannot invoke the Ceph operations on the physical host directly. A separate rbd-provisioner must be deployed as a container for these operations to succeed; otherwise they fail with the following error:

"rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH:"

Create rbd-provisioner.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["kube-dns", "coredns"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
  labels:
    app: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      nodeSelector:
        app: rbd-provisioner
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        volumeMounts:
        - name: ceph-conf
          mountPath: /etc/ceph
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
      volumes:
      - name: ceph-conf
        hostPath:
          path: /etc/ceph
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
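Apply it and check that the provisioner pod comes up (a sketch):

kubectl apply -f rbd-provisioner.yaml
kubectl get pod -n kube-system |grep rbd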

4. Create ceph-rbd-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd
#reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  monitors: 10.12.70.201:6789,10.12.70.202:6789,10.12.70.203:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
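Apply it and confirm the StorageClass is registered and marked as the default (a sketch):

kubectl apply -f ceph-rbd-sc.yaml
kubectl get sc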

5. Create the PVC and a test application

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
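Apply both objects and watch the claim (a sketch; the file name ceph-pod1.yaml is just an assumption):

kubectl apply -f ceph-pod1.yaml
kubectl get pvc ceph-claim -w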

//Error

After running step 5, the pod stayed in Pending state, and checking the PVC showed it was Pending as well:

//kubectl get pvc
NAME           STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-claim     Pending                                      ceph-rbd       2d10h

This indicates rbd-provisioner is not taking effect. Check the error messages in the pod's logs:

//kubectl get pod -n kube-system |grep rbd
rbd-provisioner-f4956975f-4ksqt            1/1     Running   0          2d15h
//kubectl logs rbd-provisioner-f4956975f-4ksqt -n kube-system
//Error:
1 controller.go:1004] provision "default/ceph-claim" class "ceph-rbd": unexpected error getting claim reference: selfLink was empty, can't make reference

//After some research I found that Kubernetes 1.20 disabled selfLink.
The current workaround is to edit /etc/kubernetes/manifests/kube-apiserver.yaml.
Here:

spec:
  containers:
  - command:
    - kube-apiserver
add this line:

    - --feature-gates=RemoveSelfLink=false

This change must be made on every k8s master node.
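kube-apiserver is a static pod, so kubelet restarts it automatically once the manifest is saved; a quick way to confirm the flag was picked up (a sketch):

ps -ef | grep kube-apiserver | grep RemoveSelfLink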

//After the change the PVC was still Pending; check the logs again

//Error
I0728 14:26:55.704256       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"ceph-claim", UID:"e252cc3d-4ff0-400f-9bc2-feee20ecbb40", APIVersion:"v1", ResourceVersion:"19495043", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ceph-rbd": failed to create rbd image: exit status 13, command output: did not load config file, using default settings.
2021-07-28 14:26:52.645 7f70da266900 -1 Errors while parsing config file!
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.645 7f70da266900 -1 Errors while parsing config file!
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open /root/.ceph/ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.645 7f70da266900 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2021-07-28 14:26:52.685 7f70da266900 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-28 14:26:55.689 7f70da266900 -1 monclient: get_monmap_and_config failed to get config
2021-07-28 14:26:55.689 7f70da266900 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
rbd: couldn't connect to the cluster!

This error says rbd-provisioner needs ceph.conf and related config files. A temporary fix found online is to docker cp the files from the local /etc/ceph/ into the container. By the way, I forgot to mention earlier: once rbd-provisioner.yaml is applied successfully, Docker pulls down a quay.io/external_storage/rbd-provisioner:latest image locally, as shown:

//sudo docker images
REPOSITORY                                 TAG        IMAGE ID       CREATED         SIZE
quay.io/external_storage/rbd-provisioner   latest     9fb54e49f9bf   2 years ago     405MB

//Temporary copy commands

//sudo docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED       STATUS       PORTS     NAMES
52218eacb4a9   quay.io/external_storage/rbd-provisioner   "/usr/local/bin/rbd-…"   2 days ago    Up 2 days              k8s_rbd-provisioner_rbd-provisioner-f4956975f-4ksqt_kube-system_c6e08e90-3775-45f2-90fe-9fbc0eb16efc_0
//sudo docker cp /etc/ceph/ceph.conf 52218eacb4a9:/etc/ceph

With this method, the copied files are gone as soon as the container restarts, so instead I added a hostPath in rbd-provisioner.yaml to mount the local directory into the container, as follows:

    containers:
    - name: rbd-provisioner
      image: "quay.io/external_storage/rbd-provisioner:latest"
      volumeMounts:
      - name: ceph-conf
        mountPath: /etc/ceph
      env:
      - name: PROVISIONER_NAME
        value: ceph.com/rbd
    serviceAccount: rbd-provisioner
    volumes:
    - name: ceph-conf
      hostPath:
        path: /etc/ceph

//To ensure rbd-provisioner and kube-controller-manager run on the same node, label the master node

//kubectl label nodes k8s70131 app=rbd-provisioner
//kubectl get nodes --show-labels
NAME       STATUS   ROLES                  AGE    VERSION   LABELS
k8s70131   Ready    control-plane,master   137d   v1.21.2   app=rbd-provisioner,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s70132,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
//The LABELS column now contains the app=rbd-provisioner label
//To delete the label:
//kubectl label nodes k8s70131 app-

Because the master nodes are all tainted, scheduling a pod onto them requires a matching toleration. Add the following to rbd-provisioner.yaml:

    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"

Then check: the pods are now on the same node

//kubectl get pod -n kube-system -o wide
rbd-provisioner-f4956975f-4ksqt            1/1     Running   0          2d16h   22.244.157.77   k8s70131   <none>           <none>
kube-controller-manager-k8s70131           1/1     Running   2          44d     10.12.70.131    k8s70131   <none>           <none>

//At this point the PVC was still Pending, yet the logs no longer showed any errors. After a lot of poking around, the cause turned out to be a mismatch between the ceph-common version inside the rbd-provisioner image and the version of the Ceph cluster on the physical hosts. Next, upgrade the Ceph version inside the image.

//Run ceph -v on the physical host
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
The version is v15.2.13
//Enter the container
//kubectl get pod -n kube-system |grep rbd
rbd-provisioner-f4956975f-4ksqt            1/1     Running   0          2d16h
//kubectl describe pod rbd-provisioner-f4956975f-4ksqt -n kube-system
Containers:
  rbd-provisioner:
    Container ID:   docker://52218eacb4a91b8338cf38958fe5a5213f0bd4cc8c4d5b3d15d9cda69e8af98e
    Image:          quay.io/external_storage/rbd-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/rbd-provisioner@sha256:94fd36b8625141b62ff1addfa914d45f7b39619e55891bad0294263ecd2ce09a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 31 Jul 2021 18:27:44 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  ceph.com/rbd
    Mounts:
      /etc/ceph from ceph-conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rbd-provisioner-token-5lx5m (ro)
//kubectl exec -it rbd-provisioner-f4956975f-4ksqt -c rbd-provisioner -n kube-system -- ceph -v
The version inside the image is v13.2.1
Upgrade Ceph inside the container.
Edit /etc/yum.repos.d/ceph.repo to point at the mirrors
https://mirrors.aliyun.com/ceph/keys/
https://mirrors.aliyun.com/ceph/rpm-15.2.13/el7/
then refresh and update (a sketch of the repo file follows below):
yum clean all
yum makecache
yum -y update
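For reference, a minimal ceph.repo sketch using those mirrors (the section names, baseurl layout, and release.asc key path are assumptions; adjust to your distro and architecture):

[ceph]
name=Ceph packages for x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-15.2.13/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-15.2.13/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc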

After the upgrade completes, check the PVC

//kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-claim      Bound    pvc-6f7de232-7a76-454f-8c72-f24e21e3230a   2Gi        RWO            ceph-rbd       2d11h
//check the PV
//kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-6f7de232-7a76-454f-8c72-f24e21e3230a   2Gi        RWO            Delete           Bound    default/ceph-claim      ceph-rbd                2d11h
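As a cross-check on the Ceph side, the dynamically provisioned image should now be visible in the pool (a sketch, assuming the kube pool from above):

rbd ls -p kube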

Dynamic RBD PV provisioning on k8s with Ceph is now set up successfully.
