Dynamically mounting Ceph RBD in Kubernetes

This article describes how Kubernetes consumes Ceph RBD storage.
1. First you need a Ceph cluster; setting one up is not covered here.
2. Make sure the cluster health is not in an error state.
Dynamically mounting Ceph RBD storage from Kubernetes:
- First, create a pool named kubernetes on the Ceph cluster and initialize it.
root@ubuntu2-15:~# ceph osd pool create kubernetes 16 16
pool 'kubernetes' already exists
root@ubuntu2-15:~# rbd pool init kubernetes
root@ubuntu2-15:~# ceph osd pool ls
device_health_metrics
myfs-metadata
myfs-data0
postgres
my-store.rgw.control
my-store.rgw.meta
my-store.rgw.log
my-store.rgw.buckets.index
my-store.rgw.buckets.non-ec
.rgw.root
my-store.rgw.buckets.data
kubernetes
- Create a user for the Kubernetes cluster that is allowed to use the kubernetes pool:
root@ubuntu2-15:~# ceph auth get-or-create client.kubernetes mon 'allow *' mds 'allow *' osd 'allow *'
[client.kubernetes]
        key = AQC/EodiBYJXABAAfcDFDItPJ8Ve2xRa3ZhyuA==
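The `allow *` caps above work, but they grant the client full access to the entire cluster. The ceph-csi documentation suggests scoping the client to the one pool with RBD profiles instead; a least-privilege sketch (verify the profile syntax against your Ceph release):

```
# Hypothetical least-privilege variant: restrict client.kubernetes to RBD
# operations on the kubernetes pool instead of 'allow *' everywhere.
ceph auth get-or-create client.kubernetes \
  mon 'profile rbd' \
  osd 'profile rbd pool=kubernetes' \
  mgr 'profile rbd pool=kubernetes'
```

Either set of caps yields a key you can paste into the Secret created below.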
- Create the ConfigMaps on the Kubernetes side:
root@k8s-ceph:~/lim/csi# cat csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "985fbc88-22d9-47d4-96b1-166c106d2787",
        "monitors": [
          "192.168.2.15:6789",
          "192.168.2.16:6789",
          "192.168.2.17:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-config-map.yaml
configmap/ceph-csi-config created
Recent ceph-csi releases also require a KMS ConfigMap. If the cluster does not use encryption, the value can be empty, but the ConfigMap must exist:
root@k8s-ceph:~/lim/csi# cat csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-kms-config-map.yaml
configmap/ceph-csi-encryption-kms-config created
The keyring in ceph-config-map.yaml needs to be the admin keyring:
root@k8s-ceph:~/lim/csi# cat ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
  # keyring is a required key and its value should be empty
  keyring: AQCdGF1i+4v7HRAAhxj6p2EV9z7sZ38CtEqakw==
metadata:
  name: ceph-config
root@k8s-ceph:~/lim/csi# kubectl apply -f ceph-config-map.yaml
configmap/ceph-config created
root@k8s-ceph:~/lim/csi# cat /etc/ceph/keyring
[client.admin]
key = AQCdGF1i+4v7HRAAhxj6p2EV9z7sZ38CtEqakw==
- Create a Secret using the kubernetes user created in step 2:
root@k8s-ceph:~/lim/csi# cat csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQC/EodiBYJXABAAfcDFDItPJ8Ve2xRa3ZhyuA==
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbd-secret.yaml
secret/csi-rbd-secret created
root@k8s-ceph:~/lim/csi#
- Deploy the ceph-csi plugins (this step needs access to raw.githubusercontent.com; alternatively, open the URLs in a browser, copy the YAML, and paste it into your terminal):
$ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
# Create the two RBAC manifests above first.
$ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
$ kubectl apply -f csi-rbdplugin-provisioner.yaml
$ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
$ kubectl apply -f csi-rbdplugin.yaml
---
All of these manifests can also be fetched by opening the URLs in a browser and pasting the contents into local files.
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-nodeplugin-rbac.yam
error: the path "csi-nodeplugin-rbac.yam" does not exist
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-nodeplugin-rbac.yaml
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
root@k8s-ceph:~/lim/csi# kubectl apply -f kubec get pod
error: Unexpected args: [get pod]
See 'kubectl apply -h' for help and examples
root@k8s-ceph:~/lim/csi# kubectl get pod
No resources found in default namespace.
root@k8s-ceph:~/lim/csi# kubectl get sa
NAME SECRETS AGE
default 1 123m
rbd-csi-nodeplugin 1 16s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-provisioner-rbac.yaml
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
root@k8s-ceph:~/lim/csi# kubectl get sa
NAME SECRETS AGE
default 1 123m
rbd-csi-nodeplugin 1 34s
rbd-csi-provisioner 1 4s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbdplugin-provisioner.yaml
service/csi-rbdplugin-provisioner created
deployment.apps/csi-rbdplugin-provisioner created
root@k8s-ceph:~/lim/csi# ls
ceph-config-map.yaml csi-config-map.yaml csi-kms-config-map.yaml csi-nodeplugin-rbac.yaml csi-provisioner-rbac.yaml csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml csi-rbd-sc.yaml csi-rbd-secret.yaml csi.tar.gz raw-block-pod.yaml raw-block-pvc.yaml
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-provisioner-54d9db86b5-dczd9 0/7 ContainerCreating 0 4s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbdplugin.yaml
daemonset.apps/csi-rbdplugin created
service/csi-metrics-rbdplugin created
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-nrdmh 3/3 Running 0 21s
csi-rbdplugin-provisioner-54d9db86b5-dczd9 7/7 Running 0 35s
This step creates two sets of pods, and all of their images must be pulled from registries that may require a proxy. (My connection is poor, so I cannot upload them to a drive; message me if you really cannot pull them.)
- Create a StorageClass to provision PVs dynamically:
root@k8s-ceph:~/lim/csi# cat csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 985fbc88-22d9-47d4-96b1-166c106d2787   # the cluster ID, as shown by `ceph -s`
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret   # the Secret created above
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbd-sc.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
root@k8s-ceph:~/lim/csi# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-rbd-sc rbd.csi.ceph.com Delete Immediate true 59s
- Create a PVC:
root@k8s-ceph:~/lim/csi# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/aiplatform-ailab-data-pvc Bound default-aiplatform-ailab-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-app-data-pvc Bound default-aiplatform-app-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-dataset-data-pvc Bound default-aiplatform-dataset-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-gitea-data-pvc Bound default-aiplatform-gitea-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-model-data-pvc Bound default-aiplatform-model-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/static-rbd-claim Bound static-rbd-pv 2Gi RWO,ROX 3d3h
persistentvolumeclaim/static-rbd-k8s-claim Bound static-rbd-k8s-pv 1Gi RWO,ROX 84m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-aiplatform-ailab-data-pv 300Mi RWX Retain Bound default/aiplatform-ailab-data-pvc 3d3h
persistentvolume/default-aiplatform-app-data-pv 300Mi RWX Retain Bound default/aiplatform-app-data-pvc 3d3h
persistentvolume/default-aiplatform-dataset-data-pv 300Mi RWX Retain Bound default/aiplatform-dataset-data-pvc 3d3h
persistentvolume/default-aiplatform-gitea-data-pv 300Mi RWX Retain Bound default/aiplatform-gitea-data-pvc 3d3h
persistentvolume/default-aiplatform-model-data-pv 300Mi RWX Retain Bound default/aiplatform-model-data-pvc 3d3h
persistentvolume/kube-system-aiplatform-component-data-pv 300Mi RWX Retain Bound kube-system/aiplatform-component-data-pvc 3d3h
persistentvolume/logging-aiplatform-logging-data-pv 300Mi RWX Retain Available 3d3h
persistentvolume/static-rbd-k8s-pv 1Gi RWO,ROX Retain Bound default/static-rbd-k8s-claim 84m
persistentvolume/static-rbd-pv 2Gi RWO,ROX Retain Bound default/static-rbd-claim 3d3h
root@k8s-ceph:~/lim/csi# cat raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
root@k8s-ceph:~/lim/csi# kubectl apply -f raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created
root@k8s-ceph:~/lim/csi# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/aiplatform-ailab-data-pvc Bound default-aiplatform-ailab-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-app-data-pvc Bound default-aiplatform-app-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-dataset-data-pvc Bound default-aiplatform-dataset-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-gitea-data-pvc Bound default-aiplatform-gitea-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-model-data-pvc Bound default-aiplatform-model-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/raw-block-pvc Bound pvc-68eac306-2863-4efb-8abc-f395eca95169 1Gi RWO csi-rbd-sc 18m
persistentvolumeclaim/static-rbd-claim Bound static-rbd-pv 2Gi RWO,ROX 3d3h
persistentvolumeclaim/static-rbd-k8s-claim Bound static-rbd-k8s-pv 1Gi RWO,ROX 103m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-aiplatform-ailab-data-pv 300Mi RWX Retain Bound default/aiplatform-ailab-data-pvc 3d3h
persistentvolume/default-aiplatform-app-data-pv 300Mi RWX Retain Bound default/aiplatform-app-data-pvc 3d3h
persistentvolume/default-aiplatform-dataset-data-pv 300Mi RWX Retain Bound default/aiplatform-dataset-data-pvc 3d3h
persistentvolume/default-aiplatform-gitea-data-pv 300Mi RWX Retain Bound default/aiplatform-gitea-data-pvc 3d3h
persistentvolume/default-aiplatform-model-data-pv 300Mi RWX Retain Bound default/aiplatform-model-data-pvc 3d3h
persistentvolume/kube-system-aiplatform-component-data-pv 300Mi RWX Retain Bound kube-system/aiplatform-component-data-pvc 3d3h
persistentvolume/logging-aiplatform-logging-data-pv 300Mi RWX Retain Available 3d3h
persistentvolume/pvc-68eac306-2863-4efb-8abc-f395eca95169 1Gi RWO Delete Bound default/raw-block-pvc csi-rbd-sc 18m
As you can see, creating just the PVC automatically provisions a matching PV.
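Behind the scenes the provisioner created an RBD image in the kubernetes pool; you can confirm this from a Ceph node (image names are generated by ceph-csi, so the name below is illustrative):

```
# On a Ceph node: list the images backing dynamically provisioned PVs.
rbd ls kubernetes
# Inspect one of them (substitute a real name from the listing).
rbd info kubernetes/csi-vol-<uuid>
```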
- Mount the PVC in a pod:
root@k8s-ceph:~/lim/csi# cat raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
root@k8s-ceph:~/lim/csi# kubectl apply -f ^C
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-nrdmh 3/3 Running 0 45m
csi-rbdplugin-provisioner-54d9db86b5-dczd9 7/7 Running 5 45m
root@k8s-ceph:~/lim/csi# kubectl apply -f raw-block-pod.yaml
pod/pod-with-raw-block-volume created
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-nrdmh 3/3 Running 0 45m
csi-rbdplugin-provisioner-54d9db86b5-dczd9 7/7 Running 5 45m
pod-with-raw-block-volume 1/1 Running 0 19s
If this step fails with a mount error, install the Ceph client tools on the node: apt install ceph-common.
That completes dynamically mounting Ceph RBD in Kubernetes.
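The example above uses volumeMode: Block, which exposes the RBD image to the pod as a raw device. For the more common case of a mounted filesystem, a minimal sketch against the same StorageClass (the PVC, pod, and mount-path names are illustrative) looks like:

```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-fs-pvc              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem        # the default; the CSI driver formats the image on first mount
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-fs-volume      # illustrative name
spec:
  containers:
    - name: app
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeMounts:             # volumeMounts instead of volumeDevices
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-fs-pvc
```

Inside the container the volume then appears as an ordinary directory at /var/lib/data rather than a block device.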