k8s — Deploying a Ceph Cluster on Kubernetes with Rook
I: Environment Preparation
1. Install a time client and synchronize the clocks
On all nodes of the Kubernetes cluster:
[root@master ~]# yum -y install ntpdate
[root@master ~]# ntpdate ntp2.aliyun.com
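A one-off `ntpdate` run drifts again over time, and that drift is exactly what causes the `clock skew detected on mon.b, mon.c` HEALTH_WARN seen later in `ceph status`. As a sketch (the 5-minute interval and the cron.d path are assumptions, not from the original), periodic sync can be persisted on every node:

```shell
# Hypothetical: keep clocks in sync via a cron.d fragment on every node.
# Interval and NTP server are assumptions -- adjust to your site.
cat > /etc/cron.d/ntpdate << 'EOF'
*/5 * * * * root /usr/sbin/ntpdate ntp2.aliyun.com >/dev/null 2>&1
EOF
```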
2. Load the rbd kernel module
[root@master ~]# modprobe rbd
[root@master ~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules
do
  [ -x \$file ] && \$file
done
EOF
[root@master ~]# cat > /etc/sysconfig/modules/rbd.modules << EOF
modprobe rbd
EOF
[root@master ~]# chmod 755 /etc/sysconfig/modules/rbd.modules
[root@master ~]# lsmod |grep rbd
rbd 83889 0
libceph 282661 1 rbd
II: Image Preparation for the Rook Ceph Deployment
1. Pull the images on every node
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v1.2.6
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v14.2.8
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-node-driver-registrar:v1.2.0
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-provisioner:v1.4.0
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-attacher:v1.2.0
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-snapshotter:v1.2.2
[root@k8s-master01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/cephcsi:v1.2.2
2. Manually tag the images with the upstream names
[root@k8s-master01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-node-driver-registrar:v1.2.0 quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
[root@k8s-master01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-provisioner:v1.4.0 quay.io/k8scsi/csi-provisioner:v1.4.0
[root@k8s-master01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-attacher:v1.2.0 quay.io/k8scsi/csi-attacher:v1.2.0
[root@k8s-master01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-snapshotter:v1.2.2 quay.io/k8scsi/csi-snapshotter:v1.2.2
[root@k8s-master01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2
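The pull/tag pairs above are mechanical, so they can be scripted in one loop (a sketch; the image list and mirror prefix are taken from the commands above):

```shell
#!/bin/sh
# Pull each CSI image from the Aliyun mirror, then re-tag it with the
# upstream name that the Rook manifests reference.
MIRROR=registry.cn-hangzhou.aliyuncs.com/vinc-auto
for pair in \
    "csi-node-driver-registrar:v1.2.0 quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" \
    "csi-provisioner:v1.4.0 quay.io/k8scsi/csi-provisioner:v1.4.0" \
    "csi-attacher:v1.2.0 quay.io/k8scsi/csi-attacher:v1.2.0" \
    "csi-snapshotter:v1.2.2 quay.io/k8scsi/csi-snapshotter:v1.2.2" \
    "cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2"
do
    src="${MIRROR}/${pair%% *}"   # mirror image (first word)
    dst="${pair#* }"              # upstream name (second word)
    docker pull "${src}"
    docker tag "${src}" "${dst}"
done
```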
III: Deploying the Ceph Cluster
1. Download Rook on the master node
Note: the repository is cloned locally here:
[root@k8s-master01 ~]# cd /tmp
[root@k8s-master01 ~]# git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
2. Configure the Ceph cluster environment
[root@master ~]# cd /tmp/rook/cluster/examples/kubernetes/ceph/
[root@master ceph]# kubectl create -f common.yaml
[root@master ceph]# sed -i 's#rook/ceph:v1.2.7#registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v1.2.6#g' operator.yaml
[root@master ceph]# kubectl apply -f operator.yaml
[root@master ceph]# kubectl -n rook-ceph get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rook-ceph-operator-c78d48dd8-vpzcv 1/1 Running 0 2m54s 10.244.1.3 node-1 <none> <none>
rook-discover-bt2lc 1/1 Running 0 2m36s 10.244.2.4 node-2 <none> <none>
rook-discover-ht9gg 1/1 Running 0 2m36s 10.244.3.5 node-3 <none> <none>
rook-discover-r27t4 1/1 Running 0 2m36s 10.244.1.4 node-1 <none> <none>
3. Ceph cluster deployment configuration
cluster.yaml is the production storage cluster configuration and requires at least three nodes.
cluster-test.yaml is the test cluster configuration and requires only one node.
cluster-minimal.yaml configures only one ceph-mon and one ceph-mgr.
Edit the cluster configuration file: replace the image and disable automatic selection of all nodes and all devices, since nodes and devices will be specified manually:
[root@master ceph]# sed -i 's|ceph/ceph:v14.2.9|registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v14.2.8|g' cluster.yaml
[root@master ceph]# sed -i 's|useAllNodes: true|useAllNodes: false|g' cluster.yaml
[root@master ceph]# sed -i 's|useAllDevices: true|useAllDevices: false|g' cluster.yaml
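Before creating the cluster it is worth confirming the three substitutions actually landed (a quick sanity check, not from the original steps):

```shell
# Each grep should print a matching line with its line number;
# no output means the corresponding sed substitution did not take effect.
grep -n 'vinc-auto/ceph:v14.2.8' cluster.yaml
grep -n 'useAllNodes: false'     cluster.yaml
grep -n 'useAllDevices: false'   cluster.yaml
```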
IV: Cluster Node Deployment
1. Identify the second disk on each node
[root@node-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 40G 0 disk
├─vda1 252:1 0 500M 0 part /boot
└─vda2 252:2 0 39.5G 0 part
  └─centos-root 253:0 0 39.5G 0 lvm /
vdb 252:16 0 8G 0 disk
2. Get the cluster node names
[root@master ceph]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 10h v1.19.4
node-1 Ready <none> 10h v1.19.4
node-2 Ready <none> 10h v1.19.4
node-3 Ready <none> 10h v1.19.4
[root@master ceph]# kubectl describe node master | grep kubernetes.io/hostname
kubernetes.io/hostname=master
3. Add the Ceph cluster node configuration
Add the following under config: in the storage: section of cluster.yaml.
Note: name must not be an IP address; it must match the value of the node's kubernetes.io/hostname label.
    config:
      metadataDevice:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
    nodes:
    - name: "node-1"
      devices:
      - name: "vdb"
        config:
          storeType: bluestore
    - name: "node-2"
      devices:
      - name: "vdb"
        config:
          storeType: bluestore
    - name: "node-3"
      devices:
      - name: "vdb"
        config:
          storeType: bluestore
4. Create the Ceph cluster
[root@master ceph]# kubectl create -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
[root@master ceph]# kubectl -n rook-ceph get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-hmnbl 3/3 Running 0 6m44s 10.0.1.11 node-1 <none> <none>
csi-cephfsplugin-lj6mp 3/3 Running 0 6m44s 10.0.1.12 node-2 <none> <none>
csi-cephfsplugin-provisioner-558b4777b-vl892 4/4 Running 0 6m44s 10.244.1.5 node-1 <none> <none>
csi-cephfsplugin-provisioner-558b4777b-zfpl5 4/4 Running 0 6m43s 10.244.2.6 node-2 <none> <none>
csi-cephfsplugin-vncpk 3/3 Running 0 6m44s 10.0.1.13 node-3 <none> <none>
csi-rbdplugin-2cmjr 3/3 Running 0 6m48s 10.0.1.11 node-1 <none> <none>
csi-rbdplugin-55xwl 3/3 Running 0 6m48s 10.0.1.13 node-3 <none> <none>
csi-rbdplugin-provisioner-55494cc8b4-hqjbz 5/5 Running 0 6m48s 10.244.3.6 node-3 <none> <none>
csi-rbdplugin-provisioner-55494cc8b4-k974x 5/5 Running 0 6m48s 10.244.2.5 node-2 <none> <none>
csi-rbdplugin-t78nf 3/3 Running 0 6m48s 10.0.1.12 node-2 <none> <none>
rook-ceph-crashcollector-node-1-f6df45cd5-dtjmb 1/1 Running 0 3m56s 10.244.1.8 node-1 <none> <none>
rook-ceph-crashcollector-node-2-546b678bbb-dxfpr 1/1 Running 0 4m38s 10.244.2.11 node-2 <none> <none>
rook-ceph-crashcollector-node-3-579889779-n2p9t 1/1 Running 0 87s 10.244.3.14 node-3 <none> <none>
rook-ceph-mgr-a-5ff48dbfbb-qd6cw 1/1 Running 0 3m29s 10.244.3.9 node-3 <none> <none>
rook-ceph-mon-a-559ccdd4c-6rnsz 1/1 Running 0 4m39s 10.244.2.9 node-2 <none> <none>
rook-ceph-mon-b-699cb56d4f-kn4fb 1/1 Running 0 4m21s 10.244.3.8 node-3 <none> <none>
rook-ceph-mon-c-7b79ff8bb4-t6zsn 1/1 Running 0 3m56s 10.244.1.7 node-1 <none> <none>
rook-ceph-operator-c78d48dd8-vpzcv 1/1 Running 0 27m 10.244.1.3 node-1 <none> <none>
rook-ceph-osd-0-5d7b89c8d5-28snp 1/1 Running 0 105s 10.244.1.10 node-1 <none> <none>
rook-ceph-osd-1-79959f5b49-6qnth 1/1 Running 0 102s 10.244.2.12 node-2 <none> <none>
rook-ceph-osd-2-676cc4df65-zms9n 1/1 Running 0 90s 10.244.3.13 node-3 <none> <none>
rook-discover-bt2lc 1/1 Running 0 27m 10.244.2.4 node-2 <none> <none>
rook-discover-ht9gg 1/1 Running 0 27m 10.244.3.5 node-3 <none> <none>
rook-discover-r27t4 1/1 Running 0 27m 10.244.1.4 node-1 <none> <none>
V: Deploying the Ceph Toolbox
1. Deploy the Ceph toolbox hosted on Kubernetes
[root@master ceph]# sed -i 's|rook/ceph:v1.2.7|registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v1.2.6|g' toolbox.yaml
[root@master ceph]# kubectl apply -f toolbox.yaml
2. Check the Ceph cluster status
[root@master ceph]# kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME READY STATUS RESTARTS AGE
rook-ceph-tools-7476c966b7-5f5kg 1/1 Running 0 40s
[root@master ceph]# NAME=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
[root@master ceph]# kubectl -n rook-ceph exec -it ${NAME} sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.2# ceph status
  cluster:
    id:     fb3cdbc2-8fea-4346-b752-131fd1eb2baf
    health: HEALTH_WARN
            clock skew detected on mon.b, mon.c
  services:
    mon: 3 daemons, quorum a,b,c (age 10m)
    mgr: a(active, since 9m)
    osd: 3 osds: 3 up (since 7m), 3 in (since 7m)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 21 GiB / 24 GiB avail
    pgs:
sh-4.2# ceph osd status
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| 0 | node-1 | 1025M | 7162M | 0 | 0 | 0 | 0 | exists,up |
| 1 | node-2 | 1025M | 7162M | 0 | 0 | 0 | 0 | exists,up |
| 2 | node-3 | 1025M | 7162M | 0 | 0 | 0 | 0 | exists,up |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.2# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META  AVAIL   %USE  VAR  PGS STATUS
 0   hdd 0.00780  1.00000 8.0 GiB 1.0 GiB 1.4 MiB  0 B 1 GiB 7.0 GiB 12.52 1.00   0     up
 1   hdd 0.00780  1.00000 8.0 GiB 1.0 GiB 1.4 MiB  0 B 1 GiB 7.0 GiB 12.52 1.00   0     up
 2   hdd 0.00780  1.00000 8.0 GiB 1.0 GiB 1.4 MiB  0 B 1 GiB 7.0 GiB 12.52 1.00   0     up
                    TOTAL  24 GiB 3.0 GiB 4.1 MiB  0 B 3 GiB  21 GiB 12.52
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
sh-4.2# ceph osd pool stats
there are no pools!
sh-4.2# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.02339 root default
-3       0.00780     host node-1
 0   hdd 0.00780         osd.0       up  1.00000 1.00000
-5       0.00780     host node-2
 1   hdd 0.00780         osd.1       up  1.00000 1.00000
-7       0.00780     host node-3
 2   hdd 0.00780         osd.2       up  1.00000 1.00000
sh-4.2# ceph pg stat
0 pgs: ; 0 B data, 4.3 MiB used, 21 GiB / 24 GiB avail
sh-4.2# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    hdd       24 GiB     21 GiB     4.3 MiB     3.0 GiB          12.52
    TOTAL     24 GiB     21 GiB     4.3 MiB     3.0 GiB          12.52
POOLS:
    POOL     ID     STORED     OBJECTS     USED     %USED     MAX AVAIL
sh-4.2# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR

total_objects 0
total_used 3.0 GiB
total_avail 21 GiB
total_space 24 GiB
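Instead of keeping an interactive shell open, each of the checks above can also be run one-shot from the master, using the `--` separator that the deprecation warning above asks for:

```shell
# Look up the toolbox pod name, then exec individual ceph commands in it.
NAME=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
       -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec ${NAME} -- ceph status
kubectl -n rook-ceph exec ${NAME} -- ceph osd status
kubectl -n rook-ceph exec ${NAME} -- ceph df
```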
六:Ceph Dashboard
在Ceph集群配置文件中配置开启了dashboard,但是需要配置后才能进行登陆
1.暴露 Ceph Dashboard
Dashboard默认是ClusterIP类型的,无法在k8s节点之外的主机访问,修改ClusterIP为NodePort
[root@master ceph]# kubectl edit service rook-ceph-mgr-dashboard -n rook-ceph
service/rook-ceph-mgr-dashboard edited
[root@master ceph]# kubectl -n rook-ceph get service rook-ceph-mgr-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
rook-ceph-mgr-dashboard NodePort 10.101.215.33 <none> 8443:30624/TCP 60m app=rook-ceph-mgr,rook_cluster=rook-ceph
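The interactive `kubectl edit` step above can also be done non-interactively with a strategic-merge patch, which is easier to script:

```shell
# Switch the dashboard Service to NodePort without opening an editor.
kubectl -n rook-ceph patch service rook-ceph-mgr-dashboard \
    -p '{"spec": {"type": "NodePort"}}'
# Read back the node port that was allocated:
kubectl -n rook-ceph get service rook-ceph-mgr-dashboard \
    -o jsonpath='{.spec.ports[0].nodePort}'
```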
2. Get the login password
[root@master ceph]# Ciphertext=$(kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}")
[root@master ceph]# Pass=$(echo ${Ciphertext}|base64 --decode)
[root@master ceph]# echo ${Pass}
IJpf1nsvcY
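The two-step decode above can be collapsed into a single pipeline:

```shell
# Fetch the base64-encoded dashboard password and decode it in one go.
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
    -o jsonpath="{.data.password}" | base64 --decode
```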
3. Log in and verify
The username is admin.
Removing the Ceph cluster configuration
[root@master ceph]# kubectl -n rook-ceph delete cephcluster rook-ceph
[root@master ceph]# kubectl -n rook-ceph get cephcluster
Cleaning up the system environment
Every node in the Ceph cluster must have its Rook configuration directory cleared and its storage device wiped, then be rebooted.
Otherwise the next cluster deployment will fail in assorted ways.
[root@master ceph]# yum -y install gdisk
[root@master ceph]# sgdisk --zap-all /dev/nvme0n2
[root@master ceph]# rm -rvf /var/lib/rook/*
[root@master ceph]# reboot
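The cleanup steps above can be gathered into one per-node sketch. Note that DISK must be set to that node's OSD device (/dev/vdb in the environment used throughout this article; the example above shows /dev/nvme0n2), and that the extra `dd` pass is an assumption borrowed from Rook's teardown guidance to clear any leftover bluestore label:

```shell
#!/bin/sh
# Hypothetical per-node cleanup sketch -- run on EVERY node that hosted an OSD.
# WARNING: this destroys all data on DISK.
DISK=/dev/vdb
sgdisk --zap-all "${DISK}"                    # wipe GPT/MBR partition structures
dd if=/dev/zero of="${DISK}" bs=1M count=100  # clear any leftover bluestore label
rm -rf /var/lib/rook/*                        # remove Rook's on-disk state
reboot
```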