Linux and Cloud Computing — Stage 2: Linux Server Setup

Chapter 5: Storage Server Setup — Distributed Storage with Ceph

1 Ceph: Configure a Ceph Cluster

Install the distributed storage system "Ceph" and configure a storage cluster.

In this example, the cluster consists of 1 Admin Node and 3 Storage Nodes, as follows.

                                 |
+--------------------+           |           +-------------------+
|   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |
|    Ceph-Deploy     +-----------+-----------+                   |
|                    |           |           |                   |
+--------------------+           |           +-------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|                       |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+

[1] Add a user for Ceph administration on all Nodes.

This example adds the user "cent".
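Step [1] does not show the command itself; the following is a minimal sketch of creating the user on every node from the admin host (assumptions: root SSH access to each host, hostnames as in the diagram above; the password is set interactively afterwards):

```shell
# Create the "cent" admin user on each node in the cluster.
for node in dlp node01 node02 node03; do
    ssh root@${node}.srv.world "useradd cent"
done
# Then set a password on each node interactively, e.g.:
#   ssh -t root@node01.srv.world "passwd cent"
```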

[2] Grant root privileges via sudo to the Ceph admin user just added above.

Also install the required packages.

Furthermore, if Firewalld is running, allow the SSH service.

Apply all of the above on all Nodes.

[root@dlp ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

[root@dlp ~]# chmod 440 /etc/sudoers.d/ceph

[root@dlp ~]# yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities

[root@dlp ~]# sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo

[root@dlp ~]# firewall-cmd --add-service=ssh --permanent

[root@dlp ~]# firewall-cmd --reload

[3] On the Monitor Node (the one running the Monitor Daemon, node01 in this example), if Firewalld is running, allow the required port.

[root@node01 ~]# firewall-cmd --add-port=6789/tcp --permanent

[root@node01 ~]# firewall-cmd --reload

[4] On the Storage Nodes (Object Storage), if Firewalld is running, allow the required ports. Run this on node01, node02 and node03.

[root@node01 ~]# firewall-cmd --add-port=6800-7100/tcp --permanent

[root@node01 ~]# firewall-cmd --reload

[5] Log in as the Ceph admin user and configure Ceph.

Generate an SSH key-pair on the Ceph Admin Node ("dlp.srv.world" in this example) and distribute the public key to all storage Nodes.

[cent@dlp ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/cent/.ssh/id_rsa):

Created directory '/home/cent/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/cent/.ssh/id_rsa.

Your public key has been saved in /home/cent/.ssh/id_rsa.pub.

The key fingerprint is:

54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a cent@dlp.srv.world

The key's randomart image is:

[cent@dlp ~]$ vi ~/.ssh/config
# create new ( define all nodes and users )
Host dlp
    Hostname dlp.srv.world
    User cent
Host node01
    Hostname node01.srv.world
    User cent
Host node02
    Hostname node02.srv.world
    User cent
Host node03
    Hostname node03.srv.world
    User cent

[cent@dlp ~]$ chmod 600 ~/.ssh/config

# transfer key file

[cent@dlp ~]$ ssh-copy-id node01

cent@node01.srv.world's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"

and check to make sure that only the key(s) you wanted were added.

[cent@dlp ~]$ ssh-copy-id node02

[cent@dlp ~]$ ssh-copy-id node03

[6] Install Ceph on all Nodes from the Admin Node.

[cent@dlp ~]$ sudo yum -y install ceph-deploy

[cent@dlp ~]$ mkdir ceph

[cent@dlp ~]$ cd ceph

[cent@dlp ceph]$ ceph-deploy new node01

[cent@dlp ceph]$ vi ./ceph.conf

# add to the end

osd pool default size = 2

# Install Ceph on each Node

[cent@dlp ceph]$ ceph-deploy install dlp node01 node02 node03

# settings for monitoring and keys

[cent@dlp ceph]$ ceph-deploy mon create-initial

[7] Configure the Ceph Cluster from the Admin Node.

Before that, create a directory /storage01 on node01, /storage02 on node02 and /storage03 on node03 (as used in this example).
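These directories can be created from the Admin Node over the SSH aliases defined earlier; a short sketch, assuming the passwordless sudo from step [2] is in place on the storage nodes:

```shell
# Create one OSD data directory per storage node:
# /storage01 on node01, /storage02 on node02, /storage03 on node03.
for i in 1 2 3; do
    ssh node0${i} "sudo mkdir /storage0${i}"
done
```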

# prepare Object Storage Daemon

[cent@dlp ceph]$ ceph-deploy osd prepare node01:/storage01 node02:/storage02 node03:/storage03

# activate Object Storage Daemon

[cent@dlp ceph]$ ceph-deploy osd activate node01:/storage01 node02:/storage02 node03:/storage03

# transfer config files

[cent@dlp ceph]$ ceph-deploy admin dlp node01 node02 node03

[cent@dlp ceph]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# show status (the output below means there is no problem)

[cent@dlp ceph]$ ceph health

HEALTH_OK

[8] If you'd like to clean up the settings and configure the cluster again, do as follows.

# remove packages

[cent@dlp ceph]$ ceph-deploy purge dlp node01 node02 node03

# remove settings

[cent@dlp ceph]$ ceph-deploy purgedata dlp node01 node02 node03

[cent@dlp ceph]$ ceph-deploy forgetkeys

2 Use Ceph as a Block Device

Configure Clients to use the Ceph storage as follows. The network topology is the same as in the diagram in the previous section.

For example, create a block device and mount it on a Client.

[1] First, configure sudo and an SSH key-pair for a user on the Client, then install Ceph on it from the Ceph Admin Node as follows.

[cent@dlp ceph]$ ceph-deploy install client

[cent@dlp ceph]$ ceph-deploy admin client

[2] Create a Block device and mount it on a Client.

[cent@client ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create a 10G disk image (--size is in megabytes)

[cent@client ~]$ rbd create disk01 --size 10240

# show list

[cent@client ~]$ rbd ls -l

NAME     SIZE PARENT FMT PROT LOCK

disk01 10240M          2

# map the image to a device

[cent@client ~]$ sudo rbd map disk01

/dev/rbd0

# show mapping

[cent@client ~]$ rbd showmapped

id pool image  snap device

0  rbd  disk01 -    /dev/rbd0

# format with XFS

[cent@client ~]$ sudo mkfs.xfs /dev/rbd0

# mount device

[cent@client ~]$ sudo mount /dev/rbd0 /mnt

[cent@client ~]$ df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.4M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

/dev/rbd0               xfs        10G   33M   10G   1% /mnt

3 Use Ceph as a Filesystem

Configure Clients to use the Ceph storage as follows. The network topology is the same as in the diagram in the first section.

For example, mount CephFS as a filesystem on a Client.

[1] Create an MDS (MetaData Server) on the Node you'd like to use as MDS. This example sets it up on node01.

[cent@dlp ceph]$ ceph-deploy mds create node01

[2] Create at least 2 RADOS pools on MDS Node and activate MetaData Server.

For the pg_num value specified at the end of each create command, refer to the official documentation and choose an appropriate value:

http://docs.ceph.com/docs/master/rados/operations/placement-groups/
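As a rough sketch of the rule of thumb from that document (total PGs ≈ OSDs × 100 / replica count, rounded up to the next power of two), using this example's numbers as assumptions — note the docs also recommend the fixed preset pg_num = 128 for clusters with fewer than 5 OSDs, which is what the commands below use:

```shell
# Estimate pg_num for this example: 3 OSDs, pool size (replica count) 2.
osds=3
size=2
raw=$(( osds * 100 / size ))               # 150
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                                 # rounds up to 256
```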

[cent@node01 ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create pools

[cent@node01 ~]$ ceph osd pool create cephfs_data 128

pool 'cephfs_data' created

[cent@node01 ~]$ ceph osd pool create cephfs_metadata 128

pool 'cephfs_metadata' created

# enable pools

[cent@node01 ~]$ ceph fs new cephfs cephfs_metadata cephfs_data

new fs with metadata pool 2 and data pool 1

# show list

[cent@node01 ~]$ ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[cent@node01 ~]$ ceph mds stat

e5: 1/1/1 up {0=node01=up:active}

[3] Mount CephFS on a Client.

[root@client ~]# yum -y install ceph-fuse

# get admin key

[root@client ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key

cent@node01.srv.world's password:

[root@client ~]# chmod 600 admin.key

[root@client ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key

[root@client ~]# df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.3M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

10.0.0.51:6789:/        ceph       80G   19G   61G  24% /mnt
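To make the CephFS mount persistent across reboots, an entry can be added to /etc/fstab on the Client. This is a sketch only; the key path /etc/ceph/admin.key is an assumption (the steps above leave admin.key in root's home directory), so adjust it to wherever you keep the key.

```
# /etc/fstab (hypothetical key path)
node01.srv.world:6789:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.key,noatime,_netdev  0 0
```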


The full video course is available at: http://edu.51cto.com/course/course_id-6574.html


Reposted from: https://blog.51cto.com/11840455/1833048
