Linux and Cloud Computing, Stage 2: Linux Server Setup

Chapter 5: Storage Server Setup - Distributed Storage with GlusterFS, Basics

1 GlusterFS Installation

Install GlusterFS to Configure Storage Cluster.

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
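Keeping each brick on its own filesystem can be made persistent with an /etc/fstab entry. A minimal sketch, assuming /dev/sdb1 is already partitioned and formatted (the device name and xfs filesystem type are assumptions; adjust them to your environment):

```
# /etc/fstab entry: mount the brick partition at boot
# (device /dev/sdb1 and the xfs type are assumptions for this example)
/dev/sdb1  /glusterfs  xfs  defaults  0 0
```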

[1] Install GlusterFS Server on all Nodes in Cluster.

[root@node01 ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

# enable EPEL, too

[root@node01 ~]# yum --enablerepo=epel -y install glusterfs-server

[root@node01 ~]# systemctl start glusterd

[root@node01 ~]# systemctl enable glusterd

[2] If Firewalld is running, allow GlusterFS service on all nodes.

[root@node01 ~]# firewall-cmd --add-service=glusterfs --permanent

success

[root@node01 ~]# firewall-cmd --reload

success

No extra configuration is needed if you mount GlusterFS volumes from clients with the GlusterFS Native Client.

[3] GlusterFS also supports NFS (v3), so if you mount GlusterFS volumes from clients with NFS, additionally configure as follows.

[root@node01 ~]# yum -y install rpcbind

[root@node01 ~]# systemctl start rpcbind

[root@node01 ~]# systemctl enable rpcbind

[root@node01 ~]# systemctl restart glusterd

[4] That's all for the installation and basic settings of GlusterFS. Refer to the following sections for clustering configuration.

2 Configure Distributed

Configure Storage Clustering.

For example, create a distributed volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes, as described in the installation section.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/distributed

[3] Configure clustering as follows on one node. (any node is fine)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]# gluster volume create vol_distributed transport tcp \

node01:/glusterfs/distributed \

node02:/glusterfs/distributed

volume create: vol_distributed: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_distributed

volume start: vol_distributed: success

# show volume info

[root@node01 ~]# gluster volume info

Volume Name: vol_distributed

Type: Distribute

Volume ID: 6677caa9-9aab-4c1a-83e5-2921ee78150d

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/distributed

Brick2: node02:/glusterfs/distributed

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, see the Client Settings section.
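Scaling the distributed volume past two servers only changes the brick list given to `gluster volume create`. A minimal sketch that assembles the command for an arbitrary node list (the node names are hypothetical, for illustration only):

```shell
#!/bin/sh
# Sketch: build the "gluster volume create" command for any number of nodes.
# Each node contributes one brick under /glusterfs/distributed.
NODES="node01 node02 node03"    # hypothetical node names
BRICKS=""
for n in $NODES; do
    BRICKS="$BRICKS $n:/glusterfs/distributed"
done
CMD="gluster volume create vol_distributed transport tcp$BRICKS"
echo "$CMD"
```

Run the printed command on any probed node; bricks can also be added to a running volume later with `gluster volume add-brick`, followed by a rebalance.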

3 Configure Replication

Configure Storage Clustering.

For example, create a replicated volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes, as described in the installation section.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/replica

[3] Configure clustering as follows on one node. (any node is fine)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]# gluster volume create vol_replica replica 2 transport tcp \

node01:/glusterfs/replica \

node02:/glusterfs/replica

volume create: vol_replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_replica

volume start: vol_replica: success

# show volume info

[root@node01 ~]# gluster volume info

Volume Name: vol_replica

Type: Replicate

Volume ID: 0d5d5ef7-bdfa-416c-8046-205c4d9766e6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/replica

Brick2: node02:/glusterfs/replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, see the Client Settings section.
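Because replica 2 writes every file to both bricks, the usable capacity of the volume is that of a single brick, not the sum of both. The arithmetic can be sketched as follows (the 40 GB per-brick size is an assumption):

```shell
#!/bin/sh
# Sketch: usable capacity of a replicated volume.
# Every file is written to each brick in the replica set, so
# usable = (brick size * brick count) / replica count.
BRICK_GB=40    # assumed per-brick capacity
BRICKS=2
REPLICA=2
USABLE_GB=$(( BRICK_GB * BRICKS / REPLICA ))
echo "${USABLE_GB}G usable"
```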

4 Configure Striping

Configure Storage Clustering.

For example, create a striped volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes, as described in the installation section.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/striped

[3] Configure clustering as follows on one node. (any node is fine)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]# gluster volume create vol_striped stripe 2 transport tcp \

node01:/glusterfs/striped \

node02:/glusterfs/striped

volume create: vol_striped: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_striped

volume start: vol_striped: success

# show volume info

[root@node01 ~]# gluster volume info

Volume Name: vol_striped

Type: Stripe

Volume ID: b6f6b090-3856-418c-aed3-bc430db91dc6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/striped

Brick2: node02:/glusterfs/striped

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, see the Client Settings section.
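A striped volume splits each file into fixed-size blocks and places them round-robin across the bricks, so chunk i of a file lands on brick i mod stripe-count. A sketch of that placement (the chunk numbering is illustrative; the actual block size is a volume option):

```shell
#!/bin/sh
# Sketch: round-robin chunk placement in a 2-way striped volume.
# Chunk i of a file lands on brick (i % STRIPE).
STRIPE=2
PLACEMENT=""
for i in 0 1 2 3; do
    PLACEMENT="$PLACEMENT chunk$i->brick$(( i % STRIPE ))"
done
PLACEMENT=${PLACEMENT# }    # trim the leading space
echo "$PLACEMENT"
```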

5 Configure Distributed + Replication

Configure Storage Clustering.

For example, create a distributed + replicated volume with 4 servers.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes, as described in the installation section.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/dist-replica

[3] Configure clustering as follows on one node. (any node is fine)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

[root@node01 ~]# gluster peer probe node03

peer probe: success.

[root@node01 ~]# gluster peer probe node04

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 3

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

Hostname: node03

Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a

State: Peer in Cluster (Connected)

Hostname: node04

Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638

State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]# gluster volume create vol_dist-replica replica 2 transport tcp \

node01:/glusterfs/dist-replica \

node02:/glusterfs/dist-replica \

node03:/glusterfs/dist-replica \

node04:/glusterfs/dist-replica

volume create: vol_dist-replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_dist-replica

volume start: vol_dist-replica: success

# show volume info

[root@node01 ~]# gluster volume info

Volume Name: vol_dist-replica

Type: Distributed-Replicate

Volume ID: 784d2953-6599-4102-afc2-9069932894cc

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/dist-replica

Brick2: node02:/glusterfs/dist-replica

Brick3: node03:/glusterfs/dist-replica

Brick4: node04:/glusterfs/dist-replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, see the Client Settings section.
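Brick order in the create command matters here: with replica 2, each consecutive pair of bricks forms one replica set, and files are then distributed across the sets. A sketch that groups the four bricks above into their replica pairs:

```shell
#!/bin/sh
# Sketch: with "replica 2", consecutive bricks in the create command
# form one replica set; files are distributed across the sets.
set -- node01 node02 node03 node04    # brick hosts in command order
SETS=""
while [ $# -ge 2 ]; do
    SETS="$SETS [$1 $2]"
    shift 2
done
SETS=${SETS# }    # trim the leading space
echo "$SETS"
```

So node01/node02 mirror each other, as do node03/node04; listing the same order differently (e.g. node01 node03 ...) would change which hosts hold each copy.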

6 Configure Striping + Replication

Configure Storage Clustering.

For example, create a striped + replicated volume with 4 servers.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes, as described in the installation section.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/strip-replica

[3] Configure clustering as follows on one node. (any node is fine)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

[root@node01 ~]# gluster peer probe node03

peer probe: success.

[root@node01 ~]# gluster peer probe node04

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 3

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

Hostname: node03

Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a

State: Peer in Cluster (Connected)

Hostname: node04

Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638

State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]# gluster volume create vol_strip-replica stripe 2 replica 2 transport tcp \

node01:/glusterfs/strip-replica \

node02:/glusterfs/strip-replica \

node03:/glusterfs/strip-replica \

node04:/glusterfs/strip-replica

volume create: vol_strip-replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_strip-replica

volume start: vol_strip-replica: success

# show volume info

[root@node01 ~]# gluster volume info

Volume Name: vol_strip-replica

Type: Striped-Replicate

Volume ID: ec36b0d3-8467-47f6-aa83-1020555f58b6

Status: Started

Number of Bricks: 1 x 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/strip-replica

Brick2: node02:/glusterfs/strip-replica

Brick3: node03:/glusterfs/strip-replica

Brick4: node04:/glusterfs/strip-replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, see the Client Settings section.

7 Client Settings

This section covers the settings for GlusterFS clients to mount GlusterFS volumes.

[1] To mount with the GlusterFS Native Client, configure as follows.

[root@client ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

[root@client ~]# yum -y install glusterfs glusterfs-fuse

# mount vol_distributed volume on /mnt

[root@client ~]# mount -t glusterfs node01.srv.world:/vol_distributed /mnt

[root@client ~]# df -hT

Filesystem                           Type            Size  Used Avail Use% Mounted on

/dev/mapper/centos-root              xfs              27G  1.1G   26G   5% /

devtmpfs                             devtmpfs        2.0G     0  2.0G   0% /dev

tmpfs                                tmpfs           2.0G     0  2.0G   0% /dev/shm

tmpfs                                tmpfs           2.0G  8.3M  2.0G   1% /run

tmpfs                                tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1                            xfs             497M  151M  347M  31% /boot

node01.srv.world:/vol_distributed    fuse.glusterfs   40G   65M   40G   1% /mnt

[2] NFS (v3) is also supported, so it is possible to mount with NFS.

Configure the GlusterFS servers for NFS first, as shown in step [3] of the installation section.

[root@client ~]# yum -y install nfs-utils

[root@client ~]# systemctl start rpcbind rpc-statd

[root@client ~]# systemctl enable rpcbind rpc-statd

[root@client ~]# mount -t nfs -o mountvers=3 node01.srv.world:/vol_distributed /mnt

[root@client ~]# df -hT

Filesystem                           Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root              xfs        27G  1.1G   26G   5% /

devtmpfs                             devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                                tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                                tmpfs     2.0G  8.3M  2.0G   1% /run

tmpfs                                tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1                            xfs       497M  151M  347M  31% /boot

node01.srv.world:/vol_distributed    nfs        40G   64M   40G   1% /mnt
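To make either mount survive a reboot, matching /etc/fstab entries can be sketched as below; the `_netdev` option delays mounting until the network is up (hostname, volume, and mount point follow this example's environment):

```
# GlusterFS Native Client mount
node01.srv.world:/vol_distributed  /mnt  glusterfs  defaults,_netdev  0 0
# or, NFS v3 mount
node01.srv.world:/vol_distributed  /mnt  nfs  defaults,_netdev,mountvers=3  0 0
```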




Reposted from: https://blog.51cto.com/11840455/1833056
