Official introduction

Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

In short, it is an open-source distributed file system that can also be accessed via NFS or Samba.
In this lab, node1 serves as the client, while node2, node3, and node4 form the GlusterFS cluster.

Preparing the environment

[root@node2 ~]# vim prepare_bricks.sh
[root@node2 ~]# cat prepare_bricks.sh
#!/bin/bash

# Prepare bricks
pvcreate /dev/sdb
vgcreate -s 4M vol /dev/sdb
lvcreate -l 100%FREE -T vol/pool
lvcreate -V 10G -T vol/pool -n brick
mkfs.xfs -i size=512 /dev/vol/brick
mkdir -p /data/brick${1}
echo "/dev/vol/brick /data/brick${1} xfs defaults 0 0" >> /etc/fstab
mount /data/brick${1}

# Install package and start service
yum install -y glusterfs-server
systemctl start glusterd
systemctl enable glusterd
[root@node2 ~]# sh prepare_bricks.sh 2
...(output omitted)
[root@node2 ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
...(output omitted)
/dev/mapper/vol-brick   xfs        10G   33M   10G   1% /data/brick2
[root@node2 ~]# scp prepare_bricks.sh node3:/root
...(output omitted)
root@node3's password:
prepare_bricks.sh                                                                                                                          100%  418   104.4KB/s   00:00
[root@node2 ~]# scp prepare_bricks.sh node4:/root
...(output omitted)
root@node4's password:
prepare_bricks.sh                                                                                                                          100%  418    88.8KB/s   00:00
[root@node3 ~]# sh prepare_bricks.sh 3
...(output omitted)
[root@node4 ~]# sh prepare_bricks.sh 4
...(output omitted)

Joining the trusted storage pool

[root@node2 ~]# gluster peer probe node3
peer probe: success.
[root@node2 ~]# gluster peer probe node4
peer probe: success.
[root@node2 ~]# gluster pool list
UUID          Hostname   State
483b4b06-bcdb-4399-bc1c-a36e7a0e5274  node3      Connected
fa553747-3feb-4422-8dad-b5f61a93aa39  node4      Connected
19d39a4f-4e92-4ff4-a3a2-539d44358dec  localhost  Connected
[root@node2 ~]# gluster peer status
Number of Peers: 2

Hostname: node3
Uuid: 483b4b06-bcdb-4399-bc1c-a36e7a0e5274
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: fa553747-3feb-4422-8dad-b5f61a93aa39
State: Peer in Cluster (Connected)

Creating volumes

There are five volume types:

1) Distributed

The default type: each file is stored whole on a single brick, with the brick chosen by hashing the file name.

[root@node2 ~]# gluster volume create vol_distributed node2:/data/brick2/distributed node3:/data/brick3/distributed node4:/data/brick4/distributed
volume create: vol_distributed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed

Volume Name: vol_distributed
Type: Distribute
Volume ID: ecd70c34-5808-46ee-b813-9ed6f707b1a3
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed
Brick2: node3:/data/brick3/distributed
Brick3: node4:/data/brick4/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
[root@node2 ~]# gluster volume status
Status of volume: vol_distributed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed        49152     0          Y       1821
Brick node3:/data/brick3/distributed        49152     0          Y       1770
Brick node4:/data/brick4/distributed        49152     0          Y       16476

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
There are no active volume tasks
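The hash placement can be illustrated with a toy script. This is only a sketch: GlusterFS's real DHT assigns hash ranges to bricks per directory via extended attributes, so the brick chosen below will not match what the volume actually does — the point is that the file name, not its content, determines the brick.

```shell
# Toy model of distributed placement: hash the file NAME, map it to a brick.
brick_for() {
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)   # any stable hash works here
    n=$(( h % 3 + 2 ))                              # map onto node2..node4
    echo "node${n}:/data/brick${n}/distributed"
}
brick_for welcome.txt
brick_for second.txt
```

Because placement depends only on the name, the same file always lands on the same brick, and renaming a file can move it to a different one.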

2) Replicated

Data is written to every brick in the set simultaneously, with the number of copies given by the replica count.

[root@node2 ~]# gluster volume create vol_replicated replica 3 node2:/data/brick2/replicated node3:/data/brick3/replicated node4:/data/brick4/replicated
volume create: vol_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_replicated

Volume Name: vol_replicated
Type: Replicate
Volume ID: e50727b4-d71b-4dab-b74a-cfd2a0027bb3
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/replicated
Brick2: node3:/data/brick3/replicated
Brick3: node4:/data/brick4/replicated
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_replicated
volume start: vol_replicated: success
[root@node2 ~]# gluster volume status vol_replicated
Status of volume: vol_replicated
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/replicated         49153     0          Y       1873
Brick node3:/data/brick3/replicated         49153     0          Y       1828
Brick node4:/data/brick4/replicated         49153     0          Y       1811
Self-heal Daemon on localhost               N/A       N/A        Y       1894
Self-heal Daemon on node4                   N/A       N/A        Y       1832
Self-heal Daemon on node3                   N/A       N/A        Y       1849

Task Status of Volume vol_replicated
------------------------------------------------------------------------------
There are no active volume tasks
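A quick capacity check (assuming the 10G bricks created earlier): replication stores full copies, so usable space equals one brick's worth no matter how many replicas there are — which matches the 10G this volume reports when mounted over NFS later.

```shell
# Replica capacity: N full copies of everything, so usable = total / replica.
bricks=3; replica=3; brick_gb=10
usable=$(( bricks * brick_gb / replica ))
echo "${usable}G usable"   # prints: 10G usable
```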

3) Dispersed

Similar to RAID 5: data is split into fragments stored across the bricks, with one fragment per stripe used for redundancy (parity).

[root@node2 ~]# gluster volume create vol_dispersed disperse 3 redundancy 1 node2:/data/brick2/dispersed node3:/data/brick3/dispersed node4:/data/brick4/dispersed
volume create: vol_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_dispersed

Volume Name: vol_dispersed
Type: Disperse
Volume ID: e3894a96-7823-43c7-8f24-c5b628eb86ed
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/dispersed
Brick2: node3:/data/brick3/dispersed
Brick3: node4:/data/brick4/dispersed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_dispersed
volume start: vol_dispersed: success
[root@node2 ~]# gluster volume status vol_dispersed
Status of volume: vol_dispersed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/dispersed          49154     0          Y       2028
Brick node3:/data/brick3/dispersed          49154     0          Y       1918
Brick node4:/data/brick4/dispersed          49154     0          Y       16630
Self-heal Daemon on localhost               N/A       N/A        Y       1930
Self-heal Daemon on node4                   N/A       N/A        Y       16558
Self-heal Daemon on node3                   N/A       N/A        Y       1851

Task Status of Volume vol_dispersed
------------------------------------------------------------------------------
There are no active volume tasks
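The RAID 5 analogy makes the capacity easy to work out (assuming the 10G bricks created earlier): with disperse 3 redundancy 1, two of every three fragments carry data — matching the 20G this volume reports when mounted over CIFS later.

```shell
# Dispersed capacity: (bricks - redundancy) fragments per stripe hold data.
bricks=3; redundancy=1; brick_gb=10
usable=$(( (bricks - redundancy) * brick_gb ))
echo "${usable}G usable"   # prints: 20G usable
```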

4) Distributed Replicated

Combines both: files are hashed across multiple replica sets, and fully copied within each set.

[root@node2 ~]# gluster volume create vol_distributed_replicated replica 3 node2:/data/brick2/distributed_replicated21 node3:/data/brick3/distributed_replicated31 node4:/data/brick4/distributed_replicated41 node2:/data/brick2/distributed_replicated22 node3:/data/brick3/distributed_replicated32 node4:/data/brick4/distributed_replicated42
volume create: vol_distributed_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_replicated

Volume Name: vol_distributed_replicated
Type: Distributed-Replicate
Volume ID: b8049701-9587-49ac-9cb2-1861421125c2
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_replicated21
Brick2: node3:/data/brick3/distributed_replicated31
Brick3: node4:/data/brick4/distributed_replicated41
Brick4: node2:/data/brick2/distributed_replicated22
Brick5: node3:/data/brick3/distributed_replicated32
Brick6: node4:/data/brick4/distributed_replicated42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_distributed_replicated
volume start: vol_distributed_replicated: success
[root@node2 ~]# gluster volume status vol_distributed_replicated
Status of volume: vol_distributed_replicated
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_replic
ated21                                      49155     0          Y       2277
Brick node3:/data/brick3/distributed_replic
ated31                                      49155     0          Y       2166
Brick node4:/data/brick4/distributed_replic
ated41                                      49155     0          Y       2141
Brick node2:/data/brick2/distributed_replic
ated22                                      49156     0          Y       2297
Brick node3:/data/brick3/distributed_replic
ated32                                      49156     0          Y       2186
Brick node4:/data/brick4/distributed_replic
ated42                                      49156     0          Y       2161
Self-heal Daemon on localhost               N/A       N/A        Y       1894
Self-heal Daemon on node3                   N/A       N/A        Y       1849
Self-heal Daemon on node4                   N/A       N/A        Y       1832

Task Status of Volume vol_distributed_replicated
------------------------------------------------------------------------------
There are no active volume tasks
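Brick order in the create command matters: with replica 3 and six bricks, each consecutive group of three bricks becomes one replica set, and files are then hashed across the sets. A sketch of the grouping (using shortened, hypothetical brick paths, not the real ones above):

```shell
# With "replica 3", bricks 1-3 form replica set 1 and bricks 4-6 form set 2.
bricks="node2:/b21 node3:/b31 node4:/b41 node2:/b22 node3:/b32 node4:/b42"
echo $bricks | xargs -n3 echo "replica set:"
```

This is why the walkthrough interleaves nodes (node2, node3, node4, node2, ...) rather than listing both bricks of one node consecutively: a set whose three bricks all sit on one node would not survive that node failing.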

5) Distributed Dispersed

Combines both: files are hashed across multiple disperse sets.

[root@node2 ~]# gluster volume create vol_distributed_dispersed disperse-data 2 redundancy 1 \
> node2:/data/brick2/distributed_dispersed21 \
> node3:/data/brick3/distributed_dispersed31 \
> node4:/data/brick4/distributed_dispersed41 \
> node2:/data/brick2/distributed_dispersed22 \
> node3:/data/brick3/distributed_dispersed32 \
> node4:/data/brick4/distributed_dispersed42
volume create: vol_distributed_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_dispersed

Volume Name: vol_distributed_dispersed
Type: Distributed-Disperse
Volume ID: 797e2e88-61e0-4df4-a308-16c5140b2480
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_dispersed21
Brick2: node3:/data/brick3/distributed_dispersed31
Brick3: node4:/data/brick4/distributed_dispersed41
Brick4: node2:/data/brick2/distributed_dispersed22
Brick5: node3:/data/brick3/distributed_dispersed32
Brick6: node4:/data/brick4/distributed_dispersed42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed_dispersed
volume start: vol_distributed_dispersed: success
[root@node2 ~]# gluster volume status vol_distributed_dispersed
Status of volume: vol_distributed_dispersed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_disper
sed21                                       49157     0          Y       2529
Brick node3:/data/brick3/distributed_disper
sed31                                       49157     0          Y       17071
Brick node4:/data/brick4/distributed_disper
sed41                                       49157     0          Y       17051
Brick node2:/data/brick2/distributed_disper
sed22                                       49158     0          Y       2549
Brick node3:/data/brick3/distributed_disper
sed32                                       49158     0          Y       17091
Brick node4:/data/brick4/distributed_disper
sed42                                       49158     0          Y       17071
Self-heal Daemon on localhost               N/A       N/A        Y       1894
Self-heal Daemon on node4                   N/A       N/A        Y       1832
Self-heal Daemon on node3                   N/A       N/A        Y       1849

Task Status of Volume vol_distributed_dispersed
------------------------------------------------------------------------------
There are no active volume tasks

Inspecting the directories created

[root@node2 ~]# tree /data/brick2/
/data/brick2/
├── dispersed
├── distributed
├── distributed_dispersed21
├── distributed_dispersed22
├── distributed_replicated21
├── distributed_replicated22
└── replicated

7 directories, 0 files

With the volumes created, take a fresh VM snapshot of node2, node3, and node4.

Client mounts

Volumes can be mounted in three ways:

1) Mounting with the GlusterFS native client

[root@node1 ~]# yum install -y glusterfs glusterfs-fuse
[root@node1 ~]# mkdir /mnt/distributed
[root@node1 ~]# mount -t glusterfs node2:/vol_distributed /mnt/distributed/
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
...(output omitted)
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed

To keep the volume mountable if node2 goes down, you can list multiple nodes in the mount source:

[root@node1 ~]# vim /etc/fstab
[root@node1 ~]# cat /etc/fstab
...(output omitted)
node2:/vol_distributed,node3:/vol_distributed,node4:/vol_distributed /mnt/distributed glusterfs defaults,_netdev 0 0
[root@node1 ~]# umount /mnt/distributed/
[root@node1 ~]# mount -a
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
...(output omitted)
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed
[root@node1 ~]# echo "Here is node1" > /mnt/distributed/welcome.txt
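Besides the comma-separated source used above, the FUSE mount helper also accepts a backup-volfile-servers option. The entry below is a sketch, not verified in this lab — check `man mount.glusterfs` for your installed version:

```shell
# Equivalent /etc/fstab entry using backup-volfile-servers (assumption: the
# installed glusterfs-fuse supports this option):
# node2:/vol_distributed /mnt/distributed glusterfs defaults,_netdev,backup-volfile-servers=node3:node4 0 0
```

Either way, the extra servers are only used to fetch the volume definition at mount time; after that, the client talks to all bricks directly.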

2) Mounting via NFS

Export the volume with NFS-Ganesha:

[root@node2 ~]# yum install -y nfs-ganesha nfs-ganesha-gluster
[root@node2 ~]# cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.bak
[root@node2 ~]# vim /etc/ganesha/ganesha.conf
[root@node2 ~]# egrep -v "#|^$" /etc/ganesha/ganesha.conf
EXPORT{
    Export_Id = 1 ;          # Export ID unique to each export
    Path = "/vol_replicated";          # Path of the volume to be exported. Eg: "/test_volume"
    FSAL {
        name = GLUSTER;
        hostname = "node2";            # IP of one of the nodes in the trusted pool
        volume = "vol_replicated";     # Volume name. Eg: "test_volume"
    }
    Access_type = RW;        # Access permissions
    Squash = No_root_squash; # To enable/disable root squashing
    Disable_ACL = TRUE;      # To enable/disable ACL
    Pseudo = "/vol_replicated_pseudo"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3,4" ;      # NFS protocols supported
    Transports = "UDP,TCP" ; # Transport protocols supported
    SecType = "sys";         # Security flavors supported
}
[root@node2 ~]# systemctl start nfs-ganesha
[root@node2 ~]# systemctl enable nfs-ganesha

Mount on the client:

[root@node1 ~]# yum install -y nfs-utils
[root@node1 ~]# showmount -e node2
Export list for node2:
/vol_replicated (everyone)
[root@node1 ~]# mkdir /mnt/replicated
[root@node1 ~]# mount -t nfs node2:/vol_replicated /mnt/replicated/
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
...(output omitted)
node2:/vol_replicated   nfs              10G  135M  9.9G   2% /mnt/replicated
[root@node1 ~]# echo "Here is node1" > /mnt/replicated/welcome.txt

3) Mounting via Samba

Server-side preparation:

[root@node2 ~]# yum install -y samba samba-vfs-glusterfs
...(output omitted)
[root@node2 ~]# adduser glusteruser
[root@node2 ~]# smbpasswd -a glusteruser
New SMB password:
Retype new SMB password:
Added user glusteruser.
[root@node2 ~]# vim /etc/samba/smb.conf
[root@node2 ~]# cat /etc/samba/smb.conf
...(output omitted)
[gluster_vol_dispersed]
    comment = For samba share of volume vol_dispersed
    vfs objects = glusterfs
    glusterfs:volume = vol_dispersed
    glusterfs:logfile = /var/log/samba/glusterfs.%M.log
    glusterfs:loglevel = 7
    path = /
    read only = no
    guest ok = yes
    kernel share modes = no
[root@node2 ~]# systemctl start smb
[root@node2 ~]# systemctl enable smb

Mount on the client

Mounting via CIFS right away leaves the share unwritable; mount the volume with FUSE first, relax the permissions, then unmount and remount via CIFS.

[root@node1 ~]# mkdir /mnt/dispersed_temp
[root@node1 ~]# mount -t glusterfs node2:/vol_dispersed /mnt/dispersed_temp/
[root@node1 ~]# echo "Here is node1" > /mnt/dispersed_temp/welcome.txt
[root@node1 ~]# chmod 777 /mnt/dispersed_temp/
[root@node1 ~]# umount /mnt/dispersed_temp/
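The temporary FUSE mount is one way to make the share writable. A possible alternative (an assumption, not run in this lab) is setting the volume root's owner through volume options so the Samba user can write without a chmod:

```shell
# Hypothetical alternative to the chmod-via-FUSE step (not executed here;
# 1000 stands in for the glusteruser uid/gid on the server):
# gluster volume set vol_dispersed storage.owner-uid 1000
# gluster volume set vol_dispersed storage.owner-gid 1000
```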

Mount via CIFS:

[root@node1 ~]# yum install -y samba-client cifs-utils
[root@node1 ~]# smbclient -L node2 -U glusteruser
Enter SAMBA\glusteruser's password:

        Sharename             Type      Comment
        ---------             ----      -------
        print$                Disk      Printer Drivers
        gluster_vol_dispersed Disk      For samba share of volume repvol
        IPC$                  IPC       IPC Service (Samba 4.10.4)
        glusteruser           Disk      Home Directories
Reconnecting with SMB1 for workgroup listing.

        Server               Comment
        ---------            -------

        Workgroup            Master
        ---------            -------
[root@node1 ~]# mkdir /mnt/dispersed
[root@node1 ~]# mount -t cifs -o username=glusteruser,password=123456 //node2/gluster_vol_dispersed /mnt/dispersed
[root@node1 ~]# df -Th
Filesystem                    Type            Size  Used Avail Use% Mounted on
...(output omitted)
//node2/gluster_vol_dispersed cifs             20G  272M   20G   2% /mnt/dispersed
[root@node1 ~]# echo "mount vol_dispersed via cifs from node1" > /mnt/dispersed/second.txt

Coming up next

Common volume options:
restricting client IPs, ACLs, quotas, expanding and shrinking
snapshots
CTDB

Reposted from the WeChat official account: 开源Ops
Original link: https://mp.weixin.qq.com/s/AxTZisaFybfJhM0-ZOWwiw
