An earlier version of this tutorial was written by Justin Ellingwood.

Introduction

When storing any critical data, having a single point of failure is very risky. While many databases and other software allow you to spread data out in the context of a single application, other systems can operate on the filesystem level to ensure that data is copied to another location whenever it’s written to disk.

GlusterFS is a network-attached storage filesystem that allows you to pool storage resources of multiple machines. In turn, this lets you treat multiple storage devices that are distributed among many computers as a single, more powerful unit. GlusterFS also gives you the freedom to create different kinds of storage configurations, many of which are functionally similar to RAID levels. For instance, you can stripe data across different nodes in the cluster, or you can implement redundancy for better data availability.

Goals

In this guide, you will create a redundant clustered storage array, also known as a distributed file system or, as it’s referred to in the GlusterFS documentation, a Trusted Storage Pool. This will provide functionality similar to a mirrored RAID configuration over the network: each independent server will contain its own copy of the data, allowing your applications to access either copy, thereby helping distribute your read load.

This redundant GlusterFS cluster will consist of two Ubuntu 18.04 servers. It will act similarly to a NAS server with mirrored RAID. You’ll then access the cluster from a third Ubuntu 18.04 server configured to function as a GlusterFS client.

A Note About Running GlusterFS Securely

When you add data to a GlusterFS volume, that data gets synced to every machine in the storage pool where the volume is hosted. This traffic between nodes isn’t encrypted by default, meaning there’s a risk it could be intercepted by malicious actors.

For this reason, if you’re going to use GlusterFS in production, it’s recommended that you run it on an isolated network. For example, you could set it up to run in a Virtual Private Cloud (VPC) or with a VPN running between each of the nodes.

If you plan to deploy GlusterFS on DigitalOcean, you can set it up in an isolated network by adding your server infrastructure to a DigitalOcean Virtual Private Cloud. For details on how to set this up, see our VPC product documentation.

Prerequisites

To follow this tutorial, you will need three servers running Ubuntu 18.04. Each server should have a non-root user with administrative privileges, and a firewall configured with UFW. To set this up, follow our initial server setup guide for Ubuntu 18.04.

Note: As mentioned in the Goals section, this tutorial will walk you through configuring two of your Ubuntu servers to act as servers in your storage pool and the remaining one to act as a client which you’ll use to access these nodes.

For clarity, this tutorial will refer to these machines with the following hostnames:

Hostname Role in Storage Pool
gluster0 Server
gluster1 Server
gluster2 Client

Commands that should be run on either gluster0 or gluster1 will have blue and red backgrounds, respectively:

Commands that should only be run on the client (gluster2) will have a green background:

Commands that can or should be run on more than one machine will have a gray background:

Step 1 — Configuring DNS Resolution on Each Machine

Setting up some kind of hostname resolution between each computer can help with managing your Gluster storage pool. This way, whenever you have to reference one of your machines in a gluster command later in this tutorial, you can do so with an easy-to-remember domain name or even a nickname instead of their respective IP addresses.

If you do not have a spare domain name, or if you just want to set up something quickly, you can instead edit the /etc/hosts file on each computer. This is a special file on Linux machines where you can statically configure the system to resolve any hostnames contained in the file to static IP addresses.

Note: If you’d like to configure your servers to authenticate with a domain that you own, you’ll first need to obtain a domain name from a domain registrar — like Namecheap or Enom — and configure the appropriate DNS records.

Once you’ve configured an A record for each server, you can jump ahead to Step 2. As you follow this guide, make sure that you replace glusterN.example.com and glusterN with the domain name that resolves to the respective server referenced in the example command.

If you obtained your infrastructure from DigitalOcean, you could add your domain name to DigitalOcean then set up a unique A record for each of your servers.

Using your preferred text editor, open this file with root privileges on each of your machines. Here, we’ll use nano:

  • sudo nano /etc/hosts

By default, the file will look something like this with comments removed:

/etc/hosts
127.0.1.1 hostname hostname
127.0.0.1 localhost

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

On one of your Ubuntu servers, below the local host definition, add each server’s IP address followed by any names you wish to use to reference them in commands.

In the following example, each server is given a long hostname that aligns with glusterN.example.com and a short one that aligns with glusterN. You can change the glusterN.example.com and glusterN portions of each line to whatever name — or names separated by single spaces — you would like to use to access each server. Note, though, that this tutorial will use these examples throughout:

Note: If your servers are part of a Virtual Private Cloud infrastructure pool, you should use each server’s private IP address in the /etc/hosts file rather than their public IPs.

/etc/hosts
. . .
127.0.0.1       localhost
first_ip_address gluster0.example.com gluster0
second_ip_address gluster1.example.com gluster1
third_ip_address gluster2.example.com gluster2

. . .

When you are finished adding these new lines to the /etc/hosts file of one machine, copy and add them to the /etc/hosts files on your other machines. Each /etc/hosts file should contain the same lines, linking your servers’ IP addresses to the names you’ve selected.

Save and close each file when you are finished. If you used nano, do so by pressing CTRL + X, Y, and then ENTER.

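If you prefer not to edit each file by hand, you could append the entries with a heredoc instead. The sketch below uses placeholder documentation IPs (`203.0.113.x`) and writes to a scratch copy under `/tmp`, so nothing changes until you have reviewed the result and copied it over `/etc/hosts` yourself:

```shell
# Host entries for the three machines; the IP addresses below are
# placeholders -- substitute your servers' real (preferably private) IPs.
hosts_entries() {
  cat <<'EOF'
203.0.113.10 gluster0.example.com gluster0
203.0.113.20 gluster1.example.com gluster1
203.0.113.30 gluster2.example.com gluster2
EOF
}

# Append to a scratch copy first so the result can be reviewed
# before replacing /etc/hosts with it.
cp /etc/hosts /tmp/hosts.new
hosts_entries >> /tmp/hosts.new
grep gluster /tmp/hosts.new
```

Running the same snippet on each machine keeps the three files identical, which is exactly what this step requires.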
Now that you’ve configured hostname resolution between each of your servers, it will be easier to run commands later on as you set up a storage pool and volume. Next, you’ll go through another step that must be completed on each of your servers. Namely, you’ll add the Gluster project’s official personal package archive (PPA) to each of your three Ubuntu servers to ensure that you can install the latest version of GlusterFS.

Step 2 — Setting Up Software Sources on Each Machine

Although the default Ubuntu 18.04 APT repositories contain GlusterFS packages, they are fairly out-of-date. One way to install the latest stable version of GlusterFS (version 7 as of this writing) is to add the Gluster project’s official PPA to each of your three Ubuntu servers.

First, update the local package index on each of your servers:

  • sudo apt update

Then install the software-properties-common package on each machine. This package will allow you to manage PPAs with greater flexibility:

  • sudo apt install software-properties-common

Once the PPA tools are installed, add the PPA for the GlusterFS packages by running the following command on each server:

  • sudo add-apt-repository ppa:gluster/glusterfs-7

Press ENTER when prompted to confirm that you actually want to add the PPA.

After adding the PPA, refresh each server’s local package index. This will make each server aware of the new packages available:

  • sudo apt update

After adding the Gluster project’s official PPA to each server and updating the local package index, you’re ready to install the necessary GlusterFS packages. However, because two of your three machines will act as Gluster servers and the other will act as a client, there are two separate installation and configuration procedures. First, you’ll install and set up the server components.

Step 3 — Installing Server Components and Creating a Trusted Storage Pool

A storage pool is any amount of storage capacity aggregated from more than one storage resource. In this step, you will configure two of your servers — gluster0 and gluster1 — as the cluster components.

On both gluster0 and gluster1, install the GlusterFS server package by typing:

gluster0gluster1上 ,通过键入以下命令安装GlusterFS服务器软件包:

  • sudo apt install glusterfs-server

When prompted, press Y and then ENTER to confirm the installation.

At this point, both gluster0 and gluster1 have GlusterFS installed and the glusterd service should be running. You can test this by running the following command on each server:

  • sudo systemctl status glusterd.service

If the service is up and running, you’ll receive output like this:

Output
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-06-02 21:32:21 UTC; 32s ago
     Docs: man:glusterd(8)
 Main PID: 14742 (glusterd)
    Tasks: 9 (limit: 2362)
   CGroup: /system.slice/glusterd.service
           └─14742 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Assuming you followed the prerequisite initial server setup guide, you will have set up a firewall with UFW on each of your machines. Because of this, you’ll need to open up the firewall on each node before you can establish communications between them and create a storage pool.

The Gluster daemon uses port 24007, so you’ll need to allow each node access to that port through the firewall of each other node in your storage pool. To do so, run the following command on gluster0. Remember to change gluster1_ip_address to gluster1’s IP address:

  • sudo ufw allow from gluster1_ip_address to any port 24007

And run the following command on gluster1. Again, be sure to change gluster0_ip_address to gluster0’s IP address:

  • sudo ufw allow from gluster0_ip_address to any port 24007

You’ll also need to allow your client machine (gluster2) access to this port. Otherwise, you’ll run into issues later on when you try to mount the volume. Run the following command on both gluster0 and gluster1 to open up this port to your client machine:

  • sudo ufw allow from gluster2_ip_address to any port 24007

Then, to ensure that no other machines are able to access Gluster’s port on either server, add the following blanket deny rule to both gluster0 and gluster1:

  • sudo ufw deny 24007
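
Since the allow rules follow the same pattern for every peer, you could generate them in a loop. This sketch uses placeholder IPs and only prints the rules to a file for review; once they look right, run each line through sudo ufw:

```shell
# Placeholder IPs for the other nodes that need access to port 24007;
# substitute the real addresses of your machines.
peer_ips="203.0.113.20 203.0.113.30"

# Write the rules out for review instead of applying them directly.
for ip in $peer_ips; do
  echo "ufw allow from $ip to any port 24007"
done > /tmp/ufw-rules.txt
echo "ufw deny 24007" >> /tmp/ufw-rules.txt

cat /tmp/ufw-rules.txt
```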

Next, you’ll need to establish communication between gluster0 and gluster1.

To do so, you’ll need to run the gluster peer probe command on one of your nodes. It doesn’t matter which node you use, but the following example shows the command being run on gluster0:

  • sudo gluster peer probe gluster1

Essentially, this command tells gluster0 to trust gluster1 and register it as part of its storage pool. If the probe is successful, it will return the following output:

Output
peer probe: success

You can check that the nodes are communicating at any time by running the gluster peer status command on either one. In this example, it’s run on gluster1:

  • sudo gluster peer status

If you run this command from gluster1, it will show output like this:

Output
Number of Peers: 1

Hostname: gluster0.example.com
Uuid: a3fae496-c4eb-4b20-9ed2-7840230407be
State: Peer in Cluster (Connected)

At this point, your two servers are communicating and ready to create storage volumes with each other.

Step 4 — Creating a Storage Volume

Recall that the primary goal of this tutorial is to create a redundant storage pool. To this end you’ll set up a volume with replica functionality, allowing you to keep multiple copies of your data and prevent your cluster from having a single point of failure.

To create a volume, you’ll use the gluster volume create command with this general syntax:

sudo gluster volume create volume_name replica number_of_servers domain1.com:/path/to/data/directory domain2.com:/path/to/data/directory force

Here’s what this gluster volume create command’s arguments and options mean:

  • volume_name: This is the name you’ll use to refer to the volume after it’s created. The following example command creates a volume named volume1.

  • replica number_of_servers: Following the volume name, you can define what type of volume you want to create. Recall that the goal of this tutorial is to create a redundant storage pool, so we’ll use the replica volume type, which requires you to specify how many servers will hold a copy of the data (2 in the case of this tutorial).

  • domain1.com:/… and domain2.com:/…: These define the machines and directory location of the bricks — GlusterFS’s term for its basic unit of storage, which includes any directories on any machines that serve as a part or a copy of a larger volume — that will make up volume1. The following example will create a directory named gluster-storage in the root directory of both servers.

    domain1.com:/…domain2.com:/… :它们定义了砖块的机器和目录位置-GlusterFS的基本存储单位的术语,包括用作其一部分或副本的任何机器上的任何目录更大的音量-将构成volume1 。 以下示例将在两台服务器的根目录中创建一个名为gluster-storage的目录。

  • force: This option will override any warnings or options that would otherwise come up and halt the volume’s creation.

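To keep the pieces of this long command straight, it can help to assemble it from variables first. The sketch below only prints the resulting command for review, using the names established in this tutorial:

```shell
# Assemble the volume-create command from its parts.
volume_name="volume1"
replica_count=2
brick_dir="/gluster-storage"
servers="gluster0.example.com gluster1.example.com"

# Build the list of bricks, one server:directory pair per server.
bricks=""
for s in $servers; do
  bricks="$bricks $s:$brick_dir"
done

# Printed for review; run the result with sudo on either Gluster server.
create_cmd="gluster volume create $volume_name replica $replica_count$bricks force"
echo "$create_cmd"
```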
Following the conventions established earlier in this tutorial, you can run this command to create a volume. Note that you can run it from either gluster0 or gluster1:

  • sudo gluster volume create volume1 replica 2 gluster0.example.com:/gluster-storage gluster1.example.com:/gluster-storage force

If the volume was created successfully, you’ll receive the following output:

Output
volume create: volume1: success: please start the volume to access data

At this point, your volume has been created, but it’s not yet active. You can start the volume and make it available for use by running the following command, again from either of your Gluster servers:

  • sudo gluster volume start volume1

You’ll receive this output if the volume was started correctly:

Output
volume start: volume1: success

Next, check that the volume is online. Run the following command from either one of your nodes:

  • sudo gluster volume status

This will return output similar to this:

Output
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster0.example.com:/gluster-storage 49152     0          Y       18801
Brick gluster1.example.com:/gluster-storage 49152     0          Y       19028
Self-heal Daemon on localhost               N/A       N/A        Y       19049
Self-heal Daemon on gluster0.example.com    N/A       N/A        Y       18822

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Based on this output, the bricks on both servers are online.

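If you ever want to check brick health from a script rather than by eye, you could count the bricks marked online in a captured copy of this table. The heredoc below stands in for real gluster volume status output; on a real node you would capture the command’s output instead:

```shell
# Count bricks reported online ("Y") in a captured status table.
# The sample lines mirror the output shown above.
status_table=$(cat <<'EOF'
Brick gluster0.example.com:/gluster-storage 49152     0          Y       18801
Brick gluster1.example.com:/gluster-storage 49152     0          Y       19028
EOF
)

# Field 5 of each Brick row is the Online column.
online=$(printf '%s\n' "$status_table" | awk '$1 == "Brick" && $5 == "Y"' | wc -l)
echo "bricks online: $online"
```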
As a final step to configuring your volume, you’ll need to open up the firewall on both servers so your client machine will be able to connect to and mount the volume. Based on the previous command’s sample output, volume1 is running on port 49152 on both machines. This is GlusterFS’s default port for its initial volume, and the next volume you create will use port 49153, then 49154, and so on.

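That numbering scheme is easy to compute if you are scripting firewall rules for several volumes; a quick sketch:

```shell
# Brick ports are assigned sequentially, one per volume, starting at 49152.
base_port=49152
for n in 1 2 3; do
  port=$((base_port + n - 1))
  echo "volume $n -> brick port $port"
done
```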
Run the following command on both gluster0 and gluster1 to allow gluster2 access to this port through each one’s respective firewall:

gluster0gluster1上运行以下命令,以允许gluster2通过每个人各自的防火墙访问此端口:

  • sudo ufw allow from gluster2_ip_address to any port 49152

Then, for an added layer of security, add another blanket deny rule for the volume’s port on both gluster0 and gluster1. This will ensure that no machines other than your client can access the volume on either server:

  • sudo ufw deny 49152

Now that your volume is up and running, you can set up your client machine and begin using it remotely.

Step 5 — Installing and Configuring Client Components

Now that your volume has been configured, it’s available for use by your client machine.

Before you begin though, you need to install the glusterfs-client package from the PPA you set up in Step 1. This package’s dependencies include some of GlusterFS’s common libraries and translator modules and the FUSE tools required for it to work.

Run the following command on gluster2:

gluster2上运行以下命令:

  • sudo apt install glusterfs-client

You will mount your remote storage volume on your client computer shortly. Before you can do that, though, you need to create a mount point. Traditionally, this is in the /mnt directory, but anywhere convenient can be used.

For simplicity’s sake, create a directory named /storage-pool on your client machine to serve as the mount point. This directory name starts with a forward slash (/) which places it in the root directory, so you’ll need to create it with sudo privileges:

  • sudo mkdir /storage-pool

Now you can mount the remote volume. Before that, though, take a look at the syntax of the mount command you’ll use to do so:

sudo mount -t glusterfs domain1.com:volume_name /path/to/mount/point

mount is a utility found in many Unix-like operating systems. It’s used to mount filesystems — anything from external storage devices, like SD cards or USB sticks, to network-attached storage as in the case of this tutorial — to directories on the machine’s existing filesystem. The mount command syntax you’ll use includes the -t option, which requires three arguments: the type of filesystem to be mounted, the device where the filesystem to mount can be found, and the directory on the client where you’ll mount the volume.

Notice that in this example syntax, the device argument points to a hostname followed by a colon and then the volume’s name. GlusterFS abstracts the actual storage directories on each host, meaning that this command doesn’t look to mount the /gluster-storage directory, but instead the volume1 volume.

Also notice that you only have to specify one member of the storage cluster. This can be either node, since the GlusterFS service treats them as one machine.

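Putting the pieces of that syntax together, here is a small sketch that assembles (but only prints) the mount command used in this tutorial, so each part of the device argument is visible:

```shell
# The three parts of the mount command, as used in this tutorial.
gluster_host="gluster0.example.com"   # any one member of the storage pool
volume_name="volume1"                 # the Gluster volume, not a brick directory
mount_point="/storage-pool"           # local directory on the client

# Printed for review; run the result with sudo on the client.
mount_cmd="mount -t glusterfs $gluster_host:/$volume_name $mount_point"
echo "$mount_cmd"
```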
Run the following command on your client machine (gluster2) to mount the volume to the /storage-pool directory you created:

  • sudo mount -t glusterfs gluster0.example.com:/volume1 /storage-pool

Following that, run the df command. This will display the amount of available disk space for file systems to which the user invoking it has access:

  • df

This command will show that the GlusterFS volume is mounted at the correct location:

Output
Filesystem                    1K-blocks    Used Available Use% Mounted on
. . .
gluster0.example.com:/volume1  50633164 1747596  48885568   4% /storage-pool

Now, you can move on to testing that any data you write to the volume on your client gets replicated to your server nodes as expected.

Step 6 — Testing Redundancy Features

Now that you’ve set up your client to use your storage pool and volume, you can test its functionality.

On your client machine (gluster2), navigate to the mount point that you defined in the previous step:

  • cd /storage-pool

Then create a few test files. The following command creates ten separate empty files in your storage pool:

  • sudo touch file_{0..9}.test

If you examine the storage directories you defined earlier on each storage host, you’ll discover that all of these files are present on each system.

On gluster0:

gluster0上

  • ls /gluster-storage
Output
file_0.test  file_2.test  file_4.test  file_6.test  file_8.test
file_1.test  file_3.test  file_5.test  file_7.test  file_9.test

Likewise, on gluster1:

  • ls /gluster-storage
Output
file_0.test  file_2.test  file_4.test  file_6.test  file_8.test
file_1.test  file_3.test  file_5.test  file_7.test  file_9.test

As these outputs show, the test files that you added to the client were also written to both of your nodes.

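In a script, you could confirm that both bricks hold the same file set by comparing their listings. The sketch below simulates the two brick directories locally so it can run anywhere; on the real nodes you would list /gluster-storage on each server instead:

```shell
# Simulate the two brick directories locally to show the comparison itself.
mkdir -p /tmp/brick0 /tmp/brick1
for i in 0 1 2 3 4 5 6 7 8 9; do
  touch "/tmp/brick0/file_$i.test" "/tmp/brick1/file_$i.test"
done

# Capture each brick's listing, then compare them byte for byte.
ls /tmp/brick0 > /tmp/listing0
ls /tmp/brick1 > /tmp/listing1

if cmp -s /tmp/listing0 /tmp/listing1; then
  echo "brick listings match"
else
  echo "brick listings differ"
fi
```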
If there is ever a point when one of the nodes in your storage cluster is down, it could fall out of sync with the storage pool if any changes are made to the filesystem. Doing a read operation on the client mount point after the node comes back online will alert the node to get any missing files:

  • ls /storage-pool

Now that you’ve verified that your storage volume is mounted correctly and can replicate data to both machines in the cluster, you can lock down access to the storage pool.

Step 7 — Restricting Redundancy Features

At this point, any computer can connect to your storage volume without any restrictions. You can change this by setting the auth.allow option, which defines the IP addresses of whatever clients should have access to the volume.

If you’re using the /etc/hosts configuration, the names you’ve set for each server will not route correctly. You must use a static IP address instead. On the other hand, if you’re using DNS records, the domain name you’ve configured will work here.

On either one of your storage nodes (gluster0 or gluster1), run the following command:

  • sudo gluster volume set volume1 auth.allow gluster2_ip_address

If the command completes successfully, it will return this output:

Output
volume set: success

If you need to remove the restriction at any point, you can type:

  • sudo gluster volume set volume1 auth.allow *

This will allow connections from any machine again. This is insecure, but can be useful for debugging issues.

If you have multiple clients, you can specify their IP addresses or domain names at the same time (depending whether you are using /etc/hosts or DNS hostname resolution), separated by commas:

  • sudo gluster volume set volume1 auth.allow gluster_client1_ip,gluster_client2_ip

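If you are scripting this for several clients, you could build the comma-separated list from a plain space-separated one. The addresses below are placeholders, and the command is only printed for review:

```shell
# Placeholder client addresses; auth.allow expects them comma-separated.
client_ips="203.0.113.30 203.0.113.40"

# Join the space-separated list with commas.
allow_list=$(echo $client_ips | tr ' ' ',')

# Printed for review; run the result with sudo on a storage node.
echo "gluster volume set volume1 auth.allow $allow_list"
```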
Your storage pool is now configured, secured, and ready for use. Next you’ll learn a few commands that will help you get information about the status of your storage pool.

Step 8 — Getting Information About your Storage Pool with GlusterFS Commands

When you begin changing some of the settings for your GlusterFS storage, you might get confused about what options you have available, which volumes are live, and which nodes are associated with each volume.

There are a number of different commands that are available on your nodes to retrieve this data and interact with your storage pool.

If you want information about each of your volumes, run the gluster volume info command:

  • sudo gluster volume info
Output
Volume Name: volume1
Type: Replicate
Volume ID: 8da67ffa-b193-4973-ad92-5b7fed8d0357
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster0.example.com:/gluster-storage
Brick2: gluster1.example.com:/gluster-storage
Options Reconfigured:
auth.allow: gluster2_ip_address
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
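
If a script needs a single field from this output, you can extract it from a captured copy. The heredoc below stands in for real gluster volume info output; on a real node you would capture the command’s output instead:

```shell
# Pull one field out of a captured `gluster volume info` dump.
# The heredoc mirrors part of the sample output above.
volume_info=$(cat <<'EOF'
Volume Name: volume1
Type: Replicate
Number of Bricks: 1 x 2 = 2
EOF
)

# Split the "Number of Bricks: 1 x 2 = 2" line at "= " to get the total.
brick_count=$(printf '%s\n' "$volume_info" | awk -F'= ' '/Number of Bricks/ {print $2}')
echo "total bricks: $brick_count"
```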

Similarly, to get information about any peers that this node is connected to, you can type:

  • sudo gluster peer status
Output
Number of Peers: 1

Hostname: gluster0.example.com
Uuid: cb00a2fc-2384-41ac-b2a8-e7a1793bb5a9
State: Peer in Cluster (Connected)

If you want detailed information about how each node is performing, you can profile a volume by typing:

  • sudo gluster volume profile volume_name start

When this command is complete, you can obtain the information that was gathered by typing:

  • sudo gluster volume profile volume_name info

Output
Brick: gluster0.example.com:/gluster-storage
--------------------------------------------
Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             30      FORGET
      0.00       0.00 us       0.00 us       0.00 us             36     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             38  RELEASEDIR

    Duration: 5445 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Interval 0 Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             30      FORGET
      0.00       0.00 us       0.00 us       0.00 us             36     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             38  RELEASEDIR

    Duration: 5445 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
. . .
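Profiling adds some overhead while it is running, so it is worth switching it off once you have gathered the statistics you need. The same gluster volume profile family of commands includes a stop subcommand:

```shell
# Stop collecting profiling statistics for the volume
sudo gluster volume profile volume_name stop
```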

As shown previously, you can list all of the GlusterFS-associated components running on each of your nodes with the gluster volume status command:


  • sudo gluster volume status
Output
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster0.example.com:/gluster-storage 49152     0          Y       19003
Brick gluster1.example.com:/gluster-storage 49152     0          Y       19040
Self-heal Daemon on localhost               N/A       N/A        Y       19061
Self-heal Daemon on gluster0.example.com    N/A       N/A        Y       19836

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
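The Online column of this status output is the quickest place to spot a failed brick. As a rough sketch of how you might script a check against it, the sample below simulates a brick that has gone offline; on a live node you would pipe `sudo gluster volume status` into the same `awk` filter:

```shell
# Print any process whose Online column (second-to-last field) is "N".
# The sample mimics the status output above, with the gluster1 brick offline.
status_output='Brick gluster0.example.com:/gluster-storage 49152 0 Y 19003
Brick gluster1.example.com:/gluster-storage 49152 0 N 19040'

printf '%s\n' "$status_output" | awk '$(NF-1) == "N" { print "OFFLINE:", $1, $2 }'
```

A check like this, run from cron, will alert you to a dead brick before users notice missing redundancy.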

If you are going to be administering your GlusterFS storage volumes, it may be a good idea to drop into the GlusterFS console. This will allow you to interact with your GlusterFS environment without needing to prefix every command with sudo gluster:


  • sudo gluster

This will give you a prompt where you can type your commands. Running help is a good way to get yourself oriented:


  • help
Output
 peer help                - display help for peer commands
 volume help              - display help for volume commands
 volume bitrot help       - display help for volume bitrot commands
 volume quota help        - display help for volume quota commands
 snapshot help            - display help for snapshot commands
 global help              - list global commands

When you are finished, run exit to exit the Gluster console:


  • exit

With that, you’re ready to begin integrating GlusterFS with your next application.


Conclusion

By completing this tutorial, you have a redundant storage system that will allow you to write to two separate servers simultaneously. This can be useful for a number of applications and can ensure that your data is available even when one server goes down.


Translated from: https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-18-04
