NFS HA Architecture Deployment (NFS + Pacemaker + Corosync + DRBD)

Environment: 2 KVM virtual machines
OS: CentOS 7.6
Kernel: Linux 3.10.0-957.21.3.el7.x86_64
IP address     Hostname
192.9.200.25   centos7v6a    node 1
192.9.200.26   centos7v6b    node 2

Both servers have an additional disk of the same size, /dev/vdb.
1. Install DRBD (Distributed Replicated Block Device)
 

Download the source from the official site:

# curl -O https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.18-1.tar.gz

# curl -O https://www.linbit.com/downloads/drbd/utils/drbd-utils-9.10.0.tar.gz

Unpack, build, and install:

# tar xf drbd-utils-9.10.0.tar.gz

# cd drbd-utils-9.10.0/

# yum -y install flex po4a

# ./configure --prefix=/opt/drbd-utils --without-83support --without-84support --with-pacemaker --with-rgmanager --with-bashcompletion --with-initdir

# make && make install
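
Because the utilities were installed with --prefix=/opt/drbd-utils, the drbdadm and drbdsetup binaries presumably end up under /opt/drbd-utils/sbin rather than on the default PATH. A small sketch (the path is assumed from the --prefix above) so that the bare drbdadm calls later in this article work:

# echo 'export PATH=$PATH:/opt/drbd-utils/sbin' > /etc/profile.d/drbd.sh
# source /etc/profile.d/drbd.sh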

# tar xf drbd-9.0.18-1.tar.gz

# cd drbd-9.0.18-1/

# yum -y install kernel-devel

# make KDIR=/usr/src/kernels/3.10.0-957.21.3.el7.x86_64

# make install

# modprobe drbd //load the drbd module

# lsmod | grep drbd //check whether the module is loaded

# echo drbd > /etc/modules-load.d/drbd.conf //load the module automatically at boot
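
As an optional sanity check (not part of the original steps), confirm that the loaded kernel module matches the version you built:

# modinfo drbd | grep ^version //should report 9.0.18
# cat /proc/drbd //also prints the module version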

2. Configuration
Configure the hosts file:

# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.9.200.25 centos7v6a

192.9.200.26 centos7v6b

Configure the drbd.conf file:

# cd /opt/drbd-utils/etc

# ls

bash_completion.d drbd.conf drbd.d xen

# cat drbd.conf

# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";

include "drbd.d/*.res";

# cat global_common.conf

global {
    usage-count no;
}
common {
    startup {
        wfc-timeout 120;
        degr-wfc-timeout 120;
    }
    disk {
        resync-rate 10M;
    }
    net {
        protocol C;
        timeout 60;
        connect-int 10;
        ping-int 10;
    }
}

# cat r0.res

resource r0 {
    on centos7v6a {
        device /dev/drbd1;
        disk /dev/vdb1;
        address 192.9.200.25:7789;
        meta-disk internal;
    }
    on centos7v6b {
        device /dev/drbd1;
        disk /dev/vdb1;
        address 192.9.200.26:7789;
        meta-disk internal;
    }
}
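
The resource above refers to /dev/vdb1, so the blank disk /dev/vdb must be partitioned first on both nodes. That step is not shown in the original; a minimal sketch (assuming one partition spanning the whole disk):

# parted -s /dev/vdb mklabel gpt mkpart primary 0% 100% //create /dev/vdb1 covering the whole disk
# lsblk /dev/vdb //confirm that vdb1 now exists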

Firewall settings:
firewall-cmd --permanent --add-port=7789/tcp
firewall-cmd --reload
Apply the same configuration on both node 1 and node 2.
Next, synchronize the block data:

# drbdadm create-md r0 //create the drbd resource metadata

# drbdadm up r0 //bring up resource r0

# drbdadm primary --force r0 //run on node 1 only, to promote it to Primary

Start drbd.service on both nodes:

systemctl start drbd.service

systemctl enable drbd.service

The service start waits for the other node; after starting the first node, start the second one right away.

Check the state of r0:

# drbdadm dstate r0

UpToDate/UpToDate
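
If dstate still reports Inconsistent, the initial synchronization triggered by the forced promotion has not finished yet. The DRBD 9 status commands show the progress (a sketch):

# drbdadm status r0 //per-peer connection, role and disk state
# drbdsetup status r0 --verbose --statistics //detailed view including resync progress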

Create the DRBD filesystem. The following is done on node 1 only:
mkfs.xfs  /dev/drbd1
Note: a DRBD device can only be used on the Primary side. To avoid accidental writes, both nodes default to Secondary after a reboot; to use the DRBD device you must manually promote a node to Primary. While the Primary is healthy, mounting the DRBD device on the Secondary fails with the error below:

# mount /dev/drbd1 /share

mount: /dev/drbd1 is write-protected, mounting read-only

mount: mount /dev/drbd1 on /share failed: Wrong medium type
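
A quick way to check which role the local node currently holds (not part of the original steps):

# drbdadm role r0 //prints the local role, e.g. Primary or Secondary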

To mount on the other node:
1. On the current primary node, unmount the filesystem and demote DRBD to secondary:
umount /share
drbdadm secondary all

2. On the other node, promote DRBD and mount:
drbdadm primary all
mount /dev/drbd1 /share

Test the replication:

Node 1:

cp anaconda-ks.cfg /share/ks.cfg

umount /share

drbdadm secondary all

Node 2:

drbdadm primary all

mount /dev/drbd1 /share

Check whether the file was synchronized:

#ls /share/

ks.cfg

Note: you must umount first, otherwise the demotion fails:
# drbdadm secondary all
r0: State change failed: (-12) Device is held open by someone
additional info from kernel:
/dev/drbd1 opened by mount (pid 13506) at 2019-07-01 08:48:03.408
Command 'drbdsetup secondary r0' terminated with exit code 11

3. Install Pacemaker
Node 1: 192.9.200.25  centos7v6a
Node 2: 192.9.200.26  centos7v6b
Before installing Pacemaker, configure passwordless SSH login between the two nodes (a sketch follows).
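
A minimal way to set that up, assuming root is used on both nodes:

[ALL] # ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa //generate a key pair without a passphrase
[ALL] # ssh-copy-id root@centos7v6a //copy the public key to both nodes
[ALL] # ssh-copy-id root@centos7v6b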
Installation:
[ALL] # yum install pacemaker pcs resource-agents

# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --reload
success

Installing pacemaker automatically pulls in corosync as a dependency.
Create the cluster

Start the pcsd service:

[ALL] # systemctl start pcsd.service

[ALL] # systemctl enable pcsd.service

Set up the authentication that pcs needs. Installing pacemaker automatically creates the hacluster user; give that user a password:

[ALL] # echo 123456789 | passwd --stdin hacluster

[ONE] # pcs cluster auth node1 node2 -u hacluster -p 123456789 --force

Configure corosync
Run the following on one node of the cluster:

[root@centos7v6a ~]# pcs cluster auth centos7v6a centos7v6b

Username: hacluster

Password:

centos7v6b: Authorized

centos7v6a: Authorized

On the same node, use pcs cluster setup to generate and synchronize the corosync configuration. Now create a cluster and specify its nodes; the cluster name may not exceed 15 characters.
[ONE] pcs cluster setup --force --name nfscluster centos7v6a centos7v6b 

The pcs tool automatically generates the corosync configuration file, shown below:

# cat /etc/corosync/corosync.conf

totem {
    version: 2
    cluster_name: nfscluster
    secauth: off
    transport: udpu
}

nodelist {
    node {
        ring0_addr: centos7v6a
        nodeid: 1
    }
    node {
        ring0_addr: centos7v6b
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

If you do not use pcs to generate corosync.conf, the same configuration must be done manually on every node.
Start the cluster
[ONE] # pcs cluster start --all
This command is equivalent to running, on every node:
# systemctl start corosync.service
# systemctl start pacemaker.service

Note: pcs cluster start nodename (or --all) starts a specific node (or all nodes).
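
If you also want the cluster services to start automatically at boot (the pcs status output later in this article shows pacemaker as active/disabled), this optional command enables them on all nodes:

[ONE] # pcs cluster enable --all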
Verify the corosync installation
First, use corosync-cfgtool to check that cluster communication is working:

# corosync-cfgtool -s

Printing ring status.

Local node ID 2

RING ID 0

id = 192.9.200.26

status = ring 0 active with no faults

Next, check the membership and quorum APIs:

# corosync-cmapctl |grep member

runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.9.200.25)

runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.1.status (str) = joined

runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.9.200.26)

runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.2.status (str) = joined

# pcs status corosync

Membership information

----------------------

Nodeid Votes Name

1 1 centos7v6a

2 1 centos7v6b (local)

Verify the Pacemaker installation
1. Check that the required processes are running:

# ps axf| grep -v grep| grep pacemaker

16006 ? Ss 0:00 /usr/sbin/pacemakerd -f

16007 ? Ss 0:00 \_ /usr/libexec/pacemaker/cib

16008 ? Ss 0:00 \_ /usr/libexec/pacemaker/stonithd

16009 ? Ss 0:00 \_ /usr/libexec/pacemaker/lrmd

16010 ? Ss 0:00 \_ /usr/libexec/pacemaker/attrd

16011 ? Ss 0:00 \_ /usr/libexec/pacemaker/pengine

16012 ? Ss 0:00 \_ /usr/libexec/pacemaker/crmd

2. Check pcs status:

# pcs status

Cluster name: nfscluster

WARNINGS:

No stonith devices and stonith-enabled is not false

Stack: corosync

Current DC: centos7v6a (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Tue Jul 2 13:17:54 2019

Last change: Tue Jul 2 13:03:27 2019 by hacluster via crmd on centos7v6a

2 nodes configured

0 resources configured

Online: [ centos7v6a centos7v6b ]

No resources

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

3. Finally, make sure there are no startup errors from corosync or pacemaker (apart from messages about STONITH not being configured, which are expected at this point):
# journalctl -b | grep -i error
Create an Active/Passive cluster
Check the cluster status
1. View the current cluster status.

Display the current cluster status in a concise form:

# pcs status

Display the current cluster status (the raw CIB) in XML format:

# pcs cluster cib

2. Before changing the cluster, it is recommended to check the validity of the current configuration:

# crm_verify -L -V

error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined

error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option

error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid

The errors above appear because STONITH is enabled by default. Disable it:

pcs property set stonith-enabled=false

crm_verify -L

No errors are reported now, so the configuration is valid.
Add a resource
Set the floating IP to 192.9.200.27 and have its availability checked every 30 seconds:

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.9.200.27 cidr_netmask=24 op monitor interval=30s

The resource agent is given as ocf:heartbeat:IPaddr2. This tells Pacemaker three things about the resource you want to add:
  • The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
  • The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
  • The third field (IPaddr2 in this case) is the name of the resource script.

List the available resource standards (the ocf part):

pcs resource standards

lsb

ocf

service

systemd

List the resource providers (the heartbeat part of ocf:heartbeat:IPaddr2):

# pcs resource providers

heartbeat

linbit

openstack

pacemaker

Finally, list the available agents (the IPaddr2 part):

[root@centos7v6b ~]# pcs resource agents ocf:heartbeat

aliyun-vpc-move-ip

apache

aws-vpc-move-ip

awseip

awsvip

azure-lb

clvm

conntrackd

……

Verify that the resource was added successfully:

[root@centos7v6b ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6a (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Tue Jul 2 13:52:32 2019

Last change: Tue Jul 2 13:44:59 2019 by root via cibadmin on centos7v6a

2 nodes configured

1 resource configured

Online: [ centos7v6a centos7v6b ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6a

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

Create the NFS cluster
Create the nfsserver resource:

# pcs resource create NfsShare ocf:heartbeat:nfsserver nfs_ip=192.9.200.27
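
Note that the ocf:heartbeat:nfsserver agent only manages the NFS server daemons; nfs-utils must be installed and the export itself defined on both nodes. A minimal sketch (the export path and options are illustrative, not taken from the original):

[ALL] # yum -y install nfs-utils
[ALL] # echo '/share 192.9.200.0/24(rw,sync,no_root_squash)' >> /etc/exports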

Bind the VIP and nfs-server.service to the same server:

[root@centos7v6a ~]# pcs constraint colocation add NfsShare with ClusterIP INFINITY

[root@centos7v6a ~]# pcs constraint

Location Constraints:

Ordering Constraints:

Colocation Constraints:

NfsShare with ClusterIP (score:INFINITY)

Ticket Constraints:

[root@centos7v6a ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6b (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Tue Jul 2 16:45:03 2019

Last change: Tue Jul 2 16:44:48 2019 by root via cibadmin on centos7v6a

2 nodes configured

2 resources configured

Online: [ centos7v6a centos7v6b ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6a

NfsShare (ocf::heartbeat:nfsserver): Started centos7v6a

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

Set the resource start order:

[root@centos7v6a ~]# pcs constraint order ClusterIP then NfsShare

Adding ClusterIP NfsShare (kind: Mandatory) (Options: first-action=start then-action=start)

[root@centos7v6a ~]# pcs constraint

Location Constraints:

Ordering Constraints:

start ClusterIP then start NfsShare (kind:Mandatory)

Colocation Constraints:

NfsShare with ClusterIP (score:INFINITY)

Ticket Constraints:

Set which server the service should prefer to run on (a higher score means higher priority):
# pcs constraint location NfsShare prefers centos7v6a=50

View the placement scores:
crm_simulate -sL

Create the DRBD device cluster resources

[root@centos7v6a ~]# pcs cluster cib drbd_cfg //dump the current CIB into the file drbd_cfg; this step is required

[root@centos7v6a ~]# pcs -f drbd_cfg resource create NfsData ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s //with -f, the change is saved to drbd_cfg instead of the live CIB

[root@centos7v6a ~]# pcs -f drbd_cfg resource master NfsDataClone NfsData master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

[root@centos7v6a ~]# pcs -f drbd_cfg resource show

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6b

NfsShare (ocf::heartbeat:nfsserver): Started centos7v6b

Master/Slave Set: NfsDataClone [NfsData]

Stopped: [ centos7v6a centos7v6b ]

Once you are satisfied with all the changes, commit them in one go by pushing the drbd_cfg file into the live CIB.

[root@centos7v6a ~]# pcs cluster cib-push drbd_cfg --config

CIB updated

[root@centos7v6a ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6b (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Wed Jul 3 09:30:02 2019

Last change: Wed Jul 3 09:29:57 2019 by root via cibadmin on centos7v6a

2 nodes configured

4 resources configured

Online: [ centos7v6a centos7v6b ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6b

NfsShare (ocf::heartbeat:nfsserver): Started centos7v6b

Master/Slave Set: NfsDataClone [NfsData]

Masters: [ centos7v6a ]

Slaves: [ centos7v6b ]

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

Note: the active (Primary) node of the DRBD device must be the server that holds the VIP, otherwise problems will occur.
Create the filesystem cluster resource (mounts the shared device on the primary node automatically)
Make sure drbd1 is used only on the primary node:

[root@centos7v6a ~]# pcs cluster cib fs_cfg

[root@centos7v6a ~]# pcs -f fs_cfg resource create NfsFS Filesystem device="/dev/drbd1" directory="/share" fstype="xfs"

Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')

[root@centos7v6a ~]# pcs -f fs_cfg constraint colocation add NfsFS with NfsDataClone INFINITY with-rsc-role=Master

[root@centos7v6a ~]# pcs -f fs_cfg constraint order promote NfsDataClone then start NfsFS

Adding NfsDataClone NfsFS (kind: Mandatory) (Options: first-action=promote then-action=start)

Keep the NFS service and the shared directory on the same server:

[root@centos7v6a ~]# pcs -f fs_cfg constraint colocation add NfsShare with NfsFS INFINITY

[root@centos7v6a ~]# pcs -f fs_cfg constraint order NfsFS then NfsShare

Adding NfsFS NfsShare (kind: Mandatory) (Options: first-action=start then-action=start)

Apply the configuration:

[root@centos7v6a ~]# pcs cluster cib-push fs_cfg --config

CIB updated

Test cluster failover
First, check the Pacemaker status:

[root@centos7v6a ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6b (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Wed Jul 3 10:03:43 2019

Last change: Wed Jul 3 10:03:29 2019 by root via cibadmin on centos7v6a

2 nodes configured

5 resources configured

Online: [ centos7v6a centos7v6b ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6b

NfsShare (ocf::heartbeat:nfsserver): Started centos7v6b

Master/Slave Set: NfsDataClone [NfsData]

Masters: [ centos7v6b ]

Slaves: [ centos7v6a ]

NfsFS (ocf::heartbeat:Filesystem): Started centos7v6b

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

Method 1:
Use pcs cluster stop nodename to stop all cluster services on that node, simulating a failover.

[root@centos7v6a ~]# pcs cluster stop centos7v6b //stop centos7v6b, which currently runs the resources

centos7v6b: Stopping Cluster (pacemaker)...

centos7v6b: Stopping Cluster (corosync)...

[root@centos7v6a ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6a (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Wed Jul 3 10:19:51 2019

Last change: Wed Jul 3 10:03:29 2019 by root via cibadmin on centos7v6a

2 nodes configured

5 resources configured

Online: [ centos7v6a ]

OFFLINE: [ centos7v6b ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6a

NfsShare (ocf::heartbeat:nfsserver): Started centos7v6a

Master/Slave Set: NfsDataClone [NfsData]

Masters: [ centos7v6a ]

Stopped: [ centos7v6b ]

NfsFS (ocf::heartbeat:Filesystem): Started centos7v6a

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

[root@centos7v6a ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/centos-root xfs 47G 1.8G 46G 4% /

devtmpfs devtmpfs 908M 0 908M 0% /dev

tmpfs tmpfs 920M 54M 866M 6% /dev/shm

tmpfs tmpfs 920M 8.6M 911M 1% /run

tmpfs tmpfs 920M 0 920M 0% /sys/fs/cgroup

/dev/vda1 xfs 1014M 187M 828M 19% /boot

tmpfs tmpfs 184M 0 184M 0% /run/user/0

/dev/drbd1 xfs 2.0G 33M 2.0G 2% /share

Failover is complete. Use pcs cluster start centos7v6b to bring the node back into the cluster.
Method 2:
Put a node into standby mode. A node in this state keeps running corosync and pacemaker but is not allowed to run resources, so this also simulates a failover.
This feature is especially useful when performing system administration tasks, such as updating the packages used by cluster resources.

[root@centos7v6a ~]# pcs cluster standby centos7v6b

[root@centos7v6a ~]# pcs status

Cluster name: nfscluster

Stack: corosync

Current DC: centos7v6a (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum

Last updated: Wed Jul 3 10:28:56 2019

Last change: Wed Jul 3 10:28:52 2019 by root via cibadmin on centos7v6a

2 nodes configured

5 resources configured

Node centos7v6b: standby

Online: [ centos7v6a ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started centos7v6a

NfsShare (ocf::heartbeat:nfsserver): Starting centos7v6a

Master/Slave Set: NfsDataClone [NfsData]

Masters: [ centos7v6a ]

Stopped: [ centos7v6b ]

NfsFS (ocf::heartbeat:Filesystem): Started centos7v6a

Daemon Status:

corosync: active/enabled

pacemaker: active/disabled

pcsd: active/enabled

[root@centos7v6a ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

link/ether 52:54:00:04:36:c2 brd ff:ff:ff:ff:ff:ff

inet 192.9.200.25/24 brd 192.9.200.255 scope global noprefixroute eth0

valid_lft forever preferred_lft forever

inet 192.9.200.27/24 brd 192.9.200.255 scope global secondary eth0

valid_lft forever preferred_lft forever

inet6 fe80::5054:ff:fe04:36c2/64 scope link

valid_lft forever preferred_lft forever

[root@centos7v6a ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/centos-root xfs 47G 1.8G 46G 4% /

devtmpfs devtmpfs 908M 0 908M 0% /dev

tmpfs tmpfs 920M 54M 866M 6% /dev/shm

tmpfs tmpfs 920M 8.6M 911M 1% /run

tmpfs tmpfs 920M 0 920M 0% /sys/fs/cgroup

/dev/vda1 xfs 1014M 187M 828M 19% /boot

tmpfs tmpfs 184M 0 184M 0% /run/user/0

/dev/drbd1 xfs 2.0G 33M 2.0G 2% /share

Failover is complete. Use pcs cluster unstandby centos7v6b to bring the node back into the cluster.
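
As a final check from an NFS client (not part of the original; assumes /share is exported as in the sketch above and that the client can reach the floating IP):

# showmount -e 192.9.200.27 //list exports via the floating IP
# mount -t nfs 192.9.200.27:/share /mnt
# ls /mnt //the ks.cfg copied earlier should be visible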

References:
https://blog.csdn.net/levy_cui/article/details/53644810
https://docs.linbit.com/docs/users-guide-9.0/#p-work
https://www.centos.bz/2012/02/drbd-compile-install-deploy/
https://blog.51cto.com/koumm/1738795
https://clusterlabs.org/quickstart-redhat.html
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/index.html
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch07.html
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html

Reposted from: https://my.oschina.net/u/3090649/blog/3068920
