• Author: lemon
  • Post link: https://lemon2013.github.io/2016/11/06/配置基于IPv6的Ceph/
  • Copyright: Unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 3.0. Please credit the source when reposting!

Introduction

Why set up a Ceph environment based on IPv6 all of a sudden? Pure coincidence: a project required a file system running over IPv6, and unfortunately Hadoop does not support it (I had always considered Hadoop fairly powerful). After a fair amount of struggling, Ceph gave me hope. Enough small talk; let's get straight to the point.

Experimental environment

  1. Linux OS version: CentOS Linux release 7.2.1511 (Core)

    • Minimal ISO: roughly 603 MB
    • Everything ISO: roughly 7.2 GB
  2. Ceph version: 0.94.9 (the hammer release)
    1. I originally chose the latest jewel release. After the environment was configured successfully, the Ceph object storage gateway could not be reached over IPv6 and produced an error like the one below. Some digging showed this to be a bug in the jewel release that was still being fixed. A general piece of advice: in production, try not to pick the very latest release.

      set_ports_option:[::]8888:invalid port sport spec
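
For reference, the object gateway's IPv6 listener is configured through the civetweb frontend in ceph.conf, and the port spec below is exactly the kind of value the jewel bug failed to parse while hammer accepts it. This is only a sketch: the section name client.rgw.ceph001 and port 8888 are illustrative assumptions, not taken from the original setup.

[client.rgw.ceph001]
rgw frontends = "civetweb port=[::]:8888"  # bind civetweb to all IPv6 addresses on port 8888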

Preflight

Network configuration

Refer to my earlier post "CentOS7 设置静态IPv6/IPv4地址" (setting static IPv6/IPv4 addresses on CentOS 7) to complete the network configuration.

Set the hostname


[root@localhost ~]# hostnamectl set-hostname ceph001 # ceph001 is the new hostname you want to set

[root@localhost ~]# vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

2001:250:4402:2001:20c:29ff:fe25:8888 ceph001 # new entry; the IPv6 address is the static IPv6 address of host ceph001
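
To confirm that name resolution and IPv6 connectivity work as expected, a quick check (both commands are available on a stock CentOS 7 install):

[root@ceph001 ~]# getent hosts ceph001   # should resolve to the static IPv6 address added above
[root@ceph001 ~]# ping6 -c 2 ceph001     # verify the host answers over IPv6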

Switch the yum repositories

For various reasons, downloading packages from the official yum mirrors can be slow, so here we switch to the Aliyun mirrors.


[root@localhost ~]# yum clean all # clear the yum cache

[root@localhost ~]# rm -rf /etc/yum.repos.d/*.repo

[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo # download the Aliyun base repo

[root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo # download the Aliyun EPEL repo

[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo

[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo

[root@localhost ~]# sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
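
After rewriting the repo files it is worth a quick sanity check that yum can still resolve everything; the exact repo list depends on your mirror setup:

[root@localhost ~]# yum clean all
[root@localhost ~]# yum repolist enabled   # base and epel should be listed and usable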

Add the Ceph repository


[root@localhost ~]# vim /etc/yum.repos.d/ceph.repo

[ceph]

name=ceph

baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/ # pick the release you want to install here

gpgcheck=0

[ceph-noarch]

name=cephnoarch

baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/ # pick the release you want to install here

gpgcheck=0

[root@localhost ~]# yum makecache

Install ceph and ceph-deploy


[root@localhost ~]# yum install ceph ceph-deploy

Loaded plugins: fastestmirror, langpacks

Loading mirror speeds from cached hostfile

Resolving Dependencies

--> Running transaction check

---> Package ceph.x86_64 1:0.94.9-0.el7 will be installed

--> Processing Dependency: librbd1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: python-rbd = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: python-cephfs = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: libcephfs1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: librados2 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: python-rados = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: ceph-common = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: python-requests for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: python-flask for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: hdparm for package: 1:ceph-0.94.9-0.el7.x86_64

--> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.9-0.el7.x86_64

.......

Dependencies Resolved

=======================================================================================

Package Arch Version Repository Size

=======================================================================================

Installing:

ceph x86_64 1:0.94.9-0.el7 ceph 20 M

ceph-deploy noarch 1.5.36-0 ceph-noarch 283 k

Installing for dependencies:

boost-program-options x86_64 1.53.0-25.el7 base 155 k

ceph-common x86_64 1:0.94.9-0.el7 ceph 7.2 M

...

Transaction Summary

=======================================================================================

Install 2 Packages (+24 Dependent packages)

Upgrade ( 2 Dependent packages)

Total download size: 37 M

Is this ok [y/d/N]: y

Downloading packages:

No Presto metadata available for ceph

warning: /var/cache/yum/x86_64/7/base/packages/boost-program-options-1.53.0-25.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

Public key for boost-program-options-1.53.0-25.el7.x86_64.rpm is not installed

(1/28): boost-program-options-1.53.0-25.el7.x86_64.rpm | 155 kB 00:00:00

(2/28): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00:00

(3/28): ceph-deploy-1.5.36-0.noarch.rpm | 283 kB 00:00:00

(4/28): leveldb-1.12.0-11.el7.x86_64.rpm | 161 kB 00:00:00

...

---------------------------------------------------------------------------------------

Total 718 kB/s | 37 MB 00:53

Retrieving key from http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

Importing GPG key 0xF4A80EB5:

Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"

Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5

From : http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

Is this ok [y/N]: y

...

Complete!

Verify the installed versions


[root@localhost ~]# ceph-deploy --version

1.5.36

[root@localhost ~]# ceph -v

ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)

Install NTP (for a multi-node setup you also need to configure the NTP server and clients), and configure selinux and firewalld


[root@localhost ~]# yum install ntp

[root@localhost ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

[root@localhost ~]# setenforce 0

[root@localhost ~]# systemctl stop firewalld

[root@localhost ~]# systemctl disable firewalld

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
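
The snippet above only installs the ntp package. On my setup I would also enable and start the service, then check that it can reach its time sources (the peer list shown by ntpq depends entirely on your environment):

[root@localhost ~]# systemctl enable ntpd
[root@localhost ~]# systemctl start ntpd
[root@localhost ~]# ntpq -p   # list configured time sources and their sync state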

Create the Ceph cluster

On the admin node (ceph001):

[root@ceph001 ~]# mkdir cluster
[root@ceph001 ~]# cd cluster/

Create the cluster


[root@ceph001 cluster]# ceph-deploy new ceph001

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph001

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] func : <function new at 0xfe0668>

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x104c680>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] ssh_copykey : True

[ceph_deploy.cli][INFO ] mon : ['ceph001']

[ceph_deploy.cli][INFO ] public_network : None

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] cluster_network : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] fsid : None

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph

[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: /usr/sbin/ip link show

[ceph001][INFO ] Running command: /usr/sbin/ip addr show

[ceph001][DEBUG ] IP addresses found: [u'192.168.122.1', u'49.123.105.124']

[ceph_deploy.new][DEBUG ] Resolving host ceph001

[ceph_deploy.new][DEBUG ] Monitor ceph001 at 2001:250:4402:2001:20c:29ff:fe25:8888

[ceph_deploy.new][INFO ] Monitors are IPv6, binding Messenger traffic on IPv6

[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph001']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['[2001:250:4402:2001:20c:29ff:fe25:8888]']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

[root@ceph001 cluster]# ll

total 12

-rw-r--r--. 1 root root 244 Nov 6 21:54 ceph.conf

-rw-r--r--. 1 root root 3106 Nov 6 21:54 ceph-deploy-ceph.log

-rw-------. 1 root root 73 Nov 6 21:54 ceph.mon.keyring

[root@ceph001 cluster]# cat ceph.conf

[global]

fsid = 865e6b01-b0ea-44da-87a5-26a4980aa7a8

ms_bind_ipv6 = true

mon_initial_members = ceph001

mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx
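
ceph-deploy detected that the monitor address is IPv6 and therefore set ms_bind_ipv6 = true and wrote mon_host in bracketed form automatically. If your hosts carry several addresses, it can help to pin the public network explicitly; the line below is an assumption based on the /64 prefix of my monitor address and must be adjusted to your own network:

public_network = 2001:250:4402:2001::/64   # add to the [global] section of ceph.conf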

Since we are deploying on a single node, change the default replication size from 3 to 1.


[root@ceph001 cluster]# echo "osd_pool_default_size = 1" >> ceph.conf

[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : True

[ceph_deploy.cli][INFO ] subcommand : push

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14f9710>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] client : ['ceph001']

[ceph_deploy.cli][INFO ] func : <function config at 0x14d42a8>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.config][DEBUG ] Pushing config to ceph001

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
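
To double-check that the pushed configuration actually landed on the node, simply grep for the new setting:

[root@ceph001 cluster]# grep osd_pool_default_size /etc/ceph/ceph.conf   # should print: osd_pool_default_size = 1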

Create the monitor node

ceph001 will serve as the monitor node.


[root@ceph001 cluster]# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : create-initial

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23865a8>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function mon at 0x237e578>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] keyrings : None

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001

[ceph_deploy.mon][DEBUG ] detecting platform for host ceph001 ...

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core

[ceph001][DEBUG ] determining if provided host has same hostname in remote

[ceph001][DEBUG ] get remote short hostname

[ceph001][DEBUG ] deploying mon to ceph001

[ceph001][DEBUG ] get remote short hostname

[ceph001][DEBUG ] remote hostname: ceph001

[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph001][DEBUG ] create the mon path if it does not exist

[ceph001][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph001/done

[ceph001][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph001/done

[ceph001][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

[ceph001][DEBUG ] create the monitor keyring file

[ceph001][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i ceph001 --keyring /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

[ceph001][DEBUG ] ceph-mon: mon.noname-a [2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0 is local, renaming to mon.ceph001

[ceph001][DEBUG ] ceph-mon: set fsid to 865e6b01-b0ea-44da-87a5-26a4980aa7a8

[ceph001][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph001 for mon.ceph001

[ceph001][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

[ceph001][DEBUG ] create a done file to avoid re-doing the mon deployment

[ceph001][DEBUG ] create the init path if it does not exist

[ceph001][DEBUG ] locating the `service` executable...

[ceph001][INFO ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph001

[ceph001][DEBUG ] === mon.ceph001 ===

[ceph001][DEBUG ] Starting Ceph mon.ceph001 on ceph001...

[ceph001][WARNIN] Running as unit ceph-mon.ceph001.1478441156.735105300.service.

[ceph001][DEBUG ] Starting ceph-create-keys on ceph001...

[ceph001][INFO ] Running command: systemctl enable ceph

[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.

[ceph001][WARNIN] Executing /sbin/chkconfig ceph on

[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

[ceph001][DEBUG ] ********************************************************************************

[ceph001][DEBUG ] status for monitor: mon.ceph001

[ceph001][DEBUG ] {

[ceph001][DEBUG ] "election_epoch": 2,

[ceph001][DEBUG ] "extra_probe_peers": [],

[ceph001][DEBUG ] "monmap": {

[ceph001][DEBUG ] "created": "0.000000",

[ceph001][DEBUG ] "epoch": 1,

[ceph001][DEBUG ] "fsid": "865e6b01-b0ea-44da-87a5-26a4980aa7a8",

[ceph001][DEBUG ] "modified": "0.000000",

[ceph001][DEBUG ] "mons": [

[ceph001][DEBUG ] {

[ceph001][DEBUG ] "addr": "[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0",

[ceph001][DEBUG ] "name": "ceph001",

[ceph001][DEBUG ] "rank": 0

[ceph001][DEBUG ] }

[ceph001][DEBUG ] ]

[ceph001][DEBUG ] },

[ceph001][DEBUG ] "name": "ceph001",

[ceph001][DEBUG ] "outside_quorum": [],

[ceph001][DEBUG ] "quorum": [

[ceph001][DEBUG ] 0

[ceph001][DEBUG ] ],

[ceph001][DEBUG ] "rank": 0,

[ceph001][DEBUG ] "state": "leader",

[ceph001][DEBUG ] "sync_provider": []

[ceph001][DEBUG ] }

[ceph001][DEBUG ] ********************************************************************************

[ceph001][INFO ] monitor: mon.ceph001 is running

[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

[ceph_deploy.mon][INFO ] processing monitor mon.ceph001

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

[ceph_deploy.mon][INFO ] mon.ceph001 monitor has reached quorum!

[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum

[ceph_deploy.mon][INFO ] Running gatherkeys...

[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpgY2IT7

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] get remote short hostname

[ceph001][DEBUG ] fetch remote file

[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph001.asok mon_status

[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.admin

[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-mds

[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-osd

[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-rgw

[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpgY2IT7
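
At this point the monitor should be listening on its IPv6 address. A quick way to confirm is to look for port 6789 (ss is provided by the iproute package on CentOS 7):

[root@ceph001 cluster]# ss -tlnp | grep 6789   # ceph-mon should be bound to [2001:250:4402:2001:20c:29ff:fe25:8888]:6789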

Check the cluster status


[root@ceph001 cluster]# ceph -s

cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

health HEALTH_ERR

64 pgs stuck inactive

64 pgs stuck unclean

no osds

monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

election epoch 2, quorum 0 ceph001

osdmap e1: 0 osds: 0 up, 0 in

pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects

0 kB used, 0 kB / 0 kB avail

64 creating

Add OSDs

List the disks


[root@ceph001 cluster]# ceph-deploy disk list ceph001

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk list ceph001

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c79bd8>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x1c70e60>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('ceph001', None, None)]

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

[ceph_deploy.osd][DEBUG ] Listing disks on ceph001...

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: /usr/sbin/ceph-disk list

[ceph001][DEBUG ] /dev/sda :

[ceph001][DEBUG ] /dev/sda1 other, xfs, mounted on /boot

[ceph001][DEBUG ] /dev/sda2 other, LVM2_member

[ceph001][DEBUG ] /dev/sdb other, unknown

[ceph001][DEBUG ] /dev/sdc other, unknown

[ceph001][DEBUG ] /dev/sdd other, unknown

[ceph001][DEBUG ] /dev/sr0 other, iso9660

Add the first OSD (/dev/sdb)


[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdb

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk zap ceph001:/dev/sdb

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : zap

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b14bd8>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x1b0be60>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]

[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph001

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

[ceph001][DEBUG ] zeroing last few blocks of device

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/sdb

[ceph001][DEBUG ] Creating new GPT entries.

[ceph001][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or

[ceph001][DEBUG ] other utilities.

[ceph001][DEBUG ] Creating new GPT entries.

[ceph001][DEBUG ] The operation has completed successfully.

[ceph001][WARNIN] partx: specified range <1:0> does not make sense


[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdb

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy osd create ceph001:/dev/sdb

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]

[ceph_deploy.cli][INFO ] dmcrypt : False

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] bluestore : None

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : create

[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19b6680>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] fs_type : xfs

[ceph_deploy.cli][INFO ] func : <function osd at 0x19aade8>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] zap_disk : False

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdb:

[ceph001][DEBUG ] connected to host: ceph001

[ceph001][DEBUG ] detect platform information from remote host

[ceph001][DEBUG ] detect machine type

[ceph001][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001

[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdb journal None activate True

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type

[ceph001][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb

[ceph001][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:ae307314-3a81-4da2-974b-b21c24d9bba1 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb

[ceph001][DEBUG ] The operation has completed successfully.

[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

[ceph001][WARNIN] partx: /dev/sdb: error adding partition 2

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle

[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

[ceph001][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:16a6298d-59bb-4190-867a-10a5b519e7c0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb

[ceph001][DEBUG ] The operation has completed successfully.

[ceph001][WARNIN] INFO:ceph-disk:calling partx on created device /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle

[ceph001][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1

[ceph001][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks

[ceph001][DEBUG ] = sectsz=512 attr=2, projid32bit=1

[ceph001][DEBUG ] = crc=0 finobt=0

[ceph001][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25

[ceph001][DEBUG ] = sunit=0 swidth=0 blks

[ceph001][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0

[ceph001][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2

[ceph001][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1

[ceph001][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0

[ceph001][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.2SMGIk with options noatime,inode64

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.2SMGIk

[ceph001][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.2SMGIk

[ceph001][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.2SMGIk/journal -> /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

[ceph001][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.2SMGIk

[ceph001][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.2SMGIk

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb

[ceph001][DEBUG ] Warning: The kernel is still using the old partition table.

[ceph001][DEBUG ] The new table will be used at the next reboot.

[ceph001][DEBUG ] The operation has completed successfully.

[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb

[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2

[ceph001][INFO ] Running command: systemctl enable ceph

[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.

[ceph001][WARNIN] Executing /sbin/chkconfig ceph on

[ceph001][INFO ] checking OSD status...

[ceph001][DEBUG ] find the location of an executable

[ceph001][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json

[ceph001][WARNIN] there is 1 OSD down

[ceph001][WARNIN] there is 1 OSD out

[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
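
To see how the new OSD was registered in the CRUSH map, check the OSD tree; right after creation the OSD may still show as down/out until it is activated (or, as later in this walkthrough, the machine is rebooted):

[root@ceph001 cluster]# ceph osd tree   # lists hosts and OSDs with their weight and up/down, in/out state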

Check the cluster status


[root@ceph001 cluster]# ceph -s

cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

health HEALTH_WARN

64 pgs stuck inactive

64 pgs stuck unclean

monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

election epoch 1, quorum 0 ceph001

osdmap e3: 1 osds: 0 up, 0 in

pgmap v4: 64 pgs, 1 pools, 0 bytes data, 0 objects

0 kB used, 0 kB / 0 kB avail

64 creating

Add the remaining OSDs


[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdc

[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdd

[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdc

[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdd

[root@ceph001 cluster]# ceph -s

cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

health HEALTH_WARN

64 pgs stuck inactive

64 pgs stuck unclean

monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

election epoch 1, quorum 0 ceph001

osdmap e7: 3 osds: 0 up, 0 in

pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects

0 kB used, 0 kB / 0 kB avail

64 creating
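
In my run the OSDs only came up after a reboot (next step). Instead of rebooting, it should also be possible to activate the prepared disks in place; a sketch using ceph-disk, which is the tool ceph-deploy drives under the hood on hammer:

[root@ceph001 cluster]# ceph-disk activate-all   # mount and start every prepared-but-inactive OSD partition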

Reboot the machine and check the cluster status


[root@ceph001 ~]# ceph -s

cluster 2818c750-8724-4a70-bb26-f01af7f6067f

health HEALTH_WARN

too few PGs per OSD (21 < min 30)

monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

election epoch 1, quorum 0 ceph001

osdmap e9: 3 osds: 3 up, 3 in

pgmap v11: 64 pgs, 1 pools, 0 bytes data, 0 objects

102196 kB used, 284 GB / 284 GB avail

64 active+clean

Troubleshooting

As you can see, the cluster status is currently HEALTH_WARN, with the following warning:


too few PGs per OSD (21 < min 30)

Increase the rbd pool's PG count (too few PGs per OSD (21 < min 30))


[root@ceph001 cluster]# ceph osd pool set rbd pg_num 128

set pool 0 pg_num to 128

[root@ceph001 cluster]# ceph osd pool set rbd pgp_num 128

set pool 0 pgp_num to 128
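
The warning is a simple ratio: the default rbd pool was created with 64 PGs spread over 3 OSDs, i.e. 64 / 3 ≈ 21 PGs per OSD, which is below the minimum of 30. Raising pg_num and pgp_num to 128 gives 128 / 3 ≈ 42 PGs per OSD, which clears the threshold. (A common rule of thumb is on the order of 100 PGs per OSD divided by the pool's replication size, rounded to a nearby power of two.)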

Check the cluster status


[root@ceph001 ~]# ceph -s

cluster 2818c750-8724-4a70-bb26-f01af7f6067f

health HEALTH_OK

monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

election epoch 1, quorum 0 ceph001

osdmap e13: 3 osds: 3 up, 3 in

pgmap v17: 128 pgs, 1 pools, 0 bytes data, 0 objects

101544 kB used, 284 GB / 284 GB avail

128 active+clean

Summary

  1. This walkthrough only builds a simple single-node Ceph environment; switching to a multi-node setup is straightforward and the steps are much the same.
  2. For a Ceph deployment based on IPv6, I find the procedure differs little from IPv4; only two points need attention:
    1. Configure static IPv6 addresses.
    2. Set the hostname and add name resolution, mapping the hostname to the static IPv6 address configured earlier.
