Error adding a BlueStore OSD on Ceph Luminous 12.2.0 with ceph-deploy osd activate:

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy --overwrite-conf --ceph-conf /etc/ceph/ceph.conf osd activate node-2:/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3:/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1705290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1698de8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : /etc/ceph/ceph.conf
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node-2', '/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3', '/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node-2:/dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3:/dev/disk/by-partuuid/8fb82b3e-0e9f-4b4e-ac2f-24aa99c428c5
[node-2][DEBUG ] connected to host: node-2
[node-2][DEBUG ] detect platform information from remote host
[node-2][DEBUG ] detect machine type
[node-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host node-2 disk /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node-2][DEBUG ] find the location of an executable
[node-2][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] main_activate: path = /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 uuid path is /sys/dev/block/65:37/dm/uuid
[node-2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3
[node-2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node-2][WARNIN] mount: Mounting /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 on /var/lib/ceph/tmp/mnt.xOcLiw with options rw,noexec,nodev,noatime,nodiratime,nobarrier,logbsize=256k
[node-2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier,logbsize=256k -- /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3 /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] activate: Cluster uuid is 3d292754-fcdc-4144-8120-c2883f0ff0a3
[node-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node-2][WARNIN] activate: Cluster name is ceph
[node-2][WARNIN] activate: OSD uuid is 153f5c9c-4259-41dc-b883-57b4c1ce9f3b
[node-2][WARNIN] activate: OSD id is 44
[node-2][WARNIN] activate: Initializing OSD...
[node-2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap
[node-2][WARNIN] got monmap epoch 1
[node-2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 44 --monmap /var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.xOcLiw --osd-uuid 153f5c9c-4259-41dc-b883-57b4c1ce9f3b --setuser ceph --setgroup ceph
[node-2][WARNIN] 2017-08-30 13:45:29.602611 7fdb4c382d00 -1 bdev(0x7fdb5746b000 /var/lib/ceph/tmp/mnt.xOcLiw/block.wal) _aio_start io_setup(2) failed with EAGAIN; try increasing /proc/sys/fs/aio-max-nr
[node-2][WARNIN] 2017-08-30 13:45:29.602671 7fdb4c382d00 -1 bluestore(/var/lib/ceph/tmp/mnt.xOcLiw) _open_db add block device(/var/lib/ceph/tmp/mnt.xOcLiw/block.wal) returned: (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359153 7fdb4c382d00 -1 bluestore(/var/lib/ceph/tmp/mnt.xOcLiw) mkfs failed, (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359189 7fdb4c382d00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (11) Resource temporarily unavailable
[node-2][WARNIN] 2017-08-30 13:45:30.359347 7fdb4c382d00 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.xOcLiw: (11) Resource temporarily unavailable
[node-2][WARNIN] mount_activate: Failed to activate
[node-2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.xOcLiw
[node-2][WARNIN] Traceback (most recent call last):
[node-2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node-2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5704, in run
[node-2][WARNIN]     main(sys.argv[1:])
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5655, in main
[node-2][WARNIN]     args.func(args)
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3759, in main_activate
[node-2][WARNIN]     reactivate=args.reactivate,
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3522, in mount_activate
[node-2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3699, in activate
[node-2][WARNIN]     keyring=keyring,
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3151, in mkfs
[node-2][WARNIN]     '--setgroup', get_ceph_group(),
[node-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 558, in command_check_call
[node-2][WARNIN]     return subprocess.check_call(arguments)
[node-2][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[node-2][WARNIN]     raise CalledProcessError(retcode, cmd)
[node-2][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '-i', u'44', '--monmap', '/var/lib/ceph/tmp/mnt.xOcLiw/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.xOcLiw', '--osd-uuid', u'153f5c9c-4259-41dc-b883-57b4c1ce9f3b', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[node-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/disk/by-partuuid/3ed2c4be-9183-4a97-a87e-2a9bda4510b3

Checking the kernel counters showed /proc/sys/fs/aio-max-nr at 65535 and /proc/sys/fs/aio-nr at 62053.

It turned out a node could only hold 14 OSDs: each OSD added increases aio-nr by about 4096.

With 14 OSDs, aio-nr was already at 62053, so adding one more OSD would exceed the 65536 ceiling — which is exactly the EAGAIN from io_setup(2) in the log above.
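The arithmetic can be checked directly. Note that 4096 per OSD is the increment observed on this cluster, not a documented constant:

```python
# Back-of-envelope check: does adding one more OSD exceed the aio limit?
# 4096 is the per-OSD increment observed in /proc/sys/fs/aio-nr here;
# it is an empirical figure from this cluster, not a documented constant.
AIO_PER_OSD = 4096

aio_nr = 62053       # current /proc/sys/fs/aio-nr with 14 OSDs
aio_max_nr = 65536   # default kernel ceiling

needed = aio_nr + AIO_PER_OSD
print(needed, needed > aio_max_nr)  # 66149 True -> io_setup(2) returns EAGAIN
```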

So raising the aio-max-nr ceiling resolves the problem:

# sysctl fs.aio-max-nr=1048576
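A one-off sysctl is lost on reboot. A sketch of applying and persisting the change (the drop-in file name 99-ceph-aio.conf is an arbitrary choice; any name under /etc/sysctl.d/ works):

```shell
# Raise the ceiling immediately (no reboot required):
sysctl fs.aio-max-nr=1048576

# Persist it across reboots; the file name is arbitrary:
echo "fs.aio-max-nr = 1048576" > /etc/sysctl.d/99-ceph-aio.conf
sysctl -p /etc/sysctl.d/99-ceph-aio.conf

# Verify current usage against the new ceiling:
cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr
```

With the new ceiling, 1048576 / 4096 leaves headroom for roughly 256 OSDs per node at the observed per-OSD usage.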
