Testing udev binding of shared disks on Linux 6
--- 0. Environment
2-node RAC with shared storage
OS version: Red Hat 6.6
10 shared storage disks, /dev/sdb through /dev/sdk
--- OS version
[root@dbtest3 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
[root@dbtest4 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
--- The 10 shared disks
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 2 22:09 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 2 22:09 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 2 22:09 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 2 22:09 /dev/sda3
brw-rw---- 1 root disk 8, 16 Aug 2 22:09 /dev/sdb
brw-rw---- 1 root disk 8, 32 Aug 2 22:09 /dev/sdc
brw-rw---- 1 root disk 8, 48 Aug 2 22:09 /dev/sdd
brw-rw---- 1 root disk 8, 64 Aug 2 22:09 /dev/sde
brw-rw---- 1 root disk 8, 80 Aug 2 22:09 /dev/sdf
brw-rw---- 1 root disk 8, 96 Aug 2 22:09 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug 2 22:09 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug 2 22:09 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug 2 22:09 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug 2 22:09 /dev/sdk
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 2 22:10 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 2 22:10 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 2 22:10 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 2 22:10 /dev/sda3
brw-rw---- 1 root disk 8, 16 Aug 2 22:10 /dev/sdb
brw-rw---- 1 root disk 8, 32 Aug 2 22:10 /dev/sdc
brw-rw---- 1 root disk 8, 48 Aug 2 22:10 /dev/sdd
brw-rw---- 1 root disk 8, 64 Aug 2 22:10 /dev/sde
brw-rw---- 1 root disk 8, 80 Aug 2 22:10 /dev/sdf
brw-rw---- 1 root disk 8, 96 Aug 2 22:10 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug 2 22:10 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug 2 22:10 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug 2 22:10 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug 2 22:10 /dev/sdk
--- fdisk output for the 10 shared disks
[root@dbtest3 ~]# fdisk -l
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 536 4096000 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 536 10444 79584256 8e Linux LVM
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6ae81c6f
Device Boot Start End Blocks Id System
Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@dbtest4 ~]# fdisk -l
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 536 4096000 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 536 10444 79584256 8e Linux LVM
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6ae81c6f
Device Boot Start End Blocks Id System
Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
--- UUIDs of the 10 shared disks
[root@dbtest3 ~]# scsi_id /dev/sdb
36000c29cadb411725a7d6daacd6ad108
[root@dbtest3 ~]# scsi_id /dev/sdc
36000c29838a242f103bb6941175efec1
[root@dbtest3 ~]# scsi_id /dev/sdd
36000c29227146322659b155492a717c3
[root@dbtest3 ~]# scsi_id /dev/sde
36000c298040617a958533e6a46671d60
[root@dbtest3 ~]# scsi_id /dev/sdf
36000c2973cf2951a3b61c87301e1c99a
[root@dbtest3 ~]# scsi_id /dev/sdg
36000c29a926c801b7f9a3b245308e092
[root@dbtest3 ~]# scsi_id /dev/sdh
36000c29944cdbb8110dc96a802e142c8
[root@dbtest3 ~]# scsi_id /dev/sdi
36000c29b1312cf84809d67bc7c8dbe28
[root@dbtest3 ~]# scsi_id /dev/sdj
36000c29d4d97c71a36232c4e0a322be0
[root@dbtest3 ~]# scsi_id /dev/sdk
36000c29d2c6230eae26892a4670d909e
[root@dbtest4 ~]# scsi_id /dev/sdb
36000c29cadb411725a7d6daacd6ad108
[root@dbtest4 ~]# scsi_id /dev/sdc
36000c29838a242f103bb6941175efec1
[root@dbtest4 ~]# scsi_id /dev/sdd
36000c29227146322659b155492a717c3
[root@dbtest4 ~]# scsi_id /dev/sde
36000c298040617a958533e6a46671d60
[root@dbtest4 ~]# scsi_id /dev/sdf
36000c2973cf2951a3b61c87301e1c99a
[root@dbtest4 ~]# scsi_id /dev/sdg
36000c29a926c801b7f9a3b245308e092
[root@dbtest4 ~]# scsi_id /dev/sdh
36000c29944cdbb8110dc96a802e142c8
[root@dbtest4 ~]# scsi_id /dev/sdi
36000c29b1312cf84809d67bc7c8dbe28
[root@dbtest4 ~]# scsi_id /dev/sdj
36000c29d4d97c71a36232c4e0a322be0
[root@dbtest4 ~]# scsi_id /dev/sdk
36000c29d2c6230eae26892a4670d909e
--- 1. Binding the shared disks with a 99-* udev rules file
--- Write options=--whitelisted --replace-whitespace into the /etc/scsi_id.config configuration file
[root@dbtest3 ~]# echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
[root@dbtest4 ~]# echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
--- On both RAC nodes, get the UUID of each of the 10 shared disks and generate the 99-* udev rules
--- Add the output below to /etc/udev/rules.d/99-oracle-asmdevices.rules on each node
[root@dbtest3 ~]# for i in b c d e f g h i j k;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29cadb411725a7d6daacd6ad108", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29838a242f103bb6941175efec1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29227146322659b155492a717c3", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298040617a958533e6a46671d60", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2973cf2951a3b61c87301e1c99a", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a926c801b7f9a3b245308e092", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29944cdbb8110dc96a802e142c8", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d4d97c71a36232c4e0a322be0", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d2c6230eae26892a4670d909e", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@dbtest4 ~]# for i in b c d e f g h i j k;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29cadb411725a7d6daacd6ad108", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29838a242f103bb6941175efec1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29227146322659b155492a717c3", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298040617a958533e6a46671d60", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2973cf2951a3b61c87301e1c99a", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a926c801b7f9a3b245308e092", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29944cdbb8110dc96a802e142c8", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d4d97c71a36232c4e0a322be0", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d2c6230eae26892a4670d909e", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@dbtest3 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
[root@dbtest4 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
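The copy-paste step above can be avoided by redirecting the loop output straight into the rules file. A minimal sketch: make_rule is a hypothetical helper, and the scsi_id invocation assumes the same RHEL 6 flags used above.

```shell
# make_rule (hypothetical helper) emits one udev rule line for a given
# device letter and scsi_id value, matching the rule format used above.
make_rule() {
    # $1 = device letter (b..k), $2 = scsi_id of /dev/sd$1
    printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$2" "$1"
}

# On a real node, write the rules file in one step instead of copy-pasting
# (uncomment to use):
# for i in b c d e f g h i j k; do
#     make_rule "$i" "$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)"
# done > /etc/udev/rules.d/99-oracle-asmdevices.rules
```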
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[7979]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[7967]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
--- Check the asm shared-disk devices created by the binding
[root@dbtest3 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8, 16 Aug 3 10:33 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Aug 3 10:33 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Aug 3 10:33 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Aug 3 10:33 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Aug 3 10:33 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Aug 3 10:33 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Aug 3 10:33 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 Aug 3 10:33 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 Aug 3 10:33 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 160 Aug 3 10:33 /dev/asm-diskk
[root@dbtest4 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8, 16 Aug 3 10:33 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Aug 3 10:33 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Aug 3 10:33 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Aug 3 10:33 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Aug 3 10:33 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Aug 3 10:33 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Aug 3 10:33 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 Aug 3 10:33 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 Aug 3 10:33 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 160 Aug 3 10:33 /dev/asm-diskk
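The manual ls check above can also be scripted before handing the devices to Grid Infrastructure. A minimal sketch: check_asm_disks is a hypothetical helper, with the device directory as a parameter so the check is easy to dry-run.

```shell
# check_asm_disks (hypothetical helper) prints the name of every expected
# asm-disk device missing from the given directory; it prints nothing
# when all ten block devices are present.
check_asm_disks() {
    dir=$1   # normally /dev
    for i in b c d e f g h i j k; do
        [ -b "$dir/asm-disk$i" ] || echo "asm-disk$i"
    done
}

# Usage on a node: check_asm_disks /dev
```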
--- After udev binding on Linux 6, the original /dev/sdb through /dev/sdk devices no longer appear
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 3 10:33 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 3 10:33 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 3 10:33 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 3 10:33 /dev/sda3
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 3 10:33 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 3 10:33 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 3 10:33 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 3 10:33 /dev/sda3
--- fdisk output no longer shows /dev/sdb through /dev/sdk either
[root@dbtest3 ~]# fdisk -l
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 536 4096000 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 536 10444 79584256 8e Linux LVM
Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@dbtest4 ~]# fdisk -l
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 536 4096000 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 536 10444 79584256 8e Linux LVM
Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
---【For the changes in udev behavior across Linux versions, see Tim Hall's article at https://oracle-base.com/articles/linux/udev-scsi-rules-configuration-in-oracle-linux】
---【In that article, Tim Hall first partitions and formats the shared disks, then binds the partitions /dev/sdb1 through /dev/sde1】
---【In our tests, after udev binding /dev/sdb1 through /dev/sde1 also stopped appearing; only /dev/sdb through /dev/sde remained visible】
---【After testing binding via the 99-* rules file, we also tested binding the shared disks as raw devices via a 60-raw.rules file】
---【Those tests showed that on Linux 6, binding shared disks as raw devices via 60-raw.rules no longer supports the UUID method】
---【When binding raw devices by UUID, start_udev does not apply the 60-raw.rules file】
---【When binding by device letter, however, start_udev does apply the 60-raw.rules file】
---【But because the shared disks are visible to multiple RAC nodes, the same disk may get a different letter on each node, and letters can change after a reboot, so letter-based raw bindings are subject to device-name drift】
---【The tests also revealed a raw-device caching problem: even after 60-raw.rules is modified, the rules are reloaded, and udev is restarted,】
---【the raw devices from the previous configuration persist and are only removed by a system reboot. The test results follow】
---【Disk state before binding the raw devices】
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug 3 11:02 /dev/raw/rawctl
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 3 11:02 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 3 11:02 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 3 11:02 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 3 11:02 /dev/sda3
brw-rw---- 1 root disk 8, 16 Aug 3 11:02 /dev/sdb
brw-rw---- 1 root disk 8, 32 Aug 3 11:02 /dev/sdc
brw-rw---- 1 root disk 8, 48 Aug 3 11:02 /dev/sdd
brw-rw---- 1 root disk 8, 64 Aug 3 11:02 /dev/sde
brw-rw---- 1 root disk 8, 80 Aug 3 11:02 /dev/sdf
brw-rw---- 1 root disk 8, 96 Aug 3 11:02 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug 3 11:02 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug 3 11:02 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug 3 11:02 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug 3 11:02 /dev/sdk
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug 3 11:03 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 3 11:03 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 3 11:03 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 3 11:03 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug 3 11:03 /dev/sda3
brw-rw---- 1 root disk 8, 16 Aug 3 11:03 /dev/sdb
brw-rw---- 1 root disk 8, 32 Aug 3 11:03 /dev/sdc
brw-rw---- 1 root disk 8, 48 Aug 3 11:03 /dev/sdd
brw-rw---- 1 root disk 8, 64 Aug 3 11:03 /dev/sde
brw-rw---- 1 root disk 8, 80 Aug 3 11:03 /dev/sdf
brw-rw---- 1 root disk 8, 96 Aug 3 11:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug 3 11:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug 3 11:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug 3 11:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug 3 11:03 /dev/sdk
--- 2. Write the following rules into /etc/udev/rules.d/60-raw.rules
---【Binding the shared disks as raw devices by UUID】
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29cadb411725a7d6daacd6ad108", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29838a242f103bb6941175efec1", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29227146322659b155492a717c3", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c298040617a958533e6a46671d60", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c2973cf2951a3b61c87301e1c99a", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29a926c801b7f9a3b245308e092", RUN+="/bin/raw /dev/raw/raw16 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29944cdbb8110dc96a802e142c8", RUN+="/bin/raw /dev/raw/raw17 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", RUN+="/bin/raw /dev/raw/raw18 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d4d97c71a36232c4e0a322be0", RUN+="/bin/raw /dev/raw/raw19 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d2c6230eae26892a4670d909e", RUN+="/bin/raw /dev/raw/raw20 %N"
KERNEL=="raw[11-20]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29cadb411725a7d6daacd6ad108", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29838a242f103bb6941175efec1", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29227146322659b155492a717c3", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c298040617a958533e6a46671d60", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c2973cf2951a3b61c87301e1c99a", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29a926c801b7f9a3b245308e092", RUN+="/bin/raw /dev/raw/raw16 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29944cdbb8110dc96a802e142c8", RUN+="/bin/raw /dev/raw/raw17 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", RUN+="/bin/raw /dev/raw/raw18 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d4d97c71a36232c4e0a322be0", RUN+="/bin/raw /dev/raw/raw19 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d2c6230eae26892a4670d909e", RUN+="/bin/raw /dev/raw/raw20 %N"
KERNEL=="raw[11-20]", OWNER="grid", GROUP="asmadmin", MODE="660"
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[9693]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[11386]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
--- No raw device files were created by the binding
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug 3 11:20 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug 3 11:24 /dev/raw/rawctl
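One way to investigate why the UUID rules never fire is udev's test mode, which replays rule processing for a single device and shows which rules matched. A sketch: build_udevadm_test is a hypothetical helper, and sdb is just an example device.

```shell
# build_udevadm_test (hypothetical helper) composes the debug command for
# one disk; run the printed command on the RAC node itself and look for
# the 60-raw.rules entries in its output.
build_udevadm_test() {
    echo "udevadm test /sys/block/$1"
}

# e.g. on a node: eval "$(build_udevadm_test sdb)" 2>&1 | grep -i raw
```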
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules
--- 3. Binding the shared disks as raw devices by device letter
---【To demonstrate the raw-device caching problem mentioned above, this test binds 5 shared disks at a time, in two rounds, to confirm the caching problem really exists】
---【Note: binding by letter on the two nodes here is only meant to test the caching problem; we did not verify that sdb through sdf on node 1 map to the same physical disks as sdb through sdf on node 2】
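The letter-to-disk correspondence that the note above leaves unverified can be checked by comparing scsi_id output from both nodes. A sketch under stated assumptions: compare_ids is a hypothetical helper working on "letter uuid" text collected from each node (for example over ssh); it prints the letters whose IDs differ.

```shell
# compare_ids (hypothetical helper): $1 and $2 are "letter uuid" lines
# collected from node 1 and node 2; prints each letter whose uuid differs
# between the two nodes, and nothing when they all match.
compare_ids() {
    echo "$1" | while read -r letter id1; do
        id2=$(echo "$2" | awk -v l="$letter" '$1 == l {print $2}')
        [ "$id1" = "$id2" ] || echo "$letter"
    done
}

# Collecting the input on a node might look like (not run here):
# for i in b c d e f g h i j k; do echo "$i $(scsi_id /dev/sd$i)"; done
```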
---【Round 1: binding raw devices by device letter】
--- Write the following rules into /etc/udev/rules.d/60-raw.rules
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw15 %N"
KERNEL=="raw[11-15]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw15 %N"
KERNEL=="raw[11-15]", OWNER="grid", GROUP="asmadmin", MODE="660"
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[10586]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[12262]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
--- The raw device files were created successfully
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug 3 11:31 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug 3 11:31 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug 3 11:31 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug 3 11:31 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug 3 11:31 /dev/raw/raw15
crw-rw---- 1 root disk 162, 0 Aug 3 11:31 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug 3 11:31 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug 3 11:31 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug 3 11:31 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug 3 11:31 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug 3 11:31 /dev/raw/raw15
crw-rw---- 1 root disk 162, 0 Aug 3 11:31 /dev/raw/rawctl
---【Round 2: binding raw devices by device letter】
--- Write the following rules into /etc/udev/rules.d/60-raw.rules
---【Delete the raw device files /dev/raw/raw11 through raw15 created in round 1】
---【Remove the round-1 rules from /etc/udev/rules.d/60-raw.rules and add the rules below】
[root@dbtest3 ~]# rm -f /dev/raw/raw1*
[root@dbtest4 ~]# rm -f /dev/raw/raw1*
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw21 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw22 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw23 %N"
ACTION=="add", KERNEL=="sdj", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdk", RUN+="/bin/raw /dev/raw/raw25 %N"
KERNEL=="raw[21-25]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw21 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw22 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw23 %N"
ACTION=="add", KERNEL=="sdj", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdk", RUN+="/bin/raw /dev/raw/raw25 %N"
KERNEL=="raw[21-25]", OWNER="grid", GROUP="asmadmin", MODE="660"
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[11431]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[13102]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
[ OK ]
--- Raw device files created by the binding
---【The 5 raw device files /dev/raw/raw11 through raw15 from round 1 still exist】
---【Only after a system reboot do the round-1 files /dev/raw/raw11 through raw15 disappear】
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug 3 12:16 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug 3 12:16 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug 3 12:16 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug 3 12:16 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug 3 12:16 /dev/raw/raw15
crw-rw---- 1 root disk 162, 21 Aug 3 12:16 /dev/raw/raw21
crw-rw---- 1 root disk 162, 22 Aug 3 12:16 /dev/raw/raw22
crw-rw---- 1 root disk 162, 23 Aug 3 12:16 /dev/raw/raw23
crw-rw---- 1 root disk 162, 24 Aug 3 12:16 /dev/raw/raw24
crw-rw---- 1 root disk 162, 25 Aug 3 12:16 /dev/raw/raw25
crw-rw---- 1 root disk 162, 0 Aug 3 12:16 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug 3 12:16 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug 3 12:16 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug 3 12:16 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug 3 12:16 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug 3 12:16 /dev/raw/raw15
crw-rw---- 1 root disk 162, 21 Aug 3 12:16 /dev/raw/raw21
crw-rw---- 1 root disk 162, 22 Aug 3 12:16 /dev/raw/raw22
crw-rw---- 1 root disk 162, 23 Aug 3 12:16 /dev/raw/raw23
crw-rw---- 1 root disk 162, 24 Aug 3 12:16 /dev/raw/raw24
crw-rw---- 1 root disk 162, 25 Aug 3 12:16 /dev/raw/raw25
crw-rw---- 1 root disk 162, 0 Aug 3 12:16 /dev/raw/rawctl
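Besides rebooting, the raw(8) tool itself can release a binding by rebinding the device to major 0, minor 0, which may be worth trying against the stale entries above before resorting to a reboot. A sketch: unbind_cmds is a hypothetical helper that only prints the commands, so the destructive step stays explicit.

```shell
# unbind_cmds (hypothetical helper) prints one "raw ... 0 0" command per
# stale raw device; binding a raw device to major 0, minor 0 releases it
# (see raw(8)). Review the output before piping it to sh.
unbind_cmds() {
    n=$1                          # first raw number, e.g. 11
    while [ "$n" -le "$2" ]; do   # $2 = last raw number, e.g. 15
        echo "raw /dev/raw/raw$n 0 0"
        n=$((n + 1))
    done
}

# e.g.: unbind_cmds 11 15 | sh    (then rm -f /dev/raw/raw1[1-5] if desired)
```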