Our company recently took delivery of a fully loaded Dell R510 server. I installed two Fibre Channel HBAs in it to build a DIY SAN.

The operating system is NexentaStor Community Edition (downloadable from www.nexenta.com). Nexenta is built on OpenSolaris, so read performance is solid, noticeably better than Openfiler's; add SSDs as a read/write cache tier and performance improves substantially.
Nexenta needs about 4 GB of RAM per 1 TB of data for metadata caching.
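
That rule of thumb is easy to turn into a one-liner when sizing a box; a minimal sketch (the 18 TB figure is just the Community Edition cap used as an example):

```shell
# Rule of thumb from above: 4 GB of RAM per 1 TB of data for metadata cache.
pool_tb=18                               # example: the Community Edition cap
echo "${pool_tb} TB pool -> $((pool_tb * 4)) GB RAM for metadata"
```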
I won't cover the installation itself; there are plenty of guides online.

The Community Edition is capacity-limited (it must stay under 18 TB), and its web GUI cannot configure Fibre Channel target mode, but everything can be done from the command line.

With Nexenta installed, configure it as follows:
1. SSH into the server.
2. Upgrade and install the required packages:

setup appliance upgrade -v

setup appliance upgrade sunwlibc

3. Enter expert mode:

$ option expert_mode=1

$ !bash

4. Check and configure the FC HBAs: enable the STMF service, then list the QLogic device bindings with mdb:

root@myhost:/volumes# svcadm enable stmf

root@myhost:/volumes# svcs -v stmf

STATE          NSTATE        STIME    CTID   FMRI

online         -              3:12:33      - svc:/system/stmf:default

root@myhost:/volumes# mdb -k

Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp scsi_vhci zfs sd sata mr_sas qlc sockfs ip hook

neti sctp arp usba uhci stmf_sbd stmf lofs idm cpc random crypto smbsrv nfs logindmux ptm nsctl sdbc ufs nsmb sv rdc ii ipc ]

> ::devbindings -q qlc

ffffff06f652da48 pciex1077,2432, instance #0 (driver name: qlc)

ffffff06f652d7c0 pciex1077,2432, instance #1 (driver name: qlc)

Edit /etc/driver_aliases so it contains the following two lines, binding the device ID seen above (pciex1077,2432) to the COMSTAR target-mode driver qlt:

vi /etc/driver_aliases

qlt "pciex1077,2432"

qlc "pciex1077,2532"
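
The same edit can be scripted. This sketch works on a throwaway copy under /tmp so it is safe to try anywhere; on the real appliance you would edit /etc/driver_aliases itself:

```shell
# Demo on a local copy of the relevant driver_aliases line (illustrative path).
cat > /tmp/driver_aliases.demo <<'EOF'
qlc "pciex1077,2432"
EOF
# Rebind the QLE2460 device ID from the initiator driver (qlc)
# to the COMSTAR target driver (qlt):
sed -i 's/^qlc \("pciex1077,2432"\)/qlt \1/' /tmp/driver_aliases.demo
cat /tmp/driver_aliases.demo
```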

Reboot the server. After the reboot, fcinfo hba-port reports the ports in Target mode:

root@myhost:/volumes# fcinfo hba-port

HBA Port WWN: 21000024ff062e5c

Port Mode: Target

Port ID: 10400

OS Device Name: Not Applicable

Manufacturer: QLogic Corp.

Model: QLE2460

Firmware Version: 5.2.1

FCode/BIOS Version: N/A

Serial Number: not available

Driver Name: COMSTAR QLT

Driver Version: 20100505-1.05

Type: F-port

State: online

Supported Speeds: 1Gb 2Gb 4Gb

Current Speed: 4Gb

Node WWN: 20000024ff062e5c

HBA Port WWN: 2100001b32944fd3

Port Mode: Target

Port ID: 10500

OS Device Name: Not Applicable

Manufacturer: QLogic Corp.

Model: QLE2460

Firmware Version: 5.2.1

FCode/BIOS Version: N/A

Serial Number: not available

Driver Name: COMSTAR QLT

Driver Version: 20100505-1.05

Type: F-port

State: online

Supported Speeds: 1Gb 2Gb 4Gb

Current Speed: 4Gb

Node WWN: 2000001b32944fd3

root@myhost:/volumes# stmfadm list-target

Target: wwn.2100001B32944FD3

Target: wwn.21000024FF062E5C
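
Note that the target names stmfadm prints are just fcinfo's "HBA Port WWN" values upper-cased with a wwn. prefix; a quick transform matching the two captures above:

```shell
# Map an fcinfo "HBA Port WWN" to the corresponding stmfadm target name
# (naming convention observed in the captures above).
hba_wwn=21000024ff062e5c
echo "wwn.$(echo "$hba_wwn" | tr '[:lower:]' '[:upper:]')"
```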

root@myhost:/volumes# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

0. c0t0d0

/pci@0,0/pci8086,340a@3/pci1028,1f17@0/sd@0,0

1. c0t1d0

/pci@0,0/pci8086,340a@3/pci1028,1f17@0/sd@1,0

Specify disk (enter its number): ^C

Create a pool on the second disk; the -f flag forces creation:

zpool create -f BIFC  c0t1d0

root@myhost:/volumes# zpool status BIFC

pool: BIFC

state: ONLINE

scan: none requested

config:

NAME        STATE     READ WRITE CKSUM

BIFC        ONLINE       0     0     0

c0t1d0    ONLINE       0     0     0

errors: No known data errors

root@myhost:/volumes#  zfs list

NAME                     USED  AVAIL  REFER  MOUNTPOINT

BIFC                    95.5K  10.7T    31K  /BIFC

syspool                 19.1G   255G  34.5K  legacy

syspool/dump            16.8G   255G  16.8G  -

syspool/rootfs-nmu-000  1.28G   255G   860M  legacy

syspool/rootfs-nmu-001  62.5K   255G   837M  legacy

syspool/rootfs-nmu-002  61.5K   255G   840M  legacy

syspool/rootfs-nmu-003  68.5K   255G   841M  legacy

syspool/rootfs-nmu-004  75.5K   255G   841M  legacy

syspool/rootfs-nmu-005  80.5K   255G   840M  legacy

syspool/rootfs-nmu-006  87.5K   255G   840M  legacy

syspool/rootfs-nmu-007  89.5K   255G   841M  legacy

syspool/swap            1.03G   256G    16K  -

root@myhost:/volumes# df -h

Filesystem             size   used  avail capacity  Mounted on

syspool/rootfs-nmu-000

274G   860M   255G     1%    /

/devices                 0K     0K     0K     0%    /devices

/dev                     0K     0K     0K     0%    /dev

ctfs                     0K     0K     0K     0%    /system/contract

proc                     0K     0K     0K     0%    /proc

mnttab                   0K     0K     0K     0%    /etc/mnttab

swap                    20G   304K    20G     1%    /etc/svc/volatile

objfs                    0K     0K     0K     0%    /system/object

sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab

/usr/lib/libc/libc_hwcap1.so.1

255G   860M   255G     1%    /lib/libc.so.1

fd                       0K     0K     0K     0%    /dev/fd

swap                    20G   152K    20G     1%    /tmp

swap                    20G   104K    20G     1%    /var/run

BIFC                    11T    31K    11T     1%    /BIFC

After creating a data pool we can start carving out ZFS volumes (zVol):

zfs create [-ps] [-b blocksize] [-o property=value] ... -V size volume

root@myhost:/volumes# zfs create -V 7T BIFC/akebono-scratchvol

sbdadm – SCSI block device administration CLI

Although the zvol now exists, we still have to tell STMF that it is a volume that can be mapped as a LUN; that is what sbdadm is for:

root@myhost:/volumes# sbdadm create-lu /dev/zvol/rdsk/BIFC/akebono-scratchvol

Created the following LU:

GUID                    DATA SIZE           SOURCE

--------------------------------  -------------------  ----------------

600144f0383e8200000050c00a8e0004  7696581394432        /dev/zvol/rdsk/BIFC/akebono-scratchvol
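
The DATA SIZE column is simply the zvol size in bytes; 7 TiB works out to exactly the figure shown:

```shell
# 7 TiB expressed in bytes, matching sbdadm's DATA SIZE column above.
echo $((7 * 1024 * 1024 * 1024 * 1024))
```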

root@myhost:/volumes# stmfadm add-view 600144f0383e8200000050c00a8e0004

root@myhost:/volumes# stmfadm list-view -l 600144f0383e8200000050c00a8e0004

View Entry: 0

Host group   : All

Target group : All

LUN          : 0

root@myhost:/volumes# stmfadm create-hg akebono

root@myhost:/volumes# stmfadm list-hg

Host Group: akebono

root@myhost:/volumes# stmfadm list-target -v

Target: wwn.2100001B32944FD3

Operational Status: Online

Provider Name     : qlt

Alias             : qlt1,0

Protocol          : Fibre Channel

Sessions          : 2

Initiator: wwn.2100001B3282A309

Alias: -

Logged in since: Wed Dec  5 17:27:51 2012

Initiator: wwn.2100001B3282270A

Alias: -

Logged in since: Wed Dec  5 17:27:51 2012

Target: wwn.21000024FF062E5C

Operational Status: Online

Provider Name     : qlt

Alias             : qlt0,0

Protocol          : Fibre Channel

Sessions          : 3

Initiator: wwn.2100001B32829B04

Alias: -

Logged in since: Wed Dec  5 17:28:00 2012

Initiator: wwn.2100001B3282EC05

Alias: -

Logged in since: Wed Dec  5 17:28:00 2012

Initiator: wwn.2100001B3282FE02

Alias: -

Logged in since: Wed Dec  5 17:28:00 2012

stmfadm add-hg-member -g akebono wwn.2100001B3282270A

stmfadm add-hg-member -g akebono wwn.2100001b32829b04

stmfadm add-hg-member -g akebono wwn.2100001b3282a309

stmfadm add-hg-member -g akebono wwn.2100001b3282fe02

root@myhost:/volumes# stmfadm list-hg -v

Host Group: akebono

Member: wwn.2100001B3282270A

Member: wwn.2100001B32829B04

Member: wwn.2100001B3282A309

Member: wwn.2100001B3282FE02
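
A saved copy of that listing is easy to sanity-check mechanically, e.g. counting how many initiators made it into the host group (sample data from the capture above; /tmp path is illustrative):

```shell
# Count initiator WWNs in a captured `stmfadm list-hg -v` listing.
cat > /tmp/hg.out <<'EOF'
Host Group: akebono
Member: wwn.2100001B3282270A
Member: wwn.2100001B32829B04
Member: wwn.2100001B3282A309
Member: wwn.2100001B3282FE02
EOF
grep -c '^Member:' /tmp/hg.out
```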

root@myhost:/volumes# stmfadm list-view -l  600144f0383e8200000050c00a8e0004

View Entry: 0

Host group   : All

Target group : All

LUN          : 0

root@myhost:/volumes#  stmfadm remove-view -a -l 600144f0383e8200000050c00a8e0004

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm add-view -h akebono 600144f0383e8200000050c00a8e0004

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm list-view -l 600144f0383e8200000050c00a8e0004

View Entry: 0

Host group   : akebono

Target group : All

LUN          : 0

Target: wwn.2100001B32944FD3

Target: wwn.21000024FF062E5C

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm create-tg fc-ports

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm add-tg-member -g fc-ports  wwn.2100001B32944FD3

stmfadm: STMF target must be offline

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm offline-target wwn.2100001B32944FD3

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm offline-target wwn.21000024FF062E5C

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm add-tg-member -g fc-ports wwn.2100001B32944FD3

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm add-tg-member -g fc-ports wwn.21000024FF062E5C

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm online-target wwn.2100001B32944FD3

root@myhost:/dev/zvol/rdsk/BIFC# stmfadm online-target wwn.21000024FF062E5C

Now configure the client (initiator) host, the server that mounts this storage:

root@prod # luxadm probe

No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):

Node WWN:200200a0b8563d8e  Device Type:Disk device

Logical Path:/dev/rdsk/c1t201300A0B8563D8Ed31s2

Logical Path:/dev/rdsk/c2t201200A0B8563D8Ed31s2

Node WWN:200200a0b8564f0e  Device Type:Disk device

Logical Path:/dev/rdsk/c1t201300A0B8564F0Ed31s2

Logical Path:/dev/rdsk/c2t202200A0B8564F0Ed31s2

Node WWN:200200a0b8563d8e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002F64AA02509d0s2

Node WWN:200200a0b8563d8e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002F94AA02524d0s2

Node WWN:200200a0b8563d8e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002FB4AA02699d0s2

Node WWN:200200a0b8563d8e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000563DC6000005124BB1A243d0s2

Node WWN:200200a0b8564f0e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000564E18000006694BB30DB5d0s2

Node WWN:200200a0b8564f0e  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600A0B8000564F0E000006EE4BB30D1Fd0s2

Node WWN:2000001b32944fd3  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600144F0383E8200000050C00A8E0004d0s2

root@prod # luxadm -e port

/devices/pci@400/pci@0/pci@8/pci@0/pci@9/SUNW,qlc@0/fp@0,0:devctl  CONNECTED

/devices/pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0:devctl              CONNECTED

Two HBAs on this host are connected to the storage fabric.

For each connected HBA, dump the fabric map to inspect the WWNs visible on that port. Syntax: luxadm -e dump_map <device-port>, where the device port path comes from the luxadm -e port output above:


root@prod # luxadm -e dump_map  /devices/pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0:devctl

Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type

0    10100   0         2100001b32829b04 2000001b32829b04 0x1f (Unknown Type)

1    10200   0         201200a0b8563d8e 200200a0b8563d8e 0x0  (Disk device)

2    10300   0         2100001b3282ec05 2000001b3282ec05 0x1f (Unknown Type)

3    10400   0         21000024ff062e5c 20000024ff062e5c 0x0  (Disk device)

4    10500   0         202200a0b8564f0e 200200a0b8564f0e 0x0  (Disk device)

5    10000   0         2100001b3282fe02 2000001b3282fe02 0x1f (Unknown Type,Host Bus Adapter)

root@prod # luxadm -e dump_map /devices/pci@400/pci@0/pci@8/pci@0/pci@9/SUNW,qlc@0/fp@0,0:devctl

Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type

0    10000   0         2100001b3282270a 2000001b3282270a 0x1f (Unknown Type)

1    10100   0         201300a0b8563d8e 200200a0b8563d8e 0x0  (Disk device)

2    10400   0         201300a0b8564f0e 200200a0b8564f0e 0x0  (Disk device)

3    10500   0         2100001b32944fd3 2000001b32944fd3 0x0  (Disk device)

4    10800   0         2100001b32822d0a 2000001b32822d0a 0x1f (Unknown Type)

5    10200   0         2100001b3282a309 2000001b3282a309 0x1f (Unknown Type,Host Bus Adapter)

The dump lists several WWNs; what does each type mean?

Unknown Type,Host Bus Adapter: the local HBA's own port WWN.

Unknown Type: these entries show up in some environments and not others, depending on whether the hosts are clustered. In a cluster, every node's HBAs and the storage front-end ports are zoned together, so the command lists the WWNs of all nodes in the zone; the Unknown Type entries are the HBA WWNs of the other cluster nodes.

Disk device: a storage front-end (target) port WWN.
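
Given a saved dump_map capture, the storage-port WWNs (Type 0x0) can be filtered out mechanically; a sketch using sample rows from the output above (/tmp path is illustrative):

```shell
# Extract storage front-end (Disk device) port WWNs from a saved
# `luxadm -e dump_map` capture: column 4 is Port WWN, column 6 is Type.
cat > /tmp/map.out <<'EOF'
0    10100   0         2100001b32829b04 2000001b32829b04 0x1f (Unknown Type)
3    10400   0         21000024ff062e5c 20000024ff062e5c 0x0  (Disk device)
5    10000   0         2100001b3282fe02 2000001b3282fe02 0x1f (Unknown Type,Host Bus Adapter)
EOF
awk '$6 == "0x0" { print $4 }' /tmp/map.out
```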

root@prod # cfgadm -al

Ap_Id                          Type         Receptacle   Occupant     Condition

c0                             scsi-bus     connected    configured   unknown

c0::dsk/c0t0d0                 disk         connected    configured   unknown

c1                             fc-fabric    connected    configured   unknown

c1::201300a0b8563d8e           disk         connected    configured   unknown

c1::201300a0b8564f0e           disk         connected    configured   unknown

c1::2100001b3282270a           unknown      connected    unconfigured unknown

c1::2100001b32822d0a           unknown      connected    unconfigured unknown

c1::2100001b32944fd3           disk         connected    configured   unknown

c2                             fc-fabric    connected    configured   unknown

c2::201200a0b8563d8e           disk         connected    configured   unknown

c2::202200a0b8564f0e           disk         connected    configured   unknown

c2::2100001b32829b04           unknown      connected    unconfigured unknown

c2::2100001b3282ec05           unknown      connected    unconfigured unknown

c2::21000024ff062e5c           disk         connected    configured   unknown

usb0/1                         unknown      empty        unconfigured ok

usb0/2                         unknown      empty        unconfigured ok

usb0/3                         unknown      empty        unconfigured ok

usb1/1                         unknown      empty        unconfigured ok

usb1/2                         unknown      empty        unconfigured ok

usb2/1                         unknown      empty        unconfigured ok

usb2/2                         usb-hub      connected    configured   ok

usb2/2.1                       unknown      empty        unconfigured ok

usb2/2.2                       unknown      empty        unconfigured ok

usb2/2.3                       usb-storage  connected    configured   ok

usb2/2.4                       unknown      empty        unconfigured ok

usb2/3                         unknown      empty        unconfigured ok

usb2/4                         unknown      empty        unconfigured ok

usb2/5                         unknown      empty        unconfigured ok

root@prod # cfgadm -c configure  c1::2100001b32944fd3

root@prod # cfgadm -c configure  c3::21000024ff062e5c

 

# format 

Searching for disks...done

 

c4t600144F0383E8200000050C00A8E0004d0: configured with capacity of 7168.00GB

 

 

AVAILABLE DISK SELECTIONS:

       0. c0t0d0

          /pci@400/pci@0/pci@1/scsi@0/sd@0,0

       1. c0t1d0

          /pci@400/pci@0/pci@1/scsi@0/sd@1,0

       2. c0t2d0

          /pci@400/pci@0/pci@1/scsi@0/sd@2,0

       3. c1t201300A0B8563D8Ed31

          /pci@400/pci@0/pci@8/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w201300a0b8563d8e,1f

       4. c1t201300A0B8564F0Ed31

          /pci@400/pci@0/pci@8/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w201300a0b8564f0e,1f

       5. c3t201200A0B8563D8Ed31

          /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0/ssd@w201200a0b8563d8e,1f

       6. c3t202200A0B8564F0Ed31

          /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0/ssd@w202200a0b8564f0e,1f

       7. c4t600A0B8000563D8E000002F64AA02509d0

          /scsi_vhci/ssd@g600a0b8000563d8e000002f64aa02509

       8. c4t600A0B8000563D8E000002F94AA02524d0

          /scsi_vhci/ssd@g600a0b8000563d8e000002f94aa02524

       9. c4t600A0B8000563D8E000002FB4AA02699d0

          /scsi_vhci/ssd@g600a0b8000563d8e000002fb4aa02699

      10. c4t600A0B8000563DC6000005124BB1A243d0

          /scsi_vhci/ssd@g600a0b8000563dc6000005124bb1a243

      11. c4t600A0B8000564E18000006694BB30DB5d0

          /scsi_vhci/ssd@g600a0b8000564e18000006694bb30db5

      12. c4t600A0B8000564F0E000006EE4BB30D1Fd0

          /scsi_vhci/ssd@g600a0b8000564f0e000006ee4bb30d1f

      13. c4t600144F0383E8200000050C00A8E0004d0

          /scsi_vhci/ssd@g600144f0383e8200000050c00a8e0004

      14. c6t5d0

          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.3e8eb39d1ca50001,0

Specify disk (enter its number): ^C


# zpool list

NAME        SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT

bi_bak      212G   159G  53.2G    74%  ONLINE  -

bipool      278G  81.1G   197G    29%  ONLINE  -

dbdata     3.25T  3.14T   114G    96%  ONLINE  -

iscsidata  1.09T   450G   662G    40%  ONLINE  -

u01        1.23T   782G   474G    62%  ONLINE  -

 

# zfs list

NAME                         USED  AVAIL  REFER  MOUNTPOINT

bi_bak                       159G  49.9G   159G  /bi_bak

bipool                       113G   161G    97K  /bipool

bipool/ROOT                 64.9G   161G    21K  legacy

bipool/ROOT/s10s_u8wos_08a  64.9G   161G  64.9G  /

bipool/dump                 2.00G   161G  2.00G  -

bipool/export               14.1G   161G  14.0G  /export

bipool/export/home          65.8M   161G  65.8M  /export/home

bipool/swap                   32G   193G  98.5M  -

dbdata                      3.14T  56.7G  3.14T  /dbdata

iscsidata                    450G   645G   450G  /iscsidata

u01                          784G   452G   784G  /u01

 

# zpool create dbdata_1  c4t600144F0383E8200000050C00A8E0004d0

# zpool list

NAME        SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT

bi_bak      212G   159G  53.2G    74%  ONLINE  -

bipool      278G  81.1G   197G    29%  ONLINE  -

dbdata     3.25T  3.14T   109G    96%  ONLINE  -

dbdata_1   6.94T  76.5K  6.94T     0%  ONLINE  -

iscsidata  1.09T   450G   662G    40%  ONLINE  -

u01        1.23T   787G   469G    62%  ONLINE  -

# zfs list

NAME                         USED  AVAIL  REFER  MOUNTPOINT

bi_bak                       159G  49.9G   159G  /bi_bak

bipool                       113G   161G    97K  /bipool

bipool/ROOT                 64.9G   161G    21K  legacy

bipool/ROOT/s10s_u8wos_08a  64.9G   161G  64.9G  /

bipool/dump                 2.00G   161G  2.00G  -

bipool/export               14.1G   161G  14.0G  /export

bipool/export/home          65.8M   161G  65.8M  /export/home

bipool/swap                   32G   193G  98.5M  -

dbdata                      3.14T  56.7G  3.14T  /dbdata

dbdata_1                      72K  6.83T    21K  /dbdata_1

iscsidata                    450G   645G   450G  /iscsidata

u01                          787G   449G   787G  /u01

 

# luxadm probe                      

No Network Array enclosures found in /dev/es

 

Found Fibre Channel device(s):

  Node WWN:200200a0b8563d8e  Device Type:Disk device

    Logical Path:/dev/rdsk/c1t201300A0B8563D8Ed31s2

    Logical Path:/dev/rdsk/c3t201200A0B8563D8Ed31s2

  Node WWN:200200a0b8564f0e  Device Type:Disk device

    Logical Path:/dev/rdsk/c1t201300A0B8564F0Ed31s2

    Logical Path:/dev/rdsk/c3t202200A0B8564F0Ed31s2

  Node WWN:200200a0b8563d8e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002F64AA02509d0s2

  Node WWN:200200a0b8563d8e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002F94AA02524d0s2

  Node WWN:200200a0b8563d8e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000563D8E000002FB4AA02699d0s2

  Node WWN:200200a0b8563d8e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000563DC6000005124BB1A243d0s2

  Node WWN:200200a0b8564f0e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000564E18000006694BB30DB5d0s2

  Node WWN:200200a0b8564f0e  Device Type:Disk device

    Logical Path:/dev/rdsk/c4t600A0B8000564F0E000006EE4BB30D1Fd0s2

  Node WWN:2000001b32944fd3  Device Type:Disk device

Logical Path:/dev/rdsk/c4t600144F0383E8200000050C00A8E0004d0s2

 

 

# mpathadm list lu

        /dev/rdsk/c4t600A0B8000563DC6000005124BB1A243d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600A0B8000563D8E000002FB4AA02699d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600A0B8000563D8E000002F94AA02524d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600A0B8000563D8E000002F64AA02509d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600A0B8000564E18000006694BB30DB5d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600A0B8000564F0E000006EE4BB30D1Fd0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c4t600144F0383E8200000050C00A8E0004d0s2

                Total Path Count: 2

                Operational Path Count: 2
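
With MPxIO, every LU here should report two operational paths; a saved copy of that output can be checked in one pass (sample lines from the capture above; /tmp path is illustrative):

```shell
# Flag any LU whose operational path count differs from the expected 2,
# scanning a saved `mpathadm list lu` capture.
cat > /tmp/mp.out <<'EOF'
/dev/rdsk/c4t600144F0383E8200000050C00A8E0004d0s2
Total Path Count: 2
Operational Path Count: 2
EOF
awk -F': ' '/Operational Path Count/ && $2 != 2 { bad=1 }
            END { print (bad ? "DEGRADED" : "OK") }' /tmp/mp.out
```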

Setup complete!

Source: ITPUB blog, http://blog.itpub.net/8183550/viewspace-750885/; please credit the source when reposting.

