Storage virtualization technology is used to combine multiple hard drives into one or more disk array groups, with the aim of improving performance, providing data redundancy, or both.

RAID (/reɪd/; “Redundant Array of Inexpensive Disks”[1] or “Redundant Array of Independent Disks”[2]) is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as “single large expensive disk” (SLED).[3][1]

Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word “RAID” followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.

Contents

  • History
  • Overview
  • Standard levels
  • Nested (hybrid) RAID
  • Non-standard levels
  • Implementations
    • Hardware-based
    • Software-based
    • Firmware- and driver-based
  • Integrity
  • Weaknesses
    • Correlated failures
    • Unrecoverable read errors during rebuild
    • Increasing rebuild time and failure probability
    • Atomicity
    • Write-cache reliability

History

The term “RAID” was invented by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987. In their June 1988 paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, presented at the SIGMOD conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive.[4]

Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper’s publication,[3] including the following:

Mirroring (RAID 1) was well established in the 1970s including, for example, Tandem NonStop Systems.
In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.[5]
Around 1983, DEC began shipping subsystem mirrored RA8X disk drives (now known as RAID 1) as part of its HSC50 subsystem.[6]
In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5.[7]
Around 1988, the Thinking Machines’ DataVault used error correction codes (now known as RAID 2) in an array of disk drives.[8] A similar approach was used in the early 1960s on the IBM 353.[9][10]
Industry manufacturers later redefined the RAID acronym to stand for “Redundant Array of Independent Disks”.[2][11][12][13]

Overview

Many RAID levels employ an error protection scheme called “parity”, a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or Reed–Solomon error correction.[14]
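
As a small illustration of the XOR scheme (a minimal sketch in Python, not tied to any particular RAID implementation), the parity block of a stripe is the byte-wise XOR of its data blocks, and any single missing block can be recomputed from the parity plus the surviving blocks:

    from functools import reduce

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # A stripe of three data blocks and their parity.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Simulate losing the second block: XOR-ing the parity with the
    # surviving data blocks yields the missing block again.
    recovered = xor_blocks([parity, data[0], data[2]])
    assert recovered == data[1]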

RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this “hybrid RAID”.[15]

Standard levels

Main article: Standard RAID levels

Storage servers with 24 hard disk drives each and built-in hardware RAID controllers supporting various RAID levels
Originally, there were five standard levels of RAID, but many variations have evolved, including several nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard:[16][17]

RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same; it is the sum of the capacities of the drives in the set. But because striping distributes the contents of each file among all drives in the set, the failure of any drive causes the entire RAID 0 volume and all files to be lost. In comparison, a spanned volume preserves the files on the unfailing drives. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of drives because, unlike spanned volumes, reads and writes are done concurrently.[11] The cost is increased vulnerability to drive failures—since any drive in a RAID 0 setup failing causes the entire volume to be lost, the average failure rate of the volume rises with the number of attached drives.
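
The mapping behind striping can be sketched in a few lines of Python (an illustration only; real arrays stripe in larger chunks and the exact layout varies by implementation). Consecutive logical blocks land on different drives, which is why reads and writes of a large file proceed concurrently on all members:

    def raid0_location(logical_block, drives):
        """Map a logical block number to (drive index, block offset on that drive)."""
        return logical_block % drives, logical_block // drives

    # With four drives, logical blocks 0..7 fall on drives 0,1,2,3,0,1,2,3,
    # so a sequential transfer keeps every spindle busy at once.
    for block in range(8):
        print(block, raid0_location(block, drives=4))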

RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two or more drives, thereby producing a “mirrored set” of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.[11]

RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.[11] This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2),[18] as of 2014 it is not used by any commercially available system.[19]

RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.[11] Although implementations exist,[20] RAID 3 is not commonly used in practice.

RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP.[21] The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.[1]

RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.[11] Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see “Increasing rebuild time and failure probability” section, below).[22] Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array.
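
A simplified Python sketch of RAID 5's rotating parity and of a rebuild (the rotation shown is only one of several layouts in use, and the helper names are illustrative): the parity block moves to a different drive on each stripe, and a failed drive's block is the XOR of all surviving blocks in its stripe:

    from functools import reduce

    def xor(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def parity_drive(stripe, drives):
        """Drive holding the parity block for a given stripe (one common rotation)."""
        return (drives - 1 - stripe) % drives

    # Stripe 0 on a four-drive set: three data blocks plus their XOR parity.
    data = [b"D0D0", b"D1D1", b"D2D2"]
    stripe0 = list(data)
    stripe0.insert(parity_drive(0, drives=4), xor(data))

    # If drive 2 fails, its block is recomputed from the surviving three blocks.
    survivors = [blk for i, blk in enumerate(stripe0) if i != 2]
    assert xor(survivors) == stripe0[2]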

RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.[11] With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.[23] RAID 10 also minimizes these problems.[24]

Nested (hybrid) RAID

Main article: Nested RAID levels
In what was originally termed hybrid RAID,[25] many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep.[26]

The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the “+” (yielding RAID 10 and RAID 50, respectively).

RAID 0+1: creates two stripes and mirrors them. If a single drive fails, one of the mirrors has failed; at this point the array is effectively running as RAID 0 with no redundancy. A rebuild then carries significantly higher risk than with RAID 1+0, as all the data from all the drives in the remaining stripe has to be read, rather than just from one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window.[27][28][29]
RAID 1+0: (see: RAID 10) creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives.[30]
JBOD RAID N+N: With JBOD (just a bunch of disks), it is possible to concatenate not only disks but also volumes such as RAID sets. With larger drive capacities, write delay and rebuilding time increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting a larger RAID N set into smaller subsets and concatenating them with linear JBOD,[clarification needed] write and rebuilding time are reduced. If a hardware RAID controller is not capable of nesting linear JBOD with RAID N, then linear JBOD can be achieved with OS-level software RAID in combination with separate RAID N subset volumes created within one or more hardware RAID controllers. Besides a drastic speed increase, this approach offers another substantial advantage: a linear JBOD can be started with a small set of disks and expanded later with disks of different sizes (over time, larger disks become available on the market). There is a further advantage in the form of disaster recovery: if one RAID N subset fails, the data on the other RAID N subsets is not lost, reducing restore time.[citation needed]

Non-standard levels

Main article: Non-standard RAID levels
Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialized needs of a small niche group. Such configurations include the following:

Linux MD RAID 10 provides a general RAID driver that in its “near” layout defaults to a standard RAID 1 with two drives, and a standard RAID 1+0 with four drives; however, it can include any number of drives, including odd numbers. With its “far” layout, MD RAID 10 can run both striped and mirrored, even with only two drives in f2 layout; this runs mirroring with striped reads, giving the read performance of RAID 0. Regular RAID 1, as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel.[30][31][32]
Hadoop has a RAID system that generates a parity file by xor-ing a stripe of blocks in a single HDFS file.[33]
BeeGFS, the parallel file system, has internal striping (comparable to file-based RAID0) and replication (comparable to file-based RAID10) options to aggregate throughput and capacity of multiple servers and is typically based on top of an underlying RAID to make disk failures transparent.
Declustered RAID scatters dual (or more) copies of the data across all disks (possibly hundreds) in a storage subsystem, while holding back enough spare capacity to allow for a few disks to fail. The scattering is based on algorithms which give the appearance of arbitrariness. When one or more disks fail the missing copies are rebuilt into that spare capacity, again arbitrarily. Because the rebuild is done from and to all the remaining disks, it operates much faster than with traditional RAID, reducing the overall impact on clients of the storage system.

Implementations

The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called “hardware-assisted software RAID”), or it may reside entirely within the hardware RAID controller.

Hardware-based

Main article: RAID controller
Configuration of hardware RAID
Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating system is booted, and after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller. Unlike network interface controllers for Ethernet, which can usually be configured and serviced entirely through common operating system paradigms such as ifconfig in Unix without any third-party tools, each RAID controller manufacturer usually provides its own proprietary software tooling for each operating system it deems to support, ensuring vendor lock-in and contributing to reliability issues.[34]

For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable the Linux compatibility layer and use Linux tooling from Adaptec,[35] potentially compromising the stability, reliability and security of their setup, especially when taking the long-term view.[34]

Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management and hot spare disk designations from within the operating system without having to reboot into card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and allow LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health monitoring;[36] this approach has subsequently been adopted and extended by NetBSD in 2007 as well.[37]

Software-based

Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as:

A layer that abstracts multiple devices, thereby providing a single virtual device (such as Linux kernel’s md and OpenBSD’s softraid); a toy sketch of this approach appears after this list
A more generic logical volume manager (provided with most server-class operating systems, for example Veritas Volume Manager or Linux LVM)
A component of the file system (such as ZFS, Spectrum Scale or Btrfs)
A layer that sits above any file system and provides parity protection to user data (such as RAID-F)[38]
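
As a toy illustration of the first approach above (a layer that presents several devices as one virtual device), the following Python sketch mirrors every write to two file-backed "drives" and serves reads from either member; the class and file paths are purely illustrative and not any real driver's interface:

    class Mirror:
        """Minimal RAID 1-style layer over two pre-existing, file-backed 'drives'."""

        def __init__(self, path_a, path_b, block_size=4096):
            self.block_size = block_size
            self.members = [open(path_a, "r+b"), open(path_b, "r+b")]

        def write_block(self, lba, data):
            for member in self.members:            # every write goes to both members
                member.seek(lba * self.block_size)
                member.write(data)
                member.flush()

        def read_block(self, lba, member=0):
            chosen = self.members[member]          # a read can be served by either member
            chosen.seek(lba * self.block_size)
            return chosen.read(self.block_size)
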
Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager:

ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6 (RAID-Z2) double-parity, and a triple-parity version (RAID-Z3) also referred to as RAID 7.[39] As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively developed under the OpenZFS umbrella project.[40][41][42][43][44]
Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection schemes up to n+3. A particularity is the dynamic rebuilding priority, which runs with low impact in the background until a data chunk hits n+0 redundancy, in which case the chunk is quickly rebuilt to at least n+1. In addition, Spectrum Scale supports metro-distance RAID 1.[45]
Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).[46][47]
XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices.[48] However, the implementation of XFS in Linux kernel lacks the integrated volume manager.[49]
Many operating systems provide RAID implementations, including the following:

Hewlett-Packard’s OpenVMS operating system supports RAID 1. The mirrored disks, called a “shadow set”, can be in different locations to assist in disaster recovery.[50]
Apple’s macOS and macOS Server support RAID 0, RAID 1, and RAID 1+0.[51][52]
FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM modules and ccd.[53][54][55]
Linux’s md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings.[56] Certain reshaping/resizing/expanding operations are also supported.[57]
Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. Logical Disk Manager, introduced with Windows 2000, allows for the creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this was limited only to professional and server editions of Windows until the release of Windows 8.[58][59] Windows XP can be modified to unlock support for RAID 0, 1, and 5.[60] Windows 8 and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level.[61]
NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe.[62]
OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid.[63]
If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading a kernel from such an array.[64]

Firmware- and driver-based

A SATA 3.0 controller that provides RAID functionality through proprietary firmware and drivers
See also: MD RAID external metadata
Software-implemented RAID is not always compatible with the system’s boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive “RAID controllers” were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system.[65] An example is Intel Rapid Storage Technology, implemented on many consumer-level motherboards.[66][67]

Because some minimal hardware support is involved, this implementation is also called “hardware-assisted software RAID”,[68][69][70] “hybrid model” RAID,[70] or even “fake RAID”.[71] If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over the pure software RAID is that—if using a redundancy mode—the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system’s drivers take over.[70]

Integrity

Data scrubbing (referred to in some environments as patrol read) involves periodic reading and checking by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects bad blocks before use.[72] Data scrubbing checks for bad blocks on each storage device in an array, but also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the recovered data to spare blocks elsewhere on the drive.[73]
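
The following Python fragment sketches the idea for a single-parity stripe (purely conceptual, not any controller's firmware; read_block and rewrite_block stand in for whatever I/O routines the implementation provides): every block is touched, and a block that cannot be read is rebuilt from the others and rewritten, which lets the drive remap the bad sector:

    from functools import reduce

    def xor(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def scrub_stripe(read_block, rewrite_block, drives, stripe):
        """Scrub one stripe: try to read every block, rebuild any single block that fails."""
        good, bad = {}, []
        for d in range(drives):
            try:
                good[d] = read_block(d, stripe)    # touches blocks nothing else has accessed
            except IOError:                        # latent sector error caught early
                bad.append(d)
        if len(bad) == 1:                          # single parity can repair one lost block
            rewrite_block(bad[0], stripe, xor(list(good.values())))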

Frequently, a RAID controller is configured to “drop” a component drive (that is, to assume a component drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array controller to drop a good drive because that drive has not been given enough time to complete its internal error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and so-called “enterprise class” drives limit this error recovery time to reduce risk.[citation needed] Western Digital’s desktop drives used to have a specific fix. A utility called WDTLER.exe limited a drive’s error recovery time. The utility enabled TLER (time limited error recovery), which limits the error recovery time to seven seconds. Around September 2009, Western Digital disabled this feature in their desktop drives (such as the Caviar Black line), making such drives unsuitable for use in RAID configurations.[74] However, Western Digital enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery timeout that cannot be changed is therefore less suitable than a desktop drive.[74] In late 2010, the Smartmontools program began supporting the configuration of ATA Error Recovery Control, allowing the tool to configure many desktop class hard drives for use in RAID setups.[74]

While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware, and virus destruction. Many studies cite operator fault as a common source of malfunction,[75][76] such as a server operator replacing the incorrect drive in a faulty RAID, and disabling the system (even temporarily) in the process.[77]

An array can be overwhelmed by a catastrophic failure that exceeds its recovery capacity, and the entire array is at risk of physical damage by fire, natural disaster, and human forces; however, backups can be stored off site. An array is also vulnerable to controller failure, because it is not always possible to migrate it to a new, different controller without data loss.[78]

Weaknesses

Correlated failures

In practice, the drives are often the same age (with similar wear) and subject to the same environment. Since many drive failures are due to mechanical issues (which are more likely on older drives), this violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact statistically correlated.[11] In practice, the chances for a second failure before the first has been recovered (causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives, the probability of two drives in the same cluster failing within one hour was four times larger than predicted by the exponential statistical distribution—which characterizes processes in which events occur continuously and independently at a constant average rate. The probability of two failures in the same 10-hour period was twice as large as predicted by an exponential distribution.[79]

Unrecoverable read errors during rebuild

Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE). The associated media assessment measure, unrecoverable bit error (UBE) rate, is typically guaranteed to be less than one bit in 10^15[disputed – discuss] for enterprise-class drives (SCSI, FC, SAS or SATA), and less than one bit in 10^14[disputed – discuss] for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5 instances have led to the maximum error rates being insufficient to guarantee a successful recovery, due to the high likelihood of such an error occurring on one or more remaining drives during a RAID set rebuild.[11][obsolete source][80][deprecated source?] When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the effects of UREs as they affect not only the sector where they occur, but also reconstructed blocks using that sector for parity computation.[81]
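
A back-of-the-envelope calculation makes the problem concrete (the drive sizes below are illustrative assumptions, not vendor figures): a rebuild must read every remaining bit, so the chance of completing it without a single URE shrinks quickly as capacity grows:

    import math

    def rebuild_survival(surviving_drives, drive_bytes, ure_rate_bits=1e14):
        """Probability of reading all surviving drives without an URE (independent-error model)."""
        bits_to_read = surviving_drives * drive_bytes * 8
        expected_errors = bits_to_read / ure_rate_bits
        return math.exp(-expected_errors)      # Poisson approximation of "zero errors"

    # Three surviving 4 TB desktop-class drives (one error per 10^14 bits read):
    print(rebuild_survival(3, 4e12))           # ~0.38 -- the rebuild often hits an URE
    # The same drives at an enterprise-class 10^15 rate fare much better:
    print(rebuild_survival(3, 4e12, ure_rate_bits=1e15))   # ~0.91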

Double-protection parity-based schemes, such as RAID 6, attempt to address this issue by providing redundancy that allows double-drive failures; as a downside, such schemes suffer from elevated write penalty—the number of times the storage medium must be accessed during a single write operation.[82] Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a lower risk from UREs than those using parity computation or mirroring between striped sets.[24][83] Data scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of UREs involves remapping of affected underlying disk sectors, utilizing the drive’s sector remapping pool; in case of UREs detected during background scrubbing, data redundancy provided by a fully operational RAID set allows the missing data to be reconstructed and rewritten to a remapped sector.[84][85]

Increasing rebuild time and failure probability

Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or yet undetected read errors may surface. The rebuild time is also limited if the entire array is still in operation at reduced capacity.[86] Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to “classic” two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives’ mean time between failure (MTBF) have increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time.[22]
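
An illustrative calculation (the throughput figures are assumptions): a rebuild has to rewrite the replacement drive end to end, so its duration grows roughly linearly with capacity while sustained transfer rates improve far more slowly:

    def rebuild_hours(drive_bytes, sustained_mb_per_s):
        """Hours to rewrite one replacement drive end to end, ignoring foreground I/O."""
        return drive_bytes / (sustained_mb_per_s * 1e6) / 3600

    print(rebuild_hours(1e12, 150))    # a 1 TB drive at 150 MB/s: about 1.9 hours
    print(rebuild_hours(16e12, 250))   # a 16 TB drive at 250 MB/s: about 17.8 hours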

Some commentators have declared that RAID 6 is only a “band aid” in this respect, because it only kicks the problem a little further down the road.[22] However, according to the 2006 NetApp study of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives.[87][citation not found] Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010.[87][unreliable source?]

Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time.[87][unreliable source?]

Atomicity

A system crash or other interruption of a write operation can result in states where the parity is inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used for recovery in the case of a disk failure. This is commonly termed the RAID 5 write hole.[11] The RAID write hole is a known data corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk.[88] The write hole can be addressed with write-ahead logging. This was fixed in mdadm by introducing a dedicated journaling device (typically an SSD or NVM device, to avoid a performance penalty) for that purpose.[89][90]
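
A minimal sketch of the write-ahead-logging idea in Python (not mdadm's actual on-disk journal format; write_block stands in for the array's low-level write routine): the intended data and parity are made durable in a journal before any in-place update, so an interrupted stripe update can be replayed rather than leaving data and parity inconsistent:

    import json, os

    def journaled_stripe_update(journal_path, write_block, stripe, new_blocks):
        """Log the whole intended stripe update, apply it in place, then retire the log entry."""
        record = {"stripe": stripe,
                  "blocks": {str(d): blk.hex() for d, blk in new_blocks.items()}}
        with open(journal_path, "w") as journal:
            json.dump(record, journal)
            journal.flush()
            os.fsync(journal.fileno())          # the intent is durable before any in-place write
        for drive, blk in new_blocks.items():
            write_block(drive, stripe, blk)     # a crash here is recoverable: replay the journal
        os.remove(journal_path)                 # update complete; retire the log entry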

This is a little-understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote “Update in Place is a Poison Apple” during the early days of relational database commercialization.[91]

Write-cache reliability

There are concerns about write-cache reliability, specifically regarding devices equipped with a write-back cache, which is a caching system that reports the data as written as soon as it is written to cache, as opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage. For this reason good write-back cache implementations include mechanisms, such as redundant battery power, to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time.[92]
