

How Dynamic Disks and Volumes Work

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2


In this section

  • Dynamic Disk and Volume Architecture

  • Dynamic Disk and Volume Physical Structure

  • Dynamic Disk and Volume Processes and Interactions

  • Types of RAID Volumes

Like basic disks, which are the most commonly used storage type on computers running Microsoft Windows, dynamic disks can use the master boot record (MBR) or GUID partition table (GPT) partitioning scheme. All volumes on dynamic disks are known as dynamic volumes. Dynamic disks were first introduced with Windows 2000 and provide features that basic disks do not, such as the ability to create volumes that span multiple disks (spanned and striped volumes) and the ability to create fault-tolerant volumes (mirrored and RAID-5 volumes).

Dynamic disks offer greater flexibility for volume management because they use a database to track information about dynamic volumes on the disk and about other dynamic disks in the computer. Because each dynamic disk in a computer stores a replica of the dynamic disk database, Windows Server 2003 can repair a corrupted database on one dynamic disk by using the database on another dynamic disk.

An optimal environment for dynamic disks and volumes is defined as follows:

  • The Windows Server 2003 operating system is installed and functioning properly.

  • The dynamic disks are functioning properly and they display the Online status in the Disk Management snap-in.

  • The dynamic volumes display the Healthy status in the Disk Management snap-in.

The following sections provide an in-depth view of how dynamic disks and volumes work in an optimal environment.

Dynamic Disk and Volume Architecture

Dynamic disks and volumes rely on the Logical Disk Manager (LDM) and Virtual Disk Service (VDS) and their associated components. These components enable you to perform tasks such as converting basic disks into dynamic disks, and creating fault-tolerant volumes. The following diagram shows the LDM and VDS components.

Logical Disk Manager and Virtual Disk Service Components


The following table lists the LDM and VDS components and provides a brief description of each.

Logical Disk Manager and Virtual Disk Service Components

Component | Description

Disk Management snap-in:

Dmdlgs.dll

Dmdskmgr.dll

Dmview.ocx

Diskmgmt.msc

Binaries that comprise the Disk Management snap-in user interface.

DiskPart command line utility:

Diskpart.exe

A scriptable alternative to the Disk Management snap-in.

Mount Manager command line:

Mountvol.exe

A command line utility that can be used to create, delete, or list volume mount points.

Virtual Disk Service:

Vds.exe

Vdsutil.dll

A program used to configure and maintain volume and disk storage.

Virtual Disk Service provider for basic disks and volumes:

Vdsbas.dll

The Virtual Disk Service calls into the basic provider when configuring basic disks and volumes.

Virtual Disk Service provider for dynamic disks and volumes:

Vdsdyndr.dll

The Virtual Disk Service calls into the dynamic provider when configuring dynamic disks and volumes.

Dmboot.sys

Dmconfig.dll

Dmintf.dll

Dmio.sys

Dmload.sys

Dmremote.exe

Dmutil.dll

Drivers and user mode components used to configure dynamic disks and volumes and perform I/O.

Logical Disk Manager Administrative Service:

Dmadmin.exe

The VDS provider for dynamic disks and volumes uses the interfaces exposed by this service to configure dynamic disks.

Logical Disk Manager service:

Dmserver.dll

A service that detects and monitors new hard disk drives and sends disk volume information to the Logical Disk Manager Administrative Service for configuration. If this service is stopped, dynamic disk status and configuration information might become outdated. If this service is disabled, any services that explicitly depend on it will fail to start.

Basic disk I/O driver:

Ftdisk.sys

A driver that manages all I/O for basic disks. Other system components, such as mount point manager, call into this driver to get information about basic disk volumes.

Mount point manager driver:

Mountmgr.sys

A binary that tracks drive letters, folder mount paths, and other mount points for volumes. Assigns a unique volume mount point of the form \??\Volume{GUID} to each volume, in addition to any drive letters or folder paths that have been assigned by the user. Ensures that a volume gets the same drive letter each time the computer boots, and also tries to retain a volume’s drive letter when the volume’s disk is moved to a new computer.

Partition manager:

Partmgr.sys

A filter driver that sits on top of the disk driver. All disk driver requests pass through the partition manager driver. This driver creates partition devices and notifies the volume managers of partition arrivals and removals. Exposes IOCTLs that return information about partitions to other components, and allow partition configuration.
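Several of the components in this table are command-line tools that you can run directly. As an illustration only, the following Python sketch (not a Windows component) runs Mountvol.exe with no arguments, which lists each volume's \\?\Volume{GUID}\ name along with its current mount points, and keeps only the volume-name lines; the exact output format assumed here is a simplification.

import subprocess

# Minimal sketch: run Mountvol.exe with no arguments and print the
# \\?\Volume{GUID}\ names it reports.
output = subprocess.run(["mountvol"], capture_output=True, text=True).stdout
for line in output.splitlines():
    line = line.strip()
    if line.startswith(r"\\?\Volume{"):
        print(line)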

Dynamic Disk and Volume Physical Structure

Dynamic disks can use either the master boot record (MBR) or GUID partition table (GPT) partitioning style. x86-based computers use disks with the MBR partitioning style and Itanium-based computers use disks with the GPT partitioning style.

The following diagram compares a dynamic MBR disk to a dynamic GPT disk.

Comparison of Dynamic MBR and GPT Disks


The following table compares dynamic MBR and GPT disks.

Comparison of Dynamic MBR and GPT Disks

In the comparison below, MBR disks apply to x86-based computers and GPT disks apply to Itanium-based computers.

Number of volumes on dynamic disks

  • MBR disk: Supports up to 1000 volumes per disk group.

  • GPT disk: Supports up to 1000 volumes per disk group.

Compatible operating systems

  • MBR disk: Can be read by Windows 2000 (all versions); Windows XP; and Windows Server 2003, all versions for x86-based and Itanium-based computers.

  • GPT disk: Can be read by Windows XP 64-Bit Edition; the 64-bit version of Windows Server 2003, Enterprise Edition; and the 64-bit version of Windows Server 2003, Datacenter Edition.

Maximum size of dynamic volumes

  • MBR disk: Supports the maximum volume size of the file system used to format the volume; up to 64 terabytes for a striped or spanned volume using 32 disks.

  • GPT disk: Supports the maximum volume size of the file system used to format the volume; up to 64 terabytes for a striped or spanned volume using 32 disks.

Partition tables (copies)

  • MBR disk: Contains one copy of the partition table.

  • GPT disk: Contains primary and backup partition tables for redundancy, plus checksum fields for improved partition structure integrity.

Locations for data storage

  • MBR disk: Stores data in partitions and in unpartitioned space. Although most user and program data is stored within partitions, some system metadata might be stored in hidden or unpartitioned sectors created by OEMs or other operating systems.

  • GPT disk: Stores user and program data in partitions that are visible to the user. Stores system metadata that is critical to platform operation in partitions that the 64-bit versions of Windows Server 2003 recognize but do not make visible to the user. Does not store any data in unpartitioned space.

Troubleshooting methods

  • MBR disk: Uses the same methods and tools used in Windows 2000.

  • GPT disk: Uses tools designed for GPT disks. (Do not use MBR troubleshooting tools on GPT disks.)

Dynamic Disk and Volume Processes and Interactions

The following sections discuss the processes used by dynamic disks and volumes and the ways in which those processes interact. These sections assume that your computer has at least three dynamic disks and that the dynamic disks are functioning properly.

Creating a simple volume

Creating a simple volume involves the Virtual Disk Service (VDS) and the Logical Disk Manager (LDM). A simple volume is a dynamic volume made up of disk space from a single dynamic disk. When you use Disk Management or DiskPart to create a simple volume, the tool calls the VDS API, which sends an IOCTL to the volume manager to create the volume. The volume manager creates the simple volume and records it in the dynamic disk database. Next, the volume manager sends information about the new volume to Plug and Play, which passes the information to Mount Manager and to VDS. Mount Manager assigns a drive letter to the volume, and after VDS sends the information to Disk Management or DiskPart, the volume is available and ready for use.

What Happens When a Simple Volume Is Created

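The same sequence can be driven from a script. The following sketch is a hypothetical example (the disk number, volume size, and drive letter are assumptions): it builds a small DiskPart script that converts a disk to dynamic, creates a simple volume, and assigns a drive letter, then runs it with diskpart /s, which exercises the VDS and volume manager path described above.

import subprocess
import tempfile

# Hypothetical values: disk 1, a 1024 MB simple volume, drive letter E.
DISKPART_SCRIPT = """\
select disk 1
convert dynamic
create volume simple size=1024 disk=1
assign letter=E
"""

def run_diskpart(script_text):
    """Write the DiskPart commands to a file and run them with diskpart /s."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script_file:
        script_file.write(script_text)
        path = script_file.name
    return subprocess.run(["diskpart", "/s", path], capture_output=True, text=True)

result = run_diskpart(DISKPART_SCRIPT)
print(result.stdout)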

Partition Entries on MBR Dynamic Disks

Like basic disks, dynamic disks contain an MBR that includes the master boot code, the disk signature, and the partition table for the disk. However, the partition table on a dynamic disk does not contain an entry for each volume on the disk because volume information is stored in the dynamic disk database. Instead, the partition table contains entries for the system volume, boot volume (if it is not the same as the system volume), and one or more additional partitions that cover all the remaining unallocated space on the disk. All these partitions use System ID 0x42, which indicates that these partitions are on a dynamic disk. Placing these partitions in the partition table prevents MBR-based disk utilities from interpreting the space as available for new partitions.

Note

  • In Windows 2000, the partition entries for existing basic volumes were preserved in the partition table when the disk was converted to dynamic. These entries prevented the converted dynamic volumes from being extended. This limitation has been removed from Windows Server 2003 for all converted volumes except the boot and system volumes. Partition entries for all other converted volumes are removed from the partition table, and therefore these volumes can be extended.

The following example shows a partial printout of an MBR on a dynamic disk that contains four simple volumes: the system volume, the boot volume, and two data volumes. Note, however, that the partition table contains entries for only three partitions. The first entry is the system volume, which is marked as active. The second entry is the boot volume, and the third entry is the container partition for the two data volumes on the disk. All entries are type 0x42, which specifies dynamic volumes.

000001B0:                                              80 01   .....,Dc!.!.....
000001C0: 01 00 42 FE 7F 04 3F 00 - 00 00 86 FA 3F 00 00 00   ..B..?.....?...
000001D0: 41 05 42 FE FF 02 C5 FA - 3F 00 7E 04 7D 00 00 00   A.B.....?.~.}...
000001E0: C1 03 42 FE FF FF 43 FF - BC 00 58 53 54 00 00 00   ..B...C...XST...
000001F0: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 55 AA   ..............U.
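As an illustration of how such a partition table can be read, the following Python sketch (an example for this discussion, not a Windows tool) parses the four 16-byte entries at offset 0x1BE of a 512-byte MBR sector and flags entries whose System ID is 0x42 as dynamic-disk partitions.

import struct

DYNAMIC_PARTITION_TYPE = 0x42  # System ID used for partitions on a dynamic MBR disk

def read_mbr_partition_entries(sector0):
    """Parse the four 16-byte partition entries from a 512-byte MBR sector."""
    if len(sector0) != 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR sector")
    entries = []
    for i in range(4):
        raw = sector0[446 + i * 16 : 446 + (i + 1) * 16]
        # Layout: boot flag, CHS start (skipped), type, CHS end (skipped),
        # starting LBA, sector count.
        boot_flag, ptype, start_lba, sector_count = struct.unpack("<B3xB3xII", raw)
        if ptype == 0x00:
            continue  # unused entry
        entries.append({
            "active": boot_flag == 0x80,
            "type": ptype,
            "dynamic": ptype == DYNAMIC_PARTITION_TYPE,
            "start_lba": start_lba,
            "sectors": sector_count,
        })
    return entries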

Partition Entries on Dynamic GPT Disks

The following example illustrates a partial hexadecimal printout of a GUID partition entry array on a dynamic GPT disk. The GUID partition entry array shows the Microsoft Reserved partition plus additional entries that appear only on dynamic GPT disks:

  • The LDM Metadata partition is a 1-megabyte hidden partition that stores the dynamic disk database, which contains information about all dynamic disks and volumes installed on the computer.

  • The LDM Data partition acts as a container for dynamic volumes. Individual dynamic volumes do not contain entries in the GUID partition entry array.

The partition type GUIDs in the printout match the entries in the table titled “Partition Type GUIDs” later in this section.

00000000: 16 E3 C9 E3 5C 0B B8 4D - 81 7D F9 2D F0 02 15 AE   ....\..M.}.-....
00000010: 31 C3 97 A6 A4 9F 1D 44 - 85 61 15 49 4A E9 7C 24   1......D.a.IJ.|$
00000020: 22 08 00 00 00 00 00 00 - 21 00 01 00 00 00 00 00   ".......!.......
00000030: 00 00 00 00 00 00 00 00 - 4D 00 69 00 63 00 72 00   ........M.i.c.r.
00000040: 6F 00 73 00 6F 00 66 00 - 74 00 20 00 72 00 65 00   o.s.o.f.t. .r.e.
00000050: 73 00 65 00 72 00 76 00 - 65 00 64 00 20 00 70 00   s.e.r.v.e.d. .p.
00000060: 61 00 72 00 74 00 69 00 - 74 00 69 00 6F 00 6E 00   a.r.t.i.t.i.o.n.
00000070: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00   ................
00000080: AA C8 08 58 8F 7E E0 42 - 85 D2 E1 E9 04 34 CF B3   ...X.~.B.....4..
00000090: 66 F2 3F 3A 09 D9 EA 49 - B1 32 75 D5 98 04 3C 34   f.?:...I.2u...<4
000000A0: 22 00 00 00 00 00 00 00 - 21 08 00 00 00 00 00 00   ".......!.......
000000B0: 00 00 00 00 00 00 00 00 - 4C 00 44 00 4D 00 20 00   ........L.D.M. .
000000C0: 6D 00 65 00 74 00 61 00 - 64 00 61 00 74 00 61 00   m.e.t.a.d.a.t.a.
000000D0: 20 00 70 00 61 00 72 00 - 74 00 69 00 74 00 69 00    .p.a.r.t.i.t.i.
000000E0: 6F 00 6E 00 00 00 00 00 - 00 00 00 00 00 00 00 00   o.n.............
000000F0: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00   ................
00000100: A0 60 9B AF 31 14 62 4F - BC 68 33 11 71 4A 69 AD   .`..1.bO.h3.qJi.
00000110: E2 33 A2 82 3A 5E D5 4C - AE 8E 4B EC 6B 76 4D ED   .3..:^.L..K.kvM.
00000120: 22 00 01 00 00 00 00 00 - 09 77 11 01 00 00 00 00   "........w......
00000130: 00 00 00 00 00 00 00 00 - 4C 00 44 00 4D 00 20 00   ........L.D.M. .
00000140: 64 00 61 00 74 00 61 00 - 20 00 70 00 61 00 72 00   d.a.t.a. .p.a.r.
00000150: 74 00 69 00 74 00 69 00 - 6F 00 6E 00 00 00 00 00   t.i.t.i.o.n.....
00000160: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00   ................
00000170: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00   ................
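Each entry in the dump above is 128 bytes: a 16-byte partition type GUID, a 16-byte unique partition GUID, 8-byte first and last LBAs, an 8-byte attribute field, and a 36-character UTF-16LE name. The first three fields of a GUID are stored in little-endian order on disk, which is why the bytes in the dump (and the GUID values in the table that follows) appear reordered relative to conventional GUID notation. The following Python sketch, a hypothetical example rather than a Windows tool, decodes one such entry.

import struct
import uuid

def parse_gpt_entry(entry):
    """Decode one 128-byte GUID partition entry."""
    type_guid_raw = entry[0:16]     # partition type GUID (mixed-endian on disk)
    unique_guid_raw = entry[16:32]  # unique partition GUID
    first_lba, last_lba, attributes = struct.unpack("<QQQ", entry[32:56])
    name = entry[56:128].decode("utf-16-le").rstrip("\x00")
    return {
        # uuid.UUID(bytes_le=...) converts the on-disk byte order into
        # conventional GUID notation.
        "type_guid": uuid.UUID(bytes_le=type_guid_raw),
        "unique_guid": uuid.UUID(bytes_le=unique_guid_raw),
        "first_lba": first_lba,
        "last_lba": last_lba,
        "attributes": attributes,
        "name": name,
    }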

Partition Type GUIDs

Partition type | GUID value

Unused entry

{00000000-0000-0000-0000-000000000000}

EFI System partition

{28732AC1-1FF8-D211-BA4B-00A0C93EC93B}

Microsoft Reserved partition

{16E3C9E3-5C0B-B84D-817D-F92DF00215AE}

Primary partition on a basic disk

{A2A0D0EB-E5B9-3344-87C0-68B6B72699C7}

LDM Metadata partition on a dynamic disk

{AAC80858-8F7E-E042-85D2-E1E90434CFB3}

LDM Data partition on a dynamic disk

{A0609BAF-3114-624F-BC68-3311714A69AD}

GUID Partition Entry Attributes

GUID partition entry attributes are descriptors for how a partition is used. The attributes are specified within a 64-bit value, so EFI supports up to 64 different attributes. The 64-bit versions of Windows Server 2003 use the attributes described in the following table.

GUID Partition Entry Attributes Used by the 64-Bit Editions of Windows on Itanium-based Computers

Bit | Description

Bit 0

Specifies that this partition is required for the platform to function. All original equipment manufacturer (OEM) partitions must have this bit set to protect the OEM partition from being overwritten by the disk tools supplied with Windows Server 2003.

Bit 60

Marks the partition as read-only. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.

Bit 62

Marks the partition as hidden. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.

Bit 63

Prevents the system from assigning a default drive letter to the partition. Used only for primary basic partitions of type {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7}.
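The bits in this table are simple bit positions within the 64-bit attribute field, so they can be tested with ordinary bit masks. The following Python sketch is a hypothetical helper written for illustration only.

# Bit positions from the table above, expressed as 64-bit masks.
PLATFORM_REQUIRED = 1 << 0    # Bit 0: required for the platform to function
READ_ONLY = 1 << 60           # Bit 60: partition is read-only
HIDDEN = 1 << 62              # Bit 62: partition is hidden
NO_DRIVE_LETTER = 1 << 63     # Bit 63: do not assign a default drive letter

def describe_attributes(attributes):
    """Return the names of the attribute bits that are set in a 64-bit value."""
    flags = {
        "platform required": PLATFORM_REQUIRED,
        "read-only": READ_ONLY,
        "hidden": HIDDEN,
        "no default drive letter": NO_DRIVE_LETTER,
    }
    return [name for name, bit in flags.items() if attributes & bit]

# Example: an OEM partition that is hidden and has no default drive letter.
print(describe_attributes((1 << 0) | (1 << 62) | (1 << 63)))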

Types of RAID Volumes

A redundant array of independent disks (RAID) is a fault-tolerant disk configuration in which part of the physical storage capacity contains redundant information about data stored on the disks. The redundant information is either parity information (in the case of a RAID-5 volume), or a complete, separate copy of the data (in the case of a mirrored volume). The redundant information enables regeneration of the data if one of the disks or the access path to it fails, or a sector on the disk cannot be read.

Windows Server 2003 supports three types of software RAID configurations:

  • Striped volumes use RAID-0, which stripes data across multiple disks. RAID-0 does not offer fault tolerance, but it does offer increased performance.

  • Mirrored volumes use RAID-1, which provides redundancy by creating two identical copies of a volume.

  • RAID-5 volumes use RAID-5, which stripes parity information across multiple disks. This parity information can be used to recreate data stored on a failed disk.

Note

  • Fault tolerance is never an alternative to performing regular backups.

Use DiskPart or Disk Management to configure and repair mirrored volumes and RAID-5 volumes. The following figure shows a mirrored volume and a RAID-5 volume with the Failed Redundancy status. The mirrored and RAID-5 volumes show Failed Redundancy because one of the disks that make up the volumes is offline.

Mirrored and RAID-5 Volumes That Have Failed Redundancy Status


Striped Volumes

Striped volumes improve input/output (I/O) performance by distributing I/O requests across two or more disks. Striped volumes are composed of stripes of data of equal size written across each disk in the volume. They are created from equally sized, unallocated areas on two or more disks. For Windows Server 2003, the size of each stripe is 64 kilobytes (KB).
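As a rough model of that layout, the following Python sketch maps a byte offset within a striped volume to a member disk and a 64 KB stripe. The simple round-robin rotation starting at the first disk is an assumption made for illustration; it is not the LDM on-disk metadata format.

STRIPE_SIZE = 64 * 1024  # Windows Server 2003 uses 64 KB stripes

def locate_stripe(volume_offset, disk_count):
    """Map a byte offset in a striped volume to (disk index, stripe on disk, offset in stripe)."""
    stripe_number = volume_offset // STRIPE_SIZE
    disk_index = stripe_number % disk_count       # stripes rotate across the member disks
    stripe_on_disk = stripe_number // disk_count  # which stripe this is on that disk
    offset_in_stripe = volume_offset % STRIPE_SIZE
    return disk_index, stripe_on_disk, offset_in_stripe

# Example: on a three-disk striped volume, byte offset 200 KB falls in stripe 3,
# which rotates back to the first disk.
print(locate_stripe(200 * 1024, 3))  # -> (0, 1, 8192)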

The disks in a striped volume do not need to be identical, but there must be unused space available on each disk that you want to include in the volume.

You cannot increase the size of a striped volume after it is created. To change the size of a striped volume, you must complete the following steps (a DiskPart sketch follows the list):

  1. Back up the data.

  2. Delete the striped volume by using Disk Management or DiskPart.

  3. Create a new, larger, striped volume by using Disk Management or DiskPart.

  4. Restore the data to the new striped volume.
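The following is a hypothetical DiskPart script for steps 2 and 3; the volume number, disk numbers, size, and drive letter are assumptions, and the data must already be backed up (step 1) because deleting the volume destroys it.

# Hypothetical DiskPart script text; it would be saved to a file and run
# with: diskpart /s <file>
RESIZE_STRIPED_VOLUME_SCRIPT = """\
select volume 3
delete volume
create volume stripe size=20480 disk=1,2,3
assign letter=S
"""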

Striped volumes do not contain redundant information. Therefore, the cost per gigabyte on a striped volume is identical to that for the same amount of storage configured from a contiguous area on a single disk. If one disk fails, the entire striped volume fails and no data can be recovered. The reliability for the striped volume is less than the least reliable disk in the set.

Striped volumes are used for performance reasons. In general, striped volumes work well when you need to distribute disk I/O operations. Access to the data on a striped volume is usually faster than access to the same data would be on a single disk, because the I/O is spread across more than one disk. Therefore, Windows Server 2003 can be seeking on more than one disk at the same time and can have simultaneous read or write operations.

A striped volume works well in the following situations:

  • When users need rapid read or write access to large databases or other data structures.

  • When collecting data from external sources at very high transfer rates. This is especially useful when collection is done asynchronously.

  • When multiple independent applications require access to data stored on the striped volume and the operating system supports asynchronous multithreading, which helps with load balancing of disk read and write operations.

Mirrored Volumes

A mirrored volume provides an identical twin of the selected volume. All data written to the mirrored volume is written to both volumes, so only 50 percent of the total disk capacity is usable. Because dual-write operations can degrade system performance, many mirrored volume configurations use duplexing, which means that each disk in the mirrored volume resides on its own disk controller. The benefit of duplexing is that it reduces the risk of a single point of failure: if one disk controller fails, the other controller (and the disk on that controller) continues to operate normally. If you do not use two controllers, a failed controller makes both mirrors of a mirrored volume inaccessible until the controller is replaced.

Note

  • If one disk in a mirrored volume fails, the computer continues to run and the mirrored volume is still accessible. However, the mirrored volume is no longer fault-tolerant, so you need to replace the failed disk or controller as soon as possible. If your computer supports hot-swappable hard disks, you do not need to restart the computer to install a new disk and resynchronize the mirror.

Almost any volume can be mirrored, including the system and boot volumes. However, you cannot mirror the EFI System partition on GPT disks. In addition, you cannot add disk space to a mirrored volume to increase the size of the volume later.

Advantages of Mirrored Volumes

Random disk-read operations on a mirrored volume are more efficient than random disk-read operations on a single volume. Windows Server 2003 has the capacity to load balance read operations across the disks. With current SCSI and Fibre Channel technology, two disk read operations can be done simultaneously.

When one of the disks that make up a mirrored volume fails, the mirrored volume is said to have lost redundancy. Losing redundancy affects system performance less on a mirrored volume than on a RAID-5 volume, because the remaining disk contains all of the data and no data recomputation is needed to run the system. When you configure your boot volume on a mirrored volume, you do not have to reinstall Windows Server 2003 to restart the computer after a disk failure.

When compared to a RAID-5 volume, a mirrored volume:

  • Has a lower entry cost because it requires only two disks, whereas a RAID-5 volume requires three or more disks.

  • Requires less system memory.

  • Provides good overall performance.

  • Does not degrade performance during a failure except when high-volume read operations are performed. However, if a single write error occurs, redundancy is lost.

A mirrored volume works well in the following situations:

  • When extremely high data reliability is required. A duplexed mirrored volume has the best data reliability because the entire I/O subsystem is duplicated.

  • When you have heavy write loads that need fault tolerance. In this case, mirrored volumes perform better than RAID-5 volumes.

  • When simplicity is important. Mirrored volumes are simple to understand and easy to set up.

Disadvantages of Mirrored Volumes

Disk-write operations on mirrored volumes are less efficient because data must be written to both disks. This performance penalty is minor, however, because writing to both disks usually takes place concurrently. In many situations, an end-user application is not affected by data being written to both disks.

Another performance penalty occurs when you resynchronize a mirrored volume. Resynchronization is the process by which a mirrored volume’s mirrors are made to contain identical data. During resynchronization, performance is affected because the computer is performing many I/O operations to copy the data.

Mirrored volumes are the least efficient at maximizing storage space. Because the data is duplicated, the space requirements for a mirrored volume are higher than for a RAID-5 volume.

Best Practices for Configuring Mirrored Volumes

To a large extent, how you configure your mirrored volumes depends on the number of disks and controllers that you want to have on the computer running Windows Server 2003. The following are general guidelines for configuring mirrored volumes:

  • Keep data volumes separate from boot volumes for better performance. Configuring your boot volume on a disk (and controller) that does not contain data sets gives you better performance.

  • Do not put the paging file on a mirrored volume. The paging file does not need to be redundant and can decrease the mirrored volume’s performance due to frequent disk-writing operations. Instead, put the paging file on a striped or simple volume.

  • For additional protection, put each disk in a mirrored volume on its own disk controller. When you use a mirrored volume for your system or boot volumes, you can make the configuration more fault-tolerant by putting each disk member of the mirrored volume on a separate controller. This approach allows you to survive controller or disk failures. Putting each disk member of the mirrored volume on a separate channel of a multichannel controller does not make the controller fault tolerant. However, this approach might improve performance.

  • Use identical disks when putting the system or boot volume on a mirrored volume. Although it is not necessary to use identical disks or to have the same volumes on each disk, it is strongly recommended that you use identical disks and controllers if you put your system and boot volume on a mirrored volume.

Note

  • After you mirror your system volume, you must test your configuration by starting Windows Server 2003 from each volume to ensure that you can start Windows Server 2003 if one of the disks fails. Startup problems can occur if the disks use different geometries or if the system volumes are at different offsets on the disk.

Creating Mirrored Volumes

To create a mirrored volume, use the Disk Management snap-in or the DiskPart command-line tool. You can create a mirrored volume in two ways:

  • Add a mirror to an existing simple volume on a dynamic disk. You must have an area of unused space on a different dynamic disk at least as large as the original simple volume. If you do not have a dynamic disk with enough unallocated space, the Add Mirror command is unavailable.

  • Create a new mirrored volume from unallocated space on two dynamic disks. The amount of disk space used for each half of the mirrored volume must be equal. If you have less unallocated space on one disk than the other, the mirrored volume can be no larger than the smaller of the two unallocated spaces.

In either case, if you have unallocated space left over, you can use the space to create other volumes.
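Both approaches can also be scripted. The following DiskPart sketches are hypothetical examples (volume numbers, disk numbers, size, and drive letter are assumptions); in DiskPart, a mirror is created by adding a second dynamic disk to a simple volume with the add command.

# Add a mirror to an existing simple volume on another dynamic disk.
ADD_MIRROR_SCRIPT = """\
select volume 2
add disk=3
"""

# Create a new mirrored volume from unallocated space on two dynamic disks:
# create a simple volume on one disk, then mirror it to the second disk.
NEW_MIRROR_SCRIPT = """\
create volume simple size=4096 disk=2
add disk=3
assign letter=M
"""

# Each script would be saved to a file and run with: diskpart /s <file>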

Mirroring the System and Boot Volumes in x86-based Computers

To ensure that your x86-based computer can load Windows Server 2003 if one of the disks or controllers fails, you can mirror the system and boot volumes.

  • Mirroring the system volume makes an exact copy of the volume that contains the hardware-specific files needed to load Windows Server 2003.

  • Mirroring the boot volume makes an exact copy of the volume that contains the Windows Server 2003 operating system.

The boot and system volumes can be separate volumes on the same disk, separate volumes on different disks, or they can be the same volume on the same disk. In addition, the system and boot volumes can be mirrored to a different disk on the same controller or to a different disk on a different controller. The following figure illustrates some common configurations for mirroring system and boot volumes.

Common Configurations of Mirrored System Volumes and Boot Volumes (x86-based Computers)


Guidelines for Mirroring System or Boot Volumes in x86-based Computers

Before you mirror the system or boot volume in an x86-based computer, note the following guidelines:

  • Use care when selecting Advanced Technology Attachment (ATA) disks for a mirrored system volume. Although using ATA disks is supported, the recovery procedure is more complicated when the master disk on the primary integrated device electronics (IDE) channel fails. In this case, you must move the disk with the remaining mirror to the primary IDE channel and set its jumper to master position.

  • Do not mirror the system volume by using an ATA disk with a SCSI disk because startup problems can occur if one of the disks fails.

  • If you use duplexed SCSI controllers, make sure to use identical controllers from the same manufacturer.

  • You must test the mirrored system volume before a failure to ensure that the computer can start from the remaining mirror.

Mirroring the Boot Volume and Replicating the EFI System Partition in Itanium-based Computers

To ensure that your Itanium-based computer can load Windows Server 2003 if one of the disks or controllers fails, you can mirror the boot volume. Mirroring the boot volume makes an exact copy of the volume that contains the Windows Server 2003 operating system. In addition to mirroring the boot volume, you should also replicate the EFI system partition. If you do not replicate the EFI system partition, and the disk holding it fails, you will not be able to boot the computer, even if there is a good, remaining boot volume on another disk.

The process of replicating the EFI system partition involves creating a new EFI system partition on a second GUID partition table (GPT) disk. Because the second EFI system partition is empty, you must copy the contents of the original EFI system partition into the second EFI system partition. After replicating the EFI system partition, you must use Bootcfg.exe to add the appropriate boot entries to the NVRAM so that they point to the copy of the EFI system partition on the second disk. Later, if you make any changes to the original EFI system partition, you must manually replicate those changes in the second EFI system partition.

The boot volume and the EFI system partition can be on the same disk, or they can be on different disks. The following figure illustrates the most common configuration.

Common Configurations of Mirrored Boot Volumes and Replicated EFI System Partitions (Itanium-based Computers)


Guidelines for Mirroring the Boot Volume and Replicating the EFI System Partition in Itanium-based Computers

Before you mirror the boot volume or replicate the EFI system partition in an Itanium-based computer, note the following guidelines:

  • Use care when selecting Advanced Technology Attachment (ATA) disks for a replicated EFI system partition. Although using ATA disks is supported, the recovery procedure is more complicated when the master disk on the primary integrated device electronics (IDE) channel fails. In this case, you must move the disk with the remaining EFI system partition to the primary IDE channel and set its jumper to the master position.

  • Do not replicate the EFI system partition by using an ATA disk with a SCSI disk, because startup problems can occur if one of the disks fails.

  • If you use duplexed SCSI controllers, make sure to use identical controllers from the same manufacturer.

  • You must test the mirrored boot volume and replicated EFI system partition before a failure, to ensure that the computer can start from the remaining boot volume and EFI system partition.

RAID-5 Volumes

A RAID-5 volume uses three or more disks and dedicates the equivalent of one disk's space to parity stripes, distributing those parity stripes across all the disks in the group. The data and parity information are arranged on the volume so that they are always on different disks.

Implementing a RAID-5 volume requires a minimum of three disks. The disks do not need to be identical, but there must be equally sized blocks of unallocated space available on each disk in the set. The disks can be on the same or different controllers. However, neither the system volume nor boot volume can be on a RAID-5 volume.

Note

  • As with striped volumes, you cannot add disks to a RAID-5 volume to increase the size of the volume later.

If one of the disks in a RAID-5 volume fails, none of the data is lost. When a read operation requires data from the failed disk, the system reads all of the remaining good data stripes in that stripe, along with the parity stripe, and combines them by using XOR; the order is not important. The result is the missing data stripe.

When the system needs to write a data stripe to a disk that has failed, it reads the data stripes on the other disks. The system uses the data stripes on the remaining disks to calculate the parity. Because the data stripe on the failed disk is unavailable, it is not written; the system only updates the parity stripe.
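The regeneration step is plain XOR arithmetic, as the following Python sketch illustrates with tiny four-byte stripes (real stripes are 64 KB).

def xor_stripes(*stripes):
    """XOR equally sized byte strings together; order does not matter."""
    result = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, value in enumerate(stripe):
            result[i] ^= value
    return bytes(result)

# Three data stripes and the parity stripe computed from them.
d0, d1, d2 = b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"
parity = xor_stripes(d0, d1, d2)

# If the disk holding d1 fails, XORing the surviving stripes with the parity
# stripe regenerates the missing data.
recovered = xor_stripes(d0, d2, parity)
assert recovered == d1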

Advantages of RAID-5 Volumes

RAID-5 volumes work well for storing data that will need to be read frequently but written to less frequently and also work well in the following situations:

  • In large query or database mining applications where reads occur much more frequently than writes. Performance degrades as the percentage of write operations increases. Database applications that read randomly work well with the built-in load balancing of a RAID-5 volume.

  • Where a high degree of fault tolerance is required without the additional disk-space cost of a mirrored volume. A RAID-5 volume is significantly more efficient than a mirrored volume when larger numbers of disks are used. The space required for storing the parity information is equivalent to 1/N of the array for N disks, so a 10-disk array uses 1/10 of its capacity for parity information, and the fraction of disk space used for parity decreases as the number of disks in the array increases (see the sketch after this list).
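A minimal sketch of that arithmetic:

# The fraction of a RAID-5 volume devoted to parity is 1/N for N member disks,
# so the usable fraction is (N - 1)/N.
for disks in (3, 6, 10):
    parity_fraction = 1 / disks
    usable_fraction = 1 - parity_fraction
    print(f"{disks} disks: {parity_fraction:.0%} parity, {usable_fraction:.0%} usable")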

Disadvantages of RAID-5 Volumes

RAID-5 volumes are not well suited for most write-intensive workloads because a single write is likely to generate two disk reads (one to read the old data and one to read the old parity information) and two writes (one to update the data and a second to update the parity information).
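The reason only four I/Os are needed (rather than rereading the whole stripe) is that the new parity can be computed from the old data, the old parity, and the new data alone, as in this small Python sketch.

def xor_bytes(a, b):
    """XOR two equally sized byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Read-modify-write of a single data stripe: read old data and old parity,
# then write new data and new parity.
old_data = b"\x10\x20\x30\x40"
old_parity = b"\x55\x66\x77\x88"
new_data = b"\xde\xad\xbe\xef"

# new parity = old parity XOR old data XOR new data
new_parity = xor_bytes(xor_bytes(old_parity, old_data), new_data)
print(new_parity.hex())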

For example, a RAID-5 volume is not well suited for the following situations:

  • For hosting applications that require high-speed data collection. This type of application requires continuous high-speed disk writes, which do not work well with the asymmetrical I/O balance inherent in RAID-5 volumes and the extra I/Os required to write the parity stripe.

  • In transaction-processing database applications in which records are continually updated, such as in financial applications where balances are frequently updated.

If a disk that is part of a RAID-5 volume fails, read operations for data stripes on that disk are substantially slower than for a single disk. The software has to read all of the other disks in the set to calculate the data.

A RAID-5 volume requires more system memory than a mirrored volume. In addition, regenerating a RAID-5 volume negatively impacts performance more than regenerating a mirrored volume does.

Guidelines for Configuring RAID-5 Volumes

When configuring a RAID-5 volume, buy disks based on:

  • Performance. RAID-5 performance improves with each additional disk that you use.

  • Percentage of usable storage. The space lost to parity information decreases with each additional disk.

  • Cost per gigabyte. A RAID-5 volume requires a minimum of three disks. Buying three large disks might cost less per gigabyte, but buying six smaller disks results in better performance and more available disk space because less space is used for parity.

The following table compares two RAID-5 configurations that provide the same disk capacities. The configuration that uses six disks is the more efficient storage solution in terms of capacity and performance.

Comparison of Two RAID-5 Configurations

In the comparison below, the first value applies to a configuration of 3 disks at 36.4 GB each and the second to a configuration of 6 disks at 18.2 GB each.

  • Total capacity: 109.2 GB vs. 109.2 GB

  • Space used for parity: 36.0 GB vs. 18.6 GB

  • Available disk space: 73.2 GB vs. 90.6 GB

Follow these guidelines for configuring RAID-5 volumes:

  • Do not configure your system volume or your boot volume on a RAID-5 volume. In addition, keep the RAID-5 volume on a different controller and disk than your system and boot volume. Using separate controllers improves performance and can accelerate recovery from hardware failures.

  • Do not put the paging file on a RAID-5 volume. The paging file does not need to be redundant and can decrease the RAID-5 volume’s performance due to frequent disk-writing operations. Instead, put the paging file on a striped or simple volume.

Fault-Tolerant Hardware and Software

You can create a RAID-5 volume using hardware- or software-based solutions. With hardware-based RAID, an intelligent disk controller handles the creation and regeneration of redundant information on the disks that make up the RAID-5 volume. The Windows Server 2003 family of operating systems provides software-based RAID, where the creation and regeneration of redundant information on the disks in the RAID-5 volume is handled by the Logical Disk Manager (LDM). In either case, data is stored across all members in the disk array.

In general, hardware-based RAID offers performance advantages over software-based RAID because hardware-based RAID incurs no overhead on the system processor. For example, you can improve data throughput significantly by implementing RAID-5 through hardware that does not use system software resources. Read and write performance and total storage size can be further improved by using multiple disk controllers.

Some hardware-based RAID arrays support hot swapping, which enables you to replace a failed disk or controller while the computer is still running Windows Server 2003. Consider the following points when you evaluate a fault-tolerant hardware or software solution:

  • Hardware fault tolerance provides better performance.

  • Hardware fault tolerance offers features such as hot sparing, in which additional disks are attached to the controller and left in standby mode. If a failure occurs, the controller uses one of the spare disks to replace the bad disk.

  • Software fault tolerance is less expensive.

Regardless of whether you implement fault tolerance by using hardware, software, or both, implementing fault tolerance does not reduce the need for backups.