Install Arch Linux with Fake RAID
The aim of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller, so that GRUB can boot Linux and Windows partitions inside the RAID set. When using so-called "fake RAID" or "host RAID", the disks are accessed as /dev/mapper/chipsetName_randomName rather than /dev/sdX.
What is "fake RAID"
From Wikipedia:
- Operating system-based RAID does not always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID is expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early boot, the RAID is implemented by the firmware; once a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over the RAID.
- These controllers are described by their manufacturers as RAID controllers, but it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's CPU rather than by the RAID controller itself, an overhead from which hardware RAID does not suffer. Firmware controllers can often only use certain types of hard drives (e.g. SATA for Intel Matrix RAID, as there is neither PATA nor SCSI support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Because before their introduction a "RAID controller" implied that the controller did the processing, this new type became known in technically knowledgeable circles as "fake RAID", even though the RAID itself is implemented correctly. Adaptec calls them "host RAID". (Wikipedia:RAID)
See Wikipedia:RAID or FakeRaidHowto @ Community Ubuntu Documentation for more information.
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead. Compared with mdraid (pure Linux software RAID), dmraid offers the benefit that, after a drive failure, the drive can be completely rebuilt before the system is ever rebooted.
History
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). In Linux 2.6 the device-mapper framework can, among other things such as LVM and EVMS, do the same work as ATARAID did in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application, so with device-mapper, RAID detection happens in userspace.
Heinz Mauelshagen created the dmraid tool to detect RAID sets and create mappings for them. The supported hardware is fake RAID IDE/SATA controllers with BIOS functions, commonly: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
Backup
Outline
- Preparation
- Boot the installer
- Load dmraid
- Perform a traditional installation
- Install GRUB
Preparation
- Open the needed guides (e.g. the Installation guide) on another machine. If there is no other machine available, print them out.
- Download the latest Arch Linux install image.
- Back up all important files, since everything on the target partitions will be destroyed.
Configure RAID sets
- Enter the BIOS setup and enable the RAID controller.
- The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; make sure "RAID" is selected.
- Save and exit the BIOS setup. During boot, enter the RAID setup utility.
- The RAID setup utility is usually reachable via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
- Use the RAID setup utility to create your preferred stripe and/or mirror sets.
Boot the installer
See Installation guide#Pre-installation for details. Use the vga=795 boot option to get a higher framebuffer resolution.
Load dmraid
Load device-mapper and scan for RAID sets:
# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
Example output:
/dev/mapper/control            <- created by device-mapper; if it exists, device-mapper is working
/dev/mapper/sil_aiageicechah   <- a RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- the first partition on this RAID set
If there is only one file (/dev/mapper/control), check with lsmod whether the controller chipset module is loaded. If it is loaded, then either dmraid does not support this chipset or there are no RAID sets on the system (check the RAID BIOS setup again). If everything is in order, then you may be stuck with software RAID (which means you cannot have a dual-boot RAID system).
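For example, with the Silicon Image controller used above, the check might look like this (the module name is chipset-specific):
# lsmod | grep sata_sil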
If your chipset module is not loaded, load it, e.g.:
# modprobe sata_sil
See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers.
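For example, to list the ATA drivers shipped with the running kernel:
# ls /lib/modules/`uname -r`/kernel/drivers/ata/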
Test the RAID sets:
# dmraid -tay
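As an additional sanity check, dmraid can also display the properties of the discovered sets:
# dmraid -s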
Perform a traditional installation
Switch to tty2 and start the installer:
# /arch/setup
Partition the RAID set
- Under Prepare Hard Drive, choose Manually partition hard drives, since the Auto-prepare option will not find your RAID set.
- Choose OTHER and type in the full path of your RAID set (e.g. /dev/mapper/sil_aiageicechah). Switch to tty1 to check your typing.
- Partition the RAID set as you would a normal hard drive (see the example below).
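For example, using the Silicon Image set from the earlier output (the set name is illustrative):
# cfdisk /dev/mapper/sil_aiageicechah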
Mount the filesystems
If, as is likely, the newly created partitions are not found under Manually configure block devices, filesystems and mountpoints:
- Switch to tty1.
- Remove all device-mapper nodes:
# dmsetup remove_all
- Re-activate the newly created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
- Switch back to tty2 and re-enter the Manually configure block devices, filesystems and mountpoints menu; the partitions should now be available.
Install and configure Arch Linux
- tty1: chroot and grub installation
- tty2: /arch/setup
- tty3: cfdisk as a reference when typing in partitions
Switch to the installer (tty2) and continue:
- Select Packages
- Make sure dmraid is marked for installation
- Configure System
- Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored array, also add dm_mirror.
- If needed, also add your chipset driver module (chipset_module_driver) to the MODULES line.
- Add dmraid to the HOOKS line in mkinitcpio.conf; it can go after sata but before filesystems. (A sketch of the resulting lines follows this list.)
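A minimal sketch of the resulting mkinitcpio.conf lines, assuming a Silicon Image chipset (sata_sil), a mirrored array, and the installer's default hook list; adjust the module names to your hardware:
# /etc/mkinitcpio.conf (excerpt; sata_sil and dm_mirror are examples for this setup)
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"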
Install GRUB
GRUB's menu.lst allows the default entry to be the one saved with the savedefault command; if you are using dmraid, do not use savedefault or your array will de-sync and will not let you boot your system. Please read GRUB for more information about configuring GRUB. Installation is begun by selecting Install Bootloader from the Arch installer.
menu.lst will likely be incorrectly populated when installing via fake RAID. Double-check the root lines (e.g. root (hd0,0)).
Additionally, if you did not create a separate /boot partition, ensure the kernel/initrd paths are correct (e.g. /boot/vmlinuz-linux and /boot/initramfs-linux.img instead of /vmlinuz-linux and /initramfs-linux.img).
For example, if you created logical partitions (creating the equivalent of sda5, sda6, sda7, etc.) that were mapped as:
/dev/mapper       | Linux partition | GRUB partition number
nvidia_fffadgic5  | /               | 4
nvidia_fffadgic6  | /boot           | 5
nvidia_fffadgic7  | /home           | 6
The correct root designation would be (hd0,5) in this example.
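Putting the example together, a minimal menu.lst entry for the layout above might look as follows (title and kernel parameters are illustrative; with a separate /boot partition, the kernel and initrd paths are relative to that partition):
title  Arch Linux (fake RAID)
root   (hd0,5)
kernel /vmlinuz-linux root=/dev/mapper/nvidia_fffadgic5 ro
initrd /initramfs-linux.img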
If you created a separate /boot partition, designate that /boot partition as the GRUB root. In the example above, if nvidia_fffadgic were the second dmraid array you were installing to, your root designation would be root (hd1,5).
After saving the configuration file, the GRUB installer will FAIL. However it will still copy files to /boot. DO NOT GIVE UP AND REBOOT -- just follow the directions below:
- Switch to tty1 and chroot into our installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
- Switch to tty3 and look up the geometry of the RAID set. In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk with your RAID set as the first argument (e.g. cfdisk /dev/mapper/nvidia_fffadgic):
- The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order.
- Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
- Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
- GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB:
- Switch to tty1, the chrooted environment.
- Install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S
Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk).
If geometry is entered properly, GRUB will list partitions found on this RAID set. You can confirm that grub is using the correct geometry and verify the proper grub root device to boot from by using the grub find command. If you have created a separate boot partition, then search for /grub/stage1 with find. If you have no separate boot partition, then search /boot/grub/stage1 with find. Examples:
grub> find /grub/stage1       # use when you have a separate boot partition
grub> find /boot/grub/stage1  # use when you have no separate boot partition
GRUB will report the proper device to designate as the GRUB root below (e.g. (hd0,0), (hd0,4), etc.). Then, continue to install the bootloader into the Master Boot Record, changing "hd0" to "hd1" if required.
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
The problem is that GRUB still uses an older detection algorithm, and is looking for /dev/mapper/raidSet1 instead of /dev/mapper/raidSetp1. The workaround is to symlink /dev/mapper/raidSetp1 to /dev/mapper/raidSet1 (changing the partition number as needed). The simplest way to accomplish this is to:
# cd /dev/mapper
# for i in raidSetp*; do ln -s $i ${i/p/}; done
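To confirm the links were created as expected (raidSet is the illustrative name used above):
# ls -l /dev/mapper/ | grep raidSet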
Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the /boot/grub/device.map file to help GRUB retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic
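With the map in place, later GRUB shell sessions can be pointed at it explicitly, mirroring the earlier invocation that used /dev/null:
# grub --device-map=/boot/grub/device.map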
And now you are finished with the installation!
# reboot
Troubleshooting
Booting with degraded array
One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility.
Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:
- Edit the kernel line from the GRUB menu
  - Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1)
  - Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array (an example of the edited line follows this list)
- Boot the system
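For reference, an edited kernel line for the first disk of a degraded mirror might look like this (device name and kernel path are illustrative):
kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid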