I had one of these chips on my mainboard, the ABIT KR7A-133R. It is a four-drive IDE controller. RAID is implemented in expansion firmware stored on the system BIOS EEPROM. As the RAID logic is in the firmware and not in the chip itself, kernels such as Linux that access the hardware directly cannot use this code and need to do software RAID instead.
Firstly I have Linux built with an initrd, with hpt366 built as a module (hpt366.ko) and a copy of that module in the initrd.
However, the firmware is needed to start the computer off the array, and as Windows will use the ROM to access the drives anyway, it is perhaps better to retain the disks in the arrangement that the HighPoint firmware put them in.
I select Mirror+Stripe with a 64k block size in the RAID firmware. In order to determine the block arrangement on the drives we need to write some recognisable data to the array, so I fetch the HighPoint proprietary drivers (version 2.0) and compile them as follows.
The build produces an error in entry.c (expect problems with proprietary drivers like this), so near the top of entry.c I add `#define scsi_to_pci_dma_dir(scsi_dir) ((int)(scsi_dir))`, which is the definition from drivers/scsi/scsi.h in Linux 2.6.5.
I then run make RR1520=0. To install, assuming you installed this kernel from a binary .deb in /usr/src and still have that .deb, do:
```sh
# overwrite inbuilt driver with the proprietary test driver
cp hpt37x2.ko /lib/modules/`uname -r`/kernel/drivers/ide/pci/hpt366.ko
# copy it into the initrd
mkinitrd -o /boot/initrd.img-`uname -r`
```
When restarted you have the proprietary driver, and after loading the SCSI disk module you can use the array via /dev/sda. You can also experiment with the user programs they supply, such as hptsvr and hptraidconf (again proprietary and without source code, unfortunately).
We are really only interested in discovering the disk layout, so run hexedit /dev/sda. This lets you write data via the proprietary driver; later, when the free driver is loaded, you can examine the effect on the individual disks that form the array, giving a good clue for writing your own rules for Linux software RAID via dmraid.
As we selected a 64k block size, 64 * 1024 = 0x10000. Go to that address in hexedit and write some words around that mark that you could easily recognise or search for in hexedit later. I suggest making 0x10000 the first character of an English word, and 0xFFFF the last character of another English word.
0x0FFFC | 0x0FFFD | 0x0FFFE | 0x0FFFF | 0x10000 | 0x10001 | 0x10002 | 0x10003 |
---|---|---|---|---|---|---|---|
F | R | E | E | S | O | F | T |
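If you prefer scripting it, the same marker trick can be done with dd and grep instead of hexedit. This is a hedged sketch: a scratch file stands in for /dev/sda here, so point DISK at the real array device when doing it for real.

```shell
# DISK stands in for /dev/sda in this sketch
DISK=$(mktemp)
dd if=/dev/zero of="$DISK" bs=64k count=2 2>/dev/null

# plant 'FREE' ending at 0xFFFF and 'SOFT' starting at 0x10000
printf 'FREESOFT' | dd of="$DISK" bs=1 seek=$((0x10000 - 4)) conv=notrunc 2>/dev/null

# later, search for the pieces; -b prints the byte offset of each match
grep -abo 'SOFT' "$DISK"
```

Running grep -abo against /dev/hde through /dev/hdh after reverting to the free driver then shows which drive, and at what offset, each piece of the marker landed.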
Do a similar thing around the 0x20000 mark, then re-install your kernel from your binary .deb, reverting to the free hpt driver, and restart the computer. You can now run hexedit on /dev/hde through /dev/hdh and look around for those words, or pieces thereof.
It seemed to reveal the following:

/dev/hde mirrors /dev/hdg (except the bootsector and the proprietary bit) and /dev/hdf mirrors /dev/hdh (except the proprietary bit).
The array as a whole (/dev/sda under the proprietary driver):

0x00000 - 0x0FFFF | Block 0 |
---|---|
0x10000 - 0x1FFFF | Block 1 |
0x20000 - 0x2FFFF | Block 2 |
0x30000 - 0x3FFFF | Block 3 (and so on...) |
Which consists of the following. On /dev/hde (and its mirror /dev/hdg):
0x00000 - 0x0FFFF | Block 0 |
---|---|
0x10000 - 0x1FFFF | Block 2 |
0x20000 - 0x2FFFF | Block 4 |
0x30000 - 0x3FFFF | Block 6 (and so on...) |
And on /dev/hdf (and its mirror /dev/hdh):

0x00000 - 0x013FF | see below |
---|---|
0x01400 - 0x113FF | Block 1 |
0x11400 - 0x213FF | Block 3 |
0x21400 - 0x313FF | Block 5 |
0x31400 - 0x413FF | Block 7 (and so on...) |
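The tables above can be summarised as a small calculation. The following sketch assumes exactly the layout just described (64k stripes, even blocks on /dev/hde from offset 0, odd blocks on /dev/hdf after the 0x1400-byte hpt area) and maps an array byte offset to a drive and physical offset:

```shell
#!/bin/sh
# map an array byte offset to drive:offset, per the layout above
array_to_disk() {
    addr=$1
    block=$((addr / 0x10000))     # which 64k stripe block
    within=$((addr % 0x10000))    # offset inside that block
    if [ $((block % 2)) -eq 0 ]; then
        # even blocks live back-to-back on /dev/hde (mirrored on hdg)
        printf 'hde:0x%X\n' $(( block / 2 * 0x10000 + within ))
    else
        # odd blocks live on /dev/hdf (mirrored on hdh),
        # shifted by the 0x1400-byte hpt area
        printf 'hdf:0x%X\n' $(( block / 2 * 0x10000 + within + 0x1400 ))
    fi
}
array_to_disk $((0x10000))   # start of Block 1 -> hdf:0x1400
array_to_disk $((0x20000))   # start of Block 2 -> hde:0x10000
```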
The bytes 0x1220 - 0x13FF appear to be used on all the drives by the hpt firmware to store some info about the RAID layout in use. Of course this region falls in Block 0, so consequently it also appears on /dev/sda when the proprietary drivers are in use. I think it is checksummed to detect tampering, but I don't think the chip/firmware checks that other areas of the device are correct, which is just as well if using free software RAID.
Also, 0x0000 - 0x01FF on /dev/hde can be used for a rootsector in the usual way; it appears the firmware has written a dummy one here for your enjoyment.
I'm assuming an existing Debian install somewhere on /dev/hda...
Firstly determine which HighPoint floppy drivers are needed; they do need to correspond with the firmware version on the EEPROM. Save these drivers on a floppy.
For example, I find the file bios372.232 inside the file kr7a_cx.bin, and the md5sum of this file matches release 2.32 from HighPoint (372-v232.zip/BIOS/bios372.232), confirming it is the same software. This can be verified by unpacking the BIOS components with lharc, then running md5sum on the HighPoint software contained therein.
Unplug any hard disks attached to IDE1/IDE2 on the mainboard and connect a CD-ROM drive there, keeping the floppy drive connected. Start the computer, disable APIC (XP's installer likes to crash with it on :-P) and make the boot order the CD-ROM first, then the HPT chip, in the BIOS. In the HighPoint BIOS, configure a Mirror+Stripe array with a 64k stripe using all 4 attached disks. Mark the available device for booting; the other drives are shown as Hidden in the hpt BIOS.
Exit the hpt BIOS and the computer then starts from the XP CD. Windows XP asks for RAID drivers: press F6 promptly when prompted, then provide the drivers on floppy. I chose the Windows XP version.
It's best to let it partition the array from empty. A 32769 MB partition seems to be the maximum it allows for FAT32. We don't want NTFS, as writing to it from free operating systems risks corruption. HighPoint recommended that hard disks on IDE sockets 1-2 not be connected at this time, as XP insists on writing the bootloader to those drives instead of the fakeraid array if they are connected.
Once Windows is installed, install all components, then the Recovery Console, then secure the computer.
Later, you can shut down the computer and reattach your usual drives to IDE1/2 on the mainboard. When the computer starts again, choose your non-RAID drive and boot into your existing Debian install.
```sh
apt-get install dmraid
/etc/init.d/dmraid start
```
You can make an ext3, ReiserFS or similar partition as a second primary partition (after the FAT32 partition) using cfdisk /dev/mapper/hpt37x_abcdefghij. Then run /etc/init.d/dmraid restart to have it reload the partition tables.
Initialise the filesystem in this new partition (for ext3 shown here) and copy the grub files onto it, creating the mount point and boot directory first:

```sh
mke2fs -j /dev/mapper/hpt37x_abcdefghij2
mkdir -p /fakeraid
mount -t ext3 /dev/mapper/hpt37x_abcdefghij2 /fakeraid
mkdir -p /fakeraid/boot
cp -a /boot/grub /fakeraid/boot/
umount /fakeraid
/etc/init.d/dmraid stop
```
You may want to copy the rest of your non-RAID drive to the RAID array at this point, though only the /boot/grub directory is required for now.
Firstly it's recommended to back up the hpt information block and the rootsector from each disk. The information block is the only thing that differs between drives that are otherwise mirrors of each other.
```sh
#!/bin/sh
dd count=1 if=/dev/hde of=hderootsect
dd count=1 if=/dev/hdg of=hdgrootsect
dd skip=9 count=1 if=/dev/hde of=hdehpt
dd skip=9 count=1 if=/dev/hdf of=hdfhpt
dd skip=9 count=1 if=/dev/hdg of=hdghpt
dd skip=9 count=1 if=/dev/hdh of=hdhhpt
```
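Should the firmware later complain, the saved blocks can be written back with dd. This is a hedged sketch using scratch files in place of /dev/hde and the backup above; note seek= on the output side (the backup used skip= on the input side) and conv=notrunc so the rest of the device is left alone:

```shell
DEV=$(mktemp)      # stand-in for /dev/hde
BAK=$(mktemp)      # stand-in for the saved hdehpt block
dd if=/dev/zero of="$DEV" bs=512 count=16 2>/dev/null
printf 'HPTINFO' > "$BAK"

# restore the saved sector to sector 9 of the device
dd seek=9 if="$BAK" of="$DEV" conv=notrunc 2>/dev/null

# read sector 9 back to confirm it took
dd skip=9 count=1 if="$DEV" 2>/dev/null | head -c 7
```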
Now write the stage1.5, starting at exactly 0x1400 bytes into the drives containing the first 64k block of the array; this places it immediately after the hpt information block. The stage1.5 needs to match the filesystem in use, so for ext2/ext3 use e2fs_stage1_5, for ReiserFS use reiserfs_stage1_5, etc...
```sh
dd seek=10 if=/boot/grub/e2fs_stage1_5 of=/dev/hde
dd seek=10 if=/boot/grub/e2fs_stage1_5 of=/dev/hdg
```
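As a sanity check on the numbers: dd's default block size is 512 bytes, so seek=10 lands the stage1.5 at byte 10 * 512 = 0x1400, immediately after the hpt information block (which per above ends at 0x13FF):

```shell
printf '0x%X\n' $((10 * 512))   # byte offset that seek=10 produces
```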
Now we need to exit Linux and start grub again from the BIOS, as the BIOS does the stripe translation that grub will use to find stage2 and its menu.lst, and eventually to load Linux and its initrd from the partition (or indeed Mach/HURD with its modules). Running grub while Linux is resident in memory prevents it from using the BIOS routines, and consequently it sees the raw drives rather than the fakeraid array.
You can use find to determine which (hd?) entry corresponds to the array. If there is one other hard disk attached then that will normally be (hd0) when you have started from it, making the array (hd1). XP will be on (hd1,0) and so your ext2 filesystem will be found on (hd1,1).
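For example, from the grub shell (a sketch: find lists every partition containing the named file, so both your non-raid disk's /boot/grub and the array's copy appear; the (hd0,0) line here is hypothetical and depends on where your non-raid /boot lives):

```
grub> find /boot/grub/stage2
 (hd0,0)
 (hd1,1)
```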
```
grub
root (hd1,1)
install /boot/grub/stage1 (hd1) (hd1)10+19 p /boot/grub/stage2 /boot/grub/menu.lst
```
However, when the computer is configured to start from the fakeraid array, the BIOS reports it as (hd0), so /fakeraid/boot/grub/menu.lst on the array needs to be configured accordingly. Your other disk may then be reported as (hd1), and its existing entries can be adjusted to use a root device of (hd1).
The fakeraid array can now be configured as the start device, and it's now possible to start into XP on raid, or Debian from the non-raid disk.
To finish off, here's how to boot from your fakeraid array without any help from /dev/hda-hdd. Place the following in /etc/mkinitrd/scripts/dmraid.

Note this is only valid for Linux up to 2.6.12: devfs was removed afterwards, and a different means of running dmraid from the initrd is then needed.
```sh
#!/bin/sh
# This is /etc/mkinitrd/scripts/dmraid
#
# Derived from /usr/share/initrd-tools/scripts/e2fsprogs
#
# program to mount hpt raid device

cp /sbin/dmraid $INITRDDIR/sbin

# we need some linux modules...
mkdir -p $INITRDDIR$MODULEDIR/kernel/drivers/md/
cp $MODULEDIR/kernel/drivers/md/dm-mod.ko $INITRDDIR$MODULEDIR/kernel/drivers/md/dm-mod.ko
cp $MODULEDIR/kernel/drivers/md/dm-mirror.ko $INITRDDIR$MODULEDIR/kernel/drivers/md/dm-mirror.ko

# dmraid likes to open up the traditional device nodes
cp -a /dev/hde $INITRDDIR/dev/hde
cp -a /dev/hdf $INITRDDIR/dev/hdf
cp -a /dev/hdg $INITRDDIR/dev/hdg
cp -a /dev/hdh $INITRDDIR/dev/hdh

# run dmraid before mounting root
cat <<EOF > $INITRDDIR/scripts/dmraid.sh
#!/bin/sh
cd /
mount -nt proc proc proc
mount -nt sysfs sysfs sys
mount -nt devfs devfs devfs
modprobe dm-mirror
dmraid --activate yes --ignorelocking --verbose
umount -n devfs
umount -n sys
umount -n proc
EOF
chmod a+x $INITRDDIR/scripts/dmraid.sh

# for debugging...
#cp $INITRDDIR/sbin/init /tmp/init
#head -n -4 /tmp/init > $INITRDDIR/sbin/init
#echo /bin/echo "End of init" >> $INITRDDIR/sbin/init
#echo /bin/dash >> $INITRDDIR/sbin/init
#tail -4 /tmp/init >> $INITRDDIR/sbin/init

case "$VERSION" in
    2.4.*)
        LD_ASSUME_KERNEL=2.4
        export LD_ASSUME_KERNEL
        ;;
esac

PROGS="/sbin/dmraid"
LIBS=`ldd $PROGS | grep -v linux-gate.so | sort -u | \
    awk '{print $3}'`
for i in $LIBS
do
    mkdir -p `dirname $INITRDDIR/$i`
    cp $i $INITRDDIR/$i
done
```
At this point you'll want to have copied your operating system fully onto the fakeraid partition, if you haven't already done so.
You now need to regenerate the initrd on the fakeraid partition, e.g. with mkinitrd -o /fakeraid/boot/initrd.img-`uname -r`, and edit /fakeraid/etc/fstab to tell it what the root filesystem device is called.
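A hedged example of the corresponding root entry in /fakeraid/etc/fstab, using the mapper name from earlier (yours will differ):

```
# <file system>                <mount point> <type> <options>                  <dump> <pass>
/dev/mapper/hpt37x_abcdefghij2 /             ext3   defaults,errors=remount-ro 0      1
```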
Also a typical entry in /fakeraid/boot/grub/menu.lst could now look like this:

```
title Debian GNU/Linux, RAID array
root (hd0,1)
kernel /boot/vmlinuz-2.6.12 root=/dev/mapper/hpt37x_abcdefghij2 video=radeonfb:1024x768-32@60,panel_yres:768 noapic resume2=swap:/dev/hda2 fbcon=font:SUN12x22
initrd /boot/initrd.img-2.6.12
savedefault
boot
```
Now when RAID is selected in the main Award BIOS, it loads grub from the rootsector and you can pick XP or a Debian kernel. XP chainloads the XP loader from its bootsector. Selecting Debian loads Linux and its initrd into memory using BIOS calls, which handle the striping as per the firmware settings. In the initrd we set up dmraid and use it to mount the real root device.
update-grub, and perhaps many other programs, require that the root filesystem device node is present in the filesystem. You'll want the other device nodes available anyway, to be able to mount the filesystems on them.
You may well have udev managing /dev by now. udev only seems to create the mapper devices when the array is activated, and that happened inside the initrd, so they've disappeared by now. They can be faked as follows; alternatively devfsd can manage /devfs even if it is no longer used for /dev, with /etc/fstab entries pointed there as needed.
```sh
NAME=`dmraid -s | grep name | tr -s " " | cut -d" " -f3`
cd /dev/mapper/
# view of first pair of mirror slices
mknod $NAME-1 b 254 0
# view of second pair of mirror slices
mknod $NAME-2 b 254 1
# raid view of entire array. Point cfdisk here to add/remove partitions.
mknod $NAME b 254 2
# 1st primary partition on array, you may mount this
mknod ${NAME}1 b 254 3
# 2nd primary partition on array, you may mount this
mknod ${NAME}2 b 254 4
# 3rd primary partition on array, you may mount this
mknod ${NAME}3 b 254 5
# and so on...
```