Trying to create a quick howto to implement a bootable raid partition in Vector Linux. Starting by copying some of the resources from RAID and then condensing them.


Progress

I made some progress, including learning some things about the Vector install process -- I should make a note of those on a different page, but I'll just jot some notes here randomly for now:

  • I may have come to a point of diminishing (personal) returns on this effort (at least for the near term) -- I think I can set up RAID on a system with all the right infrastructure: a kernel (2.4.x or above) with RAID support and the raidtools installed (including the /dev/md devices). Vector uses a good kernel but does not include the raidtools (or the /dev/md devices).

  • The Vector install scripts are in /usr/bin (IIRC) in the RAM image mounted during the install. The main script is setup. Most of the files in that directory that are not links are readable (i.e., plain ASCII text).

  • Linux is one of the interesting scripts -- it (un)tars Vector Linux from the installation CD (mounted as ??) to whatever is mounted at /mnt/partition(s?).

  • Also, browsing through the scripts, I can see where dann and linc apparently modified some portions.

  • So, if I proceeded further, at some point I'd need to update the fstab at /mnt/partition/etc (and add the raidtab, and so forth).

  • I'm getting mixed signals on whether you can boot from a RAID array or not -- older information says no, more recent information says only from RAID-1 (i.e., /boot must be on a RAID-1 device), but with the new (in 2.4) autorecognize feature, I don't see why booting could not occur from any RAID device that is autorecognized. (Maybe only RAID-1 devices are autorecognized??)

  • At some point soon, I could also make the ...Reminder page, and convert this to a ...Beginner page (maybe)

Procedure and Notes

  • For the easiest approach, use a 2.4 kernel with RAID support compiled in. You also need the raidtools (compiled), which allow you to "initialize, start, stop, and control your RAID volumes" and include the /dev/md devices. For now, I'm assuming this is already the case in Vector 3.0 and up. If not, this article gives some hints. Unless otherwise noted, all quotes on this page come from that article, part 1 or part 2.
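
If raidtools turns out not to be packaged for Vector, a generic from-source build is one way out. This is only a sketch -- I'm assuming the usual configure/make layout of the raidtools tarball, and the version number below is just an example, not necessarily the one you should grab:

# tar xzf raidtools-1.00.3.tar.gz
# cd raidtools-1.00.3
# ./configure
# make
# make install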

  • To see the several levels of RAID, and the advantages and limitations of RAID see some of the references on RAID. Don't let anyone tell you RAID eliminates the need for backups or prevents data corruption.

  • After you are done, you will be able to address the RAID array as a device like /dev/md0. (The array might consist of hda3, hdb3, and hdc3, but that won't matter for normal user interaction -- if you have to recover from a fault you may have to get into those details and will need the /etc/raidtab file that was used to initialize the RAID array.)

  • This brief howto is aimed at creating a linear addressing array or RAID-0, as I am trying to combine two small disks into one disk large enough to install Vector CFK Linux -- my target is about 1.3 GB which includes 200 MB of swap. See Software RAID in the new Linux 2.4 kernel -- Part I of this article has a pretty good explanation of Linear Mode and RAID-0 -- it does not provide much discussion of the other modes up through RAID 10. Quoting from that article:

on Linear Mode (see the article for other stuff including limitations)

Linear mode is one of the simplest methods of combining two or more block devices into a "RAID" volume -- the method of simple concatenation. If you have three partitions, hda3, hdb3, and hdc3, and each is about 2Gb, they will create a resultant linear volume of 6Gb. The first third of the linear volume will reside on hda3, the last third on hdc3, and the middle third on hdb3.

To configure a linear volume, you'll need at least two partitions that you'd like to join together. They can be different sizes, and they can even all reside on the same physical disk without negatively affecting performance.

on RAID-0 Mode (see the article for other stuff including limitations)

RAID-0 is another one of those "RAID" modes that doesn't have any "R" (redundancy) at all. Nevertheless, RAID-0 is immensely useful. This is primarily because it offers the highest performance potential of any form of RAID.

To set up a RAID-0 volume you'll need two or more equally (or almost equally) sized partitions. The RAID-0 code will evenly distribute writes (and thus reads) between all constituent partitions. And by parallelizing reads and writes between all constituent devices, RAID-0 has the benefit of multiplying IO performance. Ignoring the complexities of controller and bus bandwidth, you can expect a RAID-0 volume composed of two partitions on two separate identical disks to offer nearly double the performance of a traditional partition. Crank your RAID-0 volume up to three disks, and performance will nearly triple. This is why a RAID-0 array of IDE disks can outperform the fastest SCSI or FC-AL drive on the market. For truly blistering performance, you can set up a bunch of SCSI or FC-AL drives in a RAID-0 array. That's the beauty of RAID-0.

  • Linear mode can use partitions anywhere, including on the same disk. To take advantage of RAID-0, partitions should be approximately equal size and located on different disks and, ideally, different controllers.

  • # cat /proc/mdstat will provide information about your current RAID array and give you an indication as to whether this kernel includes RAID support.

  • To see if raidtools has been installed:
    • look for the /dev/md devices -- md0 thru md15, although you don't necessarily need all of them -- just one for each RAID array you plan to set up
    • try running mkraid with no arguments -- if you get "command not found" you have a problem (a quick check combining both is sketched below)
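
Putting those two checks together (nothing Vector-specific here; the device names are just the standard md entries):

# ls -l /dev/md*
# mkraid

No /dev/md entries from the first command, or "command not found" from the second, means the raidtools and/or the devices still need to be installed.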

  • Create the partitions you intend to include in the RAID array with partition type "fd" (Linux RAID autodetect)
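
With fdisk, that amounts to roughly the following on each disk (shown for /dev/hde only as an example -- substitute your own disks and partition numbers):

# fdisk /dev/hde
(inside fdisk: n to create the partition, t to set its type to "fd", w to write the table and quit)
# fdisk -l /dev/hde
(the new partition should now show Id "fd", "Linux raid autodetect")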

  • Setup file /etc/raidtab similar to the following:

  • For RAID-0

I've seen comments at the beginning of a raidtab file; I'm not sure whether comments are allowed at the end of active lines, as in the example below.

raiddev /dev/md0  # you can choose any of /dev/md0 through /dev/md15
        raid-level      0  # "linear" or 0 thru 10 (not sure all are valid)
        nr-raid-disks      2  # number of disks in the array
        persistent-superblock   1  # 1 allows Linux to set up the array at boot time
        chunk-size      32  # size of the chunks, see the article
        device         /dev/hde1 # list each partition in the array
        raid-disk      0         # and number it, starting at zero
        device         /dev/hdg1 # next device ...
        raid-disk      1    

  • For Linear Mode

           
raiddev /dev/md1
        raid-level      linear
        nr-raid-disks      2
        persistent-superblock   1
        chunk-size      32
        device         /dev/hde2
        raid-disk      0
        device         /dev/hdg2
        raid-disk      1               

  • Note on "chunk size":

"chunk-size" specifies the granularity of the chunks used for RAID-0 in kilobytes. In this example, our RAID-0 volume will write to its constituent partitions in 32K blocks; that is, the first 32K of the RAID volume maps to hde1, the second 32K maps to hdg1, etc. We also specify a chunk size for our /dev/md1 linear volume -- this is just a dummy entry and doesn't mean anything.

  • Do a one-time initialization of the array:

# mkraid /dev/md0
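
One caution from memory (check the mkraid man page for the exact wording): if mkraid sees what looks like existing data or an old superblock on one of the partitions, it refuses to run, and the raidtools I have used wanted an explicit override along these lines before they would overwrite anything:

# mkraid --really-force /dev/md0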

  • You may want to do:

# cat /proc/mdstat -- you should see something like this:

Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid0 hdg1[1] hde1[0]
      90069632 blocks 32k chunks
 
unused devices: <none>

  • If things look good, create a filesystem on the device:

# mke2fs /dev/md0

(or whatever -- for Vector we have been using ext3, so then do)

# tune2fs -j /dev/md0

(or, just do mke2fs -j /dev/md0 instead of the two previous steps)

  • Mount the filesystem

# mkdir /mnt/raid
# mount /dev/md0 /mnt/raid

This did not work for me because I did not have the /dev/md0 device.
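
For what it's worth, the md devices are ordinary block devices with major number 9 and minors 0 through 15, so if a MAKEDEV script isn't handy they can presumably be created by hand (a generic Linux recipe, not something I have verified on the Vector install image):

# mknod /dev/md0 b 9 0
# mknod /dev/md1 b 9 1
(and so on, up through minor 15 if you want all of md0 thru md15)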

  • I plan to mount it as / or /usr, so

# mount /dev/md0 /

Oops, I'm not sure when to do this during the install process -- something to puzzle over. I now have some clues -- see #Progress.

  • We may need / want to add to fstab:

/dev/md0      /mnt/raid      reiserfs      defaults        0 0
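
Since we have been using ext3 for Vector rather than reiserfs, the corresponding entry would presumably be the same line with a different filesystem type (and eventually the real mount point instead of /mnt/raid):

/dev/md0      /mnt/raid      ext3      defaults        0 0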

The following (among other things) makes me believe we should be able to install Vector on the RAID array and boot from it -- my problem is understanding the installation process enough to know when / how to mount the RAID array so that installation occurs there. I'll probably try by mounting immediately after the disk partitioning procedure in the Vector install, and see what happens, then post to the list and ask for help. Nope, I'll probably reboot once first, so the new partitions are "recognized". (As is often the case, viewing dmesg might be helpful.)

If you set the partition type correctly to "FD", your RAID volume will be auto-started at boot time. Now all that's left to do is use and enjoy your new Software RAID volume. And (of course) catch my second Software RAID article, in which we'll take a look at some more advanced Software RAID functionality and RAID-1.

Everyone should read the original article, parts 1 and 2, for information on performance, limitations, and recovering a RAID array (which we should ignore for CFK -- if one of the disks fails, we take both out, discard the dead one, and install a new single disk or pair of RAID disks). (There may be a few things to quibble with in the article -- in one place the author says the probability of failure of a RAID array is the sum of the probabilities of failure for each disk in the array; more precisely, the probability of the array not failing is the product of the per-disk survival probabilities, (1-P1)(1-P2)..., so the failure probability is 1 minus that, which is roughly the sum when the individual probabilities are small. Also, it appears that, at the time he wrote the article, you could not boot from a RAID array; lots of things lead me to believe that is no longer correct.) Also, see RAID for the feedback I sent to the author, which includes two possibilities for data corruption which he does not discuss in his article (they are discussed in the Software RAID HOWTO, around chapter 5, IIRC).

Just one more quote (I think):

"A lot of people think that if they place all their important data on a RAID 1/4/5 volume, then they won't have to perform regular backups. This is completely false -- here's why." Read the article.

I believe everything he says in the article (or at least the vast majority of it) but my goal in using RAID-0 or Linear Mode is taking two otherwise useless disks and making use of them. The slight increase in probability of failure is not of concern here -- only if we start to see a significant increase in problems (returns) with the two disk set vs. a single disk will I begin to think this approach is a mistake.

New Plan

Without rereading the above, I want to quickly make note of my tentative plan for my next attempt:

This was rather quick, and I may have overlooked something -- but maybe there is a chance this will work, perhaps with minor modifications. (The roadblocks in the previous plan included the difficulty of getting the raid tools for Vector (don't see a package) on both the install disk and the install "image" without making modifications to the scripts and the install CD.)

  • Create a master Vector hard drive for cloning, install it as, for example, hdc on a "test machine"
  • Boot from a rescue disk that supports raid
  • Mount the master hard drive (/mnt/temp??) and copy the md devices to /mnt/temp/dev.
  • (As a further test, but not essential to this first test), copy the raidtools to the equivalent location on /mnt/temp -- later see if they work, but it is not essential that they do.
  • Create the appropriate /etc/raidtab in the ramdisk filesystem and copy it to /mnt/temp/etc/raidtab (not absolutely essential, but helpful)
  • Create the appropriate /etc/fstab in the ramdisk filesystem and copy it to /mnt/temp/etc/fstab (not needed so much on the rescue system, but very helpful (I think) for the final system)
  • Run mkraid while in the rescue system
  • Consider making and formatting the filesystem while in the rescue system (although I think you can accomplish the same thing during the Vector install or disk cloning process)
  • At this point, you could do the Vector install (reboot) or do the cloning process -- before wasting too much time, I should reboot and confirm that at least the Vector install finds and mounts the RAID array -- oops, maybe it can't without the md devices -- maybe just do the cloning process. (A rough command sketch of these steps follows below.)
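
Very roughly, the rescue-disk session above might look like the following. The device name for the master drive's root partition (hdc1) and the mount point (/mnt/temp) are just examples, and the raidtab/fstab are the ones created on the ramdisk as described above:

# mkdir /mnt/temp
# mount /dev/hdc1 /mnt/temp
# cp -a /dev/md* /mnt/temp/dev/
# cp /etc/raidtab /mnt/temp/etc/raidtab
# cp /etc/fstab /mnt/temp/etc/fstab
# mkraid /dev/md0
# mke2fs -j /dev/md0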

UPDATE: I tried the above today -- made some progress but ultimately failed because mke2fs would not complete "Writing superblock and filesystem accounting info." Could it be as simple as a bad hard drive? Guess I'll try again, ideally with two known good hard drives. Other notes:

  • The Linuxcare Rescue Disks (versions 1.5 and 2.0) and the PartImage CD (with Speak Image) did not help me for various reasons. (IIRC, neither Linuxcare CD had the md devices, neither let me mount a floppy, and 1.5 seemed to have errors on the CD.) PartImage would not start even with 80 MB of RAM -- it got to the "Freeing unused kernel memory: 256 K Freed" line and stopped doing anything.

  • Knoppix 3.1 became my rescue disk. I could not do the mke2fs successfully in Knoppix or in the Vector install boot/root -- Knoppix complained about DMA errors, Vector started but never finished. On bootup, Vector did recognize the RAID array.

  • Tried a second time with two different disks, same result. Then added the -R stride=8 parameter -- still no good. (Stride should equal the chunk size (32k) divided by the block size (4k) = 8 for my case.)
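
In other words, for this setup the command I was attempting was along these lines (the -b 4096 is only there to pin the 4k block size that the stride calculation assumes; with it, stride = 32k / 4k = 8):

# mke2fs -j -b 4096 -R stride=8 /dev/md0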

References on the stride parameter:

Contributors

  • () RandyKramer - 13 Mar 2003
  • StuartBoreen - 13 Mar 2003
  • <If you edit this page: add your name here; move this to the next line; and include your comment marker (initials), if you have created one, in parenthesis before your WikiName.>
