RAID has been around for over 15 years. Why use RAID? For me, the reasons are redundancy and reliability. I don’t like disk failures. By running RAID, a disk failure will not take down my system; it still runs after a disk fails. When a disk does fail, I still have my system, and I can find another drive, add it to the system, and be ready for the next failure. Read More at ONLamp.
Seems like a good howto, but the images they show are of IDE drives. I still wouldn't use IDE for RAID. While it's a good home project, never for production. I wish he had compiled a list of RAID controllers, though. Mainly because I am lazy and it would be nice to have them in front of me.
On the whole, ONLamp has excellent material about BSD that is interesting not only to the BSD user, but to the general Unix/Unix-like user as well.
For FreeBSD, I’d say they’re the second most precious resource, after the Handbook.
http://www.onlamp.com/pub/q/all_bsd_articles
Could you tell me why you wouldn’t use IDE? And no, saying “SCSI is better” is not an answer.
Doesn't even Google rely on cheap and redundant IDE solutions? (I think I read that somewhere.)
I mean, SCSI is nice for specific applications, but often IDE RAIDs offer more or less the same security at a much lower price tag.
Why do you need a whole article on implementing _hardware_ RAID? I mean, that's one of the main reasons you use hardware RAID in the first place: so you don't have to worry about "implementing" it in the OS.
SCSI is 3-4x faster than IDE. We used to have a department server with a 3Ware IDE RAID 5. It was cheap, worked well, was stable, etc., but when the department grew past 20-25 users, the system (P4 2.8 GHz w/1 GB RAM) crawled to a snail's pace. We changed to an LSI 320UW MegaRAID RAID 5 and it's still running fast at around 65 users. IDE is good for a small department or where the app is mostly read-oriented, but if it's write intensive, stick with SCSI (or have very tolerant users). Also, the IDE RAID controllers I found (3Ware, Promise, Seagate) didn't have the flexibility in adding/adjusting/deleting partitions/volumes that SCSI RAID controllers do (which means adding space can require a complete reinitialization with the IDE controllers).
The actual RAID implementation is a relatively small part of the article; a better summary would be "How to set up a hardware RAID and migrate a FreeBSD install onto it".
The article is nice, but how about using vinum(8) or software RAID in a production environment? I've been using vinum RAID 1 and it seems stable so far.
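(To give an idea of what that looks like: a minimal vinum mirror sketch, roughly following the vinum(8) and Handbook examples. The device names and the 512m size are placeholders; adjust them to your disks.)

    drive d0 device /dev/ad0s1e
    drive d1 device /dev/ad1s1e
    volume mirror
      plex org concat
        sd length 512m drive d0
      plex org concat
        sd length 512m drive d1

Save that to a file, then something like:

    vinum create /etc/vinum.conf   # read the config and create the volume
    newfs /dev/vinum/mirror        # older releases wanted newfs -v here

After that, /dev/vinum/mirror mounts like any other filesystem.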
Doesn't even Google rely on cheap and redundant IDE solutions?
Yes, but Google’s entire architecture is built with the understanding that failures are frequent and common. Thus, their data is duplicated many times over. That’s not really comparable to the average server and its data.
SCSI can be many times faster than IDE and also allows many more devices to be put on a single channel. SCSI also supports external devices and true hot plugging. SCSI can also support devices IDE simply cannot, such as scanners, so if you have a SCSI scanner, SCSI disks would be an obvious choice. SCSI drives have also classically been more reliable, and although that is not necessarily true now, if you purchased several SCSI devices back when it was, you can still use them as long as your SCSI bus is compatible with the device.
Personally I don't run SCSI for my RAID, but I am considering replacing my primary disk with a SCSI solution, because it's just too slow to compete with the IDE RAID. Unless you have a ridiculous amount of money to spend on storage, I would generally suggest using SCSI only if you need a lot of speed (and in that case, run 64-bit PCI; otherwise, the second you leave the SCSI bus you are limited to IDE-level transfer rates anyway). If you need a lot of storage space, go IDE: IDE can be as fast as SCSI, and if you don't have a lot of money it can actually be faster in a RAID setup, and per GB IDE is a LOT cheaper than SCSI.
For VonSkippy:
Also, the IDE RAID controllers I found (3Ware, Promise, Seagate) didn't have the flexibility in adding/adjusting/deleting partitions/volumes that SCSI RAID controllers do (which means adding space can require a complete reinitialization with the IDE controllers).
This is certainly true with most IDE RAID controllers, and it also holds for the cheaper SCSI controllers. I'm not sure about the IDE controllers, but I know the more expensive SATA RAID controllers give you the features you are talking about. Generally, though, if you are running an IDE RAID 5 you should be running software RAID unless you really know why you are running hardware RAID. Software RAID in Linux with EVMS and LVM allows exactly the features you are saying the SCSI controllers have, plus some features your SCSI controller probably does not have.
I'm running Linux, but as an example, I am running 3 IDE disks on an HPT 4-channel IDE RAID controller (not running in "hardware" RAID mode) with a 350 MB swap partition on each, and the rest of each drive (199 GB) as an LVM physical volume. I then run it through the EVMS RAID 5 plugin into a single logical volume, which supports adding and removing disks as well as snapshotting. I know hdparm isn't the greatest benchmark, but in this setup I achieve 75 MB/s, and writes are at least twice as fast as my primary IDE drive (remember, writes aren't only slower on IDE, but RAID 5 itself makes writes slower).
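(For reference, the kind of quick hdparm check being described: -t times buffered device reads and -T times cached reads. The EVMS volume path below is just an example name.)

    hdparm -tT /dev/evms/raid5vol   # run it a few times and average the numbers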
As for FreeBSD, I'm not entirely sure if it can use the device mapper/LVM/EVMS, so it might not support the dynamic resize features. But most people don't really get any benefit out of that anyway, and those who do would probably have to go for a pricier solution on FreeBSD no matter what bus it is on, so SCSI becomes a more viable option there.
Maybe this article should be followed up with the ways of doing software RAID in FreeBSD. I would very much like to see if/how they implement something like EVMS as well as standard md RAID, and how it compares to the hardware solutions.
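(For the Linux md side of that comparison, a minimal sketch, assuming three IDE disks with one partition each; the device names are only examples.)

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hde1 /dev/hdg1 /dev/hdi1
    cat /proc/mdstat   # watch the initial resync progress

Once the array is built, /dev/md0 can be used directly or handed to LVM as a physical volume.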
*THE* advantage of SCSI, and it's a significant one at least in the usage patterns of the *nix world, is TCQ. (SATA II, the follow-on to the SATA 1.0 spec, has NCQ, which is similar, though not quite as robust/sophisticated, but less complex.)
TCQ is just a cool thing SCSI drives can do, but in reality, don't most modern OSes attempt to do the same thing when writing to a disk anyway? I could be wrong here, but I thought I heard something about this. I think the main speed advantage has a lot more to do with the RPMs the drives run at and the bus itself. I'm pretty sure hdparm is a sequential read test, and most SCSI disks beat out IDE disks on it by about 50%, so even though TCQ is a feature, I think when most people decide to go out and buy SCSI they are not doing it solely for TCQ.