Bugs are a common fixture with WindowsXP, but why not make the OS work when it can? See how to tap WindowsXP for sophisticated RAID functions with only a few modifications.
Femme Taken wrote some information about this last year on his website: http://www.tweakers.net/reviews/373/1
http://gathering.tweakers.net/forum/list_messages/733248/
http://balusc.xs4all.nl/ned/twe-wrh.html
It's all in Dutch, though, but I cannot believe Uncle Tom was unaware of this.
You'd think that they would just enable software RAID in the OS by default. Although I can understand the benefits of software RAID (especially for RAID 5), software RAID on average benefits home users who don't have the money to purchase high-performance RAID cards. That would suggest that RAID should be enabled in the home versions of XP as well as the server versions. Another must for software RAID is the ability to modify RAID chunk sizes, which is a feature of Linux's software RAID as well as many high-end hardware RAID cards. Changing the cluster size to match usage patterns and the file sizes of the data stored on the array can make a more than notable difference in performance.
As for the tests, I was a little disappointed. There were only I/O tests, no read-only or write-only tests, and RAID 5 really should have much faster reads than writes, so it would have been interesting to see. It would also be nice to see a test with a tweaked and an untweaked Linux software RAID 5 to see how they compare.
Also, personally I wouldn't trust an ugly hack like this with a couple hundred gigs of data. Any time you start replacing text strings in DLLs to unlock features, you can't expect a great deal of stability.
Even if it is only RAID 1, it's still better than the pain that is software RAID on Windows. RAID 1 cards are cheap (around $25) and plentiful, and chances are your motherboard already supports it. For home users this is usually more than good enough, and a 'hacked' software RAID 5 is good for nothing more than curiosity.
And what happens when Windows automatic updates is turned on and one of these DLLs gets changed? Exactly…
Quote from Evert: And what happens when Windows automatic updates is turned on and one of these DLLs gets changed? Exactly…
—————————————-
Just change them back. It can't do any harm, because Windows _cannot_ be installed on a RAID 5 array, and because the halves of a broken RAID 1 array will simply keep working independently. So the OS will boot, and you'll be able to change the files back.
I tried out RAID 5 on a Win2k server: 4 Ultra 100 IDE drives, all on separate channels. The performance was horrible. It was way slower than a single drive by itself. It was better to just use RAID 1. I ended up switching to Linux for RAID and LVM. BTW, the same RAID 5 setup with Linux had three to four times the drive speed of the Windows setup. I have heard, although I can't find a source, that Windows' RAID 5 implementation is fundamentally broken. I demoed a RAID 5 solution from Iomega that used Windows software RAID and it suffered from the same slowness.
TG
Software RAID arrays have always been in the "neat hack" category, most of them pretty stable and inventive on Linux, Unix, etc., but professional applications have always relied on hardware RAID.
I could come up with too many snarky comments regarding Windows and software reliability.
Try to find a hardware RAID card for any professional Unix system (you won't).
All of the big iron servers use software striping.
Software RAID arrays have always been in the "neat hack" category, most of them pretty stable and inventive on Linux, Unix, etc., but professional applications have always relied on hardware RAID.
Software RAID 5 is not a neat hack. In many cases software RAID 5 actually gives better performance than hardware RAID 5, depending on the system setup.
There are also problems with hardware RAID 5 that do not occur with software RAID. Consider that you have a 4-year-old $500 RAID controller in your machine and lots of important data spread across the array. Now your RAID card just died, and the card is no longer manufactured or the card company went under. Good luck getting the data back. With software RAID you can simply move the drives to any other controller and you are fine.
Also, hardware RAID can be much more constricting. Hardware RAID is not aware of disks on a per-partition basis, so you can only RAID whole disks, not partitions. Take for instance my software RAID setup: 3 200GB drives, each partitioned into a very large ReiserFS partition and a very small swap partition. My ReiserFS partitions are RAID 5'd together for data reliability as well as read speed. My swap partitions are RAID 0'd together for pure speed. If a disk goes out I can just turn off swap, replace the disk, reformat the swap, let the RAID 5 rebuild, and turn swap back on. This can't be done on a hardware RAID array.
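For anyone wanting to reproduce that kind of layout on Linux, here is a minimal sketch using mdadm. The device names (sda1/sdb1/sdc1 for the large partitions, sda2/sdb2/sdc2 for the small ones) are just examples and will differ on your system:

  # RAID 5 across the three large partitions, RAID 0 across the three small ones
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
  mkreiserfs /dev/md0                   # filesystem on the data array
  mkswap /dev/md1 && swapon /dev/md1    # swap on the striped array

After replacing a failed disk you would add its new partition back with something like "mdadm /dev/md0 --add /dev/sdb1" and let the RAID 5 rebuild.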
Also, many hardware RAID cards don't let the cluster size per volume be changed, and as mentioned above this can make a significant difference in performance.
To the guy with the $25 RAID card: your card is probably software RAID (it has a driver so that you don't have to set it up like software RAID, but it uses your computer's CPU, not the RAID card's). Of the hardware, software, and hybrid setups, the hybrid you're using is definitely the worst for both performance and reliability.
Doesn’t this violate the MS EULA for Windows XP?
“You may install, use, access, display and run one copy of the Product on a single computer”
Where does it say you can modify?
I didn't know I could purchase RAID cards for my laptop.
No? Try a USB/Firewire RAID chassis. Or a PCMCIA RAID card. Works for me. Use the right tools for the job.
As for the lack of hardware RAID for 'pro Unix' – surely you jest. Most are external chassis with onboard RAID 5 and FC/USCSI/whatever connectivity.
My swap partitions are RAID 0'd together for pure speed.
You’d almost certainly get better performance by using 3 separate swap partitions and letting the memory manager deal with them.
RAID 0 won’t improve random access speed, which is what you really want to improve for swapping.
You'd think that they would just enable software RAID in the OS by default. Although I can understand the benefits of software RAID (especially for RAID 5), software RAID on average benefits home users who don't have the money to purchase high-performance RAID cards.
Home users typically have single disk systems. Or *maybe* the original system drive and another drive used as an upgrade (of a very different size).
That would suggest that RAID should be enabled in the home versions of XP as well as the server versions. Another must for software RAID is the ability to modify RAID chunk sizes, which is a feature of Linux's software RAID as well as many high-end hardware RAID cards.
Using software RAID – particularly RAID 5 – is well out of the ballpark of “typical user”. *Tuning stripe sizes* isn’t even playing the same game.
You’d almost certainly get better performance by using 3 separate swap partitions and letting the memory manager deal with them.
RAID 0 won’t improve random access speed, which is what you really want to improve for swapping.
That's certainly something to look into. Generally I don't touch swap (I have gigs of memory), so I really don't know. Mainly I was describing my setup to show some of the things you can do with software RAID that you can't do with hardware RAID, but in this case I might not have made a great decision about when to use said features. I'll look into it; thanks for the pointer.
You're posting at the same time I am, so on to the next comments.
I know a lot of non-technical users who would like to implement various levels of RAID, and if they could do so easily without extra expense they would. I'm not talking Joe User here, but your average power user. (Mainly what I was trying to say is that RAID 0 or 1 is rarely done in software on the server and is more useful to non-server users.)
For the chunk sizes I certainly wasn’t referring to home users, but just in general it’s a feature that should be there. For software RAID 5 on the server it’s a pretty important feature. It’s also probably going to make a large difference in benchmarks if you can configure chunk sizes to match or be optimized for the type of tests you are doing.
Are there RAID 5 capable mobos or RAID 5 PCI-X cards?
What you perceive as "hardware" RAID 0/1, only because it's a piece of hardware, is in fact also software RAID. Except for offering the extra slots, these cheap cards don't do the RAID calculations; the CPU does them, and that's why they are called software RAID as well. A "real" hardware RAID controller is expensive because it's got a RAID chip doing the RAID calculations on the card.
So much for your cheap RAID.
using the Linux kernel’s software RAID drivers versus the onboard IDE RAID chip functions bundled onto various mobos.
If I've been paying attention, it's been said that the kernel's own software RAID drivers are quite simply going to be faster (though I gather it's not so simple as /dev/md0 these days either, what with LVM, EVMS, the legacy drivers, and more available; what's a fella to use?), but WHY is this?
Surely the actual RAID calculations in BOTH cases are going to be done on the system’s own CPU, since in BOTH cases there is “real” dedicated RAID board for offloading them onto.
So what does that all mean? Simply that the actual in-kernel software RAID drivers are faster/better than the vendor-written ones?
I of course meant to say:
since in BOTH cases there is NO “real” dedicated RAID board
–Amazing what a difference one word makes
Here's your clue – try searching eBay for cheap cards (and be sure to filter out those Promise and other software-assist cards) before you open your yap again.
Actually it can be as simple as /dev/md0 and in my setup it is. Software RAID isn’t always faster than hardware RAID or a better solution, but in the case of RAID 5, software RAID makes a very compelling case.
From my understanding, in true hardware RAID 5 the XOR operations to calculate parity are actually done on the RAID card, which means that if you upgrade your CPU, you really don't upgrade the performance of the RAID. That is arguably good or bad depending on your setup and opinion.
One of the other things that makes software RAID a good choice is that your data is not tied to a single card. So if your IDE card blows out, you're not out of data while waiting for a replacement, assuming you can find enough spare IDE channels. Also, with hardware RAID there's a chance that a card that blew out is no longer made, and newer cards from the same company might not be made or might not be compatible with your RAID volumes. The cause is that the way volumes are created in hardware RAID is not standardized, so they aren't compatible between manufacturers, and many times not even between cards from the same manufacturer.
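The mechanism behind that portability, at least with Linux software RAID, is that the array metadata lives on the member disks themselves, so after moving the drives to any controller the array can usually be restarted by scanning for it. A rough sketch, assuming mdadm is in use:

  mdadm --assemble --scan    # assemble arrays described by the on-disk metadata
  cat /proc/mdstat           # verify the array came back up

The exact steps depend on your distribution and RAID tools.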
Software RAID allows RAID per partition, which means that you could have 3 drives and actually have 1 RAID 5, 1 RAID 0, a RAID 1, and an extra partition, or other possibilities like this. I've never seen a hardware RAID that can do this, and I think it might be because it would need to be aware of the partition types being used in some cases. Even just knowing Windows' and Linux's partitions at that kind of level would be a lot to expect and would lead to the card having to be upgraded frequently.
Either way, for some reason hardware RAID cards don't support this, while software RAID easily can.
The stripe or cluster size, when configured properly for the way the system is used, can lead to faster performance. If you have many small files that you access parts of all the time, then you can make the cluster or chunk size smaller so you only get the data you need. If you only have 800MB videos on the volume, you can make the stripe size bigger to suit those kinds of files. It's not something that everyone needs, but in some cases it can free up a lot of I/O bandwidth. Some hardware RAIDs can do this, some can't, and those that can are usually quite restrictive about what chunk sizes you can use. Software RAID in Linux allows just about whatever size you want as the stripe size.
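With Linux software RAID the chunk size is picked when the array is created. A hypothetical example with mdadm, using a 128K chunk for a volume that mostly holds large media files (the device names are placeholders):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=128 /dev/sda1 /dev/sdb1 /dev/sdc1

The --chunk value is given in kibibytes, so 128 here means a 128K chunk.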
Unless a true hardware RAID card is being used, the XOR calculations for parity and other calculations are done on the host CPU. We are talking about all the Highpoint and Promise cards here. They all use your CPU for the calculations; they are hybrids, they have all the disadvantages of both software and hardware arrays, and they are just generally poor performers compared to either. 3ware, on the other hand, makes true hardware IDE arrays, but for a 2-channel PATA card (only one disk allowed per channel, a 3ware thing) you'll probably spend over $100, and that's the bottom-of-the-barrel card.
Hardware RAID does have its advantages if you have enough money. For example, a very high-end RAID 5 controller will allow hot-plug expansion with new disks and will dynamically include them in the RAID 5 array. I've not seen a way to do this on a software RAID array, but the hardware cards that can do it are so ridiculously expensive that you're better off buying all the drives you need up front and not needing this feature in the first place.
So in general the features of software RAID are almost always better. Software RAID is very affordable compared to any hardware RAID card worth its salt. Software RAID is more reliable. Performance can be better or worse on either type of setup depending on the system's configuration and use. Hybrid cards are the worst of all worlds, so if you wanna pay for one of them… don't, and go with software RAID for free with better performance.
There are some other weird tricks software RAID allows that you could use but wouldn't be advised to: for example, you could make a RAID array of 2 IDE disks and 1 SATA, or 2 SCSI and 1 IDE. Not that it would be smart, but there's no way to do that on a hardware array, and you can do it in a software array.
Too much typing, and there are still more points on each side, but this will hopefully suffice as an answer to your question.
Are there RAID 5 capable mobos or RAID 5 PCI-X cards?
Both, but a RAID 5 mobo doesn't make a lot of sense, because RAID 5 usually has poorer write performance than even a single hard disk, so it would not be wise to install your OS on it. If you're not going to install your OS on it, why have it on board? RAID 5 PCI-X cards do exist, but I've never used one and couldn't tell you the advantages/disadvantages.
I didn't catch what OS you are using, but if it is Linux, the kernel will stripe (interleave) the swap across the drives by default if the following conditions are met:
1. The drives are not on the same physical port. It doesn't matter if they are master or slave as long as they are not on the same cable, so to speak.
2. The priority of the drives is set to the same value in /etc/fstab (see more about that at http://nalle.no/newnalle.php ; a sample entry is sketched below).
So with Linux, you simply don't need to set up striping; it'll interleave the swap anyway (and I'd think this method would make the swap even faster).
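A sample /etc/fstab for that situation, assuming three swap partitions on three different drives (the device names are only examples); what matters is that the pri= values are equal, so the kernel interleaves between them:

  /dev/sda2  none  swap  sw,pri=1  0  0
  /dev/sdb2  none  swap  sw,pri=1  0  0
  /dev/sdc2  none  swap  sw,pri=1  0  0

You can check the result with "swapon -s", which lists each active swap area and its priority.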
I was told this earlier in the forums and I planned on researching it but didn't have the time. So I'm going to put it back the way it should be. As I said earlier though, it doesn't take away from the fact that there are good uses for RAID based on partitions rather than whole disks.
Thanks for the explanation.
Mainly I was describing my setup to show some of the things you can do with software RAID that you can't do with hardware RAID, but in this case I might not have made a great decision about when to use said features. I'll look into it; thanks for the pointer.
I’ve seen many hardware RAID controllers that allow multiple arrays on the same physical drives. It’s not particularly common, however, because:
a) there’s generally little real benefit to doing so; and
b) the differing demands/disk loading of the different types of RAID tend to conflict and reduce performance for everything.
I know a lot of non technical users who would like to implement various levels of raid and if they could do so easily without extra expenses they would. I’m not talking joe user here, but you’re average power user. (mainly what I was trying to say is that raid 0 or 1 is rarely done in software mode on the server and is more usefull to nonserver users)
The overhead of software RAID0 or RAID1 (or even software RAID5 these days) is insignificant (RAID5 somewhat less so, obviously, but any remotely modern CPU is more than fast enough to perform parity calculations with little meaningful impact on performance).
For the chunk sizes I certainly wasn’t referring to home users, but just in general it’s a feature that should be there. For software RAID 5 on the server it’s a pretty important feature. It’s also probably going to make a large difference in benchmarks if you can configure chunk sizes to match or be optimized for the type of tests you are doing.
In benchmarks, maybe, but in real-life performance the difference between various chunk sizes generally doesn't matter. Most servers are not used solely for files of very similar average sizes, but for a mix (and the bottleneck is almost always the network anyway).
Certainly, there are corner cases where optimising the stripe size is well worth it, giving substantial _actual_ improvements. However, in general it's worth neither the time nor the effort.
From my understanding, in true hardware RAID 5 the XOR operations to calculate parity are actually done on the RAID card, which means that if you upgrade your CPU, you really don't upgrade the performance of the RAID. That is arguably good or bad depending on your setup and opinion.
The bottleneck on any remotely modern system is not the CPU. Even a Pentium 2 has checksumming speeds well and truly into the hundreds of MB/s. In short, upgrading your CPU – unless the system is incredibly heavily loaded all the time – is not going to give any speed boost to software RAID5.
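For anyone wondering what those parity calculations actually are: with, say, three data blocks per stripe, the parity is simply P = D1 xor D2 xor D3, and if the disk holding D2 dies, its contents are rebuilt as D2 = D1 xor D3 xor P. It's just XORs over the stripe, which is why even an old CPU can keep up with the disks.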
One of the other things that makes software RAID a good choice is that your data is not tied to a single card. So if your IDE card blows out, you're not out of data while waiting for a replacement, assuming you can find enough spare IDE channels. Also, with hardware RAID there's a chance that a card that blew out is no longer made, and newer cards from the same company might not be made or might not be compatible with your RAID volumes. The cause is that the way volumes are created in hardware RAID is not standardized, so they aren't compatible between manufacturers, and many times not even between cards from the same manufacturer.
This is probably the biggest issue with hardware RAID, but it's very overblown. It's usually only a problem for people using bottom-of-the-barrel hardware RAID controllers. Most manufacturers keep their products on the market for some time, and spares for even longer – in general, by the time your hardware RAID controller karks it, the machine will be obsolete anyway, so you just replace it and restore the data from backups (you do keep backups, right?).
Software RAID allows RAID per partition, which means that you could have 3 drives and actually have 1 RAID 5, 1 RAID 0, a RAID 1, and an extra partition, or other possibilities like this.
This is a very bad idea, from a performance perspective, due to the way the different RAID systems use disks.
I’ve never seen a hardware RAID that can do this, and I think it might be because it would need to be aware of the partition types being used in some cases.
Hardware RAID cards can do it – although some can’t (because it’s not a feature most people want) – and usually have their own partitioning schemes. Remember, hardware RAID presents each logical RAID volume to the rest of the system as a discrete physical drive.
Unless a true hardware RAID card is being used, the XOR calculations for parity and other calculations are done on the host CPU. We are talking about all the Highpoint and Promise cards here.
Pretty much any card that purports to do RAID5 has an onboard processor (I’m sure there’s one somewhere that doesn’t, but I’ve never seen it).
Highpoint and Promise cards are well and truly at the “cheap & nasty” end of the scale as well.
3ware, on the other hand, makes true hardware IDE arrays, but for a 2-channel PATA card (only one disk allowed per channel, a 3ware thing) […]
This is not a “3ware thing”, it’s hardware-enforced common sense. Putting more than one disk per channel kills performance.
So in general the features of software RAID are almost always better. Software RAID is very affordable compared to any hardware RAID card worth its salt. Software RAID is more reliable. Performance can be better or worse on either type of setup depending on the system's configuration and use. Hybrid cards are the worst of all worlds, so if you wanna pay for one of them… don't, and go with software RAID for free with better performance.
Actually, the biggest problem with the “hybrid” cards is not the principle, but the fact they tend to be very much budget oriented and thus of low quality. The principle is really quite sound, because it gives you probably the single biggest advantage of hardware RAID – the RAID array appearing as a single physical device – while reducing costs by not having onboard processing power (which in most cases today is pointless anyway).
using the Linux kernel’s software RAID drivers versus the onboard IDE RAID chip functions bundled onto various mobos.
The biggest and most important difference is that software RAID requires the OS to be booted to some degree before the RAID volume is accessible, whereas hardware RAID (even pseudo-hardware RAID like that on most motherboards) presents the RAID volume as a physical drive.
These days, this is probably the single biggest advantage of hardware RAID, as it abstracts the underlying disks away (ie: the OS need not know anything about them).
If I've been paying attention, it's been said that the kernel's own software RAID drivers are quite simply going to be faster (though I gather it's not so simple as /dev/md0 these days either, what with LVM, EVMS, the legacy drivers, and more available; what's a fella to use?), but WHY is this?
Surely the actual RAID calculations in BOTH cases are going to be done on the system’s own CPU, since in BOTH cases there is “real” dedicated RAID board for offloading them onto.
Software RAID (or pseudo-hardware RAID) performs any "calculations" using the host system's CPU. *Real* hardware RAID does it with a CPU on the RAID controller (typically an Intel i960 of some sort).
However, these "calculations" are rarely a bottleneck, as any remotely modern CPU has more than enough power to perform them far in excess of any disk speed. Only if the system is extremely heavily loaded (in terms of CPU usage) will there be any meaningful performance impact from software RAID.
So what does that all mean? Simply that the actual in-kernel software RAID drivers are faster/better than the vendor-written ones?
*Real* hardware RAID almost always performs better (mainly because controllers tend to have a couple of hundred MBs of dedicated cache). Hardware RAID also offers the advantage of completely abstracting out the component disk devices.
On the other hand, hardware RAID has the potential to leave your data inaccessible in the case of a physical RAID controller failure. Software RAID also tends to have far more tuning potential – but realistically that’s of little use in typical scenarios.
1. Download a copy of Windows Server 2003
2. Find a corporate key
It's about as legal as hacking DLLs.
1. Download a copy of Windows Server 2003
2. Find a corporate key
It's about as legal as hacking DLLs.
Or just download the eval version and install that – completely legal.