“Red Hat is pleased to announce the availability of the beta release of 5.1 (kernel-2.6.18-36.el5) for the Red Hat Enterprise Linux 5 family of products. Red Hat Enterprise Linux 5.1 is still in development and therefore the contents of the media kit, the implemented features, and the supported configurations are subject to change before the release of the final product.”
Are the ISOs publicly available?
Nope, you have to sign up for the beta program to get them.
Just as a clarification: if you are not already running RHEL5 or CentOS5, you are probably not interested in this beta program anyway. As of 5.0, Red Hat changed its naming convention, and 5.1 is just a periodic (roughly quarterly) update, which is not very exciting at all unless you have mission-critical machines already running RHEL5 and want to help make sure that the minimal changes in 5.1, when it is released, will not cause you problems.
Very boring stuff, really. But every change, however small, has the potential to cause problems, and this program is designed to help address that mundane but important issue.
” and 5.1 is just a periodic (quarterly, sort of) update which is not very exciting at all,”
Except of course:
“Ext3 filesystem now fully supports filesystem sizes of up to 16TB”
…which is pretty darn exciting and important (and oft-requested) to a lot of companies.
Is it? How long does it take to e2fsck an 8TB+ filesystem?
Of course, I suppose that depends upon how many files and directories are on it. But even when the average file size is rather large, I find that a 500GB filesystem takes about 10 minutes. If that scales in proportion to size, an 8TB-16TB fs would mean 3-6 hours of downtime. Ouch!
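Back-of-the-envelope, that scaling works out roughly as follows. This is only a sketch: it assumes e2fsck time scales linearly with raw filesystem size, when in reality it depends far more on inode count and fragmentation.

    # Rough estimate: scale an observed e2fsck time linearly with size.
    # Linear scaling is only a crude approximation of real fsck cost.
    base_size_gb = 500      # observed filesystem size
    base_fsck_min = 10      # observed e2fsck time for it

    for size_tb in (8, 16):
        est_min = base_fsck_min * (size_tb * 1024) / base_size_gb
        print(f"{size_tb} TB: roughly {est_min / 60:.1f} hours")

That prints roughly 2.7 hours for 8TB and 5.5 hours for 16TB, hence the 3-6 hour figure above.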
Ext3 is my hands-down favorite FS for Linux. But for really large filesystems, I’m not sure what the best option would be. XFS would be much faster on the fsck, but data loss is almost guaranteed with it.
ZFS is userspace only. Btrfs is only an option for the future.
Not to mention, how long will it take to run a backup? Days? Naturally you could run multiple jobs at the directory level (or whatever), but this is practically asking for something to be missing when you need it because no one updated the backup config.
Whenever humanly possible, I say “just say no” to filesystems larger than around 500 GB. But of course it is not a perfect world…
I too would be far too nervous to run XFS on anything important, and in general I think extN is the only free Linux filesystem that has enough industry support. It would have been interesting if JFS had gotten more support, but the time for that is long past.
Full backup (needed, but only one time)? Yes.
Incremental (the most obvious way)? Minutes.
This is not my experience, but I have not done it in all imaginable ways. Can you back this up (pun not intended, aargh) by telling me what backup method / software you use, how big your filesystems are, and maybe even the rate of change of the data?
Most of the backup software I use does block-level deltas, so “incremental” backups still take significant time.
Backup Software: Collection of rsync scripts
File Systems: 400 GB
Rate of Change in Data: About 50GB a day, meaning data that is changed, not necessarily added.
Time to perform a backup after the initial one: a few minutes, usually under five.
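For illustration, here is a minimal sketch of the kind of hard-link incremental backup those rsync scripts amount to, using rsync’s --link-dest option. The source path, backup root, and layout are made up for the example, and it assumes the backup area is locally mounted (or reachable over passwordless SSH with minor changes).

    #!/usr/bin/env python
    # Minimal incremental-backup sketch: each run creates a dated snapshot
    # directory in which unchanged files are hard links into the previous
    # snapshot, so only changed data is actually copied.
    import datetime
    import os
    import subprocess

    SRC = "/srv/data/"              # what to back up (hypothetical path)
    DEST_ROOT = "/backups/data"     # where snapshots live (hypothetical path)

    os.makedirs(DEST_ROOT, exist_ok=True)
    dest = os.path.join(DEST_ROOT, datetime.date.today().isoformat())
    latest = os.path.join(DEST_ROOT, "latest")

    cmd = ["rsync", "-a", "--delete"]
    if os.path.isdir(latest):
        # Unchanged files become hard links to the previous snapshot.
        cmd.append("--link-dest=" + latest)
    cmd += [SRC, dest]
    subprocess.run(cmd, check=True)

    # Repoint "latest" at the snapshot we just made.
    if os.path.islink(latest):
        os.unlink(latest)
    os.symlink(dest, latest)

The point is that the second and later runs only move roughly the 50GB a day that actually changed, which is why they finish in minutes.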
The best way to back up large amounts of data is through BCVs / array-based snapshots.
I quiesce the disk and snap or split the BCV, mount the BCV/snap on another box, and run the backup completely outside of the production box. When I am done I either destroy the snap or resync the BCV.
-brian
Very cool, BCV is an EMC thing, right? What I am most interested in is shorter backup windows, meaning less data loss from a failure five minutes before your new set is done. I guess this approach would help if you are processor-bound, but considering that the problem is often I/O, it could make things a little worse: if I understand right, most of the data blocks are still on the production spindles (the main, unavoidable problem), and the array has to add overhead to manage the clone (probably not so bad).
Still, having array or lvm clones is very nice for making sure you are getting everything you are supposed to get.
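For anyone without EMC kit, the same snapshot-then-back-up pattern can be sketched with LVM snapshots. This is only a rough illustration: the volume group, logical volume, snapshot size, mount point, and backup host below are made up, it has to run as root, it needs free extents in the volume group, and you would still want to quiesce the application first.

    #!/usr/bin/env python
    # Rough LVM analogue of the BCV split-and-backup pattern: snapshot the
    # volume, back up from the snapshot instead of the live filesystem,
    # then drop the snapshot (the equivalent of destroying the snap or
    # resyncing the BCV).
    import os
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    VG, LV, SNAP = "vg0", "data", "data_snap"   # hypothetical volume names
    MNT = "/mnt/data_snap"                      # hypothetical mount point

    os.makedirs(MNT, exist_ok=True)
    run("lvcreate", "-s", "-n", SNAP, "-L", "5G", f"/dev/{VG}/{LV}")
    try:
        run("mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MNT)
        try:
            # The backup reads from the frozen snapshot, so the production
            # filesystem keeps serving writes while this runs.
            run("rsync", "-a", MNT + "/", "backuphost:/backups/data/")
        finally:
            run("umount", MNT)
    finally:
        run("lvremove", "-f", f"/dev/{VG}/{SNAP}")

Unlike a BCV split, an LVM snapshot is copy-on-write on the same spindles, so it does not take the I/O load off the production disks; it only gives you a consistent point-in-time image to back up from.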
Who says backup is boring?