“The Unix98 standard requires largefile support, and many of the latest operating systems provide it. However, some systems still choose not to make it the default, resulting in two models: some parts of the system use the traditional 32bit off_t, while others are compiled with a largefile 64bit off_t. Mixing libraries and plugins is not a good idea.” Read the article at Freshmeat. For more interesting technical reading, you will also find “Understanding the Linux Virtual Memory Manager” and “Code Commentary on the Linux Virtual Memory Manager”.
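To see the mixing problem concretely: on a 32-bit system, code built with -D_FILE_OFFSET_BITS=64 sees an 8-byte off_t while code built without it sees a 4-byte one, so any struct or call that passes an off_t across a library/plugin boundary is silently incompatible. A rough sketch (the struct name here is made up just for illustration):

/* layout_demo.c - illustrates why mixing 32-bit and 64-bit off_t breaks ABIs.
 * Build twice on a 32-bit system and compare the output:
 *   cc layout_demo.c -o small_off_t
 *   cc -D_FILE_OFFSET_BITS=64 layout_demo.c -o large_off_t
 */
#include <stdio.h>
#include <sys/types.h>

/* Hypothetical record a library might hand to a plugin. */
struct file_info {
    off_t size;     /* 4 bytes in one build, 8 in the other        */
    int   flags;    /* ends up at a different offset in each build */
};

int main(void)
{
    printf("sizeof(off_t)            = %zu\n", sizeof(off_t));
    printf("sizeof(struct file_info) = %zu\n", sizeof(struct file_info));
    return 0;
}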
I don’t know about the implementation details, but I do know that the filesystem, NFS, and tools such as cpio, tar and ufsdump cope perfectly well with large files. And the reason I know this is that I work daily with monstrously big files.
Large file support was one of the many things we loved about BeOS, and one that many Linux people didn’t seem to value at the time. The penguin has come home to roost.
thasss becuz ya paid a billion dollass for the mainframe
It’s great if you are running a 64bit kernel. Some of us are hamstrung by 32bit Solaris kernels that we don’t have the time to upgrade.
Besides, if gzip barfs on files larger than 2^31 bytes, doesn’t that somewhat limit everyone? (This is still a problem, right? It used to be… or maybe my ex-sysadmin just compiled it against a 32-bit off_t rather than a 64-bit one… I forget now.)
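For what it’s worth, when a tool built without largefile support hits a file past 2^31 bytes, the failure typically shows up as EOVERFLOW from open() or stat(), not anything gzip-specific. A quick sketch to see it for yourself (the file name is just a placeholder):

/* check_stat.c - shows how a non-LFS build trips over a >2 GB file.
 * Compiled WITHOUT -D_FILE_OFFSET_BITS=64 on a 32-bit system, stat()
 * (and open()) fail with errno == EOVERFLOW when the file size does
 * not fit in a 32-bit off_t.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "huge.dat"; /* placeholder name */
    struct stat sb;

    if (stat(path, &sb) == -1) {
        /* On a too-large file this prints "Value too large for defined data type". */
        fprintf(stderr, "stat(%s) failed: %s\n", path, strerror(errno));
        return 1;
    }
    printf("%s is %lld bytes\n", path, (long long)sb.st_size);
    return 0;
}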
>Was one of the many things we loved about BeOS
The default BeOS, with a 1 KB block size, could support a file of up to about 31 GB. To use files bigger than that, you needed to re-install BeOS and tell it to use a bigger block size. And even then, BFS had an issue with fragmentation (yes, BeOS fragments as well – don’t believe the hype) that in some cases wouldn’t even let you reach that 31 GB file size. If you upped the block size to 8192 bytes, you could go up to about half a terabyte per file (again, if the conditions that BFS needs are met).
Learn more about large file support on
http://www.suse.de/~aj/linux_lfs.html
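In practice the page above boils down to compiling with the LFS feature-test macros and using the 64-bit-clean calls. A minimal sketch, assuming you have some multi-gigabyte file to point it at (the file name below is just an example):

/* lfs_seek.c - seek past the 2 GB boundary in a large file.
 * Compile with the flags reported by `getconf LFS_CFLAGS`, typically:
 *   cc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 lfs_seek.c -o lfs_seek
 * With _FILE_OFFSET_BITS=64, off_t is 64 bits wide and fopen/fseeko/ftello
 * transparently map to their 64-bit variants on 32-bit systems.
 */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *fp = fopen("big.iso", "rb");   /* example file name */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    /* A 3 GB offset: only representable if off_t is 64 bits wide. */
    off_t target = (off_t)3 * 1024 * 1024 * 1024;
    if (fseeko(fp, target, SEEK_SET) == -1) {
        perror("fseeko");
        fclose(fp);
        return 1;
    }
    printf("now at offset %lld\n", (long long)ftello(fp));
    fclose(fp);
    return 0;
}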
Looks like Linux is having growing pains. BSD has a much better designed file system and VMM. I use a 486DX100 running FreeBSD 5.0 with all the latest KDE 3.1 and anti-aliased fonts. Because of FreeBSD’s superior VMM, I bet my machine runs circles around any of your crappy Pentium 4’s with Linux.
Yeah, right. Tomorrow I am selling my Athlon with Linux and buying a 486DX100 with FreeBSD 5.0 just to get some more speed.
Why not put up some real benchmarks, FreeBSD 5.0 vs. Linux 2.4.20?
“Looks like Linux is having growing pains. BSD has a much better designed file system and VMM.”
I’d say Linux and FreeBSD are about neck and neck. FreeBSD has its problems and bugs, too. Heck, on my own machine FreeBSD won’t install cleanly if my CD-ROM is an IDE primary slave; it’ll read garbage characters instead of the hard drive name and a garbage number instead of a disk cylinder count. No other OS seems to have this problem. Go figure. It’s one of those ugly, obscure, and hard-to-reproduce bugs that should remind you that the FreeBSD developers are mere mortals, not gods on high making a technically perfect OS.
You are really complaining about silly things: 64-bit Solaris and 64-bit hardware have been out for ages, no exaggeration, and even the cheapest rackmountable Sunfire V100 will support all the 64-bitness you need. As for gzip, or any other tool whose largefile-ness you wonder about, just do a “man largefile” and you’ll get all the ins and outs of what is supported and with which tools. Basically, the whole XPG4-certified toolchain in Solaris supports large files. I went right there to check for gzip… and sure enough… hmm… it ain’t there. But compress, uncompress and zcat are.
So what I mean is, there is no excuse not to have largefile support under Solaris. If you are still running it in 32-bit mode, you really have only yourself to blame.
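And if you are building your own tools, the LFS spec even lets you ask the system which flags to compile with, the programmatic equivalent of running getconf LFS_CFLAGS. A small sketch (guarded, since not every platform defines _CS_LFS_CFLAGS):

/* lfs_flags.c - print the compiler flags needed for large file support,
 * roughly what `getconf LFS_CFLAGS` reports from the shell.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
#ifdef _CS_LFS_CFLAGS
    char buf[256];
    if (confstr(_CS_LFS_CFLAGS, buf, sizeof(buf)) > 0)
        printf("compile with: %s\n", buf);   /* e.g. -D_FILE_OFFSET_BITS=64 */
    else
        printf("no extra flags reported\n");
#else
    printf("_CS_LFS_CFLAGS not defined on this platform\n");
#endif
    return 0;
}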
“Besides, if gzip barfs on files larger than 2^31, doesn’t that somewhat limit everyone? ” As you can see, I didn’t even know whether gzip is largefile aware or not, yet it didn’t limit us one bit, even though we compress large files into large files all the time. It’s just that we use compress to do the job.
Windows is a 32bit system. When you ask the system how big a file is, it returns the size as two 32bit numbers, a low part and a high part. Most programs never look at the high 32 bits and just fail on big files.
Worse are the crappy installers that don’t check the high bits when checking for free disk space. I’ve had programs that needed 32 MB of hard drive space fail to install because I had over 60 GB of free disk space.
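For the record, the fix on the Windows side is to use the calls that hand back full 64-bit values instead of poking at low/high DWORD pairs. A rough sketch using GetFileSizeEx and GetDiskFreeSpaceEx (the file name and drive letter are just examples):

/* win_sizes.c - read file size and free disk space without dropping the
 * high 32 bits. GetFileSizeEx and GetDiskFreeSpaceEx return 64-bit
 * values directly, so there is no low/high pair to mishandle.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Example path: any file on the system will do. */
    HANDLE h = CreateFileA("big.dat", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        LARGE_INTEGER size;
        if (GetFileSizeEx(h, &size))
            printf("file size: %lld bytes\n", (long long)size.QuadPart);
        CloseHandle(h);
    }

    ULARGE_INTEGER freeToCaller, total, totalFree;
    if (GetDiskFreeSpaceExA("C:\\", &freeToCaller, &total, &totalFree))
        printf("free space: %llu bytes\n",
               (unsigned long long)freeToCaller.QuadPart);
    return 0;
}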
I just tried to compress a 3 GB file on my Blade 100, running Solaris 8, and sure enough, gzip and bzip2 (which come with Solaris 8 by default) couldn’t handle it, yet compress had no problem.
If, for any reason, you can’t use compress, you can download a version of bzip2 which has been compiled for the 64-bit kernel from here:
http://www.ibiblio.org/pub/packages/solaris/sparc/
I just installed it and tested it (if you do that, remember that the original bzip2 will not be replaced and will still be the one found first in your PATH, so make sure you explicitly run /usr/local/bin/bzip2).
What kind of a situation requires a 31+ Gig file? Honest question here.
Desktop use:
– video editing
– backups of your disks
Server use:
– database tablespaces (even if on raw devices)
– some nifty disk-copying procedures/tricks
Workstation/“brainiac” computers:
– complex simulations may require a large working set
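And if you just want to see how your own filesystem and tools behave with a file that size without burning real disk space, a sparse file does the trick. A quick sketch, assuming an LFS-enabled build (the output name is just an example):

/* make_sparse.c - create a sparse 33 GB test file so you can check how
 * your filesystem and tools (tar, gzip, cpio, ...) cope with files past
 * the 2 GB / 31 GB marks without actually using that much disk space.
 * Build with large file support, e.g.:
 *   cc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 make_sparse.c -o make_sparse
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    const off_t size = (off_t)33 * 1024 * 1024 * 1024;  /* 33 GB */

    int fd = open("sparse_test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Seek past the end and write one byte; everything in between is a
     * hole, so only a few blocks of real disk space are consumed. */
    if (lseek(fd, size - 1, SEEK_SET) == -1 || write(fd, "", 1) != 1) {
        perror("lseek/write");
        close(fd);
        return 1;
    }
    close(fd);
    printf("created sparse_test.dat, nominal size %lld bytes\n",
           (long long)size);
    return 0;
}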