In this article we’ll examine the range of choices available for Linux network file systems. While the choices are many, we’ll see that Linux still faces significant innovation challenges; yesterday’s network paradigm isn’t necessarily the best approach to the network of tomorrow.
They need to implement the FreeBSD network stack. It would be a great improvement and would have no licensing issues.
Just use Plan 9 FS.
They didn’t even touch the big kid on the block: OpenAFS. It’s a DFS (distributed file system), so I’ll give them the benefit of the doubt and assume that’s why they didn’t consider it. But it would satisfy their requirements a lot more easily than NFS or Samba.
Authentication and security: Kerberos or AFS tokens
Administration: comes with backup tools; all files live under one cell, so you don’t have to keep track of which server has which files; ACLs are NetWare-like rather than POSIX-like
Synchronization: is that really part of a filesystem’s job? Why not use something already built for that task, like rsync?
See http://www.openafs.org for details.
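On the synchronization point, a minimal rsync sketch looks like this (all paths are made up for illustration, and the remote form in the trailing comment assumes a hypothetical server):

```shell
# One-way synchronisation with rsync; the /tmp paths are hypothetical.
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dst
echo "report draft" > /tmp/rsync-demo/src/notes.txt

# -a (archive mode) preserves permissions and timestamps;
# the trailing slash on src/ means "copy the contents, not the directory".
rsync -a /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/

# The same syntax works over ssh to a remote machine, e.g.:
#   rsync -a /tmp/rsync-demo/src/ user@fileserver:/backup/src/
```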
Just use Plan 9 FS.
I have to agree on that. Someone should take a good look at it; they did some real thinking.
The nicest thing is of course the fs model there: everything is a file, and it makes no difference where the files are (i.e. you can share devices just like any other file, and use a device on another box transparently).
The article just appears to be trying to sell Coda.
They skipped over talking about SFS, CFS, AFS, and a few others.
For anyone who hasn’t heard of SFS before, here’s a link…
http://www.fs.net/sfswww/
I rather like NFS, regardless of the criticism it receives.
“of tomorrow”: fad word identified, credibility down to 0.
Proceeding to read anyway:
“singel” misspelled in the first paragraph
He seems to say:
NFS is suited to the LAN, but not to the WAN (internetworks, the Internet). Then he calls internetworks today’s network paradigm, even though that has been true ever since ARPANET. I suppose it’s more visible today, as people want to access anything from everywhere.
I’m not sure how lovely AFS is, but I know my university uses it for about everything (all home directories are AFS mounts, for over 20,000 students and I have no clue how many faculty). It’s run through Kerberos somehow, and seems to work OK once you get Kerberos working (a total pain in the butt).
I’d like to see some ssh mods to let you easily copy files off a computer you are already sshed into; that’d just about suit all my likes.
“I’d like to see some ssh mods to let you easily copy files off a computer you are already sshed into; that’d just about suit all my likes.”
Already built into ssh; it’s called scp.
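For example (the host and paths here are hypothetical):

```shell
# Pull a file from a remote machine to the current directory:
scp user@remotehost:/path/to/file .

# Push a whole directory tree the other way:
scp -r ./project user@remotehost:/home/user/
```

scp reuses ssh for transport, so the same keys and passwords apply.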
“They need to implement the FreeBSD network stack. It would be a great improvement and would have no licensing issues.”
Yeah… because implementing the FreeBSD network stack will so completely give them a really good network file system.
“Yeah… because implementing the FreeBSD network stack will so completely give them a really good network file system.”
Heh sorry, I saw “Linux needs better network” and then posted without reading the article.
What is needed is something AFS-like but with lighter-weight requirements than a working Kerberos infrastructure… that’s a little much for home users, small businesses, etc.
Well, Kerberos isn’t a requirement for AFS; it can have its own user database and token server. But I agree it does require a lot of setup.
On KDE I remember using scp quite transparently. What’s wrong with that?
I agree with the other people on the Plan 9 subject. The “future of networking” for other OSes will be copying Plan 9-like features, IMHO; the way Plan 9 does “remote graphics”, by exposing the interface as a file and then importing it wherever you want, is just the way to go.
But as many people say, “been there, done that”. Plan 9 is there and it works; current mainstream OSes are stuck in ’70s concepts. The “future of networking” isn’t Plan 9; rather, that’s the future for today’s mainstream operating systems. For Plan 9 and other such systems the future lies beyond that, I guess, since they’re in the “future” today.
“They need to implement the FreeBSD network stack. It would be a great improvement and would have no licensing issues.”
I don’t see how that would help for a network filesystem, as FreeBSD is based on the same concepts as Linux (Unix). They’re just two implementations of the same thing.
…Windows’ offline files feature, which keeps a local cache of files on a share and synchronises them whenever the network is available.
_Very_ useful for corporate laptop users.
I am an advocate of using what exists. Improve what exists.
I know I’m going to be stoned to death for this, but I am quite happy using NFS, and having given version 4 a go, if you’ve got *NIX machines, and willing to install the free NFS client provided by Microsoft onto Windows machines, I really can’t see a reason for using SMB.
CUPS for printing, LDAP + NFSv4 for file sharing; it’s a great combination: reliable, stable and secure. No complaints here.
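For reference, a minimal server-side /etc/exports sketch for NFSv4 (the path and subnet are made-up examples); in NFSv4, the export marked fsid=0 becomes the root of the exported pseudo-filesystem:

```
# /etc/exports (server side); clients mount server:/ and see this tree.
/srv/export  192.168.1.0/24(rw,sync,fsid=0)
```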
Does NFSv4 have native support for encryption?
Think of the Internet as a cluster in slow motion.
That’s a flawed assertion. The Internet is not a cluster in slow motion; that is, at best, only part of what the Internet is. The Internet, as I see it, is more like a gathering of millions of different networks, from small ones to big ones, with a lot of different maintainers and owners. Owner X doesn’t want non-Owner X to own some data; Owner Y, etc. In short, not all data connected to the Internet, or even spread via the Internet, is for the general public. A lot of data is meant to be private and thus is encrypted (or not). The Internet is one big ‘mess’ of diversity, built on standards, some of which are flawed (IPv4, for example); more about this later.
I didn’t find it a very good article, because it didn’t point out exactly what the problems with current network filesystems are, and it left out some network filesystems (several of which have been pointed out here already). There were no in-depth comparisons, and he seems to discuss two different markets:
Quote: In today’s network paradigm, the network file system challenge has become the distributed file system challenge, as we have moved from self-contained LAN environments to a world of occasionally connected computing. To be competitive in this environment, an operating system must have a file system that handles distribution and synchronization problems smoothly and securely.
Apple understands this. Apple’s relentless focus on the “digital lifestyle” has led them to work hard at getting a wide array of devices, from cell phones to iPods to video cameras, to connect and communicate. MacOS X gets high marks for its capabilities in this area.
WTF do these devices have to do with enterprise cluster/network filesystems? And why not make a distinction between enterprise and small/medium-sized networks?
One is a consumer application in which security is not very important. I’d say that, the moment you plug in your device, you should be able to perform tasks on the data stored on it (either with or without authentication). Now, that’s what Project Utopia tries to deal with. It’s mandatory and important, work is being done, and I hope non-GNOME DEs will deal with it too. But that’s about the only thing I can come up with.
He asserts that current times are different because of the Internet; but then the only thing network filesystems need to do is circumvent the downsides of the Internet. One huge downside is unreliability, and the dependence that comes with it: if you want a more stable and secure line, you have to resort to your own WAN. A VPN over the Internet won’t cut it, if the price matters. Take Skype, for example. Perfect service, you say? Wait till you’re dependent on it and there’s a DDoS or a cracker at work. IOW, it can’t guarantee stability and security. A less open, more private network in which defined parties share data over a private line is far better, and I think that’s what government officials use right now (some do, at least).
No, beyond the above I’m more worried about Linux’s local filesystem, a more local way of accessing files (as SFS and SSHFS try to offer), and versioning like VMS had. If overwriting a file by default creates a revision, that’s great for group collaboration or for backup purposes (though other backup measures are still necessary). The author agrees this is a problem on the network layer: “and also leads to confusion over which copy of a file is the master copy. Better to have a central file server for the work group to which each group member has access.”
check out sshfs:
“Shfs is a simple and easy to use Linux kernel module which allows you to mount remote filesystems using a plain shell (ssh) connection. When using shfs, you can access all remote files just like the local ones, only the access is governed through the transport security of ssh. Shfs supports some nice features:
* file cache for access speedup
* perl and shell code for the remote (server) side
* could preserve uid/gid (root connection)
* number of remote host platforms (Linux, Solaris, Cygwin, …)
* Linux kernel 2.4.10+ and 2.6
* arbitrary command used for connection (instead of ssh)
* persistent connection (reconnect after ssh dies) ”
http://shfs.sourceforge.net/
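Usage is about as simple as the blurb suggests; a sketch using the shfsmount helper that ships with the package (host and paths are hypothetical):

```shell
# Mount a remote directory over a plain ssh connection:
shfsmount user@remotehost:/home/user /mnt/remote

# Work on the files as if they were local, then unmount:
ls /mnt/remote
shfsumount /mnt/remote
```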
“…Windows’ offline files feature, which keeps a local cache of files on a share and synchronises them whenever the network is available.
_Very_ useful for corporate laptop users.”
While this may be very useful for corporate laptop users, it is only useful if you have Windows XP Pro. I couldn’t find any way to enable it on XP Home; please let me know if there is one, it would be useful.
As to why it is ignored in the article: probably because the article is about network file systems on LINUX!
I dunno about “Windows’ offline files feature”.
However: http://www.cis.upenn.edu/~bcpierce/unison/
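Unlike one-way mirroring, Unison reconciles changes in both directions; a typical invocation (the host and paths are hypothetical) looks like:

```shell
# Sync a local replica with one on a remote host over ssh;
# changes are propagated both ways and conflicts are flagged.
unison /home/me/docs ssh://laptop//home/me/docs
```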
dpi wrote:
> Does NFSv4 have native support for encryption?
Yes. Look for “Setting up krb5” here:
http://www.citi.umich.edu/projects/nfsv4/linux/
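Concretely, the Kerberos security flavour is chosen at mount time with the sec option, and krb5p is the one that encrypts traffic on the wire. A hypothetical /etc/fstab entry:

```
# sec=krb5  -> Kerberos authentication only
# sec=krb5i -> plus integrity checking
# sec=krb5p -> plus privacy (encryption on the wire)
nfs4server.example.com:/  /mnt/data  nfs4  sec=krb5p  0  0
```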
What is needed is something AFS-like but with lighter-weight requirements than a working Kerberos infrastructure… that’s a little much for home users, small businesses, etc.
Erm, Coda is supposed to be a descendant of version 2 of AFS…
Does it still perform operations on whole files?
As a Linux beginner dual-booting with Windows, I was shocked to find out I couldn’t just mount my Windows HD without possibly suffering total loss of data, because writing could corrupt the filesystem. Now, I know NTFS is closed and proprietary, but it would be great if it had some good solid support on Linux.
Although this is slightly off topic;
Read-only support under Linux has been solid for quite some time. Unless you enable write access, you won’t lose any data.
Write access, despite more than a little reverse engineering, has been and will continue to be flaky for at least the foreseeable future.
(This sort of problem is why we bitch about open standards so much.)
Try looking up CaptiveNTFS; it’s a way to use Windows’ own NTFS driver and therefore get reliable read/write support for NTFS.
As a Linux beginner dual-booting with Windows, I was shocked to find out I couldn’t just mount my Windows HD without possibly suffering total loss of data, because writing could corrupt the filesystem. Now, I know NTFS is closed and proprietary, but it would be great if it had some good solid support on Linux.
Nice FUD Mike. When was this? What Linux version?
Let me first clarify the following statement:
I was shocked to find out I couldn’t just mount my Windows HD without possibly suffering total loss of data, because writing could corrupt the filesystem.
1) Windows HD != Windows partition.
2) Windows partition != NTFS; FAT16 and FAT32 support is fully available, read-write.
3) Mount != mount with read/write access.
The new NTFS implementation is 100% reliable for the operations it supports, reading and writing alike, but write support is only partial. The write support that is included is believed to be stable; the operations that can’t yet be done reliably (i.e. full write support, the missing features) are simply not included in this new driver. Besides that, there’s a performance improvement over the old implementation, and the code is cleaner.
The old implementation, however, performed worse; its read support was likewise safe, but its write support was not reliable and indeed could not guarantee data safety. In short, read support has always been safe, and this is widely documented and known. Perhaps you used the old implementation AND enabled write support; perhaps you encountered a bug.
If you want full write support, use a Windows license + ntfs.sys + CaptiveNTFS, buy Paragon NTFS mount, use a network FS, etc.
PS: I’m afraid neither the BSDs nor your Amiga are able to mount NTFS read/write… *grin*. I agree the situation is not ideal, but it’s much better than on other OSes; it’s mostly a problem in dual-boot situations, and AFAIK Linux has the best NTFS support around.
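To illustrate the read/write distinction, a couple of /etc/fstab sketches (the device names and mount points are hypothetical):

```
# FAT32: full read/write support, files owned by the given uid
/dev/hda5  /mnt/windata  vfat  rw,uid=1000  0  0

# NTFS: mounted read-only, which has long been safe
/dev/hda1  /mnt/winxp    ntfs  ro  0  0
```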
You can use Active Directory for the Kerberos/LDAP management and use OpenAFS against it. It would be nice if we had a free AD equivalent that was as easy to manage for the typical corporate drone or home user.
For those interested in the Plan 9 file system protocol (9P): there is a project to implement it on Linux that is under heavy development and will be submitted for inclusion in the mainline Linux kernel very soon. The DragonflyBSD people have also shown interest in the project and might adopt it (the Linux kernel code is GPL, but all other code should be BSD so they can re-use it).
For more information see:
http://v9fs.sf.net/
Are Coda and Intermezzo still being developed? Last time I looked they were dead.
v9fs is being actively developed, as is GFS (developed by Sistina, which was purchased by Red Hat, who sell GFS commercially; Red Hat have promised a GPL release in 2004). Both these projects look promising. The article is out of date.
Does AFS/OpenAFS still require that you use its own on-disk format? How good is fsck for OpenAFS on Linux? I don’t want the networking layer to dictate the on-disk filesystem format.
Just a note: GFS is already GPLed. Debian packages are in the making, for example. The source is available from Red Hat.