While the capabilities of operating systems have improved over the years, the improvements have largely focused on under-the-hood changes. New functionality reaches the user via additional applications which allow her to write a DVD, connect her mp3 player, download streaming video locally and perform other tasks which were not possible before. But the graphical interface of the computer itself keeps the same concepts it was introduced with. One could argue that the graphical environment of computers has stayed essentially the same for the last 10 years and that only cosmetic changes take place in newer versions of operating systems. Moving away from the desktop metaphor is harder than it seems. Even alternative operating systems have embraced the concept instead of exploring new ideas. This article describes a solution which attempts to free the user from the files/folders concept.
Current situation
Microsoft and Apple have a huge customer base. Making radical changes to the user interface (UI) is almost impossible. As I explained in my previous article, I disagree with the fact that everyone is excited about the 3D candy coming to the desktop while nobody attempts to correct the desktop's existing shortcomings. I also argued that even in the open source world the most prominent projects assume that the desktop metaphor is the one true solution. There are exceptions to this, but a truly alternative idea has failed to catch on.
While GNU/Linux might be considered an alternative operating system, its graphical interfaces are not alternative at all. In order to reach mainstream acceptance, KDE, GNOME (and XFCE) developers have followed the tried and true approach of imitating the interface of the dominant operating system(s). They have done an excellent job and, by all means, office workers can easily migrate to KDE or GNOME. I congratulate them for their efforts since what they have achieved is truly amazing work. So now that we can accommodate users who like the desktop metaphor, it is time to explore other ideas too.
The desktop approach has a lot of flaws. It presents too much information to the user. A huge selection of menus and icons is present at any given time on the screen. Users have trouble navigating the menus to accomplish what they want. Task-based interfaces are only successful in certain areas. The desktop root window is cluttered with documents, folders and applications. Searching for a specific document can be a nightmare. Overlapping windows require micro-management. Direct manipulation can be a curse under certain circumstances. Many people have written before me, and better than me, about the problems of the desktop metaphor. If you are happy with your desktop and think that there is nothing wrong with it, you should not read this article in the first place.
Too much data
Rather than attempting to throw everything away, we should gradually identify what is wrong and correct it, keeping backwards compatibility where possible. I will only focus on the idea of having files and folders, which plays a central role in the desktop metaphor. Organizing files under folders was indeed efficient for many years. Users did not have a large amount of information on their computers. And since office users were already accustomed to physical files and folders, they had no trouble navigating the virtual ones. Of course, things have changed since then.
Today too much data is present on a single computer. A human user cannot simply remember where everything is placed. The files and folders are directly mapped on file-system structures. Users are provided with a file manager application which gives them direct access to the file-system hierarchy present on the hard disk of the computer. Manual file management can quickly become overwhelming.
People realized this management problem and several new solutions appeared which combined the file-system with some sort of database. We now have Google Desktop and Beagle. These essentially provide indexing services to the user. Instead of having to remember the exact location of a file, she can simply query the indexing service using properties of the file and hopefully get a result back.
All these are great solutions on their own. But they cure the symptoms and not the cause of the problem. Users should not have to use a smart indexing service which helps them find what they want in the chaotic hierarchy of their files. There should never be chaos in the file hierarchy in the first place. In fact, users should not even care whether there is a hierarchy or not.
Delegate file management to the computer
Taking this idea further, I propose the complete removal of the file manager application. Power users will be able to access the file system via the command line if they want, but normal users should never want to do that or even know that it is possible. Removing the file manager might scare several users, so it may not sound like a realistic thing to do. We should explore the implications of this removal.
A user opens the file manager for three main reasons:
1) Copy data to/from external media such as CDs, flash drives, mp3 players, etc.
2) Find a file and click it in order to edit it.
3) Move files around and organize them better.
The first operation is a valid use of the file manager. But it could also be handled by an external application built for this purpose. For brevity reasons, I will not talk about this application in this article.
The second operation was once a valid use of the file manager. Users now have a large number of files, so they open them by searching for the name of the file, by using the “recent” list of the operating system, or via the editing application responsible for that file type. Lately, indexing services have appeared, as described already. Users search for a document with the file manager only as a last resort. It means that all other methods have failed, which is something that we should avoid.
The third operation is time consuming and should be avoided. We should free the users from micro-management. While power users get a warm feeling that their file hierarchy is exactly as they want it, casual users rarely spend time organizing files. Most of them do not even bother at all and just dump everything on the desktop root window.
So if we provide users with an external media application and some kind of indexing service, can we finally remove the file manager? Some users might be surprised that they cannot access files directly. They might see this as losing power. The truth is that they do not lose this power; rather, they delegate file management operations to the computer, where they belong. Management via complete control is only possible at small scales. After a certain amount of data is reached, good management means delegating responsibility to assistants who will perform the needed operations. In our case, the assistant will be the computer itself.
Tagging content
This means that instead of constructing a simple indexing service, we also entrust the computer with the underlying file-system hierarchy. The user is shielded behind the abstraction layer of the query service. The query service acts as a search engine for the local files. Files are marked with tags that describe their contents. The user can enter tags in a text field and retrieve all files that match them. The interesting idea here is that the file-system hierarchy is built dynamically according to the tags of the files.
Current indexing services suffer from the fact that they can only extract metadata from indexed files, while they do not understand their contents. Getting the size or dimensions from an image file is easy, but the indexing service cannot know what is actually depicted in the picture. In the future we may have clever pattern-recognition applications which understand the content of every file that enters a computer. Until this semantic-content utopia is a fact, we need to resort to the best pattern-recognition method we already have: the user herself.
Every time a new file or folder enters the computer (via physical media or via the network), we ask the user to tag it with a set of keywords that describe it. This might seem a daunting task at first sight. It is not, since users already have to think about where to place a new file on their computer and decide on a target directory depending on the incoming document. In fact, our solution is even better since it does not restrict the user to a single directory. While previously the user would have to select whether the “Sales” folder or the “2006” folder was the proper place for a document which described the sales of 2006, now she can tag it with the “2006” and “Sales” keywords and let the computer decide on the actual place inside the filesystem. Of course, some users would place the document in a “Sales” folder located inside a “2006” folder. This instantly creates a one-level-deep hierarchy which, after some time, will result in the chaotic filesystem found on modern workstations. This is exactly what we want to avoid.
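As a rough illustration of how such a query service could answer the “2006” plus “Sales” request, here is a minimal sketch in C against an embedded SQLite database. The schema, table names and paths are invented for this example; they are not taken from any existing implementation.

/* Minimal sketch: files and tags in an embedded SQLite database.
 * Schema and names are illustrative only. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;

    if (sqlite3_open("tags.db", &db) != SQLITE_OK)
        return 1;

    /* One row per file, one row per (file, tag) pair. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS files(id INTEGER PRIMARY KEY, path TEXT);"
        "CREATE TABLE IF NOT EXISTS tags(file_id INTEGER, tag TEXT);",
        NULL, NULL, NULL);

    /* Files tagged with BOTH '2006' and 'sales': intersect the tag sets
     * instead of picking a single folder for the document. */
    const char *query =
        "SELECT f.path FROM files f "
        "JOIN tags t1 ON t1.file_id = f.id AND t1.tag = '2006' "
        "JOIN tags t2 ON t2.file_id = f.id AND t2.tag = 'sales';";

    if (sqlite3_prepare_v2(db, query, -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return 0;
}

With this layout, assigning a tag is just another INSERT into the tags table, and each extra keyword the user types becomes one more JOIN in the query.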
Of course, since we care about compatibility, we keep the old filesystem around. If the user wants, she can manually put a file wherever she likes, outside the control of our abstraction layer. Power users and programmers can still keep their carefully monitored tree structure in their home directory and only place some (or none) of their files under the control of the abstraction layer.
Building a prototype
A lot of people have expressed their opinion on how an idea should be implemented, but few have actually created an implementation. This time I have already implemented a prototype of this abstraction layer. And rather than presenting a command line tool with a thousand command line options, or a daemon process with yet another interface/protocol/API and a full HOWTO on the installation process, I have followed the exact opposite direction: start from the user interface first!
The prototype is actually a graphical application (X11) with cheesy graphics full of animation and custom widgets which will only entice casual users. But those are exactly the target audience. Other than that, it is a single C application with an embedded SQLite database. No external databases or running daemons are needed. The application is named simply “The vault” (I am open to name suggestions).
Its functions can be easily summarized. Anything that is copied to .vault/incoming is up for inclusion in the abstraction layer. Once the application is launched, users can either assign tags to files/folders or search the existing (already tagged) files/folders. A third option to see recently searched tags is also offered. No special toolbars or menus or anything more complicated are present. The actual files are stored under .vault/storage. The idea is that casual users should never access this directly; they should always use the tag-based graphical interface.
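As a sketch of what the intake step described above could look like (only the two directory names come from the description; everything else, including the helper name, is my own assumption and not the vault's actual code):

/* Hypothetical intake sweep: move everything from .vault/incoming to
 * .vault/storage; tag recording would follow each successful move.
 * Both directories are assumed to exist. */
#include <stdio.h>
#include <dirent.h>

static int absorb(const char *name)
{
    char src[512], dst[512];
    snprintf(src, sizeof src, ".vault/incoming/%s", name);
    snprintf(dst, sizeof dst, ".vault/storage/%s", name);
    return rename(src, dst);   /* only works within one filesystem */
}

int main(void)
{
    DIR *d = opendir(".vault/incoming");
    struct dirent *e;

    if (d == NULL)
        return 1;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;          /* skip ".", ".." and hidden entries */
        if (absorb(e->d_name) != 0)
            fprintf(stderr, "could not absorb %s\n", e->d_name);
    }
    closedir(d);
    return 0;
}

The tag assignment itself – for instance an INSERT into an embedded SQLite table as sketched earlier – would naturally happen right after each successful move.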
The vault is located at its own SourceForge page. To build it you need the SQLite development headers and libraries and also the Enlightenment Foundation Libraries (EFL). The EFL are the libraries behind the next version (17) of the Enlightenment window manager. They are not officially released yet, so you are expected to download the snapshots from the freedesktop site.
If you actually visit the sourceforge site you will realise that the vault is essentially one component (around 30%) of a bigger project that attempts to explore some alternative ideas on how the modern desktop should look and act. You are free to help the development of any component. The vault is just the first one published to the open source community.
As a final note I should also mention glscube. Glscube was released after I had started development on the vault, and they share many similarities. Glscube also creates a virtual filesystem based on the tags assigned by the user to individual files. It even includes an indexing service of its own. Another impressive capability is the POSIX-compatibility layer offered for non-glscube-aware applications. The problem with glscube is its high number of system-intrusive dependencies, including a FUSE module, a PostgreSQL database, and a console daemon. While the idea is interesting, I find the implementation a bit heavy on resources and certainly overkill on low-end systems. I wanted something simpler and more lightweight. Even its authors mention a future change from PostgreSQL to SQLite (which I already use).
About the author
Kapelonis Kostis is a computer science graduate. He believes that the desktop metaphor is obsolete. After spending a lot of time thinking about the perfect user interface, he has started implementing it. If you share his vision or have suggestions of your own, do not hesitate to offer advice, or even help with the coding process.
I have embraced the idea… but it’s not quite yet here sadly.
Vista allows some metatagging, but it’s not a single click away (it’s 4 clicks deep). Zeta/Haiku just lack an interface to the meta-attributes to be the del.icio.us of the OSes.
The point is, quoting MS, that: “As development of Windows Vista continued, we got a lot of feedback that said people weren’t ready to move to a library-based file explorer just yet, so we scaled back on the Library vision a bit.”
something like this:
http://nepomuk.semanticdesktop.org/xwiki/bin/view/Main1/
??
KDE is going that route anyway, Strigi for search and trying to make the desktop something different from a file-manager.
Removing a file manager would be stupid anyway in KDE (again): exploring files is the same as exploring the internet. The “normal” users wouldn’t need to know that the internet browser is also a file browser, while the “power” user can still use a decent file browser. Why throw something useful away when it can be implemented in a similar program?
The idea of using several programs for the job, is gradually finding acceptance for example in music programs that handle the music files…
The iPod/iTunes filesystem is an example of that; it doesn’t rely on tags for the filesystem structure but the interface itself is based on the tags instead of relying on the existent filesystem.
Yet, this is a very specialized use of it and I don’t know how it would work applied to the whole desktop experience.
That’s why it’s called a music manager, you can have a picture manager, a movie manager… For special files a special program that manages those files. But I think Nepomuk really is what the guy is looking for, it tries to give a meaning to a file, instead of something just being a file of bits and bytes. Sadly enough they’re still researching how to do it and I doubt they will find a solution any time soon.
Hello
take a look at oberon and plan9 to see other “options” to the common sense of a “computer shell”.
gabi
…that the world primarily focuses on idiots? So because many people are too dumb to sort their files in a sane way, having the ability to sort is bad? I don’t think so.
There are probably better interface designs than WIMP, but I have yet to see one that really works better.
I’m quite happy with my KDE, but if I wanted to explore new interface designs I would start with getting rid of the concept of having windows. I can’t remember its name right now, but I once saw a window manager that handled every window as a tab in a big tabbed sheet.
It really looked like it could be very usable; unfortunately it wasted incredible amounts of screen space. Since I tried it on my widescreen notebook with only 1280*800 pixels there wasn’t enough space left for the app on the y axis, so I did not use it much, but otherwise I might have given it a longer look.
Yes, I’m an idiot: I have been writing programs since 1980, and I know what a computer is, a filesystem, etc…
But when I use my personal computer, I just want a tool very simple to use but very helpful. Perhaps that is the reason which made me buy a Mac.
I think that idiots are those who make things complicated just because they know how to deal with them.
Hopefully there are persons like the one who wrote this article, who make things change and go farther and better.
I hope to read more articles like this one, and fewer YetAnOtherPreviewOfYetAnOtherDistroOfLinux pieces which just encourage sleep.
How are files and folders complicated? If they are, then a lot of idiots must be running accounting offices.
Oh, wait…
“But when I use my personal computer, I just want a tool very simple to use but very helpful.”
I think that’s one of the UNIX principles you can see in Linux as well. But the user has to know that such a tool exists and how he can use it. This implies: he has to know what he wants to do first – before starting. Am I correct? I’ve seen many people sitting in front of the computer and expecting the computer (!) to tell them what they should do.
Refer to the KISS principle at this time.
“I think that idiots are those who make things complicated just because they know how to deal with them.”
You’re not right at all. Complex problems often need complex solutions that can be broken down into sub-problems solved by simple tools. The “trick” is to combine them. In most cases, intelligence is needed. You can hardly have complex relations represented in a GUI, because it’s not fully programmable. The authors would have to think of every possible relation.
Good ideas are simple, simple ideas are good. In most cases.
I like this statement from the article:
The third operation [Move around files and better organize them] is time consuming and should be avoided. We should free the users from micro-management. While power users get a warm feeling that their file hierarchy is exactly as they want it, casual users rarely spend time organizing files. Most of them do not even bother at all and just dump everything on the desktop root window.
Oh yes, I’ve seen such desktops full of file symbols, arranged in columns and heaps – which weren’t on the backup media later because they were located in a “Windows” system subdirectory. The users didn’t know where the files were located, so they could not back them up. 🙂 But that was years ago.
So, the user does not have to know where his files are physically. For NFS filesystems, that’s obvious. So, how could the user keep control over his data?
The author mentioned using content tagging. I think here we’re in trouble, because we have two possibilities:
a) Let the user decide on content
This is time consuming as well, so the user won’t gain any advantage from it. The user would have to input data manually for each file. Why has he saved it? What does he need it for, and what will he need it for at a later time?
b) Let the system decide on content
Surely you’ll see the problem: How should the system know what’s important to the user? What key information should be retrieved from documents? And how about music files? Rely on (often missing) ID3 tags? And files in proprietary formats that are not accessible?
Remember: The “semantic compiler” does not work.
int a, b, sum;
sum = a - b;
This is syntactically correct, but it does not do what it seems to be intended to do.
I think you can’t solve this problem in a simple way. The idea is good, but doesn’t work.
I think that requiring the user to enter tags each time he/she saves something is just as much micro management as navigating to/creating folders.
So gather as much metadata as possible:
1) When a file is downloaded from a web site, save the title, url, and maybe even words from the site as metadata.
2) When a file is saved from an email, again, store the subject, sender, (full text of the message) as metadata.
3) When a file is stored on cd/ipod … etc.
Full text search ala beagle is nice, but it won’t help you with a joke stored as 025454565.jpg in your 2GB “downloads” folder.
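One hedged sketch of the kind of automatic capture suggested above: on Linux, a download tool could attach the source URL and page title to the saved file as extended attributes, so an indexer could later find “that jpeg from that joke site” without anyone typing tags. The attribute names and the helper are hypothetical.

/* Linux-specific sketch: remember where a downloaded file came from by
 * attaching "user.*" extended attributes; attribute names are invented. */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

static int tag_download(const char *path, const char *url, const char *title)
{
    if (setxattr(path, "user.source.url", url, strlen(url), 0) != 0)
        return -1;
    if (setxattr(path, "user.source.title", title, strlen(title), 0) != 0)
        return -1;
    return 0;
}

int main(void)
{
    /* The path and URL are placeholders; a browser or mail client would
     * call this right after writing the file to disk. */
    if (tag_download("funny.jpg", "http://example.org/jokes", "A joke page") != 0)
        perror("tag_download");
    return 0;
}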
The problem is also that the source/“repackager” of the information is often an amateur/noob who often barely knows what a file is, and in the end he doesn’t (or is too lazy to) fill in the metadata. A good example is mp3s with data written either in ID3 tags or in filenames (or both).
There is also a need for a unified metadata taxonomy across desktops/apps, otherwise you will end up having numerous “unrelated” classes like e.g. tunes, songs, music, “things” etc. A big gorilla like MS can enforce this in Windows (even they failed here with WinFS), but Linux distros can have a hard time agreeing on something like this.
I think it’s the exact other way around. MS with its enormous user base CAN NOT even remotely enforce anything like this, because of hundreds of millions of users with certain work habits. MS has its hands tied. Linux distros are in fact aimed at people with higher level of computer skills, willing to try out new things. Revolution should start with the guerilla, not the gorilla.
You are right, I was thinking the same thing. What about copying 2000 files from a friend … what do you do then?
Seriously, I’m all for innovation and new methods of storing/retrieving important data, but I’m terrified of the idea of misspelling something or just hitting “save” and forgetting to add important criteria etc. My brain is a brain, nothing more nothing less; it’s filled with memory errors and CRC errors.
Anyway, I’d prefer something optional here as a first step to at least improving the file hierarchy, something that would make at least my life easier, and surely that of many backup admins.
Why aren’t all the file systems available today, and the software used for production purposes, a lot more aware of versioning? What I’m saying here is that on my computer I always store files according to a versioning scheme (as in x.y.z versioning, depending on the size of changes). Now what I would want is for the computer itself to store the “original document/image/file” and, let’s say, autosave every 5 minutes or whenever the user saves, and “tag them”. For instance, tagging “Version where it is sent to Mr X”, so if changes are made to that file later on, you can always reverse them.
Instead of, like today, storing it as a new file with a new name in order to keep the “original”, wouldn’t it be better to simply let the file structures, as well as the software on top, always keep the original and store the “changes along the way”, rebuilding files that way when requested? If for some reason you want to remove back data, that option should be available too.
For me at least, this would lessen the number of files dramatically, and make everything better.
Yes, this technology exists today, I’m aware of that. But how can it not be standard and widely adopted?
Anyone else agreeing about this?
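As a very small sketch of the “keep the original and the changes along the way” idea (paths and the naming scheme are invented for illustration; a real system would store deltas rather than full copies):

/* Sketch: before a document is overwritten, stash the previous contents
 * under a versions directory so "save" never destroys history. */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <sys/stat.h>

static int keep_version(const char *path)
{
    char dst[512], buf[4096];
    FILE *in, *out;
    size_t n;

    mkdir(".versions", 0755);            /* harmless if it already exists */
    snprintf(dst, sizeof dst, ".versions/%s.%ld", path, (long)time(NULL));

    if ((in = fopen(path, "rb")) == NULL)
        return -1;
    if ((out = fopen(dst, "wb")) == NULL) {
        fclose(in);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

int main(void)
{
    /* An editor would call this just before saving over report.odt. */
    if (keep_version("report.odt") != 0)
        perror("keep_version");
    return 0;
}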
I really like this idea, I had one which is quite similar. Maybe some of my ideas can help you improve your prototype (I will try to join your sourceforge project, as soon as I have finished one of my three current development projects):
– When data is stored you already have a lot of information that is hardly used: user (plus data from LDAP, for example business unit or something like that), date, file type, … You should be able to query this data too. For example, if I want to know what business reports were done in Nov. 06, I don’t care about the format or the user; I only care about the keywords, Finance and so on (see the sketch after this list)…
– I would try not to redesign the whole UI, just redesign the “Save As” dialog; this can be implemented quickly in various libraries (GTK, Qt, …).
– Make revision copies of files smaller than 100MB; disk space is no issue today.
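A minimal sketch of the “free” metadata mentioned in the first point – the saving user, the date and the file type can all be captured without asking the user anything. All names here are illustrative, and the LDAP lookup is left out.

/* Sketch of metadata that costs the user nothing: who saved the file,
 * when, and what type it is. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <pwd.h>

int main(void)
{
    const char *filename = "business_report_nov06.odt";     /* placeholder */
    struct passwd *pw = getpwuid(getuid());                  /* saving user */
    time_t now = time(NULL);
    char date[32];
    const char *ext;

    strftime(date, sizeof date, "%Y-%m", localtime(&now));   /* e.g. 2006-11 */
    ext = strrchr(filename, '.');                            /* crude file type */

    printf("user=%s date=%s type=%s\n",
           pw ? pw->pw_name : "unknown", date, ext ? ext + 1 : "unknown");
    return 0;
}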
Just my 2 Cents… Keep on realizing your idea, it’s a great starting point!
– Patrick H
WIMP systems may not be the end-all of designs, but they do work whether you have 10 files or 100,000 files.
Most alternative designs I have seen would work fine as long as you are not dealing with too many items, but once you go over the limit (different for each system) you quickly get bogged down.
Worse, with metadata you usually find you need a key piece only after you have collected a lot of files first – adding the metadata after the fact can be a pain.
I have made this mistake using BeOS: adding the metadata would have taken weeks, figuring out an automatic way to update the metadata was not easy, and most users could not have done it – and that is the problem. At some point or another the work of maintaining your system does become hard.
My first thought was this:
You go out for a long weekend and take 200 photos on your digi cam. You then connect your camera and then start uploading, and then, holy crap, you have to enter meta tags for 200 photos!
This would involve opening each one individually and thinking deeply enough to get intelligent search results. I’m guessing you might be sitting there for hours. No way.
And 200 is a small number. My wife frequently takes 1000+ photos during a photography shoot and that just makes my head spin thinking about tagging all of those!
Nope, I’m sticking with folders for now.
Heh… that’s a pretty good point
A possible solution would be something akin to a multi-rename-tool. Just with metadata-modifications instead of filename-modifications.
Like “Title: Weekend trip to Grand Canyon” + x
where x is an ever increasing number.
TotalCommander has a multi-rename-tool with such functionality for filenames. Something like that but on metadata would solve the problem, at least to some extent.
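A rough sketch of such a “multi-tag” tool: apply the same title plus a running number to a whole batch of files, the way a multi-rename tool does for filenames. Storing the title as a Linux extended attribute is just one possible backend; the “user.title” attribute name is an assumption.

/* Sketch of a "multi-tag" tool: same title plus a running number for every
 * file given on the command line. */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    /* usage: multitag "Weekend trip to Grand Canyon" IMG_*.jpg */
    if (argc < 3) {
        fprintf(stderr, "usage: %s <title> <files...>\n", argv[0]);
        return 1;
    }
    for (int i = 2; i < argc; i++) {
        char title[256];
        snprintf(title, sizeof title, "%s %d", argv[1], i - 1);
        if (setxattr(argv[i], "user.title", title, strlen(title), 0) != 0)
            perror(argv[i]);
    }
    return 0;
}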
> You go out for a long weekend and take 200 photos
> on your digi cam. You then connect your camera and
> then start uploading, and then, holy crap, you have
> to enter meta tags for 200 photos!
That’s the first thing that crossed my mind too, but then I realized I wasn’t taking everything into account. For one, there could be the option to tag a group of files all together, just like you’re going to dump them all inside of one single directory whose name you had to, obviously, choose.
Except, you can give more than one tag to this bunch of pictures, whilst with the directory you’re stuck with only one name.
It’s obviously a more powerful approach and not necessarily more difficult to handle. The directory approach could be shown to be a subset of the database approach, in fact, where each directory is basically a specific view on the db.
Problem is, after a while, people will not tag anymore; it’s tiring, and it takes a lot of time. And then they’d arrive at a db which contains half-indexed stuff and the rest is just a dump without tags. And since you don’t have filesystem access to quickly organize them into directories, you’re more screwed than you’d be today.
I mean, just imagine the transition first: Joe has 2 million files. He transitions to a db-abstracted filesystem, tagging the 2 million files, and I mean really tagging, not just saying this is a text file, this is a pdf file… Then tagging the new files as they come along daily, weekly, and so on. He is not tired of the stuff, because he is one of the 4400 (sorry, I’ve just been through a few episodes).
But, imagine a situation like I had in the last days: I had to run tests generating a lot of data. The app I quickly wrote automatically output the data into dozens of hierarchical directories containing several hundred files. I had to do it for generating evaluation data for a research paper. This was a somewhat dirty, but fast and easy to handle way. Imagine I didn’t have a way to access the filesystem directly, but only the db layer. Then the easy hierarchical output would have turned into a db-table-generation process as if you’d organize your data in a db today. I mean no thanks, there are times when you just don’t have the luxury of time to do that.
I don’t mean the db-abstracted filesystem idea isn’t nice. Hell, I love the idea. But what I don’t want to see is the total drop of the traditional filemanager paradigm any time soon.
> Problem is, after a while, people will not tag
> anymore, it’s tiring, and it takes a lot of time.
It’s not more tiring than giving a name to the directory into which you’re copying your files. What’s more, the system could present you with a list of already used tags and let you choose between them.
If it’s images we’re talking about, a semi-decent AI system could also learn to recognize them based on the tags given earlier and similarities between pictures, giving you the ability to search pictures much more efficiently and effectively.
> And since you don’t have filesystem access to
> quickly organize them into directories, you’re more
> screwed than you’d be today.
So, how’s organizing files into directories any easier than giving tags to a bunch of files at once?
> I mean just imagine the transition first: Joe has 2
> million files. Transitions to a db-abstracted
> filesystem, tagging the 1 million files, and I mean
> really tagging, not just saying this is a text
> file, this is a pdf file…
Tagging a file simply means attaching more information to it than it otherwise has. If Joe doesn’t care about attaching any such information to his files in the former case, why would he care in the latter?
On the other hand, if joe does care about organizing his files into directories, why can’t he attach tags to a bunch of files all at once? Conceptually, it’s exactly the same thing, except that in the latter case you are working with sets, rather than nodes in a tree. You can do a lot more operations with sets than you can do with directories, like intersecting them, joining them, and a multitude of other relational operations.
> But, imagine a situation like I had in the last
> days: I had to run tests generating a lot of data.
> The app I quickly wrote automatically output the
> data into dozens of hierarchical directories
> containing several hundred files. I had to do it
> for generating evaluation data for a research
> paper. This was a somewhat dirty, but fast and easy
> to handle way. Imagine I didn’t have a way to
> access the filesystem directly, but only the db
> layer. Then the easy hierarchical output would have
> turned into a db-table-generation process as if
> you’d organize your data in a db today. I mean no
> thanks, there are times when you just don’t have
> the luxury of time to do that.
Consider this/path/to/that/file; now consider “this path to that file”. The former is a path, the latter is a list of tags: you can always transform a path into a list of tags, hence if you can write a path, you can write a list of tags, which isn’t any more difficult to do.
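A tiny sketch of the equivalence described above – a path is just an ordered list of tags, so converting one to the other is mechanical:

/* A path is an ordered list of tags: split on '/' and each component
 * becomes one tag. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[] = "this/path/to/that/file";
    char *tag;

    for (tag = strtok(path, "/"); tag != NULL; tag = strtok(NULL, "/"))
        printf("tag: %s\n", tag);
    return 0;
}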
I really don’t understand the issue. You’re assuming that a tagging system automatically needs to have more information than a filename/hierarchy system, but I don’t think that’s true.
Suppose you have those 1000 photos. A good photo management tool (and, in fact, this is roughly what iPhoto does) will ask you to specify an album name, and then will import all of your pictures from the camera, tagging each one with the album name and the import date.
You can go back and retag pictures whenever you want, but it isn’t essential. And honestly, if you want each of your pictures to have a name, how is it any easier renaming a file than editing a tag?
I agree 100% with the author.
The WIMP metaphor is clumsy. It’s very inefficient and complicated for no reason at all.
The PC will have an XBox360-like interface one day (where everything is fullscreen).
I sure hope not. That would be terrible. It works for some tasks and applications, but there are many tasks it doesn’t work for at all.
The desktop metaphor is still superior. The problem is that Windows, OS X and KDE/Gnome only halfheartedly implement the desktop metaphor.
They all have the components required to make a killer desktop and they all fail in the end.
I’m curious there, can you give examples on how any of them fail the desktop metaphor? (just out of curiosity, as to what you consider a good desktop metaphor)
Well, printing for one thing. Mail handling for another. Persons/contacts as a third issue. Templates as a fourth (though Gnome actually does have some of the functionality in the way one would expect).
Try dragging a file to the printer… like a Word document. Instead of powering up a print window, Windows fires up OpenOffice or MS Word. Mac OS X and Gnome/KDE are no better.
What should happen would be a printer dialogue window opening asking you for number of copies, page range etc. And then using plug-ins to parse the document. That would be the expected behaviour, if we use the desktop metaphor to its fullest extent.
…and then will come the next revolution, desktops that allow you to assign varying sections of screenspace to many applications running concurrently; applications that can be sized and placed at will…
…but file management, indexing service and desktop metaphor has nothing to do with each other.
True, the main stream OS’es have issues with the desktop metaphor, but that’s because they only implement a minor part of the desktop metaphor.
Windows have the potential to be a killer on the desktop, but Windows, OS X and KDE/Gnome all make the same mistake, relying on huge blobs for functionality.
However – the article is mostly about indexing services and I can definitely support it.
Beagle on Linux, Spotlight on OS X and Windows Indexing Service on Windows.
I’d like to point out that Windows has been ahead here for many years. Unfortunately MS has decided not to make a major thing out of it. Use the ordinary search window and write queries in the free text search field. It’s not user-friendly but it gives you the same search functionality known from Google Desktop Search, Windows Desktop Search (an unnecessary extension, really) and Beagle.
Indexing Services and Virtual Folders are natural parts of a Desktop. And BeOS/Haiku and SkyOS can show off a few things here, even though the query builder in BeOS is cumbersome for a power user.
One of the major reasons file management is becoming increasingly complex is because package managers tend to do a crappy job.
Another reason is because we – the users – have a tendency to scatter our own files all over the place. That’s where indexing services are really good (and of course message data from chats, as well as mails).
Package managers do a crappy job, and you use Windows as an example of an improvement? I’m sorry, but that’s just so much BS: When you install a Windows program, does it put itself in C:\ or in C:\Program Files or in C:\SomeCompany\SomeFolder, or in any one of the 23 letters of the alphabet Microsoft have kindly </sarcasm> left you the use of for your drives? As for where my data goes, I should be able to put stuff where I want, not where some hotshot with an MBA in Redmond thinks I want, and even if I shouldn’t, package managers have zip to do with that.
No no no no no no no… you misunderstood me.
I did NOT use Windows as an example of doing package managing better.
Let me repeat: I did NOT use Windows as an example of doing package managing better.
You did not at all read my post. twenex, you know very well, I’m a Linux user as well as a win2k3 user, though I prefer Syllable, Haiku and SkyOS.
It is in relation to indexing services that Windows has been far ahead for many years (even before the release of Windows 2000). In relation to installers and package management, Windows is worse today than in 1995.
Let me make it clear that in regard to package managers and installers, Windows is as bad as most Linux distributions. Back in the early Win95 days one could usually decide where things went (incl. menu items in the start menu). Today, installers on Windows just install the software without letting me customize anything, with the result that everything is scattered all over the place, just like you described so well. GNU/Linux distributions tend to be just as bad, not letting me decide where to install packages. It is less of a problem due to the menu structure in the Gnome menu. But it still scatters files all over the place, unless you use GoboLinux.
Package Managers have everything to do with scattering installed files all over the place. I’ll disagree with you on that one.
Fair enough, yes I misunderstood you, sorry.
In other words, you want to move to a flat filing system where each program tags every file, and all of those files are searchable, and rely on the apps to sort the files instead of a file manager.
Sounds an awful lot like PalmOS4.
A lot of what has been said in the article and above could possibly work with home desktop machines, but how does it translate into a work environment? The idea of Linux and the BSDs being multi-user systems seems to have been lost in this topic. The same goes for file servers of any flavor, including Novell’s OES, Windows Server 2003, or any Linux distro out there. Rights management and top-down flow would be negated. There would have to be a user interface which represents the file system in a way completely different from the actual layout. It would have to be a shell on top of a shell. Even then, how do you share directories through that interface?
I’m in a project where we are currently working on a very similar product. While we may not have any working code yet, we have already spent a great deal of time on the user interface and have some new ideas on how to integrate sharing of files into this interface.
http://www.iola.dk/en/current_project
Take that desktop metaphor and bury it in the backyard under the third rock next to your beegees collection.
So you add an indexing service and take out the file manager?
Where do you win by doing this?
Leave the file manager alone;
add a good indexing service, and make the file manager aware of it…
On the other hand, I am a good indexing service… why would my stupid PC ever be as good?
And yes, the world is made to please idiots…
Because they are idiots, we have to be idiots as well.
Educate the idiots, don’t make them more idiotic…
bye
> and yes, the world is made to please idiots…
> because they are idiots, we have to be idiots as well
There’s nothing idiotic about letting the computer do the job it’s supposed to do: ease your life.
I will come to your house and kill you if you attempt to force me to enter metadata for every file that comes onto my machine. That idea is genuinely that awful.
The current state of affairs is bad. That I do not deny. The problem comes from the vast increase in the amount of stuff which is coming in. When I first started with Linux, the /etc directory was easy to use; now it is a cluttered mess. The same goes with dotfiles (which you want to add to?!?!) in my home directory.
The problem I have is that I have more crap coming in than I can easily manage with the current crop of tools. I have too many files and folders. I need some drawers.
Come up with a drawer app. Then we’ll talk and I won’t kill you.
WIMP is like democracy.
It has been said that democracy is the worst form of government except all the others that have been tried. — Sir Winston Churchill
And then the obligatory GUI versus CLI run-down:
Compare WIMP to typing textual commands. When you want to accomplish a task using WIMP, you will point at things and try to waggle the mouse in a way that the computer will try to decipher. In other words, you are using sign language.
When you want to accomplish a task using textual commands, you type in what you want the computer to do. In other words, you are using real language.
Of course both are only analogies, but (as I have commented on OSNews before when this topic has come up) pointing at things and grunting (clicking) seems far less elegant than explaining verbally what you want. WIMP is like a step back to a pre-civilized human.
I’m not saying it is exclusively a bad thing, but sometimes it makes me wonder…
The analogy only gets you so far. Show me an interface where I can use my innate visual skills and hand-eye coordination, and you’ve got WIMP. Show me an interface where I can use my innate language skills, and you’ve got something ahead of its time. But show me the CLI, and in essence you’ve got a programming language with a steep learning curve.
Or “I use the CLI, hence it’s superior”
So I read the article, moaned at having to have the E17 libs to try this thing out, and instead decided to look at the website first to see if it was worth the trouble.
on the front page is this:
“Multitasking is a myth. Only power users exploit it. Casual users do one thing at a time.”
I suggest he read more usability research, as he’ll find that yes, Bobby, people do multitask. Theory is nice, but when field research with sound methodology shows otherwise, I know as a scientist which I’ll trust more.
“The only other application 99% of casual users use in parallel is their music player to have background music”
and what is instant messaging exactly then? *sigh*
This is the biggest problem surrounding UI development today, both in the open and proprietary worlds: people are wandering around trying out their hunches and papering over the obvious holes with excuses and personal theory.
My favourite was a recent video where the “researcher” claimed his prototype made the interface disappear, when the gestures he was using were not intuitive/obvious and there was a button bar at the top of the touch screen! *sigh*
oh well ..
Well, multitasking is a myth?
Hmm.. I wonder what the services are doing then, if they aren’t running?
To me it seems he is confusing the human variant of multitasking with the computer variant of multitasking.
And not all tasks required a maximized window. Look at how BeOS/Haiku handles email-reading and writing.
I only use maximized windows for real documents (OpenOffice) or development (Visual Studio, Borland Delphi, eclipse, lazarus, Bloodshed Dev-C++) and Paint Shop Pro.
Most other tasks are solved using small windows (like reading and writing mails), usually several of those opened.
What I need is multiple workspaces.
The original “multitasking = myth” comment, though, is not invalidated by your example: As the original commentator could simply label you a “power-user”.
I also, though, run a number of programs across multiple workspaces. Some stuff you may not need all the time but you’ll want without waiting (date, volume control), some stuff you need to see if it goes unexpectedly awry (proc. temp, load, net-traffic, etc.), some things you try to keep atop of (# unread e-mails, incoming IM’s), and the best way to do this is via a combination of monitors, pop-ups, text logs, and running programs. I also use open programs to remind me I’m not done whatever task I was using them for yet.
If multi-tasking is really only for power users, shouldn’t the goal be to smooth out the learning curve and lead everyone towards a vaunted power-user status, not ‘dumbing down’ the interface to meet?
Well, I am a power user and even a geek. But I know quite a few persons I wouldn’t consider power users and even they have a tendency to solve certain tasks across several small applications. Notepad is one of them.
But it all depends on the definition of power user.
If multi-tasking is really only for power users, shouldn’t the goal be to smooth out the learning curve and lead everyone towards a vaunted power-user status, not ‘dumbing down’ the interface to meet?
Oh yeah! I’m with you on that one. Unfortunately the major companies selling software to Average Joe care more about selling the software than making it good (and that’s understandable – they need the profit), and therefore dumbing it down is the best short-term strategy.
But if the learning curve could be straightened out (and I think it can), it would mean we could have software useful for the beginner, without getting in the way of the power user.
A thing I’d like to see changed would be controlling the mime types in Windows. One step could be to go back to the old control of mime types (as in NT4) instead of the more automatic one introduced in Windows2000, and then make it easier than in NT4 to control mime types and their respective file types.
An easy-to-use menu builder to modify context menus would be welcome too.
The reason many of these operations are looked at as power user operations is that they are too difficult to perform (for most users) due to poor UI. Not to mention a lack of understanding of PCs on the part of newbie users. (And perhaps also a lack of will to understand?)
This article is nice ideas really, not very useful but nice :p
Just kidding. I understand you fully when I look at the overfilled desktop full of downloaded files in the background, and yes, I have a warm feeling managing my little file hierarchy on 4 disks, 2 OSes and a lot of files. But tagging won’t really help, because it’s difficult for a computer to understand what I need and I don’t remember exactly what I tagged.
The problem will persist until we create a real AI, someone to talk to, who understands us and the very content of files. Yep, yep, yep, that’s the real end of the desktop metaphor, my friend.
I have a feeling there is something wrong in this article.
Desktop clutter is bad. But why is the desktop cluttered? Because the visual way of doing things (WIMP) is not suitable for manipulating large quantities of information/items/data. There’s nothing wrong with the _logical_ way of arranging things in directories. WIMP is the opposite of the directory tree.
The library that stores a million books has no problem finding the correct volume because the books lay on indexed shelves installed in indexed racks that stand in indexed storage rooms. And the library has the catalogue!
Why don’t we just make our computer use this library metaphor?
> Why don’t we just make our computer use this library
> metaphor?
That’s the point, we already do. But the problem is, not everything is a book.
I was doing a database recently with an archivist, who was very impressed with the importance of picking the right categories or keywords to use to categorize the items. Thinking there was something a bit odd about this, I consulted a friend who is something of an expert in these matters.
He laughed long and hard. Then he told me that numerous studies had shown that people cannot in fact use keywords effectively to find things. They forget what they have tagged. Their interests change. What is a keyword for one is not for another.
What people do effectively (maybe not efficiently) is drill down, use tree type searching. They have a much better, largely unconscious, way of finding stuff.
The desktop metaphor works because it allows rapid drill down, and rapid reassignment.
So we still have keywords. But he was right – no-one uses them. They drill down, or if all else fails, they do full text searches for whatever words occur to them.
“He laughed long and hard. Then he told me that numerous studies had shown that people cannot in fact use keywords effectively to find things. They forget what they have tagged. Their interests change. What is a keyword for one is not for another. “
Yes, that’s an existing problem. It requires the ability to think in categories and suborders. The thinking schemes one uses change over a lifetime, especially in early youth, which is where many computer users seem to be when they sit in front of their box. 🙂 So a context-driven system should be able to “evolve” itself – the same way the user evolves.
Ah, that makes perfect sense AAMOF!!! Drilling down a tree hierarchy is in fact MUCH MORE VISUAL than the DYNAMIC display of data in a search/tag based system. Yeah! Go directories!
(Of course the search is useful to have, as long as the hard directories still exist alongside)
I’m surprised by the number of negative reactions. All Kostis is saying is that files are a technical artifact in which most people wouldn’t be interested if they didn’t have to be. Files are a needlessly overloaded concept – some files contain data (most interesting to users), some files contain configuration (considerably less interesting) and some contain programs (which are only interesting in so far as you launch them from files). This is both confusing and dangerous for non-geeks.
Kostis is simply suggesting that we get away from the tired old concept of files and programs and get to the concept of data and capabilities. Windows’ start menu and its attempts at hiding “system” files from users are a weak attempt at doing this. The web (think Google Mail, Google Maps, Google …), has done a comparatively better job of focusing on data and capability, which is probably why users seem to have an easier time learning and using web interfaces than traditional desktop apps, even though the web interfaces are often less slick. Sure, there’s a lot of files being used there, but it’s all managed by experts working in data centers.
As to tagging and metadata, the current scheme for naming files is nothing but a simplistic tagging mechanism that encodes relationship (folders), identity (file name) and type (extension). There’s no reason that a tag-based storage mechanism can’t use these fields as a starting point but then add considerable value where additional metadata is available (either via the data source or by the user).
The key to successfully moving beyond the file system is to provide a competent browser mechanism that gives users a fall back for when they can’t remember specific searchable terms and that lets them instead explore their data and use spatial memory to recognize matches. It might also be nice to provide an expert system that helps users who start with a vague idea of what they’re looking for to iteratively narrow their results. Think of a good librarian – you might walk into the library knowing very little about what you’re looking for, but by questioning you about your objectives, (s)he can help you find exactly the right book.
At the end of the day, software that makes you do anything that’s not directly related to your real work objectives should be considered a failure. To Kostis (and to me) that means that users should never have to touch anything called a “file” unless it’s actually a file (e.g. my 2006 tax file).
And assigning the user to tag content is ridiculous. I think a simplified hierarchic system and dynamic metadata on filesystem objects is much better. It should be programs that assign AND read/write metadata to/from filesystem objects, NOT the user. For example, a camera could add the metadata camera=”Canon” and model=”400″ to the picture.
The metadata could be added like File.add(“key”, value),
for example: File.add(“camera”,”Canon”), File.add(“model”,”400″). The folder metaphor is great! But what is needed from today’s operating systems is to clean up today’s messy structure AND, by default, only present the user with his starting point/home directory.
I have two points:
1.) Programmers and programs would still need to know the exact location of a file, so file managers will ALWAYS be necessary. What would happen if your photo-album screen saver picked up on your porn directory when you had family over? The only way to separate files with these kinds of details, is to have a human organize them.
2.) You should spend your time on something far more important to technology than making computers easier to use for users that obviously can’t comprehend them enough to ever use them to their full potential. They’d be better off with real filing cabinets and folders.
It’s amazing how many people can’t comprehend how to navigate a file hierarchy and organize files. Those of us “gifted” enough to understand know that there are very simple tools to locate files quickly if we cannot find them — utilities such as slocate.
Edit: It should be noted that I’m disagreeing with the author’s philosophy of UI design as stated in his Manifesto on his SourceForge page, much more than I am criticizing anything he says in this article. That said….
Oh man…. where to begin…. Sure, tags and searches are good… everyone knows that already. But some of these other ideas if implemented in a mainstream OS would take UI back to the stone age. Luckily even Microsoft has done enough user testing to realize that the amount of oversimplification suggested actually makes everything much harder and more annoying for most people.
From the Manifesto:
> User interfaces should be simple, discoverable, adaptable. Casual users should find their way easily after some minutes. Intermediate users should be able to delve a little deeper. Power users should resort to the command line if they want to do something extraordinary.
Simple and discoverable: good. However, it must also be remembered the more similar an interface is to something that came before it, the more likely it is to be discoverable to the user. Hence, WIMP apps that follow certain conventions are easier for most people to use than new, nonstandard interfaces. As for power users needing to resort to the command line: NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO!!!!!!!!!!!!!!!!!! That’s my take. I know I’m not the only one.
> Unix GUI applications should mirror the console ones. They should be small, reusable and focused on a single area. Monolithic designs are to be avoided. It should be easy to obsolete, replace or upgrade functionality.
Only if they’re easy to link together, like Unix console apps (this is something already done on the Amiga and now on OS X with Automator). In practice, though, it’s often actually much easier for end users to have one app, like iTunes, that’s always open, than to search around for a specialized one to burn CDs, one to rip CDs, and one to listen to music. (Not to mention tagging, downloading, finding podcasts; the list goes on and on… there’s a reason iTunes is popular even with non-iPod owners.)
> There should be only two ways to accomplish a task. Following the beaten path (easy but slow) for the casual user or bypassing the path (hard but quicker) for the power user. Any other way might confuse the casual user or anger the power user.
Hit the nail on the head, by stating the absolute, completely wrong way of thinking about this. The author believes there can only be “the path of the beginner” and “the path of the expert.” That goes completely against the reason that personal computers have become popular: because there is a sliding scale of functionality that increases with use. With the author’s binary separated model, how would normal users ever be encouraged to venture into more powerful pursuits? Just tell them to RTFM? This would just encourage more elitism and make the majority of people more helpless to use the computer the way they want.
Apple has it right: Give users a path to more power, while making that power as easy to use as possible. Encourage people to discover functionality they didn’t know existed. But don’t throw functionality out for the sake of the grandmas.
Someone said: files are organized like at a library, organized on specific sections, specific racks, specific shelves, specific rooms, specific buildings, etc.
Then someone said: not everything is a book.
So the solution is:
3-Dimensional File Hierarchies! How much more visual can you get?
Eh, OK, maybe not. But it’s no more a BAD idea than the one this article is suggesting!
Interesting. What the author wants is more or less Yep, but for all files.
Yep :
http://www.yepthat.com/index.html
The author might find the following interesting:
SegusoLand: http://segusoland.sourceforge.net/screenshots.html
Or its successors Logical Desktop and One Finger: http://logicaldesktop.sourceforge.net/screenshots.html and http://onefinger.sourceforge.net/screenshots.html
They basically make searches and clickable, discoverable searches the basis of the interface experience. Great ideas, except that untagged files again fall out of scope. Except for OneFinger, which sacrifices some of the earlier features to overcome its limitations. Pretty good, but you have to wonder at the added value.
And the rant against meta-data (I still like the last line of 2.2):
http://www.well.com/~doctorow/metacrap.htm
You’d be surprised at the efficiency one can achieve with a little scripting knowledge and some help from command line tools, such as find and xargs. In all seriousness though, scripting data management will always be a necessity as long as users and/or programmers are too lazy to clean up after themselves. And I don’t see a metadata-based file system reducing this problem anytime soon.
File Manager: as long as we have several ways to do something, a file manager is the handiest tool for messing w/ files.
1. Copy data: OK, what data should be copied, and how? How is the user going to be sure of the structure in three different places (current OS, temporary media, destination OS)? A file manager is handy, here. Integration is a much better solution than removing the file manager. CD burning and ripping in Konqueror are great examples of this: easy, quick, and highly intuitive.
2. Ah, you’re either being tricky, or missed it entirely. This is two separate points. Finding a file is a problem. Clicking it to edit is not. I think it’s great that I can quickly choose to open some files (text, fancy text, and archives) in five or more applications from only two menu levels deep. And sometimes I indeed want a different one than the default! While the menus could be flattened a bit, as far as features go, this is great.
3. Organizing files, well, it depends on how those files are used. Again, I’m clearly not for removing the file manager, but oftentimes it’s going to be more trouble than it’s worth to really organize files. Still, as with #1, I would be more inclined to use an additional feature set in the file manager than an interface that removes the file manager as we know it. The result would just be a dumbed-down interface, about as useful as the Windows default of not showing extensions.
Extending the file manager would also mean we could have our cake and eat it too: whatever good stuff comes up to better handle files, plus an easy switch to actually handling them ourselves, with those file management operations at our fingertips in both parts of the interface. It would improve productivity for us power users, and allow a learning curve for those who want one. A better interface is good, but hiding it helps no one but the technophobes (and really not even them).
I can easily envision, for instance, a 2D or 3D web or tree of file types, swapping organization with a mouse click (or wheel up/down), making a motor-memory-friendly way to handle the home/documents stuff. However, there’s no reason I can think of to remove the file management aspects. In fact, they could be put to good use: selection and manipulation of a file or directory (or of a virtual grouping from a database) would be faster than looking through the 1000+ files in my My Documents, and I know from experience that I’m not too bad about organizing my files!
“In the future we may have clever pattern recognition applications which would understand the content of every file that enters a computer.”
I think it exists, we’re just consumers right now. Robots can do it, so…
Tagging by the user: so instead of just clicking “OK”, I’ve got to give new information? Ugh! A long name should suffice. The real trick is to create some framework, as with photos and text documents, so that a learning indexing service can infer useful information from the data of ANY file. That means picking out words from spreadsheets, finding beats and moods in music files, file types and names in archives (and maybe some of their contents), and so on. Even as a power user who likes to organize his stuff, I just save to My Documents or ~ most of the time and use chording to get where I want. All around much quicker, since there’s only one piece of data needed (the first few characters of the title; it helps to make them long when possible!). I would do the same thing with tags.
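As a rough illustration of that idea (not anything an existing indexer actually does), here is a Python sketch that derives candidate tags from a plain-text file’s own content; the stop-word list and the four-letter cutoff are arbitrary assumptions, and real use would need per-format extractors.

```python
# A naive content-based tagger: pick the most frequent non-trivial words
# from a text file and propose them as tags, so the user never has to type any.
import re
from collections import Counter
from pathlib import Path

STOP_WORDS = {"the", "and", "that", "with", "from", "this", "have", "will"}

def infer_tags(path: Path, how_many: int = 5) -> list[str]:
    """Return the most frequent words of four letters or more as candidate tags."""
    text = path.read_text(errors="ignore").lower()
    words = re.findall(r"[a-z]{4,}", text)
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(how_many)]

# e.g. infer_tags(Path("trip_notes.txt")) might return
# ["everest", "flight", "hotel", "camera", "budget"]
```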
Keywords will create just as chaotic an environment as folders have, unless used very creatively. Metadata as we have it just needs more integration, and searches need to be smarter and faster (so they are mouse clicks and single keystrokes away, not words and menu trees away). More metadata may help, but not merely by being present; there must be a structure for creating it so that it is not entered tersely (just to finish your initial task faster, which was about making the file, not figuring out good words to look it up with later!). With luck, the proper solution will emerge technologically, or become apparent when we really need it (like spam filtering).
Luckily, with computing going more parallel, whole unused CPUs will before long be available to absorb the apparent performance burden of regular tagging and searching.
“Of course since we care for compatibility we keep the old filesystem around.”
In other words, the EXT5 guys won’t give a damn about any of this, and aren’t going to compromise their wonderful designs for your flights of fancy. Half of them still might not be using X by the time this stuff is actually working, and DIY PCs are of ages past. (I couldn’t help it.)
Actual minimal implementation? Using EFL?! I must build it!
Closing remark to the main article: assume that I am a power user, but still just as stupid and lazy as the guy who thinks the Internet is a blue “e”. Why? Because in my own ways, I am. So are the overwhelming majority of us.
DmitryK: “Why don’t we just make our computer use this library metaphor?”
I think it’s a good idea. The DB-on-top-of-the-FS approach can do just that, but I think some folks get carried away with ideas like hiding the FS, rather than just organizing it better for human minds (as opposed to standards makers’ and software developers’ minds). Not everything in the current GUI or command line is broken.
Falemagn: “What’s more, the system could present you with a list of already used tags and let you choose between them.”
Let, or force? If let, what are the defaults (rather, what should they be)? If force, forget it.
“You can do a lot more operations with sets than you can do with directories, like intersecting them, joining them, and a multitude of other relational operations.”
Which is a major problem, as non-math folks don’t like that: you could have the equivalent of a file being in many places at once while only a single instance exists. Now, if it’s used just for searches or other specific sorting scenarios: bring it on!
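For what it’s worth, the set view is easy to picture; a small Python sketch, with file names and tags invented for the example:

```python
# Each tag maps to a set of files; queries are plain set operations.
tagged = {
    "photos":   {"everest_south.jpg", "everest_north.jpg", "receipt.pdf"},
    "everest":  {"everest_south.jpg", "everest_north.jpg"},
    "receipts": {"receipt.pdf"},
}

# files tagged both "photos" AND "everest": intersection
print(tagged["photos"] & tagged["everest"])   # {'everest_south.jpg', 'everest_north.jpg'}

# files tagged "photos" OR "receipts": union
print(tagged["photos"] | tagged["receipts"])

# photos that are NOT about Everest: difference
print(tagged["photos"] - tagged["everest"])   # {'receipt.pdf'}
```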
Symphony OS does away with much of the WIMP metaphor in the Mezzo desktop. No desktop, no menus (except in programs of course).
Folders will never go away. People are used to storing things in file folders, so the metaphor is acceptable to most people.
I do pile stuff, right next to my computer actually, but I know what pile to look in to find something 😉
There’s also too much data to assign tags to everything. I have thousands of pictures. Am I supposed to go back and assign a tag to every single one? I’d rather put them in a folder and put tags on the folder. A human is better at detecting the actual meaning of a file (“Is this Mt. Everest from the south or the north?” “What did I mean when I created this letter?”), but there are too many files to actually apply tags to.
WIMP will be used for a long time; applications have too many features to fit into a few screens. Word processors have hundreds of features, and even the ones that actually get used are still too many to fit on a few screens.
Tags in all their glory are probably great. But they are only one of many important relations between objects.
Some post suggested pulling data from the mail an attachment came from in order to index it properly. Why? The important relationship between the object and the mail is all contained in the link between the two. Find the mail, find the object. Find the object, find the mail. Sadly, our current GUIs only support the first case.
This is also exactly how the brain works. There’s a reason they’re called “mind maps”: association chains are how we store stuff in our brains, and it’s exactly by association that we retrieve it. Tags, directory names, and spatial placements are all examples of things that can be associated with an object, but they’re not the only ones.
How many times have you tried to find something thinking: “hmm, I remember I had it while…”? And that’s the point: if proper links that really model the interesting relationships between objects were constructed and kept, a user could navigate to his or her objects by traversing the exact same association chains they have in their brains.
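A minimal sketch of such an association store, in Python, with object identifiers invented for the example; the point is only that links are kept in both directions so either end can be reached from the other.

```python
# Keep explicit, bidirectional links between objects (mail, file, event, ...).
from collections import defaultdict

links: defaultdict[str, set[str]] = defaultdict(set)

def link(a: str, b: str) -> None:
    """Record an association both ways, so navigation works in either direction."""
    links[a].add(b)
    links[b].add(a)

link("mail:2006-11-02-from-bob", "file:quarterly_report.ods")
link("file:quarterly_report.ods", "event:budget-meeting")

# "I remember I had it while reading Bob's mail..." -- follow the chain
print(links["mail:2006-11-02-from-bob"])   # {'file:quarterly_report.ods'}
print(links["file:quarterly_report.ods"])  # both the mail and the meeting
```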
One problem with adding metadata to a file that hasn’t had a good solution yet is the age-old problem of FTPing the file to a server that doesn’t support your metadata. Classic Mac OS had this problem, NTFS has this problem, and I’m sure ReiserFS has this issue too.
All metadata tends to be lost when going over a network, which kind of destroys a lot. Email, FTP and HTTP all destroy the metadata unless both clients and servers are smart about it, which makes all your precious tags worthless.
Sure, you could invent yet another file format to wrap your metadata and files together, but isn’t that more of a pain, and exactly what you are trying to avoid?
The world doesn’t need another metadata tagging system and GUI. What it does need is an open way to exchange data that works with the current toolset. Which is why Spotlight, in its half-assed form, is currently the best solution to the metadata problem.
By relying only on the file itself instead of user-added metadata (everything is just a calculation from the file), it sidesteps the network problem and works with all the current tools on OS X.
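A minimal Python sketch of that content-derived approach; the fields computed here are illustrative only, not what Spotlight actually indexes.

```python
# Everything below is computed from the file's own bytes and name, so it can be
# recomputed after the file travels over FTP/HTTP/mail with no metadata to carry.
import hashlib
from pathlib import Path

def derive_metadata(path: Path) -> dict:
    """Compute metadata purely from the file's content and name."""
    data = path.read_bytes()
    return {
        "name": path.name,
        "size": len(data),
        "sha1": hashlib.sha1(data).hexdigest(),   # identical for every faithful copy
        "kind": path.suffix.lstrip(".") or "unknown",
    }

# Running this on the sender's copy and the receiver's copy of the same file
# yields the same values; nothing was attached to the file that could be lost.
```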
I don’t see that hierarchical organization and keyword searching are contradictory. Folders are a form of hierarchical and univocal tagging, and they are convenient in many cases. Open tags are an alternative mode of organization which may be very helpful (especially if the number of keywords is kept constrained, as the number of directories is). And I wish that some day OS X would allow adding tags in the save dialog (along with the folders).
In fact, the problem with the current implementation of indexing (at least in OS X) is that Spotlight does not interpret folders as tags/metadata. Say I want to find the receipt for a book entitled “Computers Galore” stored in a “Books” subdirectory under a “Purchases” folder. At least in OS X, it’s impossible to find the receipt by searching “purchases books computers”. The example is just a simple illustration, but it shows the limitation of the current directory+indexing implementation. The folder a file lives in should be part of the file’s metadata.
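A small Python sketch of the behaviour being asked for, treating every folder on a file’s path as searchable metadata; the path and queries are invented for the example.

```python
# Match a query if every word appears somewhere in the path components or name.
from pathlib import Path

def matches(path: Path, query: str) -> bool:
    """True if every query word occurs in the lowercased path components."""
    haystack = " ".join(part.lower() for part in path.parts)
    return all(word in haystack for word in query.lower().split())

receipt = Path("Purchases/Books/computers_galore_receipt.pdf")
print(matches(receipt, "purchases books computers"))  # True: folders count as metadata
print(matches(receipt, "purchases music"))            # False
```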