Over the past several years, mobile devices have greatly influenced user interfaces. That’s great for handheld users but leaves those of us who rely on laptops and desktops in the lurch. Windows 8, Ubuntu Unity, and GNOME have all radically changed in ways that leave personal computer users scratching their heads.
One user interface completely avoided this controversy: Xfce. This review takes a quick look at Xfce today. Who is this product for? Who should pass it by?
Since 1996, the Xfce desktop environment has evolved at a steady, non-disruptive pace. It’s reliable. I’ve
used it for six years and have rarely encountered bugs. Most
important, Xfce presents a simple menu-driven interface on a
traditional desktop. Anyone who’s
ever used a computer can sit down with it and
immediately become productive. No mysteries here about “Where’s the
Start button?” or “Does it have a menu?” or “How do I add a desktop
icon?”
Xfce is the default desktop for about ten
Linux distributions.
Over 80
others offer it in their repositories. For this
review, I worked with Xfce 4.10 on Xubuntu 13.10 and Zenwalk 7.2. Xfce
version 4.12 has been in development for two years and should be released soon. Read
about upcoming version 4.12 features here.
A Traditional Desktop
Here’s the default Xfce desktop presented by Xubuntu 13.10. It contains
a top panel with some minimal information and a menu button. There’s
also a bottom panel
that you can’t see in this screenshot: it remains invisible until you
move
your mouse cursor over it. The bottom panel contains icons for ten common applications.
I’ve clicked on the menu button (in the upper-left corner of the
screen) to show the
drop-down menu and some of the default apps:
Menus and configuration work as anyone who has ever used Windows
(pre-version-8) would expect. To add an icon for an application or
folder to the desktop, just
right-click on any empty spot in the display and select “Create
Launcher” or “Create Folder.” Right-clicking also allows you to control
“Desktop Settings” including desktop background, menuing options, and
the default desktop icons.
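Under the hood, “Create Launcher” simply writes a small .desktop file (a freedesktop.org standard) into your Desktop folder. Here’s a minimal hand-written sketch; the application name and command are only examples:

    [Desktop Entry]
    Version=1.0
    Type=Application
    Name=Mousepad
    Comment=Edit text files
    Exec=mousepad %F
    Icon=accessories-text-editor
    Terminal=false

Drop a file like this into ~/Desktop and Xfce displays it as a clickable launcher.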
Similarly, you can add application launchers and plug-ins to the default panels, or remove them: just right-click on the panel. You can relocate panels to any edge of the display, and add or delete entire panels at will. I alter the panels according to my screen’s size and shape.
The “Settings Manager” selection in the main menu lets
you configure Xfce and your operating system. No mystery here about how
to tailor the system to your needs.
Here’s how I altered the default Xubuntu desktop to suit my preferences
in just fifteen minutes. I deleted
the top panel, and made the bottom panel permanently visible. I changed
its icons to launch my favorite apps. I added a second panel on the
right-hand side of the display and placed launchers there for some
system tools. Finally, I altered the desktop background
and added a couple of program icons to the desktop. To top it off, I
reduced my screen resolution for better readability:
Given its flexibility, most distros pre-configure Xfce to their own tastes, so the default desktop you’ll see varies by distro. For
example, here’s the initial desktop for VectorLinux 7 Standard Edition.
It features a centered top
panel with common launchers and a Mac-like dock at the bottom. If you
move the mouse cursor over the dock, the icons in focus enlarge. In this
screenshot, the cursor points at the Pidgin app in the dock:
What Xfce Includes
Xfce is a full desktop environment (DE). It bundles both a
user interface and programs to support common desktop tasks. Its core components are:
Component | Function
Window Manager | Manages windows on the display
Desktop Manager | Manages screen background, icons, root window
Session Manager | Controls sessions, login, power management
Settings Manager | For easy configuration
Panel | Manages panels and their icons
Application Finder | Finds apps
Xfconf | Client-server configuration
Xfce libraries | Underlying functions and widgets
Thunar File Manager | Default file manager
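All of those settings ultimately flow through the Xfconf component listed above. If you prefer the command line, the xfconf-query client can inspect and script the same configuration the GUI dialogs edit. A quick sketch (the panel property path is an example; panel numbering varies from setup to setup):

    # list all configuration channels
    xfconf-query -l

    # list every property in the panel channel
    xfconf-query -c xfce4-panel -l

    # read, then change, a single property
    xfconf-query -c xfce4-panel -p /panels/panel-1/size
    xfconf-query -c xfce4-panel -p /panels/panel-1/size -s 36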
Xfce also bundles a default set of applications: Midori for web browsing, Xfburn for creating optical discs, Ristretto for viewing images, Orage for calendaring, Mixer for audio volume control, and Terminal for a command-line interface. Distros often modify this list with their own additions and omissions.
Compatible and Lightweight
Xfce has a minimalist philosophy. The idea is
to provide a basic desktop environment, to which you add any
applications you need. You can add GNOME or even KDE apps
without package dependency problems. You can also start
GNOME
or KDE services automatically upon startup. I often install Xfce along
with MATE and various GNOME and KDE apps in a single Linux Mint
instance. It all works without
conflicts.
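Starting GNOME or KDE services at login, for instance, is an ordinary session setting. You can toggle it in “Session and Startup,” or, if I recall the property names correctly, through Xfconf from a terminal:

    # tell xfce4-session to launch GNOME (or KDE) services at login;
    # these property names are from my own 4.10 setup and may vary
    xfconf-query -c xfce4-session -p /compat/LaunchGNOME -s true
    xfconf-query -c xfce4-session -p /compat/LaunchKDE -s true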
Xfce is lightweight. Most Xfce-based distros download to a single CD,
rather than requiring a DVD.
Memory use
is significantly less
than for KDE or GNOME.
Interactive response is quicker, too.
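You can check this on your own machine: log into each desktop environment fresh and compare memory use before starting any applications. A rough recipe (the exact numbers will vary widely by distro and version):

    # overall memory in use right after login
    free -m

    # the biggest resident processes, largest first
    ps -eo rss,comm --sort=-rss | head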
I’ve
installed Xfce on many Pentium IV HT and
early dual-core systems, as part of Xubuntu, Mint, or VectorLinux.
These systems are up to ten years old and often have as
little as 512 megabytes of main memory and 256 megabytes of video
RAM. With
this lightweight software, even these old computers are responsive.
They can perform nearly all the same desktop
functions as state-of-the-art hardware.
All those Windows XP
boxes
people are throwing out? Most could continue in service simply by
installing a
good Xfce-based Linux.
Who is Xfce For?
If you’re looking for a flashy interface
or bells and whistles, don’t bother with Xfce. You’ll probably find it
boring. If you want
your PC to mimic your handheld, try Windows 8, Ubuntu Unity, or GNOME.
And if you hanker after all the latest features, Xfce will disappoint.
I recommend Xfce for those who want
to concentrate on their work rather than their software. It’s a simple,
traditional desktop. PC users like
it. It’s an excellent choice when you install and configure a system for a family member, friend, or other end user. You won’t have
to train them on how to use it. While
other interfaces have morphed out of all recognition over the past few
years, Xfce has stayed the course. It’s stable and reliable;
bugs are rare. The
product is fast and lightweight, so it works well on low-spec and older
computers.
If simplicity, usability, and reliability top your list of priorities, Xfce is worth a close look. To learn more, take the Xfce 4.10 tour, read the Xfce introduction, or
explore the online wiki.
——————————
Howard Fosdick is a database and systems administrator who works as an
independent consultant. He frequently writes technical articles and has
an M.S. in Computer Science.
While the XFCE components are lightweight and modular, they have some not-so-light dependencies. The power manager depends on consolekit, polkit, udisks and upower; polkit in turn depends on spidermonkey, a JavaScript interpreter! Similarly, the Thunar file manager automount plugin depends on consolekit, polkit and udisks.
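You can trace the chain yourself on a Debian-based system (these package names are Debian/Xubuntu’s; other distros name them differently):

    apt-cache depends xfce4-power-manager   # direct dependencies
    apt-cache rdepends upower               # everything that pulls upower in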
I use XFCE, and agree with the author that it does a good job of staying out of the way. But I have removed Thunar and the power manager in favor of commandline alternatives.
And sadly, those dependencies make it less portable with regard to non-Linux operating systems. 🙁
I’ve been a big fan of XFCE (v3) and Xfce (v4), but in my limited experience, it has become less usable on FreeBSD (my primary OS). While it works as a whole, some functionalities (especially power and disk related) require specific tweaking outside of Xfce to “make it work by different means”, and those are very unpleasant means. Some months ago, I had tried to get “everything” running with FreeBSD 10, but I had to move to Gnome because Xfce didn’t deliver the expected results anymore, while requiring many system services and using more resources than I would have thought. So it’s not just about the amount of dependencies it will install, but also about the services it requires to run. That might be no problem when running Linux on a recent PC, but for older computers (non-multicore, less than 1 GB RAM, no 3D graphics card and so on) it’s definitely not so good, especially when not running a tailored Linux (no mainstream “big ass” distribution which includes, installs and runs everything plus the various kitchen sinks).
Furthermore, I must admit that I miss the simple, yet “powerful enough” interface of XFCE (v3) which was a configurable CDE lookalike. Sadly, it isn’t maintained anymore – requires Gtk 1, has no Unicode support and does not integrate with “system services” like Xfce (v4) does. Still it was very fast, had “hooks” to make things work (like dealing with disks with xfmount) and didn’t require much learning. In this “traditional” sense, it was a perfect replacement for users coming from a Sun Solaris background (with CDE), but also easy to adopt for people coming from “Windows” land. (And I still have a P2 system running it on top of FreeBSD 5 including office, multimedia, graphics and programming applications – works perfectly.)
Earlier versions of Xfce (v4) were also on the FreeSBIE live system CD, and it’s still a good GUI environment for systems run from optical media: to try out Linux, to use it as an emergency system, or simply for testing purposes.
Honestly, that’s more an issue of the BSD folks IMO; the source is there.
I’m not sure it is that easy. Sure, the source of the Linux kernel (which is a primary dependency) is there, but the BSD kernels are different. Maybe it’s even impossible to implement certain things which are too specific (cgroups, u{dev,disk,power,whatnot}). Remember that it’s not just about porting or patching simple things – Xfce depends on many kernel functionalities and also system services that do not exist in BSD. And imagine the fun if systemd becomes a required dependency… 😉
Maybe it’s even about “wasted work”. Take HAL for example. BSD was lacking behind in HAL support when it became a major dependency of KDE, Gnome, and X itself, often together with DBus. When it started working reliably, it had been obsoleted in Linux already, which moved on to the “u* framework”. Still HAL stuff is stuck in many components of the system which has to kept working, or a massive loss of functionality would appear. There probably has to be some reasonable judgment about “if it’s worth the trouble”. It could also happen that a fork is being created, free of the “Linuxisms”, probably lacking certain functions for some time until they get re-implemented in a BSD-specific or even generally portable manner.
But those are just my individual assumptions. If you are interested in details, you should contact the BSD folks directly.
Perhaps all these DEs should just quit claiming compatibility with “Unix-like OSes” and just say “Linux”.
The BSDs have several issues:
1. Less manpower than Linux working on this stuff, especially when you’re not talking about FreeBSD;
2. The rapid changing of Linux’s interfaces (the hal/*kit/u*/systemd saga someone referred to is an example), especially when it is felt by some in the BSD community that this is a result of not enough thought and proper engineering up front, causing a lot of “scrap and start over” later on.
At least one BSD distribution (PC-BSD) felt that the Sisyphean task of implementing frequently-changing Linuxisms just to make DEs work is not worth the effort, and is building its own DE (Lumina).
I think these two are actually related. I.e., because there are so many people working on Linux, often independently, and they all want to have something working, they have to design on the fly and get it out there. This results in a lot of implicit test-by-use, which hurries up the scrap-and-redesign cycle.
My point, exactly!
The problem with keeping older computers is this: pretty much anything made during the so-called “MHz Wars” from ’93 to ’06 had ZERO time devoted to power saving, so if you sit down and do the math on the amount of useful work you get for the amount of power you use, it’s just not worth keeping.
Let’s say that older PC you are talking about is a late-model P3, say a 1GHz. According to CPU World, the CPU of a Coppermine 1GHz P3 uses 25w, and of course this 25w is constant since there are no energy-saving features in these chips. To give you something to compare it to, an AMD Sempron quad in Socket AM1 uses 25w WHILE giving you full HD (the Sempron is an APU) AND four cores AND an extra 400MHz per core AND full surround sound AND GB ethernet AND…see the problem?
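To put rough numbers on that (my own back-of-the-envelope assumptions: 8 hours of use a day, $0.12 per kWh):

    25 W x 8 h x 365 days = 73 kWh/year, or roughly $9/year in electricity

Both chips cost the same to feed, but the Sempron delivers several times the computation per kWh, so the P3’s work-per-watt ratio is hopeless.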
Frankly, the older systems made before the advent of the Core series on the Intel side and pre-AM2 on the AMD side are really not worth keeping; the amount of power you use versus the amount of useful work just doesn’t add up.
Yes, this is correct most of the time. Still, it’s possible that if you have to work with what you’ve got, you can turn a (quite low-power) Pentium 1 machine into a usable file server. Of course you get much better results if you’re willing to invest money, for example, in low-power mainboards (usually ARM-based) and “eco disks” or SSDs. On the other hand, wasting a 2 GHz computer with a fat GPU and an 800 W power supply just for browsing “Facebook” doesn’t sound that appealing, either. 🙂
Older PCs are still found in many places, and there are still people wanting to use them for something, instead of participating in the annual “throw away, buy new” dance that keeps industry happy. Those people formerly were happy to install Linux and Xfce on that kind of system, and it was no problem to use them, because they were sufficiently fast and secure (unlike, for example, when people try to install pirated copies of outdated “Windows XP” on them). And if resources were too low to run “mainstream Linux”, those people would simply switch to a different OS like FreeBSD or OpenBSD and still use Xfce for its lightweight, but powerful features.
With Xfce not being able to deliver portability and efficiency anymore, other more lightweight desktop environments (and maybe even preconfigured and tailored window managers) could become more interesting as a base to build a fully-featured system consisting of OS, desktop, and application software. But as soon as you add “too fat” applications to the mix, you’re back at the initial problem. 🙁
There are also non-profit organisations which are in the business of avoiding the huge pile of office waste (computers and printers), and instead install them with Linux and donate them. This is especially interesting for people who want to learn about computers and gain experience, but simply cannot afford to buy a new one, even though computers become cheaper and cheaper. But with the continuous “renewal” especially of smartphones (buy a new one every year, throw the old one into a garbage can), tablets and laptops, maybe “component-based” PCs will also become less and less relevant to the general public. And when people don’t see the waste they’re creating, they don’t care. (Maybe I just grew up with the wrong mentality, as I don’t feel very comfortable throwing away something that fully works, just because industry tells me it’s “old”.)
Basically, I agree with this, but allow me to add:
In today’s modern PCs, the ratio is still better, and given the assumption that you hardly use 10 % of the resources of the computer, the waste of energy is less (not in relative, but in absolute amount). The problem often isn’t that the hardware stops working, but that the software demands more and more resources to perform basically the same tasks (from a user’s point of view), which is compensated for by buying better hardware, creating toxic waste as a side effect, just to keep the same “average usage speed”. On the other hand, there are many features “hidden” from the user which depend on the availability of 4 GB RAM, a 3D-enabled GPU, or the presence of multiple CPU cores. Without the general (and increasingly cheap) availability of those, development would not go into that direction. Maybe that’s also the reason why there is so much bloatware – because nobody notices when something is inefficiently programmed, as it’s cheaper to simply buy faster computers than to perform an efficiency-oriented code rewrite. (You usually find this mentality in business software.)
For example, I’ve recently seen a top-of-the-line PC installed with “Windows 8”, and I was surprised how terribly slow everything was. The person in question had owned many high-end PCs over the years, all of them equipped with the then-current “Windows”, and he told me that he never actually noticed that something became faster, even though he always bought the newest and fastest PCs; instead, he felt like software became slower with every release. And people somehow accept this as being “normal”… now imagine what you could achieve with such hardware if you just added the proper software!
Windows (not “Windows”) Vista, 7, 8, 8.1 all were faster than the previous version; but I guess that’s just improper software for you…
This seems to be the curse of “open”desktop.org.
A project that seems to have been hijacked by various individuals on Red Hat’s payroll. Hijacked in the sense that opendesktop.org was about creating cross-desktop systems, yet more and more their projects require a very specific set of dependencies found either under the Gnome or udev/systemd umbrellas.
Hell, LXDE is jumping ship to QT as we speak. Merging their efforts with Razor-QT in the process.
Opendesktop has always been about coordinating the common forms of doing things in different desktop environments. So while Gnome, KDE, Xfce and others are all different, having common ways of doing things that applications can use makes everything easier for app developers.
I don’t think cross-operating-system compatibility has ever been part of that goal. I think its mission should always be to make the best open desktop experience possible. Now, Linux has so much more funding behind it that they can spend resources on the desktop and related technologies. The *BSDs don’t have that luxury and end up getting left behind as they can’t keep up with the pace of change.
So what to do? Hold back everyone because BSD lacks funds? Or sally forth and design the best desktops for open systems?
I say, damn the torpedoes, full speed ahead!
Ironically, this isn’t an issue of simply having funds to spend on resources. The fundamental difference between Linux & the BSDs has always been relative to the difference between firmly engineered solutions & “good enough for now” solutions. People always try to frame this topic as a matter of the BSDs not being able to keep up. Yet, the BSDs have never had a need for continuously scrapping infrastructure simply to replace it with something else that’ll also be scrapped. Do the job right, from the beginning, so you won’t have to constantly rewrite the same shit over & over & over again. So many resources wouldn’t need to be spent on the desktop, if the Linux developers wouldn’t churn so much. The same can be said of most subsystems in Linux. I still remember the whole crap-fest of the early ALSA days. The Linux guys seemed to start shitting lead bricks when OSS became commercialized. How did the BSDs handle that situation? Well, they basically decided to maintain their own branch of OSS. It took less time & worked nicely. They didn’t have to drag anyone through a shit-storm.
It’s not a matter of funding. If you can’t engineer good subsystems, then you shouldn’t be writing subsystem code. Design first, then code. If your design is crap, redesign it BEFORE wasting everyone’s time. There’s a reason that the BSDs are known for being rock solid. Being rock solid isn’t Linux’s key attribute. The fact that everyone (& their grandmothers) is writing Linux code, even if they don’t have any design skills.
Famous last words of ship & submarine commanders…
There are many reasons why Linux is the way it is, some of which I’ve explained in my comment above yours.
Other reasons that Linux must churn are that most hardware is actually badly implemented, and so the subsystems must change once in a while because the amount of specific workarounds starts working against each other.
Designing things to be perfect from the beginning only works if nothing ever changes. Like I said before, Linux churn is a result of previous Linux churn. BSDs don’t experience this because they don’t have previous churn to force them. There’s no positive feedback loop.
So yes, it is somewhat a matter of funding and resources. Change creates more change, and Linux has a lot more sources of change from outside that it becomes a juggernaut. Linux very much can’t say “stop giving us code” for long.
Sounds like the solution is pretty obvious -stop piling workarounds on top of each other. It’s like finding a victim with a shotgun wound on his chest & trying to patch him up with lots of little bandaids, instead of applying a proper dressing. This isn’t a technical matter, it’s a social & managerial matter.
if (HerdingCats(LinuxDevelopers)) {
    StopAllDevelopment();
    RemoveUnorganizedDevelopers();
    AddFreshDevelopers();
    ActuallyEngineerSolution();
    ImplementSolution();
    ScrapAllOldSolutions();
    if (SolutionWorks()) {
        CommitSolution();
    }
}
It’s not rocket science.
Now, you’re just making excuses. Churn can always be stopped. Churn could’ve been stopped when the development started on each major version of the Linux kernel, but no one bothered to actually do it. Every project has a beginning, so saying that the BSDs had no original churn is no excuse. Linux could’ve started without original churn, but it didn’t. Linux could’ve transitioned to a system with less churn, but it didn’t & it won’t. Like I said earlier, this is a social & managerial matter -not a technical one. By the way, you’re right, Linux churn doesn’t exist in a positive feedback loop.
If a project can’t reject badly designed & badly engineered code, then that project has serious issues. What’s funny is that you really believe that bs. I’m looking at what you wrote & I’m matching it up with how many years came & went that were supposed to be the year of Linux on the desktop. If your desktop can’t stabilize because the kernel is churning faster than a dairy farm produces butter, then is it any wonder that the year of Linux on the desktop never arrived? This discombobulated approach to development has forced Linux to play catch-up to TWO OSes, where it originally had to play catch-up to just one. I’ve been around for the development of both Linux & the BSDs. As trashy as Windows can be, there’s no doubt about the fact that it’s still more seamless than Linux…when it’s not crashing.
You seem really ignorant of how Linux development works. All your criticisms would apply if Linux were a corporate project with top down control.
But Linux isn’t.
In fact, you seem to think that waterfall development model is either in use, or should be in use, with Linux.
You’re right that it isn’t rocket science. Because to build rockets, you have top down control of every aspect of designing and building a rocket. Large, open source, software systems are not built like that, and your rant shows you to be completely ignorant of these matters.
I’m well aware of the development model used by Linux. News flash, it’s exactly that model that I’m criticizing. You fail to see the big picture, simply because you want to view the *nix ecosystem as the Linux ecosystem. Nothing could be farther from the truth. Have you ever stopped for a moment to think about every other *nix that lives in this environment? You want to complain about the other *nix derivatives, as if Linux isn’t the newcomer. People like you want to change standard *nix software to suit Linux, as if that software didn’t exist long before Linux. Here’s the meat & potatoes of the matter -Linux’s crappy development process poisons the well for everyone else!
I’m saying that people who can’t properly design subsystems have no business writing code.
First, rocket science isn’t about building rockets, it’s about understanding & designing rockets. The people who could be considered rocket scientists don’t normally build the rockets themselves -I should know, I spent a large bulk of my career maintaining rockets & missiles. Even for the ones who do build rockets, the most important part is understanding what you’re trying to accomplish & actually using a design process to achieve that goal. In addition, the Linux way of development isn’t the only way to develop software -if you’ve been around for longer than a decade, then you’d already know this. The fact that I’ve been contrasting Linux’s way with the BSD way shows that I know exactly what I’m talking about. Now, you just said that open source development doesn’t work that way. But I know of 3 BSDs whose development model, in fact, DOES work that way. What exactly do you think we’ve been talking about this whole time??? You’re constantly trying to call people ignorant, while prominently displaying your ignorance.
You’re criticizing it for not being the waterfall model. Do you realize how stupid that is?
It’s everyone else’s fault if they can’t keep up.
The BSDs are free to design their own subsystems and get them adopted.
Thank you for arguing my point.
I never said it was the only way. I specifically said “large”, and compared to the BSDs, Linux is very large.
I would also disagree that BSD “works”. If it did work, then why can’t it keep up with Linux? You blame Linux for breaking compatibility with things, but you don’t ask why the BSDs feel the need to make software portable from Linux to BSD. There is only one reason: people want software in userland that BSD doesn’t provide. If the BSD process really works, they wouldn’t have that problem, would they?
I never said it had to be a waterfall, I said to design it correctly BEFORE coding.
If you think it’s a matter of keeping up, then you’re dumber than I thought. How many times do you have to redesign the same damned frameworks???
Hey genius, the X server isn’t supposed to be dependent on specific kernel subsystems. Neither are the DMs.
Actually, I argued my point. You just don’t know any better.
Linux isn’t small, but it’s not exactly large either. The kernel isn’t really much bigger than any other modern kernel.
Yet again, it’s not a matter of keeping up. Design it correctly, from the start. I sure hope that you’re not a developer. You seem to know jack shit about real software engineering.
That is basically waterfall, dumbass. How can you design something correctly for a use case you cannot predict, Nostradamus?
How many times are people going to invent new ways of using the Linux ecosystem?
What, are you going to tell users “use it the way I designed it, or move out of the way”?
Hey genius, nothing is supposed to be dependent on anything, if you want to talk about cohesion and coupling. But there is the real world where things CHANGE, which I keep having to say and you don’t seem to comprehend.
Something like the X server is going to have to be dependent on the behaviours of graphics drivers, for example. X11 was finalized in 1987. OpenGL came on the scene in 1991. To get any sort of efficiency, you have to introduce dependencies. And ultimately, the thing gets so unwieldy that they have to redesign the whole thing, hence Wayland and Mir.
The fact that you think there’s no reason for redesigning something like X is proof you have no idea about change.
You said it wasn’t rocket science. I also said it wasn’t rocket science, because rockets aren’t designed bottom up, but top down. Then you said rocket science is top down. Therefore, you argued my point, dipshit.
Drivers, dumbass. Most of the development work happens in drivers, forcing subsystems to have to change when the design is no longer suitable to the wide ranging behaviour of hardware.
Have you ever tried designing a system that has human users that never had to change due to requirements being changed, arsehole? Frameworks and APIs are the kernel equivalent to designing for users.
You seem to know jack shit about developing software that people are actually going to use in ways you can’t imagine. It must be easy developing software for something that never changes. I develop for real customers who want everything that contradicts each other, so I know how difficult it is to design something that caters for all unforeseeable extensions.
Like I said, if you’re so good at designing for the end of time, then put the fuck up, or shut the fuck up. Right now, you are nothing more than that sulking kid at the sports carnival who mutters to himself that he could be first in the race if he wasn’t forced to take the tryouts like everyone else. Where the fuck is your kernel if you’re so fucking brilliant?
Hey brainiac, designing it correctly doesn’t mean that it might not need a few touch ups or adjustments, it means that you don’t have to scrap the whole damned subsystem just to replace it with another whole subsystem that’ll still be replaced.
It doesn’t matter how you use it, new usage doesn’t justify continuous churn.
This isn’t a user issue, it’s a developer issue. Users would appreciate systems that’re stable. You’re talking all kinds of shit about graphics cards, but even with all of that churn, Linux is still lagging. I recall XIG’s drivers outperforming Linux drivers -2d & 3d. In fact, the current implementation of the open source X server is just an attempt to play catch-up with XIG’s original X server (X386) -hence the name XFree86. Guess what? They designed it correctly from the beginning.
Since you have clearly never designed any major subsystems, I’ll spell this out for you again -CHANGE DOES NOT MEAN SCRAP THE WHOLE FUCKING SUBSYSTEM & CREATE A BRAND NEW ONE THAT ALSO DOES NOT WORK & WILL NEED TO BE SCRAPPED.
X11 is a specification. It’s not the spec that’s a problem, it’s the implementation of the free X server that’s the problem. When OpenGL came out, it was implemented on SGI’s own X implementation. They didn’t have a hard time integrating OpenGL into their X server -after all, they created the GLX protocol. In fact, I can’t name one single other X server that seems to have as many problems as the open source one. Then again, I also don’t know of another one that needs to be tied so closely to the kernel on each release. Hmmm..what’s more likely -that the Linux developers are doing it wrong or that every other X server & DM does it wrong?
It’s not X that’s being redesigned and replaced, ass clown, it’s the free implementation of the X server that’s being redesigned and replaced. There’s a difference -learn it.
I guess you just completely missed the point about UNDERSTANDING WHAT YOU’RE DESIGNING BEFORE BUILDING.
So, exactly how much of this development work have you done? None? Yeh, that’s what I thought. Stop trying to justify bullshit that you have no firsthand knowledge of.
Have you ever tried designing anything at all?
See above response.
I’ve done so already & I’ve gotten paid nicely for it. Even if I hadn’t, there’re still other projects that do that very same thing that you think can’t be done. Since you’re claiming to know so much, where’s YOUR fucking kernel? In fact, where’s your…anything at all???
Hey, shit for brains – this is what Linux has been doing over the years, which culminates in having to rewrite a subsystem when those changes build up to be no longer maintainable.
Congratulations: YOU’VE RE-INVENTED THE VERY THING YOU CLAIMED TO BE ABLE TO FIX. How does that feel, dipshit?
Most of the rest of your post is dealt with by this simple fact.
Then why isn’t it killing Linux off, you tosser? Since your design is so fucking awesome, why don’t you tell us where it’s being used? I bet they’ll replace your crappy OS in the future with Linux.
Unlike you, I DIDN’T claim to be able to design something that covers all eventualities, so I don’t have to prove my kernel designing chops, because I never made that claim. What I claim is that Linux, the kernel and the ecosystem, changes so often that it is impossible to design something perfect that never changes. Learn to read, you illiterate, uncomprehending cunt.
I recall you crying that Linux has always had churn. Thanks for proving that Linux caused its own churn problem. In the past, I’ve watched whole parts of the kernel get repeatedly patched & still not work. I’ve read emails of Linus chewing out developers for polluting the kernel with crap that was clearly broken and eventually had to be completely scrapped without ever having properly worked. That’s not at all what I’ve been rallying for. If you think that it’s the same thing, then you are obviously dumb as fuck.
Because the purpose of my code is to actually work. It’s not a popularity contest, it’s a functionality contest. There’re tons of places that Linux will NEVER go simply because of being inadequate. It’s not even an issue with the majority of the kernel, it’s really just those precious shitty subsystems for some of the drivers that you’re constantly defending, even though they don’t work. I guess that’s why nVidia completely disregards those shitty subsystems & uses its own.
More lies. I never said that it had to cover every eventuality. I just said that it shouldn’t churn every fucking month! Is that so hard to comprehend? Are you really that stupid? But, maybe you really are.
If you read my first comment on the subject, that is what I fucking said. You seem so proud for what is essentially your inability to fucking read. Go back to kindergarten and learn your ABCs before you speak to an adult.
Linux causes its own churn and its own churn makes it impossible to design something up front without halting development for a year, which is unacceptable to anyone who needs Linux to work now. The way Linux is developed is because it fits this constant churn cycle.
You are claiming to know better. And yet time and time again you show you fucking don’t. That is why I keep saying you have no understanding of scale and change.
Then why don’t you tell us where your perfect OS is? I’m beginning to think you’re fucking making shit up.
You claim that you have the design skills, therefore, you make the claim that your OS should be able to slide right in to any niche Linux finds itself in.
The fact that it DOESN’T proves you don’t know how to design something that works for even a quarter of Linux’s uses.
Let me remind you you are claiming you know better than the Linux developers based on your cowardly-backtracked claim that you think it should be easy to design something up front and doesn’t need to be ripped out.
Name one Linux subsystem that gets churned “every fucking month” then. You call me a liar. You’re the fucking liar. And you’re stupid, illiterate and a coward.
Yes you did say it had to cover every eventuality. You say it indirectly EVERY TIME you claim Linux developers should be able to design for use cases that don’t exist yet. Learn to understand the implications of your argument, you retard.
Well this discussion quickly escalated…
BTW, when has this NOT been the case? That’s no excuse.
Yeah, and how much hardware do the BSDs have to accommodate compared to Linux?
Linux is continually being repurposed for a lot of esoteric hardware that no other operating system experiences.
Like I said, you have absolutely no understanding of scale and change.
Ok, let’s get down to brass tacks. None of that esoteric hardware means shit in a discussion about desktops, unless it’s actually desktop hardware. Also, considering that we’re talking about desktops, it’s really a poorly attempted distraction to bring up hardware that has nothing to do with the topic at hand. But, since you want to take it there, name a piece of hardware that must be constantly designed around multiple times just to get the X server or even XFCE to run on it, that no other OS works with…I’ll wait.
Sounds like you’re still trying to blow smoke about commodity hardware & the development of infrastructure that supports it. No one believes that bullshit. Either design it right, or move out of the way for someone else to do so.
Your whole argument has basically amounted to bull’s milk -& I’m not drinking it!
It has EVERYTHING to do with it. The esoteric hardware uses the same driver framework as common desktop hardware. Not just esoteric hardware, but closed hardware, which may as well be esoteric.
Have you not heard of graphics cards? Have you not heard of the difficulty of getting X to incorporate new functionality in poorly documented and/or closed graphics cards?
You’re an idiot.
“Design it right, or move out of the way”.
NO.
Go on. Stop them. Make them stop with your convincing arguments. I think this attitude exposes you as a petty dictator wannabe.
And how many other OSes have had to constantly recreate the same framework within the same major version number???
Have you ever actually written any drivers for said frameworks? Have you ever actually written any video card drivers? Have you ever actually compared the codebases of multiple OSes? It sounds like you’re trying to argue something that you have no intimate knowledge of.
Says the guy who sounds like he hasn’t been in the *nix community for more than 5 years.
The last resort of children who have no valid argument. Get back to me when you make it past puberty.
No, what? No, you don’t want it designed correctly? No, you want to just code bullshit? Or no, you can’t design OR code & are just talking out of your ass?
Hmmm…not accepting bullshit excuses makes me a wannabe dictator. That’s rich.
You must be an idiot if you think the major version number of the Linux kernel means anything much these days.
The testimonies of the people who do write this stuff are not enough for you? Okay.
Do you know how many merge requests Linus gets per day? Do you know how many merge requests Greg Kroah-Hartman and all the other maintainers get per day?
Unless you can wrap your mind around thousands of merges over the course of a release cycle and think you can just “design it right” before you can accept any of those new drivers/features and think you’ll never need to redesign ever again, you are fucking delusional.
Uh no. I call you an idiot because that’s what you are. The whole “design it right or move out of the way” is a stupid argument for anything.
No to “move out of the way”, you retard.
It’s the “move out of the way”, as if people have to listen to you, that makes you the wannabe dictator. Learn to read. Linux is going to continue on because it has momentum and billions of dollars of revenue behind it. No one is going to stop just so you can perfect your design that everyone else must use but not all would agree on.
You think you have the best design mind? Go ahead. Write your own kernel. See how many people would agree to use it. See how well your perfect design can predict the future so much so that it will never need to be redesigned ever again. Put up or shut up, you imbecile.
If this fact doesn’t bother you, then you really don’t understand the basics of software development. Perhaps you should go to school -you know, that place where the academics are. Where you learn how to not be ignorant.
As a person who’s written software himself, no, second-hand accounts aren’t enough for me. So, before you try to tell me shit about developing software, try doing it yourself -otherwise, STFU. I’ve been a developer since the mid 1990s. Telling me second-hand stories doesn’t mean shit. I’m not talking about scripts, either. So, yeh, I know what the fuck I’m talking about. I’ve done *nix AND Windows development…even back in the 3.1 days.
So, you haven’t written any code & you haven’t been in the *nix community for very long. Still spewing second-hand knowledge, huh? Here’s a clue: That’s not any different than other projects that experience massive growth & have to deal with development that’s happening around the world -all pouring in at the same time. The only caveat is that Linux is just a kernel, whereas many other projects are WHOLE FUCKING OPERATING SYSTEMS.
So. You haven’t developed anything, yourself, but you think that’s a stupid argument for engineering code. Yeh, right.
Well, since you aren’t a developer, then you aren’t in the way. In fact, you’re not even on the map. You’re just another internet parrot.
You have no idea of how these things go in cycles. After Linux fucks up the rest of the *nix community, some new OS will pop up & replace it. Momentum today doesn’t mean momentum tomorrow. And, for all of that momentum, there still hasn’t been a “year of the Linux desktop”…so much for momentum…
I’ve already written software that plenty of people use. Sounds like you haven’t even written software that you even use yourself. So, before you worry about the software that I write, how about you write some yourself. Or, just STFU.
I write software, including plugins for Eclipse that interface to mainframe programs. And I’ve written mainframe programs and dealt with customers who have expectations of mainframe programs, and different customers who want different things.
I’m actually the guy on the team who argues for more thought in design than other people in the team actually do, but even I realize it’s not possible to design for eternity.
You seem to think you can.
PUT UP OR SHUT UP.
Have you been exposed to all the different driver requirements that kernel developers have to consider? My “second hand sources” are from those kernel developers themselves, like Greg Kroah-Hartman. Do his job for a year before you strut around like you own the place, you pansy.
One size doesn’t fit all, you idiot. I work on software where the version number ticks over every two years due to considerations other than redesign.
The fact doesn’t bother me because I’m not a f–king religious ideologue where there’s only one way to do things and everyone else is a heretic that must be burned.
You are f–king retarded.
Go on. Write that Linux killer OS. You’ve been talking big these past few comments but you have f–k all to show. That’s the bare fact.
If there’s ever any talk more empty than the “Year of the Linux Desktop”, it’s talk of a new OS killing Linux. Just like talk of the mainframe dying.
There hasn’t been a year of the desktop, but Android already has the greatest smartphone OS share. You sound like Janos Slynt in that last episode of Game of Thrones, ranting about how giants don’t exist while missing the plain reality that in the real (i.e., GoT) world, giants exist. Linux won’t win the desktop because the world is CHANGING and the desktop is no longer that important. There’s that f–king word you don’t like to hear. CHANGE. You seem to think it doesn’t exist. Good luck to you.
Yeh, mainframes -whose hardware doesn’t change very often…but you keep talking about change. Yeh, you’ve really proved your point there.
Hey, dipshit, no one ever said design for eternity. However, it’d be nice to have a system that doesn’t replace crap with crap over & over again. But, I guess you wouldn’t know about that.
Every kernel developer for every OS is exposed to lots of requirements. That’s not something that’s unique to Linux, brainiac.
Just because your version numbers don’t tick often doesn’t mean that your argument has merit. If your API changes, then that’s technically a new version -it’s not the fucking same version, retard.
Apparently, the fact doesn’t bother you because you’re used to writing shit code, so accepting it must be perfectly fine with you. That has nothing to do with ideology, it has everything to do with wanting QUALITY. Yeh, I bet you’ve never considered that word before.
Sticks & stones from an asshat.
I don’t have to write a Linux killer, Linux isn’t even at the top of the heap (neither is BSD). However, regardless of whether I write it or someone else does, it’ll still come & it’ll still bump Linux out of the number 3 spot. It’s not a matter of who or if, it’s a matter of when.
Kinda like the talk of Linux killing Windows or MacOS X??? Yeh, I thought so.
Except that you can’t replace the desktop with a smartphone because of all of the things you will NEVER be able to effectively do on a smartphone. So, smartphones won’t kill the desktop. Smartphones still need to be attached to actual computers, periodically, to use them to their fullest potential. If it wasn’t for them actually being phones, tablets would completely replace them. Still, even tablets can’t replace desktops. Your point is moot. As for change, change isn’t bad -constantly trading out code that doesn’t work for more code that doesn’t work is bad. Buy a clue.
The way people/banks use mainframes today changes. Change is change, no matter what is changing. It could be hardware. It could be usage.
You basically said to design for eternity. To think you can design something so perfect that it only needs touch ups every once in a while IS to say you think you can design for eternity. Put up or shut up.
I never said it was unique. It is the VOLUME of differing requirements that is unique to Linux. No other operating system gets requested to work in so many different areas all at once. Name me one kernel code base that has to work in desktops, smartphones, dumb phones, routers, Raspberry Pis, supercomputers, server farms, mainframes on many different processors like x86-64, ARM, Itanium, PPC, MIPS, Sparc and gets thousands of merge requests every release cycle (of a few months).
Your stupid argument was that Linux redesigns things within the same major version number. You were talking about the outward facing version number. Understand your own arguments.
Ubuntu changes version numbers every six months. A lot of projects have changed to releasing on time frames rather than API changes. Version numbers do not have a universal meaning, simple as that.
Yes you do. You claimed to be able to design an OS so perfect that it will only need touch ups here and there. You claimed you wrote one and got paid for it. You made the claim and now you’re backing away from it like the fucking big talking coward you are.
You thought so what? Where did I make any fucking claim about Linux killing Windows or Mac? You have a fucking chip on your shoulder about one OS killing another OS, when my claim was much more specific about why Linux is the way it is. Grow up, you cunt. Go rant about your bullshit to someone else.
I never made the claim that smartphones will replace the desktop either, you fucking cunt. You talked about “Linux on the Desktop” as though that was what was going to kill Linux, as though the desktop is what makes or breaks an operating system. I brought up Android as an example of Linux finding an end-user/non-server niche in which it flourishes.
English, motherfucker, can you read it?
Wow, you really ARE stupid. Here’s a hint -if the hardware changes, then the kernel subsystem changes; if the way people use the overall system changes, then the userland libraries & applications change.
I basically didn’t say anything remotely similar to that. It’s your stupidity that’s not allowing you to understand that you can’t churn the fucking outward-facing systems continuously & expect everyone else to continue sharing the ecosystem with you because of your bad habits. You don’t have to change the kernel’s APIs (drivers or otherwise) multiple times within the same fucking year. Get a fucking clue, you dumbass.
Hmmm…still trying to compare Linux to Windows, huh. Your argument is very weak. Other than Itanium (which is basically dead) & Raspberry Pi, every single other platform has the same basic group of OSes run on them. Why don’t they have the same amount of requests? Because the developers who work on those OSes aren’t often available for every random fuck to start asking for bullshit. Requests get prioritized, triaged, & filtered. It’s pure stupidity to think that you can even read every request, so why would a person think that every single request should be implemented? That’s ignorance & you have tons of it.
In the case of Linux, the outward facing number is the ONLY fucking version number. It’s not like a commercial OS that has an internal version number & a marketing version number. Understand how stupid your response was.
Ubuntu doesn’t develop the kernel, you dumbass.
No, I claimed that I can develop subsystems that don’t have to be completely scrapped within the same major version number, within a few months of completion, or even within the same fucking year. I haven’t backed away from shit & I never will. As far as cowardice, everyone who knows me knows that I’m not a coward. However, you’re welcome to settle this in person -instead of hiding behind a computer. If not, then it’d be prudent to avoid taking this dialog into directions that you aren’t capable of following to completion.
So, your example doesn’t hold up & you now want to back away from it. It was clearly you who brought up the subject of Linux having momentum as if it’s here to stay. Yet, its momentum isn’t shit compared to the number 1 & 2 systems. So, stop being a Linux whore & look at it for what it really is -a decent system that could be much better if there weren’t so many contributors who share your shitty design attributes.
You’re starting to lose track of your own points. I guess that’s what happens when you don’t actually have a point.
Now, I’m bored with your obvious lack of any actual first hand knowledge of what you’re talking about. I’m bored with you making excuses for people coding whole subsystems without first giving a thought to their actual design. I’m also bored with going back and forth with a piece of shit who can’t make a point, so he has to start flinging insults. So, this will free up time for both of us. I can go back to enjoying my day & you can go back to sucking off farm animals. Don’t forget to use mouthwash afterwards.
Have a nice day!
“Done”
So your whole act of being a cunt is that you just don’t like people who like Linux and/or understand why Linux development works the way it does.
Fuck off and go jerk yourself off elsewhere.
I guess the obvious retort is: then why does the desktop on BSDs suck so very, very much?
I was being nice saying that they couldn’t devote resources to doing it right.
You’re essentially saying, they can’t do it right because they can’t do it right? That’s kind of insulting to the BSD devs.
The obvious answer is that the BSD developers mainly don’t deal with desktops…until now. If you recall, the desktop software was supposed to be portable & multiplatform -the X server runs on pretty much ALL *nixen systems. The DMs are supposed to run on the X server & really aren’t supposed to be kernel-dependent. That’s how it’s always been until the Linux developers came & started upsetting the consolidation that’d taken so long to establish. Unless you’ve been dealing with *nix for over 2 decades, you wouldn’t know anything about this. Now, you have portions of the DM & the X server starting to creep down into non-standard APIs & shitting all over the ability of the whole software stack to run in a multiplatform way. Why else do you think that there’s now a BSD desktop in development? Yet, so many Linux users started complaining about how resources would be better used if they’re shared. Well, no, the Linux developers have already proven that there’s no benefit in trying to share resources with them.
Qt.
Get it right.
It might not be so light under the hood, but what I am asking is this: is it important at all? What need do these “light” desktop environments serve at all?
I mean, they are all fine and dandy, until you start your first program. Any modern browser takes more resources than the OS and the DE combined, perhaps several times so. If you want to edit documents offline, you are stuck with LibreOffice (no, gEdit or Abiword do not cut it) — not exactly lightweight, either. If you do programming (not in Python), you will need an IDE; and if you are so unlucky to be a Java programmer, you will need something really heavy, like Eclipse. Games? Don’t make me laugh.
So I guess these lightweight DEs are more like a beautiful wallpaper: they make you feel better for the first 5 seconds after your desktop starts up. After that, it doesn’t matter anymore.
I think some people have the nostalgic idea of still running on decade old hardware, probably without a modern web browser.
For the rest of us, there’s SSDs and tons of RAM.
My point was that mostly we need that SSD and tons of RAM because of our web browser & co. and not because of the DE — and there is no real replacement for those. I have used KDE, Unity, even XFCE on the same Core2Duo laptop, and what I found is that aside from bugs (such as Unity menu integration making Firefox unbearably slow), the DE hardly matters. Granted, if you wanted to run Linux on an Amiga 500, you would have to choose everything very carefully.
I think you’ll have to test it on less than a Core2Duo to maybe see the difference.
I sometimes do wonder if the people who use modern hardware and still notice a difference aren’t doing so because of a psychosomatic effect.
There is a difference between resources going to software complexity and resources going to multimedia. Software complexity brings a host of problems: maintainability, security, and vendor lock-in.
We can examine your examples by these criteria —
* Gaming: optional, doesn’t handle private information, other software doesn’t depend on it => don’t care, just need RAM and disk space
* Browser: essential, handles private information, other software doesn’t depend on it => a reliability and security risk, must remain vigilant, but luckily there are a lot of drop-in replacements to choose from
* IDE/word processor: same as with browsers, but less easy to replace with alternatives. I use the Unix shell for coding and LaTeX for big documents, but understand that these are not viable solutions in many spaces.
* OS/desktop: same as with IDEs/word processors, but even more difficult to replace. Special care must also be taken since many parts of the OS and DE run with elevated privileges. This is where one should demand good design and execution, and invest in platforms that deliver it. So far Linux works for me (I use Gentoo), but I might move to FreeBSD in the future.
Desktops are more of a problem as they try to strike a balance between Grandma-usability and being maintainable. I understand XFCE’s choices given their limited manpower, but they introduce a problem.
I think the idea is that the desktop environment should not be the one consuming resources, but leaving those to things like the IDE, Office Suite, Games, etc.
First of all, I do have hardware that gnome and kde don’t run on and XFCE does.
Secondly, a lot of the applications that I do run, such as Emacs and a terminal, use relatively few resources.
I find it really annoying how a lot of people use the word “traditional” to refer to the Windows95 user interface.
Well, here’s some shocking news:
WINDOWS 95 WAS NOT THE FIRST OPERATING SYSTEM OF EVERYONE ON THE PLANET.
Some of us consider “traditional” to be SunView or Indigo Magic Desktop (SGI), which, hello, look nothing like Windows 95.
To me, the Windows95 interface is about as alien as it gets as I’ve never had to use one for any length of time.
FYI, my primary desktops happen to be Window Maker on Ubuntu 13.04 and OSX 10.6.
WindowMaker is my interpretation of traditional as it works like NeXT. I’ve been using WindowMaker since the late ’90s.
Window Maker on my dev machines but Xfce on my laptops. Only I use my dev box; others also use the laptops.
You most likely belong in a minority there. Definitions such as this always go with the majorities, not the minorities, and it’s pointless to get all upset about that.
As someone with experience of said non-western rich countries, I can tell you with authority that no-one wants unstable, half-polished, unpredictably changing, desktops that don’t work on half the hardware.
What they want is something that works. Cheap and not working is not acceptable. It is condescending to think that poor non-westerners can put up with broken, unstable, barely supported technology.
Look at Nokia. Closed. But solid, and cost-effective.
If you are right, what is the uptake of OSS desktops in that world? Almost zilch. The market has spoken.
Bottom line: "they", which includes "me", will take closed but working over broken but "free".
Don't go putting words in my mouth; I never even so much as implied that. I was talking about the need for a cheap/free OS that can be used even on decade-old hardware. That *is* a real use case already out there, and your silly, childish exaggerations don't change that.
Yup, I do realize that I’m in the minority.
However, this is OSnews, where one would expect users to have an interest in things other than Microsoft.
I was just bringing up the point that words like "traditional" and "conventional" should not automatically be inferred to mean "looks like Windows95".
In some parts of the galaxy, drinking warm fish juice for breakfast is considered traditional (5 pts to whoever can identify this place). I for one prefer my fish juice cold.
When I think of "traditional" desktops, I think of a desktop background that you can put icons on, some sort of taskbar/dock, a recycle bin, etc. I'm not sure if Win95 was the first to put all of these elements together, but I think it was the first widespread "mainstream" desktop environment that most people can identify with. So referring to it as the de facto "traditional" desktop is pretty accurate, since most of us have been using it for nearly 20 years.
Yes, and without a "Start Menu", like LisaOS, MacOS, GEM, TOS, AmigaOS, Nextstep, RiscOS, Windows before 95, and OS/2. Everyone who used a GUI before 1995 still knows what a traditional GUI looked like for more than ten years before MS screwed it up.
On Topic: I still always liked Xfce
It's traditional not because it was first, but because it was so popular with so many people for so long.
For me, if a desktop can’t tile and/or doesn’t come with dmenu, then it may as well be toilet paper. The whole “traditional” clicky-clicky desktop is beyond me.
Thing is, we’re a minority. No one gives a toss about us, and they really shouldn’t. We just whine too much for our own good, to be frank.
WIMP: Windows, Icons, Menus, Pointer. It’s older than Windows 95, it’s even older than the Mac. It goes all the way back to Xerox PARC, where both Microsoft and Apple got inspiration for their first mouse-driven interfaces.
That's not to say minimalism and non-standard interfaces are bad things; I've found a happy medium between minimalism and features in Openbox, but it's not for everyone.
While I like Win95 for gaming, I always preferred Program Manager over the Start menu. Maybe because DOS installers don't create Start menu shortcuts anyway.
From this perspective, it's Ubuntu Unity that went back to the roots, as it's very similar to Amiga Workbench 3.x, NeXT, and older MacOS. I like those inspirations, and the fact that a commercial Linux desktop's developers saw something outside the Windows world, something the free OSS devs from KDE and older GNOME sadly didn't realize.
I want to like XFCE, and to a large extent I do, but environments like MATE or LXDE are significantly lighter on resources. My main problem with XFCE is the lack of integration of any sort of half-decent world clock. Something like gsimplecal can be launched as an application, but I know of no way to integrate it into the XFCE clock in a non-hackish way. The other big beef is that the placement of icons on the desktop is totally inflexible. So I need to use either pcmanfm or nautilus as a file manager, causing me to flee to LXDE or GNOME Flashback (or MATE). Because of these two preferences (requirements, for me), XFCE in its native form might never work.
As a long-time GNOME2 user, I tried GNOME3 and even stuck with it for 3 releases, trying to smooth its edges with extensions and tweaks that reduced some of the annoying changes in the user interface. One thing I found frustrating was how slow it was in spite of running on some pretty top-of-the-range hardware. I'm not sure how much of that was caused by the use of GL acceleration with open-source drivers (I was using a Radeon HD3870 at the time) and how much by GNOME3 itself, but it was pretty damning. It also had some horrible quirks: for example, setting the polling rate of my Razer mouse to 1000Hz caused the window manager to suck 20% of a CPU core for no apparent reason.
In spite of this, what put the final nail in the coffin for me was the stability. The last version I used was 3.8, so I understand that it was early in the 3.x cycle, but it was just too unstable for my tastes. Common functionality, both in the window manager / file manager and in the accompanying core applications, contained horrible bugs. Crashes were frequent, and I filed over 20 bugs via ABRT while using it. The use of extensions to fix UI problems was probably making the issue worse, as some had bugs of their own.
So one day I just decided to give Xfce a spin; I hadn't used it for a while, and it didn't seem to have changed much, which was good. What was better, however, was that it was fast, smooth, and solid. I've been using it for a year now as my main desktop, and the only issue I've had was a minor glitch in the Thunar file manager when plugging in a digital camera. That's it: no crashes, no "you have to restart the shell" messages, no broken functionality in the core applications.
Of course it might not have the bling of other desktop environments, but frankly, who cares? I code all day on my Linux machine for a living; it needs to work, to be fast and responsive, and to be stable. Xfce gives me these three things in a familiar and unsurprising environment, so why change?
I can attest to the issue with polling rate, although I have it with compton & xfce. When I move a window, compton tries to render/update it 1000 times per second, despite my refresh rate being only 60. It causes a lot of CPU usage and even jumpy movement.
Setting it to 100 Hz fixes that and makes XFCE+compton the fastest OpenGL-composited desktop I've ever used.
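For anyone who wants to try the same fix, here's a rough sketch of how I persist it. This is just my approach, and it assumes a USB mouse handled by the usbhid kernel module (its mousepoll parameter is an interval in milliseconds, so 10 means roughly 100 Hz) plus root privileges:

# Sketch: persist a lower USB mouse polling rate so the compositor
# isn't flooded with ~1000 pointer events per second.
# Assumption: the mouse is driven by the usbhid kernel module; needs root.
from pathlib import Path

def set_mouse_poll_interval(ms: int = 10) -> None:
    # mousepoll is the polling interval in milliseconds: 10 ms is ~100 Hz.
    conf = Path("/etc/modprobe.d/mousepoll.conf")
    conf.write_text(f"options usbhid mousepoll={ms}\n")
    # Takes effect after the usbhid module is reloaded, or on the next boot.

if __name__ == "__main__":
    set_mouse_poll_interval(10)  # ~100 Hz instead of the mouse's 1000 Hz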
I hate to say this, but after years of hoping that an old-style desktop would become functional, stable, and popular... I now think the game is over.
The need for a desktop has diminished. More apps and services are accessible from a wider range of clients: iPads, Chromebooks, Android phones...
The role of the desktop is now simply to provide access to those apps, and to take care of things like power and peripherals.
I exaggerate, but not by much.
This is why many, many OSS people use the best hardware and desktop combination to get on with working, coding, painting... There are very few Linux-specific apps anymore, so the need to get the desktop working flawlessly is not so urgent. An Apple MacBook Air... and you're done.
And virtualization, of course, further killed the need for a Linux desktop.
While that may be the case in western countries among well-off citizens, it's still a losing proposition in developing countries and financially very poor areas. The need for a low-cost, high-quality desktop may indeed have dropped quite a bit over the years, but it hasn't disappeared entirely, and that's only talking about functionality and costs; the need for freedom hasn't diminished.
Death of the desktop predicted. Film at 11.
I’m pretty sure Xfce has let you scale fonts for at least five years, genius. Reducing screen resolution is the idiot’s method of increasing “readability.”
That’s pretty harsh. Maybe it’s just a personal preference? Font scaling has some issues on certain WMs/DEs, and Xfce in particular can look crappy when you start increasing the font size. Some of the window decoration themes simply aren’t designed for scaling up the fonts. With a good enough monitor, reducing the resolution can give you a correctly proportioned, easily readable screen without goofy-looking fonts.
I am sure what he means is telling the system that the screen is a higher DPI than it actually is (like setting the pixel density to 150dpi instead of what could be the actual 100dpi), so that the desktop believes the screen is smaller and renders things larger. Or maybe the reverse, if the text you get by default is too large, like with Ubuntu.
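If that is what he meant, the override is easy to experiment with from a script. A minimal sketch, assuming an Xfce session where the standard xfconf-query tool is available (the /Xft/DPI property in the xsettings channel is, as far as I know, the same one the Appearance dialog's custom-DPI setting writes):

# Sketch: tell Xfce the screen has a higher DPI than it actually reports,
# so text renders larger without lowering the resolution.
# Assumption: an Xfce session with xfconf-query on the PATH.
import subprocess

def set_xft_dpi(dpi: int) -> None:
    subprocess.run(
        ["xfconf-query", "-c", "xsettings", "-p", "/Xft/DPI",
         "-n", "-t", "int", "-s", str(dpi)],  # -n creates the property if absent
        check=True,
    )

if __name__ == "__main__":
    set_xft_dpi(150)  # treat a ~100 dpi panel as 150 dpi; text grows about 1.5x

Setting the value to -1 should put things back to the automatic default.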
I switched to Xfce from KDE 3.5 when 4 came out and was so crappy, and even though KDE 4 is much better now, I have never felt the need to switch back. It’s simple, it works, and just keeps getting incrementally better.
I used a bit of GNOME 2 and Xfce during the end of GNOME2/KDE3’s time, and only tried KDE4 as an everyday desktop with a more recent openSUSE release. I was still on a machine with only a gig of RAM, so I wasn’t exactly happy with its performance… but since then, I’ve moved onto tiling window managers (using i3 right now).
I've been kind of wanting to go back to Xfce and give it a try after all this time, but just can't bring myself to... and even if I did, I think I'm so spoiled by tiling window managers and their total lack of the mouse as a requirement that I probably wouldn't feel quite right using it (or any stacking window manager, for that matter). Still... I had a great experience with Xfce in several releases of Zenwalk, KateOS, SalixOS, and others years ago. Xfce was definitely at the top, IMO.
And it pretty much works just like Windows 7.
With Windows 8.1 and Update 1 (both free to Windows 8 users), it provides a pretty consistent Windows desktop experience. You can set Windows to boot directly to the desktop; in fact, with Update 1 it does this by default on non-touch hardware.
The Windows 8/8.1 desktop features all the familiar elements of Windows, such as the Taskbar for hosting your open apps and shortcuts. File Explorer provides improved file management tools, and its ribbon-based toolbar gives quick access to configuration and organizational tools. You can now mount VHDs and ISOs natively.
To be honest, the desktop in Windows 8/8.1 actually feels more powerful. So I find it strange when you say Microsoft is all about touch. Yes, they were forcing it on users, but they never killed the desktop, just made it a legacy feature of the OS, much like the transition from DOS to Windows 1.x/2.x/3.x, which still provided DOS as a legacy component for legacy apps.
I work in an environment where I am exposed to a variety of customers, both adults and students. A lot of teachers bought notebooks with Windows 8 when they went off on vacation and brought them to use at work. Students, too, seem to have Windows 8-based notebooks that outnumber the Windows 7 notebooks. I have asked several of them about their experience with Windows 8. Most say it's really different (that's the usual response), but then you hear: it's not really hard to use.
You can notice the engagement with the OS: lock screens are customized with images of themselves and their families, and the Start Screen is normally changed to a different background and color scheme to suit the user's taste. They seem to find the desktop just fine if they want to browse the web or use traditional desktop apps like Office.
So this idea that users are somehow complaining comes across as bull crap to me. We might say the upgrade pace to Windows 8 is evidence that people are not interested, but what would you expect after a great release like Windows 7, when many users are more than happy with what it does for them?
Windows 7 is the new XP, and I won't be surprised if, come 2020, Microsoft is in a similar situation, trying to get users off it and onto Windows 12 or whatever version is out by then.
Windows has a broken UI design. It is designed for both touch and non-touch devices simultaneously. The problem is that any given device is one or the other. If you are on a touch device, why should you have to switch to the desktop to use a normal app? And for God's sake, if I am on a desktop with no touch, why can't I lock out Metro? It is very jarring to have to pop back and forth between the two interfaces when trying to get work done.
EDIT: It seems you can come close to locking out Metro using some third-party tools. But still, why should you have to pay extra for that?
I have a touch laptop running Win 8.1 here that begs to differ, I use the touchscreen often while using the keyboard. It works well.
Have you considered a job in Microsoft’s PR department?
There was a lot of crazy stuff in that post, like the idea that users would find "engagement" attractive.
You either recognize that the UX is fundamentally screwed up and needs to be improved (like Microsoft does now), or you're in denial (Microsoft under Ballmer).
I kind of wonder about the future of Xfce. I like the DE, but I wonder if GTK2 is a dead end. And I don't really like GNOME3 at all. The thing that has kept me away from KDE is all the bloat; I don't want a bunch of apps I don't normally use, yet they are tied in. Qt 5 is supposed to fix that and make things modular. LXQt is now out in beta, and if it is modular, it will be my new desktop.
Linus Torvalds, who hates C++, chose to have his program Subsurface converted from GTK to Qt because it was too hard to do basic things in GTK.
Huh? What does this mean?
It means that not even the suckiness of C++ (in Linus's view) was enough to stop him from ditching GTK2 in favour of Qt. That's how much of a dead end GTK2 is, from the point of view of one developer.
I was wondering about its future too, since they haven't released anything since April 2012.
The main reason I don't use XFCE is that it is designed to integrate with xscreensaver and xlock. I detest both of those programs. Xscreensaver has fade-to-black, but the fade can't be interrupted while it's in progress. And the xlock screen is hideous, to put it mildly. Not only is it ugly, it also has an extremely long, non-configurable delay if the password is fat-fingered.
It is possible to replace these ancient and crufty xscreensaver/xlock programs, but it requires manual hacking of aliases to xlock, which XFCE is hard-wired to use.
Not true:
Settings -> Keyboard -> Application Shortcut pane
If you double click on “xlock…” you can change the shortcut AND the command itself.
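If you would rather script that change than click through the dialog, something like the following rough sketch should work. The channel name is real, but the exact property path for the lock binding differs between setups, so list yours first with xfconf-query -c xfce4-keyboard-shortcuts -l (the slock locker and the key path below are only examples):

# Sketch: rebind the lock shortcut to a different locker via xfconf.
# The property path below is an example; find your actual binding with:
#   xfconf-query -c xfce4-keyboard-shortcuts -l | grep -i lock
import subprocess

def rebind_lock(command: str = "slock",
                key: str = "/commands/custom/<Primary><Alt>Delete") -> None:
    subprocess.run(
        ["xfconf-query", "-c", "xfce4-keyboard-shortcuts",
         "-p", key, "-n", "-t", "string", "-s", command],
        check=True,
    )

if __name__ == "__main__":
    rebind_lock("slock")  # or i3lock, xscreensaver-command -lock, etc.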
Hope this helps,
RT.
In Xubuntu 14.04 it uses Light Locker instead, so the lock screen is like the login screen.
This review should have been of 14.04. It’s great.
A Linux distro with 76 window managers to try out:
http://linuxbbq.org/
Kochise