This week’s KDE Commit Digest reports on an installer for KDE on Windows and the problems the developers encountered setting up a working environment for KDE to run in. Many screenshots are included, showing the first applications (such as Konqueror) running natively.
Maybe we’ll finally shut up the people saying “KDE is dead”, “There is no visible progress” and other nonsense.
Very impressive technology, indeed. Wake me up when they port Amarok
Please say it ain’t so….Arrrrrrrrrrrrrrrrgggggggghhhh
I’ll second amaroK and add Quanta.
I haven’t been fond of the idea of KDE on windows until reading this.
——–A group of people are currently working on an installer for KDE applications on Windows. The installer is aimed at both end-users and developers, to make installing KDE on Windows as simple as possible.
KDE applications have several runtime library and tool dependencies. The kdelibs library, for example, currently has 17 external dependencies, which are zlib, pnglib, jpeglib, tiff, jasper, pcre, carbon lib, acl, bzip2, libxml2, libxslt, openexr, openssl, gettext, perl, dbus and Qt.————
This type of installer is one of the biggest problems for Linux, IMHO. And to my knowledge, not one single distro has its own type of stand-alone install protocol. No, YUM, APT, etc. are not the same thing; they rely on Internet servers.
(I’m going to use RPM here just for commonality’s sake)
I should be able to double click on an RPM file, be able to click next, next, watch the bar, click next, then watch 17 more bars (if they apply) and then click done. Then use my app. All dependencies should be statically linked to the app so that I don’t need to hunt them down, and I don’t need an internet connection either.
The more Linux apps get ported to Windows, the more experience our guys will have with this type of simple interface, and the more likely it is that we’ll have one soon. Just a thought.
The KDE crew is porting KDE to windows, I hope they reverse-port this install framework to linux. We desperately need it.
“The KDE crew is porting KDE to windows, I hope they reverse-port this install framework to linux. We desperately need it.”
Autopackage? Klik? The frameworks are there, they just have to be used.
Klik still relies on a repository. Indie developers cannot just throw up a simple installer on their blog for people to download with quite the same ease and universal compatibility that Mac and Windows users enjoy. Mac users can install with a drag from a DMG, and Windows users have MSI or the excellent NSIS. Linux? Several distros, several different package managers, dependency hell, no simple installation location.
Kroc, I hope you understand the fundamental difference between software installs on Windows/OS X and Linux?
“Dependency hell”, as you call it, is 1. due to the modular nature of Linux apps and 2. not a problem with a good package management system.
Where your two commercial examples get around “dependency hell” is by packaging everything into the program installer. Then there was the good old Windows “DLL hell”, which nowadays is about as valid a point as “dependency hell” on Linux: moot, or simply not relevant.
These days Linux just has the package manager pull down the dependencies required for the program you want, if they aren’t already installed on your system.
OK, have you ever tried using Debian “woody” with the version of KDE that came with it and then upgrading it to the latest and greatest in Sid? I can absolutely 100% assure you that it isn’t as easy as you’re trying to make it appear. I’ve been using Linux on and off since ’97, and from 2002-2006 it was my sole desktop system, so I think I know what I’m doing. I also provided a lot of forum help in the past on the Libranet forums, including on the example that I’ve just listed above.
Package management is cool, and it mostly works, but it’s not always as straightforward and rosy as you’re trying to paint it. Your average Windows user would not be impressed. Both OS X and Windows offer far easier ways to install applications, imho.
Dave
OK, have you ever tried using Debian “woody” with the version of KDE that came with it and then upgrading it to the latest and greatest in Sid? I can absolutely 100% assure you that it isn’t as easy as you’re trying to make it appear.
True, but then that’s about the equivalent of upgrading from Windows 98 to XP. There’s a reason nobody recommends upgrading Windows in place; everyone says you should do a clean install if you want things to work right.
Both systems have their merits, but I think the strongest plus for package managers is that you can click 1 button and have every single piece of software up to date. On Windows, MS has begun doing that for their own apps through Windows Update, but obviously they can’t for 3rd party software. The biggest weakness is that proprietary software which isn’t allowed into the package repositories often has atrocious installers. I prefer the Linux way personally.
Debian != every Linux distro
Hey I thought this article was about getting KDE code compiling in a Windows environment (using CMake), not about repository vs. package based installations.
Hey I thought this article was about getting KDE code compiling in a Windows environment (using CMake), not about repository vs. package based installations.
Exactly, but it seems to have been hijacked by disgruntled Windows power users who just can’t get their minds around the fact that, oh horror, Linux *is* different from Windows!
OK, have you ever tried using Debian “woody” with the version of KDE that came with it and then upgrading it to the latest and greatest in Sid?
As someone else pointed out, that’s a HUGE update, and it’s no surprise that it isn’t straightforward. Have you tried replacing the Windows kernel, core DLLs, explorer.exe etc. without a hitch?
On the other hand, I’ve been continuously running an up-to-date Gentoo install on my main desktop for more than two years now. Yes, in these two years boring things happened (the switch to udev, GCC upgrades, yuck!) and Gentoo is not made for newbies, but 95% of the time the simple command “emerge world” keeps the system up to date.
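For the curious, a typical update session boils down to something like this (a rough sketch; the exact flags vary by taste):

emerge --sync                  # refresh the Portage tree
emerge --update --deep world   # rebuild whatever is out of date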
Package management is cool, and it mostly works, but it’s not always as straightforward and rosy as you’re trying to paint it. Your average Windows user would not be impressed.
But they’re using Windows programs… on Windows! I don’t care if it’s a fallacy, I’m ad hominem’ing this argument.
Klik still relies on a repository. Indie developers cannot just throw up a simple installer on their blog for people to download with quite the same ease and universal compatibility that Mac and Windows users enjoy
While it is true that Klik recipes still need a repository, the Klik installer will generate an image file which contains all necessary dependencies.
These images can be offered for download separately, for example see here:
http://kde-apps.org/content/show.php?content=16550
Kroc, you are absolutely right, but you will not see this happen on the Linux platform, because the ‘geeks’ forget what a normal user is and is capable of doing. It’s another reason why Linux will not reach mainstream critical mass. You can expect negative moderation from others (as I probably will) because they simply do not agree with your comments. Pretty sad, really.
Dave
Kroc, you are absolutely right, but you will not see this happen on the Linux platform, because the ‘geeks’ forget what a normal user is and is capable of doing.
As others have pointed out, there are other reasons for this. But of course it’s a lot easier to “blame the geeks.”
You might be unaware of this, but Windows software is produced by geeks too. Bill Gates himself is the ultimate geek. You can be a geek and be aware of usability issues.
You can expect negative moderation from others (as I probably will) because they simply do not agree with your comments. Pretty sad, really.
I’m moderating down your comment, not because I disagree with it (I do, but that’s why I responded to it), but because it is *off-topic*. Please stop using Linux (or, in this case, KDE) threads to spew your FUD, thanks.
Actually, it is on-topic. The article is about the possibility of KDE on Windows, and it has a *simple* installer. The comment hoping this might appear on Linux (or that it would be a good idea) is totally valid. You might disagree with me, but in 5 years, when Linux still has bugger all of the market, people might start to realise where Linux DOES have problems (and that this is one of them).
As to being FUD, that’s bullshit. I’ve used Linux enough to know what I’m talking about, I’ve provided free support to others, I’ve helped beta test as well. I have a quite capable idea of what Linux is about, and where I feel its weaknesses are.
You are not the average user (nor am I). If you don’t currently work in IT support, I suggest you get out there and get a bit more of a grasp of the real world and of ordinary computer users (who make up probably 95% of ALL computer users).
Dave
The article is about the possibility of KDE on Windows, and it has a *simple* installer.
Actually the article (the commit digest) explains that the installer is necessary because, unlike on Linux distributions, there is neither a standard way to get and install libraries, nor a standard layout where such dependencies would be installed.
it *doesn’t* have a simple installer, and never will. it is impossible to create a simple windows installer. you have to choose between letting the users manage all dependencies themselves (17 in the case of KDE); packaging them, with the risk of overwriting existing installed versions, versioning trouble, instability -> the dll hell; or the last option: linking it all statically, blowing up memory usage.
now what’s simple about having to choose between putting your head in a shithole, sleeping with a skunk or swimming in a bunch of fertilizer?
in linux you can do apt-get install kde (or whatever system there is). you don’t have to search the web for an installer tool, possibly unpack it, start it, look for the instructions, answer the questions, select what you want to install, where you want it, etc. no, just start your package manager, search for the app you want, click & done.
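for example, the whole hunt-and-install dance collapses to something like this (a sketch; package names are illustrative and differ per distro):

apt-cache search kde       # see what’s in the repository
apt-get install kdebase    # one command, and the package manager pulls in every dependency for you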
Actually, it is on-topic. The article is about the possibility of KDE on Windows, and it has a *simple* installer. The comment hoping this might appear on Linux (or that it would be a good idea) is totally valid.
Well, when the entire thread turns into a criticism of Linux package management, then it becomes off-topic. There’s not even a single mention of Linux in the article summary!
You might disagree with me, but in 5 years, when Linux still has bugger all of the market, people might start to realise where Linux DOES have problems (and that this is one of them).
Except that I do not believe that Linux package management has *anything* to do with its popularity, and so far you have not made a strong argument in support of this position.
You are not the average user (nor am I). If you don’t currently work in IT support, I suggest you get out there and get a bit more of a grasp of the real world and of ordinary computer users (who make up probably 95% of ALL computer users).
For your information, I actually design UIs as part of my work as a game developer, so I have a little idea about usability issues, since I confront them regularly. An application like Ubuntu’s Add/Remove Programs is *very* usable, IMO – and so is something like Adept/Synaptic for more advanced users.
My opinion stays the same: package managers, especially now that they have nice front-ends like Add/Remove Program in Ubuntu, are still the best way to manage software. Standalone installers, such as the ones used by Crossover, OpenOffice or Google, are also good. For those who really, really want the latest bleeding-edge software, there are distros such as Gentoo, Arch Linux, Mandriva Cooker and the current unstable version of Ubuntu – or they can learn to compile from source.
Since those users represent just a fraction of normal users (who will be content to use what’s in the repositories and/or download standalone installers when they are available), I don’t feel that this is a serious issue regarding Linux adoption.
Beyond this, you are also ignoring the fact that distributing apps with standalone installers is not the responsibility of Linux distro makers, and thus is not one of Linux’s weaknesses; rather, it’s entirely up to the software developers themselves, and they should be the target of your criticism. If a Windows app was available as source code only, would you blame Windows for it?
@Kroc and Melkor:
You can get autopackage for Linux and distributors can provide the .package file on a CD to end users who can just double click it to run.
Many applications, such as CrossOver Office, come with the Loki installer, which is great. I even used to make installers for a Quake 3 mod known as Navy Seals: Covert Ops with Loki’s installer.
It even has Qt and GTK frontends.
http://www.autopackage.org
Having said that, most distros provide good package management and dependency resolution capability with yum, apt etc.
Melkor, non-geek users are not idiots. They are more than capable of installing packages if they put some small effort into learning the new OS they switched to. Linux is not Windows, and expecting Linux to act like Windows is not the smart way to go.
Autopackage is actually used by the Dutch inland revenue for the Linux version of their tax return application. You can choose between a tarball and an autopackage. Both work just fine. (It’s funny, though, that they chose to go with a Motif-based version of wxWidgets for the tax app itself.)
What’s needed far more than install frameworks is a standardized naming convention, so that an rpm, deb, etc. could install on any distro using yum, smart, apt-get or whatever package handling software the distro uses.
I can’t understand it.
I should be able to double click on an RPM file, be able to click next, next, watch the bar, click next, then watch 17 more bars (if they apply) and then click done. Then use my app.
Why? Why do you want this hell on Linux – the exact hell that convinced me to switch three years ago?
Package managers are the best thing since sliced bread.
Why do you want to download a file, click N times, for *each damn program* you want to install, when you can fire up something like Synaptic, click the packages you want installed, click “Apply” and install tens of programs without annoying wizards etc?
The reason there is no standard stand-alone protocol (there are things like autopackage and klik, but I’ve never seen them widespread) is that stand-alone installers are EVIL, and are used on Windows just because most Windows software is commercial, so it must rely on them (in fact, commercial Linux software often has standalone installers).
In fact, I can’t understand why people don’t port apt-get (or something similar) to Windows and allow people to install free (as in beer/freedom) software on Windows with the same ease.
This would be even more useful than the KDE port.
All dependencies should be statically linked to the app so that I don’t need to hunt them down
So you want to install, for example, GTK 10 times if you have 10 GTK apps installed? Hmm, how clever.
and I don’t need an internet connection either.
You can use apt-get, or rpm, or the like, from a CD-ROM, for example.
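For example, with an APT-structured disc (a sketch, assuming the packages are actually on the CD):

apt-cdrom add          # register the CD as a package source
apt-get install k3b    # installs straight from the disc, no network needed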
We desperately need it.
We desperately need to stay away from it.
You’re being completely unsympathetic to the average user – the Windows user, who this KDE port is aimed at.
The average Windows user is actually doing something more complex and time consuming and user-unfriendly, by downloading and clicking tons of separate installers.
The fact that this is the Windows way of doing things is simply the braindead consequence of commercial software dominance on that OS. It doesn’t mean that it’s an easier, friendlier approach – in fact, it’s neither easier nor friendlier.
Having a centralized repository of free software with an easy click-and-go interface would be *really* friendly for the user.
Want to install, let’s say, a cd ripper on Ubuntu? Fire up synaptic, search “cd rip”, choose, click, go.
Want to install a cd ripper on Windows? Fire up Google, get lost in a sea of crappy shareware etc., after half an hour choose one, download it, click the installer, “read” the license agreement, click a LOT of times, choose the install directory, then wait for the license to expire after 30 days (if you didn’t pay enough attention when searching).
The fact that people are accustomed to the latter way of doing things doesn’t mean the user is best served this way. I’m all for taking good ideas from proprietary software (it often has them), but please don’t be dummy borgs.
The average Windows user is actually doing something more complex and time consuming and user-unfriendly, by downloading and clicking tons of separate installers.
You can add “bloody dangerous” to that list. The average Windows user is required to hunt around the Internet by themselves for install kits and run whatever they find on their machine as administrator! This is so terrible from a security point of view it’s amazing.
Compare this to downloading packages from an official repository, where everything is tested and checked against a checksum and guaranteed 100% free of spyware and other malware. Not to mention guaranteed to integrate smoothly with the rest of your system, and available for upgrade at the click of a button as opposed to hunting down yet another executable manually.
you are SOOOOO right…
You are being completely unsympathetic to the average user – the newbie user, who is just learning how to use Ubuntu.
Why should we have to deal with the silly installer+dll-hell?
There are installers for Linux. Firefox, Sun Java and the nVidia drivers can be installed that way. People just don’t want it.
There are downsides to installers, like every application installing the same bunch of DLLs in its own directory – the same DLLs (but usually a different version) that the former application just installed in its respective directory.
It’s called DLL hell, and it is still an issue, and it completely ruins the whole idea of dynamic libraries. There is usually no dependency hell on Linux today, unless you are messing with unstable versions, in which case it’s a human error. _Your_ error.
“Why should we have to deal with the silly installer+dll-hell?… …It’s called DLL hell, and it is still an issue, and it completely ruins the whole idea of dynamic libraries…”
————————————————–
DLL Hell is not something I have come across in a very long time on Windows. Almost all Windows software these days is self contained and does not use shared libraries. Good windows software doesn’t even use the registry, which is the main source of problems on Windows machines. The idea that DLL hell is still an issue for Windows PCs hasn’t been valid for at least 6 years now – developers simply stopped relying on shared libraries when they realised that hard drive space was ample to hold larger executables with static libs, and Internet bandwidth made large installers a doddle to download. Considering that I have used about 19GB on my primary 250GB partition where my software is installed, I couldn’t really care less about bloated duplication of libraries.
I have no problems with the way Linux distros handle software installation, but I do like being able to download the pre-compiled binary directly from the software vendor rather than from an untrusted third party (i.e., the packager/distributor), since I lack the time and the skill to audit source code and compile it myself. For closed source software (which is mostly what I use, simply because there are no OSS music studio apps that even begin to compare to the likes of Sonar or Tracktion; if Mackie ported Tracktion to Linux with full VST(i) support, I would probably switch to some version of Linux), the Linux way of doing things doesn’t really have much effect, since closed source software vendors tend to write their own installers. But it does make it a pain to try new releases of software without attempting to compile them yourself.
It is also a security issue, as I generally trust (in the sense that I know who to blame if it goes pear shaped) the ISV more than software packagers not to insert something nasty into the final binary. If the ISV does something nasty to commercial software that I have purchased, I have legal avenues I can pursue to seek damages or a refund. But with OSS it is all very strictly “use at your own risk” (unless you pay for support).
If someone on the Ubuntu team inserted some malware into the source code of one package of the Ubuntu system, how long would it be before someone realised?
Chances are that it would be a long time (well, time enough to do a lot of damage) before anyone realised that there was a trojan sitting in the binary of some innocuous library that is presumed to be stable and doesn’t get much attention from other coders.
Downloading software with all of the dependencies direct from the ISV is my preferred option, rather than using the repository of the distro, even if it means massive duplication of libraries.
Hard drives are cheap. My time and patience are not.
If the ISV does something nasty to commercial software that I have purchased, I have legal avenues I can pursue to seek damages or a refund. But with OSS it is all very strictly “use at your own risk” (unless you pay for support).
I think it’s been a while since you’ve read an EULA…basically, you have no recourse against them, even if someone had put some nasty code in commercial software.
Distro makers do audit their packages, and work in an open way, so it’s very difficult for someone to insert malicious code. Also, since the risk of getting found out (and being ostracized by the FOSS community) is very high, this would highly discourage those hypothetical malicious code polluters.
So I think that your worries about the safety of distro-maintained code are unfounded. There’s no real reason to believe that software from ISVs is any more (or less) safe.
“I think it’s been a while since you’ve read an EULA…basically, you have no recourse against them, even if someone had put some nasty code in commercial software.”
——————–
Well, where I live EULAs are overridden by legislation such as the Trade Practices Act, and if someone sells me unmerchantable junk, it doesn’t matter what they put in their EULA: I can at the very least give them very negative publicity, if not sue them silly if the vendor happens to be in my jurisdiction. Consumers have a lot of power to whinge and get their way where I live.
“Distro makers do audit their packages, and work in an open way, so it’s very difficult for someone to insert malicious code. Also, since the risk of getting found out (and being ostracized by the FOSS community) is very high, this would highly discourage those hypothetical malicious code polluters. ”
Really? There are enough people with enough time to pore over millions of lines of code across thousands of different packages across hundreds of different distributions that malicious code can’t easily slip through from time to time, and not be picked up and corrected for several days if not weeks? I highly doubt it.
I am sure that once detected, the response to the malicious coder would be as you describe, but that doesn’t stop them from doing it at least once, and then the damage is done.
And even if the packager makes changes they regard as beneficial, they are producing a version of the program that is different to the one that the ISV intended (which may or may not be a good thing), and may introduce bugs that are not present in the ISV version.
You are probably right in saying there is a low probability of malicious code being inserted by distro packagers, but the probability of distro packagers producing a version of a package that has more bugs than the original may be quite a bit higher.
I think it is desirable for distros to consist of core OS functionality, and to leave actual apps to ISVs to distribute separately. There are too many distros that try to bundle every app under the sun, when they should really just include very basic stuff and have a system where people can easily install applications themselves (I am quite partial to the Mac OS X way of doing this, but it is becoming increasingly common to find Windows apps that use a simple drag and drop installation method).
I prefer to keep the OS and the software that runs on it a bit more separated, and most Linux distros tend to blur the two together. Ubuntu is pretty good in that it doesn’t have too much crap installed by default (it is not much different to having a PC with Windows and Office installed by default, with no other extras), which makes it easy to customise to your needs, but a lot of distros bundle 16 slightly different programs of every type of software that can conceivably be packed onto a CD-R, and often none of them are particularly good, and it is time consuming to remove them. That said, many of those distros are a joy to use once they have been purged of all the stuff I don’t want on them.
Really? There are enough people with enough time to pore over millions of lines of code across thousands of different packages across hundreds of different distributions that malicious code can’t easily slip through from time to time, and not be picked up and corrected for several days if not weeks?
Yes, there is. Remember that this is not a centralized process. Take the kernel, for example: the kernel devs audit the kernel code. The distro makers then take copies of those kernels (digitally signed and checksummed) then make patches for them, which are also reviewed, and then packaged, checksummed and signed. There’s virtually no room for someone to insert malicious code, which is why it has only happened a handful of times over the years.
This isn’t Wikipedia – there’s a method to the madness.
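For example, kernel.org publishes detached GPG signatures next to every tarball, so anyone along the chain can re-check them (version number illustrative):

gpg --verify linux-2.6.19.tar.bz2.sign linux-2.6.19.tar.bz2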
I am sure that once detected, the response to the malicious coder would be as you describe, but that doesn’t stop them from doing it at least once, and then the damage is done.
The “damage” is easily reversible, and as far as I know there has never been a serious incident like this for the kernel, and only a few rare ones for other projects.
For the coder, the risks outweigh whatever he could possibly hope to accomplish through such vandalism.
I also don’t understand why people would think this would be more likely to happen in the FOSS world – in my mind, the risk of malicious code being loaded into your PC is *much* greater with closed-source software, especially from small projects. In fact, we can safely say that nearly all spyware and trojan horses are the product of a closed-source development environment.
You are probably right in saying there is a low probability of malicious code being inserted by distro packagers, but the probability of distro packagers producing a version of a package that has more bugs than the original may be quite a bit higher.
There’s no reason this should happen: packaging a version doesn’t mean changing the source code – in fact, 99% of the time the vanilla app is simply compiled and packaged using an automated packaging tool. The packagers don’t touch the code at all, so again your fears are unfounded.
I wrote: “Really? There are enough people with enough time to pore over millions of lines of code across thousands of different packages across hundreds of different distributions that malicious code can’t easily slip through from time to time, and not be picked up and corrected for several days if not weeks?”
You replied: “Yes, there is. Remember that this is not a centralized process. Take the kernel, for example: the kernel devs audit the kernel code. The distro makers then take copies of those kernels (digitally signed and checksummed) then make patches for them, which are also reviewed, and then packaged, checksummed and signed. There’s virtually no room for someone to insert malicious code, which is why it has only happened a handful of times over the years.
This isn’t Wikipedia – there’s a method to the madness.”
I would like to see your reasoning behind that assertion. How many hours does it take one person to audit 1000 lines of code? How many lines of code can any one person audit in, say, a week? How many people are available in any given week to do this? Debian has tens of thousands of packages in its repositories, constituting tens if not hundreds of millions of lines of code. Some of this code is unmaintained, or worked on sporadically, and would rarely be examined closely by anyone, especially if it was packaged in a Debian-derived distro where people assume that the software was automatically packaged.
Harmful code doesn’t have to be deliberately malicious to do damage.
A coder who inserts a trojan into some network code that siphons out things like passwords and bank account details can gain a lot (or may think they have much to gain, even if in reality they don’t) for the risk of ostracism from the OSS community.
I put it to you that there are simply not enough people with enough time and expertise to cover all of the bases, or even half of them.
Don’t get me wrong, I like and support Free OS software, but I am a little sceptical about the security claims made about it at times. Just because anyone can theoretically audit the source code for the software they are using doesn’t mean that they do, or have the expertise to do so, and there are simply too many gaps for things to slip through unnoticed until it is too late.
I am not saying closed source is any better – it is quite the opposite, since you have no chance of reviewing the code – but the difference is that with closed commercial software there is someone to sue (in applicable jurisdictions) or press charges against if they stick trojans that skim bank account details into their software (for example, a cybercrime that is punishable by law), whereas OSS coders are often anonymous or use aliases, and may be very difficult to capture and prosecute.
Right, I think the user will be wonderfully served by having to download and install apps he needs by hand, running into incompatibilities between libraries and apps (or having to install huge statically linked apps), and so on.
but a lot of distros bundle 16 slightly different programs of every type of software that can conceivably be packed onto a CD-R, and often none of them are particularly good, and it is time consuming to remove them.
Well, at least you can *try* this software freely, and maybe find something good you wouldn’t have suspected. And many times “redundant” apps are not redundant: sometimes one has features (and bugs) the other hasn’t, so I often find it handy to have both available.
Linux has a lot to learn from Windows/OS X, but software management is not one of these things.
I have no problems with the way Linux distros handle software installation, but I do like being able to download the pre-compiled binary directly from the software vendor rather than from an untrusted third party (i.e., the packager/distributor), since I lack the time and the skill to audit source code and compile it myself.
What are the chances that a software vendor hasn’t left a few holes in their package? With Ubuntu/Fedora, you have the source files that the final binary packages are made from. You can recompile the SRPMs used in Fedora/Red Hat with one command, i.e. rpmbuild --rebuild package.src.rpm. If you wish to audit it yourself or pay someone to audit it, you can.
With the vendor, all you have to rely on is their word. And considering that many large software vendors sometimes don’t acknowledge bugs or security issues, you can never really be sure.
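A sketch of that audit-and-rebuild flow (package name and output path illustrative):

rpm --checksig foo-1.0-1.src.rpm       # verify the distro’s GPG signature first
rpmbuild --rebuild foo-1.0-1.src.rpm   # rebuild the binary rpm from those same sources
rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm   # install what you just built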
In fact, I can’t understand why people don’t port apt-get (or something similar) to Windows and allow people to install free (as in beer/freedom) software on Windows with the same ease.
This would be even more useful than the KDE port.
Different mentality, I guess. The nice thing about installing a Windows app is that:
a) The app is available immediately after release. You don’t have to wait a day or two (or even a week or two) for Microsoft to package the app so you can install it on your machine
b) The installers work 99% of the time
Whenever there’s news on here about a major Linux application releasing a new version, there are usually like 3 or 4 people bitching that there are no packages available yet for distro x, y or z. This leads me to the conclusion that Linux users can’t get their apps when they want them, unless they want to spend time rolling their own packages, which doesn’t sound like fun to me
My experience with Synaptic is that there are usually 8 million applications available… except for the one I happen to be looking for at that particular time. And if I found it, it wasn’t uncommon for the damn thing to be outdated compared to the current version.
I know that Linux users often tend to think that the latest stable version of an app is really ‘bleeding-edge’ stuff because of the nature of how apps are distributed, but I like having access to apps immediately if I want them. Plus, when I’m checking out an application, I’m usually reading about it on the app’s homepage anyway, so then I click the download link, Next, Next, Done.
Let’s contrast this with Linux – if I’m on the app’s homepage, I can either check to see if there’s a version available for my distro, or I can fire up the package manager and download it. I can’t see this being much faster, unless I’m installing like 5-10 apps at a time, which I only do when reinstalling the OS. But here’s a tip for you… many Windows apps (probably at least half of the ones I use) don’t have to be installed at all, and are available on a fresh install if you keep them on a separate partition
Wouldn’t the easiest thing be to define some kind of WSB (a “Windows Software Base”, by analogy with the LSB)?
Creating a defined naming scheme, some minimal requirements, and perhaps a service to look up how to resolve missing dependencies.
That would allow you to
– roll your own packages (as a developer)
– avoid dll hell
I would start the initiative if I cared more about the Windows platform…
it’s called autopackage. just not everybody uses it (and it’s not entirely finished).
but making a rpm and a deb allows installation on >90% of the systems out there, so it’s not that bad. and those running a non-rpm non-deb system can surely compile the tar.gz themselves…
I think you misunderstood my posting. I was thinking along the lines of creating an LSB for Microsoft Windows, so that open source software could be used/installed similarly to how it is on Linux. (So that one doesn’t end up with 200,000,000 copies of zlib in memory.)
Perhaps then an “autopackage for windows” since you seem to like that package manager 🙂
But back on topic:
Yay…those pics look good…even if the installer could use some graphics designer and usability expert loving 😉
sorry, i did misunderstand you. well, microsoft could simply implement LSB compatibility (they already have a unix compatibility layer)… that would solve the problem. i would consider it a waste of time if volunteers spent time on it, though. if they spent that time on improving linux, apps on linux would improve, and Microsoft would be forced to do it themselves
The app is available immediately after release. You don’t have to wait a day or two (or even a week or two) for Microsoft to package the app so you can install it on your machine
That’s not really true. Releases are different for Windows and Linux. For Windows, there is usually no source-only release: the app only becomes available when it is packaged. For Linux, apps are often available as source tarballs before they are packaged.
Think of it this way: Linux apps are available sooner than Windows apps, but in a less user-friendly format.
My experience with Synaptic is that there are usually 8 million applications available… except for the one I happen to be looking for at that particular time. And if I found it, it wasn’t uncommon for the damn thing to be outdated compared to the current version.
I think you’re wildly exaggerating here – unless you’re constantly hunting old, unmaintained or very obscure versions.
Also, you have to understand that the distros cannot afford to have bleeding-edge packages in their “stable” repositories, because of the risk of breakage. This means that power users who are addicted to using bleeding edge packages should: a) learn to compile tarballs, b) run “unstable” distros (such as Feisty Fawn or Mandriva Cooker), or c) learn to free themselves from the irrational obsession with running only the “latest and greatest”.
but I like having access to apps immediately if I want them.
With Linux, you can often have direct access to CVS versions. But see what I’m saying above: it’s not that the packages are only available later, rather it’s the tarballs that are available sooner. You therefore have the possibility to have access to apps sooner in Linux, if you really want them (by downloading the tarballs and compiling them). With Windows, you’re stuck waiting for the app to be packaged in a standalone installer.
Again, the obsession with running only the “latest and greatest” is not something that afflicts normal users; rather, it is a typical power user quirk – and Linux power users should really learn how to compile tarballs. It’s not hard, and it is a strength – not a weakness – of the *nix OSes.
With Windows, you’re stuck waiting for the app to be packaged in a standalone installer.
Perhaps, though you’d never know it, since the standalone installer is released at the same time as the app.
I think you’re wildly exaggerating here – unless you’re constantly hunting old, unmaintained or very obscure versions.
I tend to run a little bit of everything
Also, you have to understand that the distros cannot afford to have bleeding-edge packages in their “stable” repositories, because of the risk of breakage.
Again, what the hell is bleeding edge? I’m not asking for the ability to install beta versions. Are you saying that I can’t install official releases right away for fear that they might break some other applications? And if so, is this not a flaw in the architecture?
This means that power users who are addicted to using bleeding edge packages should: a) learn to compile tarballs, b) run “unstable” distros (such as Feisty Fawn or Mandriva Cooker), or c) learn to free themselves from the irrational obsession with running only the “latest and greatest”.
Well, I don’t currently have to do any of those three things, so guess I’m in pretty good shape
With Linux, you can often have direct access to CVS versions. But see what I’m saying above: it’s not that the packages are only available later, rather it’s the tarballs that are available sooner. You therefore have the possibility to have access to apps sooner in Linux, if you really want them (by downloading the tarballs and compiling them).
Ummm… doesn’t this sort of defeat the purpose? I thought the whole point of this super-leet system is that you have the ability to install 500 apps at the same time with just one click. Well, I guess you can, but who the hell knows which version(s) you’re actually going to end up with. With Windows, I always know. Again, both methods have their pluses and minuses, but I prefer the Windows way myself, and to say that it’s totally borked compared to the Linux/open source way isn’t exactly fair.
and Linux power users should really learn how to compile tarballs. It’s not hard, and it is a strength – not a weakness – of the *nix OSes.
I consider it a weakness. Keep in mind that I define a power user as somebody who wants to get the most functionality out of his computer with the least amount of work possible. So, IMHO, most power users wouldn’t bother wasting their time compiling their own apps, even if they knew how, unless they absolutely had to. Compiling apps is the realm of geeks (people who have more of an interest in understanding computers than actually using them). Here’s a good quote for you – many people are busy studying the roots, while others are picking the fruits.
Keep in mind that I define a power user as somebody who wants to get the most functionality out of his computer with the least amount of work possible.
And here’s me thinking a power user is someone who has more than 3 apps open at once 🙂
Perhaps, though you’d never know it, since the standalone installer is released at the same time as the app.
This is exactly my point: this is a matter of perception more than anything else.
Again, what the hell is bleeding edge? I’m not asking for the ability to install beta versions. Are you saying that I can’t install official releases right away for fear that they might break some other applications? And if so, is this not a flaw in the architecture?
Not really. It simply means that you should run an unstable distro.
You also have to consider the manpower issues, here. It takes a little bit of time to make sure a new app works well once installed. “Official releases” are not a problem if they don’t link to lots of libraries, but for apps that rely on a lot of other apps (or that a lot of apps rely on), you want to make sure that nothing gets borked. So you usually have to wait a couple of days until it’s packaged.
Again, the software developers *can* package it themselves, and many do (whether it’s rpms, debs or standalone installers). This is more of an issue for developers than distro maintainers.
I thought the whole point of this super-leet system is that you have the ability to install 500 apps at the same time with just one click.
For packages that have been tested.
Note: if running the latest and greatest is important to you, then perhaps a distro like Gentoo would be a better choice.
Again, both methods have their pluses and minuses, but I prefer the Windows way myself, and to say that it’s totally borked compared to the Linux/open source way isn’t exactly fair.
I would never say that the Windows way is borked. It works well for *Windows*, while the Linux way (or ways, really) works better for a Linux system.
I consider it a weakness. Keep in mind that I define a power user as somebody who wants to get the most functionality out of his computer with the least amount of work possible.
That’s one definition of power user. It fits well with the Windows model. It doesn’t fit as well with the Linux model.
So, IMHO, most power users wouldn’t bother wasting their time compiling their own apps, even if they knew how, unless they absolutely had to.
Exactly. That means they have the option of waiting a couple of extra days (which they would have to do with Windows, they just wouldn’t be aware of it) or compile – which isn’t exactly rocket science.
Compiling apps is the realm of geeks (people who have more of an interest in understanding computers than actually using them.)
You realize that for 99% of the population, “power user” and “geek” are synonymous…
Here’s a good quote for you – many people are busy studying the roots, while others are picking the fruits.
It is possible to do both. That is my definition of a Linux Power User (and yes, I understand that it is different than for Windows).
My point is that this is mostly a perception thing, and that it’s unfair to criticize Linux for something that a) rests on the shoulders of the software devs, and b) will only be an issue for a very small portion of users who want Linux to function exactly like Windows.
Exactly. That means they have the option of waiting a couple of extra days (which they would have to do with Windows, they just wouldn’t be aware of it) or compile – which isn’t exactly rocket science.
Wouldn’t the length of time one would have to wait depend on how popular (or not) the app was? Maybe it would be available that same day, or maybe you’d have to wait several weeks. And you might end up having to beg the distro Gods to package the app for you. That sounds like more work than it should be. As for compiling apps, I’ve done it before. Sometimes it works, sometimes I get weird compile errors that I have to seek help with. It’s just not consistent enough that I would consider it a useful way to install software. I’ve heard there’s something called checkinstall (or similar) that simplifies the process, but then again .. we’re straying away from the ‘500 app installs with one click’ method that people seem to speak so highly of. And if you have to stray from that too many times, you may eventually start to wonder if it might be easier if you could just download setup.exe.
My point is that this is mostly a perception thing, and that it’s unfair to criticize Linux for something that a) rests on the shoulders of the software devs, and b) will only be an issue for a very small portion of users who want Linux to function exactly like Windows.
Keep in mind that I was responding to somebody who thought all us Windows users would be so blessed if we could install software like Linux users do. I was just telling him no thanks
Wouldn’t the length of time one would have to wait depend on how popular (or not) the app was? Maybe it would be available that same day, or maybe you’d have to wait several weeks.
Popular apps are available quickly, at least on Ubuntu. I haven’t used Mandriva in a while but I recall it was also pretty fast if you ran Cooker. It was certainly fast enough for me, a recovering Windows Power User.
And you might end up having to beg the distro Gods to package the app for you.
By that you mean “ask nicely on the appropriate forums”, I guess? 🙂
As for compiling apps, I’ve done it before. Sometimes it works, sometimes I get weird compile errors that I have to seek help with. It’s just not consistent enough that I would consider it a useful way to install software. I’ve heard there’s something called checkinstall (or similar) that simplifies the process, but then again.
Checkinstall won’t make it easier to compile, but it’ll install the app as a rpm/deb package, so it can be removed/upgraded normally with a package manager later on. It’s a useful packaging tool, in other words.
Compiling requires that you have the correct dev packages installed. That’s usually where people run into errors, and I agree that it’s not always clear which dev packages you need. It should be standard practice for developers to indicate this in the tarball’s README file (some do).
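For reference, checkinstall slots into the usual build like this (a sketch, assuming the dev packages are already in place):

./configure && make
checkinstall   # runs “make install” but records it as a .deb/.rpm, so the package manager can remove or upgrade it later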
we’re straying away from the ‘500 app installs with one click’ method that people seem to speak so highly of. And if you have to stray from that too many times, you may eventually start to wonder if it might be easier if you could just download setup.exe.
Well, that’s the thing. This doesn’t happen “too many times”. In fact, it’s the exception rather than the rule. As I’ve stated, if you run unstable distros you’ll have your bleeding edge system. If you run Gentoo you’ll have your bleeding edge system. Stable distros are for people who want a system that “just works.” That’s the beauty of the Linux model: its flexibility.
Look at me: I’m comfortable enough to compile apps from source, but I rarely if ever do it. The last time was when version 1.5.0pre2 of Celestia hit the CVS repository, and before that I can’t remember, but it was at least three months ago. That is a small fraction of the total amount of software I’ve installed (and sometimes removed) in the same period of time – and yet I consider myself a power user who *likes* to try bleeding-edge versions (and sometimes revert to previous versions when I’m done). So if it’s a rare annoyance for me, a moderate tinkerer, then I have a hard time considering this a serious issue.
Keep in mind that I was responding to somebody who thought all us Windows users would be so blessed if we could install software like Linux users do. I was just telling him no thanks
Fair enough. I have nothing against standalone GUI installers, and in fact Windows also has its own version of the package manager (the Add/Remove Program control panel).
I do think that the package manager approach just makes sense for system software (which in the Linux world is not all produced by a single vendor, and hence cannot be covered by something like a global Windows Update). It can also work for commercial software when distros have special repositories for it (like Ubuntu), but standalone installers are a perfectly acceptable alternative.
i’d like to add to this discussion the notion of a rolling release, something you don’t ever see in the windows world.
Arch linux uses it, for one, and it means you generally have the latest stuff with a delay of 1 or 2 days.
ok, might be too long for those 2 pieces of software you really like – then compiling is an option.
but we’re talking about the whole system here. an average XP system (and i’m talking about the software here, not the os, as that one is what, on average 6 years old?) has loads of old software on it. you can choose to spend 30 minutes every day checking whether there’s a new version of each piece of software you manually installed – or not. in the latter case, you’ll most likely only find out days or even weeks later. well, i don’t have to look for new software all the time. i do a ‘pacman -Syu’ and i’m up-to-date. this is what made me love linux – no more searching for the latest musepack version, new winamp, netscape, whatever.
linux’s package management saves a huge amount of time. and you also have the choice (at the expense of some time) to be more up-to-date than ever – i use several software packages directly from CVS/SVN, which is very hard to do on windows, but so trivial on linux that a poweruser can do it anytime they want the latest of anything.
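for instance, grabbing a kde module straight from the repository is a one-liner (url from memory – check the project’s site for the real one):

svn checkout svn://anonsvn.kde.org/home/kde/trunk/KDE/kdelibs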
Keep in mind that I define a power user as somebody who wants to get the most functionality out of his computer with the least amount of work possible.
That is most definitely not what a power user is. That’s a lazy person who wishes the latest and greatest were brought to him on a golden platter. A power user is someone who gets the most out of his tools/system/whatever by knowing them very, very well, inside and out.
A power user is someone who gets the most out of his tools/system/whatever by knowing them very, very well, inside and out.
What you describe is a geek, not a power user. Let me explain the difference:
Let’s say a geek and a power user are both setting up a hard drive with multiple partitions and two or more operating systems for the first time. They both learn during the process that you can’t have more than 4 primary partitions. A power user doesn’t know why this is the case and doesn’t care. A geek would have to look this up to figure out why this limitation is in place. By the time the geek has the answer, the power user already has his system up and running
Am I saying that power users are superior to geeks? Not at all; they’re just different kinds of users. Geeks have a much better understanding of the fundamentals, and thus make for better programmers and sysadmins. Power users just learn to exploit the individual applications to get as much work done as possible in the least amount of time, learning as little as they have to in order to accomplish their mission.
Let’s say a geek and a power user are both setting up a hard drive with multiple partitions and two or more operating systems for the first time. They both learn during the process that you can’t have more than 4 primary partitions. A power user doesn’t know why this is the case and doesn’t care. A geek would have to look this up to figure out why this limitation is in place. By the time the geek has the answer, the power user already has his system up and running
I must be a power user then, because I’ve never cared about why you can only have 4 primary partitions (and in fact I didn’t know about this limitation at all).
I think the line between the two is blurrier than you suggest… 🙂
I may be jaundiced in this belief, but I just don’t see compiling a package as some major adventure in geekdom. I think that’s a view that tends to be massively inflated by those who’ve never done it (or who did it once and had a bad experience – I’m not saying that can’t happen).
A typical source package from Sourceforge requires the following action to build manually:
1) Extract the tarball (don’t even need to go CLI yet, GUI apps aplenty to help here)
2) Open a terminal and cd to where you unpacked it (again, you can get to this point in GUI with Konqueror and probably other filemanagers)
3) ./configure --help and have a quick scan to see if there are any options worth setting (if you understand them)
4) ./configure (with any options you chose)
5) make && sudo make install
That checklist holds for probably more than 90% of the source distributions out there. Learn it once and it’s utterly mundane, really. Compare it with the typical Windows install:
1) Navigate to the installer and run it
2) Read the license and ‘agree’
3) Choose ‘Custom install’ and see what your options are (you can always click ‘Back’ if it’s meaningless to you or you’re happy with the defaults)
4) Click ‘Next’ a couple more times
5) Reboot (sometimes)
The principles and process involved are more similar and generic than some would have you believe.
Also bear in mind that localization and OS version compatibility are generally taken care of implicitly, whereas with a Windows app you’ll often have had to do those parts yourself by choosing the right language version and (sometimes) Windows version to download. Those reductions in workload can be weighed against the Windows package’s user-friendly bonuses of extracting itself and not requiring any CLI action.
> Think of it this way: Linux apps are available sooner
> than Windows apps, but in a less user-friendly format.
In MSVS you have a “Setup” project which contains the “primary output of MainProject”. When you right-click the setup project and select Build, it automatically builds the setup project including all dependencies. This means that generating the whole .MSI is just as easy as compiling the app itself.
I guess building rpm/deb packages isn’t that easy, huh? My guess is that it’s probably even a command line thing…
Why do you want to download a file, click N times, for *each damn program* you want to install, when you can fire up something like Synaptic, click the packages you want installed, click “Apply” and install tens of programs without annoying wizards etc?
I have a Xubuntu install and I want to remove Gaim using Synaptic. The problem? Synaptic says it needs to also remove XFCE-Desktop. Needless to say, I can’t uninstall Gaim, because of this. (Sadly, this happens for half of the apps that come preinstalled).
On the other hand, it’s true that repositories have a very big selection of software, but I’m totally lost when I want something that is not listed there… probably I’d have to *gasp* compile something or type some cryptic commands at the command line… well, too much excitement for me.
PBI, on the other hand, seems to have gotten it right.
I have a Xubuntu install and I want to remove Gaim using Synaptic. The problem? Synaptic says it needs to also remove XFCE-Desktop.
I don’t use Xubuntu, so this is not official.
However, I would assume that XFCE-Desktop is just a meta-package, and uninstalling it will do absolutely nothing to your system. It is just there so that you can click on that one package to install a set of default programs. Removing any one of those programs would then mean that the whole XFCE-Desktop set is no longer installed, so Synaptic has to remove the meta-package too (and can offer to reinstall it later), but it doesn’t actually remove anything else.
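You can check this yourself: a meta-package is usually nothing but a list of dependencies (the real package name may be xubuntu-desktop – I’m going from memory):

apt-cache show xubuntu-desktop | grep ^Depends   # no files of its own, just Depends: lines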
This type of situation definitely needs some UI work, as I’ve heard this type of misconception from quite a few people in different distros.
Package management is a good thing. However, there are problems: it is usually not cross-distro compatible, and it relies on a central install database (i.e., non-root installations are impossible or hard to do).
Try to compile a recent K3b on RHEL3. No way: it requires a newer version of Qt. So having an alternate way, statically linked and compatible with many distros, is always useful, and that kind of thing can be provided by autopackage.
That said, I see no clear way ahead. Maybe the packaging system should allow for local non-root installs with overloading packages, i.e. supporting alternate versions for users or groups of users (could be implemented by having overloading package sets which don’t screw up the system, but provide a compatibility layer for demanding apps). This of course complicates bug fixing, especially if lib x.y crashes on a newer version of GNOME or if there are security problems.
In Windows there is a SxS “service” which usually tries to provide an app with proper version of a library. Maybe Linux should adopt something similar, as latest isn’t always the best in case of dynamic linking (so this problem stalls introduction of a new lib versions in distros).
Also manual source installations should be able to register into the DEB/RPM databases, so dependency hell can be avoided even if I’d want to install and compile something manually. This however should be supported by both autotools and application developers, which don’t always care about specific distros.
Still, compared to windows dll hell, we can’t say that Linux is bad in this area, though current situation certainly hurts vendors wanting to port application to Linux and have one package/binary per as large number of distributions as possible.
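One low-tech approximation of those per-user, non-root installs is possible today: keep a private copy of the demanding library under your home directory and point the dynamic linker at it when launching the app. A minimal sketch in Python, with an invented overlay path and K3B as the example app; a real solution would still need packaging and security updates built around it:

#!/usr/bin/env python
# Hypothetical launcher: run an app against a per-user copy of a
# library (e.g. a newer Qt) without touching the system-wide install.
import os

private_libs = os.path.expanduser("~/.local/lib/qt-newer")  # invented path

env = dict(os.environ)
# Prepend the overlay so the dynamic linker prefers it; everything
# not overridden is still found in the usual system locations.
env["LD_LIBRARY_PATH"] = private_libs + ":" + env.get("LD_LIBRARY_PATH", "")

# Replace this process with the app, inheriting the modified environment.
os.execvpe("k3b", ["k3b"], env)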
Windows is actually trying to make installation more Linux-like.
Those .msi packages you see in an installer are “the RPM” of Windows. They allow:
* Installing and uninstalling from command line (i.e: no dialogs)
* Updating
* Dependency resolution
* Side by side install
And even Microsoft uses XML definitions to “compile” those packages. The necessary software (WiX) is distributed as open source on SourceForge.
The installer is just a frontend nowadays (but it’s also contained in the MSI).
So I guess Linux has been doing the right thing for a long time.
(The only thing I miss is the ability to configure the package during install. As far as I know only Debian has such a facility.)
I should be able to double click on an RPM file, be able to click next, next, watch the bar, click next, then watch 17 more bars(if they apply) and then click done. Then use my app. All dependencies should be statically linked to the app so that I don’t need to hunt them down, and I don’t need an internet connection either.
I do agree with you.
A simple thing to implement would be a kind of “super-package” containing the RPM of the application plus the set of RPMs on which the program relies.
If a newer (and compatible) version of a complementary package is already installed on the system, then the version supplied is ignored; otherwise it is installed from the “super-package”.
Not only would it ease a lot of situations, but what will happen if in 10 years I need to install, say, an Ubuntu 6.10 from CD and all the required packages that we are used to getting online are offline? (A rough sketch of the idea follows below.)
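To make the idea concrete, here is a rough sketch (mine, not an existing tool) of what such a super-package installer could do. It uses rpm’s real query and upgrade commands, but the bundle contents are hypothetical and the version comparison is deliberately naive (rpm has its own, smarter ordering):

#!/usr/bin/env python
# Sketch of a "super-package": the application's RPM travels together
# with the RPMs it depends on; a bundled dependency is installed only
# when the system copy is missing or older.
import subprocess

# (package name, bundled version, bundled rpm file) - made-up examples
BUNDLE = [
    ("libfoo", "1.2", "deps/libfoo-1.2.i386.rpm"),
    ("myapp", "0.9", "myapp-0.9.i386.rpm"),
]

def installed_version(pkg):
    # Return the installed version of pkg, or None if it is absent.
    try:
        out = subprocess.check_output(
            ["rpm", "-q", "--queryformat", "%{VERSION}", pkg])
        return out.decode().strip()
    except subprocess.CalledProcessError:
        return None

def as_tuple(version):
    # Naive numeric comparison; real rpm version ordering is richer.
    return tuple(int(p) for p in version.split(".") if p.isdigit())

for name, version, rpm_file in BUNDLE:
    have = installed_version(name)
    if have and as_tuple(have) >= as_tuple(version):
        print("keeping system %s %s" % (name, have))
        continue
    # Install or upgrade to the bundled copy (-U does both).
    subprocess.check_call(["rpm", "-U", rpm_file])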
A simple thing to implement would be a kind of “super-package” containing the RPM of the application plus the set of RPMs on which the program relies.
No, thank you. The rest of us like Linux package managers precisely because they aren’t like this. We don’t want multi-ten/hundred-megabyte installers for every package, while at the same time losing all the package dependency management we had before.
No, thank you. The rest of us like Linux package managers precisely because they aren’t like this. We don’t want multi-ten/hundred-megabyte installers for every package, while at the same time losing all the package dependency management we had before.
“The rest of us”? Who are you to represent 100% minus one of Linux users?
Secondly, you won’t _have_ to use a super-package. It would only be another option; the traditional packaging system wouldn’t need to be modified.
Third, you won’t lose any dependency management.
I’m confused: You want to be able to install a .deb, for example, without relying on repositories (which you already can), but at the same time you want it to be able to handle dependencies without relying on an Internet connection or repositories?
Forgive my lack of imagination, but I don’t see how that’s possible (well, it is possible, but only if the deps are bundled in).
You suggest statically linking; that’s something to take up with the application developers or whoever is packaging your program.
Personally, I think that’s a disk-spacingly horrible idea, considering the size of some of these monsters.
Edited 2007-01-15 00:30
The KDE crew is porting KDE to windows, I hope they reverse-port this install framework to linux. We desperately need it.
No, we don’t.
This is an example of the biggest problem with “Desktop Linux”: Windows users.
Instead of learning a better way to do something, you want to bring all that is bad from Windows when you move over to Linux. Windows’ install mechanism is one of the most poorly thought out ways to get programs onto a computer. We definitely do not need that on Linux.
Forgive me if I’m wrong here, but looking at those screenshots it looks like this “simple installer” for Windows is in fact just a simple package manager. It looks just like the cygwin installer to me, getting a list of the most up to date packages off a known website and listing them for you to select.
Forgive me if I’m wrong here, but looking at those screenshots it looks like this “simple installer” for Windows is in fact just a simple package manager.
If it is, then it’s a good thing.
We definitely do not need that on Linux.
Yes, because typing commands is so much better … NOT
Yes, because choosing a cryptic package from a 20,000-item list is even better … NOT
Yes, because we have so much diversity on Linux, and we can install in at least 15 different ways on 50 different distros … NOT
This is why Linux doesn’t have more users or any mainstream applications: because of this kind of unjustified superiority attitude.
Yes, because typing commands is so much better … NOT
That’s like complaining about all the BSODs on Windows – it’s largely a thing of the past, at least where package management is concerned.
Yes, because choosing a cryptic package from a 20,000-item list is even better … NOT
It’s hard to search for “OpenOffice” and then select it from the top of the list? It seems to me like the exact same thing you do on Windows: go to Google, search for “open office”, and select the link at the top of the list. Maybe I’m missing something here???
Yes, because we have so much diversity on Linux, and we can install in at least 15 different ways on 50 different distros … NOT
That’s just a consequence of the fact that there is no single version of Linux; it is in fact Linux’s greatest strength and greatest weakness at the same time.
This is why Linux doesn’t have more users or any mainstream applications: because of this kind of unjustified superiority attitude.
Certain users’ attitudes certainly cause problems, but when I look at the developers and companies surrounding Linux and OSS, I find the exact opposite. Reading the blogs on PlanetKDE I find a ton of people who are open to new ideas, try to help new users, and are downright friendly. I’d look to other reasons, personally.
OMG
Static Linking is very bad.
Ok, so imagine there is a buffer overflow in libpng.
Think of the number of applications that use libpng, to name just a few:
Firefox
Gimp
xzv
Thunderbird
OpenOffice
etc.
So since you’d decided to statically link libpng into each of those apps, you’d have to obtain new builds of, and reinstall, all of those apps.
When libraries are dynamically linked, the OS loads one copy of the library into memory when the first application that uses it is opened. Other applications that use that library don’t have to load it when they start, because they can just use the copy that is already in memory (the sketch after this comment shows one way to observe that sharing on Linux).
it’s MADNESS!
– Jesse McNelis
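For the curious, the sharing described above is easy to observe on Linux: every process that dynamically linked libpng shows the same shared object in its memory map. An illustrative script (Linux-only, assumes a mounted /proc filesystem):

#!/usr/bin/env python
# List processes that currently have libpng mapped, showing that one
# dynamically linked copy serves many applications at once.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        maps = open("/proc/%s/maps" % pid).read()
        if "libpng" in maps:
            # The first element of cmdline is the program name.
            name = open("/proc/%s/cmdline" % pid).read().split("\0")[0]
            print("%s\t%s" % (pid, name))
    except (IOError, OSError):
        pass  # process exited, or we lack permission; skip it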
How is clicking “Next” several times and watching a gauge bar (several times over, for multiple applications) simpler than typing one command (one command even for multiple applications)? I don’t want to click the buttons, I want my app!
Besides, static linking means:
a) size on disk – OK, disk is cheap
b) size in RAM
– there’s never enough RAM
– the more RAM is available for caches the better
– no sharing of the same library code in RAM
c) size during transfer – bandwidth is _NOT_ cheap everywhere
d) security patches – oh yeah, let’s download the whole huge static OOo binary instead of just libjpeg. And let’s do it for every app that uses libjpeg.
That’s just stupid.
Granted, in the Windows world it works like that. However, that does not mean it is a good thing to do.
I’d like to have Konqueror as a stand-alone browser for Windows.
Someone please explain why adding another window manager to an OS that 1) already has one, and 2) wasn’t meant to run any others, is a good thing. Other than using KDE apps on Windows, how could this possibly be a benefit?
Someone please explain why adding another window manager to an OS that 1) already has one, and 2) wasn’t meant to run any others, is a good thing.
This is a misunderstanding. The window manager will not be (at least not planned) one of the ported applications.
I think the internal name for this category of applications is “workspace”, i.e. the applications needed to provide a desktop environment on Unix/X11 systems.
Well, actually, AFAIK you can replace the Windows shell (mostly explorer.exe): you don’t change the window manager, I guess, but you do replace most of the desktop environment. There are a lot of these replacements: commercial, free, and even open source.
I don’t think porting KDE as a desktop shell replacement would be an easy task, but it would surely be interesting.
Someone please explain why adding another window manager to an OS that 1) already has one, and 2) wasn’t meant to run any others, is a good thing. Other than using KDE apps on Windows, how could this possibly be a benefit?
Think of the KDE desktop and the KDE application framework as two separate components: the KDE desktop is built upon the application framework, but the application framework is not dependent upon the desktop; it simply sees it as another application.
So just as in GNOME you can run KDE applications without requiring a KDE desktop, they’ll now be able to do this on Windows and OS X as well. On *nix, Qt interacts directly with X, which is why applications can run independently of the actual desktop; on Windows and OS X, Qt will interact directly with those platforms’ native GUIs, so once again applications can run independently of the desktop.
I guess another way to look at it would be to compare KDE/Win to .NET: it’s simply a different framework for running applications. Using the framework in this manner should, at least theoretically, isolate developers from having to code for a specific OS, which makes for easy application portability. At least, that’s the basic idea.
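To illustrate the framework point with something runnable, here is a trivial sketch using PyQt as a stand-in (the KDE port itself is C++/Qt, so treat this as an analogy only; it assumes PyQt4 is installed). The identical script runs on X11, Windows, or OS X, because Qt targets whichever native GUI layer is present:

#!/usr/bin/env python
# Minimal cross-platform Qt program: no desktop environment required;
# the same code runs wherever Qt itself runs.
import sys
from PyQt4.QtGui import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("Hello from Qt - no KDE desktop required")
label.show()
sys.exit(app.exec_())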
The others answered already: there will be no KWin on Windows in KDE4. I’d like to add to this: Windows doesn’t have a separate window manager; that’s not how it works there. Ever noticed you can’t minimize or close windows when the apps in them are frozen? You can in Linux… that’s the benefit of having a separate window manager!
…but I was unable to find a download link on the site.
Am I looking in the wrong place? If so, where can I grab it?
Keep them coming!
CMake, the amazing cross-platform build system. Check it out, you’ll find it’s much more intuitive than autoconf/libtool.
Why was my comment voted down? KDE4 uses CMake as their build system, which supports many OSes and now allows for the cross-platform nature of KDE4.
…all this talk of Linux vs. Windows software management is *very* off-topic, and should be modded down accordingly. Come on, people, let’s try to stay on-topic, which is the coming availability of KDE apps on Windows.
Many people say that this will hinder migration to Linux, but personally I believe the opposite: there are more people who will *not* switch because the apps they use are not available on Linux. If they start using the apps on Windows, it will be easier for them to migrate later. I think this is a positive step, especially if you can install these without administrative rights (so I’ll be able to use Konq instead of Windows Explorer at work! 😉).
Many people say that this will hinder migration to Linux, but personally I believe the opposite:
KDE has nothing to do with Linux, nor does it have any goals pertaining to getting people to switch to Linux. And even if it did, you’re still wrong. Having Konq or any other KDE app on Windows will just mean more apps on Windows. Firefox didn’t get anybody to switch to Linux.
But KDE *does* want to work to advance Free Software; hence there was a big discussion about the port to Windows, and still not all developers like it.
KDE has nothing to do with Linux, nor does it have any goals pertaining to getting people to switch to Linux.
Right, it doesn’t. It’s still free software, though, and without it Linux wouldn’t be where it is today (at least not for me).
And even if it did, you’re still wrong. Having Konq or any other KDE app on Windows will just mean more apps on Windows. Firefox didn’t get anybody to switch to Linux.
You’re missing the point: the more cross-platform apps there are, the less incentive there is to stick to a particular platform. Firefox may not have brought people to Linux (though there’s no way to know that for sure), but I know that when a friend of mine uses my Kubuntu laptop and surfs the Internet, he feels right at home using Firefox and realizes that Linux isn’t all that different from Windows (from the user’s point of view).
The open-source community is only damaging the credibility of its own platform. If Microsoft Windows has access to all the apps on Linux/*nix, then what incentive is there to use Linux/*nix?
People could use all the open-source apps on the same platform where they can run all their games, plus the apps that don’t work in Wine or CrossOver. In fact, they can already do it now with Cygwin, Services for UNIX, and so on. But native support makes the process simpler.
This especially bothers me when you realize Microsoft and 90% of commercial companies don’t give a rat’s @$$ about porting their products to Linux/*nix.
Bingo… give the man a cigar. When you can get all the Unix apps on Windows, while the Windows apps require a very sub-optimal Wine or CrossOver to run on Unix, then it’s actually a disincentive to switch.
And that just goes to show that KDE is interested in KDE and not in Linux, which makes sense from their perspective, but doesn’t help any kind of migration.
Once again, you’ll never see any kind of significant migrations until you have a whole product (not Linux + GNU + KDE/Gnome slapped on top + random distro branding).
The worst enemy of desktop Linux isn’t Windows. It’s OSX when it starts getting licensed to OEMs.
KDE has nothing to do with Linux, nor does it have any goals pertaining to getting people to switch to Linux.
Right, it doesn’t. It’s still free software, though, and without it Linux wouldn’t be where it is today (at least not for me).
Well “for you” is not really relevant to what KDE has to do with Linux. KDE would be just as relevant in a BSD world where Linux never existed.
And even if it did, you’re still wrong. Having Konq or any other KDE app on Windows will just mean more apps on Windows. Firefox didn’t get anybody to switch to Linux.
You’re missing the point: the more cross-platform apps there are, the less incentive there is to stick to a particular platform. Firefox may not have brought people to Linux (though there’s no way to know that for sure), but I know that when a friend of mine uses my Kubuntu laptop and surfs the Internet, he feels right at home using Firefox and realizes that Linux isn’t all that different from Windows (from the user’s point of view).
And you and all these other people keep on missing the point over and over again. There has to be incentive to switch to a platform, not to stick with a platform. The incentive to stick with a platform is already there.
In fact, there’s less of an incentive to switch when absolutely nothing is exclusive to Linux. That’s one of the problems, but there’s a whole bunch more.
You have to realize that “Linux” has really nothing to do with an end-user experience. End-users barely understand what a kernel is. Until there is a cohesive whole system that is marketed and developed as more than just the sum of the parts then forget about it.
It’s 2007 and people need to start recognizing reality.
Indeed, I expected a huge discussion about “should we port to Windows” and such; instead there is this big “I’m a Windows user, software management on Linux is different so it is bad”.
(While, of course, as many clearly argued, Linux/BSD software management sweeps the floor with the non-existent management in Windows.)
Exactly who’d be using this, other than a Linux geek playing around in Windows? I cannot fathom using most Linux software on Windows, save maybe xine, VLC, K3B, etc. But KDE itself??? Why? Most people never even change their desktop background, so why would they want the flexibility that KDE would offer? It would just confuse things further… or maybe I just don’t understand?
or maybe I just don’t understand?
That’s a safe bet. This isn’t about the desktop environment KDE, it’s about KDE apps like K3B, Konqueror, KOffice, etc. The people who use it would be the same people who use OOo and Firefox on Windows.
/* This isn’t about the desktop environment KDE, it’s about KDE apps like K3B, Konqueror, KOffice, etc. The people who use it would be the same people who use OOo and Firefox on Windows. */
Why would the people using OOo and Firefox on Windows want to run KOffice, KDE, etc. on Windows? If those people wanted to use these programs, they would be running Linux or BSD instead of Windows.
Edited 2007-01-15 06:40
Why would the people using OOo and Firefox on Windows want to run KOffice, KDE, etc. on Windows? If those people wanted to use these programs, they would be running Linux or BSD instead of Windows.
For a great many reasons, but how about we start off with the most obvious – the majority of those people haven’t even ever heard of Linux, let alone thought about switching to it for the sole purpose of using some apps they’ve never heard of, seen, or tried before.
/* but how about we start off with the most obvious – the majority of those people haven’t even ever heard of Linux */
Here we go again, people giving the same excuse for why hardly anybody wants to use Linux; it does not work any more. I’m sure they have heard about Linux the same way they have heard about Macs, and still they buy PCs running Windows, or a copy of Windows, over Mac OS and Linux combined.
Edited 2007-01-15 08:09
Oh, they’ve probably heard of Linux in passing – like in one of those IBM commercials or something. But how many people do you know who have actually seen a computer running Linux? Most of the people I know (non-tech, of course) wouldn’t know the difference between “Linux” and “Pentium”. They just know it has something to do with computers.
Here we go again, people giving the same excuse for why hardly anybody wants to use Linux; it does not work any more.
I’m not saying it is the only reason, but it is a significant one. I’d probably rate it third, the first two being the absence on Linux of the commercial programs people already use, and the lack of any OEMs pushing Linux over Windows. Of course, both of those will take care of themselves once enough people start asking for it, but until then it’s something of a chicken-and-egg problem. Apple tries to solve this with great advertising and brand recognition. Linux doesn’t really do anything except claim to be technically superior, which only really gets heard by the tech guys.
KDE as a desktop environment won’t run under Windows (and the Windows-native DE sucks in comparison to KDE’s). Also, quite a lot of capabilities won’t be available, at least for quite some time.
Also, quite a lot of capabilities won’t be available, at least for quite some time.
Yeah, I expect that my favorite part of KDE, kio_slaves, will not be available for Windows…
Yeah, I expect that my favorite part of KDE, kio_slaves, will not be available for Windows…
If you had read the article you would know that it’s already available and partially working. There’s even a screenshot or two to prove it.
Well, file:// and http:// are working (and yes, I did read the article), but I’m talking about the ones that really make a difference for me, like ftp://, zip://, fish://, etc.
Although, if file and http are working, implementing the rest shouldn’t be that complicated…there may be hope yet!
Why would the people using OOo and Firefox on Windows want to run KOffice, KDE, etc. on Windows? If those people wanted to use these programs, they would be running Linux or BSD instead of Windows.
Surprisingly shortsighted. Why do you think that people only use one OS? Many of us out there use not one or two but even more OSes. And anything that can make the gap narrower is very welcome: virtualization, Cygwin, etc. And yes, the possibility of running KDE applications natively on Windows is a very nice one. I could list about a dozen KDE-based apps that I would really like to use even when I’m in Windows, and I’m sure there are very many people out there who think similarly.
Other than the above, this also gives Windows users the opportunity to get to know and try KDE apps without their hidden (or not so hidden) fear of Linux.
All in all, this is a very good thing and I applaud it wholeheartedly.
Actually, it will help Linux. Since lots of Windows users use Firefox, that app is now much better, and Linux has a much better browser as a result. So if all the other KDE apps are on Windows, Linux will end up with better apps.
Right now, people are making sure websites look good in Firefox. Once Konqueror is in the mix, there will be added pressure to code to standards.
Besides, it’s all about choice. After all, the Linux community could also say “screw KDE, we’re going with GNOME”.
/*KDE on Windows*/
Yup, this will be the year of the Windows desktop.
KDE in all its ugliness – now on Windows!
I think the differences between Linux and Windows installers are pretty inevitable. Neither system would work very well for the other.
Windows has a large number of big, proprietary applications that are fairly self-contained and don’t particularly want to give up control to some central software manager. Look at how many proprietary apps aren’t allowed to be distributed through Linux package managers.
OTOH, Linux has thousands of tiny little libraries that tons of apps share. The sheer number of packages available on Linux makes the Windows system impractical, especially when you consider how frequently most of that software is updated. On Windows you may have 10–20 apps you keep an eye on for updates; no one wants to do that for 2,000 apps on Linux.
Now, you can argue that sharing all that code in little libraries is an inferior way to produce software, and that creating large self-contained apps as on Windows would be better, but then you’re not arguing about installation methods so much as about the basic open-source philosophy. The cathedral vs. the bazaar.
So, is there a “Typical Install” option so you don’t have to go through and guess what you need for each item?
Honestly, plenty of people are going to want to use this on Windows and won’t care whether they can install source, documentation, libraries, etc. They are just going to want an option to “make the damn programs work”.
That check-box interface shown in the screenies should be for the Advanced install.
Perhaps three levels: Typical, Custom, and Advanced. Typical could just install the most commonly used apps, Custom could let the user choose by application and take care of the deps itself, and Advanced could offer source, libraries, and such.
As for the idea of Synaptic on Windows… I think that would be great for Free/OSS apps; it could be administered by a group like theopencd.org or pricelessware.org… The closest thing I’ve seen so far is the Google installer. Hell… maybe Google could/should/would open up their installer to OSS/Free apps, similar to the repositories for Google personal homepage applets.
As for the idea of Synaptic on Windows… I think that would be great for Free/OSS apps; it could be administered by a group like theopencd.org or pricelessware.org… The closest thing I’ve seen so far is the Google installer.
The closest thing I’ve seen so far is the Fink project on MacOS
I’ve wanted to run Konqueror on Windows since I first saw it. It’s a much better file manager than Explorer, and it’ll be easier to do testing in KHTML.
I’ve wanted to use and recommend KDE apps for years, and it’ll finally be possible!
This has got to be one of the most idiotic of the latest in a long line of idiotic moves by the Linux community. A complete waste of resources, as usual, on a project that will in the end accomplish nothing. Oh wait, sorry, I forgot this is the year of the Linux desktop, like it was in 2006… 2005… 2004… 2003… 2002… etc.
We already have a desktop OS that suits the needs of the vast majority of people. I have nothing against people wanting to use a Linux GUI, but this just is not where Linux’s strength lies. Stick to the server role, work on security, stability, performance, manageability, etc., and Linux will find success.