Now this is interesting. Microsoft developer Garrett Serack has acknowledged that it is generally easier to roll out a complex stack of open source server software on Linux than it is on Windows. He also offers a solution: he’s working on a project to bring package management to Windows. The project will be community-driven, and Serack has Microsoft’s full blessing.
He first identifies the problem: Windows is very different from Linux and similar systems, and on Linux it is much easier to roll out a complex set of open source server software than it is on Windows. Basically, it’s far easier to build applications from source on Linux than on Windows, thanks to autoconf and similar tools.
“When that same application needs to be built on Windows, it takes some effort. Finding the dependencies and getting them to compile (which is inconsistent from library-to-library on Windows) and then building the application itself – again, inconsistent – generates a binary that you can run,” he explains. “Nearly all of the time, if someone posts those binaries, they bundle up their copies of the shared libraries along with the application. The trouble is, that there is no common versioning, or really, sharing of shared libraries on Windows. If your app and my app both use the same library, they could (and often do) ship with a different version of it.”
This is a problem that needs resolving, and Serack proposes to do just that. He has started a project that will not simply implement the UNIX way on top of Windows – instead, it will be designed fully with the Windows way of doing things in mind, which should deliver advantages over just draping a UNIX environment over Windows.
For instance, it will use WinSxS to handle multiple versions of the same binary, “including multiple copies of the same version of the same library, compiled with different compilers”. 64-bit and 32-bit binaries will be handled side-by-side, without collisions. MSI will be used, and files will be placed in Windows-specific locations. The new technology, named the Common Opensource Application Publishing Platform (CoApp for short), will support a whole lot more besides.
Another very positive aspect of this project is that it will be entirely community-driven, with the full blessing of Microsoft; yet Serack does not have to confer with the company about the project. “The folks here at Microsoft have recognized the value in this project – and have kindly offered to let me work on it full-time,” he details. “I’m running the project; Microsoft is supporting my efforts in this 100%. The design is entirely the work of myself and the CoApp community, I don’t have to vet it with anyone inside the company. This really makes my job a dream job – I get to work on a project that I’m passionate about, make it open source, and let it take me where it makes sense.”
A project page has been set up at Launchpad, and a wiki is available as well. This sounds like a very ambitious and welcome project for Windows open source developers.
Interesting to see how the Windows ecosystem has bloated to the point that there is a need for a project like this.
Perhaps it’s the state of open source offerings for the Windows platform?
Windows doesn’t need package management. The MSI system works fine; you usually grab dependencies when you build your setup program. DLL Hell has been closed for a long time.
This MSDN article explains DLL redirection:
http://msdn.microsoft.com/en-us/library/ms682600%28VS.85%29…
It shows just one of the ways to handle installing and using a specific version of a library.
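For instance, the DotLocal redirection that article describes boils down to something like this (a minimal sketch; the application name, install path and DLL name are all hypothetical):

rem An empty <app>.exe.local file beside the executable makes the loader
rem prefer DLLs found in the application's own directory over system copies:
type nul > "C:\Program Files\MyApp\myapp.exe.local"
rem Ship the exact library version the app was built against next to it:
copy mylib-1.2.dll "C:\Program Files\MyApp\"

Note that on XP and later, an application manifest takes precedence over .local redirection.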
The problem with apps on Windows is developers (Adobe, for one) who use non-standard installers that don’t follow the rules.
You also aren’t going to get third-party developers on board without some sort of payment system (if even then), and that means that your repository system will be incomplete, and users will still have to resort to installers for a good number of applications (in some cases, most).
This has little to do with solving dependency issues – it is more of an attempt to make Windows a more attractive target for OSS developers. The point is that the code > compile > package > distribute > deploy methodology used on Windows is foreign to much of the open source world.
Most OSS projects follow a code > package > distribute > compile > deploy methodology. Although binary packages are common from distribution repositories, they are NOT commonly provided upstream – source distribution is the preferred method. Most OSS projects that actually offer Win32 builds do so by having one volunteer backport changes and maintain a separate build tree just for Windows… which is often a painful process, and since Win32 is not part of the normal flow of things, it often lags behind the mainline branch and has bugs/issues.
Most OSS projects also do not use Visual Studio at all, and frankly despise using it, because it again is completely foreign to the way they work. It’s not necessarily the compiler (although that has some pain points too) – it’s everything else about it. The way it manages projects and all the black magic it performs is geared solely towards Windows deployment – none of it is useful outside of the Microsoft ecosystem.
This looks like an attempt to create a common build environment that can be targeted by OSS developers in a similar manner to the way they target Unix-like platforms now. There is little in the way of details, but I suspect the approach will involve having a default install of the Visual C++ CLI compiler along with build scripts constructed similarly to what is used on Linux (along with lots of other stuff, I’m sure). That would alleviate a lot of the pain of supporting Windows in an OSS project, assuming the tools provided are up to the task.
So if you are a windows developer then yes, this will kind of be useless to you. But it isn’t being done for you.
Like it or not, this IS Windows. OSS developers should be doing things the Windows way on Windows, as that’s what users are expecting.
Don’t mess with the user just to get out of learning the proper way to do something on a new platform.
In what way exactly does either VLC for Windows or Peazip for Windows “mess with the user”?
I’m intrigued.
Well, I disagree with your opinion on this matter. Using Windows is pretty painful, and MS should be helping us as customers, not strong-arming us as slaves.
I don’t often raise my voice when it’s about “Windows”, simply because I have no right to do so. In this case, allow me the following comment:
You’re mentioning something very important.
Let me emphasize: “what users are expecting” – this is the important phrase. Although I am happy to have system-controlled means to install, update or remove software on the systems that I am using, “Windows” users depend on a different concept of how to deal with software. First of all, “Windows” itself has to be installed from a CD or DVD, or it comes preinstalled. Updates for this system arrive automatically through MICROS~1. But anything else – applications that the user wants to install – comes from the web. Yes, it sounds old-fashioned and somewhat strange to me, but as you said, that’s how “Windows” is: the user opens a web browser, downloads an EXE file that installs something, and this happens nearly independently of any locally centralized means of software administration provided by “Windows” itself. Although “Windows” already seems to have such a concept – a system means to delete installed software – it often relies on the means brought BY this software. I think the corresponding terms are installers and deinstallers, which have to be provided by the software manufacturers. So if they want to make their software available on “Windows”, they should have to obey the rules that “Windows” has for such tasks.
Just imagine you are in a UNIX environment and want to install a program. It’s not available through your software management system – no, you have to use a web browser to get it manually. Then you need to run a program that requires root privileges but doesn’t tell you what it does. Finally, you realize that it installed things in arbitrary places where nobody with a halfway working brain would put them, e.g. the program’s binary into /, its libraries in /etc, while its configuration is stored somewhere in /var/tmp. And it keeps you occupied all the way, clicking “next”, “next”, “next”, “next”, “next”, “reboot”.
I know, quite a stupid example, but it shows: nobody would like to use such a messy and idiotic concept in an environment that has standardized and documented means of installing software.
Maybe it’s hard to admit, and I’m not even sure if I’m telling the truth, but even “Windows” seems to have such requirements.
Furthermore, don’t force the user to learn something new. This already happens too often. 🙂
But honestly, as you said: developers who provide a piece of software on a particular platform should obey the rules that exist on that platform.
Not that I would be unhappy to be able to install software on a “Windows” system by a centralized means, without interaction, without manually searching the web and downloading things – in fact, it would be a great benefit both for average users and for professionals who have to work in such an environment and keep it running. Security considerations (such as program updates) especially come to mind. Having ONE centralized tool to work with, instead of many different ones depending on the many different installed programs that “update themselves”, would surely be helpful.
An important requirement would be for the software providers to “play according to the new rules”. This would also imply some kind of end to “doing it our way” for every single manufacturer of software, and in a system as heterogeneous as “Windows”, this looks hardly achievable to me, because every developer seems to be fine with “his way” of installing and updating software. So the basic concepts of something like a “package database” would have to be learned first. If established, such an infrastructure could be very handy in many situations, as I said.
But as I initially mentioned: I’m not a “Windows” person, so you may not read my statement at all. 🙂
Two Words:
Get Lost.
If the end user gets a .msi installer, and by running the installer his program gets installed… What is the problem exactly?
I play on both sides of the fence (I do Windows development AND some OSS stuff), so I try to keep an open mind and look at things from both sides. If this makes it easier for the developers of OSS projects to package their code for deployment on Windows, and it makes it easy for users to install it, and it doesn’t in any way affect someone like you who would never use it, what is your problem with it?
Hell, I have no idea if they can even make this work – it sounds quite challenging to me. But if you like Windows (sounds like you do) how can you complain about something that would make more software available on your favorite platform? It makes no sense at all…
Ah, the ignorance dripping from these words. I guess you’re saying OSS developers are not Windows users, so what they want doesn’t really count. Yeah, something which makes things easier for OSS developers is really an awful thing.
There are at least two ways that I know of that completely avoid these problems.
The first way is to use Lazarus.
http://en.wikipedia.org/wiki/Lazarus_(software)
http://www.lazarus.freepascal.org/
Here is an example of a cross-platform FOSS application developed using Lazarus:
http://peazip.sourceforge.net/
The second way is to use Qt and Qt creator:
http://en.wikipedia.org/wiki/Qt_(framework)
http://qt.nokia.com/products
Here is an example of a cross-platform FOSS application developed using Qt:
http://en.wikipedia.org/wiki/VLC_media_player
These methods both avoid the need to use Visual C++.
Both of these methods alleviate a lot of the pain of supporting Windows in an OSS project, as the tools provided are indeed up to the task.
AFAIK, the versions of Peazip and VLC for Windows are exactly the same as the versions for Linux.
I’m a fan of Lazarus, having been a long-time Delphi developer in a former life. And while I do not use Qt, I admire it. That said, neither of those really does anything to help if the project you want to port to Windows is a POSIX C application… The vast majority of OSS projects (especially server stuff) are rooted in very old code, and it’s all in C.
Rewriting a large C project just to allow it to run on Windows is probably not very appealing to the developers of such projects – especially when, frankly, most of them don’t really give a damn about their code running on Windows. It’s users who usually want to see such ports, and in the OSS world, for the most part, users don’t call the shots – developers are beholden only to themselves and other developers. That is simply a fact of life.
This is an olive branch, so to speak – an effort by MS to ease the pain of supporting their OS. If they can put something together that makes the process less painful for developers, it is a win for everyone.
Agreed, and thus you have the old adage that you can lead a horse to water but you can’t make it drink. In the case of Microsoft, they provide the necessary information, technology and tools to developers, but if the developers choose not to use them, there is very little they can do apart from holding back certification for vendors who refuse to use the standard MSI format.
It reminds me very much of the three people who developed the registry who were interviewed on Channel 9; as one of them noted, it was a great idea, but unfortunately many developers failed to read the documentation on how to use the registry effectively, which ended up with the registry being used as a dumping ground for a mishmash of things that should never have been put into it in the first place.
Read the article – this is about letting developers avoid packaging all the dependencies with their applications, and about the security and size implications of doing so.
And how do you avoid packaging a specific library with your app in case it is not installed? How do you automatically grab all the dependencies for a specific app when you install it?
Again, this is mainly about OSS developers and users. OSS software “usually” does not depend on closed source software, so why would you need a payment system for putting OSS software into a repository?
DLL hell is not closed at all. It is just moved to another level.
For example, suppose you have a library compiled with Visual Studio 8 and an application that uses that library compiled with Visual Studio 9. Also suppose that your machine has Visual Studio 9 only.
Under this scenario, you wouldn’t be able to run the app’s debug version, because the Visual Studio 8 DLLs would be missing from the system.
The WinSxS solution is retarded. Microsoft should have simply named its DLLs with a version number.
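Which is essentially what Unix sonames already do; a quick illustration using zlib (exact version numbers vary from system to system):

ls -l /usr/lib/libz.so*
# typically shows something like:
#   libz.so -> libz.so.1         (dev symlink, used at link time)
#   libz.so.1 -> libz.so.1.2.3   (soname symlink, used at run time)
#   libz.so.1.2.3                (the actual versioned library file)

Applications link against the soname (libz.so.1), so several major versions can coexist without any WinSxS-style machinery.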
I personally think that software patents are total nonsense, but since open source developers have to live in this hostile environment, maybe they should patent package management. Then Microsoft will have to either stop development of this, or give up any notion of suing Linux for infringing Microsoft’s so-called “intellectual property.”
Except Linux package management has been around since the mid 1990s or so, and SysV package management for longer.
There’s only 1 year to patent an invention in the US.
In other words, assuming some Linux company/developer *did* patent this back in the early 90s when it was new on Linux… its patent would just now be expiring.
Now, close to 20 years later, there never was any patent to expire and Microsoft is *finally* admitting that this is a good feature and implementing it.
In other words, patented or not, it wouldn’t matter. Microsoft is so slow to admit that something in another OS is good, they waste about as much time as it would take for a patent to expire before even thinking about implementing it. Meaning a patent would just cost more money and wouldn’t help at all: Microsoft is still like the Titanic in its speed of adopting new things, from other OSes, that it didn’t develop itself.
Hell, look at symbolic links. By the time Microsoft does anything new at a low level, it’s already old. With some exceptions, of course (most of which they patented, without a doubt).
IBM is patenting optimization. Are you telling me a year has not passed since developers ‘invented’ optimization?
Ah, software patents are seriously f–ked up.
About time really.
The funny thing is, I remember having a discussion with some people on here (about a year ago) who maintained that it is easier to install applications on Windows than on Linux, because you download Windows installers and “just run them”, as opposed to using Linux software repositories. Well, whoever you were, clearly even Microsoft disagrees with you.
The repository and package manager model is perfect for the enterprise space where the IT department wants to be able to provide a set of validated apps.
But when you’re at home and you want the latest version of something because there’s some specific feature you’ve really been waiting for, then installing an MSI file is easier.
For example, a few months ago I tried using Ubuntu and installing the latest Mono (not the one from the repository). It turned out to be a real pain and I gave up on it. I think that’s the main complaint I’d have about the repository system used by Linux distros: you’re at the mercy of the maintainer as to what version of the software you can use, and often, if you want a newer version, you need to upgrade the whole distro.
If you are that keen on the cutting edge, you shouldn’t be using Ubuntu.
Here is a distribution aimed more at your usage:
http://www.archlinux.org/
This distribution is a “rolling release” distribution. Within a couple of days of the release of a new version of a project – such as Mono, which you mentioned above – Arch would typically have that version in its repositories. You can install it then WITHOUT having to update the whole distro.
Arch is designed for this, whereas Ubuntu isn’t.
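For the Mono example above, that boils down to something like this (a sketch; note that Arch generally recommends full upgrades over partial ones):

sudo pacman -Sy      # refresh the package databases
sudo pacman -S mono  # install or upgrade just that one package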
Or you could use Gentoo, which also uses rolling releases and even offers many ebuilds for packages directly from the developers’ version-tracking systems.
And you can choose to have 95% of your system stable and just use testing/unstable versions of the packages you want to test.
So you see: if you want up-to-date packages, just pick a distro which offers that.
Firstly, the whole point of Ubuntu is that its packages are older but deemed stable. Most desktop users don’t care about the cutting edge; they care about stability. So Ubuntu’s repository model makes perfect sense given Ubuntu’s target audience.
Thus, if you want the latest software, you should be using another distro such as Sidux (which is also Debian-based), Arch Linux, etc.
Secondly, you can’t really use your example to compare Windows to Linux in terms of software repositories, as Linux is far more interdependent than Windows, so upgrading one piece of software on Windows isn’t as likely to break another. On Linux, upgrading one piece of software can mean that if software X uses API Y, then API Y needs upgrading too, and software Z stops working because it depends on the older version of API Y. This is why software repositories are so crucial to Linux: in combination with the package manager, they ensure that you have the correct dependencies installed (the commands at the end of this comment show that chain in practice).
So therefore a better comparison would be a Windows software repository versus Apple’s App Store – which quite clearly works extremely successfully for the very average users you previously stated software repositories were pointless for.
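You can inspect exactly the dependency chain described above with apt (package and library names here are just examples):

apt-cache depends vlc             # the libraries (the API Y of the example) an app needs
apt-cache rdepends libavcodec52   # everything that would break if that library changed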
…I was wondering if we would see, specifically because of MS efforts to separate the kernel space from layered libraries… to grow away from the “Win32” concept. A new build methodology isn’t necessary of course, but it could be a good thing and help shape how the operating system changes over the next decade or so.
I’m not sure what this means?
Yeah I just reread and you are right. Doesn’t make much sense. Can I blame this on the beverage I was drinking at the time? (Baileys + Brandy + Creme de Menthe… very tasty, you should try it sometime!)
Comparing them is misleading, since they are not at the same level at all; it is a pretty common confusion.
The formats, RPM and deb, are comparable.
The low-level tools, rpm and dpkg, are comparable.
The dependency resolvers – yum, apt-rpm, urpmi, zypper, etc. – are comparable to apt-get, aptitude and so on.
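To make the layers concrete, roughly equivalent commands at each level (the package name “foo” is illustrative):

rpm -i foo-1.0.x86_64.rpm   # low-level tool, operates directly on a package file
dpkg -i foo_1.0_amd64.deb   # the deb-world equivalent
yum install foo             # resolver: fetches foo plus its dependencies from repositories
apt-get install foo         # the deb-world equivalent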
The problem with Windows is not its package management (or lack thereof, as is the case) but rather its closed development model. I won’t even go into the myriad issues I have with Microsoft. After using Linux for many years, I simply don’t care for Windows any longer and live perfectly fine without it. I certainly wouldn’t switch back to Windows just because it were suddenly coupled with a powerful system-wide package management tool like Apt or had a repository full of thousands of programs for one-click installation. Sorry, Mr. Ballmer, you’ve lost this formerly loyal customer.
Funny, the thing I hate most about Linux is the shared library system. There are too many unneeded dependencies. The whole system was designed when gigabytes of RAM only existed in sci-fi.
DragonFly BSD is one of the few Unix teams that actually has a plan to fix the system. The major Linux distros seem content with sending users to the command line when the system breaks. The other problem is that users often have to wait longer for program updates.
http://www.dragonflybsd.org/goals/#packages
PC-BSD has its PBI solution, but DragonFly’s is more interesting.
A rolling release distribution such as Arch rarely breaks.
Even stable-release distributions virtually never break if you stick to that release. If you want a new release, re-install the OS (this is easily done in 30 minutes or so by taking a backup copy of /etc, and having a separate partition for /home and /).
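A sketch of that procedure (the paths and partition layout are illustrative):

sudo tar czf /home/user/etc-backup.tar.gz /etc   # keep the old configuration around
# then, in the installer: format and reuse the existing / partition,
# and mount the existing /home partition WITHOUT formatting it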
In a rolling release distribution such as Arch Linux, this wait is typically a couple of days. For a stable-release distribution such as Ubuntu, the wait is six months or less.
Windows users had to wait about eight years for an update to IE6. Even then, all they got was IE7. Likewise, there was a wait of, what, 5 years for an update to XP, and all that was forthcoming was Vista.
Everyone says their favorite distro rarely breaks. The only Unix system I trust to handle updates is FreeBSD and even I know it doesn’t work 100% of the time.
It isn’t a personal problem. It’s a general problem with distros like Ubuntu that claim to be user-friendly but then cause problems for users when it comes to updates. Some Dell owners who had bought machines with Ubuntu were dumped into the command line after trying to update to 9.10.
Telling users to just stick with the release is unacceptable for security reasons. Users also don’t like having to wait for updates while Windows and OS X users can just go click-click-done. But even when Linux users update conservatively, the system can still break down, which requires a trip to the command line. The system is not fail-safe when it could be.
If Linux users were all using Arch and weren’t trying to convert novice users, I wouldn’t care as much. I’m just sick of Mark Shuttleworth’s fraudulent claim about his distro being ready for human beings. It isn’t ready if pushing the upgrade button can lead to a command prompt. Google at least has a good plan for ChromeOS, which involves updating the system by wiping the whole thing and keeping user data online. Maybe it will encourage some of the big distros to rethink the shared library system.
I’d mod you up but I already replied to this article.
FreeBSD, for me, is no better than any Linux distribution and worse than some. What’s needed to make any updating work well is care and attention, which is really, really hard. The bigger your system gets, and the more variations that are possible, the harder it becomes.
For a system like Arch I have to wonder: What happens if you don’t do any updates for a year? With a rolling release type scenario can you still update to the latest without issues? I am betting no, but I’d like to hear from Arch users who have done this.
I am a Debian guy myself. If you track testing, you are doing a sort of rolling release, which is fine so long as you keep rolling. If you target a particular stable branch, you’re OK, but you get security updates only. If you track the ‘current’ stable you get something like a rolling release, which I’ll call a staggered release. Debian is vast, but you are typically okay if you are updating from stable->stable+1, and likely okay if you go from stable->stable+2, but after that I have seen some issues occur.
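For reference, the usual stable->stable+1 dance looks something like this (the release names are just an example pair):

sudo sed -i 's/etch/lenny/g' /etc/apt/sources.list   # point at the next release
sudo apt-get update && sudo apt-get dist-upgrade     # pull the whole release in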
As for your “dropped to command line” issue: you’re right. Whatever you can say about Windows, they did get safe mode working well – only a hugely hosed Windows won’t get to safe mode. That said, some useful things don’t work well, or at all, in safe mode (msiexec, I’m looking at you). For your Ubuntu example, it would be helpful if they had some *reliable* fallback X11 configuration that fires up if X keeps crashing. I know they tried to make something along those lines, but it is not as dependable as Windows’ simplistic safe mode. I’m imagining a graphical single-user mode here.
It’s really automagic installation and upgrading of open source programs. I really hope no one thinks this is an endorsement of the shared library system.
Shared libraries aren’t a bad thing. RiscOS had the best non-shared library system I’ve seen (of course, some libs were shared), but I wouldn’t go back to it now that I’m used to package management. If you’re not using shared libraries, you might as well build everything static. That can be the right thing to do, but certainly not for everything, which is why no OS I’ve ever heard of does it.
Stable system libraries can be shared, but on Linux there is not only a lack of a standard library base; applications also share all kinds of third-party libraries, which creates unneeded dependencies.
As for RiscOS, it should be a model for application installation and removal.
Microsoft has finally nailed one serious issue on the head.
If they succeed, it will remove one of the advantages of Linux that makes it more comfortable to develop on.
I don’t know about you, but I find it quite scary that they took this problem seriously.
No, it is not fine.
Try building an open source project on Windows. It’s a nightmare. You end up chasing around the net downloading the right versions of libs and setting up the environment with them in the right way. More often than not, Visual Studio comes into play, which can be a pain even if you have a full copy. I’ve done it a few times, and other times I’ve just given up (on the same thing I can build with no issue at home on Linux).
On a Debian based system, it’s something like:
sudo apt-get source <package>
cd <package dir>
./configure
make
sudo make install
and you are done. Worst case, it tells you about some libs you need to install, which is just a “sudo apt-get install <lib>”.
For non-programmers, you get:
* A place to get software that is safe.
* A place where the software is organised to work together.
* Updates!
People have been wetting themselves over the iStore, but to me it just looks like a package management app.
The lack of package management is one of the things that really grates when using Windows.
On a Debian based system, it’s something like:
sudo apt-get source <package>
cd <package dir>
./configure
make
sudo make install
Actually it’s easier than that:
$ apt-get source <package>
$ cd <package dir>
$ dpkg-buildpackage -rfakeroot
$ sudo dpkg -i ../package.deb
Cheers, I’ll try that next time. But it doesn’t change my point. Way way way way way way way easier to do than on Windows.
Download foreign source
Uncompress downloaded file
./configure
make
checkinstall
Freaking fantastic to build a tarball into a .deb during install like that. The few programs I’ve needed outside the Debian repositories have worked beautifully.
That’s only because you don’t use Win32 and MFC and you try to link against some UNIX-intended libs.
It sounds like this project is aimed at making compiling open source software on Windows easier, which is not what the headline and summary made it sound like.
What I expected was for Microsoft to introduce a canonical update framework that any developer could use to get his application to be able to check for new versions and update itself without said developer being required to roll his own system. This would be partially copying–and partially building on–the package management systems found in Linux land.
Instead what I get is something which purports to allow open source software components to be compiled more easily and to share libraries more easily. Well, this is great but it doesn’t do much for Windows and doesn’t do much for Windows admins. Oh, did I mention it’s just a concept and hasn’t got anything usable yet?
“Release early, release often” – it’s quoted so often like this that people seem to forget the caveat to “release early”: get something *working* and then release. You don’t have to finish it, and it doesn’t have to be perfect, but if you want your project to succeed, you should make something that at least builds and shows the direction in which you are moving.
http://www.microsoft.com/web/
It downloads and installs various open source web platforms along with their runtimes and dependencies.
They work great!
There’s a problem with having a package manager for Windows, unlike Linux. With Linux, almost all software is open source, and it comes from upstream. There is no Linux distro that creates its own source code: all Linux distros decide what code to pull from various upstream projects, compile that code for their distro, and then dump the compiled binaries into their repositories so users of the distro can pull from them.
Note that the *distro* package manager figures out the dependencies between packages for *that* distro. With me so far? Why is this bad for Independent Software Vendors on the Windows platform? Because it’s the distro package managers that decide what software goes into their software repository and what doesn’t. If Microsoft *is* the package manager, and *you* are an Independent Software Vendor, and you *want* your software to be in the repository, *you* have to ask Microsoft to pretty please put it in for you.
Do you think Microsoft will do it out of the kindness of their hearts? Do you think they will do it for free? Do you think they will put your application into their package manager if it competes with a Microsoft product? What happens if there is an open source program in the repo that might not be as good as yours, but it’s *good enough* for most people, and it’s in the repository and *your* software is *not*? What other option do you have? The only package manager for Windows is Microsoft’s. Which means *you* are so %&&%$*!
If you think this concern is unrealistic or exaggerated, compare it with the situation for iPhone software. Who controls what software can go on the iPhone? Are there any cases of software being removed from the App Store because it competed with Apple? Were there any apps denied because of a pissing match with Apple?