“Intel has created a new network management and configuration system for Linux called ConnMan – but not everyone is pleased to see it challenge NetworkManager. Ars looks at the pros and “conns” of the decision to create the new software.”
Yet another replacement of an already established system with something completely different because one developer doesn’t like the way it’s designed. Here’s the prime reason Linux-based OSes haven’t made much general desktop progress: necessary frameworks are replaced so often that writing applications for them is pointless and not worth the time, since they’ll just have to be rewritten for some new subsystem in a few years. I like open source, but even so, I wouldn’t waste time targeting Linux distros. Come on Sun/Oracle, give OpenSolaris the boost it needs and we may see the open source desktop actually become generally viable. At this rate we sure won’t see it with the Linux OSes, unless one of them decides not to just go with the flow of replacing X with Y and instead takes the time to make sure everything integrates well and stays stable rather than living on the bleeding edge. Too much competition, not nearly enough collaboration.
Your argument assumes that the same does not happen on Windows, on MacOSX, and all other proprietary OSes.
Well, neither does yours, when you come right down to it: you haven’t stated anything about what does or doesn’t happen on open source versus proprietary OSes. However, I did not say that this constant replacing of subsystems was a problem inherent to open source; I said it is an issue concentrated specifically around Linux and the community around it.
Subsystems do eventually need to be replaced or overhauled regardless of OS or development model, but under Linux in particular one system no sooner gets somewhere close to stable than another comes along to replace it. OSS/Free to ALSA, devfs/hotplug to udev, ipchains to iptables… the list seems endless, and the result is as fickle as the current dev of the day’s choice of dinner. Contrast this with *BSD, Solaris, and the like, and you see that those platforms have managed to keep their subsystems relatively stable for a very long time. FreeBSD, for example, though at major version 7, can still run applications built as far back as version 4. A driver built for Solaris 10 will typically still work, pre-built no less, on OpenSolaris right up to the latest builds. This is typically not the case with Linux unless applications are either statically linked or linked in such a way that they don’t look for specific library files but instead load them by name and/or version. Drivers are not backward or forward compatible at all in Linux: a driver must be recompiled to match the running kernel version in order to work properly, and even if you forcibly load such a driver into the wrong kernel it may cause erratic behavior.
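To make the “load them by name” point concrete, here’s a minimal sketch of runtime loading with dlopen(); the library soname and symbol (libexample.so.1, example_init) are made-up names purely for illustration:

```c
/* Minimal sketch: load a shared library by its soname at runtime
 * instead of linking against one specific versioned file.
 * "libexample.so.1" and "example_init" are hypothetical names. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The dynamic linker resolves the soname through its normal
     * search path, so the application doesn't care which exact
     * minor/patch revision of the library is installed. */
    void *handle = dlopen("libexample.so.1", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    int (*example_init)(void) = (int (*)(void)) dlsym(handle, "example_init");
    if (example_init)
        example_init();

    dlclose(handle);
    return 0;
}
```

Build with -ldl; as long as the soname stays the same across library updates, the binary keeps working.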
Now where am I citing proprietary examples? Where did I say this is a problem inherent to open source development? It has nothing to do with open source, and everything to do with the chaotic and anarchic way Linux and most distributions thereof are developed. It is the result of being a mishmash of components slapped together rather than a complete system, as other FOSS platforms are.
Yeah, here’s a newsflash for you: hardware moves faster than the long-lived subsystems you like.
This is about MIDs, small netbooks, and even advanced cell phones. FreeBSD and (Open)Solaris don’t even run on those.
Netbooks? Hmm, funny you should say that… I have both FreeBSD and OpenSolaris running on my Eee 1000HE. Here’s a news flash for you: just because the hardware changes, a completely new subsystem is typically not needed if the original one can be extended, particularly if it works on the same principles. Take networking for example: apart from the interfacing needed to bridge to new drivers and/or stacks, the actual process is very similar, especially if the userland is decoupled from the physical hardware handling, as it should be. But of course, that assumes things are designed well instead of slapped together… oh, wait.
Do these OSes present a moving target to application developers in the same way that Linux does?
And that’s at the source level — at the binary level it’s utter chaos, such that proprietary apps are forced to link everything statically except libc …
The targets don’t move after the devices are out. And, both Moblin and Maemo *are* Linux.
No way. You need to create the packages (debs) for the target OS anyway. Static linking would be ridiculous.
And BTW, daemons like this use dbus.
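For what it’s worth, “use dbus” just means the daemon puts its API on the bus and any client can send it messages. Here’s a minimal libdbus sketch; the only service and method it assumes are the bus daemon’s own org.freedesktop.DBus.ListNames, not any NetworkManager or ConnMan interface:

```c
/* Minimal libdbus sketch: connect to the system bus and call
 * ListNames on the bus daemon to see which services (e.g. a
 * network management daemon) are currently registered.
 * Build with: gcc demo.c $(pkg-config --cflags --libs dbus-1) */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
    if (!conn) {
        fprintf(stderr, "connect failed: %s\n", err.message);
        return 1;
    }

    DBusMessage *msg = dbus_message_new_method_call(
        "org.freedesktop.DBus",   /* destination service */
        "/org/freedesktop/DBus",  /* object path         */
        "org.freedesktop.DBus",   /* interface           */
        "ListNames");             /* method              */

    DBusMessage *reply =
        dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
    dbus_message_unref(msg);
    if (!reply) {
        fprintf(stderr, "call failed: %s\n", err.message);
        return 1;
    }

    /* The reply is an array of strings: one bus name per service. */
    DBusMessageIter iter, array;
    if (dbus_message_iter_init(reply, &iter) &&
        dbus_message_iter_get_arg_type(&iter) == DBUS_TYPE_ARRAY) {
        dbus_message_iter_recurse(&iter, &array);
        while (dbus_message_iter_get_arg_type(&array) == DBUS_TYPE_STRING) {
            const char *name;
            dbus_message_iter_get_basic(&array, &name);
            printf("%s\n", name);
            dbus_message_iter_next(&array);
        }
    }

    dbus_message_unref(reply);
    return 0;
}
```

A network management daemon just shows up as one more name on that bus, and its clients talk to it the same way.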
And why would I, or any other developer, want to create a deb, an RPM, a… ad infinitum, for an endless number of systems with an endless number of library variations? There are better uses for the time that would take. Sure, if the software were FOSS then a packager could do it, but then I’d have to contend with downstream patches x, y, z in Ubuntu and downstream patches x, y, z in Fedora, etc. etc., all of which have the potential to introduce bugs that aren’t in the original software.
No thanks.
Because you’ll want to test on that platform anyway.
It doesn’t really take much time to make a package, once your source is ready for packaging in the first place. It’s one command to build it, really (“dpkg-buildpackage -r fakeroot” or somesuch). And package creation can be automated.
I’d rather have this problem than see the development of the Linux platform halted just to stay stable. Linux development in the mobile space is advancing rapidly right now, and attempting to provide full binary compatibility across device generations is right out. It’s going to take years before we even have source compatibility.
And for new phone generations, you’ll want to polish the UI anyway. Recompiling the source is peanuts compared to that.
Sounds like a rather harsh reaction to the exciting events taking place these days.
The time that could have been used to fix bugs that occur on Fedora is now used to create an RPM instead. How is this a good thing?
It does. It takes time to learn how packaging works and it takes time to get up to speed.
How do you write Debian packaging files? I keep forgetting, so every time I have to hunt Google for an example and the related documentation. Ditto for RPM spec files, which I haven’t created for 5 years now, but the last time I remember it being quite a horrific experience involving rebuilding the package numerous times to get it right. Maybe an experienced packager can do it faster, but it takes time to become experienced.
Why do you see “stability” as synonymous to “halting development”? It’s definitely possible to have both. Why not strive harder to have both?
But you’ll have to create the rpm anyway. Later on, it’s just about building the rpm.
Yeah, but again, this needs to happen anyway. Once you have the packaging working, the changes to it are easy to make.
Most libs aim for source and binary compatibility across “major” versions. Of course we can always say “work better!” to all the lib vendors, but it’s always about juggling the resources available.
Personally, I don’t think binary compatibility matters all that much. Lacking it is a bit of a hurdle for closed source vendors, but not enough to be concerned about. We’ll see whether it becomes a real problem a few years down the road, but for now, thinking too much about it is not worth it.
You keep talking about how this “needs to happen anyway” as if it’s a fact that can never be changed. The point is that it shouldn’t have to be like this, that the community should strive to unify packaging efforts in order to reduce wasted efforts. Why shouldn’t the community do that? Why should this be the mentality instead?
Some thoughts:
– The .deb packaging format isn’t going anywhere (way too much infrastructure exists for that to happen)
– Debian packaging is hard to learn. It could be made simpler by improving the way packages are made, without needing to change the package structure as such (e.g. by providing better “debhelper” scripts, GUI wizards etc). I think Ubuntu is already exploring something like this.
– C/C++ build toolchains suck (automake, I’m looking at you). Improving that story seems more urgent to me than fixing the packaging. qmake, scons are better, possibly CMake too (never tried it).
I certainly want to avoid looking like “stop energy” in this matter, though. There are many fruitful avenues for improvement in this area, and many incremental improvements are possible. But trying to get large communities (Debian/Ubuntu) or communities with significant financial interests (Fedora/Red Hat) to adopt “revolutionary” changes seems very hard.
* I was referring to OSX and the flavors of Windows. Their core APIs don’t change as rapidly as those of Linux.
In the case of Linux-based mobile platforms, the best-case scenario is that the platforms stay stable, but then they will eventually decouple from the wider Linux ecosystem. Otherwise, with every new device the platform has to be drastically updated so that software from the desktop world remains easy to port.
* One example is Mathematica. It links only to system libc and Xorg libraries. It provides its own Qt, Mesa, LAPACK and Intel Math Kernel Library. Acrobat Reader follows a similar scheme, except it uses a system 32-bit xulrunner.
Notable exceptions are Skype and Adobe Flash, which link to various system libraries; Skype is 32-bit thus far. Then maintainers encounter the same difficulties as with binary GPU drivers: mismatched versions, mysterious bugs, etc.
I would call this a mess.
Did you even read the article, or are you just trolling? He tried to use NetworkManager and it turned out that it does not fit environments that consist of more than WLAN and Ethernet.
“One problem was the difficulty of extending NetworkManager to support additional kinds of connectivity; Holtmann says that significant portions of NetworkManager’s code base would have to be rewritten to facilitate support for WiMAX, for instance. He faced similar challenges when he attempted to overhaul NetworkManager to enable tight Bluetooth integration. (…) ConnMan is lighter and has fewer mandatory dependencies. PolicyKit and the udev device management system are optional dependencies, meaning that ConnMan can work without them in environments where they aren’t needed.”
I did read it, and I’m not impressed with his reasoning for a whole new subsystem. Reworking the parts of NetworkManager that are a pain, sure. Making it more extensible, definitely; extensibility is never a bad thing. But creating a whole new, and incompatible I might add, system is just overkill, yet oh so typical of the Linux community.
Sounds like ConnMan may be just what the doctor ordered for Linux networking. The current NetworkManager-based mess does all sorts of things wrong: it’s bloated, has lots of dependencies, doesn’t integrate at all with traditional Unix configuration files and, AFAIK, doesn’t work from the CLI or from WMs other than KDE and GNOME. If ConnMan can fix even some of those problems, I’d be happy.
Oh, in that case, you can expect to be very unhappy indeed. Because NM *does* integrate with traditional Unix configuration files – not perfectly, but supporting old and new configuration styles is part of what you call bloat. ConnMan doesn’t. It’s designed for embedded systems, which means it doesn’t have any need for compatibility with anything else.
You don’t like that NM doesn’t work from command line or environments other than Gnome and KDE? Well, according to their website, ConnMan supports Gnome. You could presumably write other UIs, but that’s also true of NM.
You want to talk dependencies? They’re a little smaller, but not much. ConnMan uses most of the same components as NM – wpa_supplicant, dbus, PolicyKit. Glancing at the code, it uses the ModemManager project that spun off NM. Seems to use libudev to find devices, just like NM does now that HAL is being retired.
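And for context, “uses libudev to find devices” boils down to an enumeration loop roughly like the one below; this is a generic sketch of the libudev API, not ConnMan’s or NM’s actual code:

```c
/* Minimal libudev sketch: enumerate devices in the "net" subsystem,
 * which is roughly how a daemon discovers ethernet/wifi interfaces.
 * Build with: gcc demo.c $(pkg-config --cflags --libs libudev) */
#include <libudev.h>
#include <stdio.h>

int main(void)
{
    struct udev *udev = udev_new();
    if (!udev)
        return 1;

    struct udev_enumerate *en = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(en, "net");
    udev_enumerate_scan_devices(en);

    struct udev_list_entry *entry;
    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
        const char *syspath = udev_list_entry_get_name(entry);
        struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
        if (dev) {
            /* Prints interface names such as "eth0" or "wlan0". */
            printf("%s\n", udev_device_get_sysname(dev));
            udev_device_unref(dev);
        }
    }

    udev_enumerate_unref(en);
    udev_unref(udev);
    return 0;
}
```

On top of this, a daemon would typically also monitor udev events to catch hotplug, but the initial discovery step really is about this simple.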
In fact, looking at the code base, it seems to me that the most significant difference between the two is that ConnMan appears much more modular – it can be built without wifi, or without ethernet, or bluetooth, etc. Probably useful in an embedded system with known targeted hardware, but as a compile-time option, of no real benefit in a general-purpose desktop.
If you don’t like NM, chances are ConnMan isn’t for you either…
I’m not sure how much the churn has helped or hurt Linux. But the churn is certainly there. OSS, ALSA, PulseAudio, devfs, udev, son-of-devfs (or whatever the upcoming thing is called), NetworkManager, ConnMan…
The list goes on. Sometimes I wish that someone would actually think out these designs before the distros start depending on them.
Let’s not forget, people, that the very principle of open competition between ideas and implementations is the thing that makes this area so vibrant, productive and innovative…
Anyone else noticed that Intel’s guy was complaining that NetworkManager isn’t good for embedded systems, while Red Hat’s guy was complaining that ConnMan isn’t good for desktop systems?
That’s like Palm complaining that Ubuntu isn’t good for embedded systems and Canonical complaining that the Palm Pre isn’t good for desktop systems.
This is like watching South Park’s court trial episode, with both sides using the Chewbacca defense.
From TFA:
I’m a fan of ConnMan already.
Killing network manager is NOT going to make me sad…
You seem to be under the misapprehension that, as a dial-up user, you’ll find ConnMan an improvement. You won’t. ConnMan is essentially the same thing as NM: it has a few minor differences in approach, but nothing really significant.
I really like ConnMan, personally. Many of the objections about it not being able to do certain things that NetworkManager can will be addressed over time, since it is still fairly new and NetworkManager has been around for some time. I like a lot of the ideas in ConnMan, and the fact that it requires fewer resources makes it ideal for netbooks, where the key is to squeeze out as much battery life as humanly possible. All in all they are both good and each has its advantages. I look forward to the innovations that each will bring, and that each will push the other to make in order to stay “dominant”.
One thing that is great with ConnMan is that it moves the complexity to the backend, out of the GUI. The fact that the KDE 4 NetworkManager frontend was broken for so long (or so I read) shows that something is wrong with the architecture: the UI should certainly be easy to “slap together” quickly. It’s the frontend that needs to be implemented several times for different platforms.
NetworkManager has sucked for so long now, especially when you use wireless.
Even simple things, like detecting that a network cable is already plugged in, can be missed when it detects a wireless network at startup.
To connect to my university wireless I had to install WICD, which, as far as wireless goes, beats NM hands down. At home I don’t even install NetworkManager, as all I’m doing is getting a DHCP connection from my modem.
I’ll give ConnMan a go for sure, because at the moment using no network manager at all is easier than using NM.