The Debian fork website, put together by the Veteran Unix Admins (VUA) group, has announced that the VUA has decided to fork the popular Debian GNU/Linux distribution. The VUA is critical of Debian’s decision to adopt systemd as the distribution’s default init software and to allow software packaged for Debian to depend directly on systemd. The VUA plans to create a fork of Debian using SysV init as the default init software and is asking for donations to support the endeavor.
The default init system in the next Debian v8 “Jessie” release will be systemd, bringing along a deep web of dependencies. We need to individuate those dependencies, clean them from all packages affected and provide an alternative repository where to get them. The stability of our fork is the main priority in this phase.
There has been a lot of debate over systemd in the Debian community in the past few months and it will be interesting to see if this non-systemd fork of Debian gains support.
Is there anything indicating this is more than some disgruntled guy putting up a webpage and an empty GitHub repo?
‘Veteran Unix Admins (VUA) group’ ?
And you are supposed to donate…
Meanwhile, Jordan Hubbard, one of the three FreeBSD founders, argues that FreeBSD needs to adopt a service management framework like that in systemd/launchd.
As it stands, I don’t think this is a credible effort: it’s not open, there are no names of any individuals involved, no real technical info, no way to respond or interact, only a donation option. I’m afraid this is a money-grabbing scam; nothing will ever come of this.
Well I do hope this is genuine, because being able to fork a project when you disagree with what the current project maintainers are doing is really a great freedom which FOSS offers.
And although I don’t agree with the creators of this fork, as I myself have no problem with systemd, I find it important that people see that this is a viable option.
So if this ends up being just a bunch of noise by some disgruntled user(s), or worse a hoax, then it will likely end up harming the idea of forking, as the next time someone genuinely suggests a fork, detractors will say ‘hoax like that other one!’, or ‘waste of time, these forks never amount to anything but hot air’, etc .
Now, since Jesse Smith referred to the Veteran Unix Admins (VUA) group as if it were some established entity, rather than nothing more than a moniker someone used on a webpage to describe an entirely unconfirmed group of people, I wanted to highlight that for all we know this is a single guy just wanting to make a fuss or get some easy money. The lack of names of anyone involved makes for very little credibility, particularly when asking for funds.
I’m surprised Hubbard didn’t just advocate for folding all of FreeBSD into OSX…
The problem for a lot of system administrators is that systemd is a heck of a lot more than service management. No one has any idea what the scope of systemd is.
Again, the scope is to provide the core parts which, together with the Linux kernel, become a base OS that can be targeted as a de facto standard.
Just like the BSD operating systems (FreeBSD, NetBSD, OpenBSD) all ship as a base OS that includes system tools and daemons, systemd is offering the same thing as a cross-distro solution for Linux.
Essentially, systemd is bringing the Linux ecosystem back closer to UNIX principles (notably Solaris). For the record, Slackware was one of the early Linux systems using BSD-style system management before the general move to sysvinit.
Really? Wow.
Regardless if you consider systemd good or bad, one thing it doesn’t do is bring Linux closer to the UNIX principles.
Why do people keep saying this? There’s nothing BSD about what Slackware does.
Slackware’s init system is/was based on BSD’s. It’s one of the main distinguishing points of that distro, actually.
Edited 2014-11-30 18:41 UTC
Again, no one has any idea what the scope is.
I would hope FreeBSD goes with OpenBSD’s rc system (which is nice and simple but perhaps a bit limited) or Upstart or runit.
I’ll believe they’re serious and wish them luck.
I do hope they’re reasonable about removing things from the “deep web of dependency” which for the most part doesn’t exist.
Sure, udev and logind, but those already have replacements which may need a bit of work but are already out there.
But removing socket activation and startup notification, which are optional, from services would just be silly, so I hope they don’t waste their time on it. Even if there might be a “systemd” string somewhere in the package.
I wish them all the best, and hope a well maintained Debian-like clone without systemd will emerge. But it seems strange to me to start with Jessie and rip out systemd. Doesn’t it make more sense to start with Wheezy if they want to continue maintaining and upgrading it??
jessie has many thousands of updated packages across the board while the actual systemd interdependencies are at the moment relatively confined. It’s less work to revert the systemd infection in that small set than it is to start with the previous stable and then upgrade everything else.
The initial aim here is not to fork and maintain all ~20000 separate source packages; that would be an impractical burden. However, creating a sane set of base packages is realistic and manageable, and will permit a clean upgrade from Debian wheezy to Devuan jessie-equivalent. The specific details are being worked out.
It’s real.
There are over 149 people on IRC in #debianfork on freenode. This is a serious movement, and those numbers have been increasing for over a week.
psst, I heard it’s possible to have more than one IRC account at the same time.
account?
Is it still real if those 149 ‘people’ are a bot and 148 people asking ‘wtf is this’?
I would not be so sure: I am one of those 149 users hanging out in #debianfork (and now #Devuan, too) and I am there purely for the entertainment value.
As much as I dislike SystemD, the clusterfsck of a disaster that is SysV init is in absolutely no way ever the better alternative. PID files? Really? Is it 1985?
I’m not surprised. A lot of sys admins who have otherwise been silent over the years will now not be.
Oh noes! The tech world is going to tremble!!!
https://www.youtube.com/watch?v=lKie-vgUGdI
I would be. Sys admins run a lot of stuff…….
Yeah, the good ones, the ones whose opinion are worth a damn, sure do run things.
The thing is, no competent sysadmin is going to do something as monumentally idiotic as going out of their way to announce to the world their inability, or unwillingness, to expand their technical skill sets. That is professional suicide in this field, and that is basically what you’re proposing.
This is what is more likely to happen: an admin in a Red Hat/CentOS shop will get the memo that they are moving some systems over to 7. They’ll spend a couple of days at a tech seminar, or on their own time, learning the new OS, including systemd, and then they’ll move on with their lives.
The view in industry is definitely more pragmatic than the passionate vistas found in some people’s basements. Which is why you don’t seem to understand that most people are not passionate enough about something as arbitrary as an init system to commit career seppuku over it.
What a load of meaningless crap spoken by someone who gives an opinion but doesn’t ‘do’ anything.
There are already (a lot of) admins on versions below Red Hat 7 who are refusing to move to it. It’s not so bad at the moment in the RH world because upgrade cycles are slow. Some are still on Red Hat 4/5.
We’re not doing this for the goodness of our fecking health. We’ve tinkered with enough test systems and seen enough to know this is going to be a whole world of hurt.
… you should not be so hard on yourself. But try to get out that basement every now and again, some fresh air does wonders to one’s outlook.
In any case, the only thing your reply provided me is further evidence that any contact you may have with the real-world component of this issue is merely coincidental.
ROTFL. What a load of meaningless crap. Again.
Yes, because if we were to actually discuss the topic at hand you’d start to get more embarrassed and have to get even more off-topic.
I’ve described what’s happening out there in the real world of people using Linux distributions to get work done, and there is justifiable concern. That’s the way it is. If it were just an init system few would bother as we’ve been through so many, but it most certainly is not. Spend a couple of days at a tech seminar and move on with their lives…?! Is that you Lennart?
The simple fact is that systemd is a problem, and as it entrenches itself into the Linux distribution world more problems will become very apparent. There are those of us who’d like to learn our lessons from the past several decades and avoid what is an obvious disaster, certainly in the server world.
… but enough about yourself. Also, “wooosh.”
This is how the real world works: if my team has to run app/stack X that requires OS Y, then any sysadmin who refuses to install and support OS Y, for whatever personal stylistic reasons, will find themselves not long for employment with the organization.
In any case, from reading your posts I get the impression that your passion for the subject at hand is only surpassed by your lack of understanding of it.
SystemD is too invasive, yet I don’t think sysv init scripts are the way forward either. We DO need something with proper process supervision and dependency coordination, areas in which I’d give sysv a failing grade. So long as it’s “sysv versus systemd”, then I think it’s a lost cause. I’m much more interested in other init alternatives and I’m glad they at least alluded to the possibility. I’ll get on board once they get a contender which isn’t sysv.
Has anyone actually attempted to install Debian Jessie 8.0 and then proceed to rip out systemd? Or replace it with upstart or anything else out there?
Debian usually makes it pretty easy to do such things; I don’t think systemd is an all-in thing. Granted, if you’re using it for GNOME, it quite possibly is.
But for a server?
The Debian fork is considering everything from uselessd to OpenRC to Upstart. Remember this is very early. Their first goal is to rip out systemd as a hard dependency. Then they need to start creating packages to make it behave sanely whilst allowing other init systems to come in. Right now jessie needs systemd just to install libsdl2-dev. It’s embedded itself into everything. It’s going to take a lot of work to remove it.
There is no systemd dependency on libsdl2-dev AFAICT.
https://packages.debian.org/sv/sid/libsdl2-dev
It’s embedded itself into everything? That sure is an interesting feature of systemD.
My main problem with the situation is that the focus is way too much on the init part.
Sure, systemd can be used as the init process, but the main reason lots of packages have gained dependencies on the systemd package is because of its role as a set of system services.
If any other init is being used, what daemon or daemons are they planning to provide the system services of systemd?
There is a lot of mentioning of packages depending on systemd, but often it is software using the system service facilities, not depending on any of systemd’s init capabilities.
The alternatives could of course still run systemd as a service (PID > 1), but given the hate I doubt they would.
How are these alternatives planning to address those missing parts?
anda_skoa,
That’s actually the problem, the components providing these features should not have been tethered to an init system. We should be able to depend on those features independently of the init system. Conversely we should be able to depend on an init system independently of the other features.
Say a package rightfully has a dependency on some feature, but because it’s systemd it brings with it a wealth of “features” that are actually unwanted and have nothing to do with the original dependency.
Systemd fixed real limitations of sysv init scripts, which I think is great. However it introduced excessive coupling between unrelated services, which is both unnecessary and harmful to alternatives. When programming, unnecessary coupling is often a sign of a bad design. With Lennart P, I don’t suspect incompetence, therefore I worry that it’s being done deliberately as part of a strategy to exclude alternatives.
I don’t think this gets to the root of the issue. Personally I don’t oppose systemd for the sake of hating it, but I do think it’s bad for the linux ecosystem when too many features get absorbed into the init system. Many don’t mind because systemd didn’t step on their toes. However let me create a hypothetical scenario where systemd does just that and imagine a full fledged web server gets feature creeped into systemd to offer more services. If systemd wrote their own, they would rightfully be accused of NIH, but let’s assume they fork nginx and they integrate it without issue.
So in this hypothetical scenario, the stereotypical systemd arguments still apply, it is the same sort of tight coupling as before, and undoubtedly many new package dependencies will grow on the bundled nginx. Nginx is a great choice, but that doesn’t mean everyone wants it. Some people want to be able to choose system daemons independently from the init system and other features. Maybe such people are just being greedy SOBs with entitlement issues, they should be happy with what they got for free, and if they don’t like it they need to suck it up and support alternatives themselves. However what makes this especially frustrating is that there was no need for nginx to be tightly coupled to the init system to begin with. Everyone who wanted nginx could have easily installed it without it being a hard dependency of the init system.
This is mostly true of systemd’s other non-init features too. Many people in this debate seem to assume opponents are against having new features, but that’s not really the case, we just want them to be added to the right place in packages that users can easily choose.
Edited 2014-11-28 16:45 UTC
I think we had this argument about strong coupling before. That is my main complain about systemd and its possible consequences as a barrier to alternatives init management and system services.
Also, it does not seem to me that we have had anything with such a built-up barrier before, not even XFree86 to Xorg or anything else that transitioned to better alternatives, nor anything with so large a scope. In this case, things may become unavoidable.
I actually like many aspects of systemd; it is way better than sysvinit could ever be, and some services it provides were really missing in the field, no doubt about that. But the “packaging” as a whole is starting to become a little uncomfortable to swallow. Definitely, things could be handled better.
And, like you, I do think that LP is doing it on purpose, which is kind of worrisome. I hope we are both wrong, though.
Edited 2014-11-28 18:54 UTC
Alfman,
But isn’t this the case?
Which of the features depend on systemd being init?
As far as I can tell the only kind of program that would depend on systemd being used as init are system daemons that cannot be started by any other means.
Given that such daemons usually are not Linux specific I find it difficult to consider that anything is currently depending on systemd as init.
This is kind of the point I am trying to make.
Looking at systemd as init is focusing on the wrong aspect.
Its main selling feature is providing programmable access to system properties, being able to run as PID1 is just a bonus.
One could easily have sysvinit, upstart or whatever launch systemd as one of many services, launch and control all other services as wanted, and still have all programs that use systemd’s services work.
As I wrote above, I would be surprised if any of these programs currently depend on systemd’s init capabilities.
But I would welcome any example of course.
anda_skoa,
Oh I see, no it’s not that systemd needs to be init (there may be a few exceptions). However nobody really thinks having two init systems is a good design for a long term solution. So if systemd is going to have an init, then it should be loosely coupled with other system services. System administrators should not be impeded by tight coupling.
But it is what it is, if systemd didn’t have an init system we wouldn’t be having this discussion because the resulting systemd services would clearly be loosely coupled with the init system.
It can be a nice feature. However, a major problem with systemd’s approach is the instability of the interfaces, such that all the system services have to be deployed in lockstep with the init as a monolithic system. This is not a robust design, and it becomes difficult for administrators to manage subsystem X independently from upstream subsystems Y and Z. I don’t think such tight coupling belongs in an init system to begin with. Nevertheless, if it’s going to be there, then the least systemd could do is standardize the official interfaces such that alternatives can implement them and not be broken by the next systemd update. This would still be controversial, though, because of the controversies with dbus itself.
I find it troubling that systemd may have an incentive to not standardize interfaces to increase the maintenance burden for alternatives, which are forced to play “catch up” to remain compatible.
Edited 2014-11-29 17:29 UTC
Alfman
Only 2 init systems? Alfman, I think you have forgotten about Docker and the number of inits a system could be running. Systemd is particularly designed to be stackable; sysvinit really does not tolerate the idea of being stacked at all. Future init systems are required to be stackable. Any init system that is not stackable needs to be dropped from the running or fixed so it is stackable.
No, we would still be having this argument, because it would still be a very large bundle of services coming in a single group.
To be correct systemd is working on standardising official interfaces like logind.
Also, your claim that a lock-step deployment is not a robust design is wrong. The list of documented issues over the years where parts X + Y + Z together equal the complete death of a Linux system is quite long.
Like a system admin attempting to run daemontools alongside sysvinit, resulting in a major stuff-up: some of the processes get started automatically by the LSB definitions in sysvinit scripts and others get started by daemontools, resulting in a confused mess.
Sysvinit is not robust because it does not cover enough, so daemon monitoring tools have to be mixed in with it, and those then end up fighting with sysvinit over what should start what. Systemd might have gone too far the other way, but at least it’s saner.
Daemontools, for example, monitors log system access. Monitoring whether an application is writing logging messages is a reasonably good way to tell if the service is still alive.
Really, init, logging and process tracking all need to be part of one system to be able to present system administrators with somewhere near correct information. There are other things in systemd, like getting DHCP or network time, that you could call feature bloat.
Systemd, no matter how you slice it, is technically better than sysvinit. Now, if people don’t like systemd, fine: make something technically equal.
The Gentoo guys I respect: they did not like systemd and went out and started OpenRC with the objective of being able to feature-match systemd without being a bundle from hell. OpenRC is a reimplementation of sysvinit ideas with modern tech.
Remember, anything designed for Unix that is older than 1990 was not designed for security. Ask those cleaning up the X11 server code base. Sysvinit is also designed on the idea that everything will be properly behaved. Reality sucks, because a percentage of applications will not behave properly.
I’ve asked before, because I really don’t know the answer, but how does Apple’s launchd (BSD or MIT licensed) stack up against systemd, and does it meet those requirements you mentioned?
I’m rather fond of Upstart myself. It does service management and does it well.
Then we DO need something that does exactly what systemd does for these areas, because it’s the first one to actually do them 100% correctly. The consequences of doing them right are what a lot of people object to (being the cgroups administrator, which leads to the logind tie-in, which leads to the GNOME tie-in, etc.).
So maybe what you would propose is something like Upstart that does it 95% correctly but doesn’t have the consequences of a systemd setup? Then you just need to completely forget about those corner cases it can’t cover and never try to fix them.
Bill Shooter of Bul,
The usage of “correct” seems subjective in this context – all designs have pros and cons. Why shouldn’t we try to fix things over time?
Sorry, I’m being very brief in my comments. I don’t have time to track down all the bits of documentation from the web to support what I’m saying, and I might just be remembering things wrong.
But I do believe the easiest and most straightforward way of doing system monitoring is with cgroups. Cgroups in the kernel are going to need a single process managing them in userspace. When your init is using cgroups, it is a much cleaner design to have it also be the cgroups manager. Having the cgroups manager in your init creates a number of dependencies on that init (anything that needs cgroups).
While others have proposed various solutions for other parts of systemd design, I believe this is very difficult to solve.
Bill Shooter of Bul,
No problem, I’ll assume you are probably referring to something like this, from a pro-systemd perspective:
http://lwn.net/Articles/575793/
I disagree with this, the easiest way to do this hasn’t changed with systemd. The monitoring process need only handle the SIGCLD signal. Not only is this very easy to do, but it doesn’t need any special permissions or non-standard configurations either.
The link points out some functional overlap between a “cgroup manager” and a service manager. However I think that maybe cgroup management might be better left to something more powerful & flexible like linux containers. We can deploy a service inside a linux container and then allow the init system to monitor the container. This method of managing cgroups should work with any service BTW, not just ones that are designed for systemd. It should also work with any init system. This is where the power of *nix comes from in the hands of power users, not monolithic solutions.
Anyways, that’s my take. I need to get work done too. Enjoy your weekend
Edited 2014-11-28 22:54 UTC
Alfman, other than that, what you describe does not work. Linux is not a pure POSIX OS. The clone syscall under Linux allows starting threads and processes with a new signal queue so that nothing returns to the parent, so there is no SIGCLD message. Next, a service might be multi-process and not handle the SIGCLD messages its child processes are making.
Upstart tried all this stuff, right down to using ptrace to follow all the stunts the clone syscall does. The reason Ubuntu decided to kill off Upstart long term is that this consumes more CPU time than using a cgroup in the first place.
Sysvinit’s biggest bug is its PPID usage. PPIDs are not unique values. So your DNS server stops, someone opens a text editor, and due to PID recycling sysvinit reports the DNS server as up, because the text editor is running on the recycled PID.
In reality, if you work from the point of view of a functional method on Linux to know when a service is running or stopped, the end result is cgroups or an LSM module. Both can assign system-wide unique values to a group of processes. The key word is unique: you want every service to have a unique item to identify it.
A fork of Debian based on OpenRC would at least be a functional objective. A fork based on sysvinit might as well stay mainline Debian: Debian has not removed the sysvinit option. The only thing Debian has done is default to systemd on its Linux ports.
I see these forkdebian sites and so on as FUD attempts; I would like to know the company behind them.
“I disagree with this, the easiest way to do this hasn’t changed with systemd. The monitoring process need only handle the SIGCLD signal. Not only is this very easy to do, but it doesn’t need any special permissions or non-standard configurations either.”
We are interested in your comment.
Would you mind clarifying it at:
http://lwn.net/Articles/624320/
You can create a guest account there.
Thanks.
jb
If by system monitoring you mean process monitoring, then no, it’s not the easiest and most straightforward way. The easiest and most straightforward way is to run the process in the foreground (what systemd calls a “simple” service). All this cgroups nonsense that exists to “prevent daemons from escaping monitoring” is, quite frankly, bullsh1t. A properly designed process does not even try to escape monitoring, but I guess RH has a lot of badly designed “enterprise” systems to support.
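For reference, a “simple” service of the kind described here looks roughly like the unit file below. This is a hypothetical sketch: the path `/usr/local/bin/mydaemon` and its `--no-fork` flag are made-up placeholders, not a real package’s unit.

```ini
# With Type=simple, the service manager itself is the parent of the
# daemon, so it learns of the process's death directly; no PID file
# and no double-fork are involved.
[Unit]
Description=Example foreground service (hypothetical)

[Service]
Type=simple
ExecStart=/usr/local/bin/mydaemon --no-fork
Restart=on-failure

[Install]
WantedBy=multi-user.target
```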
So you would claim that software bugs should not exist either? A properly designed process can still have software bugs causing an escape. Saying every service has to behave perfectly to be monitored and tracked is like saying software has to be bugless. That is almost impossible to be true.
Soulbender, the next problem is that a majority of services are type forking under systemd, not type simple. And yes, type forking services are where the major problems are in sysvinit.
Forking means using PID and PPID tracking under sysvinit. Both of these ideas are broken, due to the fact that the ID numbers are not guaranteed to be unique and both PIDs and PPIDs can be recycled into new processes.
Like it or not, sysvinit is a broken design. We need something designed properly that can truly work. Yes, this may require altering POSIX standards if we want it cross-platform.
No, if they run in the foreground then they really can’t. Processes don’t magically fork.
And if the point is to move forward, why are we supporting these mis-designed abominations? Forking is a legacy from the stone age of SysV init scripts; it has no place in a modern OS.
tl;dr: forking a service process is retarded, don’t do it. Ever.
We already have the proper design: run in the foreground. It’s been around forever, works 100% of the time, and there’s no need to alter any standard.
If some company or developer doesn’t want to fix their broken forking design and add an option to run in the foreground: f–k ’em.
Edited 2014-11-29 06:35 UTC
Soulbender, sorry, but I will start listing what you call retarded.
MySQL, PostgreSQL, Apache httpd, CUPS… these are all forking by systemd’s definition. The Chrome web browser also falls into what systemd calls forking.
Why is this done? To allow load to be spread better on multi-CPU systems.
This is completely misunderstanding the problem. You cannot set HTTP or database servers to all run in the foreground; due to their multi-process nature they start a lot of background processes.
Heck, you even lose CUPS, so you cannot run printers, if you remove every service that uses a forking/threading model.
The Linux kernel makes very little difference between a process and a thread.
Yes, foreground is what you must do so sysvinit works. The problem is that sysvinit is so old that its design is not compatible with multi-CPU-core systems. Sysvinit has been hacked to work on multi-CPU systems, and the fact that it fundamentally isn’t built for them comes back to bite over and over again.
All of them can run in the foreground and do so without any performance hit, even though they use multiple processes and/or threads.
All of these are sensibly designed.
I’m pretty sure Chrome isn’t a system service.
Actually, you can and it works great.
If you think threading or multi-processing requires forking into the background, it is you who doesn’t understand the problem.
They are completely different things and there’s a big difference between them in Linux.
No, sysv relies on forking (and double forking) and the terrible idea of PID files.
Edited 2014-11-29 08:04 UTC
In fact they don’t
I should have written that you should not, because what you are doing is a security breach.
We are talking about systemd’s definition. Anything using the clone syscall to start a new TGID is forking by systemd’s definition. Multi-process is forking by systemd’s definition.
If set to forking, it is expected that the process configured with ExecStart= will call fork() as part of its start-up. The parent process is expected to exit when start-up is complete and all communication channels are set up. The child continues to run as the main daemon process. This is the behavior of traditional UNIX daemons. If this setting is used, it is recommended to also use the PIDFile= option, so that systemd can identify the main process of the daemon. systemd will proceed with starting follow-up units as soon as the parent process exits.
See critical bit.
The parent process is expected to exit when start-up is complete and all communication channels are set up.
If you block this from happening by demanding foreground, you end up with a service up and running without having completely gotten rid of privileges it no longer requires after opening and setting up its communication channels. Privileges are attached to lightweight processes, so to truly get rid of an application’s privilege you have to kill a user-space thread at minimum. The problem is that the first thread in a process under Linux has its TID used as the TGID.
In the Linux kernel you have a task struct, which is a kernel thread. A task structure contains a number that is the PID, i.e. the lightweight process ID, which is the TID in userspace. In that task struct there is a TGID, which is the userspace PID value. In reality they are not completely different in Linux; at the core they are all just the same structure configured differently.
Yet you call PID files a terrible idea, then suggest runit, which is using them.
In fact they do, and I have done so numerous times. All of them can be run in the foreground if you so wish. This has no effect on the performance or security of the applications.
Wow, you just have no idea at all.
It is perfectly possible to drop privileges even if you don’t fork. Forking in fact has nothing to do with privileges at all. All it does is disassociate the program from the controlling terminal (“putting it into the background”), something that was required for SysV init scripts but isn’t for systemd or other modern tools.
Even with systemd, it is better to run your process as a simple service if possible. The cgroup stuff is just to manage legacy processes that daemonize.
Runit isn’t using PID files. Traditional SysV uses PID files; systemd/upstart/runit/daemontools etc. do not.
Edited 2014-11-29 09:01 UTC
Do you know how long it takes to drop privileges on a process’s threads on a 4096-CPU-core system? The answer could be up to 50 minutes. Not really a good idea to take that long to perform the action.
How long if I use the clone syscall or a POSIX fork to start a new PID/TGID with the correct privileges and kill off the process with the incorrect privileges? Almost instant, even on a 4096-CPU system (instant being under 1 minute). Killing a process and creating a new one is lighter than modifying a process. When running on large systems, start with permissions correct on each thread and forget the idea that you can change them after the fact.
When creating anew, you don’t have to dig through every CPU’s process list on the system to make sure everything is set correctly.
Fork does not have to disassociate the program from the controlling terminal; whether that happens is optional.
http://linux.die.net/man/2/fork
The main thing fork does is issue a new TGID/POSIX PID.
Yes, the issue is that in the forking process inside Linux you could end up in a state disconnected from the terminal.
There is a reason why you wish to disconnect from the terminal when you drop privileges: the cold hard fact that the terminal that started your application can still be holding those privileges you have dropped. A bug in the terminal equals an attacker being able to regain dropped privileges if you have not disconnected it.
Dis-associate program from controlling terminal is not fork alone its an intentionally coded action to remove possibilities to regain privileges.
The problem here, for particular operations like checking up on supervisors, is that runit still uses PID files. Yes, systemd, upstart and daemontools are clean of PID files; runit happens not to be.
http://smarden.org/runit/runsv.8.html
It is the runsv part of runit that is tainted. Basically, runit has moved the PID file from being on the service to being on the runsv wrapper around the service, when we want this gone completely.
So claiming runit does not use PID files is badly incorrect.
Runit is close, but no cigar. A lot of people make the mistake of assuming that since daemontools is clear of PID problems, runit is also clear.
Um… this is wrong. I know privsep state machines are fairly foreign in Linux land, even though papers have been published on them for decades, but forking is a prerequisite of actually secure software.
http://www.citi.umich.edu/u/provos/papers/privsep.pdf
Soulbender,
Yep, and all this fuss over cgroups/ptrace/hidden pipes (systemd/upstart/daemontools respectively) to work around the same problem. Interestingly a kernel patch submitted by Lennart P gives us another option:
prctl() PR_SET_CHILD_SUBREAPER
https://lkml.org/lkml/2011/7/28/426
http://man7.org/tlpi/api_changes/#Linux-3.4
Setting PR_SET_CHILD_SUBREAPER allows a process to monitor orphaned grandchildren.
A kernel patch by LennartP? I’m scared already. Linus has had some choice words with regard to what Lennart and Kay consider good design.
Linus has already stated he will be accepting no patches from Lennart, Kay Sievers or anyone else from systemd until they have proved they can be trusted with maintainership.
This isn’t true at all, he threatened to ban Kay Sievers but nobody else. The initial argument that caused him to threaten to do so has long since been settled.
daemontools/runit have no hidden pipes as far as I know. Oh wait, maybe that’s the hack for forcing forking processes to work with it?
Never cared for that myself and really, are there any server processes left that don’t have an option to run in the foreground? I can’t think of any, but maybe something horrible like Oracle still hasn’t caught up with the 90’s.
http://cr.yp.to/daemontools/svscan.html
Soulbender, daemontools is so heavy on piping that it can be the reason you run out of file handles.
Daemontools forces everything that runs in the foreground to run in the background, feeding all data into pipes. Daemontools is not much better than sysvinit.
http://smarden.org/runit/runsv.8.html
Really, read and weep. Yes, runit’s key part runsv depends on PID numbers not being reissued, via the service/supervise/pid bit. Really, getting the process ID of the controller from the service/supervise/control pipe would have been better: dead process, dead end on pipe, no false positives. In fact, using a pipe to work out whether something is up before killing it can cause a race condition:
1. Check the pipe, get the PID of runsv.
2. runsv dies. A new process now starts on that PID.
3. Send the message to kill runsv; problem, you have just killed some random other process on the system.
The same applies to service/supervise/pid.
cgroups are the magic bullet to cure this. A message telling a cgroup to kill itself will only affect applications belonging to that cgroup. This ends the randomly-killing-programs problem.
Runit consumes a heck of a lot of file handles with all the pipes.
runit contains nothing to stop Apache httpd or MySQL or PostgreSQL from having issues due to forked-off parts locking key files, preventing those services from restarting.
Soulbender, sorry, runit and daemontools are both broken.
No it doesn’t. Daemontools forces everything to run in the foreground.
It’s a lot better; it’s just you who doesn’t understand how it works. In fact, it appears you have no idea how it works or what the pipes are used for.
daemontools runs the processes in the foreground, piping stdout/stderr to a pipe that is used by the logger.
The pipes are just the control interface.
Well, you just go ahead and believe that.
No it doesn’t.
Alfman, prctl() PR_SET_CHILD_SUBREAPER has issues.
1. It is Linux-only, like cgroups, so it is no help to those wanting to run non-Linux kernels.
2. It is unlike cgroups in a very critical way: a process in a service can call prctl() on itself and disconnect from the init process’s CHILD_SUBREAPER.
3. It has only one slot, where cgroups have stacking, so a service wanting to use PR_SET_CHILD_SUBREAPER and an init system using it will conflict.
4. Finally, like cgroups, it is not sysvinit-compatible without rebuilding all of sysvinit.
PID/PPID, ptrace and hidden pipes we know don’t work, without question. Yes, sysvinit, upstart and daemontools all have examples of escaped processes. Ptrace is worse than hidden pipes due to the major overhead of using ptrace.
If setting PR_SET_CHILD_SUBREAPER required a capability that the service did not have, it could kinda work. But that is not how Linux is currently implemented.
Cgroups are basically the only thing currently implemented and mainlined that provides the required functionality in a workable form.
PR_SET_CHILD_SUBREAPER is another Kay Sievers kernel patch. Yes, a half-baked kernel patch to provide X functionality. Yes, the reasons why Linus has stopped taking kernel patches from Kay Sievers include PR_SET_CHILD_SUBREAPER. Alfman, if you want to go the PR_SET_CHILD_SUBREAPER route, you need some kernel developers to expand and fix up the functionality.
This is the problem: remove all the init and service management systems based on broken tech and you are not left with many. Currently only three options: OpenRC, systemd and DMD (from GNU). Yes, a choice of three is possible. Forking Debian to be OpenRC- or DMD-based would be a workable path forward.
oiaohm,
I was pointing out various mechanisms that have been applied in addressing the same problem, I’m aware that there are caveats. If we can leave cgroups to a better tool like LXC, that might be preferable for me but I’m not altogether opposed to them. My interest is having something that is both reliable and unintrusive. Maybe uselessd or openrc like you suggest can fit the criteria.
WRT your points 2 & 3, I think you should take a closer look at how PR_SET_CHILD_SUBREAPER works and can be stacked at multiple levels of ancestry without a problem. It works as one would expect.
see kernel/exit.c:
    for (reaper = father->real_parent;
         reaper != &init_task;
         reaper = reaper->real_parent) {
            if (same_thread_group(reaper, pid_ns->child_reaper))
                    break;
            if (!reaper->signal->is_child_subreaper)
                    continue;
            thread = reaper;
            do {
                    if (!(thread->flags & PF_EXITING))
                            return reaper;
            } while_each_thread(reaper, thread);
    }
There’s no need to reiterate the criticism of sysv because those sentiments are redundant with what most of us already feel.
Edit: I’ve just seen the ranting going on in the discussion thread… Yikes, I’m out! Everybody have a good night.
Above is all true
The problem here is that you have badly misread this. It is borked.
Notice the return reaper just above while_each_thread(reaper, thread). It is only ever going to return one reaper, even if two or more are set along the ancestry chain. Hope it finds the right one, because more than one can be set.
PR_SET_CHILD_SUBREAPER is nasty-bad, as you end up with what applications have defined mixed with what your init system has defined. It is not a nice little tree diagram. Every thread can have a different SUBREAPER set using PR_SET_CHILD_SUBREAPER. Worse, the order of triggering will be the random order the threads started in.
cgroup namespaces have true stacking of this: one subreaper per cgroup namespace, which then follows the defined cgroup namespace tree backwards. Yes, items the init system would have defined.
I should have said point 3 more correctly: PR_SET_CHILD_SUBREAPER uses one slot in the thread structure, where cgroups use a tree in the cgroup namespace structure. The fact that it is one slot per thread makes PR_SET_CHILD_SUBREAPER a huge mother of a mess. One slot per thread group would be sane; one slot per thread group is one slot per POSIX process, which would reduce the random sub-reaper possibilities.
PR_SET_CHILD_SUBREAPER is something that could be made to work correctly with a little bit of effort, Alfman; in its current state it is another broken Kay invention whose minor coding errors can drive you completely nuts.
Point 2 still stands, because there is no flag to take away the privilege to mess with PR_SET_CHILD_SUBREAPER, nor any limitation on what within a process can mess with it. Yes, you could call the second half of this a kernel bug, but it has its uses: being able to restart PID 1 without killing the system comes down to being able to bend PID numbers around and change associations. One option would be for only the thread master under Linux to be able to use PR_SET_CHILD_SUBREAPER, with no other thread in the process able to mess with it without a special flag. What is a thread master? The first thread to run in a thread group; this is normally the one that contains main().
Basically, PR_SET_CHILD_SUBREAPER could be made to work, but it would require someone to get an ABI change approved or to create a new function providing the fixed functionality. In the Linux world, fixing something like PR_SET_CHILD_SUBREAPER is insanely hard, since no kernel-space alteration is allowed to break user-space applications.
oiaohm,
Lennart P and Kay Sievers are controversial, but they’re not stupid; the patch does solve the orphan problem. I just don’t think you understand how it’s supposed to work. It’s not designed to deliver SIGCHLD for grandchildren to the new reaper until those grandchildren are orphaned. Do you understand why? It is specifically designed to fix the issue that the monitor process loses track of grandchild processes when they get reparented to PID 1, because that is the event that causes a supervisor process to leak children without a ptrace.
There’s always room for improvement and I’m usually very fond of discussing it in a friendly manner, but at this rate I don’t feel compelled to continue the discussion. I’m not sure you realize just how abrasive and condescending you come across.
Yes, I know this. The problem is that I know this issue extremely well. So do the three lead developers of cgroup namespaces.
Alfman, what you are not getting is that I understand all the theory of how it is meant to work and what it is attempting to achieve. I also understand what its real-world implementation has achieved, and the two do not match, because it misses covering particular cases.
The problem is that the issue is not always as simple as getting reparented to PID 1. The worst event is that PID X dies and is replaced by another process. An orphan under Linux does not always get reparented to PID 1 if the system believes it has still found a correct parent process.
The PID tree inside Linux is kinda broken. There is no requirement inside Linux for processes to form a nice tree shape. Cgroups are a hack that fixes this.
With a crashing service or the Linux OOM killer triggered, all kinds of bad things happen.
Seriously, one of the biggest causes of processes being reparented to the wrong process is the OOM killer removing processes.
monitor process loses track of grandchild processes
This fault will still happen even with Kay Sievers’ patch. The patch is not good enough to cover all the ways the Linux kernel causes this. Cgroups, after a heck of a lot of effort, do cover all the cases of processes attempting to get lost.
The two ways that work for tracking are an LSM security attribute or a cgroup. With both, the process can be prevented from removing itself.
Lennart P and Kay Sievers are controversial, but they’re not stupid
I would agree, but as a kernel developer Kay Sievers has made repeated mistakes.
Please note Kay Sievers is still suspended from submitting new feature patches to the Linux kernel until he fixes his old ones that got mainlined, including the one you kinda suggested as an option instead of cgroups.
http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01331.html
Lennart P is controversial and has submitted code to the Linux kernel. Lennart P is not banned, because when a bug turned up in his submitted code he would fix it.
Kay Sievers is controversial and banned for misbehaviour. The problem is that the ban includes the very item you raised. If you follow the lkml mailing list about the feature you raised, Alfman, you will find it is one of the items Linus warned him to fix or one day be banned over. Kay Sievers was stupid enough to call Linus’ bluff, only to find out a few years later that Linus was not bluffing.
Kay Sievers does not deserve to be called not-stupid at this stage. If anything, Kay Sievers deserves to be called stupid for disobeying Linus’ warnings.
Really, it is unfair to put Lennart P in a sentence with Kay Sievers.
If the effort is put in and it is fixed, I would then be very happy to tell developers to use it instead of cgroups, if it worked as well as cgroups.
Alfman, basically the main reason I am so abrasive is that the item you brought in is not production-ready and is from a banned Linux kernel developer. Banned Linux kernel developers are quite rare; it takes fairly extreme stupidity to get banned.
Con Kolivas, for example, walked away, but he was never banned from submitting code. You can count the number of banned kernel developers in the Linux world on one hand.
There’s always room for improvement
I will back this, but this is a case where Kay Sievers is required to do some of the improvements and has not at this stage. There is absolutely no reason to take the pressure off Kay Sievers to correct his ways.
oiaohm,
I really do appreciate the change in tone and your explanation.
I haven’t looked, but I’ll take your word for it. If OOM doesn’t reparent to PID 1, then it sounds like a bug exists in OOM. This implies the OOM killer doesn’t use the same code to delete processes, so maybe it should be refactored to move the process-deletion code into one single place. That should fix both the OOM bug and the alleged reaper bug.
Yes of course they can run in parallel, but in practice I imagine this is more useful for transitional phases than something end users/admins really want to end up with.
Not that unfair, they both signed off on the patch together. To be honest I don’t care WHO wrote it as long as it works. If there are bugs with OOM that cause a process not to reparent correctly, then obviously those should be fixed. It would be very unfortunate if kernel developers aren’t willing or able to fix these things.
Perhaps, although it feels less controversial to me. In a way it might be comparable to busy-box (not in terms of one big binary, but in terms of being a monolithic package of services).
But nobody is contesting this, my very first post states in no uncertain terms that sysv init needs to go. Almost everyone is in agreement that sysv init scripts are antiquated. It’s not about the merits of sysvinit at all, it is about what we want going forward.
Deleting a process completely requires RAM. The OOM killer starts up because you don’t have RAM. That lack of RAM can result in bad things happening even when the same deletion code is used.
There are other ways that process deletion can fail, like a CPU core locking up and having to be restarted by the watchdog halfway through a process delete. These are the corner cases. In most of them you cannot depend on the deletion having completed correctly; in fact you have to design for process deletion not having worked correctly. In the corner cases you are depending on the schedulers being able to validate and detect that the process list is screwed up, and then to take corrective action.
Cgroups add some rules. This makes failed process deletion more detectable if cgroups are used. For example, a TGID/POSIX process must be entirely in a single cgroup; if it happens to be split between two or more cgroups, you have an error in the process table. It is a hack, but it greatly reduces the probability of the error going unnoticed. And a cgroup PID namespace, where the PID table is unique inside the cgroup, extends the time until PID recycling can overlap in an undetectable way.
Cgroups are not perfect at fixing process table corruption. The key to detecting the corruption is unique values with rules; in most cases corruption will break the rules. Cgroups reduce undetectable process-table corruption events to insanely rare when used the way DMD, systemd and OpenRC use them.
Alfman, you have presumed incorrectly that the process-table corruption issue is fixable before it happens. Yes, prevention is normally better than cure. In the real world, sometimes cure is the only option.
The problem here is that the fix is to use cgroups. If you are not using cgroups, you are reducing the Linux kernel’s means of detecting issues after things have gone badly wrong. The Solaris OS uses zones for the same kinds of detection.
The only option other than cgroups/zones is to reboot the complete system whenever a CPU lockup, an out-of-memory event or any of the other corner cases happens, just to maintain a clean state, because things could have gone wrong and you have no decent way of detecting the stuff-up. Why? Because without cgroups you are tying the scheduler’s hands behind its back.
The problem here is the ISV and administrator requirements that have to be met. Docker is liked by ISVs a lot because they can configure all the init options the way their application needs them, in a nicely independent bundle. For administrators this is also good, as messing with the init system inside one Docker container only messes with the application it contains.
Running in parallel is what a lot of administrators and ISVs want to end up with. Of course they would prefer the same init system all the way through.
This is the problem: for the kernel scheduler to detect as many process-table errors as possible, the init system has to use kernel-unique features, so it cannot be 100 percent portable. Anything that is 100 percent portable between POSIX systems as an init system is broken. The POSIX standard does not include cgroups or zones or any of the other platform features that increase the scheduler’s means of detecting process-table errors.
Docker is not a transitional usage; it is a forevermore usage.
A lot of this comes down to the simple fact that there is a list of things a modern-day init system has to do. Even if you hate it, you really have to do it. Being stackable and using cgroups on Linux is kind of mandatory. If you are not going to use cgroups, you really need some serious kernel developers, because you are going to have to design and add new features.
oiaohm,
And? OOM conditions are exactly the scenario the OOM killer was designed to handle; if you are right that this problem exists with the Linux OOM killer, then I’d immediately consider that a Linux bug.
Well, the CPU doesn’t just lock up unless there’s a bug somewhere. Keep in mind I’m not asserting these bugs do not exist, but I am asserting that when the code is correct, the CPU will not lock up.
In other words, when a CPU does lock up, the appropriate response for OS devs is to find & fix the bug rather than ignore it because a watchdog resets the computer.
Yes I know, that was implied: if the OOM killer fails to reparent a process, it’s a bug. The kernel should not corrupt its own structures in the absence of hardware faults. It’s not the responsibility of userspace programs to anticipate kernel corruption, since that would be futile.
This is clearly wrong. Of course it can be fixed; the question is whether it will be fixed. I’m not sure why you are backtracking here, because you already admitted that it could be fixed: “If the effort is put in and it is fixed, I would then be very happy to tell developers to use it instead of cgroups.”
This said, I think Linux is more fragile than other operating systems in boundary cases because of its deferred allocation strategies, which of course is exactly why Linux got an OOM killer. Without getting into a debate about the merits of an OOM killer, I still expect it to do its job without causing corruption in the kernel; otherwise it’s clearly a bug.
Ok, fair enough. What’s better for a universal operating system: using cgroups to monitor processes or redoing all forking service processes and banning any third party applications that do?
Keep in mind cgroups has lots of other nice things like proper enforcement of resource allocation limits and better scheduling by default if they are used.
You can see why some would say cgroups is a better option.
I’ve been reading on the Debian kFreeBSD mailing lists that the devs are pondering a fork as well, because systemd with the FreeBSD kernel is a non-starter for them. Maybe this can become their new Debian base?
I couldn’t help but laugh at “deep web of dependencies”.
My systemd is compiled from http://cgit.freedesktop.org/systemd/systemd-stable/log/?h=v217-stab…
and the dependencies are:
At least quota-tools, lz4, qrencode, elfutils can be eliminated (builds fine without them).
I’d hardly call the rest a deep web of dependencies. Most are things many people have installed already.
“Dependencies” as in other software that depends on systemd — rather than what systemd needs… Right???
That’s the problem of the “other software”; if it’s written to depend only on systemd, that’s not systemd’s fault.
LOL my ass. Some of the packages that you listed have a lot of dependencies too. That’s a deep web of dependencies for an init system.
What’s better: a project with no dependencies that writes all its own code, or a project with less LOC but more external libraries?
Furthermore, does more functionality/features mean bloat?
And if I simply imported the code of another project into mine to reduce the dependencies, would that make it better?
IMHO, the number of dependencies has nothing to do with how good a project is.
You might not care but other people do.
And in the US, there is an Anti-vaxxers movement. Here in Australia, some people think Smart power meters cause Autism.
I think it is a good thing to have alternatives. But my point is, every time I have heard the term “bloat” being used in the Linux community, it’s almost always been used wrongly.
Exactly, that is pretty bad.
What, specifically, is so bad about it?
I’m asking because I don’t think you’ve got a clue.
readelf -d on systemd outputs the following.
How many of those do you think are useless?
Actually no, they don’t. Debian can write another application that provides the same interfaces as logind.
They simply don’t like systemd because it is too “redhat”.
LOL. You totally missed the point of the post all together.
Reimplementing those interfaces isn’t useful. You’re then still effectively tied to following the systemd “design” and will be forced to play perpetual catchup.
The other point to make is that these “interfaces” (and I use the term loosely), are poorly-specified, poorly-designed and poorly-implemented. This house of cards is not something we should be basing the core of our system upon. See the sobering analysis here:
http://gentooexperimental.org/~patrick/weblog/archives/2014-11.html…
This has still been pushed in without any real critical review. None of it would make it past code review in my team, yet this joke is foisted upon us all…
Very true and we will probably go through a phase of bad mess with the kdbus thing.
What’s worse is that kdbus will have to be backported to various distribution kernels because Linus has explicitly stated that it won’t be integrated into the kernel tree itself. The reason? Because of the systemd maintainers’ bad attitudes and general unresponsiveness when problems occur.
Stop spreading lies, Linus once made an angry statement that he wouldn’t accept code from Kay Sievers, who is one of the systemd maintainers (this was directly targeted at Kay, no one else).
Meanwhile, kdbus is being submitted by Greg Kroah-Hartman, who is more or less Linus’ ‘second in command’ when it comes to maintaining the Linux kernel.
kdbus is being reviewed for mainline Linux kernel inclusion right now, it’s at the second revision.
I am spreading no lies. Linus has explicitly stated this in black and white for crystal clarity. He has major concerns over maintainership with the systemd crew, and no, it wasn’t just some angry statement he once made, and no, it wasn’t just directed at Sievers.
I always love how angry some people seem to get once they’re backed into a corner regarding systemd and its brain damaged ecosystem.
I don’t know where you get the idea that Hartman is somehow Linus’s second in command, which you seem to want to lend as much credibility to as possible. While Greg is respected, that person is Andrew Morton.
Good luck with that.
That is Linus’s position, it’s very clear and there are very sound reasons behind it as to why.
There are certainly many that can be considered, Alan Cox as well (although Linus may have burned that bridge), but I will claim that Greg is the closest to taking over Linux maintainership should Linus step down, he is the stable branch maintainer, long time friend of Linus, and together with Linus the only two kernel developers working for the Linux Foundation last time I checked.
You must be very new to the world of Linux development if you think a flame post by Linus is something he is somehow beholden to, like, really.
And that very post you refer to makes you a liar, because even at its most flaming it does not say that kdbus won’t be merged with mainline; it says that he (Linus) would consider merging it once it has been proven stable from use in a distro. That is how far he went in a fit of rage.
But none of that matters, because this is just another case of Linus throwing a tantrum and later backtracking, because kdbus IS CURRENTLY being reviewed for inclusion into mainline (what part of that is it you have trouble understanding ?).
Now, if as you claim, kdbus was blocked from mainline inclusion, the kernel developers would not be reviewing kdbus at all.
In reality, as opposed to your fantasy, it is currently at its second revision, and just to be clear, the kernel developers would also not request a second revision (or third, etc.) if they were not considering the code in question for inclusion into mainline to begin with.
Do you understand now ?
So stop spreading these same lies over and over. Better yet, Greg Kroah-Hartman has an AMA on reddit on December 1st; why don’t you ask him why he is working hard on getting kdbus mainlined when, according to you, Linus won’t allow it?
Sorry, but I honestly don’t understand what problem putting DBUS in the kernel solves.
As for kdbus I’m personally neither for or against it as of yet, my argument with segdunum is about his repeated nonsense claim that kdbus is blocked from kernel mainline given that it is currently under review for mainline inclusion.
As for reasons of having kdbus in the kernel, here are reasons Greg himself posted on reddit a few days ago (I’m not going to argue for or against them):
Linus only reviews stuff and only gets involved when he sees hard patches for him to include. That’s the way stuff works. Stuff that gets rejected is then rejected….brutally.
Prior to that Linus has made his position very clear on what will happen if merging is attempted in the post I quoted from. You, and those behind kdbus, can choose to ignore that if you want but I’m afraid you’re wasting your time and effort if you think it is going to be waved in on a nod and a wink.
As an example of what happens, this is what occurred when Matthew Garrett asked if Linus could pull a patchset:
https://lkml.org/lkml/2013/2/21/228
THAT is how stuff gets reviewed and rejected in kernel land. Good luck with that.
It’s because it performs very poorly. Putting it in the kernel, as we all know, will magically make it perform better. That’s the kind of amateur hour we’ve got, and you only need to read the ‘specification’ for it.
No, he explicitly gave a heads up to Greg specifically about kdbus. Linus doesn’t post that kind of stuff because he thinks it is hilarious, it’s because he means it.
It’s very, very clear sweetheart, there is no pussy-footing around and you’re just making yourself look like the desperate idiot you are now.
People can make as many revisions as they like until patches are pushed up. That’s when it gets reviewed by Linus, not before. The response is generally “I’m not merging this load of shit into my kernel because of X, Y and Z”. That’s the way things work. If you don’t know that you know absolutely nothing about kernel development. At all.
We shall see what Linus has to say about it once it is attempted, but he has made his position clear that this is to be screwed around with in distribution kernels before he will ever accept it. The fact that it is being ‘reviewed’, whatever that happens to mean, doesn’t mean a damn thing.
Sigh…
He made a flame post sometime in april this year, Linus is the mainline kernel maintainer, Greg is the stable branch maintainer, they are long time friends and both work for the Linux foundation, do you think Greg would spend a lot of time working on kdbus being mainlined if he knew Linus was going to refuse it ?
And do you seriously think that they haven’t discussed this before Greg even embarked on getting kdbus mainlined ?
Yes it does, because if kdbus was somehow banned from kernel mainline inclusion, the kernel developers wouldn’t bother reviewing the kdbus code at all, they have better things to do (like reviewing code which could actually make it into mainline).
Reviewing code is when someone submits code for kernel inclusion and the relevant kernel developers for the area which that code touches reviews the code and comments on how to improve it, questions aspects of it, or simply tells the submitter that this is never going to be accepted, changes or not.
This ‘review, ask for changes, review, ask for changes’ cycle can take MANY revisions, but if they have no intention of actually accepting the code at all, there will be no revisions, they (the relevant kernel developers) will simply turn the code down.
That is obviously not the case here, as it is currently in its second revision of code review.
And for the record, while Linus has the ‘absolute’ power in terms of merging with mainline, he typically (as in practically always) goes by the decision made by the subsystem maintainer(s) who are responsible for the area where the submitted code belongs, which in this case would be those currently reviewing kdbus.
But this discussion with you is pointless, since you are clinging to an 8-month-old flame post by Linus as some sort of hard stance on the matter. When the review cycle ends we will know whether kdbus is going to be mainlined or not.
Which part of this isn’t sinking into the thick skull?
You can get yourself involved in endless ‘reviews’ as much as you like but that is Linus’s position. End of story. That is also what Linus means by a ‘review’ – not fannying about on a mailing list.
That longwinded comment is simply meaningless squirming to avoid this point.
Not to mention that kdbus crosses over into kernel/userspace interaction, and Linus has exceptionally strong views on maintaining kernel and userspace compatibility that kdbus and systemd maintainers do not share at all.
I’m going to get myself some popcorn and watch this. It’s going to be fantastic now that I think of all the ramifications.
Oh, and D-Bus was originally cobbled together (and I mean that literally) as a replacement for DCOP in KDE and beyond, for usage with desktop environments, where it just about holds together. It’s ten years old, has seen very few, if any, improvements or fixes, and still hasn’t reached a frozen level of stability. We’re now all supposed to accept it as a system-wide messaging system.
I think you need to read up on kdbus to see the improvements that have been made. http://lwn.net/Articles/580194/ is one article at a quick glance.
It was trash over ten years ago and it is still fundamentally trash because the protocol, whatever the hell it happens to be, is plain laughable.
The really funny thing is that you’ve got a bunch of idiot developers who think that if they move it into the kernel it will all somehow magically get faster and more efficient because, as we all know, running it in the kernel just solves those problems.
You are so out of date, it’s not worth discussing anymore.
kdbus is not related to dbus in anything but name, and then only part of it.
One of the bad things about kdbus is that it is not dbus, and applications using system dbus would need to be ported to kdbus, and probably support both incompatible interfaces during the migration period.
Yes, because going on the systemd ML and typing “Hi, I am making a logind alternative. Let’s discuss how to make the logind interface stable/fixed, and how to make it better” is so difficult. Also, if this doesn’t work – make a better interface, write documentation and patch GNOME/KDE to support it (LP also patched logind support into GNOME himself; nobody did this for him).
Maybe they are… but logind obviously works. If there is no attempt to make an alternative, and no wish for a stable interface, why even bother to make them better? Of course the wish to cooperate must come from the “opposing” side.
Most of that analysis is about the D-Bus protocol itself – sure, it is not ideal, but it has been tested and is widely used. Why change it now? Of course, if someone wanted to do a D-Bus protocol 2.0 and actually write a new specification that addresses all the issues – sure, that would be great. But this has not been done and most developers don’t care anyway.
You seem to be under the impression that these things haven’t been investigated and discussed. They have, over 18 months back. logind is a joke. Shame it’s not remotely funny.
With respect to logind, it is not reasonable to have to effectively ask permission from the systemd developers to develop an alternative. Making it stable and documented are not exceptional requests. This should have been done a long while before it was made the default, not after the fact! That’s their responsibility as the original authors. For software which is supposedly becoming the base of the operating system, it unfortunately looks like a toy project, given the lack of professionalism and care. They want people to depend upon this? They need to spend some effort doing a proper job, and properly specifying and documenting its operation is the bare minimum toward that.
“Cooperation must come from the opposing side”? It’s software, for crying out loud, not a war. It’s not my responsibility to shoulder the burden of them doing a half-arsed job. Right now, their system is a fragile house of cards made of unspecified, complex interdependencies. That is no way to build something others can rely upon and build upon.
Logind is so bad you’d think it was a joke, but it isn’t. Everything that happened with PulseAudio and the generally ridiculous attitudes of the maintainers towards backwards compatibility and serious problems are now being applied to core parts of the operating system.
I can actually see a whole load of friction with the kernel developers who thus far have largely wanted to keep out of userspace, quite understandably. Linus has already stated that kdbus won’t be in the mainline kernel tree because he cannot count on the maintainers.
The problems here are several-fold. Not only do you have poor requirements for systemd, poor code and knock-on problems that weren’t thought about but you’ve also got a shockingly poor attitude from the maintainers in fixing any problems. Bash security issues have already been bad enough but I really, really, really wouldn’t want to have to deal with a serious security issue in systemd.
This isn’t some kind of NIH syndrome or some kind of bad attitude towards Red Hat. Systemd and the tin-pot outfit of maintainers behind it is a serious problem.
God, you talk a load of crap. You are making it all up, otherwise you’d provide some proof of your crap. When it comes to technical matters, L. Poettering would wipe the floor with you and your fellow detractors; your jealousy is saddening.
systemd is not just an init system, it’s an umbrella project which aims to offer the core parts which together with the Linux kernel serves as a base operating system.
So if you want to compare the dependencies of the whole systemd project, you need to compare them against the dependencies of an equivalent set of tools, like those being shipped as part of other operating systems.
If you just want to compare against systemd as an init system, as I recall the minimal requirements for that would be the ‘systemd daemon’, the ‘journal daemon’ and ‘udev’, so look up the dependencies for those.
That’s a stupid amount of dependencies.
Honestly, those of us who are old enough to have used Linux since XFree86 and devfs, through KDE 3 and eventually Xorg, have already played this game before.
It happens every time a new technology comes out. It’s always a case of a few people with no real contributions to open source projects making a lot of noise, hissing and whining and trying to fork.
In practice all the real developers adopt the new technologies because they recognise the benefits (and whilst there might be minor issues, realise the problems are easily fixable).
The reality is, every time I’ve tried to find information about the main “developers” behind the anti-systemd movement, I’ve never actually found any patches. The last guy who started a whole anti-systemd website had exactly one bug report to his name.
This is pretty much true. There’s one exception – gnome-shell inspired something like four forks, and all of them are still going strong.
What are the other two, besides MATE and Cinnamon?
unity?
lxde/lxqt
I don’t think LXDE/LXQt were created specifically as a response to Gnome Shell.
So true.
Remember when Linux distributions switched from BSD-style init to SysV?
Or when Red Hat led the switch from a.out+libc5 to ELF+glibc2?
There were similar whiners, who got used to status quo and didn’t want to move forward. There are similar whiners now, they are just able to make more noise.
However, the caravan moves on.
This is a whole different ball game to any of those examples.
I’ve seen it before, over a long time, and what ends up happening is that bad software gets accepted, gets used, a whole load of problems fall out of the woodwork, and it then gets abandoned, costing a whole load of time and effort. The problem with systemd is that if this happens – which it will if it gets widely adopted – it threatens the credibility of an open source operating system.
Indeed. I have seen these dynamics in both open source projects, and commercial products.
Some people go out of their way to opinionate(sic) about specific issues, of which they have little to no actual understanding, passionately.
Put simply, it’s a lot like GTK. Lots of crap, bloodletting, “I’m right, you’re wrong” arguments that Linus and the kernel developers have called the GNOME and GTK developers out on, licensing arguments… and then we arrive at a point over a decade later where Qt is quite clearly the correct choice for developing GUIs.
Yes, we have seen all this before. A lot.
XFree86, OpenOffice, GCC… where are you now?
What exactly is wrong with it?
A coworker has a long history with Unix, then Linux, and complains about this.
I am not familiar with the issue.
systemd started out as a new init system but gained a whole range of other bits and pieces along the way. These days it’s an umbrella project that contains most of the basic blocks of a user-space. It is, however, kept in one source tree and has one build system as opposed to several smaller source trees and multitudes of build systems. Regardless though it still compiles out to multiple binaries and you can choose which parts you have or don’t have.
The other issue people have is that one of the creators is a guy called Lennart Poettering, who has become the guy to hate on, even though most don’t actually have a reason to.
Starting to ask for donations right away, without demoing at least some concrete work, is not the optimum way to get going.
I agree that this is backwards, and I would certainly like to see concrete details before donating.
That said, I am one of the Debian developers who is planning on getting involved with this new distribution, and I will almost certainly be donating myself once the details are clearer. My understanding is that the basic infrastructure (archive, build automation, keyring etc.) is currently being set up, which will allow wider participation and should be a concrete demonstration of the project’s purpose. I believe the donations will be to cover the costs of this infrastructure.
I’m sure more details will be forthcoming soon.
Regards,
Roger
I also wish the Debian developers luck with their fork. It’s not going to be easy – the systemd Borg keeps growing and now owns Gnome3, and KDE is in its sights.
Unfortunately, it will be months, maybe years, before the Debian fork can be built with the tens of thousands of packages that make Debian so attractive. Which is why I’ve decided not to wait. I’m moving to Slackware.
To all those smirking self-assured commenters who think that we systemd-haters are just old fuddy-duddies who fear change, you are just so ridiculously wrong that I don’t know where to start.
But go ahead and enjoy your Borg. You’ll like it, until one day it gets compromised and you’ll have no alternative.
“To all those smirking self-assured commenters who think that we systemd-haters are just old fuddy-duddies who fear change, you are just so ridiculously wrong that I don’t know where to start.” Well, you just put yourself in that category with your accusation of systemd assimilating Gnome and possibly KDE; you obviously need to read up about systemd. Enjoy your time at Slack.
Well, not knowing where to start and fearing change are rather correlated. So, yeah…
All I know is that two days ago I ran apt-get update. I then just wanted to install libsdl2-dev and nothing else; I already had X11, the XFCE desktop etc. installed. I was not running apt-get dist-upgrade, and it wanted to pull in systemd and some other low-level stuff. This is why people are calling for Debian to get forked: libsdl2-dev should not need systemd when a system already has the other supporting software installed. It should not need a new init system pulled in.
Can I ask how you determined that this would require systemd to be the new init system?
Did the apt-get output indicate the removal of the current init?
Yes, the apt-get output indicated that it would require the upgrading of init and about 5–6 other components. I think it may be because libsdl is now being compiled against PulseAudio. I only wanted to use libsdl-image-dev to draw some textures, so hardly a use case that needed a massive system overhaul.
Ah, interesting.
I guess that could be fixed on the packaging level though.
E.g. by providing an otherwise empty package for “systemd as init” that conflicts with any other init package, and a “systemd as daemon” package with a common init script that runs systemd as a normal process, and then having the systemd main package depend on either as alternatives.
So a system already running any other init would simply install the “systemd as daemon” alternative and thus keep the core system untouched, while still providing all the facilities required by the to-be-installed program/library.
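To make the idea concrete, here is a hypothetical sketch of what such control stanzas might look like. The package names and field contents are invented for illustration, not real Debian packaging (though Debian did end up with a broadly similar split, where a small systemd-sysv package carries the /sbin/init switch):

```
# Illustrative debian/control fragments, not real packaging:

Package: systemd
Description: systemd daemons, tools and libraries
 Provides logind, journald etc. without taking over /sbin/init.

Package: systemd-as-init
Depends: systemd
Conflicts: sysvinit-core, upstart
Description: otherwise empty package that makes systemd the active init
 Ships the /sbin/init symlink; installing it switches the init system.

Package: systemd-as-daemon
Depends: systemd
Description: init script that runs systemd's daemons under another init
```

With a split like this, a library that merely needs logind’s D-Bus services could depend on “systemd-as-init | systemd-as-daemon” and never force an init change.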
Did you try?
apt-get install --no-install-recommends libsdl2-dev
It’s certainly pulled in through the libdbus-1-dev dependency which has a transitional Recommends on dbus.
Turning off Recommends should avoid the installation of systemd.
Adrian
They lost. Give it up.
Actually, when you are in disagreement over the course of a FOSS project, you have two options and they picked the second.
Not that I agree with them, but it is totally within their rights.
Some will see this as an unfortunate waste of precious development effort, but we really don’t know yet. Perhaps something good can spring from this whole brawl.
No. That’s why forks are explicitly encouraged in the open source world. Whether a fork gains weight and gets well used is the determining factor, not votes.
Same wasted resources, same ending. Nobody will remember this in a few years.
1) dbus is marked for extermination, to be replaced by the documented and stable kdbus, so the talk of a lack of dbus documentation is really nothing to worry about long term. The catch is that kdbus has taken far longer than expected to finalise. Of course, the arrival of kdbus screws the BSD guys over, like kernel mode setting did.
2) https://lkml.org/lkml/2014/11/16/4 is a very interesting read on where the linux system is going.
In reality, number 2 is the killer for sysvinit going forward. The next generation of services for Linux will be able to expect groups of users per service that are unique to the service. The result of being able to hand a pool of UIDs over to services is a reduction in the number of services that need CAP_SETUID.
sysvinit is a dead man walking. systemd might be a pain now, but at least it can meet the requirement list.
Yes, UIDs allocated in pools are another case of Linux not waiting for the POSIX standards body to rule before coding.
The problem is that Linus is not going to accept kdbus into the mainline kernel. It will have to be backported with every distribution kernel. The reason is that the kernel developers simply cannot trust the maintainership of those behind systemd.
Can you link to anything from Linus that explicitly says that? You might want to read this article titled “KDBUS Submitted For Review To The Mainline Linux Kernel”: http://www.phoronix.com/scan.php?page=news_item&px=MTgyNTQ
That was Linus’s position the last time this was called into question. They are of course free to propose its inclusion, but Linus’s position is that it will only be considered once it has proven to be stable after playing with it in the distribution kernels. His problem is maintainership.
I also wish them luck with the idea that moving the transport mechanism into the kernel will magically make all the performance and efficiency problems of DBus go away.
So, no, you can’t point to anything that says Linus will never allow kdbus into the kernel; he only states it won’t get there until it proves itself, which is a sensible position to take, and not your misreading and misinterpretation of it.
No.
kdbus is just an alternative to providing the busses and transporting the data, application developers are unaffected by that.
I.e. an application will work regardless of whether the bus is provided by dbus-daemon or kdbus/systemd.
The only developers affected are those who maintain the D-Bus client libraries.
Badly wrong. kdbus supports features standard dbus does not.
https://code.google.com/p/d-bus/source/browse/kdbus.txt
1) kdbus can safely cross container boundaries, including Docker containers; old dbus cannot do this.
2) kdbus officially supports using memfd, so it will not leak memfd memory from terminated kdbus messages; dbus can leak memfd messages.
3) kdbus does not have race conditions with messages sent for approval to kernel security modules.
4) kdbus credential information is complete.
Finally, the new kdbus will have zero dependence on the existence of systemd. Remember, the normal dbus client libraries have been taken into the systemd source tree.
kdbus is no longer a systemd project; the Linux kernel developers have taken it out of the hands of the systemd developers. Why has this happened? Because kdbus may, long term, replace binder under Android.
https://github.com/gregkh/presentation-kdbus/blob/master/kdbus.txt
Yes, the main reason why the original kdbus by systemd was rejected was the lack of new features and the lack of fixes for dbus’s historic problems (this is what Kay got slammed over; mainline Linux has quality requirements). If you think the current kdbus submitted to mainline is just another dbus solution, you are very badly wrong. kdbus is aiming to have all the required features of binder and dbus. Yes, that could allow Android applications to talk to normal desktop interfaces, and the reverse.
I am aware of that.
But, as I said, this does not affect any programs using D-Bus; they don’t exercise any of these features.
The features they do exercise are just implemented differently by their respective D-Bus library, most of which will support both backends in order to work on both kdbus and dbus-daemon based systems.
What a load of tripe. kdbus has been mooted basically to try and solve a load of performance issues with DBus, rather than, you know, actually fixing whatever the protocol happens to be and making it less of a laughing stock. But, we all know that running shit in the kernel magically makes it faster.
The Android stuff is pure hearsay in an attempt to try and give it credibility. Good luck running it on a mobile device. That should be funny.
Linus has made it very clear that kdbus will need to have been in distribution kernels for some considerable time until it will ever be considered for inclusion.
Really? Wow.
And as a bonus it’s also really “kewl” to have it in the kernel because, you know, you get to use the word “kernel”.
Remember that kernel HTTP server that was going to solve all the speed problems with web servers? Whatever happened to that, I wonder…
What happened is simple: the in-kernel HTTP server ended up replaced by the sendfile syscall and TCP_CORK.
http://man7.org/linux/man-pages/man2/sendfile.2.html
The kernel directly packetizing and sending a file’s data over the network is faster than user space doing it.
Yes, sendfile in Linux is a direct descendant of the TUX HTTP server in the Linux kernel.
sendfile got within 1 percent of TUX’s performance.
epoll and later work have got even closer.
Running particular things in the kernel is faster, but not every part of an HTTP server is faster in kernel mode; what was faster is today provided by syscalls.
The reality is that more of a standard HTTP server’s operations on Linux today are done as pure kernel-space operations than before the TUX server.
If you want something really slow in Linux kernel space, try to do a floating-point operation.
The kernel HTTP server did in fact work out what had to be fixed to make user-space HTTP servers get within 1 percent. Yes, that required more syscalls.
“Linus has made it very clear that kdbus will need to have been in distribution kernels for some considerable time until it will ever be considered for inclusion.” Unless you post a link to Linus’s statement, no one is going to be stupid enough to believe your interpretation.
I’ve posted it and you can have a Google for it if you so desire. It came out in that whole systemd debug debacle a while back.
This site is called OSNews. I’m not hand-feeding morons.
No, you didn’t. I posted you a link where it is being considered for inclusion. And that link was recent, unlike your out-of-date assertions.
A Debian fork? Please wake me up and tell me it was only a nightmare.
It’s not a nightmare, it’s Microsoft’s wet dream (fragmenting the Linux community more always helps someone!).
– Brendan
There are always people who cannot handle change. I will stay and support Debian, thank you.
systemd is a pretty nasty monster to have lurking in your subsystems. I can recommend switching to eudev, the best the Gentoo devs have come up with in years (which shows how much we dislike stuff violating the Unix philosophy).
Discussions on IRC did include thoughts on moving to eudev. We’ll likely not have a choice in the matter once udev is no longer usable. And in the longer term, moving to openrc (the preliminary work to enable migration was done last summer in Debian).
OpenRC is certainly recommendable. I like it. The migration in Gentoo was a bit rough, as usual, but it works nicely, I’ll admit. It is not a replacement for sysvinit or other init systems but plays along with them in a proper fashion (seen from a Gentoo’er’s perspective).
This also means that OpenRC is a whole lot easier to fit to a BSD system (or any other system) than the evil known as systemd.
No thanks. systemd is just systemd, journald and udev as a minimum; everything else is configurable, and that’s fine by me.
Rather pointless waste of resources. I’ll just continue using Debian.
Often when an open source project starts to gain momentum, a separate group appears, makes plenty of noise and creates a fork. Some may call this freedom, but looking from the outside, it does prevent wider acceptance, especially among other-OS users. Free participation, which is the great strength of OSS, is also one of its weaknesses. When Wayland came along, it was a fantastic start and I thought no one could ruin this. But an MS-supporting miracle came along (again) and Mir was created. Is this the way open source projects are going to keep working in the future?