Lennart Poettering, the author of systemd, has announced: “I just put a first version of a wiki document together that lists a couple of easy optimizations to get your boot times down to [less than] 2s. It also includes a list of suggested things to hack on to get even quicker boot-ups.”
The document specifies that it works best on systems with SSDs; in other words, computers that would already boot in a few seconds under a normal init system. A 2-second boot on a machine with an SSD is not what I’d consider a big win.
I’ll also note that
a) Last time I set up Arch Linux on my netbook – using sysvinit, and a normal hard disk drive – it booted to the login screen in about 10 seconds. This wasn’t a major feat either: all I had to do was customize the initrd and background a few daemons.
b) Boot time contests don’t matter, because desktop environments like Gnome 3 and KDE 4 have bloated to the point that login is now slower than the rest of the boot process. Right now Ubuntu can take 20 seconds to get to the login screen, and 30 to get Unity into a usable state.
(And I don’t know about you, but I care about the login time more than the boot time.)
I have an Atom netbook (maybe 4 yrs old now) with which I occasionally swap the storage drives. SSD runs Debian Sid and the stock 5400rpm HDD runs Arch Linux.
Arch + systemd alone wasn’t that great, but Arch + systemd + e4rat (which only works with ext4) gives me around 7 seconds from GRUB to prompt. Login time is usually 1 – 1.5 seconds after I do “startx”, since I run lightweight tiling WMs most of the time.
You might want to check out the ArchWiki’s e4rat page if you’re on ext4 on a regular HDD.
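For reference, the rough sequence I followed is below. This is a sketch from memory and the ArchWiki, so double-check the exact binary paths and log location on your install.
# 1. Boot once with this appended to the kernel line in GRUB, so e4rat can record what gets read at startup:
#      init=/sbin/e4rat-collect
# 2. After that collection run, physically reorder the recorded files on disk:
e4rat-realloc /var/lib/e4rat/startup.log
# 3. From then on, boot with the preloader as init so everything is read back in one sweep:
#      init=/sbin/e4rat-preload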
I actually care about my computer’s hibernation state even more than the boot time. I would rather not have to reboot my laptop; however, after a few sleep sessions my resources start to fade away.
From login screen to usable unity takes ~15 seconds for me.
I’m very interested in systemd, and getting boot time down to 2 seconds sounds pretty amazing. However, the current version of Fedora is way slower, even slower than Ubuntu’s Upstart. So I’m wondering whether this new super-fast boot setup will find its way into the next Fedora release. I’m currently an Ubuntu user, but I’d give Fedora another look if they could come up with a 2-second boot-up.
If such amazing boot times can be achieved, maybe Mark Shuttleworth would reconsider systemd. It’s kind of troubling that Linux initialization is becoming ever more fragmented with SysVinit, Upstart, OpenRC and systemd all competing for mind-share. That’s got to be making things more complicated for developers.
I have had similar results. On my hardware Ubuntu (with Upstart) boots faster than Fedora (with systemd). Not by a lot; I wouldn’t say the time difference is either large or important. However, it does make me question whether systemd is worth the effort to implement and adopt.
If you read the list, you will notice that some of the things that slow down systemd’s boot time are the advanced storage options Fedora supports by default, like LVM, iSCSI, etc. Fedora’s defaults are the problem (for a pure desktop user), not systemd.
That’s typical behavior for competing Linux distributions: touting features here and there. But it’s nothing new; almost all of these features are just updated versions of existing software packages. Over in the GNOME camp, they are still debating where to put the power-off button, or whether to hide it behind a modifier key. Instead of improving the desktop for home users, developers (ISVs), and business users, they are all busy with things that only they themselves (the internal developers) care about. Ubuntu is the exception; I think its strength is being a user-centric distro.
Fast boot times are overrated.
Fast login, now, that’s something worth pursuing. Unfortunately, everything sucks in that regard.
LXDE is quite good on that front if you don’t mind your DE being fairly barebones. (Just check “Preferences > Desktop Session Settings” to make sure no undesired GNOME/KDE/Xfce autostart components came along for the ride.)
Functionality is more important to me than raw login speed, so I’ll stick with KDE. I admit that I haven’t really investigated improving login times under Kubuntu, but it is something I maintain with Windows. I think I’ll take a peek later tonight.
Windows is especially bad, since it seems that every time you install or update something, commercial software wants to add something that launches automatically, so every time I install something I fire up msconfig and check to make sure only what I want to launch at startup actually launches.
Apple is especially bad. I have banished it from my desktop. If I want to add something to my iPod, I fire up a virtual machine dedicated to iTunes.
Speaking of which, I just removed and added a bunch of software, so I’d better check…
EDIT: Then again, KDE can be overwhelming with features, so I think I’ll check it out…
Since you’re on Kubuntu, I’d suggest installing LXDE with `apt-get install lubuntu-desktop`. That’ll give you a Lubuntu option in KDM with a more polished, comfortable default theme and configuration than bare LXDE.
Thanks. I’ll do that tonight.
Use a tiler (Xmonad, dwm…etc). You’ll get both speed and functionality.
I realize you’re not being serious, but… That is not the same kind of functionality. Way too much stuff on Linux is tied to heavy desktop environments – networking and automatic power management in particular stand out. Getting this stuff working under a standalone WM tends to require extensive configuration or ugly, insecure hacks.
(And in the case of NetworkManager, it doesn’t work in a command line environment, period. nmcli is a complete joke.)
Pretty annoying. Especially seeing as laptops, which could theoretically benefit the most from a lightweight environment, are currently left with crippled functionality.
Yes, it was mainly tongue-in-cheek, as I understand most people don’t like tilers, but I have to disagree with you on the networking and power management. There’s ceni, netcfg and good ol’ wpa_supplicant for networking, and the glorious but little-known Linux-PHC undervolting (in combination with the usual cpufreq), which used to give me more juice than what the manufacturer stated on the box. I say “used to” because the battery is a bit old now, sadly. As far as I know, these work in almost any DE or WM, though not quite as automated as what you might find in Gnome or KDE.
The only downside (for some people) is….
…. like you mentioned. You do need to spend a little time editing configs. No security implications that I’ve noticed so far, but then again, Arch isn’t exactly an OpenBSD rival on that front.
Oh, and some tilers come with systray functionality these days, so you can pretty much dock your nm or wicd or cpu-scaling applets there as well.
Apologies for going somewhat off-topic.
wpa_supplicant provides no simple way to connect to an encrypted network on the fly; you have to be able to write to the wpa_supplicant.conf file.
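For the record, “being able to write to the conf file” in practice means something like the following as root (just a sketch; adjust the config path to whatever your distro uses):
wpa_passphrase "MySSID" "secret passphrase" >> /etc/wpa_supplicant.conf   # append a new network block
wpa_cli reconfigure                                                       # tell the running daemon to re-read its config
Hardly “on the fly” if you’re sitting in a café without root handy.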
ceni requires the root password.
Suspending and hibernating the computer requires long, unintuitive dbus commands. Either that or sudo, which as I understand it is a security hole.
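To illustrate “long and unintuitive”, the suspend incantation I have in mind looks roughly like this (from memory, via UPower; the exact service and method names may vary by distro and version):
dbus-send --system --print-reply \
  --dest=org.freedesktop.UPower \
  /org/freedesktop/UPower \
  org.freedesktop.UPower.Suspend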
Mounting stuff in a file manager requires either a working consolekit session (which is not possible on many distros, ranging from Debian Squeeze to Ubuntu 12.04), or messing around with PKLA files (which is again probably a security hazard). Alternatively you can use one of the various immensely bloated login managers…
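And by “messing around with PKLA files” I mean dropping something like this into polkit’s local authority directory. Treat it as a sketch only: the action name here is the udisks1 one, and the “storage” group is just an example, so check what your setup actually uses before copying it.
cat > /etc/polkit-1/localauthority/50-local.d/10-enable-mount.pkla <<'EOF'
[Allow the storage group to mount drives without a ConsoleKit session]
Identity=unix-group:storage
Action=org.freedesktop.udisks.filesystem-mount
ResultAny=yes
EOF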
I actually posted a rant about this on the Arch forums, and a lot of people seemed to agree with me. Basically it appears to me that, while Linux based GUIs for doing this stuff have improved recently, the friendly CLI environment to back it up isn’t quite there.
Edit: And I should point out that by “friendly” I mean “friendly to experienced users,” not “friendly to complete novices.” No CLI is friendly to novices, but a good CLI must be friendly to people who know what they’re doing; i.e. it shouldn’t make things more complicated than they have to be. And almost every Linux CLI thing that involves wireless, power management, or device mounting makes things more complicated than they have to be.
Another tiler-junkie here (Notion fan)
Indeed I personally am pretty happy with wicd-client for configuring connections to ad-hoc networks. The systray icon is a nice (though optional) extra.
The funny thing is, I’ve actually been meaning to swap out LXPanel and Openbox for AwesomeWM for a while now… I just haven’t had time to implement the hybrid tiling/floating mouse/keyboard interaction I want in Lua yet.
…and I’m not sure whether the Lua API for AwesomeWM is powerful enough to implement drag handles for adjusting the tile sizes. I might have to expedite my plans to learn Haskell and just de-GNOMEify Bluetile.
(Bluetile is an attempt to make XMonad friendly enough for the average GNOME 2.x user, but it’s too GNOMEy for me, even though it implements more of the features I want in a tiler than any other tiler config I’ve ever seen.)
Not sure what you meant by “drag handles”, but if you want variably sized tiles, then you’re probably looking for a dynamic tiler.
No idea about Awesome. I’m not a fan of Lua.
You’ve basically highlighted the main problem with tilers. They need to fit you like a second skin, or they’re bollocks. Probably why you see a lot of people WM hopping until they find one that works for them.
Bluetile can be configured to let you resize your tiles by grabbing the border between two windows and dragging. (Like the splitter widgets used to implement things like file manager sidebars)
What specifically do you mean by “dynamic tiler”? I’ve never heard the term before.
Not necessarily. I’d settle for something that implements enough of the de facto standard interactions from floating WMs that I could ease myself into it. (Again, something Bluetile scores well on. dwm-derived WMs like AwesomeWM seem to revel in making sure as few keybindings are shared with floating WMs as possible.)
Dynamic basically means manual in my simpleton lingo. Sorry for the confusion.
Hmm,
I personally prefer the Master-Stack model found in Dwm/Xmonad/SpectrWM, but maybe you might like something like WMFS2.
Project Page: http://wmfs.info/
ArchWiki: https://wiki.archlinux.org/index.php/WMFS2
Decent Guide: http://crunchbanglinux.org/forums/topic/18819/wmfs2-guide/
Supposed to offer the best of both tiling and stacking worlds, at least according to the people that talk about it (and the screenshots).
Very little experience with Awesome and no experience with Bluetile, so I can’t help you there. Awesome can probably do what you’re asking as well.
Thanks. I’ll take a look at it once I have a moment to spare.
I like the concept of the master-stack model but the way I tend to use my stuff precludes using it exclusively. (It also doesn’t help that I’ve got a pair of 1280×1024 monitors, which is too narrow for master-stack with many of my apps)
If I could find a Window manager that was lightweight and themable like Openbox but easier to extend, I’d probably start experimenting with implementing a hybrid of floating, tiling, and tabbing that does what I want.
As is, I just have to try to think of ways to work around the existing Window manager to implement as much as possible of what I want without killing off the mature floating implementation.
Actually, I think dwm has the features you want. Mod+j will move the master/stack split right (I think, and k does left; idk, it’s not a combo I know well, I tend to have to fiddle a bit to find the right one). Theming is a hex color edit and a recompile away. And different tags (or programs) can be set to float or tile by default.
Requiring a recompile to modify the config automatically denies it the status of “easy to customize”. That’s why I started my experiments with AwesomeWM.
…and I’ve already got those two particular features in AwesomeWM. They’re trivial. The hard parts are adding a complete set of mouse-based interactions so I don’t have to waste time going for the keyboard if my hand is already on the mouse, making menus XGrabPointer so you don’t have to click within them to close them, and using a horizontally-tiled pixmap to theme titlebars so you can get subtle gradients.
make && make install seems easy enough to me. Then again, my needs are not your needs. I use the keyboard extensively and couldn’t care less about the mouse. So at least in my use case, all the features you want are available to me.
Ha, that’s funny. I always use ‘static’ vs ‘dynamic’ to mean whether the WM ever resizes windows unless I specifically ask it to – so I’d consider manual tiling ‘static’ and basically anything else ‘dynamic’.
(There are not a lot of tiling WMs that are “static” in this sense – I ended up maintaining one that is, as I didn’t find any others I could get used to. Shameless plug: check out http://notion.sf.net if interested.)
I don’t have enough screen real estate to feel comfortable with a tiling WM.
Windows and KDE both snap windows to the left and right sides of the screen easily enough. That is useful often enough for what I mostly do at the moment. Most of my other tasks would tend to be maximized, even on high-res screens.
I miss having a 1600×1200 display, but my computer now is a laptop with a 1366×768 display.
How about Xfce, then?
http://www.webupd8.org/2012/05/install-xfce-410-in-xubuntu-1204.htm…
You could try razor-qt, a new KDE-like environment without the bloat.
http://razor-qt.org/
Then again, if functionality is what you’re after, perhaps KDE is exactly what you need. 🙂
It doesn’t really start quickly, but there’s also that maintained fork of KDE 3.5 named Trinity which is pretty lightweight once it is started.
http://trinitydesktop.org/
There’s a bootchart graph in the article showing a sub-2-second boot from kernel to XFCE desktop. It includes login, PulseAudio, power management, the XFCE panel, Thunar, etc.
http://git.fenrus.org/tmp/bootchart-20120512-1036.svg
Since login times are so long, you don’t want to be stuck waiting for your machine to even reach the login screen, do you?
Can anyone explain to me why fast boot times matter? I reboot my laptop at most a few times a year when I upgrade my kernel. I suspend-to-RAM several times a day, and hibernate when I travel.
Aren’t fast login times much more important on multi-user systems?
Boot times matter because there are many brain-dead journos out there who think that boot times are a measure of how fast a particular distro is.
As opposed to running some real-life benchmarks, but that would take real work on their part, rather than just hitting ‘Next’ in the installer and using that experience for their copy.
A lot of people here think boot time doesn’t matter. I know hibernation makes boot times less frequent and less relevant to some people. However, sleep/hibernation modes represent a great deal of complexity spanning the OS, BIOS and hardware, and are frequently the cause of driver bugs. If turning on a computer were as fast as waking it up from sleep, then it might eventually enable an OS to do away with the ugly complexities of hibernation.
In my opinion one should be able to turn on and use a computer much like they turn on and use a TV – only waiting for the display to “warm up”.
Hibernation has other benefits like being able to keep all your programs/documents opened. Even if you have a session manager that saves state on shutdown, it still needs to reopen everything one by one as opposed to loading an image into ram.
Does sleep/hibernation support in drivers really add complexity, or just require drivers to be written correctly?
Also, saving the state of the 30+ terminal and app windows I have open is not simple for my desktop — no desktop I’ve ever used has done it correctly.
Hypnos,
“Does sleep/hibernation support in drivers really add complexity, or just require drivers to be written correctly?”
It’s both actually.
Unless the system is placed in light sleep where peripherals continue to draw power, they will have to be reinitialized upon restart, but now we need new mechanisms to ensure the OS state that was saved to disk upon hibernation can be fully restored. This kind of state synchronization is far from trivial, especially when there are physical bus changes between hibernation sessions like USB devices being changed around.
“Also, saving the state of the 30+ terminal and app windows I have open is not simple for my desktop — no desktop I’ve ever used has done it correctly.”
Yeah, unfortunately most applications and operating systems weren’t designed to enable applications to save and restore their session state. I have read about an OS/API that does it, though despite searching I wasn’t able to find its name again.
Hibernation adds shutdown delays, although these are less annoying than bootup delays. In theory, though, a well-tuned normal bootup should beat a hibernation bootup, because hibernation saves and restores fragmented RAM, which is wasteful.
Yes, it does add complexity. Because when you have a device completely powered down, it’s no longer maintaining state – e.g the graphics chip no longer knows what mode it should be in, the contents of video memory are gone, etc.
Which means that when it wakes up again, the video driver has to put it back into the state it was in before. Re-initialise the hardware (effectively a bootup sequence for the GPU), switch to the right mode, then ensure that userspace does a redraw to finish things.
Hibernation rarely works in the Linux world.
It’s always something, either your Wi-Fi, your sound card or whatever you have connected to USB, but a single faulty driver makes the whole hibernation-as-bootup concept useless.
A while ago, I tried to sum up in a blog post the reasons why I prefer to turn my computer off instead of putting it to sleep when I’m not using it. Maybe you would want to read that.
http://theosperiment.wordpress.com/2011/07/15/5-reasons-why-i-think…
Thanks for the link. My responses:
#1 — don’t care. I use my laptop like my phone, always has to be at the ready.
#2 — not sure. I’ve never had RAM fail on me despite 24/7 operation. It’s true that constant charge/drain on a Li-ion battery is not good. On my Thinkpad I set the charge thresholds using the SMAPI interface and leave the laptop plugged in most of the time.
#3, #4 — not really a problem in Linux
#5 — this one is more interesting — I will consider it.
Well, the components which I have ever had to change late in a computer’s lifetime, and thus consider vulnerable to aging, are…
-> Laptop batteries: always, because unlike most other computer components they aren’t designed to last much more than 18 months.
-> Hard drives: from time to time, but these are not affected by permanent sleep since they are pretty much turned off.
-> Screens, RAM: rarely, but it happens, especially with the cheap no-name components that they put in commercial computers. Failing RAM is especially annoying because it is tricky to diagnose, though MemTest is helpful when it works.
On Linux, most background services are only restarted during OS reboot. So while their files may be updated, the running copy of the software isn’t. Due to this, if you leave a computer in sleep or hibernate for months or years, you effectively take nearly the same security risks as if you disabled updates altogether (though “user” software will be kept up to date on its side if you regularly close it).
Linux also has its own issues with sleep and hibernation. Basically, some drivers can sometimes end up in a garbled state and freeze upon sleep and resume. If you can isolate which kernel module exactly is failing (in my case ath9k), you can ask the OS to unload it on sleep and reload it on resume, but this approach has its problems.
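In case it helps anyone, the “unload on sleep, reload on resume” part is basically a one-liner if your distro uses pm-utils. This is just a sketch; swap ath9k for whatever driver is misbehaving for you.
echo 'SUSPEND_MODULES="ath9k"' > /etc/pm/config.d/unload_modules
# pm-utils will then unload the listed modules before suspending and modprobe them again on resume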
Thanks for your reply.
I restart services when they are updated; sometimes a reboot is required, usually not. Isn’t this standard practice on *nix?
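On Debian-ish systems my routine after an upgrade is roughly the following (checkrestart comes from the debian-goodies package; the service name is just an example):
checkrestart                # list running processes still using old, deleted copies of upgraded libraries
/etc/init.d/ssh restart     # then bounce whichever services it points at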
I’ve never had an issue with garbled hardware state on sleep/hibernate on my Thinkpads. These have had Intel GPUs. But, I have had to make a few tweaks on other machines.
I wish Android would boot that fast. On my phone it takes more like a minute (including system-storage scan), which is ridiculous and probably slower than my mechanical-drive laptop.
I’ve always found it bewildering how long so many commercial operating systems take to boot up.
The following bottlenecks probably deserve most of the blame:
1. Unnecessary Serialization.
Most devices are independent from one another and can be initialized at the same time, but many coders are lazy or not good at multithreaded/multiprocess programming and depend on critical sections, disabling interrupts, etc. Even drivers which support parallel execution might be prevented from doing so by the OS. Several Linux distros suffer a huge performance penalty because their init scripts load everything serially instead of in parallel (see the shell sketch after this list).
2. Unnecessary delays.
I’ve seen this so often it makes me disappointed in my peers, but they often solve race conditions by adding arbitrary delays throughout the code. For example, back in the modem days it was very common to see companies shipping code that inserted delays between AT commands, and even some cases where the code had hard-coded delays between individual characters. Such delays become increasingly suboptimal as hardware evolves. One company thought they were being clever and made the delays configurable – how thoughtful. Removing these delays means solving the race conditions that the original developer couldn’t be bothered to solve.
3. Disk thrashing.
This one is easy enough to fix by using an SSD. For HDDs, placing everything in an initrd and/or rearranging startup files linearly on disk will significantly reduce the need for numerous slow disk seek operations. I’ve seen defrag tools do this for Windows, but I’ve never seen anything similar for Linux (can someone clue me in!).
(4. Network delay)
I hesitate to add this one, since an unresponsive network should not really cause local bootup delays. But in fact I have seen cases of it where serially loaded processes get blocked due to a process waiting on DHCP or a network connection.
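To make point 1 concrete, the difference is basically the following (a toy sketch with made-up daemon names, not any distro’s actual init script):
# serial: each daemon must finish starting before the next one begins
/etc/init.d/daemon_a start
/etc/init.d/daemon_b start
/etc/init.d/daemon_c start
# parallel: independent daemons are launched together and we only wait once at the end
/etc/init.d/daemon_a start &
/etc/init.d/daemon_b start &
/etc/init.d/daemon_c start &
wait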
I’ve also occasionally run into parallel loading where neither the developers nor the system were smart enough to take prior boots into account, so network init started late enough that the init system ran out of tasks it could run in parallel before the network was up.
ssokolow,
Yea exactly.
The system will become idle while waiting for the network to come up, and then when it does it gets busy loading again. Ideally the loading would never stop/block until everything is fully loaded, even if the network isn’t ready.
The problem can be more complex than meets the eye, since many daemons will fail when they call bind & listen if the network isn’t up yet. Existing standards lack a way to have these processes load all their resources prior to the network becoming available. They’d probably need to poll for the network, which would only make the problem worse.
I did the following tests:
Terminal A
ifconfig br0:1 down
socat TCP-LISTEN:5555,bind=192.168.100.52 -
// above fails because network IP not yet available
ifconfig br0:1 192.168.100.52
socat TCP-LISTEN:5555,bind=192.168.100.52 -
// above succeeds
Terminal B
ifconfig br0:1 down
socat TCP-CONNECT:192.168.100.52:5555 -
// connection fails
ifconfig br0:1 192.168.100.52
socat TCP-CONNECT:192.168.100.52:5555 -
// connection succeeds!!
The last connection reveals an interesting fact: it is possible for a Linux process (the second socat in Terminal A) to hold an open listening socket on an interface that is not yet enabled. If we could somehow get into this state on bootup, then it would solve our dilemma perfectly. Unfortunately, though, I’m not aware of a direct way to enter this state. Somehow we’d need the Linux API to permit the first command in Terminal A to succeed.
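Edit: it looks like there may actually be a knob for this. I believe the net.ipv4.ip_nonlocal_bind sysctl (or the per-socket IP_FREEBIND option, if the daemon sets it) is supposed to allow binding to an address that hasn’t been configured yet, which would make the failing case above succeed. I haven’t tested it, so take this as a sketch:
sysctl -w net.ipv4.ip_nonlocal_bind=1
ifconfig br0:1 down
socat TCP-LISTEN:5555,bind=192.168.100.52 -
// with the sysctl set, the bind should now succeed even though the address isn't up yet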
The number one suggestion is to not use LLVM.
Well, backups are a gazillion times more important to me than boot time improvements, so no. Heck no.
Number two also requires not using LLVM. So double heck no to that.
Number three is to not use SELinux. Triple heck no. Not reading any further.
I think you mean LVM (Logical Volume Manager) not LLVM (Low Level Virtual Machine).
Late-night eyes and brain didn’t see the extra L; my bad.
“Consider disabling SELinux and auditing. We recommend to leave SELinux on, for security reasons, but truth be told you can save 100ms of your boot if you disable it. Use selinux=0 on the kernel cmdline.”
Ummm.
Hey, I can do a lot in 100ms! I could literally blink in that huge window of time, or I could have a fleeting thought, why I could even scan halfway across my screen with my eyes!
Sarcasm aside, I don’t use SELinux myself as I’m not that paranoid about security. My router’s firewall has never been breached (in fact I’ve tried to get through it myself while at work, to no avail), and I also use iptables on the computer itself. So I’ll be able to blink at least one more time during boot!
Yeah, so why was it suggestion #3 on how to improve your boot time, if they themselves said it was a bad idea?
I mean, you could eliminate the wait time altogether if you simply threw your computer into a river. You’d never wait for that particular computer to boot again!
… And if it was number 12, you would be OK with it? *
– Gilboa
* That’s assuming the order was *intentional* and/or meaningful…
Well, if you ask me to list a number of ways to improve things, the most obvious ones and those of greatest help would be at the top, while those that make no sense would be at the bottom or not on the list at all. Although I may descend from those that are slightly helpful to those that are not helpful at all before entering the realm of the absurd. So it wouldn’t be bad at #12 if #16 were “throw the computer into the river” and #20 were “obtain a microscopic black hole…”.
Did you notice that the article is mostly about configuring systemd for appliance-type hardware, not for a desktop or server?
This is for an environment where a) hardware is a constant and so modules aren’t required, and b) the root partition is going to be on a simple disk setup, probably an SSD. In that environment, neither LVM nor initrd are required.
Again, this is *not* about your personal desktop machine or laptop.
I am sorry. I can see that there are some benefits with systemd, but doesn’t anyone else feel like systemd is implemented in a way that’s pretty much the exact opposite of what we always liked about unix-like operating systems?
The boot process (I myself always liked BSD-style rc.d, like in Gentoo or Arch Linux) never was something I thought would need reimplementation.
Also, I don’t really like that everything is being replaced by “magic” in the form of services that guess everything on their own. I know this is a bit off topic, but take LightDM for example. I really do like its concept, because it tries to be a display manager for everyone, whatever desktop or window manager you use, whether you prefer GTK, Qt or whatever. What’s really bad about it is that when accountsservice is installed (merely installed, not even enabled), you can’t configure it by hand anymore. The problem is that accountsservice usually comes in as a dependency, so you are locked into this unless you get hacky.
I know one shouldn’t always have to deal with configuration files, but what was so bad about having them set up automatically by some configuration frontend (or a script that runs on installation), so they can still be easily configured by hand afterwards? Also, why did everyone start using XML? Were other formats really a problem, and wouldn’t something like YAML or (I can’t believe I am saying this) even INI files be better and preferred by most admins?
Sorry, I don’t want to rant, but I feel like just a few years ago everyone would have screamed “NO WAY!” at these things and I am not sure what has changed.
I am not saying everything is bad and that everything should always stay like it was, but I don’t really see how Lennart Poettering simply gets broadly accepted by everyone. Sometimes it looks like a desperate attempt to bring Linux to the desktop.
If that’s really what people want, I think there would be huge potential in overthrowing Windows by doing Windows 8 (HTML5/JavaScript) the right way (Qt has the right tools; QML and WebKit are amazing).
Sorry, if this sounds like a rant. Maybe I am just overlooking some important things. I used to be more involved some time ago, so maybe that’s my problem in seeing a lot of sense behind all this.
I humbly disagree. Lots of people are dissing systemd on theoretical grounds instead of examining what it’s actually doing.
As far as I’m concerned it’s a) taking much of the suckiness out of traditional sysvinit which is layers upon layers of shell hacks while b) being relatively lean (most of the extra stuff people complain about are just small programs bundled with the systemd source and don’t affect the daemon) and c) standardizing some of the stupid different-because-noone-coordinated differences between distributions that just waste time for everyone.
It’s like we had 40 years to collect dust bunnies and now finally someone comes along with the big vacuum cleaner. I expect that in a couple of years, my base system will boot much faster, consume less resources, support more dynamic features, be less confusing and overall more friendly.
Oh, hey – your wish is granted. Systemd *doesn’t* use XML for its configuration files. And thankfully, it doesn’t use YAML either – that’s a good readable format, but kind of obscure, dragging in extra libraries to parse it. No, Systemd uses INI files, nice and readable by humans, and easily parsed without needing to link against special libraries.
You might think that, but the existence of dozens of init replacements suggests that not everybody is as happy as you.
No LVM? Whut? If you’re into btrfs, it can be skipped. However, not using LVM is most of the time a time bomb (and please don’t be stupid by saying that a large partition is good enough; it’s not).
Bypass initrd….. sigh.
disable auditing and selinux/apparmor. what….?
disable syslog…
Oh my god, typical developer again. He already did no good with PulseAudio and systemd. And for what?
Not to offend any developer, but if he or she doesn’t understand how a system works or what things are needed for, we shouldn’t let him proceed. Look at the merging of all the paths he’s also in favor of.
How to kill linux… hire Lennart.
Remove cron… GNOME 3.4… etc.
This man is full of painful ideas that are nowhere near useful. As if I would restart my servers daily. I don’t. Besides, testing 192 GB of memory takes a bit longer than a boot cycle. Also, on my laptop I just suspend. It works for months, and the only time I recall a reboot being needed was twofold: a new kernel and filesystem checking.
How can this man be stopped… can we stand up, or do we keep having things forced down our throats?
🙁
Well, I think you have to think about the intended audience. The intended audience is people who’d like to hack their system or people making a specialized distribution for embedded systems.
The page is not saying everyone should do this.
Also some of the points are about limitations of current technology, like LVM. Think of them as short-term suggestions until long-term solutions show up.
Well, it will save you hundreds of milliseconds!
Because that adds…what..100ms to the boot?
I think it’s the same kind of people who care about this stuff that also put flame decals and spoilers on their stock Japanese cars, thinking it makes them faster.
In all fairness, I think PulseAudio works pretty well these days, at least for me.