Ever wonder what makes a computer tick or how a UNIX server does what it does? Discover what happens when you push the power button on your computer. This article discusses the different boot types, managing the AIX bootlist and the AIX boot sequence. After reading this article, you will better understand what exactly happens when your server starts.
Not very different from many other *NIX systems: it goes through a boot loader, kernel and init, and uses runlevels and an inittab file.
By the way, for everyone wishing to learn this stuff in more detail and for different systems (Linux, FreeBSD, HP-UX), may I suggest the “Unix System Administration Handbook” (Nemeth et al.). It is an excellent read, a real eye-opener. Knowing how your system boots, and especially how the init scripts work, will make you much more confident in using and configuring it.
Interesting article. I like this complicated technical stuff. :) Although I don’t have much experience with AIX (due to OS/390 running on the AS/400), many things mentioned in the article are understandable, obvious, logical and to be expected when you’re coming from a UNIX background. So no matter which particular kind of UNIX or Linux you’re using, most things look familiar.
The alternative to the runlevels is the use of an rc script and the rc.d/ entries, as is the case in the FreeBSD OS. Refer to “man boot”, “man loader”, “man init” and “man rc” for further education.
Another interesting read: “The Magic Garden Explained” by Goodheart and Cox, pp. 48, 273.
I already hear someone screaming: “But the PC does it on its own! I don’t want to know anything!” :)
You probably don’t have much experience with OS/390 either, considering it runs on S/390 (now System z)…
Long time ago, brain not sufficiently functioning… :)
//SYSIN DD *
Of course you’re right. It was OS/400 on the AS/400 (the older, beige ones), doing COBOL, Fortran, and of course JCL. By the way, on today’s zSeries systems you’ll find z/OS, too (a system I had the time to play with a little).
Off topic, ABEND. =^_^=
/*
Hehe, I think we are two of the most prominent BSDers on this site.
We all know which OS’s users cry out like that.
Doc Pain wrote: “The alternative to the runlevels is the use of an rc script and the rc.d/ entries, such as it is the case in the FreeBSD OS. Refer to “man boot”, “man loader”, “man init” and “man rc” for further education.”
I have seen these rcX directories on some Linux distributions, and my Debian installation has an init.d/ and several rcX.d/ directories in /etc.
In these rc{0,1,2,3,4,5,6}.d/ directories are symlinks to scripts in /etc/init.d/, and it seems to me, at least, that this system uses both “rc.d/ entries” and runlevels, so I don’t really get the distinction between rc directories and runlevels…
Care to explain?
Have a look at your /etc/inittab file. You will find a line that says something like:
id:2:initdefault:
This means that when your system starts normally, it enters runlevel 2. Now move a few lines down, and you will see this:
l2:2:wait:/etc/init.d/rc 2
This basically means that the rc script will execute the scripts in /etc/rc2.d.
Now, have a look at the scripts in /etc/rc2.d:
An example:
S20ssh -> ../init.d/ssh
This starts the SSH server. Obviously it is just a link to a script in init.d, but the rc script reads it from /etc/rc2.d. The name and number are significant too. The 20 determines where in the sequence the script runs. For example, this:
S19nis -> ../init.d/nis
executes before ssh. The “S” in the name means the rc script will call this script with a “start” argument. Essentially, S20ssh is like writing:
/etc/init.d/ssh start
If it had a “K” instead of an “S” it would be called with a “stop” argument.
Have a look at your /etc/rc1.d scripts. These are called when you switch to single user mode (runlevel 1). You will see that quite a few services are stopped (or “K”illed) when entering runlevel 1:
K80nfs-kernel-server -> ../init.d/nfs-kernel-server
This calls /etc/init.d/nfs-kernel-server stop, so NFS sharing is stopped when you enter single user mode.
There are also two “special” (or transient) runlevels, namely 0 (for shutdown) and 6 (for reboot). Have a look at the scripts there too.
After all, it is not a difficult system
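To make it concrete, here is a rough sketch of my own (not the actual Debian /etc/init.d/rc, which handles more corner cases) of what an rc-style script does for a given runlevel:

#!/bin/sh
# Minimal illustration of a SysV-style rc script for one runlevel
RUNLEVEL=$1

# First run the K* scripts with "stop" to shut down unwanted services
for script in /etc/rc${RUNLEVEL}.d/K*; do
    [ -x "$script" ] && "$script" stop
done

# Then run the S* scripts with "start", in numeric (lexical) order
for script in /etc/rc${RUNLEVEL}.d/S*; do
    [ -x "$script" ] && "$script" start
done

The real rc script is smarter (for example, it compares against the previous runlevel so it does not restart services that are already running), but the loop above is the core idea.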
This “boot” thing you’re talking about… it happens more than once?
Yes, after a prolonged power outage when the UPSes are about to die. That normally happens once every 2 or 3 years. In theory, if you plan your upgrade cycles around power outages, you can probably get away with rebooting your OS only then. However, most of the time good administrators will reboot the server occasionally, usually after making some changes, to make sure settings stick, as well as to update their startup documentation, if there is any.
That happens after about 5 minutes in our datacenter. However, after 30s the generators are fully running, and if the datacenter managers don’t keep the generators running, we don’t have to pay them.
What I’d like to know is what an OS does when it shuts down.
Why does it even touch the disk at all? I really don’t understand why it takes so long. A kill signal should be sent to all processes, and then the power cut. What is the hard disk needed for? If there’s stuff still to be saved, why wasn’t it saved when it was changed to begin with (like preferences &c.)?
Whatever happened to the Amiga way of shutting down? Why isn’t that possible now?
An (oversimplified) explanation is this:
Most of the processes you mention maintain data files on the filesystem, constantly reading from and writing to it. An obvious example is a database system. The OS itself does not immediately commit all writes to disk: it prefers to keep some of them in memory and flush them to disk at the best opportunity (i.e. when load is low, etc.). This is necessary, since disk writes are costly (hard disks may be very fast these days, but they are still a lot slower than main memory). Other things to consider are processes that are swapped out to disk, files that are cached, and so on. At any point in time, the filesystem has open files, with data either waiting in memory to be written or currently being written. When an application knows it will access a file again soon, it will not close and reopen it (that is costly as well).
When you shutdown, every process must be stopped, and all the data has to be actually written on the disk platters. Only then is the filesystem consistent and ready to be unmounted. Depending on how many apps are running and the amount of writes still pending, this may take some time.
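To give an idea (a very rough, simplified sketch; the exact commands and their order vary between systems, so don’t treat this as the actual shutdown script of any particular OS), the tail end of a shutdown looks roughly like this:

# Simplified illustration only, not something to run by hand
kill -TERM -1     # politely ask all remaining processes to terminate
sleep 5           # give them a moment to save their state and exit
kill -KILL -1     # force anything still left to stop
swapoff -a        # stop using swap space
sync              # flush all dirty buffers to disk
umount -a         # unmount filesystems so they are marked clean
# ...after which the kernel powers off or resets the machine

Only after the filesystems have been unmounted (or remounted read-only) cleanly can the power be cut without risking data loss or a lengthy consistency check on the next boot.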
Having said that, I rarely turn off my Linux/BSD systems, so this does not affect me
In that case Vista must decide to rewrite the entire hard disk just for good luck, with the amount of churning and how long it takes. I’ve seen Vista laptops take three to four whole minutes to shut down! That isn’t dumping buffers, that’s earnestly trying to hit the MTBF
Hehe, I haven’t experienced such long delays on my Vista laptop yet. However, there is a solution: whenever possible, do not shut down, just hibernate.
My Vista beats 3 min hands down!
No really, it shuts down normally since SP1.
Regarding the question
you gave a good explanation. I’d like to add the following:
Most users coming from a PC background do not see that UNIX is meant as a multi-user, multi-process operating system. So it may be possible that many users are working on the same machine when it shuts down. The OS usually gives shutdown warnings, giving users time to finish their work. Then, specific signals are used to make the running applications do their own “shutdown stuff”, e.g. saving unsaved files to disk so they don’t get lost even if the user forgot to save them. After this, the applications are requested to terminate themselves.
As you mentioned, data is usually written asynchronously, so at shutdown the OS usually waits some time until all buffers are flushed.
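As a toy illustration of an application doing its own “shutdown stuff” when it receives such a signal (the script and file names here are made up for the example):

#!/bin/sh
# Toy daemon that saves its state when asked to terminate
STATE_FILE=/tmp/mydaemon.state    # hypothetical state file

save_and_exit() {
    echo "counter=$counter" > "$STATE_FILE"   # persist unsaved work
    exit 0
}

# Run save_and_exit when we receive SIGTERM (sent at shutdown) or SIGINT
trap save_and_exit TERM INT

counter=0
while true; do
    counter=$((counter + 1))
    sleep 1
done

A real application (a text editor, a database) does the equivalent in its own signal handler: flush its files, write out unsaved state, and exit cleanly.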
Understandable. :)
As I see it, these people complaining about long startup/shutdown times have every right in the world to be disappointed with “modern” OSes in this respect – taking into account that (their) earlier, very simple machines like the Amiga, C64 and the ZX Spectrum did this in one second or so, literally. Also, lots of computer-like appliances such as mobiles, MP3 players, PDAs etc. do the same – this is true regardless of technical reasons!
As a 42-year-old man coming from a diverse electronics, PC, mathematics, compiler design, and datalogi (“computer science”) background, I certainly see that “UNIX is meant as a multi-user multi-process operating system”. In fact, I have tried to figure out – for the last quarter of a century or so – why an old time-sharing system like UNIX would be regarded as a good basis for something resembling a personal computer.
UNIX was originally designed back when CPUs were almost 100 times as expensive as today and therefore had to be shared among many users. Computers were also at least 1000 times as slow and had perhaps 1/10000 the amount of memory, which originally restricted operations to primitive serial character processing very far from today’s graphical interfaces. As most people here know, this has very strongly affected the basic architecture of UNIX, and this background does not fit today’s and tomorrow’s demands very naturally (with many CPUs per user instead of the other way round).
UNIX/Linux should probably be redesigned almost from the ground up, or simply replaced, when it comes to use in personal computers (servers are another matter).
Moreover, today, the habit of not turning a PC off (or even hibernating it) in order to hide the ridiculously looong boot times is simply immoral, as power draw from computers has emerged as a major environmental problem – imagine if we did the same with our TVs.
I come from this era too. Started with a TI-99/4A and ended with an Atari 1040STE. My next machine was a PC running Windows 95. I never understood why it needed such a long boot time.
It is one that has been tested in all kinds of environments and found to be working.
The other systems should be redesigned so they get the stronger points of UNIX. I see too many people complaining about Vista, and almost no one complaining about Mac OS X. What UNIX needs is probably to lose its geeky image (this will be easier when there are GUI tools for every possible setting). In other respects, the motto “Whoever does not understand UNIX is doomed to reinvent it… poorly” seems to stand.
Sure, I agree with you. Though the reason I don’t turn off my machines has nothing to do with boot times (they boot quite fast anyway). I am running two home servers that have to be online all the time. One is running Debian and hosting a site for my students. The other is running FreeBSD and hosts files for the Greek documentation project. Unless there is some other work or test in progress, I turn off my desktop at night.
I agree with most of your views, especially regarding slow-booting Windows (and Linux/KDE), but your statement
seems more theological than anything else to me. Why should a plain personal computer user have to bother with understanding UNIX? That seems bizarre to me.
Also, why these complicated installation procedures (again, on personal computers)? Why not simply design executable files so that they can be run directly and function as both the application itself and a configurer/“installer” that (in most cases) creates or modifies only a few local config files, at any point (i.e. whenever needed), much like some DOS applications did, for example.
It would be very simple and also so much more inherently self-contained (or object-oriented, if you like) than the scattering of files and information across various places, as is normally done by both Linux and Windows installation procedures. Why should simplicity and elegance be so darn hard to achieve? (Again, in a single-user personal computer.)
Sorry for my, perhaps, slightly irritated tone; don’t take it personally, it’s only 20 years of frustration taking its toll.
No, sorry this was not what I meant. An end user that is not the kind of geek like many of us here, should not have to understand UNIX the way this statement implies. People who write OSes should though, and they should try to apply its stronger points to their OS (meaning mostly the under-the-hood design). The desktop OS of the future (if such a thing exists) does not have to be Windows or UNIX, but should merge the best of both worlds, both in the GUI and internals.
I too would love to see all “dependency hell” and “DLL hell” go away. However, the drawback of this (given current technology) would be statically linked programs, with lots of duplicated code. So what, memory is cheap, you may say, but consider how many apps you would have to update when a vulnerability is found in code contained in all of them. There is a price for everything.
Hehe, don’t worry about it, I can sympathize with you. I’ve gone through many systems over the years. There is no such thing as a perfect OS, at least I feel I have a lot more control now that I mainly use Linux/BSD.
I fully agree with you, and moreover, without some dynamic linking, more code would be duplicated in RAM and in the caches, which, of course, are much more limited than disk (or flash) space. However, I feel that a dynamic library placed in its own naturally named subdirectory would be easy to find even for simple applications written the way I suggested above, i.e. without demanding special package handlers or installers (of course, some conventions are needed to avoid dependence on user input or searching).
Several versions (bug fixes and/or variants) of a certain library could be put in separate subdirectories placed under the same “umbrella” directory (as disk space is “virtually unlimited” today), enabling applications to choose a version based on either date stamps or user selection (the user may prefer an older version of a GUI for instance).
I misread the part on reinventing UNIX – sorry for that – but I still feel that the UNIX way is certainly not the only way. For instance, as far as I understand, VMS and WinNT (both by Dave Cutler) seem technically just as sound as UNIX. All the XP and Vista add-ons by MS are another matter, as is what one thinks about MS business practices etc. As everybody here knows, there also exist other systems which are not plain copies of UNIX.
True. If we were to reinvent the OS world today, without caring about compatibility, I am sure there would be plenty of new and fresh ideas to mix with some of the best established ones. The problem is that people seem to care more about providing compatibility with older apps than about innovating. This is even worse in closed-source systems like Windows, where binary compatibility is a must, the user base is vast, and third-party vendors have released thousands of titles. Quite frankly, the simple fact that this whole thing still works, even with all these problems, is a little miracle.
Regarding NT, I was a fan of the system too. I was running NT 3.51 on a PC when everybody else was running 95. I had to invest quite a sum to get more RAM back then, but I was very satisfied with the system, very stable and secure (for the era). There were lots of things to like: user management, NTFS permissions, pure 32-bit. Real heaven. I was disappointed with NT 4.0 though. Crappy performance on everything I put it on. I ended up only installing NT 4.0 servers, which worked OK. My hopes went up again with Windows 2000. Clearly one of MS’s best. Vista is another story. MS should take a step back and start with more focused code for the next version. Their current development is at a dead end.
My experience with AIX is limited to installing some backup software once in a while. Some things about the AIX / pSeries stack look pretty good though. Any fans out there who can fill us in with some personal experiences? That would be an OSNews article competition entry I would really enjoy reading.
I mean it would have been far more interesting if they had gotten deeper into the firmware boot process, and compared it to how a typical x86 system IPLs.
Far more interesting, if you want to talk AIX internals, would be to talk about how the ODM works and how it is basically very similar to the MS registry, but not nearly as broken.
Even more interesting would be: why?
Linux usually uses GRUB to boot, which I’ve detailed here:
http://www.pixelbeat.org/docs/disk/
Note also that the just released Fedora 9 now uses upstart to start stuff after the kernel has loaded.