Linked by David Adams on Sat 17th May 2008 03:39 UTC, submitted by IdaAshley
General Unix: Ever wonder what makes a computer tick or how a UNIX server does what it does? Discover what happens when you push the power button on your computer. This article discusses the different boot types, managing the AIX bootlist, and the AIX boot sequence. After reading this article, you will better understand what exactly happens when your server starts.
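
For a flavour of what the article covers, the bootlist command is what AIX admins use to view and set the boot device order. A hedged example (device names are purely illustrative):

bootlist -m normal -o              # display the current normal-mode boot list
bootlist -m normal hdisk0 hdisk1   # boot from hdisk0, falling back to hdisk1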
Comment by sonic2000gr
by sonic2000gr on Sat 17th May 2008 06:26 UTC
sonic2000gr
Member since:
2007-05-20

Not very different from many other *NIX systems, going through boot loader, kernel, and init, and using runlevels and an inittab file.
By the way, for everyone wishing to learn this stuff in more detail and for different systems (Linux, FreeBSD, HP-UX), may I suggest the "Unix System Administration Handbook" (Nemeth et al.). It is an excellent read, a real eye-opener. Knowing how your system boots, especially how the init scripts work, will make you much more confident in using and configuring it.

Reply Score: 5

RE: Comment by sonic2000gr
by Doc Pain on Sat 17th May 2008 23:52 UTC in reply to "Comment by sonic2000gr"
Doc Pain Member since:
2006-10-08

Interesting article. I like this complicated technical stuff. :-) Although I don't have much experience with AIX (due to OS/390 running on the AS/400), many things mentioned in the article are understandable, obvious, logical and to be expected when you're coming from a UNIX background. So no matter which particular kind of UNIX or Linux you're using, most things look familiar.

Not very different from many other *NIX systems, going through boot loader, kernel and init and using runlevels and an inittab file.


The alternative to runlevels is the use of an rc script and rc.d/ entries, as is the case in FreeBSD. Refer to "man boot", "man loader", "man init" and "man rc" for further reading.
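
To illustrate (just a sketch following the pattern described in rc(8) and rc.subr(8); the service name "exampled" is made up), a minimal rc.d script looks roughly like this:

#!/bin/sh
#
# PROVIDE: exampled
# REQUIRE: NETWORKING
#
. /etc/rc.subr

name="exampled"                        # service name
rcvar="exampled_enable"                # rc.conf variable that enables it
command="/usr/local/sbin/exampled"     # daemon to run

load_rc_config $name
run_rc_command "$1"

Put exampled_enable="YES" into /etc/rc.conf and init's rc script will start it at boot; no runlevels involved.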

By the way for everyone wishing to learn this stuff in more details and for different systems (Linux, FreeBSD, HP-UX) may I suggest the "Unix System Administration Handbook" (Nemeth et al). It is an excellent read, a real eye opener.


Another interesting read: "The Magic Garden Explained" by Goodheart and Cox, pp. 48, 273.

Knowing how your system boots, especially how the init scripts work will make you much more confident in using it and configuring it.


I already hear someone screaming: "But the PC does it on its own! I don't want to know anything!" :-)

Reply Score: 2

RE[2]: Comment by sonic2000gr
by kev009 on Sun 18th May 2008 00:49 UTC in reply to "RE: Comment by sonic2000gr"
kev009 Member since:
2006-11-30

You probably don't have much experience with OS/390 either, considering it runs on S/390 (now System z)...

Reply Score: 1

RE[3]: Comment by sonic2000gr
by Doc Pain on Sun 18th May 2008 10:05 UTC in reply to "RE[2]: Comment by sonic2000gr"
Doc Pain Member since:
2006-10-08

Long time ago, brain not sufficiently functioning... :-)

//SYSIN DD *

You probably don't have much experience with OS/390 either, considering it runs on S/390 (now System z)...


Of course you're right. It was OS/400 on the AS/400 (the older, beige ones), doing Cobol, Fortran, and of course JCL. By the way, on today's System z systems, you'll find z/OS, too (a system which I had a little time to play with).

Off topic, ABEND. =^_^=

/*

Reply Score: 2

RE[2]: Comment by sonic2000gr
by sonic2000gr on Sun 18th May 2008 11:57 UTC in reply to "RE: Comment by sonic2000gr"
sonic2000gr Member since:
2007-05-20


The alternative to the runlevels is the use of an rc script and the rc.d/ entries, such as it is the case in the FreeBSD OS. Refer to "man boot", "man loader", "man init" and "man rc" for further education.


Hehe, I think we are two of the most prominent BSDers in this site ;)

I already hear someone screaming: "But the PC does it on its own! I don't want to know anything!" :-)


We all know which OS users cry out like that ;)

Reply Score: 2

RE[2]: Comment by sonic2000gr
by parentaladvisory on Sun 18th May 2008 19:37 UTC in reply to "RE: Comment by sonic2000gr"
parentaladvisory Member since:
2006-12-18

Doc Pain wrote: "The alternative to the runlevels is the use of an rc script and the rc.d/ entries, such as it is the case in the FreeBSD OS. Refer to "man boot", "man loader", "man init" and "man rc" for further education."

I have seen these rcX directories on some Linux distributions, and my Debian installation has an init.d/ and several rcX.d/ directories in /etc.
In these rc0,1,2,3,4,5,6.d/ directories are symlinks to scripts in /etc/init.d/, and it seems to me at least that this system uses both "rc.d/ entries" and runlevels, so I don't really get the distinction between rc directories and runlevels...
Care to explain? ;)

Edited 2008-05-18 19:38 UTC

Reply Score: 1

RE[3]: Comment by sonic2000gr
by sonic2000gr on Sun 18th May 2008 20:52 UTC in reply to "RE[2]: Comment by sonic2000gr"
sonic2000gr Member since:
2007-05-20

I have seen these rcX directories on some Linux distributions, and my Debian installation has an init.d/ and several rcX.d/ directories in /etc.
In these rc0,1,2,3,4,5,6.d/ directories are symlinks to scripts in /etc/init.d/, and it seems to me at least that this system uses both "rc.d/ entries" and runlevels, so I don't really get the distinction between rc directories and runlevels...
Care to explain? ;)


Have a look at your /etc/inittab file. You will find a line that says something like:

id:2:initdefault:

This means that when your system starts normally, it enters runlevel 2. Now move a few lines down, and you will see this:

l2:2:wait:/etc/init.d/rc 2

This basically means the rc script will execute the scripts in /etc/rc2.d

Now, have a look at the scripts in /etc/rc2.d:

An example:

S20ssh -> ../init.d/ssh

This starts the ssh server. Obviously it is just a link to a script in init.d, but the rc script reads it from /etc/rc2.d. The name and number are significant too: the 20 signifies the order in which the script runs. For example, this:

S19nis -> ../init.d/nis

executes before ssh. The "S" in the name means the rc script will call this script with a "start" argument. Essentially, S20ssh is like writing:

/etc/init.d/ssh start

If it had a "K" instead of an "S" it would be called with a "stop" argument.

Have a look at your /etc/rc1.d scripts. These are called when you switch to single user mode (runlevel 1). You will see that quite a few services are stopped (or "K"illed) when entering runlevel 1:

K80nfs-kernel-server -> ../init.d/nfs-kernel-server

This calls /etc/init.d/nfs-kernel-server stop, so NFS sharing is stopped when you enter single user mode.

There are also two "special" (or transient) runlevels, namely 0 (for shutdown) and 6 (for reboot). Have a look at the scripts there too.
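
If you want to watch this happen, you can also switch runlevels by hand (careful on a machine you care about); roughly:

runlevel     # prints previous and current runlevel, e.g. "N 2"
telinit 1    # drop to single user mode
telinit 2    # and back to the default runlevel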

All in all, it is not a difficult system ;)

Reply Score: 2

Wait...
by DoctorPepper on Sat 17th May 2008 13:25 UTC
DoctorPepper
Member since:
2005-07-12

This "boot" thing you're talking about... it happens more than once?

Reply Score: 2

RE: Wait...
by theTSF on Sat 17th May 2008 14:16 UTC in reply to "Wait..."
theTSF Member since:
2005-09-27

Yes, after a prolonged power outage when the UPSes are about to die. That normally happens once every two or three years. In theory, if you plan your upgrade cycles around power outages, you can probably get away with rebooting your OS only then. However, most good administrators will reboot the server occasionally, usually after making some changes, to make sure settings stick as well as to update their startup documentation, if there is any.

Reply Score: 1

RE[2]: Wait...
by stestagg on Sat 17th May 2008 16:03 UTC in reply to "RE: Wait..."
stestagg Member since:
2006-06-03

Yes, after a prolonged power outage when the UPSes are about to die.


That happens after about 5 minutes in our datacenter. However, after 30s the generators are fully running, and if the datacenter managers don't keep the generators running, we don't have to pay them ;)

Reply Score: 2

Comment by Kroc
by Kroc on Sat 17th May 2008 18:17 UTC
Kroc
Member since:
2005-11-10

What I'd like to know is what an OS does when it shuts down??
Why does it even touch the disk at all? I really don't understand why it takes so long. A kill signal should be sent to all processes, and then the power cut. What is the hard disk needed for? If there's stuff still to be saved, why wasn't it saved when it was changed to begin with (like preferences &c.)?

Whatever happened to the Amiga way of shutting down? Why isn't that possible now ;)

Reply Score: 6

RE: Comment by Kroc
by sonic2000gr on Sat 17th May 2008 19:34 UTC in reply to "Comment by Kroc"
sonic2000gr Member since:
2007-05-20

What I'd like to know is what an OS does when it shuts down??
Why does it even touch the disk at all?


An (oversimplified) explanation is this:

Most of the processes you mention maintain data files on the filesystem, constantly reading from and writing to it. An obvious example is a database system. The OS itself does not immediately commit all writes to disk: it prefers to keep some of them in memory and flush them to disk at the best opportunity (e.g. when load is low). This is necessary, since disk writes are costly (hard disks may be very fast these days, but they are still a lot slower than main memory). Other things to consider are processes which are swapped out to disk, files that are cached, and so on. At any point in time, the filesystem has open files, with data either waiting in memory to be written or currently being written. When an application knows it will access a file again soon, it will not close and reopen it (that is costly as well).
When you shut down, every process must be stopped, and all the data has to be actually written to the disk platters. Only then is the filesystem consistent and ready to be unmounted. Depending on how many apps are running and the amount of writes still pending, this may take some time.
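
Very roughly, a clean shutdown boils down to something like this (a simplified sketch, not the actual scripts any system ships):

wall "system going down"   # warn logged-in users
kill -TERM -1              # ask every process to exit cleanly
sleep 5
kill -KILL -1              # force whatever is left
sync                       # flush pending writes to disk
umount -a                  # unmount the filesystems
halt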
Having said that, I rarely turn off my Linux/BSD systems, so this does not affect me ;)

Edited 2008-05-17 19:36 UTC

Reply Score: 2

RE[2]: Comment by Kroc
by Kroc on Sat 17th May 2008 20:12 UTC in reply to "RE: Comment by Kroc"
Kroc Member since:
2005-11-10

In that case Vista must decide to rewrite the entire hard disk just for good luck, given the amount of churning and how long it takes. I've seen Vista laptops take three to four whole minutes to shut down! That isn't dumping buffers, that's earnestly trying to hit the MTBF.

Reply Score: 6

RE[3]: Comment by Kroc
by sonic2000gr on Sat 17th May 2008 20:23 UTC in reply to "RE[2]: Comment by Kroc"
sonic2000gr Member since:
2007-05-20

Hehe, I haven't experienced delays that long on my Vista laptop yet. However, there is a solution: whenever possible, do not shut down, just hibernate.

Reply Score: 2

RE[3]: Comment by Kroc
by Googol on Sun 18th May 2008 20:46 UTC in reply to "RE[2]: Comment by Kroc"
Googol Member since:
2006-11-24

My Vista beats 3 minutes hands down! ;)

No, really, it has shut down normally since SP1.

Reply Score: 2

RE[2]: Comment by Kroc
by Doc Pain on Sun 18th May 2008 10:12 UTC in reply to "RE: Comment by Kroc"
Doc Pain Member since:
2006-10-08

Regarding the question

What I'd like to know is what an OS does when it shuts down??
Why does it even touch the disk at all?


you gave a good explanation. I'd like to add the following:

Most users coming from a PC background do not see that UNIX is meant as a multi-user, multi-process operating system. So it may well be that many users are working on the same machine when it shuts down. The OS usually gives shutdown warnings, giving users time to finish their work. Then, specific signals are used to make the running applications do their own "shutdown stuff", e.g. saving unsaved files to disk so they don't get lost even if the user forgot to save them. After this, the applications are requested to terminate themselves.

As you mentioned, data is usually written asynchronously, so at shutdown the OS waits until all buffers are flushed.
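
For example, a scheduled shutdown that warns everyone logged in looks something like this (exact options vary a bit between systems):

shutdown -h +10 "Going down for maintenance in 10 minutes"

The message is broadcast to every terminal before the machine actually halts.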

Having said that, I rarely turn off my Linux/BSD systems, so this does not affect me ;)


Understandable. :-)

Reply Score: 3

RE[3]: Comment by Kroc
by Henrik on Sun 18th May 2008 13:40 UTC in reply to "RE[2]: Comment by Kroc"
Henrik Member since:
2006-01-03

As I see it, these people complaining of long startup/shutdown times have every right in the world to be disappointed with "modern" OSes in this respect - taking into account that (their) earlier, very simple machines like the Amiga, C64 and the ZX Spectrum did this in one second or so, literally. Also, lots of computer-like appliances such as mobiles, MP3 players, PDAs etc. do the same - and this is true regardless of the technical reasons!

As a 42-year-old man coming from a diverse electronics, PC, mathematics, compiler design, and datalogi ("computer science") background, I certainly see that "UNIX is meant as a multi-user multi-process operating system". In fact, I have tried to figure out - for the last quarter of a century or so - why an old time-sharing system like UNIX would be regarded as a good basis for something resembling a personal computer.

UNIX was originally designed back when CPUs were almost 100 times as expensive as today and therefore had to be shared among many users. Computers were also at least 1000 times as slow and had perhaps 1/10000 the amount of memory, which originally restricted operations to primitive serial character processing, very far from today's graphical interfaces. As most people here know, this has very strongly affected the basic architecture of UNIX, and this background does not fit today's and tomorrow's demands very naturally (with many CPUs per user instead of the other way round).

UNIX/Linux should probably be redesigned almost from the ground up, or simply replaced, when it comes to use in personal computers (servers are another matter).

Moreover, today, the habit of not turning a PC off (or even hibernating it) in order to hide the ridiculously looong boot times is simply immoral, as power draw from computers has emerged as a major environmental problem - just think if we did the same with our TVs.

Edited 2008-05-18 13:43 UTC

Reply Score: 2

RE[4]: Comment by Kroc
by sonic2000gr on Sun 18th May 2008 15:52 UTC in reply to "RE[3]: Comment by Kroc"
sonic2000gr Member since:
2007-05-20

... - taking into account that (their) earlier very simple machines like the Amiga, C64 and the ZX Spectrum did this in one second or so, literally.


I come from this era too. Started with a TI-99/4A and ended with an Atari 1040STE. My next machine was a PC running Windows 95. I never understood why it needed such a long boot time.

In fact, I have tried to figure out - for the last quarter of a century or so - why an old time-sharing system like UNIX would be regarded a good basis for something resembling a personal computer.


It is one that has been tested in all kinds of environments and found to work ;)

UNIX/Linux should probably be redesigned almost from the ground up, or simply replaced, when in comes to use in personal computers (servers are another matter).


The other systems should be redesigned so they gain the stronger points of UNIX. I see too many people complaining about Vista, and almost no one complaining about Mac OS X. What UNIX needs is probably to lose its geeky image (this will be easier when there are GUI tools for every possible setting). In other respects, the motto "Whoever does not understand UNIX is doomed to reinvent it... poorly" seems to stand.

Moreover, today, the habit of not turning a PC off (or even "hibernate") in order to hide the ridiculously looong boot times is simply immoral, as power draw from computers has emerged as a major environmental problem - think about if we did the same with our TVs.


Sure, I agree with you. Though the reason I don't turn off my machines has nothing to do with boot times (they boot quite fast anyway). I am running two home servers that have to be online all the time. One is running Debian and hosting a site for my students. The other is running FreeBSD and hosts files for the Greek documentation project. Unless there is some other work or test in progress, I turn off my desktop at night.

Reply Score: 2

RE[5]: Comment by Kroc
by Henrik on Sun 18th May 2008 17:42 UTC in reply to "RE[4]: Comment by Kroc"
Henrik Member since:
2006-01-03

I agree with most of your views, especially regarding slow-booting Windows (and Linux/KDE), but your statement

In other respects, it seems the motto "Whoever does not understand UNIX is doomed to reinvent it... poorly" seems to stand.

seems more theological than anything else to me. Why should a plain personal computer user have to bother with understanding UNIX? That seems bizarre to me.

Also, why these complicated installation procedures (again, on personal computers)? Why not simply design executable files so that they can be run directly and function both as the application itself and as a configurer/"installer" that (in most cases) creates or modifies only a few local config files - at any point, i.e. when needed - much like some applications in DOS did, for example.

It would be very simple and also so much more inherently self-contained (or object-oriented, if you like) than the scattering of files and information across various places that both Linux and Windows installation procedures normally do. Why should simplicity and elegance be so darn hard to achieve? (Again, on a single-user personal computer.)

Sorry for my perhaps slightly irritated tone; don't take it personally, it's only 20 years of frustration taking its toll ;)

Reply Score: 1

RE[6]: Comment by Kroc
by sonic2000gr on Sun 18th May 2008 18:15 UTC in reply to "RE[5]: Comment by Kroc"
sonic2000gr Member since:
2007-05-20

seem more theological than anything else to me. Why should a plain personal computer user have to bother with understanding UNIX? That seems bizarre to me.


No, sorry, this was not what I meant. An end user who is not the kind of geek many of us here are should not have to understand UNIX the way this statement implies. People who write OSes should, though, and they should try to apply its stronger points to their OS (meaning mostly the under-the-hood design). The desktop OS of the future (if such a thing exists) does not have to be Windows or UNIX, but should merge the best of both worlds, both in the GUI and in the internals.

Also, why these complicated installation procedures (again, in personal computers), why not simply design executable files so that they can be run directly and function as both the application itself and a configurer/"installer" that (in most cases) creates/modifies only a few local config-files. This, at any point, i.e. when needed, much like some applications in DOS did for example.


I too would love to see all "dependency hell" and "DLL hell" go away. However, the drawback (with current technology) would be statically linked programs with lots of duplicated code. "So what, memory is cheap", you may say, but consider how many apps you would have to update when a vulnerability is found in code contained in all of them. There is a price for everything.

Sorry for my, perhaps, slightly irritated tone, don't take it personal, it's only 20 years of frustration taking its toll ;)


Hehe, don't worry about it, I can sympathize with you. I've gone through many systems over the years. There is no such thing as a perfect OS; at least I feel I have a lot more control now that I mainly use Linux/BSD.

Reply Score: 2

RE[7]: Comment by Kroc
by Henrik on Sun 18th May 2008 20:18 UTC in reply to "RE[6]: Comment by Kroc"
Henrik Member since:
2006-01-03

I too would love to see all "dependency hell" and "DLL hell" go away. However the drawback from this (under current technology terms) would be statically linked programs, with lots of duplicated code. So what, memory is cheap you may say, but consider how many apps you would have to update when a vulnerability is found in code contained in all of them. There is a price for everything.

I fully agree with you, and moreover, without some dynamic linking, more code would be duplicated in RAM and in the caches, which, of course, are much more limited than disk (or flash) space. However, I feel that a dynamic library placed in its own naturally named subdirectory would be easy to find even for simple applications written the way I suggested above, i.e. without demanding special package handlers or installers (of course, some conventions are needed to avoid dependence on user input or searching).

Several versions (bug fixes and/or variants) of a certain library could be put in separate subdirectories placed under the same "umbrella" directory (as disk space is "virtually unlimited" today), enabling applications to choose a version based on either date stamps or user selection (the user may prefer an older version of a GUI for instance).

I misread the part on reinventing UNIX - sorry for that - but I still feel that the UNIX way is certainly not the only way. For instance, as far as I understand, VMS and WinNT (both by Dave Cutler) seem technically just as sound as UNIX. All the XP and Vista add-ons by MS are another matter, as is what one thinks of MS business practices etc. As everybody here knows, there also exist other systems which are not plain copies of UNIX.

Reply Score: 1

AIX
by jwwf on Sat 17th May 2008 19:04 UTC
jwwf
Member since:
2006-01-19

My experience with AIX is limited to installing some backup software once in a while. Some things about the AIX / pSeries stack look pretty good though. Any fans out there who can fill us in with some personal experiences? That would be an OSNews article competition entry I would really enjoy reading.

Reply Score: 2

pointless
by xophere on Sun 18th May 2008 22:05 UTC
xophere
Member since:
2006-07-19

I mean, it would have been far more interesting if they had gotten deeper into the firmware boot process and compared it to how a typical x86 system IPLs. ;)

Far more interesting, if you want to talk AIX internals, is how the ODM works and how it is basically very similar to the MS registry, but not nearly as broken.
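
For the curious, the ODM can be poked at from the command line, e.g. (device names obviously depend on the machine):

odmget -q "name=hdisk0" CuDv   # dump the customized device object for hdisk0
lsattr -El hdisk0              # the friendlier front end to much of the same data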

Even more interesting would be: why?

Reply Score: 1

grub
by pixelbeat on Mon 19th May 2008 09:29 UTC
pixelbeat
Member since:
2008-05-06

Linux usually uses GRUB to boot, which I've detailed here:
http://www.pixelbeat.org/docs/disk/
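
For reference, a typical GRUB (legacy) menu.lst entry is only a few lines (kernel version and device names here are just examples):

title   Linux
root    (hd0,0)
kernel  /vmlinuz-2.6.24 root=/dev/sda1 ro quiet
initrd  /initrd.img-2.6.24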

Note also that the just-released Fedora 9 now uses Upstart to start services after the kernel has loaded.

Reply Score: 1