“Each distribution has some specific tools to build a custom kernel from the sources. This article is about compiling a kernel on Ubuntu systems. It describes how to build a custom kernel using the latest unmodified kernel sources from www.kernel.org (vanilla kernel) so that you are independent from the kernels supplied by your distribution. It also shows how to patch the kernel sources if you need features that are not in there.”
A vanilla kernel just needs make menuconfig && make dep && make clean && make bzImage && make modules && make modules_install (2.6.x can even omit some of these). Then you copy System.map and vmlinuz to /boot, add a proper entry to lilo.conf (and run lilo) or to grub.conf, and reboot to test.
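Spelled out as a recipe (the copy destinations, the -custom suffix, and the i386 path are the usual conventions, not anything distro-specific):

```shell
cd /usr/src/linux
make menuconfig                    # pick your options
make dep && make clean             # 2.4.x only; 2.6.x skips these
make bzImage modules
sudo make modules_install          # installs under /lib/modules/<version>
sudo cp arch/i386/boot/bzImage /boot/vmlinuz-custom
sudo cp System.map /boot/System.map-custom
# then add an entry to /etc/lilo.conf (and run lilo) or to grub's menu.lst
```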
There’s nothing distro-specific in that procedure, and I’d not like to see customized tools for it. There’s plenty of documentation inside the Documentation directory of the kernel source to be able to compile it; I mean, if you are not capable of reading and understanding it, you’d better install binaries for your distro.
Exactly. Besides, this description adds nothing beyond making a deb package before installing the newly built kernel. Which, while handy sometimes, is generally unnecessary. Btw, for a while now it comes down to just “make menuconfig”, then “make bzImage modules modules_install install” (the last one is up to one’s taste). Nothing hard in it, and nothing distro-specific either.
A vanilla kernel just needs make menuconfig && make dep && make clean && make bzImage && make modules && make modules_install (2.6.x can even omit some of these). Then you copy System.map and vmlinuz to /boot, add a proper entry to lilo.conf (and run lilo) or to grub.conf, and reboot to test.
The Ubuntu way is a script that takes care of the steps you just described. Furthermore, it can create *.deb’s of both the header files and the kernel, so you can share them with friends running Ubuntu on the same arch.
Compiling a new kernel on Ubuntu is easy, and thus less bug-prone for beginners.
1) Download the source file from kernel.org and move it to /usr/src/
*.tar.gz (to unpack: tar -xzvf *.tar.gz in /usr/src)
*.tar.bz2 (to unpack: tar -xjvf *.tar.bz2 in /usr/src)
2) make menuconfig in /usr/src/linux-2.6.*
3) sudo make-kpkg clean
sudo make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers
The result is two *.deb’s: one is the actual kernel and the other is the header files. Both can be installed with dpkg -i *.deb
The script takes care of adding the proper lines to grub/lilo.
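Putting the three steps together as one session (the 2.6.18 version and the -custom suffix are only examples; older versions of kernel-package name the packages kernel-image-* instead of linux-image-*):

```shell
cd /usr/src
sudo tar -xjvf linux-2.6.18.tar.bz2     # or tar -xzvf for a .tar.gz
cd linux-2.6.18
make menuconfig
sudo make-kpkg clean
sudo make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers
# the .debs land one directory up
sudo dpkg -i ../linux-image-2.6.18-custom_*.deb ../linux-headers-2.6.18-custom_*.deb
```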
I find it quite slick and I like the fact that the installed kernel is visible in Synaptic for removal, just as the distributed Ubuntu kernels would be….
Then if you compile a newer kernel at a later point it is easy and certainly less intrusive to uninstall older kernels this way!
I’m getting annoyed with all the articles (mostly elsewhere not here) which are:
“Do XXXX the ubuntu way”
Where the Ubuntu way is identical to the Debian way. Sure, the two distributions are nearly identical, but these articles are very depressing when 99% of them amount to copying an existing article and running “s/Debian/Ubuntu/g”.
It would be more useful if people wrote new and interesting things instead of rehashing these same subjects over and over again. Especially when Ubuntu is mostly designed for “newbie” types who don’t need to compile a kernel. Right?
Sure, it’s good that people want to, if they do, but in that case they could probably find the Debian guide and extrapolate. Ditto for installing codecs, installing Java, etc.
Where the Ubuntu way is identical to the Debian way. Sure, the two distributions are nearly identical, but these articles are very depressing when 99% of them amount to copying an existing article and running “s/Debian/Ubuntu/g”.
The funny thing is, make-kpkg was in Debian before Ubuntu even existed… so yes, that is frustrating.
Why is it depressing? There are *many* ways to do anything in Debian and Linux, and not all of them work as effectively on Ubuntu.
You could try to sift through all those ways, discover which is the least likely to cause problems, and make educated guesses about what should change and what should stay the same; or you could follow a guide that did that research for you.
If you want people to do “the right thing” and not have problems down the road (e.g. during upgrades), you publish a guide.
Without such visible guides, you fall back on instinct. When I first tried Ubuntu (Warty), I used the old “make bzImage” technique that I learnt back in my Slackware days. It’s always worked so I never bothered changing or even investigating any other way. When I saw the kernel guides on the Ubuntu website, I realized how much better it could be and switched.
The only thing missing from this article is a line at the end (or beginning) that says that “this guide should also be applicable to Debian”. Just send an email to the author and it can be done.
“The script takes care of adding the proper lines to grub/lilo”
Ever heard of
make install
This command copies all the necessary files to /boot and updates GRUB/lilo automatically.
Making a deb and sharing it only makes sense if your friend has exactly the same hardware setup (assuming customization) as you do; otherwise the kernel may not even boot. Distros provide general-purpose kernels with all possible hardware options included, so repeating this is a waste of time.
Making a deb and sharing it only makes sense if your friend has exactly the same hardware setup […]
No. If you make a .deb first and install it with dpkg, then you can uninstall it with any package manager later. You don’t have to erase any files or directories yourself (okay, you never have to, you can just leave them where they are).
And you can delete the sources afterwards if you want (of course, only once you KNOW that everything works). You just keep the package, which you can install and uninstall as you like. The package is even smaller than the sources, too.
OK:
my custom kernel has only ext2 and xfs (other filesystems are removed);
you are using reiserfs, and we are using different disk controllers (I only included the one I need).
Now try to install my custom kernel and boot it successfully on your box.
In other words, a kernel .deb meant for distribution would have to contain all possible hardware options, and that is exactly the general-purpose kernel your distro already provides.
Sorry, I quite s*ck at explaining.
I wanted to say that it’s easier to make a .deb and install that than to install directly. Not (just) because it needs fewer commands, but because it’s easier for YOU (not anyone else) to uninstall it again later (without forgetting any files).
Of course a customized kernel from my machine won’t work on other machines with different hardware.
Of course a customized kernel from my machine won’t work on other machines with different hardware.
Unless you leave all the modules enabled and both machines are the same arch (both 32-bit or both 64-bit).
Well, if you had ever installed from vanilla sources:
make bzImage && make modules && make modules_install && make install
Restart into your new kernel (give it a unique name in .config so it won’t overwrite a previous kernel built from the same source).
That is all.
Uninstalling takes a one-line command too,
so this way you can skip all the GRUB editing.
make install
will also carry over any extra flags that were added for the previous kernel’s boot entry
(and you can transfer the bzImage/System.map to another box if you want).
I don’t really see any advantage of the “Ubuntu way” over the non-Ubuntu way.
When someone chooses a distro, they are also tacitly choosing to trust that distro and its engineers to produce a kernel that is secure, capable and which runs well.
So I’m not really sure what the point of all this is. Going to kernel.org for a plain vanilla is throwing away all the work your distro may have put into its own kernel version. Sure, some users will always have special needs and for them a recompile from pristine sources may be the best or only option. But for most folks? I really doubt it, especially as most folks will not be Linux engineers. Compiling from your distro’s own sources sounds better to me.
I guess the article should really be called “The Debian Way”, since this is what is being described. Ubuntu has done lots of good things, but inventing Debian isn’t one of them. I also add “modules_image” to the make line; then I get Nvidia 3D compiled and installed in one pass as well.
The problem with updating the kernel these days is that it might require a udev update as well (udev tends to use parts of the userspace API/ABI that still aren’t set in stone). One should pray that the new udev doesn’t break older kernels or something else (like HAL), as I haven’t noticed its authors caring much about backward compatibility when it stands in their way.
As distributions usually ship a modified version of udev with just critical backports and fixes, the update has to be done manually, and it is possible to lose the functionality from those fixes, possibly breaking something else.
I have implemented the kernel compilation as a function in my admin.sh script as follows:
k() {
    cd /usr/src/linux
    make menuconfig
    make
    sudo cp arch/x86_64/boot/bzImage /boot/linux
    sudo make modules_install
    # additional modules you need for your system
    #cd /usr/src/modules/et131x
    #sudo make
    #sudo make modules_install
    #cd /usr/src/modules/rt2500
    #sudo make
    #sudo make install
    #sudo vmware-config.pl default
}
Mind if I borrow the script? :-)
I notice that your script copies bzImage to /boot/linux literally, rather than giving it a version number.
I’m sure that would simplify updating lilo/grub, but it does mean there is a fair chance of overwriting a working kernel with a broken one if the options were incorrect.
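One way to version the copy step, sketched under the assumption that the tree is recent enough to support “make kernelrelease” and uses the same x86_64 layout as the script above:

```shell
cd /usr/src/linux
kver=$(make -s kernelrelease)        # e.g. 2.6.18-custom, from the configured tree
sudo cp arch/x86_64/boot/bzImage "/boot/linux-$kver"
sudo cp System.map "/boot/System.map-$kver"
# keep the old menu.lst/lilo.conf entry around so you can boot back
# into the previous kernel if this one turns out to be broken
```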
I’ve already thought about the option of versioning the images…
If you compile the kernel for the first time and do not know which parts must be built in statically rather than as modules, the kernel may not boot. But the .config file carries over to a newer kernel version (make oldconfig adapts it). So once you have compiled a working kernel, you can keep the .config and be fairly sure that a newer kernel version will also compile and boot without any problems. I have been using this approach for more than 2 years and have never had trouble.
And as you have mentioned, you do not need to update the menu.lst file in grub (not sure about lilo…); that is one of the reasons I use this approach:
Another reason – I like KISS! Not the band… Keep It Small and Simple.
There’s a tool to change the default shell; it’s called “chsh”. Having the user delete /bin/sh is hardly the best way of doing things. It only takes one slip-up. Know your environment.
In Ubuntu (and in Debian, too) /bin/sh is a symbolic link that points either to /bin/bash or /bin/dash. It doesn’t affect your login shell, just the scripts that call /bin/sh. This behaviour can be toggled via debconf by typing “sudo dpkg-reconfigure dash” in a terminal.
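To see which shell /bin/sh currently resolves to, before or after reconfiguring:

```shell
# /bin/sh is a symlink on Debian/Ubuntu; print its final target
readlink -f /bin/sh
```

This prints something like /bin/bash or /bin/dash, depending on the debconf setting.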
You can also install a debconf frontend called gkdebconf if you want a GUI for viewing and reconfiguring the available debconf settings.
The reason I switched to Ubuntu was to avoid compiling kernels; I’ve done that enough times, both the regular and Debian way. I suspect that some things would break if I were to roll my own kernel.
On any distribution, I almost always find myself building my own kernel in order to get all my hardware working properly. Building the kernel is easy; you don’t need any .deb files or custom scripts. The command “make install” has worked well for quite some time.
What’s still missing is how to compile your own kernel with all the useful kernel patches that Ubuntu applies, so as not to lose features or hardware support. That, however, seems to be a big secret. And that is why we still (as of Edgy) don’t have a full-featured low-latency kernel readily available for Ubuntu; by “full-featured” I mean a kernel with all the Ubuntu patches applied.
For a good example of how custom kernel compilation should be handled, look at AltLinux. They have the base kernel and all kernel patches available as SRPMs so you can apply just the right set of patches for you.
First, compiling the kernel is only for those people who NEED to do it. If you think that compiling your own kernel is the holy grail that will make your PC the ZOMFG H4X0RZ BOXXEN, just stop reading.
Second, it’s for people who know what they’re doing.
Anyway, two VERY good and complete references:
http://www.debian.org/doc/manuals/reference/ch-kernel.en.html
http://newbiedoc.sourceforge.net/system/kernel-pkg.html
make-kpkg really starts to be useful when you are using module-assistant and kernel-patch packages.
But sure, the simplest cases people could take care of with a ~/bin/installkernel script.
(The kernel’s ‘make install’ will call it if it exists, falling back to /sbin/installkernel.)
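A minimal sketch of such a script, shown here as a shell function for illustration (the bootloader step is left as a comment since it depends on your setup; “make install” passes the version, the boot image, the System.map, and optionally an install directory):

```shell
# body of ~/bin/installkernel; "make install" invokes it as:
#   installkernel <version> <bootimage> <System.map> [<install-dir>]
installkernel() {
    ver=$1; img=$2; map=$3; dir=${4:-/boot}
    cp "$img" "$dir/vmlinuz-$ver"
    cp "$map" "$dir/System.map-$ver"
    # finally update the bootloader: run lilo, or update-grub, as appropriate
}
```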