Linked by diegocg on Thu 7th Nov 2013 22:19 UTC
Linux kernel 3.12 has been released. This release includes support for offline deduplication in Btrfs, automatic GPU switching in laptops with dual GPUs, a performance boost for AMD Radeon graphics, better RAID-5 multicore performance, improved handling of out-of-memory situations, improvements to the timerless multitasking mode, separate modesetting and rendering device nodes in the graphics DRM layer, improved locking performance for virtualized guests, XFS directory recursion scalability improvements, new drivers and many small improvements. Here's the full list of changes.
OOM Killer
by Alfman on Fri 8th Nov 2013 04:56 UTC
Alfman
Member since:
2011-01-28

Thom, you missed what I'd consider to be one of the bigger highlights: changing the out of memory killer!

https://lwn.net/Articles/562211/

As was described in this June 2013 article, the kernel's out-of-memory (OOM) killer has some inherent reliability problems. A process may have called deeply into the kernel by the time it encounters an OOM condition; when that happens, it is put on hold while the kernel tries to make some memory available. That process may be holding no end of locks, possibly including locks needed to enable a process hit by the OOM killer to exit and release its memory; that means that deadlocks are relatively likely once the system goes into an OOM state.



Linux has never been especially stable under out-of-memory conditions; I've always considered the out-of-memory killer to be a major hack papering over a fundamental problem.

Following a bunch of cleanup work, these patches make two fundamental changes to how OOM conditions are handled in the kernel. The first of those is perhaps the most visible: it causes the kernel to avoid calling the OOM killer altogether for most memory allocation failures. In particular, if the allocation is being made in response to a system call, the kernel will just cause the system call to fail with an ENOMEM error rather than trying to find a process to kill.


About time! The previous behavior of returning successful mallocs only to invoke the Linux OOM killer later never made any sense. Killing processes heuristically, without so much as asking the user first, is ugly at best and outright reckless at worst. What will Linux kill? How about a document, web browser, X11 session, etc.? I've learned to significantly over-provision RAM to avoid the OOM killer, but I've never been pleased about its design, nor even its existence.

That may cause system call failures to happen more often and in different contexts than they used to. But, naturally, that will not be a problem since all user-space code diligently checks the return status of every system call and responds with well-tested error-handling code when things go wrong.


And that's how it should have always been: by returning errors to user space, we let processes handle them gracefully.

void *buffer = malloc(20 * 1024);
if (!buffer) { /* with overcommit, this branch effectively never runs */ }

http://opsmonkey.blogspot.com/2007/01/linux-memory-overcommit.html

Of course this is a problem in the first place because Linux was intentionally designed to overcommit memory. I think in practice the harms outweigh the benefits. Hopefully the changes here in 3.12 will make Linux rock solid even under out-of-memory conditions with overcommit turned off (no more killing of innocent processes).

Edited 2013-11-08 05:05 UTC

Reply Score: 23

RE: OOM Killer
by Savior on Fri 8th Nov 2013 09:04 UTC in reply to "OOM Killer"
Savior Member since:
2006-09-02

What will linux kill? How about a document, web browser, X11 session, etc.


In my experience, it's almost always the last one (X11)... only seldom have I had the luck of it being something harmless, like Firefox. If this update fixes this problem, I'll be very happy.

Reply Score: 3

RE[2]: OOM Killer
by ajs124 on Fri 8th Nov 2013 13:30 UTC in reply to "RE: OOM Killer"
ajs124 Member since:
2013-08-28

Huh, it worked pretty reliably for me and killed Firefox or its plugin container most of the time.

Reply Score: 2

RE: OOM Killer
by Vanders on Fri 8th Nov 2013 09:33 UTC in reply to "OOM Killer"
Vanders Member since:
2005-07-06

What will linux kill? How about a document, web browser, X11 session, etc.

From experience it's usually sshd, making it as hard as possible to actually get to the machine and clean it up.

Reply Score: 3

RE: OOM Killer
by jessesmith on Fri 8th Nov 2013 16:04 UTC in reply to "OOM Killer"
jessesmith Member since:
2010-03-11

This is excellent, I always considered the default behaviour dishonest. Telling a process it can have memory which isn't available and then killing a process (seemingly at random) always struck me as a terrible idea.

Reply Score: 4

RE: OOM Killer
by Kebabbert on Mon 11th Nov 2013 20:19 UTC in reply to "OOM Killer"
Kebabbert Member since:
2007-07-27

Thom, you missed what I'd consider to be one of the bigger highlights: changing the out of memory killer!

https://lwn.net/Articles/562211/
"As was described in this June 2013 article, the kernel's out-of-memory (OOM) killer has some inherent reliability problems. A process may have called deeply into the kernel by the time it encounters an OOM condition; when that happens, it is put on hold while the kernel tries to make some memory available. That process may be holding no end of locks, possibly including locks needed to enable a process hit by the OOM killer to exit and release its memory; that means that deadlocks are relatively likely once the system goes into an OOM state.



Linux has never been especially stable under out-of-memory conditions; I've always considered the out-of-memory killer to be a major hack papering over a fundamental problem.
"
O_o

How many other design problems does Linux have that have not been fixed by the year 2013? This RAM overcommit OOM thing is horrendous, and if you read the article, Linux still has the OOM killing going on, just less often. No wonder people say Linux is unstable...

Reply Score: 2

RE[2]: OOM Killer
by Alfman on Tue 12th Nov 2013 06:31 UTC in reply to "RE: OOM Killer"
Alfman Member since:
2011-01-28

Kebabbert,

"No wonder people say Linux is unstable..."

Ugh, there's a difference between criticizing and trolling, and you just crossed it ;)


"This RAM overcommit OOM thing is horrendous, and if you read the article, Linux still has the OOM killing going on, just less often."

Well, this is a fair point, and to the best of my knowledge, these changes are supposed to completely clean up the kernel's own handling of OOM conditions by allowing syscalls to return 'no memory' errors instead of blocking while the OOM killer runs.

Note this does not change user-space overcommitting, which can already be disabled separately (sysctl vm.overcommit_memory=2). As long as syscalls inside the kernel could still trigger the OOM killer because they had no way to return 'no memory' errors, there was still a problem. Now that this is fixed, I would expect the OOM killer to never be invoked.

Also, to really have a full grasp of the situation, we need to understand the background behind overcommit to begin with, and that has a lot to do with one particular function: fork. Unix multi-process programs work by cloning a process into children. At the moment the child process is spawned, 100% of its memory pages are identical to the parent's. So for fork it makes sense to share the memory pages instead of copying them. Depending on what the child and parent actually do, they may or may not update these shared pages as they execute. If and when they do, the OS performs a 'copy-on-write' operation, allocating a new page to hold the modified copy.

The OS can now use the pages that would have been copied to child processes for other purposes. Using extra pages for caching is *good*, since they can be dropped at any time should they be needed elsewhere. Using them for spawning more processes or additional resources at run time is *bad* (IMHO), because now they cannot be recovered without taking back resources already in use elsewhere (aka the OOM killer).

Unfortunately, there are cases where overcommitment is unavoidable under fork semantics. Consider a large app (e.g. a database consuming 1GB of RAM) wanting to spawn a child process to perform a simple parallel task. This child process might only need <1MB of RAM, but it's nevertheless sharing ~999MB of memory with its parent for as long as it executes. NOT overcommitting implies the two processes combined need 2GB instead of 1.001GB. And if 2GB is not available, the parent process will not be able to spawn any children for any reason.

So, overcommitting is good for fork, which is why Linux does it. This highlights the real underlying problem: nearly everybody thinks overcommit is bad, but the truth is many are still huge proponents of fork().

Reply Score: 2

RE[3]: OOM Killer
by Alfman on Tue 12th Nov 2013 19:08 UTC in reply to "RE[2]: OOM Killer"
Alfman Member since:
2011-01-28

To the extent that fork still gets use and its cons aren't going away, I'd be very interested in seeing a simple compromise: making overcommit *explicit*.

Such as: no application's memory would ever be overcommitted unless it explicitly requests it or its process is configured for it by the administrator. By default, when a process requests memory from the kernel, the kernel will keep that commitment (anything else is bad practice). But in cases where overcommitment is still desired or needed, it can be done explicitly.

So, for example, when a 1GB database calls fork, it could tell the kernel that overcommitting the 1GB child process would be preferable to being denied with a 'no memory' error by default. If the overcommitment turns into an OOM condition, then only those processes that elected overcommit would be killed, which seems pretty fair.

This could be implemented pretty easily in Linux by adding a new flag to the existing clone syscall. Anyone else have thoughts about this?

Reply Score: 2

Btrfs dedup
by WereCatf on Fri 8th Nov 2013 14:28 UTC
WereCatf
Member since:
2006-02-15

I'm interested in Btrfs's newly-gained support for data deduplication, but after taking a look at this app called "bedup," which is supposed to do the actual heavy lifting, I'm left confused. The application only looks for identical files, it doesn't look for identical blocks of data, so does this mean Btrfs's dedup is also limited to file-level deduplication or is it just a limitation of the app in question?

Reply Score: 3

RE: Btrfs dedup
by Bill Shooter of Bul on Fri 8th Nov 2013 16:42 UTC in reply to "Btrfs dedup"
Bill Shooter of Bul Member since:
2006-07-14

I think it's just the bedup app, but don't quote me on that. I think it's just a case of crawling before walking. Files are easy to examine for dupes, and that's fairly useful for a lot of people like myself. Although I may wait a few kernel revisions for any bugs to shake out before using it on anything vital.

Reply Score: 3

RE: Btrfs dedup
by Laurence on Fri 8th Nov 2013 17:50 UTC in reply to "Btrfs dedup"
Laurence Member since:
2007-03-26

If you want deduping then you're better off with ZFS - not only does it work on blocks rather than whole files, it also works live.

In fact on the whole I've been unimpressed with Btrfs during my recent testing.

Reply Score: 2

RE[2]: Btrfs dedup
by Alfman on Fri 8th Nov 2013 18:57 UTC in reply to "RE: Btrfs dedup"
Alfman Member since:
2011-01-28

Laurence,
"In fact on the whole I've been unimpressed with Btrfs during my recent testing."

I've been meaning to do this myself, I'm very interested in what you tested and the results of your tests, if you don't mind elaborating.

I've maintained nonstandard kernels in the past, but I'm trying to get away from that and stick to mainline as much as possible, so ZFS is less appealing for that reason even though it seems to be robust and mature.

I have a very strong desire to install a copy-on-write FS on production servers to replace an rsync --link-dest solution I have deployed for generational backups. It's pretty clever and quite efficient; however, one major problem with this approach is that files cannot be moved, otherwise they fail to link. It's possible to relink dups after the fact using 'hardlink' or 'fdupes', but it's not ideal and raises other concerns about unintentionally hardlinked files getting restored.


One question I do have is whether it's possible to copy one Btrfs FS in deduped form from one host to another without having to rededup it anywhere in the process? I'll research it eventually, but maybe someone here knows...?

Reply Score: 2

RE[3]: Btrfs dedup
by jessesmith on Fri 8th Nov 2013 21:12 UTC in reply to "RE[2]: Btrfs dedup"
jessesmith Member since:
2010-03-11

I have been running ZFS on Linux boxes with standard kernels for over a year. Using the ZFS kernel module (or ZFS-FUSE) does not require a custom kernel.

Reply Score: 3

RE[4]: Btrfs dedup
by Alfman on Fri 8th Nov 2013 23:12 UTC in reply to "RE[3]: Btrfs dedup"
Alfman Member since:
2011-01-28

jessesmith,

"I have been running ZFS on Linux boxes with standard kernels for over a year. Using the ZFS kernel module (or ZFS-FUSE) does not require a custom kernel."

Thanks for the suggestion. The problem with FUSE is that it doesn't offer good performance. In some benchmarks, ZFS-FUSE barely registers on the chart.

www.phoronix.com/scan.php?page=article&item=zfs_fuse_performance

This link is somewhat dated, but it's nevertheless been my experience that ALL FUSE filesystems suffer from excessively high CPU utilization and low performance, particularly under high concurrency, and the benchmarks here bear that out. I find it unlikely that a recent benchmark would be any kinder to ZFS-FUSE. I may be convinced to go with ZFS, but if I do it will definitely be with a patched kernel. That's not to say ZFS-FUSE isn't useful for its features; it would just defeat the point of my having a high-performance RAID array.

Btrfs is also compared in the link above for anyone interested (I wouldn't be surprised if it has improved since 2010).

Edited 2013-11-08 23:13 UTC

Reply Score: 2

RE[5]: Btrfs dedup
by phoenix on Sat 9th Nov 2013 00:09 UTC in reply to "RE[4]: Btrfs dedup"
phoenix Member since:
2005-07-11

He said to use the ZFSonLinux kernel modules, not the FUSE module. Several distros now include ZFSonLinux packages in their repos, and more are available on their website. No custom kernel required, and no loss of performance due to FUSE.

Reply Score: 4

RE[6]: Btrfs dedup
by Alfman on Sat 9th Nov 2013 07:17 UTC in reply to "RE[5]: Btrfs dedup"
Alfman Member since:
2011-01-28

phoenix,

Thanks for pointing that out; somehow I focused only on ZFS-FUSE. I do not see ZFS kernel modules being supported natively by Debian, Mint, CentOS, and I presume Ubuntu; do any distros carry it natively?

I did see that zfsonlinux.org has modules for many popular distros (i.e. deb for Debian, rpm for CentOS), but that's still technically the kind of 3rd-party code/binary modules that I was trying to avoid. For those who don't mind this, they can (and should) go this route. However, let me just mention my own reasons for hesitating with this solution:

1. I believe I may be in violation of the CDDL & GPL licenses if I redistributed a kernel with ZFS to my clients.

http://arstechnica.com/information-technology/2010/06/uptake-of-nat.....

It's nearly inconceivable my clients would actually notice, much less care. I really have no idea if the copyright holders would care in the slim chance they got wind of it (i.e. Oracle or the Linux devs). Maybe they wouldn't, but it'd still bug me if *my* business was in violation, you know?


2. My previous experience with 3rd-party modules (source & binaries) is that every single kernel update has the potential to break the 3rd-party kernel code. This means that module developers must keep on top of each and every mainline release (if distributing source), and each and every distro release (if distributing binaries), in order to not leave a gap in supported kernels. It wouldn't be the first time I've had to get my hands into the code to fix a temporary incompatibility with new kernels.

In fairness to zfsonlinux, I'm NOT pointing a finger at them; it's just an inherent problem with the kernel lacking a stable API/ABI. I may very well go with ZFS to gain its functionality, but honestly using it as a root file system would make me nervous every time I upgraded the kernel on a server. I don't have this nervousness with, say, EXT4, so I'd probably keep root as an ext4 partition until ZFS is in mainline or the ZFS kernel modules are officially supported by the distro.

Edited 2013-11-09 07:34 UTC

Reply Score: 3

RE[7]: Btrfs dedup
by jessesmith on Sat 9th Nov 2013 16:08 UTC in reply to "RE[6]: Btrfs dedup"
jessesmith Member since:
2010-03-11

It is not a violation of the license if you distribute the ZFS module. The ZFS on Linux project has a nice FAQ section explaining this. If you merged the ZFS and kernel packages/source then it might be a violation, but distributing them separately is not.

As for breaking across updates, this is unlikely, as the API does not change across security updates; it'll only be an issue across major upgrades. You're even safer if you use the PPAs for Ubuntu-based distributions (including Mint), as the ZFS module is built specifically for your distribution and does not rely on the third-party upstream project.

Reply Score: 3

RE[2]: Btrfs dedup
by diegocg on Fri 8th Nov 2013 20:14 UTC in reply to "RE: Btrfs dedup"
diegocg Member since:
2005-07-08

Live dedup is definitely not a perfect solution; it hurts performance, and for many people it's a bad choice. Btrfs will add live dedup in future releases, so users can choose whichever method better suits their needs.

Edited 2013-11-08 20:14 UTC

Reply Score: 4

RE[3]: Btrfs dedup
by ddc_ on Sat 9th Nov 2013 23:52 UTC in reply to "RE[2]: Btrfs dedup"
ddc_ Member since:
2006-12-05

Live dedup [...] hurts performance

It is actually very much implementation-dependent. Live dedup trades some expensive writes for many more, less expensive checks, so the outcome depends heavily on the usage pattern and the algorithm.

Reply Score: 2

btrfs
by gan17 on Fri 8th Nov 2013 22:17 UTC
gan17
Member since:
2008-06-03

Does any modern distro even default to butterface or recommend it as a stable solution?

I remember the time everyone in Linux-land was talking about butterface being the next big thing, and that was YEARS ago.

Reply Score: 3

RE: btrfs
by ddc_ on Sat 9th Nov 2013 23:46 UTC in reply to "btrfs"
ddc_ Member since:
2006-12-05

Fedora provides it in the installer, though not as the default AFAIR. But they enjoy thinking of themselves as bleeding edge, so their example may not be indicative. Arch still doesn't provide boot-time fsck for btrfs (there is a package in the AUR, but that is unofficial anyway). Given that it is a rolling release and pretty much targeted at "powerusers", this should be indicative.

Reply Score: 2

RE: btrfs
by jessesmith on Sun 10th Nov 2013 00:54 UTC in reply to "btrfs"
jessesmith Member since:
2010-03-11

The openSUSE distribution has supported Btrfs for a while now. I do not think it'll be the default until the release after this coming one, but they have done a lot of work with Btrfs, integrating snapshots in with the YaST admin tools. They seem to be the only Linux distribution taking Btrfs really seriously at the moment. Fedora and Ubuntu both have Btrfs as an option at install time, but it isn't really well implemented in either distribution.

Reply Score: 3

Not so clever
by Kebabbert on Mon 11th Nov 2013 20:32 UTC
Kebabbert
Member since:
2007-07-27

Linux overcommits RAM, and when RAM is exceeded, Linux starts to kill processes randomly, which makes the system unstable and might lose your data or crash. Other OSes do not allow overcommitting of RAM, so RAM will never be exceeded - this means your system stays stable even when low on memory. Linux will cause problems when low on RAM, because it will kill some processes randomly.

https://lwn.net/Articles/553449/
This guy is not so clever. He complains that Solaris does not allow overcommitting RAM, which did not allow him to use Emacs under low memory conditions. Well, Solaris might not allow him to use Emacs under low memory conditions, but neither will it start killing random processes when he starts Emacs; instead, Solaris simply refuses to start Emacs.

This guy wishes that Solaris would behave like Linux: allow him to start Emacs, and at the same time kill another random process. How clever is that? The system might lose his data, or crash! I would not let him into a server hall.

Reply Score: 2