Linked by Thom Holwerda on Mon 26th Feb 2018 18:13 UTC
Windows

Flaky failures are the worst. In this particular investigation, which spanned twenty months, we suspected hardware failure, compiler bugs, linker bugs, and other possibilities. Jumping too quickly to blaming hardware or build tools is a classic mistake, but in this case the mistake was that we weren’t thinking big enough. Yes, there was a linker bug, but we were also lucky enough to have hit a Windows kernel bug which is triggered by linkers!

He hasn't found the bug
by malxau on Mon 26th Feb 2018 20:02 UTC
Member since: 2005-12-04

For those reading the article to find the bug, it's not yet diagnosed. The FlushFileBuffers workaround is heavyweight (I'm surprised he didn't mention how slow it is), but it also alludes to the problem.
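
For readers who want to see what that workaround amounts to, it boils down to one call on the output file's handle once the linker has finished writing - a minimal sketch (the file name is illustrative; the real workaround runs inside the linker):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative file name; the real workaround runs inside the
           linker, against the output file it just finished writing. */
        HANDLE h = CreateFileW(L"output.dll", GENERIC_WRITE, 0, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        /* Force every dirty cached page of the file to disk before anyone
           tries to execute it. Correct, but heavyweight: the call blocks
           until all the data has physically been written. */
        if (!FlushFileBuffers(h)) {
            fprintf(stderr, "FlushFileBuffers failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 1;
        }
        CloseHandle(h);
        return 0;
    }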

I don't know the bug either, but can describe from a kernel point of view why these cases are complex.

A linker typically writes to files via a writable memory-mapped view. These pages go directly into the system cache and are considered dirty. The system is free to write them out at any time; the file system is responsible for ensuring that when a page is later read back from disk, data that was written is returned, and a page that was never written reads back as zeroes. That's conceptually simple, but the implementation is complex due to various factors. If you want to know more about this, I wrote about it at https://www.osr.com/nt-insider/2015-issue2/maintaining-valid-data-le... .
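
For the curious, the standard Win32 mapped-write sequence looks roughly like this - a sketch, not actual linker code (the helper name, file name, and error handling are illustrative):

    #include <windows.h>
    #include <string.h>

    /* Write len bytes to a new file through a writable mapped view. The
       memcpy only dirties pages in the system cache; the memory manager
       writes them back to disk whenever it chooses. */
    BOOL write_via_mapping(const wchar_t *path, const void *data, DWORD len)
    {
        HANDLE file = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return FALSE;

        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READWRITE, 0, len, NULL);
        if (!mapping) { CloseHandle(file); return FALSE; }

        void *view = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, len);
        if (!view) { CloseHandle(mapping); CloseHandle(file); return FALSE; }

        memcpy(view, data, len);   /* pages are now dirty in the cache */

        UnmapViewOfFile(view);     /* unmapping does NOT flush them to disk */
        CloseHandle(mapping);
        CloseHandle(file);
        return TRUE;
    }

    int main(void)
    {
        const char payload[] = "not a real PE image";  /* placeholder bytes */
        return write_via_mapping(L"demo.bin", payload, sizeof(payload)) ? 0 : 1;
    }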

The next complication with linkers is executable file layout. Typical Windows PE files are composed of a series of sections with different page permissions, but within the file these sections are aligned at 512 bytes. When an executable page is read from disk it is stored at 512-byte alignment, but the corresponding data the linker generated was written at 4Kb alignment. So it's not strictly true that an executable page can be read once a (single) data page has been written - both data pages that compose an executable page must be written before the executable page can be read. This is the default behavior, but it is optional - executables can be written with 4Kb alignment; they'll just be somewhat larger. Some compilers have done this by default (e.g. see http://www.malsmith.net/blog/good-compiler-bad-defaults/ )
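
You can inspect the two alignment values for any given binary straight from the PE optional header - a rough sketch with minimal validation (the two alignment fields happen to sit at the same byte offsets in PE32 and PE32+, so reading the headers as the 64-bit variant works for both):

    #include <windows.h>
    #include <stdio.h>

    /* Print a PE file's section and file alignment. 0x1000/0x200 is the
       common default described above; the linker's /FILEALIGN switch can
       raise the file alignment. */
    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        IMAGE_DOS_HEADER dos;
        IMAGE_NT_HEADERS64 nt;
        if (fread(&dos, sizeof(dos), 1, f) != 1 ||
            fseek(f, dos.e_lfanew, SEEK_SET) != 0 ||
            fread(&nt, sizeof(nt), 1, f) != 1) { fclose(f); return 1; }
        fclose(f);

        if (dos.e_magic != IMAGE_DOS_SIGNATURE ||
            nt.Signature != IMAGE_NT_SIGNATURE) return 1;

        printf("SectionAlignment: 0x%lx\n",
               (unsigned long)nt.OptionalHeader.SectionAlignment);
        printf("FileAlignment:    0x%lx\n",
               (unsigned long)nt.OptionalHeader.FileAlignment);
        return 0;
    }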

If I were running the Chrome build process, I'd be tempted to compile this particular binary with 4Kb alignment and see if it fixes the problem. Another thing to check is that the file is not NTFS compressed (which requires writing 64Kb of uncompressed data and totally changes this logic.)
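
The compression check is cheap to script - a quick sketch using the file attribute flag (the path is illustrative):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative path; point it at the linker's actual output. */
        DWORD attrs = GetFileAttributesW(L"out\\Release\\chrome.dll");
        if (attrs == INVALID_FILE_ATTRIBUTES) {
            fprintf(stderr, "GetFileAttributes failed: %lu\n", GetLastError());
            return 1;
        }
        printf("NTFS compressed: %s\n",
               (attrs & FILE_ATTRIBUTE_COMPRESSED) ? "yes" : "no");
        return 0;
    }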

What's frustrating about reading this article is that the bug could be in one of a handful of kernel components, but the investigation hasn't gone far enough to even pinpoint which. (I hope I wasn't the one who wrote it!)

Edited 2018-02-26 20:08 UTC

Reply Score: 7

Whoa
by Poseidon on Mon 26th Feb 2018 20:44 UTC
Member since: 2009-10-31

I'm a bit surprised that there haven't been bigger errors; Microsoft has been dragging their kernel into modernity really quickly. I hope they put even more emphasis on QA and get this fixed soon.

Reply Score: 1

RE: Whoa
by damp on Mon 26th Feb 2018 21:19 UTC in reply to "Whoa"
Member since: 2006-03-19

I'm just surprised and a little envious; I want his skills.

Reply Score: 2

NT is still garbage
by tidux on Mon 26th Feb 2018 22:04 UTC
Member since: 2011-08-13

> Building Chrome very quickly causes CcmExec.exe to leak process handles. Each build can leak up to 1,600 process handles and about 100 MB. That becomes a problem when you do 300+ builds in a weekend – bye bye to ~32 GB of RAM, consumed by zombies. I now run a loop that periodically kills CcmExec.exe to mitigate this, and Microsoft is working on a fix.

What the actual fuck? This would be considered unacceptable on Haiku, let alone Linux or OpenBSD.

Reply Score: 4

RE: NT is still garbage
by avgalen on Tue 27th Feb 2018 10:29 UTC in reply to "NT is still garbage"
Member since: 2010-09-23

> Building Chrome very quickly causes CcmExec.exe to leak process handles.

> What the actual f--k? This would be considered unacceptable on Haiku, let alone Linux or OpenBSD.

This is unacceptable on Windows as well, but it is not something an OS/kernel should be bothered with. Building Chrome is a usermode process, and if that process causes another program to leak resources, that other program is a usermode process too. Usermode processes are configured by default to consume as many resources as are available. In the end that means one usermode process can consume almost all resources, which is what you would want in all normal scenarios (no leaks).
As long as the OS can still control those usermode processes, the OS is working perfectly.

(CcmExec.exe isn't even present on systems by default; it's a tool that enterprises use to monitor their systems for updates.)

> The underlying bug is that if a program writes a PE file (EXE or DLL) using memory mapped file I/O and if that program is then immediately executed (or loaded with LoadLibrary or LoadLibraryEx), and if the system is under very heavy disk I/O load, then a necessary file-buffer flush may fail. This is very rare and can realistically only happen on build machines, and even then only on monster 24-core machines like I use.
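
Spelled out, the trigger sequence from that quote is roughly the following - a sketch only, with placeholder bytes standing in for a real linker-produced PE image:

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        /* Placeholder bytes; a real repro needs a valid PE image here. */
        static const unsigned char image[] = { 'M', 'Z' };
        const DWORD len = sizeof(image);

        /* Step 1: write the image through a writable mapped view, the way
           a linker does, leaving dirty pages in the system cache. */
        HANDLE f = CreateFileW(L"fresh.dll", GENERIC_READ | GENERIC_WRITE, 0,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (f == INVALID_HANDLE_VALUE) return 1;
        HANDLE m = CreateFileMappingW(f, NULL, PAGE_READWRITE, 0, len, NULL);
        if (!m) { CloseHandle(f); return 1; }
        void *v = MapViewOfFile(m, FILE_MAP_WRITE, 0, 0, len);
        if (!v) { CloseHandle(m); CloseHandle(f); return 1; }
        memcpy(v, image, len);
        UnmapViewOfFile(v);
        CloseHandle(m);
        CloseHandle(f);

        /* Step 2: load it immediately. Per the article, under very heavy
           disk I/O this is where the corruption can appear. */
        HMODULE h = LoadLibraryW(L"fresh.dll");
        if (h) FreeLibrary(h);
        return 0;
    }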

Well, why wasn't there a unittest for this exact scenario? /s

Edited 2018-02-27 10:42 UTC

Reply Score: 5

RE[2]: NT is still garbage
by tidux on Tue 27th Feb 2018 17:12 UTC in reply to "RE: NT is still garbage"
RE[3]: NT is still garbage
by zlynx on Tue 27th Feb 2018 18:33 UTC in reply to "RE[2]: NT is still garbage"
Member since: 2005-07-20

Pretty confident about Linux there, aren't you?

Haven't you read about or experienced Linux's enjoyable bugs with O_DIRECT, and with mixing memory-mapped I/O and read()/write() I/O? I think I recall some bugs with Linux AIO io_submit() too.

Sure, those were fixed. But at one point in time there were inconsistent views of IO, just like what this Windows bug sounds like.
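
The hazard pattern being described looks roughly like this - a sketch of mixing a mapped write with an O_DIRECT read of the same file (illustrative file name; modern kernels flush the dirty range before a direct read, and keeping those two views consistent is exactly what the old bugs broke):

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative file; assumed to exist and be at least 4096 bytes. */
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) return 1;

        /* Writer side: dirty a page through a shared mapping. */
        char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) return 1;
        memset(map, 'x', 4096);

        /* Reader side: O_DIRECT bypasses the page cache, so it sees what is
           on disk rather than the dirty page - unless the kernel flushes and
           invalidates the range first. */
        int dfd = open("data.bin", O_RDONLY | O_DIRECT);
        if (dfd < 0) return 1;
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) return 1; /* O_DIRECT needs alignment */
        (void)read(dfd, buf, 4096);

        free(buf);
        close(dfd);
        munmap(map, 4096);
        close(fd);
        return 0;
    }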

Ooh, while Googling about I found another one about transparent huge pages and O_DIRECT causing screw-ups in Linux.

I like Linux, but don't put it on a pedestal.

Reply Score: 6

RE[3]: NT is still garbage
by avgalen on Tue 27th Feb 2018 19:42 UTC in reply to "RE[2]: NT is still garbage"
Member since: 2010-09-23

> You don't need a unit test

Maybe you didn't know it, but /s indicates sarcasm. Of course there wasn't a unit test for it; the circumstances are way too extreme for a unit test.

> just a system design that doesn't constantly thrash disk like a retard.

It isn't a system design that thrashes the disk like a retard. The guy is compiling Chrome, which normally thrashes the entire system (under Linux as well). The mentioned bug is specific in that it only occurs "if the system is under very heavy disk I/O load".

> This architecturally can not happen on Linux.

Of course it can. There is nothing in the architecture of Linux that prevents one usermode process from taking up almost all of the system's resources, effectively blocking a second usermode process from performing well. Just like under Windows, this is the normal behavior, and as long as the OS is still capable of controlling both usermode processes, they will both continue to run and do their work. Now, there are certainly differences in how cpu/mem/io/caches are allocated, but those differences cannot guarantee that both programs will get enough resources.

(here is a nice, although dated, architectural comparison with some scheduler characteristics: https://www.ukessays.com/essays/information-systems/compare-cpu-sche...)

Reply Score: 4

RE[4]: NT is still garbage
by kwan_e on Wed 28th Feb 2018 05:35 UTC in reply to "RE[3]: NT is still garbage"
Member since: 2007-02-18

> There is nothing in the architecture of Linux that prevents one usermode process from taking up almost all of the system's resources,


https://en.wikipedia.org/wiki/LXC

Reply Score: 2

RE[5]: NT is still garbage
by evil on Wed 28th Feb 2018 10:40 UTC in reply to "RE[4]: NT is still garbage"
Member since: 2013-05-22

/etc/security/limits.conf
I used it more than 10 years ago to avoid system lockups while some process was leaking memory extensively.

The newer approach is cgroups.

And LXC is another beast, meant for lightweight virtualization.
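
For illustration, limits.conf entries map onto the kernel's rlimit mechanism, which a process can also set on itself - a minimal sketch with made-up values and user name:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap this process's address space at 2 GiB - roughly what a
           limits.conf line like "builder hard as 2097152" imposes
           (limits.conf expresses "as" in KiB). */
        struct rlimit rl;
        rl.rlim_cur = (rlim_t)2 * 1024 * 1024 * 1024;
        rl.rlim_max = rl.rlim_cur;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* From here on, allocations beyond the cap fail with ENOMEM
           instead of dragging the whole system into swap. */
        return 0;
    }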

Reply Score: 2

RE[6]: NT is still garbage
by avgalen on Wed 28th Feb 2018 16:35 UTC in reply to "RE[5]: NT is still garbage"
Member since: 2010-09-23

Interesting links! It doesn't seem like any of that would prevent the mentioned issue by default, though.

The main problem is always that you want all resources to be available to every program... but you don't want one program to take up resources that another program needs.

So you either end up configuring a generic system for a specific workload,
or with a system where one program can limit the performance of a second program,
or with a system where a program runs sub-optimally.

Reply Score: 3

Bugs that go away on their own...
by moronikos on Mon 26th Feb 2018 22:34 UTC
Member since: 2005-07-06

I loved the description. They didn't find the bug, but by reordering some stuff, it went away for a year.

In my shop, we have a saying: "What goes away on its own, comes back on its own." That is definitely my experience. But sometimes, after digging and digging and digging, you still haven't found the problem. And if you can find a way to at least make it appear to go away, sometimes that is the best you can do for a while.

Reply Score: 5