Linked by Thom Holwerda on Mon 10th Jul 2017 18:27 UTC
Windows

This story begins, as they so often do, when I noticed that my machine was behaving poorly. My Windows 10 work machine has 24 cores (48 hyper-threads) and they were 50% idle. It has 64 GB of RAM and that was less than half used. It has a fast SSD that was mostly idle. And yet, as I moved the mouse around it kept hitching - sometimes locking up for seconds at a time.

So I did what I always do - I grabbed an ETW trace and analyzed it. The result was the discovery of a serious process-destruction performance bug in Windows 10.

Great story.

Thread beginning with comment 646574
RE[2]: 2 questions
by feamatar on Tue 11th Jul 2017 10:37 UTC in reply to "RE: 2 questions"
feamatar
Member since:
2014-02-25

I understand that, but since there are many readers on this site with great technical insight, I am interested whether someone knows why thread creation is used instead of fibres or a thread pool, when threads are expensive either way.

Reply Parent Score: 2

RE[3]: 2 questions
by Lennie on Tue 11th Jul 2017 10:51 in reply to "RE[2]: 2 questions"
Lennie Member since:
2007-09-22

I suspect part of the reason is that most compilers and build systems have been built that way for ages. Even with Java command-line build tools, people are used to spawning javac multiple times, if I'm not mistaken.

Reply Parent Score: 2

RE[3]: 2 questions
by kwan_e on Tue 11th Jul 2017 12:47 in reply to "RE[2]: 2 questions"
kwan_e Member since:
2007-02-18

I am interested if someone knows why thread creation is used instead of fibres or a threadpool when threads are expensive either way.


Because most programmers do what they were taught and are afraid of anything new. They learnt threads and critical sections, and they'll use it and nothing else, gosh darn it.

Reply Parent Score: 3

RE[4]: 2 questions
by Alfman on Tue 11th Jul 2017 15:15 in reply to "RE[3]: 2 questions"
Alfman Member since:
2011-01-28

kwan_e,

Because most programmers do what they were taught and are afraid of anything new. They learnt threads and critical sections, and they'll use it and nothing else, gosh darn it.


You are right, habits die hard. We keep doing things the same way despite problems (be it programming languages, networking protocols, etc.). Oftentimes the energy required to change course exceeds our willingness to adapt.

Even if you and I committed to doing things a better way, we're still held back by everyone else's work. One thing I have a lot of experience with is AIO, and unfortunately Linux has poor and incomplete support for AIO, from the kernel up through the libraries, and there's a terrible cost to going against the grain.

For example, you can implement a network daemon with nonblocking sockets, but the name resolver on Linux is blocking. So if you want to use hostnames, you end up with an AIO daemon that blocks, or your AIO server has to use threads anyway in order not to block - which makes things much more complicated.
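This is exactly the workaround async frameworks ended up shipping: since the standard resolver blocks, they push the lookup onto a worker thread so the event loop stays responsive. A minimal sketch using Python's asyncio (the hostname "localhost" and the heartbeat task are illustrative, standing in for "other clients being serviced"):

```python
import asyncio
import socket

async def resolve(host: str) -> list:
    """Offload the blocking getaddrinfo() call to a worker thread so the
    event loop keeps servicing other tasks while the lookup runs."""
    loop = asyncio.get_running_loop()
    # run_in_executor dispatches to a thread pool: the same "use threads
    # anyway" workaround the comment describes for blocking resolvers.
    return await loop.run_in_executor(
        None, socket.getaddrinfo, host, 80, 0, socket.SOCK_STREAM)

async def main() -> list:
    async def heartbeat():
        # Stand-in for the rest of the daemon's work; keeps running
        # because the resolver call never blocks the loop itself.
        while True:
            await asyncio.sleep(0.01)

    hb = asyncio.create_task(heartbeat())
    addrs = await resolve("localhost")  # hostname is illustrative
    hb.cancel()
    return addrs

addrs = asyncio.run(main())
print(f"resolved {len(addrs)} address(es) without blocking the loop")
```

The irony the comment points at still holds: the "asynchronous" lookup is just a blocking call hidden on another thread, which is why projects like nginx eventually wrote their own wire-level DNS resolvers instead.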

Even asynchronous daemons like nginx are negatively affected. It used to be that if you specified proxy hosts by name instead of IP, nginx would block (I learned that the hard way when a production web server stopped functioning after an upstream DNS server failed). Nginx has since implemented its own internal asynchronous resolver, but having to reinvent the wheel sucks for a lot of reasons. How does it find the DNS server? Does it support IPv4/IPv6? Does it support both recursive and non-recursive servers? What about DNSSEC? NetBIOS? What about the hosts file? Fixing the limitations of the native libraries oneself leads to a lot of reinventing ;)

Edited 2017-07-11 15:17 UTC

Reply Parent Score: 2