Linked by Thom Holwerda on Wed 10th Jul 2013 11:23 UTC
Windows "The default timer resolution on Windows is 15.6 ms - a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected - they make your computer run slower! Because of these problems Microsoft has been telling developers not to increase the timer frequency for years. So how come almost every time I notice that my timer frequency has been raised it's been done by a Microsoft program?" Fascinating article.
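A quick back-of-envelope check of the numbers quoted above (15.6 ms default resolution versus the 1 ms many programs request), sketched in Python:

```python
# Figures from the quoted article: Windows' default 15.6 ms timer
# resolution, versus the 1 ms resolution programs commonly request.
default_ms = 15.6
raised_ms = 1.0

default_hz = 1000 / default_ms   # interrupts per second at the default
raised_hz = 1000 / raised_ms     # interrupts per second when raised

print(round(default_hz))         # 64 interrupts/second, as the article says
print(raised_hz / default_hz)    # ~15.6x more timer wakeups
```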
Thread beginning with comment 566685
RE: Interesting read
by moltonel on Wed 10th Jul 2013 13:44 UTC in reply to "Interesting read"

I never had any idea that it was this bad with Windows.
With Linux, this is user-definable when compiling the kernel.
I always felt that the system would run faster with a faster timer. It just felt like latencies were lower. Things felt snappier.

Yes, it's the old latency vs throughput dilemma. Raise one and you lower the other. That's why it's common to recommend a high timer frequency for desktops (better latency) and a low one for servers (better throughput).
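To put rough numbers on that trade-off, here is a sketch assuming a purely hypothetical cost of 5 µs per timer interrupt (the real per-tick cost varies with hardware and workload):

```python
# Hypothetical per-interrupt cost, for illustration only.
TICK_COST_US = 5.0

# Common CONFIG_HZ choices on Linux.
for hz in (100, 250, 1000):
    overhead = hz * TICK_COST_US / 1_000_000   # fraction of CPU time spent servicing ticks
    worst_latency_ms = 1000.0 / hz             # worst-case wait until the next tick
    print(f"{hz:4d} Hz: ~{overhead:.3%} tick overhead, "
          f"up to {worst_latency_ms:.0f} ms until the next tick")
```

At 1000 Hz the worst-case tick latency drops to 1 ms, but ten times as much CPU time goes to the tick handler as at 100 Hz, which is the desktop-vs-server split described above.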

Funny to see that Windows lets apps tune this setting while it is a compile-time choice for Linux. That's what you get when your identical kernel has to work for everybody, but I wonder if that feature costs the Windows kernel performance in and of itself (on top of the "give userspace some power and it'll misuse it" problem described in the article).

Getting the best of both worlds (a fully tickless kernel) is obviously hard, so it's great to see so much engineering time poured into it for Linux. It's the kind of endeavour routinely seen in the Linux community that I imagine happens less often elsewhere.
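For context, both the tick rate and tickless operation are kernel build options; a plausible .config fragment, using option names from the 3.10-era kernels current when this thread was written (CONFIG_NO_HZ_FULL, the full tickless mode, was newly merged in 3.10):

```
# Timer frequency: one of CONFIG_HZ_100 / _250 / _300 / _1000
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# Tick handling: periodic, tickless-idle (NO_HZ_IDLE), and full
# tickless (NO_HZ_FULL) are alternatives in the same choice menu.
CONFIG_NO_HZ_FULL=y
```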

Parent Score: 5

RE[2]: Interesting read
by p13. on Wed 10th Jul 2013 13:59 in reply to "RE: Interesting read"

Yeah, for servers, I usually run 1kHz.
I've also upped it to 1kHz on my netbook, and powertop is showing MUCH fewer wakeups, which is logical.

There might be a kernel parameter that changes the timer; I'm looking.
It's not something that is often discussed. Pretty interesting stuff!
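For what it's worth, userspace can at least query the tick unit it is shown, though on Linux sysconf reports USER_HZ (conventionally 100, kept fixed for ABI compatibility) rather than the kernel's actual CONFIG_HZ; the compiled-in value is usually visible in /proc/config.gz or /boot/config-$(uname -r) instead. A minimal sketch:

```python
import os

# USER_HZ: the tick unit exposed to userspace (e.g. in /proc/*/stat
# times). Note this need not match the kernel's internal CONFIG_HZ.
user_hz = os.sysconf("SC_CLK_TCK")
print(f"USER_HZ = {user_hz}, i.e. one tick = {1000 / user_hz:.1f} ms as seen by userspace")
```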

Parent Score: 2