Linked by Thom Holwerda on Wed 10th Jul 2013 11:23 UTC
Windows "The default timer resolution on Windows is 15.6 ms - a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected " they make your computer run slower! Because of these problems Microsoft has been telling developers to not increase the timer frequency for years. So how come almost every time I notice that my timer frequency has been raised it's been done by a Microsoft program?" Fascinating article.
RE: Interesting read
by moltonel on Wed 10th Jul 2013 13:44 UTC in reply to "Interesting read"

"I never had any idea that it was this bad with Windows. With Linux, this is user definable when compiling the kernel. I always felt that the system would run faster with a faster timer. It just felt like latencies were lower. Things felt snappier."


Yes, it's the old latency vs throughput dilemma: raise one and you lower the other. That's why it's common to recommend a high timer frequency for desktops (better latency) and a low one for servers (better throughput).
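(For reference, and not something the commenter spelled out: on Linux the knob in question is the HZ choice made when configuring the kernel. A sketch of what that looks like in a .config, using the option names from mainline Kconfig.hz; a latency-focused desktop build might pick CONFIG_HZ_1000 instead.)

    # Timer frequency baked in at kernel build time (option names as in
    # mainline Kconfig.hz; the exact set may vary between kernel versions).
    # CONFIG_HZ_100 is not set       # server-leaning: fewer ticks, better throughput
    CONFIG_HZ_250=y                  # a common distribution default / compromise
    # CONFIG_HZ_1000 is not set      # desktop-leaning: lower latency, more overhead
    CONFIG_HZ=250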

Funny to see that Windows lets apps tune this setting while it is a compile-time choice for Linux. That's what you get when one and the same kernel has to work for everybody, but I wonder if that flexibility costs the Windows kernel some performance in and of itself (on top of the "give userspace some power and it'll misuse it" problem described in the article).

Getting the best of both worlds (a fully tickless kernel) is obviously hard, so it's great to see so much engineering time poured into it for Linux. It's the kind of endeavour routinely seen in the Linux community, and one that I imagine happens less often elsewhere.
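(Again as an aside rather than something from the comment: the tickless behaviour is itself a set of build-time options, and CONFIG_NO_HZ_FULL, the fully tickless mode, had only just landed in Linux 3.10 around the time this was written.)

    # Dynticks / tickless options (mainline names circa Linux 3.10; they may
    # differ in other versions).
    CONFIG_NO_HZ_IDLE=y              # stop the tick on idle CPUs (common default)
    # CONFIG_NO_HZ_FULL is not set   # "fully tickless": stop the tick on busy CPUs too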
