Linked by Thom Holwerda on Tue 17th Nov 2009 16:13 UTC
Microsoft's Professional Developers Conference is currently under way, and as usual, the technical fellows at Microsoft gave talks about the deep architecture of Windows - in this case, Windows 7, of course. As it turns out, some seriously impressive changes have been made to the very core of Windows - all without breaking a single application. Thanks to BetaNews for summarising this technical talk so well.
v Comment by diegocg
by diegocg on Tue 17th Nov 2009 16:33 UTC
RE: Comment by diegocg
by PlatformAgnostic on Tue 17th Nov 2009 16:47 UTC in reply to "Comment by diegocg"
PlatformAgnostic Member since:
2006-01-02

The part of the scheduler you're talking about has been per-CPU and scalable since Windows Server 2003. The Dispatcher Lock was for synchronizing other APIs which have no equivalent on Linux.
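
For readers unfamiliar with the term, "per-CPU" here means each processor schedules from its own ready queue under its own lock. A minimal, illustrative sketch of the idea (generic C with pthreads - not Windows' actual scheduler code):

```c
/* Illustrative per-CPU ready queues: each CPU picks work from its own
 * queue under its own lock, so scheduling on one CPU does not contend
 * with scheduling on the others. */
#include <pthread.h>
#include <stddef.h>

#define NCPUS 8

struct thread {
    struct thread *next;
};

struct ready_queue {
    pthread_mutex_t lock;   /* protects only this CPU's queue */
    struct thread  *head;
};

static struct ready_queue run_queues[NCPUS];

static void init_run_queues(void)
{
    for (int i = 0; i < NCPUS; i++) {
        pthread_mutex_init(&run_queues[i].lock, NULL);
        run_queues[i].head = NULL;
    }
}

/* Pop the next runnable thread for this CPU; NULL if its queue is empty. */
static struct thread *pick_next(int cpu)
{
    struct ready_queue *q = &run_queues[cpu];
    pthread_mutex_lock(&q->lock);
    struct thread *t = q->head;
    if (t != NULL)
        q->head = t->next;
    pthread_mutex_unlock(&q->lock);
    return t;
}
```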

Reply Score: 15

RE: Comment by diegocg
by rockwell on Wed 18th Nov 2009 20:32 UTC in reply to "Comment by diegocg"
rockwell Member since:
2005-09-13

Massive knowledge fail. Freetards continue their streak of FUD.

Reply Score: 1

There weren't deadlocks before
by PlatformAgnostic on Tue 17th Nov 2009 16:54 UTC
PlatformAgnostic
Member since:
2006-01-02

Just for the record, previous OSes did not suffer from deadlocks in the scheduler like the article implies. When the number of locks increases, though, more discipline is required to ensure that no lock-inversion deadlocks can arise. A careful new hierarchy had to be instituted to avoid this (and the code has a lot of self-checks to enforce correct ordering).

http://en.wikipedia.org/wiki/Deadlock#Circular_wait_prevention
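
As a rough illustration of such a hierarchy with self-checks (a minimal sketch in C with pthreads; the level scheme and names are invented for the example, not the actual Windows code):

```c
/* Lock-ordering self-checks: every lock is assigned a level, and a
 * thread may only acquire locks in strictly increasing level order,
 * which rules out circular wait - and thus deadlock - by construction. */
#include <assert.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t mutex;
    int             level;   /* this lock's position in the hierarchy */
} ordered_lock_t;

/* Per-thread stack of held levels (assumes locks are released in LIFO order). */
static __thread int held_levels[16];
static __thread int held_count;

static void ordered_lock(ordered_lock_t *l)
{
    /* Self-check: an out-of-order acquire trips the assert immediately,
     * instead of risking a rare lock-inversion deadlock at runtime. */
    assert(held_count == 0 || l->level > held_levels[held_count - 1]);
    pthread_mutex_lock(&l->mutex);
    held_levels[held_count++] = l->level;
}

static void ordered_unlock(ordered_lock_t *l)
{
    assert(held_count > 0 && held_levels[held_count - 1] == l->level);
    held_count--;
    pthread_mutex_unlock(&l->mutex);
}
```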

Reply Score: 4

v Big woop.
by jabjoe on Tue 17th Nov 2009 17:18 UTC
RE: Big woop.
by Tuishimi on Tue 17th Nov 2009 17:35 UTC in reply to "Big woop."
Tuishimi Member since:
2005-07-06

Different doesn't always equate to better. Windows has performed well enough to date, and now performs better. AND I can watch my Netflix movies online with Windows... ;)

Reply Score: 5

RE[2]: Big woop.
by bousozoku on Wed 18th Nov 2009 03:29 UTC in reply to "RE: Big woop."
bousozoku Member since:
2006-01-23

Different doesn't always equate to better. Windows has performed well enough to date, and now performs better. AND I can watch my Netflix movies online with Windows... ;)


That Windows Vista performed well enough is arguable, but that Windows 7 performs better seems pretty much true.

It's nice to see that they actually addressed shortcomings, rather than just adding more features blindly.

Reply Score: 3

Vista was fine after SP2
by nt_jerkface on Wed 18th Nov 2009 07:06 UTC in reply to "RE[2]: Big woop."
nt_jerkface Member since:
2009-08-26

The tech press just didn't bother reporting on it; bashing MS with the Vista stick was more appealing.

Reply Score: 2

*off topic
by Cody Evans on Wed 18th Nov 2009 00:57 UTC in reply to "Big woop."
Cody Evans Member since:
2009-08-14

I personally think that drive letters are easier to use than the way Linux does it, "E:\" vs "/media/usb-disk1/"

Reply Score: 3

RE: *off topic
by ba1l on Wed 18th Nov 2009 02:09 UTC in reply to "*off topic"
ba1l Member since:
2007-09-08

The mount points aren't really user visible in Linux.

You plug in an external USB drive, and it shows up, usually labelled only as "4.0GB Disk" or something like that. If you've given the drive a label, that label shows up instead. This is consistent across most applications, and is far better than "E:".

Besides, most users don't notice drive letters in Windows either. I certainly don't - I stopped caring as soon as Windows stopped forcing me to remember what drive letter was which.

Reply Score: 4

RE[2]: *off topic
by Bending Unit on Wed 18th Nov 2009 06:09 UTC in reply to "RE: *off topic"
Bending Unit Member since:
2005-07-06

Sometimes you still have to revert to the command line, maybe to do something with your automatically mounted NTFS drive, and then...? /media/somethingsomething

Personally I prefer drive letters - a forest instead of a single file system tree - and in Windows I can use either one.

Reply Score: 2

RE[3]: *off topic
by bnolsen on Wed 18th Nov 2009 13:00 UTC in reply to "RE[2]: *off topic"
bnolsen Member since:
2006-01-06

Drive letters can be a nightmare in a company with any number of machines when handling network shares. Having arbitrary limits (26 letters, because of the English alphabet) is just dumb.

If you really want, on Unix you can mount stuff into /drive/a, /drive/b, etc., or hook an automounter into /drive.

Reply Score: 2

RE[4]: *off topic
by Bending Unit on Wed 18th Nov 2009 15:53 UTC in reply to "RE[3]: *off topic"
Bending Unit Member since:
2005-07-06

Yes, when complexity grows - i.e. when you feel that 26 drive letters is too few - I can understand it feels limiting.

I still don't dislike the concept of having a forest, though. Letters are simple and good enough for most users. For more complex scenarios, names sort of like the Amiga had would be nice: Drivename:\dir\file

Reply Score: 2

RE[5]: *off topic
by fx__ on Wed 18th Nov 2009 17:23 UTC in reply to "RE[4]: *off topic"
fx__ Member since:
2006-03-31

No no, the Amiga had the slashes the correct way! Partition:Directory/File

I think the Amiga way is many times better than the Windows way: you could name a partition DH1: but also give it a name, say Games:, and then access it by typing either DH1: or Games:.

It also had assigns, which were very nice - sort of like soft links, but with volumes. You could assign Games: to DH1:Cool stuff/More cool stuff/Games/ and then access that deep directory by just typing Games:, very nifty. I know you could do this in DOS as well, but it's not the same thing. Sys: on the Amiga is always the disk you have booted from, whatever its name is. And so on...

But I still think the Unix system is the best and most flexible - maybe not for home users, but as long as it's hidden (as on the Mac and in Gnome/KDE etc.) it works really nicely.

Reply Score: 1

RE[2]: *off topic
by cerbie on Wed 18th Nov 2009 09:34 UTC in reply to "RE: *off topic"
cerbie Member since:
2006-01-02

Unless everything you do is on C:, Windows still makes you keep track of that.

Reply Score: 2

RE: *off topic
by Brunis on Wed 18th Nov 2009 13:00 UTC in reply to "*off topic"
Brunis Member since:
2005-11-01

I personally think that drive letters are easier to use than the way Linux does it, "E:\" vs "/media/usb-disk1/"


Really? I can drag and drop files to my USB pen drive icon on my Ubuntu desktop... on Windows, only geeks know which drive letters correspond to which pieces of hardware.

And the context menu 'send to' function exacerbates this problem: send to F:\, which has a hard drive icon. Windows always presumes prior knowledge. It is NOT user-friendly and never has been.

Reply Score: 0

It's nice to see...
by Tuishimi on Tue 17th Nov 2009 17:39 UTC
Tuishimi
Member since:
2005-07-06

...what the underlying changes were, how 7 is different from Vista. It certainly felt different to use (altho' I only used Vista briefly on my in-laws' computer).

And it seems to work well on my netbook. After the Google OS announcement (a beta might be released next week) I was considering giving that a shot on my netbook, but now I am wondering just how flexible that OS will be (enough for my needs? I don't think it will be), so I am probably going to stick with Win 7. Yeah, off-topic, sorry, but the point is that it performs well enough on my netbook to keep using it (instead of XP or something else).

Reply Score: 1

Comment by Gone fishing
by Gone fishing on Tue 17th Nov 2009 17:55 UTC
Gone fishing
Member since:
2006-02-22

Vista found itself managing systems with more than 64 total cores.


and

In fact, they were making investments in quad-core systems earlier in Vista's lifecycle than originally anticipated


So Vista is dog slow because it can't cope with 64 cores or even 4. Well, I wish my box had 64 cores, which it doesn't - why then is it dog slow with two cores or even one - in fact, why is it dog slow?

This feels like an explanation of the form: if you can't baffle them with science, baffle them with bullshit.

Oh, I just installed Windows 7, and other than drivers being a big problem it is quite nice, quite fast, quite responsive - it's OK.

Reply Score: 3

RE: Comment by Gone fishing
by tomcat on Tue 17th Nov 2009 19:00 UTC in reply to "Comment by Gone fishing"
tomcat Member since:
2006-01-06

So Vista is dog slow because it can't cope with 64 cores or even 4.


Look, Vista isn't the best OS that MS has ever produced, but it does function OK. Saying that it "can't cope with ... even 4" cores is a little over the top. The point of the article is that concurrency was a bigger problem in Vista because of lock contention. That contention has been reduced in Win7 by making locking more fine-grained, thus making each of the cores more efficient, since they spend more time doing productive work and less time waiting around for locks to clear. If anything, your Vista box will be more efficient with one or two cores than with 64 because of the lower lock contention; as you add cores, you increase contention and, in turn, reduce throughput through the global lock. These changes primarily make the kernel more scalable (as the article points out).
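
To make the contention point concrete, here is a toy sketch of the difference between one global lock and finer-grained locking - approximated here with a hash-striped lock table in C with pthreads, not Windows' actual per-page PFN code:

```c
/* Coarse vs. fine-grained locking. */
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define NSTRIPES 256

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;  /* old way */
static pthread_mutex_t stripe_locks[NSTRIPES];                   /* new way */

static void init_stripes(void)
{
    for (int i = 0; i < NSTRIPES; i++)
        pthread_mutex_init(&stripe_locks[i], NULL);
}

static void update_page_coarse(uintptr_t pfn)
{
    pthread_mutex_lock(&global_lock);   /* every CPU funnels through here */
    /* ... update this page's bookkeeping ... */
    pthread_mutex_unlock(&global_lock);
}

static void update_page_fine(uintptr_t pfn)
{
    /* Each page hashes to one of many stripe locks, so threads working
     * on different pages rarely contend with each other. */
    pthread_mutex_t *l = &stripe_locks[pfn % NSTRIPES];
    pthread_mutex_lock(l);
    /* ... update this page's bookkeeping ... */
    pthread_mutex_unlock(l);
}
```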

Edited 2009-11-17 19:10 UTC

Reply Score: 2

RE[2]: Comment by Gone fishing
by Gone fishing on Wed 18th Nov 2009 05:14 UTC in reply to "RE: Comment by Gone fishing"
Gone fishing Member since:
2006-02-22

I was just quoting the article - however, my point is that I don't think it's just multiprocessor support that makes Vista dog slow. I use it on a dual core and it's awful.

Folk complained about ME, but compared to Vista, ME looks like a good deed in a naughty world - admittedly the underlying technology in Vista might have been a step forward, but the user experience is miserable.

MS apologists might like to say Vista isn't that bad - that if you run it on a quad core with 8 gigs of RAM it's OK. It isn't; it's awful - something MS will soon wish to forget. If MS had any decency they would make upgrades from Vista to Windows 7 almost free.

Reply Score: 2

nt_jerkface Member since:
2009-08-26

Or those who aren't vulnerable to tech groupthink.

Vista vs XP on an EEEPC
http://www.youtube.com/watch?v=EXw7v1bxpSs

I ran this test several times off cam, and in some cases XP beat Vista and in other cases Vista beat XP by a small margin. There was no consistent differential in score. In this particular video XP just happens to beat Vista by a few points. Overall I'm surprised at how well the 900 handles Vista's overhead compared to XP.

Reply Score: 2

RE[2]: Comment by Gone fishing
by Laurence on Wed 18th Nov 2009 10:30 UTC in reply to "RE: Comment by Gone fishing"
Laurence Member since:
2007-03-26

Look, Vista isn't the best OS that MS has ever produced, but it does function OK.


I'm sorry, but just "functioning OK" isn't good enough when the company in question is the most profitable software house on Earth, Vista is (was) your flagship OS, and you're the runaway market leader for desktop OSes.

In situations like that, I'd expect the OS to function brilliantly, rather than just "OK" compared to some free OSes.

But then maybe I expect too much from my market leaders?

Reply Score: 3

RE[3]: Comment by Gone fishing
by plague on Sun 22nd Nov 2009 14:44 UTC in reply to "RE[2]: Comment by Gone fishing"
plague Member since:
2006-05-08

THANK YOU!!

I've been saying this to people I know for a long time and they all just respond with something like: "well, if you don't like it, why don't you code an OS yourself and show how it's done?" or "You just don't like it because it's trendy to not like it" or "yeah, your Linux works soooo much better, try playing [insert game] on it" or "it's not Microsoft's fault [insert application or hardware] doesn't work, it's the other company's fault!".

People always contradict themselves, blaming other OSes because xyz doesn't work despite being a Windows app, but then excusing Microsoft for the exact same thing, instead blaming the manufacturer of said app or hardware. And making excuses that it's up to everyone to keep up with the times and upgrade their computers, not Microsoft's responsibility to make sure their software runs on grandmother's toaster. Instead of admitting that an OS doesn't have to be slow and bloated and require the absolute latest hardware to run. It can be small, nimble, backwards compatible, and beautiful looking, and STILL run on grandmother's toaster. It all depends on the coders, and from the market leader, I expect nothing less.

Reply Score: 1

v RE: Comment by Gone fishing
by cb88 on Tue 17th Nov 2009 19:02 UTC in reply to "Comment by Gone fishing"
RE[2]: Comment by Gone fishing
by BluenoseJake on Tue 17th Nov 2009 22:53 UTC in reply to "RE: Comment by Gone fishing"
BluenoseJake Member since:
2005-08-11

Just like all those commercial games that show up on OS X and Linux?

Reply Score: 3

RE[3]: Comment by Gone fishing
by sbenitezb on Wed 18th Nov 2009 02:52 UTC in reply to "RE[2]: Comment by Gone fishing"
sbenitezb Member since:
2005-07-22

The bigger problem seems to be most game studios using DirectX instead of OpenGL. It's a shame because they are missing a lot of customers.

Reply Score: 2

RE[4]: Comment by Gone fishing
by BluenoseJake on Wed 18th Nov 2009 04:49 UTC in reply to "RE[3]: Comment by Gone fishing"
BluenoseJake Member since:
2005-08-11

I doubt they see it that way, with over 90% of the market running Windows, they feel they don't need to bother.

Reply Score: 5

RE[4]: Comment by Gone fishing
by Panajev on Wed 18th Nov 2009 09:06 UTC in reply to "RE[3]: Comment by Gone fishing"
Panajev Member since:
2008-01-09

What customers are they missing out on? Until OpenGL gets more competitive with DirectX, switching to OpenGL for games means worse games for your Windows-based customers, who are 98% of your expected user base anyway... supporting both APIs means higher development costs, so Windows + DirectX only is the wisest choice.

Reply Score: 3

RE[4]: Comment by Gone fishing
by lord_rob on Wed 18th Nov 2009 13:12 UTC in reply to "RE[3]: Comment by Gone fishing"
lord_rob Member since:
2005-08-06

Even though I modded BluenoseJake and Panajev up, I'd love to agree with you. However, they are right. Microsoft had announced in the past that they were going to merge Direct3D with OpenGL.

Not exactly the same thing, but they had a proprietary protocol to read mail from Hotmail directly in Outlook Express (WebDAV). This protocol was reverse-engineered and integrated into Linux (hotway). Microsoft then decided to change its WebDAV protocol for some other closed protocol. When WebDAV was about to be replaced, due to pressure (Gmail?), they came back to standard POP3 over SSL.

I doubt the same will happen for Direct3D, because it's already THE standard for 3D games on Windows, OpenGL being only a follower.

Reply Score: 2

RE[5]: Comment by Gone fishing
by sbenitezb on Wed 18th Nov 2009 14:31 UTC in reply to "RE[4]: Comment by Gone fishing"
sbenitezb Member since:
2005-07-22

But Windows is not the only gaming platform. How about consoles? They are missing the consoles market, not only the OS X or Linux market.

Reply Score: 2

RE[6]: Comment by Gone fishing
by BluenoseJake on Wed 18th Nov 2009 15:25 UTC in reply to "RE[5]: Comment by Gone fishing"
BluenoseJake Member since:
2005-08-11

But Windows is not the only gaming platform. How about consoles? They are missing the consoles market, not only the OS X or Linux market.


How? Nobody cares if your console is running OpenGL or DirectX or something else. You don't write games for OpenGL or DirectX, you write the game for the Xbox 360 or the PlayStation 3. I don't know about the PlayStation, but with the Xbox, the development tools also allow you to publish the exact same game for Zune and Windows as well as the console.

I don't see them missing anything.

Reply Score: 2

RE[2]: Comment by Gone fishing
by testman on Tue 17th Nov 2009 22:55 UTC in reply to "RE: Comment by Gone fishing"
testman Member since:
2007-10-15

AROS doesn't even have memory protection. I may as well compare it to the boot time of my Commodore 64.

Reply Score: 6

RE[2]: Comment by Gone fishing
by cb88 on Wed 18th Nov 2009 15:37 UTC in reply to "RE: Comment by Gone fishing"
cb88 Member since:
2009-04-23

I see the Windows fanboys don't like my comment... sheesh.

Is it not a fact that every OS except perhaps AROS and Haiku is ridiculously slow to boot?

I have only installed Haiku myself, but those videos of AROS were mind-boggling.

Reply Score: 1

RE: Comment by Gone fishing
by rockwell on Wed 18th Nov 2009 20:35 UTC in reply to "Comment by Gone fishing"
rockwell Member since:
2005-09-13

//which it doesn't - why then is it dog slow with two cores or even one - in fact why is it dog slow? //

Umm ... because you f--ked it up somehow, or your hardware is cheap-ass shit?

My old Pentium D box runs Vista plenty fast.

Reply Score: 2

Arrggg!
by Bill Shooter of Bul on Tue 17th Nov 2009 18:09 UTC
Bill Shooter of Bul
Member since:
2006-07-14

"A lot of people might think, 'Wow, 97 megabytes doesn't seem like a lot of free memory on a machine of that size'," said Wang, "And we like to see this row actually be very small, 97 MB, because we figure that free and zero pages won't generally have a lot of use in this system. What we would rather do, if we have free and zero pages, is populate them with speculated disk or network reads, such that if you need the data later, you won't have to wait for a very slow disk or a very slow network to respond."


That is the ideal situation, in which a user's applications do not use much memory except that which is used for loading objects off disk. That is not my typical usage of an operating system. Why can't I just control the amount of memory it uses for cache? Do what you want for the default, but let me tweak it, for heaven's sake. If you're right that the default behavior of caching everything is so correct, then I'll admit I'm wrong after trying to improve upon it.

Reply Score: 1

RE: Arrggg!
by slight on Tue 17th Nov 2009 19:06 UTC in reply to "Arrggg!"
slight Member since:
2006-09-10

The nice thing about caches, generally speaking, is that if you need the space for something else you can just discard the cache, so when your programs need that space, the cache gives it up to the programs.

The chances that you can manage the memory better than the kernel are slim.

There is an argument about how to tune whether very old pages get swapped out, though, which Linux exposes through its 'swappiness' parameter (great name, that ;)
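
For the curious, on Linux that knob lives in procfs; a minimal C sketch of reading it (higher values make the kernel more willing to swap out idle program pages in favour of disk cache):

```c
#include <stdio.h>

int main(void)
{
    int swappiness;
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");

    if (f == NULL) {
        perror("/proc/sys/vm/swappiness");
        return 1;
    }
    if (fscanf(f, "%d", &swappiness) == 1)
        printf("vm.swappiness = %d\n", swappiness);
    fclose(f);
    return 0;
}
```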

Reply Score: 2

RE[2]: Arrggg!
by Bill Shooter of Bul on Tue 17th Nov 2009 23:44 UTC in reply to "RE: Arrggg!"
Bill Shooter of Bul Member since:
2006-07-14

I can set the size of the virtual memory page file in Windows, or leave it at the default setting. Why can't I also tune this, since it essentially controls similar behavior?

Reply Score: 2

RE[3]: Arrggg!
by PlatformAgnostic on Wed 18th Nov 2009 02:12 UTC in reply to "RE[2]: Arrggg!"
PlatformAgnostic Member since:
2006-01-02

The pagefile size has no effect unless it is set to be too small. The OS will not use the pagefile to store information that could just as well be stored in memory (old modified pages do eventually get written out, but they won't be discarded and read back in unless there's other demand for memory).

Reply Score: 2

RE[4]: Arrggg!
by Bill Shooter of Bul on Wed 18th Nov 2009 03:20 UTC in reply to "RE[3]: Arrggg!"
Bill Shooter of Bul Member since:
2006-07-14

Edit: I'm tired, I didn't understand that I had posted the previous message

Response:
Yeah, obviously the page file setting only comes into play when Windows would like to allocate more memory (for cache) than you have. It used to grow the page file by loading things from the hard disk into memory (which was the page file, located... on the same stupid slow hard disk). Read on for why I used to set this to be pretty small.



Additional page file Comment:

But I can set the size of the virtual memory pagefile. I used to have to do this on weaker systems to prevent them from running out of space while the computer was running.

I'd rather have the option to control it. My memory usage is unconventional; I understand why they did it this way. But if you gave me a choice, I'd prefer to have the option. As for the fact that you have to explain this over and over again to everyone: you could have avoided all of that with just an option. Let people play with it and discover how awesome Windows engineers were in their wonderful default setting! Sometimes people need to see the before picture in order to comprehend the beauty of the after.

Edited 2009-11-18 03:27 UTC

Reply Score: 3

RE[2]: Arrggg!
by cb88 on Wed 18th Nov 2009 00:07 UTC in reply to "RE: Arrggg!"
cb88 Member since:
2009-04-23

Actually, I'm pretty sure that exokernel designers would disagree with your opinion on the kernel being the best at memory management. See here: http://pdos.csail.mit.edu/exo.html

The drawback is that exokernels have an inherent tendency toward bloat, though that could be avoided...

Reply Score: 0

RE[3]: Arrggg!
by m_pll on Wed 18th Nov 2009 04:29 UTC in reply to "RE[2]: Arrggg!"
m_pll Member since:
2009-07-16

Deciding what pages to prefetch into unused memory is a difficult problem that requires a lot of high-level code that doesn't really belong in the kernel.

And this is precisely why Windows doesn't do this in the kernel. Prefetching decisions are made by the Superfetch service, which runs entirely in user mode. The kernel provides some basic interfaces that Superfetch relies on, but all the logic is in user space.
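
As a rough illustration of that split: the policy layer just decides *what* to read ahead, and warming the cache can be as mundane as reading the predicted files. The sketch below shows the general idea in plain, portable C - the file names are invented, and this is not Superfetch's actual (private) interface:

```c
/* Toy user-mode prefetcher: reading a file pulls its pages into the OS
 * file cache; we discard the data - the cached pages are the point. */
#include <stdio.h>

static void warm_cache(const char *path)
{
    char buf[64 * 1024];
    FILE *f = fopen(path, "rb");

    if (f == NULL)
        return;
    while (fread(buf, 1, sizeof buf, f) == sizeof buf)
        ;   /* keep reading until the file is exhausted */
    fclose(f);
}

int main(void)
{
    /* Hypothetical prediction list; a real service would build this
     * from observed usage patterns. */
    const char *predicted[] = { "app.dll", "data.idx", NULL };

    for (int i = 0; predicted[i] != NULL; i++)
        warm_cache(predicted[i]);
    return 0;
}
```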

Reply Score: 4

RE: Arrggg!
by sbergman27 on Tue 17th Nov 2009 23:57 UTC in reply to "Arrggg!"
sbergman27 Member since:
2005-07-24

Why can't I just control the amount of memory it uses for cache?

I don't understand this. Any memory that is not being used right now for programs can be used for caching. And if programs suddenly have a need for more memory, that memory can simply be freed. Instantly. No disk activity required. Why would you *want* to limit what the kernel does with otherwise unused memory, which can be put to good use?

In the Linux world, we do have a bit of a conflict between those folks who think that programs' seldom-used pages should get swapped out to make more room for disk cache, and those who feel that programs' pages are sacred and should not be swapped out unless absolutely necessary. But that is a different issue.

Reply Score: 4

RE[2]: Arrggg!
by sbenitezb on Wed 18th Nov 2009 03:02 UTC in reply to "RE: Arrggg!"
sbenitezb Member since:
2005-07-22

We have enough memory in today's computers that swapping is mostly unneeded. If you have more than 1GiB of memory and make fair use of your computer, you'll see that most of the time the swap isn't needed at all. I have 1GiB and my KDE desktop uses no more than half of it most of the time, with browser, mail client, chat client and torrent client running. Any program that's using more than that is probably misbehaving and ought to be terminated. When you have swap, say double your RAM, and a program goes rogue, it will start consuming all available memory, and then the kernel will allocate all available swap to it. That's gigabytes of swapping until the OOM killer kicks in, producing a lot of disk thrashing and the usual slowdown. The disk I/O produces the feeling that you need a new computer. Hopefully, in a few years swap will be totally unnecessary.

Reply Score: 1

RE[3]: Arrggg!
by cerbie on Wed 18th Nov 2009 09:42 UTC in reply to "RE[2]: Arrggg!"
cerbie Member since:
2006-01-02

I doubt it will become unnecessary. We will need a fair paradigm shift before that occurs, I think. Having more memory begets uses for more memory.

However, for general use cases, more RAM and no swap works great, on both Windows and Linux. Any process that can try to use up all my RAM deserves to be forcefully killed ;) .

Reply Score: 2

Application Breakage...
by TemporalBeing on Tue 17th Nov 2009 18:11 UTC
TemporalBeing
Member since:
2007-08-22

As it turns out, some seriously impressive changes have been made to the very core of Windows - all without breaking a single application.


Well, when you change the code without changing interfaces then yes - you better not have application breakage.

Except, in this case they improved performance, and there are likely applications out there that relied on the old timing to keep from breaking - so they have likely broken some applications.

Though anyone writing an application that is that dependent on timing deserves to have it break when the timing changes like that.

Reply Score: 2

v An extreme but real example of two things
by Bahadir on Tue 17th Nov 2009 19:32 UTC
Thom_Holwerda Member since:
2005-06-29

An extreme and real example of how the linux kernel is much superior to the windows kernel, and how it does not matter a tad whether you have a badly designed kernel, provided that you have a more user-friendly and compatible interface.


...?

Reply Score: 3

tomcat Member since:
2006-01-06

An extreme and real example of a poster not having any clue what a kernel does but who nonetheless feels compelled to post meaningless crap.


There, fixed it for you.

Reply Score: 12

Bahadir Member since:
2007-05-19

"An extreme and real example of a poster not having any clue what a kernel does but who nonetheless feels compelled to post meaningless crap.


There, fixed it for you.
"

Tomcat, the old OSNews user, why do you get so angry? There is a point to what I am saying. I used a single sentence - I guess that's why it wasn't addressed correctly. Sometimes summaries don't work.

Here's an excerpt from the post:

"The problem with the PFN lock is that the huge majority of all virtual memory operations were synchronized by a single, system-wide PFN lock


By PFN he is perhaps talking about the struct page lists of all physical pages. I know the stuff, having written one myself.
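
For readers who haven't seen one: a simplified sketch of what an entry in such a page-frame database might look like (fields invented for illustration - this is neither Linux's struct page nor Windows' real PFN entry layout):

```c
#include <stdint.h>

/* One descriptor per physical page frame.  With a single global PFN
 * lock, updating any entry serializes against updates to every other
 * entry - the contention the quoted article describes.  Fine-grained
 * locking gives each entry (or group of entries) its own lock. */
struct page_frame {
    uint32_t           ref_count;  /* number of mappings of this frame   */
    uint32_t           flags;      /* free, zeroed, dirty, locked, ...   */
    struct page_frame *next;       /* linkage on the free/zero/etc. list */
    void              *owner;      /* backing object (file, anon memory) */
};

/* E.g. 4 GB of RAM in 4 KB pages -> roughly a million entries. */
static struct page_frame pfn_database[1 << 20];
```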

Now, having a single lock for this? Up until Windows Vista, in 2009? That doesn't sound right to me.

Projects have different priorities, and I understand that. The Windows project is about customer-oriented usability and compatibility. It is not about performance or the best technical architecture.

The Linux kernel is there for technical superiority. People with high technical skills (not saying this isn't true for the Windows kernel, but it's more true for the Linux kernel) contribute to the project because of their technical interests. Also, Linux has an evolutionary aspect that strengthens it, which is not true for many other kernels out there.

The result? In many ways the Linux kernel is technically superior. But Windows is used a lot more. Before Ubuntu I wasn't able to use Linux for everyday use myself. That's what I wanted to summarize in a single sentence.

Reply Score: 0

rockwell Member since:
2005-09-13

// That's what I wanted to summarize in a single sentence//

And, like most freetards, it came out f--k-all because you're an idiot.

Reply Score: 0

tomcat Member since:
2006-01-06

This is sad because Linux kernel IS indeed superior to the windows kernel.


How, exactly, is the Linux kernel superior to the Windows kernel?

Reply Score: 2

sbergman27 Member since:
2005-07-24

How, exactly, is the Linux kernel superior to the Windows kernel?

While I would not be inclined to make any sweeping statement such as "X is superior to Y", a willingness to open up the code to the wider world seems a good long-term investment. Despite any ups and downs, my investment of confidence, professionally, has performed well over the years. I see no reason to abandon it. I would provisionally call that superior.

Reply Score: 2

kragil Member since:
2006-01-04

An open system can be better adapted to your needs.

Just look what the HPC guys are using.

New Top500
http://www.top500.org/stats/list/34/osfam

Reply Score: 1

moondevil Member since:
2005-07-08

Maybe because Linux is free, while for a commercial OS they would need to pay per machine, or even per core?

Reply Score: 0

rockwell Member since:
2005-09-13

Right because 99% of computer users are working on high-end supercomputing clusters. Therefore, linux is better.

Typical freetard bullshit.

Reply Score: 1

lemur2 Member since:
2007-02-17

Right because 99% of computer users are working on high-end supercomputing clusters. Therefore, linux is better. Typical freetard bullshit.


http://www.serverwatch.com/trends/article.php/3848831/Lack-of-Innov...

Reply Score: 2

lemur2 Member since:
2007-02-17

Right because 99% of computer users are working on high-end supercomputing clusters. Therefore, linux is better.


As far as what computer users are running lately:

Only 500 machines, but approximately 2 million copies of linux:

http://www.top500.org/stats/list/34/procclass
http://www.top500.org/stats/list/34/osfam

At the other end of the grunt scale, 11 million netbook machines with one copy of Linux each:

http://www.computerworld.com/s/article/9140343/Linux_s_share_of_net...

A bit like squeezing from both ends towards the soft middle.

Reply Score: 2

rockwell Member since:
2005-09-13

Um, way to completely miss the point! Go freetardians and servers-are-the-only-computers mentality!

Reply Score: 2

Comment by talaf
by talaf on Tue 17th Nov 2009 22:48 UTC
talaf
Member since:
2008-11-19

And? The NT kernel never "let me down" either; that's hardly a meaningful metric now, is it?

Very good article, BTW, though as always with this subject it'll just attract trolls again and again.

Reply Score: 1

v What's the point?
by Berend de Boer on Wed 18th Nov 2009 01:01 UTC
license_2_blather
Member since:
2006-02-05

I hope they deploy all these high-performance features on the 64-core box that is processing the free Windows 7 upgrades. Because I sure as hell haven't received mine yet.

Until that happens (and I find out I can still disable StupidFetch), they can geek out all they want. I'll reserve judgment.

Reply Score: 1

Correction to Thom's text...
by leavengood on Wed 18th Nov 2009 05:13 UTC
leavengood
Member since:
2006-12-13

The more fine-grained approach in Windows 7 and Windows Server 2008R2 yields some serious performance improvements: on 32-processor configurations, some operations in SQL and other applications perform 15% faster than on Vista. And remember, the new fine-grained method has been implemented without any application breakage.


In the linked article it says 15 TIMES faster, not percent. That is a big difference ;)

Either way, it is good to see Microsoft putting effort into performance. It will keep the other operating systems on their toes, as we can no longer rely on Windows being as crappy and slow as Vista.

This is also making me wonder about some of the locking semantics in the Haiku kernel. I assume we have more fine-grained locking sort of like what Win7 now has.

Reply Score: 1

PlatformAgnostic Member since:
2006-01-02

It's not 15x faster. That would imply that it was untenably bad before.

Reply Score: 1

JonathanBThompson Member since:
2006-05-26

This is the important point Leavengood was referring to:

"While spinlocks comprised 15% of CPU time on systems with about 16 cores, that number rose terribly, especially with SQL Server. "As you went to 128 processors, SQL Server itself had an 88% PFN lock contention rate. Meaning, nearly one out of every two times it tried to get a lock, it had to spin to wait for it...which is pretty high, and would only get worse as time went on."

So this global lock, too, is gone in Windows 7, replaced with a more complex, fine-grained system where each page is given its own lock. As a result, Wang reported, 32-processor configurations running some operations in SQL Server and other applications, ended up running them 15 times faster on Windows Server 2008 R2 than in its WS2K8 predecessor -- all by means of a new lock methodology that is binary-compatible with the old system. The applications' code does not have to change."


Yes, Ryan, I think it'd be nice to address such scalability issues in the Haiku kernel, but... not worry about them for R1, because that's too sensitive an area to mutate in such a major way without a rather high risk.

Reply Score: 2

Comment by bnolsen
by bnolsen on Wed 18th Nov 2009 13:07 UTC
bnolsen
Member since:
2006-01-06

I'm going to get downvoted for this (it is off topic) but...

On OSNews, generally only very, very few posts get downvoted to the point that they're not readable. This thread all of a sudden has tons, and a lot of the "killed" posts bring up points with valid discussions that are now partly ruined because the original post isn't visible for the context of the discussion (many are not bad-taste trolls).

What's going on here?

Reply Score: 1

RE: Comment by bnolsen
by talaf on Wed 18th Nov 2009 13:37 UTC in reply to "Comment by bnolsen"
talaf Member since:
2008-11-19

I'm going to get downvoted for this (it is off topic) but...

On OSNews, generally only very, very few posts get downvoted to the point that they're not readable. This thread all of a sudden has tons, and a lot of the "killed" posts bring up points with valid discussions that are now partly ruined because the original post isn't visible for the context of the discussion (many are not bad-taste trolls).

What's going on here?


What comment are you referring to? All the downvoted comments are by morons and trolls, on the first page at least.

There's the one thinking "giant locks" were removed from Unices DECADES AGO (lol) - they're still there, bro; they're being gradually removed, AFAIK, just like MS is doing.

There's the one claiming MS has an inferior kernel, though he wouldn't be able to tell why, or even define what a kernel is, for the life of him if asked.

Some random linux troll.

And the other one comparing vista boot time to AROS.

Hardly anything useful there.

Edited 2009-11-18 13:38 UTC

Reply Score: 5

RE[2]: Comment by bnolsen
by moondevil on Wed 18th Nov 2009 16:11 UTC in reply to "RE: Comment by bnolsen"
moondevil Member since:
2005-07-08

Couldn't agree more. Sometimes I do miss the early OSNews days, without the Slashdot-like trolling.

Reply Score: 1

RE[3]: Comment by bnolsen
by Thom_Holwerda on Wed 18th Nov 2009 16:23 UTC in reply to "RE[2]: Comment by bnolsen"
Thom_Holwerda Member since:
2005-06-29

Couldn't agree more. Sometimes I do miss the early OSNews days, without the Slashdot-like trolling.


I don't see the problem. The comments are properly modded down and no longer in sight. Yet you still complain about them. Isn't the fact that they're no longer visible kind of an indication in and of itself?

Why look them up specifically? I just don't get it.

Reply Score: 1

casuto
Member since:
2007-02-27

The kernel's performance enhancements are negligible on current systems. You need at least a 32-core system to take advantage of them. Win7 is just ready for the future...

Edited 2009-11-18 17:09 UTC

Reply Score: 2

Dropping Vista kernel absolutely natural
by dulac on Thu 19th Nov 2009 16:18 UTC
dulac
Member since:
2006-12-27

The reason for the new approach, maybe following FreeBSD, is actually much simpler than the given explanations, and is the natural way to deal with the problems pinpointed for years in the architectural change to multiple CPUs.

The problem is simply the bottleneck between multiple CPUs and multiple RAM pages.

Therefore:
- Putting non-relevant CPUs to sleep drops attempts to get at in-memory data.
- Dividing memory (apparently non-free) eases memory administration.

Microsoft is apparently adopting strategies already used successfully by AMD... and FreeBSD, with a touch of novelty that is not really novel. But progress, nonetheless, for the white elephant that Windows has always been. Finally it is changing into a mouse, to become agile and fast.

May we say... at last?
And also, at last, we may say the promises of Win95 are being implemented.

One thing seems to be missing, though: security!
Because the semantic complexity (if it is as declared) may open paths to insecurity (this is a comment in the abstract, in a loose context). Unless efficiency and security are at different levels - and I believe they are (but we are talking about Microsoft, so trashing external insight is always highly probable, as is Microsoft obtaining patents from those external insights, obvious or not).

Cheers.

Reply Score: 1