Scott Charney, chief security strategist at Microsoft, told developers at the TechEd 2003 conference in Brisbane that information collected by Dr Watson, the company’s reporting tool, revealed that “half of all crashes in Windows are caused not by Microsoft code, but third-party code” . . . Charney also reinforced Microsoft’s message to developers and network administrators that they needed to build secure applications and networks “from the ground up”
I still hold the OS responsible if it can’t handle an application crash.
-bogey
Sorry, no flame intended, but this guy is talking complete bullshit. Buggy programs run in userspace and shouldn’t be able to crash the kernel, imho. Like when IE crashes, it takes the complete OS down … Sounds to me like a problem in how Windows (or any other OS) is built up. I’ve never had a single program crash my OS.
Well, if 3rd party programs cause 50% of the crashes, you’d think OSX, Linux, BSD, UNIX, etc would crash more. Oh well.
Are you saying that as long as I don’t install a graphics card, a modem, a network card, or any other hardware or software to get stuff done, I won’t have problems? Hell, why don’t we blame the RAM, motherboard and CPU at the same time.
Charney also reinforced Microsoft’s message to developers and network administrators that they needed to build secure applications and networks “from the ground up”.
I’ll get right on that. Yeah, as soon as Microsoft does.
So this means that the other 50% is Microsoft’s fault?
If this percentage was more in the 25% range, I would think “wow!”, but since it means one out of every two crashes is Microsoft’s fault, I’d say they are not practicing what they preach….
Take out one crash-prone part of Windows, and you’ll stabilize your system: Windows’ Explorer shell.
I’d advise everyone who experiences crashes in Windows to start looking for an alternative shell, like Talisman, LiteStep, Aston …
It helps, especially for Win95/98/ME users.
I can fully believe that most crashes are caused by buggy drivers. But QNX demonstrates that you can get them out of kernel space and still get excellent performance.
as taranis said, this means that half of all crashes are directly the fault of Microsoft, and the other half are indirectly the fault of Microsoft!
i’ve had 3rd party software ‘hang’ linux (XFree locked up), but never crash it…
Does Blaster count as a third-party app?
Yet it is MS, and MS has the solution. Third-party developers don’t have the MS Windows source code, just some half-documented APIs; if MS wants no third-party software crashes, then give them access to the source code.
How would access to the source code of NT/2K/XP help you debug your application? Oh no… you’re a perfect programmer, no bugs in your apps, never.
I’ve never seen Windows crash because IE failed since I’ve been on NT4… nor, since IE5, have I had to restart explorer.exe when IE crashes completely. (Still… no kernel crash, just restart explorer.exe. You have to restart X sometimes too, no?)
Dr Watson reports application crashes too, not just Windows kernel crashes… and to my knowledge it only exists on NT-based Windows.
Nothing is perfect. Neither Windows nor the apps. But don’t shoot the ambulance…
OK, this is something MS should be ashamed of, not bragging about! There are countless 3rd-party apps, and all of them combined account for only 50% of the crashes. That leaves MS as the single biggest offender, with 50% all on its own. Hearing this reminds me of that saying… You know, pots, kettles, and all that.
I guess the only way around the problem, from MS’s point of view, is to not run third-party software. If you only run software produced by MS then you will never have a crash.
Writing an OS is a difficult thing, many hours of debugging. Sometimes _EVERY_ OS can crash because of hardware bugs, e.g. the CMD640 chipset, the Pentium f0 0f bug, PCI BIOS bugs, BIOS bugs, CPU bugs.
> Charney also reinforced Microsoft’s message to developers and network administrators that they needed to build secure applications and networks “from the ground up”.
No problem – I’ll build it on Linux or *BSD, thanks.
Charney also reinforced Microsoft’s message to developers and network administrators that they needed to build secure applications and networks “from the ground up”.
Does this mean 3rd party apps should be written so they by-pass Windows?
It’s all very well saying applications should be secure “from the ground up” but an application can only be as secure as the foundation it’s built upon!
So if my OS crashes because of a problem with a media file, it is not the fault of the OS but Media Player’s fault, thus the fault of the third party that produces Media Player, which is Microsoft. But from what I understand, Media Player is going to be a part of the OS. Thus if the OS crashes because of a bad media file, then it is again the fault of the OS, which is produced by … Microsoft!
Poor Bill Gates. He just can’t win.
It’s a fact which came out in the antitrust trial that MS has secret APIs and that MS purposely codes against other apps like Netscape. MS has no right to bitch whatsoever when people writing drivers, for example, don’t have the necessary information to write proper (informed) code.
So either MS should publish the docs to the API interfaces or shut-the-hell-up.
isn’t this the pot calling the kettle black?
*rolling on the floor*
and they did say that third-party apps were only responsible 50 percent of the time.
oh well
Give me a break, MS. The fact that applications written for the Windows OS can bring it down is a fault of the OS. Two words (sort of) for you, MS:
“Bomb.app”
Microsoft needs to do like Apple did and troubleshoot every way imaginable that an app might take down their OSes, and stay on top of squashing the bugs. Then they need to listen to user feedback on what unimaginable ways people have found to crash the OS, and squash those bugs too. Unfortunately, all that sounds like work without much pay; why bother, when most of the world believes there is only one OS on the market? Better revenue can be had by adding bloat/features and repackaging it as a new product, rather than fixing what has needed fixing since Windows 3.1 or so.
Actually, they need to do a lot more than that. Baby steps.
MS=BS
MS = responsible for 50% of Windows crashes. ONE COMPANY.
_THOUSANDS_ of other software houses take up the other half.
I’d say MS is abusing its monopoly powers to corner the market on buggy pieces of shit.
…and then passing the buck. The 3rd-party stuff shouldn’t be able to crash the OS. I know Eugenia’s been educating me on the factors of kernel drivers and how you use them to provide speed and low latency, but the fact is that the only way to provide a stable OS that cannot be crashed by hardware drivers and 3rd-party software is to stop accepting kernel drivers and direct hardware access. The OS is supposed to provide all the close-to-the-hardware functions. That’s its purpose. If we have to sacrifice a few milliseconds here and there by eliminating kernel drivers and DMA and all those workarounds to originally bad design, so be it. Computers are fast enough.
I want to see an end put to all crashing at the hands of 3rd party products. That’s the OS’s responsibility and none of them, except maybe QNX, actually comply.
hi, any of you lot heard of XP? I’ve been running 3 computers for the past 2 yrs with the following OSes:
1) Win XP Pro
2) SuSE Linux 8.2
3) Red Hat Linux 7.1
the one that crashes the worst is Windows XP, but it crashes far less than the Linux distros. Why? Because XP isolates application crashes: when an app crashes, only that app goes down and the OS remains stable. Only when a kernel error occurs, which is very rare, does the whole OS crash, and when Explorer crashes in XP it is restarted while the other apps keep running, so as not to disrupt the system too much. So try thinking about what you’re saying. Also, the majority of crashes are the users’ fault and 3rd parties’: 3rd-party apps replace Windows .dll and .sys files with older, incompatible versions, which makes the system unstable, and users fill the OS up with so much shit that no .dll or .sys file belonging to the original OS is left. As for secret APIs, all OSes have them: Linux has an API for its TCP/IP stack that is very sparsely documented; you’re very lucky to find it on the net.
Whatever Microsoft says and whatever excuses the “no matter how bad it gets, Windows rulz” crowd gives, the fact is that Mac OS X has none of these wide-open vulnerabilities. Don’t even think about repeating the mantra that it’s because Windows has 95% of the market, because in this case that’s BS. Windows is so full of security holes simply because Microsoft doesn’t have to spend time on quality programming. Why is this? Because they know that their customers have very low expectations and suffer from the “it’s good enough” syndrome. In short, they don’t have to care, because they know that the majority of their customers will trudge on with whatever trash they put out. For the Windows apologists out there… have a good time downloading patches and reloading OSes…
Honestly, I don’t believe you….
It is extremely difficult to bring down SuSE 8.2 with the stock kernel running on well supported hardware.
By well supported hardware, I mean hardware using non-experimental drivers.
Apps just don’t crash the Linux kernel, unless you’re doing something experimental or wacky.
Even if there’s a keyboard lockup or something, you can usually ssh into your machine and correct the problem.
It is gratifying to know that the folks I bought my operating system from and the major applications I use (Office, browser, etc.) are the cause of 50% of my system crashes. It is astounding that a company is willing to admit such a thing openly.
In defense of this amazing statement I must say that they have done a lot to correct the problem in XP. However, from a statistical point of view this acceptance of blame/credit for so many crashes puts them in a very bad light. And then to berate the developers because they cause the other 50% is even more outlandish. Admitting to the fact that they killed their OS as much as the combined thousands of developers boggles the mind.
I think that MS should put their collective tails between their legs and crawl back into their sheltered world. There they will continue to be protected against the cries of their clientele.
Windows is riddled with design flaws that make it susceptible to 3rd-party bugs.
1) Too much code! The main components of my Linux system probably weigh in at a little over 12 million lines of code. That includes 4 million for all of KDE CVS, 3 million for the kernel, 2 million for X, 1 million for glibc, 1 million for GCC, and 1 million for other utilities. Windows XP, on the other hand, is upwards of 40 million lines of code! And that count doesn’t include a PIM, an office suite, an IDE and a compiler, or a full set of utilities for a wide variety of file formats (image viewers, PDF viewers, etc). Throw in MS Office, and that count jumps to more than 65 million lines of code! As Tanenbaum said, “adding more features adds more code, which adds more bugs!” Part of the problem is that Windows suffers from featuritis. The larger problem is that Windows suffers from bad layering and bad modularity. Too many pieces are tied closely together rather than being independent. Changes in one piece often have repercussions in others. You’ll sometimes hear Linux newbies bitching about all the different dependencies and all the abstraction layers in Linux software. Not only is that proper software design, but it’s the *only* way to properly manage the complexity of such large systems!
2) Too much stuff in the kernel. GUIs are complicated. Thus, things like the GDI should *not* run in the kernel! Neither should the HTTP handling in IIS run in kernel mode! In Linux, there are only two things that can hard-lock your system — a bug in a kernel driver, or a hardware failure. Even if X freezes and your keyboard dies, the kernel is still chugging merrily along and can be fixed via a quick ssh session. It seems that Microsoft hasn’t learned from their past mistakes. They’re taking steps to put even more code into the kernel. Next up are large parts of SQL and the .NET CLR (both to support WinFS).
3) Related to (2): DirectX. DirectX was cool back when having direct access to a graphics buffer was the only way to get good performance out of your hardware. These days, graphics cards are so abstracted that this is not the case. In fact, most new graphics cards take a performance *hit* when you access them directly, because that requires them to flush all sorts of buffers and do locking and whatnot. The DRI model, where the only piece of code that needs to be in the kernel is the DRM, which simply copies command buffers to the card from userspace, is actually the fastest way to program the hardware.
It’s kind of funny that MS mentions third parties. Lots of people go every day without using any third-party apps. Let’s see:
WinXP for the OS
Internet Explorer for the browser
Microsoft Office for the office suite
Outlook as the mail client
Visual Studio for the development environment
MSN Messenger for messaging
FrontPage for web publishing
Microsoft Project for project management
Microsoft Money for budget management
The first half dozen cover pretty much everything I do all day, and the whole lot probably covers 90% of what the average business user does. Throw all that on a machine with the stock Windows XP drivers, and you’ve got a configuration that is probably on the majority of machines out there! So even when it’s not Microsoft’s fault via WinXP, it’s Microsoft’s fault via one of their other fine products.
Ah, but Microsoft “innovates” their products “based on” other companies’ products. So, it *is* third parties’ faults. For example, some bugs in IE are probably there because of the guys from UCLA, so not only they should not get that half a billion dollars, they should pay Microsoft back for giving it a bad name.
If 50% of crashes are because of other applications then I have the following questions:
– why is it possible for a NORMAL application to crash the system?
– why are normal applications able to integrate that deeply into the system?
– why are developers integrating normal applications that deeply into the system?
– why is Microsoft not providing enough/all information for the developers who integrate that deeply into the system? (I mean: no hidden things, no hidden API functions, etc)
– why are other OSes so stable (compared to Windows)? What are they doing better?
– why is it so hard on Windows to get tools for debugging, tracing errors, etc.? (I am not writing about NICE tools! First they have to be GOOD; if they also look NICE, that’s a bonus, but not important)
– if so many Windows applications are crashing, then how come a lot of them have a certificate from Microsoft saying they are certified for Windows? BTW: certified for what? Where is the QUALITY aspect?
– etc…
“– why are other OSes so stable (compared to Windows)? What are they doing better?”
uhhh– because they are based on UNIX instead of chickenwire and styrofoam?
“– why are other OSes so stable (compared to Windows)? What are they doing better?”
uhhh– because they are based on UNIX instead of chickenwire and styrofoam?
thank you for the most stupid answer I’ve read in this thread!
My windoze Millennium and XP crashes are related to what, then…?
I guess windoze was built by copying code from others.
OR windoze is one of the worst OSes around…
Well then, that makes the statement true.
I think they’re referring to application-level crashes. When an app crashes in Windows XP, a message box pops up offering to email the Dr. Watson log to Microsoft(!) But the prompt sounds helpful enough: “Please tell Microsoft about the problem (yes/no)”.
This seems to be an antitrust abuse, as it gives Microsoft rather too much information about their customers’ pattern of application use, and where and how their competitors’ apps crash (call stacks, registers, thread counts, etc). And the logs are cumulative, so they contain the history of every crash on the workstation.
Man, the lynch mob is out tonight! If you read the actual words you see it says “third party code”, not “third party applications”. This includes drivers, and even the most capable OS can be brought down by a buggy driver.
Just to provide a little counter-weight.
It’s largely a matter of development philosophy. UNIX was architected. It follows a philosophy of minimalism and orthogonality. The NT kernel was also architected, and (in the 3.x series) was similarly stable and secure. But that architecture was butchered and perverted in later releases of the OS.
Just take a look at the Win32 API. Win32 has thousands of calls to accomplish what POSIX can do with 130. The CreateProcess() vs fork()/exec() comparison is often cited. CreateProcess() has 10 parameters (some of them structures with dozens more fields), most of them unrelated to the actual process creation; they handle things like security, environment, working directory, etc. In comparison, fork() has no parameters, and exec() has 3.
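For anyone who hasn’t used the POSIX side of that comparison, here’s a minimal sketch of the fork()/exec()/wait() pattern, using Python’s thin wrappers over the C calls (Unix-only; the `spawn` helper name is just for illustration):

```python
import os

def spawn(argv):
    """Run a program the POSIX way: fork(), exec() in the child, wait().

    Note the shape of the API described above: fork() takes no parameters
    at all, execvp() takes just the program name and its argument list,
    and everything else (environment, working directory, open file
    descriptors) is inherited from the parent and adjusted separately.
    Returns the child's exit code.
    """
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)  # conventional "exec failed" exit code
    # Parent: wait for the child and decode its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Compare that with CreateProcess(), where security attributes, inheritance flags, environment, working directory and startup info all travel through one ten-parameter call.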
In my experience with Win32 vs POSIX, there are two major things to consider: orthogonality and completeness. Orthogonality means that each function does only one thing, independent of other functions. Completeness means that each function does, within its domain, everything you need it to do. The consequence of the first is that you don’t have functions that do two loosely related things. The consequence of the second is that you don’t have two functions that do similar things. UNIX open() is a good example. open() opens a file. That’s all it does. But that file can be anything from a text file in a home directory, to a socket connected to a remote server. In contrast, Win32 has different functions for opening files than for opening sockets.
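The open() point can be made concrete. In the sketch below (Python’s os module again as a stand-in for the C calls; Unix-only), the same write() and read() work unchanged on a regular file and on a pipe, and the same would hold for a socket descriptor:

```python
import os, tempfile

def send(fd, data):
    # One write() for every kind of descriptor: the caller neither knows
    # nor cares whether fd is a regular file, a pipe, or a socket.
    os.write(fd, data)

# A regular file on disk...
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
send(fd, b"hello file")
os.close(fd)

# ...and a pipe, driven by the exact same call.
r, w = os.pipe()
send(w, b"hello pipe")
os.close(w)
pipe_data = os.read(r, 64)
os.close(r)

with open(path, "rb") as f:
    file_data = f.read()
```

Under Win32 the file half would go through CreateFile()/WriteFile() and the socket half through the separate Winsock send() family, which is exactly the non-orthogonality being described.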
Why harp so much on the API? The API guides the way you think. When you see an API that’s cluttered with Hungarian notation, weird naming conventions, and non-orthogonal, incomplete functions, your thinking becomes the same way. Further, the API should be the clean, sparkling exterior of the OS. If Win32 is Microsoft’s idea of clean, you can understand why the guts of their programs can be so rotten.
It’s largely a matter of development philosophy. UNIX was architected. It follows a philosophy of minimalism and orthogonality. The NT kernel was also architected, and (in the 3.x series) was similarly stable and secure. But that architecture was butchered and perverted in later releases of the OS.
Hence the reason why I harp back to UNIX every time. Had Microsoft based their operating system on a strong UNIX base, what we see now would never have occurred.
Microsoft COULD HAVE created an operating system like NeXT: a SysV-based core and a pure PostScript-driven interface. Throw OpenGL into the mix and you would have one heck of a desktop.
UNIX has matured over the last 20 years, and had Microsoft matured their operating system with the innovations that occurred, we would have a rock-solid operating system without the vast amounts of headaches we see today. Better yet, it would be UNIX 98 compliant, thus allowing all and sundry access to documentation via the Open Group.
Open source ISN’T the key to a stable and scalable operating system; open standards are. That is the key. The idea has been tested by numerous third parties, public consultation has occurred, etc. That is not to say that there won’t be errors.
Making a law takes a very similar approach: first an MP brings forward a bill, the parliament debates it, then a select committee is formed to request input from the public, the committee then suggests changes, then the bill is brought back to the parliament for further debate.
This is how an open standard is formed. Sure, it isn’t fast, but it does give a strong and robust core for any further development based off the original concepts.
I agree! We did a project on QNX4 some years ago. During development in an experimental 5-node network we tried a lot of very stupid things, but never once did the network or one of the kernels crash. That’s what I call reliability.
The OS has to have control over all applications and their usage of resources (memory, devices etc.), not the other way round.
Never ever should an OS be brought down completely by the fault of a single application.
There’s a lot to do B.G.: do it!
Regards Chris
Windows is riddled with design flaws that make it susceptible to 3rd-party bugs.
Name ’em. List the *design flaws* in Windows that make it more susceptible than other, comparable OSes (no, things like QNX are not comparable).
Too much code! The main components of my Linux system probably weigh in at a little over 12 million lines of code. That includes 4 million for all of KDE CVS, 3 million for the kernel, 2 million for X, 1 million for glibc, 1 million for GCC, and 1 million for other utilities. Windows XP, on the other hand, is upwards of 40 million lines of code!
Yes, but that doesn’t mean those 40 million LOC are all in use at once. Many will be spent in drivers you never install, in subsystems you never use (e.g. POSIX and OS/2), in parts of the system that are there purely for backwards compatibility and rarely get used, etc.
The larger problem is that Windows suffers from bad layering and bad modularity. Too many pieces are tied closely together rather than being independent.
This sort of interdependence and re-use of code is an example of *good* software design. It makes perfect sense that if some widely-used module breaks, then everything that re-uses that module will also break.
Changes in one piece often have repercussions in others.
As one would *expect* in any heavily modularised piece of software that makes extensive re-use of code.
You’ll sometimes hear Linux newbies bitching about all the different dependencies and all the abstraction layers in Linux software.
Yes, this is because these things suck and tend to be poorly handled.
Not only is that proper software design, but it’s the *only* way to properly manage the complexity of such large systems!
Huh ? Constant reinventing of the wheel and Not Invented Here syndrome causing developers to keep writing their own code instead of re-using existing code is *good* programming practice ? Where did you learn software development ?
Too much stuff in the kernel. GUIs are complicated. Thus, things like the GDI should *not* run in the kernel!
GDI does not run in the kernel, it runs in kernel *space* – and with a negligible overall impact on stability and a significant increase in performance.
Neither should the HTTP handling in IIS run in kernel mode!
At the moment it doesn’t. AFAIK it’s an option for people willing to make the stability sacrifice for the greater performance.
In Linux, there are only two things that can hard-lock your system — a bug in a kernel driver, or a hardware failure.
Or code running with root privileges. The same applies to Windows (although Windows’ inherently better security model makes it easier to reduce the potential impact from code running with elevated privileges).
Even if X freezes and your keyboard dies, the kernel is still chugging merrily along and can be fixed via a quick ssh session.
Sometimes. I’ve had X lock machines often enough to know it’s not always possible to kill it.
I’ll also reiterate my position that for the vast majority of users, an X crash is just as bad as a system crash. Not only because most of them lack the knowledge or facilities to be able to restart X without rebooting, but also because most of them will be doing everything from within X, so as soon as it dies, so does all their work. It’s the same as the specious argument that Linux’s great imposition of file permissions means a worm will only wipe out all the files belonging to the user who runs it.
They’re taking steps to put even more code into the kernel. Next up are large parts of SQL and the .NET CLR (both to support WinFS).
Firstly, I’d like to see some cites. Secondly, I’d appreciate it if you could go out and learn the difference between “in the kernel” and “in kernel space”. To help you along, it’s similar to the difference between building a kernel with all the drivers compiled in, or building them as modules.
DirectX. DirectX was cool back when having direct access to a graphics buffer was the only way to get good performance out of your hardware.
Huh ? The whole point of DirectX is so programmers *don’t* write to the hardware. They write to DirectX, and DirectX interfaces with the hardware (via the HAL).
These days, graphics cards are so abstracted that this is not the case.
Yes. They are abstracted by DirectX.
In fact, most new graphics cards take a performance *hit* when you access them directly, because that requires them to flush all sorts of buffers and do locking and whatnot.
We are not using DOS anymore. Nobody directly accesses graphics cards. Everyone uses abstraction interfaces like DirectDraw, Direct3D, OpenGL, etc.
The DRI model, where the only piece of code that needs to be in the kernel is the DRM, which simply copies command buffers to the card from userspace, is actually the fastest way to program the hardware.
Congratulations, you’ve just described the philosophy behind DirectX. Talk to an abstraction layer and let the abstraction layer deal with the hardware.
Your list of “design flaws”, apart from being woefully short, is at best gross misunderstanding of how things actually work. In particular, DirectX does a hell of a lot more than drive video cards.
Incidentally, for all you people blasting the article, it’s pretty bloody obvious they’re talking about *application crashes*, not OS crashes. The mention of Dr Watson should make that clear. In which case the 50% number is _easily_ believable, if not rather generous (you really think Microsoft writes 50% of the software out there ?).
Heck, even if they weren’t, I’d call 50% a pretty low number. In my personal experience, a good 95% of OS crashes in NT-based Windows are caused by shitty third-party hardware drivers (even more common now that XP is targeted at low-end consumers), hardware failures, and software running with elevated privileges mucking with OS internals. Even in older DOS-based Windows, the majority of problems are caused by bad drivers or horrible legacy apps abusing the OS.
Windows is pretty stable, all things considered. On NT/2k/XP, stick with quality hardware using Microsoft or WHQL certified drivers and avoid buggy programs that execute with elevated privileges (are you listening, McAfee ?) and OS crashes should be rare, if not nonexistent.
Had Microsoft based their operating system on a strong UNIX base, what we see now would never have occurred.
And instead we would have had other things to deal with, like Unix’s primitive and inflexible security model.
A SysV-based core and a pure PostScript-driven interface. Throw OpenGL into the mix and you would have one heck of a desktop.
Meanwhile completely fucking the entire existing customer and developer base due to a vastly different development environment and zero backwards compatibility.
Apple (who is obviously who you are thinking of) can get away with this today using OS X for two reasons:
Firstly, the hardware is fast enough to make full emulation of the old system for backwards compatibility a practical possibility. Back in 1988, when NT was being designed and the 20MHz 386 with 8MB of RAM was the cutting edge of PC technology (and had been for ~2 years), it wasn’t such a viable option.
Secondly, their complete monopoly control over their platform makes it much, much easier to drop support for older technologies and hustle along the user migration – much more so than Microsoft (company philosophies help here as well – conservative vs well, somewhat crazy).
UNIX has matured over the last 20 years, and had Microsoft matured their operating system with the innovations […]
For example ?
Unix has been maturing for over 30 years, NT for barely half that time. You’d bloody well hope Unix was better at some things.
[…] that occurred, we would have a rock-solid operating system without the vast amounts of headaches we see today […]
The vast majority of headaches are caused, as the article says, by third party code out of Microsoft’s control (although less so applications, as the article is clearly talking about, than drivers, which cause the real headaches). As I said elsewhere, stick NT/2k/XP on quality HCL hardware using Microsoft or WHQL drivers, avoid dodgy programs that require elevated privileges and OS crashes will be extremely rare. I can’t even remember the last time one of my Windows boxes crashed, but it was sometime back in early 1999 when I was still using NT4. Of course, my Quake framerates are substantially lower than they could be because I don’t use the latest-and-greatest video drivers.
Open source ISN’T the key to a stable and scalable operating system; open standards are. That is the key.
Open standards won’t help your OS stability and scalability one iota if its design is poor. They will help interoperability, but that’s about it. A protocol or API definition or a set of filesystem semantics isn’t going to help system stability in the slightest if the OS design itself is flawed (like, say, MacOS Classic or Windows 9x).
I challenge you to detail how “open standards” can have any noticeable effect whatsoever on an OS’s stability and scalability.
It’s largely a matter of development philosophy. UNIX was architected. It follows a philosophy of minimalism and orthogonality.
Calling Unix “architected” is pretty generous. It was more “evolved” than “architected”, and was never really meant to be stretched to the purposes it is used for today. That is why we have hideous kludges like SUID executables and “privilege separation” trying to work around the basic design flaws without opening up too much of a security hole.
What you call “architecture” is better described as “philosophy” – and even that’s always been pretty loosely followed (“everything is a file” – pig’s arse).
The Win32 API is a mess, undoubtedly, but NT wasn’t designed for Win32 and isn’t implemented in terms of it. NT was designed exceptionally well – superior to Unix in pretty much every way – it’s just that the tacked-on bits that came along afterwards are severely hampered by the need for backwards compatibility (which .NET is allegedly supposed to fix).
It’s somewhat unfortunate that Microsoft places such an emphasis on backwards compatibility and legacy support, because it’s caused them no end of grief with their from-the-ground-up next-generation OS.
I bet msft means the third party code THEY use…
LOL, just kidding, or am I?
Hard to say. As neither a Windows, Mac, nor Linux/BSD user, I can tell you: regardless of third-party code, the OS design needs to follow an understanding of Murphy’s law to be truly successful as a seriously good software platform for your third-party applications. If you write a driver that sends Windows into a BSOD, it’s not quite ready; but after working and working on it, contacting MSFT themselves, still getting the problems, changing the implementation and still getting problems, then the only way to fix the problem is to change the OS you’re using. MSFT can only TRY to own everything they didn’t outright seek to destroy. Although trying raises antitrust and legality issues, from an engineering standpoint it just means that hardware vendors are developing hardware that conforms to an OS spec that is unfriendly to the concept of third-party *anything*. It would make me laugh if nvidia brought a Linux/BSD/OpenBeOS distro onto the scene, just to prove me right… someone would be spitting out their mochaccino in Redmond.
“As I said elsewhere, stick NT/2k/XP on quality HCL hardware using Microsoft or WHQL drivers, avoid dodgy programs that require elevated privileges and OS crashes will be extremely rare.”
I’ve seen Win2k crash immediately after installation more than once – so no third-party applications or drivers were involved. These were hardly hardware problems either, since other OSes (including Win NT) ran on the same machines without problems.
My other point is about the popular claim that Windows supports more hardware than other OSes. All this hardware support comes through third-party drivers that ship with the hardware. Without them, it looks like Linux (and probably the BSDs) have much better hardware support – they have more drivers in the stock kernel distribution than Windows has.
Actually, my last point applies to third-party applications, too. The wealth of third-party apps is supposed to be one of MS Windows’ major strengths, and this MS statement makes it look like its major weakness.
I’ve seen Win2k crash immediately after installation more than once – so no third-party applications or drivers were involved. These were hardly hardware problems either, since other OSes (including Win NT) ran on the same machines without problems.
Was the hardware on the HCL ?
Also, differences in the way the OSes use hardware may expose different bugs. For example, there are numerous examples of Linux running fine on hardware with faulty RAM and Windows not, simply because of the different methods the two OSes use to allocate and access memory.
Another relevant example is ACPI. An NT4 installation will not expose any bugs in a particular motherboard’s ACPI implementation because it doesn’t use it. 2k and XP might.
My other point is about the popular claim that Windows supports more hardware than other OSes.
I’m not making any claims about popularity. In the context of this discussion I don’t care which OS is more popular, I care about specious arguments and flawed reasoning. Please don’t turn this into a “my OS is better than yours” argument, particularly since you have no idea which OS I’d be talking about.
Without them, it looks like Linux (and probably the BSDs) have much better hardware support – they have more drivers in the stock kernel distribution than Windows has.
Your presumption is flawed. Just because a driver comes in the “stock kernel” doesn’t mean it’s not the equivalent of a “third party” driver for Windows. There’s no shortage of beta (and alpha) quality drivers in the Linux and *BSD kernels. Added to that, unlike Linux and the BSDs, Microsoft actually offers in-depth testing and certification of drivers.
Actually, my last point applies to third-party applications, too. The wealth of third-party apps is supposed to be one of MS Windows’ major strengths, and this MS statement makes it look like its major weakness.
Why ? Are you going to try and assert “third party” applications don’t crash under other OSes ?
“Was the hardware on the HCL ? ”
Yes it was.
“I’m not making any claims about popularity.”
You didn’t.
” Are you going to try and assert “third party” applications don’t crash under other OSes ?”
No. I won’t.
But many pro-MS guys (and MS people themselves) present good hardware support and a vast number of apps as one of the strong points of MS OSes. This is achieved through 3rd-party drivers and apps. On the other hand, the same people now claim that all those 3rd-party drivers and apps cause OS instability. This doesn’t look consistent to me.
And if you leave Windows with only “supported/qualified” applications and drivers, you probably won’t get more than Linux has.
“Your presumption is flawed. Just because a driver comes in the “stock kernel” doesn’t mean it’s not the equivalent of a “third party’ driver for Windows”
Sorry, I don’t get your point.
Well, Microsoft claims 50% of crashes are caused by 3rd-party code – maybe, maybe not – but what about the other 50% ???
I believe even 50% is far more than too much…
“Was the hardware on the HCL ? ”
Yes it was.
Very strange.
“I’m not making any claims about popularity.”
You didn’t.
You appeared to be.
” Are you going to try and assert “third party” applications don’t crash under other OSes ?”
No. I won’t.
Then I’m not quite sure what your point is. You seem to be saying that because third party applications crash on Windows, this is bad for the platform.
But many pro-MS guys (and MS people themselves) present good hardware support and a vast number of apps as one of the strong points of MS OSes. This is achieved through 3rd-party drivers and apps. On the other hand, the same people now claim that all those 3rd-party drivers and apps cause OS instability. This doesn’t look consistent to me.
It’s quite consistent. You can have your cheap-arse, POS hardware supported by some dodgy programmers and have massive hardware compatibility at the cost of some stability, or you can choose quality hardware on the HCL and still have good hardware compatibility.
It’s not an either/or, it’s a choice.
And if you leave Windows with only “supported/qualified” applications and drivers, you probably won’t get more than Linux has.
I’d say you almost certainly would have more than Linux – even more so for FreeBSD.
“Your presumption is flawed. Just because a driver comes in the “stock kernel” doesn’t mean it’s not the equivalent of a “third party’ driver for Windows”
Sorry, I don’t get your point.
You seem to be implying that if the third-party non-certified hardware drivers are removed from the Windows “hardware list”, then Linux supports more hardware. I’m saying that if you take out the equivalent sorts of drivers in Linux – beta/testing/experimental/unsupported/etc – all of which are provided in the “stock” kernel, then its “hardware list” is reduced by at least a similar, and probably a much larger, amount.
In short, saying Windows doesn’t support as much hardware if you remove the dodgy unsupported drivers and not doing the same for Linux is making an unfair and unreasonable comparison.
I still blame the OS too for crashing the whole system instead of running programs in their own space. All that should happen is that the specific program crashes and closes.
I still don’t understand who is getting all these crashes. For the past 3 years I’ve been using XP/2000 on various different machines, and I have never had the OS “crash” on me. Yes, applications have crashed occasionally, but the entire OS has never been brought down. I know others who have had the same experience. I actually don’t know anyone personally who complains about XP/2000 crashing all the time. I dunno….
I think it is bullshit; just blaming 3rd-party programs is not going to help the performance of the OS itself…
If the OS is already poor, then blaming 3rd-party software is no use.
Yes, but that doesn’t mean those 40 million LOC are all in use at once. Many will be spent in drivers you never install, in subsystems you never use (eg: POSIX and OS/2), in parts of the system that are there purely for backwards compatibility and rarely get used, etc.
>>>>>>>>>>>
True, but that’s the case in Linux, too. Most of the 3 million lines of code in the Linux kernel are drivers for hardware. Beyond that, those 40 million lines of code must be maintained. When something changes, those 40 million lines of code must be updated. That saps a huge amount of resources that could be better spent fixing other bugs. Software tends to be incredibly complicated, because as the number of components increases linearly, the number of relationships increases quadratically. Even if a lot of the lines of code aren’t in the critical path, they still have an impact on the whole. And yes, I consider too many lines of code to be a design flaw.
As one would *expect* in any heavily modularised piece of software that makes extensive re-use of code.
>>>>>>>>
Only if the design is tightly coupled. The thing that hits Microsoft so badly is not that a module fails when modules it depends on fail, but that a module fails even though the modules it depends on still work. Modules are supposed to be black boxes. You can change one without affecting anything else, as long as you don’t change the interface. However, in a system that is too tightly coupled, the interface is not clearly defined, and changes to the internals of one module can affect others.
Yes, this is because these things suck and tend to be poorly handled.
>>>>>>>>
For example? Linux is very well layered. The kernel handles device-IO. X handles graphics. Qt handles widgets. KDE handles the desktop. The only blemishes are that XFree86 handles mice and keyboards (necessary to make XFree86 portable) and that the window manager should really be a module rather than a separate process.
Huh ? Constant reinventing of the wheel and Not Invented Here syndrome causing developers to keep writing their own code instead of re-using existing code is *good* programming practice ?
>>>>>>>
A lot of people write their own versions of things because the existing things don’t do what they need. GNOME was written because KDE didn’t (originally) meet everyone’s licensing requirements. But within a given sphere, proper design is demonstrated. I’ll use KDE as an example, because GNOME frankly isn’t. In KDE every app uses all the common KDE services possible. Even Quanta’s FTP file handling is actually just a common KDE service. That results in a level of consistency between KDE apps unmatched by other major platforms, including OS X (thanks to Brushed Metal).
GDI does not run in the kernel, it runs in kernel *space* – and with a negligible overall impact on stability and a significant increase in performance.
>>>>>>>>>
There is no practical difference between running in the kernel and running in kernel space. A bug in the GDI brings down the kernel. And you don’t need to be in the kernel to be fast. In raw benchmarks, XFree86 is faster at drawing than the GDI. BeOS managed to have a very fast GUI, even though it ran in user-space.
Or code running with root privileges. The same applies to Windows (although Windows’ inherently better security model makes it easier to reduce the potential impact from code running with elevated privileges).
>>>>>>>
Yes, a good security model is something Windows inherited from NT 3.x. But a good security model does you no good if:
A) You make your users root by default!
B) Your code is full of security bugs. MSBlaster anyone?
Sometimes. I’ve had X lock machines often enough to know it’s not always possible to kill it.
>>>>>>>>
Any possible relation to the NVIDIA drivers? The NVIDIA kernel driver can be flaky on many machines.
I’ll also reiterate my position that for the vast majority of users, an X crash is just as bad as a system crash.
>>>>>>>>
Vast majority of *home* users. A large percentage of all machines run in business (client or server) or public settings. Specifically, a huge number of those machines are servers — do you want a bug in the GUI bringing down your server?
Firstly, I’d like to see some cites. Secondly, I’d appreciate it if you could go out and learn the difference between “in the kernel” and “in kernel space”.
>>>>>>>>>>
I’ve written my own kernel, and I’ve got a big stack of kernel development books on the next shelf. Next question…
To help you along, it’s similar to the difference between building a kernel with all the drivers compiled in, or building them as modules.
>>>>>>>>
And in both cases, if there is a bug in the program, the whole kernel crashes!
Huh ? The whole point of DirectX is so programmers *don’t* write to the hardware. They write to DirectX, and DirectX interfaces with the hardware (via the HAL).
>>>>>>
Somebody doesn’t remember DirectX in the early days. DirectX does not work via the HAL. It makes a bridge directly between the programmer and the hardware, *through* the HAL. Using DirectDraw, one could obtain a pointer directly to video memory. And you could crash the machine quite easily with that pointer.
We are not using DOS anymore. Nobody directly accesses graphics cards. Everyone uses abstraction interfaces like DirectDraw, Direct3D, OpenGL, etc.
>>>>>>>
DirectDraw is direct access to the hardware. Even though most people work at a more abstract level via Direct3D, it is still possible to touch the hardware via DDraw.
Congratulations, you’ve just described the philosophy behind DirectX. Talk to an abstraction layer and let the abstraction layer deal with the hardware.
>>>>>>>
As DirectX has moved to a more OpenGL-like model, yes. But that wasn’t the case back when I was doing DDraw programming around DX 6.x. Even though DDraw is now abstracted away above D3D, thanks to backwards compatibility, those original vulnerabilities remain.
You really think Microsoft writes 50% of the software out there ?
Office. IE. Outlook. Windows. Right there, 90% of what most office workers, and many home users, interact with daily.
Windows is pretty stable, all things considered. On NT/2k/XP, stick with quality hardware using Microsoft or WHQL certified drivers and avoid buggy programs that execute with elevated privileges (are you listening, McAfee ?) and OS crashes should be rare, if not nonexistent.
>>>>>>>
That’s just theory. I’ve got half a dozen Windows machines here (my family’s, not my own) that say differently. It’s a design flaw if the OS needs to be babysat so much. I run CVS versions of KDE, with device drivers written by people in their spare time, and crashes are rare. My Windows machines will break even though some of them run nothing but Outlook and IE!
Me, apparently. My list of Windows problems:
1) I can’t figure out how to get Win2k to recognize DMA on a SiS PIII based system. Linux does fine on the same machine.
2) One of my machines, a Duron 700, crashes every time it comes out of sleep mode. It also crashes about 10% of the time when logging on.
3) Another one of my machines, an Athlon 1700+, refuses to keep a stable WLAN connection, even though I’ve tried numerous times to fix it. It works for a few weeks, then magically breaks.
4) My primary machine, which now boots into WinXP when I need to test our company’s software in Windows, will hard-lock at the Windows loading screen about 1 in 3 times.
5) Our office network just got hit with MSBlaster. The file server has a problem keeping a stable connection. The same would have happened to my home network, had it not been behind a nice, secure Linux firewall.
I used to think Windows was stable. I used to baby my Windows systems – never installing unusual software, never trying weird utilities, reinstalling once every couple of months. Ran very smoothly. Took too much effort. Now, I treat my Linux machine like crap. I run CVS versions of software, development kernels, install obscure programs, litter the hard drive with cruft (all as root), etc. It takes it like a man. Something is not stable and secure if you have to baby it. Most people just don’t have that kinda time, or that level of skill with computers.
This happens when I’m tinkering around with GDI and forget to redraw the entire window area in OnDraw(). If I start up that window and move another window over it a couple of times, forcing Windows to draw what isn’t there to be drawn (instead it creates a black background), it only takes a minute to get a nice, wholesome BSOD. Probably because Windows runs out of resource handles or something.
(I can perform the same trick using DirectX )
Now I’m not calling that good programming (isn’t good programming orthogonal to using GDI?), but that BSOD is a terrible OS flaw.
They are using the Dr. Watson reporting tool to say what crashes Windows. At the best of times, it’s not a very reliable tool.
When I get crashes from one of the Windows applications, I always press “Don’t send”, but if one of the other applications crashes I press “Send”.
There’s a very good reason for that (except the fact that I don’t like Microsoft).
All the Windows apps, I’ve paid for when I bought Windows (well, when my boss bought Windows). I think Microsoft bloody well could do their own bug testing – after all I’ve paid them well enough for it (or my boss did, anyway).
The third party apps, on the other hand gets reported, so Microsoft can fix that problem if it is OS-caused. After all, they’ve not made the application, so I don’t expect them to know what makes the third party apps crash.
A lot of the people I know (even some Windows-freaks) feel the same way about this, and I can’t just see that the people I know are so special that we’re the only ones in the world thinking that.
On my Linux boxes it’s different. Using Linux makes you a part of the community. You know you are working for your own good, and if you send a mail to the maintainer of a project, more often than not the maintainer takes time to answer the mail himself (I’ve done that several times – most recently with the KOffice team – and I got an answer; a negative one, but it was well explained nonetheless).
It’s easier helping people like this.
So I doubt that using this report system is even near reliable.
</nalle>
“Me too.”
JSplice, you DO know people with problems, if you’ve read posts here at OSNews. Every time someone comes in here and claims that Windows XP never crashes, people like me chime in to say “Yes it damn well does!” and then we list all the problems we have.
Some of mine:
– Windows XP will freeze on startup if it was not the last OS being run (I dual boot BeOS). Don’t even think of blaming BeOS. If Windows doesn’t do a reality check on the hardware state, that’s its own fault and the blame lies with Microsoft.
– Windows XP does not shut down at least 1/2 of the time. I have to hit reset to reboot or hold the power button to shut off. BeOS has never done this to me. Apple Mac OS X has done this countless times.
– Windows XP will spontaneously stop responding to program launch events. I’ve not seen this happen since installing SP1, so maybe that’s fixed. This problem was responsible for many unwarranted reboots.
Maybe these are not all crash situations, but they might as well be because they are behaviors that require rebooting to solve.
I could also add, for those people laying the blame on so-called POS hardware: what do YOU consider GOOD hardware??? I have several pieces of professional, not cheap hardware, and none of them has a Microsoft certified driver because the developers don’t bother with that step. I believe that 90% of the drivers you get from your manufacturers are not run through the MS certification program. Maybe, if the product is at the end of its development cycle, they might run the final driver through MS certification… usually not, because the product is at the end of its development cycle.
The solution to an unstable system isn’t “certification processes” and “having hardware that isn’t shitty”; the solution is making the OS design different. In this, BeOS too needs change, because drivers and file system actions can kill the system. NO OS should allow this today.
Who are you to judge Bill Gates? Only God can. Stop throwing your traditional values at me. You have such high expectations for an operating system. How is Bill Gates supposed to fix Windows if you keep hurting his self esteem? Stop all this crashing stuff. Instead of a calling it a computer crash call it “A good Try”. Society needs to start encouraging people instead of putting them down.
Look, Bill Gates dropped out of college. It was a very unselfish thing to do. He probably didn’t get to take “Operating Systems 101” and instead let a minority student in as he’s probably for Affirmative Action. That was a very fine show of responsibility and diversity on his part.
Now a lot of you guys who are complaining about “A Good Try” (NOT A CRASH, we don’t use words like that) sound negative. I think there are too many negative thoughts in you guys. How do we know it wasn’t the computer’s “Good Try” that made you feel bad, instead of your bad vibes hurting the computer? The Spirit of Open Source is not negativity. Before you start your computer say three times “My computer is good. My computer is good. My computer is good.” Wait, I mean say that once, because that is three times, otherwise you’ll say it nine times.
Remember “GOOD TRY” Mr. Gates!!! Don’t give up!!!
Yeah, I believe that (rolls eyes over & over). OK, Windows has crashed for me a few times when I first booted after install. Remember when Windows ME (I think) blue-screened before it even came out, when Bill Gates was previewing it on TV, hehe.
[QUOTE]Had Microsoft based their operating system on a strong UNIX base, what we see now would never have occurred.[/QUOTE]
And instead we would have had other things to deal with, like Unix’s primitive and inflexible security model.
Primitive and inflexible security model? Please go into more detail, because apparently you are some sort of all-knowing “guru” of osnews.com.
[QUOTE]SYSV based core, and pure postscript driven interface. Thrown OpenGL into the mix and you would have one heck of a desktop. [/QUOTE]
Meanwhile completely fucking the entire existing customer and developer base due to a vastly different development environments and zero backwards-compatibility.
Meanwhile, had you READ my post rather than ranting, you would well and truly know that I was talking in hindsight. REPEAT!!!!!! HINDSIGHT!!!!!!!!!!!!!!!!!! I am talking in reference to Microsoft choosing the UNIX path instead of the NT path.
[QUOTE]UNIX has matured over the last 20 years and had they, Microsoft, matured their operating system with the innovations […] [/QUOTE]
For example ?
Unix has been maturing for over 30 years, NT for barely half that time. You’d bloody well hope Unix was better at some things.
10 years of “maturing” and Microsoft still can’t accurately document their API. Maybe they should have taken programming documentation 101.
[QUOTE][…] that occurred we would have a rock solid operating system without the vast amounts of headaches we see today […] [/QUOTE]
The vast majority of headaches are caused, as the article says, by third party code out of Microsoft’s control (although less so applications, as the article is clearly talking about, than drivers, which cause the real headaches). As I said elsewhere, stick NT/2k/XP on quality HCL hardware using Microsoft or WHQL drivers, avoid dodgy programs that require elevated privileges and OS crashes will be extremely rare. I can’t even remember the last time one of my Windows boxes crashed, but it was sometime back in early 1999 when I was still using NT4. Of course, my Quake framerates are substantially lower than they could be because I don’t use the latest-and-greatest video drivers.
That isn’t reality. Reality is that people don’t follow that advice. These are the same people who don’t update their machines, and the same people Microsoft claims can administer a server because their software is so easy.
Making computers easy to use has only made the issue worse by making people think they know more than they really do.
[QUOTE]Opensource ISN’T the key to a stable and scalable operating system, openstandards are. That is the key. [/QUOTE]
Open standards won’t help your OS stability and scalability one iota if its design is poor. They will help interoperability, but that’s about it. A protocol or API definition or a set of filesystem semantics isn’t going to help system stability in the slightest if the OS design itself is flawed (like, say, MacOS Classic or Windows 9x).
I challenge you to detail how “Open Standards” can have any noticable effect whatsoever on an OSes stability and scalability.
OpenLDAP + OpenSSH + NFS provides a reasonably stable, secure, scalable solution for a non-homogeneous environment. POSIX Security Interfaces and Mechanisms cover discretionary access control, audit trail mechanisms, privilege mechanisms, mandatory access control and information label mechanisms.
POSIX threading provides a standard way of writing threaded applications that will be portable to different platforms with minimum fuss.
I am sure there are many more. Stop trying to stir shit and actually take it like a man. Microsoft’s problem isn’t a lack of talented programmers but a strong unilateralism streak.
[Split over two posts because I’m a wordy bugger]
Primitive and inflexible security model?
If you can come up with a better word than “primitive” to describe unix’s security model of a single “all-powerful” user and “everyone else”, I’m all ears, because at the end of the day it’s really only a short step away from the security model of DOS and MacOS Classic.
Please go into more detail […]
Well that’s the whole point, innit ? On a unix machine there’s stuff-all detail to go into, because you’re either root, and can do anything and everything you damn well please to the running system, or a normal user who can’t really do much at all.
The only way to “manage” (and I use the term loosely) remotely complex file permissions requirements under unix is to abuse the groups system into a nightmarish mix of interdependencies. This is before getting into other things like trying to control fine-grained access to things like hardware or low-level OS functions. The primitiveness of Unix’s security semantics is why people like me have to deal with the security nightmare of SUID binaries.
Have a look at the groups file on unix systems with non-trivial numbers (10,000+) of users that have to offer multiple services to different user classes. They’re a mess. ACLs are simply a better solution – and while some unixes are starting to hack them in, they’re not part of the “unix philosophy”.
[…] because apparently you are some sort of all knowing “guru” of osnews.com.
Nope, I’m just a SysAdmin.
Meanwhile, had you READ my post rather than ranting, you would well and truly know that I was talking in hindsight. REPEAT!!!!!! HINDSIGHT!!!!!!!!!!!!!!!!!! I am talking in reference to Microsoft choosing the UNIX path instead of the NT path.
I read it the first time, and my comment still stands. Choosing the Unix path would have meant abandoning a significant established base of DOS and Windows users and developers. No matter how you look at it, from a business sense choosing the unix path would have been a bad idea.
From a philosophical standpoint, well, Bill wanted to replace Unix with something better – and had he let Dave Cutler have his head and not kowtowed to the marketing department, he probably would have got it. Given the time they ended up taking to get NT onto the desktop, the performance hit of the better design would have ended up negligible on today’s hardware, and the design sacrifices made ~8 years ago for NT4 in the name of performance really are needless.
10 years of “maturing” and Microsoft still can’t accurately document their API. Maybe they should have taken programming documentation 101.
Which parts of the MSDN documentation do you find lacking ? Most people I know who program Win32 complain about information overload, not lack of it.
Or is this a sideways reference to the good ol’ “secret APIs” strawman ?
I’m still waiting for some of those Unix “innovations” as well.
That isn’t reality. Reality is that people don’t follow that advice.
Actually, for competent people it is reality. They follow the HCL. They use certified drivers. They avoid buggy and poorly written software. And yea, verily, their Windows systems run largely trouble-free.
These are the same people who don’t update their machines, and the same people Microsoft claims can adminsitrate a server because their software is so easy.
Those two groups of people are very different.
People don’t update their machines because, quite frankly, it’s a PITA to do so and requires more knowledge than should be required to use a computer.
Making computers easy to use has only made the issue worse by making people think they know more than they really do.
Making computers easy is precisely what companies should be striving for. When my computer has the ease of use and reliability of my microwave or my car, I’ll start believing they’re getting close. OS X is probably a nose in front of Windows at this point, but the sheer sluggishness of the interface detracts much from its advantages.
The easier and more straightforward something is to use, the less likely it is to break – as Apple have demonstrated time and time again over the last 20 years. The KISS principle.
The problem is there’s a whole swathe of inadequately-egoed people out there who seem to think that for something to be any good, it has to be hard. That way they can spend $LOTS_OF_TIME learning how to use something and use that knowledge to look down their nose at other people.
If you really think the future of all computer usage lies in editing cryptic, inconsistent, poorly documented text files on an 80×25 text screen and not ticking check boxes in a GUI or talking into a microphone, then you’ve got pretty mediocre ambitions for computing.
[Part II]
OpenLDAP + OpenSSH + NFS provides a reasonably stable, secure, scalable solution for a non-homogeneous environment.
Bollocks. LDAP is basically universal, but neither OpenSSH nor NFS work particularly well with anything that can’t pretend to be a Unix system.
OpenSSH is pretty good for securely distributing commandlines to a bunch of clients. Unfortunately a) this tends to be rather CPU-heavy if the server is heavily used and b) it’s pretty worthless if you need more than a commandline can offer (which covers probably 90% of client machines in today’s world).
Similarly, NFS isn’t much good for file sharing, particularly of the ad-hoc type. It’s clunky to use, can’t be manipulated by end users and is, as the TLA indicates, Not Fucking Secure. Its only realistic use in any semi-well-designed network is for sharing storage resources on a *physically private network* between *servers*. It’s not much good for farming out files to *end user machines* and if you’re using it to do so, I hope you place a great deal of trust in them.
I certainly hope you weren’t suggesting tunnelling NFS over SSH as well, because it sure as hell won’t be scalable without scads of custom hardware and software support.
You suggest your solution is good for “non-homogeneous” environments. This is ridiculous. Apart from the insecurities of NFS, a massive swathe of potential client machines (Windows boxes and older Macs, to name but two types – hell, even an OS X client machine deals with NFS poorly) can’t easily access NFS-shared data and don’t work well with the file permission semantics. Even Samba would be a better choice for such a purpose, although it doesn’t really work well with unix file ownership/permissions. Your proposed “solution” is really only good for a network full of unix boxes, or other systems that can make a reasonable show of pretending to be a unix box – hardly “non-homogeneous”.
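For readers who haven’t run NFS, the trust model being criticised looks roughly like this hypothetical `/etc/exports` entry (my illustration, not from the thread): classic NFSv2/v3 grants access per host or subnet, and the server then trusts whatever numeric UIDs an allowed client presents.

```
# /etc/exports (hypothetical example)
# Access is granted per host/subnet. Once a host is allowed in, the
# server believes the UIDs the client sends over the wire, so anyone
# who is root on an allowed client can impersonate any user.
/export/projects  192.168.1.0/24(rw,sync,root_squash)
```

`root_squash` maps the client’s root to an unprivileged user, but it doesn’t stop a root user on an allowed client from simply switching to another UID and reading that user’s exported files – which is why the poster above confines NFS to physically private server networks.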
POSIX Security Interfaces and Mechanisms cover discretionary access control, audit trail mechanisms, privilege mechanisms, mandatory access control and information label mechanisms.
POSIX threading provides a standard way of writing threaded applications that will be portable to different platforms with minimum fuss.
I hate to break it to you, but people these days want more than a multithreaded background process.
POSIX is not the be-all and end-all. I, for one, would hate a world filled with unix clones sitting under different GUIs.
I am sure there are many more.
I’m still waiting. Btw, you haven’t actually offered any reasons how those “open standards” can – on their own – make an OS stable and scalable. You have merely offered examples that demonstrate – as I said – how open standards aid interoperability.
Stop trying to stir shit and actually take it like a man.
You’ll have to do better than that. You might have a hard-on for POSIX, but it had its chance to deliver and didn’t.
I’m really not discriminatory, by the way. All OSes suck, they just do it in different ways (and some more interestingly than others).
Microsoft’s problem isn’t a lack of talented programmers but a strong unilateralism streak.
Microsoft’s problem (apart from being a for-profit publicly traded company) is a huge customer base demanding massive legacy support, ease-of-use and the ability to use dirt-cheap hardware. No-one else delivers a solution that meets all these needs as well as Microsoft does. The only other company that even comes close is Apple, and they fail on the “dirt cheap hardware” part (apart from their laptops). It’s like the old engineering adage – “you can have it fast, cheap, or good – pick any two”.
Only if the design is tightly coupled. The thing that hits Microsoft so bad is not that a module fails when modules it depends on fail, but that a module fails even though the modules it depends on still work.
Hardly surprising. Things *do* break from time to time, y’know.
Modules are supposed to be black boxes. You can change one without affecting anything else, as long as you don’t change the interface. However, in a system that is too tightly coupled, the interface is not clearly defined, and changes to the internals of one module can affect others.
Er, if you break a module and it starts spitting out garbage, that’s going to affect anything else that uses it.
For example?
Er, the dependencies nightmare in the average Linux system?
Linux is very well layered. The kernel handles device-IO. X handles graphics. Qt handles widgets. KDE handles the desktop.
How is this different from the I/O-manager, GDI, whatever-the-widgets-in-windows-are-called and Explorer in Windows?
A lot of people write their own versions of things because the existing things don’t do what they need.
No, a lot of people write their own versions of things because the existing things don’t do exactly what they *want*.
GNOME was written because KDE didn’t (originally) meet everyone’s licensing requirements.
A prime example of wasted development effort.
Does anyone else find it ironic that one of the strengths of Open Source is supposed to be widespread reusability, yet just about every OSS programmer seems to feel the need to rewrite everything himself?
That results in a level of consistency between KDE apps unmatched by other major platforms, including OS X (thanks to Brushed Metal).
I can only assume this holds true for a certain limited subset of apps in the current KDE CVS, because it doesn’t seem to hold for the KDE + some installed apps on Redhat 9.
There is no practical difference between running in the kernel and running in kernel space.
At the very least, there is in terms of code manageability.
And you don’t need to be in the kernel to be fast.
You probably did in 1993.
In raw benchmarks, XFree86 is faster at drawing than the GDI.
Which benchmarks? This alleged superiority in benchmarks certainly doesn’t translate to real-world superiority with any X GUI I’ve ever used.
BeOS managed to have a very fast GUI, even though it ran in user-space.
BeOS had the advantage of not having to drag along any legacy baggage.
Yes, a good security model is something Windows inherited from NT 3.x. But a good security model does you no good if:
A) You make your users root by default!
Administrator != root.
If there wasn’t so much brain dead software out there, users wouldn’t need high privileges by default. It’s a chicken and egg problem.
B) Your code is full of security bugs. MSBlaster anyone?
There’s no shortage of security bugs in Linux. Buffer overflows, anyone?
Any possible relation to the NVIDIA drivers? The NVIDIA kernel driver can be flaky on many machines.
No. This had happened to me before Nvidia was even a commonly known name (and has since).
X runs with root privileges and messes semi-directly with hardware. It is not surprising that X, or programs abusing it, can hang up the system.
Vast majority of *home* users.
Vast majority of users using X.
A large percentage of all machines run in business (client or server) or public settings.
Unix based client machines “in business” are almost always going to be running X as well. Heck, a not-insignificant-number of *servers* “in business” are probably running X.
Specifically, a huge number of those machines are servers — do you want a bug in the GUI bringing down your server?
Not particularly, but I doubt the likelihood of a bug in a completely idle GUI bringing down the server is particularly high.
I’ve written my own kernel, and I’ve got a big stack of kernel development books on the next shelf. Next question…
I’m waiting for the cites on all this stuff supposedly going into the next Windows kernel.
Somebody doesn’t remember DirectX in the early days.
Somebody does, but realises that we are no longer in “the early days”.
DirectX does not work via the HAL.
All the architecture diagrams and descriptions I’ve seen suggest DirectX uses the HAL (or supplements it with its own).
It makes a bridge directly between the programmer and the hardware, *through* the HAL. Using DirectDraw, one could obtain a pointer directly to video memory. And you could crash the machine quite easily with that pointer.
Could? Can?
More importantly, do programmers actually do this, or do they do what they are *supposed* to do and write to DirectX ?
DirectDraw is direct access to the hardware. Even though most people work at a more abstract level via Direct3D, it is still possible to touch the hardware via DDraw.
I don’t believe it is. I’m pretty sure the programmer writes to DirectDraw and DirectDraw writes to the hardware.
That’s just theory.
For you maybe. It’s practice for me and quite a few people I know.
I’ve got half a dozen Windows machines here (my family’s, not my own) that say differently.
I feel for you. Personally, I haven’t had a Windows box (my own or any one I have any say over) crash or behave strangely since about 1999.
It’s a design flaw if the OS needs to be babysat so much.
It doesn’t. I don’t “babysit” my Windows boxes and I don’t “babysit” other people’s Windows boxes. They seem to chug along quite satisfactorily.
I run CVS versions of KDE, with device drivers written by people in their spare time, and crashes are rare. My Windows machines will break even though some of them run nothing but Outlook and IE!
Then your Windows boxes are broken or atypical. Note that I consider POS hardware with crappy device drivers “broken”.