A few readers submitted a link to The Reg, where there's an interesting piece on Windows and Linux security. The article debunks some common myths about both OSes and their associated software. Check out the bullet points or, if you have some time, the complete report.
I’m surprised this was not picked up by SecurityFocus. For anyone interested in security this article is a good read and brings up several interesting points about vulnerabilities and how the risks are measured (or not).
This article rings true about the security industry and Microsoft's tactics. I liked this alternate headline:
“ICAT classified 67% of Microsoft’s vulnerabilities as high severity, placing Microsoft dead last among the platform maintainers in this [high severity] metric.”
I'm glad someone has finally collected these tidbits and put them in one place.
In fairness to Windows, the devastating DCOM vuln was a stack overflow, not an exploit in the protocol or service itself. This stuff happens on Linux too, just more rarely. 😉
Windows is monolithic, not modular, by design
This is bullshit. Windows is neither more nor less monolithic than Linux; both are monolithic kernels. In both OSes a single poorly written driver has full access to kernel space and can bring down the system. The term monolithic has nothing, nada, zero, zip, to do with the ability, or lack thereof, to load kernel modules at runtime.
Of course he could be referring to Linux as an OS and not a kernel.
For example, if one integrates the graphics rendering features into the innermost sphere (the kernel), it gives the graphical rendering feature the ability to damage the entire system.
So DRI isn’t in the Linux kernel? Since when? Maybe they took it out when they (Finally) removed the kernel httpd.
Finally, a monolithic system is unstable by nature. When you design a system that has too many interdependencies, you introduce numerous risks when you change one piece of the system.
Change the major version of your glibc, then tell me about a non-monolithic system design.
According to the Summer 2004 Evans Data Linux Developers Survey, 93% of Linux developers have experienced two or fewer incidents where a Linux machine was compromised. Eighty-seven percent had experienced only one such incident, and 78% have never had a cracker break into a Linux machine.
This is just an outright false application of data. Where is the corresponding data from Windows developers? Note the term DEVELOPERS, i.e. knowledgeable users; these are not your general home users. I know maths isn't my greatest subject, so can someone explain how, if 93% represents those with less than two compromises, there can be 87% with one compromise plus 78% with none?
Even more important, Linux provides almost all capabilities, such as the rendering of JPEG images, as modular libraries. As a result, when a word processor renders JPEG images, the JPEG rendering functions will run with the same restricted privileges as the word processor itself. If there is a flaw in the JPEG rendering routines, a malicious hacker can only exploit this flaw to gain the same privileges as the user, thus limiting the potential damage. This is the benefit of a modular system, and it follows more closely the spherical analogy of an ideally designed operating system (see the section Windows is Monolithic by Design, not Modular).
Windows ALSO uses shared libraries, and they ALSO run with user privileges. Yet again this has NOTHING to do with monolithic system design and everything to do with allowing users to run as administrators. If you run your Linux box as root (Lindows, now fixed) then you’re in exactly the same position.
Linux browsers do not support inherently insecure objects such as ActiveX controls, but even if they did, a malicious ActiveX control would only run with the privileges of the user who is running the browser.
‘Please enter your root password to install this software’.
The vast majority of Windows viruses/trojans rely on social engineering. Linux is not immune to the stupidity of its users.
If a malicious hacker manages to gain complete control over the Apache web server on a Debian system, that hacker can only affect files owned by the user “www-data”, such as web pages. In turn, the MySQL SQL database server often used in conjunction with Apache, runs with the privileges of the user “mysql”. So even if Apache and MySQL are used together to serve web pages, a malicious hacker who gains control of Apache does not have the privileges to exploit the Apache hole in order to gain control of the database server, because the database server is “owned” by another user.
If I have control of your Apache server I also have control over the contents of any scripts your server is running, can read your MySQL username and password from your current scripts, and from there use a rewritten script to access your MySQL database. I don’t need to be running anything as the MySQL user to do it, all that’s needed is MySQL scripting support and an Apache/Scripting exploit (Which thankfully aren’t common).
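To make that last point concrete, here is a minimal Python sketch (the config path and section names are hypothetical, not from the article): the web application has to be able to read its own database credentials, so anything already running as www-data can read them too and then talk to MySQL over a normal client connection, with no access to the "mysql" OS account required.

# Minimal sketch, assuming a hypothetical app config that the web server
# user (www-data) must be able to read for the site to work at all.
import configparser
import getpass

CONFIG_PATH = "/var/www/example-app/db.ini"   # hypothetical location

def read_db_credentials(path: str) -> tuple[str, str]:
    """Return the MySQL user/password the web application itself uses."""
    cfg = configparser.ConfigParser()
    cfg.read(path)                            # readable by www-data by necessity
    return cfg["mysql"]["user"], cfg["mysql"]["password"]

if __name__ == "__main__":
    user, password = read_db_credentials(CONFIG_PATH)
    print(f"running as OS user: {getpass.getuser()}")
    print(f"app connects to MySQL as: {user!r} (password recovered)")
    # From here an attacker simply opens an ordinary client connection with
    # these credentials; the separate "mysql" OS account never enters into it.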
****
Basically I stopped reading here. The author obviously just wants to attack Windows whilst holding Linux up to be some paragon of virtue. They have some good points, but the blinkered style just irritated me. I really believe Linux IS a more secure design than Windows, but trying to prove it in this way is simply ridiculous.
“a malicious ActiveX control would only run with the privileges of the user who is running the browser. Once again, the most damage it could do is infect or delete the user’s own files.”
This is where local roots come in handy. You don't need console access to get root, just one of the kernel vulns. I think as Linux progresses and gains market share, there will be malware that uses two stages — one to grab user level, and one to escalate privs. Complex, yes. Impossible, no.
I would expect nothing less from The Register concerning topics like this.
“Windows has only recently evolved from a single-user design to a multi-user model…Windows XP was the first version of Windows to reflect a serious effort to isolate users from the system”
Eh? Win2k did this first, not XP. NT 4.0 also did a pretty good job of this (I don’t consider almost a decade ago “recent”).
“Windows focuses on its familiar graphical desktop interface…By advocating this type of usage, Microsoft invites administrators to work with Windows Server 2003 at the server itself, logged in with Administrator privileges.”
This is just flat-out wrong. Windows servers run headless just as well as any *nix distro out there, and there is no need to be an admin to log in remotely or at the terminal itself.
“Linux is Not Constrained by an RPC Model…For example, the MySQL SQL database server is usually installed such that it does not listen to the network for instructions”
Sounds like pretty bad physical application design to me… why would you want your webserver/db server on the same machine? This is fine for a dev environment, but it holds no water in a real networked production environment.
For every article/[fill in the blank] study, there is an equal and opposite article/study. In the end, it's ultimately up to the admins themselves to lock machines down, and that's where the real awareness needs to be raised.
I have used Linux for years
connected to the internet 24/7
with no firewall or antivirus software
with no problem.
Try that with Windows……..
The vast majority of Windows viruses/trojans rely on social engineering. Linux is not immune to the stupidity of its users.
I must call BS on this one, I am afraid. When was the last time a virus came and asked you to install it? Nimda, Code Red, MSBlast, Klez, and insert many more here. No, the VAST majority require nothing more than being on the internet without a good antivirus and a good firewall. The biggest barrier to get a trojan to install itself in windows is to get it on the system. With Linux, it has to be given executable status, which requires input, and it has to be run because they generally do not run themselves. In Windows, they get in, run themselves, install themselves, and no password necessary. I use computers in university labs which are pretty locked down, so much that I cannot even right click, but I can always install stuff, only that it is usually no biggie because they are ghosted every time. And I absolutely have no admin privileges on those, though this could be by design. I know the installer service can be turned off in Windows, but that rarely happens and in all probability breaks a lot of things.
If I have control of your Apache server I also have control over the contents of any scripts your server is running, can read your MySQL username and password from your current scripts, and from there use a rewritten script to access your MySQL database. I don’t need to be running anything as the MySQL user to do it, all that’s needed is MySQL scripting support and an Apache/Scripting exploit (Which thankfully aren’t common).
But most of the time, you do not use the SQL password in any case. You normally do not need it to put stuff on a web page. You either put it directly in the database, which would ideally have nothing to do with the web page itself, or the users do it using some authentication. The password is mostly used for admin, but you may correct me if I am wrong. I think using the MySQL password all the time smacks of bad design. You shouldn't need it.
Windows ALSO uses shared libraries, and they ALSO run with user privileges. Yet again this has NOTHING to do with monolithic system design and everything to do with allowing users to run as administrators. If you run your Linux box as root (Lindows, now fixed) then you’re in exactly the same position.
Lindows went out of their way to break something there. The fact is the default behavior on just about every distro is to have a separate account for the administrator, and others for users. Some even disable graphical login for root. Try doing that for Windows. I know you can’t, and there are reasons for that, good or bad.
I think you miss the point of the modularity he is referring to. Linux is meant to work as an amalgamation of parts which you choose to put in your system. Consequently, you do not have to run some programs which could be security risks. You do not have to have a rendering engine on your system at all. As far as I know, you cannot do that with Windows. You cannot remove that module.
* Nicholas Petreley's former lives include editorial director of LinuxWorld, executive editor of the InfoWorld Test Center, and columnist for InfoWorld and ComputerWorld. He is the author of the Official Fedora Companion and is co-writing Linux Desktop Hacks for O'Reilly. He is also a part-time analyst for Evans Data and a freelance writer.
Completely unbiased of course!
We can all build a case to support one side or the other by selecting the facts that suit us; what's the point of this?
> This is bullshit. Windows is neither more nor less monolithic than Linux; both are monolithic kernels. In both OSes a single poorly written driver has full access to kernel space and can bring down the system. The term monolithic has nothing, nada, zero, zip, to do with the ability, or lack thereof, to load kernel modules at runtime.
I'm afraid kernel design doesn't have much to do with what is being discussed here — the overall design, including the userland, is being discussed. And yes, Windows is the epitome of opaque and non-transparent design compared to Unix. Fact Windows is very poorly designed, which explains why M$ does a major redesign with pretty much every release of the OS. Windows is destined to suffer from poor security because of its poor design. Windows still very much belongs on the home desktop and not much else — the business desktop is too much of a risk and deploying Windows on the server is plain insane.
…Of the integrity of anyone who uses Netcraft market share numbers when comparing exploits. It’s well known Netcraft counts web *sites* not web *servers*. When comparing how many *machines* are compromised, the number of web *sites* being run on whatever web server software is irrelevant.
Not to mention a comparison between Apache and IIS is only good for drawing conclusions on a comparison between Apache and IIS.
Then there’s this little gem:
“Windows has only recently evolved from a single-user design to a multi-user model.”
Windows NT has been multiuser since day 1, back in 1993 (or 1988 if you want to count from the start of development). True enough, a “consumer” version of Windows based on the NT branch has only been around for a few years, but that version – XP – is just the latest incarnation of an OS that was conceived, designed and built to be a multiuser OS.
That was about the point I stopped taking this seriously, although continuing to read reveals someone who clearly has a massive chip on their shoulder and a distinct lack of objectivity.
sshd[18755]: Illegal user rolo from 211.182.117.6
sshd[18757]: Illegal user iceuser from 211.182.117.6
sshd[18759]: Illegal user horde from 211.182.117.6
sshd[18761]: Illegal user cyrus from 211.182.117.6
sshd[18763]: Illegal user www from 211.182.117.6
sshd[18765]: Illegal user wwwrun from 211.182.117.6
sshd[18767]: Illegal user matt from 211.182.117.6
sshd[18769]: Illegal user test from 211.182.117.6
sshd[18771]: Illegal user test from 211.182.117.6
sshd[18773]: Illegal user test from 211.182.117.6
sshd[18775]: Illegal user test from 211.182.117.6
sshd[18777]: User www-data not allowed because not listed in AllowUsers
sshd[18779]: Illegal user mysql from 211.182.117.6
sshd[18781]: Illegal user operator from 211.182.117.6
sshd[18783]: Illegal user adm from 211.182.117.6
sshd[18785]: Illegal user apache from 211.182.117.6
***
That’s just part of the normal logging day from my router at home (SSHD is the only service that it runs). The actual log is 57k of mostly sshd entries (From a variety of different IPs), mostly trying to log in as root. SSHD still requires you to manually turn off root access after installation.
My bet is that pretty much anyone who has a Linux box and runs SSHD will have similar logs (Seems to be getting worse recently btw, I guess I should lock sshd down to the local address space). The point is that there are people out there actively looking to root Linux boxes. The less knowledgeable the userbase gets, the more success they will have.
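Not anything from the article, just a quick sketch if you want to see how much of this noise your own box gets: assuming a Debian-style /var/log/auth.log (adjust the path for your distro), something like this tallies the "Illegal user" attempts by source IP and guessed username. The usual countermeasures (PermitRootLogin no, an AllowUsers list, a non-standard port) still apply.

# Quick sketch for tallying sshd brute-force noise.
# The log path is an assumption (Debian-style); adjust for your distro.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"

# Matches lines like: sshd[18755]: Illegal user rolo from 211.182.117.6
PATTERN = re.compile(r"sshd\[\d+\]: Illegal user (\S+) from (\S+)")

def tally(path: str) -> tuple[Counter, Counter]:
    by_ip, by_name = Counter(), Counter()
    with open(path, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                name, ip = match.groups()
                by_name[name] += 1
                by_ip[ip] += 1
    return by_ip, by_name

if __name__ == "__main__":
    by_ip, by_name = tally(LOG_PATH)
    print("top source IPs:", by_ip.most_common(5))
    print("top guessed usernames:", by_name.most_common(5))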
Ditto on my Debian, and before that Mandrake, since 1998: no antivirus, no trojan scanners, no firewall, and nothing has ever happened in all these years.
Windows users, try that for a while.
When was the last time a virus came and asked you to install it?
You mean you missed out on the Kournikova virus?
Admittedly all users had to do was click on the vbs attachment to run it (Which is horrific design), but it still required user interaction (And was quite successful as I recall). Linux virus writers will just have to be more inventive in the instructions they give to their unsuspecting victims.
I must call BS on this one, I am afraid. When was the last time a virus came and asked you to install it?
Every single one of those “just double click this to see $CELEBRITY naked” emails.
Nimda, Code Red, MSBlast, Klez, and insert many more here.
Which make up a minority.
No, the VAST majority require nothing more than being on the internet without a good antivirus and a good firewall.
I think you need to review your virus statistics. Compare how many use remote exploits (and remember to only count multiple variants that exploit the same flaw once) vs how many are distributed by the ubiquitous email attachment.
The biggest barrier to get a trojan to install itself in windows is to get it on the system.
How many emails a day do you think the average person receives (or would receive, were it not for filtering mail gateways) with exploit attachments?
With Linux, it has to be given executable status, which requires input, and it has to be run because they generally do not run themselves. In Windows, they get in, run themselves, install themselves, and no password necessary.
Unless you're running as a regular user. Shock! Horror! If you run a Windows system like you do a Linux system it's just as hard to get infected!
I use computers in university labs which are pretty locked down, so much that I cannot even right click, but I can always install stuff, only that it is usually no biggie because they are ghosted every time.
It’s not hard to stop people installing things under Windows – even easier if all you want to do is stop them installing things system-wide. If you can do it on those systems, it’s because either you’ve been allowed to or the admins are incompetent.
And, of course, you miss the biggest reason social engineering is so much more of a hole than remote exploits: the latter is _trivial_ to fix. A software patch (almost always available before any in-the-wild exploits) and a firewall have protected you – but nothing can protect you from users running malicious code deliberately. That's why the biggest security hole in any system is the end user.
Lindows went out of their way to break something there. The fact is the default behavior on just about every distro is to have a separate account for the administrator, and others for users. Some even disable graphical login for root. Try doing that for Windows. I know you can’t, and there are reasons for that, good or bad.
Shouldn’t be too hard (although I’ve never tried it) – just remove the “Log on Locally” permission.
Fact Windows is very poorly designed, which explains why M$ does a major redesign with pretty much every release of the OS.
Which major redesigns are you thinking of? The last _major_ design change in Windows I can think of was back in 1996, when NT4 moved the graphics subsystem into kernel space.
Windows is destined to suffer from poor security because of its poor design. Windows still very much belongs on the home desktop and not much else — the business desktop is too much of a risk and deploying Windows on the server is plain insane.
Please list the aspects of Windows *design* you think are bad and why.
Windows NT has been multiuser since day 1, back in 1993 (or 1988 if you want to count from the start of development). True enough, a “consumer” version of Windows based on the NT branch has only been around for a few years, but that version – XP – is just the latest incarnation of an OS that was conceived, designed and built to be a multiuser OS.
The day you can run two copies of WinWord on Windows, without any additional software, on the same system for two different users, each with their own screen (and both still logged into the system), is the day you can call Windows a multiuser environment. I am not aware of Windows NT 4.0 being able to do that out of the box!
Windows is much, much harder to lock down to allow user and admin separation than any *nix I am aware of.
Try to develop an application in Windows (a service) that accesses information on a network. This is pure hell in Windows. As a developer, I can tell you that Windows is hell to program for that kind of stuff. Or try to develop an application which runs in the background (no user logged in) and prints to a printer (local or network). It is possible in Windows, but it is anything but easy.
I could continue with many things in that direction. For example, try to develop a service that dynamically writes into the Windows event log. You can't just write into it; every message needs to be in a resource file. If you uninstall the application or service, the messages in the log get deleted as well. Very funny!
Sorry, but to call Windows a true multiuser environment is just plain wrong. It would be the same as calling Windows 95/98 a true multitasking system (yes… it runs multiple applications at the same time, but one application can block the complete system).
If Windows is such a great multiuser environment, then just tell me why we need Citrix?
No one who is serious in the security business is picking this up because it's flawed.
The very first myth is about Windows being a big target and Apache running most of the web. WTF does that have to do with anything? I can run Apache on Windows and it will have little to do with Linux.
He is comparing Apache use on Linux to Windows use in general and trying to claim that Linux takes as much heat on the web as a Windows box.
Most exploits, trojans and viruses aren't spread by breaking into web servers anyway; they are spread by users running something. In that sense, yes, Windows is a bigger target.
Of the ones that aren't user-initiated, Blaster comes to mind: a service running by default is exploited without user intervention.
Any OS set up correctly and managed by someone who knows what they are doing can be secure.
Out of the box I'd say Linux has a better track record on security, but it shouldn't take articles like this to prove that, because articles like this are just as skewed as the MS FUD coming out of Redmond in regards to Windows security.
With this article I find flaws in the author's reasoning and technical details. The author asserts:
” Windows has only recently evolved from a single-user design to a multi-user model ”
This would be true if Windows XP had evolved from DOS; it did not. Windows XP is derived from the NT/2000 kernel, which has been multi-user from day one.
“Windows is monolithic, not modular, by design”
Windows has a Microkernel, not a monolithic kernel as the author suggests. Windows is a modular operating system, only Microsoft has never used it as such, and they plan to make it more modular, with the ability to add and remove features, in Longhorn Server.
” Linux is based on a long history of well fleshed-out multi-user design ”
The author does a really good job of not saying Linux is based on UNIX, which it is. I wouldn't call the UNIX or Linux system well designed or even superior.
” Linux is mostly modular by design ”
Linux is not modular by design; you cannot add or remove functionality without recompiling your kernel. If Linux were a proprietary product such as UNIX, then you would be stuck with a what-you-see-is-what-you-get system. Linux has a monolithic kernel, like DOS had a monolithic kernel.
” Linux servers are ideal for headless non-local administration ”
My Windows Server 2003 racks at work are headless and I have no problem with system administration from home or anywhere else.
” This exposes that server to any browser security holes. Any server that encourages you to administer it remotely removes this risk. ”
Any administrator that sits there and uses the server as his own personal desktop to surf the web is not a real admin. I use IE on my server to access Windows Update and to use web apps that I have. This is common sense. Are you open to browser security flaws? Sure, but your chances of getting hit by an IE flaw on the server are lower than on the desktop. As I stated before, you can administer a Windows server remotely.
The author then goes on to mention the severity ratings from Microsoft. I agree Microsoft should not be the one to classify severity, but Microsoft has to decide what it should patch first.
I personally don't put that much stock in this security report. I find it more of a marketing tool.
The bulleted list contained these remarks:
” Myth Windows only gets attacked most because it’s such a big target, and if Linux use (or indeed OS X use) grew then so would the number of attacks.
Fact When it comes to web servers, the biggest target is Apache, the Internet’s server of choice. Attacks on Apache are nevertheless far fewer in number, and cause less damage. And in some case Apache-related attacks have the most serious effect on Windows machines. Attacks are of course aimed at Windows because of the numbers of users, but its design makes it a much easier target, and much easier for an attack to wreak havoc. Windows’ widespread (and often unnecessary) use of features such as RPC meanwhile adds vulnerabilities that really need not be there. Linux’s design is not vulnerable in the same ways, and no matter how successful it eventually becomes it simply cannot experience attacks to similar levels, inflicting similar levels of damage, to Windows ”
I do think popularity plays a big part in this issue. If Linux tomorrow were to become the dominant OS, does this guy actually think virus and malware writers intend to take their ball and go home? If he does, then I have some oceanfront property in Colorado I want to sell him. Crackers will get into systems regardless of whether it's Mac, Windows, Linux or UNIX. They will write viruses for these systems. It doesn't matter.
The only real security is education. Teaching people to protect their systems is key. I have run Windows servers for 3 years and have yet to have an intrusion or a virus. Is it magic? No, I know what I am doing and I try to educate users.
Oh yeah, the number one source for unbiased information, especially about that kind of topic…
Never mind. No one is forcing me to read it.
Indeed, asking this guy for the "real facts" about Windows vs. Linux security is like asking Rumsfeld about the "real situation" in Iraq.
"Unbiased" is something that doesn't exist in the real world. Everywhere you look you get nothing but BS, half-truths and outright lies. Everyone has his own little pathetic agenda.
Which major redesigns are you thinking of? The last _major_ design change in Windows I can think of was back in 1996, when NT4 moved the graphics subsystem into kernel space.
Just look at Longhorn!
Please list the aspects of Windows *design* you think are bad and why.
– Too much stuff is integrated into the system and cannot be uninstalled without breaking the functionality of Windows (deleting the IE folder or uninstalling IE renders your Windows unstable and in some ways unusable); so much stuff is integrated into the kernel that for every stupid update you need to reboot the system (no *nix has that limitation); Explorer can easily crash a running Windows (tell me when you last managed to crash your complete Linux with KDE or Gnome or any other DE), etc.
– I/O in Windows is still badly designed (try to render an image in any 3D software while formatting a floppy; you will feel the difference, trust me!)
– Software installation (you cannot install Windows for a user and then do the admin/user separation the same way as you do in *nix. Try to allow a user to install Nero without giving him admin access. Good luck! In *nix I can give the user the right to use the CD-burning software without giving him root access. In Windows such things are not always possible)
etc
The day you can run two copies of WinWord on Windows, without any additional software, on the same system for two different users, each with their own screen (and both still logged into the system), is the day you can call Windows a multiuser environment. I am not aware of Windows NT 4.0 being able to do that out of the box!
Firstly, that’s a pretty broken definition of multiuser. Heck, DOS-based Windows (and even DOS itself) can do that with some trivial additional software and none of them are even close to multiuser (nor does that additional software make them so).
Secondly, you can do that with Terminal Services, which has been around since NT4.
Being multiuser has nothing to do with multiple interactive sessions.
Windows is much, much harder to lock down to allow user and admin separation than any *nix I am aware of.
Uh huh. Create a user. Put it into the “Users” group. Really hard stuff.
Try to develop an application in Windows (a service) that accesses information on a network. This is pure hell in Windows. As a developer, I can tell you that Windows is hell to program for that kind of stuff. Or try to develop an application which runs in the background (no user logged in) and prints to a printer (local or network). It is possible in Windows, but it is anything but easy.
What’s so hard about it ? (I’m not a developer, but I am curious).
Sorry, but to call Windows a true multiuser environment is just plain wrong. It would be the same as calling Windows 95/98 a true multitasking system (yes… it runs multiple applications at the same time, but one application can block the complete system).
Windows NT is multiuser. Always has been.
If Windows is such a great multiuser environment, then just tell me why we need Citrix?
Firstly, you don’t “need” Citrix.
Secondly, Citrix exists for the same reason something like ReiserFS exists.
Just look at Longhorn!
You said:
“Fact Windows is very poorly designed, which explains why M$ does a major redesign with pretty much every release of the OS.”
So the last major redesign was in 1996 and the next one is going to be ~2006, with 3 product releases in between (2000, XP, 2003), but somehow that is “pretty much every release of the OS” ?
I'd call significant under-the-hood changes every ~10 years or so to be reasonable. Particularly when, say, Linux does it every few years (and sometimes between "stable" kernel releases!).
Too much stuff is integrated into the system and cannot be uninstalled without breaking the functionality of Windows (deleting the IE folder or uninstalling IE renders your Windows unstable and in some ways unusable); so much stuff is integrated into the kernel that for every stupid update you need to reboot the system (no *nix has that limitation); Explorer can easily crash a running Windows (tell me when you last managed to crash your complete Linux with KDE or Gnome or any other DE), etc.
Sounds like your systems are severely broken.
IE is necessary for some system functionality, so of course removing it will cause problems – try ripping some basic system libraries out of Linux and watch it break.
Reboots are largely for the benefit of ignorant end users and usually aren’t necessary at all.
If explorer is crashing your entire system (and not just the shell), your system has big problems – as would any Linux system where X and/or KDE/GNOME/whatever were crashing the whole system (and I have seen it happen).
I/O in Windows is still badly designed (try to render an image in any 3D software while formatting a floppy; you will feel the difference, trust me!)
No problems whatsoever. Your system is broken (or you’re using Windows 95/98/Me).
Software installation (you cannot install Windows for a user and then do the admin/user separation the same way as you do in *nix. Try to allow a user to install Nero without giving him admin access. Good luck! In *nix I can give the user the right to use the CD-burning software without giving him root access. In Windows such things are not always possible)
Firstly it’s an application problem, not an OS problem. It’s not Microsoft’s fault if Nero can’t write their software properly.
Secondly, it usually *is* possible, even with broken applications, by spending a little time figuring out what the application is trying to do that it can’t and granting specific permissions on files, directories, registry keys and the like to allow it.
Windows has a Microkernel, not a monolithic kernel as the author suggests.
I don’t think he’s talking about just the kernel. Also, NT isn’t really a microkernel. It’s *based* on a microkernel-ish design (like, say, OS X) but a lot of stuff has since been moved into kernel space for performance reasons (again, like OS X).
Windows is a modular operating system, only Microsoft has never used it as such, and they plan to make it more modular, with the ability to add and remove features, in Longhorn Server.
He simply makes the standard error of equating “lack of alternative modules” with “modular”.
He simply makes the standard error of equating “lack of alternative modules” with “modular”.
Should of course be:
He simply makes the standard error of equating “lack of alternative modules” with “not modular”.
Hey, I thought I was the only one getting hammered like that with SSHD… That's why I recently changed it to a non-standard port. Not that I don't trust OpenSSH, but I am not always getting the latest version as soon as it comes out…
Similarly, my Apache logs are filled with that kind of junk (mostly invalid URLs). I imagine we will see more of these "flaw finders" as Linux & co gain more popularity.
I used to be a fan of The Reg a while ago but I wouldn’t classify them as a good source of information nowadays. They just ride the “We’re cool because we say that Microsoft sucks” wave.
Here are just a few of the most obvious and fundamental shortcomings in the Windows architecture that will perpetuate extremely poor security (I'm sure you can come up with more given enough time):
– IIS is deeply wired into the Windows kernel in an effort to match the performance of Unix web servers — pretty much any major vuln in IIS leads to the entire box being owned
– The Windows registry is a single point of failure for the entire OS and an easy target for hackers
– Internal boundaries of the OS are extremely porous, especially in the GUI — any application can talk to any other application through the windowing system, which means that regardless of how unprivileged the vulnerable program is, there is always a possibility of escalating privileges through other applications/services
And any buffer overrun or crack in the Windows GUI or webserver can be exploited to take control of the entire system.
Firstly, that’s a pretty broken definition of multiuser. Heck, DOS-based Windows (and even DOS itself) can do that with some trivial additional software and none of them are even close to multiuser (nor does that additional software make them so).
You are right. If I could write in my native language (German), I would probably have phrased the definition differently.
Secondly, you can do that with Terminal Services, which has been around since NT4.
Terminal Services are another beast. Multiuser is something different.
Being multiuser has nothing to do with multiple interactive sessions.
Correct.
Uh huh. Create a user. Put it into the “Users” group. Really hard stuff.
If it were that easy, I would really love Windows. But it is not that way. So many applications require write access here and there. It is hell to maintain.
I have customers maintaining 145,000 users on Windows, and this is anything but easy. Every damn application on Windows can be different. I know this is not an error in Windows, but it is Microsoft's fault that they allowed it in the old days. And now you have applications which require you to tweak the hell out of Windows to get them running in a locked-down environment.
What’s so hard about it ? (I’m not a developer, but I am curious).
It would go too far to explain everything in detail, but it is not an easy task.
In *nix even a simple Perl script can be made to work as a service. In Windows you don't have that possibility (at least not an easy one). In Windows you need a hell of a lot of API calls to deal with credentials and so on until you get the stuff to work.
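To illustrate the *nix side of that (a rough sketch in Python rather than Perl, and making no claim about how the Windows equivalent should look): a plain script that loops, logs to syslog and exits cleanly on SIGTERM is already a perfectly serviceable background service once you point an init script, inittab entry or process supervisor at it.

# Rough sketch of a *nix background service: no service-registration API,
# no credential plumbing, just a long-running process that logs to syslog.
import signal
import syslog
import time

RUNNING = True

def handle_sigterm(signum, frame):
    """Let init/systemd/whatever stop us cleanly."""
    global RUNNING
    RUNNING = False

def main() -> None:
    syslog.openlog("toy-service", syslog.LOG_PID, syslog.LOG_DAEMON)
    signal.signal(signal.SIGTERM, handle_sigterm)
    syslog.syslog(syslog.LOG_INFO, "toy-service started")
    while RUNNING:
        # ... do the actual periodic work here ...
        time.sleep(5)   # short interval so SIGTERM is noticed promptly
    syslog.syslog(syslog.LOG_INFO, "toy-service stopping")

if __name__ == "__main__":
    main()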
Windows NT is multiuser. Always has been.
Okay… then what is your definition of multiuser?
Firstly, you don’t “need” Citrix.
You are right, you don't need it. But you need additional stuff to give WinNT the functionality to allow you certain things. WinNT does not have that kind of functionality out of the box.
Secondly, Citrix exists for the same reason something like ReiserFS exists.
Ok. At least you can choose in *nix. In Windows you only have FAT, FAT32 and NTFS (okay… once they had HPFS).
If you would like to see for yourself how easy it has become to break into Windows, just check out the wonderful Metasploit framework (www.metasploit.com). You can literally own pretty much any Windows machine (except for very well patched and hardened ones) in about 20 seconds. And the truly amazing part is that you can teach practically anyone to do it — it is that easy. It is a tribute to the excellent work of the Metasploit folks, and also to miserable Windows security — thanks, M$.
You said:
“Fact Windows is very poorly designed, which explains why M$ does a major redesign with pretty much every release of the OS.”
It was not me who said that!
So the last major redesign was in 1996 and the next one is going to be ~2006, with 3 product releases in between (2000, XP, 2003), but somehow that is “pretty much every release of the OS” ?
Every release of Windows gets changed at the API level. Win32 is a collection of new, dead, old, and so on functions. MS could probably shrink that beast to half its size if they killed all the old and dead functions.
I'd call significant under-the-hood changes every ~10 years or so to be reasonable. Particularly when, say, Linux does it every few years (and sometimes between "stable" kernel releases!).
Uhh… Linux sometimes changes too much! But the difference between Linux and Windows is that in Linux the applications get changed very quickly.
Sounds like your systems are severely broken.
NO! Don’t be so ignorant.
IE is necessary for some system functionality, so of course removing it will cause problems – try ripping some basic system libraries out of Linux and watch it break.
Yes, IE has some system functionality. But HTML rendering should be integrated as a module which can be replaced or even not installed. Why does it have to be an integral part of the OS? Why do I need HTML rendering if I only want Windows to act as a DNS or DHCP server?
Reboots are largely for the benefit of ignorant end users and usually aren’t necessary at all.
They may be necessary or not, but the installation software is FORCING you to do it!
If explorer is crashing your entire system (and not just the shell), your system has big problems – as would any Linux system where X and/or KDE/GNOME/whatever were crashing the whole system (and I have seen it happen).
I haven't seen it for a long time either, but I have seen it. I have even seen WinWord crash my system. But I have never, in the last 3 years, seen OpenOffice.org crash my Linux system. Never ever!
No problems whatsoever. Your system is broken (or you’re using Windows 95/98/Me).
No! I have seen that on Windows NT/2K/XP.
Firstly it’s an application problem, not an OS problem. It’s not Microsoft’s fault if Nero can’t write their software properly.
In some ways you are right, but you forget the fact that Microsoft needs to publish the information on how to integrate that stuff into the system.
I once had the full MSDN Universal subscription. The MS salesperson assured me that I would get everything from A to Z about every application from MS (except some games and other small special stuff) and that I would get all the documentation of the Windows API. Then after a while I read on the net that MS had 217 unpublished API calls and other protocol stuff which they had hidden from others. After one year I did not renew the subscription. The MS salesperson called me again asking why I did not renew. I just answered that they promised to deliver EVERYTHING and that I did not get everything! And paying every year for something which is not complete renders the whole thing useless to me! They can find other idiots to develop on their platform. Now I have a subscription again, but I did not pay for it! A customer wanted me to develop an application for imaging and DMS, and this task required deeper documentation of the Windows API. So I told the customer that I could do it, but that he pays for the MSDN, because I will in no way buy it! And I will never buy it! Not under the above conditions. I don't like it when they lie to me!
Secondly, it usually *is* possible, even with broken applications, by spending a little time figuring out what the application is trying to do that it can’t and granting specific permissions on files, directories, registry keys and the like to allow it.
Yes. This is true.
5 posts on this page alone. Do I sense an emotional reaction?
…is a great article if you are a Linux zealot!
The existence of this article is proof that Microsoft's campaign to paint Linux as a security risk is working…
The article should have stated up front that it was totally biased in favor of linux and that the goals of the article were to fire back at Microsoft’s campaign. Instead they “try” (not too hard) to make it “appear” that this is somehow an objective look at both OSes, when in reality it’s a Microsoft hit piece.
Shameful…
The first really good, stable, reasonably secure multiuser, multiprocessor OS from M$ was Windows 2000. Windows NT was a poor effort and wasn't really very usable or even moderately reliable before the release of Service Pack 4, which was more than 2 years after its launch.
WinNT + SP3 was the absolute minimum that anyone in their right mind would dream of running, and anyone who tried to use SP2 or less must really like the color blue (as in BSOD).
Being multiuser has nothing to do with multiple interactive sessions.
Correct.
Then why do you insist it does, with your request to run simultaneous sessions with Word?
If it were that easy, I would really love Windows. But it is not that way. So many applications require write access here and there. It is hell to maintain.
Which is an application problem, not a Windows problem.
You can grant extremely fine-grained permissions, down to specific users, files and registry keys. Giving people full-blown admin access because some application they're running wants to write to a single file in its installation directory is like cracking a walnut with a sledgehammer.
I’ve been running an NT desktop daily since NT4 in 1996 as a regular user.
I have customers maintaining 145,000 users on Windows, and this is anything but easy. Every damn application on Windows can be different. I know this is not an error in Windows, but it is Microsoft's fault that they allowed it in the old days. And now you have applications which require you to tweak the hell out of Windows to get them running in a locked-down environment.
It's not Microsoft's fault. Windows NT has been available for 11 years now and the move from DOS-based Windows 9x to NT-based Windows has been on the cards since then (one of the primary drivers for the Windows 9x-like interface and GDI-in-kernel-space of NT4). Heck, Windows 98 and Me were never even supposed to exist (originally, NT4 was supposed to be replaced with NT5 – that became Windows 2000 – in the '98ish timeframe with a "consumer version" like XP).
Developers who haven’t been writing their applications to be compatible with NT for *at least* 5 – 6 years now cannot reasonably place blame anywhere except themselves. The move to a multiuser OS – NT – is a punch that’s been telegraphed for over a decade.
In Windows you need a hell of a lot of API calls to deal with credentials and so on until you get the stuff to work.
Ok, so your complaint seems to be that you have a lot of security restrictions to deal with to get a service to work. Why exactly is this a _bad_ thing ?
Okay… then what is your definition of multiuser?
The same definition used in computer science (I don't have any old textbooks handy or I'd quote out of one of them) – the ability to run processes in separate, distinct user contexts. Everything else – permissions, user restrictions, etc – all stem from the basic ability of the OS to say that arbitrary chunk of executable code A is being run by user B and arbitrary chunk of executable code X is being run by user Y. It's got nothing to do with how many people can use the system at once – or even if people are using the system at all – it's the fundamental ability of the OS to partition applications away with "user credentials".
I can boot up a Linux machine configured in such a way that only one person can use it interactively at once – either locally or remotely – but that in no way changes the fact that Linux is a multiuser OS.
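As a rough illustration of that definition (a sketch, not production code; it has to be started as root to be allowed to switch users, and the "nobody" account is just an assumption): the kernel tracks which user every process runs as, regardless of who, if anyone, is sitting at the machine.

# Sketch of "multiuser" in the process-context sense: the parent (started as
# root) forks and runs a chunk of code as an entirely different user.
import os
import pwd

def run_as(username: str) -> None:
    """Fork and run some work under another user's credentials."""
    entry = pwd.getpwnam(username)      # look up uid/gid of the target user
    pid = os.fork()
    if pid == 0:                        # child process
        os.setgid(entry.pw_gid)         # drop group first, then user
        os.setuid(entry.pw_uid)
        print(f"child pid {os.getpid()} now running as uid {os.getuid()}")
        os._exit(0)
    os.waitpid(pid, 0)

if __name__ == "__main__":
    run_as("nobody")                    # any unprivileged account will do
    print(f"parent still running as uid {os.getuid()}")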
You are right, you don't need it. But you need additional stuff to give WinNT the functionality to allow you certain things. WinNT does not have that kind of functionality out of the box.
Yes, it does. NT4 had a Terminal Services variant and both Windows 2000 and 2003 include TS functionality out-of-the box.
Ok. At least you can choose in *nix. In Windows you only have FAT, FAT32 and NTFS (okay… once they had HPFS).
You misunderstood my comparison, I think. Citrix exists because it does some things better than Terminal Services – just like ReiserFS exists because it does some things better than ext3 or Mozilla exists because it does some things better than IE.
Then why do you insist it does, with your request to run simultaneous sessions with Word?
The English language is probably the source of our misunderstanding here.
Which is an application problem, not a Windows problem.
….
You can grant extremely fine-grained permissions, down to specific users, files and registry keys. Giving people full-blown admin access because some application they're running wants to write to a single file in its installation directory is like cracking a walnut with a sledgehammer.
No, I was not talking about full admin access (which is stupid to grant because of one or several applications).
I’ve been running an NT desktop daily since NT4 in 1996 as a regular user.
I started with WinNT when NT 3.1 was out (it looked the same as Win3.11).
It's not Microsoft's fault. Windows NT has been available for 11 years now and the move from DOS-based Windows 9x to NT-based Windows has been on the cards since then (one of the primary drivers for the Windows 9x-like interface and GDI-in-kernel-space of NT4). Heck, Windows 98 and Me were never even supposed to exist (originally, NT4 was supposed to be replaced with NT5 – that became Windows 2000 – in the '98ish timeframe with a "consumer version" like XP).
It is Microsoft's fault! Corporations (in Europe at least) want stable systems which they can keep for at least 4 years before switching to new systems. Changing the OS every two years is a heck of a lot of work. And MS is forcing too many changes at once!
Developers who haven’t been writing their applications to be compatible with NT for *at least* 5 – 6 years now cannot reasonably place blame anywhere except themselves. The move to a multiuser OS – NT – is a punch that’s been telegraphed for over a decade.
Microsoft "telegraphs" a lot of stuff all day long. If developers followed every announcement from MS, they would never start and never finish their applications, because MS announces every second day that the new Windows XYZ will be the next big thing. And projects take time. So you basically know that when you are finished, your application will be called "legacy". Super! Great for MS (and me, because I can charge again and again and again…) but the end customers get pissed off!
Ok, so your complaint seems to be that you have a lot of security restrictions to deal with to get a service to work. Why exactly is this a _bad_ thing ?
That it is anything but transparent. The API is a monster!
The same definition used in computer science (I don't have any old textbooks handy or I'd quote out of one of them) – the ability to run processes in separate, distinct user contexts. Everything else – permissions, user restrictions, etc – all stem from the basic ability of the OS to say that arbitrary chunk of executable code A is being run by user B and arbitrary chunk of executable code X is being run by user Y. It's got nothing to do with how many people can use the system at once – or even if people are using the system at all – it's the fundamental ability of the OS to partition applications away with "user credentials".
Okay… This is my understanding of multiuser as well.
It was not me who said that!
Sorry, my bad. I just assumed it was the same person without actually looking.
Every release of Windows gets changed at the API level. Win32 is a collection of new, dead, old, and so on functions. MS could probably shrink that beast to half its size if they killed all the old and dead functions.
So which OSes are you thinking of that don’t have API changes with major releases ?
Microsoft probably could shrink Win32, but that would break a lot of old applications, and legacy support is a cornerstone of Microsoft’s success, so they’re not likely to do that any time soon. .Net is supposed to be replacing Win32 anyway.
Uhh… Linux sometimes changes too much! But the difference between Linux and Windows is that in Linux the applications get changed very quickly.
And the difference is in Windows they don’t _need_ to change as much.
NO! Don’t be so ignorant.
If his system is behaving the way he describes, it’s broken. End of story. It’s *not* normal behaviour and no-one should even consider accepting it as such.
Yes, IE has some system functionality. But HTML rendering should be integrated as a module which can be replaced or even not installed. Why does it have to be an integral part of the OS? Why do I need HTML rendering if I only want Windows to act as a DNS or DHCP server?
Because it’s part of the shell and there isn’t an option to install without the shell. Really, I don’t understand why this is such a big issue – if the machine is _only_ being used as a DHCP or DNS server, why is the shell being installed (not even in use) an important issue ? Code that isn’t running can’t be exploited.
They may be necessary or not, but the installation software is FORCING you to do it!
Rarely, these days. In any event, you shouldn’t be applying software patches outside of maintenance periods where a reboot isn’t a problem. If a service must be available 24/7 then it should have (multiple) redundant backups, again making a reboot an insignificant issue.
I haven't seen it for a long time either, but I have seen it. I have even seen WinWord crash my system. But I have never, in the last 3 years, seen OpenOffice.org crash my Linux system. Never ever!
And I've _never_ seen explorer take down an NT box either (I have seen it on Windows 9x machines). The simple fact is explorer crashing *an entire machine* is as unusual and broken as KDE, OO.o or any other userspace program crashing a Linux machine, and should be treated as such. If it's happening regularly, your machine is broken. Stop whinging and get it fixed, just like you would with a Linux box.
No! I have seen that on Windows NT/2K/XP.
Then your machine is broken, get it fixed.
In some ways you are right, but you forget the fact that Microsoft needs to publish the information on how to integrate that stuff into the system.
Given that there is no advantage whatsoever to Microsoft in withholding information from software developers to stop them writing such code, you'll have to work pretty hard to convince me there's a conspiracy behind it.
Then after a while I read on the net that MS had 217 unpublished API calls and other protocol stuff which they had hidden from others.
Did those API calls do anything existing calls could not ?
Undocumented APIs are undocumented for a reason – so people don’t use them. That’s so Microsoft (or anyone else, they’re hardly the only ones with undocumented API calls) can change or remove them without having to worry about breaking applications. Hell, half the cruft still hanging around in Win32 is probably there because people insisted on using undocumented APIs that some website or book had told them about.
I just answered that they promised to deliver EVERYTHING and that I did not get everything! And paying every year for something which is not complete renders the whole thing useless to me!
What information were you lacking that stopped you developing the software you wanted ?
Seems to me you were just looking for an excuse.
They can find other idiots to develop on their platform. Now I have a subscription again, but I did not pay for it! A customer wanted me to develop an application for imaging and DMS, and this task required deeper documentation of the Windows API. So I told the customer that I could do it, but that he pays for the MSDN, because I will in no way buy it! And I will never buy it! Not under the above conditions. I don't like it when they lie to me!
No-one is lying to you. They’re undocumented for a reason – so you don’t use them.
The first really good, stable, reasonably secure multiuser, multiprocessor OS from M$ was Windows 2000. Windows NT was a poor effort and wasn't really very usable or even moderately reliable before the release of Service Pack 4, which was more than 2 years after its launch.
WinNT + SP3 was the absolute minimum that anyone in their right mind would dream of running, and anyone who tried to use SP2 or less must really like the color blue (as in BSOD).
NT4 was fine. I ran it from Beta 2 (early 1996) all the way through to the release of Windows 2000 and had a grand total of four BSODs, three of which were due to hardware failures and one of which was caused by McAfee software breaking with SP2.
Of course, I wasn’t running it on the cheapest, nastiest hardware I could find, either, which might have had some impact.
IIS is deeply wired into the Windows kernel in an effort to match the performance of Unix web servers — pretty much any major vuln in IIS leads to the entire box being owned
Only the HTTP listener component runs in kernel space AFAIK.
Not to mention the httpd in the Linux kernel is somehow different, right ?
The Windows registry is a single point of failure for the entire OS and an easy target for hackers
Like /etc, you mean ?
Internal boundaries of the OS are extremely porous, especially in the GUI — any application can talk to any other application through the windowing system, which means that regardless of how unprivileged the vulnerable program is, there is always a possibility of escalating privileges through other applications/services
Source ?
And any buffer overrun or crack in the Windows GUI or webserver can be exploited to take control of the entire system.
Which would never happen on a unix box to a process running as root, would it ?
So DRI isn’t in the Linux kernel?
There is no graphics code in the kernel. The DRM is in the kernel, but that’s a management component. It handles things like detecting the card, transferring DMA packets to the card, and handling interrupts. The actual graphics code runs in userspace, and calls the DRM to transfer the resulting command packets.
Maybe they took it out when they (Finally) removed the kernel httpd.
The kernel httpd is an experimental feature. Nobody uses it in a production environment. It’s not even compiled in by default. Apache certainly doesn’t use any kernel space components. IIS, on the other hand, has critical HTTP handlers run in kernel space to jack up the performance numbers.
> The Windows registry is a single point of failure for the entire OS and an easy target for hackers
> Like /etc, you mean ?
The parallel you're trying to draw between the registry and /etc is just lame. The contents of the /etc directory are just configuration files and are very resilient to failures — misconfigurations are easily identifiable and fixable. The registry, on the other hand, is an absolutely non-transparent, monolithic, cobbled-up hairball that can take down the whole ship with just a single misconfiguration. Plus, fixing the registry is close to impossible — you end up reinstalling the entire OS just because of that. Sorry, Windows sucks.
NT4 was fine. I ran it from Beta 2 (early 1996) all the way through to the release of Windows 2000 and had a grand total of four BSODs, three of which were due to hardware failures and one of which was caused by McAfee software breaking with SP2.
Of course, I wasn’t running it on the cheapest, nastiest hardware I could find, either, which might have had some impact
I still have that cheap piece of nasty hardware – a multi-CPU box that has stably run, in no particular order:
BeOS: 4.5, 5.0
FreeBSD: 3.5, 4.3, 4.6.2, 4.7
Linux: over a dozen distros with kernels from 2.2.14 – 2.6.2, some of which I patched and compiled myself
Solaris x86: 2.6 (poorly but few lockups), 2.7(slow but reasonably stable), 8, 9
Windows: NT (from release to SP5 – see above); 2000 (up to SP3); XP up to SP1; 2003 Server (will only run single-CPU)
Only one of the major problems I had with NT was due to hardware failure – a hard drive containing an 8-gig NTFS partition. A fresh install of NT 4 on a different drive choked while trying to read it but, fortunately, I was able to use Linux to retrieve about a gig of data before the drive failed completely.
Stupid cut ‘n paste!!
The parallel you're trying to draw between the registry and /etc is just lame. The contents of the /etc directory are just configuration files and are very resilient to failures — misconfigurations are easily identifiable and fixable.
I’m sure thousands of people who have accidentally put the wrong thing into one of those configuration files and rendered their systems – or important aspects of their systems – unusable will disagree.
The registry, on the other hand, is an absolutely non-transparent, monolithic, cobbled-up hairball that can take down the whole ship with just a single misconfiguration.
As can the wrong thing in /etc – your point?
Plus, fixing the registry is close to impossible — you end up reinstalling the entire OS because of it. Sorry, Windows sucks.
I can’t even remember the last time I had a corrupted registry on an NT install, so your implication that it’s anything close to a common occurrence is simply false (as is your similar implication regarding hand-editing the registry).
If your registry gets corrupted – either by some mysterious gremlin in the works or by stuffing up editing it – you simply restore it from the last backup (or “last known good”). Problem solved.
Any mainstream OS has several single points of failure. To try and imply the Registry is the only one is nothing more than blatant FUD.
It seems they have a lot of Linux exploits there too.
> I’m sure thousands of people who have accidentally put the wrong thing into one of those configuration files and rendered their systems – or important aspects of their systems – unusable will disagree.
Thousands of people having such a malady on Unix would just boot from external media into single-user mode, mount the root filesystem and fix the misconfiguration. Simple as that! I bet you can’t do that on Windows, the OS that was designed to be disposable — if it doesn’t work, reinstall.
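For what it’s worth, a minimal sketch of the rescue being described, assuming a live CD or rescue floppy shell; /dev/hda2 and the fstab example are made up — substitute your own root partition and whichever file you broke:

mkdir -p /mnt/sysimage             # create a mount point in the rescue environment
mount /dev/hda2 /mnt/sysimage      # mount the damaged root filesystem
vi /mnt/sysimage/etc/fstab         # fix whichever file was misconfigured
umount /mnt/sysimage               # clean up
reboot                             # boot back into the repaired system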
> I can’t even remember the last time I had a corrupted registry on an NT install, so your implication that it’s anything close to a common occurrence is simply false (as is your similar implication regarding hand-editing the registry).
Any virus, or a vulnerability in a program with enough privileges to modify the registry, could easily accomplish that. If it hasn’t happened to you yet, it certainly could any day.
> If your registry gets corrupted – either by some mysterious gremlin in the works or by stuffing up editing it – you simply restore it from the last backup (or “last known good”). Problem solved.
Not convincing. If your “last known good” registry happens to be pretty old and there is a bunch of changes that have taken place on the system since, you still end up with a system that is as dead as fried chicken.
Sorry, Windows *sucks*!
I’m sure thousands of people who have accidentally put the wrong thing into one of those configuration files and rendered their systems – or important aspects of their systems – unusable will disagree.
If that ever happens, you can boot off a live CD or a floppy distro and fix that file. I think it is not the same for the Windows registry.
I can’t even remember the last time I had a corrupted registry on an NT install, so your implication that it’s anything close to a common occurrence is simply false (as is your similar implication regarding hand-editing the registry).
I only remember having such problems on Windows 9x, not on 2000 or XP, but I have to admit that I rarely boot into Windows on my PC (only to play some games or to compile a program on Windows to test whether it works).
> It seems they have alot of Linux exploits there too.
Only the Linux exploits are pretty ancient, and you’re going to have a hard time finding a vulnerable system nowadays. Windows exploits, on the other hand, are pretty current, and there is still a good chance that you can target at least 90% of all Windows systems out there that aren’t protected by firewalls. Have some fun at work with Metasploit and point out to your CIO how secure Windows is! (NOT)
Sorry, Windows *sucks*!
I don’t know if Windows sucks; I think it’s just not for me and not for you. If other people are happy with Windows, that’s OK with me — they’re the ones who will suffer, not me.
The only thing I miss in Linux is file encryption (and, less important for security, compression) that is as tightly integrated and transparent as in NTFS. Especially important is that it should not require an entire filesystem to be mounted via a loop cipher, but should allow per-user (or even multiple passwords per user) encryption. I think this would lower the local attack potential against Linux significantly.
Here are some basic flaws.
1) IE integrated into the OS.
2) Using ActiveX.
3) Combining ActiveX and IE to update your system.
4) Integrating Outlook into the OS.
5) Integrating WMP into the OS.
6) Integrating video into Ring 0 for better performance.
I’ve got some questions, and hopefully some brave soul can comment on them.
1) Why does NTOSKRNL.EXE call home to MS via port 80, through any browser? What’s with the stealth actions?
2) Why is port 1025 open? Yes, I know that anything under 1024 is in the danger zone; however, it links to an application that I can’t find. This is on a default W2K install.
3) RPC port 135 (epmap). No way to close it; it has to be blocked with a firewall.
4) Why are NetBIOS and friends enabled by default?
5) MS doesn’t document all the services and ports used by their OS. This obscurity wreaks havoc on the users/admins running their OS.
6) Why is the hard drive exported to the world by default?
7) The sudo-like function doesn’t always install software properly.
8) Why does service.exe call home to MS?
9) Why do servers need WMP?
10) Why does the system allow inbound connections to port zero?
11) Do you know how many settings you have to go through in the registry to secure 1 box? I have better things to do with my time.
Viruses, trojans and others can infect your system just by receiving an email or viewing a web page. Yes, you can turn your settings up to High to avoid some of these issues. Ever surfed the web with those settings? Yes, this is W2K, not Server; however, most of the points listed apply to the server as well.
I only have 10 services enabled, and 2 of them are for Nvidia.
Also, system updates can only be done via IE (OK, you can order a CD of patches, which comes out once a year if you’re lucky).
MS announced that there will be no more patches for IE on anything older than XP unless you pay for a browser upgrade. What a way to hold users of your product hostage. Never mind that MS said they would support the OS for 10 years. (Flawed by design — it’s called greed.)
These are just a few rants off the top of my head. And before any of you think that I am trolling, think twice. I have been using MS products since 1980. Anyone here remember edlin or zero wait states?
20 years of viruses, trojans, malware and overpriced software. Ever have to re-register software because of an HD crash, a new mobo, a new CPU and so on? How much indignity can users take?
FYI: I have been creating software images for more years than I can count, but that does create issues with new boards or processors (not HDs).
The software is flawed, but that’s because it comes from a company that is flawed. It’s almost 3 am here and I have to apologize for my semi-completely incoherent rants.
“1) Why does NTOSKRNL.EXE call home to MS via port 80, through any browser? What’s with the stealth actions?”
Because you’re party to some strange conspiracy theory? Because your system’s been compromised? Neither seems unlikely given your post.
“2) Why is port 1025 open?”
MS RPC. Ten seconds on Google rather than uninformed spouting might make you more credible.
“3) RPC port 135 (epmap). No way to close it; it has to be blocked with a firewall.
4) Why are NetBIOS and friends enabled by default?”
Because NT and beyond are network operating systems. Oddly enough they offer network services. For years NetBIOS has been the protocol of choice for offering those services.
“5) MS doesn’t document all the services and ports used by their OS. This obscurity wreaks havoc on the users/admins running their OS.”
Specifics. It’s not good enough to parrot something you heard on slashdot.
“6) Why is the hard drive exported to the world by default?”
It’s not. It’s restricted by user at least. Remote connections do not expose the server service by default. NIC Bindings, firewalls etc can be used to further restrict exposure.
“7) The sudo-like function doesn’t always install software properly.”
Probably because of poorly implemented install packages
“8) Why does service.exe call home to MS?”
Evidence? If you mean svchost.exe, it’s because it’s the host for Automatic Updates and error reporting, for a start.
“9) Why do servers need WMP? ”
Terminal Services. Some components are needed for Windows Media Services. Media services have been a part of a general purpose OS for a decade now. As a general purpose OS, Windows supplies them. Applications benefit from the presence of a consistent reliable media infrastructure.
“10) Why does the system allow inbound connections to port zero?”
Because that’s what all operating systems have been doing for years.
“11) Do you know how many settings you have to go through in the registry to secure 1 box? I have better things to do with my time. ”
One of those things might be to learn scripting, group policy or even *.reg files. It’s stupid to complain that a design is flawed because you don’t understand it.
If you’re up to date, you would have had to be unlucky to get any virus over the last 5 years.
In short, your post points more towards user incompetence than MS greed or flawed design.
Thousands of people having such a malady on Unix would just boot from external media into single-user mode, mount the root filesystem and fix the misconfiguration. Simple as that! I bet you can’t do that on Windows, the OS that was designed to be disposable — if it doesn’t work, reinstall.
Actually, it works in Windows in just about the same way, except that Windows has a registry revision system built in (‘System Restore’) which allows for more flexible ways to restore a Windows system.
Please consult http://support.microsoft.com/?kbid=307545 for the exact explanation.
– IIS is deeply wired into the Windows kernel in an effort to match the performance of Unix web servers — pretty much any major vuln in IIS leads to the entire box being owned
You are confusing (either deliberately or through ignorance) two separate systems here. There is the HTTP kernel driver, which can only serve static content and can be disabled (consult http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ii… and look up ‘IIS 5.0 isolation mode’ for the exact information), and there is the IIS application, which is able to run script engines and do other tricks.
IIS itself runs under the ‘SYSTEM’ account, which basically means it has a lot of privileges (100% comparable to the Apache ‘mother’ process, which has root privileges); it then drops privileges to IUSR_(servername) to run the files.
The internal boundaries of the OS are extremely porous, especially in the GUI — any application can talk to any other application through the windowing system, which means that no matter how unprivileged the vulnerable program is, there is always a possibility of escalating privileges through other applications/services
Escalation through other services can only happen if the target service is already broken. Arguing that an OS has bad security based on the fact that there are broken applications doesn’t prove that the OS is broken – it proves that the applications are broken.
Besides this, you are forgetting that privileged service accounts should really run on another WinStation (look it up on MSDN) and that there is a very good and thorough security model separating access between applications running on different WinStations.
Windows 2003 server is the most secure version of Windows so far. Windows 2000 server was FULL of holes.
The funny part of all this is that an OS from a multi-, multi-billion-dollar company is being compared to an OS created in a dorm room!
You would think that M$ would have made their OS more secure a lot sooner than 2003.
People always say, “Well, there are a lot more Windows machines out there than Linux machines, so that is why there are more problems.” But that logic makes no sense, because: 1. There are thousands of Linux machines facing the Internet and running very important tasks such as DNS, web serving, routing, firewalls, etc., yet they don’t get taken down as often as Windows machines. 2. The problems in Windows would be there whether Microsoft had 100 million or 1 million machines online. A lot of the problems go back to Windows NT 3.51 (and are still in current versions of the OS). Microsoft spent too much time in the past making the software as easy to use as the Mac and did not focus on security.
It took the little Kernel that could to make MS focus on security. I bet if Linux did not exist, MS would still not be focusing on security.
Nicholas Petreley’s former lives include editorial director of LinuxWorld, executive editor of the InfoWorld Test Center, and columnist for InfoWorld and ComputerWorld. He is the author of the Official Fedora Companion and is co-writing Linux Desktop Hacks for O’Reilly. He is also a part-time Evans Data analyst and a freelance writer.
How can you take an article on this subject, from a guy with such a background, seriously?
And secondly, an operating system is only as secure as its users.
Well, it looks like we shouldn’t discuss the kernel design, OK, but there’s Windows Web Server and Windows Storage Server – isn’t that modular in Petreley’s sense?!
Why is Windows full of remote vulns/exploits that affect the system, while Linux has only local root vulns?
I don’t mean Apache/KDE/XFree86 for Linux, because they can easily be removed from the system.
Why do I have to change the settings in “System Restore” on Windows so I can delete a virus, while on Linux I only have to rm the virus?
Why can’t I use Windows Server 2003 on a P1 with 32MB of memory, while it’s possible to use Debian on a P1 with 24MB?
Why did Microsoft use FreeBSD for Hotmail when there’s a server product from Microsoft?
1. That’s debatable at best.
2. That’s an extraordinary oversimplification of removing a UNIX virus or worm. The process is much more complicated in many cases.
3. Because Windows Server 2003 is a full-featured, general-purpose OS that doesn’t restrict its features or abilities to decade-old hardware.
4. It doesn’t. It hasn’t for years. Do some research before you vomit your “pearls” of wisdom on the rest of us.
Well done — now you don’t even have to read the article, argue against its points or show where the author is biased; you just dismiss the article.
Dumb? Yes
Convenient? You bet
And to claim that an operating system is only as secure as its users is equally impressive. Sorry for the harsh wording, but to basically state that the design of an operating system has nothing to do with security is just so obviously dumb and false that words fail me to describe it.
I see all the worried Windows people have crawled out of the woodwork. Whenever a botched, Microsoft-sponsored Windows study comes out – nothing. When people are able to produce studies like this with basically no effort whatsoever, every Tom, Dick and Harry feels the need to point out the security features of Windows, which are just laughable, quite frankly.
We also get to see the quality of people out in the IT world today:
Then there’s this little gem:
“Windows has only recently evolved from a single-user design to a multi-user model.”
Windows NT has been multiuser since day 1, back in 1993 (or 1988 if you want to count from the start of development)… [it] is just the latest incarnation of an OS that was conceived, designed and built to be a multiuser OS.
Windows was never a multi-user OS. It was designed to be on every desk and in every home (direct quote from Bill Gates) and never to be networked. You can try and do it with Win95/98, but a quick edit of the registry, which everyone has access to, gets around any multi-user (and I hesitate to use that phrase) crap you’ve put on.
That was about the point I stopped taking this seriously, although continuing to read reveals someone who clearly has a massive chip on their shoulder and a distinct lack of objectivity.
Get yourself back to nursery and get out of IT because you have no idea what you’re talking about
Unless you’re running as a regular user. Shock! Horror! If you run a Windows system like you do a Linux system, it’s just as hard to get infected!
Nope. The ordinary Windows user is easily exploitable to get administrator privileges, and many viruses use those exploits. Also, in businesses you find that services such as printing in complex environments sometimes don’t work when people are logged in as ordinary users. Why do you think so many people are logged in as administrators anyway? It’s not some secret society!
Whatever you do, don’t send me your CVs because it’ll be a waste of paper.
> Escalation through other services can only happen if the target service is already broken. Arguing that an OS has bad security based on the fact that there are broken applications doesn’t prove that the OS is broken – it proves that the applications are broken.
The design and architecture of the Windows windowing system, intertwined with the kernel (IPC) mechanisms, is at fault here, not just the application. A Windows design that has IIS/IE and other userland applications wired too deep into the core of the OS makes any vuln in those applications dangerous enough to bring down the entire OS. You’ll be pissing in the wind any time you try to make Windows resilient to these flaws — it’s a futile task, because you can’t make something work when it is so fundamentally f*cked. Windows is a better fit for a gamer’s desktop and not much else.
The very first myth is about Windows being a big target and Apache running most of the web. WTF does that have to do with anything?
For God’s sake. Microsoft likes to tell us how attacked Windows is because it is popular. Apache runs two-thirds of the web and doesn’t have anything like the disruption caused by exploited IIS servers (IIS accounts for far fewer of the web sites on the Internet) and Windows boxes. That is WTF it has to do with anything.
Brain’s on next day delivery I take it?
Most exploits, trojans and viruses aren’t spread by breaking into web servers anyway, they are spread by users running something.
You’ve obviously never connected a Windows box directly up to broadband. I don’t know where people get this crap about there only being e-mail viruses and that they’re the only threat.
It is a two-pronged attack. Attack by e-mail: the virus takes over your system, usually sends itself to everyone in your address book, and uses any network connection to flood your network (or your fellow broadband users) with traffic seeking out any exploitable Windows machines and IIS servers.
Anyone who doesn’t know this is in denial.
Any OS setup correctly and managed by someone who knows what they are doing can be secure.
Unless it does several things you don’t know about.
but it shouldn’t take articles like this to prove that, because articles like this are just as skewed as the MS FUD coming out of Redmond regarding Windows security.
Read it and make a professional IT judgement. If you can’t, don’t post about it.
Under which rock do you people live?
So the last major redesign was in 1996 and the next one is going to be ~2006, with 3 product releases in between (2000, XP, 2003), but somehow that is “pretty much every release of the OS” ?
The last major re-design was Windows 2000 (thanks largely to some BSD code) because NT4 wasn’t good enough for practically anything server-based. Windows XP and 2003 changed the userspace part of the OS quite a bit.
Windows is a modular operating system; only Microsoft has never used it as such, and they plan to make it more modular — able to add and remove features — with Longhorn Server.
Microsoft were going to modularise Longhorn (especially the driver model), but because Windows is so broken they couldn’t, so they’re forking it off XP SP2.
The author does a really good job of not saying Linux is based on UNIX, which it is.
It’s a system modelled after UNIX, but it isn’t UNIX. There are some fairly fundamental differences.
Linux is not modular by design; you cannot add or remove functionality without recompiling your kernel.
Bollocks. How many enterprise users of Suse or Red Hat recompile their kernels? They never do. Drivers are built to be brought in as modules. How often does your server hardware change?
A Linux system is about much more than the kernel, but of course you knew that right?
Linux has a Monolithic kernel like DOS had a monolithic kernel.
Please don’t throw away whatever last scrap of credibility you have by comparing DOS to Linux. Come to think of it, compared to Windows, DOS actually worked.
My Windows Server 2003 racks at work are headless and I have no problem with system administration from home or anywhere else.
The definition of headless is no graphical system – and certainly no applications like IE – running on the local system. That’s not the case with Windows.
I use IE on my server to access Windows Update and to use web apps that I have.
I thought you said you remote admined these things? Clue: physically using IE installed on your server, remotely or otherwise, is not remote admin.
Sure, but your chances of getting hit by an IE flaw on the server are lower than on the desktop.
Why? The IE software is the same, the codebase is the same as XP and you have the ability to remote admin or run IIS which can presumably be seen from the outside. Jesus – do you know what a risk analysis, or common sense, is?
If Linux were to become the dominant OS tomorrow, does this guy actually think virus and malware writers intend to take their ball and go home?
Nope, but you can make it less easy for a virus writer by not providing built-in helpers for them. At the moment, all you need to write a Windows virus is Notepad, a knowledge of VBScript and a simple knowledge of ActiveX and Windows services if you want to be really nasty.
The only real security is education. Teaching people to protect their systems is key. I have run Windows servers for 3 years. I have yet to have an intrusion or a virus. Is it magic?
No, it’s called luck. Seriously, given the above, you’ve either been extremely lucky or you have your Windows servers locked away tightly on an internal network.
Given what you’ve written, I doubt whether you would actually know if you’d been exploited or had a virus anyway.
Reboots are largely for the benefit of ignorant end users and usually aren’t necessary at all.
God, so we’re all ignorant then? There are many things in Windows, especially driver and network related, that still require reboots.
as would any Linux system where X and/or KDE/GNOME/whatever were crashing the whole system (and I have seen it happen).
Simply running X cannot do this. Any locally specific graphics drivers (hardware related) can do this. Don’t confuse the two.
No problems whatsoever. Your system is broken (or you’re using Windows 95/98/Me).
If you don’t have problems with IO under fairly heavy load, then you’re not running Windows.
Firstly it’s an application problem, not an OS problem. It’s not Microsoft’s fault if Nero can’t write their software properly.
Yes it is. Microsoft provide the facilities for software installation. If that’s broken, that’s Microsoft’s fault. Don’t try and fob this off onto application developers.
Secondly, it usually *is* possible, even with broken applications, by spending a little time figuring out what the application is trying to do that it can’t and granting specific permissions on files, directories, registry keys and the like to allow it.
So we have to do all the things you like to complain about having to do on a Linux system? LOL! I don’t have time for that, sorry. Windows needs a packaging system like RPM or DEB. It doesn’t have one, and that is Microsoft’s fault.
You don’t see admins using Windows boxes set up as firewalls to protect a bunch of Linux boxes :^P
A lot of people complain about the article — did you read the whole thing?? This guy states every point with detailed proof and explains why.
I have “crashed” my Fedora before. Actually, I could turn it off by hitting the power button and it would properly shut down. If I had access to another computer to ssh in, I would be able to kill X and restart it without rebooting. The thing was, for some reason the keyboard stopped taking input and the screen froze, but the system was still very much up. It is extremely annoying, and to an ordinary user it is as bad as the whole system crashing. But it is not crashing in the same sense as Windows crashing. Mind you, I have had real crashes with Linux — kernel panics — but that was one problem when I compiled my own kernel, which had issues with a certain driver. A few kernel revisions later, this doesn’t happen anymore. I wouldn’t exactly score Windows worse from a desktop user’s point of view, but a Linux server wouldn’t have that problem. It would have its other problems, though.
One extremely good thing about Linux is that you do not have to have X running on the system, and can therefore choose not to even compile drivers for the graphics system, or just remove the modules. You can remove everything that could cause unnecessary pain — try that with Windows.
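As a rough illustration of that point, here is a minimal sketch of checking for and unloading graphics-related kernel modules; the names radeon, drm and agpgart are only examples, and what is actually loaded varies by kernel and card:

lsmod | grep -i -e drm -e agpgart -e radeon            # see which graphics-related modules are loaded
modprobe -r radeon drm agpgart                         # unload them (example names; refuses if still in use)
echo "blacklist radeon" >> /etc/modprobe.d/blacklist   # optionally keep them from loading at boot (path varies by distro)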
I’ve run Linux and I run FreeBSD without any problems…
However, I also run Windows XP without any firewall, anti-virus software etc (connected 24/7). And I never have any problems.
My XP machine has also been up for days doing heavy tasks 24/7…
I do like Unix-type OSes better (for a lot of reasons other than security), but maybe the proof depends on who’s eating the pudding?
The design and architecture of the Windows windowing system, intertwined with the kernel (IPC) mechanisms, is at fault here, not just the application.
You are aware that the Win32 subsystem is actually not the kernel, right? The IPC mechanisms of Windows as you know them are all provided through the Win32 API, which makes them not a kernel problem but a subsystem ‘problem’.
Besides this little piece of misinformation – it’s also not the IPC mechanisms that are actually at fault. Most IPC mechanisms do provide proper authentication and authorisation (mailslots and named pipes, for example) – all of them except message passing.
For message passing – as I already stated earlier – there is a far better approach: a completely sandboxed ‘messaging scope’ called a WinStation.
Could you name a few flaws in the IPC mechanisms that actually cause the vulnerabilities and that are not based on ill-written programs?
A Windows design that has IIS/IE and other userland applications wired too deep into the core of the OS makes any vuln in those applications dangerous enough to bring down the entire OS.
Again you are mistaken on several facts. IE is part of the generic user interface components of Windows, which in turn rely on the Win32 subsystem, which in turn relies on the NT kernel.
This is not intertwining ‘deep in the core of the OS’. It actually is intertwined with parts of the user interface, but that is only a problem when the UI is used in potentially unsafe situations. In that case it merely provides one extra attack vector on top of the many attack vectors already available when you have elevated privileges. Just like any other OS.
IIS is implemented as a basic driver/service model and has nothing to do with the OS.
You’ll be pissing in the wind any time you try to make Windows resilient to these flaws — it’s a futile task, because you can’t make something work when it is so fundamentally f*cked. Windows is a better fit for a gamer’s desktop and not much else.
In fact, Windows has one of the most granular security systems available in mainstream OSes at the moment, and your statements are obviously based on a lack of knowledge and experience.
And to claim that an operating system is only as secure as its users is equally impressive. Sorry for the harsh wording, but to basically state that the design of an operating system has nothing to do with security is just so obviously dumb and false that words fail me to describe it.
Well, I’m still right. Linux is definitely more secure than Windows — no doubt about it — but a dumb end user will happily enter his root password into a box when asked, or whatever.
The opposite is also possible. Someone with enough knowledge about Windows (or someone who has bookmarked Google) can make his Windows install pretty damn secure. Not as secure as a well-maintained Linux box — but still more than secure enough for decent home use.
A car can be very safe, with traction control, seatbelts and the lot. But if the driver turns traction control off, doesn’t use his seatbelt…
> In fact, Windows has one of the most granular security systems available in mainstream OSes at the moment, and your statements are obviously based on a lack of knowledge and experience.
What mainstream OSes are you talking about? Windows is at the bottom of the pile unless you draw comparisons against some ancient OS. Any up-to-date Unix-based or Unix-like OSes (AIX, Solaris, newer secure incarnations of Linux, a.k.a. SELinux) have far better implemented security. Windows can’t hold a candle to Solaris security, for instance.
Simple as that! I bet you can’t do that on Windows, the OS that was designed to be disposable — if it doesn’t work, reinstall.
I can boot to a recovery console and restore an old copy, or load the registry hive into another machine and repair it there.
Of course, if you get to the point where the registry is that badly damaged, you’re well and truly past any comparison with a damaged file in /etc anyway and more on par with extensive filesystem corruption in /.
Any virus, or a vulnerability in a program with enough privileges to modify the registry, could easily accomplish that. If it hasn’t happened to you yet, it certainly could any day.
But somehow a process under unix running as root couldn’t do similar levels of damage?
Not convincing. If your “last known good” registry happens to be pretty old and there is a bunch of changes that have taken place on the system since, you still end up with a system that is as dead as fried chicken.
As you can get with an extensively damaged unix install. Your point?
Windows was never a multi-user OS. It was designed to be on every desk and in every home (direct quote from Bill Gates) and never to be networked. You can try and do it with Win95/98, but a quick edit of the registry, which everyone has access to, gets around any multi-user (and I hesitate to use that phrase) crap you’ve put on.
Windows NT has always been multiuser.
Nope. The ordinary Windows user is easily exploitable to get administrator privileges, and many viruses use those exploits.
List these simple, unpatched exploits.
Whatever you do, don’t send me your CVs because it’ll be a waste of paper.
No fear. I don’t think I’d feel any worse being knocked back by someone who doesn’t even know basic computing history.
The last major re-design was Windows 2000 (thanks largely to some BSD code) because NT4 wasn’t good enough for practically anything server-based.
Please describe this BSD code and what it was used for.
Windows XP and 2003 changed the userspace part of the OS quite a bit.
For example ?
It’s a system modelled after UNIX, but it isn’t UNIX. There are some fairly fundamental differences.
_Fundamental_ differences ? Like what ?
The definition of headless is no graphical system – and certainly no applications like IE – running on the local system. That’s not the case with Windows.
The definition of headless is no local console or, if you’re particularly old-school, no display hardware whatsoever. “Graphical” has nothing to do with it.
I thought you said you remote admined these things? Clue: physically using IE installed on your server, remotely or otherwise, is not remote admin.
So if you don’t emulate a local login over the network (thus ruling out the ubiquitous SSH and/or telnet), what are you using to admin your unix boxes ?
Why? The IE software is the same, the codebase is the same as XP and you have the ability to remote admin or run IIS which can presumably be seen from the outside. Jesus – do you know what a risk analysis, or common sense, is?
Because IE doesn’t just mysteriously self destruct. Any exploit for it has to be *triggered* somehow.
Nope, but you can make it less easy for a virus writer by not providing built-in helpers for them. At the moment, all you need to write a Windows virus is Notepad, a knowledge of VBScript and a simple knowledge of ActiveX and Windows services if you want to be really nasty.
And pretty much all you need for a Linux virus is sh and mail – or maybe perl if you’re 31337 – hardly uncommon pieces of software on a unix machine.
God, so we’re all ignorant then? There are many things in Windows, especially driver and network related, that still require reboots.
Please learn the difference between “largely” and “always”.
If you don’t have problems with IO under fairly heavy load, then you’re not running Windows.
Looks like Windows to me.
Yes it is. Microsoft provide the facilities for software installation. If that’s broken, that’s Microsoft’s fault. Don’t try and fob this off onto application developers.
Except it’s _not_ broken.
So we have to do all the things you like to complain about having to do on a Linux system? LOL!
With broken applications, yes.
I don’t have time for that, sorry. Windows needs a packaging system like RPM or DEB. It doesn’t have one, and that is Microsoft’s fault.
MSI worked fine last I checked.
>> Not convincing. If your “last known good” registry happens to be pretty old and there is a bunch of changes that have taken place on the system since, you still end up with a system that is as dead as fried chicken.
> As you can get with an extensively damaged unix install. Your point?
The point is that on Windows you’re going to have to resort to a restore each time the registry is corrupted, instead of the simple on-the-spot fix you can do in a Unix/Linux environment — there is a huge difference.
The point is that on Windows you’re going to have to resort to a restore each time the registry is corrupted, instead of the simple on-the-spot fix you can do in a Unix/Linux environment — there is a huge difference.
“Each time”? It’s hardly an event that happens with any frequency. About as often as damage to a unix system that will require similar levels of interaction to fix, in fact.
If your unix filesystem gets corrupted and wipes out /etc or other important directories like /lib or /boot, you’re going to be spending some quality time with the backup tapes as well.
“*Bangs head off desk in disbelief*. ”
Keep banging. It might eventually do some good.
“MS doesn’t document their interfaces and services …”
Again specifics. I’d argue they’ve got the best public documentation of any OS.
“And yet people still get a free hand into your system.”
They might get a free hand into your systems. They don’t in mine.
“Probably because of a poorly implemented installation system”
No, still packages.
“Is that behaviour documented anywhere, and if so can it be locked down? ”
Yes. You can use the pretty little interface that Windows likes to give ordinary users. Or you can learn how to run a network and use Group Policy and friends to do it.
“At a guess, how many people using Terminal Services use the Windows Media Services would you say? ”
You might notice the full stop if you weren’t busy inappropriately fondling your SUSE and Gentoo CDs. There are two separate reasons why a server might need media components. And MS have moved into the 21st century and actually offer those OS services.
“Not on a server they aren’t, unless you explicitly tell it to serve multimedia content. Very few people need to. ”
Not on Linux they aren’t. Because its proponents are a bunch of masochists who can’t see the value in consistent GUIs or Media Services.
“Microsoft Marketing Press Release XXXXX. LOL! ”
Ouch. Biting. That really is an advantage. 97% of the world disagrees with you. In 5 years’ time that figure will be no better without some changes in current OSS attitudes. You’ll still be in your bedroom tenderly caressing those CDs.
“So what about locking down my system so that I don’t need to rely on updates right across the board for the system, and what about the time it takes to get a fix out, apply it to a system and test it to make sure nothing breaks in a production environment? ”
If you decide not to use updates, don’t put your system on a network. Patching’s a fact of life for all OSes these days. It’s time you got used to it. It’s time you implemented a testing process that could react in a timely manner. And you have the hide to call me incompetent.
The thing about Windows is that it is very difficult to fix a problem once it occurs. In Linux, you can back up your /etc directory easily, and all you need to do to restore the system is make sure the binaries are there and correct. Fixing the registry is something I used to do, but nowadays if Windows gives me problems, I reinstall, because fixing the registry is no fun. It’s mostly groping in the dark. With Linux, if the computer won’t load drivers or can’t find new hardware, you just check the logs, see what went wrong, and edit the correct file or install the correct driver.
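A minimal sketch of the /etc backup and restore being described (run as root; the file names and paths are only examples):

tar czf /root/etc-backup-$(date +%Y%m%d).tar.gz /etc    # back up /etc to a dated tarball
tar xzf /root/etc-backup-20040101.tar.gz -C /           # later, restore it over a broken configuration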
> If your unix filesystem gets corrupted and wipes out /etc or other important directories like /lib or /boot, you’re going to be spending some quality time with the backup tapes as well.
LOL. It looks like you’re running out of arguments to provide any real support against the “registry is a POS” proposition. If you’re falling back on filesystem corruption, then Windows is at least twice as likely to fail compared to Unix/Linux, since it can suffer from filesystem corruption and registry corruption on top of that. Windows has more single points of failure layered one on top of the other. Windows is a total POS in that respect.
“1) Why does NTOSKRNL.EXE call home to MS via port 80, through any browser? What’s with the stealth actions?”
Because you’re party to some strange conspiracy theory? Because your system’s been compromised? Neither seems unlikely given your post.
Still doesn’t answer the question.
“2) Why is port 1025 open?”
MS RPC. Ten seconds on Google rather than uninformed spouting might make you more credible.
Found what I was looking for using MS RPC and 1025. I searched quite poorly (my error).
“3) RPC port 135 (epmap). No way to close it; it has to be blocked with a firewall.
4) Why are NetBIOS and friends enabled by default?”
Because NT and beyond are network operating systems. Oddly enough they offer network services. For years NetBIOS has been the protocol of choice for offering those services.
Still doesn’t make it right.
“5) MS doesn’t document all the services and ports used by their OS. This obscurity wreaks havoc on the users/admins running their OS.”
Specifics. It’s not good enough to parrot something you heard on slashdot.
How about port 5000, off the top of my head? There are more; I just can’t remember them right now. Port 5000 is used to close connections. Please find that listed in MS documentation. Now, now, picking on Slashdot — that’s hitting below the belt, assuming that I read Slashdot. We know what happens when you assume.
“6) Why is the hard drive exported to the world by default?”
It’s not. It’s restricted by user at least. Remote connections do not expose the server service by default. NIC Bindings, firewalls etc can be used to further restrict exposure.
“7) The sudo-like function doesn’t always install software properly.”
Probably because of poorly implemented install packages
MS has encouraged the design of software for single-user mode. Ever install software and find it takes admin privileges to set it up? MS didn’t create the software, but their bad practices have encouraged it.
“8) Why does service.exe call home to MS?”
Evidence? If you mean svchost.exe, it’s because it’s the host for Automatic Updates and error reporting, for a start.
I will post my firewall logs as soon as I get back home.
“9) Why do servers need WMP? ”
Terminal Services. Some components are needed for Windows Media Services. Media services have been a part of a general purpose OS for a decade now. As a general purpose OS, Windows supplies them. Applications benefit from the presence of a consistent reliable media infrastructure.
It still doesn’t make it right. You should be able to use the shared libraries and still remove the offending program.
“10) Why does the system allow inbound connections to port zero?”
Because that’s what all operating systems have been doing for years.
I will have to test this theory. I don’t believe it is specified in RFC 1700, if I remember correctly. However, it’s been some time since I last read it.
“11) Do you know how many settings you have to go through in the registry to secure 1 box? I have better things to do with my time. ”
One of those things might be to learn scripting, group policy or even *.reg files. It’s stupid to complain that a design is flawed because you don’t understand it.
If you’re up to date, you would have had to be unlucky to get any virus over the last 5 years.
In short, your post points more towards user incompetence than MS greed or flawed design.
1) I have never gotten a virus.
2) “It’s stupid to complain that a design is flawed because you don’t understand it.”
And what don’t I understand? Please enlighten the world with your supreme wisdom.
3) “In short, your post points more towards user incompetence than MS greed or flawed design”
How so? How do you explain the fact that MS states that they will support an OS for 10 years? Recently they announced that IE will no longer be updated for anything older than XP. Yes, amazing — IE is the vehicle used to distribute patches. So are they still supporting the OS or not? I notice that you did not specifically comment on this point. My point still remains: GREED.
Nor do I comment on your less-than-stellar behavior in your postings. This discussion was purely about flaws/security issues in an operating system. You have brought it to a personal level. I can argue rationally; can you? This must make you an MSDN reader, just as I must be a Slashdot reader.
I look forward to your next post.
“In Linux, you can back up your /etc directory easily”
In Windows, you start regedit, go to File -> Export, and end up with a 30MB text file.
OK – here we go .
Windows NT has always been multiuser.
You quite clearly stated that Windows has been multi-user since about 1988. This is quite clearly not the case at all and shows a woeful lack of knowledge.
My definition of multi-user is that the thing actually works:
1. Printing and other services always work as an ordinary user in multi-user, roaming environments.
2. Software installed as administrator shows up in every user’s menus and works for every user on the system. This often doesn’t work at all – and don’t blame application installers.
3. Installing software is locked down. Users cannot half-install software or run applications from their home directory.
…and so on and so forth, which is why people run Windows as administrators. They haven’t got time to muck about with this crap.
A multi-user environment depends on security and permissions being implemented properly. Maybe one day you’ll pick up the cluestick and realise that Windows doesn’t do this.
That’s my definition of a multi-user system – that it works.
List these simple, unpatched exploits.
The fact that they’re patched or not is pretty irrelevant. They quite simply shouldn’t be there:
http://www.maths.usyd.edu.au:8000/u/psz/securepc.html
Look under the section, “Do not rely on Win2k security”, and much of this applies to Windows XP as well since it is Windows 2000 with some minor architectural changes.
What am I going to do? Apply several thousand patches every time I install a system? One of these days you’ll realise that patching in the scheme of security is an absolute last resort.
No fear. I don’t think I’d feel any worse being knocked back by someone who doesn’t even know basic computing history.
Coming from someone who thinks Windows has been multi-user since 1988, that’s quite funny.
Please describe this BSD code and what it was used for.
TCP/IP stack, BIND (DNS), network services…. What on Earth do you think Active Directory is?
For example ?
You’re the Windows/computing history expert. Google is but an address change away.
_Fundamental_ differences ? Like what ?
For someone who professes to tell us that this report is BS and that Windows is perfectly OK compared to Linux, that’s not funny. Please find out how each is structured before you comment.
The definition of headless is no local console or, if you’re particularly old-school, no display hardware whatsoever. “Graphical” has nothing to do with it.
It is perfectly possible for a system to be headless and still have an entire graphical subsystem (with IE and everything else) running within it, like Windows does. That is a security risk, and it is not being headless. When Windows has a purely console setup with no overhead, then it will go some way toward matching this definition.
So if you don’t emulate a local login over the network (thus ruling out the ubiquitous SSH and/or telnet), what are you using to admin your unix boxes ?
Well, in the case of Windows you’d have a simple Windows Update admin daemon running – or a unified way of remote admining the system with very little dependency on the server side. Running IE on the server and farming it out through Terminal Services is not secure and is not really remote administration in my book.
Because IE doesn’t just mysteriously self destruct. Any exploit for it has to be *triggered* somehow.
Unfortunately, that trigger is pretty damn easy to squeeze for a Windows system that is connected directly to the internet and is providing services. If IE isn’t necessary (which it isn’t) it shouldn’t be running or installed.
And pretty much all you need for a Linux virus is sh and mail – or maybe perl if you’re 31337 – hardly uncommon pieces of software on a unix machine.
Woeful lack of knowledge and pretty naive. If you don’t know about Linux-based systems don’t embarrass yourself. Where’s the equivalent of VBScript and ActiveX that can be directly run within an e-mail client or through a browser without user knowledge or intervention? Where’s PERL running by default, embedded everywhere that allows this to happen on a Linux-based system? There isn’t any, that’s why.
You’re seriously comparing the ease of writing a VB Script and sending it to someone with trying to get around SSH, or tricking a user into running a PERL script that wouldn’t get at the core system anyway as it is run under a user’s ID?
The problem with Windows is how easy it is to spread a virus. You might conceivably bring down a Linux box with an exploit at some point, but would it spread as far and wide as it would if you were attacking Windows? That’s the real killer.
Please learn the difference between “largely” and “always”.
You still need to reboot Windows – period. Black and white. “Largely” becomes “always” in a business environment.
Looks like Windows to me.
LOL.
Except it’s _not_ broken.
It is broken. Install various pieces of software as an administrator and then ask your users whether they have shown up in their Start menus, whether they can even run them, and whether they work. The answers will vary, but mostly they won’t be working at all, and serious manual intervention will be necessary.
With broken applications, yes.
The applications aren’t broken – you don’t get away with that line. Microsoft specifies how software will be installed, provides the interfaces and the system for doing it. If they haven’t accounted for this it is their fault.
MSI worked fine last I checked.
It didn’t have a built-in dependency system that worked the last I checked – something developers need. The Debian and RPM installation systems are about far more than simply installing static bits of software.
Keep banging. It might eventually do some good.
I doubt it – not around here.
Again specifics. I’d argue they’ve got the best public documentation of any OS.
For what they choose to document. You weren’t able to come up with an explanation as to why Windows does what it does in the way this guy says. “Oh, it’s all a conspiracy and people hate Microsoft!” is not good enough quite frankly. Come up with an exact explanation or don’t bother.
They might get a free hand into your systems. They don’t in mine.
I know they don’t – they don’t run Windows, I know what’s on them and I know what they run. That’s called proper security auditing .
How do you know anyway? Give me an exact explanation of what you do to lock down your system and how you know exactly what services are running on your machines.
No, still packages.
Nope, bad packaging system. You don’t get away with blaming application developers. They use what’s available. People have been packaging things up on Linux and UNIX based systems for years without any trouble in multi-user environments (proper ones, that is).
Yes. You can use the pretty little interface that Windows likes to give ordinary users. Or you can learn how to run a network and use Group Policy and friends to do it.
You misunderstand. I want to know exactly why this behaviour is necessary and what information is passed. If you can’t do this then it is a serious question mark and is a security risk. Your system fails a security audit.
A detailed explanation of everything is essential in a security audit. If you can’t do it, don’t comment about security and don’t comment here .
You might notice the full stop if you weren’t busy inappropriately fondling your SUSE and Gentoo CDs.
I know I’m hitting the mark now, and I know exactly who, or what, I’m talking to . I don’t think you want to be lecturing people on grammar…..
There are two separate reasons why a server might need media components. And MS have moved into the 21st century and actually offer those OS services.
Oh, two separate reasons – praise the lord for Media Services! Is that it? What are those reasons, and what happens if Media Services aren’t required, which they invariably aren’t?
You’re still missing the point that it’s a security risk – if it isn’t necessary get shot of it by default.
Not on Linux they aren’t. Because its proponents are a bunch of masochists who can’t see the value in consistent GUIs or Media Services.
That’s a very amusing comment from somebody who accuses people of fondling CDs and simply commenting from their bedroom. Did you read the article and look at the consistent GUI that is YaST in the previous one?
You still haven’t provided an explanation as to why Media Services are necessary for everyone running a server.
Ouch. Biting. That really is an advantage. 97% of the world disagrees with you.
97% of the servers in the world are not all using Windows Media Services. I don’t know where you get that idea. Look at the context of the discussion and of the article.
You’re trying to turn Microsoft’s desktop share (it isn’t 97% though) into something else. Stay on topic.
In 5 years’ time that figure will be no better without some changes in current OSS attitudes. You’ll still be in your bedroom tenderly caressing those CDs.
Not any sort of answer, and pretty much describes exactly who you are. For those of us actually doing something with Linux and Open Source software, and undercutting clueless idiots who insist on peddling SBS and Windows , that’s somewhat disconnected from reality.
If you decide not to use updates, don’t put your system on a network. Patching’s a fact of life for all OSes these days.
Right, so no Windows system should be connected to the Internet? That’s a good start.
Patching is a fact of life for Windows because it is the only damn bloody way of securing it! It’s not good enough.
It’s time you got used to it. It’s time you implemented a testing process that could react in a timely manner.
Jesus. How many incarnations of patches for Windows and Microsoft software come out every month? You’re seriously expecting people to spend the incredible amount of time it takes to test the patches, test them with Microsoft software and then test them with any third-party or in-house applications to see if they still all work? Then you’ve got to document it all for next time?!
You expect companies to shoulder all of this expense and fork out for huge amounts in server, client and Client Access licenses, buy anti-virus software (which seriously affects any security patches) employ Windows admins, employ 3rd party application developers and support companies etc. etc. for all of this?
I think you’ve hit on something there! You wouldn’t want to put all of this into a TCO report for us so that we can see it all, would you? *Laughs out loud!!*
You’re most certainly not employed if you suggest this to a company. There are those of us who like secure systems out of the box and to keep patching to a minimum. If you have to patch to this extent then it is a terribly bad sign as to how secure your system is.
There are those people who have businesses to run and other things to do .
And you have the hide to call me incompetent.
Don’t keep ramming it home – I know you are. See above.
What on Earth do you think Active Directory is?
Active Directory is based mostly on Banyan VINES StreetTalk for NT, which Microsoft acquired the rights to shortly after the demise of Banyan Systems. Both sides of this debate are playing fast and loose with the facts and appear to be equally uninformed… as usual.
Two computers sit side by side in my house. Mine runs Gentoo Linux; my wife’s runs Windows XP Home Edition. Mine is one month older than hers, and both sit behind a hardware firewall (a Linksys broadband router using NAT). As far as user responsibility goes, we’re pretty equal in keeping up with security updates. Hers falls victim to a lot more crap than mine does. The worst I’ve gotten was a few unsuccessful break-in attempts. Hers has been a victim of both viruses and spyware.
For God’s sake. Microsoft likes to tell us how attacked Windows is because it is popular. Apache runs two-thirds of the web and doesn’t have anything like the disruption caused by exploited IIS servers (IIS accounts for far fewer of the web sites on the Internet) and Windows boxes. That is WTF it has to do with anything.
First, according to the stats, 54% of Fortune 1000 companies run IIS, while only 20% run Apache. Guess what: hacks of these servers are published and discussed; hacks of some Joe Schmoe Apache server running on Linux kernel 2.0 are not visible to the public.
Stats here: http://www.port80software.com/surveys/top1000webservers/
Why can’t you ask yourself, if IIS is so bad, why US Fortune 1000 companies prefer it day after day and year after year?
Second, Apache is better than IIS, therefore Linux is more secure than Windows? Strange logic, to say the least.
By the same logic, why don’t you tell us about how secure Sendmail is — a security joke for decades — and, from the fact that security in Sendmail was and is a piece of crap, draw the same logical conclusion that Linux is inherently insecure and will never be made secure?
You might conceivably bring down a Linux box with an exploit at some point, but would it spread as far and wide as it would if you were attacking Windows? That’s the real killer.
Yes, it would. You demonstrate lack of imagination, limiting yourself by only thinking about viruses taking down Windows boxes that run under Admin account.
Let me help you. Suppose, you got exploit or virus on Linux box, properly secured, fully patched, firewalled, under user account. (That is already much more than we can tell about regular Windows box, but I don’t want to make your life too hard. Lets assume that all these users who can’t patch Windows box will run properly secured Linux.)
Suppose this is new exploit or virus- it is still undetected. You run it on inherently secure Linux under user account- so you don’t worry. You don’t care- now you behave like Windows user:), but your excuse is that inherently more secure Linux will not let anything bad happen to the desktop no matter what.
Now, what that worm/virus can do:
1. Try to guess root password. It has all time it needs to do it. A weak password can be cracked in a day, a stronger password may need a week, a strong password may demand few weeks- so what? Successful worm/virus need not hack every box it landed in- enough to get root on many home Linux desktops that will be running with weak root password.
2. Try to steal root password. Pop up ‘Please enter a root password…’ in an appropriate time, user is not supposed to remember if some functionality requires root or not. Many unexperienced users may be tricked.
3. Have fun without root access. Prove that user has much to lose. Slightly edit user tax return before he/she saves it to disk, steal user personal info from Resume documents, spam Internet with Viagra emails (no need for root to to that) or send some really nasty emails from user account to some people who get annoyed really fast (US Gov, FBI, etc.) and see how fast user computer gets vaporized from the face of the Internet by the ‘special services’ or such.
4. Be patient. Dial the 'mother server' once in a while and check whether a new Linux exploit has been found, one that gives root access to a local user. As soon as such an exploit is available and posted on the 'mother server', download it, run it, and see if you have root.
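To put rough numbers behind point 1, here is a minimal back-of-the-envelope sketch, not any particular cracking tool: it just multiplies out the keyspace for a few password styles and divides by an assumed guess rate. The 1,000 guesses per second figure is a placeholder of my own; real rates depend entirely on whether the attack is online (rate-limited su/login attempts) or offline against a stolen hash on fast hardware.

import string

GUESSES_PER_SECOND = 1_000  # assumed rate, purely illustrative

def worst_case_days(charset_size: int, length: int, rate: float = GUESSES_PER_SECOND) -> float:
    """Days needed to exhaust the whole keyspace at the given guess rate."""
    return charset_size ** length / rate / 86_400

scenarios = {
    "6 lowercase letters": (len(string.ascii_lowercase), 6),
    "8 lowercase letters": (len(string.ascii_lowercase), 8),
    "8 letters + digits": (len(string.ascii_letters + string.digits), 8),
    "12 letters + digits + punctuation": (len(string.ascii_letters + string.digits + string.punctuation), 12),
}

for label, (charset_size, length) in scenarios.items():
    print(f"{label:<36} ~{worst_case_days(charset_size, length):,.1f} days worst case")

Even with made-up numbers, the gap between a short lowercase password and a long mixed one is the point: a worm only needs the boxes at the cheap end of that table.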
+++++++++++++++++++++++++
See, it is easy to get root, and everything else you want, once a user's system is compromised, even via a restricted user account. I came up with this list in less than 30 minutes; professional crackers will do better and will have far more imagination for getting into the Linux boxes on home users' desktops, as soon as the target becomes big enough (as in the user base).
Active Directory is based mostly on Banyan-Vines Street Talk for NT which Microsoft acquired the rights to shortly after the demise of Banyan Systems.
Microsoft did not buy the rights to Street Talk at all; it made a well-judged $10 million equity investment tied to an employee training programme, which ended with Banyan scrapping its competing products and, basically, the end of Banyan.
Both sides of this debate are playing fast and loose with the facts and appear to be equally uninformed… as usual.
Please feel free to inform yourself, then:
http://www.google.co.uk/search?q=cache:bv1jhD46OQcJ:www.crn.com/Sec…
Quoted below:
Earlier this month, Westboro-based Banyan unveiled a stunning cash-for-customers swap with Microsoft. Banyan said it would effectively become a Solution Provider for Microsoft’s Exchange, Windows NT and 2000, and Active Directory products and curtail development of its own competing products. In return, Banyan received a much-needed $10 million shot in the arm from Microsoft, Redmond, Wash.
Banyan had no direct involvement in Active Directory at all other than Microsoft telling it in no uncertain terms to get off its turf.
Active Directory is an amalgamation of DNS, LDAP, Kerberos and Windows networking services. There are extremely stable BSD licensed implementations of DNS, LDAP and Kerberos that Microsoft basically lifted for Windows 2000 and made them as incompatible with the originals as they could possibly get away with whilst still trying to look open.
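As a quick illustration of that point: an Active Directory domain controller still answers ordinary LDAP queries from standard tooling. Below is a minimal sketch, assuming the third-party Python ldap3 package; the host name, credentials, and base DN are placeholders of mine, not anything from a real deployment.

from ldap3 import ALL, NTLM, Connection, Server

# Hypothetical domain controller and account; replace with real values.
server = Server("dc01.example.com", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\jdoe",
    password="placeholder-password",
    authentication=NTLM,
    auto_bind=True,
)

# A plain LDAP search, the same kind you would run against OpenLDAP.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    attributes=["cn", "memberOf"],
)
for entry in conn.entries:
    print(entry)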
Just call a duck a duck, OK?
That was an amazingly intelligent post… jeez..
Well, I think you have a point there.
There will be a lot of Joe [Linux] Users running Mandrake with zero-length user passwords. And 15+ character root passwords? Oh, please: 8 characters max.
Joe User: the ex-Windows user who enters his password and presses OK whenever he sees a prompt…
Yep. The human factor: the most insecure part of your OS.
Yes you are a hacker…
Read the post by Matt (the IP: —.vic.bigpond.net.au guy): "Not all OSes handle this operation the same way."
http://seclists.org/lists/nmap-dev/2003/Jul-Sep/0024.html
You talk a decent talk, but you don't walk the walk.
You never gave an answer on ring 0.
You never gave a good example of why an OS should have a browser embedded into its core functionality; same with WMP and so on.
Dare people say that you're an MS zealot?
The statement I addressed was:
You might conceivably bring down a Linux box with an exploit at some point, but would it spread as far and wide as it would if you were attacking Windows?
I think I made myself clear: a virus or worm running under a user account on a Linux box would not have a hard time getting the root account on that box, and with the root account (or even without it, in some cases) it would have no problem spreading as far and wide as it would if it were attacking Windows.
There are different ways for a virus to get into your computer: through a corrupted image file causing a buffer overflow, for example, or through a similarly corrupted PDF document. That is the hard way.
There is also an easy way: an email from 'Linus Torvalds' telling the user to apply the latest kernel patch, a message from the bank telling them to install secure access software, or just a good old Web download accelerator.
Now, as a courtesy to you, let me review some of your erroneous assumptions:
> And at, say 8 chars for user passwords and 15+ for root + a 10 sec time penalty for erronus login attempts, I’d bet you be busy for more than a few weeks trying to bruteforce something like ?+AxMPa(+DWfjuy,p.
A) How many users do you know who will use something like ?+AxMPa(+DWfjuy,p. as a password? If everyone were like that, we would not need IT security departments in every corporation. :)
B) A "10 sec time penalty for erroneous login attempts" only slows down a person typing at the console. Software (a worm or virus) does not have to go through your login prompt at all; once it gets hold of a password hash, it can run a dictionary attack offline, as fast as the hardware allows (see the sketch below). Google "dictionary attack".
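Purely as an illustration of the term, here is a minimal sketch of an offline dictionary attack against a single crypt(3)-style hash. It assumes the hash and a wordlist are already in hand (on a sane system /etc/shadow is readable by root only), and it uses Python's Unix-only crypt module, which is deprecated in recent Python releases; the hash and wordlist path below are placeholders.

import crypt
import hmac

def dictionary_attack(target_hash: str, wordlist_path: str):
    """Return the first word whose hash matches target_hash, or None."""
    with open(wordlist_path, encoding="utf-8", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            if not candidate:
                continue
            # crypt() re-hashes the candidate using the salt embedded in target_hash.
            if hmac.compare_digest(crypt.crypt(candidate, target_hash), target_hash):
                return candidate
    return None

# Placeholder hash and a common wordlist location; neither comes from a real system.
found = dictionary_attack("$6$saltsalt$placeholderhash", "/usr/share/dict/words")
if found:
    print("cracked:", found)
else:
    print("not in wordlist")

The interactive-login penalty never enters into it; the loop above is limited only by how fast the machine can compute hashes.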
>>Be patient. Dial ‘mother server’ once in a while, check if a new exploit for a Linux had been found
>Yes, and how does that gain you root on my system?
Amazing! You did not get that? OK, here it is: you have code running under a user account, doing what looks like nothing at all. The user ignores it the way Windows users ignore spyware right up to the point where they can't boot, except that this code does not affect performance at all. The computer is fast and responsive, with no visible problems.
Just once in a while that code dials the 'mother server' for new local privilege escalation exploits. It downloads them, runs them from the user account, and if the Linux desktop does not have the latest patch applied, it gets root.
++++++++++++++++++++++
We know that Linux can be hacked: the FSF and Debian Web sites. We know that Linux desktops can be hacked in great numbers when you can find them in great numbers: Stanford U.
We know that.
Linux is getting hold of the desktop, and we will see more examples of it getting screwed in the hands of regular, inexperienced users. You can blame the user for that, but it does not change the bottom line.
“There is also an easy way: an email from 'Linus Torvalds' telling the user to apply the latest kernel patch, a message from the bank telling them to install secure access software, or just a good old Web download accelerator.”
Never heard of md5sum?
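For anyone who has not: the idea is simply to hash the downloaded file and compare the result with the checksum the distributor publishes through a separate channel. Here is a minimal sketch using Python's standard hashlib; the file name and expected digest are placeholders, and sha256 is used rather than md5 since it does the same job with a stronger hash.

import hashlib

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large downloads never have to fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder digest
actual = file_digest("patch-2.6.8.tar.gz")  # placeholder file name
print("OK" if actual == expected else f"MISMATCH: got {actual}")

Of course this only helps if the checksum itself comes from somewhere the attacker does not control, which is exactly the weakness that fake "kernel patch" email exploits.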
“We know that Linux can be hacked: the FSF and Debian Web sites. We know that Linux desktops can be hacked in great numbers when you can find them in great numbers: Stanford U.
We know that.”
well, really?
if the debian site being defaced means that debian is insecure, then what do you say about the 11 sites belonging to microsoft that have been defaced?
http://debian-hardened.org/
http://d-sbd.alioth.debian.org/www/
you make a valid point. Linux boxes do not fall by the thousands like a row of dominoes the way they do under Windows worms and viruses, but Linux boxes can get cracked, and i also agree that as Linux gains more market share there will be more exploits for Linux…
hopefully the differences between distros can keep the domino effect from making Linux the security mess windows can be at times…
@openforce: However, I also run Windows XP without any firewall, anti-virus software etc (connected 24/7). And I never have any problems.
Then you’re incredibly lucky. Last time I tried that, my computer was owned in less than two hours.
@elevator:
You are aware that the Win32 subsystem is actually not the kernel, right?
The Win32 subsystem was moved into the kernel in NT 4.0.
The IPC mechanisms of Windows as you know it all are provided through the Win32 API which makes them not a kernel problem but a subsystem ‘problem’.
The IPC mechanisms are exposed through the Win32 API (which is in the kernel anyway), but are implemented using native NT kernel IPC.
Could you name a few flaws in the IPC mechanisms that actually cause the vulnerabilities which are not based upon ill-written programs ?
There was a buffer overflow last year in the Windows RPC code (RPC is a form of IPC), and it was used for lots of exploits.
IIS is implemented as a basic driver/service model and has nothing to do with the OS.
In IIS 6.0, the HTTP stack is implemented in a kernel driver called http.sys. The rest of the webserver is built on top of this driver. This increases performance at the expense of shoving more potentially exploitable code into the kernel.
“I have used Linux for years, connected to the internet 24/7, with no firewall or antivirus software, with no problem.”
how do you *know*?
Do you think, when your box gets rooted, a little flag pops up on top that says “CONGRATULATIONS! You’ve been cracked!”? Or the cracker gives you a phone call to let you know?
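Fair question, and the honest answer is that you have to go looking. As one crude illustration (my own, not something the poster suggested, and no substitute for a proper scanner such as chkrootkit or rkhunter): compare the processes ps reports with the numeric entries under /proc; a rootkit that filters ps output but forgets /proc can show up as a discrepancy. A minimal sketch:

import os
import subprocess

def pids_from_proc() -> set:
    """PIDs visible as numeric directories under /proc."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_from_ps() -> set:
    """PIDs as reported by the ps command."""
    out = subprocess.run(["ps", "-e", "-o", "pid="], capture_output=True, text=True, check=True)
    return {int(pid) for pid in out.stdout.split()}

hidden = pids_from_proc() - pids_from_ps()
if hidden:
    print("PIDs in /proc that ps does not report (worth a closer look):", sorted(hidden))
else:
    print("no discrepancy found on this pass")
# Processes can start or exit between the two snapshots, so treat a single
# mismatch as a hint to re-run and investigate, not as proof of a rootkit.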
“The very first myth is about windows being a big target and apache running most of the web. WTF does that have to do with anything ? I can run apache on Windows and it will have little to do with Linux.”
Um, I see you missed the point completely. This is how his argument ran, clearly stated:
*People contend that Windows attracts more viruses and worms because it is more popular than any other OS.
*If this were true then, logically, one would also expect the most popular web server to attract more exploits than any other webserver.
*However, the most popular web server (Apache) attracts fewer exploits than a less popular web server (IIS), and therefore the assertion that Windows attracts more exploits simply because it is popular cannot be true.
It’s perfectly possible to argue with this on several grounds – you can, as others earlier in this thread have, dispute the assertion that Apache is cracked less often than IIS. Or you could point out that web server exploits are generally singular (you crack ONE website, to deface it), whereas OS exploits tend to be propagating worms, so the comparison doesn’t necessarily hold. But your objection to it is invalid.
I think that’s unfair. The Reg writes on a big variety of topics, has a fairly liberal editorial policy, and employs a variety of writers with a variety of views. They have some Microsoft people, as well.
I thought this article was mostly interesting for showing another good example of Microsoft playing games with third party statistical reports. The Reg has a very good history of pulling Microsoft’s PR department up on its more egregious abuses of other people’s numbers. The rest of the stuff is standard points that have been debated for years, but the detailed critique of how Microsoft used an external survey in its marketing was interesting.
“Registry on the other hand is a absolutely non-transparent monolithic cobbled up hairball that can take down the whole ship with just a single misconfiguration.
As can the wrong thing in /etc – your point ?”
OK, I’ll bite. Suggest a single change to something in /etc which will make my system unusable.
heh. a limited set of backups of the entire system configuration is “more flexible” than just booting up and editing whatever you like? so, what you’re saying is, us Linux users would be better off if our “recovery technique” was to tar up /etc and stick it in a backup directory every so often? good idea! I’ll have to try that! sheesh.