A few readers submitted a link to The Register, where there’s an interesting piece on Windows and Linux security. The article debunks some common myths about both OSes and their associated software. Check out the bullet points or, if you have some time, the complete report.
“Ouch. Biting. That really is an advantage. 97% of the world disagrees with you.”
Riiight. So (to unquestioningly accept your figure, because life’s too short…) 97% of the world uses Windows because they have a pressing need to use Windows Media Player on their servers?
Here, have a clue. Normally I charge, but for desperate cases they’re free.
“Who We Are
Port80 Software develops tools to enhance the security, performance and user experience of Microsoft’s Internet Information Services (IIS) Web server.”
Ahh, more wonderful unbiased information.
OK, I’ll bite. Suggest a single change to something in /etc which will make my system unusable.
I’d say deleting your init scripts will pretty much fuck you up the next time you reboot.
RE: “I’d say deleting your init scripts will pretty much fuck you up the next time you reboot.”
I’ve never actually done this, but I think the reboot command will not work, so you won’t be able to reboot or shut off the machine (properly), because the reboot command needs init to run runlevel 6.
Correct me if I am wrong, but I am not about to use my computer as the guinea pig in this experiment.
>OK, I’ll bite. Suggest a single change to something in /etc
>which will make my system unusable.
>I’d say deleted your init scripts will pretty much fuck you
>up the next time you reboot.
Reboot? What is that?
rm -f /etc/mtab on a running system will at least give you some funny hard drive behavior. If you use SCSI drives or RAID systems you may be in trouble. Make sure you always use /proc/mounts, or symlink it.
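The symlink trick mentioned above can be sketched as follows. A scratch directory stands in for the real /etc (an assumption here, so the commands are safe to run); on a real box the link would be /etc/mtab itself.

```shell
# Sketch of replacing /etc/mtab with a symlink to the kernel's own
# mount table, done against a scratch directory for safety.
mkdir -p /tmp/mtab-demo/etc
ln -sf /proc/self/mounts /tmp/mtab-demo/etc/mtab
# Anything that reads the file now sees the kernel's view of mounts:
head -n 1 /tmp/mtab-demo/etc/mtab
```

The point is that /proc/self/mounts is maintained by the kernel, so it cannot go stale or get deleted out from under the tools the way a plain /etc/mtab file can.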
I’m too bored to install all that Symantec, AVP, DrWeb, Panda, anti-spyware, firewalls (most WinXP SP2 articles suggest firewalls NOT from MS), to start every work day with the question “is there a new epidemic?”, and to still hold my breath when I hear strange HD activity (database backup? trojan? spyware? CTRL-ALT-DEL -> Processes -> sort by CPU usage… YES! IB_Backup!! Still alive. At least it looks uncompromised).
Paranoia? Maybe. But I NEVER have the same doubts about my FreeBSD 4.7 box (www, ftp, squid, mail server). Same at home: I’m typing this on a WinXP SP2 box (Doom3, FarCry, PainKiller, etc.) and hoping that the Symantec antivirus updates quicker than I catch something fun surfing the net. Reboot into Fedora Core 2 and I feel much better.
Seriously, MS’s “Get The Facts” looks laughable. Isn’t it better to spend the money on security fixes than on PR campaigns? Fix the holes, THEN make the banners!
“The statement I addressed was:
You might conceivably bring down a Linux box with an exploit at some point, but would it spread as far and wide as it would if you were attacking Windows?”
Yeah. And you did this by pulling out a few examples which were more or less likely. I see that you have dropped a few of them. For the rest you could have spared your breath and simply said “Linux is as vulnerable to gullible idiot users as any other system.” I mean, even Fort Knox isn’t a very secure location for your money if the staff decides to run away with it, is it?
“I think I made myself clear: a virus or worm running under a user account on a Linux box would not have a hard time getting root on that box, and with root (or even without it, in some cases) it would have no problem spreading as far and wide as it would if it were attacking Windows.”
Depends on how you define hard. Impossible? No. Harder than Windows? Definitely, since there are usually fewer services to attack, and they mostly run with fewer privileges. Also don’t forget that you don’t need to run as root to do things like burning CDs on Linux, which is one more hurdle if you try something evil; on Windows most people already run as admin just to be able to do mundane tasks. I.e., if you go after Windows, you don’t always need to escalate your privileges, since it’s very probable you already have what you need.
“There is also an easy way: an email from Linus Torvalds telling the user to apply the latest kernel patch, a message from the bank telling them to install secure access software, or just a plain old good Web download accelerator.”
And this is the same as in the beginning: social engineering, and furthermore one of the ways I pointed out initially as the most probable to work. Thanks for stressing my point. 🙂
“A) How many users you know will use something like ?+AxMPa(+DWfjuy,p. as a password?”
Not as many as should. However, this was an example of a root password. Obviously you could use something simpler for users, like “fudFejni”. There is usually no need for the ordinary user to invoke the root password for anything other than installing software, and that isn’t something you do every day. Therefore you can afford to set this password to a messy one.
That said, in my mind the hardest part is to untrain all these people who have been conditioned on Windows to click Yes on *anything* just to get it out of their face, and who are firm believers in the religion of “when you use a computer you don’t need to use your brain”.
“Amazing! You did not get that? OK, here it is: you’ve got code running under a user account, doing what looks like nothing at all. The user ignores it like Windows users ignore spyware, right up to the point when they can’t boot; but this code does not affect performance. The computer is fast and responsive, with no visible problems. Just once in a while that code dials the ‘mother server’ for new local privilege escalation exploits, gets them, runs them from the user account, and if the Linux desktop does not have the latest patch applied, gets root.”
And the difference compared to Windows is that there you don’t have to phone home and ask about new holes.
“We know that Linux can be hacked:”
Of course it can. It’s just not *that* easy, if the distributor didn’t do an appallingly bad job and the user isn’t a gullible fool. Compare that to Windows, where #1 is already fulfilled and #2 is all too common. The threat is that a lot of #2 users move to Linux. That increases the risk, but you are still missing #1.
“But I NEVER have the same doubts with FreeBSD 4.7 box”
So, you think that it’s so secure, yep?
RE: “I’d say deleting your init scripts will pretty much fuck you up the next time you reboot.”
I’ve never actually done this, but I think the reboot command will not work, so you won’t be able to reboot or shut off the machine (properly), because the reboot command needs init to run runlevel 6.
Correct me if I am wrong, but I am not about to use my computer as the guinea pig in this experiment.
Well, I once did a rm -rf / as root… Heh.
You could screw up /etc/inittab without touching the default reset runlevel… like deleting everything but runlevel 6 in it. It would be recoverable, but it might be a PITA to do so.
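For anyone who hasn’t looked inside it, the line being talked about is the default-runlevel entry in /etc/inittab. The sketch below writes a hypothetical two-line inittab to a scratch file (an assumption, so it is safe to run) just to show what the entry looks like; lose or mangle that one line and init no longer knows which runlevel to boot into.

```shell
# A minimal, made-up /etc/inittab fragment, written to a scratch path:
cat > /tmp/inittab-demo <<'EOF'
id:3:initdefault:
l6:6:wait:/etc/rc.d/rc 6
EOF
# The single line init consults for the default runlevel:
grep initdefault /tmp/inittab-demo
```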
I once suffered from some corruption (thanks, Hans Reiser, for making such a fragile FS) and I *had* to reinstall Linux because too many files were FUBAR. I am now making backups of /etc… I think it’s just idiotic to claim that /etc is “safer” than the registry. IMO, both rely on backups.
Exactly. I think that it is that secure, and it is that secure.
You’re free to think what you want. This article is just propaganda to me. It almost presents Linux as a perfect OS. I have a hard time with articles that don’t show both sides of the coin.
By the way, as you probably already know, I am a Linux user (not completely on the desktop, but my two servers are running Linux)… so I’m not some MS shill or whatever.
Hi,
Oh boy, here we go….=)
Read through the first couple of posts, and then gave up. Seriously, read through the article, and you’ll find it’s fairly well written and informative, despite what the slavering attack dogs on this site will tell you. They just keep quoting the same sentences, “modular vs monolithic”, “multi-user”, etc. Have you guys actually read the article in its *entirety*? Try removing those ‘oh, I’m, like, so cool’ shades and do it again.
Then again, the zealots on the other side aren’t too innocent either – Windows isn’t some two-bit pile of bollocks, either. You shouldn’t need to rubbish the opposition to prove your inherent worth. Now, enough fair and impartial objective stuff – *nix rocks!!!!
(And T_Solaris owns them all, hehehe).
bye,
victor
“http://seclists.org/lists/nmap-dev/2003/Jul-Sep/0024.html
You talk a decent talk but not a good walk. ”
You make some arrogant assumptions. I can google too. And I know that most OSes respond to port 0 TCP and UDP. I know that the fingerprints for port 0 responses were exactly the same for Linux as they were for Windows. I know that even OpenBSD would respond to some port 0 communications.
“You never gave an answer to ring 0.
You never gave a good example of why an OS should have a browser embedded into its core functionality, same w/wmp and so on. ”
Because I wasn’t asked about ring 0. Because IE and WMP do not run in ring 0 no matter how much you Linux gigolos would like them to. Because a general purpose OS should have browsing and media facilities in the 1990s and 2000s. They provide standard services for other applications. That’s one of the reasons Windows is so far ahead of Linux. Consistent GUI, reliably available browsing and media services in the OS. I’d also point out that integration leads to a more modular design. A Windows system doesn’t have to have half a dozen browsers installed to satisfy the requirements of other applications. That’s a security advantage too for you zealots. One browser instead of six to keep up to date.
Active Directory is an amalgamation of DNS, LDAP, Kerberos and Windows networking services. There are extremely stable BSD licensed implementations of DNS, LDAP and Kerberos that Microsoft basically lifted for Windows 2000 and made them as incompatible with the originals as they could possibly get away with whilst still trying to look open.
Okay, let’s pretend you are entirely correct… At least Microsoft made something useful out of these programs. I am still waiting for a decent open-source system that is as easy to use as AD… or even a decent replacement for the domain concept. Not that I don’t like NIS + NFS but… let’s be frank: they suck and they are anything but secure.
A Windows system doesn’t have to have half a dozen browsers installed to satisfy the requirements of other applications
Call me stupid, but the only reason I have to install a browser on Linux is because I want a browser. And by the way, I only need to install ONE.
AD is just a directory service with the ability to store information in an LDAP-like directory. That’s pretty much it. Other programs and the OS call the framework to determine what machines do.
This is really not much different than LDAP on other systems. It’s just that they are tied together in an insecure, unscalable, proprietary and unreliable way in AD.
Yep. That’s the real facts.
Redmond v. Linux Security
On my Linux box:
ClamScan reports zero viruses;
Firestarter reports zero negative hits;
Firefox on Linux is spyware free; and
All documents unchanged since the last scan.
Walk over to my wife’s Redmond-based laptop:
Norton (I installed) reports several viruses stopped;
Spy watcher (I installed) reports a boatload of spyware stopped, as well as a key-logger;
Firewall (I installed) reports several serious attempts; and
No way to check if someone has gained access and changed her docs without going through them all.
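For what it’s worth, the “all documents unchanged since the last scan” check above needs nothing more than a checksum manifest. This is a sketch of one way to do it; the directory and file names are made up for the example.

```shell
# At "scan time": record a checksum for each document once.
mkdir -p /tmp/docs-demo
cd /tmp/docs-demo
echo "quarterly figures" > report.txt
sha1sum report.txt > manifest.sha1
# At any later point: verify nothing has been modified.
# Prints "report.txt: OK" per unchanged file, and FAILED otherwise.
sha1sum -c manifest.sha1
```

The same approach works on either OS wherever a sha1sum/md5sum tool is available, which is relevant to the reply further down asking why the Windows box couldn’t be checked the same way.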
Can you imagine organizations that work without computer technicians and run insecure systems? Good lord.
Okay, let’s pretend you are entirely correct… At least Microsoft made something useful out of these programs. I am still waiting for a decent open-source system that is as easy to use as AD…
Red Hat and Suse do it, and they tie them all together for you.
or even a decent replacement of the domain concept.
A domain in Windows is now part of a tree structure, which is now a DNS structure. Yep, Microsoft moved to using DNS domains and sub-domains like they should have done to start off with. So much for Microsoft’s old domain concept then.
Not that I don’t like NIS + NFS but… let’s be frank: they suck and they are anything but secure.
Of course, Windows networking is totally secure and gives fantastic performance.
> Okay, let’s pretend you are entirely correct… At least Microsoft made something useful out of these programs. I am still waiting for a decent open-source system that is as easy to use as AD…
If you need some serious punch, you can always use Sun ONE Directory server on Linux — it is a very full featured and easy to use product that has really matured over the years. Plus Sun ONE Directory Server will blow AD out of the water in terms of performance any time of the day
How does someone compare a filesystem corruption to a registry fault? If the filesystem corrupts, you are usually done for, and sometimes it is not the OS’s fault.
I have managed to screw up my init scripts before, and I have always managed to fix them. It required use of the rescue CD, but it helps a lot that you are running more or less the same OS while you are doing the fix. You boot off a CD, but you have the same programs available, and everything else is (mostly) the same.
The registry does not give you the chance to go directly to the problem and fix it. /etc is not always changing; it very much stays the same until you su and change something there. More than I can say for the registry. /etc is inherently much easier to back up than the registry, and sometimes these backups are automatic: you usually get backup files when you change your configuration manually, the files ending in “~”. One level of backup. You can easily replace the files you need to replace, replace all of them, edit just the one you need, etc. This is not comparable to the need to back up and restore the whole registry on the off chance that the one line that changed in the registry is what is messing everything up.
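A minimal sketch of the backup point above. The scratch paths stand in for the real /etc (an assumption, so the commands are safe to run): the whole tree is one tar command, and a single file can be pulled back out of the archive without touching anything else.

```shell
# Back up a scratch stand-in for /etc in one command:
mkdir -p /tmp/etc-demo/etc
echo 'HOSTNAME=example' > /tmp/etc-demo/etc/sysconfig-demo
tar -C /tmp/etc-demo -czf /tmp/etc-demo/etc-backup.tar.gz etc
# "Break" one file, then restore just that file, nothing else:
rm /tmp/etc-demo/etc/sysconfig-demo
tar -C /tmp/etc-demo -xzf /tmp/etc-demo/etc-backup.tar.gz etc/sysconfig-demo
cat /tmp/etc-demo/etc/sysconfig-demo
```

Restoring one member of a tar archive is the plain-text-files advantage being argued for: there is no equivalent of extracting one registry key from a monolithic binary hive backup with standard tools.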
Shh. Don’t give them ideas. <joke>
Red Hat and Suse do it, and they tie them all together for you.
Well, I don’t remember having seen something like that in Red Hat, but I have never tried SuSE. I might check it out soon, but I’m not sure they include it in their “personal edition”.
A domain in Windows is now part of a tree structure, which is now a DNS structure. Yep, Microsoft moved to using DNS domains and sub-domains like they should have done to start off with. So much for Microsoft’s old domain concept then.
Maybe. But I had more success with Windows NT4 and Windows 2000 than with Linux. “Ease of use” might be the least of your worries when you are setting up a network for >1000 users, but I believe it matters when you are setting up smaller ones.
Of course, Windows networking is totally secure and gives fantastic performance.
Well, it’s more secure than NFS… The performance is relatively good, at least on a small-to-medium network. For some reason, I get better performance with SAMBA/Windows than NFS/Linux… and yes, I have tweaked NFS. I’m checking out Coda and AFS but they are quite hard to set up. Linux has a lot of potential but, for some reason, I have never been lucky with the documentation.
If you need some serious punch, you can always use Sun ONE Directory server on Linux — it is a very full featured and easy to use product that has really matured over the years. Plus Sun ONE Directory Server will blow AD out of the water in terms of performance any time of the day
Well, I remember that Novell had a directory system (not free though) but I completely forgot Sun. Is it free (as in beer and freedom)? I doubt it is. That’s why I said an open-source system in my original post. Anyway, it’s probably a bit overkill for what I want to do…
http://www.nu2.nu/pebuilder/
For God’s sake. Microsoft likes to tell us how attacked Windows is because it is popular. Apache runs two-thirds of the web and doesn’t see anything like the disruption caused by exploited IIS servers (IIS accounts for far fewer of the web sites on the Internet) and Windows boxes. That is WTF it has to do with anything.
The disruption caused by IIS servers ? Care to elaborate on that one ? You mean blaster or one of its variants ? That wasn’t an IIS exploit, it was an RPC exploit.
You mean some exploit from 2-3 years ago? When was the last big IIS exploit that brought the world to its knees? I mean, obviously you’re a pro here and have all the answers. Please enlighten me.
You’ve obviously never connected a Windows box directly up to broadband. I don’t know where people get this crap about there only being e-mail viruses and that they’re the only threat.
You are right. I don’t do that with any computer because I usually have a hardware firewall/router in place.
They aren’t the only threat but of course you totally missed what I said. I said most viruses aren’t spread by IIS servers, which is true.
It is a two-pronged attack. Attack by e-mail, virus takes over your system, sends itself to everyone in your addressbook usually and uses any network connections to flood your network (or your fellow broadband users) with traffic to seek out any exploitable Windows machines and IIS servers.
Oh, I see, so now it does involve email. A minute ago you claimed all it took was hooking up to the web ‘naked’, which is a braindead move with any OS.
So you didn’t take the time to respond to the part of my post about blaster and variants of that nature, that do exploit a running service by default.
Like a true troll you latched on to a couple of phrases and went on the war path.
Congrats. Here’s a cookie.
Anyone who doesn’t know this is in denial.
I run Windows servers and I see the logs. Tons of old useless exploit attempts that were plugged long ago.
Wow. I guess if I weren’t patched and had no fing clue I might be concerned.
Unless it does several things you don’t know about.
Yeah, I’ll agree. That’s why I don’t recommend that n00bs set up web servers or other services open to the internet until they understand how the OS and those services tick.
Read it and make an IT professional judgement. If you can’t, don’t post about it.
I just did. It shouldn’t take articles like this, which just make the author look as uninformed as everyone else you kids like to bitch about, to prove anything.
In case you weren’t listening or had your ears blocked, I said Linux is more secure out of the box.
Happy now ? Your temp coming down yet?
Grow up man.
both sides of this debate are playing loose and fast with the facts and appear to be equally uninformed…as usual.
As usual you still find the time to post don’t ya ? haha
In a world of uninformed people you take the time to post about it. w00t.
Welcome to the club my friend. You are just as clueless as the rest. Or does that mean you have an opinion like everyone else ?
Windows 2003 Server is the most secure version of Windows so far. Windows 2000 Server was FULL of holes.
The funny part of all this is that an OS from a multi-, multi-billion dollar company is being compared to an OS created in a dorm room!
You would think that M$ would have made their OS more secure a lot sooner than 2003.
People always say “well, there are a lot more Windows machines out there than Linux machines, so that is why there are more problems.” But that logic makes no sense, because 1. there are thousands of Linux machines facing the internet and running very important tasks such as DNS, web serving, routing, firewalls, etc., yet they don’t get taken down as often as Windows machines; and 2. the problems in Windows would be there whether Microsoft had 100 million or 1 million machines online. A lot of the problems go back to Windows NT 3.51 (and are still in current versions of the OS). Microsoft spent too much time in the past on making the software as easy to use as the Mac and did not focus on security.
It took the little kernel that could to make MS focus on security. I bet if Linux did not exist, MS would still not be focusing on security.
Walk over to my wife’s Redmond-based laptop:
Norton (I installed) reports several viruses stopped;
Spy watcher (I installed) reports a boatload of spyware stopped, as well as a key-logger;
Firewall (I installed) reports several serious attempts; and
You *do* understand the difference between an *attempted* exploit and a *successful* one, right ?
No way to check if someone has gained access and changed her docs without going through them all.
Have you considered using the same method you use to make sure none of the documents on your Linux box have been modified?
You quite clearly stated that Windows has been multi-user since about 1988. This is quite clearly not the case at all and shows a woeful lack of knowledge.
I said Windows NT has always been multiuser. Whether you want to count that from when it started development (1988) or when it had its first release (1993) is up to you.
My definition of multi-user is that the thing actually works:
1. Printing and other services always work as an ordinary user in multi-user, roaming environments.
And they do.
2. Installing software as administrator shows up and works for every user on the system in their menus. This doesn’t always work at all – don’t blame application installers.
It works fine when the installer and application is written for NT. If it’s not, it’s an installer problem.
3. Installing software is locked down. Users cannot half-install software or run applications from their home directory.
And it is.
The fact that they’re patched or not is pretty irrelevant.
Actually it is. Unless you intend to propose Windows is the only OS that suffers from coding bugs.
Look under the section, “Do not rely on Win2k security”, and much of this applies to Windows XP as well since it is Windows 2000 with some minor architectural changes.
I’m wondering why I should trust (or even bother reading) _anything_ from a site written by someone with the mentality of a ten year old.
What am I going to do? Apply several thousand patches every time I install a system? One of these days you’ll realise that patching in the scheme of security is an absolute last resort.
It’s a fact of life. Software has bugs. Those bugs get fixed.
Coming from someone who thinks Windows has been multi-user since 1988, that’s quite funny.
Since your best argument that it isn’t multiuser basically boils down to stamping your foot and saying it isn’t, I’m not too concerned _what_ you think.
TCP/IP stack, BIND (DNS), network services…. What on Earth do you think Active Directory is?
And your source for this assertion that Windows 2000 was a major redesign using BSD code is…?
Even if it were, what’s the problem ? That *is* why the BSD license is written the way it is.
Well, in the case of Windows you’d have a simple Windows Update admin daemon running – or a unified way of remote admining the system with very little dependency on the server side. Running IE on the server and farming it out through Terminal Services is not secure and is not really remote administration in my book.
Please explain why you think a TS session is any different to an SSH session.
Woeful lack of knowledge and pretty naive. If you don’t know about Linux-based systems don’t embarrass yourself. Where’s the equivalent of VBScript and ActiveX that can be directly run within an e-mail client or through a browser without user knowledge or intervention?
Since neither can be without user intervention in a properly configured system, I’d have to say one of the myriad languages that tend to be easily accessible on the typical unix desktop – sh, perl, C, python – take your pick.
You’re seriously comparing the ease of writing a VB Script and sending it to someone with trying to get around SSH, or tricking a user into running a PERL script that wouldn’t get at the core system anyway as it is run under a user’s ID?
I’m comparing Apples to Apples. You, like the typical zealot, are not.
The problem with Windows is how easy it is to spread a virus. You might conceivably bring down a Linux box with an exploit at some point, but would it spread as far and wide as it would if you were attacking Windows?
Of course not. There’s an order of magnitude less Linux boxes and a much higher proportion of competent users on the Linux platform.
You still need to reboot Windows – period. Black and white. Largely and always is always in a business environment.
No business that *really* needs a service 24/7 is running it on a single machine. Period. As such, individual machine reboots are largely insignificant.
Install various pieces of software as an administrator and then ask your users whether they have showed up in their Start menus or whether they can even run them and whether they work. The answers will be various, but mostly they won’t be working at all and serious manual intervention is necessary.
Works fine for some software, thus implying the underlying system _is_ quite capable and the problem lies with individual installers.
It’s laughable you even try to bring this up as a point, though, given how poorly the average program install integrates into the GUI under linux.
The applications aren’t broken – you don’t get away with that line. Microsoft specifies how software will be installed, provides the interfaces and the system for doing it. If they haven’t accounted for this it is their fault.
Except they have. Ergo, when it doesn’t work in some cases, it *isn’t* their fault.
It didn’t have a built-in dependency system that worked the last I checked – something developers need.
You’ll have to work pretty hard to convince me MSI can’t check DLL versions.
>My dog just crapped out a bad clone of a 30 year old OS!
Must have been the same dumb dog that passed the MCSE exam. I really start to understand why people use Windows as a server, since their dogs make the decisions… pfff
Windows NT is a multi-user OS in the sense that processes have owners and whatnot. However, it has a real hard time with stuff like simultaneous logins. Apparently you can actually do it as long as no one is logged in locally in 2k3, but I’m not positive on that.
And you’re right, I don’t know what that David guy is smoking.
matt b is giving matts a bad name.
If you don’t know how to allow simultaneous multiple user logons (even if the console is in use) on either 2000 Server or 2003 you really don’t know enough to bother commenting. Terminal Services are very capable.
As well, nearly every NT/2000/XP/2003 uses multiple user logons to run services in different user contexts.
Last I checked, Terminal Services was not encrypted.
Why don’t you give up and tell us why you use Windows?
It’s not a problem for me if you admit you actually use Windows because it’s easier for you, because you are used to it.
That does not mean it’s secure, however, and as blind said:
>last i checked terminal services was not encrypted.
We are talking about security here, not whether it’s multi-user or easy to use.
Terminal Services might be available encrypted, but it may also be that you need third-party tools for it. I use ssh -X over a VPN connection; now that is quite secure.
I will not go into the rest of the things you said; while I think differently about them, you make some points, but you really lost some too.
>Actually it is. Unless you intend to propose Windows is the only OS that
>suffers from coding bugs.
Not coding bugs, but really stupid bugs that can bring the whole system down. Windows is known for that, and problems and bugs like these keep popping up every day.
>Please explain why you think a TS session is any different to an SSH
> session.
Please explain why you think TS and SSH are the same?
I will tell you why i think SSH is good.
1. SSH is very secure.
2. You can run programs as userX over an SSH connection while logged in as userB.
3. You can export your display to use the remote one.
4. You do not have to run X to use SSH.
5. SSH is encrypted.
6. You can transfer files with SSH, in SSH, not using any other protocol or program.
7. SSH runs on almost every platform, including Windows.
8. SSH is very configurable.
9. SSH is very lightweight. It consumes little memory/CPU.
10. SSH can be used in scripts for certain automated tasks.
11. SSH is very robust; I have never seen it crash or malfunction.
12. SSH can do secure FTP connections/transfers.
13. SSH is FREE as in freedom.
14. SSH is open source.
15. SSH is cool.
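A few of the points above in practice. The host “server” and the user names are placeholders (assumptions for this sketch), so those commands are shown as comments; the key generation is real and runnable with an OpenSSH client installed.

```shell
# Point 10: a key with no passphrase enables unattended, scripted tasks.
rm -f /tmp/ssh-demo-key /tmp/ssh-demo-key.pub
ssh-keygen -t ed25519 -N '' -f /tmp/ssh-demo-key -q
ls -l /tmp/ssh-demo-key /tmp/ssh-demo-key.pub

# Point 3: run a remote X program with the display exported locally:
#   ssh -X user@server xterm
# Points 6/12: transfer files inside SSH itself:
#   scp report.txt user@server:/backup/
# Point 10 again: automation from cron or a script, using the key above:
#   ssh -i /tmp/ssh-demo-key user@server 'df -h /' >> disk.log
```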
>No business that *really* needs a service 24/7 is running it on a
>single machine. Period. As such, individual machine reboots are
>largely insignificant.
I hope they will never have to depend on your skills in a real company.
Where I work we need our servers up 24 hours a day, 7 days a week.
14. SSH is open-source.
15. SSH is cool.
heh.. very convincing points
You can find ssh for Windows here:
http://www.ssh.com/products/tectia/server/
If you prefer Openssh:
http://sshwindows.sf.net
I hope they will never have to depend on your skills in a real company.
Where i work we need our servers up 24 hours a day 7 days a week.
Could you give me your IP addresses? I like unpatched systems. Thank you.
>You can find ssh for Windows here:
>http://www.ssh.com/products/tectia/server/
So? That was not the question; besides, I even mentioned Windows in my points. SSH for Windows cannot even come close to the functionality it has in a Unix environment.
If you prefer Openssh:
http://sshwindows.sf.net
I hope they will never have to depend on your skills in a real company.
Where i work we need our servers up 24 hours a day 7 days a week.
>Could you give me your IP addresses?
212.129.181.134 = redhat 7
212.129.181.133 = redhat 8
> I like unpatched systems.
You do not always have to reboot to patch.
So while they have patches, they have not rebooted for at least 2 years.
It’s true that Linux patches do not really require a reboot… unless it’s a kernel update. Updating the kernel every so often is a good idea. That said, I had a little Linux server set up for a small business (file and print, domain controller for Win2K clients, DHCP, DNS) I was supporting that ran nonstop for two and a half years (except for 2 power outages that lasted longer than the UPS allowed for). It’s nice to not have to mess with a machine all the time to keep it running well.
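The point above can be checked from a shell: userland packages patch in place, and only a kernel update needs the reboot to take effect, so comparing the running kernel against the newest one installed tells you whether a reboot is pending. The rpm query is an assumption for a Red Hat style box, so it is left commented out.

```shell
# The kernel actually running right now:
running=$(uname -r)
echo "running kernel: $running"

# On a Red Hat style system (hypothetical; uncomment where rpm exists):
# newest=$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel | sort -V | tail -n 1)
# [ "$running" = "$newest" ] || echo "newer kernel installed; reboot pending"
```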
Just noticed one of those machines of yours is still running RedHat 7…I guess if it ain’t broke there’s no need to fix it 😉
I agree, all the windows fanboys are in denial
1)You quite clearly stated that Windows has been multi-user since about 1988. This is quite clearly not the case at all and shows a woeful lack of knowledge.
“I said Windows NT has always been multiuser. Whether you want to count that from when it started development (1988) or when it had its first release (1993) is up to you.”
Actually, it depends on your definition of multi-user.
In terms of network use, multi-user is a term commonly used to define a computer capable of allowing multiple users to connect to a network. (www.computerhope.com). NT Workstations did allow this.
DrSmithy is correct in this regard; however, the topic of conversation was on OS and security issues.
In terms of operating system use, a multi-user system is a computer with an Operating system that supports multiple users at once.
David was right in his description of a true multi-user operating system. NT Workstation does not have the ability for 2 users to run separate user sessions on the same machine at the same time. Terminal Server does not apply, as it is a server-based product, not a workstation-based product.
XP now has “limited” multi-user functionality, in which you can switch between different users, but one user is normally logged out in favor of another.
2. Installing software as administrator shows up and works for every user on the system in their menus. This doesn’t always work at all – don’t blame application installers.
“It works fine when the installer and application is written for NT. If it’s not, it’s an installer problem.”
Not if the installer needs Windows API calls from pieces of the OS that Microsoft has refused to provide. I worked for a developer for software that interacted with low-level device drivers. Microsoft was unwilling to provide the suitable information for the product, and so in order to make to product work, hacks had to be put in.
Yes, the installer is directly at fault, but as a result of the OS provider.
3) What am I going to do? Apply several thousand patches every time I install a system? One of these days you'll realise that patching in the scheme of security is an absolute last resort.
“It’s a fact of life. Software has bugs. Those bugs get fixed.”
For any network or system admin, it is an incredibly inefficient use of time and resources to redo a system on which several (any number over 10) key patches have to be installed before it is connected to a network or Internet connection. DrSmithy's answer denotes either a lack of general understanding of the position of network/system admin, or a poor grasp of reasoning with regard to software security. If several patches have to be applied for security reasons, then the respectability of the product comes into question, and based on my 14 years of experience in this business, I would be convincing members of the company's management to look for something where several patches don't need to be applied at once in order for a system to be placed on my network.
4) Well, in the case of Windows you'd have a simple Windows Update admin daemon running – or a unified way of remotely administering the system with very little dependency on the server side. Running IE on the server and farming it out through Terminal Services is not secure, and is not really remote administration in my book.
“Please explain why you think a TS session is any different to an SSH session.”
Simple…RDP is a protocol with no security built into it. No encryption layer like ICA. ICA at least has 128-bit encryption built into it, same as SSH. Plus RDP is now widely used in Terminal Services, Remote Assistance and NetMeeting, and with XP is wrapped around more system functions than you can shake a stick at. It's not a secure protocol and SSH is.
5) You still need to reboot Windows – period. Black and white. "Largely" is always the case in a business environment.
“No business that *really* needs a service 24/7 is running it on a single machine. Period. As such, individual machine reboots are largely insignificant.”
Once again, a lack of understanding of general IT principles.
1) Services must be kept up as long as possible
2) Services must be updated as quick as possible.
First, some installers for software don't give you the option; they reboot your system. Second, if the installer of the product requires a Windows service to start, then reboots are the only way the systems get loaded correctly.
Third, no reboot is insignificant on a Windows machine. Each time a system is restarted, the potential for a dormant virus, security vulnerability, or piece of malware to run amok is intensified. The fewer reboots that need to be done, the better.
Service Packs, updates and hotfixes do not get applied correctly until a restart is made.
6) The applications aren't broken – you don't get away with that line. Microsoft specifies how software will be installed, provides the interfaces and the system for doing it. If they haven't accounted for this it is their fault.
“Except they have. Ergo, when it doesn’t work in some cases, it *isn’t* their fault.”
First, this is a yes/no type of response to a rational argument. The answer doesn’t really carry a lot of weight.
Second, what David may have forgotten to mention in his statement is that Microsoft specifies how software will be installed and provides, subject to their control, certain interfaces and the system for doing it.
If Microsoft provided the correct and open ended information to do this with, then 3rd party installers would work much more efficiently. You’re blaming the 3rd party; all they’re doing is going off the information “spoon-fed” to them by Microsoft.
ooh, so you’ve got two years worth of unpatched kernel vulnerabilities on those machines? I predict fun!
Both sides of this debate are playing fast and loose with the facts and appear to be equally uninformed…as usual.
As usual you still find the time to post don’t ya ? haha
In a world of uninformed people you take the time to post about it. w00t.
Welcome to the club my friend. You are just as clueless as the rest. Or does that mean you have an opinion like everyone else ?
I didn’t write the comment you quoted, but I responded to it with some facts and a link. Maybe you should actually read the comments?
This is just too easy.
I said Windows NT has always been multiuser. Whether you want to count that from when it started development (1988) or when it had its first release (1993) is up to you.
Take your pick – neither is true.
And they do.
Nope. You haven’t set up any, or anywhere near enough.
It works fine when the installer and application is written for NT. If it’s not, it’s an installer problem.
Yep, it’s an installer problem – in other words Microsoft’s. “Oh, it’s all the sly third-parties’ fault!”
And it is.
Nope. Power users? What the hell is that about? Then you start giving access inside Program Files or system32 because a user needs access to certain files… Don’t tell me – it’s all the third-party peoples’ fault.
Actually it is. Unless you intend to propose Windows is the only OS that suffers from coding bugs.
The amount of patches you install does not correlate to making a system secure.
I’m wondering why I should trust (or even bother reading) _anything_ from a site written by someone with the mentality of a ten year old.
Why not? You presumably read Windows Update. I guess you didn’t actually read it then….sigh. When someone gives you a hard and fast link with a wealth of information, you just say it was written by a ten year old.
It’s a fact of life. Software has bugs. Those bugs get fixed.
Don’t confuse bugs with security patches. Patching is not a fact of life with security – if you have to do it then you’ve already failed.
Since your best argument that it isn’t multiuser basically boils down to stamping your foot and saying it isn’t, I’m not too concerned _what_ you think.
Nope – it hasn’t been multi-user since 1988. DOS wasn’t, Win95/98 weren’t and NT/2000/XP tried to be but failed. Why? The multi-user scenarios don’t work as you always end up having to run someone as an administrator or rummaging through the individual NTFS permissions to make things work.
And your source for this assertion that Windows 2000 was a major redesign using BSD code is…?
Reading. Since you don’t read any sources or links someone gives then I will leave that as an exercise for you. I haven’t seen any from you – don’t expect to either.
Please explain why you think a TS session is any different to an SSH session.
TS? Don’t make me laugh. It has nothing to do with the service – it’s the fact that IE is installed when it shouldn’t or needn’t be.
Since neither can be without user intervention in a properly configured system, I’d have to say one of the myriad languages that tend to be easily accessible on the typical unix desktop – sh, perl, C, python – take your pick.
Doesn’t mean anything. Getting quick easy access to it a la VBScript and with applications like Outlook – that’s the issue.
I’m comparing Apples to Apples. You, like the typical zealot, are not.
You’re due for a serious eye test. You’re comparing the direct access a virus writer has to VBScript in Outlook and Windows to PERL on a Linux/UNIX-based system?! Go ahead – make my day.
Of course not. There’s an order of magnitude less Linux boxes and a much higher proportion of competent users on the Linux platform.
Maybe you didn’t read the article – Apache on Linux runs the vast majority of the internet. The popularity line for Windows is crap, done, dusted. Maybe you didn’t read when I told you how easy it is for a virus writer to use the VBScript features in Outlook and Windows. Apples with apples?
No business that *really* needs a service 24/7 is running it on a single machine. Period. As such, individual machine reboots are largely insignificant.
The vast majority of businesses do not go out like sheep and buy a dozen Windows servers just to load balance their services, or to cover for the crap performance they have found Windows gives. They consolidate on one (as small businesses do – Small Business Server?) or a small number.
Anyway, the point is that reboots are just not necessary. No other OS needs to do it like Windows.
Works fine for some software, thus implying the underlying system _is_ quite capable and the problem lies with individual installers.
Microsoft creates the specifications and the instructions for installing software, and installers follow them. Microsoft did not take software installation on a multi-user system into account, therefore it is Microsoft’s fault, and therefore Windows cannot claim to be a multi-user system that wants to be taken seriously. Therefore, it isn’t.
It’s laughable you even try to bring this up as a point, though, given how poorly the average program install integrates into the GUI under linux.
Coming from a Windows world I can see you have difficulty with things working. A Linux installer and package (RPM or otherwise) works and slots right into the system once it is installed. It has no trouble in a multi-user set up because everyone knows what is expected in such a serious environment – there are well thought out specifications and standards.
Except they have. Ergo, when it doesn’t work in some cases, it *isn’t* their fault.
Except – they haven’t. Microsoft have never taken a multi-user system into consideration when installing software, ever since Windows started. Installers followed what they were given, therefore it’s not their fault. Therefore, Windows is not a multi-user system.
You’ll have to work pretty hard to convince me MSI can’t check DLL versions.
Checking individual files results in DLL hell and an absolute total mess. And the GAC for .NET is unbelievable.
Does the installer system check the versions and dependencies of other MSI files as packages? Can the Windows system be built with MSI files to satisfy these dependencies reliably at a system level? Errrrr, no.
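For comparison, the package-level dependency resolution being asked about is conceptually simple: each package declares what it requires, and the installer orders installs so dependencies are satisfied first. Here's a toy Python sketch of that idea – the package names are invented for illustration, and this is not how any real package manager is implemented:

```python
# Toy dependency resolver: each package declares what it requires,
# and installs are ordered so dependencies always come first.
# All package names here are invented for illustration.
DEPS = {
    "webapp": ["httpd", "libfoo"],
    "httpd": ["libssl"],
    "libfoo": [],
    "libssl": [],
}

def install_order(pkg, seen=None):
    """Return pkg plus everything it depends on, dependencies first."""
    seen = seen if seen is not None else []
    for dep in DEPS[pkg]:
        if dep not in seen:
            install_order(dep, seen)
    if pkg not in seen:
        seen.append(pkg)
    return seen

print(install_order("webapp"))
# → ['libssl', 'httpd', 'libfoo', 'webapp']
```

Real tools like rpm and apt layer version constraints and conflict handling on top of this basic ordering, but the principle of satisfying dependencies at a system level is the same.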
This is just too easy.
And you’re right, I don’t know what that David guy is smoking.
Something you’re not, perhaps? You’ve had your comments totally ripped to pieces and there’s been no response out of you.
ooh, so you’ve got two years worth of unpatched kernel vulnerabilities on those machines? I predict fun!
I predict – nothing happening!
With a system designed with security in mind, even if you have got a vulnerability somewhere, the system will be cordoned off according to your needs, or it will only be exploitable in certain circumstances. If you absolutely do need to patch, it will be once in a blue moon.
The futility of patching just doesn’t seem to get through to some people…..
Fantastic. I see some people can’t take the heat.
Anyway, back to the two-pronged e-mail/network attack, because, seriously, it is important:
It involves e-mail and an attack over the network as two prongs. The virus gets past your oh-so-secure firewall using the moronic features of Windows and Outlook, and then it infects your naked network internally, because at that point it is on an unprotected network.
This happens. A firewall will not protect you in any way shape or form from this. Whatever you do, please understand that.
Hmm. I’m not convinced. Kill me if you like, but I don’t remember every kernel vulnerability in the last few years, but I’m fairly sure there was at least a remote DoS in there somewhere. And I certainly don’t believe it’s a better idea to leave your systems up 24×7 and trust your infrastructure to preserve you from kernel vulnerabilities than it is to just take them offline for a couple of minutes when a kernel problem is discovered.
but I’m fairly sure there was at least a remote DoS in there somewhere.
How is it exploited? What software does someone use to exploit this DoS vulnerability at the kernel level? You might have a vulnerability but how do you get at it? Windows makes this easy because of all the silly technology it wraps around Windows.
One of these days, we might get there.
and trust your infrastructure to preserve you from kernel vulnerabilities than it is to just take them offline for a couple of minutes when a kernel problem is discovered.
The demands of life and business don’t always allow it. It is impossible to implement patching as a solution to every conceivable security issue you might have.
and , in the end, does anyone’s opinion change?
How is it exploited? What software does someone use to exploit this DoS vulnerability at the kernel level? You might have a vulnerability but how do you get at it?
Just like that:
A bug in version 2.6 of the Linux kernel allows remote users to crash systems running SuSE’s latest enterprise and consumer software
Roman Drahtmueller, head of Linux security at SuSE Linux, said this version of the kernel is available to all Linux distributors, but as SuSE is one of the few commercial distributions to actually use the 2.6 kernel it was a priority for them to resolve the security hole quickly.
The flaw will allow a malicious remote user to crash a PC that is running one of the affected SuSE products and a firewall, by sending a specially crafted IP packet.
SuSE has advised users to disable firewall logging of IP and TCP options and to update the kernel. Version 2.6.8 of the kernel contains a fix for this bug, as does the latest version, 2.6.9, which was released last week.
Not coding bugs but really stupid bugs that can bring the whole system down. Windows is known for that, and problems and bugs like these keep popping up every day.
Like what ?
Please explain why you think TS and SSH are the same?
You missed the point. Someone was saying using TS to connect wasn’t “real” remote administration because it was really no different that sitting at the console. I want to know why anyone considers that to be different from an SSH session.
1. SSH is very secure
RDP is encrypted as well.
2. You can run programs of userX on a SSH connection while logged in as userB.
As you can over a TS session.
3. you can export your display to use the remote one
I’m not entirely sure there’s a valid comparison here. What is the objective ?
4. you do not have to run X to use SSH.
You need to run it on the server and you need to have the appropriate libraries on the X client. Apparently, since everyone considers having a dormant IE on Windows to be just as bad as running it, having dormant X libraries on a machine must be just as bad as running X.
5. SSH is encrypted
So is RDP.
6. you can transfer files with ssh, in ssh not using any other protocol or program.
As you can with RDP.
7. SSH runs on almost every platform, including Windows.
Client or server ? There’s an RDP client for just about everything.
8. SSH is very configurable
“Very configurable” is gravy. “Configurable enough” is all you need.
9. SSH is very lightweight. Consumes little mem/proc
RDP clients run on PalmPCs.
10. SSH can be used in scripts for certain automated tasks
Not really relevant to the context of this comparison.
11. SSH is very robust, I have never seen it crash or malfunction.
I’ve never seen an RDP session crash either.
12. SSH can do secure FTP connections/transfers
You mean SFTP ?
13. SSH is FREE as in Freedom.
14. SSH is open-source.
15. SSH is cool.
None of which are relevant, although I appreciate your candour.
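For what it's worth, item 10 in the list above (scripted automation) is where SSH genuinely shines. A hedged Python sketch of composing a non-interactive, batch-mode ssh invocation – the host, user and remote command are invented for the example:

```python
import shlex

def ssh_command(host, remote_cmd, user="admin", port=22):
    """Build an argv list for a batch-mode (non-interactive) ssh run."""
    return [
        "ssh",
        "-o", "BatchMode=yes",   # fail rather than prompt for a password
        "-p", str(port),
        f"{user}@{host}",
        remote_cmd,
    ]

# Example automated task: stream a backup from a (made-up) remote host.
argv = ssh_command("backup.example.com", "tar czf - /var/www")
print(" ".join(shlex.quote(a) for a in argv))
```

`BatchMode=yes` is what makes this script-safe: if key authentication fails, the command errors out instead of hanging on a password prompt.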
I hope they will never have to depend on your skills in a real company.
Where I work we need our servers up 24 hours a day, 7 days a week.
No, you need your *services* up 24/7. If you *really* need each of your *servers* up 24/7, then you are walking on a knife-edge of disaster – it’s only a matter of time.
Actually, it depends on your definition of multi-user.
No, it depends on *your* definition of multiuser. By *my* definition – and the definition used by computer scientists and OS developers everywhere – NT is multiuser.
In terms of operating system use, a multi-user system is a computer with an Operating system that supports multiple users at once.
No. Multiuser is the ability of an OS to run processes in different user contexts. Those users do not need to be logged in. Those users do not need to be interactive. Hell, those “users” don’t even have to be people.
I can set up a Linux machine such that it has no facility whatsoever for interactive use. However, this does not stop Linux being a multiuser OS. Processes still run in different user contexts and are separated from each other.
Likewise, I can set up a DOS or Windows 9x machine to allow multiple people to use the system simultaneously. However, this does not make either of those two OSes multiuser.
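That "processes in different user contexts" point is easy to see concretely. Here's a small Python sketch (Linux-specific, since it reads /proc) listing the distinct accounts that currently own processes – on a typical box this shows root plus assorted daemon accounts even with nobody interactively logged in. The helper name is mine, not from any of the posts above:

```python
import os
import pwd

def users_running_processes():
    """Distinct user accounts that currently own at least one process."""
    users = set()
    for entry in os.listdir("/proc"):
        if not entry.isdigit():        # only numeric entries are PIDs
            continue
        try:
            uid = os.stat(f"/proc/{entry}").st_uid
            users.add(pwd.getpwuid(uid).pw_name)
        except (OSError, KeyError):    # process exited, or uid has no passwd entry
            continue
    return users

print(sorted(users_running_processes()))
```

Several accounts owning processes, with zero interactive logins, is exactly what "multiuser is about user contexts, not logged-in people" means.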
David was right in his description of a true multi-user Operating System. NT workstation does not have the ability for 2 users to run multiple user sessions on the same machine at the same time.
Nor does it need to. It just needs to be able to run processes under different user contexts simultaneously. Which it can do.
Terminal Server does not apply, as it is a server-based product, not a workstation based product.
They’re all Windows NT.
Multiuser isn’t something you just “plug in”, it’s a fundamental feature of the OS’s *design* and core functionality.
XP now has “limited” multi-user functionality, in which you can switch between different users, but one user is normally logged out in favor of another.
By your definition of multiuser, a DOS box running a BBS is multiuser. I can assure you, DOS is not multiuser. Your definition is broken.
Not if the installer needs Windows API calls from pieces of the OS that Microsoft has refused to provide.
Your explanation of what possible advantage Microsoft gains by not allowing software vendors to install software on their OS (a piece of software whose primary purpose is running other pieces of software) is awaited. Should be pretty funny.
I mean, if you were trying to claim they were targeting specific software vendors I could at least see some reasoning behind it – but your implication is that they want to stop *anyone* from installing software onto their OS.
Yes the installer is at direct fault, but as a result of the OS provider.
The simple fact is _some_ companies manage to use an installer to install a piece of software that functions and is made available for all users. This _strongly_ suggests that the facility exists and functions perfectly, just some developers can’t (or don’t) use it.
Now, if _everyone_ except Microsoft had trouble writing installers and applications that worked with multiple users in NT, you might have the basis of a rational argument. But they don’t.
For any network or system admin, it is an incredibly inefficient use of time and resources to redo a system in which several (any number over 10) key patches have to be put onto a system before it is connected to a network or Internet connection.
I agree. Which is why you _automate_ the procedure. My servers roll off the “production line” completely patched and up to date before they are plugged into their first “live” network.
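That "fully patched before the first live network" gate can be sketched in a few lines. This is an illustrative toy, not anyone's real tooling, and the patch IDs are placeholders for whatever your baseline actually requires:

```python
# Gate: a freshly built machine is not allowed onto a live network
# until every required patch ID is present. The KB numbers are
# placeholders, not a real baseline.
REQUIRED = {"KB824146", "KB828035", "KB835732"}

def ready_for_network(installed):
    """Return (ok, missing): ok is True only when nothing is missing."""
    missing = REQUIRED - set(installed)
    return (len(missing) == 0, sorted(missing))

print(ready_for_network(["KB824146", "KB835732"]))
# → (False, ['KB828035'])
```

In practice the `installed` list would come from the build system's inventory, and a failed check sends the machine back down the production line rather than onto the network.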
DrSmithy’s answer denotes either a lack of general understanding of the position of network/system admin, or a poor grasp of reasoning with regard to software security.
My answer reflects my ability to do my job instead of whining at Microsoft because I don’t know how to use their products. Your claim is completely nonsensical. It would be akin to claiming BMW design their cars such that only BMW employees could drive them.
If several patches have to be applied for security reasons, then the respectability of the product comes into question, and based on my 14 years of experience in this business, I would be convincing members of the company’s management to look for something where several patches don’t need to be applied at once in order for a system to be placed on my network.
So, which OSes are you thinking of that haven’t had _any_ security related patches ever released for them ?
Simple…RDP is a protocol with no security built into it. No encryption layer like ICA. ICA at least has 128-bit encryption built into it, same as SSH. Plus RDP is now widely used in Terminal Services, Remote Assistance and NetMeeting, and with XP is wrapped around more system functions than you can shake a stick at. It’s not a secure protocol and SSH is.
RDP is encrypted.
Once again, a lack of understanding of general IT principles.
Untrue. An *excellent* understanding of general IT principles.
1) Services must be kept up as long as possible
2) Services must be updated as quick as possible.
Correct. Note: Services not Servers.
First, some installers for software don’t give you the option; it reboots your system. Second, if the installer of the product requires a windows service to start, then reboots are the only way the systems get loaded correctly.
Firstly, services can nearly always be restarted individually. Almost always, if they can’t, the service developer is at fault.
Secondly, if you’re installing software or applying updates outside of maintenance periods, wherein reboots should be permissible at any time, then it is *you* who have a lack of understanding in how to be a sysadmin. That or you’re a cowboy. Either way, you’re in no position to be lecturing me on good practice.
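To make the "restart the service, not the server" point concrete, here's a small Python sketch using the stock Windows `net` command. The service name is just an example, and the command list is split out so it can be inspected without actually running on Windows:

```python
import subprocess

def service_commands(name):
    """The two command lines needed to bounce one service, no reboot."""
    return [["net", "stop", name], ["net", "start", name]]

def restart_service(name, runner=subprocess.run):
    for cmd in service_commands(name):
        runner(cmd, check=True)    # raise if either step fails

# Inspect what would be run (executing it requires Windows):
print(service_commands("Spooler"))
```

Only when the updated files are held open by the kernel or by core system components does a full reboot become unavoidable; for an ordinary service, bouncing it like this is enough.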
Third, no reboot is insignificant on a Windows machine. Each time a system is restarted, the potential for a dormant virus, security vulnerability, or piece of malware to run amok is intensified. The fewer reboots that need to be done, the better.
Service Packs, updates and hotfixes do not get applied correctly until a restart is made.
Clearly, you have NFI what you’re talking about. *Some* service packs and hotfixes are not completely applied until the machine or service is restarted (those which replace files in use). But not all.
First, this is a yes/no type of response to a rational argument. The answer doesn’t really carry a lot of weight.
It’s a Yes/No response because that’s all the “argument” requires. There’s no discussion to be had – Windows installers are quite *capable* of doing what he says they can’t. Whether or not they *do* is up to the discretion of the developer.
If Microsoft provided the correct and open ended information to do this with, then 3rd party installers would work much more efficiently. You’re blaming the 3rd party; all they’re doing is going off the information “spoon-fed” to them by Microsoft.
I’m blaming the third party because the existence of *other* third parties who manage to get it to work fine strongly implies they are the ones at fault. Again, if *no-one* could manage to get it to work, you might have the beginnings of a point.
Take your pick – neither is true.
Millions of computer scientists and OS developers across the world disagree with you. Whose opinion do you think carries more weight ?
Pick up an Operating Systems textbook some time. NT is a standard example of a multiuser system.
Nope. You haven’t set up any, or anywhere near enough.
Printers are a nightmare under any OS. However, I’ve set up enough printers in Windows networks to know they can and do work without a hitch in multiuser environments. This suggests that if they *don’t*, it’s the fault of the printer manufacturer.
Yep, it’s an installer problem – in other words Microsoft’s.
If it was Microsoft’s fault it wouldn’t work for _anyone_.
Power users? What the hell is that about?
It’s called a preconfigured group. Something like the standard “users”, “staff”, etc groups on the average unix box.
Then you start giving access inside Program Files or system32 because a user needs access to certain files… Don’t tell me – it’s all the third-party peoples’ fault.
Firstly, I thought we were talking about locking down the user’s ability to install software, not getting broken applications to work ?
Secondly, if some developer writes their software so that it needs to write to something in system32 then it *is* their fault. Do you blame Linus if someone’s broken Linux app wants to write to stuff in /usr/lib ?
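The well-behaved alternative being described – per-user state under the user's own home directory rather than system32 or /usr/lib – looks something like this. The app name and file layout are invented for illustration:

```python
import json
import os
import tempfile

def save_settings(settings, home=None):
    """Write per-user settings under the user's own home directory."""
    home = home or os.path.expanduser("~")
    cfg_dir = os.path.join(home, ".exampleapp")   # per-user, always writable
    os.makedirs(cfg_dir, exist_ok=True)
    path = os.path.join(cfg_dir, "settings.json")
    with open(path, "w") as f:
        json.dump(settings, f)
    return path

# Demo against a throwaway "home" so nothing real is touched:
with tempfile.TemporaryDirectory() as fake_home:
    print(save_settings({"theme": "dark"}, home=fake_home))
```

An app that writes only to locations the current user owns works for every account on the machine without anyone rummaging through system-directory permissions – which is the whole point of the rebuttal above.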
The amount of patches you install does not correlate to making a system secure.
I never suggested it did. You seem pretty convinced it does though.
Why not? You presumably read Windows Update. I guess you didn’t actually read it then….sigh.
Nothing littered with “Windoze” and “M$” is worth taking seriously. The person may not be 10, but their mentality certainly hasn’t made it out of their teenage years.
Don’t confuse bugs with security patches. Patching is not a fact of life with security – if you have to do it then you’ve already failed.
So which OS (or service) are you thinking of that’s never had a security-related patch ?
DOS wasn’t, Win95/98 weren’t and NT/2000/XP tried to be but failed. Why? The multi-user scenarios don’t work as you always end up having to run someone as an administrator or rummaging through the individual NTFS permissions to make things work.
You do realise the only reason you have to bother with permissions _at all_ is because the OS is multiuser, right ?
Reading. Since you don’t read any sources or links someone gives then I will leave that as an exercise for you. I haven’t seen any from you – don’t expect to either.
I haven’t made any claims that aren’t either common knowledge or common sense.
TS? Don’t make me laugh. It has nothing to do with the service – it’s the fact that IE is installed when it shouldn’t or needn’t be.
Seems to me you were suggesting that using TS to RDP into a server to admin it wasn’t “real remote admin”. I want to know if you feel the same way about SSH and, if you do, what you use to admin your unix boxes.
Doesn’t mean anything. Getting quick easy access to it a la VBScript and with applications like Outlook – that’s the issue.
You’re due for a serious eye test. You’re comparing the direct access a virus writer has to VBScript in Outlook and Windows to PERL on a Linux/UNIX-based system?! Go ahead – make my day.
I’m comparing the availability of scripting languages on both platforms and their ability to be used destructively. I’m also noting that if you restrict your activities to a regular user account in Windows, the same way you do in unix, the damage will be similarly contained.
Maybe you didn’t read the article – Apache on Linux runs the vast majority of the internet. The popularity line for Windows is crap, done, dusted. Maybe you didn’t read when I told you how easy it is for a virus writer to use the VBScript features in Outlook and Windows. Apples with apples?
If you think that pathetic excuse for anti-Microsoft FUD and Linux cheerleading even *remotely* discredits the fact that Windows’ popularity has a significant impact on how much it is targeted and on the relative negative effects of any exploits, then you probably shouldn’t even bother trying to appear objective. Since the author is either too dumb or too biased to recognise that the *massively* different risk profiles of a unix webserver and a home desktop make any sort of comparison invalid, I’ve no reason to think they’ve applied any greater level of critical analysis to the rest of their document.
The vast majority of businesses do not go out like sheep and buy a dozen Windows servers just to load balance their services or cover for the crap performance they have found Windows giving. They consolidate on one (as small businesses do – Small Business Server?) or a small number.
I repeat, no business that *really* needs a service 24/7 runs it on a single machine. Doing so is walking a tightrope and disaster is only a matter of time.
Now, they might not *realise* they really need that service 24/7 until the first time it fails, but that is the fault of the IT manager/CIO/sysadmin/whoever, not Microsoft.
It’s laughable how people here talk about the “single point of failure” in the Windows registry being a disaster because it might affect a single machine, but then often turn around and advocate single points of failure in *the entire business process* as something to strive for.
Anyway, the point is that reboots are just not necessary. No other OS needs to do it like Windows.
And the counterpoint is that scheduled reboots are irrelevant in any properly run environment. Sure, if you measure your ego by your uptime they might be a problem, but the business world has more important issues to deal with than the insecurities of their IT staff.
Microsoft creates the specifications and the instructions for installing software, and installers follow them. Microsoft did not take software installation on a multi-user system into account, therefore it is Microsoft’s fault, and therefore Windows cannot claim to be a multi-user system that wants to be taken seriously. Therefore, it isn’t.
Then if it is inherently impossible (as you seem to be implying) how can _some_ developers manage to do it at all ?
Coming from a Windows world I can see you have difficulty with things working. A Linux installer and package (RPM or otherwise) works and slots right into the system once it is installed. It has no trouble in a multi-user set up because everyone knows what is expected in such a serious environment – there are well thought out specifications and standards.
Very few packages that install via RPM (or otherwise) modify the GUIs on the systems appropriately to make themselves obvious and available to users.
Except – they haven’t. Microsoft have never taken a multi-user system into consideration when installing software, ever since Windows started. Installers followed what they were given, therefore it’s not their fault.
So why can some developers manage to do it then ?
Therefore, Windows is not a multi-user system.
Your definition of multiuser is broken.
Does the installer system check the versions and dependencies of other MSI files as packages? Can the Windows system be built with MSI files to satisfy these dependencies reliably at a system level?
Windows is not Linux. Criticising it for not behaving like Linux is an exercise in stupidity. Linux needs all those complicated dependency checks because of its fragmented nature and the lack of any real commitment towards binary compatibility and standard environments. Windows does not.
<tirade> The mere fact that this article has garnered this many threads (and banter) should serve as an example of what _not_ to post on OSNews. This article is as unbiased against Windows as the Taliban are against America. What I see more and more around here (OSNews) is the incessant whining of .edu students who have no fucking clue what they are talking about; they simply regurgitate whatever they’ve been taught in academia. This has no resemblance to the real business world – if it did, MS would have ceased to exist years ago. The *nix banter on this topic is so typical of every compsci student I’ve known, and they’ve changed their tune to “whatever works in a given scenario” quite rapidly upon entering the professional IT realm (where you find a pretty good mix of many OSes) – and in this realm there is no time for the senseless rubbish that has been said in these threads. There will be exploits, there will be reboots, there will be [fill in the blank] for ANY OS out there; it’s up to the admins to be the guardians against whatever comes their way. Windows works, *nix works – it’s the places in which they work best that need to be recognised and applied. So, enough already: no OS reigns supreme in all areas, and it’s up to us to use our best judgment in each scenario. I happen to agree with drsmithy on most of his posts, as he seems pretty well informed and I’m guessing he’s been in the pro IT realm for a while. Christ, I’ve never ranted on this site before – guess there’s a first for everything, though. I’ve just never seen so much BS as has been presented here. </tirade>
No, it depends on *your* definition of multiuser. By *my* definition – and the definition used by computer scientists and OS developers everywhere – NT is multiuser.
It’s not just my definition of multiuser, it’s the definition in most of my manuals, it’s listed on several websites (www.computerhope.com, howstuffworks.com), and, based on my years in this business, it is the accepted definition. Yours, while precise, is not technically correct; mine is cited, yours is subjective.
In terms of operating system use, a multi-user system is a computer with an Operating system that supports multiple users at once.
No. Multiuser is the ability of an OS to run processes in different user contexts. Those users do not need to be logged in. Those users do not need to be interactive. Hell, those “users” don’t even have to be people.
True that the users do not have to be interactive, but the definition does not specify that. The definition as stated is for multiple users to have access simultaneously.
I can setup a Linux machine such that it has no facility whatsoever for interactive use. However, this does not stop Linux being a multiuser OS. Processes still run in different user contexts and are separated from each other.
Linux meets the accepted definition of a multi-user OS. I don’t dispute that.
Likewise, I can setup a DOS or Windows 9x machine to allow multiple people to use the system simultaneously. However, this does not make either of those two OSes multiuser.
That’s a neat trick. Sharing does not qualify as having multi-user capabilities. DOS and Windows 9x are not multi-user OSes per the accepted definition.
Nor does it need to. It just needs to be able to run processes under different user contexts simultaneously. Which it can do.
Once again, that is not the determining factor in the definition. The determining factor is specified, simultaneous users accessing the same resources within their own userspace, which Win9x and DOS do not allow.
Multiuser isn’t something you just “plug in”, it’s a fundamental feature of the OS’s *design* and core functionality.
True; Linux, UNIX, MacOSX and XP do have that fundamental feature. I don’t dispute that it’s part of the core functionality of those OSes. I dispute the claim that the NT, 9x or DOS operating systems had this capability, and, based on my cited information, my definition is the accepted one.
Not if the installer needs Windows API calls from pieces of the OS that Microsoft has refused to provide.
Your explanation of what possible advantage Microsoft gains by not allowing software vendors to install software on their OS (a piece of software whose primary purpose is running other pieces of software) is awaited. Should be pretty funny.
I mean, if you were trying to claim they were targeting specific software vendors I could at least see some reasoning behind it – but your implication is that they want to stop *anyone* from installing software onto their OS.
I.E. (Netscape): Netscape had no foothold on being able to interact correctly in userspace, because Microsoft withheld certain APIs that, had they been released, would have allowed Netscape to interact the same way Internet Explorer did. (Microsoft vs. DOJ)
Yes the installer is at direct fault, but as a result of the OS provider.
The simple fact is _some_ companies manage to use an installer to install a piece of software that functions and is made available for all users. This _strongly_ suggests that the facility exists and functions perfectly, just some developers can’t (or don’t) use it.
The logic of “some do it, so all should be able to” is, once again, precise but not correct.
Part of software development in Windows is based upon the nature of the product in relation to the operating system, the system hardware, and other third-party software.
Software for a comm program requires different API calls and device support than a paint program. There is no direct comparison between the two.
For any network or system admin, it is an incredibly inefficient use of time and resources to redo a system when several (any number over 10) key patches have to be applied before it is connected to a network or Internet connection.
I agree. Which is why you _automate_ the procedure. My servers roll off the “production line” completely patched and up to date before they are plugged into their first “live” network.
The problem does not stem from supposed “improper automated procedure”. The fundamental issue is that the OS is severely insecure and because it requires an abundance of patches, makes for a product that is unusable for security in a business environment.
RDP is encrypted.
I will concede this point. I am professional enough to know that I can make a mistake once in a while.
Once again, a lack of understanding of general IT principles.
Untrue. An *excellent* understanding of general IT principles.
1) Services must be kept up as long as possible
2) Services must be updated as quickly as possible.
Correct. Note: Services not Servers.
Services are run on servers, as well as workstations, but for this point, servers.
First, some software installers don’t give you the option; the installer reboots your system. Second, if the product’s installer requires a Windows service to start, then a reboot is the only way the system gets loaded correctly.
Firstly, services can nearly always be restarted individually. Almost always, if they can’t, the service developer is at fault.
Not entirely correct. Some services can be restarted; others, such as Net Logon, Workstation, Server, and more (which are Microsoft-based services), require a restart.
Secondly, if you’re installing software or applying updates outside of maintenance periods, wherein reboots should be permissible at any time, then it is *you* who have a lack of understanding in how to be a sysadmin. That or you’re a cowboy. Either way, you’re in no position to be lecturing me on good practice.
I’m sorry, but I’m not sure which type of company you’re working in, but management for mine has the final determination as to when maintenance “periods” can be taken.
We are a sales-driven business, so sales, marketing, engineering, production and quality all have to have their systems running and operational at least 16 hours per day, and rebooting servers during business hours is not feasible; which leaves a small window in which to do so, which was David’s point.
Third, no reboot is insignificant on a Windows machine. Each time a system is restarted, the potential for a dormant virus, security vulnerability, or malware to run amok is intensified. The fewer reboots that need to be done, the better.
Service Packs, updates and hotfixes do not get applied correctly until a restart is made.
Clearly, you have NFI what you’re talking about. *Some* service packs and hotfixes are not completely applied until the machine or service is restarted (those which replace files in use). But not all.
I’m confused; first you use a generalization when a specific should be used, then you isolate on a specific, when a generalization should be applied…
If Microsoft provided the correct and open-ended information to do this with, then 3rd-party installers would work much more efficiently. You’re blaming the 3rd party; all they’re doing is going off the information “spoon-fed” to them by Microsoft.
I’m blaming the third party because the existence of *other* third parties who manage to get it to work fine strongly implies they are the ones at fault.
Once again, a generalization for all software. For some software that may be true; for others it is not.
Again, if *no-one* could manage to get it to work, you might have the beginnings of a point.
I think I’ve made my point.
>Not coding bugs but real stupid bugs that can bring the whole
>system down. Windows is known for that, and problems and bugs like
>these keep popping up every day.
>Like what ?
I am not going further into this, but there are security bulletin boards and websites; I suggest you actually visit them.
I’ll give you two for free.
http://www.securityspace.com/smysecure/catid.html?id=11146
http://seclists.org/lists/microsoft/2004/Oct-Dec/0000.html
As you can read, TS is mentioned quite often.
>Please explain why you think TS and SSH are the same?
>I want to know why anyone considers that to be different from an
>SSH session.
>1. SSH is very secure
>RDP is encrypted as well.
You missed the point; I said VERY secure.
RDP is only 56-bit, or 128-bit if you use the High Encryption Pack, which is third-party and not part of RDP or TS. SSH can use 256-bit; with third-party software SSH can even do 512-bit encryption. Enough said. You asked why SSH is different: well, it’s more secure.
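On the SSH side of that comparison, the cipher is negotiated and an admin can pin it; a hypothetical sshd_config fragment (OpenSSH option names; the exact cipher list available depends on the installed version) might look like this:

```
Protocol 2
Ciphers aes256-cbc
```

This restricts the server to SSH protocol 2 with a 256-bit AES cipher, refusing clients that only offer weaker ones.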
(unsecure.org)
“In our eyes the biggest design flaw (in RDP) is that there is no authentication prior to the windows authentication. PCs in a locked office are more secure than a Terminal Server out on the public internet… because you need a key to get into the office. ”
As previously stated, the largest flaw is the lack of pre-Windows authentication. For a more secure system, a non-Windows authentication should be first, and then once authenticated, access to the Terminal Services/Remote Desktop authentication process (connected to Windows authentication) should be granted.
>2. You can run programs of userX on a SSH connection while logged
>in as userB.
As you can over a TS session.
No, you have to log in as userB; you cannot run a program as userB while logged in as userX.
>3. you can export your display to use the remote one
>
>I’m not entirely sure there’s a valid comparison here. What is the
>objective ?
That is probably the reason you do not use SSH or Unix.
>4. you do not have to run X to use SSH.
>You need to run it on the server and you need to have the
>appropriate libraries on the X client. Apparently since everyone
> considers having a dormant IE on Windows is just as bad as
>running it, then having dormant X libraries on a machine must be just
>as bad as running X.
Again, you do not have to run X to use SSH.
Most system admins run it on an X-less server.
>5. SSH is encrypted
>So is RDP.
see point 1
>6. you can transfer files with ssh, in ssh not using any other protocol
>or program.
As you can with RDP.
Nope, you cannot. You need third-party programs to do it, like SDT from Tricerat etc.
7. SSH runs on almost every platform, including Windows.
>Client or server ?
>There’s an RDP client for just about everything.
Server and client exist for almost any device known to humans (incl. Palm, Zaurus).
But first you talk about TS and now it’s RDP; we were talking about TS and SSH, if I remember correctly.
>8. SSH is very configurable
>”Very configurable” is gravy. “Configurable enough” is all you need.
No, it’s not enough, and the good thing is that if it’s not enough you can change it, since SSH is open-source. Examples?
>9. SSH is very lightweight. Consumes little mem/proc
>RDP clients run on PalmPCs.
So does SSH. And you know what, I looked it up: do you know that RDP consumes almost 70% CPU and 24MB of memory while in use on a Palm Tungsten T3? That’s hardly lightweight. Besides, it’s third-party, expensive, and does not come with the MS TS package.
>10. SSH can be used in scripts for cerain automated tasks
>Not really relevant to the context of this comparison.
Why not? You asked for differences; you got them, and now you ignore them.
>11. SSH is very robust, i never seen it crash or disfunction.
>I’ve never seen an RDP session crash either.
I believe that. Point taken.
>13. SSH is FREE as in Freedom.
>14. SSH is open-source.
>15. SSH is cool.
>None of which are relevant, although I appreciate your candour.
They are very relevant. Say we have over 1500 servers and workstations and a couple of Palm and Zaurus devices.
Can you calculate how much money we save by using a more secure, widespread, compatible, free, open, scalable remote admin tool?
Besides that, I am very keen on open-source and free software because I believe they deliver more security and quality in the end.
They are also more accessible to poor people/countries. They really make a difference.
>I hope they will never have to depend on your skills in a real
>company.Where i work we need our servers up 24 hours a day 7
>days a week.
>No, you need your *services* up 24/7. If you *really* need each of
>your *servers* up 24/7, then you are walking on a knife-edge of
>disaster – it’s only a matter of time.?
That’s why we do not use Windows but Solaris, Linux and FreeBSD.
I know it’s hard for a Windows user to get, but I have NEVER seen any of those servers crash, while I saw Windows NT and 2000 crash very often.
We had a Windows 2000 test server (management was considering a custom-built Windows program) which ran MS SQL, and that machine was administered by a colleague of mine who is very dedicated to Windows and knows a great deal about it.
The machine sometimes crashed without leaving any logs in which he could find the reason. This got worse after Service Pack 3 was installed (I guess the service pack alone took about 3 reboots). I will spare you the details, but it ended with us installing Linux (Red Hat 9) on it in January of this year, and it’s running fine.
I do not know why the machine with Windows 2000 failed, nor did my colleague. There was no way of finding out why it was happening, and as I understand from several other companies and/or friends, this is very common for Windows.
There are some servers that need to reboot, but very rarely, and only because of kernel updates. Most servers I admin have not been rebooted in their 2-3 years of life. We mostly include kernel patches when we replace a server or if for some reason hardware fails.
It’s not just my definition of multiuser; it’s the definition in most of my manuals, it’s listed on several websites (www.computerhope.com, howstuffworks.com), and, based on my years in this business, it’s the accepted definition. Yours, while precise, is not technically correct; mine is cited, yours is subjective.
Quite the opposite. My definition is precise *and* technically correct. My definition is the one understood, accepted and taught by computer scientists and OS developers everywhere.
Whether or not an OS is multiuser is completely and utterly independent of its ability to host simultaneous, interactive user sessions. By your definition, Linux on an embedded device isn’t a multiuser OS.
True that the users do not have to be interactive, but the definition does not specify that. The definition as stated is for multiple users to have access simultaneously.
*Your* definition. Not *the* definition.
Linux meets the accepted definition of a multi-user OS. I don’t dispute that.
In the scenario I described it does not meet *your* definition of a multiuser OS, because two interactive users cannot be on the system at once.
That’s a neat trick.
Not really. KVM-sharing software to allow multiple people to simultaneously use a DOS or Windows machine has been around for years. As has BBS software.
Sharing does not qualify as having multi-user capabilities. DOS and Windows 9x are not multi-user OSes per the accepted definition.
By your definition, the above scenario means DOS and Windows _are_ multiuser.
Once again, that is not the determining factor in the definition.
Yes, it is. It is the absolute, fundamental *core* of the definition. Everything else – permissions, multiple logins, simultaneous users, etc – builds on a multiuser OS’s ability to separate running processes into different user contexts.
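That separation is visible on any Unix-like system even with nobody logged in; a quick sketch (assumes a procps-style ps is installed):

```shell
# List the distinct user contexts that running processes belong to.
# Daemons typically show up as root, daemon, www-data, etc., with
# no interactive login session anywhere in sight.
ps -eo user= | sort -u
```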
True; Linux, UNIX, MacOSX and XP do have that fundamental feature. I don’t dispute that it’s part of the core functionality of those OSes. I dispute the claim that the NT, 9x or DOS operating systems had this capability, and, based on my cited information, my definition is the accepted one.
Windows XP *is* Windows NT. Additionally, I never claimed Win9x or DOS had this capability.
I.E. (Netscape): Netscape had no foothold on being able to interact correctly in userspace, because Microsoft withheld certain APIs that, had they been released, would have allowed Netscape to interact the same way Internet Explorer did. (Microsoft vs. DOJ)
Precisely what couldn’t Netscape do that they wanted to do ?
Not to mention, what relevance does this have to the original assertion that Microsoft don’t divulge *installer* information to third parties ?
The logic of “some do it, so all should be able to” is, once again, precise but not correct.
Not only is it perfectly correct, it’s also a direct contradiction to the original assertion that “*none* can do it”.
Software for a comm program requires different API calls and device support than a paint program. There is no direct comparison between the two.
You seem to have lost the plot. We were talking about *installing* applications, not running them.
The problem does not stem from supposed “improper automated procedure”. The fundamental issue is that the OS is severely insecure and because it requires an abundance of patches, makes for a product that is unusable for security in a business environment.
In which case, I ask again, which OS or service are you thinking of that *doesn’t* have an “abundance of patches” ?
Services are run on servers, as well as workstations, but for this point, servers.
You miss the point. A service and the hardware it runs on are independent in a properly configured 24/7 environment.
I’m sorry, but I’m not sure which type of company you’re working in, but management for mine has the final determination as to when maintenance “periods” can be taken.
Companies I work[ed] in define periodic, repeating, well-known maintenance periods, during which times servers and/or services can be expected to be unavailable. For places that didn’t have such periods, defining them was one of the first things I did.
We are a sales-driven business, so sales, marketing, engineering, production and quality all have to have their systems running and operational at least 16 hours per day, and rebooting servers during business hours is not feasible; which leaves a small window in which to do so, which was David’s point.
David’s “point” was that Windows often requires reboots to apply patches, etc. My point is that you don’t do _anything_ – regardless of whether or not you think it might have an impact on service availability – outside of those maintenance periods except in the most dire circumstances.
In short, you shouldn’t be doing _anything_ that could potentially affect the availability of a service outside of its maintenance period. If you are doing so, you are acting irresponsibly and unprofessionally.
I’m confused; first you use a generalization when a specific should be used, then you isolate on a specific, when a generalization should be applied…
Huh ? I’m refuting his incorrect assertion that *all* hotfixes, patches, etc require a restart to “fully apply”.
Once again, a generalization for all software. For some software that may be true; for others it is not.
Only one example (although there are many) is required to refute the claim that *no* installers are capable of functioning “correctly”.
Next time you decide you need to post on TS/RDP turn your computer off instead.
It’s quite obvious you know next to nothing about it.
He even admits he has little Windows experience:
“That’s why we do not use Windows but Solaris, Linux and FreeBSD.”
but still rants on about whatever he thinks the TS/RDP feature set is. I could make a very solid argument about the OSes Bas uses if I were free to make up “facts” about them.
>Next time you decide you need to post on TS/RDP turn
> your computer off instead.
>It’s quite obvious you know next to nothing about it.
You are right, I do know little about it, but enough to tell somebody the differences between TS and SSH, and that is what I did. Whatever spooks in your head is not my problem. You have not been able to give any substantive comment, so I expect you to fail this time again.
Good luck.
>He even admits he has little Windows experience:
I have enough experience with it to not use it anymore; it’s being used in our company and I am confronted with it every day. I just do not admin it anymore.
I speak a lot with my fellow admins who work with Windows XP and 2000, and they are often amazed at what the tiny BSD can crank out.
That you know nothing is clear now; you’re trying to shelter behind somebody’s back, making worthless comments about nothing that have zero mass.
> I could make a very solid argument about the OSes Bas
> uses if I were free to make up “facts” about them.
Please do; I will be happy to refute them…
I am not going further into this, but there are security bulletin boards and websites; I suggest you actually visit them.
Two is a long way from “every day”. Now, I’ve no doubt you didn’t mean it literally, but AFAIK other OSes get bug reports about as often as Windows.
You missed the point; I said VERY secure.
RDP is only 56-bit, or 128-bit if you use the High Encryption Pack, which is third-party and not part of RDP or TS. SSH can use 256-bit; with third-party software SSH can even do 512-bit encryption. Enough said. You asked why SSH is different: well, it’s more secure.
RDP is 128 bit by default. No, it’s not 512 bit, but it is enough for most people to consider it “secure”.
As previously stated, the largest flaw is the lack of pre-Windows authentication. For a more secure system, a non-Windows authentication should be first, and then once authenticated, access to the Terminal Services/Remote Desktop authentication process (connected to Windows authentication) should be granted.
This criticism is meaningless because it applies to _any_ service on “the public internet”.
No, you have to log in as userB; you cannot run a program as userB while logged in as userX.
You can run any application as another user by using “runas”.
That is probably the reason you do not use SSH or Unix.
I do use them, which is why I don’t think the comparison is valid. What are you trying to demonstrate is possible ? What’s your objective ?
Again, you do not have to run X to use SSH.
Most system admins run it on an X-less server.
You do, however, need X libs on the server side if you want to use X applications over SSH, which is what I was driving at.
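To make that concrete: the client side needs nothing beyond SSH itself. A hypothetical ~/.ssh/config fragment (the host alias is made up) is all it takes to request forwarding, yet a command like `ssh devbox xclock` will still fail unless the server has the X libraries that xclock needs:

```
Host devbox
    ForwardX11 yes
```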
Nope, you cannot. You need third-party programs to do it, like SDT from Tricerat etc.
You can map the client machine’s drives to the RDP session. This lets you transfer files within the session without requiring additional software.
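For reference, that drive mapping can also be preset in a saved connection file; a sketch of an .rdp fragment (the key names are from the XP-era Remote Desktop client and are worth double-checking against your client version):

```
full address:s:myserver
redirectdrives:i:1
```

With that set, the client’s local drives appear inside the remote session, so files can be copied without extra software.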
But first you talk about TS and now it’s RDP; we were talking about TS and SSH, if I remember correctly.
TS and RDP are basically the same thing.
No, it’s not enough, and the good thing is that if it’s not enough you can change it, since SSH is open-source. Examples?
Examples of what ?
So does SSH. And you know what, I looked it up: do you know that RDP consumes almost 70% CPU and 24MB of memory while in use on a Palm Tungsten T3? That’s hardly lightweight. Besides, it’s third-party, expensive, and does not come with the MS TS package.
And you need third party software to get SSH on some platforms as well.
Why not? You asked for differences; you got them, and now you ignore them.
It’s not really relevant because you don’t do things via RDP that you would be automating, in the same sense that you do with SSH.
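The kind of automation being contrasted here is, for example, a hypothetical cron entry like the one below (host, account, and paths are invented; assumes key-based authentication is already set up):

```
# Nightly, non-interactive log pull over SSH; BatchMode stops ssh
# from ever prompting for a password in an unattended job
0 2 * * * ssh -o BatchMode=yes backup@myserver 'tar czf - /var/log' > /backup/logs.tar.gz
```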
They are very relevant. Say we have over 1500 servers and workstations and a couple of Palm and Zaurus devices.
Can you calculate how much money we save by using a more secure, widespread, compatible, free, open, scalable remote admin tool?
Except if you’re dealing with 1500 Windows servers, workstations and PalmPCs it’s *not*.
That’s why we do not use Windows but Solaris, Linux and FreeBSD.
It doesn’t matter *what* platform you’re on. If you have an essential service on a single machine you’re asking for trouble.
I know it’s hard for a Windows user to get, but I have NEVER seen any of those servers crash, while I saw Windows NT and 2000 crash very often.
If you’ve never seen Solaris, Linux or FreeBSD crash, you can’t have been using any of them for very long or in very diverse environments.
I do not know why the machine with Windows 2000 failed, nor did my colleague. There was no way of finding out why it was happening, and as I understand from several other companies and/or friends, this is very common for Windows.
It’s not common at all. I don’t accept my Windows machines crashing any more than I do my other machines. No-one else should either.
There are some servers that need to reboot, but very rarely, and only because of kernel updates. Most servers I admin have not been rebooted in their 2-3 years of life.
So if you haven’t rebooted a machine in 2-3 years, how do you know it will come back up again with full functionality if it *does* suddenly reboot ?
>You do, however, need X libs on the server side if you want
>to use X applications over SSH, which is what I was driving
>at.
I stated that SSH, unlike TS, does not need X; you are trying to state that it needs X if you want to run X through SSH.
It is a difference: SSH does not need X.
>This criticism is meaningless because it applies to _any_
>service on “the public internet”.
Re-read the statement.
>If you’ve never seen Solaris, Linux or FreeBSD crash, you
>can’t have been using any of them for very long or in very
>diverse environments.
About 8 years. I really never saw any Solaris, Linux or FreeBSD server crash. Never. I saw Linux malfunction, I saw BSD malfunction, but in all those cases I managed to fix it without any reboot.
>So if you haven’t rebooted a machine in 2-3 years, how do
>you know it will come back up again with full functionality
>if it *does* suddenly reboot ?
Why should I worry about it? Do you actually reboot servers just to see if they will come up after the reboot??
It’s not like a Windows server, which *does* suddenly reboot.
Why should it suddenly reboot? I agree that hardware can fail, but a good OS will only be crippled, not reboot.
We had a Linux server with a broken SCSI drive; it malfunctioned. Old SCSI drive out, new one in (RAID), and it’s still running. No need for reboots. If a CPU or memory malfunctioned, then I guess you are right.
But since we work in a cluster/load-balancing environment, there is no need to worry about 1 or 2 nodes.
But I am damn sure not to reboot any server just to see if it still comes up after the reboot.
There is a game called uptime.
I stated that SSH, unlike TS, does not need X; you are trying to state that it needs X if you want to run X through SSH.
It is a difference: SSH does not need X.
But this is a meaningless point… It’s like me saying TS doesn’t need SSL.
Re-read the statement.
I did. They don’t like it because connecting via RDP means the RDP authentication is made before any other authentication. That’s like saying connecting via SSH means SSH authentication is made before any other authentication. It’s just silly.
About 8 years. I really never saw any Solaris, Linux or FreeBSD server crash. Never. I saw Linux malfunction, I saw BSD malfunction, but in all those cases I managed to fix it without any reboot.
I find that amazing. In less than 2 years in an environment packed full of different unixes on dozens (if not hundreds) of machines, I saw everything from Digital Unix, through Linux and FreeBSD to Solaris (on an e10k, no less) roll over and kick its feet in the air. Certainly, it wasn’t common, by any stretch, but it did happen.
Why should i worry about it? do you actually reboot servers to see if they will come up after the reboot ??
Not for that express purpose, no, but doing so as a part of other maintenance does confirm that they will restart properly.
I’ve seen more than one unix box not start up properly because someone had modified aspects of its startup scripts – but no-one had noticed because the machine wasn’t rebooted for another 6 months, when it failed to start properly (or, more accurately, the system started OK but many of its services did not).
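One cheap safeguard against that failure mode is to syntax-check the startup scripts whenever they are touched, instead of discovering the typo at the next reboot; a minimal sketch (assumes SysV-style scripts under /etc/init.d):

```shell
#!/bin/sh
# Syntax-check every init script without executing any of them;
# sh -n parses a script and reports errors but runs nothing.
for f in /etc/init.d/*; do
    [ -f "$f" ] || continue
    sh -n "$f" || echo "WARNING: $f fails syntax check"
done
```

This catches only parse errors, not logic mistakes, but those are exactly the edits that most often leave a box unbootable months later.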
It’s not like a Windows server, which *does* suddenly reboot.
No, they don’t. If they do, they’re broken.
Why should it suddenly reboot? I agree that hardware can fail, but a good OS will only be crippled, not reboot.
Power failures (even through a UPS). Accidentally turned off (in a rack full of 1U units, it can be easy to misidentify them). Cleaner unplugging the machine to plug in the vacuum. Gamma rays. Divine intervention.
It happens. Rather than insist it doesn’t, I prefer to be prepared.
We had a Linux server with a broken SCSI drive; it malfunctioned. Old SCSI drive out, new one in (RAID), and it’s still running. No need for reboots.
And Windows would do the same.
There is a game called uptime.
*Service* uptime is important. *Server* uptime is not.
@drsmithy
While I do agree with you on some points, Bas has actually proven that SSH is completely different from Terminal Services, for several reasons. You are mostly ignoring them and creating new points so as not to have to address the points he makes.
I have read a lot of comments from you, and it’s always the same: trying to twist and turn without actually answering.
SSH is very different from TS; if you are too blind or too dumb to actually admit this, you have a real problem.
>*Service* uptime is important. *Server* uptime is not.
I fully agree. We use Windows 2000 and 2003 on our servers, and while they are not crash-free, they perform very well. In the end the customer is looking for a service, not a server.
@Bas
I think you rant too much about Windows and are really “open” to it.
Linux and BSD are very nice, but in a business environment Windows suits best. Windows Server is a very capable OS and is more used than any other server OS in the world.
>@Bas
>I think you rant too much about Windows and are really “open” to it.
>Linux and BSD are very nice, but in a business environment Windows
>suits best. Windows Server is a very capable OS and is more used than
>any other server OS in the world.
While I do appreciate that you agree with me on the SSH vs TS thing, the quality and security of an OS have nothing to do with how much it’s used, and vice versa. In fact it looks to me like the best product never becomes the most used one.
Betamax vs VHS
BeOS vs Windows
Toyota vs Opel/Volkswagen (in the Netherlands at least)
The list is long, and it looks like people’s choice is often based not on the quality of a product but more on the feeling/marketing of a product.
I am glad Solaris and BSD are somewhat better than Linux.
re: I am glad Solaris and BSD are somewhat better than Linux.
-1 troll. Heh, kidding, but you could start another huge discussion that amounts to nowhere.