One of the main reasons for the Firefox browser’s successful seizure of market share from Microsoft’s Internet Explorer is the desire to escape the inundation of PC-slowing spyware. However, spyware experts indicate that with its increased popularity, Firefox itself will become a target for spyware creators, who are already poking at the open source browser alternative. Elsewhere, a new security hole has been found affecting most browsers except IE.
I’m sure that it will become a target, I just expect the Mozilla Foundation and the community to be more active in stopping it. Firefox does have an update mechanism that allows patches to be sent to it.
I think this exploit is actually a failure of some “trusted” Certificate Authority to verify that the domain names they are signing are not spoofed. IDN certainly cannot work without the cooperation of certificate authorities.
What is extraordinary (and should be fixed) is that neither of the browsers I have locally installed (Safari and Firefox) has an obvious interface to verify which certificate authority signed the SSL certificate for the spoofed paypal site.
Apparently, once a browser trusts a CA, it forces you to trust it as well.
The a between the p and y shows up as a weird symbol, not as a spoofed paypal site. I guess my FreeBSD PC has better unicode support. Is it an OS issue about how symbols are rendered?
What if the number of security holes a product has is not a function of its popularity?
I can understand how popularity would lead to more attempted attacks, but there is no reason to believe that the product becomes less securely *designed* as it becomes more popular.
I guess it’s assumed to be true because it’s the excuse Microsoft has been giving us for a decade or so.
You can read a discussion at http://it.slashdot.org/it/05/02/07/1323206.shtml?tid=172&tid=113&ti…
Hmmm. This is good. I always wanted to test the theory that Linux and Mac get fewer viruses because they are less popular. Firefox is a good experiment for this theory. If spyware doesn’t get to Firefox, then the theory dies. We could wait for Linux to become more accepted, but that will take longer. Of course, this is not conclusive. It is just a first step toward proving or disproving that stupid theory.
Opera is a great browser and has achieved only a little market share. So if worst comes to worst, I know my next move.
What about all those signed Java applets out there already?
I see them every time I’m on a shady site (cracks.am, for example).
The user only needs to press ‘OK’ (which they usually do) and the applet gets full access (hence the signing).
Doesn’t look much better than ActiveX to me.
Really? Thanks, experts, no one else could’ve speculated on that. Newsworthy indeed.
this is the beauty of open source… if they make spyware, someone can hardcode their copy of the OS to not allow that spyware
meant browser, not OS
There were a lot of discussions about i18n domains, but every technically aware person was sure they were evil.
And, by the way, my Konqueror does not seem to render those URLs correctly – maybe it’s a bug, but anyway.
And this is not a damn bug in the programs anyway – this is a bug in the DNS.
What if the amount of security holes a product has is not a function of it’s popularity?
It’s not.
I guess it’s assumed to be true because it’s the excuse Microsoft has been giving us for a decade or so.
No, they haven’t.
Yes they have. That is exactly the excuse Microsoft has been giving us. What planet are you from?!
Short answer: “Only when it’s running on a standard Windows install.”
Why? I’m running two Windows PCs and a FreeBSD workstation at home. I changed the standard configuration on XP Home Edition so that the Windows users were not Admins. This simple act made MSIE much safer. (I still prefer Firefox, but that is for features as much as for security.) So, I’m rather confident that on a standard *nix install (where users are not running as root!) or on a safe Windows install (where users are not given Admin privileges), Firefox will be much less susceptible to Trojans and other spyware.
And even if Firefox is compromised, it isn’t as bad as having MSIE, which is famously built into the OS. In the event that MSIE is compromised, the attacker can potentially gain elevated privileges. (This should have been the lesson of the Morris Worm, but it seems that Microsoft has only learned this in the last couple of years.)
PS. Mike, your comment to Ilyak was in poor taste. Should we ask American posters to forswear invading oil-rich countries and/or driving Hummers? There are probably more Americans in Hummers than Russians who write Trojans. Guilt by association is poor thinking.
Why bring Americans into this? He’s from Brazil.
Hmmm. This is good. I always wanted to test the theory that Linux and Mac get fewer viruses because they are less popular. Firefox is a good experiment for this theory. If spyware doesn’t get to Firefox, then the theory dies. We could wait for Linux to become more accepted, but that will take longer. Of course, this is not conclusive. It is just a first step toward proving or disproving that stupid theory.
An even more interesting theory is that whatever malware eventually manages to hit Firefox will only work on Windows.
A good SELinux setup would stop at least some of it. Attacks based on social engineering are hard to stop, though; if the user thinks he wants to install something on his computer, he will be hard to stop. It doesn’t matter what browser we use.
Actually, it seems like *you* don’t have unicode properly set up. The “weird symbol” is a lower case cyrillic “a” (http://en.wikipedia.org/wiki/A_%28Cyrillic%29), which looks just like the latin counterpart.
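For anyone who wants to check what that “weird symbol” actually is, a quick sketch using Python’s standard unicodedata module (an illustration, not anything from the exploit page itself) shows the two look-alikes are distinct code points:

```python
import unicodedata

# The Latin 'a' (U+0061) and Cyrillic 'а' (U+0430) look identical in
# most fonts, but they are different characters to the computer.
latin_a = "a"          # U+0061
cyrillic_a = "\u0430"  # U+0430

print(unicodedata.name(latin_a))     # LATIN SMALL LETTER A
print(unicodedata.name(cyrillic_a))  # CYRILLIC SMALL LETTER A
print(latin_a == cyrillic_a)         # False
```

Whether the Cyrillic letter renders as a perfect look-alike or as a “weird symbol” depends entirely on the font, which is why different machines see the link differently.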
This issue has been brought up a number of times. The problem (as I see it) is that apparently some registrars will let you register an IDN domain that is displayed identically to an existing domain that belongs to someone else.
Of course, browsers could prevent this by Nameprepping (http://www.ietf.org/rfc/rfc3491.txt) URLs, i.e. mapping graphically identical characters to the same character.
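Under the hood, an IDN hostname is converted to plain ASCII (“punycode”) before any DNS lookup happens. Python’s built-in idna codec can illustrate this – a rough sketch, not any browser’s actual code – and shows that the spoofed name is a completely different domain on the wire:

```python
# The spoofed hostname substitutes a Cyrillic 'а' (U+0430) for the
# Latin 'a'. After IDNA/punycode encoding the difference is obvious.
spoofed = "p\u0430ypal.com"
genuine = "paypal.com"

print(spoofed.encode("idna"))  # b'xn--pypal-4ve.com'
print(genuine.encode("idna"))  # b'paypal.com'
```

A browser that displayed the xn-- form (or at least flagged it) would make this class of spoof far easier to spot.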
I tried using Firefox. I honestly thought it would be a nice browser, but when it can’t even let me check my Yahoo mail account because it could not render the page correctly, I really got disheartened, and here I am back to using good old IE 6. Never had problems with IE 6 and spyware, so I guess I am gonna keep using this bad boy. Take care guys
quick question for you! does your ‘normal user’ have read, write and execute permissions in his own home directory? is all your actual valuable data in your home directory, read/writeable by your ‘normal user’? Then, yay, all the stuff you actually care about is entirely susceptible to malware. Sure, it can’t wipe out /usr, but then it’s actually a hell of a lot easier to get /usr back than it is to get /home . All you need to get the boring nuts and bolts back is a reinstall.
Firefox lives in a development community that will find, reveal, advertise, and fix exploits.
There is motivation amongst the developer/user base to find, identify, and broadcast the names and organizations of those who would try to benefit from said wares.
Thus part of the need for backing up your files.
Did anybody read that security advisory?
The only reason Microsoft isn’t vulnerable is because it doesn’t even support IDN.
Which makes me laugh at how inadequate IE is even more…
So much for making IE look more secure.
it isn’t a case of firefox not rendering yahoo mail correctly. it is a case of yahoo being incompetent and not writing it correctly. you trust your mail service to a company which owes its success to the web, yet can’t create a standards-compliant page?
http://www.shmoo.com/idn/
This test doesn’t work with my Firefox
So I am not affected
So, I’m rather confident that on a standard *nix install (where users are not running as root!) or on a safe Windows install (where users are not given Admin privileges), Firefox will be much less susceptible to Trojans and other spyware.
Why? Regular users can install software to their home directories and run it. Regular users can scan through their own address book (and possibly others) to extract email addresses for mass mailing. Regular users can make outgoing network connections and start services/daemons listening.
Regular users can (obviously) modify and/or delete their own files – which are far more important than the restricted-access OS files.
And even if Firefox is compromised, it isn’t as bad as having MSIE, which is famously built into the OS. In the event that MSIE is compromised, the attacker can potentially gain elevated privileges.
False. IE runs as a regular user process just like Firefox and always has.
Yes they have. That is exactly the excuse Microsoft has been giving us.
No, it’s not (well, it’s possible someone dumb from Microsoft has said that, but I doubt it’s ever been said in any official sense).
It is, however, what the typical anti-Microsoft zealot *thinks* they say.
What Microsoft have said is that Windows is *attacked* and *exploited* more, and that those attacks and exploits have much greater impact, due to Windows’ greater prevalence.
This is a very, very different thing to drawing a causal relationship between platform prevalence and actual security vulnerabilities (be they from design or coding problems).
a possible solution / check could be:
* show an alert icon that indicates that the URL being displayed uses a strange charset or characters outside the normal ASCII range you see on your keyboard
* perhaps also show the original code for that display – as a tooltip or in the status bar – this would allow the user to quickly see the source of the URL.
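The first check above is trivial to sketch. This is a hypothetical helper (the function name and logic are mine, not any browser’s) showing how little code the alert-icon test would need:

```python
def looks_suspicious(hostname: str) -> bool:
    """Flag hostnames containing any character outside plain ASCII,
    which is the condition the proposed alert icon would key on."""
    return not hostname.isascii()

print(looks_suspicious("paypal.com"))       # False
print(looks_suspicious("p\u0430ypal.com"))  # True (Cyrillic 'а')
```

A real implementation would be subtler (legitimate IDN domains exist, after all), but as a first-pass warning heuristic this catches the spoof in question.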
i must say – sometimes the protocol designers just don’t consider security. but i guess it is in the spirit of the net that it was built to be open and useful, not restrictive and secure.
… for implementing new and more modern features; something IE has not done. The rest, by adding these things, are open to new problems and exploits that may arise from the new technologies.
I can live with that, can’t I?
That…and always type in sites that you plan to spend money on.
I say bring it ON! I’m ready to do battle and trace some IPs.
My Cyrillic a is rendered with slightly different size relations, but it is not noticeable if you don’t know what you’re searching for.
I think Bill Gates said something like that in the Spiegel interview (that Windows is a more popular target because it has more users).
“I tried using Firefox. I honestly thought it would be a nice browser but when it cant even let me check my Yahoo mail account because it could not render the page correctly”.
That’s strange because it renders my Yahoo mail correctly (both Mac OSX and Win XP). In fact I find it renders pages more accurately than Safari (Safari also renders Yahoo correctly). What gives? Is your Java environment out of date or something?
There are plenty of examples, and probably better ones than this…
“Microsoft Vice President Cliff Reeves points out the company’s commitment in an interview with Computerworld, “You can say the number of attacks is a result of market success, which is true.””
But, I’d have to agree with drsmithy that they never say it is the *cause* of Windows’s poor security record.
The problem is, they don’t offer any other explanation.
I’ve never read a Microsoft statement that said
“Well, we really messed up with ActiveX”, or “Wow, allowing Outlook to execute .scr files as code was a real mistake, wasn’t it?”, or “Not pushing home users towards a restricted account was a slip up.”.
So, in the absence of any better explanation, can we assume Windows will always be insecure, even if it was used by virtually no one?
“Microsoft Vice President Cliff Reeves points out the company’s commitment in an interview with Computerworld, “You can say the number of attacks is a result of market success, which is true.””
But, I’d have to agree with drsmithy that they never say it is the *cause* of Windows’s poor security record.
Correct. Note that an “attack” is a very different thing to a “vulnerability”.
The problem is, they don’t offer any other explanation.
That’s because saying that 90% of the security “problems” are caused because of dumb and/or ignorant users wouldn’t be particularly popular.
I’ve never read a Microsoft statement that said
“Well, we really messed up with ActiveX”, or “Wow, allowing Outlook to execute .scr files as code was a real mistake, wasn’t it?”, or “Not pushing home users towards a restricted account was a slip up.”.
ActiveX was a victim of out-of-control featurism. It’s since been restricted with a reasonable security model.
Outlook has never directly executed files (apart from buffer-overrun coding bugs). All it does is hand them off to the shell to do whatever it is the shell thinks is the default action. This is, I’m pretty sure, exactly the same thing most other similarly-featured mail clients do.
Users aren’t in restricted accounts by default yet because of the plethora of software out there that is poorly written and breaks. Personally I think the argument for is just as good as the argument against, when you take into account the business case.
So, in the absence of any better explanation, can we assume Windows will always be insecure, even if it was used by virtually no one?
There is a very, very large difference between “insecurity” and “successful exploits”.
“Is SELinux suitable for desktop users ?
SELinux in its current state doesn’t secure the desktop environment (kde, gnome) very well and has significant overhead in maintenance. If you install a new application onto a SELinux system and that app didn’t have a policy written for it, the average home user would probably have to switch SELinux off to get the app working. In the future this should improve as more policies are developed.”
-The Unofficial SELinux FAQ, last updated January 24th.
Can you show me a statement that this is indeed not the case for this fabled desktop SELinux system you speak of, surfing the web on with Firefox?
I’m sorry but firefox works with yahoo mail perfectly. I’ve used firefox with it since its early days, .2/.3 back when it was Phoenix and it worked just fine then.
The whole theory of “if open source was more popular, then it would have more security problems” is total crap. Just look at IIS vs Apache.
if it was going to happen, then why hasn’t it happened to Mozilla, which has been with us for a long while now?
That’s because saying that 90% of the security “problems” are caused because of dumb and/or ignorant users wouldn’t be particularly popular.
Bullshit. I am tired of your constantly vindicating the users for Microsoft’s shortcomings. I like the way you phrased it as security problems. There is more that affects windows than traditional security problems. The amount of worms and viruses that are not directly related to security but to the design of windows is an order of magnitude greater.
ActiveX was a victim of out-of-control featurism. It’s since been restricted with a reasonable security model.
Let me get this straight and un-sugar-coat it: ActiveX is badly designed, a security model was bolted on after the fact, and the users are to blame, not Microsoft.
Sugar-coating a design flaw or lack of thought about security as “out-of-control featurism” is not going to absolve Microsoft of any blame. WTF is “out-of-control featurism” anyway?? Features are designed in or out; they don’t magically appear, and they shouldn’t.
Outlook has never directly executed files (apart from buffer-overrun coding bugs). All it does is hand them off to the shell to do whatever it is the shell thinks is the default action.
Who designed and wrote the shell? Who wrote the code in outlook that hands code out to said shell? Why wasn’t there a policy to begin with that prevents said handing of the code?
What point are you trying to make here?
That Microsoft botched up the design of the whole process of email attachment handling and the user is dumb that is why there are security problems!!!!!!
Users aren’t in restricted accounts by default yet because of the plethora of software out there that is poorly written and breaks.
Who designed APIs for App writers to develop to? Users or Microsoft.
Let me get this straight. Microsoft can’t manage their ISVs and users are to blame for security problems.
I am tired of your constantly vindicating the users for Microsoft’s shortcomings
Should read:
I am tired of your constantly blaming the users for Microsoft’s shortcomings
I’d be interested in seeing the browser break off and run under its own user account. Give it read access to the parts of the system it needs, and write access to cache, settings, and download folders within its own home directory. A lot of programs already do this: apache, ssh.
I am thinking in UNIX terms here, but could apply to windows perhaps.
I’d be interested in seeing the browser break off and run under its own user account. Give it read access to the parts of the system it needs, and write access to cache, settings, and download folders within its own home directory. A lot of programs already do this: apache, ssh.
I am thinking in UNIX terms here, but could apply to windows perhaps.
Trivial. Create a “guest” account and then right click IE’s shortcut and Run As.
In my case, freebsd, so slightly different steps. I haven’t looked at that setup on windows but sounds pretty straight forward.
In my case, freebsd, so slightly different steps.
In that case a ‘sudo firefox’ or similar would get the same effect – but if you’re especially paranoid you could run it in a jail(8).
quick question for you! does your ‘normal user’ have read, write and execute permissions in his own home directory? is all your actual valuable data in your home directory, read/writeable by your ‘normal user’? Then, yay, all the stuff you actually care about is entirely susceptible to malware. Sure, it can’t wipe out /usr, but then it’s actually a hell of a lot easier to get /usr back than it is to get /home . All you need to get the boring nuts and bolts back is a reinstall.
With proper SELinux properties you can prevent Firefox, Thunderbird from accessing whatever files you want. You can even make files Firefox touched unexecutable, unless you switch security role which would require you to enter a password.
The problem, of course, is that this doesn’t prevent social engineering attacks, where the user is fooled into installing something or turning off security. If the user thinks he is doing the right thing, it doesn’t matter how many passwords he has to enter.
Can you show me a statement that this is indeed not the case for this fabled desktop SELinux system you speak of, surfing the web on with Firefox?
You are quite right, SELinux can be tricky. In fact any secure system will be a bit tricky to use. Simple and secure unfortunately don’t add up.
The article speaks of a future of malware for Firefox, and when that future is here we can expect most Linux distros will ship with useful security policies. Even Fedora 2 had some protection for Mozilla that made it impossible for root to run things downloaded by Mozilla, unless changing the security context. This could quite easily be extended to other users.
The new network administrator at our high school (who is a devout Windows user) says that he has been getting almost no spyware with Firefox for the last six months. Whenever I get a Windows machine to fix or set up, I put Firefox and/or Thunderbird on it because Outlook and Explorer are trash. I needn’t say more, because I don’t want to piss anyone off; just giving examples.
Of course you can (and you can do it in other ways without using SElinux, as the previous page of this thread discusses). However, I was referring to someone who doesn’t, he simply thinks that Firefox not running as root somehow makes him less vulnerable to spyware. To be concise, he’s a dumbass.
Certainly there is nothing to stop people from writing malicious XUL-based applets that spy on your browsing habits and so forth, but reinstalling Mozilla or Firefox is simple. Reinstalling IE is a whole different story. In addition, since add-ons for Mozilla don’t affect the entire OS, I don’t think such spyware for Mozilla would be as bad. Last I recall there was already at least one attempt at a malicious app for Mozilla about a year ago, and the response was to disable on_load dialogs so that the user would have to actively request an app, as opposed to simply going to a URL that loaded a dialog for a malicious app. This will make spreading such spyware for Firefox much more difficult, but not impossible.
Yeah i’ve thought about a jail or chroot. With sudo I have run into a few annoyances, but more or less it works fine.
The whole theory of “if open source was more popular, then it would have more security problems” is total crap. Just look at IIS vs Apache.
Surf over to Secunia.org and compare Apache vulnerabilities to IIS 6.0. You’re in for a rude awakening. It’s a complete rewrite — and it shows. IIS 6 kicks Apache’s ass in terms of security and functionality.
I read the article, and indeed it is very interesting.
But just to be on the safe side I decided to open it up with my browser and see if he’s right. (never trust a non-proven concept).
My konqueror, based upon khtml, shows no signs of errors.
the links pointed to paypal, not to anyone else.
this is not only a browser-dependent issue!
As long as you’re on an OS which supports utf8 (all the way, that is), using a relatively new browser, you shouldn’t have to worry. (haven’t tried firefox, but I’m confident it works, and if not, it will be patched soon)
turned out he was right about the fact that a and а need not be the same. I got to the bad page.
either way it’s not a browser issue then, as it went to the page that it was linked to.
it only did what you told it to do, great, it works!
my a when I entered the site is half-height compared to a normal a, perhaps I should start using a better font ^-^
in the name of the links he uses normal ‘a’s, in the link url he uses the other a. when I hold my mouse over the link I can CLEARLY see that it’s a suspicious a thanks to my unicode font.
I don’t see where this would be anything wrong at all, you tell my browser with a link to go to a site, and it goes there!
Bullshit.
‘Fraid not. Any security expert should be able to tell you the vast majority of security breaches are caused by “users” – sometimes maliciously (sabotage), sometimes innocently (social engineering), sometimes accidentally (clicking the wrong button).
This applies to security in all situations, by the way, not just computers (and it’s hardly limited to Microsoft on computers, either).
I like the way you phrased it as security problems. There is more that affects windows than traditional security problems.
That’s because it’s easier to write “problems” than enumerate every possible exploit.
I’ve never said *all* security problems are the fault of users. Just the vast majority of them.
The amount of worms and viruses that are not directly related to security but to the design of windows is an order of magnitude greater.
Name some and the design problem responsible. Be specific. Explain how it could be fixed. Do not try and pretend configuration settings and coding bugs are design problems.
Let me get this straight and un-sugar-coat it: ActiveX is badly designed, a security model was bolted on after the fact, and the users are to blame, not Microsoft.
Microsoft are to blame for pushing it out into the world while it was immature and getting people reliant on its *initially* flawed implementation. Now the legacy support is biting them.
WTF is “out-of-control featurism” anyway?? Features are designed in or out, they don’t magically appear, they shouldn’t.
When features are added for the sake of features, or to look impressive in software demos. Happens to everyone.
Who designed and wrote the shell? Who wrote the code in outlook that hands code out to said shell? Why wasn’t there a policy to begin with that prevents said handing of the code?
Why should there be? In the vast bulk of situations where this happens (opening attachments with movies, Word documents, JPEGs, other messages, text files, meeting requests, etc.) the behaviour is desirable and harmless.
Additionally, Outlook has *never* (coding bugs aside) opened attachments without first warning the user it might be dangerous in a dialog that defaults to “No”.
That Microsoft botched up the design of the whole process of email attachment handling and the user is dumb that is why there are security problems!!!!!!
Again, I’ll point out it’s behaviour emulated by nearly every other similarly featureful GUI out there – the attachment is passed off to the shell (or whatever module it is that determines what program to open what files with).
They almost all also use the same “workaround” to enhance security that Outlook has for a few versions now as well – blocking (or severely restricting) certain filetypes.
Who designed APIs for App writers to develop to? Users or Microsoft.
The problem isn’t the APIs, it’s lazy (or incompetent) developers.
Windows NT has been on the market now for *12 years*. It’s been mainstream – at least in business environments – for nearly *9 years*. That’s how long developers have had to be writing multiuser friendly code. Yet even companies like id do *stupid* things like try and write to files inside the program directory.
Even the “but everyone was using Windows 9x” argument doesn’t wash – the same multiuser features (from a developer perspective) and APIs were also introduced in later versions of Windows *95*, so it’s not like the consumers Windows OS hasn’t had them for more than long enough.
Let me get this straight. Microsoft can’t manage thier ISVs […]
Please explain how Microsoft can stop companies writing bad code, because I’m sure companies *everywhere* would like to know.
[…] and users are to blame for security problems.
No, users are to blame for doing things like deliberately running malicious code (most email trojans/viruses) and not applying software patches (most worms).
that when I click the SSL link it gives me a big fat warning that the certificate was not issued to this host, letting me see more details, and any sane person intending to spend money would read that as a “no go”.
Very easy and simple to implement. Extensions (unsigned) are the Achilles heel for Firefox, just like JavaScript generally is most of the time. Firefox’s strength is its weakness at the same time in terms of security; think extensions.
Firefox is a very good browser, almost in the Google class. I enjoy its features every (working) day. However I’m a bit worried about (unsigned) extensions.
Any security expert should be able to tell you the vast majority of security breaches are caused by “users”
Security breaches are not the same as worms, viruses, and phishing attacks caused by sloppy code development and design processes. Most security experts can tell you the distinction; in fact most security sites clearly distinguish viruses and vulnerabilities by classifying them separately.
Most administrators think that users are to blame; experts don’t. You are an admin, aren’t you?
http://securityfocus.com/infocus/1804
Many of the vulnerabilities a worm finds are, in fact, not things that can be patched (such as bad file share protections, poor user policies, and so on). This leads many administrators to repeatedly take only a single lesson away from infections, and often it is the conclusion that their users are entirely to blame. While everyone has a few stories about odd user behavior, they are not always at fault.
Malware Myths and Misinformation, Part Two: Attachments, AV Software and Firewalls
http://securityfocus.com/infocus/1698
A good read in general, and it quite clearly illustrates the myths and misconceptions you propagate. Namely, that you don’t need AV or any sort of prevention software.
This whole topic, and most of them recently, hasn’t been about malicious intent of users or socially engineered attacks. Don’t change the subject.
The article is talking about a fundamental flaw in IDN not malicious intent by users.
Name some and the design problem responsible. Be specific. Explain how it could be fixed. Do not try and pretend configuration settings and coding bugs are design problems.
Hunh. WTF. So coding bugs are not security issues and default configurations are not the vendor’s fault????
The majority of vulnerabilities in any system can be attributed to coding bugs – coding bugs that should have been found in testing and review.
Again, I’ll point out it’s behaviour emulated by nearly every other similarly featureful GUI out there – the attachment is passed off to the shell (or whatever module it is that determines what program to open what files with).
Please enumerate a few examples and show me how each of them is affected by vulnerabilities similar to those Outlook and Internet Explorer have.
They almost all also use the same “workaround” to enhance security that Outlook has for a few versions now as well – blocking (or severely restricting) certain filetypes.
Adding a feature/workaround in reaction to a few attacks having zipped through the internet is not good design, it’s an afterthought. These versions showed up in 2001 and 2002, years after Microsoft started taking heat for them.
Why should there be ? The vast bulk of situations where this happens (opening attachments with movies, word documents, jpegs, other messages, text files, meeting requests, etc) the behaviour is desirable and harmless.
Not when the code that detects the file type makes a mistake and executes a disguised script/binary. There should be a policy that denies execute permission to any binary or script launched by, say, Outlook or its shell.
When the script or binary tries to execute, it should get a NO_EXEC failure and die. That is the proper way to design it, not restricting certain file types.
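The deny-execute idea can be approximated even without OS-level policy support: a mail client could simply strip the execute bits when writing attachments to disk. A hypothetical sketch (the save_attachment helper is invented for illustration, not any mail client’s real code):

```python
import os
import stat
import tempfile

def save_attachment(data: bytes) -> str:
    """Write an attachment to disk with all execute bits stripped,
    so a disguised binary cannot simply be launched."""
    fd, path = tempfile.mkstemp(prefix="attachment-")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    # Owner read/write only: no execute bit for anyone.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return path

# A shell script disguised as an attachment lands on disk non-executable.
path = save_attachment(b"#!/bin/sh\necho gotcha\n")
mode = os.stat(path).st_mode
print(bool(mode & stat.S_IXUSR))  # False
```

This is weaker than a kernel-enforced policy (the user, or malware acting as the user, can still chmod the file back), which is why the poster’s point about an OS-level NO_EXEC failure is the stronger design.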
Additionally, Outlook has *never* (coding bugs aside) opened attachments without first warning the user it might be dangerous in a dialog that defaults to “No”.
“Never executed” and “coding bugs aside” don’t gel. It is either “never” or “has”, given your statement.
Coding bugs that cause security holes are the company’s problem. Most companies serious about security have strict code review and security review processes.
The problem isn’t the APIs, it’s lazy (or incompetent) developers.
Windows NT has been on the market now for *12 years*. It’s been mainstream – at least in business environments – for nearly *9 years*. That’s how long developers have had to be writing multiuser friendly code. Yet even companies like id do *stupid* things like try and write to files inside the program directory.
Last I checked, id didn’t make software for the business environment.
Really, why didn’t Microsoft kill Windows 9x and Me 12 or 9 years ago? id wrote for the market that made them money, and Microsoft let them.
They only did stupid things like writing to the program directory because Microsoft told them it was OK and part of the API. Microsoft also let ISVs replace Windows DLLs, and then later put in a stupid hack to recover, by caching the DLLs and replacing them if they were changed. Microsoft should have strictly prohibited these things as part of their programming guidelines.
Like on Unix, /sbin, /usr/sbin, /opt, and /var exist for these very reasons, and most ISVs or developers have not broken these rules in 30+ years.
Microsoft set the precedent and developers followed.
Even the “but everyone was using Windows 9x” argument doesn’t wash – the same multiuser features (from a developer perspective) and APIs were also introduced in later versions of Windows *95*, so it’s not like the consumers Windows OS hasn’t had them for more than long enough.
Bullshit. All it took was a Cancel on the “enter your user name and password” dialog to bypass the Win9x series’ multiuser capabilities – which they never had; there was only one user. The username thing was for networking.
Please explain how Microsoft can stop companies writing bad code, because I’m sure companies *everywhere* would like to know.
By publishing APIs and ABIs that prevent developers from writing said bad code. By not making the Windows directory the central repository for all driver files, libraries, and application files, for a start. OS X has bundles and an Applications folder just for apps; Unix has had a well-partitioned directory structure since the ’70s, so developers knew where things went.
Ever heard of the term DLL hell and wondered why it only applies in the windows context?
http://msdn.microsoft.com/msdnmag/issues/02/06/debug/default.aspx
DLL Hell stems from applications installing alternative DLLs over existing ones which may be critical to the operation of the installation, and in the absence of anything you could even loosely describe as version control, this process has been trashing Windows systems since 3.1. Windows 2000 has Windows File Protection (WFP), which stops key system files from being overwritten, and an equivalent of this will be present in Windows ME. Microsoft has also been giving its operating systems the ability to run multiple different DLLs of the same name, allowing different applications to use the particular version they need. Putting all the software you might need into a big pile of bloat strikes us as a particularly Microsoft fix, but what do we know?
http://www.theregister.co.uk/2000/05/05/windows_dll_hell_needs_more…
http://www.infoworld.com/cgi-bin/displayNew.pl?/livingst/980112bl.h…
No, users are to blame for doing things like deliberately running malicious code (most email trojans/viruses) and not applying software patches (most worms).
Most users have no clue what different file types or mime types even mean; they can’t deliberately run malicious code. Deliberate means you know something is harmful and you still execute it. Most users don’t know something is harmful.
How does one apply a software patch that is not even out yet? Most patches are released in reaction to vulnerabilities being found and made public. Some hackers are nice enough to work with the vendor; others aren’t.
The blame always lies on the part that is broken that someone has to take time to fix. If there is a vulnerability, it is the vendor’s responsibility; it should never have been there in the first place. Attacks have a direct correlation to vulnerabilities; it usually takes a vulnerability to form an attack, barring DDoS of course. Even DDoS attacks are caused by a design flaw in TCP making it vulnerable to SYN flood attacks.
Take Sasser, for instance: all it took was a malformed packet to cause an overflow in Active Directory and execute code. The user didn’t have to execute anything.
jp wrote:
> Hmmm. This is good. I always wanted to try the theory that linux and mac get less viruses because they are less popular. Firefox is a good experiment for this theory. If spyware dont get to firefox, then the theory dies.
Apache is a better example. Your theory is dead.
No, Douglas, it isn’t. See previous comment about apache 2.0 (24 vulns) vs. IIS 6.0 (3 vulns) in the same time period (2+ years).
It was going to start happening as soon as mozilla started to gain share… I’m just worried that the true problem with the concept of open source is going to show now… Something that has bothered me since I first heard of the concept.
Closed source means that most often, to even find a vulnerability you have to disassemble the code and look at it as machine code… Now x86 assembly is NOT the most common skill out there… Since most open source programs are written in some variation of C, the number of people who understand how it works goes up by a factor of a few thousand or more… Do we see the problem here?
You have the FULL source code – removing the hardest part of virus writing in the first place: Reverse Engineering what you want to infect. It makes exploiting it EASIER… It also makes fixing the exploits easier… Which of course leads to another issue…
Just how long was the patch for Windows that prevented the Blaster virus out and available before the Blaster virus even existed? Increasingly we are seeing viruses coming out AFTER the patches that prevent them. Considering that most such exploits are made by disassembling the patch to find out what it does… imagine what someone with those skills could do with a CVS tree and the full documented source code.
I have the sinking feeling the pace of ‘whack a mole’ on patching exploits is about to get VERY interesting.
One problem is that while it may be possible to find a potential buffer overflow/double free etc by looking at the source, to take advantage of that against a *compiled* version of the program to run arbitrary code, you still have to have a good understanding of assembly and memory management.
Logical errors are fair game though, and made much easier by having the code. Still, these are also the easiest for the programmers to find while bug hunting.
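The mechanics being described can be illustrated without any assembly at all. The sketch below (all names hypothetical) uses Python’s `ctypes` to emulate an unchecked copy into a fixed-size buffer, showing how the write spills into whatever field happens to sit next to it in memory – turning that into arbitrary code execution against a real compiled binary is the hard part the comment above refers to.

```python
import ctypes

# Toy layout: an 8-byte buffer with an "authenticated" flag sitting
# directly after it in memory, much like adjacent locals in a C frame.
class Frame(ctypes.Structure):
    _fields_ = [("buf", ctypes.c_char * 8),
                ("authenticated", ctypes.c_int)]

def overflow_demo() -> int:
    frame = Frame()                            # flag starts at 0
    payload = b"A" * 8 + b"\x01\x00\x00\x00"   # 12 bytes for an 8-byte buffer
    # An unchecked copy, like a strcpy/memcpy with no bounds check:
    ctypes.memmove(ctypes.addressof(frame), payload, len(payload))
    return frame.authenticated

print(overflow_demo() != 0)  # True: the overflow clobbered the flag
```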
By the way, the domain name spoofing attack mentioned in the article does not require having the source code, or knowing any programming language at all. The problem is that the IDN implementation is working perfectly.
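For the curious, the homograph is easy to inspect yourself. Python’s built-in `idna` codec performs the same ToASCII conversion browsers apply before a DNS lookup, so the spoofed name shows up as an entirely different `xn--` domain (a sketch; the exact punycode label depends on the codec):

```python
# U+0430 is CYRILLIC SMALL LETTER A -- visually identical to Latin 'a',
# which is the whole trick behind the spoofed paypal domain.
genuine = "paypal.com"
spoofed = "p\u0430ypal.com"

def to_dns_name(host: str) -> str:
    # IDNA ToASCII: pure-ASCII labels pass through unchanged; anything
    # else becomes an xn-- punycode label, which is what DNS resolves.
    return host.encode("idna").decode("ascii")

print(to_dns_name(genuine))                      # paypal.com
print(to_dns_name(spoofed).startswith("xn--"))   # True: a different domain
```

A browser that displayed the punycode form (or flagged mixed-script labels) would make the spoof obvious, which is why the “working perfectly” IDN implementation is the problem.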
http://www.microsoft.com/technet/prodtechnol/windows2000pro/evaluat…
“The Windows Installer service is an operating system component. It will be included in Windows 2000 and will also be provided as a service pack for the Windows 95, Windows 98, and Windows NT® 4.0 operating systems so that these platforms can run Windows Installer-enabled applications.
In the past, every application provided its own setup executable file or script. Therefore, each application had to ensure that the proper installation rules (such as file versioning rules) were followed. Furthermore, no central reference for installation rules existed because setup was not considered to be a proper part of the development process; few, if any, best practice guidelines were available for developers authoring the setup routines.“
This basically says, “Look chaps, we didn’t think that installation was a part of development process, we let you screw up, we are fixing it with windows 2000”. There you have it straight from the horse’s mouth. They wrote the APIs and didn’t publish installer guidelines.
There is a trend with you, which basically boils down to “Microsoft can do no wrong”. Developers and users are stupid, and that is why Windows has security problems. If a Windows machine is slow, it is broken. OS X is slow (regardless of how much more advanced it is than XP graphically). Everyone but Microsoft is stupid, and it’s almost always the user or developer to blame. Yada yada yada.
Come to think of it you sound like a windows admin. In fact I vaguely remember you claiming to be one. It all makes sense.
While Firefox does follow the links, it is clear that I am not on PayPal – the ‘a’ is clearly rendered as a non-printable character in the URL field. And yes, I look there all the time.
One suspicious site tried to install an extension into my Firefox browser, so I clicked no. But you know Joe Users; they’ll just click yes, and then you know what will happen.
Yeah, let’s look at IIS vs. Apache.
IIS6 has had nearly 0 exploits, Apache 2 has patches every week.
There’s a world of diffence between becoming a target and getting shot.
“This is your ass”.
<bam>
“This is your ass with a cap busted in it. Any questions?”
If you have market share you’ll be a target – wow, cutting-edge rocket science there. I’ll take it one step further: if it’s on the internet, it’s a target.
IIS is essentially a component of Windows Server 2K3. Let’s see a backport of it to Win2K and we’ll see if it holds up as well.
Security breaches are not the same as worms, viruses, and phishing attacks caused by sloppy code development and design processes. Most security experts can tell you the distinction; in fact, most security sites clearly distinguish viruses and vulnerabilities by classifying them separately.
However, here in OSNews it’s all Windows’ fault. Or so you and others like you like to say.
Most administrators think that users are to blame; experts don’t.
Since you’ve not specified exactly what blame is being placed *for*, that’s a pretty open-ended statement.
You are an admin, aren’t you?
Correct. I actually have to deal with the real world, not write articles on the internet. As such, I’m very suspicious of people who claim to be “experts” but have never actually had to deal with real life environments. The real world is not a research lab.
Many of the vulnerabilities a worm finds are, in fact, not things that can be patched (such as bad file share protections, poor user policies, and so on). This leads many administrators to repeatedly take only a single lesson away from infections, and often it is the conclusion that their users are entirely to blame. While everyone has a few stories about odd user behavior, they are not always at fault.
I’ve never said users are always at fault. Nor have I ever suggested the systems are never at fault.
I’ll also point out an admin *is* a user. If their incompetence and/or ignorance causes a security breach, that’s still a user problem.
Take your strawmen elsewhere.
A good read in general, and it quite clearly illustrates the myths and misconceptions you propagate – namely, that you don’t need AV or any sort of prevention software.
I’ve never suggested “you” don’t need AV or any sort of prevention software, I’ve merely pointed out that if you’re reasonably well prepared and careful, you don’t need to *rely* on such software.
I insist my users have AV software and firewalls because I know they like to do foolish things like blindly run things their friends told them to.
This whole topic, and most of them recently, hasn’t been about malicious intent of users or socially engineered attacks. Don’t change the subject.
I wasn’t. You’re the one belabouring the point.
Hunh. WTF. So coding bugs are not security issues and default configurations are not the vendors fault????
No, they’re not *DESIGN* issues. You said:
“There is more that affects windows than traditional security problems. The amount of worms and viruses that are not directly related to security but to the design of windows is an order of
magnitude greater.”
A coding bug is not a design issue.
A poor configuration setting is not a design issue.
The majority of the vulnerabilities in any system can be attributed to coding bugs – coding bugs that should have been found in testing and review.
Coding bugs are hardly the sole domain of Windows.
Please enumerate a few examples and show me how each of them is affected by vulnerabilities similar to those Outlook and Internet Explorer have.
OS X’s Mail.app.
KDE’s kmail.
Both use the system-wide handles for filetypes to determine what to do with certain attachments.
Note that using the same technique does not imply they will suffer from the same problems, given the problems are largely not caused by the technique, but by the users.
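As a rough illustration of how such a lookup behaves – this is a sketch using Python’s `mimetypes` as a stand-in for a system-wide handler table, not any particular mail client’s code – the type is guessed from the final extension, which is exactly why a double extension is a favourite disguise:

```python
import mimetypes

def guessed_type(filename: str) -> str:
    # Guess purely from the final extension, the way desktop file-type
    # associations (and mail clients deferring to them) typically do.
    mime, _encoding = mimetypes.guess_type(filename)
    return mime or "application/octet-stream"

print(guessed_type("report.pdf"))       # application/pdf
print(guessed_type("report.pdf.exe") == "application/pdf")  # False
```

The lookup itself is doing its job correctly in both cases; whether handing the result straight to the OS launcher is wise is the real argument here.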
Not when the code that detects the file type makes a mistake and executes a disguised script/binary.
The code isn’t making a mistake. It’s identifying a script or binary perfectly well and doing exactly the same thing to them that would be done if the user had double-clicked them anywhere else in the OS.
There should be a policy that denies execute permission to any binary or script launched by, say, Outlook or its shell.
Like the way Outlook has been blocking those sorts of attachments for years now ?
When the script or binary tries to execute, it should get a NO_EXEC failure and die. That is the proper way to design it, rather than restricting certain file types.
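A minimal sketch of that policy, using Unix permission semantics and a hypothetical `save_attachment` helper: write every attachment with no execute bits, so a direct exec is refused by the kernel rather than by the mail client’s filetype list.

```python
import os
import shutil
import tempfile

def save_attachment(data: bytes, directory: str, name: str) -> str:
    # Whatever the attachment claims to be, write it mode 0600:
    # readable and writable by the owner, never directly executable.
    path = os.path.join(directory, name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

demo_dir = tempfile.mkdtemp()
saved = save_attachment(b"#!/bin/sh\necho pwned\n", demo_dir, "cute.jpg.sh")
can_exec = os.access(saved, os.X_OK)   # the kernel says no, not the client
print(can_exec)  # False
shutil.rmtree(demo_dir)
```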
Your proposal *is* restricting certain filetypes, just in a different way. You can’t get away from “restricting certain filetypes” when you only want to, well, restrict certain filetypes atypically in a limited number of scenarios.
“Never executed” and coding bugs aside don’t gel. It is either “never” or “has” given your statement.
The assertion is that Outlook is faulty by design because it launches attachments without prompting the user. The point is that this isn’t true, except for a few (fixed) bugs.
This is a bit like saying $PROGRAM is faulty because at some stage in its past a bug exhibited unwanted behaviour. What software are you thinking of that has always been bug-free, to hold Outlook up to such a standard?
Most companies serious about security have strict code review and security review processes.
So who are these companies that have never had coding bugs in released software ?
Last I checked Id didn’t make software for the business environment.
Does it matter who they make software for ?
Really why didn’t Microsoft kill the windows 9x and Me 12 years or 9 years ago.
Because they didn’t exist 12 years ago and 9 years ago end users were still crying out for extensive DOS legacy support.
You really have NFI about computer history and how businesses actually operate in the real world, do you?
ID wrote to the market that made them money and Microsoft let them.
The market they wrote to was Windows 2000 and XP (Doom 3 is unsupported – although it runs – on Windows 9x). Their out-of-the-box requirement for Administrator privileges to run Doom 3 is both unnecessary and ridiculous.
They only did stupid things like write to the program directory because Microsoft told them it was OK and part of the API.
No, Microsoft didn’t tell them it was OK. Indeed, Microsoft specifically told them *not* to do it.
The API is irrelevant – all it is concerned with is opening and writing to files. The security aspect (ie: why Admin privileges are required) is handled elsewhere.
Microsoft also let ISVs replace windows dlls.
Actually with Windows File Protection they don’t – and I’ve yet to see any other OS that implements such a feature.
“Linux” lets “ISVs” replace its shared libraries as well. Indeed, so does every other platform.
Then later put a stupid hack to recover by caching the dlls and replacing them if they were changed. Micorosft should have as a part of thier Programming guidelines stictily prohibited these things.
Their programming guidelines do prohibit “such things”.
Bullshit. All it took was a cancel on the enter your user name and password dialog to bypass win9x series OS multiuser capabilities, which they never had, there was only one user. The username thing was for networking.
I said from a developer perspective, not an OS perspective. To the developers, all the features they needed to become multiuser-friendly were there – specific and designated areas for storing configuration data, specific and designated areas for writing files, guidelines to avoid writing to system or program areas, etc, etc.
By publishing APIs and ABIs that prevent developers from writing said bad code.
Again, I’d be fascinated to know how you’d plan to do this. I also find it rather hypocritical you hold Microsoft up to a much higher standard than anyone else.
By not Making the windows directory the central repository for all driver files and libraries and program files for Apps,for a start, […]
It’s not (well, it’s not supposed to be – but as previously mentioned, there’s little Microsoft can do to stop developers plonking their files wherever they want).
Like OS X has bundles and an Application folder for just Apps . Unix had a well partitioned directory structure since the 70s, developers knew where things went.
So does Windows.
Ever heard of the term DLL hell and wondered why it only applies in the windows context?
Probably because they’re not called DLLs on any other platform, because every other platform certainly has its fair share of shared library woes. The dependency cascades of Linux systems are as legendary as the DLL problems of Windows 3.1.
Of course, since then – since about, say, 1998 – “DLL hell” has largely been an anachronism. “Dependency hell” is still well and truly alive in the Linux world, however.
Note that no other platform provides any more protection than Windows for its system files, and most provide a great deal less.
Most users have no clue what different file types or mime types even mean, they can’t deliberately run malicious code. Deliberate means you know something is harmful and you still execute it.
No, that’s malicious. What you are describing is ignorance.
Most users don’t know something is harmful.
Correct. That’s one of the biggest problems. Know why ? Because “harmful” is almost completely dependent on context, and computers are pretty bad at figuring out context.
How does one apply a software patch that is not even out yet? Most patches to vulnerabilities are released reactionary to them being found and made public. Some hackers are nice enough to work with the vendor others aren’t.
False. The majority of worms, etc exploit holes that were fixed before they were published.
Attacks have a direct correlation to vulnerablities, […]
I’d be interested to see your reasoning (or evidence) for that. I mean, Windows 3.1 has some fairly major vulnerabilities, but you don’t see many attacks on it these days.
[…] it usually takes a vulnerabilty to form an attack, barring DDOS of course.
No, it takes a vulnerability to form an *exploit*.
Even DDOS attacks are caused by a design flaw in tcp making it vulnerable to syn flood attacks.
That’s not the only type of DDoS.
Take sasser for instance, all it took was a malformed packet to cause and overflow in Active directory and execute code. The user didn’t have to execute anything
So ? I’ve never said the user is *always* at fault. Nasties like Sasser are very much in the minority. Not to mention, if I’m not mistaken, the patch for the vulnerability Sasser exploited was posted a couple of weeks before the worm was released.
This basically says, “Look chaps, we didn’t think that installation was a part of development process, we let you screw up, we are fixing it with windows 2000”. There you have it straight from the horse’s mouth. They wrote the APIs and didn’t publish installer guidelines.
No, it says that an automated method wasn’t distributed with the OS until Windows 2000 – prior to that developers had to write their own installers to check and comply with the *already established guidelines*. This is why there used to be third party installer tools like Installshield.
Consider it analogous to unix/linux before and after the prevalence of package management tools – the guidelines for where to install, what to check for, etc. were well known; there just wasn’t an automated tool distributed with the OS, so developers used their own (typically a makefile).
Windows – since the days of NT 3.1 – has had well known and established guidelines for where to install applications, where to store user configuration data, where to store system-wide configuration data, where to store user files, where to store system-wide files, where to install system libraries, where to install application-specific libraries, etc, etc, etc.
That developers don’t follow these standards, due to laziness, incompetence, or ignorance, is not Microsoft’s fault. Nor is there anything, practically speaking, Microsoft can do to force them to, apart from breaking their applications (which they then get criticised for).
“Of course, since then – since about, say, 1998, “DLL hell” has largely be an anachronism. “Dependency hell” is still well and truly alive in the Linux world, however.”
How so? I haven’t had trouble with deps for quite a while, and that covers multiple distros in real environments, not just my home boxes. Yes, you need connectivity to resolve deps, but with apt, yum, and the rest of the package managers available, I cannot see how people are still suffering from “Dependency Hell.”
Probably because they’re not called DLLs on any other platform, because every other platform certainly has its fair share of shared library woes. The dependency cascades of Linux systems are as legendary as the DLL problems of Windows 3.1.
Nonsense. Shared libraries exist in lots of OSes and aren’t as much of a problem.
Of course, since then – since about, say, 1998, “DLL hell” has largely be an anachronism. “Dependency hell” is still well and truly alive in the Linux world, however.
Microsoft only just put assemblies into XP to deal with it; how could it have been fixed in 1998?
I am not a Linux zealot and don’t really care much for its dependency hell, though I think newer tools have fixed the issue more elegantly than MS has.
Note that no other platforms provides any more protection than Windows for its system files, and most provide a great deal less.
Could it be because they don’t need to?!!
No, that’s malicious. What you are describing is ignorance.
Explain to me how you examine malicious code as an admin. Do you disassemble it and examine it? Do you know for a fact that a file with a .mpeg extension doesn’t contain hidden code that gets executed via a buffer overflow caused by playing it in WMP? How do you know what is malicious or not?
Correct. That’s one of the biggest problems. Know why ? Because “harmful” is almost completely dependent on context, and computers are pretty bad at figuring out context.
When a file that shouldn’t be executable is executed, the OS knows the context. You can always map a non-executable media file’s pages no-execute, and a PC landing in such a page will cause a trap when the CPU tries to execute from it. You print a message saying blah.mpeg just tried to execute, and exit gracefully.
That is more than enough context. BTW, people develop code that gives you whatever context you use files in.
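That page-level mechanism is exactly what `mmap` exposes. The sketch below (Unix-specific) maps a stand-in “media file” with read permission only and no `PROT_EXEC`, so an instruction fetch from those pages would trap; the trap itself isn’t demonstrated, since it would kill the process.

```python
import mmap
import os
import tempfile

# Create a stand-in "media file" one page long.
fd, path = tempfile.mkstemp(suffix=".mpeg")
os.write(fd, b"\x7fFAKE" + b"\x00" * (mmap.PAGESIZE - 5))

# Map it readable but with no PROT_EXEC: these pages are data,
# not instructions, as far as the CPU is concerned.
view = mmap.mmap(fd, mmap.PAGESIZE, prot=mmap.PROT_READ)
header = view[:5]          # reading the pages works fine...
print(header)              # ...but jumping into them would fault.

view.close()
os.close(fd)
os.unlink(path)
```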
False. The majority of worms, etc exploit holes that were fixed before they were published.
Prove it.
I’d be interested to see your reasoning (or evidence) for that. I mean, Windows 3.1 has some fairly major vulnerabilities, but you don’t see many attacks on it these days.
Is anybody using it these days?
That’s not the only type of DDoS.
That is the fundamental idea behind a DoS; you could always go layers above TCP and do it.
No, it says that an automated method wasn’t distributed with the OS until Windows 2000 – prior to that developers had to write their own installers to check and comply with the *already established guidelines*. This is why there used to be third party installer tools like Installshield.
You really can’t read, can you?
What does this mean:
Furthermore, no central reference for installation rules existed because setup was not considered to be a proper part of the development process; few, if any, best practice guidelines were available for developers authoring the setup routines.”
Windows – since the days of at NT 3.1 – has had well known and established guidelines for where to install applications, where to store user configuration data, where to store system-wide configuration data, where to store user files, where to store system wide files, where to install system libraries to, where to install application specific libraries, etc, etc, etc.
I just posted a Microsoft document on the subject proving you wrong, and you are still rambling.
Post those guidelines and official Microsoft documents to make your case. Or I suggest you shut up and not look dumber than you already do.
That developers don’t follow these standards, due to either laziness, incompetence, or ignorance is not Microsoft’s fault. Nor is there anything, practically speaking, Microsoft can do to force them to, apart from breaking their applications (which they then get criticised for).
Microsoft didn’t have any standards, by their own admission. Provide info to back your claims up.
Hunh. Do you have ADD? We are talking about Windows security problems; the paragraphs leading to the statement you quoted say so.
Which *aspects* of “Windows security problems”. Vulnerabilities ? Running malicious code ? Not patching ? Typing in their banking details when some random email asks for them ?
They are not. But there are design flaws in Windows.
You agree they’re not design issues, but then assert they’re design flaws in Windows. Do you think you can make your mind up ?
Coding bugs are not design flaws.
Poor configuration is not a design flaw.
Talking about *a bug* is different from talking about the number of bugs Outlook has had.
Which programs – of similar complexity – are you thinking of that have had significantly less ?
If users of these clients are just as dumb as you claim users to be, then why haven’t there been more issues with them?
If no-one’s writing malicious code to take advantage of it, there’s not going to be many problems, are there ?
Most of them seem to have had less issues than Microsoft.
By what measure ?
Most gamers kept Win98 around till XP came along. Win98 didn’t have multiuser support.
Windows 98 had the necessary APIs to write applications against so that they were “multiuser friendly” to NT, however.
Microsoft couldn’t do what they did in 2001 with XP 9-12 years earlier, when it would have been easier, with fewer users than in 2001. Really!!
Really. Too many people with much slower machines requiring *substantial* amounts of low-level DOS and hardware support that is – for all intents and purposes – impossible under NT.
I have more knowledge of computer history and business in my pinky than you do in your entire being.
Right. Yet you don’t even know about legacy DOS support requirements.
id has been writing software for Microsoft’s platforms since 1991. I enjoyed playing Commander Keen, back in the day.
And…? I said, quite clearly (you even quote it) that Doom 3 was targeted at Windows 2000 and XP (Windows 98 is not a supported platform).
Here is id software’s history….
What’s your point ?
I just posted a Microsoft Tech article claiming Microsoft didn’t have any set guidelines, which led to the installer in Win2k.
You posted a tech article explaining that there weren’t any guidelines for how installers go about installing things.
This is a very, very different thing to saying there weren’t any guidelines about where to put certain types of data.
That is a bunch of crap. When I say open a file, in any OS, if I don’t have permission the call should fail.
Which is what I just said.
I thought I already covered this and called it a hack.
What you call it is laughably irrelevant.
Linux is a kernel; it doesn’t care what ISVs do with libraries. That is handled by distros like Red Hat and SUSE… they publish their guidelines for ISVs.
Hence the “Linux”. However, if you prefer – Suse and Redhat let their ISVs “replace” system libraries as well.
I already covered this; read Microsoft’s own docs. Or at the very least provide docs that confirm your ridiculous claim.
msdn.microsoft.com
Here’s one to get you started:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ms…
Note the various system areas and how far back compatibility for them goes.
Nonsense. Shared libraries exist in a lot of OSes and aren’t as much of a problem.
They aren’t much of a problem in Windows.
Microsoft just put assemblies into XP to deal with it; how could it have been fixed in 1998?
You can’t “fix” it. All you can do is try to stop developers “breaking” it. Anytime you have shared libraries, you have the potential for them to cause problems. Why do you criticise Microsoft for trying to make their implementation better ?
“DLL hell” hasn’t been a problem in Windows for ~8 years now. No matter how much you jump and yell, this will not change.
Could it be because they don’t need to?!!
Given the extensive dependency problems in many Linux installs, somehow I don’t think so.
Explain to me how you examine malicious code as an admin. Do you disassemble it and examine it? Do you know for a fact that a file with a .mpeg extension doesn’t contain hidden code that gets executed by a buffer overflow caused by playing it in WMP? How do you know what is malicious or not?
A filename like “annaKnaked.mpeg” is usually a major giveaway.
When a file that shouldn’t be executable is executed, the OS knows the context.
You don’t seem to understand that these were files that *were* supposed to be executed, according to the shell. That’s what you do with binaries – you execute them.
More than enough context. BTW, people develop code that exploits whatever context you use files in.
Really ? What context are you going to use to decide if deleting a file is harmful ? How about making an outgoing network connection ? How are you going to determine that programmatically ?
Prove it.
Prove it yourself. Look up the major Windows worms. Compare their release dates with the release dates of the patches that fix the vulnerabilities they exploit.
How many worms can you name that exploited *unpatched* vulnerabilities ?
Is anybody using it these days?
Don’t tell me you’re admitting how many people are using a platform is a relevant issue ?
This is a major event. I think we need to take a few minutes to catch our breath.
That is the fundamental idea behind a DoS; you could always go layers above TCP and do it.
No, the fundamental idea behind a DDoS is you hit a host with more traffic than it is capable of responding to. Whether this incapacity is caused by fundamental aspects of TCP/IP, bugs in the networking stack or simply not enough bandwidth is a matter of semantics.
You really can’t read, can you?
I can read perfectly well. You’re the person having trouble distinguishing between two different things.
If you know any, talk to some Windows developers. You might find it enlightening.
I just posted a Microsoft document on the subject proving you wrong, and you are still rambling.
You posted a Microsoft document and then drew a conclusion equivalent to “before the days of rpm and apt, there were no standards for installing applications in linux”.
You agree they’re not design issues, but then assert they’re design flaws in Windows. Do you think you can make your mind up ?
You are saying that all security problems in Windows are coding bugs. My mind is clear; yours isn’t.
Windows 98 had the necessary APIs to write applications against so that they were “multiuser friendly” to NT, however.
No it didn’t. All the Win32 APIs that deal with multiuser stuff for NT are not supported on Windows 95/98/Me.
They aren’t much of a problem in Windows.
I just posted data, and even you claimed they were a problem till Windows File Protection came along.
You can’t “fix” it. All you can do is try to stop developers “breaking” it. Anytime you have shared libraries, you have the potential for them to cause problems. Why do you criticise Microsoft for trying to make their implementation better ?
I am not criticizing MS. I am saying they screwed up and are fixing it. You claimed they were always in the right. Funny, even Microsoft says they screwed up.
“DLL hell” hasn’t been a problem in Windows for ~8 years now. No matter how much you jump and yell, this will not change.
I am not jumping around; I posted facts. You have yet to pony up any real data to back up your speculation.
Right. Yet you don’t even know about legacy DOS support requirements.
Why did MS solve said requirement in 2000 and XP?
Really ? What context are you going to use to decide if deleting a file is harmful ? How about making an outgoing network connection ? How are you going to determine that programmatically ?
You have firewalls for the network connection and intrusion detection systems that detect files that have changed.
Prove it youself.
I don’t have to. You are the one making a claim.
Don’t tell me you’re admitting how many people are using a platform is a relevant issue ?
There is a difference between no one using a 12 year old deprecated platform and currently shipping systems.
Gee, I wonder if my ZX Spectrum had any vulnerabilities!!!! Should make drsmithy feel smart if it did.
I can read perfectly well. You’re the person having trouble distinguishing between two different things.
Not the impression I got.
You posted a Microsoft document and then drew a conclusion equivalent to “before the days of rpm and apt, there were no standards for installing applications in linux”.
I didn’t draw any such conclusion. There are no standards for installing applications in Linux. Apt and RPMs are distro-specific.
Microsoft makes the distribution; they specify what standards developers should follow. I posted something from which any reasonable person can draw only one conclusion. And I have proved that not only are you not the brightest person, you aren’t reasonable either.
Here’s one to get you started:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ms…..
I love it when people make my point for me.
CommonAppDataFolder
Full path to the file directory containing application data for all users. Exists on Windows 2000.
CSIDL_COMMON_APPDATA
Notice common app data folder is win2k specific.
Also notice the lack of any multiuser support for anything pre win2k.
The installer sets the PersonalFolder property to the full path to the Personal folder for the current user.
Remarks
Common values for this property are C:\Winnt\Profiles\[LogonUser]\Personal (Windows NT/Windows 2000) and C:\My Documents (Windows 95 and Windows 98).
Notice the Common Files folder. Surprise, it’s in Program Files.
CommonFilesFolder Property
The installer sets the CommonFilesFolder property to the full path to the Common Files folder for the current user.
Remarks
The installer sets this property. For example, on 32-bit Windows, the value may be C:\Program Files\Common Files. On 64-bit Windows, the value may be C:\Program Files (x86)\Common Files.
Again notice the lack of multiuser support in 95 and 98.
SendToFolder Property
The installer sets the SendToFolder property to the full path to the SendTo folder for the current user.
Remarks
Common values for this property are C:\Winnt\Profiles\[LogonUser]\SendTo (Windows NT/Windows 2000) and C:\Windows\SendTo (Windows 95 and Windows 98).
Need I go on further? My point about Microsoft’s guidelines was spot on. Even your flawed notion that Win95 and Win98 had multiuser APIs was handily disproven by the above examples.
Thanks for the link. Just because a property exists on many platforms in an API doesn’t mean they have the same semantics on all of them.
You are saying that all security problems in Windows are coding bugs. My mind is clear; yours isn’t.
I am saying nothing of the sort.
No it didn’t. All the Win32 APIs that deal with multiuser stuff for NT are not supported on Windows 95/98/Me.
Not true. For an example, consider the call used to load one of the multiuser aspects of the registry (HKEY_USERS):
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sy…
Note support for this call goes back to Windows 95.
Or consider this:
http://www.windowsitlibrary.com/Content/356/03/2.html
Security
The major WIN32 API implementation difference between Windows 95/98 and Windows NT is security. Windows 95/98’s implementation does not have any support for security. In all the Win32 API functions that have SECURITY_ATTRIBUTES as one of the parameters, Windows 95/98’s implementation just ignores these parameters. This has some impact on the way a developer programs. Registry APIs such as RegSaveKey and RegRestoreKey work fine under Windows 95/98. However, under Windows NT, you need to do a few things before you can use these functions. In Windows NT, there is a concept of privileges. There are different kinds of privileges, such as Shutdown, Backup, and Restore. Before using a function such as RegSaveKey, you need to acquire the Backup privilege. To use RegRestoreKey, you need to acquire the Restore privilege, and to use the InitiateSystemShutdown function, you need to acquire the Shutdown privilege.
Ie: same calls, just Windows 9x has no way of *enforcing* the security restrictions aspect of multiuser.
I just posted data, and even you claimed they were a problem till Windows File Protection came along.
You posted data indicating a method Microsoft are using to *improve* DLL handling, not demonstrating that it is bad.
I also made no such claim about WFP. The disappearance of “DLL hell” predates WFP; WFP was just another layer of robustness.
I am not criticizing MS. I am saying they screwed up and are fixing it.
Sure they did, *15 years ago* with Windows 3.x (which carried over into early versions of Windows 9x).
You claimed they were always in the right. Funny, even Microsoft says they screwed up.
No, I claimed it hadn’t been a problem for a very long time (~8 years).
I am not jumping around; I posted facts. You have yet to pony up any real data to back up your speculation.
Your “facts” are not supporting your assertions.
Why did MS solve said requirement in 2000 and XP?
They didn’t – by the time Windows 2000 had rolled around the majority of customers had weaned themselves off their DOS apps and legacy hardware. The “requirement” was not so much solved, as made irrelevant.
You have firewalls for the network connection and intrusion detection systems that detect files that have changed.
Telling whether or not a file has changed is useless. I’m asking for a programmatic way of determining whether that file was changed harmfully.
I don’t have to. You are the one making a claim.
I’m stating common knowledge. If it’s an important issue to you, the evidence is easily available to prove me wrong. I’m just not interested in spending hours collating data to appease a troll on OSNews.
Do you think you can even find half a dozen unique worms that exploited unpatched vulnerabilities ?
There is a difference between no one using a 12 year old deprecated platform and currently shipping systems.
Why ? What difference does it make if a certain minority of the market is using a 12 year old OS or a current OS ?
I didn’t draw any such conclusion. There are no standards for installing applications in Linux. Apt and RPMs are distro-specific.
Of course there were. Linux inherited them from unix (/lib, /bin, /usr, /usr/local, etc). You even said so yourself:
“Like on unix, /sbin /usr/sbin /opt/ /var/ exist for these very reasons and most ISVs or developer have not broken these rules in 30+ years.”
Microsoft makes the distribution; they specify what standards developers should follow. I posted something from which any reasonable person can draw only one conclusion.
Well, clearly not. All your cite said was that until Windows 2000 there had been no bundled, standard method for installing files, *not* that there were no established, published places for files to go.
Again, I’ll make the analogy and hope it gets through – prior to tools like rpm, apt, etc there weren’t bundled, standard methods for installing apps. There were, however, established, published places for certain things to go (as you noted yourself). Windows Installer is basically the same principle – now that we have MSIs, there’s no need for tools like InstallShield, et al. That doesn’t mean there weren’t established places for those third party installers to place their files (%PROGRAMFILES%, %USERPROFILE%, %APPDATA%, %SYSTEMROOT%, HKEY_USERS, etc).
Ie: same calls, just Windows 9x has no way of *enforcing* the security restrictions aspect of multiuser.
Ergo, it doesn’t support multiple users properly. So using such APIs is beneficial to a developer how exactly?
It’s funny because the next paragraph goes:
Under Windows 95/98, anybody can install a VXD. To install a kernel-mode device driver under Windows NT, you need administrator privilege for security purposes. As mentioned previously, device drivers are trusted components of the operating system and have access to the entire hardware. By requiring privileges to install a device driver, Windows NT restricts the possibility that a guest account holder will install a device driver, which could potentially bring the whole system down to its knees.
I just got done showing you, with your own examples, how wrong you are. You are just too dense to understand.
I think you should focus on admin work and leave APIs and the like to us developers.
I also made no such claim about WFP. The disappearance of “DLL hell” predates WFP; WFP was just another layer of robustness.
That’s news to me and most people. What planet do you live on???
Your “facts” are not supporting your assertions.
Yes they are; you are too dense to see them.
Telling whether or not a file has changed is useless. I’m asking for a programmatic way of determining whether that file was changed harmfully.
What are you trying to get at? My point was that a user couldn’t tell if a file was harmful; no, the name is not always a giveaway. You said a user could. Now you are asking me how I can determine a file has harmful data programmatically.
Funny, I thought virus scanners were doing that very thing for years.
The point is it is impossible for a user, any user, to know a file has malicious content. So most users wouldn’t be able to deliberately run malicious code.
Note deliberate means thinking about it and knowing something is wrong, which would lead to malicious intent. Most users can’t just look at a file and say it would do harmful things. The only way is to do a bit compare with a known good file, like using md5sums, or to use a program to search for known harmful code within the file.
Without these techniques, it is impossible for the first recipients of a harmful file to know it is harmful. Not until the virus companies catch up can you do it in an automated and programmatic manner. And not until word is widespread does one even know what to look for. But by then it’s already too late.
Of course there were. Linux inherited them from unix (/lib, /bin, /usr, /usr/local, etc). You even said so yourself:
“Like on unix, /sbin /usr/sbin /opt/ /var/ exist for these very reasons and most ISVs or developer have not broken these rules in 30+ years.”
I said UNIX specifically; you conflated it with Linux. Linux didn’t inherit anything; it is not of the UNIX lineage, it borrowed concepts. Also, the GNU folks made the directory structure particularly different compared to UNIX.
The point here is UNIX had long-established guidelines; Windows didn’t. Read my next post tearing a nice hole in your argument, with your own example.
Again, I’ll make the analogy and hope it gets through – prior to tools like rpm, apt, etc there weren’t bundled, standard methods for installing apps. There were, however, established, published places for certain things to go (as you noted yourself). Windows Installer is basically the same principle – now that we have MSIs, there’s no need for tools like InstallShield, et al. That doesn’t mean there weren’t established places for those third party installers to place their files (%PROGRAMFILES%, %USERPROFILE%, %APPDATA%, %SYSTEMROOT%, HKEY_USERS, etc).
Most of these places still point to Program Files or C:\Windows, the places you have problems with id Software touching.
I am going to make one last point here.
90% of the world is not tech savvy, and hoping that they will suddenly be savvy enough to understand how computers work is a geek and computer administrator’s wet dream. It’s utopia.
The responsibility for security falls on the shoulders of the company making the software system for end users; in corporate environments, on the IT department and admins. Just as 90% of the world’s population can’t defend themselves in danger, we have police forces, armies, and governments to do that job.
When a company with as much money and power as Microsoft fails miserably to make sure it can make secure products, it is a problem. Which thankfully they have finally acknowledged; note their recent internal programs.
I have nothing against Microsoft in particular. At least not in the way you want to defend it.
I, however, have a problem with a smart-aleck Windows administrator blaming users for his own and his company’s shortcomings.
It’s your job to educate users and it is Microsoft’s to design and implement secure systems. Microsoft finally got the message, thanks to the emergence of alternatives like linux and OS X. It’s time you did too.
There are no standards for installing applications in Linux. Apt and RPMs are distro-specific.
Wrong. LSB has existed since 1998 and every major distribution adheres to it by policy. Non-conformance is regarded as a bug.
Ergo, it doesn’t support multiple users properly.
I never said it did.
So using such APIs is beneficial to a developer how exactly?
Allows them to write code that will still work when executed on a machine that *does* enforce multiuser security. Ie: exactly what I said half a dozen posts ago.
What are you trying to get at?
The same thing I’ve always been “trying to get at”. That most security problems are caused by users, because “most users don’t know something is harmful” and detecting whether or not something is harmful – *programmatically* – is very difficult.
Funny I thought virus scanners were doing the very same thing for years.
With varying degrees of success and generally only *after* harm has been caused to someone, somewhere.
The point is it is impossible for a user, any user, to know a file has malicious content.
It is, however, quite possible for them to make a reasonable *inference*.
If someone downloads a signed RPM from Red Hat that contains exploit code because some cracker infiltrated http://ftp.redhat.com, that’s an innocent and accidental mistake.
When someone launches an .exe attachment that promises to give them a lifetime supply of porn for $9.95 if they just type in their credit card, that’s deliberate stupidity.
Note deliberate means thinking about it and knowing something is wrong.
No, that’s malicious.
By your definition of “deliberate”, it’s impossible to “deliberately” do something right.
The example I gave above of someone executing an obviously questionable attachment is certainly deliberate, but in most cases not malicious.
Most users can’t just look at a file and say it would do harmful things.
No, most users *won’t* just look at a file and say it would do harmful things.
The only way is to do a bit compare with a known good file, like using md5sums. Or using a program to search for known harmful code within the file.
Or just exercise a few moments of common sense and critical analysis. That wipes out 90% of the potential vectors.
I don’t know why, but you seem to be under the impression that most malware vectors are subtle, carefully planned and expertly executed attacks that even the smartest person would have trouble detecting. I have no idea where you’ve gotten this idea from, but it’s a long, long way from the truth.
The most popular and common malware vectors are spam and “dodgy” websites (eg: warez, “free porn”).
Without these techniques, it is impossible for the first recipients of a harmful file to know it is harmful. Not until the virus companies catch up can you do it in an automated and programmatic manner. And not until word is widespread does one even know what to look for. But by then it’s already too late.
You must be constantly amazed by people who managed to avoid viruses and malware without the benefits of virus scanners then.
I said UNIX specifically; you conflated it with Linux. Linux didn’t inherit anything; it is not of the UNIX lineage, it borrowed concepts.
Linux is a reimplementation of unix.
Linux has been aspiring to unix standards (both official and de facto) since its inception.
Also the GNU folks made the directory structure particularly different compared to UNIX.
Uh huh. I see your knowledge of unix is about as good as your knowledge of Windows.
The biggest differences (relevant to this discussion) between GNU and traditional unixes is a propensity for GNU folks to plonk large amounts of things that traditionally went in /usr under /. That and their insistence on using bash as a replacement for sh, despite its incompatibilities.
There’s a hell of a lot more that’s similar about linux and unix than there is that’s different. Certainly, the parts relevant to this discussion are, for all intents and purposes, identical.
Then of course there’s the LSB, as another poster has reminded me.
The point here is UNIX had long-established guidelines; Windows didn’t. Read my next post tearing a nice hole in your argument, with your own example.
False. Windows has had similar published standards since the early 90s. These standards have certainly expanded over time – but so have everyone’s.
Most of these places still point to Program Files or C:\Windows, the places you have problems with id Software touching.
No, they point to various different points in both the system’s directory structure and its registry.
Some of this is trivial to demonstrate on an NT machine – just type ‘set’ into a command prompt. Other things you have to determine programmatically.
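The “just type ‘set’” suggestion can also be mimicked programmatically. A small sketch (the selection of variables is mine, and the “&lt;not set&gt;” marker is made up for illustration; on a non-Windows machine none of these will be defined):

```python
import os

def windows_locations(env=None):
    """Report where the standard Windows locations point, like typing `set`."""
    if env is None:
        env = os.environ
    # Per-machine and per-user areas discussed in the thread.
    names = ("PROGRAMFILES", "SYSTEMROOT", "USERPROFILE", "APPDATA")
    return {name: env.get(name, "<not set>") for name in names}
```

On an NT-family machine each entry resolves to a real directory; on Windows 9x most simply don’t exist, which is the point being argued.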
90% of the world is not tech savvy, and hoping that they will suddenly be savvy enough to understand how computers work is a geek and computer administrator’s wet dream. It’s utopia.
I have no expectation whatsoever that users will ever become tech savvy. *My* “wet dream” is that computers will become sufficiently automated and robust such that they don’t have to. OS X (and MacOS before it) has made good strides towards this goal.
I can’t wait for the “computer appliance”.
The responsibility for security falls on the shoulders of the company making the software system for end users; in corporate environments, on the IT department and admins. Just as 90% of the world’s population can’t defend themselves in danger, we have police forces, armies, and governments to do that job.
No, the responsibility for providing the security *infrastructure* falls on whoever is writing the software.
Just like the police can’t stop some random crazy pulling out a gun in a mall and shooting people, OS vendors can’t stop developers writing bad code that does silly or ignorant things. They can *try*, certainly – Windows File Protection is but one example – but at some level certain actions (like writing to files) have to be allowed to make the system usable.
When a company with as much money and power as Microsoft fails miserably to make sure it can make secure products, it is a problem. Which thankfully they have finally acknowledged; note their recent internal programs.
Microsoft have been providing a securable OS infrastructure for about 12 years now. Certainly, they’ve had their share of bugs and some of the configuration details of their systems have been less than ideal (RPC, for example) but the majority of the causes remain with lazy and/or ignorant developers not using the facilities available to them for writing programs that can be secure by, for example, not needing to be run with elevated privileges without reason.
Going back to my previous example, there is no excuse for id choosing to store savegames in the program’s directory when a perfectly good area already exists (%USERPROFILE%) and has for a very long time. All that it takes to allow Doom 3 to run as a regular user is adding a write privilege to the save game directory for that user (or group). All that it would have taken for id to have made Doom 3 multiuser-friendly is to have stored their savegames somewhere under %USERPROFILE% (ideally somewhere in %APPDATA%) like other games do. The vast bulk of problems running windows software as non-admins fall into this sort of category – applications erroneously (and needlessly) trying to write to system-wide areas like the program directory or HKEY_LOCAL_MACHINE in the registry. Again, this is nothing more than programmer error, as the *correct* areas for writing per-user data and configuration information have been available and published for years and years.
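The fix being described is small; here is a hedged sketch of the principle in Python (the directory names and game title are illustrative, not id’s actual code; on NT-family Windows, %APPDATA% resolves into the user’s profile):

```python
import os

def per_user_save_dir(app_name):
    """Return a per-user, writable save-game directory instead of the install dir.

    Uses %APPDATA% where it exists (NT-family Windows); the POSIX-style
    fallback is only for illustration.
    """
    base = os.environ.get("APPDATA")
    if base is None:
        base = os.path.join(os.path.expanduser("~"), ".local", "share")
    return os.path.join(base, app_name, "saves")
```

A game that writes its saves here runs happily as a non-admin user, because nothing under the locked-down program directory ever needs write access.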
I, however, have a problem with a smart-aleck Windows administrator blaming users for his own and his company’s shortcomings.
I place blame where it belongs. Unlike you, who always places blame on Microsoft.
Incidentally, I admin more than just Windows machines. Windows Systems Administration is a rather recent development in my career.
It’s your job to educate users and it is Microsoft’s to design and implement secure systems.
Both of which are done. Which is why none of *my* environments have ever had any major virus or malware outbreaks, or known cracker incursions.
Microsoft finally got the message, thanks to the emergence of alternatives like linux and OS X. It’s time you did too.
The guidelines I use for securing my environments are OS independent.
No, the responsibility for providing the security *infrastructure* falls on whoever is writing the software.
Can you name the company that wrote the software whose vulnerabilities Nimda, Blaster, CodeRed, Sasser, Melissa, etc., exploited?
In the case in point, it is clear Microsoft is to blame. But you are blaming the users for security problems.
Going back to my previous example, there is no excuse for id choosing to store savegames in the program’s directory when a perfectly good area already exists (%USERPROFILE%) and has for a very long time.
Only since NT. I think the debate here is whether Windows 95 and co. had it. They didn’t.
http://home.earthlink.net/~rlively/MANUALS/ENV/INDEX.HTM
All that it would have taken for id to have made Doom 3 multiuser-friendly is to have stored their savegames somewhere under %USERPROFILE% (ideally somewhere in %APPDATA%) like other games do. ……
Again, this is nothing more than programmer error, as the *correct* areas for writing per-user data and configuration information have been available and published for years and years.
Please publish the said configuration guidelines. So far you have been claiming they have been published for years but have yet to post a link to the documents. The links that you did post proved you wrong.
Note deliberate means thinking about it and knowing something is wrong.
No, that’s malicious.
By your definition of “deliberate”, it’s impossible to “deliberately” do something right.
The example I gave above of someone executing an obviously questionable attachment is certainly deliberate, but in most cases not malicious.
This is funny. The discussion that has ensued so far doesn’t support your claim. I think raptor asked you how a user would know what an “obviously” harmful file would look like. File names can be anything.
Anyway, your definitions as usual are wrong.
de·lib·er·ate
adj.
1. Done with or marked by full consciousness of the nature and effects; intentional.
2. Arising from or marked by careful consideration: a deliberate decision. See Synonyms at voluntary.
3. Unhurried in action, movement, or manner, as if trying to avoid error: moved at a deliberate pace. See Synonyms at slow.
raptor said a user executing an attachment with harmful code can never be deliberate if they didn’t know it was harmful. The dictionary agrees.
ma·li·cious
adj.
Having the nature of or resulting from malice; deliberately harmful; spiteful.
mal·ice
n.
1. A desire to harm others or to see others suffer; extreme ill will or spite.
2. Law. The intent, without just cause or reason, to commit a wrongful act that will result in harm to another.
I can guarantee you most users you talk about have no such intentions.
The maliciousness in this case lies with the person who made the harmful file, because said person took the time and knew the consequences.
A person clicking to open the file for any reason other than to bring down a network or cause havoc cannot be deliberate.
What you described the user doing, “downloading something that promises free p0rn”, is stupid or ignorant but not deliberate.
Most users can’t correlate running a program or opening an email with tremendous damage. You can, because you are an admin.
If you think every user should possess the skills and common sense needed to administer a network, then I have to tell you I would think twice before hiring you to admin any network.
No, most users *won’t* just look at a file and say it would do harmful things.
You have said this many times already. Please treat me as a normal user and tell me what to look for in a file that says it contains harmful code. Assume I received an email attachment.
Can you name the company that wrote the software whose vulnerabilities Nimda, Blaster, CodeRed, Sasser, Melissa, etc., exploited?
Can you grasp the difference between security infrastructure and actual security ?
Can you name any software with a perfect, bug-free history ?
In the case in point, it is clear Microsoft is to blame. But you are blaming the users for security problems.
Why is it Microsoft’s fault if a vulnerability that they have already patched is exploited ?
Only since NT. I think the debate here is whether Windows 95 and co. had it. They didn’t.
Actually, the original assertion was only since Windows 2000. Your page below handily points out that is false:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sh…
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sh…
There are plenty more about user profiles in Windows 9x if you want to go digging.
(Awaiting the predictable semantic, strawman argument about my usage of %USERPROFILE% to denote user profiles).
Please publish the said configuration guidelines. So far you have been claiming they have been published for years, but you have yet to post a link to the documents. The links that you did post proved you wrong.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dn…
(Awaiting the predictable complaint that it’s been updated in the last year).
This is funny. The discussion that has ensued so far doesn’t support your claim. I think raptor asked you how a user would know what an “obviously harmful” file would look like. File names can be anything.
Certainly they can. Typically, however, they’re fairly indicative.
Anyway, your definitions, as usual, are wrong.
[…]
raptor said a user executing an attachment with harmful code can never be deliberate if they didn’t know it was harmful. The dictionary agrees.
The deliberate act is executing the attachment. With the forewarning of it being harmful (ie: raptor’s definition) it becomes malicious.
Inadvertently opening an attachment that turned out to be harmful is still deliberate (because the user consciously performed the action) but not malicious.
Suffering from a coding bug that launches an attachment (or code) without the user’s intervention is not deliberate.
I’ll point out again that by your definition, deliberate actions can only be bad, which should indicate that it is broken.
I can guarantee you most users you talk about have no such intentions.
A point I’ve made myself on more than one occasion.
A person clicking to open the file for any reason other than to bring down a network or cause havoc cannot be acting deliberately.
Of course it can. When you double click an attachment, you’re performing a deliberate action. When you double click it with the foreknowledge that the effects will be harmful, you’re performing a malicious action.
Most users can’t correlate running a program or opening an email with tremendous damage. You can because you are an admin.
These days they certainly should be able to.
If you think every user should possess the skills and common sense of a network administrator, then I have to tell you I would think twice before hiring you to admin any network.
Not in the slightest. Quite the opposite, in fact.
You have said this many times already. Please treat me as a normal user and tell me what to look for in a file to know it contains harmful code. Assume I received an email attachment.
Is the message expected ?
Who is it from ?
Who is it sent to ?
What are the spelling and grammar like ?
Does it have an attachment ?
What’s the name of the attachment ?
How’s the spelling in the attachment file name ?
Is the message unsolicited ?
Etc.
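Taken together, those checks amount to a rough scoring heuristic. A minimal sketch of the idea in Python (the function name, flags, and thresholds are all illustrative assumptions, not a real mail filter):

```python
# Hypothetical sketch of the checklist above as a crude scoring heuristic.
# Every name and threshold here is an illustrative assumption, not a real filter.

def suspicion_score(sender_known: bool, expected: bool,
                    attachment_name: str, spelling_errors: int) -> int:
    """Count rough red flags for a message; higher means more suspicious."""
    score = 0
    if not sender_known:
        score += 1
    if not expected:
        score += 1
    if attachment_name:          # "Does it have an attachment?"
        score += 1
        # Double extensions like "photo.jpg.exe" are a classic disguise.
        parts = attachment_name.lower().rsplit(".", 2)
        if len(parts) == 3 and parts[-1] in {"exe", "scr", "pif", "vbs"}:
            score += 2
    if spelling_errors > 2:      # "What are the spelling and grammar like?"
        score += 1
    return score

# An unexpected mail from an unknown sender with "invoice.pdf.exe" attached:
print(suspicion_score(False, False, "invoice.pdf.exe", 3))
```

None of these signals is decisive on its own; the point of the list is that the combination shifts the odds.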
Can you grasp the difference between security infrastructure and actual security ?
Can you name any software with a perfect, bug-free history ?
You obviously can’t grasp the difference.
Actually, the original assertion was only since Windows 2000. Your page below handily points out that is false
You confused yourself again. The original argument was that not until Win2k was Microsoft trying to merge the consumer and professional versions of their OSes.
id Software has been making software since the days of DOS for the consumer line of Microsoft’s business. They aren’t stupid. MS never had clear guidelines and APIs.
Unfortunately for you, everything you have provided proves I am right.
These API docs also show user profiles have been around since Windows 95 (plus some updates):
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sh…..
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sh…..
There are plenty more about user profiles in Windows 9x if you want to go digging.
You are grasping at straws aren’t you?
(Awaiting the predictable semantic, strawman argument about my usage of %USERPROFILE% to denote user profiles).
Sorry to disappoint you, but you will get a predictable yet different argument. Note the predictability here is that I am proven right once again by your own research.
Read those APIs carefully, especially this: IE 5.0 never shipped as part of Win95 or 98. IE 5.0 was released in 1999.
Windows 95 with Internet Explorer 5.0, Windows 98 with Internet Explorer 5.0, Windows 98 Second Edition (SE), Windows NT 4.0 with Internet Explorer 5.0, Windows NT 4.0 with Service Pack 4 (SP4)
Note:- On older systems that require the redistributable SHFolder.dll, you must explicitly link to SHFolder.lib before you link to Shell32.lib.
Only some CSIDLs are supported, including the following:
These are bolt-on hacks, in traditional Microsoft fashion, and incomplete ones at that.
The deliberate act is executing the attachment. With the forewarning of it being harmful (ie: raptor’s definition) it becomes malicious.
How are they forewarned? That is the whole point of contention here. What you are trying to say is that every harmful file jumps around and advertises that it is harmful, and if the user still clicks it they are being malicious.
Most email attachment viruses seem innocuous.
Inadvertently opening an attachment that turned out to be harmful is still deliberate (because the user consciously performed the action) but not malicious.
How? It boggles the mind that the definition of deliberate is posted a few posts above and you still can’t get it.
An inadvertent action by definition can never be deliberate.
in·ad·ver·tent
adj.
Not duly attentive.
Marked by unintentional lack of care. See Synonyms at careless.
adj : without intention (especially resulting from heedless action); “with an inadvertent gesture she swept the vase off the table”; “accidental poisoning”; “an accidental shooting” [syn: accidental]
When you double click it with the foreknowledge that the effects will be harmful, you’re performing a malicious action.
That’s what I have been saying: that is a deliberate action. Knowing the consequences and doing something is deliberate; doing a deliberate action to harm someone is malicious.
Most users do not possess the skill to determine that an attachment is harmful, so they cannot have knowledge of the consequences. They may inadvertently open a harmful attachment, but not deliberately.
Is the message expected ?
I get 150+ emails a day at work; I can’t possibly be expecting every one of them.
Who is it from ?
Who is it sent to ?
Most email viruses are transmitted by someone you know who has been infected and sent to you.
What are the spelling and grammar like ?
How many grammar and spelling mistakes can something like “Hey check this out” have?
Does it have an attachment ?
What if it does?
What’s the name of the attachment ?
What if it is something like financials.ppt?
Is the message unsolicited ?
Most emails people receive are unsolicited. When a person sends an email to someone for the first time, it is unsolicited (unrequested).
An invitation to a party is unsolicited. It may contain a misspelled attachment with grammar and spelling mistakes, and be perfectly legitimate.
None of the questions you have asked reliably differentiate a legitimate email from a harmful one.
You confused yourself again. The original argument was that not until Win2k was Microsoft trying to merge the consumer and professional versions of their OSes.
Uh, no. That’s the first time that line of reasoning has been raised in this discussion.
You can keep running around the field with those goalposts if you want, but it’s getting boring.
id Software has been making software since the days of DOS for the consumer line of Microsoft’s business.
Which is currently Windows XP, the platform Doom 3 was targeted at.
They aren’t stupid.
I’ve no doubt. They might be lazy though.
MS never had clear guidelines and APIs.
False.
Sorry to disappoint you, but you will get a predictable yet different argument. Note the predictability here is that I am proven right once again by your own research.
Read those APIs carefully, especially this: IE 5.0 never shipped as part of Win95 or 98. IE 5.0 was released in 1999.
Maybe you need to read them a bit closer yourself:
“This function is a superset of SHGetSpecialFolderPath […]”
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sh…
“Windows 2000, Windows NT 4.0 with Internet Explorer 4.0, Windows 98, Windows 95 with Internet Explorer 4.0”
These are bolt-on hacks, in traditional Microsoft fashion, and incomplete ones at that.
IE4 was a substantial system update for Windows 95 and NT4, and integrated into Windows 98. It was about as much a “bolt on hack” as Linux 2.6 is a “bolt on hack” of 2.4.
As usual, Microsoft just can’t win with you. They get criticised for the (mythical) “forced upgrade cycle” and they also get criticised for releasing major system updates for free.
How are they forewarned? That is the whole point of contention here. What you are trying to say is that every harmful file jumps around and advertises that it is harmful, and if the user still clicks it they are being malicious.
I have said no such thing.
I have said a user whose software is exploited by a vulnerability participates in an inadvertent act.
I have said a user executing an attachment is a deliberate act.
I have said a user executing an attachment they know to be harmful is a malicious act.
You are the person insisting “deliberate” == “malicious”.
How? Boggles the mind that the definition of deliberate is posted a few posts above and you still can’t get it.
It boggles the mind that you are insisting your definition of deliberate is correct when it implicitly indicates it’s only possible to deliberately perform harmful actions.
That’s what I have been saying that is a deliberate acion. Knowing the consequences and doing something is deliberate, doing a deliberate action to harm someone is malicious.
You seem to be having a great deal of difficulty distinguishing between the “opening attachment” part and the “exploit” part.
I get 150+ emails a day at work; I can’t possibly be expecting every one of them.
Nor did I say you would be. I’d be surprised if you couldn’t eliminate upwards of 75% as expected though.
Most email viruses are transmitted by someone you know who has been infected and sent to you.
But many aren’t.
How many grammar and spelling mistakes can something like “Hey check this out” have?
Amazingly enough, quite a lot. Bad spelling and grammar are probably two of the biggest giveaways.
What if it does?
Then it deserves extra attention.
What if it is something like financials.ppt?
Would there be any reason someone would send you a file called “financials.ppt” ?
Most emails people receive are unsolicited.
I doubt that. Assuming they’ve got a semi-decent antispam system, that is.
When a person sends an email to someone for the first time, it is unsolicited (unrequested).
Most people only receive an email from someone “the first time” once, strangely enough.
An invitation to a party is unsolicited. It may contain a misspelled attachment with grammar and spelling mistakes, and be perfectly legitimate.
It might indeed.
None of the questions you have asked reliably differentiate a legitimate email from a harmful one.
Yet, amazingly enough, when people actually start doing those things they have fewer experiences with virus- and malware-carrying emails. They’re pretty handy for filtering out spam as well.
Nothing I’ve suggested is a silver bullet, nor have I claimed it as such. All they are is a set of guidelines to apply to help isolate malware-laden emails.
You, however, seem to think there’s some magical tool out there that makes computers secure, since you’ve done nothing but dismiss out of hand the numerous methods I have suggested (and successfully employed) for making computing safer. I’m curious: what software is it you have that can automatically, and with 100% accuracy, detect malicious code, discern deliberate user behaviour from accidental or inadvertent, identify whether or not an email is legitimate, etc.?
Uh, no. That’s the first time that line of reasoning has been raised in this discussion.
You can keep running around the field with those goalposts if you want, but it’s getting boring.
Go read it again. And yes I find you very boring.
It boggles the mind that you are insisting your definition of deliberate is correct when it implicitly indicates it’s only possible to deliberately perform harmful actions.
Hunh. Really? All I have said is that deliberate actions are those where a person thinks before he or she acts and is aware of the consequences.
By your definition every action is deliberate. If one intends to get up from a chair, it is a deliberate action. If the same person knocks over a vase, it is also a deliberate action, because he intended to get up.
But many aren’t.
You agree most are.
Amazingly enough, quite a lot. Bad spelling and grammar are probably two of the biggest giveaways.
Your assertion is that an email with grammar and spelling mistakes and an attachment must be a harmful one. I don’t know whether to laugh at or pity you for that line of reasoning.
Then it deserves extra attention.
I am asking you for the last time: WTF is that extra attention? So far you haven’t produced anything.
Would there be any reason someone would send you a file called “financials.ppt” ?
Why wouldn’t they? I work for a company; it is plausible.
Most people only receive an email from someone “the first time” once, strangely enough.
No. Unsolicited means unrequested: you did not solicit the email. Do you pick up the phone and call someone to tell them to email you? Or do they call you and request permission to send you an email?
Yet, amazingly enough, when people actually start doing those things they have fewer experiences with virus- and malware-carrying emails. They’re pretty handy for filtering out spam as well.
Do you have any solid data other than just your opinion?
Nothing I’ve suggested is a silver bullet, nor have I claimed it as such. All they are is a set of guidelines to apply to help isolate malware-laden emails.
Yes, you have. You said 90% of security problems are the fault of users. You went on to claim that users deliberately open harmful attachments even with forewarning.
Your guidelines can’t detect with even 10% accuracy whether a malicious file is disguised as a legitimate email. The To and From addresses can easily be spoofed by simply telnetting to port 25 on an SMTP server; the grammar and spelling can be impeccable, and so can the attachment’s spelling.
In fact, it can even be a script with a false name and extension. Your guidelines won’t do squat.
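The spoofing claim is easy to see in a raw SMTP session: both the envelope sender and the From: header are whatever text the client chooses to send. A sketch of such a session (the server name and addresses here are made up for illustration):

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO sender.example
250 mail.example.com
MAIL FROM:<ceo@victim.example>
250 OK
RCPT TO:<user@victim.example>
250 OK
DATA
354 End data with <CR><LF>.<CR><LF>
From: "The CEO" <ceo@victim.example>
Subject: Check this out
...
.
250 OK: queued
QUIT
```

Nothing in basic SMTP verifies that the MAIL FROM or From: addresses belong to the connecting client, which is why sender addresses alone prove little.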
Yet somehow, by magic, users are supposed to get an epiphany that a file is harmful, and that epiphany is their forewarning, according to you. And if a user opens said attachment, they are the problem, not the company that wrote the software with the hole. Great logic.
You, however, seem to think there’s some magical tool out there that makes computers secure, since you’ve done nothing but dismiss out of hand the numerous methods I have suggested (and successfully employed) for making computing safer.
No, I don’t. I am paranoid about security and take more stringent steps than you have outlined to protect my machines.
I’m curious, what software is it you have that can automatically – and with 100% accuracy – detect malicious code, discern deliberate user behaviour from accidental, discern deliberate user behaviour from inadvertent, identify whether or not an email is legitimate, etc ?
I don’t, and I can guarantee 100% that you don’t either. But you expect users who know nothing about computers and software to bear the blame for Microsoft’s sloppy standards, developers, and code.
You have said numerous times that users are to blame, but you have yet to convince me that you have the skills or technology to detect a malicious file or email. And you contend that users are the problem, not the idiot who put the bug in the code in the first place.
Go read it again. And yes I find you very boring.
The closest you come is this:
“Microsoft couldn’t do what they did in 2001 with XP 9-12 years ago, when it would have been easier, with less users than in 2001. Really!! ”
At no stage (before the post I replied to there) do you even *hint* you’re talking in the context of “trying to merge the consumer and professional versions of their OSes”.
Hunh. Really? All I have said is that deliberate actions are those where a person thinks before he or she acts and is aware of the consequences.
“Deliberate means you know something is harmful and you still execute it.”
“Note deliberate means, thinking about it and knowing something is wrong.”
By your *previous* definitions of deliberate (but conveniently not the one in this post) “deliberate” -> “harmful”.
By your definition every action is deliberate. If one intends to get up from a chair, it is a deliberate action.
Correct. You’re getting it.
If the same person knocks over a vase, it is also a deliberate action, because he intended to get up.
Ah, no you aren’t.
The deliberate action was getting up.
The inadvertent action was knocking over the vase.
You agree most are.
Depends on the environment. The profile of malware-laden mails can be dramatically different depending on the environment.
Your assertion is that an email with grammar and spelling mistakes and an attachment must be a harmful one.
No, it isn’t.
I am asking you for the last time: WTF is that extra attention? So far you haven’t produced anything.
Look. Think. Does the combination of attributes on a given message make it a likely candidate for being legitimate or not?
I’m amazed you can survive out in the real world given your apparent reluctance (or inability) to apply any sort of critical analysis to a situation.
Why wouldn’t they? I work for a company; it is plausible.
Would the building’s maintenance staff commonly be sending you a file called “financials.ppt” ?
Do you have any solid data other than just your opinion?
Experience. Something it’s plainly obvious you have very little of.
[Nothing I’ve suggested is a silver bullet, nor have I claimed it as such. All they are is a set of guidelines to apply to help isolate malware-laden emails.]
Yes you have.
No, I haven’t.
You said 90% of security problems are the fault of users.
They are. Note that being “at fault” does not mean having done something deliberately.
You went on to claim that users deliberately open harmful attachments even with forewarning.
I did not. Indeed, I specifically said on more than one occasion that a significant security problem was users who opened harmful attachments without knowing what the attachment would really do.
Your guidelines can’t detect with even 10% accuracy whether a malicious file is disguised as a legitimate email.
Maybe I can play your stupid game as well…
Do you have any evidence for that 10% figure ?
The To and From addresses can easily be spoofed by simply telnetting to port 25 on an SMTP server; the grammar and spelling can be impeccable, and so can the attachment’s spelling.
They certainly can be.
In fact, it can even be a script with a false name and extension. Your guidelines won’t do squat.
Suit yourself. The amount of spam you must read every day must be phenomenal, however, given you apparently don’t follow any procedures that might help you identify it.
Yet somehow, by magic, users are supposed to get an epiphany that a file is harmful, and that epiphany is their forewarning, according to you.
No.
And if a user opens said attachment, they are the problem, not the company that wrote the software with the hole. Great logic.
If a user runs an obviously illegitimate email attachment, then yes, it is their fault.
If a sysadmin has modified the default Outlook settings such that potentially dangerous attachments can be run *at all*, then it most certainly is the admin’s fault.
If an attachment exploits an Outlook bug that’s been fixed, then it’s the fault of the person who hadn’t applied the update.
If an attachment exploits an unfixed bug then it’s Microsoft’s fault.
No, I don’t. I am paranoid about security and take more stringent steps than you have outlined to protect my machines.
You’re “paranoid about security” but you don’t even take into consideration things like attachment filenames, email contents or the to and from addresses.
Right. Must be some definition of “paranoia” I’m unfamiliar with.
I don’t, and I can guarantee 100% that you don’t either.
I can also guarantee I don’t. Indeed, a rather large portion of my previous points has been explaining things that must be done *because* no such software exists.
But you expect users who know nothing about computers and software to bear the blame for Microsoft’s sloppy standards, developers, and code.
No, I don’t.
You have said numerous times that users are to blame, but you have yet to convince me that you have the skills or technology to detect a malicious file or email.
Nothing could convince you if that would mean even *hinting* that Microsoft was less than 100% culpable for every security incident their software has ever been involved in.
Also, I managed to miss this earlier:
Notice the common app data folder is Win2k-specific.
“CommonAppDataFolder
Full path to the file directory containing application data for all users. Exists on Windows 2000, Windows NT, Windows 98, and Windows 95.
CSIDL_COMMONAPPDATA”
Notice the Common Files folder. Surprise, it’s in Program Files.
So how come unix doesn’t suck when everything “just points to /” ?
Again, notice the lack of multi-user support in 95 and 98.
Funny, most of those documented locations seem to be defined for Windows 9x.
My point about Microsoft’s guidelines was spot on.
Which explains why they’re shown to exist by that (and many other) pages on MSDN…
Even your flawed notion that Win95 and Win98 had multi-user APIs was handily disproven by the above examples.
Because most of those locations exist in Windows 9x, right ?
I’m starting to get the hang of this. Basically if I take the exact opposite of what you say, I know what the truth is.
You don’t seem to understand that these were files that *were* supposed to be executed, according to the shell. That’s what you do with binaries – you execute them
I missed this before. MPEGs, JPEGs, and PPTs are all binary files; they are not executable. You need to get your terminology straight.
I’m amazed you can survive out in the real world given your apparent reluctance (or inability) to apply any sort of critical analysis to a situation.
I am thriving.
Would the building’s maintenace staff commonly be sending you a file called “financials.ppt” ?
First, the building’s maintenance staff would not have email access, because they are contracted out. My company’s strength is upwards of 28,000 employees. If I double-checked every email against the organizational chart to verify its authenticity, I would never get any real work done, now would I?
Let’s not beat a dead horse. Your guidelines do nothing to distinguish a real email from a harmful one. You seem to work for a tiny company, I for a large one. Your experiences cannot be similar to mine.
do you have any solid data other than just your opinion?
To use your own words.
Experience. Something it’s plainly obvious you have very little of.
They are. Note that being “at fault” does not mean have done something deliberately.
This is what you said, to which I responded. You are conveniently changing your story and even your data. I’ll show you more.
No, users are to blame for doing things like deliberately running malicious code (most email trojans/viruses) and not applying software patches (most worms).
You went on to claim that users deliberately open harmful attachments even with forewarning.
I did not. Indeed, I specifically said on more than one occasion that a significant security problem was users who opened harmful attachments without knowing what the attachment would really do.
Read your statement above. Read the definition of the word deliberate, and the problem with your statement should be obvious.
You are now changing your story.
Look. Think. Does the combination of attributes on a given message make it a likely candidate for being legitimate or not?
I did, and your guidelines for what is legitimate or not are useless and unreliable.
If a user runs an obviously illegitimate email attachment, then yes, it is their fault.
That’s what I said.
If a sysadmin has modified the default Outlook settings such that potentially dangerous attachments can be run *at all*, then it most certainly is the admin’s fault.
Now we are getting somewhere.
If an attachment exploits an Outlook bug that’s been fixed, then it’s the fault of the person who hadn’t applied the update.
Better.
If an attachment exploits an unfixed bug then it’s Microsoft’s fault.
Was that so hard? That set of statements makes sense. Your blanket claim that 90% of all security problems are caused by users is wrong.
You’re “paranoid about security” but you don’t even take into consideration things like attachment filenames, email contents or the to and from addresses.
I never said I don’t do it. I merely said most users are capable of doing it, so they can’t be at fault. I write kernel and device driver software for a living, most programmers in the world can’t.
But you expect users who know nothing about computers and software to bear the blame for Microsoft’s sloppy standards, developers, and code.
No, I don’t.
You said this earlier.
No, most users *won’t* just look at a file and say it would do harmful things.
This basically means that you are doubting the users’ intent. I am doubting their capabilities. Changing things again.
Nothing could convince you if that would mean even *hinting* that Microsoft was less than 100% culpable for every security incident their software has ever been involved in.
Yes. Let’s take another example. Cars have recalls, yes? Say a car has a faulty sensor which causes it to suddenly stall and stop while going at any speed (certain manufacturers have had this issue). If the car were to develop this fault mid-drive and cause the driver to get into an accident, whose fault would it be?
The driver’s or the car company’s? Microsoft’s software is chugging along all day, week, year; then suddenly one day doing what you normally do causes all hell to break loose. Whose fault is it?
Notice the common app data folder is Win2k-specific.
“CommonAppDataFolder
Full path to the file directory containing application data for all users. Exists on Windows 2000, Windows NT, Windows 98, and Windows 95.
CSIDL_COMMONAPPDATA”
WTF, load the page again and look. You are lying and changing things again.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ms…
CommonAppDataFolder
Full path to the file directory containing application data for all users. Exists on Windows 2000.
CSIDL_COMMON_APPDATA
Platform SDK: Windows Installer
Clicking the link gives this……….
CommonAppDataFolder Property
The CommonAppDataFolder property is the full path to the file directory containing application data for all users. A common path is C:\WINNT\Profiles\All Users\Application Data.
Remarks
It doesn’t exist on anything prior to Win2k. Microsoft’s page says so.
Oh, I see it. You took the description for AppDataFolder and stuck it under CommonAppDataFolder.
And you are telling me that you can spot a malicious email attachment. Please. You can’t even read a webpage correctly.
So how come unix doesn’t suck when everything “just points to /” ?
What points to /? You do realize that there are other directories and even filesystems under /, don’t you? /home can be a completely different filesystem from the root filesystem; it can even be mounted from across the world, with any permissions and ACLs, using NFSv4.
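The point that /home may live on a different filesystem than / can be checked directly: the st_dev field of a stat result identifies the device backing a path. A minimal sketch, assuming a Unix-like system:

```python
import os

def same_filesystem(a: str, b: str) -> bool:
    # Two paths share a filesystem exactly when stat() reports
    # the same underlying device ID for both of them.
    return os.stat(a).st_dev == os.stat(b).st_dev

print(same_filesystem("/", "/usr"))   # True on a single-partition install
# On a box with /home mounted separately (e.g. over NFSv4) this is False:
# print(same_filesystem("/", "/home"))
```

This is the same device-boundary test tools like `find -xdev` and `du -x` use to avoid crossing mount points.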
Funny, most of those documented locations seem to be defined for Windows 9x.
You are blind and obviously stupid. Did you look at where those locations were?
I’m starting to get the hang of this. Basically if I take the exact opposite of what you say, I know what the truth is.
You must have been looking into a mirror and having one of your episodes. I have proved that you are lying more than once in this post. Just look at CommonAppDataFolder; the link you pointed to says something completely different from what you posted.
I never said I don’t do it. I merely said most users are capable of doing it, so they can’t be at fault. I write kernel and device driver software for a living, most programmers in the world can’t.
I never said I don’t do it. I merely said most users are incapable of doing it, so they can’t be at fault. I write kernel and device driver software for a living, most programmers in the world can’t.
You must have been looking into a mirror and having one of your episodes. I have proved that you are lying more than once in this post. Just look at CommonAppDataFolder; the link you pointed to says something completely different from what you posted.
Oh yes, don’t claim you made a mistake. You deliberately posted wrong information; the fact that you clicked the link and posted the information means you did it deliberately.
You are at fault.
I missed this before. MPEGs, JPEGs, and PPTs are all binary files; they are not executable. You need to get your terminology straight.
Stop nitpicking. “Execute” in this context means “passed off to their associated program”.
First, the building’s maintenance staff would not have email access, because they are contracted out. My company’s strength is upwards of 28,000 employees. If I double-checked every email against the organizational chart to verify its authenticity, I would never get any real work done, now would I?
Try thinking outside the square for a change.
Let’s not beat a dead horse. Your guidelines do nothing to distinguish a real email from a harmful one. You seem to work for a tiny company, I for a large one. Your experiences cannot be similar to mine.
I’ve also worked in large environments.
[No, users are to blame for doing things like deliberately running malicious code (most email trojans/viruses) and not applying software patches (most worms).]
You went on to claim that users deliberately open harmful attachments even with forewarning.
[I did not. Indeed, I specifically said on more than one occasion that a significant security problem was users who opened harmful attachments without knowing what the attachment would really do.]
Read your statement above. Read the definition of the word deliberate, and the problem with your statement should be obvious.
The deliberate action is opening the file attachment. Regardless of whether or not the user knew about the impact of said attachment, they still opened it deliberately and are still at fault (or the admin who changed the default attachment rules to allow them to open it was at fault).
Note that this does not mean they acted maliciously.
Simple test: take the users (or, more accurately, the users’ ability to open any attachments) out of the equation. Also assume the software is kept up to date with any relevant patches.
Now, with the “user” aspect almost completely removed from the equation, how many successful email-borne exploits do you think there would be ?
I did, and your guidelines for what is legitimate or not are useless and unreliable.
Yet are successfully employed by millions of people every day.
I am curious, however, since you apparently don’t bother taking into consideration who sent an email, who it was sent to, what is in it, the attributes of any attachments or whether or not it was expected and the combination of all these features…
Just how *do* you identify spam and potentially hostile emails ?
[if a user runs an obviously illegitimate email attachment, then yes, it is their fault.]
That’s what I said.
No, you said it was Microsoft’s fault for allowing the user to do it at all.
Was that so hard? That set of statements makes sense. Your blanket claim that 90% of all security problems are caused by users is wrong.
Three of the four things I posted are users’ responsibilities. Outlook patches fixing bugs that allow code execution without user interaction are fairly low in number. What reasoning do you have to support your claim?
I never said I don’t do it.
Funny, you keep criticising me for suggesting other people do it…
I write kernel and device driver software for a living, most programmers in the world can’t.
Which, I imagine, gives you a great deal of access to, and deep insight into the mind of, the “average user”.
[No, most users *won’t* just look at a file and say it would do harmful things.]
This basically means that you are doubting the users’ intent. I am doubting their capabilities. Changing things again.
No, I say most users won’t just look at a file because most users tend to switch off any thinking skills they might have had when they sit in front of a computer. That’s why there are helpdesk calls like “my computer won’t work” that have resolutions like “turn your monitor on”.
People are scared of computers. That fear stops them from even *trying* to figure them out.
Yes. Let’s take another example. Cars have recalls, yes. Say a car has a faulty sensor which causes it to suddenly stall and stop while travelling at any speed (certain manufacturers have had this issue). If the car were to develop this fault mid-drive and cause the driver to get into an accident, whose fault would it be?
The driver’s, or the car company’s? Microsoft’s software is chugging along all day, all week, all year; then suddenly one day, doing what you normally do causes all hell to break loose. Whose fault is it?
Well, that would depend on whether or not a recall has been issued and the driver has ignored it.
WTF, load the page again and look. You are lying and changing things again.
3rd item from the bottom.
What points to /? […]
By your logic:
“Most of these places still point to Program Files or the C:\Windows […]”
“Notice the common files folder, Surprise it’s in Program Files.”
(When in fact Common Files is typically a subdirectory of Program Files)
Everything that’s a subdirectory of / is really “in /”.
[…] you do realize that there are other directories and even filesystems under /, don’t you?
I do. However, since you seem to think C:\Program Files\Common Files is the same as C:\Program Files, it would appear you don’t.
You are blind and obviously stupid. Did you look at where those locations were?
“Common path” != “Always”.
User profiles in Windows 9x were a user-enabled option. If they weren’t enabled, everyone got the same My Documents, Start Menu, etc. If they *were* enabled, they got user-specific My Documents, Start Menu, etc. That’s why these APIs exist, so developers don’t need to (wrongly) hard code file paths into their software.
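The point about looking folders up instead of hard-coding them can be sketched in a few lines. On real Windows code you would call the shell API (e.g. SHGetFolderPath/SHGetKnownFolderPath); the Python sketch below only illustrates the principle using environment variables, and the mapping table and fallback paths are hypothetical, purely for illustration:

```python
import os

def special_folder(name, env=None):
    """Resolve a per-user special folder from the environment instead of
    hard-coding a path. The same logical folder can live in different
    places depending on whether user profiles are enabled, which is
    exactly why the lookup (not the literal path) belongs in the code."""
    env = os.environ if env is None else env
    # Hypothetical mapping, for illustration only:
    # folder name -> (env var to consult, shared fallback when profiles are off)
    mapping = {
        "AppData": ("APPDATA", r"C:\Windows\Application Data"),
        "Personal": ("USERPROFILE", r"C:\My Documents"),
    }
    var, fallback = mapping[name]
    base = env.get(var)
    if base is None:
        return fallback  # profiles disabled: everyone shares one folder
    if name == "Personal":
        return base + r"\My Documents"
    return base

# Per-user profiles enabled: each user gets their own folder.
print(special_folder("Personal", {"USERPROFILE": r"C:\Users\alice"}))
# Profiles disabled: the shared fallback location.
print(special_folder("Personal", {}))
```

A program that hard-codes `C:\My Documents` works on exactly one of those two configurations; a program that asks the system works on both.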
I am curious, however, since you apparently don’t bother taking into consideration who sent an email, who it was sent to, what is in it, the attributes of any attachments or whether or not it was expected and the combination of all these features…
Just how *do* you identify spam and potentially hostile emails?
I never said I don’t do these. But doing these doesn’t guarantee anything. I can also do it because I know how this stuff works; 90% of the world doesn’t.
If everyone was at the same skill level as you, me and the readership of OSNews, everyone would be running a variety of OSes, writing code and debugging stuff for fun. Microsoft wouldn’t be a dominant player in the market.
Now, with the “user” aspect almost completely removed from the equation, how many successful email-borne exploits do you think there would be?
Why not take email out of the equation and leave the users there? The conclusion would still be the same: eliminating either one solves the problem.
Or to refine it more, take the fact that an email could do any harm out of the equation, and the user could click away till they die of exhaustion and still not cause harm.
So you see, we should try to fix the problems that are easy. Fixing the email-being-harmful scenario is easier than educating every user on the planet about computers. Now you know why Microsoft is to blame.
Don’t get me wrong: any piece of software will have numerous bugs. But I would rather tackle the problem and think of innovative ways to fix it, not blame users and try to educate everyone on the planet.
Funny, you keep criticising me for suggesting other people do it…
Suggesting is one thing. But calling people stupid and blaming them for someone else’s faults, all because they don’t possess a skill, is not right.
Which, I imagine, gives you a great deal of access to, and deep insight into the mind of, the “average user”.
I used to be the technical assistant in the computer labs while I was in college, and 3 years of dealing with users, security issues, Wintel machines, Linux machines, HP-UX boxes, Sun workstations and quite a lot of Windows NT issues gives me that deep insight.
Everything that’s a subdirectory of / is really “in /”.
In Unix? Not always, no.
This Slashdot post illustrates what I have been trying to get at with this rat-hole, semantics-based discussion.
I will play devil’s advocate.
Bingo: the problem isn’t Windows, it’s Windows users.
Really, this stance strikes me as the antithesis of the problem. It is programmers who bear the blame here. I’m not singling out Microsoft programmers (despite the large and tempting target they present). I’m talking about most people who write system software or applications for general use.
Here on slashdot, we are predominantly geeks. We enjoy technology and learning about technology. In some cases, a large minority of us mistake our interests in these as evidence that these activities are somehow inherently important. Those who do so gain certain psychological and social pleasure from this knowledge and interest. This is part of being human. We consider ourselves special and important.
Computers and software are marketed to and used by the general public. People, being people, think that their own interests and knowledge are important. Learning about hardware/software/security, etc. is not interesting to them, therefore the fact that they tend not to spend time doing so should come as no great surprise. Geeks tend to see this lack of interest as evidence of a problem (and at times as an affront to their own sense of self worth). This seems a rather shallow and unproductive view. Human beings focus on those things that interest them. Pleading with them to attend to things we think are important, or looking down on them for this lack of interest, is a fruitless path.
The problem is not users. The problem is that we have created hardware and software which does not adequately match the needs of the users. Software should match the requirements of its users, not require them to change their typical behaviors to meet the needs of the software.
Some people are destructive and malicious. Well designed software takes this into account, and provides authorized users with reasonable protection from those who would try to harm them. Well designed software behaves in consistent and predictable ways so that users of varying levels of experience, knowledge or interest can benefit from its use.
Software should be designed for the people who will use it. Most programs suck because they are designed for a particular business goal, or designed by geeks based on their own knowledge of how they would like to use them. It is no wonder that most software leaves the average person cold. It is arcane, inconsistent, and requires too much knowledge. Users are not stupid. They are not lacking in intelligence or ability. They are lacking a sense of enjoyment and sufficient interest to use software the way the geek designers intend.
Great software takes its users’ interests and expectations into account.
Great developers strive to understand users and write software which serves them.
So, we are the problem, not the users. Blaming people for their own human nature is not the way to go here. Projecting our own failures of understanding onto the users is a misguided attempt to pass the buck.
I never said I don’t do these.
Yet you consistently insist they’re worthless.
But doing these doesn’t guarantee anything. I can also do it because I know how this stuff works; 90% of the world doesn’t.
I never said it was a guarantee.
Or to refine it more, take the fact that an email could do any harm out of the equation, and the user could click away till they die of exhaustion and still not cause harm.
Fantastic idea. How are you planning on programmatically identifying “harm”?
So you see, we should try to fix the problems that are easy. Fixing the email-being-harmful scenario is easier than educating every user on the planet about computers. Now you know why Microsoft is to blame.
Because they started blocking harmful and directly executable attachments by default, oh, about 5 years ago now?
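The default-deny attachment blocking being referred to amounts to a very simple check. Here is a minimal sketch in that spirit; the extension list is a small illustrative subset, not Outlook’s actual Level 1 list:

```python
# Default-deny filtering of directly executable attachments.
# The set below is an illustrative subset, not the real blocked list.
BLOCKED_EXTENSIONS = {".exe", ".com", ".pif", ".scr", ".vbs", ".js", ".bat", ".cmd"}

def is_blocked(filename: str) -> bool:
    """Return True if the attachment should be withheld from the user."""
    name = filename.lower().rstrip(". ")  # defeat "evil.exe." style tricks
    dot = name.rfind(".")
    # Only the final extension decides what the OS will execute.
    return dot != -1 and name[dot:] in BLOCKED_EXTENSIONS

print(is_blocked("report.doc"))     # False
print(is_blocked("photo.jpg.scr"))  # True: the final extension wins
```

The double-extension case is the important one: users see “photo.jpg” and click, but the operating system runs it as a screensaver executable, which is why the filter keys off the last extension only.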
Don’t get me wrong: any piece of software will have numerous bugs. But I would rather tackle the problem and think of innovative ways to fix it, not blame users and try to educate everyone on the planet.
You still haven’t explained how you are planning to make programs both useful *and* non-harmful.
Suggesting is one thing. But calling people stupid and blaming them for someone else’s faults, all because they don’t possess a skill, is not right.
I didn’t call them all stupid. Ignorant != stupid.
The thing is, people *do* “possess a skill” – the *problem* is that they don’t exercise it.
Identifying most illegitimate communication isn’t particularly hard. The average adult has been doing it for decades with junk mail, get-rich-quick schemes, false advertising, street hawkers, etc. The problem is most of them refuse to apply those same heuristics to email (and just about anything else they do via a computer). As I said, for most people, sitting in front of a computer seems to have a prerequisite of complete disengagement of common sense.
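Those everyday heuristics can even be written down as a toy score: who sent it, whether it was expected, and what it carries. The weights and threshold below are invented purely for illustration; real filters are far more elaborate:

```python
# A toy scoring sketch of the junk-mail heuristics people already use
# offline. All weights and the threshold are made up for illustration.
def suspicion_score(sender_known, was_expected,
                    has_executable_attachment, promises_money):
    score = 0
    if not sender_known:
        score += 2  # strangers start with less trust
    if not was_expected:
        score += 1  # unsolicited mail is more suspect
    if has_executable_attachment:
        score += 3  # the classic trojan vector
    if promises_money:
        score += 2  # the offline "get rich quick" heuristic
    return score

def looks_hostile(**kwargs):
    return suspicion_score(**kwargs) >= 4

# An unexpected executable from a stranger scores 2 + 1 + 3 = 6:
print(looks_hostile(sender_known=False, was_expected=False,
                    has_executable_attachment=True, promises_money=False))
```

No single signal condemns a message; it is the combination (exactly as argued above) that tips an email from “odd” to “hostile”.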
In Unix? Not always, no.
Nor in Windows. Nor does that directory *have* to be under %PROGRAMFILES%.
Bingo: the problem isn’t Windows, it’s Windows users.
I agree with pretty much everything you’ve said. However, the major hiccup – and the issue you haven’t covered – is that the difference between completely normal, expected, desired behaviour and unexpected, unwanted, destructive behaviour is almost completely a matter of context.
We do not yet have the technology – or even the *theory*, AFAIK – to be able to programmatically determine whether or not some arbitrary operation (eg: deleting a file) is performed by a user with the full knowledge of its impact and consequences. To put it bluntly, we are not yet in a position to be able to do this:
“Software should match the requirements of its users, not require them to change their typical behaviors to meet the needs of the software.”
Software isn’t – and IMHO isn’t likely to be anytime soon – advanced enough to be transparent. I live for the day it happens, but until that day arrives, the burden remains on *people* to learn how to use the tools at hand and not assume the machine knows what they *want* to do, rather than what they *did* do. As with most things in life, it’s pretty much impossible to make a computer that can’t be misused.
Again, this is hardly something unique to computers. If you’re driving down the highway at 150km/h, there’s no way for your car to know whether or not that is dangerous – that distinction is the *job* of the “end user”, ie: the driver.
Let’s face it: Firefox has a very different design. It sandboxes incoming content and does not allow pages to take control (no ActiveX). It is much more secure, for a lot of reasons.
Those “experts” who say Firefox will be the next target are not geniuses for saying so; my mom told me the same thing some weeks ago, so it’s nothing new.
If someone uses Firefox to tell a user “click here to win a car” and they click and install everything the page says, sorry, it’s not Firefox’s failure, it’s the user’s.
Some people should stop talking and work on the code they type… if they write any at all.