

Microsoft in 2010, on Google fully disclosing after a few days: "... advocate for responsible disclosure is that the software vendor who wrote the code is in the best position to fully understand the root cause. While this was a good find by the Google researcher, it turns out that the analysis is incomplete and the actual workaround Google suggested is easily circumvented. In some cases, more time is required for a comprehensive update that cannot be bypassed, and does not cause quality problems."
Full disclosure time windows are a complicated matter and often things are not that cut and dried. I do agree with full disclosure, I'm just not sure how much time should pass before a disclosure is made.
As usual, you vastly oversimplify a complicated matter.
There are a lot of variables involved in software engineering, and any one change can affect the various hardware configurations running on that platform, especially for something as important as, say, Windows.
What one person considers a fix might break something else, and cause major quality headaches down the road.
How do you deal with that? Would you appreciate a Windows Update screwing up your install? It'd be a disaster.
You can be advised via partial disclosure of a flaw and act accordingly. There is full disclosure, and then there's being unreasonable.
There are potentially millions at risk, not something to be taken lightly.
Regarding security fixes, I would have spontaneously assumed that a company the size of Microsoft would have boatloads of automated regression tests in place to ensure that a security patch is unlikely to break a customer's machine (unless they are using code that binds to undocumented APIs or crap like that). Isn't that the case?
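For what it's worth, here is a minimal sketch of what one such automated regression check might look like, in C, with everything invented for illustration: a hypothetical patched routine is_safe_path() hardened against directory traversal, plus a table of cases covering both the old legitimate inputs and the exploit input.

/* Minimal sketch of a table-driven regression test for a security patch.
 * is_safe_path() stands in for whatever routine was hardened; the cases
 * record both pre-patch legitimate inputs and the exploit input. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical patched routine: reject any path containing "..". */
static int is_safe_path(const char *path)
{
    return strstr(path, "..") == NULL;
}

int main(void)
{
    static const struct { const char *input; int expected; } cases[] = {
        { "docs/readme.txt",       1 },  /* legitimate input must still pass */
        { "images/logo.png",       1 },
        { "../../etc/passwd",      0 },  /* the exploit input must now fail */
        { "docs/../../etc/passwd", 0 },
    };

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++)
        assert(is_safe_path(cases[i].input) == cases[i].expected);

    puts("all regression cases passed");
    return 0;
}

The legitimate cases guard against the patch breaking customers, while the exploit case guards against the patch being incomplete.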
I work on a fairly small code base; if there is a bug, it can take weeks before it goes through the QA process and I get the go-ahead to release.
That's not taking into account my own time ... and when I can be put on task for it.
Then maybe there is something wrong with the whole process. I'd say: hold companies accountable starting 7 days after they've been notified. Let good old capitalism take care of this. You'll be surprised how quickly the process adapts towards better security (fixing and prevention).
Sometimes there is no quick fix or it isn't easily identifiable.
Everyone assumes this fantasy scenario where things can be fixed instantly by a bit of heroic coding.
In corporations you don't just throw a patch in and hope it sticks. These longer processes are in place for reasons ... most of them legal.
All too often there is no quick fix due to: 1) a lack of testing before release, 2) negligence, 3) too much bureaucracy.
Exactly, so use the 'legal' argument to alter these processes. If it costs money, too bad. When a household appliance malfunctions, the manufacturer is held accountable as well. It's called a warranty, and it lasts at least two years in Europe. From europa.eu: "If a product cannot be repaired or replaced within a reasonable time or without inconvenience, you may request a refund or price reduction." Most companies seem to have a policy of about two weeks (and that includes returning and reshipping, which are not applicable to software).
Those longer processes are in place for one reason only: to save money. And they save money because companies are not held accountable for the downsides of those processes (i.e., long times until security issues get fixed). So make it cost those corporations money for willfully putting their customers at risk longer than necessary, and they'll change their priorities.
By altering the market conditions a bit, the market will (perhaps slowly, but steadily) optimise itself for the new conditions: those who fail to invest in security will disappear, those with good security practices will be rewarded, and their "processes" will be copied and optimised further.
Long processes are there to stop these sorts of mistakes from happening in the first place, or from making the situation even worse.
Except I do work in the software industry and I've seen both sides. And you sound like you're suffering from serious tunnel vision, probably because it's always been that way for you and it's become rather hard to think outside your cubicle.
Big companies have these long processes to prevent their army of brainless code monkeys from screwing up because they're too cheap to invest in proper development. So yes, they're entirely to blame when their customers' systems get compromised as a result of those long processes. This is just a way of shifting costs that's rather unique to the software industry.
Like I said, other industries have to do a refund, a replacement or a recall when security issues are discovered, and they manage perfectly fine with their own "long processes to stop these sorts of mistakes from happening".
A bug is not the same as a critical security vulnerability. If you lump them together, then it's you who has no clue.
Security vulnerabilities have high priority and, just like bugs, are classified as Minor, Moderate, Major and Critical.
I've had to patch a few critical security vulnerabilities. The total response time for them ranged from 8 to 72 hours, including QA. A week to patch, or even to put out an advisory, is exceptionally generous.
Since we are talking about software, most would consider it a software defect, which is more commonly known as a bug. Sorry, but you are being a pedantic dick-piece.
"I've had to patch a few critical security vulnerabilities. The total response time for them ranged from 8 to 72 hours, including QA. A week to patch, or even to put out an advisory, is exceptionally generous."
But you still have to go through a change management process.
Also, you make no mention of whether you actually created the patch or deployed it, or of how complex it was.
For example, fixing an SQL injection vulnerability is relatively easy compared to something like patching a vulnerability in some critical part of the OS.
I can claim to have fixed critical security vulnerabilities when all I really did was change a particular procedure to use parameterised queries and a SPROC.
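For the record, the kind of change being described really is small. Below is a minimal before/after sketch, using the SQLite C API purely as an illustration; the table, column and function names are invented, and the stored-procedure part of such a fix is database-specific and omitted.

/* Sketch of swapping string-built SQL for a parameterised query, using the
 * SQLite C API purely as an illustration. Table/column names are invented. */
#include <sqlite3.h>
#include <stdio.h>

/* Vulnerable: attacker-controlled 'username' is pasted into the SQL text,
 * so input like  ' OR '1'='1  changes the query itself. */
int find_user_unsafe(sqlite3 *db, const char *username)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s';", username);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Fixed: the query text is constant and the value is bound as data,
 * so it can never be reinterpreted as SQL. */
int find_user_safe(sqlite3 *db, const char *username)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
        "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("matched id %d\n", sqlite3_column_int(stmt, 0));

    return sqlite3_finalize(stmt);
}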
No. A bug would be like a broken design for the car radio. A security vulnerability is like a broken design for the brake system. The former gets fixed at the garage, the latter gets recalled and costs a lot of money to the manufacturer. Ask Toyota how that went, even though ultimately they may not have been at fault.
Also, name calling only decreases any credibility you had left.
The classic OSNews pile-on. Why am I not surprised? Anyway, the differences are well known, and completely irrelevant.
It's obvious what he meant, and nitpicking aside, his point still stands. Whereas you and JAlexoid have spent time splitting semantic hairs, neither of you has addressed the real concerns that he raised.
That is - for a fact - not true. Design flaws are not bugs. A lot of security vulnerabilities are and were not bugs, but perfectly correct implementations of designs and requirements.
And I just hope that you don't work on any of the software that stores my private information...
How about all three steps, on multiple occasions, and none of them were SQL injection.
And since when does anyone give a f**k about complexity when it comes to critical vulnerabilities?
"That is - for a fact - not true. Design flaws are not bugs. A lot of security vulnerabilities are and were not bugs, but perfectly correct implementations of designs and requirements."
The mistake you made is in assuming that you're both talking about the same classification of "bug". He obviously used the word questionably, and you called him out on it. It is, though, even more obvious that he didn't mean a run-of-the-mill bug or software defect, but a very real, show-stopping critical vulnerability.
So your going on about the differences between a bug and a vulnerability is an example of pedantry. It's nice that you know the difference, as I'm sure a lot of us do, but it's superfluous to this discussion.
"And since when does anyone give a f**k about complexity when it comes to critical vulnerabilities?"
Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don't necessarily take a constant amount of time. More complex code takes longer to evaluate, especially when it runs on an increasingly complicated array of software.
The oversimplification of this entire thing is what I think Lucas is getting at, and it's disgusting. People here think that software engineering runs on pixie dust and good feelings. There are actual people working on these projects and it takes actual time to get a fix out the door in a responsible manner.
It's great that you have had a situation where you got a fix out in a relatively short amount of time, but I hardly think your experience is necessarily universal.
1) Most security vulnerabilities are implementation-based (a la SQL injections and buffer overflows). They do not alter the external interface at all (see the sketch below). Any business that delays those patches either has a shitty update process or simply has shitty QA.
2) Design vulnerabilities should cost you money. I don't see why the software industry should get a free pass, whereas any other industry is responsible for recalls and repairs within a reasonable amount of time (during the warranty) - or else it's a free replacement or refund.
Simply because your company is incompetent at handling critical vulnerabilities does not mean other companies are. I think punishing those incompetent companies will reward those that do care. And to be honest, I doubt the former are incompetent; they're mostly just negligent, as they care more about their wallets than their customers.
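As a sketch of what "implementation-based, interface unchanged" in point 1 means in practice (all names here are invented; this is only an illustration, not any vendor's actual patch):

/* Sketch of an implementation-level fix: the function's external interface
 * (its signature and documented behaviour) is identical before and after;
 * only the body changes. Names are invented for illustration. */
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 32

/* Before: classic overflow -- strcpy() happily writes past 'name'
 * if the caller supplies more than NAME_MAX_LEN - 1 bytes. */
void set_display_name_vulnerable(char name[NAME_MAX_LEN], const char *input)
{
    strcpy(name, input);
}

/* After: same signature, but the copy is bounded and always terminated. */
void set_display_name_fixed(char name[NAME_MAX_LEN], const char *input)
{
    strncpy(name, input, NAME_MAX_LEN - 1);
    name[NAME_MAX_LEN - 1] = '\0';
}

int main(void)
{
    char name[NAME_MAX_LEN];
    /* Callers are untouched: they keep calling the function the same way. */
    set_display_name_fixed(name, "a deliberately very long attacker-controlled string");
    printf("%s\n", name);
    return 0;
}

Since the signature is untouched, callers need no changes, which is the point being made about the external interface.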
No. There is a process and urgency difference between a regular bug, a critical bug and a critical security vulnerability. This is at the heart of the issue.
I'm happy for you if you develop software that does not store critical data, but that does not mean that others aren't under serious threat from these hushed-up-for-60-days, "we'll get to it" vulnerabilities. I personally have seen "big boys" jump through burning hoops to get fixes and workarounds out (like Microsoft did with quite a few patches for Telia's Exchange servers within 8 hours, IBM for StoraEnso's Websphere Portal in 4 hours, or Oracle for Vodafone).
Seriously... Why would you ignore the word critical there? When it's critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix - PERIOD.
Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you're in a critical condition no one cares.
No. I had to drop all my work and actually work non-stop till the issue was resolved, a few times. SLAs are there for a reason, and in the industries I have worked in they carry hefty fines.
It's not my problem that most companies are really bad at protecting their customers.
I can 100% guarantee that you will be using something with a vulnerability in it.

The point is that people affected by a 0-day should know ASAP.
Some other news outlets erroneously reported something along the lines of "they better have a fix in 7 days or else". Mitigation should be possible, if not by the vendor then at least by the customer(s).
That 7-day window is already too large, because I have the feeling that once a 0-day is uncovered and reported, the people who could do harm already know about it.
I hope there are no people in the OSNews crowd who believe that blackhats get their exploit info from reading CVEs.
"That 7-day window is already too large, because I have the feeling that once a 0-day is uncovered and reported, the people who could do harm already know about it."
Have BlackHats traditionally independently discovered and exploited the same 0-day a WhiteHat disclosed? I don't doubt they have the skill to discover an exploit; I'm just not certain they'd be one and the same.
"Have BlackHats traditionally independently discovered and exploited the same 0-day a WhiteHat disclosed? I don't doubt they have the skill to discover an exploit; I'm just not certain they'd be one and the same."
Sometimes vulnerabilities are found that black hats haven't discovered themselves. Often, vulnerabilities are found that black hats were already aware of (and often already exploiting).
So it's better to assume that an exploit is already in common use and have full disclosure early on (thus allowing critical systems to get additional protections where necessary) than to keep things secret until patches finally trickle their way downstream, in the hope that the white hats were lucky enough to find the vulnerability first. The former is security in practice, the latter is security through obscurity.
So that was the relevant part of your post. The quote from Microsoft, which you don't even comment on, was there purely to confuse readers, I assume. Have you ever considered writing for a tabloid? Even Andrew Orlinski could learn a thing or two from you.
http://tech.slashdot.org/story/13/06/01/120204/questioning-googles-...
It is important to emphasize that the 7-day policy applies only to unpatched and actively exploited vulnerabilities. This means the vulnerability is already known to criminals. So the possible negative effects of disclosure are mostly limited to bad PR for the vendor and maybe increased script-kiddie activity.
If we are to believe that article, 95% of the world's software companies are run by, and employ, only incompetent buffoons. Granted, we all know that "enterprise software" is just another name for "software so crap that only corporate purchasing will buy it", but 95% is probably too high. Maybe 70%.
Seriously though, if a company can't get a fix, or at least an advisory with a workaround, out in 7 days, they deserve to be out of business.
Then you release a hotfix along with your advisory, and your customers have to test whether their Windows 3.0 software still works with that fix before applying it to production systems.