Google is changing its disclosure policy for zero-day exploits – both in its own software and in that of others – from 60 days to 7 days. “Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management.” I support this 100%. It will force notoriously slow-responding companies – let’s not mention any names – to be quicker about helping their customers. Google often uncovers vulnerabilities in other people’s software (e.g. half of the fixes in some Microsoft ‘Patch Tuesday’ releases address vulnerabilities uncovered by Google), so this could have a big impact.
Microsoft in 2010 on Google fully disclosing after a few days:
Full disclosure time windows are a complicated matter, and often things are not that cut and dried. I do agree with full disclosure; I’m just not sure how much time should pass before a disclosure is made.
If I’m using something that has a vulnerability in it that’s serious, I want to know so that I can stop using said software, disable the feature in question, or apply a workaround.
It’s not my problem that most companies are really bad at protecting their customers.
As usual, you vastly oversimplify a complicated matter.
There are a lot of variables involved in software engineering, and any one change can affect various hardware configurations running on that platform, especially on something as important as, say, Windows.
What one person considers a fix might break something else, and cause major quality headaches down the road.
How do you deal with that? Would you appreciate a Windows Update screwing up your install? It’d be a disaster.
You can be advised via partial disclosure of a flaw and act accordingly. There is full disclosure, then there’s being unreasonable.
There are potentially millions at risk, not something to be taken lightly.
Regarding security fixes, I would have assumed that a company the size of Microsoft has boatloads of automated regression tests in place to ensure that a security patch won’t break a customer’s machine (unless they are using code that binds to undocumented APIs or crap like that). Isn’t that the case?
Yes, I assume so too, which is likely why a proper fix takes time to develop and thoroughly assess. There are also obviously things not yet covered by tests, so identifying the root cause of the issue can lead to a more robust fix.
You have no idea, do you?
I work on a fairly small code-base; if there is a bug, it can take weeks before it goes through the QA process and I get the go-ahead to release.
This is not taking into account my own time … and when I can be put on task for it.
Then maybe there is something wrong with the whole process. I’d say: hold companies accountable starting 7 days after they’ve been notified. Let good old capitalism take care of this. You’ll be surprised how quickly the process adapts towards better security (fixing and prevention).
Sometimes there is no quick fix or it isn’t easily identifiable.
Everyone assumes this fantasy scenario where things can be fixed instantly by a bit of heroic coding.
In corporations you don’t just throw a patch in and hope it sticks. These longer processes are in place for a reason … most of them legal.
All too often there is no quick fix due to: 1) a lack of testing before release, 2) negligence, 3) too much bureaucracy.
Exactly, so use the ‘legal’ argument to alter these processes. If it costs money, too bad. When a household appliance malfunctions, the manufacturer is held accountable as well. It’s called a warranty, and it lasts at least two years in Europe. From europa.eu: “If a product cannot be repaired or replaced within a reasonable time or without inconvenience, you may request a refund or price reduction.” Most companies seem to have a policy of about two weeks (and that includes returning and reshipping, which are not applicable to software).
Those longer processes are in place for one reason only: to save money. And they save money only because companies are not held accountable for the downsides of those processes (i.e. the long time until security issues get fixed). So make it cost those corporations money for willfully putting their customers at risk longer than necessary and they’ll change their priorities.
By altering the market conditions a bit, it will (perhaps slowly, but steadily) optimise itself for these new conditions: those who fail to invest in security will disappear, those with good security practices will be rewarded and their “processes” will be copied and optimised further.
This sounds like the usual nonsense from someone who doesn’t work in the software industry.
Long processes are there to stop these sorts of mistakes from happening in the first place, or from making the situation even worse.
Except I do work in the software industry and I’ve seen both sides. And you sound like you’re suffering from serious tunnel vision, probably because it’s always been that way for you and it’s become rather hard to think outside your cubicle.
Big companies have these long processes to prevent their army of brainless code monkeys from screwing up because they’re too cheap to invest in proper development. So yes, they’re entirely to blame when their customers’ systems get compromised as a result of those long processes. This is just a way of shifting costs that’s rather unique to the software industry.
Like I said, other industries have to do a refund, a replacement or a recall when security issues are discovered, and they manage perfectly fine with their own “long processes to stop these sorts of mistakes from happening”.
Sorry, but that simply isn’t true. Firstly, I don’t work in a cubicle. Secondly, I understand business processes and why they exist, rather than demanding a highly unrealistic scenario.
Again you are over-simplifying the situation. Defects are found in even the most tested application frameworks. In fact, pretty much every software testing book admits that anything beyond a “hello world” example is likely to have flaws.
Also, not all software devs are created equal; processes have to be realistic and take this fact into account. Again, what you are suggesting is highly unrealistic.
These processes exist because someone made a cock-up earlier (and btw it happens whatever the ability of the developer; people are human).
E.g. I spent 3 days negotiating with a dev external to my department to get him to set up two separate database instances, one for each test environment … it should be bloody obvious.
Why do you think this is a bad thing? These processes also exist in pretty much every other industry as well.
Because a lot of other industries don’t do anything nearly as complex as software, and guess what: they have rigorous QA processes before anything goes into production as well, just like the software industry.
What you are asking for is highly unrealistic, especially taking into account that the industry is fairly young and misunderstood compared to other industries, e.g. manufacturing, which is a couple of hundred years old.
At the end of the day, software isn’t a physical object and it shouldn’t be treated like one.
A bug is not the same as a critical security vulnerability. If you lump them together, then it’s you who has no clue.
Security vulnerabilities have high priorities and, just like bugs, are classified as Minor, Moderate, Major, and Critical.
I’ve had to patch a few critical security vulnerabilities. The total response time for them ranged from 8 to 72 hours, including QA. A week to patch, or even to put out an advisory, is exceptionally generous.
Since we are talking about software, most would consider it a software defect, which is more commonly known as a bug. Sorry, you are being a pedantic dick-piece.
But you still have to go through a change management process.
Also, you make no mention of whether you actually created the patch or deployed it, or of its complexity.
I.e. fixing an SQL injection vulnerability is relatively easy compared to something like patching a vulnerability in some critical part of the OS.
I can claim to have fixed critical security vulnerabilities when all I really did was change a particular procedure to use parameterised queries and a SPROC.
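To illustrate that kind of ‘easy’ fix, here is a minimal sketch in Python using the standard sqlite3 module (the table and column names are made up for the example):

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable version: user input concatenated straight into the SQL string.
    # return conn.execute("SELECT * FROM users WHERE name = '" + username + "'").fetchall()
    # Fixed version: the value is handed to the driver as a bound parameter instead.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

The function’s signature and results stay the same, which is why a change like this can clear QA far more quickly than a fix deep inside an OS.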
No. A bug would be like a broken design for the car radio. A security vulnerability is like a broken design for the brake system. The former gets fixed at the garage, the latter gets recalled and costs a lot of money to the manufacturer. Ask Toyota how that went, even though ultimately they may not have been at fault.
Also, name calling only decreases any credibility you had left.
The classic OSNews pile-on. Why am I not surprised? Anyway, the differences are well known, and completely irrelevant.
It’s obvious what he meant, and nitpicking aside, his point still stands. Whereas you and JAlexoid have spent time splitting semantic hairs, none of you have addressed the actual concerns that he raised.
That is – for a fact – not true. Design flaws are not bugs. A lot of security vulnerabilities are and were not bugs, but perfectly correct implementations of designs and requirements.
And I just hope that you don’t work on any of the software that stores my private information…
How about all three steps, on multiple occasions and none of them were SQL injection.
And since when does anyone give a f**k about complexity when it comes to critical vulnerabilities?
The mistake you made is in assuming that you’re both talking about the same classification of “bug”. He obviously used the word loosely, and you called him out on it. It is, though, even more obvious that he didn’t mean a run-of-the-mill bug or software defect, but a very real showstopping critical vulnerability.
So your going on about the differences between a bug and a vulnerability is an example of pedantry. It’s nice that you know the difference, as I’m sure a lot of us do, but it’s superfluous to this discussion.
Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don’t necessarily have a constant time. More complex code takes longer to evaluate, especially when it runs on an increasingly complicated array of software.
The oversimplification of this entire thing is what I think Lucas is getting at, and it’s disgusting. People here think that software engineering runs on pixie dust and good feelings. There are actual people working on these projects, and it takes actual time to get a fix out of the door in a responsible manner.
It’s great that you have had a situation where you got a fix out in a relatively short amount of time, but I hardly think your experience is necessarily universal.
Thanks for explaining it a lot better than I did.
1) Most security vulnerabilities are implementation-based (à la SQL injection and buffer overflows). They do not alter the external interface at all. Any business that delays those patches either has a shitty update process or simply has shitty QA.
2) Design vulnerabilities should cost you money. I don’t see why the software industry should get a free pass, whereas any other industry is responsible for recalls and repairs within a reasonable amount of time (during the warranty) – or else it’s a free replacement or refund.
Simply because your company is incompetent at handling critical vulnerabilities does not mean other companies are. I think punishing those incompetent companies will reward those that do care. And to be honest, I doubt the former are incompetent; they’re mostly just negligent, as they care more about their wallet than their customers.
No. There is a process and urgency difference between a regular bug and a critical bug and a critical security vulnerability. This is at the heart of the issue.
I’m happy for you if you develop software that does not store critical data, but that does not mean that others aren’t under serious threat from these hushed-up-for-60-days, “we’ll get to it” vulnerabilities. I personally have seen the “big boys” jump through burning hoops to get fixes and workarounds out (like Microsoft delivering quite a few patches for Telia’s Exchange servers within 8 hours, IBM for StoraEnso’s Websphere Portal in 4 hours, or Oracle for Vodafone).
Seriously… Why would you ignore the word critical there? When it’s critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix – PERIOD.
Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you’re in a critical condition no one cares.
No. I had to drop all my work and actually work non-stop until the issue was resolved, a few times. SLAs are there for a reason, and in the industries that I have worked in they carry hefty fines.
It is still a bug that goes through a process at any sane company. An emergency change is normally what it was.
In any case, seriously, a Flash vulnerability (one of the examples) on someone’s machine isn’t a critical vulnerability. Sorry to break this to you, but if these machines need to be that secure, it should never have been allowed to be installed and they shouldn’t have internet access.
You are being overly dramatic.
Because they had an SLA. Guess what: most companies, especially those on the web, have no SLA agreement with their customers even though they are business critical.
This isn’t even equatable to what Google is highlighting. If the OS and machine are business critical, they probably should have no outside access, or only very limited access (a set of white-listed sites).
A life-threatening condition doesn’t equate to software development on the internet.
It ultimately depends on what industry you are creating software for. Browsing the net isn’t in the same category as the software running on an insulin pump or pacemaker.
And I expect that once you put the band-aid on, it was done properly afterwards.
In any case, we aren’t talking about those industries; we are talking about vulnerabilities that (in two of their examples – I can’t read Japanese) are part of a bloody web browser.
At the end of the day when you sign up for a popular web-service you are putting your data in their hands and you have agreed to their terms and conditions. If you didn’t like it you shouldn’t have signed up. If the software has security problems don’t use it or if you have to use it (mandated by work) the network administrator should have locked it down.
Sure… eBanking systems should never have been built, since they have to be secure and, using your statement, “shouldn’t have internet access”.
We are not exclusively talking about a piece of consumer software, like Flash is.
These machines are locked down; you know what the point was.
Yes, I’m aware, and even then, not all critical vulnerabilities are created equal. They might have a higher priority, but they are not magically simpler to fix in a robust and responsible manner.
I think this discussion was always an offshoot of the main discussion, which is that companies in general rushing fixes leads to worse solutions, and that gives context to why they have such processes.
Obviously any company who cannot muster a response of some sort within 60 days isn’t anyone to make excuses for, but I don’t think anyone here is advocating for that except for maybe you. A full 60 days is an extraordinary amount of time.
Going back to your first point, though, I think I’d be a little concerned to work at a company that threw established process away in favor of brevity. That is under no circumstances acceptable, and it often exacerbates the problem. Frankly, I’m surprised you hold this position at all.
Right, and I’m sure you’re also appreciative of the relative scope and impact of that fix compared to a fix to a more widespread incident.
I think you’re using short turn around times as proof that longer ones can’t exist, which is nonsensical in light of my argument that the times are variable in the first place.
No. Hell no. This is decidedly not how it works. You do not forego established lifecycle processes in the name of a quick fix, you thoroughly understand the root cause and fix it once. Yes, even for a critical vulnerability.
If it really is an easy fix, then you’ll be able to clear the QA process quickly, given that it is expedited. If it’s a more complex exploit, you’re obviously going to need to be more thorough.
Thankfully, software engineering is not the same as open-heart surgery.
Not everyone has an SLA, and if you’re basing your argument on experience trying to honor an SLA then you’re obviously going to have a skewed perspective.
Never did I say that it’s easier. If anything, it’s harder due to the pressure. And “responsible” comes later.
A fix is not always a code patch. Sometimes it’s a workaround, which is a temporary fix.
A full 7 days of response is a massive amount of time for a critical vulnerability.
And where are you getting that from? Also most established SDPs have to accommodate this kind of urgency, where applicable.
Thankfully it isn’t, but in a lot of cases lives and livelihoods can be affected.
Yes, not everyone. There is a reason why some companies hate the idea that Google will start disclosing that information early. These companies have SLAs with their clients that state that they have to release fixes (temporary and permanent) for publicly disclosed vulnerabilities within a certain amount of time or pay hefty fines.
I can 100% guarantee that you will be using something with a vulnerability in it; that’s the nature of the beast.
The point of the matter is that people affected by a 0-day should know ASAP.
Some other news outlets erroneously reported something along the lines of “they better have a fix in 7 days or else”. Mitigation should be possible, if not by the vendor, then at least by the customer(s).
That 7-day window is already too large, because I have the feeling that once a 0-day is uncovered and reported, the people who could do harm already know about it.
I hope there are no people in the crowd following OSNews who believe that blackhats get their exploit info from reading CVEs.
Have blackhats traditionally independently discovered and exploited the same 0-day a whitehat disclosed? I don’t doubt they have the skill to discover an exploit; I’m just not certain they’d be one and the same.
No dude, I’m sure that they “discover” exploits by reading CVEs. LoL
No need for the smart ass sarcasm, I asked you a legitimate question. Either provide a thoughtful response (might be a stretch) or don’t respond.
I’m aware that you think it is unlikely, which is why I asked for any historical trend as I was genuinely curious.
Sometimes vulnerabilities are found that black hats haven’t discovered themselves. Often vulnerabilities are found that black hats are already aware of (and often even already using).
So it’s better to assume that an exploit is already in common use and have full disclosure early on (and thus allow critical systems to be given additional protections where necessary) than to keep things secret until patches finally trickle their way downstream, in the hope that the white hats were lucky enough to find the vulnerability first. The former is security in practice; the latter is security through obscurity.
So that was the relevant part of your post. The quote from Microsoft, which you don’t even comment on, was there purely to confuse readers, I assume. Have you ever considered writing for a tabloid? Even Andrew Orlinski could learn a thing or two from you.
I’m still awaiting your ultra insightful comment that I’m sure you’re furiously typing away at.
Why should I? It’s not like you made any effort. Why hold me to a different standard?
Since you’re not going to contribute, and you’re just talking in circles, then why are you still posting? Just enjoy talking to me?
Note that this is for vulnerabilities under *active attack*. If the responsible party can’t solve that in 7 days I don’t know what the fuck they’re doing and if they need 60 days? Stop writing software.
You’re right, this is less bad than it seems. Probably not even bad at all. 60 days is an insanely long time for something being actively exploited and undisclosed.
This part is the important bit.
http://tech.slashdot.org/story/13/06/01/120204/questioning-googles-…
It is important to emphasize that the 7-day policy applies only to unpatched and actively exploited vulnerabilities. This means that this vulnerability is already known to criminals. So possible negative effects from disclosure are mostly limited to bad PR for the vendor and maybe increased script kiddie activity.
+1, and if I’m understanding correctly, just posting an advisory is enough to put off the disclosure (at least temporarily?)
If so, this seems like a non-issue and, like you said, just bad PR.
Hopefully it depends on what the definition of “advisory” is. A full CVE is an advisory. “lol there’s totally a bug guiz! h4x!” isn’t.
I think it is mostly accepted that an advisory at least contains descriptions of the following:
1. affected product(s)
2. impact
3. countermeasures
Anything less and letting Google disclose the vulnerability would be preferable to me.
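For illustration, a bare-minimum advisory along those lines (product name and details entirely hypothetical) might look like:

Affected products: ExampleServer 2.0 – 2.4.1
Impact: remote attackers can read arbitrary files via the export feature
Countermeasures: disable the export module, or block access to /export at the proxy, until a patch is available

Even something that terse lets administrators protect themselves while the vendor works on a proper fix.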
As far as I can tell from the reports, it’s the “journalists” blowing it out of proportion (with partial reporting) and Microsoft fanboys jumping in to slam Google.
If we are to believe that article, 95% of the world’s software companies are run by, and employ, only incompetent buffoons. Granted, we all know that “enterprise software” is just another name for “software so crap that only corporate purchasing will buy it”, but 95% is probably too high. Maybe 70%.
Seriously though, if a company can’t get a fix, or at least an advisory with a workaround, out in 7 days they deserve to be out of business.
When you’re dealing with an OS-level bug, where the fix could break tons of software (especially given that Windows 8 can still run Windows 3.0 software)?
Then you release a hotfix along with your advisory, and your customers have to test whether their Windows 3.0 software still works with that fix before applying it to production systems.
I really don’t see how that would prevent releasing an advisory with a workaround, if one exists.
When you are reporting an exploitable “feature” that has been present since Windows 3.0, then maybe that feature should be killed off. People who still run Windows 3.0 apps had better have a good migration plan and should be aware of the implications of running those apps.
Hopefully the new version of OSNews has the ability to change your vote on a comment, or I will have to stop using this site on my mobile.
Anyway, this is a good move by Google. If the comment votes above don’t reflect this, it’s partially my fault.