Linked by Thom Holwerda on Sat 1st Jun 2013 18:43 UTC
Privacy, Security, Encryption
Google is changing its disclosure policy for zero-day exploits - both in its own software and in that of others - from 60 days to 7 days. "Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management." I support this 100%. It will force notoriously slow-responding companies - let's not mention any names - to be quicker about helping their customers. Google often uncovers vulnerabilities in other people's software (e.g. half of the vulnerabilities fixed on some Microsoft 'Patch Tuesdays' were uncovered by Google), so this could have a big impact.
Thread beginning with comment 563522
RE[7]: Comment by Nelson
by JAlexoid on Mon 3rd Jun 2013 15:16 UTC in reply to "RE[6]: Comment by Nelson"

It's nice that you know the difference, as I'm sure a lot of us do, but it's superfluous to this discussion.


No. There is a process and urgency difference between a regular bug and a critical bug and a critical security vulnerability. This is at the heart of the issue.

I'm happy for you if you develop software that does not store critical data, but that does not mean that others aren't under serious threat from these hushed-up-for-60-days, "we'll get to it" vulnerabilities. I personally have seen "big boys" jump through burning hoops to get fixes and workarounds out (like Microsoft did quite a few patches for Telia's Exchange servers within 8 hours, IBM for StoraEnso's WebSphere Portal in 4 hours, or Oracle for Vodafone).

Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don't necessarily take a constant amount of time.

Seriously... Why would you ignore the word critical there? When it's critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix - PERIOD.
Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you're in a critical condition no one cares.

It's great that you have had a situation where you got a fix out in a relatively short amount of time, but I hardly think that your experience is one that is necessarily universal.


No. On a few occasions I had to drop all my work and actually work non-stop until the issue was resolved. SLAs are there for a reason, and in the industries I have worked in they carry hefty fines.


RE[8]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 17:56 in reply to "RE[7]: Comment by Nelson"

No. There is a process and urgency difference between a regular bug and a critical bug and a critical security vulnerability. This is at the heart of the issue.


It is still a bug that goes through a process at any sane company. An emergency change is normally what it was.

In any case, seriously, a Flash vulnerability (one of the examples) on someone's machine isn't a critical vulnerability. Sorry to break this to you, but if these machines need to be that secure, Flash should never have been allowed to be installed on them and they shouldn't have internet access.

You are being overly dramatic.

I'm happy for you if you develop software that does not store critical data, but that does not mean that others aren't under serious threat from these hushed-up-for-60-days, "we'll get to it" vulnerabilities. I personally have seen "big boys" jump through burning hoops to get fixes and workarounds out (like Microsoft did quite a few patches for Telia's Exchange servers within 8 hours, IBM for StoraEnso's WebSphere Portal in 4 hours, or Oracle for Vodafone).


Because they had an SLA. Guess what: most companies, especially those on the web, have no SLA agreement with their customers, even the ones that are business critical.

This isn't even comparable to what Google is highlighting. If the OS and machine are business critical, they probably should have no outside access, or only very limited access (a set of white-listed sites).

Seriously... Why would you ignore the word critical there? When it's critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix - PERIOD.
Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you're in a critical condition no one cares.


A life-threatening condition doesn't equate to software development on the internet.

It ultimately depends on what industry you are creating software for. Browsing the net isn't in the same category as the software running on an insulin pump or a pacemaker.

No. On a few occasions I had to drop all my work and actually work non-stop until the issue was resolved. SLAs are there for a reason, and in the industries I have worked in they carry hefty fines.


And I expect that once you had put the band-aid on, it was done properly afterwards.

In any case we aren't talking about those industries; we are talking about vulnerabilities that, in two of their examples (I can't read Japanese), are part of a bloody web browser.

At the end of the day, when you sign up for a popular web service you are putting your data in their hands and you have agreed to their terms and conditions. If you didn't like it, you shouldn't have signed up. If the software has security problems, don't use it; and if you have to use it (mandated by work), the network administrator should have locked it down.

Edited 2013-06-03 18:03 UTC


RE[9]: Comment by Nelson
by JAlexoid on Tue 4th Jun 2013 11:22 in reply to "RE[8]: Comment by Nelson"

Sorry to break this to you, but if these machines need to be that secure, Flash should never have been allowed to be installed on them and they shouldn't have internet access.

Sure... eBanking systems should never have been built, since they have to be secure and, using your statement, "shouldn't have internet access".

We are not talking exclusively about consumer software, like Flash is.


RE[8]: Comment by Nelson
by Nelson on Mon 3rd Jun 2013 23:20 in reply to "RE[7]: Comment by Nelson"


No. There is a process and urgency difference between a regular bug and a critical bug and a critical security vulnerability. This is at the heart of the issue.


Yes, I'm aware, and even then, not all critical vulnerabilities are created equal. They might have a higher priority, but they are not magically simpler to fix in a robust and responsible manner.


I'm happy for you if you develop software that does not store critical data, but that does not mean that others aren't under serious threat from these hushed-up-for-60-days, "we'll get to it" vulnerabilities.


I think this discussion was always an offshoot of the main discussion, which is that companies in general rushing fixes leads to worse solutions, and gives context to why they have such processes.

Obviously any company that cannot muster a response of some sort within 60 days isn't anyone to make excuses for, but I don't think anyone here is advocating for that, except maybe you. A full 60 days is an extraordinary amount of time.

Going back to your first point, though, I think I'd be a little concerned to work at a company that threw established process away in favor of brevity. That is under no circumstances acceptable, and it often exacerbates the problem. Frankly, I'm surprised you hold this position at all.


I personally have seen "big boys" jump through burning hoops to get fixes and workarounds out (like Microsoft did quite a few patches for Telia's Exchange servers within 8 hours


Right, and I'm sure you're also appreciative of the relative scope and impact of that fix compared to a fix for a more widespread incident.

I think you're using short turnaround times as proof that longer ones can't exist, which is nonsensical in light of my argument that the times are variable in the first place.


Seriously... Why would you ignore the word critical there? When it's critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix - PERIOD.


No. Hell no. This is decidedly not how it works. You do not forgo established lifecycle processes in the name of a quick fix; you thoroughly understand the root cause and fix it once. Yes, even for a critical vulnerability.

If it really is an easy fix, then you'll be able to clear the QA process quickly, given that it is expedited. If it's a more complex exploit, you're obviously going to need to be more thorough.


Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you're in a critical condition no one cares.


Thankfully, software engineering is not the same as open-heart surgery.


No. On a few occasions I had to drop all my work and actually work non-stop until the issue was resolved. SLAs are there for a reason, and in the industries I have worked in they carry hefty fines.


Not everyone has an SLA, and if you're basing your argument on experience trying to honor an SLA then you're obviously going to have a skewed perspective.


RE[9]: Comment by Nelson
by JAlexoid on Tue 4th Jun 2013 11:38 in reply to "RE[8]: Comment by Nelson"

They might have a higher priority, but they are not magically simpler to fix in a robust and responsible manner.

Never did I say that it's easier. If anything, it's harder due to the pressure. And "responsible" comes later.

companies in general rushing fixes leads to worse solutions, and gives context to why they have such processes

A fix is not always a code patch. Sometimes it's a workaround, which is a temporary fix.

A full 60 days is an extraordinary amount of time.

A full 7 days to respond is a massive amount of time for a critical vulnerability.

I think I'd be a little concerned to work at a company that threw established process away in favor of brevity

And where are you getting that from? Also, most established SDPs have to accommodate this kind of urgency, where applicable.

Thankfully, software engineering is not the same as open-heart surgery.

Thankfully it isn't, but in a lot of cases lives and livelihoods can be affected.

Not everyone has an SLA

Yes, not everyone. There is a reason why some companies hate the idea that Google will start disclosing that information early. These companies have SLAs with their clients that state that they have to release fixes (temporary and permanent) for publicly disclosed vulnerabilities within a certain amount of time or pay hefty fines.
