Linked by Thom Holwerda on Sat 1st Jun 2013 18:43 UTC
Privacy, Security, Encryption Google is changing its disclosure policy for zero-day exploits - both in its own software and in that of others - from 60 days to 7 days. "Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management." I support this 100%. It will force notoriously slow-responding companies - let's not mention any names - to be quicker about helping their customers. Google often uncovers vulnerabilities in other people's software (e.g. half of the patches in some Microsoft 'Patch Tuesdays' address issues uncovered by Google), so this could have a big impact.
Comment by Nelson
by Nelson on Sat 1st Jun 2013 19:26 UTC
Nelson Member since:
2005-11-29

Microsoft in 2010 on Google fully disclosing after a few days:

One of the main reasons we and many others across the industry
advocate for responsible disclosure is that the software vendor who
wrote the code is in the best position to fully understand the root
cause. While this was a good find by the Google researcher, it turns out
that the analysis is incomplete and the actual workaround Google
suggested is easily circumvented. In some cases, more time is required
for a comprehensive update that cannot be bypassed, and does not cause
quality problems.


Full disclosure time windows are a complicated matter, and often things are not that cut and dried. I do agree with Full Disclosure; I'm just not sure how much time should pass before a disclosure is made.

Reply Score: 6

RE: Comment by Nelson
by Thom_Holwerda on Sat 1st Jun 2013 19:41 UTC in reply to "Comment by Nelson"
Thom_Holwerda Member since:
2005-06-29

If I'm using something that has a serious vulnerability in it, I want to know so that I can stop using said software, disable the feature in question, or apply a workaround.

It's not my problem that most companies are really bad at protecting their customers.

Reply Score: 5

RE[2]: Comment by Nelson
by Nelson on Sat 1st Jun 2013 21:10 UTC in reply to "RE: Comment by Nelson"
Nelson Member since:
2005-11-29

As usual, you vastly oversimplify a complicated matter.

There are a lot of variables involved in software engineering, and any one change can affect the various hardware configurations running on a platform, especially something as important as, say, Windows.

What one person considers a fix might break something else, and cause major quality headaches down the road.

How do you deal with that? Would you appreciate a Windows Update screwing up your install? It'd be a disaster.

You can be advised via partial disclosure of a flaw and act accordingly. There is full disclosure, then there's being unreasonable.

There are potentially millions at risk, not something to be taken lightly.

Reply Score: 2

RE[3]: Comment by Nelson
by Neolander on Sun 2nd Jun 2013 14:26 UTC in reply to "RE[2]: Comment by Nelson"
Neolander Member since:
2010-03-08

Regarding security fixes, I would have spontaneously assumed that a company the size of Microsoft would have boatloads of automated regression tests in place to ensure that a security patch is unlikely to break a customer's machine (unless they are using code that binds to undocumented APIs or crap like that). Isn't that the case?

Edited 2013-06-02 14:27 UTC

Reply Score: 4

RE[4]: Comment by Nelson
by Nelson on Sun 2nd Jun 2013 16:30 UTC in reply to "RE[3]: Comment by Nelson"
Nelson Member since:
2005-11-29

Yes I do assume so, which is likely why a proper fix would take time to develop and thoroughly assess. There are also obviously things not covered by tests yet, so identifying the root cause of the issue can probably lead to a more robust fix.

Reply Score: 3

RE[2]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 07:03 UTC in reply to "RE: Comment by Nelson"
lucas_maximus Member since:
2009-08-18

You have no idea, do you?

I work on a fairly small code-base; if there is a bug, it can take weeks before it goes through the QA process and I get the go-ahead to release.

This is not taking into account my own time ... and when I can be put on task for it.

Reply Score: 2

RE[3]: Comment by Nelson
by cfgr on Mon 3rd Jun 2013 09:47 UTC in reply to "RE[2]: Comment by Nelson"
cfgr Member since:
2009-07-18

You have no idea, do you?

I work on a fairly small code-base; if there is a bug, it can take weeks before it goes through the QA process and I get the go-ahead to release.

This is not taking into account my own time ... and when I can be put on task for it.


Then maybe there is something wrong with the whole process. I'd say: hold companies accountable starting 7 days after they've been notified. Let good old capitalism take care of this. You'll be surprised how quickly the process adapts towards better security (fixing and prevention).

Reply Score: 3

RE[4]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 10:33 UTC in reply to "RE[3]: Comment by Nelson"
lucas_maximus Member since:
2009-08-18

Sometimes there is no quick fix or it isn't easily identifiable.

Everyone assumes this fantasy scenario where things can be fixed instantly by a bit of heroic coding.

In corporations you don't just throw a patch in and hope it sticks. These longer processes are in place for a reason ... most of them legal.

Edited 2013-06-03 10:40 UTC

Reply Score: 2

RE[5]: Comment by Nelson
by cfgr on Mon 3rd Jun 2013 12:20 UTC in reply to "RE[4]: Comment by Nelson"
cfgr Member since:
2009-07-18

All too often there is no quick fix due to: 1) a lack of testing before release, 2) negligence, 3) too much bureaucracy.

In corporations you don't just throw a patch in and hope it sticks. These longer processes are in place for a reason ... most of them legal.

Exactly, so use the 'legal' argument to alter these processes. If it costs money, too bad. When a household appliance malfunctions, the manufacturer is held accountable as well. It's called a warranty, and it lasts at least two years in Europe. From europa.eu: "If a product cannot be repaired or replaced within a reasonable time or without inconvenience, you may request a refund or price reduction." Most companies seem to have a policy of about two weeks (and that includes returning and reshipping, which are not applicable to software).

Those longer processes are in place for one reason only: to save money. And they save money because companies are not held accountable for the downsides of those processes (i.e. long delays before security issues get fixed). So make it cost those corporations money for willfully putting their customers at risk longer than necessary and they'll change their priorities.

Alter the market conditions a bit and the market will (perhaps slowly, but steadily) optimise itself for the new conditions: those who fail to invest in security will disappear, those with good security practices will be rewarded and their "processes" will be copied and optimised further.

Edited 2013-06-03 12:20 UTC

Reply Score: 3

RE[6]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 15:46 UTC in reply to "RE[5]: Comment by Nelson"
lucas_maximus Member since:
2009-08-18

This sounds like the usual nonsense from someone who doesn't work in the software industry.

Long processes are there to stop these sorts of mistakes from happening in the first place or, worse, from making the situation worse.

Reply Score: 2

RE[7]: Comment by Nelson
by cfgr on Mon 3rd Jun 2013 16:28 UTC in reply to "RE[6]: Comment by Nelson"
cfgr Member since:
2009-07-18

This sounds like the usual nonsense from someone who doesn't work in the software industry.

Long processes are there to stop these sorts of mistakes from happening in the first place or, worse, from making the situation worse.

Except I do work in the software industry and I've seen both sides. You sound like you're suffering from serious tunnel vision, probably because it's always been that way for you and it's become rather hard to think outside your cubicle.

Big companies have these long processes to prevent their army of brainless code monkeys from screwing up because they're too cheap to invest in proper development. So yes, they're entirely to blame when their customers' systems get compromised as a result of those long processes. This is just a way of shifting costs that's rather unique to the software industry.

Like I said, other industries have to issue a refund, a replacement or a recall when security issues are discovered, and they manage perfectly fine with their own "long processes to stop these sorts of mistakes from happening".

Edited 2013-06-03 16:29 UTC

Reply Score: 3

RE[3]: Comment by Nelson
by JAlexoid on Mon 3rd Jun 2013 11:27 UTC in reply to "RE[2]: Comment by Nelson"
JAlexoid Member since:
2009-05-19

A bug is not the same as a critical security vulnerability. If you lump them together, then it's you who has no clue.

Security vulnerabilities have high priority and, just like bugs, are classified as Minor, Moderate, Major and Critical.
I've had to patch a few critical security vulnerabilities. The total response time for them ranged from 8 to 72 hours, including QA. A week to patch, or even put out an advisory, is exceptionally generous.

Reply Score: 3

RE[4]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 11:43 UTC in reply to "RE[3]: Comment by Nelson"
lucas_maximus Member since:
2009-08-18

A bug is not the same as a critical security vulnerability. If you lump them together, then it's you who has no clue.


Since we are talking about software, most would consider it a software defect, which is more commonly known as a bug. Sorry, you are being a pedantic dick-piece.

Security vulnerabilities have high priority and, just like bugs, are classified as Minor, Moderate, Major and Critical.

I've had to patch a few critical security vulnerabilities. The total response time for them ranged from 8 to 72 hours, including QA. A week to patch, or even put out an advisory, is exceptionally generous.


But you still have to go through a change management process.

Also, you make no mention of whether you actually created the patch, deployed it, or how complex it was.

i.e. fixing an SQL injection vulnerability is relatively easy compared to something like patching a vulnerability in some critical part of the OS.

I can claim to have fixed critical security vulnerabilities when all I really did was change a particular procedure to use parameterised queries and a SPROC.
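For illustration only: a minimal sketch of the kind of fix being described here. The commenter mentions parameterised queries and a stored procedure (presumably on a SQL Server stack); the snippet below shows the same parameterisation idea against SQLite's C API instead, with hypothetical function and table names, just to make the before/after concrete.

/* Hypothetical example, not the commenter's actual code. */
#include <stdio.h>
#include <sqlite3.h>

/* Vulnerable version: the user-supplied value is pasted into the SQL
   text, so input like  x' OR '1'='1  changes the meaning of the query. */
int find_user_unsafe(sqlite3 *db, const char *name)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT id FROM users WHERE name = '%s';", name);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Fixed version: the SQL text is a constant and the value is bound as a
   parameter, so it can never be interpreted as SQL. Note that the
   function's signature - its external interface - does not change. */
int find_user_safe(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db,
        "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
        printf("id = %d\n", sqlite3_column_int(stmt, 0));
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}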

Edited 2013-06-03 11:45 UTC

Reply Score: 3

RE[5]: Comment by Nelson
by cfgr on Mon 3rd Jun 2013 12:29 UTC in reply to "RE[4]: Comment by Nelson"
cfgr Member since:
2009-07-18

Since we are talking about software, most would consider it a software defect, which is more commonly known as a bug. Sorry, you are being a pedantic dick-piece.


No. A bug would be like a broken design for the car radio. A security vulnerability is like a broken design for the brake system. The former gets fixed at the garage; the latter gets recalled and costs the manufacturer a lot of money. Ask Toyota how that went, even though ultimately they may not have been at fault.

Also, name calling only decreases any credibility you had left.

Edited 2013-06-03 12:33 UTC

Reply Score: 2

RE[6]: Comment by Nelson
by Nelson on Mon 3rd Jun 2013 13:30 UTC in reply to "RE[5]: Comment by Nelson"
Nelson Member since:
2005-11-29

The classic OSNews pile-on. Why am I not surprised? Anyway, the differences are well known, and completely irrelevant.

It's obvious what he meant, and nitpicking aside, his point still stands. Whereas you and JAlexoid have spent time splitting semantic hairs, neither of you has addressed the real concerns that he raised.

Reply Score: 3

RE[5]: Comment by Nelson
by JAlexoid on Mon 3rd Jun 2013 13:02 UTC in reply to "RE[4]: Comment by Nelson"
JAlexoid Member since:
2009-05-19

most would consider it a software defect, which is more commonly known as a bug

That is - for a fact - not true. Design flaws are not bugs. A lot of security vulnerabilities are and were not bugs, but perfectly correct implementations of designs and requirements.

Sorry, you are being a pedantic dick-piece.

And I just hope that you don't work on any of the software that stores my private information...

Also, you make no mention of whether you actually created the patch, deployed it, or how complex it was.

How about all three, on multiple occasions, and none of them were SQL injection.
And since when does anyone give a f**k about complexity when it comes to critical vulnerabilities?

Reply Score: 3

RE[6]: Comment by Nelson
by Nelson on Mon 3rd Jun 2013 13:28 UTC in reply to "RE[5]: Comment by Nelson"
Nelson Member since:
2005-11-29


That is - for a fact - not true. Design flaws are not bugs. A lot of security vulnerabilities are and were not bugs, but perfectly correct implementations of designs and requirements.


The mistake you made is in assuming that you're both talking about the same classification of "bug". He obviously used the word loosely, and you called him out on it. It is, though, even more obvious that he didn't mean a run-of-the-mill bug or software defect, but a very real, showstopping critical vulnerability.

So you going on about the differences between a bug and a vulnerability is an example of pedantry. It's nice that you know the difference, as I'm sure a lot of us do, but it's superfluous to this discussion.



And since when does anyone give a f**k about complexity when it comes to critical vulnerabilities?


Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don't necessarily have a constant time. More complex code takes longer to evaluate, especially when it runs on an increasingly complicated array of software.

The oversimplification of this entire thing is what I think Lucas is getting at, and it's disgusting. People here think that software engineering runs on pixie dust and good feelings. There are actual people working on these projects, and it takes actual time to get a fix out the door in a responsible manner.

It's great that you have had a situation where you got a fix out in a relatively short amount of time, but I hardly think that your experience is necessarily universal.

Reply Score: 3

RE[7]: Comment by Nelson
by lucas_maximus on Mon 3rd Jun 2013 13:30 UTC in reply to "RE[6]: Comment by Nelson"
lucas_maximus Member since:
2009-08-18

Thanks for explaining it a lot better than I did.

Reply Score: 2

RE[7]: Comment by Nelson
by cfgr on Mon 3rd Jun 2013 14:33 UTC in reply to "RE[6]: Comment by Nelson"
cfgr Member since:
2009-07-18

Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don't necessarily have a constant time. More complex code takes longer to evaluate, especially when it runs on an increasingly complicated array of software.


1) Most security vulnerabilities are implementation-based (a la SQL injections and buffer overflows). They do not alter the external interface at all (see the sketch below). Any business that delays those patches either has a shitty update process or simply has shitty QA.

2) Design vulnerabilities should cost you money. I don't see why the software industry should get a free pass whereas any other industry is responsible for recalls and repairs within a reasonable amount of time (during the warranty) - or else it's a free replacement or refund.
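To make point 1 concrete, here is a hedged sketch (in C, with made-up names) of a typical implementation-level fix for a buffer overflow: the exported function keeps exactly the same prototype, so the external interface and its callers are untouched; only the body stops trusting the attacker-controlled length.

/* Hypothetical example of an implementation-only fix. */
#include <string.h>

#define NAME_MAX_LEN 64

struct session {
    char user[NAME_MAX_LEN];
};

/* Before: strcpy writes past 'user' whenever the input is 64 bytes or
   longer, corrupting adjacent memory - a textbook buffer overflow. */
void set_user_unsafe(struct session *s, const char *name)
{
    strcpy(s->user, name);
}

/* After: the copy is bounded and always NUL-terminated. Same signature,
   same observable behaviour for legitimate input. */
void set_user_safe(struct session *s, const char *name)
{
    strncpy(s->user, name, NAME_MAX_LEN - 1);
    s->user[NAME_MAX_LEN - 1] = '\0';
}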

Simply because your company is incompetent at handling critical vulnerabilities does not mean other companies are. I think punishing those incompetent companies will reward those that do care. And to be honest, I doubt the former are incompetent; they're mostly just negligent, as they care more about their wallet than their customers.

Reply Score: 2

RE[7]: Comment by Nelson
by JAlexoid on Mon 3rd Jun 2013 15:16 UTC in reply to "RE[6]: Comment by Nelson"
JAlexoid Member since:
2009-05-19

It's nice that you know the difference, as I'm sure a lot of us do, but it's superfluous to this discussion.


No. There is a difference in process and urgency between a regular bug, a critical bug, and a critical security vulnerability. This is at the heart of the issue.

I'm happy for you if you develop software that does not store critical data, but that does not mean that others aren't under serious threat from these hushed-up-for-60-days, "we'll get to it" vulnerabilities. I personally have seen "big boys" jump through burning hoops to get fixes and workarounds out (like Microsoft did for quite a few patches for Telia's Exchange servers within 8 hours, IBM for StoraEnso's Websphere Portal in 4 hours, or Oracle for Vodafone).

Because the implications of patching the vulnerability can extend deeply into the code base and cause other issues down the road, which is why QA processes are necessary, and they don't necessarily have a constant time.

Seriously... Why would you ignore the word critical there? When it's critical no one cares how complex it is to test, verify or fix it correctly. There is an immediate need for a fix - PERIOD.
Breaking ribs to restart your heart is a non-optimal way of making sure that you live, but when you're in a critical condition no one cares.

It's great that you have had a situation where you got a fix out in a relatively short amount of time, but I hardly think that your experience is necessarily universal.


No. I had to drop all my work and work non-stop until the issue was resolved, a few times. SLAs are there for a reason, and in the industries I have worked in they carry hefty fines.

Reply Score: 4

RE[2]: Comment by Nelson
by Deviate_X on Mon 3rd Jun 2013 15:09 UTC in reply to "RE: Comment by Nelson"
Deviate_X Member since:
2005-07-11

If I'm using something that has a serious vulnerability in it, I want to know so that I can stop using said software, disable the feature in question, or apply a workaround.

It's not my problem that most companies are really bad at protecting their customers.


I can 100% guarantee that you will be using something with a vulnerability in it ;) - it's the nature of the beast.

Reply Score: 2

RE: Comment by Nelson
by silviucc on Sat 1st Jun 2013 20:07 UTC in reply to "Comment by Nelson"
silviucc Member since:
2009-12-05

The heart of the matter is that people affected by a 0-day should know ASAP.

Some other news outlets erroneously reported something along the lines of "they better have a fix in 7 days or else". Mitigation should be possible, if not by the vendor then at least by the customer(s).

That 7-day window is already too large, because I have the feeling that once a 0-day is uncovered and reported, the people that could do harm already know about it.

I hope there aren't people in the crowd following OSNews who believe that blackhats get their exploit info from reading CVEs ;)

Reply Score: 6

RE[2]: Comment by Nelson
by Nelson on Sat 1st Jun 2013 21:13 UTC in reply to "RE: Comment by Nelson"
Nelson Member since:
2005-11-29


That 7-day window is already too large, because I have the feeling that once a 0-day is uncovered and reported, the people that could do harm already know about it.


Have BlackHats traditionally independently discovered and exploited the same 0day a WhiteHat disclosed? I don't doubt they have the skill to discover an exploit; I'm just not certain they'd be one and the same.

Reply Score: 2

RE[3]: Comment by Nelson
by silviucc on Sat 1st Jun 2013 21:31 UTC in reply to "RE[2]: Comment by Nelson"
silviucc Member since:
2009-12-05

No dude, I'm sure that they "discover" exploits by reading CVEs. LoL

Edited 2013-06-01 21:32 UTC

Reply Score: 2

RE[4]: Comment by Nelson
by Nelson on Sat 1st Jun 2013 21:56 UTC in reply to "RE[3]: Comment by Nelson"
Nelson Member since:
2005-11-29

No need for the smart-ass sarcasm; I asked you a legitimate question. Either provide a thoughtful response (might be a stretch) or don't respond.

I'm aware that you think it is unlikely, which is why I asked for any historical trend as I was genuinely curious.

Reply Score: 3

RE[3]: Comment by Nelson
by Laurence on Mon 3rd Jun 2013 08:15 UTC in reply to "RE[2]: Comment by Nelson"
Laurence Member since:
2007-03-26


Have BlackHats traditionally independently discovered and exploited the same 0day a WhiteHat disclosed? I don't doubt they have the skill to discover an exploit; I'm just not certain they'd be one and the same.

Sometimes vulnerabilities are found that black hats haven't discovered themselves. Often, vulnerabilities are found that black hats were already aware of (and often already exploiting).

So it's better to assume that an exploit is already in common use and have full disclosure early on (and thus allow critical systems to have additional protections where necessary) than to keep things secret until patches finally trickle their way downstream, in the hope that the white hats were lucky enough to find the vulnerability first (the former is security in practice, the latter is security through obscurity).

Edited 2013-06-03 08:17 UTC

Reply Score: 4

RE: Comment by Nelson
by Vanders on Sat 1st Jun 2013 23:49 UTC in reply to "Comment by Nelson"
Vanders Member since:
2005-07-06

I do agree with Full Disclosure; I'm just not sure how much time should pass before a disclosure is made.

So that was the relevant part of your post. The quote from Microsoft, which you don't even comment on, was there purely to confuse readers, I assume. Have you ever considered writing for a tabloid? Even Andrew Orlinski could learn a thing or two from you.

Reply Score: 6

RE[2]: Comment by Nelson
by Nelson on Sun 2nd Jun 2013 00:58 UTC in reply to "RE: Comment by Nelson"
Nelson Member since:
2005-11-29

I'm still awaiting your ultra insightful comment that I'm sure you're furiously typing away at.

Reply Score: 4

RE[3]: Comment by Nelson
by Vanders on Sun 2nd Jun 2013 01:17 UTC in reply to "RE[2]: Comment by Nelson"
Vanders Member since:
2005-07-06

I'm still awaiting your ultra insightful comment

Why should I? It's not like you made any effort. Why hold me to a different standard?

Reply Score: 4

RE[4]: Comment by Nelson
by Nelson on Sun 2nd Jun 2013 02:10 UTC in reply to "RE[3]: Comment by Nelson"
Nelson Member since:
2005-11-29

Since you're not going to contribute and you're just talking in circles, why are you still posting? Do you just enjoy talking to me?

Reply Score: 3

RE: Comment by Nelson
by Soulbender on Sun 2nd Jun 2013 01:38 UTC in reply to "Comment by Nelson"
Soulbender Member since:
2005-08-18

Note that this is for vulnerabilities under *active attack*. If the responsible party can't solve that in 7 days, I don't know what the fuck they're doing, and if they need 60 days? Stop writing software.

Reply Score: 11

RE[2]: Comment by Nelson
by Nelson on Sun 2nd Jun 2013 02:07 UTC in reply to "RE: Comment by Nelson"
Nelson Member since:
2005-11-29

You're right, this is less bad than it seems. Probably not even bad at all. 60 days is an insanely long time for something being actively exploited and undisclosed.

Reply Score: 2

RE: Comment by Nelson
by JAlexoid on Mon 3rd Jun 2013 11:23 UTC in reply to "Comment by Nelson"
JAlexoid Member since:
2009-05-19

Based on our experience, however, we believe that more urgent action -- within 7 days -- is appropriate for critical vulnerabilities under active exploitation.

This part is the important bit.

Reply Score: 4

The FUD is already rolling against this
by chithanh on Sat 1st Jun 2013 21:26 UTC
chithanh Member since:
2006-06-18

http://tech.slashdot.org/story/13/06/01/120204/questioning-googles-...

It is important to emphasize that the 7-day policy applies only to unpatched and actively exploited vulnerabilities. This means that such a vulnerability is already known to criminals, so the possible negative effects from disclosure are mostly limited to bad PR for the vendor and maybe increased script kiddie activity.

Reply Score: 12

Nelson Member since:
2005-11-29

+1, and if I'm understanding correctly, just posting an advisory is enough to put off the disclosure (at least temporarily?)

If so, this seems like a non-issue and, like you said, just bad PR.

Reply Score: 2

Vanders Member since:
2005-07-06

just posting an advisory is enough to put off the disclosure (at least temporarily?)

Hopefully it depends on what the definition of "advisory" is. A full CVE is an advisory. "lol there's totally a bug guiz! h4x!" isn't.

Reply Score: 3

chithanh Member since:
2006-06-18

I think it is mostly accepted that an advisory at least contains descriptions of the following:

1. affected product(s)
2. impact
3. countermeasures

Anything less and letting Google disclose the vulnerability would be preferable to me.

Reply Score: 2

JAlexoid Member since:
2009-05-19

From what I've read of the reports, it's the "journalists" blowing it out of proportion (with partial reporting) and Microsoft fanboys jumping in to slam Google.

Reply Score: 3

Soulbender Member since:
2005-08-18

If we are to believe that article, 95% of the world's software companies are run by, and employ, only incompetent buffoons. Granted, we all know that "enterprise software" is just another name for "software so crap that only corporate purchasing will buy it", but 95% is probably too high. Maybe 70%.

Seriously though, if a company can't get a fix, or at least an advisory with a workaround, out in 7 days, they deserve to be out of business.

Reply Score: 5

bhtooefr Member since:
2009-02-19

When you're dealing with an OS-level bug, where the fix could break tons of software (especially given that Windows 8 can still run Windows 3.0 software)?

Reply Score: 3

chithanh Member since:
2006-06-18

When you're dealing with an OS-level bug, where the fix could break tons of software (especially given that Windows 8 can still run Windows 3.0 software)?

Then you release a hotfix along with your advisory, and your customers have to test whether their Windows 3.0 software still works with that fix before applying it to production systems.

Reply Score: 4

Soulbender Member since:
2005-08-18

When you're dealing with an OS-level bug, where the fix could break tons of software


I really don't see how that would prevent releasing an advisory with a workaround, if one exists.

Reply Score: 4

JAlexoid Member since:
2009-05-19

When you are reporting an exploitable "feature" that has been present since Windows 3.0, then maybe that feature should be killed off? People who still run Windows 3.0 apps had better have a good plan for migration and should be aware of the implications of running those apps.

Reply Score: 3

Change comment votes
by geertjan on Sun 2nd Jun 2013 07:15 UTC
geertjan Member since:
2010-10-29

Hopefully the new version of OSNews has the ability to change your vote on a comment, or I'll have to stop using this site on my mobile.

Anyway, this is a good move by Google. If the comment votes above don't reflect this, it's partially my fault.

Reply Score: 2