Linked by Thom Holwerda on Thu 11th Jul 2013 21:35 UTC
Documents released by Snowden show the extent to which Microsoft helped the NSA and other security agencies in the US. "Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal; The agency already had pre-encryption stage access to email on Outlook.com, including Hotmail; The company worked with the FBI this year to allow the NSA easier access via Prism to its cloud storage service SkyDrive, which now has more than 250 million users worldwide; [...] Skype, which was bought by Microsoft in October 2011, worked with intelligence agencies last year to allow Prism to collect video of conversations as well as audio; Material collected through Prism is routinely shared with the FBI and CIA, with one NSA document describing the program as a 'team sport'." Wow. Just wow.
RE[6]: Now we know what happened.
by Valhalla on Sun 14th Jul 2013 19:32 UTC in reply to "RE[5]: Now we know what happened."


"If you believe that you can extrapolate from your own home experiences to a Enterprise servers, you need to have some work experience in IT. These are different worlds."

And if you believe you can extrapolate from my own system running a bleeding-edge distro to companies running stable Linux distros on enterprise servers, you are moving the discussion into a 'different world' indeed.

"Of course no one has ever claimed that every Linux upgrade crash drivers, no one has said that. But it happens from time to time, which even you confess."

This happens to ALL operating systems 'from time to time', as 'from time to time' there will be a bug in a driver if it has been modified.

This is why you run a stable distro for mission-critical systems, which uses an old, stable kernel where drivers (and every other part of the kernel) aren't modified other than possibly having bug fixes backported.

I've had 3 problems in 6 years on a bleeding-edge distro. Do you even understand the difference between bleeding edge and a stable distro like, for instance, Debian Stable?

Again, those three problems (over a six-year period) would not have bitten me had I used a stable distro, as those kernels/packages were fixed long before any stable distro would have picked them up.

"The problem is that vendors such as HP must spend considerable time and money to recompile their drivers."

HP doesn't need to spend any time recompiling their drivers if they submit them for inclusion into the kernel (which is where 99% of Linux hardware support actually resides).

If they choose to keep proprietary out-of-tree drivers then that is their choice, and they will have to maintain those drivers against kernel changes themselves.

Again, extremely few hardware vendors choose this path, which has led to Linux having the largest out-of-the-box hardware support by far.
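
To make concrete what 'maintaining drivers against kernel changes' actually means, here is a minimal sketch (the driver name 'mydrv' and its do-nothing ioctl handler are made up for illustration; the API change it guards against is real: the old .ioctl file operation was removed in 2.6.36 in favour of .unlocked_ioctl):

    /* Hypothetical out-of-tree driver "mydrv". In 2.6.36 the .ioctl file
     * operation (which ran under the Big Kernel Lock) was removed in
     * favour of .unlocked_ioctl, so out-of-tree code has to carry version
     * guards like these. In-tree drivers were simply fixed up by the same
     * patch that changed the API. */
    #include <linux/version.h>
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/errno.h>

    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 36)
    /* Old prototype: takes an inode, returns int. */
    static int mydrv_ioctl(struct inode *inode, struct file *file,
                           unsigned int cmd, unsigned long arg)
    {
            return -ENOTTY; /* no commands implemented in this sketch */
    }
    #else
    /* Newer prototype: no inode argument, returns long. */
    static long mydrv_ioctl(struct file *file, unsigned int cmd,
                            unsigned long arg)
    {
            return -ENOTTY;
    }
    #endif

    static const struct file_operations mydrv_fops = {
            .owner          = THIS_MODULE,
    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 36)
            .ioctl          = mydrv_ioctl,
    #else
            .unlocked_ioctl = mydrv_ioctl,
    #endif
    };

    MODULE_LICENSE("GPL");

Multiply those guards by every internal API a real driver touches, and it becomes obvious why getting the driver into the mainline tree, where whoever changes an API also fixes every in-tree user, is the cheaper option.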

"I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance does the rest of us have?"
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

Is this some joke? What fork of Linux are you talking about? Do you know what a fork is?

The 'article' (4 years old) describes Dell as having sold a computer with a faulty driver, but if you read the actual story it links to, it turns out a faulty motherboard caused the computer to freeze. Once it was exchanged, everything ran fine.

Did you even read the 'article'? What the heck was it supposed to show, and where is the goddamn Linux fork you mentioned?

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More

A 6-year-old story where Andrew Morton (Linux kernel developer) complains about code contributions which haven't even been tested to compile against the current kernel.

As a result he has to fix them up just so they compile, which is something he shouldn't have to do: his job is to review the code, not to spend time getting it to build in the first place.

A perfectly reasonable complaint, and one which says nothing negative about the code that finally makes it into the Linux kernel.
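
And to illustrate what 'hasn't been tested to compile against the current kernel' looks like in practice, a small sketch (the module name 'myproc' is made up; the API change is real: create_proc_entry() was removed, in 3.10 if I remember right, in favour of proc_create()):

    /* Hypothetical module "myproc" written against an older kernel. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/proc_fs.h>
    #include <linux/errno.h>

    static const struct file_operations myproc_fops = {
            .owner = THIS_MODULE,
    };

    static int __init myproc_init(void)
    {
            /* Written pre-3.10, a patch would do this, which no longer
             * compiles because create_proc_entry() is gone:
             *
             *     struct proc_dir_entry *e =
             *             create_proc_entry("myproc", 0444, NULL);
             *
             * The current equivalent: */
            if (!proc_create("myproc", 0444, NULL, &myproc_fops))
                    return -ENOMEM;
            return 0;
    }

    static void __exit myproc_exit(void)
    {
            remove_proc_entry("myproc", NULL);
    }

    module_init(myproc_init);
    module_exit(myproc_exit);
    MODULE_LICENSE("GPL");

A reviewer like Morton shouldn't have to perform that conversion himself before he can even look at the logic of a patch, and that is exactly his complaint.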

Again, as your previous comments show, you seem to believe that just because someone contributes code to Linux it lands straight in the kernel and is shipped.


If you read the original (German) article, Linus doesn't say that 'the kernel is too complex'. He acknowledges that certain subsystems have become so complex that only a handful of developers know them very well, which of course is not an ideal situation.

It says nothing about 'bad Linux code quality'; some code is complex by nature, like crypto for instance. It's not an ideal situation, but it's certainly not a problem specific to Linux.


A 4-year-old article where Linus describes Linux as bloated compared with what he envisioned 15 years earlier.

Linus: "Sometimes it’s a bit sad that we are definitely not the streamlined, small hyper efficient kernel I envisioned 15 years ago. The kernel is huge and bloated and our iCache footprint is scary. There’s no question about that, and whenever we add a new feature, it only gets worse."

Yeah, adding more features means more code. This has nothing to do with your claim of 'bad Linux code quality'; again you are taking a quote out of context to serve your agenda.


The well-known back-story, of course, is that Con Kolivas is bitter (perhaps rightly so) about not having his scheduler chosen for mainline Linux, so he is hardly objective. Also, in this very blog post Kolivas wrote:

"Now I don't claim to be any kind of expert on code per-se. I most certainly have ideas, but I just hack together my ideas however I can dream up that they work, and I have basically zero traditional teaching, so you should really take whatever I say about someone else's code with a grain of salt."

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/

A 5-year-old link describing problems with fixing regressions due to a lack of bug reports. He urges people to send bug reports for regressions and advocates a 'bugfix-only release' (which I think sounds like a good idea, if the regression situation is still as he described it 5 years ago).

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows

Already answered this above.

"For instance, bad scalability. There are no 16-32 cpu Linux SMP servers for sale, because Linux can not scale to 16-32 cpus."

You're still sticking to this story after this discussion?
http://phoronix.com/forums/showthread.php?64939-Linux-vs-Solaris-sc...

"etc. It surprises me that you missed all this talk about Linux having problems."

Looking at your assortment of links, most of which are 4-5 years old, it's clear that you've just been googling for any discussion of a Linux 'problem' you can find, which you then try to present as 'proof' of Linux having bad code quality.

During this discussion you've shown beyond a shadow of a doubt that you don't have even the slightest understanding of how the Linux development process works: you've tried to claim that code submitted to Linux enters the mainline kernel without review, you seem to lack any comprehension of the difference between bleeding edge and stable, and you continuously take quotes out of context.

"and this resulted in personal attacks from you?"

You yourself admitted that you attack Linux because Linus Torvalds said bad things about your favourite operating system; you called it 'defence'.

I find that crazy. Again, by your logic I should now start attacking Solaris because you, as a Solaris fanboy, are attacking Linux. Yes, that's crazy in my book.

But it certainly goes right along with your 'proof' of Linux being of poor code quality, which consists of nothing but old posts from Linux coders describing development problems that are universal to any project of this scope.
