Linked by Thom Holwerda on Thu 11th Jul 2013 21:35 UTC
Documents released by Snowden show the extent to which Microsoft helped the NSA and other security agencies in the US. "Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal; The agency already had pre-encryption stage access to email on Outlook.com, including Hotmail; The company worked with the FBI this year to allow the NSA easier access via Prism to its cloud storage service SkyDrive, which now has more than 250 million users worldwide; [...] Skype, which was bought by Microsoft in October 2011, worked with intelligence agencies last year to allow Prism to collect video of conversations as well as audio; Material collected through Prism is routinely shared with the FBI and CIA, with one NSA document describing the program as a 'team sport'." Wow. Just wow.
Thread beginning with comment 567029
RE[4]: Now we know what happened.
by Valhalla on Sat 13th Jul 2013 19:28 UTC in reply to "RE[3]: Now we know what happened."
Valhalla Member since:
2006-01-24

No one can keep up with the amount of new code that gets incorporated into Linux. I showed you proof in the links. For instance, the last link says "we need to review things more". Read it.

A link from 5 years ago, where a developer says that they need to review code more before it enters the merge window so as to minimize the breakage that occurs during the merge window, does NOT mean that code gets incorporated into Linux without review.

It's proof of absolutely nothing of the sort.

Code that breaks during the merge window is either reviewed and fixed or it doesn't make it into a mainline release at all, so your bullshit about untested code getting into mainline is just that, bullshit.


But this should not come as a surprise. You know that Linux upgrades break software and device drivers. You have experienced it yourself, if you have used Linux for some time.

Your links don't show one shred of evidence to support your claim of HP spending millions of US dollars to keep up with drivers due to Linux changes.

All you've done is link to the well-known Linux hater bassbeast/hairyfeet's unsubstantiated attacks on Linux, with nothing to back them up.

I've used Linux as my day-to-day OS for 6 years now, most of that time on a bleeding edge distro (Arch), and I've had to downgrade the kernel twice in those 6 years: once because of an unstable network driver during a large network rewrite, and once when I had just recently switched to Nouveau, which became unstable after a new kernel upgrade.

I also had my Wacom Bamboo functionality fail with an upgrade of the xf86-input-wacom package which led me to downgrade said package while waiting for a fix.

That's three problems where I had to downgrade in 6 years, and these were all fixed within one to two weeks, allowing me to upgrade with full functionality/stability.

Again, this is on a bleeding edge distro; stable distros won't use the bleeding edge packages, they will wait until those have gone through lots more testing and regression/bug fixing. So if I'd been using a stable distro I wouldn't have been bitten by any of the above.

So no, if you actually used Linux for 'some time' you'd know that the whole 'kernel upgrades continuously crash drivers' claim is nonsense coming from people who don't even use Linux, just like you.

Not even proprietary drivers are a problem in practice: while they do break between kernel upgrades, proprietary hardware vendors like NVidia and AMD continuously recompile their drivers against the new kernel versions.
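
To make the rebuild mechanics concrete, here is a minimal sketch of a hypothetical out-of-tree module (hello.c is an assumed name, not any vendor's actual driver). A module is compiled against one specific kernel's headers, and the build stamps the binary with that kernel's "vermagic" version string, so it generally won't load on a different kernel release - which is exactly why out-of-tree vendors rebuild for every new kernel:

/* hello.c - minimal sketch of a hypothetical out-of-tree module. */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	/* Runs at insmod/modprobe time; returning nonzero aborts the load. */
	pr_info("hello: module loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	/* Runs at rmmod time. */
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

With a one-line Kbuild file (obj-m += hello.o) this builds against the running kernel via "make -C /lib/modules/$(uname -r)/build M=$PWD modules"; rebuilding after a kernel upgrade just means rerunning that command, which is what tools like DKMS and the vendors' installers automate.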

Just read my links. Much code gets accepted without anyone knowing what it really does. For instance, the link with "Does this belong here?"

Stop lying: you have shown absolutely zero evidence of any code being accepted without anyone 'knowing what it really does'; it's nothing but your own fabrication.

The link with 'does this belong here' means absolutely nothing; there's no context whatsoever. You'll find questions like this in any large code base where many developers collaborate: one developer new to a part of the code questions a piece of code or a function, and other developers who know the code respond.

That you try to pass off this unsubstantiated quote by some guy named 'Lok' as some proof of 'code getting accepted without anyone knowing what it really does' only shows how desperate you are, willing to downright lie in order to push your agenda.

But the fact is that the code review process is too sloppy; just read the links to Linux devs who complain that they need to review things more.

You've shown no facts to support your claims at all; developers complaining that code needs more review before it enters certain stages doesn't mean that any unreviewed or sloppily reviewed code ever gets into Linux mainline releases. And there are ALWAYS going to be complaints about 'more code review' in ALL large projects; it proves nothing.


So much Linux code gets accepted from anyone that no one can review all the new code. Just read my links.

I've read your links, and they say nothing of the sort. Any code that gets into a Linux mainline release will have had extensive review and bug/regression testing during several stages. Stop lying.

The thing is, Linux supporters believe Linux is best in every way,

I'm a Linux supporter and I certainly don't claim it is best in 'every way'; as an example, I prefer Haiku OS for desktop purposes.

when in fact, it is terrible.


Linux has bad stability, it has bad security, the code is bad (according to Linux kernel devs, I can show you numerous links on this), etc.

More links? More quotes from a mailing list post 5 years ago where a developer is unhappy with some part of the development?

Bad stability and security? Based upon what? Compared to what?

If Linux was anywhere near as 'bad' as you try to portray it, it would have been abandoned ages ago instead of being used practically everywhere. It dominates supercomputers and HPC, and it's widely used in everything from mobile to fridges to servers to desktops to embedded. It did not get there by being bad at stability or security. You've offered nothing even remotely fact-like to support your claims.

That doesn't mean that it's the best in all these areas, but it sure as hell isn't 'terrible' in any of them.

So my question is to you: why are you attacking everybody and every OS? Why not leave them be?

What? Where am I attacking everybody and every OS? I'm not attacking ANY OS; you, on the other hand, are.

Then we would not have to defend ourself.

You are attacking Linux because you are angry at Linus for saying bad things about your favourite OSes; this pretty much explains your mentality and how you can resort to such desperate fabrications.

I don't agree with Linus' statements on OpenBSD and Solaris, but I don't use Linux because I adore Linus; I use Linux because it works for me.

Unlike you, however, I don't hate Solaris just because a Solaris fanboy like you attacks Linux. That would be crazy, which sadly seems to apply to you.

But no one has time to audit everything. Just read my links: "we need to review more".

Again, stop lying: saying they need to review more doesn't mean the code that actually gets into Linux mainline releases isn't properly reviewed. The link you posted was a 5-year-old post where a developer wanted better-reviewed code before it enters the merge window, to minimize merge window breakage; the code in question won't make it into a mainline release until it actually has been properly reviewed.

In the earlier days, less code was accepted. Today too much code is accepted, which no one has time to review thoroughly, so the review process is worse today.

You fail to understand (or, more likely, you simply ignore in order to perpetuate your lies) that just because code is submitted to the Linux project doesn't mean that it ever makes it into mainline releases. And if it does, it does so after having gone through several stages, each with testing and review.

Reply Parent Score: 5

Kebabbert Member since:
2007-07-27

This post is in two parts; the links are in the second part.

"No one can keep up with those amounts of new code that gets incorportaed in Linux. I showed you proof in the links. For instance, the last link says "we need to review things more". Read it.

A link from 5 years ago, where a developer says that they need to review code more before it enters the merge window so as to minimize the breakage that occurs during the merge window, does NOT mean that code gets incorporated into Linux without review.

It's proof of absolutely nothing of the sort.

Code that breaks during the merge window is either reviewed and fixed or it doesn't make it into a mainline release at all, so your bullshit about untested code getting into mainline is just that, bullshit.
"
Thanks for your constructive remarks, you sound pleasant and well-mannered, just like Linus Torvalds ("you are full of shit", "OpenBSD developers are m*sturbating monkeys", etc). BTW, Andrew Morton said in an interview that he wished for a test tool set for Linux, because "we see so many regressions that we never fix". And are Linux developers ignoring bug reports? Etc. See further down for links.
http://www.linuxtoday.com/developer/2007091400326OPKNDV
http://www.kerneltrap.org/Linux/mm_Instability



"But this should not come as a surprise. You know that Linux upgrades breaks software and device drivers. You have experienced it yourself, if you have used Linux for some time.

Your links don't show one shred of evidence to support your claim of HP spending millions of US dollars to keep up with drivers due to Linux changes.

All you've done is link to the well-known Linux hater bassbeast/hairyfeet's unsubstantiated attacks on Linux, with nothing to back them up.

I've used Linux as my day-to-day OS for 6 years now, most of that time on a bleeding edge distro (Arch), and I've had to downgrade the kernel twice in those 6 years: once because of an unstable network driver during a large network rewrite, and once when I had just recently switched to Nouveau, which became unstable after a new kernel upgrade.

That's three problems where I had to downgrade in 6 years, and these were all fixed within one to two weeks, allowing me to upgrade with full functionality/stability....So if I'd been using a stable distro I wouldn't have been bitten by any of the above.
"
Jesus. You remind me of those people saying "I have been running Windows on my desktop for 6 years, and it has crashed only twice, so you are lying: Windows is stable!"

To those Windows users I say: it is one thing to run Windows at home with no load, no users and no requirements. But running a fully loaded Windows server with lots of users is a different thing. If you believe that you can extrapolate from your own home experiences to Enterprise servers, you need to get some work experience in IT. These are different worlds.

There are many stories of sysadmins complaining about Linux breaking drivers, and this is a real problem. As I said: even you have experienced this - which you confessed. And even though I predicted your problems, you insist it is nothing. You are too funny. I've told you exactly what problems you had, and you basically say "yes, you are right, I had those problems, but these problems are nothing to worry about, you are just lying when you say Linux has these problems". So... I was right all the time. First you confess I am right, and then you say I am wrong. (For those mathematically inclined, this is called a contradiction). ;)




So no, if you actually used Linux for 'some time' you'd know that the whole 'kernel upgrades continuously crash drivers' claim is nonsense coming from people who don't even use Linux, just like you...Not even proprietary drivers are a problem in practice: while they do break between kernel upgrades, proprietary hardware vendors like NVidia and AMD continuously recompile their drivers against the new kernel versions.

Of course no one has ever claimed that every Linux upgrade crashes drivers; no one has said that. But it happens from time to time, which even you confess. The problem is that vendors such as HP must spend considerable time and money to recompile their drivers. If you don't understand that this is a problem, then you need to get some IT work experience, and not just sit at home toying with your PC and playing games.

Linux's device driver model is broken:
"Quick, how many OSes OTHER than Linux use Torvald's driver model? NONE. How many use stable ABIs? BSD,Solaris, OSX,iOS,Android,Windows, even OS/2 has a stable driver ABI....I'm a retailer, I have access to more hardware than most and I can tell you the Linux driver model is BROKEN. I can take ANY mainstream distro, download the version from 5 years ago and update to current (thus simulating exactly HALF the lifetime of a Windows OS) and the drivers that worked in the beginning will NOT work at the end."

I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance do the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

(Yes, I know, this link is a lie, too. Why bother, you don't have to read it; you have missed all the complaints about the Linux device driver model. Even if Linus Torvalds says it is broken, you will not believe him, so how could anyone make you understand?)



Stop lying: you have shown absolutely zero evidence of any code being accepted without anyone 'knowing what it really does'; it's nothing but your own fabrication.
...
That you try to pass off this unsubstantiated quote by some guy named 'Lok' as some proof of 'code getting accepted without anyone knowing what it really does' only shows how desperate you are, willing to downright lie in order to push your agenda.

Jesus. There are numerous links about the bad code quality Linux has. Let me show you some links. How about links from Linus Torvalds himself? Would that do? Probably not. So, what kind of links do you require? Linus Torvalds will not do; maybe God is ok? If you don't trust Linus, do you trust God? Probably not either. I don't know how to make someone with zero work experience understand.

Sure, I have shown some links that are a few years old. But those "old" links do not disprove my point. My point is that during all the time Linux has been in development there have always been complaints about how bad the Linux code quality is. I have links from last year, and links from several years ago - and every time in between. First, the original Unix creators studied the Linux code and they said it was bad. And now, last year, Linus Torvalds talked about the problems. And even today, we all witness the problems that Linux has, for instance the broken device driver model. It has not gotten better with time. Linus Torvalds cannot convince you of the problems, your own experiences of all the problems cannot convince you that Linux has problems - so how could I convince you? That would be impossible.

You others can read these links below. To be continued...

Reply Parent Score: 3

Valhalla Member since:
2006-01-24


If you believe that you can extrapolate from your own home experiences to Enterprise servers, you need to get some work experience in IT. These are different worlds.

And if you believe you can extrapolate from my own system running a bleeding edge distro to that of companies running stable Linux distros on enterprise servers, you are moving the discussion into a 'different world' indeed.

Of course no one has ever claimed that every Linux upgrade crashes drivers; no one has said that. But it happens from time to time, which even you confess.

This happens to ALL operating systems 'from time to time', as 'from time to time' there will be a bug in a driver if it has been modified.

This is why you run a stable distro for mission critical systems, which uses an old stable kernel where the drivers (and every other part of the kernel) aren't being modified other than possibly having bugfixes backported.

I've had 3 problems in 6 years on a bleeding edge distro. Do you even understand the difference between bleeding edge and a stable distro like, for instance, Debian Stable?

Again, the three problems I've had (during a six year period) would not have bitten me had I used a stable distro, as those kernels/packages were fixed long before any stable distro would have picked them up.

The problem is that vendors such as HP must spend considerable time and money to recompile their drivers.

HP doesn't need to spend any time recompiling their drivers if they submit them for inclusion in the kernel (which is where 99% of Linux hardware support actually resides).

If they choose to keep proprietary out-of-tree drivers then that is their choice, and they will have to maintain the drivers against kernel changes themselves.

Again, extremely few hardware vendors choose this path, which has led to Linux having the largest out-of-the-box hardware support by far.

I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance do the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

Is this some joke? What fork of Linux are you talking about? Do you know what a fork is?

The 'article' (4 years old) describes Dell as having sold a computer with a faulty driver, but if you read the actual story it links, it turns out it was a faulty motherboard which caused the computer to freeze. Once exchanged, everything ran fine.

Did you even read the 'article'? What the heck was this supposed to show? Where is the goddamn Linux fork you mentioned???

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More

A 6-year-old story where Andrew Morton (a Linux kernel developer) complains about code contributions which haven't been tested to compile against the current kernel.

As such he must fix them so that they compile, which he shouldn't have to do: his job is to review the code, not to spend time getting it to compile in the first place.

A perfectly reasonable complaint which doesn't say anything negative about the code which finally makes it into the Linux kernel.

Again, as shown by your previous comments, you seem to believe that just because someone contributes code to Linux, it just lands in the kernel and is shipped.


If you read the original (German) article, Linus doesn't say that 'the kernel is too complex'. He acknowledges that certain subsystems have become so complex that 'only a handful of developers know them very well', which of course is not an ideal situation.

It says nothing about 'bad Linux code quality'; some code categories are complex by nature, like crypto for instance. It's not an ideal situation but it's certainly not a problem specific to Linux.


A 4-year-old article where Linus describes Linux as bloated compared to what he envisioned 15 years ago:

Linus:
Sometimes it’s a bit sad that we are definitely not the streamlined, small hyper efficient kernel I envisioned 15 years ago. The kernel is huge and bloated and our iCache footprint is scary. There’s no question about that, and whenever we add a new feature, it only gets worse.

Yeah, adding more features means bigger code. Again, this has nothing to do with your claim of 'bad Linux code quality'; again you are taking a quote out of context to serve your agenda.


The well-known back-story, of course, is that Con Kolivas is bitter (perhaps rightly so) about not having his scheduler chosen for mainline Linux, so he is hardly objective. Also, in this very blog post Kolivas wrote:

Now I don't claim to be any kind of expert on code per-se. I most certainly have ideas, but I just hack together my ideas however I can dream up that they work, and I have basically zero traditional teaching, so you should really take whatever I say about someone else's code with a grain of salt.

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/

A 5-year-old link describing problems with fixing regressions due to a lack of bug reports. He urges people to send bug reports regarding regressions, and he advocates a 'bugfix-only release' (which I think sounds like a good idea, if the regression problem is still as he described it 5 years ago).

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows

Already answered this above.

For instance, bad scalability. There are no 16-32 CPU Linux SMP servers for sale, because Linux cannot scale to 16-32 CPUs.

You're still sticking to this story after this discussion?
http://phoronix.com/forums/showthread.php?64939-Linux-vs-Solaris-sc...

etc. It surprises me that you missed all this talk about Linux having problems.

Looking at your assorted array of links, most of which are from 4-5 years ago, it's clear that you've just been googling for any discussion of a Linux 'problem' you can find, which you then try to present as 'proof' of Linux having bad code quality.

During this discussion you've shown beyond the shadow of a doubt that you don't have even the slightest understanding of how the Linux development process works: you've tried to claim that code which is submitted to Linux enters the mainline kernel without review, you seem to lack any comprehension of the difference between bleeding edge and stable, and you continuously take quotes out of context.

and this resulted in personal attacks from you?

You yourself admitted that you attack Linux because Linus Torvalds said bad things about your favourite operating system; you called it 'defence'.

I say that I find that to be crazy; again, by your logic I should now start attacking Solaris because you, a Solaris fanboy, are attacking Linux. Yes, that's crazy in my book.

But it certainly goes right along with your 'proof' of Linux being of poor code quality, which consists of nothing but old posts from Linux coders describing development problems which are universal to any project of this scope.

Reply Parent Score: 3

Kebabbert Member since:
2007-07-27

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More
Andrew Morton complains that the code everybody tries to merge into Linux is untested and of bad quality; sometimes the developer has not even compiled the kernel after making changes. This forces Andrew to fix all the problems. So poor Andrew writes "this just isn't working any more".
Swedish link:
http://opensource.idg.se/2.1014/1.121585

Linus Torvalds:
http://www.tomshardware.com/news/Linux-Linus-Torvalds-kernel-too-co...
"The Linux kernel source code has grown by more than 50-percent in size over the past 39 months, and will cross a total of 15 million lines with the upcoming version 3.3 release.
In an interview with German newspaper Zeit Online, Torvalds recently stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore.!!!!!"

Linus Torvalds and Intel:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloate...
"Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two per centage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds."

Linux kernel hacker Con Kolivas:
http://ck-hack.blogspot.be/2010/10/other-schedulers-illumos.html
"[After studying the Solaris source code] I started to feel a little embarrassed by what we have as our own Linux kernel. The more I looked at the code, the more it felt like it pretty much did everything the Linux kernel has been trying to do for ages. Not only that, but it's built like an aircraft, whereas ours looks like a garage job with duct tape by comparison"

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/
"Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem(!!!!!) Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?
A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix."

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows
"the [Linux source] tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"


Linux developer Ted Tso, ext4 creator:
http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux...
"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.!!!!!!)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

What does Linux hacker Dave Jones mean when he says that "the kernel is going to pieces"? What does Linux hacker Alan Cox mean when he says that "the kernel should be fixed"?

Other developers:
http://milek.blogspot.se/2010/12/linux-osync-and-write-barriers.htm...
"This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now)."

Does Linux still overcommit RAM? Linux grants the requested RAM to every process, so Linux might give away more RAM than exists in the server. If RAM is suddenly needed and about to run out, Linux starts to kill processes randomly. That is really bad design. Randomly killing processes makes Linux unstable. Other OSes do not give away too much RAM, so there is no need to randomly kill processes.
http://opsmonkey.blogspot.se/2007/01/linux-memory-overcommit.html
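
The behaviour described above is easy to reproduce. Here is a minimal C sketch (the file name and the 64 GiB figure are arbitrary assumptions; it presumes a 64-bit system with the default vm.overcommit_memory=0 heuristic, and should only be run in a throwaway VM):

/* overcommit.c - demonstrates Linux memory overcommit (hypothetical name). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Ask for 64 GiB, assumed to be far more than physical RAM + swap. */
	size_t sz = (size_t)64 << 30;
	char *p = malloc(sz);

	if (p == NULL) {
		puts("malloc failed: the kernel refused to overcommit");
		return 1;
	}
	puts("malloc succeeded: the kernel overcommitted");

	/* Pages are only backed when first touched; dirtying them all forces
	 * real allocation, and the OOM killer is expected to terminate some
	 * process before this loop completes. */
	memset(p, 1, sz);

	free(p);
	return 0;
}

Setting vm.overcommit_memory=2 switches the kernel to strict accounting, which makes the malloc() fail up front with ENOMEM instead of risking the OOM killer later.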

etc. It surprises me that you missed all this talk about Linux having problems. For instance, bad scalability. There are no 16-32 CPU Linux SMP servers for sale, because Linux cannot scale to 16-32 CPUs. Sure, Linux scales excellently on clusters such as supercomputers, but that is just a large network on a fast switch. The SGI Altix supercomputer with 2048 cores is just a cluster running a software hypervisor which tricks Linux into believing that the SGI Altix is an SMP server - when it is, in fact, a cluster. There are 2048-core Linux servers for sale, but no 16-32 CPU servers, because Linux scales well on a cluster but scales badly on an SMP server (one big fat server with as many as 16 or 32 CPUs, like the 1000 kg SMP servers with up to 32 CPUs that IBM, Oracle and HP sell). There are no 16-32 CPU Linux servers for sale; please show me one if you have a link. There are none. If Linux scaled the crap out of other OSes, then there would be 16-32 CPU servers for sale, or even 64 CPU and 128 CPU servers. But no one sells such Linux servers, because Linux running on 16-32 CPUs does not cut it; it does not scale.

Regarding Amazon, Google, etc - yes, they all run huge Linux clusters. But one architect said they run at low utilization, because Linux does not cope that well with high load in Enterprise settings. Unix and Mainframes can run at 100% utilization without getting unstable. But Windows cannot, and Linux cannot either.



Hey, I just talked about security. When we talk about security, OpenBSD might be a better choice than Linux - and this resulted in personal attacks from you? Maybe you should calm down a bit? And when we talk about innovation, I will mention Plan9 - will you attack me again then? What is your problem? Maybe you had a bad day?

Reply Parent Score: 3