Linked by Thom Holwerda on Thu 11th Jul 2013 21:35 UTC
Documents released by Snowden show the extent to which Microsoft helped the NSA and other security agencies in the US. "Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal; The agency already had pre-encryption stage access to email on Outlook.com, including Hotmail; The company worked with the FBI this year to allow the NSA easier access via Prism to its cloud storage service SkyDrive, which now has more than 250 million users worldwide; [...] Skype, which was bought by Microsoft in October 2011, worked with intelligence agencies last year to allow Prism to collect video of conversations as well as audio; Material collected through Prism is routinely shared with the FBI and CIA, with one NSA document describing the program as a 'team sport'." Wow. Just wow.
Thread beginning with comment 567081
RE[5]: Now we know what happened.
by Kebabbert on Sun 14th Jul 2013 13:44 UTC in reply to "RE[4]: Now we know what happened."
Member since: 2007-07-27

This post is in two parts, the links are in the second part.

"No one can keep up with those amounts of new code that get incorporated into Linux. I showed you proof in the links. For instance, the last link says 'we need to review things more'. Read it.

A link from 5 years ago where a developer says that they need to review code more before it enters the merge window so as to minimize the breakage that occurs during the merge window does NOT mean that code gets incorporated into Linux without review.

It's proof of absolutely nothing of the sort.

Code that breaks during the merge window is either reviewed and fixed or it doesn't make it into a mainline release at all, so your bullshit about untested code getting into mainline is just that, bullshit.
"
Thanks for your constructive remarks, you sound pleasant and well mannered, just like Linus Torvalds ("you are full of shit", "OpenBSD developers are m*sturbating monkeys", etc). BTW, Andrew Morton said in an interview that he wished there were a test tool set for Linux, because "we see so many regressions that we never fix". And are Linux developers ignoring bug reports? See further down for links.
http://www.linuxtoday.com/developer/2007091400326OPKNDV
http://www.kerneltrap.org/Linux/mm_Instability



"But this should not come as a surprise. You know that Linux upgrades break software and device drivers. You have experienced it yourself, if you have used Linux for some time.

Your links don't show one shred of evidence to support your claim of HP spending millions of US dollars to keep up with drivers due to Linux changes.

All you've done is link to well known linux hater bassbeast/hairyfeet's unsubstantiated attacks on Linux with nothing to back it up.

I've used Linux as my day-to-day OS for 6 years now, most of that time on a bleeding edge distro (Arch), and I've had to downgrade the kernel twice in those 6 years: once because of an unstable network driver during a large network rewrite, and once when I had just recently switched to Nouveau, where it became unstable against a new kernel upgrade.

That's three problems where I had to downgrade in 6 years, and these were all fixed within one to two weeks, allowing me to upgrade with full functionality/stability... So if I'd been using a stable distro I wouldn't have been bitten by any of the above.
"
Jesus. You remind me of those people saying "I have been running Windows on my desktop for 6 years, and it has crashed only twice, so you are lying: Windows is stable!"

To those Windows users I say: it is one thing to run Windows at home with no load, no users, and no requirements. But running a fully loaded Windows server with lots of users is a different thing. If you believe that you can extrapolate from your own home experiences to enterprise servers, you need to get some work experience in IT. These are different worlds.

There are many stories of sysadmins complaining about Linux breaking drivers, and this is a real problem. As I said: even you have experienced this, which you confessed. And even though I predicted your problems, you insist it is nothing. You are too funny. I've told you exactly what problems you had, and you basically say "yes, you are right, I had those problems, but these problems are nothing to worry about; you are just lying when you say Linux has these problems". So I was right all along. First you confess I am right, and then you say I am wrong. (For those mathematically inclined, this is called a contradiction.) ;)




"So no, if you actually used Linux for 'some time' you'd know that the whole 'kernel upgrades continuously crash drivers' thing is nonsense coming from people who don't even use Linux, just like you... Not even proprietary drivers are a problem in practice: while they do break between kernel upgrades, proprietary hardware vendors like NVidia and AMD continuously recompile their drivers against the new kernel versions."

Of course no one has ever claimed that every Linux upgrade crashes drivers; no one has said that. But it happens from time to time, which even you confess. The problem is that vendors such as HP must spend considerable time and money recompiling their drivers. If you don't understand that this is a problem, you need to get some IT work experience, not just sit at home toying with your PC and playing games.

The Linux device driver model is broken:
"Quick, how many OSes OTHER than Linux use Torvalds' driver model? NONE. How many use stable ABIs? BSD, Solaris, OSX, iOS, Android, Windows; even OS/2 has a stable driver ABI... I'm a retailer, I have access to more hardware than most, and I can tell you the Linux driver model is BROKEN. I can take ANY mainstream distro, download the version from 5 years ago and update to current (thus simulating exactly HALF the lifetime of a Windows OS), and the drivers that worked in the beginning will NOT work at the end."

I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance do the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

(Yes, I know, this link is a lie too. Why bother? You don't have to read it; you have missed all the complaints about the Linux device driver model. Even if Linus Torvalds says it is broken, you will not believe him. How could someone make you understand?)



"Stop lying, you have shown absolutely zero evidence of any code being accepted without anyone 'knowing what it really does', it's nothing but your own fabrication.
...
You trying to pose this unsubstantiated quote by some guy named 'Lok' as some proof of 'code getting accepted without anyone knowing what it really does' only shows how desperate you are to downright lie in order to push your agenda."

Jesus. There are numerous links about the bad code quality Linux has. Let me show you some. How about links from Linus Torvalds himself? Would that do? Probably not. So what kind of links do you require? If Linus Torvalds will not do, maybe God is OK? If you don't trust Linus, do you trust God? Probably not either. I don't know how to make someone with zero work experience understand.

Sure, I have shown some links that are a few years old. But those "old" links do not disprove my point. My point is that during all the time Linux has been in development, there have constantly been complaints about how bad the Linux code quality is. I have links from last year, links several years old, and every time in between. First, the original Unix creators studied the Linux code and said it was bad. Then, last year, Linus Torvalds talked about the problems. And even today we all witness the problems that Linux has, for instance the broken device driver model. It has not gotten better with time. Linus Torvalds cannot convince you of the problems, and your own experience of all these problems cannot convince you that Linux has problems; so how could I convince you? That would be impossible.

The rest of you can read the links below. To be continued...

Reply Parent Score: 3

Valhalla Member since: 2006-01-24


"If you believe that you can extrapolate from your own home experiences to enterprise servers, you need to get some work experience in IT. These are different worlds."

And if you believe you can extrapolate from my own system running a bleeding edge distro to that of companies running stable Linux distros on enterprise servers, you are moving the discussion into a 'different world' indeed.

"Of course no one has ever claimed that every Linux upgrade crashes drivers, no one has said that. But it happens from time to time, which even you confess."

This happens to ALL operating systems 'from time to time', as 'from time to time' there will be a bug in a driver if it has been modified.

This is why you run a stable distro for mission-critical systems, which uses an old stable kernel where drivers (or any other part of the kernel) aren't being modified other than possibly having bugfixes backported.
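As a concrete sketch of what "running a stable distro" means in practice on a Debian-style system (the file name and priority value here are illustrative assumptions, not anything from this discussion), you can pin the kernel packages so the driver ABI only ever changes when the admin deliberately opts in:

```
# /etc/apt/preferences.d/pin-kernel -- illustrative APT pin
# Keep kernel packages at the stable release even if backports or a
# newer suite is enabled in sources.list; security and bug fixes still
# arrive within the same stable kernel, so drivers are left alone.
Package: linux-image-*
Pin: release a=stable
Pin-Priority: 900
```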

I've had 3 problems in 6 years on a bleeding edge distro, do you even understand the difference between bleeding edge and a stable distro like for instance Debian Stable?

Again, those three problems (during a six-year period) I've had would not have bitten me had I used a stable distro, as those kernels/packages were fixed long before any stable distro would have picked them up.

"The problem is that vendors such as HP must spend considerable time and money to recompile their drivers."

HP doesn't need to spend any time recompiling their drivers if they submit them for inclusion in the kernel (which is where 99% of Linux hardware support actually resides).

If they choose to keep proprietary out-of-tree drivers, then that is their choice, and they will have to maintain those drivers against kernel changes themselves.

Again, extremely few hardware vendors choose this path, which has led to Linux having the largest out-of-the-box hardware support by far.
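For vendors who do keep an out-of-tree driver, the usual mechanism for "maintaining it against kernel changes" is DKMS, which recompiles the module automatically whenever a new kernel is installed. A minimal sketch of a dkms.conf (the module name, version, and path are hypothetical):

```
# /usr/src/examplenic-1.0/dkms.conf -- hypothetical out-of-tree NIC driver
PACKAGE_NAME="examplenic"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="examplenic"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net"
AUTOINSTALL="yes"   # rebuild this module whenever a new kernel is installed
```

With AUTOINSTALL set, the distribution's kernel post-install hook runs `dkms autoinstall`, so the module is rebuilt against each new kernel without manual intervention; a rebuild only fails when an actual in-kernel API change requires patching the driver source.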

"I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance do the rest of us have?"
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

Is this some joke? What fork of Linux are you talking about? Do you know what a fork is?

The 'article' (4 years old) describes Dell as having sold a computer with a faulty driver, but if you read the actual story it links, it turns out it was a faulty motherboard which caused the computer to freeze. Once exchanged, everything ran fine.

Did you even read the 'article', what the heck was this supposed to show, where is the goddamn Linux fork you mentioned???

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More

A 6-year-old story where Andrew Morton (Linux kernel developer) complains about code contributions which haven't even been tested to compile against the current kernel.

As such he must fix them so that they compile, which he shouldn't have to do: his job is to review the code, not to spend time getting it to compile in the first place.

A perfectly reasonable complaint which doesn't say anything negative about the code which finally makes it into the linux kernel.

Again, as shown by your previous comments you seem to believe that just because someone contributes code to Linux it just lands in the kernel and is shipped.


If you read the original (German) article, Linus doesn't say that 'the kernel is too complex'. He acknowledges that certain subsystems have become so complex that only a handful of developers know them very well, which of course is not an ideal situation.

It says nothing about 'bad Linux code quality'; some code categories are complex by nature, like crypto for instance. It's not an ideal situation, but it's certainly not a problem specific to Linux.


A 4-year-old article where Linus describes Linux as bloated compared to what he envisioned 15 years ago.

Linus:
"Sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel I envisioned 15 years ago. The kernel is huge and bloated, and our iCache footprint is scary. There's no question about that, and whenever we add a new feature, it only gets worse."

Yeah, adding more features means bigger code. Again, this has nothing to do with your claim of 'bad Linux code quality'; again you are taking a quote out of context to serve your agenda.


The well-known back-story, of course, is that Con Kolivas is bitter (perhaps rightly so) about not having his scheduler chosen for mainline Linux, so he is hardly objective. Also, in this very blog post, Kolivas wrote:

"Now I don't claim to be any kind of expert on code per se. I most certainly have ideas, but I just hack together my ideas however I can dream up that they work, and I have basically zero traditional teaching, so you should really take whatever I say about someone else's code with a grain of salt."

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/

A 5-year-old link describing problems with fixing regressions due to a lack of bug reports. He urges people to send bug reports regarding regressions, and he advocates a 'bugfix-only release' (which I think sounds like a good idea if the regression problems are still as he described them 5 years ago).

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows

Already answered this above.

"For instance, bad scalability. There are no 16-32 CPU Linux SMP servers for sale, because Linux cannot scale to 16-32 CPUs."

You're still sticking to this story after this discussion?
http://phoronix.com/forums/showthread.php?64939-Linux-vs-Solaris-sc...

"etc. It surprises me that you missed all this talk about Linux having problems."

Looking at your assorted array of links, most of which are from 4-5 years ago, it's clear that you've just been googling for any discussion of a Linux 'problem' you can find which you then try to present as 'proof' of Linux having bad code quality.

During this discussion you've shown beyond the shadow of a doubt that you don't have even the slightest understanding of how the Linux development process works: you've tried to claim that code submitted to Linux enters the mainline kernel without review, you seem to lack any comprehension of the difference between bleeding edge and stable, and you continuously take quotes out of context.

"and this resulted in personal attacks from you?"

You yourself admitted that you attack Linux because Linus Torvalds said bad things about your favourite operating system, you called it 'defence'.

I say that I find that to be crazy; again, by your logic I should now start attacking Solaris because you, as a Solaris fanboy, are attacking Linux. Yes, that's crazy in my book.

But it certainly goes right along with your 'proof' of Linux being of poor code quality, which consists of nothing but old posts from Linux developers describing development problems that are universal to any project of this scope.

Reply Parent Score: 3