Linked by Thom Holwerda on Thu 11th Jul 2013 21:35 UTC
Documents released by Snowden show the extent to which Microsoft helped the NSA and other security agencies in the US. "Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal; The agency already had pre-encryption stage access to email on Outlook.com, including Hotmail; The company worked with the FBI this year to allow the NSA easier access via Prism to its cloud storage service SkyDrive, which now has more than 250 million users worldwide; [...] Skype, which was bought by Microsoft in October 2011, worked with intelligence agencies last year to allow Prism to collect video of conversations as well as audio; Material collected through Prism is routinely shared with the FBI and CIA, with one NSA document describing the program as a 'team sport'." Wow. Just wow.
Thread beginning with comment 567082
RE[5]: Now we know what happened.
by Kebabbert on Sun 14th Jul 2013 13:46 UTC in reply to "RE[4]: Now we know what happened."
Kebabbert
Member since:
2007-07-27

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More
Andrew Morton complains about the poor quality of the code everybody tries to merge into Linux: it is untested, and sometimes the developer has not even compiled the kernel after making changes. This leaves Andrew to fix all the problems, so poor Andrew writes "this just isn't working any more".
Swedish link:
http://opensource.idg.se/2.1014/1.121585

Linus Torvalds:
http://www.tomshardware.com/news/Linux-Linus-Torvalds-kernel-too-co...
"The Linux kernel source code has grown by more than 50-percent in size over the past 39 months, and will cross a total of 15 million lines with the upcoming version 3.3 release.
In an interview with German newspaper Zeit Online, Torvalds recently stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore"."

Linus Torvalds and Intel:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloate...
"Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds."

Linux kernel hacker Con Kolivas:
http://ck-hack.blogspot.be/2010/10/other-schedulers-illumos.html
"[After studying the Solaris source code] I started to feel a little embarrassed by what we have as our own Linux kernel. The more I looked at the code, the more it felt like it pretty much did everything the Linux kernel has been trying to do for ages. Not only that, but it's built like an aircraft, whereas ours looks like a garage job with duct tape by comparison"

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/
"Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem(!!!!!) Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?
A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix."

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows
"the [Linux source] tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"


Linux developer Ted Ts'o, ext4 creator:
http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux...
"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.) ... We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug."

What does Linux hacker Dave Jones mean when he says that "the kernel is going to pieces"? What does Linux hacker Alan Cox mean when he says that "the kernel should be fixed"?

Other developers:
http://milek.blogspot.se/2010/12/linux-osync-and-write-barriers.htm...
"This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now)."

Does Linux still overcommit RAM? Linux grants every process the RAM it asks for, so it may promise more RAM than the server actually has. If RAM then runs short, Linux starts killing processes more or less at random (the OOM killer). That is really bad design: randomly killing processes makes Linux unstable. Other OSes do not hand out more RAM than exists, so they never need to kill processes at random.
http://opsmonkey.blogspot.se/2007/01/linux-memory-overcommit.html

etc. It surprises me that you missed all this talk about Linux having problems, for instance bad scalability. There are no 16-32 CPU Linux SMP servers for sale, because Linux cannot scale to 16-32 CPUs. Sure, Linux scales excellently on clusters such as supercomputers, but a cluster is just a large network on a fast switch. The SGI Altix supercomputer with 2048 cores is just a cluster running a software layer that tricks Linux into believing the Altix is an SMP server, when it is in fact a cluster. So there are 2048-core Linux clusters for sale, but no 16-32 CPU SMP servers: Linux scales well on a cluster but badly on an SMP server (one big fat server with as many as 16 or 32 CPUs, like the 1000 kg SMP servers from IBM, Oracle or HP). Please show me a link to a 16-32 CPU Linux server if you have one; there are none. If Linux scaled the crap out of other OSes, there would be 16-32 CPU Linux servers for sale, or even 64 or 128 CPU servers. But no one sells such Linux servers, because Linux on 16-32 CPUs does not cut it; it does not scale.

Regarding Amazon, Google, etc - yes, they all run huge Linux clusters. But one architect said they run at low utilization, because Linux does not cope well with high load in enterprise settings. Unix and mainframes can run at 100% utilization without becoming unstable; Windows cannot, and neither can Linux.

Hey, I just talked about security, and when we talk about security, OpenBSD might be a better choice than Linux - and that resulted in personal attacks from you? Maybe you should calm down a bit. And when we talk about innovation I will mention Plan 9 - will you attack me again then? What is your problem? Maybe you had a bad day?

Reply Parent Score: 3