Linked by Thom Holwerda on Thu 11th Jul 2013 21:35 UTC
Documents released by Snowden show the extent to which Microsoft helped the NSA and other security agencies in the US. "Microsoft helped the NSA to circumvent its encryption to address concerns that the agency would be unable to intercept web chats on the new Outlook.com portal; The agency already had pre-encryption stage access to email on Outlook.com, including Hotmail; The company worked with the FBI this year to allow the NSA easier access via Prism to its cloud storage service SkyDrive, which now has more than 250 million users worldwide; [...] Skype, which was bought by Microsoft in October 2011, worked with intelligence agencies last year to allow Prism to collect video of conversations as well as audio; Material collected through Prism is routinely shared with the FBI and CIA, with one NSA document describing the program as a 'team sport'." Wow. Just wow.
Crazy
by pmac on Thu 11th Jul 2013 21:55 UTC
pmac
Member since:
2009-07-08

I feel like the world isn't making a big enough deal of all this. It's a big story, but it feels like it should be 10 times bigger.

Reply Score: 19

But why?
by WorknMan on Thu 11th Jul 2013 22:06 UTC
WorknMan
Member since:
2005-11-13

Like any other publicly traded corporation, Microsoft would only do something if one of two things is true:

1. They do it because it helps their bottom line
2. They do it because they are forced to

In this case, unless the government is paying them lots of money, I'd say the second one is probably true. Otherwise, what exactly does MS have to gain by handing over customer data to the government? Certainly not the good will of its users.

'But why would they make the process easier?' For the same reason that Google has put systems in place to make it easier for copyright holders to have infringing content removed under the DMCA. Whether or not they like the law, they still have to comply with it, so they might as well streamline the process and make it less costly.

'But why don't they fight the government on this, and/or make it as hard as possible?' Well, think of it like this... you are running a large tech company, and the government comes to you and says, 'We need information on these people, and we have a court order that says you have to give it to us.' Will you then at that point tell the government to go f themselves, thereby possibly having your company sued into oblivion and perhaps getting yourself arrested in the process? No, probably not.

IMO, it's really not the job of these companies to play the heroes. If you want to blame anyone for this mess, blame the fucktards who elected the officials who enacted these laws in the first place.

Reply Score: 3

RE: But why?
by Stephen! on Thu 11th Jul 2013 22:25 UTC in reply to "But why?"
Stephen! Member since:
2007-11-24

'But why don't they fight the government on this, and/or make it as hard as possible?'


Maybe they fear the government. The DOJ did once threaten to break Microsoft up.

Reply Score: 7

RE: But why?
by Morgan on Thu 11th Jul 2013 23:12 UTC in reply to "But why?"
Morgan Member since:
2005-06-29

'But why don't they fight the government on this, and/or make it as hard as possible?' Well, think of it like this... you are running a large tech company, and the government comes to you and says, 'We need information on these people, and we have a court order that says you have to give it to us.' Will you then at that point tell the government to go f themselves, thereby possibly having your company sued into oblivion and perhaps getting yourself arrested in the process? No, probably not.


Not to mention they have a monetary interest in keeping the government as a happy customer. I could imagine a conversation along the lines of "What? You won't give us the data? I suppose we'll have to let our IT managers know they should explore options with Apple. After all, they gave us unfettered access to their documents..." or something like that.

Reply Score: 10

RE: But why?
by Soulbender on Fri 12th Jul 2013 01:20 UTC in reply to "But why?"
Soulbender Member since:
2005-08-18

In this case, unless the government is paying them lots of money


It doesn't have to be money. MS has a lot of lucrative government contracts that they probably don't want to lose.

Reply Score: 4

RE: But why?
by Soulbender on Fri 12th Jul 2013 01:37 UTC in reply to "But why?"
Soulbender Member since:
2005-08-18

'We need information on these people, and we have a court order that says you have to give it to us.' Will you then at that point tell the government to go f themselves, thereby possibly having your company sued into oblivion and perhaps getting yourself arrested in the process? No, probably not.


So we shouldn't really be upset with tech companies when they co-operate with, say, China or Russia to spy on their own citizens or other countries?

Reply Score: 6

RE: But why?
by martijn on Sat 13th Jul 2013 21:16 UTC in reply to "But why?"
martijn Member since:
2010-11-06

Like any other publically traded corporation, Microsoft would only do something if one of two things are true:

1. They do it because it helps their bottom line
2. They do it because they are forced to

In this case, unless the government is paying them lots of money, I'd say the second one is probably true.



In Europe, MS has faced fines of around €1 billion. Not in the US. You don't have to be paranoid to suspect a link.

Edited 2013-07-13 21:20 UTC

Reply Score: 2

Force is indeed the keyword
by Berend de Boer on Thu 11th Jul 2013 23:34 UTC
Berend de Boer
Member since:
2005-10-19

The commentators here are spot on. The US holds a gun to Microsoft's head and tells them: jump.

Anyone jumps in that case. The only question is: how high do you want me to jump?

The public was told there were mechanisms in place to prevent abuse, i.e. that court orders would be needed.

But what do we find out? We have secret courts, so secret that their rulings are secret too. We have interpretations of the law so secret we cannot be told about them. The IRS is pursuing political enemies after Obama mentioned this as a strategy. Millions of Americans have sufficient security clearance to read all your email.

Governments cannot be trusted. It's all to keep you safe! And it's time the internet routed around these obstacles again.

Reply Score: 5

RE: Force is indeed the keyword
by Soulbender on Fri 12th Jul 2013 01:23 UTC in reply to "Force is indeed the keyword"
Soulbender Member since:
2005-08-18

The US holds a gun against Microsoft's head and tells them: jump.


We don't really know how much pressure, if any, was applied.

Governments cannot be trusted.


Unfortunately companies can't be trusted either.

Reply Score: 8

RE[2]: Force is indeed the keyword
by Nelson on Fri 12th Jul 2013 11:06 UTC in reply to "RE: Force is indeed the keyword"
Nelson Member since:
2005-11-29

I think this is true in general: we don't know much of anything. There are few concrete details in any of this, just some vaguely worded PowerPoints, unverified reports of unreleased information, etc.

The story seems to be much more nuanced than this, so I'm going to withhold judgment in the hope that a proper story comes out.

It could be that MSFT and others willingly went along, or it could be that the government exerted tremendous legal pressure, or anything in between. Who knows.

Reply Score: 3

RE: Force is indeed the keyword
by otrov on Fri 12th Jul 2013 04:27 UTC in reply to "Force is indeed the keyword"
otrov Member since:
2012-06-02

But what do we find out? We have secret courts, so secret that its rulings are secret too. We have law interpretations so secret, we cannot be told. The IRD is pursuing political enemies after Obama mentioned this as strategy. Millions of Americans have sufficient security clearance to read all your email.


If you need Snowden for such an observation, then dream on.

It's understandable to hold on to an easy-to-digest image of transparency around the government's role and politics, since even a slight glimpse of the reality would hurt and would require an intelligence and integrity most can't muster. Besides, no sane person really wants to know more than necessary, or anything different from everyone else. It's all about your safety; keep that as a mantra.

What I find laughable in this charade is the reaction from hypocritical EU countries, as if they knew nothing about it and as if they don't do the same. What can you expect from societies that communicate through PR agencies and press conferences, other than an idiotic outcome?

Reply Score: 2

Intranet
by corbintechboy on Fri 12th Jul 2013 00:06 UTC
corbintechboy
Member since:
2006-05-02

We need to create a citizen-run intranet. Then all the regulations can take a flying hike. Make it private and non-public. Make the system run by the users who use it. Corporate greed not allowed. Startups can create web services on this net and the world can be happy.

It only takes a few to start such a project.

Reply Score: 2

RE: Intranet
by ricegf on Fri 12th Jul 2013 00:28 UTC in reply to "Intranet"
ricegf Member since:
2007-04-25

Are you thinking of an encrypted wireless mesh network?

Reply Score: 2

RE: Intranet
by Soulbender on Fri 12th Jul 2013 01:33 UTC in reply to "Intranet"
Soulbender Member since:
2005-08-18

Only takes a few to start such a project.


Really? You're going to create an entirely new, global infrastructure just like that?

Reply Score: 3

RE[2]: Intranet
by corbintechboy on Fri 12th Jul 2013 02:08 UTC in reply to "RE: Intranet"
corbintechboy Member since:
2006-05-02

It is not that hard, and it could start in a small area and spread.

Start with a local network of shared somethings (files, web pages, services, or whatnot). Dedicate a small always-on system (maybe a small Intel Atom box). Wire what can be wired and use wireless for what cannot.

That creates a chain from one place to the next, across a small area, then a town, then a city, then a state, and so on.

This does not have to be a huge, costly project. We can already contribute with what we use every day: our computers.

Reply Score: 2

RE[3]: Intranet
by Morgan on Fri 12th Jul 2013 02:58 UTC in reply to "RE[2]: Intranet"
Morgan Member since:
2005-06-29

Nice idea in theory, but I think it would work better if you used smartphones. Say what? Let me explain:

Smartphones are already connected to the Internet via the carrier networks. It would be fairly simple to create a cross-platform app (possibly even a web app) -- let's call it Join A Mesh or JAM for short -- that would use the phone's a-GPS combined with an Internet-connected app to map out and find other JAM users. The first time you connect to another JAMmer and handshake successfully, you exchange keys that keep you connected via WiFi or Bluetooth depending on proximity. As you each add other JAMmers, you also decrease your dependency on the carrier network since you can piggyback on another JAMmer's connection to the Internet.

Eventually there will be a need for more powerful "mega nodes", which is where your Atom based nettop machine would come into play. It can act as a file server, web server, and bridge to the "real" Internet if necessary. More powerful machines could serve denser meshes. But the larger the mesh is, the less dependent on the current Internet backbone everyone will be. There would be mesh based news sites, entertainment sites, research, hobby, social networking sites, and so on, just like the old 'Net.

Basically, since nearly every cellphone in a given populated area can potentially connect to another cellphone nearby, you would end up with a living, moving, slightly less dependable but much more vibrant version of the current Internet. It would be an altogether different beast, but it would be an amazing feat.
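To make the discovery step concrete, here is a rough sketch in C of how a JAM client might filter peers down to the ones within plausible WiFi/Bluetooth reach using their a-GPS fixes, before attempting a handshake. All the names, the range threshold and the coordinates are made up for illustration; a real client would obviously need actual networking, key exchange and error handling on top of this:

    /* Toy sketch: pick JAM peers within direct radio range using the
     * great-circle (haversine) distance between a-GPS fixes. Everything
     * here is hypothetical; a real client would add discovery traffic,
     * key exchange and error handling. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define EARTH_RADIUS_M 6371000.0
    #define JAM_RANGE_M    100.0          /* rough WiFi/Bluetooth reach */

    struct peer {
        const char *id;
        double lat, lon;                  /* degrees, from a-GPS */
    };

    static double deg2rad(double d) { return d * M_PI / 180.0; }

    /* Haversine distance in metres between two GPS fixes. */
    static double distance_m(double lat1, double lon1, double lat2, double lon2)
    {
        double dlat = deg2rad(lat2 - lat1);
        double dlon = deg2rad(lon2 - lon1);
        double a = sin(dlat / 2) * sin(dlat / 2) +
                   cos(deg2rad(lat1)) * cos(deg2rad(lat2)) *
                   sin(dlon / 2) * sin(dlon / 2);
        return 2.0 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1.0 - a));
    }

    int main(void)
    {
        struct peer me = { "me", 51.5074, -0.1278 };
        struct peer others[] = {
            { "alice", 51.5076, -0.1275 },    /* a few doors down */
            { "bob",   51.5154, -0.1410 },    /* across town      */
        };

        for (unsigned i = 0; i < sizeof(others) / sizeof(others[0]); i++) {
            double d = distance_m(me.lat, me.lon, others[i].lat, others[i].lon);
            if (d <= JAM_RANGE_M)
                printf("%s is %.0f m away: try a WiFi/Bluetooth handshake\n",
                       others[i].id, d);
            else
                printf("%s is %.0f m away: out of direct range\n",
                       others[i].id, d);
        }
        return 0;
    }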

Unfortunately it won't happen anytime soon if ever, but it's a great theory.

Reply Score: 5

RE[4]: Intranet
by corbintechboy on Fri 12th Jul 2013 03:42 UTC in reply to "RE[3]: Intranet"
corbintechboy Member since:
2006-05-02

Now I like that idea.

And with the power of current smartphones we could have a nice, complete internet. And if you split resources across devices (in some instances many devices) you could create a very complex system.

The only issue this leaves, however, is the current data caps; this would not really be able to be an unlimited service. At the same time, if a developer who is into cell technology could figure out a way for phones to "talk" to each other without needing cell towers, we could have a winner (this might be what you're getting at as well).

I like the idea.

Edited 2013-07-12 03:43 UTC

Reply Score: 0

RE[5]: Intranet
by Alfman on Fri 12th Jul 2013 04:58 UTC in reply to "RE[4]: Intranet"
Alfman Member since:
2011-01-28

corbintechboy,

While I am fond of the idea of having a massive public mesh network, there are a number of impediments that would likely hold it back.

1. There are legal implications. Users who volunteer their IPs to build mesh network gateways are in danger of becoming victims of the court system (ie running open wifi).

2. On the internet, IP routing is accomplished by powerful BGP routers that build routing tables by trusting the routes advertised by peers. This works as long as the peers are trustworthy (which they generally are, but things can go wrong, see http://www.techrepublic.com/blog/networking/black-hole-routes-the-g...). However, in a mesh between *untrusted/ad hoc* peers this becomes a distinct vulnerability for the mesh network; see the toy sketch after this list.

3. Performance is an issue: bandwidth, latency, low cpu, packet loss, etc. Without centralized packet management, one hog might consume the bandwidth of everyone else in the vicinity. Realtime applications like VOIP could prove difficult.

4. Existing technology in mobile devices might not be adequate, consider that generally WiFi APs/Clients only support a single channel at a time, significantly limiting scalability. Professional mesh networks can use multiple radios simultaneously for this reason.

5. Having a mesh doesn't necessarily bring more privacy or security if packets still go through a compromised ISP. Even if traffic doesn't go through an ISP, it becomes easier than ever to perform a MITM-attack in an adhoc mesh network. Even secure encryption schemes require keys to be exchanged beforehand or the use of a CA (which can be compromised).
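To make point 2 above a bit more concrete, here is a toy sketch in C (everything in it is hypothetical, not a real protocol) of what naive route acceptance between untrusted peers looks like. A malicious node that advertises an impossibly short path attracts everyone's traffic and can then drop or inspect it, which is exactly the black-hole scenario:

    /* Toy mesh routing table that believes any advertised route if it
     * looks shorter. No authentication, no sanity checks: this is the
     * vulnerability being illustrated, not a real protocol. */
    #include <stdio.h>
    #include <string.h>

    struct route {
        char dest[16];
        char next_hop[16];
        int  hops;
    };

    static struct route table[8];
    static int nroutes;

    /* Naively accept whatever a peer claims about its distance to dest. */
    static void advertise(const char *peer, const char *dest, int hops)
    {
        for (int i = 0; i < nroutes; i++) {
            if (strcmp(table[i].dest, dest) == 0) {
                if (hops < table[i].hops) {   /* shorter? just believe it */
                    strncpy(table[i].next_hop, peer, sizeof(table[i].next_hop) - 1);
                    table[i].hops = hops;
                }
                return;
            }
        }
        if (nroutes < 8) {
            strncpy(table[nroutes].dest, dest, sizeof(table[nroutes].dest) - 1);
            strncpy(table[nroutes].next_hop, peer, sizeof(table[nroutes].next_hop) - 1);
            table[nroutes].hops = hops;
            nroutes++;
        }
    }

    int main(void)
    {
        advertise("honest-node", "news-site", 4);  /* legitimate route */
        advertise("evil-node",   "news-site", 1);  /* bogus "shortcut" */

        for (int i = 0; i < nroutes; i++)
            printf("to %s via %s (%d hops)\n",
                   table[i].dest, table[i].next_hop, table[i].hops);
        /* All traffic for news-site now flows through evil-node. */
        return 0;
    }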

Don't get me wrong, I'd very much like to see a public mesh network succeed and participate on it, but I'm also skeptical as to some of the security features some people might want it to have in the context of government spying.

Reply Score: 6

RE[6]: Intranet
by Morgan on Sat 13th Jul 2013 02:19 UTC in reply to "RE[5]: Intranet"
Morgan Member since:
2005-06-29

All of that is why I had said it was a nice idea but probably will never happen. It's interesting to theorize about though!

Reply Score: 2

RE[7]: Intranet
by unoengborg on Sat 13th Jul 2013 13:04 UTC in reply to "RE[6]: Intranet"
unoengborg Member since:
2005-07-06

Even if all the technical problems were solved and we managed to get such a network to work, we would quickly see new regulations that somehow made it illegal to use, or impossible in some other way.

Reply Score: 2

RE[5]: Intranet
by zima on Thu 18th Jul 2013 23:59 UTC in reply to "RE[4]: Intranet"
zima Member since:
2005-07-06

if a developer of some sort who is into cell technology could figure out a way for phones to "talk" to each other without need for cell towers, we could have a winner

Radio modules aren't like that; you can't simply "hack" them via an app to work in a totally different configuration than they were designed for. Plus, the GSM spectrum is heavily regulated; you'd get into trouble for attempting to (essentially) jam cell towers.

Mesh networks in general don't really scale up, they choke ...that's why ISPs are happy to give you WiFi routers: they choke up the local spectrum.
A constantly running mesh network would also kill battery life.

The best you could hope for, when using mobiles, is something like http://en.wikipedia.org/wiki/OpenBTS ...but this still runs into the issue of the GSM spectrum being heavily regulated.

Edited 2013-07-19 00:02 UTC

Reply Score: 2

RE[4]: Intranet
by Soulbender on Fri 12th Jul 2013 04:10 UTC in reply to "RE[3]: Intranet"
Soulbender Member since:
2005-08-18

Smartphones are already connected to the Internet via the carrier networks


And that makes you rely on the phone carriers, so big business is still involved.

Reply Score: 3

RE[5]: Intranet
by corbintechboy on Fri 12th Jul 2013 04:19 UTC in reply to "RE[4]: Intranet"
corbintechboy Member since:
2006-05-02

But imagine if a phone could be hacked to communicate directly with another phone, via an app or whatnot.

Say I've got a 3-5 mile radius: I connect to someone within that range, and it can jump all the way across the USA (or any country) by using the devices as hops.

Great idea! We'd just need a hacker and a developer and we would have a new tech that would change the game.

Edited 2013-07-12 04:19 UTC

Reply Score: 0

RE[6]: Intranet
by Alfman on Fri 12th Jul 2013 05:17 UTC in reply to "RE[5]: Intranet"
Alfman Member since:
2011-01-28

corbintechboy,

I'm not sure that would be possible; isn't GSM an asymmetric protocol? The hardware may not be capable of communicating with other peers without using a base station.

Something which may be of interest is femtocells, which are actually for sale today and enable cell phones to connect locally instead of through the carrier's towers.

https://en.wikipedia.org/wiki/Femtocell

I like the mesh network idea, but it seems likely that consumers would need to roll out hardware specialized for the purpose.

Reply Score: 4

RE[7]: Intranet
by corbintechboy on Fri 12th Jul 2013 05:36 UTC in reply to "RE[6]: Intranet"
corbintechboy Member since:
2006-05-02

You have a point.

My original idea was to use existing routers/extenders/bridges to do the job. Open wifi would not be an issue, I would trust my neighbour and he/she would do the same. Not connected to the net as we know it, but creating our own network.

If I allow a person to connect to my network that is my choice (of course without using my ISP in the process). So my theory is kind of like this:

1) I grant a user access to my network
2) I share on my network a webpage or whatnot
3) I could simply be a hop and share nothing

Now this could spread and work (at least in theory). Security is an issue; however, I don't think the problem would be any worse than it is now. The problem would be of a different kind, but users of this network could do things to protect themselves easily (dual routers or whatnot).

I think this could work. And with it being a form of a private network, it would be perfectly legal. Courts would have a hard time trying to get into it. Users would have a huge legal leg to stand on.

Reply Score: 0

RE[5]: Intranet
by Morgan on Fri 12th Jul 2013 23:00 UTC in reply to "RE[4]: Intranet"
Morgan Member since:
2005-06-29

The scenario I laid out implied that initially the phones would rely on the carriers to find and connect to each other, but as the mesh network spreads in a particular area, the phones would be connected via WiFi and Bluetooth. Eventually we would reach a point where carrier connections are optional and the mesh is the "real" network in play.

I guess I should have spelled that out more specifically; I thought it would be easy to infer.

Reply Score: 2

RE[6]: Intranet
by Soulbender on Sat 13th Jul 2013 01:42 UTC in reply to "RE[5]: Intranet"
Soulbender Member since:
2005-08-18

The scenario I laid out implied that initially the phones would rely on the carriers to find and connect to each other,


Yeah, I noticed that after I had replied ;)

Reply Score: 2

Now we know what happend.
by cmost on Fri 12th Jul 2013 00:12 UTC
cmost
Member since:
2006-07-16

Back in the late '90s/early 2000s, Microsoft was tried for antitrust practices and a judge ordered it split into two companies: a software company and an operating system company. Then suddenly a new judge was assigned and all that went away. Shortly thereafter, Bill Gates retired. I find that funny, really, since back in the late '80s the government wanted a back door in Windows that would allow it unfettered access to user data for the purposes of national security. Bill Gates staunchly refused then, but when Microsoft's back was against the wall I think he agreed, and left because he opposed it on other grounds. I think the government traded away the penalty of splitting Microsoft in exchange for the back door it wanted. It's one of the reasons I migrated to Linux over a decade ago. Don't do Windows, people; it's bad for you.

Reply Score: 8

RE: Now we know what happend.
by Kebabbert on Fri 12th Jul 2013 13:03 UTC in reply to "Now we know what happend."
Kebabbert Member since:
2007-07-27

It's one of the reasons I migrated to Linux over a decade ago. Don't do Windows people, it's bad for you.

I would not count on Linux being much safer. There are very subtle attempts to introduce back doors into Linux:
http://www.theregister.co.uk/2003/11/07/linux_kernel_backdoor_block...

"That's the kind of pub talk that you end up having," says BindView security researcher Mark 'Simple Nomad' Loveless. "If you were the NSA, how would you backdoor someone's software? You'd put in the changes subtly. Very subtly."
"Whoever did this knew what they were doing," says Larry McVoy, founder of San Francisco-based BitMover, which hosts the Linux kernel development site that was compromised. "They had to find some flags that could be passed to the system without causing an error, and yet are not normally passed together... There isn't any way that somebody could casually come in, not know about Unix, not know the Linux kernel code, and make this change. Not a chance."


The problem with Linux is the extremely high code turnover. Most code is replaced within... 6(?) months. There is no way you can keep up and audit all the changes. HP spends millions of USD to keep up with the device drivers, because Linux upgrades frequently break the drivers; HP has a very hard time updating only the HP drivers. Now imagine how hard it would be to scan new code for back doors. That is impossible, especially when the back doors are as difficult to spot as in the link above. There are probably many more back doors that have not been spotted.

OpenBSD seems to be much more rigorous with code review and auditing. The NSA probably hates OpenBSD because it is focused on security and being safe. Linux has a chaotic development process and not all code is reviewed or understood, which makes Linux a haven for the NSA and other malicious users. I would avoid the very complex SELinux additions from the NSA that are meant to make Linux "safer". God knows how many backdoors there are in SELinux.

http://www.forbes.com/2005/06/16/linux-bsd-unix-cz_dl_0616theo.html
"Lok Technologies , a San Jose, Calif.-based maker of networking gear, started out using Linux in its equipment but switched to OpenBSD four years ago after company founder Simon Lok, who holds a doctorate in computer science, took a close look at the Linux source code.
“You know what I found? Right in the kernel, in the heart of the operating system, I found a developer’s comment that said, ‘Does this belong here?’ “Lok says. “What kind of confidence does that inspire? Right then I knew it was time to switch.”

This proves that Linux developers do not review all code, nor understand what the code does. It is wildly chaotic, with lots of contributions from everywhere, including from the NSA.

http://www.kerneltrap.org/Linux/Active_Merge_Windows
"The [linux source code] tree breaks every day, and it's becomming an extremely non-fun environment to work in.
We need to slow down the merging, we need to review things more, we need people to test their f--king changes!"


From a security viewpoint, Linux should be avoided. OpenBSD is built for safety, and every line of code is reviewed and understood.

Edited 2013-07-12 13:04 UTC

Reply Score: 6

RE[2]: Now we know what happend.
by shmerl on Fri 12th Jul 2013 14:49 UTC in reply to "RE: Now we know what happend."
shmerl Member since:
2010-06-08

This isn't about attempts. It's about the ability to review the code and find them, which is simply close to impossible with Windows and its closed development. Really, it's the classic open vs. closed source issue.

Edited 2013-07-12 14:50 UTC

Reply Score: 1

RE[2]: Now we know what happend.
by Valhalla on Fri 12th Jul 2013 21:46 UTC in reply to "RE: Now we know what happend."
Valhalla Member since:
2006-01-24


There is no way you can keep up and audit all changes

Only code that is actually a candidate to make it into the kernel needs to be audited. Are you saying code gets merged into a mainline release without being audited? Show me some proof.

HP spends millions of USD to keep up with the device drivers, because Linux upgrades frequently breaks the drivers.

Citation needed.


OpenBSD seems to be much rigorous with the code review and audit.

No argument here: OpenBSD is the most security-oriented operating system I can think of. Of course, that focus leads to drawbacks, such as very slow development.

Also, OpenBSD's focus on security above (pretty much) all else doesn't mean that Linux has 'bad' security in any way.

Linux has a chaotic development process and all code is not reviewed nor understood, which makes Linux a haven for NSA and other malicious users.

Bullshit, how is Linux development chaotic?

People and companies submit code; the code is audited by the maintainer(s) of the specific subsystem it belongs to. If it passes their audit it's put in staging, where it goes through testing and gets more eyeballs, since at this stage it is actually a candidate for mainline.

Then, when the subsystem maintainer feels the code is mature enough, he/she waits for the merge window to open and sends a pull request to Linus.

Linus then has the final say on whether or not it makes it into the merge window; if it does, it goes through further testing during the merge window, and if it passes, it finally makes it into a mainline release.

How is this a chaotic development process?


“You know what I found? Right in the kernel, in the heart of the operating system, I found a developer’s comment that said, ‘Does this belong here?’ “Lok says. “What kind of confidence does that inspire? Right then I knew it was time to switch.”

This proves that Linux developers does not review all code, nor understand what the code does.

A 2005 quote from some 'Lok' about a comment he found in the Linux source code, without any context whatsoever as to what the comment even related to, is something you claim to be proof of Linux developers not reviewing or understanding the code? Your trolling seems to know no bounds.

Now that you seem to have given up championing Solaris you've instead embarked on an anti-Linux crusade; I guess I shouldn't be surprised.

You ought to find something more constructive to spend your time on. Instead of hating and attacking Linux, why not focus on highlighting the qualities of the operating systems you like? I have never understood your kind of behaviour.

It is wildly chaotic with lots of contributions from everywhere, including from NSA.

How is getting code contributions chaotic?

These contributions, if they make it into the kernel mainline release at all, only make it in once they've been audited and tested.


http://www.kerneltrap.org/Linux/Active_Merge_Windows
"The [linux source code] tree breaks every day, and it's becomming an extremely non-fun environment to work in.
We need to slow down the merging, we need to review things more, we need people to test their f--king changes!"

You dig up a 5-year-old e-mail where a developer states that they need to slow down the amount of merging during the merge window, or make the merge window longer, as proof of what, exactly?

That five years ago they had a dialogue about the amount of code which should be merged during a merge window?

Reply Score: 5

RE[3]: Now we know what happend.
by Kebabbert on Sat 13th Jul 2013 11:47 UTC in reply to "RE[2]: Now we know what happend."
Kebabbert Member since:
2007-07-27

" There is no way you can keep up and audit all changes
Only code that is actually a candidate to make it into the kernel needs to be audited, are you saying code gets merged into a mainline release without being audited? Show me some proof. "
I am saying that the code audit and review process is crippled because of the high code turnover. No one can keep up with the amount of new code that gets incorporated into Linux. I showed you proof in the links. For instance, the last link says "we need to review things more". Read it.




"HP spends millions of USD to keep up with the device drivers, because Linux upgrades frequently breaks the drivers.
Citation needed. "
http://www.osnews.com/permalink?561866
http://www.osnews.com/permalink?561858
But this should not come as a surprise. You know that Linux upgrades break software and device drivers. You have experienced it yourself, if you have used Linux for some time.


" OpenBSD seems to be much rigorous with the code review and audit.
No argument here, OpenBSD is the most security oriented operating system I can think of, of course it leads to drawbacks like being very slowly developed. Also OpenBSD's focus on security above (pretty much) all else doesn't mean that Linux has 'bad' security in any way. "
I am not saying that Linux has bad security; I am saying that Linux has some problems in the code review and audit process. Just read my links. Much code gets accepted without anyone knowing what it really does. For instance, the link with "Does this belong here?"



"Linux has a chaotic development process and all code is not reviewed nor understood, which makes Linux a haven for NSA and other malicious users.
Bullshit, how is Linux development chaotic? "
Maybe "chaotic" was not the correct word. But fact is that the code review process is too sloppy, just read the links to Linux devs who complain that they need to review things more. So much Linux code gets accepted from anyone that no one can review all the new code. Just read my links.



" “You know what I found? Right in the kernel, in the heart of the operating system, I found a developer’s comment that said, ‘Does this belong here?’ “Lok says. “What kind of confidence does that inspire? Right then I knew it was time to switch.” This proves that Linux developers does not review all code, nor understand what the code does.
A 2005 quote from some 'Lok' about a comment he found in the Linux source code, without any context whatsoever as to what the comment even related to is something you claim to be proof of Linux developers not reviewing or understanding the code? "
I doubt OpenBSD devs accept so much code that they don't know what all of it does. This link is an example of Linux devs accepting code without knowing what it does. It does not inspire confidence in the Linux code review process, does it?


Your trolling seems to know no bounds. Now that you seem to have given up championing Solaris you've instead embarked on a anti-Linux crusade, I guess I shouldn't be surprised.

-I have not given up on Solaris. The thing is, when we talk about security, OpenBSD has the best reputation, so I advocate OpenBSD.
-When we talk about innovative Unix, I advocate Solaris because it is the best (everybody talks about ZFS (BTRFS), DTrace (SystemTap), SMF (systemd), Crossbow (Open vSwitch), Containers (Linux has copied this as well), etc.). Linux has copied everything that Solaris has.
-And if we talk about stable OSes, then I advocate OpenVMS (OpenVMS clusters are brutal, the best in the world, with uptimes surpassing mainframes, measured in decades).
-When we talk about innovative OSes, I advocate Plan 9 (my favourite OS).
-Best realtime Unix: I advocate QNX.
etc.

Maybe you missed all my posts where I say that, compared to OpenVMS, all Unixes are unstable and cannot compare? It seems that you believe I claim Solaris is best in every way: security, uptime, performance, realtime, etc.? Well, I don't. Solaris is the most innovative Unix, that is a fact (everybody tries to mimic Solaris; why, if Solaris is bad?).

The thing is, Linux supporters believe Linux is best in every way, when in fact it is terrible. It has bad scalability (show me any 32-CPU Linux server for sale; there are none, because Linux does not scale to 32 CPUs), bad stability, bad security, and bad code (according to Linux kernel devs; I can show you numerous links on this), etc.

I would have no problem with Linux being bad if Linux did not attack everyone, including OpenBSD ("m*sturbating monkeys" because they focus on security), Solaris (wished it was dead), etc. So my question to you is: why are you attacking everybody and every OS? Why not leave them be? Then we would not have to defend ourselves. It is Linus Torvalds who has attitude problems with his big ego, and he attacks everyone, including his own developers. Are you surprised that other OS supporters get upset when they are attacked? Why?



How is getting code contributions chaotic? These contributions, if they make it into the kernel mainline release at all, only make it in once they've been audited and tested.

But no one has time to audit everything. Just read my links: "we need to review more". Too much code is accepted all the time, and too much is rewritten all the time. I have many links to Linux kernel devs where they say that the Linux code quality is not good. Do you want to read all my links? I can post them for you if you wish.

Sure, some links are a few years old, but I doubt the process is better today, because Linux is larger and more bloated than ever and more code than ever gets accepted every day. In the earlier days, less code was accepted. Today too much code is accepted, which no one has time to review thoroughly, so the review process is worse today.

Reply Score: 1

RE[4]: Now we know what happend.
by Valhalla on Sat 13th Jul 2013 19:28 UTC in reply to "RE[3]: Now we know what happend."
Valhalla Member since:
2006-01-24

No one can keep up with those amounts of new code that gets incorportaed in Linux. I showed you proof in the links. For instance, the last link says "we need to review things more". Read it.

A link from 5 years ago where a developer says that they need to review code more before it enters the merge window so as to minimize the breakage that occurs during the merge window does NOT mean that code gets incorporated into Linux without review.

It's proof of absolutely nothing of the sort.

Code that breaks during the merge window is either reviewed and fixed or it doesn't make it into a mainline release at all, so your bullshit about untested code getting into mainline is just that, bullshit.


But this should not come as a surprise. You know that Linux upgrades breaks software and device drivers. You have experienced it yourself, if you have used Linux for some time.

Your links don't show one shred of fact to support your claim of HP spending millions of US dollars to keep up with drivers due to Linux changes.

All you've done is link to well-known Linux hater bassbeast/hairyfeet's unsubstantiated attacks on Linux, with nothing to back them up.

I've used Linux as my day-to-day OS for 6 years now, most of that time on a bleeding-edge distro (Arch), and I've had to downgrade the kernel twice in those 6 years: once because of an unstable network driver during a large network rewrite, and once when I had just recently switched to Nouveau, which became unstable against a new kernel upgrade.

I also had my Wacom Bamboo functionality fail with an upgrade of the xf86-input-wacom package which led me to downgrade said package while waiting for a fix.

That's three problems where I had to downgrade in 6 years, and these were all fixed within one to two weeks, allowing me to upgrade again with full functionality/stability.

Again, this is on a bleeding-edge distro; stable distros won't use the bleeding-edge packages, they will wait until the packages have gone through a lot more testing and regression/bug fixing. So if I'd been using a stable distro I wouldn't have been bitten by any of the above.

So no, if you had actually used Linux for 'some time' you'd know that the whole 'kernel upgrades continuously crash drivers' claim is nonsense coming from people who don't even use Linux, just like you.

Not even proprietary drivers are a problem in practice: while they do break between kernel upgrades, proprietary hardware vendors like NVIDIA and AMD continuously recompile their drivers against the new kernel versions.

Just read my links. Much code gets accepted without anyone knowing what it really does. For instance, the link with "Does this belong here?"

Stop lying. You have shown absolutely zero evidence of any code being accepted without anyone 'knowing what it really does'; it's nothing but your own fabrication.

The link with 'does this belong here' means absolutely nothing. There's no context whatsoever, and you'll find questions like this in any large code base where many developers collaborate: one developer new to a part of the code questions a piece of code or a function, and other developers who know the code respond.

You trying to pose this unsubstantiated quote by some guy named 'Lok' as some proof of 'code getting accepted without anyone knowing what it really does' only shows how desperate you are to downright lie in order to push your agenda.

But fact is that the code review process is too sloppy, just read the links to Linux devs who complain that they need to review things more.

You've shown no facts to support your claims at all. Developers complaining that code needs more review before it enters certain stages doesn't mean that any unreviewed or sloppily reviewed code ever gets into the Linux mainline releases. And there are ALWAYS going to be complaints about 'more code review' in ALL large projects; by itself it proves nothing.


So much Linux code gets accepted from anyone that no one can review all the new code. Just read my links.

I've read your links; they say nothing of the sort. Any code that gets into a Linux mainline release will have had extensive review and bug/regression testing during several stages. Stop lying.

The thing is, Linux supporters believe Linux is best in every way,

I'm a Linux supporter and I certainly don't claim it is best in 'every way', as an example I prefer Haiku OS for desktop purposes.

when in fact, it is terrible.


Linux has bad stability, it has bad security, The code is bad (according to Linux kernel devs, I can show you numerous links on this), etc

More links? More quotes from a mailing list post 5 years ago where a developer is unhappy with some part of the development?

Bad stability and security? Based upon what? Compared to what?

If Linux were anywhere near as 'bad' as you try to portray it, it would have been abandoned ages ago instead of being used practically everywhere. You've offered nothing even remotely fact-like to support your claims. It dominates supercomputers and HPC, and it's widely used in everything from mobiles to fridges to servers to desktops to embedded. It did not get there by being bad at stability and/or security.

That doesn't mean that it's the best in all these areas, but it sure as hell isn't 'terrible' in any of them.

So my question is to you: why are you attacking everybody and every OS? Why not leave them be?

What? Where am I attacking everybody and every OS? I'm not attacking ANY OS; you, on the other hand, are.

Then we would not have to defend ourself.

You are attacking Linux because you are angry at Linus for saying bad things about your favourite OSes; this pretty much explains your mentality and how you can resort to such desperate fabrications.

I don't agree with Linus statements on OpenBSD and Solaris, but I don't use Linux because I adore Linus, I use Linux because it works for me.

Unlike you, however, I don't hate Solaris just because a Solaris fanboy like you attacks Linux. That's just crazy, which sadly seems to apply to you.

But no one has time to audit everything. Just read my links "we need to review more".

Again, stop lying: saying they need to review more doesn't mean the code that actually gets into Linux mainline releases isn't properly reviewed. The link you posted was a 5-year-old post where a developer wanted better-reviewed code before it enters the merge window, to minimize merge-window breakage; the code in question won't make it into a mainline release until it has actually been properly reviewed.

In the earlier days, less code was accepted. Today too much code is accepted, which no one has time to review thoroughly, so the review process is worse today.

You fail to understand (or, more likely, simply ignore in order to perpetuate your lies) that just because code is accepted into the Linux project it doesn't mean that it ever makes it into mainline releases. And if it does, it does so after having gone through several stages, each with testing and review.

Reply Score: 5

RE[5]: Now we know what happend.
by Kebabbert on Sun 14th Jul 2013 13:44 UTC in reply to "RE[4]: Now we know what happend."
Kebabbert Member since:
2007-07-27

This post is in two parts, the links are in the second part.

"No one can keep up with those amounts of new code that gets incorportaed in Linux. I showed you proof in the links. For instance, the last link says "we need to review things more". Read it.

A link from 5 years ago where a developer says that they need to review code more before it enters the merge window so as to minimize the breakage that occurs during the merge window does NOT mean that code gets incorporated into Linux without review.

It's proof of absolutely nothing of the sort.

Code that breaks during the merge window is either reviewed and fixed or it doesn't make it into a mainline release at all, so your bullshit about untested code getting into mainline is just that, bullshit.
"
Thanks for your constructive remarks; you sound as pleasant and well-mannered as Linus Torvalds ("you are full of shit", "OpenBSD developers are m*sturbating monkeys", etc.). By the way, Andrew Morton said in an interview that he wished for a test tool set for Linux, because "we see so many regressions that we never fix". And are Linux developers ignoring bug reports? Etc. See further down for links.
http://www.linuxtoday.com/developer/2007091400326OPKNDV
http://www.kerneltrap.org/Linux/mm_Instability



"But this should not come as a surprise. You know that Linux upgrades breaks software and device drivers. You have experienced it yourself, if you have used Linux for some time.

Your links doesn't show one shred of fact to support your claim of HP spending millions of us dollars to keep up with drivers due to linux changes.

All you've done is link to well known linux hater bassbeast/hairyfeet's unsubstantiated attacks on Linux with nothing to back it up.

I've used Linux as my day-to-day OS for 6 years now, most of that time on a bleeding edge distro (Arch) and I've had to downgrade the kernel twice in those 6 years, once because of a unstable network driver during a large network rewrite, and once when I had just recently switched to Nouveau, where it became unstable against a new kernel upgrade.

That's three problems where I had to downgrade in 6 years, and these where all fixed within one to two weeks and allow me to upgrade with full funcitonality/stability....So if I'd been using a stable distro I wouldn't have been bitten by any of the above.
"
Jesus. You remind me of those people saying "I have been running Windows on my desktop for 6 years, and it has crashed only twice, so you are lying: Windows is stable!"

To those Windows users I say: it is one thing to run Windows at home with no load, no users and no requirements. Running a fully loaded Windows server with lots of users is a different thing. If you believe that you can extrapolate from your own home experience to enterprise servers, you need to get some work experience in IT. These are different worlds.

There are many stories of sysadmins complaining about Linux breaking drivers, and this is a real problem. As I said, even you have experienced this, which you confessed. And even though I predicted your problems, you insist it is nothing. You are too funny. I've told you exactly what problems you had, and you basically say "yes, you are right, I had those problems, but these problems are nothing to worry about; you are just lying when you say Linux has these problems". So... I was right all along. First you confess I am right, and then you say I am wrong. (For those mathematically inclined, this is called a contradiction.) ;)




So no, if you actually used Linux for 'some time' you'd know that the whole 'kernel upgrades continously crash drivers' is nonsense coming from people who doesn't even use Linux, just like you...Not even proprietary drivers are a problem in practice, as while they do break between kernel upgrades, the proprietary hardware vendors like NVidia and AMD continously recompile their drivers against the new kernel versions.

Of course no one has ever claimed that every Linux upgrade crashes drivers; no one has said that. But it happens from time to time, which even you concede. The problem is that vendors such as HP must spend considerable time and money recompiling their drivers. If you don't understand that this is a problem, then you need to get some IT work experience instead of just sitting at home toying with your PC and playing games.

The Linux device driver model is broken:
"Quick, how many OSes OTHER than Linux use Torvald's driver model? NONE. How many use stable ABIs? BSD,Solaris, OSX,iOS,Android,Windows, even OS/2 has a stable driver ABI....I'm a retailer, I have access to more hardware than most and I can tell you the Linux driver model is BROKEN. I can take ANY mainstream distro, download the version from 5 years ago and update to current (thus simulating exactly HALF the lifetime of a Windows OS) and the drivers that worked in the beginning will NOT work at the end."

I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance do the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

(Yes, I know, this link is a lie too. Why bother, you don't have to read it; you have missed all the complaints about the Linux device driver model. Even if Linus Torvalds says it is broken you will not believe him, so how could anyone make you understand?)



Stop lying, you have shown absolutely zero evidence of any code being accepted without anyone 'knowing what it really does', it's nothing but your own fabrication.
...
You trying to pose this unsubstantiated quote by some guy named 'Lok' as some proof of 'code getting accepted without anyone knowing what it really does' only shows how desperate you are to downright lie in order to push your agenda.

Jesus. There are numerous links about the bad code quality Linux has. Let me show you some links. How about links from Linus Torvalds himself? Would that do? Probably not. So, what kind of links do you require? Linus Torvalds will not do; maybe God is OK? If you don't trust Linus, do you trust God? Probably not either. I don't know how to make someone with zero work experience understand.

Sure, I have shown some links that are a few years old. But those "old" links do not disprove my point. My point is that for as long as Linux has been in development there have always been complaints about how bad the Linux code quality is. I have links from last year, links several years old, and every time in between. First, the original Unix creators studied the Linux code and said it was bad. And last year Linus Torvalds talked about the problems. And even today we all witness the problems that Linux has, for instance the broken device driver model. It has not gotten better with time. Linus Torvalds cannot convince you of the problems, and your own experience of the problems cannot convince you that Linux has problems, so how could I convince you? That would be impossible.

The rest of you can read the links below. To be continued...

Reply Score: 3

RE[6]: Now we know what happend.
by Valhalla on Sun 14th Jul 2013 19:32 UTC in reply to "RE[5]: Now we know what happend."
Valhalla Member since:
2006-01-24


If you believe that you can extrapolate from your own home experiences to a Enterprise servers, you need to have some work experience in IT. These are different worlds.

And if you believe you can extrapolate from my own system running a bleeding edge distro to that of companies running stable Linux distros on enterprise servers, you are moving the discussion into a 'different world' indeed.

Of course no one has ever claimed that every Linux upgrade crash drivers, no one has said that. But it happens from time to time, which even you confess.

This happens to ALL operating systems 'from time to time', as 'from time to time' there will be a bug in a driver if it has been modified.

This is why you run a stable distro for mission critical systems, which uses an old stable kernel where drivers (or any other part of the kernel) isn't being modified other than possibly having bugfixes backported.

I've had 3 problems in 6 years on a bleeding-edge distro; do you even understand the difference between bleeding edge and a stable distro like, for instance, Debian Stable?

Again, those three problems (over a six-year period) would not have bitten me had I used a stable distro, as those kernels/packages were fixed long before any stable distro would have picked them up.

The problem is that vendors such as HP must spend considerable time and money to recompile their drivers.

HP doesn't need to spend any time recompiling their drivers if they submit them for inclusion in the kernel (which is where 99% of Linux hardware support actually resides).

If they choose to keep proprietary out-of-tree drivers then that is their choice, and they will have to maintain the drivers against kernel changes themselves.

Again, extremely few hardware vendors choose this path, which has led to Linux having by far the largest out-of-the-box hardware support.

I'll leave you with this link: if HP, one of the largest OEMs on the entire planet, can't get Linux to work without running their own fork, what chance does the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

Is this some joke? What fork of Linux are you talking about? Do you know what a fork is?

The 'article' (4 years old) describes Dell as having sold a computer with a faulty driver, but if you read the actual story it links, it turns out it was a faulty motherboard which caused the computer to freeze. Once exchanged, everything ran fine.

Did you even read the 'article', what the heck was this supposed to show, where is the goddamn Linux fork you mentioned???

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More

A 6-year-old story where Andrew Morton (Linux kernel developer) complains about code contributions which haven't been tested to compile against the current kernel.

As such he must fix them so that they compile, which is something he shouldn't have to do: his job is to review the code, not to spend time getting it to compile in the first place.

A perfectly reasonable complaint which doesn't say anything negative about the code which finally makes it into the linux kernel.

Again, as shown by your previous comments you seem to believe that just because someone contributes code to Linux it just lands in the kernel and is shipped.


If you read the original (German) article, Linus doesn't say that 'the kernel is too complex'. He acknowledges that certain subsystems have become so complex that only a handful of developers know them very well, which of course is not an ideal situation.

It says nothing about 'bad Linux code quality'; some code categories are complex by nature, like crypto for instance. It's not an ideal situation, but it's certainly not a problem specific to Linux.


A 4-year-old article where Linus describes Linux as bloated compared to what he envisioned 15 years ago.

Linus:
Sometimes it’s a bit sad that we are definitely not the streamlined, small hyper efficient kernel I envisioned 15 years ago. The kernel is huge and bloated and our iCache footprint is scary. There’s no question about that, and whenever we add a new feature, it only gets worse.

Yeah, adding more features means bigger code. Again, this has nothing to do with your claim of 'bad Linux code quality'; again you are taking a quote out of context to serve your agenda.


The well-known back-story, of course, is that Con Kolivas is bitter (perhaps rightly so) about not having his scheduler chosen for mainline Linux, so he is hardly objective. Also, in this very blog post Kolivas wrote:

Now I don't claim to be any kind of expert on code per-se. I most certainly have ideas, but I just hack together my ideas however I can dream up that they work, and I have basically zero traditional teaching, so you should really take whatever I say about someone else's code with a grain of salt.

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/

A 5-year-old link describing problems with fixing regressions due to a lack of bug reports. He urges people to send bug reports regarding regressions, and he advocates a 'bugfix-only release' (which I think sounds like a good idea if the regression problem is still as he described it 5 years ago).

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows

Already answered this above.

For instance, bad scalability. There are no 16-32 cpu Linux SMP servers for sale, because Linux can not scale to 16-32 cpus.

You're still sticking to this story after this discussion?
http://phoronix.com/forums/showthread.php?64939-Linux-vs-Solaris-sc...

etc. It surprises me that you missed all this talk about Linux having problems.

Looking at your assorted array of links, most of which are from 4-5 years ago, it's clear that you've just been googling for any discussion of a Linux 'problem' you can find which you then try to present as 'proof' of Linux having bad code quality.

During this discussion you've shown beyond a shadow of a doubt that you don't have even the slightest understanding of how the Linux development process works: you've tried to claim that code which is submitted to Linux enters the mainline kernel without review, you seem to lack any comprehension of the difference between bleeding edge and stable, and you continuously take quotes out of context.

and this resulted in personal attacks from you?

You yourself admitted that you attack Linux because Linus Torvalds said bad things about your favourite operating system; you called it 'defence'.

I say that I find that crazy; again, by your logic I should now start attacking Solaris because you, as a Solaris fanboy, are attacking Linux. Yes, that's crazy in my book.

But it certainly goes right along with your 'proof' of Linux being of poor code quality, which consists of nothing but old posts from Linux coders describing development problems that are universal to any project of this scope.

Reply Score: 3

RE[5]: Now we know what happend.
by Kebabbert on Sun 14th Jul 2013 13:46 UTC in reply to "RE[4]: Now we know what happend."
Kebabbert Member since:
2007-07-27

kerneltrap.org/Linux/2.6.23-rc6-mm1_This_Just_Isnt_Working_Any_More
Andrew Morton complains that the code everybody tries to merge into Linux is not tested, and that sometimes the developer has not even compiled the kernel after the changes. This forces Andrew to fix all the problems, so poor Andrew writes "this just isn't working any more".
Swedish link:
http://opensource.idg.se/2.1014/1.121585

Linus Torvalds:
http://www.tomshardware.com/news/Linux-Linus-Torvalds-kernel-too-co...
"The Linux kernel source code has grown by more than 50-percent in size over the past 39 months, and will cross a total of 15 million lines with the upcoming version 3.3 release.
In an interview with German newspaper Zeit Online, Torvalds recently stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore.!!!!!"

Linus Torvalds and Intel:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloate...
"Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two per centage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds."

Linux kernel hacker Con Kolivas:
http://ck-hack.blogspot.be/2010/10/other-schedulers-illumos.html
"[After studying the Solaris source code] I started to feel a little embarrassed by what we have as our own Linux kernel. The more I looked at the code, the more it felt like it pretty much did everything the Linux kernel has been trying to do for ages. Not only that, but it's built like an aircraft, whereas ours looks like a garage job with duct tape by comparison"

Linux kernel maintainer Andrew Morton
http://lwn.net/Articles/285088/
"Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem(!!!!!) Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?
A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix."

Linux hackers:
www.kerneltrap.org/Linux/Active_Merge_Windows
"the [Linux source] tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"


Linux developer Ted Ts'o, ext4 creator:
http://phoronix.com/forums/showthread.php?36507-Large-HDD-SSD-Linux...
"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.!!!!!!)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "

What does Linux hacker Dave Jones mean when he says that "the kernel is going to pieces"? What does Linux hacker Alan Cox mean when he says that "the kernel should be fixed"?

Other developers:
http://milek.blogspot.se/2010/12/linux-osync-and-write-barriers.htm...
"This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now)."

Does Linux still overcommit RAM? Linux grants the requested RAM to every process, so it might promise more RAM than actually exists in the server. If that RAM is suddenly needed and is about to run out, Linux starts to kill processes more or less at random. That is really bad design. Randomly killing processes makes Linux unstable. Other OSes do not give away too much RAM, so there is no need to randomly kill processes.
http://opsmonkey.blogspot.se/2007/01/linux-memory-overcommit.html
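
To make the overcommit point concrete, here is a small sketch of my own (not from the linked article; the 64 GiB figure is an arbitrary example and it assumes a 64-bit system). Depending on the vm.overcommit_memory setting, a huge malloc() can succeed even though that much free memory does not exist, because pages are only backed by real RAM when they are first touched - and that is the moment the OOM killer may start shooting processes.

/* A small sketch of memory overcommit (my own illustration, not from
 * the linked article; assumes a 64-bit system, and 64 GiB is just an
 * arbitrary example size). Depending on /proc/sys/vm/overcommit_memory,
 * a huge malloc() may succeed even if that much free memory does not
 * exist, because physical pages are only assigned when the memory is
 * first written to - which is where the OOM killer can step in. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 64ULL * 1024 * 1024 * 1024;   /* 64 GiB */

    char *p = malloc(size);
    if (p == NULL) {
        /* Heuristic mode (overcommit_memory=0) may refuse an obviously
         * excessive request, and strict mode (=2) certainly will. */
        printf("allocation refused up front\n");
        return 1;
    }
    printf("malloc of 64 GiB succeeded; no physical pages backing it yet\n");

    /* Touching the pages forces the kernel to find real memory for
     * them; on a machine without 64 GiB free this is where the OOM
     * killer may be triggered. Left commented out on purpose. */
    /* memset(p, 0, size); */

    free(p);
    return 0;
}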

etc. It surprises me that you missed all this talk about Linux having problems. For instance, bad scalability. There are no 16-32 CPU Linux SMP servers for sale, because Linux cannot scale to 16-32 CPUs. Sure, Linux scales excellently on clusters such as supercomputers, but those are just large networks on a fast switch. The SGI Altix supercomputer with 2048 cores is just a cluster running a software hypervisor that tricks Linux into believing the Altix is an SMP server, when it is in fact a cluster. So there are 2048-core Linux servers for sale, but no 16-32 CPU SMP servers, because Linux scales well on a cluster but scales badly on an SMP server (one big fat server with as many as 16 or 32 CPUs, like the 1000 kg SMP servers with up to 32 CPUs that IBM, Oracle and HP sell). Please show me a link to a 16-32 CPU Linux server if you have one; there are none. If Linux scaled the crap out of other OSes, there would be 16-32 CPU servers for sale, or even 64 or 128 CPU servers. But nobody sells such Linux servers, because Linux on 16-32 CPUs does not cut it; it does not scale.

Regarding Amazon, Google, etc. - yes, they all run huge Linux clusters. But one architect said they run at low utilization, because Linux does not cope that well with high load in enterprise settings. Unix and mainframes can run at 100% utilization without becoming unstable; Windows cannot, and Linux cannot either.



Hey, I just talked about security. When we talk about security, OpenBSD might be a better choice than Linux - and this resulted in personal attacks from you? Maybe you should calm down a bit? And when we talk about innovation, I will mention Plan9 - will you attack me again then? What is your problem? Maybe you had a bad day?

Reply Score: 3

Comment by shmerl
by shmerl on Fri 12th Jul 2013 00:16 UTC
shmerl
Member since:
2010-06-08

Wow. Just wow.


Wow, really? This doesn't sound like news. They have been known to have a cozy relationship with governments for years (including backdoors and other such stuff, which was mentioned above already). And I mean many governments besides the US.

Edited 2013-07-12 00:17 UTC

Reply Score: 2

RE: Comment by shmerl
by lucas_maximus on Fri 12th Jul 2013 18:55 UTC in reply to "Comment by shmerl"
lucas_maximus Member since:
2009-08-18

TBH I couldn't give a shit. If they want to wade through my inane ramblings to my mates after work ... let them.

It is kinda like the comments on here. They are an irrelevance to anyone outside of this site.

Edited 2013-07-12 18:55 UTC

Reply Score: 3

Microsoft Responds
by runjorel on Fri 12th Jul 2013 01:04 UTC
runjorel
Member since:
2009-02-09

Check out the update at the bottom of this related article:
http://www.theverge.com/2013/7/11/4514938/nsa-could-pull-email-or-v...

According to Microsoft, they don't provide access in response to blanket requests.

Who knows anymore.

Reply Score: 2

Killer Timing
by gan17 on Fri 12th Jul 2013 03:15 UTC
gan17
Member since:
2008-06-03

Read about this at Ars just now. Gotta love the timing.

You had this article:
http://arstechnica.com/tech-policy/2013/07/nsa-taps-skype-chats-new...

...and you had this one posted a few minutes earlier:
http://arstechnica.com/gaming/2013/07/why-the-xbox-one-might-actual...

Busy day. Couldn't even read the usual Prenda Law articles without getting distracted.

Reply Score: 8

RE: Killer Timing
by Nelson on Fri 12th Jul 2013 12:00 UTC in reply to "Killer Timing"
Nelson Member since:
2005-11-29

Holy sh*t that Xbox One article is stupid. Wtf. Xbox team needs their heads checked.

Reply Score: 3

RE[2]: Killer Timing
by Soulbender on Sun 14th Jul 2013 06:46 UTC in reply to "RE: Killer Timing"
Soulbender Member since:
2005-08-18

I almost fell off my chair at "a Wi-Fi keyboard and mouse".

the Xbox One from the start could really let the system stand in for the office computer in many basic work situations.


The level of delusion at Microsoft is too damn high!

Reply Score: 2

As Per SCOTUS
by Lorin on Fri 12th Jul 2013 05:45 UTC
Lorin
Member since:
2010-04-06

ANY law that in any way contradicts the clearly written restrictions in the Constitution is void and never was valid; they go on to say that this also applies to court rulings.

So the excuse of "They made me do it" is pure BS.

Reply Score: 3

Are we really surprised?
by vitae on Fri 12th Jul 2013 23:30 UTC
vitae
Member since:
2006-02-20

Did we expect anything different? Corporations leaning on the government to protect against "piracy", and now the government expecting something back. Campaign contributions/bribes are only part of the deal.

Reply Score: 2