When you run smbd -V
on your Snow Leopard installation, you’ll see it’s running SAMBA version 3.0.28a-apple. While I’m not sure how much difference the “-apple” makes, version 3.0.28a is old. Very old. In other words, it’s riddled with bugs. Apple hasn’t updated SAMBA in 3 years, and for Lion, they’re dumping it altogether for something homegrown. The reason? SAMBA is now GPLv3.
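The check is quick to do yourself from Terminal; the exact formatting of the version string below is approximate, but the version is the one cited above:

```
$ smbd -V
Version 3.0.28a-apple
```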
Apple has included SAMBA for file sharing in Windows networks since 2002’s Mac OS X 10.2. However, recently, SAMBA switched to version 3 of the GPL, which includes protections against patent threats. As the GPLv3 quick guide states – “Whenever someone conveys software covered by GPLv3 that they’ve written or modified, they must provide every recipient with any patent licenses necessary to exercise the rights that the GPL gives them. In addition to that, if any licensee tries to use a patent suit to stop another user from exercising those rights, their license will be terminated.”
As a patent-happy company, Apple obviously doesn’t like this, so they didn’t have much of a choice. This also explains why Apple is still shipping such an irresponsibly old version of SAMBA with Snow Leopard: SAMBA switched to GPLv3 for version 3.2.0, released July 2008, the next version after the 3.0.x series. It doesn’t explain, however, why Apple has ignored 9 more point releases in the 3.0 branch, but alas.
Anyway, Lion will include a homegrown replacement for SAMBA, AppleInsider reports, called SMBX. SMBX supports Microsoft’s newer, faster and more efficient SMB2 protocol, used by Windows Vista and Windows 7, but doesn’t include NT Domain Controller support. The SMB2 protocol is proprietary to Microsoft, but the specifications are freely available. SMB2 support is coming in SAMBA 3.6.
Considering the dangers associated with using such outdated software as SAMBA 3.0.28a, this move is better than nothing. It will open up a whole new can of bugs, as when it ships with Lion, it’ll be largely untested, but at least it’s not 3 years old. I would’ve preferred Apple stuck to SAMBA, but heck, realistically, nobody ever expected Apple to work with GPLv3 software.
So, is it a fork of SAMBA? So is it open source as well?
No, it is Apple’s own implementation, compliant with SMB2.
http://en.wikipedia.org/wiki/Server_Message_Block#SMB2
It’s Cocoa APIs for SMB2 to be current with Microsoft’s standard.
Confirmed by test cases written by Samba.
For the simple reason that the MS SMB2 documents don’t match the real-world implementation.
MS was forced to hand over their alteration documentation to Samba and IBM, for the simple reason that SMB is not an MS protocol. It is IBM’s. This includes the SMB2 alterations.
MS also has to attend the compatibility test meetings for implementations of the SMB protocols hosted by Samba.
So no matter what, Apple will still have to do business with Samba. They might not be using Samba code, but hey.
MS is not in charge of certifying whether something is SMB2 or not.
All Apple should care about is that Windows Networks interoperate with OS X Networks for mounted resources.
Unless I am missing something, this means that Apple is paying Microsoft for using Linux-related technologies. Or they plan on adding this technology to a device whose software you can’t change. Do the iPads have Samba support on them now?
EDIT: Trying to think of what in the GPLv3 is making Apple dump Samba.
My guess: a patents clause
Hmm.. they also dropped gcc, so I don’t think it’s just the patent issue. Perhaps the fact that software has basically been losing more freedom every time a GPL revision is made.
No, I meant GPLv3 has a patent clause; GPLv2 did not. The latest versions of GCC and Samba are both licensed under the GPLv3 instead of the GPLv2.
“Trying to think of what in the GPLv3 is making Apple dump Samba.”
Well, it’s got to be either software patents or hardware restrictions on software modifications. In Apple’s case, probably both.
iPads do not have support out of the box, but there are some apps that do.
That’s not paying Microsoft for Linux technology. The SMB protocol was developed by Microsoft, SaMBa being an open-source implementation. As the article states, Apple will implement its own version of MS’ SMB2.
Wrong: the SMB protocol was designed and developed by Barry Feigenbaum for IBM. It was later extended and continued to be developed by Microsoft, but it is not per se a Microsoft technology.
Well, that means it’s still not a Linux technology; it is an MS/IBM technology, which is what the original poster was trying to address.
Furthermore, since Windows is the only OS that natively uses SMB, it is a de facto MS technology, especially since, as you noted, it continues to be developed by MS, and not IBM.
When I said in my original post that Apple was paying Microsoft, I was not trying to imply that Microsoft owns rights to SMB. Just that Microsoft tries to collect royalties off of Linux from a lot of people.
Further, Windows is not the only OS that uses SMB by default. It is also installed by default on most Linux distros.
Microsoft does NOT control SMB. They can write their own version of it, but they have to give those changes to the SAMBA team per EU sanctions.
The original article made the claim about Windows 7 SMB being more efficient…what a crock. Do a network capture of an SMB exchange versus an SMB2 exchange on Windows 7 sometime for something easy like deleting a file. SMB only uses a few packets to complete the actions. SMB2 uses hundreds. Now I will grant SMB2 may support more Windows mannerisms, but I wouldn’t call that efficiency.
As for protocols, I have found iSCSI to be about the same as NFS. I get ~95% of the total possible bandwidth at home. So on a gigabit network, figuring in 20% overhead for the networking, you can get a maximum of 800 Mbit/s, or 100 MB/s. I generally run around 92-95 MB/s throughput. SMB gives me a bit less, but since it’s what Windows can use, I live with it.
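The arithmetic behind that estimate can be sketched in a couple of lines (the 20% overhead figure is the commenter’s assumption, not a fixed property of Ethernet):

```shell
#!/bin/sh
# Back-of-the-envelope throughput estimate from the comment above:
# gigabit link, assume ~20% networking overhead, 8 bits per byte.
LINK_MBIT=1000
USABLE_MBIT=$((LINK_MBIT * 80 / 100))   # what is left after overhead
USABLE_MBYTE=$((USABLE_MBIT / 8))       # convert Mbit/s to MB/s
echo "$USABLE_MBIT Mbit/s = $USABLE_MBYTE MB/s"
```

Which matches the 800 Mbit/s / 100 MB/s ceiling quoted above, and makes the observed 92-95 MB/s look like a very healthy ~95% of the practical maximum.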
This has nothing to do with the EU, which forced an interoperability decree on MS AFTER the Samba devs had already done all the work. It was not needed, because the Samba devs did such an awesome job they didn’t really need the documentation. Just because the EU forces MS to share the documentation does not mean that MS doesn’t control the direction of SMB: they can change it as much as they want, they just have to document it after the fact.
It may be installed by default, but SMB is not a native file-sharing technology of UNIX; NFS and its like are. SMB is bolted on to the side of Linux, not part of the OS.
You could argue the same for Windows then, as it’s user-space drivers that are optionally (albeit pushed by default) installed along with the other networking stacks.
“Windows File and Print Sharing” services can be stopped and even uninstalled just like the SAMBA daemon can.
SMB also doesn’t support the file permissions of nearly all the file systems used by Linux and various Unices…
Well that’s not entirely true.
If you’re running a SAMBA server and set the host filesystem to read only, then the SMB protocol can’t overwrite that (as you would hope to expect). Same goes for owner permissions too. SAMBA also allows me to set the executable permission which isn’t present for Windows filesystems.
In fact, SAMBA exposes all of the host filesystem’s permissions, from user and group to the rwxrwxrwx bits and the special permissions (e.g. set-UID).
Now I couldn’t comment on whether these permissions are visible in Windows (it wouldn’t surprise me if they weren’t, as they’re not native permissions for a Windows filesystem), nor on whether this is native to SMB or a SAMBA hack, but nonetheless all the file and folder permissions are there on both my BSD and Linux SMB hosts when viewed with a Linux SAMBA client.
I’d be interested to know if this is just a *nix-only SAMBA hack or if there is official support in SMB for other filesystems’ permissions – and if the latter, whether Windows supports them or not (I’m guessing not?).
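For what it’s worth, the permission behaviour described above is largely driven from the share definition in smb.conf; a minimal sketch (the share name and path are made up for illustration):

```ini
; Hypothetical share showing Samba's POSIX permission controls.
[projects]
   path = /srv/projects
   read only = no
   ; force newly created files/directories to sane POSIX modes
   create mask = 0664
   directory mask = 0775

; In the [global] section, "unix extensions = yes" lets capable
; (i.e. *nix) SMB clients see the full POSIX permission set,
; which is why a Linux SAMBA client shows them while Windows may not.
```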
Windows with Services for UNIX displays POSIX permissions under SMB1. http://www.unixsmb2.org/ Full POSIX permission support for SMB2 has not been written yet.
But once it is, as per normal there will be an update for Windows to support it.
Basically, MS writes the side of the SMB protocol for their filesystems; Unix groups write the SMB protocol for POSIX platforms.
This is the big problem with thinking SMB is MS-only. It’s a jointly written protocol. Always has been.
The problem that comes about is that a lot of Windows applications are built to presume the filesystem on the other end is NTFS, so they don’t use the extra features built into Windows to support POSIX filesystems and other filesystem types properly.
From an operational point of view SMB2 is not complete yet. But of course MS will want to deploy it anyway, creating lockouts.
Basically, if you read just the MS documents for the SMB protocols, you miss the Unix add-ons that MS does support, since MS does not write those.
So yes, an SMB implementation based on the MS documents alone is incomplete. You always have to refer to Samba’s collected documents on the subject when implementing SMB, to implement it correctly.
The SMB (Server Message Block) protocol was not developed by Microsoft. It was developed by IBM, who made the protocol open for everyone to use. Microsoft launched its embrace-and-extend policy on this and renamed the result CIFS (Common Internet File System), which, like the Holy Roman Empire (neither holy, Roman, nor an empire), was neither common, Internet-related, nor a file system.
It then launched Active Directory, which was an embrace-and-extend proprietary combination of the open SMB, Kerberos and LDAP standards and software, with deliberately added incompatibilities.
Thanks should be given to Tridge and Jeremy for SAMBA and their ability to get round MS’s road blocks and to promote compatibility. They chose GPLv3 to shove it in the face of those who try to use patents to stifle innovation like Apple and MS.
Actually, the original SMB protocol is an IBM invention.
Obviously, it is scary to rely on GPLv3, so Apple will try to replace all GPLv3 software eventually. Xcode 4 uses LLVM already, dumping the GNU compilers; Samba is next. And the existing Samba is damn slow compared with Windows’ built-in networking, so I’m glad it will be replaced; I hope it will be faster.
Saying that Samba is slow is complete utter bullshit. Those of us that use it daily in large networks know otherwise.
Stop spreading FUD to support the agenda of your employer.
Much faster than sftp at least, though that may be due to the lack of encryption in the protocol.
I use it daily at home. My photo collection sits on a Synology server, and I access it from all my home computers, including Mac OS and Ubuntu machines. The slowest machine is the Windows XP one, but its access is fastest, I guess because of intensive caching. I use NFS on the Unix machines; it is also slow, but seems a bit faster.
I don’t want to comment on the employer thing; that is FUD.
Synology is just a Linux box so chances are it’s running SAMBA. Thus your anecdotal evidence is complete bullshit because if SAMBA was the bottleneck, it would run at the same reduced performance regardless of whether the guest platform was XP, OS X or Linux.
In fact, unless you’re specifically mounting Synology’s remote share on your Linux guest, the chances are you’re not even using SAMBA’s FUSE modules to browse SMB shares on that box, but instead whatever CIFS APIs your desktop environment ships with (FYI, Nautilus and KDE both have their own SMB bindings).
This is the great thing about anecdotal evidence – it’s usually wrong.
It does. It is slow. It is processor bound.
Are you saying that Linux does not use the Samba client? Or that it runs faster without FUSE? The intention to insult me is clear, but the point is not. Maybe FUSE is the bottleneck, maybe. I do mount shares. But a user measures the overall experience anyway.
SAMBA wouldn’t be maxing out your CPU, so clearly the bottleneck isn’t your processor.
No, I’m saying Linux doesn’t exclusively use SAMBA client. I’m saying desktop environments like GNOME and KDE have their own SMB clients built in as well.
Also, FUSE wouldn’t make any difference here because your network speed is not going to be faster than the memory swapping between user space and kernel space.
No, GNOME and KDE don’t have their own SMB clients. They have frontends to Samba tech.
I knew KDE (or rather Dolphin) navigated SMB shares using KDE libraries in kdelibs, but after having a dig around it seems you are right and that kdelibs then references samba APIs (and thus kdelibs needs to be compiled with SAMBA dependencies to support SMB).
Most front ends call SAMBA directly, whereas Dolphin calls kdelibs, so I made the (incorrect) assumption that KDE’s SMB support was built in-house.
Agreed.
SAMBA has a long and very well-documented history of stomping Microsoft at file-sharing performance via SMB. Last I really looked (admittedly, some time ago), it frequently won by as much as 30% or more.
Saying SAMBA is slow is like saying Microsoft’s own is glacial.
It’s actually been my experience that the opposite is true. A Windows to Windows copy is slower than a Windows to Linux or even a Linux to Linux (though why you’d use Samba between two Linux machines, I’ll never know. It’s like some people I know using SMB to communicate between an IBM AS/400 and Linux. Apparently they weren’t aware of NFS, or the billion other ways of setting it up.)
The reason, probably, is that Windows reserves part of the bandwidth, something like 30%, for other communications. And performance will depend on file size; it is particularly slow with small files. In our scenarios the difference would be the servers: I have a slow server and fast clients; you probably have a fast server.
Actually, for backups I use rsync. It is the fastest way, if you configure it correctly, and it is available on all platforms. And it uses the entire bandwidth on Windows.
Sorry for typos, typed on an iPad.
Agreed, that’s what I use, rsync is fast and easy.
There is nothing free about the GPLv3, forcing political agendas is not free.
The GPL is supposed to be infectious. RMS developed it so everyone would have the right to modify their software.
GPL IS SUPPOSED TO TAKE OVER THE WORLD WITH ITS SUPERIOR SOFTWARE, BEING LINKED TO EVERYTHING OF VALUE.
Though I’m a fan of all open source software in general, I’ve always preferred BSD style licensing to the GPL myself, for several reasons.
Now I wonder…will Apple license their new SMB tech under a BSD license, or will they keep it closed like the GUI stuff in OS X?
It would most likely be released under the APSLv2 like all of Apple’s system software.
http://www.opensource.apple.com/release/mac-os-x-1066/
Seems to me that the only thing Apple doesn’t open source is GUI-related components… and drivers they’ve licensed from third parties.
Personally I have no interest in SMB/CIFS, but maybe the Apple implementation will be of better quality than Samba… perhaps it will get forked into an open-source project to work on portability to BSD/Linux.
Grand Central Dispatch is under the Apache License version 2, so that’s also a possibility.
I cannot imagine why they would not open source it, since it is not really a differentiator, only an attempt to avoid falling into the GPLv3 patent hell trap (where you either have to negotiate a patent license for all users and redistributors, or stop distributing the software).
GCD is very much a system library. And as GP said, they usually do opensource those.
GPLv3 does not give you an option to negotiate. It tells you that you must give a free license to any downstream receivers of the software.
You have to negotiate with the patent owner to grant a license to ‘downstream’ if you are not the patent owner. If you cannot negotiate such a deal (which is likely), either the patent holder or the GPLv3 prevents you from distributing the software, depending on whether you enter a patent contract or not.
I thought the fact that a thing was GPLv3 included the patent permissions for downstream. Wouldn’t Apple simply be obliged to not sue over related patents and make the source available?
Anyone want to provide some constructive criticism and point out why this question is so offensive, off-topic or otherwise justifiably modded down? At least have the stones to support your disapproval.
If you are not the patent owner, the patent clauses in the GPLv3 do not apply. There is no need to negotiate anything on behalf of anybody.
Yes, you do have to in a common situation:
– Vendor X redistributes GPLv3ed software.
– Vendor Y sues vendor X over patents violated in this software.
– Now there are three possibilities:
1. Vendor X negotiates a patent license for all of downstream.
2. Vendor X negotiates a patent license for its own redistribution, but not downstream redistribution. Vendor X is in violation of the GPLv3.
3. Vendor X does not negotiate at all. X is in compliance with the GPLv3, but in violation of Y’s patents.
(1) would be the only positive outcome for vendor X, but Y is not likely to make such blanket license agreements.
The GPLv3 is very clear about this:
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license […]
The patent clause is explicit in the GPLv3 to prevent things like the Microsoft-Novell deal.
Option 4. Vendor X is a member of a patent pool like the OIN and refers vendor Y to go have a polite chat with the patent pool. Since other members of the pool are using the same software, vendor Y’s problem has now got a lot larger.
Option 5. Contact http://www.patentcommons.org/ for assistance. This does bring IBM and others to the table.
Option 6. Vendor X is a member of http://www.protocolfreedom.org/, which is also a member of many patent pools, and tells vendor Y to go have a polite chat with them.
With options 4, 5 and 6, vendor Y is now most likely wishing they had never raised the patent issue in the first place. In a lot of cases there is no agreement, but they just disappear and never bother vendor X again.
Patent profit-making depends on divide and attack. When the attacked unify and counter, the profit-seeking companies run for the hills.
Now, if you have done 4, 5 and 6 at the same time, the odds of vendor Y offering a deal to make it go away are kinda high.
The OIN patent pool is minuscule (and too scattered) compared to the patent arsenal of the average vendor. Vendor Y doesn’t care if vendor X is part of the OIN if the OIN only holds one patent against Y while Y holds 20 against X.
Besides that, the OIN also limits counter-measures, since OIN members promise not to assert patents over the ‘Linux System’.
If the OIN were so effective, why did one of its founding members (Novell) have to make a patent deal with Microsoft? I wouldn’t be surprised if some of the other members also have cross-licensing deals with Microsoft over patents that affect e.g. Linux (IBM, Sony?).
Patent litigation can kill small to medium-sized companies. It’s often easier to come to some agreement, which is, unfortunately, not possible under the GPLv3.
On the aggressive side, there is also an agreement in there over the ‘Linux System’ to use their patents defensively.
Please be aware of what the MS-Novell deal was: MS paid Novell a large block of cash for usage of Novell’s patents. A company in money trouble has a large problem saying no to that. Also notice, now that Novell is being sold, that MS is doing everything it can to get its hands on Novell’s patents, and a lot of parties are doing everything they can to make sure MS does not. They are high-risk weapons to MS. MS was trying to disarm the OIN in particular areas when it did that deal.
Also, if you watch carefully, MS stays well clear of touching Linux. MS will touch closed-source Android front ends, but you don’t see a case against the Android kernel or core.
IBM’s deal with MS dissolves if anything IBM has an interest in is attacked by MS. This is the same with every patent deal with IBM. Yes, IBM does use Samba. So contacting the OIN to inform IBM would bring IBM’s patent pools to the table. Please be aware that IBM is the biggest patent holder in its own right.
The combined pool of the OIN is large, and unless you have an agreement with all members, the pool is still huge.
IBM holds patents on most of the electronic means of searching for patent infringement in code. So proving your case in court could put you in breach of IBM patents.
IBM is not alone in doing patent agreements this way. Oracle and Red Hat also do patent agreements on the model that if anything they are using is attacked, all bets are off.
Really, I don’t get where you get minuscule from. There are of course the odd one or two patents in key locations the OIN cannot help you with, like FAT long-to-short filenames.
That limitation is not as limited as it appears. Since the OIN hosts a list of projects classed as key to Linux’s existence, any attack on those projects is classed as an attack against the ‘Linux System’, no matter where it happens.
The ‘Linux System’ list of key applications directly and officially agreed by OIN members does include Samba. So any attack against Samba is an attack against the Linux System, no matter what platform it’s on. Another thing: LibreOffice is on that list. Patent trolls go near them at their own risk.
So yes, by moving away from Samba, Apple has given up a patent shield. A very strong and lethal one that MS is trying to disarm.
Situations 2 and 3 would be exactly the same if vendor X distributed their software under the GPLv2, though. The only difference there is that you as the recipient of that GPLv2 work would have no idea which of situations 1,2 or 3 may apply.
Why all the GPLv3 hate?
What’s so wrong with GPLv3? People who write software should be able to determine the circumstances under which it can be used, right? What’s wrong with a bunch of people giving away code on the condition that it can’t be locked up by anyone else?
Certainly not if it’s, say, Apple saying “you can only run this on our hardware”, but I suppose because it’s GPL people view it with a different set of glasses – ones that have a rosy tinge…
Legally this is the only choice Apple has. GPLv3 is simply not compatible with large corporations, and Apple won’t be the only ones distancing themselves from products that use it. While the goals of those patent clauses might seem honourable, the legal advisers to major corporations would be foolhardy to advise they do anything other than step away from such products, because taking on responsibility for someone else’s patent compliance is just opening a can of worms, and certainly not something viable for big companies.
Aside from the fact that a EULA is different from an open-source code license, I’m actually not that upset over Apple’s EULA.
Re: patents
It’s obvious we have a lot of armchair legal quarterbacks here teaming up with Apple cheerleaders. I don’t think the patent clause is that anti-corporate. I’d like to hear the reasoning from an actual corporation on why they don’t want to use GPLv3 software due to the patent clause. The old “it’s because of patents” refrain is weak reasoning. It’s like asking why a company is bankrupt and being told “it’s because of money”.
Maybe, but in a world where software patents exist, the patent clause is a huge risk.
Nothing really. It’s their code and they can attach whatever license they want to it. I guess people are rubbed the wrong way by how it is always touted as an instrument of freedom and such and how it is, supposedly, infectious.
Personally, I don’t like the fact that it is god-knows-how-many-pages long and riddled with lawyerisms. I think RMS is a closet lawyer.
I much prefer the simplicity of the BSD license, aside from the fact that it’s more free.
A license is a legal contract, if you are going to do one, you have to do it with all the necessary details and forms. If not, a lawyer can come and bite you in the details.
For more intuitive explanations, you can write FAQs, quick guides, etc, like in
http://www.gnu.org/licenses/gpl.html
You can see FSF’s board of directors
http://www.fsf.org/about/board
the directors of the Software Freedom Law Center
http://www.softwarefreedom.org/about/team/
and mainly its boss: Eben Moglen, professor of law and legal history at Columbia University.
The FSF released the GPLv3, approved by the Software Freedom Law Center.
I hope you remember this next time you call someone “closet lawyer”.
Nonsense.
Only if you want to impose additional restrictions and clauses, such as what the GPL does.
Wow. Impressive! Or not. They approved their own license, pretty much.
Yeah, I’ll remember not to poke fun at RMS again. Honestly. It seems it’s a sensitive topic.
> > Contracts. if you are going to do one, you have to
> > do it with all the necessary details and forms.
> Only if you want to impose additional restrictions
> and clauses […].
Let me repeat: if you are going to do contracts, do them well, with all the necessary details and forms, with the supervision of a lawyer every time there is a substantially different contract that can have big effects. If you don’t, sooner or later you’ll learn it the hard way.
> Wow. Impressive! or not. They approved their own
> license, pretty much.
You talked as if it was only a “closet lawyer” who made the GPLv3; I showed you it was not this way. I gave you data. You can say “Wow” as many times as you want.
Well, Apple has already got bitten by the fact that GNU licenses have unforeseeable strings attached to who-knows-what. It’s not just the legal letter (which is thoroughly intertwined with propaganda) but there’s the “SPIRIT” of the license which comes back to haunt you after you think you’ve complied with all the written obligations (think Webkit, for example). And you still get a public bashing. So why bother with the GNUisances then?
There is nothing wrong with the GPLv3 like there is nothing wrong with corporations and individuals providing software on whatever basis they see fit. On the other hand talking about right and wrong implies a moral judgement that is inappropriate in what should be primarily an economic decision. Buyers are not more virtuous than sellers.
There is no such thing as a free lunch – anything in life will always come with some gotcha, some sort of requirement to give up something to gain something. Apple has decided, based on a number of reasons (licensing being one of them), to write their own in-house SMB implementation. Although GPLv3 may be the hugely obvious reason, I am sure that technical reasoning is probably more likely the motivation behind it.
If you think about it, there are free/gratis things, like oxygen (I still have not paid for it :-)). And there are also the actions caused by the love of a mother, or due to friendship. There are also people who do things for unknown people, for love or whatever good reason. For example, if you have made a program for yourself… you can share it! Yes! And have that good feeling that your work is being useful to other people; imagine them facing the same problems that you faced and solving them.
But when someone writes something and releases it under the GPL/LGPL, they’re sharing it, but there are conditions on that sharing: “I’m happy to share only if you’re happy to share the changes you made”. So it is a code-for-code transaction; the original author isn’t asking for money but instead for ‘payment’, if you can call it that, in the form of code. Personally I prefer the LGPL because it is a lot more flexible, but I’m happy with BSD and more liberal licences too.
Regarding those other reasons, there is no such thing as a pure altruist – the fireman who has the exhilaration of saving someone, the volunteer who is happy because she/he feels wanted/needed by the community, etc.
I’m sure that even you have done things for other people, not for you. Thinking about it, you can see that another person can behave similarly.
Yes, but it is all for some form of benefit. Evolution would have long ago weeded out the genes of those who help others for truly zero benefit. Hard but true – life isn’t a Hollywood movie.
If you drive a car with a catalytic converter, you are, in effect, paying for oxygen. A portion of your tax dollars, a portion of the cost of every chemical you buy, etc. goes to pay for clean air. Believe me, the air is cleaner now than when I was your age.
I have paid for purchases of cars, for their maintenance, etc., not for oxygen. I have no bills for oxygen consumption. This would be wrong when doing accounting, for example. I can breathe oxygen and nobody comes with bills to make me pay more for breathing more. It’s probable that your case is the same.
To say this, you would have to know that you are older than me. It puzzles me.
kaiwai,
“Although GPL3 may be a hugely obvious reason, I am sure that technical reasoning is probably more likely the motivation behind it.”
Under other circumstances that could be true, but in this case Apple stopped updates only after Samba went to GPLv3. If the reason were technical, there would be no reason for Apple to discontinue updates while their in-house version was in development. Therefore, we can be fairly certain the GPL switch was the cause.
True, but it depends on how they actually implement the integration between the Finder and SAMBA, given that SAMBA is licensed under the GPL and the said libraries cannot be linked to directly – unless of course the SAMBA libraries are licensed under the LGPL. It all comes down, I guess, to what the situation is behind the scenes – I’m sure there is a logical reason for it, but unless they come forward and explain it in detail, I see all explanations so far as mere rectum plucking (along with my own take on the matter).
Freedom is always relative. You will learn that there is no one true universal freedom. There is religious freedom and freedom from religion. There is political freedom and freedom from politics. There is GPL freedom and freedom from GPL. In that case, GPL enforces freedom from patents. On the other hand, it removes freedom to use patents. You are free to walk but it removes your freedom to stop someone from walking.
So yes, the GPL is free from patents and several other problems. Those who say it is not free have a political agenda that the GPL is trying to stop.
Software patents are horribly bad, anyway. So as a tool against software patents GPL3 has a point.
Of course, but try to explain to Joe and Jane Sixpack why they should care – they’re more concerned about the bogey men (God, Guns and Gays) put up by senators and congressmen than with actually demanding that these candidates put forward some coherent policies that extend beyond sound-bite-laden drivel. I know that in the case of New Zealand, outside the mainstream parties, issues such as patents, intellectual property, net neutrality and so forth are discussed right up there with health, education and social welfare, but once you get into the big mainstream parties such topics aren’t even on the radar.
I just don’t get it. In every talk about the GPL, there is always somebody raising the argument that Joe Sixpack does not care about the license. What is the point about Joe Sixpack exactly? What does Joe Sixpack have to do with or against the GPL?
It’s not about Joe Sixpack. I’m not Joe SixPack. Joe Sixpack does not write software and does not distribute it. Joe Sixpack does not know what software patents and the GPL are and he does not have to.
Joe Sixpack buys a computer and does not understand why it can’t interact with his smartphone or why he can’t read the document he wrote some years ago with an old word processor. Joe Sixpack does not want to know why it does not work. He just wants his damn computer to work.
OK, let me explain to Joe Sixpack why companies refusing the GPLv2/GPLv3 should worry him.
The main reason for not wanting the GPLv2/GPLv3 is that companies want exclusive control over your machine. Take the Apple App Store: Apple reserves the right to remove any application that competes with one of its own applications.
The effect: if Apple makes a buggy application for the iPad/iPhone, you cannot have a competing product.
The anti-Tivoization clause in the GPLv3 also requires Apple to allow users to install newer versions. Users want their computer to work. What are you going to do if Apple's version of the SMB protocol is buggy on your device and, worse, Apple decides not to allow an update? Now you have to buy a new device.
The GPLv3 is about protecting the user from being forced to buy a new product just because Apple (or whoever) decided not to update it. Thanks to the anti-Tivoization clause, third parties can still offer updates for those devices.
The clause has no requirement to hand over the App Store key, only a signing key that will allow the GPLv3 software to work.
So yes, the GPLv3 gives Joe Sixpack a better chance that his phone and computer can be made to integrate, because the device has better odds of getting working updates one way or another.
Being anti-GPL is pro-device-maker and anti-end-user.
So Joe Sixpack has to choose between pleasant, effective software and hardware systems from Apple today, while subjecting himself to the hypothetical danger that Apple might perform some nefarious act in the future. I think his choice is obvious; it's no wonder the market has spoken quite loudly in Apple's favor.
Joe Sixpack has also spoken quite clearly for Android and MS Windows. Of course, it's easy to forget that.
The problem with the Apple App Store is that I am not talking about hypothetical disagreements: there is already software missing from it due to possible competition with Apple products.
Really, in a lot of areas, Linux design for desktop usage simply hadn't been bothered with. The Linux world was caught on the back foot when the first Linux netbooks took off. Since then, a lot of work has gone into sorting out those internal design issues.
If Joe Sixpack saw patent reform as an important matter, then he would put pressure on the political establishment: if they want his vote, they have to come up with a patent reform package that would convince him to vote for said candidate. The issue isn't about licenses, it isn't about the GPL; it's about reading the thread before opening your mouth. The original post was by woozx:
I am replying EXPLICITLY about patents: not GPL3, not the snow man or the only gay eskimo in the tribe, but software patents. Look at his post, look at my post: you bringing up GPL3 has absolutely NO relevance whatsoever to what I posted. If you want to address GPL3 then reply to HIS post; if you want to talk about software patents and only software patents then reply to my post.
That is, of course, if Joe Sixpack understands how much patents are costing him.
The big problem is that Joe Sixpack doesn't understand exactly what effects patents have. A drug that can be made for 10 cents a shot gets sold for 200+ dollars per shot. Every day, people die from not being able to afford a drug they could afford if its price were capped at, say, 400% of the production cost.
Yes, Mr. Joe Sixpack, patents can cost you your life, because you might not be able to afford the drugs you need.
The same thing could, in time, happen in software.
There is nothing insightful about your comment. It represents the tired agenda of those that want to build walled gardens because they are afraid of what would happen in a world where ongoing collaboration was the norm.
Here's an idea: work like every other person does, including doctors and architects. You create some software, make some money by supporting it and extending it, but let your users fix your bugs if you won't do it yourself.
The GPL3 is a great license in a world where big companies and patent trolls sue small developers over their innovative ideas.
I have not been active in forums in a while, but seeing your senseless gibbering has given me the motivation to be.
Doctors and architects produce products that are not naturally duplicable and transferrable, so they do not quite compete with each other the way software and computer hardware companies do. Professionals like doctors, lawyers, architects and scientists mostly sell their personal talents and time to their customers. One doctor can’t see every potential patient in the world.
If you’re in competition where the ‘winner’ can more easily take over everything, like the software industry where the cost of duplication is minimal, collaboration is less desirable. GPL tries to force it, but any company that is as highly successful as Apple has been over the past decade will probably want to limit and control their collaboration with competitors.
Every other person does not give their work away, and certainly does not allow people to fix “bugs” after the fact. If a doctor replaces my shoulder, I am certainly not going to fix any errors he made myself; I'm going to go to another doctor. Same with an architect: if an architect designs my building, I am certainly not going to fix any problems myself; I will get another architect or an engineer to do it.
Most other professions do not use the OSS or Free Software model, and never can, or will.
You don’t fix problems yourself, but you get a detailed explanation along with radiographies/blueprints so you can get another doctor/architect/engineer to fix those problems.
I’ve yet to see any doctor/architect hiding information from their customers and/or forbidding them from hiring third parties to perform further modifications. It looks quite FOSS to me.
Really? Just because they don’t use lock-in? That’s really not FOSS, that’s just information about your body/building. On the other hand, the medical profession has patented genes (or at least tried) and did unauthorized experiments on patients (google syphilis and the American South, or the LSD trials on prisoners in the US).
Doesn’t sound much like FOSS to me, when you don’t have control over your own body.
It’s not just absence of lock-in, you get full information about what’s been done, how and why, be it a surgical intervention or a drug treatment.
Drugs in medical treatments are prescribed based on the active ingredient, and documentation on their full composition and effects (and hence the reasoning behind the prescription) is publicly available.
You can request any other doctor to take on your illness at any point and continue from there, with full access to all your medical records and complete info about your previous treatment.
When it comes to architecture, you have access not only to the blueprints but also to a comprehensive list of the materials used.
I wouldn’t consider subjects of unauthorized experiments to be actual “customers”. If they were to be considered patients, that would be a violation of the Hippocratic Oath, and as such an exception rather than the norm.
That still isn’t the same as software development, FOSS or otherwise; it’s a different issue altogether. One party (the doctor) has no choice: he isn’t working for himself building a product, he is dealing directly with you and your body. It’s your body, and you might die if he withholds information or treatment. You don’t have the choice of using another body if yours is malfunctioning; you can change doctors, and that’s it.
The other is software development, where the developer is building a product. He has the right to distribute the product anyway he wants, and you have the right to use a different product. You have the right to demand timely fixes to bugs, something a doctor can’t provide. The developer has a right to get paid. The doctor gets paid whether you die or not. It’s in his best interests to make all the relevant facts available to you. It might not be in the developers best interests to release the code. It may not be in his best interests to close the code. That’s up to him, and it’s up to you if you use it.
They are not the same, in any way.
You are grasping at straws there: both doctors and architects (the examples given, but you can find that in other professions) provide full comprehensive documentation of their work (ie. the “source”, not just abstracts), from procedures down to materials, which can be used by anyone else to make improvements or fix errors.
If the public availability of blueprints and used materials is not akin to FOSS (to the extent it can be, being completely unrelated professions) I don’t know what it is.
That was the OP’s point.
If you want to hire a doctor for an opinion and not actual surgery, then likewise hire a software architect to tell you how to build your own friggin’ word processor. Both will expect to be paid. That’s NOT what you’re paying for when you buy a copy of Word.
If you want to hire a drug company to research / invent new drugs for you, expect it to cost a lot and assume you’re going to be trying to set up ways to cover the R&D costs, unless of course you’re independently wealthy, in which case you can of course espouse communism in whatever profession you want.
Or maybe you’re thinking of an author who expects to be paid per copy of his book even if you feel like you could write it yourself or edit certain sentences – or re-print it yourself if the publisher goes belly up?
The software communism crap is just ludicrous.
You are, of course, under the old tired wrong assumption that developing FOSS inherently means not being paid.
Just in case you didn’t notice, FOSS relies on copyright. That is, you know, ownership. The kind of thing you don’t have with communism.
Anyone who says “hey, here come the communists” is not being reasonable:
http://www.linuxfoundation.org/about/members
But I am *free* to go to another doctor if I want a second opinion or feel that my current doctor isn’t doing his job correctly. Likewise with an architect…
That has nothing to do with the Free Software model, you can do that with any service or product. You are free to use Windows if you don’t like Linux, or vice-versa, that’s just freedom of choice.
But your doctor will not hide or lie about your disease (your “bugs”), or shouldn't, unless he is morally bankrupt. He will also provide all the diagnostic material he has on you so that you can get a second opinion or even change doctors. In other words, he will allow others to build on his work, and will share the results of his findings so that other specialists may treat you, and so on and so forth.
Same goes for architects and building design.
You may want to continue to be paid for the same piece of code ad infinitum. It's a very appealing proposition. That doesn't mean it is the most ethical one, particularly if you prevent users from helping themselves or their friends by turning the act of sharing into a crime, as most proprietary licenses do.
Microsoft does not stop you from helping yourself fix problems, the sheer volume of documentation MS publishes, the 2 free incidents, the free service packs, updates, free(as in beer) software is proof of that. They just don’t give you the code, or the ability to change that code. MS doesn’t sue websites with technical info or forums out of existence, they don’t stop people from sharing information on how to fix or work around problems.
I can’t speak about Apple, as I own none of their products, but it seems to me the behavior you are describing is not as prevalent as you seem to think.
So you write a book, sell 1 copy, and have others take it share it with all of their friends. Maybe make derivative books from it, change a chapter here or there, re-use your characters. No problems, right? You sold 1 copy, what are you complaining about?
If you want your own book to modify and share / re-sell, write it yourself instead of stealing it.
OSX’s SMB implementation is already crap. I can’t count how many kernel panics I’ve experienced because of it. Now, we’re going to get an untested from-scratch replacement? Oh, Joy! Yet another step by Apple to prevent me from using Linux or Windows servers for Time Machine backups!
They’ve had over two years to write and test it, all of it according to the specifications from day one. Why do you automatically assume it is going to be crap? Based on what evidence? Microsoft made some major rewrites in Windows 7, and look at what happened: one of the best-selling versions of Windows of all time.
I would like to add that, unlike what some “vendors” say, Windows is not sold; Windows remains the property of Microsoft®. I would also like to add that a version of Windows can increase its own percentage of the market while the total Windows share decreases, because the new version mainly ate the market of prior Windows versions and was not able to keep the total Windows percentage up.
That is what would be seen in those reports:
http://stats.wikimedia.org/archive/squid_reports/2011-02/SquidRepor…
http://stats.wikimedia.org/archive/squid_reports/2010-10/SquidRepor…
http://www.w3schools.com/browsers/browsers_os.asp
It would be interesting to see an actual report from OSNews’ own web stats.
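A toy example with invented numbers (purely illustrative, not real market data) shows how both claims can be true at once: a single version's share rises sharply while the Windows total still falls.

```shell
# Hypothetical market of 100 machines in both years:
#   Year 1: XP 70, Vista 20, other OSes 10
#   Year 2: XP 30, Win7 50, other OSes 20
# Win7 goes from 0% to 50% (a "best seller"), yet the
# Windows total drops from 90% to 80%.
year1_total=$((70 + 20))
year2_total=$((30 + 50))
echo "Windows share, year 1: ${year1_total}%"
echo "Windows share, year 2: ${year2_total}%"
```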
Actually, Vista had those major rewrites and was one of the worst selling versions of Windows of all time. Windows 7 improved on those rewrites and “got it right” resulting in being one of the best selling versions of Windows of all time.
Rewrites that are done with only internal testing very often break when tested by the larger public. (Look at OS X 10.0 for the best proof that Apple's rewrites can be very crappy.)
No, all Windows 7 did was actually finish the job; GDI acceleration should have been in Windows Vista rather than being held off until the next version. What you're talking about has nothing to do with rewriting and everything to do with Microsoft not finishing the job properly. They were in a situation where they either held it back for another six months and risked the thing turning into vapourware, or just got the damn thing out the door ASAP.
Where is the evidence that Apple’s implementation of SMB2 will be ‘feature incomplete’?
Mac OS X 10.0 was never designed for mass consumption; the goal was to get it out the door, which is why Apple offered a free upgrade to 10.1 and why they offered dual-boot configurations. Apple realised that 10.0 was a ‘work in progress’ and hence never thrust it upon people: end users always had the option of sticking with Mac OS 9 if they wanted, and many did go that route until 10.1 was released six months after 10.0.
Vista was a beta shoved out the door to give them something to sell while they built the REAL version, aka Windows 7, nothing more. If you look up the history of Longhorn (what was to become Vista), you'll see it was due out in 2003; not only did it end up nearly half a decade behind schedule, but they had to drop a whole bunch of features, such as WinFS, just to get it out the door at all.
I have to agree that, with a user base this loyal, Apple is nuts not to be beta testing this software in the wild. As for GPLv3, what did everyone expect? Thanks to the “TiVo trick”, GPLv2 is officially useless; hell, you might as well just release as PD for all the freedom GPLv2 gives you now. That is why I've been trying to warn people who think the GPL will keep them from getting boned by Android: Google went out of their way to avoid GPLv3, and we are already seeing the fruits of that avoidance, with more phones coming out locked down while running Android. What good is having the code if you can't actually use it?
So while I support Apple’s right to choose whatever software runs on their OS (and I still think MSFT should have been busted for the OEM clauses and NOT IE, as they should be able to bundle anything they want with THEIR product) I would advise all those that support FOSS to be pushing developers to switch to GPL V3 ASAP, because otherwise the corps can just TiVo trick your rights away. With eFuses and code signing being so cheap all GPL V2 does is give corps a license to rip off developers while not only giving nothing back but actually taking away the rights of users to modify and improve, which was the whole point of GPL in the first place, which some seem to forget.
For those that don’t know or have forgotten RMS came up with GPL in the first place because he wasn’t allowed the code to improve a printer driver at MIT. And this is why GPL will always above all support the right of the user to modify and improve the code so that GPL users don’t find themselves in the same situation RMS was in all those years ago.
You have a good point. And I’m hopeful they’ve done a good job.
However, a common problem with any software development is that you just don’t get some bugs until the software is out in the wild, and they can be doozies. Anything that’s hard to reproduce is less likely to be caught during alpha and beta.
But I'd say that their implementation of SMB2 will be a whole lot better than the current situation: dealing with a code base that is a horrible, nasty hack while trying to add new features without breaking something. In the case of Apple's SMB2 implementation, it is a clean break; without having to think about SMB1, they can make design decisions knowing that it'll be in a constant state of evolution, etc. It can introduce new bugs for sure, but at the same time I think the benefits far outweigh the current situation.
Apple’s NFS implementation was horrible until sometime after 10.5 – and NFS has been an open spec since the 80s. I somehow doubt that they’re going to come up with a reasonable 1.0 release of this mess.
The reason NFS was horrible is that they hadn't touched it in something like 20 years. Apple has 100 ‘engineering resources’ (a made-up unit for this example) that they can allocate: do they allocate those resources to features and parts of the operating system very few people utilise, or do they focus their energies on the parts that 90% of end users touch on a daily basis? SMB2 is fully documented, with no weird undocumented parts, and better still we're talking about something in high demand, a feature Apple markets to one and all, with Windows interoperability being one of Mac OS X's biggest strengths (according to Apple). It would make little or no sense for Apple to treat their SMB2 implementation like NFS, given how important it is.
Interestingly enough, WebDAV is apparently going to become the ‘protocol of choice’ for iPod touch/iPad/iPhone file sharing in the future, which makes me wonder whether some time down the line Apple will be looking to replace AFP in the long run.
By the way, one thing that hasn't been discussed yet: how does this change affect the SMB implementation on the AirPort Extreme and Time Capsule routers they sell?
Have fun; there is a reason why it took Samba over 12 months to get to a working implementation of SMB2. The SMB2 documentation does not match Microsoft's real-world implementation. This is why the EU sanctions against MS ran for so long: MS did not have valid documentation. As soon as the Samba developers started writing test cases from the MS documentation, error after error appeared. A lot of the errors were legacy code recycled into SMB2 that MS themselves had never documented.
Samba has built test cases exercising every feature of SMB2, and in the process has made MS correct bugs. There has been a joint operation between Samba and MS to sort out the mess.
Yes, SMB2 is an impressive example of why you cannot trust MS to write a protocol alone; their internal processes don't seem to be up to the job.
Also, for extra fun: the MS SMB2 documentation does not include the extensions that were added to the MS implementation to support old SMB1 functionality on the SMB2 protocol for Unix-based systems. Yes, these features are in Vista SP1 and up.
Basically, only one party has the correct information: Samba. Using GPLv3 code, in the form of those test cases, will still be required if Apple wants their code to work.
Most people are not aware that Samba hosts the meet-up where SMB implementations are compared. Each SMB implementation is free to add extensions. With SMB2, MS tried skipping out on this process; now that they are back in it, MS has to accept alterations from other parties as well.
Just accept the fact that SMB is not Microsoft's. Most NAS servers and other devices providing SMB are not Microsoft's. Samba is the most common, but there are about eight different implementations out there. A lot have not moved to SMB2 yet due to the blockade on the valid information needed to build one.
Out of the process of sorting out SMB2, Samba also gained patent protection from MS. That is why this is so funny: Apple's path may mean they end up having to hand over cash.
If you already use Linux, consider using Netatalk to serve your Time Machine backups!
After a couple of years of stagnation, development has caught up nicely. Netatalk now supports ACLs (POSIX ACLs from Netatalk 2.2, currently a beta release, and NFSv4 ACLs with ZFS on FreeBSD and Solaris/OpenSolaris since 2.1), AFP 3.3 (Netatalk 2.2), Time Machine backups as mentioned earlier, extended attributes, and network connect/reconnect.
If it's Mac OS X you wish to support from Linux, AFP might be as good as (or better than) SMB anyhow.
That is good to know. Can netatalk be used to serve up network homes?
Just don’t. You’ll quickly find Netatalk to be a terrible disappointment.
I think I might try NFS, but I was never able to get it working right.
Netatalk has worked very well for me for several years now, and it’s been the main reason why Mac users at the office never notice they’re dealing with a Linux server.
I haven’t used it to serve home directories, but it works just fine for file sharing in general and I see no reason why it shouldn’t work for home directories as well.
Integrating Netatalk in a Kerberos based SSO environment is also easily done.
I tried using the latest version of Netatalk. I had two major problems. One is that it fails authentication randomly about once out of every 5 or 10 connection attempts. When I reported it, they told me they wouldn't look into it unless a corporate customer had the problem. The second is that OS X will lock up hard if you put the machine to sleep in the middle of a TM backup. This is because Netatalk lacks a replay cache, and thus there is no solution; without a replay cache, the TM backup is likely to get corrupted if you sleep the machine during a backup. (Unless you use sleepwatcher to unmount on sleep, which I do.)
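For anyone curious, the sleepwatcher workaround mentioned above can be a tiny ~/.sleep script along these lines (a sketch only: the mount point is a placeholder, and sleepwatcher runs this script just before the machine sleeps):

```shell
#!/bin/sh
# Hypothetical ~/.sleep script for sleepwatcher: unmount the Time
# Machine volume before sleep so the AFP session isn't killed
# mid-backup. Adjust MOUNTPOINT to match your setup.
MOUNTPOINT="/Volumes/TimeMachine"
if /sbin/umount "$MOUNTPOINT" 2>/dev/null; then
    MSG="unmounted $MOUNTPOINT"
else
    MSG="nothing mounted at $MOUNTPOINT"
fi
echo "$MSG"
```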
Netatalk is NOT an option, because it’s completely inadequate.
I wasn’t aware that these problems existed with TM backups. Let’s hope that Netatalk 2.2 and its reconnect features cure this problem once and for all.
Concerning the problems with authentication, I’ve used Netatalk for several years with Kerberos only authentication and I have yet to experience the trouble you describe.
I’ve heard from other sources that PAM is quite buggy, so that may be the source of the trouble.
I would love to know how you manage to use kerberos directly without using a PAM plugin. From your comment, I’m inferring that although basically every Linux app that uses authentication uses PAM to communicate with the underlying authentication system (system-auth, kerberos, LDAP, etc.), you’re saying that Netatalk can use Kerberos directly. How do you make that work?
Thanks.
Yes, indeed it does!
-Actually, Netatalk uses its own set of authentication plugins that work independently of the surrounding architecture.
Netatalk can use Kerberos for authentication with just a single requirement met: the Kerberos keytab (e.g. /etc/krb5.keytab) needs to contain a service principal key for use with Netatalk. This is usually called afpserver/[email protected].
Create this service principal key in the following manner:
kadmin.local: addprinc -randkey afpserver/yourserver.example.org
(you can omit the realm as it’s implied by kadmin.local)
then:
kadmin.local: ktadd -k /etc/krb5.keytab afpserver/yourserver.example.org
to add the new key to the existing keytab.
Once you’ve created this from kadmin.local, you can go on to setup Netatalk to use the newly created key by creating a setup like this:
- -tcp -noddp -uamlist uams_gss.so -k5service afpserver -k5keytab /etc/krb5.keytab -k5realm EXAMPLE.ORG -fqdn yourserver.example.org:548
Basically, this tells Netatalk to bind to all interfaces, use the TCP protocol, and use GSSAPI (Kerberos 5) for authentication with the newly created afpserver service principal key from the system's Kerberos keytab, presenting a service of type afpserver that identifies itself as yourserver.example.org within the realm EXAMPLE.ORG, running AFP on port 548.
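On a related note, the shares themselves are defined in AppleVolumes.default. A minimal sketch might look like the following (the paths, volume names, and user name are placeholders, and the tm option that marks a volume as a Time Machine target only exists in recent Netatalk releases):

```
# AppleVolumes.default sketch (paths, names, and user are placeholders)

# Each user's home directory:
~/

# A general-purpose share:
/srv/afp/share "Shared Files" options:usedots,upriv

# A Time Machine target:
/srv/afp/tm "Time Machine" allow:youruser options:usedots,upriv,tm
```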
I hope this example helps!
Well, I followed your instructions, but no matter what I do, I get access denied when I try to mount. What might I be doing wrong?
I’ve tried a few other guides to this, and none of them worked either.
Any idea how I might diagnose it? Something’s clearly not setup right. BTW, I did restart all the krb and atalk processes.
How do you try to mount the share?
-If you’re using the links in Finder’s Sidebar, you’re out of luck, since Apple only uses its own Local KDC setup for this, which needs to be able to discover a remote KDC through a special plugin (LKDC Helper or OD Helper) …
Therefore this approach only works between two Mac OS X computers (server versions included).
Using connect to server, though, works fine for me – full single sign-on from my Linux-based KDC.
-Are you sure your AppleVolumes.default has been setup correctly?
That’s really weird. When I was using PAM, I had no problems with authenticating as any particular user or mounting from the Finder sidebar. But with Kerberos, this doesn’t work?
I think I’m just going to go back to using SMB. Netatalk is broken in more ways than just random authentication failures. For instance, if you put a Mac to sleep during a backup, it’ll lock up because Netatalk doesn’t support Replay Cache.
Thanks anyway for your help.
As stated previously, I suspect this issue is fixed from v2.2, which is currently in beta release.
The issue with Finder’s Sidebar is only an issue with Kerberos based single sign-on, due to Apple’s implementation of Finder’s Sidebar.
On Debian and Ubuntu (at least), Netatalk supports DHX and DHX2 password schemes which allows for single sign-on if you save your password in a hidden file in your home directory on the server.
I've never used PAM with Netatalk myself, and I've notoriously grown to hate Apple's Bonjour (mainly due to the little oddities Apple tends to introduce every now and then to distance themselves from standards just enough to be incompatible (did anyone say Microsoft?)), so I might not be able to help you with that, as much as I'd wish to!
Apple’s neglected Samba for the three years or so since it went GPL3.. of course it’s buggy as all get out because they haven’t bothered to give you any of the Samba updates since. You can thank Apple for leaving your system vulnerable the last several years too.
But, given the focus they put on their own products, the in-house developed replacement should be very usable when it ships (or did it ship already?). I don't see the in-house implementation leaving you without updates for several years either, so that's a bonus.
No. Lion's SMB2 implementation is orders of magnitude faster than Snow Leopard's or Leopard's. Yes, everything is relative until hard numbers arrive.
You’ll be happy.
Well, that’s good as the Leopards do not support SMB2.
In any case do you know if they’ll back port SMB2 to the earlier OS versions? It seems like they should to ensure a gradual transition, but Apple often doesn’t support older OS versions with features from newer operating systems.
Have they ever?
The app store? It was released for Snow Leopard.
I just think it would be silly to have Macs on a network with one using an old, broken version of SMB and others using the newer SMB2. I'm not sure how well that would work.
I'm not sure how a company that uses Macs approaches OS upgrades. With Windows, it's usually a gradual process: testing new OS upgrades on a few machines before deploying company-wide. It would be difficult to do that if the machines couldn't talk to the same network shares. But Macs aren't really used at work in the numbers that Windows PCs are, so maybe Apple doesn't care that much.
He said “features”
That was me. As such, I'll give Apple a bone and declare the App Store a feature for the sake of argument. A feature that I've never used, and don't really ever plan on using. At some point, I'll probably remove the icon from the Dock.
sudo port install samba3
If you are a corporation, GPLv3 is indeed bad. Not so much for fear that the patent clause will require you to license one of your own patents, but because it opens up a bunch of liabilities for you and is an invitation to get sued. Getting sued is expensive. GPLv3 requires each distributor of the software or patches to indemnify that software against patent violations, including patents held by third parties. This means that, if you do it right, you need to evaluate every bit of code, identify all the patents it may infringe upon, and verify that the license terms for those patents are fulfilled. It's possible, but not practical. If you are small potatoes, you can probably get away with breaking the rules, but bigger fish need to watch out.
This probably doesn’t matter for Apple, though, since, as was pointed out, Apple’s using an older version of Samba that was released under GPLv2. They could simply use that and fork it if they wanted to.
Presumably, Apple’s reasoning is two-fold: they want better SMB/CIFS support, and they don’t see GPLv3 Samba or maintaining a GPLv2 fork as being practical. Specifically, Apple wants better performance and better Windows 7 and AD support – none of that was forthcoming or worth the effort to cram into an old version of Samba, so they wrote their own.
This isn’t necessarily a bad thing. Lion devs report better performance, and it works with Windows 7’s modified authentication.
If the plan of those corporations was: getting the customers into a vendor lock-in… yes, it’s bad for those corporations.
If the plan included putting hardware restrictions on software modifications, so that John had to depend on those corporations to modify the device that John bought… yes, it’s bad for those money-sucking corporations [see what happened with “tivoization”].
This is not even real. Instead of a long explanation, we can look at the example of Linux distributions: plenty of GPLv3 software, and companies like Canonical, Novell, Red Hat, Oracle, etc. that distribute them. I can tell you that they have lawyers 🙂
True. I also think history has shown us that sometimes companies need to be protected from themselves. The economy as a whole works best when there is more competition. The corporate modus operandi is to grow profits as large as possible, which leads to monopolistic behaviour, which would kill itself in the long run. It's like when an organ of the body becomes cancerous: its cells just want to grow and multiply, but they end up killing the whole body instead of improving its functioning.
We need controls on corporate greed, for the sake of the good of all the corporations ( as well as the consumers, obviously). GPL V3 is such a control. The increased adoption of it, would be a good thing for many companies, including those that are afraid of it.
Agreed; from what I have heard, SMB 1.0 was pretty much a walking disaster area, with Samba and Microsoft programmers both screaming in horror when having to maintain it, given that it was never cleanly documented from day one. SMB2 is a fresh start for all concerned, written from the ground up with all the parts properly documented, and I'm sure Apple realised that the vast majority of people will be running Windows Vista and 7 over the next couple of years, so it would be best to focus on supporting the technology people are and will be using rather than trying to maintain a horrible mess that should have been jettisoned years ago.
What I’d love to know is how this affects their SMB implementation on Airport Extreme and Time Capsule. Will we see a firmware upgrade to SMB2, or will Apple do a silent refresh of their products to bring SMB2 to those devices?
Hmmm… so why are IBM, Red Hat, Novell, Intel, AMD, Google, etc. all contributing code to GPLv3-licenced projects such as GCC?
Please point me to the GPLv3 paragraph (or any other source of information) that says distributors have to indemnify GPLv3 software against patent violations.
What GPLv3 says is that if you contribute your patented code to a GPLv3-licenced project, you can’t turn around and sue the recipients for patent infringement; instead you grant the recipients the right to modify and redistribute your patented code. Again, show me the licence text pertaining to patent indemnification against patent violations. I sure haven’t found any:
http://www.gnu.org/licenses/gpl-3.0-standalone.html
As for Apple not using GPLv3 (and not GPLv2), it’s obviously because they want to incorporate the code into their proprietary products. Just like LLVM in Xcode, WebKit in Safari, etc.
Back in the days of NeXT, Steve Jobs had his first brush with the GPL when NeXT tried to keep its Objective-C front end to GCC proprietary. This wouldn’t fly, and after the threat of going to court, NeXT decided to release the Objective-C support as GPL, since they needed GCC. Thus GCC got Objective-C support and everyone was happy (well, apart from Steve Jobs/NeXT).
So it’s not hard to see why Jobs went and supported LLVM when the opportunity arose, but this ended up being a good thing (imo), since LLVM is open source (and hopefully Apple will continue to send all their non-Xcode-related in-house patches upstream) and it also provides much-needed competition to GCC.
So if Apple does indeed make their own Samba replacement open source then that’s obviously great. If not then it’s not like they were contributing to Samba anyway afaik.
You are cherry-picking companies. I could list a thousand companies that don’t. I believe that the GPL is indeed bad for companies, but that’s beyond the scope of this issue.
How is GPLv2 different to GPLv3 in this regard? The exact same restrictions apply. Apple has been able to work with (or around) them either by open-sourcing their code (WebKit, Objective-C support in GCC) or by not directly linking with GPLed code (e.g. spawning new processes to use GCC and GDB inside Xcode).
Here we agree. Making a big deal about licenses is for zealots. If Apple makes the new implementation open source it’s a win-win-win situation.
First things first, before you get too far ahead of yourself: whose protocol is SMB? Answer: IBM’s. What is IBM’s policy requirement for patent coverage for all their patents? GPLv2 or higher, unless you have approval from IBM’s legal department.
Samba conforms to IBM’s requirements. This game is not as simple as it first appears.
Yes, there was a reason why MS got hammered in the EU courts over what they were doing with SMB-based protocols.
Yes, there are legal requirements at play here. Same with the Linux kernel: there are a lot of different techs in the Linux kernel that BSD is not allowed to have, because those techs are only licensed for GPLv2 code.
You might be able to find thousands of companies that disagree, but those don’t have a legal vested interest in this problem. Also, at times those thousands of companies will have to use GPLv2+ to get patent grants.
Yet a lot of those thousands of companies also call for patents. Are they mad? If you hate being forced to use GPLv2+, you should hate patents as well, because patents are the very thing that will force you to use GPLv2+.
He was asking a question. The question was not answered. GPLv3 is “bad” for companies that want:
– vendor lock-in and devices that you pay for but cannot modify; a monopoly where you are not free to do X and Y. GPLv3 prevents tivoization, and companies like TiVo do not like it.
– planned obsolescence, so you must buy another product, because that is what was planned.
– etc 🙁
Here we agree, too.
I am naming big companies that I KNOW are contributing code under a GPLv3 licence, which was in response to the following: “If you are a corporation, GPLv3 is indeed bad,” which suggested that corporations don’t want to use GPLv3 because it would be bad for them.
Then why did you revisit this subject?
You misread me, I meant that Apple is moving away from using both GPLv2 and GPLv3. Again because they can’t use such licenced code within their proprietary software.
Since they want to keep Xcode proprietary, they can’t actually integrate GDB debugging and GCC into Xcode, hence their allocating resources to the development of LLVM, which will allow them just that: a proprietary integrated development platform, just like their proprietary performance analyzer, Instruments, is built upon DTrace.
The LLDB project, under the LLVM licence scheme, is extending itself to distributed debugging on Linux and OS X; lots of work is going into it.
Corporations heavily invested in Linux that don’t want the GPLv2/GPLv3 structure will be able to fully leverage LLVM for their development needs.
The Plugins for LLDB, so far, are OS X, Linux and GDBServer.
Where are you getting this? What anti-GPL FUD outlet did you read to get to such conclusion?
Well Sun did their own CIFS/SMB server implementation for OpenSolaris/Solaris 11. If you use one of their “OpenStorage boxes” (7310 for example) it doesn’t use Samba to enable CIFS shares. Apple has a history of taking code from OpenSolaris, namely:
* DTrace
* NFS v4
* other stuff I can’t remember
So they either rolled their own (with their financial resources quite easy to do) or they got code from a third party.
“SMB2 support is coming in SAMBA 4”
Actually, it’s coming in 3.6. Look at their git repository.
EDIT: http://git.samba.org/?p=samba.git;a=blob_plain;f=WHATSNEW.txt;hb=HE…
Edited 2011-03-26 10:00 UTC
Not correct. Closer, but not correct: http://samba.org/samba/history/samba-3.5.0.html 3.5.0, released almost a year ago, included the start of SMB2 support; 3.6.0 is to include production-ready SMB2.
Also, there is more going on here than there seems. 3.5.0 is the start of migrating working Samba 4.0.0 tech out to production usage.
3.5.0 is the start of Franky (https://wiki.samba.org/index.php/Franky): yes, a merge between the Samba 4 and Samba 3 lines.
The end result will be something strange but interesting: a server that supports the best of both. So yes, “Samba SMB2 coming in Samba 4” is kinda true, but it completely misses the fact of Franky, which sees the Samba 4 and Samba 3 lines merge starting at 3.5.0, so 3.6.0 will be more Samba 4 features mixed into the old Samba 3 line.
Since Apple has licensed and implemented Microsoft Exchange in Snow Leopard and iOS, I’m guessing they’ll license some SMB implementation from Microsoft as well for Lion. I take it the Exchange licensing deal was a first step and now Apple is ready to license more from Redmond.
I’m pretty sure that this is not about patents (why would Apple have patents on SMB?). The reason is that binaries in /System are signed with Apple’s keys, which they haven’t made public. As far as I know this is also disallowed by GPLv3. My understanding is that this is why they are staying away from GPLv3 altogether.
The usual IANAL applies and I also haven’t read GPLv3, but googling turned up this quote from RMS:
That’s just amazingly stupid and ignorant. Give out the signing key? Wtf? Yeah, and let’s also require all Linux distros to publish their package-signing keys.
I mean, what’s the harm?
Jeeesh.
They’d only need to give each user a machine signature so that he could use his own modified versions.
However, Apple makes money from not letting users run any software at all without money going into the company’s pockets, so they obviously can’t distribute GPLv3 software.
All things considered, I think the GPLv3 patent clauses do more harm than good. The only thing they achieve is keeping “free” software from being used at all.
Even “good” companies acquiring patents for self-defense have to avoid it. If Google released a WebM codec under GPLv3, all the protection their related patents give them against the MPEG mafia would be lost as soon as MPEG companies downloaded a copy.
IP laws are like nuclear weapons, there is no way to use them for good, the only winning move is not to play.
Playing the wargame is the basic error of the FSF and their licenses.
In fact, no. Read GPLv3: its coverage disappears if you are part of an attack, even in an edge form. So Google would still be fully free to use their patents to counter an attack.
Yes, there is a clause in GPLv3 specifically allowing “software patent retaliation” if attacked.
GPLv3 forbids patent aggression. If the MPEG mafia downloaded and used VP8, and it was under GPLv3, they would find themselves locked out of attacking VP8. Of course, other GPLv3 programs they don’t use they could still attack without ending up in a battle over VP8, as long as those other programs did not relate to the developers of VP8.
The GPLv3 patent grant is nicer than this: http://www.webmproject.org/license/additional/
This is how foolish you are being. GPLv3 just places the mostly standard patent-grant agreements into the copyright document, so you don’t get caught with your pants down. I.e., something can be BSD, but BSD does not carry a patent grant, so you include that code in your program and you can get your tail sued off.
http://www.apache.org/licenses/LICENSE-2.0.html The patent-grant wording in the Apache License version 2.0 covers the same ground as GPLv3’s.
Basically, the patent-grant stuff is FUD. With BSD and MIT you have to find the matching patent grant yourself. Legally you are safer and better off with GPLv3.
Also, due to the legal wording in GPLv2, a patent grant may be required to conform with GPLv2. No one has tested this in court.
GPLv3 just lays out the rules explicitly: rules that could possibly still apply if you are using GPLv2 with patents.
The only major change, really, is the anti-Tivoization sections.
Hi,
one of the related projects at MacOSForge is certainly Apple’s DCERPC:
http://www.dcerpc.org/
Adrian
…I haven’t seen regarding this is whether Apple’s implementation will provide any domain services to Windows clients. As a heterogeneous shop running OD, we’ve been stuck w/ XP due to Apple’s limited SMB tools. Sure, our goal is to get off of OD, but we won’t be moving to AD.
Also, I’m surprised there isn’t more momentum behind projects such as pGina that allow Vista and Win7 clients to bind and authenticate via LDAPS.
It will open up a whole new can of bugs, as when it ships with Lion, it’ll be largely untested, but at least it’s not 3 years old.
Yes, no one at Apple, nor the developers being given access to Lion, is testing this function. We will all be guinea pigs when it’s released.
Give me a friggin’ break.
I use Samba extensively too, at work and at home, but I’m not automatically writing off Apple’s “SMBX” implementation either.
I bet they are just using likewise..
http://www.likewise.com/
Seems to be a pretty solid implementation
Other than the fact that it depends directly on Samba for anything other than authorization.
There is a native cifs implementation that doesn’t seem to depend on samba… although I haven’t played with it
http://www.likewise.com/resources/documentation_library/manuals/cif…
You are mad if you use that. Seriously. It’s GPLv2, same as old Samba, because it is old Samba. Apple would be just as well off where they are.
Yes, it’s a Samba fork from before the NTFS filesystem compatibility work.
http://en.wikipedia.org/wiki/Server_Message_Block
Really, out of all the SMB server and client software out there, only two have SMB2: MS’s implementations and Samba’s.
To top that off, additions by third parties to SMB2 are already being written: http://www.unixsmb2.org
This is the big problem you hit: the SMB protocol is not MS’s alone. MS has written what they want into it with SMB2; now the Unix/POSIX guys are going to add what they want.
So a lot of implementers are basically sitting on the sidelines until the storm blows over.
Yes, it’s really simplistic to think the last party to write an alteration to the protocol is the owner.
Historically, Samba has been the neutral one between all vendors, with the most support all round; no particular implementer is given favoritism.
What I am hearing is that Apple wants to implement their own. Or is it their own? The name Apple is using hints that Apple might pay MS for a copy of their implementation.
Actually, Samba is not 100% Windows-compatible. You need CIFS to be really 100% compatible; this explains the problems you had with Samba.
http://blogs.sun.com/amw/entry/cifs_in_solaris
“There is a common misconception that Windows interoperability is just a case of implementing file transfer using the CIFS protocol. Unfortunately, that doesn’t get you very far. Windows interoperability also requires… ”
Maybe that is one of the reasons that Apple dropped Samba?
That’s 3-year-old documentation. OK, it applies to the crappy version of Samba OS X was running, but it did not apply to all versions of Samba even back then; Samba-TNG, for one, since that bugger could run on Windows.
Samba 4’s “ntvfs handler” is, yes, appearing in the Samba 3.5+ line. This is a virtual NTFS with multiple backends; long term it could even be backed by a real NTFS.
The issues talked about in 2007 are part of the reasons for Samba 4. The problem is that the article explains why OS X has to do something now: they cannot stay sitting on the last GPLv2 version any more, because soon it will look second-rate next to the current version.
Ok, so you say that newer Samba versions are fully Windows compatible? Do you have any links to this? I would like to learn more.
FYI, an argument like yours: “the article you linked to is three years old” does not count as a valid argument. I hope you have better backup than this?
“Ok, so you say that newer Samba versions are fully Windows compatible? Do you have any links to this? I would like to learn more. FYI, an argument like yours: ‘the article you linked to is three years old’ does not count as a valid argument. I hope you have better backup than this?”
I gave the valid argument: ntvfs. The issue talked about in the document you pulled up is the difference between the POSIX security/filesystem framework and the Windows security/filesystem framework, the very issue ntvfs was created in Samba 4 to address, and one reason why ntvfs will appear in the Samba 3 line under GPLv3.
Also, the ntvfs issue is one OS X will have to solve as well, since OS X’s security frameworks and filesystems don’t conform to Windows security/filesystem semantics either.
http://www.winehq.org/pipermail/wine-devel/2005-April/035751.html Please note the date: NTVFS was being talked about two years prior to the Solaris document. The Solaris document was basically stating an issue that Samba developers had run into and were working on solving.
The Solaris solution is dependent on ZFS. NTVFS, being a virtual mapping supporting many backends, can get the same advantages the Solaris guys are talking about without locking to a single filesystem.
Basically, that document pulled up from Solaris is ZFS marketing, because at the time it was not 100 percent true. Just because you decided to solve the problem a different way from mainline does not mean mainline does not have a solution in the works. The issue is that 3 years have passed, and the solution in the works is starting to move out to mainline in Samba.
Yes, the document is historic, and historic documents really don’t cut it, particularly when the history was not 100 percent correct at the time.
Ok, so you claim that Samba v4 is fully Windows compatible, just like CIFS. And you talk about NTVFS. And say that my link was about security issues.
In my link, someone from the Samba team wrote:
“Solaris could expand on this by giving us access to atomic NT-ACL create, NTFS stream support, the ability to push SID credentials into the system from winbindd and attach to a process etc. We already support case-insensitive filesystems of course.”
Are these also security issues? Where can I read more about NTVFS?
Sorry to say, the person you pulled up is not on the Samba main team; he is in fact from Solaris: http://blogs.sun.com/amw/entry/cifs_in_solaris Not Samba. Solaris does in fact have its own implementation alterations.
Don’t try bluffing your way past me again, please. You pulled up some random document that is basically by the wrong people.
NTVFS is where most of that stuff is implemented in Samba 4. It depends on your definition of security issues: NTVFS corrects the permissions processing to match what Windows programs expect.
Prior versions still fully obeyed POSIX and Linux permissions, which caught a few Windows applications out, normally causing them to crash when using a Samba share. There are a stack of options in Samba 3 to alter responses so applications work. Of course, being strictly POSIX would not have been an issue if MS were not pushing NTFS expectations over the wire.
– Support for DOS attributes (archive, hidden, read-only and system).
– Case-insensitive file name operations, with three modes: case-sensitive, case-insensitive and mixed.
– Support for ubiquitous cross-protocol file sharing through an option to ensure UTF8-only name encoding.
– Atomic ACL-on-create semantics.
– Enhanced ACL support for compatibility with Windows.
All this stuff directly relates to NTFS emulation.
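The case-insensitive lookup in that list is the kind of mapping any SMB server sitting on a case-sensitive POSIX filesystem has to do. A minimal sketch in Python (a hypothetical helper for illustration, not actual Samba or Solaris code):

```python
import os

def ci_lookup(directory, requested):
    """Map a Windows client's case-insensitive name (e.g. "README.TXT")
    onto the actual on-disk spelling in a case-sensitive POSIX directory.
    Returns the real entry name, or None when nothing matches."""
    wanted = requested.lower()
    for entry in os.listdir(directory):
        if entry.lower() == wanted:
            return entry  # the real on-disk name
    return None
```

A real server caches this mapping per directory, since scanning every entry on each open would be far too slow, and it also has to pick a policy when two entries differ only by case.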
http://www.samba.org/samba/ftp/slides/samba4_tridge.pdf This shows the major difference between Samba 3 and Samba 4: a major internal redesign, of which ntvfs is one of the core parts.
SID handling from Winbindd and ntvfs are both plugins.
All the Solaris guy is talking about is really just one of many ways to do it.
Samba has been fully compatible with Windows, all the way through Samba 3, as long as Windows applications have not been expecting NTFS drives on the other end. There are Samba 3 flags to give applications what they expect.
None of these things are data security issues; it is more likely an issue of security blocking applications from running.
Please be aware that the first implementation of SMB by IBM only knows POSIX permissions and POSIX ACLs; it has no idea at all about NT ACLs. Yes, directly setting POSIX ACLs over the wire is supported using SMBv1, and will be reinstated in SMBv2 as well.
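For anyone unsure what “only knows POSIX permissions” means: the entire model early SMB carried is the owner/group/other mode bits, whereas an NT ACL is an arbitrary list of per-user allow/deny entries that cannot always be squeezed into those nine bits. A rough illustration of the POSIX side (hypothetical code, not from any real SMB implementation):

```python
import stat

def posix_mode_string(mode):
    """Render POSIX mode bits as the familiar rwxrwxrwx string:
    three triplets for owner, group, and other. This is the whole
    permission vocabulary early SMB could express."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

# posix_mode_string(0o644) -> "rw-r--r--"
# posix_mode_string(0o755) -> "rwxr-xr-x"
```

Mapping an NT ACL with, say, a deny entry for one specific user onto this model is where servers like Samba have to approximate, which is exactly the gap ntvfs and POSIX ACLs try to close.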
Apple always ensures that it benefits; it really doesn’t care what its users actually want or need. Ditching Samba because Apple can’t somehow profit from it later on down the road is just lame.
No support for DFS. Yes, it’s a Microsoft thing… so what? You want a Mac that’s worth something in the workplace, other than for flaky marketing departments? Build in support for DFS… and hidden file shares, for that matter.