And over the weekend, the saga regarding Canonical, GNOME, and KDE has continued. Lots of comments all over the web, some heated, some well-argued, some wholly indifferent. Most interestingly, Jeff Waugh and Dave Neary have elaborated on GNOME’s position after the initial blog posts by Shuttleworth and Seigo, providing a more coherent look at GNOME’s side of the story.
Jeff Waugh created a timeline of events regarding the StatusNotifier specification, the example that’s being used to demonstrate how, supposedly, GNOME is not collaborating with the greater Free software community in the same way KDE has been doing for a long time now.
His timeline is supposed to show several things. First, that Canonical did not do what was required to get the API into GNOME as an external dependency. Second, that GNOME did not reject it because of ‘not invented here’. And third, that the rejection is meaningless because it had no adverse effect on Canonical’s goals.
The first point has resulted in a word-against-word situation; Waugh claims Canonical didn’t do enough, Shuttleworth disagrees. I honestly wouldn’t know what exactly is needed in order to get an external dependency accepted into GNOME, and I’d love to have an impartial developer look over this bit.
We do know this interesting bit. GNOME developer Owen Taylor states some of the things Canonical apparently didn’t do: “They didn’t engage with GNOME to make the user interface idea part of future designs. They didn’t even propose changes to core GNOME components to support application indicators,” Taylor states.
The first point is already a strange one to make (the API leaves the graphical side open to be implemented by the respective graphical user interfaces (GNOME, Xfce, Unity, KDE, etc.) as seen fit), but the second one is even stranger, if you were to believe Shuttleworth: changes to core GNOME components were proposed, but they weren’t accepted… Because the API wasn’t an external dependency!
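For those who have never looked at the spec itself: the StatusNotifier API is just a small D-Bus protocol, which is exactly why the visual side can be left to each shell. Below is a rough, hedged sketch of what registering an item looks like from an application's point of view, written against dbus-python. The interface and watcher names (org.kde.StatusNotifierItem, org.kde.StatusNotifierWatcher) are how I recall the draft spec; the demo service name, the property values and the stripped-down Properties handling are my own illustration, not code from Canonical or KDE.
```python
# Minimal sketch of how an application might register a StatusNotifierItem
# over D-Bus, loosely following the KDE/freedesktop StatusNotifier draft.
# Names marked "assumed" come from my reading of the spec; everything under
# org.example.* is invented for this demo.
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

WATCHER_NAME = 'org.kde.StatusNotifierWatcher'   # assumed watcher service
WATCHER_IFACE = 'org.kde.StatusNotifierWatcher'  # assumed watcher interface


class ExampleItem(dbus.service.Object):
    """Exports a bare-bones status notifier item on the session bus."""

    def __init__(self, bus):
        self.bus_name = dbus.service.BusName('org.example.StatusNotifierDemo', bus)
        super().__init__(self.bus_name, '/StatusNotifierItem')

    # The visual side (icon rendering, menus) is left to whatever shell reads
    # these properties: GNOME Shell, Unity, Plasma, Xfce, and so on.
    # A real item would also implement GetAll and the change signals.
    @dbus.service.method(dbus.PROPERTIES_IFACE, in_signature='ss', out_signature='v')
    def Get(self, interface, prop):
        props = {'Category': 'ApplicationStatus',
                 'Id': 'osnews-demo',
                 'Title': 'Demo item',
                 'Status': 'Active',
                 'IconName': 'mail-unread'}
        return props.get(prop, '')


def main():
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    item = ExampleItem(bus)  # keep a reference so the object stays exported

    # Tell the watcher about us; this will raise a DBusException if no
    # desktop on the session implements the watcher.
    watcher = bus.get_object(WATCHER_NAME, '/StatusNotifierWatcher')
    watcher.RegisterStatusNotifierItem('org.example.StatusNotifierDemo',
                                       dbus_interface=WATCHER_IFACE)
    GLib.MainLoop().run()


if __name__ == '__main__':
    main()
```
The design point is visible even in this toy: the application only publishes properties and registers itself, and whether or how an icon actually appears is entirely up to whichever desktop implements the watcher.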
Dave Neary also stated a few requirements for external dependencies in a personal email to Shuttleworth – “make the case for the dependency, which should be a few sentences or so, and wait a short while for people to check it out (e.g. making sure it builds)” – but those had all been met as well. Update: Neary posted the full email – with more details – as a comment.
In the meantime, all I see is that Canonical and KDE managed to work things out and get their applications to integrate well with each other; KDE applications integrate nicely within Unity when it comes to the API in question. I’d say that if KDE – of all parties – manages to work together with Canonical, then that’s proof enough that Canonical did enough. You’d think collaboration between Canonical and KDE would be harder than collaboration between Canonical and GNOME, considering Ubuntu’s GNOME-centric history.
The second point is interesting, and clearly shows the failings of Waugh’s timeline. Waugh hammers on about how Unity was unveiled a few days after the API was rejected from GNOME, but he fails to mention that “announcing” Unity actually means “announcing” nothing more than a name. Unity already existed for a long time before its name was unveiled – it was called Netbook Remix.
Then there’s the issue of the timeline focussing entirely on mailing list discussions and blog posts, ignoring the various talks that take place on IRC or offline. Shuttleworth insists that Canonical’s planned work on the API was shown to GNOME developers back in 2008 (a year before the first entry in Waugh’s timeline), and that the GNOME developers saw the API as ‘a valued contribution to the shell’. I’m guessing that Canonical – having close ties with the GNOME community – saw this as an assurance, and I have to admit that since we’re talking about a community here, and not a cold and heartless company, I agree with them.
Someone’s word should mean something, especially in the more informal world of open source/Free software development. It all seems like too convenient an excuse – sure, we may have made promises to you, but hey, we didn’t sign a contract so tough luck! That doesn’t seem like the kind of attitude that works well in the open source/Free software world. In fact, it’d be nigh-on impossible, since the interpersonal dynamics in an open source/Free software project are far more complex than in a company (where you have a clearly defined managerial structure).
Of course, the fact that GNOME developers liked Canonical’s ideas back in 2008 is further demonstrated by the fact that GNOME developed its own alternative, and incompatible, API years later for GNOME Shell. KDE chose to collaborate – despite parts of the work coming from outside KDE; GNOME chose to take a my-way-or-the-highway approach. They could’ve engaged with Canonical and KDE for the good of users and developers alike, but decided otherwise.
The third point Waugh tried to illustrate through the timeline and his blog post is that Canonical didn’t need GNOME to be included in the design process, since they could achieve their goals even without the API becoming an external dependency. While many developers have indeed included support for the API despite the fact that GNOME hasn’t blessed it, Shuttleworth claims that precisely because the API wasn’t blessed by GNOME, several developers have said they are concerned about the repercussions of including support for it anyway.
“The uncertainty created by the rejection as an external dependency creates a barrier to that collaboration [between Canonical and GNOME]. As Jeff says, those patches can land without any problems. But to land such a patch, after the refusal, takes some guts on the part of the maintainer,” Shuttleworth claims, “Lots have done it (thank you!) but some have said they are concerned they will be criticised for doing so.”
Dave Neary’s blog post is far more extensive than Waugh’s, and covers several aspects Waugh didn’t touch upon, most importantly the role of FreeDesktop.org – which he considers to be a broken organisation that needs to be repurposed. On this, Shuttleworth agrees.
“There are a number of things we can do to move forward from where we are now,” Neary summarises his post, “Improve processes & structure for freedesktop.org (this will require buy-in from key GNOME & KDE people), make the operation of GNOME (and the operation of individual modules) more transparent to outsiders, cut out a lot of the back-channel conversations that have been happening over the phone, in person & on IRC, in favour of documented & archived discussions and agreements on mailing lists & wikis, and work to ensure that people working on similar problem areas are talking to each other.”
I think this is about as accurate a description of the lessons learned from all this as you can get. Who, exactly, is to blame – well, you can make up your own mind. I think I, at least, have been pretty clear who I think needs to change the most.
Grow the **** up…
…probably sums up my overall feeling. I’d say that the best interests of users and developers have not been kept at heart by GNOME, and instead of frantically trying to rationalise everything, they should just be a man about it and admit they’ve been at fault. I say this as a former heavy GNOME user (I don’t use Linux anymore), but GNOME developers might want to sit down and have a good chat with their KDE counterparts about how to interact with the greater Free software community, and how you sometimes need to make compromises for the sake of interoperability – which actually translates into for the sake of users and developers.
Thanks to this little spat, something as bloody simple and elementary as menu bar status icons has become quite the headache for developers (and thus users). Shove that damn pride away, and do what’s best for developers and users. We don’t give a rat’s bum about who invented what, where it came from, or the small technical details. We want our crap to work.
This comment on Aaron’s blog, quoted by Dave Neary, tells a lot:
It still does not fix the problem. Fact is, the GNOME people knew that an effort was underway to provide a common notification area.
Now, instead of restarting the process and working to provide even a temporary solution for this mess, some guys are just hunting witches.
Yes, a witch hunt is bad, but what could GNOME developers do if the effort was carried out privately? Become Canonical employees to get access to those discussions? Or wait until the code dump is thrown in their faces and accept it? If GNOME developers decided to implement something resembling what Canonical was doing behind closed doors, I do not blame them.
They still knew that the KDE guys were looking for a common solution.
I don’t want to repeat everything that was said by the different sides. If there is more than one human in a room, it is quite obvious that unpleasantness will show up.
Fact is, FOSS desktops hold only a tiny share in a world full of hostile competitors. If the FOSS community wants to be relevant on the desktop, they need to sort out their differences and cooperate.
The spec was open, and people on the KDE side did make sure it was known. GNOME did not have to accept Canonical’s work, but could have implemented their own version following the spec if they had problems with Canonical’s implementation. The thing here is that the new GNOME systray was developed later, and some people had full knowledge that there were working implementations by KDE and Canonical.
robmv, that is one of the *reasons* GNOME won’t admit any wrongdoing for *not* collaborating, but it is certainly not a *valid* reason for not working together, regardless of whether those discussions were *done* in private. The collaboration effort should be set high to promote the use of Linux on the desktop, and collaboration between DEs is a must to make things easy for application developers! Note: ISVs (gaming, productivity suites, etc.) are the most critical component of the Linux desktop, and that compromise should *have* been made. Look, just run KDE apps under GNOME: they don’t look native. That is an example of not collaborating, and it is very far away from the *private* argument.
It doesn’t tell us a lot, because CSD or ‘Client Side Decorations’ have nothing to do with app indicators. It was just an experiment, and as far as I can see a long way from ever being included in either GNOME or KDE. Whether such an experiment is committed to the GNOME repo or a Canonical one doesn’t actually matter much in practice.
It gives one example of how Canonical prefers not to develop with the community: in this case they forced the developer to work privately.
We the users will suffer from desktop environment quirks while the egos battle it out, watching Microsoft, Google and Apple laugh all the way to the bank.
Yep, well, go tell that to Dave Neary. He now apparently doesn’t think D-Bus is important, after years of work, and doesn’t think that applications should be able to communicate with one another.
Yes, that’s my position. Applications shouldn’t be able to communicate. I’ve been telling Linus for years that we don’t need semaphores or shared memory in the kernel too.
Please, if you’re going to contribute to a discussion, can you at least try to add to it, rather than subtract?
Dave.
What have you added to the discussion? Your sarcastic comment didn’t help me in the least to understand what’s going on.
Rehdon
And this is different from how things are now in what way? They’ve already got enough reasons to laugh at the major F/OSS operating systems without even looking at the GUI side of things. Let’s try Linux: kernel modules, no stable API or ABI, subsystems being redone and overlayed atop one another… should I go on?
I fail to see how that comment applies to what I wrote. I would say the main reason that Linux as an OS has a chance in corporate is because of the open and fast moving development of a technically great kernel.
This is more of the same deluded thinking that needs to end.
The Linux kernel is not magic. Both Windows and OSX can run corporate software without any problems. There is no kernel bottleneck with Windows or OSX.
The open process has not given the Linux kernel supernatural abilities. This type of thinking reminds me of WWII history where Germans believed their special Teutonic spirit would defeat the allies even when the odds were clearly against them.
Don’t bet on mysticism.
Incorrect as usual, nt_jerkface. Please go and look at how you do fully functional real-time support on Windows. There are some major bottlenecks inside Windows, just like in any other OS. In fact, one of the biggest disruptions to real-time on x86 archs can come from the motherboard: SMM http://en.wikipedia.org/wiki/System_Management_Mode Thank you, Intel, for adding something that ruins the platform.
Also, a lot of crashes on Windows can be traced to kernel-mode driver conflicts. Yes, there is no magic.
There is also commercial software that installs on many different Linux distributions without issue.
Really, the open process does not give the kernel supernatural abilities, but it does prevent black box connecting to black box, which equals untraceable failure.
There are cases of Windows 7 SP1 destroying stored data on hard drives that track directly to black box + black box = oh-my-god data loss. The IDE controller driver worked perfectly with Windows 7, but SP1 changed some of the interface’s locking methods, leading to data loss since the locking in the OS is no longer what the driver expects. Yes, you don’t need to change the ABI to break drivers twelve ways from Sunday from time to time.
An open process prevents these nightmares. There are particular places where you don’t want black box + black box. Also note that for core drivers like this, Linux tries to have them all in one tree, so an alteration here does not lead to an incompatibility there.
Really, “supernatural abilities” is believing that you can connect a stack of black boxes to each other, built to an interface spec, and not have something strange to extremely bad happen from time to time.
OS X and Windows are following a supernatural way of doing things. Yes, it works for a while, but one day it goes badly wrong.
nt_jerkface, your argument against Linux is in a lot of ways baseless. The main reason Linux does not have many commercial apps is market share, nothing else. It’s not like supporting multiple versions of Windows is a walk in the park.
I think you exaggerate this feature as you have in the past but even at face value it means nothing to 99.9% of corporations or users.
Linux is more likely to crash from driver conflicts, especially if it is related to video. Just ask Thom.
Haven’t heard of that. But I do know about the data loss issues with EXT4.
http://linux.slashdot.org/story/09/03/19/1730247/Ext4-Data-Losses-E…
And commercial apps like VMServer have been broken by kernel updates.
As I have pointed out before the iphone had better commercial software support when it had 1/10th the marketshare of Linux. Linux could be more appealing to proprietary software and hardware developers but the people behind it simply don’t give a shit. They value open source ideology over market share. That’s a fact, not an opinion.
But keep defending the status quo. MS would be upset if Linus & co decided to provide a platform that was proprietary friendly. They like how Linux stays ideological and at 1%. You probably deserve a few dozen copies of Windows 7 for all the defending you do of the status quo. That and a mac mini from Apple.
When you do that, all the locking issues that cause people to complain that their computer is slow to start up, and so on, show up really badly.
So no, it’s not a feature 99.9% of users would use directly, but it is more sensitive to these issues. You know something has gone badly wrong when the only way to make it work is basically to embed another kernel inside.
The OS X and Linux kernels can both be made to operate in real time. Neither has major design flaws preventing it any more, and where those flaws exist, they are in source code you can alter. Hello, the OS X kernel is part open source. There is no reason why the key low levels of Windows OSes could not be released as open source as well to prevent nightmares. These are preventable errors.
There are reasons why I don’t tar OS X completely with the same brush as Windows. At least some sane selections have been made in the Apple camp about what has to be closed source.
Linux driver issues are all basically in one spot: mostly drivers in development, or drivers developed in a black-box way and therefore touchy about any kernel alteration. Yes, I have spoken in person to Thom.
On Windows they are all over the place. Some are really funny. Like: insert a USB device that is not defective or tampered with into a particular machine and watch the machine reboot. The USB controller driver and the device driver for the USB device you just plugged in conflict. Leave the device in and the machine will remain in a never-ending reboot cycle until it’s removed. Change either the USB controller or the device driver and the problem disappears. Yes, with Windows it’s very much “do you feel lucky”.
How often are you going to bring this up? At the time, ext4 was not marked production ready; it was marked testing. If you were dumb enough to format a drive as ext4 in 2009 and you lost your data because it was not tested, whose fault is that? The person’s, for using a filesystem for data before testing was complete.
It’s like running a beta copy of Windows, having it screw up and eat your data, and saying it’s Microsoft’s fault. Sorry, no it’s not.
Of course, if the same fault happened this year I would be out for the maintainer’s blood, because a fault like that should not happen.
VMServer has also been broken on OS X, Windows and Linux at different times due to kernel updates. This is not a Linux-unique issue.
Of course, you don’t mention that VMServer has regularly failed on all of them. A black box in kernel space is a risk of failure. VMware in recent times has been working more with mainline Linux, and the failure rates are dropping; in fact, to lower rates of failure per year under Linux than on OS X or Windows.
Also, you most likely want to forget the big one recently where all MS clients running inside VMServer failed completely due to a Windows update. Yes, the clients became incompatible with VMServer’s paravirtualisation drivers.
So really, if you want a direct example of why not to black-box, you don’t have to do anything other than chart VMServer failures per platform against how black-box the platform is to VMware.
Big error. The iPhone’s core OS is the same as OS X’s. Lots of small applications from OS X could be ported to the iPhone without much recoding.
Android has also taken off with commercial applications, and it has a Linux kernel. So kernel design choices have zero effect on whether commercial makers will develop for Linux or not.
Linus is the kernel maintainer. Android disproves your point solidly. In over 99 percent of cases, application developers don’t need custom drivers or anything custom in the kernel.
Have distributions at times made it too hard for commercial vendors? I have not defended that. Also, Red Hat alone had more closed source commercial applications from third parties back when the iPhone had a small market share.
So it’s a pure myth that Linux does not have closed source applications.
The major reason people are not on the Linux desktop PC can be summed up in two words: Microsoft Office. It has become the de facto standard, with all the platform compatibility issues that brings.
There is another issue on Linux, and Nero struck it head on. The simple problem was that, feature-wise, the open source K3b had more and better features than Nero’s burning software. So the Nero release for Linux, which does exist, has a 15 dollar maximum price tag and almost never sells.
Linux has a huge pool of open source applications, so there is not much room left for large numbers of commercial applications.
Google’s and HP’s (Android/webOS) solution was to break the API to give commercial applications a chance. Yes, creating a shortfall (i.e. a clean slate) has shown how mythical a lot of the arguments explaining Linux’s lack of closed source applications are.
Final point: what has prevented commercial application makers from joining up and making their own unified framework to run their applications on Linux?
That right there: bitter rivalry. About time you start assigning the blame correctly.
Making stuff up again. I’d like to hear of these cases where VMServer was broken on Windows Server due to kernel updates.
I can source numerous breaks with kernel updates; rebuilding vmware kernel modules is a standard affair for Linux.
That’s a big fat lie.
Most iphone games are not available for OS X.
Android is a completely different subject. It’s designed by engineers who actually want to build a platform that encourages development of both proprietary and open source applications.
I’m not mixing kernel and application development problems, they just happen to intersect in some areas. The main problem with Linux distros is that their distribution systems are designed around open source.
Server applications that are mostly command line or output to a webpage. Those companies only need to target a single distro and their customers are server admins that don’t expect the same level of usability. It’s a completely different market and one that works.
I never once claimed that it doesn’t. Linux is a PITA for companies like EA Games. That’s what it comes down to.
That is a major factor but even on netbooks where MS Office is not expected Linux has caused too many problems for new users.
Um that’s nice, Linux has a lousy game selection so it’s not as if all bases are covered.
Except for a docx compatible office suite, tax software, bookkeeping software, video editing, and more. The best open source applications get ported to Windows so I really don’t see what your point is.
Nice! So Windows users will be able to avoid programs that put them into vendor lock-in; they will be able to improve the program, study its source code, inspect it, etc. When they get rid of monopolies and of periodically being forced to pay for doing basically the same things they were doing before, maybe some ways of thinking will change.
> > Linux has a huge pool of open source applications,
> > so there is not much room left for large numbers of
> > commercial applications.
> Except for a docx compatible office suite,
Linux “office” programs can open most documents, but not all. That’s a usual thing, changing the format of documents and keeping some things secret… is an old way to keep a vendor lock-in and force users.
> tax software, bookkeeping software
That depends on the country, so my particular case would be of no use to most of people.
> video editing, and more.
I’ve used Kdenlive with good results to cut “ads” from saved TV shows, resize the videos and so on. It can also add titles, transitions, effects, etc. Enough for me and for most people. It has a lot of further capabilities as well and is being used in professional settings. We can see more at http://www.kdenlive.org/features
DOCX is the problem.
It’s an open format, albeit a complex one.
The problem is a lack of open source developers that want to work on office suites and other types of software that do not attract hobbyists. Hence the abundance of mp3 and torrent programs but no decent quickbooks alternative.
There are proprietary applications that can handle docx just fine. It’s an economic problem that needs to be solved with funding.
Yes, it’s a big one, and Microsoft keeps changing it.
Like it was written before:
I beg to differ
http://www.consortiuminfo.org/standardsblog/article.php?story=20070…
http://www.groklaw.net/article.php?story=2007011720521698
http://www.grokdoc.net/index.php/EOOXML_objections
http://www.grokdoc.net/index.php/EOOXML_at_JTC-1
Refer here:
http://www.groklaw.net/article.php?story=2007011720521698
(section – comments by Marbux):
File formats with no specification
…
However, the specifications for those legacy Microsoft file formats — the sole justification offered for duplicating the functionality of the OpenDocument standard — appear nowhere in the EOOXML specification and are unavailable to other developers. Yet those formats’ implementation is mandatory for conformance with the specification.
…
The unavailability of the specifications for virtually all of the legacy file formats also clashes irreconcilably with the verifiability requirements of section A.4 of Annex A to IEC Directives, Rules for the structure and drafting of International Standards, Part 2, (“[w]hatever the aims of a product standard, only such requirements shall be included as can be verified”). If compatibility with and implementations of the specifications for those legacy formats are mandatory for conformance with the proposed standard, disclosure of the specifications for the legacy file formats is necessary even to consider whether EOOXML achieves Ecma’s stated goal of compatibility with those formats.
…
Vendor-specific application dependencies
The EOOXML specification is inappropriately replete with dependencies on a single vendor’s software. As an example, “autoSpaceLikeWord95” (page 2161) merely defines semantics in reference to a legacy application whose specific behavior is nowhere specified. Instead, vendors are repeatedly urged to study the referenced applications to determine appropriate behavior. But no relevant specification is available for other developers to use and Microsoft’s Open Specification Promise grants no right to decompile and reverse engineer the company’s legacy applications.
You beg to differ that it’s an open format?
Well have a look at it for yourself.
http://www.ecma-international.org/publications/standards/Ecma-376.h…
I beg to differ here. MS Office does not produce documents called docx that conform to that standard at all. So that document is only a paper standard.
Also, nt_jerkface, please get your facts straight.
MS Office 2007/2003 produces Ecma 376, not to spec, with undocumented extensions, and calls it docx.
MS Office 2010 produces ISO/IEC 29500:2008 Transitional Migration Features. That is not properly Ecma-376 compatible, nor properly compatible with MS Office 2007/2003. Worse, ISO/IEC 29500:2008 Transitional Migration Features allows MS to use features in their files that are not documented.
So no way in hell is docx currently an open format.
To get to a true open format, we have to get to ISO/IEC 29500:2008 Strict, which MS says it hopefully will support in MS Office 12, and which might be pushed back to 14.
There are open source libraries that attempt to read this nightmare mess.
LibreOffice seriously has what they call “Laugh OOXML”, which is basically OOXML with a stack of reverse engineering to understand the undocumented MS parts.
And why were you quoting an expired standard, nt_jerkface? Yes, the ECMA-376 used by MS is dead. About time you do your homework, I think, before commenting again.
Is that the latest excuse? Funny how I’ve never heard of anyone having problems exchanging docx files made with either 2007 or 2010.
There is no excuse for OpenOffice being unable to open and write to basic docx files. Softmaker Office can do it, so why not OpenOffice? I may have to do a review to show how far ahead it is in this area. Once again we have an economic limitation in the open source world, not a technical one.
That is anecdotal evidence. You haven’t tested every possible combination of OOXML document. Frankly, considering the rat’s nest of a specification, I highly doubt that even Microsoft is able to test every possible combination…
The problem here isn’t *basic* docx, but whatever other legacy crap is embedded inside it. I’ve seen Pages mangle docx files…
Did you really read the
http://www.osnews.com/thread?466449
http://www.osnews.com/thread?466481
posts? Do you really think they can deal with unspecified (in the public EOOXML specification) format rules like “autoSpaceLikeWord95”, “useWord97LineBreakRules” and having binary closed blobs in a file?
Do you think it’s possible? 🙂
Yes, it’s a great idea. For example starting with the documents of
http://katana.oooninja.com/w/reference_sample_documents
And with more complex documents we would see how they could understand binary closed blobs and unspecified EOOXML specifications. All knowing that Microsoft’s Open Specification Promise forbids decompiling and reverse engineering the company’s applications.
If anyone reads the text that I gave, he sees the case above; there is also the “useWord97LineBreakRules” case that was not specified (and so you could not implement it), etc.
There are binary closed blobs in OOXML documents, without there necessarily having been any graphics or multimedia content embedded:
“When you get an OOXML document, you don’t know what is inside. It might use the deprecated VML specification for vector graphics, or it might using DrawingML. It might use the line spacing defined in WordProcessingML, or it might have undefined legacy compatibility overrides for Word 95. It might have all of its content in XML, or it might have it mostly in RTF, HTML, MHTML, or “plain text”. Or it may have any mix of the above. Even the most basic application that reads OOXML will also need to be conversant in RTF, HTML and MHTML.”
There’s more in that analysis:
http://www.robweir.com/blog/2007/01/calling-captain-kirk.html
Maybe for the use that you would give to it, KMyMoney would be useful for you. Which “use cases” would be for you?
http://kmymoney2.sourceforge.net/screenshots.html
We would like to know which ones those are (without using Microsoft’s proprietary libraries; that could tell us something).
KMyMoney is an alternative to Quicken, not QuickBooks. Please do not make the mistake of assuming that I am new to open source just because I don’t carry the revolutionary attitude.
Softmaker office has docx support.
There are open source libraries that let you read and write to docx files.
The problem is economic. Stallman’s plan of having hobby developers code everything has failed so that means LibreOffice needs cash if it wants to compete with MS Office.
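To make the “open source libraries can read and write docx” claim above concrete, here is a minimal, hedged sketch using python-docx (my choice of library; several other OOXML libraries would do). It only exercises the plain WordprocessingML parts, not the legacy-compatibility corners argued about elsewhere in this thread.
```python
# Write and read a basic .docx with python-docx (pip install python-docx).
# This is an illustration of the simple case only; embedded legacy blobs
# and compatibility options are a different problem entirely.
from docx import Document

# Create a minimal .docx
doc = Document()
doc.add_paragraph("Hello from an open source docx writer.")
doc.save("example.docx")

# Read it back and dump the paragraph text
for para in Document("example.docx").paragraphs:
    print(para.text)
```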
> > Maybe for the use that you would give to it,
> > KMyMoney would be useful for you. Which “use
> > cases” would be for you?
> KMyMoney is alternative to Quicken, not Quickbooks.
As I wrote: “Maybe for the use that you would give to it”. I was talking about your particular use.
> > > There are proprietary applications that can handle docx just fine.
> > We would like to know which ones those are (without using
> > Microsoft’s proprietary libraries; that could tell us
> > something).
> Softmaker office has docx support.
But can they “handle docx just fine”, as you said? Or do they hit problems in particular cases because of the non-public specifications of the format?
> Stallman’s plan of having hobby developers code everything
Where did you get that from? That is not real. Companies (http://www.linuxfoundation.org/about/members) are paying professional developers; professional developers from other companies also share the code that they write at their company, the code that they write in their free time, etc.
To be correct, it’s not a lie. Of course, people don’t read that VMware is only tested against the longterm release kernels. VMware on a Linux longterm kernel is many times better than VMware on Windows, due to the poor quality of a lot of MS third-party drivers. I really don’t count non-enterprise distributions that serve up crap as Linux: since they are not obeying the policies of the Linux kernel, they are not Linux in my eyes. They should not be Linux in anyone’s eyes. They should be treated as just a pack of incompetence to stay clear of.
Any user using a distribution that does not work well with upstream should expect to be hurt.
No, it was also designed not to be compatible with existing open source applications, and that is for the reason below.
Let’s do some basics here. Yes, there are in fact four closed source office suites for Linux that do docx, almost all from the EU. There is closed source bookkeeping software for Linux. On video editing you hit a true weakness, but that is being addressed.
Items like Nero that failed to sell on Linux are held up as examples that Linux people will not pay for software. The result is that a lot of bookkeeping software is available server-only on Linux. Since Nero, which is a bigger company than a lot of bookkeeping software makers, could not make a go of it, they will not try, because in their eyes something open source like K3b could jump up and take their lunch.
Seriously, something like K3b jumping up and taking their lunch is something software makers had better get used to.
In video editing, Lightworks is going through the process of becoming open source, and for all platforms. This effectively renders most higher-end video editing software no longer required.
So yes, the video editing lunch is about to be completely eaten.
OK, K3b: find me a port on Windows that any normal Windows user might install. The current Windows installer is 1 GB plus. Some of the best Linux applications really don’t want to port to Windows. The idea that all the best open source applications have made it to Windows is false. And even so, some of them, due to the differences in Windows, will always be more suited to a Linux environment. Windows memory management and Linux memory management are different, and some of the high-end applications simply don’t like Windows memory management, so they run slower and crash more often on Windows. That applies to closed and open source alike. This is why 3D rendering workstations are more often than not Linux or OS X, not Windows. Even OS X has its bug in the amount of memory a single application can allocate.
So unless MS does a radical overhaul, there will always be applications you would be nuts to run on Windows due to Windows design flaws, just like there are some classes of applications at the moment you would also be nuts to try to run on Linux due to its design flaws.
I.e., no OS can run the best of everything at this stage.
You still haven’t provided any examples of VMServer being broken on 2003 or 2008 from a system update.
At the end of the day Windows Server has been a more reliable platform for VMware. I know that is heresy to you since it is server software, but the record is clear.
The problem of VMWare and kernel updates is an established problem.
http://stackoverflow.com/questions/827862/why-i-need-to-re-compile-…
http://www.tuxyturvy.com/blog/index.php?/archives/48-Automating-VMw…
Your claim of it breaking just as often on Windows was a lie. I hope everyone takes note of the fact that you will tell bald-faced lies and just hope that no one questions them.
Are you talking about VMware Server? That’s an entry-level virtualization product from VMware; it runs on top of any supported OS on the planet. And if your argument is valid, why does VMware ESXi run on top of a Linux kernel instead of Windows, if Windows is more economically feasible because it won’t give VMware headaches?
Please check:
http://en.wikipedia.org/wiki/VMware_ESX#Linux_dependencies
nt_jerkface, if you do not know systems programming, please stay calm and accept it.
Back in the day (like 20 years ago), EA had no problem porting to a myriad of platforms. Ah, the good old 80s, when there was actually competition. Yeah yeah, I know games nowadays are more complicated, require more manpower, etc. Those are all valid points.
On the other hand, there are OSS and commercial game development platforms that provide easy cross-platform deployment.
I can’t remember having any problems opening docx files in OOo.
If one wanted an objective opinion on Windows vs Linux, you’d expect anyone to go to Thom? Seriously?
My last Windows upgrade was from XP to XP64 and it sure came with driver hell, including video drivers. I also remember a ton of problems when people were upgrading to Vista. And no, this is no evidence of Windows being worse than Linux either, it just shows that there’s no basis for your ‘more likely’ since there are certainly flaky drivers in both Windows and Linux.
Totally different market segments; I’m pretty sure companies realise that targeting the Linux desktop with ‘fart apps’ would be commercial suicide, just like they aren’t targeting the Windows desktop with them either.
Linux has a small desktop market share, which is reflected in the amount of commercial software available for it. However the whole ‘not appealing to proprietary developers thing’ is just bullshit. If the market is there then so are the apps. Just look at 3D/SFX, Linux is huge there and that is why all the latest versions of commercial top applications like Maya, XSI, Mudbox, Houdini, Nuke, Renderman, etc are available for Linux.
The reason this market exists on Linux is because it’s the platform of choice for pretty much every large SFX/3D company, so despite the overall small market share, Linux is extremely well supported in this segment.
All this proves is that Linux is primarily used by people with a high level of technical proficiency, or where the core OS can be hidden from the user (such as Android devices).
There will never be a market for it anywhere else because of core usability issues.
Yes I would, he has an interest in alternative operating systems and has given Linux a fair trial on numerous occasions. From the way he writes I can tell that he wants to like Linux but has had too many problems with it. He sure as hell is no Paul Thurott.
XP to XP64 is a major upgrade. XP64 is based on Server 2003. You went from a desktop to server OS.
Linux is far more likely to break drivers between minor upgrades. You’d have to be pretty deluded to believe otherwise. The problem is not with the actual Linux drivers but a kernel level driver model that is not designed around end users or hardware companies.
Fart apps? There are hundreds of full length games on the iphone. Why isn’t The Sims 3 available for Linux? It is on every other platform including the iphone.
No distro is trying to cater to proprietary developers. They have software distribution systems that are designed around open source. Ubuntu has been moving towards supporting proprietary developers but is still centered around the repository system which favors open source.
Linux is used in rendering farms but is a minority platform when it comes to desktop drawing.
Please tell me you don’t think this is due to not having a stable kernel ABI. Please? Userspace apps like games do have a stable interface.
The problem here is what you call a minor upgrade.
2.6.37 to 2.6.38 is technically not a minor upgrade; it is a kernel rework equal to the kernel change between Server 2003 and Server 2008. Yes, the number of alterations to the kernel between 2.6.37 and 2.6.38, which happened in three months, is about the same as the number of alterations that happened between 2003 and 2008.
2.6.35.1 through .11 are all minor upgrades. This is a longterm kernel, and minor upgrades on it don’t break things. These are all driver compatible; if the distribution builds them all with the same compiler, there are no issues.
Finally, there are the Ubuntus out there, who apply 12+ MB of non-upstream patches that hardware makers don’t normally see first. Then people wonder why they get a black eye.
Let’s blame Linux, it’s simpler, rather than blaming the distribution you are using for being incompetent: for not warning you that it is doing a major OS upgrade that could turn your computer into swiss cheese, and for hiding the boot loader, making it hard to swap back to the prior kernel that worked. Yes, that is a skin-saver in a lot of cases: upgrade failed, switch back.
The problem is distributions and users not understanding the differences, made worse by users blaming the wrong party. Upstream Linux does provide points of stable API; it needs distributions to provide a stable compiler for a stable ABI.
I am sorry to do a PS like this, but I missed something most people don’t know that is completely critical and explains a lot of the flaky driver issues down to the ground.
Fact 1: only longterm Linux kernels are in fact tested against closed source drivers.
Fact 2: longterm kernels come only once every 12 months. This is what hardware companies have asked for.
Fact 3: longterm kernels are maintained for at least 5 years, and that can be extended to 10 years if there is particular demand from hardware makers.
Fact 4: only updates designed not to interfere with the driver-to-kernel ABI (as long as the same compiler is used) are applied to longterm kernels.
Binary drivers for Linux normally ship with a small source wrapper to cover the interface issues caused by distributions and end users building with different compilers.
Distributions have never agreed on a universal compiler for building kernels. For longterm kernels it would be really, really nice if they would. The non-longterm kernels you should not be using with binary drivers anyway; they can do whatever they like and I would not care, as long as users had the option of a cleanly built longterm kernel.
The problem here is that Ubuntu 11.04 is being released with a 2.6.38 kernel; the next longterm is 2.6.39. Out of the box, no option is provided to install with 2.6.35.11/12 for people using closed source drivers. So yes, expect trouble. The same also applies to X.org and other parts. The closed source drivers sync once per year, that is it, unless companies are feeling really nice or really pressured.
Ubuntu 10.10 comes with something close to 2.6.35, but 12 MB of non-approved patches have been applied, reducing compatibility.
So yes, flaky drivers are explained in most cases by either a flaky/modified kernel provided by the distribution, or by providing an untested kernel for kernel-mode binary drivers. You could view the kernels in between the longterm versions as being like Windows beta versions when it comes to closed source drivers: of course they are going to be trouble. Expecting anything else really shows you are not educated in how this works.
These are things distributions should be completely yelled at for doing, rather than yelling at the main Linux kernel to provide a stable driver ABI. The kernel has a pattern of providing a stable API with a clear, dependable cycle.
Also, around this, they are providing a complete set of userspace solutions that are not affected by the compiler or kernel selected.
What more can the Linux kernel people really do? It’s the end users’ job now to be on distributions’ backs to provide the right kernels. Failure to provide the longterm kernels should be enough grounds not to use, test or demo that distribution, or at least to demand the option of non-messed-with longterm kernel(s) for binary drivers.
And hopefully force distributions into an agreement on a common compiler for non-messed-with longterm kernel(s).
This is why I get so pissed off with the “we want a common kernel API” request. The issue is done, dusted and sorted every way possible, other than distributions messing it up.
Yes, the claim that the Linux kernel does not do everything possible for “hardware companies” is complete BS, particularly when you wake up to the fact that 70 percent of the people who can vote on these topics are hardware companies’ staff. Yes, they win by majority vote every time.
Please go and look at the board of the Linux Foundation. Remember, they are Linus’s bosses; they are behind his wages getting paid.
To paraphrase another thread….
I can only recommend developers to try to hack with only Windows in mind and experience the freedom and the opportunities this offers you. So get yourself a copy of Windows Game Programming With DirectX, ignore everything it says about other operating systems and hack away your amazing Windows games.
By what metric? I would say, though, that he is not quite as partial as he is with the Xbox 360 vs PS3 coverage, which is insanely biased.
So? Microsoft gets carte blanche because they release new versions very seldom and thus breakage is to be expected?
I’m assuming you are talking about ‘binary blob’ drivers, which can break due to changes in the kernel ABI/API. And certainly that can be a problem if you live on the bleeding edge and the providers of said binary blobs are not up to speed. Having been using a ‘rolling’ distribution (Arch Linux) for a couple of years, I’ve yet to see my only binary blob dependency (NVidia) lag in providing up-to-date binaries, but afaik they are very good at this, so maybe they are not representative. But since all other hardware I use is supported straight out of the kernel, I’ve had no problems when upgrading (technically NVidia is supported through Nouveau as well, which would allow me to omit binary blobs altogether, but there is quite a performance difference for my card).
Because of market share, again it’s not a problem shipping proprietary products (with DRM no less) onto Linux if the market is there, hence the strong showing of 3D/SFX software available (which all contain DRM mechanisms). But the desktop market just isn’t big enough to warrant interest from the big players in say, the game industry. That’s not to say that they don’t sell to Linux users though, Wine makes it possible to reach Linux users without paying a penny to do so, 3d hardware acceleration together with virtualization is another option but that requires the need of a guest operating system.
The only thing I can think of here is that there isn’t any automated way to push DRM onto end user systems, which isn’t really an issue since a binary distribution can easily setup their own DRM (you know, like commercial software does on Windows), which is also what’s being done on those aforementioned 3D/SFX packages for Linux. Apart from that, how are repositories discriminating towards proprietary software?
No, that’s no longer the case and hasn’t been for quite some time. This is the reason we are seeing native Linux ports of pure modeling/sculpting programs like Mudbox, which unlike ZBrush had performance problems running under wine. Of course programs like Maya/XSI already provided native modeling capacity for Linux. But you are right that Linux as render farms is what opened the flood-gates for Linux within that industry, given the results it didn’t take long before companies wanted to run their entire pipeline through Linux, which is what has resulted in Linux being so well-catered in this area.
Well said.
I just wanted to add that 3D computer graphics, 3D animations, etc., were also done on Unix workstations, so the move of software and experts to Linux workstations and Linux render farms was a clear one (and cost-effective :-)).
First thing: please provide a case requiring a kernel module. Remember, most drivers on Linux can be done as a kernel module or in userspace. The userspace ABI for making drivers is 100 percent non-changing and kernel neutral. Yes, 10-year-old userspace drivers on Linux still work today with the latest kernel, no issues. Some userspace drivers for Linux also run on FreeBSD without change.
Now that you have a case requiring a kernel module, look at what damage that module can do if it malfunctions.
Basically, you want a secure, stable OS. Closed source kernel modules are not compatible with that. So there is no requirement for a stable ABI for kernel space. Treat Linux like Minix when creating drivers as a closed source third party and you are basically fine. Don’t, and Linux will fight with you as it corrects itself to improve performance and security.
Layered subsystems: even Windows has those. Subsystems being redone is the natural development of all OSes.
Another classic case of a person putting up a baseless argument. The Linux kernel would not have the huge amount of embedded usage it has if creating closed source drivers were tricky. Also, userspace drivers put you completely outside the path of the Linux GPL licence and its requirements.
As a kernel module, on the other hand, you can link by mistake against GPL-only functions, putting you in breach of the GPL. The way Linux is designed is to protect closed source developers’ legal asses.
Linux is designed to piss off closed source driver developers, period.
They won’t even keep a subset of the kernel stable for VMWare.
How long are you going to defend them mr. ham? They’ve gone with the F*** proprietary drivers attitude for years and Linux just sits at 1%. God forbid they try something different.
Even when hardware companies submit drivers they still have the support lag problem.
You know I bet Microsoft loves their F*** proprietary companies attitude. A match made in heaven actually. Ideologues pissing off potential partners which just keeps them on the side of Microsoft.
Interfaces are not supposed to change rapidly. It’s a key software engineering concept: the interface remains the same while the implementation behind it changes.
If the interface does need to change, you deprecate the old one and give people time to move over, as in the sketch below.
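A quick illustrative sketch of that deprecation idea, with invented names (a generic Python example, nothing from any real kernel or driver API): the old entry point keeps working, warns, and forwards to the new implementation so callers get time to migrate.
```python
# Hypothetical API: keep the old interface alive while steering callers to
# the new one, instead of breaking them outright.
import warnings


def fetch_status_v2(device_id: str, timeout: float = 1.0) -> dict:
    """New interface: explicit timeout, richer return value."""
    return {"device": device_id, "status": "ok", "timeout": timeout}


def fetch_status(device_id: str) -> str:
    """Old interface, deprecated but still functional."""
    warnings.warn("fetch_status() is deprecated; use fetch_status_v2()",
                  DeprecationWarning, stacklevel=2)
    return fetch_status_v2(device_id)["status"]


print(fetch_status("eth0"))  # still works, but emits a DeprecationWarning
```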
The only reason they keep changing it is either poor design, or it is a deliberate attempt to force other devs to open source their drivers (if this is true, it is another case of “freedom, but as we tell you”).
People can bash Windows all they like, but a driver written for Windows XP in 2001 will still work with Windows XP today; the same is also true of drivers between Solaris versions.
Linux devs really have no excuse.
Funny, Linux did support the Unified Unix Driver Standard. No drivers were made for it. It was tying up a kernel-mode ABI for no gain; developers insisted on calling the internal APIs they could see, not the stable ABI. It was even supported in the 2.6 line. The reason hardware makers gave repeatedly for not using the Unified Unix Driver Standard was that the performance hit (less than 0.01 of a percent) was too great.
Pardon: Windows XP drivers written in 2001 still work with Windows XP today. That is frozen kernel progression. Yes, 2.4.37.11, the last release of the 2.4 tree that was first released in 2001 and that has a stable kernel ABI across the whole 2.4 line, has basically no closed source drivers either.
Now let’s move on to Vista. Vista, just like Linux kernel 2.6, is pushing lots of drivers to userspace for the same basic reasons: no need to tie your hands behind your back.
The problem is that people like you are blind.
http://www.kernel.org/ Please note the kernels tagged longterm. They will be API compatible for as long as XP, if not longer. Why not ABI? Something interesting: it turns out you must use the same compiler to have a stable ABI. That is why MS shipped driver development kits containing a different compiler from the normal Windows SDK. If you don’t have the same compiler, you must wrap the API, and that does cost performance.
Anyone who builds OpenSolaris themselves with different compilers has also found out that, from time to time, Solaris closed source drivers don’t run stably either. So this is a choice between stable and not stable.
Userspace is already wrapped with the syscall framework, and in userspace it is simpler to provide compatibility libraries. Something people are not aware of is that some of the old syscalls on Linux called from userspace are not processed by the Linux kernel but redirected to userspace libraries. So historic compatibility does not mean kernel bloat.
A userspace driver framework is far more stable. Take drivers written against them, like CUPS drivers: you can pick up CUPS drivers from 1880~ from a few different Unix systems and use them on current Linux by using loaders, and drivers from a 1993 Linux system as well.
Basically, userspace properly solves the kernel-to-kernel compatibility issue. And yet driver support from hardware makers has been as bad as it always was.
I am sorry, but the userspace framework on the scale of stability is massive, far surpassing the time frame any kernel-based ABI could offer. It is done in such a way that it never needs to be revised in a way that breaks backwards compatibility; at worst, some syscalls get redirected to userspace for userspace handling.
http://www.yl.is.s.u-tokyo.ac.jp/~tosh/kml/
Something I have not mentioned so far is that the userspace-kernel bridge in Linux can be turned into a kernel ABI with a third-party patch, keeping many of the advantages.
So basically, shut up about asking for a kernel ABI. Linux developers are providing driver makers with a highly stable ABI that can be made to operate in kernel mode if required. There are just not enough drivers to justify integrating Kernel Mode Linux mainline.
I really should have thought all this out.
The other thing about Linux userspace drivers is that they are really forever drivers, in many ways.
QEMU can wrap a userspace driver to run on basically any CPU type. So the maker only gives me 32-bit x86 and I have an ARM processor? No problem. Yes, inside QEMU it runs a bit slower, but at least the driver will work.
Chroots/OpenVZ zones can be created to provide an old-system appearance to a userspace driver.
And of course there is the Linux kernel’s userspace syscall offload, so the kernel can drop syscalls and userspace never needs to know, since they are now being provided from userspace.
Hardware makers don’t particularly like the idea. Once Linux has an open source or userspace driver, it has the possibility of having that driver for every CPU and arch type Linux supports.
Yes, the Linux design is going after the same thing the MS .NET OS dreams have been going after.
Of course, being userspace code has not excluded it from being loaded in kernel space, and the userspace API is kernel neutral. So what is the problem? Linux had a problem that every other OS has suffered from through time, and they designed a solution. Most likely the only solution that can properly work in all cases, unless you go to something like a Java or .NET OS core.
Of course, there is a disadvantage to userspace drivers: no nasty stunts can be done to stop reverse engineering.
So all drivers are better off in userspace?
nt_jerkface, good question.
Some drivers at the moment would perform badly if done in userspace, mostly due to context switches. But if there were enough demand, merging Kernel Mode Linux would become important; that basically fixes those performance issues completely.
There are a small number, like memory management, CPU initialization and first video card initialization, that simply cannot be done in userspace: basically the ones a base kernel image of Linux will not run without. Yes, they are still drivers, even though they are in the single-file kernel blob.
All the module-loadable drivers, other than a very small percentage (the ones that have to do operations from ring 0, like virtualisation; ring 0 is not on offer to userspace for very good reasons), would in most cases be better off done using the userspace APIs.
In fact, some drivers in the Linux kernel are being tagged to be ported to the userspace API simply to get them out of kernel space.
The most important thing about being done on the userspace API is that, if you are having kernel crashes and you suspect a driver, and your drivers are done against the userspace API, you can basically switch to a microkernel model and run the driver in userspace. An application or driver crashing in userspace normally does not equal a complete system stop, making it a little simpler to find that one suspect driver.
So why aren’t they? FUSE, CUSE and BUSE have only existed for the last 8 years or so, so drivers from before that were done in kernel space because there really was no other way that would work.
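As a concrete (and hedged) illustration of what a userspace driver looks like in practice, here is a toy FUSE filesystem written against the fusepy bindings — my choice of library for the sketch, not anything from the kernel tree. The whole “driver” is an ordinary process: if it crashes, only the mount goes away, and the kernel keeps running.
```python
# Tiny read-only FUSE filesystem (pip install fusepy) exposing one file.
# Everything here runs in userspace; the kernel only sees the FUSE protocol.
import errno
import stat
import sys
from fuse import FUSE, FuseOSError, Operations

HELLO_PATH = "/hello.txt"
HELLO_DATA = b"served entirely from userspace\n"


class HelloFS(Operations):
    def getattr(self, path, fh=None):
        # Report a directory for "/" and a read-only regular file for hello.txt.
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == HELLO_PATH:
            return {"st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1, "st_size": len(HELLO_DATA)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", HELLO_PATH[1:]]

    def read(self, path, size, offset, fh):
        return HELLO_DATA[offset:offset + size]


if __name__ == "__main__":
    # Usage: python hellofs.py /some/empty/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True)
```
Run it against an empty directory and the file shows up under the mountpoint; kill the process and only the mount disappears, which is the whole point being made above about userspace drivers and crash containment.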
Next: kernel space does have some advantages, and those advantages explain why the API in there is unstable.
The main reason for using kernel space over userspace is speed. For that speed there is a price: a kernel-space driver can bring the system to a grinding halt with a minor error. No such thing as a free lunch.
Because kernel space is for speed, any design error has to be removable at any time, so the APIs in kernel space are in flux. The BKL was a classic example: a good idea at the time, but many years later it had to go. A stable kernel ABI based on internal kernel structs would have prevented that removal happening as fast as it did. The very reason you use kernel mode explains why Linux kernel mode is in flux.
So the deal you choose between with userspace and kernel mode is basically this.
Userspace: highly stable, no issues with future versions of the kernel, and unless something really rare happens it never crashes your computer (i.e. the driver might just get restarted), but slightly slower; depending on the device this may even be undetectable, and it can be cross-platform and cross-arch at basically the same speed.
Kernel space: fast, can crash your computer with even the smallest error, will have issues with future versions of the kernel at some point due to ABI/API/locking changes, and is normally not cross-arch or cross-platform; if it is cross-arch or cross-platform, it is normally as slow as the userspace API used from kernel mode, or worse, slower than using the userspace interfaces in the first place (what is the point?).
Note that those deals apply to Windows, Linux and Solaris to different degrees. Linux, with its faster major kernel version cycles, will show issues with future kernel versions more often. A lot of people remember getting Windows 7, and XP before it, and finding a stack of devices no longer worked safely, i.e. adding the driver upset the computer.
The risks in kernel space are why Linux people want the source code in there, so it can be fully audited. Basically: do you like the blue/red screen of death, or the Linux kernel panic? If no, you really should agree with what the Linux developers want. Even MS is giving up on the idea; most of the gains of kernel space are not worth the loss in stability.
The way I put it is that closed source driver makers wanting to use kernel space are like someone carrying possible drug-use gear into an airport and trying to refuse having their ass and other private areas inspected. Basically, inspection should be expected.
Do I expect driver makers, or any poor person who has to be inspected at an airport, to be happy about it? No, I don’t; that would be asking too much. But they should understand why they got what they did.
It was a rhetorical question that both you and I know the answer to. GPU drivers are the big stinking elephant in the room and there is also the reality that most hardware companies would prefer to write a proprietary binary driver for a stable interface.
Why not let users decide? The security paranoid can use basic open source drivers and everyone else can use proprietary drivers.
Like most defenses of the Linux driver model, yours ignores the problems resulting from that model that users continually face.
The userspace interface is only mostly stable. Every year you defend the Linux driver model, and every year a new Linux user gets some trivial device broken by an update and goes back to Windows or OSX.
There is no Linux distro that can be trusted to auto-update itself along with a typical desktop application suite and a basic set of peripherals. No Linux distro has a reliable record. Linux would have far more than 1% if the people at the top were concerned with building a system that finds a balance between the needs of users, open source advocates and hardware companies. But Linux is designed by open source advocates with little regard for users or hardware companies. Linus doesn’t want proprietary drivers in his precious kernel and will gladly sacrifice marketshare to achieve this goal.
They do decide. They decide to update to a new kernel/distro every 6 months rather than sticking with LTS/Enterprise distros that freeze the API for years and test binary drivers extensively. Users decide that's not important to them.
No, they decide to avoid open source desktops.
This in fact shows how little you know. A GPU driver itself is two halves: one to prepare code for the GPU and one to control where the GPU takes and places memory.
The code prep is in fact, in most cases, a speed boost if done in userspace. Because the GPU can be made to write back anywhere in memory, it is critical that the memory management controls are in kernel space.
Nvidia, of all companies, has both code prep and memory management in kernel space. The result is a more unstable, more harmful driver than it should be, since bugs in processing instructions for the GPU can crash the complete kernel. Worse, the instruction processing for the GPU can receive anything from userspace. Processing for the GPU is a highly complex operation; it is very much like running a compiler in kernel space, which is not wise at all.
Would giving up the secrets of memory management for their GPUs really expose their trade secrets? No, it would not, since that information is already known.
Remember I said there are some items that are not suitable. Closed source memory management is one of those things, since it can create security flaws so easily. What needs to be in kernel space is normally exactly the stuff that has to be 100 percent audited to have a secure system.
ATI/AMD, VIA and all the other video card makers have accepted that the memory management of video cards and the processing for video cards should be split. Nvidia is the only holdout. Nvidia GPU drivers on Windows Vista and 7 are not designed to the MS specs requesting that split either.
The elephant in the room is not GPU drivers. It is Nvidia, and it is a problem for all OS makers, not just Linux. Instructions given for the stability of the OS are not being obeyed by Nvidia.
Very little really needs to be in kernel space, and whatever is not required in kernel space the driver makers can keep closed source as much as they like. "Only mostly stable" is not true at all.
Besides, the mainline Linux kernel is only lacking distribution support for the longterm kernels to provide a kernel ABI that closed source drivers can use: distributions not messing with the longterm kernels and agreeing to use the same compiler.
The userspace API provided by the Linux kernel is a workaround for the lack of cooperation from lots of distributions. It also provides increased stability in many cases.
The simple fact of the matter, nt_jerkface, is that you don't know the topic and every argument you have made is a dead end, not based on the facts of the situation. Each time, it leads back to particular parties doing the wrong things on all platforms.
That’s irrelevant. It needs kernel space access which is the point.
You make this claim and yet it is widely agreed that Nvidia has the best (proprietary) drivers for Linux. Go to any gaming forum and you will see a consensus that Nvidia is the way to go. BTW the “just open yer specs” excuse is no longer valid after AMD did this and the result was not high quality video drivers from the community.
Like I said before the kernel could be designed to accommodate both open and closed drivers. Provide a stable interface for binary drivers that cannot run entirely in userspace and let users take the risk if they want. It’s called a compromise, something status quo defenders like yourself don’t seem to understand. But again how dare I question the resounding success that is Linux on the desktop.
And yet NVIDIA and AMD engineers want to do it. This reminds me of Greg telling hardware companies what their needs are instead of asking them. A wise OS development team builds a kernel around the needs of partners that can add value to the platform. Linux is designed to be hell for hardware companies that want to write binary drivers. Most companies want to write binary drivers and be able to provide the latest version directly to users. That is the disconnect.
A split which still requires kernel access.
That is BS, are you going to tell me that userspace drivers never get broken? If a company writes a proprietary userspace driver, how long can they expect it to work? How many years? Oh but let me guess, your solution is for them to open source it so they don’t have to risk getting it broken. Back to the endlessly looping defense for the current model which is really just anti-proprietary.
1. You should open source your driver
2. You don’t have to though, we have a stable userspace interface.
3. Oh I see you have learned that the userspace interface is only mostly stable and there is no guarantee that your proprietary driver will even last 6 months. GOTO 1.
There is also the support lag problem which you haven’t addressed. With Windows and OSX video card companies can release a new driver overnight. With Linux it can take months to get accepted.
At the end of the day the Linux driver model is not designed to accommodate hardware companies. It imposes expectations on them that do not exist in Windows or OSX. This is not a wise strategy for a platform that has such a small marketshare. It would make more sense to at least accommodate the needs of hardware companies temporarily and then push for open drivers at a later date. But Linux is not about strategy or marketshare, so its fans will have to accept its niche status on the desktop while MS and Apple delight in its limited ability to compete.
The agreement reached between the video card makers explains why this is not important.
Excuse me. AMD engineers sat down with Linux developers and talked through the issues. The AMD engineers agreed that the security issues were valid; in fact they had documented bug reports in their drivers that could have led to major security issues. So they could not say that changing the structs and opening up the in-kernel stuff to third-party audits was not required. Yes, they were unhappy about having to, but they were prepared to put the security of the OS first. The current ATI closed source drivers are moving what they can to userspace. Long term, AMD will drop everything bar a few libraries as closed source, those few libraries covering trade secrets and patents.
Interesting thing: moving the GPU compiler to userspace in fact increased the speed of the ATI closed source driver. Kernel space was actually slowing things down because of the number of copy operations.
The next issue that came up in the agreement was the dual initialization of video cards, where the kernel on startup has to fire up the video card to get text out and the like, and then down the track another driver has to start. The issue was that kernel error messages were being hidden by the video card makers' drivers.
AMD pointed out that dual-initializing video cards was in itself suspect. So this means the main kernel image has to include enough to initialize the video card far enough to do whatever else is required.
This led to the invention of AtomBIOS. It is a binary format that contains all the information the Linux kernel mode side needs to perform the required operations on the video card, including sending memory to and receiving it from the GPU. So no closed source driver in kernel space is required for recent AMD/ATI hardware. The AtomBIOS goes on the video card itself and is there for the OS to use; even AMD's own video card drivers use it. The kernel mode requirement for AMD video cards is covered by AtomBIOS, their own format. It is even used by the Windows drivers and has improved the stability of AMD drivers on all platforms.
The main reason for AtomBIOS is that at install time you cannot predict which driver will be needed. With the card fired up correctly, loading the driver should not require a reboot or a card reset (again, some stupid distributions do one anyway). And yes, it is possible with the way Windows fires cards up for problems to be exposed later, or for the video card to block blue/red screens from being displayed.
AtomBIOS is CPU- and OS-neutral, with a decompiler provided by AMD so it can be inspected for defects by anyone.
Yes, AtomBIOS is why 2D support in Linux for ATI/AMD cards is almost instant when new cards are released. AtomBIOS also contains things like how to do power management on the card. One of the other key things AMD included was the means to software-load the AtomBIOS in case the one on the card is defective.
A lot of ATI stability issues on Windows were in fact solved by a Red Hat developer auditing the AtomBIOS code and finding a few minor-looking errors.
Out of the talks also came the longterm kernel deal, to make keeping closed source kernel mode drivers working simpler during the transition.
The only party not agreeing here is Nvidia. With every other party the agreements have come down the same way. The Linux developers gave the closed source makers a chance to make their case that there was no security risk, or that there was some way to contain it.
So no, AMD cannot be used here; they are in favour of the current state of affairs. This is another case of not doing your homework. That you used AMD shows you know nothing about the subject at all. So shut up, do your homework properly: find the long debates on the kernel mailing lists and read what the outcomes were, and note that the parties did in fact walk away completely happy with the outcome in the end.
Nvidia so far has not sat down at the table and told the world exactly why not, so they have not said what their exact issues are. The only thing they did say on the topic is that they would never take legal action against any party making open source drivers for their hardware.
Basically Nvidia is the holdout and they are not stating why. So you really don't have a good case here.
Strange: 12-year-old Brother userspace drivers for Linux still work unchanged today. Most of the video stack interface changes that are happening are at the request of AMD and Intel, because the simple fact is that the old system could never be made to work properly.
Old X11 userspace drivers with memory management in userspace still work today. Not that anyone would want to use them, since then you have the framebuffer and X11 both trying to manage the video card, which leads to crashes.
The alterations underway are to solve major problems in the Linux video stack design. This is not a closed vs. open issue: if the design is crap, open or closed drivers don't change that fact. Changing the design will of course create disruptions.
Rule 1: a video card must have only one master. Give it two or more masters and it will not like you. The old Linux designs at worst gave the video card four different masters. The userspace solutions still work today using the old ways; they are of course only as stable as they were back then. Userspace is not a magic solution, it's a dependable one.
Oh how noble of them, they must have been shown the light! Answer me this, if AMD and Nvidia had their way, would they ever release open source drivers? Would they provide a binary interface to target or would they keep the status quo? Just because they made a few positive p.r. points does not change the fact that they have hated supporting Linux when compared to Windows or OSX.
Nvidia may be the new prick of a hardware company, yet they have the best GPU drivers for Linux (which you haven't denied).
Oh look, anecdotal evidence. It doesn't change the fact that userspace drivers have been and will be broken. There is no guarantee that a userspace driver will last even a year. It's a mostly stable interface. Or would you like to direct me to a time guarantee that Linus provides?
The difference between Windows, OS X and Linux performance with the ATI/AMD closed source drivers has shrunk massively, and that is all due to the changes they are making to meet what the Linux developers asked for. Windows and OS X have also seen performance and stability gains from those same requested changes being applied to those platforms. If a request in OS design is valid, you normally gain by applying it everywhere. The request is completely sane.
Nvidia did release the nv driver as open source a long time ago. The old R100/R200 open source drivers came directly from ATI.
In fact most hardware companies will hand out specs when they don't fear competitors doing evil things to them; most of that fear traces back to Utah-GLX in 2002 and Nvidia's interactions with it. After that, a lot of video card makers got very careful about handing out specs. Pre-2002, getting the interface specs for most video cards, covering all the stuff you need to do in kernel mode, was quite simple; most of them would post you a printed book for free just for asking.
The hard part for a lot of people to accept, nt_jerkface, is that hardware companies going "mine, mine, mine" is not natural for a hardware company, at least not for proper ones that design their own hardware, since they make the most money the more people use it. If a proper hardware company is doing this, they have either been burnt in a deal or something else is wrong internally.
Now of course non-proper hardware companies, i.e. brands who don't design anything at all, like hiding that fact.
The closed source drivers have a few legal issues. Almost 90 percent of AMD's open-sourcing process for ATI has been spent in the legal department auditing everything ATI has ever done. The reason ATI did not like dealing with open source became very clear when AMD took them over: their internal legal processes were basically non-existent. In fact, ATI did not even know there were patents they were paying to use on every one of their cards that they had never actually used. No wonder they were not as profitable as they should have been.
On the open source driver side there are still major stack issues. 3D drivers don't magically form overnight; most require years of development behind them to be any good.
The Linux developers in the main kernel are not past common sense: it is going to take a while, so providing a way to support closed source drivers was wise.
The problem is that this doesn't extend up into a lot of the distributions.
Again, for me Nvidia is a major pain: they are the worst company in the world to deal with when I am building embedded systems that don't use X11. ATI and Intel (even with its horrid performance) are better for that kind of work.
Nvidia is only the best in some cases as well. With render farm cooling issues you can be better off with two ATI cards than one Nvidia card, due to the massive heat Nvidia cards produce. Having the best X11 drivers doesn't really amount to a lot.
To be correct: a userspace driver built using FUSE, CUSE or BUSE, statically linked rather than dynamically so it is only using the kernel-to-userspace API, will be good for the next 10+ years, no problems. A CUSE or FUSE one would also work on FreeBSD in Linux emulation mode, so you have just produced a driver for two platforms at once.
Syscall API stability is something you change at risk of death in the Linux world.
Syscall alterations that are not 100% compatible with old software only happen when the a or b parts of an a.b.c.d kernel version change.
So 1.0, 1.1, 1.2, 1.3, 2.0, 2.2, 2.4, 2.6: yes, eight times in Linux's complete history.
All the 1.x stuff happened in the first 4-5 years of Linux's life, when Linux was still working out what it should become. MS's early years are not exactly what you would call sane either.
From 2.4 to 2.6 the way firewalling was done changed, so if your driver had nothing to do with firewalling it was not affected at all. Remember, 2.4.0 is from the year 2001, so supporting any userspace driver from 2.4.0 is still possible; there just were no userspace network drivers back then.
2.6.0 in 2004 introduced network drivers in userspace.
2.2 to 2.4: hardware SMP support, requiring all drivers in kernel space to be updated. Userspace API changes: nil. Possibility of breaking backward compatibility: nil.
2.0 to 2.2: introduction of the BKL (wish we hadn't) to provide basic SMP. Userspace API changes: nil.
So the last major disruption was in the middle of 1996 with the release of 2.0, and that is the release where the mandate against messing up syscalls came into existence. The 1.x line had no such mandate, yet even then there was not a 100 percent change when the 2.0 line started. And you seem to be still talking as if it were prior to 1996.
Sorry, the facts are against you. Userspace drivers have not been broken by anything the Linux kernel itself has done in over 14 years.
Yes, distributions at times have done things that break userspace drivers by screwing up the libraries they depend on. Installing an old Linux in a chroot can get around that issue as well.
The evidence that the interface is not stable basically does not exist, nt_jerkface. And every one of those points the userspace ABI lived through unharmed was a "rewrite your kernel drivers no matter what" moment, because the complete OS internals were being turned upside down at the time.
The historic data completely backs writing for userspace: it is stable. The historic data also tells you to be careful which userspace libs you depend on, because distributions will mess with them. The better solution: don't depend on them.
There is your rock-solid stable API: the Linux kernel syscall interface. Use anything else and you might have problems.
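As a hedged illustration of that "syscall interface only" claim, here is a small sketch of a userspace network device using the kernel's long-standing TUN/TAP support. The interface name utap0 is made up, it needs root (or CAP_NET_ADMIN), and everything it touches is plain open()/ioctl()/read()/write() on the stable kernel-to-userspace interface:

/* Sketch: a virtual Ethernet device served by an ordinary process via /dev/net/tun.
 * Only stable kernel-to-userspace calls are used: open(), ioctl(TUNSETIFF), read(), write().
 * Needs root (or CAP_NET_ADMIN); the interface name "utap0" is just an example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) {
        perror("open /dev/net/tun");
        return 1;
    }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;          /* raw Ethernet frames, no extra header */
    strncpy(ifr.ifr_name, "utap0", IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        perror("TUNSETIFF");
        close(fd);
        return 1;
    }

    /* Frames the kernel routes out of utap0 now arrive via read(); replies are
     * injected with write(). The "driver" logic lives entirely in this process. */
    unsigned char frame[2048];
    ssize_t n = read(fd, frame, sizeof(frame));
    if (n > 0)
        printf("got a %zd-byte frame in userspace\n", n);

    close(fd);
    return 0;
}

Statically link a program like this and, going by the history described above, only a change to the syscall layer itself could break it.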
> That’s irrelevant. It needs kernel space access which is the point.
net_jerface, did you know that the subject you are on right now demands sufficient low-level knowledge of how software drivers work and of their relationship to whatever OS kernel they support?
Do you know Win/*nix driver development in the real world? If not, then let us just stay quiet and get back to the subject at hand.
Hi, please read:
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl…
And the resounding success of Linux on the desktop proves he was right. Drivers are never a problem in Linux. How dare anyone question Greg KH!
http://ubuntuforums.org/showthread.php?t=1678203
http://ubuntuforums.org/showthread.php?t=1485772
http://ubuntuforums.org/showthread.php?t=1700897
http://ubuntuforums.org/showthread.php?t=948919
It’s doing better than the BSDs. Anyone who claims that the reason Linux hasn’t had a “year of the desktop” yet is the lack of a stable API for kernel drivers should first have to explain why having one hasn’t worked for the other OSes.
> > Hi, please read:
> > http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl…
> Drivers are never a problem in Linux. How dare anyone question Greg KH!
> http://ubuntuforums.org/showthread.php?t=1678203
> http://ubuntuforums.org/showthread.php?t=1485772
> http://ubuntuforums.org/showthread.php?t=1700897
> http://ubuntuforums.org/showthread.php?t=948919
You talk as if you had understood what Greg KH wrote. Anyone who actually reads it sees that it is about the in-kernel interfaces, not the kernel-to-userspace one. And then the links you post are complaints about closed source, proprietary drivers. Don't you find that strange?
He also forgot my comment before that about the longterm kernel that binary drivers are tested against.
It is the distributions that are truly making life hard for end users by not agreeing to reduce the number of kernels out there that binary drivers have to target.
That’s the Kernel no stable API nonsense?
http://www.kernel.org/pub/linux/kernel/people/gregkh/misc/2.6/stabl…
This is a key block of text.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still works just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
Notice this is very clear in pointing out that kernel-to-userspace has been stable since Linux 0.9-something. So there is a stable ABI/API. Of course everyone raising the argument forgets the existence of these:
http://lwn.net/Articles/296388/ CUSE
http://fuse.sourceforge.net/ fuse
and of course Buse.
These are all ways to create drivers using the stable kernel-to-userspace interface. Now the question becomes: why do you need a kernel ABI in the first place, other than the userspace one?
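As a rough, hedged sketch of the CUSE path (modelled on libfuse's cuse_lowlevel API; the device name, message and build line are invented, and exact headers and version macros can vary between libfuse releases), this is roughly what a character device served entirely from a userspace process looks like:

/* Rough sketch of a character device served from userspace via CUSE.
 * The device name "userdev" is made up. Typically needs root to register.
 * Build guess: gcc userdev.c `pkg-config --cflags --libs fuse` -o userdev */
#define FUSE_USE_VERSION 29
#include <cuse_lowlevel.h>
#include <string.h>

static const char msg[] = "served entirely from userspace\n";

static void dev_open(fuse_req_t req, struct fuse_file_info *fi)
{
    fuse_reply_open(req, fi);                    /* accept every open() */
}

static void dev_read(fuse_req_t req, size_t size, off_t off,
                     struct fuse_file_info *fi)
{
    size_t len = sizeof(msg) - 1;
    if ((size_t)off >= len) {
        fuse_reply_buf(req, NULL, 0);            /* EOF */
        return;
    }
    if (size > len - off)
        size = len - off;
    fuse_reply_buf(req, msg + off, size);
}

static const struct cuse_lowlevel_ops dev_ops = {
    .open = dev_open,
    .read = dev_read,
};

int main(int argc, char *argv[])
{
    const char *dev_info_argv[] = { "DEVNAME=userdev" };  /* asks for /dev/userdev */
    struct cuse_info ci;
    memset(&ci, 0, sizeof(ci));
    ci.dev_info_argc = 1;
    ci.dev_info_argv = dev_info_argv;
    /* cuse_lowlevel_main() registers the device and loops handling requests;
     * a crash here just kills this process, not the kernel. */
    return cuse_lowlevel_main(argc, argv, &ci, &dev_ops, NULL);
}

Once running, /dev/userdev behaves like any other character device to applications, while all the "driver" logic stays in this restartable process.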
Education. But I do not think that hardware manufacturers are ignorant of this information. I am not a developer at this level, let alone a driver developer, but why is it that most printers/scanners/peripherals etc. have no Linux driver by default, if their makers can write one easily in userspace without touching the GPL?
Printer and scanner drivers on all platforms are userspace; there is no kernel mode part on any of them. So yes, there is no excuse other than deciding that Linux is too small a market share to support.
Another thing a lot of people are not aware of is that OS X and Linux use the same printing service structure. Yet there are drivers for OS X and no drivers for the same printer on Linux from the hardware maker. Canon's excuse is a lack of means to provide userspace applications, but really the two drivers would be basically the same source code with minor alterations.
Some of it is also MS interference. Before the MS-Novell deal, Novell was working on providing an interface to take MS Windows printer drivers and use them directly under Linux, solving that problem. They stopped work on that the very day the contract with Microsoft was signed.
Also, under Vista and later there is technically no reason for USB devices to have a driver in kernel space at all. Yet hardware makers are still releasing kernel space USB drivers for Vista and 7, causing unstable states and bad outcomes. The reason: it's cheaper, since they don't have to redo their code base from XP. So the issue of missing or badly written drivers is not restricted to Linux.
There is also the nightmare under Windows where one device can have 50 different drivers even though it's the same device; which driver you get depends entirely on whose brand is on the device.
Yes, Linux is the same as Windows with USB drivers: no need for kernel space. PCI and PCI-E drivers don't have to be in kernel space either under Linux, but for Windows that might be asking too much. PCI and PCI-E being passable to userspace is why virtual machines under Linux can hand PCI and PCI-E cards to the contained OS untouched.
In the open source world, if the hardware makers release the specs, in most cases they don't have to write drivers at all. Releasing the specs would also allow consumers to see that product X here for 200 dollars is the same product over there for 20 dollars, just with a different company's brand on it. Yes, consumers are being ripped off all the way to the bank and getting inferior-quality drivers at the same time. And yes, this is a major reason some hardware makers don't want to work with open source: it unmasks the truth.
As soon as you start talking about kernel space and drivers, you are opening a Pandora's box that a lot of parties don't want you looking into, because a lot of the instability you suffer from is suddenly explained, and the fact that you are being ripped off by particular hardware makers becomes clear as day.
It is very simple to just blame Linux for its poor driver support. The big thing to be aware of is that it is not Linux alone; it is hardware makers who don't want the truth about their hardware out there.
For example, every HP scanner and printer on the shelf today has perfect support under Linux as open source. Every Brother printer on the shelf you could buy today has perfect support for Linux as a binary driver. OK, it's a bit of a tricky driver to install, but it is provided (yes, they need to work on the language of their install instructions so a normal human can understand them).
These are not reverse-engineered drivers. The final argument is lack of market share, but with the slapdash support I see for Vista and 7, I have to say 90 percent of the reason is profit-making, nothing more, at the buyer's expense.
Yes, the fact that Linux can do it does not matter; they just will not spend the money, or give the open source world the specs so it can do the job itself.
NT supporters like nt_facejerk don't really want people to be aware of the truth, since people aware of the truth might start asking for full interface specs for products and other nasty things.
Interesting thing here as well: I have found that hardware makers who provide open source drivers more often maintain better-quality drivers for Windows. That a bit of hardware runs under Linux should be seen as a good sign that you will have less trouble under Windows with that device, compared to the device sitting next to it that does not support Linux.
Known as CUPS.
A CUPS PPD file works on any system running CUPS, so long as no extra binaries/filters are needed. I’ve used MacOS X PPD files on Linux and on FreeBSD without any issues.
A “CUPS driver” that needs extra binaries is not a true CUPS printer driver, and it’s those drivers (like SpliX, HPLIP, etc) that cause problems with cross-OS support.
Please read the CUPS specs. CUPS printer drivers are allowed to include platform-dependent interface parts to talk the device protocols; these are called CUPS filters.
A CUPS driver without its matching CUPS filter requirements is a paperweight.
Yes, it is still a CUPS driver if it depends on a filter, since filter support is built into the CUPS driver design.
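For anyone wondering what a CUPS filter actually is at the code level: it is just an ordinary userspace program that CUPS runs with a fixed argument convention (job-id, user, title, copies, options and an optional file name), reading the job and writing the converted stream to stdout. A minimal pass-through sketch, with the real printer-protocol translation left as a comment, might look like this:

/* Minimal sketch of a CUPS filter. CUPS invokes filters as:
 *   filter job-id user title copies options [file]
 * The job arrives on stdin (or in the named file) and the converted stream
 * goes to stdout; status messages go to stderr.
 * A real filter would translate to the printer's wire protocol below. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 6 || argc > 7) {
        fputs("ERROR: Usage: filter job-id user title copies options [file]\n", stderr);
        return 1;
    }

    FILE *in = (argc == 7) ? fopen(argv[6], "rb") : stdin;
    if (!in) {
        perror("fopen");
        return 1;
    }

    int c;
    while ((c = fgetc(in)) != EOF)   /* pass-through; replace with real conversion */
        fputc(c, stdout);

    if (in != stdin)
        fclose(in);
    return 0;
}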
Bullshit! What the heck do you have in mind? I don't know of any hardware that fits your description:
Graphics cards? No. The costly part is the chip, and the chip is always either from Intel, Nvidia or AMD.
Hard disks? No
Chipsets? Same as with graphics cards.
Printers? Sorry, no, there IS a difference between a $200 Canon and some $20 no-name.
Boohoo. Where is the basement developer army? So Loonix needs ebil Nov€ll now? According to Queen Pamela Jones, Nov€ll was ebil before they signed the deal with M$ anyway. How can you use something so tainted?
And most of them work. Where is the problem actually? As if it is a common scenario that users have so many versions of the same hardware.
Stop deluding yourself. What full interface specs? The people who care already know what needs to be known. Have you never noticed the tons of hardware sites like Tomshardware.com? And the ones who don't bother won't bother anyway.
Easy. It’s because vendors who can afford to support Windows and Linux are usually not some poor mom&pop shop.
Are you seriously claiming that they gain some leet programming skillz just by creating a Linux driver?
OOOPS! I used the word “creating”. One of the words to avoid:
http://www.gnu.org/philosophy/words-to-avoid.html
Sorry.
The $20 to $200 spread happens with webcams and security cams at times, where they are in fact exactly the same chip and exactly the same optics from exactly the same factory. Even the case is from the same factory; the only thing different is the branding.
Scanners are another area where the same chipset, scanning head, metal frame and circuit board are shipped to the customer in a different plastic case with different branding on it, at majorly different prices.
Note that all these devices have something in common: when you read the spec sheets they are normally devoid of any detail about what is inside. They list the physical shape and the number of dpi/pixels. Yes, it is the small items people are not paying attention to that they are getting ripped off on.
A lot of DVD-ROM and Blu-ray drives branded as being from different makers are in fact out of the same factory and tested to the same quality standards, so that 20 dollar price difference on the shelf could be nothing more than a brand payment. Same with floppy drives and particular power supplies.
Items like video cards, motherboards, hard drives and printers people normally pay more attention to. But even with video cards, reading the chip maker's specs would warn you that particular cards running faster than another card are in fact overclocked and are going to have a short life.
So yes, you are paying more money for a video card that will burn out sooner, when another card next to it with a different chipset, running at the same speed for the same price, is not overclocked and so will last.
Consumers are getting ripped off in so many ways it's not funny, because they really don't have the information to make correct selections.
The staff working on it came from SUSE, before Novell took over. Not everyone at Novell is evil.
In fact, with webcams of the same chipset, most of them don't work properly (i.e. they work just enough that the webcam appears to function); they cause random, untraceable computer crashes. A lot of these I have solved for Windows users by going to Linux, finding out which Linux driver runs the device, tracing my way back to the company that made the chip inside the device, getting their current driver and tweaking its USB IDs so it works under Windows. With ATI and Nvidia it is 100 times simpler to make it back to the chipset maker and get the generic driver that works.
This is the problem. People complain about Windows being unstable; in a lot of cases it is these poor-quality drivers, or a computer that was assembled without doing basic things like checking the RAM with memory-testing software, as many RAM makers in fact instruct.
Tom's Hardware does not tell you the maker of the chipset inside. Even the groups that disassemble the devices in a lot of cases cannot tell you, due to chip re-branding or the chip not being produced by the party that designed it. Full interface specs might start being asked for by Tom's Hardware and others so users can get generic drivers that work, not snapshots of generic drivers that have never been updated. Of course it would be simpler if device makers just started being more truthful: "the design of device X is from here, refer to them for third-party drivers".
No, I am not. It is not leet programming skills; it's called peer review, a key point of science. Without peer review you are talking alchemy.
For producing dependable results, peer review is a good process: errors that would otherwise be missed get spotted. In a lot of cases the Windows closed source driver and the Linux open source driver share common code, so any error caught in peer review of the open source driver ends up fixing the Windows one too, simply because the open source driver exists.
The difference exists because of the processes involved, and it does not matter how large or small the company is. You can even see the difference inside the same company between two devices they make, one with and one without open source drivers. You just have not been looking.
A company deciding to kill support for profit is also less likely when an open source driver exists, i.e. not making a Windows 7 driver for device X so people have to buy device Y. That may or may not still work if an open source driver exists, but if it doesn't work it will make them look really foolish. So devices with open source drivers end up with a longer usable lifespan on Windows and are maintained longer.
This is the big problem: people don't want to accept that open source drivers help Windows users as well as Linux, OS X and other platforms.
Those are the simple facts of the mess.
BUSE? Block device in User SpacE? Cool, didn’t know that one. 🙂
“Microsoft, Google and Apple laugh all the way to bank.”
I think that Microsoft, Google and Apple are largely indifferent to Gnome/KDE/Unity … the Linux desktop is (at the moment) just not a serious threat to them (the mobile space with Linux/Android etc. is a different matter). And before I am shot down … I use Linux/Gnome exclusively, am very happy with it and personally think it is a better desktop OS than Windows 7 (not sure about Mac OS X, I haven't really used it and am not prepared to pay the Apple premium) … I just think there is not the mindshare among the general public for Linux to be a real threat …
Create a stack of infighting and they hope we will forget that Canonical is redirecting the Amazon music store referrals so that 75 percent goes to them and 25 percent to GNOME, where it used to be 100 percent to GNOME.
Yes, this is something that has to be settled. The big problem was trying to shove the blame onto Freedesktop.org for not working. Everything was done by the book and correctly at Freedesktop.org; Freedesktop.org even has a way for anyone to raise a dispute. So at no time can Freedesktop.org really be blamed.
Now the next question: Freedesktop should be a place with multiple spot fires of people in dispute if it is fully working.
Items that have not been submitted to Freedesktop.org but should be: the KDE and GNOME configuration systems. Having two places to store the same data is very stupid and leads to incompatibilities. Removing this kind of duplication is one of the reasons Freedesktop exists: to provide a place to sort it out. The lack of things that should have been submitted has left Freedesktop without the flame fights it should have.
Neither KDE nor GNOME has the right to be on a high horse over Freedesktop. Yes, KDE has been better at supporting it, but still, KDE, you have more to do.
As for Canonical thinking that Freedesktop.org exists to make a unified, user-friendly Linux desktop: your lack of involvement speaks volumes.
True is the charge against Canonical of working in places that require copyright assignment to Canonical, locking out a section of the open source development community. There are a lot of developers who are not allowed to do copyright assignment.
Also true is that Canonical used blackmail to get what they wanted with the Amazon music store. Canonical, please clean up your own house before throwing more stones, because stones might be thrown back.
That’s true and all, and also, completely irrelevant to the issue at hand. Mark’s and Seigo’s problem is that GNOME refuses to collaborate on enhancing interoperability across the whole F/LOSS landscape. One means of collaboration is via freedesktop.org, and by establishing low-level frameworks (think of D-BUS as a shining example of something that benefited ALL F/LOSS desktop environments) and specs. When it comes to these specs (or to the StatusNotifier specification) Canonical, of course, does not require copyright assignment.
I don’t get the point of Jeff Waugh. For years he has seemed to be a corrosive influence, vilified by people even in the community he claims to represent, and for some reason he miraculously turns up over this issue and starts dishing out his pearls of wisdom.
I really don’t know what else could have been done to get StatusNotifier into Gnome. It was discussed at length, it didn’t just appear out of nowhere, the Gnome devs asked for some changes they seemed quite receptive to and when they were duly accommodated there was silence and it was duly rejected with few, if any, reasons given. That’s usually a classic tactic. You ask for changes that you hope won’t be done and then you stonewall when they are.
I think we can all agree that Canonical have got a lot of things a bit off skew, but all I know is that the stuff that Canonical put into Unity and KDE put into KDE actually works and it hasn’t hurt anyone.
This little gem in Dave Neary’s blog tells you all you need to know about how they really feel about collaboration:
I don’t know what you can say to that. D-Bus was initiated many years ago, by a prominent Gnome developer no less, to ensure that apps and desktops could communicate with each other and work together, thus helping those very same users. KDE embraced D-Bus and uses it extensively. I have no idea what’s been going on with Gnome. As far as I can see they’ve reimplemented it several times with little in the way of results.
As for the Freedesktop nonsense, Seigo and many others have been trying to get Freedesktop working for years and haven’t been helped one iota. Gnome developers then turn around every time and say that it is broken as a justification for not putting in any input. It will never be fixed, mark my words, but Gnome not being a part of it might not be very important anyway.
The distasteful thing is that various Gnome devs don’t just come out and say “Look’ we don’t care about Freedesktop or collaboration and it’s not worth our time”. They paint their position as the exact opposite, reject anything related to it and then spin like crazy to try and tell everyone how they have ‘misunderstood’, certain things weren’t done in a ‘He said, she said’ type exchange with people (very important that things can’t be proved) and try and paint another different picture of what went on on mailing lists because they know the discussions are too broad to nail them down.
I’ve never got this distasteful attitude that seems to exist at the core of Gnome. It certainly doesn’t happen everywhere in the project or many of its applications, but it does happen at the core of it.
I think you may have missed the point of this quote. Dave isn’t talking about collaboration. Instead, he’s talking about how to write a good specification, which starts with defining the problem statement.
Dave is saying, in a slightly humorous way, that “Notifications don’t use D-Bus” is not a proper problem statement. I completely agree.
Did you understand what he wrote differently than I did?
Except that wasn’t the problem and nobody ever said that it was. The problem was XEmbed (which was inflexible and designed for a different era). The solution was to use D-Bus for IPC (would he rather people reimplement a new IPC just for systrays!?).
Gnome shell seems to have tackled the problem of notification and tray area from the functional point of view, not just the technical one.
Thus not “XEmbed is inflexible; a D-Bus protocol to signal application status would be better if coupled with a new tech for notification icons and menus”
But “why is an application putting an icon in the tray? Is it sending a notification the user must interact with? Is it a system-wide status signal and control point? What kind of actions would be possible in a transient notification, which should always be accessible? What about urgency level affecting the display?”
From this derives the basic mismatch you can read about in the discussion, with Gnome coders dubious about a pure transmission method that makes no hypothesis about the interaction on the receiving end, whereas they seemed to think that coupling was important for the redesign of the whole.
And KDE devs stating that yes, the data was just being sent and totally decoupled from the presentation… so much so that there’s no indication whatsoever of what the user-facing application can or should do with the data it receives.
Frankly, both aspects seem worthy of work to me, on different levels, but I’m sure it’s the prerogative of the Gnome Shell coders to deem that the tech proposal was not solving the UX design problems they were interested in wrt the tray – at least at the moment. And what should they have done with a tech they had no practical interest in, if not let it be until they could contribute real-world requirements?
The problem was “there are too many applications creating icons in the systray/creating custom panel applets, they all behave in slightly different & inconsistent ways, and there is no straightforward way for an application developer to indicate the state of his application across different desktop environments without redoing a bunch of work”.
XEmbed is an implementation detail.
Cheers,
Dave.
Certainly that is the case. It doesn’t matter what the old implementation was (XEmbed etc.); the thing is that every app decided how its icon was drawn in the systray instead of the shell choosing the best representation.
It should be up to the DE to choose how this information is displayed to the user, be that a systray, floating icons, whatever. The spec as I understand it left that completely open, giving people maximum flexibility (and hopefully enabling cool stuff in the process) instead of a rigid systray2 approach.
An implementation detail that happens to be very important for users, because XEmbed is widely agreed to have a great deal of things wrong with it.
The question is, what replaces it? Does it get replaced by something that desktops and applications can agree on, communicate through and work with, or do we go the traditional Unix and CDE route with a ton of fragmentation that provides no benefits to anyone?
You know fine well why Aaron specifically mentioned D-Bus in what he wrote. Do we really need to go over why D-Bus was initiated and what benefits common communication between differing applications and desktops brings to users?
I think you’re just digging a bigger hole here Dave, and it’s sad to see.
Whatever it is, fear has been intensifying it. For all its flaws, Ubuntu is the most popular Linux distribution on the desktop, and therefore, one of GNOME’s biggest customers – certainly the most visible customer.
And they’ve lost this customer.
In order to be a customer, they’d have to be paying for something. Given the recent Banshee/amazon MP3 issue, I don’t think Canonical are paying GNOME anything, somehow.
A big yes. Sad news for Red Hat, but I will never recommend Fedora (which I use) to corporate users (those adventurous managers) or even to Internet cafes and home users; I recommend Ubuntu, since it is the obvious choice, with long-term support and a commitment every three years for the desktop. Even the National Bookstore in the Philippines was using Ubuntu. I am quite sure they use Linux on their servers too (not necessarily Ubuntu), but the point is that the GNOME desktop, as the default shell of Ubuntu, is making GNOME more popular with the public.
I am the one who criticized GNOME Shell's hiding of the Power Off/Shutdown button, and I think this design came from their short-sightedness and their unwillingness to collaborate even with their very own users.
If you want to get GNOME to adopt an interface, then talk about what the interface is intended to achieve. Don’t ignore the history in the discussions either. Look up Galago, for example, as some background context, and libnotify.
Are you willfully and deliberately inferring something I didn’t say, or is it accidental?
Look at what I said: no user ever had a problem because notifications didn’t use DBus. Allow me to rephrase: No user cares what under-the-covers technology is used to fix the issues he has, or implement features he’s interested in.
User problems are of the type: “I want to know when my computer connects to a wifi network” or “I want to know when I have an appointment coming up without opening a calendar application” or “I want to know when I have new email without opening my email client”. And I don’t care whether that’s implemented in the back-end with DBus messages, shared memory, small applets that use inotify to watch mbox files, or whatever. It doesn’t matter to me, the user what the desktop environment & application developer do to solve my problem.
Dave.
Not directly, but it does matter indirectly, and that’s what you’re ignoring. Because of GNOME’s my-way-or-the-highway approach to this particular issue, developers now have to go out of their way to support multiple APIs for something as elementary and basic as this, meaning additional work, additional code, and thus, additional room for bugs. This WILL matter to users, even if they don’t know about it or can’t put it into words.
Worse yet – it may mean some developers will choose to ignore one implementation, which will also adversely affect users. They may think “screw this” and stick to Xembed, which will also adversely affect users. Especially now that the most popular desktop distribution is going all-out with Unity, you might see developers giving the virtual finger to GNOME, which will – again – adversely affect your users.
This is an element that I’ve been missing from GNOME’s side of the story, and it’s the element that actually matters. KDE gets this – interoperability benefits users, even if that means that KDE developers must swallow their pride and use something that could be a bit sub-par or didn’t originate from within KDE.
As a user, it looks like to me that GNOME simply can’t stand Ubuntu going with Unity – and that’s fine. You have the right to be unhappy with this. However, fighting this out in a way that hurts users is bad – and antithetical to the values of Free/open source software. This is behaviour I come to expect from Apple and Microsoft – not from the Free software community.
I agree. During my reading of all the related blogs (except for the witch hunt), my conclusion is not favourable to the GNOME camp. It is sad because I have always preferred GNOME over any other desktop.
On the positive side, it is helpful in that it has woken me up and informed me, as a user, about this whole collaboration issue and its impact on me. If every DE in the FOSS world had collaborated with the others in the past, I might have an awesome and innovative desktop today, and it might help me as an application developer to build applications faster using the innovative development tools that would be available as a result.
Heh, don't ignore history indeed. Nice of you to bring up that particular fuck-up, as it's another nice example showing off Gnome's cross-desktop "collaboration" and illustrating the project's persistent NIH issues. It underlines aseigo's argument quite nicely, and shows it's not a new problem.
Yes, what DID happen to Galago? I remember it from years back but then it just… disappeared.
Well, I mostly agree with your assessment Thom, except the “more coherent picture” part – this is anything but coherent. What’s more, not a single specific point raised by Mark or Aaron has been addressed properly. Instead, it’s all “who said what back in 2008”. They can’t get any more evasive than this, bogging down the whole technical aspect of collaboration in minute and irrelevant details. Yeah, GNOME is difficult to understand => nobody understands us properly (and Mark and all critics misunderstand GNOME completely) – what bullshit!
Seigo’s critique still stands – rejecting the spec on the basis that no GNOME app uses it is pure tautology: no GNOME app uses it because it is not accepted by GNOME as a spec. I’m not kidding, that was one of the reasons for rejection! These obviously political reasons (read either Mark’s or Seigo’s blog for the other three) are the issue at hand, and of course it’s kinda difficult to tackle those, so here we are, in “who said what” land, and of course, oh my, people just don’t get how GNOME works, of course…
A good post about external dependencies:
http://www.markshuttleworth.com/archives/661#comment-347450
and Mark's response:
http://www.markshuttleworth.com/archives/661#comment-347452
AFAIK, Mark is shifting the blame onto GNOME there, while the project maintainers were the ones refusing patches, not GNOME core. But then again, no one specifies which projects were the ones refusing patches, so I could be wrong in putting the blame on Mark. In that case, the better question is which projects, and why. Surely there is a Bugzilla entry where they refused.
Hmm, that’s interesting, for a specification or any low level common framework only makes sense if it can be expected to be present on every system. Again, D-BUS comes to mind – D-BUS (and fontconfig, libixml, etc.) would not make any sense if desktops could not expect it on any system they are being installed on. If statusnotifier is to become a spec, then it must be just like that.
Mark seems to mix up external and optional dependencies. Actually, some cross-desktop frameworks are BOTH! HAL comes to mind – if present, DE’s can (or could, it’s being deprecated) take advantage of it, if not (for example, FreeBSD got HAL much later than Linux), they would fall back on their old mechanisms. Regardless, this is not a really crucial issue, it’s more like a distraction (look, Mark is wrong, muhahaha). Not to mention the fact that the very first line on the page the poster links to reads like this (bold is mine):
So are external dependencies the absolute minimum requirements that MUST be there on every system, or are they optional?
What stands out to me in this whole drama is this part:
Grow the **** up…
Couldn't agree more. If they really need to have pissing and shitting contests, they could at least have them in private. Now all the public can see are exposed dicks and unwiped asses.
This nitpicking only makes the whole charade look even worse. There were/are/will be mistakes and rights on all sides, meaning there will be plenty more material to throw around.
My take on the problem:
– if GNOME is so sure they have the right solution, let them test the public reaction; it is their neck at risk after all
– if Canonical is not happy with GNOME and they are so sure of themselves, they should simply fork it and test the public reaction; it is their neck at risk after all
– KDE should simply wait another cycle (6 months is not long), see which one users accept, then focus cooperation on that one and ignore the other; they have no risk in this charade
That seems to be the only solution.
I am losing faith in the Linux community. Everywhere I turn within the different Linux communities there are enormous egos fighting each other. Personally I think that Gnome looks like the bad guy in this, which is sad. However, pointing to KDE as the good example is just plain stupid. Has everyone already forgotten the “fuck you” attitude of Aaron Seigo during the first KDE 4 release (no offense man, you do write good software)? It made me switch to Gnome, and now I don't know where to turn if Gnome doesn't get its shit together.
I am indeed losing faith. Forget the egos and politics, and just stick to writing good software!
Well, don't get involved in the politics then, because most of the time the facts get lost in the FUD.
Users are actually just as bad, if not worse, slagging off KDE 4.0 and GNOME 3.0 even before they have been released.
I for one tested KDE 4.0, marked STABLE, and it was still alpha-quality software at that stage. GNOME 3.0's ridiculous design decision of hiding the Shutdown button is worthy of criticism.
That was an unfortunate error. KDE 4.0 is what happens when marketing and developers don't understand each other. KDE 4.0 was released as a stable API/ABI reference for application developers. Somewhere between the lead developers and the marketing side of the KDE teams, the "API/ABI reference for application developers" part got lost. A stable API/ABI for application developers does not mean all the applications sitting on top of it were stable.
Very sorry for your pain, allenregistos. At the time I was unable to correct the media team's incorrect understanding of what the developers said. It's one of those unfortunate errors. The KDE media release policy has been changed since then so that statements on upcoming releases have to go back past a lead developer to check they have been interpreted correctly. So hopefully that was a one-off nightmare.
Yes, there is of course another issue: there is no word that marks a clear difference between an ABI/API reference release and an end-user version. That still has to be invented.
Quite an interesting point. With OSS development communication wide open, the PR activities actually have to be tightly integrated into the development process.
Le sigh
KDE 4.0 was never marked stable. It was marked “Developer Preview”. 4.1 was the first “stable” release of KDE for non-devs to use.
Must we really go over this every single time someone mentions KDE 4.0??
As an end user, I really did not know that until today. If I recall correctly, I was using Fedora at the time; I heard about the release and installed it because I was so excited about KDE 4, only to be disappointed. Anyway, this is an information gap; maybe I have to read more when software releases promise big or revolutionary changes.
As for the GNOME 3.0 release, I think I have done enough for now, as I am testing it on and off.
Unlike KDE 4, GNOME Shell has been consistently stable even in its early releases, except of course for some slowdowns, but it is stable. For all this drama, regardless of who is at fault, the GNOME developers deserve respect for making GNOME Shell a reality.
Regards,
Allan
You got annoyed that KDE wasn't implementing the features you wanted and you went to *GNOME*? That is like cutting off your nose to spite your face.
ROFL
Well, it was not really about the quality of the software (although some might disagree). It had everything to do with a key developer telling the users to go fuck themselves on his own blog. Basically his words were more along the lines of “I contribute, you guys (the users) don't, go fuck yourselves”; not in those _exact_ words of course, but not far from it. For me, that is a deal breaker no matter how good the software is.
I believe this is a recurring theme in the Linux community, unfortunately. Remember GAIM/Pidgin a couple of years back? Basically the same story: make unpopular changes and tell the users to go f themselves. Now I feel Gnome is heading in somewhat the same direction with Gnome Shell and the key developers' inability to listen to input from external contributors. They may argue that Canonical is to blame for a lot of things, and they might even be correct, but that does not excuse their own faults. They should clean their own house before pointing fingers, because they cannot possibly believe that all of the criticism is 100% wrong.
The best thing for Gnome would be to admit what could be handled better, and try to improve for later. That would be to “grow the fuck up…”
I am exactly in the same boat as you, man.
It is a pity that in order to use OS X you have only two choices:
1) Buy their overpriced, often underspecced hardware.
2) Run it on a hackintosh. Apart from the technical issues, you are in breach of the EULA.
I forget where I wrote about my disappointment with KDE 4.0. I was previously so excited about KDE 4, yet testing the beta would always leave my desktop blank. And when 4.0 was released, I rushed to use it only to be disappointed. It barely worked and often crashed, so I plainly told the KDE people that they had lied to the public by releasing alpha software branded as stable. These days KDE 4 has become impressive, with so many features. Thanks for that.
I think the next releases of Ubuntu will be challenging. This is going to be the trial by fire Ubuntu will face. Delivering Unity will estrange many users; I feel it already. GNOME Shell will also estrange many users. The scenario will be most of us stuck with the GNOME 2.x series, waiting to see what happens next. Perhaps then we will see a rethink from both Canonical and the GNOME community about how division has harmed Linux as a desktop, not counting the ego wars we've been reading about for some time now. Thanks to OSNews, we know about this.
With all of this conflict going on, maybe webOS will have its spot on the desktop after all.
Yeah, Internet Explorer with ActiveX required.
Huh??? webOS has nothing to do with Internet Explorer, let alone Windows…
Linux has long been held back by the lack of a single desktop.
I don’t think that is the biggest problem but it is in the top 3.
Collaboration among desktops still leaves a lack of a cohesive alternative. In this day and age you can’t ask users to leave Windows or OSX along with their software and then explain how they need to pick from 1 of 5 desktops.
Desktop groups don’t have a good history of working together so I don’t see why anyone would assume that will change.
The Linux desktop is back to hobby status so I think it is better to just let them duke it out. Gnome 3 is a mess, KDE should just ignore it.
You rightly say “it is back”, because I always felt that KDE 3 was very professional and improving all the time.
History: you lack it. Most progress on common standards has happened precisely when the desktops were at each other’s throats, threatening to kill each other.
Basically, things have been too passive for progress in recent years. This is a sign of possibly good times to come.
Whatever you say Yoda.
KDE and GNOME have always been the bestest pals and always collaborate. No one ever complains about KDE and GNOME conflicts. Everything is fine in Linuxland, not a single tank.
I dunno about that. Five years ago maybe but not now. I can’t really say I notice a big difference between gnome and kde apps when it comes to user interface and basic functionality. I’d say this is true on all mainstream distros.
Let’s not underestimate consumers’ intelligence. Consumers are perfectly capable of choosing between a dizzying number of other products, like phones, so picking a desktop isn’t that much of a stretch.
On that we can agree. If GNOME doesn’t want to play ball, f–k ’em.
I think the problem is intellectual laziness and not a lack of intelligence.
The average consumer is intimidated by computers and is averse to learning new software. I would also suspect that the majority would prefer to have only one choice when it comes to mobile operating systems. They like a range of colors and sizes but when it comes to software they are resistant to anything new.
I wanted to like Unity, but so far I haven’t been too pleased with it. The user interface isn’t great, nor is it very fast.
Unfortunately I also think that Gnome has been going in the wrong direction for some time. It looks like Gnome3 will be even worse, but I’ll reserve judgement until it’s released.
I’m sure glad that Xubuntu is out there. Simple and fast, the way Linux should be.
Everyone just forgets about projects like Xfce, Enlightenment, and the various window managers.
I get that they aren’t that popular at the moment, but there’s no reason why people can’t adapt to them, especially if the politics of the larger projects are bothersome.
Though if we aren’t in the mood for adapting to smaller projects, perhaps when things cool down people might take a good hard look at what’s been going on with the feature bonanza that desktop environments have become.
Personally, I have no clue why anyone would care about notifications. I don’t use a system tray and I don’t use a window manager that allows programs to alert me to their needs. It isn’t their place to demand my attention, I will tend to them when I get to them. Though I don’t expect other people to work like this, it’s just that it might be helpful if people take a good hard look at the things they take for granted (like a system tray) and see if they really need or want that functionality (I for one don’t, it’s more bothersome to me than helpful.)
http://www.icedrobot.org/
Yes, there is a project to bring Android applications to the normal Linux desktop.
The Android API might become the home for closed-source applications on Linux, at least for a while, and also act the way Java does for other POSIX platforms. There is one problem: MS Windows users, you are cut out of this loop.
Of course, Android has a native binary API as well, so a unified application installer for Linux platforms might now exist. KDE Plasma widgets also avoid distribution packaging, so they are the same across distributions.
Pressure is building for KDE and GNOME to bypass distributions.
There are interesting battles to come. Distributions not having unified installers is going to end at some point; the question is whether packaging will be unified willingly by the distributions or through the side door. More infighting between KDE and GNOME would be good for progress.
The game is in play. Strap yourself in and expect a lot more hostilities as all this plays out.
KDE3 is gone. I can’t get used to KDE4. GNOME 3 and Unity are horrible.
Screw this I’m getting a mac.
I heard Windows 7 isn’t so bad
If I wanted to watch a soap opera, I could turn to Young and the Restless or General Hospital. I didn’t know open source developers liked that sort of thing.
Mark Shuttleworth is a “showman”, a blah-blah man. And he seems to be, at least in my opinion, a kind of dictator. Ubuntu is no democracy (it’s widely known), so I think I’ve made my point.
Wow, you mean it’s exactly like the kernel?
Though I disagree with Linus on some of his kernel design choices, he is still a reasonable guy and openly admits to being a dictator. He takes an honest position, which I can respect.
Shuttleworth on the other hand is a Mac fan who would like to turn Linux into a Mac-junior all while talking about the value of his precious community that he usually ignores.
He revealed his true stripes to everyone with that Banshee incident. I don’t see why so many still want to give him a break. He hasn’t pushed Linux into the mainstream. Ubuntu has just become the de facto Gnome distro for geeks and their grandmas.
I’m not defending Canonical or Ubuntu, but you may have noticed that Mark is the one funding the development of Ubuntu (this may change in the future depending on the success of the project). You have to expect a different governance process for Ubuntu because of this, rather than expecting Ubuntu to behave like Debian. If that bothers you, you may use Debian and avoid Ubuntu.
The founder may carry weight in any decision being made, but that should not hinder or compromise the ideals of free software. At this moment, I think Mark doesn’t violate any of this as far as FOSS is concerned. Developing in private and releasing in public for acceptance as open source software is not a violation of any of the available free software licenses, I believe.
From http://www.markshuttleworth.com/archives/661#comment-347450 :
In the interests of full disclosure, I sent a couple of emails to Mark explaining what I understood was GNOME’s policy towards dependencies. To ensure accuracy, I then spoke to 2 members of the release team, who both confirmed the policy for me. One of the members said some pretty strong things in his comments, which I tempered (without modifying the sense) in my email to Mark.
Here’s an extract from the first email I sent to Mark, where I describe my understanding of external dependencies:
> Mark Shuttleworth wrote:
> > It’s difficult to know what external dependency processes are for: some
> > say they are to bless existing decisions, others say they are a
> > requirement for those decisions to be taken.
>
> I’m writing a follow-up blog entry and I hope that I can clarify that
> for you. There seems to be no such confusion for the release team (at
> least, in the 2.x era): an external dependency is a non-GNOME module
> which is a dependency of a package contained in one of the GNOME module
> sets. And since libappindicator does not fit that definition, there is
> quite simply no need for it to be an external dependency. I can point
> you to 3 or 4 precedents, if you’d like.
Here’s the full email I sent to Mark after speaking to release team members, from which he cites above:
> I got you what is, as far as I can tell, a definitive answer on this.
> First, extracts from the release team policies:
>
> ** From http://live.gnome.org/ReleasePlanning/ModuleRequirements
>
> “Do not add any external dependencies other than those already approved
> for that cycle (e.g.
> http://live.gnome.org/TwoPointSeventeen/ExternalDependencies). This
> includes not depending on a newer version than what is already approved.”
>
> ** From http://live.gnome.org/ReleasePlanning/ModuleProposing
>
> # I need a new dependency. What should I do?
> * New dependencies for features should be added as soon as possible.
> There are three possibilities for dependencies: make them optional,
> bless them as external or include them in one of our suites. New
> dependencies should be known before feature freezes. A dependency can be
> proposed for inclusion AFTER the 2.27.1 release because it might need
> more time to be ready.
>
> # How to propose an external dependency?
> * If you want to add a new dependency, make a good case for it on
> desktop-devel-list (this may only require a few sentences). In
> particular, explain any impact (compile and run time) on other modules,
> and list any additional external dependencies it would pull in as well
> as any requirements on newer versions of existing external dependencies.
> Be prepared for others to take a few days to test it (in particular, to
> ensure it builds) before giving a thumbs up or down.
>
>
>
> Now, in practice:
> 1. If a maintainer wants to add optional (compile-time) support for a
> new feature that uses a library, there is nothing they have to do beyond
> commit the patch, and let the release team know.
> 2. If a maintainer wants to add unconditional support for a feature
> which requires a new dependency, then they should first write the patch,
> then propose the dependency for inclusion in the next release.
>
> Traditionally, the bar for external dependencies has been low, modulus a
> number of conditions. There is reason to believe that the bar for
> libappindicator would be higher, because of the history involved. One or
> more maintainers arguing for the functionality would help.
>
> I have talked to 2 release team members specifically about
> libappindicator, and have been told by one that:
> * Since libappindicator has a CLA, it can’t be included in the GNOME
> module sets under current policy
> * It could be included as an external dependency, but would meet some
> opposition because of duplicate functionality with libnotify
>
> and by the other that:
> * libappindicator doesn’t make sense as a GNOME dependency because it is
> only useful with Unity, which is not part of GNOME
> * adding appindicator support will only make apps better on one distro,
> and don’t benefit GNOME as a whole
> * If people want to make their app integrate with Unity they’re free to
> do so, but they should add a configure option so the release team
> doesn’t have to worry about it
> * For core GNOME components, providing deep integration with other
> desktops is probably a non-starter
>
> This is of course all personal opinion on the part of the 2 people I
> spoke to.
>
> In short, it’s an unnecessarily emotional issue which has been
> aggravated by all concerned. But if module maintainers want to support
> libappindicator, then they are able to do so. And if you can persuade
> the shell authors to use appindicators in the same way as Unity, then
> there would be nothing apart from copyright assignment preventing
> libappindicator being part of the GNOME platform.
Hopefully it’s clear that Mark’s reading of my email is selective at least. There is no disagreement between the two release team members I talked to, the policies for dependencies are clear & unambiguous, and as others have said, there is no need to do anything if proposing optional compile-time support for a new dependency.
The relevant release team guidelines I quoted are also consistent with the position the release team took for libappindicator.
In fact, the release team adopted almost exactly the same position for libnotify when it was first proposed for inclusion, in 2.20: http://www.0d.be/2011/03/13/libnotify%20adoption/
Cheers,
Dave.
Thanks Dave, I added a link to your comment in the article.
Can I get a clarification?
Suppose that the appindicator devs evangelized it to GNOME developers and got a bunch of them to add support to their apps for the next release. After that release it could be said that things in GNOME use it, so would a proposal to include it as an external dependency then be appropriate?
It seems like the process is to get it used first, then to propose it as part of the platform. Am I right about this?
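If I’ve got that right, here’s roughly what I picture “optional (compile-time) support” looking like inside an app. This is just an illustrative sketch of my own, not taken from any real GNOME module – the HAVE_APPINDICATOR define and the GtkStatusIcon fallback are my assumptions about how such a patch would typically be structured:

/* Illustrative sketch only: optional, compile-time libappindicator support.
   Assumption: the build system defines HAVE_APPINDICATOR when the library
   is found; otherwise the app falls back to the old GtkStatusIcon tray. */
#include <gtk/gtk.h>
#ifdef HAVE_APPINDICATOR
#include <libappindicator/app-indicator.h>
#endif

static void setup_status_item(void)
{
#ifdef HAVE_APPINDICATOR
    /* Built with the optional dependency: expose an application indicator.
       An indicator needs a menu attached before it will be shown. */
    AppIndicator *ind = app_indicator_new("example-app", "example-icon",
                                          APP_INDICATOR_CATEGORY_APPLICATION_STATUS);
    app_indicator_set_menu(ind, GTK_MENU(gtk_menu_new()));
    app_indicator_set_status(ind, APP_INDICATOR_STATUS_ACTIVE);
#else
    /* Built without it: fall back to the classic system tray icon. */
    GtkStatusIcon *icon = gtk_status_icon_new_from_icon_name("example-icon");
    gtk_status_icon_set_visible(icon, TRUE);
#endif
}

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    setup_status_item();
    gtk_main();
    return 0;
}

If that’s the shape of it, then the release team only really gets involved once someone wants to make the dependency unconditional – is that the right reading?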
Wow. Just. Wow. That line right there sums up the entire issue with GNOME.
Other DEs will bend over backward in order to make things work nicely with GNOME. And what does GNOME want to do to make things work nicely with other DEs?
Absolutely nothing.
And until that attitude changes, there’s really nothing to discuss.
Just, wow. [shakes head]
Hrm, seems I chopped off the quote:
For all the chatter about the “bazaar”, the “cathedral”, “many eyes make bugs cry”, etc., Linux continues to lose its relevance.
In an era when the top tablet maker is talking about the “post-PC” world, Linux didn’t even reach that level of recognition in the first place.
The shift to web apps only makes the problem worse, as the Linux desktop solves a problem that Microsoft solved a long time ago, and it does so in a more painful fashion.
Developers: instead of working on this non-winner, go build a web app that people will actually use and value.
Morglum
Ever heard of Android?
What do you mean, a long time ago? *cough* IE6, 7, and 8 are all pieces of sh**. Only with the release of IE9 will they catch up…
Also, if you’re so into web apps, Chrome OS is built on top of (you guessed it) *Linux*…
Also I would like to point out that not everybody enjoys web development.
Android, webOS, and Splashtop are just different forms of Linux, so Linux is gaining relevance. OK, some distributions might lose their relevance and disappear; has this happened before? Yes. Should we expect it to happen in the future? Yes.
It’s the old business game of changing the marketplace, and Linux is working to change the market to suit itself.
Really, MS did not solve it. Web apps are just another option.
Servers to support web apps are still required; please don’t forget this. Web apps still play to Linux’s strong points. This is all about changing the market to suit where Linux is strong and where Windows is weak.
In a market dominated by user interfaces from Apple, Android, and HTML5/JavaScript, spats between GNOME and Canonical are as relevant as the Betamax/VHS debates.
He’s clearly talking about traditional distributions and desktops.
Yes Linux powers Android, but does that really matter? What type of relevance is Linux gaining? Linux also powers my blu-ray player, does that confer benefits to some greater movement?
The Linux old-guard doesn’t want to admit that their stalwarts (RMS, ESR) were wrong in some of their key assumptions. It’s like a religion that refuses to accept reform and would rather die than admit that some of the founders were not actually prophets.
The open source holy war has been a giant mistake. MS and Apple would have more competition if the Linux movement had not been so hostile in its attitude towards proprietary software. That rigid open source ideology has merely served the interests of the largest proprietary companies. Linux needed partners of all types early on, but the movement was arrogant and assumed hobby coders could do all the work.
As I said before the Linux desktop is back to being a hobby so this infighting is merely geek entertainment. Whether it has 5 or 500 desktops doesn’t matter at this point.
This is a completely valid opinion… the only issue is that it was RMS, ESR, and other old-guard stalwarts who built the foundations we are building on today. They provided us with infrastructure, either directly or by marshalling people to the cause.
The cause of “Free Software / Open Source Software” is what has created large portions of the modern infrastructure. This infrastructure is intertwined with certain political (some would say religious) leanings holding that software should be “free”.
The thing a lot of people don’t get is that large swathes of the Linux community care more about “openness” than adoption. They would prefer the software be used by only three guys in their basements than lose their freedom.
Other projects, such as the BSDs, have more pragmatic approaches. Distributions could base themselves on FreeBSD and offer a completely stable ABI/API and what have you. Licensing would not be an issue; however, no one has done this. There seems to be more value in being a Linux distribution than a BSD distribution, and I believe that is because of the value of the existing investments by the “old guard”.
Each part of the ecosystem is owned by its contributors, many of which don’t want the ecosystem to change, and it is their right to do what they like with their code.
If you think they aren’t doing something advisable, you can tell them. However, this community has already decided that it values either the ideal of free software OR the contributions made by the people who follow that ideal.
At the end of the day there are always more sandboxes to play in (OSX, Windows, *BSD, Haiku, etc).
Jason
And the problem is: why does Linux have to compete against MS using a traditional desktop model? Nothing says it has to. Android, surprise surprise, with its work on tablets, is adding an interface shaped a lot like a traditional desktop.
Simple fact, nt_jerkface, get it: Linux is free to change the ballpark. And the progress Android makes, together with the project to allow Android applications on Linux and OS X, will bring Android back to the desktop as well.
The idea that they are different things is a problem. Every major distribution today, bar Ubuntu, started off as a corner case performing a particular task and then moved to the desktop. The history of distribution cycles doesn’t change: Ubuntu overtook its predecessors, and now Android, the new kid on the block, is lining up to take out Ubuntu on the desktop. After Android, something else will line up to take it out. Welcome to the field of sharks that the distribution world is.
It’s always “Linux has lost relevance”, when in fact we could right now be heading into another distribution bloodbath, where distributions that have not been progressing die. Just because relevance is moving between distributions does not mean Linux is losing relevance. It means something interesting is about to happen.
Canonical really has no right to throw stones upstream.
It’s really easy to let distributions get away with murder and blame the upstream.
nt_facejerk really wasn’t to know that, in the topic area he took me on in, I have previously taken on one of the lead ReactOS developers and some of the most important reverse-engineers of NT technology. It’s a topic area I have never been defeated in and one I know extremely well.
The Linux kernel project is technically causing no issues; the issues are being introduced downstream of it. If you take stock kernel source from kernel.org, build it with no patching, and try to run Ubuntu on it with the default configuration, it doesn’t run very well.
Fedora, Red Hat, SUSE, CRUX, Debian, Arch: all the ones I have tested personally can operate on a kernel built straight from unaltered kernel.org source.
And what are the most commonly tested distributions? Ubuntu and its relatives, using Ubuntu-patched kernels. Putting a distribution on a pure stock kernel and seeing what happens is a good test of the odds that closed-source drivers will work perfectly: the worse the distribution performs, the less likely a closed-source driver is to work. This is purely down to tampering.
Next is what is called dependency hell. Distribution packaging systems have a known flaw that prevents you from taking binaries from wherever you like: in most cases, only one version of a .so (the equivalent of a Windows .dll) can be installed at the same time.
It’s really the lack of multi-version installs in the package managers and in the dynamic loader distributions use that causes massive amounts of incompatibility.
Windows uses two things, SxS (side-by-side assemblies) and compatibility shims, both in user space rather than kernel space, to give Windows its magical backwards-compatibility support for applications. Is there any technological limit in the Linux kernel preventing this? Answer: no.
These massive incompatibility issues also exist on FreeBSD, NetBSD, Solaris, and most other Unix-based systems, so this is not a unique Linux defect.
One of Linux’s biggest historic mistakes was copying how Unix systems did their dynamic libraries. Debian on the FreeBSD kernel shows just as much trouble as Debian on the Linux kernel; this is one of the first things that clued me in that the kernel argument could be complete crap.
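To make the shared-library point concrete, here is a minimal sketch under stated assumptions: the sonames libfoo.so.1 and libfoo.so.2 are hypothetical, and the point is only that the loader resolves whatever soname a binary was built against – if the distribution ships just one of the two, anything built against the other simply fails to load.

/* Minimal sketch, assuming two hypothetical sonames libfoo.so.1 and
   libfoo.so.2. The loader will open whichever of them is present;
   in practice most package managers install only one at a time. */
#include <dlfcn.h>
#include <stdio.h>

static void try_open(const char *soname)
{
    void *handle = dlopen(soname, RTLD_NOW);
    if (handle) {
        printf("%s: loaded\n", soname);
        dlclose(handle);
    } else {
        /* dlerror() reports why the soname could not be resolved. */
        printf("%s: %s\n", soname, dlerror());
    }
}

int main(void)
{
    try_open("libfoo.so.1");  /* what an app built against major version 1 needs */
    try_open("libfoo.so.2");  /* what an app built against major version 2 needs */
    return 0;
}

Build it with gcc check_soname.c -ldl and run it; on most distributions at most one of the two hypothetical versions would resolve, which is exactly the single-version situation described above.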
Take another case of only providing one version: why does Ubuntu have to ship only one version of the X.org server, when it could have provided one version for open-source drivers and one for closed-source drivers and so avoided failures? Applications using the X11 protocol really would not have cared, and the kernel would not have cared. But no, disk space for the flashiest features is more important than not hurting the user.
Ubuntu claims to be user-friendly, but a lot of what it does is the worst kind of user-friendly: a baited trap waiting to bite you.
Ubuntu’s claim of user-friendliness needs to be read as “we will be flashy, and we don’t give a stuff if your computer crashes or does other bad things to you”.
Of course, Ubuntu is not the only one that needs to be picked on for providing second-rate packaging systems and dynamic loaders for modern-day requirements. Fixing these things is not pretty.
While people are getting the problem wrong, pressure is not applied where the problem actually is, so it never gets fixed.